How to concat three YUV buffers to one array? - android

I'm working with an Android SDK for an IoT camera. I want to implement taking snapshots from the camera and saving them to external storage. The SDK provides a method for that which takes absoluteFilePath as a parameter.
int snapshot(String absoluteFilePath, Context context, OperationDelegateCallBack callBack);
Unfortunately, because of the scoped storage introduced in Android 10, this method is not working. The docs indicate that if I want to use scoped storage I need to implement this feature myself. In this case, I need to get the raw frame data in YUV420SP (NV21) format. The SDK provides a callback for that:
fun onReceiveFrameYUVData(
    sessionId: Int,
    y: ByteBuffer,
    u: ByteBuffer,
    v: ByteBuffer,
    videoFrameInfo: TuyaVideoFrameInfo?,
    camera: Any?,
)
I would like to use the YuvImage class from the android.graphics package to convert this image to JPEG (it provides the method compressToJpeg). The constructor of that class takes only a single byte array as a parameter, but the SDK callback provides the YUV components as separate buffers. How should I concatenate those three buffers into one array to use the YuvImage class?
By the way, is this the proper approach, or should I use something else?
SDK documentation: https://developer.tuya.com/en/docs/app-development/avfunction?id=Ka6nuvucjujar#title-3-Video%20screenshots

Unfortunately, because of the scoped storage introduced in Android 10, this method is not working.
Of course it still works if you use a normal readable and writable full path.
For Android 10 you don't have to change your usual path (I don't see why you would have a problem there).
For Android 11+, use public image directories such as DCIM and Pictures.
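If you do decide to assemble the NV21 array yourself for YuvImage, a minimal sketch could look like the following. It assumes the SDK delivers full, unpadded Y, U and V planes with 4:2:0 subsampling (so the U and V buffers are the same size); Nv21Packer is a hypothetical helper name, not part of the SDK.

```java
import java.nio.ByteBuffer;

public class Nv21Packer {
    /**
     * Packs separate Y, U and V planes into a single NV21 array:
     * the full Y plane followed by interleaved V/U bytes.
     * Assumes 4:2:0 subsampling, equal-sized U/V planes and no row padding.
     */
    public static byte[] pack(ByteBuffer y, ByteBuffer u, ByteBuffer v) {
        int ySize = y.remaining();
        int uSize = u.remaining();
        int vSize = v.remaining();
        byte[] nv21 = new byte[ySize + uSize + vSize];
        y.get(nv21, 0, ySize);
        // NV21 stores chroma interleaved, V first, then U.
        for (int i = 0; i < vSize; i++) {
            nv21[ySize + 2 * i] = v.get();
            nv21[ySize + 2 * i + 1] = u.get();
        }
        return nv21;
    }
}
```

The resulting array can then be passed to new YuvImage(nv21, ImageFormat.NV21, width, height, null) and compressed with compressToJpeg.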

Related

Accessing HardwareBuffer from ImageReader

I am writing an Application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via CameraCharacteristics and tried different formats such as YUV_420_888, only for the app to fail. The camera hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader I need to select the ImageFormat, in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered but since the ImageFormat is PRIVATE, the "planes" attribute in the Image returns NULL.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through
their hardware buffers (when available) through the
Image#getHardwareBuffer() method. Attempting to access the planes of a
private image, will return an empty array.
I do get an object of type HardwareBuffer when I get the image from the acquireLatestImage, but my question is: How do I get the actual data/bytes that represents the pixels from the HardwareBuffer object?
As mentioned in HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use AHardwareBuffer C language functions (using NDK) to access pixels of HardwareBuffer.
In short:
Create a native JNI method in some helper class so it can be called from Java/Kotlin
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process the pixels there (use a ByteBuffer to wrap the raw data)
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected.

Converting uint8_t* buffer to jobject

We are currently working on a specific live image manipulation/effect application, where we are working with NDK and using Camera2 and MediaNDK apis.
I'm using AImageReader to grab the current frames from the camera and apply effects in real time. This works pretty well; we are getting at least 30 fps at HD resolutions.
However, my job also requires me to return this edited image back to a given Java endpoint, that is, a method with the (Landroid/media/Image;)V signature. This can be changed to any other jobject I want, but it must be an image/bitmap kind of object.
I found out that the AImage I was using is just a C struct, so I won't be able to convert it to a jobject.
Our current process is something like this in order:
AImageReader_ImageListener is calling a static method with an assigned this context.
Method uses AImageReader_acquireNextImage and if the media is ok, sends it to a child class/object.
Here we manipulate the image data across multiple std::thread operations and merge the resulting image. I'm receiving YUV422-formatted data, but I'm converting it to RGB for easier processing.
Then we lock the mutex, return the resulting data to delegate, and delete the original image.
Delegate calls a static method that is responsible for finding/asking the Java method.
Now I need a simple and low-on-resource solution of converting the data at hand to a C++ object that can be represented also as a jobject.
We are using OpenCV in the processing, so it is possible for me to return a Bitmap object, but it looks like that job consumes more CPU time than I can afford.
How can I approach this problem? Is there a known fast way of converting a uint8_t* buffer to an image-like jobject?
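As an aside on the YUV-to-RGB step mentioned in the pipeline above, the per-pixel conversion itself is cheap and easy to verify in plain Java. This uses the standard BT.601 full-range coefficients; YuvToRgb is a hypothetical helper for illustration, not the code from the pipeline described.

```java
public class YuvToRgb {
    /** Converts one BT.601 full-range YUV pixel to a packed 0xRRGGBB int. */
    public static int toRgb(int y, int u, int v) {
        int r = (int) Math.round(y + 1.402 * (v - 128));
        int g = (int) Math.round(y - 0.344136 * (u - 128) - 0.714136 * (v - 128));
        int b = (int) Math.round(y + 1.772 * (u - 128));
        // Clamp to the valid 8-bit range.
        r = Math.min(255, Math.max(0, r));
        g = Math.min(255, Math.max(0, g));
        b = Math.min(255, Math.max(0, b));
        return (r << 16) | (g << 8) | b;
    }
}
```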

How to customize parameters used on renderscript root function?

Background
I'm new to RenderScript, and I would like to try some experiments with it (small ones, not the complex ones found in the SDK), so I thought of an exercise to try out, which is based on a previous question of mine (using the NDK).
What I want to do
In short, I would like to pass bitmap data to RenderScript, and then have it copy the data to another bitmap whose dimensions are swapped relative to the first, so that the second bitmap is a rotation of the first.
For illustration:
From this bitmap (width:2 , height:4):
01
23
45
67
I would like it to be rotated (counter-clockwise by 90 degrees) to:
1357
0246
The problem
I've noticed that when I try to change the signature of the root function, Eclipse gives me errors about it.
Even creating new functions produces new errors. I've even tried the same code written on Google's blog (here), but I couldn't find out how the author created the functions he used, or why I can't change the filter function to take the input and output bitmap arrays.
What can I do in order to customize the parameters I send to renderscript, and use the data inside it?
Is it ok not to use "filter" or "root" functions (API 11 and above)? What can I do in order to have more flexibility about what I can do there?
You are asking a bunch of separate questions here, so I will answer them in order.
1) You want to rotate a non-square bitmap. Unfortunately, the bitmap model for RenderScript won't let you do this easily. The reason is that the input and output allocations must have the same shape (i.e. the same number of dimensions and the same values for those dimensions, even if the Types are different). To get the effect you want, you should use a root function that only has an output allocation of the new shape (i.e. input columns X input rows). You can create an rs_allocation global variable to hold your input bitmap (which you can then create/bind on the Java side). The kernel then merely needs to set the output cell to the result of rsGetElementAt(globalInAlloc, y, x).
2) If you are using API 11, you can't adjust the signature of the root() function (you can pass NULL allocations as input/output on the Java side if you are not using them). You also can't create more than one kernel per source file on these older API levels, so you are forced to have only a single root() function. If you want to use more kernels per source file, consider targeting a higher API level.
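The index mapping for the 90-degree counter-clockwise rotation in the question can be prototyped in plain Java before moving it into a kernel. Rotate is a hypothetical helper for checking the mapping; in RenderScript the same lookup would happen inside the root function via rsGetElementAt. Note that this rotation is a transpose combined with a flip of one axis.

```java
public class Rotate {
    /** Rotates a height x width byte grid 90 degrees counter-clockwise. */
    public static byte[][] ccw90(byte[][] in) {
        int h = in.length, w = in[0].length;
        byte[][] out = new byte[w][h];
        for (int r = 0; r < w; r++) {
            for (int c = 0; c < h; c++) {
                // Output row r is input column (w - 1 - r), read top to bottom.
                out[r][c] = in[c][w - 1 - r];
            }
        }
        return out;
    }
}
```

Applied to the 2x4 example from the question (rows 01, 23, 45, 67), this produces the expected rows 1357 and 0246.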

Mime-type of Android camera PreviewFormat

I'd like to use MediaCodec to encode the data coming from the camera (reason: it's more low-level, so hopefully faster than using MediaRecorder). Using Camera.PreviewCallback, I capture the data from the camera into a byte buffer in order to pass it on to a MediaCodec object.
To do this, I need to fill in a MediaFormat object, which would be fairly easy if I knew the MIME type of the data coming from the camera. I can pick this format using setPreviewFormat(), choosing one of the constants declared in the ImageFormat class.
Hence my question: given the different options provided by the ImageFormat class for setting the camera preview format, what are the corresponding MIME-type codes?
Thanks a lot in advance.
See the example at https://gist.github.com/3990442. You should set the MIME type of what you want to get out of the encoder, i.e. "video/avc".

Is there a way to import a 3D model into Android?

Is it possible to create a simple 3D model (for example in 3DS MAX) and then import it to Android?
That's where I got to:
I've used Google's APIDemos as a starting point - there are rotating cubes in there, each specified by two arrays: vertices and indices.
I've built my model using Blender and exported it as an OFF file - a text file that lists all the vertices and then the faces in terms of those vertices (indexed geometry)
Then I've created a simple C++ app that takes that OFF file and writes it as two XMLs containing arrays (one for vertices and one for indices)
These XML files are then copied to res/values and this way I can assign the data they contain to arrays like this:
int vertices[] = context.getResources().getIntArray(R.array.vertices);
I also need to manually change the number of faces to be drawn in here: gl.glDrawElements(GL10.GL_TRIANGLES, 212*6, GL10.GL_UNSIGNED_SHORT, mIndexBuffer); - you can find that number (212 in this case) at the top of the OFF file
Here you can find my project page, which uses this solution: Github project > vsiogap3d
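The OFF-to-arrays conversion described in the steps above could also be done directly in Java instead of going through a C++ app and XML resources. A minimal sketch, assuming a well-formed OFF file with triangular faces and no comment lines (OffParser is a hypothetical helper name):

```java
import java.util.Scanner;

public class OffParser {
    public final float[] vertices; // x, y, z per vertex
    public final short[] indices;  // three vertex indices per triangle

    /** Parses a minimal OFF file: header, counts, vertex lines, triangular faces. */
    public OffParser(String off) {
        Scanner s = new Scanner(off);
        if (!"OFF".equals(s.next())) throw new IllegalArgumentException("not an OFF file");
        int numVerts = s.nextInt();
        int numFaces = s.nextInt();
        s.nextInt(); // edge count, unused
        vertices = new float[numVerts * 3];
        for (int i = 0; i < vertices.length; i++) {
            vertices[i] = Float.parseFloat(s.next());
        }
        indices = new short[numFaces * 3];
        for (int f = 0; f < numFaces; f++) {
            // Each face line starts with its vertex count; only triangles here.
            if (s.nextInt() != 3) throw new IllegalArgumentException("faces must be triangles");
            for (int k = 0; k < 3; k++) {
                indices[f * 3 + k] = s.nextShort();
            }
        }
    }
}
```

The count to pass to glDrawElements is then simply indices.length, instead of a hand-edited number read from the top of the OFF file.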
You may export it to the ASE format.
From ASE, you can convert it to your code manually or programmatically.
You will need the vertices for the vertex array and the faces for the indices in Android.
Don't forget that you have to set
gl.glFrontFace(GL10.GL_CCW);
because the 3ds Max default is counter-clockwise.
It should be possible. You can have the file as a data file with your program (and as such it will be pushed onto the emulator and packaged for installation onto an actual device). Then you can write a model loader and viewer in java using the Android and GLES libraries to display the model.
Specific resources on this are probably limited, though. 3DS is a proprietary format, so third-party loaders are in short supply and mostly reverse-engineered. Other formats (such as Blender or Milkshape) are more open, and you should be able to find details on writing a loader for them in Java fairly easily.
Have you tried min3d for Android? It supports 3ds Max, OBJ and MD2 models.
Not sure about Android specifically, but generally speaking you need a script in 3DS Max that manually writes out the formatting you need from the model.
As to whether one exists for Android or not, I do not know.
You can also convert a 3DS MAX model with the 3D Object Converter:
http://web.t-online.hu/karpo/
This tool can convert a 3ds object to text/XML format or C code.
Please note that the tool is not free. You can try it for a 30-day trial period. 'C' code and XML converters are available.
'c' OpenGL output example:
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glEnable(GL_NORMALIZE);
GLfloat Material_1[] = { 0.498039f, 0.498039f, 0.498039f, 1.000000f };
glBegin(GL_TRIANGLES);
glMaterialfv(GL_FRONT, GL_DIFFUSE, Material_1);
glNormal3d(0.452267,0.000000,0.891883);
glVertex3d(5.108326,1.737655,2.650969);
glVertex3d(9.124107,-0.002484,0.614596);
glVertex3d(9.124107,4.039649,0.614596);
glEnd();
Or direct 'c' output:
Point3 Object1_vertex[] = {
{5.108326,1.737655,2.650969},
{9.124107,-0.002484,0.614596},
{9.124107,4.039649,0.614596}};
long Object1_face[] = {
3,0,1,2,
3,3,4,5,
3,6,3,5};
You can then migrate those collections of objects to your Java code.
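As a sketch of that migration, the generated face list can be flattened into a GL-ready index array by dropping the per-face vertex-count prefix (the leading 3 on each row above). FaceImport is a hypothetical helper name for illustration:

```java
public class FaceImport {
    /**
     * Converts a face list in the converter's layout, where each triangle is
     * prefixed by its vertex count (always 3 here), into a flat index array
     * suitable for glDrawElements with GL_UNSIGNED_SHORT.
     */
    public static short[] stripFaceCounts(long[] face) {
        int numFaces = face.length / 4; // 1 count + 3 indices per triangle
        short[] indices = new short[numFaces * 3];
        for (int f = 0; f < numFaces; f++) {
            for (int k = 0; k < 3; k++) {
                indices[f * 3 + k] = (short) face[f * 4 + 1 + k];
            }
        }
        return indices;
    }
}
```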
