Accessing HardwareBuffer from ImageReader - Android

I am writing an application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via the CameraCharacteristics and tried other formats such as YUV_420_888, only for the app to fail. The camera's hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader, I need to select the ImageFormat, and in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered, but since the ImageFormat is PRIVATE, the "planes" attribute of the Image returns null.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through
their hardware buffers (when available) through the
Image#getHardwareBuffer() method. Attempting to access the planes of a
private image, will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual data/bytes that represent the pixels from the HardwareBuffer object?

As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C-language functions (via the NDK) to access the pixels of a HardwareBuffer.
In short (a minimal sketch follows below):
Create a native JNI method in some helper class so you can call it from Java/Kotlin
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process the pixels there (use a ByteBuffer to wrap the raw data).
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected.
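A minimal sketch of such a JNI method, assuming API level 26+; the class name (com.example.BufferHelper) and method name are placeholders:

#include <jni.h>
#include <android/hardware_buffer.h>
#include <android/hardware_buffer_jni.h>

extern "C" JNIEXPORT void JNICALL
Java_com_example_BufferHelper_processHardwareBuffer(JNIEnv *env, jobject /* this */,
                                                    jobject hardwareBuffer) {
    // Wrap the Java HardwareBuffer as a native AHardwareBuffer. This does not
    // acquire an extra reference; call AHardwareBuffer_acquire if the buffer
    // must outlive the Java object.
    AHardwareBuffer *buffer = AHardwareBuffer_fromHardwareBuffer(env, hardwareBuffer);
    if (buffer == nullptr) return;

    AHardwareBuffer_Desc desc;
    AHardwareBuffer_describe(buffer, &desc);

    // Lock for CPU reads; this fails for protected or non-CPU-readable buffers.
    void *data = nullptr;
    int err = AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_READ_RARELY,
                                   -1 /* no fence */, nullptr, &data);
    if (err == 0 && data != nullptr) {
        // data now points at the raw pixels; note that desc.stride is in pixels.
        // ... process or copy the pixels here ...
        AHardwareBuffer_unlock(buffer, nullptr);
    }
}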

Related

How to concat three YUV buffers to one array?

I'm working with an Android SDK for an IoT camera. I want to implement taking snapshots from the camera and saving them to external storage. The SDK provides a method for that which takes absoluteFilePath as a parameter.
int snapshot(String absoluteFilePath, Context context, OperationDelegateCallBack callBack);
Unfortunately, because of scoped storage introduced in Android 10, this method is not working. There is info that if I want to use scoped storage I need to implement this feature myself. In that case, I need to get the raw frame data in YUV420SP (NV21) format. The SDK provides a callback for that:
fun onReceiveFrameYUVData(
    sessionId: Int,
    y: ByteBuffer,
    u: ByteBuffer,
    v: ByteBuffer,
    videoFrameInfo: TuyaVideoFrameInfo?,
    camera: Any?,
)
I would like to use the YuvImage class from the android.graphics package to convert this image to JPEG (it provides the method compressToJpeg). The constructor of that class takes only a single byte array as a parameter, but the callback from the SDK provides the YUV components as separate buffers. How should I concat those three buffers into one array to use the YuvImage class?
BTW, is this the proper approach, or should I use something else?
SDK documentation: https://developer.tuya.com/en/docs/app-development/avfunction?id=Ka6nuvucjujar#title-3-Video%20screenshots
Unfortunately because of scope storage introduced in Android 10 this method is not working.
Of course it still works if you use a normal writable and readable full path.
For Android 10 you don't have to change your usual path. (I do not understand why you would have a problem there.)
For Android 11+, use public image directories like DCIM and Pictures.
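As for the concatenation part of the question: NV21 is not the three planes appended in y/u/v order; it is the full Y plane followed by interleaved V and U samples. A minimal sketch, assuming y holds width*height bytes and u/v are tightly packed quarter-resolution chroma planes:

import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Build an NV21 array (Y plane, then interleaved V/U) from three planes.
fun toNv21(y: ByteBuffer, u: ByteBuffer, v: ByteBuffer, width: Int, height: Int): ByteArray {
    val ySize = width * height
    val chromaSize = ySize / 4
    val nv21 = ByteArray(ySize + 2 * chromaSize)
    y.rewind()
    y.get(nv21, 0, ySize)
    var out = ySize
    for (i in 0 until chromaSize) {
        nv21[out++] = v.get(i) // NV21 interleaves V first, then U
        nv21[out++] = u.get(i)
    }
    return nv21
}

// YuvImage can then compress the single array to JPEG.
fun nv21ToJpeg(nv21: ByteArray, width: Int, height: Int): ByteArray {
    val out = ByteArrayOutputStream()
    YuvImage(nv21, ImageFormat.NV21, width, height, null)
        .compressToJpeg(Rect(0, 0, width, height), 90, out)
    return out.toByteArray()
}

If the SDK already delivers the chroma samples interleaved, a plain copy of the two chroma buffers after the Y plane is enough.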

How does "ImageReader.getSurface()" work?

I am working with the camera2 API on Android and am trying to understand this code I am using. Part of the code goes like this:
previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, 4);
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
// This adds another output surface but not sure where this surface comes from..
previewRequestBuilder.addTarget(previewReader.getSurface());
imageListener is an object from another class that implements android.media.ImageReader.OnImageAvailableListener, and backgroundHandler is just a background thread. I am not including the code for these two or for previewRequestBuilder, as they do not seem to be important for understanding my question.
I have searched extensively, but it just seems like some magic happens and previewReader finds some surface somewhere, somehow. According to the documentation, what getSurface() does is:
Get a Surface that can be used to produce Image for this ImageReader
Can anyone explain where it gets this?
That Surface belongs to the ImageReader; it was created in the native equivalent of the ImageReader's constructor, and is (effectively) an ImageReader private member, with a getter.
Here is the line in the native constructor that sets up the IGraphicBufferProducer (gbProducer), which is basically the native equivalent of a Surface.
Here is where you can see that the native code uses that same member to form the return value from getSurface()/nativeGetSurface() (you may have to trace through the code a bit, but it's all there).
So that's the literal answer to your question. But maybe you were asking because it isn't clear why the camera doesn't create the Surface, and force you to give it to the ImageReader, instead: A Surface is a complex object (actually, a buffer queue), and shouldn't be thought of as a simple, pre-allocated bitmap. At the time the capture takes place, the camera pipeline will communicate with its output Surfaces, and set up the correct dimensions and color planes and so forth. (Note that you can add multiple targets via addTarget(); the camera can use each of them.) All the camera needs to know is where it's going to send its output; it doesn't need to create the output Surface itself.
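To make that concrete, here is a minimal sketch of the wiring; everything passed in as a parameter (the camera device, preview Surface, reader, and handler) is assumed to exist elsewhere. The camera only ever receives handles to Surfaces that other components own:

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.media.ImageReader
import android.os.Handler
import android.view.Surface

// Sketch: the ImageReader owns its Surface; the camera just gets the handle.
fun wireUp(cameraDevice: CameraDevice, previewSurface: Surface,
           reader: ImageReader, backgroundHandler: Handler) {
    val requestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(previewSurface)   // consumer 1: the on-screen preview
        addTarget(reader.surface)   // consumer 2: the ImageReader's own Surface
    }
    // The session is configured with every Surface a request may target.
    // (This overload is deprecated in newer APIs in favor of SessionConfiguration.)
    cameraDevice.createCaptureSession(
        listOf(previewSurface, reader.surface),
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) {
                session.setRepeatingRequest(requestBuilder.build(), null, backgroundHandler)
            }
            override fun onConfigureFailed(session: CameraCaptureSession) {}
        },
        backgroundHandler)
}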

Converting uint8_t* buffer to jobject

We are currently working on a live image manipulation/effects application, built with the NDK and using the Camera2 and Media NDK APIs.
I'm using AImageReader as a way to pull current frames from the camera and apply effects in real time. This works pretty well; we are getting at least 30 fps at HD resolutions.
However, my job also requires me to return this edited image back to a given Java endpoint, a method with the signature (Landroid/media/Image;)V. This can be changed to any other jobject I want, but it must be an image/bitmap kind of object.
I found out that the AImage I was using is just a C struct, so I won't be able to convert it to a jobject.
Our current process is something like this, in order:
AImageReader_ImageListener calls a static method with an assigned this context.
The method uses AImageReader_acquireNextImage and, if the media is OK, sends it to a child class/object.
There we manipulate the image data across multiple std::thread operations and merge the resulting image. I'm receiving YUV422-formatted data, but I'm converting it to RGB for easier processing.
Then we lock the mutex, return the resulting data to a delegate, and delete the original image.
The delegate calls a static method that is responsible for finding/calling the Java method.
Now I need a simple, resource-friendly way of converting the data at hand to a C++ object that can also be represented as a jobject.
We are using OpenCV in the processing, so it is possible for me to return a Bitmap object, but it looks like that job consumes more CPU time than I can afford.
How can I approach this problem? Is there a known fast way of converting a uint8_t *buffer to an image-like jobject?
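One common approach, sketched below under stated assumptions: create the Bitmap once on the Java side (Bitmap.createBitmap(w, h, ARGB_8888)), pass it down, and blit the native RGBA buffer into it with the AndroidBitmap NDK API (link against jnigraphics). The class and method names here are placeholders:

#include <jni.h>
#include <android/bitmap.h>
#include <cstring>

// Sketch: copy a native RGBA8888 buffer into a Java Bitmap passed from above.
extern "C" JNIEXPORT void JNICALL
Java_com_example_FrameBridge_copyToBitmap(JNIEnv *env, jobject /* this */,
                                          jobject bitmap, jlong srcPtr) {
    const uint8_t *src = reinterpret_cast<const uint8_t *>(srcPtr);

    AndroidBitmapInfo info;
    if (AndroidBitmap_getInfo(env, bitmap, &info) != ANDROID_BITMAP_RESULT_SUCCESS ||
        info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        return;
    }

    void *dst = nullptr;
    if (AndroidBitmap_lockPixels(env, bitmap, &dst) != ANDROID_BITMAP_RESULT_SUCCESS) {
        return;
    }
    // Copy row by row in case the Bitmap's stride differs from width * 4.
    const size_t rowBytes = info.width * 4;
    for (uint32_t row = 0; row < info.height; ++row) {
        std::memcpy(static_cast<uint8_t *>(dst) + row * info.stride,
                    src + row * rowBytes, rowBytes);
    }
    AndroidBitmap_unlockPixels(env, bitmap);
}

Reusing one pre-allocated Bitmap per resolution avoids per-frame allocation, which is usually where the CPU time goes.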

Get the bytes[] from a TotalCaptureResult

I am getting the TotalCaptureResult object from the camera, using the Camera2 API in Android. I am using a preview, not a single image. Is there a way to get a byte[] from TotalCaptureResult?
Thank you.
Short answer: no.
All CaptureResult objects contain only metadata about a frame capture, no actual pixel information. The associated pixel data is sent to whatever you designated as the target Surface in your CaptureRequest.Builder. So you need to check with whatever Surface you set up, such as an ImageReader, which will give you access to an Image output from the camera, which in turn gives you access to the byte[].
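A minimal sketch of pulling the bytes out of an ImageReader target, assuming the reader was created with ImageFormat.JPEG (where the whole frame sits in a single plane; YUV formats expose three planes instead):

import android.media.ImageReader

val listener = ImageReader.OnImageAvailableListener { reader ->
    reader.acquireLatestImage()?.use { image ->
        val buffer = image.planes[0].buffer
        val bytes = ByteArray(buffer.remaining())
        buffer.get(bytes) // copy the frame out; the Image is recycled on close
        // ... hand bytes to your processing code ...
    }
}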

Mime-type of Android camera PreviewFormat

I'd like to use MediaCodec to encode the data coming from the camera (reason: it's more low-level, so hopefully faster than using MediaRecorder). Using Camera.PreviewCallback, I capture the data from the camera into a byte buffer in order to pass it on to a MediaCodec object.
To do this, I need to fill in a MediaFormat object, which would be fairly easy if I knew the MIME type of the data coming from the camera. I can pick this format using setPreviewFormat(), choosing one of the constants declared in the ImageFormat class.
Hence my question: given the different options provided by the ImageFormat class to set the camera preview format, what are the corresponding MIME-type codes?
Thanks a lot in advance.
See the example at https://gist.github.com/3990442. You should set the MIME type of what you want to get out of the encoder, i.e. "video/avc".
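In other words, the preview-format constants do not map to MIME types at all; the MIME type names the encoder's output, while the input pixel layout is described separately via the color-format key. A minimal sketch of configuring an H.264 encoder (the size, bitrate, and frame-rate values are placeholders):

import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Sketch: the MIME type names the *output* codec; the camera's preview
// layout is described separately via KEY_COLOR_FORMAT.
val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720).apply {
    setInteger(MediaFormat.KEY_COLOR_FORMAT,
               MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar)
    setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000)
    setInteger(MediaFormat.KEY_FRAME_RATE, 30)
    setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
}
val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)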
