I am getting the TotalCaptureResult object from the camera, using the Camera2 API on Android. I am using a preview, not a single image capture. Is there a way to get a byte[] from TotalCaptureResult?
Thank you.
Short answer: no.
All CaptureResult objects contain only metadata about a frame capture, no actual pixel information. The associated pixel data is sent to whatever you designated as the target Surface in your CaptureRequest.Builder. So you need to check with whatever Surface you set up, such as an ImageReader, which will give you access to an Image output from the camera, and through that to the byte[].
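For concreteness, here is a minimal Kotlin sketch of that setup; the size, maxImages count, and backgroundHandler are placeholders you would replace with your own values:

```kotlin
import android.graphics.ImageFormat
import android.media.ImageReader

// Placeholder size and maxImages; backgroundHandler is assumed to be your
// camera background-thread Handler.
val reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 3)

reader.setOnImageAvailableListener({ r ->
    val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
    // Plane buffers are direct ByteBuffers, so copy with get(), not array().
    val yPlane = image.planes[0].buffer
    val yBytes = ByteArray(yPlane.remaining())
    yPlane.get(yBytes)
    // ... process yBytes (planes[1]/planes[2] hold the chroma data) ...
    image.close() // free the slot so the reader can deliver the next frame
}, backgroundHandler)

// Then add reader.surface both to the session's output surfaces and as a
// target of your CaptureRequest.Builder.
```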
Related
I am writing an Application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via CameraCharacteristics and tried other formats such as YUV_420_888, only for the app to fail. The camera hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader, I need to select the ImageFormat, and in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered, but since the ImageFormat is PRIVATE, the "planes" attribute of the Image returns NULL.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through their hardware buffers (when available) through the Image#getHardwareBuffer() method. Attempting to access the planes of a private image will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual data/bytes that represent the pixels from the HardwareBuffer object?
As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C-language functions (via the NDK) to access the pixels of a HardwareBuffer.
In short:
Create a native JNI method in a helper class so you can call it from Java/Kotlin
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process the pixels there (use a ByteBuffer to wrap the raw data)
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected; a sketch of the Kotlin side follows below.
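Here is what the Java/Kotlin side of that recipe might look like; the library name, helper object, and native method are hypothetical, and the C/C++ side (not shown) would do the actual AHardwareBuffer locking:

```kotlin
import android.hardware.HardwareBuffer
import android.media.Image

object HardwareBufferHelper {
    init {
        System.loadLibrary("yourlib") // hypothetical NDK library name
    }

    // Implemented in C/C++: wraps the jobject via AHardwareBuffer_fromHardwareBuffer,
    // then AHardwareBuffer_lock / AHardwareBuffer_lockPlanes, reads the pixels,
    // and finally AHardwareBuffer_unlock.
    external fun processHardwareBuffer(buffer: HardwareBuffer)
}

fun onImage(image: Image) {
    image.hardwareBuffer?.let { hb ->
        HardwareBufferHelper.processHardwareBuffer(hb)
        hb.close() // drop our reference once the native side is done
    }
    image.close()
}
```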
I have used the official sample for the Camera2 API to capture a RAW sensor frame. The code is in Java, but I converted it to Kotlin with the help of Android Studio. I tested it, and I am able to take a DNG picture and save it to my phone. No problem so far.
But what I really want is to retrieve some information about the picture; I don't care about saving it. I want to do the processing directly on my smartphone.
What I have tried so far is to get the byte array of the image.
In the function dequeueAndSaveImage, I retrieve the image from an ImageReader: image = reader.get()!!.acquireNextImage().
I suppose this is where I have to process the image. I tried to log image.width, image.height and image.planes.count, and there was no problem.
By the way, since the format is RAW_SENSOR, image.planes.count is 1, corresponding to a single plane of raw sensor image data with 16 bits per color sample.
But when I try to log image.planes[0].buffer.array().size, for example, I get a FATAL EXCEPTION: CameraBackground with java.lang.UnsupportedOperationException.
And if I try to log the same thing in the function that saves the image to a DNG file, I get another kind of error: FATAL EXCEPTION: AsyncTask #1, also with java.lang.UnsupportedOperationException.
Am I even going the right way to retrieve information about the image? For example, the intensity of the pixels, the average, the standard deviation for each color channel, etc.
EDIT: I think I have found the problem, although not the solution.
When I log image.planes[0].buffer.hasArray(), it returns false, which is why calling array() throws an exception.
But then, how do I get the data from the image?
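(For anyone stuck at the same point: a direct ByteBuffer has no backing Java array, so the data has to be copied out with get() instead of array(). A minimal Kotlin sketch for a RAW_SENSOR image:)

```kotlin
import android.media.Image
import java.nio.ByteOrder

// Copy the single RAW_SENSOR plane into a byte[]; get() works on direct
// buffers even though hasArray() returns false.
fun rawBytes(image: Image): ByteArray {
    val buffer = image.planes[0].buffer
    buffer.rewind()
    return ByteArray(buffer.remaining()).also { buffer.get(it) }
}

// For statistics, a ShortBuffer view of the 16-bit samples is handier:
fun rawMean(image: Image): Double {
    val buffer = image.planes[0].buffer
    buffer.rewind()
    val samples = buffer.order(ByteOrder.nativeOrder()).asShortBuffer()
    var sum = 0L
    while (samples.hasRemaining()) sum += samples.get().toInt() and 0xFFFF // unsigned 16-bit
    return sum.toDouble() / samples.capacity()
}
```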
API level 21 introduced camera2, and with it setRepeatingRequest and setRepeatingBurst. I have read the doc here, but I still can't see the difference between the two. Any idea?
Well, you'll notice that the signatures of these two methods are slightly different: setRepeatingBurst's first argument is a List<CaptureRequest>, while setRepeatingRequest's is just a single CaptureRequest.
According to the docs,
setRepeatingBurst
With this method, the camera device will continually capture images, cycling through the settings in the provided list of CaptureRequests, at the maximum rate possible.
setRepeatingRequest
With this method, the camera device will continually capture images using the settings in the provided CaptureRequest, at the maximum rate possible.
So, setRepeatingBurst can be used to capture images with a list of different settings.
That's my best understanding, hope it helps!
Think of setRepeatingRequest as ONE CaptureRequest with one set of settings used to continually capture images.
Whereas with setRepeatingBurst there is a list of CaptureRequests, and each CaptureRequest has its own settings for continually capturing images.
Conclusion: a setRepeatingBurst call is like making multiple setRepeatingRequest calls in one call, cycled in order.
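A hedged Kotlin sketch of the difference (session, camera, previewSurface and handler are assumed to already exist; the exposure bracket is just an example):

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest
import android.os.Handler
import android.view.Surface

fun startRepeating(session: CameraCaptureSession, camera: CameraDevice,
                   previewSurface: Surface, handler: Handler) {
    // setRepeatingRequest: ONE settings object, applied to every frame.
    val single = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(previewSurface)
    }.build()
    session.setRepeatingRequest(single, null, handler)

    // setRepeatingBurst: a list of settings, cycled 1, 2, 3, 1, 2, 3, ...
    // (this call replaces the repeating request started above).
    val bracket = listOf(-2, 0, 2).map { ev ->
        camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
            addTarget(previewSurface)
            set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, ev)
        }.build()
    }
    session.setRepeatingBurst(bracket, null, handler)
}
```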
This is regarding Android's Camera2 APIs. Since the capture result and the output frame are produced asynchronously, one could get the capture result well before the actual frame. Is there a good way to associate a produced frame with its corresponding capture result?
Assuming you are talking about a frame that is sent to an ImageReader or SurfaceTexture upon capture (as in the ubiquitous camera2basic example), the trick is to compare unique timestamps identifying the images.
Save the TotalCaptureResult somewhere accessible when it is available in your CameraCaptureSession.CaptureCallback's onCaptureCompleted(...) call.
Then, when the actual image is available via your ImageReader.OnImageAvailableListener or SurfaceTexture.OnFrameAvailableListener, get the image's timestamp:
Long imageTimestamp = Long.valueOf(reader.acquireNextImage().getTimestamp()); or
Long imageTimestamp = Long.valueOf(surfaceTexture.getTimestamp()), respectively.
Compare timestamps with: imageTimestamp.equals(totalCaptureResult.get(CaptureResult.SENSOR_TIMESTAMP));
Notes:
The timestamp may not be an actual true system timestamp for your device, but it is guaranteed to be unique and monotonically increasing, so it works as an ID.
If you are sending the image to a SurfaceHolder or something else instead, you're out of luck as only the pixel information gets sent, not the timestamp present in the Image object. I'm not sure about the other places you can send a frame, e.g., MediaRecorder or Allocation, but I think not.
You probably need to add each new TotalCaptureResult to a growing set as they are generated, and then compare an incoming image's timestamp against all of these, because of the asynchronous nature you noted. I'll let you figure out how to do that as you see fit.
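A sketch of that bookkeeping, with a ConcurrentHashMap standing in for the "growing set" (entries are removed once matched; the retry path for images that arrive before their result is left out for brevity):

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult
import android.media.ImageReader
import java.util.concurrent.ConcurrentHashMap

// Completed results keyed by sensor timestamp.
val pendingResults = ConcurrentHashMap<Long, TotalCaptureResult>()

val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(session: CameraCaptureSession,
                                    request: CaptureRequest,
                                    result: TotalCaptureResult) {
        result.get(CaptureResult.SENSOR_TIMESTAMP)?.let { ts -> pendingResults[ts] = result }
    }
}

val imageListener = ImageReader.OnImageAvailableListener { reader ->
    val image = reader.acquireNextImage() ?: return@OnImageAvailableListener
    val result = pendingResults.remove(image.timestamp)
    if (result != null) {
        // image and result describe the same frame: process them together.
    } // else: the result hasn't arrived yet; park the image and retry later.
    image.close()
}
```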
I had to solve a similar situation (syncing frames across surfaces); Sumner's solution (.getTimestamp() on the respective received Image object) did the trick for me for SurfaceTexture and ImageReader.
Just a quick note on other surfaces (which, as pointed out, don't give you an Image object): at least for MediaCodec, the BufferInfo object received by the onOutputBufferAvailable callback has a presentationTimeUs, which is "derived from the presentation timestamp passed in with the corresponding input buffer" and, at least for me, appears to match the timestamps from other surfaces. (Note the different unit though.)
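To make the unit difference concrete, a small Kotlin sketch of the MediaCodec side (whether the timestamps actually line up this way depends on how the codec's input surface is fed, as noted above):

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat

val codecCallback = object : MediaCodec.Callback() {
    override fun onOutputBufferAvailable(codec: MediaCodec, index: Int,
                                         info: MediaCodec.BufferInfo) {
        // presentationTimeUs is in microseconds; SENSOR_TIMESTAMP is in nanoseconds.
        val frameTimestampNs = info.presentationTimeUs * 1000L
        // ... look up the TotalCaptureResult whose SENSOR_TIMESTAMP equals frameTimestampNs ...
        codec.releaseOutputBuffer(index, false)
    }
    override fun onInputBufferAvailable(codec: MediaCodec, index: Int) {}
    override fun onError(codec: MediaCodec, e: MediaCodec.CodecException) {}
    override fun onOutputFormatChanged(codec: MediaCodec, format: MediaFormat) {}
}
```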
I'd like to use MediaCodec to encode the data coming from the camera (reason: it's more low-level, so hopefully faster than using MediaRecorder). Using Camera.PreviewCallback, I capture the data from the camera into a byte buffer in order to pass it on to a MediaCodec object.
To do this, I need to fill in a MediaFormat object, which would be fairly easy if I knew the MIME type of the data coming from the camera. I can pick the preview format using setPreviewFormat(), choosing one of the constants declared in the ImageFormat class.
Hence my question: given the different options provided by the ImageFormat class for the camera preview format, what are the corresponding MIME-type codes?
Thanks a lot in advance.
See the example at https://gist.github.com/3990442. You should set the MIME type of what you want to get out of the encoder, i.e. "video/avc".
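In other words, the ImageFormat constants don't map to MIME types; the MIME type describes the encoder's output, while the input color format is set separately. A minimal sketch (resolution and bitrate are placeholders, and NV21 preview data may still need a plane swap to match the encoder's expected YUV420 layout):

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

val format = MediaFormat.createVideoFormat("video/avc", 1280, 720).apply {
    // Raw YUV input from the camera; the exact layout the encoder accepts
    // varies by device, so query CodecCapabilities in real code.
    setInteger(MediaFormat.KEY_COLOR_FORMAT,
               MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar)
    setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000)
    setInteger(MediaFormat.KEY_FRAME_RATE, 30)
    setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
}
val encoder = MediaCodec.createEncoderByType("video/avc").apply {
    configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
}
```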