Is the Camera.takePicture callback for raw data null when running in the emulator?
I took the CameraSurface method from the web, so it should be correct.
Before calling takePicture, you have to provide a buffer large enough to hold the raw image data (detailed here). To set the buffer, use the addCallbackBuffer method.
This issue seems to affect only the raw-data callback.
Using the picture callback, which retrieves a JPEG, avoids the problem.
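For reference, a minimal sketch of the addCallbackBuffer approach described above, assuming you already hold an open Camera instance and the three callbacks; the buffer-size estimate (picture size times the bits per pixel of the preview format) is only a guess, and on the emulator the raw data may still come back null:

    import android.graphics.ImageFormat;
    import android.hardware.Camera;

    final class RawPictureHelper {
        static void takePictureWithRawBuffer(Camera camera,
                                             Camera.ShutterCallback shutter,
                                             Camera.PictureCallback raw,
                                             Camera.PictureCallback jpeg) {
            Camera.Parameters params = camera.getParameters();
            Camera.Size size = params.getPictureSize();
            // Rough estimate of the raw frame size, assuming it arrives in the preview format (often NV21).
            int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
            byte[] rawBuffer = new byte[size.width * size.height * bitsPerPixel / 8];

            // Queue the buffer before requesting the picture, as the answer above describes.
            camera.addCallbackBuffer(rawBuffer);

            // The second argument is the raw PictureCallback.
            camera.takePicture(shutter, raw, jpeg);
        }
    }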
I am writing an Application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via CameraCharacteristics, and trying other formats such as YUV_420_888 only made the app fail. The camera's hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader I need to select the ImageFormat, in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered but since the ImageFormat is PRIVATE, the "planes" attribute in the Image returns NULL.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through
their hardware buffers (when available) through the
Image#getHardwareBuffer() method. Attempting to access the planes of a
private image, will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual data/bytes that represent the pixels from the HardwareBuffer object?
As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C language functions (via the NDK) to access the pixels of a HardwareBuffer.
In short:
Create a native JNI method in some helper class so it can be called from Java/Kotlin
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process them there (use a ByteBuffer to wrap the raw data).
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected.
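For illustration, here is a minimal sketch of the Java side of steps 1 and 2, with a hypothetical helper class and library name; the native implementation of processBuffer is where steps 3 to 6 happen, using the AHardwareBuffer_* functions listed above:

    import android.hardware.HardwareBuffer;
    import android.media.Image;
    import android.media.ImageReader;

    public final class NativeBufferProcessor {
        static {
            System.loadLibrary("native_buffer_processor"); // hypothetical NDK library
        }

        // Implemented in C/C++: wraps the buffer with AHardwareBuffer_fromHardwareBuffer,
        // locks it with AHardwareBuffer_lock, reads the pixels, then calls AHardwareBuffer_unlock.
        private static native void processBuffer(HardwareBuffer buffer, int width, int height);

        public static void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return;
            }
            try {
                // For PRIVATE images on Android P and later, the pixels are only reachable this way.
                HardwareBuffer buffer = image.getHardwareBuffer();
                if (buffer != null) {
                    processBuffer(buffer, image.getWidth(), image.getHeight());
                }
            } finally {
                image.close(); // always return the Image to the ImageReader
            }
        }
    }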
I have implemented my own subclass of ImageAnalysis.Analyzer and it works as expected. Now I want to "attach/detach" this analyzer to the ImageAnalysis instance dynamically (based on some user actions). From the ImageAnalysis API it looks like this should be possible: there is a setAnalyzer method and also a clearAnalyzer method. However, this works correctly only for the first setAnalyzer call. If I then call clearAnalyzer and setAnalyzer again, the analyze method is not called.
The documentation of the clearAnalyzer method says:
Removes a previously set analyzer.
This will stop data from streaming to the ImageAnalysis.
So it looks like it may be working correctly, as documented: it stops data from streaming to ImageAnalysis. But is it possible to clear and set the analyzer the way I want? I don't want to add an enabled boolean flag to my analyzer; there are threading issues and image queue issues, and the set/clear solution would be the cleanest, I think.
Seems like this bug is fixed now, but if anyone still stumbles upon such behavior, make sure you didn't forget to call ImageProxy.close() somewhere.
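For example, a minimal sketch (the class name is made up for illustration, and executor and imageAnalysis are assumed to exist); the important part is that analyze() always closes its ImageProxy, otherwise the image queue stays full and a later setAnalyzer call never receives frames:

    import androidx.camera.core.ImageAnalysis;
    import androidx.camera.core.ImageProxy;

    class MyAnalyzer implements ImageAnalysis.Analyzer {
        @Override
        public void analyze(ImageProxy image) {
            try {
                // ... inspect image.getPlanes() here ...
            } finally {
                // Without this, no further frames are delivered,
                // even after setAnalyzer is called again.
                image.close();
            }
        }
    }

    // Attaching and detaching based on user actions:
    // imageAnalysis.setAnalyzer(executor, new MyAnalyzer());  // frames start flowing
    // imageAnalysis.clearAnalyzer();                          // streaming stops
    // imageAnalysis.setAnalyzer(executor, new MyAnalyzer());  // frames resume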
We are currently working on a live image manipulation/effect application, built with the NDK and using the Camera2 and MediaNDK APIs.
I'm using AImageReader to get the current frames from the camera and apply effects in real time. This works pretty well; we are getting at least 30 fps at HD resolutions.
However, my job also requires me to return this edited image back to a given Java endpoint, i.e. a method with the signature (Landroid/media/Image;)V. This can be changed to any other jobject I want, but it must be an image/bitmap kind of object.
I found out that the AImage I was using is just a C struct, so I won't be able to convert it to a jobject.
Our current process is something like this in order:
The AImageReader_ImageListener calls a static method with an assigned this context.
The method uses AImageReader_acquireNextImage and, if the media is OK, sends it to a child class/object.
Here we manipulate the image data across multiple std::thread operations and merge the resulting image. I'm receiving YUV422-formatted data, but I'm converting it to RGB for easier processing.
Then we lock the mutex, return the resulting data to the delegate, and delete the original image.
The delegate calls a static method that is responsible for finding/calling the Java method.
Now I need a simple, low-resource way of converting the data at hand into a C++ object that can also be represented as a jobject.
We are using OpenCV in the processing, so it is possible for me to return a Bitmap object, but it looks like that job consumes more CPU time than I can afford.
How can I approach this problem? Is there a known fast way of converting a uint8_t *buffer into an image-like jobject?
How can I capture continuous images using the Camera2 API?
I wrote a simple application with a thread in it to capture continuous images, but it is not working.
It would be nice if you provided your code so we could see what's wrong.
In general terms:
Create a CameraDevice object and call the CameraDevice.createCaptureSession(List<Surface>, CameraCaptureSession.StateCallback, Handler) method, specifying which surfaces you might like to output to (maybe just one).
Once the CameraCaptureSession.StateCallback that you specified in createCaptureSession has its onConfigured(CameraCaptureSession) method called, call the CameraDevice.createCaptureRequest(int) method, which returns a CaptureRequest.Builder object.
Use the CaptureRequest.Builder.addTarget(Surface) method to specify which of the pre-specified surfaces you want to output to (probably all of them).
Once you're done adding targets, call the CaptureRequest.Builder.build() method, which returns a CaptureRequest object.
Use the CameraCaptureSession object provided to you in onConfigured(CameraCaptureSession) to pass your CaptureRequest to the CameraCaptureSession.setRepeatingRequest(CaptureRequest, CameraCaptureSession.CaptureCallback, Handler) method. This will start continuous output to the surfaces that you specified.
Seriously, this API is so complicated, you'd think they didn't want you to use it. If you need more detailed information about what these classes and methods do, the Android documentation is very good.
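Putting that together, a rough sketch (untested; it assumes you already have an open CameraDevice, an output Surface, and a background Handler, and it omits permissions and most error handling):

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.os.Handler;
    import android.view.Surface;
    import java.util.Arrays;

    final class ContinuousCapture {
        static void start(final CameraDevice camera, final Surface surface, final Handler handler)
                throws CameraAccessException {
            // 1. Tell the device which surface(s) we may output to.
            camera.createCaptureSession(Arrays.asList(surface),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(CameraCaptureSession session) {
                            try {
                                // 2. Build a request that targets the surface.
                                CaptureRequest.Builder builder =
                                        camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                                builder.addTarget(surface);
                                // 3. Repeat the request: frames stream continuously to the surface.
                                session.setRepeatingRequest(builder.build(), null, handler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(CameraCaptureSession session) {
                            // Session configuration failed; handle as needed.
                        }
                    }, handler);
        }
    }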
I:
Create an android::MediaBufferGroup;
Fill it up with multiple buf_group.add_buffer(new android::MediaBuffer(bufsize)); calls on initialisation;
Call buf_group->acquire_buffer(&buffer) when I need a buffer to send somewhere;
Use buffer->data() to get the actual memory location to store the data at, use set_range and set up the metadata, then feed the buffer into the other component;
That other component releases the buffer, returning it to the MediaBufferGroup.
It works, but not reliably. Sometimes the acquired buffer's data() returns NULL, and sometimes the program crashes on release()...
How do I use MediaBufferGroup properly? Should I use some synchronization?
Almost all of your steps are correct. The one point that is not clear is step 4. Typically, a MediaBuffer is pulled by a consumer from a producer through a read call. So, I presume in your setup:
All steps mentioned above are performed by the producer
Consumer invokes mSource->read(&newBuffer); where newBuffer is defined as MediaBuffer *newBuffer;
At the producer's end, the buffer is held as MediaBuffer *mBuffer;. When the read call is processed, the output is populated as *out = mBuffer;.
For safety, please set mBuffer to NULL after this step.
After consuming the buffer, the consumer shall release it with newBuffer->release();
Again, for safety, please set newBuffer to NULL after this step.
With these changes, I presume your code should work fine based on your description.
MediaBuffer is a basic container in the stagefright framework.
For the usage of MediaBuffer/MediaBufferGroup/MediaSource, there is some simple example code under AOSP frameworks/av/cmds/stagefright.
Pay attention to the implementation of the SineSource class and its usage.