We are currently working on a live image manipulation/effects application built with the NDK, using the Camera2 and Media NDK APIs.
I'm using AImageReader to pull the current frames from the camera and apply effects in real time. This works pretty well; we are getting at least 30 fps at HD resolutions.
However, I also need to return the edited image to a given Java endpoint, a method with the signature (Landroid/media/Image;)V. This endpoint can be changed to take any other jobject I want, but it must be some kind of image/bitmap.
I found out that the AImage I was using is just a C struct, so I'm not able to convert it to a jobject.
Our current process is something like this, in order (a rough native-side sketch follows the list):
The AImageReader_ImageListener calls a static method with an assigned this context.
The method uses AImageReader_acquireNextImage and, if the media status is OK, sends the image to a child class/object.
There we manipulate the image data across multiple std::thread operations and merge the resulting image. I'm receiving YUV422-formatted data, but I convert it to RGB for easier processing.
Then we lock the mutex, return the resulting data to the delegate, and delete the original image.
The delegate calls a static method that is responsible for finding/calling the Java method.
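Roughly, the listener/acquire part of that flow looks like the sketch below (simplified; FrameProcessor and processFrame stand in for our actual classes):

#include <cstdint>
#include <media/NdkImage.h>
#include <media/NdkImageReader.h>

// Stand-in for our effect pipeline (YUV -> RGB conversion, effects, merge).
struct FrameProcessor {
    void processFrame(const uint8_t* data, int len) { /* ... */ }
};

// Static callback registered on the AImageReader; 'context' is our own object.
static void onImageAvailable(void* context, AImageReader* reader) {
    AImage* image = nullptr;
    if (AImageReader_acquireNextImage(reader, &image) != AMEDIA_OK || image == nullptr) {
        return;  // no frame available or acquisition failed
    }

    uint8_t* data = nullptr;
    int len = 0;
    AImage_getPlaneData(image, /*planeIdx=*/0, &data, &len);  // Y plane; U/V are fetched the same way

    static_cast<FrameProcessor*>(context)->processFrame(data, len);

    AImage_delete(image);  // release the frame back to the reader
}

void attachListener(AImageReader* reader, FrameProcessor* processor) {
    // Field order in AImageReader_ImageListener: context, then onImageAvailable.
    AImageReader_ImageListener listener{processor, onImageAvailable};
    AImageReader_setImageListener(reader, &listener);
}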
Now I need a simple, low-overhead way of converting the data at hand into a C++ object that can also be represented as a jobject.
We are using OpenCV in the processing, so it is possible for me to return a Bitmap object, but that job seems to consume more CPU time than I can afford. The sketch below shows roughly what I mean by returning a Bitmap.
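For reference, the Bitmap route would look something like this (a sketch; rgba, width, and height stand for our already-converted frame, and it assumes the Bitmap stride equals width * 4):

#include <jni.h>
#include <android/bitmap.h>
#include <cstdint>
#include <cstring>

// Sketch: wrap an already-converted RGBA buffer into an android.graphics.Bitmap jobject.
jobject rgbaToBitmap(JNIEnv* env, const uint8_t* rgba, int width, int height) {
    jclass bitmapCls = env->FindClass("android/graphics/Bitmap");
    jclass configCls = env->FindClass("android/graphics/Bitmap$Config");

    jfieldID argb8888Field =
            env->GetStaticFieldID(configCls, "ARGB_8888", "Landroid/graphics/Bitmap$Config;");
    jobject argb8888 = env->GetStaticObjectField(configCls, argb8888Field);

    jmethodID createBitmap = env->GetStaticMethodID(
            bitmapCls, "createBitmap",
            "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
    jobject bitmap = env->CallStaticObjectMethod(bitmapCls, createBitmap, width, height, argb8888);

    void* dst = nullptr;
    if (AndroidBitmap_lockPixels(env, bitmap, &dst) == ANDROID_BITMAP_RESULT_SUCCESS) {
        // Copy the whole frame; assumes stride == width * 4 for ARGB_8888.
        std::memcpy(dst, rgba, static_cast<size_t>(width) * height * 4);
        AndroidBitmap_unlockPixels(env, bitmap);
    }
    return bitmap;  // local reference; hand it straight to the Java endpoint
}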
How can I approach this problem? Is there a known fast way of converting a uint8_t* buffer into an image-like jobject?
I'm working with an Android SDK for an IoT camera. I want to implement taking snapshots from the camera and saving them to external storage. The SDK provides a method for that which takes absoluteFilePath as a parameter.
int snapshot(String absoluteFilePath, Context context, OperationDelegateCallBack callBack);
Unfortunately, because of the scoped storage introduced in Android 10, this method is not working. There is info that if I want to use scoped storage I need to implement this feature myself. In this case, I need to get the raw frame data in YUV420SP (NV21) format. The SDK provides a callback for that:
fun onReceiveFrameYUVData(
    sessionId: Int,
    y: ByteBuffer,
    u: ByteBuffer,
    v: ByteBuffer,
    videoFrameInfo: TuyaVideoFrameInfo?,
    camera: Any?,
)
I would like to use the YuvImage class from the android.graphics package to convert this image to JPEG (it provides the compressToJpeg method). The constructor of that class takes only a single byte array for the image data. The callback from the SDK provides the YUV components as separate buffers. How should I concatenate those three buffers into one array to use the YuvImage class?
BTW, is this the proper approach, or should I use something else?
SDK documentation: https://developer.tuya.com/en/docs/app-development/avfunction?id=Ka6nuvucjujar#title-3-Video%20screenshots
Unfortunately, because of the scoped storage introduced in Android 10, this method is not working.
Of course it still works if you use a normal writable and readable full path.
For Android 10 you don't have to change your usual path (I don't see why you would have a problem there).
For Android 11+ use public image directories like DCIM and Pictures.
I am writing an application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via the CameraCharacteristics and tried different formats like YUV_420_888, only for the app to fail. The camera hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader I need to select the ImageFormat; in this case only PRIVATE works. If I try NV21, I get an error saying it's not supported.
My onImageAvailableListener gets triggered, but since the ImageFormat is PRIVATE, the "planes" attribute of the Image returns null.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through their hardware buffers (when available) through the Image#getHardwareBuffer() method. Attempting to access the planes of a private image, will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual bytes that represent the pixels from the HardwareBuffer object?
As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C-language functions (via the NDK) to access the pixels of a HardwareBuffer.
In short:
Create a native JNI method in some helper class so you can call it from Java/Kotlin
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process the pixels there (use a ByteBuffer to wrap the raw data).
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected.
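A minimal sketch of those steps (the Java_com_example_NativeHelper_readPixels name assumes a hypothetical NativeHelper class with a matching native method; the lock can still fail if the buffer was not allocated with CPU-readable usage):

#include <jni.h>
#include <android/hardware_buffer.h>
#include <android/hardware_buffer_jni.h>

// Wrap the Java HardwareBuffer, lock it for CPU reads, and access the raw pixels.
extern "C" JNIEXPORT void JNICALL
Java_com_example_NativeHelper_readPixels(JNIEnv* env, jclass, jobject hardwareBuffer) {
    AHardwareBuffer* buffer = AHardwareBuffer_fromHardwareBuffer(env, hardwareBuffer);
    if (buffer == nullptr) return;

    AHardwareBuffer_Desc desc{};
    AHardwareBuffer_describe(buffer, &desc);  // width, height, stride, format, usage

    void* pixels = nullptr;
    int rc = AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_READ_RARELY,
                                  /*fence=*/-1, /*rect=*/nullptr, &pixels);
    if (rc == 0 && pixels != nullptr) {
        // ... read the pixel data here; desc.stride is in pixels and the layout depends on desc.format ...
        AHardwareBuffer_unlock(buffer, /*fence=*/nullptr);
    }
}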
I am working with the camera2 API in Android and am trying to understand this code I am using. Part of the code goes like this:
previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
ImageFormat.YUV_420_888, 4);
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
// This adds another output surface but not sure where this surface comes from..
previewRequestBuilder.addTarget(previewReader.getSurface());
imageListener is an object from another class that implements android.media.ImageReader.OnImageAvailableListener and backgroundHandler is just a background thread. I am not including code for these two or previewRequestBuilder as they do not seem to be important for understanding my question.
I have searched extensively but it just seems like some magic happens and previewReader finds some surface somewhere, somehow. According to the documentation, what getSurface() does is to:
Get a Surface that can be used to produce Image for this ImageReader
Can anyone explain where it gets this?
That Surface belongs to the ImageReader; it was created in the native equivalent of the ImageReader's constructor, and is (effectively) an ImageReader private member, with a getter.
Here is the line in the native constructor that sets up the IGraphicBufferProducer (gbProducer), which is basically the native equivalent of a Surface.
Here is where you can see that the native code uses that same member to form the return value from getSurface()/nativeGetSurface() (you may have to trace through the code a bit, but it's all there).
So that's the literal answer to your question. But maybe you were asking because it isn't clear why the camera doesn't create the Surface, and force you to give it to the ImageReader, instead: A Surface is a complex object (actually, a buffer queue), and shouldn't be thought of as a simple, pre-allocated bitmap. At the time the capture takes place, the camera pipeline will communicate with its output Surfaces, and set up the correct dimensions and color planes and so forth. (Note that you can add multiple targets via addTarget(); the camera can use each of them.) All the camera needs to know is where it's going to send its output; it doesn't need to create the output Surface itself.
I am working on an Android project and found that one operation has become a performance bottleneck. This operation works on a large array A and stores the result in another array B.
I found that this operation can be parallelized. Array A can be divided into N smaller segments, and the operation can work on each segment independently and store the result in the corresponding segment of B.
The operation is written in native code with GetPrimitiveArrayCritical/ReleasePrimitiveArrayCritical pairs to access arrays A and B.
My question is that, with multithreading, GetPrimitiveArrayCritical(pEnv, A, 0) will be called multiple times from different threads. Does GetPrimitiveArrayCritical block? I.e., if one thread makes this call, can a second thread make the same call before the first one calls ReleasePrimitiveArrayCritical()?
Please help.
Yes, you can call GetPrimitiveArrayCritical() from two concurrent threads. The function will not block, and both of your threads will be granted access to the underlying array from native code. On the other hand, the function does nothing to synchronize this access; i.e., if thread 1 changes the value at index 100, and thread 2 also changes the value at index 100, you don't know which value will win in the end.
If you don't write to the array, you are guaranteed to be served correctly. Don't forget to call ReleasePrimitiveArrayCritical with the JNI_ABORT flag.
If you want to write to the array, check the output parameter isCopy as set by GetPrimitiveArrayCritical(JNIEnv *env, jarray array, jboolean *isCopy). If the result is 0 (JNI_FALSE), you can safely proceed with your multithreaded approach.
If the result is not 0, ReleasePrimitiveArrayCritical() will overwrite all elements of the Java array, even if some of them were changed in Java or in C on a different thread. If your program detects this situation, it must release the array (with JNI_ABORT) and wait for the other thread to complete. On Android I have never seen an array being copied; they are always pinned in place. But nobody will guarantee that this will not happen to you, either on a current system or in a future version.
That's why you MUST check the isCopy parameter.
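A sketch of that per-thread pattern (processSegment is a placeholder for the real operation; env must be the JNIEnv attached to the thread making the call):

#include <jni.h>

// Placeholder for the actual operation on one segment: reads a[begin..end), writes b[begin..end).
static void processSegment(const jint* a, jint* b, jsize begin, jsize end);

// Each worker thread pins the arrays itself and bails out if the write target is a copy.
bool processRange(JNIEnv* env, jintArray arrA, jintArray arrB, jsize begin, jsize end) {
    auto* a = static_cast<jint*>(env->GetPrimitiveArrayCritical(arrA, nullptr));   // read-only
    jboolean bIsCopy = JNI_FALSE;
    auto* b = static_cast<jint*>(env->GetPrimitiveArrayCritical(arrB, &bIsCopy));  // written

    if (a == nullptr || b == nullptr || bIsCopy) {
        // A copied B means concurrent writers would clobber each other on release.
        if (b != nullptr) env->ReleasePrimitiveArrayCritical(arrB, b, JNI_ABORT);
        if (a != nullptr) env->ReleasePrimitiveArrayCritical(arrA, a, JNI_ABORT);
        return false;  // caller falls back to a single-threaded path
    }

    processSegment(a, b, begin, end);

    env->ReleasePrimitiveArrayCritical(arrB, b, 0);          // commit writes to B
    env->ReleasePrimitiveArrayCritical(arrA, a, JNI_ABORT);  // A was only read
    return true;
}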
I am developing an application where I need to read large image files (6000x6000), apply some filtering (like blurring and color effects), and then save the image.
The filtering library is a third-party library written in Java that takes something like this as input:
/**
 * rgbArray: an array of pixels
 */
public int[] ImageFiltering(int[] rgbArray, int Width, int Height);
The problem is that if I load the image into memory (6000 x 6000 x 4 bytes = 137.33 MB), Android throws an OutOfMemoryError.
After reading some documentation and learning that memory allocated from the NDK is not part of the application's Java heap, I got an interesting idea:
Open the image from the NDK, read its contents, and store them in an array
Pass the array back to Java
Apply the filter to the array data
Return the array to the NDK
Save the array data into a new image and release the native memory
Here is an example of an NDK function which returns the big, fat array:
jint*
Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv* env, jobject thiz, jint w, jint h)
{
    int* pixels = (int*)malloc(w * h * 4);
    read_image_into_array("image.jpg", pixels);
    return pixels;
}
The goal is to reserve the memory natively in order to avoid the OutOfMemoryError, and to pass a reference to that memory to Java in order to work with it.
Since I am not a C developer and have never touched JNI, does all this make sense, and how could it be implemented in the NDK?
Use a direct ByteBuffer backed by the natively allocated memory. You can create a direct byte buffer from JNI (using NewDirectByteBuffer()) and return it.
You'll need to provide a complementary method for disposing of the memory or otherwise indicating to the native side that it is no longer in use.
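A sketch of that approach (loadImage, freeImage, and read_image_into_array are placeholder names, not part of any real API):

#include <jni.h>
#include <cstdint>
#include <cstdlib>

// Placeholder for the actual native decode into a pre-allocated RGBA buffer.
static void read_image_into_array(const char* path, int32_t* pixels, int w, int h);

// Allocate native memory, decode into it, and hand it to Java as a direct ByteBuffer.
extern "C" JNIEXPORT jobject JNICALL
Java_com_example_hellojni_HelloJni_loadImage(JNIEnv* env, jobject, jint w, jint h) {
    const jlong size = static_cast<jlong>(w) * h * 4;  // 4 bytes per pixel
    auto* pixels = static_cast<int32_t*>(std::malloc(static_cast<size_t>(size)));
    if (pixels == nullptr) return nullptr;

    read_image_into_array("image.jpg", pixels, w, h);

    // The ByteBuffer wraps the native memory directly; nothing is copied onto the Java heap.
    return env->NewDirectByteBuffer(pixels, size);
}

// Complementary method: Java calls this when done, passing the same ByteBuffer back.
extern "C" JNIEXPORT void JNICALL
Java_com_example_hellojni_HelloJni_freeImage(JNIEnv* env, jobject, jobject buffer) {
    std::free(env->GetDirectBufferAddress(buffer));
}

On the Java side the returned buffer can be viewed with buffer.order(ByteOrder.nativeOrder()).asIntBuffer(); note that a library taking int[] will still need the pixels copied into a Java array before filtering.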