Real-time image processing with the Android camera2 API

I'm very new to Android. I'm trying to use the new Android camera2 API to build a real-time image processing application, and the application needs to maintain a good FPS rate as well. Following some examples, I managed to do the image processing inside the onImageAvailable(ImageReader reader) method of the ImageReader class. However, by doing so I can only manage a frame rate of around 5-7 FPS.
I've seen it advised to use RenderScript for YUV processing with the Android camera2 API. Will using RenderScript get me higher FPS rates?
If so, can someone please guide me on how to implement that? Since I'm new to Android, I'm having a hard time grasping the concepts of Allocation and RenderScript. Thanks in advance.

I don't know what type of image processing you want to perform. But if you are only interested in the intensity of the image (i.e. the grayscale information), you don't need any conversion of the YUV data array (e.g. into JPEG). For an image consisting of n pixels, the intensity information is given by the first n bytes of the YUV data array. So just cut those bytes out of the YUV data array:
byte[] intensity = Arrays.copyOfRange(data, 0, width * height);
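For reference, here is a hedged sketch of where such a data array can come from with camera2: inside onImageAvailable(), plane 0 of a YUV_420_888 Image is the luminance (Y) plane. The variable names are illustrative; note that on some devices the row stride may be larger than the image width, in which case you should copy row by row instead.

Image image = reader.acquireLatestImage();
ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();   // Y plane
byte[] data = new byte[yBuffer.remaining()];
yBuffer.get(data);          // "data" now starts with the luminance bytes
image.close();              // always close the Image to free the buffer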

In theory, you can get the available fps ranges with this call:
characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
and set the desired fps range here:
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, bestFPSRange);
So in principle, you should choose a range with the same lower and upper bound, and that should keep your frame rate constant.
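A minimal sketch of that idea, assuming you already have the CameraCharacteristics and the preview request builder (the variable best is my own name, not from the answer):

Range<Integer>[] ranges = characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
Range<Integer> best = null;
for (Range<Integer> r : ranges) {
    // prefer a fixed range (lower == upper) with the highest frame rate
    if (r.getLower().equals(r.getUpper())
            && (best == null || r.getUpper() > best.getUpper())) {
        best = r;
    }
}
if (best != null) {
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, best);
}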
HOWEVER, none of the LEGACY-profile devices I have tested (S5, Z3 Compact, Huawei Mate S, and HTC One M9) have been able to achieve 30fps at 1080p. The only way I was able to achieve that was with a device (LG G4) that turned out to have a FULL profile.
RenderScript will not buy you anything here if you are going to use it inside the onImageAvailable callback. It appears that getting the image at that point is the bottleneck on LEGACY devices, since the new camera2 API simply wraps the old one and presumably creates so much overhead that the callback no longer occurs at 30fps. So if RenderScript is to work, you would need to create a Surface and find another way of grabbing the frames off of it.
Here is the kicker though... if you move back to the deprecated camera API, I would almost guarantee 30fps at whatever resolution you want. At least that is what I found on all of the devices I tested.

Related

(Camera2 API) Can I run 2 ImageReader instances of different configs at the same time?

I am modifying the TF Lite sample app (Java) for object detection. It has a live video feed that shows boxes around common objects and takes in ImageReader frames at 640*480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader: one for the low-quality video feed (used by TF Lite), and one for capturing full-quality still images? I also can't pin the second one to any Surface for user preview; the picture has to be captured in the background.
In this Medium article (https://link.medium.com/2oaIYoY58db) it says "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-res still images. Depending on the hardware, this 'switch' may take time, or it may be really quick.
In your case, I would run a preview capture session at maximum resolution, and shrink (resize) the frames to feed into TFLite when necessary.
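A minimal sketch of that approach, wrapped in a hypothetical helper; it assumes the full-resolution frame has already been decoded into a Bitmap and that box holds a detection rectangle in that Bitmap's coordinates (both names are mine, not from the question):

static Bitmap[] prepareFrames(Bitmap fullRes, Rect box) {
    // Downscaled copy for the TF Lite detector (the sample app uses 640*480):
    Bitmap detectorInput = Bitmap.createScaledBitmap(fullRes, 640, 480, true);
    // High-quality crop of the detected object, cut from the full-resolution frame:
    Bitmap crop = Bitmap.createBitmap(fullRes, box.left, box.top, box.width(), box.height());
    return new Bitmap[] { detectorInput, crop };
}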

Camera2 - most efficient way to obtain a still capture Bitmap

To start with the question: what is the most efficient way to initialize and use ImageReader with the camera2 api, knowing that I am always going to convert the capture into a Bitmap?
I'm playing around with the Android camera2 samples, and everything is working quite nicely. However, for my purposes I always need to perform some post-processing on captured still images, for which I require a Bitmap object. Presently I am using BitmapFactory.decodeByteArray(...) with the bytes coming from ImageReader.acquireNextImage().getPlanes()[0].getBuffer() (I'm paraphrasing). While this works acceptably, I still feel there should be a way to improve performance. The captures are encoded as ImageFormat.JPEG and need to be decoded again to get the Bitmap, which seems redundant. Ideally I'd obtain them in PixelFormat.RGB_888 and just copy that into a Bitmap using Bitmap.copyPixelsFromBuffer(...), but initializing an ImageReader with that format doesn't seem to have reliable device support. YUV_420_888 could be another option, but looking around SO it seems to require jumping through some hoops to decode into a Bitmap. Is there a recommended way to do this?
The question is what you are optimizing for.
JPEG is without doubt the easiest format, supported by all devices. Decoding it to a Bitmap is not as redundant as it seems, because encoding the picture into JPEG is usually performed by dedicated hardware. This means it uses minimal bandwidth to transmit the image from the sensor to your application; on some devices it is the only way to get maximum resolution. BitmapFactory.decodeByteArray(...) is often backed by a special hardware decoder, too. The major problem with that call is that it may cause an out-of-memory exception because the output bitmap is too big. That is why you will find many examples that do subsampled decoding, tuned for the use case where the bitmap must be displayed on the phone screen.
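As an illustration (not from the original answer), a subsampled decode looks roughly like this; jpegBytes stands for the JPEG buffer pulled from the ImageReader, and inSampleSize trades resolution for memory:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;   // decode at 1/4 of the width and height
Bitmap decoded = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length, options);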
If your device supports the required resolution with RGB_8888, go for it: this needs minimal post-processing. But scaling such an image down may be more CPU-intensive than dealing with JPEG, and memory consumption may be huge. Anyway, only a few devices support this format for camera capture.
As for YUV_420_888 and other YUV formats, the advantages over JPEG are even smaller than for RGB.
If you need the best image quality and don't have memory limitations, you should go for RAW images, which are supported on most high-end devices these days. You will need your own conversion algorithm, and will probably have to make different adaptations for different devices, but at least you will have full command over picture acquisition.
After a while I now sort of have an answer to my own question, albeit not a very satisfying one. After much consideration I attempted the following (a rough sketch follows the list):
1. Set up a ScriptIntrinsicYuvToRGB RenderScript of the desired output size
2. Take the Surface of the input Allocation that is used, and set it as the target surface for the still capture
3. Run this RenderScript when a new Allocation is available and convert the resulting bytes to a Bitmap
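A rough sketch of that setup, under the assumption of a fixed width/height and an existing context; the variable names are mine, not from the answer:

RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation input = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

Type rgbType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height)
        .create();
Allocation output = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);

Surface captureSurface = input.getSurface();   // add this as a target of the capture request

// When the allocation signals that a new buffer has arrived:
input.ioReceive();
yuvToRgb.setInput(input);
yuvToRgb.forEach(output);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
output.copyTo(bitmap);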
This actually worked like a charm and was super fast. Then I started noticing weird behavior from the camera, which happened on other devices as well. As it turns out, the camera HAL doesn't really recognize this as a still capture. This means that (a) the flash / exposure routines don't fire when they need to, and (b) if you have initiated a precapture sequence before your capture, auto-exposure will remain locked unless you manage to unlock it using AE_PRECAPTURE_TRIGGER_CANCEL (API >= 23) or some other lock / unlock magic, which I couldn't get to work on either device. Unless you're fine with this only working in optimal lighting conditions where no exposure adjustment is necessary, this approach is entirely useless.
I have one more idea, which is to set up an ImageReader with YUV_420_888 output and incorporate the conversion routine from this answer to get RGB pixels from it. However, I'm actually working with Xamarin.Android, and RenderScript user scripts are not supported there. I may be able to hack around that, but it's far from trivial.
For my particular use case I have managed to speed up JPEG decoding to acceptable levels by carefully arranging background tasks with subsampled decodes of the versions I need at multiple stages of my processing, so implementing this likely won't be worth my time any time soon. If anyone is looking for ideas on how to approach something similar though, that's what you could do.
Change the ImageReader instance to use a different ImageFormat, like this:
ImageReader.newInstance(width, height, ImageFormat.JPEG, 1)

Getting the RAW camera data on Android?

I want to work with the bit array of each incoming frame, basically tapping into the YUV format received from the camera sensor for each frame and doing some processing on it.
I'm new to Java/Android and learning as I go, so some of my questions are pretty basic, but I couldn't find any answers that suit my needs.
Q1: How do I get a bit array of each frame received by the camera sensor? (how do I save the YUV byte stream for further use?)
Q2: How do I set things up so that each new frame received provides a new data array for processing?
Q3: Do I have to set a preview to do that, or can I tap straight into a buffer holding the raw data from the open camera?
Q4: Will a preview slow down the process (of receiving new frames)?
Some further explanation if needed: the idea is to create one-way communication between a flickering LED light and a smartphone. By pointing the phone's camera at the LED, a real-time process will register the slight changes and decode them back to the originally sent data. To do so, I plan to receive the YUV data for each frame, strip it down to the Y part, and decide for each frame whether the light is on or off.
Yes, that's the Camera API. Android API level 21 and newer support the camera2 API, which can give you faster response, but this depends on the device. I would still recommend using the deprecated older API if your goal is maximum reach.
Usually, the Android camera produces the NV21 format, from which it is very easy to extract the 8bpp luminance.
Android requires a live preview if you want to capture camera frames. There are quite a few workarounds to keep the preview hidden from the end user, but this is not supported, and any such trick may fail on the next device or the next system upgrade. But don't worry: the live preview does not delay your processing at all, because it is handled in a separate hardware channel.
All in all, you can expect to receive 30 fps on an average device when you use Camera.setPreviewCallbackWithBuffer() and do everything correctly. High-end devices with a full implementation of the camera2 API may deliver higher frame rates. Samsung also publishes its own camera SDK; use it if you need special features of Samsung devices.
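A hedged sketch of that buffered callback path; the buffer size assumes the default NV21 preview format (12 bits per pixel), and mCamera stands for an already opened Camera object:

Camera.Size size = mCamera.getParameters().getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // process the NV21 frame in "data", e.g. hand it off to a worker thread...
        camera.addCallbackBuffer(data);   // return the buffer so it can be reused
    }
});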
On a multi-core device, you can offload image processing to a thread pool, but the frame rate will still most likely be limited by the camera hardware.
Note that you can perform some limited image processing on the GPU by applying shaders to the texture acquired from the camera.
Assuming you have done the basics and have a camera preview and a Camera object, you can call:
Camera.PreviewCallback callback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Do your processing here, using the byte[] called "data"
    }
};
And then:
mCamera.setPreviewCallback(callback);
If Camera.Parameters.setPreviewFormat() is never called, the default preview format is NV21.

Real-time image processing with Camera2

I tried searching in a ton of places about doing this, with no results. I did read that the only way (as far as I know) to obtain image frames is to use an ImageReader, which gives me an Image to work with. However, a lot of work must be done before I have a nice enough image (converting the Image to a byte array, then converting between formats - YUV_420_888 to ARGB_8888 - using RenderScript, then turning it into a Bitmap and rotating it manually, or running the application in landscape mode). By this point a lot of processing has been done, and I haven't even started the actual processing yet (I plan on running some native code on it). Additionally, I tried to lower the resolution, with no success, and there is a significant delay when drawing on the surface.
Is there a better approach to this? Any help would be greatly appreciated.
I'm not sure what exactly you are doing with the images, but a lot of the time only a grayscale image is actually needed (again, depending on your exact goal). If your camera outputs YUV, the grayscale information is in the Y channel. The nice thing is that you don't need to convert between numerous colorspaces, and working with only one layer (as opposed to three) greatly decreases the size of your data set.
If you need color images, then this wouldn't help.

Real Time Image Processing in Android using the NDK

Using an Android (2.3.3) phone, I can use the camera to retrieve a preview with the onPreviewFrame(byte[] data, Camera camera) method to get the YUV image.
For some image processing, I need to convert this data to an RGB image and show it on the device. Using the basic Java / Android method, this runs at a horrible rate of less than 5 fps...
Now, using the NDK, I want to speed things up. The problem is: How do I convert the YUV array to an RGB array in C? And is there a way to display it (using OpenGL perhaps?) in the native code? Real-time should be possible (the Qualcomm AR demos showed us that).
I cannot use the setTargetDisplay and put an overlay on it!
I know Java, recently started with the Android SDK, and have zero experience in C.
Have you considered using OpenCV's Android port? It can do a lot more than just color conversion, and it's quite fast.
A Google search returned this page for a C implementation of YUV->RGB565. The author even included the JNI wrapper for it.
You can also succeed by staying with Java. I did this for the image detection in the androangelo app.
I used the sample code which you can find here by searching for "decodeYUV".
For processing the frames, the essential part to consider is the image size. Depending on the device you may get quite large images; for the Galaxy S2, for example, the smallest supported preview size is 640*480, which is a lot of pixels.
What I did is to use only every second row and every second column after YUV-to-RGB decoding. So I process a 320*240 image, which works quite well and allowed me to get frame rates of 20fps (including some noise reduction, a color conversion from RGB to HSV, and a circle detection).
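A hedged sketch of that decimation step (my own helper, assuming the decoded frame is an ARGB int[] in row-major order):

static int[] halfSize(int[] rgb, int width, int height) {
    int outW = width / 2, outH = height / 2;
    int[] out = new int[outW * outH];
    for (int y = 0; y < outH; y++) {
        for (int x = 0; x < outW; x++) {
            // keep every second row and every second column
            out[y * outW + x] = rgb[(2 * y) * width + (2 * x)];
        }
    }
    return out;
}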
In addition, you should carefully check the size of the image buffer provided to the setPreview function. If it is too small, garbage collection will spoil everything.
For the result, you can check the calibration screen of the androangelo app, where I had an overlay of the detected image over the camera preview.
