How to most efficiently use and apply Android CameraX ImageAnalysis setTargetRotation

Similar to what is laid out in the tutorial, I am initializing CameraX's ImageAnalysis use case with this code:
ImageAnalysis imageAnalysis =
        new ImageAnalysis.Builder()
                .setTargetResolution(new Size(1280, 720))
                .setTargetRotation(Surface.ROTATION_90)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();

imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        int rotationDegrees = image.getImageInfo().getRotationDegrees();
        // insert your code here.
    }
});

cameraProvider.bindToLifecycle((LifecycleOwner) this, cameraSelector, imageAnalysis, preview);
I am trying to use the setTargetRotation method, but I am not clear on how I am supposed to apply this rotation to the output image, which is only vaguely described in the docs:
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
If I set a breakpoint in the analyze() method shown above, the image object does not get rotated when changing the setTargetRotation value, so I assume the docs are telling me to grab the orientation with getTargetRotation(), in the sense that these two pieces of code (builder vs. analyzer) are written separately and this information can be passed between the two without actually applying any rotation. Did I understand this correctly? This really doesn't make sense to me, because setTargetResolution actually changes the size sent via the ImageProxy. I'd think setTargetRotation should also apply said rotation, but it appears not.
If my understanding is correct, is there an efficient way to rotate these ImageProxy objects after entering the analyze method? Right now I'm doing it after converting to a Bitmap, via
Bitmap myImg = BitmapFactory.decodeStream(someInputStream);
Matrix matrix = new Matrix();
matrix.postRotate(30);
Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),
        matrix, true);
The above idea came from here, but I'd think this is not the most efficient way to do it. I'm sure I could also come up with a way to transpose the arrays, but this could get tedious and messy quickly. Isn't there any way to set up the ImageAnalysis builder to send the rotated ImageProxy directly, rather than having to make a bitmap of everything?

The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
An example to understand the definition: assume the target rotation matches the device's orientation. Applying the returned rotation to the output image will result in the image being upright, i.e. matching the device's orientation. So if the device is in its natural portrait position, the rotated output image will also be in that orientation.
Output image + Rotated by rotation value --> Upright output image
CameraX's documentation includes a section about rotations, since it can be a confusing topic. You can check it out here.
Going back to your question about setTargetRotation and the ImageAnalysis use case: it isn't meant to rotate the images passed to the Analyzer, but it does affect the rotation information of the images, i.e. ImageProxy.getImageInfo().getRotationDegrees(). Transforming images (rotating, cropping, etc.) can be an expensive operation, so CameraX does not perform any modification to the analysis frames; instead it provides the required metadata to make sense of the output images, metadata that can then be used with image processors that analyze the frames.
If you need to rotate each analysis frame, using the Bitmap approach is one way, but it can be costly. Another more performant way may be to do it in native code.
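For illustration, here is a minimal sketch of that Bitmap route inside the analyzer. The toBitmap() helper is a hypothetical YUV-to-Bitmap converter you would have to supply; the important part is that the rotation applied is exactly the value reported by getRotationDegrees():

imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        // Rotation metadata supplied by CameraX; the frame pixels themselves are NOT rotated.
        int rotationDegrees = image.getImageInfo().getRotationDegrees();

        // Hypothetical helper that converts the YUV_420_888 frame to a Bitmap.
        Bitmap frame = toBitmap(image);

        // Apply the reported rotation so the result matches the target rotation.
        Matrix matrix = new Matrix();
        matrix.postRotate(rotationDegrees);
        Bitmap upright = Bitmap.createBitmap(
                frame, 0, 0, frame.getWidth(), frame.getHeight(), matrix, true);

        // ... analyze 'upright' ...

        image.close(); // always close the ImageProxy when done
    }
});

If your downstream processor accepts rotation metadata (as many image-processing libraries do), passing rotationDegrees along instead of rotating the pixels avoids this cost entirely.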

Related

Best usage of CameraX for MLKit Text Recognition on Android

I need to implement text recognition using MLKit on Android, and I have decided to use the new CameraX API as the camera lib. I am struggling with the correct "pipeline" of classes or data flow of the image, because CameraX is quite new and not many resources are out there. The use case is that I take the picture, crop it in the middle by some bounds that are visible in the UI, and then pass this cropped image to MLKit, which will process it.
Given that, is there some place for the ImageAnalysis.Analyzer API? From my understanding, this analyzer is used only for previews and not for the captured image.
My first idea was to use the takePicture method that accepts an OnImageCapturedCallback, but when I tried to access e.g. ImageProxy.height, the app crashed with the exception java.lang.IllegalStateException: Image is already closed, and I could not find any fix for that.
Then I decided to use another overload of the takePicture method, and now I save the image to a file, read it into a Bitmap, crop it, and end up with an image that can be passed to MLKit. But when I take a look at the FirebaseVisionImage that is passed to FirebaseVisionTextRecognizer, it has a factory method to which I can pass the Image that I get from OnImageCapturedCallback, which suggests I am doing some unnecessary steps.
So my questions are:
Is there some class (CaptureProcessor?) that will take care of cropping the taken image? I suppose that then I could use OnImageCapturedCallback, where I would receive an already cropped image.
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing and am only doing post-processing?
I suppose I can achieve what I want with my current approach, but I feel that I could be using much more of CameraX than I currently am.
Thanks!
Is there some class (CaptureProcessor?) that will take care of cropping the taken image?
You can set the crop aspect ratio after you build the ImageCapture use case by using the setCropAspectRatio(Rational) method. This method crops from the center of the rotated output image. So basically what you'd get back after calling takePicture() is what I think you're asking for.
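As a rough sketch of what that could look like (the 1:1 ratio here is just an example value, not something from the question):

// Build the capture use case as usual.
ImageCapture imageCapture = new ImageCapture.Builder()
        .setTargetRotation(Surface.ROTATION_0)
        .build();

// Crop the output from the center of the rotated image to the given aspect ratio.
imageCapture.setCropAspectRatio(new Rational(1, 1));

// The ImageProxy delivered to takePicture()'s callback is then already center-cropped.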
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing and am only doing post-processing?
No, it wouldn't make sense in your scenario. As you mentioned, only when doing real-time image processing would you want to use ImageAnalysis.Analyzer.
ps: I'd be interested in seeing the code you use for takePicture() that caused the IllegalStateException.
[Edit]
Taking a look at your code:
imageCapture?.takePicture(executor, object : ImageCapture.OnImageCapturedCallback() {
    override fun onCaptureSuccess(image: ImageProxy) {
        // 1
        super.onCaptureSuccess(image)
        // 2
        Log.d("MainActivity", "Image captured: ${image.width}x${image.height}")
    }
})
At (1), if you take a look at super.onCaptureSuccess(imageProxy)'s implementation, it actually closes the imageProxy passed to the method. Accessing the image's width and height at (2) then throws an exception, which is normal, since the image has been closed. The documentation states:
The application is responsible for calling ImageProxy.close() to close the image.
So when using this callback, you should probably not call super...; just use the imageProxy, and then, before returning from the method, close it manually with ImageProxy.close().
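A minimal sketch of the corrected pattern (shown here in Java; the Kotlin version is analogous):

imageCapture.takePicture(executor, new ImageCapture.OnImageCapturedCallback() {
    @Override
    public void onCaptureSuccess(@NonNull ImageProxy image) {
        // Do NOT call super.onCaptureSuccess(image); it would close the proxy.
        Log.d("MainActivity", "Image captured: " + image.getWidth() + "x" + image.getHeight());

        // ... use the image, e.g. hand it to MLKit ...

        // Close the proxy yourself once you are done with it.
        image.close();
    }
});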

Android Camera2 take picture while processing frames

I am using the Camera2 API to create a camera component that can scan barcodes and is able to take pictures during scanning. It is kind of working, but the preview is flickering: it seems like previous frames, and sometimes green frames, are interrupting the real-time preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as a new output and target for the CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. The flickering disappears when I remove the JPEG reader's surface from the outputs (i.e. not passing it into createCaptureSession).
There's quite a lot of code, so I created a gist: click. I tried to get rid of completely irrelevant code.
Is the device you're testing on a LEGACY-level device?
If so, any captures targeting a JPEG output may be much slower since they can run a precapture sequence, and may briefly pause preview as well.
But it should not cause green frames, unless there's a device-level bug.
If anyone ever struggles with this: there is a table in the docs showing that if three targets are specified, the YUV ImageReader can only use images with a maximum size equal to the preview size (at most 1920x1080). Reducing this helped!
Yes you can. Assuming that you configure your preview to feed the ImageReader with YUV frames (because you could also put JPEG there, check it out), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
You can process those frames inside your OnImageAvailableListener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image mImage = reader.acquireNextImage();
    if (mImage == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(mImage);
        mImage.close();
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    }
}
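For completeness, a rough sketch of how the YUV reader is typically wired into the repeating preview request alongside the preview and JPEG surfaces (mPreviewSurface, mJpegReader and the callback/handler names are assumptions, not code from the question):

// The session is created with every surface that will ever be used as a target.
mCameraDevice.createCaptureSession(
        Arrays.asList(mPreviewSurface, mImageReaderPreview.getSurface(), mJpegReader.getSurface()),
        mSessionStateCallback, mBackgroundHandler);

// The repeating (preview) request targets only the preview surface and the YUV reader;
// the JPEG reader is added as a target only in the one-shot still-capture request.
CaptureRequest.Builder previewBuilder =
        mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(mPreviewSurface);
previewBuilder.addTarget(mImageReaderPreview.getSurface());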

Get detected face bitmap

I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample is using the Play Service new Face detection APIs, and draws a square on detected faces on the camera video stream.
I'm trying to figure out if it is possible to save the frames that have detected faces in them. From following the code, it seems that the face detector's processor is a good place to perform the 'saving', but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.
You can get it in the following way:
Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
// detect faces
Bitmap faceBitmap = Bitmap.createBitmap(source,
        (int) face.getPosition().x,
        (int) face.getPosition().y,
        (int) face.getWidth(),
        (int) face.getHeight());
Yes, it is possible. I answered a question about getting frames from CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Then, having the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in the draw() method, because draw() is called only when faces are detected.
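A rough sketch of that Frame-to-Bitmap conversion, assuming the frame data is NV21 (which is what CameraSource delivers); treat it as an illustration rather than the exact code from the linked answer:

public static Bitmap frameToBitmap(Frame frame) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();

    // Camera frames expose their pixel data as NV21 through this buffer.
    ByteBuffer buffer = frame.getGrayscaleImageData();
    byte[] nv21 = new byte[buffer.remaining()];
    buffer.get(nv21);

    // Compress NV21 to JPEG, then decode the JPEG into a Bitmap.
    YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, out);
    byte[] jpegBytes = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
}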

Image data from Android camera2 API flipped & squished on Galaxy S5

I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now-deprecated android.hardware.Camera; for improved flexibility and performance I'd like to use the new android.hardware.camera2 API. However, I'm having trouble getting the raw image data for processing. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware.)
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(class), the recommended surface to obtain image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(surfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried all of them (plugged in for 'format' above); all support the set resolution according to getOutputSizes(format), but none of them give the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result. I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled down significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears not to crop the image in any way, but instead scales it non-uniformly to reach the requested frame size. The images have the correct proportions when requesting a 16:9 size (of which there are unfortunately only two higher-res ones). This seems a bit odd to me: it doesn't appear to happen when requesting JPEG, with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats; instead you have to find the sensor size first, find formats/sizes with the same aspect ratio, and then downsample the image yourself (as needed)...
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
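A rough sketch of the workaround described above, i.e. picking a YUV output size whose aspect ratio matches the active sensor array (variable names follow the snippets above; the 0.01 tolerance is arbitrary):

Rect activeArray = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorAspect = (float) activeArray.width() / activeArray.height();

StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

// Pick the largest YUV_420_888 size whose aspect ratio matches the sensor's.
Size chosen = null;
for (Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    float aspect = (float) size.getWidth() / size.getHeight();
    if (Math.abs(aspect - sensorAspect) < 0.01f
            && (chosen == null || size.getWidth() > chosen.getWidth())) {
        chosen = size;
    }
}
mImageReader = ImageReader.newInstance(chosen.getWidth(), chosen.getHeight(),
        ImageFormat.YUV_420_888, 2);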
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then apply an up-scaling of the image when displaying it.
int[] rgba = new int[width * height];
//getImage(rgba);
nativeLoader.convertImage(width, height, data, rgba);

Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);

Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    //canvas.drawBitmap(bmp, 0, 0, null); //configureTransform(width, height), null);
    //canvas.drawBitmap(bmp, configureTransform(width, height), null);
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.

How to improve OpenCV face detection performance in android?

I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face detection runs in a service, which keeps working until all the images are processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        defaultBitmap = BitmapFactory.decodeFile(file.getPath(), bitmapFactoryOptions);

        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());

        Mat image = new Mat(defaultBitmap.getWidth(), defaultBitmap.getHeight(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);

        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }

        if (faceDetections.toArray().length > 0)
        {
        }
    }
}
Everything is fine, but it is detecting faces very slowly; the performance is poor. When I debugged the code, I found that the line taking the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem, but I didn't find any solution.
Please tell me what I should do to solve this problem.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be this fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e. detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in practice you should set it as high as possible while still getting "good" results, so it should be determined empirically.
Your value of 1.1 can be a good choice for this purpose. It means a relatively small step is used for resizing (the size is reduced by 10%), which increases the chance that a size matching the model is found. If your trained detector has the size 10x10, then you can detect faces of size 11x11, 12x12 and so on. But a factor of 1.1 requires roughly double the number of pyramid layers (and about twice the computation time) compared to 1.2.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again, until it cannot be resized further. In every iteration the true detections of the cascade classifier are kept, but unfortunately it also produces many false positives. To eliminate the false positives and get a proper face rectangle out of the detections, a neighbourhood approach is applied. 3-6 is a good value for it; if the value is too high, you can lose true positives too.
minSize – Related to the sliding-window approach described under minNeighbors, this is the smallest window the cascade can detect. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than this are ignored.
Finally, you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar classifiers, but also less accurate.
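Putting those suggestions together, a tuned call might look roughly like this (the concrete values are illustrative and should be tuned empirically for your images):

// The cascade works on single-channel images, so convert to grayscale first.
Mat gray = new Mat();
Imgproc.cvtColor(image, gray, Imgproc.COLOR_RGBA2GRAY);

MatOfRect faces = new MatOfRect();
mJavaDetector.detectMultiScale(
        gray,
        faces,
        1.2,                // scaleFactor: larger step -> fewer pyramid layers, faster
        4,                  // minNeighbors: 3-6 is a reasonable range
        0,
        new Size(gray.width() / 10, gray.height() / 10),  // ignore very small faces
        new Size(gray.width(), gray.height()));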
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading the images as a Bitmap and then converting them to a Mat with Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath); (you can check the imread() function here).
Also, the line below takes too much time because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
