CameraX ImageAnalysis.Analyzer frame rate drops when converting to bitmap - android

I am using CameraX ImageAnalysis.Analyzer to do face detection. However, the Image object is too large to process efficiently with FirebaseVisionImage.fromMediaImage, so I tried converting the image to a bitmap and then rescaling and rotating it based on the camera rotation.
The performance still lags badly, and the bottleneck is the Bitmap scaling operation:
FirebaseVisionImage.fromMediaImage takes 120 ms to convert a 640x480 image.
FirebaseVisionImage.fromBitmap takes 30 ms to convert a 640x480 image (but with the wrong rotation, so no face detection).
FirebaseVisionImage.fromBitmap with scaling takes 90 ms to produce a 480x360 image.
My question is how to optimize this process so that I can get a 480x360 bitmap and improve performance.
I used the following implementation to convert the Image to a bitmap:
https://github.com/xizhang/camerax-gpuimage/blob/master/app/src/main/java/com/appinmotion/gpuimage/YuvToRgbConverter.kt
My code:
long start = System.currentTimeMillis();
// Allocate an ARGB bitmap matching the analyzer frame size
Bitmap bitmapImage2 = Bitmap.createBitmap(
        imageX.getWidth(), imageX.getHeight(), Bitmap.Config.ARGB_8888);
// Fill the bitmap with the RGB conversion of the YUV frame
converter.yuvToRgb(imageX, bitmapImage2);
// Rotate to the camera orientation and scale the long edge down to 480 px
bitmapImage2 = BitmapUtils.rotateScaleBitmap(bitmapImage2, rotation, 1, 480);
FirebaseVisionImage image =
        FirebaseVisionImage.fromBitmap(bitmapImage2);
Log.d("myApp", "" + (System.currentTimeMillis() - start));

Related

Android Camera X ImageAnalyzer Image Format for TFLite

I am attempting to analyze camera preview frames with a tflite model, using the CameraX api.
This documentation describes using the ImageAnalyzer to process incoming frames. Currently the frames come in as YUV, and I'm not sure how to pass YUV image data to a tflite model that's expecting an input of shape (BATCH x WIDTH x HEIGHT x 3). In the old APIs you could specify the preview output format and change it to RGB; however, this page specifically says "CameraX produces images in YUV_420_888 format."
First, I'm hoping someone has found a way to pass RGB to the Analyzer rather than YUV; secondly, if not, could someone suggest a way of passing a YUV image to a TFLite interpreter? The incoming image object is of type ImageProxy and it has 3 planes: Y, U, and V.
AFAIK, the ImageAnalysis use case only provides images in the YUV_420_888 format (you can see it defined here).
The official CameraX documentation provides a way to convert YUV images to RGB bitmaps; it's at the bottom of this section.
For sample code that shows how to convert a Media.Image object from
YUV_420_888 format to an RGB Bitmap object, see YuvToRgbConverter.kt.
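For illustration, a minimal Kotlin sketch of using that converter inside an analyzer, assuming the YuvToRgbConverter class from the linked sample has been copied into the project:

// Sketch: convert each YUV_420_888 frame to an RGB bitmap with the
// sample's YuvToRgbConverter, then hand the bitmap to the interpreter.
val converter = YuvToRgbConverter(context)
imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer { imageProxy ->
    imageProxy.image?.let { mediaImage ->
        val bitmap = Bitmap.createBitmap(
            imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888)
        converter.yuvToRgb(mediaImage, bitmap) // fills the bitmap in place
        // feed bitmap to the TFLite interpreter here
    }
    imageProxy.close()
})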
For anyone running into this now:
The ImageAnalysis use case now provides images in RGBA_8888 as well as YUV_420_888, and RGBA_8888 is supported by the TFLite interpreter.
Usage:
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .setTargetRotation(viewFinder.display.rotation)
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_BLOCK_PRODUCER)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
    .build()

// bitmapBuffer and imageRotationDegrees are class-level properties:
// private lateinit var bitmapBuffer: Bitmap
// private var imageRotationDegrees = 0
imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer { image ->
    if (!::bitmapBuffer.isInitialized) {
        // The image rotation and RGB image buffer are initialized only once
        // the analyzer has started running
        imageRotationDegrees = image.imageInfo.rotationDegrees
        bitmapBuffer = Bitmap.createBitmap(
            image.width, image.height, Bitmap.Config.ARGB_8888)
    }
    // Copy out RGB bits to our shared buffer; use {} also closes the image,
    // so no separate close() call is needed
    image.use { bitmapBuffer.copyPixelsFromBuffer(image.planes[0].buffer) }

    val imageProcessor = ImageProcessor.Builder()
        .add(Rot90Op(-imageRotationDegrees / 90))
        .build()
    // Preprocess the image and convert it into a TensorImage for detection
    val tensorImage = imageProcessor.process(TensorImage.fromBitmap(bitmapBuffer))
    val results = objectDetector?.detect(tensorImage)
})
Check the official sample app for more details: https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android_play_services/app/src/main/java/org/tensorflow/lite/examples/objectdetection/fragments/CameraFragment.kt

Bitmap.createBitmap has all pixels set to 0

I am trying to create a bitmap from an int array, but the resulting bitmap is all 0.
Scenario
I have an app which takes a raw image and then does some post-processing on it. After this processing, I am trying to save the resulting byte array as a JPEG image, and this is where I am converting the array to a bitmap so that it can be saved as a JPG.
int [] rgbArray = getColor( returnValue ); // converts the image to an int array
Bitmap image = Bitmap.createBitmap( rgbArray, 1008, 758, Bitmap.Config.ARGB_8888 );
But the image has every pixel set to 0 and the resulting JPEG is all black.
I wonder if you can give any pointers as to why this should be the case?
I tried printing out the various getters on the Bitmap to see if they give any clue:
Bitmap getHeight() 758
Bitmap getWidth() 1008
Bitmap describeContents() 0
Bitmap getByteCount() 3056256
Bitmap getConfig() ARGB_8888
Bitmap isRecycled() false
Bitmap isPremultiplied() true
Bitmap hasAlpha() true
Bitmap getColorSpace() sRGB IEC61966-2.1 (id=0, model=RGB)
I am new to android programming, so any pointers would be extremely helpful. Thanks in advance.
Update
If I call setPremultiplied(false) on the Bitmap, it works fine. I guess my immediate problem is solved by setting premultiplied to false.
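A likely explanation (an assumption, since getColor() isn't shown): the array is packed as 0x00RRGGBB with the alpha byte left at zero, and on a premultiplied ARGB_8888 bitmap every color channel is multiplied by that zero alpha, producing black. A Kotlin sketch of the alternative fix, forcing every pixel opaque before creating the bitmap:

// Sketch: set alpha to 0xFF so premultiplication no longer zeroes the
// color channels. Assumes rgbArray holds 0x00RRGGBB values.
for (i in rgbArray.indices) {
    rgbArray[i] = rgbArray[i] or 0xFF000000.toInt()
}
val image = Bitmap.createBitmap(rgbArray, 1008, 758, Bitmap.Config.ARGB_8888)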

Android: Display camera still capture on TextureView Quickly?

I'm using the Android Camera2 API to take still capture images and displaying them on a TextureView (for later image editing).
I have been scouring the web for a faster method to:
Decode Camera Image buffer into bitmap
Scale bitmap to size of screen and rotate it (since it comes in rotated 90 degrees)
Display it on a texture view
Currently I've managed an execution time of around 0.8s for the above, but this is too long for my particular application.
A few solutions I've considered were:
Simply taking a single frame of the preview (timing-wise this was fast, except that I had no control over auto flash)
Trying to get instead a YUV_420_888 formatted image and then somehow turning that into a bitmap (there's a lot of stuff online that might help but my initial attempts bore no fruit as of yet)
Simply sending a reduced quality image from the camera itself, but from what I've read it looks like the JPEG_QUALITY parameter in CaptureRequests does nothing! I've also tried setting BitmapFactory options inSampleSize but without any noticeable improvement in speed.
Finding some way to directly manipulate the jpeg byte array from image buffer to transform it and then converting to bitmap, all in one shot
For your reference, the following code takes the image buffer, decodes and transforms it, and displays it on the textureview:
Canvas canvas = mTextureView.lockCanvas();
// obtain image bytes (jpeg) from image in camera fragment
// mFragment.getImage() returns Image object
ByteBuffer buffer = mFragment.getImage().getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// decoding process takes several hundred milliseconds
Bitmap src = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
mFragment.getImage().close();

// resize horizontally oriented images
if (src.getWidth() > src.getHeight()) {
    // transformation matrix that scales and rotates
    Matrix matrix = new Matrix();
    if (CameraLayout.getFace() == CameraCharacteristics.LENS_FACING_FRONT) {
        matrix.setScale(-1, 1);
    }
    matrix.postRotate(90);
    matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
            ((float) canvas.getHeight()) / src.getWidth());
    // bitmap creation process takes another several hundred millis!
    Bitmap resizedBitmap = Bitmap.createBitmap(
            src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
    canvas.drawBitmap(resizedBitmap, 0, 0, null);
} else {
    canvas.drawBitmap(src, 0, 0, null);
}
// post canvas to texture view
mTextureView.unlockCanvasAndPost(canvas);
This is my first question on stack overflow, so I apologize if I haven't quite followed common conventions.
Thanks in advance for any feedback.
If all you're doing with this is drawing it into a View, and you won't be saving it, have you tried simply requesting JPEGs at a lower resolution than the maximum, one that matches the screen dimensions better?
Alternatively, if you need the full-size image, JPEG images typically contain a thumbnail - extracting that and displaying it is a lot faster than processing the full-resolution image.
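For illustration, a Kotlin sketch of the thumbnail route, assuming the androidx ExifInterface library is on the classpath (note the embedded thumbnail can be absent on some devices):

// Sketch: pull the embedded EXIF thumbnail out of the JPEG bytes instead
// of decoding the full-resolution image. Returns null if none exists.
fun decodeJpegThumbnail(jpegBytes: ByteArray): Bitmap? {
    val exif = ExifInterface(ByteArrayInputStream(jpegBytes))
    val thumb = exif.thumbnail ?: return null
    return BitmapFactory.decodeByteArray(thumb, 0, thumb.size)
}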
In terms of your current code, if possible, you should avoid having to create a second Bitmap with the scaling. Could you instead place an ImageView on top of your TextureView when you want to display the image, and then rely on its built-in scaling?
Or use Canvas.concat(Matrix) instead of creating the intermediate Bitmap?
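A Kotlin sketch of that Canvas.concat variant, reusing the rotation and scale from the question's code (the front-camera flip is omitted for brevity), so no intermediate bitmap is allocated:

// Sketch: apply the transform on the canvas itself; subsequent draw
// calls are transformed, so no second Bitmap is created.
val canvas = mTextureView.lockCanvas()
val matrix = Matrix().apply {
    postRotate(90f)
    postScale(canvas.width.toFloat() / src.height,
              canvas.height.toFloat() / src.width)
}
canvas.concat(matrix)
canvas.drawBitmap(src, 0f, 0f, null)
mTextureView.unlockCanvasAndPost(canvas)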

How do I efficiently modify captured bytes from the camera?

I'm drawing an overlay on an image from the camera and saving the result to a file. To do this, I am passing a callback containing the code below to takePicture(). With larger image sizes, I am getting crashes with an OutOfMemoryError at the first line of the method.
Is there any way I can do this more efficiently? It seems that it's not possible to make a mutable Bitmap from the byte[], which doubles my memory usage immediately. If it can't be done this way at high resolutions, how can I produce an overlay on a large captured image without running out of memory?
public void onPictureTaken(byte[] rawPlainImage, Camera camera) {
    Bitmap plainImage = BitmapFactory.decodeByteArray(rawPlainImage, 0, rawPlainImage.length);
    plainImage = plainImage.copy(plainImage.getConfig(), true);
    Canvas combinedImage = new Canvas(plainImage);
    combinedImage.drawBitmap(mOverlay, mOverlayTransformation, null);
    // Write plainImage (now modified) out to a file
    plainImage.recycle();
}
You don't actually need to decode the image. Instead, draw the overlay onto its own canvas, save that canvas as a bitmap, convert the bitmap to a byte array, then combine it with the captured image's byte array and save the result.
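On the specific claim that a mutable Bitmap can't be made from the byte[]: BitmapFactory.Options.inMutable (API 11+) decodes directly into a mutable bitmap, which avoids the decode-then-copy doubling. A Kotlin sketch reusing the question's fields:

// Sketch: decode straight to a mutable bitmap so no copy is needed.
val options = BitmapFactory.Options().apply { inMutable = true }
val plainImage = BitmapFactory.decodeByteArray(
    rawPlainImage, 0, rawPlainImage.size, options)
Canvas(plainImage).drawBitmap(mOverlay, mOverlayTransformation, null)
// write plainImage (now modified) out to a file, then recycle it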

Android YuvImage class format incorrect?

It's well documented that Android's camera preview data is returned in NV21 (YUV 420). Android 2.2 added a YuvImage class for decoding the data. The problem I've encountered is that the YuvImage class data appears corrupt or incorrect. I used the RenderScript sample app called HelloCompute, which transforms a Bitmap into a mono-chrome Bitmap. I used two methods for decoding the preview data into a Bitmap and passing it as input to the RenderScript:
Method 1 - Android YuvImage Class:
YuvImage preview = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream mJpegOutput = new ByteArrayOutputStream(data.length);
preview.compressToJpeg(new Rect(0, 0, width, height), 100, mJpegOutput);
mBitmapIn = BitmapFactory.decodeByteArray(mJpegOutput.toByteArray(), 0, mJpegOutput.size());
// pass mBitmapIn to RS
Method 2 - Posted Decoder Method:
As posted over here by David Pearlman
// work-around for YUV format
mBitmapIn = Bitmap.createBitmap(
        ImageUtil.decodeYUV420SP(data, width, height),
        width,
        height,
        Bitmap.Config.ARGB_8888);
// pass mBitmapIn to RS
When the image is processed by the RenderScript and displayed, Method 1 is very grainy and not mono-chrome, while Method 2 produces the expected output: a mono-chrome image of the preview frame. Am I doing something wrong or is the YuvImage class not usable? I'm testing this on a Xoom running 3.1.
Furthermore, I displayed the bitmaps produced by both methods on screen prior to passing them to the RS. The bitmap from Method 1 has noticeable differences in lighting (I suspect this was due to the JPEG compression), while Method 2's bitmap is identical to the preview frame.
There is no justification for using JPEG encode/decode just to convert a YUV image to a grayscale bitmap (I believe you want grayscale, not a monochrome b/w bitmap after all). You can find many code samples that produce the result you need. You may use this one: Converting preview frame to bitmap.
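Since grayscale needs only the luma plane, the conversion can be a direct loop over the first width*height bytes of the NV21 buffer. A minimal Kotlin sketch (not the linked sample's code):

// Sketch: NV21's first width*height bytes are the Y (luma) plane; copying
// each Y value into all three RGB channels yields a grayscale bitmap.
fun nv21ToGrayscale(data: ByteArray, width: Int, height: Int): Bitmap {
    val pixels = IntArray(width * height)
    for (i in pixels.indices) {
        val y = data[i].toInt() and 0xFF
        pixels[i] = (0xFF shl 24) or (y shl 16) or (y shl 8) or y
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888)
}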
