I'm using the Android Camera2 API to take still captures and display them on a TextureView (for later image editing).
I have been scouring the web for a faster method to:
Decode Camera Image buffer into bitmap
Scale bitmap to size of screen and rotate it (since it comes in rotated 90 degrees)
Display it on a texture view
Currently I've managed an execution time of around 0.8s for the above, but this is too long for my particular application.
A few solutions I've considered:
Simply taking a single frame of the preview (timing-wise this was fast, except that I had no control over auto flash)
Trying to get a YUV_420_888-formatted image instead and then somehow turning that into a bitmap (there's plenty of material online that might help, but my initial attempts have borne no fruit so far)
Simply requesting a reduced-quality image from the camera itself, but from what I've read it looks like the JPEG_QUALITY parameter on CaptureRequest does nothing! I've also tried setting BitmapFactory.Options.inSampleSize, but without any noticeable improvement in speed.
Finding some way to directly manipulate the JPEG byte array from the image buffer to transform it and then convert it to a bitmap, all in one shot
For your reference, the following code takes the image buffer, decodes and transforms it, and displays it on the TextureView:
Canvas canvas = mTextureView.lockCanvas();
// obtain image bytes (jpeg) from image in camera fragment
// mFragment.getImage() returns Image object
ByteBuffer buffer = mFragment.getImage().getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// decoding process takes several hundred milliseconds
Bitmap src = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
mFragment.getImage().close();
// resize horizontally oriented images
if (src.getWidth() > src.getHeight()) {
    // transformation matrix that scales and rotates
    Matrix matrix = new Matrix();
    if (CameraLayout.getFace() == CameraCharacteristics.LENS_FACING_FRONT) {
        matrix.setScale(-1, 1);
    }
    matrix.postRotate(90);
    matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
            ((float) canvas.getHeight()) / src.getWidth());
    // bitmap creation process takes another several hundred millis!
    Bitmap resizedBitmap = Bitmap.createBitmap(
            src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
    canvas.drawBitmap(resizedBitmap, 0, 0, null);
} else {
    canvas.drawBitmap(src, 0, 0, null);
}
// post canvas to texture view
mTextureView.unlockCanvasAndPost(canvas);
This is my first question on Stack Overflow, so I apologize if I haven't quite followed common conventions.
Thanks in advance for any feedback.
If all you're doing with this is drawing it into a View, and you won't be saving it, have you tried simply requesting JPEGs at a lower resolution than the maximum, one that matches the screen dimensions better?
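A minimal sketch of what that could look like (screenWidth and screenHeight are assumed to come from your display metrics, and characteristics from your existing camera setup; the selection logic here is just one option):

// Query the supported JPEG output sizes and pick the smallest one that still
// covers the screen, instead of the sensor's maximum resolution.
StreamConfigurationMap map = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size target = null;
for (Size size : map.getOutputSizes(ImageFormat.JPEG)) {
    if (size.getWidth() >= screenWidth && size.getHeight() >= screenHeight
            && (target == null || size.getWidth() * size.getHeight()
                    < target.getWidth() * target.getHeight())) {
        target = size;
    }
}
// (fall back to the largest available size if nothing covers the screen)
// Use the chosen size for the still-capture ImageReader.
ImageReader reader = ImageReader.newInstance(
        target.getWidth(), target.getHeight(), ImageFormat.JPEG, 2);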
Alternatively, if you need the full-size image, JPEG images typically contain a thumbnail - extracting that and displaying it is a lot faster than processing the full-resolution image.
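A sketch of the thumbnail route, assuming the captured JPEG bytes are already in a byte[] called bytes as in your code (this uses the androidx ExifInterface, which can read from a stream; the constructor throws IOException):

// Pull the EXIF thumbnail out of the JPEG instead of decoding the full image.
ExifInterface exif = new ExifInterface(new ByteArrayInputStream(bytes));
if (exif.hasThumbnail()) {
    byte[] thumb = exif.getThumbnail();  // compressed thumbnail bytes
    Bitmap small = BitmapFactory.decodeByteArray(thumb, 0, thumb.length);
    // draw 'small' scaled up to the view instead of the full-size image
}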
In terms of your current code, if possible, you should avoid having to create a second Bitmap with the scaling. Could you instead place an ImageView on top of your TextureView when you want to display the image, and then rely on its built-in scaling?
Or use Canvas.concat(Matrix) instead of creating the intermediate Bitmap?
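For example, a sketch of your drawing path with the transform applied to the canvas instead (same matrix as yours, plus the translate that Bitmap.createBitmap previously did implicitly):

// Apply the rotate/scale at draw time; no intermediate Bitmap is allocated.
Canvas canvas = mTextureView.lockCanvas();
Matrix matrix = new Matrix();
matrix.postRotate(90);
// a 90-degree rotation about the origin moves the image to negative x,
// so translate it back on-screen before scaling
matrix.postTranslate(src.getHeight(), 0);
matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
        ((float) canvas.getHeight()) / src.getWidth());
canvas.concat(matrix);
canvas.drawBitmap(src, 0, 0, null);
mTextureView.unlockCanvasAndPost(canvas);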
I am using CameraX ImageAnalysis.Analyzer to do face detection. However, the Image object is too large to use with FirebaseVisionImage.fromMediaImage, so I tried converting the image to a bitmap and rescaling and rotating it based on the camera rotation.
The performance still lags badly, and this is because of the Bitmap scaling operation.
FirebaseVisionImage.fromMediaImage takes 120 ms to convert a 640x480 image.
FirebaseVisionImage.fromBitmap takes 30 ms to convert a 640x480 image (but with the wrong rotation, so no face detection).
FirebaseVisionImage.fromBitmap with scaling takes 90 ms to convert a 480x360 image.
My question is how to optimize this process so that I can get a 480x360 bitmap and improve performance.
I have used the following implementation to convert the Image to a bitmap:
https://github.com/xizhang/camerax-gpuimage/blob/master/app/src/main/java/com/appinmotion/gpuimage/YuvToRgbConverter.kt
My code.
Bitmap bitmapImage2 = Bitmap.createBitmap(
        imageX.getWidth(), imageX.getHeight(), Bitmap.Config.ARGB_8888);
ByteBuffer byteBuffer = converter.yuvToRgb(imageX, bitmapImage2);
bitmapImage2 = BitmapUtils.rotateScaleBitmap(bitmapImage2, rotation, 1, 480);
FirebaseVisionImage image =
        FirebaseVisionImage.fromBitmap(bitmapImage2);
Log.d("myApp", "" + (System.currentTimeMillis() - start));
I'm trying to adapt the code in the Google developer guides to resize a large image obtained over HTTP.
In order to resize the image, I have to process it once (using BitmapFactory.decodeStream) to determine its original height and width. Then, I have to run BitmapFactory.decodeStream again in order to resize it. The problem with this approach is that I cannot use the same stream twice.
If I do, the second call to decodeStream returns null.
I thought about trying to clone/copy the stream first so that I would have two copies to work with. However, this uses up memory, which was the problem I was trying to solve by resizing the image in the first place.
Just use the Bitmap returned by BitmapFactory.decodeStream() for the resize operation. You do not need to decode it twice. You already have it.
Bitmap b = BitmapFactory.decodeStream(/* your InputStream */);
// get original dimensions from b
int h = b.getHeight();
int w = b.getWidth();
// resize b to half size in each dimension (a quarter of the pixels)
Bitmap resizedBitmap = Bitmap.createScaledBitmap(b, w / 2, h / 2, false);
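If even the single full-size decode is what runs out of memory, a common variant from the developer guides (a sketch; buffering the compressed bytes is usually cheap compared to the decoded Bitmap) is to read the stream into a byte[] once and decode it twice, the first time with inJustDecodeBounds so no pixel memory is allocated:

// Buffer the compressed image once so it can effectively be "read" twice.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int n;
while ((n = inputStream.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
byte[] data = baos.toByteArray();

// Pass 1: bounds only; no pixel memory is allocated.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;
BitmapFactory.decodeByteArray(data, 0, data.length, opts);

// Pass 2: decode at reduced resolution, e.g. to fit roughly within 1024 px.
// (The decoder may round inSampleSize down to a power of two.)
opts.inSampleSize = Math.max(1, Math.max(opts.outWidth, opts.outHeight) / 1024);
opts.inJustDecodeBounds = false;
Bitmap scaled = BitmapFactory.decodeByteArray(data, 0, data.length, opts);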
I'm trying to merge 2 images; one is a bitmap from the camera, the second is a .png file stored in drawables. What I did was use both images as bitmaps and try to merge them using a canvas, something like this:
Bitmap topImage = BitmapFactory.decodeFile("gui.png");
Bitmap bottomImage = BitmapFactory.decodeByteArray(arg0, 0, arg0.length);
Canvas canvas = new Canvas(bottomImage);
canvas.drawBitmap(topImage, 0, 0, null);
But I keep getting a "Bitmap size exceeds VM budget" error all the time. I tried nearly everything, but it still keeps throwing this error. Is there another way of merging 2 images? What I need to do is simple: I need to take a photo and save it merged with that .PNG image stored in drawables. For example, this app is very close to what I need - https://play.google.com/store/apps/details?id=com.hl2.hud&feature=search_result#?t=W251bGwsMSwyLDEsImNvbS5obDIuaHVkIl0.
Thanks :)
See the code below for combining two images.
This method returns the combined bitmap:
public Bitmap combineImages(Bitmap frame, Bitmap image) {
    Bitmap cs = null;
    Bitmap rs = null;
    rs = Bitmap.createScaledBitmap(frame, image.getWidth() + 50,
            image.getHeight() + 50, true);
    cs = Bitmap.createBitmap(rs.getWidth(), rs.getHeight(),
            Bitmap.Config.RGB_565);
    Canvas comboImage = new Canvas(cs);
    comboImage.drawBitmap(image, 25, 25, null);
    comboImage.drawBitmap(rs, 0, 0, null);
    if (rs != null) {
        rs.recycle();
        rs = null;
    }
    Runtime.getRuntime().gc();
    return cs;
}
You can change the height and width as per your requirements.
Hope this will help...
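One caveat about your original code: BitmapFactory.decodeFile("gui.png") won't find an image bundled in res/drawable; resources are loaded with decodeResource. A rough usage sketch (R.drawable.gui is an assumed resource name):

// Load the overlay from resources rather than from a file path.
Bitmap overlay = BitmapFactory.decodeResource(getResources(), R.drawable.gui);
// Decode the camera callback bytes as before.
Bitmap photo = BitmapFactory.decodeByteArray(arg0, 0, arg0.length);
Bitmap combined = combineImages(overlay, photo);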
How large are the images? I've only encountered this problem when trying to load large images into memory.
Is the byte array you're decoding actually an image?
From a quick look at the Android docs, you can capture an image using the default camera app, which may work in this situation.
http://developer.android.com/training/camera/photobasics.html
Also see this question: Capture Image from Camera and Display in Activity
Edit: You may also need to scale the image from the camera down if it is very large. See the end of the Android page I linked to for details on that.
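For the default-camera-app route, a minimal sketch following the linked guide (REQUEST_IMAGE_CAPTURE is just an arbitrary request code; without an EXTRA_OUTPUT file the result is a small thumbnail in the "data" extra):

static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // Small thumbnail bitmap; pass EXTRA_OUTPUT in the intent for a full-size file.
        Bitmap imageBitmap = (Bitmap) data.getExtras().get("data");
    }
}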
I'm drawing an overlay on an image from the camera and saving the result to a file. To do this, I am passing a callback containing the code below to takePicture(). With larger image sizes, I am getting crashes with an OutOfMemoryError at the first line of the method.
Is there any way I can do this more efficiently? It seems that it's not possible to make a mutable Bitmap from the byte[], which doubles my memory usage immediately. If it can't be done this way at high resolutions, how can I produce an overlay on a large captured image without running out of memory?
public void onPictureTaken(byte[] rawPlainImage, Camera camera) {
    Bitmap plainImage = BitmapFactory.decodeByteArray(rawPlainImage, 0, rawPlainImage.length);
    plainImage = plainImage.copy(plainImage.getConfig(), true);
    Canvas combinedImage = new Canvas(plainImage);
    combinedImage.drawBitmap(mOverlay, mOverlayTransformation, null);
    // Write plainImage (now modified) out to a file
    plainImage.recycle();
}
You don't actually need to decode the full image. Instead, draw the overlay onto its own canvas, save that canvas as a bitmap, convert the bitmap to a byte array, and then combine it with the camera's byte array and save the result.
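Alternatively, a sketch assuming API 11+: BitmapFactory.Options.inMutable decodes straight into a mutable Bitmap, which avoids the copy() that doubles your memory use:

// Decode directly into a mutable bitmap so no second copy is needed (API 11+).
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true;
Bitmap plainImage = BitmapFactory.decodeByteArray(rawPlainImage, 0, rawPlainImage.length, opts);
Canvas combinedImage = new Canvas(plainImage);
combinedImage.drawBitmap(mOverlay, mOverlayTransformation, null);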
It's well documented that Android's camera preview data is returned in NV21 (YUV 420). Android 2.2 added a YuvImage class for decoding the data. The problem I've encountered is that the YuvImage class's data appears corrupt or incorrect. I used the RenderScript sample app called HelloCompute, which transforms a Bitmap into a monochrome Bitmap. I used two methods for decoding the preview data into a Bitmap and passing it as input to the RenderScript:
Method 1 - Android YuvImage Class:
YuvImage preview = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream mJpegOutput = new ByteArrayOutputStream(data.length);
preview.compressToJpeg(new Rect(0, 0, width, height), 100, mJpegOutput);
mBitmapIn = BitmapFactory.decodeByteArray( mJpegOutput.toByteArray(), 0, mJpegOutput.size());
// pass mBitmapIn to RS
Method 2 - Posted Decoder Method:
As posted over here by David Pearlman
// work-around for YUV format
mBitmapIn = Bitmap.createBitmap(
        ImageUtil.decodeYUV420SP(data, width, height),
        width,
        height,
        Bitmap.Config.ARGB_8888);
// pass mBitmapIn to RS
When the image is processed by the RenderScript and displayed, Method 1 is very grainy and not monochrome, while Method 2 produces the expected output: a monochrome image of the preview frame. Am I doing something wrong, or is the YuvImage class not usable? I'm testing this on a Xoom running 3.1.
Furthermore, I displayed the bitmaps produced by both methods on screen prior to passing them to the RS. The bitmap from Method 1 has noticeable differences in lighting (I suspect this is due to the JPEG compression), while Method 2's bitmap is identical to the preview frame.
There is no justification for using a JPEG encode/decode just to convert a YUV image to a grayscale bitmap (I believe you want grayscale, not a monochrome black-and-white bitmap, after all). You can find many code samples that produce the result you need. You may use this one: Converting preview frame to bitmap.
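For instance, a minimal sketch (not the linked answer's exact code): in NV21 the first width * height bytes are the Y (luma) plane, so a grayscale bitmap needs no chroma math and no JPEG round trip:

// Expand the Y plane of an NV21 preview frame into a grayscale ARGB bitmap.
int[] pixels = new int[width * height];
for (int i = 0; i < width * height; i++) {
    int y = data[i] & 0xFF;  // luma, 0..255
    pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
}
Bitmap gray = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);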