Camera2 API - JPEG capture file size too big - Android

I'm using the Camera2 API to do a still image capture and save it to a JPEG file. The problem is that the file size is always over 900 KB, even if I set the image dimensions to the smallest available and set the JPEG quality low.
This is how I'm saving the file in the ImageAvailableListener. It's a Xamarin project, so the code is in C#.
image = reader.AcquireLatestImage();
ByteBuffer buffer = image.GetPlanes()[0].Buffer;
byte[] bytes = new byte[buffer.Remaining()];
buffer.Get(bytes);
output = new FileOutputStream(File);
output.Write(bytes);
output.Close();
The file should be around 20 KB, so why can't I get file sizes lower than 900 KB?

You can also reduce the capture image quality:
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.set(CaptureRequest.JPEG_THUMBNAIL_QUALITY, (byte) 70); // add this line and set your own quality
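Note that JPEG_THUMBNAIL_QUALITY only controls the JPEG thumbnail embedded in the EXIF data; the quality of the main captured image is set with CaptureRequest.JPEG_QUALITY. A minimal sketch (not from the original answer), assuming mCameraDevice and an ImageReader named mImageReader already exist:
// Hypothetical still-capture request with a reduced main-image JPEG quality.
CaptureRequest.Builder captureBuilder =
        mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(mImageReader.getSurface());
captureBuilder.set(CaptureRequest.JPEG_QUALITY, (byte) 70); // 1-100, lower value = smaller file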

I figured it out. I needed to decode to a Bitmap in order to apply compression:
image = reader.AcquireLatestImage();
ByteBuffer buffer = image.GetPlanes()[0].Buffer;
byte[] bytes = new byte[buffer.Remaining()];
buffer.Get(bytes);
// Need to decode to a bitmap in order to compress
Bitmap bitmap = BitmapFactory.DecodeByteArray(bytes, 0, bytes.Length);
using (System.IO.MemoryStream stream = new System.IO.MemoryStream())
{
    bitmap.Compress(Bitmap.CompressFormat.Jpeg, 85, stream);
    Save(stream.ToArray()); // ToArray() copies only the written bytes; GetBuffer() can include unused trailing capacity
}

Related

Dart/Flutter Image library decode Bitmap fails

I am trying to read a bitmap generated on the platform side.
Here is the code I use to generate the byte array on the Android side:
ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
return byteBuffer.array();
When I try to decode it with the image library, it fails (Dart code):
Image img = decodeBmp(bitmapData);
The method 'getBytes' was called on null.
The bitmap data is not empty and looks correct to me.
If I instead compress the image on the Android side to PNG or JPG, then decodeImage (or decodePng / decodeJpg) works, but I don't want to use bitmap.compress.
This code would work with compress:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bufferedBmp.compress(Bitmap.CompressFormat.PNG, 100, baos);
byte[] imageData = baos.toByteArray();
return imageData;

How to convert Image to byte[] or Bitmap in Android Studio?

Can someone help me solve a problem in Android Studio? I am creating an app in which I have a photo in JPEG format (coming from the camera, not from a file) and I want to convert it to a Bitmap.
You can use this method to convert a Bitmap to a byte[], like this:
public byte[] getBytesFromBitmap(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream); // 100 = maximum quality
    return stream.toByteArray();
}
// Here image variable is a JPEG file/image.
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
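To go the rest of the way to a Bitmap, as the question asks, the JPEG bytes can be decoded directly. A minimal sketch (not part of the original answer), assuming image comes from an ImageReader configured for ImageFormat.JPEG:
// Decode the extracted JPEG bytes into a Bitmap.
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
image.close(); // release the Image back to the ImageReader once the bytes have been copied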

Bitmap to byteArray without compress

I am aware of the code below
Bitmap photo = (Bitmap) data.getExtras().get("data");
ByteArrayOutputStream stream = new ByteArrayOutputStream();
photo.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] byteArray = stream.toByteArray();
but the issue is that this code reduces the image quality so much that it is unrecognizable. How can I avoid
photo.compress(Bitmap.CompressFormat.PNG, 100, stream);
You can use CompressFormat.PNG, where the compress quality doesn't matter; it is always lossless, as described in the documentation. If you want to avoid encoding altogether, you can copy the raw pixels into a byte array instead:
byte[] array = new byte[w * h * 4]; // assumes ARGB_8888 (4 bytes per pixel)
ByteBuffer dst = ByteBuffer.wrap(array);
bmp.copyPixelsToBuffer(dst); // raw pixel copy, no encoding involved
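If the raw array later needs to be turned back into a Bitmap, the reverse call exists as well. A minimal sketch (not part of the original answer), assuming the original width, height, and ARGB_8888 config are known:
// Rebuild a Bitmap from the raw pixel array produced above.
Bitmap restored = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
restored.copyPixelsFromBuffer(ByteBuffer.wrap(array));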

Camera2 API convert YUV420 to RGB - green output

I am trying to convert an image from YUV_420_888 to RGB and I have some trouble with the output image. In ImageReader I get the image in YUV_420_888 format (using the Camera2 API to get this preview image).
imageReader = ImageReader.newInstance(1920,1080,ImageFormat.YUV_420_888,10);
The Android SDK documentation for the YuvImage class says that YuvImage supports only NV21 and YUY2.
As we can see, the difference between NV21 and YUV420 is not big, so I tried to convert the data to NV21.
(Diagrams of the YUV420 and NV21 layouts were shown here.)
In onImageAvailable I get each plane separately and put it in the correct place (as in the diagram):
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
ByteBuffer bufferY = image.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = image.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = image.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);
...
outputStream.write(data0);
for (int i=0;i<bufferV.remaining();i++) {
outputStream.write(data1[i]);
outputStream.write(data2[i]);
}
After that I create a YuvImage, convert it to a Bitmap, display it, and save it:
final YuvImage yuvImage = new YuvImage(outputStream.toByteArray(), ImageFormat.NV21, 1920,1080, null);
ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0,1920, 1080), 95, outBitmap);
byte[] imageBytes = outBitmap.toByteArray();
final Bitmap imageBitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
mImageView.setImageBitmap(imageBitmap);
...
imageBitmap.compress(Bitmap.CompressFormat.JPEG, 95, out);
But my saved image is green and pink. What did I miss?
I have implemented the YUV_420 logic (exactly as shown in the diagram above) in RenderScript; see the full code here:
Conversion YUV_420_888 to Bitmap, complete code
It produces perfect bitmaps for API 22, but for API 21 it shows the "green idyll", so I can confirm the results you found. As Silvaren already mentioned, the reason seems to be an Android bug in API 21. Looking at my RenderScript code it is clear that if the U and V information is missing (i.e. zero), the G(reen) ARGB component becomes huge during the conversion.
I see similar green pictures on my Galaxy S5 (still on API 21), here even upside down ;-). I suspect that most API 21 devices do not yet use Camera2 for their built-in camera apps. There is a free app called "Manual Camera Compatibility" that lets you test this. From it I can see that the S5 on API 21 indeed still does not use Camera2... fortunately...
There are two main problems with your conversion attempt:
We cannot assume that the U and V planes are isolated; they might contain interleaved data (e.g. U-PLANE = {U1, V1, U2, V2, ...}). In fact, it might even already be an NV21-style interleaving. The key is to look at each plane's row stride and pixel stride, and to check what the YUV_420_888 format actually guarantees (a stride-aware sketch follows the loop rewrite below).
The fact that you've commented that most of the U and V plane data is zeros indicates that you are hitting an Android bug in the generation of YUV_420_888 images. This means that even if you get the conversion right, the image would still look green if you are affected by the bug, which was only fixed in Android 5.1.1 and up, so it is worth checking which version you are running in addition to fixing the code.
bufferV.get(data2) advances the position of the ByteBuffer. That's why the loop for (int i = 0; i < bufferV.remaining(); i++) runs for 0 iterations. You can easily rewrite it as
for (int i = 0; i < data1.length; i++) {
    outputStream.write(data1[i]);
    outputStream.write(data2[i]);
}
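For the first point, the stride handling can be made explicit. Below is a minimal, stride-aware YUV_420_888-to-NV21 sketch (not from the original answers); yuv420ToNv21 is a hypothetical helper name, and it assumes both chroma planes report the same row and pixel strides, which is typical but not guaranteed by the format:
private static byte[] yuv420ToNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Copy the Y plane row by row, honouring its row stride.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    int pos = 0;
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * yRowStride);
        yBuffer.get(nv21, pos, width);
        pos += width;
    }

    // Interleave V and U samples (NV21 expects V first), honouring the
    // chroma planes' row stride and pixel stride.
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    int chromaRowStride = image.getPlanes()[1].getRowStride();
    int chromaPixelStride = image.getPlanes()[1].getPixelStride();
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int offset = row * chromaRowStride + col * chromaPixelStride;
            nv21[pos++] = vBuffer.get(offset); // V sample
            nv21[pos++] = uBuffer.get(offset); // U sample
        }
    }
    return nv21;
}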
I got an image in ImageFormat.YUV_420_888, successfully saved it to a JPEG file, and could view it correctly on Windows.
I am sharing the code here:
private final Image mImage;
private final File mFile;
private final int mImageFormat;

ByteArrayOutputStream outputbytes = new ByteArrayOutputStream();

// Copy each of the three planes into its own byte array.
ByteBuffer bufferY = mImage.getPlanes()[0].getBuffer();
byte[] data0 = new byte[bufferY.remaining()];
bufferY.get(data0);
ByteBuffer bufferU = mImage.getPlanes()[1].getBuffer();
byte[] data1 = new byte[bufferU.remaining()];
bufferU.get(data1);
ByteBuffer bufferV = mImage.getPlanes()[2].getBuffer();
byte[] data2 = new byte[bufferV.remaining()];
bufferV.get(data2);

try
{
    // Y first, then the V plane before the U plane: on devices where the chroma
    // planes are interleaved (pixel stride 2), the V buffer already starts with the
    // V,U,V,U,... ordering that NV21 expects. This is not guaranteed by YUV_420_888.
    outputbytes.write(data0);
    outputbytes.write(data2);
    outputbytes.write(data1);

    final YuvImage yuvImage = new YuvImage(outputbytes.toByteArray(), ImageFormat.NV21, mImage.getWidth(), mImage.getHeight(), null);
    ByteArrayOutputStream outBitmap = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, mImage.getWidth(), mImage.getHeight()), 95, outBitmap);

    FileOutputStream outputfile = new FileOutputStream(mFile);
    outputfile.write(outBitmap.toByteArray());
}
catch (IOException e)
{
    e.printStackTrace();
}
finally
{
    mImage.close();
}
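For context, a saver like this is usually driven from the ImageReader callback and run off the UI thread. A minimal wiring sketch (not from the original answer), where ImageSaver is a hypothetical Runnable wrapping the code above and backgroundHandler is an existing Handler:
// Hypothetical wiring: hand each new frame to the saver on a background thread.
imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireNextImage();                   // the saver must call image.close()
    backgroundHandler.post(new ImageSaver(image, outputFile)); // outputFile: destination File
}, backgroundHandler);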

Size of byte array before and after writing to file is different - why?

public void onPictureTaken(byte[] data, Camera camera) {
    Uri imageFileUri = null;

    BitmapFactory.Options bmpFactoryOptions = new BitmapFactory.Options();
    bmpFactoryOptions.inJustDecodeBounds = true;
    Bitmap mBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, bmpFactoryOptions);
    Log.d("SIZE", "mBitmap size :" + data.length);

    bmpFactoryOptions.inJustDecodeBounds = false;
    mBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, bmpFactoryOptions);

    imageFileUri = getApplicationContext().getContentResolver().insert(
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI, new ContentValues());
    OutputStream imageFileOS = getContentResolver().openOutputStream(imageFileUri);
    mBitmap.compress(Bitmap.CompressFormat.JPEG, 100, imageFileOS);
    imageFileOS.flush();
    imageFileOS.close();

    ByteArrayOutputStream stream1 = new ByteArrayOutputStream();
    mBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream1);
    byte[] imageInByte1 = stream1.toByteArray();
    long lengthbmp1 = imageInByte1.length;
    Log.d("SIZE", "ByteArrayOutputStream1 size :" + lengthbmp1);
The output of the Log is as below:
D/SIZE (23100): mBitmap size :4858755
D/SIZE (23100): ByteArrayOutputStream1 size :8931843
Can anybody help me understand why there is this difference?
I need to compress the image based on its size, but even without compressing it the size comes out different.
You appear to be loading the image and then re-compressing it as a JPEG:
mBitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream1);
Then you're wondering why the file size isn't the same? The answer is that you've re-encoded it, and 100 here is the compression quality. If you load an image compressed at 80% quality and then resave it at 100% in any image editor, the size will grow.
The first question is why you re-encode the bitmap at all when you already have the bytes. The size difference you observe comes from the different compression: the camera app will typically compress the image with a quality lower than 100, but you recompress it at 100, so it is clear that your image representation will need more space.
If recompression is really necessary (for example, if you altered the image in some way), try lower quality factors for better compression. Depending on your image, something between 90 and 100 may work well.
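If you do not modify the image at all, you can skip the decode/re-encode round trip and write the camera's original JPEG bytes straight to the output, which keeps the file size identical to data.length. A minimal sketch (not from the original answer), assuming imageFileUri is obtained as in the question:
// Write the already-encoded JPEG passed to onPictureTaken() directly to the output stream.
OutputStream imageFileOS = getContentResolver().openOutputStream(imageFileUri);
imageFileOS.write(data); // data is the byte[] argument of onPictureTaken()
imageFileOS.flush();
imageFileOS.close();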
