Android create Bitmap + crop causing OutOfMemory error (eventually)

I am taking 3 pictures in my app before uploading them to a remote server. The output is a byte array. I am currently converting this byte array to a bitmap and cropping the centre square out of it. I eventually run out of memory (that is, after exiting the app, coming back, and performing the same steps). I am trying to re-use the bitmap object using BitmapFactory.Options as mentioned in the Android dev guide:
https://www.youtube.com/watch?v=_ioFW3cyRV0&list=LLntRvRsglL14LdaudoRQMHg&index=2
and
https://www.youtube.com/watch?v=rsQet4nBVi8&list=LLntRvRsglL14LdaudoRQMHg&index=3
This is the function I call when I'm saving the image taken by the camera.
public void saveImageToDisk(Context context, byte[] imageByteArray, String photoPath, BitmapFactory.Options options) {
    // First pass: decode only the bounds to learn the image dimensions
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);
    int imageHeight = options.outHeight;
    int imageWidth = options.outWidth;
    int dimension = getSquareCropDimensionForBitmap(imageWidth, imageHeight);
    Log.d(TAG, "Width : " + dimension);
    Log.d(TAG, "Height : " + dimension);
    //bitmap = cropBitmapToSquare(bitmap);
    // Second pass: decode the actual pixels
    options.inJustDecodeBounds = false;
    Bitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0,
            imageByteArray.length, options);
    options.inBitmap = bitmap;
    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension,
            ThumbnailUtils.OPTIONS_RECYCLE_INPUT);
    options.inSampleSize = 1;
    Log.d(TAG, "After square crop Width : " + options.inBitmap.getWidth());
    Log.d(TAG, "After square crop Height : " + options.inBitmap.getHeight());
    byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
    options = null;
    File photo = new File(photoPath);
    if (photo.exists()) {
        photo.delete();
    }
    try {
        FileOutputStream e = new FileOutputStream(photo.getPath());
        BufferedOutputStream bos = new BufferedOutputStream(e);
        bos.write(croppedImageByteArray);
        bos.flush();
        e.getFD().sync();
        bos.close();
    } catch (IOException e) {
    }
}
public int getSquareCropDimensionForBitmap(int width, int height) {
    // If the bitmap is wider than it is tall,
    // use the height as the square crop dimension
    int dimension;
    if (width >= height) {
        dimension = height;
    }
    // If the bitmap is taller than it is wide,
    // use the width as the square crop dimension
    else {
        dimension = width;
    }
    return dimension;
}
public Bitmap cropBitmapToSquare(Bitmap source) {
    int h = source.getHeight();
    int w = source.getWidth();
    if (w >= h) {
        source = Bitmap.createBitmap(source, w / 2 - h / 2, 0, h, h);
    } else {
        source = Bitmap.createBitmap(source, 0, h / 2 - w / 2, w, w);
    }
    Log.d(TAG, "After crop Width : " + source.getWidth());
    Log.d(TAG, "After crop Height : " + source.getHeight());
    return source;
}
How do I correctly recycle or re-use bitmaps? As of now I am getting OutOfMemory errors.
UPDATE :
After implementing Colin's solution, I am running into an ArrayIndexOutOfBoundsException.
My logs are below:
08-26 01:45:01.895 3600-3648/com.test.test E/AndroidRuntime﹕ FATAL EXCEPTION: pool-3-thread-1
Process: com.test.test, PID: 3600
java.lang.ArrayIndexOutOfBoundsException: length=556337; index=556337
at com.test.test.helpers.Utils.test(Utils.java:197)
at com.test.test.fragments.DemoCameraFragment.saveImageToDisk(DemoCameraFragment.java:297)
at com.test.test.fragments.DemoCameraFragment_.access$101(DemoCameraFragment_.java:30)
at com.test.test.fragments.DemoCameraFragment_$5.execute(DemoCameraFragment_.java:159)
at org.androidannotations.api.BackgroundExecutor$Task.run(BackgroundExecutor.java:401)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
P.S.: I had thought of cropping byte arrays before, but I did not know how to implement it.

You shouldn't need to do any conversion to bitmaps, actually.
Remember that your bitmap image data is RGBA_8888 formatted, meaning that every 4 contiguous bytes represent one pixel. As such:
// helpers for sanity
int halfWidth = imgWidth >> 1;
int halfHeight = imgHeight >> 1;
int halfDim = dimension >> 1;
// get our min and max crop locations
int minX = halfWidth - halfDim;
int minY = halfHeight - halfDim;
int maxX = halfWidth + halfDim;
int maxY = halfHeight + halfDim;
// allocate our thumbnail; it's W x H x (4 bytes per pixel)
byte[] outArray = new byte[dimension * dimension * 4];
int outPtr = 0;
for (int y = minY; y < maxY; y++) {
    for (int x = minX; x < maxX; x++) {
        // 4 bytes per pixel, so the whole (row, column) offset is scaled by 4
        int srcLocation = ((y * imgWidth) + x) * 4;
        outArray[outPtr + 0] = imageByteArray[srcLocation + 0]; // read R
        outArray[outPtr + 1] = imageByteArray[srcLocation + 1]; // read G
        outArray[outPtr + 2] = imageByteArray[srcLocation + 2]; // read B
        outArray[outPtr + 3] = imageByteArray[srcLocation + 3]; // read A
        outPtr += 4;
    }
}
// outArray now contains the cropped pixels.
The end result is that you can do the cropping by hand by just copying out the pixels you're looking for, rather than allocating a new bitmap object and then converting that back to a byte array.
== EDIT:
Actually, the above algorithm assumes that your input data is the raw RGBA_8888 pixel data. It sounds like, instead, your input byte array is the encoded JPG data, so your 2nd decodeByteArray is actually decoding your JPG file to the RGBA_8888 format. If this is the case, the proper thing to do for resizing is to use the techniques described in "Most memory efficient way to resize bitmaps on android?", since you're working with encoded data.
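For reference, a minimal sketch of that technique (decode the bounds first, then decode subsampled so the full-resolution pixels are never allocated; maxDim here is an assumed target for the longest side, not a variable from the question):
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;
BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, opts);
// keep halving until the next halving would drop below maxDim
int inSampleSize = 1;
while (opts.outWidth / (inSampleSize * 2) >= maxDim
        && opts.outHeight / (inSampleSize * 2) >= maxDim) {
    inSampleSize *= 2;
}
opts.inJustDecodeBounds = false;
opts.inSampleSize = inSampleSize;
Bitmap scaled = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, opts);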

Try setting more and more variables to null - this helps reclaim that memory.
after
byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
do:
bitmap = null;
after
FileOutputStream e = new FileOutputStream(photo.getPath());
do:
photo = null;
and after
and after
try {
    FileOutputStream e = new FileOutputStream(photo.getPath());
    BufferedOutputStream bos = new BufferedOutputStream(e);
    bos.write(croppedImageByteArray);
    bos.flush();
    e.getFD().sync();
    bos.close();
} catch (IOException e) {
}
do:
e = null;
bos = null;
Edit #1
If this fails to help, your only real solution is to actually use the Memory Monitor. To learn more go here and here.
P.S.: there is another very dark solution, a very dark one, only for those who know how to navigate through the dark corners of off-heap memory. But you will have to follow this path on your own.

Related

Incorrect colors on Android Bitmap after conversion to and from a byte array

In Android, I want to load a PNG, get the RGB values in a byte array to do some computation, then I want to recreate a Bitmap with the new values.
To do that, I wrote two functions: one to convert a Bitmap into an RGB byte array and another to convert an RGB byte array back to a Bitmap; the alpha channel can be ignored.
These are the conversion functions:
public static byte[] ARGB2byte(Bitmap img)
{
    int width = img.getWidth();
    int height = img.getHeight();
    int[] pixels = new int[height * width];
    byte rgbIm[] = new byte[height * width * 3];
    img.getPixels(pixels, 0, width, 0, 0, width, height);
    int pixel_count = 0;
    int count = width * height;
    while (pixel_count < count) {
        int inVal = pixels[pixel_count];
        // Get the pixel channel values from the int
        int a = (inVal >> 24) & 0xff; // alpha is extracted but not used
        int r = (inVal >> 16) & 0xff;
        int g = (inVal >> 8) & 0xff;
        int b = inVal & 0xff;
        rgbIm[pixel_count * 3] = (byte) (r);
        rgbIm[pixel_count * 3 + 1] = (byte) (g);
        rgbIm[pixel_count * 3 + 2] = (byte) (b);
        pixel_count++;
    }
    return rgbIm;
}
public static Bitmap byte2ARGB(byte[] data, int width, int height)
{
    int pixelsCount = data.length / 3;
    int[] pixels = new int[pixelsCount];
    for (int i = 0; i < pixelsCount; i++)
    {
        int offset = 3 * i;
        int r = data[offset];
        int g = data[offset + 1];
        int b = data[offset + 2];
        pixels[i] = Color.rgb(r, g, b);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
So I tried to test these functions by simply loading an image from the assets folder, converting it to a byte array, converting it back to a Bitmap immediately, and saving it to internal storage to check whether the final result matches the original image.
Unfortunately it doesn't; the color space seems wrong.
For example, if I load this png:
and run the following code:
// Load the png from the assets folder
AssetManager am = getInstrumentation().getContext().getAssets();
InputStream is = am.open(filename);
Bitmap bitmap = BitmapFactory.decodeStream(is);
// Conversion to byte array
byte[] barray = ARGB2byte(bitmap);
Bitmap reconverted = Utils.byte2ARGB(barray, bitmap.getWidth(), bitmap.getHeight());
// Saving the reconverted Bitmap
try {
    String folder_path = context.getFilesDir().getAbsolutePath() + "/";
    File file = new File(folder_path + "test_conversion.png");
    FileOutputStream fos = new FileOutputStream(file);
    reconverted.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.close();
} catch (FileNotFoundException e) {
    Log.d("saving bitmap", "File not found: " + e.getMessage());
} catch (IOException e) {
    Log.d("saving bitmap", "Error accessing file: " + e.getMessage());
}
I get this as the result:
What am I doing wrong?
If I use the same code to save the original Bitmap right after loading it, the image I get is correct, so I'm probably making some mistake during the conversion.
I also inspected the R, G, B values of the byte array of the original image, comparing them with the values from the byte array obtained from the reconverted image, and they are the same!
Is there something that the Bitmap library of Android does under the hood, maybe with the alpha channel? I can't figure it out.
Thank you
Apologies first, as I can't speak English well.
This happens because casting a byte to an int in Java sign-extends it, so values above 127 come back negative.
Use this instead:
int r = data[offset] & 0xFF;
int g = data[offset + 1] & 0xFF;
int b = data[offset + 2] & 0xFF;
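To see why the mask matters, a quick illustrative sketch:
byte b = (byte) 200;  // bytes are signed in Java, so this is stored as -56
int wrong = b;        // sign-extended: -56
int right = b & 0xFF; // zero-extended: 200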
See this post too.

Convert android.media.Image (YUV_420_888) to Bitmap

I'm trying to implement camera preview image data processing using camera2 api as proposed here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks via onImageAvailableListener, but for future processing I need to obtain a bitmap from the YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you please suggest how to convert android.media.Image (YUV_420_888) to Bitmap, or maybe a better way of listening for preview frames?
You can do this using the built-in Renderscript intrinsic, ScriptIntrinsicYuvToRGB. Code taken from Camera2 api Imageformat.yuv_420_888 results on rotated image:
@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data
    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB
    final RenderScript rs = RenderScript.create(this.mContext);
    final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());
    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);
    allocationRgb.copyTo(bitmap);

    // Release (in real code, use the bitmap before this point)
    bitmap.recycle();
    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();
    image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
    final Rect crop = image.getCropRect();
    final int width = crop.width();
    final int height = crop.height();
    final Image.Plane[] planes = image.getPlanes();
    final byte[] rowData = new byte[planes[0].getRowStride()];
    final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            channelOffset = 0;
            outputStride = 1;
        }
        else if (planeIndex == 1)
        {
            channelOffset = width * height + 1;
            outputStride = 2;
        }
        else if (planeIndex == 2)
        {
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer = planes[planeIndex].getBuffer();
        final int rowStride = planes[planeIndex].getRowStride();
        final int pixelStride = planes[planeIndex].getPixelStride();

        // Chroma planes are subsampled by 2 in each dimension
        final int shift = (planeIndex == 0) ? 0 : 1;
        final int widthShifted = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;
            if (pixelStride == 1 && outputStride == 1)
            {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);
                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
For a simpler solution see my implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.Image as input and creates three RenderScript allocations based on the y-, u- and v-planes. It follows the YUV_420_888 logic as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take these as three byte[], i.e. U8 allocations. The y-allocation has size width * height bytes, while the u- and v-allocations have size width * height / 4 bytes each, reflecting the fact that each u byte covers 4 pixels (ditto each v byte).
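A minimal sketch of those three allocations (assuming rs is a RenderScript instance and yBytes, uBytes and vBytes already hold the copied-out plane data):
int ySize = width * height;       // one luma byte per pixel
int uvSize = width * height / 4;  // one chroma byte per 2x2 pixel block
Allocation yAlloc = Allocation.createSized(rs, Element.U8(rs), ySize);
Allocation uAlloc = Allocation.createSized(rs, Element.U8(rs), uvSize);
Allocation vAlloc = Allocation.createSized(rs, Element.U8(rs), uvSize);
yAlloc.copyFrom(yBytes);
uAlloc.copyFrom(uBytes);
vAlloc.copyFrom(vBytes);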
I wrote some code for this: it takes the YUV preview data and converts it to JPEG data, which I can then save as a bitmap, a byte[], or anything else (see the class "Allocation").
And the SDK documentation says: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), Then obtain the Surface with getSurface()."
Here is the code, hope it will help you: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
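A sketch of what that SDK recommendation looks like in practice (previewWidth and previewHeight are assumed to be one of the sizes returned by getOutputSizes(Allocation.class)):
RenderScript rs = RenderScript.create(context);
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(previewWidth)
        .setY(previewHeight)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation yuvAlloc = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
// Hand yuvAlloc.getSurface() to the camera session as an output target;
// each frame can then be processed in RenderScript without a Java-side copy.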

onPreviewFrame YUV grayscale skewed

I'm trying to grab the picture from a SurfaceView where I have the camera preview running.
I've already implemented onPreviewFrame, and it's called correctly, as the debugger shows me.
The problem I'm facing now is that the byte[] data I receive in the method is in the YUV color space (NV21), and I'm trying to convert it to grayscale to generate a Bitmap and then store it in a file.
The conversion process that I'm following is:
public Bitmap convertYuvGrayScaleRGB(byte[] yuv, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < height * width; i++) {
        int grey = yuv[i] & 0xff;
        pixels[i] = 0xFF000000 | (grey * 0x00010101);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
The procedure for storing it to a file is:
Bitmap bitmap = convertYuvGrayScaleRGB(data, width, height);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 50, bytes);
File f = new File(Environment.getExternalStorageDirectory()
        + File.separator + "test.jpg");
Log.d("Camera", "File: " + f.getAbsolutePath());
try {
    f.createNewFile();
    FileOutputStream fo = new FileOutputStream(f);
    fo.write(bytes.toByteArray());
    fo.close();
    bitmap.recycle();
    bitmap = null;
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Although, the result I get is the following:
I can't find any obvious mistake in your code, but I've already met this kind of skewed image before. When this happened to me, it was due to one of two things:
At some point in the code, the image width and height are swapped,
Or the original image you're trying to convert has padding, in which case you will need a stride in addition to the width and height.
Hope this helps!
Probably the width of the image you are converting is not even; in that case it is padded in memory.
Let me have a look at the docs...
It seems more complicated than this: if you want your code to work as it is now, the width will have to be a multiple of 16.
from the docs:
public static final int YV12
Added in API level 9. Android YUV format.
This format is exposed to software decoders and applications.
YV12 is a 4:2:0 YCrCb planar format comprised of a WxH Y plane followed by (W/2) x (H/2) Cr and Cb planes.
This format assumes:
an even width
an even height
a horizontal stride multiple of 16 pixels
a vertical stride equal to the height
y_size = stride * height
c_stride = ALIGN(stride/2, 16)
c_size = c_stride * height/2
size = y_size + c_size * 2
cr_offset = y_size
cb_offset = y_size + c_size
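In other words, each row in memory can be longer than width bytes. A minimal sketch of de-padding the luma plane before conversion, assuming the horizontal stride is the width aligned up to 16 as the quoted docs describe:
int stride = ((width + 15) / 16) * 16; // ALIGN(width, 16)
byte[] tightLuma = new byte[width * height];
for (int row = 0; row < height; row++) {
    // copy only the first `width` bytes of each padded row
    System.arraycopy(yuv, row * stride, tightLuma, row * width, width);
}
// tightLuma can now be passed to convertYuvGrayScaleRGB() without skewing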
I just had this problem with the S3. My problem was that I used the wrong dimensions for the preview. I assumed the camera was 16:9 when it was actually 4:3.
Use Camera.getParameters().getPreviewSize() to see what size the output actually is.
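For example (a quick sketch, logging tag borrowed from the question):
Camera.Size preview = camera.getParameters().getPreviewSize();
Log.d("Camera", "Preview size: " + preview.width + "x" + preview.height);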
I made this:
int frameSize = width * height;
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ret[frameSize + (i >> 1) * width + (j & ~1) + 1] = 127; // U
        ret[frameSize + (i >> 1) * width + (j & ~1) + 0] = 127; // V
    }
}
So simple, but it works really well and fast ;)

Create 1 imageView with 9 image Files

I searched for a way to create a single ImageView from 9 image files, like this:
IMG1 - IMG2 - IMG3
IMG4 - IMG5 - IMG6
IMG7 - IMG8 - IMG9
I've seen several interesting topics that helped me. One of these discusses a solution that may fit my needs. In that topic, Dimitar Dimitrov proposes this:
You can try to do it with the raw data, by extracting the pixel data
from the images as 32-bit int ARGB pixel arrays, merge in one big
array, and create a new Bitmap, using the methods of the Bitmap class
like copyPixelsToBuffer(), createBitmap() and setPixels().
source: Render Two images in ImageView in Android?
So I could extract the 32-bit ARGB pixels from each image file and then create a Bitmap that I fill in with the setPixels function. The problem is that I don't know how to "extract the 32-bit ARGB pixels from each image file"...
I also saw things about Canvas and SurfaceView, but I have never used them; a Canvas version might look like the sketch below. Furthermore, the final object will only occasionally be pinch-zoomed (when the user wants it), so I think it will be easier for me to make it work using a single ImageView.
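For reference, a minimal sketch of that Canvas route (untested, and assuming all 9 parts share the same partWidth x partHeight size); it draws each part directly instead of copying int[] pixel arrays around:
Bitmap page = Bitmap.createBitmap(partWidth * 3, partHeight * 3, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(page);
for (int i = 0; i < 9; i++) {
    Bitmap part = getPagePart(pageNum, i + 1);
    int col = i % 3;
    int row = i / 3;
    canvas.drawBitmap(part, col * partWidth, row * partHeight, null);
    part.recycle(); // free each part as soon as it has been drawn
}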
So I began with this portion of code inside an AsyncTask (to avoid using the UI thread),
but I already get an OUT OF MEMORY exception
...
@Override
protected Bitmap doInBackground(Void... params) {
    return this.createBigBitmap();
}

public Bitmap createBigBitmap() {
    Bitmap pageBitmap = Bitmap.createBitmap(800, 1066, Bitmap.Config.ARGB_8888); // OUT OF MEMORY EXCEPTION
    // create an ArrayList of the 9 page parts
    ArrayList<Bitmap> pageParts = new ArrayList<Bitmap>();
    for (int pagePartNum = 1; pagePartNum <= 9; pagePartNum++) {
        Bitmap pagePartBitmap = getPagePart(pageNum, pagePartNum);
        pageParts.add(pagePartBitmap);
    }
    // try to copy the content of the 9 bitmaps into a single bitmap
    int[] pixels = null;
    int offsetX = 0, offsetY = 0, pagePartNum = 0;
    for (int x = 0; x < this.nbPageRows; x++) {
        for (int y = 0; y < this.nbPageColumns; y++) {
            pagePartNum = x * this.nbPageColumns + y;
            Bitmap pagePartBitmap = pageParts.get(pagePartNum);
            // read pixels from the pagePartBitmap
            pixels = new int[pagePartBitmap.getHeight() * pagePartBitmap.getWidth()];
            pagePartBitmap.getPixels(pixels, 0, pagePartBitmap.getWidth(), 0, 0, pagePartBitmap.getWidth(), pagePartBitmap.getHeight());
            // compute offsetY
            if (x == 0)
                offsetY = 0;
            if (x == 1)
                offsetY = pageParts.get(0).getHeight();
            if (x == 2)
                offsetY = pageParts.get(0).getHeight() * 2;
            // compute offsetX
            if (y == 0)
                offsetX = 0;
            if (y == 1)
                offsetX = pageParts.get(0).getWidth();
            if (y == 2)
                offsetX = pageParts.get(0).getWidth() * 2;
            // write the pixels read to the pageBitmap
            pageBitmap.setPixels(pixels, 0, pagePartBitmap.getWidth(), offsetX, offsetY, pagePartBitmap.getWidth(), pagePartBitmap.getHeight());
            offsetX += pagePartBitmap.getWidth();
            offsetY += pagePartBitmap.getHeight();
        }
    }
    return pageBitmap;
}

// get a bitmap from one of the 9 existing image files for the page parts
private Bitmap getPagePart(int pageNum, int pagePartNum) {
    String imgFilename = this.directory.getAbsolutePath()
            + File.separator + "z-"
            + String.format("%04d", pageNum)
            + "-" + pagePartNum + ".jpg";
    // add the bitmap for the page part
    BitmapFactory.Options opt = new BitmapFactory.Options();
    opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
    return BitmapFactory.decodeFile(imgFilename, opt);
}
Thank you very much
Read about BitmapRegionDecoder and this.
Edit
Look at Efficient Bitmap Handling in Android to get past the OOM error.
Use the following code to decode your bitmap.
public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
        int reqWidth, int reqHeight) {
    // First decode with inJustDecodeBounds=true to check dimensions
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(res, resId, options);
    // Calculate inSampleSize (fixed at 4 here for simplicity)
    options.inSampleSize = 4;
    // Decode bitmap with inSampleSize set
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(res, resId, options);
}

Camera memory consumption

I am taking pictures via the Android camera API.
In order to calculate the available memory for some image processing, I want to check whether the image fits into memory.
I am doing this with these functions:
/**
 * Checks if a bitmap with the specified size fits in memory
 * @param bmpwidth Bitmap width
 * @param bmpheight Bitmap height
 * @param bmpdensity Bitmap bpp (use 2 as default)
 * @return true if the bitmap fits in memory, false otherwise
 */
public static boolean checkBitmapFitsInMemory(long bmpwidth, long bmpheight, int bmpdensity) {
    long reqsize = bmpwidth * bmpheight * bmpdensity;
    long allocNativeHeap = Debug.getNativeHeapAllocatedSize();
    if ((reqsize + allocNativeHeap + Preview.getHeapPad()) >= Runtime.getRuntime().maxMemory()) {
        return false;
    }
    return true;
}

private static long getHeapPad() {
    return (long) Math.max(4 * 1024 * 1024, Runtime.getRuntime().maxMemory() * 0.1);
}
The problem is: I am still getting OutOfMemoryExceptions (not on my phone, but from people who have already downloaded my app).
The exception occurs in the last line of the following code:
public void onPictureTaken(byte[] data, Camera camera) {
    Log.d(TAG, "onPictureTaken - jpeg");
    final byte[] data1 = data;
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    options.inSampleSize = downscalingFactor;
    Log.d(TAG, "before gc");
    printFreeRam();
    System.gc();
    Log.d(TAG, "after gc");
    printFreeRam();
    Bitmap photo = BitmapFactory.decodeByteArray(data1, 0, data1.length, options);
downscalingFactor is chosen via the checkBitmapFitsInMemory() method.
I am doing it like this:
for (downscalingFactor = 1; downscalingFactor < 16; downscalingFactor++) {
    double width = (double) bestPictureSize.width / downscalingFactor;
    double height = (double) bestPictureSize.height / downscalingFactor;
    if (Preview.checkBitmapFitsInMemory((int) width, (int) height, 4 * 4)) { // 4 channels (RGBA) * 4 layers
        Log.v(TAG, " supported: " + width + 'x' + height);
        break;
    } else {
        Log.v(TAG, " not supported: " + width + 'x' + height);
    }
}
Does anyone know why this approach is so buggy?
Try changing this:
Bitmap photo = BitmapFactory.decodeByteArray(data1, 0, data1.length, options);
to this:
Bitmap photo = BitmapFactory.decodeByteArray(data, 0, data.length, options);
