onPreviewFrame YUV grayscale skewed - android

I'm trying to grab the picture from a SurfaceView where the camera preview is running.
I've already implemented onPreviewFrame, and it is called correctly, as the debugger shows me.
The problem I'm facing now is that the byte[] data I receive in the method is in the YUV color space (NV21), so I'm trying to convert it to grayscale, generate a Bitmap and then store it into a file.
The conversion process I'm following is:
public Bitmap convertYuvGrayScaleRGB(byte[] yuv, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < height * width; i++) {
        int grey = yuv[i] & 0xff;
        pixels[i] = 0xFF000000 | (grey * 0x00010101);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
The procedure for storing it to a file is:
Bitmap bitmap = convertYuvGrayScaleRGB(data, width, height);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 50, bytes);
File f = new File(Environment.getExternalStorageDirectory()
        + File.separator + "test.jpg");
Log.d("Camera", "File: " + f.getAbsolutePath());
try {
    f.createNewFile();
    FileOutputStream fo = new FileOutputStream(f);
    fo.write(bytes.toByteArray());
    fo.close();
    bitmap.recycle();
    bitmap = null;
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
Although, the result I've got is the following:

I can't find any obvious mistake in your code, but I've already come across this kind of skewed image before. When it happened to me, it was due to one of the following:
At some point in the code, the image width and height were swapped,
Or the original image you're trying to convert has padding, in which case you will need a stride in addition to the width and height (a stride-aware sketch follows below).
Hope this helps!
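For the padded case, here is a minimal stride-aware sketch of the conversion (my own illustration, assuming you can obtain the row stride of the buffer; it is not code from the original answer):
public Bitmap convertYuvGrayScaleRGB(byte[] yuv, int width, int height, int stride) {
    int[] pixels = new int[width * height];
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            // Index into the Y plane using the stride, not the width.
            int grey = yuv[row * stride + col] & 0xff;
            pixels[row * width + col] = 0xFF000000 | (grey * 0x00010101);
        }
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}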

Probably the width of the image you are converting is not even; in that case it is padded in memory.
Let me have a look at the docs...
It seems more complicated than this. If you want your code to work as it is now, the width will have to be a multiple of 16.
From the docs:
public static final int YV12
Added in API level 9
Android YUV format. This format is exposed to software decoders and applications.
YV12 is a 4:2:0 YCrCb planar format comprised of a WxH Y plane
followed by (W/2) x (H/2) Cr and Cb planes.
This format assumes
an even width
an even height
a horizontal stride multiple of 16 pixels
a vertical stride equal to the height
y_size = stride * height
c_stride = ALIGN(stride/2, 16)
c_size = c_stride * height/2
size = y_size + c_size * 2
cr_offset = y_size
cb_offset = y_size + c_size
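Putting the quoted formulas into code, a small sketch (my own, where align(x, n) rounds x up to a multiple of n) for computing the expected YV12 buffer size:
// Round x up to the next multiple of n (n is a power of two).
static int align(int x, int n) {
    return (x + n - 1) & ~(n - 1);
}

static int yv12BufferSize(int width, int height) {
    int stride = align(width, 16);        // horizontal stride, multiple of 16
    int ySize = stride * height;          // y_size = stride * height
    int cStride = align(stride / 2, 16);  // c_stride = ALIGN(stride/2, 16)
    int cSize = cStride * height / 2;     // c_size = c_stride * height/2
    return ySize + cSize * 2;             // size = y_size + c_size * 2
}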

I just had this problem with the S3. My problem was that I used the wrong dimensions for the preview. I assumed the camera was 16:9 when it was actually 4:3.
Use Camera.getParameters().getPreviewSize() to see what size the output actually is.
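For example (a minimal sketch of that check; not part of the original answer):
Camera.Size previewSize = camera.getParameters().getPreviewSize();
Log.d("Camera", "Preview size: " + previewSize.width + "x" + previewSize.height);
// Use previewSize.width / previewSize.height when interpreting the NV21 buffer,
// instead of assuming the dimensions you requested.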

I made this:
int frameSize = width * height;
// ret is the NV21 frame (e.g. a copy of the preview data); overwrite the interleaved
// VU plane with 127 (neutral chroma) so the frame becomes grayscale.
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        ret[frameSize + (i >> 1) * width + (j & ~1) + 1] = 127; // U
        ret[frameSize + (i >> 1) * width + (j & ~1) + 0] = 127; // V
    }
}
So simple, but it works really well and fast ;)
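If the goal is just to save the frame, a neutralised NV21 buffer like that can also be written straight to a JPEG with YuvImage, skipping the manual RGB conversion entirely (a sketch under that assumption, not part of the original answer; exception handling omitted):
// data is the NV21 preview buffer with U/V forced to 127 as above.
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
FileOutputStream out = new FileOutputStream(new File(
        Environment.getExternalStorageDirectory(), "test.jpg"));
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, out);
out.close();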


How to crop a byte[] array containing Y' (luma) without converting to Bitmap

EDIT: Solved! See below.
I need to crop my image (YUV_422_888 color space) which I obtain from the onImageAvailable listener of Camera2. I don't want or need to convert it to a Bitmap as it affects performance a lot, and I'm actually interested in the luma and not in the RGB information (the luma is contained in Plane 0 of the Image).
I came up with the following solution:
1. Get the Y' information contained in Plane 0 of the Image object made available by Camera2 in the listener.
2. Convert the Y' plane into a byte[] array called in.
3. Convert the byte[] array to a 2D byte[][] array in order to crop.
4. Use some for loops to crop at the desired left, right, top and bottom coordinates.
5. Fold the 2D byte[][] array back into a 1D byte[] array called out, containing the cropped luma (Y') information.
Point 4 unfortunately yields a corrupt image. What am I doing wrong?
In the onImageAvailableListener of Camera2 (please note that although I am computing a bitmap, it's only to see what's happening, as I'm not interested in the Bitmap/RGB data):
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer(); // Grab just the Y' plane.
buffer.rewind();
byte[] data = new byte[buffer.capacity()];
buffer.get(data);
Bitmap bitmap = cropByteArray(data, image.getWidth(), image.getHeight()); // Just for preview/sanity check purposes. The bitmap is **corrupt**.
runOnUiThread(new bitmapRunnable(bitmap) {
    @Override
    public void run() {
        image_view_preview.setImageBitmap(this.bitmap);
    }
});
The cropByteArray function needs fixing. It outputs a bitmap that is corrupt, and should output an out byte[] array similar to in, but containing only the cropped area:
public Bitmap cropByteArray(byte[] in, int inw, int inh) {
    int l = 100; // left crop start
    int r = 400; // right crop end
    int t = 400; // top crop start
    int b = 700; // bottom crop end
    int outw = r - l;
    int outh = b - t;
    byte[][] in2d = new byte[inw][inh]; // input width and height are 1080 x 1920.
    byte[] out = new byte[outw * outh];
    int[] pixels = new int[outw * outh];
    int i = 0;
    for (int col = 0; col < inw; col++) {
        for (int row = 0; row < inh; row++) {
            in2d[col][row] = in[i++];
        }
    }
    i = 0;
    for (int col = l; col < r; col++) {
        for (int row = t; row < b; row++) {
            //out[i++] = in2d[col][row]; // out is the desired output of the function, but for now we output a bitmap instead
            int grey = in2d[col][row] & 0xff;
            pixels[i++] = 0xFF000000 | (grey * 0x00010101);
        }
    }
    return Bitmap.createBitmap(pixels, inw, inh, Bitmap.Config.ARGB_8888);
}
EDIT: Solved thanks to the suggestion by Eddy Talvala. The following code will yield the Y' (luma plane 0 from the ImageReader) cropped to the desired coordinates. The cropped data is in the out byte array. The bitmap is generated just for confirmation. I am also attaching the handy YUVtoGrayscale() function below.
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int stride = planes[0].getRowStride();
buffer.rewind();
byte[] Y = new byte[buffer.capacity()];
buffer.get(Y);
int t = 200; int l = 600;
int out_h = 600; int out_w = 600;
byte[] out = new byte[out_w * out_h];
int firstRowOffset = stride * t + l;
for (int row = 0; row < out_h; row++) {
    buffer.position(firstRowOffset + row * stride);
    buffer.get(out, row * out_w, out_w);
}
Bitmap bitmap = YUVtoGrayscale(out, out_w, out_h);
Here goes the YUVtoGrayscale().
public Bitmap YUVtoGrayscale(byte[] yuv, int width, int height) {
    int[] pixels = new int[yuv.length];
    for (int i = 0; i < yuv.length; i++) {
        int grey = yuv[i] & 0xff;
        pixels[i] = 0xFF000000 | (grey * 0x00010101);
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
There are some remaining issues. I am using the front camera, and although the preview orientation is correct inside the TextureView, the image returned by the ImageReader is rotated clockwise and flipped vertically (a person is lying on their right cheek in the preview, only that right cheek is actually the left cheek because of the vertical flip) on my device, which has a sensor orientation of 270 degrees. Is there an accepted solution to have both the preview and the saved photos in the same, correct orientation using Camera2?
Cheers.
It'd be helpful if you described how the image is corrupt - do you see a valid image but it's distorted, or is it just total garbage, or just total black?
But I'm guessing you're not paying attention to the row stride of the Y plane (https://developer.android.com/reference/android/media/Image.Plane.html#getRowStride() ), which would typically result in an image that's skewed (vertical lines become angled lines).
When accessing the Y plane, the byte index of pixel (x,y) is:
y * rowStride + x
not
y * width + x
because row stride may be larger than width.
I'd also avoid copying so much; you really don't need the 2D array, and a large byte[] for the image also wastes memory.
You can instead seek() to the start of each output row, and then only read the bytes you need to copy straight into your destination byte[] out with ByteBuffer.get(byte[], offset, length).
That'd look something like
int stride = planes[0].getRowStride();
ByteBuffer img = planes[0].getBuffer();
int firstRowOffset = stride * t + l;
for (int row = 0; row < outh; row++) {
    img.position(firstRowOffset + row * stride);
    img.get(out, row * outw, outw);
}

Custom byteArray data to WebRTC videoTrack

I need to use WebRTC for Android to send a specific cropped (face) video to the video channel. I was able to manipulate the Camera1Session class of WebRTC to get the face cropped. Right now I am setting it to an ImageView.
listenForBytebufferFrames() of Camera1Session.java
private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                mFrameProcessor.setNextFrame(data, callbackCamera);
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int) TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }
                ByteBuffer byteBuffer1 = ByteBuffer.wrap(data);
                Frame outputFrame = new Frame.Builder()
                        .setImageData(byteBuffer1,
                                Camera1Session.this.captureFormat.width,
                                Camera1Session.this.captureFormat.height,
                                ImageFormat.NV21)
                        .setTimestampMillis(mFrameProcessor.mPendingTimeMillis)
                        .setId(mFrameProcessor.mPendingFrameId)
                        .setRotation(3)
                        .build();
                int w = outputFrame.getMetadata().getWidth();
                int h = outputFrame.getMetadata().getHeight();
                SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);
                if (detectedFaces.size() > 0) {
                    Face face = detectedFaces.valueAt(0);
                    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
                    byte[] byteBuffer = byteBufferRaw.array();
                    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    //My crop logic to get face co-ordinates
                    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
                    final byte[] jpegArray = baos.toByteArray();
                    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
                    Activity currentActivity = getActivity();
                    if (currentActivity instanceof CallActivity) {
                        ((CallActivity) currentActivity).setBitmapToImageView(bitmap); //face on ImageView is set just fine
                    }
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                } else {
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                }
            }
        }
    });
}
jpegArray is the final byteArray that I need to stream via WebRTC, which I tried with something like this:
Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, jpegArray, (int) face.getWidth(), (int) face.getHeight(), Camera1Session.this.getFrameOrientation(), captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(jpegArray);
Setting them up like this gives me the following error:
../../webrtc/sdk/android/src/jni/androidvideotracksource.cc line 82
Check failed: length >= width * height + 2 * uv_width * ((height + 1) / 2) (2630 vs. 460800)
Which I assume is because androidvideotracksource does not get the same length of byteArray that it expects, since the frame is cropped now.
Could someone point me in the direction of how to achieve it? Is this the correct way/place to manipulate the data and feed into the videoTrack?
Edit: a Bitmap decoded from the byteArray data does not give me a camera preview on the ImageView, unlike the byteArray jpegArray. Maybe because they are packed differently?
Could you use WebRTC's DataChannel to exchange the custom data, i.e. the cropped face "image" in your case, and do the respective processing at the receiving end using a third-party library such as OpenGL? I am suggesting this because the WebRTC video feed received from the channel is a real-time stream, not a byte array. By its inherent architecture, WebRTC video isn't meant to crop video; if we want to crop or augment the video, we have to use some AR library to do that job.
We can always leverage WebRTC's DataChannel to exchange custom data. Using the video channel for this is not recommended because it is a real-time stream, not a byte array. Please revert in case of any concern.
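A minimal sketch of sending such bytes over a data channel with the Android WebRTC SDK (org.webrtc.DataChannel); the peerConnection and jpegArray variables are assumed to already exist in your code:
DataChannel.Init init = new DataChannel.Init();
DataChannel faceChannel = peerConnection.createDataChannel("face-frames", init);
// Later, for each cropped frame:
ByteBuffer payload = ByteBuffer.wrap(jpegArray);
faceChannel.send(new DataChannel.Buffer(payload, true)); // true = binary data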
WebRTC in particular, and video streaming in general, presumes that the video has fixed dimensions. If you want to crop the detected face, your options are either to pad the cropped image with e.g. black pixels (WebRTC does not use transparency) and crop the video on the receiver side, or, if you don't have control over the receiver, to resize the cropped region to fill the expected width * height frame (you should also keep the expected aspect ratio). See the padding sketch below.
Note that the JPEG compress/decompress that you use to crop the original is far from efficient. Some other options can be found in Image crop and resize in Android.
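For the padding option, a rough sketch of centring a cropped NV21 face inside a full-size black NV21 frame (my own illustration, not code from the answer; it assumes even face dimensions, and black in NV21 means Y = 0 with U = V = 128):
byte[] padNV21(byte[] face, int faceW, int faceH, int frameW, int frameH) {
    byte[] frame = new byte[frameW * frameH * 3 / 2]; // Y plane starts zeroed (black)
    // Neutral chroma (128) for the padding so it stays grey/black.
    java.util.Arrays.fill(frame, frameW * frameH, frame.length, (byte) 128);
    // Centre offsets, forced even so the interleaved VU plane stays aligned.
    int offX = ((frameW - faceW) / 2) & ~1;
    int offY = ((frameH - faceH) / 2) & ~1;
    for (int row = 0; row < faceH; row++) {
        // Copy one luma row of the face into the middle of the frame.
        System.arraycopy(face, row * faceW,
                frame, (offY + row) * frameW + offX, faceW);
    }
    for (int row = 0; row < faceH / 2; row++) {
        // Copy one interleaved VU row (half the vertical resolution).
        System.arraycopy(face, faceW * faceH + row * faceW,
                frame, frameW * frameH + (offY / 2 + row) * frameW + offX, faceW);
    }
    return frame;
}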
Okay, this was definitely a problem with how the original byte[] data was packed versus how byte[] jpegArray was packed. Changing the way it is packed and scaling it as AlexCohn suggested worked for me. I found help in another post on Stack Overflow about how to pack it. This is the code for it:
private byte[] getNV21(int left, int top, int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, left, top, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}

private void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I pass this byte[] data to onByteBufferFrameCaptured and callback:
Camera1Session.this.events.onByteBufferFrameCaptured(
Camera1Session.this,
data,
w,
h,
Camera1Session.this.getFrameOrientation(),
captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(data);
Prior to this, I had to scale the bitmap, which is pretty straightforward:
int width = bitmapToScale.getWidth();
int height = bitmapToScale.getHeight();
Matrix matrix = new Matrix();
// Cast to float so the scale factors aren't truncated by integer division
// (assuming newWidth and newHeight are ints).
matrix.postScale((float) newWidth / width, (float) newHeight / height);
Bitmap scaledBitmap = Bitmap.createBitmap(bitmapToScale, 0, 0, bitmapToScale.getWidth(), bitmapToScale.getHeight(), matrix, true);

Memory allocation for YUV buffer to convert into RGB

I'm facing an issue on a few Android devices while copying the data from DecodeFrame2().
This is my code:
uint8_t* m_yuvData[3];
SBufferInfo yuvDataInfo;
memset(&yuvDataInfo, 0, sizeof(SBufferInfo));
m_yuvData[0] = NULL;
m_yuvData[1] = NULL;
m_yuvData[2] = NULL;
DECODING_STATE decodingState = m_decoder->DecodeFrame2(bufferData, bufferDataSize, m_yuvData, &yuvDataInfo);
if (yuvDataInfo.iBufferStatus == 1)
{
    int yStride = yuvDataInfo.UsrData.sSystemBuffer.iStride[0];
    int uvStride = yuvDataInfo.UsrData.sSystemBuffer.iStride[1];
    uint32_t width = yuvDataInfo.UsrData.sSystemBuffer.iWidth;
    uint32_t height = yuvDataInfo.UsrData.sSystemBuffer.iHeight;
    size_t yDataSize = (width * height) + (height * yStride);
    size_t uvDataSize = (((width * height) / 4) + (height * uvStride));
    size_t requiredSize = yDataSize + (2 * uvDataSize);
    uint8_t* yuvBufferedData = (uint8_t*)malloc(requiredSize);
    // when I copy m_yuvData[0] to another location I am getting a crash.
    memcpy(yuvBufferedData, m_yuvData[0], yDataSize);
    memcpy(yuvBufferedData + yDataSize, m_yuvData[1], uvDataSize);
    memcpy(yuvBufferedData + yDataSize + uvDataSize, m_yuvData[2], uvDataSize);
}
The above code snippet works on high-end Android devices, but on a few devices, after the first frame is processed, I get a crash in the first memcpy() statement from the second frame onwards.
What is wrong in this code? And how do I calculate the buffer size from the output of DecodeFrame2()?
If I process alternate frames (instead of 30, just 15 alternate ones), the copy works fine.
Please help me fix this.
yDataSize and uvDataSize come out far too large with the above formula.
This issue has been fixed by modifying the size calculation.
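For reference, with stride-based planar output the sizes to copy would normally be (this is my reading of the standard YUV420 layout, not a detail given in the original answer):
y_size = yStride * height
uv_size = uvStride * (height / 2), for each of the two chroma planes
required = y_size + 2 * uv_size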

Android create Bitmap + crop causing OutOfMemory error(eventually)

I am taking 3 pictures in my app before uploading them to a remote server. The output is a byteArray. I am currently converting this byteArray to a bitmap and performing cropping on it (cropping the centre square). I eventually run out of memory (that is, after exiting the app, coming back, and performing the same steps). I am trying to re-use the bitmap object using BitmapFactory.Options as mentioned in the Android dev guide
https://www.youtube.com/watch?v=_ioFW3cyRV0&list=LLntRvRsglL14LdaudoRQMHg&index=2
and
https://www.youtube.com/watch?v=rsQet4nBVi8&list=LLntRvRsglL14LdaudoRQMHg&index=3
This is the function I call when I'm saving the image taken by the camera.
public void saveImageToDisk(Context context, byte[] imageByteArray, String photoPath, BitmapFactory.Options options) {
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);
    int imageHeight = options.outHeight;
    int imageWidth = options.outWidth;
    int dimension = getSquareCropDimensionForBitmap(imageWidth, imageHeight);
    Log.d(TAG, "Width : " + dimension);
    Log.d(TAG, "Height : " + dimension);
    //bitmap = cropBitmapToSquare(bitmap);
    options.inJustDecodeBounds = false;
    Bitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0,
            imageByteArray.length, options);
    options.inBitmap = bitmap;
    bitmap = ThumbnailUtils.extractThumbnail(bitmap, dimension, dimension,
            ThumbnailUtils.OPTIONS_RECYCLE_INPUT);
    options.inSampleSize = 1;
    Log.d(TAG, "After square crop Width : " + options.inBitmap.getWidth());
    Log.d(TAG, "After square crop Height : " + options.inBitmap.getHeight());
    byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
    options = null;
    File photo = new File(photoPath);
    if (photo.exists()) {
        photo.delete();
    }
    try {
        FileOutputStream e = new FileOutputStream(photo.getPath());
        BufferedOutputStream bos = new BufferedOutputStream(e);
        bos.write(croppedImageByteArray);
        bos.flush();
        e.getFD().sync();
        bos.close();
    } catch (IOException e) {
    }
}
public int getSquareCropDimensionForBitmap(int width, int height) {
    //If the bitmap is wider than it is tall
    //use the height as the square crop dimension
    int dimension;
    if (width >= height) {
        dimension = height;
    }
    //If the bitmap is taller than it is wide
    //use the width as the square crop dimension
    else {
        dimension = width;
    }
    return dimension;
}
public Bitmap cropBitmapToSquare(Bitmap source) {
    int h = source.getHeight();
    int w = source.getWidth();
    if (w >= h) {
        source = Bitmap.createBitmap(source, w / 2 - h / 2, 0, h, h);
    } else {
        source = Bitmap.createBitmap(source, 0, h / 2 - w / 2, w, w);
    }
    Log.d(TAG, "After crop Width : " + source.getWidth());
    Log.d(TAG, "After crop Height : " + source.getHeight());
    return source;
}
How do I correctly recycle or re-use bitmaps because as of now I am getting OutOfMemory errors?
UPDATE :
After implementing Colin's solution. I am running into an ArrayIndexOutOfBoundsException.
My logs are below
08-26 01:45:01.895 3600-3648/com.test.test E/AndroidRuntime﹕ FATAL EXCEPTION: pool-3-thread-1
Process: com.test.test, PID: 3600
java.lang.ArrayIndexOutOfBoundsException: length=556337; index=556337
at com.test.test.helpers.Utils.test(Utils.java:197)
at com.test.test.fragments.DemoCameraFragment.saveImageToDisk(DemoCameraFragment.java:297)
at com.test.test.fragments.DemoCameraFragment_.access$101(DemoCameraFragment_.java:30)
at com.test.test.fragments.DemoCameraFragment_$5.execute(DemoCameraFragment_.java:159)
at org.androidannotations.api.BackgroundExecutor$Task.run(BackgroundExecutor.java:401)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:422)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:152)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:265)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
P.S.: I had thought of cropping byteArrays before, but I did not know how to implement it.
You shouldn't need to do any conversion to bitmaps, actually.
Remember that your bitmap image data is RGBA_8888 formatted, meaning that every 4 contiguous bytes represent one pixel. As such:
// helpers to keep the math sane
int halfWidth = imgWidth >> 1;
int halfHeight = imgHeight >> 1;
int halfDim = dimension >> 1;
// get our min and max crop locations
int minX = halfWidth - halfDim;
int minY = halfHeight - halfDim;
int maxX = halfWidth + halfDim;
int maxY = halfHeight + halfDim;
// allocate our thumbnail; it's W x H x (4 bytes per pixel)
byte[] outArray = new byte[dimension * dimension * 4];
int outPtr = 0;
for (int y = minY; y < maxY; y++)
{
    for (int x = minX; x < maxX; x++)
    {
        // each row holds imgWidth pixels of 4 bytes each
        int srcLocation = ((y * imgWidth) + x) * 4;
        outArray[outPtr + 0] = imageByteArray[srcLocation + 0]; // read R
        outArray[outPtr + 1] = imageByteArray[srcLocation + 1]; // read G
        outArray[outPtr + 2] = imageByteArray[srcLocation + 2]; // read B
        outArray[outPtr + 3] = imageByteArray[srcLocation + 3]; // read A
        outPtr += 4;
    }
}
//outArray now contains the cropped pixels.
The end result is that you can do cropping by hand by just copying out the pixels you're looking for, rather than allocating a new bitmap object, and then converting that back to a byte array.
== EDIT:
Actually, the above algorithm assumes that your input data is the raw RGBA_8888 pixel data. But it sounds like your input byte array is instead the encoded JPEG data. As such, your 2nd decodeByteArray is actually decoding your JPEG file to the RGBA_8888 format. If this is the case, the proper thing to do for re-sizing is to use the techniques described in "Most memory efficient way to resize bitmaps on android?", since you're working with encoded data.
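Those techniques boil down to letting the decoder subsample while it decodes. A minimal sketch (my own, using the standard BitmapFactory.Options API; imageByteArray and dimension are the variables from saveImageToDisk above):
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true; // read only the dimensions, no pixel allocation
BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);
// Pick a power-of-two sample size so the decoded image stays close to the target dimension.
int sampleSize = 1;
while ((options.outWidth / (sampleSize * 2)) >= dimension
        && (options.outHeight / (sampleSize * 2)) >= dimension) {
    sampleSize *= 2;
}
options.inJustDecodeBounds = false;
options.inSampleSize = sampleSize;
Bitmap decoded = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length, options);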
Try setting more and more variables to null - this helps reclaim that memory;
after
byte[] croppedImageByteArray = convertBitmapToByteArray(bitmap);
do:
bitmap= null;
after
FileOutputStream e = new FileOutputStream(photo.getPath());
do
photo = null;
and after
try {
FileOutputStream e = new FileOutputStream(photo.getPath());
BufferedOutputStream bos = new BufferedOutputStream(e);
bos.write(croppedImageByteArray);
bos.flush();
e.getFD().sync();
bos.close();
} catch (IOException e) {
}
do:
e = null;
bos = null;
Edit #1
If this fails to help, your only real solution is actually using the memory monitor. To learn more, go here and here.
P.S. There is another very dark solution, a very dark solution, only for those who know how to navigate through the dark corners of off-heap memory. But you will have to follow this path on your own.

32 bpp monochrome bitmap to 1 bpp TIFF

My Android app uses an external lib that applies some image treatments. The final output of the treatment chain is a monochrome bitmap, but it is saved as a color bitmap (32 bpp).
The image has to be uploaded to a cloud blob, so for bandwidth concerns I'd like to convert it to a 1 bpp, G4-compressed TIFF. I successfully integrated libTIFF in my app via JNI and now I'm writing the conversion routine in C. I'm a little stuck here.
I managed to produce a 32 bpp TIFF, but it seems impossible to reduce it to 1 bpp; the output image is always unreadable. Did someone succeed in doing a similar task?
More specifically:
What should be the values of the SAMPLESPERPIXEL and BITSPERSAMPLE parameters?
How to determine the strip size?
How to fill each strip? (i.e. how to convert 32 bpp pixel lines into 1 bpp pixel strips?)
Many thanks!
UPDATE: The code produced with the much-appreciated help of Mohit Jain:
int ConvertMonochrome32BppBitmapTo1BppTiff(char* bitmap, int height, int width, int resx, int resy, char const *tifffilename)
{
    TIFF *tiff;
    if ((tiff = TIFFOpen(tifffilename, "w")) == NULL)
    {
        return TC_ERROR_OPEN_FAILED;
    }
    // TIFF settings
    TIFFSetField(tiff, TIFFTAG_RESOLUTIONUNIT, RESUNIT_INCH);
    TIFFSetField(tiff, TIFFTAG_XRESOLUTION, resx);
    TIFFSetField(tiff, TIFFTAG_YRESOLUTION, resy);
    TIFFSetField(tiff, TIFFTAG_COMPRESSION, COMPRESSION_CCITTFAX4); //Group 4 compression
    TIFFSetField(tiff, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tiff, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tiff, TIFFTAG_ROWSPERSTRIP, 1);
    TIFFSetField(tiff, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tiff, TIFFTAG_BITSPERSAMPLE, 1);
    TIFFSetField(tiff, TIFFTAG_ORIENTATION, ORIENTATION_TOPLEFT);
    TIFFSetField(tiff, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tiff, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISWHITE);
    tsize_t tbufsize = (width + 7) / 8; //TIFF scanline buffer size for a 1 bpp pixel row
    //Now write the image to the file one row at a time
    int x, y;
    for (y = 0; y < height; y++)
    {
        char *buffer = malloc(tbufsize);
        memset(buffer, 0, tbufsize);
        for (x = 0; x < width; x++)
        {
            //offset of the 1st byte of each pixel in the input image (enough to decide black or white in a 32 bpp monochrome bitmap)
            uint32 bmpoffset = ((y * width) + x) * 4;
            if (bitmap[bmpoffset] == 0) //Black pixel?
            {
                uint32 tiffoffset = x / 8;
                *(buffer + tiffoffset) |= (0b10000000 >> (x % 8));
            }
        }
        if (TIFFWriteScanline(tiff, buffer, y, 0) != 1)
        {
            return TC_ERROR_WRITING_FAILED;
        }
        if (buffer)
        {
            free(buffer);
            buffer = NULL;
        }
    }
    TIFFClose(tiff);
    tiff = NULL;
    return TC_SUCCESSFULL;
}
To convert 32 bpp to 1 bpp, extract the RGB values, convert them to Y (luminance) and use some threshold to convert to 1 bpp.
Samples per pixel and bits per sample should both be 1.
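A minimal sketch of that step (my own illustration, using the common BT.601 integer weights that also appear in encodeYUV420SP earlier in this page): compute
Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16
for each pixel, then treat the pixel as black when Y falls below a chosen threshold (for example 128) and set the corresponding bit in the 1 bpp scanline buffer, exactly as the bitmap[bmpoffset] == 0 test does in the code above.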
