Problems when scaling a YUV image using libyuv library - android

I'm developing a camera app based on Camera API 2 and I have found several problems using the libyuv library.
I want to convert YUV_420_888 images retrieved from an ImageReader, but I'm having some problems with scaling in a reprocessable surface.
In essence: images come out with tones of green instead of the corresponding tones (I'm exporting the .yuv files and checking them with http://rawpixels.net/).
You can see an input example here:
And what I get after I perform scaling:
I think I am doing something wrong with the strides, or providing an invalid YUV format (maybe I have to transform the image to another format?). However, I can't figure out where the error is, since I don't know how to correlate the green color to the scaling algorithm.
This is the conversion code I am using; you can ignore the return NULL, as there is further processing that is not related to the problem.
#include <jni.h>
#include <stdint.h>
#include <android/log.h>
#include <inc/libyuv/scale.h>
#include <inc/libyuv.h>
#include <stdio.h>

#define LOG_TAG "libyuv-jni"

#define unused(x) UNUSED_ ## x __attribute__((__unused__))
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)

struct YuvFrame {
    int width;
    int height;
    uint8_t *data;
    uint8_t *y;
    uint8_t *u;
    uint8_t *v;
};

static struct YuvFrame i420_input_frame;
static struct YuvFrame i420_output_frame;

extern "C" {

JNIEXPORT jbyteArray JNICALL
Java_com_android_camera3_camera_hardware_session_output_photo_yuv_YuvJniInterface_scale420YuvByteArray(
        JNIEnv *env, jclass /*clazz*/, jbyteArray yuvByteArray_, jint src_width, jint src_height,
        jint out_width, jint out_height) {

    jbyte *yuvByteArray = env->GetByteArrayElements(yuvByteArray_, NULL);

    //Get input and output length
    int input_size = env->GetArrayLength(yuvByteArray_);
    int out_size = out_height * out_width;

    //Generate input frame
    i420_input_frame.width = src_width;
    i420_input_frame.height = src_height;
    i420_input_frame.data = (uint8_t *) yuvByteArray;
    i420_input_frame.y = i420_input_frame.data;
    i420_input_frame.u = i420_input_frame.y + input_size;
    i420_input_frame.v = i420_input_frame.u + input_size / 4;

    //Generate output frame
    free(i420_output_frame.data);
    i420_output_frame.width = out_width;
    i420_output_frame.height = out_height;
    i420_output_frame.data = new unsigned char[out_size * 3 / 2];
    i420_output_frame.y = i420_output_frame.data;
    i420_output_frame.u = i420_output_frame.y + out_size;
    i420_output_frame.v = i420_output_frame.u + out_size / 4;

    libyuv::FilterMode mode = libyuv::FilterModeEnum::kFilterBilinear;

    int result = I420Scale(i420_input_frame.y, i420_input_frame.width,
                           i420_input_frame.u, i420_input_frame.width / 2,
                           i420_input_frame.v, i420_input_frame.width / 2,
                           i420_input_frame.width, i420_input_frame.height,
                           i420_output_frame.y, i420_output_frame.width,
                           i420_output_frame.u, i420_output_frame.width / 2,
                           i420_output_frame.v, i420_output_frame.width / 2,
                           i420_output_frame.width, i420_output_frame.height,
                           mode);
    LOGD("Image result %d", result);
    env->ReleaseByteArrayElements(yuvByteArray_, yuvByteArray, 0);
    return NULL;
}

} // extern "C"

You can try this code; it uses y_size instead of the full size of your array.
...
//Get input and output length
int input_size = env->GetArrayLength(yuvByteArray_);
int y_size = src_width * src_height;
int out_size = out_height * out_width;
//Generate input frame
i420_input_frame.width = src_width;
i420_input_frame.height = src_height;
i420_input_frame.data = (uint8_t *) yuvByteArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + y_size / 4;
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = out_width;
i420_output_frame.height = out_height;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + out_size / 4;
...
Your code is probably based on https://github.com/begeekmyfriend/yasea/blob/master/library/src/main/libenc/jni/libenc.cc, and according to that code you have to use y_size.

You have an issue with the input size of the frame:
It should be:
int input_array_size = env->GetArrayLength(yuvByteArray_);
int input_size = input_array_size * 2 / 3; //This is the frame size
For example, if you have a frame that is 6x4:
Channel Y size: 6*4 = 24
1 2 3 4 5 6
_ _ _ _ _ _
|_|_|_|_|_|_| 1
|_|_|_|_|_|_| 2
|_|_|_|_|_|_| 3
|_|_|_|_|_|_| 4
Channel U size: 3*2 = 6
1 2 3
_ _ _ _ _ _
| | | |
|_ _|_ _|_ _| 1
| | | |
|_ _|_ _|_ _| 2
Channel V size: 3*2 = 6
1 2 3
_ _ _ _ _ _
| | | |
|_ _|_ _|_ _| 1
| | | |
|_ _|_ _|_ _| 2
Array size = 6*4 + 3*2 + 3*2 = 36
But the actual frame size = channel Y size = 36 * 2 / 3 = 24
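The same arithmetic as a minimal standalone sketch (the 6x4 numbers are taken from the example above):

#include <stdio.h>

// I420 plane sizes and offsets for the 6x4 example frame.
int main(void) {
    int width = 6, height = 4;
    int y_size = width * height;              // 24
    int uv_size = (width / 2) * (height / 2); // 6 per chroma plane
    int array_size = y_size + 2 * uv_size;    // 36
    printf("y=%d u=%d v=%d total=%d\n", y_size, uv_size, uv_size, array_size);
    printf("offsets: y=0 u=%d v=%d\n", y_size, y_size + uv_size);
    return 0;
}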

gmetax is almost correct.
You are using the size of the entire array where you should be using the size of the Y component, which is src_width * src_height.
gmetax's answer is wrong in that he has put y_size in place of out_size when defining the output frame. The correct code snippet, I believe, would look like:
//Get input and output length
int input_size = env->GetArrayLength(yuvByteArray_);
int y_size = src_width * src_height;
int out_size = out_height * out_width;
//Generate input frame
i420_input_frame.width = src_width;
i420_input_frame.height = src_height;
i420_input_frame.data = (uint8_t *) yuvByteArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + y_size / 4;
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = out_width;
i420_output_frame.height = out_height;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + out_size / 4;
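With these plane offsets in place, the I420Scale call from the original question can stay exactly as it is; only the u and v pointers (and the sizes they are derived from) change.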

You are trying to scale your YUV422 image as if it were YUV420, so no wonder the colors are all messed up. First of all, you need to figure out the exact format of your YUV input buffer. From the documentation of YUV_422_888, it looks like it may represent planar as well as interleaved formats (if the pixel stride is not 1). From your results, it looks like your source is planar and the processing of the Y plane is OK, but your error is in handling the U and V planes. To get the scaling right:
- Figure out whether your U and V planes are interleaved or planar. Most likely they are planar as well.
- Use ScalePlane from libyuv to scale U and V separately. If you step into I420Scale, you will see that it calls ScalePlane for the individual planes. Do the same, but use the correct line sizes for your U and V planes (each is twice as large as what I420Scale expects); see the sketch after this list.
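A minimal sketch of that plane-by-plane approach, assuming planar 4:2:2 input (so the chroma planes are half width but full height; the function and variable names are illustrative):

#include "libyuv/scale.h"

// Hedged sketch: scale a planar I422 frame with libyuv::ScalePlane,
// one plane at a time, instead of going through I420Scale.
void ScaleI422(const uint8_t *srcY, const uint8_t *srcU, const uint8_t *srcV,
               int srcWidth, int srcHeight,
               uint8_t *dstY, uint8_t *dstU, uint8_t *dstV,
               int dstWidth, int dstHeight) {
    // Y plane: full resolution.
    libyuv::ScalePlane(srcY, srcWidth, srcWidth, srcHeight,
                       dstY, dstWidth, dstWidth, dstHeight,
                       libyuv::kFilterBilinear);
    // U and V planes: half width but FULL height in 4:2:2 -- this is
    // where the line sizes differ from what I420Scale assumes.
    libyuv::ScalePlane(srcU, srcWidth / 2, srcWidth / 2, srcHeight,
                       dstU, dstWidth / 2, dstWidth / 2, dstHeight,
                       libyuv::kFilterBilinear);
    libyuv::ScalePlane(srcV, srcWidth / 2, srcWidth / 2, srcHeight,
                       dstV, dstWidth / 2, dstWidth / 2, dstHeight,
                       libyuv::kFilterBilinear);
}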
Some tips on how to figure out whether you have planar or interleaved U and V: try skipping the scaling and just saving the image, to ensure that you get a correct result (identical to the source). Then try zeroing out the U plane or the V plane and see what you get. If U and V are planar and you memset the U plane to zero, you should see the entire picture change color. If they are interleaved, half of the picture will change and the other half will stay the same. In the same way, you can check your assumptions about the sizes, line sizes, and offsets of your planes. Once you are sure about your YUV format and layout, you can scale the individual planes if your input is planar; if your input is interleaved, you first need to deinterleave the planes and then scale them.
Alternatively, you can use libswscale from ffmpeg/libav and try different formats to find the correct one, and then use libyuv.

The green images were caused by one of the planes being full of 0's, which means one of the planes was empty. That happened because I was treating the image as YUV I420 when the images coming from the Android camera framework are actually NV21 YUVs.
We need to convert them to YUV I420 to work properly with libyuv. After that we can start using the multiple operations that the library offers, like rotate, scale, etc.
Here is a snippet showing what the scaling method looks like:
JNIEXPORT jint JNICALL
Java_com_aa_project_images_yuv_myJNIcl_scaleI420(JNIEnv *env, jclass type,
                                                 jobject srcBufferY,
                                                 jobject srcBufferU,
                                                 jobject srcBufferV,
                                                 jint srcWidth, jint srcHeight,
                                                 jobject dstBufferY,
                                                 jobject dstBufferU,
                                                 jobject dstBufferV,
                                                 jint dstWidth, jint dstHeight,
                                                 jint filterMode) {
    const uint8_t *srcY = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferY));
    const uint8_t *srcU = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferU));
    const uint8_t *srcV = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferV));
    uint8_t *dstY = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferY));
    uint8_t *dstU = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferU));
    uint8_t *dstV = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferV));

    return libyuv::I420Scale(srcY, srcWidth,
                             srcU, srcWidth / 2,
                             srcV, srcWidth / 2,
                             srcWidth, srcHeight,
                             dstY, dstWidth,
                             dstU, dstWidth / 2,
                             dstV, dstWidth / 2,
                             dstWidth, dstHeight,
                             static_cast<libyuv::FilterMode>(filterMode));
}
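The NV21-to-I420 conversion itself isn't shown above; a minimal sketch of it using libyuv's NV21ToI420 (the buffer layout and names are assumptions, not the answer's actual code) might be:

#include "libyuv/convert.h"

// Hedged sketch: convert an NV21 buffer (Y plane followed by an
// interleaved V/U plane) into three separate I420 planes, so libyuv's
// I420* operations work on it afterwards.
void Nv21ToI420(const uint8_t *nv21, int width, int height,
                uint8_t *dstY, uint8_t *dstU, uint8_t *dstV) {
    const uint8_t *srcY = nv21;
    const uint8_t *srcVU = nv21 + width * height; // interleaved V/U plane
    libyuv::NV21ToI420(srcY, width,
                       srcVU, width,     // VU stride is the full width
                       dstY, width,
                       dstU, width / 2,
                       dstV, width / 2,
                       width, height);
}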

Related

Android NDK converting YUV420 AIMAGE to RGB

I'm working on streaming camera frames using the NDK Camera2 API. The format of my AImageReader is YUV420; however, I would like to convert it to RGB. I looked up some Java examples that do the same, but for some reason it doesn't work well here and the image is distorted.
The frame resolution is 640x480.
Can someone tell me what I am doing wrong here?
void NativeCamera::previewImageCallback(void *context, AImageReader *reader)
{
    Log::Debug(TAG) << "previewImageCallback" << Log::Endl;
    AImage *previewImage = nullptr;
    auto status = AImageReader_acquireLatestImage(reader, &previewImage);
    if (status != AMEDIA_OK)
    {
        return;
    }
    std::thread processor([=]()
    {
        uint8_t *dataY = nullptr;
        uint8_t *dataU = nullptr;
        uint8_t *dataV = nullptr;
        int lenY = 0;
        int lenU = 0;
        int lenV = 0;
        AImage_getPlaneData(previewImage, 0, (uint8_t **) &dataY, &lenY);
        AImage_getPlaneData(previewImage, 1, (uint8_t **) &dataU, &lenU);
        AImage_getPlaneData(previewImage, 2, (uint8_t **) &dataV, &lenV);
        uchar buff[lenY + lenU + lenV];
        memcpy(buff + 0, dataY, lenY);
        memcpy(buff + lenY, dataV, lenV);
        memcpy(buff + lenY + lenV, dataU, lenU);
        cv::Mat yuvMat(480 + 240, 640, CV_8UC1, &buff);
        cv::Mat rgbMat;
        cv::cvtColor(yuvMat, rgbMat, cv::COLOR_YUV2RGB_NV21, 3);
        //colorBuffer defined elsewhere
        memcpy((char *) colorBuffer, rgbMat.data, 640 * 480 * 3);
That happens because the YUV420p format has 6 bytes per 4 pixels, but RGB has 12 bytes per 4 pixels, so you have to repack your array first; cv::cvtColor does not do that. Check here about YUV420p packing.
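To make the ratio concrete for the 640x480 frame in the question: the YUV420 input is 640 * 480 * 3 / 2 = 460,800 bytes, while the RGB output is 640 * 480 * 3 = 921,600 bytes.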

Scaling YUV420 Image using libyuv produces weird output

Possible duplicate of this question, with major parts picked from there. I've tried whatever solutions were provided there; they don't work for me.
Background
I'm capturing an image in the YUV_420_888 image format returned from ARCore's frame.acquireCameraImage() method. Since I've set the camera configuration at 1920*1080 resolution, I need to scale it down to 224*224 to pass it to my tensorflow-lite implementation. I do that by using the LibYuv library through the Android NDK.
Implementation
Prepare the image frames
//Figure out the source image dimensions
int y_size = srcWidth * srcHeight;
//Get dimensions of the desired output image
int out_size = destWidth * destHeight;
//Generate input frame
i420_input_frame.width = srcWidth;
i420_input_frame.height = srcHeight;
i420_input_frame.data = (uint8_t*) yuvArray;
i420_input_frame.y = i420_input_frame.data;
i420_input_frame.u = i420_input_frame.y + y_size;
i420_input_frame.v = i420_input_frame.u + (y_size / 4);
//Generate output frame
free(i420_output_frame.data);
i420_output_frame.width = destWidth;
i420_output_frame.height = destHeight;
i420_output_frame.data = new unsigned char[out_size * 3 / 2];
i420_output_frame.y = i420_output_frame.data;
i420_output_frame.u = i420_output_frame.y + out_size;
i420_output_frame.v = i420_output_frame.u + (out_size / 4);
I scale my image using Libyuv's I420Scale method
libyuv::FilterMode mode = libyuv::FilterModeEnum::kFilterBox;
jint result = libyuv::I420Scale(i420_input_frame.y, i420_input_frame.width,
i420_input_frame.u, i420_input_frame.width / 2,
i420_input_frame.v, i420_input_frame.width / 2,
i420_input_frame.width, i420_input_frame.height,
i420_output_frame.y, i420_output_frame.width,
i420_output_frame.u, i420_output_frame.width / 2,
i420_output_frame.v, i420_output_frame.width / 2,
i420_output_frame.width, i420_output_frame.height,
mode);
and return it to Java:
//Create a new byte array to return to the caller in Java
jbyteArray outputArray = env->NewByteArray(out_size * 3 / 2);
env->SetByteArrayRegion(outputArray, 0, out_size, (jbyte *) i420_output_frame.y);
env->SetByteArrayRegion(outputArray, out_size, out_size / 4, (jbyte *) i420_output_frame.u);
env->SetByteArrayRegion(outputArray, out_size + (out_size / 4), out_size / 4, (jbyte *) i420_output_frame.v);
Actual image:
What it looks like post scaling:
What it looks like if I create an image from the i420_input_frame without scaling:
Since the scaling messes up the colors, TensorFlow fails to recognize objects properly (it recognizes them properly in their sample application). What am I doing wrong that messes up the colors?
Either I was doing something wrong (which I couldn't fix) or LibYuv does not handle colors properly while dealing with YUV images from Android.
Refer to the official bug posted on the Libyuv library: https://bugs.chromium.org/p/libyuv/issues/detail?id=815&can=1&q=&sort=-id
They suggested I use the method Android420ToI420() first and then apply whatever transformations I need. I ended up using Android420ToI420() first, then scaling, then transformation to RGB. In the end, the output was slightly better than the cup image posted above, but the distorted colors were still present. I ended up using OpenCV to shrink the image and convert it to RGBA or RGB formats.
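For reference, the Android420ToI420() step they suggested might look roughly like this (a sketch only; the plane pointers, row strides, and the chroma pixel stride are assumed to come from Image.getPlanes()). The OpenCV path I eventually settled on follows below.

#include "libyuv/convert.h"

// Hedged sketch: normalize an Android YUV_420_888 image to I420.
// uvPixelStride is 1 for planar and 2 for interleaved chroma; libyuv
// handles both layouts through this one entry point.
int Android420ToI420Sketch(const uint8_t *srcY, int strideY,
                           const uint8_t *srcU, int strideU,
                           const uint8_t *srcV, int strideV,
                           int uvPixelStride, int width, int height,
                           uint8_t *dstY, uint8_t *dstU, uint8_t *dstV) {
    return libyuv::Android420ToI420(srcY, strideY,
                                    srcU, strideU,
                                    srcV, strideV,
                                    uvPixelStride,
                                    dstY, width,
                                    dstU, width / 2,
                                    dstV, width / 2,
                                    width, height);
}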
// The camera image received is in YUV YCbCr format at preview dimensions,
// so we will scale it down to 224x224 using OpenCV.
// Y plane (0) non-interleaved => stride == 1; U/V plane interleaved => stride == 2
// Refer: https://developer.android.com/reference/android/graphics/ImageFormat.html#YUV_420_888
val cameraPlaneY = cameraImage.planes[0].buffer
val cameraPlaneUV = cameraImage.planes[1].buffer

// Create a new Mat with OpenCV, one for each plane - Y and UV
val y_mat = Mat(cameraImage.height, cameraImage.width, CvType.CV_8UC1, cameraPlaneY)
val uv_mat = Mat(cameraImage.height / 2, cameraImage.width / 2, CvType.CV_8UC2, cameraPlaneUV)
var mat224 = Mat()
var cvFrameRGBA = Mat()

// Retrieve an RGBA frame from the produced YUV
Imgproc.cvtColorTwoPlane(y_mat, uv_mat, cvFrameRGBA, Imgproc.COLOR_YUV2BGRA_NV21)

// Then use this frame to retrieve all RGB channel data
// Iterate over all pixels and retrieve information of RGB channels
for (rows in 1 until cvFrameRGBA.rows()) {
    for (cols in 1 until cvFrameRGBA.cols()) {
        val imageData = cvFrameRGBA.get(rows, cols)
        // Type of Mat is 24
        // Channels is 4
        // Depth is 0
        rgbBytes.put(imageData[0].toByte())
        rgbBytes.put(imageData[1].toByte())
        rgbBytes.put(imageData[2].toByte())
    }
}
The color problem is caused because you are working with a different YUV format. The YUV format that the camera framework uses is YUV NV21. This format (NV21) is the standard picture format on the Android camera preview: a YUV 4:2:0 planar image, with 8-bit Y samples, followed by an interleaved V/U plane with 8-bit 2x2 subsampled chroma samples.
If your colors are inverted, it means that either:
- you are working with YUV NV12 (plane U is V and V is U), or
- one of your color planes is doing something weird.
To work properly with libyuv, I suggest you convert your camera output to YUV I420 using ConvertToI420, passing the source format as a parameter:
return libyuv::ConvertToI420(src, srcSize,                               //src data
                             dstY, dstWidth,                             //dst planes
                             dstU, dstWidth / 2,
                             dstV, dstWidth / 2,
                             cropLeft, cropTop,                          //crop start
                             srcWidth, srcHeight,                        //src dimensions
                             cropRight - cropLeft, cropBottom - cropTop, //dst dimensions
                             rotationMode,
                             libyuv::FOURCC_NV21);                       //or libyuv::FOURCC_NV12
After doing this conversion you will be able to work properly with libyuv, using I420Scale, I420Rotate, and so on. Your scale method should look like:
JNIEXPORT jint JNICALL
Java_com_aa_project_images_yuv_myJNIcl_scaleI420(JNIEnv *env, jclass type,
                                                 jobject srcBufferY,
                                                 jobject srcBufferU,
                                                 jobject srcBufferV,
                                                 jint srcWidth, jint srcHeight,
                                                 jobject dstBufferY,
                                                 jobject dstBufferU,
                                                 jobject dstBufferV,
                                                 jint dstWidth, jint dstHeight,
                                                 jint filterMode) {
    const uint8_t *srcY = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferY));
    const uint8_t *srcU = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferU));
    const uint8_t *srcV = static_cast<uint8_t *>(env->GetDirectBufferAddress(srcBufferV));
    uint8_t *dstY = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferY));
    uint8_t *dstU = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferU));
    uint8_t *dstV = static_cast<uint8_t *>(env->GetDirectBufferAddress(dstBufferV));

    return libyuv::I420Scale(srcY, srcWidth,
                             srcU, srcWidth / 2,
                             srcV, srcWidth / 2,
                             srcWidth, srcHeight,
                             dstY, dstWidth,
                             dstU, dstWidth / 2,
                             dstV, dstWidth / 2,
                             dstWidth, dstHeight,
                             static_cast<libyuv::FilterMode>(filterMode));
}
If you want to convert this image to a JPEG after all the processing, you can use the I420ToNV21 method and then use the Android native conversion from YUV to JPEG. You can also use libjpeg-turbo, which is a complementary library for this kind of situation.
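A minimal sketch of that I420-to-NV21 step (names and strides are illustrative; the resulting NV21 buffer can then be handed to android.graphics.YuvImage for JPEG compression on the Java side):

#include "libyuv/convert_from.h"

// Hedged sketch: pack three I420 planes back into NV21 (Y plane plus
// interleaved V/U) so Android's YUV-to-JPEG path can consume it.
void I420ToNv21Sketch(const uint8_t *srcY, const uint8_t *srcU, const uint8_t *srcV,
                      int width, int height, uint8_t *nv21Out) {
    uint8_t *dstY = nv21Out;
    uint8_t *dstVU = nv21Out + width * height; // interleaved V/U follows Y
    libyuv::I420ToNV21(srcY, width,
                       srcU, width / 2,
                       srcV, width / 2,
                       dstY, width,
                       dstVU, width,           // VU stride is the full width
                       width, height);
}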

Creating a new Bitmap with Pixel Data in JNI?

I have the code below to create a Bitmap (just a black/gray image) in JNI with the 'ARGB_8888' configuration. But when I dump the content of the Bitmap in the Java code, I'm able to see only the configuration, not the pixel data in the Bitmap.
JNI Code
// Image Details
int imgWidth = 128;
int imgHeight = 128;
int numPix = imgWidth * imgHeight;

// Creating Bitmap Config Class
jclass bmpCfgCls = env->FindClass("android/graphics/Bitmap$Config");
jmethodID bmpClsValueOfMid = env->GetStaticMethodID(bmpCfgCls, "valueOf",
        "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;");
jobject jBmpCfg = env->CallStaticObjectMethod(bmpCfgCls, bmpClsValueOfMid,
        env->NewStringUTF("ARGB_8888"));

// Creating a Bitmap Class
jclass bmpCls = env->FindClass("android/graphics/Bitmap");
jmethodID createBitmapMid = env->GetStaticMethodID(bmpCls, "createBitmap",
        "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jBmpObj = env->CallStaticObjectMethod(bmpCls, createBitmapMid, imgWidth, imgHeight, jBmpCfg);

// Creating Pixel Data
int triplicateLen = numPix * 4;
char *tripPixData = (char *) malloc(triplicateLen);
for (int lc = 0; lc < triplicateLen; lc++)
{
    // Gray / Black Image
    if (0 == (lc % 4))
        tripPixData[lc] = 0x7F; // Alpha
    else
        tripPixData[lc] = 0x00; // RGB
}

// Setting Pixels in Bitmap
jByteArr = env->NewByteArray(triplicateLen);
env->SetByteArrayRegion(jByteArr, 0, triplicateLen, (jbyte *) tripPixData);
jmethodID setPixelsMid = env->GetMethodID(bmpCls, "setPixels", "([IIIIIII)V");
env->CallVoidMethod(jBmpObj, setPixelsMid, (jintArray) jByteArr, 0, imgWidth, 0, 0, imgWidth, imgHeight);
free(tripPixData);

// Return BitMap Object
return jBmpObj;
In Java (Output)
// Checking the Configuration / Image Details
jBmpObj.getWidth() - 128
jBmpObj.getHeight() - 128
jBmpObj.getRowBytes() - 512
jBmpObj.getConfig() - ARGB 8888
// Getting Pixel Data
imgPixs = new int[jBmpObj.getWidth() * jBmpObj.getHeight()];
jBmpObj.getPixels(imgPixs, 0, jBmpObj.getWidth(), 0, 0, jBmpObj.getWidth(), jBmpObj.getHeight());
// Running a Loop on the imgPixs
imgPixs[<0 - imgPixs.length>] - 0 (Every Pixel Data)
I used the same concept to create a Bitmap in the Java code, and it works fine (I'm even able to see the image). But I want the logic to be in the JNI part, not in the Java code. So I tried the above logic and it failed in setting the pixel data.
Any input on fixing this issue would be really helpful.
Full working example:
jclass bitmapConfig = jniEnv->FindClass("android/graphics/Bitmap$Config");
jfieldID rgba8888FieldID = jniEnv->GetStaticFieldID(bitmapConfig, "ARGB_8888",
        "Landroid/graphics/Bitmap$Config;");
jobject rgba8888Obj = jniEnv->GetStaticObjectField(bitmapConfig, rgba8888FieldID);

jclass bitmapClass = jniEnv->FindClass("android/graphics/Bitmap");
jmethodID createBitmapMethodID = jniEnv->GetStaticMethodID(bitmapClass, "createBitmap",
        "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jobject bitmapObj = jniEnv->CallStaticObjectMethod(bitmapClass, createBitmapMethodID,
        _width, _height, rgba8888Obj);

jintArray pixels = jniEnv->NewIntArray(_width * _height);
for (int i = 0; i < _width * _height; i++)
{
    unsigned char red = bitmap[i * 4];
    unsigned char green = bitmap[i * 4 + 1];
    unsigned char blue = bitmap[i * 4 + 2];
    unsigned char alpha = bitmap[i * 4 + 3];

    int currentPixel = (alpha << 24) | (red << 16) | (green << 8) | (blue);
    jniEnv->SetIntArrayRegion(pixels, i, 1, &currentPixel);
}

jmethodID setPixelsMid = jniEnv->GetMethodID(bitmapClass, "setPixels", "([IIIIIII)V");
jniEnv->CallVoidMethod(bitmapObj, setPixelsMid, pixels, 0, _width, 0, 0, _width, _height);
where bitmap is unsigned char*.
You cannot cast byte[] to int[] in Java, and therefore you cannot cast it in JNI either. But you can cast char* to int*, so you can simply use your tripPixData to fill a new jintArray; see the sketch below.
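A sketch of that fix applied to the question's code (reusing tripPixData, numPix, bmpCls, and jBmpObj from above; note that the int values are assembled from the bytes in the platform's byte order, so the channel layout still needs checking):

// Hedged sketch: hand setPixels a real jintArray filled from the
// existing char buffer, instead of casting a jbyteArray.
jintArray pixArr = env->NewIntArray(numPix);
env->SetIntArrayRegion(pixArr, 0, numPix, reinterpret_cast<jint *>(tripPixData));
jmethodID setPixelsMid = env->GetMethodID(bmpCls, "setPixels", "([IIIIIII)V");
env->CallVoidMethod(jBmpObj, setPixelsMid, pixArr, 0, imgWidth, 0, 0, imgWidth, imgHeight);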
In Android, each pixel is represented as 0xFFFFFFFF, i.e. ARGB; the leading 0xFF is the most significant 8 bits of the value (the alpha channel). From your snippet it is not clear where you are getting the source image data, but I have solved this issue using the following code base; I hope it will help you.
// Creating Pixel Data
unsigned char *rawData = // your raw data
// Note: each r, g and b component must arrive as 8-bit data. If it is an
// RGB image, use the separate r/g/b arrays; if it is monochrome, you can
// use rawData directly.
int triplicateLen = imgheight * imgwidth;
int *tripPixData = (int *) malloc(triplicateLen * sizeof(int));
if (rgb) {
    for (int lc = 0; lc < triplicateLen; lc++) {
        tripPixData[lc] = (0xFF << 24) | (r[lc] << 16) | (g[lc] << 8) | b[lc];
    }
} else {
    for (int lc = 0; lc < triplicateLen; lc++) {
        tripPixData[lc] = (0xFF << 24) | (rawData[lc] << 16) | (rawData[lc] << 8) | rawData[lc];
    }
}

How to actually see a Bitmap taken from an Android heap dump

In the process of tracking severe memory issues in my app, I looked at several heap dumps from my app, and most of the time I have a HUGE bitmap that I don't recognize.
It takes 9.4MB, or 9,830,400 bytes: actually a 1280x1920 image at 4 bytes per pixel.
I checked in Eclipse MAT; it is indeed a byte[9830400] that has one incoming reference, which is an android.graphics.Bitmap.
I'd like to dump this to a file and try to see it. I can't understand where it is coming from. My biggest image in all my drawables is a 640x960 png, which takes less than 3MB.
I tried to use Eclipse to "copy value to file", but I think it simply prints the buffer to the file, and I don't know any image software that can read a stream of bytes and display it as a 4-bytes-per-pixel image.
Any idea?
Here's what I tried: dump the byte array to a file, push it to /sdcard/img, and load an activity like this:
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    try {
        final File inputFile = new File("/sdcard/img");
        final FileInputStream isr = new FileInputStream(inputFile);
        final Bitmap bmp = BitmapFactory.decodeStream(isr);
        ImageView iv = new ImageView(this);
        iv.setImageBitmap(bmp);
        setContentView(iv);
        Log.d("ImageTest", "Image was inflated");
    } catch (final FileNotFoundException e) {
        Log.d("ImageTest", "Image was not inflated");
    }
}
I didn't see anything.
Do you know how the image is encoded? Say it is stored in a byte[] buffer: is buffer[0] red, buffer[1] green, etc.?
See here for an easier answer: MAT (Eclipse Memory Analyzer) - how to view bitmaps from memory dump
TL;DR - Install GIMP and load the image as raw RGB Alpha
OK -- after quite a few unsuccessful tries, I finally got something out of this byte array. I wrote this simple C program to convert the byte array to a Windows Bitmap file; I'm dropping the code here in case somebody is interested.
I compiled this against Visual C 6.0 and gcc 3.4.4; it should work on any OS (tested on Windows, Linux and MacOS X).
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>

/* Types */
typedef unsigned char byte;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef int int32_t;

/* Constants */
#define RMASK 0x00ff0000
#define GMASK 0x0000ff00
#define BMASK 0x000000ff
#define AMASK 0xff000000

/* Structures */
struct bmpfile_magic {
    unsigned char magic[2];
};

struct bmpfile_header {
    uint32_t filesz;
    uint16_t creator1;
    uint16_t creator2;
    uint32_t bmp_offset;
};

struct bmpfile_dibheader {
    uint32_t header_sz;
    uint32_t width;
    uint32_t height;
    uint16_t nplanes;
    uint16_t bitspp;
    uint32_t compress_type;
    uint32_t bmp_bytesz;
    int32_t hres;
    int32_t vres;
    uint32_t ncolors;
    uint32_t nimpcolors;
    uint32_t rmask, gmask, bmask, amask;
    uint32_t colorspace_type;
    byte colorspace[0x24];
    uint32_t rgamma, ggamma, bgamma;
};

/* Displays usage info and exits */
void usage(char *cmd) {
    printf("Usage:\t%s <img_src> <img_dest.bmp> <width> <height>\n"
           "\timg_src:\timage byte buffer obtained from Eclipse MAT, using 'copy > save value to file' while selecting the byte[] buffer corresponding to an android.graphics.Bitmap\n"
           "\timg_dest:\tpath to target *.bmp file\n"
           "\twidth:\t\tpicture width, obtained in Eclipse MAT, selecting the android.graphics.Bitmap object and seeing the object member values\n"
           "\theight:\t\tpicture height\n\n", cmd);
    exit(1);
}

/* C entry point */
int main(int argc, char **argv) {
    FILE *in, *out;
    char *file_in, *file_out;
    int w, h, W, H;
    byte r, g, b, a, *image;
    struct bmpfile_magic magic;
    struct bmpfile_header header;
    struct bmpfile_dibheader dibheader;

    /* Parse command line */
    if (argc < 5) {
        usage(argv[0]);
    }
    file_in = argv[1];
    file_out = argv[2];
    W = atoi(argv[3]);
    H = atoi(argv[4]);
    in = fopen(file_in, "rb");
    out = fopen(file_out, "wb");

    /* Check parameters */
    if (in == NULL || out == NULL || W == 0 || H == 0) {
        usage(argv[0]);
    }

    /* Init BMP headers */
    magic.magic[0] = 'B';
    magic.magic[1] = 'M';

    header.filesz = W * H * 4 + sizeof(magic) + sizeof(header) + sizeof(dibheader);
    header.creator1 = 0;
    header.creator2 = 0;
    header.bmp_offset = sizeof(magic) + sizeof(header) + sizeof(dibheader);

    dibheader.header_sz = sizeof(dibheader);
    dibheader.width = W;
    dibheader.height = H;
    dibheader.nplanes = 1;
    dibheader.bitspp = 32;
    dibheader.compress_type = 3;
    dibheader.bmp_bytesz = W * H * 4;
    dibheader.hres = 2835;
    dibheader.vres = 2835;
    dibheader.ncolors = 0;
    dibheader.nimpcolors = 0;
    dibheader.rmask = RMASK;
    dibheader.gmask = BMASK;
    dibheader.bmask = GMASK;
    dibheader.amask = AMASK;
    dibheader.colorspace_type = 0x57696e20;
    memset(&dibheader.colorspace, 0, sizeof(dibheader.colorspace));
    dibheader.rgamma = dibheader.bgamma = dibheader.ggamma = 0;

    /* Read picture data */
    image = (byte *) malloc(4 * W * H);
    if (image == NULL) {
        printf("Could not allocate a %d-byte buffer.\n", 4 * W * H);
        exit(1);
    }
    fread(image, 4 * W * H, sizeof(byte), in);
    fclose(in);

    /* Write header */
    fwrite(&magic, sizeof(magic), 1, out);
    fwrite(&header, sizeof(header), 1, out);
    fwrite(&dibheader, sizeof(dibheader), 1, out);

    /* Convert the byte array to BMP format */
    for (h = H - 1; h >= 0; h--) {
        for (w = 0; w < W; w++) {
            r = *(image + w * 4 + 4 * W * h);
            b = *(image + w * 4 + 4 * W * h + 1);
            g = *(image + w * 4 + 4 * W * h + 2);
            a = *(image + w * 4 + 4 * W * h + 3);

            fwrite(&b, 1, 1, out);
            fwrite(&g, 1, 1, out);
            fwrite(&r, 1, 1, out);
            fwrite(&a, 1, 1, out);
        }
    }

    free(image);
    fclose(out);
}
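For example (the file name is illustrative): compile with gcc dump2bmp.c -o dump2bmp, then run ./dump2bmp img img.bmp 1280 1920 on the buffer dumped from MAT, and open the resulting .bmp in any image viewer.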
So using this tool I was able to recognise the picture used to generate this 1280x1920 bitmap.
I found that starting from a recent version of Android Studio (2.2.2 as of this writing), you can view the bitmap directly:
1. Open the 'Android Monitor' tab (at the bottom left) and then the Memory tab.
2. Press the 'Dump Java Heap' button.
3. Choose the 'Bitmap' class name for the current snapshot.
4. Select each instance of Bitmap and check which image consumes more memory than expected.
5. Right click on the instance and select View Bitmap.
Just take the input to the image and convert it into a Bitmap object using a FileInputStream / data stream. Also add logs showing the data for each image that gets used.
You could enable a USB connection and copy the file to another computer with more tools to investigate.
Some devices can be configured to dump the current screen to the file system when the start button is pressed. Maybe that is what is happening to you.

Convert RGB565 to Greyscale (8 bits)

I am trying to convert an RGB565 image (the video stream from the Android phone camera) into a greyscale (8-bit) image.
So far I have the following code (the conversion is computed in native code using the Android NDK). Note that my input image is 640*480 and I want to crop it to make it fit in a 128*128 buffer.
#define RED(a)   ((((a) & 0xf800) >> 11) << 3)
#define GREEN(a) ((((a) & 0x07e0) >> 5) << 2)
#define BLUE(a)  (((a) & 0x001f) << 3)

typedef unsigned char byte;

void toGreyscale(byte *rgbs, int widthIn, int heightIn, byte *greyscales)
{
    const int textSize = 128;
    int x, y;
    short *rgbPtr = (short *) rgbs;
    byte *greyPtr = greyscales;

    // rgbs arrives in RGB565 (16 bits) format
    for (y = 0; y < textSize; y++)
    {
        for (x = 0; x < textSize; x++)
        {
            short pixel = *(rgbPtr++);
            int r = RED(pixel);
            int g = GREEN(pixel);
            int b = BLUE(pixel);
            *(greyPtr++) = (byte) ((r + g + b) / 3);
        }
        rgbPtr += widthIn - textSize;
    }
}
The image is sent to the function like this
jbyte* cImageIn = env->GetByteArrayElements(imageIn, &b);
jbyte* cImageOut = (jbyte*)env->GetDirectBufferAddress(imageOut);
toGreyscale((byte*)cImageIn, widthIn, heightIn, (byte*)cImageOut);
The result I get is a horizontally reversed image (no idea why... the UVs to display the result are correct...), but the biggest problem is that only the red channel is correct when I display the channels separately. The green and blue channels are all messed up and I have no idea why. I checked on the Internet, and all the resources I found showed that the masks I am using are correct. Any idea where the mistake could be?
Thanks!
Maybe an endianness issue?
You could check quickly by reversing the two bytes of your 16-bit word before shifting out the RGB components; see the sketch below.
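A minimal sketch of that check, applied inside the inner loop of toGreyscale (the swap is the only new line):

// Hedged sketch: byte-swap each 16-bit RGB565 sample before extracting
// the channels, to test whether the source is in the other byte order.
unsigned short pixel = (unsigned short) *(rgbPtr++);
pixel = (unsigned short) ((pixel << 8) | (pixel >> 8)); // swap the two bytes
int r = RED(pixel);
int g = GREEN(pixel);
int b = BLUE(pixel);
*(greyPtr++) = (byte) ((r + g + b) / 3);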
