I have a BMP as an RGBA buffer (I'm able to save it as a BMP in native code and view it as a .bmp image) and I need to pass it to Android from native code. I've found similar questions and answers, and this is one of the proposed solutions:
create android bitmap object in android
pass it to native code
set pixels buffer in native code
return bitmap back to android side
This is not suitable for me because:
the pixel array is created in native code
if I create it on the Android side with a specified width and height, Android allocates a second buffer, which is not good since I'm going to have 24 bitmaps a second (streaming video).
I need something like this:
pass a Buffer from native code and call something like Bitmap.createFromBuffer(Buffer buffer, int width, int height, int format)
create the Android Bitmap object in native code, set its pixel buffer, and return it back to Android
Any suggestions/thoughts?
If you want to create a Java Bitmap object from native code, you should do something like this:
In native code, read your buffer, then convert every pixel in the buffer to ARGB format. If you have RGBA, you can do something like this:
int a = 0xFF & yourPixelInt;          // alpha sits in the low byte of 0xRRGGBBAA
int r = 0xFF & (yourPixelInt >> 24);
int g = 0xFF & (yourPixelInt >> 16);
int b = 0xFF & (yourPixelInt >> 8);
unsigned int newPixel = (a << 24) | (r << 16) | (g << 8) | b;
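For reference, the whole swizzle collapses into a single 8-bit rotate; here is a minimal illustration in Java (the bit math is identical for a C unsigned int):
// 0xRRGGBBAA -> 0xAARRGGBB: move A from the low byte to the top byte.
int argb = (rgba >>> 8) | (rgba << 24);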
Do this for all pixels to convert them from RGBA to ARGB; after that you can create the Java Bitmap from native code:
// 'array' is assumed to be a jintArray of origBufferSize elements,
// created earlier with env->NewIntArray(origBufferSize).
jint* bytes = env->GetIntArrayElements(array, NULL);
if (bytes != NULL) {
    memcpy(bytes, buffer, origBufferSize * sizeof(unsigned int));
    env->ReleaseIntArrayElements(array, bytes, 0);
}
jclass bitmapClass = env->FindClass("android/graphics/Bitmap");
jmethodID methodid = env->GetStaticMethodID(bitmapClass, "createBitmap", "([IIIIILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jclass bitmapConfig = env->FindClass("android/graphics/Bitmap$Config");
jfieldID argb8888FieldID = env->GetStaticFieldID(bitmapConfig, "ARGB_8888",
"Landroid/graphics/Bitmap$Config;");
jobject argb8888Obj = env->GetStaticObjectField(bitmapConfig, argb8888FieldID);
jobject java_bitmap = env->CallStaticObjectMethod(bitmapClass, methodid, array, 0, bitmapwidth, bitmapwidth, bitmapheight, argb8888Obj); // createBitmap(colors, offset, stride, width, height, config)
Don't forget to release the local references to avoid memory leaks:
env->DeleteLocalRef(array);
env->DeleteLocalRef(bitmapClass);
env->DeleteLocalRef(bitmapConfig);
env->DeleteLocalRef(argb8888Obj);
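For completeness, here is a sketch of what the matching Java-side declaration might look like (the class, method, and library names are assumptions, not from the original code):
// import android.graphics.Bitmap;
public class NativeBitmapFactory {
    static { System.loadLibrary("native-lib"); } // assumed library name
    // Implemented by the JNI code above; returns a Bitmap built from the native RGBA frame.
    public static native Bitmap createFrameBitmap(int width, int height);
}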
Related
I have been looking at converting the NV21 byte[] that I get from onPreviewFrame(). I have searched the forums and Google for various solutions. I have tried RenderScript and some other code examples. Some of them give me an image with a yellow tint, some give me an image with red and blue flipped (after I flip it back in the code, I get the yellow tint back), some give me strange color artifacts throughout the image (almost like a negative), some give me a grayscale image, and some give me an image so dark you can't really make anything out.
Since I am the one typing the question, I realize I must be the idiot in the room, so we will start with this post. This particular solution gives me a very dark image, but I am not cool enough to be able to comment on it yet. Has anyone tried this solution, or does anyone have one that produces an image with the same quality as the original NV21 format?
I need either a valid ARGB byte[] or a valid Bitmap; I can modify my project to deal with either. Just for reference, I have tried these (and a few others that are really just carbon copies of these):
One solution I tried
Another solution I tried
If you are trying to convert YUV from the camera to a Bitmap, here is something you can try:
// import android.renderscript.*
// RenderScript mRS;
// ScriptIntrinsicYuvToRGB mYuvToRGB;
// Allocation yuvPreviewAlloc;
// Allocation rgbOutputAlloc;
// Create RenderScript context, ScriptIntrinsicYuvToRGB and Allocations and keep reusing them.
if (NotInitialized) {
mRS = RenderScript.create(this);
mYuvToRGB = ScriptIntrinsicYuvToRGB.create(mRS, Element.YUV(mRS));
// Create a RS Allocation to hold NV21 data.
Type.Builder tYuv = new Type.Builder(mRS, Element.YUV(mRS));
tYuv.setX(width).setY(height).setYuvFormat(android.graphics.ImageFormat.NV21);
yuvPreviewAlloc = Allocation.createTyped(mRS, tYuv.create(), Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_INPUT);
// Create a RS Allocation to hold RGBA data.
Type.Builder tRgb = new Type.Builder(mRS, Element.RGBA_8888(mRS));
tRgb.setX(width).setY(height);
rgbOutputAlloc = Allocation.createTyped(mRS, tRgb.create(), Allocation.USAGE_SCRIPT);
// Set input of ScriptIntrinsicYuvToRGB
mYuvToRGB.setInput(yuvPreviewAlloc);
}
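// Note: the destination Bitmap used by copyTo() below is assumed to be created
// once as well, e.g. mBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);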
// Use rsPreviewSurface as one of the output surface from Camera API.
// You can refer to https://github.com/googlesamples/android-HdrViewfinder/blob/master/Application/src/main/java/com/example/android/hdrviewfinder/HdrViewfinderActivity.java#L504
Surface rsPreviewSurface = yuvPreviewAlloc.getSurface();
...
// Whenever a new frame is available
// Update the yuv Allocation with a new Camera buffer without any copy.
// You can refer to https://github.com/googlesamples/android-HdrViewfinder/blob/master/Application/src/main/java/com/example/android/hdrviewfinder/ViewfinderProcessor.java#L109
yuvPreviewAlloc.ioReceive();
// The actual Yuv to Rgb conversion.
mYuvToRGB.forEach(rgbOutputAlloc);
// Copy the rgb Allocation to a Bitmap.
rgbOutputAlloc.copyTo(mBitmap);
// continue processing mBitmap.
...
When using ScriptIntrinsics I highly recommend updating to at least Jelly Bean 4.3 or higher (API 18). Things are much easier to use than in JB 4.2 (API 17).
ScriptIntrinsicYuvToRGB is not as complicated as it seems.
In particular, you don't need Type.Builder objects.
The camera preview format must be NV21!
In the onCreate() method, create the RenderScript object and the intrinsic:
mRS = RenderScript.create(this);
mYuvToRGB = ScriptIntrinsicYuvToRGB.create(mRS, Element.U8_4(mRS));
With your cameraPreviewWidth and cameraPreviewHeight, calculate the length of the camera data byte array:
int yuvDatalength = cameraPreviewWidth * cameraPreviewHeight * 3 / 2; // NV21 is 12 bits per pixel
You need a bitmap for output:
mBitmap = Bitmap.createBitmap(cameraPreviewWidth, cameraPreviewHeight, Bitmap.Config.ARGB_8888);
Then you create the input and output Allocations (this is where API 18+ differs):
yuvPreviewAlloc = Allocation.createSized(mRS, Element.U8(mRS), yuvDatalength);
rgbOutputAlloc = Allocation.createFromBitmap(mRS, mBitmap); // it's this simple!
and set the script-input to the input allocation
mYuvToRGB.setInput(yuvPreviewAlloc); // this has to be done only once !
In the camera loop (whenever a new frame is available), copy the NV21 byte array (data[]) to yuvPreviewAlloc, execute the script, and copy the result to the bitmap:
yuvPreviewAlloc.copyFrom(data); // or yuvPreviewAlloc.copyFromUnchecked(data);
mYuvToRGB.forEach(rgbOutputAlloc);
rgbOutputAlloc.copyTo(mBitmap);
For example: on Nexus 7 (2013, JellyBean 4.3) a full HD (1920x1080) camera preview conversion takes about 7 ms.
I was able to get a different method working (one that was previously linked) by using the code here. But that was giving the red/blue color flip. So I just rearranged the U and V lines and all was OK. This is not as fast as a RenderScript though. It would be good to have a RenderScript that functioned properly. Here is the code:
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0) y = 0;
if ((i & 1) == 0) {
u = (0xff & yuv420sp[uvp++]) - 128; //Just changed the order
v = (0xff & yuv420sp[uvp++]) - 128; //It was originally v then u
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) r = 0; else if (r > 262143) r = 262143;
if (g < 0) g = 0; else if (g > 262143) g = 262143;
if (b < 0) b = 0; else if (b > 262143) b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
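To go from the decoded int[] to a Bitmap, here is a minimal sketch using the standard API (variable names are assumed):
int[] rgb = new int[previewWidth * previewHeight];
decodeYUV420SP(rgb, data, previewWidth, previewHeight);
Bitmap bmp = Bitmap.createBitmap(rgb, previewWidth, previewHeight, Bitmap.Config.ARGB_8888);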
Does anyone have a RenderScript that doesn't have color tint and/or flip problems?
camera.setPreviewCallback(new Camera.PreviewCallback() {
private long timestamp=0;
public synchronized void onPreviewFrame(byte[] data, Camera camera) {
Log.e("CameraTest","Time Gap = "+(System.currentTimeMillis()-timestamp));
timestamp=System.currentTimeMillis();
Bitmap mFaceBitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
if (mFaceBitmap!=null) FaceDetection.calculate(mFaceBitmap);
camera.addCallbackBuffer(data);
return;
}
});
I have a camera View and, in front of it, a simple View (where I can draw something).
I'd like to draw on the front View whenever I find a human face.
But mFaceBitmap is always null. Why?
If this is a bad idea, how can I do this better?
When you set up the camera you will need to set the preview size and the preview format. Here is some sample code to give the rough idea:
// FORMAT_NV21, FORMAT_JPEG and FORMAT_RGB_565 are assumed to be the
// android.graphics.ImageFormat constants (NV21, JPEG, RGB_565).
int previewFormat = 0;
for (int format : parameters.getSupportedPreviewFormats()) {
if (format == FORMAT_NV21) {
previewFormat = FORMAT_NV21;
} else if (previewFormat == 0 && (format == FORMAT_JPEG || format == FORMAT_RGB_565)) {
previewFormat = format;
}
}
// TODO: Iterate on supported preview sizes and pick best one
parameters.setPreviewSize(previewSize.width, previewSize.height);
if (previewFormat != 0) {
parameters.setPreviewFormat(previewFormat);
} else {
// Error on unsupported format
}
Now in the callback you can do something like:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Bitmap bitmap;
if (previewFormat == FORMAT_NV21) {
int[] previewPixels = new int[previewSize.width * previewSize.height];
decodeYUV420SP(previewPixels, data, previewSize.width, previewSize.height);
bitmap = Bitmap.createBitmap(previewPixels, previewSize.width, previewSize.height, Bitmap.Config.RGB_565);
} else if (previewFormat == FORMAT_JPEG || previewFormat == FORMAT_RGB_565) {
// RGB565 and JPEG
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inDither = true;
opts.inPreferredConfig = Bitmap.Config.RGB_565;
bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, opts);
}
}
And finally, the conversion:
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
You can't use BitmapFactory.decodeByteArray to convert a camera's preview output into a bitmap, unfortunately.
decodeByteArray is designed for converting JPEG/PNG/etc. images into bitmaps, and it has no way of knowing what the data in the preview callback looks like, because it's a simple raw array of pixel values with no identifying header.
You have to do the conversion yourself. There are many ways to do this, of various degrees of efficiency - I'll write out the simplest one here, but it's also probably the slowest.
The data byte array from the camera is encoded in some particular pixel format, which is specified by Camera.Parameters.setPreviewFormat. If you haven't called this, the default format is NV21. NV21 is guaranteed to work on all Android devices; on Android versions >= 3.0, the YV12 format is also guaranteed to work.
Both of these are YUV formats, meaning the color is encoded as a luminance (brightness) channel and two chroma (color) channels. The functions for setting pixel values on a Bitmap (primarily setPixels) require information in the RGB color space instead, so a conversion is required. In addition, both NV21 and YV12 subsample the chroma channels - if you have a 640x480 image, for example, there will be 640x480 pixels in the luminance channel, but only 320x240 pixels in the two chroma channels.
This means you need to create a new int[] array of the right size, loop over the byte[] data array collecting the Y, U, and V values for each pixel, convert them to RGB, write them to the int[] array, and then call setPixels on your destination bitmap. The color conversion matrix you need is the JPEG YCbCr->RGB matrix, which you can find on Wikipedia, for example. You can find out about the layout of NV21 or YV12 at fourcc.org, as one example.
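For reference, here is a rough sketch of that manual path, assuming NV21 input and the JPEG YCbCr->RGB coefficients (all names are illustrative, not from the answer above):
// import android.graphics.Bitmap;
static Bitmap nv21ToBitmap(byte[] data, int width, int height) {
    int frameSize = width * height;
    int[] argb = new int[frameSize];
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = data[row * width + col] & 0xFF;
            // NV21: after the Y plane, V and U are interleaved at half resolution, V first.
            int uvIndex = frameSize + (row / 2) * width + (col & ~1);
            int v = (data[uvIndex] & 0xFF) - 128;
            int u = (data[uvIndex + 1] & 0xFF) - 128;
            int r = clamp(Math.round(y + 1.402f * v));
            int g = clamp(Math.round(y - 0.344136f * u - 0.714136f * v));
            int b = clamp(Math.round(y + 1.772f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(argb, 0, width, 0, 0, width, height);
    return bmp;
}
static int clamp(int x) { return x < 0 ? 0 : Math.min(x, 255); }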
If you really don't want to mess with all that, you can also use the YuvImage class, albeit in a roundabout way. You can construct a YuvImage instance from the preview data, as long as you're using the NV21 format, and then save a JPEG from it into a ByteArrayOutputStream. You can then get the byte[] from the stream and decode it into a bitmap using BitmapFactory.decodeByteArray. This is a completely unnecessary round trip to JPEG and back, so it's quite inefficient and can cause quality loss, but it only requires a few lines of code.
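A minimal sketch of that YuvImage round trip (assuming NV21 preview data and known width/height):
// import android.graphics.*; import java.io.ByteArrayOutputStream;
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out); // quality 90 is arbitrary
byte[] jpegBytes = out.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);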
In the latest versions of Android, you can also use RenderScript to do this conversion efficiently. You'll need to copy the data into an Allocation, and then use the ScriptIntrinsicYuvToRGB intrinsic to do the conversion.
Finally, you can pass the data and destination bitmap into JNI code, where you can access the Bitmap directly via the NDK's jnigraphics library (<android/bitmap.h>, AndroidBitmap_lockPixels and friends) and write the conversion function there in C or C++. This requires a lot of scaffolding, but is very efficient.
I have the below code to create a Bitmap (just a black/gray image) in JNI with the 'ARGB_8888' configuration. But when I dump the content of the Bitmap in the Java code, I'm able to see only the configuration, not the pixel data in the Bitmap.
JNI Code
// Image Details
int imgWidth = 128;
int imgHeight = 128;
int numPix = imgWidth * imgHeight;
// Creating the Bitmap Config class
jclass bmpCfgCls = env->FindClass("android/graphics/Bitmap$Config");
jmethodID bmpClsValueOfMid = env->GetStaticMethodID(bmpCfgCls, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;");
jobject jBmpCfg = env->CallStaticObjectMethod(bmpCfgCls, bmpClsValueOfMid, env->NewStringUTF("ARGB_8888"));
// Creating a Bitmap Class
jclass bmpCls = env->FindClass("android/graphics/Bitmap");
jmethodID createBitmapMid = env->GetStaticMethodID(bmpCls, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jBmpObj = env->CallStaticObjectMethod(bmpCls, createBitmapMid, imgWidth, imgHeight, jBmpCfg);
// Creating Pixel Data
int triplicateLen = numPix * 4;
char *tripPixData = (char*)malloc(triplicateLen);
for (int lc = 0; lc < triplicateLen; lc++)
{
// Gray / Black Image
if (0 == (lc%4))
tripPixData[lc] = 0x7F; // Alpha
else
tripPixData[lc] = 0x00; // RGB
}
// Setting Pixels in Bitmap
jByteArr = env->NewByteArray(triplicateLen);
env->SetByteArrayRegion(jByteArr, 0, triplicateLen, (jbyte*)tripPixData);
jmethodID setPixelsMid = env->GetMethodID(bmpCls, "setPixels", "([IIIIIII)V");
env->CallVoidMethod(jBmpObj, setPixelsMid, (jintArray)jByteArr, 0, imgWidth, 0, 0, imgWidth, imgHeight);
free(tripPixData);
// Return the Bitmap object
return jBmpObj;
In Java (output)
// Checking the Configuration / Image Details
jBmpObj.getWidth() - 128
jBmpObj.getHeight() - 128
jBmpObj.getRowBytes() - 512
jBmpObj.getConfig() - ARGB 8888
// Getting Pixel Data
imgPixs = new int[jBmpObj.getWidth() * jBmpObj.getHeight()];
jBmpObj.getPixels(imgPixs, 0, jBmpObj.getWidth(), 0, 0, jBmpObj.getWidth(), jBmpObj.getHeight());
// Running a Loop on the imgPixs
imgPixs[0 .. imgPixs.length-1] - 0 (every pixel is 0)
I used the same concept to create a Bitmap in the Java code, and it works fine (I'm even able to see the image). But I want the logic to be in the JNI part, not in the Java code. So I tried the above logic, and it failed to set the pixel data.
Any input on fixing this issue would be really helpful.
Full working example:
jclass bitmapConfig = jniEnv->FindClass("android/graphics/Bitmap$Config");
jfieldID rgba8888FieldID = jniEnv->GetStaticFieldID(bitmapConfig, "ARGB_8888", "Landroid/graphics/Bitmap$Config;");
jobject rgba8888Obj = jniEnv->GetStaticObjectField(bitmapConfig, rgba8888FieldID);
jclass bitmapClass = jniEnv->FindClass("android/graphics/Bitmap");
jmethodID createBitmapMethodID = jniEnv->GetStaticMethodID(bitmapClass,"createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
jobject bitmapObj = jniEnv->CallStaticObjectMethod(bitmapClass, createBitmapMethodID, _width, _height, rgba8888Obj);
jintArray pixels = jniEnv->NewIntArray(_width * _height);
for (int i = 0; i < _width * _height; i++)
{
unsigned char red = bitmap[i*4];
unsigned char green = bitmap[i*4 + 1];
unsigned char blue = bitmap[i*4 + 2];
unsigned char alpha = bitmap[i*4 + 3];
int currentPixel = (alpha << 24) | (red << 16) | (green << 8) | (blue);
jniEnv->SetIntArrayRegion(pixels, i, 1, &currentPixel);
}
jmethodID setPixelsMid = jniEnv->GetMethodID(bitmapClass, "setPixels", "([IIIIIII)V");
jniEnv->CallVoidMethod(bitmapObj, setPixelsMid, pixels, 0, _width, 0, 0, _width, _height);
where bitmap is unsigned char*.
You cannot cast byte[] to int[] in Java, and therefore you cannot cast it in JNI either. But you can cast char* to int* in native code, so you can simply use your tripPixData to fill a new jintArray.
In Android, each pixel is represented as 0xFFFFFFFF, i.e. ARGB; the most significant 8 bits are the alpha channel. From your snippet it isn't clear where you are getting the source image data, but I solved this issue with the following code. I hope it helps you.
// Creating pixel data
// Note: each R, G and B component is assumed to arrive as 8-bit data. If the
// source is an RGB image, use the separate r/g/b channel arrays below; if it
// is monochrome, you can use rawData directly.
unsigned char* rawData = // your raw data
int triplicateLen = imgheight * imgwidth;
int *tripPixData = (int*) malloc(triplicateLen * sizeof(int));
if (rgb) {
    for (int lc = 0; lc < triplicateLen; lc++) {
        tripPixData[lc] = (0xFF << 24) | (r[lc] << 16) | (g[lc] << 8) | b[lc];
    }
} else {
    for (int lc = 0; lc < triplicateLen; lc++) {
        tripPixData[lc] = (0xFF << 24) | (rawData[lc] << 16) | (rawData[lc] << 8) | rawData[lc];
    }
}
In the process of tracking down severe memory issues in my app, I looked at several heap dumps, and most of the time I have a HUGE bitmap that I don't recognize.
It takes 9.4MB, or 9,830,400 bytes, or actually a 1280x1920 image at 4 bytes per pixels.
I checked in Eclipse MAT, it is indeed a byte[9830400], that has one incoming reference which is a android.graphics.Bitmap.
I'd like to dump this to a file and try to see it. I can't understand where it's coming from. My biggest image in all my drawables is a 640x960 PNG, which takes less than 3MB.
I tried to use Eclipse to "copy value to file", but I think it simply prints the buffer to the file, and I don't know of any image software that can read a stream of bytes and display it as a 4-bytes-per-pixel image.
Any idea?
Here's what I tried: dump the byte array to a file, push it to /sdcard/img, and load an activity like this:
@Override
public void onCreate(final Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
try {
final File inputFile = new File("/sdcard/img");
final FileInputStream isr = new FileInputStream(inputFile);
final Bitmap bmp = BitmapFactory.decodeStream(isr);
ImageView iv = new ImageView(this);
iv.setImageBitmap(bmp);
setContentView(iv);
Log.d("ImageTest", "Image was inflated");
} catch (final FileNotFoundException e) {
Log.d("ImageTest", "Image was not inflated");
}
}
I didn't see anything.
Do you know how the image is encoded? Say it is stored in a byte[] buffer: is buffer[0] red, buffer[1] green, etc.?
See here for an easier answer: MAT (Eclipse Memory Analyzer) - how to view bitmaps from memory dump
TL;DR - Install GIMP and load the image as raw RGB Alpha
OK -- after quite a few unsuccessful tries, I finally got something out of this byte array. I wrote this simple C program to convert the byte array to a Windows Bitmap file. I'm dropping the code here in case somebody is interested.
I compiled this against Visual C 6.0 and gcc 3.4.4; it should work on any OS (tested on Windows, Linux and Mac OS X).
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>
/* Types */
typedef unsigned char byte;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef int int32_t;
/* Constants */
#define RMASK 0x00ff0000
#define GMASK 0x0000ff00
#define BMASK 0x000000ff
#define AMASK 0xff000000
/* Structures */
struct bmpfile_magic {
unsigned char magic[2];
};
struct bmpfile_header {
uint32_t filesz;
uint16_t creator1;
uint16_t creator2;
uint32_t bmp_offset;
};
struct bmpfile_dibheader {
uint32_t header_sz;
uint32_t width;
uint32_t height;
uint16_t nplanes;
uint16_t bitspp;
uint32_t compress_type;
uint32_t bmp_bytesz;
int32_t hres;
int32_t vres;
uint32_t ncolors;
uint32_t nimpcolors;
uint32_t rmask, gmask, bmask, amask;
uint32_t colorspace_type;
byte colorspace[0x24];
uint32_t rgamma, ggamma, bgamma;
};
/* Displays usage info and exits */
void usage(char *cmd) {
printf("Usage:\t%s <img_src> <img_dest.bmp> <width> <height>\n"
"\timg_src:\timage byte buffer obtained from Eclipse MAT, using 'copy > save value to file' while selecting the byte[] buffer corresponding to an android.graphics.Bitmap\n"
"\timg_dest:\tpath to target *.bmp file\n"
"\twidth:\t\tpicture width, obtained in Eclipse MAT, selecting the android.graphics.Bitmap object and seeing the object member values\n"
"\theight:\t\tpicture height\n\n", cmd);
exit(1);
}
/* C entry point */
int main(int argc, char **argv) {
FILE *in, *out;
char *file_in, *file_out;
int w, h, W, H;
byte r, g, b, a, *image;
struct bmpfile_magic magic;
struct bmpfile_header header;
struct bmpfile_dibheader dibheader;
/* Parse command line */
if (argc < 5) {
usage(argv[0]);
}
file_in = argv[1];
file_out = argv[2];
W = atoi(argv[3]);
H = atoi(argv[4]);
in = fopen(file_in, "rb");
out = fopen(file_out, "wb");
/* Check parameters */
if (in == NULL || out == NULL || W == 0 || H == 0) {
usage(argv[0]);
}
/* Init BMP headers */
magic.magic[0] = 'B';
magic.magic[1] = 'M';
header.filesz = W * H * 4 + sizeof(magic) + sizeof(header) + sizeof(dibheader);
header.creator1 = 0;
header.creator2 = 0;
header.bmp_offset = sizeof(magic) + sizeof(header) + sizeof(dibheader);
dibheader.header_sz = sizeof(dibheader);
dibheader.width = W;
dibheader.height = H;
dibheader.nplanes = 1;
dibheader.bitspp = 32;
dibheader.compress_type = 3;
dibheader.bmp_bytesz = W * H * 4;
dibheader.hres = 2835;
dibheader.vres = 2835;
dibheader.ncolors = 0;
dibheader.nimpcolors = 0;
dibheader.rmask = RMASK;
dibheader.gmask = BMASK;
dibheader.bmask = GMASK;
dibheader.amask = AMASK;
dibheader.colorspace_type = 0x57696e20;
memset(&dibheader.colorspace, 0, sizeof(dibheader.colorspace));
dibheader.rgamma = dibheader.bgamma = dibheader.ggamma = 0;
/* Read picture data */
image = (byte*) malloc(4*W*H);
if (image == NULL) {
printf("Could not allocate a %d-byte buffer.\n", 4*W*H);
exit(1);
}
fread(image, sizeof(byte), 4*W*H, in);
fclose(in);
/* Write header */
fwrite(&magic, sizeof(magic), 1, out);
fwrite(&header, sizeof(header), 1, out);
fwrite(&dibheader, sizeof(dibheader), 1, out);
/* Convert the byte array to BMP format */
for (h = H-1; h >= 0; h--) {
for (w = 0; w < W; w++) {
r = *(image + w*4 + 4 * W * h);
b = *(image + w*4 + 4 * W * h + 1);
g = *(image + w*4 + 4 * W * h + 2);
a = *(image + w*4 + 4 * W * h + 3);
fwrite(&b, 1, 1, out);
fwrite(&g, 1, 1, out);
fwrite(&r, 1, 1, out);
fwrite(&a, 1, 1, out);
}
}
free(image);
fclose(out);
}
So using this tool I was able to recognise the picture used to generate this 1280x1920 bitmap.
I found that starting from the latest version of Android Studio (2.2.2 as of writing), you can view the bitmap directly:
1. Open the 'Android Monitor' tab (at the bottom left) and then the Memory tab.
2. Press the 'Dump Java Heap' button.
3. Choose the 'Bitmap' class name for the current snapshot, select each instance of Bitmap, then right-click on it and select View Bitmap to see exactly which image consumes more memory than expected.
Just take the input to the image and convert it into a Bitmap object using a FileInputStream/DataInputStream. Also add logs to see the data for each image that gets used.
You could enable a USB connection and copy the file to another computer with more tools to investigate.
Some devices can be configured to dump the current screen to the file system when the start button is pressed. Maybe this is what is happening to you.
I am trying to convert an RGB565 image (the video stream from the Android phone camera) into a grayscale (8-bit) image.
So far I have the following code (the conversion is computed in native code using the Android NDK). Note that my input image is 640x480 and I want to crop it to fit in a 128x128 buffer.
#define RED(a) ((((a) & 0xf800) >> 11) << 3)
#define GREEN(a) ((((a) & 0x07e0) >> 5) << 2)
#define BLUE(a) (((a) & 0x001f) << 3)
typedef unsigned char byte;
void toGreyscale(byte *rgbs, int widthIn, int heightIn, byte *greyscales)
{
const int textSize = 128;
int x,y;
short* rgbPtr = (short*)rgbs;
byte *greyPtr = greyscales;
// rgbs arrives in RGB565 (16 bits) format
for (y=0; y<textSize; y++)
{
for (x=0; x<textSize; x++)
{
short pixel = *(rgbPtr++);
int r = RED(pixel);
int g = GREEN(pixel);
int b = BLUE(pixel);
*(greyPtr++) = (byte)((r+g+b) / 3);
}
rgbPtr += widthIn - textSize;
}
}
The image is sent to the function like this:
jbyte* cImageIn = env->GetByteArrayElements(imageIn, &b);
jbyte* cImageOut = (jbyte*)env->GetDirectBufferAddress(imageOut);
toGreyscale((byte*)cImageIn, widthIn, heightIn, (byte*)cImageOut);
The result I get is a horizontally-reversed image (no idea why; the UVs used to display the result are correct), but the biggest problem is that only the red channel is actually correct when I display the channels separately. The green and blue channels are all messed up and I have no idea why. I checked on the Internet, and all the resources I found showed that the masks I am using are correct. Any idea where the mistake could be?
Thanks!
Maybe an endianness issue?
You could check quickly by reversing the two bytes of each 16-bit word before shifting out the RGB components.
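For a quick test, the swap itself looks like this (shown in Java for illustration; the same bit math applies in the C code above):
// Swap the two bytes of a 16-bit RGB565 pixel before applying the 565 masks.
int swapped = ((pixel & 0xFF) << 8) | ((pixel >> 8) & 0xFF);
int r = ((swapped & 0xF800) >> 11) << 3;
int g = ((swapped & 0x07E0) >> 5) << 2;
int b = (swapped & 0x001F) << 3;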