What is the Parcelable-Relevant Size of a Bitmap?

On API Level 19+ devices, we have getByteCount() and getAllocationByteCount(), each of which returns a size of the Bitmap in bytes. The latter accounts for the fact that a Bitmap's backing allocation can be larger than the image it currently holds (e.g., the Bitmap originally held a larger image, but was then reused via BitmapFactory.Options and inBitmap to hold a smaller one).
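For illustration, here is a minimal sketch of that reuse scenario (the file path is hypothetical); after the second decode, getByteCount() reflects the smaller image while getAllocationByteCount() still reports the full backing allocation:
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true; // inBitmap reuse requires a mutable bitmap
Bitmap big = BitmapFactory.decodeFile("/sdcard/big.jpg", opts); // hypothetical path
opts.inBitmap = big;   // reuse the existing allocation
opts.inSampleSize = 2; // decode the same file at half resolution
Bitmap small = BitmapFactory.decodeFile("/sdcard/big.jpg", opts);
// Expect: small.getByteCount() < small.getAllocationByteCount()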
In most Android IPC scenarios, particularly those involving Parcelable, we have a 1MB "binder transaction limit".
For the purposes of determining whether a given Bitmap is small enough for IPC, do we use getByteCount() or getAllocationByteCount()?
My gut instinct says that we use getByteCount(), as that should be the number of bytes the current image in the Bitmap is taking up, but I was hoping somebody had a more authoritative answer.

The size of the image data written to the parcel is getByteCount() plus the size of the Bitmap's color table, if it has one. There are also approximately 48 bytes of Bitmap attributes written to the parcel. The following code analysis and tests provide the basis for these statements.
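For a quick empirical check (a minimal sketch; Test #1 below uses the same approach), the parcel footprint of a Bitmap can simply be measured:
static int parcelDataSize(Bitmap bm) {
    Parcel p = Parcel.obtain();
    try {
        bm.writeToParcel(p, 0);
        return p.dataSize(); // bytes the Bitmap occupies in this parcel
    } finally {
        p.recycle();
    }
}
Keep in mind, per the analysis below, that images larger than 40K are moved to shared memory, so for those dataSize() reports only the small in-parcel descriptor (about 48 bytes), not the pixel block itself.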
The native function to write a Bitmap to a Parcel begins at line 620 of this file. The function is included here with explanation added:
static jboolean Bitmap_writeToParcel(JNIEnv* env, jobject,
                                     jlong bitmapHandle,
                                     jboolean isMutable, jint density,
                                     jobject parcel) {
    const SkBitmap* bitmap = reinterpret_cast<SkBitmap*>(bitmapHandle);
    if (parcel == NULL) {
        SkDebugf("------- writeToParcel null parcel\n");
        return JNI_FALSE;
    }
    android::Parcel* p = android::parcelForJavaObject(env, parcel);
The following seven ints are the first data written to the parcel. In Test #2, described below, these values are read from a sample parcel to confirm the size of data written for the Bitmap.
    p->writeInt32(isMutable);
    p->writeInt32(bitmap->colorType());
    p->writeInt32(bitmap->alphaType());
    p->writeInt32(bitmap->width());
    p->writeInt32(bitmap->height());
    p->writeInt32(bitmap->rowBytes());
    p->writeInt32(density);
If the bitmap has a color table, it is written to the parcel. A precise determination of the parcel size of a bitmap must account for this as well.
    if (bitmap->colorType() == kIndex_8_SkColorType) {
        SkColorTable* ctable = bitmap->getColorTable();
        if (ctable != NULL) {
            int count = ctable->count();
            p->writeInt32(count);
            memcpy(p->writeInplace(count * sizeof(SkPMColor)),
                   ctable->lockColors(), count * sizeof(SkPMColor));
            ctable->unlockColors();
        } else {
            p->writeInt32(0); // indicate no ctable
        }
    }
Now we get to the core of the question: how much data is written for the bitmap image? The amount is determined by this call to bitmap->getSize(); that function is analyzed below. Note here that the value is stored in size, which the following code uses both to write the blob for the image data and to copy the data into the memory the blob points to.
    size_t size = bitmap->getSize();
A variable-size block of data is written to a parcel using a blob. If the block is smaller than 40K, it is written "in place" in the parcel; larger blocks are written to shared memory using ashmem, and the attributes of the ashmem region are written to the parcel. The blob itself is just a small descriptor containing a pointer to the block, its length, and a flag indicating whether the block is in-place or in shared memory. The class definition for WritableBlob is at line 262 of this file; the definition of writeBlob() is at line 747 of this file.
writeBlob() determines whether the data block is small enough to be written in-place. If so, it expands the parcel buffer to make room; if not, it creates and configures an ashmem region. In both cases the members of the blob (pointer, size, flag) are set for later use when the block is copied, and in both cases size defines the amount of data that will be copied, whether in-place or to shared memory. When writeBlob() completes, the target data buffer is described by blob, and values have been written to the parcel recording how the image data block is stored (in-place or shared memory) and, for shared memory, the attributes of the ashmem region.
    android::Parcel::WritableBlob blob;
    android::status_t status = p->writeBlob(size, &blob);
    if (status) {
        doThrowRE(env, "Could not write bitmap to parcel blob.");
        return JNI_FALSE;
    }
With the target buffer for the block now set up, the data can be copied using the pointer in the blob. Note that size defines the amount of data copied. Also note that there is only one size; the same value is used for both in-place and shared-memory targets.
    bitmap->lockPixels();
    const void* pSrc = bitmap->getPixels();
    if (pSrc == NULL) {
        memset(blob.data(), 0, size);
    } else {
        memcpy(blob.data(), pSrc, size);
    }
    bitmap->unlockPixels();
    blob.release();
    return JNI_TRUE;
}
That completes the analysis of Bitmap_writeToParcel. It is now clear that while small (<40K) images are written in-place and larger images are written to shared memory, the size of the data written is the same in both cases. The most direct way to see that size is to create a test case using an image smaller than 40K, so that it is written in-place. The size of the resulting parcel then reveals the size of the image data.
A second method for determining the size requires an understanding of SkBitmap::getSize(), the function used in the code analyzed above to get the size of the image block.
SkBitmap::getSize() is defined at line 130 of this file. It is:
size_t getSize() const { return fHeight * fRowBytes; }
Two other functions in the same file relevant to this explanation are height(), defined at line 98:
int height() const { return fHeight; }
and rowBytes(), defined at line 101:
int rowBytes() const { return fRowBytes; }
We saw these functions used in Bitmap_writeToParcel when the attributes of a bitmap are written to a parcel:
p->writeInt32(bitmap->height());
p->writeInt32(bitmap->rowBytes());
With these functions understood, we can dump the first few ints in a parcel to see the values of fHeight and fRowBytes, and from them infer the value returned by getSize().
The second code snippet below does this and provides further confirmation that the size of the data written to the Parcel corresponds to the value returned by getByteCount().
Test #1
This test creates a bitmap smaller than 40K to produce in-place storage of the image data. The parcel data size is then examined to show that getByteCount() determines the size of the image data stored in the parcel.
The first few statements in the code below are to produce a Bitmap smaller than 40K.
The logcat output confirms that the size of the data written to the Parcel corresponds to the value returned by getByteCount().
byteCount=38400 allocatedByteCount=38400 parcelDataSize=38428
byteCount=7680 allocatedByteCount=38400 parcelDataSize=7708
Code that produced the output shown:
// Setup to get a mutable bitmap less than 40 Kbytes
String path = "someSmallImage.jpg";
Bitmap bm0 = BitmapFactory.decodeFile(path);
// Need it mutable to change height
Bitmap bm1 = bm0.copy(bm0.getConfig(), true);
// Chop it to get a size less than 40K
bm1.setHeight(bm1.getHeight() / 32);
// Now we have a BitMap with size < 40K for the test
Bitmap bm2 = bm1.copy(bm0.getConfig(), true);
// What's the parcel size?
Parcel p1 = Parcel.obtain();
bm2.writeToParcel(p1, 0);
// Expect byteCount and allocatedByteCount to be the same
Log.i("Demo", String.format("byteCount=%d allocatedByteCount=%d parcelDataSize=%d",
bm2.getByteCount(), bm2.getAllocationByteCount(), p1.dataSize()));
// Resize to make byteCount and allocatedByteCount different
bm2.setHeight(bm2.getHeight() / 4);
// What's the parcel size?
Parcel p2 = Parcel.obtain();
bm2.writeToParcel(p2, 0);
// Show that byteCount determines size of data written to parcel
Log.i("Demo", String.format("byteCount=%d allocatedByteCount=%d parcelDataSize=%d",
bm2.getByteCount(), bm2.getAllocationByteCount(), p2.dataSize()));
p1.recycle();
p2.recycle();
Test #2
This test stores a bitmap to a parcel, then dumps the first few ints to get the values from which image data size can be inferred.
The logcat output with comments added:
// Bitmap attributes
bc=12000000 abc=12000000 hgt=1500 wid=2000 rbyt=8000 dens=213
// Attributes after height change. byteCount changed, allocatedByteCount not.
bc=744000 abc=12000000 hgt=93 wid=2000 rbyt=8000 dens=213
// Dump of parcel data. Parcel data size is 48. Image too large for in-place.
pds=48 mut=1 ctyp=4 atyp=1 hgt=93 wid=2000 rbyt=8000 dens=213
// Show value of getSize() derived from parcel data. It equals getByteCount().
bitmap->getSize()= 744000 getByteCount()=744000
Code that produced this output:
String path = "someImage.jpg";
Bitmap bm0 = BitmapFactory.decodeFile(path);
// Need it mutable to change height
Bitmap bm = bm0.copy(bm0.getConfig(), true);
// For reference, and to provide confidence that the parcel data dump is
// correct, log the bitmap attributes.
Log.i("Demo", String.format("bc=%d abc=%d hgt=%d wid=%d rbyt=%d dens=%d",
bm.getByteCount(), bm.getAllocationByteCount(),
bm.getHeight(), bm.getWidth(), bm.getRowBytes(), bm.getDensity()));
// Change size
bm.setHeight(bm.getHeight() / 16);
Log.i("Demo", String.format("bc=%d abc=%d hgt=%d wid=%d rbyt=%d dens=%d",
bm.getByteCount(), bm.getAllocationByteCount(),
bm.getHeight(), bm.getWidth(), bm.getRowBytes(), bm.getDensity()));
// Get a parcel and write the bitmap to it.
Parcel p = Parcel.obtain();
bm.writeToParcel(p, 0);
// When the image is too large to be written in-place,
// the parcel data size will be ~48 bytes (when there is no color map).
int parcelSize = p.dataSize();
// What are the first few ints in the parcel?
p.setDataPosition(0);
int mutable = p.readInt(); //1
int colorType = p.readInt(); //2
int alphaType = p.readInt(); //3
int width = p.readInt(); //4
int height = p.readInt(); //5 bitmap->height()
int rowBytes = p.readInt(); //6 bitmap->rowBytes()
int density = p.readInt(); //7
Log.i("Demo", String.format("pds=%d mut=%d ctyp=%d atyp=%d hgt=%d wid=%d rbyt=%d dens=%d",
parcelSize, mutable, colorType, alphaType, height, width, rowBytes, density));
// From code analysis, we know that the value returned
// by SkBitmap::getSize() is the size of the image data written.
// We also know that the value of getSize() is height()*rowBytes().
// These are the values in ints 5 and 6.
int imageSize = height * rowBytes;
// Show that the size of image data stored is Bitmap.getByteCount()
Log.i("Demo", String.format("bitmap->getSize()= %d getByteCount()=%d", imageSize, bm.getByteCount()));
p.recycle();

Related

Packing Java bitmap into ByteBuffer - byte order doesn't match pixel format and endianness (ARM)

I'm a bit puzzled by the internal representation of a Bitmap's pixels in a ByteBuffer (testing on ARM/little endian):
1) In the Java layer I create an ARGB bitmap and fill it with 0xff112233 color:
Bitmap sampleBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(sampleBitmap);
Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
paint.setColor(Color.rgb(0x11,0x22, 0x33));
canvas.drawRect(0,0, sampleBitmap.getWidth(), sampleBitmap.getHeight(), paint);
To test, sampleBitmap.getPixel(0,0) indeed returns 0xff112233 that matches ARGB pixel format.
2) The bitmap is packed into direct ByteBuffer before passing to the native layer:
final int byteSize = sampleBitmap.getAllocationByteCount();
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(byteSize);
//byteBuffer.order(ByteOrder.LITTLE_ENDIAN);// See below
sampleBitmap.copyPixelsToBuffer(byteBuffer);
To test, regardless of the buffer's order setting, in the debugger I see a byte layout that doesn't quite match ARGB but looks more like big-endian RGBA (or little-endian ABGR!?)
byteBuffer.rewind();
final byte [] out = new byte[4];
byteBuffer.get(out, 0, out.length);
out = {byte[4]#12852}
0 = (0x11)
1 = (0x22)
2 = (0x33)
3 = (0xFF)
Now, I'm passing this bitmap to the native layer where I must extract pixels, and I would expect Bitmap.Config.ARGB_8888 to be represented, depending on the buffer's byte order, as:
a) byteBuffer.order(ByteOrder.LITTLE_ENDIAN):
out = {byte[4]#12852}
0 = (0x33)
1 = (0x22)
2 = (0x11)
3 = (0xFF)
or
b) byteBuffer.order(ByteOrder.BIG_ENDIAN):
out = {byte[4]#12852}
0 = (0xFF)
1 = (0x11)
2 = (0x22)
3 = (0x33)
I can make the code which extracts the pixels work based on the above output, but I don't like it, since I can't explain the behaviour, which I hope someone will do :)
Thanks!
Let's take a look at the implementation. Both getPixel and copyPixelsToBuffer just call their native counterparts.
Bitmap_getPixels specifies an output format:
SkImageInfo dstInfo = SkImageInfo::Make(1, 1, kBGRA_8888_SkColorType, kUnpremul_SkAlphaType, sRGB);
bitmap.readPixels(dstInfo, &dst, dstInfo.minRowBytes(), x, y);
It effectively asks the bitmap to give the pixel value converted to BGRA_8888 (which becomes ARGB because of the different native and Java endianness).
Bitmap_copyPixelsToBuffer in its turn just copies raw data:
memcpy(abp.pointer(), src, bitmap.computeByteSize());
It performs no conversion: it returns the data in the same format the bitmap uses to store it. Let's find out what this internal format is.
Bitmap_creator is used to create a new bitmap and it gets the format from the config passed by calling
SkColorType colorType = GraphicsJNI::legacyBitmapConfigToColorType(configHandle);
Looking at the legacyBitmapConfigToColorType implementation, ARGB_8888 (which has index 5) becomes kN32_SkColorType.
kN32_SkColorType comes from the Skia library, so looking at the definitions we find the comment
kN32_SkColorType is an alias for whichever 32bit ARGB format is the
"native" form for skia's blitters. Use this if you don't have a swizzle
preference for 32bit pixels.
and below is the definition:
#if SK_PMCOLOR_BYTE_ORDER(B,G,R,A)
kN32_SkColorType = kBGRA_8888_SkColorType,
#elif SK_PMCOLOR_BYTE_ORDER(R,G,B,A)
kN32_SkColorType = kRGBA_8888_SkColorType,
SK_PMCOLOR_BYTE_ORDER is defined here, and it says SK_PMCOLOR_BYTE_ORDER(R,G,B,A) will be true on a little-endian machine, which is our case. So the bitmap is stored internally in kRGBA_8888_SkColorType format.
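To make this concrete, a small sketch consistent with the question's observed output: since copyPixelsToBuffer() copies the raw kRGBA_8888 storage, read the channels byte-by-byte in memory order (single-byte reads are unaffected by the ByteBuffer's order setting):
ByteBuffer buf = ByteBuffer.allocateDirect(sampleBitmap.getByteCount());
sampleBitmap.copyPixelsToBuffer(buf);
buf.rewind();
int r = buf.get() & 0xff; // 0x11
int g = buf.get() & 0xff; // 0x22
int b = buf.get() & 0xff; // 0x33
int a = buf.get() & 0xff; // 0xff
int argb = (a << 24) | (r << 16) | (g << 8) | b; // 0xff112233, matches getPixel(0,0)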

Shader Storage Buffer Objects: endianness?

OpenGL ES 3.1, Android.
I have set up an SSBO with the intention of writing something in the fragment shader and reading it back in the application. Things almost work, i.e. I can read back the value I have written, with one issue: when I read an int, its bytes come back reversed (a '17' = 0x00000011 written in the shader comes back as '285212672' = 0x11000000).
Here's how I do it:
Shader
(...)
layout (std140,binding=0) buffer SSBO
{
int ssbocount[];
};
(...)
ssbocount[0] = 17;
(...)
Application code
int SIZE = 40;
int[] mSSBO = new int[1];
ByteBuffer buf = ByteBuffer.allocateDirect(SIZE).order(ByteOrder.nativeOrder());
(...)
glGenBuffers(1,mSSBO,0);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mSSBO[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, SIZE, null, GL_DYNAMIC_READ);
buf = (ByteBuffer) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT );
glBindBufferBase(GL_SHADER_STORAGE_BUFFER,0, mSSBO[0]);
(...)
int readValue = buf.getInt(0);
Now print out readValue and it comes back as the '17' with reversed bytes.
Notice I DO allocate the ByteBuffer with 'nativeOrder'. Of course, I could manually flip the bytes, but the concern is this would only sometimes work, depending on the endianness of the host machine...
The fix is to use native endianness and create an integer view of the ByteBuffer with ByteBuffer.asIntBuffer(). The reason getInt() seems to ignore your order setting is that glMapBufferRange returns a new ByteBuffer object, and a newly created ByteBuffer defaults to big-endian; the nativeOrder() set on the buffer from allocateDirect is irrelevant once that reference is overwritten by the mapped buffer. Set the order on the mapped buffer itself before reading.
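A sketch of the corrected read-back (same static-import style as the question's code; the key point is ordering the mapped buffer, not the one originally allocated):
ByteBuffer mapped = (ByteBuffer) glMapBufferRange(
        GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT);
mapped.order(ByteOrder.nativeOrder());       // a freshly mapped buffer starts big-endian
int readValue = mapped.asIntBuffer().get(0); // now reads back 17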

Manipulating android camera frame data in JNI

I'm trying to use the NDK to do some image processing. I am NOT using opencv.
I am fairly new to Android so I was doing this in steps. I started by writing a simple app that would let me capture video from the camera and display it to the screen. I have this done.
Then I tried to manipulate the camera data in native code. However, onPreviewFrame delivers the frame information as a byte array. This is my code -
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
    if (imageFormat == ImageFormat.NV21)
    {
        if (!bProcessing)
        {
            FrameData = arg0;
            mHandler.post(callnative);
        }
    }
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
    public void run()
    {
        bProcessing = true;
        String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
        bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
        MycameraClass.setImageBitmap(bitmap);
        bProcessing = false;
    }
};
The problem is, I need to use FrameData in native code as the float datatype. However, it arrives as a byte array. I want to know how the frame data is stored. Is it a 2-dimensional array of bytes? So the camera returns an 8-bit image and stores it as 640x480 bytes? If so, how does C interpret this byte data type? Can I simply convert it to float? I have this in native code -
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d",(int)nativeData[0]);
However, this prints -22, which leads me to believe that I am printing out a pointer. I am not sure why that is the case, though.
I would appreciate any help on this.
You will not be able to get any float data directly from the pixel buffer. The data are bytes, which in C is the char datatype.
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a one-dimensional array, so you retrieve each pixel by computing its index from x, y, and the width.
Also remember that the preview camera frames from your sample are in YUV420sp (NV21); this means you will need to convert the data from YUV to RGB before you can set it into a bitmap.
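As a small illustrative sketch of that layout (Java-side; the same indexing applies in C): NV21 stores the full-resolution Y (luma) plane first, one byte per pixel, with the interleaved V/U chroma plane after it. Java bytes are signed, so mask with 0xff before widening to float (this also explains the -22 printed above: it is the signed value of the first luma byte, not a pointer):
static float lumaAt(byte[] nv21, int width, int x, int y) {
    int luma = nv21[y * width + x] & 0xff; // unsigned 0..255
    return luma;                           // safe to widen once unsigned
}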

Saving SurfaceTexture to JPEG

I am trying to save, through JNI, the output of the camera as modified by OpenGL ES 2 on my tablet.
To achieve this, I use the libjpeg library compiled with NDK r8b.
I use the following code:
In the rendering function:
renderImage();
if (iIsPictureRequired)
{
savePicture();
iIsPictureRequired=false;
}
The saving procedure:
bool Image::savePicture()
{
    bool l_res = false;
    char p_filename[] = {"/sdcard/Pictures/testPic.jpg"};
    // Allocates the image buffer (RGBA)
    int l_size = iWidth*iHeight*4*sizeof(GLubyte);
    GLubyte *l_image = (GLubyte*)malloc(l_size);
    if (l_image==NULL)
    {
        LOGE("Image::savePicture:could not allocate %d bytes",l_size);
        return l_res;
    }
    // Reads pixels from the color buffer (byte-aligned)
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    checkGlError("glPixelStorei");
    // Saves the pixel buffer
    glReadPixels(0,0,iWidth,iHeight,GL_RGBA,GL_UNSIGNED_BYTE,l_image);
    checkGlError("glReadPixels");
    // Stores the file
    FILE* l_file = fopen(p_filename, "wb");
    if (l_file==NULL)
    {
        LOGE("Image::savePicture:could not create %s:errno=%d",p_filename,errno);
        free(l_image);
        return l_res;
    }
    // JPEG structures
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jerr.trace_level = 10;
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, l_file);
    cinfo.image_width = iWidth;
    cinfo.image_height = iHeight;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    // Image quality [0..100]
    jpeg_set_quality(&cinfo, 70, true);
    jpeg_start_compress(&cinfo, true);
    // Saves the buffer
    JSAMPROW row_pointer[1]; // pointer to a single row
    // JPEG stores the image from top to bottom (OpenGL does the opposite)
    while (cinfo.next_scanline < cinfo.image_height)
    {
        row_pointer[0] = (JSAMPROW)&l_image[(cinfo.image_height-1-cinfo.next_scanline) * (cinfo.input_components)*iWidth];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }
    // End of the process
    jpeg_finish_compress(&cinfo);
    fclose(l_file);
    free(l_image);
    l_res = true;
    return l_res;
}
The display is correct, but the generated JPEG appears tripled, with the copies overlapping from left to right.
What did I do wrong ?
It appears that the pixel format given to the jpeg lib and the format of the buffer do not match: one reads/encodes RGBRGBRGB, the other RGBARGBARGBA.
You might be able to rearrange the image data, if everything else fails...
char *dst_ptr = (char*)l_image;
char *src_ptr = (char*)l_image;
int i;
for (i = 0; i < width*height; i++) {
    *dst_ptr++ = *src_ptr++; // R
    *dst_ptr++ = *src_ptr++; // G
    *dst_ptr++ = *src_ptr++; // B
    src_ptr++;               // skip A
}
EDIT: now that the cause is verified, there might be an even simpler modification.
You might be able to get the data from the GL pixel buffer in the correct format directly:
int l_size = iWidth*iHeight*3*sizeof(GLubyte);
...
glReadPixels(0,0,iWidth,iHeight,GL_RGB,GL_UNSIGNED_BYTE,l_image);
And one more warning: if this compiles but the output is skewed, your width is not a multiple of 4 and OpenGL is starting each new row at a dword (4-byte) boundary, which is the default GL_PACK_ALIGNMENT of 4 at work (the code above sets it to 1, which avoids this). In that case there is also a good chance of a crash, because with row padding the buffer needs 1, 2, or 3 extra bytes per row beyond the computed l_size.
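A sketch of that padding rule, in Java for illustration (assuming GL_PACK_ALIGNMENT is at its default of 4):
// Rows are padded up to the next multiple of the pack alignment.
// GL alignments are powers of two (1, 2, 4, 8), so bit masking works.
static int paddedRowBytes(int width, int bytesPerPixel, int packAlignment) {
    int raw = width * bytesPerPixel;
    return (raw + packAlignment - 1) & ~(packAlignment - 1);
}
For example, a 641-pixel-wide RGB row gives paddedRowBytes(641, 3, 4) = 1924 bytes rather than 1923.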

Access to raw data in ARGB_8888 Android Bitmap

I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, invoking those calls seems to always apply the alpha channel to the RGB channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the ARGB channels is different; that's fine. But I don't want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access the data in a Bitmap is to use the getPixels() method. Below is an example I used to get a grayscale image from ARGB data and then back from the byte array to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/*Free to use licence by Sami Varjo (but nice if you retain this line)*/
public final class BitmapConverter {

    private BitmapConverter(){}

    /**
     * Get grayscale data from argb image to byte array
     */
    public static byte[] ARGB2Gray(Bitmap img)
    {
        int width = img.getWidth();
        int height = img.getHeight();
        int[] pixels = new int[height*width];
        byte grayIm[] = new byte[height*width];
        img.getPixels(pixels,0,width,0,0,width,height);
        int pixel=0;
        int count=width*height;
        while(count-->0){
            int inVal = pixels[pixel];
            //Get the pixel channel values from int
            double r = (double)( (inVal & 0x00ff0000)>>16 );
            double g = (double)( (inVal & 0x0000ff00)>>8 );
            double b = (double)( inVal & 0x000000ff) ;
            grayIm[pixel++] = (byte)( 0.2989*r + 0.5870*g + 0.1140*b );
        }
        return grayIm;
    }

    /**
     * Create a gray scale bitmap from byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height)
    {
        int count = height*width;
        int[] outPix = new int[count];
        int pixel=0;
        while(count-->0){
            int val = data[pixel] & 0xff; //convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val;
        }
        Bitmap out = Bitmap.createBitmap(outPix,0,width,width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set endianess on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer/copyPixelsToBuffer do not change the pixel data in any way; the bytes are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently in trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, btw! You saved me tons of time in debugging.) I'm adding this answer in the hopes it helps others like me going forward...
Although I haven't used this yet to ensure that it works, it looks like, as of API Level 19, we'll finally have a way to specify not to "apply the alpha" (a.k.a. premultiply) within Bitmap. They're adding a setPremultiplied(boolean) method that should help in situations like this going forward by allowing us to specify false.
I hope this helps!
This is an old question, but I ran into the same issue and just figured out that the bitmap bytes are pre-multiplied. As of API 19 you can set the bitmap not to pre-multiply the buffer, but the API makes no guarantee.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
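A minimal sketch of the API 19+ route, under the assumption (hedged per the docs above) that disabling premultiplication preserves the original question's pixel across the copy round trip:
Bitmap one = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
one.setPremultiplied(false);    // API 19+: store channels un-premultiplied
one.setPixel(0, 0, 0xef234567);
ByteBuffer buffer = ByteBuffer.wrap(new byte[4]);
one.copyPixelsToBuffer(buffer);
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
// one.getPixel(0, 0) should now come back as 0xef234567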
