Where the heck is Bitmap getByteCount()? - android

I know the Android platform is a huge mess, overcomplicated and over-engineered, but seriously, to get the size of a bitmap is it really necessary to do all these conversions?
Bitmap bitmap = ...; // your bitmap object
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] imageInByte = stream.toByteArray();
long length = imageInByte.length;
According to the Google documentation, Bitmap has a getByteCount() method to do this. However, it is not present in SDK 2.2 (I haven't tried others), and there is no mention of it being deprecated or of API support being any different from API 1... So where is this mysterious method hiding? It would be really nice to simply be able to do
bitmap.getByteCount()

I just wrote this method.
AndroidVersion.java is a class I created to easily get the version code of the phone.
http://code.google.com/p/android-beryl/source/browse/beryl/src/org/beryl/app/AndroidVersion.java
public static long getSizeInBytes(Bitmap bitmap) {
    if (AndroidVersion.isHoneycombMr2OrHigher()) {
        return bitmap.getByteCount();
    } else {
        return bitmap.getRowBytes() * bitmap.getHeight();
    }
}

If you filter by API Level 8 (= SDK 2.2), you'll see that Bitmap#getByteCount() is greyed out, meaning that method is not present in that API level.
getByteCount() was added in API Level 12.

The answers here are a bit outdated. The reason (from the docs):
getByteCount : As of KITKAT, the result of this method can no longer
be used to determine memory usage of a bitmap. See
getAllocationByteCount().
So, the current answer should be :
int result = BitmapCompat.getAllocationByteCount(bitmap);
or, if you insist on writing it yourself:
public static int getBitmapByteCount(Bitmap bitmap) {
    if (VERSION.SDK_INT < VERSION_CODES.HONEYCOMB_MR1)
        return bitmap.getRowBytes() * bitmap.getHeight();
    if (VERSION.SDK_INT < VERSION_CODES.KITKAT)
        return bitmap.getByteCount();
    return bitmap.getAllocationByteCount();
}

Before API 12 you can calculate the byte size of a Bitmap using getHeight() * getWidth() * 4 if you are using ARGB_8888, because every pixel is stored in 4 bytes. I think this is the default format.
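A minimal sketch of that calculation, assuming bitmap is an ARGB_8888 Bitmap already in scope (getRowBytes() is the safer choice because it also covers other configs and any row padding):
// Rough pre-API-12 estimate for an ARGB_8888 bitmap: 4 bytes per pixel.
int estimatedBytes = bitmap.getWidth() * bitmap.getHeight() * 4;
// Safer equivalent that also covers other configs and row padding.
int exactBytes = bitmap.getRowBytes() * bitmap.getHeight();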

As mentioned in other answers, it is only available on API 12 or higher. This is a simple compatibility version of the method.
public static int getByteCount(Bitmap bitmap) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB_MR1) {
        return bitmap.getRowBytes() * bitmap.getHeight();
    } else {
        return bitmap.getByteCount();
    }
}

I tried all of the above methods and they were close, but not quite right (for my situation at least).
I was using bitmap.getByteCount(); inside of the sizeOf() method when creating a new LruCache:
mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount();
    }
};
I then tried the suggested:
return bitmap.getRowBytes() * bitmap.getHeight();
This worked, but I noticed that the returned values were different, and when I used the suggestion above it would not even create a cache on my device. I tested the return values on a Nexus One running API 3.2 and a Galaxy Nexus running 4.2:
bitmap.getByteCount(); returned-> 15
bitmap.getRowBytes() * bitmap.getHeight(); returned-> 15400
So to solve my issue, I simply did this:
return (bitmap.getRowBytes() * bitmap.getHeight()) / 1000;
instead of:
return bitmap.getByteCount();
May not be the same situation you were in, but this worked for me.
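For reference, another way to express the same idea is to size the whole cache in kilobytes so that sizeOf() and the cache size use the same unit. A minimal sketch, assuming the common 1/8-of-heap cache size (that fraction is my assumption, not from the answer above):
// Sketch: keep the cache size and sizeOf() in the same unit (kilobytes here).
final int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
final int cacheSizeKb = maxMemoryKb / 8; // assumption: use 1/8 of the available heap

LruCache<String, Bitmap> memoryCache = new LruCache<String, Bitmap>(cacheSizeKb) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        // Bitmap size in kilobytes, so it matches cacheSizeKb.
        return (bitmap.getRowBytes() * bitmap.getHeight()) / 1024;
    }
};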

As you can see in the source code, getByteCount is simply this:
public final int getByteCount() {
    // int result permits bitmaps up to 46,340 x 46,340
    return getRowBytes() * getHeight();
}
Here is the source code for 5.0

Related

Programmatic Image Resizing in Android, Memory Issues

Days, I've spent working on this. Weeks, perhaps. Literally. :(
So I've got an image on an SD card that more than likely came out of the built-in camera. I want to take that image and downsample it to an arbitrary size (but always smaller and never larger). My code uses standard Android Bitmap methods to decode, resize, recompress, and save the image. Everything works fine as long as the final image is smaller than 3MP or so. If the image is larger, or if I try to do several of these at once, the application crashes with an OutOfMemoryError. I know why that's happening, and I know it's happening for a perfectly legitimate reason, I just want it to not happen anymore.
Look, I'm not trying to launch a rocket here. All I want to do is resize a camera image and dump it to an OutputStream or even a temporary file. Surely someone out there must have done such a thing. I don't need you to write my code for me, and I don't need my hand held. But between my various programming abortions and days of obsessed Googling, I don't even know which direction to head in. Roughly speaking, does anyone know how to decode a JPEG, downsample it, re-compress it in JPEG, and send it out on an OutputStream without allocating a massive amount of memory?
OK, I know it's a little bit late, but I had this problem and found a solution. It is actually easy and I am sure it works back to API 10 (I have no idea about earlier versions). I tried this on my phone, a Samsung Galaxy S2 with an 8 MP camera, and the code perfectly resized camera images to 168x168, as well as images I found on the web. I checked the images using a file manager too. I never tried resizing images to a bigger resolution.
private Bitmap resize(Bitmap bp, int width, int height) {
    return Bitmap.createScaledBitmap(bp, width, height, false);
}
You can save it like this:
private void saveBitmap(Bitmap bp) throws FileNotFoundException {
    String state = Environment.getExternalStorageState();
    File folder;
    // if there is a memory card available, the code chooses that
    if (Environment.MEDIA_MOUNTED.equals(state)) {
        folder = Environment.getExternalStorageDirectory();
    } else {
        folder = Environment.getDataDirectory();
    }
    folder = new File(folder, "/aaaa");
    if (!folder.exists()) {
        folder.mkdir();
    }
    File file = new File(folder, (int) (Math.random() * 10000) + ".jpg");
    FileOutputStream os = new FileOutputStream(file);
    bp.compress(Bitmap.CompressFormat.JPEG, 90, os);
    try {
        os.close(); // flush and release the stream
    } catch (IOException ignored) {
    }
}
thanks to this link
The following code is from my previous project. The key point is options.inSampleSize.
public static Bitmap makeBitmap(String fn, int minSideLength, int maxNumOfPixels) {
    BitmapFactory.Options options;
    try {
        options = new BitmapFactory.Options();
        options.inPurgeable = true;
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(fn, options);
        if (options.mCancel || options.outWidth == -1
                || options.outHeight == -1) {
            return null;
        }
        options.inSampleSize = computeSampleSize(
                options, minSideLength, maxNumOfPixels);
        options.inJustDecodeBounds = false;
        //Log.e(LOG_TAG, "sample size=" + options.inSampleSize);
        options.inDither = false;
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        return BitmapFactory.decodeFile(fn, options);
    } catch (OutOfMemoryError ex) {
        Log.e(LOG_TAG, "Got oom exception ", ex);
        return null;
    }
}
private static int computeInitialSampleSize(BitmapFactory.Options options,
        int minSideLength, int maxNumOfPixels) {
    double w = options.outWidth;
    double h = options.outHeight;
    int lowerBound = (maxNumOfPixels == UNCONSTRAINED) ? 1 :
            (int) Math.ceil(Math.sqrt(w * h / maxNumOfPixels));
    int upperBound = (minSideLength == UNCONSTRAINED) ? 128 :
            (int) Math.min(Math.floor(w / minSideLength),
                    Math.floor(h / minSideLength));
    if (upperBound < lowerBound) {
        // return the larger one when there is no overlapping zone.
        return lowerBound;
    }
    if ((maxNumOfPixels == UNCONSTRAINED) &&
            (minSideLength == UNCONSTRAINED)) {
        return 1;
    } else if (minSideLength == UNCONSTRAINED) {
        return lowerBound;
    } else {
        return upperBound;
    }
}
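Note that makeBitmap() calls computeSampleSize(), which is not shown above. Here is a sketch of that wrapper, based on the old AOSP camera/gallery utility this code appears to come from; it rounds the initial estimate to a power of two below 8 and to a multiple of 8 above:
private static final int UNCONSTRAINED = -1;

public static int computeSampleSize(BitmapFactory.Options options,
        int minSideLength, int maxNumOfPixels) {
    int initialSize = computeInitialSampleSize(options, minSideLength, maxNumOfPixels);
    int roundedSize;
    if (initialSize <= 8) {
        // round up to the nearest power of two
        roundedSize = 1;
        while (roundedSize < initialSize) {
            roundedSize <<= 1;
        }
    } else {
        // round up to a multiple of 8
        roundedSize = (initialSize + 7) / 8 * 8;
    }
    return roundedSize;
}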

Android: bitmap.getByteCount() in API lesser than 12

When I searched for how to find the size of an image before saving it to the SD card, I found this:
bitmap.getByteCount();
but that method was added in API 12 and I am using API 10. So then I found this:
getByteCount() is just a convenience method which does exactly what you have placed in the else-block. In other words, if you simply rewrite getSizeInBytes to always return "bitmap.getRowBytes() * bitmap.getHeight()"
here:
Where the heck is Bitmap getByteCount()?
So, by calculating bitmap.getRowBytes() * bitmap.getHeight() I got the value 120000 (117 KB),
whereas the image size on the SD card is 1.6 KB.
What am I missing, or what am I doing wrong?
Thank you
You are doing it correctly!
A quick way to know for sure if the values are valid is to log them like this:
int numBytesByRow = bitmap.getRowBytes() * bitmap.getHeight();
int numBytesByCount = bitmap.getByteCount();
Log.v( TAG, "numBytesByRow=" + numBytesByRow );
Log.v( TAG, "numBytesByCount=" + numBytesByCount );
This gives the result:
03-29 17:31:10.493: V/ImageCache(19704): numBytesByRow=270000
03-29 17:31:10.493: V/ImageCache(19704): numBytesByCount=270000
So both are calculating the same number, which I suspect is the in-memory size of the bitmap. This is different from a JPG or PNG on disk, as it is completely uncompressed.
For more info, we can look at AOSP and the source of the example project. This is the file used in the BitmapFun example project from the Android developer docs article Caching Bitmaps:
AOSP ImageCache.java
/**
 * Get the size in bytes of a bitmap in a BitmapDrawable.
 * @param value
 * @return size in bytes
 */
@TargetApi(12)
public static int getBitmapSize(BitmapDrawable value) {
    Bitmap bitmap = value.getBitmap();
    if (APIUtil.hasHoneycombMR1()) {
        return bitmap.getByteCount();
    }
    // Pre HC-MR1
    return bitmap.getRowBytes() * bitmap.getHeight();
}
As you can see, this is the same technique they use:
bitmap.getRowBytes() * bitmap.getHeight();
References:
http://developer.android.com/training/displaying-bitmaps/cache-bitmap.html
http://code.google.com/p/adamkoch/source/browse/bitmapfun/
For now I am using this:
ByteArrayOutputStream bao = new ByteArrayOutputStream();
my_bitmap.compress(Bitmap.CompressFormat.PNG, 100, bao);
byte[] ba = bao.toByteArray();
int size = ba.length;
to get the total number of bytes as the size, because the value I get here perfectly matches the size (in bytes) of the image on the SD card.
Nothing is missing! Your code snippet is exactly the implementation from the Android source:
http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/android/4.1.1_r1/android/graphics/Bitmap.java#Bitmap.getByteCount%28%29
I think the differences in size are the result of image compression (JPG and so on).
Here is an alternative way:
public static int getBitmapByteCount(Bitmap bitmap) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB_MR1)
        return bitmap.getRowBytes() * bitmap.getHeight();
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT)
        return bitmap.getByteCount();
    return bitmap.getAllocationByteCount();
}
A statement from the IDE for getAllocationByteCount():
This can be larger than the result of getByteCount() if a bitmap is
reused to decode other bitmaps of smaller size, or by manual
reconfiguration. See reconfigure(int, int, Bitmap.Config),
setWidth(int), setHeight(int), setConfig(Bitmap.Config), and
BitmapFactory.Options.inBitmap. If a bitmap is not modified in this
way, this value will be the same as that returned by getByteCount().
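To see when the two values actually diverge, here is a hedged sketch that reuses a bitmap's allocation to decode a smaller image via BitmapFactory.Options.inBitmap (reusing for smaller decodes is allowed on API 19+; res and R.drawable.big_image are placeholder names, not from the answers above):
// Sketch: decode a large image, then reuse its allocation for a smaller decode.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true; // inBitmap reuse requires a mutable bitmap
Bitmap big = BitmapFactory.decodeResource(res, R.drawable.big_image, opts);

opts.inBitmap = big;      // reuse the existing pixel allocation
opts.inSampleSize = 2;    // decode at half size (allowed with inBitmap on API 19+)
Bitmap small = BitmapFactory.decodeResource(res, R.drawable.big_image, opts);

// small.getByteCount()           -> bytes actually used by the smaller image
// small.getAllocationByteCount() -> size of the original, larger allocation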
Maybe you can try this code:
int pixels = bitmap.getHeight() * bitmap.getWidth();
int bytesPerPixel = 0;
switch (bitmap.getConfig()) {
    case ARGB_8888:
        bytesPerPixel = 4;
        break;
    case RGB_565:
        bytesPerPixel = 2;
        break;
    case ARGB_4444:
        bytesPerPixel = 2;
        break;
    case ALPHA_8:
        bytesPerPixel = 1;
        break;
}
int byteCount = pixels * bytesPerPixel; // total bytes = pixels multiplied by bytes per pixel
The image on the SD card has a different size because it's compressed. On the device the size will depend on the width/height.
Why don't you try dividing it by 1024, to get KB instead of bytes?

ALPHA_8 bitmaps and getPixel

I am trying to load a movement map from a PNG image. In order to save memory, after I load the bitmap I do something like this:
Bitmap mapBmp = tempBmp.copy(Bitmap.Config.ALPHA_8, false);
If I draw the mapBmp I can see the map, but when I use getPixel() I always get 0 (zero).
Is there a way to retrieve ALPHA information from a bitmap other than with getPixel()?
Seems to be an Android bug in handling ALPHA_8. I also tried copyPixelsToBuffer, to no avail. Simplest workaround is to waste lots of memory and use ARGB_8888.
Issue 25690
I found this question from Google and I was able to extract the pixels using the copyPixelsToBuffer() method that Mitrescu Catalin ended up using. This is what my code looks like in case anyone else finds this as well:
public byte[] getPixels(Bitmap b) {
    int bytes = b.getRowBytes() * b.getHeight();
    ByteBuffer buffer = ByteBuffer.allocate(bytes);
    b.copyPixelsToBuffer(buffer);
    return buffer.array();
}
If you are coding for API level 12 or higher you could use getByteCount() instead to get the total number of bytes to allocate. However, if you are coding for API level 19 (KitKat) or higher, you should probably use getAllocationByteCount() instead.
I was able to find a nice and sort of clean way to create boundary maps. I create an ALPHA_8 bitmap from the start, paint my boundary map with paths, then use copyPixelsToBuffer() to transfer the bytes into a ByteBuffer, and read the "pixels" from that buffer (see the sketch below).
I think it is a good solution, since you can scale the Path up or down and draw the boundary map at the desired screen resolution, with no I/O or decode operations.
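A sketch of that approach (all names here are mine, not from the answer above): draw a Path into an ALPHA_8 bitmap, copy the pixels out once, and index into the byte array instead of calling getPixel():
// Sketch: build a boundary map by drawing into an ALPHA_8 bitmap,
// then read the alpha bytes from a buffer instead of getPixel().
Bitmap map = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
Canvas canvas = new Canvas(map);

Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
paint.setColor(Color.BLACK); // only the alpha of the color matters in ALPHA_8

canvas.drawPath(boundaryPath, paint); // boundaryPath: your (scaled) Path

byte[] pixels = new byte[map.getRowBytes() * map.getHeight()];
map.copyPixelsToBuffer(ByteBuffer.wrap(pixels));

// getPixel() replacement: one byte per pixel, row-major, getRowBytes() stride
int alphaAt = pixels[y * map.getRowBytes() + x] & 0xff;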
Bitmap.getPixel() is useless for ALPHA_8 bitmaps, it always returns 0.
I developed a solution with the PNGJ library, to read the image from assets and then create a Bitmap with Config.ALPHA_8.
import ar.com.hjg.pngj.IImageLine;
import ar.com.hjg.pngj.ImageLineHelper;
import ar.com.hjg.pngj.PngReader;

public Bitmap getAlpha8BitmapFromAssets(String file) {
    Bitmap result = null;
    try {
        PngReader pngr = new PngReader(getAssets().open(file));
        int channels = pngr.imgInfo.channels;
        if (channels < 3 || pngr.imgInfo.bitDepth != 8)
            throw new RuntimeException("This method is for RGB8/RGBA8 images");
        int bytes = pngr.imgInfo.cols * pngr.imgInfo.rows;
        ByteBuffer buffer = ByteBuffer.allocate(bytes);
        for (int row = 0; row < pngr.imgInfo.rows; row++) {
            IImageLine l1 = pngr.readRow();
            for (int j = 0; j < pngr.imgInfo.cols; j++) {
                int original_color = ImageLineHelper.getPixelARGB8(l1, j);
                byte x = (byte) Color.alpha(original_color);
                buffer.put(row * pngr.imgInfo.cols + j, x ^= 0xff);
            }
        }
        pngr.end();
        result = Bitmap.createBitmap(pngr.imgInfo.cols, pngr.imgInfo.rows, Bitmap.Config.ALPHA_8);
        result.copyPixelsFromBuffer(buffer);
    } catch (IOException e) {
        Log.e(LOG_TAG, e.getMessage());
    }
    return result;
}
I also invert alpha values, because of my particular needs. This code is only tested for API 21.

Load .png image as ARGB_8888 bitmap [duplicate]

I am trying to read an image from the SD card (in the emulator) and then create a Bitmap image with the
BitmapFactory.decodeByteArray
method. I set the options:
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.inDither = false;
Then I extract the pixels into a ByteBuffer.
ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4);
bitmap.copyPixelsToBuffer(buffer);
I then use this ByteBuffer in JNI to convert it to RGB format and do calculations on it.
But I always get wrong data, even when I test without modifying the ByteBuffer. The only thing I do is pass it into the native method through JNI, cast it to an unsigned char*, and convert it back into a ByteBuffer before returning it to Java.
unsigned char* buffer = (unsigned char*) env->GetDirectBufferAddress(byteBuffer);
jobject returnByteBuffer = env->NewDirectByteBuffer(buffer, length);
Before displaying the image I get the data back with
bitmap.copyPixelsFromBuffer(buffer);
But then it has wrong data in it.
My question is whether this is because the image is internally converted to RGB_565, or whether something else is wrong here.
Update: I have an answer for that part: yes, it is converted internally to RGB_565.
Does anybody know how to create such a bitmap image from a PNG with ARGB_8888 pixel format?
If anybody has an idea, it would be great!
An ARGB_8888 Bitmap (on pre Honeycomb versions) is natively stored in the RGBA format.
So the alpha channel is moved to the end. You should take this into account when accessing a Bitmap's pixels natively.
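You can see that ordering from Java as well, since copyPixelsToBuffer() copies the native layout unchanged. A minimal sketch, assuming bitmap is an ARGB_8888 Bitmap (note that the R, G and B bytes you read back may also be premultiplied by alpha, which is the subject of the last question on this page):
// Sketch: the raw bytes of an ARGB_8888 bitmap are laid out as R, G, B, A per pixel.
ByteBuffer buf = ByteBuffer.allocate(bitmap.getRowBytes() * bitmap.getHeight());
bitmap.copyPixelsToBuffer(buf);

byte r = buf.get(0); // red of pixel (0,0)
byte g = buf.get(1); // green
byte b = buf.get(2); // blue
byte a = buf.get(3); // alpha comes last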
I assume you are writing code for a version of Android lower than 3.2 (API level < 12), because since then the behavior of the methods
BitmapFactory.decodeFile(pathToImage);
BitmapFactory.decodeFile(pathToImage, opt);
bitmapObject.createScaledBitmap(bitmap, desiredWidth, desiredHeight, false /*filter?*/);
has changed.
On older platforms (API level < 12) the BitmapFactory.decodeFile(..) methods try to return a Bitmap with RGB_565 config by default if they can't find any alpha, which lowers the quality of an image. This is still OK, because you can enforce an ARGB_8888 bitmap using
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
options.inDither = false;
The real problem comes when each pixel of your image has an alpha value of 255 (i.e. completely opaque). In that case the Bitmap's flag 'hasAlpha' is set to false, even though your Bitmap has ARGB_8888 config. If your *.png-file had at least one real transparent pixel, this flag would have been set to true and you wouldn't have to worry about anything.
So when you want to create a scaled Bitmap using
bitmapObject.createScaledBitmap(bitmap, desiredWidth, desiredHeight, false /*filter?*/);
the method checks whether the 'hasAlpha' flag is set to true or false, and in your case it is set to false, which results in obtaining a scaled Bitmap, which was automatically converted to the RGB_565 format.
Therefore on API level >= 12 there is a public method called
public void setHasAlpha (boolean hasAlpha);
which would have solved this issue. So far this was just an explanation of the problem.
I did some research and noticed that the setHasAlpha method has existed for a long time and is public, but has been hidden (@hide annotation). Here is how it is defined on Android 2.3:
/**
 * Tell the bitmap if all of the pixels are known to be opaque (false)
 * or if some of the pixels may contain non-opaque alpha values (true).
 * Note, for some configs (e.g. RGB_565) this call is ignored, since it does
 * not support per-pixel alpha values.
 *
 * This is meant as a drawing hint, as in some cases a bitmap that is known
 * to be opaque can take a faster drawing case than one that may have
 * non-opaque per-pixel alpha values.
 *
 * @hide
 */
public void setHasAlpha(boolean hasAlpha) {
    nativeSetHasAlpha(mNativeBitmap, hasAlpha);
}
Now here is my solution proposal. It does not involve any copying of bitmap data:
1. Check at runtime, using java.lang.reflect, whether the current Bitmap implementation has a public 'setHasAlpha' method. (According to my tests it works perfectly since API level 3; I haven't tested lower versions, because JNI wouldn't work.) You may have problems if a manufacturer has explicitly made it private or protected, or deleted it.
2. Call the 'setHasAlpha' method for the given Bitmap object using JNI. This works perfectly, even for private methods or fields. It is official that JNI does not check whether you are violating access control rules or not.
Source: http://java.sun.com/docs/books/jni/html/pitfalls.html (10.9)
This gives us great power, which should be used wisely. I wouldn't try modifying a final field, even if it would work (just to give an example). And please note this is just a workaround...
Here is my implementation of all necessary methods:
JAVA PART:
// NOTE: this cannot be used in switch statements
private static final boolean SETHASALPHA_EXISTS = setHasAlphaExists();

private static boolean setHasAlphaExists() {
    // get all public Methods of the class Bitmap
    java.lang.reflect.Method[] methods = Bitmap.class.getMethods();
    // search for a method called 'setHasAlpha'
    for (int i = 0; i < methods.length; i++) {
        if (methods[i].getName().contains("setHasAlpha")) {
            Log.i(TAG, "method setHasAlpha was found");
            return true;
        }
    }
    Log.i(TAG, "couldn't find method setHasAlpha");
    return false;
}
private static void setHasAlpha(Bitmap bitmap, boolean value) {
    if (bitmap.hasAlpha() == value) {
        Log.i(TAG, "bitmap.hasAlpha() == value -> do nothing");
        return;
    }
    if (!SETHASALPHA_EXISTS) { // if we can't find it then API level MUST be lower than 12
        // couldn't find the setHasAlpha-method
        // <-- provide alternative here...
        return;
    }
    // using android.os.Build.VERSION.SDK to support API level 3 and above
    // use android.os.Build.VERSION.SDK_INT to support API level 4 and above
    if (Integer.valueOf(android.os.Build.VERSION.SDK) <= 11) {
        Log.i(TAG, "BEFORE: bitmap.hasAlpha() == " + bitmap.hasAlpha());
        Log.i(TAG, "trying to set hasAlpha to true");
        int result = setHasAlphaNative(bitmap, value);
        Log.i(TAG, "AFTER: bitmap.hasAlpha() == " + bitmap.hasAlpha());
        if (result == -1) {
            Log.e(TAG, "Unable to access bitmap."); // usually due to a bug in the own code
            return;
        }
    } else { // API level >= 12
        bitmap.setHasAlpha(true);
    }
}
/**
 * Decodes a Bitmap from the SD card
 * and scales it if necessary
 */
public Bitmap decodeBitmapFromFile(String pathToImage, int pixels_limit) {
    Bitmap bitmap;
    Options opt = new Options();
    opt.inDither = false; // important
    opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
    bitmap = BitmapFactory.decodeFile(pathToImage, opt);
    if (bitmap == null) {
        Log.e(TAG, "unable to decode bitmap");
        return null;
    }
    setHasAlpha(bitmap, true); // if necessary
    int numOfPixels = bitmap.getWidth() * bitmap.getHeight();
    if (numOfPixels > pixels_limit) { // image needs to be scaled down
        // ensures that the scaled image uses the maximum of the pixel_limit while keeping the original aspect ratio
        // I use: private static final int pixels_limit = 1280*960; //1,3 Megapixel
        imageScaleFactor = Math.sqrt((double) pixels_limit / (double) numOfPixels);
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap,
                (int) (imageScaleFactor * bitmap.getWidth()),
                (int) (imageScaleFactor * bitmap.getHeight()), false);
        bitmap.recycle();
        bitmap = scaledBitmap;
        Log.i(TAG, "scaled bitmap config: " + bitmap.getConfig().toString());
        Log.i(TAG, "pixels_limit = " + pixels_limit);
        Log.i(TAG, "scaled_numOfpixels = " + scaledBitmap.getWidth() * scaledBitmap.getHeight());
        setHasAlpha(bitmap, true); // if necessary
    }
    return bitmap;
}
Load your lib and declare the native method:
static {
    System.loadLibrary("bitmaputils");
}

private static native int setHasAlphaNative(Bitmap bitmap, boolean value);
Native section ('jni' folder)
Android.mk:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := bitmaputils
LOCAL_SRC_FILES := bitmap_utils.c
LOCAL_LDLIBS := -llog -ljnigraphics -lz -ldl -lgcc
include $(BUILD_SHARED_LIBRARY)
bitmap_utils.c:
#include <jni.h>
#include <android/bitmap.h>
#include <android/log.h>

#define LOG_TAG "BitmapTest"
#define Log_i(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
#define Log_e(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)

// caching class and method IDs for a faster subsequent access
static jclass bitmap_class = 0;
static jmethodID setHasAlphaMethodID = 0;

jint Java_com_example_bitmaptest_MainActivity_setHasAlphaNative(JNIEnv * env, jclass clazz, jobject bitmap, jboolean value) {
    AndroidBitmapInfo info;
    void* pixels;

    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) {
        Log_e("Failed to get Bitmap info");
        return -1;
    }

    if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        Log_e("Incompatible Bitmap format");
        return -1;
    }

    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0) {
        Log_e("Failed to lock the pixels of the Bitmap");
        return -1;
    }

    // get class
    if (bitmap_class == NULL) { // initializing jclass
        // NOTE: The class Bitmap exists since API level 1, so it just must be found.
        bitmap_class = (*env)->GetObjectClass(env, bitmap);
        if (bitmap_class == NULL) {
            Log_e("bitmap_class == NULL");
            return -2;
        }
    }

    // get methodID
    if (setHasAlphaMethodID == NULL) { // initializing jmethodID
        // NOTE: If this fails because the method could not be found, the app will crash.
        // But we only call this part of the code if the method was found using java.lang.Reflect
        setHasAlphaMethodID = (*env)->GetMethodID(env, bitmap_class, "setHasAlpha", "(Z)V");
        if (setHasAlphaMethodID == NULL) {
            Log_e("methodID == NULL");
            return -2;
        }
    }

    // call java instance method
    (*env)->CallVoidMethod(env, bitmap, setHasAlphaMethodID, value);

    // if an exception was thrown we could handle it here
    if ((*env)->ExceptionOccurred(env)) {
        (*env)->ExceptionDescribe(env);
        (*env)->ExceptionClear(env);
        Log_e("calling setHasAlpha threw an exception");
        return -2;
    }

    if (AndroidBitmap_unlockPixels(env, bitmap) < 0) {
        Log_e("Failed to unlock the pixels of the Bitmap");
        return -1;
    }

    return 0; // success
}
That's it. We are done. I've posted the whole code for copy-and-paste purposes.
The actual code isn't that big, but all these paranoid error checks make it a lot bigger. I hope this is helpful to someone.

Access to raw data in ARGB_8888 Android Bitmap

I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, invocation of those calls seems to always apply the alpha channel to the rgb channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the argb channels is different; that's fine. But I don't
want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access data in a Bitmap is to use the getPixels() method. Below you can find an example I used to get a grayscale image from ARGB data and then go back from the byte array to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/* Free to use licence by Sami Varjo (but nice if you retain this line) */
public final class BitmapConverter {

    private BitmapConverter() {}

    /**
     * Get grayscale data from argb image to byte array
     */
    public static byte[] ARGB2Gray(Bitmap img) {
        int width = img.getWidth();
        int height = img.getHeight();

        int[] pixels = new int[height * width];
        byte grayIm[] = new byte[height * width];

        img.getPixels(pixels, 0, width, 0, 0, width, height);

        int pixel = 0;
        int count = width * height;
        while (count-- > 0) {
            int inVal = pixels[pixel];
            // Get the pixel channel values from int
            double r = (double) ((inVal & 0x00ff0000) >> 16);
            double g = (double) ((inVal & 0x0000ff00) >> 8);
            double b = (double) (inVal & 0x000000ff);
            grayIm[pixel++] = (byte) (0.2989 * r + 0.5870 * g + 0.1140 * b);
        }
        return grayIm;
    }

    /**
     * Create a gray scale bitmap from byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height) {
        int count = height * width;
        int[] outPix = new int[count];

        int pixel = 0;
        while (count-- > 0) {
            int val = data[pixel] & 0xff; // convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val;
        }

        Bitmap out = Bitmap.createBitmap(outPix, 0, width, width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
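A short usage sketch for the helper above (sourceBitmap is assumed to be an ARGB_8888 Bitmap already in scope; the round trip deliberately discards the color information):
byte[] gray = BitmapConverter.ARGB2Gray(sourceBitmap);
Bitmap grayBitmap = BitmapConverter.gray2ARGB(gray,
        sourceBitmap.getWidth(), sourceBitmap.getHeight());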
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set endianess on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer/copyPixelsToBuffer does not change the pixel data in any way. They are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently in trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, btw! You saved me tons of time in debugging.) I'm adding this answer in the hopes it helps others like me going forward...
Although I haven't used this yet to make sure it works, it looks like, as of API Level 19, we'll finally have a way to specify not to "apply the alpha" (a.k.a. premultiply) within Bitmap. They're adding a setPremultiplied(boolean) method that should help in situations like this going forward by allowing us to specify false.
I hope this helps!
This is an old question, but I ran into the same issue and just figured out that the bitmap bytes are premultiplied. As of API 19 you can set the bitmap not to premultiply the buffer, but the API makes no guarantees.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
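A minimal sketch of both API 19 options, decoding without premultiplication or switching an existing bitmap over, before copying pixels out (pathToImage is a placeholder name):
// Option 1 (API 19+): decode the file without premultiplying the pixel data.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPremultiplied = false;
Bitmap raw = BitmapFactory.decodeFile(pathToImage, opts);

// Option 2 (API 19+): tell an existing bitmap to stop treating its data as premultiplied.
raw.setPremultiplied(false);

// The buffer now receives straight (un-premultiplied) RGBA bytes.
ByteBuffer buffer = ByteBuffer.allocate(raw.getRowBytes() * raw.getHeight());
raw.copyPixelsToBuffer(buffer);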
