I am trying to create a bitmap from an int array, but the resulting bitmap is all 0.
Scenario
I have an app that takes a raw image and does some post-processing on it. After this processing, I try to save the resulting byte array as a JPEG image; here is where I convert the array to a bitmap so that it can be saved as a JPEG.
int [] rgbArray = getColor( returnValue ); // converts the image to an int array
Bitmap image = Bitmap.createBitmap( rgbArray, 1008, 758, Bitmap.Config.ARGB_8888 );
But the image has every pixel set to 0, and the resulting JPEG is all black.
I wonder if you can give any pointers as to why this is the case?
I tried printing out the various get functions from the Bitmap to see if they give any clue.
Bitmap getHeight() 758
Bitmap getWidth() 1008
Bitmap describeContents() 0
Bitmap getByteCount() 3056256
Bitmap getConfig() ARGB_8888
Bitmap isRecycled() false
Bitmap isPremultiplied() true
Bitmap hasAlpha() true
Bitmap getColorSpace() sRGB IEC61966-2.1 (id=0, model=RGB)
I am new to android programming, so any pointers would be extremely helpful. Thanks in advance.
Update
If I call setPremultiplied(false) on the Bitmap, it works fine. I guess my immediate problem is solved if I set premultiplied to false.
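For anyone hitting the same black-image symptom: createBitmap(int[], ...) treats the array as premultiplied ARGB by default, so if the alpha byte of each pixel is 0, every color channel is scaled down to 0. The packing and the premultiplication arithmetic can be sketched in plain Java, without any Android classes (the helper names below are my own, not part of the Android API):

```java
public class ArgbPacking {
    // Pack separate channels into the ARGB_8888 int layout that
    // Bitmap.createBitmap(int[], ...) expects: 0xAARRGGBB.
    static int packArgb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Premultiplication scales each color channel by alpha/255, which
    // is roughly what happens internally when isPremultiplied() is true.
    static int premultiplyChannel(int channel, int alpha) {
        return channel * alpha / 255;
    }

    public static void main(String[] args) {
        // With alpha = 0, every premultiplied channel collapses to 0,
        // producing an all-black (and fully transparent) image.
        System.out.println(premultiplyChannel(200, 0));    // 0
        // With alpha = 0xFF the colors survive intact.
        System.out.println(premultiplyChannel(200, 255));  // 200
        System.out.printf("%08X%n", packArgb(0xFF, 0x12, 0x34, 0x56)); // FF123456
    }
}
```

So an alternative to setPremultiplied(false) is to make getColor() write 0xFF into the alpha byte of every pixel it produces.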
Related
Newbie question. I have a byte array (of total length 1920 x 1080 x 4), with every 4 bytes holding a constant alpha value of 0x80 (~50% translucency) and an RGB triplet. I would like to convert it to a bitmap using BitmapFactory, but BitmapFactory always returns a null bitmap in the code below.
byte[] rgbImage; // allocated and populated with data elsewhere
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.outWidth = 1920;
opts.outHeight = 1080;
opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeByteArray(rgbImage, 0, 1920 * 1080 * 4, opts);
if(bitmap == null){
Log.e("TAG","bitmap is null");
}
What am I doing wrong? It seems that since any byte takes values in the range 0..255, even an arbitrary byte array should qualify as an RGB image, provided its dimensions made sense.
BitmapFactory.decodeByteArray doesn't accept unsupported formats, so ensure that the rgbImage byte data is in one of the supported image formats. Follow the link to see which ones:
https://developer.android.com/guide/topics/media/media-formats#image-formats
In addition, although not the cause of the failure, outWidth and outHeight are not meant to be populated by you; they are populated by BitmapFactory when inJustDecodeBounds is set to true, in order to find out an image's dimensions without decoding it. Setting them does nothing at all, as they are output parameters. Also note that when inJustDecodeBounds is set to true, the return value will always be null, as the request becomes one for the image dimensions rather than the image itself.
Also note that for the length argument of decodeByteArray, you can pass rgbImage.length instead of 1920 * 1080 * 4, provided the buffer has that exact length.
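Since decodeByteArray returns null whenever the bytes are not an encoded image file, a quick plain-Java sanity check is to look at the first few bytes for a known file signature. The helper below is my own sketch and only covers JPEG and PNG; see the Android media-formats link above for the full list:

```java
import java.util.Arrays;

public class ImageSignature {
    // JPEG files start with FF D8 FF; PNG files with 89 50 4E 47.
    static boolean looksLikeJpeg(byte[] data) {
        return data.length >= 3
                && (data[0] & 0xFF) == 0xFF
                && (data[1] & 0xFF) == 0xD8
                && (data[2] & 0xFF) == 0xFF;
    }

    static boolean looksLikePng(byte[] data) {
        byte[] sig = {(byte) 0x89, 'P', 'N', 'G'};
        return data.length >= 4 && Arrays.equals(Arrays.copyOf(data, 4), sig);
    }

    public static void main(String[] args) {
        // A buffer of raw pixel bytes matches neither signature, which
        // is why BitmapFactory.decodeByteArray returns null for it.
        byte[] rawPixels = new byte[16];
        Arrays.fill(rawPixels, (byte) 0x80);
        System.out.println(looksLikeJpeg(rawPixels)); // false
        System.out.println(looksLikePng(rawPixels));  // false
    }
}
```

If neither check matches, the array is raw pixel data, and the copyPixelsFromBuffer route on a pre-sized Bitmap is the right tool instead of BitmapFactory.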
I'm using the Android Camera2 API to take still capture images and displaying them on a TextureView (for later image editing).
I have been scouring the web for a faster method to:
Decode Camera Image buffer into bitmap
Scale bitmap to size of screen and rotate it (since it comes in rotated 90 degrees)
Display it on a texture view
Currently I've managed an execution time of around 0.8s for the above, but this is too long for my particular application.
A few solutions I've considered were:
Simply taking a single frame of the preview (timing-wise this was fast, except that I had no control over auto flash)
Trying to get instead a YUV_420_888 formatted image and then somehow turning that into a bitmap (there's a lot of stuff online that might help but my initial attempts bore no fruit as of yet)
Simply sending a reduced quality image from the camera itself, but from what I've read it looks like the JPEG_QUALITY parameter in CaptureRequests does nothing! I've also tried setting BitmapFactory options inSampleSize but without any noticeable improvement in speed.
Finding some way to directly manipulate the jpeg byte array from image buffer to transform it and then converting to bitmap, all in one shot
For your reference, the following code takes the image buffer, decodes and transforms it, and displays it on the textureview:
Canvas canvas = mTextureView.lockCanvas();
// obtain image bytes (jpeg) from image in camera fragment
// mFragment.getImage() returns Image object
ByteBuffer buffer = mFragment.getImage().getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// decoding process takes several hundred milliseconds
Bitmap src = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
mFragment.getImage().close();
// resize horizontally oriented images
if (src.getWidth() > src.getHeight()) {
    // transformation matrix that scales and rotates
    Matrix matrix = new Matrix();
    if (CameraLayout.getFace() == CameraCharacteristics.LENS_FACING_FRONT) {
        matrix.setScale(-1, 1);
    }
    matrix.postRotate(90);
    matrix.postScale(((float) canvas.getWidth()) / src.getHeight(),
            ((float) canvas.getHeight()) / src.getWidth());
    // bitmap creation process takes another several hundred millis!
    Bitmap resizedBitmap = Bitmap.createBitmap(
            src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
    canvas.drawBitmap(resizedBitmap, 0, 0, null);
} else {
    canvas.drawBitmap(src, 0, 0, null);
}
// post canvas to texture view
mTextureView.unlockCanvasAndPost(canvas);
This is my first question on stack overflow, so I apologize if I haven't quite followed common conventions.
Thanks in advance for any feedback.
If all you're doing with this is drawing it into a View, and you won't be saving it, have you tried simply requesting JPEGs at a lower resolution than the maximum, one that matches the screen dimensions better?
Alternatively, if you need the full-size image, JPEG images typically contain a thumbnail - extracting that and displaying it is a lot faster than processing the full-resolution image.
In terms of your current code, if possible, you should avoid having to create a second Bitmap with the scaling. Could you instead place an ImageView on top of your TextureView when you want to display the image, and then rely on its built-in scaling?
Or use Canvas.concat(Matrix) instead of creating the intermediate Bitmap?
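On the inSampleSize attempt mentioned in the question: it only helps if it is computed against the target size before decoding, and BitmapFactory rounds the value down to a power of two. The calculation itself is plain arithmetic and can be sketched without any Android classes (the method name is mine, modeled on the pattern in the Android developer docs):

```java
public class SampleSize {
    // Returns the largest power-of-two inSampleSize that keeps both
    // decoded dimensions at or above the requested dimensions.
    static int calculateInSampleSize(int rawWidth, int rawHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while ((rawWidth / (inSampleSize * 2)) >= reqWidth
                && (rawHeight / (inSampleSize * 2)) >= reqHeight) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 4032x3024 capture decoded for a 1920x1080 landscape target
        // can be subsampled 2x without going below the target size.
        System.out.println(calculateInSampleSize(4032, 3024, 1920, 1080)); // 2
    }
}
```

Getting the raw dimensions via an inJustDecodeBounds pass first, then passing this result as opts.inSampleSize, shrinks exactly the decode step that the question measures at several hundred milliseconds.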
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(byteBuffer);
// bitmap is valid and can be displayed
I extracted the byte array from the valid byteBuffer, but decodeByteArray returns null when I try it. Can someone explain why that is the case?
byteBuffer.rewind();
byteBuffer.get(byteArray, 0, byteBuffer.capacity());
Bitmap bitmap = BitmapFactory.decodeByteArray(byteArray, 0 , byteArray.length);
// returns null
I believe the two functions do different things and expect different data.
copyPixelsFromBuffer()
is used to import raw pixel information into an existing Bitmap that already has its size and pixel depth configured.
BitmapFactory.decodeByteArray()
is used to create a bitmap from a byte array containing the full bitmap file data, not just the raw pixels. That's why the function doesn't take (or need) size and pixel depth information, because it gets it all from the bytes passed to it.
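A concrete consequence of that difference: copyPixelsFromBuffer needs the buffer to contain exactly width * height * bytesPerPixel raw bytes for the bitmap's configured format, while decodeByteArray needs a complete encoded file, whose length has no fixed relationship to the dimensions. The bytes-per-pixel arithmetic is plain Java (the constants are my own shorthand for the two common configs):

```java
public class PixelBufferSize {
    // Bytes per pixel for the two most common Bitmap configs.
    static final int ARGB_8888_BPP = 4; // 8 bits each for A, R, G, B
    static final int RGB_565_BPP = 2;   // 5+6+5 bits packed into 16

    // Raw buffer size copyPixelsFromBuffer expects for a given bitmap.
    static int requiredBytes(int width, int height, int bytesPerPixel) {
        return width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        // Matches the getByteCount() value from the first question:
        // a 1008x758 ARGB_8888 bitmap holds 3,056,256 raw bytes.
        System.out.println(requiredBytes(1008, 758, ARGB_8888_BPP)); // 3056256
    }
}
```

If the ByteBuffer's remaining bytes don't equal this value, copyPixelsFromBuffer throws, and no byte array of raw pixels of any length will ever satisfy decodeByteArray.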
I'm trying to merge two images: one is a bitmap from the camera, the second one is a .png file stored in drawables. What I did was use both images as bitmaps and try to merge them using a canvas, something like this:
Bitmap topImage = BitmapFactory.decodeFile("gui.png");
Bitmap bottomImage = BitmapFactory.decodeByteArray(arg0, 0, arg0.length);
Canvas canvas = new Canvas(bottomImage);
canvas.drawBitmap(topImage, 0, 0, null);
But I keep getting a "Bitmap size exceeds VM budget" error all the time. I tried nearly everything, but it still keeps throwing this error. Is there another way of merging two images? What I need to do is simple: take a photo and save it merged with that .png image stored in drawables. For example, this app is very close to what I need - https://play.google.com/store/apps/details?id=com.hl2.hud&feature=search_result#?t=W251bGwsMSwyLDEsImNvbS5obDIuaHVkIl0.
Thanks :)
See the code below for combining two images.
This method returns the combined bitmap:
public Bitmap combineImages(Bitmap frame, Bitmap image) {
    Bitmap cs = null;
    Bitmap rs = null;
    rs = Bitmap.createScaledBitmap(frame, image.getWidth() + 50,
            image.getHeight() + 50, true);
    cs = Bitmap.createBitmap(rs.getWidth(), rs.getHeight(),
            Bitmap.Config.RGB_565);
    Canvas comboImage = new Canvas(cs);
    comboImage.drawBitmap(image, 25, 25, null);
    comboImage.drawBitmap(rs, 0, 0, null);
    if (rs != null) {
        rs.recycle();
        rs = null;
    }
    Runtime.getRuntime().gc();
    return cs;
}
You can change the height and width as per your requirements.
Hope this will help...
How large are the images? I've only encountered this problem when trying to load large images into memory.
Is the byte array you're decoding actually an image?
From a quick look at the android docs you can capture an image using the default camera app which may work in this situation.
http://developer.android.com/training/camera/photobasics.html
Also see this question: Capture Image from Camera and Display in Activity
Edit: You may also need to scale the image from the camera down if it is very large. See the end of the android page I linked to for details on that.
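To put numbers on the "how large are the images" point: on the old pre-Android-3.0 heaps where "Bitmap size exceeds VM budget" appears, the whole process may have as little as 16-24 MB, and a single full-resolution camera frame decoded as ARGB_8888 can use most of that on its own. The arithmetic is easy to check in plain Java (the 8 MP dimensions below are just an example, not from the question):

```java
public class BitmapMemory {
    // Decoded bitmap memory is width * height * 4 bytes for ARGB_8888,
    // regardless of how small the compressed JPEG on disk was.
    static long decodedBytes(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        // An 8 MP photo (3264x2448) decodes to roughly 30 MB...
        System.out.println(decodedBytes(3264, 2448) / (1024 * 1024) + " MB");
        // ...while the same frame subsampled 4x fits comfortably.
        System.out.println(decodedBytes(3264 / 4, 2448 / 4) / (1024 * 1024) + " MB");
    }
}
```

That is why decoding with an appropriate inSampleSize, or requesting a smaller capture size from the camera, usually makes the error go away.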
I want to convert an image in my app to a Base64 encoded string. This image may be of any type like jpeg, png etc.
What I have done is convert the drawable to a Bitmap, write that Bitmap to a ByteArrayOutputStream using the compress method, convert the stream to a byte array, and then encode it to Base64 using encodeToString().
I can display the image using the above method if the image is of PNG or JPEG.
ByteArrayOutputStream objByteOutput = new ByteArrayOutputStream();
imgBitmap.compress(CompressFormat.JPEG, 0, objByteOutput);
But the problem is: if the image is of any type other than PNG or JPEG, how can I display it?
Or please suggest another method to get a byte array from a Bitmap.
Thank you...
I'd suggest using
http://developer.android.com/reference/android/graphics/Bitmap.html#copyPixelsToBuffer(java.nio.Buffer)
and specifying a ByteBuffer. Then you can use .array() on the ByteBuffer if it is implemented (it's an optional method), or .get(byte[]) to extract the bytes if .array() doesn't exist.
Update:
In order to determine the size of the buffer to create, you should use Bitmap.getByteCount(). However, this is only present on API 12 and up, so below that you would need to use Bitmap.getWidth() * Bitmap.getHeight() * 4. The reason for the 4 is that the Bitmap stores a series of pixels (the internal representation may be smaller but shouldn't ever be larger), each being an ARGB value with components 0-255, hence 4 bytes per pixel.
You can get the same with Bitmap.getHeight() * Bitmap.getRowBytes() - here's some code I used to verify this works:
BitmapDrawable bmd = (BitmapDrawable) getResources().getDrawable(R.drawable.icon);
Bitmap bm = bmd.getBitmap();
ByteBuffer byteBuff = ByteBuffer.allocate(bm.getWidth() * bm.getHeight() * 4);
byteBuff.rewind();
bm.copyPixelsToBuffer(byteBuff);
byte[] tmp = new byte[bm.getWidth() * bm.getHeight() * 4];
byteBuff.rewind();
byteBuff.get(tmp);
It's not nice code, but it gets the byte array out.
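On the original goal of the question: once you have the byte array (from compress() or from copyPixelsToBuffer() as above), the Base64 step itself is plain Java. Android code of this era typically used android.util.Base64.encodeToString; the java.util.Base64 class shown here (standard in plain Java, and on Android from API 26) produces the same encoding:

```java
import java.io.ByteArrayOutputStream;
import java.util.Base64;

public class Base64Encode {
    // Encode any image byte array (e.g. the output of bitmap.compress()
    // or copyPixelsToBuffer()) as a Base64 string.
    static String encodeImageBytes(byte[] imageBytes) {
        return Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the compressed image bytes; here just the PNG
        // file signature so the expected output is easy to verify.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(new byte[]{(byte) 0x89, 'P', 'N', 'G'});
        byte[] imageBytes = out.toByteArray();

        String encoded = encodeImageBytes(imageBytes);
        System.out.println(encoded); // iVBORw==

        // Base64 round-trips losslessly back to the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(decoded.length); // 4
    }
}
```

Because Base64 works on arbitrary bytes, this same call handles any image type, which answers the "other than PNG or JPEG" concern: the format only matters when decoding for display, not when encoding to a string.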