I'm working on an app that gets an NV21 buffer from the onPreviewFrame() callback through a JNI layer and then converts it to RGB using OpenCV in C++. Below is the sample code:
Mat yuv(height+height/2, width, CV_8UC1, inBuffer);
Mat rgb(height, width, CV_8UC3);
cvtColor(yuv, rgb, COLOR_YUV2RGB_NV21);
Now, in the Android app, I get the RGB buffer back and try to display it by generating a bitmap from it:
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(imageBuffer));
However, it doesn't display a proper image. Does anyone know what I'm missing here?
In your Bitmap configuration, change Bitmap.Config.RGB_565 to Bitmap.Config.ARGB_8888.
From the Android developer docs:
Bitmap.Config RGB_565
Each pixel is stored on 2 bytes and only the RGB channels are encoded:
red is stored with 5 bits of precision (32 possible values), green is
stored with 6 bits of precision (64 possible values), and blue is stored
with 5 bits of precision.
Also, in your native code keep a 4-channel Mat and convert with COLOR_YUV2RGBA_NV21.
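Put together, the Java side would then look roughly like this (a minimal sketch, assuming imageBuffer now holds the width * height * 4 RGBA bytes returned by the native code):

// ARGB_8888 uses 4 bytes per pixel, matching the RGBA output of COLOR_YUV2RGBA_NV21
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(imageBuffer));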
Related
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(byteBuffer);
// bitmap is valid and can be displayed
I extracted the byte array from the valid byteBuffer above, but decodeByteArray() returns null when I try it. Can someone explain why this is the case?
byteBuffer.rewind();
byteBuffer.get(byteArray, 0, byteBuffer.capacity());
Bitmap bitmap = BitmapFactory.decodeByteArray(byteArray, 0 , byteArray.length);
// returns null
I believe the two functions do different things and expect different data.
copyPixelsFromBuffer()
is used to import raw pixel information into an existing Bitmap whose size and pixel depth are already configured.
BitmapFactory.decodeByteArray()
is used to create a bitmap from a byte array containing the full bitmap file data, not just the raw pixels. That's why the function doesn't take (or need) size and pixel depth information: it gets all of that from the bytes passed to it.
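To make that concrete, here is a minimal sketch of the two paths (variable names are illustrative). decodeByteArray() only works on encoded file bytes, so you would compress the bitmap first; raw pixel bytes instead go through copyPixelsFromBuffer() on a bitmap whose size and config you set up yourself:

// Path 1: encoded file data -> BitmapFactory can decode it
ByteArrayOutputStream out = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, out); // full PNG file data, headers included
byte[] fileBytes = out.toByteArray();
Bitmap decoded = BitmapFactory.decodeByteArray(fileBytes, 0, fileBytes.length); // non-null

// Path 2: raw pixel data -> the target bitmap must already know its size and config
Bitmap target = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
target.copyPixelsFromBuffer(ByteBuffer.wrap(rawPixelBytes));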
I have an array of bytes that corresponds to a "grayscale bitmap" (one byte -> one pixel), and I need to create a PNG file for this image.
The method below works, but the PNG created is huge, as the Bitmap I am using is an ARGB_8888 bitmap, which takes 4 bytes per pixel instead of 1.
I haven't been able to make it work with any Bitmap.Config other than ARGB_8888. Maybe ALPHA_8 is what I need, but I have not been able to make it work.
I have also tried the toGrayScale method included in some other posts (Convert a Bitmap to GrayScale in Android), but I have the same size issue.
public static boolean createPNGFromGrayScaledBytes(ByteBuffer grayBytes, int width,
        int height, File pngFile) throws IOException {
    if (grayBytes.remaining() != width * height) {
        Logger.error(Tag, "Unexpected error: size mismatch [remaining:" + grayBytes.remaining()
                + "][width:" + width + "][height:" + height + "]", null);
        return false;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    int gray, color;
    int x = 0, y = 0;
    while (grayBytes.remaining() > 0) {
        gray = grayBytes.get();
        // the byte is signed, so shift negative values back into 0..255
        if (gray < 0) { gray += 256; }
        // write the same value into all three color channels (-1 yields an alpha byte of 0xFF)
        color = Color.argb(-1, gray, gray, gray);
        bitmap.setPixel(x, y, color);
        x++;
        if (x == width) {
            x = 0;
            y++;
        }
    }
    FileOutputStream fos = new FileOutputStream(pngFile);
    boolean result = bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.close();
    return result;
}
EDIT: Link to the generated file (it may look like nonsense, but it is just created with random data).
http://www.tempfiles.net/download/201208/256402/huge_png.html
Any help will be greatly appreciated.
As you've noticed, saving a grayscale image as RGB is expensive. If you have luminance data then it would be better to save as a Grayscale PNG rather than an RGB PNG.
The bitmap and image functionality available in the Android Framework is really geared towards reading and writing image formats that are supported by the framework and UI components. Grayscale PNG is not included here.
If you want to save out a Grayscale PNG on Android then you'll need to use a library like http://code.google.com/p/pngj/
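For illustration, here is a rough sketch of what that looks like with PNGJ. This assumes the PNGJ 2.x API (ImageInfo, ImageLineInt and PngWriter from the ar.com.hjg.pngj package); the class and method names are from memory, so double-check them against the version you actually pull in.

// grayBytes holds one luminance byte per pixel, row by row
public static void writeGrayscalePng(byte[] grayBytes, int width, int height, File pngFile)
        throws IOException {
    // 8-bit depth, no alpha, grayscale, not indexed -> one byte per pixel on disk
    ImageInfo info = new ImageInfo(width, height, 8, false, true, false);
    PngWriter writer = new PngWriter(new FileOutputStream(pngFile), info);
    ImageLineInt line = new ImageLineInt(info);
    for (int row = 0; row < height; row++) {
        int[] scanline = line.getScanline();
        for (int col = 0; col < width; col++) {
            scanline[col] = grayBytes[row * width + col] & 0xFF; // make the byte unsigned
        }
        writer.writeRow(line, row);
    }
    writer.end(); // finishes the PNG and releases the writer's resources
}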
If you use the OpenCV for Android library, you can use it to save binary data to a PNG file.
My way is:
In the JNI part,
create a Mat whose data points to the byte array:
jbyte* _ByteArray_BrightnessImgForOCR = env->GetByteArrayElements(ByteArray_BrightnessImgForOCR, 0);
// wrap the pinned bytes in a single-channel Mat (no copy is made)
Mat img(ByteArray_BrightnessImgForOCR_h, ByteArray_BrightnessImgForOCR_w, CV_8UC1, (unsigned char *) _ByteArray_BrightnessImgForOCR);
Then write it to a PNG file:
imwrite("/mnt/sdcard/binaryImg_forOCR.png", img);
// release the pinned array once the Mat is no longer needed (JNI_ABORT: nothing to copy back)
env->ReleaseByteArrayElements(ByteArray_BrightnessImgForOCR, _ByteArray_BrightnessImgForOCR, JNI_ABORT);
Of course, you need to take some time to get familiar with OpenCV and Java native coding, but following the OpenCV for Android examples makes it quick to learn.
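For completeness, the Java side just needs a matching native declaration and call. Everything below (class, library and method names) is purely illustrative and must match your own JNI function:

public class NativePngWriter {
    static {
        System.loadLibrary("native-ocr"); // hypothetical .so name
    }

    // grayBytes holds one luminance byte per pixel, plus the image dimensions
    public static native void saveBrightnessImageAsPng(byte[] grayBytes, int width, int height);
}

Calling NativePngWriter.saveBrightnessImageAsPng(grayBytes, width, height) then hands the buffer to the native code above.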
I want to convert an image in my app to a Base64-encoded string. This image may be of any type, like JPEG, PNG, etc.
What I have done is convert the drawable to a Bitmap, compress the Bitmap into a ByteArrayOutputStream using the compress() method, convert that ByteArrayOutputStream to a byte array, and then encode the byte array to Base64 using encodeToString().
I can display the image using the above method if the image is a PNG or JPEG.
ByteArrayOutputStream objByteOutput = new ByteArrayOutputStream();
imgBitmap.compress(CompressFormat.JPEG, 0, objByteOutput);
But the problem is: if the image is of any type other than PNG or JPEG, how can I display it?
Or please suggest another method to get a byte array from a Bitmap.
Thank you...
I'd suggest using
http://developer.android.com/reference/android/graphics/Bitmap.html#copyPixelsToBuffer(java.nio.Buffer)
and specify a ByteBuffer; then you can call .array() on the ByteBuffer if it is implemented (it's an optional operation), or .get(byte[]) to copy the bytes out if .array() isn't supported.
Update:
In order to determine the size of the buffer to create, you should use Bitmap.getByteCount(). However, this is only present on API 12 and up, so on older versions you would need to use Bitmap.getWidth() * Bitmap.getHeight() * 4. The reason for the 4 is that the bitmap is stored as a series of pixels (the internal representation may be smaller, but should never be larger), each an ARGB value with one byte (0-255) per channel, hence 4 bytes per pixel.
You can get the same with Bitmap.getHeight() * Bitmap.getRowBytes() - here's some code I used to verify this worked:
BitmapDrawable bmd = (BitmapDrawable) getResources().getDrawable(R.drawable.icon);
Bitmap bm = bmd.getBitmap();
ByteBuffer byteBuff = ByteBuffer.allocate(bm.getWidth() * bm.getHeight() * 4);
byteBuff.rewind();
bm.copyPixelsToBuffer(byteBuff);
byte[] tmp = new byte[bm.getWidth() * bm.getHeight() * 4];
byteBuff.rewind();
byteBuff.get(tmp);
It's not nice code, but it gets the byte array out.
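From there, the Base64 string the original question asks for is a one-liner with android.util.Base64. Keep in mind this encodes raw pixels, so whoever decodes it also needs the width, height and config to rebuild the bitmap:

String encoded = Base64.encodeToString(tmp, Base64.DEFAULT);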
It's well documented that Android's camera preview data is returned in NV21 (YUV 420). 2.2 added a YuvImage class for decoding the data. The problem I've encountered is that the YuvImage class data appears corrupt or incorrect. I used the RenderScript sample app called HelloCompute, which transforms a Bitmap into a monochrome Bitmap. I used two methods for decoding the preview data into a Bitmap and passing it as input to the RenderScript:
Method 1 - Android YuvImage Class:
YuvImage preview = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream mJpegOutput = new ByteArrayOutputStream(data.length);
preview.compressToJpeg(new Rect(0, 0, width, height), 100, mJpegOutput);
mBitmapIn = BitmapFactory.decodeByteArray( mJpegOutput.toByteArray(), 0, mJpegOutput.size());
// pass mBitmapIn to RS
Method 2 - Posted Decoder Method:
As posted over here by David Pearlman
// work-around for YUV format
mBitmapIn = Bitmap.createBitmap(
ImageUtil.decodeYUV420SP(data, width, height),
width,
height,
Bitmap.Config.ARGB_8888);
// pass mBitmapIn to RS
When the image is processed by the RenderScript and displayed, Method 1 is very grainy and not monochrome, while Method 2 produces the expected output: a monochrome image of the preview frame. Am I doing something wrong, or is the YuvImage class not usable? I'm testing this on a Xoom running 3.1.
Furthermore, I displayed the bitmaps produced by both methods on screen prior to passing them to the RS. The bitmap from Method 1 has noticeable differences in lighting (I suspect this is due to the JPEG compression), while Method 2's bitmap is identical to the preview frame.
There is no justification for using JPEG encode/decode just to convert a YUV image to a grayscale bitmap (I believe you want grayscale, not a monochrome black-and-white bitmap, after all). You can find many code samples that produce the result you need; you may use this one: Converting preview frame to bitmap.
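For reference, such a direct decode is only a few lines of Java. This is a sketch, not the exact code behind that link: it reads the Y plane at the start of the NV21 buffer and expands each luminance byte into an opaque grey ARGB pixel.

// nv21 is the onPreviewFrame() buffer; only the first width*height bytes (the Y plane) are used
public static Bitmap nv21ToGrayscale(byte[] nv21, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < width * height; i++) {
        int y = nv21[i] & 0xFF; // luminance, 0..255
        pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque grey pixel
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}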
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height
            * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into the available examples, specifically "Tutorial 1 Basic - 0. Android Camera".
But, as it was in my case, for intensive image processing this will become slower than acceptable for a real-time image-processing application.
A good replacement for their onPreviewFrame byte-array conversion is YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
Rect imageSizeRectangle = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is another topic entirely. =)
This very old post has caught my attention now.
The API available in '11 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulations.
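As a rough sketch of that route (using the old android.hardware.Camera API to match the era of this post; textureView and camera are assumed fields):

textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        try {
            camera = Camera.open();
            camera.setPreviewTexture(surface); // preview frames are delivered to the SurfaceTexture
            camera.startPreview();
        } catch (IOException e) {
            Log.e("Preview", "Failed to start camera preview", e);
        }
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        camera.stopPreview();
        camera.release();
        return true;
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) { }
});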
This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. To just show the black and white, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y plane comes first. So, if you take just the Y plane (the first width x height bytes), it already gives you the black-and-white image.
I did achieve this through extensive work and trials. You can view the app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx