cv::imencode required for outgoing byte[] in Android OpenCV?

I have been struggling with sending a cv::Mat back from JNI as a Java byte[] so that it can be decoded successfully with BitmapFactory.decodeByteArray(). When I first bring my byte[] (constructed using data from an Android Bitmap) in from the Java side, I am able to use it successfully in the C++ OpenCV functions. I do this by building a Mat from the incoming byte[] and calling cv::imdecode on that Mat.
The problem comes when I return to Android and attempt to use BitmapFactory to decode the byte array into an Android Bitmap. It returns null, which indicates that there was a problem in the decoding. Am I performing the operations incorrectly before I return from JNI? Do I need to use cv::imencode, since I had to use cv::imdecode on the incoming byte[]?
Any and all help appreciated! A code sample is below, showing where I convert the data I need from the Mat in JNI.
NOTE: I am aware of the AndroidBitmap functions, but using a byte array is a requirement I am currently working under.
// inData is a char* that is set to a char* cast of the jbyte* pointer for the incoming array.
cv::Mat inMat = cv::Mat(rows, columns, CV_8UC4, inData);
cv::Mat decodedMat = cv::imdecode(inMat, 1);
// convertImage is a function that changes the color space from BGR to Gray and then Gray to RGB.
convertImage(decodedMat, decodedMat);
cv::cvtColor(decodedMat, decodedMat, CV_RGB2RGBA);
jbyteArray jDataArray = env->NewByteArray(pixelDataLength);
env->SetByteArrayRegion(jDataArray, 0, pixelDataLength, (jbyte*)decodedMat.data);
env->SetObjectField(in, dataArray, jDataArray);
env->ReleaseByteArrayElements(pixelData, bytePointerForIn, 0);

BitmapFactory expects the data provided to it to be in a known file format, but you are passing it raw pixels. You could make it work by calling cv::imencode, but perhaps a more natural solution for loading images from raw pixel data is to create the original Java Bitmap object as mutable, and then call its copyPixelsToBuffer and copyPixelsFromBuffer methods to get and set the pixel data in that object.
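For illustration, here is a minimal sketch of that copy-pixels route (the helper names are made up; it assumes an ARGB_8888 Bitmap and a native routine that returns pixels with the same size and layout):
import android.graphics.Bitmap;
import java.nio.ByteBuffer;
public class PixelCopySketch {
    // Pull the raw ARGB_8888 pixel bytes out of a Bitmap to hand to JNI.
    static byte[] bitmapToBytes(Bitmap bmp) {
        ByteBuffer buf = ByteBuffer.allocate(bmp.getRowBytes() * bmp.getHeight());
        bmp.copyPixelsToBuffer(buf);
        return buf.array();
    }
    // Write the processed pixel bytes back into a mutable Bitmap of the same
    // size and config; no file-format encoding or decoding is involved.
    static void bytesToBitmap(byte[] pixels, Bitmap mutableBmp) {
        mutableBmp.copyPixelsFromBuffer(ByteBuffer.wrap(pixels));
    }
}
Because both sides agree on the raw pixel layout, nothing ever has to pass through cv::imdecode or BitmapFactory.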

Solved! It was a combination of keeping my original approach of sending in the bytes with Bitmap.compress() and then using copyPixelsFromBuffer() on the returned pixels. Thanks to Buddy for pointing me in the right direction.
Android:
// input was the original Bitmap that was used to construct the byte[] array. I then used input.compress() with JPEG format into a ByteArrayOutputStream. It is very important for it to be compressed in a format that cv::imdecode will recognize.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
input.compress(CompressFormat.JPEG, 100, bos);
data1.pixelData = bos.toByteArray();
...
// Reconstruction after the JNI call
ByteBuffer buffer2 = ByteBuffer.wrap(data1.pixelData);
Bitmap returnFromConvert = Bitmap.createBitmap(input.getWidth(),
        input.getHeight(), Bitmap.Config.ARGB_8888);
returnFromConvert.copyPixelsFromBuffer(buffer2);

Related

Cocos2dx: Garbled bitmap image from android

Note: using the latest cocos2d-x 2.x.
Hi there. I'm trying to pass a bitmap through JNI and convert it to a sprite to display, but the image created using CCTexture2D's initWithData is completely garbled. I've looked at other forum posts here, but even after following them as closely as possible, nothing works. Note that we are not going to use a path for the image (it isn't an option).
There are a few issues that I will break down:
What kind of data is necessary?
The initWithData method has no documentation as to what kind of data it even takes: a byte array? A pixel array? It seems other people have used a byte array and gotten it working, so that is what we have gone with. This is the code to get the byte array, but are we supposed to use PNG or JPEG format?
ByteArrayOutputStream bAOS = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, bAOS);
byte[] bytes = bAOS.toByteArray();
Getting the data to cocos
We are using some other convenience methods to write JNI messages as JSON data, which gets converted to a CCDictionary. The byte array gets turned into a CCArray of ints, which we can then loop through and assign... to what kind of array? "byte" isn't a valid primitive type in C++. Online I've seen an int array:
CCArray *bytes = (CCArray*)dictionary->objectForKey("bytes");
int byteArray[bytes->count()];
for (int i = 0; i < bytes->count(); i++) {
    CCNumber *v = (CCNumber*)bytes->objectAtIndex(i);
    byteArray[i] = v->getIntValue();
}
and I've tried a char array as well, but neither worked.
CCTexture2d? CCImage?
OK, so now that we have our supposed byte array, we can render the image. Do we use CCTexture2D or CCImage?
For CCTexture2d (we know the image is 20x20 px):
CCTexture2D *texture = new CCTexture2D();
texture->initWithData(byteArray, kCCTexture2DPixelFormat_RGBA8888, 20, 20, CCSizeMake(20, 20));
The image is garbled.
For CCImage... what is the length? I've seen length calculated as the image width * image height, or as the actual length of the byte array. Your guess is as good as mine, as I have never gotten a CCImage to work.
CCImage *image = new CCImage();
image->initWithImageData(byteArray, length, CCImage::kFmtPng, 20, 20, 8);
Once we have a CCImage, we can set it to a texture using CCTexture2d's initWithImage function.
Now that we have the Texture, making a sprite with it should be simple:
CCSprite *sprite = CCSprite::createWithTexture(texture);
Which works, inasmuch as a sprite is created. The image is trash, though.
So, what exactly is wrong here?
How do we get the data from the bitmap?
How do we make the data something that cocos can render?
Texture2d or Image? What are the parameters?
For kicks, here is the image we are testing:
And here is the byte array as Android reports it:
[-119,80,78,71,13,10,26,10,0,0,0,13,73,72,68,82,0,0,0,40,0,0,0,40,8,2,0,0,0,3,-100,47,58,0,0,0,3,115,66,73,84,8,8,8,-37,-31,79,-32,0,0,0,-71,73,68,65,84,88,-123,-19,-44,-79,13,-125,48,16,-123,-31,-97,8,-118,-48,96,37,-108,40,-70,34,3,-48,-91,-124,81,24,-115,77,48,27,-64,6,-98,32,50,19,-112,5,48,-123,69,20,41,-36,-55,-35,-45,-13,-41,-100,-114,21,-30,-34,64,-45,48,-60,-74,-41,11,63,26,-123,21,86,88,97,-123,21,86,-8,-60,112,106,-101,-56,-90,-61,11,83,48,-10,6,39,44,38,-108,39,-51,16,9,11,69,-117,8,-127,-81,-89,-102,-66,99,-82,67,-11,116,108,35,97,88,-124,121,-81,109,-4,78,120,-66,-27,82,88,-31,-81,77,26,-35,-12,20,19,66,-32,-128,92,51,-71,21,46,47,-19,-15,-80,67,122,58,-61,-10,109,-86,114,-9,122,-40,-22,-19,-114,-121,23,-52,76,13,-19,102,-6,-52,-20,-67,112,73,57,-122,-22,-25,91,46,-123,21,-2,63,-8,3,120,-72,71,35,-19,75,10,-28,0,0,0,0,73,69,78,68,-82,66,96,-126]
And here is the result (the image is not 20x20, though; it was probably scaled by cocos to fit density):
Any help would be greatly appreciated

Grayscaled bitmaps in android

I have an array of bytes that correspond to a "grayscaled bitmap" (one byte->one pixel), and I need to create a PNG file for this image.
The method below works, but the png created is HUGE, as the Bitmap I am using is an ARGB_8888 bitmap, which takes 4 bytes per pixel instead of 1 byte.
I haven't been able to make it work with any Bitmap.Config other than ARGB_8888. Maybe ALPHA_8 is what I need, but I have not been able to make that work either.
I have also tried the toGrayScale method which is included in some other posts (Convert a Bitmap to GrayScale in Android), but I have the same issue with the size.
public static boolean createPNGFromGrayScaledBytes(ByteBuffer grayBytes, int width,
        int height, File pngFile) throws IOException {
    if (grayBytes.remaining() != width * height) {
        Logger.error(Tag, "Unexpected error: size mismatch [remaining:" + grayBytes.remaining()
                + "][width:" + width + "][height:" + height + "]", null);
        return false;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    int gray, color;
    int x = 0, y = 0;
    while (grayBytes.remaining() > 0) {
        gray = grayBytes.get();
        // the value may be negative as byte is signed; make it positive
        if (gray < 0) { gray += 256; }
        // set each gray byte in all three color channels, with full alpha
        color = Color.argb(255, gray, gray, gray);
        bitmap.setPixel(x, y, color);
        x++;
        if (x == width) {
            x = 0;
            y++;
        }
    }
    FileOutputStream fos = new FileOutputStream(pngFile);
    boolean result = bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.close();
    return result;
}
EDIT: Link to the generated file (it may look like nonsense, but it was just created with random data):
http://www.tempfiles.net/download/201208/256402/huge_png.html
Any help will be greatly appreciated.
As you've noticed, saving a grayscale image as RGB is expensive. If you have luminance data, it would be better to save it as a grayscale PNG rather than an RGB PNG.
The bitmap and image functionality available in the Android framework is really geared towards reading and writing image formats that are supported by the framework and UI components. Grayscale PNG is not included there.
If you want to save out a grayscale PNG on Android, you'll need to use a library like http://code.google.com/p/pngj/
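As a rough sketch of what that looks like (this assumes the pngj 2.x classes ImageInfo, ImageLineInt, and PngWriter; treat the exact signatures as assumptions and check the pngj docs):
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import ar.com.hjg.pngj.ImageInfo;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngWriter;
public class GrayPngSketch {
    public static void write(ByteBuffer grayBytes, int width, int height,
            String path) throws IOException {
        // 8-bit depth, no alpha, grayscale, not indexed: one byte per pixel on disk
        ImageInfo info = new ImageInfo(width, height, 8, false, true, false);
        PngWriter writer = new PngWriter(new FileOutputStream(path), info);
        ImageLineInt line = new ImageLineInt(info);
        int[] scanline = line.getScanline();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                scanline[x] = grayBytes.get() & 0xFF; // mask: Java bytes are signed
            }
            writer.writeRow(line, y);
        }
        writer.end(); // finishes the PNG (IEND chunk)
    }
}
This writes one byte per pixel plus PNG overhead, instead of four bytes per pixel as with ARGB_8888.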
If you use the OpenCV for Android library, you can use it to save binary data to a PNG file.
My way is:
In the JNI part, set up a Mat whose data begins with the byte array:
jbyte* _ByteArray_BrightnessImgForOCR = env->GetByteArrayElements(ByteArray_BrightnessImgForOCR, 0);
Mat img(ByteArray_BrightnessImgForOCR_h, ByteArray_BrightnessImgForOCR_w, CV_8UC1, (unsigned char *) _ByteArray_BrightnessImgForOCR);
And then write it to a PNG file:
imwrite("/mnt/sdcard/binaryImg_forOCR.png", img);
Of course, you need to take some time to get familiar with OpenCV and Java native coding. Following the OpenCV for Android examples, it is quick to learn.

Android: some problems when converting drawable image to byte array

I want to convert an image in my app to a Base64-encoded string. This image may be of any type, like JPEG, PNG, etc.
What I have done is convert the drawable to a Bitmap, write that Bitmap to a ByteArrayOutputStream using the compress method, convert the ByteArrayOutputStream to a byte array, and then encode it to Base64 using encodeToString().
I can display the image using the above method if the image is PNG or JPEG.
ByteArrayOutputStream objByteOutput = new ByteArrayOutputStream();
imgBitmap.compress(CompressFormat.JPEG, 0, objByteOutput);
But the problem is, if the image is of any type other than PNG or JPEG, how can I display it?
Or please suggest another method to get a byte array from a Bitmap.
Thank you...
I'd suggest using
http://developer.android.com/reference/android/graphics/Bitmap.html#copyPixelsToBuffer(java.nio.Buffer)
and specifying a ByteBuffer; then you can use .array() on the ByteBuffer if it is implemented (it's an optional method), or .get(byte[]) to get the data if .array() doesn't exist.
Update:
In order to determine the size of the buffer to create, you should use Bitmap.getByteCount(). However, this is only present on API 12 and up, so below that you would need to use Bitmap.getWidth() * Bitmap.getHeight() * 4. The reason for the 4 is that the Bitmap is a series of pixels (the internal representation may be less, but shouldn't ever be more), each an ARGB value with each channel in 0-255, hence 4 bytes per pixel.
You can get the same value with Bitmap.getHeight() * Bitmap.getRowBytes(). Here's some code I used to verify this worked:
BitmapDrawable bmd = (BitmapDrawable) getResources().getDrawable(R.drawable.icon);
Bitmap bm = bmd.getBitmap();
ByteBuffer byteBuff = ByteBuffer.allocate(bm.getWidth() * bm.getHeight() * 4);
byteBuff.rewind();
bm.copyPixelsToBuffer(byteBuff);
byte[] tmp = new byte[bm.getWidth() * bm.getHeight() * 4];
byteBuff.rewind();
byteBuff.get(tmp);
It's not nice code, but it gets the byte array out.

Error: SkImageDecoder::Factory returned null

I am working on a project which uses an MPEG-2 codec for decoding video. My codec is in C.
After decoding a frame, it returns an unsigned char pointer to an RGB buffer, i.e. the image bits stored as a byte array. My display function is in Android, so I have to send that information to Android using JNI.
Before calling the display function, I copy that RGB buffer data into a byte array and pass it along:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inDither = false;
opt.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap bit=BitmapFactory.decodeByteArray(data, 0, data.length,opt);
canvas.drawBitmap(bit, draw_x, draw_y, null);
But when I run the application, this message appears:
DEBUG/skia(327): SkImageDecoder::Factory returned null.
I don't know why BitmapFactory is returning null. Since I am a beginner with Android, I don't know too much about Android programming. Can anybody please help me?
Yes, I solved this error. What I did was add a bitmap (BMP) header before the RGB data, copy that data into the byte array, and then pass it to the display function in Android. Then use:
Bitmap bit=BitmapFactory.decodeByteArray(data, 0, data.length);
canvas.drawBitmap(bit, draw_x, draw_y, null);
This returns the bitmap and draws the image...
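The answer doesn't show the header itself, so here is a hedged sketch of what "adding a bitmap header" can look like for 24-bit pixel data (the helper name is made up; it assumes BGR byte order with rows already padded to a 4-byte boundary and stored bottom-up, as the BMP format expects):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
public class BmpHeaderSketch {
    public static byte[] wrapAsBmp(byte[] pixelData, int width, int height) {
        final int headerSize = 14 + 40; // BITMAPFILEHEADER + BITMAPINFOHEADER
        ByteBuffer buf = ByteBuffer.allocate(headerSize + pixelData.length)
                .order(ByteOrder.LITTLE_ENDIAN); // BMP is little-endian
        // BITMAPFILEHEADER
        buf.put((byte) 'B').put((byte) 'M');
        buf.putInt(headerSize + pixelData.length); // total file size
        buf.putInt(0);                             // reserved
        buf.putInt(headerSize);                    // offset to pixel data
        // BITMAPINFOHEADER
        buf.putInt(40);                            // info header size
        buf.putInt(width);
        buf.putInt(height);                        // positive height = bottom-up rows
        buf.putShort((short) 1);                   // color planes
        buf.putShort((short) 24);                  // bits per pixel
        buf.putInt(0);                             // BI_RGB, uncompressed
        buf.putInt(pixelData.length);              // image size in bytes
        buf.putInt(0).putInt(0);                   // pixels per meter (unused)
        buf.putInt(0).putInt(0);                   // palette colors (none)
        buf.put(pixelData);
        return buf.array();
    }
}
BitmapFactory recognizes the BMP signature, so decodeByteArray can then parse the result.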

Processing Android camera frames in real time

I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;
public void onPreviewFrame(byte[] frame, Camera c) {
    pic_size = mCamera.getParameters().getPreviewSize().height
            * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++) {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically "Tutorial 1 Basic - 0. Android Camera".
But, as it was in my case, for intensive image processing this will get slower than acceptable for a real-time image-processing application.
A good replacement for their onPreviewFrame byte array conversion is to build a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these previewFrames on a surface and the best practices involved is a new dimension. =)
This very old post has caught my attention now.
The API available in '11 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
This is not an easy task to achieve with the current Android tools/APIs. In general, realtime image processing is better done at the NDK level. Just to show black and white, though, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y-plane comes first. So if you take just the Y-plane (the first width x height bytes), it already gives you the black-and-white image.
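A minimal sketch of that Y-plane approach in plain Java (the helper is hypothetical, assuming an NV21 preview frame as delivered by onPreviewFrame):
import android.graphics.Bitmap;
public class YPlaneSketch {
    public static Bitmap toGrayscaleBitmap(byte[] frame, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < pixels.length; i++) {
            int y = frame[i] & 0xFF; // luminance 0..255 (Java bytes are signed)
            pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque gray ARGB
        }
        return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
    }
}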
I did achieve this through extensive work and trials. You can view the app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx
