I am attempting to hide a small file within a bitmap by setting the least significant bit of each ARGB channel to successive bits of the file (the first few pixels of the bitmap are reserved for the file size).
To debug this, I am comparing the encoded int[] to the decoded int[]; every few hundred bytes there is an incorrect bit.
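For reference, a minimal sketch of that embedding step (hypothetical helper; assumes the int[] comes from Bitmap.getPixels and that the size-header pixels are skipped by the caller):
// Write one payload bit into the LSB of one channel, four bits per pixel.
// Channel bytes in an ARGB_8888 int sit at shifts 0 (B), 8 (G), 16 (R), 24 (A).
static void embedBit(int[] pixels, int bitIndex, int bit) {
    int pixel = bitIndex / 4;
    int shift = (bitIndex % 4) * 8;
    pixels[pixel] = (pixels[pixel] & ~(1 << shift)) | (bit << shift);
}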
I am saving the picture through the Bitmap.compress method like so...
OutputStream out = getContentResolver().openOutputStream(fileUri);
picture.compress(Bitmap.CompressFormat.PNG,100,out);
Then, when extracting the file from the image...
pic = MediaStore.Images.Media.getBitmap(this.getContentResolver(), fileUri);
pic.getPixels(intArr, 0, width, 0, 0, width, height);
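The matching extract step would then read the low bits back in the same channel order:
// Recover one payload bit from the LSB of the corresponding channel.
static int extractBit(int[] pixels, int bitIndex) {
    int shift = (bitIndex % 4) * 8;
    return (pixels[bitIndex / 4] >> shift) & 1;
}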
I took the liberty of examining one of these PNGs: I am indeed using ARGB_8888 as the config, and, using TweakPNG, I have discovered that the only chunks in the generated file are:
IHDR: 8 bits/sample, truecolor+alpha, noninterlaced
sBIT: RGBA8
IDAT
IEND
The PNG itself looks fine and displays without issues.
Edit: progress!
I tracked the bug down to this basic issue.
Bitmap.setPixels is not the exact inverse of Bitmap.getPixels.
The following code produces a slight difference between converted and converted2: roughly one or two bits per hundred ints. This seems shocking to me. Is this an Android bug?
picture3 = Bitmap.createBitmap(picture1.getWidth(), picture1.getHeight(), Bitmap.Config.ARGB_8888);
picture3.setPixels(converted, 0, picture1.getWidth(), 0, 0, picture1.getWidth(), picture1.getHeight());
int[] converted2 = new int[converted.length];
picture3.getPixels(converted2, 0, picture1.getWidth(), 0, 0, picture1.getWidth(), picture1.getHeight());
This is the same issue as in this thread.
I have a simple workaround: increment or decrement whichever A, R, G, or B value is off until the LSB of each channel is correct. Obviously this will distort the bitmap further, but it seems to be the only solution I can think of.
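A minimal sketch of that correction pass (hypothetical helper; assumes ARGB_8888 channel values and that the expected LSB is known):
// Nudge a channel value until its LSB matches the payload bit.
// Adding or subtracting 1 flips the LSB; back off at the 255 boundary.
static int forceLsb(int channel, int bit) {
    if ((channel & 1) == bit) return channel;
    return (channel == 255) ? 254 : channel + 1;
}
For what it's worth, the mismatches are consistent with ARGB_8888 bitmaps storing premultiplied alpha: setPixels premultiplies and getPixels un-premultiplies, which rounds the color channels whenever alpha is below 255. Keeping alpha fixed at 0xFF and embedding only in R, G, and B should sidestep the distortion entirely.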
Note: using the latest cocos2d-x 2.x.
Hi there. I'm trying to pass a bitmap through JNI and convert it to a sprite to display, but the image created using CCTexture2D's initWithData is completely garbled. I've looked at other forum posts here, but even after following them as closely as possible, nothing works. Note that we are not going to use a file path for the image (it isn't an option).
There are a few issues that I will break down:
What kind of data is necessary?
The initWithData method has no documentation as to what kind of data it takes: a byte array? A pixel array? Other people seem to have used a byte array and got it working, so that is what we have gone with. This is the code to get the byte array, but are we supposed to use PNG or JPEG format?
ByteArrayOutputStream bAOS = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG,100, bAOS);
byte[] bytes = bAOS.toByteArray();
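For comparison, if initWithData turns out to expect raw pixels rather than an encoded file, the Android side would hand over uncompressed bytes instead; a sketch, assuming an ARGB_8888 bitmap:
// Copy the bitmap's raw pixel bytes (RGBA order for ARGB_8888), no encoding.
ByteBuffer buffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(buffer);
byte[] rawPixels = buffer.array();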
Getting the data to cocos
We are using some other convenience methods to write JNI messages as JSON data, which gets converted to a CCDictionary. The byte array gets turned into a CCArray of ints, which we can then loop through and assign... to what kind of array? "byte" isn't a valid primitive in C++. Online I've seen an int array:
CCArray *bytes = (CCArray*)dictionary->objectForKey("bytes");
int byteArray[bytes->count()];
for (int i = 0; i < bytes->count(); i++) {
    CCNumber *v = (CCNumber*)bytes->objectAtIndex(i);
    byteArray[i] = v->getIntValue();
}
and I've tried a char array as well, but neither worked.
CCTexture2D? CCImage?
OK, so now that we have our supposed byte array, we can render the image. Do we use CCTexture2D or CCImage?
For CCTexture2D (we know the image is 20x20 px):
CCTexture2D *texture = new CCTexture2D();
texture->initWithData(byteArray, kCCTexture2DPixelFormat_RGBA8888, 20, 20, CCSizeMake(20, 20));
The image is garbled.
For CCImage... what is the length? I've seen length calculated as the image width * the image height, or as the actual length of the byte array. Your guess is as good as mine, as I have never gotten a CCImage to work.
CCImage *image = new CCImage();
image->initWithImageData(byteArray, length, CCImage::kFmtPng, 20, 20, 8);
Once we have a CCImage, we can set it to a texture using CCTexture2D's initWithImage function.
Now that we have the Texture, making a sprite with it should be simple:
CCSprite *sprite = CCSprite::createWithTexture(texture);
This works, inasmuch as a sprite is created. The image is still garbage, though.
So, what exactly is wrong here?
How do we get the data from the bitmap?
How do we make the data something that cocos can render?
CCTexture2D or CCImage? What are the parameters?
For kicks, here is the image we are testing:
And here is the byte array as Android reports it:
[-119,80,78,71,13,10,26,10,0,0,0,13,73,72,68,82,0,0,0,40,0,0,0,40,8,2,0,0,0,3,-100,47,58,0,0,0,3,115,66,73,84,8,8,8,-37,-31,79,-32,0,0,0,-71,73,68,65,84,88,-123,-19,-44,-79,13,-125,48,16,-123,-31,-97,8,-118,-48,96,37,-108,40,-70,34,3,-48,-91,-124,81,24,-115,77,48,27,-64,6,-98,32,50,19,-112,5,48,-123,69,20,41,-36,-55,-35,-45,-13,-41,-100,-114,21,-30,-34,64,-45,48,-60,-74,-41,11,63,26,-123,21,86,88,97,-123,21,86,-8,-60,112,106,-101,-56,-90,-61,11,83,48,-10,6,39,44,38,-108,39,-51,16,9,11,69,-117,8,-127,-81,-89,-102,-66,99,-82,67,-11,116,108,35,97,88,-124,121,-81,109,-4,78,120,-66,-27,82,88,-31,-81,77,26,-35,-12,20,19,66,-32,-128,92,51,-71,21,46,47,-19,-15,-80,67,122,58,-61,-10,109,-86,114,-9,122,-40,-22,-19,-114,-121,23,-52,76,13,-19,102,-6,-52,-20,-67,112,73,57,-122,-22,-25,91,46,-123,21,-2,63,-8,3,120,-72,71,35,-19,75,10,-28,0,0,0,0,73,69,78,68,-82,66,96,-126]
And here is the result (the image is not 20x20, though; it is probably scaled by cocos to fit density):
Any help would be greatly appreciated
I have an array of bytes that correspond to a "grayscaled bitmap" (one byte->one pixel), and I need to create a PNG file for this image.
The method below works, but the png created is HUGE, as the Bitmap I am using is an ARGB_8888 bitmap, which takes 4 bytes per pixel instead of 1 byte.
I haven't been able to make it work with any Bitmap.Config other than ARGB_8888. Maybe ALPHA_8 is what I need, but I have not been able to make that work either.
I have also tried the toGrayScale method which is included in some other posts (Convert a Bitmap to GrayScale in Android), but I have the same issue with the size.
public static boolean createPNGFromGrayScaledBytes(ByteBuffer grayBytes, int width,
        int height, File pngFile) throws IOException {
    if (grayBytes.remaining() != width * height) {
        Logger.error(Tag, "Unexpected error: size mismatch [remaining:" + grayBytes.remaining()
                + "][width:" + width + "][height:" + height + "]", null);
        return false;
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    int gray, color;
    int x = 0, y = 0;
    while (grayBytes.remaining() > 0) {
        // bytes are signed; mask to recover the unsigned gray value
        gray = grayBytes.get() & 0xFF;
        // write the gray value into all three color channels, fully opaque
        color = Color.argb(255, gray, gray, gray);
        bitmap.setPixel(x, y, color);
        x++;
        if (x == width) {
            x = 0;
            y++;
        }
    }
    FileOutputStream fos = new FileOutputStream(pngFile);
    boolean result = bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.close();
    return result;
}
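A hypothetical call site, for reference (grayPixels and outputDir are placeholders):
// gray holds one unsigned byte per pixel, row-major.
ByteBuffer gray = ByteBuffer.wrap(grayPixels);
createPNGFromGrayScaledBytes(gray, width, height, new File(outputDir, "gray.png"));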
EDIT: Link to the generated file (it may look like nonsense, but it is just created with random data).
http://www.tempfiles.net/download/201208/256402/huge_png.html
Any help will be greatly appreciated.
As you've noticed, saving a grayscale image as RGB is expensive. If you have luminance data then it would be better to save as a Grayscale PNG rather than an RGB PNG.
The bitmap and image functionality available in the Android Framework is really geared towards reading and writing image formats that are supported by the framework and UI components. Grayscale PNG is not included here.
If you want to save out a Grayscale PNG on Android then you'll need to use a library like http://code.google.com/p/pngj/
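For example, a minimal sketch of a grayscale writer with PNGJ (assuming the PNGJ 2.x API; names here are illustrative):
import ar.com.hjg.pngj.ImageInfo;
import ar.com.hjg.pngj.ImageLineInt;
import ar.com.hjg.pngj.PngWriter;

static void writeGrayscalePng(byte[] gray, int width, int height, OutputStream out) {
    // 8-bit grayscale: bitdepth 8, no alpha, grayscale, not indexed
    ImageInfo info = new ImageInfo(width, height, 8, false, true, false);
    PngWriter writer = new PngWriter(out, info);
    ImageLineInt line = new ImageLineInt(info);
    int[] scanline = line.getScanline();
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            scanline[col] = gray[row * width + col] & 0xFF; // make unsigned
        }
        writer.writeRow(line, row);
    }
    writer.end();
}
This stores one channel per pixel instead of four, which is exactly what the RGB route above cannot do.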
If you use the OpenCV for Android library, you can use it to save binary data to a PNG file.
My way is:
In the JNI part, create a Mat whose data begins at the byte array:
jbyte* _ByteArray_BrightnessImgForOCR = env->GetByteArrayElements(ByteArray_BrightnessImgForOCR, 0);
// wrap the bytes in a single-channel 8-bit Mat (no copy is made)
Mat img(ByteArray_BrightnessImgForOCR_h, ByteArray_BrightnessImgForOCR_w, CV_8UC1, (unsigned char *) _ByteArray_BrightnessImgForOCR);
Then write it to a PNG file:
imwrite("/mnt/sdcard/binaryImg_forOCR.png", img);
// release the Java array once the Mat (which borrows its memory) is done
env->ReleaseByteArrayElements(ByteArray_BrightnessImgForOCR, _ByteArray_BrightnessImgForOCR, 0);
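On the Java side, the corresponding native declaration might look like this (hypothetical names, matching the JNI snippet above):
// Hypothetical native hook; the byte array carries the 8-bit grayscale pixels.
public static native void saveBrightnessImgForOCR(byte[] brightnessImg, int width, int height);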
Of course, you will need to spend some time getting familiar with OpenCV and Java native coding, but following the OpenCV for Android examples makes it quick to learn.
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:
byte[] pic;
int pic_size;
Bitmap picframe;

public void onPreviewFrame(byte[] frame, Camera c)
{
    pic_size = mCamera.getParameters().getPreviewSize().height * mCamera.getParameters().getPreviewSize().width;
    pic = new byte[pic_size];
    for (int i = 0; i < pic_size; i++)
    {
        pic[i] = frame[i];
    }
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?
You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically "Tutorial 1 Basic - 0. Android Camera".
But, as in my case, for intensive image processing this becomes slower than acceptable for a real-time application.
A good replacement for the byte-array conversion in their onPreviewFrame is to wrap the frame in a YuvImage:
YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);
Create a rectangle the same size as the image.
Create a ByteArrayOutputStream and pass this, the rectangle and the compression value to compressToJpeg():
Rect imageSizeRectangle = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);
byte[] imageData = baos.toByteArray();
Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
Rendering these preview frames on a surface, and the best practices involved, is another topic entirely. =)
This very old post has caught my attention now.
The API available in 2011 was much more limited. Today one can use SurfaceTexture (see example) to preview the camera stream after (some) manipulation.
This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. Just showing black and white, however, can still be done in Java. The byte array containing the frame data is in YUV format, where the Y-plane comes first. So if you take just the Y-plane (the first width x height bytes), it already gives you the black-and-white image.
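A minimal sketch of that approach (hypothetical helper; assumes an NV21 preview frame and the preview width/height):
// Expand the NV21 Y-plane into an opaque grayscale ARGB_8888 bitmap.
static Bitmap yPlaneToBitmap(byte[] frame, int width, int height) {
    int[] pixels = new int[width * height];
    for (int i = 0; i < pixels.length; i++) {
        int y = frame[i] & 0xFF; // luminance, made unsigned
        pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
    }
    return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}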
I did achieve this through extensive work and trials. You can view the resulting app on Google Play:
https://play.google.com/store/apps/details?id=com.nm.camerafx