Ok, so I've been racking my brain on this one all day. I'm trying to figure out how I can convert a Bitmap from a canvas to a 1bpp (one bit per pixel) BMP file in Android and physically save it as such.
So far I've iterated through the bitmap and created an int[] of the resulting pixel values as 1s and 0s. My next question is: what do I do with that?
What I tried to do was something like
int[] bits = ...; // populated earlier
byte[] bmp = new byte[bits.length / 8];
int byteIndex = 0;
int bitIndex = 0;
for (int i = 0; i < bits.length; i++) {
    if (bits[i] == 1)
        bmp[byteIndex] |= (byte) (1 << (7 - bitIndex)); // set the bit
    // a 0 bit stays 0 in the freshly allocated array
    if (++bitIndex == 8) {
        bitIndex = 0;
        byteIndex++;
    }
}
OutputStream out = new FileOutputStream("/mnt/sdcard/dynbmp.bmp");
out.write(bmp);
out.close();
I get a file out of it, but it's obviously not a valid BMP file; who knows what it is. You'll have to forgive my lack of bit/byte and imaging knowledge, but where am I screwing up? Do I have the idea completely wrong? Am I missing some header info or something?
Yes, you are missing several things. It's a little bit more complicated... Look here:
http://en.wikipedia.org/wiki/BMP_file_format
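In short, a BMP needs a 14-byte file header, a 40-byte BITMAPINFOHEADER, and (for 1bpp) a 2-color palette before the pixel data, with every multi-byte field little-endian, each row padded to a multiple of 4 bytes, and rows stored bottom-up. As a rough, untested sketch (the method name and the assumption that bits holds width*height top-down values with 1 = white are mine, not from the question), it could look like this:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Untested sketch: pack a top-down array of 0/1 values into a 1bpp BMP file.
public static void write1BppBmp(int[] bits, int width, int height, String path) throws IOException {
    int rowSize = ((width + 31) / 32) * 4;   // each row padded to a 4-byte boundary
    int imageSize = rowSize * height;
    int dataOffset = 14 + 40 + 8;            // file header + info header + 2-entry palette
    ByteBuffer buf = ByteBuffer.allocate(dataOffset + imageSize).order(ByteOrder.LITTLE_ENDIAN);

    // BITMAPFILEHEADER (14 bytes)
    buf.put((byte) 'B').put((byte) 'M');
    buf.putInt(dataOffset + imageSize);      // total file size
    buf.putInt(0);                           // reserved
    buf.putInt(dataOffset);                  // offset to pixel data

    // BITMAPINFOHEADER (40 bytes)
    buf.putInt(40);                          // header size
    buf.putInt(width);
    buf.putInt(height);                      // positive height => rows stored bottom-up
    buf.putShort((short) 1);                 // color planes
    buf.putShort((short) 1);                 // bits per pixel
    buf.putInt(0);                           // BI_RGB, no compression
    buf.putInt(imageSize);
    buf.putInt(2835);                        // horizontal resolution (~72 dpi)
    buf.putInt(2835);                        // vertical resolution
    buf.putInt(2);                           // colors in palette
    buf.putInt(0);                           // important colors

    // Palette: index 0 = black, index 1 = white
    buf.putInt(0x00000000);
    buf.putInt(0x00FFFFFF);

    // Pixel data: bottom-up rows, most significant bit first within each byte
    for (int y = height - 1; y >= 0; y--) {
        int rowStart = buf.position();
        for (int x = 0; x < width; x++) {
            if (bits[y * width + x] == 1) {
                int pos = rowStart + x / 8;
                buf.put(pos, (byte) (buf.get(pos) | (0x80 >> (x % 8))));
            }
        }
        buf.position(rowStart + rowSize);    // padding bytes stay zero
    }

    OutputStream out = new FileOutputStream(path);
    out.write(buf.array());
    out.close();
}

The field layout follows the Wikipedia article linked above; the main things the original code was missing are the two headers, the palette, the bottom-up row order and the row padding.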
I have an Android project with OpenCV 4.0.1 and TFLite installed.
And I want to make an inference with a pretrained MobileNetV2 on a cv::Mat which I extracted and cropped from a CameraBridgeViewBase (Android style).
But it's kind of difficult.
I followed this example.
That does the inference on a ByteBuffer variable called "imgData" (line 71, class: org.tensorflow.lite.examples.classification.tflite.Classifier).
That imgData looks to be filled in the method called "convertBitmapToByteBuffer" from the same class (line 185), adding pixel by pixel from a bitmap that seems to have been cropped a little before.
private int[] intValues = new int[224 * 224];
Mat _croppedFace = new Mat(); // Cropped image from CvCameraViewFrame.rgba() method.
float[][] outputVal = new float[1][1]; // Output value from my MobileNetV2 trained model (I've changed the output in training, tested in Python).
// Following: https://stackoverflow.com/questions/13134682/convert-mat-to-bitmap-opencv-for-android
Bitmap bitmap = Bitmap.createBitmap(_croppedFace.cols(), _croppedFace.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(_croppedFace, bitmap);
convertBitmapToByteBuffer(bitmap); // This call should be used as the example one.
// runInference();
_tflite.run(imgData, outputVal);
But it looks like the input_shape of my NN is not correct, even though I'm following the MobileNet example because my NN is a MobileNetV2.
I've solved the error, but I'm sure that it isn't the best way to do it.
The Keras MobileNetV2 input_shape is: (nBatches, 224, 224, nChannels).
I just want to predict a single image, so nBatches == 1, and I'm working in RGB mode, so nChannels == 3.
// Nasty nasty, but works. nBatches == 2? -- _croppedFace.shape() == (224, 224), 3 channels.
float[][][][] _inputValue = new float[2][_croppedFace.cols()][_croppedFace.rows()][3];
// Fill the _inputValue
for (int i = 0; i < _croppedFace.cols(); ++i)
    for (int j = 0; j < _croppedFace.rows(); ++j)
        for (int z = 0; z < 3; ++z)
            _inputValue[0][i][j][z] = (float) _croppedFace.get(i, j)[z] / 255; // DL works better with 0..1 values.
/*
 * outputVal has this shape, but I don't really know why.
 * I'm sure one of the 2's is for nClasses (I'm working with 2 classes),
 * but I don't really know what the other one is for.
 */
float[][] outputVal = new float[2][2];
// Tensorflow lite interpreter
_tflite.run(_inputValue , outputVal);
In Python it has the same shape.
Python prediction:
[[XXXXXX, YYYYY]] <- sure, for the last layer that I made; this is just a prototype NN.
Hope this helps someone, and also that someone can improve this answer, because it is not very optimized.
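As a possible improvement (an untested sketch of my own, assuming a float32 input of shape [1, 224, 224, 3] and the same _croppedFace / _tflite variables), the crop could be packed into a direct ByteBuffer the way the example's convertBitmapToByteBuffer does, which avoids both the extra batch dimension and the boxed 4-D array:

// Untested sketch: pack the cropped face into a direct ByteBuffer for a
// float32 [1, 224, 224, 3] input. The 224x224 size is an assumption.
Bitmap bitmap = Bitmap.createBitmap(_croppedFace.cols(), _croppedFace.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(_croppedFace, bitmap);
bitmap = Bitmap.createScaledBitmap(bitmap, 224, 224, true);

int[] intValues = new int[224 * 224];
bitmap.getPixels(intValues, 0, 224, 0, 0, 224, 224);

ByteBuffer imgData = ByteBuffer.allocateDirect(1 * 224 * 224 * 3 * 4);
imgData.order(ByteOrder.nativeOrder());
for (int pixel : intValues) {
    imgData.putFloat(((pixel >> 16) & 0xFF) / 255f);  // R, normalized to 0..1
    imgData.putFloat(((pixel >> 8) & 0xFF) / 255f);   // G
    imgData.putFloat((pixel & 0xFF) / 255f);          // B
}

float[][] outputVal = new float[1][2];  // one batch, two classes
_tflite.run(imgData, outputVal);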
I'm trying to convert a YUV image to grayscale, so basically I just need the Y values.
To do so I wrote this little piece of code (with frame being the YUV image):
imageConversionTime = System.currentTimeMillis();
size = frame.getSize();
byte nv21ByteArray[] = frame.getImage();
int lol;
for (int i = 0; i < size.width; i++) {
    for (int j = 0; j < size.height; j++) {
        lol = size.width * j + i;
        yMatrix.put(j, i, nv21ByteArray[lol]);
    }
}
bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(yMatrix, bitmap);
imageConversionTime = System.currentTimeMillis() - imageConversionTime;
However, this takes about 13500 ms. I need it to be A LOT faster (on my computer it takes 8.5 ms in Python). (I work on a Motorola Moto E 4G 2nd generation; not super powerful, but it should be enough for converting images, right?)
Any suggestions?
Thanks in advance.
First of all, I would assign size.width and size.height to local variables. I don't think the compiler will optimize this by default, but I am not sure about that.
Furthermore, create a plain int[] representing the result instead of using a Mat.
Then you could do something like this:
int[] grayScalePixels = new int[size.width * size.height];
int cntPixels = 0;
In your inner loop, set
int y = nv21ByteArray[lol] & 0xFF; // luminance as an unsigned value
grayScalePixels[cntPixels] = 0xFF000000 | (y << 16) | (y << 8) | y;
cntPixels++;
To get your final image do the following:
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, size.width, size.height, Bitmap.Config.ARGB_8888);
Hope it works properly (I have not tested it, but at least the principle shown should be applicable: rely on a plain array instead of a Mat).
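Putting the pieces together, a hedged, untested sketch (assuming an NV21 frame, where the first width*height bytes are the Y plane) could look like this:

// Untested sketch: build a grayscale ARGB_8888 bitmap straight from the Y plane,
// avoiding the per-pixel Mat.put() calls.
int width = size.width;    // hoisted out of the loops
int height = size.height;
byte[] nv21ByteArray = frame.getImage();
int[] grayScalePixels = new int[width * height];

for (int j = 0; j < height; j++) {
    int rowOffset = j * width;
    for (int i = 0; i < width; i++) {
        int y = nv21ByteArray[rowOffset + i] & 0xFF;  // luminance, 0..255
        grayScalePixels[rowOffset + i] = 0xFF000000 | (y << 16) | (y << 8) | y;
    }
}

Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, width, height, Bitmap.Config.ARGB_8888);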
Probably 2 years too late but anyways ;)
To convert to gray scale, all you need to do is set the u/v values to 128 and leave the y values as is. Note that this code is for YUY2 format. You can refer to this document for other formats.
private void convertToBW(byte[] ptrIn, String filePath) {
    // change all u and v values to 127 (cause 128 will cause byte overflow)
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    for (int i = 0, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
        if (i % 2 != 0) {
            ptrOut[i] = (byte) 127;
        }
    }
    convertToJpeg(ptrOut, filePath);
}
For NV21/NV12, the interleaved UV plane starts after the first width*height luminance bytes, i.e. at two thirds of the buffer, so I think the loop would change to something like:
for (int i = ptrOut.length * 2 / 3, ptrInLength = ptrOut.length; i < ptrInLength; i++) { ptrOut[i] = (byte) 127; }
Note: (didn't try this myself)
Also, I would suggest profiling your Utils method and createBitmap() calls separately.
I have a requirement to display somewhat big images on an Android app.
Right now I'm using an ImageView with a source Bitmap.
I understand OpenGL has a certain device-dependent limit on how big the image dimensions can be for it to process the image.
Is there ANY way to display these images (with fixed width, without cropping) regardless of this limit,
other than splitting the image into multiple ImageView elements?
Thank you.
UPDATE 01 Apr 2013
Still no luck; so far all suggestions were to reduce image quality. One suggested it might be possible to bypass this limitation by using the CPU to do the processing instead of the GPU (though it might take more time to process).
I don't understand: is there really no way to display long images with a fixed width without reducing image quality? I bet there is; I'd love it if anyone would at least point me in the right direction.
Thanks everyone.
You can use BitmapRegionDecoder to break apart larger bitmaps (requires API level 10). I've written a method that will utilize this class and return a single Drawable that can be placed inside an ImageView:
private static final int MAX_SIZE = 1024;

private Drawable createLargeDrawable(int resId) throws IOException {
    InputStream is = getResources().openRawResource(resId);
    BitmapRegionDecoder brd = BitmapRegionDecoder.newInstance(is, true);
    try {
        if (brd.getWidth() <= MAX_SIZE && brd.getHeight() <= MAX_SIZE) {
            return new BitmapDrawable(getResources(), is);
        }
        int rowCount = (int) Math.ceil((float) brd.getHeight() / (float) MAX_SIZE);
        int colCount = (int) Math.ceil((float) brd.getWidth() / (float) MAX_SIZE);
        BitmapDrawable[] drawables = new BitmapDrawable[rowCount * colCount];
        for (int i = 0; i < rowCount; i++) {
            int top = MAX_SIZE * i;
            int bottom = i == rowCount - 1 ? brd.getHeight() : top + MAX_SIZE;
            for (int j = 0; j < colCount; j++) {
                int left = MAX_SIZE * j;
                int right = j == colCount - 1 ? brd.getWidth() : left + MAX_SIZE;
                Bitmap b = brd.decodeRegion(new Rect(left, top, right, bottom), null);
                BitmapDrawable bd = new BitmapDrawable(getResources(), b);
                bd.setGravity(Gravity.TOP | Gravity.LEFT);
                drawables[i * colCount + j] = bd;
            }
        }
        LayerDrawable ld = new LayerDrawable(drawables);
        for (int i = 0; i < rowCount; i++) {
            for (int j = 0; j < colCount; j++) {
                ld.setLayerInset(i * colCount + j, MAX_SIZE * j, MAX_SIZE * i, 0, 0);
            }
        }
        return ld;
    } finally {
        brd.recycle();
    }
}
The method will check to see if the drawable resource is smaller than MAX_SIZE (1024) in both axes. If it is, it just returns the drawable. If it's not, it will break the image apart and decode chunks of the image and place them in a LayerDrawable.
I chose 1024 because I believe most available phones will support images at least that large. If you want to find the actual texture size limit for a phone, you have to do some funky stuff through OpenGL, and it's not something I wanted to dive into.
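For reference, querying that limit is only a few lines if you already have a GL context (a hedged sketch; it must run on the GL thread, e.g. in a GLSurfaceView.Renderer callback):

// Sketch: query the device's maximum texture size. Requires a current OpenGL
// context, e.g. inside onSurfaceCreated() of a GLSurfaceView.Renderer.
int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
Log.d("GL", "GL_MAX_TEXTURE_SIZE = " + maxTextureSize[0]);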
I wasn't sure how you were accessing your images, so I assumed they were in your drawable folder. If that's not the case, it should be fairly easy to refactor the method to take in whatever parameter you need.
You can use BitmapFactory.Options to reduce the size of the picture. You can use something like this:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 3; //reduce size 3 times
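For completeness, a usage sketch (the resource name R.drawable.big_image is a placeholder, not from the question):

// Decode at roughly 1/3 of the original width and height.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 3;
Bitmap smaller = BitmapFactory.decodeResource(getResources(), R.drawable.big_image, options);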
Have you seen how your maps app works? I made a renderer for maps once. You can use the same trick to display your image.
Divide your image into square tiles (e.g. 128x128 pixels). Create a custom ImageView supporting rendering from tiles. Your ImageView knows which part of the bitmap it should show and displays only the required tiles, loading them from your SD card. Using such a tile map you can display endless images.
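A minimal sketch of that idea (my own, untested; the class name, tile size and loadTile() helper are placeholders): the view tracks which part of the big image is visible and draws only the tiles intersecting it.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;
import android.view.View;

// Untested sketch of a tile-based image view; loadTile() is a placeholder that
// would decode "tile_<row>_<col>.png" from the SD card, ideally via an LruCache.
public class TiledImageView extends View {
    private static final int TILE_SIZE = 128;
    private final Rect viewport = new Rect(); // region of the big image to show

    public TiledImageView(Context context) {
        super(context);
    }

    public void scrollViewportTo(int left, int top) {
        viewport.set(left, top, left + getWidth(), top + getHeight());
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        for (int row = viewport.top / TILE_SIZE; row <= viewport.bottom / TILE_SIZE; row++) {
            for (int col = viewport.left / TILE_SIZE; col <= viewport.right / TILE_SIZE; col++) {
                Bitmap tile = loadTile(row, col);
                if (tile != null) {
                    canvas.drawBitmap(tile, col * TILE_SIZE - viewport.left, row * TILE_SIZE - viewport.top, null);
                }
            }
        }
    }

    private Bitmap loadTile(int row, int col) {
        return null; // placeholder: decode the tile bitmap from storage here
    }
}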
It would help if you gave us the dimensions of your bitmap.
Please understand that OpenGL runs against natural mathematical limits.
For instance, there is a very good reason a texture in OpenGL must have power-of-two dimensions. This is really the only way the math of any downscaling can be done cleanly without any remainder.
So if you give us the exact dimensions of the smallest bitmap that's giving you trouble, some of us may be able to tell you what kind of actual limit you're running up against.
I am trying to load a movement map from a PNG image. In order to save memory, after I load the bitmap I do something like this:
`Bitmap mapBmp = tempBmp.copy(Bitmap.Config.ALPHA_8, false);`
If I draw the mapBmp I can see the map, but when I use getPixel() I always get 0 (zero).
Is there a way to retrieve ALPHA information from a bitmap other than
with getPixel() ?
Seems to be an Android bug in handling ALPHA_8. I also tried copyPixelsToBuffer, to no avail. Simplest workaround is to waste lots of memory and use ARGB_8888.
Issue 25690
I found this question from Google and I was able to extract the pixels using the copyPixelsToBuffer() method that Mitrescu Catalin ended up using. This is what my code looks like in case anyone else finds this as well:
public byte[] getPixels(Bitmap b) {
    int bytes = b.getRowBytes() * b.getHeight();
    ByteBuffer buffer = ByteBuffer.allocate(bytes);
    b.copyPixelsToBuffer(buffer);
    return buffer.array();
}
If you are coding for API level 12 or higher you could use getByteCount() instead to get the total number of bytes to allocate. However if you are coding for API level 19 (KitKat) you should probably use getAllocationByteCount() instead.
I was able to find a nice and sort of clean way to create boundary maps. I create an ALPHA_8 bitmap from the start, paint my boundary map with paths, then use copyPixelsToBuffer() and transfer the bytes into a ByteBuffer. I use the buffer to "getPixels" from.
I think it is a good solution since you can scale the Path up or down and draw the boundary map at the desired screen resolution, with no IO + decode operations.
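For illustration, a hedged sketch of that approach (the width, height and boundaryPath variables are assumptions, not from the answer):

// Untested sketch: draw a boundary map into an ALPHA_8 bitmap with a Path and
// read its bytes back via copyPixelsToBuffer(), instead of decoding a PNG.
Bitmap map = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
Canvas canvas = new Canvas(map);
Paint paint = new Paint();
paint.setColor(Color.WHITE);           // only the alpha channel ends up stored
paint.setStyle(Paint.Style.FILL);
canvas.drawPath(boundaryPath, paint);  // boundaryPath scaled to the screen resolution

ByteBuffer buffer = ByteBuffer.allocate(map.getRowBytes() * map.getHeight());
map.copyPixelsToBuffer(buffer);
byte[] pixels = buffer.array();
// alpha of pixel (x, y) is pixels[y * map.getRowBytes() + x] & 0xFF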
Bitmap.getPixel() is useless for ALPHA_8 bitmaps, it always returns 0.
I developed a solution with the PNGJ library, to read the image from assets and then create a Bitmap with Config.ALPHA_8.
import ar.com.hjg.pngj.IImageLine;
import ar.com.hjg.pngj.ImageLineHelper;
import ar.com.hjg.pngj.PngReader;

public Bitmap getAlpha8BitmapFromAssets(String file) {
    Bitmap result = null;
    try {
        PngReader pngr = new PngReader(getAssets().open(file));
        int channels = pngr.imgInfo.channels;
        if (channels < 3 || pngr.imgInfo.bitDepth != 8)
            throw new RuntimeException("This method is for RGB8/RGBA8 images");
        int bytes = pngr.imgInfo.cols * pngr.imgInfo.rows;
        ByteBuffer buffer = ByteBuffer.allocate(bytes);
        for (int row = 0; row < pngr.imgInfo.rows; row++) {
            IImageLine l1 = pngr.readRow();
            for (int j = 0; j < pngr.imgInfo.cols; j++) {
                int original_color = ImageLineHelper.getPixelARGB8(l1, j);
                byte x = (byte) Color.alpha(original_color);
                buffer.put(row * pngr.imgInfo.cols + j, x ^= 0xff);
            }
        }
        pngr.end();
        result = Bitmap.createBitmap(pngr.imgInfo.cols, pngr.imgInfo.rows, Bitmap.Config.ALPHA_8);
        result.copyPixelsFromBuffer(buffer);
    } catch (IOException e) {
        Log.e(LOG_TAG, e.getMessage());
    }
    return result;
}
I also invert alpha values, because of my particular needs. This code is only tested for API 21.
I am trying to access the raw data of a Bitmap in ARGB_8888 format on Android, using the copyPixelsToBuffer and copyPixelsFromBuffer methods. However, invocation of those calls seems to always apply the alpha channel to the rgb channels. I need the raw data in a byte[] or similar (to pass through JNI; yes, I know about bitmap.h in Android 2.2, cannot use that).
Here is a sample:
// Create 1x1 Bitmap with alpha channel, 8 bits per channel
Bitmap one = Bitmap.createBitmap(1,1,Bitmap.Config.ARGB_8888);
one.setPixel(0,0,0xef234567);
Log.v("?","hasAlpha() = "+Boolean.toString(one.hasAlpha()));
Log.v("?","pixel before = "+Integer.toHexString(one.getPixel(0,0)));
// Copy Bitmap to buffer
byte[] store = new byte[4];
ByteBuffer buffer = ByteBuffer.wrap(store);
one.copyPixelsToBuffer(buffer);
// Change value of the pixel
int value=buffer.getInt(0);
Log.v("?", "value before = "+Integer.toHexString(value));
value = (value >> 8) | 0xffffff00;
buffer.putInt(0, value);
value=buffer.getInt(0);
Log.v("?", "value after = "+Integer.toHexString(value));
// Copy buffer back to Bitmap
buffer.position(0);
one.copyPixelsFromBuffer(buffer);
Log.v("?","pixel after = "+Integer.toHexString(one.getPixel(0,0)));
The log then shows
hasAlpha() = true
pixel before = ef234567
value before = 214161ef
value after = ffffff61
pixel after = 619e9e9e
I understand that the order of the argb channels is different; that's fine. But I don't
want the alpha channel to be applied upon every copy (which is what it seems to be doing).
Is this how copyPixelsToBuffer and copyPixelsFromBuffer are supposed to work? Is there any way to get the raw data in a byte[]?
Added in response to answer below:
Putting in buffer.order(ByteOrder.nativeOrder()); before the copyPixelsToBuffer does change the result, but still not in the way I want it:
pixel before = ef234567
value before = ef614121
value after = ffffff41
pixel after = ff41ffff
Seems to suffer from essentially the same problem (alpha being applied upon each copyPixelsFrom/ToBuffer).
One way to access data in a Bitmap is to use the getPixels() method. Below you can find an example I used to get a grayscale image from ARGB data and then go back from a byte array to a Bitmap (of course, if you need RGB you reserve 3x the bytes and save them all...):
/* Free to use licence by Sami Varjo (but nice if you retain this line) */
public final class BitmapConverter {

    private BitmapConverter() {}

    /**
     * Get grayscale data from argb image to byte array
     */
    public static byte[] ARGB2Gray(Bitmap img) {
        int width = img.getWidth();
        int height = img.getHeight();
        int[] pixels = new int[height * width];
        byte grayIm[] = new byte[height * width];
        img.getPixels(pixels, 0, width, 0, 0, width, height);
        int pixel = 0;
        int count = width * height;
        while (count-- > 0) {
            int inVal = pixels[pixel];
            // Get the pixel channel values from int
            double r = (double) ((inVal & 0x00ff0000) >> 16);
            double g = (double) ((inVal & 0x0000ff00) >> 8);
            double b = (double) (inVal & 0x000000ff);
            grayIm[pixel++] = (byte) (0.2989 * r + 0.5870 * g + 0.1140 * b);
        }
        return grayIm;
    }

    /**
     * Create a gray scale bitmap from byte array
     */
    public static Bitmap gray2ARGB(byte[] data, int width, int height) {
        int count = height * width;
        int[] outPix = new int[count];
        int pixel = 0;
        while (count-- > 0) {
            int val = data[pixel] & 0xff; // convert byte to unsigned
            outPix[pixel++] = 0xff000000 | val << 16 | val << 8 | val;
        }
        Bitmap out = Bitmap.createBitmap(outPix, 0, width, width, height, Bitmap.Config.ARGB_8888);
        return out;
    }
}
My guess is that this might have to do with the byte order of the ByteBuffer you are using. ByteBuffer uses big endian by default.
Set the endianness on the buffer with
buffer.order(ByteOrder.nativeOrder());
See if it helps.
Moreover, copyPixelsFromBuffer/copyPixelsToBuffer does not change the pixel data in any way. They are copied raw.
I realize this is very stale and probably won't help you now, but I came across this recently in trying to get copyPixelsFromBuffer to work in my app. (Thank you for asking this question, btw! You saved me tons of time in debugging.) I'm adding this answer in the hopes it helps others like me going forward...
Although I haven't used this yet to ensure that it works, it looks like, as of API Level 19, we'll finally have a way to specify that a Bitmap should not "apply the alpha" (a.k.a. premultiply). They're adding a setPremultiplied(boolean) method that should help in situations like this going forward by allowing us to specify false.
I hope this helps!
This is an old question, but I ran into the same issue and just figured out that the bitmap bytes are pre-multiplied. You can set the bitmap (as of API 19) not to pre-multiply the buffer, but the API makes no guarantee.
From the docs:
public final void setPremultiplied(boolean premultiplied)
Sets whether the bitmap should treat its data as pre-multiplied.
Bitmaps are always treated as pre-multiplied by the view system and Canvas for performance reasons. Storing un-pre-multiplied data in a Bitmap (through setPixel, setPixels, or BitmapFactory.Options.inPremultiplied) can lead to incorrect blending if drawn by the framework.
This method will not affect the behaviour of a bitmap without an alpha channel, or if hasAlpha() returns false.
Calling createBitmap or createScaledBitmap with a source Bitmap whose colors are not pre-multiplied may result in a RuntimeException, since those functions require drawing the source, which is not supported for un-pre-multiplied Bitmaps.
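A minimal sketch of how that could be used (untested, API 19+; it mirrors the 1x1 example from the question):

// Untested sketch (API 19+): mark the bitmap as un-premultiplied before copying,
// so copyPixelsToBuffer()/copyPixelsFromBuffer() move the raw channel values.
Bitmap one = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
one.setPremultiplied(false);
one.setPixel(0, 0, 0xef234567);

ByteBuffer buffer = ByteBuffer.allocate(4);
buffer.order(ByteOrder.nativeOrder());
one.copyPixelsToBuffer(buffer);   // raw bytes, no alpha applied

buffer.position(0);
one.copyPixelsFromBuffer(buffer); // copying back should not re-apply alpha either

Note the caveat from the docs above: such a bitmap cannot be drawn by createBitmap/createScaledBitmap or the view system without risking a RuntimeException, so this is only for raw data access.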