I need some info on the possible methods for dividing a bitmap into smaller pieces.
More importantly, I need some options to judge. I have checked many posts and am still not entirely convinced about what to do:
cut the portion of bitmap
How do I cut out the middle area of the bitmap?
These two posts offer some good options, but I can't calculate the CPU and RAM cost of each method; or maybe I shouldn't bother with that calculation at all. Nonetheless, if I am about to do something, why not do it the best way from the start.
I would also be grateful for tips and links on bitmap compression, so maybe I can get better performance by combining the two methods.
This function allows you to split a bitmap into any number of rows and columns.
Example: Bitmap[][] bitmaps = splitBitmap(bmp, 2, 1);
would create a vertically split bitmap stored in a two-dimensional array
(2 columns, 1 row).
Example: Bitmap[][] bitmaps = splitBitmap(bmp, 2, 2);
would split a bitmap into four bitmaps stored in a two-dimensional array
(2 columns, 2 rows).
public Bitmap[][] splitBitmap(Bitmap bitmap, int xCount, int yCount) {
    // Allocate a two-dimensional array to hold the individual images.
    Bitmap[][] bitmaps = new Bitmap[xCount][yCount];
    // Divide the original bitmap's width by the desired column count
    int width = bitmap.getWidth() / xCount;
    // Divide the original bitmap's height by the desired row count
    int height = bitmap.getHeight() / yCount;
    // Loop over the grid and create a bitmap for each cell
    for (int x = 0; x < xCount; ++x) {
        for (int y = 0; y < yCount; ++y) {
            // Create the sliced bitmap
            bitmaps[x][y] = Bitmap.createBitmap(bitmap, x * width, y * height, width, height);
        }
    }
    // Return the array
    return bitmaps;
}
You want to divide a bitmap into parts. I assume you want to cut equal parts from the bitmap. Say, for example, you need four equal parts.
This is a method which divides a bitmap into four equal parts and returns them in an array of bitmaps.
public Bitmap[] splitBitmap(Bitmap src) {
    Bitmap[] divided = new Bitmap[4];
    int halfWidth = src.getWidth() / 2;
    int halfHeight = src.getHeight() / 2;
    divided[0] = Bitmap.createBitmap(src, 0, 0, halfWidth, halfHeight);                   // top-left
    divided[1] = Bitmap.createBitmap(src, halfWidth, 0, halfWidth, halfHeight);           // top-right
    divided[2] = Bitmap.createBitmap(src, 0, halfHeight, halfWidth, halfHeight);          // bottom-left
    divided[3] = Bitmap.createBitmap(src, halfWidth, halfHeight, halfWidth, halfHeight);  // bottom-right
    return divided;
}
Related
I have a big bitmap, sometimes with height 2000, sometimes 4000, etc. Is it possible to split this big bitmap into chunks of 1500 and save them into an array?
For example, if I have a bitmap with height 2300, I want an array with two bitmaps: one with height 1500 and a second with height 800.
You can use createBitmap() to create bitmap chunks from the original Bitmap.
The function below takes a bitmap and the desired chunk size (1500 in your case),
and splits the bitmap vertically if the width is greater than the height, and horizontally otherwise.
import kotlin.math.ceil
import kotlin.math.max
import kotlin.math.min

fun getBitmaps(bitmap: Bitmap, maxSize: Int): List<Bitmap> {
    val width = bitmap.width
    val height = bitmap.height
    val nChunks = ceil(max(width, height) / maxSize.toDouble()).toInt()
    val bitmaps: MutableList<Bitmap> = ArrayList()
    var start = 0
    for (i in 1..nChunks) {
        bitmaps.add(
            if (width >= height)
                // Each chunk is at most maxSize wide; the last one gets the remainder
                Bitmap.createBitmap(bitmap, start, 0, min(maxSize, width - start), height)
            else
                // Each chunk is at most maxSize tall; the last one gets the remainder
                Bitmap.createBitmap(bitmap, 0, start, width, min(maxSize, height - start))
        )
        start += maxSize
    }
    return bitmaps
}
Usage:
getBitmaps(myBitmap, 1500)
Yes, you can use Bitmap.createBitmap(bmp, offsetX, offsetY, width, height);
to create a "slice" of the bitmap starting at a particular x and y offset, and having a particular width and height.
I'll leave the math up to you.
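The math is just a repeated offset capped at the remaining size. Here is a minimal sketch of the geometry in plain Java (the helper name `sliceOffsets` and the {offsetY, chunkHeight} pair layout are my own, not from any answer above):

```java
// Compute the (offsetY, chunkHeight) pairs for splitting a bitmap of
// totalHeight into horizontal slices of at most maxChunk pixels each.
static int[][] sliceOffsets(int totalHeight, int maxChunk) {
    int n = (totalHeight + maxChunk - 1) / maxChunk; // ceiling division
    int[][] slices = new int[n][2];
    for (int i = 0; i < n; i++) {
        int y = i * maxChunk;
        slices[i][0] = y;                                   // y offset of this slice
        slices[i][1] = Math.min(maxChunk, totalHeight - y); // last slice gets the remainder
    }
    return slices;
}
```

For a 2300-pixel-tall bitmap and maxChunk = 1500 this yields {0, 1500} and {1500, 800}; each pair then feeds straight into Bitmap.createBitmap(bmp, 0, offsetY, width, chunkHeight).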
My doubt is about bitmap.getPixels(allPixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
which processes a 1D array. But a bitmap is always a 2D picture representation, so why is there a single-dimensional array?
And how are the bytes packed into the 1D array?
I know this is a noob question, but I can't understand it.
Thanks
But a bitmap is always a 2D picture representation, so why is there a single-dimensional array?
A bitmap is stored in memory as a 1-dimensional array of bytes (not only bitmaps, but most binary data). All pixels of the bitmap are placed in memory row by row, each row as wide as the bitmap. The method Bitmap.getPixels() does nothing but copy those bytes from memory into an int[] array. You are free to write your own method that converts the 1D array to a 2D array, but in most cases this is not required (see below).
And how are the bytes packed into the 1D array?
The method Bitmap.getPixels() accepts and fills an int[] array whose length is the bitmap width multiplied by the bitmap height. The part of the result array corresponding to the rectangle specified in the method's parameters is filled with the colors of the pixels; the rest of the array is filled with zeros.
It's very easy to get the color of a desired pixel from this array. The index of a pixel is x + y * bitmapWidth:
...
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] allPixels = new int[width * height];
bitmap.getPixels(allPixels, 0, width, 0, 0, width, height);
int x = 64;
int y = 128;
int pixelColor = allPixels[x + y * width];
...
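If you do want the 2D view mentioned above, the conversion is a direct application of the same index formula (a sketch; `to2D` is a hypothetical helper, not part of the Bitmap API):

```java
// Reshape a row-major 1D pixel array into a [height][width] grid
// using the same index formula: index = x + y * width.
static int[][] to2D(int[] pixels, int width, int height) {
    int[][] grid = new int[height][width];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            grid[y][x] = pixels[x + y * width];
        }
    }
    return grid;
}
```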
I'm getting an OutOfMemoryError when inverting a bitmap. Here is the code I use to invert:
public Bitmap invertBitmap(Bitmap bm) {
    Bitmap src = bm.copy(bm.getConfig(), true);
    // image size
    int height = src.getHeight();
    int width = src.getWidth();
    int[] array = new int[width * height];
    src.getPixels(array, 0, width, 0, 0, width, height);
    int A, R, G, B;
    for (int i = 0; i < array.length; i++) {
        A = Color.alpha(array[i]);
        R = 255 - Color.red(array[i]);
        G = 255 - Color.green(array[i]);
        B = 255 - Color.blue(array[i]);
        array[i] = Color.argb(A, R, G, B);
    }
    src.setPixels(array, 0, width, 0, 0, width, height);
    return src;
}
The image file is ~80 KB, the dimensions are 800x1294, and the picture has words which are black on an invisible background.
The images are in a ViewPager..
After you copy bm, try setting bm = null;
In Android, due to the 16 MB memory cap for applications (on almost all phones), it is not wise to hold an entire bitmap in memory. This is a common scenario and happens to many developers.
You can find a lot of information about this problem in this Stack Overflow thread. But I really urge you to read Android's official documentation about efficient usage of bitmaps. It is here and here.
The memory size used by an image is completely different from the file size of that image.
In a file the image may be compressed using different algorithms (jpg, png, etc.), but when loaded into memory as a bitmap it uses 2 or 4 bytes per pixel.
So in your case (you are not showing the code, but it looks like you are using 4 bytes per pixel), the memory size per image is:
size = width * height * 4; // approx 4 MB for 800x1294
In your code, you first copy the original bitmap to a new one, and then create an array to manipulate the colors. So in total you are using size x 3, approx 12 MB, per image inversion.
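To make the arithmetic concrete (plain Java; the helper is purely illustrative, using the 800x1294 bitmap from the question):

```java
// Uncompressed bitmap memory: width * height * bytes-per-pixel,
// regardless of how small the compressed file on disk is.
static long bitmapBytes(int width, int height, int bytesPerPixel) {
    return (long) width * height * bytesPerPixel;
}
```

bitmapBytes(800, 1294, 4) is 4,140,800 bytes (roughly 4 MB) for an ARGB_8888 bitmap, and bitmapBytes(800, 1294, 2) is roughly 2 MB for RGB_565; with the copy plus the int[] array, the inversion above holds about three of these allocations at once.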
There are plenty of examples of how to handle large bitmaps in Android, but here are what I think are the most important points:
Try to use only one copy of the bitmap in your code above.
If you only have words in your image, use Bitmap.Config.RGB_565. This uses only 2 bytes per pixel, cutting the size in half.
Call recycle() on a bitmap that you don't need anymore.
Have a look at the scale options in BitmapFactory. You may reduce the size of the image and still fit your needs.
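The scale option works through BitmapFactory.Options.inSampleSize, which decodes every n-th pixel in each dimension. A common helper for picking a power-of-two sample size, adapted from the pattern in Android's documentation (the exact method name and bounds policy here are my own sketch):

```java
// Pick the largest power-of-two inSampleSize that still keeps both
// decoded dimensions at or above the requested size.
static int calculateInSampleSize(int width, int height, int reqWidth, int reqHeight) {
    int inSampleSize = 1;
    if (height > reqHeight || width > reqWidth) {
        int halfWidth = width / 2;
        int halfHeight = height / 2;
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}
```

Decoding a 2048x1536 image for a 512x384 target gives inSampleSize = 4, so the decoded bitmap uses 1/16 of the memory; set the result on a BitmapFactory.Options instance before the real decode.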
Good luck.
I want to print a Bitmap on a mobile Bluetooth printer (Bixolon SPP-R200). The SDK doesn't offer direct methods to print an in-memory image, so I thought about converting a Bitmap like this:
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
to a monochrome bitmap. I am drawing black text on the bitmap above using a Canvas, which works well. However, when I convert the bitmap to a ByteArray, the printer seems unable to handle those bytes. I suspect I need an array with one bit per pixel (a pixel would be either white = 1 or black = 0).
As there seems to be no convenient, out-of-the-box way to do that, one idea I had was to use:
bitmap.getPixels(pixels, offset, stride, x, y, width, height)
to obtain the pixels. I assume I'd have to use it as follows:
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int [] pixels = new int [width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
However, I am not sure about a few things:
In getPixels, does it make sense to simply pass the width as the "stride" argument?
I guess I'd have to evaluate the color information of each pixel and switch it to either black or white (and write this value into a new target byte array which I would ultimately pass to the printer)?
How do I best evaluate each pixel's color information in order to decide whether it should be black or white? (The rendered bitmap is black paint on a white background.)
Does this approach make sense at all? Is there an easier way? It's not enough to just make the bitmap black & white; the main issue is to reduce the color information for each pixel to one bit.
UPDATE
As suggested by Reuben, I'll first convert the Bitmap to a monochrome bitmap and then iterate over each pixel:
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] pixels = new int[width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
// Iterate over height
for (int y = 0; y < height; y++) {
    int offset = y * width; // start of row y in the 1D pixel array
    // Iterate over width
    for (int x = 0; x < width; x++) {
        int pixel = pixels[offset + x];
    }
}
Now Reuben suggested to "read the lowest byte of each 32-bit pixel", which relates to my question about how to evaluate the pixel color. My last question in this regard: do I get the lowest byte by simply doing this:
// Using the pixel from bitmap.getPixel(x,y)
int lowestByte = pixel & 0xff;
You can convert the image to monochrome 32bpp using a ColorMatrix.
Bitmap bmpMonochrome = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bmpMonochrome);
ColorMatrix ma = new ColorMatrix();
ma.setSaturation(0);
Paint paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(ma));
canvas.drawBitmap(bmpSrc, 0, 0, paint);
That simplifies the color->monochrome conversion. Now you can just do a getPixels() and read the lowest byte of each 32-bit pixel. If it's <128 it's a 0, otherwise it's a 1.
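Packing that threshold decision into one bit per pixel could look like this (a sketch operating on a plain int[] pixel array; the MSB-first bit order and the 1 = white convention are assumptions, so check your printer's manual):

```java
// Pack ARGB pixels into a 1bpp byte array, MSB-first within each byte.
// Rows are padded up to a whole number of bytes. Bit = 1 means white.
static byte[] packMonochrome(int[] pixels, int width, int height) {
    int rowBytes = (width + 7) / 8; // bytes per padded row
    byte[] out = new byte[rowBytes * height];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int luma = pixels[y * width + x] & 0xFF; // lowest byte of the desaturated pixel
            if (luma >= 128) {
                out[y * rowBytes + x / 8] |= (byte) (0x80 >> (x % 8));
            }
        }
    }
    return out;
}
```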
Well, I think it's quite late now to reply to this thread, but I was also working on this some time back and decided to build my own library that converts any jpg or png image to a 1bpp .bmp. Most printers that require 1bpp images will support this format (tested on one of them :)).
Here you can find the library as well as a test project that uses it to make a monochrome single-channel image. Feel free to change it. :)
https://github.com/acdevs/1bpp-monochrome-android
Enjoy..!! :)
You should convert each pixel into HSV space and use the value to determine whether the pixel in the target image should be black or white:
Bitmap bwBitmap = Bitmap.createBitmap( bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.RGB_565 );
float[] hsv = new float[ 3 ];
for( int col = 0; col < bitmap.getWidth(); col++ ) {
for( int row = 0; row < bitmap.getHeight(); row++ ) {
Color.colorToHSV( bitmap.getPixel( col, row ), hsv );
if( hsv[ 2 ] > 0.5f ) {
bwBitmap.setPixel( col, row, 0xffffffff );
} else {
bwBitmap.setPixel( col, row, 0xff000000 );
}
}
}
return bwBitmap;
Converting to monochrome at exactly the same size as the original bitmap is not enough for printing.
Printers can only print each "pixel" (dot) as monochrome, because each spot of ink has only one color, so they must use many more dots than that and adjust their size, density, etc. to emulate a grayscale-like feel. This technique is called halftoning. You can see that printers often have resolutions of at least 600dpi, normally 1200-4800dpi, while display screens often top out at 200-300ppi.
So your monochrome bitmap should be at least 3 times the original resolution on each side.
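To illustrate what halftoning does, here is a toy ordered-dither sketch (a 2x2 Bayer matrix over an 8-bit grayscale grid; real print pipelines use larger matrices or error diffusion, and this helper is purely illustrative):

```java
// 2x2 Bayer threshold matrix; values 0..3 spread thresholds across each cell.
static final int[][] BAYER2 = { { 0, 2 }, { 3, 1 } };

// Ordered dithering: true = print a black dot. A uniform mid-gray input
// ends up with about half the dots set, emulating 50% gray with pure b/w dots.
static boolean[][] orderedDither(int[][] gray, int width, int height) {
    boolean[][] dots = new boolean[height][width];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int threshold = BAYER2[y % 2][x % 2] * 64 + 32; // 32, 96, 160, 224
            dots[y][x] = gray[y][x] < threshold;
        }
    }
    return dots;
}
```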
I am using the following code to merge 2 different bitmaps into 1.
public Bitmap combineImages(Bitmap c, Bitmap s) {
    int width = c.getWidth() + (s.getWidth() / 2);
    int height = c.getHeight() + (s.getHeight() / 2);
    Bitmap cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas comboImage = new Canvas(cs);
    comboImage.drawBitmap(c, 0f, 0f, null);
    comboImage.drawBitmap(s,
            c.getWidth() - (s.getWidth() / 2),
            c.getHeight() - (s.getHeight() / 2),
            null);
    return cs;
}
It works well, but the problem is that it makes my images blurry.
Basically my full code is here: My Full Code. What I am doing is converting my Base64 String images to Bitmaps. What do you think the issue may be?
I just want to prevent my images from getting blurry...
Oh yes, I just made the 3 versions (hdpi, mdpi, ldpi) of the "+" image with appropriate resolutions, and it works for me.