I have a project in which I capture a photo from the camera (using the Camera or Camera2 API) and then have to manipulate the color of every pixel in the image.
The image is large (4032x3024), and using Bitmap.getPixel(x,y) or Bitmap.setPixel(x,y) takes forever.
Is there a better way that I can work on the image's pixels? Is there some kind of external library I can use?
Thanks!
You can copy the bitmap's pixels into an int[] with getPixels(), process the array, and write it back with setPixels(). To improve performance and memory use, you can copy only part of the bitmap at a time and process the parts of the array in separate threads; once they have all finished, put the processed pixels back together and call setPixels().
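A minimal sketch of the copy-part-of-the-bitmap approach, using horizontal bands (the inversion is just a placeholder operation; the bitmap must be mutable, e.g. obtained via bitmap.copy(Bitmap.Config.ARGB_8888, true), and the band height of 256 is arbitrary):

// Assumes android.graphics.Bitmap and android.graphics.Color.
void processInBands(Bitmap src) {
    int width = src.getWidth();
    int bandHeight = 256;                       // arbitrary band height, keeps the int[] small
    int[] band = new int[width * bandHeight];
    for (int y = 0; y < src.getHeight(); y += bandHeight) {
        int h = Math.min(bandHeight, src.getHeight() - y);
        src.getPixels(band, 0, width, 0, y, width, h);   // copy one band out of the bitmap
        for (int i = 0; i < width * h; i++) {
            int c = band[i];
            // placeholder per-pixel operation: invert RGB, keep alpha
            band[i] = Color.argb(Color.alpha(c), 255 - Color.red(c), 255 - Color.green(c), 255 - Color.blue(c));
        }
        src.setPixels(band, 0, width, 0, y, width, h);   // write the processed band back
    }
}

Each band's inner loop is independent of the others, so the processing itself can be split across threads as suggested above.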
There is a big topic that exists for exactly this reason. I would suggest looking into it if it applies to your app.
I'm new to Android development. I have a library here that produces LCD images that come from a device over USB. These images are small (120x80 pixels) and 1-bit. I want to show these images in the UI.
All documents I've found explain how to show bitmaps from image files (PNG etc.) or how to show them from an app's resources. I did find out that I can add an ImageView to the UI. Then, for each incoming 120x80 pixel image, create a Bitmap instance, fill it with pixels, and assign it to the ImageView. However, I do not know if this yields the best performance here.
It is also important to keep in mind that nearest neighbor scaling must be used here. Bilinear filtering with such a small image produces a result that is too blurry.
From my experience with other languages, it seems wasteful to create a Bitmap instance for each incoming image. But perhaps I am wrong here.
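Roughly what I have in mind, reusing a single Bitmap (a sketch; the names, the MSB-first bit packing, and the black/white mapping are just assumptions on my part):

// Fields of the activity/view; the Bitmap and pixel array are created once and reused.
Bitmap lcd = Bitmap.createBitmap(120, 80, Bitmap.Config.ARGB_8888);
int[] pixels = new int[120 * 80];

void setUp(ImageView view) {
    view.setImageBitmap(lcd);
    ((BitmapDrawable) view.getDrawable()).setFilterBitmap(false); // nearest-neighbour scaling, no bilinear blur
}

void onFrame(byte[] packed, ImageView view) {   // 'packed' is one 1-bit frame, 1200 bytes, row-major, MSB first
    for (int i = 0; i < pixels.length; i++) {
        boolean on = (packed[i >> 3] & (0x80 >> (i & 7))) != 0;
        pixels[i] = on ? Color.BLACK : Color.WHITE;
    }
    lcd.setPixels(pixels, 0, 120, 0, 0, 120, 80);
    view.invalidate();                          // the ImageView still holds the same bitmap, so it just redraws
}

The idea is that only the pixel array changes per frame; no new Bitmap or Drawable is allocated.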
Suggestions?
I would like to read images from InputStreams and draw them to my canvas. Unfortunately, the images may be very large and could easily cause out-of-memory exceptions. BitmapFactory lets me provide a sample size, which downsamples the image as it is decoded and avoids the memory issues, but image quality suffers.
Ideally, Canvas would provide a method that paints an image directly from an InputStream rather than from a Bitmap, but I haven't found anything of that kind. Does this exist, or is there any other way to safely render arbitrarily large images from InputStreams without downsampling?
I am not sure what image format you are using, but if you want to send a lossless image (or one with less loss than has already occurred), it can't be compressed better than JPEG does, so use JPEG first.
Here is an example of drawing an image on a Canvas by overriding the draw method: http://www.androidsnippets.com/drawing-an-image-as-a-map-overlay2
In the end, it comes down to hacking it.
Pre-splitting the image, compressing each piece individually, and restoring it at the end sounds like the logical approach.
Here are a few attempts:
http://kalanir.blogspot.com/2010/02/how-to-split-image-into-chunks-java.html
And
http://www.anddev.org/multimedia-problems-f28/chunk-of-a-big-big-image-t6211.html
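If API 10 (2.3.3) or newer is acceptable, a similar effect can be had without splitting files at all, by decoding and drawing the image tile by tile with BitmapRegionDecoder. A rough, untested sketch ('inputStream' and 'canvas' are the ones from the question, and the 512 px tile size is arbitrary):

BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(inputStream, false);
int tile = 512;
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;   // optional: halves the memory per tile
for (int y = 0; y < decoder.getHeight(); y += tile) {
    for (int x = 0; x < decoder.getWidth(); x += tile) {
        Rect r = new Rect(x, y, Math.min(x + tile, decoder.getWidth()), Math.min(y + tile, decoder.getHeight()));
        Bitmap piece = decoder.decodeRegion(r, opts);
        canvas.drawBitmap(piece, x, y, null);     // draw the tile at its original position
        piece.recycle();                          // free each tile before decoding the next
    }
}
decoder.recycle();

Only one tile is ever held in memory at a time, which is the whole point of the splitting approach.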
The answer appears to be: no, there isn't.
I'm working on an image processing application for Android that recognizes music notation from pictures taken of music sheets.
I tried to load the entire image into a Bitmap using the BitmapFactory.decodeFile(imgPath) method, but because my phone doesn't have enough memory I get a "VM heap size" error. To work around this, I'd like to chop the full image into smaller pieces, but I'm not sure how to do that.
I also saw that it was possible to reduce the memory size of the Bitmap by using the inSampleSize property of the BitmapFactory.Option class, but if I do that I won't get the high resolution image I need for the music notation recognition process.
Is there any way to handle this without going to the NDK?
Android 2.3.3 has a new API called android.graphics.BitmapRegionDecoder that lets you do exactly what you want.
You would for instance do the following:
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(myStream, false);
Bitmap region = decoder.decodeRegion(new Rect(10, 10, 50, 50), null);
Easy :)
If it's from a camera the image will likely be jpeg format. You could use an external jpeg library - either in java or via the NDK, whatever you can find - to give you better control and load it a piece at a time. If you need it as an android.graphics.Bitmap then I suspect you will then need to re-encode the subimage as PNG or JPEG and pass it to BitmapFactory.decodeByteArray(). (If memory is a concern then do be sure to forget your references to the pieces of the bitmap promptly so that the garbage collector can run effectively.)
The same technique will also work if the input graphic is PNG format, or just about anything else provided you can find suitable decode code for it.
I think that by loading the image piecewise you are setting yourself an algorithmic challenge: deciding which parts of it you really need in full detail. I notice that BitmapFactory.Options includes an option to subsample, which might be useful if you want to analyse an overview of the image to decide which regions to load in full detail.
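For the overview idea, a minimal sketch (the sample size of 8 is arbitrary, and 'imgPath' is the path from the question):

BitmapFactory.Options bounds = new BitmapFactory.Options();
bounds.inJustDecodeBounds = true;                 // reads only the dimensions, allocates no pixels
BitmapFactory.decodeFile(imgPath, bounds);        // bounds.outWidth / bounds.outHeight now hold the full size

BitmapFactory.Options overviewOpts = new BitmapFactory.Options();
overviewOpts.inSampleSize = 8;                    // 1/8 in each dimension, i.e. 1/64 of the pixels
Bitmap overview = BitmapFactory.decodeFile(imgPath, overviewOpts);
// analyse 'overview' to find the interesting regions, then decode each such region
// at full resolution (e.g. with BitmapRegionDecoder as in the answer above).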
If you're dealing with JPEG images, see my answer to this question as well as this example.
I don't know how possible it is to get libijg on Android, but if it is, then it's worth a shot.
In my Android project I need access to each individual pixel of a JPEG image created by the built-in photo application. I tried to convert the JPEG into a Bitmap instance, but an OutOfMemoryError was thrown. After searching for information about this problem I found the following solution: resize the image! But image quality is important in my project, and I can't resize it. Is there any way to get per-pixel access?
If your image is too big and quality is important, I suppose the best way is to use or create your own class to cut the image into zones (e.g. 50x50 px). There are several JPEG info classes on the internet that can help you understand how JPEG files work.
Have you tried BufferedImage? (It's not in the SDK, but it may be usable.)
The nature of JPEG makes it very hard to get the value of a single pixel. The main reason is that the data is not byte-aligned; another is that everything is encoded in blocks that can be 8x16, 16x8 or 8x8 in size. You also need to handle subsampling of the chroma values.
If the image contains restart markers, maybe you can skip into the image so you don't have to decode the whole image before getting the pixel value.
In short I am unable to access all the pixels of a bitmap image.
I have used an intent to fire the native camera app and returned a Bitmap image to my application's activity. The data is definitely a Bitmap object: I am able to display it, get its height/width, etc., and access some pixels using getPixel(). However, when I use the values of getHeight() and getWidth() I get an array-out-of-bounds error. By trial and error I have found I can only access a reduced number of the image's pixels; for example, with one image that reported a height and width of 420 and 380, I could still access (200, 100). I then do some image processing and use setPixel() on the original image. When I display the image it shows the processed pixels in, say, that 200x100 region and the rest unchanged, so the pixels are obviously there and accessible by Android, but not by me. I have spoken to other people who have also had this problem with images.
Does anyone know anything more about this, reasons? or a work around?
Many thanks in advance.
It seems that there's no way around this. Does anyone think it would be better/possible to access the image directly in memory, maybe using the NDK?
You won't be able to access the pixel at (getWidth(), getHeight()) in any image because, like everything else, pixels are 0-indexed. The valid range is (0 to getWidth()-1, 0 to getHeight()-1), so the bottom-right pixel is obtained by b.getPixel(b.getWidth()-1, b.getHeight()-1).
Got an answer from Albert Pucciani on the Android forums. I now create an int buffer and copy the pixels to it, then use get() and put() to extract the pixels. It's also much quicker to use get() and put() instead of the get/setPixel() from the Bitmap class. Need to test now whether this does return all the pixels to the buffer for all images.
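Roughly what that looks like, assuming copyPixelsToBuffer() is the copy being described (a sketch; 'bmp' is a placeholder name for the returned bitmap, the per-pixel operation is a placeholder, and copying back requires a mutable ARGB_8888 bitmap):

// Uses java.nio.IntBuffer.
IntBuffer buffer = IntBuffer.allocate(bmp.getWidth() * bmp.getHeight());
bmp.copyPixelsToBuffer(buffer);                   // one bulk copy instead of per-pixel getPixel()
for (int i = 0; i < buffer.capacity(); i++) {
    int c = buffer.get(i);
    buffer.put(i, c);                             // placeholder: put the processed value back here
}
buffer.rewind();                                  // reset the position before copying back
bmp.copyPixelsFromBuffer(buffer);                 // requires a mutable bitmap

Note that the ints in the buffer use the bitmap's in-memory byte layout, which is not the same ARGB packing that getPixel() returns.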
After more testing I have discovered this is simply a memory issue, as the amount of memory allocated to each process includes all bitmaps.