I am developing an application that includes image processing (grayscale, black-and-white filter, object detection, color adjustment, level adjustment).
As you know, new mobile phones take high-quality images with large sizes. Due to memory limitations, image processing is difficult and an OutOfMemoryError occurs frequently. So I've moved image processing completely from the Java layer to JNI as follows:
A Mat is loaded in JNI from the source file path.
Processing...
The result Mat is stored on the SD card.
The result image is loaded into an inSampleSize-reduced bitmap as a preview.
With this method, the OutOfMemoryError no longer occurs during image processing.
But sometimes, when the image has very large dimensions, the activity closes automatically during image processing without any exception, and the cause cannot be identified when debugging.
Why is this happening, and how can I fix it?
Excuse me for my english.
There are two things you could do:
First, check whether the problem happens while doing the image processing. If so, try to offload it from the main thread and do it in a separate thread (maybe an AsyncTask to start with). For more info on this, refer to this.
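The offloading suggestion can be sketched roughly like this; `processImage()` and `showPreview()` are hypothetical placeholders for your own processing and UI code:

```java
import android.graphics.Bitmap;
import android.os.AsyncTask;

// Sketch: move the heavy work off the UI thread with an AsyncTask.
class ProcessImageTask extends AsyncTask<String, Void, Bitmap> {
    @Override
    protected Bitmap doInBackground(String... paths) {
        // Background thread: safe for long-running decoding/processing.
        return processImage(paths[0]);
    }

    @Override
    protected void onPostExecute(Bitmap preview) {
        // Back on the UI thread: safe to update views.
        showPreview(preview);
    }
}

// Kick it off from the UI thread:
// new ProcessImageTask().execute("/sdcard/input.jpg");
```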
Second, an out-of-memory error happens easily when you set high-resolution images (bitmaps) on an ImageView. If you are doing this, you should lower the resolution before setting the image on the ImageView. For more info on this, please refer to this link and also this Stack Overflow post.
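The standard pattern for lowering the resolution is a two-pass decode: read only the bounds, compute a power-of-two `inSampleSize` from the target dimensions, then decode for real. The calculation itself is plain arithmetic; the Android decode calls are shown as comments (target sizes below are illustrative):

```java
public class SampleSizeHelper {
    // Smallest power-of-two sample size such that the decoded image is
    // still at least (reqWidth x reqHeight) -- mirrors the pattern from
    // the "Loading Large Bitmaps Efficiently" developer guide.
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            final int halfHeight = height / 2;
            final int halfWidth = width / 2;
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    // On Android the value plugs into BitmapFactory like this (sketch):
    //   BitmapFactory.Options opts = new BitmapFactory.Options();
    //   opts.inJustDecodeBounds = true;            // read dimensions only
    //   BitmapFactory.decodeFile(path, opts);
    //   opts.inSampleSize = calculateInSampleSize(
    //           opts.outWidth, opts.outHeight, viewWidth, viewHeight);
    //   opts.inJustDecodeBounds = false;
    //   Bitmap scaled = BitmapFactory.decodeFile(path, opts);

    public static void main(String[] args) {
        System.out.println(calculateInSampleSize(3264, 2448, 1024, 768));
    }
}
```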
Related
I am loading images from a web server using AQuery
and getting an out-of-memory exception (java.lang.OutOfMemoryError: thread creation failed). I did everything the docs suggest for OOM situations, but I am not able to escape it. Can you please suggest why I am getting an OOM after loading 40 images?
The best way to avoid an OOM, especially when dealing with images, is as follows:
If possible, and if your API supports it, fetch images at a resolution just large enough for viewing on the mobile device.
Using Volley or Picasso on its own does not solve the OOM issue. To handle it, use the BitmapFactory provided by the framework to resize the downloaded image to a size just large enough for the view in which it is displayed. You can do something like the following:
ImageView imgView = findViewById(R.id.img);
// Note: getWidth()/getHeight() return 0 before layout, so run this only
// after the view has been measured.
int width = imgView.getWidth();
int height = imgView.getHeight();
Bitmap fullBitmap = yourBitmapOfFullResolution;
// createScaledBitmap takes (src, dstWidth, dstHeight, filter)
Bitmap scaledBitmap = Bitmap.createScaledBitmap(fullBitmap, width, height, true);
imgView.setImageBitmap(scaledBitmap);
Try the Volley library for this; it's much faster and more efficient.
I also think your problem may be the heap size of your device. If you are running and testing your app on an emulator, you can easily increase the heap size by editing the device configuration.
Thrown when a request for memory is made that cannot be satisfied using the available platform resources. Such a request may be made either by the running application or by an internal function of the VM.
Add this attribute to the application tag of the manifest file:
android:largeHeap="true"
I'm trying to copy part of a source image. I've decoded the resource into a bitmap with the inScaled option turned off so I can crop at the real image size, but an OutOfMemoryError is thrown.
I've read my device's (S3) max heap with
Log.i("CropParams memory", String.valueOf(Runtime.getRuntime().maxMemory()));
and it shows 64 MB.
I would really like to crop at the real image size, and to know what exactly causes the OutOfMemoryError so I know how to manage this kind of situation. This particular image is 2448x3264 and is 3.41 MB.
Why is this particular image causing this error?
Thanks
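For what it's worth, the 3.41 MB figure is the compressed size on disk; what matters for the heap is the decoded size at 4 bytes per pixel (ARGB_8888, Android's default config). A quick plain-Java calculation:

```java
public class BitmapMath {
    // Bytes needed to hold a width x height bitmap at 4 bytes per pixel.
    public static long argb8888Bytes(int width, int height) {
        return (long) width * height * 4L;
    }

    public static void main(String[] args) {
        long bytes = argb8888Bytes(2448, 3264);
        // 2448 * 3264 * 4 = 31,961,088 bytes, roughly 32 MB -- about half
        // of a 64 MB heap for one decode, before any copy made by the crop.
        System.out.println(bytes + " bytes = " + (bytes / (1024 * 1024)) + " MiB");
    }
}
```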
See my answer to another question on how to dump the heap when you get an OutOfMemoryError.
With the HPROF file you should be able to analyze the memory usage as described here.
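A minimal sketch of capturing the heap at the failure point, assuming the crop is the allocation that fails and that /sdcard is writable; `android.os.Debug.dumpHprofData` writes an HPROF file, which needs to be converted with the SDK's `hprof-conv` tool before a desktop analyzer can open it:

```java
import android.graphics.Bitmap;
import android.os.Debug;
import java.io.IOException;

try {
    // The allocation suspected of failing -- here, the crop copy.
    Bitmap cropped = Bitmap.createBitmap(source, x, y, width, height);
} catch (OutOfMemoryError oom) {
    try {
        Debug.dumpHprofData("/sdcard/oom-dump.hprof"); // snapshot the heap
    } catch (IOException ignored) {
        // nothing more we can do at this point
    }
    throw oom; // rethrow so the failure is still visible
}
```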
This looks like a job for the region decoder:
http://developer.android.com/reference/android/graphics/BitmapRegionDecoder.html
Just instantiate it with one of the newInstance APIs, then call decodeRegion for the region you are interested in.
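A minimal sketch, assuming the image sits at a known file path and the crop rectangle on its own fits in memory:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

Bitmap cropRegion(String path, Rect cropRect) throws IOException {
    // Decodes only cropRect -- the full 2448x3264 image is never
    // materialized on the heap.
    BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(path, false);
    try {
        return decoder.decodeRegion(cropRect, new BitmapFactory.Options());
    } finally {
        decoder.recycle(); // free the decoder's native resources
    }
}
```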
Background
We have just switched to the UIL library, which seems fantastic. Unfortunately, we have to support CMYK images (against our will) and have attempted to modify an existing ImageDecoder, BaseImageDecoder.
The code for this can be found here: http://pastebin.com/NqbSr0w3
We had an existing AsyncTask (http://pastebin.com/5aq6QrRd) that used an ImageMagick wrapper described in this SO post (Convert Image byte[] from CMYK to RGB?). This worked fine in our previous setup.
The Problem
The current decoder fails to load the cached image from the file system, and this results in a decoding error. We have looked through the source code and believe we are using the right functions. We also thought that adding our extra level of decoding at this point in the process was ideal, as the image may already have been resized and stored on the file system.
File cachedImageFile = ImageLoader.getInstance().getDiscCache().get(decodingInfo.getImageUri());
if (!cachedImageFile.exists()) {
Log.v("App", "FILE DOES NOT EXIST");
return null;
}
The above lines always report that the file does not exist.
The Question
Are we wrong to process our CMYK images at this point, and if not, why can't we get the image from the file-system cache?
I'm noticing a crash (in an external native library that does some image processing) when I pass it the pixel data returned by bitmap.getPixels().
If I package the image in the app, in the drawables folder and load the Bitmap with
BitmapFactory.decodeResource()
then grab the pixel data with
bitmap.getPixels()
there's no crash, and everything works as expected. However, if I load the same image from the file system with
BitmapFactory.decodeFile()
then grab the pixels with
bitmap.getPixels()
and hand that off, the native lib crashes.
Is there a difference between the way these two calls process the image into a Bitmap?
Reading the Android sources, there is one interesting difference: the decodeFile method may call a different native bitmap decoder if the passed file is an asset, while decodeResource will never do this.
if (is instanceof AssetManager.AssetInputStream) {
    bm = nativeDecodeAsset(((AssetManager.AssetInputStream) is).getAssetInt(),
            outPadding, opts);
}
However, the crash is most likely a bug in your native code. Corrupting the stack frame with bad pointers and/or buffer overruns typically results in weird crashes like this. Check all the native code that runs before the crash and see if you can spot any memory issues of that kind.
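One way to narrow it down from the Java side is to pin the decode down so both paths hand the native library an identical buffer layout. Note that decodeResource also applies density scaling by default, which decodeFile never does, so `inScaled` is disabled here too; `nativeProcess` and `path` are placeholders:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

int[] pixelsForNative(String path) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.ARGB_8888; // one known pixel layout
    opts.inScaled = false;  // decodeResource density-scales; decodeFile doesn't
    Bitmap bmp = BitmapFactory.decodeFile(path, opts);
    if (bmp.getConfig() != Bitmap.Config.ARGB_8888) {
        bmp = bmp.copy(Bitmap.Config.ARGB_8888, false); // force the config
    }
    // Buffer sized exactly to the bitmap, so the stride matches the width.
    int[] pixels = new int[bmp.getWidth() * bmp.getHeight()];
    bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0,
            bmp.getWidth(), bmp.getHeight());
    return pixels; // hand this to the native library
}
```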
This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16 MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image, at 6 megapixels, takes up 24 MB of heap. (Cue memory overflow.)
Now I am very much aware of the ability to decode with options to effectively reduce the size of the image. I even have a method that will scale it down to a desired size.
But what about the scenario where I want to use the camera as a quality camera?
I have no idea how to get this image into the database. As soon as I decode it, it errors.
Note: I need(?) to convert it to a Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on android
I can only think of a couple of things that might help, and none of them are optimal:
Do your rotations server side
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk using the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this would be time-consuming, since I/O operations against SD cards are not exactly fast.
When you decode, drop the alpha channel. This may not solve your issue, though, and if you are using a matrix to rotate the image, you would need a target/source bitmap anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode the raw camera data to a bitmap with no alpha channel (2 bytes/pixel)
Bitmap bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
There may be a better way to do this, but since your device is so limited in heap, I can't think of one.
It would be nice if there were an optional file-based matrix method (which in general is what I am suggesting as option 2) or some kind of "paging" system for Android, but that's the best I can come up with.
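If the rotation only needs to be recorded rather than baked into the pixels, a further option under the same memory constraints is to write the raw JPEG bytes straight to disk and set the EXIF orientation tag. This assumes whatever later displays the image honors EXIF orientation; the path and angle are illustrative:

```java
import android.media.ExifInterface;
import java.io.FileOutputStream;
import java.io.IOException;

void saveWithRotation(byte[] jpegData, String path) throws IOException {
    // Write the raw JPEG untouched -- no decode, so no 24 MB bitmap on the heap.
    FileOutputStream out = new FileOutputStream(path);
    out.write(jpegData);
    out.close();

    // Record the rotation as metadata instead of rotating pixels in memory.
    ExifInterface exif = new ExifInterface(path);
    exif.setAttribute(ExifInterface.TAG_ORIENTATION,
            String.valueOf(ExifInterface.ORIENTATION_ROTATE_90));
    exif.saveAttributes();
}
```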
First save it to the file system, then do your operations on the file from the file system...