OutOfMemoryException far from memory limit - Android

In my application I use quite a lot of assets to render. This caused my application to crash with an exception indicating that there is no more memory left (when allocating a byte array). Using meminfo I've seen that my process uses about 40 MB of memory, which according to my calculations is correct (so no hidden excessive memory allocation in my code).
The total memory usage on my system is 300 MB. My tablet, however, has 1 GB of memory, and I wonder why it throws an exception at a usage of 300 MB. Is there some per-process limit that I need to change? Or are there other things I'm missing about Android's memory management?

Add this to the application tag in AndroidManifest.xml:
android:largeHeap="true"
This makes things work, but it lets your app consume more memory and hence causes more GC calls.
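For reference, you can check what heap budget the attribute actually buys you on a given device by querying ActivityManager (a small sketch; the method name logHeapBudget is a placeholder, and getLargeMemoryClass() needs API 11+):

import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

// Call from any Context (e.g. inside an Activity) to log the heap limits.
void logHeapBudget(Context context) {
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    int normalMb = am.getMemoryClass();        // normal per-app heap limit, in MB
    int largeMb = am.getLargeMemoryClass();    // limit when android:largeHeap="true" is set
    long vmMaxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024); // what the VM actually got
    Log.d("HeapInfo", "memoryClass=" + normalMb + "MB largeMemoryClass=" + largeMb
            + "MB Runtime.maxMemory=" + vmMaxMb + "MB");
}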

Related

Is it possible to reallocate the heap memory in Android?

I am developing an application in which I am getting an OutOfMemory exception; this happens due to insufficient memory. Is it possible to reallocate (grow) the memory once the maximum allocated memory has been reached?
It is not possible to allocate more memory than the budget the system gives you.
You can try putting android:largeHeap="true" in your manifest, but it only works on some devices and it generally isn't the best solution.
Instead you should find out where you are using all that memory and try to optimize to cut it down.
On modern devices there should be plenty of memory, enough for most operations with good design. The system places a cap on memory usage to prevent poorly designed apps from depleting the memory pool.
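As a starting point for that investigation, you can capture a heap dump from code and open it in MAT to see what is actually holding the memory (a rough sketch; the file path and the point at which you trigger it are up to you, and the dump must be converted with hprof-conv before MAT can read it):

import android.os.Debug;
import android.os.Environment;
import android.util.Log;
import java.io.IOException;

// Call this at a point where memory usage looks suspicious.
void dumpHeapForMat() {
    String path = Environment.getExternalStorageDirectory() + "/myapp-heap.hprof";
    try {
        Debug.dumpHprofData(path);
    } catch (IOException e) {
        Log.w("HeapDump", "Could not write heap dump", e);
    }
}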

Android Running apps memory usage

What is the difference between the heap usage (Allocated) we can see in the Eclipse Memory Analysis Tool (in the DDMS view) and the memory usage size for the same app shown on the Android device under:
Settings->Apps->Running
Even though I aggressively tried to preserve memory by making objects null as soon as they weren't needed, the latter number (memory usage size on Running apps screen) only kept increasing and my app finally crashed due to OutOfMemoryError. However, the former showed me that I was well within a reasonable size. I was also calling System.gc() a lot. Is there a difference between the two? Why the discrepancy? Any ideas on how I can solve this problem?
The biggest difference between the two that I know of is the scope of garbage collection.
Normal garbage collection, including System.gc(), collects a bit of garbage, then stops. It is not a complete sweep of the heap to get rid of everything. That is to try to minimize the CPU impact of garbage collection.
The heap dump prepared for MAT, though, is effectively preceded by a complete GC.
Your symptoms suggest that you are allocating memory faster than GC can reclaim it. The primary solution for this is to try to allocate less memory, or allocate it less frequently. For example, where possible, reuse objects, bitmap buffers, and the like, instead of trying to let GC clean the old stuff and allocating new stuff as you go.
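One concrete way to reuse a bitmap buffer instead of allocating a new one each time is BitmapFactory's inBitmap option (a sketch, assuming API 11+; on Honeycomb/ICS the reused bitmap must be mutable and the same size as the one being decoded):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Decode into an existing bitmap's pixel buffer instead of allocating a new one.
Bitmap decodeReusing(String path, Bitmap reusable) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inMutable = true;     // the decoded bitmap must be mutable so it can be reused later
    opts.inBitmap = reusable;  // may be null on the first call
    return BitmapFactory.decodeFile(path, opts);
}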
It sounds like you have a memory leak somewhere in your application if the memory is never released. This means that somewhere you are maintaining a strong reference to a large object which is being recreated (like an Activity or Bitmap) which is why calling System.gc() is making no difference.
I suggest watching the Google I/O 2011 talk on memory management in Android. It shows how to use the Eclipse Memory Analyzer tool, which is incredibly useful for debugging this sort of error.
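A typical example of the kind of strong reference that causes this (the names are made up; the fix is to hold the application context, or no context at all, in long-lived objects):

import android.content.Context;

public class ImageCacheHolder {
    // BAD: a static field holding an Activity keeps the whole Activity,
    // its view hierarchy and its bitmaps alive across rotations:
    // private static Context sContext; // assigned an Activity somewhere

    // BETTER: keep only the application context, which lives as long as the process.
    private static Context sAppContext;

    public static void init(Context context) {
        sAppContext = context.getApplicationContext();
    }
}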

Why is Android 4.0 / Ice Cream Sandwich allocating so much heap memory?

I noticed that on my Galaxy Nexus that android.content.res.Resources is allocating about 11MB. I discovered this as I was in the process of profiling things using DDMS and the "Dump HPROF file" option. So, I spent two hours trying to see if the allocation was due to something in my code or supporting libraries. I removed all my data, a ton of classes, all my libraries, and saw no change. After placing a breakpoint in my code at the beginning of the onCreate() method of the activity, it showed that the 11MB allocation is already present.
After being thoroughly confused, I decided to connect my rooted Nook Color running CM7 to see what it was reporting for initial memory usage for the exact same application. The worst case memory "Problem Suspect" reported by the MAT weighs in at a mere 896KB.
Is ICS that top-heavy? Am I missing something here? As far as I can tell, my application is functioning correctly, but having the heap usage indicate 97% full has me worried about potential failures.
If it helps, MAT was indicating that the primary objects consuming all the memory were Bitmaps, BitmapDrawables, and NinePatchDrawables. I don't understand where these allocations are coming from.
Pre-Honeycomb (<3.0), Bitmaps were allocated in native heap and did not appear in Dalvik heap dumps as shown by Eclipse MAT, etc. This native allocation still contributed towards maximum Dalvik heap limits for an application, and still caused garbage collection to run at approximately the correct time when approaching a low memory situation. This usage can be measured with Debug.getNativeHeapAllocatedSize().
Since Android 3.0 (including ICS), the pixel data for Bitmaps is allocated in normal byte arrays in the Dalvik heap. The practical effects of this are better/simplified garbage collection behaviour for Bitmaps (since they can be treated in a more orthodox way) and the ability to track Bitmap allocations in Dalvik heap dumps.
I do not think the actual memory usage for a particular application is significantly different between pre-Honeycomb and more recent releases; this is just a matter of an alternative accounting practice.
Memory Analysis for Android
BitMaps in Android
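To see the accounting difference in practice, you can log the Dalvik heap usage and the native allocation side by side (a small sketch; summing the two gives a rough, version-independent picture of what your bitmaps cost):

import android.os.Debug;
import android.util.Log;

void logHeapUsage() {
    Runtime rt = Runtime.getRuntime();
    long dalvikUsed = rt.totalMemory() - rt.freeMemory(); // objects in the Dalvik heap
    long nativeUsed = Debug.getNativeHeapAllocatedSize(); // bitmap pixel data pre-3.0 lives here
    Log.d("HeapUsage", "dalvik=" + (dalvikUsed / 1024) + "KB native=" + (nativeUsed / 1024) + "KB");
}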

What should the difference between "NativeHeapAllocatedSize" and "Runtime totalMemory" be in Android to prevent an OutOfMemory exception?

Hello, I am doing some runtime calculations to get the native heap memory and the allocated memory at runtime. Can anyone suggest
what the difference between "Debug.getNativeHeapAllocatedSize()" and "Runtime.getRuntime().totalMemory()" should be
so I can prevent the app from throwing an OutOfMemory exception?
Thanks
Runtime.getRuntime().totalMemory()
Returns the total amount of memory which is available to the running program.
getNativeHeapAllocatedSize()
For devices below Honeycomb, most of the huge allocations are deferred to the native heap (e.g. Bitmaps). Hence this API is useful for finding out how much of the native heap is allocated.
OOM errors occur when there are no objects that can be freed by the DVM. Typically you have about 16 MB in the heap to play with (on a standard phone). Check your logs to see GC statements with info about how much memory is allocated.
I don't think there is a fixed ratio that causes an OOM error. For example, when you load a very large bitmap, the native memory used is huge.
Slide 25
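One way to combine the two numbers into a rough pre-allocation check (a sketch; the safety margin is an arbitrary assumption, not a documented threshold, and it leans on the fact that pre-Honeycomb native bitmap data counts against the same per-process budget):

import android.os.Debug;

// Rough check before a large allocation; bytesNeeded is the estimated size of the new object.
boolean probablyFits(long bytesNeeded) {
    Runtime rt = Runtime.getRuntime();
    long dalvikUsed = rt.totalMemory() - rt.freeMemory();
    long nativeUsed = Debug.getNativeHeapAllocatedSize();
    long limit = rt.maxMemory();           // per-process heap budget
    long margin = 2 * 1024 * 1024;         // arbitrary 2 MB safety pad
    return dalvikUsed + nativeUsed + bytesNeeded + margin < limit;
}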

BitmapFactory OOM driving me nuts

I've been doing a lot of searching and I know a lot of other people are experiencing the same OOM memory problems with BitmapFactory. My app only shows a total memory available of 4MB using Runtime.getRuntime().totalMemory(). If the limit is 16MB, then why doesn't the total memory grow to make room for the bitmap? Instead it throws an error.
I also don't understand why, if I have 1.6MB of free memory according to Runtime.getRuntime().freeMemory(), I get an error saying "VM won't let us allocate 614400 bytes". Seems to me I have plenty of available memory.
My app is complete except for this problem, which goes away when I reboot the phone so that my app is the only thing running. I'm using an HTC Hero for device testing (Android 1.5).
At this point I'm thinking the only way around this is to somehow avoid using BitmapFactory.
Anyone have any ideas on this, or an explanation as to why the VM won't allocate 614KB when there's 1.6MB of free memory?
[Note that (as CommonsWare points out below) the whole approach in this answer only applies up to and including 2.3.x (Gingerbread). As of Honeycomb Bitmap data is allocated in the VM heap.]
Bitmap data is not allocated in the VM heap. There is a reference to it in the VM heap (which is small), but the actual data is allocated in the Native heap by the underlying Skia graphics library.
Unfortunately, while the definition of BitmapFactory.decode...() says that it returns null if the image data could not be decoded, the Skia implementation (or rather the JNI glue between the Java code and Skia) logs the message you’re seeing ("VM won't let us allocate xxxx bytes") and then throws an OutOfMemory exception with the misleading message "bitmap size exceeds VM budget".
The issue is not in the VM heap but is rather in the Native heap. The Native heap is shared between running applications, so the amount of free space depends on what other applications are running and their bitmap usage. But, given that BitmapFactory will not return null (it throws instead), you need a way to figure out if the call is going to succeed before you make it.
There are routines to monitor the size of the Native heap (see the Debug class getNative methods). However, I have found that getNativeHeapFreeSize() and getNativeHeapSize() are not reliable. So in one of my applications that dynamically creates a large number of bitmaps I do the following.
The Native heap size varies by platform. So at startup, we check the maximum allowed VM heap size to determine the maximum allowed Native heap size. [The magic numbers were determined by testing on 2.1 and 2.2, and may be different on other API levels.]
// All values are in KB; the magic numbers were determined by testing on 2.1 and 2.2
long mMaxVmHeap = Runtime.getRuntime().maxMemory() / 1024;
long mMaxNativeHeap = 16 * 1024;
if (mMaxVmHeap == 16 * 1024)
    mMaxNativeHeap = 16 * 1024;
else if (mMaxVmHeap == 24 * 1024)
    mMaxNativeHeap = 24 * 1024;
else
    Log.w(TAG, "Unrecognized VM heap size = " + mMaxVmHeap);
Then each time we need to call BitmapFactory we precede the call by a check of the form.
// sizeReqd, allocNativeHeap and heapPad are in bytes; convert to KB to match mMaxNativeHeap
long sizeReqd = bitmapWidth * bitmapHeight * targetBpp / 8;
long allocNativeHeap = Debug.getNativeHeapAllocatedSize();
if ((sizeReqd + allocNativeHeap + heapPad) / 1024 >= mMaxNativeHeap)
{
    // Do not call BitmapFactory…
}
Note that the heapPad is a magic number to allow for the fact that a) the reporting of Native heap size is "soft" and b) we want to leave some space in the Native heap for other applications. We are currently running with a 3*1024*1024 (i.e. 3 MB) pad.
1.6 MB of memory seems like a lot, but it could be the case that the memory is so badly fragmented that it can't allocate such a big block of memory in one go (still, this does sound very strange).
One common cause of OOM while using image resources is when one is decompressing JPG, PNG, GIF images with really high resolutions. You need to bear in mind that all these formats are pretty well compressed and take up very little space but once you load the images to the phone, the memory they're going to use is something like width * height * 4 bytes. Also, when decompression kicks in, a few other auxiliary data structures need to be loaded for the decoding step.
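Because the decoded size is roughly width * height * 4 bytes, a common mitigation is to read the image bounds first and then subsample the decode (a sketch using BitmapFactory options; the target dimensions are whatever your layout actually needs):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

Bitmap decodeScaled(String path, int targetW, int targetH) {
    // First pass: read only the dimensions, no pixel allocation.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(path, opts);

    // Pick a power-of-two sample size so the decoded bitmap stays close to the target size.
    int sample = 1;
    while (opts.outWidth / (sample * 2) >= targetW && opts.outHeight / (sample * 2) >= targetH) {
        sample *= 2;
    }

    // Second pass: decode at the reduced resolution.
    opts.inJustDecodeBounds = false;
    opts.inSampleSize = sample;
    return BitmapFactory.decodeFile(path, opts);
}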
It seems like the issues given in Torid's answer have been resolved in the more recent versions of Android.
However, if you are using an image cache (a specialized one or even just a regular HashMap), it is pretty easy to get this error by creating a memory leak.
In my experience, if you inadvertently hold on to your Bitmap references and create a memory leak, the OP's error (one referring to BitmapFactory and native methods) is the one that will crash your app (up to ICS/API 14, and possibly later?).
To avoid this, make sure you "let go" of your Bitmaps. This means using SoftReferences in the final tier of your cache, so that Bitmaps can get garbage collected out of it. This should work, but if you are still getting crashes, you can try to explicitly mark certain Bitmaps for collection by using bitmap.recycle(); just remember to never return a bitmap for use in your app if bitmap.isRecycled().
As an aside, LinkedHashMaps are a great tool for easily implementing pretty good cache structures, especially if you combine hard and soft references like in this example (starting line 308)... but using hard references is also how you can get yourself into memory leak situations if you mess up.
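A minimal sketch of that hard/soft two-tier idea (the names are made up; a LinkedHashMap in access order provides the hard LRU tier, and evicted entries drop to SoftReferences so GC can reclaim them under pressure):

import android.graphics.Bitmap;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BitmapCache {
    private static final int HARD_CACHE_SIZE = 16;

    // Hard tier: LRU map holding strong references to the most recently used bitmaps.
    private final LinkedHashMap<String, Bitmap> hardCache =
            new LinkedHashMap<String, Bitmap>(HARD_CACHE_SIZE, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Bitmap> eldest) {
                    if (size() > HARD_CACHE_SIZE) {
                        // Demote to the soft tier instead of dropping the entry entirely.
                        softCache.put(eldest.getKey(), new SoftReference<Bitmap>(eldest.getValue()));
                        return true;
                    }
                    return false;
                }
            };

    // Soft tier: GC may clear these references whenever memory gets tight.
    private final Map<String, SoftReference<Bitmap>> softCache =
            new HashMap<String, SoftReference<Bitmap>>();

    public synchronized void put(String key, Bitmap bitmap) {
        hardCache.put(key, bitmap);
    }

    public synchronized Bitmap get(String key) {
        Bitmap bitmap = hardCache.get(key);
        if (bitmap != null) {
            return bitmap;
        }
        SoftReference<Bitmap> ref = softCache.get(key);
        if (ref != null) {
            bitmap = ref.get();
            if (bitmap == null) {
                softCache.remove(key); // the reference was collected
            }
        }
        return bitmap;
    }
}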
Although it usually doesn't make sense to catch an Error, because Errors are usually thrown only by the VM, in this particular case the Error is thrown by the JNI glue code. Thus it is very simple to handle cases where you could not load the image: just catch the OutOfMemoryError.
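A minimal illustration of that approach (a sketch; returning null here stands in for whatever fallback your app needs):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

Bitmap decodeOrNull(String path) {
    try {
        return BitmapFactory.decodeFile(path);
    } catch (OutOfMemoryError e) {
        // Thrown from the JNI glue when the native allocation fails; treat as "image unavailable".
        return null;
    }
}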
Although this is a fairly high level answer, the problem for me turned out to be using hardware acceleration on all of my views. Most of my views have custom Bitmap manipulation, which I figured to be the source of the large native heap size, but in fact when disabling hardware acceleration the native heap usage was cut down by a factor of 4.
It seems as though hardware acceleration will do all kinds of caching on your views, creating bitmaps of its own, and since all bitmaps share the native heap, the allocation size can grow pretty dramatically.
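If you suspect the same, hardware acceleration can be turned off selectively rather than app-wide (a sketch; MyCanvasView is a placeholder for the custom view doing the bitmap work, and setLayerType needs API 11+):

import android.content.Context;
import android.view.View;

public class MyCanvasView extends View {
    public MyCanvasView(Context context) {
        super(context);
        // Force software rendering for this view only, so the renderer
        // does not keep its own cached bitmap copies of the view's layers.
        setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    }
}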
