External allocation too large for this process in Android

I'm getting "external allocation too large for this process" errors in my app. Lots of these at once:
11-16 10:56:59.230: ERROR/dalvikvm-heap(2875): 1303680-byte external allocation too large for this process.
11-16 10:56:59.230: ERROR/GraphicsJNI(2875): VM won't let us allocate 1303680 bytes
11-16 10:56:59.230: ERROR/dalvikvm-heap(2875): 1536000-byte external allocation too large for this process.
11-16 10:56:59.230: ERROR/GraphicsJNI(2875): VM won't let us allocate 1536000 bytes
It appears that they are produced while the layout is being rendered, after loading large bitmaps. The errors, however, are not produced while the bitmap is being decoded.
How can I debug these errors? Any additional pointers?

adamp's comment was the answer in my particular case:
The framework will often capture views onscreen into temporary bitmaps for drawing performance. It looks like your app is pushing right up against its memory limit already and this bumps it over. Take a look at the other suggestions for limiting your app's memory usage.

If you're using threads, the debugger might be the source of the problem. When you run the app under the debugger, any threads your app creates are retained by the debugger even after they finish running. This leads to memory errors that won't occur when the app runs without the debugger.
http://code.google.com/p/android/issues/detail?id=7979

Related

OutOfMemoryException far from memory limit

In my application I use quite a lot of assets for rendering. This caused my application to crash with an exception indicating that there was no more memory left (while allocating a byte array). Using meminfo I've seen that my process uses about 40 MB of memory, which according to my calculations is correct (so there's no hidden excessive memory allocation in my code).
The total memory usage on my system is 300 MB. My tablet, however, has 1 GB of memory, and I wonder why it throws an exception at a usage of 300 MB. Is there some per-process limit that I need to change? Or are there other things I'm missing about Android's memory management?
Add this to the <application> tag in AndroidManifest.xml:
android:largeHeap="true"
This makes things work, but your app will consume more memory and therefore trigger more GC calls.
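For reference, a quick way to see the per-process limits involved is to query ActivityManager at runtime. This is only a minimal sketch (the HeapLimitLogger class name is a placeholder, and getLargeMemoryClass() needs API 11 or higher):

import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public class HeapLimitLogger {
    private static final String TAG = "HeapLimit";

    // Logs the heap limits the platform reports for this process.
    public static void logHeapLimits(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        // Normal per-app heap limit, in MB.
        Log.i(TAG, "memoryClass = " + am.getMemoryClass() + " MB");
        // Limit that applies when android:largeHeap="true" is set (API 11+).
        Log.i(TAG, "largeMemoryClass = " + am.getLargeMemoryClass() + " MB");
        // Maximum heap the VM will actually let this process grow to, in bytes.
        Log.i(TAG, "Runtime.maxMemory = " + Runtime.getRuntime().maxMemory() + " bytes");
    }
}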

Why is Android 4.0 / Ice Cream Sandwich allocating so much heap memory?

I noticed on my Galaxy Nexus that android.content.res.Resources is allocating about 11 MB. I discovered this while profiling things using DDMS and the "Dump HPROF file" option. So, I spent two hours trying to see if the allocation was due to something in my code or supporting libraries. I removed all my data, a ton of classes, all my libraries, and saw no change. After placing a breakpoint in my code at the beginning of the activity's onCreate() method, it showed that the 11 MB allocation was already present.
After being thoroughly confused, I decided to connect my rooted Nook Color running CM7 to see what it was reporting as initial memory usage for the exact same application. The worst-case memory "Problem Suspect" reported by MAT weighs in at a mere 896 KB.
Is ICS that top-heavy? Am I missing something here? As far as I can tell, my application is functioning correctly, but having the heap usage indicate 97% full has me worried about potential failures.
If it helps, MAT was indicating that the primary objects consuming all the memory were Bitmaps, BitmapDrawables, and NinePatchDrawables. I don't understand where these allocations are coming from.
Pre-Honeycomb (< 3.0), Bitmaps were allocated in the native heap and did not appear in Dalvik heap dumps as shown by Eclipse MAT, etc. This native allocation still counted towards the application's maximum Dalvik heap limit, and still caused garbage collection to run at approximately the correct time when approaching a low-memory situation. This usage can be measured with Debug.getNativeHeapAllocatedSize().
Since Android 3.0 (including ICS), the framework allocates the pixel data for Bitmaps as ordinary byte arrays in the Dalvik heap. The practical effects of this are better/simplified garbage-collection behaviour for Bitmaps (since they can be treated in a more orthodox way) and the ability to track Bitmap allocations in Dalvik heap dumps.
I do not think the actual memory usage of a particular application is significantly different between pre-Honeycomb and more recent releases; this is largely a matter of a different accounting practice.
Memory Analysis for Android
BitMaps in Android
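To see this accounting difference on a device, you can log the Dalvik heap figures next to the native heap figure: on pre-3.0 devices the bitmap pixel data shows up in the native number, and on 3.0+ it shows up in the Dalvik number. A minimal sketch (the HeapStats name is just a placeholder):

import android.os.Debug;
import android.util.Log;

public class HeapStats {
    private static final String TAG = "HeapStats";

    public static void logHeaps() {
        // Dalvik heap actually occupied by objects.
        long dalvikUsed = Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory();
        // Native heap in use (where bitmap pixels live before Android 3.0).
        long nativeUsed = Debug.getNativeHeapAllocatedSize();
        Log.i(TAG, "Dalvik heap used: " + (dalvikUsed / 1024) + " KB");
        Log.i(TAG, "Native heap used: " + (nativeUsed / 1024) + " KB");
    }
}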

Measure (Android) heap fragmentation?

We have an app with lots of bitmaps in memory. It keeps failing with
java.lang.OutOfMemoryError: bitmap size exceeds VM budget
errors. It's possible that we are genuinely using too much memory; it's possible that we are leaking memory; it's also possible that we aren't doing anything wrong, and heap fragmentation is what's killing us. (Since Android's garbage collector doesn't relocate live blocks, we could have megabytes free and be unable to allocate 50K.)
Is there any way to rule out fragmentation? I've looked for something like maxAvail/memAvail, but haven't spotted anything apposite.
I would look into examining the heap via MAT. The Eclipse Memory Analyzer will help you determine which of your proposed issues you actually have.
There was a talk at Google I/O 2011 that covered some basics on the topic of memory management and debugging. You can watch it online here: http://www.youtube.com/watch?v=_CruQY55HOk&feature=relmfu
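If it helps, you can also trigger the HPROF dump from code rather than from DDMS. This is only a sketch (the HeapDumper name and the output path are placeholders, and on older devices writing to external storage needs the WRITE_EXTERNAL_STORAGE permission); the raw file still has to be converted with hprof-conv before MAT will open it:

import android.os.Debug;
import android.os.Environment;
import android.util.Log;

import java.io.File;
import java.io.IOException;

public class HeapDumper {
    private static final String TAG = "HeapDumper";

    // Writes an HPROF snapshot that can be inspected in MAT after conversion.
    public static void dumpHeap() {
        File out = new File(Environment.getExternalStorageDirectory(), "app-heap.hprof");
        try {
            Debug.dumpHprofData(out.getAbsolutePath());
            Log.i(TAG, "Heap dumped to " + out);
        } catch (IOException e) {
            Log.e(TAG, "Heap dump failed", e);
        }
    }
}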

What should the difference between "NativeHeapAllocatedSize" and "Runtime totalMemory" be in Android to prevent an OutOfMemory exception?

Hello, I'm doing some runtime calculations to get the native heap memory and the allocated memory at runtime. Can anyone suggest what the difference between "Debug.getNativeHeapAllocatedSize()" and "Runtime.getRuntime().totalMemory()" should be so I can prevent my app from throwing an OutOfMemory exception?
Thanks
Runtime.getRuntime().totalMemory()
Returns the total amount of memory available to the running program.
getNativeHeapAllocatedSize()
On devices below Honeycomb, most of the huge allocations (e.g. Bitmaps) are deferred to the native heap. Hence this API is useful for finding out how much of the native heap is allocated.
OOM errors occur when there are no objects the DVM can free. Typically you have about 16 MB of heap to play with (on a standard phone). Check your logs for GC statements, which include info about how much memory is allocated.
I don't think there is a fixed ratio that causes an OOM error. For example, when you load a very large bitmap, the native memory used is huge.
Slide 25
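To watch the two numbers from the question side by side at runtime, something like the following sketch can be logged periodically or before large allocations (the MemoryWatcher name is just a placeholder):

import android.os.Debug;
import android.util.Log;

public class MemoryWatcher {
    private static final String TAG = "MemoryWatcher";

    public static void logMemory() {
        Runtime rt = Runtime.getRuntime();
        long dalvikTotal = rt.totalMemory();   // heap currently claimed by the VM
        long dalvikFree  = rt.freeMemory();    // unused part of that heap
        long dalvikMax   = rt.maxMemory();     // hard per-process limit
        long nativeAlloc = Debug.getNativeHeapAllocatedSize(); // e.g. bitmap pixels pre-3.0

        Log.i(TAG, "Dalvik: used=" + (dalvikTotal - dalvikFree) / 1024
                + " KB, total=" + dalvikTotal / 1024
                + " KB, max=" + dalvikMax / 1024 + " KB");
        Log.i(TAG, "Native allocated: " + nativeAlloc / 1024 + " KB");
    }
}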

BitmapFactory OOM driving me nuts

I've been doing a lot of searching, and I know a lot of other people are experiencing the same OOM memory problems with BitmapFactory. My app only shows a total memory available of 4 MB using Runtime.getRuntime().totalMemory(). If the limit is 16 MB, then why doesn't the total memory grow to make room for the bitmap? Instead it throws an error.
I also don't understand why, if I have 1.6 MB of free memory according to Runtime.getRuntime().freeMemory(), I get an error saying "VM won't let us allocate 614400 bytes". Seems to me I have plenty of available memory.
My app is complete except for this problem, which goes away when I reboot the phone so that my app is the only thing running. I'm using an HTC Hero for device testing (Android 1.5).
At this point I'm thinking the only way around this is to somehow avoid using BitmapFactory.
Does anyone have any ideas on this, or an explanation as to why the VM won't allocate 614 KB when there's 1.6 MB of free memory?
[Note that (as CommonsWare points out below) the whole approach in this answer only applies up to and including 2.3.x (Gingerbread). As of Honeycomb Bitmap data is allocated in the VM heap.]
Bitmap data is not allocated in the VM heap. There is a reference to it in the VM heap (which is small), but the actual data is allocated in the Native heap by the underlying Skia graphics library.
Unfortunately, while the definition of BitmapFactory.decode...() says that it returns null if the image data could not be decoded, the Skia implementation (or rather the JNI glue between the Java code and Skia) logs the message you’re seeing ("VM won't let us allocate xxxx bytes") and then throws an OutOfMemory exception with the misleading message "bitmap size exceeds VM budget".
The issue is not in the VM heap but rather in the Native heap. The Native heap is shared between running applications, so the amount of free space depends on what other applications are running and their bitmap usage. But, given that BitmapFactory will not return null on failure, you need a way to figure out whether the call is going to succeed before you make it.
There are routines to monitor the size of the Native heap (see the Debug class getNative methods). However, I have found that getNativeHeapFreeSize() and getNativeHeapSize() are not reliable. So in one of my applications that dynamically creates a large number of bitmaps I do the following.
The Native heap size varies by platform. So at startup, we check the maximum allowed VM heap size to determine the maximum allowed Native heap size. [The magic numbers were determined by testing on 2.1 and 2.2, and may be different on other API levels.]
long mMaxVmHeap = Runtime.getRuntime().maxMemory() / 1024;  // max VM heap, in KB
long mMaxNativeHeap = 16 * 1024;                            // assumed native limit, in KB
if (mMaxVmHeap == 16 * 1024)
    mMaxNativeHeap = 16 * 1024;
else if (mMaxVmHeap == 24 * 1024)
    mMaxNativeHeap = 24 * 1024;
else
    Log.w(TAG, "Unrecognized VM heap size = " + mMaxVmHeap);
Then each time we need to call BitmapFactory we precede the call with a check of the form:
long sizeReqd = bitmapWidth * bitmapHeight * targetBpp / 8;  // bytes the decode will need
long allocNativeHeap = Debug.getNativeHeapAllocatedSize();   // bytes already allocated natively
if ((sizeReqd + allocNativeHeap + heapPad) / 1024 >= mMaxNativeHeap)  // compare in KB
{
    // Do not call BitmapFactory…
}
Note that heapPad is a magic number to allow for the fact that a) the reported Native heap size is "soft" and b) we want to leave some space in the Native heap for other applications. We are currently running with a 3*1024*1024 (i.e. 3 MB) pad.
1.6 MB of memory seems like a lot, but it could be that the memory is so badly fragmented that it can't allocate such a big block in one go (though this would still be very strange).
One common cause of OOM while using image resources is when one is decompressing JPG, PNG, GIF images with really high resolutions. You need to bear in mind that all these formats are pretty well compressed and take up very little space but once you load the images to the phone, the memory they're going to use is something like width * height * 4 bytes. Also, when decompression kicks in, a few other auxiliary data structures need to be loaded for the decoding step.
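One way to keep that width * height * 4 figure under control is to downsample at decode time with BitmapFactory.Options. A minimal sketch (the BitmapLoader name is a placeholder, and the power-of-two sample size is just the common convention):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapLoader {

    // Decodes a resource at roughly the requested size instead of full resolution.
    public static Bitmap decodeSampled(Resources res, int resId,
                                       int reqWidth, int reqHeight) {
        // First pass: read only the image dimensions, no pixel allocation.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, opts);

        // Pick a power-of-two sample size that keeps both dimensions
        // at or above the requested size.
        int sampleSize = 1;
        if (reqWidth > 0 && reqHeight > 0) {
            while (opts.outWidth / (sampleSize * 2) >= reqWidth
                    && opts.outHeight / (sampleSize * 2) >= reqHeight) {
                sampleSize *= 2;
            }
        }

        // Second pass: decode the downsampled pixels for real.
        opts.inJustDecodeBounds = false;
        opts.inSampleSize = sampleSize;
        return BitmapFactory.decodeResource(res, resId, opts);
    }
}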
It seems like the issues given in Torid's answer have been resolved in the more recent versions of Android.
However, if you are using an image cache (a specialized one or even just a regular HashMap), it is pretty easy to get this error by creating a memory leak.
In my experience, if you inadvertently hold on to your Bitmap references and create a memory leak, the OP's error (one referring to BitmapFactory and native methods) is the one that will crash your app (up to ICS, API 14, and beyond?).
To avoid this, make sure you "let go" of your Bitmaps. This means using SoftReferences in the final tier of your cache, so that Bitmaps can be garbage-collected out of it. This should work, but if you are still getting crashes, you can try to explicitly mark certain Bitmaps for collection using bitmap.recycle(); just remember never to return a bitmap for use in your app if bitmap.isRecycled().
As an aside, LinkedHashMaps are a great tool for easily implementing pretty good cache structures, especially if you combine hard and soft references like in this example (starting line 308)... but using hard references is also how you can get yourself into memory leak situations if you mess up.
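As a rough illustration of the soft tier mentioned above (the BitmapCache name is a placeholder; a real cache would usually put a bounded hard-reference tier in front of this, e.g. a LinkedHashMap in access order):

import android.graphics.Bitmap;

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class BitmapCache {

    // Soft tier: the GC may reclaim these bitmaps under memory pressure,
    // so holding them here does not pin the heap.
    private final Map<String, SoftReference<Bitmap>> cache =
            new HashMap<String, SoftReference<Bitmap>>();

    public void put(String key, Bitmap bitmap) {
        cache.put(key, new SoftReference<Bitmap>(bitmap));
    }

    public Bitmap get(String key) {
        SoftReference<Bitmap> ref = cache.get(key);
        if (ref == null) {
            return null;
        }
        Bitmap bitmap = ref.get();
        // Never hand out a bitmap that has been collected or recycled.
        if (bitmap == null || bitmap.isRecycled()) {
            cache.remove(key);
            return null;
        }
        return bitmap;
    }
}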
Although it usually doesn't make sense to catch an Error, because Errors are normally thrown only by the VM, in this particular case the Error is thrown by the JNI glue code. That makes it very simple to handle the cases where you could not load the image: just catch the OutOfMemoryError.
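In practice that looks something like the sketch below (decodeOrNull is a placeholder name; the caller has to cope with a null result):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class SafeDecode {

    // Returns null instead of crashing when the decode path throws OutOfMemoryError.
    public static Bitmap decodeOrNull(String path) {
        try {
            return BitmapFactory.decodeFile(path);
        } catch (OutOfMemoryError e) {
            // Thrown from the JNI glue when the bitmap allocation fails.
            return null;
        }
    }
}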
Although this is a fairly high-level answer, the problem for me turned out to be using hardware acceleration on all of my views. Most of my views do custom Bitmap manipulation, which I figured was the source of the large native heap size, but in fact when I disabled hardware acceleration the native heap usage was cut down by a factor of 4.
It seems as though hardware acceleration will do all kinds of caching on your views, creating bitmaps of its own, and since all bitmaps share the native heap, the allocation size can grow pretty dramatically.
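If you only want to turn it off for the views doing heavy Bitmap work, rather than for the whole application via android:hardwareAccelerated="false" in the manifest, a per-view sketch looks like this (the helper name is a placeholder; setLayerType exists from API 11):

import android.os.Build;
import android.view.View;

public class AccelerationUtil {

    // Disables hardware acceleration for a single view by forcing a software layer.
    public static void useSoftwareLayer(View view) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        }
    }
}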
