How heap memory allocation works - Android

I'm working on an app and I'm running into memory issues.
I started studying the problem and came across Eclipse's debugging tools.
I use DDMS's heap tool to see how much memory my app has allocated.
I saw it was about 90%.
So I made a simple new project: a blank, empty activity with no functions or variables. Just a brand-new project.
I ran the heap tool on it and saw these results:
Heap size: 10.629 MB
Allocated: 9.189 MB
Free: 1.440 MB
Used: 86.45 %
Objects: 44,565
Well, is that normal?
I have a completely blank activity, nothing else, and this app is using 86% of its heap?
9 MB allocated out of 10? Really? Is that normal? How does this work?
Please explain this to me, because I would like to understand how these memory allocations work.

Dalvik will initially allocate a certain heap size to your app; in your case, this is around 10 MB. As your app needs more memory, Dalvik will increase the heap size up to the maximum configured size (which differs from device to device). If your app still needs more memory after the maximum is reached, it will cause an OutOfMemoryError.
To learn more about analyzing memory allocations in Android, check out this excellent article from the Android developers blog:
http://android-developers.blogspot.in/2011/03/memory-analysis-for-android.html
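If you want to watch those numbers from inside the app while it runs, a minimal sketch is to query the Runtime class and log the values; the class name and log tag below are just placeholders:

import android.util.Log;

public final class HeapLogger {
    private static final String TAG = "HeapLogger"; // arbitrary tag

    public static void logHeap() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // hard per-app limit the heap can grow to
        long total = rt.totalMemory(); // heap size currently reserved by Dalvik
        long free = rt.freeMemory();   // unused part of the current heap
        Log.d(TAG, "max=" + max + " total=" + total
                + " allocated=" + (total - free) + " free=" + free);
    }
}

Calling logHeap() before and after a large allocation makes the "grow as needed" behaviour visible in logcat.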

Examining heap usage looks tricky at first, but it is actually quite easy. Let's find out how.
Consider a small application. Android's debugging tools let you determine and examine its heap usage.
You can check memory-analysis-for-android, which has more details on how to analyze an application effectively on Android.
Here is a short walkthrough:
There are two ways to start DDMS:
1) Using Eclipse: click Window > Open Perspective > Other... > DDMS
2) Or from the command line: run ddms (or ./ddms on Mac/Linux) in the tools/ directory
Then select your application process from Devices and click "Update Heap".
Now switch to the Heap tab in DDMS.
To see the first update, click the Cause GC button.
You will see the updated heap statistics.
We can see that our set (the Allocated column) is a little over 20 MB. If you exercise the app a little (for example, by navigating back and forth), that number keeps going up. In a small application, the amount of memory we leak is bounded. In some ways, this can be the worst kind of leak to have, because we never get an OutOfMemoryError indicating that we are leaking.
You can use Heap Dump to identify the problem. Click the Dump HPROF file button in the DDMS toolbar and save the file wherever you want. Then run hprof-conv on it.
Using MAT, a powerful memory analyzer tool:
You can install the stand-alone Eclipse Memory Analyzer (MAT) and analyze the heap dumps with it.
NOTE:
If you're running ADT (which includes a plug-in version of DDMS) and also have MAT installed in Eclipse, clicking the "dump HPROF" button will automatically do the conversion (using hprof-conv) and open the converted HPROF file in Eclipse, where MAT will open it.
Start the MAT and load the converted HPROF file. Navigate to the Histogram view which shows a list of classes sortable by the number of instances, the shallow heap (total amount of memory used by all instances), or the retained heap (total amount of memory kept alive by all instances, including other objects that they have references to).
If we sort by shallow heap, we can see that instances of byte[] are at the top.
Next, Right-click on the byte[] class and select List Objects > with incoming references. This produces a list of all byte arrays in the heap, which we can sort based on Shallow Heap usage.
Pick one of the big objects, and drill down on it. This will show you the path from the root set to the object - the chain of references that keeps this object alive. Lo and behold, there's our bitmap cache!
MAT can't tell us for sure that this is a leak, because it doesn't know whether these objects are needed or not; only the programmer can tell. However, looking at the stats, it is clear that the cache is using a large amount of memory relative to the rest of the application, so we might consider limiting the size of the cache.
Work through your application this way and you will find plenty of opportunities for memory and performance optimization.
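As a hedged illustration of "limiting the size of the cache", here is a minimal bitmap cache built on android.util.LruCache (available since API 12); the one-eighth-of-max-heap budget is only a common rule of thumb, and the class name is made up:

import android.graphics.Bitmap;
import android.util.LruCache;

public final class BoundedBitmapCache {
    // Budget the cache at one eighth of the app's max heap, measured in KB.
    private static final int CACHE_KB =
            (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);

    private final LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(CACHE_KB) {
        @Override
        protected int sizeOf(String key, Bitmap bitmap) {
            // Charge each entry its pixel-data size in KB so it counts against CACHE_KB.
            return bitmap.getRowBytes() * bitmap.getHeight() / 1024;
        }
    };

    public void put(String key, Bitmap bitmap) {
        cache.put(key, bitmap);
    }

    public Bitmap get(String key) {
        return cache.get(key); // null on a miss; decode or download the bitmap again then
    }
}

With a bound like this, the cache evicts its least-recently-used bitmaps instead of growing until the heap is exhausted.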

What you see here is the currently allocated memory, not the maximum memory that can be allocated; the maximum depends on the Android version and varies from device to device.
In this case, your app does not have any high memory requirements: all the files, system resources and objects used to run it are small, so Android initially allocates your app a common starting heap. This heap keeps growing as the app's demand increases, until the demand is met or the heap exceeds the maximum size Android defines per app; in that scenario your app will crash, stating that it ran out of memory.
To read more about memory allocation in Android, go through the developer link below:
http://developer.android.com/training/articles/memory.html
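To see this grow-then-crash behaviour for yourself, a throwaway test along these lines (a sketch only, not something to ship) keeps allocating 1 MB chunks until the limit is reached:

import android.util.Log;

import java.util.ArrayList;
import java.util.List;

public final class HeapGrowthTest {
    public static void run() {
        List<byte[]> chunks = new ArrayList<byte[]>();
        try {
            while (true) {
                chunks.add(new byte[1024 * 1024]); // 1 MB per iteration
            }
        } catch (OutOfMemoryError e) {
            int allocatedMb = chunks.size();
            chunks.clear(); // release the test allocations before doing anything else
            long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            Log.d("HeapGrowthTest", "OOM after ~" + allocatedMb
                    + " MB; reported max heap is " + maxMb + " MB");
        }
    }
}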

Related

Potential memory leak of bitmap reported by heap analysis tools but no bitmaps used by App

My App is running fine (i.e. no crashes). During testing I have been investigating memory use. I use Android Studio (AI-141.2006197) DDMS to dump an HPROF file, which I then open in the Eclipse Memory Analyzer. This tool describes a leak suspect:
One instance of "android.graphics.Bitmap" loaded by "<system class loader>" occupies
2,536,984 (40.81%) bytes. The memory is accumulated in one instance of "byte[]"
loaded by "<system class loader>".
Keywords
byte[]
android.graphics.Bitmap
I also gathered some more information from the dominator_tree view.
Over the last day I have stripped the opening activity and fragment of my App to bare bones, removing all opening and reference to bitmaps, removing menus, service, everything. All that's left is one activity containing 1 fragment. The fragment has a ListView, with each list item having a simple TextView. No bitmaps are used. I can show the code in a later edit to this question if needed.
I am testing this stripped-down App on a phone and a tablet. I install the App, start it, see the list displayed by the first fragment, then exit. Via DDMS I cause a GC, then do the heap dump and examine it in the Eclipse Memory Analyzer. For both the phone and the tablet I see the "potential" memory leak.
My test phone, which uses a CyanogenMod ROM, has a performance setting option to "Allow Purging of Assets" (see http://pocketnow.com/2012/12/10/5-nexus-4-speed-tips#toc-5). When I enable this feature my potential memory leak disappears. This makes me think there isn't a problem with my App after all, but rather some system behaviour that I don't understand.
Some questions:
Is this likely a memory leak in my App?
If my test App is not using bitmaps or drawables, why is memory for a bitmap being allocated? What can I check?
Do you have any words of wisdom on interpreting which potential memory leaks reported by the Eclipse Memory Analyzer can be ignored?
Thanks in advance. I come from an embedded real-time C background so get very nervous when I see memory leaks!
This bitmap is system-related and not a memory leak in the App. I came to this conclusion via posts Android EdgeEffect appears to allocate a 1 meg bitmap and Strange Bitmap using 1 Mb of Heap.
For anyone reading this I'd like to bring to your attention another very useful post that showed me how to view the bitmaps that are pointed to by the Eclipse Memory Analyzer tool. This can really help with debugging. See MAT (Eclipse Memory Analyzer) - how to view bitmaps from memory dump
All views generate bitmap caches of themselves. This is likely a memory leak. It seems to occur when there are frequent layout changes on the view. You could try disabling the cache to see if that resolves your issue:
view.setDrawingCacheEnabled(false);

Linux out of memory process killing my android app

This is happening on an embedded system that is using a custom build of Android 4.0.2 platform. I see one of our android activity apps growing to around 400MB (rss size when "ps" is invoked) and getting killed by Linux OOM killer.
The Android platform was configured with a max heap size of 62 MB. I am clueless as to how the Dalvik VM let the activity grow to 400 MB.
Shouldn't the app get Java out of memory exceptions when heap reaches around 60MB?
We don't see those Java exceptions in the logcat logs or in anr traces.
We implemented a sample activity that allocates byte arrays in sequence and sets each byte to a dummy value. We do see OutOfMemory exceptions when the activity has allocated around 60 MB.
Are there allocation paths in android that don't get counted towards heap budget?
The activity renders bitmap pngs downloaded from a web site.
Below are "getprop" results on our platform.
$ adb shell getprop | grep -i heap
I appreciate any pointers.
Thanks
Edited:
Note:
Below is the ps output. The Pss and Uss are around 316 MB, which is way above the configured heap limit.
PID   Vss       Rss       Pss       Uss      cmdline
982   351512K   351316K   326300K   316632K  mytest.home
660   679916K   61044K    57200K    56952K   ./videngine
RAM: 741764K total, 20320K free, 2148K buffers, 80104K cached, 24964K shmem, 10368K slab
Direct allocations in native code don't count against that Java heap total. There may be other possibilities as well (perhaps pages mapped and populated from files?).
If you have a custom android build, you may be able to set OOM killer values to preserve your own application.
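To get a rough picture of how much of the footprint sits outside the Java heap, a small sketch (class name and log tag are arbitrary) can compare the Dalvik heap with the native-heap counters in android.os.Debug:

import android.os.Debug;
import android.util.Log;

public final class NativeHeapLogger {
    public static void log() {
        Runtime rt = Runtime.getRuntime();
        long dalvikUsed = rt.totalMemory() - rt.freeMemory(); // Java heap in use
        long nativeSize = Debug.getNativeHeapSize();          // native heap reserved
        long nativeUsed = Debug.getNativeHeapAllocatedSize(); // native heap in use
        Log.d("NativeHeapLogger", "dalvikUsed=" + dalvikUsed
                + " nativeSize=" + nativeSize + " nativeUsed=" + nativeUsed);
    }
}

Note that these counters only cover the native malloc heap, not memory-mapped files or graphics buffers, so they will not account for the whole Rss figure.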
I see one of our android activity apps growing to around 400MB (rss size when "ps" is invoked)
To quote Dianne Hackborn, regarding the output of ps on Android: "the Vss and Rss columns are basically noise (these are the straight-forward address space and RAM usage of a process, where if you add up the RAM usage across processes you get an ridiculously large number)"
I would heartily encourage you to read her epic SO answer on measuring an app's memory footprint. Notably, Rss plays no role in her analysis, beyond the quote I cited above. Hence, I would suggest not worrying about Rss and focus on other metrics.
Here is what you can do to get an idea of what memory your app is using.
Go to DDMS, and create a heap dump by clicking the Dump HPROF file icon in the toolbar.
Next, convert the HPROF file from Android format to regular HPROF format using the hprof-conv tool in the android-sdk/tools folder.
Next, open the heap dump with the Eclipse Memory Analyzer (MAT) and look at the dominator tree; there you will see a list of objects that your app is forcing the Dalvik garbage collector (GC) to keep. Right-click on them and go to "Path to GC Roots" with "exclude weak references"; this will show you the references that are keeping those objects alive. Look and see whether you have any expired references that are being kept around as a memory leak.
You can watch this video for a more detailed walkthrough of finding memory leaks in an Android application.

Android grow heap frag case

I'm working on an app to stream music from the internet... My app does many things and is structured this way: I have a tab view, and every view is kept in memory, so every time I navigate through the tabs I find the previous state again (every tab can also open a WebView to look up information about songs, news, etc. on the internet). All of that grows memory usage, but it makes the app very user friendly... After paying attention to avoiding memory leaks by following the Android guide, I looked at the heap usage and found that my app allocates at most 3.5 MB of memory while the allocated heap size is 4.5-4.6 MB... I'm working on the emulator. That doesn't seem like much to me, but sometimes my app is restarted and I find a strange message in LogCat like
Grow heap ( frag case ) to 3.373 for 19764-byte allocation
What is it? an emulator issue? or something else? Am I using too much memory?
Thank you in advance for any help :)
The maximum heap size depends on the device (you can get that value by calling Runtime.getRuntime().maxMemory()), but it's probably around 32 MB. In order to save memory, Android doesn't allocate the maximum memory to every app automatically. Instead it waits until the app needs more memory and then gives it more heap space as needed, until it has reached the max. I believe that's the Grow heap message you see.
If you do a lot of memory allocation and freeing, you may run into fragmentation problems. Wikipedia has a decent description here, but basically it means that you might have the required memory available, just not all in one contiguous chunk. Hence the need to grow the heap.
So to answer your questions, it's probably not an emulator issue, it's just the nature of your program, which sounds a little memory heavy. However this isn't a bad thing. I don't think using 3-5MB for multiple tabs with webviews is too much.

Two questions about max heap sizes and available memory in android

I see that the Heap Size is automatically increased as the app needs it, up to whatever the phone's Max Heap Size is. I also see that the Max Heap Size is different depending on the device.
So my first question is: what are the typical max heap sizes on Android devices? I have tested memory allocation on one phone that was able to use a heap over 40 MB, while another threw OutOfMemory errors in the 20s of MB. What are the lowest in common use and what are the highest on common devices? Is there a standard or average?
The second question, and more important one, is how to ensure you are able to use the resources available per device but avoid using too much? I know there are methods such as onLowMemory() but those seem to be only for the entire system memory, not just the heap for your specific application.
Is there a way to detect the max heap size for the device and also detect when the available heap memory is reaching a low point for your application?
For example, if the device only allowed a max heap of 24mb and the app was nearing that limit in allocation, then it could detect and scale back. However, if the device could comfortably handle more, it would be able to take advantage of what is available.
Thanks
Early devices had a per-app cap of 16MB. Later devices increased that to 24MB. Future devices will likely have even more available.
The value is a reflection of the physical memory available on the device and the properties of the display device (because a larger screen capable of displaying more colors will usually require larger bitmaps).
Edit: Additional musings...
I read an article not too long ago that pointed out that garbage-collecting allocators are essentially modeling a machine with infinite memory. You can allocate as much as you want and it'll take care of the details. Android mostly works this way; you keep hard references to the stuff you need, soft/weak references to stuff you might not, and discard references to the stuff you'll never need again. The GC sorts it all out.
In your particular case, you'd use soft references to keep around the things that you don't need to have in memory, but would like to keep if there's enough room.
This starts to fall apart with bitmaps, largely because of some early design decisions that resulted in the "external allocation" mechanism. Further, the soft reference mechanism needs some tuning -- the initial version tended to either keep everything or discard everything.
The Dalvik heap is under active development (see e.g. the notes on Android 2.3 "Gingerbread", which introduces a concurrent GC), so hopefully these issues will be addressed in a future release.
Edit: Update...
The "external allocation" mechanism went away in 4.0 (Ice Cream Sandwich). The pixel data for Bitmaps is now stored on the Dalvik heap, avoiding the earlier annoyances.
Recent devices (e.g. the Nexus 4) cap the heap size at 96MB or more.
A general sense of the app's memory limits can be obtained as the "memory class", from ActivityManager.getMemoryClass(). A more specific value can be had from the java.lang.Runtime function maxMemory().
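As a small sketch of querying both figures at runtime (the class and method names are arbitrary, and a Context is assumed to be available):

import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public final class HeapLimits {
    public static void log(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int memoryClassMb = am.getMemoryClass();              // "normal" per-app budget in MB
        long maxHeapBytes = Runtime.getRuntime().maxMemory(); // hard limit for this process
        Log.d("HeapLimits", "memoryClass=" + memoryClassMb
                + "MB maxMemory=" + maxHeapBytes + "B");
    }
}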
Here are the "normal" (see below) heap sizes for some specific devices:
G1: 16MB
Moto Droid: 24MB
Nexus One: 32MB
Viewsonic GTab: 32MB
Novo7 Paladin: 60MB
I say "normal" because some versions of Android (e.g., CyanogenMod) will allow a user to manually adjust the heap limit. The result can be larger or smaller than the "normal" values.
See this answer for additional information, including how to find out what the heap size actually is programmatically, and also how to distinguish between the absolute heap size limit on the one hand and the heap limit that you should ideally respect, on the other:
Detect application heap size in Android
To detect what your present heap utilization is, you could try using the Runtime class' totalMemory() method. However, I've read reports that different versions/implementations of the Android OS may have different policies regarding whether native memory (from which the backing memory for bitmaps is allocated) is counted against the heap's maximum or not. And, since version 3.0, the native memory is directly taken from the application's own heap.
The iffiness of this calculation makes me think that it is a mistake to monitor your app's usage of memory at runtime, constantly comparing it to the amount available. Also, if you are in the middle of an involved computation, and find that you're running out of memory, it is not always convenient or reasonable to cancel that computation, and it may create a bad experience for your users if you do.
Instead, you might try preemptively defining certain modes, or constraints, upon your app's functional behavior that will ensure that it comes in under whatever your current device's relevant heap limits are (as detected during your app's initialization).
For example, if you have an app that uses a large list of words that must be loaded into memory all at once, then you might constrain your app so that for smaller heap limits only a smaller list of the more common words can be loaded, while for larger heap limits a full list containing many more words can be loaded.
There are also Java programming techniques that allow you to declare certain memory to be reclaimable by the garbage collector on demand, even if it has existing "soft" (rather than hard) references. If you have data that you would like to keep in memory, but which can be re-loaded from non-volatile storage if required (i.e., a cache), then you might consider using soft references to have such memory automatically freed when your app starts bumping against the upper limits of your heap. See this page for info on soft references in Android:
http://developer.android.com/reference/java/lang/ref/SoftReference.html
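A minimal sketch of that soft-reference idea (the class is made up for illustration): entries can be reclaimed by the garbage collector under memory pressure and are simply re-loaded from storage on a cache miss.

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public final class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        // Null means either "never cached" or "reclaimed by the GC"; reload in both cases.
        return (ref != null) ? ref.get() : null;
    }
}

One caveat, also hinted at above: older Dalvik versions tended to collect soft references very eagerly, so treat this as a best-effort cache rather than a guarantee.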

BitmapFactory OOM driving me nuts

I've been doing a lot of searching and I know a lot of other people are experiencing the same OOM memory problems with BitmapFactory. My app only shows a total memory available of 4 MB using Runtime.getRuntime().totalMemory(). If the limit is 16 MB, then why doesn't the total memory grow to make room for the bitmap? Instead it throws an error.
I also don't understand that if I have 1.6 MB of free memory according to Runtime.getRuntime().freeMemory(), why do I get an error saying "VM won't let us allocate 614400 bytes"? Seems to me I have plenty of available memory.
My app is complete except for this problem, which goes away when I reboot the phone so that my app is the only thing running. I'm using an HTC Hero for device testing (Android 1.5).
At this point I'm thinking the only way around this is to somehow avoid using BitmapFactory.
Anyone have any ideas on this or an explanation as to why the VM won't allocate 614 KB when there's 1.6 MB of free memory?
[Note that (as CommonsWare points out below) the whole approach in this answer only applies up to and including 2.3.x (Gingerbread). As of Honeycomb Bitmap data is allocated in the VM heap.]
Bitmap data is not allocated in the VM heap. There is a reference to it in the VM heap (which is small), but the actual data is allocated in the Native heap by the underlying Skia graphics library.
Unfortunately, while the definition of BitmapFactory.decode...() says that it returns null if the image data could not be decoded, the Skia implementation (or rather the JNI glue between the Java code and Skia) logs the message you’re seeing ("VM won't let us allocate xxxx bytes") and then throws an OutOfMemory exception with the misleading message "bitmap size exceeds VM budget".
The issue is not in the VM heap but rather in the Native heap. The Native heap is shared between running applications, so the amount of free space depends on what other applications are running and their bitmap usage. But, given that BitmapFactory will not return null (it throws instead), you need a way to figure out whether the call is going to succeed before you make it.
There are routines to monitor the size of the Native heap (see the Debug class getNative methods). However, I have found that getNativeHeapFreeSize() and getNativeHeapSize() are not reliable. So in one of my applications that dynamically creates a large number of bitmaps I do the following.
The Native heap size varies by platform. So at startup, we check the maximum allowed VM heap size to determine the maximum allowed Native heap size. [The magic numbers were determined by testing on 2.1 and 2.2, and may be different on other API levels.]
// mMaxVmHeap is in kilobytes; mMaxNativeHeap is in bytes.
long mMaxVmHeap = Runtime.getRuntime().maxMemory() / 1024;
long mMaxNativeHeap = 16 * 1024 * 1024;
if (mMaxVmHeap == 16 * 1024)
    mMaxNativeHeap = 16 * 1024 * 1024;
else if (mMaxVmHeap == 24 * 1024)
    mMaxNativeHeap = 24 * 1024 * 1024;
else
    Log.w(TAG, "Unrecognized VM heap size = " + mMaxVmHeap);
Then each time we need to call BitmapFactory we precede the call by a check of the form.
// bitmapWidth, bitmapHeight and targetBpp describe the image about to be decoded.
// All values here are in bytes; heapPad is explained below.
long sizeReqd = bitmapWidth * bitmapHeight * targetBpp / 8;
long allocNativeHeap = Debug.getNativeHeapAllocatedSize();
if ((sizeReqd + allocNativeHeap + heapPad) >= mMaxNativeHeap)
{
    // Do not call BitmapFactory…
}
Note that the heapPad is a magic number to allow for the fact that a) the reporting of Native heap size is "soft" and b) we want to leave some space in the Native heap for other applications. We are running with a 3*1024*1024 (ie 3Mbytes) pad currently.
1.6 MB of memory seems like a lot, but it could be the case that the memory is so badly fragmented that it can't allocate such a big block of memory in one go (though this does sound very strange).
One common cause of OOM while using image resources is decompressing JPG, PNG or GIF images with really high resolutions. You need to bear in mind that all these formats are pretty well compressed and take up very little space on disk, but once you load an image into memory on the phone, it uses something like width * height * 4 bytes; a 1920 x 1080 image, for example, expands to roughly 8 MB. Also, when decompression kicks in, a few other auxiliary data structures need to be loaded for the decoding step.
It seems like the issues given in Torid's answer have been resolved in the more recent versions of Android.
However, if you are using an image cache (a specialized one or even just a regular HashMap), it is pretty easy to get this error by creating a memory leak.
In my experience, if you inadvertently hold on to your Bitmap references and create a memory leak, the OP's error (the one referring to BitmapFactory and native methods) is the one that will crash your app (up to ICS - API 14 - and later?).
To avoid this, make sure you "let go" of your Bitmaps. This means using SoftReferences in the final tier of your cache, so that Bitmaps can be garbage collected out of it. This should work, but if you are still getting crashes, you can try to explicitly mark certain Bitmaps for collection by using bitmap.recycle(); just remember to never return a bitmap for use in your app if bitmap.isRecycled().
As an aside, LinkedHashMaps are a great tool for easily implementing pretty good cache structures, especially if you combine hard and soft references like in this example (starting line 308)... but using hard references is also how you can get yourself into memory leak situations if you mess up.
Although it usually doesn't make sense to catch an Error, because Errors are normally thrown only by the VM, in this particular case the Error is thrown by the JNI glue code, so it is very simple to handle cases where you could not load the image: just catch the OutOfMemoryError.
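A minimal sketch of that approach, assuming the image comes from a file path (imagePath and the class name are placeholders); on failure it retries at a quarter of the pixel count rather than crashing:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class SafeDecode {
    public static Bitmap decode(String imagePath) {
        try {
            return BitmapFactory.decodeFile(imagePath);
        } catch (OutOfMemoryError e) {
            // Fallback: decode at half width and height (one quarter of the pixels).
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inSampleSize = 2;
            return BitmapFactory.decodeFile(imagePath, opts);
        }
    }
}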
Although this is a fairly high level answer, the problem for me turned out to be using hardware acceleration on all of my views. Most of my views have custom Bitmap manipulation, which I figured to be the source of the large native heap size, but in fact when disabling hardware acceleration the native heap usage was cut down by a factor of 4.
It seems as though hardware acceleration will do all kinds of caching on your views, creating bitmaps of its own, and since all bitmaps share the native heap, the allocation size can grow pretty dramatically.
