One of the touted features of the ART runtime in Android 5.0+ is heap compaction, to reduce heap fragmentation. A fragmented heap can produce OutOfMemoryErrors much more easily, as there may not be a single contiguous free block of memory big enough for your needs, even if the heap overall has enough free space.
I understand that this occurs when the app moves to the background, based on Google conference presentations and the like. However, the only statement that I can find on it in the documentation says:
Homogeneous space compaction is free-list space to free-list space compaction which usually occurs when an app is moved to a pause imperceptible process state. The main reasons for doing this are reducing RAM usage and defragmenting the heap.
It's unclear exactly what a "pause imperceptible process state" means, technically.
Suppose an app does not have any foreground activities at the moment. Is there anything that the developer might have done that might prevent heap compaction for that app's process? For example, does having a foreground service block heap compaction?
Putting the pieces of the puzzle together.
From what I can determine, ART will compact anything that has been paused for 2-3 seconds, and by "paused" it means not actively running, even in the background: so paused activities, but not running services. It will also compact on the fly, i.e. concurrently, while the app is in the foreground.
Currently, the event that triggers heap compaction is ActivityManager process-state changes. When an app goes to the background, it notifies ART that the process state is no longer jank “perceptible.” This enables ART to do things that cause long application thread pauses, such as compaction and monitor deflation.
Chet Haase states:
Garbage Collection
ART brought improved garbage collection dynamics. For one thing, ART is a moving collector; it is able to compact the heap when a long pause in the application won’t impact user experience (for example, when the app is in the background and is not playing audio). Also, there is a separate heap for large objects like bitmaps, making it faster to find memory for these large objects without wading through the potentially fragmented regular heap. Pauses in ART are regularly in the realm of 2–3ms.
From what I can see, any pause in the app is fair game for the ART GC.
I suspect the app needs to be completely paused, with no services running, for the compaction to occur, because compaction reallocates the memory addresses of objects on the heap, and for that to happen the heap cannot be changing. The larger compaction performed during the app's pause, rather than on the fly, is a dynamic rearrangement of the whole heap; the only changes that can be made during the smaller concurrent pauses are lighter ones, such as re-routing references for memory that is no longer being used.
This is an educated guess, though, not a definitive answer, and I will endeavour to get more info.
The source code here should have the answer. It uses naming like InJankPerceptibleProcessState(), and I'm trying to wade through it, as you probably already have yourself.
I'm reading it now and will update this answer when/if I find something definitive.
Homogeneous space compaction is free-list space to free-list space compaction which usually occurs when an app is moved to a pause imperceptible process state. The main reasons for doing this are reducing RAM usage and defragmenting the heap.
Source: https://developer.android.com/studio/profile/investigate-ram.html#LogMessages
Actually, you can measure the idle time of an app yourself: start an idle timer and reset it whenever an event is captured in a TextWatcher or OnKeyListener. If your app is in the background and none of these events are firing, it is a good candidate to be collected by the GC.
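A minimal sketch of that idea, assuming a Handler-based timer and treating Activity's onUserInteraction() as a stand-in for "any event" (the names IdleAwareActivity, idleRunnable and IDLE_DELAY_MS are made up for illustration):

import android.app.Activity;
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

public class IdleAwareActivity extends Activity {
    // Assumed threshold, not an official value; tune as needed.
    private static final long IDLE_DELAY_MS = 3000;
    private final Handler idleHandler = new Handler(Looper.getMainLooper());
    private final Runnable idleRunnable =
            () -> Log.d("IdleTimer", "No user events for a while; app looks idle");

    @Override
    public void onUserInteraction() {
        super.onUserInteraction();
        // Reset the timer on every touch/key event instead of wiring up
        // individual TextWatcher/OnKeyListener callbacks.
        idleHandler.removeCallbacks(idleRunnable);
        idleHandler.postDelayed(idleRunnable, IDLE_DELAY_MS);
    }
}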
Also, this heap compaction is event-based and priority-based. For example, if there is never a situation where the user needs memory, the OS will not even do it.
As far as priority is concerned, for garbage collection the system looks first at background apps with no background service, then at apps with a background service, and last at foreground apps.
Related
I'm writing an application with a ~1GB memory footprint.
The app works fine on devices with 4GB of memory, but on devices with less than 3GB it triggers aggressive memory reclaim, sometimes triggers the OOM killer, and degrades the user experience at a system level (e.g. it takes a long time to go back to the previous app, and a music player in the background gets killed).
I'm wondering if it's possible to query system memory status and adjust the memory usage of my app accordingly, e.g. use memory conservatively before the system is about to kill processes of ActivityManager.RunningAppProcessInfo.IMPORTANCE_SERVICE level.
Is this possible?
First, you may want to profile your app to see whether you really need that much memory constantly allocated:
Manage Your App's Memory
Investigating Your RAM Usage
Then there is the onTrimMemory() callback (API 14+), which you can use to free some resources when the device is short of memory:
Called when the operating system has determined that it is a good time
for a process to trim unneeded memory from its process. This will
happen for example when it goes in the background and there is not
enough memory to keep as many background processes running as desired.
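A hedged sketch of how this might look in practice, combining onTrimMemory() with an explicit ActivityManager.getMemoryInfo() query, which is roughly what the question asks for (the releaseCaches() helper and the threshold heuristic are my own illustration, not an official recipe):

import android.app.ActivityManager;
import android.app.Application;
import android.content.ComponentCallbacks2;
import android.content.Context;

public class MyApplication extends Application {

    /** Query the system's memory status before deciding how aggressively to allocate. */
    public boolean isMemoryTight() {
        ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);
        // info.lowMemory flips to true once the system considers itself low on memory;
        // info.threshold is the level at which it starts killing background processes.
        return info.lowMemory || info.availMem < info.threshold * 2;
    }

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            // The app is in the background and the system is getting short on memory:
            // drop caches, recycle large bitmaps, etc.
            releaseCaches(); // hypothetical helper: free whatever is cheap to rebuild
        }
    }

    private void releaseCaches() {
        // ...release bitmap/object caches here...
    }
}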
I'm writing a real-time arcade game for Android >= 2.1. During gameplay I'm not allocating memory, so as not to provoke the GC, because when the GC runs it takes the processor for 70-200ms. Users see this as "oh no, that game is lagging...".
I checked LogCat. There are lots of GC_FOR_MALLOC or GC_EXPLICIT messages. But... not from the PID of my process! My game is not causing them. They're caused by other processes running in the background: some wallpaper, widgets, radio, email, weather checking and other services...
I don't understand it entirely. When, for example, a wallpaper disappears, its onPause() is called, I suppose. So it should stop all its threads and certainly not allocate any memory (or call System.gc()). Maybe it's wrongly implemented? I don't know. But there are some Android services which also cause GC from time to time... It's odd.
Is it a big Android <= 2.2 architecture flaw?
Android 2.3 introduces concurrent GC, which takes less time.
What can I do to ensure that my game will run smoothly?
First of all, the things you see in LogCat will differ from one device to another. If you are certain the GC activity is not coming from your app, there is absolutely nothing you can do about it. You will always find the GC doing...something.
Make sure you keep YOUR code clean and very light.
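One concrete way to keep your own code light during gameplay is to make the game loop allocation-free by reusing pre-allocated objects, so at least none of the GC noise is yours. A minimal sketch under that assumption (the Enemy class and its fields are invented for illustration):

// Reuse a pre-allocated scratch buffer instead of allocating inside the game loop.
public class Enemy {
    private float dx = 1f, dy = 1f;                        // current direction (illustrative)
    private final float[] scratchVelocity = new float[2];  // overwritten every frame

    public void update(float dt) {
        // BAD (allocates every frame and feeds the GC):
        // float[] velocity = new float[] { dx * dt, dy * dt };

        // GOOD: overwrite the pre-allocated buffer instead.
        scratchVelocity[0] = dx * dt;
        scratchVelocity[1] = dy * dt;
        move(scratchVelocity);
    }

    private void move(float[] velocity) {
        // ...apply the velocity to the enemy's position...
    }
}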
Plus, remember that generally speaking, in the presence of a garbage collector, it is never good practice to manually call the GC. A GC is organized around heuristic algorithms which work best when left to their own devices. Calling the GC manually often decreases performance.
Occasionally, in some relatively rare situations, one may find that a particular GC gets it wrong, and a manual call to the GC may then improve things, performance-wise. This is because it is not really possible to implement a "perfect" GC which will manage memory optimally in all cases. Such situations are hard to predict and depend on many subtle implementation details. The "good practice" is to let the GC run by itself; a manual call to the GC is the exception, which should be envisioned only after an actual performance issue has been duly witnessed.
I do not believe it is a flaw on Android <= 2.2. Is it happening on higher versions? Have you tested it?
I have some problems with the memory usage of my android app and don't know what causes the high memory usage. When I start my app, it uses up to 40 mb ram (says DDMS) and when I open another app, my app gets immediately killed.
I read a lot about memory leaks and I'm unbinding drawables, running the GC and so on but my app still needs a lot of memory.
I have about 3MB of resources in my app, but AFAIK they are loaded into RAM on demand. Am I wrong? Could this cause the 40MB of RAM usage?
EDIT: I don't think I have memory leaks, because I can switch the orientation of each activity as often as I want and the app does not crash from low memory. So it can't be a memory leak, can it?
You need to do memory management in your Android application: free the resources that are no longer used. Try overriding the onStop(), onDestroy() and onPause() methods of Activity, which correspond to points in the activity lifecycle.
In onDestroy(), free all the resources you have acquired, so that the memory becomes available again for other apps to use.
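A rough sketch of what that cleanup could look like, assuming the activity holds a bitmap-backed ImageView and an open Cursor (the field names are hypothetical):

import android.app.Activity;
import android.database.Cursor;
import android.widget.ImageView;

public class DetailActivity extends Activity {
    private ImageView heroImage;   // hypothetical view holding a large bitmap
    private Cursor dataCursor;     // hypothetical cursor kept open by this activity

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Drop references to large drawables so the backing bitmaps can be collected.
        if (heroImage != null) {
            heroImage.setImageDrawable(null);
            heroImage = null;
        }
        // Close anything that holds native resources.
        if (dataCursor != null) {
            dataCursor.close();
            dataCursor = null;
        }
    }
}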
What data structures are you using? Very large data structures (long Lists, big graphs, big maps, etc) can quickly use up RAM.
It could also be that you're leaking the Context on orientation change in your app.
It could also be that your layouts are really badly designed along with some heavy data structures.
It's difficult to tell unless you describe a bit more about what your app tries to do.
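To make the Context-leak point concrete, the classic pattern looks something like this (a simplified illustration, not taken from the asker's code):

import android.content.Context;

// LEAKS: a static field outlives the Activity, so every orientation change
// keeps the old Activity (and its whole view hierarchy) alive in memory.
class LeakyHolder {
    private static Context sContext;                  // may hold an Activity instance
    static void init(Context context) { sContext = context; }
}

// SAFER: keep only the application Context, which lives as long as the process anyway.
class SaferHolder {
    private static Context sAppContext;
    static void init(Context context) { sAppContext = context.getApplicationContext(); }
}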
I have an OpenGL Android app that uses a considerable amount of memory to set up a complex scene and this clearly causes significant heap fragmentation. Even though there are no memory leaks it is impossible to destroy and create the app without it running out of memory due to fragmentation. (Fragmentation is definitely the problem, not leaks)
This causes a major problem since Android has a habit of destroying and creating activities on the same VM/heap which obviously causes the activity to crash. As a strategy to counter this I have used the following technique:
@Override
protected void onStop() {
    super.onStop();
    if (isFinishing()) {
        System.runFinalizersOnExit(true);
        System.exit(0);
    }
}
This ensures that when the activity is finishing it causes a complete VM shutdown and therefore next time the activity is started it gets a fresh unfragmented heap.
Note: I realise that this is not the "Android way" but given that the garbage collector is non-compacting it is impossible to continuously re-use the heap.
This technique does actually work in general; however, it doesn't work when the activity is destroyed in a non-finishing mode and then re-created.
Has anyone got any good suggestions about how to handle the degradation of the heap?
Further note: Reducing memory consumption is not really an option either. The activity doesn't actually use that much memory, but the heap (and native heap) seem to get fragmented easily, probably due to some largish memory chunks.
Fragmentation is almost always a consequence of an ill-conditioned allocation pattern: large objects are frequently created and destroyed, while smaller objects persist (or at least have a different lifetime), and holes are punched in the heap.
The only working fragmentation prevention in such scenarios is to avoid that specific allocation pattern, which can often be done by pooling the large objects. If successful, the application will usually reward you with much better execution speed as well.
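A very small sketch of what "pooling the large objects" can look like in practice, here as a pool of large byte buffers (the class name and buffer size are made up for illustration):

import java.util.ArrayDeque;

// Minimal fixed-size-buffer pool: reusing the same few big allocations
// avoids punching large holes in the heap.
public class BufferPool {
    private static final int BUFFER_SIZE = 1024 * 1024; // 1 MB, illustrative only
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    public synchronized byte[] acquire() {
        byte[] buf = free.poll();
        return (buf != null) ? buf : new byte[BUFFER_SIZE];
    }

    public synchronized void release(byte[] buf) {
        if (buf != null && buf.length == BUFFER_SIZE) {
            free.push(buf);  // keep it around for the next caller instead of letting it die
        }
    }
}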
Edit, more specific to your question: if the heap is still not empty after a restart of the application, what is left on it? You confirmed it's not a memory leak, but that is what it looks like. Since you are using OpenGL, could it possibly be that some native wrappers have survived because the OpenGL resources have not been properly disposed?
I have an Android app with a running service.
When I look in the "Running Apps" menu in Android settings, I see that my app's memory usage is between 9-16MB.
I used DDMS Allocation Tracker to see where this is coming from, but all of the objects were less than 500 bytes.
Does it make sense? Any other ways to track my app's memory usage?
Also, I have an SQLite database opened as long as the service is running. Does that have an impact on memory as well?
Thanks.
Does it make sense?
It neither makes sense nor doesn't make sense. You can get to "9-16MB" by increments of 500 as easily as you can get there by increments of 5000. Also, AFAIK that allocation tracker does not track everything (e.g., bitmaps on pre-3.0 environments).
Any other ways to track my app's memory usage?
Dump your heap (e.g., using the Dump HPROF File toolbar button in DDMS) and examine the results with the MAT plugin for Eclipse. There was a presentation on this at the 2011 Google I/O conference -- the YouTube video is online. You can use this to track memory leaks.
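If clicking the DDMS button is inconvenient (for example on a tester's device), the same kind of snapshot can also be triggered from code with android.os.Debug.dumpHprofData(); a small sketch, where the output path is just an example (and note the raw dump may need conversion with the SDK's hprof-conv tool before MAT can open it):

import android.os.Debug;
import java.io.IOException;

// Writes an HPROF snapshot of the current heap for offline analysis.
public final class HeapDumper {
    public static void dump(String absolutePath) {
        try {
            // e.g. getExternalFilesDir(null) + "/dump.hprof"
            Debug.dumpHprofData(absolutePath);
        } catch (IOException e) {
            // Best-effort diagnostics only; ignore or log the failure.
        }
    }
}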
Does that have an impact on memory as well?
Some, I'm sure.
Another issue is actually the service itself. Your objective should be to have that service in memory as little as possible, and only while it is actively delivering continuous value to the user. Ideally, your service is destroyed ~99% of the time.
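One common way to get that "destroyed ~99% of the time" behaviour is to let the service stop itself as soon as its work is done, for example with an IntentService, which shuts down automatically once its queue is empty. A minimal sketch (doWork() is a hypothetical helper standing in for the real SQLite/network work):

import android.app.IntentService;
import android.content.Intent;

// IntentService runs each command on a worker thread and stops itself when idle,
// so the process holds the service (and its memory) only while there is actual work.
public class SyncService extends IntentService {
    public SyncService() {
        super("SyncService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        doWork(intent);  // hypothetical helper: query the database, sync, etc.
    }

    private void doWork(Intent intent) {
        // ...do the actual work here...
    }
}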