Is this a memory leak? - Android

I am planning to hunt down any memory leaks in my Android app. After searching through different blogs, it seems like MAT is a good tool for that.
Before proceeding further, I just want to make something clear. When I check the allocated heap in the Memory Monitor tab of Android Studio, I can see the allocated memory increase by ~1 MB (from an initial allocation of 16 MB) each time I rotate my device. This most probably suggests some leak.
However, at any stage in the process, whenever I click the garbage collection button in the Memory Monitor window to force a GC, the allocated memory comes back down to near the initial 16 MB (sometimes it takes two back-to-back clicks when the allocated memory has grown beyond 30 MB).
My question is: does this behavior suggest that I don't have any leaks due to strong references? If the GC can collect those extra chunks, how important is it to investigate further?
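
One way to tell short-lived rotation garbage apart from a genuine strong-reference leak is to track Activity instances with weak references and check whether destroyed instances survive a forced GC. The following is only a minimal sketch of that idea; the LeakWatcher class is hypothetical and it assumes API 17+ for isDestroyed().

import android.app.Activity;
import android.util.Log;
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical helper: register every Activity instance and count the ones
// that are still strongly reachable after a GC.
public final class LeakWatcher {
    private static final List<WeakReference<Activity>> TRACKED = new ArrayList<>();

    // Call from Activity.onCreate().
    public static void track(Activity activity) {
        TRACKED.add(new WeakReference<>(activity));
    }

    // Call after forcing a GC (e.g. via the Memory Monitor button).
    public static void logSurvivors() {
        int survivors = 0;
        for (Iterator<WeakReference<Activity>> it = TRACKED.iterator(); it.hasNext(); ) {
            Activity a = it.next().get();
            if (a == null) {
                it.remove();       // collected: ordinary rotation garbage
            } else if (a.isDestroyed()) {
                survivors++;       // destroyed but still reachable: likely a leak
            }
        }
        Log.d("LeakWatcher", "destroyed-but-reachable activities: " + survivors);
    }
}

If the survivor count stays at zero after each rotation plus GC, the growth is most likely short-lived garbage rather than a strong-reference leak.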

After app startup, 70% of the memory is occupied by FinalizerReference

I was investigating the memory consumption of my Android app. Immediately after app startup, I clicked 'Dump Java Heap', and the first class on the list is FinalizerReference (java.lang.ref). It has over 800 instances and accounts for more than 70% of the total memory consumption.
I understand it is related to garbage collection. It is unlikely to be a memory leak, since the dump was captured right after app startup without switching to another view. I did not do any heavy processing during startup, apart from reading something from the shared preferences.
Possible Memory leak through FinalizerReference
From that post, I tried to look at the referent field of the FinalizerReference, but it refers to things beyond my understanding, e.g. Matrix, Canvas, RenderNode. They sound like UI components to me.
Here are my questions:
1) Is there any way/tool for me to further debug the root cause of the memory consumption?
2) Is this something I need to worry about, or is it just the normal behavior of Android memory management?
A better tool would be useful, but only because it would show that the reported Retained Size of ~33 MB for FinalizerReference is not real memory consumption, just massive multiple counting of the same small amount of memory by the Memory Profiler. The Shallow Size of ~28 kB is what actually matters, and it is negligible. The way I investigated this (using the Memory Profiler) is detailed in my answer to my own similar question.
You should not worry about FinalizerReference, at least not based on what you show here. You may need to worry about Memory Profiler, due to the meaningless Retained Size it reports for this class. I regard its calculation as a bug, and I filed this issue.

How heap memory allocation works

I'm working on an app and I have memory issues.
I started to study this and came across Eclipse's debugging tools.
I used DDMS's heap tool to see how much memory my app has allocated.
I saw it was about 90%.
Now I made a simple new project, a blank empty activity without any functions or variables. Just a splendid new project.
I ran this heap tester and I saw the results:
Heap size: 10.629 MB
Allocated: 9.189 MB
Free: 1.440 MB
Used: 86.45 %
Objects: 44,565
Well, is it normal?
I have a very simple blank activity, and nothing else, and this app uses 86% of its heap?
Allocated 9 MB out of 10? Really? Is that normal? How does this work?
Please instruct me about this, because I would like to know how these memory allocations work.
Dalvik will initially allocate a certain heap size to your app. In your case, this is around 10 MB. As your app needs more memory, Dalvik will increase the heap size up to the maximum configured size (which differs from device to device). If your app still needs more memory after the maximum is reached, it will cause an OutOfMemoryError.
To learn more about analyzing memory allocations in Android, check out this excellent article from the Android developers blog:
http://android-developers.blogspot.in/2011/03/memory-analysis-for-android.html
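
As a small illustration of the numbers the answer is talking about (this sketch is mine, not from the linked article, and assumes it runs inside an Activity), you can log the current heap, the hard limit, and the per-app memory class at runtime:

import android.app.Activity;
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public class HeapInfoActivity extends Activity {

    private void logHeapInfo() {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        long heapKb = rt.totalMemory() / 1024;  // current Dalvik heap size
        long maxKb  = rt.maxMemory() / 1024;    // hard limit before OutOfMemoryError

        ActivityManager am = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        int memoryClassMb = am.getMemoryClass();  // per-app limit suggested by the device

        Log.d("HeapInfo", "used=" + usedKb + "kB heap=" + heapKb
                + "kB max=" + maxKb + "kB memoryClass=" + memoryClassMb + "MB");
    }
}

The heap value is the roughly 10 MB you see in DDMS; the max value is the point where an OutOfMemoryError would be thrown.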
Examining heap usage sounds tricky, but the Android debugging tools make it fairly easy. Let's find out how.
Consider a small application. The Android debugging tools let you measure its heap usage and examine the allocations.
You can check memory-analysis-for-android, which has more details on how to analyze an application effectively on Android.
Here is a short description too:
There are two ways to start DDMS:
1) Using Eclipse: click Window > Open Perspective > Other... > DDMS
2) or from the command line: run ddms (or ./ddms on Mac/Linux) in the tools/ directory
Then select your application process from Devices and click "Update Heap".
Now switch to the Heap tab in DDMS.
To see the first update, click the Cause GC button.
You will see something like this:
We can see that our allocated set (the Allocated column) is a little over 20 MB. If you flip the device between portrait and landscape a few times, that number goes up. In small applications, the amount of memory we leak is bounded. In some ways, this can be the worst kind of leak to have, because we never get an OutOfMemoryError indicating that we are leaking.
You can use a heap dump to identify the problem. Click the Dump HPROF file button in the DDMS toolbar, save the file wherever you want, and then run hprof-conv on it to convert it into the standard HPROF format that MAT understands.
Using MAT, a powerful memory analyzer tool:
You can install MAT (the Eclipse Memory Analyzer) as a stand-alone tool and analyze the heap dumps with it.
NOTE:
If you're running ADT (which includes a plug-in version of DDMS) and have MAT installed in Eclipse as well, clicking the "dump HPROF" button will automatically do the conversion (using hprof-conv) and open the converted hprof file into Eclipse (which will be opened by MAT).
Start MAT and load the converted HPROF file. Navigate to the Histogram view, which shows a list of classes sortable by the number of instances, the shallow heap (total amount of memory used by all instances), or the retained heap (total amount of memory kept alive by all instances, including other objects that they reference).
If we sort by shallow heap, we can see that instances of byte[] are at the top.
Next, right-click on the byte[] class and select List Objects > with incoming references. This produces a list of all byte arrays in the heap, which we can sort based on shallow heap usage.
Pick one of the big objects, and drill down on it. This will show you the path from the root set to the object - the chain of references that keeps this object alive. Lo and behold, there's our bitmap cache!
MAT can't tell us for sure that this is a leak, because it doesn't know whether these objects are needed or not -- only the programmer can do that. However, looking at the stats, it is clear that the cache is using a large amount of memory relative to the rest of the application, so we might consider limiting its size.
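
As a rough sketch of the "limit the cache" idea (this is not code from the article; the BitmapCache class and the one-eighth-of-heap budget are just illustrative), a size-bounded bitmap cache built on android.util.LruCache could look like this:

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapCache() {
        // Budget a fraction of the maximum heap, here one eighth, measured in kilobytes.
        int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        cache = new LruCache<String, Bitmap>(maxKb / 8) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount() / 1024;  // count entries by their size in kB
            }
        };
    }

    public void put(String key, Bitmap bitmap) {
        if (cache.get(key) == null) {
            cache.put(key, bitmap);
        }
    }

    public Bitmap get(String key) {
        return cache.get(key);
    }
}

When the cache grows past its budget, LruCache evicts the least recently used bitmaps instead of letting the byte[] instances pile up on the heap.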
Work through your heap dumps this way and you will find plenty of opportunities for optimization.
What you see here is the currently allocated memory, not the maximum memory that can be allocated. The maximum depends on the Android version and varies from device to device.
In this case, your app does not have any high memory requirements; the files, system resources, and objects used to run it are very small, so Android initially allocates your app a common starting heap. This space keeps growing as the app demands more, until the demand is met or it exceeds the maximum heap size Android defines per app, in which case your app will crash with an out-of-memory error.
To read more about memory allocation in Android, go through the developer link below:
http://developer.android.com/training/articles/memory.html

Android Running apps memory usage

What is the difference between the heap usage (Allocated) we can see in the Eclipse memory analysis tool (in the DDMS view) and the memory usage shown for the same app on the Android device under:
Settings->Apps->Running
Even though I aggressively tried to conserve memory by setting objects to null as soon as they weren't needed, the latter number (the memory usage on the Running apps screen) only kept increasing, and my app finally crashed with an OutOfMemoryError. However, the former showed me that I was well within a reasonable size. I was also calling System.gc() a lot. Is there a difference between the two? Why the discrepancy? Any ideas on how I can solve this problem?
The biggest difference between the two that I know of is the scope of garbage collection.
Normal garbage collection, including System.gc(), collects a bit of garbage, then stops. It is not a complete sweep of the heap to get rid of everything. That is to try to minimize the CPU impact of garbage collection.
The heap dump prepared for MAT, though, is effectively preceded by a complete GC.
Your symptoms suggest that you are allocating memory faster than GC can reclaim it. The primary solution for this is to try to allocate less memory, or allocate it less frequently. For example, where possible, reuse objects, bitmap buffers, and the like, instead of trying to let GC clean the old stuff and allocating new stuff as you go.
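
One concrete form of reusing bitmap buffers is BitmapFactory's inBitmap option, which decodes into an existing bitmap instead of allocating a new one. The sketch below only illustrates the idea (the FrameDecoder class is made up, and before KitKat the reused bitmap must match the decoded size exactly), so treat it as a starting point rather than a drop-in solution:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class FrameDecoder {
    private Bitmap reusable;  // pixel buffer that later decodes write into

    public Bitmap decode(Resources res, int resId) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inMutable = true;           // required so the decoded bitmap can be reused
        if (reusable != null) {
            opts.inBitmap = reusable;    // reuse the existing pixel buffer
        }
        Bitmap decoded = BitmapFactory.decodeResource(res, resId, opts);
        reusable = decoded;              // keep it for the next decode
        return decoded;
    }
}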
If the memory is never released, it sounds like you have a memory leak somewhere in your application. This means that somewhere you are maintaining a strong reference to a large object that keeps being recreated (like an Activity or a Bitmap), which is why calling System.gc() makes no difference.
I suggest watching the Google I/O 2011 talk on memory management in Android. It shows how to use the Eclipse Memory Analyzer tool, which is incredibly useful for debugging this sort of error.
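
To illustrate the kind of strong reference the answer describes (my own example, not the asker's code), the classic pattern is a static field that outlives the Activity; no amount of System.gc() can reclaim the old instance while that reference is still set:

import android.app.Activity;
import android.os.Bundle;

public class LeakyActivity extends Activity {
    // A static field lives as long as the class is loaded, so every destroyed
    // Activity instance stored here stays strongly reachable and cannot be collected.
    private static LeakyActivity lastInstance;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        lastInstance = this;  // leaks the previous instance on every rotation
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // One possible fix: clear the reference when the Activity goes away.
        if (lastInstance == this) {
            lastInstance = null;
        }
    }
}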

Android Heap Fragmentation Strategy?

I have an OpenGL Android app that uses a considerable amount of memory to set up a complex scene and this clearly causes significant heap fragmentation. Even though there are no memory leaks it is impossible to destroy and create the app without it running out of memory due to fragmentation. (Fragmentation is definitely the problem, not leaks)
This causes a major problem since Android has a habit of destroying and creating activities on the same VM/heap which obviously causes the activity to crash. As a strategy to counter this I have used the following technique:
@Override
protected void onStop() {
    super.onStop();
    if (isFinishing()) {
        // Shut down the whole VM so the next launch gets a fresh, unfragmented heap.
        System.runFinalizersOnExit(true);
        System.exit(0);
    }
}
This ensures that when the activity is finishing it causes a complete VM shutdown and therefore next time the activity is started it gets a fresh unfragmented heap.
Note: I realise that this is not the "Android way" but given that the garbage collector is non-compacting it is impossible to continuously re-use the heap.
This technique does actually work in general; however, it doesn't work when the activity is destroyed in a non-finishing mode and then re-created.
Has anyone got any good suggestions about how to handle the degradation of the heap?
Further note: Reducing memory consumption is not really an option either. The activity doesn't actually use that much memory, but the heap (and native heap) seem to get easily fragmented, probably due to some largish memory chunks.
Fragmentation is almost always a consequence of an ill-conditioned allocation pattern. Large objects are frequently created and destroyed, while smaller objects persist (or at least have a different lifetime), so holes are created in the heap.
The only fragmentation prevention that works in such scenarios is to avoid that specific allocation pattern. This can often be done by pooling the large objects. If successful, the application will usually reward you with much better execution speed as well.
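
A hedged sketch of what pooling could look like for an OpenGL-style scene (the VertexBufferPool name and the buffer size are made up for illustration): instead of allocating and dropping large float arrays as the scene is rebuilt, keep a small pool and hand the same chunks back out, so the heap never develops holes where the big allocations used to sit.

import java.util.ArrayDeque;

// Hypothetical pool for large, equally sized vertex buffers. Reusing the same
// few arrays avoids repeatedly carving big blocks out of the heap, which is
// what tends to leave it fragmented.
public class VertexBufferPool {
    private static final int BUFFER_SIZE = 64 * 1024;  // illustrative size
    private final ArrayDeque<float[]> free = new ArrayDeque<>();

    public synchronized float[] obtain() {
        float[] buf = free.poll();
        return (buf != null) ? buf : new float[BUFFER_SIZE];
    }

    public synchronized void recycle(float[] buf) {
        if (buf != null && buf.length == BUFFER_SIZE) {
            free.push(buf);
        }
    }
}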
Edit, more specific to your question: if the heap is not empty after a restart of the application, what remains on it? You confirmed that it is not a memory leak, but that is what it looks like. Since you are using OpenGL, could it be that some native wrappers have survived because the OpenGL resources have not been properly disposed?

Analyzing memory usage using MAT. Trying to interpret what I see

I'm auto-testing my app by loading the same screen over and over. I make heap dumps every 400 screen loads and then try to see whether I have leaks. The screen makes 3 network calls (it loads two XML files and one bitmap file). I noticed this object:
39 instances of "org.apache.http.impl.conn.tsccm.ConnPoolByRoute", loaded by "" occupy 6,506,440 (51.72%) bytes.
This is after 800 screen loads. The "funny" thing is that after 400 loads there were 75 instances totaling about 13 MB!
In fact, the strangest thing is that the heap keeps increasing from 12 MB to almost 27 MB (I'm not sure whether it will go over the 64 MB that the device has!). When I stop the automatic loading and repeatedly press "Cause GC" in the Eclipse DDMS Heap section, the "Allocated" memory keeps decreasing until I'm almost down to 6 MB!? My question is: how come the heap size keeps increasing even though it seems I'm not using all that memory? Is it because my memory is fragmented? Should I be able to conserve more memory if I don't keep creating new objects but use an object pool instead? Shouldn't the GC try harder before causing an OOM when I reach the critical value of 64 MB? I feel kind of uneasy that it would get so close to 64 MB. I wish the GC would be more punctual :)
Hope someone can help.
Best,
Joris
