Hello everyone, I am seeing a major uptick in memory-related crashes in our recent Android builds. We have done some things to try to mitigate these issues, but we are still seeing the same crashes in the latest release.
Fatal Exception: java.lang.OutOfMemoryError
Failed to allocate a 16 byte allocation with 1890136 free bytes and 1845KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 54788096 bytes)
java.lang.Long.valueOf (Long.java:845)
io.reactivex.internal.operators.observable.ObservableInterval$IntervalObserver.run (ObservableInterval.java:82)
io.reactivex.Scheduler$PeriodicDirectTask.run (Scheduler.java:562)
io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:509)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
java.lang.Thread.run (Thread.java:923)
Fatal Exception: java.lang.OutOfMemoryError
Failed to allocate a 16 byte allocation with 1590248 free bytes and 1552KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 39845888 bytes)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:161)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
java.lang.Thread.run (Thread.java:923)
Fatal Exception: java.lang.OutOfMemoryError
Failed to allocate a 16 byte allocation with 1215008 free bytes and 1186KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 49020928 bytes)
io.reactivex.internal.queue.MpscLinkedQueue.offer (MpscLinkedQueue.java:62)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:167)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
java.lang.Thread.run (Thread.java:923)
Is there some framework change that is triggering these issues, or is this application code that is causing them? What are some strategies to address crashes like the above?
Some other techniques to consider beyond the existing comments:
In-field instrumentation:
Activity patterns: If you have something that records user activity, compare installs that go a long time without crashing against installs that crash early, and see whether the users performed different actions.
Direct memory usage: Since you are not yet able to reproduce this on debug builds, you could record available memory just before and just after particular activities to help you narrow down where in the app this is occurring. You can read the app's available memory and then log it (if you can get the logs) or report it back through some analytics system, as in the sketch below.
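A minimal sketch of that kind of in-field logging, assuming you can ship a small helper with the app (the function name logMemory and the log tag are illustrative, not from the original post):

import android.app.ActivityManager
import android.content.Context
import android.util.Log

// Logs managed-heap usage plus system-wide availability; call it just before and
// just after a suspect screen or workflow and ship the values to your analytics.
fun logMemory(context: Context, tag: String) {
    val runtime = Runtime.getRuntime()
    val usedKb = (runtime.totalMemory() - runtime.freeMemory()) / 1024
    val maxKb = runtime.maxMemory() / 1024

    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo)

    Log.d("MemCheck", "$tag: heap ${usedKb}KB of ${maxKb}KB max, " +
        "system avail ${memInfo.availMem / (1024 * 1024)}MB, lowMemory=${memInfo.lowMemory}")
}

For example, call logMemory(context, "before-checkout") and logMemory(context, "after-checkout") around a workflow you suspect.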
Local testing (with LeakCanary or a memory profiler):
There are often points in time that should come back to the same level of allocated memory: for instance, if you go into a screen and back out, you may allocate some static items the first time, but from the second use of the screen onwards you want memory to return to a normal (quiescent) level. So pausing execution, forcing a GC, resuming execution, going through a workflow, and then returning to the home screen (again, skipping the first pass) can be a good way to narrow down which workflow is leaving significant extra memory behind.
It is unusual that the debug builds are not reproducing this. If you have a "friendly" end user reporting this issue, perhaps give them a debug build and ask them to support you by using it.
In a debug environment you can also try to "make it worse": for example, go into and out of a screen or workflow 10 or 100 times (scripting it for the 100-repetition case, as in the sketch below).
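If you do script it, a small instrumented test can do the repetition for you. A rough sketch, assuming androidx.test is set up and using a placeholder DetailActivity for whichever screen you suspect:

import androidx.test.core.app.ActivityScenario
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class RepeatedNavigationLeakTest {
    @Test
    fun enterAndLeaveScreen100Times() {
        repeat(100) {
            // Launch and immediately close the suspect screen to amplify any leak.
            val scenario = ActivityScenario.launch(DetailActivity::class.java)
            // Optionally drive the screen with Espresso interactions here.
            scenario.close()
        }
        // Afterwards, check LeakCanary output or capture a heap dump and compare
        // retained memory against a single pass through the same screen.
    }
}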
Use coroutines for long or heavy operations. These crashes are coming from RxJava; perhaps the work scheduled there is not being handled correctly.
I am just going to have a stab at where you could look. From the stack traces it looks like you are using a scheduler to perform periodic tasks. My suspicion is that you are running many threads, and since each thread requires its own allocation of memory, something to consider would be:
Controlling the number of threads through a thread pool. This caps the number of threads available and recycles them instead of allocating new ones, so you avoid having a significant number of threads running at the same time. A sketch of a bounded RxJava scheduler is below.
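A sketch of that idea, assuming RxJava 2: back the periodic work with a fixed-size pool via Schedulers.from, and dispose the interval when its owner goes away (the class and member names here are illustrative):

import io.reactivex.Observable
import io.reactivex.Scheduler
import io.reactivex.disposables.Disposable
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

object AppSchedulers {
    // At most 4 worker threads, reused for all periodic work instead of
    // growing an unbounded executor.
    val bounded: Scheduler = Schedulers.from(Executors.newFixedThreadPool(4))
}

class PollingController {
    private var ticker: Disposable? = null

    fun start() {
        ticker = Observable.interval(30, TimeUnit.SECONDS, AppSchedulers.bounded)
            .subscribe { /* periodic work */ }
    }

    fun stop() {
        // Disposing stops the interval; forgetting this lets queued tasks pile up,
        // which would match the MpscLinkedQueue.offer frames in the traces above.
        ticker?.dispose()
        ticker = null
    }
}

Whether this matches your code is a guess from the ObservableInterval and ExecutorScheduler frames in the stack traces; the general point is to bound the executor and dispose subscriptions tied to a lifecycle.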
Related
I have crash reports on the Google Play console. There are two of them and they are identical (because Stack Overflow detects it as spam, I will post only one). Crashlytics tries to upload a report and crashes due to an out-of-memory error.
Crash report:
java.lang.OutOfMemoryError:
at com.android.okio.Segment.<init> (Segment.java:34)
at com.android.okio.SegmentPool.take (SegmentPool.java:48)
at com.android.okio.OkBuffer.writableSegment (OkBuffer.java:511)
at com.android.okio.OkBuffer.write (OkBuffer.java:424)
at com.android.okio.OkBuffer.clone (OkBuffer.java:740)
at com.android.okhttp.internal.http.RetryableSink.writeToSocket (RetryableSink.java:77)
at com.android.okhttp.internal.http.HttpConnection.writeRequestBody (HttpConnection.java:236)
at com.android.okhttp.internal.http.HttpTransport.writeRequestBody (HttpTransport.java:77)
at com.android.okhttp.internal.http.HttpEngine.readResponse (HttpEngine.java:610)
at com.android.okhttp.internal.http.HttpURLConnectionImpl.execute (HttpURLConnectionImpl.java:379)
at com.android.okhttp.internal.http.HttpURLConnectionImpl.getResponse (HttpURLConnectionImpl.java:323)
at com.android.okhttp.internal.http.HttpURLConnectionImpl.getResponseCode (HttpURLConnectionImpl.java:491)
at com.android.okhttp.internal.http.DelegatingHttpsURLConnection.getResponseCode (DelegatingHttpsURLConnection.java:105)
at com.android.okhttp.internal.http.HttpsURLConnectionImpl.getResponseCode (HttpsURLConnectionImpl.java:25)
at io.fabric.sdk.android.services.network.HttpRequest.code (HttpRequest.java:1357)
at com.crashlytics.android.core.DefaultCreateReportSpiCall.invoke (DefaultCreateReportSpiCall.java:65)
at com.crashlytics.android.core.CompositeCreateReportSpiCall.invoke (CompositeCreateReportSpiCall.java:18)
at com.crashlytics.android.core.ReportUploader.forceUpload (ReportUploader.java:104)
at com.crashlytics.android.core.ReportUploader$Worker.attemptUploadWithRetry (ReportUploader.java:242)
at com.crashlytics.android.core.ReportUploader$Worker.onRun (ReportUploader.java:185)
at io.fabric.sdk.android.services.common.BackgroundPriorityRunnable.run (BackgroundPriorityRunnable.java:30)
at java.lang.Thread.run (Thread.java:818)
Crashlytics library versions used:
answers-1.4.6, beta-1.2.10, crashlytics-2.9.8, crashlytics-core-2.6.7, crashlytics-ndk-2.0.5, fabric-1.4.7
I don't know how to reproduce this crash, so I have no idea how to fix it myself. Any tips to troubleshoot this kind of crash?
This error typically shows up when something in your own code is at fault.
Usually, this error is thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
OutOfMemoryError usually means that you’re doing something wrong, either holding onto objects too long, or trying to process too much data at a time. Sometimes, it indicates a problem that’s out of your control, such as a third-party library that caches strings, or an application server that doesn’t clean up after deploys. And sometimes, it has nothing to do with objects on the heap.
To find the cause, look at the detail message included at the end of the exception text. Let's examine the common variants.
Error 1 – Java heap space: an object could not be allocated in the Java heap. This can be a simple configuration issue (the heap is too small), a memory leak, or, less commonly, excessive use of finalizers.
Error 2 – GC overhead limit exceeded: the garbage collector is running almost all the time and the Java program is making very slow progress.
Error 3 – PermGen space: the java.lang.OutOfMemoryError: PermGen space error indicates that the Permanent Generation's area in memory is exhausted.
Error 4 – Metaspace: Java class metadata is allocated in native memory. If the metaspace for class metadata is exhausted, a java.lang.OutOfMemoryError with the detail "Metaspace" is thrown.
Error 5 – Requested array size exceeds VM limit: the application attempted to allocate an array that is larger than the heap size.
Error 6 – Request size bytes for reason. Out of swap space?: an allocation from the native heap failed and the native heap might be close to exhaustion. The message indicates the size (in bytes) of the request that failed and the reason for the memory request; usually the reason is the name of the source module reporting the allocation failure, although sometimes it is the actual reason.
Error 7 – reason stack_trace_with_native_method: when this message is thrown and the printed stack trace has a native method in its top frame, a native method has encountered an allocation failure. The difference from the previous message is that the failure was detected in a Java Native Interface (JNI) or native method rather than in JVM code.
For more details, see "Understand the OutOfMemoryError Exception" in the Oracle troubleshooting guide, which is the reference for this answer.
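One Android-specific troubleshooting angle, since the upload thread is usually just the last straw rather than the cause: record memory-pressure callbacks so you can see what the app was doing shortly before the heap ran out. A sketch, assuming you can hook your Application class (the class name and log tag are illustrative):

import android.app.Application
import android.content.ComponentCallbacks2
import android.util.Log

class MyApp : Application() {
    override fun onTrimMemory(level: Int) {
        super.onTrimMemory(level)
        if (level == ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW ||
            level == ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL) {
            val rt = Runtime.getRuntime()
            val usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)
            val maxMb = rt.maxMemory() / (1024 * 1024)
            // Send this through your analytics or Crashlytics logging so crash
            // reports carry the memory history leading up to the OOM.
            Log.w("MemPressure", "trim level=$level, heap ${usedMb}MB of ${maxMb}MB")
        }
    }
}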
I made an application that runs on a coffee machine. After 20+ days (can be 60+ days depending on use),
an OutOfMemoryError occurs:
java.lang.OutOfMemoryError: Failed to allocate a 604 byte allocation with 16777216 free bytes and 319MB until OOM; failed due to fragmentation (required continguous free 65536 bytes for a new buffer where largest contiguous free 53248 bytes)
My question is:
Is there a way to run memory defragmentation in an Android application programmatically?
The time it takes should not be an issue because the machine goes into standby or eco mode.
And from what I can see, there is more than enough memory available.
Is there a way to run memory defragmentation in an Android application programmatically?
No. On Android 5.0-7.1, the best thing that you can do is get out of the foreground, as ART's garbage collector will compact memory only when your app is in the background. On Android 8.0+, ART's garbage collector will compact memory even while you are in the foreground.
Beyond that, aim to start a fresh process once per week or so, so you get a fresh heap; a sketch of a scheduled restart is below.
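A sketch of that scheduled-restart idea for a dedicated device like this one, assuming a single entry activity (here called MainActivity) and that the restart is triggered from whatever code already puts the machine into standby or eco mode:

import android.app.AlarmManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.os.SystemClock
import kotlin.system.exitProcess

fun restartProcess(context: Context) {
    // Relaunch the entry activity about a second after this process exits;
    // the new process starts with a brand-new, unfragmented heap.
    val intent = Intent(context, MainActivity::class.java)
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TASK)
    val pending = PendingIntent.getActivity(
        context, 0, intent,
        PendingIntent.FLAG_CANCEL_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    val alarmManager = context.getSystemService(Context.ALARM_SERVICE) as AlarmManager
    alarmManager.set(
        AlarmManager.ELAPSED_REALTIME_WAKEUP,
        SystemClock.elapsedRealtime() + 1000,
        pending
    )
    exitProcess(0)
}

Call it once a week from the eco-mode handler so the downtime is invisible to users.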
I am new to Android development and I am trying to understand how this garbage collection works, but I need a clear explanation from someone firsthand.
My app is doing some large transactions back and forth with a server. When I am switching from one activity to another, I constantly get the following message in my console:
GC_MINOR: (Nursery full) pause 2.77ms, total 2.95ms, bridge 11.82ms promoted 128K major 2640K los 4441K
and of course, the ms timing is different every time but it happens A LOT!
I read up on it here and created an environment.txt file in my project with the following lines:
MONO_GC_PARAMS=nursery-size=1024m
MONO_GC_PARAMS=soft-heap-limit=64m
I was just testing different values for nursery-size and soft-heap-limit, but it didn't help at all.
Right now, the app runs REALLY slowly when I go from one activity to another.
Can someone please explain in detail and present me with some options?
Thank you.
Garbage collection works on different portions of the heap:
Nursery
Tenured
The split differs between VMs (HotSpot, IBM, etc.).
Generally the nursery is much smaller than the tenured space (nursery << tenured).
For example, in 2 GB of heap space the nursery can range from 128-512 MB and the remainder will be tenured.
The nursery is managed closely by the VM. It is used most of the time for new object allocation, and because it is small, GC operations on it (compaction, collection) are fast and well tuned.
The tenured space holds objects that grow too large for the nursery or stay alive longer than a specific time limit (long-lived objects); they are promoted there from the nursery. This is a bigger chunk of memory, so GC operations on it are slower.
Nursery pauses are generally small and should not have much impact; if you are seeing them continuously, that is a sign of a problem. When resizing the nursery, keep in mind that it should not be larger than the tenured space, and that GC pause time grows with its size.
In your case you should look at:
The existing nursery size and the object allocation pattern. If larger objects are being created, try increasing the nursery size in multiples of 2 (see the sketch after this list).
Parallel GC threads; this can improve performance drastically.
The VM's GC policies, e.g. throughput policy or CMS policy (dependent on the VM).
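On the Xamarin.Android side of the question above, note that environment.txt holds environment variable assignments, so two separate MONO_GC_PARAMS lines will not both take effect; the options are meant to be comma-separated in a single assignment. A sketch with illustrative sizes (tune them for your app):

MONO_GC_PARAMS=nursery-size=128m,soft-heap-limit=256m

A 1024m nursery combined with a 64m soft heap limit, as in the question, is also self-contradictory (the nursery would exceed the whole soft limit); starting smaller and doubling, as suggested above, is the usual approach.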
In my application I use quite a lot of assets to render. This caused my application to crash with an exception indicating that there is no more memory left (when allocating a byte array). Using meminfo I've seen that my process uses about 40 MB of memory, which according to my calculations is correct (so no hidden excessive memory allocation in my code).
The total memory usage on my system is 300 MB. My tablet, however, has 1 GB of memory, and I wonder why it throws an exception at a usage of 300 MB. Is there some per-process limit that I need to change? Or are there other things I'm missing about Android's memory management?
Yes, Android imposes a per-process heap limit that is much smaller than total device RAM. You can raise it by adding this to the AndroidManifest, in the application tag:
android:largeHeap="true"
This can make things work, but the app will consume more memory and hence trigger more GC passes.
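If you want to see the actual limits the device grants (and confirm that largeHeap changed anything), a small check like this can help; the function name and log tag are illustrative:

import android.app.ActivityManager
import android.content.Context
import android.util.Log

fun logHeapLimits(context: Context) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    // memoryClass is the normal per-process heap limit in MB; largeMemoryClass is
    // the limit when android:largeHeap="true" is set.
    Log.d("HeapLimits", "standard=${am.memoryClass}MB, large=${am.largeMemoryClass}MB, " +
        "current max for this process=${Runtime.getRuntime().maxMemory() / (1024 * 1024)}MB")
}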
Hello, I am doing some runtime calculations to get native heap memory and allocated memory at runtime. Can anyone suggest
what the difference is between Debug.getNativeHeapAllocatedSize() and Runtime.getRuntime().totalMemory(),
so I can protect the app from an OutOfMemoryError?
Thanks
Runtime.getRuntime().totalMemory()
Returns the amount of memory currently claimed by the managed heap for the running program (see Runtime.getRuntime().maxMemory() for the limit it can grow to).
getNativeHeapAllocatedSize()
On devices below Honeycomb, most of the huge allocations (e.g. bitmap pixel data) are deferred to the native heap. Hence this API is useful to find out how much of the native heap is allocated.
OOM errors occur when there are no more objects the VM can free. Typically you have about 16 MB of heap to play with on a standard phone of that era (newer devices grant more). Check your logs for GC statements with info about how much memory is allocated.
I don't think there is a fixed ratio that causes an OOM error; for example, when you load a very large bitmap, the native memory used is huge.
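A small sketch that logs both figures side by side, so the difference is visible at runtime (names are illustrative):

import android.os.Debug
import android.util.Log

fun logHeaps() {
    val rt = Runtime.getRuntime()
    val managedUsedKb = (rt.totalMemory() - rt.freeMemory()) / 1024   // managed objects in use
    val managedClaimedKb = rt.totalMemory() / 1024                    // what totalMemory() reports
    val managedMaxKb = rt.maxMemory() / 1024                          // hard limit before OOM
    val nativeAllocatedKb = Debug.getNativeHeapAllocatedSize() / 1024 // native (malloc) heap

    Log.d("Heaps", "managed used=${managedUsedKb}KB claimed=${managedClaimedKb}KB " +
        "max=${managedMaxKb}KB; native allocated=${nativeAllocatedKb}KB")
}

Watching managed used approach managed max is the more useful early warning for the java.lang.OutOfMemoryError case.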