I have an annoying problem:
I'm fetching a lot of GeoJSON data from a server. This works, even with a 16MB heap and even if I start and stop the app several times; the memory consumption stays constant and never exceeds the limit. But there is one case where I do exceed the 16MB heap. Let me describe it briefly:
The app is "quit" via the home button, so it stays in the background and is not destroyed yet. When the app is resumed, my "controller", which is a part of my app, checks for new GeoJSON data. If there is a GeoJSON data update, the app downloads and processes it, and here the problem begins: if the app was already started before and is resumed from the background, the 16MB heap is not enough for the following code (if and only if the app is resumed from the background instead of being started fresh):
private synchronized String readUrlData(HttpURLConnection urlConnection) throws IOException {
    Log.i(TAG, "Start reading data from URL");
    urlConnection.setReadTimeout(1000 * 45); // 45 sec
    Reader reader = new InputStreamReader(urlConnection.getInputStream());
    StringBuilder sb = new StringBuilder(1024 * 16);
    char[] chars = new char[1024 * 16]; // 16k buffer
    int len;
    while ((len = reader.read(chars)) >= 0) {
        sb.append(chars, 0, len);
    }
    reader.close();
    Log.i(TAG, "Finished reading data from URL");
    return sb.toString();
}
I get an OutOfMemoryError either in append() or in toString(). Obviously the app takes a little too much memory for this when it has somehow been used before. I have already tried to find a more resource-friendly way to write the code above, but haven't found one. Again, if the app is started fresh, there are never any problems. And I'm absolutely sure that I don't have any memory leaks, because:
I checked this part and more with MAT, and there was never more than 1.6MB occupied (this is the GeoJSON data).
I performed this use case several times consecutively with a 24MB heap size.
If there were a memory leak, it would have crashed after the 3rd or 4th time even with the 24MB heap, but it ran without problems.
I know how to avoid the crash: I could show an AlertDialog telling the user that new GeoJSON data is available and that the app needs to be restarted. But there is a catch. If the application is "terminated" by the Activity's finish(), the process still remains in memory, so when it restarts, the crash comes again because the memory is never deallocated (at least I can't rely on it in most cases). I already figured out that System.exit(0) instead of finish() would free all memory because it kills the whole process, so no crash occurs after a restart with the new GeoJSON data. But I know this is not a good solution. I also tried System.gc() at important points, but this doesn't work either. Any ideas how to deal with this problem? Probably I need something like restarting the app while deallocating all used memory.
Another solution could be to redesign the code above, but I don't think it's possible to squeeze more MBs out of it.
If I don't find a reasonable solution for this, I will use System.exit(0) when the heap is 16MB (I think there is a way to check that) to restart the app.
Here are some ideas:
As soon as you know there is anything new to read, set the old data to null so that it can be GC-ed.
Use a service that does this task and runs in a different process (thus having at least 16MB just for this task). Once it has handled the data, pass it in some way to whatever component needs it, nulling the old data beforehand.
Decode the data as you get it, instead of getting all of it and then decoding it (see the sketch after this list).
Compress the data so that while reading it, it is decompressed on the way.
Use an alternative to JSON, such as Google's protocol buffers or your own customized data type.
Most of the devices out there have more than 16MB of heap memory. You could simply set the min SDK version to 8 or 10, since most devices with more memory than this also have a higher API level.
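For the "decode as you get it" idea, a minimal sketch using android.util.JsonReader (API 11+) could look like the following; it parses the GeoJSON features one at a time instead of buffering the whole document into a String first. The handleFeature() callback is a hypothetical placeholder, not part of the original app:

// Hypothetical sketch: stream-parse the GeoJSON features instead of
// building the full String in memory first.
private void streamFeatures(HttpURLConnection urlConnection) throws IOException {
    JsonReader reader = new JsonReader(
            new InputStreamReader(urlConnection.getInputStream(), "UTF-8"));
    try {
        reader.beginObject();               // top-level GeoJSON object
        while (reader.hasNext()) {
            String name = reader.nextName();
            if (name.equals("features")) {
                reader.beginArray();        // walk the feature array one entry at a time
                while (reader.hasNext()) {
                    handleFeature(reader);  // placeholder: consume exactly one feature
                }
                reader.endArray();
            } else {
                reader.skipValue();         // e.g. "type", "crs", ...
            }
        }
        reader.endObject();
    } finally {
        reader.close();
    }
}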
Thanks for your answers. In the end I decided to keep System.exit(0) in this one special case, because the data is hardly ever updated, and even if it is, it would be a coincidence for that to happen exactly while someone has the app running in the background.
Related
I started programming for Android a few months ago, building my first app, which takes orders like a waiter's app.
As it was my first app, you can imagine that I wasn't aiming for optimization, and that was a big mistake. I've deployed the app in some restaurants, and up to this moment everything was going well.
The issue:
Now the app has been installed in a restaurant where they take a lot of orders for a single table; taking the orders can take up to 10 minutes, and yesterday I got a call saying that the app was crashing on tables with a lot of people.
Now I'm trying to reproduce at my company what happened.
The App:
The app consists of a main screen where the user can choose to enter the settings or the orders; then there is a "login" activity and then the "main" activity, where an alert asks for the table number, and then the waiter takes the order and sends it.
After sending, I'm just "resetting" the activity, or rather clearing the RecyclerView of its items and setting all values back to 0, but I think this produces a "little" memory leak.
Conclusion:
Now I would like as many suggestions as possible on how I can improve my app's performance, or better, how I could prevent memory leaks, or even whether it's possible to "recreate" the activity in some way after the waiter sends the receipt, so I could "save" some memory.
Here is a screenshot of the profiling session; I don't know if it could help.
Good practices to avoid memory leaks IMHO:
avoid storing an Android Context statically in helper or data classes
remember to unregister broadcast receivers once their job is finished. A good practice is to register inside the onResume() method and unregister inside the onPause() method (see the sketch after this list)
prefer the usage of LiveData for your "model" classes
use the LeakCanary library to detect any potential leak
be careful with the usage of static variables; remember to set them to null once they are no longer useful to the application, so that they become eligible for garbage collection
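A minimal sketch of that register/unregister pattern, assuming a hypothetical activity and receiver (the connectivity action is just an example):

public class OrdersActivity extends AppCompatActivity {

    // Hypothetical receiver; the action used below is only an example.
    private final BroadcastReceiver connectivityReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // react to connectivity changes here
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        registerReceiver(connectivityReceiver,
                new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
    }

    @Override
    protected void onPause() {
        unregisterReceiver(connectivityReceiver);  // symmetric with onResume(), so no leak
        super.onPause();
    }
}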
You can find an interesting article about this topic on Medium
First of all, you don't seem to have a crash reporting library in your app, so the first thing to do is to add Crashlytics to your app.
Get clear, actionable insight into app issues with this powerful crash reporting solution for Android
Then, if you think you have leaks in your app, I suggest you use LeakCanary.
A memory leak detection library for Android and Java.
Once Crashlytics is in place, you will know where the issue comes from. It can point you to the exact line of code that crashes your app.
Do you have any subscribers to events that are never unsubscribed? This is a common pitfall for memory leaks. I suggest that you use a memory profiler to profile the memory usage. This article gives you an introduction if you are using Android Studio:
https://developer.android.com/studio/profile/memory-profiler
As highlighted by the others, without specifics it is hard to know the cause of the memory leak.
I wrote apps for Android tablets at restaurants in Beijing (Lily's American Diners), and I struggled with leaks for a long time. The problem is that you have Android tablets running for 14 hours a day, so you need a very robust architecture, because things do leak and do get torn down by the OS.
1) LeakCanary and Crashlytics are good ideas. I used ACRA.
2) Try to avoid popups, global variables, all the obvious things.
3) I use a background service to handle uploading data to the server, downloading menus from the server, and printing to POS printers.
4) For network operations, use OkHttp; it makes all the async network operations cleaner.
5) For image loading, use Square's Picasso; it makes all the image caching cleaner.
6) In your lifecycle architecture, minimize activities. I use one single-instance activity for the splash screen initial load and one single-instance activity if they choose to do the manager operations, settings, etc.; then there is one activity that runs the whole day long and handles everything. Within that activity, I use a ViewFlipper to quickly change "pages" (views) for each of the following operations (see the sketch after this list):
a) choosing items from the menu
b) showing the ticket of selected items
c) showing large pics of the dish or special options for each dish
d) showing a successful sent order
e) showing an unsuccessfully sent order
f) showing the choose table screen for customers or waiters, whoever does it.
g) showing daily specials
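A minimal sketch of that single-activity, ViewFlipper-based approach, assuming the flipper and its child order are defined in a layout (the ids and page constants here are hypothetical):

public class MainActivity extends Activity {

    // Hypothetical child indices, matching the order of children in the layout XML.
    private static final int PAGE_MENU = 0;
    private static final int PAGE_TICKET = 1;
    private static final int PAGE_ORDER_SENT = 2;

    private ViewFlipper flipper;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);              // layout containing the ViewFlipper
        flipper = (ViewFlipper) findViewById(R.id.view_flipper);
        showPage(PAGE_MENU);                                  // start on the menu "page"
    }

    // Switch "pages" without starting a new Activity.
    private void showPage(int page) {
        flipper.setDisplayedChild(page);
    }
}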
The other big issue is persistent storage, so orders and menus don't get lost during the long-running operation of these kinds of apps. I made extensive use of SharedPreferences.
Good luck.
There are many different possibilities, for example:
A database that remains open
Data that is not encapsulated properly
SharedPreferences
These are common places from where memory may leak.
I have a small Android application that makes a server call to post some user data.
Following is the code :
private boolean completed = false;
private String response;

public String postData(final Data data) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                response = callApi(data);
                completed = true;
            } catch (Exception e) {
                Log.e("API Error", e.getMessage());
                completed = true;
                return;
            }
        }
    }).start();

    while (!completed) {
        // Log.i("Inside loop","yes");
    }
    return response.toString();
}
The above method calls the API to post data and returns the response received, and this works fine.
The loop at the bottom is a UI-blocking loop that blocks the UI until a response or an error is received.
The problem :
I tried the same code on a Marshmallow device and an Oreo device, and the results were different.
For Marshmallow: things worked in line with my expectations. :)
For Oreo (8.1.0) :
The very first API call after I open the app works well enough. However, the subsequent API calls cause the UI to block forever, although an error or response is received from the server (verified by logging and debugging).
However, when setting breakpoints (running in debug mode), the app moves along with much less trouble.
It seems the system is unable to exit the UI-blocking loop although the condition is met.
The second behavior I noticed is that when I log a message inside the UI-blocking loop, the system is able to exit the loop and return from the method, though the API response is not logged.
Could someone help me understand this inconsistency across these two flavors of Android, and what change could have been introduced that causes such behavior on Oreo but not on Marshmallow?
Any insight would be extremely helpful.
It's more likely to be differences in the processor cache implementation in the two different hardware devices you're using. Probably not the JVM at all.
Memory consistency is a pretty complicated topic, I recommend checking out a tutorial like this for a more in-depth treatment. Also see this java memory model explainer for details on the guarantees that the JVM will provide, irrespective of your hardware.
I'll explain a hypothetical scenario in which the behavior you've observed could happen, without knowing the specific details of your chipset:
HYPOTHETICAL SCENARIO
Two threads: Your "UI thread" (let's say it's running on core 1), and the "background thread" (core 2). Your variable, completed, is assigned a single, fixed memory location at compile time (assume that we have dereferenced this, etc., and we've established what that location is). completed is represented by a single byte, initial value of "0".
The UI thread, on core 1, quickly reaches the busy-wait loop. The first time it tries to read completed, there is a "cache miss". Thus the request goes through the cache, and reads completed (along with the other 31 bytes in the cache line) out of main memory. Now that the cache line is in core 1's L1 cache, it reads the value, and it finds that it is "0". (Cores are not connected directly to main memory; they can only access it via their cache.) So the busy-wait continues; core 1 requests the same memory location, completed, again and again, but instead of a cache miss, L1 is now able to satisfy each request, and need no longer communicate with main memory.
Meanwhile, on core 2, the background thread is working to complete the API call. Eventually it finishes, and attempts to write a "1" to that same memory location, completed. Again, there is a cache miss, and the same sort of thing happens. Core 2 writes a "1" into appropriate location in its own L1 cache. But that cache line doesn't necessarily get written back to main memory yet. Even if it did, core 1 isn't referencing main memory anyway, so it wouldn't see the change. Core 2 then completes the thread, returns, and goes off to do work someplace else.
(By the time core 2 is assigned to a different process, its cache has probably been synchronized to main memory, and flushed. So, the "1" does make it back to main memory. Not that that makes any difference to core 1, which continues to run exclusively from its L1 cache.)
And things continue in this way, until something happens to suggest to core 1's cache that it is dirty, and it needs to refresh. As I mentioned in the comments, this could be a fence occurring as part of a System.out.println() call, debugger entry, etc. Naturally, if you had used a synchronized block, the compiler would've placed a fence in your own code.
TAKEAWAYS
...and that's why you always protect access to shared variables with a synchronized block! (So you don't have to spend days reading processor manuals, trying to understand the details of the memory model of the particular hardware you are using, just to share a byte of information between two threads.) The volatile keyword will also solve the problem, but see some of the links in the Jenkov article for scenarios in which it is insufficient.
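Applied to the code in the question, a minimal sketch of the volatile fix could look like this (the busy-wait is kept only to illustrate the visibility point; in practice the call should not block the UI thread at all):

// Marking the flag volatile forces its writes to become visible to the
// reading thread, so the UI thread's loop actually sees the update.
private volatile boolean completed = false;
private volatile String response;

public String postData(final Data data) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                response = callApi(data);
            } catch (Exception e) {
                Log.e("API Error", e.getMessage());
            } finally {
                completed = true;   // always release the waiting thread
            }
        }
    }).start();

    while (!completed) {
        // busy-wait: the change to 'completed' is now guaranteed to be seen
    }
    return response;
}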
I have an Android app that needs to store critical information coming from a sensor. The sensor updates its data every 5 ms. I need to persist each of these data points in text files in internal memory.
In the current scenario, I am collecting data points in memory for 2 seconds and then writing them to the file at the end of the 2 seconds to save battery life. However, when the app crashes, I am losing the critical data points.
Does anyone have any suggestions on how to handle this?
Is it a good idea to write the data point to the file every 5 ms? Would this significantly reduce the battery life and increase the load on the CPU? If anyone has come across a similar situation, can you please share how you resolved the issue?
I would suggest you study the reason for your app crashes. If your app is crashing because of internal exceptions, there is a better way of dealing with this.
Write good exception handling and use those catch blocks to write the data to internal memory whenever an exception is generated. Restart the app after the data has been successfully written.
If your app is crashing for external reasons and you are unable to catch the exceptions, you have to think of some other way.
As your app is critical, I would look into setting up a DefaultUncaughtExceptionHandler by calling Thread.setDefaultUncaughtExceptionHandler in your Application class. This way, in the handler, you can write all unsaved data, AND you can restart the app for continued handling of your critical data. I would put a seconds counter in there to prevent an infinite loop of crashes. The open-source ACRA library uses Thread.setDefaultUncaughtExceptionHandler, so you may get some ideas from there on how to use it.
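A minimal sketch of that idea, assuming a hypothetical flushUnsavedData() helper and leaving out the restart and loop-guard logic:

// Hypothetical sketch: flush unsaved sensor data before the process dies.
// flushUnsavedData() is a placeholder for however the app buffers its points.
public class SensorApp extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        final Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();

        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread thread, Throwable throwable) {
                flushUnsavedData();                                 // persist whatever is still in memory
                if (previous != null) {
                    previous.uncaughtException(thread, throwable);  // keep default crash handling
                }
            }
        });
    }

    private void flushUnsavedData() {
        // write the buffered data points to internal storage here
    }
}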
An additional idea is to write the data using a service in a separate process; search for "Remote Service". This way, even if the app crashes, the service will still be alive. You will have to set up some functionality for sharing the data between the app and the service. If the app is really critical, I would set up two remote services: one that gets the info from the sensor (and caches it as a backup until it is confirmed written), and one that caches the data and writes it every few seconds. Each service should also have a DefaultUncaughtExceptionHandler as above. This is in addition to the actual app, which will have the user interface. It is a bit of a waste of resources, but for critical data it is not wasted.
I don't think there's a good method. What is more important is maybe to avoid the crash in the first place.
Instead of writing to a file every 5 ms, which would be a costly operation, you can save the data to SharedPreferences every 5 ms and, every 2 seconds, move the data from SharedPreferences to the file. The SharedPreferences content won't be deleted even if the app crashes, and hence you will not lose any data.
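A rough sketch of that buffering idea, with hypothetical key names and a very simple append-to-one-key scheme; a real implementation would have to weigh commit() against apply() and the cost of rewriting a growing string:

// Hypothetical sketch: buffer points in SharedPreferences, flush them to a file later.
// "sensor_buffer" and the CSV-style format are illustrative only.
public class SensorBuffer {
    private static final String PREFS = "sensor_buffer";
    private static final String KEY_POINTS = "points";

    private final SharedPreferences prefs;

    public SensorBuffer(Context context) {
        prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
    }

    // Called for every new data point (~every 5 ms).
    public void addPoint(long timestamp, float value) {
        String existing = prefs.getString(KEY_POINTS, "");
        prefs.edit()
             .putString(KEY_POINTS, existing + timestamp + "," + value + "\n")
             .commit();   // synchronous, so the point survives an immediate crash
    }

    // Called every ~2 seconds: move the buffered points into the text file.
    public void flushTo(FileOutputStream out) throws IOException {
        String points = prefs.getString(KEY_POINTS, "");
        out.write(points.getBytes());
        prefs.edit().remove(KEY_POINTS).commit();
    }
}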
I have an app that lets users pick an image from the SD card, and then the app processes the image. I am downsizing images to 1/5 of the available VM memory and I call recycle() for every bitmap in onDestroy(), and I still get an OutOfMemoryError if I close and open my app multiple times.
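(For context, the down-sampling mentioned above is typically done with a two-pass decode along these lines; the reqWidth/reqHeight targets here are hypothetical, not values from the question:)

// Illustrative two-pass decode with inSampleSize; the target dimensions are hypothetical.
public static Bitmap decodeScaled(String path, int reqWidth, int reqHeight) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;          // first pass: read dimensions only
    BitmapFactory.decodeFile(path, options);

    int inSampleSize = 1;
    while (options.outWidth / (inSampleSize * 2) >= reqWidth
            && options.outHeight / (inSampleSize * 2) >= reqHeight) {
        inSampleSize *= 2;                      // powers of two keep the decoder fast
    }

    options.inJustDecodeBounds = false;         // second pass: decode the scaled bitmap
    options.inSampleSize = inSampleSize;
    return BitmapFactory.decodeFile(path, options);
}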
There are various memory leak scenarios in Android. One way to track them down is to use the Traceview tool http://developer.android.com/guide/developing/debugging/debugging-tracing.html.
For more info on common Android memory leak problems see http://android-developers.blogspot.co.uk/2009/01/avoiding-memory-leaks.html
Note that when you finish the last Activity of the app, the Java process of your app may (and in most cases will) stay alive, meaning all static state is still alive when you "start" the app again. Do you store any heavy objects in static fields?
Also note that, according to the Activity lifecycle, onDestroy() is not guaranteed to be called. However, I don't think this is related, because when you (as opposed to the OS) close the Activity (either by pressing the 'Back' button or by calling finish() from the code), the OS always calls onDestroy().
In general, without seeing the code it is difficult to say what happens.
I use three files for storing local data for my app, two of which get checked on app start-up and updated remotely (if a newer version is available or the files do not yet exist). The third is for user data that can periodically be stored while the app is running.
All three use the same method to save the file:
public boolean setLocalFile(String Filename, String FileText, Context con) {
    try {
        FileOutputStream fos = con.openFileOutput(Filename, Context.MODE_PRIVATE);
        fos.write(FileText.getBytes());
        fos.close();
        return true;
    } catch (Exception e) {
        handleError(e); // local method that simply does a System.out.println
        return false;
    }
}
Now the third file writes fine, but the first two (which are checked and written on start-up) don't write at all. In debug, it appears as if the setLocalFile method is completely skipped, without throwing an exception or crashing the app, and the only error logs reported appear to be:
07-11 16:14:13.162: ERROR/AndroidRuntime(1882): ERROR: thread attach failed
07-11 16:14:18.882: ERROR/gralloc(62): [unregister] handle 0x3bfe40 still locked (state=40000001)
I've not found anything useful online, in relation to these either, unfortunately.
It's got me stumped - I have no idea why it's not writing in this particular case. Any ideas?
A belated update... Giving the code where this issue was occurring would probably not be too practical, as I suspect one would need practically all of my code to examine it fully.
My variable names began with capitals largely because I've just been getting back into Java after about 10 years with other languages, so it took me a while to get back into the swing of things; not pretty, but I would be surprised if it made a difference. Nonetheless, thanks, guys, for the responses.
The background to the issue was that I had decided to handle the data held in these files as class objects. So the Drafts object would hold all the draft documents, and the class constructor would call the getLocalFile method and store the result locally in the class, allowing me to work with it (using various get/set-type methods) before writing it back to a file with a 'commit' method (calling setLocalFile).
There were various such class objects being created, but never more than one instantiated per data file. What stumped me was that everything was working fine except that the file operation was simply being ignored. Once I accessed the files directly (ignoring the class wrappers I'd written), the problem vanished.
The class wrappers work fine in other parts of the app and this problem only occurred in the one, complex and rather intensive, section of it.
As I said, I'm only getting back into Java after an extended absence (and much has changed), but looking at the issue and how it 'resolved' itself, my guess is that it was some form of threading / memory issue: essentially I was accessing the file system from too many objects and eventually it decided it didn't want to play anymore.
The problem, as I said, is resolved, but it bugs me that the solution did not allow me to wrap my data objects more elegantly. If someone can suggest a cause / solution for this, I'll happily try it out and report back; otherwise this response may help someone with a similar problem in the future, at least to find an inelegant solution...