This question is for Android developers who are familiar with the Activity Lifecycle.
I'm developing an app that performs face detection and facial landmark recognition.
The corresponding machine learning models take a long time to parse from SD storage and load into memory; on current average Android devices, it easily takes up to 20 seconds. By the way, all of this face analysis and model loading happens in C++ native code, which is integrated using the Android NDK + JNI.
Because the model loading takes so long, the parsing and loading are scheduled early in the background via AsyncTasks, so that the user does not notice a huge delay.
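The scheduling looks roughly like this (a sketch only; nativeLoadModels(), the library name, and the model directory are hypothetical placeholders, since the real loading happens in my C++ code):

```java
import android.os.AsyncTask;

// Sketch only: nativeLoadModels() is a hypothetical JNI binding into the C++ loading code.
public class ModelLoaderTask extends AsyncTask<Void, Void, Boolean> {

    static {
        System.loadLibrary("faceanalysis"); // illustrative library name
    }

    private static native boolean nativeLoadModels(String modelDir);

    private final String modelDir;

    public ModelLoaderTask(String modelDir) {
        this.modelDir = modelDir;
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Runs on a background thread, so the ~20 second parse does not block the UI.
        return nativeLoadModels(modelDir);
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // Runs on the main thread; mark face analysis as ready (or report the failure).
    }
}
```

The task is kicked off early (in onCreate()) via execute(), long before the user reaches the selfie step.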
Before the actual face analysis is performed, the user can take a selfie via MediaStore.ACTION_IMAGE_CAPTURE. This launches a separate camera app installed on the device, and my app receives the picture via onActivityResult.
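The capture flow looks roughly like this (a sketch; the request code, field names, and the output Uri handling are illustrative, not my exact code):

```java
import android.content.Intent;
import android.net.Uri;
import android.provider.MediaStore;

// Inside the Activity; REQUEST_SELFIE and selfieUri are illustrative placeholders.
private static final int REQUEST_SELFIE = 42;
private Uri selfieUri;

private void takeSelfie() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    // Ask the external camera app to write the full-size picture to a location we control.
    intent.putExtra(MediaStore.EXTRA_OUTPUT, selfieUri);
    startActivityForResult(intent, REQUEST_SELFIE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SELFIE && resultCode == RESULT_OK) {
        // The picture should now be at selfieUri; hand it to the native face analysis code.
    }
}
```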
Now the problem starts: almost always the whole app process will be killed while the user is in the separate camera Activity/App. Mostly it seems to happen right before returning from the camera app (the timing seems odd). I did another test to confirm that it happens when the capture button is pressed inside the camera app; at that moment, my app is killed. When the 'Accept image' button is pressed, the app is recreated. The reason given in logcat by the ActivityManager for the process kill is 'prev LAST' (I've found nothing via Google on what this means, but I saw that many other apps are also killed with this reason, so it seems to happen quite often).
Thus, all of my app's Activities need to be recreated by Android (fine by me, because that happens fast), but the face analysis models must also be loaded again from scratch, and the user will notice a huge delay before their selfie can be processed.
My question is: is there any way to tell Android that an Activity/App has a legitimate reason not to be killed while it is temporarily in the background to get a camera picture? After all, the ActivityManager makes a wrong decision to kill the app. Having to reload the models so frequently takes up a lot of CPU and memory resources.
It seems like an oversight in the Android lifecycle architecture. I know that few apps have the specific requirements of my app, but still, it seems stupid. The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this runs counter to Android's own best practices.
There is also an 'android:persistent' flag that you can set on your application in AndroidManifest.xml, but the docs are totally unclear about its implications. See the docs on this.
By the way: onDestroy is not called when the app process is killed. I've read somewhere that there is no guarantee that onDestroy will be called, and this is actually not a problem for me, although I wonder why the Android docs do not state this clearly.
Almost always the whole app process will be killed while the user is in the separate camera Activity/App
This is not surprising. A camera app can consume quite a bit of memory, so Android needs to free up memory by terminating background apps' processes.
After all, the ActivityManager makes a wrong decision to kill the app
Given that a likely alternative is the OS crashing, I suspect the user would agree with the OS decision to terminate your process.
Having to reload the models so frequently takes up a lot of CPU and memory resources.
Then perhaps you should not be starting another app from yours. Take the photo yourself. Use the camera APIs directly, or use libraries like Fotoapparat and CameraKit-Android as simpler wrappers around those APIs.
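For example, with the classic android.hardware.Camera API, a bare-bones in-app capture might look something like this (sketch only; real code needs a preview surface wired up in the layout, error handling, and a permission check):

```java
import android.hardware.Camera;
import android.view.SurfaceHolder;
import java.io.IOException;

// Bare-bones sketch: assumes a SurfaceView in the layout provides the SurfaceHolder,
// and that the CAMERA permission has already been granted.
private Camera camera;

private void startPreview(SurfaceHolder holder) throws IOException {
    camera = Camera.open();            // may return null or throw if the camera is unavailable
    camera.setPreviewDisplay(holder);
    camera.startPreview();
}

private void capture() {
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] jpegData, Camera cam) {
            // jpegData holds the JPEG bytes; save them or hand them to your processing code.
            cam.startPreview();         // preview stops after takePicture(); restart if needed
        }
    });
}

private void stop() {
    camera.stopPreview();
    camera.release();                   // give the camera back to the system when done
}
```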
The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this runs counter to Android's own best practices.
By that argument, no device would ever have a camera app, as writing any camera app "runs counter to Android's own best practices".
Any app that needs a camera must use the camera APIs (directly or indirectly) to have any shot at reliable behavior. You are assuming that thousands of camera apps are all properly written and will correctly honor your ACTION_IMAGE_CAPTURE Intent (e.g., putting the results in the place that you designate with EXTRA_OUTPUT). Many camera apps have buggy ACTION_IMAGE_CAPTURE implementations. ACTION_IMAGE_CAPTURE is not unreasonable for cases where you and the user can live without the picture being taken (e.g., a note-taker app that has an "attach photo" feature), but that would not seem to be the case with your app.
Related
I'm just trying to understand the reason for this. Our app is really stable and optimized, but from time to time it simply shuts down while being used in the foreground. The user is working with it, and it just closes without any crash or ANR messages, which is a really bad experience. What we have already done:
Used an UncaughtExceptionHandler and printed logs to disk for analysis (roughly as in the sketch after this list). The logs are empty; there is no information about crashes for these cases.
Used Firebase Crashlytics; it is empty too.
Checked the app with the profiler and LeakCanary: no leaks, and memory usage is around 200 MB.
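The exception handler is wired up roughly like this (a simplified sketch, shown in Java here although our app is Kotlin; the log file name is illustrative):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.PrintWriter;

// Simplified sketch of what we do in Application.onCreate(); "crash.log" is illustrative.
final Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    try (PrintWriter out = new PrintWriter(new FileWriter(new File(getFilesDir(), "crash.log"), true))) {
        // Write the stack trace to disk so it can be analyzed later.
        throwable.printStackTrace(out);
    } catch (Exception ignored) {
        // Nothing else we can do while the process is going down.
    }
    if (previous != null) {
        previous.uncaughtException(thread, throwable); // let the system finish handling the crash
    }
});
```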
A few words about our app:
Clean architecture, Kotlin, MVVM, coroutines, Dagger, Retrofit, Room, a single-Activity app.
The app should run all the time, indefinitely; it is an interface for a hardware terminal.
Most fragments are kept on the back stack and reused, which makes opening a screen faster after its first use. There are no leaks from this, and no duplicated fragments or ViewModels.
We use Glide for downloading and previewing user avatars. I'm afraid the leaks could involve bitmaps or JPEGs. The profiler doesn't show any after 1-2 hours of testing, but I haven't tested over, for example, a whole week.
Possible app interruptions happen more often when the device battery is low or the device has just started (the first 10-20 minutes after boot).
Most clients have bad Wi-Fi connections.
We have ~10 modules, most of them our own canvas libraries.
The crashes happen at random moments...
We also have an ANR problem for some clients, but we added an ANR watchdog, so we will soon know the reason for it.
We have 50-60 singletons; I'm not sure whether that is bad or good. The original plan was to use more memory to make the app faster.
To me it looks like a native crash or a system kill, but how can I reproduce it? I still don't understand the real reason for it. If you have faced a similar problem, please describe your experience; it could be helpful to us. Thanks!
I have an application-specific situation that needs to be handled. My app is an Ionic video surveillance app that uses Cordova for various plugins. It displays streaming JPEG images as one part of its functionality.
The scenarios I need to handle:
If the app goes into the background, it needs to release video resources, which results in the video streams being stopped
If the app is running in multi-window mode (Android 7.0+), it needs to run side by side with another app in split-window mode, even if the user is interacting with the other app.
If the app is running in multi-window mode and the user actually switches it out of view, it needs to release video resources
That being said, here are the following predicaments:
Today, browser apps don't get onStop() as a callback; we only get onPause(). This causes a problem when running in multi-window mode, because the moment the user taps on the other app, my app gets an onPause(), thinks it's going into the background, and releases video resources. Obviously, this is undesirable, and the viewer wants the playback to continue (it's not streaming video but streaming images, so we don't really have a PIP video player situation here).
There is no way, in JS land, to detect multi-window mode
According to the multi-window docs, it is now necessary to differentiate between onStop() and onPause() (and, as a corollary, between onResume() and onStart()) for user experiences like video playback.
Given my specific requirements cited above, I have arrived at the following solution:
I've developed a plugin that lets me trap onStop() and onStart() and multi-window state (as a result of another related question I asked on SO a few days ago)
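The native side of that plugin looks roughly like this (a trimmed-down sketch, assuming a cordova-android version whose CordovaPlugin exposes onStart()/onStop() overrides; the action name, event strings, and kept-callback bridge are just how I happened to wire it, and multi-window detection is omitted here):

```java
import org.apache.cordova.CallbackContext;
import org.apache.cordova.CordovaPlugin;
import org.apache.cordova.PluginResult;
import org.json.JSONArray;

public class LifecyclePlugin extends CordovaPlugin {

    private CallbackContext lifecycleCallback;

    @Override
    public boolean execute(String action, JSONArray args, CallbackContext callbackContext) {
        if ("watchLifecycle".equals(action)) {
            // JS registers once; we keep the callback alive and push events through it.
            lifecycleCallback = callbackContext;
            PluginResult result = new PluginResult(PluginResult.Status.NO_RESULT);
            result.setKeepCallback(true);
            callbackContext.sendPluginResult(result);
            return true;
        }
        return false;
    }

    @Override
    public void onStart() {
        sendEvent("start");
    }

    @Override
    public void onStop() {
        sendEvent("stop");
    }

    private void sendEvent(String event) {
        if (lifecycleCallback != null) {
            PluginResult result = new PluginResult(PluginResult.Status.OK, event);
            result.setKeepCallback(true);  // keep the channel open for future events
            lifecycleCallback.sendPluginResult(result);
        }
    }
}
```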
My approach, therefore, is that if I am running on Android, I will not trap the browser pause and resume events and will rely only on onStop() and onStart() (this simplifies things by avoiding two callbacks for one event, i.e., both pause and stop / resume and start).
I've been testing on my Android app when onPause() is called and when onStop() is called, and it seems to me they are always called together. I've read this may not be true in the case of a dialog-themed Activity; based on the description there, that situation seems fine for my app, as I don't need to clean up video resources then.
Does anyone see a problem with this approach, that is, not trapping the onPause() browser event when running on Android and using the Cordova onStop() instead? While I've tested it, I might be missing something obvious that others could advise me on.
It is critical that my app frees video resources when it actually moves to the background and its UI is not displayed (otherwise, due to some browser issues, this results in big leaks), and hence it is important that I don't miss an event. Therefore, I'd like to make sure that ditching onPause() in favor of onStop() doesn't result in a missed situation. I also need to support users running Android 4.2 and 4.4, for which I use the now-defunct Crosswalk library, so I really need to make sure that by adopting this approach I'm not going with a solution that will only work on modern systems.
Answering my own question:
I've been testing this approach for a while now and I don't see any evident issues - both legacy and new versions of Android seem to behave well.
I am trying to create a TV backlight controller that runs on Android TV. To do this, I want to use what is being displayed and adjust the backlight colors accordingly. I am having trouble determining a good way to regularly capture a screenshot from a service. Does anyone have any ideas?
On Android 5.0+, you can use the media projection APIs for this (a rough sketch follows the list below). However:
The user has to authorize this, by launching one of your activities which then displays a system-supplied authorization dialog
That authorization does not last forever, but only for however long your process is around (and possibly less than that, though so far in my testing it seems to last as long as the process does)
AFAIK, DRM-protected materials will be blocked from the screenshots, and so using those screenshots to try to detect foreground UI colors may prove unreliable
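A rough sketch of the authorization-plus-capture flow (illustrative only; the request code, sizes, and display name are placeholders, and the actual work of reading pixels out of the ImageReader and sampling colors is omitted):

```java
import android.content.Intent;
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.media.ImageReader;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;

// Inside an Activity on API 21+; REQUEST_SCREENSHOT is a placeholder.
private static final int REQUEST_SCREENSHOT = 1337;
private MediaProjectionManager projectionManager;

private void requestCapture() {
    projectionManager = (MediaProjectionManager) getSystemService(MEDIA_PROJECTION_SERVICE);
    // This launches the system-supplied authorization dialog mentioned above.
    startActivityForResult(projectionManager.createScreenCaptureIntent(), REQUEST_SCREENSHOT);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SCREENSHOT && resultCode == RESULT_OK) {
        MediaProjection projection = projectionManager.getMediaProjection(resultCode, data);
        int width = 1280, height = 720, dpi = 320; // placeholders; use the real display metrics
        ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
        projection.createVirtualDisplay("backlight", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                reader.getSurface(), null, null);
        // reader.setOnImageAvailableListener(...) then delivers frames you can sample colors from.
    }
}
```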
I have an object sitting in memory in the application I'm using, and on a button press I call startActivityForResult and launch the camera application, so I can attach a photo to that object. On every phone/tablet I've ever tested with (somewhere around 15 or so) it works completely fine, but for some reason, with the Motorola Droid 3 (CDMA version), once the camera application starts, it's as if onDestroy were called: even though it returns to my app after the photograph is snapped, all of the variables held in memory are erased. Can someone direct me as to how I can fix this, please?
I'm guessing what's happening is that the camera app uses a big enough chunk of memory that Android needs to destroy the paused activity. If you look at this page,
http://developer.android.com/reference/android/app/Activity.html
It shows that possibility pretty clearly.
You are seeing differing behavior on different devices because different devices have different apps loaded into memory and different amounts of memory to begin with.
If you need to save state in your app, you can hook in at onSaveInstanceState() and onRestoreInstanceState(). Here's a post that talks about it in more detail.
How to save an activity state using save instance state?
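For example, a minimal sketch (the key name and field are made up for illustration):

```java
import android.os.Bundle;

// Inside your Activity; "photo_path" and currentPhotoPath are illustrative names.
private String currentPhotoPath;

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putString("photo_path", currentPhotoPath);  // called before the process may be killed
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
    super.onRestoreInstanceState(savedInstanceState);
    // Also available in onCreate() via its savedInstanceState parameter.
    currentPhotoPath = savedInstanceState.getString("photo_path");
}
```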
In summary, don't depend on the state being the same when you resume an activity. If you depend on that happening, you need to handle it yourself.
It is necessary for my application to keep the camera captured until it is released manually by an Activity (I realize this is bad practice, as no other applications will be able to use the camera). I used to be able to do this by avoiding the camera.release() call in the surfaceDestroyed function of CameraPreview, but this no longer works after 2.1.
Is there any way to keep the camera in captivity, without it being automatically released after surfaceDestroyed?
As a workaround question to answer instead of the previous one: is there any way to take a picture from within a Service, without the preview view inflated?
You may be able to call camera.lock() to re-obtain the lock on the camera hardware. Failing that you can reopen the camera.
However, the drawback to this is much worse than preventing other apps from accessing the camera. This will also rapidly drain the battery, because it keeps the camera sensor and DSP powered. According to this thread, that could potentially kill a battery dead in two hours.
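A sketch of what re-acquiring the camera could look like (untested; Camera.lock() and Camera.reconnect() are the framework calls, and falling back to reopening the camera is just one way to handle failure):

```java
import android.hardware.Camera;
import java.io.IOException;

// Sketch only: tries to re-acquire a camera handle held across surfaceDestroyed(),
// falling back to reopening the camera if the lock cannot be regained.
private Camera reacquire(Camera camera) {
    try {
        camera.lock();          // re-obtain the hardware lock, per the suggestion above
        return camera;
    } catch (RuntimeException lockFailed) {
        try {
            camera.reconnect(); // alternative: re-establish the connection to the camera service
            return camera;
        } catch (IOException reconnectFailed) {
            camera.release();   // give up on the old handle and reopen from scratch
            return Camera.open();
        }
    }
}
```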