Any idea how I would go about running the Android CardboardActivity as a service, and still be able to perform VR/OpenGL-intensive tasks?
There is an answer for one special kind of service that is granted permission to access the screen. Further investigation may lead to a more generalized answer.
The Android live wallpaper service does this. The issue, of course, is the permission to bind to the screen frame buffer, which live wallpaper services are given:
android:permission="android.permission.BIND_WALLPAPER"
So if you implement your app as a live wallpaper service, and manage to set it as your app's background, you will also have an app that doubles as a wallpaper. Keep in mind that wallpapers are very constrained in what they can do, but the OP did not specify any other requirements.
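To illustrate the shape of such a service, here is a minimal sketch of a live wallpaper that redraws its Surface from a Handler. The class names are placeholders, and a real VR renderer would hand the Surface to an EGL context and draw with OpenGL ES rather than a Canvas; the service also has to be declared in the manifest with the BIND_WALLPAPER permission shown above.

import android.graphics.Canvas;
import android.graphics.Color;
import android.os.Handler;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class VrWallpaperService extends WallpaperService {

    @Override
    public Engine onCreateEngine() {
        return new VrEngine();
    }

    private class VrEngine extends Engine {
        private final Handler handler = new Handler();
        private final Runnable drawFrame = new Runnable() {
            @Override
            public void run() {
                drawOneFrame();
            }
        };
        private boolean visible;

        @Override
        public void onVisibilityChanged(boolean visible) {
            this.visible = visible;
            if (visible) {
                handler.post(drawFrame);
            } else {
                handler.removeCallbacks(drawFrame);
            }
        }

        private void drawOneFrame() {
            SurfaceHolder holder = getSurfaceHolder();
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                try {
                    // Placeholder rendering; a real implementation would render
                    // the GL scene to this Surface via EGL instead.
                    canvas.drawColor(Color.BLACK);
                } finally {
                    holder.unlockCanvasAndPost(canvas);
                }
            }
            if (visible) {
                handler.postDelayed(drawFrame, 16); // roughly 60 fps
            }
        }
    }
}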
This question is for Android developers who are familiar with the Activity Lifecycle.
I'm developing an app that performs face detection and facial landmark recognition.
The corresponding machine learning models take a long time to be parsed from SD storage and loaded into memory; on current average Android devices it easily takes up to 20 seconds. By the way, all of this face analysis and model loading happens in C++ native code, which is integrated using the Android NDK + JNI.
Because the model loading takes a long time, the actual parsing and loading is scheduled early in the background via AsyncTasks, so that the user does not notice a huge delay.
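A minimal sketch of that background scheduling, assuming a hypothetical JNI entry point nativeLoadModels() and a made-up native library name:

import android.os.AsyncTask;

public class ModelLoader {
    static { System.loadLibrary("faceanalysis"); } // assumed native library name

    // Assumed JNI entry point that parses the models into native memory.
    public static native boolean nativeLoadModels(String modelDir);

    public static void loadInBackground(final String modelDir, final Runnable onReady) {
        new AsyncTask<Void, Void, Boolean>() {
            @Override
            protected Boolean doInBackground(Void... params) {
                return nativeLoadModels(modelDir); // long-running C++ call
            }

            @Override
            protected void onPostExecute(Boolean ok) {
                if (ok && onReady != null) {
                    onReady.run(); // notify the UI that face analysis is ready
                }
            }
        }.execute();
    }
}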
Before the actual face analysis is performed, the user can take a selfie via MediaStore.ACTION_IMAGE_CAPTURE. This launches a separate camera app installed on the device, and my app receives the picture via onActivityResult.
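For reference, the pattern looks roughly like this inside the Activity; the request code is arbitrary and outputUri is assumed to point at a file my app can read (e.g. via a FileProvider):

private static final int REQUEST_SELFIE = 42;
private Uri outputUri;

private void takeSelfie() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    // Ask the camera app to write the full-size image to our URI; without
    // EXTRA_OUTPUT only a small thumbnail is returned in the result Intent.
    intent.putExtra(MediaStore.EXTRA_OUTPUT, outputUri);
    startActivityForResult(intent, REQUEST_SELFIE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SELFIE && resultCode == RESULT_OK) {
        // The photo should now be at outputUri; hand it to the native face analysis.
    }
}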
Now the problem starts: almost always, the whole app process is killed while the user is in the separate camera Activity/app. Mostly it seems to happen right before returning from the camera app (the timing seems odd). I did another test to confirm that it happens when the capture button is pressed inside the camera app: at that moment my app is killed, and when the 'Accept image' button is pressed, the app is recreated. The reason given in logcat by the ActivityManager for the process kill is 'prev LAST' (I've found nothing via Google on the meaning of this, but I saw that many other apps are also killed with this reason, so it seems to happen quite often).
Thus, all of the Activities of my app need to be recreated by Android (fine by me, because it happens fast), but also the face analysis models must be loaded again from scratch, and the user will notice a huge delay before his selfie can be processed.
My question is: Is there any possibility to tell Android that an Activity/App has a legitimate reason to not be killed while being in the background temporarily to get a camera picture? After all, the ActivityManager makes a wrong decision to kill the app. Having to reload the models so frequently takes up a lot of CPU and memory resources.
It seems like an oversight in the Android lifecycle architecture. I know that few apps have the specific requirements of my app, but still, it seems stupid. The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this runs counter to Android's own best practices.
There is also an 'android:persistent' flag that you can set on the <application> element in AndroidManifest.xml, but the docs are totally unclear about the implications of this. See the docs on this.
By the way: onDestroy is not called when the app process is killed. I've read somewhere that there is no guarantee that onDestroy will be called, and this is actually not a problem for me, though I wonder why the Android docs do not state this clearly.
Almost always the whole app process will be killed while the user is in the separate camera Activity/App
This is not surprising. A camera app can consume quite a bit of memory, so Android needs to free up memory by terminating background apps' processes.
After all, the ActivityManager makes a wrong decision to kill the app
Given that a likely alternative is the OS crashing, I suspect the user would agree with the OS decision to terminate your process.
Having to reload the models so frequently takes up a lot of CPU and memory resources.
Then perhaps you should not be starting another app from yours. Take the photo yourself. Use the camera APIs directly, or use libraries like Fotoapparat and CameraKit-Android as simpler wrappers around those APIs.
The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this runs counter to Android's own best practices.
By that argument, no devices would ever have a camera app, as writing any camera app "runs counter to Android's own best practices".
Any app that needs a camera must use the camera APIs (directly or indirectly) to have any shot at reliable behavior. You are assuming that thousands of camera apps are all properly written and will correctly honor your ACTION_IMAGE_CAPTURE Intent (e.g., putting the results in the place that you designate with EXTRA_OUTPUT). Many camera apps have buggy ACTION_IMAGE_CAPTURE implementations. ACTION_IMAGE_CAPTURE is not unreasonable for cases where you and the user can live without the picture being taken (e.g., a note-taker app that has an "attach photo" feature), but that would not seem to be the case with your app.
I am trying to create a TV backlight controller that runs on Android TV. To do this, I want to use what is being displayed and adjust the backlight colors accordingly. I am having trouble determining a good way to regularly grab a screenshot from a service. Does anyone have any ideas?
On Android 5.0+, you can use the media projection APIs for this. However:
The user has to authorize this, by launching one of your activities which then displays a system-supplied authorization dialog
That authorization does not last forever, but only for however long your process is around (and possibly less than that, though so far in my testing it seems to last as long as the process does)
AFAIK, DRM-protected materials will be blocked from the screenshots, and so using those screenshots to try to detect foreground UI colors may prove unreliable
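A minimal sketch of obtaining that authorization on API 21+ follows; the activity and request-code names are placeholders, and actually grabbing frames would additionally require a VirtualDisplay plus an ImageReader, omitted here.

import android.app.Activity;
import android.content.Intent;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.os.Bundle;

public class CaptureAuthActivity extends Activity {
    private static final int REQUEST_PROJECTION = 1;
    private MediaProjectionManager projectionManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        projectionManager =
                (MediaProjectionManager) getSystemService(MEDIA_PROJECTION_SERVICE);
        // Shows the system-supplied "start capturing?" authorization dialog.
        startActivityForResult(projectionManager.createScreenCaptureIntent(),
                REQUEST_PROJECTION);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_PROJECTION && resultCode == RESULT_OK) {
            MediaProjection projection =
                    projectionManager.getMediaProjection(resultCode, data);
            // Hand the projection to your service, which can grab frames for as
            // long as the process (and the projection) lives.
        }
    }
}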
It is a well known issue that many Android phones switch off the accelerometer when the screen goes off. However, something seems to have changed with the Google Fit app: Fit keeps counting steps even when the screen goes off. If Fit is installed, then events are raised for step counting within the Fit environment, and I am able to capture them using
Fitness.SensorsApi.findDataSources(mClient, new DataSourcesRequest.Builder()
        .setDataTypes(DataType.TYPE_STEP_COUNT_CUMULATIVE)
        .build());
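(The events themselves are then delivered to a listener registered along these lines with the same GoogleApiClient-based Fitness API; the sampling rate here is just an example:)

OnDataPointListener listener = new OnDataPointListener() {
    @Override
    public void onDataPoint(DataPoint dataPoint) {
        // Each DataPoint carries the cumulative step count field(s).
    }
};

Fitness.SensorsApi.add(mClient,
        new SensorRequest.Builder()
                .setDataType(DataType.TYPE_STEP_COUNT_CUMULATIVE)
                .setSamplingRate(10, TimeUnit.SECONDS) // illustrative rate
                .build(),
        listener);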
I have tested this on a Samsung S4 and on a OnePlus One, and in both cases the steps are counted.
How do they do that? What Android classes do they use?
My understanding is that the method available since KitKat is to implement a SensorEventListener; for example, theelfismike provides code that implements this. However, on many phones the step counting stops when the screen goes off. Interestingly, the counting does not seem to stop if the Google Fit app is installed (hence I guess they keep the accelerometer on).
Am I missing something? Is the functionality of keeping counting steps after screen off available to the mortal programmers?
Thanks!
As Ilja said, your code keeps running even after the screen is turned off. But in this case I guess we need a slightly different answer.
They definitely use a Service that holds a wakelock and queries the sensors for data. The important part here is holding the wakelock: you have to prevent the device from going to sleep during the lifetime of your service if you don't want to miss any data.
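A rough sketch of that pattern (the service name and wakelock tag are placeholders, and it needs the WAKE_LOCK permission):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.os.PowerManager;

public class StepTrackingService extends Service {
    private PowerManager.WakeLock wakeLock;

    @Override
    public void onCreate() {
        super.onCreate();
        PowerManager pm = (PowerManager) getSystemService(POWER_SERVICE);
        // Keep the CPU running while the screen is off so sensor callbacks keep coming.
        wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "steps:tracking");
        wakeLock.acquire();
        // Register the SensorEventListener here (see the batching example below).
    }

    @Override
    public void onDestroy() {
        if (wakeLock.isHeld()) {
            wakeLock.release();
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}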
But this approach will drain the battery really fast, because in order to detect steps you need to process quite a lot of sensor data.
That's why there is sensor batching. It allows you to get continuous sensor data even without keeping the device awake: the sensor events are stored in a hardware-based queue right in the chip itself and are only sent to your app (service, ...) in batches at predefined intervals. This allows you to do 24/7 monitoring without draining the battery significantly. Please note that only supported chipsets can do this (you can find details in the Android docs); on older phones you need to fall back to the hideous wakelock-keeping method in order to get your data.
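A sketch of a batched registration with the step counter sensor, inside the service (or any other Context); the 60-second maximum report latency is just an example value:

SensorManager sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
Sensor stepCounter = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_COUNTER);

SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values[0] holds the cumulative step count since the last boot.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};

// The fourth argument is the maximum report latency: the sensor hub may buffer
// up to 60 seconds of events and deliver them in one batch while the SoC sleeps.
sensorManager.registerListener(listener, stepCounter,
        SensorManager.SENSOR_DELAY_NORMAL, 60_000_000 /* microseconds */);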
You can also just use the Google Fit APIs, but this only works when both Google Fit and Google Play Services are installed on the device, with monitoring turned on.
Every normal Thread keeps working when the screen goes off or when the Activity loses focus... but when the Activity gets killed, all of its threads are killed too...
However, you can use a Service for long-running tasks, such as polling the accelerometer.
I need to use a background service to launch my application with a voice command, even when the screen is locked. For example, when I say "start", the screen should be unlocked and the application should launch automatically. I tried to make this code work: https://github.com/gast-lib/gast-lib/blob/master/library/src/root/gast/speech/activation/SpeechActivationService.java
but I don't know how to use it, or how to wire the service up with my activity.
I recommend using CMUSphinx to recognize speech continuously. To achieve continuous speech recognition using the Google speech recognition API, you would have to resort to a loop in a background service, which takes too many resources and drains the device battery.
On the other hand, Pocketsphinx works really great. It's fast enough to spot a key phrase and recognize voice commands behind the lock screen without users touching their device. And it does all this offline. You can try the demo.
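The core of the keyphrase spotting with pocketsphinx-android looks roughly like this (adapted from the pattern in the official demo; the asset names and the "start" keyphrase are placeholders, and error handling is omitted):

// All classes below come from the pocketsphinx-android library (edu.cmu.pocketsphinx).
// Run this off the main thread, e.g. in your service; syncAssets() does file I/O.
Assets assets = new Assets(context);
File assetDir = assets.syncAssets(); // copies the bundled model files to storage

SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetDir, "en-us-ptm"))        // bundled acoustic model
        .setDictionary(new File(assetDir, "cmudict-en-us.dict"))  // bundled dictionary
        .getRecognizer();

recognizer.addListener(listener); // your RecognitionListener implementation
// Register a search that only listens for the wake word, then start it.
recognizer.addKeyphraseSearch("wakeup", "start");
recognizer.startListening("wakeup");
// In the listener's onPartialResult(), check for the keyphrase and launch your Activity.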
If you really want to use Google's API, as I've demonstrated above, see this.
I need to develop 2 applications "Sender" and "Receiver". These two will perform screen mirroring from Sender to Receiver.
How can I do this? Are there any built-in APIs / libraries available for this?
Can I use Miracast to achieve this? If yes, please guide me.
Assumption: both devices will remain on the same WiFi network.
To collect the UI from Sender, you can try creating something that looks like my MirroringFrameLayout from the CWAC-Layouts library. That is designed to update a separate Mirror View on the same device as the MirroringFrameLayout, such as having the MirroringFrameLayout on the touchscreen and the Mirror shown on an external display via a Presentation.
The problem you will encounter is performance, as my current approach draws the entire MirroringFrameLayout contents to a Bitmap, which is then shown by the Mirror. That would require you to ship new bitmaps across the network connection on every UI change, which is likely to be slow. So, while my approach is easy, you may need to be much more aware of what your UI is doing, so you can ship over smaller updates.
The best approach may be to stop thinking of "screen mirroring" entirely, and instead focus on "operation mirroring". For example, suppose Sender is a drawing app, and Receiver is supposed to see the drawings. Rather than sending over the screens, send over the drawing operations that the user performs, and apply those same operations on the Receiver.
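As a rough illustration of operation mirroring for that drawing example (the DrawOp class and the socket plumbing here are made up for the sketch):

import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

// A tiny, serializable description of one drawing operation. The Sender applies
// it to its own canvas and also streams it out; the Receiver reads DrawOp objects
// from the socket and replays them on its canvas.
public class DrawOp implements Serializable {
    public final float fromX, fromY, toX, toY;
    public final int color;
    public final float strokeWidth;

    public DrawOp(float fromX, float fromY, float toX, float toY,
                  int color, float strokeWidth) {
        this.fromX = fromX;
        this.fromY = fromY;
        this.toX = toX;
        this.toY = toY;
        this.color = color;
        this.strokeWidth = strokeWidth;
    }
}

// On the Sender, off the main thread, something along these lines:
// Socket socket = new Socket(receiverHost, 9000);
// ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
// out.writeObject(new DrawOp(x0, y0, x1, y1, currentColor, currentWidth));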