I am trying to create a TV backlight controller that runs on Android TV. To do this, I want to use what is being displayed and adjust the backlight colors accordingly. I am having trouble finding a good way to capture a screenshot periodically from a service. Does anyone have any ideas?
On Android 5.0+, you can use the media projection APIs for this. However:
The user has to authorize this, by launching one of your activities which then displays a system-supplied authorization dialog
That authorization does not last forever, but only for however long your process is around (and possibly less than that, though so far in my testing it seems to last as long as the process does)
AFAIK, DRM-protected materials will be blocked from the screenshots, and so using those screenshots to try to detect foreground UI colors may prove unreliable
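To make the flow concrete, here is a minimal sketch of the media projection setup described above. The capture size, the request code, and the processColors() helper are placeholder names for illustration, not part of any library; keeping the projection alive for your service and handling revocation are left out.

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.PixelFormat;
import android.hardware.display.DisplayManager;
import android.media.Image;
import android.media.ImageReader;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.os.Bundle;

public class CaptureActivity extends Activity {
    private static final int REQUEST_SCREENSHOT = 1001; // arbitrary request code
    private MediaProjectionManager mgr;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mgr = (MediaProjectionManager) getSystemService(MEDIA_PROJECTION_SERVICE);
        // Raises the system-supplied authorization dialog mentioned above
        startActivityForResult(mgr.createScreenCaptureIntent(), REQUEST_SCREENSHOT);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_SCREENSHOT && resultCode == RESULT_OK) {
            MediaProjection projection = mgr.getMediaProjection(resultCode, data);

            // Mirror the screen into an ImageReader; a service can poll its frames.
            // Keep references to the projection/reader for as long as you capture.
            ImageReader reader = ImageReader.newInstance(1280, 720, PixelFormat.RGBA_8888, 2);
            projection.createVirtualDisplay("backlight", 1280, 720,
                    getResources().getDisplayMetrics().densityDpi,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                    reader.getSurface(), null, null);

            reader.setOnImageAvailableListener(r -> {
                try (Image image = r.acquireLatestImage()) {
                    if (image != null) {
                        processColors(image); // hypothetical: derive backlight colors
                    }
                }
            }, null);
        }
    }

    private void processColors(Image image) {
        // placeholder for averaging the frame's edge colors for the backlight
    }
}
```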
Related
I'm writing an app that requires frequent location checks using GPS. It needs to get the current location every 10-15 seconds and then write it to a database (the database part is mostly irrelevant for this question, just wanted to throw it out there). The issue I'm facing is that I can't find a decent way to accomplish this. I've tried several things:
Using LocationManager.getBackgroundLocationListener() never seems to work. I've tried it on both Android and iOS and nothing ever seems to happen. It might be something I'm doing wrong, but regardless, I've read that on iOS it will only fire when moving to a different cell tower, so that doesn't meet my needs.
I've tried using Background Fetch, but it's only reliable on Android. The frequency is completely random on iOS and so this won't meet my needs either.
I've tried just starting a new thread that will fetch the current location every 10-15 seconds, but this only works when the screen is on and the app is being used. This won't meet my needs because I need to make sure it continues getting location updates when switching to other apps.
Does anyone have any suggestions on alternative methods to solve this problem? Note that I don't have a background in Objective-C, so a cross-platform solution like Codename One is really my only option here.
Background fetch isn't the right choice in this case, as iOS expects you to declare what you are actually doing. What you are trying to do is prohibited on iOS and might be problematic on Android too.
Fetching location data in the background (especially at this resolution) has serious privacy implications so I doubt it will work. Doing this in the foreground should work fine to allow use cases such as navigation but in the background it might be an issue.
This is further compounded by the fact that location and networking are big battery-draining activities, so there are limits to the resolution you can apply with these APIs. One of the more common use cases for background location is geofencing, which lets you detect when a user approaches an important point of interest. That API exists to reduce polling costs and keep location usage to a minimum.
Notice that both Android and iOS give the user tools to see per-app battery usage, and these can work against misbehaving apps.
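If geofencing covers your use case, a rough sketch in Codename One might look like the following. Treat it as an assumption-laden illustration (the listener class name, coordinates, radius, and expiration are placeholders), and check the current LocationManager/Geofence javadocs for the exact signatures.

```java
import com.codename1.location.Geofence;
import com.codename1.location.GeofenceListener;
import com.codename1.location.Location;
import com.codename1.location.LocationManager;

// Placeholder listener; invoked even when the app is in the background
public class PoiListener implements GeofenceListener {
    @Override
    public void onEntered(String id) {
        // user approached the point of interest; record a sample here
    }

    @Override
    public void onExit(String id) {
        // user left the region
    }

    // Registering the fence, e.g. from your form's code (values are illustrative)
    public static void register() {
        Location poi = new Location();
        poi.setLatitude(40.7128);
        poi.setLongitude(-74.006);
        Geofence fence = new Geofence("poi", poi, 200 /* meters */, 24L * 60 * 60 * 1000 /* ms */);
        LocationManager.getLocationManager().addGeoFencing(PoiListener.class, fence);
    }
}
```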
Is it possible to determine whether a device (non-rooted) is currently in use, even if my app is not in the foreground? Precisely, "in use" means the user made touch events in the last 5 seconds or the display is on.
If so, what specific rights are required?
Thanks
AFAIK, the Android security model will not allow you to record touches if your app is not in the foreground.
There are some crude workarounds, like overlaying a transparent screen to record touches. I'm not sure whether these still work, though.
"in use" means the user made touch events in the last 5 seconds
In Android, that's not practical, short of writing your own custom ROM.
or display is on
In Android, you can find out if the device is in an "interactive" mode or not. This does not strictly align with screen-on/screen-off, as the whole notion of screen-on/screen-off has pretty much fallen by the wayside.
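If the "display is on" half is enough for you, a minimal check along these lines should work without any special permission (isInteractive() requires API 20; the deprecated isScreenOn() is the older equivalent):

```java
import android.content.Context;
import android.os.Build;
import android.os.PowerManager;

public final class DeviceState {
    private DeviceState() {}

    // True while the device is in "interactive" mode
    public static boolean isInteractive(Context context) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT_WATCH) {
            return pm.isInteractive();
        }
        return pm.isScreenOn(); // deprecated, but the pre-API 20 equivalent
    }
}
```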
Any idea how I would go about running the Android CardboardActivity as a service, and still be able to perform VR/OpenGL-intensive tasks?
There is an answer for one special kind of service that gets permission to access the screen. Further investigation may lead to a generalized answer.
The Android live wallpaper service does this. The issue, of course, is the permission to bind to the screen frame buffer, which live wallpaper services are given.
android:permission="android.permission.BIND_WALLPAPER"
So if you implement your app with a live wallpaper service, and manage to set it as your app background, you will also have an app that doubles as a wallpaper. Keep in mind that wallpapers are very constrained in what they can do, but the OP did not specify any other requirements.
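A rough skeleton of that approach, assuming you attach your own GL render thread to the wallpaper surface (VrWallpaperService and the callback bodies are placeholders; the service is declared in the manifest with the BIND_WALLPAPER permission shown above plus an intent filter for android.service.wallpaper.WallpaperService):

```java
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class VrWallpaperService extends WallpaperService {
    @Override
    public Engine onCreateEngine() {
        return new VrEngine();
    }

    private class VrEngine extends Engine {
        @Override
        public void onSurfaceCreated(SurfaceHolder holder) {
            super.onSurfaceCreated(holder);
            // attach your own EGL context / render thread to holder.getSurface()
        }

        @Override
        public void onVisibilityChanged(boolean visible) {
            // pause or resume the render thread when the wallpaper is hidden/shown
        }

        @Override
        public void onSurfaceDestroyed(SurfaceHolder holder) {
            super.onSurfaceDestroyed(holder);
            // tear down the render thread before the surface goes away
        }
    }
}
```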
I need to develop 2 applications "Sender" and "Receiver". These two will perform screen mirroring from Sender to Receiver.
How can I do this? Are there any built-in APIs or libraries available for this?
Can I use Miracast to achieve this? If yes, please guide me.
Assumption: both devices will remain on the same Wi-Fi network.
To collect the UI from Sender, you can try creating something that looks like my MirroringFrameLayout from the CWAC-Layouts library. That is designed to update a separate Mirror View on the same device that has the MirroringFrameLayout, such as having the MirroringFrameLayout on the touchscreen and the Mirror shown on an external display via a Presentation.
The problem you will encounter is performance, as my current approach draws the entire MirroringFrameLayout contents to a Bitmap, which is then shown by the Mirror. That would require you to ship new bitmaps across the network connection on every UI change, which is likely to be slow. So, while my approach is easy, you may need to be much more aware of what your UI is doing so you can ship over smaller updates.
The best approach may be to stop thinking of "screen mirroring" entirely, and instead focus on "operation mirroring". For example, suppose Sender is a drawing app, and Receiver is supposed to see the drawings. Rather than sending over the screens, send over the drawing operations that the user performs, and apply those same operations on the Receiver.
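For the drawing example, here is a hedged sketch of what "operation mirroring" could look like; StrokeOp and its plain Serializable transport are made-up names for illustration, not part of CWAC-Layouts or the Android SDK:

```java
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;

import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// One user operation, small enough to ship over the shared Wi-Fi connection
public class StrokeOp implements Serializable {
    public int color;
    public float strokeWidth;
    public final List<float[]> points = new ArrayList<>(); // {x, y} pairs

    // Sender: serialize the finished stroke instead of a full-screen bitmap
    public static void send(ObjectOutputStream out, StrokeOp op) throws Exception {
        out.writeObject(op);
        out.flush();
    }

    // Receiver: replay the same operation onto its own Canvas
    public static void apply(Canvas canvas, StrokeOp op) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);
        paint.setColor(op.color);
        paint.setStrokeWidth(op.strokeWidth);

        Path path = new Path();
        for (int i = 0; i < op.points.size(); i++) {
            float[] p = op.points.get(i);
            if (i == 0) {
                path.moveTo(p[0], p[1]);
            } else {
                path.lineTo(p[0], p[1]);
            }
        }
        canvas.drawPath(path, paint);
    }
}
```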
I am designing a prank app for Android in which I want to make it look as if someone has called, and then, when the user answers the call, it should play a recorded file. I have finished most of the work apart from one issue I am currently facing.
1) As we all know, the incoming-call UI is somewhat different on every phone, so I want to make sure that my app uses the phone's default UI, so that it looks like a real incoming call.
I am uploading a picture so you'll have a better idea of my query. Since the UI is different on every phone, how can I make sure that when my app makes a fake call, it uses the same UI that is displayed during a real incoming call?
Any help on the problem would be highly appreciated.
What you're trying to do is start the Android dialler app and then change its functionality (have it do nothing) while you play an audio file.
While you could enable the loudspeaker and play audio, you can't make the dialler stay active without a live call. An invalid number would fail, and an actual call would cost money.
What you want to do is actually very hard to achieve. The only thing you could try is to create your own fake dialler activities with different themes (matching the major brands), detect the device manufacturer, and then display the relevant one.
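A small sketch of the manufacturer-detection piece; the returned identifiers are placeholders for the fake dialler activities/themes you would design yourself:

```java
import android.os.Build;

public final class FakeCallRouter {
    private FakeCallRouter() {}

    // Pick which home-made fake call screen to show, based on the device brand
    public static String pickCallTheme() {
        String maker = Build.MANUFACTURER == null ? "" : Build.MANUFACTURER.toLowerCase();
        if (maker.contains("samsung")) {
            return "samsung_style";
        } else if (maker.contains("huawei")) {
            return "huawei_style";
        } else if (maker.contains("xiaomi")) {
            return "xiaomi_style";
        }
        return "generic_style"; // fallback when the manufacturer isn't recognized
    }
}
```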