Wake lock for Mac OS X - android

In Android, there's a wake lock to keep the screen on. Is there a Mac equivalent to keep the screen on for a machine running Mac OS X? If so, what are the APIs?

To add to it, there are also command-line tools, like the built-in caffeinate.

Yes, in OS X this is done at the OS level through IOPMLib, the Power Management subsystem, which is also the subsystem that controls App Nap under OS X Mavericks.
Here's an example of what we do when performing heavy calculations. In our case, we keep the CPU from sleeping, but you can prevent the display from sleeping by using kIOPMAssertionTypePreventUserIdleDisplaySleep where we used kIOPMAssertionTypePreventUserIdleSystemSleep.
#import <IOKit/pwr_mgt/IOPMLib.h> // Power Management assertion APIs

@property IOPMAssertionID currentPowerAssertion;

- (void)assertPowerRequirement:(NSString *)reason
{
    // Don't re-assert if we're already asserting
    if (_currentPowerAssertion)
        return;

    IOPMAssertionID assertionID;
    IOReturn success = IOPMAssertionCreateWithName(
        kIOPMAssertionTypePreventUserIdleSystemSleep, // prevent the CPU from going to sleep
        kIOPMAssertionLevelOn,                        // we are turning this on
        (__bridge CFStringRef)reason,                 // here's why
        &assertionID);                                // reference for de-asserting
    if (success == kIOReturnSuccess) {
        _currentPowerAssertion = assertionID;
    } else {
        NSLog(@"Power assert failed");
    }
}

- (void)deassertPowerRequirement
{
    if (!_currentPowerAssertion)
        return;

    IOReturn success = IOPMAssertionRelease(_currentPowerAssertion);
    if (success != kIOReturnSuccess) {
        NSLog(@"Power de-assert failed");
    }
    _currentPowerAssertion = 0;
}
In this case, this lives in our app delegate, and we keep the currentPowerAssertion property to track the assertion. Since we only use one assertion state and only for one purpose, a single storage slot is enough. However, you can assert multiple times from different parts of your program, as long as you balance the assertions with de-assertions and use an appropriate reason. Apple's specifications mandate that a reason be given (it must not be NULL) and suggest that the application name and task be described in the assertion.
It's important to make sure that you de-assert when you no longer need the assertion. Assertions are kept on a per-app basis, though, so when your app quits they are automatically released.

Related

What is the best way to execute asynchronous code inside CameraX analyze()?

I'm using CameraX's image analysis use case that keeps calling the analyze() method in my custom Analyzer class. Inside analyze(), before doing anything else, I need to send a request to a connected device and wait for its response; the latency is very low and I'm already doing it synchronously with no issues, but I was told it's better to make it asynchronous just in case the device responds too slowly.
I know that MLKit's process() returns a Task<List<T>> and I already attach an onSuccessListener { } to it, so I was wondering if I can use a similar approach (I can't return a Task<T> from my function; how do I create one?). Otherwise, would you suggest threads, coroutines, or something else?
Edit: below is a simplified example of what I'm trying to do. For a given frame sent by the camera, I only need to perform the next analysis in line, then return so that analyze() is called again with the next frame, on which the following analysis is performed.
It might look hacky, but it's for an app that runs continuously in the foreground on a single-purpose device (let's call it Dev A) with no user interaction via touch or other conventional means, so it needs some kind of trigger to start doing what is required.
The trigger might as well be when the first image analysis in line succeeds, but running MLKit or TFLite models on a real-time camera feed all day long makes Dev A overheat excessively. The best solution so far seems to be waiting for the trigger to come from an external device (Dev B) that operates independently.
Since Dev B may respond with some delay, I need to communicate with it asynchronously, hence the question in the first place. While there are certainly several architectural nuances to discuss, the current root of the problem is that I can't decide (or rather, I don't know) how to handle the repeating "connection" with Dev B in a non-blocking way.
I mean, can I just treat this like any other case where multithreading is needed, or might the fact that the camera is involved pose additional problems? The backpressure strategy is set to STRATEGY_KEEP_ONLY_LATEST, so in theory, if the current call to analyze() hasn't finished yet, new frames are dropped and nothing bad happens even if I'm still waiting inside the method for the async call to Dev B to finish. Or am I missing something?
var connected = false
var result = false
var analysis1 = true
var analysis2 = true

override fun analyze(image: ImageProxy) {
    if (!connected) {
        result = connectToDevice() // needs to be async
        connected = true
    }
    // need a positive result to proceed, otherwise start over
    if (!result) {
        connected = false
        return
    }
    if (analysis1) {
        // perform analysis #1...
        analysis1 = false
        // when an analysis is done, exit early and perform the next analysis on the next frame
        return
    }
    if (analysis2) {
        // perform analysis #2...
        analysis2 = false
        // same as above
        return
    }
    // when all analyses are done, reset all flags to start over
    connected = false
    analysis1 = true
    analysis2 = true
}

Android: differentiate between a TV and an STB

How can I differentiate between a TV and an STB/game console on Android TV programmatically? This method (https://developer.android.com/training/tv/start/hardware) won't work because an STB running Android TV is considered a television.
What's Your Goal?
The obvious reason for doing this would be to determine if a device is suitable for a game to be played on. If it's for any other reason, then the purpose needs to be elaborated on in order to receive applicable assistance.
With that said ...
Since it's not possible to query the device type directly, I'd personally look for something that only a game console would be likely to have.
In other words: a game controller/gamepad.
public ArrayList<Integer> getGameControllerIds() {
    ArrayList<Integer> gameControllerDeviceIds = new ArrayList<Integer>();
    int[] deviceIds = InputDevice.getDeviceIds();
    for (int deviceId : deviceIds) {
        InputDevice dev = InputDevice.getDevice(deviceId);
        int sources = dev.getSources();
        // Verify that the device has gamepad buttons, control sticks, or both.
        if (((sources & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD)
                || ((sources & InputDevice.SOURCE_JOYSTICK)
                        == InputDevice.SOURCE_JOYSTICK)) {
            // This device is a game controller. Store its device ID.
            if (!gameControllerDeviceIds.contains(deviceId)) {
                gameControllerDeviceIds.add(deviceId);
            }
        }
    }
    return gameControllerDeviceIds;
}
Of course, it's not fool-proof. Obviously, nothing would be returned if the gamepad(s) were unplugged at the time (not sure when that would happen). Not to mention, some TVs support gamepads (Samsung comes to mind first). But if your intention is to verify that there's an adequate input available for the application, this would be ideal.
If a gamepad isn't present, a message could be displayed stating "Please connect a gamepad," while continuously checking in the background and automatically proceeding once one is detected.
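One way to do that background check (a sketch, not part of the original answer; onControllerConnected() is a hypothetical callback) is to register an InputManager.InputDeviceListener and proceed as soon as a suitable controller appears:
InputManager inputManager = (InputManager) context.getSystemService(Context.INPUT_SERVICE);
inputManager.registerInputDeviceListener(new InputManager.InputDeviceListener() {
    @Override
    public void onInputDeviceAdded(int deviceId) {
        InputDevice dev = InputDevice.getDevice(deviceId);
        int sources = dev.getSources();
        // Same source check as above: gamepad buttons or control sticks.
        if ((sources & InputDevice.SOURCE_GAMEPAD) == InputDevice.SOURCE_GAMEPAD
                || (sources & InputDevice.SOURCE_JOYSTICK) == InputDevice.SOURCE_JOYSTICK) {
            onControllerConnected(deviceId); // hypothetical: dismiss the message and proceed
        }
    }

    @Override
    public void onInputDeviceRemoved(int deviceId) { }

    @Override
    public void onInputDeviceChanged(int deviceId) { }
}, null); // null handler: callbacks arrive on the calling thread's Looper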

What is rate limiting for android app shortcuts?

As per the documentation for app shortcuts:
Rate Limiting
When using the setDynamicShortcuts(), addDynamicShortcuts(), or updateShortcuts() methods, keep in mind that you might only be able to call these methods a specific number of times in a background app, an app with no activities or services currently in the foreground. In a production environment, you can reset this rate limiting by bringing your app to the foreground.
What is rate limiting in relation to app shortcuts, and when should isRateLimitingActive() be used?
Looking at the source code, it seems that the isRateLimitingActive() method returns true if you do not have any remaining calls left to the ShortcutManager API (hence the == 0 in the source below). I guess rate limiting is needed because the API is resource intensive. I can imagine that at least the following will happen when you update a shortcut:
The launcher app (and other listeners) needs to be notified and starts updating its UI or whatever else is needed (depending on the launcher);
The system needs to store the new dynamic shortcut information;
You could use this method to find out if a call to setDynamicShortcuts(), addDynamicShortcuts() or updateShortcuts() will succeed before even trying to do so.
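For example, a minimal sketch of such a guard (buildShortcuts() here is a hypothetical helper that assembles your ShortcutInfo list):
ShortcutManager shortcutManager = context.getSystemService(ShortcutManager.class);
if (shortcutManager != null && !shortcutManager.isRateLimitingActive()) {
    // We still have calls left, so the update should be accepted.
    boolean accepted = shortcutManager.setDynamicShortcuts(buildShortcuts());
    if (!accepted) {
        // The call was rejected anyway (e.g. rate limited in the meantime);
        // retry later, for instance the next time the app comes to the foreground.
    }
}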
Source:
/**
* Return {#code true} when rate-limiting is active for the caller application.
*
* <p>See the class level javadoc for details.
*
* #throws IllegalStateException when the user is locked.
*/
public boolean isRateLimitingActive() {
try {
return mService.getRemainingCallCount(mContext.getPackageName(), injectMyUserId())
== 0;
} catch (RemoteException e) {
throw e.rethrowFromSystemServer();
}
}
Bonus: setDynamicShortcuts(), addDynamicShortcuts() or updateShortcuts() return false if they did not succeed due to Rate Limiting.
The recommended maximum number of shortcuts is 4, although it is possible to publish up to 5. You can read more here.

Check if app runs on tablet

You can check if your app is running on different versions of Android using the following:
int sdk = Build.VERSION.SDK_INT;
if (sdk <= Build.VERSION_CODES.ECLAIR) {
    // do stuff
} else {
    // do some other stuff
}
Is there such a check for tablet use? Currently I have two ways to find out, neither of which I like:
Configuration configuration = getResources().getConfiguration();
if (configuration.orientation == Configuration.ORIENTATION_LANDSCAPE) {
    try {
        return configuration.screenWidthDp >= 1024;
    } catch (NoSuchFieldError ex) {
        return false;
    }
} else {
    return false;
}
And, if I use a different layout file with different view ids:
return findViewById(R.id.mytabletlinearlayout) == null;
Is there something like the version codes available for this? Or is there another more elegant solution?
Edit
For some clarity, I need to perform some different actions if I'm running my app on a tablet, because then I am using a multi-pane layout.
Rather than attempting to detect tablets, consider detecting screen sizes and detecting features that the device supports, since the difference between a phone and a tablet isn't exactly well defined (and subject to change).
To detect a feature, use the PackageManager class, specifically hasSystemFeature or getSystemAvailableFeatures. For detecting screen sizes, your second approach of sniffing changes in the layouts in your different folders is an appropriate way to handle it.
If you check available features at runtime, you won't be forced to make generalizations about tablets ahead of time, like assuming they don't have a back-facing camera, or other features like that.
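For instance, a minimal sketch along those lines (names like useMultiPaneLayout are just illustrative) that checks a couple of hardware features and the smallest screen width, which is the usual basis for switching to a multi-pane layout:
PackageManager pm = context.getPackageManager();
boolean hasTelephony = pm.hasSystemFeature(PackageManager.FEATURE_TELEPHONY);
boolean hasCamera = pm.hasSystemFeature(PackageManager.FEATURE_CAMERA);

// sw600dp is the conventional threshold used by res/layout-sw600dp/ for 7" tablets.
Configuration config = context.getResources().getConfiguration();
boolean useMultiPaneLayout = config.smallestScreenWidthDp >= 600;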
To check if it is running on a tablet, this answer should be sufficient. Bear in mind, though, that you may be asking the wrong question. Your program should be more concerned with the specific characteristics of the device you are running on (e.g. screen size, memory capabilities, etc.).

android: turn off screen when close to face

My app allows the user to access their corporate voice mail. Normally, during a phone call, when the user holds the device up to their ear, the screen shuts off so they won't accidentally push buttons with their face. I would like to make my app do the same thing when the user is listening to their voice mail.
Anyone know how to do this?
If you are allowed to look at open source code without causing yourself problems, check the source of the Android Phone Application. Specifically src/com/android/phone/PhoneApp.java and src/com/android/phone/InCallScreen.java.
From src/com/android/phone/PhoneApp.java:
// Around line 519
// Wake lock used to control proximity sensor behavior.
if ((pm.getSupportedWakeLockFlags()
        & PowerManager.PROXIMITY_SCREEN_OFF_WAKE_LOCK) != 0x0) {
    mProximityWakeLock = pm.newWakeLock(
            PowerManager.PROXIMITY_SCREEN_OFF_WAKE_LOCK,
            LOG_TAG);
}

....

// Around line 1334
if (((state == Phone.State.OFFHOOK) || mBeginningCall) && !screenOnImmediately) {
    // Phone is in use! Arrange for the screen to turn off
    // automatically when the sensor detects a close object.
    if (!mProximityWakeLock.isHeld()) {
        if (DBG) Log.d(LOG_TAG, "updateProximitySensorMode: acquiring...");
        mProximityWakeLock.acquire();
    } else {
        if (VDBG) Log.d(LOG_TAG, "updateProximitySensorMode: lock already held.");
    }
} else {
    // Phone is either idle, or ringing. We don't want any
    // special proximity sensor behavior in either case.
    if (mProximityWakeLock.isHeld()) {
        if (DBG) Log.d(LOG_TAG, "updateProximitySensorMode: releasing...");
        // Wait until user has moved the phone away from his head if we are
        // releasing due to the phone call ending.
        // Otherwise, turn screen on immediately
        int flags =
                (screenOnImmediately ? 0 : PowerManager.WAIT_FOR_PROXIMITY_NEGATIVE);
        mProximityWakeLock.release(flags);
    }
}
Additionally, if you look at the code for the PowerManager class, PROXIMITY_SCREEN_OFF_WAKE_LOCK is documented (but hidden) and should do what you want (I am not sure which API level this works for, however), though it isn't listed in the wake lock table for some reason.
/**
 * Wake lock that turns the screen off when the proximity sensor activates.
 * Since not all devices have proximity sensors, use
 * {@link #getSupportedWakeLockFlags() getSupportedWakeLockFlags()} to determine if
 * this wake lock mode is supported.
 *
 * {@hide}
 */
public static final int PROXIMITY_SCREEN_OFF_WAKE_LOCK = WAKE_BIT_PROXIMITY_SCREEN_OFF;
If you aren't afraid of using a potentially undocumented feature, it should do exactly what you need.
As of API level 21 (Lollipop), you can get a proximity wake lock like this:
if (powerManager.isWakeLockLevelSupported(PowerManager.PROXIMITY_SCREEN_OFF_WAKE_LOCK)) {
    PowerManager.WakeLock wakeLock =
            powerManager.newWakeLock(PowerManager.PROXIMITY_SCREEN_OFF_WAKE_LOCK, TAG);
    wakeLock.setReferenceCounted(false);
    return wakeLock;
} else {
    return null;
}
Then it is up to you to acquire and release the lock, as sketched below.
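A rough usage sketch (assuming proximityWakeLock is the lock returned by the snippet above, which may be null on devices without a proximity sensor):
if (proximityWakeLock != null && !proximityWakeLock.isHeld()) {
    proximityWakeLock.acquire(); // screen turns off when the sensor detects a close object
}

// ... play the voice mail ...

if (proximityWakeLock != null && proximityWakeLock.isHeld()) {
    // Keep the screen off until the phone is moved away from the ear.
    proximityWakeLock.release(PowerManager.RELEASE_FLAG_WAIT_FOR_NO_PROXIMITY);
}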
PS: PowerManager#getSupportedWakeLockFlags was hidden and no longer exists; it has been replaced by isWakeLockLevelSupported.
You probably don't need it anymore, but for those who are interested in the code, you could have a look at my SpeakerProximity project at http://code.google.com/p/speakerproximity/
What you are seeing is the use of a proximity sensor. For devices that have one, you access it through SensorManager.
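A minimal sketch of reading the proximity sensor directly through SensorManager (not from the original answer; exact distance values vary per device):
SensorManager sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
final Sensor proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
if (proximity != null) {
    sensorManager.registerListener(new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            boolean near = event.values[0] < proximity.getMaximumRange();
            // near == true while something (e.g. a face) is close to the sensor;
            // you would turn the screen off here and restore it otherwise.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }, proximity, SensorManager.SENSOR_DELAY_NORMAL);
}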
