My app relies heavily on animations to work, and when their speed is changed in the developer options, the whole app breaks and stops working. I've been researching ways to ignore the developer settings and force the animations to run as they should, but nothing I've tried actually changes anything.
I'm calling this function in my app's Application class's onCreate():
private fun fixAnimation() {
    val durationScale = Settings.Global.getFloat(contentResolver, Settings.Global.ANIMATOR_DURATION_SCALE, 1f)
    Log.d("fixAnimation", "durationScale: $durationScale")
    if (durationScale != 1f) {
        try {
            ValueAnimator::class.java
                .getMethod("setDurationScale", Float::class.javaPrimitiveType)
                .invoke(null, 1f)
        } catch (t: Throwable) {
            Log.e("fixAnimation", t.message ?: t.toString())
        }
    }
}
But nothing changed.
Got it from here: Make ObjectAnimator animation duration independent of global animator duration scale setting in developer options
And here: https://medium.com/@arpytoth/android-objectanimator-independent-of-global-animator-duration-fe33c808c83e
Apparently it works for some people, but not in my case.
Maybe it is because I am using ViewPropertyAnimator and not ValueAnimator directly? But it should still work, since this changes the ValueAnimator duration scale setting globally, and ViewPropertyAnimator is backed by a ValueAnimator...
You are calling your code too early.
If you set a breakpoint in ValueAnimator.setDurationScale(float durationScale), you will notice that after your code has run, WindowManagerGlobal resets the duration scale back to the system value.
So one possible solution is to call your code in Activity.onResume(). Theoretically you could also put it in Activity.onCreate(); however, if the user (or the system, e.g. to save battery) changes the setting while the app is backgrounded, it won't work.
There are still edge cases, e.g. if the system decides to change the factor while the app is in the foreground, or possibly in multi-window scenarios, but that's as good as it gets with this workaround. After all, you likely don't want to check and fix this every second or so, since reflection is involved.
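In practice that could look something like this (a minimal sketch, assuming the reflection helper from the question is simply moved into the Activity; MainActivity is only a placeholder name):

import android.animation.ValueAnimator
import android.util.Log
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {

    override fun onResume() {
        super.onResume()
        // Re-apply the scale every time the Activity returns to the foreground,
        // i.e. after WindowManagerGlobal has pushed the system value again.
        forceAnimatorDurationScale()
    }

    private fun forceAnimatorDurationScale() {
        try {
            // setDurationScale is a hidden static method, hence the null receiver.
            ValueAnimator::class.java
                .getMethod("setDurationScale", Float::class.javaPrimitiveType)
                .invoke(null, 1f)
        } catch (t: Throwable) {
            Log.e("fixAnimation", "Could not override duration scale", t)
        }
    }
}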
I used the latest Camera2Basic sample program as a source for my trials:
https://github.com/android/camera-samples.git
Basically, I configure the CaptureRequest before calling capture() in the takePhoto() function, like this:
private fun prepareCaptureRequest(captureRequest: CaptureRequest.Builder) {
    // set all needed camera settings here
    captureRequest.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.CONTROL_AWB_LOCK, true)
    captureRequest.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF)
    captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
    //captureRequest.set(CaptureRequest.CONTROL_AE_LOCK, true)
    //captureRequest.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL)
    //captureRequest.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_FAST)

    // flash
    if (mState == CaptureState.PRECAPTURE) {
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
    }
    if (mState == CaptureState.TAKEPICTURE) {
        //captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
        //captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH)
        captureRequest.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_SINGLE)
    }

    // manual exposure: ISO 100 at 1/750 s
    val iso = 100
    captureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, iso)
    val fractionOfASecond = 750.toLong()
    captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 1000.toLong() * 1000.toLong() * 1000.toLong() / fractionOfASecond)
    //val exposureTime = 133333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTime)
    //val characteristics = cameraManager.getCameraCharacteristics(cameraId)
    //val configs: StreamConfigurationMap? = characteristics[CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP]
    //val frameDuration = 33333333.toLong()
    //captureRequest.set(CaptureRequest.SENSOR_FRAME_DURATION, frameDuration)

    // manual focus: 20 cm, expressed in diopters (1 / distance in meters)
    val focusDistanceCm = 20.0.toFloat() //20cm
    captureRequest.set(CaptureRequest.LENS_FOCUS_DISTANCE, 100.0f / focusDistanceCm)

    // manual white balance via a color-correction transform
    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CameraMetadata.COLOR_CORRECTION_MODE_FAST)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_MODE, CaptureRequest.COLOR_CORRECTION_MODE_TRANSFORM_MATRIX)
    val colorTemp = 8000.toFloat()
    val rggb = colorTemperature(colorTemp)
    //captureRequest.set(CaptureRequest.COLOR_CORRECTION_TRANSFORM, colorTransform)
    captureRequest.set(CaptureRequest.COLOR_CORRECTION_GAINS, rggb)
}
but the picture that is returned is never the one where the flash is at its brightest. This is on a Google Pixel 2 device.
As I only take one picture, I am also not sure how to check CaptureResult states to find the correct frame, as there is only one result.
I already looked at other solutions to similar problems here, but they were either never really solved or somehow took the picture during the capture preview, which I don't want.
Another strange observation is that on other devices the images are taken (though also not always at the right moment), but then the manual values I set are not reflected in the JPEG metadata of the image.
If needed I can put my git fork on github.
Long exposure time in combination with flash seems to be the basic issue: when the results are not good, it means the timing of your presets isn't good. You'd have to tune the exposure time in relation to the flash's timing (just check the EXIF of some photos for example values). You could measure the luminosity with an ImageAnalysis.Analyzer (this has been removed from the sample application, but older revisions still have an example). I've also tried the default Motorola camera app; there the photo likewise seems to be taken shortly after the flash, when the brightness is already decaying (in order to avoid a dazzlingly bright image). That corresponds to your CaptureState.PRECAPTURE, where you switch the flash off. Flashing in two stages is rather the default, and it might yield better results.
If you want it to be dazzlingly bright (even if this is generally not desired), you could instead first switch on the torch, take the image, then switch off the torch again (I use something like this, but only for barcode scanning). This would at least prevent any exposure/flash timing issues.
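A rough sketch of that torch approach (not taken from the sample app; session, previewRequestBuilder, stillRequestBuilder and handler are assumed to be the already-configured CameraCaptureSession, request builders and background Handler, and the 300 ms warm-up delay is just a guess):

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.TotalCaptureResult
import android.os.Handler

private fun captureWithTorch(
    session: CameraCaptureSession,
    previewRequestBuilder: CaptureRequest.Builder,
    stillRequestBuilder: CaptureRequest.Builder,
    handler: Handler
) {
    // Switch the torch on in the repeating preview request.
    previewRequestBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH)
    session.setRepeatingRequest(previewRequestBuilder.build(), null, handler)

    // Give the torch a moment to reach full brightness, then take the still picture.
    handler.postDelayed({
        session.capture(stillRequestBuilder.build(), object : CameraCaptureSession.CaptureCallback() {
            override fun onCaptureCompleted(
                session: CameraCaptureSession,
                request: CaptureRequest,
                result: TotalCaptureResult
            ) {
                // Switch the torch back off once the still image is done.
                previewRequestBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
                session.setRepeatingRequest(previewRequestBuilder.build(), null, handler)
            }
        }, handler)
    }, 300)
}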
When the changed values are not represented in the EXIF, you'd need to use ExifInterface to update them yourself (there's an example that updates the orientation, but you can update any value).
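Something along these lines, using the androidx ExifInterface (just a sketch; photoPath and the tag values are placeholders):

import androidx.exifinterface.media.ExifInterface

// Sketch only: photoPath is the path of the saved JPEG, the values are examples.
fun patchExif(photoPath: String) {
    val exif = ExifInterface(photoPath)
    // Write the values you actually requested, since the device did not record them.
    exif.setAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY, "100")
    exif.setAttribute(ExifInterface.TAG_EXPOSURE_TIME, (1.0 / 750.0).toString())
    exif.saveAttributes()
}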
Any Android phone has developer options to modify animation speed. Window animation scale, Transition animation scale, and Animator duration scale are the three settings I'm talking about.
The code snippet below ignores those settings:
.delay(200, TimeUnit.MILLISECONDS).subscribe
The code ignores the animation settings because "delay" is not inherently tied to animations; in my code's case, however, it is.
How can I get this code in my app to scale based on the device's developer options animation scale settings?
Do not tie your code to an animation by using a delay in milliseconds.
While this is an easy solution, the actual animation delay or duration may differ from the value you set. Instead, you can use animation listener callbacks.
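For example, with a ViewPropertyAnimator you could chain the follow-up work in an end action instead of a fixed delay (a sketch only; view and doNextStep() are placeholders, not the asker's actual code):

// Sketch: run the follow-up work when the animation actually ends,
// rather than after a hard-coded 200 ms delay.
view.animate()
    .translationY(0f)
    .setDuration(200)
    .withEndAction {
        // Fires when the animation really finishes, whatever the
        // Animator duration scale setting happens to be.
        doNextStep()
    }
    .start()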
So I found out how to get the system settings for a multiplier...
.delay(getScaledDelayDuration(200), TimeUnit.MILLISECONDS).subscribe
private long getScaledDelayDuration(long delay) {
    float multiplier = Settings.System.getFloat(
            this.getContext().getContentResolver(),
            Settings.System.TRANSITION_ANIMATION_SCALE, 1);
    return (long) (multiplier * delay);
}
...but that not only doesn't solve the root issue I'm having, it also is just not a good way to go about this at all. I'm thinking I should just delete the question at this point.
According to an official Google team statement, manually changing CONTROL_AE_EXPOSURE_COMPENSATION is broken on Android 5.1. I've been looking for a workaround for a couple of days, and the only one I found is connected to SENSOR_INFO_SENSITIVITY_RANGE. However, I ran into some difficulties using it. My code looks like this:
if (!modeDisabled) {
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
    modeDisabled = true;
}
range1 = characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
minmin = range1.getLower();
maxmax = range1.getUpper();
int iso = ((i * (maxmax - minmin)) / 100 + minmin);
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, iso);
mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler);
Of course the 'i' value is a progress value taken from the SeekBar, and everything is wrapped in the onProgressChanged function.
The problem is that there are no visible changes when manipulating the SeekBar. I'd be really grateful for any help.
CONTROL_AE_EXPOSURE_COMPENSATION isn't broken in Android 5.1 in general; it was disabled on the Nexus 6 only (and will be re-enabled in a future update).
If you're disabling auto-exposure, you probably also need to set the exposure time, in addition to the sensitivity. You should preferably also set the frame duration, though the defaults for both are probably 1/30 s, which is reasonable. You can also copy the latest values for those from the most recent capture result that still used auto-exposure.
That said, you should still see some sort of change here. Is it possible that you're overwriting your capture request elsewhere right after you set this one as the repeating request? You can check the returned capture results to see what sensitivity setting the camera device is actually receiving.
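A sketch of what that could look like in Kotlin, using the same fields as in the question (the exposure time and frame duration are only example values):

// Sketch only: with AE off, drive sensitivity, exposure time and frame duration yourself.
// mPreviewRequestBuilder, mCaptureSession and mBackgroundHandler are the fields from the question.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, iso)
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 33_333_333L)  // ~1/30 s, in nanoseconds
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_FRAME_DURATION, 33_333_333L) // ~30 fps
mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler)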
With Unity, the CardboardHead script is added to the main camera and that handles everything quite nicely, but I need to be able to "recenter" the view on demand. The only option I see so far is to rotate the entire scene, and this seems like something they would address first-hand, yet I can't find anything in the docs.
With Oculus Mobile SDK (GearVR), it would be OVRCamera.ResetCameraPositionOrientation(Vector3.one, Vector3.zero, Vector3.up, Vector3.zero); though they handle it nicely each time the viewer is put on so it's rarely needed there.
There's a "target" parameter on the CardboardHead that lets you use to another gameobject as a reference for rotation. Or you can use a dummy parent gameobject. Either way, when you want to recenter, you set this reference object's rotation so that the CardboardHead is now pointing forward. Add this function to an script on the CardboardHead (or just add it into that script):
public void Recenter() {
    Transform reference = target != null ? target : transform.parent;
    if (reference != null) {
        reference.rotation = Quaternion.Inverse(transform.rotation) * reference.rotation;
        // next line is optional -- try it with and without
        reference.rotation = Quaternion.FromToRotation(reference.up, Vector3.up) * reference.rotation;
    }
}
Cardboard.SDK.Recenter (); should do the trick.
Recenter orientation: Added Recenter() function to Cardboard.SDK, which resets the head tracker so the phone's current heading becomes the forward direction (+Z axis).
Couldn't find it in the docs for the API/SDK, but it's in the release notes for the v0.4.5 update.
You can rotate the Cardboard Main to point in a certain direction.
This is what worked for me when I wanted the app to start up pointing a certain way. Since the CardboardHead points at Vector3.zero on startup if no target is assigned, I ran a function during Start() for the CardboardMain that would point in the direction I wanted.
Of course, if you're already rotating CardboardMain for some other reason, it may be possible to use this same method by creating a parent of the CardboardHead (child of CardboardMain) and doing the same thing.
This question is a bit old but for Google VR SDK 1.50+ you can do
transform.eulerAngles = new Vector3(newRot.x, newRot.y, newRot.z);
UnityEngine.VR.InputTracking.Recenter();
Also, if you don't want to get confused, you need to grab the GvrEditorEmulator instance and recenter it as well:
#if UNITY_EDITOR
gvrEditorEmulator.Recenter();
#endif
Recentering the GvrEditorEmulator doesn't seem to work very well at the moment, but if you disable it you'll see that the recentering works for the main camera.
I'm trying to animate an element with CSS3 transitions using translate3d: JSFiddle.
// for start animation
$("#content")
    .css("-webkit-transition", "all 100s")
    .css("-webkit-transform", "translate(0, -900px)");

// for stop animation
$("#content")
    .css("-webkit-transition", "none");
In desktop Chrome and Safari this works fine, but in the default browser on Android 4.1.x (SGS II, Galaxy Nexus, etc.) this approach does not work: the transition does not stop. Additionally, I note that this only happens with translate3d; with plain translate and positional CSS props (e.g. "top", "left") it works.
The transition implementation on Android 4 seems to be buggy in cases where a transitioning hardware-rendered layer is canceled by adjusting the webkitTransitionDuration to 0 (and setting webkitTransition to 'none' or '' often implies this). This can be circumvented by using a transition duration of .001ms or similar, although this very likely still draws multiple frames.
A more practical work-around on at least certain devices is to use a negative value for the webkitTransitionDelay, forcing a new transition to take effect, but choosing this value such that the transition starts directly in its finished state.
Like so:
e.style.webkitTransition = '-webkit-transform linear 10s';
e.style.webkitTransform = 'translate3d(100px,100px,0)';

// now cancel the 10s-long transition after 1s and reset the transformation
setTimeout(function() {
    e.style.webkitTransitionDelay = '-10s';
    e.style.webkitTransform = 'translate3d(0,0,0)';
}, 1000);
Here is what I discovered with some experimentation:
Stopping a running 2D or 3D translate transition on Chrome, Safari, Firefox, and the iPhone WebView can be done by setting a transition of "none", or a transition with a negative or zero time delay, and giving a new translation to the current position as described above.
This however does not work for the Android WebView. The only solution I could find for Android was to set the transition delay to a small positive number like .001s and give a translate to the current position.
Note that in the iPhone WebView the negative transition delay is preferable to "none" or a small positive number, both of which will flash the final position of the ongoing transition before performing the preempting transition.
This solves my (very similar) problem:
$el.removeClass('THE_ANIMATION').css('opacity', 0.99);
window.setTimeout(function () {
$el.css('opacity', 1);
}, 0);