How can I regulate the frame rate in my Android app? I would like the game to run at a constant speed. My app doesn't need a high frame rate, and I don't want one, because that would use more battery than necessary.
Don't use frame rate to measure time. Use time to measure time. A GC pass can take 3/10 of a second, other tasks can fire up in the background, etc etc.
There's always a system/setup that will run slower than you thought possible.
Then you don't specify velocities in pixels, but in pixels/second. Each frame of animation takes a certain amount of time, etc etc. In your game engine, when computing the next frame, one of your inputs is "how much time has passed since the last frame". You determine that value as a fraction of a second, and multiply your velocity/sec by that fraction. The result is how far a Given Thing has moved since the last frame.
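As a rough sketch of that idea (player, velocityPxPerSec and lastFrameTimeMillis are hypothetical names, not from any particular API):

long now = System.currentTimeMillis();
float dtSeconds = (now - lastFrameTimeMillis) / 1000f;  // fraction of a second since the last frame
lastFrameTimeMillis = now;

// Move by velocity * elapsed time, not by a fixed number of pixels per frame.
player.x += player.velocityPxPerSec * dtSeconds;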
Note that really slow frame rates can wreak havoc on collision detection, particularly with fast moving objects. If Thing 1 passes completely through Thing 2 between frames, just checking BBoxes or radii isn't going to cut it.
Having said all that, sleep() is your friend. At the start of a frame's processing, call System.currentTimeMillis(). At the end of a frame's processing (including rendering), check the current time again. If the difference isn't long enough, sleep(N) for enough time to match your desired frame rate.
So if you want 20fps max, then each frame should take 50ms (1000ms / 20 = 50ms). If a frame only took 10ms to simulate and render, then you need to sleep for another (50ms - 10ms = ) 40ms before moving on to the next frame.
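A minimal sketch of that pacing loop, assuming a hypothetical updateAndRender() that does one frame's worth of simulation and drawing:

final long targetFrameMillis = 50;  // 20 fps
while (running) {
    long frameStart = System.currentTimeMillis();

    updateAndRender();  // placeholder for your simulation + drawing

    long elapsed = System.currentTimeMillis() - frameStart;
    long remaining = targetFrameMillis - elapsed;
    if (remaining > 0) {
        try {
            Thread.sleep(remaining);  // hand the leftover time back to the system
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}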
Alternatively, you can keep running the simulation as fast as possible, and only render the screen every so often. This won't help battery life much (though OpenGL hardware acceleration is expensive if the heat coming off my Evo is any indication), but can make for a Very Smooth experience. Heck, you can start calculating things like motion blur at that point.
My project is for Android. I need to run code after the frame has been rendered, because rendering time can vary wildly between frames. I have some time-consuming mesh and collider creation routines (so there is basically no room for multithreading, because they interact directly with Unity's API). The profiler shows rendering time changing considerably from frame to frame.
When I do my coroutines I can't know how much time the rendering takes. So I can't decide how much time budget I have for the coroutine and when to yield it. It could be 20ms or it could be 1ms.
Say I reserve a constant time, e.g. 10ms, for coroutine execution, and assume the target frame rate is 30 and physics time is negligible. If rendering takes 5ms, then 5 + 10 = 15ms and I've thrown away the rest of the ~33ms frame budget. On the contrary, if the frame spikes and takes 25ms, then 25 + 10 = 35ms, which is more than the 30 FPS frame budget and results in a visible FPS drop. Either way the constant reservation has very bad consequences.
But if I could run code after rendering, I would know exactly how much time remains before the screen refresh and could yield at the right moment, without wasting precious time or risking a spike of my own making!
I know code runs according to Unity's execution order (http://docs.unity3d.com/Manual/ExecutionOrder.html). I need a workaround or any viable strategy to run my coroutines reliably.
IEnumerator MyCoroutine()
{
    while (true)
    {
        yield return new WaitForEndOfFrame();
        // My code
    }
}
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
As mentioned in the docs, it waits until everything is done, right before the buffer swap. I don't think you can do anything after that without hacking the engine.
I'm trying to capture images with 30-second exposure times in my app (I know it's possible, since the stock camera allows it).
But SENSOR_INFO_EXPOSURE_TIME_RANGE (which is supposed to be in nanoseconds) gives me the range:
13272 - 869661901
In seconds that is just
0.000013272 - 0.869661901
Which obviously is less than a second.
How can I use longer exposure times?
Thanks in advance!
The answer to your question:
You can't. You checked exactly the right information and interpreted it correctly. Any value you set for the exposure time longer than that will be clipped to that max amount.
The answer you want:
You can still get what you want, though, by faking it. You want 30 continuous seconds' worth of photons falling on the sensor, which you can't get. But you can get something (virtually) indistinguishable from it by accumulating 30 seconds' worth of photons with tiny missing intervals interspersed.
At a high level, what you need to do is create a List of CaptureRequests and pass it to CameraCaptureSession.captureBurst(...). This will take the shots with as little interstitial time as possible. When each frame of image data is available, pass it to a buffer somewhere and accumulate the information (simple point-wise addition). This is probably most properly done with an Allocation as the output Surface and some RenderScript.
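A rough sketch of the burst setup, assuming you already have an open CameraCaptureSession ('session'), a CaptureRequest.Builder targeting your output Surface ('builder'), the camera's CameraCharacteristics ('characteristics'), and a capture callback plus background handler:

Range<Long> exposureRange =
        characteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
long maxExposureNs = exposureRange.getUpper();

long totalNs = 30_000_000_000L;  // 30 seconds' worth of exposure
int shots = (int) Math.ceil((double) totalNs / maxExposureNs);

builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, maxExposureNs);

List<CaptureRequest> burst = new ArrayList<>();
for (int i = 0; i < shots; i++) {
    burst.add(builder.build());
}
session.captureBurst(burst, captureCallback, backgroundHandler);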
Notes on data format:
The right way to do this is to use the RAW_SENSOR output format if you can. That way the accumulated output really is directly proportional to the light that was incident to the sensor over the whole 30s.
If you can't use that for some reason, I would recommend using YUV_420_888 output, and make sure you set the tone map curve to be linear (unfortunately you have to do this manually by creating a curve with two points). Otherwise the non-linearity introduced will ruin our scheme. (I'm not sure simple addition is exactly right in a linear YUV space, but it's a first approach at least.) Whether you use this approach or RAW_SENSOR, you'll probably want to apply your own gamma curve/tone map after accumulation to make it "look right."
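For the linear curve, something along these lines should work (same hypothetical 'builder' as above):

// Two (Pin, Pout) control points, (0,0) and (1,1), give a linear tone map.
float[] linear = {0f, 0f, 1f, 1f};
builder.set(CaptureRequest.TONEMAP_MODE, CameraMetadata.TONEMAP_MODE_CONTRAST_CURVE);
builder.set(CaptureRequest.TONEMAP_CURVE, new TonemapCurve(linear, linear, linear));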
For the love of Pete don't use JPEG output, for many reasons, not the least of which is that this will most likely add a LOT of interstitial time between exposures, thereby ruining our approximation of 30s on continuous exposure.
Note on exposure equivalence:
This will produce almost exactly the exposure you want, but not quite. It differs in two ways.
There will be small missing periods of photon information in the middle of this chunk of exposure time. But on the time scale you are talking about (30s), missing a few milliseconds of light here and there is trivial.
The image will be slightly noisier than if you had taken a true single exposure of 30s. This is because each time you read out the pixel values from the actual sensor, a little electronic noise gets added to the information. So in the end you'll have 35 times as much of this additive noise (from the roughly 35 exposures your specific problem needs: 30s divided by the ~0.87s maximum exposure) as a single exposure would. There's no way around this, sorry, but it might not even be noticeable; this is usually fairly small relative to the meaningful photographic signal. It depends on the camera sensor quality (and ISO, but I imagine for this application you need that to be high).
(Bonus!) This exposure will actually be superior in one way: Areas that might have been saturated (pure white) in a 30s exposure will still retain definition in these far shorter exposures, so you're basically guaranteed not to lose your high end details. :-)
You can't always trust SENSOR_INFO_EXPOSURE_TIME_RANGE as of May 2017. Try manually increasing the time and see what happens. I know my Pixel will actually take a 1.9-second shot even though SENSOR_INFO_EXPOSURE_TIME_RANGE reports a sub-second maximum.
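One way to probe this (sketch only, reusing the hypothetical 'builder', 'session' and 'backgroundHandler' from above): request a longer exposure than the reported maximum and read back what the device actually used.

builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 1_900_000_000L);  // ask for 1.9 s
session.capture(builder.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request, TotalCaptureResult result) {
        Long actual = result.get(CaptureResult.SENSOR_EXPOSURE_TIME);
        Log.d("ExposureProbe", "Exposure actually used: " + actual + " ns");
    }
}, backgroundHandler);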
An Android app I'm updating uses Canvas drawing with a runnable (rather than overriding onDraw) and changes the screen approximately twice per second. That being the case, I wonder if it's best to only draw a frame twice per second (or when there's a change) rather than attempting to achieve the maximum frames per second. The main motivation would be to save battery life.
I would just implement it and test battery life/app usage before and after, but that is difficult to measure accurately because there are so many other things that can skew the results. So, does anyone know what the best practice here is? If you're not developing a video game and don't need high frame rates, is it best to reduce the number of draws as low as possible (i.e. low fps), or is the difference negligible and you shouldn't bother?
EDIT: The Canvas comes from lockCanvas() and thus can't be hardware accelerated at this time.
Thanks
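For reference, the kind of low-rate loop I have in mind looks something like this (drawFrame() stands in for my actual Canvas code):

while (running) {
    long start = SystemClock.uptimeMillis();

    Canvas canvas = surfaceHolder.lockCanvas();
    if (canvas != null) {
        try {
            drawFrame(canvas);  // the roughly twice-per-second drawing
        } finally {
            surfaceHolder.unlockCanvasAndPost(canvas);
        }
    }

    long elapsed = SystemClock.uptimeMillis() - start;
    long remaining = 500 - elapsed;  // ~2 fps
    if (remaining > 0) {
        try {
            Thread.sleep(remaining);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}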
I don't understand why, but when I increase the Fixed Timestep in the Time settings in Unity3D, I get bad frame rate issues on Android only.
On iOS I get better performance, but on Android the animation is very, very bad.
Can somebody tell me why increasing the Fixed Timestep causes an FPS issue on Android but not on iOS?
A fixed time step of 60 (Hz) means Unity guarantees that the FixedUpdate method runs this many times per second, regardless of framerate. FixedUpdate may even run multiple times per frame.
However, you can't force a CPU to do ever more work per frame/second. Eventually this will affect the framerate, because there isn't enough time left to compute and render each frame within its budget.
For instance, to get a constant 60 frames per second, each frame must be computed and rendered within a 0.01666-second time window. If the computation and rendering take 0.017 seconds, Unity is no longer rendering 60 fps. If vertical sync is enabled (as it is on mobile devices), a constant time per frame of just over 0.01666 means the framerate will be 30 fps (not 55 or something). So on mobile you're more likely to notice the effect of constantly being over the 0.01666-second frame budget.
If there are enough FixedUpdate iterations running per second the app has more to compute and thus takes longer per frame. Eventually the FixedUpdate iterations per frame (plus the time it takes to render) no longer complete within 0.01666 seconds, that's what you see as a drop in framerate.
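This is easier to see with the general fixed-timestep pattern sketched out (this is not Unity's actual code, just the standard accumulator loop that produces this behavior; the function names are placeholders):

double fixedDelta = 1.0 / 60.0;  // "Fixed Timestep" of 60 Hz
double accumulator = 0.0;

while (running) {
    accumulator += secondsSinceLastFrame();  // placeholder frame timer

    // The smaller fixedDelta is (or the longer the last frame took), the more
    // of these steps run before anything is rendered.
    while (accumulator >= fixedDelta) {
        fixedUpdate(fixedDelta);
        accumulator -= fixedDelta;
    }

    render();  // whatever time is left in the frame goes here
}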
I'm developing a GL live wallpaper that uses very little CPU and only modest GPU. On my older test phone, it can run at a full 58fps or so most of the time. But occasionally the effects ramp up, and then the render times jitter between 16ms and 50ms per frame. For example, it'll render several frames at 16ms, slide up to 50ms over a dozen frames or so, render several more frames at 50ms, then slide back down to 16ms and repeat. I discovered that if I set the CPU governor to "performance" (or "conservative", curiously enough) instead of the default "ondemand" it'll render with full effects at full speed. Alternatively, if I leave the governor alone and insert a busy loop in my code (increment a variable 100,000 times per frame) that bumps my CPU usage up enough to transition to a higher clock rate and render smoothly as well.
So it seems on this phone my app is bottlenecked by the GPU, but only when it throttles down. Now, I wouldn't mind if the GLSurfaceView rendered at a slower rate according to the GPU clock, but my problem here is that I'm getting the bursts of alternating high and low frame rates which makes my animation look fluid/frameskippy/fluid/frameskippy/etc. several times per second. It seems like the GPU clock is ramping up and down like crazy?
I got a visible improvement by using RENDERMODE_WHEN_DIRTY and calling requestRender() on a strictly timed thread, but the darn GPU keeps ramping up and down. Why won't it either render as fast as it can at the slower clock, or just jump to and STAY AT the higher clock?
The best solution I've come up with so far is using a sliding window to detect the average frame update time, then applying the difference from the target frame time until the two values converge. The time between render updates is slower but at least it's roughly constant. So that works in theory, but it takes several seconds to reach a steady state and it looks bad in the meantime.
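Roughly, the sliding-window correction looks something like this (simplified, and all names are my own, not an Android API):

private static final int WINDOW = 30;            // frames to average over
private final ArrayDeque<Long> frameTimes = new ArrayDeque<>();
private long extraDelayMillis = 0;               // delay added on top of each frame

void onFrameRendered(long frameMillis) {
    frameTimes.addLast(frameMillis);
    if (frameTimes.size() > WINDOW) frameTimes.removeFirst();

    long sum = 0;
    for (long t : frameTimes) sum += t;
    long average = sum / frameTimes.size();

    // Apply only a fraction of the difference each frame so the correction
    // converges toward the target instead of oscillating.
    long targetMillis = 33;                      // e.g. ~30 fps
    extraDelayMillis += (targetMillis - average) / 4;
    if (extraDelayMillis < 0) extraDelayMillis = 0;
}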
I think a third option might be to cannibalize the GLSurfaceView source and make a custom version. From what I understand, the blocking GL calls are made in there, so it would be much easier for me to time render calls and react accordingly. I'm not very comfortable attempting that though because there's a lot of code in there that I'd have to spend a lot of time understanding before I could even begin to mess with it. Plus I'd then have to worry about how well version X of GLSurfaceView plays with any version Y of Android.
So, with all that said, do I have any other options here? Is there an easier fix to this?
Try fixing the frame rate by pausing the thread (Thread.sleep) for the remaining time in each frame to reach a constant frame rate.
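A minimal sketch of that, combined with the RENDERMODE_WHEN_DIRTY approach you already tried (glSurfaceView and running are assumed to exist in your code):

glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

new Thread(() -> {
    final long frameMillis = 33;  // ~30 fps target, adjust to taste
    while (running) {
        long start = SystemClock.uptimeMillis();
        glSurfaceView.requestRender();  // just asks the GL thread to draw one frame
        long remaining = frameMillis - (SystemClock.uptimeMillis() - start);
        if (remaining > 0) {
            try {
                Thread.sleep(remaining);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}).start();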