An Android app I'm updating uses Canvas drawing driven by a Runnable (rather than overriding onDraw), and the screen content changes approximately twice per second. Given that, I wonder whether it's best to only draw a frame twice per second (or when something changes) rather than trying to achieve the maximum frame rate. The main attraction would be saving battery life.
I would just implement it and compare battery life/app usage before and after, but that's difficult to measure accurately because there are so many other things that can skew the results. So I'm wondering if anyone knows what the best practice is here: if you're not developing a video game and don't necessarily need high frame rates, is it best to reduce the number of draws as far as possible (i.e. low fps), or is the difference negligible and not worth bothering with?
EDIT: The Canvas comes from lockCanvas() and so can't be hardware accelerated at this time.
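For illustration, here is roughly the kind of low-rate, draw-only-when-dirty loop I mean, assuming a plain SurfaceView; the class name, the dirty flag and drawFrame() are made-up names, not my actual code:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.os.Handler;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class LowRateSurfaceView extends SurfaceView {
    private final Handler handler = new Handler();
    private volatile boolean dirty = true;              // set this whenever the state changes

    private final Runnable drawTick = new Runnable() {
        @Override public void run() {
            if (dirty) {
                SurfaceHolder holder = getHolder();
                Canvas canvas = holder.lockCanvas();    // software canvas, as noted in the EDIT
                if (canvas != null) {
                    try {
                        drawFrame(canvas);              // existing drawing code goes here
                    } finally {
                        holder.unlockCanvasAndPost(canvas);
                    }
                }
                dirty = false;
            }
            handler.postDelayed(this, 500);             // check ~twice per second instead of max fps
        }
    };

    public LowRateSurfaceView(Context context) {
        super(context);
    }

    public void start() { handler.post(drawTick); }
    public void stop()  { handler.removeCallbacks(drawTick); }

    private void drawFrame(Canvas canvas) {
        // ... draw the current state ...
    }
}
```

The point is that the Runnable only wakes up twice a second, and skips the lockCanvas()/unlockCanvasAndPost() work entirely when nothing has changed.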
Thanks
Related
Drawing on Android is itself a herculean task. My requirement now is to see how robustly I can draw at least 10 million points with different intensity levels.
Some methods I came across:
1. Android drawing with Canvas and Bitmaps
2. SurfaceView with OpenGL
3. libGDX, a fast drawing library
4. A custom view that refreshes and updates automatically
What is the best method to go about this? If I need to draw 10 million or more points, perhaps on a static image, on Android, how can I do it without degrading performance? Every second I need to refresh and draw another 10 million points. Is this possible, and is Android capable of such a task?
Since your question states 10 million points per second, I take it you want them in real time, so OpenGL is the way to go, which leaves you with options 2, 3 and 4.
You would definitely need to batch those calls.
You can think about using point sprites to reduce the amount of data you need to transfer to the GPU.
Android as an OS is capable of anything your hardware can support. Your specific device may or may not have performance issues.
Don't optimize prematurely: try option 3 (libGDX) first. It would be the easiest way to set up and achieve your task. If it isn't performant enough, I'd then think about rolling my own OpenGL-based solution.
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size
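To make the batching idea concrete, here is a rough sketch of what a single batched GL_POINTS draw could look like with OpenGL ES 2.0 on Android; the class name, the packed x/y/intensity layout and the assumption that coordinates are already in normalized device space are all illustrative:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Draw a large batch of points in one call. Each point carries x, y and an
// intensity; one glDrawArrays call replaces millions of individual draws.
public class PointBatch {
    private static final String VERTEX_SHADER =
            "attribute vec3 aPoint;\n" +                       // x, y, intensity
            "varying float vIntensity;\n" +
            "void main() {\n" +
            "  vIntensity = aPoint.z;\n" +
            "  gl_PointSize = 1.0;\n" +
            "  gl_Position = vec4(aPoint.xy, 0.0, 1.0);\n" +   // assumes x,y already in NDC
            "}";
    private static final String FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "varying float vIntensity;\n" +
            "void main() {\n" +
            "  gl_FragColor = vec4(vec3(vIntensity), 1.0);\n" +
            "}";

    private final FloatBuffer points;
    private final int count;
    private final int program;

    public PointBatch(float[] xyIntensity) {                    // packed as x,y,intensity triples
        count = xyIntensity.length / 3;
        points = ByteBuffer.allocateDirect(xyIntensity.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        points.put(xyIntensity).position(0);
        program = buildProgram(VERTEX_SHADER, FRAGMENT_SHADER);
    }

    public void draw() {
        GLES20.glUseProgram(program);
        int loc = GLES20.glGetAttribLocation(program, "aPoint");
        GLES20.glEnableVertexAttribArray(loc);
        GLES20.glVertexAttribPointer(loc, 3, GLES20.GL_FLOAT, false, 0, points);
        GLES20.glDrawArrays(GLES20.GL_POINTS, 0, count);        // one call for the whole batch
        GLES20.glDisableVertexAttribArray(loc);
    }

    private static int buildProgram(String vs, String fs) {
        int v = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
        GLES20.glShaderSource(v, vs);
        GLES20.glCompileShader(v);
        int f = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
        GLES20.glShaderSource(f, fs);
        GLES20.glCompileShader(f);
        int p = GLES20.glCreateProgram();
        GLES20.glAttachShader(p, v);
        GLES20.glAttachShader(p, f);
        GLES20.glLinkProgram(p);
        return p;
    }
}
```

In libGDX the same thing is typically expressed through its Mesh and ShaderProgram classes; either way the idea is to upload the points once and issue a single draw call rather than one per point.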
I'm developing a GL live wallpaper that uses very little CPU and only modest GPU. On my older test phone, it can run at a full 58fps or so most of the time. But occasionally the effects ramp up, and then the render times jitter between 16ms and 50ms per frame. For example, it'll render several frames at 16ms, slide up to 50ms over a dozen frames or so, render several more frames at 50ms, then slide back down to 16ms and repeat. I discovered that if I set the CPU governor to "performance" (or "conservative", curiously enough) instead of the default "ondemand" it'll render with full effects at full speed. Alternatively, if I leave the governor alone and insert a busy loop in my code (increment a variable 100,000 times per frame) that bumps my CPU usage up enough to transition to a higher clock rate and render smoothly as well.
So it seems on this phone my app is bottlenecked by the GPU, but only when it throttles down. Now, I wouldn't mind if the GLSurfaceView rendered at a slower rate according to the GPU clock, but my problem here is that I'm getting bursts of alternating high and low frame rates, which make my animation look fluid/frameskippy/fluid/frameskippy several times per second. It seems like the GPU clock is ramping up and down like crazy?
I got a visible improvement by using RENDERMODE_WHEN_DIRTY and calling requestRender() on a strictly timed thread, but the darn GPU keeps ramping up and down. Why won't it either render as fast as it can at the slower clock, or just jump to and STAY AT the higher clock?
The best solution I've come up with so far is using a sliding window to detect the average frame update time, then applying the difference from the target frame time until the two values converge. The time between render updates is slower but at least it's roughly constant. So that works in theory, but it takes several seconds to reach a steady state and it looks bad in the meantime.
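For reference, a minimal sketch of that sliding-window smoothing; the class name and window size are invented, not my actual wallpaper code:

```java
// Smooth the measured frame time over a window, then pace renders off the average.
public class FrameTimer {
    private static final int WINDOW = 30;          // average over the last 30 frames
    private final long[] samples = new long[WINDOW];
    private int index = 0, filled = 0;

    /** Record the last frame's duration and return the smoothed average (ms). */
    public long smoothedFrameMillis(long lastFrameMillis) {
        samples[index] = lastFrameMillis;
        index = (index + 1) % WINDOW;
        if (filled < WINDOW) filled++;
        long sum = 0;
        for (int i = 0; i < filled; i++) sum += samples[i];
        return sum / filled;
    }

    /** How long to wait before requesting the next render, given a target interval. */
    public long delayFor(long targetMillis, long smoothedMillis) {
        return Math.max(0, targetMillis - smoothedMillis);
    }
}
```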
I think a third option might be to cannibalize the GLSurfaceView source and make a custom version. From what I understand, the blocking GL calls are made in there, so it would be much easier for me to time render calls and react accordingly. I'm not very comfortable attempting that though because there's a lot of code in there that I'd have to spend a lot of time understanding before I could even begin to mess with it. Plus I'd then have to worry about how well version X of GLSurfaceView plays with any version Y of Android.
So, with all that said, do I have any other options here? Is there an easier fix to this?
Try fixing the frame rate by pausing the thread (Thread.sleep) for the remaining time in each frame, so you hit a constant frame rate.
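A minimal sketch of that, assuming the view is a GLSurfaceView in RENDERMODE_WHEN_DIRTY; glView and the 30 fps target are placeholders:

```java
// A dedicated pacing thread: request one render per interval, sleep the rest.
Thread pacer = new Thread(new Runnable() {
    @Override public void run() {
        final long frameMillis = 1000 / 30;                     // target ~30 fps
        long next = System.currentTimeMillis();
        while (!Thread.currentThread().isInterrupted()) {
            glView.requestRender();                             // wake the GL thread exactly once
            next += frameMillis;
            long sleep = next - System.currentTimeMillis();
            if (sleep < 0) next = System.currentTimeMillis();   // fell behind: reset, don't spiral
            try {
                Thread.sleep(Math.max(1, sleep));
            } catch (InterruptedException e) {
                break;                                          // stop pacing when interrupted
            }
        }
    }
});
pacer.start();
```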
My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
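For the Android side (the iOS APIs differ), a sketch of how the level could be sampled before and after a run; BatteryProbe is just an illustrative name. The level is typically reported in whole percent, which is one reason longer runs give a more trustworthy delta:

```java
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

// Sample the battery level before and after a timed run and compare.
public final class BatteryProbe {
    /** Current battery charge as a percentage, or -1 if it couldn't be read. */
    public static float percent(Context context) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
        Intent status = context.registerReceiver(null, filter); // null receiver: just peek at the sticky intent
        if (status == null) return -1f;
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        if (level < 0 || scale <= 0) return -1f;
        return 100f * level / scale;
    }
}
```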
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
Try reading this article:
Android Documents on optimization
What works well for me is reducing the amount of garbage collection. For example, when programming for a desktop computer you're (or I'm) used to defining variables inside loops when they aren't needed outside the loop; on Android this causes a massive amount of garbage collection (and I'm not talking about primitive variables, but big objects).
Try to avoid things like that.
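A small sketch of what that looks like in practice; Vector2, Enemy and the steering call are invented names:

```java
import java.util.List;

// Reuse one scratch object instead of allocating a new one per loop iteration,
// so the garbage collector has less to do mid-game.
public class SteeringExample {
    private final Vector2 scratch = new Vector2();   // allocated once, reused every frame

    // Allocation-heavy version: a fresh Vector2 per enemy, per frame.
    void updateNaive(List<Enemy> enemies, Vector2 player) {
        for (Enemy e : enemies) {
            Vector2 toPlayer = new Vector2(player.x - e.x, player.y - e.y);
            e.steerAlong(toPlayer);
        }
    }

    // GC-friendlier version: the same work, no per-frame allocations.
    void updateReusing(List<Enemy> enemies, Vector2 player) {
        for (Enemy e : enemies) {
            scratch.set(player.x - e.x, player.y - e.y);
            e.steerAlong(scratch);
        }
    }
}
```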
One little tip that really helped me get battery usage (and the warmth of the device!) down was to throttle the FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. in a turn-based game, or when the user has tapped pause), throttle the FPS right down.
Or throttle if the user hasn't interacted for more than 10 seconds, like a screensaver on a desktop PC. In the real world users often get distracted while using mobile devices; don't let your app drain the battery while your user figures out which subway station he's in ;)
Also, on the iPhone 60 FPS is often the default; throttling this manually to 30 FPS is barely visible and saves you about half the GPU cycles (and therefore a lot of battery!).
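A tiny sketch of how the throttle target could be chosen; the thresholds and field names are only illustrative:

```java
// Pick the frame interval from whether the scene is static or the user is idle.
private long lastInputMillis = System.currentTimeMillis();  // update this from touch events
private boolean sceneIsStatic = false;                      // set true when nothing animates

long targetFrameMillis() {
    long idle = System.currentTimeMillis() - lastInputMillis;
    if (sceneIsStatic) return 1000;        // ~1 fps while paused / between turns
    if (idle > 10_000) return 100;         // ~10 fps "screensaver" mode after 10 s of no input
    return 1000 / 30;                      // ~30 fps while the user is actually playing
}
```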
I'm working on a 2D game for Android using OpenGL ES 1.1, and I would like to know whether this idea is good, bad or useless.
The screen is divided into 3 sections, so I use scissor rectangles to stop objects in one view overlapping into another.
I roughly understand the low-level implementation of scissoring, and since my draws take up a big part of the computation, I'm looking for ideas to speed them up.
My current idea is as follows:
If I set a glScissor around each object before I draw it, would that increase the speed of my application?
The idea is that if I set a glScissor rectangle (center ± texture size), the OpenGL pipeline will have fewer tests to do, since it can discard 90–99% of the surface thanks to the scissor.
So, to all the OpenGL experts: is this good, bad, or will it have no impact? And why?
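For reference, this is roughly my current per-section setup (the sizes and draw methods are placeholders), as opposed to the per-object scissor I'm asking about:

```java
import javax.microedition.khronos.opengles.GL10;

// One scissor rectangle per screen section, set in window coordinates
// (origin at the bottom-left for glScissor).
void drawSections(GL10 gl, int screenWidth, int screenHeight) {
    gl.glEnable(GL10.GL_SCISSOR_TEST);
    int third = screenHeight / 3;

    gl.glScissor(0, 0, screenWidth, third);              // bottom section
    drawBottomSection(gl);

    gl.glScissor(0, third, screenWidth, third);          // middle section
    drawMiddleSection(gl);

    gl.glScissor(0, 2 * third, screenWidth, third);      // top section
    drawTopSection(gl);

    gl.glDisable(GL10.GL_SCISSOR_TEST);
}
```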
It shouldn't have any impact, IMHO. I'm not an expert, but my thinking is as follows:
The scissor test saves on your GPU's fill rate (the number of fragments/pixels the hardware can put into the framebuffer per second),
but if you put a glScissor around each object, the test won't actually cut anything off: the same number of pixels will be rendered, so no fill rate will be saved.
If you want to optimize your rendering, a good place to start is to make sure you're batching well and reducing the number of draw calls and expensive state switches (such as texture switches).
Of course, the correct approach to optimization is to diagnose why your rendering is slow, so the above is just my guess and may or may not help in your particular situation.
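As one concrete example of cutting state switches (just a sketch; Sprite, textureId and bindTexture are invented names), sorting draws by texture means each texture is bound once per frame instead of once per sprite:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sort sprites by texture, then draw; a texture switch only happens when the id changes.
void drawSorted(List<Sprite> sprites) {
    Collections.sort(sprites, new Comparator<Sprite>() {
        @Override public int compare(Sprite a, Sprite b) {
            return Integer.compare(a.textureId, b.textureId);
        }
    });

    int boundTexture = -1;
    for (Sprite s : sprites) {
        if (s.textureId != boundTexture) {
            bindTexture(s.textureId);   // the only place a texture switch happens
            boundTexture = s.textureId;
        }
        s.draw();
    }
}
```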
How can I regulate the frame rate in my Android app? I would like the game to run at a constant speed. My app doesn't require a high frame rate, and I don't want one because that would use more battery than necessary.
Don't use frame rate to measure time. Use time to measure time. A GC pass can take 3/10 of a second, other tasks can fire up in the background, etc etc.
There's always a system/setup that will run slower than you thought possible.
Then you don't specify velocities in pixels, but in pixels/second. Each frame of animation takes a certain amount of time, etc etc. In your game engine, when computing the next frame, one of your inputs is "how much time has passed since the last frame". You determine that value as a fraction of a second, and multiply your velocity/sec by that fraction. The result is how far a Given Thing has moved since the last frame.
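A bare-bones sketch of that, with Thing and its fields as placeholder names:

```java
// Time-based movement: velocities are in pixels per second, and each frame is
// scaled by the seconds elapsed since the previous one.
private long lastFrameNanos = System.nanoTime();

void updateFrame(Thing[] things) {
    long now = System.nanoTime();
    float dtSeconds = (now - lastFrameNanos) / 1_000_000_000f;  // time since last frame, in seconds
    lastFrameNanos = now;

    for (Thing t : things) {
        // pixels/second * seconds = pixels moved this frame, whatever the frame rate
        t.x += t.velocityX * dtSeconds;
        t.y += t.velocityY * dtSeconds;
    }
}
```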
Note that really slow frame rates can wreak havoc on collision detection, particularly with fast moving objects. If Thing 1 passes completely through Thing 2 between frames, just checking BBoxes or radii isn't going to cut it.
Having said all that, sleep() is your friend. At the start of a frame's processing, call System.currentTimeMillis(). At the end of a frame's processing (including rendering), check the current time again. If the difference isn't long enough, sleep(N) for enough time to match your desired frame rate.
So if you want 20fps max, then each frame should take 50ms (1000ms / 20 = 50ms). If a frame only took 10ms to simulate and render, then you need to sleep for another (50ms - 10ms = ) 40ms before moving on to the next frame.
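Putting those numbers into code, a rough sketch of the capped loop (running, updateGame() and renderFrame() are placeholders):

```java
// Cap the loop at 20 fps by sleeping away whatever is left of each 50 ms slot.
final long targetFrameMillis = 1000 / 20;           // 50 ms per frame

while (running) {
    long start = System.currentTimeMillis();

    updateGame();                                    // simulate
    renderFrame();                                   // draw

    long elapsed = System.currentTimeMillis() - start;
    long remaining = targetFrameMillis - elapsed;    // e.g. 50 ms - 10 ms = 40 ms
    if (remaining > 0) {
        try {
            Thread.sleep(remaining);                 // idle the CPU instead of spinning
        } catch (InterruptedException e) {
            running = false;
        }
    }
}
```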
Alternatively, you can keep running the simulation as fast as possible, and only render the screen every so often. This won't help battery life much (though OpenGL hardware acceleration is expensive if the heat coming off my Evo is any indication), but can make for a Very Smooth experience. Heck, you can start calculating things like motion blur at that point.