Scheduling android graphics events - android

I asked this question in the Android Developers user group last week. Nobody responded, so I thought I'd ask it over here.
Does anyone have any suggestions about how to schedule video events to happen at an exact clock time? I've been thinking about an application that would require two adjacent phones to display the same thing at exactly the same time. I'm wondering what the granularity of "exactly" is going to be.
I've done some testing on a couple of devices and it seems that the delay between an invalidate and the subsequent redraw can be as much as 16 ms. Perhaps I can do better with OpenGL?
Ideas? Anyone?
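For what it's worth, this is roughly how I did the timing test mentioned above (a minimal sketch; the class and method names are just mine):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.util.Log;
import android.view.View;

// Record when invalidate() is requested and log how long it takes for the
// corresponding onDraw() to actually run.
public class TimedView extends View {
    private long invalidateRequestedNs;

    public TimedView(Context context) {
        super(context);
    }

    public void requestTimedRedraw() {
        invalidateRequestedNs = System.nanoTime();
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (invalidateRequestedNs != 0) {
            long delayMs = (System.nanoTime() - invalidateRequestedNs) / 1000000L;
            Log.d("TimedView", "invalidate -> onDraw delay: " + delayMs + " ms");
            invalidateRequestedNs = 0;
        }
    }
}
```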

OpenGL itself is capable of very high framerates (unless I am mistaken). What I can tell you is that plenty of games have been written to run and maintain 30 frames per second. That's one frame every 33.3 ms. At that speed, the change should be imperceptible to the human eye, or so I've heard (the estimated limit is 5 ms).
However, there is a major difference between what OpenGL can do and what the device running OpenGL can do. Again, unless I am mistaken, you should be able to instruct OpenGL to run at 200 frames per second. The caveat is that if the machine you are running the animation on can't handle that framerate, it will either frame-skip or lag, and in either case it will hog the processor and GPU like no other.
Again, as I don't know the specifics, I can only guess, but I would think that this is less of an issue with OpenGL vs the other leading brand, and more of an issue of the devices you are trying to sync. With the right code, a proven framework, two powerful machines, and high-speed data transfer capability (read: LAN at the least), there is no reason why you shouldn't be able to sync up the video. If any of these things are not the case, all bets are off.
-Cody

Related

Determine exact screen flip times on Android

I am attempting to determine (to within 1 ms) when particular screen flips happen on Android. Choreographer fires every time a frame flips, but gives no way of determining which frame is actually being displayed. According to https://source.android.com/devices/graphics/architecture.html, there are several layers in the process: the userland buffer, which flips to a triple-buffered queue, which flips to the surface flinger, which flips to the hardware. Each of these layers can potentially drop a frame, but at this point I have only determined how to monitor the userland buffer. Is there a way to monitor the other buffers/flips (in real time, on a non-rooted, non-custom phone)?
I have observed unexpected frame delays on the HTC M8 (about 1 every 5 minutes), but the Nexus 7 does not appear to have this problem. I measure the delays by using a Cedrus StimTracker (http://cedrus.com/stimtracker/) with a photo sensor and the Lab Streaming Layer (https://github.com/sccn/labstreaminglayer). I have tried using eglPresentationTimeANDROID to control when screens are flipped, and that has not fixed the problem.
Note that I'm using the ndk, but I can usually use the JNI to get access to non-ndk features when I need to.
The reason I care is that I want to use Android for psychological and neurological experiments, where 1 ms precision is highly desirable.
As far as accessible APIs go, it sounds like you've found the relevant bits and pieces. If you haven't yet, please read through this stackoverflow item.
Using Choreographer and extrapolation, you can guess at when the next display refresh will occur. Using eglPresentationTimeANDROID() on an Android 5.0+ device, you can tell SurfaceFlinger when you want a particular frame to be sent to the display. Assuming SurfaceFlinger is properly accounting for all latency (such as additional frames added by "smart" panels), that should get you reliable timing.
(Bear in mind that the timing is based on when the display latches the next frame, not when the next frame is fully visible on the display... the latency there will depend on the panel.)
Grafika's "scheduled swap" Activity uses this feature, but it sounds like you're already familiar.
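To make the idea concrete, here's a rough Java-side sketch (my assumptions: an EGL context is current on the thread receiving the Choreographer callbacks, a naive one-period extrapolation is good enough, and the two-refresh lookahead is arbitrary; the same extension is reachable from the NDK through eglGetProcAddress):

```java
import android.opengl.EGL14;
import android.opengl.EGLExt;
import android.view.Choreographer;

public class ScheduledSwap implements Choreographer.FrameCallback {
    private long lastVsyncNs;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        // Extrapolate the refresh period from the reported vsync times
        // (assumes a steady ~60 Hz panel; a real implementation should filter this).
        long refreshPeriodNs = 16666667L;
        if (lastVsyncNs != 0) {
            refreshPeriodNs = frameTimeNanos - lastVsyncNs;
        }
        lastVsyncNs = frameTimeNanos;

        // Ask SurfaceFlinger to latch this frame two refreshes after the reported vsync.
        long desiredPresentNs = frameTimeNanos + 2 * refreshPeriodNs;

        // ... issue the GLES draw calls for this frame here ...

        // Java binding added in API 18; see the Android 5.0+ note above.
        EGLExt.eglPresentationTimeANDROID(
                EGL14.eglGetCurrentDisplay(),
                EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW),
                desiredPresentNs);
        EGL14.eglSwapBuffers(
                EGL14.eglGetCurrentDisplay(),
                EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW));

        Choreographer.getInstance().postFrameCallback(this);
    }
}
```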
The only way to get signaled by the display when it does the swap would be to dup() the display-retire fence fd from the previous frame, and wait on it. Some of the code in SurfaceFlinger does this, notably DispSync watches the retire fences to see if the software "VSYNC" is drifting. There is no public API for fences, and the user-space response time could certainly be more than 1ms anyway... it usually works out better to schedule ahead than it does to react. Your requirement for non-rooted non-custom devices makes this problematic.
If you're mostly seeing correct behavior, but occasionally seeing a miss, your best bet is to use systrace to track down the cause.

OpenGL buffer swap refresh rate, how to calculate ideal

I am timing my OpenGL frame rates with v-sync turned on, and I notice my timings aren't at the precise frequency set by the monitor. That is, on my desktop I have a 60 Hz refresh, but the FPS is stable at 59.88, whereas on my tablet it's also 60 Hz but the FPS can be 61/62. I'm curious as to what precisely causes these slight deviations.
These are the ideas I've had so far:
Dropped Frames: This is the obvious answer: I'm just missing some frames. This is however not the cause as I can verify I am not dropping frames and the drop in FPS would be higher if this happened. I calculate the time over 120 frames, so if 1 frame was lost the FPS would drop below 59.5 on the desktop.
Inaccurate timings: I use clock_gettime to get my timings. On Linux I know this is accurate enough (as I previously did nanosecond based timings with it, but here we could even live with +/- several hundred microseconds). On Android however I'm not sure of the accuracy of this.
API Oddity: I use glXSwapBuffers on the desktop and eglSwapBuffers on Android. There could be an oddity here, but I don't see how this could so subtly affect the frame rate.
Approximate Hz: My biggest guess is that the video card/monitor isn't actually running at exactly 60Hz. This is probably tied to the exact speed of the monitor and the video card frequency. This seems like something that could be concretely determined, but I don't know which tools can be used to do this. (Update: my current video mode in Linux shows 59.93Hz, so closer, but still not there.)
If the answer is indeed #4, this is perhaps not the best exchange site for the question. But in all cases my ultimate goal is to figure out programmatically what the ideal refresh rate actually is. So I'm hoping somebody can confirm/deny my ideas, and possibly point me in the right direction to getting the information I need.
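For what it's worth, on the Android side I can at least query the nominal rate and measure the observed vsync spacing (a rough sketch; whether either number reflects the true panel clock is exactly what I'm unsure about):

```java
import android.content.Context;
import android.util.Log;
import android.view.Choreographer;
import android.view.WindowManager;

public class RefreshRateProbe {
    // Nominal rate as reported by the OS; may differ slightly from the true panel clock.
    public static float reportedRefreshRate(Context context) {
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        return wm.getDefaultDisplay().getRefreshRate();
    }

    // Empirical rate: average the spacing of the vsync timestamps Choreographer reports.
    public static void measureRefreshRate(final int frames) {
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            private long firstVsyncNs;
            private int count;

            @Override
            public void doFrame(long frameTimeNanos) {
                if (count == 0) {
                    firstVsyncNs = frameTimeNanos;
                } else if (count == frames) {
                    double hz = count / ((frameTimeNanos - firstVsyncNs) / 1e9);
                    Log.d("RefreshRateProbe", "measured ~" + hz + " Hz over " + frames + " frames");
                    return; // stop reposting
                }
                count++;
                Choreographer.getInstance().postFrameCallback(this);
            }
        });
    }
}
```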

how to measure and improve battery use in iPhone/iPad game (Android also)

My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be. (I sketch the Android-side reading I have in mind at the end of this question.)
(2) I'm trying to understand the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
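For concreteness, the kind of Android-side battery reading I had in mind for part (1) is just the sticky battery intent (a minimal sketch; as far as I can tell the level is typically reported in 1% steps, which is part of why I'm asking how long each run needs to be):

```java
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class BatteryProbe {
    // Returns the battery level as a fraction in [0, 1], or -1 if unavailable.
    // Reads the sticky ACTION_BATTERY_CHANGED broadcast; no receiver registration needed.
    public static float batteryFraction(Context context) {
        Intent status = context.registerReceiver(
                null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
        if (status == null) {
            return -1f;
        }
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        if (level < 0 || scale <= 0) {
            return -1f;
        }
        return level / (float) scale;
    }
}
```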
Try reading this article:
Android Documents on optimization
What works well for me is reducing the work the garbage collector has to do. For example, when programming for a desktop computer you're (or I'm) used to defining variables inside loops when they aren't needed outside the loop; on a mobile device this causes a massive amount of garbage collection (and I'm not talking about primitive vars, but big objects).
Try to avoid things like that.
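A contrived example of what I mean (the names are only illustrative):

```java
import java.util.List;

public class RenderLoop {
    // Illustrative stand-in for any non-trivial object you might build per frame.
    static class Vec2 {
        float x, y;
        void set(float x, float y) { this.x = x; this.y = y; }
    }

    private final Vec2 scratch = new Vec2();   // lives for the whole game: no per-frame garbage

    // Wasteful version: one allocation per entity per frame keeps the GC busy.
    void drawAllWasteful(List<float[]> positions) {
        for (int i = 0; i < positions.size(); i++) {
            Vec2 tmp = new Vec2();             // allocated and thrown away every iteration
            tmp.set(positions.get(i)[0], positions.get(i)[1]);
            draw(tmp);
        }
    }

    // Better: hoist the temporary out of the loop (or out of the method entirely).
    void drawAllReused(List<float[]> positions) {
        for (int i = 0; i < positions.size(); i++) {
            scratch.set(positions.get(i)[0], positions.get(i)[1]);
            draw(scratch);
        }
    }

    private void draw(Vec2 v) { /* render at v */ }
}
```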
One little tip that really helped me get battery usage (and warmth of the device!) down was to throttle the FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. a turn-based game or the user tapped pause), throttle down the FPS.
Or throttle if the user hasn't interacted for more than 10 seconds, like a screensaver on a desktop PC. In the real world users often get distracted while using mobile devices. Don't let your app drain battery while your user figures out what subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; throttling this manually to 30 FPS is barely visible and saves you about half of the GPU cycles (and therefore a lot of battery!).
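In practice that can be as simple as sleeping off the rest of the frame budget, or switching a GLSurfaceView to on-demand rendering while nothing moves (a sketch; the 30 FPS cap and the class names are just examples):

```java
import android.opengl.GLSurfaceView;
import android.os.SystemClock;

public class FrameThrottle {
    private static final long TARGET_FRAME_MS = 1000 / 30;   // cap at roughly 30 FPS
    private long lastFrameMs;

    // Call once per frame from the render loop: sleep away whatever is left of
    // the frame budget so the CPU/GPU idle instead of spinning at 60+ FPS.
    public void throttle() {
        long elapsed = SystemClock.uptimeMillis() - lastFrameMs;
        if (elapsed < TARGET_FRAME_MS) {
            try {
                Thread.sleep(TARGET_FRAME_MS - elapsed);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        }
        lastFrameMs = SystemClock.uptimeMillis();
    }

    // When the scene is static (pause screen, waiting for the player's turn),
    // stop the continuous render loop entirely and only draw on demand.
    public static void renderOnDemand(GLSurfaceView view) {
        view.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        // later: call view.requestRender() whenever the scene actually changes
    }
}
```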

Strange "stutter" in box2D on different android devices

I'm developing an engine and a game at the same time in C++ and I'm using Box2D for the physics back end. I'm testing on different Android devices and on 2 out of 3 devices the game runs fine and so do the physics. However, on my Galaxy Tab 10.1 I'm sporadically getting a sort of "stutter". Here is a YouTube video demonstrating it:
http://www.youtube.com/watch?v=DSbd8vX9FC0
The first device the game is running on is an Xperia Play... the second device is a Galaxy Tab 10.1. Needless to say the Galaxy tab has much better hardware than the Xperia Play, yet Box2D is lagging at random intervals for random lengths of time. The code for both machines is exactly the same. Also, the rest of the engine/game is not actually lagging. The entire time, it's running at solid 60 fps. So this "stuttering" seems to be some kind of delay or glitch in actually reading values from box2D.
The sprites you see moving check to see if they have an attached physical body at render time and set their positional values based on the world position of the physical body. So it seems to be in this specific process that box2D is seemingly out of sync with the rest of the application. Quite odd. I realize it's a long shot but I figured I'd post it here anyway to see if anyone had ideas... since I'm totally stumped. Thanks for any input in advance!
Oh, P.S. I am using a fixed time step, since that seems to be the most commonly suggested solution for things like this. I moved to a fixed time step while developing this on my desktop; I ran into a similar (though more severe) issue there, and the fixed step was the solution. Also, like I said, the game is running steady at 60 fps, which is controlled by a low-latency timer, so I doubt simple lag is the issue. Thanks again!
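P.P.S. For reference, my fixed step is the usual accumulator pattern, roughly equivalent to the sketch below (shown in Java for brevity; my engine is C++ and the names are illustrative):

```java
// Fixed-timestep accumulator: physics always advances by the same dt,
// and rendering interpolates between the last two physics states.
public class FixedStepLoop {
    private static final float STEP_SECONDS = 1f / 60f;
    private static final int MAX_STEPS_PER_FRAME = 5;    // avoid the "spiral of death"

    private float accumulator;
    private long lastTimeNs = System.nanoTime();

    public void frame() {
        long now = System.nanoTime();
        accumulator += (now - lastTimeNs) / 1e9f;
        lastTimeNs = now;

        int steps = 0;
        while (accumulator >= STEP_SECONDS && steps < MAX_STEPS_PER_FRAME) {
            stepPhysics(STEP_SECONDS);                    // e.g. the Box2D world step with a constant dt
            accumulator -= STEP_SECONDS;
            steps++;
        }

        float alpha = accumulator / STEP_SECONDS;         // 0..1: how far we are into the next step
        render(alpha);                                    // interpolate sprite positions by alpha
    }

    private void stepPhysics(float dt) { /* b2World::Step equivalent */ }
    private void render(float alpha)   { /* draw sprites at interpolated positions */ }
}
```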
As I mentioned in the comments here, this came down to being a timer resolution issue. I was using a timer class which was supposed to access the highest resolution system timer, cross platform. Everything worked great, except when it came to Android, some versions worked and some versions it did not. The galaxy tab 10.1 was one such case.
I ended up rewriting my getSystemTime() method to use a new addition to C++11 called std::chrono::high_resolution_clock. This also worked great (everywhere but Android)... except it has yet to be implemented in any NDK for Android. It is supposed to be implemented in version 5 of the CrystaX NDK R7, which at the time of this post is 80% complete.
I did some research into the various methods of accessing the system time, or anything on which I could base a reliable timer on the NDK side, but what it comes down to is that these methods are not supported consistently across platforms. I've gone through the painful process of writing my own engine from scratch simply so that I could support every version of Android, so betting on methods that are inconsistently implemented is nonsensical.
The only sensible solution for anyone facing this problem, in my opinion, is to simply abandon the idea of implementing such code on the NDK side. I'm going to do this on the Java end instead, since thus far in all my tests this has been sufficiently reliable across all devices that I've tested on. More on that here:
http://www.codeproject.com/Articles/189515/Androng-a-Pong-clone-for-Android#Gettinghigh-resolutiontimingfromAndroid7
Update
I have now implemented my proposed solution, doing the timing on the Java side, and it has worked. I also discovered that handling any relatively large number, regardless of data type (such as the nanoseconds returned by the monotonic clock), on the NDK side also results in serious lagging on some versions of Android. As such, I've optimized this as much as possible by passing around a pointer to the system time, to ensure we're not passing by copy.
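The Java-side timer itself boils down to very little, roughly this (a sketch; the JNI hookup on the native side is omitted and the names are illustrative):

```java
// Monotonic game clock based on System.nanoTime(), queried from C++ through
// a single JNI call that returns a plain long (no per-frame object churn).
public class GameClock {
    private final long startNs = System.nanoTime();

    // Nanoseconds since the clock was created; unaffected by wall-clock changes.
    public long elapsedNanos() {
        return System.nanoTime() - startNs;
    }

    // Millisecond convenience wrapper for coarse timing.
    public long elapsedMillis() {
        return (System.nanoTime() - startNs) / 1000000L;
    }
}
```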
One last thing: my statement that calling the monotonic clock from the NDK side is unreliable would seem, however, to be false. From the Android docs on System.nanoTime():
...and System.nanoTime(). This clock is guaranteed to be monotonic, and is the recommended basis for the general purpose interval timing of user interface events, performance measurements, and anything else that does not need to measure elapsed time during device sleep.
So it would seem, if this can be trusted, that calling the clock is reliable, but as mentioned there are other issues that then arise, like allocating and handling the massive number that results, which alone nearly cut my frame rate in half on the Galaxy Tab 10.1 with Android 3.2. Ultimate conclusion: supporting all Android devices equally is either damn near or flat-out impossible, and using native code seems to make it worse.
I am very new to game development, and you seem a lot more experienced, so it may be silly to ask, but are you using delta time to update your world? Although you say you have a constant frame rate of 60 fps, maybe your frame counter calculates something wrong, and you should use delta time to skip some frames when the FPS is low, or your world will seem to "stay behind". I am pretty sure that you are familiar with this, but I think a good example is here: DeltaTimeExample, although it is a C implementation. If you need it, I can paste some code from my Android projects showing how I use delta time, which I developed following this book: Beginning Android Games.
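Actually, here's roughly what that looks like in my projects (a simplified Java sketch; the names are mine, not from the book):

```java
// Delta-time update loop: everything moves by "units per second * seconds elapsed",
// so the world advances at the same speed whether we render at 30 or 60 FPS.
public class DeltaTimeLoop {
    private long lastFrameNs = System.nanoTime();

    public void onDrawFrame() {
        long now = System.nanoTime();
        float deltaSeconds = (now - lastFrameNs) / 1e9f;
        lastFrameNs = now;

        updateWorld(deltaSeconds);
        renderWorld();
    }

    private void updateWorld(float deltaSeconds) {
        // e.g. x += speedUnitsPerSecond * deltaSeconds;
    }

    private void renderWorld() { /* draw the scene */ }
}
```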

Raycasting vs OpenGL-ES 2.0 - is there a noticeable difference for Doom like game on Android?

It's my first question. I searched on Google as always; round two was searching directly on SO, but I still couldn't get an exact answer.
I'm going to write a 3D graphics engine for games I want to make in the future for the Android platform. I played Doom recently on my mobile, during a break in high school, and when I lifted my head after being killed by a Baron of Hell I saw more than twenty people struggling to see me playing, followed by a loud "AAAAAAAW!" when they saw me dead. My jaw went down to the floor. None of them suspected that it was a game from 1993.
But let's get back to the point. If you wanted to write "find it out by yourself", you can stop typing now. I can't check this on my own, for different reasons. First, I own only one mid-cost device (HTC Wildfire) that I can test my engine on. The second and more important reason is time. I don't have time to write a whole OpenGL-ES 2.0 graphics engine only to realize that my Wildfire can't even run it, or that it gets around 1 FPS when walking.
For raycasting to be more realistic it needs a few extra calculations, which don't matter to me in OpenGL because I just set vertices & indices and it goes. I love the way Doom levels are designed, but I wanted to add the ability to move your head up and down (rotation on the X-axis) to look around and shoot more precisely, and that makes the raycasting calculations more complex (not for me, for the CPU). I know the Wildfire doesn't have any GPU, and anything based on OGL (even 2D) lags like hell even if I overclock my CPU to #748 with the PERFORMANCE governor (it turns off at higher values, and I have to take it out of my Casemate to stop it from overheating). But Doom runs perfectly without any lag even if I underclock to #448. Is this only because of the lo-res textures and non-complex levels, or because of raycasting?
And please don't flame me for going back to such an old thing as a raycasting engine. Today's mobile devices, or smartphones - call them what you want - that don't cost $1k are at the stage computers were at 18 years ago (when Doom was released). There will always be a group of such devices. And even if I don't make my billion on this, I will have something to play on while travelling.
I've got some pretty nice ideas for games like this, and I want to target low-cost devices, as they don't have anything more spectacular than simple logic games. As this is my first question, please correct anything I wrote wrongly, poorly or just badly - I'll rewrite it.
