When running the basic sample:
http://android-developers.blogspot.com/2009/04/introducing-glsurfaceview.html
Adding some basic timing, either with System.currentTimeMillis() or by dropping to native code and using clock_gettime(CLOCK_REALTIME_HR, &res), appears to yield the same results; I tested each separately.
I always calculate a frame rate of ~56, a rough average of 18 ms between frames. I realize frame timing is not a perfect science (timer accuracy, the device doing stuff in the background). I simply let it run doing absolutely nothing, not even a glClear. After an hour these are the results I got, which are about the same as after letting it run for one minute.
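For reference, the timing I'm doing is essentially the following (a minimal, self-contained sketch; FrameTimer is a hypothetical helper, fed with System.nanoTime() once per onDrawFrame):

```java
// Hypothetical helper: call tick(System.nanoTime()) once per onDrawFrame
// and it reports the running average frame interval in milliseconds.
class FrameTimer {
    private long lastNanos = -1;
    private long totalNanos = 0;
    private int frames = 0;

    /** Record one frame; returns the average interval so far in ms. */
    double tick(long nowNanos) {
        if (lastNanos >= 0) {
            totalNanos += nowNanos - lastNanos;
            frames++;
        }
        lastNanos = nowNanos;
        return frames == 0 ? 0.0 : (totalNanos / (double) frames) / 1_000_000.0;
    }
}
```

At a 60 Hz refresh the average should converge on ~16.7 ms; on my phones it settles around 18 ms instead.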
I've tried many things to achieve a true 59/60 fps. I'm beginning to wonder if it is my device (HTC Incredible S), although I have tried a Samsung Galaxy S and a Nexus S, both with the same results. Tablets do not exhibit this behavior at all: on my Eee Transformer and Xoom I get ~16 ms per frame consistently.
This leads me to believe VSYNC is the issue, and that the vsync on that generation of Android phones is slow; it should really fire every ~16 ms. Achieving a true 60 fps is unlikely, but 59 fps should be common.
Has anyone else experienced this? Any tips for disabling vsync? I don't think that is possible. Any tips for gaining control of the "swap" step of rendering?
I'm doing some work with computer vision on Android. I have made an app that keeps the exposure low after calibration using the exposure lock, and that works fine: the exposure is how I want it, with the LED on and off, for almost all video frames.
However, after I toggle the camera's LED flash on, the image is briefly overexposed.
Since the exposure is locked, I do not understand why this is happening.
I'm testing with a Moto G (2nd gen) and a Nexus 5X; only the Nexus 5X shows the problem. From what I've read there are some difficulties with setting exposure compensation (which the Nexus 5X lacks), but the exposure lock should work, although I'm beginning to suspect it has flaws. The Moto G has the exposure compensation set to minimum (which does work there).
This raises the question: is the briefly overexposed frame the result of the Nexus drivers, of the missing exposure compensation, or is it a naturally occurring problem?
If the latter, can I counteract it? I've thought about keeping track of the overall brightness of the frame and comparing it with a non-overexposed one a few frames back, but this seemed tricky and did not work.
Sidenote: it may be worth mentioning that the Nexus briefly seems to stutter the video output after setting the camera parameters (flash on or off). Would it be conceivable that during this stutter the CMOS sensor is charged too long?
I've looked into this some more. Nexus support wasn't of any help, so I basically had to work around the problem. I ended up timing the flash (there's a delay in the case of the Nexus 5X) by watching the brightness of the frame. Instead of just timing from flash low to high, I add another frame at the end to account for one overexposed frame.
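The brightness check I used looks roughly like this (a sketch, not my exact code; it assumes NV21 preview frames, where the first width*height bytes are the Y plane, and the 1.5x/128 thresholds are illustrative guesses):

```java
// Sketch: flag an overexposed frame by comparing its mean luma against a
// reference frame from a few frames back. Thresholds are illustrative only.
class OverexposureDetector {
    /** Mean luma (0-255) of an NV21 frame's Y plane. */
    static double meanLuma(byte[] nv21, int width, int height) {
        long sum = 0;
        int n = width * height; // Y plane precedes the interleaved VU data
        for (int i = 0; i < n; i++) sum += nv21[i] & 0xFF;
        return sum / (double) n;
    }

    /** True if brightness jumped well above the reference frame's. */
    static boolean isOverexposed(double current, double reference) {
        return current > reference * 1.5 && current > 128;
    }
}
```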
I'm not sure the same issue happens when turning the flash off. I would guess so, as there's a significant frame drop around powering off the flash, just as when powering on. Either way, that doesn't seem to be a problem for me (yet).
My guess is that this is a problem with the camera driver and that the CMOS sensor is indeed charged too long because frames are not fetched in time, due to a delay caused by enabling/disabling the flash.
I created a game with createjs, but it runs very slowly on a mobile.
I added a function to show the FPS obtained from createjs.Ticker.getMeasuredFPS(). However, the FPS it reports looks normal: I set the FPS to 60, and getMeasuredFPS() returns about 55-60, while the animation is laggy and the real FPS can't be that high (it might be 5-10).
How can I get the real FPS on the device?
How can I profile it on a mobile?
getMeasuredFPS() is well tested, and should be accurate. That said, there's always a chance your mobile browser is doing something tricky (running the code every tick, but not rendering). There's also a chance your code may not be updating properly.
Either way, it's worth looking at a couple of alternatives. You can check getMeasuredTickTime(), which will tell you the average time spent in each tick (see the docs for more info).
You should also take a look at profiling using devtools.
http://google.com?q=profile+mobile+chrome
I have an OpenGL game for Android. It runs at a good 60 fps while the screen is touched. When I release my finger it drops back down to around 30 fps. Does the touch event/release raise/lower a thread's priority, and if so, how can I replicate this to keep it at a constant 60 fps? This only seems to be an issue on the Galaxy Note 2 so far.
I'll assume you are using onDrawFrame and setRenderMode(RENDERMODE_CONTINUOUSLY).
A rate of 30 or 60 FPS indicates that your implementation of onDrawFrame is being called in sync with the device's screen refresh. Most displays refresh at 60 Hz, giving you 60 FPS.
It is likely that the Galaxy Note 2 has some power saving feature that limits screen refresh to 30Hz when there are no touches on screen. Check if there's any way to disable this feature.
AFAIK, OpenGL ES does not specify a standard for screen refresh rates, so you will need a throttling function to ensure that your game runs and feels the same (i.e., at the same speed) despite differences in FPS.
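Such throttling can be as simple as scaling movement by the elapsed time, so the game advances at the same speed whether the display refreshes at 30 or 60 Hz (a minimal sketch with hypothetical names):

```java
// Frame-rate-independent update: movement is scaled by elapsed time, so the
// player covers the same distance per wall-clock second at 30 or 60 FPS.
class Player {
    static final double SPEED = 100.0; // units per second
    double x = 0;

    void update(double deltaSeconds) {
        x += SPEED * deltaSeconds;
    }
}
```

Whether update() runs 30 or 60 times per second, x advances the same 100 units per wall-clock second either way.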
Yes.
The best way to observe this phenomenon is to use systrace with the "freq" tag enabled. You probably need a rooted device, and you definitely need one on which systrace is enabled.
systrace will record changes in the clock frequency for various components. It varies by device, but you can usually get the per-core CPU clocks and GPU memory rate. You will likely see several of them drop significantly at the same time your frame rate drops.
The motivation for doing this is to reduce power requirements and extend battery life. The assumption is that, while your finger is in contact with the screen, you're actively doing something and the device should be as responsive as possible. After a brief period of time, the clocks will slow to a level appropriate for the current workload. The heuristics that determine how long to wait before slowing, and how much to slow down, are tuned for each device.
(This has caused some people to create a thread that just sits and spins on multi-core devices as a way to artificially prop up the CPU clock rate. Not recommended. See also this answer.)
The bottom line is that this isn't a simple matter of adjusting thread priorities. You have to choose between recognizing that the slowdown will happen and adapting to it (by making your game updates independent of frame rate), or figure out some way to fool the device into staying in a higher-power mode when you want smooth animation.
(For anyone who wants to play along at home: build a copy of Grafika and start the "Record GL app" activity. If you drag your finger around the screen all will be well, but if you leave it alone for a few seconds you may start to see the dropped-frame counter rising as the app falls behind. Seen on Nexus 5, Nexus 7 (2013), and others.)
I'm developing an engine and a game at the same time in C++, and I'm using Box2D for the physics back end. I'm testing on different Android devices, and on 2 out of 3 devices the game runs fine and so do the physics. However, on my Galaxy Tab 10.1 I sporadically get a sort of "stutter". Here is a YouTube video demonstrating it:
http://www.youtube.com/watch?v=DSbd8vX9FC0
The first device the game is running on is an Xperia Play... the second device is a Galaxy Tab 10.1. Needless to say, the Galaxy Tab has much better hardware than the Xperia Play, yet Box2D is lagging at random intervals for random lengths of time. The code is exactly the same on both machines. Also, the rest of the engine/game is not actually lagging; the entire time it runs at a solid 60 fps. So this "stuttering" seems to be some kind of delay or glitch in actually reading values from Box2D.
The sprites you see moving check to see if they have an attached physical body at render time and set their positional values based on the world position of the physical body. So it seems to be in this specific process that box2D is seemingly out of sync with the rest of the application. Quite odd. I realize it's a long shot but I figured I'd post it here anyway to see if anyone had ideas... since I'm totally stumped. Thanks for any input in advance!
Oh, P.S.: I am using a fixed time step, since that seems to be the most commonly suggested solution for things like this. I moved to a fixed time step while developing on my desktop, where I ran into a similar but more severe issue, and the fixed step was the solution. Also, as I said, the game is running steadily at 60 fps, controlled by a low-latency timer, so I doubt simple lag is the issue. Thanks again!
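For clarity, the fixed time step I'm using follows the standard accumulator pattern, roughly like this (sketched in Java for brevity; my engine is C++, and the world.step() call is stubbed out):

```java
// Fixed-time-step accumulator: physics always advances in exact 1/60 s steps,
// however long the rendered frame took. stepsTaken stands in for world.step().
class FixedStepLoop {
    static final double STEP = 1.0 / 60.0; // physics step in seconds
    private double accumulator = 0;
    int stepsTaken = 0;

    /** Run zero or more fixed physics steps for this frame's elapsed time. */
    void frame(double frameSeconds) {
        accumulator += frameSeconds;
        while (accumulator >= STEP) {
            stepsTaken++; // world.step(STEP, velIters, posIters) would go here
            accumulator -= STEP;
        }
    }
}
```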
As I mentioned in the comments here, this came down to being a timer resolution issue. I was using a timer class that was supposed to access the highest-resolution system timer, cross-platform. Everything worked great except on Android, where some versions worked and some did not; the Galaxy Tab 10.1 was one such case.
I ended up re-writing my getSystemTime() method to use a new addition to C++11 called std::chrono::high_resolution_clock. This also worked great... except it has yet to be implemented in any NDK for Android. It is supposed to be implemented in version 5 of the CrystaX NDK R7, which at the time of this post is 80% complete.
I did some research into various methods of accessing the system time, or anything on which I could base a reliable timer on the NDK side, but what it comes down to is that these methods are not supported on all platforms. I've gone through the painful process of writing my own engine from scratch simply so that I could support every version of Android, so betting on methods that are inconsistently implemented is nonsensical.
The only sensible solution for anyone facing this problem, in my opinion, is to simply abandon the idea of implementing such code on the NDK side. I'm going to do the timing on the Java end instead, since thus far this has been sufficiently reliable across all devices I've tested. More on that here:
http://www.codeproject.com/Articles/189515/Androng-a-Pong-clone-for-Android#Gettinghigh-resolutiontimingfromAndroid7
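The Java-side timing amounts to little more than subtracting System.nanoTime() values (a minimal sketch; ElapsedTimer is a hypothetical name):

```java
// System.nanoTime() is monotonic on Android, so elapsed time is a subtraction.
class ElapsedTimer {
    private long startNanos;

    void start() { startNanos = System.nanoTime(); }

    /** Milliseconds elapsed since start(). */
    double elapsedMs() { return (System.nanoTime() - startNanos) / 1_000_000.0; }
}
```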
Update
I have now implemented my proposed solution, doing the timing on the Java side, and it has worked. I also discovered that handling any relatively large number, regardless of data type (such as the nanoseconds returned by the monotonic clock), on the NDK side also results in serious lagging on some versions of Android. As such, I've optimized this as much as possible by passing around a pointer to the system time, to ensure we're not passing by copy.
One last thing: my statement that calling the monotonic clock from the NDK side is unreliable would seem, however, to be false. From the Android docs on System.nanoTime():
...and System.nanoTime(). This clock is guaranteed to be monotonic,
and is the recommended basis for the general purpose interval timing
of user interface events, performance measurements, and anything else
that does not need to measure elapsed time during device sleep.
So it would seem, if this can be trusted, that calling the clock is reliable. But as mentioned, other issues then arise, like allocating and handling the massive number that results, which alone nearly cut my frame rate in half on the Galaxy Tab 10.1 with Android 3.2. Ultimate conclusion: supporting all Android devices equally is either damn near or flat-out impossible, and using native code seems to make it worse.
I am very new to game development, and you seem a lot more experienced, so it may be silly to ask, but are you using delta time to update your world? Although you say you have a constant frame rate of 60 fps, maybe your frame counter calculates something wrong, and you should use delta time to skip some frames when the FPS is low, or your world will seem to "stay behind". I am pretty sure you are familiar with this, but I think a good example is here: DeltaTimeExample, although it is a C implementation. If you need it, I can paste some code from my Android projects showing how I use delta time, which I developed following this book: Beginning Android Games.
We are currently developing an Android game using OpenGL ES. We are now trying to support different resolutions; however, when allowing for large resolutions, the Moto Droid we are testing on seems to lag at its native resolution (800x400). We have the frames locked at 30 fps, and when we check, our game only takes about 15-20 ms to do its updating and drawing on the Droid; however, 30-60 ms sometimes elapses between calls to onDrawFrame. My Galaxy S device doesn't have this problem, and when we put the resolution down on the Motorola Droid it seems perfectly fine.
We've investigated the issue and have found people with what seems to be exactly the same issue, but no one could offer a solution. Other games on the market that seem to use OpenGL ES run fine on the Moto Droid.
Is there something we're missing? Something we need to call or do? There seems to be no reason for 30-60 ms to pass between calls to onDrawFrame, especially when the native resolution (800x400) isn't leaps and bounds larger than the default of 533x320, or whatever it falls back to when not allowing for large screens, so it seems ridiculous for it to be because of a buffer swap. The lag even happens when nearly nothing is being drawn to the screen, so it's not because we're drawing too much.
Any enlightenment or help as to how to fix this would be great.
You have to give more information. If it's not because of drawing, then it is likely because of calculation or object-allocation code. Allocating objects frequently during a game invokes the GC, which is bad for games.
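The usual remedy is object pooling: pre-allocate game objects and reuse them instead of creating new ones every frame (a minimal sketch with hypothetical names):

```java
import java.util.ArrayDeque;

// Simple object pool: reusing instances avoids per-frame allocations and
// therefore the GC pauses they eventually trigger.
class BulletPool {
    static class Bullet { double x, y; }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    Bullet obtain() {
        Bullet b = free.poll();
        return b != null ? b : new Bullet(); // allocate only when pool is empty
    }

    void release(Bullet b) {
        free.push(b); // return for reuse instead of letting the GC collect it
    }
}
```

In the game loop you obtain() on spawn and release() on despawn, so after warm-up the steady state allocates nothing at all.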
Performance and memory allocation for Android games are explained in the links below.
I suggest you watch these two videos by the Google Android team.
http://www.youtube.com/watch?v=U4Bk5rmIpic&feature=player_embedded
http://www.youtube.com/watch?v=7-62tRHLcHk&feature=player_embedded
Hope that helps in optimizing.