I have just started using OpenGL ES 2 for Android for my little game and have encountered a problem with redrawing the screen on each frame.
I have set up a loop in my Renderer's onDrawFrame, just a simple [ updateGameLogic() -> drawGame() ] or Thread.sleep() loop based on the time elapsed since the last drawGame() call.
Currently the updateGameLogic() method simply translates the camera in the +ve X direction (the game is 2D).
In the drawGame() call, I first clear the screen with GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT). Then I have 3 glBindTexture and glDrawElements calls for drawing 3 categories of objects, each with a different texture atlas.
Here comes the problem: in between each frame drawn on screen, there is a blink of the previous frame, which is undesired and makes the game look dizzy. Precisely, say the game is just about to go from frame 2 to frame 3; right before frame 2 vanishes and frame 3 appears, there is a split moment where frame 1 is displayed.
I thought this might be due to the way the GLSurfaceView is buffered by the system, so I experimented with calling glClear multiple times before drawing, but everything stays the same. I would be grateful if someone could provide an explanation / solution and point out what I have done wrong, thanks. (Paragraphs 2 to 4 basically describe all my code, so I have not posted it unless requested.)
From the clarification in the comment, it sounds like you have something like this in your code:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        try {
            Thread.sleep(33 - deltaTime);
        } catch (InterruptedException e) {
            // ignored
        }
        return;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
This will indeed cause problems. When onDrawFrame() is called, you have to render a frame. You can't just return without drawing anything. The caller will assume that you rendered a frame in any case, and it will end up being presented on the screen. If you decide not to render anything, whatever happened to be in the surface you were supposed to draw to will be presented. There's no telling what this will be, but it's quite likely that it's an old frame from 2-3 frames earlier.
If you want to artificially throttle the frame rate, e.g. to save power, unfortunately there's no very good way to do this in Android. Using sleeps in onDrawFrame() is kind of dirty (and inherently unreliable, IMHO), but it might be necessary in this case. The key is that either before or after you sleep, you still need to render a frame. As a first attempt, I would try tweaking the above to something like this:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        try {
            Thread.sleep(33 - deltaTime);
        } catch (InterruptedException e) {
            // ignored
        }
        currentTime = SystemClock.elapsedRealtime();
        deltaTime = currentTime - mLastFrameTime;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
Note that while there is still an artificial delay, there is no early return in the code anymore.
There are probably more robust variations of this idea for throttling the redraws to 30 fps. Some searching on SO or the rest of the internet should reveal previous discussions.
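One such variation, purely as a sketch (mGLView, mHandler and FRAME_INTERVAL_MS are my own placeholder names, not from the question), is to switch the GLSurfaceView to RENDERMODE_WHEN_DIRTY and drive redraws from a Handler on the UI thread, so that onDrawFrame() always renders exactly one frame per request instead of sleeping inside the renderer:

// Sketch only: throttle to ~30 fps by requesting frames explicitly.
private static final long FRAME_INTERVAL_MS = 33;
private final Handler mHandler = new Handler();

private final Runnable mFrameRequester = new Runnable() {
    @Override
    public void run() {
        mGLView.requestRender();                      // triggers one onDrawFrame() call
        mHandler.postDelayed(this, FRAME_INTERVAL_MS);
    }
};

// Setup (e.g. in onCreate / onResume):
// mGLView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
// mHandler.post(mFrameRequester);
// ...and remove the callback in onPause: mHandler.removeCallbacks(mFrameRequester);

This still drifts relative to vsync, so it is not perfect either, but it keeps the "one request, one rendered frame" invariant intact.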
I'm measuring time intervals for looped animations/particles/etc. in my Android app. The app is a live wallpaper, so it doesn't prevent the system from scaling the clock down for power saving.
Because of this, none of the methods I use to measure the time interval between frames seems to measure monotonic time - I experience animations inconsistently slowing down and speeding up.
I've used all possible methods to retrieve the system time - none of them is monotonic (even SystemClock.elapsedRealtime() and System.nanoTime(), which are supposed to be guaranteed monotonic, but nope, they are not).
If I hold a finger on the screen to prevent power saving, all animations are smooth.
This issue is very noticeable on the Nexus 10, slightly noticeable on the Nexus 7 1st gen, and not present at all on the Nexus 4.
Here is code excerpt to measure time:
public void onDrawFrame(GL10 glUnused) {
    // calculate timers for animations based on difference between lastFrameTime and SystemClock.elapsedRealtime()
    ...
    // save last timestamp for this frame
    lastFrameTime = SystemClock.elapsedRealtime();
}
Expanding a little on what I think you're doing based on the outline of your code, your onDrawFrame() method in pseudo code looks like this:
deltaTime = SystemClock.elapsedRealtime() - lastFrameTime
// point 1, see text below
update animation based on deltaTime
draw frame
// point 2, see text below
lastFrameTime = SystemClock.elapsedRealtime()
The problem with this sequence is that you lose the time between point 1 and point 2 from your total animation time. You need to make sure that the total sum of all your deltaTime values, which are the times applied to your animation, covers the entire wall clock time of the animation. With the logic you use, this is not the case. You only add up the times between the end of one call and the start of the next call. You do not account for the time needed to execute the method, which can be significant.
This can be fixed with a slight change in the sequence:
currentTime = SystemClock.elapsedRealtime()
deltaTime = currentTime - lastFrameTime
lastFrameTime = currentTime
update animation based on deltaTime
draw frame
The key point is that you only call elapsedRealtime() once per frame.
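In Java that would look roughly like this (a minimal sketch; updateAnimations() and drawFrame() are placeholders for your own code):

@Override
public void onDrawFrame(GL10 glUnused) {
    long currentTime = SystemClock.elapsedRealtime();  // read the clock exactly once per frame
    long deltaTime = currentTime - lastFrameTime;
    lastFrameTime = currentTime;

    updateAnimations(deltaTime);  // placeholder: advance your animation timers by deltaTime
    drawFrame();                  // placeholder: your actual GL drawing
}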
I'm making an Android game and am currently not getting the performance I'd like. I have a game loop in its own thread which updates an object's position. The rendering thread will traverse these objects and draw them. The current behavior is what seems like choppy/uneven movement. What I cannot explain is that before I put the update logic in its own thread, I had it in the onDrawFrame method, right before the gl calls, and in that case the animation was perfectly smooth. It only becomes choppy/uneven specifically when I try to throttle my update loop via Thread.sleep. Even when I allow the update thread to go berserk (no sleep), the animation is smooth; only when Thread.sleep is involved does it affect the quality of the animation.
I've created a skeleton project to see if I could recreate the issue, below are the update loop and the onDrawFrame method in the renderer:
Update Loop
@Override
public void run()
{
    while(gameOn)
    {
        long currentRun = SystemClock.uptimeMillis();
        if(lastRun == 0)
        {
            lastRun = currentRun - 16;
        }
        long delta = currentRun - lastRun;
        lastRun = currentRun;

        posY += moveY*delta/20.0;
        GlobalObjects.ypos = posY;

        long rightNow = SystemClock.uptimeMillis();
        if(rightNow - currentRun < 16)
        {
            try {
                Thread.sleep(16 - (rightNow - currentRun));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
And here is my onDrawFrame method:
@Override
public void onDrawFrame(GL10 gl) {
    gl.glClearColor(1f, 1f, 0, 0);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glTranslatef(transX, GlobalObjects.ypos, transZ);
    //gl.glRotatef(45, 0, 0, 1);
    //gl.glColor4f(0, 1, 0, 0);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, uvBuffer);

    gl.glDrawElements(GL10.GL_TRIANGLES, drawOrder.length,
            GL10.GL_UNSIGNED_SHORT, indiceBuffer);

    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
I've looked through Replica Island's source and he's doing his update logic in a separate thread, as well as throttling it with Thread.sleep, but his game looks very smooth. Does anyone have any ideas or has anyone experienced what I'm describing?
---EDIT: 1/25/13---
I've had some time to think and have smoothed out this game engine considerably. How I managed this might be blasphemous or insulting to actual game programmers, so please feel free to correct any of these ideas.
The basic idea is to keep a pattern of update, draw... update, draw... while keeping the time delta relatively the same (often out of your control).
My first course of action was to synchronize my renderer in such a way that it only drew after being notified it was allowed to do so. This looks something like this:
public void onDrawFrame(GL10 gl10) {
    synchronized(drawLock)
    {
        while(!GlobalGameObjects.getInstance().isUpdateHappened())
        {
            try
            {
                Log.d("test1", "draw locking");
                drawLock.wait();
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
    }
    // ... actual drawing happens here ...
}
When I finish my update logic, I call drawLock.notify(), releasing the rendering thread to draw what I just updated. The purpose of this is to help establish the pattern of update, draw... update, draw... etc.
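For completeness, the update-thread side of that handshake would look something like this (my sketch of what is described above, not the poster's actual code; setUpdateHappened() is an assumed setter matching the isUpdateHappened() check):

// Runs on the game-logic thread; drawLock is the same object the renderer waits on.
private void finishUpdate() {
    synchronized (drawLock) {
        GlobalGameObjects.getInstance().setUpdateHappened(true); // assumed setter
        drawLock.notify();  // wake the renderer so it can draw the state just produced
    }
}

The renderer would also need to reset the flag (e.g. setUpdateHappened(false)) once it has drawn, otherwise it would never wait again.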
Once I implemented that, it was considerably smoother, although I was still experiencing occasional jumps in movement. After some testing, I saw that I had multiple updates occurring between calls of onDrawFrame. This was causing one frame to show the result of two (or more) updates, a larger jump than normal.
What I did to resolve this was to cap the time delta to some value, say 18ms, between two onDrawFrame calls and store the extra time in a remainder. This remainder would be distributed to subsequent time deltas over the next few updates if they could handle it. This idea prevents all sudden long jumps, essentially smoothing a time spike out over multiple frames. Doing this gave me great results.
The downside to this approach is that for a little time, the position of objects will not be accurate with time, and will actually speed up to make up for that difference. But it's smoother and change in speed is not very noticeable.
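Here is a rough sketch of that cap-and-carry idea (the field and method names are mine; the 18 ms cap matches the value mentioned above):

private static final long MAX_DELTA_MS = 18;  // cap on the per-frame time delta
private long remainderMs = 0;                 // excess time carried over from spikes

private long smoothDelta(long rawDeltaMs) {
    long delta = rawDeltaMs + remainderMs;    // feed stored time back into later frames
    if (delta > MAX_DELTA_MS) {
        remainderMs = delta - MAX_DELTA_MS;   // store the excess for the next updates
        delta = MAX_DELTA_MS;
    } else {
        remainderMs = 0;
    }
    return delta;                             // advance the game state by this amount
}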
Finally, I decided to rewrite my engine with the two above ideas in mind, rather than patching up the engine I had originally made. I made some optimizations for the thread synchronization that perhaps someone could comment on.
My current threads interact like this:
-Update thread updates the current buffer (double buffer system in order to update and draw simultaneously) and will then give this buffer to the renderer if the previous frame has been drawn.
-If the previous frame has not yet been drawn, or is still drawing, the update thread will wait until the render thread notifies it that it has drawn.
-Render thread waits until notified by update thread that an update has occurred.
-When the render thread draws, it sets a "last drawn variable" indicating which of the two buffers it last drew and also notifies the update thread if it was waiting on the previous buffer to be drawn.
That may be a little convoluted, but what that's doing is allowing for the advantages of multithreading, in that it can perform the update for frame n while frame n-1 is drawing while also preventing multiple update iterations per frame if the renderer is taking a long time. To further explain, this multiple-update scenario is handled by the update thread locking if it detects that the lastDrawn buffer is equal to the one which was just updated. If they are equal, this indicates to the update thread that the frame before has not yet been drawn.
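Purely as an illustration of that hand-off (this is my sketch with invented names, not the actual engine code; GameState.update()/draw() are placeholders), the two threads could coordinate roughly like this:

static class GameState {
    void update() { /* advance positions, animations, ... */ }
    void draw()   { /* issue GL calls for this snapshot */ }
}

private final GameState[] buffers = { new GameState(), new GameState() };
private int writeIndex = 0;            // buffer the update thread writes next
private int readyIndex = -1;           // buffer waiting to be drawn, -1 if none
private final Object bufferLock = new Object();
private volatile boolean running = true;

// Game-logic thread: update one buffer, hand it over, keep going on the other one.
void updateLoop() throws InterruptedException {
    while (running) {
        buffers[writeIndex].update();
        synchronized (bufferLock) {
            while (readyIndex != -1) { // previous hand-off not drawn yet: wait, don't run ahead
                bufferLock.wait();
            }
            readyIndex = writeIndex;
            writeIndex = 1 - writeIndex;
            bufferLock.notifyAll();    // wake the renderer
        }
    }
}

// Render thread (called from onDrawFrame, which would catch InterruptedException itself):
void renderFrame() throws InterruptedException {
    int drawIndex;
    synchronized (bufferLock) {
        while (readyIndex == -1) {     // nothing new since the last frame
            bufferLock.wait();
        }
        drawIndex = readyIndex;
    }
    buffers[drawIndex].draw();         // draw outside the lock so the update thread can work
    synchronized (bufferLock) {
        readyIndex = -1;               // done with this buffer; update thread may hand off again
        bufferLock.notifyAll();
    }
}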
So far I'm getting good results. Let me know if anyone has any comments, would be happy to hear your thoughts on anything I'm doing, right or wrong.
Thanks
(The answer from Blackhex raised some interesting points, but I can't cram all this into a comment.)
Having two threads operating asynchronously is bound to lead to issues like this. Look at it this way: the event that drives animation is the hardware "vsync" signal, i.e. the point at which the Android surface compositor provides a new screen full of data to the display hardware. You want to have a new frame of data whenever vsync arrives. If you don't have new data, the game looks choppy. If you generated 3 frames of data in that period, two will be ignored, and you're just wasting battery life.
(Running a CPU full out may also cause the device to heat up, which can lead to thermal throttling, which slows everything in the system down... and can make your animation choppy.)
The easiest way to stay in sync with the display is to perform all of your state updates in onDrawFrame(). If it sometimes takes longer than one frame to perform your state updates and render the frame, then you're going to look bad, and need to modify your approach. Simply shifting all game state updates to a second core isn't going to help as much as you might like -- if core #1 is the renderer thread, and core #2 is the game state update thread, then core #1 is going to sit idle while core #2 updates the state, after which core #1 will resume to do the actual rendering while core #2 sits idle, and it's going to take just as long. To actually increase the amount of computation you can do per frame, you'd need to have two (or more) cores working simultaneously, which raises some interesting synchronization issues depending on how you define your division of labor (see http://developer.android.com/training/articles/smp.html if you want to go down that road).
Attempting to use Thread.sleep() to manage the frame rate generally ends badly. You can't know how long the period between vsync is, or how long until the next one arrives. It's different for every device, and on some devices it may be variable. You essentially end up with two clocks -- vsync and sleep -- beating against each other, and the result is choppy animation. On top of that, Thread.sleep() doesn't make any specific guarantees about accuracy or minimum sleep duration.
I haven't really gone through the Replica Island sources, but in GameRenderer.onDrawFrame() you can see the interaction between their game state thread (which creates a list of objects to draw) and the GL renderer thread (which just draws the list). In their model, the game state only updates as needed, and if nothing has changed it just re-draws the previous draw list. This model works well for an event-driven game, i.e. where the contents on screen update when something happens (you hit a key, a timer fires, etc). When an event occurs, they can do a minimal state update and adjust the draw list as appropriate.
Viewed another way, the render thread and the game state work in parallel because they're not rigidly tied together. The game state just runs around updating things as needed, and the render thread locks it down every vsync and draws whatever it finds. So long as neither side keeps anything locked up for too long, they don't visibly interfere. The only interesting shared state is the draw list, guarded with a mutex, so their multi-core issues are minimized.
For Android Breakout ( http://code.google.com/p/android-breakout/ ), the game has a ball bouncing around, in continuous motion. There we want to update our state as frequently as the display allows us to, so we drive the state change off of vsync, using a time delta from the previous frame to determine how far things have advanced. The per-frame computation is small, and the rendering is pretty trivial for a modern GL device, so it all fits easily in 1/60th of a second. If the display updated much faster (240Hz) we might occasionally drop frames (again, unlikely to be noticed) and we'd be burning 4x as much CPU on frame updates (which is unfortunate).
If for some reason one of these games missed a vsync, the player may or may not notice. The state advances by elapsed time, not a pre-set notion of a fixed-duration "frame", so e.g. the ball will either move 1 unit on each of two consecutive frames, or 2 units on one frame. Depending on the frame rate and the responsiveness of the display, this may not be visible. (This is a key design issue, and one that can mess with your head if you envisioned your game state in terms of "ticks".)
Both of these are valid approaches. The key is to draw the current state whenever onDrawFrame is called, and to update state as infrequently as possible.
Note for anyone else who happens to read this: don't use System.currentTimeMillis(). The example in the question used SystemClock.uptimeMillis(), which is based on the monotonic clock rather than wall-clock time. That, or System.nanoTime(), are better choices. (I'm on a minor crusade against currentTimeMillis, which on a mobile device could suddenly jump forward or backward.)
Update: I wrote an even longer answer to a similar question.
Update 2: I wrote an even longer longer answer about the general problem (see Appendix A).
One part of the problem may be caused by the fact that Thread.sleep() is not accurate. Try to investigate what the actual sleep time is.
The most important thing for making your animations smooth is to compute an interpolation factor, call it alpha, that linearly interpolates your animation state in consecutive rendering thread calls between two consecutive animation update thread calls. In other words, if your update interval is long compared to your frame time, not interpolating between update steps means you are effectively rendering at the update-interval frame rate.
EDIT: As an example, this is how PlayN does it:
@Override
public void run() {
    // The thread can be stopped between runs.
    if (!running.get())
        return;

    int now = time();
    float delta = now - lastTime;
    if (delta > MAX_DELTA)
        delta = MAX_DELTA;
    lastTime = now;

    if (updateRate == 0) {
        platform.update(delta);
        accum = 0;
    } else {
        accum += delta;
        while (accum >= updateRate) {
            platform.update(updateRate);
            accum -= updateRate;
        }
    }

    platform.graphics().paint(platform.game, (updateRate == 0) ? 0 : accum / updateRate);

    if (LOG_FPS) {
        totalTime += delta / 1000;
        framesPainted++;
        if (totalTime > 1) {
            log().info("FPS: " + framesPainted / totalTime);
            totalTime = framesPainted = 0;
        }
    }
}
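The last argument to paint() above is the interpolation factor Blackhex describes. On the rendering side you would use it roughly like this (my illustration, not PlayN code; previousX/currentX are assumed fields holding the position at the last two update steps):

// alpha is accum / updateRate, in [0, 1): how far we are between the last two fixed updates.
float lerp(float previous, float current, float alpha) {
    return previous + (current - previous) * alpha;
}

// At draw time:
// float renderX = lerp(previousX, currentX, alpha);
// float renderY = lerp(previousY, currentY, alpha);
// draw the sprite at (renderX, renderY) instead of (currentX, currentY)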
I am currently developing a game for Android, and I would like your expertise on an issue that I have been having.
Background:
My game incorporates frame-rate-independent motion, which takes into account the delta time value before performing the necessary velocity calculations.
The game is a traditional 2D platformer.
The Issue:
Here's my issue (simplified). Let's pretend that my character is a square standing on top of a platform (with "gravity" being a constant downward velocity of characterVelocityDown).
I have defined the collision detection as follows (assuming Y axis points downwards):
Given characterFootY is the y-coordinate of the base of my square character, platformSurfaceY is the upper y-coordinate of my platform, and platformBaseY is the lower y-coordinate of my platform:
if (characterFootY + characterVelocityDown > platformSurfaceY && characterFootY + characterVelocityDown < platformBaseY) {
    // Collision is true
    characterFootY = platformSurfaceY;
    characterVelocityDown = 0;
} else {
    characterVelocityDown = deltaTime * 6;
}
This approach works perfectly fine when the game is running at regular speed; however, if the game slows down, the deltaTime (which is the elapsed time between the previous frame and the current frame) becomes large, characterFootY + characterVelocityDown exceeds the boundaries that define collision detection, and the character just falls straight through (as if teleporting).
How should I approach this issue to prevent this?
Thanks in advance for your help and I am looking forward to learning from you!
What you need to do is run your physics loop with a constant delta time and iterate it as many times as needed for the current tick.
const float PHYSICS_TICK = 1/60.f; // 60 FPS

void Update( float dt )
{
    m_dt += dt;
    while( m_dt > PHYSICS_TICK )
    {
        UpdatePhysics( PHYSICS_TICK );
        m_dt -= PHYSICS_TICK;
    }
}
There are various techniques used to handle the leftover tick time (m_dt).
Caps for minimum tick and maximum tick are also a must.
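In Java, the same fixed-tick loop with those min/max caps on the incoming delta might look like this (a sketch only; the constant values are just examples):

private static final float PHYSICS_TICK = 1 / 60f;   // fixed step, in seconds
private static final float MAX_FRAME_TIME = 0.25f;   // clamp huge hitches (e.g. after a long stall)
private float accumulator = 0;

void update(float dt) {
    if (dt > MAX_FRAME_TIME) {
        dt = MAX_FRAME_TIME;      // maximum cap: avoid a "spiral of death" after a long stall
    }
    accumulator += dt;
    while (accumulator >= PHYSICS_TICK) {   // minimum cap: never step physics with less than one tick
        updatePhysics(PHYSICS_TICK);
        accumulator -= PHYSICS_TICK;
    }
}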
I guess the issue here is that slowdowns are inevitable. You can try and optimize the code but you'll always have users with slow devices or busy sections of your game where it takes a little longer than usual to process it all. Instead of assuming a consistent delta, assume the opposite. Code under the realization that someone could try installing it on an abacus.
So basically, as SeveN says, make your game loop handle slowdowns. The only real way to do this (in my admittedly limited experience) would be to place a cap on how large delta can be. This will result in your clock not running exactly on time, but when you think about it, that's how most games handle slowdown. You don't fire up StarCraft on your Pentium 66 and have it run at 5 FPS but full speed; it slows down and processes everything as normal, albeit as a slideshow.
If you did such a thing, during periods of slowdown in your game, it'd visibly slow down... but the calculations should still all be spot on.
edit: just realised you're SeveN. Well done.
I'm creating a live wallpaper using openGL ES and I've run into trouble getting my wallpaper to remain over 30fps.
One thing I've noticed is that if I completely empty my onDrawFrame() method (apart from a bit of timing code), it still only gets called every ~11 ms when the render mode is set to 'continuous'.
This seems a little odd, especially since when I add in some polygons and textures, the code then takes only a few extra ms between frames.
my code:
public void onDrawFrame(GL10 gl) {
    long endTime = System.currentTimeMillis();
    long dt = endTime - startTime;
    Log.v("time", Long.toString(dt));
    startTime = System.currentTimeMillis();
}
Is this normal behaviour? Perhaps there is something going on behind the scenes.
I've tried messing around with my setup code, but it doesn't seem to change anything
TraceView seems to suggest that EGLImpl.eglSwapBuffers is taking around 45% of the execution time, at 0.633 cpu/call. The code shown above takes up about 25%, with the next highest entries being Thread.sleep().
EDIT - I think that swap buffers is synced with the display refresh rate, which would explain what I'm seeing.