I'll try to explain what I mean.
I'm developing a 2D game. When I run the code below on a small screen it runs more quickly than the same code on a big screen. I think this is because one iteration of the game loop takes more time on the big screen than on the small one. How can I introduce a time unit, or something similar, so that the game doesn't depend on the speed of the game loop iterations?
private void createDebris(){
    if (dx <= 0) return;
    if (stepDebris == 2) {
        Debris debris = new Debris(gameActivity, dx -= 1280*coefX/77, 800*coefY - 50*coefY, coefX, coefY);
        synchronized (necessaryObjects) {
            necessaryObjects.add(debris);
        }
        stepDebris = -1;
        Log.e("COUNT", (count++) + "");
    }
    stepDebris++;
}
P.S. Debris is a visual object drawn on the canvas. I'll appreciate your answers. Thanks.
If stepDebris is the unit by which you move objects on the screen, then incrementing it once per draw call is wrong, because it ties the rate of movement to the framerate.
What you want is something like this:
stepDebris += elapsedMilliseconds * speedFactor
where elapsedMilliseconds is the time elapsed since the last frame (in ms). Once you find the correct speedFactor for stepDebris, it will move at the same speed on different machines/resolutions irrespective of framerate.
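For illustration, a minimal per-frame sketch of that formula in Java (lastFrameTime is a hypothetical long field initialized at startup; speedFactor is whatever constant you tune for your game):

long now = SystemClock.uptimeMillis();              // monotonic clock, safe against wall-clock jumps
long elapsedMilliseconds = now - lastFrameTime;     // time since the previous frame
lastFrameTime = now;
stepDebris += elapsedMilliseconds * speedFactor;    // advance by real time, not by iteration count

Note that stepDebris would need to be a float or double here; with an int, small per-frame increments could round away.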
Hope this helps!
You can make an 'iteration' take x milliseconds by measuring the time the actual iteration takes (= y) and then sleeping for the remaining (x - y) milliseconds.
See also step 3 of this tutorial
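A rough Java sketch of that idea, assuming a target of x = 33 ms per iteration (updateAndDraw() is a hypothetical stand-in for whatever one iteration actually does):

final long targetMs = 33;                        // x: desired iteration length (~30 fps)
long start = SystemClock.uptimeMillis();
updateAndDraw();                                 // the actual iteration (takes y ms)
long y = SystemClock.uptimeMillis() - start;
if (y < targetMs) {
    try {
        Thread.sleep(targetMs - y);              // sleep off the remaining (x - y) ms
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

Keep in mind Thread.sleep() is only approximately accurate, so the iteration length will still jitter a little.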
Android provides the Handler API, which implements timed event loops, to control the timing of computation. This article is pretty good.
You will save battery life by implementing a frame rate controller with this interface rather than redrawing as fast as you can.
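As a sketch of what a Handler-driven frame loop might look like (the 33 ms period and the method names updateGame()/gameView are illustrative, not from the article):

private final Handler handler = new Handler();   // runs callbacks on the thread that created it
private final Runnable frameTick = new Runnable() {
    @Override
    public void run() {
        updateGame();                            // hypothetical game-state update
        gameView.invalidate();                   // request a redraw of the view
        handler.postDelayed(this, 33);           // schedule the next tick (~30 fps)
    }
};
// kick it off once with: handler.post(frameTick);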
I have just started using OpenGL ES 2 for Android for my little game and have encountered a problem with redrawing the screen on each frame.
I have set up a loop in my Renderer's onDrawFrame, just a simple [ updateGameLogic() -> drawGame() ] or Thread.sleep() branch based on the time elapsed since the last drawGame call.
Currently the updateGameLogic() method simply translates the camera in the +X direction (the game is 2D).
In the drawGame() call, I first clear my screen with GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT). Then I have 3 glBindTexture and glDrawElements calls for drawing 3 categories of objects with different texture atlas.
Here comes the problem: in between each frame drawn on screen, there is a blink of the previous frame, which is undesired and makes the game look dizzy. Precisely, say the game is just about to show frame 3 after frame 2; right before frame 2 vanishes and frame 3 appears, there is a split moment where frame 1 is displayed.
I thought this might be due to the way the GLSurfaceView is buffered by the system, so I experimented with calling multiple glClear before drawing, but everything stays the same. I would be grateful if someone could provide an explanation / solution and point out what I have done wrong, thanks. (Basically paragraphs 2 to 4 are all my code, so I have not posted it, unless requested.)
From the clarification in the comment, it sounds like you have something like this in your code:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        Thread.sleep(33 - deltaTime);   // (exception handling omitted for brevity)
        return;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
This will indeed cause problems. When onDrawFrame() is called, you have to render a frame. You can't just return without drawing anything. The caller will assume that you rendered a frame in any case, and it will end up being presented on the screen. If you decide not to render anything, whatever happened to be in the surface you were supposed to draw to will be presented. There's no telling what this will be, but it's quite likely that it's an old frame from 2-3 frames earlier.
If you want to artificially throttle the frame rate, e.g. to save power, unfortunately there's no very good way to do this in Android. Using sleeps in onDrawFrame() is kind of dirty (and inherently unreliable, IMHO), but it might be necessary in this case. The key is that either before or after you sleep, you still need to render a frame. As a first attempt, I would try tweaking the above to something like this:
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    long deltaTime = currentTime - mLastFrameTime;
    if (deltaTime < 33) {
        Thread.sleep(33 - deltaTime);   // (exception handling omitted for brevity)
        currentTime = SystemClock.elapsedRealtime();
        deltaTime = currentTime - mLastFrameTime;
    }
    mLastFrameTime = currentTime;
    updateGameLogic(deltaTime);
    drawGame();
}
Note that while there is still an artificial delay, there is no early return in the code anymore.
There are probably more robust variations of this idea for throttling the redraws to 30 fps. Some searching on SO or the rest of the internet should reveal previous discussions.
I'm trying to create a game for Android and I have a problem with high-speed objects: they refuse to collide.
I have a Sphere with a Sphere Collider and Bouncy material, and a Rigidbody with these parameters (Gravity = false, Interpolate = Interpolate, Collision Detection = Continuous Dynamic).
I also have 3 walls with Box Colliders and Bouncy material.
This is my code for Sphere
function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function Awake () {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}
In Project Settings I set: "Min Penetration For Penalty Force" = 0.001, "Solver Iteration Count" = 50.
At the start it works fine (it bounces), but when the speed gets too high, the Sphere just passes through the wall.
Can anyone help me?
Thanks.
Edited
var hit : RaycastHit;
var mainGameScript : MainGame;
var particles_splash : GameObject;

function Awake () {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}

function Update() {
    if (rigidbody.SweepTest(transform.forward, hit, 0.5))
        Debug.Log(hit.distance + "mts distance to obstacle");
    if (transform.position.y < -3) {
        mainGameScript.GameOver();
        //Application.LoadLevel("Menu");
    }
}

function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function OnCollisionEnter(collision : Collision) {
    Instantiate(particles_splash, transform.position, transform.rotation);
}
EDITED: added more info
Fixed Timestep = 0.02, Maximum Allowed Timestep = 0.333
There is no difference between running the game in the editor player and on Android.
No. It looks OK when I set 0.01.
My Paddle is a Box Collider without a Rigidbody; the walls are the same.
They are all in the same layer (when the speed is normal it all works); the values in PhysicsManager are the defaults (same as in the image) except "Solver Iteration Count" = 50.
No. When I change the speed it passes through another wall.
I am using a standard cube, but I expand/shrink it to fit my screen and other objects; when I expand the wall more, it's OK, it bounces.
No. It's simple project simple example from Video http://www.youtube.com/watch?v=edfd1HJmKPY
I don't use gravity
See:
Similar SO Question
A community script that uses ray tracing to help manage fast objects
UnityAnswers post leading to the script in (2)
You could also try changing the fixed time step for physics. The smaller this value, the more often Unity calculates the physics of the scene. But be warned: making this value too small, say <= 0.005, will likely result in an unstable game, especially on a portable device.
The script above is best for bullets or small objects. You can also manually force rigid body collision tests:
public class example : MonoBehaviour {
    public RaycastHit hit;
    void Update() {
        if (rigidbody.SweepTest(transform.forward, out hit, 10))
            Debug.Log(hit.distance + "mts distance to obstacle");
    }
}
I think the main problem is the manipulation of the Rigidbody's velocity. I would try the following to solve the problem.
Redesign your code to ensure that IncreaseBallVelocity and every other manipulation of the Rigidbody is called within FixedUpdate. Check that there are no other manipulations of Transform.position.
Try replacing direct velocity assignment with AddForce or similar methods, so the physics engine has a better chance to calculate all dependencies.
If more items (main player character, ...) are involved in the physics calculation, ensure that their code runs in FixedUpdate too.
Another point I stumbled upon is meshes that are scaled heavily. A GameObject with scale <= 0.01 or >= 100 definitely has a negative impact on physics calculations. According to the docs and this Unity forum entry from one of the gurus, you should avoid Transform.scale values != 1.
Still not happy? OK, then the next test is to start with high velocities but no acceleration. At this phase we want to know whether the high velocity itself or the acceleration is to blame. It would be interesting to know the velocity values at which the physics engine starts to fail; please post them so we can compare.
EDIT: Some more things to investigate
6.7 m/sec does not sound like much, so I guess there is a special reason, or a combination of reasons, why things go wrong.
Is your Maximum Allowed Timestep high enough? For testing I suggest 5 to 10x the Fixed Timestep. Note that this might kill the frame rate, but that can be fixed later.
Is there any difference between running the game in editor player and on Android?
Did you notice any drops in frame rate because of the 0.01 FixedTimestep? This would indicate that the physics engine might be in trouble.
Could it be that there are static colliders (objects having a collider but no Rigidbody) that are moved around or otherwise manipulated? This would cause heavy recalculations within PhysX.
What about the layers: are all walls on the same layer, and are the involved layers configured appropriately in the collision detection matrix?
Does the no-bounce effect always happen at the same wall? If so, copy the 1st wall and put it in place of the second one to see if there is something wrong with that specific wall.
If it's not too much effort, I would try setting up some standard cubes as walls, just to be sure that transform.scale is not to blame (I have had really bad experiences with this).
Do you manipulate gravity or TimeManager.timeScale from within a script?
BTW: are you using gravity? (Should be no problem just
I'm making an Android game and am currently not getting the performance I'd like. I have a game loop in its own thread which updates an object's position, and the rendering thread traverses these objects and draws them. The current behavior is what seems like choppy/uneven movement. What I cannot explain: before I put the update logic in its own thread, I had it in the onDrawFrame method, right before the GL calls, and the animation was perfectly smooth. It only becomes choppy/uneven when I throttle my update loop via Thread.sleep. Even when I allow the update thread to go berserk (no sleep), the animation is smooth; only when Thread.sleep is involved does the quality of the animation suffer.
I've created a skeleton project to see if I could recreate the issue, below are the update loop and the onDrawFrame method in the renderer:
Update Loop
@Override
public void run()
{
    while (gameOn)
    {
        long currentRun = SystemClock.uptimeMillis();
        if (lastRun == 0)
        {
            lastRun = currentRun - 16;
        }
        long delta = currentRun - lastRun;
        lastRun = currentRun;

        posY += moveY * delta / 20.0;
        GlobalObjects.ypos = posY;

        long rightNow = SystemClock.uptimeMillis();
        if (rightNow - currentRun < 16)
        {
            try {
                Thread.sleep(16 - (rightNow - currentRun));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
And here is my onDrawFrame method:
@Override
public void onDrawFrame(GL10 gl) {
    gl.glClearColor(1f, 1f, 0, 0);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
    gl.glTranslatef(transX, GlobalObjects.ypos, transZ);
    //gl.glRotatef(45, 0, 0, 1);
    //gl.glColor4f(0, 1, 0, 0);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, uvBuffer);
    gl.glDrawElements(GL10.GL_TRIANGLES, drawOrder.length, GL10.GL_UNSIGNED_SHORT, indiceBuffer);

    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
I've looked through Replica Island's source and it does its update logic in a separate thread, also throttled with Thread.sleep, yet that game looks very smooth. Does anyone have any ideas, or has anyone experienced what I'm describing?
---EDIT: 1/25/13---
I've had some time to think and have smoothed out this game engine considerably. How I managed this might be blasphemous or insulting to actual game programmers, so please feel free to correct any of these ideas.
The basic idea is to keep a pattern of update, draw... update, draw... while keeping the time delta relatively the same (often out of your control).
My first course of action was to synchronize my renderer in such a way that it only drew after being notified it was allowed to do so. This looks something like this:
public void onDrawFrame(GL10 gl10) {
    synchronized (drawLock)
    {
        while (!GlobalGameObjects.getInstance().isUpdateHappened())
        {
            try
            {
                Log.d("test1", "draw locking");
                drawLock.wait();
            }
            catch (InterruptedException e)
            {
                e.printStackTrace();
            }
        }
    }
    // ... then draw the freshly updated state here ...
}
When I finish my update logic, I call drawLock.notify(), releasing the rendering thread to draw what I just updated. The purpose of this is to help establish the pattern of update, draw... update, draw... etc.
Once I implemented that, it was considerably smoother, although I was still experiencing occasional jumps in movement. After some testing, I saw that multiple updates were occurring between calls to onDrawFrame. This caused one frame to show the result of two (or more) updates, a larger jump than normal.
What I did to resolve this was to cap the time delta to some value, say 18ms, between two onDrawFrame calls and store the extra time in a remainder. This remainder would be distributed to subsequent time deltas over the next few updates if they could handle it. This idea prevents all sudden long jumps, essentially smoothing a time spike out over multiple frames. Doing this gave me great results.
The downside of this approach is that for a short time the positions of objects will not be accurate in time, and will actually speed up to make up the difference. But it's smoother overall, and the change in speed is not very noticeable.
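A minimal sketch of that cap-plus-remainder idea (all names illustrative; remainderMs starts at 0):

long rawDelta = (now - lastFrameTime) + remainderMs; // real elapsed time plus carried-over spike
long delta = Math.min(rawDelta, 18);                 // cap what a single update may consume
remainderMs = rawDelta - delta;                      // excess is spread over later updates
update(delta);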
Finally, I decided to rewrite my engine with the two above ideas in mind, rather than patching up the engine I had originally made. I made some optimizations for the thread synchronization that perhaps someone could comment on.
My current threads interact like this:
-Update thread updates the current buffer (double buffer system in order to update and draw simultaneously) and will then give this buffer to the renderer if the previous frame has been drawn.
-If the previous frame has not yet been drawn, or is still drawing, the update thread waits until the render thread notifies it that it has drawn.
-Render thread waits until notified by update thread that an update has occurred.
-When the render thread draws, it sets a "last drawn variable" indicating which of the two buffers it last drew and also notifies the update thread if it was waiting on the previous buffer to be drawn.
That may be a little convoluted, but what it's doing is allowing for the advantages of multithreading, in that the update for frame n can be performed while frame n-1 is drawing, while also preventing multiple update iterations per frame if the renderer is taking a long time. To further explain: this multiple-update scenario is handled by the update thread locking if it detects that the lastDrawn buffer is equal to the one that was just updated. If they are equal, this indicates that the frame before has not yet been drawn.
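For what it's worth, a rough sketch of that check on the update-thread side, mirroring the description above (bufferLock, lastDrawn, justUpdated, buffers are all illustrative names; the render thread would set lastDrawn and call bufferLock.notify() after each draw):

synchronized (bufferLock) {
    while (lastDrawn == justUpdated) {       // per the scheme above: the previous frame isn't drawn yet
        try {
            bufferLock.wait();               // block until the renderer notifies us
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    justUpdated = 1 - justUpdated;           // flip to the free buffer
}
updateState(buffers[justUpdated]);           // safe to update while the other buffer draws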
So far I'm getting good results. Let me know if anyone has any comments, would be happy to hear your thoughts on anything I'm doing, right or wrong.
Thanks
(The answer from Blackhex raised some interesting points, but I can't cram all this into a comment.)
Having two threads operating asynchronously is bound to lead to issues like this. Look at it this way: the event that drives animation is the hardware "vsync" signal, i.e. the point at which the Android surface compositor provides a new screen full of data to the display hardware. You want to have a new frame of data whenever vsync arrives. If you don't have new data, the game looks choppy. If you generated 3 frames of data in that period, two will be ignored, and you're just wasting battery life.
(Running a CPU full out may also cause the device to heat up, which can lead to thermal throttling, which slows everything in the system down... and can make your animation choppy.)
The easiest way to stay in sync with the display is to perform all of your state updates in onDrawFrame(). If it sometimes takes longer than one frame to perform your state updates and render the frame, then you're going to look bad, and need to modify your approach. Simply shifting all game state updates to a second core isn't going to help as much as you might like -- if core #1 is the renderer thread, and core #2 is the game state update thread, then core #1 is going to sit idle while core #2 updates the state, after which core #1 will resume to do the actual rendering while core #2 sits idle, and it's going to take just as long. To actually increase the amount of computation you can do per frame, you'd need to have two (or more) cores working simultaneously, which raises some interesting synchronization issues depending on how you define your division of labor (see http://developer.android.com/training/articles/smp.html if you want to go down that road).
Attempting to use Thread.sleep() to manage the frame rate generally ends badly. You can't know how long the period between vsync is, or how long until the next one arrives. It's different for every device, and on some devices it may be variable. You essentially end up with two clocks -- vsync and sleep -- beating against each other, and the result is choppy animation. On top of that, Thread.sleep() doesn't make any specific guarantees about accuracy or minimum sleep duration.
I haven't really gone through the Replica Island sources, but in GameRenderer.onDrawFrame() you can see the interaction between their game state thread (which creates a list of objects to draw) and the GL renderer thread (which just draws the list). In their model, the game state only updates as needed, and if nothing has changed it just re-draws the previous draw list. This model works well for an event-driven game, i.e. where the contents on screen update when something happens (you hit a key, a timer fires, etc). When an event occurs, they can do a minimal state update and adjust the draw list as appropriate.
Viewed another way, the render thread and the game state work in parallel because they're not rigidly tied together. The game state just runs around updating things as needed, and the render thread locks it down every vsync and draws whatever it finds. So long as neither side keeps anything locked up for too long, they don't visibly interfere. The only interesting shared state is the draw list, guarded with a mutex, so their multi-core issues are minimized.
For Android Breakout ( http://code.google.com/p/android-breakout/ ), the game has a ball bouncing around, in continuous motion. There we want to update our state as frequently as the display allows us to, so we drive the state change off of vsync, using a time delta from the previous frame to determine how far things have advanced. The per-frame computation is small, and the rendering is pretty trivial for a modern GL device, so it all fits easily in 1/60th of a second. If the display updated much faster (240Hz) we might occasionally drop frames (again, unlikely to be noticed) and we'd be burning 4x as much CPU on frame updates (which is unfortunate).
If for some reason one of these games missed a vsync, the player may or may not notice. The state advances by elapsed time, not a pre-set notion of a fixed-duration "frame", so e.g. the ball will either move 1 unit on each of two consecutive frames, or 2 units on one frame. Depending on the frame rate and the responsiveness of the display, this may not be visible. (This is a key design issue, and one that can mess with your head if you envisioned your game state in terms of "ticks".)
Both of these are valid approaches. The key is to draw the current state whenever onDrawFrame is called, and to update state as infrequently as possible.
Note for anyone else who happens to read this: don't use System.currentTimeMillis(). The example in the question used SystemClock.uptimeMillis(), which is based on the monotonic clock rather than wall-clock time. That, or System.nanoTime(), are better choices. (I'm on a minor crusade against currentTimeMillis, which on a mobile device could suddenly jump forward or backward.)
Update: I wrote an even longer answer to a similar question.
Update 2: I wrote an even longer longer answer about the general problem (see Appendix A).
One part of the problem may be caused by the fact that Thread.sleep() is not accurate. Try to investigate the actual duration of the sleep.
The most important thing for making your animations smooth is to compute an interpolation factor, call it alpha, that linearly interpolates your animation state in consecutive rendering-thread calls between two consecutive update-thread calls. In other words, if your update interval is long compared to your frame interval, rendering without interpolation is effectively the same as rendering at the update rate.
EDIT: As an example, this is how PlayN does it:
@Override
public void run() {
    // The thread can be stopped between runs.
    if (!running.get())
        return;

    int now = time();
    float delta = now - lastTime;
    if (delta > MAX_DELTA)
        delta = MAX_DELTA;
    lastTime = now;

    if (updateRate == 0) {
        platform.update(delta);
        accum = 0;
    } else {
        accum += delta;
        while (accum >= updateRate) {
            platform.update(updateRate);
            accum -= updateRate;
        }
    }

    platform.graphics().paint(platform.game, (updateRate == 0) ? 0 : accum / updateRate);

    if (LOG_FPS) {
        totalTime += delta / 1000;
        framesPainted++;
        if (totalTime > 1) {
            log().info("FPS: " + framesPainted / totalTime);
            totalTime = framesPainted = 0;
        }
    }
}
I am currently developing a game for Android, and I would like your expertise on an issue that I have been having.
Background:
My game incorporates frame rate independent motion, which takes into account the delta time value before performing the necessary velocity calculations.
The game is a traditional 2D platformer.
The Issue:
Here's my issue (simplified). Let's pretend that my character is a square standing on top of a platform (with "gravity" being a constant downward velocity of characterVelocityDown).
I have defined the collision detection as follows (assuming Y axis points downwards):
Given characterFootY is the y-coordinate of the base of my square character, platformSurfaceY is the upper y-coordinate of my platform, and platformBaseY is the lower y-coordinate of my platform:
if (characterFootY + characterVelocityDown > platformSurfaceY && characterFootY + characterDy < platformBaseY) {
    // Collision is true
    characterFootY = platformSurfaceY;
    characterVelocityDown = 0;
} else {
    characterVelocityDown = deltaTime * 6;
}
This approach works perfectly fine when the game is running at regular speed; however, if the game slows down, the deltaTime (the elapsed time between the previous frame and the current frame) becomes large, characterFootY + characterVelocityDown overshoots the boundaries that define the collision, and the character falls straight through (as if teleporting).
How should I approach this issue to prevent this?
Thanks in advance for your help and I am looking forward to learning from you!
What you need to do is run your physics loop with a constant delta time and iterate it as many times as the current tick requires.
const float PHYSICS_TICK = 1/60.f; // 60 FPS

void Update( float dt )
{
    m_dt += dt;
    while( m_dt > PHYSICS_TICK )
    {
        UpdatePhysics( PHYSICS_TICK );
        m_dt -= PHYSICS_TICK;
    }
}
There are various techniques used to handle the leftover tick (m_dt); one is sketched below. Caps for minimum tick and maximum tick are also a must.
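One common such technique (a hedged sketch in the same spirit as the snippet above, not from the answer itself) is to render with interpolation, blending the previous and current physics states by the leftover fraction:

float alpha = m_dt / PHYSICS_TICK;               // 0..1: how far we are into the next tick
float renderX = prevX + (currX - prevX) * alpha; // draw at the interpolated position
float renderY = prevY + (currY - prevY) * alpha;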
I guess the issue here is that slowdowns are inevitable. You can try to optimize the code, but you'll always have users with slow devices or busy sections of your game where it takes a little longer than usual to process everything. Instead of assuming a consistent delta, assume the opposite. Code under the assumption that someone could try installing it on an abacus.
So basically, as SeveN says, make your game loop handle slowdowns. The only real way to do this (in my admittedly limited experience) is to place a cap on how large delta can be. This means your clock won't run exactly on time, but when you think about it, that's how most games handle slowdown. You don't fire up StarCraft on your Pentium 66 and have it run at 5 FPS but full speed; it slows down and processes everything as normal, albeit as a slideshow.
If you did such a thing, during periods of slowdown your game would visibly slow down... but the calculations would still all be spot on.
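A minimal sketch of such a cap (MAX_DELTA_MS and the surrounding names are illustrative):

long delta = Math.min(now - lastFrameTime, MAX_DELTA_MS); // during slowdown the game runs slower, not farther
lastFrameTime = now;
update(delta);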
edit: just realised you're SeveN. Well done.
I was trying to make an intro transition for my game by having two bitmaps slide apart, like a garage door opening from the middle with one half sliding downwards and the other upwards. Anyway, when I do it, it looks really choppy and the frame rate seems unstable/unreliable. Here's how I'm doing it:
public class TFView extends View {
    ...
    public void startlevel(Canvas c) {
        long l = (SystemClock.currentThreadTimeMillis() - starttime) / 3; //*(height/500);
        if (l < 1000) {
            c.drawBitmap(metalbottom, 0, height/2 + l, p);
            c.drawBitmap(metaltop, 0, 0 - l, p);
        }
    }

    public void endlevel(Canvas c) {
        long l = (SystemClock.currentThreadTimeMillis() - failtime) / 3;
        if (l >= height/2) {
            c.drawBitmap(metaltop, 0, 0, p);
            c.drawBitmap(metalbottom, 0, height/2, p);
        } else {
            c.drawBitmap(metalbottom, 0, -height/2 + l, p);
            c.drawBitmap(metaltop, 0, height - l, p);
        }
    }
}
and I set the times for when I want to open/close the doors respectively. So what do you think I should change to make the transition smoother? Would converting it to a SurfaceView help?
I had the same problem. I know what you mean by "choppy". The animation speed is NOT consistent even though you have a time-based animation, i.e. you are using
SystemClock.currentThreadTimeMillis()
The choppiness is caused by currentThreadTimeMillis(). It "returns milliseconds running in the current thread". That is, when you use currentThreadTimeMillis() "time" only elapses when the current thread is RUNNING. But your renderer thread is NOT ALWAYS running - as you would expect in a multitasking environment. Thus every time the thread is not running your animation is also not "running" (time is not elapsing).
Solution: Use
SystemClock.elapsedRealtime()
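Applied to the question's startlevel() code, the fix is a one-line swap (a sketch; everything else stays the same):

long l = (SystemClock.elapsedRealtime() - starttime) / 3; // real time elapses even while the thread is descheduled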
I have to admit I'm not the best when it comes to animation in Android but I thought I'd contribute.
From your explanation, could you use TranslateAnimation? Your animation would then be very smooth.
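A hedged sketch of that suggestion, assuming the two door halves are separate Views (topDoor and bottomDoor are hypothetical names) and height is the screen height:

TranslateAnimation topUp = new TranslateAnimation(0, 0, 0, -height / 2f);     // slide the top half up and off
topUp.setDuration(600);        // illustrative duration in ms
topUp.setFillAfter(true);      // hold the final position
topDoor.startAnimation(topUp);

TranslateAnimation bottomDown = new TranslateAnimation(0, 0, 0, height / 2f); // slide the bottom half down
bottomDown.setDuration(600);
bottomDown.setFillAfter(true);
bottomDoor.startAnimation(bottomDown);

The framework drives the animation timing itself, which is why this tends to look smoother than hand-stepping bitmap positions on a canvas.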
As far as I'm aware, if the animations Android provides are not sufficient you should be drawing your graphics in a separate thread, implementing SurfaceView.
This may help, or take a look at the Lunar Lander example.