I've created a simple game on Android that has the basic updatePhysics() and onDraw() calls in its main game loop. Initially I didn't do anything to keep a consistent framerate, so the loop ran continuously without sleeping. But after doing some research, I found that it would probably be better to regulate this so that the framerate is consistent, so I put in Thread.sleep() to make it run at around 30 fps. Here's the code:
if (!h.getSurface().isValid())
    continue;
synchronized (h) {
    startTime = System.currentTimeMillis();
    if (state == STATE_GAME) {
        updatePhysics();
    }
    onDraw();
    endTime = System.currentTimeMillis();
    try {
        Thread.sleep(WAIT_TIME - (endTime - startTime));
    } catch (Exception e) {}
}
What I found was that my game became really choppy; it wasn't 30 fps at all, more like 20. However, if I increased the target to around 40, it looked like 30 fps (it becomes smoother). I read online that games usually run at 25-30 fps, so I assume 40 may be a bit too high. Am I doing something wrong? Am I supposed to use Thread.sleep()? Also, if I run this without regulating the fps, how would it affect other devices? It runs smoothly on my Galaxy S2 without the Thread.sleep(), and the inconsistent framerate is not noticeable, but I'm concerned about lower-end devices. What do high-end games like Angry Birds do? Thanks for any answers; I'm very new to game development.
Related
I'm writing a simple NDK OpenSL ES audio app that records the user's touches on a virtual piano keyboard and then plays them back forever over a set loop. After much experimenting and reading, I've settled on using a separate POSIX thread to achieve this. As you can see in the code, it subtracts any processing time from the sleep time, to make the interval of each loop as close to the desired sleep interval as possible (in this case 5000000 nanoseconds).
void init_timing_loop() {
    pthread_t fade_in;
    pthread_create(&fade_in, NULL, timing_loop, (void*)NULL);
}

void* timing_loop(void* args) {
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &timing.start_time_s);

        tic_counter();    // simple logic gates that cycle the current tic
        play_all_parts(); // for-loops through all parts and plays any notes
                          // (from an OpenSL buffer) that fall on the current tic

        clock_gettime(CLOCK_MONOTONIC, &timing.finish_time_s);
        timing.diff_time_s.tv_nsec =
            (5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));
        nanosleep(&timing.diff_time_s, NULL);
    }
    return NULL;
}
The problem is that even with this approach the results, while better, are still quite inconsistent. Sometimes notes will be delayed by perhaps as much as 50ms, which makes for very wonky playback.
Is there a better way of approaching this? To debug it, I ran the following code:
gettimeofday(&timing.curr_time, &timing.tzp);
__android_log_print(ANDROID_LOG_DEBUG, "timing_loop", "gettimeofday: %ld %ld",
                    timing.curr_time.tv_sec, timing.curr_time.tv_usec);
This gives a fairly consistent readout that doesn't reflect the playback inaccuracies at all. Are there other forces at work in Android preventing accurate timing? Or is OpenSL ES a potential issue? All the buffer data is loaded into memory; could there be bottlenecks there?
I'm happy to post more OpenSL code if needed, but at this stage I'm trying to figure out whether this thread loop is accurate or whether there's a better way to do it.
You should take the seconds into account when using clock_gettime as well: timing.start_time_s.tv_nsec can be greater than timing.finish_time_s.tv_nsec, because tv_nsec wraps back to zero whenever tv_sec increases. So instead of
timing.diff_time_s.tv_nsec =
(5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));
try something like this for the elapsed time (use a 64-bit type such as int64_t for the intermediate values so the multiplication doesn't overflow):
#define NS_IN_SEC 1000000000

(timing.finish_time_s.tv_sec * NS_IN_SEC + timing.finish_time_s.tv_nsec) -
(timing.start_time_s.tv_sec * NS_IN_SEC + timing.start_time_s.tv_nsec)
I'm building a scrolling shooter with libGDX. On Windows everything runs just fine, but on Android I get noticeable jitter, and the framerate drops from an average of 61 fps without sound to 48-56 fps with sound. The game plays a lot of small sound effects concurrently, as there are a lot of bullets to fire and enemies to hit at once. My sound routine:
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.audio.Sound;

public class SoundFX {
    static final int BGDIE = 1, BGHIT = 2, BGLASER = 3, BGSPAWN = 4,
                     PDIE = 5, PHIT = 6, PLASER = 7, PSPAWN = 8, PAUSE = 9;

    Sound S_BGDIE, S_BGHIT, S_BGLASER, S_BGSPAWN, S_PDIE, S_PHIT, S_PLASER, S_PSPAWN, S_PAUSE;

    public void load()
    {
        S_BGDIE   = Gdx.audio.newSound(Gdx.files.internal("data/sfx/badguydie.mp3"));
        S_BGHIT   = Gdx.audio.newSound(Gdx.files.internal("data/sfx/badguygothit.mp3"));
        S_BGLASER = Gdx.audio.newSound(Gdx.files.internal("data/sfx/badguylaser.mp3"));
        S_BGSPAWN = Gdx.audio.newSound(Gdx.files.internal("data/sfx/badguyspawn.mp3"));
        S_PDIE    = Gdx.audio.newSound(Gdx.files.internal("data/sfx/playerdie.mp3"));
        S_PHIT    = Gdx.audio.newSound(Gdx.files.internal("data/sfx/playergothit.mp3"));
        S_PLASER  = Gdx.audio.newSound(Gdx.files.internal("data/sfx/playerlaser.mp3"));
        S_PSPAWN  = Gdx.audio.newSound(Gdx.files.internal("data/sfx/playerspawn.mp3"));
        S_PAUSE   = Gdx.audio.newSound(Gdx.files.internal("data/sfx/pause.mp3"));
    }

    public void unload()
    {
        S_BGDIE.dispose();
        S_BGHIT.dispose();
        S_BGLASER.dispose();
        S_BGSPAWN.dispose();
        S_PDIE.dispose();
        S_PHIT.dispose();
        S_PLASER.dispose();
        S_PSPAWN.dispose();
        S_PAUSE.dispose();
    }

    public void play(int id)
    {
        switch (id)
        {
            case BGDIE:   S_BGDIE.play();   break;
            case BGHIT:   S_BGHIT.play();   break;
            case BGLASER: S_BGLASER.play(); break;
            case BGSPAWN: S_BGSPAWN.play(); break;
            case PDIE:    S_PDIE.play();    break;
            case PHIT:    S_PHIT.play();    break;
            case PLASER:  S_PLASER.play();  break;
            case PSPAWN:  S_PSPAWN.play();  break;
            case PAUSE:   S_PAUSE.play();   break;
            default:
                System.out.println("invalid sfx call");
                break;
        }
    }
}
play() gets called roughly 4-10 times a second, depending on what's happening in the game; the sound effects are less than a second in duration and on average 8 KB each.
What's going on here, and how can I fix it? It makes the game look very unprofessional, and on hard levels it's almost unplayable.
A little background on why you notice such a big drop in performance when playing multiple sounds: under the covers, libGDX needs to mix the sounds together before playing them back to the user. Once a certain number of sounds are running concurrently, the demands of that mixing on the system become noticeably high. Since you are likely playing sounds on the rendering thread, this increase in demand results in a drop in framerate.
In my experience with sound effects in games, when you encounter a situation where a lot of small sounds play concurrently, there are a few solutions. I will describe these as general solutions rather than libGDX specifics:
1) Play all sounds on a single thread
This is what you are currently doing. While it doesn't work well once a large number of sounds are playing, it is easy to implement, and for most situations it works. However, once you get enough sounds playing together you will notice lag and poor quality.
2) Play sounds on a separate thread from the rendering thread
This is one possible approach (depending on whether the framework allows it). It's not ideal because it doesn't scale well, but if you are only suffering a minor performance loss and aren't expecting to play a huge number (~20+) of sounds concurrently, it is a relatively minor change and will usually fix the problem.
3) Play sounds up to a cap
This solution is a bit more work and may not be acceptable if you have sounds that must be played. You simply track the number of sounds currently playing and allow additional sounds only until you hit a certain point; anything above the cap is simply not played. This lets you guarantee a certain level of performance. The biggest downside is that if a sound is skipped when, say, your character is critically injured, that may upset the user: they could be listening for that sound to know whether they are about to die, and instead find themselves dead.
4) Selectively play important sounds
This is a bit more difficult to implement, but it works extremely well because you are essentially setting a cap on the performance cost. Determine a priority for each sound: Is it close to you? Is it loud? Is it important? Then only play the sounds that are important enough, up to a certain cap (4, 6, 8, etc.). You also need to keep track of how many sounds are currently playing to determine how many new sounds you can play at a given point; a minimal sketch of this idea follows.
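For illustration only, here is a minimal libGDX-flavoured sketch of options 3 and 4 combined; the class name, the priority scheme, and the per-frame cap of 4 are my own arbitrary choices, not part of the original answer. Sound requests are queued during the frame and only the most important ones are actually played:

import com.badlogic.gdx.audio.Sound;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical helper: queue sound requests during a frame, then play only the
// highest-priority ones up to a fixed cap.
public class PrioritySoundPlayer {
    private static final int MAX_SOUNDS_PER_FRAME = 4; // tune for the target device

    private static class Request {
        final Sound sound;
        final int priority; // higher means more important
        Request(Sound sound, int priority) { this.sound = sound; this.priority = priority; }
    }

    private final List<Request> requests = new ArrayList<Request>();

    // Call this from game logic instead of sound.play().
    public void request(Sound sound, int priority) {
        requests.add(new Request(sound, priority));
    }

    // Call once per frame, e.g. at the end of render().
    public void flush() {
        Collections.sort(requests, new Comparator<Request>() {
            @Override
            public int compare(Request a, Request b) {
                return Integer.compare(b.priority, a.priority); // most important first
            }
        });
        int played = 0;
        for (Request r : requests) {
            if (played >= MAX_SOUNDS_PER_FRAME) break; // drop the least important requests
            r.sound.play();
            played++;
        }
        requests.clear();
    }
}

You would call request() from your game logic wherever you currently call Sound.play(), and flush() once per frame from render().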
A work-around: for sounds that are less important to the player, you can play them only if the current FPS is above a certain value. For example:
if (Gdx.graphics.getFramesPerSecond() > 56) playSound();
And important sounds should always be played, no matter the FPS value.
I'm trying to get the elapsed time using Android and OpenGL for my racing game.
My code now is:
deltaTime = (System.currentTimeMillis() + startTime) / 1000000000000.0f;
startTime = System.currentTimeMillis();
tickTime += deltaTime;
DecimalFormat dec = new DecimalFormat("#.##");
Log.d("time", dec.format(tickTime/100));
but it's a bit too fast.
You may want to look at a bit of Android Breakout:
http://code.google.com/p/android-breakout/source/browse/src/com/faddensoft/breakout/GameState.java#1001
The computation is similar, but note it uses System.nanoTime(), which uses the monotonic clock. You don't want to use System.currentTimeMillis(), which uses the wall clock. If the device is connected to a network, the wall clock can be updated, which can cause big jumps forward or backward.
The code also includes a (disabled) frame-rate-smoothing experiment that didn't seem to matter much.
As I think you discovered, the key to this approach is to recognize that the time interval between frames is not constant, and you need to update the game state based on how much time has actually elapsed, not a fixed notion of display update frequency.
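By way of illustration, here is a minimal sketch of that pattern inside a GLSurfaceView.Renderer (this is not the Android Breakout code itself; updateGame() and renderGame() are hypothetical placeholders for your own methods):

private long prevFrameNanos = -1;

public void onDrawFrame(GL10 gl) {
    long nowNanos = System.nanoTime();   // monotonic clock, unaffected by wall-clock changes
    if (prevFrameNanos < 0) {
        prevFrameNanos = nowNanos;       // first frame: no elapsed time yet
    }
    float deltaSeconds = (nowNanos - prevFrameNanos) / 1.0e9f;
    prevFrameNanos = nowNanos;

    updateGame(deltaSeconds);            // advance the simulation by the real elapsed time
    renderGame(gl);
}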
Since you're working in milliseconds, shouldn't you be dividing by 1000f instead of 1000000000000.0f?
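For what it's worth, a corrected version of that snippet might look like the sketch below; besides the divisor, the addition of startTime also looks like it was meant to be a subtraction:

long now = System.currentTimeMillis();
deltaTime = (now - startTime) / 1000f; // seconds since the previous frame
startTime = now;
tickTime += deltaTime;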
So I overclocked my phone to 1.664 GHz, and I know there are apps that test and stress your phone's CPU, but I would like to make my own somehow. What is the best way to really make the CPU work? I was thinking of just making a for loop do a million iterations of some time-consuming math, but that did not work because my phone finished it in a few milliseconds, I think. I tried trillions of iterations; the app froze, but my task manager did not show the CPU even being used by the app. Stress-test apps usually show up as red and say something like cpu: 85%, ram: 10 MB. So how can I really make my processor think hard?
To compile a regex string:
Pattern p1 = Pattern.compile("a*b"); // a simple regex
// slightly more complex regex: an attempt at validating email addresses
Pattern p2 = Pattern.compile("[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+(?:[A-Z]{2}|com|org|net|edu|gov|mil|biz|info|mobi|name|aero|asia|jobs|museum)\\b");
You need to launch these in background threads:
import java.util.regex.Pattern;

class RegexThread extends Thread {
    RegexThread() {
        // Create a new, second thread
        super("Regex Thread");
        start(); // Start the thread
    }

    // This is the entry point for the second thread.
    public void run() {
        while (true) {
            Pattern p = Pattern.compile("[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+(?:[A-Z]{2}|com|org|net|edu|gov|mil|biz|info|mobi|name|aero|asia|jobs|museum)\\b");
        }
    }
}
class CPUStresser {
    static final int NUM_THREADS = 10, RUNNING_TIME = 120; // run 10 threads for 120s

    public static void main(String args[]) throws InterruptedException {
        for (int i = 0; i < NUM_THREADS; ++i) {
            new RegexThread(); // create a new thread
        }
        Thread.sleep(1000 * RUNNING_TIME);
    }
}
(above code appropriated from here)
See how that goes.
I would suggest a slightly different kind of test, not just simple mathematical algorithms and functions. There are plenty of odd-looking synthetic tests whose results appear in every review: you launch the application, it runs for a while, and then it gives you a result in standard scores. The more (or fewer) points, the better the device is considered to be. But what those comparison results mean in real life is not always clear.
As far as mathematics goes, the first thing that comes to mind is computing a massive number of decimal places of the number pi.
OK, no problem, we will do it.
So here's test number one, "The Number Pi": how long does it take your phone to calculate ten million digits of pi (3.14...)? (If someone had said that phrase a hundred years ago, they would have been sent straight to a psychiatric hospital.)
You can feel when a phone is slow as you scroll and flip through its interface, but it's unclear how to measure that.
Angry Birds takes a different amount of time to start on different devices, so perhaps that can be a test too: the "Angry Birds" test.
Thinking further, we get a couple more tests: "heavy book" and "large page".
The test procedures:
Test "of Pi"
Take the Speed Pi app.
Compute ten million digits using the slow "Abraham Sharp series" algorithm. Repeat the measurement several times and take the average.
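For illustration only, here is a small, unoptimized Java sketch of timing the Abraham Sharp series with BigDecimal; the class name, digit count, and stopping rule are my own choices, and computing the full ten million digits this way would take vastly longer:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

// Sketch: pi = 2*sqrt(3) * sum_{k>=0} (-1/3)^k / (2k+1)  (Abraham Sharp series)
public class SharpPiTimer {

    static BigDecimal sharpPi(int digits) {
        MathContext mc = new MathContext(digits + 10, RoundingMode.HALF_EVEN);
        BigDecimal minusThird = BigDecimal.ONE.negate().divide(BigDecimal.valueOf(3), mc);
        BigDecimal threshold = BigDecimal.ONE.movePointLeft(digits + 5);
        BigDecimal sum = BigDecimal.ZERO;
        BigDecimal term = BigDecimal.ONE; // (-1/3)^k, starting at k = 0
        for (int k = 0; ; k++) {
            BigDecimal contribution = term.divide(BigDecimal.valueOf(2L * k + 1), mc);
            sum = sum.add(contribution, mc);
            if (contribution.abs().compareTo(threshold) < 0) break;
            term = term.multiply(minusThird, mc);
        }
        return sqrt(BigDecimal.valueOf(3), mc).multiply(BigDecimal.valueOf(2)).multiply(sum, mc);
    }

    // Newton's iteration for the square root (BigDecimal.sqrt only exists since Java 9).
    static BigDecimal sqrt(BigDecimal x, MathContext mc) {
        BigDecimal guess = BigDecimal.valueOf(Math.sqrt(x.doubleValue()));
        for (int i = 0; i < 40; i++) { // precision roughly doubles each iteration
            guess = x.divide(guess, mc).add(guess).divide(BigDecimal.valueOf(2), mc);
        }
        return guess;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        BigDecimal pi = sharpPi(1000); // far fewer than ten million digits, but enough to time
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println(pi.toPlainString().substring(0, 20) + "...  computed in " + elapsedMs + " ms");
    }
}

On a phone you would run this off the UI thread and log the elapsed time instead of printing it.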
Test "Angry Birds"
Take the very first Angry Birds (not required, but these versions are not the most optimized)
Measure the time from launch to the first sounds of music. Exit. Immediately run over and over again. Repeat several times and take the average.
Test "Large Page"
Measure the load time of heavy site pages. You can do it with your favorite browser :)
You can use This link (sorry for the Cyrillic)
This page is maintained by using "computers browser" along with pictures. Total turns out 6.5 Mb and 99 files (I'm still on this page in its stored version of a small sound file)
All 99 files upload to the phone. Turn off Wi-Fi and mobile Internet (this is important!)
Page opens with your browser. Click the "back" button. And now click "Forward" and measure the time the page is fully loaded. And so a few times. Back-forward, backward-forward. As usual, we take the average.
All results are given in seconds.
During testing all devices that support microSD cards, was one and the same card-Transcend 16 Gb, class 10. And all data on it.
Well, the actual results of the tests for some devices TEST RESULT
https://play.google.com/store/apps/details?id=xcom.saplin.xOPS - the app crunches numbers (integer and float) on multiple threads (2x number of cores) and builds performance and CPU temperature graphs.
https://github.com/maxim-saplin/xOPS-Console/blob/master/Saplin.xOPS/Compute.cs - that's the core of the app
I have a C++ game running through JNI in Android. The frame rate varies from about 20-45fps due to scene complexity. Anything above 30fps is silly for the game; it's just burning battery. I'd like to limit the frame rate to 30 fps.
I could switch to RENDERMODE_WHEN_DIRTY, and use a Timer or ScheduledThreadPoolExecutor to requestRender(). But that adds a whole mess of extra moving parts that might or might not work consistently and correctly.
I tried injecting Thread.sleep() when things are running quickly, but this doesn't seem to work at all for small time values. And it may just be backing events into the queue anyway, not actually pausing.
Is there a "capFramerate()" method hiding in the API? Any reliable way to do this?
The solution from Mark is almost right, but not entirely correct. The problem is that the swap itself takes a considerable amount of time (especially if the video driver is caching instructions), so you have to take that into account or you'll end up with a lower frame rate than desired.
So the thing should be:
somewhere at the start (like the constructor):
startTime = System.currentTimeMillis();
then in the render loop:
public void onDrawFrame(GL10 gl)
{
    endTime = System.currentTimeMillis();
    dt = endTime - startTime;
    if (dt < 33) {
        try {
            Thread.sleep(33 - dt);
        } catch (InterruptedException e) {
            // ignore; worst case this frame starts a little early
        }
    }
    startTime = System.currentTimeMillis();

    UpdateGame(dt);
    RenderGame(gl);
}
This way you will take into account the time it takes to swap the buffers and the time to draw the frame.
When using GLSurfaceView, you perform the drawing in your Renderer's onDrawFrame which is handled in a separate thread by the GLSurfaceView. Simply make sure that each call to onDrawFrame takes (1000/[frames]) milliseconds, in your case something like 33ms.
To do this: (in your onDrawFrame)
Measure the current time before you start drawing using System.currentTimeMillis (let's call it startTime)
Perform the drawing
Measure time again (Let's call it endTime)
deltaT = endTime - startTime
if deltaT < 33, sleep (33-deltaT)
That's it.
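Put together, a minimal sketch of those steps might look like this (updateGame() and renderGame() are hypothetical placeholders for your own game logic and drawing code):

public void onDrawFrame(GL10 gl)
{
    long startTime = System.currentTimeMillis();

    updateGame();     // your game logic
    renderGame(gl);   // your drawing code

    long deltaT = System.currentTimeMillis() - startTime;
    if (deltaT < 33) {
        try {
            Thread.sleep(33 - deltaT); // pad the frame out to roughly 30 fps
        } catch (InterruptedException e) {
            // ignored: the next frame simply starts a little early
        }
    }
}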
Fili's answer looked great to me, but sadly it limited the FPS on my Android device to 25 FPS, even though I requested 30. I figured out that Thread.sleep() is not accurate enough and sleeps longer than it should.
I found this implementation from the LWJGL project to do the job:
https://github.com/LWJGL/lwjgl/blob/master/src/java/org/lwjgl/opengl/Sync.java
Fili's solution is failing for some people, so I suspect it's sleeping until immediately after the next vsync instead of immediately before. I also feel that moving the sleep to the end of the function would give better results, because there it can pad out the current frame before the next vsync instead of trying to compensate for the previous one. Thread.sleep() is inaccurate, but fortunately we only need it to be accurate to the nearest vsync period of 1/60 s. The LWJGL code tyrondis posted a link to seems over-complicated for this situation; it's probably designed for when vsync is disabled or unavailable, which should not be the case in the context of this question.
I would try something like this:
private long lastTick = System.currentTimeMillis();

public void onDrawFrame(GL10 gl)
{
    UpdateGame(dt);
    RenderGame(gl);

    // Subtract 10 from the desired period of 33ms to make generous
    // allowance for overhead and inaccuracy; vsync will take up the slack
    long nextTick = lastTick + 23;
    long now;
    while ((now = System.currentTimeMillis()) < nextTick) {
        try {
            Thread.sleep(nextTick - now);
        } catch (InterruptedException e) {
            // ignore and re-check the clock
        }
    }
    lastTick = now;
}
If you don't want to rely on Thread.sleep, use the following
double frameStartTime = (double) System.nanoTime() / 1000000;
// start time in milliseconds
// using System.currentTimeMillis() is a bad idea
// initialize this when you first start to draw

int frameRate = 30;
double frameInterval = (double) 1000 / frameRate;
// 1s is 1000ms, ms is millisecond
// 30 frames per second means one frame is 1s/30 = 1000ms/30

public void onDrawFrame(GL10 gl)
{
    double endTime = (double) System.nanoTime() / 1000000;
    double elapsedTime = endTime - frameStartTime;

    if (elapsedTime >= frameInterval)
    {
        // call GLES20.glClear(...) here
        UpdateGame(elapsedTime);
        RenderGame(gl);
        frameStartTime += frameInterval;
    }
}
You may also try and reduce the thread priority from onSurfaceCreated():
Process.setThreadPriority(Process.THREAD_PRIORITY_LESS_FAVORABLE);