Creating a game loop (sleep time help) - Android

I've been using this tutorial to create a game loop.
In the section marked "FPS dependent on Constant Game Speed" there is some example code that includes a Sleep command.
I googled the Java equivalent and found it is
Thread.sleep();
but it returns an error in Eclipse:
Unhandled exception type InterruptedException
What on earth does that mean?
I was also wondering what the
update_game();
display_game();
methods might contain in an OpenGL ES game (i.e. where is the renderer updated, and what sort of things go on in display_game()?).
I am currently using a setup based on the GLSurfaceView and GLSurfaceRenderer features.
Here is my adaptation of the code in the tutorial
public Input(Context context) {
    super(context);
    glSurfaceRenderer = new GLSurfaceRenderer();
    checkcollisions = new Collisions();
    while (gameisrunning) {
        setRenderer(glSurfaceRenderer);
        nextGameTick += skipTicks;
        sleepTime = nextGameTick - SystemClock.uptimeMillis();
        if (sleepTime >= 0) {
            Thread.sleep(sleepTime); // this is the line Eclipse flags
        } else {
            // S*** we're behind
        }
    }
}
This is called in my GLSurfaceView, although I'm not sure whether this is the right place to implement it.

Looks like you need to go through a couple of tutorials on Java before trying to tackle Android game development. Then read some tutorials on Android development, then some more general game development tutorials. (Programming is a lot of reading.)
Thread.sleep() throws an InterruptedException when the thread gets interrupted while sleeping. It's a checked exception, so you have to tell Java how to deal with it: either catch it or declare it with throws.
To answer your question directly, though, here's a method that sleeps till a specific time:
private void waitUntil(long time) {
    long sleepTime = time - new Date().getTime();
    while (sleepTime >= 0) {
        try {
            Thread.sleep(sleepTime);
        } catch (InterruptedException e) {
            // Interrupted. sleepTime will be positive, so we'll do it again.
        }
        sleepTime = time - new Date().getTime();
    }
}
You should understand at least this method before continuing on game development.

Ironically, Ron Romero's code doesn't work. There are two issues with it: the while loop and the sleepTime variable.
The sleepTime variable is computed from a specified time (in milliseconds) by subtracting the current millisecond time from it. The problem is that the current millisecond time is a huge number (hence the long variable type), which yields a NEGATIVE sleepTime. You'll never enter the while loop with this code.
The second thing that's wrong is the while loop check itself. With Java's sleep function you don't need a loop like this at all. All you have to do is pass the number of milliseconds you want to sleep for to the sleep function, and it will work just fine.
private void waitUntil(long time) {
    try {
        Thread.sleep(time);
    } catch (InterruptedException e) {
        // This is just here to handle the error without crashing
    }
}

Java performs compile-time checking of checked exceptions.
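To illustrate, here is a minimal sketch of the two ways to satisfy the compiler (the method names pause/pauseOrThrow are just illustrative):
// Option 1: catch the checked exception where it occurs.
void pause(long millis) {
    try {
        Thread.sleep(millis);
    } catch (InterruptedException e) {
        // Restore the interrupt flag so callers can still observe it.
        Thread.currentThread().interrupt();
    }
}
// Option 2: declare it with throws and let the caller deal with it.
void pauseOrThrow(long millis) throws InterruptedException {
    Thread.sleep(millis);
}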


Why does Kotlin code take so long on first execution? [duplicate]

I'm curious about this.
I wanted to check which function was faster, so I created a little test and executed it a lot of times.
public static void main(String[] args) {
    long ts;
    String c = "sgfrt34tdfg34";

    ts = System.currentTimeMillis();
    for (int k = 0; k < 10000000; k++) {
        c.getBytes();
    }
    System.out.println("t1->" + (System.currentTimeMillis() - ts));

    ts = System.currentTimeMillis();
    for (int i = 0; i < 10000000; i++) {
        Bytes.toBytes(c);
    }
    System.out.println("t2->" + (System.currentTimeMillis() - ts));
}
The "second" loop is faster, so, I thought that Bytes class from hadoop was faster than the function from String class. Then, I changed the order of the loops and then c.getBytes() got faster. I executed many times, and my conclusion was, I don't know why, but something happen in my VM after the first code execute so that the results become faster for the second loop.
This is a classic Java benchmarking issue. HotSpot/the JIT/etc. will compile your code as you use it, so it gets faster during the run.
Run around the loop at least 3000 times (10000 on a server or on 64-bit) first - then do your measurements.
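A minimal sketch of that warm-up idea, reusing the getBytes() call from the question (the Benchmark class name and iteration counts are illustrative):
public class Benchmark {
    public static void main(String[] args) {
        String c = "sgfrt34tdfg34";
        // Warm-up: give the JIT a chance to compile the hot code first.
        for (int i = 0; i < 10000; i++) {
            c.getBytes();
        }
        // Measure only after the warm-up.
        long ts = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            c.getBytes();
        }
        System.out.println("getBytes: " + (System.currentTimeMillis() - ts) + " ms");
    }
}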
You know there's something wrong, because Bytes.toBytes calls c.getBytes internally:
public static byte[] toBytes(String s) {
    try {
        return s.getBytes(HConstants.UTF8_ENCODING);
    } catch (UnsupportedEncodingException e) {
        LOG.error("UTF-8 not supported?", e);
        return null;
    }
}
The source is taken from here. This tells you that the call cannot possibly be faster than the direct call - at the very best (i.e. if it gets inlined) it would have the same timing. Generally, though, you'd expect it to be a little slower, because of the small overhead in calling a function.
This is the classic problem with micro-benchmarking in interpreted, garbage-collected environments, where components such as the garbage collector run at arbitrary times. On top of that, there are hardware optimizations, such as caching, that skew the picture. As a result, the best way to see what is going on is often to look at the source.
The "second" loop is faster, so,
When you execute a method at least 10000 times, it triggers the whole method to be compiled. This means that your second loop can be
faster as it is already compiled the first time you run it.
slower because when optimised it doesn't have good information/counters on how the code is executed.
The best solution is to place each loop in a separate method so one loop doesn't optimise the other AND run this a few times, ignoring the first run.
e.g.
for (int i = 0; i < 3; i++) {
    long time1 = doTest1(); // timed using System.nanoTime();
    long time2 = doTest2();
    System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
            time1 / RUNS, time2 / RUNS);
}
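For completeness, a sketch of what doTest1/doTest2 (and the RUNS constant) might look like under this scheme, reusing the calls from the question; the exact structure is illustrative:
static final int RUNS = 10000000;
static final String C = "sgfrt34tdfg34";

// Each test lives in its own method, so the JIT optimises it in isolation.
static long doTest1() {
    long start = System.nanoTime();
    for (int i = 0; i < RUNS; i++) {
        C.getBytes();
    }
    return System.nanoTime() - start;
}

static long doTest2() {
    long start = System.nanoTime();
    for (int i = 0; i < RUNS; i++) {
        Bytes.toBytes(C); // the Hadoop helper from the question
    }
    return System.nanoTime() - start;
}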
Most likely, the code was still compiling or not yet compiled at the time the first loop ran.
Wrap the entire method in an outer loop so you can run the benchmarks a few times, and you should see more stable results.
Read: Dynamic compilation and performance measurement.
It might simply be that you allocate so much space for objects with your calls to getBytes() that the JVM garbage collector starts and cleans up the unused references (bringing out the trash).
A few more observations:
As pointed out by @dasblinkenlight above, Hadoop's Bytes.toBytes(c) internally calls String.getBytes("UTF-8").
The variant of String.getBytes() that takes a character set as input is faster than the one that does not. So for a given string, getBytes("UTF-8") would be faster than getBytes(). I have tested this on my machine (Windows 8, JDK 7), running the two loops in sequence with equal iterations, one with getBytes("UTF-8") and the other with getBytes().
// Note: getBytes("UTF-8") throws the checked UnsupportedEncodingException,
// so the enclosing method must declare or catch it.
long ts;
String c = "sgfrt34tdfg34";

ts = System.currentTimeMillis();
for (int k = 0; k < 10000000; k++) {
    c.getBytes("UTF-8");
}
System.out.println("t1->" + (System.currentTimeMillis() - ts));

ts = System.currentTimeMillis();
for (int i = 0; i < 10000000; i++) {
    c.getBytes();
}
System.out.println("t2->" + (System.currentTimeMillis() - ts));
This gives:
t1->1970
t2->2541
The results are the same even if you change the order in which the loops execute. To discount any JIT optimizations, I would suggest running the tests in separate methods to confirm this (as suggested by @Peter Lawrey above).
So Bytes.toBytes(c) should always be faster than String.getBytes().

Mana recovery issue

We're making a game in Android Studio and we got stuck. The resource (mana) used for specific spells should recover over time, e.g. 1 mana point per 5 minutes. We don't really get how to make it recover while the game is off. Is there a method to check the current date/time and count the amount of mana replenished? Converting the date and time to a String and comparing it with the new date/time seems like "exciting" work, so we would rather bypass those mechanics if there is a way.
Thank you in advance.
One way to do this in the background is to register a broadcast receiver, so the app keeps listening for time broadcasts while it is running.
What you need is the Intent.ACTION_TIME_TICK action, which is broadcast once per minute. Note that this particular action cannot be declared in the manifest; you have to register for it explicitly with Context.registerReceiver().
There is a more detailed answer about this matter here: Time change listener
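A minimal sketch of that runtime registration inside an Activity (recoverMana() is a hypothetical game method):
private final BroadcastReceiver timeTickReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        // ACTION_TIME_TICK fires once per minute while registered.
        recoverMana(); // hypothetical game method
    }
};

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    registerReceiver(timeTickReceiver, new IntentFilter(Intent.ACTION_TIME_TICK));
}

@Override
protected void onDestroy() {
    unregisterReceiver(timeTickReceiver);
    super.onDestroy();
}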
Another solution is to use the Calendar class in Java. With it you can get the exact number of minutes passed from a point in the past to this moment. This way you don't have to worry about parsing dates and similar. I can't provide you specific examples because I myself have not used the Calendar class very much, but I'm sure you can find lots of stuff about it in the official documentation and on Stack Overflow.
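For what it's worth, a tiny sketch of the elapsed-minutes idea with Calendar (the variable names are illustrative):
long savedMillis = Calendar.getInstance().getTimeInMillis(); // persist this when the game closes

// Later, when the game starts again:
long elapsedMinutes = (Calendar.getInstance().getTimeInMillis() - savedMillis) / 60000L;
int manaGained = (int) (elapsedMinutes / 5); // 1 mana point per 5 minutes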
No need to work with Date objects; the simple usage of System.currentTimeMillis() should work. Here's a basic outline:
long mLastManaRefreshTime = System.currentTimeMillis();

void refreshMana()
{
    long timeDelta = System.currentTimeMillis() - mLastManaRefreshTime;
    mLastManaRefreshTime = System.currentTimeMillis();
    float totalManaToRefresh = (float)AMOUNT_TO_REFRESH_IN_ONE_MINUTE * ((float)timeDelta / 60000f);
    mMana += totalManaToRefresh;
    if (mMana > MAX_MANA)
        mMana = MAX_MANA;
}
This method is of course just an outline. You will need to call this once every update cycle. It will calculate how much time passed since the last time refreshMana was called, and replenish the required amount.
If you need this to work while the game is off, you can save the mLastManaRefreshTime to a SharedPreferences object and reload it when the game loads up again.
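A minimal sketch of that save/restore step, assuming a SharedPreferences file named "game_state" (the key names are illustrative):
// Saving when the game pauses or exits:
SharedPreferences prefs = context.getSharedPreferences("game_state", Context.MODE_PRIVATE);
prefs.edit()
     .putLong("lastManaRefreshTime", mLastManaRefreshTime)
     .putFloat("mana", mMana)
     .apply();

// Restoring when the game loads; refreshMana() then grants the offline gain.
mLastManaRefreshTime = prefs.getLong("lastManaRefreshTime", System.currentTimeMillis());
mMana = prefs.getFloat("mana", MAX_MANA);
refreshMana();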
With System.currentTimeMillis() you can get a current time-stamp in milliseconds.
You could save the latest time-stamp in your Preferences with every 5-minute tick of the running game, and read it back in the other case, when your app comes back from a state where it was not doing this (i.e. started for the first time, woken up, etc.).
Something like this:
int manacycles = (int) ((((System.currentTimeMillis() - oldtimestamp) / 1000) / 60) / 5);
would give you the number of mana points you would have to add, at one point per full 5 minutes elapsed.
Alternatively, you could do the same thing with the Calendar class.
Also keep in mind that players could cheat this way by simply changing their device time. If your game is online you could get the time from the internet, with something like this:
// TimeTCPClient comes from the Apache Commons Net library.
try {
    TimeTCPClient client = new TimeTCPClient();
    try {
        // Set timeout of 60 seconds
        client.setDefaultTimeout(60000);
        // Connecting to time server
        // Other time servers can be found at: http://tf.nist.gov/tf-cgi/servers.cgi#
        // Make sure that your program NEVER queries a server more frequently than once every 4 seconds
        client.connect("nist.time.nosc.us");
        System.out.println(client.getDate());
    } finally {
        client.disconnect();
    }
} catch (IOException e) {
    e.printStackTrace();
}

Occasional Stutter/Lag in game loop from lockCanvas method

I am smooth scrolling a bitmap at a given speed. I'm doing this with a game loop.
It scrolls pretty smoothly, around 60 fps, except for occasional stutters / jumps. These jumps occur anywhere from once a second to a couple of times a second. Usually they start or become more frequent after a few seconds of running, but I'm not sure if this is a big clue or not.
The reason for the jumps is that occasionally an iteration of the run loop will take about twice as long as usual, so the bitmap stays in one place for a while and then jumps further to catch up and maintain its constant speed. I used interpolation to figure out the new position of the bitmap with each update based on the time that has elapsed. When a longer than usual time has elapsed, I've tried doing a couple of mini-updates instead of moving the entire distance at once, but the paused bitmap is still very noticeable.
I ran traceview, and the extra time is being spent inside lockCanvas. Most of the time this method takes around 10 ms, but in these long cases, it takes around 24ms. When I traced for four seconds, this happened 7 times.
In the following code, I'm having it sleep for a bit if it ran fast, but that's not actually making any difference; getting rid of that part of the code doesn't solve my problem either. There doesn't need to be a constant frame rate, since I'm just calculating the position based on how much time has passed.
@Override
public void run() {
    long beginTime = 0;  // the time when the cycle began
    long timeDiff;       // the time it took for the cycle to execute
    int sleepTime;       // ms to sleep (<0 if we're behind)
    sleepTime = 0;
    while (mRun) {
        Canvas c = null;
        try {
            beginTime = System.currentTimeMillis();
            c = mSurfaceHolder.lockCanvas(null);
            synchronized (mSurfaceHolder) {
                if (mMode == STATE_RUNNING) {
                    updatePhysics();
                }
                doDraw(c);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (c != null) {
                mSurfaceHolder.unlockCanvasAndPost(c);
            }
        }
        timeDiff = System.currentTimeMillis() - beginTime;
        sleepTime = (int) (FRAME_PERIOD - timeDiff);
        if (sleepTime > 0) {
            try {
                Thread.sleep(sleepTime);
            } catch (InterruptedException e) {}
        }
    }
}
The updatePhysics() and doDraw() methods do a little bit of math, and I've tried to make that as efficient as possible. Basically they just calculate the new position of the bitmap based on time and speed. I just have one bitmap, and it is not being reallocated every time or anything like that.
Also, I'm positive that my surfaceHolder is ready, so it's not the common answer I've found from searching Google, that repeated calls to a non-ready surfaceHolder get throttled.
Any ideas what could cause this? My surface holder uses PixelFormat.RGB_565 and my Bitmap is encoded as Bitmap.Config.RGB_565 if that makes a difference. I originally got the Bitmap from a relative layout that I made.
One possible explanation is that your code generates a lot of short-lived objects, and the garbage collector kicks in periodically to reclaim memory.
These noticeable pauses were the bane of developers in early versions of Java, until generational garbage collection pretty much eliminated this issue. However, as far as I know, Android's Dalvik virtual machine does not employ generational garbage collection, so you should be cautious about creating objects that you immediately discard, especially in loops.
Profiling memory allocation will shine more light on this issue.
If this is indeed the problem, you could try to reuse objects, or handle data using primitives.
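A small sketch of the reuse idea, assuming the scrolling position is recomputed each frame (the fields and the computeX()/computeY() helpers are hypothetical):
// Allocate once, as fields, instead of once per frame inside the loop.
private final PointF mScrollPos = new PointF();
private final Rect mSrcRect = new Rect();

private void updatePhysics() {
    // Mutate the pre-allocated objects rather than creating new ones,
    // so the garbage collector has nothing to reclaim mid-animation.
    mScrollPos.set(computeX(), computeY()); // hypothetical helpers
    mSrcRect.set((int) mScrollPos.x, 0,
                 (int) mScrollPos.x + mViewWidth, mViewHeight);
}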

Android - Scheduling an Event to Occur Every 10ms?

I'm working on creating an app that allows very low bandwidth communication via high frequency sound waves. I've gotten to the point where I can create a frequency and do the Fourier transform (with the help of Moonblink's open source code for Audalyzer).
But here's my problem: I'm unable to get the code to run with the correct timing. Let's say I want a piece of code to execute every 10 ms; how would I go about doing this?
I've tried using a TimerTask, but there is a huge delay before the code actually executes, up to 100 ms.
I also tried doing it simply by pinging the current time and executing only when that time has elapsed. But there is still a delay problem. Do you guys have any ideas?
Thread analysis = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        long executeTime = System.currentTimeMillis();
        manualAnalyzer.measureStart();
        while (FFTransforming)
        {
            if (System.currentTimeMillis() >= executeTime)
            {
                // Reset the timer to execute again in 10ms
                executeTime += 10;
                // Perform Fourier Transform
                manualAnalyzer.doUpdate(0);
                // TODO: Analyze the results of the transform here...
            }
        }
        manualAnalyzer.measureStop();
    }
});
analysis.start();
I would recommend a very different approach: Do not try to run your code in real time.
Instead, rely on only the low-level audio code running in real time, by recording (or playing) continuously for a period of time encompassing the events of interest.
Your code then runs somewhat asynchronously to this, decoupled by the audio buffers. Your code's sense of time is determined not by the system clock as it executes, but rather by the defined inter-sample interval of the audio data you work with (i.e., if you are using 48 ksps, then 10 ms later is 480 samples later).
You may need to modify your protocol governing interaction between the devices to widen the time window in which transmissions can be expected to occur. I.e., you can have precise timing with respect to the actual modulation and symbols within a "packet", but you should not expect nearly the same order of precision in determining when a packet is sent or received - you will have to "find" it amidst a longer recording containing noise.
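A rough sketch of that decoupled approach, assuming 48 kHz mono 16-bit PCM input (the FFTransforming flag is from the question; the buffer sizes and the rest are illustrative):
final int SAMPLE_RATE = 48000;
final int CHUNK = 480; // 10 ms of audio at 48 kHz

int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, CHUNK * 4));

short[] chunk = new short[CHUNK];
recorder.startRecording();
while (FFTransforming) {
    // Block until a full 10 ms worth of samples has arrived.
    // Time is measured in samples here, not by the system clock.
    int read = 0;
    while (read < CHUNK) {
        int n = recorder.read(chunk, read, CHUNK - read);
        if (n < 0) break; // recorder error
        read += n;
    }
    // ...run the Fourier transform on this 480-sample window...
}
recorder.stop();
recorder.release();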
Your thread/loop strategy is probably roughly as close as you're going to get. However, 10ms is not a lot of time, most Android devices are not super-powerful, and a Fourier transform is a lot of work to do. I find it unlikely that you'll be able to fit that much work in 10ms. I suspect you're going to have to increase that period.
I changed your code so that it takes the execution time of doUpdate into account. The use of System.nanoTime() should also increase accuracy.
public void run() {
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
    long executeTime = 0;
    long nextTime = System.nanoTime();
    manualAnalyzer.measureStart();
    while (FFTransforming)
    {
        if (System.nanoTime() >= nextTime)
        {
            executeTime = System.nanoTime();
            // Perform Fourier Transform
            manualAnalyzer.doUpdate(0);
            // TODO: Analyze the results of the transform here...
            executeTime = System.nanoTime() - executeTime;
            // guard against the case that doUpdate took longer than 10ms
            final long i = executeTime / 10000000;
            // set the timer to execute again at the next full 10ms interval
            nextTime += 10000000 + i * 10000000;
        }
    }
    manualAnalyzer.measureStop();
}
What else could you do?
eliminate Garbage Collection
go native with the NDK (just an idea; this might well give no benefit)

How to reduce App's CPU usage in Android phone?

I developed an auto-call application. The app reads a text file that includes a phone number list, calls each number for a few seconds, ends the call, and then repeats.
My problem is that the app stops placing calls after 10~16 hours. I don't know the reason exactly, but I guess that the problem is the CPU usage. My app's CPU usage is almost 50%! How do I reduce the CPU usage?
Here is part of the source code:
if (r_count.compareTo("0") != 0) {
    while (index < repeat_count) {
        count = 1;
        time_count = 2;
        while (count < map.length) {
            performDial();      // start call
            reject();           // end call
            finishActivity(1);
            TimeDelay("60");    // wait for 60 sec
            count = count + 2;
            time_count = time_count + 2;
            onBackPressed();    // press back button for calling next number
            showCallLog();
            finishActivity(0);
        }
        index++;
    }
}
This is the TimeDelay() method source:
public void TimeDelay(String delayTime) {
    saveTime = System.currentTimeMillis() / 1000;
    currentTime = 0;
    dTime = Integer.parseInt(delayTime);
    while (currentTime - saveTime < dTime) {
        currentTime = System.currentTimeMillis() / 1000;
    }
}
TimeDelay() gets called repeatedly inside the while loop.
The reason it's using 50% of your CPU is that Android apparently won't let it use 100% of the CPU, which a loop like the one in your TimeDelay() ordinarily would. (Or else you have two CPUs and it is in fact using 100% of one CPU.) What you're doing is called a busy wait and it should be obvious why continually checking a condition will use lots of CPU. So don't do that. Use Thread.sleep() instead. Your app will then use no CPU at all during the wait.
Also, for God's sake, why are you passing a string and then parseInting it, rather than just passing an Integer in the first place? :-)
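To make that concrete, a minimal rewrite of TimeDelay() along those lines (renamed timeDelay and taking an int, as suggested):
// Sleeps for the given number of seconds without burning CPU.
public void timeDelay(int delaySeconds) {
    try {
        Thread.sleep(delaySeconds * 1000L);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt status
    }
}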
If your method takes a long time to finish, especially the while loop, you should put Thread.sleep(50) inside your loop. This lets the processor handle other processes.
Your CPU usage will be reduced. Not sure, but you should try.
Hope you get a good result.
