opengl android time too fast - android

I'm trying to get the frame time using Android and OpenGL for my racing game.
My code now is:
deltaTime = (System.currentTimeMillis() + startTime) / 1000000000000.0f;
startTime = System.currentTimeMillis();
tickTime += deltaTime;
DecimalFormat dec = new DecimalFormat("#.##");
Log.d("time", dec.format(tickTime/100));
but it's a bit too fast.

You may want to look at a bit of Android Breakout:
http://code.google.com/p/android-breakout/source/browse/src/com/faddensoft/breakout/GameState.java#1001
The computation is similar, but note it uses System.nanoTime(), which uses the monotonic clock. You don't want to use System.currentTimeMillis(), which uses the wall clock. If the device is connected to a network, the wall clock can be updated, which can cause big jumps forward or backward.
The code also includes a (disabled) frame-rate-smoothing experiment that didn't seem to matter much.
As I think you discovered, the key to this approach is to recognize that the time interval between frames is not constant, and you need to update the game state based on how much time has actually elapsed, not a fixed notion of display update frequency.
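For illustration, here is a minimal sketch of that pattern (the field and method names are made up for the example, not taken from the Breakout source):
private long mPrevFrameTimeNsec = 0;

public void advanceFrame() {
    long nowNsec = System.nanoTime(); // monotonic; unaffected by wall-clock changes
    if (mPrevFrameTimeNsec != 0) {
        // Elapsed time since the previous frame, in seconds.
        double deltaSec = (nowNsec - mPrevFrameTimeNsec) / 1000000000.0;
        updateGameState(deltaSec); // advance the simulation by the actual elapsed time
    }
    mPrevFrameTimeNsec = nowNsec;
}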

Since you're working in milliseconds, shouldn't you be dividing by 1000f instead of 1000000000000.0f?
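Putting that together (and subtracting startTime rather than adding it, which the snippet in the question also gets wrong), the delta computation would look roughly like this:
// startTime holds the value recorded on the previous frame
deltaTime = (System.currentTimeMillis() - startTime) / 1000f; // milliseconds -> seconds
startTime = System.currentTimeMillis();
tickTime += deltaTime;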

Related

Mana recovery issue

We're making a game in Android Studio and we got stuck. The resource (mana) used for specific spells should recover over time, e.g. 1 mana point per 5 minutes. We don't really get how to make it recover while the game is off. Is there a method to check the current date/time and count the amount of mana replenished? Converting the date and time to a String and comparing it with the new date/time seems like "exciting" work, but we would rather avoid that if there is a better way.
Thank you in advance.
The best way to do this in the background is to register a receiver in your manifest. This means the receiver will keep listening for broadcasts even if the app is off.
What you need is this particular action when registering your receiver: Intent.ACTION_TIME_TICK
There is a more detailed answer about this matter here Time change listener
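A minimal sketch of such a receiver (note that, per the Android documentation, ACTION_TIME_TICK is only delivered to receivers registered at runtime with Context.registerReceiver(), not to ones declared in the manifest):
// Classes come from android.content (BroadcastReceiver, Context, Intent, IntentFilter).
private final BroadcastReceiver mTimeTickReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_TIME_TICK.equals(intent.getAction())) {
            // fired roughly once a minute; recompute mana from elapsed time here
        }
    }
};

// Register/unregister from your activity or service, e.g. in onResume()/onPause():
registerReceiver(mTimeTickReceiver, new IntentFilter(Intent.ACTION_TIME_TICK));
...
unregisterReceiver(mTimeTickReceiver);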
Another solution is to use the Calendar class in Java. With it you can get the exact number of minutes that have passed from a point in the past to this moment. This way you don't have to worry about parsing dates and the like. I can't give you specific examples because I haven't used the Calendar class much myself, but I'm sure you can find lots of material about it in the official documentation and on Stack Overflow.
No need to work with Date objects; simply using System.currentTimeMillis() should work. Here's a basic outline:
long mLastManaRefreshTime = System.currentTimeMillis();

void refreshMana()
{
    // Time elapsed since the last call, in milliseconds.
    long timeDelta = System.currentTimeMillis() - mLastManaRefreshTime;
    mLastManaRefreshTime = System.currentTimeMillis();
    // Convert elapsed milliseconds to minutes and scale by the per-minute rate.
    float totalManaToRefresh = (float)AMOUNT_TO_REFRESH_IN_ONE_MINUTE * ((float)timeDelta / 60000f);
    mMana += totalManaToRefresh;
    if (mMana > MAX_MANA)
        mMana = MAX_MANA; // clamp to the maximum
}
This method is of course just an outline. You will need to call this once every update cycle. It will calculate how much time passed since the last time refreshMana was called, and replenish the required amount.
If you need this to work while the game is off, you can save the mLastManaRefreshTime to a SharedPreferences object and reload it when the game loads up again.
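A minimal sketch of that persistence step (the preference file and key names are just placeholders):
SharedPreferences prefs = context.getSharedPreferences("game_prefs", Context.MODE_PRIVATE);

// When the game is paused or shut down:
prefs.edit().putLong("lastManaRefresh", mLastManaRefreshTime).apply();

// When the game starts up again, before the first refreshMana() call:
mLastManaRefreshTime = prefs.getLong("lastManaRefresh", System.currentTimeMillis());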
With System.currentTimeMillis() you can get the current timestamp in milliseconds.
You could save the latest timestamp in your Preferences on every 5-minute tick while the game is running. For the other case, when your app comes back from a state where it does not do this (started for the first time, woken up, etc.), you can compute the elapsed mana from that saved timestamp.
Something like this:
int manacycles = (int) ((((System.currentTimeMillis() - oldtimestamp) / 1000) / 60) / 5); // elapsed ms -> seconds -> minutes -> 5-minute cycles
would give you the number of Mana points you would have to add.
Alternatively you could do the same thing with the Calendar class.
Also keep in mind that players could cheat this way by simply changing their device time. If your game is online you could get the time from the internet, with something like this:
// TimeTCPClient comes from the Apache Commons Net library
// (org.apache.commons.net.time.TimeTCPClient).
try {
    TimeTCPClient client = new TimeTCPClient();
    try {
        // Set timeout of 60 seconds
        client.setDefaultTimeout(60000);
        // Connecting to time server
        // Other time servers can be found at: http://tf.nist.gov/tf-cgi/servers.cgi#
        // Make sure that your program NEVER queries a server more frequently than once every 4 seconds
        client.connect("nist.time.nosc.us");
        System.out.println(client.getDate());
    } finally {
        client.disconnect();
    }
} catch (IOException e) {
    e.printStackTrace();
}

SensorEvent.timestamp inconsistency

My application performs step counting in the background using the step detector sensor APIs introduced in Android 4.4.
It's essential for my app to know the exact time (to at least one-second accuracy) at which each step event occurred.
Because I use sensor batching, the time at which onSensorChanged(SensorEvent event) is called is not the time the step event took place - I must use the event.timestamp field to get the event time.
The documentation for this field says:
The time in nanosecond at which the event happened
The problem:
On some devices (such as the Moto X 2013) this timestamp seems to be the time in nanoseconds since boot, while on other devices (such as the Nexus 5) it actually returns wall-clock time in nanoseconds, i.e. the same epoch as System.currentTimeMillis().
I understand there's already an old open issue about this, but now that sensor batching has been introduced it becomes important to use this field to know the event time, and it's no longer possible to rely on System.currentTimeMillis().
My question:
What should I do to always get the event time in system milliseconds across all devices?
Instead of your "2-day" comparison, you could just check if event.timestamp is less than e.g. 1262304000000000000 - that way you'd only have a problem if the user's clock is set in the past, or their phone has been running for 40 years...
Except that a comment on this issue indicates that sometimes it's even milliseconds instead of nanoseconds. And other comments indicate that there's an offset applied, in which case it won't be either system time or uptime-based.
If you really have to be accurate, the only way I can see is to initially capture an event (or two, for comparison) with max_report_latency_ns set to 0 (i.e. non-batched) and compare the timestamp to the system time and/or elapsedRealtime. Then use that comparison to calculate an offset (and potentially decide whether you need to compensate for milliseconds vs nanoseconds) and use that offset for your batched events.
E.g. grab a couple of events, preferably a couple of seconds apart, recording the System.currentTimeMillis() each time and then do something like this:
long timestampDelta = event2.timestamp - event1.timestamp;
long sysTimeDelta = sysTimeMillis2 - sysTimeMillis1;
long divisor; // to get from timestamp to milliseconds
long offset;  // to get from event milliseconds to system milliseconds

if (timestampDelta / sysTimeDelta > 1000) { // in reality ~1 vs ~1,000,000
    // timestamps are in nanoseconds
    divisor = 1000000;
} else {
    // timestamps are in milliseconds
    divisor = 1;
}
offset = sysTimeMillis1 - (event1.timestamp / divisor);
And then for your batched events
long eventTimeMillis = (event.timestamp / divisor) + offset;
One final caveat - even if you do all that, if the system time changes during your capture, it may affect your timestamps. Good luck!
I found a workaround that solves the problem. The solution assumes the timestamp can only be one of two things: the system time or the time since boot:
protected long getEventTimestampInMills(SensorEvent event) {
    long timestamp = event.timestamp / 1000 / 1000;
    /*
     * Work around the problem that on some devices event.timestamp
     * actually returns nanoseconds since the last boot.
     */
    if (System.currentTimeMillis() - timestamp > Consts.ONE_DAY * 2) {
        /*
         * If the original event timestamp gives a value that does not
         * make sense (it is very unlikely that there will be batched
         * events spanning two days), assume that the event time is
         * actually nanoseconds since boot.
         */
        timestamp = System.currentTimeMillis()
                + (event.timestamp - System.nanoTime()) / 1000000L;
    }
    return timestamp;
}
According to the link in your question:
This is, in fact, "working as intended". The timestamps are not
defined as being the Unix time; they're just "a time" that's only
valid for a given sensor. This means that timestamps can only be
compared if they come from the same sensor.
So, the timestamp-field could be completely unrelated to the current system time.
However, if at startup you take two sensor samples without batching, you can calculate the difference between System.currentTimeMillis() and the timestamp, as well as the ratio between the two kinds of time deltas; with those you should be able to convert between the two time bases:
//receive event1:
long t1Sys = System.currentTimeMillis();
long t1Evt = event.timestamp;
//receive event2:
long t2Sys = System.currentTimeMillis();
long t2Evt = event.timestamp;
//Unregister sensor

// Use floating point here: with nanosecond timestamps this ratio is roughly 1/1,000,000,
// and integer division would truncate it to zero.
double rateoffset = (double) (t2Sys - t1Sys) / (t2Evt - t1Evt);
// The offset has to be taken after scaling the event timestamp so the units match.
// Not exact, but the error should be well under a second; possibly use an averaged value.
double startoffset = t1Sys - t1Evt * rateoffset;
Now any timestamp from that sensor can be converted
long sensorTimeMillis = (long) (event.timestamp * rateoffset + startoffset);

How should the framerate be regulated on an android game?

I've created a simple game on android that has the basic updatePhysics() and onDraw() in the main game loop. Initially, I didn't put anything to keep a consistent framerate, so it would loop infinitely without sleeping. But after doing some research, I found that it would probably be better to regulate this so that the framerate is consistent. So I put Thread.sleep() in to make it go at around 30 fps. Here's the code:
if (!h.getSurface().isValid())
    continue;
synchronized (h) {
    startTime = System.currentTimeMillis();
    if (state == STATE_GAME) {
        updatePhysics();
    }
    onDraw();
    endTime = System.currentTimeMillis();
    try {
        Thread.sleep(WAIT_TIME - (endTime - startTime));
    } catch (Exception e) {}
}
What I found was that my game became really choppy; it wasn't 30 fps at all, more like around 20. However, if I increased that rate to around 40, then it would look like 30 fps (it becomes smoother). I read online that games are usually 25-30 fps, so I assume 40 may be a bit too high. Am I doing something wrong? Am I supposed to use Thread.sleep()? Also, if I run this without regulating the fps, how would it affect other devices? It runs smoothly on my Galaxy S2 without the Thread.sleep(), and the inconsistent framerate is not noticeable, but I'm concerned about lower-end devices. What do high-end games like Angry Birds do? Thanks for any answers; I'm very new to game development.
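One thing to check in a loop like the one above: when a frame takes longer than WAIT_TIME, the sleep argument goes negative and Thread.sleep() throws an IllegalArgumentException, which the empty catch hides. A guarded sketch of the same loop body (assuming WAIT_TIME is the target frame period in milliseconds, e.g. 33 for roughly 30 fps):
startTime = System.currentTimeMillis();
if (state == STATE_GAME) {
    updatePhysics();
}
onDraw();
long sleepTime = WAIT_TIME - (System.currentTimeMillis() - startTime);
if (sleepTime > 0) { // only sleep when the frame finished early
    try {
        Thread.sleep(sleepTime);
    } catch (InterruptedException e) {}
}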

Android Sensor Timestamp reference time

I'm reading timestamp values from SensorEvent data but I can't work out the reference time for these values. The Android documentation just says "The time in nanosecond at which the event happened". As an example:
My current Android device date: October 14th 2011, 23:29:56.421 (GMT+2)
System.currentTimeMillis() * 1000000 (nanoseconds) = 1318627796431000000 (that's ok)
sensorevent.timestamp (nanoseconds) = 67578436328000 ≈ 18 hours 46 min ????
Can you help me?
Thanks.
It appears that what you are dealing with is the number of nanoseconds since the operating system started, also known as "uptime".
Further info on the issue: http://code.google.com/p/android/issues/detail?id=7981
I should add that the linked question SensorEvent.timestamp to absolute (utc) timestamp? deals with the same issue and is where I found the answer.
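If the timestamp really is boot-relative on a given device, one way to map it onto wall-clock time is to measure the offset between the two clocks (a sketch; SystemClock.elapsedRealtimeNanos() requires API 17, and this assumes the sensor timestamps use the same boot-relative clock):
// Offset between the wall clock and the boot-relative clock, in nanoseconds.
long bootToWallNanos = System.currentTimeMillis() * 1000000L - SystemClock.elapsedRealtimeNanos();
// Convert a sensor event timestamp to wall-clock milliseconds.
long eventWallMillis = (event.timestamp + bootToWallNanos) / 1000000L;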
I know that it's a very old question, but I'm also struggling to convert SensorEvent.timestamp to a human-readable time. So I'm writing down what I've understood so far and how I'm converting it, in order to get better solutions from you guys. Any comments are welcome.
As I understand it, SensorEvent.timestamp is the elapsed time since the device booted, so I need to know when the device booted. If there were an API returning the device's boot time this would be very easy, but I haven't found one.
So I'm using SystemClock.elapsedRealtime() and System.currentTimeMillis() to estimate the wall-clock time at which the device booted. This is my code:
private long mUptimeMillis; // member variable of the activity or service
...
atComponentsStartUp...() {
    ...
    /* Call elapsedRealtime() and currentTimeMillis() in a row
       in order to minimize the time gap. */
    long elapsedRealtime = SystemClock.elapsedRealtime();
    long currentTimeMillis = System.currentTimeMillis();
    /* Estimate the wall-clock time at which the device booted.
       This assumes elapsedRealtime() and currentTimeMillis() were
       called at exactly the same time. They weren't, but the gap
       is not significant (on my device it's less than 1 ms). */
    mUptimeMillis = (currentTimeMillis - elapsedRealtime);
    ...
}
...
public void onSensorChanged(SensorEvent event) {
    ...
    long eventTimeMillis = ((event.timestamp / 1000000) + mUptimeMillis);
    Calendar calendar = Calendar.getInstance();
    calendar.setTimeInMillis(eventTimeMillis);
    ...
}
I think this works for apps where a millisecond-level time error is okay. Please leave your ideas.

How to limit framerate when using Android's GLSurfaceView.RENDERMODE_CONTINUOUSLY?

I have a C++ game running through JNI in Android. The frame rate varies from about 20-45fps due to scene complexity. Anything above 30fps is silly for the game; it's just burning battery. I'd like to limit the frame rate to 30 fps.
I could switch to RENDERMODE_WHEN_DIRTY, and use a Timer or ScheduledThreadPoolExecutor to requestRender(). But that adds a whole mess of extra moving parts that might or might not work consistently and correctly.
I tried injecting Thread.sleep() when things are running quickly, but this doesn't seem to work at all for small time values. And it may just be backing events into the queue anyway, not actually pausing.
Is there a "capFramerate()" method hiding in the API? Any reliable way to do this?
The solution from Mark is almost good, but not entirely correct. The problem is that the swap itself takes a considerable amount of time (especially if the video driver is caching instructions), so you have to take that into account or you'll end up with a lower frame rate than desired.
So the thing should be:
somewhere at the start (like the constructor):
startTime = System.currentTimeMillis();
then in the render loop:
public void onDrawFrame(GL10 gl)
{
    endTime = System.currentTimeMillis();
    dt = endTime - startTime;
    if (dt < 33) {
        try {
            Thread.sleep(33 - dt); // pad the frame out to ~33 ms (about 30 fps)
        } catch (InterruptedException e) {}
    }
    startTime = System.currentTimeMillis();
    UpdateGame(dt);
    RenderGame(gl);
}
This way you will take into account the time it takes to swap the buffers and the time to draw the frame.
When using GLSurfaceView, you perform the drawing in your Renderer's onDrawFrame which is handled in a separate thread by the GLSurfaceView. Simply make sure that each call to onDrawFrame takes (1000/[frames]) milliseconds, in your case something like 33ms.
To do this: (in your onDrawFrame)
Measure the current time before you start drawing, using System.currentTimeMillis() (let's call it startTime)
Perform the drawing
Measure time again (Let's call it endTime)
deltaT = endTime - startTime
if deltaT < 33, sleep (33-deltaT)
That's it.
Fili's answer looked great to me, but sadly it limited the FPS on my Android device to 25 FPS, even though I requested 30. I figured out that Thread.sleep() is not accurate enough and sleeps longer than it should.
I found this implementation from the LWJGL project to do the job:
https://github.com/LWJGL/lwjgl/blob/master/src/java/org/lwjgl/opengl/Sync.java
Fili's solution is failing for some people, so I suspect it's sleeping until immediately after the next vsync instead of immediately before. I also feel that moving the sleep to the end of the function would give better results, because there it can pad out the current frame before the next vsync, instead of trying to compensate for the previous one. Thread.sleep() is inaccurate, but fortunately we only need it to be accurate to the nearest vsync period of 1/60 s. The LWJGL code tyrondis posted a link to seems over-complicated for this situation; it's probably designed for when vsync is disabled or unavailable, which should not be the case in the context of this question.
I would try something like this:
private long lastTick = System.currentTimeMillis();

public void onDrawFrame(GL10 gl)
{
    UpdateGame(dt);
    RenderGame(gl);

    // Subtract 10 from the desired period of 33ms to make generous
    // allowance for overhead and inaccuracy; vsync will take up the slack
    long nextTick = lastTick + 23;
    long now;
    while ((now = System.currentTimeMillis()) < nextTick) {
        try {
            Thread.sleep(nextTick - now);
        } catch (InterruptedException e) {}
    }
    lastTick = now;
}
If you don't want to rely on Thread.sleep, use the following
double frameStartTime = (double) System.nanoTime() / 1000000;
// start time in milliseconds
// using System.currentTimeMillis() is a bad idea
// call this when you first start to draw

int frameRate = 30;
double frameInterval = (double) 1000 / frameRate;
// 1s is 1000ms, ms is millisecond
// 30 frames per second means one frame is 1s/30 = 1000ms/30

public void onDrawFrame(GL10 gl)
{
    double endTime = (double) System.nanoTime() / 1000000;
    double elapsedTime = endTime - frameStartTime;
    if (elapsedTime >= frameInterval)
    {
        // call GLES20.glClear(...) here
        UpdateGame(elapsedTime);
        RenderGame(gl);
        frameStartTime += frameInterval;
    }
}
You may also try reducing the thread priority from onSurfaceCreated():
Process.setThreadPriority(Process.THREAD_PRIORITY_LESS_FAVORABLE);
