Why does SystemClock.currentThreadTimeMillis() appear to run at half rate? - android

I'm trying to write a music app capable of playing MIDI files in sync with user input. I've got as far as creating a custom view, playing sounds, and reading MIDI data. I run the playback of MIDI data in the view's onDraw() method so I can apply user input to it. The note data is stored as an array of note pitches and times to play measured in ms from the start.
My test data plays a note of a different pitch every 500 ms (half a second). I log currentThreadTimeMillis() alongside the time interval for each note, and it is as I expect: every 500 ticks, give or take a small variation, a note is played. However, this millisecond count runs at about half real-world speed, taking a full second to count 500 ms! I am running on a Galaxy Ace, so this isn't an issue of a slow emulator.
How come SystemClock.currentThreadTimeMillis() is taking a good two seconds to count 1000ms?
public void onDraw(Canvas canvas) {
    // TODO : Draw notes
    canvas.drawColor(Color.BLACK);
    canvas.drawBitmap(aNote.img, aNote.x, aNote.y, null);
    if (ready > 0) {
        elapsed_time = SystemClock.currentThreadTimeMillis() - start_time;
        midi_note_data = MIDI.track_data.get(current_note);
        if (midi_note_data.start_time <= elapsed_time) {
            current_note++;
            pitch = midi_note_data.pitch;
            Log.d("MUSIC", "Note time " + midi_note_data.start_time + " ThreadTime " + elapsed_time);
            sounds.play(tone, 1.0f, 1.0f, 0, 0, notes[pitch]);
        }
        if (current_note > MIDI.track_data.size() - 1) {
            ready = -2;
        }
    } else if (ready == 0) {
        start_time = SystemClock.currentThreadTimeMillis();
        Log.d("MUSIC", "Start time " + start_time);
        ready = 1;
    }
}

You are using SystemClock.currentThreadTimeMillis(), which measures the amount of time the current thread has actually been running. On Android, I'm guessing the UI thread is only actually running about half of the time (the other half goes to the rest of the operating system!).
Switch to System.currentTimeMillis() and your problems will be solved.
You can read more about the different time measuring systems here.
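For illustration, a minimal sketch of that change applied to the question's own fields (start_time and elapsed_time); the helper method names here are placeholders, and only the clock source differs from the original onDraw() code:

// Sketch of the suggested fix: switch the clock source, keep everything else.
private long start_time;
private long elapsed_time;

private void markPlaybackStart() {
    start_time = System.currentTimeMillis();                 // was SystemClock.currentThreadTimeMillis()
}

private void updateElapsedTime() {
    elapsed_time = System.currentTimeMillis() - start_time;  // wall-clock ms, advances at real-world speed
}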

Related

How to get a precise clock for midi/audio purpose

I'm trying to get a precise clock that is not influenced by other processes inside the app.
I currently use System.nanoTime() like below inside a thread.
I use it to calculate the timing of each of the sixteen steps.
Currently the timed operations sometimes have a perceptible delay that I am trying to fix.
I would like to know if there is a more precise way of obtaining timed operations, maybe by checking the internal sound card clock and using it to generate the timing I need.
I need it to send MIDI notes from an Android device to external audio synthesizers, and for audio I need precise timing.
Is there anyone who can help me improve this aspect?
Thanks
cellLength = (long) (0.115 * 1000000000L);
for (int x = 0; x < 16; x++) {
    noteStartTimes[x] = x * cellLength;
}
long startTime = System.nanoTime();
index = 0;
while (isPlaying) {
    if (noteStartTimes[index] < System.nanoTime() - startTime) {
        index++;
        if (index == 16) { // reset things
            startTime = System.nanoTime() + cellLength;
            index = 0;
        }
    }
}
For any messages that you receive, the onSend callback gives you a timestamp.
For any messages that you send, you can provide a timestamp.
These timestamps are based on System.nanoTime(), so your own code should use this as well.
If your code is delayed (by its own processing, or by other apps, or by background services), System.nanoTime() will accurately report the delay. But no timer function can make your code run earlier.
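As a rough illustration of working against that clock with the android.media.midi API (inputPort here is assumed to be an already-opened MidiInputPort, and the helper name is made up), you can hand a timestamped note-on to the receiver instead of timing the send in your own loop:

import android.media.midi.MidiInputPort;
import java.io.IOException;

// Sketch only: schedule a note-on 10 ms ahead, using the same clock as onSend().
void sendNoteOnIn10Ms(MidiInputPort inputPort) {
    byte[] noteOn = { (byte) 0x90, (byte) 60, (byte) 100 }; // note on, middle C, velocity 100
    long when = System.nanoTime() + 10000000L;              // 10 ms from now, System.nanoTime() based
    try {
        inputPort.send(noteOn, 0, noteOn.length, when);     // the receiver may use the timestamp for scheduling
    } catch (IOException e) {
        // port closed or device disconnected
    }
}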

SensorEvent.timestamp inconsistency

My application performs step counting in the background using the step detector sensor APIs introduced in Android 4.4.x.
It's essential for my app to know the exact time (to at least one-second accuracy) at which each step event occurred.
Because I perform sensor batching, the time at which onSensorChanged(SensorEvent event) is called is not the time when the step event took place, so I must use the event.timestamp field to get the event time.
The documentation for this field says:
The time in nanoseconds at which the event happened
The problem:
On some devices (such as the Moto X 2013) this timestamp seems to be the time in nanoseconds since boot, while on other devices (such as the Nexus 5) it actually returns universal system time in nanoseconds, i.e. the same time base as System.currentTimeMillis().
I understand there's already an old open issue about this, but now that sensor batching has been introduced it becomes important to use this field to know the event time, and it's no longer possible to rely on System.currentTimeMillis().
My question:
What should I do to always get the event time in system milliseconds across all devices?
Instead of your "2-day" comparison, you could just check if event.timestamp is less than e.g. 1262304000000000000 - that way you'd only have a problem if the user's clock is set in the past, or their phone has been running for 40 years...
Except that a comment on this issue indicates that sometimes it's even milliseconds instead of nanoseconds. And other comments indicate that there's an offset applied, in which case it won't be either system time or uptime-based.
If you really have to be accurate, the only way I can see is to initially capture an event (or two, for comparison) with max_report_latency_ns set to 0 (i.e. non-batched) and compare the timestamp to the system time and/or elapsedRealtime. Then use that comparison to calculate an offset (and potentially decide whether you need to compensate for milliseconds vs nanoseconds) and use that offset for your batched events.
E.g. grab a couple of events, preferably a couple of seconds apart, recording the System.currentTimeMillis() each time and then do something like this:
long timestampDelta = event2.timestamp - event1.timestamp;
long sysTimeDelta = sysTimeMillis2 - sysTimeMillis1;
long divisor; // to get from timestamp to milliseconds
long offset;  // to get from event milliseconds to system milliseconds
if (timestampDelta / sysTimeDelta > 1000) { // in reality ~1 vs ~1,000,000
    // timestamps are in nanoseconds
    divisor = 1000000;
} else {
    // timestamps are in milliseconds
    divisor = 1;
}
offset = sysTimeMillis1 - (event1.timestamp / divisor);
And then for your batched events
long eventTimeMillis = (event.timestamp / divisor) + offset;
One final caveat - even if you do all that, if the system time changes during your capture, it may affect your timestamps. Good luck!
I found a workaround that solves the problem. The solution assumes that the timestamp can only be one of two things: the system timestamp, or time since boot:
protected long getEventTimestampInMills(SensorEvent event) {
    long timestamp = event.timestamp / 1000 / 1000;
    /*
     * Work around the problem that on some devices event.timestamp
     * actually returns nanoseconds since the last boot.
     */
    if (System.currentTimeMillis() - timestamp > Consts.ONE_DAY * 2) {
        /*
         * If the original event timestamp gives a value that does not make
         * sense (it is very unlikely that events would be batched for two
         * days), then assume the event time is actually nanoseconds since boot.
         */
        timestamp = System.currentTimeMillis()
                + (event.timestamp - System.nanoTime()) / 1000000L;
    }
    return timestamp;
}
According to the link in your question:
This is, in fact, "working as intended". The timestamps are not defined as being the Unix time; they're just "a time" that's only valid for a given sensor. This means that timestamps can only be compared if they come from the same sensor.
So the timestamp field could be completely unrelated to the current system time.
However, if at startup you take two sensor samples without batching, you can calculate the offset between System.currentTimeMillis() and the timestamp, as well as the ratio between the two deltas; with those you should be able to convert between the two time bases:
//receive event1:
long t1Sys = System.currentTimeMillis();
long t1Evt = event.timestamp;
//receive event2:
long t2Sys = System.currentTimeMillis();
long t2Evt = event.timestamp;
//Unregister sensor
// use double: with nanosecond timestamps the ratio is far smaller than 1, so integer division would truncate it to 0
double rateoffset = (double) (t2Sys - t1Sys) / (t2Evt - t1Evt);
// not exact, but should definitely be less than a second off; possibly use an averaged value
long startoffset = t1Sys - (long) (t1Evt * rateoffset);
Now any timestamp from that sensor can be converted
long sensorTimeMillis = (long) (event.timestamp * rateoffset) + startoffset;

opengl android time too fast

I'm trying to get the time using Android and OpenGL for my racing game.
My code now is:
deltaTime = (System.currentTimeMillis() + startTime) / 1000000000000.0f;
startTime = System.currentTimeMillis();
tickTime += deltaTime;
DecimalFormat dec = new DecimalFormat("#.##");
Log.d("time", dec.format(tickTime/100));
but it's a bit too fast.
You may want to look at a bit of Android Breakout:
http://code.google.com/p/android-breakout/source/browse/src/com/faddensoft/breakout/GameState.java#1001
The computation is similar, but note it uses System.nanoTime(), which uses the monotonic clock. You don't want to use System.currentTimeMillis(), which uses the wall clock. If the device is connected to a network, the wall clock can be updated, which can cause big jumps forward or backward.
The code also includes a (disabled) frame-rate-smoothing experiment that didn't seem to matter much.
As I think you discovered, the key to this approach is to recognize that the time interval between frames is not constant, and you need to update the game state based on how much time has actually elapsed, not a fixed notion of display update frequency.
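A minimal sketch of that idea (the field and method names here are placeholders, not taken from Android Breakout): compute the elapsed time from the monotonic clock each frame and advance the game state by exactly that amount.

// Per-frame delta time from the monotonic clock (inside your Renderer).
private long previousNanos = System.nanoTime();

public void onDrawFrame(GL10 gl) {
    long nowNanos = System.nanoTime();
    float deltaSeconds = (nowNanos - previousNanos) / 1000000000.0f; // real elapsed time, in seconds
    previousNanos = nowNanos;
    updateGame(deltaSeconds);  // advance the simulation by the actual elapsed time
    renderGame(gl);            // then draw the new state
}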
Since you're working in milliseconds, shouldn't you be dividing by 1000f instead of 1000000000000.0f?

Android - Scheduling an Events to Occur Every 10ms?

I'm working on creating an app that allows very low bandwidth communication via high frequency sound waves. I've gotten to the point where I can create a frequency and do the Fourier transform (with the help of Moonblink's open source code for Audalyzer).
But here's my problem: I'm unable to get the code to run with the correct timing. Let's say I want a piece of code to execute every 10 ms; how would I go about doing this?
I've tried using a TimerTask, but there is a huge delay before the code actually executes, like up to 100ms.
I also tried this method simply by pinging the current time and executing only when that time has elapsed. But there is still a delay problem. Do you guys have any ideas?
Thread analysis = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        long executeTime = System.currentTimeMillis();
        manualAnalyzer.measureStart();
        while (FFTransforming)
        {
            if (System.currentTimeMillis() >= executeTime)
            {
                // Reset the timer to execute again in 10ms
                executeTime += 10;
                // Perform Fourier Transform
                manualAnalyzer.doUpdate(0);
                // TODO: Analyze the results of the transform here...
            }
        }
        manualAnalyzer.measureStop();
    }
});
analysis.start();
I would recommend a very different approach: Do not try to run your code in real time.
Instead, rely on only the low-level audio code running in real time, by recording (or playing) continuously for a period of time encompassing the events of interest.
Your code then runs somewhat asynchronously to this, decoupled by the audio buffers. Your code's sense of time is determined not by the system clock as it executes, but rather by the defined inter-sample interval of the audio data you work with (i.e., if you are using 48 ksps, then 10 ms later is 480 samples later).
You may need to modify your protocol governing interaction between the devices to widen the time window in which transmissions can be expected to occur. That is, you can have precise timing with respect to the actual modulation and symbols within a "packet", but you should not expect nearly the same order of precision in determining when a packet is sent or received; you will have to "find" it amidst a longer recording containing noise.
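A rough sketch of that bookkeeping (the class and field names are illustrative, and 48000 samples/second is just an example rate): the "clock" is nothing more than a count of samples consumed.

// Illustrative helper: derive elapsed time from the number of audio samples
// processed, rather than from the system clock.
class SampleClock {
    private final int sampleRate;   // e.g. 48000 samples per second
    private long samplesProcessed;  // advance this as each buffer is read or written

    SampleClock(int sampleRate) {
        this.sampleRate = sampleRate;
    }

    void advance(int sampleCount) {
        samplesProcessed += sampleCount;
    }

    double millis() {
        // 480 samples == 10 ms at 48 ksps
        return samplesProcessed * 1000.0 / sampleRate;
    }
}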
Your thread/loop strategy is probably roughly as close as you're going to get. However, 10ms is not a lot of time, most Android devices are not super-powerful, and a Fourier transform is a lot of work to do. I find it unlikely that you'll be able to fit that much work in 10ms. I suspect you're going to have to increase that period.
I changed your code so that it takes the execution time of doUpdate() into account. The use of System.nanoTime() should also increase accuracy.
public void run() {
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
    long executeTime = 0;
    long nextTime = System.nanoTime();
    manualAnalyzer.measureStart();
    while (FFTransforming)
    {
        if (System.nanoTime() >= nextTime)
        {
            executeTime = System.nanoTime();
            // Perform Fourier Transform
            manualAnalyzer.doUpdate(0);
            // TODO: Analyze the results of the transform here...
            executeTime = System.nanoTime() - executeTime;
            // guard against the case that doUpdate took longer than 10ms
            final long i = executeTime / 10000000;
            // set the timer to execute again at the next full 10ms interval
            nextTime += 10000000 + i * 10000000;
        }
    }
    manualAnalyzer.measureStop();
}
What else could you do?
Eliminate garbage collection.
Go native with the NDK (just an idea; it might not give any benefit).

How to limit framerate when using Android's GLSurfaceView.RENDERMODE_CONTINUOUSLY?

I have a C++ game running through JNI in Android. The frame rate varies from about 20-45fps due to scene complexity. Anything above 30fps is silly for the game; it's just burning battery. I'd like to limit the frame rate to 30 fps.
I could switch to RENDERMODE_WHEN_DIRTY, and use a Timer or ScheduledThreadPoolExecutor to requestRender(). But that adds a whole mess of extra moving parts that might or might not work consistently and correctly.
I tried injecting Thread.sleep() when things are running quickly, but this doesn't seem to work at all for small time values. And it may just be backing events into the queue anyway, not actually pausing.
Is there a "capFramerate()" method hiding in the API? Any reliable way to do this?
The solution from Mark is almost good, but not entirely correct. The problem is that the swap itself takes a considerable amount of time (especially if the video driver is caching instructions). Therefore you have to take that into account or you'll end up with a lower frame rate than desired.
So the thing should be:
somewhere at the start (like the constructor):
startTime = System.currentTimeMillis();
then in the render loop:
public void onDrawFrame(GL10 gl)
{
    endTime = System.currentTimeMillis();
    dt = endTime - startTime;
    if (dt < 33) {
        try {
            Thread.sleep(33 - dt);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    startTime = System.currentTimeMillis();
    UpdateGame(dt);
    RenderGame(gl);
}
This way you will take into account the time it takes to swap the buffers and the time to draw the frame.
When using GLSurfaceView, you perform the drawing in your Renderer's onDrawFrame which is handled in a separate thread by the GLSurfaceView. Simply make sure that each call to onDrawFrame takes (1000/[frames]) milliseconds, in your case something like 33ms.
To do this: (in your onDrawFrame)
Measure the current time before you start drawing using System.currentTimeMillis (let's call it startTime)
Perform the drawing
Measure time again (Let's call it endTime)
deltaT = endTime - startTime
if deltaT < 33, sleep (33-deltaT)
That's it.
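Spelled out as code, that procedure looks roughly like the following sketch (drawFrame() is a placeholder for whatever drawing you do; note that Thread.sleep() needs its InterruptedException handled):

public void onDrawFrame(GL10 gl) {
    long startTime = System.currentTimeMillis();
    drawFrame(gl);                                 // perform the drawing
    long deltaT = System.currentTimeMillis() - startTime;
    if (deltaT < 33) {
        try {
            Thread.sleep(33 - deltaT);             // pad the frame out to ~33 ms (about 30 fps)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}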
Fili's answer looked great to me, but sadly it limited the FPS on my Android device to 25 FPS, even though I requested 30. I figured out that Thread.sleep() is not accurate enough and sleeps longer than it should.
I found this implementation from the LWJGL project to do the job:
https://github.com/LWJGL/lwjgl/blob/master/src/java/org/lwjgl/opengl/Sync.java
Fili's solution is failing for some people, so I suspect it's sleeping until immediately after the next vsync instead of immediately before. I also feel that moving the sleep to the end of the function would give better results, because there it can pad out the current frame before the next vsync, instead of trying to compensate for the previous one. Thread.sleep() is inaccurate, but fortunately we only need it to be accurate to the nearest vsync period of 1/60 s. The LWJGL code tyrondis posted a link to seems over-complicated for this situation; it's probably designed for when vsync is disabled or unavailable, which should not be the case here.
I would try something like this:
private long lastTick = System.currentTimeMillis();

public void onDrawFrame(GL10 gl)
{
    UpdateGame(dt);
    RenderGame(gl);
    // Subtract 10 from the desired period of 33ms to make generous
    // allowance for overhead and inaccuracy; vsync will take up the slack
    long nextTick = lastTick + 23;
    long now;
    while ((now = System.currentTimeMillis()) < nextTick) {
        try {
            Thread.sleep(nextTick - now);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    lastTick = now;
}
If you don't want to rely on Thread.sleep, use the following
double frameStartTime = (double) System.nanoTime() / 1000000;
// start time in milliseconds
// using System.currentTimeMillis() is a bad idea
// set this when you first start to draw

int frameRate = 30;
double frameInterval = (double) 1000 / frameRate;
// 1 s is 1000 ms; 30 frames per second means one frame is 1s/30 = 1000ms/30

public void onDrawFrame(GL10 gl)
{
    double endTime = (double) System.nanoTime() / 1000000;
    double elapsedTime = endTime - frameStartTime;
    if (elapsedTime >= frameInterval)
    {
        // call GLES20.glClear(...) here
        UpdateGame(elapsedTime);
        RenderGame(gl);
        frameStartTime += frameInterval;
    }
}
You may also try and reduce the thread priority from onSurfaceCreated():
Process.setThreadPriority(Process.THREAD_PRIORITY_LESS_FAVORABLE);
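For context, a minimal sketch of where that call would go (ThrottledRenderer is an illustrative name; onSurfaceCreated() runs on the GL thread, so the priority change affects the rendering thread itself):

import android.opengl.GLSurfaceView;
import android.os.Process;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class ThrottledRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Lower the GL thread's priority so it yields more readily to other work.
        Process.setThreadPriority(Process.THREAD_PRIORITY_LESS_FAVORABLE);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override
    public void onDrawFrame(GL10 gl) {
        // draw (and cap) the frame here
    }
}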
