Android: How to control the onSensorChanged sampling rate

I am a bit puzzled by the sensor reading rates in Android. The code below reports delays of ~53 ms (ZTE Blade, with the sensor event rate set to SENSOR_DELAY_FASTEST).
public void onSensorChanged(SensorEvent event) {
    synchronized (this) {
        switch (event.sensor.getType()) {
            case Sensor.TYPE_MAGNETIC_FIELD:
                TimeNew = event.timestamp;
                delay = (long) ((TimeNew - TimeOld) / 1000000);
                TimeOld = TimeNew;
                Log.d("Test", delay + " ms");
                break;
        }
    }
}
The log:
DEBUG/Test(23024): 52 ms
DEBUG/Test(23024): 53 ms
DEBUG/Test(23024): 54 ms
DEBUG/Test(23024): 56 ms
DEBUG/Test(23024): 52 ms
DEBUG/Test(23024): 52 ms
DEBUG/Test(23024): 55 ms
DEBUG/Test(23024): 52 ms
If we want to average, say, 100 samples and then save the data, the time between every 100th sample will vary significantly, presumably because the sensor values do not arrive at regular intervals.
But am I missing something? Is there a way to get (more) regular measurements (e.g. every 100 ms)? Or should I average over a specific period of time instead of a number of samples?
Also, 50 ms seems like a rather long interval. Could that be a hardware limitation of the device? Will this number vary across platforms?
Any advice is appreciated.

I would average over a period of time rather than over a number of samples. Different devices with different capabilities will generate substantially different results with the sample-count approach. If you want more regular control over the sampling, I would decouple the measurement from the event callback and simply poll the latest value at whatever frequency you wish.
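A minimal sketch of the time-window approach (not from the original answer; the 100 ms window, the class name, and the magnetometer choice are assumptions for illustration):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.util.Log;

// Sketch: average magnetometer samples over a fixed 100 ms window instead of a fixed sample count.
public class WindowedAverager implements SensorEventListener {
    private static final long WINDOW_NS = 100_000_000L; // 100 ms, in the same nanosecond units as event.timestamp

    private long windowStart = 0;
    private int sampleCount = 0;
    private final float[] sum = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_MAGNETIC_FIELD) return;
        if (windowStart == 0) windowStart = event.timestamp;

        for (int i = 0; i < 3; i++) sum[i] += event.values[i];
        sampleCount++;

        // Emit one averaged value per window, regardless of how many samples arrived in it
        if (event.timestamp - windowStart >= WINDOW_NS) {
            Log.d("Test", "avg of " + sampleCount + " samples: "
                    + sum[0] / sampleCount + ", " + sum[1] / sampleCount + ", " + sum[2] / sampleCount);
            windowStart = event.timestamp;
            sampleCount = 0;
            sum[0] = sum[1] = sum[2] = 0;
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}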

Related

setting an accuracy timer for gps app in android

I'm new to coding, so please be gentle.
I have a GPS app whose accuracy I want to alter.
I want it to log only when it has a strong signal, not a signal that comes and goes.
Below 20 it plots perfectly well, but when the signal is a bit in and out it plots extremes.
So what I was thinking was: if the accuracy stays below, say, 10 for 10 seconds, start logging, and if it stays over 30 for 30 seconds, stop logging,
but keep the accuracy threshold at 20 while logging.
I can work out how to use the simple check
if (currentAccuracy <= 20.0)
but how would I implement the timer section? All I can find on timers is countdown clocks.
It is currently this:
public boolean shouldLog(float currentAccuracy) {
    if (currentAccuracy <= 20.0) return true;
    else return false;
}
I'm sure this is simple if you know your way around, but I'm at a loss.
Thanks for your help.
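One way to express the timing described above (a sketch, not from the original thread; the class name, thresholds, and hold times simply restate the question's numbers):

// Sketch: start logging once accuracy has stayed below 10 m for 10 s,
// stop once it has stayed above 30 m for 30 s, and accept fixes of 20 m or better while logging.
public class AccuracyGate {
    private static final float START_ACCURACY = 10f;  // metres
    private static final float STOP_ACCURACY = 30f;   // metres
    private static final long START_HOLD_MS = 10_000; // must stay good this long before logging starts
    private static final long STOP_HOLD_MS = 30_000;  // must stay bad this long before logging stops

    private boolean logging = false;
    private long goodSince = -1; // when accuracy first dropped below START_ACCURACY
    private long badSince = -1;  // when accuracy first rose above STOP_ACCURACY

    public boolean shouldLog(float currentAccuracy) {
        long now = System.currentTimeMillis();

        if (!logging) {
            if (currentAccuracy <= START_ACCURACY) {
                if (goodSince < 0) goodSince = now;
                if (now - goodSince >= START_HOLD_MS) {
                    logging = true;
                    badSince = -1;
                }
            } else {
                goodSince = -1;
            }
        } else {
            if (currentAccuracy >= STOP_ACCURACY) {
                if (badSince < 0) badSince = now;
                if (now - badSince >= STOP_HOLD_MS) {
                    logging = false;
                    goodSince = -1;
                }
            } else {
                badSince = -1;
            }
        }

        // While logging, only keep fixes that are at least as accurate as 20 m
        return logging && currentAccuracy <= 20.0f;
    }
}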

Kmsg timestamps are 500ms in the future

I am trying to keep track of when the system wakes and suspends (ideally when monotonic_time starts and stops) so that I can accurately correlate monotonic time-stamps to the realtime clock.
On Android, the first method that came to mind was to monitor kmsg for a wakeup message and use its timestamp as a fairly accurate mark. As I was unsure of the accuracy of this timestamp, I decided to log the current monotonic time as well.
The following code runs in a standalone executable:
#include <cstdio>
#include <cstring>
#include <ctime>
#include <iostream>
#include <string>

// Declarations added so the fragment is self-contained; in the original post only the loop is shown
char mLineBuffer[1024];
FILE *mKmsgFile = fopen("/proc/kmsg", "r");
struct timespec mMono;

while (true)
{
    fgets(mLineBuffer, sizeof(mLineBuffer), mKmsgFile);
    // Find first space
    char *messageContent = strchr(mLineBuffer, ' ');
    // Offset one to get character after space
    messageContent++;
    if (strncmp(messageContent, "Enabling non-boot CPUs ...", 25) == 0)
    {
        clock_gettime(CLOCK_MONOTONIC, &mMono);
        std::cout << mLineBuffer;
        std::cout << std::to_string(mMono.tv_sec) << "." << std::to_string(mMono.tv_nsec) << "\n";
    }
}
I expected the time returned by clock_gettime to be at some point after the kmsg log timestamp, but instead it is anywhere from 600ms before to 200ms after.
<6>[226692.217017] Enabling non-boot CPUs ...
226691.681130889
-0.535886111
<6>[226692.626100] Enabling non-boot CPUs ...
226692.80532881
+0.17922881
<6>[226693.305535] Enabling non-boot CPUs ...
226692.803398747
-0.502136253
During this particular session, CLOCK_MONOTONIC consistently differed from the kmsg timestamp by roughly -500ms, only once flipping over to +179ms over the course of 10 wakeups. During a later session it was consistently off by -200ms.
The same consistent offset is present when monitoring all kmsg entries during normal operation (not suspending or waking). Perhaps returning from suspend occasionally delays my process long enough to produce a timestamp that is ahead of kmsg, resulting in the single +179ms difference.
CLOCK_MONOTONIC_COARSE and CLOCK_MONOTONIC_RAW behave in the same manner.
Is this expected behavior? Does the kernel run on a separate monotonic clock?
Is there any other way to get wakeup/suspend times that correlate to monotonic time?
The ultimate goal is to use this information to help graph the contents of wakeup_sources over time, with a particular focus on activity immediately after waking. Though, if the kmsg timestamps are "incorrect", then the wakeup_sources ones probably are too.

Explanation of how this MIDI lib for Android works

I'm using the library by @LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always made studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);

Tempo tempo = new Tempo();
tempo.setBpm(228);

tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);

// Track 1 will have some notes in it
final int NOTE_COUNT = 80;

for (int i = 0; i < NOTE_COUNT; i++)
{
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;

    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration. Is it in milliseconds? That doesn't make sense if I keep duration = 120 and set my BPM to 60, for example. Nor do I understand the velocity.
MY GOAL
I want to insert notes of X pitch with Y duration.
Could anyone give me a clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
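If you want to specify a duration in milliseconds, you can convert it to ticks using the tempo and the ticks-per-quarter-note resolution. A sketch (the 480 PPQ value matches the snippet above; the helper name millisToTicks is just for illustration):

// Sketch: convert a duration in milliseconds to MIDI ticks, assuming 480 ticks per quarter note.
// At a given BPM, one quarter note lasts 60000 / bpm milliseconds and spans PPQ ticks.
static final int PPQ = 480;

static long millisToTicks(long durationMs, int bpm) {
    return durationMs * bpm * PPQ / 60000L;
}

// Example: at 228 BPM, a 250 ms note is 250 * 228 * 480 / 60000 = 456 ticks, just under a quarter note.
// noteTrack.insertNote(0 /* channel */, 60 /* middle C */, 100 /* velocity */, tick, millisToTicks(250, 228));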
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
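A small sketch of the variable-length-quantity encoding mentioned in the excerpt (the method name is illustrative); it reproduces the 720 → 85 50 dotted-quarter example:

// Sketch: encode a MIDI delta time as a variable-length quantity.
// Each byte holds 7 bits of the value; all bytes except the last have the high bit set.
static byte[] encodeVlq(long value) {
    // Collect 7-bit groups, least significant first
    java.util.ArrayList<Byte> groups = new java.util.ArrayList<>();
    do {
        groups.add((byte) (value & 0x7F));
        value >>= 7;
    } while (value > 0);

    // Emit the most significant group first, setting the continuation bit on all but the last byte
    byte[] out = new byte[groups.size()];
    for (int i = 0; i < out.length; i++) {
        byte b = groups.get(groups.size() - 1 - i);
        out[i] = (i < out.length - 1) ? (byte) (b | 0x80) : b;
    }
    return out;
}

// encodeVlq(720) returns { (byte) 0x85, (byte) 0x50 }, matching the excerpt's example.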

What is the meaning of Incl CPU Time, Excl CPU Time, Incl Real CPU Time, Excl Real CPU Time in traceview?

1) Exclusive time is the time spent in the method
2) Inclusive time is the time spent in the method plus the time spent in any called functions
3) We refer to calling methods as "parents" and called methods as "children."
My question is: what is the difference between
Incl CPU Time and Incl Real CPU Time?
Excl CPU Time and Excl Real CPU Time?
In one of my example trace files,
for Method1(): Incl CPU Time = 242 msec and Incl Real CPU Time = 5012 msec.
I cannot identify the reason behind the 5012 - 242 = 4770 msec gap between the two times.
Please help me if you have any idea.
Here's the DDMS documentation
Incl CPU time is the inclusive cpu time. It is the sum of the time spent in the function itself, as well as the sum of the times of all functions that it calls.
Excl CPU time is the exclusive cpu time. It is only the time spent in the function itself. You'll notice that it is always the same as the "incl time" of the "self" child.
The documentation doesn't clarify the difference between CPU time and real time, but I agree with Neetesh that CPU time is the time that the function is actually running (this would not include waiting on IO) and the real time is the wall clock time (which would include time spent doing IO).
CPU time is the time for which the process actually uses the CPU, while real CPU time is the total elapsed time from the start of the call to its end, which includes the time the process spends waiting to execute.
From the contents of the .trace file, you can see that the CPU time differs from the real CPU time, which matches the description in the Android docs:
CPU time considers only the time that the thread is actively using CPU time, and real time provides absolute timing information from the moment your app enters a method to when it exits that method—regardless of whether the thread is active or sleeping.
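A small illustration of the same distinction on Android (a sketch, not from the original answers; SystemClock.currentThreadTimeMillis() counts only the time the current thread spends running on the CPU):

import android.os.SystemClock;
import android.util.Log;

// Sketch: contrast real (wall-clock) time with thread CPU time.
// Sleeping or waiting advances real time but barely advances CPU time,
// which is where a large "Incl Real CPU Time" vs "Incl CPU Time" gap comes from.
public class TimeDemo {
    static long work() {
        long x = 0;
        for (long i = 0; i < 500_000_000L; i++) x += i; // busy loop: consumes CPU time
        return x;
    }

    public static void demo() {
        long wallStart = SystemClock.elapsedRealtime();
        long cpuStart = SystemClock.currentThreadTimeMillis();

        SystemClock.sleep(4000);  // thread is idle: real time advances, CPU time barely does
        long result = work();     // thread is computing: both advance

        long wallMs = SystemClock.elapsedRealtime() - wallStart;       // roughly 4000 ms plus the busy-loop time
        long cpuMs = SystemClock.currentThreadTimeMillis() - cpuStart; // roughly the busy-loop time only
        Log.d("TimeDemo", "real=" + wallMs + " ms, cpu=" + cpuMs + " ms, x=" + result);
    }
}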
Just as Chris and David said, I did a test.
#include <unistd.h>

#define S ((long long)1000 * 1000 * 1000)

// My CPU frequency is 3 GHz
void run() {
    for (int i = 0; i < S; ++i);
}

void g() {
    run();
    run();
    run();
    for (int i = 0; i < S; ++i);
}

int main() {
    g();
    // run();
    return 0;
}
As you can see in the resulting trace, the inclusive time of function g is 8 s and its exclusive time is 2 s (the three run() calls take about 2 s each, and g's own loop takes another 2 s).

How to get x, y, z values from the Android accelerometer sensor at a regular frequency, for instance every 20 ms, 40 ms or 60 ms

I'm working on an Android project and have run into the situation below:
We need the accelerometer values at a regular frequency, such as every 20 ms, 40 ms or 60 ms.
We are using SENSOR_DELAY_GAME right now, but we found that different devices have different intervals for this parameter. For instance, the G2 uses 40 ms, the G7 uses 60 ms and the Nexus S uses 20 ms.
I tried setting a timer and using Thread.sleep, but because of Java's garbage collection they cannot make the system deliver the values at a regular frequency.
This is very annoying, so if anyone knows of a proper method in the Android SDK that lets me get the accelerometer values at a regular frequency, that would be very helpful!
Thanks a lot!
I've done this by simply throwing away values that arrive sooner than I want them. It's not ideal from a battery-consumption standpoint, since the sensor has to feed me more often than I need, but at least the readings I keep come in at a regular interval.
Something like:
static final int ACCEL_SENSOR_DELAY = 100; // the number of milliseconds to wait before accepting another reading from the accelerometer sensor
long lastAccelSensorChange = 0; // the last time an accelerometer reading was processed

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    if (sensorEvent.accuracy == SensorManager.SENSOR_STATUS_UNRELIABLE) return;

    long now = System.currentTimeMillis();
    if (now - ACCEL_SENSOR_DELAY > lastAccelSensorChange) {
        lastAccelSensorChange = now;
        mCompassValues = sensorEvent.values.clone();
        // ... do your stuff
    }
}
I have built some code that lets you measure the exact frequency on any device.
You can download the project here and get some explanations.
In the code, you can try the different rates. For example, the normal mode on my Galaxy S2 is 5 Hz.
Use registerListener and set the sampling period, as below:
boolean registerListener(SensorEventListener listener, Sensor sensor, int samplingPeriodUs)
Word of caution: the samplingPeriodUs argument is only a hint to the system. Test it before relying on it.
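For example (a sketch; listener is assumed to be your SensorEventListener, and 20,000 µs corresponds to the 20 ms case from the question):

// Sketch: ask for accelerometer updates roughly every 20 ms (20,000 microseconds).
// The period is only a hint; the actual rate still depends on the hardware.
SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
sensorManager.registerListener(listener, accelerometer, 20_000 /* samplingPeriodUs */);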
