I am currently working on an Android application in which I have to log all the sensor values. I get the sensor event timestamp from event.timestamp, and I convert this value into a Unix timestamp:
long currTimeRelativeToBootMs = SystemClock.uptimeMillis();
long currTimeAbsoluteMs = System.currentTimeMillis();
// wall-clock time at boot, in seconds
mStartTimeAbsoluteS = (currTimeAbsoluteMs - currTimeRelativeToBootMs) / 1000.0;
...
// timestampRelativeInNs = event.timestamp
double temp = mStartTimeAbsoluteS + timestampRelativeInNs / 1000000000.0;
My application works fine on my HTC phone (Android 2.x) but it does not work on the new Google Nexus 7.
I compared the event.timestamp values from the different devices. I started the devices at approximately the same time, yet I got very different values; the one from the Nexus 7 is longer by four digits:
SensorEvent-Timestamp(HTC): 175120992123000
SensorEvent-Timestamp(Nex): 1355418999245703000
What could be the reason for this, and how can I fix it?
I've seen numerous questions/answers showing how to get temperature information from an Android device - using this approach:
int zoneNumber = 0; // Usually 0 or 1
String temperatureFileLocation = "/sys/devices/virtual/thermal/thermal_zone" + zoneNumber + "/temp";
File temperatureFile = new File(temperatureFileLocation);
Scanner scanner = null;
try {
    scanner = new Scanner(temperatureFile);
    double temperatureC = scanner.nextFloat(); // Degrees C
    ...
} finally {
    if (scanner != null) scanner.close();
}
I wasn't really sure what each zone is for (i.e., in which part of the device the sensor is located), but I just discovered that there is also a file that describes the type of each zone - for example:
String zoneTypeFileLocation = "sys/devices/virtual/thermal/thermal_zone" + zoneNumber + "/type"; // NB - that's "/type" not "/temp" !
Now, when using Scanner to read in what type each zone is, I get values back such as this:
mtktswmt
mtktscpu
mtktspmic
mtktspa
mtktsabb
mtktsbattery
tsen_max
sec-fuelguage
Can anyone explain what locations/components all these zone names are actually referring to?
(Ideally, I would like to obtain the temperature of the device's NFC hardware.)
I guess those are the hardware thermal sensors of the phone. They give the temperature of the given zones while the phone is working, or when you run benchmarks. For example:
mtktswmt is the Wi-Fi chip temperature zone.
mtktscpu is the CPU temperature zone.
mtktspmic is the multi-IO and regulator chip temperature zone.
mtktspa is the thermal sensor MD1.
mtktsabb is the processor temperature zone.
mtktsbattery is the battery temperature zone.
tsen_max is the maximum temperature sensor capacity (I don't know for sure).
sec-fuelguage is the fuel gauge chip.
The mtkt prefix is just the name of the maker - in this case, MediaTek.
That's pretty hardcore hardware stuff. These zones are actually used by the makers of the Android phone (I guess). The data above comes from searching the Android Open Source Project, where the values were found in kernel drivers. So it's pretty low-level hardware to play with.
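To see which zone is which on a particular device, you can pair each zone's type file with its temp file - a sketch assuming the same sysfs layout as in the question (needs java.io.File, java.io.FileNotFoundException and java.util.Scanner):
for (int zone = 0; ; zone++) {
    File typeFile = new File("/sys/devices/virtual/thermal/thermal_zone" + zone + "/type");
    File tempFile = new File("/sys/devices/virtual/thermal/thermal_zone" + zone + "/temp");
    if (!typeFile.exists()) break; // no more zones
    try (Scanner typeScanner = new Scanner(typeFile);
         Scanner tempScanner = new Scanner(tempFile)) {
        // units of temp vary by device: degrees C or milli-degrees C
        Log.d("Thermal", typeScanner.next() + " = " + tempScanner.next());
    } catch (FileNotFoundException e) {
        break; // zone disappeared between exists() and open
    }
}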
To read the hardware properties that actually give you your desired results, try HardwarePropertiesManager.
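A sketch of that approach (API 24+; note that the exposed types are CPU, GPU, battery and skin - there is no NFC-specific sensor - and that on production builds the call is restricted to device/profile owner apps):
HardwarePropertiesManager hpm = (HardwarePropertiesManager)
        context.getSystemService(Context.HARDWARE_PROPERTIES_SERVICE);
// current CPU temperatures in degrees Celsius, one entry per sensor
float[] cpuTemps = hpm.getDeviceTemperatures(
        HardwarePropertiesManager.DEVICE_TEMPERATURE_CPU,
        HardwarePropertiesManager.TEMPERATURE_CURRENT);
for (float t : cpuTemps) {
    Log.d("Thermal", "CPU: " + t + " C");
}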
I hope it helps.
I upgraded my Samsung Galaxy S4 from latest KitKat to Lollipop (5.0.1) yesterday and my IR remote control app that I have used for months stopped working.
Since I was using a late copy of KitKat's ConsumerIrManager, the transmit() function was sending the number of pulses using the code below. It worked very nicely.
private void irSend(int freqHz, int[] pulseTrainInMicroS) {
    int[] pulseCounts = new int[pulseTrainInMicroS.length];
    for (int i = 0; i < pulseTrainInMicroS.length; i++) {
        // cast first so the multiplication is done in long, avoiding int overflow
        long iValue = (long) pulseTrainInMicroS[i] * freqHz / 1000000L;
        pulseCounts[i] = (int) iValue;
    }
    m_IRService.transmit(freqHz, pulseCounts);
}
When it stopped working yesterday, I began looking closely at it.
I noticed that the transmitted waveform bears no relationship to the requested pulse train. Even the code below doesn't work correctly:
private void TestSend() {
    int[] pulseCounts = {100, 100, 100};
    m_IRService.transmit(38000, pulseCounts);
}
The resulting waveforms had many problems and so were entirely useless:
the waveforms were entirely wrong
the frequency was wrong and the pulse spacing was not regular
they were not repeatable
Looking at the demodulated waveform:
If my 100, 100, 100 were correctly rendered, I should have seen two pulses 2.6 ms long (before 4.4.3(?), 100 µs). Instead I received (see attached) "[demodulated] not repeatable 1.BMP" and "[demodulated] not repeatable 2.BMP". Note that the waveform isn't 2 pulses... in fact, it's not even repeatable.
As for the captures below, the signal goes low when IR is detected.
We should have seen two pulses going low for 2.6 ms, with 2.6 ms between them (see green line below).
I had also tried shorter pulses using 50, 50, 50 and observed that the first pulse isn't correct either (see below).
Looking at the modulated waveform:
The frequency was not correct; instead, it was about 18 kHz and irregular.
I'm quite experienced with this and have formal education in electronics.
It seems to me there's a bug in ConsumerIrManager.transmit()...
Curiously, the "WatchOn" application that comes with the phone still works.
Thank you for any insights you can give.
Test equipment:
Tektronix TDS-2014B, 100 MHz, used in peak-detect mode.
As @IvanTellez says, a change was made in Android with respect to this functionality. Strangely, when I had it outputting simple IR signals (for troubleshooting purposes), the function behaved as shown above (erratically, wrong carrier frequency, etc.). When I eventually returned to normal types of IR signals, it worked correctly.
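For reference, a sketch of what the fixed send looks like under that change - since the pattern is now interpreted directly in microseconds, the pulse-count conversion from the question has to be dropped (m_IRService is the ConsumerIrManager instance from the question):
private void irSend(int freqHz, int[] pulseTrainInMicroS) {
    // 4.4.3+ / Lollipop: the on/off pattern is given in microseconds, not pulse counts
    m_IRService.transmit(freqHz, pulseTrainInMicroS);
}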
My application performs step counting in the background, using the step detector sensor APIs introduced in Android 4.4.x.
It's essential for my app to know the exact time (with at least one-second accuracy) at which each step event occurred.
Because I use sensor batching, the time at which onSensorChanged(SensorEvent event) is called is not the time the step event took place - I must use the event.timestamp field to get the event time.
the documentation about this field is:
The time in nanosecond at which the event happened
The problem:
On some devices (such as the Moto X 2013) this timestamp seems to be the time in nanoseconds since boot, while on other devices (such as the Nexus 5) it actually returns universal system time in nanoseconds, i.e. the same epoch as System.currentTimeMillis(), times 1,000,000.
I understand there's already an old open issue about this, but since sensor batching was introduced, it has become important to use this field to know the event time; it's no longer possible to rely on System.currentTimeMillis().
My question:
What should I do to get always the event time in system milliseconds across all devices?
Instead of your "2-day" comparison, you could just check if event.timestamp is less than e.g. 1262304000000000000 - that way you'd only have a problem if the user's clock is set in the past, or their phone has been running for 40 years...
Except that a comment on this issue indicates that sometimes it's even milliseconds instead of nanoseconds. And other comments indicate that there's an offset applied, in which case it won't be either system time or uptime-based.
If you really have to be accurate, the only way I can see is to initially capture an event (or two, for comparison) with max_report_latency_ns set to 0 (i.e. non-batched) and compare the timestamp to the system time and/or elapsedRealtime. Then use that comparison to calculate an offset (and potentially decide whether you need to compensate for milliseconds vs nanoseconds) and use that offset for your batched events.
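Registering non-batched is just a matter of passing 0 for the max report latency - a sketch (the four-argument registerListener, available from API 19, takes the latency in microseconds; listener stands for your SensorEventListener):
SensorManager sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
Sensor stepDetector = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
// maxReportLatencyUs = 0 disables batching, so events arrive as they happen
sensorManager.registerListener(listener, stepDetector,
        SensorManager.SENSOR_DELAY_FASTEST, 0 /* maxReportLatencyUs */);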
E.g. grab a couple of events, preferably a couple of seconds apart, recording the System.currentTimeMillis() each time and then do something like this:
long timestampDelta = event2.timestamp - event1.timestamp;
long sysTimeDelta = sysTimeMillis2 - sysTimeMillis1;
long divisor; // to get from timestamp to milliseconds
long offset; // to get from event milliseconds to system milliseconds
if (timestampDelta / sysTimeDelta > 1000) { // in reality ~1 vs ~1,000,000
    // timestamps are in nanoseconds
    divisor = 1000000;
} else {
    // timestamps are in milliseconds
    divisor = 1;
}
offset = sysTimeMillis1 - (event1.timestamp / divisor);
And then for your batched events
long eventTimeMillis = (event.timestamp / divisor) + offset;
One final caveat - even if you do all that, if the system time changes during your capture, it may affect your timestamps. Good luck!
I found a workaround that solves the problem. The solution assumes that the timestamp can only be one of two things: system time, or time since boot (both in nanoseconds):
protected long getEventTimestampInMills(SensorEvent event) {
    // assume first that event.timestamp is wall-clock time in nanoseconds
    long timestamp = event.timestamp / 1000 / 1000;
    /*
     * Work around the problem that on some devices event.timestamp
     * actually returns nanoseconds since the last boot.
     */
    if (System.currentTimeMillis() - timestamp > Consts.ONE_DAY * 2) {
        /*
         * If the original event timestamp yields a value that makes no
         * sense (it is very unlikely that events would be batched for
         * two whole days), assume that the event timestamp is actually
         * nanoseconds since boot.
         */
        timestamp = System.currentTimeMillis()
                + (event.timestamp - System.nanoTime()) / 1000000L;
    }
    return timestamp;
}
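A usage sketch, assuming the method above lives in your SensorEventListener (the log tag and message are just illustrative):
@Override
public void onSensorChanged(SensorEvent event) {
    long eventTimeMillis = getEventTimestampInMills(event);
    Log.d("StepLogger", "step event at " + new java.util.Date(eventTimeMillis));
}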
According to the link in your question:
This is, in fact, "working as intended". The timestamps are not
defined as being the Unix time; they're just "a time" that's only
valid for a given sensor. This means that timestamps can only be
compared if they come from the same sensor.
So, the timestamp field could be completely unrelated to the current system time.
However, if at startup you take two sensor samples without batching, you can compute the offset between System.currentTimeMillis() and the timestamp, as well as the ratio between the deltas of the two clocks; with these you should be able to convert between the two time bases:
//receive event1:
long t1Sys = System.currentTimeMillis();
long t1Evt = event.timestamp;
//receive event2:
long t2Sys = System.currentTimeMillis();
long t2Evt = event.timestamp;
//unregister sensor
// rate must be a double: with nanosecond timestamps the true ratio is about
// 1/1,000,000, which integer division would truncate to zero
double rateoffset = (double) (t2Sys - t1Sys) / (t2Evt - t1Evt);
// not exact, but should be well under a second off; possibly use an averaged value
long startoffset = t1Sys - (long) (t1Evt * rateoffset);
Now any timestamp from that sensor can be converted:
long sensorTimeMillis = (long) (event.timestamp * rateoffset) + startoffset;
I'm trying to get the time using Android and OpenGL for my racing game.
My code now is:
deltaTime = (System.currentTimeMillis() + startTime) / 1000000000000.0f;
startTime = System.currentTimeMillis();
tickTime += deltaTime;
DecimalFormat dec = new DecimalFormat("#.##");
Log.d("time", dec.format(tickTime/100));
but it's a bit too fast.
You may want to look at a bit of Android Breakout:
http://code.google.com/p/android-breakout/source/browse/src/com/faddensoft/breakout/GameState.java#1001
The computation is similar, but note it uses System.nanoTime(), which uses the monotonic clock. You don't want to use System.currentTimeMillis(), which uses the wall clock. If the device is connected to a network, the wall clock can be updated, which can cause big jumps forward or backward.
The code also includes a (disabled) frame-rate-smoothing experiment that didn't seem to matter much.
As I think you discovered, the key to this approach is to recognize that the time interval between frames is not constant, and you need to update the game state based on how much time has actually elapsed, not a fixed notion of display update frequency.
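For illustration, a minimal sketch of that approach (running, updateGameState() and drawFrame() are hypothetical stand-ins for your game loop):
long prevTimeNs = System.nanoTime(); // monotonic, unaffected by wall-clock changes
while (running) {
    long nowNs = System.nanoTime();
    float deltaSec = (nowNs - prevTimeNs) / 1000000000.0f; // elapsed seconds, varies per frame
    prevTimeNs = nowNs;
    updateGameState(deltaSec); // advance positions by velocity * deltaSec
    drawFrame();
}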
Since you're working in milliseconds, shouldn't you be dividing by 1000f instead of 1000000000000.0f?
I'm reading timestamp values from SensorEvent data but I can't work out the reference time for these values. The Android documentation just says "The time in nanosecond at which the event happened". As an example:
My current Android device date: October 14th 2011, 23:29:56.421 (GMT+2)
System.currentTimeMillis() * 1000000 (nanosec) = 1318627796431000000 (that's OK)
sensorevent.timestamp (nanosec) = 67578436328000 = 18 hours 46 min?
Can you help me?
Thanks
It appears that what you are dealing with is the number of nanoseconds since the operating system started, also known as "uptime".
Further info on the issue: http://code.google.com/p/android/issues/detail?id=7981
I should add that the linked question SensorEvent.timestamp to absolute (utc) timestamp? deals with the same issue and is where I found the answer.
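In other words, if you need an absolute time, you can add the wall-clock time of boot to the event timestamp - a sketch, assuming the timestamp really is boot-based nanoseconds (SystemClock.elapsedRealtimeNanos() requires API 17+):
// wall-clock time at boot, in nanoseconds
long bootTimeNanos = System.currentTimeMillis() * 1000000L - SystemClock.elapsedRealtimeNanos();
// event time as a Unix timestamp in milliseconds
long eventUnixMillis = (event.timestamp + bootTimeNanos) / 1000000L;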
I know that it's a very old question, but I'm also struggling to convert SensorEvent.timestamp to a human-readable time. So I'm writing down what I've understood so far and how I'm converting it, in order to get better solutions from you guys. Any comments are welcome.
As I understand it, SensorEvent.timestamp is the elapsed time since the device booted, so I have to know the device's boot time. If there were an API returning the boot time this would be very easy, but I haven't found one.
So I'm using SystemClock.elapsedRealtime() and System.currentTimeMillis() to estimate the device's boot time. This is my code:
private long mUptimeMillis; // member variable of the activity or service
...
atComponentsStartUp...() {
    ...
    /* Call elapsedRealtime() and currentTimeMillis() in a row
       in order to minimize the time gap between them. */
    long elapsedRealtime = SystemClock.elapsedRealtime();
    long currentTimeMillis = System.currentTimeMillis();
    /* Compute the wall-clock time at boot. This assumes that
       elapsedRealtime() and currentTimeMillis() were called at
       exactly the same time. They weren't, but the gap can be
       ignored because it is not significant
       (on my device, it's less than 1 ms). */
    mUptimeMillis = currentTimeMillis - elapsedRealtime;
    ...
}
...
public void onSensorChanged(SensorEvent event) {
    ...
    long eventTimeMillis = (event.timestamp / 1000000) + mUptimeMillis;
    Calendar calendar = Calendar.getInstance();
    calendar.setTimeInMillis(eventTimeMillis);
    ...
}
I think this works for apps where a millisecond-level error is okay. Please leave your ideas.