I'm developing a tool which receives motion sensor data and sends it to a machine learning algorithm, which ultimately will deduce different types of movement.
I read the Motion sensors guide and it seems like there is some redundancy in the data you can get from the sensors. For example, the accelerometer data contains gravity, while the linear acceleration data shows acceleration with the contribution of gravity removed.
So my question is: do I really need all the sensors to capture all forms of motion, or can I give up some of them?
EDIT: (clarifying the question)
I want to collect the minimal data that will allow me to deduce the same things. What I'm looking for is user behavior: the angle at which the user holds his phone, the way the user moves while using his phone, etc.
The answer I'm looking for should identify the sets of sensors whose outputs are highly correlated, so that only some of the sensors in each set are needed to deduce the same kind of motion/movement/rotation/acceleration.
The term "Motion" in the question have no precise meaning. So I answer more generally.
"The way one holds his phone" is nothing but the orientation of the phone.There are three sensors which individually tells the orientation of the phone.
Accelerometer sensor
Orientation sensor
Rotation Vector sensor
Among them, only the accelerometer is a physical sensor; the other two are virtual sensors (they don't have a dedicated piece of hardware, they use accelerometer data and report the orientation in different formats).
The orientation sensor is deprecated, so you shouldn't use it.
The rotation vector sensor reports the orientation encoded in a quaternion. If your code is based on quaternions, convert the sensor output using SensorManager.getQuaternionFromVector() and continue. If your code is based on a rotation matrix, obtain the rotation matrix by calling SensorManager.getRotationMatrixFromVector() with the sensor output and continue. If you want the orientation angles alone, get them by calling SensorManager.getOrientation() with the rotation matrix obtained previously.
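For example, a minimal listener for the quaternion path might look like this (the class and field names are placeholders of my own, not anything prescribed by the SDK):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class RotationVectorListener implements SensorEventListener {
    private final float[] quaternion = new float[4]; // w, x, y, z

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // Convert the rotation vector into a unit quaternion.
            SensorManager.getQuaternionFromVector(quaternion, event.values);
            // quaternion[0] is w, quaternion[1..3] are x, y, z.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}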
Using the accelerometer sensor alone we can find the orientation, but the recommended approach is to combine it with the magnetic field sensor output. Call SensorManager.getRotationMatrix(), passing the accelerometer output and the magnetic field sensor output, to get the rotation matrix. If your code is based on a rotation matrix, just continue. If you want the orientation angles alone, get them by calling SensorManager.getOrientation() with the rotation matrix obtained previously. If your code is based on quaternions, call SensorManager.getQuaternionFromVector() with the rotation vector obtained previously.
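A minimal sketch of the accelerometer plus magnetic field path (again, names are placeholders; real code should also handle sensor availability and accuracy):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class OrientationListener implements SensorEventListener {
    private final float[] accel = new float[3];
    private final float[] magnetic = new float[3];
    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3]; // azimuth, pitch, roll in radians

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, accel, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, magnetic, 0, 3);
        }
        // Combine the two readings into a rotation matrix, then extract the angles.
        if (SensorManager.getRotationMatrix(rotationMatrix, null, accel, magnetic)) {
            SensorManager.getOrientation(rotationMatrix, orientation);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}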
"The way one moves his phone" - Here I consider four motions.
Change of position (simple translation) and rate of change of position (velocity) - no sensor detects them.
Rate of change of velocity (simple acceleration) - the accelerometer detects it, but its output also contains the gravity component. Normally we need the acceleration without the gravity component. This can be calculated simply, as explained here (a rough sketch follows this list). However, there is another virtual sensor called Linear Acceleration which does the job for us.
Change of orientation (rotation) - whenever the orientation changes, the accelerometer, orientation and rotation vector sensors report it (the gyroscope also reports it, but that is covered in the next point). How to use these sensors to get the current orientation is explained in the first part of this answer.
Rate of change of orientation (angular velocity) - whenever the orientation changes, the gyroscope sensor reports it. The output is three numbers representing the angular velocity around the x, y and z axes, in radians per second.
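For the second point above, if you prefer not to rely on the Linear Acceleration virtual sensor, the gravity component can be isolated with a simple low-pass filter and subtracted. This is only a rough sketch; the class name and the smoothing constant are assumptions you would tune yourself:

public class GravityFilter {
    private final float[] gravity = new float[3];
    private final float[] linearAccel = new float[3];
    private static final float ALPHA = 0.8f; // smoothing factor; tune for your sample rate

    // Call with the raw accelerometer values from onSensorChanged().
    public float[] removeGravity(float[] accelValues) {
        for (int i = 0; i < 3; i++) {
            // Low-pass filter isolates the slowly changing gravity component.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * accelValues[i];
            // Subtracting it leaves an approximation of linear acceleration.
            linearAccel[i] = accelValues[i] - gravity[i];
        }
        return linearAccel;
    }
}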
The gyroscope output drifts in the long term and the accelerometer output is noisy in the short term, so combine them to get a steady output. For details see this question.
Now it is clear that, at minimum, the accelerometer and the gyroscope are required. However, using a wider range of sensors minimizes our work.
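If you want to do the combination yourself, one common approach is a complementary filter. The sketch below is only illustrative: the class, the weighting constant and the single-angle formulation are my own assumptions, not something prescribed by the SDK.

public class ComplementaryFilter {
    private static final float K = 0.98f; // weighting; an assumed value to tune
    private float angle; // fused angle estimate, in radians

    // gyroRate: angular velocity around one axis (rad/s) from TYPE_GYROSCOPE
    // accelAngle: the same angle estimated from the accelerometer (rad)
    // dt: time since the previous gyroscope sample, in seconds
    public float update(float gyroRate, float accelAngle, float dt) {
        // Trust the gyroscope in the short term (integration), the accelerometer in the long term.
        angle = K * (angle + gyroRate * dt) + (1 - K) * accelAngle;
        return angle;
    }
}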
You can't decide what you get - each sensor's data is already defined, and you get all or nothing. If you look closely, there isn't a place in the public API that would let you ask for specific parts of a reading.
To back this up, here's a quote from Google's document explaining sensor types:
An accelerometer sensor reports the acceleration of the device along the 3 sensor axes. The measured acceleration includes both the physical acceleration (change of velocity) and the gravity. The measurement is reported in the x, y and z fields of sensors_event_t.acceleration.
If you look into the Android source, the structs there are strictly defined, and the struct for acceleration contains specific fields. So even if you got 0 in the fields you don't want, you wouldn't gain anything.
But what you're referring to are two things - base sensors, which are roughly equivalent to physical sensors on the device, and composite sensors, which combine readings from various physical sensors to get more useful data.
So while you can't decide what you get for a particular sensor (like "only gravity" or "only acceleration on the Y axis"), composite sensors do give you data that you could compute yourself using only base sensors. Linear acceleration, for example, is a composition of data from the accelerometer and gyroscope (or magnetic sensor), after some calculations. Similarly, the step detector "sensor" uses only the accelerometer, but interprets the data automatically to give you an event saying "yes, someone has made a step", with the single value 1.
If you're feeding raw motion data to some algorithms, I would guess base sensors are what you're looking for. That said, I believe you can still safely register for all sensors (both base and composite ones) that combined give you all data that you need (and maybe more), without meaningful battery impact.
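For illustration, a minimal sketch of registering one base and one composite sensor with the same listener (the helper class, the listener and the chosen delay are placeholders of my own):

import android.hardware.Sensor;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class SensorSetup {
    // Registers the same listener for a base sensor and a composite sensor.
    public static void register(SensorManager sm, SensorEventListener listener) {
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);        // base sensor
        Sensor linear = sm.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION); // composite sensor
        if (accel != null) {
            sm.registerListener(listener, accel, SensorManager.SENSOR_DELAY_GAME);
        }
        if (linear != null) {
            sm.registerListener(listener, linear, SensorManager.SENSOR_DELAY_GAME);
        }
    }
}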
For more detailed information on each of the sensors refer to Sensor types on Android website, and if you're curious, you can read up short summary on sensors stack as well.
No, you don't need every sensor. Some of the sensors exist as a convenience. Your example of the linear acceleration sensor is one: it gives you the accelerometer results with gravity taken out. You could do this yourself from the raw accelerometer data, but it takes a bit of math (you need to subtract the gravity vector across all 3 axes) and a bit of know-how (did you remember to calibrate the sensor? It may not read 9.8 at rest. For that matter, 9.8 may not be your gravity if you're not at sea level). That's a lot of work that would need to be repeated by each app, so they created a software "sensor" that sits on top of the accelerometer and provides the computed data.

It would be unusual to use the raw and linear accelerometers in the same app; generally it's one or the other. The step counter is another example of this: it guesses at what a step is based on the accelerometer data. Likewise, you wouldn't want both calibrated and uncalibrated gyroscope data.
As for what you do need: no clue, since you don't say enough about what you're trying to do. One warning, though: you said you're trying to detect motion. You can't do that. You can detect accelerations and rotations; you cannot detect motion at a constant speed. If you're developing any type of app using these sensors, it pays to use the correct terminology and think in terms of physics and how the physical accelerometer and gyroscope work, otherwise you're going to cause yourself bugs.
Related
So, right now I'm grabbing the accelerometer data and converting them to a decently rough estimate of the angle at which the phone is being held. For right now I'm just focused on the yaw axis.
My area of interest is between 0 and 45 degrees on the yaw axis, so I made a limited queue of the past 5 to 10 readings and compared the numbers to determine if it's going up or down, which kind of works, but it is slow and not really as precise or reliable as I'd want it to be.
Is there a way you can kind of just determine which direction your phone is rotating with just the accelerometer and the magnetic field sensor I guess, without keeping a history of past readings, or something like that? I'm really new to sensor manipulation and Android in general. Any help understanding would be great.
It's not clear exactly what you're looking for here, position or velocity. Generally speaking, you don't want to get a position measurement by using integration on the accelerometer data. There's a lot of error associated with that calculation.
If you literally want the "direction your phone is rotating," rather than angular position, you can actually get that directly from the gyroscope sensor, which provides rotational velocities. That would let you get the direction it's rotating from the velocity without storing data. You should be aware that not every phone has a gyroscope sensor, but it does seem like the newer ones do.
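As a rough sketch of that idea (the axis choice and the threshold are assumptions on my part), the sign of the gyroscope reading around the axis you care about already gives you the rotation direction, with no history kept:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

public class RotationDirectionListener implements SensorEventListener {
    private static final float THRESHOLD = 0.1f; // rad/s; ignore tiny jitter (assumed value)

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
            float zRate = event.values[2]; // angular velocity around the z axis, rad/s
            if (zRate > THRESHOLD) {
                // rotating counter-clockwise around z (when looking at the screen)
            } else if (zRate < -THRESHOLD) {
                // rotating clockwise around z
            }
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}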
If you want the absolute orientation of the phone (position), you can use the Rotation Vector sensor. This is a combined sensor that automatically integrates data from several of the sensors in one go, and provides additional accuracy. From this, you can get roll-pitch-yaw with a single measurement. Basically, you first want to get your data from the Rotation_vector sensor. Then you use the sensor data with getRotationMatrixFromVector. You can use the output from that in getOrientation (see the same page as the previous link), which will spit out roll-pitch-yaw measurements for you. You might need to rotate the axes around a bit to get the angles measured positive in the direction you want.
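A sketch of that sequence is below; the remapCoordinateSystem() call is just one example of "rotating the axes around", and the axes chosen here may not match your setup:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class RollPitchYawListener implements SensorEventListener {
    private final float[] rotationMatrix = new float[9];
    private final float[] remapped = new float[9];
    private final float[] angles = new float[3]; // azimuth (yaw), pitch, roll in radians

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
        // Turn the rotation vector into a rotation matrix.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // Optional: remap the axes, e.g. for a device held upright facing the user.
        SensorManager.remapCoordinateSystem(rotationMatrix,
                SensorManager.AXIS_X, SensorManager.AXIS_Z, remapped);
        // Extract yaw/pitch/roll.
        SensorManager.getOrientation(remapped, angles);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}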
Android provides both the rotation vector sensor and the orientation sensor. I know they return different data: the rotation vector sensor gives sine-based quaternion components, while the orientation sensor gives angles. But what's the conceptual difference? I can't understand it from the docs. Which one provides the orientation of the device in three-dimensional space? I'm confused!
The older ORIENTATION sensors report orientation using three angles. The problem with this coordinate system is that it suffers from "gimbal lock": when the actual orientation vector is close to vertical, one of the coordinates goes to 90 or -90 degrees, and the remaining two coordinates become either uninterpretable or dangerously denormalized.
The newer ROTATION sensors report orientation using quaternion coordinates, which are more complicated to work with but don't suffer from the gimbal lock problem. When orientation is reported using quaternion coordinates, you can determine the precise orientation of the device no matter what that orientation is.
Quaternions are also more computationally efficient. You don't need to call expensive trig functions to apply a quaternion rotation to a vector. If the w coordinate isn't supplied, you can still compute w with a single sqrt call, compared to three sin and three cos calls for orientation coordinates in the three-angle Euler form.
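For example, when only the x, y and z components of a unit quaternion are delivered, w can be recovered like this (a small illustrative helper, not part of the SDK):

public class QuaternionUtil {
    // Recover w from the x, y, z components of a unit quaternion: w^2 + x^2 + y^2 + z^2 = 1.
    public static float recoverW(float x, float y, float z) {
        float w2 = 1f - (x * x + y * y + z * z);
        // Guard against small negative values caused by rounding.
        return w2 > 0f ? (float) Math.sqrt(w2) : 0f;
    }
}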
Short story: the ORIENTATION-style sensors were done wrong. They were fixed in API 9, by replacing them with ROTATION sensors.
The ROTATION_VECTOR sensor was introduced in API 9 and is a 'virtual' sensor which combines data from different sensors (usually ACCELEROMETER, GEOMAGNETIC_FIELD and GYROSCOPE) and does some smart calculations to provide more accurate data than using raw data from the ACCELEROMETER and GEOMAGNETIC_FIELD sensors alone. This is called 'sensor fusion'. You can find more info here.
The ORIENTATION sensor is deprecated since the data it provides is not very accurate. The documentation suggests using raw data from the ACCELEROMETER and GEOMAGNETIC_FIELD sensors instead.
Unfortunately, I cannot provide any examples of how to use ROTATION_VECTOR sensor data since I'm in the process of investigating it right now :)
Just in case you need some examples of how to use the raw data, feel free to ask me and I'll post some, but simple googling may help you faster ;)
They are conceptually the same, just representationally different.
Have a look at the code of the orientation sensor here.
The parameter of the function for the orientation sensor is the rotation matrix, which in turn is calculated from the rotation vector (the quaternion representation).
On cheap Android phones (unlike higher-end iPhones), the compass will work only when the phone's orientation is close to horizontal (i.e. parallel to the ground).
Technically a good compass (i.e. a floating magnetic sphere) should work in any orientation, but the cheap ones don't. Hence, to use the compass, make sure that the phone is horizontal by looking at the ACCELEROMETER before you use the MAGNETOMETER. Hopefully Google will use better magnetometers in the future!
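A crude way to do that check (the helper and the 15-degree tolerance are arbitrary choices of mine) is to compute the tilt from the raw accelerometer vector before trusting the heading:

public class TiltCheck {
    // Returns true if the phone is lying roughly flat (screen up), based on raw accelerometer values.
    public static boolean isRoughlyHorizontal(float[] accelValues) {
        float x = accelValues[0], y = accelValues[1], z = accelValues[2];
        // Angle between the measured gravity vector and the device's z axis.
        double tiltRad = Math.acos(z / Math.sqrt(x * x + y * y + z * z));
        return Math.toDegrees(tiltRad) < 15.0; // assumed tolerance of 15 degrees
    }
}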
I am writing a program using the Android sensors, and I am confused by the readings of the magnetometer sensor.
The magnetometer reports the magnetic field strength along the three axes of the phone. I observe that at the same location, if the phone's heading changes, the magnetic readings change dramatically.
In my understanding, however, the earth's magnetic field at a specific location should be relatively stable, regardless of how the phone is held.
So, my question is: is there any way to transform the raw readings from the 3-axis magnetometer sensor into the world's coordinate system? The accelerometer and orientation data are also available on mobile phones. If so, I suspect the transformed field should be the same even when the phone's heading changes.
I have referred to the Android source code, specifically the getOrientation() and getRotationMatrix() functions. I hoped to get some help from their implementation, but I did not understand it very well. Could someone explain the algorithmic principle behind these functions?
Link to the code of the functions: http://www.netmite.com/android/mydroid/cupcake/frameworks/base/core/java/android/hardware/SensorManager.java
Thanks! I am really eager to find a solution to this question.
This is impossible, since the device does not know its orientation in world space.
Of course, the orientation can be estimated from the sensor input, and that is what getOrientation() and getRotationMatrix() do. However, on a long timescale only the measurement of acceleration (dominated by gravity) and the magnetic field provide the necessary information. Gyroscope data can be used to refine the estimate over shorter periods, but getOrientation() is not guaranteed to use it, and that sensor may not even exist on the particular device.
This means that transforming the readings back into world coordinates would use the exact same data which you want to correct, rendering it useless.
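To make the circularity concrete, here is a sketch of what the back-transformation would look like (the helper class is my own invention). The rotation matrix is computed from the very same accelerometer and magnetometer reading being transformed, so the world-frame result always lines up with magnetic north and the vertical by construction:

import android.hardware.SensorManager;

public class WorldFrameMagnetometer {
    // Rotate a device-frame magnetometer reading into world coordinates (x = east, y = north, z = up).
    public static float[] toWorldFrame(float[] accel, float[] magnetic) {
        float[] R = new float[9];
        if (!SensorManager.getRotationMatrix(R, null, accel, magnetic)) {
            return null; // device in free fall or readings unusable
        }
        float[] world = new float[3];
        for (int row = 0; row < 3; row++) {
            // R transforms vectors from the device frame to the world frame.
            world[row] = R[3 * row] * magnetic[0]
                       + R[3 * row + 1] * magnetic[1]
                       + R[3 * row + 2] * magnetic[2];
        }
        return world;
    }
}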
I need to implement a shake recognizer, and I am using the accelerometer on the device for that. However, when I check the values I get from the sensor, it appears that they vary wildly from device to device. For instance, I get a value range of 0-8 as force (after some calculations) on one device, and 0-4 on another.
So it looks like they have very different ranges.
Is there anything I can do to make these ranges equal? Or are there some variables that I can use to somehow calculate what a fairly hard shake would be?
According to the specification, the accelerometer should measure the acceleration force in m/s², so it should already be calibrated. One thing you could check, however, is the Sensor class's getMaximumRange() and getResolution().
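For example, a small helper (the class and tag names are my own) that logs what the device reports:

import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.util.Log;

public class AccelerometerInfo {
    // Logs the accelerometer's reported range and resolution for this device.
    public static void logSpecs(SensorManager sm) {
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        if (accel != null) {
            Log.d("AccelerometerInfo", "max range (m/s^2): " + accel.getMaximumRange());
            Log.d("AccelerometerInfo", "resolution (m/s^2): " + accel.getResolution());
        }
    }
}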
The physical placement of the chip on the PCB, the way the PCB is secured within the device, and the construction of the device could all lead to different damping effects in response to your shaking input force.
You don't say how you're processing the sensor data; there may well be effects related to sampling and filtering performed at the driver level.
You clearly need to be flexible in your code with the range of values you expect and test on a good range of devices.
The sensor should be calibrated.
Apparently it isn't. If the gain in the different directions (that is, x, y, z) is not significantly different, then it is enough to look for sudden changes in the squared length of the accelerometer vector, x^2 + y^2 + z^2 (a sketch is given at the end of this answer).
If the gains are also significantly different then you have no choice but to write an app for accelerometer calibration...
By the way, you are not the first one to report gross inaccuracies, see for example Android: the range of z-value in the accelerometer sensor are different on different devices.
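A minimal sketch of the squared-length approach described above; the class name and the threshold are made-up values you would have to tune per device and use case:

import android.hardware.SensorManager;

public class ShakeDetector {
    // Squared-magnitude jump (in (m/s^2)^2) that counts as a shake; tune per device/use case.
    private static final float THRESHOLD = 50f;
    private float lastMagnitudeSquared = SensorManager.GRAVITY_EARTH * SensorManager.GRAVITY_EARTH;

    // Call with each accelerometer reading; returns true on a sudden change of |a|^2.
    public boolean isShake(float[] values) {
        float magSq = values[0] * values[0] + values[1] * values[1] + values[2] * values[2];
        boolean shake = Math.abs(magSq - lastMagnitudeSquared) > THRESHOLD;
        lastMagnitudeSquared = magSq;
        return shake;
    }
}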
The Android SDK actually offers a nice interface to access the sensors.
But, for example, the linear acceleration sensor can be computed, as the documentation describes, from gravity and acceleration - so there is no real physical counterpart for this sensor; it is rather a, let's call it, "virtual sensor".
For the proximity sensor things are rather clear; I can't imagine it being influenced by other values.
But the GPS sensor could be influenced by the accelerometer when the GPS signal is rather weak; I think the values are then somehow estimated with the support of other sensors.
So basically my question is: which sensors get direct input from physical hardware, and which are somehow altered or entirely calculated by the Android SDK?
And how do I get raw input from the sensors?
I appended a list of the sensors available through the Sensor class. GPS, W-LAN, camera, etc. are missing:
//API-Level: 3
TYPE_ACCELEROMETER
TYPE_GYROSCOPE
TYPE_LIGHT
TYPE_MAGNETIC_FIELD
TYPE_PRESSURE
TYPE_PROXIMITY
TYPE_TEMPERATURE
//API-Level: 9 (2.3)
TYPE_GRAVITY
TYPE_LINEAR_ACCELERATION // can be calculated via acc. and grav. (link above)
TYPE_ROTATION_VECTOR
I am pretty sure the GPS is, at the moment, standalone and gives raw data output.
The orientation sensor is one that I know is not raw data from a single sensor but is actually the fusion of two sensors, and in the future possibly more (gyro). As of now, the orientation is a combination of the magnetic field sensor (compass) and the accelerometer. Any modern-day compass will use both the magnetometer and the accelerometer to calculate its final direction and to compensate for drift, noise and other interference. Notice that when calculating the orientation with getRotationMatrix() and getOrientation(), you are required to listen for both the magnetic field and accelerometer sensors.
I would say the gravity, linear acceleration and rotation vector sensors are not actual sensors, just parts of the data from other sensors separated out, mostly from the accelerometer and compass.
Lastly, the pressure and temperature readings each come from a single physical sensor.