Interpretation of Rotation Vector Measurements - android

I use Android devices (Xiaomi Mi9-SE (Android 10) & Huawei P10 (Android 9)) to collect movement data from bicycle rides. Among other things, I am measuring the Rotation Vector, which describes the rotation of the smartphone relative to the earth's reference frame (android documentation).
The sensor returns four values, which represent the rotation as a quaternion (w,x,y,z).
In my experiment I recorded a bike ride with two smartphones that were glued to each other in the same alignment. When I plot the numeric values returned by the rotation sensors, they seem similar in some phases and dissimilar in others (cf. 1, 2).
However, in a dissimilar phase (e.g. ~22:12:30h), it seems like the rotations are still somehow the same, but the axes are swapped. I plotted two corresponding rotations (cf. 3, 4). If the x and y axes were swapped and mirrored on the new y axis, the rotations would become similar. I wonder whether this behavior is due to inaccurate/bad sensors or whether I missed something when working with quaternions.
It would be really great if someone could tell me if/where I am missing a point here.

Related

Android accelerometer mapping movement to custom coordinate system

If I have a custom coordinate system X - left/right, Y - forward/backward, Z - up/down that is represented on my PC screen inside my Unreal project, how would I map the accelerometer values in such a way that when I move my phone toward the PC screen (regardless of the phone's orientation) my Y value goes up, and the same for the other axes?
I got something similar working with rotation by taking a "reference" rotation quaternion, inverting it and multiplying it by the current rotation quaternion, but I'm stuck on how to transform movement.
An example of my problem: if I move my phone up with the screen pointing at the sky, my Z axis increases, which is what I want. But when I point the phone's screen at my PC screen and move it forward, the Z axis again goes up, whereas in this case I would want my Y value to increase.
There is a similar question, Acceleration from device's coordinate system into absolute coordinate system, but that doesn't really solve my problem since I don't want Y to depend on the location of north, and so on.
Clarification of question intent
It sounds like what you want is the acceleration of your device with respect to your laptop. As you correctly mentioned, the similar question Acceleration from device's coordinate system into absolute coordinate system maps the local accelerometer data of a device with respect to a global frame of reference (FoR) (the Cartesian "flat" Earth FoR to be specific - as opposed to the ultra-realistic spherical Earth FoR).
What you know
From your device, you know the local Phone FoR, and from the link above, you can also find the behavior of your device with respect to a flat Earth FoR with a rotation matrix, which I'll call R_EP for Rotation in Earth FoR from Phone FoR. In order to represent the acceleration of your device with respect to your laptop, you will need to know how your laptop is oriented and positioned with respect to either your phone's FoR (A), or the flat Earth FoR (B), or some other FoR that is known to both your laptop and your phone, but I'll ignore this because it's irrelevant and the method is identical to B.
What you'll need
In the first case, A, this will allow you to construct a rotation matrix which I'll call R_LP for Rotation in Laptop FoR from Phone FoR - and that would be super convenient because that's your answer. But alas, life isn't fun without a little bit of a challenge.
In the second case, B, this will allow you to construct a rotation matrix which I'll call R_LE for Rotation in Laptop FoR from Earth FoR. Because the Hamilton product is associative (but NOT commutative: Are quaternions generally multiplied in an order opposite to matrices?), you can find the acceleration of your phone with respect to your laptop by daisy-chaining the rotations, like so:
a_P]L = R_LE * R_EP * a_P]P
Where the ] means "in the frame of", and a_P is acceleration of the Phone. So a_P]L is the acceleration of the Phone in the Laptop FoR, and a_P]P is the acceleration of the Phone in the Phone's FoR.
NOTE When "daisy-chaining" rotation matrices, it's important that they follow a specific order. Always make sure that the rotation matrices are multiplied in the correct order, see Sections 2.6 and 3.1.4 in [1] for more information.
Hint
To define your laptop's FoR (orientation and position) with respect to the global "flat" Earth FoR, you can place your phone on your laptop and set the current orientation and position as your laptop's FoR. This will let you construct R_LE.
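A minimal sketch of both steps (the daisy-chain above plus this hint), assuming the 3x3 row-major matrix from SensorManager.getRotationMatrix(); the field and method names (rEP, rLE, onSensorUpdate, captureLaptopFrame and the little helpers) are illustrative, not a real API:
// Sketch only: R_EP is the 3x3 matrix from SensorManager.getRotationMatrix(),
// which maps Phone-frame vectors into the flat-Earth frame. Names follow the answer's notation.
float[] rEP = new float[9];   // Earth-from-Phone, updated every sensor event
float[] rLE = null;           // Laptop-from-Earth, captured once (see Hint)

void onSensorUpdate(float[] gravity, float[] geomagnetic) {
    SensorManager.getRotationMatrix(rEP, null, gravity, geomagnetic);
}

// Call this once while the phone lies on the laptop in a known alignment.
// At that instant the Laptop FoR coincides with the Phone FoR, so
// R_LE = inverse(R_EP) = transpose(R_EP) (a pure rotation).
void captureLaptopFrame() {
    rLE = transpose(rEP);
}

// a_P]L = R_LE * R_EP * a_P]P
float[] accelerationInLaptopFrame(float[] aPhone) {
    return multiply(rLE, multiply(rEP, aPhone));
}

// Tiny 3x3 helpers for row-major float[9] matrices.
static float[] transpose(float[] m) {
    return new float[] { m[0], m[3], m[6], m[1], m[4], m[7], m[2], m[5], m[8] };
}

static float[] multiply(float[] m, float[] v) {
    return new float[] {
        m[0]*v[0] + m[1]*v[1] + m[2]*v[2],
        m[3]*v[0] + m[4]*v[1] + m[5]*v[2],
        m[6]*v[0] + m[7]*v[1] + m[8]*v[2] };
}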
Misconceptions
A rotation quaternion, q, is NEITHER the orientation NOR attitude of one frame of reference relative to another. Instead, it represents a "midpoint" vector normal to the rotation plane about which vectors from one frame of reference are rotated to the other. This is why defining quaternions to rotate from a GLOBAL frame to a local frame (or vice-versa) is incredibly important. The ENU to NED rotation is a perfect example, where the rotation quaternion is [0; sqrt(2)/2; sqrt(2)/2; 0], a "midpoint" between the two abscissa (X) axes (in both the global and local frames of reference). If you do the "right hand rule" with your three fingers pointing along the ENU orientation, and rapidly switch back and forth from the NED orientation, you'll see that the rotation from both FoR's is simply a rotation about [1; 1; 0] in the Global FoR.
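As a quick sanity check of that ENU/NED example, here is a hedged sketch in plain Java (the class and helper names are made up) that applies the quaternion [0; sqrt(2)/2; sqrt(2)/2; 0] through the sandwich product q * v * conj(q): East ends up on NED's y axis and Up ends up on NED's -z axis, as expected.
class EnuNedDemo {
    // Rotate a vector by a unit quaternion q = (w, x, y, z) using v' = q * v * conj(q).
    static float[] rotate(float[] q, float[] v) {
        float[] p  = { 0f, v[0], v[1], v[2] };        // promote v to a pure quaternion
        float[] qc = { q[0], -q[1], -q[2], -q[3] };   // conjugate = inverse for a unit quaternion
        float[] r  = hamilton(hamilton(q, p), qc);
        return new float[] { r[1], r[2], r[3] };
    }

    // Hamilton product of two quaternions (w, x, y, z).
    static float[] hamilton(float[] a, float[] b) {
        return new float[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0] };
    }

    public static void main(String[] args) {
        float s = (float) (Math.sqrt(2.0) / 2.0);
        float[] qEnuToNed = { 0f, s, s, 0f };          // the "midpoint" quaternion from the text
        // ENU basis vectors: East = (1,0,0), Up = (0,0,1)
        System.out.println(java.util.Arrays.toString(rotate(qEnuToNed, new float[] {1, 0, 0}))); // -> [0, 1, 0]
        System.out.println(java.util.Arrays.toString(rotate(qEnuToNed, new float[] {0, 0, 1}))); // -> [0, 0, -1]
    }
}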
References
I cannot recommend the following open-source reference highly enough:
[1] "Quaternion kinematics for the error-state Kalman filter" by Joan Solà. https://hal.archives-ouvertes.fr/hal-01122406v5
For a "playground" to experiment with, and gain a "hands-on" understanding of quaternions:
[2] "Visualizing quaternions, an explorable video series." Lessons by Grant Sanderson, technology by Ben Eater. https://eater.net/quaternions

Sensor values of TYPE_ROTATION_VECTOR aren't the same on different devices

I'm using sensor data of type ROTATION_VECTOR in my app.
Using my Nexus 5 I can get the azimuth from orientation[0] and can get the phone's heading in the range shown in the picture below (it's very accurate).
Since trying my app on different devices, I have found that the sensor values differ from those on my Nexus 5 test device. On a Samsung Galaxy Nexus and on a Samsung Galaxy S3 Mini the azimuth is influenced by tilting the device, as shown in the picture.
TYPE_ROTATION_VECTOR uses sensor fusion, which is why I checked the individual sensor values on different devices with sensor test apps. On the Nexus 5 the orientation value for the z-axis stays roughly the same when tilting the device; on the Samsung Galaxy Nexus the z value changes while tilting (by about 90° from standing upright to lying flat). I fear that the sensor fusion is using these values and that's why my azimuth differs across devices.
Does anyone experience a similar scenario and, even more importantly, does anybody have a workaround or a different way to obtain the azimuth?
Just to make sure: I am looking for the direction in which the back camera is pointing...
Here is my code:
final float[] mRotationMatrix = new float[9];
final float[] mRotationMatrixFromVector = new float[9];
final float[] orientation = new float[3];
...
SensorManager.getRotationMatrixFromVector(mRotationMatrixFromVector, event.values);
// enables usable range like in picture
SensorManager.remapCoordinateSystem(mRotationMatrixFromVector,
        SensorManager.AXIS_X, SensorManager.AXIS_Z, mRotationMatrix);
SensorManager.getOrientation(mRotationMatrix, orientation);
You generally have three angles defining orientation, and to use them to project some distance outward, you have to use them all in the proper order, as you will be rotating your axis system. I answered a similar question here which may help you - there is some javascript code there which I left unoptimized to show the steps used in solving that problem.
Usually, azimuth is in the reference system of the earth, and you can just use it by itself. But that may not be so in all cases. If you suspect that your azimuth is being affected by the altitude (tilt) angle, it's probable (assuming they don't just have the numbers labelled wrong!) that the angles they are supplying you with are meant to be used in a different order, and you'll have to figure out which order they are being applied in. Once you have the correct order, apply the three angles in that order (see my other answer); then the projection of the rotated X axis onto the XY plane is the actual azimuth.
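Since the asker wants the direction the back camera points, here is a hedged sketch of the same projection idea applied to the camera's viewing axis (device -Z) rather than the X axis; it assumes SensorManager's East-North-Up world frame and the row-major 3x3 matrix layout:
// Sketch: tilt-tolerant heading of the back camera from TYPE_ROTATION_VECTOR.
// R maps device coordinates to world coordinates (world = East/North/Up).
float[] R = new float[9];
SensorManager.getRotationMatrixFromVector(R, event.values);

// The back camera looks along the device's -Z axis; in world coordinates that is
// minus the third column of R.
float east  = -R[2];   // world X component
float north = -R[5];   // world Y component

// Project onto the horizontal (XY) plane and measure clockwise from north.
// Note: this degenerates when the camera points straight up or down.
float azimuthRad = (float) Math.atan2(east, north);
float azimuthDeg = (float) Math.toDegrees(azimuthRad);   // -180..180, 0 = north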

Android, remove gravity from the accelerometer using the compass

First of all, I assume the device is at rest for the first measurement, so the acceleration I have is just gravity; second, I won't use a low-pass filter.
Android gives me the linear acceleration and the compass values. My guess is that I can use the compass to rotate the acceleration into the earth's reference frame and so remove gravity.
My guess is that if I calculate the difference between two compass measurements I have a quantity that indicates the rotation of the cellphone, so if I add it to the initial gravity vector I can rotate that too.
Then at the i-th measurement, my guess is this:
# a[i] contains the 3-acceleration at time i
# b[i] contains the 3-compass values at time i
b[i] = numpy.sin(b[i] / (180. / math.pi))  # I normalize the compass values from 0 to 1
# since b is a unit vector I need to "de-normalize" it
b[i] = b[i] * sqrt(g**2.)
deltab = b[i] - b[i-1]
# at the very beginning of the code I had something like g = a[0]
g = g - deltab  # well, I've also tried the plus sign
It doesn't work, but I can't see the problem. Any ideas?
EDIT: I'm also trying this method, which again doesn't work, and I don't know why:
I've found here how to compute the rotation matrix given the axis vector and the rotation angle.
Here is how to build this matrix: http://en.wikipedia.org/wiki/Rotation_matrix#Rotation_matrix_from_axis_and_angle
The angle I've chosen is the one given by the scalar product of the old and the new compass direction.
The axis about which they rotate is the cross product of the old and the new vector (I guess... and this may be wrong).
So: if I get the rotation of the compass, then build the rotation matrix and apply it to my initial gravity vector, is that the correct way to rotate the gravity vector?
# a[i] contains the 3-acceleration at time i
# b[i] contains the 3-compass values at time i
omega = cross(b[i], b[i-1])
theta = dot(b[i], b[i-1]) / sqrt(dot(b[i], b[i]) * dot(b[i-1], b[i-1]))
M = rotationMatrix(omega, theta)
aWithoutGravity[i] = dot(M, a[i])
As you describe it, this problem cannot be solved. If you had two coordinate systems, you could describe one in terms of the other, but you only have two vectors (described within the coordinate system of the phone), and single vectors introduce ambiguities, since you really only know the angle between them.
For example, consider taking an initial measurement of the gravity and magnetic field vector and now rotate and spin your phone. You can again measure the magnetic field vector, but imagine you don't know the gravity vector, where will it be? All you know is its angle relative to the magnetic field vector (and you might also assume you know its magnitude), basically forming a cone around the magnetic field vector (or ring if you assume the magnitude). But since you don't know exact information about it, you can't use it to completely determine the linear acceleration. That is, given a measured acceleration (physically, gravity + linear acceleration), every vector in the cone of possible gravity vectors would imply a different linear acceleration.
It does get you part of the way there, in that you now have a complicated geometry problem where there will only be a certain range of linear acceleration vectors that will be consistent with the "gravity cone", but the problem becomes much more complicated than a simple subtraction, and there's no a priori reason to believe that this smaller subset of possible linear acceleration vectors tells you anything useful.
For example, consider that the original reading of the compass is (1,0,0), and the reading for acceleration is (6, 6, 0). If the phone is then just rotated about (1,0,0), other readings of acceleration are possible, such as (6, -6, 0), (6, 0, 6), (6, 0, -6), and many in between. So, because of this ambiguity, one can't tell based purely on the compass reading whether (6, 6, 0) changed into, say, (6, -6, 0) because of the acceleration or because the phone was rotated.
The compass values (i.e. the values from the android sensor Sensor.TYPE_MAGNETIC_FIELD) have components in both the gravity direction and the direction of magnetic north. So you can use them, in conjunction with the gravity vector, to work out the direction of east, because that's given by the vector cross product of the two vectors. In fact, depending on where you are in the world, the Sensor.TYPE_MAGNETIC_FIELD component in the gravity direction can be quite large.
Also note that Android has two sensors Sensor.TYPE_GRAVITY and Sensor.TYPE_LINEAR_ACCELERATION which are derived from Sensor.TYPE_ACCELEROMETER. And it's always true that the readings satisfy
Sensor.TYPE_ACCELEROMETER = Sensor.TYPE_GRAVITY + Sensor.TYPE_LINEAR_ACCELERATION
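For instance, here is a rough sketch of both points inside a SensorEventListener (the array names and wiring are illustrative, not from the original post): linear acceleration as accelerometer minus the TYPE_GRAVITY reading, and east as the cross product of the magnetic and gravity vectors.
// Sketch: cache the latest TYPE_GRAVITY and TYPE_MAGNETIC_FIELD readings.
float[] gravity = new float[3];
float[] magnetic = new float[3];
float[] linearAcceleration = new float[3];
float[] east = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    switch (event.sensor.getType()) {
        case Sensor.TYPE_GRAVITY:
            System.arraycopy(event.values, 0, gravity, 0, 3);
            break;
        case Sensor.TYPE_MAGNETIC_FIELD:
            System.arraycopy(event.values, 0, magnetic, 0, 3);
            break;
        case Sensor.TYPE_ACCELEROMETER:
            // accelerometer = gravity + linear acceleration, so subtract gravity
            for (int i = 0; i < 3; i++) {
                linearAcceleration[i] = event.values[i] - gravity[i];
            }
            break;
    }
    // East (in device coordinates) = magnetic x gravity, normalized.
    // Only meaningful once both readings have arrived at least once.
    east[0] = magnetic[1] * gravity[2] - magnetic[2] * gravity[1];
    east[1] = magnetic[2] * gravity[0] - magnetic[0] * gravity[2];
    east[2] = magnetic[0] * gravity[1] - magnetic[1] * gravity[0];
    float n = (float) Math.sqrt(east[0]*east[0] + east[1]*east[1] + east[2]*east[2]);
    if (n > 1e-6f) {
        east[0] /= n; east[1] /= n; east[2] /= n;
    }
}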

How to get the rotation between accelerometer's axis and motion vector?

I'm working on a Pedestrian Navigation System on Android. I am currently trying to get the rotation matrix between the 3-axis accelerometer referential and the motion that is applied to the device.
Let's say you are walking straight forward with the device in your hand, but the Y axis of the accelerometer (and thus the device's Y axis) is not oriented in the same direction as the one you are heading in (basically you're holding the device in an awkward way).
Then, if I want to apply distances (based on step detection), it would be wrong to apply them in the accelerometer referential (whose orientation I know): they have to be rotated according to the motion heading.
This is why I would like to know if you could enlighten me about a method to process the accelerometer readings and turn them into rotation angles (or a matrix). Such a method has to avoid integrating the acceleration even once, as the error is abysmal on cheap accelerometers (otherwise it would be pretty easy to do, I think).
EDIT: The magnetometer and gyroscope help you find the device's orientation even when stationary, but they don't tell you in which direction the device is moving. I have the first one, and I'm searching for the second. Basically:
Human Referential (motion direction) -> Device Referential (or accelerometer referential) -> Earth Referential
and I'm searching for a way to find the rotation matrix to compute distances from HR to DR, which I would then apply to the rotation matrix I found to go from DR to ER.
I found an article which might be useful; here is the link.
The idea is based upon acquiring the accelerometer data on the 3 axes (which gives an n-by-3 matrix) and finding the PCA (principal component analysis) of that matrix. The PCA yields the 3 vectors with the highest energy of the matrix.
Ideally, the direction of the main vector (the one with the highest energy) is upwards, and the second is the heading direction.
You can read it all in the article.
I tried implementing the algorithm in Matlab; the result is OK (not great).
Hope you can do better (I would like to hear about good results).
Hope this helps
Ariel
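A rough sketch of the PCA step described above, hedged: plain Java, a 3x3 covariance matrix plus power iteration with deflation, assuming the two largest eigenvalues are distinct. The dominant component should roughly align with the vertical and the second with the walking direction, per the article.
// Sketch: estimate the two dominant principal components of an n x 3 block
// of accelerometer samples; samples[i] = {ax, ay, az}.
static float[][] principalAxes(float[][] samples) {
    int n = samples.length;
    float[] mean = new float[3];
    for (float[] s : samples)
        for (int j = 0; j < 3; j++) mean[j] += s[j] / n;

    // 3x3 covariance matrix of the mean-centered data
    float[][] c = new float[3][3];
    for (float[] s : samples)
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                c[i][j] += (s[i] - mean[i]) * (s[j] - mean[j]) / n;

    float[] first = dominantEigenvector(c);    // ~ vertical direction
    deflate(c, first);
    float[] second = dominantEigenvector(c);   // ~ heading direction
    return new float[][] { first, second };
}

static float[] dominantEigenvector(float[][] c) {
    float[] v = { 1f, 1f, 1f };                 // arbitrary non-zero start vector
    for (int iter = 0; iter < 100; iter++) {    // power iteration
        float[] w = new float[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) w[i] += c[i][j] * v[j];
        float norm = (float) Math.sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
        if (norm < 1e-9f) break;
        for (int i = 0; i < 3; i++) v[i] = w[i] / norm;
    }
    return v;
}

// Remove the found component so the next power iteration converges to the runner-up.
static void deflate(float[][] c, float[] v) {
    float lambda = 0f;                          // Rayleigh quotient = eigenvalue for unit v
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) lambda += v[i] * c[i][j] * v[j];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) c[i][j] -= lambda * v[i] * v[j];
}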

Compute relative orientation given azimuth, pitch, and roll in android?

When I listen to the orientation event in an Android app, I get a SensorEvent which contains 3 floats - azimuth, pitch, and roll relative to the real-world axes.
Now say I am building an app like Labyrinth, but I don't want to force the user to be over the phone and hold it such that the xy plane is parallel to the ground. Instead I want to allow the user to hold the phone as they wish - lying down or, perhaps, sitting down and holding the phone at an angle. In other words, I need to calibrate the phone in accordance with the user's preference.
How can I do that?
Also note that I believe that my answer has to do with getRotationMatrix and getOrientation, but I am not sure how!
Please help! I've been stuck at this for hours.
For a Labyrinth-style app, you probably care more about the acceleration (gravity) vector than the axes' orientation. This vector, in the phone's coordinate system, is given by the combination of the three accelerometer measurements, rather than the rotation angles. Specifically, only the x and y readings should affect the ball's motion.
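For example, a heavily hedged sketch of that point (the Ball class, its method, and the sign conventions are made up and depend on your screen orientation):
// Sketch: drive the ball from the raw accelerometer x/y readings only (z is ignored).
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
    float ax = event.values[0];   // device x axis (to the right in portrait)
    float ay = event.values[1];   // device y axis (up in portrait)
    // Tilting the device shifts gravity into x/y; feed that to the ball.
    // Signs depend on how screen coordinates map to device axes, so tune them.
    ball.applyAcceleration(-ax, ay);
}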
If you do actually need the orientation, then the 3 angular readings represent the 3 Euler angles. However, I suspect you probably don't really need the angles themselves, but rather the rotation matrix R, which is returned by the getRotationMatrix() API. Once you have this matrix, it is basically the calibration that you are looking for. When you want to transform a vector in world coordinates into your device coordinates, you should multiply it by the inverse of this matrix (where, in this special case, inv(R) = transpose(R)).
So, following the example I found in the documentation, if you want to transform the world gravity vector g ([0 0 g]) to the device coordinates, multiply it by inv(R):
g = inv(R) * g
(note that this should give you the same result as reading the accelerometers)
Possible APIs to use here: invertM() and multiplyMV() methods of the matrix class.
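A minimal sketch of that check, hedged: it uses the 3x3 row-major matrix from getRotationMatrix() and a hand-written transpose instead of the 4x4 invertM()/multiplyMV() variants, since for a pure rotation inv(R) is just the transpose. The gravityReading/magneticReading arrays stand for recent accelerometer and magnetometer values.
// Sketch: transform the world gravity vector [0, 0, g] into device coordinates.
float[] R = new float[9];
if (SensorManager.getRotationMatrix(R, null, gravityReading, magneticReading)) {
    float[] worldG = { 0f, 0f, SensorManager.GRAVITY_EARTH };
    float[] deviceG = new float[3];
    // inv(R) == transpose(R) for a rotation matrix, so multiply by R transposed.
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            deviceG[i] += R[j * 3 + i] * worldG[j];
        }
    }
    // At rest, deviceG should roughly match the raw accelerometer readings.
}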
I don't know of any android-specific APIs, but all you want to do is decrease the azimuth by a certain amount, right? So you move the "origin" from (0,0,0) to whatever they want. In pseudocode:
myGetRotationMatrix:
return getRotationMatrix() - origin
