For an Android application, I need to get magnetic field measurements along the axes of the global (world) coordinate system. Here is how I am planning (guessing) to implement this. Please correct me if necessary. Also, please note that the question is about the algorithmic part of the task, not about the Android sensor APIs; I have experience with the latter.
The first step is to obtain TYPE_MAGNETIC_FIELD sensor data (M) and TYPE_ACCELEROMETER sensor data (G). The second is supposed to be used according to Android's documentation, but I'm not sure whether it should be TYPE_GRAVITY instead (again as G), because the accelerometer does not seem to provide pure gravity.
The next step is to get the rotation matrices via getRotationMatrix(R, I, G, M), where R and I are the rotation and inclination matrices respectively.
And now comes the most questionable part: in order to convert the M vector into the world's coordinate system, I suppose I should multiply [R * I] * M.
I'm not sure this is the correct way to transform the magnetic field reading into another basis. Also, I don't know whether remapCoordinateSystem should be used in addition to, or as a replacement for, something above.
If there is some source code which already does this, I'd appreciate a link, but I don't want to use big general-purpose libraries (for example, for augmented reality support) for this specific task, because I'd like to keep it as simple as possible.
P.S.
I decided to add some information to the original post for clarity.
Let us suppose a device rests on a table and continuously reads data from its magnetic sensor. Each measurement contains 3 values representing the magnetic field along the X, Y, Z axes of the device's local coordinate system. I take it that I can neglect environmental field fluctuations (smoothed by a low-pass filter), so these 3 values should remain almost the same for as long as the device stays in place. If we rotate the device around any axis, the values change, because we change the local coordinate system; but the field itself has not actually changed. So I want to translate the local X, Y, Z field measurements into X', Y', Z' values that keep their respective values regardless of device rotation, provided that the device is only rotated, not moved from its location.
I've implemented the algorithm described above and got regular and noticeable changes in the X', Y', Z' values obtained through the suggested transformations, so there is something wrong with it.
P.P.S.
By chance I've found an exact duplicate of my question here on SO - How can I get the magnetic field vector, independent of the device rotation? - but unfortunately the answer contains my own suggestions, and the OP of that question confirms that they do not work.
The coordinates of M with respect to the world coordinate system are just the product R * M.
The rotation matrix R is mathematically the change-of-basis matrix from the device coordinate system to the world coordinate system.
Let X, Y, Z be the device coordinate basis and W_1, W_2, W_3 be the world coordinate basis; then
M = m_1 X + m_2 Y + m_3 Z
and also
M = c_1 W_1 + c_2 W_2 + c_3 W_3
where R * (m_1, m_2, m_3)^T = (c_1, c_2, c_3)^T.
A low-pass filter is only used to filter out accelerations in the X, Y directions. remapCoordinateSystem is used to change the order of the basis, i.e., changing from W_1, W_2, W_3 to W_1, W_3, W_2.
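To make that concrete, here is a minimal Java sketch of the R * M multiplication, assuming gravity and geomagnetic hold the latest float[3] readings from TYPE_ACCELEROMETER (or TYPE_GRAVITY) and TYPE_MAGNETIC_FIELD; the method name is mine:

    import android.hardware.SensorManager;

    static float[] magneticFieldInWorldFrame(float[] gravity, float[] geomagnetic) {
        float[] R = new float[9];
        float[] I = new float[9];
        if (!SensorManager.getRotationMatrix(R, I, gravity, geomagnetic)) {
            return null; // sensor data unusable (e.g. device in free fall)
        }
        float[] worldM = new float[3];
        // R is a row-major 3x3 matrix mapping device coordinates to world
        // coordinates, so the world-frame field is simply R * geomagnetic.
        for (int row = 0; row < 3; row++) {
            worldM[row] = R[3 * row]     * geomagnetic[0]
                        + R[3 * row + 1] * geomagnetic[1]
                        + R[3 * row + 2] * geomagnetic[2];
        }
        // In Android's world frame: worldM[0] = east, worldM[1] = magnetic
        // north, worldM[2] = up.
        return worldM;
    }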
The magnetometer sensor on your device returns a 3-vector in device coordinates. You can use getRotationMatrix() to get a matrix that could be used to convert that device-coordinates vector to world coordinates. You could also learn about quaternions and use TYPE_ROTATION_VECTOR directly. However, there's no quaternion library in Android (that I know of), and that's a discussion beyond the scope of this question.
However, none of this will do you any good because the device orientation information is based in part on the value from the magnetometers. In other words, the device will always tell you that the magnetic vector is facing exactly North.
Now, what you can do is get the magnetic dip. This is one of the outputs of getRotationMatrix(), although you'll have to convert a matrix to an angle for it to be useful. That, too, is beyond the scope of this question.
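For what it's worth, SensorManager already provides that matrix-to-angle conversion via getInclination(). A minimal sketch, assuming gravity and geomagnetic are the latest accelerometer and magnetometer readings (the helper name is mine):

    import android.hardware.SensorManager;

    static float magneticDipDegrees(float[] gravity, float[] geomagnetic) {
        float[] R = new float[9];
        float[] I = new float[9];
        if (!SensorManager.getRotationMatrix(R, I, gravity, geomagnetic)) {
            return Float.NaN; // rotation matrix unavailable
        }
        // getInclination() extracts the dip angle (in radians) from the
        // inclination matrix I.
        return (float) Math.toDegrees(SensorManager.getInclination(I));
    }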
Finally, your last option is to build a table which is level and which has an arrow on it pointing true north. (You'll have to align it by the stars at night or something.) Then, place your device flat on the table with the top of the device facing north. In this case, device coordinates will be the same as world coordinates and the magnetometer sensor will produce the values you want.
Your comments indicate that you're interested in local variations. There's simply no way to get true north with your Android device alone. Theoretically, you could build a table as I described, and then walk around holding the device in strictly the same orientation as before, keeping an eye on the table for reference. I doubt you could pull it off, though.
You could try using gyros in your app to help you keep the device oriented exactly the same way at all times, but the gyros in any Android device you use are likely to drift too much for this to work.
Or perhaps we still don't understand what you're trying to do. Bottom line, though, is that you simply cannot get a global coordinate system with an Android device alone -- whatever you get will always be aligned with the local magnetic field at that exact spot.
Related
If I have a custom coordinate system (X - left/right, Y - forward/backward, Z - up/down) that is represented on my PC screen inside my Unreal project, how would I map the accelerometer values so that when I move my phone toward the PC screen (regardless of the phone's orientation), my Y value goes up, and likewise for the other axes?
I got something similar working for rotation by taking a "reference" rotation quaternion, inverting it, and multiplying it by the current rotation quaternion, but I'm stuck on how to transform movement.
An example of my problem: if I move my phone up with the screen pointing at the sky, my Z axis increases, which is what I want; but when I point my phone's screen at my PC screen and move it forward, the Z axis again goes up, whereas in this case I would want my Y value to increase.
There is a similar question, Acceleration from device's coordinate system into absolute coordinate system, but it doesn't really solve my problem, since I don't want to depend on the location of north for Y, and so on.
Clarification of question intent
It sounds like what you want is the acceleration of your device with respect to your laptop. As you correctly mentioned, the similar question Acceleration from device's coordinate system into absolute coordinate system maps the local accelerometer data of a device with respect to a global frame of reference (FoR) (the Cartesian "flat" Earth FoR to be specific - as opposed to the ultra-realistic spherical Earth FoR).
What you know
From your device, you know the local Phone FoR, and from the link above you can also find the behavior of your device with respect to a flat Earth FoR via a rotation matrix, which I'll call R_EP, for Rotation into the Earth FoR from the Phone FoR. In order to represent the acceleration of your device with respect to your laptop, you will need to know how your laptop is oriented and positioned with respect to either your phone's FoR (A) or the flat Earth FoR (B). (A third option is some other FoR that is known to both your laptop and your phone, but I'll ignore it because the method is identical to B.)
What you'll need
In the first case, A, this will allow you to construct a rotation matrix which I'll call R_LP for Rotation in Laptop FoR from Phone FoR - and that would be super convenient because that's your answer. But alas, life isn't fun without a little bit of a challenge.
In the second case, B, this will allow you to construct a rotation matrix which I'll call R_LE for Rotation in Laptop FoR from Earth FoR. Because the Hamilton product is associative (but NOT commutative: Are quaternions generally multiplied in an order opposite to matrices?), you can find the acceleration of your phone with respect to your laptop by daisy-chaining the rotations, like so:
a_P]L = R_LE * R_EP * a_P]P
Where the ] means "in the frame of", and a_P is acceleration of the Phone. So a_P]L is the acceleration of the Phone in the Laptop FoR, and a_P]P is the acceleration of the Phone in the Phone's FoR.
NOTE When "daisy-chaining" rotation matrices, it's important that they follow a specific order. Always make sure that the rotation matrices are multiplied in the correct order, see Sections 2.6 and 3.1.4 in [1] for more information.
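A plain-Java sketch of that daisy chain, using row-major 3x3 matrices stored as float[9] (the helper names are mine, and rLE, rEP and aPP are assumed to be already known):

    // c = a * b for row-major 3x3 matrices.
    static float[] multiply3x3(float[] a, float[] b) {
        float[] c = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return c;
    }

    // r = m * v for a row-major 3x3 matrix and a 3-vector.
    static float[] apply3x3(float[] m, float[] v) {
        float[] r = new float[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[3 * i] * v[0] + m[3 * i + 1] * v[1] + m[3 * i + 2] * v[2];
        return r;
    }

    // a_P]L = R_LE * R_EP * a_P]P: rotate from the Phone frame into the
    // Earth frame first, then from the Earth frame into the Laptop frame.
    float[] aPL = apply3x3(multiply3x3(rLE, rEP), aPP);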
Hint
To define your laptop's FoR (orientation and position) with respect to the global "flat" Earth FoR, you can place your phone on your laptop and set the current orientation and position as your laptop's FoR. This will let you construct R_LE.
Misconceptions
A rotation quaternion, q, is NEITHER the orientation NOR attitude of one frame of reference relative to another. Instead, it represents a "midpoint" vector normal to the rotation plane about which vectors from one frame of reference are rotated to the other. This is why defining quaternions to rotate from a GLOBAL frame to a local frame (or vice-versa) is incredibly important. The ENU to NED rotation is a perfect example, where the rotation quaternion is [0; sqrt(2)/2; sqrt(2)/2; 0], a "midpoint" between the two abscissa (X) axes (in both the global and local frames of reference). If you do the "right hand rule" with your three fingers pointing along the ENU orientation, and rapidly switch back and forth from the NED orientation, you'll see that the rotation from both FoR's is simply a rotation about [1; 1; 0] in the Global FoR.
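If you want to convince yourself of that ENU-to-NED example numerically, here is a small plain-Java sketch (helper names are mine) that rotates vectors with q = [0, sqrt(2)/2, sqrt(2)/2, 0] in (w, x, y, z) order, via v' = q * v * conj(q):

    // Hamilton product of two quaternions in (w, x, y, z) order.
    static float[] hamilton(float[] a, float[] b) {
        return new float[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
        };
    }

    // Rotate vector v by quaternion q: v' = q * v * conj(q).
    static float[] rotate(float[] q, float[] v) {
        float[] p = {0, v[0], v[1], v[2]};          // v as a pure quaternion
        float[] qc = {q[0], -q[1], -q[2], -q[3]};   // conjugate of q
        float[] r = hamilton(hamilton(q, p), qc);
        return new float[] {r[1], r[2], r[3]};
    }

    float s = (float) (Math.sqrt(2) / 2);
    float[] qEnuToNed = {0, s, s, 0};
    // East (1,0,0) in ENU comes out as (0,1,0), the second NED axis, and
    // Up (0,0,1) comes out as (0,0,-1), i.e. minus the Down axis.
    float[] east = rotate(qEnuToNed, new float[] {1, 0, 0});
    float[] up   = rotate(qEnuToNed, new float[] {0, 0, 1});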
References
I cannot recommend the following open-source reference highly enough:
[1] "Quaternion kinematics for the error-state Kalman filter" by Joan Solà. https://hal.archives-ouvertes.fr/hal-01122406v5
For a "playground" to experiment with, and gain a "hands-on" understanding of quaternions:
[2] Visualizing quaternions, An explorable video series. Lessons by Grant Sanderson. Technology by Ben Eater https://eater.net/quaternions
I am working on an application which uses the compass. Although I implemented it successfully, there is no brief explanation of how it actually works - for example, how TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD are used to get the orientation. I know we do get it, and the zero index of the array we get is the azimuth, from which we get the degrees and come to know where north is. But how does it arrive at all this? Can anyone explain, please?
Secondly, suppose I place my mobile on the floor and it points in the direction of north; now I pick my mobile up and hold it upside down - what will happen, and why?
According to my understanding, the direction shouldn't change. I might be wrong; can anyone please guide me on this? Thanks a lot.
You need both the magnetic field and the accelerometer to calculate the compass direction. You can see why if you read the source code of getRotationMatrix; for a more verbose explanation, see Convert magnetic field X, Y, Z values from device into global reference frame.
If you stand at the same place, the magnetic field does not change; that is, the magnetic field vector returned in onSensorChanged should change very little. It is not the vector that changes but the coordinates of the device that change. It is the meaning of what is being calculated and called the "compass direction" that confuses you. In this case, the direction used as the "compass direction" is the direction of the y coordinate. Thus when you hold the phone vertically, the y coordinate points to the sky or the ground, and it does not make sense to talk about a "compass direction".
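The usual pattern behind all of this, sketched in Java (gravity and geomagnetic are assumed to be the latest float[3] readings from TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD; the method name is mine):

    import android.hardware.SensorManager;

    static float azimuthDegrees(float[] gravity, float[] geomagnetic) {
        float[] R = new float[9];
        float[] I = new float[9];
        if (!SensorManager.getRotationMatrix(R, I, gravity, geomagnetic)) {
            return Float.NaN; // rotation matrix unavailable
        }
        float[] orientation = new float[3];
        SensorManager.getOrientation(R, orientation);
        // orientation[0] is the azimuth: the angle between the device's y
        // axis and magnetic north. This is why "compass direction" becomes
        // meaningless when the phone is held vertically.
        return (float) Math.toDegrees(orientation[0]);
    }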
I'm taking a free online Android course, and this is the example code he uses to make a simple compass, you can look at that: https://github.com/aporter/coursera-android/tree/master/Examples/SensorCompass.
The way it works is that the phone senses the strongest "north" magnetic field and points the arrow north, and then when the accelerometer moves, it adjusts the arrow accordingly.
Also, if you hold the phone vertically above your head, the display will not be pointing north because the phone is not parallel to the ground.
I have written a simple Activity which is a SensorEventListener for Sensor.TYPE_ACCELEROMETER.
In my onSensorChanged(SensorEvent event) I just pick the values in X, Y, Z format and write them to a file.
Added to each X, Y, Z triple is a label; the label is specific to the activity I am performing.
So each record is X, Y, Z, label.
This is how I obtain my activity profile. I would like suggestions on what operations to perform after data collection so as to remove noise and get the best data for an activity.
The main intent of this data collection is to construct a user activity detection application using a neural network library (NeuroPh for Android) Link.
Just for fun I wrote a pedometer a few weeks ago, and it would have been able to detect the three activities that you mentioned. I'd make the following observations:
- In addition to Sensor.TYPE_ACCELEROMETER, Android also has Sensor.TYPE_GRAVITY and Sensor.TYPE_LINEAR_ACCELERATION. If you log the values of all three, you notice that the values of TYPE_ACCELEROMETER are always equal to the sum of the values of TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION. The onSensorChanged(…) method first gives you TYPE_ACCELEROMETER, followed by TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION, which are the results of its internal methodology for splitting the accelerometer readings into gravity and the acceleration that's not due to gravity. Given that you're interested in the acceleration due to activities rather than the acceleration due to gravity, you may find TYPE_LINEAR_ACCELERATION is better for what you need.
- Whatever sensors you use, the X, Y, Z that you're measuring will depend on the orientation of the device. However, for detecting the activities that you mention, the result can't depend on, e.g., whether the user is holding the device in a portrait or landscape position, or whether the device is flat or vertical, so the individual values of X, Y and Z won't be any use. Instead you'll have to look at the length of the vector, i.e. sqrt(X*X + Y*Y + Z*Z), which is independent of the device orientation.
- You only need to smooth the data if you're feeding it into something which is sensitive to noise. Instead, I'd say that the data is the data, and you'll get the best results if you use mechanisms which aren't sensitive to noise and hence don't need the data to be smoothed. By definition, smoothing is discarding data. You want to design an algorithm that takes noisy data in at one end and outputs the current activity at the other end, so don't prejudge whether it's necessary to include smoothing as part of that algorithm.
Here is a graph of sqrt(X*X + Y*Y + Z*Z) from Sensor.TYPE_ACCELEROMETER which I recorded when I was building my pedometer. The graph shows the readings measured while I walked for 100 steps. The green line is sqrt(X*X + Y*Y + Z*Z), the blue line is an exponentially weighted moving average of the green line, which gives me the average level of the green line, and the red line shows my algorithm counting steps. I was able to count the steps just by looking for the maxima and minima and where the green line crosses the blue line. I didn't use any smoothing or Fast Fourier Transforms. In my experience, for this sort of thing the simplest algorithms often work best, because although complex ones might work in some situations, it's harder to predict how they'll behave in all situations. And robustness is a vital characteristic of any algorithm :-).
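A minimal sketch of that magnitude-plus-average idea (the smoothing factor and the crossing test are illustrative choices of mine, not the exact constants used above):

    import android.hardware.SensorManager;

    static final float ALPHA = 0.1f;  // EMA smoothing factor (assumed value)
    float average = SensorManager.GRAVITY_EARTH;
    boolean wasAbove = false;
    int steps = 0;

    // Feed this the float[3] values from each accelerometer event.
    void onAccelerometerSample(float[] v) {
        // The vector length is independent of device orientation.
        float magnitude = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        // Exponentially weighted moving average (the "blue line").
        average = ALPHA * magnitude + (1 - ALPHA) * average;
        boolean isAbove = magnitude > average;
        if (isAbove && !wasAbove) steps++;  // one upward crossing per step
        wasAbove = isAbove;
    }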
This sounds like an interesting problem!
Have you plotted your data against time to get a feel for it, to see what kind of noise you are dealing with, and to help decide how you might pre-process your data for input to the detector?
    A
    ^
    |
    |
    |
    +-----------------> time
I'd start with lines for each activity:
|Ax + Ay + Az|
|Vx + Vy + Vz| (approximate by calculating area of trapezoids formed by your data points)
... etc
Maybe you can work out the orientation of the phone by attempting to detect gravity, then rotate your vectors to a 'standard' orientation (e.g. positive Z axis = up). If you can do that, then the different axes may become more meaningful. For example, walking (phone in pocket) would tend to have a velocity in the horizontal plane, which might be distinguished from walking (phone in hand) by motion in the vertical plane.
As for filters, if the data appears noisy, a simple starting point is to apply a moving average to smooth it. This is a common technique for sensor data in general:
https://en.wikipedia.org/wiki/Moving_average
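For instance, a minimal moving-average filter over the last N samples (the window size is an arbitrary choice you would tune against your sampling rate):

    // Simple moving average over a fixed-size circular buffer.
    class MovingAverage {
        private final float[] window;
        private int index = 0, count = 0;
        private float sum = 0;

        MovingAverage(int size) { window = new float[size]; }

        float add(float sample) {
            sum -= window[index];    // drop the oldest sample
            window[index] = sample;
            sum += sample;
            index = (index + 1) % window.length;
            if (count < window.length) count++;
            return sum / count;      // average of the samples seen so far
        }
    }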
Also, this post seems relevant to your question:
How to remove Gravity factor from Accelerometer readings in Android 3-axis accelerometer
Things identified by me:
1. The data has to be preprocessed as and how you need it to be; in my case I just want 3 inputs and one output.
2. The data has to be subjected to smoothing (five-point smoothing, or any other technique which suits you best - see the Reference), so that noise gets filtered out (not completely, though). A moving average is one such technique.
3. Linearized data would be good, because you don't have any idea how the data was sampled; use interpolation to help you linearize the data (see the sketch after this list).
4. Finally, use the FFT (Fast Fourier Transform) to extract the recipe out of the dish - that is, to extract features from your dataset!
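As a sketch of the interpolation step from point 3, here is one way to resample irregularly timed samples onto a uniform grid via linear interpolation, so the FFT sees evenly spaced data (the method name and parameters are mine; timestamps are assumed strictly increasing, with at least two samples):

    static float[] resample(long[] times, float[] values, long periodMs) {
        int n = (int) ((times[times.length - 1] - times[0]) / periodMs) + 1;
        float[] out = new float[n];
        int j = 0;
        for (int i = 0; i < n; i++) {
            long t = times[0] + i * periodMs;
            // Advance to the pair of samples bracketing time t.
            while (j < times.length - 2 && times[j + 1] < t) j++;
            float frac = (float) (t - times[j]) / (times[j + 1] - times[j]);
            out[i] = values[j] + frac * (values[j + 1] - values[j]);
        }
        return out;
    }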
I'm working on a Pedestrian Navigation System on Android. I am currently trying to get the rotation matrix between the 3-axis accelerometer referential and the motion that is applied to the device.
Let's say you are walking straight forward with the device in your hand, but the Y axis of the accelerometer (thus the device's Y axis) is not oriented in the same direction as the one you're heading yourself (basically you're holding the device in an awkward way).
Then if I want to apply distances (based on step detection), it would be wrong to apply them to the accelerometer referential (whose orientation I know): they have to be rotated according to the motion heading.
This is why I would like to know if you could enlighten me about a method to turn the accelerometer readings into rotation angles (or a matrix). Such a method has to avoid integrating the acceleration even once, as the error is abysmal on cheap accelerometers (otherwise it would be pretty easy to do, I think).
EDIT: The magnetometer and gyroscope help you find the device's orientation even when stationary, but they don't tell you in which direction the device is moving. I have the first; I'm searching for the second. Basically:
Human Referential (motion direction) -> Device Referential (or accelerometer referential) -> Earth Referential
and I'm searching for the rotation matrix to compute distances from HR to DR, which I would then combine with the rotation matrix I found to go from DR to ER.
I found an article which might be useful; here is the link.
The idea is based upon acquiring the accelerometer data on the 3 axes (which gives an n-by-3 matrix) and finding the PCA (principal component analysis) of that matrix: the 3 vectors with the highest energy of the matrix.
Ideally, the direction of the main vector (the one with the highest energy) is upwards, and the second is the heading direction.
You can read it all in the article.
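If it helps, here is a rough Java sketch of that PCA step using Apache Commons Math (an assumed external dependency, not an Android built-in; check whether your version returns the eigenvectors sorted by decreasing eigenvalue):

    import org.apache.commons.math3.linear.EigenDecomposition;
    import org.apache.commons.math3.linear.RealMatrix;
    import org.apache.commons.math3.stat.correlation.Covariance;

    // PCA of an n-by-3 matrix of accelerometer samples: the eigenvectors of
    // the covariance matrix are the principal directions. Per the article,
    // the strongest should be roughly "up" and the next the heading.
    static double[][] principalAxes(double[][] samples) {
        RealMatrix cov = new Covariance(samples).getCovarianceMatrix();
        EigenDecomposition eig = new EigenDecomposition(cov);
        double[][] axes = new double[3][];
        for (int i = 0; i < 3; i++) {
            axes[i] = eig.getEigenvector(i).toArray();
        }
        return axes;
    }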
I tried implementing the algorithm in MATLAB; the result is OK (not great).
Hope you can do better (I would like to hear about good results).
Hope this helps
Ariel
When I listen to the orientation event in an Android app, I get a SensorEvent which contains 3 floats - azimuth, pitch, and roll - relative to the real-world axes.
Now say I am building an app like Labyrinth, but I don't want to force the user to be over the phone and hold it such that the XY plane is parallel to the ground. Instead I want to allow the user to hold the phone as they wish - lying down, or perhaps sitting and holding the phone at an angle. In other words, I need to calibrate the phone in accordance with the user's preference.
How can I do that?
Also note that I believe the answer has to do with getRotationMatrix and getOrientation, but I am not sure how!
Please help! I've been stuck at this for hours.
For a Labyrinth style app, you probably care more for the acceleration (gravity) vector than the axes orientation. This vector, in Phone coordinate system, is given by the combination of the three accelerometers measurements, rather than the rotation angles. Specifically, only the x and y readings should affect the ball's motion.
If you do actually need the orientation, then the 3 angular readings represent the 3 Euler angles. However, I suspect you probably don't really need the angles themselves, but rather the rotation matrix R, which is returned by the getRotationMatrix() API. Once you have this matrix, it is basically the calibration that you are looking for. When you want to transform a vector in world coordinates to your device coordinates, you should multiply it by the inverse of this matrix (where, in this special case, inv(R) = transpose(R)).
So, following the example I found in the documentation, if you want to transform the world gravity vector g ([0 0 g]) to the device coordinates, multiply it by inv(R):
g = inv(R) * g
(note that this should give you the same result as reading the accelerometers)
Possible APIs to use here: the invertM() and multiplyMV() methods of the android.opengl.Matrix class.
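A sketch of that multiplication with those two APIs, assuming gravity and geomagnetic are the latest sensor readings; note that getRotationMatrix() fills a 4x4 matrix when passed float[16] arrays, which is the form the android.opengl.Matrix methods expect:

    import android.hardware.SensorManager;
    import android.opengl.Matrix;

    float[] R = new float[16];
    float[] I = new float[16];
    if (SensorManager.getRotationMatrix(R, I, gravity, geomagnetic)) {
        float[] rInv = new float[16];
        Matrix.invertM(rInv, 0, R, 0);
        // g_device = inv(R) * g_world; w = 0 treats g as a direction.
        float[] gWorld = {0f, 0f, SensorManager.GRAVITY_EARTH, 0f};
        float[] gDevice = new float[4];
        Matrix.multiplyMV(gDevice, 0, rInv, 0, gWorld, 0);
        // gDevice[0..2] should roughly match the raw accelerometer values.
    }

Since R is a pure rotation here, Matrix.transposeM() would give the same result as invertM().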
I don't know of any Android-specific APIs for this, but all you want to do is decrease the azimuth by a certain amount, right? So you move the "origin" from (0,0,0) to whatever orientation the user wants. In pseudocode:
myGetRotationMatrix:
return getRotationMatrix() - origin
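One hedged, runnable reading of that pseudocode (all names are mine): capture the orientation angles when the user calibrates, then subtract them from later readings. Component-wise angle subtraction is a crude approximation; a more robust variant would multiply by the inverse of the reference rotation matrix, as in the previous answer.

    import android.hardware.SensorManager;

    float[] originAngles = new float[3]; // azimuth, pitch, roll at calibration

    // Call once while the user holds the phone in their preferred pose.
    void calibrate(float[] rotationMatrix) {
        SensorManager.getOrientation(rotationMatrix, originAngles);
    }

    // Current orientation relative to the calibrated pose.
    float[] relativeOrientation(float[] rotationMatrix) {
        float[] current = new float[3];
        SensorManager.getOrientation(rotationMatrix, current);
        float[] relative = new float[3];
        for (int i = 0; i < 3; i++) {
            relative[i] = current[i] - originAngles[i];
        }
        return relative;
    }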