I have come across some code which uses the phone's sensors to get the orientation of the device in degrees along the 3 axes. This value is calculated from a 4x4 matrix called the rotation matrix. So I was wondering: what kind of data is stored in the rotation matrix?
The code is similar to the one in this example:
Android: Problems calculating the Orientation of the Device
The Wikipedia article about rotation matrices is reasonable. Basically, the rotation matrix tells you how to map a point in one co-ordinate system to a point in a different co-ordinate system. In the context of Android sensors, the rotation matrix is telling you how to map a point from the co-ordinate system of the phone (where the phone itself lies in the x-y plane) to the real world North/East/"Gravity-direction" co-ordinate system.
Android uses either 3x3 or 4x4 rotation matrices. The 4x4 form is simply the same rotation padded into homogeneous coordinates (convenient for OpenGL), not a different representation. For the 3x3 case, in terms of the Euler angles azimuth, pitch and roll, see my answer to 'Compute rotation matrix using the magnetic field' for how those angles are embedded in the rotation matrix (NB: the accepted answer to that question is wrong).
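For reference, a minimal sketch of how you typically obtain that matrix and the angles (the variable names are placeholders; the index formulas in the comments are the ones Android's own getOrientation uses for the 3x3 case):

import android.hardware.SensorManager;

// Sketch only: accel and magnetic are the latest TYPE_ACCELEROMETER and
// TYPE_MAGNETIC_FIELD readings (each a float[3]). Returns {azimuth, pitch, roll}
// in radians, or null if the matrix could not be computed.
static float[] eulerFromSensors(float[] accel, float[] magnetic) {
    float[] R = new float[9];               // 3x3 rotation matrix, row-major
    float[] orientation = new float[3];
    if (!SensorManager.getRotationMatrix(R, null, accel, magnetic)) {
        return null;                        // e.g. free fall or unreliable readings
    }
    SensorManager.getOrientation(R, orientation);
    // How the angles are embedded in R:
    //   azimuth = atan2(R[1], R[4])
    //   pitch   = asin(-R[7])
    //   roll    = atan2(-R[6], R[8])
    // Passing a float[16] instead of a float[9] gives the same rotation padded
    // into 4x4 homogeneous form (handy for OpenGL), not a different representation.
    return orientation;
}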
Related
I have two 4x4 rotation matrices, M and N. M describes my current object attitude in space, and N is the desired object attitude. Now I would like to rotate the M matrix towards N, so that the object slowly rotates towards the desired attitude over the following iterations. Any idea how to approach this?
If these matrices are proper rotation matrices (which should be the case for anything describing a rotation), you can do this by interpolating their basis vectors in a polar system.
Concretely, convert the top-left 3x3 part into 3 basis vectors, each defined by angles and a length. Then do a linear interpolation on the angles and lengths for that top-left 3x3 part, while the rest gets a direct Cartesian interpolation. From the interpolated angles and lengths you can then convert back to Cartesian coordinates.
Naturally there is still some work internally, like choosing which way to rotate (take the closest direction) and checking there are no edge cases where one basis vector rotates in a different direction than the other...
I managed to do this successfully in a 2D system, which is a bit easier, but it should be no different in 3D.
Note that a direct Cartesian interpolation works reasonably well as long as the angles are small (under roughly 10 degrees), which is most likely not your case at all.
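To make the 2D case concrete, here is a minimal sketch (plain Java, names are illustrative) of the "interpolate the angle along the shortest way, then rebuild the matrix" idea:

// Interpolate a 2D rotation in polar form: blend the angle along the shorter
// arc, then rebuild the 2x2 rotation matrix (row-major).
static double[] lerpRotation2D(double angleFrom, double angleTo, double t) {
    double delta = angleTo - angleFrom;
    while (delta > Math.PI)   delta -= 2 * Math.PI;   // wrap into (-pi, pi]
    while (delta <= -Math.PI) delta += 2 * Math.PI;   // so we rotate the closest way
    double a = angleFrom + t * delta;                 // t in [0, 1]
    return new double[] {
        Math.cos(a), -Math.sin(a),
        Math.sin(a),  Math.cos(a)
    };
}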
I am looking for a solution that replaces the deprecated Android sensor Sensor.TYPE_ORIENTATION.
The most reported solution is to combine Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD, then calculate a rotation matrix by using SensorManager#getRotationMatrix and obtain the Euler angles by using SensorManager#getOrientation.
Another reported solution is to use Sensor.TYPE_ROTATION_VECTOR, which also ends up with a rotation matrix and the Euler angles via SensorManager#getOrientation.
Unfortunately, both behave totally differently from TYPE_ORIENTATION when the device is rotated. Try both types while your phone is lying on the desk and then pitch it up to 90° (so the screen faces you directly). The calculated Euler angles for azimuth and roll go wild (because of the gimbal lock problem), while the degree values retrieved with TYPE_ORIENTATION stay pretty stable (not accurate, but quite OK). Every value (yaw, pitch and roll) of TYPE_ORIENTATION seems to be some kind of "projected" degree that does not suffer from gimbal lock.
What would be a way to get similar degrees (for yaw, pitch and roll) without using the deprecated TYPE_ORIENTATION sensor (maybe from the rotation matrix)? How does the TYPE_ORIENTATION algorithm do it internally?
The azimuth returned by getOrientation is the angle between magnetic north and the projection of the device y-axis onto the world x-y plane. When the device is pitched up to 90°, that projection is the zero vector, so the azimuth is undefined in this case and can take any value. Physically, trying to find the angle between magnetic north and a vector pointing at the sky does not make sense.
You should look at my project at https://github.com/hoananguyen/dsensor/blob/master/dsensor/src/main/java/com/hoan/dsensor_master/DProcessedSensor.java
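For completeness, a commonly suggested workaround when the device is held upright is to remap the rotation matrix before calling getOrientation; a sketch of that general approach (not necessarily what the linked project does):

import android.hardware.SensorManager;

// Sketch: inR is the float[9] from getRotationMatrix(). Remapping the
// coordinate system as if looking through the back camera keeps getOrientation
// away from the degenerate case when the screen faces the user.
static float[] orientationWhenUpright(float[] inR) {
    float[] outR = new float[9];
    float[] angles = new float[3];
    SensorManager.remapCoordinateSystem(inR,
            SensorManager.AXIS_X, SensorManager.AXIS_Z, outR);
    SensorManager.getOrientation(outR, angles);   // azimuth, pitch, roll in radians
    return angles;
}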
I think that I've read all the posts about this subject, but I still can't understand a few things:
Q1:
To get the magnetic field vector in global coordinates I need to multiply the inverted rotation matrix and the magnetic field vector. Why do I need to invert the rotation matrix?
Q2:
Let's say that I have a device and I can calculate the azimuth based on the rotation about the Z-axis using the getOrientation(...) method.
Can I use the rotation matrix, or some other method, to calculate the azimuth to magnetic north regardless of the phone's attitude?
So if I rotate the phone, the angle between me and magnetic north will remain the same?
Q3:
When I multiply the magnetic vector (with the 4th component set to zero) by the inverted rotation matrix, I get an x that is very close to zero. I know from other posts that this is OK, but I can't understand why.
Q4:
In theory, let's say that I have two devices located 1 meter from each other. Is it possible to work out the spatial position of the two devices based only on their magnetic fields (in global coordinates)?
Thanks in advance.
P.S.
I've already read these posts:
Getting magnetic field values in global coordinates,
How can I get the magnetic field vector, independent of the device rotation?
Convert magnetic field X, Y, Z values from device into global reference frame
It seems that even after reading my answer at Convert magnetic field X, Y, Z values from device into global reference frame, you still do not understand it.
A1. You multiply the rotation matrix by the coordinates of the magnetic field vector in the device coordinate system to get the coordinates of the magnetic field vector in the world coordinate system.
Let me emphasize: the above says rotation matrix, not inverted rotation matrix.
The rotation matrix obtained by calling getRotationMatrix is the change of basis matrix from the device basis to the world basis. That is, given any vector v with coordinates in the device coordinate system, the coordinates of the same vector v in the world coordinate system can be obtained by multiplying the rotation matrix by the coordinates in the device coordinate system.
The inverted rotation matrix is the change of basis matrix from the world basis to the device basis. Thus, when you multiply this matrix by a set of coordinates, those coordinates are interpreted as the coordinates of a vector in the world coordinate system, and the product is the coordinates of the same vector in the device coordinate system. Therefore, if you multiply the inverted rotation matrix by the coordinates of the magnetic field vector as returned by the magnetic sensor, those coordinates are treated as if they were world coordinates; they do not represent the magnetic field vector, and the resulting product is not the coordinates of the magnetic field vector in the world coordinate system. It is actually the coordinates of some other vector in the device coordinate system.
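A small sketch of A1 (the variable names are placeholders): multiply the row-major 3x3 matrix from getRotationMatrix by the raw magnetometer reading to get the field in world (East/North/Up) coordinates.

// Sketch: R is the float[9] from getRotationMatrix(), magDevice is the
// TYPE_MAGNETIC_FIELD reading in device coordinates.
static float[] magneticInWorld(float[] R, float[] magDevice) {
    float[] world = new float[3];   // {East, North, Up}
    for (int row = 0; row < 3; row++) {
        world[row] = R[3 * row]     * magDevice[0]
                   + R[3 * row + 1] * magDevice[1]
                   + R[3 * row + 2] * magDevice[2];
    }
    // world[0] (East) should come out close to 0; see A3 below for why.
    return world;
}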
A2. getOrientation is meaningful only if the device is flat. For me it is just a bunch of angle calculations. I look at what I am trying to do geometrically and then use the rotation matrix to calculate what I want. For example, to calculate the direction the back camera points at, I look at it as the direction of -z (the opposite of the vector orthogonal to the screen). To find this direction, I project -z onto the world East-North plane and calculate the angle between this projection vector and the North axis. If you think of it this way, rotating the device does not change the direction of -z, so the projection vector stays the same as you rotate the device. If you use getOrientation instead, you have to first call remapCoordinateSystem(inR, AXIS_X, AXIS_Z, outR) for getOrientation to give you the correct result.
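A hedged sketch of that geometric approach (indices assume the row-major 3x3 matrix from getRotationMatrix): the third column of R is the device z-axis expressed in world coordinates, so the back-camera direction is its negation, and its East/North components give the heading.

// Sketch: compute the compass heading of the back camera from the rotation
// matrix, without calling getOrientation. R is the float[9] from
// getRotationMatrix(); its third column (R[2], R[5], R[8]) is the device
// z-axis in world coordinates, so -z is (-R[2], -R[5], -R[8]).
static float cameraHeadingDegrees(float[] R) {
    float east  = -R[2];
    float north = -R[5];
    double heading = Math.toDegrees(Math.atan2(east, north)); // 0 = North, 90 = East
    return (float) ((heading + 360.0) % 360.0);
}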
A3. getRotationMatrix assumes that the geomagnetic parameter is the coordinates of a vector lying entirely in the world North-Sky plane. Any vector lying in that plane has an x (East) coordinate equal to 0. This is just basic linear algebra.
A4. The answer is no. To get the spatial position you have to be able to express these vectors relative to a fixed coordinate system. With only the coordinates of these vectors in the device coordinate system, there is no way to find a fixed basis that would allow you to calculate the change of basis matrix from the device basis to that fixed basis. The two conditions stated in my link above need to be satisfied to calculate the change of basis.
A1. The rotation matrix tells you the attitude of your phone in world coordinates; if you want to convert the magnetic vector from phone coordinates to world coordinates you have to multiply by the inverse.
A2. I don't quite understand the question, sorry.
A3. The x coordinate is the lateral component of the magnetic field, which corresponds to the deviation of the geographic north pole from the magnetic pole, or something like that. It is supposed to be quite small, same as the z coordinate, which is the vertical component.
A4. In theory this might work, but with the precision of the sensors in your Android device this approach does not seem very feasible.
Android provides sensor data in the device coordinate system, no matter how the device is oriented. Is there any way to get sensor data in a 'gravity' coordinate system? I mean, no matter how the device is oriented, I want accelerometer data and orientation in a coordinate system where the y-axis points toward the sky, the x-axis toward the east and the z-axis toward the south pole.
I took a look at remapCoordinateSystem but it seems to be limited to only swapping axes. I guess for orientation I will have to do some low-level rotation matrix transformation (or is there a better solution?). But what about acceleration data? Is there any way to get that data relative to a coordinate system that is fixed (a sort of world coordinate system)?
The reason I need this is that I'm trying to detect some simple motion gestures while the phone is in the pocket, and it would be easier for me to have all data in a coordinate system related to the user rather than the device coordinate system (which will be oriented slightly differently in different users' pockets).
Well you basically get the North orientation when starting - for this you use the accelerometer and the magnetic field sensor to compute orientation (or the deprecated Orientation sensor).
Once you have it you can compute a rotation matrix (Direction Cosine Matrix) from those azimuth, pitch and roll angles. Multiplying your acceleration vector by that matrix will transform your device-frame movements into Earth-frame ones.
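A minimal sketch of that multiplication, assuming you keep the 4x4 (float[16]) variant of the matrix from getRotationMatrix rather than rebuilding the DCM from the angles (the helper name is made up). Note that getRotationMatrix fills the array row-major while android.opengl.Matrix expects column-major, hence the transpose:

import android.opengl.Matrix;

// Sketch: R16 is the float[16] from getRotationMatrix(), accDevice the raw
// accelerometer reading in device coordinates.
static float[] accelerationInEarthFrame(float[] R16, float[] accDevice) {
    float[] colMajorR = new float[16];
    Matrix.transposeM(colMajorR, 0, R16, 0);        // column-major device -> world
    float[] in  = { accDevice[0], accDevice[1], accDevice[2], 0f };  // w = 0: a direction
    float[] out = new float[4];
    Matrix.multiplyMV(out, 0, colMajorR, 0, in, 0); // East, North, Up components
    return new float[] { out[0], out[1], out[2] };
}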
As your device changes its orientation over time, you'll need to update the matrix. To do so, retrieve the gyroscope data and update your Direction Cosine Matrix with each new value. You could also recompute the orientation from the accelerometer and magnetometer just like the first time, but that is less accurate.
My solution involves a DCM, but you could also use quaternions; it's just a matter of choice. Feel free to ask for more if needed. I hope this is what you wanted to know!
When I listen to the orientation event in an Android app, I get a SensorEvent which contains 3 floats: azimuth, pitch, and roll, relative to the real-world axes.
Now say I am building an app like Labyrinth, but I don't want to force the user to be over the phone and hold it so that the xy plane is parallel to the ground. Instead I want to allow the user to hold the phone as they wish, lying down or perhaps sitting and holding the phone at an angle. In other words, I need to calibrate the phone according to the user's preference.
How can I do that?
Also note that I believe that my answer has to do with getRotationMatrix and getOrientation, but I am not sure how!
Please help! I've been stuck at this for hours.
For a Labyrinth-style app, you probably care more about the acceleration (gravity) vector than the axis orientation. This vector, in the phone coordinate system, is given by the combination of the three accelerometer measurements rather than the rotation angles. Specifically, only the x and y readings should affect the ball's motion.
If you do actually need the orientation, then the 3 angular readings represent the 3 Euler angles. However, I suspect you don't really need the angles themselves, but rather the rotation matrix R, which is returned by the getRotationMatrix() API. Once you have this matrix, it is basically the calibration you are looking for. When you want to transform a vector in world coordinates to your device coordinates, multiply it by the inverse of this matrix (where, in this special case, inv(R) = transpose(R)).
So, following the example I found in the documentation, if you want to transform the world gravity vector g ([0 0 g]) to the device coordinates, multiply it by inv(R):
g = inv(R) * g
(note that this should give you the same result as reading the accelerometers)
Possible APIs to use here: the invertM() and multiplyMV() methods of the android.opengl.Matrix class.
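A hedged sketch of that, assuming the float[16] form of the matrix from getRotationMatrix (which is filled row-major, so it is transposed first to match the column-major convention of android.opengl.Matrix):

import android.hardware.SensorManager;
import android.opengl.Matrix;

// Sketch: transform the world gravity vector [0, 0, g] into device coordinates
// via inv(R). R16 is the float[16] rotation matrix from getRotationMatrix().
static float[] worldGravityInDeviceCoords(float[] R16) {
    float[] colMajorR = new float[16];
    float[] invR      = new float[16];
    Matrix.transposeM(colMajorR, 0, R16, 0);    // column-major device -> world
    Matrix.invertM(invR, 0, colMajorR, 0);      // world -> device (= transpose here)
    float[] gWorld  = { 0f, 0f, SensorManager.GRAVITY_EARTH, 0f };
    float[] gDevice = new float[4];
    Matrix.multiplyMV(gDevice, 0, invR, 0, gWorld, 0);
    // Should roughly match the raw accelerometer reading when the device is still.
    return new float[] { gDevice[0], gDevice[1], gDevice[2] };
}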
I don't know of any android-specific APIs, but all you want to do is decrease the azimuth by a certain amount, right? So you move the "origin" from (0,0,0) to whatever they want. In pseudocode:
myGetRotationMatrix:
return getRotationMatrix() - origin
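One way to flesh that idea out (a sketch; the field and method names are made up): remember the azimuth at calibration time and subtract it from later readings.

// Sketch: call calibrate() once when the user confirms their "neutral" pose;
// relativeAzimuth() then returns the azimuth relative to that pose, in radians.
private float azimuthOrigin;

void calibrate(float[] orientation) {        // values from getOrientation()
    azimuthOrigin = orientation[0];
}

float relativeAzimuth(float[] orientation) {
    double a = orientation[0] - azimuthOrigin;
    while (a > Math.PI)   a -= 2 * Math.PI;  // wrap into (-pi, pi]
    while (a <= -Math.PI) a += 2 * Math.PI;
    return (float) a;
}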