Magnetic fields, rotation matrix and global coordinates - Android

I think I've read all the posts about this subject, but I still can't understand a few things:
Q1:
To get the magnetic field vector in global coordinates I need to multiply the inverted rotation matrix by the magnetic field vector. Why do I need to invert the rotation matrix?
Q2:
Let's say I have a device and I can calculate the azimuth based on the rotation around the Z-axis using the getOrientation(...) method.
Can I use the rotation matrix or some other method to calculate the azimuth to magnetic north regardless of the phone's attitude?
So if I rotate the phone, the angle between me and magnetic north will remain the same?
Q3:
When I multiply the magnetic vector (4th component is zero) by the inverted rotation matrix, I get an x that is very close to zero. I know from other posts that this is OK, but I can't understand why.
Q4:
In theory, let's say I have two devices located 1 meter from each other. Is it possible to determine the spatial positions of the two devices based only on their magnetic fields (in global coordinates)?
Thanks in advance.
P.S.
I've already read these posts:
Getting magnetic field values in global coordinates,
How can I get the magnetic field vector, independent of the device rotation?
Convert magnetic field X, Y, Z values from device into global reference frame

It seems that even though you read my answer at Convert magnetic field X, Y, Z values from device into global reference frame, you still do not understand it.
A1. You multiply the rotation matrix by the coordinates of the magnetic field vector in the device coordinate system to get the coordinates of the magnetic field vector in the world coordinate system.
Let me emphasize: the above says rotation matrix, not inverted rotation matrix.
The rotation matrix obtained by calling getRotationMatrix is the change-of-basis matrix from the device basis to the world basis. That is, given any vector v with coordinates in the device coordinate system, the coordinates of the same vector v in the world coordinate system are obtained by multiplying the rotation matrix by the device coordinates.
The inverted rotation matrix is the change-of-basis matrix from the world basis to the device basis. Thus when you multiply this matrix by a set of coordinates, those coordinates are interpreted as the coordinates of a vector in the world coordinate system, and the product is the coordinates of the same vector in the device coordinate system. Therefore, if you multiply the inverted rotation matrix by the coordinates of the magnetic field vector as returned by the magnetic sensor, those coordinates get interpreted as world coordinates, so they no longer represent the magnetic field vector, and the product is not the magnetic field vector in the world coordinate system; it is actually the coordinates of some other vector in the device coordinate system.
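A minimal sketch of that multiplication (inside onSensorChanged, with android.hardware.SensorManager imported; gravity and geomagnetic are the latest accelerometer and magnetometer readings, and the variable names are only illustrative):

    float[] R = new float[9];
    if (SensorManager.getRotationMatrix(R, null, gravity, geomagnetic)) {
        // world = R * device, with R a row-major 3x3 matrix
        float[] worldMag = new float[3];
        worldMag[0] = R[0] * geomagnetic[0] + R[1] * geomagnetic[1] + R[2] * geomagnetic[2]; // East
        worldMag[1] = R[3] * geomagnetic[0] + R[4] * geomagnetic[1] + R[5] * geomagnetic[2]; // North
        worldMag[2] = R[6] * geomagnetic[0] + R[7] * geomagnetic[1] + R[8] * geomagnetic[2]; // Sky
    }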
A2. getOrientation is meaningful only if the device is flat. To me it is just a bunch of angle calculations. I look at what I am trying to do geometrically and then use the rotation matrix to calculate what I want. For example, to calculate the direction the back camera points in, I look at it as the direction of -z (the opposite of the vector orthogonal to the screen). To find this direction, I project -z onto the world East-North plane and calculate the angle between this projection and the North axis. If you think of it this way, then rotating the device does not change the direction in which -z points, so the projection is the same as you rotate the device. If you use getOrientation, then you first have to call remapCoordinateSystem(inR, AXIS_X, AXIS_Z, outR) for getOrientation to give you the correct result.
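A sketch of that projection (gravity and geomagnetic are the usual sensor readings; the device -z axis expressed in world coordinates is the negated third column of the row-major 3x3 rotation matrix):

    float[] R = new float[9];
    SensorManager.getRotationMatrix(R, null, gravity, geomagnetic);
    // Back-camera direction (-z of the device) in world coordinates is the
    // negated third column of R:
    float east  = -R[2];
    float north = -R[5];
    // Project onto the East-North plane and take the angle from North,
    // measured clockwise toward East:
    float bearingDeg = (float) Math.toDegrees(Math.atan2(east, north));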
A3. getRotationMatrix assumes that the geomagnetic parameter holds the coordinates of a vector lying entirely in the North-Sky plane. Any vector lying in this plane has an x (East) coordinate equal to 0 in world coordinates. This is just basic linear algebra.
A4. The answer is no. To get the spatial positions you have to be able to express these vectors relative to a fixed coordinate system. With only the coordinates of these vectors in the device coordinate system, there is no way to find a fixed basis that allows you to calculate the change-of-basis matrix from the device basis to that fixed basis. The two conditions stated in my link above need to be satisfied to calculate the change of basis.

A1. The rotation matrix describes the orientation of your phone in world coordinates. If you want to convert the magnetic vector from your phone's coordinates into world coordinates, you have to multiply by the inverse.
A2. I don't quite understand the question, sorry.
A3. The x coordinate is the lateral component of the magnetic field, which corresponds to the deviation of the geographic north pole from the magnetic pole, or something like that. It's supposed to be quite small, as is the z coordinate, which is the vertical component.
A4. In theory this might work, but with the precision of the sensors in your Android device, this approach does not seem very feasible.

Related

Equivalent replacement for deprecated Sensor.TYPE_ORIENTATION

I am looking for a solution that replaces the deprecated Android sensor Sensor.TYPE_ORIENTATION.
The most commonly reported solution is to combine Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD, calculate a rotation matrix using SensorManager#getRotationMatrix, and obtain the Euler angles using SensorManager#getOrientation.
Another reported solution is to use Sensor.TYPE_ROTATION_VECTOR, which also ends up with a rotation matrix and the Euler angles via SensorManager#getOrientation.
Unfortunately, both behave totally differently from TYPE_ORIENTATION when rotating the mobile device. Try both types while your phone is lying on the desk and then turn it up (pitch) to 90° (the screen is now facing directly toward you). The calculated Euler angles for azimuth and roll get really wild (because of something called the gimbal lock problem), while the degree values retrieved with TYPE_ORIENTATION are pretty stable (not accurate, but quite OK). Every value (yaw, pitch and roll) of TYPE_ORIENTATION seems to be some kind of "projected" degree that avoids the gimbal lock problem.
What would be a way to get similar degrees (for yaw, roll and pitch) without using the deprecated TYPE_ORIENTATION sensor (maybe from the rotation matrix)? How does the TYPE_ORIENTATION algorithm do it internally?
The azimuth in getOrientation is the angle between magnetic north and the projection of the device y-axis onto the world x-y plane. When the device is pitched up to 90°, that projection is the zero vector, so the azimuth is undefined in this case and can be any value. Physically, trying to find the angle between magnetic north and a vector pointing at the sky does not make sense.
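In terms of the row-major 3x3 rotation matrix from getRotationMatrix, the device y-axis expressed in world coordinates is the second column, so a sketch of that azimuth computation (reproducing what getOrientation reports when the device is reasonably flat; gravity and geomagnetic are the usual sensor readings) looks like this:

    float[] R = new float[9];
    SensorManager.getRotationMatrix(R, null, gravity, geomagnetic);
    float yEast  = R[1];   // East component of the device y-axis
    float yNorth = R[4];   // North component of the device y-axis
    // Azimuth = angle between magnetic North and the projection of the
    // device y-axis onto the world East-North plane:
    float azimuthRad = (float) Math.atan2(yEast, yNorth);
    // With the device pitched up ~90 degrees, yEast and yNorth both go to ~0
    // and this angle becomes meaningless; project -z (the camera direction)
    // instead, as in the camera example in the first question above.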
You should look at my project at https://github.com/hoananguyen/dsensor/blob/master/dsensor/src/main/java/com/hoan/dsensor_master/DProcessedSensor.java

What exactly is a rotation matrix?

I have come across code which uses the phone's sensors to get the orientation of the device in degrees along the 3 axes. These values are calculated from a 4x4 matrix called the rotation matrix, so I was wondering: what kind of data is stored in the rotation matrix?
The code is similar to the one in this example:
Android: Problems calculating the Orientation of the Device
The Wikipedia article about rotation matrices is reasonable. Basically, the rotation matrix tells you how to map a point in one co-ordinate system to a point in a different co-ordinate system. In the context of Android sensors, the rotation matrix is telling you how to map a point from the co-ordinate system of the phone (where the phone itself lies in the x-y plane) to the real world North/East/"Gravity-direction" co-ordinate system.
Android uses either 3x3 or 4x4 rotation matrices; the 4x4 form is the same rotation embedded in a homogeneous 4x4 matrix. For the 3x3 case, in terms of the Euler angles azimuth, pitch and roll, see my answer to 'Compute rotation matrix using the magnetic field' for how those angles are embedded in the rotation matrix (NB: the accepted answer to that question is wrong).
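As a sketch of the two sizes (both come from the same SensorManager.getRotationMatrix call; a 9-element array gives the 3x3 matrix and a 16-element array gives the same rotation padded to 4x4; gravity and geomagnetic are the usual sensor readings):

    float[] R3 = new float[9];    // 3x3, row-major
    float[] R4 = new float[16];   // same rotation as a 4x4 homogeneous matrix
    SensorManager.getRotationMatrix(R3, null, gravity, geomagnetic);
    SensorManager.getRotationMatrix(R4, null, gravity, geomagnetic);
    // Columns of R3 are the device axes expressed in world (East, North, Sky)
    // coordinates; rows are the world axes expressed in device coordinates.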

How to calculate a specific distance inside of a picture?

Sorry for my bad English. I have the following problem:
Let's say the camera of my mobile device is showing this picture.
In the picture you can see 4 different positions. Every position is known to me (longitude, latitude).
Now I want to know where in the picture a specific position is. For example, I want to have a rectangle 20 meters in front of me and 5 meters to the left. I just know the latitude/longitude of this point, but I don't know where I have to place it inside the picture (x, y). For example, POS3 is at (0, 400) in my view, POS4 is at (600, 400), and so on.
Where do I have to put the new point, which is 20 meters in front of me and 5 meters to the left? (So my input is (LatXY, LonXY) and my result should be (x, y) on the screen.)
I also have the height of the camera and the angles of the camera around the x, y and z axes.
Can I use simple mathematical operations to solve this problem?
Thank you very much!
The answer you want will depend on the accuracy of the result you need. As danaid pointed out, nonlinearity in the image sensor and other factors, such as atmospheric distortion, may introduce errors, but these would be difficult problems to solve consistently across different cameras and devices. So let's start by getting a reasonable approximation which can be tweaked as more accuracy is needed.
First, you may be able to ignore the directional information from the device, if you choose. If you have the five locations (POS1 - POS4 and the camera) in a consistent set of coordinates, you have all you need. In fact, you don't even need all those points.
A note on consistent coordinates: at this scale, once you convert the latitude and longitude to meters, using cos(latitude) as the scaling factor for longitude, you should be able to treat everything from a "flat earth" perspective. You then just need to remember that the camera's x-y plane is roughly the global x-z plane.
Conceptual Background
The diagram below lays out the projection of the points onto the image plane. The dz used for perspective can be derived directly from the ratio between the apparent (in-view) distance and the physical distance for the far and near pairs of points. In the simple case where the line POS1 to POS2 is parallel to the line POS3 to POS4, the perspective factor is just the ratio of the scaling of the two lines:
Scale (POS1, POS2) = pixel distance (pos1, pos2) / Physical distance (POS1, POS2)
Scale (POS3, POS4) = pixel distance (pos3, pos4) / Physical distance (POS3, POS4)
Perspective factor = Scale (POS3, POS4) / Scale (POS1, POS2)
So the perspective factor to apply to a vertex of your rect would be the proportion of the distance to the vertex between the lines. Simplifying:
Factor(rect) ~= [(Rect.z - (POS3, POS4).z) / ((POS1, POS2).z - (POS3, POS4).z)] * Perspective factor
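A small numeric sketch of those ratios; all the values are placeholders you would replace with your own pixel distances (from the screen points) and physical distances (from the lat/long pairs converted to meters):

    // Purely illustrative numbers:
    double pixelDist12 = 480, physicalDist12 = 12.0;  // along POS1-POS2
    double pixelDist34 = 600, physicalDist34 = 10.0;  // along POS3-POS4
    double scale12 = pixelDist12 / physicalDist12;
    double scale34 = pixelDist34 / physicalDist34;
    double perspectiveFactor = scale34 / scale12;
    // Interpolate the factor for a rect vertex by how far its z lies
    // between the two lines (z = distance in front of the camera):
    double z12 = 25.0, z34 = 5.0, rectZ = 20.0;
    double factorRect = ((rectZ - z34) / (z12 - z34)) * perspectiveFactor;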
Answer
A perspective transformation is linear with respect to the distance from the focal point in the direction of view. The diagram below is drawn with the X axis parallel to the image plane and the Y axis pointing in the direction of view. In this coordinate system, for any point P and an image plane at any distance from the origin, the projected point p has an X coordinate p.x which is proportional to P.x/P.y. These values can be linearly interpolated.
In the diagram, tp is the desired projection of the target point. To get tp.x, interpolate between, for example, pos1.x and pos3.x, adjusting for distance as follows:
tp.x = pos1.x + (pos3.x - pos1.x) * ((TP.x/TP.y) - (POS1.x/POS1.y)) / ((POS3.x/POS3.y) - (POS1.x/POS1.y))
The advantage of this approach is that it does not require any prior knowledge of the angle viewed by each pixel, and it will be relatively robust against reasonable errors in the location and orientation of the camera.
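A direct transcription of that interpolation into code, assuming pixel1x and pixel3x are the screen x-coordinates of pos1 and pos3, and P1, P3 and T are the camera-centered physical points with y along the direction of view (all names are placeholders):

    // pixel1x / pixel3x: screen x of pos1 and pos3;
    // P1x,P1y / P3x,P3y / Tx,Ty: physical camera-space coordinates of
    // POS1, POS3 and the target point (y = distance along the view).
    static double interpolateTpX(double pixel1x, double pixel3x,
                                 double P1x, double P1y,
                                 double P3x, double P3y,
                                 double Tx,  double Ty) {
        double r1 = P1x / P1y;   // projected x is proportional to X / Y
        double r3 = P3x / P3y;
        double rT = Tx / Ty;
        return pixel1x + (pixel3x - pixel1x) * (rT - r1) / (r3 - r1);
    }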
Further refinement
Using more data means being able to compensate for more errors. With multiple points in view, the camera location and orientation can be calibrated using the Tienstra method. A concise proof of this approach (using barycentric coordinates) can be found here.
Since the transformations required are all linear in homogeneous coordinates, you could apply barycentric coordinates to interpolate based on any three or more points, given their X, Y, Z, W coordinates in homogeneous 3-space and their (x, y) coordinates in image space. The closer the points are to the destination point, the less significant the nonlinearities are likely to be, so in your example you would use POS1 and POS3, since the rect is on the left, and POS2 or POS4 depending on the relative distance.
(Barycentric coordinates are likely most familiar as the method used to interpolate colors on a triangle (fragment) in 3D graphics.)
Edit: Barycentric coordinates still require the W homogeneous coordinate factor, which is another way of expressing the perspective correction for the distance from the focal point. See this article on GameDev for more details.
Two related SO questions: perspective correction of texture coordinates in 3d and Barycentric coordinates texture mapping.
I see a couple of problems.
The only real mistake is that you're scaling your projection up by _canvasWidth/2 etc. instead of translating that far from the principal point. Add those values to the projected result; multiplying is like "zooming" that far into the projection.
Second, dealing in a global Cartesian coordinate space is a bad idea. With the formulae you're using, the difference between (60.1234, 20.122) and (60.1235, 20.122) (i.e. a small latitude difference) causes changes of similar magnitude in all 3 axes, which doesn't feel right.
It's more straightforward to take the same approach as computer graphics: set your camera as the origin of your "camera space", and convert between world objects and camera space by getting the haversine distance (or similar) between your camera location and the location of the object. See here: http://www.movable-type.co.uk/scripts/latlong.html
Third, your perspective projection calculations are for an ideal pinhole camera, which you probably do not have. It will only be a small correction, but to be accurate you need to figure out how to additionally apply the projection that corresponds to the intrinsic camera parameters of your camera. There are two ways to accomplish this: you can do it as a post-multiplication of the scheme you already have, or you can change from multiplying by a 3x3 matrix to using a full 4x4 camera matrix (http://en.wikipedia.org/wiki/Camera_matrix) with the parameters in there.
With this approach the perspective projection is symmetric about the origin; if you don't check for z depth, you'll project points behind you onto your screen as if they were the same z distance in front of you.
Lastly, I'm not sure about the Android APIs, but make sure you're getting a true-north bearing and not a magnetic-north bearing. Some platforms return either depending on an argument or configuration. (And make sure your angles are in radians if that's what the APIs want, etc.; silly things, but I've lost hours debugging less :) ).
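On the true-north point: Android does expose the correction through android.hardware.GeomagneticField; a minimal sketch (latitude, longitude, altitude and the magnetic bearing are whatever values you already have):

    // Declination = degrees to add to a magnetic bearing to get a
    // true-north bearing at this location and time.
    GeomagneticField field = new GeomagneticField(
            (float) latitude, (float) longitude, (float) altitudeMeters,
            System.currentTimeMillis());
    float trueBearingDeg = magneticBearingDeg + field.getDeclination();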
If you know the points in the camera frame and the real world coordinates, some simple linear algebra will suffice. A package like OpenCV will have this type of functionality, or alternatively you can create the projection matrices yourself:
http://en.wikipedia.org/wiki/3D_projection
Once you have a set of point correspondences, it is as simple as filling in a few vectors and solving the system of equations; this gives you a projection matrix (you can assume the 4 points are planar). Multiply any 3D coordinate by it to find the corresponding 2D image-plane coordinate.
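If you end up building it yourself, a minimal pinhole-projection sketch of that last step looks like this (fx, fy, cx, cy are intrinsic parameters from calibration, and the input point is already in camera space with z along the view direction; this is a generic sketch, not tied to any particular library):

    // u = fx * x / z + cx, v = fy * y / z + cy
    static double[] projectToPixels(double x, double y, double z,
                                    double fx, double fy, double cx, double cy) {
        if (z <= 0) {
            return null;   // point is behind the camera, do not project it
        }
        return new double[] { fx * x / z + cx, fy * y / z + cy };
    }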

Changing sensor coordinate system in android

Android provides sensor data in the device coordinate system no matter how the device is oriented. Is there any way to get sensor data in a 'gravity' coordinate system? I mean that no matter how the device is oriented, I want the accelerometer data and orientation in a coordinate system where the y-axis points toward the sky, the x-axis toward the east, and the z-axis toward the south pole.
I took a look at remapCoordinateSystem, but it seems to be limited to only swapping axes. I guess for orientation I will have to do some low-level rotation matrix transformation (or is there a better solution?). But what about acceleration data? Is there any way to get the data relative to a fixed coordinate system (a sort of world coordinate system)?
The reason I need this is that I'm trying to detect some simple motion gestures while the phone is in a pocket, and it would be easier for me to have all the data in a coordinate system related to the user rather than the device coordinate system (which will have a slightly different orientation in different users' pockets).
Well, you basically get the North orientation when starting; for this you use the accelerometer and the magnetic field sensor to compute the orientation (or the deprecated orientation sensor).
Once you have it, you can compute a rotation matrix (direction cosine matrix) from those azimuth, pitch and roll angles. Multiplying your acceleration vector by that matrix will transform your device-frame movements into Earth-frame ones.
As your device changes its orientation over time, you'll need to update the matrix. To do so, retrieve the gyroscope's data and update your direction cosine matrix for each new value. You could also recompute the orientation the same way you did the first time, but that's less accurate.
My solution involves a DCM, but you could also use quaternions; it's just a matter of choice. Feel free to ask for more if needed. I hope this is what you wanted to know!
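A minimal sketch of the first two steps, using SensorManager.getRotationMatrix directly instead of rebuilding the DCM from the azimuth/pitch/roll angles (the gyroscope update is omitted; gravity, geomagnetic and accel are the latest sensor readings):

    float[] R = new float[9];
    if (SensorManager.getRotationMatrix(R, null, gravity, geomagnetic)) {
        // Earth-frame acceleration = R * device-frame acceleration
        float[] earthAccel = new float[3];
        earthAccel[0] = R[0] * accel[0] + R[1] * accel[1] + R[2] * accel[2]; // East
        earthAccel[1] = R[3] * accel[0] + R[4] * accel[1] + R[5] * accel[2]; // North
        earthAccel[2] = R[6] * accel[0] + R[7] * accel[1] + R[8] * accel[2]; // Sky
    }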

Compute relative orientation given azimuth, pitch, and roll in android?

When I listen to the orientation event in an Android app, I get a SensorEvent, which contains 3 floats: azimuth, pitch, and roll relative to the real-world axes.
Now say I am building an app like Labyrinth, but I don't want to force the user to be over the phone and hold it such that the xy plane is parallel to the ground. Instead I want to allow the user to hold the phone as they wish, lying down, or perhaps sitting down and holding the phone at an angle. In other words, I need to calibrate the phone in accordance with the user's preference.
How can I do that?
Also note that I believe that my answer has to do with getRotationMatrix and getOrientation, but I am not sure how!
Please help! I've been stuck at this for hours.
For a Labyrinth-style app, you probably care more about the acceleration (gravity) vector than the axes' orientation. This vector, in the phone coordinate system, is given by the three accelerometer measurements rather than the rotation angles. Specifically, only the x and y readings should affect the ball's motion.
If you do actually need the orientation, then the 3 angular readings represent the 3 Euler angles. However, I suspect you probably don't really need the angles themselves, but rather the rotation matrix R, which is returned by the getRotationMatrix() API. Once you have this matrix, it is basically the calibration that you are looking for. When you want to transform a vector in world coordinates to your device coordinates, you should multiply it by the inverse of this matrix (where, in this special case, inv(R) = transpose(R)).
So, following the example I found in the documentation, if you want to transform the world gravity vector g ([0 0 g]) to the device coordinates, multiply it by inv(R):
g = inv(R) * g
(note that this should give you the same result as reading the accelerometers)
Possible APIs to use here: the invertM() and multiplyMV() methods of the android.opengl.Matrix class.
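If you prefer to avoid the 4x4 invertM()/multiplyMV() route, a minimal sketch using the transpose shortcut mentioned above (inv(R) = transpose(R) for a pure rotation) looks like this; gravity and geomagnetic are the usual sensor readings and SensorManager.GRAVITY_EARTH is the standard constant:

    float[] R = new float[9];
    if (SensorManager.getRotationMatrix(R, null, gravity, geomagnetic)) {
        float[] gWorld  = { 0f, 0f, SensorManager.GRAVITY_EARTH };
        float[] gDevice = new float[3];
        // device = transpose(R) * world
        gDevice[0] = R[0] * gWorld[0] + R[3] * gWorld[1] + R[6] * gWorld[2];
        gDevice[1] = R[1] * gWorld[0] + R[4] * gWorld[1] + R[7] * gWorld[2];
        gDevice[2] = R[2] * gWorld[0] + R[5] * gWorld[1] + R[8] * gWorld[2];
        // gDevice should roughly match the raw accelerometer readings.
    }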
I don't know of any Android-specific APIs, but all you want to do is decrease the azimuth by a certain amount, right? So you move the "origin" from (0,0,0) to whatever they want. In pseudocode:
myGetRotationMatrix:
return getRotationMatrix() - origin
