How to calculate the direction the screen is facing - Android

How do you calculate the direction that your camera is pointing towards in Android? Azimuth works only if the device is vertical. How do you account for the pitch and roll?
If R is the rotation matrix, I want to find something like:
getRotationMatrix(R, I, grav, mag);
float[] rotated = R * {0, 0, -1}; // pseudocode -- not sure how to do the matrix multiplication
float direction = (float) Math.atan2(rotated[0], rotated[1]);
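A minimal sketch of that idea in plain Java, assuming Android's convention that getRotationMatrix fills a row-major 3x3 matrix R mapping device coordinates to world coordinates (x east, y north, z up), and that the back camera looks along the device's -z axis. FacingDirection and multiplyR3 are hypothetical names for illustration, not Android APIs:

```java
// Sketch: derive the compass bearing the back camera faces from a 3x3
// row-major rotation matrix R (as filled by SensorManager.getRotationMatrix).
// R maps device coordinates to world coordinates (x = east, y = north, z = up).
public class FacingDirection {

    // Hypothetical helper: multiply a row-major 3x3 matrix by a 3-vector.
    static float[] multiplyR3(float[] R, float[] v) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++)
            out[i] = R[3 * i] * v[0] + R[3 * i + 1] * v[1] + R[3 * i + 2] * v[2];
        return out;
    }

    // Bearing in radians, clockwise from north, of the device's -z axis.
    static float facingBearing(float[] R) {
        // The back camera looks along device -z; express it in world coordinates.
        float[] world = multiplyR3(R, new float[]{0, 0, -1});
        // atan2(east, north) handles all quadrants, unlike atan(x/y).
        return (float) Math.atan2(world[0], world[1]);
    }
}
```

For a device held upright facing north (device x = east, device y = up, device z pointing at the user), R is {1,0,0, 0,0,-1, 0,1,0} and facingBearing returns 0; held upright facing east it returns π/2.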

I would look up the Accelerometer API. Also, check out this article; it might help: http://www.anddev.org/convert_android_accelerometer_values_and_get_tilt_from_accel-t6595.html

Related

Trouble mapping device coordinate system to real-world (rotation vector) coordinate system in Processing Android

I know this question has been asked many many times, but with all the knowledge out there I still can't get it to work for myself in the specific setting I now find myself in: Processing for Android.
The coordinate systems involved are (1) the real-world coordinate system as per Android's view: y is tangential to the ground and pointing north, z goes up into the sky, and x goes to your right, if you're standing on the ground and looking north; and (2) the device coordinate system as per Processing's view: x points to the right of the screen, y down, and z comes out of the screen.
The goal is simply to draw a cube on the screen and have it rotate on device rotation such that it seems that it is stable in actual space. That is: I want a map between the two coordinate systems so that I can draw in terms of the real-world coordinates instead of the screen coordinates.
In the code I'm using the Ketai sensor library, and subscribe to the onRotationVectorEvent(float x, float y, float z) event. Also, I have a simple quaternion class lying around that I got from https://github.com/kynd/PQuaternion. So far I have the following code, in which I have two different ways of trying to map, that coincide, but nevertheless don't work as I want them to:
import ketai.sensors.*;

KetaiSensor sensor;
PVector rotationAngle = new PVector(0, 0, 0);
Quaternion rot = new Quaternion();

void setup() {
  fullScreen(P3D);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(#333333);
  translate(width/2, height/2);
  lights();
  // method 1: draw lines for real-world axes in terms of Processing's coordinates
  PVector rot_x_axis = rot.mult(new PVector(400, 0, 0));
  PVector rot_y_axis = rot.mult(new PVector(0, 0, -400));
  PVector rot_z_axis = rot.mult(new PVector(0, 400, 4));
  stroke(#ffffff);
  strokeWeight(8); line(0, 0, 0, rot_x_axis.x, rot_x_axis.y, rot_x_axis.z);
  strokeWeight(5); line(0, 0, 0, rot_y_axis.x, rot_y_axis.y, rot_y_axis.z);
  strokeWeight(2); line(0, 0, 0, rot_z_axis.x, rot_z_axis.y, rot_z_axis.z);
  // method 2: first rotate appropriately
  fill(#f4f7d2);
  rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, rotationAngle.y, rotationAngle.z);
  box(200, 200, 200);
}

void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // I believe these two do the same thing.
  rot.set(x, y, z, cos(asin(rotationAngle.mag())));
  //rot.setAngleAxis(asin(rotationAngle.mag()) * 2, rotationAngle);
}
The above works well enough that the real-world axis lines coincide with the cube drawn, and both rotate in an interesting way. But still, there seems to be some "gimbal stuff" going on, in the sense that, when I rotate my device up and down standing one way, the cube also rotates up and down, but standing another way, the cube rotates sideways --- as if I'm applying the rotations in the wrong order. However, I'm trying to avoid gimbal madness by working with quaternions this way --- how does it still apply?
I've solved it now, just by a simple "click to test next configuration" UI, testing all 6 × 8 = 48 configurations of rotate(asin(rotationAngle.mag()) * 2, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>); -- the winning combination turned out to be x, -y, z, i.e.:
rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, -rotationAngle.y, rotationAngle.z);
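The rotation-vector-to-quaternion step used above (w = cos(asin(mag))) can be sketched in plain Java: the rotation vector's components are axis * sin(θ/2) for a unit axis, so |v| = sin(θ/2) and w = cos(θ/2) = sqrt(1 - |v|²), which is exactly what cos(asin(m)) evaluates to. RotVecQuat is a hypothetical helper for illustration, not part of Ketai or the Android SDK:

```java
// Sketch: build a unit quaternion (w, x, y, z) from an Android-style
// rotation vector whose components are axis * sin(theta/2). Since the
// axis is a unit vector, |v| = sin(theta/2), hence w = sqrt(1 - |v|^2).
public class RotVecQuat {
    static float[] fromRotationVector(float x, float y, float z) {
        float magSq = x * x + y * y + z * z;
        // Guard against tiny negative values caused by rounding.
        float w = (float) Math.sqrt(Math.max(0f, 1f - magSq));
        return new float[]{w, x, y, z};
    }
}
```

For example, a 90-degree rotation about z gives v = (0, 0, sin 45°), so w = cos 45° ≈ 0.7071; a zero vector gives the identity quaternion with w = 1.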

Using Rotation Matrix to rotate points in space

I'm using android's rotation matrix to rotate multiple points in space.
Work so far
I start by reading the matrix from the SensorManager.getRotationMatrix function. Next I transform the rotation matrix into a quaternion using the explanation given in this link. I'm doing this because I read that Euler angles can lead to gimbal lock and that working directly with a 3x3 matrix can be costly (source).
Problem
Now what I want to do is this: imagine the phone is the origin of the frame of reference; given a set of points (lat/lng coordinates projected into an xyz coordinate system, see the method below), I want to rotate them so I can check which ones are in my line of sight. For that I'm using this SO question, which returns an X and a Y (left and top respectively) to display the point on screen. It works fine, but only when facing north (because it doesn't take orientation into account, and my projected vector uses North/South as X and East/West as Z). So my thought was to rotate all the objects. Also, even though the initial altitude (Y) is 0, I want to be able to position the point up/down according to the phone's orientation.
I think part of the solution may be on this post. But since this uses Euler angles I don't think that's the best method.
Conclusion
So, if it's really better to rotate each point's position, how can I achieve that using the rotation quaternion? Otherwise, what is the better way?
I'm sorry if I said anything wrong in this post. I'm not good at physics.
Code
// this function returns a 3D vector (0 for Y since I'm discarding altitude) from 2 coordinates
public static float[] convLocToVec(LatLng source, LatLng destination) {
    float[] z = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            destination.latitude, source.longitude, z);
    float[] x = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            source.latitude, destination.longitude, x);
    if (source.latitude < destination.latitude)
        z[0] *= -1;
    if (source.longitude > destination.longitude)
        x[0] *= -1;
    return new float[]{x[0], 0f, z[0]};
}
Thanks for your help and have a nice day.
UPDATE 1
According to Wikipedia:
Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v→. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector.
Should I really just use the rotation matrix to rotate a vector?
Since no one answered, I'm here to answer myself.
After some research (a lot, actually) I came to the conclusion that yes, it is possible to rotate a vector using a quaternion directly, but it's better to transform it into a rotation matrix first.
Rotation matrix - 9 multiplications and 6 additions
Quaternion - 15 multiplications and 15 additions
Source: Performance comparisons
It's better to use the rotation matrix provided by Android. Also, if you are going to use a quaternion somehow (Sensor.TYPE_ROTATION_VECTOR + SensorManager.getQuaternionFromVector, for example), you can (and should) transform it into a rotation matrix. You can use the method SensorManager.getRotationMatrixFromVector to convert the rotation vector to a matrix. After you get the rotation matrix, you just have to multiply it by the projected vector you want. You can use this function for that:
public float[] multiplyByVector(float[][] A, float[] x) {
    int m = A.length;
    int n = A[0].length;
    if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
    float[] y = new float[m];
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            y[i] += A[i][j] * x[j];
    return y;
}
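As a sanity check of the matrix-vector product, here is a self-contained sketch (the matrix values are a textbook right-handed rotation, not Android sensor output): a 90-degree rotation about y takes (1, 0, 0) to (0, 0, -1).

```java
// Sketch: rotate the vector (1, 0, 0) by 90 degrees about the y axis
// using a plain 3x3 matrix-vector product (9 multiplications, 6 additions).
public class RotateVec {
    static float[] multiply(float[][] A, float[] x) {
        float[] y = new float[A.length];
        for (int i = 0; i < A.length; i++)
            for (int j = 0; j < A[0].length; j++)
                y[i] += A[i][j] * x[j];
        return y;
    }

    static float[] rotateY90(float[] v) {
        // Standard right-handed rotation about y by +90 degrees:
        // cos 90 = 0, sin 90 = 1.
        float[][] R = {
                { 0, 0, 1},
                { 0, 1, 0},
                {-1, 0, 0}
        };
        return multiply(R, v);
    }
}
```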
Although I'm still not able to get this running correctly, I will mark this as the answer.

Scale, rotate, translate w. matrices in openGl ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale them.
I just set up my object at 0,0,0 and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them with my modelMatrix/objectMatrix and render afterwards.
What I think is right: I have to multiply my matrices in this order:
[scale]x[rotation]x[translation]
->
[temp]x[translation]
->
[objectMatrix]
I've found some literature. Maybe I'll get it in a few minutes; if I do, I will update this.
Beginning Android 3D
http://gamedev.stackexchange.com
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem with the whole thing is: if you scale before you translate, you get a difference in the distance you translate! The correct code for multiplying your matrices should be (correct me if I'm wrong):
private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}
With this you translate and rotate first. Afterwards you can scale the object.
Tested with multiple objects... so I hope this helped :)
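The "difference in the distance you translate" can be checked numerically with plain 4x4 row-major matrices and column vectors (a generic sketch, not the android.opengl.Matrix API): with T = translate(2,0,0) and S = scale(3), T·S moves a point at x = 1 to x = 5, while S·T moves it to x = 9, because in the latter order the translation itself gets scaled.

```java
// Sketch: why the [translate][rotate][scale] order matters. Row-major
// 4x4 matrices applied to column vectors; with this convention the
// rightmost matrix in a product is applied to the vertex first.
public class MulOrder {
    static final float[][] T = {{1,0,0,2},{0,1,0,0},{0,0,1,0},{0,0,0,1}}; // translate (2,0,0)
    static final float[][] S = {{3,0,0,0},{0,3,0,0},{0,0,3,0},{0,0,0,1}}; // uniform scale 3

    // Matrix * homogeneous point.
    static float[] apply(float[][] M, float[] p) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += M[i][j] * p[j];
        return r;
    }

    // Matrix * matrix.
    static float[][] mul(float[][] A, float[][] B) {
        float[][] C = new float[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }
}
```

Applying mul(T, S) to the point (1, 0, 0, 1) scales first and then translates, landing at x = 5; mul(S, T) translates first and then scales everything, landing at x = 9.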
You know, this is the most common issue people have when beginning to deal with matrix operations. Matrix multiplication works as if you were looking from the object's first-person view and receiving commands: for instance, if you began at (0,0,0) facing toward the positive X axis, with up being the positive Y axis, then translate(a,0,0) would mean "go forward", translate(0,0,a) would mean "go left", and rotate(a, 0, 1, 0) would mean "turn left"...
So if in your case you scaled by 3 units, rotated by 90 degrees and then translated by (2,0,0), what happens is: you first enlarge yourself by a scale of 3, then turn 90 degrees so you are now facing positive Z, still being quite large. Then you go forward by 2 units measured in your own coordinate system, which means you will actually go to (0,0,2*3). So you end up at (0,0,6) looking toward the positive Z axis.
I believe this way is the best to be able to imagine what goes on when dealing with such operations. And might save your life when having a bug in matrix operation order.
You should know that although this kind of matrix operation is normal when beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class containing 3 vectors: position, forward and up (much like using glLookAt, but not quite the same). With these 3 vectors you can simply set a specific position or rotation using trigonometry or your matrix tools, by multiplying the vectors with matrices instead of matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" is done as position = position + forward * scale, and "turn left" is rotating the forward vector around the up vector. Anyway, I hope you can see how to manipulate those 3 vectors to get a desired effect... To reconstruct the matrix from those 3 vectors, you need to generate another vector, right, which is the cross product of up and forward; then the model matrix consists of:
right.x, right.y, right.z, .0
up.x, up.y, up.z, .0
forward.x, forward.y, forward.z, .0
position.x, position.y, position.z, 1.0
Just note the row-column order may change depending on what you are working with.
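Reconstructing the model matrix from the three vectors can be sketched like this (plain Java; right = cross(up, forward) as described, though handedness and row/column order depend on your conventions, as noted above). ModelFromVectors is an illustrative name, not part of any API:

```java
// Sketch: build a 4x4 model matrix from position, forward and up vectors,
// with right = cross(up, forward), laid out row by row as in the text.
public class ModelFromVectors {
    static float[] cross(float[] a, float[] b) {
        return new float[]{
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]
        };
    }

    // Rows: right, up, forward, position -- matching the layout above.
    static float[][] modelMatrix(float[] position, float[] forward, float[] up) {
        float[] right = cross(up, forward);
        return new float[][]{
                {right[0],    right[1],    right[2],    0f},
                {up[0],       up[1],       up[2],       0f},
                {forward[0],  forward[1],  forward[2],  0f},
                {position[0], position[1], position[2], 1f}
        };
    }
}
```

With forward = (0, 0, 1) and up = (0, 1, 0), the computed right vector is (1, 0, 0), i.e. the identity orientation with the position in the last row.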
I hope this gives you some better understanding...

OpenGLES20 - Rotation matrix screws movements - rotate view matrix with angles instead?

Current state:
I'm creating the frustum, saved in mViewMatrix. I also have a quaternion q, a float array of three angles (e.g. yaw, roll, pitch), as well as a rotation matrix mRotationMatrix -- all representing the same rotation.
What I want to achieve is some sort of an augmented reality effect. I'm currently applying the mRotationMatrix to the mViewMatrix:
Matrix.setLookAtM(mTmpMatrix, 0, // mViewMatrix
        mCameraPosition[0], mCameraPosition[1], mCameraPosition[2], // eye
        mTargetRotPosition[0], mTargetRotPosition[1], mTargetRotPosition[2], // center
        0, 1, 0); // up
Matrix.setIdentityM(mViewMatrix, 0);
Matrix.multiplyMM(mViewMatrix, 0, mRotationMatrix, 0, mTmpMatrix, 0);
This handles the whole rotation, the up vector as well, so the rotation itself works fine. But since the rotation matrix comes from the device's sensors, it rotates around the wrong axes.
As a reference, this image should help:
Scenario #1:
Yaw: pointing towards north, it's 0.
Pitch: 0
Roll: 0
Camera is looking to the right, but y is correct.
If I now increase pitch, i.e. tilt the device up, the camera moves to the right instead of looking up.
If I increase yaw, the camera moves up instead of to the right.
If I increase roll, weird transformations happen.
In the video, I'm executing the movements in this order. The compass is also showing correct movements, just the transformations of the OpenGL camera are screwed.
Video: Sample screenrecord video
Currently, I'm using the following code to get the rotation matrix, as well as pitch/roll/yaw:
switch (rotation) {
    case Surface.ROTATION_0:
        mRemappedXAxis = SensorManager.AXIS_MINUS_Y;
        mRemappedYAxis = SensorManager.AXIS_Z;
        break;
    case Surface.ROTATION_90:
        mRemappedXAxis = SensorManager.AXIS_X;
        mRemappedYAxis = SensorManager.AXIS_Y;
        break;
    case Surface.ROTATION_180:
        mRemappedXAxis = SensorManager.AXIS_Y;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Z;
        break;
    case Surface.ROTATION_270:
        mRemappedXAxis = SensorManager.AXIS_MINUS_X;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Y;
        break;
}

float[] rotationMatrix = new float[16];
float[] correctedRotationMatrix = new float[16];
float[] rotationVector = new float[]{x, y, z}; // from sensor fusion
float[] orientationVals = new float[3];

SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
SensorManager.remapCoordinateSystem(rotationMatrix, mRemappedXAxis, mRemappedYAxis, correctedRotationMatrix);
SensorManager.getOrientation(correctedRotationMatrix, orientationVals);
I've already tried some other remap combinations, but none of them seemed to change anything in how the movements are translated.
My other thought was to rotate the vectors I'm using in setLookAtM myself. But I don't know how I'm supposed to handle the up vector.
If someone could show me / point me in the right direction for handling the rotation so that the movements I execute are interpreted correctly, or how I'm supposed to do this with the bare angles in OpenGL, I'd be thankful.
In my case, I calculated the incremental rotation delta (angle) every time the finger moved on the screen. From this incremental rotation angle I created a temporary rotation matrix. Then I post-multiplied my overall rotation matrix (the historical rotation matrix with all the previous incremental rotations) with this, and finally used the overall rotation matrix in my draw method.
The problem was that I was POST-MULTIPLYING the incremental rotation with my overall rotation, which meant my latest rotation would be applied to the object first and the oldest (first) rotation would be applied last.
This is what messed up everything.
The solution was simple: instead of post-multiplying, I pre-multiplied the incremental rotation with my overall rotation matrix. My rotations were now in the right order and everything worked fine.
Hope this helps.
Here is where I learnt it from. Check this question:
"9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?"
http://www.opengl.org/archives/resources/faq/technical/transformations.htm#tran0162
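The pre- vs post-multiplication difference exists because rotations about different axes don't commute. A minimal sketch with standard right-handed 3x3 axis rotations (illustrative code, not an Android or OpenGL API): applying Rx then Rz to a vector gives a different result than Rz then Rx.

```java
// Sketch: rotation order matters. Rx-then-Rz and Rz-then-Rx send the
// same vector to different places, which is exactly the pre- vs
// post-multiplication difference described above.
public class RotOrder {
    // Right-handed 90-degree rotations about the x and z axes.
    static final float[][] RX90 = {{1, 0, 0}, {0, 0, -1}, {0, 1, 0}};
    static final float[][] RZ90 = {{0, -1, 0}, {1, 0, 0}, {0, 0, 1}};

    static float[] apply(float[][] M, float[] v) {
        float[] r = new float[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i] += M[i][j] * v[j];
        return r;
    }
}
```

Starting from (1, 0, 0): Rx first leaves it unchanged and Rz then sends it to (0, 1, 0); in the other order, Rz sends it to (0, 1, 0) and Rx then sends it to (0, 0, 1).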

Strange Matrix transformation for SVG rotate

I have Java code for SVG drawing. It processes transforms including rotate, and does this very well, as far as I can see in numerous test pictures compared against their rendering in Chrome. Next, what I need is to get the actual object location, which in many images is set via transforms. So I decided just to read X and Y from the Matrix used for drawing. Unfortunately I get incorrect values for the rotate transform, that is, they do not correspond to the real object location in the image.
The stripped down code looks like this:
Matrix matrix = new Matrix();
float cx = 1000;   // suppose this is the object's X coordinate
float cy = 300;    // and this is its Y coordinate
float angle = -90; // rotate counterclockwise, from "rotate(-90, 1000, 300)"

// shift to -X,-Y, so the object is at the origin
matrix.postTranslate(-cx, -cy);
// actually rotate
matrix.postRotate(angle);
// shift back
matrix.postTranslate(cx, cy);

// debug goes here
float[] values = new float[9];
matrix.getValues(values);
Log.v("HELLO", values[Matrix.MTRANS_X] + " " + values[Matrix.MTRANS_Y]);
The log outputs the values 700 and 1300 respectively. I'd expect 0 and 0, because I see the object rotated in place in my image (that is, there is no movement), and the postTranslate calls should compensate each other. Of course, I can see how these values are formed from 1000 and 300, but I don't understand why. Once again, I point out that the matrix with these strange values is used for actual object drawing, and the drawing looks correct. Could someone explain what happens here? Am I missing something? So far I have only one solution: just don't try to obtain the position from rotate; do it only for explicit matrix and translate transforms. But this approach lacks generality, and anyway I thought the matrix should have reasonable values (including offsets) for any transformation type.
The answer is that the matrix is an operator for space transformation, and should not be used for direct extraction of the object position. Instead, one should take the initial object coordinates, as specified in the x and y attributes of the SVG tag, and apply the matrix to them:
float[] src = new float[2];
src[0] = cx;
src[1] = cy;
matrix.mapPoints(src);
After this we get the proper location values in src[0] and src[1].
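The "strange" values can be reproduced with a plain 2D affine transform (a sketch, not android.graphics.Matrix): for M = T(c)·R(θ)·T(-c), the translation part is c - R·c, which for θ = -90 and c = (1000, 300) is exactly (700, 1300) under the usual screen (y-down) rotation convention that matches the logged values; yet mapping the pivot point (1000, 300) itself still returns (1000, 300), confirming the rotation happens in place.

```java
// Sketch: reproduce the matrix values from the question with a plain 2D
// affine transform M = T(c) * R(theta) * T(-c). The translation part is
// c - R*c, not (0, 0) -- but mapping the pivot point returns the pivot,
// so the object does rotate in place.
public class RotateAboutPoint {
    // Row-major 2x3 affine {a, b, tx, c, d, ty}:
    // [x', y'] = [a b; c d] * [x, y] + [tx, ty].
    static float[] rotateAbout(float degrees, float cx, float cy) {
        double r = Math.toRadians(degrees);
        float cos = (float) Math.cos(r), sin = (float) Math.sin(r);
        // translation part: c - R*c
        float tx = cx - (cos * cx - sin * cy);
        float ty = cy - (sin * cx + cos * cy);
        return new float[]{cos, -sin, tx, sin, cos, ty};
    }

    static float[] mapPoint(float[] m, float x, float y) {
        return new float[]{m[0] * x + m[1] * y + m[2],
                           m[3] * x + m[4] * y + m[5]};
    }
}
```

rotateAbout(-90, 1000, 300) yields a translation of (700, 1300), the same numbers the question's Log.v printed, while mapPoint on (1000, 300) returns (1000, 300).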
