OpenGL ES 2.0 - Rotation matrix screws up movements - rotate view matrix with angles instead? - android

Current state:
I'm creating the frustum, saved in mViewMatrix. I also have a quaternion q, a float array of three angles (yaw, pitch, roll) and a rotation matrix mRotationMatrix, all representing the same device rotation.
What I want to achieve is some sort of augmented reality effect. I'm currently applying mRotationMatrix to mViewMatrix:
Matrix.setLookAtM(mTmpMatrix, 0, // copied into mViewMatrix below
        mCameraPosition[0], mCameraPosition[1], mCameraPosition[2],          // eye
        mTargetRotPosition[0], mTargetRotPosition[1], mTargetRotPosition[2], // center
        0, 1, 0);                                                            // up
Matrix.setIdentityM(mViewMatrix, 0);
Matrix.multiplyMM(mViewMatrix, 0, mRotationMatrix, 0, mTmpMatrix, 0);
This handles the whole rotation, including the up vector, so the rotation itself works. But since the rotation matrix comes from the device's sensors, it rotates around the wrong axes.
As a reference, this image (the device yaw/pitch/roll axes diagram) should help:
Scenario #1:
Yaw: pointing towards north, it's 0.
Pitch: 0
Roll: 0
The camera is looking to the right, but y is correct.
If I now increase pitch, i.e. tilt the device up, the camera moves to the right instead of looking up.
If I increase yaw, the camera moves up instead of to the right.
If I increase roll, weird transformations happen.
In the video I'm executing the movements in this order. The compass also shows the correct movements; only the transformations of the OpenGL camera are screwed up.
Video: Sample screenrecord video
Currently, I'm using the following code to get the rotation matrix, as well as pitch/roll/yaw:
switch (rotation) {
    case Surface.ROTATION_0:
        mRemappedXAxis = SensorManager.AXIS_MINUS_Y;
        mRemappedYAxis = SensorManager.AXIS_Z;
        break;
    case Surface.ROTATION_90:
        mRemappedXAxis = SensorManager.AXIS_X;
        mRemappedYAxis = SensorManager.AXIS_Y;
        break;
    case Surface.ROTATION_180:
        mRemappedXAxis = SensorManager.AXIS_Y;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Z;
        break;
    case Surface.ROTATION_270:
        mRemappedXAxis = SensorManager.AXIS_MINUS_X;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Y;
        break;
}

float[] rotationMatrix = new float[16];
float[] correctedRotationMatrix = new float[16];
float[] rotationVector = new float[]{x, y, z}; // from sensor fusion
float[] orientationVals = new float[3];

SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
SensorManager.remapCoordinateSystem(rotationMatrix, mRemappedXAxis, mRemappedYAxis, correctedRotationMatrix);
SensorManager.getOrientation(correctedRotationMatrix, orientationVals);
I've already tried some other remap combinations, but none of them seemed to change anything about how the movements translate.
My other thought was to rotate the vectors I pass to setLookAtM myself, but I don't know how I'm supposed to handle the up vector.
If someone could either show me or point me in the right direction for handling the rotation so that the movements I execute are interpreted correctly, or explain how to do this with the bare angles in OpenGL, I'd be thankful.
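Rotating the setLookAtM vectors manually would look something like the following minimal sketch; the key point is that the up vector has to be rotated by the same matrix as the look direction (this assumes mRotationMatrix is a float[16] usable with android.opengl.Matrix, and is an illustration, not the asker's code):
// Rotate both the look direction and the up vector by the same matrix,
// then hand them to setLookAtM. w = 0 marks them as directions.
float[] look = {0f, 0f, -1f, 0f};
float[] up = {0f, 1f, 0f, 0f};
float[] lookRot = new float[4];
float[] upRot = new float[4];

Matrix.multiplyMV(lookRot, 0, mRotationMatrix, 0, look, 0);
Matrix.multiplyMV(upRot, 0, mRotationMatrix, 0, up, 0);

Matrix.setLookAtM(mViewMatrix, 0,
        mCameraPosition[0], mCameraPosition[1], mCameraPosition[2], // eye
        mCameraPosition[0] + lookRot[0],                            // center = eye +
        mCameraPosition[1] + lookRot[1],                            // rotated look
        mCameraPosition[2] + lookRot[2],                            // direction
        upRot[0], upRot[1], upRot[2]);                              // rotated up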

In my case, I used to calculate an incremental rotation delta (angle) every time the finger moved on the screen. From this incremental rotation angle, I created a temporary rotation matrix. Then I post-multiplied my overall rotation matrix (the historical rotation matrix holding all previous incremental rotations) by this one, and finally used the overall rotation matrix in my draw method.
The problem was that I was POST-multiplying the incremental rotation onto my overall rotation, which meant my latest rotation would be applied to the object first and the oldest (first) rotation would be applied last.
This is what messed up everything.
The solution was simple: instead of post-multiplying, I pre-multiplied the incremental rotation with my overall rotation matrix. My rotations were now in the right order and everything worked fine, as the sketch below illustrates.
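A minimal sketch of that pre-multiplication, assuming android.opengl.Matrix and a hypothetical onDrag(dxAngle, dyAngle) callback that supplies the per-frame deltas:
import android.opengl.Matrix;

public class RotationAccumulator {
    // Overall (historical) rotation; starts as the identity.
    private final float[] mAccumulated = new float[16];
    private final float[] mIncremental = new float[16];
    private final float[] mTemp = new float[16];

    public RotationAccumulator() {
        Matrix.setIdentityM(mAccumulated, 0);
    }

    // Called with the incremental angles (degrees) for this touch-move event.
    public void onDrag(float dxAngle, float dyAngle) {
        // Build this frame's incremental rotation.
        Matrix.setIdentityM(mIncremental, 0);
        Matrix.rotateM(mIncremental, 0, dxAngle, 0f, 1f, 0f);
        Matrix.rotateM(mIncremental, 0, dyAngle, 1f, 0f, 0f);

        // PRE-multiply: the newest rotation goes on the left, so it is
        // applied after all earlier ones, i.e. about the fixed world axes
        // rather than the object's already-rotated local axes.
        Matrix.multiplyMM(mTemp, 0, mIncremental, 0, mAccumulated, 0);
        System.arraycopy(mTemp, 0, mAccumulated, 0, 16);
    }

    // Use the result as the rotation part of the model matrix in draw().
    public float[] getRotationMatrix() {
        return mAccumulated;
    }
}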
Hope this helps.
Here is where I learnt it from; check this question:
"9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?"
http://www.opengl.org/archives/resources/faq/technical/transformations.htm#tran0162

Related

Finding relative orientation in Android

I'm building an AR application where an object on the camera preview is tracked by a computer vision algorithm (solvePnP). I want to make this experience smoother and use the phone's gyroscope to rotate the object in between solvePnP results. The first step was to run solvePnP once, project the points onto the screen, and then rely on orientation alone to track the object (I make sure not to translate the phone too much).
The desired effect: if I hold the phone upright in portrait mode with the camera pointing at some object of interest, and I rotate the phone positively about its vertical axis, I expect the corresponding object on the screen to rotate negatively about its vertical axis. I'm using Sensor.TYPE_ROTATION_VECTOR to get the quaternions from the phone.
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
        if (!didInitRotation) {
            SensorManager.getRotationMatrixFromVector(mInitialRotationMatrix, event.values);
            didInitRotation = true;
        }
        SensorManager.getRotationMatrixFromVector(mRotationMatrix, event.values);

        float[] rotationMatrixInverse = new float[16];
        float[] relativeOrientation = new float[16];
        Matrix.invertM(rotationMatrixInverse, 0, mRotationMatrix, 0);
        Matrix.multiplyMM(relativeOrientation, 0, rotationMatrixInverse, 0, mInitialRotationMatrix, 0);
        mGLAssetSurfaceView.getRenderer().setRotationMatrix(relativeOrientation);
    }
}
As you can see, I hold a reference to an "initial rotation matrix" and then multiply the inverse of the current rotation matrix by it to get the relative orientation. I then apply this to the view matrix in OpenGL.
The result is that TYPE_ROTATION_VECTOR behaves strangely: if I rotate the phone about its x-axis, the object on the screen moves diagonally from one corner of the screen to the other.

Using Rotation Matrix to rotate points in space

I'm using Android's rotation matrix to rotate multiple points in space.
Work so far
I start by reading the matrix from the SensorManager.getRotationMatrix function. Next I transform the rotation matrix into a quaternion, using the explanation given in this link. I'm doing this because I read that Euler angles can lead to the gimbal lock issue and that operations with a 3x3 matrix can be costly. source
Problem
Now what I want to do is: imagine the phone is the origin of the frame of reference, and, given a set of points (lat/lng coordinates projected into an xyz coordinate system; see the method below), I want to rotate them so I can check which ones are in my line of sight. For that I'm using this SO question, which returns an X and a Y (left and top, respectively) to display the point on screen. It works fine, but only when facing North (because it doesn't take orientation into account, and my projected vector uses North/South as X and East/West as Z). So my thought was to rotate all the objects. Also, even though the initial altitude (Y) is 0, I want to be able to position the point up/down according to the phone's orientation.
I think part of the solution may be in this post. But since it uses Euler angles, I don't think that's the best method.
Conclusion
So, if it's really better to rotate each point's position, how can I achieve that using the rotation quaternion? Otherwise, which is the better way?
I'm sorry if I said anything wrong in this post. I'm not good at physics.
Code
// This function returns a 3D vector (0 for Y, since altitude is discarded)
// from two lat/lng coordinates.
public static float[] convLocToVec(LatLng source, LatLng destination) {
    float[] z = new float[1];
    z[0] = 0;
    Location.distanceBetween(source.latitude, source.longitude,
            destination.latitude, source.longitude, z);
    float[] x = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            source.latitude, destination.longitude, x);
    if (source.latitude < destination.latitude)
        z[0] *= -1;
    if (source.longitude > destination.longitude)
        x[0] *= -1;
    return new float[]{x[0], 0f, z[0]};
}
Thanks for your help and have a nice day.
UPDATE 1
According to Wikipedia:
"Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v→. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector."
Should I really just use the rotation matrix to rotate a vector?
Since no one answered, I'm here to answer myself.
After some research (a lot, actually) I came to the conclusion that yes, it is possible to rotate a vector using a quaternion, but it's better to transform the quaternion into a rotation matrix first.
Rotation matrix - 9 multiplications and 6 additions
Quaternion - 15 multiplications and 15 additions
Source: Performance comparisons
It's better to use the rotation matrix provided by Android. Also, if you are going to use a quaternion somehow (Sensor.TYPE_ROTATION_VECTOR + SensorManager.getQuaternionFromVector, for example), you can (and should) transform it into a rotation matrix: use SensorManager.getRotationMatrixFromVector to convert the rotation vector to a matrix. After you get the rotation matrix, you just have to multiply it by the projected vector you want. You can use this function for that:
public float[] multiplyByVector(float[][] A, float[] x) {
    int m = A.length;
    int n = A[0].length;
    if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
    float[] y = new float[m];
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            y[i] += A[i][j] * x[j];
    return y;
}
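Note that SensorManager writes the matrix into a flat, row-major float[9], not a float[][], so the same multiplication can also be written directly against that flat layout. A minimal sketch (rotateVector is a hypothetical helper, not part of the Android API):
// Applies a flat, row-major 3x3 rotation matrix R (as written by
// SensorManager.getRotationMatrix into a float[9]) to a 3-vector v.
public static float[] rotateVector(float[] R, float[] v) {
    return new float[]{
            R[0] * v[0] + R[1] * v[1] + R[2] * v[2],
            R[3] * v[0] + R[4] * v[1] + R[5] * v[2],
            R[6] * v[0] + R[7] * v[1] + R[8] * v[2]
    };
}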
Although I'm still not able to get this running correctly, I will mark this as the answer.

Scale, rotate, translate w. matrices in openGl ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale objects.
I set my object up at (0,0,0) and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them with my modelMatrix/objectMatrix, and render afterwards.
What I think is right: I have to multiply my matrices in this order:
[scale] x [rotation] -> [temp]
[temp] x [translation] -> [objectMatrix]
I've found some literature; maybe I'll get it in a few minutes, and if I do, I'll update this.
Beginning Android 3D
http://gamedev.stackexchange.com
// Static imports from android.opengl.Matrix assumed (setIdentityM, etc.).
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem with the whole thing is that if you scale before you translate, the translation distance gets scaled as well! The correct code for multiplying your matrices should be (correct me if I'm wrong):
private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}
With this you translate and rotate first; afterwards you can scale the object.
Tested with multiple objects... so I hope this helped :)
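A small illustration of the difference (mine, not from the answer) using android.opengl.Matrix: with the scale-first composition a 2-unit translation is stretched to 6 world units, while the translate-first composition keeps it at 2.
float[] scaleFirst = new float[16];
float[] translateFirst = new float[16];
float[] out = new float[4];
float[] origin = {0f, 0f, 0f, 1f};

// scaleFirst = S * T: the 2-unit translation gets scaled by 3.
Matrix.setIdentityM(scaleFirst, 0);
Matrix.scaleM(scaleFirst, 0, 3f, 3f, 3f);
Matrix.translateM(scaleFirst, 0, 2f, 0f, 0f);
Matrix.multiplyMV(out, 0, scaleFirst, 0, origin, 0);     // out = (6, 0, 0, 1)

// translateFirst = T * S: the translation stays at 2 units.
Matrix.setIdentityM(translateFirst, 0);
Matrix.translateM(translateFirst, 0, 2f, 0f, 0f);
Matrix.scaleM(translateFirst, 0, 3f, 3f, 3f);
Matrix.multiplyMV(out, 0, translateFirst, 0, origin, 0); // out = (2, 0, 0, 1)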
You know, this is the most common issue people hit when starting to deal with matrix operations. The way matrix multiplication works is as if you were looking from the object's first-person view and taking commands: for instance, if you began at (0,0,0) facing toward the positive X axis with up being the positive Y axis, then translate(a,0,0) would mean "go forward", translate(0,0,a) would mean "step to the right" (in OpenGL's right-handed convention), and rotate(a, 0, 1, 0) would mean "turn left"...
So if in your case you scaled by 3 units, rotated by 90 degrees and then translated by (2,0,0), what happens is: you first enlarge yourself by a scale of 3, then turn 90 degrees to the left, so you are now facing the negative Z axis (a positive rotation about +Y turns +X toward -Z), still being quite large. Then you go forward by 2 units measured in your own coordinate system, which is 2*3 = 6 world units. So you end up at (0,0,-6), looking down the negative Z axis.
I believe this is the best way to imagine what goes on when dealing with such operations, and it might save your life when you have a bug in your matrix operation order.
You should know that although this kind of matrix operating is normal when beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class which contains 3 vectors: position, forward and up (this is much like using glLookAt, but not totally the same). With these 3 vectors you can simply set a specific position or rotation using trigonometry or your matrix tools, multiplying the vectors with matrices instead of matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" is done as position = position + forward*distance, and "turn left" is done by rotating the forward vector around the up vector. Anyway, I hope you can understand how to manipulate those 3 vectors to get a desired effect. To reconstruct the matrix from those 3 vectors, you need to generate a fourth vector, right, which is the cross product of up and forward; then the model matrix consists of:
right.x, right.y, right.z, .0
up.x, up.y, up.z, .0
forward.x, forward.y, forward.z, .0
position.x, position.y, position.z, 1.0
Just note the row-column order may change depending on what you are working with.
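A minimal sketch of filling a column-major float[16] for android.opengl.Matrix from those vectors (buildModelMatrix is a hypothetical helper; it assumes up and forward are already normalized and perpendicular):
// Builds a column-major 4x4 model matrix from up, forward and position,
// matching the layout listed above.
public static void buildModelMatrix(float[] m, float[] up, float[] forward,
                                    float[] position) {
    // right = up x forward, as described above.
    float[] right = {
            up[1] * forward[2] - up[2] * forward[1],
            up[2] * forward[0] - up[0] * forward[2],
            up[0] * forward[1] - up[1] * forward[0]
    };
    m[0] = right[0];     m[1] = right[1];     m[2] = right[2];     m[3] = 0f;
    m[4] = up[0];        m[5] = up[1];        m[6] = up[2];        m[7] = 0f;
    m[8] = forward[0];   m[9] = forward[1];   m[10] = forward[2];  m[11] = 0f;
    m[12] = position[0]; m[13] = position[1]; m[14] = position[2]; m[15] = 1f;
}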
I hope this gives you some better understanding...

Android: axes vectors from orientation/rotational angles?

So there's a couple methods in the Android SensorManager to get your phone's orientation:
float[] rotational = new float[9];
float[] orientation = new float[3];
SensorManager.getRotationMatrix(rotational, whatever, whatever, whatever);
SensorManager.getOrientation(rotational, orientation);
This gives you a rotation matrix called "rotational" and an array of 3 orientation angles called "orientation". However, I can't use the angles in my AR program - what I need is the actual vectors which represent the axes.
For example, in this image from Wikipedia (the standard Euler-angles diagram):
I'm basically being given the α, β, and γ angles (though not exactly, since I don't have an N; I'm given the angles from each of the blue axes), and I need to find the vectors which represent the X, Y, and Z axes (red in the image). Does anyone know how to do this conversion? The directions on Wikipedia are very complicated, and my attempts to follow them have not worked. Also, I think the data Android gives you may be in a slightly different order or format than what the conversion directions on Wikipedia expect.
Or as an alternative to these conversions, does anyone know any other ways to get the X, Y, and Z axes from the camera's perspective? (Meaning, what vector is the camera looking down? And what vector does the camera consider to be "up"?)
The rotation matrix in Android provides a rotation from the body (a.k.a device) frame to the world (a.k.a. inertial) frame. A normal back facing camera appears in landscape mode on the screen. This is native mode for a tablet, so has the following axes in the device frame:
camera_x_tablet_body = (1,0,0)
camera_y_tablet_body = (0,1,0)
camera_z_tablet_body = (0,0,1)
On a phone, where portrait is native mode, a rotation of the device into landscape with top turned to point left is:
camera_x_phone_body = (0,-1,0)
camera_y_phone_body = (1,0,0)
camera_z_phone_body = (0,0,1)
Now applying the rotation matrix will put this in the world frame, so (for rotation matrix R[] of size 9):
camera_x_tablet_world = (R[0],R[3],R[6]);
camera_y_tablet_world = (R[1],R[4],R[7]);
camera_z_tablet_world = (R[2],R[5],R[8]);
In general, you can use SensorManager.remapCoordinateSystem(); for the phone example above (Display.getRotation() == Surface.ROTATION_90) it gives the axes listed. But if the screen is rotated differently (ROTATION_270, for example) the remapping will be different.
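For reference, a sketch of the commonly used per-screen-rotation remap table (an assumption of the usual pattern, not taken from this answer; verify against your device):
// Remap the rotation matrix inR to match the current screen rotation.
int axisX = SensorManager.AXIS_X;
int axisY = SensorManager.AXIS_Y;  // ROTATION_0 keeps the defaults
switch (display.getRotation()) {
    case Surface.ROTATION_90:
        axisX = SensorManager.AXIS_Y;
        axisY = SensorManager.AXIS_MINUS_X;
        break;
    case Surface.ROTATION_180:
        axisX = SensorManager.AXIS_MINUS_X;
        axisY = SensorManager.AXIS_MINUS_Y;
        break;
    case Surface.ROTATION_270:
        axisX = SensorManager.AXIS_MINUS_Y;
        axisY = SensorManager.AXIS_X;
        break;
}
SensorManager.remapCoordinateSystem(inR, axisX, axisY, outR);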
Also, an aside: the best way to get orientation in Android is to listen for Sensor.TYPE_ROTATION_VECTOR events. These are filled with the best possible orientation on most (i.e. Gingerbread or newer) platforms. The event values are actually the vector part of the orientation quaternion. You can get the full quaternion using this (the last two lines show a way to get the rotation matrix):
float vec[] = event.values.clone();
float quat[] = new float[4];
SensorManager.getQuaternionFromVector(quat, vec);

float[] rotMat = new float[9];
// Note: getRotationMatrixFromVector expects the rotation vector itself,
// not the w-first quaternion that getQuaternionFromVector returns.
SensorManager.getRotationMatrixFromVector(rotMat, vec);
More information at: http://www.sensorplatforms.com/which-sensors-in-android-gets-direct-input-what-are-virtual-sensors
SensorManager.getRotationMatrix(rotational, null, gravityVals, geoMagVals);

// Vector here is a simple 3D vector class, not java.util.Vector.
// camera's x-axis
Vector u = new Vector(-rotational[1], -rotational[4], -rotational[7]); // right of phone (in landscape mode)
// camera's y-axis
Vector v = new Vector(rotational[0], rotational[3], rotational[6]); // top of phone (in landscape mode)
// camera's z-axis (negative into the scene)
Vector n = new Vector(rotational[2], rotational[5], rotational[8]); // front of phone (the screen)
// world axes (x,y,z):
// +x is East
// +y is North
// +z is sky
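To tie this back to OpenGL, a minimal sketch (an illustration with hypothetical eyeX/eyeY/eyeZ and viewMatrix, not part of the answer) that feeds these world-frame camera axes into Matrix.setLookAtM; the camera looks out of the back of the phone, i.e. along -n:
// rotational is the row-major float[9] filled by getRotationMatrix above.
float lookX = eyeX - rotational[2];
float lookY = eyeY - rotational[5];
float lookZ = eyeZ - rotational[8];

Matrix.setLookAtM(viewMatrix, 0,
        eyeX, eyeY, eyeZ,                             // camera position
        lookX, lookY, lookZ,                          // one unit ahead, along -n
        rotational[0], rotational[3], rotational[6]); // up = v (camera's y-axis)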
The rotation matrix that you receive from getRotationMatrix is based on the gravity and geomagnetic fields; in other words, X points East, Y North, and Z toward the sky (http://developer.android.com/reference/android/hardware/SensorManager.html).
To the point of your question, I think the three rotation values can be used directly as a vector, but provide the values in reverse order:
"For either Euler or Tait-Bryan angles, it is very simple to convert from an intrinsic (rotating axes) to an extrinsic (static axes) convention, and vice-versa: just swap the order of the operations. An (α, β, γ) rotation using X-Y-Z intrinsic convention is equivalent to a (γ, β, α) rotation using Z-Y-X extrinsic convention; this is true for all Euler or Tait-Bryan axis combinations."
Source: Wikipedia
I hope this helps!

How to calculate the direction the screen is facing

How do you calculate the direction your camera is pointing towards in Android? Azimuth works only if the device is vertical. How do you account for the pitch and roll?
If R is the rotation matrix, I want to find something like:
getRotationMatrix(R, I, grav, mag);
float[] rotated = R * {0, 0, -1}; // not sure how to do the matrix multiplication
float direction = Math.atan(rotated[0] / rotated[1]);
I would look up the Accelerometer API. Also, check out this article; it might help: http://www.anddev.org/convert_android_accelerometer_values_and_get_tilt_from_accel-t6595.html
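The pseudocode in the question can also be completed directly. A minimal sketch, assuming grav and mag hold recent accelerometer and magnetometer readings:
// R maps device coordinates to world coordinates (X East, Y North, Z sky).
float[] R = new float[9];
float[] I = new float[9];
if (SensorManager.getRotationMatrix(R, I, grav, mag)) {
    // World direction the back camera points at: R * (0, 0, -1),
    // i.e. the negated third column of the row-major matrix.
    float east = -R[2];
    float north = -R[5];
    // atan2 handles all quadrants; the result is in radians East of North.
    float direction = (float) Math.atan2(east, north);
}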
