Finding relative orientation in Android

I'm building an AR application where an object on the camera preview is tracked by a computer vision algorithm (solvePnP). I want to make the experience smoother by using the phone's gyroscope to rotate the object in between solvePnP results. The first step was to run solvePnP once, project the points onto the screen, and then rely on orientation alone to track the object (I make sure not to translate the phone too much).
The desired effect: if I hold the phone in portrait mode, upright, with the camera pointing at some object of interest, and I rotate the phone positively about its vertical axis, I expect the corresponding object on the screen to rotate negatively about its vertical axis. I'm using Sensor.TYPE_ROTATION_VECTOR to get the quaternions from the phone.
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
        // capture the first reading as the reference orientation
        if (!didInitRotation) {
            SensorManager.getRotationMatrixFromVector(mInitialRotationMatrix, event.values);
            didInitRotation = true;
        }
        SensorManager.getRotationMatrixFromVector(mRotationMatrix, event.values);
        float[] rotationMatrixInverse = new float[16];
        float[] relativeOrientation = new float[16];
        Matrix.invertM(rotationMatrixInverse, 0, mRotationMatrix, 0);
        Matrix.multiplyMM(relativeOrientation, 0, rotationMatrixInverse, 0, mInitialRotationMatrix, 0);
        mGLAssetSurfaceView.getRenderer().setRotationMatrix(relativeOrientation);
    }
}
As you can see, I hold a reference to an "initial rotation matrix" and then multiply the inverse of the current rotation matrix by it to get the relative orientation, which I then apply to the view matrix in OpenGL.
The result is that TYPE_ROTATION_VECTOR behaves strangely: if I rotate the phone about its x-axis, the object on the screen moves diagonally from one corner of the screen to the other.
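For reference, here is a minimal sketch of that relative-orientation step (a sketch only, using the field names from the snippet above; since a sensor rotation matrix is orthonormal, Matrix.transposeM gives the same result as the more expensive Matrix.invertM):

float[] currentInverse = new float[16];
float[] relative = new float[16];
// transpose == inverse for a pure rotation matrix
Matrix.transposeM(currentInverse, 0, mRotationMatrix, 0);
// relative = inverse(current) * initial, as in the code above;
// swapping the operands reverses the direction of the delta rotation
Matrix.multiplyMM(relative, 0, currentInverse, 0, mInitialRotationMatrix, 0);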

Related

Scale, rotate, translate w. matrices in openGl ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale objects.
I set my object up at (0,0,0) and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them with my modelMatrix/objectMatrix and render afterwards.
What I think is right: I have to multiply my matrices in this order:
[scale] x [rotation] x [translation], i.e.
[scale] x [rotation] -> [temp]
[temp] x [translation] -> [objectMatrix]
I've found some literature; maybe I'll get it in a few minutes, and if I do, I will update this.
Beginning Android 3D
http://gamedev.stackexchange.com
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem with the whole thing is that if you scale before you translate, you get a difference in the distance you translate! The correct code for multiplying the matrices should be (correct me if I'm wrong):

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}

With this you translate and rotate first; afterwards you can scale the object.
Tested with multiple objects... so I hope this helped :)
You know, this is the most common issue people run into when they begin dealing with matrix operations. Matrix multiplication works as if you were looking from the object's first-person view and receiving commands: for instance, if you began at (0,0,0) facing toward the positive X axis, with up being the positive Y axis, then translate(a,0,0) would mean "go forward", translate(0,0,a) would mean "go left", and rotate(a, 0, 1, 0) would mean "turn left"...
So if, in your case, you scaled by 3 units, rotated by 90 degrees and then translated by (2,0,0), what happens is: you first enlarge yourself by a scale of 3, then turn 90 degrees so you are now facing positive Z, still being quite large. Then you go forward by 2 units measured in your own coordinate system, which means you actually move to (0,0,2*3). So you end up at (0,0,6), looking toward the positive Z axis.
I believe this is the best way to imagine what goes on when dealing with such operations, and it might save your life when you have a bug in the matrix operation order.
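To make that walkthrough concrete, here is a minimal sketch using android.opengl.Matrix that reproduces the (0,0,6) result (the rotation sign is chosen to match the convention of the example above; with the opposite sign you would land at (0,0,-6)):

float[] s = new float[16], r = new float[16], t = new float[16];
float[] sr = new float[16], m = new float[16];
// scale by 3, rotate 90 degrees about Y, translate by (2,0,0)
Matrix.setIdentityM(s, 0);
Matrix.scaleM(s, 0, 3f, 3f, 3f);
Matrix.setRotateM(r, 0, -90f, 0f, 1f, 0f);
Matrix.setIdentityM(t, 0);
Matrix.translateM(t, 0, 2f, 0f, 0f);
// combine in the [scale] x [rotation] x [translation] order from the question
Matrix.multiplyMM(sr, 0, s, 0, r, 0);
Matrix.multiplyMM(m, 0, sr, 0, t, 0);
// transform the origin: the translation of 2 units comes out scaled to 6
float[] p = {0f, 0f, 0f, 1f};
float[] out = new float[4];
Matrix.multiplyMV(out, 0, m, 0, p, 0);
// out is approximately (0, 0, 6, 1)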
You should know that although this kind of matrix manipulation is normal when beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class which contains 3 vectors: position, forward and up (this is much like using glLookAt, but not totally the same). With these 3 vectors you can simply set a specific position or rotation using trigonometry, or with your matrix tools by multiplying the vectors with matrices instead of multiplying matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" is done as position = position + forward*scale, and "turn left" is rotating the forward vector around the up vector. Anyway, I hope you can understand how to manipulate those 3 vectors to get a desired effect... To reconstruct the matrix from those 3 vectors, you need to generate one more vector, right, which is the cross product of up and forward; then the model matrix consists of:
right.x,    right.y,    right.z,    0.0
up.x,       up.y,       up.z,       0.0
forward.x,  forward.y,  forward.z,  0.0
position.x, position.y, position.z, 1.0
Just note that the row-column order may change depending on what you are working with.
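A minimal sketch of that reconstruction (the helper name is illustrative; it assumes the row layout listed above, so transpose the result if your pipeline expects the other order):

static float[] buildModelMatrix(float[] pos, float[] fwd, float[] up) {
    // right = up x forward, as described above
    float[] right = {
            up[1] * fwd[2] - up[2] * fwd[1],
            up[2] * fwd[0] - up[0] * fwd[2],
            up[0] * fwd[1] - up[1] * fwd[0]
    };
    return new float[] {
            right[0], right[1], right[2], 0f,
            up[0],    up[1],    up[2],    0f,
            fwd[0],   fwd[1],   fwd[2],   0f,
            pos[0],   pos[1],   pos[2],   1f
    };
}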
I hope this gives you some better understanding...

OpenGLES20 - Rotation matrix screws movements - rotate view matrix with angles instead?

Current state:
I create the frustum, saved in mViewMatrix. I also have a quaternion q, a float array of three angles (e.g. yaw, roll, pitch), and a rotation matrix mRotationMatrix representing the same rotation as the quaternion and the angles.
What I want to achieve is some sort of augmented reality effect. I'm currently applying mRotationMatrix to mViewMatrix:
Matrix.setLookAtM(mTmpMatrix, 0, // mViewMatrix
        mCameraPosition[0], mCameraPosition[1], mCameraPosition[2], // eye
        mTargetRotPosition[0], mTargetRotPosition[1], mTargetRotPosition[2], // center
        0, 1, 0); // up
Matrix.setIdentityM(mViewMatrix, 0);
Matrix.multiplyMM(mViewMatrix, 0, mRotationMatrix, 0, mTmpMatrix, 0);
This handles the whole rotation, the up vector as well, so the rotation itself works fine. But since the rotation matrix comes from the device's sensors, the rotation ends up being around the wrong axes.
Scenario #1:
Yaw: pointing towards north, it's 0.
Pitch: 0
Roll: 0
The camera looks to the right, but y is correct.
If I now increase the pitch, i.e. pick the device up, the camera moves to the right instead of looking up.
If I increase the yaw, the camera moves up instead of to the right.
If I increase the roll, weird transformations happen.
In the video, I'm executing the movements in this order. The compass also shows the correct movements; only the transformations of the OpenGL camera are screwed up.
Video: Sample screenrecord video
Currently, I'm using the following code to get the rotation matrix, as well as pitch/roll/yaw:
switch (rotation) {
    case Surface.ROTATION_0:
        mRemappedXAxis = SensorManager.AXIS_MINUS_Y;
        mRemappedYAxis = SensorManager.AXIS_Z;
        break;
    case Surface.ROTATION_90:
        mRemappedXAxis = SensorManager.AXIS_X;
        mRemappedYAxis = SensorManager.AXIS_Y;
        break;
    case Surface.ROTATION_180:
        mRemappedXAxis = SensorManager.AXIS_Y;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Z;
        break;
    case Surface.ROTATION_270:
        mRemappedXAxis = SensorManager.AXIS_MINUS_X;
        mRemappedYAxis = SensorManager.AXIS_MINUS_Y;
        break;
}
float[] rotationMatrix = new float[16];
float[] correctedRotationMatrix = new float[16];
float[] rotationVector = new float[]{x, y, z}; // from sensor fusion
float[] orientationVals = new float[3];
SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
SensorManager.remapCoordinateSystem(rotationMatrix, mRemappedXAxis, mRemappedYAxis, correctedRotationMatrix);
SensorManager.getOrientation(correctedRotationMatrix, orientationVals);
I've already tried some other remap combinations, but none of them seemed to change anything in how the movements are translated.
My other thought was to rotate the vectors I'm using in setLookAtM myself, but I don't know how I'm supposed to handle the up vector.
If someone could show me, or point me in the right direction, how to handle the rotation so that the movements I execute are interpreted correctly, or else how I'm supposed to do this with the bare angles in OpenGL, I'd be thankful.
In my case, I used to calculate the incremental rotation delta (angle) every time the finger moved on the screen. From this incremental rotation angle, I created a temporary rotation matrix. Then I post-multiplied my overall rotation matrix (the historical rotation matrix holding all the previous incremental rotations) with this one, and finally used the overall rotation matrix in my draw method.
The problem was that I was POST-MULTIPLYING the incremental rotation with my overall rotation, which meant my latest rotation would be applied to the object first and the oldest (first) rotation would be applied last.
This is what messed up everything.
The solution was simple: instead of post-multiplying, I pre-multiplied the incremental rotation with my overall rotation matrix. My rotations were now in the right order and everything worked fine.
Hope this helps.
Here is where I learnt it from. Check this question:
"9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?"
http://www.opengl.org/archives/resources/faq/technical/transformations.htm#tran0162
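A minimal sketch of that fix using android.opengl.Matrix (deltaAngle, axisX/Y/Z and overallRotation are illustrative names):

float[] incremental = new float[16];
float[] result = new float[16];
// build the incremental rotation for this frame
Matrix.setRotateM(incremental, 0, deltaAngle, axisX, axisY, axisZ);
// PRE-multiply: overall = incremental * overall, so the newest rotation
// is applied about the fixed axes rather than the object's own axes
Matrix.multiplyMM(result, 0, incremental, 0, overallRotation, 0);
System.arraycopy(result, 0, overallRotation, 0, 16);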

Acceleration from device's coordinate system into absolute coordinate system

From my Android device I can read an array of linear acceleration values (in the device's coordinate system) and an array of absolute orientation values (in Earth's coordinate system). What I need is to obtain the linear acceleration values in the latter coord. system.
How can I convert them?
EDIT after Ali's reply in comment:
All right, so if I understand correctly: when I measure the linear acceleration, the position of the phone doesn't matter at all, because the readings are given in the Earth's coordinate system, right?
But I just did a test where I put the phone in different positions and got acceleration along different axes. There are 3 pairs of pictures: the first ones show how I put the device (sorry for my Paint "master skill") and the second ones show the readings provided by the linear acceleration sensor:
device put on left side
device lying on back
device standing
And now: why, in the third case, does the acceleration occur along the Z axis (not Y), if the device's position doesn't matter?
I finally managed to solve it! To get the acceleration vector in the Earth's coordinate system you need to:
1. get the rotation matrix (as a float[16], so it can be used later by the android.opengl.Matrix class) from SensorManager.getRotationMatrix() (using Sensor.TYPE_GRAVITY and Sensor.TYPE_MAGNETIC_FIELD sensor values as parameters),
2. use android.opengl.Matrix.invertM() on the rotation matrix to invert it (not transpose!),
3. use the Sensor.TYPE_LINEAR_ACCELERATION sensor to get the linear acceleration vector (in the device's coordinate system),
4. use android.opengl.Matrix.multiplyMV() to multiply the rotation matrix by the linear acceleration vector.
And there you have it! I hope I will save some precious time for others.
Thanks to Edward Falk and Ali for the hints!
Based on @alex's answer, here is the code snippet:
private float[] gravityValues = null;
private float[] magneticValues = null;

@Override
public void onSensorChanged(SensorEvent event) {
    if ((gravityValues != null) && (magneticValues != null)
            && (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)) {
        float[] deviceRelativeAcceleration = new float[4];
        deviceRelativeAcceleration[0] = event.values[0];
        deviceRelativeAcceleration[1] = event.values[1];
        deviceRelativeAcceleration[2] = event.values[2];
        deviceRelativeAcceleration[3] = 0;

        // Change the device relative acceleration values to earth relative values
        // X axis -> East
        // Y axis -> North Pole
        // Z axis -> Sky
        float[] R = new float[16], I = new float[16], earthAcc = new float[16];
        SensorManager.getRotationMatrix(R, I, gravityValues, magneticValues);

        float[] inv = new float[16];
        android.opengl.Matrix.invertM(inv, 0, R, 0);
        android.opengl.Matrix.multiplyMV(earthAcc, 0, inv, 0, deviceRelativeAcceleration, 0);
        Log.d("Acceleration", "Values: (" + earthAcc[0] + ", " + earthAcc[1] + ", " + earthAcc[2] + ")");
    } else if (event.sensor.getType() == Sensor.TYPE_GRAVITY) {
        gravityValues = event.values.clone(); // clone: the system reuses event.values
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        magneticValues = event.values.clone();
    }
}
According to the documentation you get the linear acceleration in the phone's coordinate system.
You can transform any vector from the phone's coordinate system to the Earth's coordinate system by multiplying it with the rotation matrix. You can get the rotation matrix from getRotationMatrix().
(Perhaps there already is a function doing this multiplication for you but I don't do Android programming and I am not familiar with its API.)
A nice tutorial on the rotation matrix is the Direction Cosine Matrix IMU: Theory manuscript. Good luck!
OK, first of all, if you're trying to do actual inertial navigation on Android, you've got your work cut out for you. The cheap little sensors used in smartphones are just not precise enough, although there has been some interesting work done on inertial navigation over small distances, such as inside a building. There are probably papers on the subject you can dig up. Google "Motion Interface Developers Conference" and you might find something useful; that's a conference that Invensense put on a couple of months ago.
Second, no, linear acceleration is in device coordinates, not world coordinates. You'll have to convert it yourself, which means knowing the device's 3-D orientation.
What you want to do is use a version of Android that supports the virtual sensors TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION. You'll need a device with gyros to get reasonably accurate and precise readings.
Internally, the system combines gyros, accelerometers, and magnetometers to come up with true values for the device orientation. This effectively splits the accelerometer readings into their gravity and linear-acceleration components.
So what you want to do is set up sensor listeners for TYPE_GRAVITY, TYPE_LINEAR_ACCELERATION, and TYPE_MAGNETIC_FIELD. Use the gravity and magnetometer data as inputs to SensorManager.getRotationMatrix() to get the rotation matrix that transforms world coordinates into device coordinates, or vice versa. In this case, you'll want the "versa" part: convert the linear acceleration input to world coordinates by multiplying it by the transpose of the orientation matrix.
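A minimal sketch of that "versa" direction (gravityValues, magneticValues and deviceAcc are illustrative names, filled in from the corresponding sensor events; for a pure rotation matrix the transpose equals the inverse):

float[] R = new float[16], I = new float[16], Rt = new float[16];
float[] worldAcc = new float[4];
// deviceAcc is a 4-element vector holding the TYPE_LINEAR_ACCELERATION values
if (SensorManager.getRotationMatrix(R, I, gravityValues, magneticValues)) {
    android.opengl.Matrix.transposeM(Rt, 0, R, 0);
    android.opengl.Matrix.multiplyMV(worldAcc, 0, Rt, 0, deviceAcc, 0);
}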

Android: axes vectors from orientation/rotational angles?

So there's a couple methods in the Android SensorManager to get your phone's orientation:
float[] rotational = new float[9];
float[] orientation = new float[3];
SensorManager.getRotationMatrix(rotational, whatever, whatever, whatever);
SensorManager.getOrientation(rotational, orientation);
This gives you a rotation matrix called "rotational" and an array of 3 orientation angles called "orientation". However, I can't use the angles in my AR program - what I need is the actual vectors which represent the axes.
For example, in the Euler angles diagram from Wikipedia:
I'm basically being given the α, β, and γ angles (though not exactly, since I don't have an N; I'm being given the angles from each of the blue axes), and I need to find the vectors that represent the X, Y, and Z axes (red in the image). Does anyone know how to do this conversion? The directions on Wikipedia are very complicated, and my attempts to follow them have not worked. Also, I think the data that Android gives you may be in a slightly different order or format than what the conversion directions on Wikipedia expect.
Or as an alternative to these conversions, does anyone know any other ways to get the X, Y, and Z axes from the camera's perspective? (Meaning, what vector is the camera looking down? And what vector does the camera consider to be "up"?)
The rotation matrix in Android provides a rotation from the body (a.k.a. device) frame to the world (a.k.a. inertial) frame. A normal back-facing camera appears in landscape mode on the screen. This is the native mode for a tablet, so a tablet has the following axes in the device frame:
camera_x_tablet_body = (1,0,0)
camera_y_tablet_body = (0,1,0)
camera_z_tablet_body = (0,0,1)
On a phone, where portrait is the native mode, rotating the device into landscape with the top turned to point left gives:
camera_x_phone_body = (0,-1,0)
camera_y_phone_body = (1,0,0)
camera_z_phone_body = (0,0,1)
Now applying the rotation matrix will put these in the world frame, so (for a rotation matrix R[] of size 9):
camera_x_tablet_world = (R[0],R[3],R[6]);
camera_y_tablet_world = (R[1],R[4],R[7]);
camera_z_tablet_world = (R[2],R[5],R[8]);
In general, you can use SensorManager.remapCoordinateSystem(), which for the phone example above (Display.getRotation() == Surface.ROTATION_90) gives the answer you provided.
But if you rotate differently (ROTATION_270, for example) it will be different.
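As a sketch, one commonly used mapping for ROTATION_90 (verify the axis pair against your own device, since it depends on the native orientation):

float[] inR = new float[9], outR = new float[9];
// event is the TYPE_ROTATION_VECTOR SensorEvent
SensorManager.getRotationMatrixFromVector(inR, event.values);
SensorManager.remapCoordinateSystem(inR, SensorManager.AXIS_Y,
        SensorManager.AXIS_MINUS_X, outR);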
Also, an aside: the best method to get orientation in Android is to listen for Sensor.TYPE_ROTATION_VECTOR events. These are filled with the best possible orientation on most (i.e. Gingerbread or newer) platforms. The event values are actually the vector part of a quaternion. You can get the full quaternion using this (and the last two lines are a way to get the rotation matrix):
float[] vec = event.values.clone();
float[] quat = new float[4];
SensorManager.getQuaternionFromVector(quat, vec);
// note: getRotationMatrixFromVector takes the rotation vector itself,
// not the quaternion built above
float[] rotMat = new float[9];
SensorManager.getRotationMatrixFromVector(rotMat, vec);
More information at: http://www.sensorplatforms.com/which-sensors-in-android-gets-direct-input-what-are-virtual-sensors
SensorManager.getRotationMatrix(rotational, null, gravityVals, geoMagVals);
// camera's x-axis
Vector u = new Vector(-rotational[1], -rotational[4], -rotational[7]); // right of phone (in landscape mode)
// camera's y-axis
Vector v = new Vector(rotational[0], rotational[3], rotational[6]); // top of phone (in landscape mode)
// camera's z-axis (negative into the scene)
Vector n = new Vector(rotational[2], rotational[5], rotational[8]); // front of phone (the screen)
// world axes (x,y,z):
// +x is East
// +y is North
// +z is sky
The orientation matrix that you receive from getRotationMatrix should be based on the gravity field and the magnetic field; in other words, X points East, Y points North, and Z points toward the sky, away from the center of the Earth. (http://developer.android.com/reference/android/hardware/SensorManager.html)
To the point of your question, I think the three rotation values can be used directly as a vector, provided you give the values in reverse order:
"For either Euler or Tait-Bryan angles, it is very simple to convert from an intrinsic (rotating axes) to an extrinsic (static axes) convention, and vice-versa: just swap the order of the operations. An (α, β, γ) rotation using X-Y-Z intrinsic convention is equivalent to a (γ, β, α) rotation using Z-Y-X extrinsic convention; this is true for all Euler or Tait-Bryan axis combinations."
Source: Wikipedia.
I hope this helps!

First Person Camera rotation in 3D

I have written a first-person camera class for Android.
The class is really simple: the camera object has its three axes, X, Y and Z, and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()), rotate the camera along its X and Y axes, and translate the camera along its Z axis.
I think that my ModelView matrix calculation is correct, and I can also translate the camera along the Z axis.
Rotation along the X axis seems to work, but along the Y axis it gives strange results.
Another problem with the rotation is that, instead of the camera being rotated, my 3D model starts to rotate instead, along its own axis.
I have written another implementation based on the look-at point, using OpenGL ES's GLU.gluLookAt() function to obtain the ModelView matrix, but it seems to suffer from exactly the same problems.
EDIT
First of all, thanks for your reply.
I have actually made a second implementation of the camera class, this time using the rotation functions provided in the android.opengl.Matrix class, as you suggested. I have provided the code below, which is much simpler.
To my surprise, the results are exactly the same. This means that my rotation functions and Android's rotation functions produce the same results.
I did a simple test and looked at my data. I rotated the look-at point 1 degree at a time around the Y axis and watched the coordinates. It seems that my look-at point lags behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees. And after 45 degrees it starts reversing back.
There is a class android.opengl.Matrix which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend you use those functions instead of rolling your own. You'd probably want setLookAtM, with the look-at point calculated from your camera angles (using sin and cos as you are doing in your code - I assume you know how to do this).
-- edit in response to the new answer --
(You should probably have edited your original question, by the way - posting your answer as another question confused me for a bit.)
OK, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more insight into what's going on...
class TomCamera {
    // These are our inputs: eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ; // position
    public float mYaw, mPitch, mRoll; // Euler angles

    // This is the output matrix to pass to OpenGL.
    public float mCameraMatrix[] = new float[16];

    // Convert inputs to outputs.
    public void createMatrix() {
        // create a camera matrix (YXZ order is pretty standard)
        // you may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
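A hypothetical usage sketch (projectionMatrix is assumed to be set up elsewhere, e.g. with Matrix.frustumM):

TomCamera cam = new TomCamera();
cam.mEyeZ = 5f;  // step back from the origin
cam.mYaw = 30f;  // look 30 degrees to the side
cam.createMatrix();
// combine with the projection to get the matrix handed to the shader
float[] viewProjection = new float[16];
Matrix.multiplyMM(viewProjection, 0, projectionMatrix, 0, cam.mCameraMatrix, 0);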
