Difference between rotation methods? - android

My question is about the rotate methods in android.graphics.Camera. In the docs, I saw these comments:
public void rotateX (float deg)    (Since: API Level 1)
Applies a rotation transform around the X axis.
public void rotate (float x, float y, float z)    (Since: API Level 12)
Applies a rotation transform around all three axes.
Here is my question: what is the difference between using rotate(float x, float y, float z) and a sequence of rotate* methods? For example, what is the difference between these two snippets A and B:
A)
camera.rotate(x, y, z);

B)
camera.rotateX(x);
camera.rotateY(y);
camera.rotateZ(z);

The importance lies in the order in which the rotations are applied.
Consider for example, an aircraft flying forward which first rotates 90 degrees on its Z axis (roll) and then rotates 90 degrees on its X axis (pitch). The result is that the aircraft is now flying to the right with its right wing pointing downward. Now consider the operation in reverse order with a 90 degree pitch followed by a 90 degree roll. The aircraft is now flying up with its right wing pointing forward (these results may vary depending on your coordinate system).
camera.rotate provides a quick and easy way to apply all three rotations with one call. The remaining three rotation methods exist for situations in which the developer wants to apply one or more of the rotations in a specific order.
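To make the order dependence concrete, here is a minimal sketch (assuming the standard android.graphics.Camera and Matrix behavior) that applies the same two rotations in opposite orders and compares the resulting matrices:

import android.graphics.Camera;
import android.graphics.Matrix;

static boolean orderMatters() {
    Camera a = new Camera();
    a.rotateX(90);
    a.rotateY(90);
    Matrix ma = new Matrix();
    a.getMatrix(ma);

    Camera b = new Camera();
    b.rotateY(90);   // same rotations, opposite order
    b.rotateX(90);
    Matrix mb = new Matrix();
    b.getMatrix(mb);

    return !ma.equals(mb); // true: X-then-Y is not the same transform as Y-then-X
}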

Looking at the source in frameworks/base/core/jni/android/graphics/Camera.cpp, there is no difference: rotate(x, y, z) simply applies the three single-axis rotations in X, Y, Z order:
static void Camera_rotate(JNIEnv* env, jobject obj, jfloat x, jfloat y, jfloat z) {
    Sk3DView* v = (Sk3DView*)env->GetIntField(obj, gNativeInstanceFieldID);
    v->rotateX(SkFloatToScalar(x));
    v->rotateY(SkFloatToScalar(y));
    v->rotateZ(SkFloatToScalar(z));
}

Related

Trouble mapping device coordinate system to real-world (rotation vector) coordinate system in Processing Android

I know this question has been asked many, many times, but with all the knowledge out there I still can't get it to work for myself in the specific setting I now find myself in: Processing for Android.
The coordinate systems involved are (1) the real-world coordinate system as per Android's view: y is tangential to the ground and pointing north, z goes up into the sky, and x goes to your right, if you're standing on the ground and looking north; and (2) the device coordinate system as per Processing's view: x points to the right of the screen, y down, and z comes out of the screen.
The goal is simply to draw a cube on the screen and have it rotate on device rotation such that it seems that it is stable in actual space. That is: I want a map between the two coordinate systems so that I can draw in terms of the real-world coordinates instead of the screen coordinates.
In the code I'm using the Ketai sensor library, and I subscribe to the onRotationVectorEvent(float x, float y, float z) event. I also have a simple quaternion class lying around that I got from https://github.com/kynd/PQuaternion. So far I have the following code, with two different mapping attempts that coincide with each other but nevertheless don't behave as I want:
import ketai.sensors.*;

KetaiSensor sensor;
PVector rotationAngle = new PVector(0, 0, 0);
Quaternion rot = new Quaternion();

void setup() {
  fullScreen(P3D);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(#333333);
  translate(width/2, height/2);
  lights();

  // method 1: draw lines for real-world axes in terms of Processing's coordinates
  PVector rot_x_axis = rot.mult(new PVector(400, 0, 0));
  PVector rot_y_axis = rot.mult(new PVector(0, 0, -400));
  PVector rot_z_axis = rot.mult(new PVector(0, 400, 4));
  stroke(#ffffff);
  strokeWeight(8); line(0, 0, 0, rot_x_axis.x, rot_x_axis.y, rot_x_axis.z);
  strokeWeight(5); line(0, 0, 0, rot_y_axis.x, rot_y_axis.y, rot_y_axis.z);
  strokeWeight(2); line(0, 0, 0, rot_z_axis.x, rot_z_axis.y, rot_z_axis.z);

  // method 2: first rotate appropriately
  fill(#f4f7d2);
  rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, rotationAngle.y, rotationAngle.z);
  box(200, 200, 200);
}

void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // I believe these two do the same thing.
  rot.set(x, y, z, cos(asin(rotationAngle.mag())));
  //rot.setAngleAxis(asin(rotationAngle.mag())*2, rotationAngle);
}
The above works well enough that the real-world axis lines coincide with the cube drawn, and both rotate in an interesting way. But there still seems to be some "gimbal stuff" going on: when I rotate my device up and down while standing one way, the cube also rotates up and down, but standing another way, the cube rotates sideways, as if I'm applying the rotations in the wrong order. However, I'm working with quaternions precisely to avoid gimbal madness; how does it still apply?
I've solved it now, just by a simple "click to test next configuration" UI that cycles through all 6 * 8 possible configurations of rotate(asin(rotationAngle.mag()) * 2, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>); (6 axis permutations times 8 sign combinations). The solution turned out to be the configuration 0, -1, 2, i.e.:
rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, -rotationAngle.y, rotationAngle.z);
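For reference, here is a minimal sketch of such a brute-force tester; the perms table, config counter, and applyConfiguredRotation are my own illustration of the idea, not the original code. Each click advances through the 48 combinations:

int config = 0;
int[][] perms = {{0, 1, 2}, {0, 2, 1}, {1, 0, 2}, {1, 2, 0}, {2, 0, 1}, {2, 1, 0}};

void mousePressed() {
  config = (config + 1) % 48; // 6 axis permutations * 8 sign combinations
  println("testing config " + config);
}

void applyConfiguredRotation() {
  float[] v = {rotationAngle.x, rotationAngle.y, rotationAngle.z};
  int[] p = perms[config / 8];
  int signs = config % 8;
  float a = ((signs & 1) == 0 ? 1 : -1) * v[p[0]];
  float b = ((signs & 2) == 0 ? 1 : -1) * v[p[1]];
  float c = ((signs & 4) == 0 ? 1 : -1) * v[p[2]];
  rotate(asin(rotationAngle.mag()) * 2, a, b, c);
}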

Using Rotation Matrix to rotate points in space

I'm using android's rotation matrix to rotate multiple points in space.
Work so far
I start by reading the matrix from the SensorManager.getRotationMatrix function. Next I transform the rotation matrix into a quaternion using the explanation given in this link. I'm doing this because I read that Euler angles can lead to the gimbal lock issue and that operations with a 3x3 matrix can be costly. source
Problem
Now what I want to do is: imagine the phone is the origin of the reference frame, and given a set of points (lat/lng coordinates projected into an xyz coordinate system; see the method below), I want to rotate them so I can check which ones are in my line of sight. For that I'm using this SO question, which returns an X and a Y (left and top, respectively) to display the point on screen. It works fine, but only when facing North (because it doesn't take orientation into account, and my projected vector uses North/South as X and East/West as Z). So my thought was to rotate all the objects. Also, even though the initial altitude (Y) is 0, I want to be able to position the point up/down according to the phone's orientation.
I think part of the solution may be in this post. But since that uses Euler angles, I don't think it's the best method.
Conclusion
So, if it really is better to rotate each point's position, how can I achieve that using the rotation quaternion? Otherwise, which is the better way?
I'm sorry if I said anything wrong in this post. I'm not good at physics.
Code
// this function returns a 3D vector (0 for Y since I'm discarding altitude) using 2 coordinates
public static float[] convLocToVec(LatLng source, LatLng destination) {
    float[] z = new float[1];
    z[0] = 0;
    Location.distanceBetween(source.latitude, source.longitude,
            destination.latitude, source.longitude, z);
    float[] x = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            source.latitude, destination.longitude, x);
    if (source.latitude < destination.latitude)
        z[0] *= -1;
    if (source.longitude > destination.longitude)
        x[0] *= -1;
    return new float[]{x[0], 0f, z[0]};
}
Thanks for your help and have a nice day.
UPDATE 1
According to Wikipedia:
"Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v⃗. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector."
Should I really just use the rotation matrix to rotate a vector?
Since no one answered, I'm here to answer myself.
After some research (a lot, actually) I came to the conclusion that yes, it is possible to rotate a vector using a quaternion, but it is better to transform the quaternion into a rotation matrix first.
Rotation matrix - 9 multiplications and 6 additions
Quaternion - 15 multiplications and 15 additions
Source: Performance comparisons
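For completeness, a hedged sketch of the direct quaternion route: the standard identity v' = v + w·t + q × t with t = 2(q × v), for a unit quaternion q = (w, qx, qy, qz). Counting operations (with the doubling done via addition) gives exactly the 15 multiplications and 15 additions cited above; the method name is mine:

static float[] rotateByQuaternion(float w, float qx, float qy, float qz,
                                  float vx, float vy, float vz) {
    // c = q x v (6 multiplications, 3 additions)
    float cx = qy * vz - qz * vy;
    float cy = qz * vx - qx * vz;
    float cz = qx * vy - qy * vx;
    // t = 2c, doubled via addition (3 additions)
    float tx = cx + cx, ty = cy + cy, tz = cz + cz;
    // v' = v + w*t + q x t (9 multiplications, 9 additions)
    return new float[] {
        vx + w * tx + (qy * tz - qz * ty),
        vy + w * ty + (qz * tx - qx * tz),
        vz + w * tz + (qx * ty - qy * tx)
    };
}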
It's better to use the rotation matrix provided by Android. Also, if you are going to use a quaternion somehow (Sensor.TYPE_ROTATION_VECTOR + SensorManager.getQuaternionFromVector, for example), you can (and should) transform it into a rotation matrix: the method SensorManager.getRotationMatrixFromVector converts a rotation vector to a matrix. Once you have the rotation matrix, you just have to multiply it by the projected vector you want. You can use this function for that:
public float[] multiplyByVector(float[][] A, float[] x) {
    int m = A.length;
    int n = A[0].length;
    if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
    float[] y = new float[m];
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            y[i] += A[i][j] * x[j];
    return y;
}
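A hedged end-to-end sketch of that pipeline (event, myLocation, and targetLocation are placeholders for illustration; the flat 9-element matrix from SensorManager is reshaped to fit multiplyByVector):

// inside onSensorChanged for a Sensor.TYPE_ROTATION_VECTOR event
float[] rotMat = new float[9];
SensorManager.getRotationMatrixFromVector(rotMat, event.values);

// reshape the flat row-major array into the 3x3 form multiplyByVector expects
float[][] R = {
    {rotMat[0], rotMat[1], rotMat[2]},
    {rotMat[3], rotMat[4], rotMat[5]},
    {rotMat[6], rotMat[7], rotMat[8]}
};

float[] point = convLocToVec(myLocation, targetLocation);
float[] rotated = multiplyByVector(R, point);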
Although I'm still not able to get this running correctly, I will mark this as the answer.

Scale, rotate, translate with matrices in OpenGL ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale objects.
I set up my object at (0, 0, 0) and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them with my modelMatrix/objectMatrix, and render afterwards.
What I think is right: I have to multiply my matrices in this order:
[scale] x [rotation] -> [temp]
[temp] x [translation] -> [objectMatrix]
I've found some literature. Maybe I'll get it in a few minutes; if I do, I'll update this.
Beginning Android 3D
http://gamedev.stackexchange.com
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem with the whole thing is that if you scale before you translate, the distance you translate gets scaled as well! The correct code for multiplying your matrices should be (correct me if I'm wrong):
private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}
With this you translate and rotate first; afterwards you scale the object.
Tested with multiple objects... so I hope this helped :)
You know, this is the most common issue people run into when beginning to deal with matrix operations. Matrix multiplication works as if you were looking from the object's first-person view and receiving commands: for instance, if you began at (0,0,0) facing toward the positive X axis with up being the positive Y axis, then translate(a, 0, 0) would mean "go forward", translate(0, 0, a) would mean "go left", and rotate(a, 0, 1, 0) would mean "turn left"...
So if in your case you scaled by 3 units, rotated by 90 degrees, and then translated by (2,0,0), what happens is you first enlarge yourself by a scale of 3, then turn 90 degrees so you are now facing positive Z, still being quite large. Then you go forward by 2 units measured in your own coordinate system, which means you actually move by (0,0,2*3). So you end up at (0,0,6) looking toward the positive Z axis.
I believe this is the best way to imagine what goes on when dealing with such operations, and it might save your life when you have a bug in matrix operation order.
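As a sanity check, here is a minimal sketch using android.opengl.Matrix that builds M = scale * rotate * translate exactly as in the walkthrough and applies it to the local origin (the sign of the resulting Z component depends on which way the 90-degree turn goes in your convention):

import android.opengl.Matrix;

static float[] checkWalkthrough() {
    float[] s = new float[16], r = new float[16], t = new float[16];
    Matrix.setIdentityM(s, 0);
    Matrix.scaleM(s, 0, 3f, 3f, 3f);        // enlarge by a scale of 3
    Matrix.setIdentityM(r, 0);
    Matrix.rotateM(r, 0, 90f, 0f, 1f, 0f);  // turn 90 degrees about the up axis
    Matrix.setIdentityM(t, 0);
    Matrix.translateM(t, 0, 2f, 0f, 0f);    // go forward by 2 units

    float[] tmp = new float[16], m = new float[16];
    Matrix.multiplyMM(tmp, 0, r, 0, t, 0);  // rotate * translate
    Matrix.multiplyMM(m, 0, s, 0, tmp, 0);  // scale * (rotate * translate)

    float[] out = new float[4];
    Matrix.multiplyMV(out, 0, m, 0, new float[]{0f, 0f, 0f, 1f}, 0);
    return out; // 6 units along the Z axis, as in the walkthrough
}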
You should know that although this kind of matrix operating is normal when beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class which contains 3 vectors: position, forward and up (this is much like using glLookAt, but not totally the same).
With these 3 vectors you can simply set a specific position or rotation using trigonometry or your matrix tools, by multiplying the vectors with matrices instead of matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" would be done as position = position + forward*scale, and "turn left" would be rotating the forward vector around the up vector. Anyway, I hope you can see how to manipulate those 3 vectors to get a desired effect.
To reconstruct the matrix from those 3 vectors, you need to generate a fourth vector, right, which is the cross product of up and forward. The model matrix then consists of:
right.x,    right.y,    right.z,    0.0
up.x,       up.y,       up.z,       0.0
forward.x,  forward.y,  forward.z,  0.0
position.x, position.y, position.z, 1.0
Just note the row-column order may change depending on what you are working with.
I hope this gives you some better understanding...
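A minimal sketch of that reconstruction; Vector3 and modelFromBasis are hypothetical helpers (not Android APIs), and the array follows the row layout above, so it may need transposing depending on your toolkit:

class Vector3 {
    float x, y, z;
    Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    Vector3 cross(Vector3 o) {
        return new Vector3(y * o.z - z * o.y,
                           z * o.x - x * o.z,
                           x * o.y - y * o.x);
    }
}

float[] modelFromBasis(Vector3 position, Vector3 forward, Vector3 up) {
    Vector3 right = up.cross(forward); // the fourth vector from the text
    return new float[] {
        right.x,    right.y,    right.z,    0f,
        up.x,       up.y,       up.z,       0f,
        forward.x,  forward.y,  forward.z,  0f,
        position.x, position.y, position.z, 1f
    };
}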

Android OpenGL object translation and rotation at the same time

Using Android OpenGL, I need to move an object from point A to point B and rotate it around its local Z axis at the same time. I have been reading tutorials for the past 3 days; everybody gives you bits of information and hints, but nobody explains this from top to bottom for beginners.
I know how to translate the object from point A to point B.
I also know how to rotate the object at point A around its local axis (translate it to the origin, rotate it, translate it back).
I DON'T know how to rotate and translate at the same time.
I've tried to translate to the origin, rotate, translate back, then translate to point B. It doesn't work, and I think I know why (the rotation is messing up the object's axes, so the translation to point B is incorrect).
A(-x1, y1, -z1)
B(-x1 + deltaX, y1 + deltaY, -z1 + deltaZ)

_gl.glTranslatef(x1, -y1, z1);
_gl.glRotatef(degrees, x1, -y1, z1);
_gl.glTranslatef(-x1, y1, -z1);
_gl.glTranslatef(deltaX, deltaY, deltaZ);
I need to take into consideration the way the rotation is changing the axes. Some say I can do that with quaternions, or with rotation matrices, etc.
But I don't have enough OpenGL knowledge to use the APIs to solve this.
Can someone explain this to me, with some code also?
Thank you in advance.
If you have the following code:
glTranslate(x, y, z);
glRotatef(angle, 0, 0, 1);
drawObject();
The object will first be rotated around its local z-axis and then translated by (x, y, z). The transform call that is closest to the draw call is the one that is applied first.
From your code it seems like you actually don't want to rotate the object around its own origin but around some other point; in that case you should do the following:
glTranslate(x, y, z);                          // Transform 4
glTranslate(origin.x, origin.y, origin.z);     // Transform 3
glRotatef(angle, 0, 0, 1);                     // Transform 2
glTranslate(-origin.x, -origin.y, -origin.z);  // Transform 1
drawObject();
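To get the translation from A to B and the spin to happen visibly at the same time, a common approach is to recompute both every frame. Here is a hedged sketch in the GL10 style of the question's _gl calls; ax..az, bx..bz, t, and angle are assumptions for illustration:

void drawFrame(GL10 gl, float t, float angle) {
    // interpolate the position from A toward B; t in [0, 1] is the progress
    float x = ax + (bx - ax) * t;
    float y = ay + (by - ay) * t;
    float z = az + (bz - az) * t;

    gl.glLoadIdentity();
    gl.glTranslatef(x, y, z);        // place the object at its current position
    gl.glRotatef(angle, 0f, 0f, 1f); // then spin it about its local z-axis
    drawObject();
}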

Android: axes vectors from orientation/rotational angles?

So there are a couple of methods in the Android SensorManager to get your phone's orientation:
float[] rotational = new float[9];
float[] orientation = new float[3];
SensorManager.getRotationMatrix(rotational, whatever, whatever, whatever);
SensorManager.getOrientation(rotational, orientation);
This gives you a rotation matrix called "rotational" and an array of 3 orientation angles called "orientation". However, I can't use the angles in my AR program; what I need are the actual vectors that represent the axes.
For example, in this image from Wikipedia:
[Wikipedia figure: the Euler angles α, β, γ relating the fixed axes (blue) to the rotated axes (red)]
I'm basically being given the α, β, and γ angles (though not exactly, since I don't have an N; I'm being given the angles from each of the blue axes), and I need to find the vectors that represent the X, Y, and Z axes (red in the image). Does anyone know how to do this conversion? The directions on Wikipedia are very complicated, and my attempts to follow them have not worked. Also, I think the data Android gives you may be in a slightly different order or format than what the conversion directions on Wikipedia expect.
Or, as an alternative to these conversions, does anyone know any other way to get the X, Y, and Z axes from the camera's perspective? (Meaning: what vector is the camera looking down, and what vector does the camera consider to be "up"?)
The rotation matrix in Android provides a rotation from the body (a.k.a. device) frame to the world (a.k.a. inertial) frame. A normal back-facing camera appears in landscape mode on the screen. This is the native mode for a tablet, so the camera has the following axes in the device frame:
camera_x_tablet_body = (1,0,0)
camera_y_tablet_body = (0,1,0)
camera_z_tablet_body = (0,0,1)
On a phone, where portrait is native mode, a rotation of the device into landscape with top turned to point left is:
camera_x_phone_body = (0,-1,0)
camera_y_phone_body = (1,0,0)
camera_z_phone_body = (0,0,1)
Now applying the rotation matrix will put this in the world frame, so (for a rotation matrix R[] of size 9):
camera_x_tablet_world = (R[0],R[3],R[6]);
camera_y_tablet_world = (R[1],R[4],R[7]);
camera_z_tablet_world = (R[2],R[5],R[8]);
In general, you can use SensorManager.remapCoordinateSystem(); for the phone example above, where Display.getRotation() returns Surface.ROTATION_90, it gives the axes listed above.
But if the device is rotated differently (ROTATION_270, for example) the result will be different.
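A hedged sketch of that remapping for the landscape (Surface.ROTATION_90) case, following the pattern in the SensorManager documentation; inR is assumed to be the 9-element matrix from getRotationMatrix:

float[] outR = new float[9];
SensorManager.remapCoordinateSystem(inR,
        SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, outR);
// use outR in place of inR for the axis extraction above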
Also, an aside: the best way to get orientation in Android is to listen for Sensor.TYPE_ROTATION_VECTOR events. These are filled with the best possible orientation on most (i.e. Gingerbread or newer) platforms. The event values are actually the vector part of a quaternion. You can get the full quaternion using this (the last two lines are one way to get the rotation matrix):
float vec[] = event.values.clone();
float quat[] = new float[4];
SensorManager.getQuaternionFromVector(quat, vec);
float [] RotMat = new float[9];
SensorManager.getRotationMatrixFromVector(RotMat, quat);
More information at: http://www.sensorplatforms.com/which-sensors-in-android-gets-direct-input-what-are-virtual-sensors
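For context, a minimal registration sketch for those events; sensorManager and the SensorEventListener named listener are assumed to exist:

Sensor rotVec = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
sensorManager.registerListener(listener, rotVec, SensorManager.SENSOR_DELAY_GAME);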
SensorManager.getRotationMatrix(rotational, null, gravityVals, geoMagVals);
// camera's x-axis
Vector u = new Vector(-rotational[1], -rotational[4], -rotational[7]); // right of phone (in landscape mode)
// camera's y-axis
Vector v = new Vector(rotational[0], rotational[3], rotational[6]); // top of phone (in landscape mode)
// camera's z-axis (negative into the scene)
Vector n = new Vector(rotational[2], rotational[5], rotational[8]); // front of phone (the screen)
// world axes (x,y,z):
// +x is East
// +y is North
// +z is sky
The orientation matrix that you receive from getRotationMatrix is based on the gravity field and the magnetic field; in other words, X points East, Y North, and Z up toward the sky, away from the center of the Earth. (http://developer.android.com/reference/android/hardware/SensorManager.html)
To the point of your question, I think the three rotation values can be used directly as a vector, but provide the values in reverse order:
"For either Euler or Tait-Bryan angles, it is very simple to convert from an intrinsic (rotating axes) to an extrinsic (static axes) convention, and vice-versa: just swap the order of the operations. An (α, β, γ) rotation using X-Y-Z intrinsic convention is equivalent to a (γ, β, α) rotation using Z-Y-X extrinsic convention; this is true for all Euler or Tait-Bryan axis combinations."
Source: Wikipedia
I hope this helps!
