Android VR Toolkit - HeadTransform getHeadView matrix representation - android

I'm starting with the Android VR Toolkit for Google Cardboard.
I'm using the sample from Google's website:
https://developers.google.com/cardboard/get-started
I'm new to OpenGL and trying to figure out how the values are represented by the system.
For example, HeadTransform.getHeadView sets a 4x4 matrix, and according to the documentation:
A matrix representing the transform from the camera to the head.
Head origin is defined as the center point between the two eyes.
My question is: what does each value (each cell) in the matrix represent?

Google Cardboard's headTransform.getHeadView() provides a transform matrix holding both the rotation and the translation of the head.
Here is the matrix layout (column-major order):
Rxx Ryx Rzx x
Rxy Ryy Rzy y
Rxz Ryz Rzz z
 0   0   0  1
[x, y, z] is the translation vector. It was [0, 0, 0] in my tests.
[Rxx, Rxy, Rxz] is the x axis after the rotation (replace Rx with Ry for the y axis, and so on).
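To make the layout concrete, here is a small plain-Java sketch (deliberately not using the Cardboard SDK classes) that pulls the rotated axes and the translation out of a 16-float column-major matrix like the one getHeadView fills in. The identity matrix in main is hand-built for illustration.

```java
// Sketch: reading the rotated axes and the translation out of a 4x4
// column-major head-view matrix with the layout described above.
public class HeadViewDemo {
    // x axis after rotation: first column.
    static float[] rightAxis(float[] m)   { return new float[] { m[0], m[1], m[2] }; }
    // y axis after rotation: second column.
    static float[] upAxis(float[] m)      { return new float[] { m[4], m[5], m[6] }; }
    // z axis after rotation: third column.
    static float[] zAxis(float[] m)       { return new float[] { m[8], m[9], m[10] }; }
    // Translation: fourth column.
    static float[] translation(float[] m) { return new float[] { m[12], m[13], m[14] }; }

    public static void main(String[] args) {
        // Identity head view: no rotation, no translation.
        float[] headView = {
            1, 0, 0, 0,   // column 0: x axis
            0, 1, 0, 0,   // column 1: y axis
            0, 0, 1, 0,   // column 2: z axis
            0, 0, 0, 1 }; // column 3: translation, plus the homogeneous 1
        System.out.println(java.util.Arrays.toString(zAxis(headView)));
        System.out.println(java.util.Arrays.toString(translation(headView)));
    }
}
```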

If you are new to OpenGL, the VR Toolkit is probably not the best place to start learning it.
You may find this tutorial useful: ftp://ftp.informatik.hu-berlin.de/pub3/Linux/Qt/QT/developerguides/qtopengltutorial/OpenGLTutorial.pdf

Related

Understanding VML shadow matrix

I am working on an MS Word rendering tool for Android. I need to render shadows for VML shapes. Microsoft's documentation on the shadow matrix is not very clear, and I couldn't find a detailed explanation of the VML shadow matrix on the web.
According to Microsoft's documentation, the shadow matrix contains 6 values: 4 scale values and 2 perspective values (refer here).
[Sxx, Sxy, Syx, Syy, Px, Py] (S stands for 'scale', P stands for 'perspective')
My doubts:
Android's transformation matrix exposes ScaleX and ScaleY, only two scale values. How do I map VML's 4 scale values to Android's transformation matrix?
What do those suffixes mean? What is the difference between Sxy and Syx?
Any explanation, even a partial one, will be really helpful.
Thanks, Bala
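One starting point: android.graphics.Matrix actually accepts a full 3x3 via setValues(), so all four scale/skew terms and both perspective terms have a slot. The sketch below uses a plain float[9] in setValues()'s row-major value order to stay self-contained; the placement of Sxy versus Syx is an assumption (it depends on whether VML multiplies row or column vectors), so treat the mapping as a guess to verify against rendered output.

```java
// Sketch: packing the six VML shadow values into a 3x3 matrix in the
// same row-major order android.graphics.Matrix.setValues() expects.
// The Sxy/Syx placement is an assumption, not confirmed by the VML spec.
public class VmlShadow {
    static float[] toMatrix(float sxx, float sxy, float syx, float syy,
                            float px, float py) {
        return new float[] {
            sxx, syx, 0,   // scale x, skew x, translate x
            sxy, syy, 0,   // skew y, scale y, translate y
            px,  py,  1 }; // perspective row
    }
    // Apply the matrix to a point, including the perspective divide.
    static float[] apply(float[] m, float x, float y) {
        float w = m[6] * x + m[7] * y + m[8];
        return new float[] {
            (m[0] * x + m[1] * y + m[2]) / w,
            (m[3] * x + m[4] * y + m[5]) / w };
    }
    public static void main(String[] args) {
        float[] m = toMatrix(1, 0, 0, 1, 0, 0); // identity: no distortion
        System.out.println(java.util.Arrays.toString(apply(m, 10, 20)));
    }
}
```

With a real android.graphics.Matrix you would pass the same float[9] to setValues() instead of applying it by hand.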

gluLookAt vectors from gyroscope angles

Hi, I'm trying to rotate the OpenGL camera using my phone's gyroscope. I can get the three angles (X, Y, Z), but I can't find a way to convert them into the direction vectors required by gluLookAt.
The camera is placed at (0,0,0) and should make the same rotations as the phone itself.
gluLookAt needs three vector parameters to calculate the correct view matrix: the eye position (in your case (0,0,0)), the center position (where the camera is looking), and the up vector (which direction is 'up' for the camera).
After choosing an initial camera orientation (for many applications the center is (0,0,1) and the up direction (0,1,0)), you need to rotate these vectors by the desired angles. To do so, multiply them by a rotation matrix R = Rx * Rz * Ry, where Rx, Ry, Rz are rotation matrices about the given axes by the given angles. If you are using a math package like glm, Eigen, or similar, you can construct these easily; you can even implement them yourself (http://en.wikipedia.org/wiki/Rotation_matrix).
Note that if the eye position is not (0,0,0), you have to rotate the direction vector, not the center vector, where direction vector = center - eye.
So, a little pseudocode:
vec3 eye = .....
vec3 initCenter = .....
vec3 initDirection = initCenter - eye;
vec3 initUp = .....
mat3 rotation = rotate((1,0,0), xAngle) * rotate((0,1,0), yAngle) * rotate((0,0,1), zAngle);
gluLookAt(eye, eye + rotation * initDirection, rotation * initUp);
The implementation might look a little different; for example, glm generates 4x4 rotation matrices, so you might have to use vec4/mat4.
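The pseudocode above can be fleshed out in plain Java as a sketch: hand-rolled row-major 3x3 rotation matrices instead of a math library, combined in the R = Rx * Rz * Ry order from the answer (the angle-composition order is a convention and may differ for your device).

```java
// Sketch: build axis rotations from gyroscope angles, combine them, and
// rotate the initial direction and up vectors for gluLookAt.
public class GyroLook {
    static float[] rotX(float deg) {
        float c = (float) Math.cos(Math.toRadians(deg)), s = (float) Math.sin(Math.toRadians(deg));
        return new float[] { 1, 0, 0,   0, c, -s,   0, s, c };
    }
    static float[] rotY(float deg) {
        float c = (float) Math.cos(Math.toRadians(deg)), s = (float) Math.sin(Math.toRadians(deg));
        return new float[] { c, 0, s,   0, 1, 0,   -s, 0, c };
    }
    static float[] rotZ(float deg) {
        float c = (float) Math.cos(Math.toRadians(deg)), s = (float) Math.sin(Math.toRadians(deg));
        return new float[] { c, -s, 0,   s, c, 0,   0, 0, 1 };
    }
    // Row-major 3x3 matrix product a * b.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return r;
    }
    // Apply a row-major 3x3 matrix to a vector.
    static float[] transform(float[] m, float[] v) {
        return new float[] {
            m[0] * v[0] + m[1] * v[1] + m[2] * v[2],
            m[3] * v[0] + m[4] * v[1] + m[5] * v[2],
            m[6] * v[0] + m[7] * v[1] + m[8] * v[2] };
    }
    public static void main(String[] args) {
        // R = Rx * Rz * Ry, as in the answer; here only a 90-degree yaw.
        float[] rotation = mul(rotX(0), mul(rotZ(0), rotY(90)));
        float[] dir = transform(rotation, new float[] { 0, 0, 1 });
        float[] up  = transform(rotation, new float[] { 0, 1, 0 });
        // Pass eye, eye + dir, and up to gluLookAt.
        System.out.println(java.util.Arrays.toString(dir));
        System.out.println(java.util.Arrays.toString(up));
    }
}
```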

Billboarding in Android OpenGL ES 1.0

I'm trying to get billboarding to work, but having trouble with the last step.
After following these directions from NeHe's tutorial (http://nehe.gamedev.net/data/articles/article.asp?article=19) I have my look, right, and up vectors, and I have already translated the modelview matrix to the center point of the billboard using glTranslatef(). But when I try to build a multiplication matrix out of them like so, the billboards are displayed all over the place, in the wrong positions and orientations:
float[] m = { right.x, right.y, right.z, 0f,
              up.x,    up.y,    up.z,    0f,
              look.x,  look.y,  look.z,  0f,
              pos.x,   pos.y,   pos.z,   1f }; // pos is the center-point position
gl.glMultMatrixf(m, 0);
I guess my problem is that I don't know how to create and multiply the matrix correctly. I tried doing this instead, but then half of the billboards (the ones that need to rotate counter-clockwise) are rotated in the wrong direction:
// normal is the vector the billboard faces before any manipulation
float angle = look.getAngleDeg(normal); // returns an angle between 0 and 180
gl.glRotatef(angle, up.x, up.y, up.z);
Got it, using my second method. Calculating the angle between two vectors (the arccosine of the dot product) can only give an angle between 0 and 180 degrees, so half the time you need to negate the angle to make the rotation go the opposite way.
This is easy to check: since I already have the right vector, I can test whether the angle between the right vector and the normal is acute. If it is, negate the original angle.
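The fix described above can be sketched in plain Java. The vectors in main are hypothetical values chosen for illustration, and they are assumed normalized.

```java
// Sketch: unsigned angle between look and the billboard normal, with the
// sign flipped when the right/normal angle is acute (dot product > 0).
public class BillboardAngle {
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static float signedAngleDeg(float[] look, float[] normal, float[] right) {
        // Clamp guards against floating-point drift outside [-1, 1].
        float d = Math.max(-1f, Math.min(1f, dot(look, normal)));
        float angle = (float) Math.toDegrees(Math.acos(d)); // 0..180
        // Acute angle between right and normal: rotate the other way.
        return dot(right, normal) > 0 ? -angle : angle;
    }
    public static void main(String[] args) {
        float[] normal = { 0, 0, 1 };
        float[] look   = { 1, 0, 0 };
        float[] right  = { 0, 0, 1 }; // dot(right, normal) > 0, so negate
        System.out.println(signedAngleDeg(look, normal, right));
    }
}
```

The returned angle can then go straight into glRotatef(angle, up.x, up.y, up.z) as in the question.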

Android: Matrix -> what is the difference between preconcat and postconcat?

I'm using Matrix to scale and rotate Bitmaps. Now I'm wondering what the difference between preconcat & postconcat is, or more precisely the difference between:
postRotate
preRotate
setRotate
From what I could figure out so far, setRotate always overwrites the whole matrix, while with preRotate and postRotate I can apply multiple changes to a matrix (e.g. scaling + rotation). However, postRotate and preRotate didn't produce different results in the cases where I used them.
The answer to your question isn't really specific to Android; it's a graphics and math question. There's lots of theory in this answer--you've been warned! For a superficial answer to your question, skip to the bottom. Also, because this is such a long-winded tirade, I might have a typo or two making things unclear. I apologize in advance if that's the case.
In computer graphics, we can represent pixels (or in 3D, vertices) as vectors. If your screen is 640x480, here's a 2D vector for the point in the middle of your screen (forgive my shoddy markup):
[320]
[240]
[ 1]
I'll explain why the 1 is important later. Transformations are often represented using matrices because it's then very simple (and very efficient) to chain them together, like you mentioned. To scale the point above by a factor of 1.5, you can left-multiply it by the following matrix:
[1.5 0 0]
[ 0 1.5 0]
[ 0 0 1]
You'll get this new point:
[480]
[360]
[ 1]
Which represents the original point, scaled by 1.5 relative to the corner of your screen (0, 0). This is important: scaling is always done with respect to the origin. If you want to scale with some other point as your center (such as the middle of a sprite), you need to "wrap" the scale in translations to and from the origin. Here's the matrix to translate our original point to the origin:
[1 0 -320]
[0 1 -240]
[0 0 1]
Which yields:
[320*1 + 1*(-320)]   [0]
[240*1 + 1*(-240)] = [0]
[       1*1      ]   [1]
You'll recognize the above as the identity matrix with the displacement coordinates slapped in the upper-right corner. That's why the 1 (the "homogenous coordinate") is necessary: to make room for these coordinates, thus making it possible to translate using multiplication. Otherwise it would have to be represented by matrix addition, which is more intuitive to humans, but would make graphics cards even more complicated than they already are.
Now, matrix multiplication generally isn't commutative, so when "adding" a transformation (by multiplying your matrix) you need to specify whether you're left-multiplying or right-multiplying. The difference it makes is what order your transformations are chained in. By right-multiplying your matrix (using preRotate()) you're indicating that the rotation step should happen before all the other transformations that you've just asked for. This might be what you want, but it usually isn't.
Often, it doesn't matter. If you only have one transformation, for example, it never matters :) Sometimes your transformations can happen in either order with the same effect, such as uniform scaling and rotation: a uniform scale matrix is just a multiple of the identity, so it commutes with everything. But really, just think about it: if I rotate some picture 10 degrees clockwise and then scale it to 200%, it looks the same as if I scaled it first and then rotated it.
If you were doing some weirder compound transformations, you'd begin to notice a discrepancy. My advice is to stick with postRotate().
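To see the order dependence concretely, here is a small plain-Java sketch with 3x3 homogeneous matrices: combining a 90-degree rotation about the origin with a (100, 0) translation moves a point to different places depending on which transform is applied first, which is exactly the pre/post distinction.

```java
// Sketch: the same two transforms chained in opposite orders give
// different results, because matrix multiplication is not commutative.
public class Order {
    // Row-major 3x3 matrix product a * b.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return r;
    }
    // Apply the matrix to a point (homogeneous coordinate = 1).
    static float[] apply(float[] m, float x, float y) {
        return new float[] {
            m[0] * x + m[1] * y + m[2],
            m[3] * x + m[4] * y + m[5] };
    }
    public static void main(String[] args) {
        float[] rot90 = { 0, -1, 0,   1, 0, 0,   0, 0, 1 }; // rotate 90 deg CCW
        float[] trans = { 1, 0, 100,  0, 1, 0,   0, 0, 1 }; // translate (100, 0)
        // "post": the translation happens after the rotation.
        float[] post = mul(trans, rot90);
        // "pre": the translation happens before the rotation.
        float[] pre = mul(rot90, trans);
        System.out.println(java.util.Arrays.toString(apply(post, 10, 0))); // [100.0, 10.0]
        System.out.println(java.util.Arrays.toString(apply(pre, 10, 0)));  // [0.0, 110.0]
    }
}
```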
I answered this question yesterday, but I felt something was wrong with it today, so here is the corrected answer.
matrix: float[] values = {1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1};
matrix2: float[] values2 = {1f, 0, 0, 0, 1f, 0, 0, 0, 1}; // the identity matrix: no transformation applied
Let's say our two matrices hold the values above.
1. When we do the transformation below:
matrix.preTranslate(-50, -50);
it is equivalent to applying this sequence of transformations to matrix2:
matrix2.postTranslate(-50, -50);
matrix2.postSkew(0.5f/1.2f, 0.5f/1.2f); // note here
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
2. When we do the transformation below:
matrix.preRotate(50);
it is equivalent to applying this sequence to matrix2:
matrix2.postRotate(50);
matrix2.postSkew(0.5f/1.2f, 0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
3. When we do the transformation below:
matrix.preScale(1.3f, 1.3f);
it is equivalent to applying this sequence to matrix2:
matrix2.postScale(1.3f, 1.3f);
matrix2.postSkew(0.5f/1.2f, 0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
4. When we do the transformation below:
matrix.preSkew(0.4f, 0.4f);
it is equivalent to applying this sequence to matrix2:
matrix2.postSkew(0.4f, 0.4f);
matrix2.postSkew(0.5f/1.2f, 0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
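Case 1 above can be checked numerically without Android at all: preTranslate right-multiplies the matrix, each post-call left-multiplies, and the example matrix factors as translate(30,30) * scale(1.2) * skew(0.5/1.2). A plain-Java sketch with row-major 3x3 matrices (mirroring android.graphics.Matrix semantics, but not using it):

```java
// Numeric check of case 1: matrix.preTranslate(-50, -50) is
// matrix * T(-50,-50), while the four post-calls applied to the identity
// build T(30,30) * S(1.2) * K(0.5/1.2) * T(-50,-50) -- the same product.
public class PrePostCheck {
    // Row-major 3x3 matrix product a * b.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return r;
    }
    static float[] translate(float tx, float ty) { return new float[] { 1, 0, tx, 0, 1, ty, 0, 0, 1 }; }
    static float[] scale(float s)                { return new float[] { s, 0, 0, 0, s, 0, 0, 0, 1 }; }
    static float[] skew(float k)                 { return new float[] { 1, k, 0, k, 1, 0, 0, 0, 1 }; }

    public static void main(String[] args) {
        float[] matrix = { 1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1 };
        // preTranslate: right-multiply.
        float[] pre = mul(matrix, translate(-50, -50));
        // The post-sequence on the identity: each post-call left-multiplies.
        float[] m2 = translate(-50, -50);     // postTranslate(-50, -50)
        m2 = mul(skew(0.5f / 1.2f), m2);      // postSkew
        m2 = mul(scale(1.2f), m2);            // postScale
        m2 = mul(translate(30, 30), m2);      // postTranslate(30, 30)
        System.out.println(java.util.Arrays.toString(pre));
        System.out.println(java.util.Arrays.toString(m2));
    }
}
```

Both printed arrays agree (up to float rounding), confirming the decomposition.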

How to use an OpenCV rotation and translation vector with OpenGL ES in Android?

I'm working on a basic augmented reality application on Android. What I have done so far is detect a square with OpenCV and then, using cvFindExtrinsicCameraParams2(), calculate a rotation and a translation vector. For this I used 4 object points, which are just the corners of a square around (0,0,0), and the 4 corners of the square in the image.
This yields a pretty good rotation and translation. I also computed the rotation matrix with cvRodrigues2(), since that is easier to use than the rotation vector. As long as I use these to draw some points in the image, everything works fine. My next step, however, is to pass these vectors and the matrix back to Java and then use them with OpenGL to draw a square in an OpenGLView. The square should sit exactly over the square in the image displayed behind the OpenGLView.
My problem is that I cannot find the correct way to use the rotation matrix and the translation vector in OpenGL. I started with exactly the same object points as used for the OpenCV functions. Then I applied the rotation matrix and translation vector in pretty much every way I could think of. Sadly, none of these approaches produces a result anywhere near what I hoped for. Can anyone tell me how to use them correctly?
So far the "closest" result I got was when I randomly multiplied the whole matrix by -1. But most of the time the squares still look mirrored or rotated by 180 degrees, so I guess it was just a lucky hit and not the right approach.
Okay, after some more testing I finally managed to get it to work. While I don't understand why... it does 'work'. For anyone who needs to do this in the future, here is my solution:
float rv[3];     // the rotation vector
float rotMat[9]; // the rotation matrix
float tv[3];     // the translation vector
// Negate the y and z components of the rotation vector.
rv[1] = -1.0f * rv[1]; rv[2] = -1.0f * rv[2];
// Convert the rotation vector into a matrix here.
// Complete matrix, transposed into column-major order and ready for OpenGL:
float RTMat[] = { rotMat[0], rotMat[3], rotMat[6], 0.0f,
                  rotMat[1], rotMat[4], rotMat[7], 0.0f,
                  rotMat[2], rotMat[5], rotMat[8], 0.0f,
                  tv[0],    -tv[1],    -tv[2],     1.0f };
As genpfault said in his comment, everything needs to be transposed, since OpenGL expects column-major order. (Thanks for the comment; I had already seen that page earlier.) Furthermore, the y and z components of the rotation vector, as well as the y and z components of the translation, need to be multiplied by -1. This is what I find a bit weird: why only those and not the x values too?
This works as it should, I guess, but the corners don't match exactly. I suspect this is caused by some wrong OpenGLView configuration. So even though I am still not 100% happy with my solution, I guess it is the answer to my question.
Pandoro's method really works! In case someone is wondering how to convert the rotation vector into a rotation matrix, here is how I did it. By the way, I used this with OpenGL 2, not ES.
// Use the translation vector generated by OpenCV's cvFindExtrinsicCameraParams2().
float tv[] = { translation->data.fl[0], translation->data.fl[1], translation->data.fl[2] };
// Negate the y and z components of the rotation vector *before* converting it;
// negating a local copy after cvRodrigues2() has run would have no effect
// on the resulting matrix.
rotation->data.fl[1] *= -1.0f;
rotation->data.fl[2] *= -1.0f;
// A rotation vector can be converted to a 3x3 rotation matrix
// by calling cvRodrigues2(). (Source: O'Reilly, Learning OpenCV)
CvMat* rotMat = cvCreateMat(3, 3, CV_32FC1);
cvRodrigues2(rotation, rotMat, NULL);
float rm[9];
for (int i = 0; i < 9; i++) {
    rm[i] = rotMat->data.fl[i];
}
// Complete matrix, transposed into column-major order and ready for OpenGL:
float RTMat[] = { rm[0], rm[3], rm[6], 0.0f,
                  rm[1], rm[4], rm[7], 0.0f,
                  rm[2], rm[5], rm[8], 0.0f,
                  tv[0], -tv[1], -tv[2], 1.0f };
Good luck!
