I'm using Matrix to scale and rotate Bitmaps. Now I'm wondering what the difference between preconcat & postconcat is, or more precisely the difference between:
postRotate
preRotate
setRotate
From what I could figure out so far, setRotate always overwrites the whole matrix, while preRotate and postRotate let me apply multiple changes to a matrix (e.g. scaling + rotation). However, using postRotate or preRotate didn't produce different results in the cases where I used them.
The answer to your question isn't really specific to Android; it's a graphics and math question. There's lots of theory in this answer--you've been warned! For a superficial answer to your question, skip to the bottom. Also, because this is such a long-winded tirade, I might have a typo or two making things unclear. I apologize in advance if that's the case.
In computer graphics, we can represent pixels (or in 3D, vertices) as vectors. If your screen is 640x480, here's a 2D vector for the point in the middle of your screen (forgive my shoddy markup):
[320]
[240]
[ 1]
I'll explain why the 1 is important later. Transformations are often represented using matrices because it's then very simple (and very efficient) to chain them together, like you mentioned. To scale the point above by a factor of 1.5, you can left-multiply it by the following matrix:
[1.5 0 0]
[ 0 1.5 0]
[ 0 0 1]
You'll get this new point:
[480]
[360]
[ 1]
Which represents the original point, scaled by 1.5 relative to the corner of your screen (0, 0). This is important: scaling is always done with respect to the origin. If you want to scale with some other point as your center (such as the middle of a sprite), you need to "wrap" the scale in translations to and from the origin. Here's the matrix to translate our original point to the origin:
[1 0 -320]
[0 1 -240]
[0 0 1]
Which yields:
[1*320 + 0*240 + (-320)*1]   [0]
[0*320 + 1*240 + (-240)*1] = [0]
[0*320 + 0*240 +    1*1  ]   [1]
You'll recognize the above as the identity matrix with the displacement coordinates slapped in the upper-right corner. That's why the 1 (the "homogeneous coordinate") is necessary: to make room for these coordinates, thus making it possible to translate using multiplication. Otherwise translation would have to be represented by matrix addition, which is more intuitive to humans, but would make graphics cards even more complicated than they already are.
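Incidentally, Android's Matrix bakes this wrap-in-translations pattern into its pivot overloads. Here is a small sketch of the equivalence, using the center of the 640x480 example screen as the pivot:

```java
import android.graphics.Matrix;

// Scale by 1.5 around the screen center (320, 240) instead of the origin:
Matrix m = new Matrix();
m.postScale(1.5f, 1.5f, 320f, 240f);

// Equivalent to wrapping the scale in translations by hand:
Matrix byHand = new Matrix();
byHand.postTranslate(-320f, -240f); // move the pivot to the origin
byHand.postScale(1.5f, 1.5f);       // scale about the origin
byHand.postTranslate(320f, 240f);   // move the pivot back
```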
Now, matrix multiplication generally isn't commutative, so when "adding" a transformation (by multiplying your matrix) you need to specify whether you're left-multiplying or right-multiplying. The difference it makes is what order your transformations are chained in. By right-multiplying your matrix (using preRotate()) you're indicating that the rotation step should happen before all the other transformations that you've just asked for. This might be what you want, but it usually isn't.
Often, it doesn't matter. If you only have one transformation, for example, it never matters :) Sometimes your transformations can happen in either order with the same effect, such as rotation and uniform scaling: a uniform scale matrix is just a multiple of the identity, and multiples of the identity commute with everything. (A non-uniform scale, where the x and y factors differ, generally does not commute with rotation.) But really, just think about it: if I rotate some picture 10 degrees clockwise and then scale it to 200%, it looks the same as if I scaled it first, then rotated it.
If you were doing some weirder compound transformations, you'd begin to notice a discrepancy. My advice is to stick with postRotate().
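To see the ordering difference concretely, mix rotation with a transformation it doesn't commute with, such as translation. A minimal sketch with Android's Matrix (the 90-degree angle and the sample point are arbitrary choices of mine):

```java
import android.graphics.Matrix;

Matrix pre = new Matrix();
pre.setTranslate(100f, 0f);
pre.preRotate(90f);          // rotation applied to points first, then the translation

Matrix post = new Matrix();
post.setTranslate(100f, 0f);
post.postRotate(90f);        // translation applied first, then the rotation

float[] p1 = {10f, 0f};
pre.mapPoints(p1);           // p1 is now {100f, 10f}

float[] p2 = {10f, 0f};
post.mapPoints(p2);          // p2 is now {0f, 110f}
```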
I answered the question yesterday, but I felt something was wrong with it today, so I'm correcting the answer here:
matrix: float[] values = {1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1};
// matrix2 holds the identity values, which as we all know means no transformation applied
matrix2: float[] values2 = {1f, 0, 0, 0, 1f, 0, 0, 0, 1};
Let's say our two matrices hold the values above.
1. When we apply the transformation below:
matrix.preTranslate(-50, -50);
it is equivalent to applying the following sequence of transformations to matrix2 above:
matrix2.postTranslate(-50, -50);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);// note here
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
2. When we apply the transformation below:
matrix.preRotate(50);
it is equivalent to applying the following sequence of transformations to matrix2:
matrix2.postRotate(50);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
3. When we apply the transformation below:
matrix.preScale(1.3f,1.3f);
it is equivalent to applying the following sequence of transformations to matrix2:
matrix2.postScale(1.3f,1.3f);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
4. When we apply the transformation below:
matrix.preSkew(0.4f,0.4f);
it is equivalent to applying the following sequence of transformations to matrix2:
matrix2.postSkew(0.4f,0.4f);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
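If you want to sanity-check one of these equivalences yourself, a quick comparison along these lines (my illustrative snippet, not part of the original answer) prints both matrices; the values should agree up to float rounding:

```java
import android.graphics.Matrix;
import android.util.Log;

Matrix matrix = new Matrix();
matrix.setValues(new float[] {1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1});
matrix.preTranslate(-50, -50);

Matrix matrix2 = new Matrix(); // starts as the identity
matrix2.postTranslate(-50, -50);
matrix2.postSkew(0.5f / 1.2f, 0.5f / 1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);

Log.d("MatrixCheck", "matrix:  " + matrix.toShortString());
Log.d("MatrixCheck", "matrix2: " + matrix2.toShortString());
```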
I am working on an MS-Word rendering tool for Android. I need to render shadows for VML shapes. Microsoft's documentation on the shadow matrix is not very clear, and I couldn't find a detailed explanation of the VML shadow matrix on the web.
According to Microsoft's documentation, the shadow matrix contains 6 values: 4 scale values and 2 perspective values (refer here).
[Sxx, Sxy, Syx, Syy, Px, Py] (S stands for 'Scale', P stands for 'Perspective')
My doubts:
Android's transformation matrix supports ScaleX and ScaleY, only two scale values. How do I map VML's 4 scale values to Android's transformation matrix?
What do those suffixes mean? What is the difference between Sxy and Syx?
Any explanation, even if it is partial, will be really helpful.
Thanks, Bala
Is it necessary to use a projection matrix like so:
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
and
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation and store results in mMVPMatrix
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
I'm having no end of trouble doing a simple 2d (sprite) rotation around the z axis.
The most success I've had so far is to manipulate the rotation matrix (rotate and translate) and pass it directly to the vertex shader.
It's not perfect and carries with it some shearing/skewing/distortion, but at least it allows me to move the 'pivot'/centre point of the quad. If I put the above lines in, the whole thing breaks and I get all kinds of odd results.
What is the actual purpose of the lines above (I have read the Android docs but I don't understand them), and are they necessary? Do people write OpenGL apps without them?
Thanks!!
OpenGL is a C API, but many frameworks wrap its functions into other functions to make life easier. For example, in OpenGL ES 2.0 you must create and pass matrices to OpenGL, but OpenGL does not provide you with any tools to actually build and calculate these matrices. This is where many other libraries come in: they do the matrix creation for you, and then you pass the constructed matrices to OpenGL -- or the function may very well pass the matrix to OpenGL for you, after making the calculation. It just depends on the library.
You can easily not use these frameworks and do it yourself, which is a great way to learn the math in 3D graphics -- and the math is really key to everything in this area.
I'm sure you have direct access to the OpenGL API in Android, but you are choosing to use a library that Android provides natively (similar to how Apple provides GLKit, a recent addition to their frameworks for iOS). You don't have to use that library, but it might make development faster if you know what the library is doing.
In this case, the three functions above appear to be pretty generic matrix/graphics utilities. You have a frustum function that sets the projection in 3D space. You have the lookAt function that determines the view of the camera -- where it is looking and where the camera is while it looks there.
And you have a matrix multiplication function, since in the end all matrices must be combined before they are applied to the vertices of your 3D object.
It's important to understand that a typical modelview matrix will include the camera orientation/location but it will also include the rotation and scaling of your object. So just sending a modelview based on the camera (from LookAt) is not enough, unless you want your object to remain at the center of the screen, with no rotation.
If you were to expand all the math that goes into matrix multiplication, it might look like this for a typical setup:
Frustum * Camera * Translation * Rotation * Vertices
Those middle three, Camera, Translation, Rotation, are usually combined together into your modelview, so multiply those together for that particular matrix, then multiply the modelview by your frustum projection matrix, and this whole result can be applied to your vertices.
You must be very careful about the order of the matrix multiplication. Multiplying a frustum by a modelview is not the same as multiplying a modelview by a frustum.
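As a sketch of one correct order using Android's android.opengl.Matrix utilities (the ratio, dx, dy, and angle values here are placeholders of mine, not from the question):

```java
float[] proj = new float[16], view = new float[16];
float[] model = new float[16], tmp = new float[16], mvp = new float[16];

float ratio = 640f / 480f;                // placeholder aspect ratio
float dx = 0.5f, dy = 0.25f, angle = 30f; // placeholder transform values

Matrix.frustumM(proj, 0, -ratio, ratio, -1, 1, 3, 7);
Matrix.setLookAtM(view, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1f, 0f);

Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, dx, dy, 0f);     // Translation
Matrix.rotateM(model, 0, angle, 0f, 0f, 1f); // Rotation around the z axis

// modelview = Camera * Translation * Rotation, then Frustum * modelview:
Matrix.multiplyMM(tmp, 0, view, 0, model, 0);
Matrix.multiplyMM(mvp, 0, proj, 0, tmp, 0);
```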
Now you mention skewing, distortion, etc. One possible reason for this is your viewport. I'm sure somewhere in your API there is an option to set the viewport's height and width, which are usually the height and width of your screen. If they are set to something that doesn't match your projection's aspect ratio, you will get exactly the kind of skewing you describe. That's just one possible explanation; it could also be that the parameters to your frustum aren't quite right, since that will certainly affect things like skew too.
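On Android, the usual place to keep the viewport and the frustum's aspect ratio in sync is the renderer's onSurfaceChanged callback; a small sketch (mProjMatrix as in the question):

```java
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    // Match the GL viewport to the surface, and derive the frustum's
    // aspect ratio from the same dimensions so nothing gets skewed.
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
```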
I've got the following problem, which I've been trying to solve all day.
I load a Bitmap with its corresponding height and width into an ImageView.
Then I use a matrix for moving, scaling and rotating the image.
For scaling I'm using the postScale method.
For moving I'm using the postTranslate method.
Only for rotating I'm using the preRotate method.
Now I need to get the factor I scaled the image with, because I later need this factor in another program.
Reading the MSCALE_X and MSCALE_Y values of the matrix only works as long as I haven't rotated the image. Once I rotate it, the scale values no longer match (because the matrix has been multiplied through according to the formula shown in the API docs).
Now my Question is:
How can I still get the scale factor of the image after rotating it?
For the rotation factor (degrees) it is simple, because I store it within an extra variable which is incremented/decremented while rotating.
But for the scale factor that does not work, because if I first scale an image down to 50% and then rescale it up to 150%, I have applied a factor of 3 in the second step, but the overall scaling factor is only 1.5.
Another example: even if I don't rescale the picture at all, its scale values change as soon as I rotate it.
//Edit:
Finally I solved the problem on my own :) (doing a bit of math, I figured out something interesting, or let's say obvious).
Here is my solution:
I figured out that the values MSCALE_X and MSCALE_Y are calculated using the cosine function (yeah, the basic math...). A rotation of 0° leads to the correct scalingWidth and scalingHeight in X and Y; 90° and 270° result in a scalingWidth/Height of 0; and 180° results in the scalingWidth/Height multiplied by -1.
This led me to the idea of writing the following function:
The function saves the current matrix into a new matrix, then rotates that new matrix back to the start state (0°). Now we can read the untouched MSCALE_X and MSCALE_Y values from it (which are the correct scaling factors).
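The function itself isn't included in the post; a minimal sketch of what it describes might look like this (assuming, as above, that the rotation angle is tracked in a separate variable, and that the rotation was applied with preRotate as the asker mentions):

```java
// Copy the matrix, undo the tracked rotation, then read the clean scale.
private float getScaleX(Matrix matrix, float rotationDegrees) {
    Matrix copy = new Matrix(matrix);
    copy.preRotate(-rotationDegrees); // rotate back to the 0-degree state
    float[] values = new float[9];
    copy.getValues(values);
    return values[Matrix.MSCALE_X];   // MSCALE_Y works the same way
}
```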
I had the same problem. This is a simple way and the logic is sound (in my mind).
For: xScale==yScale
float scale = matrix.mapRadius(1f) - matrix.mapRadius(0f);
For: xScale != yScale
float[] points={0f,0f,1f,1f};
matrix.mapPoints(points);
float scaleX=points[2]-points[0];
float scaleY=points[3]-points[1];
If you are not translating, you may be able to get away with mapping just a single (1f, 1f) point. I've tested the xScale==yScale (mapRadius) variant and it seems to work.
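As an aside (my addition, not from the answer above): Matrix.mapVectors() ignores translation entirely, and taking the lengths of the mapped unit vectors also survives a rotation applied on top of the scale (e.g. via postRotate):

```java
float[] v = {1f, 0f, 0f, 1f};                  // the unit x and y vectors
matrix.mapVectors(v);                          // translation is ignored here
float scaleX = (float) Math.hypot(v[0], v[1]); // length of the mapped x axis
float scaleY = (float) Math.hypot(v[2], v[3]); // length of the mapped y axis
```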
I had a similar problem with an app I'm writing. I couldn't see an obvious and simple solution, so I just created a RectF object that had the same initial coords as the bitmap. Then, every time I adjusted the matrix, I'd apply the transformation to the RectF as well (using Matrix.mapRect()). This worked perfectly for me. It also allowed me to keep track of the absolute position of the edges of the bitmap.
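A minimal sketch of that approach, mapping the original bounds with the full matrix each time (the transforms here are made-up examples):

```java
// Track the bitmap's bounds alongside the matrix.
RectF bounds = new RectF(0, 0, bitmap.getWidth(), bitmap.getHeight());

matrix.postScale(1.5f, 1.5f);
matrix.postRotate(30f);

// mapped becomes the bounding box of the transformed corners.
RectF mapped = new RectF();
matrix.mapRect(mapped, bounds);
```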
I'm trying to get billboarding to work, but having trouble with the last step.
After following these directions from NeHe's tutorials (http://nehe.gamedev.net/data/articles/article.asp?article=19) I have my look, right, and up vectors, and I have already translated the modelview matrix to the centerpoint of the billboard by using glTranslatef().
When I try to create a multiplication matrix out of these like so, the billboards are displayed all over the place in the wrong positions and orientations:
float[] m = {right.x, right.y, right.z, 0f,
             up.x,    up.y,    up.z,    0f,
             look.x,  look.y,  look.z,  0f,
             pos.x,   pos.y,   pos.z,   1f}; // pos is the centerpoint position
gl.glMultMatrixf(m, 0);
I guess my problem is that I don't know how to correctly create and multiply the matrix. I tried doing this instead, but then half the lines (the ones that need to rotate counter-clockwise) are rotated in the wrong direction:
//normal is the vector that the billboard faces before any manipulations.
float angle = look.getAngleDeg(normal); //returns angle between 0 and 180.
gl.glRotatef(angle, up.x,up.y,up.z);
Got it, using my second method. Calculating the angle between two vectors (the arccos of the dot product) can only give an angle between 0 and 180, so half the time you want to negate the angle so the rotation goes in the opposite direction.
This is easy to check: since I already have the right vector, I can just check whether the angle between the right vector and the normal is acute. If it is acute, negate the original angle.
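In code, that check might look something like this (a sketch only; dot() is a hypothetical helper on the vector class used above):

```java
float angle = look.getAngleDeg(normal); // always in [0, 180]
if (right.dot(normal) > 0) {            // acute angle: negate, per the check above
    angle = -angle;
}
gl.glRotatef(angle, up.x, up.y, up.z);
```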
I'm working on a basic augmented reality application on Android. What I've done so far is detect a square with OpenCV, and then calculate a rotation and translation vector with cvFindExtrinsicCameraParams2(). For this I used 4 object points, which are just the corners of a square around (0,0,0), and the 4 corners of the square in the image.
This yields a pretty good rotation and translation matrix. I also calculated the rotation matrix with cvRodrigues2(), since it is easier to use than the rotation vector. As long as I use these to draw some points in the image, everything works fine. My next step, however, is to pass these vectors and the matrix back to Java and then use them with OpenGL to draw a square in an OpenGLView. The square should sit exactly on top of the square in the image displayed behind the OpenGLView.
My problem is that I cannot find the correct way of using the rotation matrix and translation vector in OpenGL. I started off with exactly the same object points as used for the OpenCV functions. Then I applied the rotation matrix and translation vector in pretty much every possible way I could think of. Sadly, none of these approaches produced a result that was anywhere near what I hoped for. Can anyone tell me how to use them correctly?
So far the "closest" results I have gotten were when randomly multiplying the whole matrix by -1. But most of the time the squares still looked mirror-inverted or rotated by 180 degrees, so I guess it was just a lucky hit, not the right approach.
Okay after some more testing I finally managed to get it to work. While I don't understand it... it does 'work'. For anyone who will need to do this in the future here is my solution.
float rv[3];     // the rotation vector
float rotMat[9]; // the rotation matrix
float tv[3];     // the translation vector

rv[1] = -1.0f * rv[1];
rv[2] = -1.0f * rv[2];
// Convert the rotation vector into a matrix here.

// Complete matrix, ready to use for OpenGL:
float RTMat[] = {rotMat[0], rotMat[3], rotMat[6], 0.0f,
                 rotMat[1], rotMat[4], rotMat[7], 0.0f,
                 rotMat[2], rotMat[5], rotMat[8], 0.0f,
                 tv[0],     -tv[1],    -tv[2],    1.0f};
As genpfault said in his comment, everything needs to be transposed, since OpenGL expects matrices in column-major order. (Thanks for the comment, I had seen that page earlier already.) Furthermore, the y and z rotation components as well as the y and z translation need to be multiplied by -1. This is what I find a bit weird. Why only those and not the x values too?
This works as it should, I guess, but the corners don't match exactly. I suspect this is caused by some wrong OpenGLView configuration. So even though I am still not 100% happy with my solution, I guess it is the answer to my question.
Pandoro's method really works! In case someone is wondering how to convert the rotation vector into a rotation matrix, here's how I did it. By the way, I've used these in OpenGL 2, not ES.
// use the rotation vector generated by OpenCV's cvFindExtrinsicCameraParams2()
float rv[] = {rotation->data.fl[0], rotation->data.fl[1], rotation->data.fl[2]};
// use the translation vector generated by OpenCV's cvFindExtrinsicCameraParams2()
float tv[] = {translation->data.fl[0], translation->data.fl[1], translation->data.fl[2]};

// negate the y and z rotation components *before* converting to a matrix
rv[1] = -1.0f * rv[1];
rv[2] = -1.0f * rv[2];

// rotation vectors can be converted to a 3-by-3 rotation matrix
// by calling cvRodrigues2() - Source: O'Reilly Learning OpenCV
CvMat rvMat = cvMat(3, 1, CV_32FC1, rv);
CvMat* rotMat = cvCreateMat(3, 3, CV_32FC1);
cvRodrigues2(&rvMat, rotMat, NULL);

float rm[9];
for (int i = 0; i < 9; i++) {
    rm[i] = rotMat->data.fl[i];
}

// Complete matrix, transposed and ready to use for OpenGL:
float RTMat[] = {rm[0], rm[3], rm[6], 0.0f,
                 rm[1], rm[4], rm[7], 0.0f,
                 rm[2], rm[5], rm[8], 0.0f,
                 tv[0], -tv[1], -tv[2], 1.0f};
Good luck!