I was just going through the documentation on developer.android.com, and while reading the Canvas class I found a method named scale, so I searched for its documentation and found the following:
public void scale (float sx, float sy)
Since: API Level 1
Preconcat the current matrix with the specified scale.
Parameters
sx The amount to scale in X
sy The amount to scale in Y
What matrix are they talking about here? How is the matrix associated with the canvas, and what difference does it make whether I scale my canvas or not?
Canvas makes a lot of native calls under the covers and delegates its work to GL. I believe there is a default Matrix already associated with it. If you want in-depth knowledge of what's happening, I'd recommend looking through the source code for the graphics stuff.
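Everything you draw on a Canvas is run through that matrix first, so scaling the canvas scales all subsequent drawing. A minimal sketch of what that looks like in practice, assuming a custom View with an mPaint field already set up:

@Override
protected void onDraw(Canvas canvas) {
    canvas.save();                          // remember the current matrix
    canvas.scale(2f, 2f);                   // preconcat a 2x scale onto the canvas matrix
    canvas.drawRect(0, 0, 50, 50, mPaint);  // shows up as a 100x100 rectangle on screen
    canvas.restore();                       // back to the previous matrix
    canvas.drawRect(0, 0, 50, 50, mPaint);  // shows up at its literal 50x50 size
}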
I am working on an MS-Word rendering tool for Android. I need to render shadows for VML shapes. Microsoft's documentation on the shadow matrix is not very clear, and I couldn't find a detailed explanation of the VML shadow matrix on the web.
According to Microsoft's documentation, the shadow matrix contains 6 values: 4 scale values and 2 perspective values (refer here).
[Sxx, Sxy, Syx, Syy, Px, Py] (S stands for 'Scale', P stands for 'Perspective')
My doubts:
Android's transformation matrix supports ScaleX and ScaleY, only two scale values. How do I map VML's 4 scale values to Android's transformation matrix?
What do those suffixes mean? What is the difference between Sxy and Syx?
Any explanation, even if it is partial, will be really helpful.
Thanks, Bala
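A partial guess at the mapping: the four S values look like the 2x2 linear part of a 3x3 transform (Sxy and Syx would then be the off-diagonal shear terms), and the two P values look like the perspective row. If that reading is right, they could be loaded into an android.graphics.Matrix via setValues(); the variable names below are just the six VML values, and the row/column convention is an assumption I have not verified against the VML spec:

// Speculative mapping of VML's [Sxx, Sxy, Syx, Syy, Px, Py] onto Android's 3x3 Matrix.
// Whether Sxy belongs in the MSKEW_X or the MSKEW_Y slot depends on VML's convention.
float[] values = {
        sxx, sxy, 0f,   // MSCALE_X, MSKEW_X,  MTRANS_X
        syx, syy, 0f,   // MSKEW_Y,  MSCALE_Y, MTRANS_Y
        px,  py,  1f    // MPERSP_0, MPERSP_1, MPERSP_2
};
Matrix shadowMatrix = new Matrix();
shadowMatrix.setValues(values);
canvas.concat(shadowMatrix); // apply before drawing the shadow shape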
I made an app in libGDX. I have a sprite which is 256*256 pixels. But this is too big, so I want to scale it down to 160*160 pixels. How could I do that?
You can achieve it using one of the draw overloads in the SpriteBatch class that takes an explicit width and height:
batch.draw(TextureRegion region, float x, float y, float width, float height)
(A Sprite is a TextureRegion, so you can pass it directly.)
So you can scale to 160 * 160 pixels by calling:
batch.begin();
batch.draw(yourSprite, 0, 0, 160, 160);
batch.end();
You should note that scaling is an expensive operation, especially for low-end devices, and should only be used when absolutely necessary. Ideally, you should resize all your images/textures before using them in your project.
Instead of manually drawing the sprite using batch.draw(sprite...) you can use sprite.draw(Batch batch). Using the sprite's own method will do all of the object's transformations for you, which makes things a bit easier to handle. This is of course assuming you are using the actual Sprite class to hold your texture, which I would highly recommend.
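For example, a short sketch of the Sprite-based approach; the texture path is a placeholder:

Sprite sprite = new Sprite(new Texture("mySprite.png")); // 256x256 source image
sprite.setSize(160f, 160f);   // the sprite will now be drawn at 160x160
sprite.setPosition(0f, 0f);

batch.begin();
sprite.draw(batch);           // the Sprite applies its own size, position, rotation and origin
batch.end();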
When calling Matrix.postScale(sx, sy, px, py), the matrix gets scaled and also translated (depending on the given pivot point px, py). That makes this method well suited for zooming into images, because I can easily focus on one specific point.
The Android docs describe the method like this:
Postconcats the matrix with the specified scale. M' = S(sx, sy, px, py) * M
At first glance this seems ridiculous, because M is supposed to be a 3x3 matrix. Digging around, I found out that Android uses a 4x4 matrix for its internal computations (while only exposing a 3x3 matrix in its API). Since that code is written in C, I'm having a hard time understanding what is actually happening.
What I actually want to know: how can I apply this kind of scaling (around a focused point) to the 3x3 Matrix that I can access from my Java code?
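For the 3x3 Matrix available in Java, scaling around a pivot can be expressed as translate-scale-translate, which as far as I can tell is all S(sx, sy, px, py) is; a sketch with made-up example values:

Matrix m = new Matrix();        // stands in for the current transform M
float sx = 2f, sy = 2f;         // example scale factors
float px = 100f, py = 100f;     // example pivot, e.g. the point to zoom into

// Equivalent to m.postScale(sx, sy, px, py):
// move the pivot to the origin, scale, move it back, all post-concatenated, so
// M' = T(px, py) * S(sx, sy) * T(-px, -py) * M = S(sx, sy, px, py) * M
m.postTranslate(-px, -py);
m.postScale(sx, sy);
m.postTranslate(px, py);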
I want to skew (correct me if this is not the correct word) a bitmap so that it appears to have depth. A good way to visualize what I am asking for is how the credits of Star Wars are angled to show depth.
I have tried the following:
canvas.getMatrix().postSkew(kx,ky,px,py);
and
canvas.skew(sx,sy);
But I have not had much success. The above methods seem to always transform the bitmap into a parallelogram. Is there a way to transform the bitmap into a trapezoid instead?
Here is a snippet of code that I took from the examples that Romain pointed me to.
canvas.rotate(-mOrientation[0] + mHeading, mCenterX, mCenterY);
camera.save();
if (mReverse) {
    camera.translate(0.0f, 0.0f, mDepthZ * interpolatedTime);
} else {
    camera.translate(0.0f, 0.0f, mDepthZ * (1.0f - interpolatedTime));
}
camera.rotateX(mOrientation[1]);
camera.applyToCanvas(canvas);
canvas.drawPath(mPath, mPaint);
canvas.drawCircle(mCenterX, mCenterY, mRadius - 37, mPaint);
camera.restore();
I spent a lot of time working on this today (ran into the same problem) and came up with the code below.
Key thing to note: you need to call preTranslate() and postTranslate() with the center (or some other pivot point) of your Canvas area. This makes the transformation apply around the center of the image instead of the upper left corner (x=0, y=0), which is the default. This is why you would get a parallelogram instead of what you would expect, a trapezoid (thanks for teaching me the names of those).
The other important thing I picked up is the save/restore methods on the Canvas/Camera. Basically, if you call the rotate functions consecutively, say three times, without restoring the state each time, you keep rotating your object further and further with every draw. That might be what you want, but I certainly didn't want it in my case. The same applies to the Canvas: you are basically applying the Matrix from the Camera object to the Canvas, and it needs to be reset, otherwise the same thing happens.
Hope this helps someone; this is not well documented for beginners. Tip to anyone reading this: check out the ApiDemos project in the SDK samples. There is a Rotate3dAnimation.java file which demonstrates this as well.
//Snippet from a function used to handle a draw
mCanvas.save();    //save a 'clean' matrix that doesn't have any camera rotation in it
ApplyMatrix();     //apply the rotated matrix to the canvas
Draw();            //does the drawing
mCanvas.restore(); //restore the clean matrix

public void ApplyMatrix() {
    mCamera.save();
    mCamera.rotateX(-66);
    mCamera.rotateY(0);
    mCamera.rotateZ(0);
    mCamera.getMatrix(mMatrix);
    int centerX = mWidth / 2;
    int centerY = mHeight / 2;
    mMatrix.preTranslate(-centerX, -centerY); //this is the key to getting the correct viewing perspective
    mMatrix.postTranslate(centerX, centerY);
    mCanvas.concat(mMatrix);
    mCamera.restore();
}
You cannot achieve the effect you want with skew(). However, you can use a Camera object and 3D rotations to achieve it. The Camera will generate a Matrix for you that you can then apply to the Canvas. Note that the result will not be perspective-correct, but it is good enough for your purpose. This is how 3D rotations are done in Honeycomb's Launcher, for instance (and in many other apps).
I don't think the "Star Wars effect" is an affine transformation, and affine transformations are, I think, the only operations Matrix supports.
I am working on a basic augmented reality application on Android. What I have done so far is detect a square with OpenCV and then, using cvFindExtrinsicCameraParams2(), calculate a rotation and translation vector. For this I used 4 object points, which are just the corners of a square around (0,0,0), and the 4 corners of the square in the image.
This yields a pretty good rotation and translation matrix. I also calculated the rotation matrix with cvRodrigues2(), since using it is easier than using the rotation vector. As long as I use these to draw some points in the image, everything works fine. My next step, however, is to pass these vectors and the matrix back to Java and then use them with OpenGL to draw a square in an OpenGLView. The square should sit exactly around the square in the image, which is displayed behind the OpenGLView.
My problem is that I cannot find the correct way of using the rotation matrix and translation vector in OpenGL. I started off with exactly the same object points as used for the OpenCV functions. Then I applied the rotation matrix and translation vector in pretty much every possible way I could think of. Sadly, none of these approaches produced a result that was anywhere near what I hoped for. Can anyone tell me how to use them correctly?
So far the "closest" result I have gotten was when randomly multiplying the whole matrix by -1. But most of the time the squares still look mirrored or rotated by 180 degrees. So I guess it was just a lucky hit, not the right approach.
Okay, after some more testing I finally managed to get it to work. While I don't understand it... it does 'work'. For anyone who needs to do this in the future, here is my solution.
float rv[3]; // the rotation vector
float rotMat[9]; // rotation matrix
float tv[3]; // translation vector.
rv[1]=-1.0f * rv[1]; rv[2]=-1.0f * rv[2];
//Convert the rotation vector into a matrix here.
//Complete matrix ready to use for OpenGL
float RTMat[] = {rotMat[0], rotMat[3], rotMat[6], 0.0f,
                 rotMat[1], rotMat[4], rotMat[7], 0.0f,
                 rotMat[2], rotMat[5], rotMat[8], 0.0f,
                 tv[0],    -tv[1],    -tv[2],    1.0f};
As genpfault said in his comment, everything needs to be transposed, since OpenGL needs column-major order. (Thanks for the comment, I had already seen that page earlier.) Furthermore, the y and z rotation angles as well as the y and z translations need to be multiplied by -1. This is what I find a bit weird. Why only those and not the x values too?
This works as it should, I guess, but the corners don't match exactly. I assume this is caused by some wrong OpenGLView configuration. So even though I am still not 100% happy with my solution, I guess it is the answer to my question.
Pandoro's method really works! In case someone is wondering how to convert the rotation vector into a rotation matrix, here's how I did it. By the way, I've used these in OpenGL 2, not ES.
// use the rotation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float rv[] = {rotation->data.fl[0], rotation->data.fl[1], rotation->data.fl[2] };
// use the translation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float tv[] = {translation->data.fl[0], translation->data.fl[1], translation->data.fl[2]} ;
float rm[9];
// rotation matrix
CvMat* rotMat = cvCreateMat (3, 3, CV_32FC1);
// rotation vectors can be converted to a 3-by-3 rotation matrix
// by calling cvRodrigues2() - Source: O'Reilly Learning OpenCV
cvRodrigues2(rotation, rotMat, NULL);
for(int i=0; i<9; i++){
rm[i] = rotMat->data.fl[i];
}
rv[1]=-1.0f * rv[1]; rv[2]=-1.0f * rv[2];
//Convert the rotation vector into a matrix here.
//Complete matrix ready to use for OpenGL
float RTMat[] = {rm[0], rm[3], rm[6], 0.0f,
                 rm[1], rm[4], rm[7], 0.0f,
                 rm[2], rm[5], rm[8], 0.0f,
                 tv[0], -tv[1], -tv[2], 1.0f};
Good luck!
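On the Java side, the resulting 16-float array can then be loaded as the model-view matrix before drawing the square. A minimal sketch assuming an OpenGL ES 1.x renderer and that the array built above has been passed back over JNI as rtMat (the name is a placeholder):

// Inside the Renderer's onDrawFrame(GL10 gl)
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(rtMat, 0);   // already laid out in column-major order, as above
// ... then draw the square using the same object points that were given to OpenCV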