I am drawing an image-based texture using OpenGL in Android and trying to rotate it about its center.
But the result is not what I expected: the texture appears skewed.
The first screen grab is the texture drawn without rotation, and the second one is drawn with a 10-degree rotation.
The code snippet is below:
mViewWidth = viewWidth;//View port width
mViewHeight = viewHeight;//View port height
float ratio = (float) viewWidth / viewHeight;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
.....
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 5, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mRotationMatrixHandle, 1, false, mRotationMatrix, 0);
And in shader:
....
" gl_Position = uMVPMatrix*a_position;\n"
....
The black area in the first screen grab is the area of the GLSurfaceView, and the grey area is where I am trying to draw the image.
The image is already at the origin, so I think there is no need to translate it before rotating.
The basic problem is that you're scaling your geometry to adjust for the screen aspect ratio before you apply the rotation.
It might not be obvious that you're actually scaling the geometry. But by calculating the coordinates you use for drawing to adjust for the aspect ratio, you are effectively applying a non-uniform scaling transformation to the geometry. And if you then rotate the result, it will get distorted.
What you need to do is apply the rotation before you scale. This will require some reorganization of your current code. Since you apply the scaling before you pass the coordinates to OpenGL, and then do the rotation in the shader, you can't easily change the order. You either have to:
Apply both transformations, in the proper order, to the input coordinates before you pass them to OpenGL, and remove the rotation from the shader code.
Apply both transformations, in the proper order, in the shader code. To do this, you would leave the input coordinates unmodified and instead pass the aspect-ratio scaling factor into the shader.
For the first option, applying a 2D rotation in your own code is easy enough, and it looks like you only have 4 vertices, so there is no efficiency concern. Still, the second option is certainly more elegant. So instead of scaling the coordinates in your client code, pass a scaling factor as a uniform into the shader. Then, in the GLSL code, apply the rotation first, and scale the resulting coordinates.
Another option is to build the complete transformation matrix (again applying the individual transformations in the correct order) and pass that matrix into the shader, as sketched below.
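For illustration, here is a minimal sketch of that last approach in client code. It assumes the aspect-ratio adjustment currently baked into the vertex data is a 1/ratio scale on x, and that the handle and uniform names (mMVPMatrixHandle, uMVPMatrix) match the question's shader; adjust both to your actual setup. The key point is that the rotation is multiplied in to the right of the scale, so it is applied to the vertices first:
// Sketch only: mvp = projection * view * (aspectScale * rotation),
// so vertices are rotated first and only then scaled for the aspect ratio.
float[] rotation = new float[16];
float[] aspectScale = new float[16];
float[] model = new float[16];
float[] temp = new float[16];
float[] mvp = new float[16];
Matrix.setRotateM(rotation, 0, 10, 0, 0, 1.0f);
Matrix.setIdentityM(aspectScale, 0);
Matrix.scaleM(aspectScale, 0, 1.0f / ratio, 1.0f, 1.0f);   // assumed aspect-ratio factor
Matrix.multiplyMM(model, 0, aspectScale, 0, rotation, 0);  // scale * rotation
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mvp, 0, temp, 0, model, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvp, 0); // hypothetical handle for uMVPMatrix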
Related
Wasn't sure whether to post this here or in GameDev, but since it's not really game development I decided to ask here.
I'm trying OpenGL ES 2 on Android and right now I have a simple setup. I load an object from a .obj file, display it on the screen, and then rotate the camera around the object using touch controls. My viewMatrix is set up like this:
double[] dist = {DISTANCE * Math.sin(yawAngle) * Math.abs(Math.cos(pitchRollAngle)),
DISTANCE * Math.sin(pitchRollAngle),
DISTANCE * Math.cos(yawAngle) * Math.abs(Math.cos(pitchRollAngle))};
Matrix.setLookAtM(viewMatrix, 0, (float) dist[0], (float) dist[1], (float) dist[2], 0f, 0f, 0f, 0f, 1.0f, 0.0f);
And my projection matrix is just this:
Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3, 100);
I set the yaw / pitchRoll angle from touch events. This works fine: when the object is in the center of the screen, I can rotate around it as I should. But if I try to move the object, say, 1 unit on the X axis like this:
float[] modelMatrix = new float[16];
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, 1, 0, 0);
And then multiply all of them like this:
float[] MVPMatrix = new float[16];
Matrix.multiplyMM(MVPMatrix, 0, modelMatrix, 0, viewMatrix, 0);
Matrix.multiplyMM(MVPMatrix, 0, projectionMatrix, 0, MVPMatrix, 0);
The object spins around in place, but I want it to rotate around the (0, 0, 0) point. What am I doing wrong?
This is one of the most common questions about matrix multiplication. In your case you need to rotate the object first and then translate it. If you translate first to X and then rotate, the object will appear at position X but rotated around its own axis. If you rotate first and then translate by X, the object will not appear at X but at the point obtained by rotating X itself. This is the expected result; it is simply how the math works.
To understand what happens: a matrix really consists of three base vectors and a center. For the identity matrix the base vectors are (1,0,0), (0,1,0), (0,0,1) and the center is (0,0,0). When you multiply this matrix by some transformation, the base vectors themselves are transformed. The result is that the matrix carries its own coordinate system, inside which later transformations seem "logical". This is why a rotation matrix on its own never changes the center of the object.
I know this all sounds complicated, but the effect is actually very easy to imagine: treat matrix multiplications as commands given to a character seen from a first-person view. When you say "go forward" (translate) you take a step forward; now "turn 90 degrees" and you turn in place, still on the same spot; now "go forward" again and you take another step, but no longer in the direction you were facing at the beginning...
So what you are doing is saying "go forward by 1", then "turn by ANGLE degrees". The result is an object that stays in the same location and spins around its own axis.
What you should do instead is say "turn toward your goal" (rotate by ANGLE), then "go forward by 1", and maybe even "turn back by -ANGLE" so you face the same direction as you did at the beginning.
I hope this explanation will help you.
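A minimal sketch of the two orderings with android.opengl.Matrix (the names and the 45-degree angle are illustrative, not the poster's code). Remember that translateM and rotateM both post-multiply the matrix, so the call written last is the one applied to the vertices first:
float angle = 45f;                        // example rotation
float[] spinInPlace = new float[16];
float[] orbitOrigin = new float[16];
// "go forward, then turn": the object sits at (1,0,0) and spins around its own axis
Matrix.setIdentityM(spinInPlace, 0);
Matrix.translateM(spinInPlace, 0, 1f, 0f, 0f);
Matrix.rotateM(spinInPlace, 0, angle, 0f, 1f, 0f);
// "turn, then go forward": the object orbits around (0, 0, 0)
Matrix.setIdentityM(orbitOrigin, 0);
Matrix.rotateM(orbitOrigin, 0, angle, 0f, 1f, 0f);
Matrix.translateM(orbitOrigin, 0, 1f, 0f, 0f);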
I remember having a similar problem in Delphi 2010 when I had to use OpenGL there. The trick was to move the object back to <0, 0, 0>, apply the rotation, and then place it back at its original position before drawing a frame.
I can't recall how I did it, nor do I have access to that code anymore as it belonged to my previous employer, but that's what I remember.
The problem was in this line:
Matrix.multiplyMM(MVPMatrix, 0, modelMatrix, 0, viewMatrix, 0);
I just switched the model and view matrix so that they are multiplied the other way around, and it works! Like this:
Matrix.multiplyMM(MVPMatrix, 0, viewMatrix, 0, modelMatrix, 0);
Thanks to @Zubaja for pointing me in the right direction!
How can I plug pixel coordinates into gl.glTranslatef()?
Currently, I do this to get the texture to appear at the bottom of the screen:
gl.glPushMatrix();
gl.glTranslatef(1.45f, -2.76f, 0);
gl.glScalef(1 / scaleX, 1 / scaleY, 0);
Square sq = getTexture(resourceId);
sq.draw(gl);
gl.glPopMatrix();
Without those "1.45f, -2.76f, 0" values, the texture appears at the centre of the screen. How can I position my textures using pixel coordinates? Most of my textures are 32x32, with a few at 16x16.
Previously I used ((GL11Ext) gl).glDrawTexfOES() to render my textures, but with that I was unable to apply any transformations to them; for example, I couldn't rotate them.
I don't know how you set up your projection, but what you should be doing is using glOrtho and glViewport to set up your scene. Given a window of size (width, height):
// init OpenGL at some previous point
gl.glViewport(0, 0, width, height);
// choose a bottom-left corner, e.g. (0,0)
// use your own values for the near and far planes
float left = 0, bottom = 0, near = 0.1f, far = 1000.0f;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(left, left + width, bottom, bottom + height, near, far);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
and you will get pixel coordinates (with the bottom-left of the screen at (left, bottom)) for your application.
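As a hedged usage sketch (spriteX, spriteY and the 32x32 size are illustrative, and the quad is assumed to span (0,0)-(1,1) in its own coordinates), drawing a texture with its lower-left corner at a given pixel position would then look roughly like this:
gl.glPushMatrix();
gl.glTranslatef(spriteX, spriteY, -1.0f);  // pixel position; z = -1 stays inside the near/far range above
gl.glScalef(32f, 32f, 1f);                 // 32x32 pixels on screen
sq.draw(gl);
gl.glPopMatrix();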
I'm trying to make a hexagon with 6 triangles using rotation and translation. Rather than making multiple translate calls, I want to translate the triangle downward once and then rotate it around the Z axis by 60 degrees six times (my sketch may help with that explanation: http://i.imgur.com/SrrXcA3.jpg). After repeating the drawTriangle() and rotate() methods six times, I should have a hexagon.
Currently my code looks like this:
public void onDrawFrame(GL10 unused)
{
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); //start by clearing the screen for each frame
GLES20.glUseProgram(mPerVertexProgramHandle); //tell OpenGL to use the shader program we've compiled
//Get pointers to the program's variables. Instance variables so we can break apart code
mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "uMVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aPosition");
mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aColor");
//Prepare the model matrix!
Matrix.setIdentityM(mModelMatrix, 0); //start modelMatrix as identity (no transformations)
Matrix.translateM(mModelMatrix, 0, 0.0f, -0.577350269f, 0.0f); //shift the triangle down the y axis by -0.577350269f so that its top point is at 0,0,0
drawTriangle(mModelMatrix); //draw the triangle with the given model matrix
Matrix.rotateM(mModelMatrix, 0, 60f, 0.0f, 0.0f, 1.0f);
drawTriangle(mModelMatrix);
}
Here's my problem: it appears my triangle isn't rotating around (0,0,0), but instead rotates around the triangle's center (as shown in this picture: http://i.imgur.com/oiLFSCE.png).
Is it possible to rotate the triangle around (0,0,0), where its top vertex is located?
Are you really sure that your constant -0.577350269f is the correct value for the triangle's center?
Also, your code looks unfinished (you get an MVP handle but never use it in the code); could you provide more information?
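As a hedged sketch (not part of the answer above), if the goal is to pivot each triangle around (0,0,0), the rotation has to end up to the left of the translation. Since Matrix.rotateM and Matrix.translateM both post-multiply, that means rebuilding the model matrix for each triangle with the rotate call issued before the translate call:
// Sketch: six triangles, each pivoted around (0,0,0) by a multiple of 60 degrees
for (int i = 0; i < 6; i++) {
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.rotateM(mModelMatrix, 0, 60f * i, 0f, 0f, 1f);       // applied to the vertices last: pivot around the origin
    Matrix.translateM(mModelMatrix, 0, 0f, -0.577350269f, 0f);  // applied to the vertices first: put the tip at (0,0,0)
    drawTriangle(mModelMatrix);
}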
I know that normalised coordinates should be -1 (Left) and +1 (Right) and -1 (Bottom) and +1 (Top)
like this:
But after applying this:
From my onSurfaceChanged method
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
//The above line can be replaced with:
//Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
//But I get the same results with either frustumM or orthoM
And this in my onDrawFrame method
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
(I then pass mMVPMatrix into my custom class's draw method where it is rotated and translated).
But, my co-ordinates seem to change - this is roughly what happens:
As you can see, the x coordinates are altered somewhat: -1 and +1 are no longer at the edges of the screen (on the device I'm using at the moment, the outer edges become -1.7 and +1.7).
Y coordinates remain unchanged.
I'd appreciate it if someone could point out where I'm going wrong. I need it to be -1 through +1, as it should be.
Thanks
To my eyes it appears correct. If your screen is not square, are you sure you want your x axis stretched so that it behaves as if the screen were square? If you did that and told OpenGL to draw a square, it would appear on screen as a rectangle: when the screen width is greater than the screen height, the x-axis edges have to extend beyond the y-axis edges, exactly as your image suggests. That's why you pass your ratio to the projection, so that it knows how to draw things with the correct proportions.
What is happening when you draw a regular square on the screen? Is it appearing as a square or a rectangle?
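If you really do want -1 and +1 to land exactly on the left and right screen edges (accepting that shapes will then appear stretched horizontally on a non-square screen), a minimal sketch is to drop the ratio from the projection call:
// Sketch: map x = -1..+1 and y = -1..+1 to the full viewport; squares will be stretched
Matrix.orthoM(mProjMatrix, 0, -1, 1, -1, 1, 3, 7);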
I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, is working fine so far, but I've had to tweak numbers to get stuff to appear on the screen, and I'm frustrated by my inability to understand the relationship between the projection matrix and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best' - using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters - just the dimensions of the viewport - but I've not got anywhere with that. (Some sites have height and 0 the other way around, but that makes no difference.) Shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
-1, -1,
1, -1,
-1, 1,
1, 1
};
I don't intend, ultimately, to have to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set correctly?
My textures are currently 128x128 (but may end up being different sizes, perhaps always square though). I have no way of knowing when they're being displayed at actual size currently. I'd like to be able to pass in a value of, say, 128 for nScaleup to have it plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum, just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because an orthographic projection puts (x, y, z) at the same place on the screen no matter the value of z (near and far must not be equal, though). This is the only way to tell OpenGL to ignore the z coordinate: there is no specific "2D" mode; your matrices are still 4x4, and 2-dimensional vertices receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
The vertex buffer you're using doesn't work, because you're using GL_FIXED format. That means 16 bits before the decimal point, and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
-0x10000, -0x10000,
0x10000, -0x10000,
-0x10000, 0x10000,
0x10000, 0x10000
};
This is probably the reason why you need to scale it so much. If you use this array, without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;)
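As a hedged follow-up to that exercise (not part of the original answer): a 1x1 square in 16.16 fixed point uses ±0x8000, i.e. ±0.5, and then the scale factor maps directly to pixels:
// 1x1 square centred on the origin, in 16.16 fixed point (0x8000 == 0.5)
int vertices[] =
{
    -0x8000, -0x8000,
     0x8000, -0x8000,
    -0x8000,  0x8000,
     0x8000,  0x8000
};
// With the glOrthof setup above, scaling by 128 draws a 128x128 texture at actual size
gl.glScalef(128f, 128f, 1f);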
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D
I think the problem is that you're using a projection matrix for perspective projection.
Instead you should use parallel projection.
To get this matrix you can use the glOrtho() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value no longer has any influence on an object's size.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's right that in a 2D world you don't really care about z-values. But you still have to decide
which of your objects you want to draw first.
There are two ways to decide that:
Deactivate GL_DEPTH_TEST and everything is drawn in the order you choose
Activate GL_DEPTH_TEST and let OpenGL decide
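A minimal sketch of the two options in the GL10 style used above (the clear call is assumed to live in your onDrawFrame):
// Option 1: no depth test - later draw calls simply paint over earlier ones
gl.glDisable(GL10.GL_DEPTH_TEST);
// Option 2: depth test on - OpenGL keeps the fragment closest to the camera
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);  // clear depth every frame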