I have a simple 2D engine that renders textured quads. Right now I can scale a quad or rotate it, but when I try to translate it I get a strange distortion (the quad is squashed into the left half of the screen with an infinite-perspective effect). Here's the code:
private final float quad_vertex[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f // top right
};
final float left = -width/2.0f;
final float right = width/2.0f;
final float bottom = -height/2.0f;
final float top = height/2.0f;
final float near = 0.1f;
final float far = 200.0f;
Matrix.orthoM(projection_matrix, 0, left, right, top, bottom, near, far);
Matrix.setLookAtM(view_matrix, 0, 0, 0, 1.0f, 0.0f, 0f, 0f, 0f, 1.0f, 0.0f);
...
Matrix.setIdentityM(model_matrix, 0);
Matrix.scaleM(model_matrix, 0, scale_width, scale_height, 1.0f);
Matrix.translateM(model_matrix, 0, x, 0, 0);
//Matrix.rotateM(model_matrix, 0, x, 1, 0, 0);
x = x + 1.0f;
Matrix.multiplyMM(viewprojection_matrix, 0, projection_matrix, 0, view_matrix, 0);
Matrix.multiplyMM(modelviewprojection_matrix, 0, viewprojection_matrix, 0, model_matrix, 0);
So, any idea what the problem is? Thanks in advance :)
Sounds like a problem similar to one I ran into. I was using the tutorial code at http://developer.android.com/training/graphics/opengl/index.html. Changing a line in the shader code from gl_Position = vPosition * uMVPMatrix; to gl_Position = uMVPMatrix * vPosition; fixed the problem.
Matrix multiplication is a non-commutative operation - the order of the operands is important!
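To see the non-commutativity concretely, here is a minimal plain-Java sketch of a column-major 4x4 multiply (the same convention android.opengl.Matrix uses; the class and helper names here are illustrative, not from the Android API):

```java
// Minimal column-major 4x4 multiply, mirroring what Matrix.multiplyMM does.
// Plain Java, no Android dependency, so the order effect is easy to verify.
public class MatrixOrderDemo {
    // result = lhs * rhs; all matrices are column-major float[16]
    public static float[] multiply(float[] lhs, float[] rhs) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++)
                    sum += lhs[k * 4 + row] * rhs[col * 4 + k];
                out[col * 4 + row] = sum;
            }
        return out;
    }

    public static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    public static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[12] = x; m[13] = y; m[14] = z; // column 3 holds the translation
        return m;
    }

    public static float[] scale(float s) {
        float[] m = identity();
        m[0] = s; m[5] = s; m[10] = s;
        return m;
    }

    public static void main(String[] args) {
        // T * S and S * T move the origin by different amounts:
        float[] st = multiply(translation(1, 0, 0), scale(2));
        float[] ts = multiply(scale(2), translation(1, 0, 0));
        System.out.println(st[12] + " vs " + ts[12]); // prints 1.0 vs 2.0
    }
}
```

The same asymmetry applies in the shader: multiplying the vector on the wrong side of the matrix silently transposes the whole transform.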
I have a vertex (which I will not be showing/rendering in the scene):
float vertex[] = {
1.0f, 1.0f, 1.0f,
};
And i have a mesh, which i have translated and rotated using:
Matrix.translateM(World.mModelMatrix, tmOffset, globalPositionX, globalPositionY, globalPositionZ);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationZ, 0, 0, 1);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationY, 0, 1, 0);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationX, 1, 0, 0);
How can I apply those translations and rotations to the vertex, and get its global position (x, y, z) afterwards?
Use the Matrix.multiplyMV method:
float vertex[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float result[] = { 0.0f, 0.0f, 0.0f, 0.0f };
Matrix.multiplyMV(result, 0, matrix, 0, vertex, 0);
Note that you will have to append a homogeneous coordinate (w = 1.0) to your vector to make it work.
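What multiplyMV computes can be sketched in plain Java (the class name is illustrative); the w component is exactly what lets the translation part of the matrix participate:

```java
// result = matrix * vector, column-major, with a 4th (w) component.
public class TransformPoint {
    public static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++)
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        return r;
    }

    public static void main(String[] args) {
        // identity with a translation of (2, 3, 4) in column 3
        float[] m = {1,0,0,0, 0,1,0,0, 0,0,1,0, 2,3,4,1};
        float[] p = {1f, 1f, 1f, 1f}; // w = 1: a point, translation applies
        float[] d = {1f, 1f, 1f, 0f}; // w = 0: a direction, translation ignored
        System.out.println(multiplyMV(m, p)[0]); // prints 3.0
        System.out.println(multiplyMV(m, d)[0]); // prints 1.0
    }
}
```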
I'm reading this awesome Beginning Android Games book, and I'm trying now to implement some tests myself.
I'm using OpenGl ES 1.0, and I'm OK now manipulating the view frustum, projections, translation, rotation, scale etc.
What I'm trying to do:
a) render a rocket to the screen, add some velocity and acceleration to it (using Euler's integration - add the acceleration to the velocity, and the velocity to the position) to simulate a path (parabola). - This is done, implemented without any issue.
b) Rotate the rocket, so that we can simulate also the inclination of the object along its path. - That's the problem.
To be clear, I'm adding the image below.
I can't figure out what's the correct angle to add to the rocket, between one frame and the next one.
I tried to get that with some geometry.
Obj Pos 1 is the rocket representation at frame 1.
Obj Pos 2 is the rocket representation, at the next frame (frame 2).
V1 is the vector that holds the center X and Y coordinates of the Obj Pos 1.
V2 is the vector that holds the center X and Y coordinates of the Obj Pos 2.
Tangent line 1 is the tangent line to the parabola, to where V1 points.
Tangent line 2 is the tangent line to the parabola, to where V2 points.
A1 is the angle between both vectors.
A2 is the angle between both tangent lines.
As far as I can see the correct angle to apply to the rocket, from frame 1 to frame 2 is angle A2. But how can I calculate it?
And, is this correct for game purposes? I mean, we don't need to be exact on the physics concept, we just need to be good enough to simulate animation and 'cheat' the user.
The code follows below:
public class PersonalTest008Rocket extends GLGame {
@Override
public Screen getStartScreen() {
return new RocketScreen(this);
}
class RocketScreen extends Screen {
GLGraphics glGraphics;
Camera2D camera;
final float WORLD_WIDTH = 60;
final float WORLD_HEIGHT = 36;
float[] rocketRawData;
short[] rocketRawIndices;
BindableVertices rocketVertices;
DynamicGameObject rocket;
float angle;
Vector2 gravity;
public RocketScreen(Game game) {
super(game);
glGraphics = ((GLGame) game).getGLGraphics();
camera = new Camera2D(glGraphics, WORLD_WIDTH, WORLD_HEIGHT);
rocketRawData = new float[]{
// x, y, r, g, b, a
+4.0f, +0.0f, 1.0f, 0.0f, 0.0f, 1, // 0
+2.0f, +1.0f, 0.5f, 0.0f, 0.0f, 1, // 1
+2.0f, -1.0f, 0.5f, 0.0f, 0.0f, 1, // 2
-2.0f, +1.0f, 0.0f, 0.5f, 0.5f, 1, // 3
-2.0f, -1.0f, 0.0f, 0.5f, 0.5f, 1, // 4
-3.0f, +1.0f, 0.0f, 0.5f, 0.5f, 1, // 5
-3.0f, -1.0f, 0.0f, 0.5f, 0.5f, 1, // 6
-4.0f, +3.0f, 0.0f, 0.0f, 1.0f, 1, // 7
-5.0f, +0.0f, 0.0f, 0.0f, 1.0f, 1, // 8
-4.0f, -3.0f, 0.0f, 0.0f, 1.0f, 1 // 9
};
rocketRawIndices = new short[]{
0, 1, 2,
1, 4, 2,
1, 3, 4,
3, 4, 6,
3, 5, 6,
3, 7, 5,
5, 8, 6,
6, 9, 4
};
rocketVertices = new BindableVertices(glGraphics, 10, 3 * 8, true, false);
rocketVertices.setVertices(rocketRawData, 0, rocketRawData.length);
rocketVertices.setIndices(rocketRawIndices, 0, rocketRawIndices.length);
int velocity = 30;
angle = 45;
rocket = new DynamicGameObject(0, 0, 9, 6);
rocket.position.add(1, 1);
rocket.velocity.x = (float) Math.cos(Math.toRadians(angle)) * velocity;
rocket.velocity.y = (float) Math.sin(Math.toRadians(angle)) * velocity;
gravity = new Vector2(0, -10);
}
@Override
public void update(float deltaTime) {
rocket.velocity.add(gravity.x * deltaTime, gravity.y * deltaTime);
rocket.position.add(rocket.velocity.x * deltaTime, rocket.velocity.y * deltaTime);
}
@Override
public void present(float deltaTime) {
GL10 gl = glGraphics.getGL();
gl.glClearColor(0.5f, 0.5f, 0.5f, 1);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
camera.setViewportAndMatrices();
gl.glTranslatef(rocket.position.x, rocket.position.y, 0);
gl.glRotatef(angle, 0, 0, 1);
rocketVertices.bind();
rocketVertices.draw(GL10.GL_TRIANGLES, 0, rocketRawIndices.length);
rocketVertices.unbind();
}
@Override
public void pause() {
}
@Override
public void resume() {
}
@Override
public void dispose() {
}
}
}
You can calculate the angle based on the instantaneous velocity vector:
// Get direction to point the ship
mangleInDeg = (float) (Math.atan2(mRelSpeed.y, mRelSpeed.x) * 180 / Math.PI);
mangleInDeg += 90.0; // offset the angle to coincide with angle of the base ship image
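The atan2 step can be checked on its own in plain Java (class and method names are illustrative); the 90-degree offset in the snippet above only depends on how the particular ship image is drawn:

```java
// Heading angle in degrees of a velocity vector, via atan2.
public class HeadingAngle {
    public static float headingDeg(float vx, float vy) {
        // atan2 handles all quadrants and vx == 0 correctly
        return (float) Math.toDegrees(Math.atan2(vy, vx));
    }

    public static void main(String[] args) {
        System.out.println(headingDeg(1f, 1f));  // 45.0  (rising to the right)
        System.out.println(headingDeg(1f, -1f)); // -45.0 (falling to the right)
    }
}
```

Recomputing this every frame from the instantaneous velocity gives exactly the tangent-line angle the question asks about, with no need to difference positions between frames.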
I am trying to understand how the camera works in OpenGL ES, so I am trying to look at the same point with the two different projection types, Matrix.frustumM and Matrix.orthoM.
I would like to know what exactly I am doing when I use Matrix.frustumM or orthoM. I know that I apply them to the projection matrix, but I don't understand what defines the parameters (left, right, bottom, top, near, far of what? Is it supposed to be the screen of the phone?). The same question applies to orthoM.
I want to draw a square on the screen at (0, 0, 0), with 1f of height and width (like 2D, just to test the cameras).
But this is what I do in onSurfaceCreated:
final float eyeX = 2f;
final float eyeY = 5f;
final float eyeZ = 8f;
final float lookX = 2f;
final float lookY = 5f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
onSurfaceChanged
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the
// same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 25.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
That is what I saw on the phone.
Draw function:
public void dibujarBackground()
{
// Draw a plane
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mBackgroundDataHandle);
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f,2.0f, 0.0f);
drawBackground();
}
private void drawBackground()
{
coordinate.drawBackground(mPositionHandle, mNormalHandle, mTextureCoordinateHandle);
// This multiplies the view matrix by the model matrix, and stores the
// result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniform3f(mLightPosHandle,Light.mLightPosInEyeSpace[0], Light.mLightPosInEyeSpace[1], Light.mLightPosInEyeSpace[2]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
Coords of the square:
final float[] backgroundPositionData = {
// In OpenGL counter-clockwise winding is default.
0f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 0f, 0.0f,
1f, 1f, 0.0f,
};
final float[] backgroundNormalData = {
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, };
final float[] backgroundTextureCoordinateData = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f, };
Overall, what you get in the end is a single matrix that is used to multiply the positions so that the visible fragments fall in the range [-1, 1] in all 3 dimensions. That means if you use no matrix (or the identity), coordinates need to be in this range to be visible. So the 3 matrix computations you are using are really just conveniences to help you build a correct transformation:
Ortho is an orthographic projection. This means the on-screen x and y coordinates are not affected by the z coordinate at all; visually, an object does not appear smaller when it is farther away. The values you pass to this convenience method are border values (left, right, top, bottom), meaning a rectangle with those same coordinates will exactly fill the screen. These values are usually chosen to match your view coordinate system (left = 0, right = screenWidth, top = 0, bottom = screenHeight). There are also near and far parameters, which define the clipping planes: positions closer than near or farther than far are not visible. This projection is mostly used for 2D drawing.
A frustum matrix is designed so that x and y coordinates shrink with increasing z; an object appears smaller when it is farther away. The border parameters are tied to the near parameter, so a rectangle with the border coordinates placed at z = near will exactly fill the screen. near must be greater than zero in this case, or the result is unpredictable. The far parameter is just a clipping plane; as with ortho, pixels are clipped if their z value is closer than near or farther than far. The border parameters are best computed from a field-of-view angle and the screen aspect ratio, using the tangent function to get the desired effect. This projection is mostly used for 3D drawing.
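The tangent-based computation of the frustum borders can be sketched in plain Java (the class and method names are illustrative; gluPerspective-style helpers do the equivalent internally):

```java
// Derive frustum borders from a vertical field of view and aspect ratio.
public class FrustumFromFov {
    // returns {left, right, bottom, top}, suitable for Matrix.frustumM
    public static float[] borders(float fovYDeg, float aspect, float near) {
        // half-height of the near plane, from the vertical half-angle
        float top = (float) (near * Math.tan(Math.toRadians(fovYDeg / 2.0)));
        float right = top * aspect;
        return new float[] { -right, right, -top, top };
    }

    public static void main(String[] args) {
        // 90-degree vertical fov, square viewport, near = 1 -> borders of about +/-1
        float[] b = borders(90f, 1f, 1f);
        System.out.println(b[3]);
    }
}
```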
LookAt is a convenience used to transform all objects into positions and orientations that make them appear to be affected by a camera. Though this method is defined with vectors, you may imagine it as having a position and rotations: it creates a matrix that rotates all objects by -rotations and translates them by -position.
Overall, the usage is pretty simple. Each position is first multiplied by the model matrix, which represents the model's placement in your scene; then by the matrix from lookAt, to simulate the camera; then by the projection matrix, which in most cases is either the ortho or the frustum matrix. The optimization is to multiply the matrices together on the CPU first and have the positions multiplied by the result on the GPU. Some variations split this into a "model-view matrix" and a "projection matrix"; that split is used to compute things like lighting, where positions must not be affected by the projection matrix.
I am not getting expected coordinate values from gluUnProject function.
I will put some code first. Here is the function that gets called on a touch event:
public float[] getWorldSpaceFromMouseCoordinates(float mouseX, float mouseY)
{
float[] finalCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
// mouse Y needs to be inverted
mouseY = (float)_viewport[3] - mouseY;
float[] mouseZ = new float[1];
FloatBuffer fb = FloatBuffer.allocate(1);
GLES20.glReadPixels((int)mouseX, (int)mouseY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, fb);
int result = GLU.gluUnProject(mouseX, mouseY, fb.get(0), mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0, finalCoord, 0);
float[] temp2 = new float[4];
Matrix.multiplyMV(temp2, 0, mViewMatrix, 0, finalCoord, 0);
if(result == GL10.GL_TRUE){
finalCoord[0] = temp2[0] / temp2[3];
finalCoord[1] = temp2[1] / temp2[3];
finalCoord[2] = temp2[2] / temp2[3];
}
Log.d("Coordinate:", "" + temp2[0] + "," + temp2[1] + "," + temp2[2]);
return finalCoord;
}
Here is how the matrices are set up:
@Override
public void onSurfaceChanged(GL10 unused, int width, int height)
{
// Adjust the viewport based on geometry changes,
// such as screen rotation
GLES20.glViewport(0, 0, width, height);
_viewport = new int[] { 0, 0, width, height };
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
}
Setting up the model-view matrix (note that the model matrix is just the identity):
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
So, as per my understanding, this function should give me world coordinates relative to the origin, which is not happening. I am creating a square with the following coordinates:
_vertices = new float [] { -0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
However, I am getting X values ranging in (0.3, -0.3), Y values in (0.5, -0.5), and Z always -1.0 for the whole viewport; when touching the corners of the square, X values are in (0.2, -0.2) and Y values in (0.15, -0.15).
Let me know if any more code is required.
So I found out what the problem was: glReadPixels() with GL_DEPTH_COMPONENT is not supported in OpenGL ES 2.0, so I was always reading a wrong depth value and hence getting wrong coordinates. That left me two choices: use an FBO and store depth via a shader, or do ray picking (since I had only one object in the scene, I had hoped gluUnProject() alone would do). I chose ray picking; here is my code. I hope it helps somebody (it's not generic, and the geometry is hard-coded).
public float[] getWorldSpaceFromMouseCoordinates(float mouseX, float mouseY)
{
float[] farCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] nearCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
// mouse Y needs to be inverted
//mouseY = (float) _viewport[3] - mouseY;
// calling glReadPixels() with GL_DEPTH_COMPONENT is not supported in
// GLES so now i will try to implement ray picking
int result = GLU.gluUnProject(mouseX, mouseY, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0,
farCoord, 0);
if (result == GL10.GL_TRUE)
{
farCoord[0] = farCoord[0] / farCoord[3];
farCoord[1] = farCoord[1] / farCoord[3];
farCoord[2] = farCoord[2] / farCoord[3];
}
result = GLU.gluUnProject(mouseX, mouseY, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0, nearCoord,
0);
if (result == GL10.GL_TRUE)
{
nearCoord[0] = nearCoord[0] / nearCoord[3];
nearCoord[1] = nearCoord[1] / nearCoord[3];
nearCoord[2] = nearCoord[2] / nearCoord[3];
}
float [] dirVector = Vector.normalize(Vector.minus(farCoord, nearCoord));
float [] rayOrigin = {0.0f, 0.0f, 3.0f};
Log.d("Far Coordinate:", "" + farCoord[0] + "," + farCoord[1] + "," + farCoord[2]);
Log.d("Near Coordinate:", "" + nearCoord[0] + "," + nearCoord[1] + "," + nearCoord[2]);
float [] vertices = { -0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
// calculate normal for square
float[] v1 = { vertices[3] - vertices[0], vertices[4] - vertices[1], vertices[5] - vertices[2]};
float[] v2 = { vertices[9] - vertices[0], vertices[10] - vertices[1], vertices[11] - vertices[2]};
float[] n = Vector.normalize(Vector.crossProduct(v1, v2));
// now calculate intersection point as per following link
// http://antongerdelan.net/opengl/raycasting.html
// our plane passes through the origin, so 't' is:
float t = -(Vector.dot(rayOrigin, n) / Vector.dot(dirVector, n));
// now substitute above t in ray equation gives us intersection point
float [] intersectionPoint = Vector.addition(rayOrigin, Vector.scalarProduct(t, dirVector));
Log.d("Ipoint:", "" + intersectionPoint[0] + "," + intersectionPoint[1] + "," + intersectionPoint[2]);
return intersectionPoint;
}
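The ray-plane step used above can be checked in isolation. This plain-Java sketch (with no dependency on the posted Vector class; names are illustrative) reproduces the same math for a plane through the origin:

```java
// Intersect a ray (origin + t * dir) with a plane through the origin
// that has unit normal n: t = -(origin . n) / (dir . n).
public class RayPlane {
    static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    public static float[] intersect(float[] origin, float[] dir, float[] n) {
        float t = -dot(origin, n) / dot(dir, n); // assumes dir not parallel to plane
        return new float[] { origin[0] + t * dir[0],
                             origin[1] + t * dir[1],
                             origin[2] + t * dir[2] };
    }

    public static void main(String[] args) {
        // camera at z = 3 looking straight down -z at the z = 0 plane
        float[] hit = intersect(new float[]{0, 0, 3},
                                new float[]{0, 0, -1},
                                new float[]{0, 0, 1});
        System.out.println(hit[0] + "," + hit[1] + "," + hit[2]); // 0.0,0.0,0.0
    }
}
```

In the posted code, dirVector comes from the unprojected near and far points, and rayOrigin is the camera position from setLookAtM; the intersection formula itself is identical.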
I am starting up work on an Android game and am learning OpenGL ES. I have used OpenGL a bit, though it was quite some time ago by now. I have mostly used DirectX lately with C++, so I understand graphic API concepts fairly well.
When, playing with the API on my own, I was unable to get the results I anticipated, I turned to a tutorial I found online that seemed fairly comprehensive. Though I understood it easily and followed it fairly strictly, I still can't get the screen to display a simple square (currently using nothing other than a vertex array, with no colour).
Below is the code for my renderer class which I have been staring at for some time and am starting to go a little crazy with my inability to find my mistake. I have done far more complicated things with graphics APIs (in both DirectX and OpenGL) so I find this kind of embarrassing and just need somebody to point out my probably glaringly obvious oversight.
Thank you in advance!
public class GameRenderer implements Renderer {
private float red, green, blue = 0.0f;
private final float vertices[] = {
0.5f, -0.5f, 0.0f, // 0, Bottom Right
0.5f, 0.5f, 0.0f, // 1, Top Right
-0.5f, 0.5f, 0.0f, // 2, Top Left
-0.5f, -0.5f, 0.0f, // 3, Bottom Left
};
private final short indices[] = {
0, 1, 2, 0, 2, 3
};
private final float colours[] = {
1.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f,
0.5f, 0.5f, 0.5f,
0.0f, 0.0f, 0.0f
};
FloatBuffer vFBuff;
FloatBuffer cFBuff;
ShortBuffer iSBuff;
public GameRenderer(){
super();
}
@Override
public void onDrawFrame(GL10 gl) {
gl.glClearColor(red, green, blue, 0.5f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glScalef(2.0f, 2.0f, 0.0f);
gl.glTranslatef(0.0f, 0.0f, -4f);
gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, iSBuff);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// TODO Auto-generated method stub
// set viewport
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, 45.0f, (float)width / (float)height, 0.0f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glClearDepthf(1.0f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
ByteBuffer vBBuff = ByteBuffer.allocateDirect(vertices.length * 4);
ByteBuffer cBBuff = ByteBuffer.allocateDirect(colours.length * 4);
ByteBuffer iBBuff = ByteBuffer.allocateDirect(indices.length * 2);
vFBuff = vBBuff.asFloatBuffer();
vFBuff.put(vertices);
vFBuff.position(0);
cFBuff = cBBuff.asFloatBuffer();
cFBuff.put(colours);
cFBuff.position(0);
iSBuff = iBBuff.asShortBuffer();
iSBuff.put(indices);
iSBuff.position(0);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vFBuff);
gl.glFrontFace(GL10.GL_CCW);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_BACK);
}
public void setColour(float r, float g, float b) {
red = r;
blue = b;
green = g;
}
GLU.gluPerspective(gl, 45.0f, (float)width / (float)height, 0.0f, 100.0f);
Don't set your zNear to zero:
If r = zFar / zNear, roughly log2(r) bits of depth buffer precision are lost. Because r approaches infinity as zNear approaches zero, zNear must never be set to zero.
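The quoted rule makes for a quick back-of-the-envelope check. This plain-Java sketch (the class and method names are illustrative) estimates the precision loss for a given near/far pair:

```java
// Rough depth-precision estimate: with r = zFar / zNear,
// about log2(r) bits of depth buffer precision are lost.
public class DepthPrecision {
    public static double bitsLost(double zNear, double zFar) {
        return Math.log(zFar / zNear) / Math.log(2.0); // log2(zFar / zNear)
    }

    public static void main(String[] args) {
        System.out.println(bitsLost(1.0, 100.0));   // ~6.6 bits lost
        System.out.println(bitsLost(0.001, 100.0)); // ~16.6 bits: most of a 24-bit buffer
    }
}
```

A zNear of 1.0f (as in the orthoM example at the top of this page) keeps the loss manageable; pushing zNear toward zero eats the depth buffer long before it reaches the mathematically forbidden value of exactly zero.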