I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
I need to rotate it around Z axis, preserving its size.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Here's a picture.
So far I've managed to achieve the first two tasks:
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();

    gl.glTranslatef(120, 160, 0); // move rotation point
    gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
    gl.glTranslatef(-120, -160, 0); // restore rotation point

    mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, (float)width, (float)height, 0f, -1f, 1f);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
But when I try to rotate my box around the X or Y axis, nasty things happen to my box and there is no perspective effect. I tried to use some other functions instead of glRotate (glFrustum, gluPerspective, gluLookAt, applying a "skewing" matrix), but I couldn't make them work properly.
I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
For a perspective you also need some length for the lens, which determines the FOV. The FOV is the ratio of the visible extent to the viewing-plane distance. In the case of the near plane it thus becomes {left,right,top,bottom}/near. For the sake of simplicity we assume a horizontal FOV and a symmetric projection, i.e.
FOV = 2*|left|/near = 2*|right|/near = extent/distance
or if you're more into angles
FOV = 2*tan(angular FOV / 2)
For a 90° FOV the length of the lens is half the width of the focal plane. Your focal plane is 240x320 pixels, so 120 to the left and right and 160 to the top and bottom. OpenGL does not really have a focus, but we can say that the middle plane between near and far is the "focal".
So let's say an object will on average have an extent on the order of the visible plane limits, i.e. for a visible plane of 240x320, an object will on average have a size of ~200px. It thus makes sense for the near-to-far clipping distance to be 200, i.e. ±100 about the focal plane. So for a FOV of 90° the focal plane has distance
2*tan(90°/2) = extent/distance
2*tan(45°) = 2 = 240/distance
2*distance = 240
distance = 120
The focal plane is thus at a distance of 120, so the near and far clipping distances are 20 and 220.
Last but not least the near clip plane limits must be scaled by near_distance/focal_distance = 20/120
So
left = -120 * 20/120 = -20
right = 120 * 20/120 = 20
bottom = -160 * 20/120 ≈ -26.7
top = 160 * 20/120 ≈ 26.7
So this gives us the glFrustum parameters:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-20, 20, -26.7, 26.7, 20, 220);
And finally we must move the world origin onto the "focal" plane:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -120);
I need to rotate it around Z axis, preserving its size.
done.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Perspective does not preserve size. That's what makes it a perspective. You can get close by using a very long lens, i.e. a small FOV.
Code Update
As a general pro-tip: do all OpenGL operations in the drawing handler. Don't set the projection in the reshape handler; it's ugly, and as soon as you want a HUD or some other kind of overlay you'll have to discard it anyway. So here's how to change it:
public void onDrawFrame(GL10 gl) {
    // fov (full horizontal angle, in radians) and extent are parameters set somewhere else;
    // width and height are the int fields stored in onSurfaceChanged() below
    // 2*tan(fov/2) = width/distance  =>  distance = width / (2*tan(fov/2))
    float distance = (float) (width / (2.0 * Math.tan(fov / 2.0)));
    float near = distance - extent / 2.0f;
    float far  = distance + extent / 2.0f;
    if (near < 1.0f) {
        near = 1.0f;
    }
    float left   = (-width  / 2.0f) * near / distance;
    float right  = ( width  / 2.0f) * near / distance;
    float bottom = (-height / 2.0f) * near / distance;
    float top    = ( height / 2.0f) * near / distance;

    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glFrustumf(left, right, bottom, top, near, far);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0, 0, -distance); // move the world origin onto the "focal" plane

    gl.glTranslatef(120, 160, 0); // move rotation point
    gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
    gl.glTranslatef(-120, -160, 0); // restore rotation point

    mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}

public void onSurfaceChanged(GL10 gl, int new_width, int new_height) {
    width = new_width;
    height = new_height;
}
You need to use a perspective projection matrix and then use your model-view matrix to get the position and scaling right.
You're using an orthographic projection (glOrthof()), which explicitly disables perspective.
Its opposite is glFrustum(), often wrapped by gluPerspective(), which is easier to use but requires the GLU library.
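For example, a minimal sketch of what the reshape handler could look like with a perspective projection instead of glOrthof() (the 45° field of view and the near/far values here are arbitrary example choices):

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // gluPerspective(gl, vertical fov in degrees, aspect ratio, near, far) from android.opengl.GLU
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 1.0f, 1000.0f);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}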
Related
I'm trying to create a 3D Android game using the OpenGL library. From what I can tell, the Android platform (or OpenGL) doesn't provide a camera object, so I made my own. I managed to create a square that is drawn on the screen. I also managed to get the camera to move in all 3 directions without issue. The problem I'm having is that when I turn the camera around either the x or y axis, the square is displayed in a very strange way; its perspective seems highly distorted. Here are some pictures:
Camera at origin looking forward: Position [0,0,0.55f], Forward [0,0,-1], Up [0,1,0]
Camera moved to the right: Position [3,0,0.55f], Forward [0,0,-1], Up [0,1,0]
Camera turned slightly to the right: Position [0,0,0.55f], Forward [0.05f,0,-1], Up [0,1,0]
I'm not sure where the problem is getting generated. Here are the vertices of the square:
static float vertices[] = {
    // Front face (CCW order)
    -1.0f,  1.0f, 1.0f, // top left
    -1.0f, -1.0f, 1.0f, // bottom left
     1.0f, -1.0f, 1.0f, // bottom right
     1.0f,  1.0f, 1.0f, // top right
};
From what I've read, my matrix multiplications are in the correct order. I pass the Matrices to the vertex shader and do the multiplications there:
private final String vertexShaderCode =
"uniform mat4 uVMatrix;" +
"uniform mat4 uMMatrix;" +
"uniform mat4 uPMatrix;" +
"attribute vec4 aVertexPosition;" + // passed in
"attribute vec4 aVertexColor;" +
"varying vec4 vColor;" +
"void main() {" +
" gl_Position = uPMatrix * uVMatrix * uMMatrix * aVertexPosition;" +
" vColor = aVertexColor;" + // pass the vertex's color to the pixel shader
"}";
The model matrix just places it at the origin and gives it a scale of 1:
Matrix.setIdentityM(ModelMatrix, 0);
Matrix.translateM(ModelMatrix, 0, 0, 0, 0);
Matrix.scaleM(ModelMatrix, 0, 1, 1, 1);
I use my Camera object to update the View Matrix:
Matrix.setLookAtM(ViewMatrix, 0,
position.x, position.y, position.z,
position.x + forward.x, position.y + forward.y, position.z + forward.z,
up.x, up.y, up.z);
Here is my ProjectionMatrix:
float ratio = width / (float) height;
Matrix.frustumM(ProjectionMatrix, 0, -ratio, ratio, -1, 1, 0.01f, 10000f);
What am I missing?
With the parameters you pass to frustumM():
Matrix.frustumM(ProjectionMatrix, 0, -ratio, ratio, -1, 1, 0.01f, 10000f);
you have an extremely strong perspective. This is the reason why the geometry looks very distorted as soon as you rotate it just slightly.
The left, right, bottom, and top values are distances measured at the depth of the near clip plane. So, for example, with your top value of 1.0 and the near value at 0.01, the top plane of the view volume is a distance of 1.0 away from the viewing direction at a forward distance of only 0.01.
Doing the math, you get atan(top / near) = atan(1.0 / 0.01) = atan(100.0) = 89.42 degrees for half the vertical view angle, or 178.85 degrees for the whole view angle, which corresponds to an extreme fish eye lens on a camera, covering almost the whole space in front of the camera.
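A quick sanity check in code, just repeating the arithmetic above:

double halfAngle = Math.toDegrees(Math.atan(1.0 / 0.01)); // ≈ 89.4 degrees
double viewAngle = 2.0 * halfAngle;                       // ≈ 178.9 degrees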
To use a more sane level of perspective, you can calculate the values based on the desired view angle. With alpha being the vertical view angle:
float near = 0.01f;
float top = (float) Math.tan(0.5f * alpha) * near;
float right = top * ratio;
Matrix.frustumM(ProjectionMatrix, 0, -right, right, -top, top, near, 10000f);
Start with view angles in the range of 45 to 60 degrees for a generally pleasing amount of perspective. And remember that Math.tan() used above takes angles in radians, so you'll have to convert first if your original angle is in degrees.
Or, if you're scared of math, you can always use perspectiveM() instead. ;)
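For reference, a minimal sketch of the perspectiveM() variant (available since API 14; the 45° angle is just an example value):

// perspectiveM() takes the vertical field of view in degrees
Matrix.perspectiveM(ProjectionMatrix, 0, 45.0f, ratio, 0.01f, 10000f);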
Depending on your setup, your vertex buffer may be wrong: your vertices array is a vec3, while the position attribute in your shader is a vec4.
Also, your projection matrix is strange; depending on the aspect ratio you want, you'll get bizarre results.
You should use perspectiveM() instead.
I'm trying to create an app for Android using OpenGL ES, but I'm having trouble handling touch input.
I've created a class CubeGLRenderer which spawns a Cube. CubeGLRenderer is in charge of the projection and view matrix, and Cube is in charge of its model matrix. The Cube is moving along the positive X axis, with no movement in Y nor Z.
CubeGLRenderer updates the view matrix each frame in order to move along with the cube, making the cube look stationary on screen:
Matrix.setLookAtM(mViewMatrix, 0, 0.0f, cubePos.y, -10.0f, 0.0f, cubePos.y, 0.0f, 0.0f, 1.0f, 0.0f);
The projection matrix is calculated whenever the screen dimension changes (i.e. when the orientation of the device changes). The two matrices are then multiplied and passed to Cube.draw(), where it applies its model matrix and renders itself to screen.
So far, so good. Let's move on to the problem.
I want to touch the screen and calculate an angle from the center of the cube's screen coordinates to the point of the screen that I touched.
I thought I'd just accomplish this using GLU.gluProject(), but I'm either not using it correctly or simply haven't understood it at all.
Here's the code I use to calculate the screen coordinates from the cube's world coordinates:
public boolean onTouchEvent(MotionEvent e) {
    Vec3 cubePos = cube.getPos();

    float[] modelMatrix = cube.getModelMatrix();
    float[] modelViewMatrix = new float[16];
    Matrix.multiplyMM(modelViewMatrix, 0, mViewMatrix, 0, modelMatrix, 0);

    int[] view = {0, 0, width, height};
    float[] screenCoordinates = new float[3];
    GLU.gluProject(cubePos.x, cubePos.y, cubePos.z, modelViewMatrix, 0, mProjectionMatrix, 0, view, 0, screenCoordinates, 0);

    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN:
            Log.d("CUBEAPP", "screenX: " + String.valueOf(screenCoordinates[0]));
            break;
    }
    return true;
}
What am I doing wrong?
The same calculation your vertex shader does to render the cube should be used to translate the cube center into screen space.
Normally you would multiply each vertex of the cube by the modelViewProjection matrix and then send it to the fragment shader.
You should use the exact same matrix you use in the vertex shader and multiply the center of the cube with it.
However, multiplying a 4x4 matrix with a Vec4 vertex (x, y, z, 1) would give you a Vec4 result of (x2, y2, z2, w).
In order to get the screen space coordinates you need to divide x2 and y2 by w!
After you divide by w, your xy coordinates are supposed to be within the [-1..1]x[-1..1] range.
In order to get the exact pixel you need to remap x2/w and y2/w into [0..1] and then multiply them by the screen width and height.
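A minimal sketch of that calculation, assuming mvpMatrix holds the same projection * view * model matrix the cube is rendered with (the variable names here are placeholders):

float[] center = { cubePos.x, cubePos.y, cubePos.z, 1.0f };
float[] clip = new float[4];
Matrix.multiplyMV(clip, 0, mvpMatrix, 0, center, 0);      // clip = MVP * center

float ndcX = clip[0] / clip[3];                           // perspective divide -> [-1..1]
float ndcY = clip[1] / clip[3];

float screenX = (ndcX * 0.5f + 0.5f) * width;             // remap [-1..1] to pixels
float screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * height;   // flip y to match touch coordinates (origin top left)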
Hope this helps.
Hi, I have a 512x512 texture that I would like to display within my GLSurfaceView at 100% scale, i.e. a 1:1 pixel view.
I am having trouble achieving this and require some assistance.
Every combination of settings in onSurfaceChanged and onDrawFrame results in a scaled image.
Can someone please direct me to an example where this is possible?
private float[] mProjectionMatrix = new float[16];

// where mWidth and mHeight are set to 512
public void onSurfaceChanged(GL10 gl, int mWidth, int mHeight) {
    GLES20.glViewport(0, 0, mWidth, mHeight);

    float left   = -1.0f / (1 / ScreenRatio);
    float right  =  1.0f / (1 / ScreenRatio);
    float bottom = -1.0f;
    float top    =  1.0f;
    final float near = 1.0f;
    final float far  = 10.0f;

    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 glUnused) {
    // ...stuff here

    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.translateM(mModelMatrix, 0, 0, 0, 1);
    Matrix.rotateM(mModelMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f);
    drawCube();
}
many thanks,
There are various options. The simplest IMHO is to not apply any view/projection transformations at all. Then draw a textured quad with a range of (-1.0, 1.0) for both the x- and y-coordinates. That would make your texture fill the entire view. Since you want it displayed in a 512x512 part of the view, you can set the viewport to cover only that area:
glViewport(0, 0, 512, 512);
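A minimal sketch of that first approach, assuming ES 2.0 with a trivial pass-through shader (the attribute names are placeholders):

// Full-view quad in normalized device coordinates; with no view/projection
// transform it exactly covers whatever the viewport is set to.
float[] quadCoords = {
    -1.0f, -1.0f,   // bottom left
     1.0f, -1.0f,   // bottom right
    -1.0f,  1.0f,   // top left
     1.0f,  1.0f,   // top right (drawn as a GL_TRIANGLE_STRIP)
};
float[] texCoords = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,     // depending on how the texture was loaded you may need to flip the t coordinate
};

// Pass-through vertex shader, no matrices involved:
String vertexShader =
    "attribute vec2 aPosition;" +
    "attribute vec2 aTexCoord;" +
    "varying vec2 vTexCoord;" +
    "void main() {" +
    "  gl_Position = vec4(aPosition, 0.0, 1.0);" +
    "  vTexCoord = aTexCoord;" +
    "}";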
Another possibility is that you reduce the range of your input coordinates to map to a 512x512 area of the screen. Or scale the coordinates in the vertex shader.
You didn't specify what version of OpenGL ES you use. In ES 3.0, you could also use glBlitFramebuffer() to copy the texture to your view.
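If you do target ES 3.0, a rough sketch of the blit approach could look like this (textureId is assumed to hold your 512x512 texture; error checking omitted):

// Attach the texture to a read framebuffer and copy it 1:1 into the view.
int[] fbo = new int[1];
GLES30.glGenFramebuffers(1, fbo, 0);
GLES30.glBindFramebuffer(GLES30.GL_READ_FRAMEBUFFER, fbo[0]);
GLES30.glFramebufferTexture2D(GLES30.GL_READ_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
        GLES30.GL_TEXTURE_2D, textureId, 0);
GLES30.glBindFramebuffer(GLES30.GL_DRAW_FRAMEBUFFER, 0); // default framebuffer (the view)
GLES30.glBlitFramebuffer(0, 0, 512, 512,                 // source rectangle (texture)
                         0, 0, 512, 512,                 // destination rectangle (view)
                         GLES30.GL_COLOR_BUFFER_BIT, GLES30.GL_NEAREST);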
As a beginner to Android and OpenGL ES 2.0, I'm testing simple things and seeing how it goes.
I downloaded the sample at http://developer.android.com/training/graphics/opengl/touch.html .
I changed the code to check if I could animate a rotation of the camera around the (0,0,0) point, the center of the square.
So I did this:
public void onDrawFrame(GL10 unused) {
    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    long time = SystemClock.uptimeMillis() % 4000L;
    float angle = ((float) (2 * Math.PI) / (float) 4000) * ((int) time);
    Matrix.setLookAtM(mVMatrix, 0, (float) (3 * Math.sin(angle)), 0, (float) (3.0f * Math.cos(angle)), 0, 0, 0, 0f, 1.0f, 0.0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);

    // Draw square
    mSquare.draw(mMVPMatrix);
}
I expected the camera to always look at the center of the square (the (0,0,0) point), but that's not what happens. The camera is indeed rotating around the square, but the square does not stay in the center of the screen; instead it moves along the X axis:
I also expected that if we gave eyeX and eyeY the same values as centerX and centerY, like this:
Matrix.setLookAtM(mVMatrix, 0, 1, 1, -3, 1 ,1, 0, 0f, 1.0f, 0.0f);
the square would keep its shape (I mean, your field of vision would be dragged, but along a plane parallel to the square), but that's also not what happens:
This is my projection matrix:
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
What is going on here?
Looking at the source code of the example you downloaded, I can see why you're having that problem: it has to do with the order of the matrix multiplication.
Typically in OpenGL source you see matrices set up such that
transformed vertex = projMatrix * viewMatrix * modelMatrix * input vertex
However in the source example program that you downloaded, their shader is setup like this:
" gl_Position = vPosition * uMVPMatrix;"
With the position on the other side of the matrix. You can work with OpenGL in this way, but it requires that you reverse the lhs/rhs of your matrix multiplications.
Long story short, in your case, you should change your shader to read:
" gl_Position = uMVPMatrix * vPosition;"
and then I believe you will get the expected behavior.
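For completeness, the Java side then stays in the usual projection * view (* model) order. A sketch reusing the field names from your code (the model matrix here is hypothetical, since the sample square doesn't have one):

// With "gl_Position = uMVPMatrix * vPosition" in the shader, build the combined
// matrix as P * (V * M) on the Java side:
float[] viewModel = new float[16];
float[] mvp = new float[16];
Matrix.multiplyMM(viewModel, 0, mVMatrix, 0, mModelMatrix, 0); // V * M
Matrix.multiplyMM(mvp, 0, mProjMatrix, 0, viewModel, 0);       // P * (V * M)
mSquare.draw(mvp);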
I'm using OpenGL for Android to draw my 2D images.
Whenever I draw something using the code:
gl.glViewport(aspectRatioOffset, 0, screenWidth, screenHeight);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, aspectRatioOffset, screenWidth + aspectRatioOffset, screenHeight, 0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, myScene.neededGraphics.get(ID).get(animationID).get(animationIndex));
crop[0] = 0;
crop[1] = 0;
crop[2] = width;
crop[3] = height;
((GL11Ext)gl).glDrawTexfOES(x, y, z, width, height);
I get an upside-down result. I've seen people solve this by doing:
crop[0] = 0;
crop[1] = height;
crop[2] = width;
crop[3] = -height;
This does however hurt the logic in my application, so I would like the result not to be flipped upside down. Does anyone know why it happens, and any way of avoiding or solving it?
Edit: I found a solution, though I don't know if it is a good one:
int[] crop = new int[4];
crop[0] = 0;
crop[1] = imageDimension[ID][1];
crop[2] = imageDimension[ID][0];
crop[3] = -imageDimension[ID][1];
((GL11)gl).glTexParameteriv(GL10.GL_TEXTURE_2D, GL11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);
((GL11Ext)gl).glDrawTexfOES(x, ScreenHeight - y - height, 0, width, height);
Define upside down. OpenGL defines (0, 0) to be in the lower left of the display with y going upward. glDrawTexOES is explicitly defined to work in window coordinates so there's no matrix stack in between even in ES 1.0. If you've set up your projection or modelview matrices to flip coordinates to whatever OpenGL considers upside down then that's not going to have any effect during a call to glDrawTexOES.
What generally happens is that people implicitly flip their graphics when loading them (because they ignore OpenGL's placement of the origin), causing them to appear upside down when using glDrawTexOES. The correct solution is that if you don't want to have to flip coordinates manually later on then don't implicitly flip them when loading your images and/or when setting up your matrix stacks.