I am trying to understand how the camera works in OpenGL ES, so I am trying to look at the same point with the two different projection types, Matrix.frustumM and Matrix.orthoM.
I would like to know what exactly I am doing when I use Matrix.frustumM or Matrix.orthoM. I know that I apply them to the projection matrix, but I don't understand what the parameters define (left, right, bottom, top, near and far of what? Is it supposed to be the screen of the phone?). The same question applies to orthoM.
I want to draw a square on the screen at (0, 0, 0) with a width and height of 1f (like 2D, just to test the cameras), but this is what I do in onSurfaceCreated:
final float eyeX = 2f;
final float eyeY = 5f;
final float eyeZ = 8f;
final float lookX = 2f;
final float lookY = 5f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
and in onSurfaceChanged:
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the
// same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 25.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
That is what I saw on the phone.
Draw function:
public void dibujarBackground()
{
// Draw a plane
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mBackgroundDataHandle);
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f,2.0f, 0.0f);
drawBackground();
}
private void drawBackground()
{
coordinate.drawBackground(mPositionHandle, mNormalHandle, mTextureCoordinateHandle);
// This multiplies the view matrix by the model matrix, and stores the
// result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniform3f(mLightPosHandle,Light.mLightPosInEyeSpace[0], Light.mLightPosInEyeSpace[1], Light.mLightPosInEyeSpace[2]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
Coords of the square:
final float[] backgroundPositionData = {
// In OpenGL counter-clockwise winding is default.
0f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 0f, 0.0f,
1f, 1f, 0.0f,
};
final float[] backgroundNormalData = {
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, };
final float[] backgroundTextureCoordinateData = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f, };
Overall, what you get in the end is a single matrix that the positions are multiplied by, so that the visible fragments end up in the range [-1, 1] in all 3 dimensions. That means if you use no matrix (or the identity), the coordinates need to be in this range to be visible. So the 3 matrix computations you are using are really just conveniences to help you build a correct transformation:
Ortho is an orthographic projection. This means the on-screen x and y of a vertex are not affected by its z coordinate at all; visually, an object does not appear smaller when it is farther away. The values you pass to this convenience method are border values (left, right, top, bottom), which means a rectangle with those same coordinates will take up exactly the full screen. These values are often chosen to match your view coordinate system (left = 0, right = screenWidth, top = 0, bottom = screenHeight). There are also near and far parameters, which represent the clipping planes: positions closer than near or farther than far are not visible. This projection is mostly used for 2D drawing.
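For example, matching that view-coordinate setup, an orthoM call could look like this (a minimal sketch; mProjectionMatrix, screenWidth and screenHeight are assumed names, not values from the question):
Matrix.orthoM(mProjectionMatrix, 0,
        0f, screenWidth,    // left, right
        screenHeight, 0f,   // bottom, top (swapped so y grows downward, like view coordinates)
        -1f, 1f);           // near, far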
The frustum matrix is designed so that x and y are scaled down as z increases, which means an object appears smaller when it is farther away. The border parameters are tied to the near parameter, so a rectangle with those border coordinates placed at z = near will appear full screen. near must be larger than zero in this case, or the result is unpredictable. The far parameter is just a clipping plane, but as with ortho, fragments are clipped if their z value is closer than near or farther than far. The border parameters are best computed from the field of view (an angle) and the screen aspect ratio; you use the tangent function to compute them and get the desired effect. This method is mostly used for 3D drawing.
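To illustrate that tangent computation (a sketch only; the 45-degree field of view and the near/far values are assumptions for the example):
float fovY = 45.0f;                       // vertical field of view in degrees (assumed)
float aspect = (float) width / height;    // screen aspect ratio
float near = 1.0f, far = 25.0f;
float top = near * (float) Math.tan(Math.toRadians(fovY / 2.0));  // half-height of the near plane
float bottom = -top;
float right = top * aspect;               // half-width follows from the aspect ratio
float left = -right;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);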
LookAt is a convenience used to transform all the objects into positions and orientations that make them appear to be affected by the camera position. Though this method is defined with vectors, you may imagine it as having a position and rotations: what it does is create a matrix that rotates all the objects by minus those rotations and translates them by minus that position.
Overall the usage is then pretty simple. Each position should first be multiplied by the model matrix, which represents the model's placement in your scene; then by the matrix received from lookAt, to simulate the camera; and then by the projection matrix, which in most cases is either the ortho or the frustum matrix. The optimization is to multiply the matrices together on the CPU first and then have the positions multiplied by the result on the GPU. A common variation is to split this into a "model-view matrix" and the "projection matrix"; that split is used to compute things like lighting, where the position must not be affected by the projection matrix.
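Put together, the CPU-side combination could look like this (a minimal sketch using the matrix names from the code above; mMVMatrix is a separate scratch array introduced here so the intermediate model-view result is kept for the lighting uniform):
// model -> world
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 2.0f, 0.0f);
// model-view = view * model (not affected by the projection; useful for lighting)
Matrix.multiplyMM(mMVMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// MVP = projection * (view * model)
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVMatrix, 0);
// upload both; the vertex shader then computes gl_Position = uMVPMatrix * aPosition
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);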
I'm trying to create a simple game in Android. By road I mean games like Temple Run or Subway Surfers, but much simpler and more abstract, so I can do it with OpenGL ES only, without any other libraries.
So I've read a lot of basic tutorials that explain the 3D construction logic, and used the basic sample that creates a rotating 3D cube.
I am now trying to use that sample to create the game's road. I made the square look more like a rectangle and duplicated it into a 30x5 road of squares. I've tried many combinations and searched the internet for a solution, and I still have these problems/questions:
1. How do I set all 30x5 squares one next to another? I always get the squares with some unwanted gap.
2. I want to set the view/eye point (the "camera") 45 degrees towards the middle of the first row, so the player can see the road ahead of him.
3. Next, I want to move along the road. I've seen rotate and how it works. Is there a way to do the same with the viewpoint, or do I need to change the squares' z coordinates when drawing?
4. I see that onDrawFrame() is called over and over many times. To control the FPS, I've seen on the internet that people use their own FPS calculation with a sleep(). Isn't there a built-in one already?
GLRenderer code:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;
import android.util.Log;
class GLRenderer implements GLSurfaceView.Renderer {
private static final String TAG = "GLRenderer" ;
private final Context context;
private float mCubeRotation = 70.0f;
private Triangle triangle;
private Cube[][] cube;
GLRenderer(Context context) {
this.context = context;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
gl.glClearDepthf(1.0f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,GL10.GL_NICEST);
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
Log.d("MyOpenGLRenderer", "Surface changed. Width=" + width
+ " Height=" + height);
System.out.println("arg");
//get map
cube = new Cube[30][5];
for(int i = 0; i < cube.length; i++)
for(int j = 0; j < cube[i].length; j++)
cube[i][j] = new Cube();
//draw triangle
triangle = new Triangle(0.5f, 1, 0, 0);
// Define the view frustum
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
float ratio = (float) width / height;
GLU.gluPerspective(gl, 45.0f, ratio, 0.1f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
public void onDrawFrame(GL10 gl) {
// Clear the screen to black
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
//translate(dx, dy, dz)
// Position model so we can see it
//gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0.0f, 0.0f, -10.0f);
gl.glRotatef(mCubeRotation, 1.0f, 1.0f, 1.0f);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][0].draw(gl);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][1].draw(gl);
gl.glTranslatef(0.0f, 0.0f, -10.0f);
cube[0][2].draw(gl);
gl.glLoadIdentity();
//set rotation
mCubeRotation -= 0.15f;
System.out.println("mCubeRotation: "+mCubeRotation);
}
}
Cube code:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
class Cube {
private FloatBuffer mVertexBuffer; //vertex
private FloatBuffer mColorBuffer; //color
private ByteBuffer mIndexBuffer; //face indices
float width = 1.0f;
float height = 0.5f;
float depth = 1.0f;
private float vertices[] = {
-width, -height, -depth, // 0
width, -height, -depth, // 1
width, height, -depth, // 2
-width, height, -depth, // 3
-width, -height, depth, // 4
width, -height, depth, // 5
width, height, depth, // 6
-width, height, depth, // 7
};
private float colors[] = {
0.0f, 1.0f, 0.0f,
1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 1.0f,
0.5f, 0.0f, 1.0f,
1.0f, 0.5f, 0.0f,
1.0f, 1.0f, 0.0f,
0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f,
1.0f, 1.0f
};
private byte indices[] = {
0, 4, 5,
0, 5, 1,
1, 5, 6,
1, 6, 2,
2, 6, 7,
2, 7, 3,
3, 7, 4,
3, 4, 0,
4, 7, 6,
4, 6, 5,
3, 0, 1,
3, 1, 2
};
public Cube() {
ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mVertexBuffer = byteBuf.asFloatBuffer();
mVertexBuffer.put(vertices);
mVertexBuffer.position(0);
byteBuf = ByteBuffer.allocateDirect(colors.length * 4);
byteBuf.order(ByteOrder.nativeOrder());
mColorBuffer = byteBuf.asFloatBuffer();
mColorBuffer.put(colors);
mColorBuffer.position(0);
mIndexBuffer = ByteBuffer.allocateDirect(indices.length);
mIndexBuffer.put(indices);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE,
mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
}
Eventually I'll draw the square array using glDrawArrays() or glDrawElements() but for now I've used only 3 objects.
There's a lot of questions here. I can't cover everything in detail, but hopefully I can give you some pointers to steer you in the right direction.
To draw 150 squares, you have a number of options:
1. Create a vertex buffer with a single square, and draw it 150 times with translations applied. This is probably the easiest way to get off the ground, so I would recommend getting it working first. It's a reasonable approach if all your squares look the same.
2. Create 150 vertex buffers with different coordinates. I wouldn't recommend this, because it's the least efficient option and has no benefits over the other approaches.
3. Store the vertices for all 150 squares in a single vertex buffer. This will be the most efficient of the first 3 options, but it only works well as long as the relative orientation of the squares stays the same. You may want to try this once you have the basics working.
4. Use instanced rendering. This is a more advanced feature, only available in ES 3.0; just mentioning it for future reference.
What you attempted is sort of a hybrid between options 1 and 2. If you want to go for option 1, you only need one instance of your Cube class. Look at what you did: you created 150 objects that are all exactly the same, which is not very useful.
Now, on your questions:
1. To draw the squares without gaps between them, the amount of your translations needs to match the size of each square. Your squares are 2 units wide, but you translate each one by 10 units. You also translate them in the z-direction, which I don't quite understand. (See the sketch after these answers.)
2. If you want to stick with the kind of functionality you have been using, check out GLU.gluLookAt(). It lets you place your camera wherever you want and point it in any direction.
3. Same as 2: call GLU.gluLookAt() every time you want to move the viewpoint.
4. Android caps the frame rate at 60 frames per second, and that's normally what you should be shooting for anyway, IMHO. If you want to limit it to 30 fps later to save power, you can cross that bridge when you get there. Based on what I researched recently, there's no clean and portable way to do this on Android; the proposed solutions I have seen all look kind of hacky to me.
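As a rough sketch of option 1 combined with answers 1-3, in the ES 1.0 style of your code (here cube is assumed to be a single Cube instance, and the camera values are made up for illustration, not tested):
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    // Place the camera above and behind the first row, looking down the road.
    GLU.gluLookAt(gl,
            0.0f, 6.0f, 6.0f,    // eye position (assumed)
            0.0f, 0.0f, -10.0f,  // point to look at, somewhere down the road
            0.0f, 1.0f, 0.0f);   // up vector
    // One Cube drawn 30x5 times; each cube is 2 units wide and 2 units deep,
    // so translating by exactly 2 units leaves no gaps.
    for (int row = 0; row < 30; row++) {
        for (int col = 0; col < 5; col++) {
            gl.glPushMatrix();
            gl.glTranslatef((col - 2) * 2.0f, 0.0f, -row * 2.0f);
            cube.draw(gl);
            gl.glPopMatrix();
        }
    }
}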
A couple more things on your code:
Your color definitions look odd. You specify colors with 4 components, and the size of the array is correct for that, but you write the array with 3 values per line, which makes it look like you want 3-component colors. Either one can be done, but you need to be consistent. 3 components are enough unless you need transparency. (The array is regrouped 4 values per line in the sketch after these notes.)
You are using ES 1.0. That's valid, and might be easier to get started with. But you should be aware that many of its features are considered obsolete, and using ES 2.0 would let you learn more modern and current OpenGL features. The initial hurdle will be higher, so there's definitely a tradeoff here.
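Regarding the color array: these are the same 32 values from the question's Cube class, simply regrouped so the RGBA layout expected by glColorPointer(4, ...) is visible at a glance:
private float colors[] = {   // one RGBA color per vertex
    0.0f, 1.0f, 0.0f, 1.0f,  // 0
    0.0f, 1.0f, 0.0f, 1.0f,  // 1
    1.0f, 0.5f, 0.0f, 1.0f,  // 2
    1.0f, 0.5f, 0.0f, 1.0f,  // 3
    1.0f, 0.0f, 0.0f, 1.0f,  // 4
    1.0f, 0.0f, 0.0f, 1.0f,  // 5
    0.0f, 0.0f, 1.0f, 1.0f,  // 6
    1.0f, 0.0f, 1.0f, 1.0f,  // 7
};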
I have a problem with the setLookAtM function. My goal is to create a cube within a cube, something like this (yep, it's Paint :P):
So basically everything works... almost... I have the smaller cube and I have the bigger one.
However, there is a problem. I created the bigger one with coords from -1 to 1 and now I want to upscale it. With a scale of 1.0f I have something like this (the inner cube is rotating):
And that's good, but now... when I try to scale the bigger cube (so that it looks like in the Paint drawing), the image goes black or white (I guess it's because the "camera" looks at the white cube, but I still don't know why my inner cube disappears :/). I don't understand what I'm doing wrong. Here is my code:
public void onDrawFrame(GL10 unused) {
float[] scratch = new float[16];
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5.0f, 0f, 0f, -1.0f, 0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
mRoom.mScale = 1.0f;
Matrix.setIdentityM(mScaleMatrix, 0);
Matrix.scaleM(mScaleMatrix, 0, mRoom.mScale, mRoom.mScale, mRoom.mScale);
float[] scaleTempMatrix = new float[16];
Matrix.multiplyMM(scaleTempMatrix, 0, mMVPMatrix, 0, mScaleMatrix, 0);
mRoom.draw(scaleTempMatrix);
When I set for example:
mRoom.mScale = 3.0f;
And
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -2.0f, 0f, 0f, 0.0f, 1.0f, 1.0f, 0.0f);
My camera should be at (0, 0, -2) looking at (0, 0, -1), and it should be inside the white cube (since the scale is 3.0, the coords should go from -3 to 3, right?). But all I get is a white screen, without the smaller cube rotating inside :/
If your scale is 3x in this code, then your visible coordinate range is actually going to be [-1/3, 1/3].
You are thinking about things backwards; it might help to consider the order in which the scale operation is applied. Right now you are scaling the object-space coordinates, then applying the view matrix, and then the projection. It may not look that way, but that is how matrix multiplication in GL works: the matrix written closest to the vertex is applied first, and matrix multiplication is not commutative.
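To make the ordering concrete (a small illustration, not code from the question): Matrix.multiplyMM(result, 0, lhs, 0, rhs, 0) computes lhs times rhs, and when the combined matrix multiplies a vertex, the right-most factor acts first.
// P * V * S * v : scale happens in object space, before the camera and the projection
// S * P * V * v : scale happens last, on the projected (clip-space) coordinates
float[] vs = new float[16];
float[] pvs = new float[16];
Matrix.multiplyMM(vs, 0, mViewMatrix, 0, mScaleMatrix, 0);      // V * S
Matrix.multiplyMM(pvs, 0, mProjectionMatrix, 0, vs, 0);         // P * (V * S)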
I believe this is what you actually want:
public void onDrawFrame(GL10 unused) {
float[] scratch = new float[16];
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -5.0f, 0f, 0f, -1.0f, 0f, 1.0f, 0.0f);
mRoom.mScale = 3.0f;
Matrix.setIdentityM(mScaleMatrix, 0);
Matrix.scaleM(mScaleMatrix, 0, mRoom.mScale, mRoom.mScale, mRoom.mScale);
Matrix.multiplyMM(scratch, 0, mScaleMatrix, 0, mProjectionMatrix, 0);  // scale * projection
Matrix.multiplyMM(mMVPMatrix, 0, scratch, 0, mViewMatrix, 0);          // (scale * projection) * view; the scratch array avoids passing the same array as both output and input of multiplyMM
mRoom.draw(mMVPMatrix);
}
Hi, I have a 512x512 texture that I would like to display within my GLSurfaceView at 100% scale, with a 1:1 pixel-for-pixel view.
I am having trouble achieving this and need some assistance.
Every combination of settings in onSurfaceChanged and onDrawFrame results in a scaled image.
Can someone please direct me to an example where this is possible?
private float[] mProjectionMatrix = new float[16];
// where mWidth and mHeight are set to 512
public void onSurfaceChanged(GL10 gl, int mWidth, int mHeight) {
GLES20.glViewport(0, 0, mWidth, mHeight);
float left = -1.0f /(1/ScreenRatio );
float right = 1.0f /(1/ScreenRatio );
float bottom = -1.0f ;
float top = 1.0f ;
final float near = 1.0f;
final float far = 10.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 glUnused ) {
....stuff here
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0, 0, 1);
Matrix.rotateM(mModelMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f);
drawCube();
}
many thanks,
There are various options. The simplest, IMHO, is to not apply any view/projection transformations at all, and draw a textured quad with a range of (-1.0, 1.0) for both the x- and y-coordinates. That would make your texture fill the entire view. Since you want it displayed in a 512x512 part of the view, you can set the viewport to cover only that area:
glViewport(0, 0, 512, 512);
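A minimal sketch of such a quad (the vertex data below is an assumption for illustration, not taken from your code; positions span the full [-1, 1] clip-space range, so no matrix is needed in the vertex shader):
// Two triangles covering clip space exactly; texture coordinates map the full texture onto them.
float[] quadPositions = {
    -1f, -1f,   1f, -1f,   -1f,  1f,   // first triangle
     1f, -1f,   1f,  1f,   -1f,  1f,   // second triangle
};
float[] quadTexCoords = {              // flip the V coordinate if the image appears upside down
     0f,  1f,   1f,  1f,    0f,  0f,
     1f,  1f,   1f,  0f,    0f,  0f,
};
// vertex shader: gl_Position = vec4(aPosition, 0.0, 1.0);
GLES20.glViewport(0, 0, 512, 512);     // 1:1 mapping for a 512x512 texture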
Another possibility is to reduce the range of your input coordinates so they map to a 512x512 area of the screen, or to scale the coordinates in the vertex shader.
You didn't specify what version of OpenGL ES you use. In ES 3.0, you could also use glBlitFramebuffer() to copy the texture to your view.
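For completeness, an ES 3.0 blit could look roughly like this (a sketch under the assumption that textureId holds the 512x512 texture and the default framebuffer is the destination; untested):
int[] fbo = new int[1];
GLES30.glGenFramebuffers(1, fbo, 0);
GLES30.glBindFramebuffer(GLES30.GL_READ_FRAMEBUFFER, fbo[0]);
GLES30.glFramebufferTexture2D(GLES30.GL_READ_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
        GLES30.GL_TEXTURE_2D, textureId, 0);
GLES30.glBindFramebuffer(GLES30.GL_DRAW_FRAMEBUFFER, 0);  // draw to the default framebuffer
GLES30.glBlitFramebuffer(0, 0, 512, 512,                  // source rectangle
        0, 0, 512, 512,                                   // destination rectangle, same size = 1:1
        GLES30.GL_COLOR_BUFFER_BIT, GLES30.GL_NEAREST);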
As a beginner to Android and OpenGL ES 2.0, I'm testing simple things and seeing how it goes.
I downloaded the sample at http://developer.android.com/training/graphics/opengl/touch.html .
I changed the code to check whether I could animate a rotation of the camera around the (0,0,0) point, the center of the square.
So I did this:
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
long time = SystemClock.uptimeMillis() % 4000L;
float angle = ((float) (2*Math.PI)/ (float) 4000) * ((int) time);
Matrix.setLookAtM(mVMatrix, 0, (float) (3*Math.sin(angle)), 0, (float) (3.0f*Math.cos(angle)), 0 ,0, 0, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
// Draw square
mSquare.draw(mMVPMatrix);
}
I expected the camera to always look at the center of the square (the (0,0,0) point), but that's not what happens. The camera is indeed rotating around the square, but the square does not stay in the center of the screen; instead it moves along the X axis:
I also expected that if I gave eyeX and eyeY the same values as centerX and centerY, like this:
Matrix.setLookAtM(mVMatrix, 0, 1, 1, -3, 1 ,1, 0, 0f, 1.0f, 0.0f);
the square would keep its shape (I mean, the field of vision would be dragged along a plane parallel to the square), but that's also not what happens:
This is my projection matrix:
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
What is going on here?
Looking at the source code of the example you downloaded, I can see why you're having that problem: it has to do with the order of the matrix multiplication.
Typically in OpenGL source you see matrices set up such that
transformed vertex = projMatrix * viewMatrix * modelMatrix * input vertex
However in the source example program that you downloaded, their shader is setup like this:
" gl_Position = vPosition * uMVPMatrix;"
With the position on the other side of the matrix. You can work with OpenGL in this way, but it requires that you reverse the lhs/rhs of your matrix multiplications.
Long story short, in your case, you should change your shader to read:
" gl_Position = uMVPMatrix * vPosition;"
and then I believe you will get the expected behavior.