I need to plot a few coordinates on screen. The coordinates without a Z value get displayed, but the coordinates with non-zero Z values aren't. I tried normalizing the coordinates before plotting, which worked: when normalized, all the coordinates get plotted. But with unnormalized coordinates, the vertices with Z values stay hidden.
OpenGL Version : ES 2.0
Coordinates:
float squareCoords[] = {
    202.00002f, 244.00002f, 0.0f,
    440.00003f, 625.00006f, 0.0f,
    440.00003f, 625.00006f, 0.0f,
    690.00006f, 186.0f, 0.0f,
    202.00002f, 244.00002f, 50.0f,
    440.00003f, 625.00006f, 50.0f,
    440.00003f, 625.00006f, 50.0f,
    690.00006f, 186.0f, 50.0f
};
Indices:
short[] drawOrder = {
    0, 1, 2, 3,
    0, 4,
    1, 5,
    2, 6,
    4, 5, 6, 7
};
Draw Code:
GLES20.glDrawElements(
        GLES20.GL_LINES, drawOrder.length,
        GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
onSurfaceChanged code:
public void onSurfaceChanged(GL10 unused, int width, int height) {
    mWidth = width;
    mHeight = height;
    GLES20.glViewport(0, 0, mWidth, mHeight);
    float ratio = (float) mWidth / mHeight;
    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method
    Matrix.orthoM(mProjMatrix, 0, 0f, width, 0.0f, height, 0, 50);
}
onDrawFrame code:
public void onDrawFrame(GL10 unused) {
    Square square = new Square();
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mFirstDraw)
        mFirstDraw = false;
    long time = SystemClock.uptimeMillis() % 4000L;
    float angle = 0.090f * ((int) time);
    // float angle = 90;
    // Matrix.setRotateM(mRotationMatrix, 0, angle, 0, 0, -1.0f);
    // angle += 0.7f;
    if (angle > 360f)
        angle = 0f;
    Matrix.setLookAtM(mVMatrix, 0, 0f, 0f, 4f, 0f, 0f, 0f, 0f, 1f, 0f);
    // projection x view
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
    // create the rotation matrix
    Matrix.setRotateM(rotationMatrix, 0, angle, -1f, 0f, 0f);
    // (projection x view) x rotation = MVP
    float[] duplicateMatrix = Arrays.copyOf(mMVPMatrix, 16);
    Matrix.multiplyMM(mMVPMatrix, 0, duplicateMatrix, 0, rotationMatrix, 0);
    square.draw(mMVPMatrix);
}
I'm rotating the diagram to check whether the vertices with Z values are drawn.
I personally think this line is the culprit; here I've given a far value of 50 and a near value of 0, and I wonder what these values should be:
    Matrix.orthoM(mProjMatrix, 0, 0f, width, 0.0f, height, 0, 50);
The problem here was that the far value wasn't high enough. I set far to 500:
    Matrix.orthoM(mProjectionMatrix, 0, 0f, width, 0.0f, height, 0, 500);
and changed the coordinates to:
float squareCoords[] = {
    202.00002f, 244.00002f, 0.0f,
    440.00003f, 625.00006f, 0.0f,
    440.00003f, 625.00006f, 0.0f,
    690.00006f, 186.0f, 0.0f,
    202.00002f, 244.00002f, 200.0f,
    440.00003f, 625.00006f, 200.0f,
    440.00003f, 625.00006f, 200.0f,
    690.00006f, 186.0f, 200.0f
};
It's working now.
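Why a too-small far plane hides vertices can be checked numerically: an orthographic projection with planes near/far only maps view-space depths between -near and -far into the visible [-1, 1] NDC range; everything else is clipped. Here is a minimal plain-Java sketch of the standard orthographic depth mapping (the class and helper names are mine, not part of any API):

```java
public class OrthoDepth {
    // Maps a view-space z to normalized device z for an orthographic
    // projection with the given near/far planes. GL convention: the camera
    // looks down -z, so visible depths satisfy -near >= zView >= -far.
    static float ndcZ(float zView, float near, float far) {
        return (-2f * zView - (far + near)) / (far - near);
    }

    // A vertex survives depth clipping only if its NDC z lands in [-1, 1].
    static boolean visible(float zView, float near, float far) {
        float z = ndcZ(zView, near, far);
        return z >= -1f && z <= 1f;
    }

    public static void main(String[] args) {
        // With near=0, far=50 only view-space z in [0, -50] survives:
        System.out.println(visible(-46f, 0f, 50f));   // true
        System.out.println(visible(-60f, 0f, 50f));   // false, clipped
        // Widening far to 500 keeps much deeper vertices:
        System.out.println(visible(-196f, 0f, 500f)); // true
    }
}
```

So enlarging far (or using negative near) widens the depth slab, which is exactly why the z = 50 vertices appeared after the change.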
Related
To rotate the camera around an object about the abscissa axis (X), I use the following code:
private float k = 0f;
...

@Override
public void onDrawFrame(GL10 gl) {
    // Math.PI * 2 is a full rotation
    k = (k >= Math.PI * 2) ? 0.0f : k + 0.01f; // gradually rotate the camera
    float radius = 2.6f;
    float x = (float) (radius * Math.cos(k));
    float z = (float) (radius * Math.sin(k));
    Matrix.setLookAtM(viewMatrix, 0, x, 0, z, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    ...
}
To rotate the camera around an object along the ordinate axis (Y):
...
float y = (float) (radius * Math.cos(k));
float z = (float) (radius * Math.sin(k));
Matrix.setLookAtM(viewMatrix, 0, 0, y, z, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
...
But I don't understand how to rotate the camera around an object about an inclined axis. What should the ratio between x, y and z be in this case?
Thank you for any comment / response!
Note: The center of the object coincides with the point where the camera is looking (0,0,0).
Found this solution:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    ...
    Matrix.setLookAtM(viewMatrix, 0, 0f, 0f, 5f,
            0f, 0f, 0f, 0f, 1.0f, 0.0f);
}

@Override
public void onDrawFrame(GL10 gl) {
    Matrix.rotateM(viewMatrix, 0, angle1, 1f, 0f, 0f);
    Matrix.rotateM(viewMatrix, 0, angle2, 0f, 1f, 0f);
    ...
}
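An alternative to composing rotateM calls is to compute the eye position on a circle around the inclined axis directly, using Rodrigues' rotation formula, and feed the result to setLookAtM each frame. A plain-Java sketch (the class and method names are mine; the axis is assumed to be unit length):

```java
public class OrbitAxis {
    // Rotates point p around the unit axis (ax, ay, az) by angle t radians,
    // using Rodrigues' formula:
    // p' = p*cos(t) + (axis x p)*sin(t) + axis*(axis . p)*(1 - cos(t))
    static float[] rotate(float[] p, float ax, float ay, float az, double t) {
        float c = (float) Math.cos(t), s = (float) Math.sin(t);
        float dot = ax * p[0] + ay * p[1] + az * p[2];
        float[] cross = {
            ay * p[2] - az * p[1],
            az * p[0] - ax * p[2],
            ax * p[1] - ay * p[0]
        };
        return new float[] {
            p[0] * c + cross[0] * s + ax * dot * (1 - c),
            p[1] * c + cross[1] * s + ay * dot * (1 - c),
            p[2] * c + cross[2] * s + az * dot * (1 - c)
        };
    }

    public static void main(String[] args) {
        // Start the camera at (radius, 0, 0) and orbit it around the
        // inclined unit axis (0, 1, 1)/sqrt(2); the result would be the
        // eye position passed to Matrix.setLookAtM.
        float inv = (float) (1.0 / Math.sqrt(2.0));
        float[] eye = rotate(new float[] { 2.6f, 0f, 0f }, 0f, inv, inv, Math.PI);
        System.out.println(eye[0] + ", " + eye[1] + ", " + eye[2]);
    }
}
```

Incrementing t a little each frame, as with k above, produces the gradual orbit; the target stays (0, 0, 0).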
I have a vertex (which I will not be showing/rendering in the scene):
float vertex[] = {
    1.0f, 1.0f, 1.0f,
};
And I have a mesh, which I have translated and rotated using:
Matrix.translateM(World.mModelMatrix, tmOffset, globalPositionX, globalPositionY, globalPositionZ);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationZ, 0, 0, 1);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationY, 0, 1, 0);
Matrix.rotateM(World.mModelMatrix, rmOffset, globalRotationX, 1, 0, 0);
How can I apply those translations and rotations to the vertex, and get its global position (x, y, z) afterwards?
Use the Matrix.multiplyMV method:
float vertex[] = { 1.0f, 1.0f, 1.0f, 1.0f };
float result[] = { 0.0f, 0.0f, 0.0f, 0.0f };
Matrix.multiplyMV(result, 0, matrix, 0, vertex, 0);
Note that you will have to append a homogeneous coordinate (w = 1.0) to your vector to make it work.
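For intuition, android.opengl.Matrix stores matrices in column-major order, so multiplyMV computes out[row] as the sum of m[col * 4 + row] * v[col] over the four columns. A plain-Java equivalent (my own sketch, not the Android source) makes the convention explicit:

```java
public class Mat4 {
    // Equivalent of android.opengl.Matrix.multiplyMV for a column-major
    // 4x4 matrix m and a 4-component vector v (use w = 1 for positions).
    static float[] multiplyMV(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row]      * v[0]
                     + m[4 + row]  * v[1]
                     + m[8 + row]  * v[2]
                     + m[12 + row] * v[3];
        }
        return out;
    }

    public static void main(String[] args) {
        // Identity rotation with a translation of (10, 20, 30): in
        // column-major layout the translation sits in elements 12..14.
        float[] m = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            10, 20, 30, 1
        };
        float[] world = multiplyMV(m, new float[] { 1f, 1f, 1f, 1f });
        System.out.println(world[0] + ", " + world[1] + ", " + world[2]); // 11.0, 21.0, 31.0
    }
}
```

The same layout explains why the translation components of mModelMatrix occupy offsets 12 to 14 after translateM.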
I am not getting the expected coordinate values from the gluUnProject function.
I will put some code first. Here is the function that gets called on a touch event:
public float[] getWorldSpaceFromMouseCoordinates(float mouseX, float mouseY)
{
    float[] finalCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
    // mouse Y needs to be inverted
    mouseY = (float) _viewport[3] - mouseY;
    float[] mouseZ = new float[1];
    FloatBuffer fb = FloatBuffer.allocate(1);
    GLES20.glReadPixels((int) mouseX, (int) mouseY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, fb);
    int result = GLU.gluUnProject(mouseX, mouseY, fb.get(0), mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0, finalCoord, 0);
    float[] temp2 = new float[4];
    Matrix.multiplyMV(temp2, 0, mViewMatrix, 0, finalCoord, 0);
    if (result == GL10.GL_TRUE) {
        finalCoord[0] = temp2[0] / temp2[3];
        finalCoord[1] = temp2[1] / temp2[3];
        finalCoord[2] = temp2[2] / temp2[3];
    }
    Log.d("Coordinate:", "" + temp2[0] + "," + temp2[1] + "," + temp2[2]);
    return finalCoord;
}
Here is how the matrices are set up:
@Override
public void onSurfaceChanged(GL10 unused, int width, int height)
{
    // Adjust the viewport based on geometry changes,
    // such as screen rotation
    GLES20.glViewport(0, 0, width, height);
    _viewport = new int[] { 0, 0, width, height };
    float ratio = (float) width / height;
    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method
    Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
}
Setting up the modelview matrix (note that the model matrix is just the identity):
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
So, as per my understanding, my expectation is that this function will give me world coordinates relative to the origin, which is not happening. I am creating a square with the following coordinates:
_vertices = new float [] { -0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
However, I am getting X values ranging over (0.3, -0.3), Y values over (0.5, -0.5), and Z always -1.0 for the whole viewport, with X values in (0.2, -0.2) and Y values in (0.15, -0.15) when touching the corners of the square.
Let me know if any more code is required.
So I found out what the problem was: glReadPixels() with GL_DEPTH_COMPONENT is not supported in OpenGL ES 2.0, which is why I was always getting a wrong depth value and hence wrong coordinates. I had two choices: use an FBO and store the depth with a shader, or do ray picking (since I had only one object in the scene, I was hoping gluUnProject() would do). I chose the latter. Here is my code; I hope it helps somebody (it's not generic, and the geometry is hard-coded).
public float[] getWorldSpaceFromMouseCoordinates(float mouseX, float mouseY)
{
    float[] farCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
    float[] nearCoord = { 0.0f, 0.0f, 0.0f, 0.0f };
    // mouse Y needs to be inverted
    //mouseY = (float) _viewport[3] - mouseY;
    // calling glReadPixels() with GL_DEPTH_COMPONENT is not supported in
    // GLES 2.0, so implement ray picking instead
    int result = GLU.gluUnProject(mouseX, mouseY, 1.0f, mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0,
            farCoord, 0);
    if (result == GL10.GL_TRUE)
    {
        farCoord[0] = farCoord[0] / farCoord[3];
        farCoord[1] = farCoord[1] / farCoord[3];
        farCoord[2] = farCoord[2] / farCoord[3];
    }
    result = GLU.gluUnProject(mouseX, mouseY, 0.0f, mViewMatrix, 0, mProjectionMatrix, 0, _viewport, 0, nearCoord,
            0);
    if (result == GL10.GL_TRUE)
    {
        nearCoord[0] = nearCoord[0] / nearCoord[3];
        nearCoord[1] = nearCoord[1] / nearCoord[3];
        nearCoord[2] = nearCoord[2] / nearCoord[3];
    }
    float[] dirVector = Vector.normalize(Vector.minus(farCoord, nearCoord));
    float[] rayOrigin = { 0.0f, 0.0f, 3.0f };
    Log.d("Far Coordinate:", "" + farCoord[0] + "," + farCoord[1] + "," + farCoord[2]);
    Log.d("Near Coordinate:", "" + nearCoord[0] + "," + nearCoord[1] + "," + nearCoord[2]);
    float[] vertices = { -0.5f, 0.5f, 0.0f,  // top left
                         -0.5f, -0.5f, 0.0f, // bottom left
                         0.5f, -0.5f, 0.0f,  // bottom right
                         0.5f, 0.5f, 0.0f }; // top right
    // calculate the normal of the square
    float[] v1 = { vertices[3] - vertices[0], vertices[4] - vertices[1], vertices[5] - vertices[2] };
    float[] v2 = { vertices[9] - vertices[0], vertices[10] - vertices[1], vertices[11] - vertices[2] };
    float[] n = Vector.normalize(Vector.crossProduct(v1, v2));
    // now calculate the intersection point as per
    // http://antongerdelan.net/opengl/raycasting.html
    // our plane passes through the origin, so 't' is
    float t = -(Vector.dot(rayOrigin, n) / Vector.dot(dirVector, n));
    // substituting t into the ray equation gives the intersection point
    float[] intersectionPoint = Vector.addition(rayOrigin, Vector.scalarProduct(t, dirVector));
    Log.d("Ipoint:", "" + intersectionPoint[0] + "," + intersectionPoint[1] + "," + intersectionPoint[2]);
    return intersectionPoint;
}
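The Vector helper class used above isn't shown. A minimal plain-Java stand-in for the ray-plane math (my own sketch, assuming as in the code above that the plane passes through the origin) could look like this:

```java
public class RayPlane {
    // Minimal stand-ins for the (unshown) Vector helpers.
    static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static float[] scale(float s, float[] a) { return new float[] { s*a[0], s*a[1], s*a[2] }; }
    static float[] add(float[] a, float[] b) { return new float[] { a[0]+b[0], a[1]+b[1], a[2]+b[2] }; }
    static float[] normalize(float[] a) {
        float len = (float) Math.sqrt(dot(a, a));
        return scale(1f / len, a);
    }

    // Intersects the ray origin + t*dir with the plane through the origin
    // with normal n: t = -(origin . n) / (dir . n). The caller should check
    // that dot(dir, n) is not ~0 (ray parallel to the plane).
    static float[] intersect(float[] origin, float[] dir, float[] n) {
        float t = -dot(origin, n) / dot(dir, n);
        return add(origin, scale(t, dir));
    }

    public static void main(String[] args) {
        // A ray from the camera at (0, 0, 3) straight down -z hits the
        // z = 0 plane at the origin:
        float[] hit = intersect(new float[] { 0f, 0f, 3f },
                                new float[] { 0f, 0f, -1f },
                                new float[] { 0f, 0f, 1f });
        System.out.println(hit[0] + ", " + hit[1] + ", " + hit[2]); // 0.0, 0.0, 0.0
    }
}
```

For a plane not through the origin, t would gain a -d term from the plane equation; the version above mirrors the special case the answer relies on.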
I'm creating a curtain-like animation triggered by onTouchEvent(), where you can drag one end of a square to make it bigger or smaller.
My only problem is that instead of a square covering the entire screen, I get a small line at the top of the screen, and I can only expand and shrink that line.
Why won't this code draw a square?
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    gl.glMatrixMode(GL10.GL_PROJECTION); // set matrix to projection mode
    gl.glLoadIdentity();                 // reset the matrix to its default state
    gl.glOrthof(0, height, width, 0, -3, 8);
}
Vertices:
private float vertices[] = {
    -1.0f,  1.0f, 0.0f, // 0, Top Left
    -1.0f, -1.0f, 0.0f, // 1, Bottom Left
     1.0f, -1.0f, 0.0f, // 2, Bottom Right
     1.0f,  1.0f, 0.0f, // 3, Top Right
};
// The order in which we like to connect them.
private short[] indices = { 0, 1, 2, 0, 2, 3 };
And the draw method in Square:
public void draw(GL10 gl, float x, float y) {
    // Counter-clockwise winding.
    gl.glFrontFace(GL10.GL_CCW);
    // Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
    // Enable the vertex array client state
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
    // Draw the vertices as a triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
    // Disable the client state before leaving
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
You are missing this after setting the projection matrix mode:
gl.glMatrixMode(GL10.GL_MODELVIEW); // reset the modelview matrix to identity
gl.glLoadIdentity();
I'm new to OpenGL and I can't figure out how to use gluLookAt. Below is my source; any help will be much appreciated.
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
    gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, lightAmbientBuffer);
    gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, lightDiffuseBuffer);
    gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPositionBuffer);
    gl.glEnable(GL10.GL_LIGHT0);
    gl.glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
    gl.glDisable(GL10.GL_DITHER);
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glShadeModel(GL10.GL_SMOOTH);
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    gl.glClearDepthf(1.0f);
    gl.glEnable(GL10.GL_DEPTH_TEST);
    gl.glDepthFunc(GL10.GL_LEQUAL);
    gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
    cube.loadGLTexture(gl, this.context);
}
public void onDrawFrame(GL10 gl) {
    gl.glMatrixMode(GL10.GL_MODELVIEW);   // Select the modelview matrix
    gl.glLoadIdentity();                  // Reset the modelview matrix
    GLU.gluLookAt(gl, xrot, yrot, 0.0f, 0.0f, xrot, yrot, 0.0f, 0.0f, 0.0f);
    // Clear screen and depth buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();                  // Reset the current modelview matrix
    // Check if the light flag has been set to enable/disable lighting
    if (light) {
        gl.glEnable(GL10.GL_LIGHTING);
    } else {
        gl.glDisable(GL10.GL_LIGHTING);
    }
    // Check if the blend flag has been set to enable/disable blending
    if (blend) {
        gl.glEnable(GL10.GL_BLEND);        // Turn blending on
        gl.glDisable(GL10.GL_DEPTH_TEST);  // Turn depth testing off
    } else {
        gl.glDisable(GL10.GL_BLEND);       // Turn blending off
        gl.glEnable(GL10.GL_DEPTH_TEST);   // Turn depth testing on
    }
    // Drawing
    gl.glTranslatef(0.0f, 0.0f, z);       // Move z units into the screen
    gl.glScalef(0.8f, 0.8f, 0.8f);        // Scale the cube to 80 percent, otherwise it would be too large for the screen
    // Rotate around the axes (angle, x, y, z)
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); // X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); // Y
    cube.draw(gl, filter);                // Draw the cube
    // Update the rotation angles
    xrot += xspeed;
    yrot += yspeed;
}
/**
 * If the surface changes, reset the view
 */
public void onSurfaceChanged(GL10 gl, int width, int height) {
    if (height == 0) {                   // Prevent a divide by zero
        height = 1;                      // by making height equal one
    }
    gl.glViewport(0, 0, width, height);  // Reset the current viewport
    gl.glMatrixMode(GL10.GL_PROJECTION); // Select the projection matrix
    gl.glLoadIdentity();                 // Reset the projection matrix
    // Calculate the aspect ratio of the window
    GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select the modelview matrix
    gl.glLoadIdentity();                 // Reset the modelview matrix
}
Two things I can see. First, since gluLookAt is defined as
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
your call should be changed to GLU.gluLookAt(gl, xrot, yrot, 0.0f, 0.0f, xrot, yrot, 0.0f, 1.0f, 0.0f);
Notice the new up vector (0.0, 1.0, 0.0). It basically says the y-axis is where you want 'up' to be.
Also, you seem to be using rotation values for the rest of the call. The first triplet should be the position of your viewer (the eye), and the second should be the point you are looking at. Look at http://developer.android.com/reference/android/opengl/GLU.html
Second issue: if you call glLoadIdentity after a gluLookAt call, you will lose the transform that gluLookAt performs, since loading the identity matrix overwrites it. So try issuing gluLookAt after you have placed your geometry.
Here is what I am basically saying in code:
public void onDrawFrame(GL10 gl) {
    // cleaned-up reset code
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    // Check if the light flag has been set to enable/disable lighting
    if (light) {
        gl.glEnable(GL10.GL_LIGHTING);
    } else {
        gl.glDisable(GL10.GL_LIGHTING);
    }
    // Check if the blend flag has been set to enable/disable blending
    if (blend) {
        gl.glEnable(GL10.GL_BLEND);        // Turn blending on
        gl.glDisable(GL10.GL_DEPTH_TEST);  // Turn depth testing off
    } else {
        gl.glDisable(GL10.GL_BLEND);       // Turn blending off
        gl.glEnable(GL10.GL_DEPTH_TEST);   // Turn depth testing on
    }
    // Drawing
    gl.glTranslatef(0.0f, 0.0f, z);       // Move z units into the screen
    gl.glScalef(0.8f, 0.8f, 0.8f);        // Scale the cube to 80 percent, otherwise it would be too large for the screen
    // Rotate around the axes (angle, x, y, z)
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); // X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); // Y
    // place the camera at (0, 0, z) looking at the rotating cube at the
    // origin, with (0, 1, 0) as the up vector
    GLU.gluLookAt(gl, 0.0f, 0.0f, z, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
    cube.draw(gl, filter);                // Draw the cube
    // Update the rotation angles
    xrot += xspeed;
    yrot += yspeed;
}
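For intuition about the three argument triplets, gluLookAt builds an orthonormal camera basis from them: forward f = normalize(center - eye), right s = normalize(f x up), and a recomputed up u = s x f. A plain-Java sketch of that construction (the class and helper names are mine):

```java
public class LookAtBasis {
    // Derives the camera basis that gluLookAt builds from eye, center, up:
    // f points from the eye toward the target, s points right, u points up.
    static float[][] basis(float[] eye, float[] center, float[] up) {
        float[] f = normalize(new float[] {
            center[0] - eye[0], center[1] - eye[1], center[2] - eye[2] });
        float[] s = normalize(cross(f, up));
        float[] u = cross(s, f); // re-orthogonalized up
        return new float[][] { f, s, u };
    }

    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static float[] normalize(float[] a) {
        float len = (float) Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
        return new float[] { a[0] / len, a[1] / len, a[2] / len };
    }

    public static void main(String[] args) {
        // Eye on the +z axis looking at the origin with +y as up:
        float[][] b = basis(new float[] { 0f, 0f, 5f },
                            new float[] { 0f, 0f, 0f },
                            new float[] { 0f, 1f, 0f });
        System.out.println(b[0][2]); // forward z component: -1.0
        System.out.println(b[1][0]); // right x component: 1.0
    }
}
```

This also shows why an up vector of (0, 0, 0), as in the original call, is degenerate: the cross product collapses and no valid basis exists.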