I have an object that I am rendering in Android OpenGL ES 3.0, on a Nexus 9. The object has somewhere around 80000 vertices and a couple hundred thousand triangles.
I know for a fact that those vertices are in a right-handed coordinate system. When I view the object on my PC (using a program like ParaView), I see it in a right-handed coordinate system. But as soon as I render the object in my app with OpenGL, it has the wrong chirality.
As I mentioned above, I'm fairly certain my vertices are correct, so something must be going wrong during the coordinate transformations. Does anyone have an idea which matrix (model, view, or projection) is a likely source of the problem? I need to preserve the integrity of my vertex data and not perform any transformations (like flipping values manually) on the vertices themselves.
EDIT: Someone asked for my code: I can't post everything because it is an incredibly large project, but I'll show you the lines where my matrices have been set up:
In onSurfaceCreated():
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -3.0f;
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setLookAtM(mViewMatrix2, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setLookAtM(mViewMatrix3, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setLookAtM(mViewMatrix4, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
In onSurfaceChanged(GL10 glUnused, int width, int height):
GLES30.glViewport(0, 0, width, height);
viewport[0] = 0;
viewport[1] = 0;
viewport[2] = width;
viewport[3] = height;
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 500.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
Matrix.frustumM(mProjectionMatrix2, 0, left, right, bottom, top, near, far);
Matrix.frustumM(mProjectionMatrix3, 0, left, right, bottom, top, near, far);
Matrix.frustumM(mProjectionMatrix4, 0, left, right, bottom, top, near, far);
To clarify, the object that I am referring to uses mViewMatrix and mProjectionMatrix, not the other view matrices and projection matrices. If there isn't something wrong with this code, I can post more showing the places where I manipulated these matrices.
EDIT2: I simply do not understand why, but manually flipping coordinates (for instance, flipping the z-coordinate) either by changing the vertex data or by applying a scale matrix to the modelview, does not fix my chirality problem. I am absolutely stumped as to how to fix this.
Finally figured out my problem (after 16 hours of trying). Turns out I was culling the wrong side. Switching from GL_BACK to GL_FRONT did the trick for me.
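For anyone hitting the same wall: a view or model transform with a negative determinant mirrors the scene, which reverses screen-space triangle winding, so the face that needs culling swaps. As a hedged sketch (plain Java, hypothetical `WindingCheck` helper, column-major layout as used by `android.opengl.Matrix`), you can detect this by checking the sign of the upper-left 3×3 determinant:

```java
// Hypothetical helper: determinant of the upper-left 3x3 of a
// column-major 4x4 matrix (the layout android.opengl.Matrix uses).
// A negative determinant means the transform mirrors the scene,
// which flips triangle winding and therefore the culled face.
public class WindingCheck {
    public static float det3(float[] m) {
        return m[0] * (m[5] * m[10] - m[6] * m[9])
             - m[4] * (m[1] * m[10] - m[2] * m[9])
             + m[8] * (m[1] * m[6] - m[2] * m[5]);
    }

    // true if rendering with this matrix needs the cull face swapped
    public static boolean flipsWinding(float[] m) {
        return det3(m) < 0f;
    }
}
```

If this check is true for your combined model-view matrix, either swap `glCullFace(GL_BACK)` for `GL_FRONT` as above, or flip the front-face winding with `glFrontFace`.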
I have written an application that uses OpenGL ES. I was testing it on an emulated and a real Nexus 5, running Android 6 (API 23).
During tests on older Android versions (API 22 and below) it turned out that my 3D object is missing one dimension.
After starting the app, it looks like this in both cases (the view is set along -z, and the y axis is up here):
but when rotating it, the behavior differs between API 22 (or lower) and API 23.
Case API 22 (or lower):
The object is flat: the model's z axis seems to be missing. However, lighting is calculated properly (with correct z values).
Case API 23 (desired one):
All screenshots are from emulator; I have tested it only on one real device, with API 23 (Nexus 5), and it works there.
Rotation is done by touch events, and handled by code like this:
Matrix.rotateM(mCurrentRotation, 0, mDeltaRotationY, 1.0f, 0.0f, 0.0f);
Matrix.multiplyMM(mModelMatrix, 0, mModelMatrix, 0, mCurrentRotation, 0);
OpenGL version is set in AndroidManifest:
<uses-feature
android:glEsVersion="0x00020000"
android:required="true" />
It seems to me that something has changed in the behavior of the model or view matrix.
EDIT
As requested in comment:
The matrices are created in my renderer, which implements GLSurfaceView.Renderer.
I was following the http://www.learnopengles.com/ tutorial to prepare this.
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
...
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = 7.0f;
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
...
}
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
...
final float ratio = (float) width / height; // cast needed to avoid integer division
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 20.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
...
}
public void onDrawFrame(GL10 glUnused) {
...
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
...
}
I know this issue is old already, but I faced the same problem today, so in case somebody else runs into it, here is what I identified.
Whenever you call Matrix.multiplyMM(float[] result, ...), the result array must not be the same array as either the left-hand-side or the right-hand-side input matrix.
On Android Marshmallow and above this does not seem to be a problem, but on older versions you should pass a copy of the shared array (e.g. via .clone()) as the left- or right-hand-side input instead.
Long story short: in Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0); the second mMVPMatrix needs to be replaced by mMVPMatrix.clone() for this to work properly.
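To see why the aliased call goes wrong, here is a hedged sketch in plain Java of a column-major 4×4 multiply with the same semantics as `Matrix.multiplyMM` (an illustrative re-implementation in a hypothetical `AliasDemo` class, not the platform code): once `result` shares storage with an input, later products read elements that earlier iterations already overwrote.

```java
// Minimal column-major 4x4 multiply with the same element layout as
// android.opengl.Matrix.multiplyMM (illustrative re-implementation only).
// result[c*4+r] is written while the inputs are still being read, so
// result must not share storage with lhs or rhs.
public class AliasDemo {
    public static void multiplyMM(float[] result, float[] lhs, float[] rhs) {
        for (int c = 0; c < 4; c++) {          // output column
            for (int r = 0; r < 4; r++) {      // output row
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += lhs[k * 4 + r] * rhs[c * 4 + k];
                }
                // If result == rhs, this write clobbers a value that the
                // remaining rows of this column still need to read.
                result[c * 4 + r] = sum;
            }
        }
    }
}
```

A safe pattern on pre-Marshmallow devices is a scratch array: multiply into the scratch, then `System.arraycopy(scratch, 0, mMVPMatrix, 0, 16)` (or use the `.clone()` approach above).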
How can I determine the bounds of the x and y coordinate planes displayed on screen in my OpenGL ES program?
I need to fill the screen with 6 identical shapes, all equal in width and height, but to do this I must determine the range of x and y values that are visible (so I can set the shapes' vertices properly). In other words, I need a programmatic way to find the values of -x and x, and -y and y.
What's the simplest way to do this? Should I be manipulating or reading the projection matrix, the model-view matrix, or neither?
I know onSurfaceChanged() has access to the layout's width and height, but I'm not certain whether these parameters are needed to find the on-screen coordinate bounds.
Below are the code snippets that show how I configure the frustum with the modelView and projection matrices:
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
// Position the eye in front of the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -0.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
...
}
public void onSurfaceChanged(GL10 glUnused, int width, int height)
{
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
layoutWidth = width; //test: string graphics
layoutHeight = height; //test: string graphics
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
screenSixth = findScreenSixths(width);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
Your problem seems a bit strange. I could understand wanting to manipulate the matrix so that all objects appear on screen, but what you are asking is how to place the spheres so that they are on screen. In that case their on-screen projection does not change, and there is no reason to use the same matrix as for the rest of the scene. You can simply start with the identity and add a frustum to get the correct projection. After that, the border values (left, right, ...) lie at the edge of the screen for a z value of near, so you can place a sphere of radius r at positions such as (left + r, top + r, near).
If you still need a specific position for the spheres because they interact with other objects, then you will most likely need to check the on-screen projection of the billboarded bounding square of each sphere. That means creating a square with the same center as the sphere and a width of twice the radius, then multiplying the square's positions by the billboarded version of the sphere's matrix. Proper billboarding is documented around the web, but unless you do something unusual with the matrices it usually amounts to setting the top-left 3x3 part of the matrix to the identity.
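To make this concrete: with `Matrix.frustumM(..., left, right, bottom, top, near, far)`, the visible x/y extents grow linearly with distance from the eye (similar triangles). A hedged sketch of that computation, in plain Java with a hypothetical `FrustumBounds` helper:

```java
// Visible x/y bounds at a given eye-space distance in front of the camera,
// derived by similar triangles from the frustum parameters: the near-plane
// rectangle (left..right, bottom..top) scales by distance / near.
public class FrustumBounds {
    public static float[] boundsAt(float left, float right, float bottom,
                                   float top, float near, float distance) {
        float s = distance / near;
        return new float[] { left * s, right * s, bottom * s, top * s };
    }
}
```

With the ratio-based frustum in the question (near = 1), a shape at eye-space depth d is visible while its x stays within ±ratio·d and its y within ±d, so those bounds can be divided into six equal rectangles.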
I'm using OpenGL touch events to move shapes, but the shapes on the opposite side of the screen move instead (along the x axis). So if you try to move a shape at the bottom, a shape at the top moves. The top-right corner is (0,480) and the bottom-left (800,0). I've tried changing the numbers around in the view matrix, but it hasn't worked. Why is this happening?
I'm sure I've set my view and projection matrices correctly. Here they are:
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background clear color to gray.
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
GLES20.glFrontFace(GLES20.GL_CCW); // Counter-clockwise winding.
GLES20.glEnable(GLES20.GL_CULL_FACE);// Use culling to remove back faces.
GLES20.glCullFace(GLES20.GL_BACK);// What faces to remove with the face culling.
GLES20.glEnable(GLES20.GL_DEPTH_TEST);// Enable depth testing
// Position the eye behind the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -3.0f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = 0.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setIdentityM(mViewMatrix, 0);
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
// Sets the current view port to the new size.
GLES20.glViewport(0, 0, width, height);
float RATIO = (float) width / (float) height;
// this projection matrix is applied to object coordinates in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, -RATIO, RATIO, -1, 1, 1, 10);
Matrix.setIdentityM(mProjectionMatrix, 0);
}
Update
The view seems to render properly, and the shapes appear on screen where I want them if I translate them or change the vertex coordinates slightly. What's not right is how the touch events are registered. Any ideas?
This is how I check the touch events:
if(shapeW < minX){minX = shapeW;}
if(shapeW > maxX){maxX = shapeW;}
if(shapeH < minY){minY = shapeH;}
if(shapeH > maxY){maxY = shapeH;}
//Log.i("Min&Max" + (i / 4), String.valueOf(minX + ", " + maxX + ", " + minY + ", " + maxY));
if(minX < MyGLSurfaceView.touchedX && MyGLSurfaceView.touchedX < maxX && minY < MyGLSurfaceView.touchedY && MyGLSurfaceView.touchedY < maxY)
{
xAng[j] = xAngle;
yAng[j] = yAngle;
Log.i("cube "+j, " pressed");
}
From the origin, the z-axis is positive coming towards you and negative going away into the screen. So if my assumption is correct that your shapes are drawn in the z = 0 plane, your eye is actually positioned behind them. Hence if you move an object one way it appears to move the other way. Try using a positive value for eyeZ instead.
For example, eye = (0, 0, 3), look = (0, 0, 0) would position the eye out of the origin towards you looking down into the screen. In contrast, using eye = (0, 0, -3), look = (0, 0, 0) would put the eye into the screen looking back out of it.
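The mirroring can be verified numerically: a look-at matrix builds its right vector as normalize(cross(center − eye, up)), so moving the eye from −z to +z flips that vector's sign. A hedged sketch of this in plain Java (hypothetical `LookAtBasis` helper re-deriving the basis, not `android.opengl.Matrix` itself):

```java
// Computes the camera's right vector the way a look-at matrix does:
// right = normalize(cross(forward, up)), where forward = center - eye.
// (Illustrative re-derivation, not the Android platform code.)
public class LookAtBasis {
    public static float[] rightVector(float[] eye, float[] center, float[] up) {
        float fx = center[0] - eye[0], fy = center[1] - eye[1], fz = center[2] - eye[2];
        float flen = (float) Math.sqrt(fx * fx + fy * fy + fz * fz);
        fx /= flen; fy /= flen; fz /= flen;
        // cross(forward, up)
        float rx = fy * up[2] - fz * up[1];
        float ry = fz * up[0] - fx * up[2];
        float rz = fx * up[1] - fy * up[0];
        float rlen = (float) Math.sqrt(rx * rx + ry * ry + rz * rz);
        return new float[] { rx / rlen, ry / rlen, rz / rlen };
    }
}
```

With eye = (0, 0, −3) the right vector comes out as (−1, 0, 0), so world +x runs left on screen, which is exactly the mirrored touch behavior described. With eye = (0, 0, 3) it is (1, 0, 0).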
I am developing a box2d game on android and when the opengl camera follows the player the player jitters quite badly. When the camera is stationary, it appears to be fine. I tried box2d interpolation and that seemed to help slightly. Any suggestions?
public static void setCamera() {
// Position the eye behind the origin.
float eyeX = cameraX;
float eyeY = cameraY;
float eyeZ = cameraZoom;
// We are looking toward the distance
float lookX = cameraX;
float lookY = cameraY;
float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we
// holding the camera.
float upX = 0.0f;
float upY = 1.0f;
float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera
// position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination
// of a model and view matrix. In OpenGL 2, we can keep track of these
// matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY,
lookZ, upX, upY, upZ);
// Matrix.scaleM(mViewMatrix, 0, cameraZoom, cameraZoom, 0f);
// Matrix.orthoM(mProjectionMatrix, 0, left, right, top, bottom, near,
// far);
// Matrix.setLookAtM(mViewMatrix, 0, 1, 0, 1.0f, 1.0f, 0f, 0f, 0f, 1.0f,
// 0.0f);
}
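Since Box2D interpolation already helped slightly, one complementary technique is to decouple the camera from the raw physics position and ease it toward the player each frame, so single-frame position jumps are smoothed out. A hedged sketch with hypothetical field names (exponential smoothing, not the asker's actual camera code):

```java
// Exponential smoothing of the camera toward a follow target.
// Call once per rendered frame; alpha in (0, 1] controls stiffness
// (smaller = smoother but laggier). Field names are illustrative.
public class SmoothCamera {
    public float cameraX, cameraY;

    public void follow(float targetX, float targetY, float alpha) {
        cameraX += (targetX - cameraX) * alpha;
        cameraY += (targetY - cameraY) * alpha;
    }
}
```

The smoothed cameraX/cameraY would then feed the setLookAtM call above. This pairs well with a fixed physics timestep plus render interpolation, which attacks the other common cause of follow-camera jitter.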
I'm starting to learn OpenGL, and are using the following site: http://www.learnopengles.com/android-lesson-one-getting-started/
But it seems that I have a problem with this part (it works fine in portrait mode):
private float[] mViewMatrix = new float[16];
/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];
/** This will be used to pass in the transformation matrix. */
private int mMVPMatrixHandle;
/** This will be used to pass in model position information. */
private int mPositionHandle;
/** This will be used to pass in model color information. */
private int mColorHandle;
/** How many bytes per float. */
private final int mBytesPerFloat = 4;
/** How many elements per vertex. */
private final int mStrideBytes = 7 * mBytesPerFloat;
/** Size of the position data in elements. */
private final int mPositionDataSize = 3;
/** Offset of the color data. */
private final int mColorOffset = 3;
/** Size of the color data in elements. */
private final int mColorDataSize = 4;
public void onSurfaceChanged(GL10 gl, int width, int height)
{
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
System.out.println("Height: " + height);
System.out.println("Width: " + width);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
public void onDrawFrame(GL10 gl)
{
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = 1.5f;
final float lookY = 0.0f; // Y direction of what the user sees
final float lookZ = -5.0f; // Z direction of what the user sees
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
GLES20.glClearColor(red, green, blue, clearcoloralpha);
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, xax, lookY, lookZ, upX, upY, upZ);
// Draw the triangle facing straight on.
for(int i = 0; i < Triangles.size(); i++)
{
Matrix.setIdentityM(Triangles.get(i).getModelMatrix(), 0);
if(Triangles.get(i).Rotate())
{
Triangles.get(i).rotation = (360.0f / 10000.0f) * ((int) Triangles.get(i).last);
Triangles.get(i).last+=20;
//Rotates the matrix by rotation degrees
Matrix.rotateM(Triangles.get(i).getModelMatrix(), 0, Triangles.get(i).rotation, 0.0f, 0.0f, 1.0f);
}
else
Matrix.rotateM(Triangles.get(i).getModelMatrix(), 0, Triangles.get(i).rotation, 0.0f, 0.0f, 1.0f);
drawTriangle(Triangles.get(i).getFloatBuffer(),Triangles.get(i));
}
}
private void drawTriangle(final FloatBuffer aTriangleBuffer, Triangle tri)
{
aTriangleBuffer.position(0);
GLES20.glVertexAttribPointer(mPositionHandle, tri.DataSize, GLES20.GL_FLOAT, false, mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
aTriangleBuffer.position(3);
GLES20.glVertexAttribPointer(mColorHandle, tri.ColorDataSize, GLES20.GL_FLOAT, false, mStrideBytes, aTriangleBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, tri.getModelMatrix(), 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
}
But when I try to move a triangle (to the left or right) in landscape mode, the triangle gets "cut off" (the whole triangle is not displayed) when I move it too far to one side. It seems the triangles are treated as if they were outside the screen when they actually are not. As mentioned, it seems to work fine in portrait mode.
Height is 752 and Width 1280 in landscape mode (Galaxy Tab 2).
Does this have something to do with the Project Matrix which is set here?
Thanks for any help!
You were right, the problem was that you were moving your camera. :D
xax should have stayed at 0.0f:
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, xax, lookY, lookZ, upX, upY, upZ);