libgdx point light not working on generated mesh - android

I've been trying to illuminate plane meshes generated by the following:
private Model createPlane(float w, float h, Texture texture) {
    Mesh mesh = new Mesh(true, 4, 6,
            new VertexAttribute(Usage.Position, 3, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"),
            new VertexAttribute(Usage.Normal, 3, "a_normal"));
    float w2 = w * this.CELL_SIZE;
    float h2 = h * this.CELL_SIZE;
    mesh.setVertices(new float[] {
         w2, 0f,  h2,  0, 0,  0, 1, 0,
         w2, 0f, -h2,  0, h,  0, 1, 0,
        -w2, 0f,  h2,  w, 0,  0, 1, 0,
        -w2, 0f, -h2,  w, h,  0, 1, 0
    });
    mesh.setIndices(new short[] { 0, 1, 2, 1, 3, 2 });
    Model model = ModelBuilder.createFromMesh(mesh, GL10.GL_TRIANGLES,
            new Material(TextureAttribute.createDiffuse(texture)));
    return model;
}
The planes are then rendered using:
// the environment setup
env = new Environment();
env.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
env.add(new PointLight().set(Color.ORANGE, 5f, 1f, 5f, 10f));
env.add(new DirectionalLight().set(Color.WHITE, -1f, -0.8f, -0.2f));
...
// the render method
batch.begin();
batch.render(inst, env); // inst is a ModelInstance created from the Model returned by createPlane(...)
batch.end();
The meshes display correctly (UVs, textured) and seem to be properly affected by directional and ambient lighting.
When I add a point light (see the environment above), none of the planes generated from createPlane(...) are affected. I've tried creating another piece of geometry using the ModelBuilder class's createBox(...), and it responds to the point light properly. Because of that I'm assuming I'm not generating the plane correctly, but the fact that the plane is apparently affected by directional/ambient light is throwing me off a bit.
It's worth noting that the sizes of the generated planes vary. I'm not sure how much a point light would affect just 4 vertices, but I expected more than nothing. Moving the point light around (closer to certain vertices) doesn't do anything either.
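A note for readers: if the shader computes lighting per vertex, a point light is only evaluated at the plane's four corners, so a large quad shows almost no falloff in between; subdividing the plane gives the light vertices to act on. A minimal, untested sketch of such a subdivision, reusing the 8-float position/UV/normal layout from createPlane above (the helper name is hypothetical):

// Hypothetical helper: same vertex layout as createPlane, but split into
// n x n quads so per-vertex point lighting has vertices to work with.
private Mesh createSubdividedPlane(float w2, float h2, int n) {
    int vertsPerRow = n + 1;
    float[] verts = new float[vertsPerRow * vertsPerRow * 8];
    short[] indices = new short[n * n * 6];
    int v = 0;
    for (int row = 0; row <= n; row++) {
        for (int col = 0; col <= n; col++) {
            float u = col / (float) n, t = row / (float) n;
            verts[v++] = -w2 + 2f * w2 * u; // position x
            verts[v++] = 0f;                // position y
            verts[v++] = -h2 + 2f * h2 * t; // position z
            verts[v++] = u;                 // texcoord u
            verts[v++] = t;                 // texcoord v
            verts[v++] = 0f;                // normal x
            verts[v++] = 1f;                // normal y (plane faces +Y)
            verts[v++] = 0f;                // normal z
        }
    }
    int i = 0;
    for (int row = 0; row < n; row++) {
        for (int col = 0; col < n; col++) {
            short tl = (short) (row * vertsPerRow + col);
            short tr = (short) (tl + 1);
            short bl = (short) (tl + vertsPerRow);
            short br = (short) (bl + 1);
            indices[i++] = tl; indices[i++] = bl; indices[i++] = tr;
            indices[i++] = tr; indices[i++] = bl; indices[i++] = br;
        }
    }
    Mesh mesh = new Mesh(true, verts.length / 8, indices.length,
            new VertexAttribute(Usage.Position, 3, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"),
            new VertexAttribute(Usage.Normal, 3, "a_normal"));
    mesh.setVertices(verts);
    mesh.setIndices(indices);
    return mesh;
}

The resulting Mesh can be wrapped with ModelBuilder.createFromMesh exactly as in createPlane.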
Any help would be greatly appreciated.
Thanks!

It would be great to know which shader you are using. The default one? I'm not sure if they've fixed it already, but a while ago there was a bug where point lighting only worked on some devices (it had something to do with the manufacturer's OpenGL ES implementation). I personally fixed this by using my own shader.
Edit: This seems to be fixed
I checked the code I'm using. The problem was determining the correct name of the light uniform array in the shader.
I did it like this in the end:
// on some devices this was working
int u_lightPosition = program.getUniformLocation("u_lightPosition[0]");
int u_lightColors = program.getUniformLocation("u_lightColor[0]");
if (u_lightPosition < 0 && u_lightColors < 0) {
    // on others this was working
    u_lightPosition = program.getUniformLocation("u_lightPosition");
    u_lightColors = program.getUniformLocation("u_lightColor");
}
I hope this helps!

Related

OpenGL ES20 - Light a cube, how to get normals?

To add some lighting to my OpenGL ES 2.0 cube, I need to calculate the normals for each plane. I've found a "tutorial" on lighting, but it simply hard-codes the normals into the cube, which doesn't seem like the best option to me, since it appears limited.
So my approach to the cube is as follows:
private float[] mVertices = {
    -1, -1, -1, // bottom left back
     1, -1, -1, // bottom right back
     1,  1, -1, // top right back
    -1,  1, -1, // top left back
    -1, -1,  1, // bottom left front
     1, -1,  1, // bottom right front
     1,  1,  1, // top right front
    -1,  1,  1  // top left front
};
private float[] mColors = {
    0.6f, 0f, 0.6f,
    0.6f, 0f, 0.6f,
    0.6f, 0f, 0.6f,
    0.6f, 0f, 0.6f,
    0.8f, 0f, 0.6f,
    0.8f, 0f, 0.6f,
    0.8f, 0f, 0.6f,
    0.8f, 0f, 0.6f
};
private float[] mNormal = new float[?]; // ?
private short[] mIndices = {
    0, 4, 5,
    0, 5, 1,
    1, 5, 6,
    1, 6, 2,
    2, 6, 7,
    2, 7, 3,
    3, 7, 4,
    3, 4, 0,
    4, 7, 6,
    4, 6, 5,
    3, 0, 1,
    3, 1, 2
};
i.e. I have all 8 vertices defined, as well as indices describing how to combine them into triangles.
To get the normal matrix, I've read that I am supposed to invert the matrix, then transpose it. So I've tried this:
float[] mTempMatrix = new float[mVertices.length];
Matrix.invertM(mTempMatrix, 0, mVertices, 0);
Matrix.transposeM(mNormal, 0, mTempMatrix, 0);
But the result is always filled with zeros, and my cube stays black. How exactly am I supposed to calculate the normal matrix? Should it involve the model matrix? If so, where am I supposed to do this, since the only place they are really combined is in the shader? Is there another way to do this that would be more appropriate?
The issue you are facing is that you share vertices between cube faces. The normal is a vector that points orthogonally to the plane of the surface.
Consider the top/right/front vertex as an example (that you share with the front, right and top faces).
When used on the front face, the normal needs to point towards you as 0, 0, 1
When used on the right face, the normal needs to point to the right as 1, 0, 0
When used on the top face, the normal needs to point up as 0, 1, 0
So how to reconcile that?
You could set the normal for the vertex to point out from the corner, e.g. as (0.577, 0.577, 0.577), the normalized form of (1, 1, 1). That'll most likely give you an interesting lighting effect on the corner but is probably not what you're after.
The other solution is not to re-use the vertices between faces. So you'll need 24 (4 per side) instead of 8. You'll then have 3 versions of each vertex but each one of those now belongs to just one face, hence you can set the normal vector as perpendicular to that face. You'll also be able to set the color for just that face as well since you'll no longer be sharing the vertex with other faces.
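To make that concrete, here is a minimal sketch of the 24-vertex layout, following the question's array conventions; only two of the six faces are shown, and the arrays are hypothetical:

// 4 unique vertices per face, each with a normal perpendicular to that face.
// The remaining four faces follow the same pattern, for 24 vertices total.
private float[] mVertices = {
    // front face (z = 1)
    -1, -1,  1,   1, -1,  1,   1,  1,  1,  -1,  1,  1,
    // right face (x = 1)
     1, -1,  1,   1, -1, -1,   1,  1, -1,   1,  1,  1,
    // ... back, left, top, bottom
};
private float[] mNormals = {
    // front face: all four normals point toward +Z
    0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1,
    // right face: all four normals point toward +X
    1, 0, 0,   1, 0, 0,   1, 0, 0,   1, 0, 0,
    // ...
};
private short[] mIndices = {
    0, 1, 2,   0, 2, 3,  // front
    4, 5, 6,   4, 6, 7,  // right
    // ...
};

As for the normal matrix the question asks about: it is derived from the 4x4 model-view matrix, not from the vertex array (which is why invertM produced nothing useful above). A sketch, assuming mModelViewMatrix holds your combined 4x4 model-view matrix:

// Normal matrix = transpose of the inverse of the model-view matrix.
float[] mTempMatrix = new float[16];
float[] mNormalMatrix = new float[16];
Matrix.invertM(mTempMatrix, 0, mModelViewMatrix, 0);
Matrix.transposeM(mNormalMatrix, 0, mTempMatrix, 0);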

Is Google's Android OpenGL tutorial teaching incorrect linear algebra?

After helping another user with a question regarding the Responding to Touch Events Android tutorial, I downloaded the source code and was quite baffled by what I saw. The tutorial can't seem to decide whether it wants to use row vectors or column vectors, and it looks all mixed up to me.
On the Android Matrix page, they claim that their convention is column-vector/column-major, which is typical of OpenGL.
Am I right, or is there something I am missing? Here are the relevant bits of it:
They start out by creating an MVPMatrix by multiplying mProjMatrix * mVMatrix. So far so good.
// Set the camera position (View matrix)
Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
Next they are appending a rotation to the left hand side of the MVPMatrix? This seems a little weird.
// Create a rotation for the triangle
Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, -1.0f);
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);
Uploading in non-transposed order.
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
Finally in their shader, a vector*matrix multiplication?
// the matrix must be included as a modifier of gl_Position
" gl_Position = vPosition * uMVPMatrix;"
Adding this all together, we get:
gl_Position = vPosition * mRotation * mProjection * mView;
Which is not correct by any stretch of my imagination. Is there any explanation that I'm not seeing as to what's going on here?
As the guy who wrote that OpenGL tutorial, I can confirm that the example code is incorrect. Specifically, the order of the factors in the shader code should be reversed:
" gl_Position = uMVPMatrix * vPosition;"
As to the application of the rotation matrix, the order of the factors should also be reversed so that the rotation is the last factor. The rule of thumb is that matrices are applied in right-to-left order, and the rotation is applied first (it's the "M" part of "MVP"), so it needs to be the rightmost operand. Furthermore, you should use a scratch matrix for this calculation, as recommended by Ian Ni-Lewis (see his more complete answer below):
float[] scratch = new float[16];
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);
Thanks for calling attention to this problem. I'll get the training class and sample code fixed as soon as I can.
Edit: This issue has now been corrected in the downloadable sample code and the OpenGL ES training class, including comments on the correct order of the factors. Thanks for the feedback, folks!
The tutorial is incorrect, but many of the mistakes either cancel each other out or are not obvious in this very limited context (fixed camera centered at (0,0), rotation around Z only). The rotation is backwards, but otherwise it kind of looks right. (To see why it's wrong, try a less trivial camera: set the eye and lookAt to y=1, for instance.)
One of the things that made this very hard to debug is that the Matrix methods don't do any alias detection on their inputs. The tutorial code makes it seem like you can call Matrix.multiplyMM with the same matrix used as both an input and the result. This isn't true. But because the implementation multiplies a column at a time, it's far less obvious that something is wrong if the right hand side is reused (as in the current code, where mMVPMatrix is the rhs and the result) than if the left hand side is reused. Each column on the left is read before the corresponding column in the result is written, so the output will be correct even if the LHS is overwritten. But if the right-hand side is the same as the result, then its first column will be overwritten before it's finished being read.
So the tutorial code is at a sort of local maximum: it seems like it works, and if you change any one thing, it breaks spectacularly. Which leads one to believe that wrong as it looks, it might just be correct. ;-)
Anyway, here's some replacement code that gets what I think is the intended result.
Java code:
@Override
public void onDrawFrame(GL10 unused) {
    float[] scratch = new float[16];

    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);

    // Draw square
    mSquare.draw(mMVPMatrix);

    // Create a rotation for the triangle
    Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, 1.0f);

    // Combine the rotation matrix with the projection and camera view
    Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0);

    // Draw triangle
    mTriangle.draw(scratch);
}
Shader code:
gl_Position = uMVPMatrix * vPosition;
NB: these fixes make the projection correct, but they also reverse the direction of rotation. That's because the original code applied the transformations in the wrong order. Think of it this way: instead of rotating the object clockwise, it was rotating the camera counterclockwise. When you fix the order of operations so that the rotation is applied to the object instead of the camera, then the object starts going counterclockwise. It's not the matrix that's wrong; it's the angle that was used to create the matrix.
So to get the 'correct' result, you also need to flip the sign of mAngle.
I solved this problem as follows:
@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -1f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    Matrix.setRotateM(mModelMatrix, 0, mAngle, 0, 0, 1.0f);
    Matrix.translateM(mModelMatrix, 0, 0.4f, 0.0f, 0);
    mSquare.draw(mProjMatrix, mViewMatrix, mModelMatrix);
}

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    ...
    Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 1, 99);
}

class Square {
    private final String vertexShaderCode =
        "uniform mat4 uPMatrix;    \n" +
        "uniform mat4 uVMatrix;    \n" +
        "uniform mat4 uMMatrix;    \n" +
        "attribute vec4 vPosition; \n" +
        "void main() { \n" +
        "    gl_Position = uPMatrix * uVMatrix * uMMatrix * vPosition; \n" +
        "} \n";
    ...
    public void draw(float[] mpMatrix, float[] mvMatrix, float[] mmMatrix) {
        ...
        mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix");
        mVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uVMatrix");
        mMMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMMatrix");
        GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, mpMatrix, 0);
        GLES20.glUniformMatrix4fv(mVMatrixHandle, 1, false, mvMatrix, 0);
        GLES20.glUniformMatrix4fv(mMMatrixHandle, 1, false, mmMatrix, 0);
        ...
    }
}
I'm working on the same issue, and here's what I found:
I believe that Joe's sample is CORRECT, including the order of the factors in the shader code:
gl_Position = vPosition * uMVPMatrix;
To verify it, just try to rotate the triangle with the factor order reversed; it will stretch the triangle to a vanishing point at 90 degrees.
The real problem seems to be in the setLookAtM function. In Joe's sample the parameters are:
Matrix.setLookAtM(mVMatrix, 0, 0f, 0f, -3f, 0f, 0f, 0f, 0f, 1f, 0f);
which is perfectly logical as well.
However, the resulting view matrix looks weird to me:
-1  0  0  0
 0  1  0  0
 0  0 -1  0
 0  0 -3  1
As we can see, this matrix inverts the X coordinate, since the first element is -1, which leads to a left/right flip on the screen. It also reverses the Z order, but let's focus on the X coordinate here.
I think the setLookAtM function is also working correctly. However, since the Matrix class is NOT part of OpenGL, it may use some other coordinate system, for example regular screen coordinates with the Y axis pointing down. This is just a guess; I didn't really verify it.
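One way to check that guess is to dump the matrix setLookAtM produces. A minimal sketch; android.opengl.Matrix stores matrices in column-major order, so each printed line below is one column, matching the 4x4 dump above:

// Print the view matrix produced by setLookAtM, one column per line.
float[] m = new float[16];
Matrix.setLookAtM(m, 0, 0f, 0f, -3f, 0f, 0f, 0f, 0f, 1f, 0f);
for (int col = 0; col < 4; col++) {
    android.util.Log.d("ViewMatrix", String.format("%8.3f %8.3f %8.3f %8.3f",
            m[col * 4], m[col * 4 + 1], m[col * 4 + 2], m[col * 4 + 3]));
}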
Possible solutions: we can build the desired view matrix manually:
Matrix.setIdentityM(mVMatrix, 0);
mVMatrix[14] = -3f;
OR we can try to trick the setLookAtM function by giving it reversed camera coordinates, 0, 0, +3 (instead of -3):
Matrix.setLookAtM(mVMatrix, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f);
The resulting view matrix will be:
1  0  0  0
0  1  0  0
0  0  1  0
0  0 -3  1
That's exactly what we need. Now the camera behaves as expected, and the sample works correctly.
No other suggestions worked for me with the current updated Android example code, except for the following when trying to move the triangle.
This link contains the answer; it took over a day to locate, and I'm posting it here to help others since I've seen this post many times: OpenGL ES Android Matrix Transformations

opengl es position after glRotatef and glTranslatef

I'm new to OpenGL ES. Can you help me calculate the world coordinates of a cube after a rotate and a translate? For example:
first I rotate the cube:
gl.glRotatef(90, 1, 0, 0);
then change its position:
gl.glTranslatef(10, 0, 0);
How can I calculate its "new" world coordinates? I read about glGetFloatv(GL_MODELVIEW_MATRIX, matrix) but don't understand it. Maybe someone can provide sample code.
EDIT:
I found a solution. Android code:
float[] matrix = new float[] {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};
Matrix.rotateM(matrix, 0, rx, 1, 0, 0);
Matrix.rotateM(matrix, 0, ry, 0, 1, 0);
Matrix.rotateM(matrix, 0, rz, 0, 0, 1);
Matrix.translateM(matrix, 0, x, y, z);
x = matrix[12];
y = matrix[13];
z = matrix[14];
Thanks for answers.
Although you have an answer for the part you want, in terms of the rest of your question, you'd do something like (please forgive me if I make any Java errors, I'm not really an Android person):
float[] matrix = new float[16];
((GL11) gl).glGetFloatv(GL11.GL_MODELVIEW_MATRIX, matrix, 0);
// check out matrix[12], matrix[13] and matrix[14] now for the world location
// that (0, 0, 0) [, 1)] would be mapped to
That glGetFloatv just reads back the current value of the modelview matrix into the float array specified. In OpenGL, 4x4 matrices are laid out so that index 0 is the top left, index 3 is the bottom of the first column, and index 12 is the rightmost entry of the first row. That's usually referred to as column-major layout, though the OpenGL FAQ isn't entirely happy with the term.
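Once you have the matrix, you can also map any local-space point (not just the origin) to world space with a matrix-vector multiply. A minimal sketch using android.opengl.Matrix, assuming matrix holds the 16 values read back above:

// Transform a local point by the modelview matrix to get its world position.
float[] local = { 1f, 2f, 3f, 1f }; // w = 1 for a position
float[] world = new float[4];
Matrix.multiplyMV(world, 0, matrix, 0, local, 0);
// world[0], world[1], world[2] now hold the transformed x, y, z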

Android OpenGL 3D picking

I'm on Android OpenGL ES 2.0 and, after all the limitations that come with it, I can't figure out how to map 2D screen touches to the 3D points I have. I can't get the right results.
I'm trying to implement shooting a ray into the point cloud, which I can then compare distances of my points to the ray, finding the closest point.
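For reference, once the ray is correct, the point-to-ray distance test itself is straightforward. A minimal sketch with hypothetical names (near and far are the unprojected ray endpoints, p is a cloud point):

// Distance from point p to the ray through near and far:
// |d x (p - near)| / |d|, where d = far - near.
float[] d = { far[0] - near[0], far[1] - near[1], far[2] - near[2] };
float[] v = { p[0] - near[0], p[1] - near[1], p[2] - near[2] };
float cx = d[1] * v[2] - d[2] * v[1];
float cy = d[2] * v[0] - d[0] * v[2];
float cz = d[0] * v[1] - d[1] * v[0];
float dLen = (float) Math.sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
float dist = (float) Math.sqrt(cx * cx + cy * cy + cz * cz) / dLen;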
public class OpenGLRenderer extends Activity implements GLSurfaceView.Renderer {
    public PointCloud ptCloud;
    MatrixGrabber mg = new MatrixGrabber();
    ...
    public void onDrawFrame(GL10 gl) {
        gl.glDisable(GL10.GL_COLOR_MATERIAL);
        gl.glDisable(GL10.GL_BLEND);
        gl.glDisable(GL10.GL_LIGHTING);

        // Background drawing
        if (customBackground)
            gl.glClearColor(backgroundRed, backgroundGreen, backgroundBlue, 1.0f);
        else
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

        if (PointCloud.doneParsing == true) {
            if (envDone == false)
                setupEnvironment();

            // Clears the screen and depth buffer.
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 55.0f, (float) screenWidth / (float) screenHeight, 10.0f, 10000.0f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
            GLU.gluLookAt(gl, eyeX, eyeY, eyeZ,
                    centerX, centerY, centerZ,
                    upX, upY, upZ);

            if (pickPointTrigger)
                pickPoint(gl);

            gl.glPushMatrix();
            gl.glTranslatef(_xTranslate, _yTranslate, _zTranslate);
            gl.glTranslatef(centerX, centerY, centerZ);
            gl.glRotatef(_xAngle, 1f, 0f, 0f);
            gl.glRotatef(_yAngle, 0f, 1f, 0f);
            gl.glRotatef(_zAngle, 0f, 0f, 1f);
            gl.glTranslatef(-centerX, -centerY, -centerZ);
            ptCloud.draw(gl);
            gl.glPopMatrix();
        }
    }
}
Here is my picking function. I've set the location to the middle of the screen just for debugging purposes:
public void pickPoint(GL10 gl) {
    mg.getCurrentState(gl);
    double mvmatrix[] = new double[16];
    double projmatrix[] = new double[16];
    int viewport[] = { 0, 0, screenWidth, screenHeight };
    for (int i = 0; i < 16; i++) {
        mvmatrix[i] = mg.mModelView[i];
        projmatrix[i] = mg.mProjection[i];
    }
    mg.getCurrentState(gl);

    float realY = ((float) (screenHeight) - pickY);
    float nearCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float farCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };

    GLU.gluUnProject(screenWidth / 2, screenHeight / 2, 0.0f, mg.mModelView, 0, mg.mProjection, 0,
            viewport, 0, nearCoords, 0);
    GLU.gluUnProject(screenWidth / 2, screenHeight / 2, 1.0f, mg.mModelView, 0, mg.mProjection, 0,
            viewport, 0, farCoords, 0);

    System.out.println("Near: " + nearCoords[0] + "," + nearCoords[1] + "," + nearCoords[2]);
    System.out.println("Far: " + farCoords[0] + "," + farCoords[1] + "," + farCoords[2]);

    // Plot the points in the scene
    nearMarker.set(nearCoords);
    farMarker.set(farCoords);
    markerOn = true;

    double diffX = nearCoords[0] - farCoords[0];
    double diffY = nearCoords[1] - farCoords[1];
    double diffZ = nearCoords[2] - farCoords[2];
    double rayLength = Math.sqrt(Math.pow(diffX, 2) + Math.pow(diffY, 2) + Math.pow(diffZ, 2));
    System.out.println("rayLength: " + rayLength);

    pickPointTrigger = false;
}
Changing the perspective zNear and zFar doesn't have the expected results; how could the far point of a 10.0-1000.0 perspective be only 11 units away?
GLU.gluPerspective(gl, 55.0f, (float) screenWidth / (float) screenHeight, 1.0f, 100.0f);
.....
07-18 11:23:50.430: INFO/System.out(31795): Near: 57.574852,-88.60514,37.272636
07-18 11:23:50.430: INFO/System.out(31795): Far: 0.57574844,0.098602295,0.2700405
07-18 11:23:50.430: INFO/System.out(31795): rayLength: 111.74275719790872
GLU.gluPerspective(gl, 55.0f, (float) width / (float) height, 10.0f , 1000.0f);
...
07-18 11:25:12.420: INFO/System.out(31847): Near: 5.7575016,-7.965394,3.6339219
07-18 11:25:12.420: INFO/System.out(31847): Far: 0.057574987,0.90500546,-0.06634784
07-18 11:25:12.420: INFO/System.out(31847): rayLength: 11.174307289026638
Looking for any suggestions or, hopefully, bugs you can see in my code. Much appreciated. I'm bountying as much as I can (this has been a problem for a while).
I'm working on this too; it's a very irritating problem. I have two potential leads:
1. Somehow, the resulting z depends on where the camera is, and not in the way you'd expect. When the camera z is at 0, the resulting z is -1, no matter what winZ is. Up until now I've mainly been looking at the resulting z, so I don't have exact figures on the other coordinates, but I messed around with my code and your code just now, and I've discovered that the reported ray length increases the farther the camera gets from (0,0,0). At (0,0,0), the ray length is reported to be 0. An hour or so ago, I gathered a bunch of points (cameraZ, winZ, resultZ) and plugged them into Mathematica. The result seems to indicate a hyperbolic sort of thing: with one of the variables fixed, the other causes the resulting z to vary linearly, with the rate of change depending on the fixed variable.
2. My second lead is from http://www.gamedev.net/topic/420427-gluunproject-question/, where swordfish quotes a formula:
WinZ = (1.0f/fNear-1.0f/fDistance)/(1.0f/fNear-1.0f/fFar)
Now, this doesn't seem to match up with the data I collected, but it's probably worth a look. I think I'm going to see if I can figure out how the math of this thing works and figure out what's wrong. Let me know if you figure anything out. Oh, also, here's the formula fitted to the data I collected:
-0.11072114015496763 - 10.000231721597817 x - 0.0003149873867479971 x^2 - 0.8633277851535017 y + 9.990256062051143 x y + 8.767260632968973*^-9 y^2
Wolfram Alpha plots it like so:
http://www.wolframalpha.com/input/?i=Plot3D[-0.11072114015496763%60+-+10.000231721597817%60+x+-++++0.0003149873867479971%60+x^2+-+0.8633277851535017%60+y+%2B++++9.990256062051143%60+x+y+%2B+8.767260632968973%60*^-9+y^2+%2C+{x%2C+-15%2C++++15}%2C+{y%2C+0%2C+1}]
AHA! Success! As near as I can tell, gluUnProject is just plain broken. Or, nobody understands how to use it at all. Anyway, I made a function that properly undoes the gluProject function, which appears to really be what they use to draw to the screen in some fashion! Code is as follows:
public float[] unproject(float rx, float ry, float rz) { // TODO: factor in projection matrix
    float[] modelInv = new float[16];
    if (!android.opengl.Matrix.invertM(modelInv, 0, mg.mModelView, 0))
        throw new IllegalArgumentException("ModelView is not invertible.");
    float[] projInv = new float[16];
    if (!android.opengl.Matrix.invertM(projInv, 0, mg.mProjection, 0))
        throw new IllegalArgumentException("Projection is not invertible.");
    float[] combo = new float[16];
    android.opengl.Matrix.multiplyMM(combo, 0, modelInv, 0, projInv, 0);
    float[] result = new float[4];
    float vx = viewport[0];
    float vy = viewport[1];
    float vw = viewport[2];
    float vh = viewport[3];
    float[] rhsVec = { ((2 * (rx - vx)) / vw) - 1, ((2 * (ry - vy)) / vh) - 1, 2 * rz - 1, 1 };
    android.opengl.Matrix.multiplyMV(result, 0, combo, 0, rhsVec, 0);
    float d = 1 / result[3];
    float[] endResult = { result[0] * d, result[1] * d, result[2] * d };
    return endResult;
}

public float distanceToDepth(float distance) {
    return ((1 / fNear) - (1 / distance)) / ((1 / fNear) - (1 / fFar));
}
It currently assumes the following global variables:
mg - a MatrixGrabber with current matrices
viewport - a float[4] with the viewport ({x, y, width, height})
The variables it takes are equivalent to the ones that gluUnProject was supposed to take. For example:
float[] xyz = {0, 0, 0};
xyz = unproject(mouseX, viewport[3] - mouseY, 1);
This will return the point under the mouse, on the far plane. I also added a function to convert between a specified distance from the camera and its 0-1...representation...thing. Like so:
unproject(mouseX, viewport[3] - mouseY, distanceToDepth(5));
This will return the point under the mouse 5 units from the camera.
I tested this with the method given in the question - I checked the distance between the near plane and the far plane. With fNear of 0.1 and fFar of 100, the distance should be 99.9. I have consistently gotten about 99.8977, regardless of position or orientation of the camera, as far as I can tell. Haha, good to have that figured out. Let me know if you do/don't have any problems with it, or if you want me to rewrite it to take inputs instead of using global variables. Hopefully this helps a few people; I had been wondering about this for a few days before seriously trying to fix it.
Hey, so, having figured out how it's supposed to be, I've figured out what they missed in implementing gluUnProject. They forgot (intended not to and didn't tell anyone?) to divide by the fourth element of the resulting vector, which kinda normalizes the vector or something like that. gluProject sets it to 1 before applying matrices, so it needs to be 1 when you're done undoing them. Long story short, you can actually use gluUnProject, but you need to pass it a float[4], and then divide all the resulting coordinates by the 4th one, like so:
float[] xyzw = {0, 0, 0, 0};
android.opengl.GLU.gluUnProject(rx, ry, rz, mg.mModelView, 0, mg.mProjection, 0, this.viewport, 0, xyzw, 0);
xyzw[0] /= xyzw[3];
xyzw[1] /= xyzw[3];
xyzw[2] /= xyzw[3];
//xyzw[3] /= xyzw[3];
xyzw[3] = 1;
return xyzw;
xyzw should now contain the relevant space coordinates. This seems to work exactly the same as the one I cobbled together. It might be a little bit faster; I think they combined one of the steps.

What differences in OpenGl rendering are there between the Atrix 4G and other Android phones?

The reason I'm asking this is that our app (The Elements) runs fine on a Droid and a Nexus One, our two test phones, but not correctly on our recently acquired Atrix 4G. What draws is a skewed version of what should draw, with the colors replaced by alternating lines of (approximately) cyan, magenta, and yellow, which leads us to believe that one of the primary color components of the sand particles is missing depending on which line it's on. Sorry for the unclear description; we had images, but since this account doesn't have 10 reputation we couldn't post them.
Here is the code of our gl.c file, which does the texturing and rendering:
/*
 * gl.c
 * --------------------------
 * Defines the gl rendering and initialization
 * functions appInit, appDeinit, and appRender.
 */
#include "gl.h"
#include <stdlib.h>
#include <android/log.h>

unsigned int textureID;
float vertices[] =
    { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f };
float texture[] =
    { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f };
unsigned char indices[] =
    { 0, 1, 3, 0, 3, 2 };
int texWidth = 1, texHeight = 1;
void glInit()
{
    //Set some properties
    glShadeModel(GL_FLAT);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);

    //Generate the new texture
    glGenTextures(1, &textureID);
    //Bind the texture
    glBindTexture(GL_TEXTURE_2D, textureID);
    //Enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    //Disable depth testing
    glDisable(GL_DEPTH_TEST);
    //Enable the vertex and coord arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //Set tex params
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    //Set up texWidth and texHeight (powers of two), allocate a dummy
    //pixel array, and create the initial empty texture image
    while (texWidth < workWidth) texWidth *= 2;
    while (texHeight < workHeight) texHeight *= 2;
    unsigned char *emptyPixels = calloc(texWidth * texHeight * 3, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, emptyPixels);
    //Free the dummy array
    free(emptyPixels);

    //Set the pointers
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texture);
}
void glRender()
{
    //Check for changes in screen dimensions or work dimensions and handle them
    if (dimensionsChanged)
    {
        vertices[2] = (float) screenWidth;
        vertices[5] = (float) screenHeight;
        vertices[6] = (float) screenWidth;
        vertices[7] = (float) screenHeight;
        texture[2] = (float) workWidth / texWidth;
        texture[5] = (float) workHeight / texHeight;
        texture[6] = (float) workWidth / texWidth;
        texture[7] = (float) workHeight / texHeight;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        if (!flipped)
        {
            glOrthof(0, screenWidth, screenHeight, 0, -1, 1); //--Device
        }
        else
        {
            glOrthof(0, screenWidth, 0, -screenHeight, -1, 1); //--Emulator
        }
        dimensionsChanged = FALSE;
        zoomChanged = FALSE;
    }
    else if (zoomChanged)
    {
        texture[2] = (float) workWidth / texWidth;
        texture[5] = (float) workHeight / texHeight;
        texture[6] = (float) workWidth / texWidth;
        texture[7] = (float) workHeight / texHeight;
        zoomChanged = FALSE;
    }

    //__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview begin");
    UpdateView();
    //__android_log_write(ANDROID_LOG_INFO, "TheElements", "updateview end");

    //Clear the screen
    glClear(GL_COLOR_BUFFER_BIT);
    //Sub the work portion of the tex
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, workWidth, workHeight, GL_RGB, GL_UNSIGNED_BYTE, colors);
    //Actually draw the rectangle with the texture on it
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
}
Any ideas as to what the difference is between the Atrix 4G and other phones in terms of OpenGL or why our app is doing what it is in general are much appreciated! Thanks in advance.
Here is an example of what it looks like: http://imgur.com/Oyw64
I know it's a bit late to reply, but the real reason you're seeing the mixed colors is OpenGL's scanline unpack alignment -- not any driver bug or power-of-two texture issue.
Your glTexSubImage2D call is sending data in GL_RGB format, so I'm guessing your colors buffer is 3 bytes per pixel. Odds are the Droid and Nexus One drivers use an unpack alignment of 1, but the Tegra 2 defaults to an alignment of 4. This means your 3-byte-per-pixel rows can become misaligned with what the driver expects after every scanline, and a byte or two will be skipped before the next scanline starts, resulting in the colors you see. For example, a 101-pixel-wide RGB row is 303 bytes; with 4-byte alignment the driver expects the next row to begin 304 bytes in, so one byte drifts per scanline. The reason this works with a power-of-two sized texture is that each row is then already a multiple of 4 bytes, so the buffer just happens to be aligned properly for the next scanline. Basically this is the same issue as loading BMPs, where each scanline has to be padded to 4 bytes, regardless of the bit depth of the image.
You can explicitly disable the alignment padding by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1);. Note that changing this only affects the way OpenGL interprets your texture data, so there is no rendering performance penalty. When the texture is sent to the graphics subsystem, the scanlines are stored in whatever format is optimal for the hardware, but the driver still has to know how to unpack your data properly. However, since you are changing the texture data every frame, instead of the driver being able to do one memcpy() to upload the entire texture, it will have to do TextureHeight memcpy() calls to upload it. This is not likely to be a major bottleneck, but if you are looking for the best performance, you may want to query the driver's default unpack alignment on startup using glGetIntegerv(GL_UNPACK_ALIGNMENT, &align); and pad your buffer rows accordingly at runtime.
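In this project the upload happens in native code, but for illustration, the equivalent one-line fix from the Java side of an Android GL context would look something like this (a sketch, assuming a GL10 instance named gl, set once during GL initialization):

// Tell the driver that rows in client memory are tightly packed
// (1-byte aligned), so 3-byte RGB rows of any width upload correctly.
gl.glPixelStorei(GL10.GL_UNPACK_ALIGNMENT, 1);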
Here's the specification on glPixelStorei() for reference.
OK, we finally found the actual problem. It turns out that glTexSubImage2D() actually requires the WIDTH to be a power of two, but not the height, on some GPUs including the Tegra 2. We thought only the full texture needed power-of-two dimensions, and that's where we were wrong. We're going to have to do a bit of recoding, but hopefully this will work out in the end (AT LAST!!).
The Atrix 4G is the first prominent phone that uses Nvidia's Tegra GPU. As such, it has an entirely different OpenGL implementation than previous Android devices. Either you are observing a bug in the Tegra hardware+software combination, or your application was relying on undefined behavior and you were getting lucky on other devices.
You may want to file a bug report with Nvidia.
