I'm on Android OpenGL-ES 2.0, and after all the limitations that come with it, I can't figure out how to map 2D screen touches to the 3D points I have. I can't get the right results.
I'm trying to implement shooting a ray into the point cloud, so I can compare my points' distances to the ray and find the closest point.
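Once I have a correct ray, the closest-point test itself is straightforward; here's a quick sketch of the point-to-ray distance I plan to use (plain vector math, and the helper name is just mine):
float distancePointToRay(float[] p, float[] a, float[] b) {
    // Distance from point p to the ray through a (near point) and b (far point),
    // using |(p - a) x (b - a)| / |b - a|.
    float dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
    float px = p[0] - a[0], py = p[1] - a[1], pz = p[2] - a[2];
    float cx = py * dz - pz * dy;   // cross product (p - a) x (b - a)
    float cy = pz * dx - px * dz;
    float cz = px * dy - py * dx;
    return (float) (Math.sqrt(cx * cx + cy * cy + cz * cz)
            / Math.sqrt(dx * dx + dy * dy + dz * dz));
}
So the real problem is getting the ray itself right. Here's my renderer: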
public class OpenGLRenderer extends Activity implements GLSurfaceView.Renderer {
    public PointCloud ptCloud;
    MatrixGrabber mg = new MatrixGrabber();
    ...

    public void onDrawFrame(GL10 gl) {
        gl.glDisable(GL10.GL_COLOR_MATERIAL);
        gl.glDisable(GL10.GL_BLEND);
        gl.glDisable(GL10.GL_LIGHTING);

        // Background drawing
        if (customBackground)
            gl.glClearColor(backgroundRed, backgroundGreen, backgroundBlue, 1.0f);
        else
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

        if (PointCloud.doneParsing == true) {
            if (envDone == false)
                setupEnvironment();

            // Clears the screen and depth buffer.
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 55.0f, (float) screenWidth / (float) screenHeight, 10.0f, 10000.0f);

            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
            GLU.gluLookAt(gl, eyeX, eyeY, eyeZ,
                          centerX, centerY, centerZ,
                          upX, upY, upZ);

            if (pickPointTrigger)
                pickPoint(gl);

            gl.glPushMatrix();
            gl.glTranslatef(_xTranslate, _yTranslate, _zTranslate);
            gl.glTranslatef(centerX, centerY, centerZ);
            gl.glRotatef(_xAngle, 1f, 0f, 0f);
            gl.glRotatef(_yAngle, 0f, 1f, 0f);
            gl.glRotatef(_zAngle, 0f, 0f, 1f);
            gl.glTranslatef(-centerX, -centerY, -centerZ);
            ptCloud.draw(gl);
            gl.glPopMatrix();
        }
    }
}
Here is my picking function. I've set the location to the middle of the screen just for debugging purposes:
public void pickPoint(GL10 gl) {
    mg.getCurrentState(gl);

    double mvmatrix[] = new double[16];
    double projmatrix[] = new double[16];
    int viewport[] = {0, 0, screenWidth, screenHeight};
    for (int i = 0; i < 16; i++) {
        mvmatrix[i] = mg.mModelView[i];
        projmatrix[i] = mg.mProjection[i];
    }

    float realY = ((float) (screenHeight) - pickY);
    float nearCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };
    float farCoords[] = { 0.0f, 0.0f, 0.0f, 0.0f };

    GLU.gluUnProject(screenWidth / 2, screenHeight / 2, 0.0f, mg.mModelView, 0, mg.mProjection, 0,
                     viewport, 0, nearCoords, 0);
    GLU.gluUnProject(screenWidth / 2, screenHeight / 2, 1.0f, mg.mModelView, 0, mg.mProjection, 0,
                     viewport, 0, farCoords, 0);

    System.out.println("Near: " + nearCoords[0] + "," + nearCoords[1] + "," + nearCoords[2]);
    System.out.println("Far: " + farCoords[0] + "," + farCoords[1] + "," + farCoords[2]);

    // Plot the points in the scene
    nearMarker.set(nearCoords);
    farMarker.set(farCoords);
    markerOn = true;

    double diffX = nearCoords[0] - farCoords[0];
    double diffY = nearCoords[1] - farCoords[1];
    double diffZ = nearCoords[2] - farCoords[2];
    double rayLength = Math.sqrt(Math.pow(diffX, 2) + Math.pow(diffY, 2) + Math.pow(diffZ, 2));
    System.out.println("rayLength: " + rayLength);

    pickPointTrigger = false;
}
Changing the perspective zNear and zFar doesn't have the expected results; how could the far point of a 10.0-1000.0 perspective be 11 units away?
GLU.gluPerspective(gl, 55.0f, (float) screenWidth / (float) screenHeight, 1.0f ,100.0f);
.....
07-18 11:23:50.430: INFO/System.out(31795): Near: 57.574852,-88.60514,37.272636
07-18 11:23:50.430: INFO/System.out(31795): Far: 0.57574844,0.098602295,0.2700405
07-18 11:23:50.430: INFO/System.out(31795): rayLength: 111.74275719790872
GLU.gluPerspective(gl, 55.0f, (float) width / (float) height, 10.0f , 1000.0f);
...
07-18 11:25:12.420: INFO/System.out(31847): Near: 5.7575016,-7.965394,3.6339219
07-18 11:25:12.420: INFO/System.out(31847): Far: 0.057574987,0.90500546,-0.06634784
07-18 11:25:12.420: INFO/System.out(31847): rayLength: 11.174307289026638
Looking for any suggestions, or hopefully bugs you see in my code. Much appreciated. I'm bountying as much as I can (this has been a problem for a while).
I'm working on this, too - it's a very irritating problem. I have two potential leads.
1. Somehow, the resulting z depends on where the camera is, and not in the way you'd expect. When the camera z is at 0, the resulting z is -1, no matter what winZ is. Up until now I've mainly been looking at the resulting z, so I don't have exact figures on the other coordinates, but I messed around with my code and your code just now, and I've discovered that the reported ray length increases the farther the camera gets from (0,0,0); at (0,0,0), the ray length is reported to be 0. An hour or so ago, I gathered a bunch of points (cameraZ, winZ, resultZ) and plugged them into Mathematica. The result seems to indicate a hyperbolic relationship: with one of the variables fixed, the other causes the resulting z to vary linearly, with the rate of change depending on the fixed variable.
2. My second lead is from http://www.gamedev.net/topic/420427-gluunproject-question/; swordfish quotes a formula:
WinZ = (1.0f/fNear-1.0f/fDistance)/(1.0f/fNear-1.0f/fFar)
Now, this doesn't seem to match up with the data I collected, but it's probably worth a look. I'm going to see if I can work out how the math of this thing works and find what's wrong. Let me know if you figure anything out. Oh, also, here's the formula fitted to the data I collected:
-0.11072114015496763 - 10.000231721597817 x - 0.0003149873867479971 x^2 - 0.8633277851535017 y + 9.990256062051143 x y + 8.767260632968973e-9 y^2
Wolfram Alpha plots it like so:
http://www.wolframalpha.com/input/?i=Plot3D[-0.11072114015496763%60+-+10.000231721597817%60+x+-++++0.0003149873867479971%60+x^2+-+0.8633277851535017%60+y+%2B++++9.990256062051143%60+x+y+%2B+8.767260632968973%60*^-9+y^2+%2C+{x%2C+-15%2C++++15}%2C+{y%2C+0%2C+1}]
AHA! Success! As near as I can tell, gluUnProject is just plain broken. Or, nobody understands how to use it at all. Anyway, I made a function that properly undoes the gluProject function, which appears to really be what they use to draw to the screen in some fashion! Code is as follows:
public float[] unproject(float rx, float ry, float rz) {
    float[] modelInv = new float[16];
    if (!android.opengl.Matrix.invertM(modelInv, 0, mg.mModelView, 0))
        throw new IllegalArgumentException("ModelView is not invertible.");
    float[] projInv = new float[16];
    if (!android.opengl.Matrix.invertM(projInv, 0, mg.mProjection, 0))
        throw new IllegalArgumentException("Projection is not invertible.");

    // (proj * model)^-1 = model^-1 * proj^-1
    float[] combo = new float[16];
    android.opengl.Matrix.multiplyMM(combo, 0, modelInv, 0, projInv, 0);

    float vx = viewport[0];
    float vy = viewport[1];
    float vw = viewport[2];
    float vh = viewport[3];

    // Window coordinates -> normalized device coordinates (-1..1)
    float[] rhsVec = { ((2 * (rx - vx)) / vw) - 1, ((2 * (ry - vy)) / vh) - 1, 2 * rz - 1, 1 };

    float[] result = new float[4];
    android.opengl.Matrix.multiplyMV(result, 0, combo, 0, rhsVec, 0);

    // Perspective divide: bring w back to 1.
    float d = 1 / result[3];
    float[] endResult = { result[0] * d, result[1] * d, result[2] * d };
    return endResult;
}

public float distanceToDepth(float distance) {
    return ((1 / fNear) - (1 / distance)) / ((1 / fNear) - (1 / fFar));
}
It currently assumes the following global variables:
mg - a MatrixGrabber with current matrices
viewport - a float[4] with the viewport ({x, y, width, height})
fNear / fFar - the near and far plane distances passed to the projection
The variables it takes are equivalent to the ones that gluUnProject was supposed to take. For example:
float[] xyz = {0, 0, 0};
xyz = unproject(mouseX, viewport[3] - mouseY, 1);
This will return the point under the mouse, on the far plane. I also added a function to convert between a specified distance from the camera and its 0-1 depth representation. Like so:
unproject(mouseX, viewport[3] - mouseY, distanceToDepth(5));
This will return the point under the mouse 5 units from the camera.
I tested this with the method given in the question: I checked the distance between the near plane and the far plane. With fNear of 0.1 and fFar of 100, the distance should be 99.9, and I have consistently gotten about 99.8977, regardless of the position or orientation of the camera, as far as I can tell. Haha, good to have that figured out. Let me know if you do or don't have any problems with it, or if you want me to rewrite it to take inputs instead of using global variables. Hopefully this helps a few people; I had been wondering about this for a few days before seriously trying to fix it.
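And if you ever need to go the other way, the same formula inverts cleanly. A small sketch (same fNear/fFar globals; the name depthToDistance is just what I'd call it):
public float depthToDistance(float depth) {
    // Inverse of distanceToDepth: recovers the camera distance from a 0-1 depth.
    // depth = 0 gives fNear, depth = 1 gives fFar.
    return 1 / ((1 / fNear) - depth * ((1 / fNear) - (1 / fFar)));
}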
Hey, so, having figured out how it's supposed to work, I've figured out what they missed in implementing gluUnProject. They forgot (intended not to, and didn't tell anyone?) to divide by the fourth element of the resulting vector, the perspective divide that renormalizes the homogeneous vector. gluProject sets it to 1 before applying matrices, so it needs to be 1 when you're done undoing them. Long story short, you can actually use gluUnProject, but you need to pass it a float[4], and then divide all the resulting coordinates by the 4th one, like so:
float[] xyzw = {0, 0, 0, 0};
android.opengl.GLU.gluUnProject(rx, ry, rz, mg.mModelView, 0, mg.mProjection, 0, this.viewport, 0, xyzw, 0);
// The missing perspective divide: bring w back to 1.
xyzw[0] /= xyzw[3];
xyzw[1] /= xyzw[3];
xyzw[2] /= xyzw[3];
//xyzw[3] /= xyzw[3];
xyzw[3] = 1;
return xyzw;
xyzw should now contain the relevant space coordinates. This seems to work exactly the same as the one I cobbled together. It might be a little bit faster; I think they combined one of the steps.
Related
I am currently developing a 3-D game engine for Android, and everything works fine on my Lenovo TAB 10, but rotation about any axis causes the meshes to flatten (skew) during the rotation on other devices. I don't know where to start looking, since it works on one device. Any ideas?
I rotate everything around arbitrary axes (right, up, and forward) relative to the objects themselves. The rotated axes are then put into a rotation matrix as follows:
mMatRotate = new cMatrix4(
mvRight.mX, mvUp.mX, mvForward.mX, 0.0f,
mvRight.mY, mvUp.mY, mvForward.mY, 0.0f,
mvRight.mZ, mvUp.mZ, mvForward.mZ, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f);
The actual matrix is defined as follows:
cMatrix4( float a, float b, float c, float d,
float e, float f, float g, float h,
float i, float j, float k, float l,
float m, float n, float o, float p )
{
mfMatrixData[0] = a; mfMatrixData[1] = b; mfMatrixData[2] = c; mfMatrixData[3] = d;
mfMatrixData[4] = e; mfMatrixData[5] = f; mfMatrixData[6] = g; mfMatrixData[7] = h;
mfMatrixData[8] = i; mfMatrixData[9] = j; mfMatrixData[10] = k; mfMatrixData[11] = l;
mfMatrixData[12] = m; mfMatrixData[13] = n; mfMatrixData[14] = o; mfMatrixData[15] = p;
}
The inside of my draw() function looks like this:
Matrix.setIdentityM(mMesh.mModelMatrix, 0);
Matrix.translateM(mMesh.mModelMatrix, 0, mvLocation.mX, mvLocation.mY, mvLocation.mZ);
Matrix.scaleM(mMesh.mModelMatrix, 0, mfScale, mfScale, mfScale);
Matrix.multiplyMM(mMesh.mModelMatrix, 0, mMesh.mModelMatrix, 0, mMatRotate.getFloatArray(), 0);
mMesh.draw(viewMatrix, projMatrix, Renderer);
And the final transform looks like this:
Matrix.multiplyMM(mtrx, 0, viewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mtrx, 0);
Matrix.multiplyMM(mtrx, 0, projMatrix, 0, mtrx, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mtrx, 0);
I'm kind of stuck on this one. Any suggestions would be helpful.
Okay, I solved the problem myself, and I'll post it here in case anyone else is interested or encounters a similar issue. It all comes down to the fact that the Lenovo TAB 10 relies entirely on hardware, while the Samsung Galaxy Core Prime does not. The issue is in the line:
Matrix.multiplyMM(mMesh.mModelMatrix, 0, mMesh.mModelMatrix, 0, mMatRotate.getFloatArray(), 0);
In hardware, it seems, it is okay to perform a matrix multiply and place the result into one of the input matrices, but the software implementation does not behave the same. This makes sense, I guess, because it would take too long to make a copy every time you do a matrix multiply.
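For anyone hitting the same thing, the workaround is just a scratch matrix (a sketch; mTemp is a name I'm assuming here):
// Android's Matrix.multiplyMM documents the result as undefined when the
// output array overlaps either input, so multiply into a scratch array first.
private final float[] mTemp = new float[16];

// ...then, in draw():
Matrix.multiplyMM(mTemp, 0, mMesh.mModelMatrix, 0, mMatRotate.getFloatArray(), 0);
System.arraycopy(mTemp, 0, mMesh.mModelMatrix, 0, 16);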
For 3D picking I planned to do this:
- Get the touch coordinates (x, y).
- Choose a vertex from the vertex buffer of my model (xM, yM, zM).
- Project (xM, yM, zM) to screen coordinates myself: (xM, yM, zM) ---> (xP, yP, ...)
- Then check for a match (for example sqrt((x - xP)^2 + (y - yP)^2) < SOME_EPS).
For projecting, I saved the frustum matrix in mProjectionMatrix:
gl.glFrustumf(-ratio / q, ratio / q, -1 / q, 1 / q, 1, 25);
Matrix.frustumM(mProjectionMatrix, 0, -ratio / q, ratio / q, -1 / q, 1 / q, 1, 25);
and saved the transform (rotation) matrix in mAccRotation:
gl.glLoadMatrixf(mAccRotation, 0);
So the testing function turned into this (TESTIFY_VERT is one of the vertices in my model):
public void touch(float x, float y) {
    float TESTIFY_VERT[] = {0.0f, 0.0f, -1.5f, 1.0f}; // first vertex in L0
    float Resulted[] = new float[4];
    float rMatrix[] = new float[16];
    Matrix.multiplyMM(rMatrix, 0, mProjectionMatrix, 0, mAccRotation, 0);
    Matrix.multiplyMV(Resulted, 0, rMatrix, 0, TESTIFY_VERT, 0);
}
So I tried to use Resulted[0], Resulted[1] as (xP, yP),
and I tried to use (Resulted[0] + 1) * ( WIDTH / 2.0f ), (-Resulted[1] + 1) * ( HEIGHT / 2.0f ).
And this doesn't work. Why?
Can you give me any advice?
PS: I have seen ALL the similar questions and they don't answer my problem.
Maybe you are missing the perspective divide. Divide Resulted[0] and Resulted[1] by Resulted[3] and use the ratios as (xP, yP), i.e.:
float xP = Resulted[0] / Resulted[3];
float yP = Resulted[1] / Resulted[3];
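Putting that together with the viewport mapping from your question (WIDTH and HEIGHT being the screen size):
// Perspective divide (clip space -> NDC), then NDC -> pixels.
float xP = Resulted[0] / Resulted[3];
float yP = Resulted[1] / Resulted[3];
float screenX = (xP + 1.0f) * (WIDTH / 2.0f);    // NDC x in [-1, 1] -> pixel column
float screenY = (-yP + 1.0f) * (HEIGHT / 2.0f);  // flipped: GL origin is bottom-left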
I'm building an Android application that uses OpenGL ES 2.0 and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject but I'm either doing it wrong or just don't understand it.
This is my attempt....
public void getWorldFromScreen(float x, float y) {
    int viewport[] = { 0, 0, width, height };
    float startY = ((float) (height) - y);
    float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
    float[] far = { 0.0f, 0.0f, 0.0f, 0.0f };

    float[] mv = new float[16];
    Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);

    GLU.gluUnProject(x, startY, 0, mv, 0, mProjectionMatrix, 0, viewport, 0, near, 0);
    GLU.gluUnProject(x, startY, 1, mv, 0, mProjectionMatrix, 0, viewport, 0, far, 0);

    float nearX = near[0] / near[3];
    float nearY = near[1] / near[3];
    float nearZ = near[2] / near[3];

    float farX = far[0] / far[3];
    float farY = far[1] / far[3];
    float farZ = far[2] / far[3];
}
The numbers I am getting don't seem right; is this the right way to utilize this method? Does it work for OpenGL ES 2.0? Should I make the model matrix an identity matrix before these calculations (Matrix.setIdentityM(mModelMatrix, 0))?
As a follow-up, if this is correct, how do I pick the output Z? Basically, I always know at what distance I want the world coordinates to be, but the Z parameter in GLU.gluUnProject appears to be some kind of interpolation between the near and far planes. Is it just a linear interpolation?
Thanks in advance
/**
 * Calculates the transform from screen coordinate
 * system to world coordinate system coordinates
 * for a specific point, given a camera position.
 *
 * @param touch Vec2 point of screen touch, the
 *        actual position on the physical screen (e.g. 160, 240)
 * @param cam camera object with x,y,z of the
 *        camera and screenWidth and screenHeight of
 *        the device.
 * @return position in WCS.
 */
public Vec2 GetWorldCoords(Vec2 touch, Camera cam)
{
    // Initialize auxiliary variables.
    Vec2 worldPos = new Vec2();

    // SCREEN height & width (e.g. 320 x 480)
    float screenW = cam.GetScreenWidth();
    float screenH = cam.GetScreenHeight();

    // Auxiliary matrix and vectors to deal with ogl.
    float[] invertedMatrix = new float[16];
    float[] transformMatrix = new float[16];
    float[] normalizedInPoint = new float[4];
    float[] outPoint = new float[4];

    // Invert y coordinate, as android uses top-left and ogl bottom-left.
    int oglTouchY = (int) (screenH - touch.Y());

    /* Transform the screen point to clip space in ogl (-1,1). */
    normalizedInPoint[0] = (float) (touch.X() * 2.0f / screenW - 1.0);
    normalizedInPoint[1] = (float) (oglTouchY * 2.0f / screenH - 1.0);
    normalizedInPoint[2] = -1.0f;
    normalizedInPoint[3] = 1.0f;

    /* Obtain the transform matrix and then the inverse. */
    Print("Proj", getCurrentProjection(gl));
    Print("Model", getCurrentModelView(gl));
    Matrix.multiplyMM(transformMatrix, 0,
            getCurrentProjection(gl), 0,
            getCurrentModelView(gl), 0);
    Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);

    /* Apply the inverse to the point in clip space. */
    Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);

    if (outPoint[3] == 0.0)
    {
        // Avoid /0 error.
        Log.e("World coords", "ERROR!");
        return worldPos;
    }

    // Divide by the w (4th) component to find out the real position.
    worldPos.Set(outPoint[0] / outPoint[3],
                 outPoint[1] / outPoint[3]);

    return worldPos;
}
Algorithm is further explained here.
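A hypothetical usage from a touch handler (the event wiring, the cam instance, and a Vec2 (x, y) constructor are assumptions, not part of the answer above):
@Override
public boolean onTouchEvent(MotionEvent event) {
    // Convert the raw touch position to world coordinates.
    Vec2 world = GetWorldCoords(new Vec2(event.getX(), event.getY()), cam);
    return true;
}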
Hopefully my question (and answer) should help you out:
How to find absolute position of click while zoomed in
It has not only the code but also diagrams and diagrams and diagrams explaining it :) Took me ages to figure it out as well.
IMHO one doesn't need to re-implement this function...
I experimented with Erol's solution and it worked, so thanks a lot for it, Erol.
Furthermore, I played with
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
and it works fine as well in my tiny noob example 2D OpenGL ES 2.0 project:
public void onSurfaceChanged(GL10 unused, int width, int height) {...
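For reference, here's roughly what that setup looks like in full (a sketch; mtrxProjection and the aspect-ratio handling are assumptions on my part, not necessarily the exact code):
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    // 2D scene: x spans [-ratio, ratio], y spans [-1, 1]; near/far barely matter here.
    Matrix.orthoM(mtrxProjection, 0, -ratio, ratio, -1f, 1f, -1f, 1f);
}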
I need to pass gluUnProject the winZ value of a pixel. To obtain the winZ value I need to read the depth value at a given pixel, which is a normalized z coordinate.
In C the way to do it would be: glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
The problem is that I'm programming for Android 1.5 and OpenGL ES 1, so I can't use GL_DEPTH_COMPONENT with glReadPixels.
How can I obtain the depth of a pixel on the screen?
Solved: it is simply impossible to do this on OpenGL ES 1.x.
If someone finds a way, tell me, but after weeks of searching I simply think it is impossible.
Check out this nice post; I think it has what you're looking for:
http://afqa123.com/2012/04/03/fixing-gluunproject-in-android-froyo/
A custom gluUnProject which enables you to calculate the near and far vectors:
private boolean unProject(float winx, float winy, float winz,
                          float[] modelMatrix, int moffset,
                          float[] projMatrix, int poffset,
                          int[] viewport, int voffset,
                          float[] obj, int ooffset) {
    float[] finalMatrix = new float[16];
    float[] in = new float[4];
    float[] out = new float[4];

    Matrix.multiplyMM(finalMatrix, 0, projMatrix, poffset,
                      modelMatrix, moffset);
    if (!Matrix.invertM(finalMatrix, 0, finalMatrix, 0))
        return false;

    in[0] = winx;
    in[1] = winy;
    in[2] = winz;
    in[3] = 1.0f;

    // Map x and y from window coordinates
    in[0] = (in[0] - viewport[voffset]) / viewport[voffset + 2];
    in[1] = (in[1] - viewport[voffset + 1]) / viewport[voffset + 3];

    // Map to range -1 to 1
    in[0] = in[0] * 2 - 1;
    in[1] = in[1] * 2 - 1;
    in[2] = in[2] * 2 - 1;

    Matrix.multiplyMV(out, 0, finalMatrix, 0, in, 0);
    if (out[3] == 0.0f)
        return false;

    // Perspective divide
    out[0] /= out[3];
    out[1] /= out[3];
    out[2] /= out[3];

    obj[ooffset] = out[0];
    obj[ooffset + 1] = out[1];
    obj[ooffset + 2] = out[2];
    return true;
}
In order to get the 3D coordinates of a point x/y on the screen, you only have to call the new method once for each intersection with the near and far clipping planes:
// near clipping plane intersection
float[] objNear = new float[4];
unProject(screenX, screenY, 0.1f,
viewMatrix, 0, projectionMatrix, 0, view, 0, objNear, 0);
// far clipping plane intersection
float[] objFar = new float[4];
unProject(screenX, screenY, 1.0f,
viewMatrix, 0, projectionMatrix, 0, view, 0, objFar, 0);
You can then calculate the difference vector (objFar - objNear) and check what location in 3D space corresponds to the screen event.
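For example (a trivial sketch, using the objNear/objFar arrays from the calls above):
// World-space direction of the pick ray (not normalized).
float[] rayDir = {
    objFar[0] - objNear[0],
    objFar[1] - objNear[1],
    objFar[2] - objNear[2]
};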
This is a simple function I use to find the distance between the two vectors:
public float distanceBetweenVectors(float[] vec1, float[] vec2) {
    // Plain Euclidean distance. The w term cancels here because unProject
    // above only fills the first three components of each float[4].
    return (float) Math.sqrt(
            Math.pow((vec1[0] - vec2[0]), 2) +
            Math.pow((vec1[1] - vec2[1]), 2) +
            Math.pow((vec1[2] - vec2[2]), 2) +
            Math.pow((vec1[3] - vec2[3]), 2));
}
In my game, I need to find out where the player is touching. MotionEvent.getX() and MotionEvent.getY() return window coordinates. So I made this function to test converting window coordinates into OpenGL coordinates:
public void ConvertCoordinates(GL10 gl) {
    float location[] = new float[4];
    // From the SpriteText demo in the samples.
    final MatrixGrabber mg = new MatrixGrabber();
    mg.getCurrentProjection(gl);
    mg.getCurrentModelView(gl);
    int viewport[] = { 0, 0, WinWidth, WinHeight };
    GLU.gluUnProject((float) WinWidth / 2, (float) WinHeight / 2, (float) 0,
                     mg.mModelView, 0, mg.mProjection, 0, viewport, 0, location, 0);
    Log.d("Location", location[1] + ", " + location[2] + ", " + location[3] + "");
}
X and y oscillated from almost -33 to almost +33. Z is usually 10. Did I use MatrixGrabber wrong or something?
Well, the easiest way for me to get into this was imagining the onscreen click as a ray that starts at the camera position and goes on to infinity.
To get that ray I needed to ask for its world coords at (at least) 2 Z positions (in view coords).
Here's my method for finding the ray (taken from the same android demo app, I guess).
It works fine in my app:
public void Select(float x, float y) {
    MatrixGrabber mg = new MatrixGrabber();
    int viewport[] = { 0, 0, _renderer.width, _renderer.height };

    _renderer.gl.glMatrixMode(GL10.GL_MODELVIEW);
    _renderer.gl.glLoadIdentity();
    // We need to apply our camera transformations before getting the ray coords
    // (ModelView matrix should only contain camera transformations)
    mEngine.mCamera.SetView(_renderer.gl);

    mg.getCurrentProjection(_renderer.gl);
    mg.getCurrentModelView(_renderer.gl);

    float realY = ((float) (_renderer.height) - y);
    float nearCoords[] = { 0.0f, 0.0f, 0.0f };
    float farCoords[] = { 0.0f, 0.0f, 0.0f };

    gluUnProject(x, realY, 0.0f, mg.mModelView, 0, mg.mProjection, 0,
                 viewport, 0, nearCoords, 0);
    gluUnProject(x, realY, 1.0f, mg.mModelView, 0, mg.mProjection, 0,
                 viewport, 0, farCoords, 0);

    mEngine.RespondToClickRay(nearCoords, farCoords);
}
In OGL10, AFAIK, you don't have access to the Z-buffer, so you can't find the Z of the closest object at the screen x,y coords.
You need to calculate the first object hit by your ray on your own.
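One common way to do that is to wrap each object in a bounding sphere and intersect the ray with it; the function below is my own sketch, not from the demo app:
// Returns the smallest non-negative t where origin + t * dir hits the sphere
// (center c, radius r), or -1 if the ray misses. Standard quadratic solution
// of |origin + t * dir - c|^2 = r^2.
public float intersectRaySphere(float[] origin, float[] dir, float[] c, float r) {
    float ox = origin[0] - c[0], oy = origin[1] - c[1], oz = origin[2] - c[2];
    float a = dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2];
    float b = 2 * (ox * dir[0] + oy * dir[1] + oz * dir[2]);
    float k = ox * ox + oy * oy + oz * oz - r * r;
    float disc = b * b - 4 * a * k;
    if (disc < 0) return -1;                 // no intersection
    float sq = (float) Math.sqrt(disc);
    float t = (-b - sq) / (2 * a);           // nearer hit
    if (t >= 0) return t;
    t = (-b + sq) / (2 * a);                 // ray starts inside the sphere
    return t >= 0 ? t : -1;
}
The first object hit is then the one with the smallest returned t across all candidates.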
Your error comes from the size of the output arrays: nearCoords and farCoords in the Select code above need length 4, not 3.