I am a newbie to Android OpenGL. I am trying to draw buttons using OpenGL, and I have added a gesture listener to the GLSurfaceView, so I get a MotionEvent whenever the user touches the screen. My question is: how can I convert MotionEvent.getX() and MotionEvent.getY() (which are in pixels) to window or object coordinates of the view?
I found the solution to this question; posting it in case someone else needs it.
public float[] convertToObjectCoordinates(MotionEvent event) {
    float[] worldPos = new float[2];
    float[] invertedMatrix = new float[16];
    float[] transformMatrix = new float[16];
    float[] normalizedInPoint = new float[4];
    float[] outPoint = new float[4];
    // Fetch the projection matrix from the renderer. Change the projection
    // and model-view matrices to match your own scene, or use your combined
    // MVP matrix directly in place of transformMatrix.
    float[] mProjMatrix = mRenderer.getmProjMatrix();
    float[] mVMatrix = new float[16];
    Matrix.setLookAtM(mVMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
    // Make sure screenWidth/screenHeight are current before using them.
    setHeightAndWidth();
    // Invert y, as Android's origin is top-left while OpenGL's is bottom-left.
    float y = screenHeight - event.getY();
    // Map the touch point into normalized device coordinates (-1..1).
    normalizedInPoint[0] = (float) (event.getX() * 2.0f / screenWidth - 1.0);
    normalizedInPoint[1] = (float) (y * 2.0f / screenHeight - 1.0);
    normalizedInPoint[2] = -1.0f;
    normalizedInPoint[3] = 1.0f;
    // Invert (projection * view) and apply it to the NDC point.
    Matrix.multiplyMM(transformMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
    Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);
    Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);
    if (outPoint[3] != 0.0) {
        // Perspective divide to recover world-space x and y.
        worldPos[0] = outPoint[0] / outPoint[3];
        worldPos[1] = outPoint[1] / outPoint[3];
    } else {
        Log.e("Error", "Normalised Zero Error");
    }
    return worldPos;
}
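A minimal sketch of how this might be wired up from a touch callback, assuming the method lives in the GLSurfaceView subclass as described above (the logging is just illustrative):

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // Convert the raw pixel position to world coordinates.
        float[] worldPos = convertToObjectCoordinates(event);
        Log.d("Touch", "world x = " + worldPos[0] + ", y = " + worldPos[1]);
    }
    return true;
}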
This is a remake, for Android OpenGL ES 2.0, of the following post:
Android OpenGL ES 2.0 screen coordinates to world coordinates
Erol's answer
Thanks to Erol for his reply.
public void setHeightAndWidth() {
    screenHeight = this.getHeight();
    screenWidth = this.getWidth();
}
The above method should live in the GLSurfaceView class so that it returns the exact height and width of the view. If your view occupies the complete screen, you may also use DisplayMetrics to get the full screen width and height.
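For the full-screen case, a sketch of the DisplayMetrics alternative (assuming this runs inside the view, so getResources() is available):

DisplayMetrics metrics = getResources().getDisplayMetrics();
screenWidth = metrics.widthPixels;    // full screen width in pixels
screenHeight = metrics.heightPixels;  // full screen height in pixels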
Related
I'm trying to build an Augmented Reality application in Android using BoofCV (an OpenCV alternative for Java) and OpenGL ES 2.0. I have a marker whose image points I can detect, and from them I get a "world to cam" transformation using BoofCV's solvePnP function. I want to be able to draw the marker in 3D using OpenGL. Here's what I have so far:
On every camera frame, I call solvePnP:
Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);
This is what I have defined as the world points
static float qrSideLength = 79.365f; // mm
private static final double[][] __qrWorldPoints = {
    {qrSideLength * -0.5, qrSideLength *  0.5, 0},
    {qrSideLength * -0.5, qrSideLength * -0.5, 0},
    {qrSideLength *  0.5, qrSideLength * -0.5, 0},
    {qrSideLength *  0.5, qrSideLength *  0.5, 0}
};
I'm feeding it a square that has its origin at its center, with a side length in millimeters.
I can confirm that the rotation vector and translation vector I'm getting back from solvePnP are reasonable, so I don't know if there's a problem here.
I pass the result from solvePnP into my renderer
public void setWorldToCam(Se3_F64 worldToCam) {
    DenseMatrix64F _R = worldToCam.R;
    Vector3D_F64 _T = worldToCam.T;
    // Concatenate the rotation matrix and translation vector into
    // a view matrix
    double[][] __view = {
        {_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
        {_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
        {_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
        {0, 0, 0, 1}
    };
    DenseMatrix64F _view = new DenseMatrix64F(__view);
    // Matrix to convert from the BoofCV (OpenCV) coordinate system to the
    // OpenGL coordinate system: flip the y and z axes.
    double[][] __cv_to_gl = {
        {1,  0,  0, 0},
        {0, -1,  0, 0},
        {0,  0, -1, 0},
        {0,  0,  0, 1}
    };
    DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);
    // Multiply the view matrix by the BoofCV-to-OpenGL matrix
    // to apply the coordinate transform
    DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();
    // BoofCV stores matrices in row-major order, but OpenGL expects
    // column-major order, so transpose the view matrix, flatten it to
    // a list of 16 values, and convert them to floating point
    double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();
    for (int i = 0; i < mViewMatrix.length; i++) {
        mViewMatrix[i] = (float) viewd[i];
    }
}
I'm also using the camera intrinsics I get from camera calibration to build the OpenGL projection matrix:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // This projection matrix is applied to object coordinates
    // in the onDrawFrame() method
    double fx = MathUtils.fx;
    double fy = MathUtils.fy;
    float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
    float aspect = (float) ((width * fy) / (height * fx));
    // Be careful with these; they could explain why you don't see certain objects
    float near = 0.1f;
    float far = 100.0f;
    Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);
    GLES20.glViewport(0, 0, width, height);
}
The square I'm drawing is the one defined in this Google example.
@Override
public void onDrawFrame(GL10 gl) {
    // Redraw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    // Set the camera position (view matrix)
    // Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    // Combine the view matrix with the projection matrix. Note that the
    // projection matrix must be the first operand in multiplyMM for the
    // matrix multiplication product to be correct.
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
    // Draw shape
    mSquare.draw(mMVPMatrix);
}
I believe the problem is that this definition of a square in Google's example code doesn't take the real-world side length into account. I understand that OpenGL clip space has the corners (-1, 1), (-1, -1), (1, -1), (1, 1), which doesn't correspond to the millimeter object points I have defined for use in BoofCV, even though they are in the right order.
static float squareCoords[] = {
    -0.5f,  0.5f, 0.0f,  // top left
    -0.5f, -0.5f, 0.0f,  // bottom left
     0.5f, -0.5f, 0.0f,  // bottom right
     0.5f,  0.5f, 0.0f   // top right
};
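If that is indeed the cause, one way to reconcile the two unit systems is to scale the unit square up to the marker's physical size with a model matrix, so its vertices land at ±qrSideLength/2 in the same millimeter units solvePnP uses. A hedged sketch, not a confirmed fix (it reuses the matrix names from the renderer above):

// Hypothetical sketch: scale the Google-sample unit square to the
// marker's physical side length (millimeters) before view/projection.
float[] modelMatrix = new float[16];
float[] viewModel = new float[16];
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, qrSideLength, qrSideLength, 1f);
// MVP = projection * view * model
Matrix.multiplyMM(viewModel, 0, mViewMatrix, 0, modelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, viewModel, 0);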
I'm trying to learn OpenGL ES 2.0 and I want to load 3D models on Android. I can now load a model properly with its texture, but I have a problem with display depth. When I place my model in perspective and part of it should be hidden by another part, a triangle or two are drawn in front when they should be behind, and I can see through some parts of the model.
I tried setEGLConfigChooser(8, 8, 8, 8, 16, 0) and (8, 8, 8, 8, 24, 0), but my problem remains the same, except that with (8, 8, 8, 8, 24, 0) the display is a little better defined; however, when the 3D object moves, the colors flicker with a strobe effect that is disturbing.
I also tried glDepthFunc(GL_LEQUAL) together with glEnable(GL_DEPTH_TEST), but this did not resolve my problem.
Here are pictures of the problem:
The problem: (image link broken)
The correct rendering: (image link broken)
Sorry about the links; I don't have the 10 reputation needed to post pictures directly in the question.
Here is my code.
My GLSurfaceView
public MyGLSurfaceView(Context context) {
    super(context);
    this.context = context;
    setEGLContextClientVersion(2);
    setEGLConfigChooser(true);
    //setZOrderOnTop(true);
    //setEGLConfigChooser(8, 8, 8, 8, 16, 0);
    //setEGLConfigChooser(8, 8, 8, 8, 24, 0);
    //getHolder().setFormat(PixelFormat.RGBA_8888);
    mRenderer = new Renderer(context);
    setRenderer(mRenderer);
}
My renderer
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
    glEnable(GL_DEPTH_TEST);
    mushroom = new Mushroom();
    textureProgram = new TextureShaderProgram(context);
    texture = TextureHelper.loadTexture(context, R.drawable.mushroom);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
    glViewport(0, 0, width, height);
    MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
            / (float) height, 0f, 10f);
    setLookAtM(viewMatrix, 0, 0f, 1.2f, -10.2f, 0f, 0f, 0f, 0f, 1f, 0f);
}
@Override
public void onDrawFrame(GL10 glUnused) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    glDepthFunc(GL_LEQUAL);
    //glDepthMask(true);
    positionMushroomInScene();
    textureProgram.useProgram();
    textureProgram.setUniforms(modelViewProjectionMatrix, texture);
    mushroom.bindData(textureProgram);
    mushroom.draw();
    //glDepthFunc(GL_LESS);
}
private void positionMushroomInScene() {
    setIdentityM(modelMatrix, 0);
    translateM(modelMatrix, 0, 0f, 0f, 5f);
    rotateM(modelMatrix, 0, -yRotation, 1f, 0f, 0f);
    rotateM(modelMatrix, 0, xRotation, 0f, 1f, 0f);
    multiplyMM(modelViewProjectionMatrix, 0, viewProjectionMatrix,
            0, modelMatrix, 0);
}
My matrix helper
public static void perspectiveM(float[] m, float yFovInDegrees, float aspect, float n, float f) {
    final float angleInRadians = (float) (yFovInDegrees * Math.PI / 180.0);
    final float a = (float) (1.0 / Math.tan(angleInRadians / 2.0));
    m[0] = a / aspect;
    m[1] = 0f;
    m[2] = 0f;
    m[3] = 0f;
    m[4] = 0f;
    m[5] = a;
    m[6] = 0f;
    m[7] = 0f;
    m[8] = 0f;
    m[9] = 0f;
    m[10] = -((f + n) / (f - n));
    m[11] = -1f;
    m[12] = 0f;
    m[13] = 0f;
    m[14] = -((2f * f * n) / (f - n));
    m[15] = 0f;
}
The problem is most likely with the way you set up your projection matrix:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0f, 10f);
The 4th argument in your definition of this function is the near plane. This value should never be 0.0. It should typically be a reasonable fraction of the far distance. Choosing the ideal value can be somewhat of a tradeoff. The larger far / near is, the less depth precision you get. On the other hand, if you set the near value too large, you risk clipping off close geometry that you actually wanted to see.
A ratio of maybe 100 or 1000 for far / near should normally give you reasonable depth precision, without undesirable front clipping. You'll need to be a little more conservative with the ratio if you use a 16-bit depth buffer than if you have a 24-bit depth buffer.
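As a concrete illustration of the ratio rule (a sketch; the numbers are examples, not hard requirements):

float far = 10.0f;          // matches the far plane used above
float near = far / 100.0f;  // far/near = 100, giving near = 0.1f, as suggested below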
For your purpose, try changing near to 0.1, and see how that works for you:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0.1f, 10f);
I have drawn a map in OpenGL. What I want is that whenever the user touches the screen, I get coordinates relative to the OpenGL map, not just screen coordinates.
Following is the code I have tried, but I am not getting correct coordinates:
// Initialize auxiliary variables.
PointF worldPos = new PointF();
// Auxiliary matrix and vectors to deal with OpenGL.
float[] invertedMatrix, transformMatrix, normalizedInPoint, outPoint;
invertedMatrix = new float[16];
transformMatrix = new float[16];
normalizedInPoint = new float[4];
outPoint = new float[4];
// Invert y coordinate, as Android uses top-left and OpenGL bottom-left.
int oglTouchY = (int) (scrheigth - touch.Y);
// Transform the screen point to clip space (-1..1).
normalizedInPoint[0] = (float) ((touch.X) * 2.0f / scrwidth - 1.0);
normalizedInPoint[1] = (float) ((oglTouchY) * 2.0f / scrheigth - 1.0);
normalizedInPoint[2] = -1.0f;
normalizedInPoint[3] = 1.0f;
// Obtain the transform matrix and then the inverse.
Matrix.multiplyMM(transformMatrix, 0, mProjMatrix, 0, mMVPMatrix, 0);
Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);
// Apply the inverse to the point in clip space.
Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);
if (outPoint[3] == 0.0)
{
    // Avoid /0 error.
    Log.e("World coords", "ERROR!");
    return worldPos;
}
For the view, override the onTouch method; then use the MotionEvent object's getX() and getY() methods.
The MotionEvent dispatched to your view always uses the position relative to its parent view. You might need to recursively add the parent views' coordinates to get the absolute position on screen.
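A brief sketch of that idea; note that View.getLocationOnScreen() performs the parent walk for you, and MotionEvent.getRawX()/getRawY() already return screen-absolute coordinates (view and event here are illustrative names):

// Absolute screen position of a parent-relative touch point.
int[] location = new int[2];
view.getLocationOnScreen(location);  // walks up the parent chain for you
float screenX = event.getX() + location[0];
float screenY = event.getY() + location[1];
// Alternatively, event.getRawX() / event.getRawY() are already absolute.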
I'm building an Android application that uses OpenGL ES 2.0 and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject but I'm either doing it wrong or just don't understand it.
This is my attempt:
public void getWorldFromScreen(float x, float y) {
    int[] viewport = { 0, 0, width, height };
    float startY = (float) height - y;
    float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
    float[] far = { 0.0f, 0.0f, 0.0f, 0.0f };
    float[] mv = new float[16];
    Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);
    GLU.gluUnProject(x, startY, 0, mv, 0, mProjectionMatrix, 0, viewport, 0, near, 0);
    GLU.gluUnProject(x, startY, 1, mv, 0, mProjectionMatrix, 0, viewport, 0, far, 0);
    float nearX = near[0] / near[3];
    float nearY = near[1] / near[3];
    float nearZ = near[2] / near[3];
    float farX = far[0] / far[3];
    float farY = far[1] / far[3];
    float farZ = far[2] / far[3];
}
The numbers I am getting don't seem right. Is this the right way to use this method? Does it work for OpenGL ES 2.0? Should I make the model matrix an identity matrix before these calculations (Matrix.setIdentityM(mModelMatrix, 0))?
As a follow-up, if this is correct, how do I pick the output Z? Basically, I always know at what distance I want the world coordinates to be, but the Z parameter in GLU.gluUnProject appears to be some kind of interpolation between the near and far planes. Is it just a linear interpolation?
Thanks in advance
/**
 * Calculates the transform from screen coordinate
 * system to world coordinate system coordinates
 * for a specific point, given a camera position.
 *
 * @param touch Vec2 point of the screen touch, the
 *        actual position on the physical screen (e.g. 160, 240)
 * @param cam camera object with x,y,z of the
 *        camera and screenWidth and screenHeight of
 *        the device.
 * @return position in WCS.
 */
public Vec2 GetWorldCoords(Vec2 touch, Camera cam)
{
    // Initialize auxiliary variables.
    Vec2 worldPos = new Vec2();
    // Screen height & width (e.g. 320 x 480)
    float screenW = cam.GetScreenWidth();
    float screenH = cam.GetScreenHeight();
    // Auxiliary matrix and vectors to deal with OpenGL.
    float[] invertedMatrix, transformMatrix, normalizedInPoint, outPoint;
    invertedMatrix = new float[16];
    transformMatrix = new float[16];
    normalizedInPoint = new float[4];
    outPoint = new float[4];
    // Invert y coordinate, as Android uses top-left and OpenGL bottom-left.
    int oglTouchY = (int) (screenH - touch.Y());
    // Transform the screen point to clip space (-1..1).
    normalizedInPoint[0] = (float) (touch.X() * 2.0f / screenW - 1.0);
    normalizedInPoint[1] = (float) (oglTouchY * 2.0f / screenH - 1.0);
    normalizedInPoint[2] = -1.0f;
    normalizedInPoint[3] = 1.0f;
    // Obtain the transform matrix and then the inverse.
    Print("Proj", getCurrentProjection(gl));
    Print("Model", getCurrentModelView(gl));
    Matrix.multiplyMM(transformMatrix, 0,
            getCurrentProjection(gl), 0,
            getCurrentModelView(gl), 0);
    Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);
    // Apply the inverse to the point in clip space.
    Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);
    if (outPoint[3] == 0.0)
    {
        // Avoid division by zero.
        Log.e("World coords", "ERROR!");
        return worldPos;
    }
    // Divide by the fourth (w) component to find the real position.
    worldPos.Set(
            outPoint[0] / outPoint[3],
            outPoint[1] / outPoint[3]);
    return worldPos;
}
The algorithm is explained further here.
Hopefully my question (and answer) will help you out:
How to find absolute position of click while zoomed in
It has not only the code but also diagrams explaining it :) It took me ages to figure out as well.
IMHO one doesn't need to re-implement this function...
I experimented with Erol's solution and it worked, so thanks a lot for it, Erol.
Furthermore, I played with
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
and it works fine as well in my tiny 2D OpenGL ES 2.0 example project:
public void onSurfaceChanged(GL10 unused, int width, int height) {...
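For reference, a minimal sketch of what that onSurfaceChanged might look like (mtrxProjection comes from the snippet above; the chosen bounds are illustrative assumptions, not the poster's exact code):

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Keep one world unit square regardless of aspect ratio:
    // x spans [-ratio, ratio], y spans [-1, 1].
    float ratio = (float) width / height;
    Matrix.orthoM(mtrxProjection, 0, -ratio, ratio, -1f, 1f, 0.1f, 10f);
}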
I have made a simple class called Vector3.
It's a 3-dimensional vector with some basic math implementations.
Now I want to be able to rotate this single vector, but I get an exception.
I have this:
private static final float[] matrix = new float[16];
private static final float[] inVec = new float[4];
private static final float[] outVec = new float[4];

public Vector3 rotate(float angle, float axisX, float axisY, float axisZ)
{
    inVec[0] = x;
    inVec[1] = y;
    inVec[2] = z;
    inVec[3] = 1;
    Matrix.setIdentityM(matrix, 0);
    Matrix.rotateM(matrix, 0, angle, axisX, axisY, axisZ);
    Matrix.multiplyMM(outVec, 0, matrix, 0, inVec, 0);
    x = outVec[0];
    y = outVec[1];
    z = outVec[2];
    return this;
}
And I call it like this:
Vector3 v = new Vector3(1f, 1f, 1f);
v.rotate(90f, 0f, 1f, 0f);
What I get is an IllegalArgumentException at:
Matrix.multiplyMM(outVec, 0, matrix, 0, inVec, 0);
It says:
length - offset < n
Does anyone have a clue what I am doing wrong?
I didn't write this Vector3 class from scratch; it's borrowed from the book "Beginning Android Games".
You're using the multiplyMM method, which multiplies two matrices and returns a matrix, instead of multiplyMV (MV stands for matrix-vector), which multiplies your rotation matrix with your vector and returns the rotated vector.
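In other words, the fix is a one-line change in rotate():

// multiplyMM expects two 16-element matrices, so the 4-element inVec
// trips the "length - offset < n" check. multiplyMV is the
// matrix-times-vector variant and writes the rotated vector to outVec:
Matrix.multiplyMV(outVec, 0, matrix, 0, inVec, 0);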