I'm trying to build an Augmented Reality application on Android using BoofCV (a Java alternative to OpenCV) and OpenGL ES 2.0. I have a marker from which I can get the image points and the "world to cam" transformation using BoofCV's solvePnP function. I want to draw the marker in 3D using OpenGL. Here's what I have so far:
On every camera frame, I call solvePnP:
Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);
This is what I have defined as the world points:
static float qrSideLength = 79.365f; // mm
private static final double[][] __qrWorldPoints = {
{qrSideLength * -0.5, qrSideLength * 0.5, 0},
{qrSideLength * -0.5, qrSideLength * -0.5, 0},
{qrSideLength * 0.5, qrSideLength * -0.5, 0},
{qrSideLength * 0.5, qrSideLength * 0.5, 0}
};
I'm feeding it a square with its origin at its center and a side length in millimeters.
I can confirm that the rotation vector and translation vector I'm getting back from solvePnP are reasonable, so I don't know if there's a problem here.
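For anyone wanting to double-check that step, here is the sanity check I mean, as a sketch: reproject one world point by hand through the pinhole model and confirm it lands on the detected corner. It assumes a principal point cx, cy lives in MathUtils next to fx and fy, which is not shown in the code above.
// Hand-rolled reprojection: u = fx * x/z + cx, v = fy * y/z + cy.
// R and T come from the worldToCam pose returned by solvePnP above.
double[] p = {qrSideLength * -0.5, qrSideLength * 0.5, 0}; // first world point
DenseMatrix64F R = worldToCam.R;
Vector3D_F64 T = worldToCam.T;
double xc = R.get(0, 0) * p[0] + R.get(0, 1) * p[1] + R.get(0, 2) * p[2] + T.getX();
double yc = R.get(1, 0) * p[0] + R.get(1, 1) * p[1] + R.get(1, 2) * p[2] + T.getY();
double zc = R.get(2, 0) * p[0] + R.get(2, 1) * p[1] + R.get(2, 2) * p[2] + T.getZ();
double u = MathUtils.fx * xc / zc + MathUtils.cx; // cx, cy are assumed fields
double v = MathUtils.fy * yc / zc + MathUtils.cy;
// (u, v) should match the first entry of imagePoints to within a few pixels.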
I pass the result from solvePnP into my renderer:
public void setWorldToCam(Se3_F64 worldToCam) {
DenseMatrix64F _R = worldToCam.R;
Vector3D_F64 _T = worldToCam.T;
// Concatenate the rotation matrix and translation vector into
// a 4x4 view matrix
double[][] __view = {
{_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
{_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
{_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
{0, 0, 0, 1}
};
DenseMatrix64F _view = new DenseMatrix64F(__view);
// Matrix to convert from BoofCV (OpenCV) coordinate system to OpenGL coordinate system
double[][] __cv_to_gl = {
{1, 0, 0, 0},
{0, -1, 0, 0},
{0, 0, -1, 0},
{0, 0, 0, 1}
};
DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);
// Multiply the View Matrix by the BoofCV to OpenGL matrix to apply the coordinate transform
DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();
// BoofCV stores matrices in row major order, but OpenGL likes column major order
// I transpose the view matrix and get a flattened list of 16,
// Then I convert them to floating point
double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();
for (int i = 0; i < mViewMatrix.length; i++) {
mViewMatrix[i] = (float) viewd[i];
}
}
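A convention note, hedged because conventions differ: the OpenCV-to-OpenGL flip is often applied on the left of the view matrix (negating the Y and Z rows of worldToCam) rather than on the right as above. Which side is correct depends on whether the flip is meant to re-express the camera's axes or the world's. The left-multiplied variant, as a sketch with the same EJML types:
// Sketch: premultiplying flips the camera's axes (OpenCV's Y-down/Z-forward
// to OpenGL's Y-up/Z-backward) instead of transforming the world points.
DenseMatrix64F viewGl = new SimpleMatrix(__cv_to_gl)
        .mult(new SimpleMatrix(__view))
        .getMatrix();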
I'm also feeding the camera intrinsics I get from camera calibration into OpenGL's projection matrix:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
double fx = MathUtils.fx;
double fy = MathUtils.fy;
float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
float aspect = (float) ((width * fy) / (height * fx));
// be careful with this, it could explain why you don't see certain objects
float near = 0.1f;
float far = 100.0f;
Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);
GLES20.glViewport(0, 0, width, height);
}
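As a quick sanity check on those formulas: with, say, fy ≈ 1000 px (a made-up value) and a 1080 px tall image, fovy = 2 · atan(0.5 · 1080 / 1000) ≈ 57°, which should be close to the vertical field of view your calibration tool reports.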
The square I'm drawing is the one defined in this Google example.
@Override
public void onDrawFrame(GL10 gl) {
// redraw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
// Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Combine the rotation matrix with the projection and camera view
// Note that the mMVPMatrix factor *must be the first* in order
// for matrix multiplication product to be correct
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
// Draw shape
mSquare.draw(mMVPMatrix);
}
I believe the problem has to do with the fact that this definition of a square in Google's example code doesn't take the real-world side length into account. I understand that the OpenGL coordinate system has the corners (-1, 1), (-1, -1), (1, -1), (1, 1), which doesn't correspond to the millimeter object points I have defined for use in BoofCV, even though they are in the right order.
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f }; // top right
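If that hypothesis is right, one fix that leaves the Google geometry untouched is to scale the unit square up to the marker's physical size with a model matrix, so the vertices end up in the same millimeter units as the view matrix. A minimal sketch (qrSideLength, mViewMatrix, mProjectionMatrix, and mMVPMatrix are from the code above; the rest is new):
// Hypothetical fix: scale the example's 1x1 unit square up to the marker's
// side length so model, view, and world points all use millimeters,
// then build MVP = Projection * View * Model.
float[] modelMatrix = new float[16];
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, qrSideLength, qrSideLength, 1f);

float[] mvMatrix = new float[16];
Matrix.multiplyMM(mvMatrix, 0, mViewMatrix, 0, modelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mvMatrix, 0);
mSquare.draw(mMVPMatrix);
One caveat in the same spirit: once the view matrix is in millimeters, the near = 0.1f / far = 100.0f planes in onSurfaceChanged span only 0.1 mm to 10 cm, so anything farther than 10 cm from the camera gets clipped by the far plane; the clip planes need millimeter-scale values too (e.g. near = 10f, far = 10000f).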
I have an OpenGL scene with a sphere having a radius of 1, and the camera being at the center of the sphere (it's a 360° picture viewer). The user can rotate the sphere by panning.
Now I need to display 2D pins "attached" to some parts of the picture. To do so, I want to convert the 3D coordinates of my pins into 2D screen coordinates, and then add the pin image at those screen coordinates.
I'm using GLU.glProject and the following classes from android-apidemo:
MatrixGrabber
MatrixStack
MatrixTrackingGL
I save the projection matrix in the onSurfaceChanged method and the model-view matrix in the onDraw method (after having drawn my sphere). Then I feed GLU.gluProject with them when the user rotates the sphere, to update the pins' positions.
When I pan horizontally, the pins pan correctly, but when I pan vertically, the texture pans "faster" than the pin image (as if the pin were closer to the camera than the sphere).
Here are some relevant parts of my code:
public class CustomRenderer implements GLSurfaceView.Renderer {
MatrixGrabber mMatrixGrabber = new MatrixGrabber();
private float[] mModelView = null;
private float[] mProjection = null;
[...]
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Get the sizes:
float side = Math.max(width, height);
int x = (int) (width - side) / 2;
int y = (int) (height - side) / 2;
// Set the viewport:
gl.glViewport(x, y, (int) side, (int) side);
// Set the perspective:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, FIELD_OF_VIEW_Y, 1, Z_NEAR, Z_FAR);
// Grab the projection matrix:
mMatrixGrabber.getCurrentProjection(gl);
mProjection = mMatrixGrabber.mProjection;
// Set to MODELVIEW mode:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onDrawFrame(GL10 gl) {
// Load the texture if needed:
if(mTextureToLoad != null) {
mSphere.loadGLTexture(gl, mTextureToLoad);
mTextureToLoad = null;
}
// Clear:
gl.glClearColor(0.5f, 0.5f, 0.5f, 0.0f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
// Rotate the scene:
gl.glRotatef( (1 - mRotationY + 0.25f) * 360, 1, 0, 0); // 0.25 is used to adjust the texture position
gl.glRotatef( (1 - mRotationX + 0.25f) * 360, 0, 1, 0); // 0.25 is used to adjust the texture position
// Draw the sphere:
mSphere.draw(gl);
// Grab the model-view matrix:
mMatrixGrabber.getCurrentModelView(gl);
mModelView = mMatrixGrabber.mModelView;
}
public float[] getScreenCoords(float x, float y, float z) {
if(mModelView == null || mProjection == null) return null;
float[] result = new float[3];
int[] view = new int[] {0, 0, (int) mSurfaceViewSize.getWidth(), (int) mSurfaceViewSize.getHeight()};
GLU.gluProject(x, y, z,
mModelView, 0,
mProjection, 0,
view, 0,
result, 0);
result[1] = mSurfaceViewSize.getHeight() - result[1];
return result;
}
}
I use the result of the getScreenCoords method to display my pins. The y value is wrong.
What am I doing wrong?
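One mismatch worth ruling out, based only on the code shown: onSurfaceChanged sets a centered square viewport with offsets (x, y, side, side), but getScreenCoords hands gluProject a {0, 0, width, height} rectangle. gluProject needs the same viewport rectangle that rendering used, otherwise the projected positions are skewed along whichever axis the rectangles disagree on. A sketch of keeping the two in sync (mViewport is a new, hypothetical field):
private final int[] mViewport = new int[4];

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    float side = Math.max(width, height);
    int x = (int) (width - side) / 2;
    int y = (int) (height - side) / 2;
    gl.glViewport(x, y, (int) side, (int) side);
    // Remember the exact viewport rectangle for gluProject:
    mViewport[0] = x;
    mViewport[1] = y;
    mViewport[2] = (int) side;
    mViewport[3] = (int) side;
    // ... projection setup unchanged ...
}

public float[] getScreenCoords(float x, float y, float z) {
    if (mModelView == null || mProjection == null) return null;
    float[] result = new float[3];
    GLU.gluProject(x, y, z, mModelView, 0, mProjection, 0, mViewport, 0, result, 0);
    result[1] = mSurfaceViewSize.getHeight() - result[1];
    return result;
}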
I'm trying to learn OpenGL ES 2.0 by loading 3D models on Android. I can now load a model and its texture properly, but I have a depth problem: when I place my model in perspective and part of it should be hidden by another part, a triangle or two are sometimes drawn in front of the geometry that should occlude them, so I see through some parts of the model.
I tried setEGLConfigChooser(8, 8, 8, 8, 16, 0) and (8, 8, 8, 8, 24, 0), but the problem remains. With (8, 8, 8, 8, 24, 0) the display is a little better defined, but when the 3D object moves, the colors flicker with a strobe effect that I find disturbing.
I also tried glDepthFunc(GL_LEQUAL) together with glEnable(GL_DEPTH_TEST), but this does not solve my problem.
Here are pictures of the problem:
The problem: (link is broken)
The correct rendering: (link is broken)
Sorry for the bare links; I don't have enough reputation yet to post pictures in the question.
Here is my code.
My GLSurfaceView
public MyGLSurfaceView(Context context) {
super(context);
this.context = context;
setEGLContextClientVersion(2);
setEGLConfigChooser(true);
//setZOrderOnTop(true);
//setEGLConfigChooser(8, 8, 8, 8, 16, 0);
//setEGLConfigChooser(8, 8, 8, 8, 24, 0);
//getHolder().setFormat(PixelFormat.RGBA_8888);
mRenderer = new Renderer(context);
setRenderer(mRenderer);
}
My renderer
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
glEnable(GL_DEPTH_TEST);
mushroom = new Mushroom();
textureProgram = new TextureShaderProgram(context);
texture = TextureHelper.loadTexture(context, R.drawable.mushroom);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
glViewport(0, 0, width, height);
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0f, 10f);
setLookAtM(viewMatrix, 0, 0f, 1.2f, -10.2f, 0f, 0f, 0f, 0f, 1f, 0f);
}
@Override
public void onDrawFrame(GL10 glUnused) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
glDepthFunc(GL_LEQUAL);
//glDepthMask(true);
positionMushroomInScene();
textureProgram.useProgram();
textureProgram.setUniforms(modelViewProjectionMatrix, texture);
mushroom.bindData(textureProgram);
mushroom.draw();
//glDepthFunc(GL_LESS);
}
private void positionMushroomInScene() {
setIdentityM(modelMatrix, 0);
translateM(modelMatrix, 0, 0f, 0f, 5f);
rotateM(modelMatrix, 0, -yRotation, 1f, 0f, 0f);
rotateM(modelMatrix, 0, xRotation, 0f, 1f, 0f);
multiplyMM(modelViewProjectionMatrix, 0, viewProjectionMatrix,
0, modelMatrix, 0);
}
My matrix Helper
public static void perspectiveM(float[] m, float yFovInDegrees, float aspect, float n, float f) {
final float angleInRadians = (float) (yFovInDegrees * Math.PI / 180.0);
final float a = (float) (1.0 / Math.tan(angleInRadians / 2.0));
m[0] = a / aspect;
m[1] = 0f;
m[2] = 0f;
m[3] = 0f;
m[4] = 0f;
m[5] = a;
m[6] = 0f;
m[7] = 0f;
m[8] = 0f;
m[9] = 0f;
m[10] = -((f + n) / (f - n));
m[11] = -1f;
m[12] = 0f;
m[13] = 0f;
m[14] = -((2f * f * n) / (f - n));
m[15] = 0f;
}
The problem is most likely with the way you set up your projection matrix:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0f, 10f);
The 4th argument in your definition of this function is the near plane. This value should never be 0.0. It should typically be a reasonable fraction of the far distance. Choosing the ideal value can be somewhat of a tradeoff. The larger far / near is, the less depth precision you get. On the other hand, if you set the near value too large, you risk clipping off close geometry that you actually wanted to see.
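To see why concretely, plug n = 0 into the perspectiveM helper above: m[10] becomes -((f + 0) / (f - 0)) = -1 and m[14] becomes -((2 · f · 0) / (f - 0)) = 0. For a vertex at eye-space depth z, clip-space depth is then m[10] · z + m[14] = -z while clip-space w is m[11] · z = -z, so after the perspective divide every vertex ends up at NDC depth -z / -z = 1. The whole scene collapses onto a single depth value, the depth test has nothing to compare, and you get exactly the see-through triangles and strobing described above.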
A ratio of maybe 100 or 1000 for far / near should normally give you reasonable depth precision, without undesirable front clipping. You'll need to be a little more conservative with the ratio if you use a 16-bit depth buffer than if you have a 24-bit depth buffer.
For your purpose, try changing near to 0.1, and see how that works for you:
MatrixHelper.perspectiveM(projectionMatrix, 45, (float) width
/ (float) height, 0.1f, 10f);
I have a cube that rotates around the center of the coordinate system, but the problem is that it rotates very slowly. So, in my case, how do I set the rotation speed?
The following three methods update the mCurrentModelMatrix with the given model transformation. These are stateful accumulative methods.
public void trnslate(float x, float y, float z)
{
float[] tempModelMatrix = new float[16];
Matrix.setIdentityM(tempModelMatrix, 0);
Matrix.translateM(tempModelMatrix,0,x,y,z);
Matrix.multiplyMM(this.mCurrentModelMatrix, 0,
tempModelMatrix, 0, this.mCurrentModelMatrix, 0);
}
public void rotate(float angle, float x, float y, float z)
{
float[] tempModelMatrix = new float[16];
Matrix.setIdentityM(tempModelMatrix, 0);
Matrix.rotateM(tempModelMatrix,0,angle,x,y,z);
Matrix.multiplyMM(this.mCurrentModelMatrix, 0,
tempModelMatrix, 0, this.mCurrentModelMatrix, 0);
}
public void scale(float xFactor, float yFactor, float zFactor)
{
float[] tempModelMatrix = new float[16];
Matrix.setIdentityM(tempModelMatrix, 0);
Matrix.scaleM(tempModelMatrix,0,xFactor,yFactor,zFactor);
Matrix.multiplyMM(this.mCurrentModelMatrix, 0,
tempModelMatrix, 0, this.mCurrentModelMatrix, 0);
}
/*
* Calculate the final model view matrix
* 1. Order of matrix multiplication is important
* 2. MVPmatrix = proj * view * model;
* 3. Setup the MVP matrix in the vertex shader memory
*/
protected void setupMatrices()
{
float[] tempModelMatrix = new float[16];
Matrix.setIdentityM(tempModelMatrix, 0);
//translate the model combo next
Matrix.multiplyMM(mMVPMatrix, 0, //matrix and offset
mCurrentModelMatrix, 0,
tempModelMatrix, 0);
//translate eye coordinates first
Matrix.multiplyMM(mMVPMatrix, 0,
this.mVMatrix, 0,
mMVPMatrix, 0);
//Project it: screen coordinates
Matrix.multiplyMM(mMVPMatrix, 0,
mProjMatrix, 0,
mMVPMatrix, 0);
//Set the vertex uniform handler representing the MVP matrix
GLES20.glUniformMatrix4fv(muMVPMatrixHandle, //uniform handle
1, //number of uniforms. 1 if it is not an array
false, //transpose: must be false
mMVPMatrix, //client matrix memory pointer
0); //offset
}
draw method
// Drawing operation
@Override
protected void draw(GL10 gl, int positionHandle) {
// Hide the hidden surfaces using these APIs
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LESS);
// Transfer vertices to the shader
transferVertexPoints(positionHandle);
// Transfer texture points to the shader
transferTexturePoints(getTextureHandle());
// Implement rotation from 0 to 360 degrees
// Stop when asked and restart when the stopFlag
// is set to false.
// Decide what the current angle to apply
// for rotation is.
if (stopFlag == true) {
// stop rotation
curAngle = stoppedAtAngle;
} else {
curAngle += 1.0f;
}
if (curAngle > 360) {
curAngle = 0;
}
// Tell the base class to reset its
// matrices to identity matrices.
this.initializeMatrices();
// The order of these model transformations matters.
// Each model transformation is specified with
// respect to the last one, and not the very first.
// Center the cube
this.trnslate(0, 0, -1);
// Rotate it around y axis
this.rotate(curAngle, 0, -1, 0);
// Decenter it to where ever you want
this.trnslate(0, -2, 2);
// Go ahead calculate the ModelViewMatrix as
// we are done with ALL of our model transformations
this.setupMatrices();
// Call glDrawArrays to use the vertices and draw
int vertexCount = mTriangleVerticesData.length / 3;
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, // what primitives to use
0, // at what point to start
vertexCount); // Starting there how many points to use
// Check if there are errors
checkGlError("glDrawArrays");
}
Thanks in advance!
You are rotating at 1 degree per frame, so it will take 360 frames to do a complete rotation.
If you want it to rotate in 2 seconds, and you were running at 30 frames per second, you would want to rotate by 6 degrees per frame, by changing this section:
if (stopFlag == true) {
// stop rotation
curAngle = stoppedAtAngle;
} else {
curAngle += 6.0f;
}
if (curAngle > 360) {
curAngle = 0;
}
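If you'd rather have the speed independent of frame rate (the 30 fps above is only an assumption), derive the angle from the clock instead of incrementing it once per frame. A minimal sketch for one full revolution every 2 seconds:
// One full 360-degree revolution per 2000 ms, regardless of how fast frames arrive.
long t = SystemClock.uptimeMillis() % 2000L;
curAngle = 360.0f * t / 2000.0f;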
As a beginner to Android and OpenGL ES 2.0, I'm testing simple things and seeing how they go.
I downloaded the sample at http://developer.android.com/training/graphics/opengl/touch.html .
I changed the code to check if I could animate a rotation of the camera around the (0,0,0) point, the center of the square.
So I did this:
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
long time = SystemClock.uptimeMillis() % 4000L;
float angle = ((float) (2*Math.PI)/ (float) 4000) * ((int) time);
Matrix.setLookAtM(mVMatrix, 0, (float) (3*Math.sin(angle)), 0, (float) (3.0f*Math.cos(angle)), 0 ,0, 0, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
// Draw square
mSquare.draw(mMVPMatrix);
}
I expected the camera to always look at the center of the square (the (0,0,0) point), but that's not what happens. The camera is indeed rotating around the square, but the square does not stay in the center of the screen; instead it moves along the X axis:
I also expected that if we gave the eyeX and eyeY the same values as centerX and centerY, like this:
Matrix.setLookAtM(mVMatrix, 0, 1, 1, -3, 1 ,1, 0, 0f, 1.0f, 0.0f);
the square would keep its shape (I mean, your field of vision would be dragged along a plane parallel to the square), but that's also not what happens:
This is my projection matrix:
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
What is going on here?
Looking at the source code to the example you downloaded, I can see why you're having that problem, it has to do with the order of the matrix multiplication.
Typically in OpenGL source you see matrices set up such that
transformed vertex = projMatrix * viewMatrix * modelMatrix * input vertex
However, in the source of the example program that you downloaded, their shader is set up like this:
" gl_Position = vPosition * uMVPMatrix;"
With the position on the other side of the matrix. You can work with OpenGL this way, but it requires that you reverse the left/right order of all your matrix multiplications.
Long story short, in your case, you should change your shader to read:
" gl_Position = uMVPMatrix * vPosition;"
and then I believe you will get the expected behavior.
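For completeness, a minimal sketch of the corrected shader string, using the same names as the training example:
private final String vertexShaderCode =
    "uniform mat4 uMVPMatrix;" +
    "attribute vec4 vPosition;" +
    "void main() {" +
    // Matrix on the left, vertex on the right:
    "  gl_Position = uMVPMatrix * vPosition;" +
    "}";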
I'm building an Android application that uses OpenGL ES 2.0 and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject but I'm either doing it wrong or just don't understand it.
This is my attempt....
public void getWorldFromScreen(float x, float y) {
int viewport[] = { 0, 0, width , height};
float startY = ((float) (height) - y);
float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] far = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] mv = new float[16];
Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);
GLU.gluUnProject(x, startY, 0, mv, 0, mProjectionMatrix, 0, viewport, 0, near, 0);
GLU.gluUnProject(x, startY, 1, mv, 0, mProjectionMatrix, 0, viewport, 0, far, 0);
float nearX = near[0] / near[3];
float nearY = near[1] / near[3];
float nearZ = near[2] / near[3];
float farX = far[0] / far[3];
float farY = far[1] / far[3];
float farZ = far[2] / far[3];
}
The numbers I am getting don't seem right. Is this the right way to use this method? Does it work for OpenGL ES 2.0? Should I make the model matrix an identity matrix before these calculations (Matrix.setIdentityM(mModelMatrix, 0))?
As a follow-up, if this is correct, how do I pick the output Z? Basically, I always know at what distance I want the world coordinates to be, but the Z parameter in GLU.gluUnProject appears to be some kind of interpolation between the near and far plane. Is it just a linear interpolation?
Thanks in advance
/**
* Calculates the transform from screen coordinate
* system to world coordinate system coordinates
* for a specific point, given a camera position.
*
* @param touch Vec2 point of screen touch, the
*        actual position on the physical screen (e.g. 160, 240)
* @param cam camera object with x,y,z of the
*        camera and screenWidth and screenHeight of
*        the device.
* @return position in WCS.
*/
public Vec2 GetWorldCoords( Vec2 touch, Camera cam)
{
// Initialize auxiliary variables.
Vec2 worldPos = new Vec2();
// SCREEN height & width (e.g. 320 x 480)
float screenW = cam.GetScreenWidth();
float screenH = cam.GetScreenHeight();
// Auxiliary matrix and vectors
// to deal with ogl.
float[] invertedMatrix, transformMatrix,
normalizedInPoint, outPoint;
invertedMatrix = new float[16];
transformMatrix = new float[16];
normalizedInPoint = new float[4];
outPoint = new float[4];
// Invert y coordinate, as android uses
// top-left, and ogl bottom-left.
int oglTouchY = (int) (screenH - touch.Y());
/* Transform the screen point to clip
space in ogl (-1,1) */
normalizedInPoint[0] =
(float) ((touch.X()) * 2.0f / screenW - 1.0);
normalizedInPoint[1] =
(float) ((oglTouchY) * 2.0f / screenH - 1.0);
normalizedInPoint[2] = - 1.0f;
normalizedInPoint[3] = 1.0f;
/* Obtain the transform matrix and
then the inverse. */
Print("Proj", getCurrentProjection(gl));
Print("Model", getCurrentModelView(gl));
Matrix.multiplyMM(
transformMatrix, 0,
getCurrentProjection(gl), 0,
getCurrentModelView(gl), 0);
Matrix.invertM(invertedMatrix, 0,
transformMatrix, 0);
/* Apply the inverse to the point
in clip space */
Matrix.multiplyMV(
outPoint, 0,
invertedMatrix, 0,
normalizedInPoint, 0);
if (outPoint[3] == 0.0)
{
// Avoid /0 error.
Log.e("World coords", "ERROR!");
return worldPos;
}
// Divide by the homogeneous coordinate w
// (outPoint[3]) to find out the real position.
worldPos.Set(
outPoint[0] / outPoint[3],
outPoint[1] / outPoint[3]);
return worldPos;
}
The algorithm is explained further here.
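Regarding the follow-up question about picking the output Z: winZ maps to depth non-linearly, so it is not a linear interpolation in eye space. The usual trick is to unproject winZ = 0 and winZ = 1, treat the two results as a ray, and intersect that ray with whatever surface you care about. A sketch for a ground plane at world z = 0, reusing the near/far values computed in the question's code:
// Parametrize the ray P(t) = near + t * (far - near) and solve for z == 0.
float t = (0f - nearZ) / (farZ - nearZ);
float worldX = nearX + t * (farX - nearX);
float worldY = nearY + t * (farY - nearY);
// (worldX, worldY, 0) is where the touch ray hits the z = 0 plane.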
Hopefully my question (and answer) should help you out:
How to find absolute position of click while zoomed in
It has not only the code but also diagrams and diagrams and diagrams explaining it :) Took me ages to figure it out as well.
IMHO one doesn't need to re-implement this function...
I experimented with Erol's solution and it worked, so thanks a lot for it Erol.
Furthermore, I played with
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
and it works fine as well in my tiny beginner 2D OpenGL ES 2.0 example project:
public void onSurfaceChanged(GL10 unused, int width, int height) {...
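For reference, a minimal orthographic setup for a 2D project looks something like this (mtrxProjection as named above; mapping one GL unit per pixel is my assumption, not part of the original post):
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // One world unit per pixel; near/far just need to bracket z = 0.
    Matrix.orthoM(mtrxProjection, 0, 0f, width, 0f, height, -1f, 1f);
}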