Android augmented reality: convert camera view to screen coordinates

I'm currently developing my own augmented reality app. I'm trying to write my own AR engine, since all the frameworks I've seen so far only work with GPS data.
It's going to be used indoors; I'm getting my position data from another system.
What I have so far is:
float[] vector = { 2, 2, 1, 0 };
float transformed[] = new float[4];
float[] R = new float[16];
float[] I = new float[16];
float[] r = new float[16];
float[] S = { 400f, 1, 1, 1, 1, -240f, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
float[] B = { 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 400f, 240f, 1, 1 };
float[] temp1 = new float[16];
float[] temp2 = new float[16];
float[] frustumM = {1.5f,0,0,0,0,-1.5f,0,0,0,0,1.16f,1,0,0,-3.24f,0};
// Rotation matrix: transformation from device coordinates to world coordinates
SensorManager.getRotationMatrix(R, I, accelerometerValues, geomagneticMatrix);
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_X, SensorManager.AXIS_Z, r);
//invert to get transformation for world to camera
Matrix.invertM(R, 0, r, 0);
Matrix.multiplyMM(temp1, 0, frustumM, 0, R, 0);
Matrix.multiplyMM(temp2, 0, S, 0, temp1, 0);
Matrix.multiplyMM(temp1, 0, B, 0, temp2, 0);
Matrix.multiplyMV(transformed, 0, temp1, 0, vector, 0);
I know it's ugly code, but I'm just trying to get the object "vector" painted correctly, with my position being (0, 0, 0) for now.
My screen size (800x480) is hard-coded in the matrices S and B.
The result should be stored in "transformed" and should be in the form transformed = {x, y, z, w}.
For the math I've used this link: http://www.inf.fu-berlin.de/lehre/WS06/19605_Computergrafik/doku/rossbach_siewert/camera.html
Sometimes my graphic gets painted, but it jumps around and is not at the correct position. I've logged the rotation values with SensorManager.getOrientation and they seem OK and stable.
So I think I'm getting the math wrong somewhere, but I couldn't find better sources on how to transform my data. Could anyone help me, please?
Thanks in advance,
martin
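Two things stand out in the code above, for reference: the input vector uses w = 0, which marks a direction rather than a point (with w = 0 the translation part of any matrix never applies; a point needs w = 1), and the off-diagonal 1s in S and B would normally be 0 in a scale or translation matrix. The pipeline being assembled here can be sketched without Android at all, in plain Java with column-major matrices as android.opengl.Matrix uses them; the identity view-projection and the 800x480 screen size below are illustrative values, not a drop-in fix:

```java
// Minimal sketch of a world -> clip -> NDC -> screen pipeline.
// Matrices are column-major float[16], like android.opengl.Matrix uses.
public class WorldToScreen {

    // Multiply a column-major 4x4 matrix by a 4-component column vector.
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row]     * v[0] + m[4 + row]  * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    // Project a homogeneous world-space point (w = 1) to pixel coordinates.
    static float[] worldToScreen(float[] viewProjection, float[] world,
                                 float width, float height) {
        float[] clip = multiplyMV(viewProjection, world);
        // Perspective divide: clip space -> normalized device coordinates.
        float ndcX = clip[0] / clip[3];
        float ndcY = clip[1] / clip[3];
        // Viewport transform: NDC [-1, 1] -> pixels, with y flipped
        // because screen coordinates grow downward.
        float screenX = (ndcX * 0.5f + 0.5f) * width;
        float screenY = (1.0f - (ndcY * 0.5f + 0.5f)) * height;
        return new float[] { screenX, screenY };
    }
}
```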

Related

How can I make an AR video always face the user's camera with marker-based Vuforia (native Android)?

I am working on marker-based AR with native Android Vuforia. What I am trying to do is move my video object according to the camera (it should always face the camera). I tried the following; it works sometimes, but not consistently:
public void renderFrame(State state, float[] projectionMatrix) {
    mSampleAppRenderer.renderVideoBackground();
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    isTracking = false;
    for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++) {
        TrackableResult trackableResult = state.getTrackableResult(tIdx);
        ImageTarget imageTarget = (ImageTarget) trackableResult.getTrackable();
        imageTarget.startExtendedTracking();
        isTracking = true;
        float[] modelViewMatrixVideo = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
        float[] modelViewProjectionVideo = new float[16];
        Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(
                SampleMath.Matrix44FInverse(Tool.convertPose2GLMatrix(trackableResult.getPose())));
        Matrix.translateM(modelViewMatrixVideo, 0, 0f, 0f, 1f);
        Matrix.rotateM(modelViewMatrixVideo, 0,
                (float) Math.toDegrees(Math.asin(-invTranspMV.getData()[6])), 0.0f, 0.0f, 1.0f);
        Matrix.multiplyMM(modelViewProjectionVideo, 0, projectionMatrix, 0, modelViewMatrixVideo, 0);
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        GLES20.glUseProgram(videoPlaybackShaderID);
        GLES20.glVertexAttribPointer(videoPlaybackVertexHandle, 3, GLES20.GL_FLOAT, false, 0, quadVertices);
        GLES20.glVertexAttribPointer(videoPlaybackTexCoordHandle, 2, GLES20.GL_FLOAT, false, 0,
                fillBuffer(videoQuadTextureCoordsTransformedStones));
        GLES20.glEnableVertexAttribArray(videoPlaybackVertexHandle);
        GLES20.glEnableVertexAttribArray(videoPlaybackTexCoordHandle);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, videoPlaybackTextureID);
        GLES20.glUniformMatrix4fv(videoPlaybackMVPMatrixHandle, 1, false, modelViewProjectionVideo, 0);
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, NUM_QUAD_INDEX, GLES20.GL_UNSIGNED_SHORT, quadIndices);
        GLES20.glDisableVertexAttribArray(videoPlaybackVertexHandle);
        GLES20.glDisableVertexAttribArray(videoPlaybackTexCoordHandle);
        GLES20.glUseProgram(0);
        GLES20.glDisable(GLES20.GL_BLEND);
        SampleUtils.checkGLError("VideoPlayback renderFrame");
    }
    GLES20.glDisable(GLES20.GL_DEPTH_TEST);
    Renderer.getInstance().end();
}
I have tried the above so far; it works sometimes, but not properly.
Please help me, I have been trying to do this for a week.

I am trying to move my video object according to the camera (it should always face the camera)
If you want an object to face the camera (billboarding), then you have to use a model matrix which is the inverse view matrix, but without the translation part.
Use Matrix44FInverse to get the inverse of a Matrix44F:
public void renderFrame(State state, float[] projectionMatrix) {
    .....
    // get the view matrix and set the translation part to (0, 0, 0)
    float[] tempViewMat = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
    tempViewMat[12] = 0;
    tempViewMat[13] = 0;
    tempViewMat[14] = 0;
    // create the billboard matrix
    Matrix44F billboardMatrix = new Matrix44F();
    billboardMatrix.setData(tempViewMat);
    billboardMatrix = SampleMath.Matrix44FInverse(billboardMatrix);
    // calculate the model view projection matrix
    float[] viewMatrixVideo = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
    float[] modelViewVideo = new float[16];
    Matrix.multiplyMM(modelViewVideo, 0, viewMatrixVideo, 0, billboardMatrix.getData(), 0);
    float[] modelViewProjectionVideo = new float[16];
    Matrix.multiplyMM(modelViewProjectionVideo, 0, projectionMatrix, 0, modelViewVideo, 0);
    .....
}
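Why this works can be checked outside Vuforia: for a view matrix whose upper-left 3x3 is a pure rotation R, using R's inverse as the model matrix makes the combined model-view rotation R * R^-1 = I, so the quad keeps facing the camera no matter how the camera turns. A plain-Java check of that identity, in the same column-major layout android.opengl.Matrix uses (the class and method names here are illustrative):

```java
// Check that a view rotation times its inverse is the identity, which is
// why the billboard quad keeps facing the camera.
public class BillboardCheck {

    // Column-major 4x4 multiply: c = a * b.
    static float[] multiplyMM(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float sum = 0;
                for (int k = 0; k < 4; k++) {
                    sum += a[k * 4 + row] * b[col * 4 + k];
                }
                c[col * 4 + row] = sum;
            }
        }
        return c;
    }

    // Column-major rotation about the Y axis.
    static float[] rotationY(double degrees) {
        float c = (float) Math.cos(Math.toRadians(degrees));
        float s = (float) Math.sin(Math.toRadians(degrees));
        return new float[] { c, 0, -s, 0,  0, 1, 0, 0,  s, 0, c, 0,  0, 0, 0, 1 };
    }

    // For a pure rotation matrix the inverse is simply the transpose.
    static float[] transpose(float[] m) {
        float[] t = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                t[col * 4 + row] = m[row * 4 + col];
        return t;
    }

    static boolean isIdentity(float[] m) {
        for (int i = 0; i < 16; i++) {
            float expected = (i % 5 == 0) ? 1f : 0f; // indices 0, 5, 10, 15 are the diagonal
            if (Math.abs(m[i] - expected) > 1e-5f) return false;
        }
        return true;
    }
}
```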
I used your code, but my AR object rotates in every direction, and I want something different. I have one video that I want to place vertically on the marker, and only if the user moves left or right do I want it to rotate to face the camera, so that my AR video looks the same (vertical) from any view.
What you want is to fix the upward direction, but to orient the normal vector toward the line of sight.
The line of sight is the negative Z axis of view space in a right-handed coordinate system.
Matrix44F inverse_view = SampleMath.Matrix44FInverse(
        Tool.convertPose2GLMatrix(trackableResult.getPose()));
// line of sight
Vec3F los = new Vec3F(-inverse_view.getData()[8], -inverse_view.getData()[9], -inverse_view.getData()[10]);
Either the Y axis (0, 1, 0) or the Z axis (0, 0, 1) of the model has to become the Z axis of the orientation matrix.
The X axis is the cross product (Vec3FCross) of the line of sight and the Z axis of the orientation matrix, e.g.:
Vec3F z_axis = new Vec3F(0, 0, 1);
Vec3F x_axis = Vec3FNormalize(Vec3FCross(los, z_axis));
Vec3F y_axis = Vec3FCross(z_axis, x_axis);
float[] orientationMatrix = new float[]{
x_axis.getData()[0], x_axis.getData()[1], x_axis.getData()[2], 0,
y_axis.getData()[0], y_axis.getData()[1], y_axis.getData()[2], 0,
z_axis.getData()[0], z_axis.getData()[1], z_axis.getData()[2], 0,
0, 0, 0, 1
};
// calculate the model view projection matrix
float[] viewMatrixVideo = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
float[] modelViewVideo = new float[16];
Matrix.multiplyMM(modelViewVideo, 0, viewMatrixVideo, 0, orientationMatrix, 0);
float[] modelViewProjectionVideo = new float[16];
Matrix.multiplyMM(modelViewProjectionVideo, 0, projectionMatrix, 0, modelViewVideo, 0);
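The cross-product construction above can be exercised in plain Java to convince yourself the resulting axes are orthonormal; replacing Vec3F with bare float[3] arrays is an illustrative substitution, not the Vuforia API:

```java
// Build the orientation basis from a line of sight and a fixed Z axis,
// mirroring the Vec3FCross / Vec3FNormalize construction above.
public class SightBasis {

    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new float[] { v[0] / len, v[1] / len, v[2] / len };
    }

    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // x = normalize(los x z), y = z x x; z stays fixed, which is what
    // keeps the video vertical while it turns toward the camera.
    static float[][] basis(float[] los, float[] zAxis) {
        float[] x = normalize(cross(los, zAxis));
        float[] y = cross(zAxis, x);
        return new float[][] { x, y, zAxis };
    }
}
```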

Rotating camera around Y axis not making AR face camera (Vuforia native Android)

I am making an AR video app using Vuforia. I want my video to appear vertical and always face the camera when I move the camera around the Y axis; in all other orientations my AR video should stay vertical. I have edited the Vuforia videoplayback sample and added the following code, but I am getting unexpected behavior:
Matrix44F inverse_view = SampleMath.Matrix44FInverse(Tool.convertPose2GLMatrix(trackableResult.getPose()));
// line of sight
Vec3F los = new Vec3F(-inverse_view.getData()[8], -inverse_view.getData()[9], -inverse_view.getData()[10]);
Vec3F z_axis = new Vec3F(0, 0, 1);
Vec3F x_axis = Vec3FNormalize(Vec3FCross(los, z_axis));
Vec3F y_axis = Vec3FCross(z_axis, x_axis);
float[] orientationMatrix = new float[]{
x_axis.getData()[0], x_axis.getData()[1], x_axis.getData()[2], 0,
y_axis.getData()[0], y_axis.getData()[1], y_axis.getData()[2], 0,
z_axis.getData()[0], z_axis.getData()[1], z_axis.getData()[2], 0,
0, 0, 0, 1
};
// calculate the model view projection matrix
float[] viewMatrixVideo = Tool.convertPose2GLMatrix(trackableResult.getPose()).getData();
float[] modelViewVideo = new float[16];
Matrix.multiplyMM(modelViewVideo, 0, viewMatrixVideo, 0, orientationMatrix, 0);
float[] modelViewProjectionVideo = new float[16];
Matrix.scaleM(modelViewVideo, 0,
        targetPositiveDimensions[currentTarget].getData()[0],
        targetPositiveDimensions[currentTarget].getData()[0] * videoQuadAspectRatio[currentTarget],
        targetPositiveDimensions[currentTarget].getData()[0]);
Matrix.multiplyMM(modelViewProjectionVideo, 0, projectionMatrix, 0, modelViewVideo, 0);
But with the above code I am getting rotation like in the following video:
https://www.youtube.com/watch?v=b032rThCseU
whereas I want my video to rotate like in the following image:
Please help me.

Render a 3D object off the center of the screen (ARToolKit, Android)

Hi, I am working on an AR Android app using ARToolKit6. In this app I want to view my 3D object (a cube) on the left half of the screen. Eventually I want to display 3 cubes on the screen, each in one third of the screen area.
I was able to scale the 3D object by tweaking the model-view matrix. From what I have read so far, I think I need to tweak the projection matrix to achieve my goal. I have tried looking for solutions online but couldn't get them to work. Can anyone point me down the right path?
for (int trackableUID : trackableUIDs) {
    // If the marker is visible, apply its transformation, and render a cube
    if (ARToolKit.getInstance().queryMarkerVisible(trackableUID)) {
        float[] projectionMatrix = ARToolKit.getInstance().getProjectionMatrix();
        float[] modelViewMatrix = ARToolKit.getInstance().queryMarkerTransformation(trackableUID);
        float[] scalingMat = {1, 0, 0, 0, 0, 3.0f, 0, 0, 0, 0, 1.0f, 0, 0.0f, 0, 0, 1};
        float[] newModelView = modelViewMatrix;
        multiplyMM(newModelView, 0, modelViewMatrix, 0, scalingMat, 0);
        cube.draw(projectionMatrix, newModelView);
    }
}
I followed the links Set origin to top-left corner of screen in OpenGL ES 2 and (OpenGL ES) Objects away from view centre are stretched. So I translated the model-view matrix, but it doesn't solve the problem: the 3D object still appears at the center of the screen. Can you explain how I should approach this problem? Thanks.
@Override
public void draw() {
    super.draw();
    GLES20.glEnable(GLES20.GL_CULL_FACE);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    GLES20.glFrontFace(GLES20.GL_CCW);
    // Look for trackables, and draw on each found one.
    for (int trackableUID : trackableUIDs) {
        // If the marker is visible, apply its transformation, and render a cube
        if (ARToolKit.getInstance().queryMarkerVisible(trackableUID)) {
            float[] projectionMatrix = ARToolKit.getInstance().getProjectionMatrix();
            float[] modelViewMatrix = ARToolKit.getInstance().queryMarkerTransformation(trackableUID);
            float[] scalingMat = {1, 0, 0, 0, 0, 3.0f, 0, 0, 0, 0, 1.0f, 0, 0.0f, 0, 0, 1};
            multiplyMM(modelViewMatrix, 0, scalingMat, 0, modelViewMatrix, 0);
            float[] rightModelMatrix = new float[16];
            Matrix.setIdentityM(rightModelMatrix, 0);
            // Translate outer sphere by 5 in x.
            Matrix.translateM(rightModelMatrix, 0, 5.0f, 0.0f, 0.0f);
            Matrix.multiplyMM(modelViewMatrix, 0, rightModelMatrix, 0, modelViewMatrix, 0);
            cube.draw(projectionMatrix, modelViewMatrix);
        }
    }
}
I also tried the following, but the object is still displayed at the center of the screen:
glMatrixMode(GL_PROJECTION);
glTranslatef(5f, 0f, 0f);
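One approach that may fit here (an assumption, not something tested against ARToolKit itself): instead of translating the model-view matrix, which moves the cube in marker units and so depends on its distance, pre-multiply the projection matrix by a translation in normalized device coordinates. A shift of -0.5 in NDC X moves everything a quarter of the screen width to the left regardless of depth, because the translation scales with w and survives the perspective divide as a constant NDC offset. A plain-Java sketch of the idea (column-major float[16] matrices, as in android.opengl.Matrix):

```java
// Shift everything after projection by translating in NDC.
public class NdcShift {

    // Column-major 4x4 multiply: c = a * b.
    static float[] multiplyMM(float[] a, float[] b) {
        float[] c = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float sum = 0;
                for (int k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
                c[col * 4 + row] = sum;
            }
        return c;
    }

    // Multiply a column-major 4x4 matrix by a 4-component column vector.
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++)
            r[row] = m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3];
        return r;
    }

    // newProjection = T(ndcX) * projection: the translation is applied in
    // clip space, after the original projection.
    static float[] shiftProjection(float[] projection, float ndcX) {
        float[] t = { 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  ndcX, 0, 0, 1 };
        return multiplyMM(t, projection);
    }
}
```

In the draw loop above, something like cube.draw(NdcShift.shiftProjection(projectionMatrix, -0.5f), modelViewMatrix) would then center the cube in the left half of the screen; the -0.5f value is illustrative.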

Get OpenGL LookAt Position

In Android OpenGL,
there is the command setLookAtM to specify the position of the camera view:
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
If I rotate the camera using the command rotateM:
Matrix.rotateM(mViewMatrix, 0, angle, 0.0f, 1.0f, 0.0f);
then how can I get the exact 'look-at' point of the camera view from mViewMatrix?
The camera view position I want is the x, y, z of the point the camera looks at.
Be careful: calling rotateM doesn't technically rotate the camera; it applies a rotation to any future objects drawn (which is not the same thing).
However in general, if you have a view matrix, and you want to see which direction it is facing, you want to transform the eye space forward vector (0,0,1) by the inverse of the view matrix.
As the view matrix transforms vectors in world space into eye space, the inverse view matrix transforms vectors from eye space into world space.
So you can:
1. Apply arbitrary operations to the current view matrix.
2. Take its inverse.
3. Multiply the inverse matrix by (0, 0, 1, 0) (note this is the same as just pulling out the third column of the inverse matrix).
After step 3 you will have the direction of the camera eye in world space. Add this to the eye's position, and you should know at what point it is pointing.
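For the common case where the view matrix's upper-left 3x3 is a pure rotation (as produced by setLookAtM with a unit up vector and no scaling), the inverse is the transpose, so steps 2 and 3 collapse into reading the third row of the view matrix itself. A small sketch of that shortcut; whether you then negate the result depends on your convention, since an OpenGL camera looks down its negative Z axis:

```java
// Read the world-space direction of the camera's eye-space Z axis
// straight out of a column-major view matrix.
public class CameraForward {

    // Third column of inverse(view) == third row of view when the
    // upper-left 3x3 is a pure rotation (inverse == transpose).
    // Column-major element (row, col) lives at index col * 4 + row.
    static float[] eyeZAxisInWorld(float[] view) {
        return new float[] { view[2], view[6], view[10] };
    }
}
```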
I have solved my question,
and this is my solution.
First, I need these 3 matrices to get the resulting x, y, z position:
private float[] mRotXMatrix = new float[] {
1, 0, 0, 0,
0, 0, 1, 0,
0, 1, 0, 0,
0, 0, 0, 1 };
private float[] mRotYMatrix = new float[] {
0, 0, 1, 0,
0, 1, 0, 0,
1, 0, 0, 0,
0, 0, 0, 1 };
private float[] mRotZMatrix = new float[] {
0, 1, 0, 0,
1, 0, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1 };
Then, in the onSensorChanged(SensorEvent e) method, I rotate all 3 of these matrices together with my camera view like this:
Matrix.multiplyMM(mViewMatrix, 0, deltaRotationMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mRotXMatrix, 0, deltaRotationMatrix, 0, mRotXMatrix, 0);
Matrix.multiplyMM(mRotYMatrix, 0, deltaRotationMatrix, 0, mRotYMatrix, 0);
Matrix.multiplyMM(mRotZMatrix, 0, deltaRotationMatrix, 0, mRotZMatrix, 0);
And to get the X, Y, Z of my camera view, I just read them from these matrices:
viewX = mRotXMatrix[2];
viewY = -mRotYMatrix[6]; // +/- up to your initial camera view
viewZ = mRotZMatrix[10];

OpenGL ES: position after glRotatef and glTranslatef

I'm new to OpenGL ES. Can you help me calculate the world coordinates of a cube after a rotate and a translate? For example:
first I rotate the cube:
gl.glRotatef(90, 1, 0, 0);
then change its position:
gl.glTranslatef(10, 0, 0);
How can I calculate its "new" world coordinates? I read about glGetFloatv(GL_MODELVIEW_MATRIX, matrix) but don't understand it. Maybe someone can provide sample code.
EDIT:
I found a solution. Android code:
float[] matrix = new float[] {
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1,
};
Matrix.rotateM(matrix, 0, rx, 1, 0, 0);
Matrix.rotateM(matrix, 0, ry, 0, 1, 0);
Matrix.rotateM(matrix, 0, rz, 0, 0, 1);
Matrix.translateM(matrix, 0, x, y, z);
x = matrix[12];
y = matrix[13];
z = matrix[14];
Thanks for the answers.
Although you already have an answer for the part you wanted, in terms of the rest of your question, you'd do something like the following (please forgive me if I make any Java errors; I'm not really an Android person):
float[] matrix = new float[16];
gl.glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
// check out matrix[12], matrix[13] and matrix[14] now for the world location
// that (0, 0, 0) [, 1)] would be mapped to
That glGetFloatv call just reads the current value of the model-view matrix back into the specified float buffer. In OpenGL, 4x4 matrices are laid out so that index 0 is the top-left element, index 3 is the bottom element of the first column, and index 12 is the rightmost element of the first row. That's usually referred to as column-major layout, though the OpenGL FAQ isn't entirely happy with the term.
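That layout is easy to pin down with a couple of lines: element (row, column) of a column-major OpenGL matrix lives at index column * 4 + row, which is why the translation column occupies indices 12 through 14. A quick illustration:

```java
// Column-major indexing: element (row, col) is m[col * 4 + row], so the
// translation (the last column) lands at indices 12, 13 and 14.
public class ColumnMajor {

    static float[] translation(float x, float y, float z) {
        return new float[] { 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  x, y, z, 1 };
    }

    static float get(float[] m, int row, int col) {
        return m[col * 4 + row];
    }
}
```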
