Reading the transformation matrix from AndAR - Android

I am developing an Android-based tracking system using the camera and other sensors on Android. I am interested in reading the transformation matrix from AndAR rather than displaying some object (e.g. a cube) when the marker is detected. I have another tracking system, developed using a flavor of ARToolKit called JARToolKit, that runs on a desktop machine and gives the transformation matrix between the web camera and the pattern.
Right now I am getting the transformation matrix from AndAR, but compared with the transformation matrix I get from JARToolKit, it is totally different. The reason could be the following problems:
The surface preview that I see on Android is always rotated by 90 degrees, so the X and Y coordinates in the translation matrix swap positions.
I am not sure about the unit of the translation matrix. It comes to around 4 units per cm in the physical world, but I have no way to verify this.
I would appreciate it if anyone could help me address these questions or let me know if I am missing something. Thanks in advance.
Following is the code that I am using. It is pretty much the same as in the AndAR documentation.
boolean keepRunning = true;
CustomObject object_hiro = null;
try {
    ARToolkit artoolkit = getArtoolkit();
    // 80.0 is the marker width; the marker center defaults to (0, 0)
    object_hiro = new CustomObject("test_hiro", "hiro.patt", 80.0, new double[] { 0, 0 });
    artoolkit.registerARObject(object_hiro);
} catch (AndARException ex) {
    // don't swallow the exception silently
    ex.printStackTrace();
}
while (keepRunning) {
    // 3x4 pose matrix, flattened into an array of 12 doubles
    double[] transMatrix = (double[]) object_hiro.getTransMatrix();
}
And here is CustomObject.java:
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
import edu.dhbw.andar.ARObject;
import edu.dhbw.andar.pub.SimpleBox;
import edu.dhbw.andar.util.GraphicsUtil;
/**
* An example of an AR object being drawn on a marker.
* @author tobi
*
*/
public class CustomObject extends ARObject {
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter) {
super(name, patternName, markerWidth, markerCenter);
float mat_ambientf[] = {0f, 1.0f, 0f, 1.0f};
float mat_flashf[] = {0f, 1.0f, 0f, 1.0f};
float mat_diffusef[] = {0f, 1.0f, 0f, 1.0f};
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(mat_ambientf);
mat_flash = GraphicsUtil.makeFloatBuffer(mat_flashf);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(mat_diffusef);
}
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter, float[] customColor) {
super(name, patternName, markerWidth, markerCenter);
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(customColor);
}
private SimpleBox box = new SimpleBox();
private FloatBuffer mat_flash;
private FloatBuffer mat_ambient;
private FloatBuffer mat_flash_shiny;
private FloatBuffer mat_diffuse;
/**
* Everything drawn here will be drawn directly onto the marker,
* as the corresponding translation matrix will already be applied.
*/
@Override
public final void draw(GL10 gl) {
super.draw(gl);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR,mat_flash);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SHININESS, mat_flash_shiny);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, mat_diffuse);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, mat_ambient);
//draw cube
gl.glColor4f(1.0f, 0f, 0, 1.0f);
gl.glTranslatef( 0.0f, 0.0f, 12.5f );
box.draw(gl);
}
@Override
public void init(GL10 gl) {
// TODO Auto-generated method stub
}
}
Please let me know if I need to provide additional information. Thanks.

The original C ARToolKit has two types of transformation associated with a marker:
a 3x4 matrix (the computer vision matrix from the pose estimation, obtained from arGetTransMat)
a 4x4 matrix (an OpenGL-like matrix, obtained from argConvGLcpara together with the above 3x4 matrix).
In AndAR:
3x4 matrix: can be obtained by calling getTransMatrix() on your ARObject.
4x4 matrix: not publicly accessible from your ARObject; the matrix is stored in glCameraMatrix (see the code of ARObject.java).
In JARToolKit:
3x4 matrix: can be obtained by calling getTransMatrix.
4x4 matrix: can be obtained by calling getCamTransMatrix.
Maybe you are accessing different matrices in AndAR and JARToolKit.
The unit is relative to your marker size. Generally it's in mm; the 80.0 parameter in your declaration of object_hiro represents an 80 mm width. Print your marker at this size and your physical object will match the virtual content.
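For reference, here is a minimal sketch of pulling the translation out of the 3x4 matrix AndAR returns. It assumes the standard ARToolKit layout (3 rows by 4 columns, flattened row-major into 12 doubles, with the translation in the fourth column, i.e. indices 3, 7 and 11); verify this against your AndAR version:
// Sketch: read the camera-to-marker translation from the 3x4 pose matrix.
// Assumed layout: row-major double[12], translation at indices 3, 7, 11,
// in the same unit as the markerWidth you registered (mm for 80.0).
double[] m = (double[]) object_hiro.getTransMatrix();
double tx = m[3];   // translation along the camera X axis
double ty = m[7];   // translation along the camera Y axis
double tz = m[11];  // distance from the camera to the marker
// If the Android preview is rotated 90 degrees relative to the sensor,
// tx and ty will appear swapped compared to a desktop JARToolKit setup.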

Related

Convert OpenGL 3D point to 2D using GLU.gluProject

I have an OpenGL scene with a sphere having a radius of 1, and the camera being at the center of the sphere (it's a 360° picture viewer). The user can rotate the sphere by panning.
Now I need to display 2D pins "attached" to some parts of the picture. To do so, I want to convert the 3D coordinates of my pins into 2D screen coordinates and add the pin image at those screen coordinates.
I'm using GLU.gluProject and the following classes from android-apidemo:
MatrixGrabber
MatrixStack
MatrixTrackingGL
I save the projection matrix in the onSurfaceChanged method and the model-view matrix in the onDraw method (after having drawn my sphere). Then I feed GLU.gluProject with them when the user rotates the sphere, to update the pins' positions.
When I pan horizontally, the pins pan correctly, but when I pan vertically, the texture pans "faster" than the pin image (as if the pin were closer to the camera than the sphere).
Here are some relevant parts of my code:
public class CustomRenderer implements GLSurfaceView.Renderer {
MatrixGrabber mMatrixGrabber = new MatrixGrabber();
private float[] mModelView = null;
private float[] mProjection = null;
[...]
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// Get the sizes:
float side = Math.max(width, height);
int x = (int) (width - side) / 2;
int y = (int) (height - side) / 2;
// Set the viewport:
gl.glViewport(x, y, (int) side, (int) side);
// Set the perspective:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, FIELD_OF_VIEW_Y, 1, Z_NEAR, Z_FAR);
// Grab the projection matrix:
mMatrixGrabber.getCurrentProjection(gl);
mProjection = mMatrixGrabber.mProjection;
// Set to MODELVIEW mode:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void onDrawFrame(GL10 gl) {
// Load the texture if needed:
if(mTextureToLoad != null) {
mSphere.loadGLTexture(gl, mTextureToLoad);
mTextureToLoad = null;
}
// Clear:
gl.glClearColor(0.5f, 0.5f, 0.5f, 0.0f);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
// Rotate the scene:
gl.glRotatef( (1 - mRotationY + 0.25f) * 360, 1, 0, 0); // 0.25 is used to adjust the texture position
gl.glRotatef( (1 - mRotationX + 0.25f) * 360, 0, 1, 0); // 0.25 is used to adjust the texture position
// Draw the sphere:
mSphere.draw(gl);
// Grab the model-view matrix:
mMatrixGrabber.getCurrentModelView(gl);
mModelView = mMatrixGrabber.mModelView;
}
public float[] getScreenCoords(float x, float y, float z) {
if(mModelView == null || mProjection == null) return null;
float[] result = new float[3];
int[] view = new int[] {0, 0, (int) mSurfaceViewSize.getWidth(), (int) mSurfaceViewSize.getHeight()};
GLU.gluProject(x, y, z,
mModelView, 0,
mProjection, 0,
view, 0,
result, 0);
result[1] = mSurfaceViewSize.getHeight() - result[1];
return result;
}
}
I use the result of the getScreenCoords method to display my pins. The y value is wrong.
What am I doing wrong?
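One thing worth checking: GLU.gluProject's view parameter must be the exact viewport rectangle passed to glViewport. Since onSurfaceChanged sets an offset square viewport, a sketch of a matching call follows (px, py, pz stand for the pin's 3D coordinates, and width and height for the surface size; these names are illustrative, not from the code above):
// Sketch: pass gluProject the same viewport rectangle given to glViewport.
// onSurfaceChanged used an offset square viewport, so reuse those values
// instead of {0, 0, surfaceWidth, surfaceHeight}.
float side = Math.max(width, height);
int vx = (int) (width - side) / 2;
int vy = (int) (height - side) / 2;
int[] view = new int[] {vx, vy, (int) side, (int) side};
float[] result = new float[3];
GLU.gluProject(px, py, pz, mModelView, 0, mProjection, 0, view, 0, result, 0);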

How to detect if I tapped on GL10 object

I'm drawing a simple Triangle that is displayed on screen using AndAR. Now I want to add a simple interaction: I want to display a Toast when the drawn object is tapped. I've already implemented onTouchEvent, so I can get the X and Y coordinates of the place where I tapped the screen. Now I need to check whether this point is inside the area of my 2D triangle. How can I get the coordinates of my triangle's points? The triangle is "stuck" to the marker and is drawn when the marker is recognized, so the coordinates change in real time based on the viewing angle. That is the biggest problem. Any ideas?
public class Triangle extends ARObject implements BasicShape {
private final FloatBuffer vertexBuffer;
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
static float triangleCoords[] = {
// in counterclockwise order:
25.0f, 25.622008459f, 25.0f,// top
-25.5f, -25.311004243f, 25.0f,// bottom left
25.5f, -25.311004243f, 25.0f // bottom right
};
float color[] = { 1f, 0f, 0f, 0.0f };
/**
* Sets up the drawing object data for use in an OpenGL ES context.
*/
public Triangle(String name, String patternName, double markerWidth, double[] markerCenter) {
super(name, patternName, markerWidth, markerCenter);
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
triangleCoords.length * 4);
// use the device hardware's native byte order
bb.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
vertexBuffer = bb.asFloatBuffer();
// add the coordinates to the FloatBuffer
vertexBuffer.put(triangleCoords);
// set the buffer to read the first coordinate
vertexBuffer.position(0);
}
/**
* Encapsulates the OpenGL ES instructions for drawing this shape.
*
* @param gl - The OpenGL ES context in which to draw this shape.
*/
@Override
public void draw(GL10 gl) {
super.draw(gl);
// Since this shape uses vertex arrays, enable them
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
// draw the shape
gl.glColor4f( // set color:
color[0], color[1],
color[2], color[3]);
gl.glVertexPointer( // point to vertex data:
COORDS_PER_VERTEX,
GL10.GL_FLOAT, 0, vertexBuffer);
gl.glDrawArrays( // draw shape:
GL10.GL_TRIANGLES, 0,
triangleCoords.length / COORDS_PER_VERTEX);
gl.glTranslatef(0.0f, 0.0f, 12.5f);
// Disable vertex array drawing to avoid
// conflicts with shapes that don't use it
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
@Override
public void init(GL10 gl10) {
}
}
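One possible approach, sketched here under assumptions (it is not part of AndAR's API): project the triangle's three marker-space vertices to screen coordinates with GLU.gluProject each frame, then run a 2D point-in-triangle test against the tap position. A minimal sign-based test, assuming the projected points (ax, ay), (bx, by), (cx, cy) and the tap point (px, py) are already in the same screen space:
// Sketch: 2D point-in-triangle test using the signs of edge cross products.
static boolean pointInTriangle(float px, float py,
        float ax, float ay, float bx, float by, float cx, float cy) {
    float d1 = sign(px, py, ax, ay, bx, by);
    float d2 = sign(px, py, bx, by, cx, cy);
    float d3 = sign(px, py, cx, cy, ax, ay);
    boolean hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    boolean hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    // the point is inside when it lies on the same side of all three edges
    return !(hasNeg && hasPos);
}

// Signed area test: which side of the edge (v1 -> v2) is p on?
static float sign(float px, float py, float v1x, float v1y, float v2x, float v2y) {
    return (px - v2x) * (v1y - v2y) - (v1x - v2x) * (py - v2y);
}
Remember to flip the tap's Y coordinate (screenY = viewHeight - touchY) before comparing, since gluProject returns window coordinates with a bottom-left origin.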

Drawing on top of previous frame with an offscreen texture

I am very new to OpenGL ES 2.0.
I'm trying to write a fingerpaint app using OpenGL ES 2.0. The idea is to draw each frame's touches onto a texture incrementally (without calling glClear(int)) and to sample the texture onto a full-screen quad.
Referring to my code below, when I draw the GlCircle and GlLine onto the default framebuffer, everything works fine.
But when I try to draw on top of the previous frame by using an offscreen texture, the coordinates on the rendered texture seem to be off:
The Y axis is inverted.
There's an offset on the Y axis.
The screenshot below should visually show what's wrong (the red/blue outline shows the actual touch coordinates on the screen, white dots are drawn to/from texture):
What am I doing wrong? Is there a better way of achieving this?
Here's my GLSurfaceView.Renderer:
package com.oaskamay.whiteboard.opengl;
import android.opengl.GLES20;
import android.opengl.Matrix;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;
import com.oaskamay.whiteboard.opengl.base.GlSurfaceView;
import com.oaskamay.whiteboard.opengl.drawable.GlCircle;
import com.oaskamay.whiteboard.opengl.drawable.GlLine;
import com.oaskamay.whiteboard.opengl.drawable.GlTexturedQuad;
import java.util.ArrayList;
import java.util.List;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
public class GlDrawingRenderer implements GlSurfaceView.Renderer {
/*
* Keys used to store/restore the state of this renderer.
*/
private static final String EXTRA_MOTION_EVENTS = "extra_motion_events";
private static final float[] COLOR_BG = new float[]{0.0f, 0.0f, 0.0f, 1.0f};
private static final float[] COLOR_BRUSH = new float[]{1.0f, 1.0f, 1.0f, 1.0f};
/*
* Model-view-projection matrix used to map normalized GL coordinates to the screen's.
*/
private final float[] mMvpMatrix;
private final float[] mViewMatrix;
private final float[] mProjectionMatrix;
private final float[] mTextureProjectionMatrix;
private final float[] mTextureMvpMatrix;
/*
* Offscreen texture rendering handles.
*/
private int[] mFrameBufferHandle;
private int[] mRenderTextureHandle;
/*
* Lists of vertices to draw each frame.
*/
private List<Float> mLineVertexData;
private List<Float> mCircleVertexData;
/*
* List of stored MotionEvents and PacketData, required to store/restore state of Renderer.
*/
private ArrayList<MotionEvent> mMotionEvents;
private boolean mRestoreMotionEvents = false;
private GlLine mLine;
private GlCircle mCircle;
private GlTexturedQuad mTexturedQuad;
/*
* Variables to calculate FPS throughput.
*/
private long mStartTime = System.nanoTime();
private int mFrameCount = 0;
public GlDrawingRenderer() {
mMvpMatrix = new float[16];
mViewMatrix = new float[16];
mProjectionMatrix = new float[16];
mTextureProjectionMatrix = new float[16];
mTextureMvpMatrix = new float[16];
mFrameBufferHandle = new int[1];
mRenderTextureHandle = new int[1];
mLineVertexData = new ArrayList<>();
mCircleVertexData = new ArrayList<>();
mMotionEvents = new ArrayList<>();
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// one time feature initializations
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_DITHER);
// clear attachment buffers
GLES20.glClearColor(COLOR_BG[0], COLOR_BG[1], COLOR_BG[2],
COLOR_BG[3]);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// initialize drawables
mLine = new GlLine();
mCircle = new GlCircle(5.0f);
mTexturedQuad = new GlTexturedQuad();
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
// calculate projection, camera matrix and MVP matrix for touch events
Matrix.setLookAtM(mViewMatrix, 0, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
Matrix.orthoM(mProjectionMatrix, 0, 0.0f, width, height, 0.0f, 0.0f, 1.0f);
Matrix.multiplyMM(mMvpMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
mLine.setMvpMatrix(mMvpMatrix);
mCircle.setMvpMatrix(mMvpMatrix);
// calculate projection and MVP matrix for texture
Matrix.setIdentityM(mTextureProjectionMatrix, 0);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
// setup buffers for offscreen texture
GLES20.glGenFramebuffers(1, mFrameBufferHandle, 0);
GLES20.glGenTextures(1, mRenderTextureHandle, 0);
mTexturedQuad.initTexture(width, height, mRenderTextureHandle[0]);
}
@Override
public void onDrawFrame(GL10 unused) {
// use offscreen texture frame buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferHandle[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_2D, mRenderTextureHandle[0], 0);
GlUtil.glCheckFramebufferStatus();
// restore and draw saved MotionEvents onto texture if they exist
if (mRestoreMotionEvents) {
mRestoreMotionEvents = false;
processStoredMotionEvents();
}
// draw current MotionEvents onto texture
drawObjects();
// use window frame buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClearColor(COLOR_BG[0], COLOR_BG[1], COLOR_BG[2], COLOR_BG[3]);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// draw texture onto full-screen quad onto the window surface
drawTexturedQuad();
logFps();
}
/**
* Draws any available line and circle vertex data. Objects including {@code GlCircle} and
* {@code GlLine} are to be drawn on the offscreen texture. The offscreen texture will then be
* drawn onto a fullscreen quad in the default window framebuffer.
*/
private void drawObjects() {
if (!mLineVertexData.isEmpty()) {
drawLines();
}
if (!mCircleVertexData.isEmpty()) {
drawCircles();
}
}
/**
* Draws circles. OpenGL points cannot have radii, hence we draw circles on touch-down events
* instead of points.
*/
private void drawCircles() {
GLES20.glUseProgram(mCircle.getProgramHandle());
// read offsets
float dx = mCircleVertexData.remove(0);
float dy = mCircleVertexData.remove(0);
float dz = mCircleVertexData.remove(0);
mCircle.setTranslateMatrix(dx, dy, dz);
// read color
float r = mCircleVertexData.remove(0);
float g = mCircleVertexData.remove(0);
float b = mCircleVertexData.remove(0);
float a = mCircleVertexData.remove(0);
mCircle.setColor(r, g, b, a);
mCircle.draw();
}
/**
* Draws lines from touch start points to touch end points.
*/
private void drawLines() {
GLES20.glUseProgram(mLine.getProgramHandle());
// read offsets
float x1 = mLineVertexData.remove(0);
float y1 = mLineVertexData.remove(0);
float z1 = mLineVertexData.remove(0);
float x2 = mLineVertexData.remove(0);
float y2 = mLineVertexData.remove(0);
float z2 = mLineVertexData.remove(0);
mLine.setTranslateMatrix(x1, y1, z1, x2, y2, z2);
// read color
float r = mLineVertexData.remove(0);
float g = mLineVertexData.remove(0);
float b = mLineVertexData.remove(0);
float a = mLineVertexData.remove(0);
mLine.setColor(r, g, b, a);
mLine.draw();
}
/**
* Draws the offscreen texture onto the fullscreen quad, and draws the quad onto the default
* window framebuffer.
*/
private void drawTexturedQuad() {
GLES20.glUseProgram(mTexturedQuad.getProgramHandle());
mTexturedQuad.draw();
}
/**
* Processes MotionEvent.
* Sets vertex and color data based on MotionEvent information.
*
* @param event MotionEvent to process.
* @param store Pass true when processing fresh MotionEvents to store them to support parent
* activity recreations, pass false otherwise.
*/
public void processMotionEvent(MotionEvent event, boolean store) {
if (store) {
mMotionEvents.add(MotionEvent.obtain(event));
}
int action = event.getActionMasked();
switch (action) {
case MotionEvent.ACTION_POINTER_DOWN:
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
// set centroid
mCircleVertexData.add(event.getX());
mCircleVertexData.add(event.getY());
mCircleVertexData.add(0.0f);
// set color
mCircleVertexData.add(COLOR_BRUSH[0]);
mCircleVertexData.add(COLOR_BRUSH[1]);
mCircleVertexData.add(COLOR_BRUSH[2]);
mCircleVertexData.add(COLOR_BRUSH[3]);
break;
}
}
/**
* Draws stored MotionEvents.
* Required to be able to restore state of this Renderer.
*/
private void processStoredMotionEvents() {
for (MotionEvent event : mMotionEvents) {
processMotionEvent(event, false);
drawObjects();
}
}
/**
* Prints out current frames-per-second throughput.
*/
private void logFps() {
mFrameCount++;
if (System.nanoTime() - mStartTime >= 1000000000L) {
Log.d("GlDrawingRenderer", "FPS: " + mFrameCount);
mFrameCount = 0;
mStartTime = System.nanoTime();
}
}
/**
* Saves line and circle vertex data into the {@code Bundle} argument. Call when the parent
* {@code GLSurfaceView} calls its corresponding {@code onSaveInstanceState()} method.
*
* @param bundle Destination {@code Bundle} to save the renderer state into.
*/
public void onSaveInstanceState(Bundle bundle) {
bundle.putParcelableArrayList(EXTRA_MOTION_EVENTS, mMotionEvents);
}
/**
* Restores line and circle vertex data from the {@code Bundle} argument. Call when the parent
* {@code GLSurfaceView} calls its corresponding {@code onRestoreInstanceState(Parcelable)}
* method.
*
* @param bundle Source {@code Bundle} to restore the renderer state from.
*/
public void onRestoreInstanceState(Bundle bundle) {
ArrayList<MotionEvent> motionEvents = bundle.getParcelableArrayList(EXTRA_MOTION_EVENTS);
if (motionEvents != null && !motionEvents.isEmpty()) {
mMotionEvents.addAll(motionEvents);
mRestoreMotionEvents = true;
}
}
}
And here's the GlTexturedQuad class:
package com.oaskamay.whiteboard.opengl.drawable;
import android.opengl.GLES20;
import com.oaskamay.whiteboard.opengl.GlUtil;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import java.nio.ShortBuffer;
public class GlTexturedQuad {
/*
* Vertex metadata: we have 3 coordinates per vertex, and a quad can be drawn with 2 triangles.
*/
private static final int VERTEX_COORDS = 3;
private static final String VERTEX_SHADER_SOURCE =
"uniform mat4 u_MvpMatrix; \n" +
"attribute vec4 a_Position; \n" +
"attribute vec2 a_TextureCoord; \n" +
"varying vec2 v_TextureCoord; \n" +
" \n" +
"void main() { \n" +
" v_TextureCoord = a_TextureCoord; \n" +
" gl_Position = u_MvpMatrix * a_Position; \n" +
"} \n";
private static final String FRAGMENT_SHADER_SOURCE =
"uniform sampler2D u_Texture; \n" +
"varying vec2 v_TextureCoord; \n" +
" \n" +
"void main() { \n" +
" gl_FragColor = texture2D(u_Texture, v_TextureCoord);\n" +
"} \n";
/*
* Vertex locations. The quad will cover the whole screen, and is in normalized device
* coordinates. The projection matrix for this quad should be identity.
*/
private static final float[] VERTICES = {
-1.0f, +1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
+1.0f, -1.0f, 0.0f,
+1.0f, +1.0f, 0.0f
};
/*
* Describes the order in which vertices are to be rendered.
*/
private static final short[] VERTICES_ORDER = {
0, 1, 2,
0, 2, 3
};
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
private float mMvpMatrix[];
private int mRenderTexture;
/*
* FloatBuffers used to store vertices and their order to draw.
*/
private final FloatBuffer mVertexBuffer;
private final ShortBuffer mVertexOrderBuffer;
private final FloatBuffer mTextureCoordsBuffer;
/*
* OpenGL handles to shader program, attributes, and uniforms.
*/
private final int mProgramHandle;
private final int mMvpMatrixHandle;
private final int mPositionHandle;
private final int mTextureHandle;
private final int mTextureCoordHandle;
/**
* Default constructor. Refrain from calling this multiple times as it may be expensive due to
* compilation of shader sources.
*/
public GlTexturedQuad() {
// initialize vertex buffer
ByteBuffer vertexBuffer = ByteBuffer.allocateDirect(VERTICES.length * 4);
vertexBuffer.order(ByteOrder.nativeOrder());
mVertexBuffer = vertexBuffer.asFloatBuffer();
mVertexBuffer.put(VERTICES);
mVertexBuffer.position(0);
// initialize vertex order buffer
ByteBuffer vertexOrderBuffer = ByteBuffer.allocateDirect(VERTICES_ORDER.length * 2);
vertexOrderBuffer.order(ByteOrder.nativeOrder());
mVertexOrderBuffer = vertexOrderBuffer.asShortBuffer();
mVertexOrderBuffer.put(VERTICES_ORDER);
mVertexOrderBuffer.position(0);
// initialize texture coordinates
ByteBuffer textureCoordsBuffer = ByteBuffer.allocateDirect(TEXTURE_COORDS.length * 4);
textureCoordsBuffer.order(ByteOrder.nativeOrder());
mTextureCoordsBuffer = textureCoordsBuffer.asFloatBuffer();
mTextureCoordsBuffer.put(TEXTURE_COORDS);
mTextureCoordsBuffer.position(0);
// compile vertex and fragment shader sources
int vertexShader = GlUtil.glLoadShader(GLES20.GL_VERTEX_SHADER,
VERTEX_SHADER_SOURCE);
int fragmentShader = GlUtil.glLoadShader(GLES20.GL_FRAGMENT_SHADER,
FRAGMENT_SHADER_SOURCE);
// create shader program and attach compiled sources
mProgramHandle = GLES20.glCreateProgram();
GLES20.glAttachShader(mProgramHandle, vertexShader);
GLES20.glAttachShader(mProgramHandle, fragmentShader);
GLES20.glLinkProgram(mProgramHandle);
// store attribute / uniform handles
mMvpMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MvpMatrix");
mTextureHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture");
mPositionHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Position");
mTextureCoordHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_TextureCoord");
}
/**
* Initializes texture components.
*
* @param width Width of texture in pixels.
* @param height Height of texture in pixels.
* @param renderTexture Handle of the texture object to render into.
*/
public void initTexture(int width, int height, int renderTexture) {
mRenderTexture = renderTexture;
// allocate pixel buffer for texture
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(width * height * 4);
byteBuffer.order(ByteOrder.nativeOrder());
IntBuffer texturePixelBuffer = byteBuffer.asIntBuffer();
// initialize texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mRenderTexture);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, width, height,
0, GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, texturePixelBuffer);
}
/**
* Draws this object. The model-view-projection matrix must be set with
* {@link #setMvpMatrix(float[])}.
*/
public final void draw() {
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glEnableVertexAttribArray(mTextureCoordHandle);
// set vertex position and MVP matrix in shader
GLES20.glVertexAttribPointer(mPositionHandle, VERTEX_COORDS, GLES20.GL_FLOAT,
false, VERTEX_COORDS * 4, mVertexBuffer);
GLES20.glUniformMatrix4fv(mMvpMatrixHandle, 1, false, mMvpMatrix, 0);
// bind texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mRenderTexture);
// set texture data and coordinate
GLES20.glVertexAttribPointer(mTextureCoordHandle, 2, GLES20.GL_FLOAT, false, 0,
mTextureCoordsBuffer);
GLES20.glUniform1i(mTextureHandle, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, VERTICES_ORDER.length, GLES20.GL_UNSIGNED_SHORT,
mVertexOrderBuffer);
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mTextureCoordHandle);
}
/**
* Sets the model-view-projection matrix in the vertex shader. Necessary to map the normalized
* GL coordinate system to that of the display.
*
* @param mvpMatrix Matrix to use as the model-view-projection matrix.
*/
public void setMvpMatrix(float[] mvpMatrix) {
mMvpMatrix = mvpMatrix;
}
public int getProgramHandle() {
return mProgramHandle;
}
}
EDIT (12/11/2015):
@reto-koradi suggested a much better solution: invert the V axis by changing the texture coordinates. This fix is also simple:
Change this (initialization of TEXTURE_COORDS array in GlTexturedQuad):
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
To this:
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f
};
I've fixed the issue. The problem was with the projection matrix used for the GlTexturedQuad. The fix was simple:
I changed this (in onSurfaceChanged(GL10, int, int) in GlDrawingRenderer):
// calculate projection and MVP matrix for texture
Matrix.setIdentityM(mTextureProjectionMatrix, 0);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
To this:
// calculate projection and MVP matrix for texture
Matrix.orthoM(mTextureProjectionMatrix, 0, -1.0f, 1.0f, 1.0f, -1.0f, 0.0f, 1.0f);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
So now mTextureProjectionMatrix takes into account the V-axis inversion of the texture. Again, I'm an OpenGL ES 2.0 beginner, so my explanation might be wrong. But it works :)
I hope this post helped someone out there!
Although there seem to be many solutions to fix the inverted screen, you should understand what happens in the background: why it is inverted in your case, and incidentally why your solution is not general.
The OpenGL buffers follow the legacy desktop coordinate system, where the bottom-left point is the origin and height increases upwards. So the raw buffer data will have its first pixel at the bottom-left, not at the top-left as you would expect from how image data is usually laid out. If you want to draw to the top-left part of the image, you actually need to draw to the bottom-left part of the buffer (respecting the presentation).
So your issue is not in how you present the drawn texture, but in how you draw to the texture itself: your coordinate system is inverted while drawing the points. But what difference does it make where you invert?
It makes a huge difference, actually. Since you inverted the coordinate system when drawing to the FBO and then inverted it again when drawing to the presentation buffer, you get the correct result; your inversion equation is effectively (-1 * -1 = 1). But what happens if you add some post-processing through another FBO? Then (-1 * -1 * -1 = -1), which means you would have to change the presentation coordinates back to normal, as the result would appear inverted again.
Another issue arises if you try to read pixels back to generate an image. Reading from the presentation buffer always gives inverted data, but if you read the pixels from the FBO, the data should be correct (which is not the case in your setup).
So the truly general solution is to respect the orientation when drawing to anything but the presentation buffer. The FBO matrix should not invert the Y coordinate; Y should increase upwards. In your case the best thing to do is a separate ortho call: for the FBO, simply flip the top and bottom values compared to the presentation values, as sketched below.
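In code, that suggestion could look like the following sketch (mFboProjectionMatrix and mFboMvpMatrix are hypothetical names, not fields of the renderer above): keep the top-down ortho for the presentation pass, and a flipped one for everything drawn into the FBO.
// Sketch: separate projections for the presentation and FBO passes.
// Presentation pass: origin at the top-left, Y grows downwards,
// which matches the MotionEvent coordinates.
Matrix.orthoM(mProjectionMatrix, 0, 0.0f, width, height, 0.0f, 0.0f, 1.0f);
// FBO pass: top and bottom swapped, so Y grows upwards inside the
// texture, matching OpenGL's bottom-left buffer origin.
Matrix.orthoM(mFboProjectionMatrix, 0, 0.0f, width, 0.0f, height, 0.0f, 1.0f);
Matrix.multiplyMM(mFboMvpMatrix, 0, mFboProjectionMatrix, 0, mViewMatrix, 0);
// Apply the flipped MVP to whatever is rendered while the FBO is bound.
mLine.setMvpMatrix(mFboMvpMatrix);
mCircle.setMvpMatrix(mFboMvpMatrix);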

Line drawing drawn inverted in OpenGL ES

I'm trying to draw a simple line drawing connecting several vertices in OpenGL ES. However, the line is drawn inverted, or in a different position from where it should be. I've attached the line drawing class below.
ConnectingPath.java
--------------------
public class ConnectingPath {
int positionBufferId;
PointF[] verticesList;
public float vertices[];
public FloatBuffer vertexBuffer;
public ConnectingPath(LinkedList<PointF> verticesList, float[] colors)
{
List<PointF> tempCorners = verticesList;
int i = 0;
this.verticesList = new PointF[tempCorners.size()];
for (PointF corner : tempCorners) {
this.verticesList[i++] = corner;
}
}
public float[] getTransformedVertices()
{
List<Float> finalVertices = new ArrayList<Float>();
finalVertices.clear();
for(PointF point : verticesList){
finalVertices.add(point.x);
finalVertices.add(point.y);
finalVertices.add(0.0f);
}
int i = 0;
float[] verticesArray = new float[finalVertices.size()];
for (Float f : finalVertices) {
verticesArray[i++] = (f != null ? f : Float.NaN);
}
return verticesArray;
}
public void initBooth(){
vertices = this.getTransformedVertices();
for(Float f : vertices){
Log.d("Mapsv3--", f + "");
}
ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
int[] buffers = new int[1];
GLES11.glGenBuffers(1, buffers, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, buffers[0]);
GLES11.glBufferData(GLES11.GL_ARRAY_BUFFER, 4 * vertices.length, vertexBuffer, GLES11.GL_STATIC_DRAW);
positionBufferId = buffers[0];
}
public void Render(GL10 gl){
GLES11.glPushMatrix();
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, positionBufferId);
GLES11.glEnableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glVertexPointer(3, GL10.GL_FLOAT, 0, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, 0);
GLES11.glFrontFace(GL10.GL_CW);
GLES11.glLineWidth(10.0f);
GLES11.glColor4f(0.0f,0.0f,0.0f,1.0f);
GLES11.glDrawArrays(GL10.GL_LINE_STRIP, 0, verticesList.length);
GLES11.glDisableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glPopMatrix();
}
}
Drawing code:
Renderer.java
--------------
// Variables here
public void onSurfaceChanged(GL10 gl, int width, int height) {
viewWidth = width;
viewHeight = height;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glEnable(GL10.GL_TEXTURE_2D); //Enable Texture Mapping
gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f); //White Background
gl.glClearDepthf(1.0f); //Depth Buffer Setup
gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
}
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -viewWidth/2, viewWidth/2, -viewHeight/2,viewHeight/2);
gl.glTranslatef(center.x,center.y,0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0,0, 0);
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_FRONT);
if(connectingPath!=null){
connectingPath.Render(gl);
}
gl.glDisable(GL10.GL_CULL_FACE);
gl.glLoadIdentity();
}
Screenshot:
The drawing in OpenGL seems to be inverted for you due to the way OpenGL defines its screen coordinates. In contrast to most 2D drawing APIs, the origin is located in the bottom-left corner, which means that the y-axis values increase when moving upwards. A very nice explanation is available in the OpenGL common pitfalls (number 12):
Given a sheet of paper, people write from the top of the page to the bottom. The origin for writing text is at the upper left-hand margin of the page (at least in European languages). However, if you were to ask any decent math student to plot a few points on an X-Y graph, the origin would certainly be at the lower left-hand corner of the graph. Most 2D rendering APIs mimic writers and use a 2D coordinate system where the origin is in the upper left-hand corner of the screen or window (at least by default). On the other hand, 3D rendering APIs adopt the mathematically minded convention and assume a lower left-hand origin for their 3D coordinate systems.
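A practical consequence of this mismatch: window/touch coordinates usually need their Y value flipped before they are used as GL coordinates (or vice versa when reading results back). A minimal sketch, assuming viewHeight is the height passed to onSurfaceChanged:
// Sketch: convert a top-left-origin touch position into OpenGL's
// bottom-left-origin convention by flipping the Y axis.
float glX = touchX;
float glY = viewHeight - touchY;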

How to use an image instead of the custom box in augmented reality on Android

I am new to Android as well as to augmented reality, and I am facing some problems with AR.
I am using an example from AndAR, and in this example I can see a simple cube on my markers.
But I want to show my own image on my marker instead of the built-in cube.
And I want to use a different image for each marker.
This is my code, where the cube is generated:
public class CustomObject extends ARObject {
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter) {
super(name, patternName, markerWidth, markerCenter);
float mat_ambientf[] = {0f, 1.0f, 0f, 1.0f};
float mat_flashf[] = {0f, 1.0f, 0f, 1.0f};
float mat_diffusef[] = {0f, 1.0f, 0f, 1.0f};
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(mat_ambientf);
mat_flash = GraphicsUtil.makeFloatBuffer(mat_flashf);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(mat_diffusef);
}
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter, float[] customColor) {
super(name, patternName, markerWidth, markerCenter);
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(customColor);
}
private SimpleBox box = new SimpleBox();
private FloatBuffer mat_flash;
private FloatBuffer mat_ambient;
private FloatBuffer mat_flash_shiny;
private FloatBuffer mat_diffuse;
/**
* Everything drawn here will be drawn directly onto the marker,
* as the corresponding translation matrix will already be applied.
*/
@Override
public final void draw(GL10 gl) {
super.draw(gl);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR,mat_flash);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SHININESS, mat_flash_shiny);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, mat_diffuse);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, mat_ambient);
//draw cube
gl.glColor4f(0, 1.0f, 0, 1.0f);
gl.glTranslatef( 0.0f, 0.0f, 12.5f );
box.draw(gl);
}
@Override
public void init(GL10 gl) {
// TODO Auto-generated method stub
}
}
Please help me overcome this problem.
Thanks.
To create a new marker you can use one of these online marker generator tools:
Marker Generator roarmot (image only)
Marker Generator tarotaro (supports webcam)
Create a marker and save it, preferably with a .patt suffix (e.g. dog.patt).
Copy your marker to a directory accessible from your Android application (e.g. the SD card).
To load your specific marker, look at the custom activity in the AndAR repository to see how it's done (/svn/trunk/AndAR/src/edu/dhbw/andar/pub/CustomActivity.java):
someObject = new CustomObject("test", "patt.hiro", 80.0, new double[]{0,0});
artoolkit.registerARObject(someObject);
When you declare your 3D object (CustomObject, the one drawing your SimpleBox), you can specify which marker it should use via the initialization parameters (e.g. patt.hiro).
For information, the initialization parameters are: marker name (arbitrary), marker file (your .patt file), size of the marker (in mm), and marker center (0,0 by default).
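To show an image instead of the cube, one option is to draw a textured quad in draw(GL10) in place of the SimpleBox. A rough sketch under assumptions (mTextureId is a hypothetical field holding a texture uploaded once, e.g. in init(GL10), with GLUtils.texImage2D from a Bitmap; the quad size matches an 80 mm marker):
// Sketch: draw a marker-sized textured quad instead of the cube.
// Assumes mTextureId was created with glGenTextures and filled via
// GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0) in init(GL10).
private void drawImage(GL10 gl) {
    float s = 40.0f; // half the width of an 80 mm marker
    float[] quad = { -s, -s, 0,   s, -s, 0,   -s, s, 0,   s, s, 0 };
    float[] uv   = {  0,  1,      1,  1,       0, 0,      1, 0 };
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, GraphicsUtil.makeFloatBuffer(quad));
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, GraphicsUtil.makeFloatBuffer(uv));
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); // two triangles as a strip
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisable(GL10.GL_TEXTURE_2D);
}
Calling drawImage(gl) from draw(GL10) after super.draw(gl), with a different mTextureId per CustomObject instance, gives each marker its own image.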
