How to use an image instead of the custom box in Augmented Reality on Android - android

I am new to Android as well as to Augmented Reality, and I am facing a problem in AR.
I am using an AndAR example in which I can see a simple cube box on my markers.
But I want to use my own image on my marker instead of that built-in cube.
And I want to use a different image for different markers.
This is my code, where the cube is generated:
public class CustomObject extends ARObject {
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter) {
super(name, patternName, markerWidth, markerCenter);
float mat_ambientf[] = {0f, 1.0f, 0f, 1.0f};
float mat_flashf[] = {0f, 1.0f, 0f, 1.0f};
float mat_diffusef[] = {0f, 1.0f, 0f, 1.0f};
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(mat_ambientf);
mat_flash = GraphicsUtil.makeFloatBuffer(mat_flashf);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(mat_diffusef);
}
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter, float[] customColor) {
super(name, patternName, markerWidth, markerCenter);
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(customColor);
}
private SimpleBox box = new SimpleBox();
private FloatBuffer mat_flash;
private FloatBuffer mat_ambient;
private FloatBuffer mat_flash_shiny;
private FloatBuffer mat_diffuse;
/**
* Everything drawn here will be drawn directly onto the marker,
* as the corresponding translation matrix will already be applied.
*/
@Override
public final void draw(GL10 gl) {
super.draw(gl);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR,mat_flash);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SHININESS, mat_flash_shiny);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, mat_diffuse);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, mat_ambient);
//draw cube
gl.glColor4f(0, 1.0f, 0, 1.0f);
gl.glTranslatef( 0.0f, 0.0f, 12.5f );
box.draw(gl);
}
@Override
public void init(GL10 gl) {
// TODO Auto-generated method stub
}
}
Please help me overcome this problem.
Thanks.

To create a new marker you can use one of these online marker generator tools:
Marker Generator roarmot (image only)
Marker Generator tarotaro (supports webcam)
Create a marker and save it, preferably with a .patt suffix (e.g. dog.patt).
Copy your marker to a directory accessible from your Android application (e.g. the SD card).
To load your specific marker, look at the CustomActivity class in the AndAR repository to see how it's done ( /svn/trunk/AndAR/src/edu/dhbw/andar/pub/CustomActivity.java ):
someObject = new CustomObject("test", "patt.hiro", 80.0, new double[]{0,0});
artoolkit.registerARObject(someObject);
When you declare your 3D object (CustomObject, the one drawing your SimpleBox), you specify which marker it should use via the initialization parameters (e.g. patt.hiro).
For reference, the initialization parameters are: marker name (arbitrary), marker file (your .patt file), size of the marker (in mm), and marker center ((0,0) by default).
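To use a different image for each marker, register one AR object per pattern file and have each object draw a textured quad instead of the SimpleBox. Below is a minimal sketch of both sides; TexturedObject is a hypothetical ARObject subclass (not part of AndAR), and the pattern files and drawable resources are assumptions:
// Registration: one object per marker, each bound to its own pattern file and image.
ARToolkit artoolkit = getArtoolkit();
TexturedObject dogObject = new TexturedObject("dog", "dog.patt", 80.0, new double[]{0, 0}, R.drawable.dog);
TexturedObject catObject = new TexturedObject("cat", "cat.patt", 80.0, new double[]{0, 0}, R.drawable.cat);
artoolkit.registerARObject(dogObject);
artoolkit.registerARObject(catObject);
Inside the hypothetical TexturedObject.draw(GL10 gl), the box.draw(gl) call would be replaced by a textured quad lying on the marker plane (textureId, quadVertices and quadTexCoords are assumed to be set up in init(GL10)):
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, quadVertices); // 4 marker-sized corners
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, quadTexCoords); // (0,0) to (1,1)
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisable(GL10.GL_TEXTURE_2D);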

Related

Vuforia Videoplayback issue - Video is playing inverted

I'm working on a project combining Vuforia ImageTarget and VideoPlayback. I have N targets, each with a corresponding video. For some image targets the video is flipped, and I can't find a solution for this issue. Here is my VideoPlaybackRenderer:
int videoPlaybackTextureID[] = new int[VideoPlayback.NUM_TARGETS];
// Keyframe and icon rendering specific
private int keyframeShaderID = 0;
private int keyframeVertexHandle = 0;
private int keyframeNormalHandle = 0;
private int keyframeTexCoordHandle = 0;
private int keyframeMVPMatrixHandle = 0;
private int keyframeTexSampler2DHandle = 0;
// We cannot use the default texture coordinates of the quad since these
// will change depending on the video itself
private float videoQuadTextureCoords[] = { 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, };
private Float videoQuadTextureCoordsTransformed[] = {0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f,};
List<Float[]> videoQuadTextureCoordsTransformedList = new ArrayList<Float[]>();
// Trackable dimensions
Vec3F targetPositiveDimensions[] = new Vec3F[VideoPlayback.NUM_TARGETS];
Looks like you need to select the video object and then apply something like this.
For example, if you select a cube, this will rotate the cube 180 degrees without modifying any of the other rotation axes:
cube.transform.rotation = new Quaternion(cube.transform.rotation.x, cube.transform.rotation.y, cube.transform.rotation.z, 180);
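If you render the video quad yourself, as in the renderer above, an alternative is to flip the V texture coordinates for just the affected targets instead of rotating an object. A sketch, assuming the (U, V) pair layout of videoQuadTextureCoords from the question:
// Flip a quad's texture vertically by mirroring the V (second) component of each
// (U, V) pair; {0,0, 1,0, 1,1, 0,1} becomes {0,1, 1,1, 1,0, 0,0}.
private static float[] flipVertically(float[] texCoords) {
float[] flipped = texCoords.clone();
for (int i = 1; i < flipped.length; i += 2) {
flipped[i] = 1.0f - flipped[i];
}
return flipped;
}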

How to draw text on a square with Android and OpenGL ES 2.0

I wanted to draw a square with OpenGL ES 2.0 and put dynamic text on it. I am trying to combine the instructions in this post (which I had to port to OpenGL ES 2.0) with lesson four of the Learn OpenGL ES Tutorial.
I have an Activity just using a GLSurfaceView:
public class TexturedSquareDrawActivity extends Activity {
private GLSurfaceView mGLView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mGLView = new MyGLSurfaceViewTexture(this);
setContentView(mGLView);
}
}
My GLSurfaceView just creates the renderer and sets it:
public class MyGLSurfaceViewTexture extends GLSurfaceView {
private final MyGLRendererTexture mRenderer;
public MyGLSurfaceViewTexture(Context context){
super(context);
// Create an OpenGL ES 2.0 context
setEGLContextClientVersion(2);
mRenderer = new MyGLRendererTexture(context);
// Set the Renderer for drawing on the GLSurfaceView
setRenderer(mRenderer);
}
}
Then I define a TexturedSquare class like this:
public class TexturedSquare {
private final Context mContext;
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
private int mProgram;
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"attribute vec2 a_TexCoordinate;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
" v_TexCoordinate = a_TexCoordinate;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform sampler2D u_Texture;" +
"uniform vec4 vColor;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
// " gl_FragColor = vColor;" +
" gl_FragColor = vColor * texture2D(u_Texture, v_TexCoordinate);" +
"}";
private int mMVPMatrixHandle;
private int mPositionHandle;
private int mColorHandle;
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
private short drawOrder[] = {0, 1, 2, 0, 2, 3}; // order to draw vertices
private final float[] mColor;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex
/**
* Store our model data in a float buffer.
*/
private final FloatBuffer mCubeTextureCoordinates;
/**
* This will be used to pass in the texture.
*/
private int mTextureUniformHandle;
/**
* This will be used to pass in model texture coordinate information.
*/
private int mTextureCoordinateHandle;
/**
* Size of the texture coordinate data in elements.
*/
private final int mTextureCoordinateDataSize = 2;
/**
* This is a handle to our texture data.
*/
private int mTextureDataHandle;
// S, T (or X, Y)
// Texture coordinate data.
// Because images have a Y axis pointing downward (values increase as you move down the image) while
// OpenGL has a Y axis pointing upward, we adjust for that here by flipping the Y axis.
// What's more is that the texture coordinates are the same for every face.
final float[] cubeTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
public TexturedSquare(Context context, final float[] squareCoords, final float[] color) {
mContext = context;
mColor = color;
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
squareCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(squareCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
mCubeTextureCoordinates = ByteBuffer.allocateDirect(cubeTextureCoordinateData.length * 4)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mCubeTextureCoordinates.put(cubeTextureCoordinateData).position(0);
linkShaderCode();
}
private void linkShaderCode() {
int vertexShader = MyGLRendererTexture.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyGLRendererTexture.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
public void draw(float[] mvpMatrix) {
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the square coordinate data
// Tell OpenGL how to handle the data in the vertexBuffer
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the square
// Pass the color to the shader
GLES20.glUniform4fv(mColorHandle, 1, mColor, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Load the texture
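// NOTE: this creates a new texture on every draw call and never deletes the old
// one; better to create the texture once, e.g. in the constructor.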
// mTextureDataHandle = TextureHelper.loadTexture(mContext, R.drawable.background);
mTextureDataHandle = TextureHelper.loadText(mContext, "01234");
mTextureUniformHandle = GLES20.glGetUniformLocation(mProgram, "u_Texture");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgram, "a_TexCoordinate");
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize, GLES20.GL_FLOAT, false,
0, mCubeTextureCoordinates);
// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
}
}
My renderer draws two squares. The first one shall be textured:
public class MyGLRendererTexture implements GLSurfaceView.Renderer {
// mMVPMatrix is an abbreviation for "Model View Projection Matrix"
private final float[] mMVPMatrix = new float[16];
private final float[] mProjectionMatrix = new float[16];
private final float[] mViewMatrix = new float[16];
private final Context mContext;
private TexturedSquare mTexturedSquare;
private TexturedSquare mTexturedSquare2;
static float squareCoords[] = {
-0.5f, 0.5f, 0.0f, // top left
-0.5f, -0.5f, 0.0f, // bottom left
0.5f, -0.5f, 0.0f, // bottom right
0.5f, 0.5f, 0.0f}; // top right
// Set color with red, green, blue and alpha (opacity) values
float color[] = {0.63671875f, 0.76953125f, 0.22265625f, 1.0f};
static float squareCoords2[] = {
-1.0f, 0.7f, 0.0f, // top left
-1.0f, 0.8f, 0.0f, // bottom left
-0.8f, 0.8f, 0.0f, // bottom right
-0.8f, 0.7f, 0.0f}; // top right
// Set color with red, green, blue and alpha (opacity) values
float color2[] = {0.11111111f, 0.26953125f, 0.52265625f, 1.0f};
public MyGLRendererTexture(
Context context) {
mContext = context;
}
public static int loadShader(int type, String shaderCode) {
// create a vertex shader type (GLES20.GL_VERTEX_SHADER)
// or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
int shader = GLES20.glCreateShader(type);
// add the source code to the shader and compile it
GLES20.glShaderSource(shader, shaderCode);
GLES20.glCompileShader(shader);
return shader;
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background frame color
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
// initialize a triangle
// initialize a square
mTexturedSquare = new TexturedSquare(mContext, squareCoords, color);
mTexturedSquare2 = new TexturedSquare(mContext, squareCoords2, color2);
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
@Override
public void onDrawFrame(GL10 unused) {
// Redraw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
mTexturedSquare.draw(mMVPMatrix);
mTexturedSquare2.draw(mMVPMatrix);
}
}
And finally I have a helper class defining the helper methods used in the code above.
public class TextureHelper {
public static int loadTexture(final Context context, final int resourceId) {
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0) {
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false; // No pre-scaling
// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0) {
throw new RuntimeException("Error loading texture.");
}
return textureHandle[0];
}
public static int loadText(final Context context, String text) {
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0) {
// Create an empty, mutable bitmap
Bitmap bitmap = Bitmap.createBitmap(256, 256, Bitmap.Config.ARGB_4444);
// get a canvas to paint over the bitmap
Canvas canvas = new Canvas(bitmap);
bitmap.eraseColor(0);
// get a background image from resources
// note the image format must match the bitmap format
Drawable background = context.getResources().getDrawable(R.drawable.background);
background.setBounds(0, 0, 256, 256);
background.draw(canvas); // draw the background to our bitmap
// Draw the text
Paint textPaint = new Paint();
textPaint.setTextSize(32);
textPaint.setAntiAlias(true);
textPaint.setARGB(0xff, 0x00, 0x00, 0x00);
// draw the text centered
canvas.drawText(text, 16,112, textPaint);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0) {
throw new RuntimeException("Error loading texture.");
}
return textureHandle[0];
}
}
But the texture is drawn on both triangles the square consists of. How can I draw the texture just once, placed horizontally within the square?
I understand that the square is drawn as two triangles, and that the texture is placed the same way. But I don't know how to tell OpenGL to place the texture only once within the square.
EDIT:
I have now edited the texture coordinates to:
final float[] cubeTextureCoordinateData =
{
-0.5f, 0.5f,
-0.5f, -0.5f,
0.5f, -0.5f,
0.5f, 0.5f
};
resulting in this:
These coordinates:
-1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f,
1.0f, 1.0f
result in this:
These coordinates:
0.5f, -0.5f,
0.5f, 0.5f,
-0.5f, 0.5f,
-0.5f, -0.5f
result in this:
And these coordinates:
1.0f, -1.0f,
1.0f, 1.0f,
-1.0f, 1.0f,
-1.0f, -1.0f
result in this:
So the 4th approach seems to be the most correct one. There the text is drawn at the bottom right. It even seems that my square is divided into 4 smaller squares, because as a texture I use this picture:
Why is it divided into four parts?
GLES sets textures to repeat by default, so you need to change the wrap parameters:
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
Also, the tutorials you are using are pretty good. Here is the OpenGL ES documentation, which is also quite helpful: https://www.khronos.org/opengles/sdk/docs/man/
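Note also that the quad is drawn with four indexed vertices (drawOrder = {0, 1, 2, 0, 2, 3}), while cubeTextureCoordinateData contains six (U, V) pairs meant for six non-indexed vertices. With indexed drawing you only need one pair per vertex, in the same order as squareCoords. A sketch of coordinates that map the texture exactly once onto the quad, assuming the vertex order from the question:
// One (U, V) pair per vertex, matching the order of squareCoords:
final float[] quadTextureCoordinateData = {
0.0f, 0.0f, // top left
0.0f, 1.0f, // bottom left
1.0f, 1.0f, // bottom right
1.0f, 0.0f, // top right
};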
To draw any UI component in OpenGL you need to render it into a Bitmap via a Canvas and load that into OpenGL:
Bitmap textedBitmap = drawTextToBitmap();
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, textedBitmap, 0);
private Bitmap drawTextToBitmap() {
// TODO Auto-generated method stub
Bitmap bitmap = Bitmap.createBitmap(256, 256, Bitmap.Config.ARGB_4444);
// get a canvas to paint over the bitmap
Canvas canvas = new Canvas(bitmap);
bitmap.eraseColor(android.graphics.Color.TRANSPARENT);
canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
TextPaint textPaint = new TextPaint(TextPaint.ANTI_ALIAS_FLAG);
textPaint.setStyle(Paint.Style.FILL);
textPaint.setAntiAlias(true);
textPaint.setColor(Color.BLACK);
textPaint.setTextSize(10);
TextView tv = new TextView(context);
tv.setTextColor(Color.BLACK);
tv.setTextSize(10);
String text = "DEMO TEXT";
tv.setText(text);
tv.setEllipsize(TextUtils.TruncateAt.END);
tv.setMaxLines(4);
tv.setGravity(Gravity.BOTTOM);
tv.setPadding(8, 8, 8, 50);
tv.setDrawingCacheEnabled(true);
tv.measure(MeasureSpec.makeMeasureSpec(canvas.getWidth(),
MeasureSpec.EXACTLY), MeasureSpec.makeMeasureSpec(
canvas.getHeight(), MeasureSpec.EXACTLY));
tv.layout(0, 0, tv.getMeasuredWidth(), tv.getMeasuredHeight());
LinearLayout parent = null;
if (bitmap != null && !bitmap.isRecycled()) {
parent = new LinearLayout(context);
parent.setDrawingCacheEnabled(true);
parent.measure(MeasureSpec.makeMeasureSpec(canvas.getWidth(),
MeasureSpec.EXACTLY), MeasureSpec.makeMeasureSpec(
canvas.getHeight(), MeasureSpec.EXACTLY));
parent.layout(0, 0, parent.getMeasuredWidth(),
parent.getMeasuredHeight());
parent.setLayoutParams(new LinearLayout.LayoutParams(
LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
parent.setOrientation(LinearLayout.VERTICAL);
parent.setBackgroundColor(context.getResources().getColor(R.color.transpernt));
parent.addView(tv);
} else {
// write code to recreate bitmap from source
// Write code to show bitmap to canvas
}
canvas.drawBitmap(parent.getDrawingCache(), 0, 0, textPaint);
tv.setDrawingCacheEnabled(false);
parent.setDrawingCacheEnabled(false);
return bitmap;
}

Drawing on top of previous frame with an offscreen texture

I am very new to OpenGL ES 2.0.
I'm trying to write a fingerpaint app using OpenGL ES 2.0. The idea is to draw the touch input onto a texture incrementally each frame (without calling glClear(int)), and to sample the texture onto a full-screen quad.
Referring to my code below, when I draw the GlCircle and GlLine onto the default Framebuffer, everything works fine.
But when I try to draw on top of the previous frame by using an offscreen texture, the coordinates on the rendered texture seem to be off:
The Y axis is inverted.
There is an offset on the Y axis.
The screenshot below should visually show what's wrong (the red/blue outline shows the actual touch coordinates on the screen, white dots are drawn to/from the texture):
What am I doing wrong? Is there a better way of achieving this?
Here's my GLSurfaceView.Renderer:
package com.oaskamay.whiteboard.opengl;
import android.opengl.GLES20;
import android.opengl.Matrix;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;
import com.oaskamay.whiteboard.opengl.base.GlSurfaceView;
import com.oaskamay.whiteboard.opengl.drawable.GlCircle;
import com.oaskamay.whiteboard.opengl.drawable.GlLine;
import com.oaskamay.whiteboard.opengl.drawable.GlTexturedQuad;
import java.util.ArrayList;
import java.util.List;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
public class GlDrawingRenderer implements GlSurfaceView.Renderer {
/*
* Keys used to store/restore the state of this renderer.
*/
private static final String EXTRA_MOTION_EVENTS = "extra_motion_events";
private static final float[] COLOR_BG = new float[]{0.0f, 0.0f, 0.0f, 1.0f};
private static final float[] COLOR_BRUSH = new float[]{1.0f, 1.0f, 1.0f, 1.0f};
/*
* Model-view-projection matrix used to map normalized GL coordinates to the screen's.
*/
private final float[] mMvpMatrix;
private final float[] mViewMatrix;
private final float[] mProjectionMatrix;
private final float[] mTextureProjectionMatrix;
private final float[] mTextureMvpMatrix;
/*
* Offscreen texture rendering handles.
*/
private int[] mFrameBufferHandle;
private int[] mRenderTextureHandle;
/*
* Lists of vertices to draw each frame.
*/
private List<Float> mLineVertexData;
private List<Float> mCircleVertexData;
/*
* List of stored MotionEvents and PacketData, required to store/restore state of Renderer.
*/
private ArrayList<MotionEvent> mMotionEvents;
private boolean mRestoreMotionEvents = false;
private GlLine mLine;
private GlCircle mCircle;
private GlTexturedQuad mTexturedQuad;
/*
* Variables to calculate FPS throughput.
*/
private long mStartTime = System.nanoTime();
private int mFrameCount = 0;
public GlDrawingRenderer() {
mMvpMatrix = new float[16];
mViewMatrix = new float[16];
mProjectionMatrix = new float[16];
mTextureProjectionMatrix = new float[16];
mTextureMvpMatrix = new float[16];
mFrameBufferHandle = new int[1];
mRenderTextureHandle = new int[1];
mLineVertexData = new ArrayList<>();
mCircleVertexData = new ArrayList<>();
mMotionEvents = new ArrayList<>();
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// one time feature initializations
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_DITHER);
// clear attachment buffers
GLES20.glClearColor(COLOR_BG[0], COLOR_BG[1], COLOR_BG[2],
COLOR_BG[3]);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// initialize drawables
mLine = new GlLine();
mCircle = new GlCircle(5.0f);
mTexturedQuad = new GlTexturedQuad();
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
// calculate projection, camera matrix and MVP matrix for touch events
Matrix.setLookAtM(mViewMatrix, 0, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
Matrix.orthoM(mProjectionMatrix, 0, 0.0f, width, height, 0.0f, 0.0f, 1.0f);
Matrix.multiplyMM(mMvpMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
mLine.setMvpMatrix(mMvpMatrix);
mCircle.setMvpMatrix(mMvpMatrix);
// calculate projection and MVP matrix for texture
Matrix.setIdentityM(mTextureProjectionMatrix, 0);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
// setup buffers for offscreen texture
GLES20.glGenFramebuffers(1, mFrameBufferHandle, 0);
GLES20.glGenTextures(1, mRenderTextureHandle, 0);
mTexturedQuad.initTexture(width, height, mRenderTextureHandle[0]);
}
@Override
public void onDrawFrame(GL10 unused) {
// use offscreen texture frame buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferHandle[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_2D, mRenderTextureHandle[0], 0);
GlUtil.glCheckFramebufferStatus();
// restore and draw saved MotionEvents onto texture if they exist
if (mRestoreMotionEvents) {
mRestoreMotionEvents = false;
processStoredMotionEvents();
}
// draw current MotionEvents onto texture
drawObjects();
// use window frame buffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClearColor(COLOR_BG[0], COLOR_BG[1], COLOR_BG[2], COLOR_BG[3]);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// draw texture onto full-screen quad onto the window surface
drawTexturedQuad();
logFps();
}
/**
* Draws any available line and circle vertex data. Objects including {@code GlCircle} and
* {@code GlLine} are to be drawn on the offscreen texture. The offscreen texture will then be
* drawn onto a fullscreen quad in the default window framebuffer.
*/
private void drawObjects() {
if (!mLineVertexData.isEmpty()) {
drawLines();
}
if (!mCircleVertexData.isEmpty()) {
drawCircles();
}
}
/**
* Draws circles. OpenGL points cannot have radii, hence we draw circles on down key events
* instead of points.
*/
private void drawCircles() {
GLES20.glUseProgram(mCircle.getProgramHandle());
// read offsets
float dx = mCircleVertexData.remove(0);
float dy = mCircleVertexData.remove(0);
float dz = mCircleVertexData.remove(0);
mCircle.setTranslateMatrix(dx, dy, dz);
// read color
float r = mCircleVertexData.remove(0);
float g = mCircleVertexData.remove(0);
float b = mCircleVertexData.remove(0);
float a = mCircleVertexData.remove(0);
mCircle.setColor(r, g, b, a);
mCircle.draw();
}
/**
* Draws lines from touch start points to touch end points.
*/
private void drawLines() {
GLES20.glUseProgram(mLine.getProgramHandle());
// read offsets
float x1 = mLineVertexData.remove(0);
float y1 = mLineVertexData.remove(0);
float z1 = mLineVertexData.remove(0);
float x2 = mLineVertexData.remove(0);
float y2 = mLineVertexData.remove(0);
float z2 = mLineVertexData.remove(0);
mLine.setTranslateMatrix(x1, y1, z1, x2, y2, z2);
// read color
float r = mLineVertexData.remove(0);
float g = mLineVertexData.remove(0);
float b = mLineVertexData.remove(0);
float a = mLineVertexData.remove(0);
mLine.setColor(r, g, b, a);
mLine.draw();
}
/**
* Draws the offscreen texture onto the fullscreen quad, and draws the quad onto the default
* window framebuffer.
*/
private void drawTexturedQuad() {
GLES20.glUseProgram(mTexturedQuad.getProgramHandle());
mTexturedQuad.draw();
}
/**
* Processes a MotionEvent.
* Sets vertex and color data based on the MotionEvent information.
*
* @param event MotionEvent to process.
* @param store Pass true when processing fresh MotionEvents to store them to support parent
* activity recreations, pass false otherwise.
*/
public void processMotionEvent(MotionEvent event, boolean store) {
if (store) {
mMotionEvents.add(MotionEvent.obtain(event));
}
int action = event.getActionMasked();
switch (action) {
case MotionEvent.ACTION_POINTER_DOWN:
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
// set centroid
mCircleVertexData.add(event.getX());
mCircleVertexData.add(event.getY());
mCircleVertexData.add(0.0f);
// set color
mCircleVertexData.add(COLOR_BRUSH[0]);
mCircleVertexData.add(COLOR_BRUSH[1]);
mCircleVertexData.add(COLOR_BRUSH[2]);
mCircleVertexData.add(COLOR_BRUSH[3]);
break;
}
}
/**
* Draws stored MotionEvents.
* Required to be able to restore state of this Renderer.
*/
private void processStoredMotionEvents() {
for (MotionEvent event : mMotionEvents) {
processMotionEvent(event, false);
drawObjects();
}
}
/**
* Prints out current frames-per-second throughput.
*/
private void logFps() {
mFrameCount++;
if (System.nanoTime() - mStartTime >= 1000000000L) {
Log.d("GlDrawingRenderer", "FPS: " + mFrameCount);
mFrameCount = 0;
mStartTime = System.nanoTime();
}
}
/**
* Saves line and circle vertex data into the {@code Bundle} argument. Call when the parent
* {@code GLSurfaceView} calls its corresponding {@code onSaveInstanceState()} method.
*
* @param bundle Destination {@code Bundle} to save the renderer state into.
*/
public void onSaveInstanceState(Bundle bundle) {
bundle.putParcelableArrayList(EXTRA_MOTION_EVENTS, mMotionEvents);
}
/**
* Restores line and circle vertex data from the {@code Bundle} argument. Call when the parent
* {@code GLSurfaceView} calls its corresponding {@code onRestoreInstanceState(Parcelable)}
* method.
*
* @param bundle Source {@code Bundle} to restore the renderer state from.
*/
public void onRestoreInstanceState(Bundle bundle) {
ArrayList<MotionEvent> motionEvents = bundle.getParcelableArrayList(EXTRA_MOTION_EVENTS);
if (motionEvents != null && !motionEvents.isEmpty()) {
mMotionEvents.addAll(motionEvents);
mRestoreMotionEvents = true;
}
}
}
And here's the GlTexturedQuad class:
package com.oaskamay.whiteboard.opengl.drawable;
import android.opengl.GLES20;
import com.oaskamay.whiteboard.opengl.GlUtil;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import java.nio.ShortBuffer;
public class GlTexturedQuad {
/*
* Vertex metadata: we have 3 coordinates per vertex, and a quad can be drawn with 2 triangles.
*/
private static final int VERTEX_COORDS = 3;
private static final String VERTEX_SHADER_SOURCE =
"uniform mat4 u_MvpMatrix; \n" +
"attribute vec4 a_Position; \n" +
"attribute vec2 a_TextureCoord; \n" +
"varying vec2 v_TextureCoord; \n" +
" \n" +
"void main() { \n" +
" v_TextureCoord = a_TextureCoord; \n" +
" gl_Position = u_MvpMatrix * a_Position; \n" +
"} \n";
private static final String FRAGMENT_SHADER_SOURCE =
"uniform sampler2D u_Texture; \n" +
"varying vec2 v_TextureCoord; \n" +
" \n" +
"void main() { \n" +
" gl_FragColor = texture2D(u_Texture, v_TextureCoord);\n" +
"} \n";
/*
* Vertex locations. The quad will cover the whole screen, and is in normalized device
* coordinates. The projection matrix for this quad should be identity.
*/
private static final float[] VERTICES = {
-1.0f, +1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
+1.0f, -1.0f, 0.0f,
+1.0f, +1.0f, 0.0f
};
/*
* Describes the order in which vertices are to be rendered.
*/
private static final short[] VERTICES_ORDER = {
0, 1, 2,
0, 2, 3
};
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
private float mMvpMatrix[];
private int mRenderTexture;
/*
* FloatBuffers used to store vertices and their order to draw.
*/
private final FloatBuffer mVertexBuffer;
private final ShortBuffer mVertexOrderBuffer;
private final FloatBuffer mTextureCoordsBuffer;
/*
* OpenGL handles to shader program, attributes, and uniforms.
*/
private final int mProgramHandle;
private final int mMvpMatrixHandle;
private final int mPositionHandle;
private final int mTextureHandle;
private final int mTextureCoordHandle;
/**
* Default constructor. Refrain from calling this multiple times as it may be expensive due to
* compilation of shader sources.
*/
public GlTexturedQuad() {
// initialize vertex buffer
ByteBuffer vertexBuffer = ByteBuffer.allocateDirect(VERTICES.length * 4);
vertexBuffer.order(ByteOrder.nativeOrder());
mVertexBuffer = vertexBuffer.asFloatBuffer();
mVertexBuffer.put(VERTICES);
mVertexBuffer.position(0);
// initialize vertex order buffer
ByteBuffer vertexOrderBuffer = ByteBuffer.allocateDirect(VERTICES_ORDER.length * 2);
vertexOrderBuffer.order(ByteOrder.nativeOrder());
mVertexOrderBuffer = vertexOrderBuffer.asShortBuffer();
mVertexOrderBuffer.put(VERTICES_ORDER);
mVertexOrderBuffer.position(0);
// initialize texture coordinates
ByteBuffer textureCoordsBuffer = ByteBuffer.allocateDirect(TEXTURE_COORDS.length * 4);
textureCoordsBuffer.order(ByteOrder.nativeOrder());
mTextureCoordsBuffer = textureCoordsBuffer.asFloatBuffer();
mTextureCoordsBuffer.put(TEXTURE_COORDS);
mTextureCoordsBuffer.position(0);
// compile vertex and fragment shader sources
int vertexShader = GlUtil.glLoadShader(GLES20.GL_VERTEX_SHADER,
VERTEX_SHADER_SOURCE);
int fragmentShader = GlUtil.glLoadShader(GLES20.GL_FRAGMENT_SHADER,
FRAGMENT_SHADER_SOURCE);
// create shader program and attach compiled sources
mProgramHandle = GLES20.glCreateProgram();
GLES20.glAttachShader(mProgramHandle, vertexShader);
GLES20.glAttachShader(mProgramHandle, fragmentShader);
GLES20.glLinkProgram(mProgramHandle);
// store attribute / uniform handles
mMvpMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MvpMatrix");
mTextureHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture");
mPositionHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Position");
mTextureCoordHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_TextureCoord");
}
/**
* Initializes texture components.
*
* @param width Width of texture in pixels.
* @param height Height of texture in pixels.
* @param renderTexture Handle of the texture to render into.
*/
public void initTexture(int width, int height, int renderTexture) {
mRenderTexture = renderTexture;
// allocate pixel buffer for texture
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(width * height * 4);
byteBuffer.order(ByteOrder.nativeOrder());
IntBuffer texturePixelBuffer = byteBuffer.asIntBuffer();
// initialize texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mRenderTexture);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, width, height,
0, GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, texturePixelBuffer);
}
/**
* Draws this object. The model-view-projection matrix must be set with
* {@link #setMvpMatrix(float[])}.
*/
public final void draw() {
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glEnableVertexAttribArray(mTextureCoordHandle);
// set vertex position and MVP matrix in shader
GLES20.glVertexAttribPointer(mPositionHandle, VERTEX_COORDS, GLES20.GL_FLOAT,
false, VERTEX_COORDS * 4, mVertexBuffer);
GLES20.glUniformMatrix4fv(mMvpMatrixHandle, 1, false, mMvpMatrix, 0);
// bind texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mRenderTexture);
// set texture data and coordinate
GLES20.glVertexAttribPointer(mTextureCoordHandle, 2, GLES20.GL_FLOAT, false, 0,
mTextureCoordsBuffer);
GLES20.glUniform1i(mTextureHandle, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, VERTICES_ORDER.length, GLES20.GL_UNSIGNED_SHORT,
mVertexOrderBuffer);
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mTextureCoordHandle);
}
/**
* Sets the model-view-projection matrix in the vertex shader. Necessary to map the normalized
* GL coordinate system to that of the display.
*
* @param mvpMatrix Matrix to use as the model-view-projection matrix.
*/
public void setMvpMatrix(float[] mvpMatrix) {
mMvpMatrix = mvpMatrix;
}
public int getProgramHandle() {
return mProgramHandle;
}
}
EDIT (12/11/2015):
@reto-koradi suggested a much better solution. Invert the V-axis by changing the texture coordinates. This fix is also simple:
Change this (initialization of TEXTURE_COORDS array in GlTexturedQuad):
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f
};
To this:
/*
* (u, v) texture coordinates to be sent to the vertex and fragment shaders.
*/
private static final float[] TEXTURE_COORDS = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f
};
I've fixed the issue. The problem was with the projection matrix used for the GlTexturedQuad. The fix was simple:
I changed this (in onSurfaceChanged(GL10, int, int) in GlDrawingRenderer):
// calculate projection and MVP matrix for texture
Matrix.setIdentityM(mTextureProjectionMatrix, 0);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
To this:
// calculate projection and MVP matrix for texture
Matrix.orthoM(mTextureProjectionMatrix, 0, -1.0f, 1.0f, 1.0f, -1.0f, 0.0f, 1.0f);
Matrix.multiplyMM(mTextureMvpMatrix, 0, mTextureProjectionMatrix, 0, mViewMatrix, 0);
mTexturedQuad.setMvpMatrix(mTextureMvpMatrix);
So now mTextureProjectionMatrix takes into account the V-axis inversion of the texture. Again, I'm an OpenGL ES 2.0 beginner, so my explanation might be wrong. But it works :)
I hope this post helped someone out there!
Although there seem to be many solutions to fix the inverted screen, you should understand what happens in the background: why it is even inverted in your case and, incidentally, why your solution is not general.
The OpenGL buffers follow the legacy desktop coordinate system, where the bottom-left point is the origin and height increases upwards. So the raw buffer data will have its first pixel at the bottom left, not at the top left as you would expect from how image data is usually laid out. So if you want to draw to the top-left part of the image, you actually need to draw to the bottom-left part of the buffer (respecting the presentation).
So your issue is not in how you present the drawn texture but in how you actually draw to the texture itself: your coordinate system is inverted while drawing the points. But what difference does it make where you invert?
There is a huge difference, actually. Since you inverted the coordinate system when drawing to the FBO and then inverted again when drawing to the presentation buffer, you got the correct result: your inversion equation is effectively (-1 * -1 = 1). But what happens if you add post processing with another FBO? Then (-1 * -1 * -1 = -1), which means you would need to change the presentation coordinates back to normal, as the result would appear inverted again.
Another issue appears if you try to read back pixels to generate an image. Reading from the presentation buffer will always give inverted data, but reading from the FBO should give correctly oriented data (which is not the case in your setup).
So the truly general solution is to respect the orientation when drawing to anything but the presentation buffer: the FBO matrix should not invert the Y coordinate, and Y should increase upwards. In your case the best thing to do is use a separate ortho call: for the FBO, simply flip the top and bottom values compared to the presentation values, as sketched below.
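A minimal sketch of the two ortho calls, assuming the width and height from onSurfaceChanged (mFboProjectionMatrix is a hypothetical name for the matrix used while rendering into the FBO):
// Presentation (window) projection: origin at the top left, Y increases
// downwards, matching the touch coordinates.
Matrix.orthoM(mProjectionMatrix, 0, 0.0f, width, height, 0.0f, 0.0f, 1.0f);
// FBO projection: top and bottom flipped compared to the presentation values,
// so Y increases upwards inside the offscreen texture.
Matrix.orthoM(mFboProjectionMatrix, 0, 0.0f, width, 0.0f, height, 0.0f, 1.0f);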

Line drawing drawn inverted in OpenGL ES

I'm trying to draw a simple line drawing connecting several vertices in OpenGL ES. However, the line is drawn inverted, or in a different position from where it should be drawn. I've attached the class for the line drawing below.
ConnectingPath.java
--------------------
public class ConnectingPath {
int positionBufferId;
PointF[] verticesList;
public float vertices[];
public FloatBuffer vertexBuffer;
public ConnectingPath(LinkedList<PointF> verticesList, float[] colors)
{
List<PointF> tempCorners = verticesList;
int i = 0;
this.verticesList = new PointF[tempCorners.size()];
for (PointF corner : tempCorners) {
this.verticesList[i++] = corner;
}
}
public float[] getTransformedVertices()
{
List<Float> finalVertices = new ArrayList<Float>();
finalVertices.clear();
for(PointF point : verticesList){
finalVertices.add(point.x);
finalVertices.add(point.y);
finalVertices.add(0.0f);
}
int i = 0;
float[] verticesArray = new float[finalVertices.size()];
for (Float f : finalVertices) {
verticesArray[i++] = (f != null ? f : Float.NaN);
}
return verticesArray;
}
public void initBooth(){
vertices = this.getTransformedVertices();
for(Float f : vertices){
Log.d("Mapsv3--", f + "");
}
ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
int[] buffers = new int[1];
GLES11.glGenBuffers(1, buffers, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, buffers[0]);
GLES11.glBufferData(GLES11.GL_ARRAY_BUFFER, 4 * vertices.length, vertexBuffer, GLES11.GL_STATIC_DRAW);
positionBufferId = buffers[0];
}
public void Render(GL10 gl){
GLES11.glPushMatrix();
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, positionBufferId);
GLES11.glEnableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glVertexPointer(3, GL10.GL_FLOAT, 0, 0);
GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, 0);
GLES11.glFrontFace(GL10.GL_CW);
GLES11.glLineWidth(10.0f);
GLES11.glColor4f(0.0f,0.0f,0.0f,1.0f);
GLES11.glDrawArrays(GL10.GL_LINE_STRIP, 0, verticesList.length);
GLES11.glDisableClientState(GL10.GL_VERTEX_ARRAY);
GLES11.glPopMatrix();
}
}
Drawing code :
Renderer.java
--------------
// Variables here
public void onSurfaceChanged(GL10 gl, int width, int height) {
viewWidth = width;
viewHeight = height;
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
gl.glEnable(GL10.GL_TEXTURE_2D); //Enable Texture Mapping
gl.glShadeModel(GL10.GL_SMOOTH); //Enable Smooth Shading
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f); //White Background
gl.glClearDepthf(1.0f); //Depth Buffer Setup
gl.glEnable(GL10.GL_DEPTH_TEST); //Enables Depth Testing
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
}
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, -viewWidth/2, viewWidth/2, -viewHeight/2,viewHeight/2);
gl.glTranslatef(center.x,center.y,0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0,0, 0);
gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_FRONT);
if(connectingPath!=null){
connectingPath.Render(gl);
}
gl.glDisable(GL10.GL_CULL_FACE);
gl.glLoadIdentity();
}
Screenshot:
The drawing in OpenGL seems to be inverted for you due to the way OpenGL defines its screen coordinates. In contrast to most 2D drawing APIs, the origin is located in the bottom-left corner, which means that the y-axis values increase when moving upwards. A very nice explanation is available in the OpenGL common pitfalls (number 12):
Given a sheet of paper, people write from the top of the page to the bottom. The origin for writing text is at the upper left-hand margin of the page (at least in European languages). However, if you were to ask any decent math student to plot a few points on an X-Y graph, the origin would certainly be at the lower left-hand corner of the graph. Most 2D rendering APIs mimic writers and use a 2D coordinate system where the origin is in the upper left-hand corner of the screen or window (at least by default). On the other hand, 3D rendering APIs adopt the mathematically minded convention and assume a lower left-hand origin for their 3D coordinate systems.
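In practice this means that, with the gluOrtho2D call from the renderer above, a point captured in screen coordinates (origin at the top left, Y pointing down) has to be flipped before being used as a GL vertex. A sketch, assuming the centered ortho range from onDrawFrame (flipScreenPoint is a hypothetical helper):
// Convert a screen-space point (origin top left, Y down) into the centered,
// Y-up coordinate system set up by gluOrtho2D above.
private PointF flipScreenPoint(PointF p) {
return new PointF(p.x - viewWidth / 2f, viewHeight / 2f - p.y);
}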

Reading the transformation matrix from AndAR

I am developing an Android-based tracking system using the camera and other sensors on Android. I am interested in reading the transformation matrix from AndAR rather than displaying some object (e.g. a cube) when the marker is detected. I have another tracking system, developed using a flavor of ARToolkit called JARToolKit, that runs on a desktop machine and gives the transformation matrix between the web camera and the pattern.
Right now I am getting the transformation matrix from AndAR, but if I compare it with the transformation matrix that I am getting from JARToolKit, it is totally different. The reason could be the following problems:
The surface preview that I see on Android is always rotated by 90 degrees, so my X and Y coordinates in the translation matrix exchange their positions.
I am not sure about the unit of the translation matrix. It comes to around 4 units per cm in the physical world, but there is no way for me to verify this.
I would appreciate it if anyone could help me address these questions or let me know if I am missing something. Thanks in advance.
Following is the code that I am using. It is pretty much the same as in the AndAR documentation.
boolean keepRunning = true;
// Declared outside the try block so it is visible in the loop below.
CustomObject object_hiro = null;
try {
ARToolkit artoolkit = getArtoolkit();
object_hiro = new CustomObject("test_hiro", "hiro.patt", 80.0, new double[] { 0, 0 });
artoolkit.registerARObject(object_hiro);
}
catch (AndARException ex)
{
System.out.println("");
}
while(keepRunning)
{
double[] transMatrix = (double[]) object_hiro.getTransMatrix();
}
And here is CustomObject.java:
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
import edu.dhbw.andar.ARObject;
import edu.dhbw.andar.pub.SimpleBox;
import edu.dhbw.andar.util.GraphicsUtil;
/**
* An example of an AR object being drawn on a marker.
* @author tobi
*/
public class CustomObject extends ARObject {
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter) {
super(name, patternName, markerWidth, markerCenter);
float mat_ambientf[] = {0f, 1.0f, 0f, 1.0f};
float mat_flashf[] = {0f, 1.0f, 0f, 1.0f};
float mat_diffusef[] = {0f, 1.0f, 0f, 1.0f};
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(mat_ambientf);
mat_flash = GraphicsUtil.makeFloatBuffer(mat_flashf);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(mat_diffusef);
}
public CustomObject(String name, String patternName,
double markerWidth, double[] markerCenter, float[] customColor) {
super(name, patternName, markerWidth, markerCenter);
float mat_flash_shinyf[] = {50.0f};
mat_ambient = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash = GraphicsUtil.makeFloatBuffer(customColor);
mat_flash_shiny = GraphicsUtil.makeFloatBuffer(mat_flash_shinyf);
mat_diffuse = GraphicsUtil.makeFloatBuffer(customColor);
}
private SimpleBox box = new SimpleBox();
private FloatBuffer mat_flash;
private FloatBuffer mat_ambient;
private FloatBuffer mat_flash_shiny;
private FloatBuffer mat_diffuse;
/**
* Everything drawn here will be drawn directly onto the marker,
* as the corresponding translation matrix will already be applied.
*/
@Override
public final void draw(GL10 gl) {
super.draw(gl);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR,mat_flash);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SHININESS, mat_flash_shiny);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, mat_diffuse);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, mat_ambient);
//draw cube
gl.glColor4f(1.0f, 0f, 0, 1.0f);
gl.glTranslatef( 0.0f, 0.0f, 12.5f );
box.draw(gl);
}
@Override
public void init(GL10 gl) {
// TODO Auto-generated method stub
}
}
Please let me know if I need to provide additional information. Thanks.
The original C ARToolKit has two types of transformation associated with a marker:
a 3x4 matrix (the computer vision matrix from the pose estimation, obtained from arGetTransMat)
a 4x4 matrix (an OpenGL-like matrix, obtained from argConvGLcpara with the above 3x4 matrix).
In AndAR:
3x4 matrix: can be obtained by calling getTransMatrix() on your ARObject.
4x4 matrix: not publicly accessible from your ARObject; the matrix is stored in glCameraMatrix (see the code of ARObject.java).
In JARToolKit:
3x4 matrix: can be obtained by calling getTransMatrix.
4x4 matrix: can be obtained by calling getCamTransMatrix.
Maybe you are accessing different matrices in AndAR and JARToolKit.
The unit is relative to your marker size. Generally it is in mm: the parameter 80.0 in your declaration of object_hiro represents an 80 mm width. You can print your marker at this size so that your physical object and the virtual content match.
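A minimal sketch of reading the 3x4 matrix and pulling out the translation (the row-major [R | t] layout is an assumption; check ARObject.java for the exact convention):
double[] transMatrix = (double[]) object_hiro.getTransMatrix();
// Assuming a row-major 3x4 layout [R | t], the translation vector is the last
// column; with markerWidth = 80.0 its units should be millimeters.
double tx = transMatrix[3];
double ty = transMatrix[7];
double tz = transMatrix[11];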
