Vuforia VideoPlayback issue - video is playing inverted - Android

I'm working on a project combining Vuforia ImageTarget and VideoPlayback. I have N targets, each with a corresponding video. For some image targets the video is flipped, and I can't find any solution for this issue. Here is my VideoPlaybackRenderer:
int videoPlaybackTextureID[] = new int[VideoPlayback.NUM_TARGETS];
// Keyframe and icon rendering specific
private int keyframeShaderID = 0;
private int keyframeVertexHandle = 0;
private int keyframeNormalHandle = 0;
private int keyframeTexCoordHandle = 0;
private int keyframeMVPMatrixHandle = 0;
private int keyframeTexSampler2DHandle = 0;
// We cannot use the default texture coordinates of the quad since these
// will change depending on the video itself
private float videoQuadTextureCoords[] = { 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, };
private Float videoQuadTextureCoordsTransformed[] = {0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f,};
List<Float[]> videoQuadTextureCoordsTransformedList = new ArrayList<Float[]>();
// Trackable dimensions
Vec3F targetPositiveDimensions[] = new Vec3F[VideoPlayback.NUM_TARGETS];

It looks like you need to select the video object and then apply something like this. For example, if you select a cube, this will rotate the cube 180 degrees without modifying any of the other rotation axes:
cube.transform.rotation = new Quaternion(cube.transform.rotation.x, cube.transform.rotation.y, cube.transform.rotation.z, 180);
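That snippet is Unity-style C#; in the Java renderer from the question, the same idea would be applied to the target's model-view matrix before it is handed to the shader. A minimal sketch, assuming a modelViewMatrix float[16] built from the trackable pose and a hypothetical per-target isFlipped flag (neither name is in the original code):
import android.opengl.Matrix;
// Hypothetical fix inside the render loop: rotate the video quad 180 degrees
// around the Z axis for the targets whose video renders inverted.
static void correctOrientation(float[] modelViewMatrix, boolean isFlipped) {
    if (isFlipped) {
        Matrix.rotateM(modelViewMatrix, 0, 180.0f, 0.0f, 0.0f, 1.0f);
    }
}
Alternatively, since the renderer already keeps per-target transformed texture coordinates (videoQuadTextureCoordsTransformed), swapping the V coordinates for the affected targets flips the video without touching the pose.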

Related

Android game - simulate object rotation along its path

I'm reading this awesome Beginning Android Games book, and I'm now trying to implement some tests myself.
I'm using OpenGL ES 1.0, and I'm comfortable now manipulating the view frustum, projections, translation, rotation, scale, etc.
What I'm trying to do:
a) render a rocket to the screen, add some velocity and acceleration to it (using Euler's integration - add the acceleration to the velocity, and the velocity to the position) to simulate a path (parabola). - This is done, implemented without any issue.
b) Rotate the rocket, so that we can also simulate the inclination of the object along its path. - That's the problem.
To be clear, I'm adding the image below.
I can't figure out the correct angle to add to the rocket between one frame and the next.
I tried to get that with some geometry.
Obj Pos 1 is the rocket representation at frame 1.
Obj Pos 2 is the rocket representation, at the next frame (frame 2).
V1 is the vector that holds the center X and Y coordinates of the Obj Pos 1.
V2 is the vector that holds the center X and Y coordinates of the Obj Pos 2.
Tangent line 1 is the tangent line to the parabola, to where V1 points.
Tangent line 2 is the tangent line to the parabola, to where V2 points.
A1 is the angle between both vectors.
A2 is the angle between both tangent lines.
As far as I can see, the correct angle to apply to the rocket from frame 1 to frame 2 is angle A2. But how can I calculate it?
And is this correct for game purposes? I mean, we don't need to be physically exact; we just need to be good enough to simulate the animation and 'cheat' the user.
The code follows below:
public class PersonalTest008Rocket extends GLGame {
@Override
public Screen getStartScreen() {
return new RocketScreen(this);
}
class RocketScreen extends Screen {
GLGraphics glGraphics;
Camera2D camera;
final float WORLD_WIDTH = 60;
final float WORLD_HEIGHT = 36;
float[] rocketRawData;
short[] rocketRawIndices;
BindableVertices rocketVertices;
DynamicGameObject rocket;
float angle;
Vector2 gravity;
public RocketScreen(Game game) {
super(game);
glGraphics = ((GLGame) game).getGLGraphics();
camera = new Camera2D(glGraphics, WORLD_WIDTH, WORLD_HEIGHT);
rocketRawData = new float[]{
// x, y, r, g, b, a
+4.0f, +0.0f, 1.0f, 0.0f, 0.0f, 1, // 0
+2.0f, +1.0f, 0.5f, 0.0f, 0.0f, 1, // 1
+2.0f, -1.0f, 0.5f, 0.0f, 0.0f, 1, // 2
-2.0f, +1.0f, 0.0f, 0.5f, 0.5f, 1, // 3
-2.0f, -1.0f, 0.0f, 0.5f, 0.5f, 1, // 4
-3.0f, +1.0f, 0.0f, 0.5f, 0.5f, 1, // 5
-3.0f, -1.0f, 0.0f, 0.5f, 0.5f, 1, // 6
-4.0f, +3.0f, 0.0f, 0.0f, 1.0f, 1, // 7
-5.0f, +0.0f, 0.0f, 0.0f, 1.0f, 1, // 8
-4.0f, -3.0f, 0.0f, 0.0f, 1.0f, 1 // 9
};
rocketRawIndices = new short[]{
0, 1, 2,
1, 4, 2,
1, 3, 4,
3, 4, 6,
3, 5, 6,
3, 7, 5,
5, 8, 6,
6, 9, 4
};
rocketVertices = new BindableVertices(glGraphics, 10, 3 * 8, true, false);
rocketVertices.setVertices(rocketRawData, 0, rocketRawData.length);
rocketVertices.setIndices(rocketRawIndices, 0, rocketRawIndices.length);
int velocity = 30;
angle = 45;
rocket = new DynamicGameObject(0, 0, 9, 6);
rocket.position.add(1, 1);
rocket.velocity.x = (float) Math.cos(Math.toRadians(angle)) * velocity;
rocket.velocity.y = (float) Math.sin(Math.toRadians(angle)) * velocity;
gravity = new Vector2(0, -10);
}
@Override
public void update(float deltaTime) {
rocket.velocity.add(gravity.x * deltaTime, gravity.y * deltaTime);
rocket.position.add(rocket.velocity.x * deltaTime, rocket.velocity.y * deltaTime);
}
@Override
public void present(float deltaTime) {
GL10 gl = glGraphics.getGL();
gl.glClearColor(0.5f, 0.5f, 0.5f, 1);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
camera.setViewportAndMatrices();
gl.glTranslatef(rocket.position.x, rocket.position.y, 0);
gl.glRotatef(angle, 0, 0, 1);
rocketVertices.bind();
rocketVertices.draw(GL10.GL_TRIANGLES, 0, rocketRawIndices.length);
rocketVertices.unbind();
}
@Override
public void pause() {
}
@Override
public void resume() {
}
@Override
public void dispose() {
}
}
}
You can calculate the angle based on the instantaneous velocity vector:
// Get direction to point the ship
mangleInDeg = (float) (Math.atan2(mRelSpeed.y, mRelSpeed.x) * 180 / Math.PI);
mangleInDeg += 90.0; // offset the angle to coincide with angle of the base ship image
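Applied to the rocket code above, the same idea goes into update(): derive the angle from the current velocity every frame instead of integrating an angular delta. A minimal sketch reusing the fields from the question (rocket, gravity, angle):
@Override
public void update(float deltaTime) {
    rocket.velocity.add(gravity.x * deltaTime, gravity.y * deltaTime);
    rocket.position.add(rocket.velocity.x * deltaTime, rocket.velocity.y * deltaTime);
    // Point the rocket along its instantaneous velocity vector.
    angle = (float) Math.toDegrees(Math.atan2(rocket.velocity.y, rocket.velocity.x));
}
The +90 offset in the snippet above is specific to that ship image; the rocket model in the question already points along +x, so no offset is needed here.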

Learning camera OpenGl ES 2.0

I am trying to understand how the camera works in OpenGL ES, so I am trying to look at the same point with the two different projection types, Matrix.frustumM and Matrix.orthoM.
I would like to know what exactly I am doing when I use Matrix.frustumM or orthoM. I know that I apply them to the projection matrix, but I don't understand what the parameters define (left, right, bottom, top, near, far of what? Is it supposed to be the screen of the phone?). The same goes for orthoM.
I want to draw a square on the screen at (0, 0, 0) with a height and width of 1f (like 2D, just to test the cameras),
but this is what I do in onSurfaceCreated:
final float eyeX = 2f;
final float eyeY = 5f;
final float eyeZ = 8f;
final float lookX = 2f;
final float lookY = 5f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
and this in onSurfaceChanged:
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the
// same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 25.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
That is what I saw on the phone.
Draw function:
public void dibujarBackground()
{
// Draw a plane
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mBackgroundDataHandle);
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f,2.0f, 0.0f);
drawBackground();
}
private void drawBackground()
{
coordinate.drawBackground(mPositionHandle, mNormalHandle, mTextureCoordinateHandle);
// This multiplies the view matrix by the model matrix, and stores the
// result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniform3f(mLightPosHandle,Light.mLightPosInEyeSpace[0], Light.mLightPosInEyeSpace[1], Light.mLightPosInEyeSpace[2]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
Coords of the square:
final float[] backgroundPositionData = {
// In OpenGL counter-clockwise winding is default.
0f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 0f, 0.0f,
1f, 1f, 0.0f,
};
final float[] backgroundNormalData = {
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, };
final float[] backgroundTextureCoordinateData = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f, };
Overall, what you get in the end is a single matrix which is used to multiply the positions so that the visible fragments end up in the range [-1, 1] in all three dimensions. That means if you use no matrix, or the identity, the coordinates need to be in this range to be visible. So the three matrix computations you are using are really just conveniences to help you build a correct transformation:
Ortho is an orthographic transformation. This means the on-screen x and y coordinates are not affected by the z coordinate at all; visually, an object does not appear smaller when it is further away. The values you pass to this convenience method are border values (left, right, top, bottom), which means a rectangle with those same coordinates will fill exactly the full screen. These values are usually chosen to match your view coordinate system (left = 0, right = screenWidth, top = 0, bottom = screenHeight). There are also near and far parameters which represent the clipping planes, so that positions closer than near or farther than far are not visible. This projection is mostly used for 2D drawing.
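For example, an ortho projection matching the view coordinate system described above (origin at the top-left, one unit per pixel) would look roughly like this, assuming the width and height from onSurfaceChanged:
// left = 0, right = width, top = 0, bottom = height; near/far just bracket z = 0.
Matrix.orthoM(mProjectionMatrix, 0, 0f, width, height, 0f, -1f, 1f);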
The frustum matrix is designed so that the x and y coordinates are reduced with increasing z. This means an object appears smaller when it is further away. The border parameters are tied to the near parameter, so a rectangle with those border coordinates at z = near will appear full screen. near must be larger than zero in this case, or the result is unpredictable. The far parameter is just a clipping plane; as with ortho, pixels are clipped if their z value is smaller than near or larger than far. The border parameters are best computed from the field of view (an angle) and the screen aspect ratio, using the tan function to get the desired effect. This method is mostly used for 3D drawing.
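As a sketch of that field-of-view approach (a hypothetical helper, not part of the question's code):
// Build a perspective frustum from a vertical field of view and aspect ratio.
// The borders scale with the near plane: top = near * tan(fovY / 2).
void setPerspective(float[] projM, float fovYDeg, float aspect, float near, float far) {
    float top = near * (float) Math.tan(Math.toRadians(fovYDeg) / 2.0);
    float right = top * aspect;
    Matrix.frustumM(projM, 0, -right, right, -top, top, near, far);
}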
LookAt is a convenience used to transform all the objects to positions and orientations such that they appear to be affected by the camera. Though the method is defined with vectors, you can think of it as a camera position and rotations: it creates a matrix that rotates every object by -rotations and translates it by -position.
Overall, the usage is pretty simple. Each position is first multiplied by the model matrix, which represents the model's placement in your scene; then by the matrix from lookAt, to simulate the camera; then by the projection matrix, which in most cases is either the ortho or the frustum matrix. The optimization is to multiply the matrices together on the CPU first, and then have the positions multiplied by the combined matrix on the GPU. A common variation is to split this into a "model-view matrix" and a "projection matrix", which is used for things like lighting, where the position must not be affected by the projection matrix.

Android Paint and Color Neon Effect

I have solid RGB colors as shown below. How can I apply a neon glow effect to the plain RGB color codes below? I am new to programming, so please bear with my ignorance regarding this.
public static final class Color {
static final float RGB_UPPER_BOUND = 255;
static final float[] GRAY_RGB = {153/RGB_UPPER_BOUND, 60/RGB_UPPER_BOUND, 243/RGB_UPPER_BOUND};
static final float[] WHITE_RGB = {255/RGB_UPPER_BOUND, 65/RGB_UPPER_BOUND, 5/RGB_UPPER_BOUND};
static final float[] BLACK_RGB = {0/RGB_UPPER_BOUND, 0/RGB_UPPER_BOUND, 0/RGB_UPPER_BOUND};
static final float[] RED_RGB = {255/RGB_UPPER_BOUND, 0/RGB_UPPER_BOUND, 0/RGB_UPPER_BOUND};
static final float[] BLUE_RGB = {77/RGB_UPPER_BOUND, 77/RGB_UPPER_BOUND, 255/RGB_UPPER_BOUND};
static final float[] GREEN_RGB = {131/RGB_UPPER_BOUND, 245/RGB_UPPER_BOUND, 44/RGB_UPPER_BOUND};
public static final float[] WHITE = {
WHITE_RGB[0], WHITE_RGB[1], WHITE_RGB[2], 1.0f, // bottom left
WHITE_RGB[0], WHITE_RGB[1], WHITE_RGB[2], 1.0f, // top left
WHITE_RGB[0], WHITE_RGB[1], WHITE_RGB[2], 1.0f, // bottom right
WHITE_RGB[0], WHITE_RGB[1], WHITE_RGB[2], 1.0f, // top right
};
public static final float[] GRAY = {
GRAY_RGB[0], GRAY_RGB[1], GRAY_RGB[2], 1.0f,
GRAY_RGB[0], GRAY_RGB[1], GRAY_RGB[2], 1.0f,
GRAY_RGB[0], GRAY_RGB[1], GRAY_RGB[2], 1.0f,
GRAY_RGB[0], GRAY_RGB[1], GRAY_RGB[2], 1.0f,
};
public static final float[] BLUE = {
BLUE_RGB[0], BLUE_RGB[1], BLUE_RGB[2], 1.0f,
BLUE_RGB[0], BLUE_RGB[1], BLUE_RGB[2], 1.0f,
BLUE_RGB[0], BLUE_RGB[1], BLUE_RGB[2], 1.0f,
BLUE_RGB[0], BLUE_RGB[1], BLUE_RGB[2], 1.0f,
};
public static final float[] GREEN = {
GREEN_RGB[0], GREEN_RGB[1], GREEN_RGB[2], 1.0f,
GREEN_RGB[0], GREEN_RGB[1], GREEN_RGB[2], 1.0f,
GREEN_RGB[0], GREEN_RGB[1], GREEN_RGB[2], 1.0f,
GREEN_RGB[0], GREEN_RGB[1], GREEN_RGB[2], 1.0f,
};
}
UPDATE:
I want to achieve something like this, but with a neon glow, not just a solid neon color.
https://play.google.com/store/apps/details?id=com.ginnko.games.neonpong

Android Open Gl Object Selection

In OpenGL there is a term called picking, which is used to determine which object on the screen was selected. Can someone explain the difference between using picking and putting a touch-based listener in each and every instance of an object, e.g. a Cube class?
Hypothetically, what I want to do is display multiple cubes on the screen at random positions. I figured that if I give the Cube class a listener, then upon touching a cube the listener should fire accordingly for each cube pressed.
This is the code I would add the listener to.
Would this be possible or is picking necessary?
public class Cube extends Shapes {
private FloatBuffer mVertexBuffer;
private FloatBuffer mColorBuffer;
private ByteBuffer mIndexBuffer;
private Triangle[] normTris = new Triangle[12];
private Triangle[] transTris = new Triangle[12];
// every 3 entries represent the position of one vertex
private float[] vertices =
{
-1.0f, -1.0f, -1.0f,
1.0f, -1.0f, -1.0f,
1.0f, 1.0f, -1.0f,
-1.0f, 1.0f, -1.0f,
-1.0f, -1.0f, 1.0f,
1.0f, -1.0f, 1.0f,
1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f
};
// every 4 entries represent the color (r,g,b,a) of the corresponding vertex in vertices
private float[] colors =
{
1.0f, 0.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f
};
// every 3 entries make up a triangle, every 6 entries make up a side
private byte[] indices =
{
0, 4, 5, 0, 5, 1,
1, 5, 6, 1, 6, 2,
2, 6, 7, 2, 7, 3,
3, 7, 4, 3, 4, 0,
4, 7, 6, 4, 6, 5,
3, 0, 1, 3, 1, 2
};
private float[] createVertex(int Index)
{
float[] vertex = new float[3];
int properIndex = Index * 3;
vertex[0] = vertices[properIndex];
vertex[1] = vertices[properIndex + 1];
vertex[2] = vertices[properIndex + 2];
return vertex;
}
public Triangle getTriangle(int index){
Triangle tri = null;
//if(index >= 0 && index < indices.length){
float[] v1 = createVertex(indices[(index * 3) + 0]);
float[] v2 = createVertex(indices[(index * 3) + 1]);
float[] v3 = createVertex(indices[(index * 3) + 2]);
tri = new Triangle(v1, v2, v3);
// }
return tri;
}
public int getNumberOfTriangles(){
return indices.length / 3;
}
public boolean checkCollision(Ray r, OpenGLRenderer renderer){
boolean isCollide = false;
int i = 0;
while(i < getNumberOfTriangles() && !isCollide){
float[] I = new float[3];
if(Shapes.intersectRayAndTriangle(r, transTris[i], I) > 0){
isCollide = true;
}
i++;
}
return isCollide;
}
public void translate(float[] trans){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(Vector.addition(transTris[i].getV1(), trans));
transTris[i].setV2(Vector.addition(transTris[i].getV2(), trans));
transTris[i].setV3(Vector.addition(transTris[i].getV3(), trans));
}
}
public void scale(float[] scale){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(Vector.scalePoint(transTris[i].getV1(), scale));
transTris[i].setV2(Vector.scalePoint(transTris[i].getV2(), scale));
transTris[i].setV3(Vector.scalePoint(transTris[i].getV3(), scale));
}
}
public void resetTransfomations(){
for(int i = 0; i < getNumberOfTriangles(); i++){
transTris[i].setV1(normTris[i].getV1().clone());
transTris[i].setV2(normTris[i].getV2().clone());
transTris[i].setV3(normTris[i].getV3().clone());
}
}
public Cube()
{
Buffer[] buffers = super.getBuffers(vertices, colors, indices);
mVertexBuffer = (FloatBuffer) buffers[0];
mColorBuffer = (FloatBuffer) buffers[1];
mIndexBuffer = (ByteBuffer) buffers[2];
}
public Cube(float[] vertices, float[] colors, byte[] indices)
{
if(vertices != null) {
this.vertices = vertices;
}
if(colors != null) {
this.colors = colors;
}
if(indices != null) {
this.indices = indices;
}
Buffer[] buffers = getBuffers(this.vertices, this.colors, this.indices);
mVertexBuffer = (FloatBuffer) buffers[0];
mColorBuffer = (FloatBuffer) buffers[1];
mIndexBuffer = (ByteBuffer) buffers[2];
for(int i = 0; i < getNumberOfTriangles(); i++){
normTris[i] = getTriangle(i);
transTris[i] = getTriangle(i);
}
}
public void draw(GL10 gl)
{
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
// draw all 12 triangles (36 indices)
gl.glDrawElements(GL10.GL_TRIANGLES, 36, GL10.GL_UNSIGNED_BYTE, mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
}
}
Using a listener does not work in this case.
Take the OnTouchListener as an example: it is basically an interface providing just a single method, onTouch(). When Android processes touch input and the target view is touched, it knows your listener can be informed about the touch by calling its onTouch() method.
When using OpenGL you have the problem that no one handles the touch input inside your OpenGL surface; you have to do it yourself, so there is no one who will call your listener.
Why? Because what you render inside your GL surface is up to you. Only you know what the actual geometry is, and therefore you are the only one who can decide which object was selected.
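In practice that means overriding onTouchEvent on your GLSurfaceView (or handling it in the activity) and forwarding the coordinates to your own hit-testing code. A minimal sketch; MyRenderer and its handleTouch() method are assumed placeholders:
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

public class PickingSurfaceView extends GLSurfaceView {
    private final MyRenderer renderer; // assumed renderer that knows the scene geometry

    public PickingSurfaceView(Context context, MyRenderer renderer) {
        super(context);
        this.renderer = renderer;
        setRenderer(renderer);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            final float x = event.getX();
            final float y = event.getY();
            // Run the hit test on the GL thread, where the GL state lives.
            queueEvent(new Runnable() {
                public void run() {
                    renderer.handleTouch(x, y);
                }
            });
        }
        return true;
    }
}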
You basically have two options for doing the selection:
Ray shooting - shoot a ray from the eye of the viewer through the touched point into the scene and check which object it hits.
Color picking - assign IDs to your objects, encode each ID as a color, and render the scene with these colors; finally, read the color at the touch position and decode it to get the ID of the selected object.
For most applications I would prefer the second solution; a rough sketch follows below.
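A rough sketch of color picking with the GL10 API used in the question; drawSceneWithPickingColors() and the ID encoding are assumptions, not part of the original code:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.microedition.khronos.opengles.GL10;

// Render the scene once with lighting/texturing disabled and each object drawn
// in a flat color that encodes its ID (e.g. glColor4f(((id >> 16) & 0xFF) / 255f, ...)),
// then read the pixel under the touch point before swapping buffers.
int pickObjectAt(GL10 gl, int touchX, int touchY, int viewHeight) {
    drawSceneWithPickingColors(gl); // assumed: draws every object in its ID color
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // OpenGL's y axis starts at the bottom of the viewport, so flip the touch y.
    gl.glReadPixels(touchX, viewHeight - touchY, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    return (r << 16) | (g << 8) | b; // decode the color back into the object ID
}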

glUnProject problem in Android

In my game, I need to find out where the player is touching. MotionEvent.getX() and MotionEvent.getY() return window coordinates. So I made this function to test converting window coordinates into OpenGL coordinates:
public void ConvertCoordinates(GL10 gl) {
float location[] = new float[4];
final MatrixGrabber mg = new MatrixGrabber(); //From the SpriteText demo in the samples.
mg.getCurrentProjection(gl);
mg.getCurrentModelView(gl);
int viewport[] = {0,0,WinWidth,WinHeight};
GLU.gluUnProject((float) WinWidth/2, (float) WinHeight/2, (float) 0, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, location,0);
Log.d("Location",location[1]+", "+location[2]+", "+location[3]+"");
}
X and Y oscillated from almost -33 to almost +33, and Z is usually 10. Did I use MatrixGrabber wrong or something?
Well, the easiest way for me to get into this was to imagine the on-screen click as a ray that starts at the camera position and goes off to infinity.
To get that ray, I needed to ask for its world coordinates at two or more Z positions (in view coordinates).
Here's my method for finding the ray (taken from the same Android demo app, I guess). It works fine in my app:
public void Select(float x, float y) {
MatrixGrabber mg = new MatrixGrabber();
int viewport[] = { 0, 0, _renderer.width, _renderer.height };
_renderer.gl.glMatrixMode(GL10.GL_MODELVIEW);
_renderer.gl.glLoadIdentity();
// We need to apply our camera transformations before getting the ray coords
// (ModelView matrix should only contain camera transformations)
mEngine.mCamera.SetView(_renderer.gl);
mg.getCurrentProjection(_renderer.gl);
mg.getCurrentModelView(_renderer.gl);
float realY = ((float) (_renderer.height) - y);
float nearCoords[] = { 0.0f, 0.0f, 0.0f };
float farCoords[] = { 0.0f, 0.0f, 0.0f };
gluUnProject(x, realY, 0.0f, mg.mModelView, 0, mg.mProjection, 0,
viewport, 0, nearCoords, 0);
gluUnProject(x, realY, 1.0f, mg.mModelView, 0, mg.mProjection, 0,
viewport, 0, farCoords, 0);
mEngine.RespondToClickRay(nearCoords, farCoords);
}
In OpenGL ES 1.0, AFAIK, you don't have access to the Z-buffer, so you can't find the Z of the closest object at the screen (x, y) coordinates.
You need to calculate the first object hit by your ray on your own.
Your error comes from the size of the output arrays: they need length 4, not 3 as in the code above.
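With arrays of length 4, the result still needs to be divided by w before use. A sketch reusing the MatrixGrabber names from the code above (the helper itself is an assumption):
// Hypothetical helper: unproject a window coordinate into world space.
float[] unproject(float winX, float winY, float winZ, MatrixGrabber mg, int[] viewport) {
    float[] out = new float[4]; // gluUnProject writes x, y, z, w
    GLU.gluUnProject(winX, winY, winZ, mg.mModelView, 0, mg.mProjection, 0, viewport, 0, out, 0);
    // Perspective divide: convert the homogeneous result into a 3D point.
    return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
}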
