Why are textures displayed larger and incomplete on Android compared to PC?

My aim is to write a game in C++ for Linux, Windows and Android. I use SDL and am able to draw basic geometric shapes using OpenGL ES 2.0 and shaders. But when I try to apply textures to these shapes, I notice that they appear larger and incomplete on Android. On PC it works fine. My code does not have to be changed to compile for Android. I use Ubuntu 14.10 and test on it as well as on my Nexus 5 with Android 5.0.1.
I set up an orthographic projection matrix that gives me a "surface" with an aspect ratio of 16:9 in the area x 0.0 to 1.0 and y 0.0 to 0.5625. In this area I draw a rectangle to check that "custom space":
//Clear
glClearColor(0.25, 0.0, 0.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Configure viewport
glViewport(0, 0, this->width, this->height);
//Update matrices
this->baseShader.bind();
this->baseShader.updateProjectionMatrix(this->matrix);
this->matrix.loadIdentity();
this->baseShader.updateModelViewMatrix(this->matrix);
//Draw rectangle
GLfloat vertices[] = {0.0, 0.0, 0.0, 0.5625, 1.0, 0.0, 1.0, 0.5625};
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
The results are as follows:
TextureComparison1.png - Dropbox
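For reference, the projection my updateProjectionMatrix call uploads is equivalent to a standard orthographic matrix for that space (a minimal sketch; the near/far planes of -1/1 are an assumption, and the Matrix class itself is my own):
//Sketch of the orthographic projection for the custom space:
//left=0, right=1, bottom=0, top=0.5625 (16:9), near/far assumed at -1/1.
//Column-major layout, as OpenGL expects.
void buildOrtho(GLfloat* m,
                GLfloat l, GLfloat r, GLfloat b, GLfloat t,
                GLfloat n, GLfloat f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
}
//Usage: buildOrtho(proj, 0.0f, 1.0f, 0.0f, 0.5625f, -1.0f, 1.0f);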
Then I draw a square and map a texture to it. Here is the code:
//Enable blending
bw::Texture::enableAlpha();
//Matrices
this->textureShader.bind();
this->textureShader.updateProjectionMatrix(this->matrix);
this->matrix.loadIdentity();
this->matrix.translate(this->sW/2.0, this->sH/2.0, 0.0);
this->matrix.rotate(rot, 0.0, 0.0, 1.0);
this->matrix.translate(-this->sW/2.0, -this->sH/2.0, 0.0);
this->textureShader.updateModelViewMatrix(this->matrix);
//Coordinates
float x3 = this->sW/2.0-0.15, x4 = this->sW/2.0+0.15;
float y3 = this->sH/2.0-0.15, y4 = this->sH/2.0+0.15;
GLfloat vertices2[] = {x3, y3, x3, y4, x4, y3, x4, y4};
GLfloat texCoords[] = {0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0};
//Send coordinates to the GPU
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vertices2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
glEnableVertexAttribArray(1);
//Bind texture
int u1 = glGetUniformLocation(this->textureShader.getId(), "u_Texture");
glActiveTexture(GL_TEXTURE0);
this->spriteTexture.bind();
glUniform1i(u1, 0);
//Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
But this gives me different results:
TextureComparison2.png - Dropbox
Then I tried modifying and experimenting with the texture coordinates. If I halve them, i.e. change all 1.0s to 0.5, only the "1 field" of my texture should be displayed. On Linux that is the case, but on Android some seemingly random area of the texture is displayed instead.
So can anyone give me a hint as to what I am doing wrong?

I figured out the problem. I bind my attributes a_Vertex to 0 and a_TexCoordinate to 1. On PC that is the layout I happen to get, but on Android it is reversed. So I took a look at the OpenGL ES 2.0 reference for glBindAttribLocation: you have to bind the locations before linking the program. Now it works exactly the same on Android as on PC.
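For anyone with the same problem, the fix is roughly this (a sketch; programId stands in for my shader program handle):
//Bind the attribute locations BEFORE linking, so they are the same
//on every platform instead of being driver-assigned.
glBindAttribLocation(programId, 0, "a_Vertex");
glBindAttribLocation(programId, 1, "a_TexCoordinate");
glLinkProgram(programId);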

Related

Result difference between android virtual device and real device

The following code draws two different cubes (red/green) in Android Studio with JNI, using OpenGL ES. On the virtual device the result is correct. However, on a real device the result looks strange.
The structure (meaning the 3D model and its 2D view) is correct, but the colors differ from the AVD. In addition, it looks like the depth test is not working. What is the problem?
Simply speaking:
AVD: gives the correct result (red and green cubes, depth test on, with a specific camera pose)
Real device: gives a strange result (same camera pose as the AVD, but the colors are different, there are two green cubes, and the depth test is not working)
float color1[] = {1.0f, 0.0f, 0.0f};
float color2[] = {0.0f, 1.0f, 0.0f};
int mColorHandle1;
int mColorHandle2;
glViewport(0,1280,720,1280);
glEnable(GL_DEPTH_TEST);
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 36);
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], 1.0f);
glDrawArrays(GL_TRIANGLES, 36, 36);
glDisableVertexAttribArray(gvPositionHandle);
glDisable(GL_DEPTH_TEST);
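One thing worth checking (a hedged sketch, not a confirmed fix): depth testing silently does nothing if the surface was created without a depth buffer, and emulators are sometimes more forgiving about this than real devices. Assuming an EGL-based setup where display has already been initialized:
//Hedged sketch: request a depth buffer explicitly in the EGL config;
//without EGL_DEPTH_SIZE, glEnable(GL_DEPTH_TEST) has no buffer to test against.
const EGLint attribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
    EGL_DEPTH_SIZE, 16,
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, attribs, &config, 1, &numConfigs);
//...and clear that buffer every frame before drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);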

Android: calculate object 2D position from the projection and model-view matrices in OpenGL

I only have the projection and model-view matrices, while what I want is to get the 2D position of the object on the screen.
The projection matrix is an array of float[16].
For example:
float[] projectionMatrix = new float[] {
    2.6077228f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.9605954f, 0.0f, 0.0f,
    -0.010504603f, -0.01849866f, -1.004008f, -1.0f,
    0.0f, 0.0f, -20.040081f, 0.0f
};
The model-view matrix has the same layout.
For example:
float[] modelViewMatrix = new float[] {
    0.78095937f, -0.05827314f, -0.6218487f, 0.0f,
    0.04460156f, 0.9982925f, -0.037790783f, 0.0f,
    0.6229988f, 0.001786924f, 0.78222054f, 0.0f,
    25.339212f, -41.582745f, -197.50203f, 1.0f
};
How can I find the final [x, y] position of the object?
Does anyone have experience with this?
Update with more details:
In my project, everything is calculated inside a .so lib. I can only get the projection and model-view matrices. The lib detects an object via the camera, and now I want to know its 2D position on the screen so that I can add some touch events.
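My understanding of the math that should produce the screen position is the following (a minimal sketch; viewportWidth/viewportHeight are assumptions standing in for my GL viewport): multiply the point by the combined model-view-projection matrix, do the perspective divide, then map the result to window coordinates. Projecting the model-space origin (0, 0, 0) should give the object's anchor point, which boils down to the last column of the combined matrix.
//Sketch: project a 3D point to 2D screen coordinates.
//mvp must be projectionMatrix * modelViewMatrix (column-major, OpenGL layout).
void projectPoint(const float mvp[16], const float p[3],
                  int viewportWidth, int viewportHeight,
                  float* outX, float* outY)
{
    //clip = mvp * (p, 1), with column-major indexing m[col*4 + row]
    float clip[4];
    for (int i = 0; i < 4; ++i)
        clip[i] = mvp[0 + i] * p[0] + mvp[4 + i] * p[1] + mvp[8 + i] * p[2] + mvp[12 + i];
    //Perspective divide -> normalized device coordinates in [-1, 1]
    float ndcX = clip[0] / clip[3];
    float ndcY = clip[1] / clip[3];
    //Viewport transform -> window coordinates (GL origin is bottom-left)
    *outX = (ndcX * 0.5f + 0.5f) * viewportWidth;
    *outY = (ndcY * 0.5f + 0.5f) * viewportHeight;
}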
I have experience in this area.
What you are doing is multiplying the object by a vector the other way around, i.e. you are correlating the two sets of image data; it is nothing but correlation-based object detection. OpenGL ES 2.0 will not allow you to read back a shader variable.
The RGB data output (LCD interface) contains the row count (Hsync) and the column count (pixel count); this gives you the object position if you record the pixel at which the correlation is successful.
How many FPS are you getting?
All the best.

How to draw/render a Bullet Physics collision body/shape?

I have implemented the Bullet Physics engine in my Android program with the NDK (I am using Vuforia's imagetarget example for Android), and it is set up and working correctly. However, I would like to render/draw my collision boxes/planes to see my rigid bodies (btRigidBody) / collision shapes (btCollisionShape). I'm positive this is possible, but I can't find any tutorials on how to do it!
I have taken the hello world Bullet Physics tutorial from their wiki page and modified it to apply the transformations from the falling physics body to a 3D object I have in OpenGL ES 2.0, in order to view the collision bodies. Here is the code I am using to render the object:
void drawRigidBody(btRigidBody* body, QCAR::Matrix44F modelViewMatrix, unsigned int textureID)
{
    btTransform trans;
    body->getMotionState()->getWorldTransform(trans);
    LOG("sphere pos: (x %f , y %f, z %f)", trans.getOrigin().getX(), trans.getOrigin().getY(), trans.getOrigin().getZ());
    float physicsMatrix[16];
    trans.getOpenGLMatrix(physicsMatrix);
    SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                                 &modelViewMatrix.data[0]);
    QCAR::Matrix44F modelViewProjection, objectMatrix;
    SampleUtils::multiplyMatrix(&modelViewMatrix.data[0], physicsMatrix, &objectMatrix.data[0]);
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &objectMatrix.data[0], &modelViewProjection.data[0]);
    glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signVerts[0]);
    glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signNormals[0]);
    glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
                          (const GLvoid*) &signTexCoords[0]);
    glEnableVertexAttribArray(vertexHandle);
    glEnableVertexAttribArray(normalHandle);
    glEnableVertexAttribArray(textureCoordHandle);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                       (GLfloat*) &modelViewProjection.data[0]);
    glDrawArrays(GL_TRIANGLES, 0, signNumVerts);
}
EDIT: Looking at the code for btBoxShape, I noticed you can grab the box vertices and normals:
btVector3** vertices = wallShape->getVertices();
btVector3** normals = wallShape->getNormals();
but you can't grab a list of indices to draw the vertex points in a certain order!
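Update (an untested sketch): btBoxShape derives from btPolyhedralConvexShape, which can hand out the edges directly, so an index list may not even be needed for line drawing:
//Sketch: pull line segments straight from the shape's edge list,
//e.g. to feed a GL_LINES vertex buffer.
for (int i = 0; i < wallShape->getNumEdges(); ++i)
{
    btVector3 pa, pb;
    wallShape->getEdge(i, pa, pb);
    //append pa and pb to a line vertex buffer here
}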
If I recall correctly, this is not the proper way to draw debug shapes in Bullet. Did you read the user manual (PDF), page 16?
You are supposed to implement your own debug drawer class which implements btIDebugDraw, and in this class you implement the drawLine method.
You pass this debug drawer to Bullet with setDebugDrawer, and then enable it with world->getDebugDrawer()->setDebugMode(debugMode);
To draw the world, call world->debugDrawWorld();
This then calls your drawLine implementation numerous times, until a wireframe model of the physics world has been drawn.
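A minimal sketch of such a drawer (the std::vector batching and how the collected lines reach the GPU are my own choices, not Bullet API; only the btIDebugDraw overrides are Bullet's):
#include <vector>
#include "LinearMath/btIDebugDraw.h"

//Sketch: collects debug lines per frame; render m_lines with GL_LINES afterwards.
class GLDebugDrawer : public btIDebugDraw
{
    int m_debugMode;
    std::vector<float> m_lines; //x,y,z endpoints, two per segment

public:
    GLDebugDrawer() : m_debugMode(0) {}

    virtual void drawLine(const btVector3& from, const btVector3& to,
                          const btVector3& color)
    {
        m_lines.push_back(from.getX()); m_lines.push_back(from.getY()); m_lines.push_back(from.getZ());
        m_lines.push_back(to.getX());   m_lines.push_back(to.getY());   m_lines.push_back(to.getZ());
    }

    //The remaining virtuals can stay empty for a wireframe-only drawer
    virtual void drawContactPoint(const btVector3&, const btVector3&, btScalar, int, const btVector3&) {}
    virtual void reportErrorWarning(const char* warningString) {}
    virtual void draw3dText(const btVector3&, const char*) {}
    virtual void setDebugMode(int debugMode) { m_debugMode = debugMode; }
    virtual int getDebugMode() const { return m_debugMode; }
};
Registering and driving it each frame then looks like:
GLDebugDrawer drawer;
world->setDebugDrawer(&drawer);
drawer.setDebugMode(btIDebugDraw::DBG_DrawWireframe);
//each frame:
world->debugDrawWorld(); //fills the drawer with lines to render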

android - How do I draw a wireframe over an object in OpenGL ES 2.0?

I have a tetrahedron object created in openGL ES 2.0. What I'm trying to achieve is to show the actual wireframe of the polygon over its base color. Is there a method for achieving this effect?
Also, my tetrahedron is pink. How can I change its color?
1: First draw your object as usual, then draw it again (with a different shader, or it will be the same color as the object and thus invisible) using GLES20.GL_LINES, GL_LINE_LOOP or GL_LINE_STRIP.
You might want to scale the object up slightly when drawing the lines so that depth testing doesn't decide that the lines are behind the object and ignore them.
2: This is done in your shader: set gl_FragColor to a vec4 containing the RGBA values you want for a solid color.
In addition to what Jave said: instead of enlarging the whole object to prevent z-fighting artefacts (the optimal amount always depends on the object and the current view), you can also use polygon offset (via glPolygonOffset), whose major application is indeed wireframe overlays.
It basically works by slightly adjusting the depth values of the resulting fragments, which you cannot achieve any other way in ES, since you cannot write to the depth buffer (which I guess is the reason they didn't drop it from ES like they did in desktop GL 3+). So you render your solid object and your line version of the object using the same vertices, but with a polygon offset configuration for the solid object that slightly increases the resulting fragments' depth values (pushes them away from the viewer), thus always placing the triangles behind the lines in view space (or window space, actually). See here for further information.
Though the case of a tetrahedron might cause some problems because of its very sharp edges.
Although this has been here a little while (over a year now), this may also help. Note that you may not need to use another shader to achieve the desired result: you can disable the vertex attribute array, then specify and load constant vertex attribute data (the wireframe line color) to draw the wireframe. For example:
float coloredcube[] = { //x, y, z, r, g, b per vertex
     2,  2,  2,  1, 0, 0,
    -2,  2,  2,  0, 1, 0,
    -2, -2,  2,  0, 0, 1,
     2, -2,  2,  1, 1, 0,
     2,  2, -2,  1, 0, 1,
    -2,  2, -2,  0, 1, 1,
    -2, -2, -2,  1, 1, 1,
     2, -2, -2,  0, 0, 0
};
short indices[] = { 0, 1, 2, 0, 2, 3, //back
                    0, 4, 7, 0, 7, 3, //right
                    7, 6, 2, 7, 2, 3, //bottom
                    4, 5, 6, 4, 6, 7, //front
                    5, 1, 2, 5, 2, 6, //left
                    0, 1, 5, 0, 5, 4 //top
};
short lineindices[] = { 0, 1, 1, 2, 0, 2, 2, 3,
                        0, 4, 4, 7, 0, 7, 7, 3,
                        7, 6, 6, 2, 7, 2, 4, 5,
                        5, 6, 4, 6, 5, 2, 1, 5,
                        0, 5, 0, 3 };
glVertexAttribPointer(iPosition, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),coloredcube);
glEnableVertexAttribArray(iPosition);
glVertexAttribPointer(iColor, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),coloredcube + 3);
glEnableVertexAttribArray(iColor);
// offset the filled object to avoid the stitching that can arise when the wireframe lines are drawn
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 2.0f);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, indices);
glDisable(GL_POLYGON_OFFSET_FILL);
//Then disable the vertex colors and draw the wire frame with one constant color
glDisableVertexAttribArray(iColor);
glLineWidth(1.0f);
GLfloat color[4] = { 1.0f, 0.0f, 0.0f, 0.5f };
glVertexAttrib4fv(iColor, color);
glDrawElements(GL_LINES, 36, GL_UNSIGNED_SHORT, lineindices);
A similar example can be found in "The OpenGL ES 2.0 programming guide" pp 109,110.

glDrawArrays works in vm, crashes on phone

I am drawing a line in OpenGL ES from the Android NDK. I have been developing on VMs and just recently tried my application on a phone. The application runs fine on the VMs: a line is drawn. However, on a Motorola Droid the application just crashes, and on an HTC Incredible it just shows a black screen. I have verified that the numbers being passed to the function are correct. The application halts on the glDrawArrays(GL_LINES, 0, 2) call. The whole function looks like this:
void drawLine(GLfloat x1, GLfloat y1, GLfloat x2, GLfloat y2, GLfloat* color)
{
    GLfloat vVertices[] = {x1, y1,
                           x2, y2};
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glColor4f(color[0], color[1], color[2], color[3]);
    glVertexPointer(2, GL_FLOAT, 0, vVertices);
    glDrawArrays(GL_LINES, 0, 2);
    __android_log_write(ANDROID_LOG_ERROR, "to mama", "You drew arrays");
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
and the call to it looks like this:
drawLine(0.0f,0.0f,1.0f,0.0f,colorx);/*x is green*/
I can try glDrawElements next, but there is no reason glDrawArrays should not work (as far as I know).
You're enabling the color array (glEnableClientState(GL_COLOR_ARRAY)) without actually setting the color pointer via glColorPointer().
Either set the color pointer or don't enable the color array.
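With the second option, the function shrinks to this (a minimal sketch of the fix; glColor4f() already supplies the constant color, so the color array is never needed):
void drawLine(GLfloat x1, GLfloat y1, GLfloat x2, GLfloat y2, GLfloat* color)
{
    GLfloat vVertices[] = {x1, y1,
                           x2, y2};
    //Only the vertex array is enabled; the color comes from glColor4f()
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(color[0], color[1], color[2], color[3]);
    glVertexPointer(2, GL_FLOAT, 0, vVertices);
    glDrawArrays(GL_LINES, 0, 2);
    glDisableClientState(GL_VERTEX_ARRAY);
}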
