Poor shading problem in Android OpenGL ES

I set up diffuse lighting:
private float[] lightAmbient = { 0.5f, 0.5f, 0.5f, 1.0f };
private float[] lightDiffuse = { 1.0f, 1.0f, 1.0f, 1.0f };
private float[] lightPosition = { 0.0f, 0.0f, 2.0f, 1.0f };
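// (lightAmbientBuffer, lightDiffuseBuffer and lightPositionBuffer are FloatBuffers wrapping the arrays above; their creation is omitted here)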
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, lightAmbientBuffer);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, lightDiffuseBuffer);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPositionBuffer);
gl.glEnable(GL10.GL_LIGHT0);
gl.glShadeModel(GL10.GL_SMOOTH);
But I get triangulated shading or flat color on the cube, which is located at the origin (center) and rotated 45 degrees around the x and y axes, so the cube is directly in front of the light. Any reason why I am getting such poor results? Attached is the cube image.

With fixed-function lighting, OpenGL ES calculates the color at the vertices of each triangle and then interpolates it across the triangle. Ideally the shared vertices of the two triangles would compute the same colors, but a variety of situations can cause them not to.
It appears as though each cube face is modeled with two triangles. You could decompose each face into more triangles, but that adds memory storage and could slow down drawing.
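For illustration, a rough sketch of that decomposition (this is my code, not the poster's; the grid resolution n, the face size, and the z = +half face plane are arbitrary choices):
// Hypothetical helper: split one cube face (lying on the z = +half plane) into an n x n grid
// of cells, two triangles per cell, so lighting gets evaluated at many more vertices.
static float[] tessellatedFace(int n, float size) {
    float[] verts = new float[n * n * 6 * 3];   // 6 vertices per cell, 3 floats (x, y, z) each
    float step = size / n, half = size / 2f;
    int i = 0;
    for (int row = 0; row < n; row++) {
        for (int col = 0; col < n; col++) {
            float x0 = -half + col * step, x1 = x0 + step;
            float y0 = -half + row * step, y1 = y0 + step;
            float[] cell = {                    // two triangles covering this grid cell
                x0, y0, half,   x1, y0, half,   x1, y1, half,
                x0, y0, half,   x1, y1, half,   x0, y1, half };
            System.arraycopy(cell, 0, verts, i, cell.length);
            i += cell.length;
        }
    }
    return verts;
}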
You could also move to OpenGL ES 2.0 and write a shader that interpolates the lighting properly across the surface, but that requires rewriting your entire rendering pipeline: OpenGL ES doesn't let you mix fixed-function and shader-based rendering.
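If you do go the ES 2.0 route, a per-fragment diffuse lighting shader pair could look roughly like this (untested sketch; the attribute/uniform names and the 0.5 ambient term are assumptions, not code from the question):
static final String VERTEX_SHADER =
      "uniform mat4 u_MVPMatrix;                                          \n"
    + "uniform mat4 u_MVMatrix;                                           \n"
    + "attribute vec4 a_Position;                                         \n"
    + "attribute vec3 a_Normal;                                           \n"
    + "varying vec3 v_Position;                                           \n"
    + "varying vec3 v_Normal;                                             \n"
    + "void main() {                                                      \n"
    + "    v_Position = vec3(u_MVMatrix * a_Position);                    \n"
    + "    v_Normal   = vec3(u_MVMatrix * vec4(a_Normal, 0.0));           \n"
    + "    gl_Position = u_MVPMatrix * a_Position;                        \n"
    + "}                                                                  \n";

static final String FRAGMENT_SHADER =
      "precision mediump float;                                           \n"
    + "uniform vec3 u_LightPos;            // light position in eye space \n"
    + "varying vec3 v_Position;                                           \n"
    + "varying vec3 v_Normal;                                             \n"
    + "void main() {                                                      \n"
    + "    vec3 lightDir = normalize(u_LightPos - v_Position);            \n"
    + "    float diffuse = max(dot(normalize(v_Normal), lightDir), 0.0);  \n"
    + "    gl_FragColor = vec4(vec3(0.5 + 0.5 * diffuse), 1.0);           \n"
    + "}                                                                  \n";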

Related

GLES20: Why is texture data not linked with vertex indexes?

I was trying to render a cube and apply a single texture to all faces, while using as few vertices as I can by passing indices of each face's vertices. Example:
Vertices:
static final float FACE_VERTEX[] = {
    // front
    0.0f, 1.0f, 1.0f,   // 0
    0.0f, 0.0f, 1.0f,   // 1
    1.0f, 0.0f, 1.0f,   // 2
    1.0f, 1.0f, 1.0f,   // 3
    // back
    1.0f, 1.0f, 0.0f,   // 4 - 3
    1.0f, 0.0f, 0.0f,   // 5 - 2
    0.0f, 0.0f, 0.0f,   // 6 - 1
    0.0f, 1.0f, 0.0f,   // 7 - 0
};
Indexes:
static final int FACE_INDEX[] = {
    // front
    0, 1, 2,   0, 2, 3,
    // back
    4, 5, 6,   4, 6, 7,
    // left
    7, 6, 1,   7, 1, 0,
    // right
    3, 2, 5,   3, 5, 4,
    // top
    4, 7, 0,   4, 0, 3,
    // bottom
    1, 6, 5,   1, 5, 2
};
texture mapping data:
final int textureCoordinateData[] =
{
    // Front face
    0,0,  0,1,  1,1,  1,0,
    // back
    0,0,  0,1,  1,1,  1,0,
    // left
    0,0,  0,1,  1,1,  1,0,
    // right
    0,0,  0,1,  1,1,  1,0,
    // top
    0,0,  0,1,  1,1,  1,0,
    // bottom
    0,0,  0,1,  1,1,  1,0,
    // top
    0,0,  0,1,  1,1,  1,0,
    // bottom
    0,0,  0,1,  1,1,  1,0,
};
The texture is rendered on all sides of the cube except the top and bottom. Only the first row of pixels of the front face is rendered along the whole top and bottom faces (see screenshot):
I am using VBOs to store the vertex/index/texture data in the GPU, and rendering with
glDrawElements(GL_TRIANGLES, indexLength, GL_UNSIGNED_INT, 0);
However, this issue is because the texture data is mapped to the passed vertex data (which is kinda annoying for a cube model) and is not looked up through the index.
My questions are:
- Is there any way to keep the vertex data as small as possible and still map the texture via the index data?
- If I create 36 vertices (some repeated) to solve the texture mapping issue, but keep the correct indexes to render the cube, would it still be faster than using glDrawArrays? Or shall I go with glDrawArrays and trash the indexing data anyway?
A related question (that didn't answer my question):
OpenGL ES - texture map all faces of an 8 vertex cube?
If you read the first comment on the answer:
What do you mean you have to use 24 vertexes? If you have to duplicate the vertexes what is the point of using an index buffer then if you are still sending repeat data to the GPU?
There is really no reasonable way to draw a cube with texture coordinates with less than 24 vertices. That's because... it has 24 vertices, when using the definition of what forms a vertex in OpenGL, which is each unique combination of vertex attributes (position, texture coordinates, normal, etc).
You might be able to reduce the number slightly with only positions and texture coordinates, since texture coordinates in your case are only combinations of 0 and 1 values. If you don't care about the orientation of the texture on each face, you could for example have 1 vertex that uses (0, 0) for the texture coordinates of all 3 adjacent faces. Of course you can't do that for all vertices, but you could trim down the number of vertices somewhat.
So is using indices worth it? As you already found by now, you can easily stick with 24 vertices when using indices, but need 36 vertices with the most straightforward use of glDrawArrays(), where you use a single draw call with GL_TRIANGLES as the primitive type. So it does reduce the number of vertices.
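To make the 24-vertex layout concrete, here is just the front face using the positions and texture coordinates from the question (the interleaved layout is my choice); the other five faces repeat the pattern with their own four vertices:
static final float FRONT_FACE[] = {
    // x,    y,    z,     u,    v
    0.0f, 1.0f, 1.0f,  0.0f, 0.0f,   // 0
    0.0f, 0.0f, 1.0f,  0.0f, 1.0f,   // 1
    1.0f, 0.0f, 1.0f,  1.0f, 1.0f,   // 2
    1.0f, 1.0f, 1.0f,  1.0f, 0.0f,   // 3
};
static final short FRONT_INDEX[] = { 0, 1, 2,   0, 2, 3 };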
You could reduce it to 24 vertices with glDrawArrays() as well, if you make a separate draw call for each face, and draw the face using the GL_TRIANGLE_STRIP primitive type with 4 vertices each. This will result in 6 draw calls, though, and having many small draw calls is not desirable either.
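For example, assuming the 24 vertices are stored face by face in triangle-strip order, the per-face draw calls could look like this (hypothetical sketch):
for (int face = 0; face < 6; face++) {
    // 4 vertices per face, drawn as one small triangle strip
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, face * 4, 4);
}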
Even if it might look questionable in the case of a cube, index buffers are highly useful in general. A cube is just such a small and simple shape that the way you send the vertices won't make much of a difference anyway.
In most use cases, you will have much more complex shapes, with many more vertices that often define smooth surfaces. In this case, the same vertex (with the same normal and texture coordinates) is mostly shared by multiple triangles. Sketching part of a regular mesh:
 ____________
|  /|  /|  /|
| / | / | / |
|/__|/__|/__|
|  /|  /|  /|
| / | / | / |
|/__|/__|/__|
you can see that the interior vertices are shared by 6 triangles each. So for a relatively large, smooth surface, you can typically reduce the number of vertices by about a factor of 6 by sharing vertices, which is what using an index array allows you to do. Reducing memory usage by almost a factor of 6 can be a substantial gain if your geometry is sufficiently large.
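As a back-of-the-envelope illustration of that saving (my numbers, not from the answer):
int n = 100, m = 100;                                // grid of n x m quads, 2 triangles each
int verticesWithoutIndices = n * m * 2 * 3;          // 60,000 vertices sent, mostly duplicates
int uniqueVerticesWithIndices = (n + 1) * (m + 1);   // 10,201 vertices, plus 60,000 small indices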

Move pixels around in OpenGL-ES (Android)

I have a texture that I can render in OpenGL-ES with an orthogonal identity matrix:
gst_gl_shader_set_uniform_matrix_4fv(shader, "u_transformation", 1, FALSE, identity_matrix);
I want to move "the pixels around": the top half goes to the left and the bottom half goes to the right, as shown in the image below. Is there an "easy" way to do that? I'm on Android.
Based on this related answered question, How to crop/clip in OpenGL using u_transformation, I was able to keep either the top part 'a' or the bottom part 'e'. Would there be a way to do a "double gst_gl_shader_set_uniform_matrix_4fv" after "cutting" the scene in two?
The transformation that you want here cannot be represented by a transformation matrix. Matrices can only represent certain classes of transformations. In 3D space:
A 3x3 matrix represents a linear transformation. Typical examples include rotation, scaling, mirroring, shearing.
A 4x3 matrix represents an affine transformation. On top of the above, this includes translations.
If you extend the 3D space to homogeneous coordinates with 4 components, a 4x4 matrix can represent additional classes of transformations, like projections.
The transformation in your sketch is none of the above. So applying a matrix to your vertices will not be able to do this.
So what can you do? Two options come to mind:
If you can easily draw the two parts (top/bottom, left/right) separately, you can obviously do that, and simply change the transformation between rendering the two parts.
Apply the logic in your shader code.
For option 2, you could do this either in the vertex or fragment shader. If you have no primitives that cross the boundary between the two parts, handling it in the vertex shader would be more efficient. Otherwise, similar logic can be used in the fragment shader.
Sketching the critical parts for the vertex shader case, let's say you currently have the following that gives you the arrangement in the left side of your sketch:
// Calculate output position and store it in variable "pos".
gl_Position = pos;
To get the second arrangement, the logic could look like this (completely untested...):
if (pos.y > 0.0) {
    gl_Position = vec4(0.5 * pos.x - 0.5, 2.0 * pos.y - 1.0, pos.zw);
} else {
    gl_Position = vec4(0.5 * pos.x + 0.5, 2.0 * pos.y + 1.0, pos.zw);
}
The idea is that you check whether the vertex is in the top or bottom half, and scale/shift it accordingly to map the top half of the coordinate space into the left half, and the bottom half of the coordinate space into the right half.
This could be streamlined some more by replacing the conditional with a sign operation:
float s = sign(pos.y);
gl_Position = vec4(0.5 * pos.x - s * 0.5, 2.0 * pos.y - s, pos.zw);
Some more care will be needed if pos.w is not 1.0, which happens if you e.g. applied a perspective projection to your vertices. In that case, you'll have to incorporate the division by w in the calculations above.
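Folding that division by w back in could look like this (again untested; written as an Android-style shader string, with pos still holding the clip-space position):
static final String SPLIT_SNIPPET =
      "float s = sign(pos.y / pos.w);                      \n"
    + "gl_Position = vec4(0.5 * pos.x - s * 0.5 * pos.w,   \n"
    + "                   2.0 * pos.y - s * pos.w,         \n"
    + "                   pos.z, pos.w);                   \n";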
The formulas described in Reto's answer only 'semi' work, as they produce either the "a" on the left or the "e" on the right, but not both at the same time.
The solution I found is to double the number of vertices and indices and play around with the vertex coordinates like this:
static const GLfloat vertices[] = {
    // x,     y,    z,    u,    v
     1.0f,  1.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  1.0f, 0.0f, 0.0f, 0.0f,
     0.0f, -1.0f, 0.0f, 0.0f, 0.5f,
     1.0f, -1.0f, 0.0f, 1.0f, 0.5f,
     0.0f,  1.0f, 0.0f, 1.0f, 0.5f,
    -1.0f,  1.0f, 0.0f, 0.0f, 0.5f,
    -1.0f, -1.0f, 0.0f, 0.0f, 1.0f,
     0.0f, -1.0f, 0.0f, 1.0f, 1.0f
};
static const GLushort indices[] = { 0, 1, 2, 0, 2, 3, 4, 5, 6, 4, 6, 7 };

How do I rotate a triangle around its vertex located at (0,0,0) in OpenGL 2

I'm trying to make a hexagon with 6 triangles using rotation and translation. Rather than making multiple translate calls, I instead want to translate the triangle downward once and rotate around the Z axis at 60 degrees six times (my sketch may help with that explanation: http://i.imgur.com/SrrXcA3.jpg). After repeating the drawTriangle() and rotate() methods six times, I should have a hexagon.
Currently my code looks like this:
public void onDrawFrame(GL10 unused)
{
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);   // start by clearing the screen for each frame
    GLES20.glUseProgram(mPerVertexProgramHandle); // tell OpenGL to use the shader program we've compiled

    // Get pointers to the program's variables. Instance variables so we can break apart code
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "uMVPMatrix");
    mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aPosition");
    mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aColor");

    // Prepare the model matrix!
    Matrix.setIdentityM(mModelMatrix, 0); // start modelMatrix as identity (no transformations)
    Matrix.translateM(mModelMatrix, 0, 0.0f, -0.577350269f, 0.0f); // shift the triangle down the y axis by -0.577350269f so that its top point is at 0,0,0
    drawTriangle(mModelMatrix); // draw the triangle with the given model matrix

    Matrix.rotateM(mModelMatrix, 0, 60f, 0.0f, 0.0f, 1.0f);
    drawTriangle(mModelMatrix);
}
Here's my problem: it appears my triangle isn't rotating around (0,0,0), but instead it rotates around the triangle's center (as shown in this picture: http://i.imgur.com/oiLFSCE.png).
Is it possible to rotate the triangle around (0,0,0), where its vertex is located?
Are you really sure that your constant -0.577350269f is the correct value for the triangle's center?
Also, your code looks unfinished (you fetch an MVP matrix handle but never use it in the code); could you provide more information?

How can I draw a black point at the (0,0,-1) OpenGL coordinate of the screen?

I'm drawing a cube and checking rotation problems with it, but for this I need to draw a point at the (0,0,-1) OpenGL coordinate of the screen. I'm using perspective projection, MyGLSurfaceView, and Android 1.5 with OpenGL ES 1.x.
How can I draw a black or white point at the (0,0,-1) OpenGL coordinate of the screen?
If you want to be able to draw directly in window space then the easiest thing would be to load modelview and projection temporarily with the identity matrix and draw a GL_POINT with the location that you need. So that'd be something like:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// draw the point here; specifics depending on whether you
// favour VBOs, VBAs, etc
// e.g. (assuming you don't have any client state enabled
// on entry and don't care about leaving the vertex array
// enabled on exit)
GLfloat vertexLocation[] = {0.0f, 0.0f, -1.0f};
glColor4f(0.0f, 0.0f, 0.0f, 1.0f);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertexLocation);
glDrawArrays(GL_POINTS, 0, 1);
// end of example to plot a GL_POINT
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
// and possibly restore yourself to some other matrix mode
// if, atypically, the rest of your code doesn't assume modelview
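Since the question targets Android 1.5 with OpenGL ES 1.x, a rough GL10/Java equivalent of the same steps might be (my untested translation; gl is the GL10 instance and the buffer setup is mine):
FloatBuffer pointBuffer = ByteBuffer.allocateDirect(3 * 4)   // 3 floats, 4 bytes each
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
pointBuffer.put(new float[] { 0.0f, 0.0f, -1.0f }).position(0);

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();

gl.glColor4f(0.0f, 0.0f, 0.0f, 1.0f);                        // black point
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, pointBuffer);
gl.glDrawArrays(GL10.GL_POINTS, 0, 1);

gl.glPopMatrix();                                            // restore projection
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glPopMatrix();                                            // restore modelview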

draw opengl texture at full screen

I want to draw an OpenGL texture at full screen.
(texture: 128x128 ===> device screen: 320x480)
The code below works, but the texture is small.
I have to use only the glFrustumf function (not glOrthof).
How can I draw the texture at full screen size?
// this is android source code
float ratio = (float) screenWidth / screenHeight;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
GLU.gluLookAt(gl, 0.0f, 0.0f, -2.5f, // eye
0.0f, 0.0f, 0.0f, // center
0.0f, 1.0f, 0.0f); // up
// draw blah blah
Why do you have to use glFrustum only? Switching to glOrtho for drawing the background, then switching to glFrustum for regular drawing would be the canonical solution.
BTW: gluLookAt must happen in the modelview matrix, not in the projection matrix like you do right now. As it stands your code is broken and if you were a student in one of my OpenGL classes I'd give you negative points for this cardinal error.
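A minimal sketch of that canonical approach, reusing the variables from the question (untested; the background quad drawing itself is omitted):
// 1. Background pass: orthographic projection so the textured quad fills the screen
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(-1, 1, -1, 1, -1, 1);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// ... draw a quad from (-1,-1) to (1,1) with the 128x128 texture ...

// 2. Scene pass: perspective projection, with gluLookAt applied to the modelview matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0.0f, 0.0f, -2.5f,   // eye
              0.0f, 0.0f, 0.0f,        // center
              0.0f, 1.0f, 0.0f);       // up
// ... draw blah blah ...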
