Colouring and texturing 3d shapes with OpenGL ES - android

I'm trying to learn some OpenGL ES for Android, entirely for fun. Graphics programming is very new to me, so sorry if this is a stupid question.
I've read a few examples and tutorials, and can create shapes, move the camera with touch events etc. Following Google's tutorial, I modified it to make some 3d shapes. I'm happy with all of this so far.
Currently my shapes have the same colour on all sides, and it isn't clear to me how to colour/texture them separately with the code I've got at the minute (I basically just built on the Google tutorials by experimenting).
All the other examples I look at use glDrawArrays rather than glDrawElements. Defining a shape this way seems a lot clumsier, but then you have a unique set of vertices which makes colouring (and normals) seem more obvious.
The draw method for my cube is below so you can see where I'm at. Do I have to do this differently to progress further? If so, doesn't that make glDrawElements a lot less useful?
public void draw(float[] mvpMatrix) {
    // Add program to OpenGL environment
    GLES20.glUseProgram(mProgram);

    // get handle to vertex shader's vPosition member
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");

    // Enable a handle to the vertices
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Prepare the cube coordinate data
    GLES20.glVertexAttribPointer(
            mPositionHandle, COORDS_PER_VERTEX,
            GLES20.GL_FLOAT, false,
            vertexStride, vertexBuffer);

    // get handle to fragment shader's vColor member
    mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");

    // Set color for drawing the Cube
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);

    // get handle to shape's transformation matrix
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");

    // Apply the projection and view transformation
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);

    // Draw the cube, tell it that vertices describe triangles
    GLES20.glDrawElements(
            GLES20.GL_TRIANGLES, drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}

To color a certain part of a shape you need to define the color per vertex. In your case that means you need another attribute, next to vPosition, which represents the color. In the shaders you will need to add this new attribute and a varying vector to pass the color to the fragment shader. No transformation is needed in the vertex shader, though. Also, do not forget to enable the color attribute array and to set the pointer...
Try doing this to begin with: specify a different color for each vertex and see what happens.
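For illustration, here is a minimal sketch of those changes in the style of the tutorial code above; the names a_Color, v_Color, mColorAttribHandle and colorBuffer are made up for this example:

// Vertex shader: add a per-vertex color attribute and pass it through a varying.
private final String vertexShaderCode =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "attribute vec4 a_Color;" +      // new per-vertex color
        "varying vec4 v_Color;" +        // handed on to the fragment shader
        "void main() {" +
        "  v_Color = a_Color;" +         // no transformation needed for the color
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";

// Fragment shader: use the interpolated color instead of a single uniform.
private final String fragmentShaderCode =
        "precision mediump float;" +
        "varying vec4 v_Color;" +
        "void main() {" +
        "  gl_FragColor = v_Color;" +
        "}";

// In draw(), next to the vPosition setup: enable the array and set the pointer.
mColorAttribHandle = GLES20.glGetAttribLocation(mProgram, "a_Color");
GLES20.glEnableVertexAttribArray(mColorAttribHandle);
GLES20.glVertexAttribPointer(
        mColorAttribHandle, 4,           // 4 floats (RGBA) per vertex
        GLES20.GL_FLOAT, false,
        4 * 4, colorBuffer);             // stride: 4 floats of 4 bytes each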
If you do this correctly you will see that each face of the cube now has a gradient color, and only the corners have exactly the color you specified. This is due to interpolation, and there is not much you can do about it except build the vertex buffer differently:
So this is where the difference between drawing arrays and drawing elements comes in. By using elements you have an index buffer that lowers the actual vertex buffer size by reusing the same vertices. But in the case of a cube with colored faces, those vertices can no longer be shared, because they are not the same. If you want a cube with 6 different colors, each covering a single face, you can see that each corner actually carries 3 different colors, one for each face it belongs to. Two vertices containing a position and a color must have both the same position and the same color to actually be the same...
So what must be done here is to create not 8 but 3*8 vertices, along with indices, to draw such a cube. Once you do, you will see that the number of indices and the number of vertices used is the same, so you gain nothing by using elements (though you still can if you wish), and it is easier to simply draw arrays.
The same situation arises on a cube for normals or texture coordinates; again you simply need to make 8*3 vertices.
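To make the duplication concrete, here is a fragment of what such data could look like for two faces that share a corner; the values are made up:

// The front face (red) and the right face (green) both touch the corner
// (1, 1, 1), but each face needs its own copy of it because the colors differ.
static final float[] cubeCoords = {
        1f,  1f,  1f,    // corner as part of the front face
        // ... the other 3 front-face vertices ...
        1f,  1f,  1f,    // the same position again, as part of the right face
        // ... the other 3 right-face vertices ...
};
static final float[] cubeColors = {
        1f, 0f, 0f, 1f,  // red for the front-face copy
        // ... red for the rest of the front face ...
        0f, 1f, 0f, 1f,  // green for the right-face copy
        // ... green for the rest of the right face ...
};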
Note, though, that a cube is just one of those unfriendly shapes for which you need to do this. If you instead draw a sphere with a large number of triangles, you would not need to inflate the vertex count to get nice texturing or lighting (using normals). In fact, tripling the sphere's vertex count and applying the same procedure as on the cube would make the shape look worse.
I suggest you play around a bit with these kinds of shapes, normals, colors and textures; you will learn most from your own experience.

Related

OpenGL ES 2.0 and Android: What dimensions should the normalMatrix have?

I am learning OpenGL ES 2.0, without ever having learned OpenGL or OpenGL ES 1.x.
I'm applying non-uniform scaling to my modelViewMatrix, so the tutorials tell me that I need to take special steps to compute a normalMatrix. In my application the modelViewMatrix has dimension 4x4.
Some tutorials say that for the normalMatrix I need to simply calculate transpose(inverse(modelViewMatrix)).
Other instructions say that I need to first take the upper left 3x3 sub-matrix of my modelViewMatrix and then compute transpose(inverse(submatrix)).
Is there any difference? Do they lead to the same result?
Right now I'm using method 1, and then in the vertex shader I extract a vec3 after applying the transformation:
vec3 vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
I have my doubts about this because I see strange effects in my diffuse lighting. I'm thinking about trying method 2, but the android.opengl.Matrix class does not offer methods for inverting or transposing 3x3 matrices...
My actual code in renderFrame() is as follows:
final float[] normalMatrix = new float[16];
final float[] invertedMatrix = new float[16];
final float[] unscaledModelViewMatrix = modelViewMatrix_Vuforia.getData();
Matrix.invertM(invertedMatrix, 0, unscaledModelViewMatrix, 0);
Matrix.transposeM(normalMatrix, 0, invertedMatrix, 0); // transposeM does not support in-place operation
// pass the normalMatrix to the shader
GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);
A 3x3 matrix is enough to transform the normals.
The primary purpose of using 4x4 matrices that operate on homogeneous coordinates for positions is that they can express translations. A 3x3 matrix applied to a 3-member vector cannot express translations. You can easily confirm that, because it will always map the origin back to the origin.
Since normals are vectors, and not positions, we specifically do not want to apply the translation part of the modelview matrix to them. They describe directions, and directions do not change when a translation is applied.
So the cleanest approach is to use a 3x3 matrix for normals, and set it to the inverse-transpose of the top-left 3x3 elements of the modelview matrix.
In the case where you only have rotations and uniform scaling in the modelview matrix (which does not apply to your specific situation), people sometimes use the same modelview matrix that they also use for positions. That is correct as long as the translation part is not applied, which can be done by setting the w component of the vector to zero for the multiplication:
vec3 transformedNormal = (modelViewMatrix * vec4(originalNormal, 0.0)).xyz
With the w component being zero, the matrix elements that encode the translation have no effect on the result, and this corresponds to using only the top-left 3x3 part of the matrix. As long as this is the same as the inverse-transpose of the matrix, which is the case for rotations and uniform scaling, this is a valid shortcut.
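Since android.opengl.Matrix only handles 4x4 matrices, a practical workaround is to invert and transpose the full 4x4 modelview matrix and then copy out its top-left 3x3 block. A sketch, assuming the shader declares a hypothetical uniform mat3 u_NormalMatrix whose location is in normalMatrixHandle:

final float[] inverse = new float[16];
final float[] inverseTranspose = new float[16];
Matrix.invertM(inverse, 0, modelViewMatrix, 0);
Matrix.transposeM(inverseTranspose, 0, inverse, 0);

// Both are column-major, so the columns of the 4x4 start at offsets 0, 4 and 8.
final float[] normalMatrix3x3 = {
        inverseTranspose[0], inverseTranspose[1], inverseTranspose[2],
        inverseTranspose[4], inverseTranspose[5], inverseTranspose[6],
        inverseTranspose[8], inverseTranspose[9], inverseTranspose[10],
};
GLES20.glUniformMatrix3fv(normalMatrixHandle, 1, false, normalMatrix3x3, 0);
// In the shader: vec3 vNormalEyespace = normalize(u_NormalMatrix * vertexNormal);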

Android OpenGL2.0 intersection between two textures

I'm making a game with OpenGL ES 2.0 and I want to check whether two sprites intersect, but I don't just need to check the intersection of two rectangles. I have two textured sprites; some parts of the textures are transparent, some are not. I need to check for intersection only on the non-transparent parts.
Example: http://i.stack.imgur.com/ywGN5.png
The easiest way to determine intersection between two sprites is by Bounding Box method.
Object 1 Bounding Box:
vec3 min1 = {Xmin, Ymin, Zmin}
vec3 max1 = {Xmax, Ymax, Zmax}
Object 2 Bounding Box:
vec3 min2 = {Xmin, Ymin, Zmin}
vec3 max2 = {Xmax, Ymax, Zmax}
You must precompute the bounding box by traversing through the vertex buffer array for your sprites.
http://en.wikibooks.org/wiki/OpenGL_Programming/Bounding_box
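Precomputing the box is a simple min/max scan over the positions; a sketch assuming a tightly packed float[] of x, y, z triples:

static void computeBounds(float[] verts, float[] outMin, float[] outMax) {
    outMin[0] = outMin[1] = outMin[2] = Float.POSITIVE_INFINITY;
    outMax[0] = outMax[1] = outMax[2] = Float.NEGATIVE_INFINITY;
    for (int i = 0; i < verts.length; i += 3) {
        for (int axis = 0; axis < 3; axis++) {
            outMin[axis] = Math.min(outMin[axis], verts[i + axis]);
            outMax[axis] = Math.max(outMax[axis], verts[i + axis]);
        }
    }
}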
Then during each render frame check if the bounding boxes overlap (compute on CPU).
a) First convert the Mins & Maxs to world space.
min1WorldSpace = modelViewMatrix * min1
b) Then check their overlap.
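The overlap test itself can look like this (a sketch; for flat 2D sprites you can drop the Z comparison):

static boolean boxesOverlap(float[] min1, float[] max1,
                            float[] min2, float[] max2) {
    // Axis-aligned boxes overlap exactly when their ranges overlap on every axis.
    return min1[0] <= max2[0] && max1[0] >= min2[0]   // X
        && min1[1] <= max2[1] && max1[1] >= min2[1]   // Y
        && min1[2] <= max2[2] && max1[2] >= min2[2];  // Z
}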
I need to check for intersection only on the non-transparent parts.
Checking this test case may be complicated depending on your scene. You may have to segment the transparent parts into a separate sprite and compute their bounding boxes.
In your example it looks like the transparent object is encapsulated inside an opaque object, so it's easy: just compute two bounding boxes.
I don't think there's a very elegant way of doing this with ES 2.0. ES 2.0 is a very minimal version of OpenGL, and you're starting to push the boundaries of what it can do. For example in ES 3.0, you could use queries, which would be very helpful in solving this nicely and efficiently.
What can be done in ES 2.0 is draw the sprites in a way so that only pixels in the intersection of the two end up producing color. This can be achieved with either using a stencil buffer, or with blending (see details below). But then you need to find out if any pixels were rendered, and there's no good mechanism in ES 2.0 that I can think of to do this. I believe you're pretty much stuck with reading back the result, using glReadPixels(), and then checking for non-black pixels on the CPU.
One idea I had to avoid reading back the whole image was to repeatedly downsample it until it reaches a size of 1x1. It would originally render to a texture, and then in each step, sample the current texture with linear sampling, rendering to a texture of half the size. I believe this would work, but I'm not sure if it would be more efficient than just reading back the whole image.
I won't provide full code for the proposed solution, but the outline looks like this; it uses blending to draw only the pixels in the intersection.
1. Set up an FBO with an RGBA texture attached as a color buffer. The size does not necessarily have to be the same as your screen resolution; it just needs to be big enough to give you enough precision for your intersection.
2. Clear the FBO with a black clear color.
3. Render the first sprite with only alpha output, and no blending:

   glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
   glDisable(GL_BLEND);
   // draw sprite 1

   This leaves the alpha values of sprite 1 in the alpha channel of the framebuffer.
4. Render the second sprite with destination alpha blending. The transparent pixels will need to have black in their RGB components for this to work correctly. If that's not already the case, change the fragment shader to produce pre-multiplied colors (multiply the rgb of the output by a):

   glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
   glBlendFunc(GL_DST_ALPHA, GL_ZERO);
   glEnable(GL_BLEND);
   // draw sprite 2

   This renders sprite 2 with color output only where the alpha of sprite 1 was non-zero.
5. Read back the result using glReadPixels(). The region being read needs to cover at least the bounding box of the two sprites.
6. Add up the RGB values of all the pixels that were read. There was overlap between the two sprites if the resulting color is not black.
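The readback and the non-black test could look roughly like this; x, y, w and h describe a region covering both sprites' bounding boxes:

// Needs java.nio.ByteBuffer and java.nio.ByteOrder.
ByteBuffer pixels = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

boolean intersect = false;
for (int i = 0; i < w * h * 4; i += 4) {
    // Any non-zero R, G or B means sprite 2 drew over a non-transparent
    // pixel of sprite 1, i.e. the opaque parts overlap.
    if ((pixels.get(i) & 0xFF) != 0
            || (pixels.get(i + 1) & 0xFF) != 0
            || (pixels.get(i + 2) & 0xFF) != 0) {
        intersect = true;
        break;
    }
}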

where to put the getting of uniform location and attribpointer? draw vs surface created?

I have some code that is pretty much the same as the sample on the Android dev website. I'm quite confused as to why some programmers or tutorials put the fetching of the uniform locations and the setting of the attribute pointer
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(
        mPositionHandle, COORDS_PER_VERTEX,
        GLES20.GL_FLOAT, false,
        vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
MyGLRenderer.checkGlError("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
MyGLRenderer.checkGlError("glUniformMatrix4fv");
// Draw the triangle
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
in the onDraw method. From what I understand, the draw method runs in a loop, so won't putting those calls there make the app slow? Others, on the other hand, put that code in onSurfaceCreated even though they have the same code. Which is the preferred way here?
Fetching the attribute and uniform locations in your initialization code is the way to go, as they won't change throughout the lifetime of the application. I suspect some tutorials fetch them in onDraw() mostly for simplicity. Performance-wise, it won't make any noticeable difference anyway.
I always set those handles in onSurfaceCreated. As you correctly note, the onDraw method runs in a continuous loop (unless you've set the mode to RENDERMODE_WHEN_DIRTY) and those handles aren't going to change between iterations since you compile the vertex/fragment shaders just once in onSurfaceCreated.
However, it actually makes very little difference and won't make your app slow if you do put it in onDraw. OpenGL is doing a huge amount of work in onDraw, such as applying transformations and rendering primitives. Setting those handles is trivial in comparison and the additional overhead is tiny and won't be noticeable.
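A minimal sketch of that split; buildProgram() is a stand-in for whatever shader compile/link code you already have:

// Imports: android.opengl.GLES20, android.opengl.GLSurfaceView,
// javax.microedition.khronos.egl.EGLConfig, javax.microedition.khronos.opengles.GL10.
public class MyGLRenderer implements GLSurfaceView.Renderer {
    private int mProgram;
    private int mPositionHandle;
    private int mColorHandle;
    private int mMVPMatrixHandle;

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        mProgram = buildProgram();
        // The locations are fixed once the program is linked, so fetch them once here.
        mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
        mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // Use the cached handles here; no glGet* calls needed in the loop.
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    private int buildProgram() {
        // Compile the vertex/fragment shaders and link them, exactly as before.
        throw new UnsupportedOperationException("plug in your existing shader setup");
    }
}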

Hide second-row transparent faces?

I'm trying to display a geometrically complex, semi-transparent (e.g. alpha = 0.5) object (terrain). When I render this object, its hidden front faces are also drawn (like a hill that actually lies behind another one).
I would like to see other objects behind my "terrain" object, but I don't want to see the hidden faces of the terrain itself (the second hill). So effectively I want to set the transparency for the whole object, not for individual faces.
Q: How can I hide the "hidden" front faces of a semi-transparent object?
I'm setting the transparency in the vertex shader by multiplying the color vector with the desired transparency:
fColor = vec4(vColor, 1.0);
fColor *= 0.5;
// fColor goes to fragment shader
GL_DEPTH_TEST is activated with GL_LEQUAL as depth function.
GL_BLEND is activated with GL_ONE, GL_ONE_MINUS_SRC_ALPHA as blending functions.
I tried deactivating depth writes with GLES20.glDepthMask(false); before drawing, but it doesn't make any difference.
I probably don't understand the right combination of depth buffer settings and blending functions.
Well, I think I've got it now:
Actually I can do without blending altogether. With the depth test switched on, only the foreground fragments are visible (the front hill of my terrain). With the multiplication in the vertex shader, the fragment shader draws these visible fragments with the desired transparency (the terrain as a whole becomes semi-transparent).
So: depth test on, blending off, colour multiplication in the vertex shader.
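Expressed as GLES20 state, what this answer describes is simply the following (a sketch of those settings, nothing more):

GLES20.glEnable(GLES20.GL_DEPTH_TEST);   // hidden terrain fragments are discarded
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDisable(GLES20.GL_BLEND);       // no blending needed
// draw the terrain; its vertex shader multiplies fColor by 0.5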

OpenGL ES2: Using FrameBuffer objects to render many shapes more quickly

I've got a program that draws upwards of 90 2D shapes w/ textures which the user can pick up and drag by touching the screen. There is a noticeable amount of choppiness, and DDMS tells me that the one method that takes up the most CPU time (~85%) is the draw() method. Since only 1 shape is actually moving and the other 89 are not, would it be possible/faster to render the 89 shapes to a texture using a FrameBuffer object and draw that texture on a shape that fills up the whole screen? If not, are there any other potential ways of speeding things up?
private void draw() {
    // Pass in the position information
    mCubePositions.position(0);
    GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize,
            GLES20.GL_FLOAT, false, 0, mCubePositions);
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Pass in the color information
    mCubeColors.position(0);
    GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize,
            GLES20.GL_FLOAT, false, 0, mCubeColors);
    GLES20.glEnableVertexAttribArray(mColorHandle);

    // Pass in the texture coordinate information
    mCubeTextureCoordinates.position(0);
    GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize,
            GLES20.GL_FLOAT, false, 0, mCubeTextureCoordinates);
    GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);

    // This multiplies the view matrix by the model matrix, and stores the
    // result in the MVP matrix (which currently contains model * view).
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // Pass in the modelview matrix.
    GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and
    // stores the result in the MVP matrix (which now contains model * view * projection).
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    // Pass in the combined matrix.
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

    // Draw the cube.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
Thanks in advance.
I got confused by the question referring to "cubes" when it meant quads, so this answer deals with the 3d case, which is probably more instructive anyway.
Combine the view and projection matrices into a ViewProj matrix. Then in the vert shader you do VertexPos * Model * ViewProj.
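With android.opengl.Matrix and the column-vector convention used in the question's code, that is one multiply per frame (mViewProjMatrix is a made-up field name):

// Combine once per frame instead of once per object.
Matrix.multiplyMM(mViewProjMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);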
Also you really need to batch. You should have a single big array with all your cubes in it, and another array with the transforms for each cube. Then you do a single draw call for all cubes. Consider converting it to use a Vertex Buffer Object. Draw calls are CPU intensive because they invoke a whole bunch of logic and memory copying etc. in the API behind the scenes. Game engines go to great lengths to minimise them.
How to make one draw call draw many things
Put all the different textures into a single texture (an "atlas"), and compensate by adjusting the UVs of each cube to look up the appropriate portion of the texture. Put all your model matrices into a contiguous array, and index into this array in your vertex shader e.g.
attribute vec3 a_position;
attribute vec2 a_texCoord;
attribute float a_modelIndex; // ES 2.0 attributes cannot be int; pass a float and cast below
attribute float a_UVlIndex;
uniform mat4 u_viewProj;
uniform mat4 u_model[90];
uniform vec2 u_UVOffset[16]; // Support 16 different textures in our atlas.
varying vec2 v_texCoord;
...
void main()
{
    gl_Position = u_viewProj * u_model[int(a_modelIndex)] * vec4(a_position, 1.0);
    v_texCoord = a_texCoord + u_UVOffset[int(a_UVlIndex)];
    ...
}
You can pack all your vertex data into one big array, so you end up doing GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * 90); But even better, since you are just drawing cubes all the time, you can re-use the same template vertex data for every cube; the model matrices take care of the rest (scale, rotation, translation). To do this, use glDrawElements instead of glDrawArrays and, assuming tri lists for simplicity, specify the 36 indices that reference the vertices making up a cube in your vertex array, then repeat those 36 indices 90 times to build your index array. The vertices should form a unit cube, centered on (0, 0, 0). This same "cube template" then gets modified by the model matrix in the vertex shader to create each visible "cube instance". The only things you need to change each frame are the model matrices and the texture UVs; a sketch of building such an index array follows.
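This sketch assumes each cube gets its own copy of the template vertices in the big vertex array (so that per-cube attributes such as a_modelIndex can differ between copies); VERTS_PER_CUBE is a made-up constant for the template's vertex count:

// 36 indices describing the 12 triangles of the cube template.
short[] template = { 0, 1, 2, 2, 1, 3 /* ... the remaining 10 triangles ... */ };

short[] indices = new short[template.length * 90];
for (int cube = 0; cube < 90; cube++) {
    for (int i = 0; i < template.length; i++) {
        // Shift each copy so it points into that cube's block of vertices.
        indices[cube * template.length + i] = (short) (template[i] + cube * VERTS_PER_CUBE);
    }
}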
glVertexAttribPointer() allows you to spew pretty much anything you like into your vertex shader, and it may be more efficient to have the model matrices as attributes rather than uniforms with some creative use of glVertexAttribPointer.
Mobile devices tend to be quite sensitive to being pixel bound. If your cubes are quite large on screen, you might be getting a lot of overdraw. The high CPU % (it is just a percentage, after all) could be a red herring, and you may be pixel bound on the GPU. A simple test for this is to make all your cubes very small and see if the framerate improves.
For reference, the S5570 has an Adreno 200 GPU.
