I have a problem concerning a moving point light in my GLES 2.0 Android app:
I have an instance of Person walking around on a large surface. This Person needs a light above its head so that a small area around it is illuminated properly. Since the light instance has the person instance as its parent, its position in world space moves exactly as the person moves (with a y offset of +4). But every time I start the app, the light is not positioned on top of the person, and it does not seem to move exactly the way the person moves (or at least it looks like it does not). It appears to sit to the left of and in front of the person, even though the light and the person share the same x and z values.
The Person is a cube (no complex model yet).
Here is the code of the cube's draw method:
public void draw(float[] pPMatrix, float[] pVMatrix)
{
float[] MVPMatrix = new float[16];
Matrix.setIdentityM(getParent().getModelMatrix(),0);
Matrix.translateM(getParent().getModelMatrix(),0,mXLL, mYLL, mZLL);
Matrix.multiplyMM(MVPMatrix, 0, pVMatrix, 0, getParent().getModelMatrix(), 0);
Matrix.multiplyMM(MVPMatrix, 0, pPMatrix, 0, MVPMatrix, 0);
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// .....
GLES20.glUniformMatrix4fv(LightingProgram.getMVPMatrixHandle(), 1, false, MVPMatrix, 0);
GLES20.glUniformMatrix4fv(LightingProgram.getMVMatrixHandle(), 1, false, pVMatrix, 0);
LightObject lo = mParent.getWorld().getLightObjects().get(0);
Matrix.multiplyMV(lo.getLightPosInEyeSpace(), 0, pVMatrix, 0, lo.getLightPosInWorldSpace(), 0 );
GLES20.glUniform3f(LightingProgram.getLightPosHandle(), lo.getLightPosInEyeSpace()[0], lo.getLightPosInEyeSpace()[1], lo.getLightPosInEyeSpace()[2]);
// Draw the triangle
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, mVertexCount);
}
Vertex Shader code is:
uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
attribute vec2 a_TexCoordinate;
varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal;
varying vec2 v_TexCoordinate;
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the color.
v_Color = a_Color;
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
Fragment Shader code is:
precision mediump float;
uniform vec3 u_LightPos;
uniform sampler2D u_Texture;
varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal;
varying vec2 v_TexCoordinate;
void main()
{
float distance = length(u_LightPos - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);
float diffuse = max(dot(v_Normal, lightVector), 0.0);
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));
// Add ambient lighting
diffuse = diffuse + 0.25;
gl_FragColor = (diffuse * v_Color * texture2D(u_Texture, v_TexCoordinate));
}
I think it has something to do with the way I pass in the light object's position... but I cannot figure out what's the correct way.
Thanks in advance... :-)
==========================================
!!EDIT!! I uploaded a video of the problem:
https://dl.dropboxusercontent.com/u/17038392/opengl_lighting_test.mp4 (2MB)
Every shape in this scene is a cube. When the person stands in the middle of the room, the light has no effect on the floor. If the person moves to the upper corner, the light moves to the middle of the room.
Now here is the very strange thing: since the light is positioned above the person it illuminates the yellow crates just fine when the person is in the middle of the room. HOW ON EARTH CAN THIS HAPPEN? ;-)
==========================================
EDIT 2:
Ok, so I tried to do what you said. But, being a rookie, I am having trouble doing it correctly:
My draw method for any cube instance:
public void draw(float[] pPMatrix, float[] pVMatrix)
{
float[] MVPMatrix = new float[16];
float[] normalVMatrix = new float[16];
float[] normalTransposed = new float[16];
// Move object
Matrix.setIdentityM(getParent().getModelMatrix(),0);
Matrix.translateM(getParent().getModelMatrix(),0,mXLL, mYLL, mZLL);
Matrix.multiplyMM(MVPMatrix, 0, pVMatrix, 0, getParent().getModelMatrix(), 0);
Matrix.multiplyMM(MVPMatrix, 0, pPMatrix, 0, MVPMatrix, 0);
// create normal matrix by inverting and transposing the modelmatrix
Matrix.invertM(normalVMatrix, 0, getParent().getModelMatrix(), 0);
Matrix.transposeM(normalTransposed, 0, normalVMatrix, 0);
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// ============================
// POSITION
// ============================
getVertexBuffer().position(0);
GLES20.glVertexAttribPointer(LightingProgram.getPositionHandle(), COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, getVertexBuffer());
GLES20.glEnableVertexAttribArray(LightingProgram.getPositionHandle());
// ============================
// COLOR
// ============================
getColorBuffer().position(0);
GLES20.glVertexAttribPointer(LightingProgram.getColorHandle(), COLOR_DATA_SIZE, GLES20.GL_FLOAT, false, 0, getColorBuffer());
GLES20.glEnableVertexAttribArray(LightingProgram.getColorHandle());
// ============================
// NORMALS
// ============================
// Pass in the normal information
if(LightingProgram.getNormalHandle() != -1)
{
getNormalBuffer().position(0);
GLES20.glVertexAttribPointer(LightingProgram.getNormalHandle(), NORMAL_DATA_SIZE, GLES20.GL_FLOAT, false, 0, getNormalBuffer());
GLES20.glEnableVertexAttribArray(LightingProgram.getNormalHandle());
checkGLError("normals");
}
// ============================
// TEXTURE
// ============================
// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, getTextureHandle());
// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
//GLES20.glUniform1i(mTextureUniformHandle, 0);
GLES20.glUniform1i(LightingProgram.getTextureUniformHandle(), 0);
getTextureBuffer().position(0);
GLES20.glVertexAttribPointer(LightingProgram.getTextureCoordinateHandle(), TEXTURE_DATA_SIZE, GLES20.GL_FLOAT, false, 0, getTextureBuffer());
GLES20.glEnableVertexAttribArray(LightingProgram.getTextureCoordinateHandle());
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(LightingProgram.getMVPMatrixHandle(), 1, false, MVPMatrix, 0);
GLES20.glUniformMatrix4fv(LightingProgram.getMVMatrixHandle(), 1, false, pVMatrix, 0);
GLES20.glUniformMatrix4fv(LightingProgram.getNormalHandle(), 1, false, normalTransposed, 0);
LightObject lo = mParent.getWorld().getLightObjects().get(0);
Matrix.multiplyMV(lo.getLightPosInEyeSpace(), 0, pVMatrix, 0, lo.getLightPosInWorldSpace(), 0 );
GLES20.glUniform3f(LightingProgram.getLightPosHandle(), lo.getLightPosInEyeSpace()[0], lo.getLightPosInEyeSpace()[1], lo.getLightPosInEyeSpace()[2]);
// Draw the triangle
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, mVertexCount);
// Disable vertex array
GLES20.glDisableVertexAttribArray(LightingProgram.getPositionHandle());
GLES20.glDisableVertexAttribArray(LightingProgram.getTextureCoordinateHandle());
if(LightingProgram.getNormalHandle() != -1)
GLES20.glDisableVertexAttribArray(LightingProgram.getNormalHandle());
GLES20.glDisableVertexAttribArray(LightingProgram.getColorHandle());
checkGLError("end");
}
So, my updated vertex shader code now is:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
uniform mat4 u_NMatrix; // combined normal/view matrix ???
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec4 a_Color; // Per-vertex color information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec4 v_Color; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the color.
v_Color = a_Color;
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_NMatrix * vec4(a_Normal, 0.0)); // THIS does not look right...
//v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
I thought that maybe if I inverted and transposed the model matrix and saved it as a normal matrix, that might already fix the issue.
But I think I got it completely wrong...
It looks a bit messy: some code is missing, and you need to start commenting your code more often, even if only for the SO question.
I am not sure what the draw method's input parameters are, but I assume that pPMatrix is a projection matrix and pVMatrix is a view matrix.
Then in the code there is this strange line: Matrix.translateM(getParent().getModelMatrix(), 0, mXLL, mYLL, mZLL);, which I assume moves the person to its current position. I would expect this to be part of the view matrix if you were looking from the person's perspective. In any case, this translation is not included in the value you use for the light. And what does getLightPosInWorldSpace actually return?
If we break it down a bit: you have a character whose position is defined by its model matrix, which describes its location and orientation in your scene. The projection matrix is defined by your view size and field of view. The view matrix is computed from the person's orientation, or from wherever you are looking at the scene (a lookAt procedure is the most common way).
No matter how you define all of these, the light position depends only on the person's model matrix. So you need to multiply the light offset (0, 4, 0) by the character's model matrix. This might be what you intended to do in Matrix.multiplyMV(lo.getLightPosInEyeSpace(), 0, pVMatrix, 0, lo.getLightPosInWorldSpace(), 0);.
By doing this on the CPU you can actually verify that the resulting light position is correct for any position of the character.
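For instance, a minimal sketch with android.opengl.Matrix could look like the following; characterModelMatrix is a placeholder for whatever your Person object exposes, not your actual API:
// Returns the light's world-space position for a given character model matrix.
// Log the result each frame to verify it tracks the character.
public static float[] lightWorldPosition(float[] characterModelMatrix) {
    float[] offset = { 0f, 4f, 0f, 1f }; // 4 units above the head; w = 1 marks a point
    float[] world = new float[4];
    Matrix.multiplyMV(world, 0, characterModelMatrix, 0, offset, 0);
    return world;
}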
Now, what you need to pass to your shader (given what you use) are actually the MVP matrix and the model matrix, next to the computed light position. The MV matrix should not be used here, as the eye position does not affect the lighting in your case.
v_Position must be the fragment's position in scene coordinates, so the vertex must be multiplied by the model matrix only. This basically gives you the coordinate of the fragment (pixel) in the scene, not in the view. Use this position to get the distance from the light and continue the computation as you already do.
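In practice that means adding a model-matrix uniform to the shaders (the name u_ModelMatrix below is hypothetical, not something you already have), uploading it next to the MVP matrix, and computing v_Position = vec3(u_ModelMatrix * a_Position) in the vertex shader:
// modelMatrixHandle is assumed to come from
// GLES20.glGetUniformLocation(mProgram, "u_ModelMatrix")
GLES20.glUniformMatrix4fv(modelMatrixHandle, 1, false, getParent().getModelMatrix(), 0);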
Then there seems to be an issue with your normals. Transforming the normals is not done by multiplying them with the model matrix, nor with the model-view matrix. Imagine a scenario where you have a normal (0,1,0) and you multiply it with a matrix that has a translation (10, 0, 0); the resulting normal is then (10, 1, 0), which even when normalized makes no sense; the result must still be (0,1,0), since no rotations were applied. Please look into how to generate the matrix that transforms your normals so that all possible edge cases are covered. Note, though, that for most situations you can use the top-left 3x3 part of the (model) matrix to transform them, followed by normalization (this fails in cases where the normalization should not be done, and in cases where the model matrix is not scaled equally along each of the axes).
EDIT:
To look at it a bit more theoretically: what you are dealing with here involves the 3 matrices we usually use: model, view and projection.
The projection matrix defines the projection of shapes onto your screen. In your case it should depend on your view's aspect ratio and on the field of view you want to show. It should never affect the lighting, where shapes are positioned, or anything beyond how all of these are mapped onto your screen.
The view matrix is usually used to define how you are looking into the scene: in your case, where you are looking at the scene from and in which direction. You should probably use a look-at procedure for this one. This matrix does not affect the lighting or any of the objects' positions, just how you look at the objects.
The model matrix is the one used only to position a specific object in the scene. The reason we use it is so that you can have a single vertex buffer for all drawn instances of an object. So in your case you have 3 cubes which all share the same vertex buffer but are drawn in 3 different places because their model matrices are different.
Now, your character is no different from any other object in your scene: it has a vertex buffer and it has a model matrix. If you wanted to switch to a first-person view from what you have, you would only need to multiply the natural base vectors by the character's model matrix and then use those vectors in a look-at method to construct a new view matrix, the base vectors being location (0,0,0), forward (0,0,1) and up (0,1,0). Once these are transformed, you can construct the "center" as location + forward. Doing so still makes no difference to how the lighting works or how the objects are illuminated; your view of the scene should have no effect on that.
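As a rough sketch of that idea (again, characterModelMatrix is a placeholder for your own accessor):
// Builds a first-person view matrix from the character's model matrix.
// Uses android.opengl.Matrix; w = 1 marks a point, w = 0 marks a direction.
public static float[] firstPersonView(float[] characterModelMatrix) {
    float[] eye = new float[4], forward = new float[4], up = new float[4];
    Matrix.multiplyMV(eye,     0, characterModelMatrix, 0, new float[]{ 0f, 0f, 0f, 1f }, 0);
    Matrix.multiplyMV(forward, 0, characterModelMatrix, 0, new float[]{ 0f, 0f, 1f, 0f }, 0);
    Matrix.multiplyMV(up,      0, characterModelMatrix, 0, new float[]{ 0f, 1f, 0f, 0f }, 0);
    float[] view = new float[16];
    Matrix.setLookAtM(view, 0,
            eye[0], eye[1], eye[2],
            eye[0] + forward[0], eye[1] + forward[1], eye[2] + forward[2], // center = eye + forward
            up[0], up[1], up[2]);
    return view;
}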
So your light is attached to the character, offset by some vector offset (0, 4, 0). This means the light's position in the scene is that same vector multiplied by your character's model matrix, since that matrix is the one that defines the character's position in your scene. You could even interpret it as the light being at (0, 0, 0), first translated by a translation matrix T built from the offset and then moved to the character's location by the model matrix. This is important because you could, for instance, also insert a rotation matrix R (say, around the X axis) between the model matrix and T, which would make the light orbit around your character.
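A sketch of that combination, under the column-vector convention that android.opengl.Matrix uses (so the model matrix is applied last):
// Orbits the light around the character: lift it by the (0, 4, 0) offset, rotate
// that offset around the character's X axis, then place the result at the character.
public static float[] orbitingLightWorldPosition(float[] characterModelMatrix, float angleDegrees) {
    float[] t = new float[16], r = new float[16], rt = new float[16], m = new float[16];
    Matrix.setIdentityM(t, 0);
    Matrix.translateM(t, 0, 0f, 4f, 0f);               // T: the (0, 4, 0) offset
    Matrix.setRotateM(r, 0, angleDegrees, 1f, 0f, 0f); // R: rotation around X
    Matrix.multiplyMM(rt, 0, r, 0, t, 0);              // lift first, then spin
    Matrix.multiplyMM(m, 0, characterModelMatrix, 0, rt, 0); // finally place at the character
    float[] world = new float[4];
    Matrix.multiplyMV(world, 0, m, 0, new float[]{ 0f, 0f, 0f, 1f }, 0);
    return world;
}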
So, assuming you have all these matrices for all objects and you have a light position, you can start looking into the shaders. You need to construct the whole MVP matrix, as that is the one that maps the objects onto your screen, so it is used only for gl_Position. For the actual pixel position in the scene you need to multiply the vertex by the model matrix only.
Then the first issue: you need to transform the normals as well. You construct the matrix for them by inverting and then transposing the model matrix, and you multiply your normals with this matrix instead of the model matrix.
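Assuming a dedicated mat4 uniform for it in the vertex shader (the handle name below is a placeholder for a glGetUniformLocation result, not an existing part of your LightingProgram), the construction could look like this:
// Builds the normal matrix as the inverse-transpose of the model matrix and uploads it.
// Uses android.opengl.Matrix and android.opengl.GLES20.
public static void uploadNormalMatrix(float[] modelMatrix, int normalMatrixHandle) {
    float[] inverted = new float[16];
    float[] normalMatrix = new float[16];
    if (Matrix.invertM(inverted, 0, modelMatrix, 0)) {
        Matrix.transposeM(normalMatrix, 0, inverted, 0);
        GLES20.glUniformMatrix4fv(normalMatrixHandle, 1, false, normalMatrix, 0);
    }
}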
Now things become pretty simple. You have computed the light position on the CPU and sent it as a uniform, you have the fragment's in-scene position, and you have its normal. With these you can compute the lighting you already have in your fragment shader.
I did lie a bit about the view matrix having no effect on the lighting. It does affect it, but not in your case: you have not implemented any specular highlight, so that lighting component is not added. But when (if) you add it, it is easier to simply pass your eye position as another uniform than to use a view matrix to get the same result.
It is hard to tell from the code you posted everything that is in conflict, but at the very least it seems that the light position and the normals are both transformed incorrectly.
EDIT: about debugging
When working with OpenGL you need to be inventive when it comes to debugging. Judging from your results there are still many things that may be wrong. It is pretty hard to check them on the CPU or with logs; the best way is usually to modify the shaders so that the results give you additional information.
To debug the fragment position in the scene:
As previously mentioned, the fragment's in-scene position is not affected by the perspective you are looking from, so it should not depend on the view or projection matrix. In your fragment shader this is the value stored in v_Position.
You need to set the boundaries in which you are going to test, and these depend on your scene size (where you are putting your walls and cubes...).
You said that your walls are offset by 25, so it is safe to assume that your scene will be in the range [-30, 30]. You will want to debug each axis separately, so for instance let's take a red value at -25 and a green value at 25. To test the Y coordinate (usually the height) you simply use gl_FragColor = vec4(1.0 - (v_Position.y + 30.0) / 60.0, (v_Position.y + 30.0) / 60.0, 0.0, 1.0);. This should show a nice gradient on all objects, where the lower their Y value, the more red the color. Then you do the same for the 2 other components, and for each of them you should get a nice gradient in its own direction. The gradient should be visible equally throughout the whole scene, not just within each of the objects.
To debug the normals in the scene:
The normals must be consistent between objects, depending on which way they are facing. Since all the objects in your case are parallel to the axes and the normals are normalized, this should be pretty easy. You are expecting only 6 possible values, so 2 passes should do the job (positive and negative).
Use gl_FragColor = vec4(max(v_Normal.x, 0.0), max(v_Normal.y, 0.0), max(v_Normal.z, 0.0), 1.0); for positive normals. This will show all the faces facing positive X as red, all facing positive Y as green and all facing positive Z as blue.
The second test is then gl_FragColor = vec4(max(-v_Normal.x, 0.0), max(-v_Normal.y, 0.0), max(-v_Normal.z, 0.0), 1.0); which does exactly the same for the faces pointing in the negative directions.
To debug light position:
Once the first test passes, the light position can be tested by illuminating only the nearby objects. length(u_LightPos - v_Position) gives you the distance to the light. You must normalize it, so for your scene size use highp float scale = 1.0 - length(u_LightPos - v_Position) / 50.0; and then use it in the color as gl_FragColor = vec4(scale, scale, scale, 1.0);. This will make all the objects near the light white and those very far away black. A nice gradient should be visible.
The point of these tests:
You do this because your current result depends on multiple values, any of which can be buggy. By isolating the issues you will easily find where the problem is.
The first test discards the normals and the light position, so if it is incorrect it only means that you are multiplying your position by the wrong matrix. You need to ensure that the model matrix is used to multiply the position when computing v_Position.
The second test discards both the in-scene position and the light position. You will still be able to see your scene, but the colors will be defined by your normals only. If the result is incorrect, you either have wrong normals to begin with or they are transformed incorrectly by the normal matrix. You may even disable the normal matrix multiplication just to see which of the two is at fault; if disabling it does not fix the issue, then the normals in your vertex buffer are incorrect.
The third test discards your normals and your logic for computing the lighting effect. The first test must pass, because we still need the positions of your models. So if this test fails, you are most likely not positioning the light correctly. If it passes but the lighting is still wrong, the remaining possibilities are incorrect normals (for which you have the second test) or an error in how you compute the lighting effect itself.
I'm rendering a polygon in OpenGL with a vertex array called vertices and a final index buffer called DRAW_ORDER with CCW winding. I have back-face culling enabled, and I make draw calls using glDrawElements(GL_TRIANGLES, DRAW_ORDER.capacity(), GL_UNSIGNED_SHORT, DRAW_ORDER).
When I reflect vertices in the x or y axis via matrix transformation, the polygon gets culled because the reflection reverses the orientation of the vertices, so that they no longer match the CCW winding order of DRAW_ORDER.
I can prevent the problem by disabling culling, but for performance's sake I would rather find a way to restore the orientation of the vertices via permutation. For example, if the polygon were a triangle, I could simply swap the second and third vertices after a reflection to restore CCW orientation. How can I extend this approach to a polygon with an arbitrary number of vertices and indices?
//PSEUDO-CODE FOR TRIANGLE:
final DRAW_ORDER = {0,1,2};
vertices = { {0,0}, {1,0}, {0,1} };
reflect(vertices);
swap(vertices,1,2);
EDIT: Here's a solution that seems to work for convex polygons, but not concave.
//Reverse the order of the vertices so, for example,
//vertices {v1,v2,v3,v4,v5} become {v5,v4,v3,v2,v1}
for(int start = 0, end = vertices.length-1; start<end; start++, end--){
swap(vertices,start,end);
}
You can see in the image below how the solution works for an ellipse (which is convex) but not a star (which is concave).
To invert the winding order by a permutation of the indices, you can simply reverse the order of the indices.
So for your triangle example, if the original order is (0, 1, 2), the reverse order is (2, 1, 0). Since all cyclic permutations are equivalent for defining a polygon, other valid orders would be (1, 0, 2) and (0, 2, 1). But using the reverse is as good as anything.
As another example, for a polygon with 5 vertices, using indices (3, 17, 6, 8, 11), the reverse is (11, 8, 6, 17, 3), which can be used as the set of indices when you want to render the polygon with the opposite winding order.
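A minimal sketch of that reversal, assuming DRAW_ORDER is backed by a plain short[] (adapt accordingly if you only keep it in a ShortBuffer):
// Returns a copy of the index list in reverse order, which flips the winding
// of every triangle it describes.
public static short[] reverseWinding(short[] drawOrder) {
    short[] reversed = new short[drawOrder.length];
    for (int i = 0; i < drawOrder.length; i++) {
        reversed[i] = drawOrder[drawOrder.length - 1 - i];
    }
    return reversed;
}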
I am learning OpenGL ES 2.0, without ever having learned OpenGL or OpenGL ES 1.x.
I'm applying non-uniform scaling to my modelViewMatrix, so the tutorials tell me that I need to take special steps to compute a normalMatrix. In my application the modelViewMatrix has dimension 4x4.
Some tutorials say that for the normalMatrix I need to simply calculate transpose(inverse(modelViewMatrix)).
Other instructions say that I need to first take the upper left 3x3 sub-matrix of my modelViewMatrix and then compute transpose(inverse(submatrix)).
Is there any difference? Do they lead to the same result?
Right now I'm using method 1, and then in the vertex shader I extract a vec3 after applying the transformation:
vec3 vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
I am doubting this because I see strange effects in my diffuse lighting. I'm thinking about trying method 2, but the android.opengl.Matrix class does not offer methods for inverting or transposing 3x3 matrices...
My actual code in renderFrame() is as follows:
final float[] normalMatrix=new float[16];
final float[] unscaledModelViewMatrix=modelViewMatrix_Vuforia.getData();
Matrix.invertM(normalMatrix, 0, unscaledModelViewMatrix, 0);
Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
// pass the normalMatrix to the shader
GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);
A 3x3 matrix is enough to transform the normals.
The primary purpose of using 4x4 matrices that operate on homogeneous coordinates for positions is that they can express translations. A 3x3 matrix applied to a 3-component vector cannot express translations; you can easily confirm that, because it will always map the origin back to the origin.
Since normals are vectors, and not positions, we specifically do not want to apply the translation part of the modelview matrix to them. They describe directions, and directions do not change when a translation is applied.
So the cleanest approach is to use a 3x3 matrix for normals, and set it to the inverse-transpose of the top-left 3x3 elements of the modelview matrix.
In the case where you only have rotations and uniform scaling in the modelview matrix (which does not apply to your specific situation), people sometimes use the same modelview matrix that they also use for the positions. This is correct as long as the translation part is not applied, which can be done by setting the w component of the vector to zero for the multiplication:
vec3 transformedNormal = (modelViewMatrix * vec4(originalNormal, 0.0)).xyz;
With the w component being zero, the matrix elements that encode the translation have no effect on the result, and this corresponds to using only the top-left 3x3 part of the matrix. As long as this is the same as the inverse-transpose of the matrix, which is the case for rotations and uniform scaling, this is a valid shortcut.
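If you want to try the 3x3 approach (method 2) with android.opengl.Matrix, one way is to invert and transpose the full 4x4 modelview and then keep only its top-left 3x3, which for a standard modelview matrix (bottom row 0, 0, 0, 1) is the same as inverting and transposing the 3x3 submatrix directly. The handle name below is a placeholder for your own glGetUniformLocation result, assuming a mat3 uniform in the shader:
// Uploads the top-left 3x3 of transpose(inverse(modelView)) as a mat3 uniform.
// float[16] matrices from android.opengl.Matrix are column-major.
public static void uploadNormalMatrix3x3(float[] modelViewMatrix, int normalMatrixHandle) {
    float[] inverted = new float[16];
    float[] transposed = new float[16];
    Matrix.invertM(inverted, 0, modelViewMatrix, 0);
    Matrix.transposeM(transposed, 0, inverted, 0);
    float[] normal3x3 = {
        transposed[0], transposed[1], transposed[2],   // column 0
        transposed[4], transposed[5], transposed[6],   // column 1
        transposed[8], transposed[9], transposed[10]   // column 2
    };
    GLES20.glUniformMatrix3fv(normalMatrixHandle, 1, false, normal3x3, 0);
}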
I'm trying to learn some OpenGL ES for Android, entirely for fun. Graphics programming is very new to me, so sorry if this is a stupid question.
I've read a few examples and tutorials, and can create shapes, move the camera with touch events etc. Following Google's tutorial, I modified it to make some 3d shapes. I'm happy with all of this so far.
Currently my shapes have the same colour on all sides, and it isn't clear to me how I colour/texture them separately with the code that I've got at the minute (I just built on the Google tutorials by experimenting basically).
All the other examples I look at use glDrawArrays rather than glDrawElements. Defining a shape this way seems a lot clumsier, but then you have a unique set of vertices which makes colouring (and normals) seem more obvious.
The draw method for my cube is below so you can see where I'm at. Do I have to do this differently to progress further? If so, doesn't that make glDrawElements a lot less useful?
public void draw(float[] mvpMatrix) {
// Add program to OpenGL environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the cube coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the Cube
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Draw the cube, tell it that vertices describe triangles
GLES20.glDrawElements(
GLES20.GL_TRIANGLES, drawOrder.length,
GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
}
To color a certain part of a shape you need to define the color per vertex. In your case that means you need another attribute, next to vPosition, which represents the color. In the shaders you will need to add this new attribute and a varying vector to pass the color to the fragment shader; no transformations are needed in the vertex shader, though. Also, do not forget to enable the color attribute array and to set the pointer...
Try doing so to begin with and specify a different color for each vertex to see what happens.
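A sketch of the extra plumbing on the Java side, assuming you add something like attribute vec4 aColor; plus varying vec4 vColor; to the shaders and build a FloatBuffer with 4 floats per vertex (the names aColor and colorBuffer are placeholders):
// Binds a per-vertex RGBA color attribute; call this next to the position setup.
int colorHandle = GLES20.glGetAttribLocation(mProgram, "aColor");
GLES20.glEnableVertexAttribArray(colorHandle);
colorBuffer.position(0);
GLES20.glVertexAttribPointer(colorHandle, 4, GLES20.GL_FLOAT, false, 0, colorBuffer); // tightly packed RGBA
// ... draw ...
GLES20.glDisableVertexAttribArray(colorHandle);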
If you do this correctly, you will see that each face of the cube now has a color gradient and only the corners of the cube have the exact colors you specified. This is due to interpolation, and there is not much you can do about it other than creating the vertex buffer differently:
This is where the difference between drawing arrays and drawing elements comes in. By using elements you have an index buffer that lowers the actual vertex buffer size by reusing the same vertices. But in the case of a cube with colored faces those vertices cannot be shared any more, because they are not the same: if you want a cube with 6 different colors, one per face, then each corner actually carries 3 different colors, one for each adjacent face. A vertex consisting of a position and a color must match in both position and color to really be the same vertex...
So what must be done here is to create not 8 but 3*8 vertices, plus the matching indices, to draw such a cube. Once you do that, you gain little from indexing (though you can still use elements if you wish), so it is often easier to simply draw the arrays.
The same situation arises on a cube for normals or texture coordinates; again you need 3*8 vertices.
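For example, one face of such a cube laid out for glDrawArrays might look like this (positions and colors only; the other five faces follow the same pattern, giving 36 vertices in total, or 24 unique vertices plus 36 indices if you stay with glDrawElements):
// Front face unrolled into two CCW triangles, all six vertices sharing one color.
static final float[] FRONT_FACE_POSITIONS = {
    -1f, -1f, 1f,   1f, -1f, 1f,   1f,  1f, 1f,   // triangle 1
    -1f, -1f, 1f,   1f,  1f, 1f,  -1f,  1f, 1f,   // triangle 2
};
static final float[] FRONT_FACE_COLORS = {          // red, repeated per vertex (RGBA)
    1f, 0f, 0f, 1f,   1f, 0f, 0f, 1f,   1f, 0f, 0f, 1f,
    1f, 0f, 0f, 1f,   1f, 0f, 0f, 1f,   1f, 0f, 0f, 1f,
};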
Note, though, that a cube is just one of those unfriendly shapes for which you need to do this. If you were instead to draw a sphere made of a large number of triangles, you would not need to inflate the vertex count to get nice texturing or lighting (using normals). In fact, tripling the sphere's vertex count and applying the same procedure as on the cube would make the shape look worse.
I suggest you play around a bit with these kinds of shapes, normals, colors and textures; you will learn the most from your own experience.
I have some code that is very much the same as the sample on the Android developer website. I am quite confused as to why some programmers or tutorials put the fetching of uniform locations and the setting of the attribute pointers, like the code below,
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
MyGLRenderer.checkGlError("glGetUniformLocation");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
MyGLRenderer.checkGlError("glUniformMatrix4fv");
// Draw the triangle
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
in the onDraw method. From what I understand, the draw method runs in a loop, so won't putting those calls there make the app slow? Others, on the other hand, put that code in onSurfaceCreated even though they have the same shader code. Which is the preferred way?
Fetching the attribute and uniform locations in your initialization code is the way to go, as they won't change throughout the lifetime of the application. I suspect some tutorials fetch them in onDraw() mostly for simplicity. Performance-wise, it won't make any noticeable difference anyway.
I always set those handles in onSurfaceCreated. As you correctly note, the onDraw method runs in a continuous loop (unless you've set the mode to RENDERMODE_WHEN_DIRTY) and those handles aren't going to change between iterations since you compile the vertex/fragment shaders just once in onSurfaceCreated.
However, it actually makes very little difference and won't make your app slow if you do put it in onDraw. OpenGL is doing a huge amount of work in onDraw, such as applying transformations and rendering primitives. Setting those handles is trivial in comparison and the additional overhead is tiny and won't be noticeable.
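A sketch of the onSurfaceCreated approach, caching the handles in fields right after the program is linked (createAndLinkProgram stands in for whatever shader compilation/linking code you already have):
// Fetch the handles once; reuse the cached fields every frame in onDrawFrame.
private int mProgram;
private int mPositionHandle;
private int mColorHandle;
private int mMVPMatrixHandle;

@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    mProgram = createAndLinkProgram();   // your existing shader setup
    mPositionHandle  = GLES20.glGetAttribLocation(mProgram, "vPosition");
    mColorHandle     = GLES20.glGetUniformLocation(mProgram, "vColor");
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
}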