I am trying to implement a sprite sheet of 8 columns and 8 rows in OpenGL ES 2.0.
I got the first image to appear, but I can't figure out how to translate the texture matrix in OpenGL ES 2.0. The equivalent OpenGL ES 1.0 code I am looking for is:
gl.glMatrixMode(GL10.GL_TEXTURE);
gl.glLoadIdentity();
gl.glPushMatrix();
gl.glTranslatef(0.0f, 0.2f, 0f);
gl.glPopMatrix();
These are the matrices that I am using at the moment:
/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];
/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];
/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];
/**
* Stores a copy of the model matrix specifically for the light position.
*/
private float[] mLightModelMatrix = new float[16];
My vertex shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
My Fragment shader:
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.0);
// Add attenuation.
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));
// Add ambient lighting
diffuse = diffuse + 0.7;
// Multiply the color by the diffuse illumination level and texture value to get final output color.
gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));
}
You will need to perform the transformation of the texture coordinates yourself. You could do this in one of four places:
Apply the transformation to your raw model data.
Apply the transformation on the CPU (not recommended unless you have good reason, as this is what vertex shaders are for).
Apply the transformation in the vertex shader (recommended).
Apply the transformation in the fragment shader.
If you are going to apply a translation to the texture coordinates, the most flexible way is to use your maths library to create a translation matrix and pass the new matrix to your vertex shader as a uniform (the same way you pass mMVPMatrix and mLightModelMatrix).
You can then multiply the translation matrix by the texture coordinate in the vertex shader and output the result as a varying vector.
Vertex Shader:
texture_coordinate_varying = (texture_matrix_uniform * vec4(texture_coordinate_attribute, 0.0, 1.0)).xy;
Fragment Shader:
gl_FragColor = texture2D(texture_sampler, texture_coordinate_varying);
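On the Java side, a minimal sketch of building and uploading such a matrix with android.opengl.Matrix (the uniform name u_TextureMatrix and the program handle name are assumptions, not from your code):
float[] textureMatrix = new float[16];
Matrix.setIdentityM(textureMatrix, 0);
Matrix.translateM(textureMatrix, 0, 0.0f, 0.2f, 0.0f); // the translation from the GLES 1.0 snippet
int textureMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_TextureMatrix");
GLES20.glUniformMatrix4fv(textureMatrixHandle, 1, false, textureMatrix, 0);
For an 8x8 sprite sheet, translating in steps of 1.0/8.0 = 0.125 steps through successive cells.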
Please note: Your GLES 1.0 code does not actually perform a translation as you surrounded it with a push and pop.
In simple words, all I need to do is display a live stream of video frames in Android (each frame is in YUV420 format). I have a callback function where I receive individual frames as a byte array. Something that looks like this:
public void onFrameReceived(byte[] frame, int height, int width, int format) {
// display this frame to surfaceview/textureview.
}
A feasible but slow option is to convert the byte array to a Bitmap and draw it to the canvas of a SurfaceView. In the future, I would ideally like to be able to alter the brightness, contrast, etc. of each frame, and hence I am hoping I can use OpenGL ES for this. What are my other options for doing this efficiently?
Remember, unlike with the Camera or MediaPlayer classes, I can't direct my output to a SurfaceView/TextureView using camera.setPreviewTexture(surfaceTexture), as I am receiving individual frames from GStreamer in C.
I'm using ffmpeg for my project, but the principle for rendering the YUV frame should be the same for you.
If a frame, for example, is 756 x 576, then the Y plane will be that size. The U and V planes are half the width and height of the Y plane, so you will have to make sure you account for the size differences.
I don't know about the camera API, but the frames I get from a DVB source have a width, and each line also has a stride: extra pixels at the end of each line in the frame. In case yours is the same, account for this when calculating your texture coordinates.
Adjusting the texture coordinates to account for the width and stride (linesize):
float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
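For example, in Java terms (the numbers and variable names here are hypothetical):
float frameWidth = 720.0f; // visible pixels per line
float yLinesize = 768.0f;  // allocated pixels per line, i.e. the stride
float maxU = frameWidth / yLinesize; // 0.9375f
// Using maxU instead of 1.0 as the quad's right-edge texture coordinate
// keeps the padding at the end of each line from being sampled.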
The vertex shader I've used takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in the texture coords and a colour input. I've used the colour input so that I can add fading, etc.
Vertex shader:
#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif
attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
v_texcoord = a_texcoord;
v_colorout = a_colorin;
float x = a_vertex.x * c2 - c1;
float y = -(a_vertex.y * c2 - c1);
gl_Position = vec4(x, y, a_vertex.z, c1);
}
The fragment shader takes three uniform textures, one each for the Y, U and V frames, and converts them to RGB. It also multiplies by the colour passed in from the vertex shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
float y = texture2D(u_texturey, v_texcoord).r;
float u = texture2D(u_textureu, v_texcoord).r - 0.5;
float v = texture2D(u_texturev, v_texcoord).r - 0.5;
vec4 rgb = vec4(y + 1.403 * v,
y - 0.344 * u - 0.714 * v,
y + 1.770 * u,
1.0);
gl_FragColor = rgb * v_colorout;
}
The vertex format used is:
float x, y, z; // coords
float s, t; // texture coords
uint8_t r, g, b, a; // colour and alpha
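For illustration, a sketch of feeding that interleaved layout to the shader in GLES20 Java terms (the attribute locations and the byte buffer name are assumptions); the stride is 3*4 + 2*4 + 4 = 24 bytes per vertex:
int stride = 24; // x,y,z (12 bytes) + s,t (8 bytes) + r,g,b,a (4 bytes)
vertexBytes.position(0);  // byte offset of x,y,z
GLES20.glVertexAttribPointer(aVertexLoc, 3, GLES20.GL_FLOAT, false, stride, vertexBytes);
vertexBytes.position(12); // byte offset of s,t
GLES20.glVertexAttribPointer(aTexcoordLoc, 2, GLES20.GL_FLOAT, false, stride, vertexBytes);
vertexBytes.position(20); // byte offset of r,g,b,a
GLES20.glVertexAttribPointer(aColorinLoc, 4, GLES20.GL_UNSIGNED_BYTE, true, stride, vertexBytes); // normalized to 0..1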
Hope this helps!
EDIT:
For the NV12 format you can still use a fragment shader, although I've not tried it myself; it takes in the interleaved UV plane as a luminance-alpha texture or similar.
See here for how one person has answered this: https://stackoverflow.com/a/22456885/2979092
I took several answers from SO and various articles, plus @WLGfx's answer above, to come up with this:
I created two byte buffers, one for the Y and one for the UV part of the texture, then converted the byte buffers to textures using:
public static int createImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
if (GLES20.glIsTexture(textureHandle)) {
return updateImageTexture(data, width, height, format, textureHandle);
}
int[] textureHandles = new int[1];
GLES20.glGenTextures(1, textureHandles, 0);
textureHandle = textureHandles[0];
GlUtil.checkGlError("glGenTextures");
// Bind the texture handle to the 2D texture target.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
// Configure min/mag filtering, i.e. what scaling method do we use if what we're rendering
// is smaller or larger than the source image.
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GlUtil.checkGlError("loadImageTexture");
// Load the data from the buffer into the texture handle.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height,
0, format, GLES20.GL_UNSIGNED_BYTE, data);
GlUtil.checkGlError("loadImageTexture");
return textureHandle;
}
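For example, the two planes might then be uploaded like this (a sketch; the buffer and handle names are assumptions, and for YUV420SP/NV12 the interleaved UV plane is half the width and half the height of the Y plane):
yTextureHandle = createImageTexture(yBuffer, width, height, GLES20.GL_LUMINANCE, yTextureHandle);
uvTextureHandle = createImageTexture(uvBuffer, width / 2, height / 2, GLES20.GL_LUMINANCE_ALPHA, uvTextureHandle);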
Both of these textures are then sent as normal 2D textures to the GLSL shader:
precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D sTextureY;
uniform sampler2D sTextureUV;
uniform float sBrightnessValue;
uniform float sContrastValue;
void main (void) {
float r, g, b, y, u, v;
// We put the Y value of each pixel into the R, G and B components via GL_LUMINANCE;
// that's why we pull it from the R component (we could also use G or B).
y = texture2D(sTextureY, vTextureCoord).r;
// We put the U and V values of each pixel into the R,G,B and A components of the
// texture respectively using GL_LUMINANCE_ALPHA. Since the U,V bytes are interleaved
// in the texture, this is probably the fastest way to use them in the shader.
u = texture2D(sTextureUV, vTextureCoord).r - 0.5;
v = texture2D(sTextureUV, vTextureCoord).a - 0.5;
// The numbers are just YUV to RGB conversion constants
r = y + 1.13983*v;
g = y - 0.39465*u - 0.58060*v;
b = y + 2.03211*u;
// setting brightness/contrast
r = r * sContrastValue + sBrightnessValue;
g = g * sContrastValue + sBrightnessValue;
b = b * sContrastValue + sBrightnessValue;
// We finally set the RGB color of our pixel
gl_FragColor = vec4(r, g, b, 1.0);
}
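To drive the brightness/contrast controls, the two uniforms can be set per frame; a sketch (the location variables and program name are assumptions; contrast 1.0 and brightness 0.0 leave the image unchanged):
int contrastLoc = GLES20.glGetUniformLocation(program, "sContrastValue");
int brightnessLoc = GLES20.glGetUniformLocation(program, "sBrightnessValue");
GLES20.glUniform1f(contrastLoc, 1.0f);   // 1.0 = neutral contrast
GLES20.glUniform1f(brightnessLoc, 0.0f); // 0.0 = neutral brightness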
I am trying to use Android's ColorMatrix in OpenGL ES 2.0. I have been able to use a 4x4 matrix with the following code in the shader (this also adds an intensity parameter):
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture; // the input texture
uniform lowp mat4 colorMatrix;
uniform lowp float intensity;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 outputColor = textureColor * colorMatrix;
gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
But I am struggling with how to convert the Android 4x5 matrix into a 4x4 matrix usable in the shader. I am not interested in the alpha channel.
The most direct approach is probably to split the ColorMatrix into a mat4 and a vec4, where the mat4 contains the multipliers for the input components, and the vec4 the constant offsets.
One detail to watch out for is the order of the matrix elements in memory. OpenGL uses column-major storage, while the ColorMatrix appears to be in row-major order. But it looks like you're already correcting for this discrepancy by multiplying the input vector from the right in the shader code.
The shader code will then look like this:
uniform lowp mat4 colorMatrix;
uniform lowp vec4 colorOffset;
...
vec4 outputColor = textureColor * colorMatrix + colorOffset;
In the Java code, say you have a ColorMatrix named mat:
ColorMatrix mat = ...;
float[] matArr = mat.getArray();
float[] oglMat = {
matArr[0], matArr[1], matArr[2], matArr[3],
matArr[5], matArr[6], matArr[7], matArr[8],
matArr[10], matArr[11], matArr[12], matArr[13],
matArr[15], matArr[16], matArr[17], matArr[18]};
// Set value of colorMatrix uniform using oglMat.
float[] oglOffset = {matArr[4], matArr[9], matArr[14], matArr[19]};
// Set value of colorOffset uniform using oglOffset.
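A sketch of those two upload calls (the handle and program names are assumptions; also note that ColorMatrix offsets are defined on a 0..255 colour scale, so they will likely need dividing by 255 to match the shader's 0..1 colours):
int colorMatrixLoc = GLES20.glGetUniformLocation(program, "colorMatrix");
int colorOffsetLoc = GLES20.glGetUniformLocation(program, "colorOffset");
GLES20.glUniformMatrix4fv(colorMatrixLoc, 1, false, oglMat, 0);
for (int i = 0; i < 4; i++) oglOffset[i] /= 255.0f; // rescale 0..255 offsets to 0..1
GLES20.glUniform4fv(colorOffsetLoc, 1, oglOffset, 0);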
I am making a model of the Earth using OpenGL ES 2.0 for Android. I draw the sphere, and each time it's drawn I want to rotate it in the vertex shader, so I set a uniform representing the rotation angle. Here is how I calculate the new points:
Vertex Shader:
uniform mat4 u_Matrix;
uniform vec3 u_VectorToLight;
uniform vec3 u_Center;
uniform float u_RotationAngle;
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
attribute vec3 a_Normal;
varying vec2 v_TextureCoordinates;
varying vec3 v_VectorToLight;
varying vec3 v_Normal;
vec2 rotate(vec2 c, vec2 p, float a);
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_VectorToLight = u_VectorToLight;
v_Normal = a_Normal;
vec2 point = rotate(u_Center.xz, a_Position.xz, radians(u_RotationAngle));
gl_Position = a_Position;
gl_Position *= u_Matrix;
gl_Position.x = point.x;
gl_Position.z = point.y;
}
vec2 rotate(vec2 c, vec2 p, float a) {
p.x -= c.x;
p.y -= c.y;
float x1 = p.x * cos(a) - p.y * sin(a);
float y1 = p.x * sin(a) + p.y * cos(a);
p.x = c.x + x1;
p.y = c.y + y1;
return p;
}
Fragment Shader:
precision mediump float;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
varying vec3 v_VectorToLight;
varying vec3 v_Normal;
void main() {
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
vec4 color = gl_FragColor.rgba;
vec3 scaledNormal = v_Normal;
scaledNormal = normalize(scaledNormal);
float diffuse = max(dot(scaledNormal, v_VectorToLight), 0.0);
gl_FragColor.rgb *= diffuse;
float ambient = 0.2;
gl_FragColor.rgb += ambient * color;
}
I rotate each individual point around the center of the sphere, using the rotate() method in the vertex shader, and this just results in a distorted earth, made smaller on the X and Z axes but not the Y (so imagine a very thin version of the Earth). What's even more confusing is that even though I pass in a new value for the rotation angle each time, I still get the same still image. Even if it is distorted, it should still look different each time, since I'm using a different angle. Here's how I set the uniforms:
public void setUniforms(float[] matrix, Vector vectorToLight, int texture, Point center, float angle) {
glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(uTextureUnitLocation, 0);
glUniform3f(uRadiusLocation, center.x, center.y, center.z);
glUniform3f(uVectorToLightLocation, vectorToLight.x, vectorToLight.y, vectorToLight.z);
glUniform1f(uRotationAngleLocation, angle); // <-- I set the rotation angle
}
I believe there is a problem with how you combine the rotation around the axis with your global rotation contained in u_Matrix:
vec2 point = rotate(u_Center.xz, a_Position.xz, radians(u_RotationAngle));
gl_Position = a_Position;
gl_Position *= u_Matrix;
gl_Position.x = point.x;
gl_Position.z = point.y;
Since point only contains the rotation around the axis, and you replace x and z of gl_Position with the values from point, the resulting x and z do not have u_Matrix applied to them.
You need to first apply your axis rotation, and then apply u_Matrix to this transformed point:
vec2 point = rotate(u_Center.xz, a_Position.xz, radians(u_RotationAngle));
vec4 rotPoint = a_Position;
rotPoint.x = point.x;
rotPoint.z = point.y;
gl_Position = rotPoint;
gl_Position *= u_Matrix;
Beyond fixing that, it would generally be much more efficient to calculate the rotation matrix once in your Java code, set it as a uniform, and then apply only a matrix multiplication in your vertex shader. With the code you have now, you calculate a cos() and sin() value for each vertex in your shader code, unless the GLSL compiler is really smart about it. If you don't want to pass in a full matrix, I would at least pass the cosine and sine values into the shader instead of the angle.
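A sketch of that approach (the uniform name u_RotationMatrix and its location variable are assumptions): build the Y-axis rotation once per frame with android.opengl.Matrix and upload it, so the per-vertex trigonometry reduces to a single matrix multiply in the shader.
float[] rotationMatrix = new float[16];
Matrix.setRotateM(rotationMatrix, 0, angle, 0f, 1f, 0f); // angle in degrees, about the Y axis
glUniformMatrix4fv(uRotationMatrixLocation, 1, false, rotationMatrix, 0);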
If blending two textures of different sizes in the fragment shader, is it possible to map the textures to different coordinates?
For example, if blending the textures from the following two images:
with the following shaders:
// Vertex shader
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec2 aTexcoord;
varying vec2 vTexcoord;
void main() {
gl_Position = uMVPMatrix * vPosition;
vTexcoord = aTexcoord;
}
// Fragment shader
uniform sampler2D uContTexSampler;
uniform sampler2D uMaskTextSampler;
varying vec2 vTexcoord;
void main() {
vec4 mask = texture2D(uMaskTextSampler, vTexcoord);
vec4 text = texture2D(uContTexSampler, vTexcoord);
gl_FragColor = vec4(text.r * mask.r, text.g * mask.r, text.b * mask.r, text.a * mask.r);
}
(The fragment shader replaces the white areas of the black-and-white mask with the second texture.)
Since both textures use the same gl_Position and texture coordinates (1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f), both textures get mapped to the same coordinates in the view:
However, my goal is to maintain the original texture ratio:
I want to achieve this within the shader, rather than with glBlendFunc and glBlendFuncSeparate, in order to use my own values for blending.
Is there a way to achieve this within GLSL? I have a feeling that my approach for blending texture vertices for different position coordinates is broken by design...
It is indeed possible, but you need a 2D scale vector for the mask texture coordinates. I suggest you compute it on the CPU and send it to the shader via a uniform; the alternative is to compute it per vertex or even per fragment, but you would still need to pass the image dimensions into the shader, so just do it on the CPU and add a uniform.
To get that scale vector you need a bit of simple math. What you want to do is respect the mask ratio but scale it so that one of the two scale coordinates is 1.0 and the other is <= 1.0. That means you will see either the whole mask width or the whole mask height, while the other dimension is scaled down. For instance, if you have an image of size 1.0x1.0 and a mask of size 2.0x1.0, your scale vector will be (.5, 1.0).
To use this scaleVector you simply need to multiply the texture coordinates:
vec4 mask = texture2D(uMaskTextSampler, vTexcoord* scaleVector);
To compute the scale vector try this:
float imageWidth;
float imageHeight;
float maskWidth;
float maskHeight;
float imageRatio = imageWidth/imageHeight;
float maskRatio = maskWidth/maskHeight;
float scaleX, scaleY;
if(imageRatio/maskRatio > 1.0f) {
//x will be 1.0
scaleX = 1.0f;
scaleY = 1.0f/(imageRatio/maskRatio);
}
else {
//y will be 1.0
scaleX = imageRatio/maskRatio;
scaleY = 1.0f;
}
Note I did not try this code so you might need to play around a bit.
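Once computed, the pair can be uploaded as the uniform the fragment shader reads (a sketch; the location variable and program name are assumptions, while scaleVector matches the snippet above):
int scaleVectorLoc = GLES20.glGetUniformLocation(program, "scaleVector");
GLES20.glUniform2f(scaleVectorLoc, scaleX, scaleY);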
EDIT: fix for scaling the texture coordinates
The above scaling of the texture coordinates makes the mask texture use its top-left part instead of its centre part. The coordinates must be scaled around the centre: take the vector from the centre, vTexcoord - vec2(.5, .5), scale that vector, and then add it back to the centre:
vec2 fromCentre = vTexcoord - vec2(.5, .5);
vec2 scaledFromCentre = fromCentre * scaleVector;
vec2 resultCoordinate = vec2(.5, .5) + scaledFromCentre;
You can put this into a single line and even try to shorten it a bit (work it out on paper first).
So finally, by following some other posts etc., I have created a sphere in OpenGL ES 2.0 (Android).
However, it is currently rendering as a wireframe instead of being filled in.
Draw code:
public void draw()
{
// Set our per-vertex lighting program.
GLES20.glUseProgram(mProgramHandle);
// Set program handles for drawing.
mRenderer.mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MVPMatrix");
mRenderer.mMVMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MVMatrix");
mNormalHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Normal");
mPositionHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Position");
mColorHandle = GLES20.glGetUniformLocation(mProgramHandle, "v_Color");
// Translate the sphere into the screen.
Matrix.setIdentityM(mRenderer.mModelMatrix, 0);
Matrix.translateM(mRenderer.mModelMatrix, 0, position.x, position.y, position.z);
Matrix.scaleM(mRenderer.mModelMatrix, 0, scale.x, scale.y, scale.z);
Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.x, 1, 0,0);
Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.y, 0, 1,0);
Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.z, 0, 0,1);
float color[] = { 0.0f, 0.0f, 1.0f, 1.0f };
// Set color for drawing the triangle
GLES20.glUniform4fv(mColorHandle, 1, color, 0);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, BYTES_PER_VERTEX, vertexBuffer);
GLES20.glEnableVertexAttribArray(mNormalHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mRenderer.mMVPMatrix, 0, mRenderer.mViewMatrix, 0, mRenderer.mModelMatrix, 0);
// Pass in the modelview matrix.
GLES20.glUniformMatrix4fv(mRenderer.mMVMatrixHandle, 1, false, mRenderer.mMVPMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mTemporaryMatrix, 0, mRenderer.mProjectionMatrix, 0, mRenderer.mMVPMatrix, 0);
System.arraycopy(mTemporaryMatrix, 0, mRenderer.mMVPMatrix, 0, 16);
// Pass in the combined matrix.
GLES20.glUniformMatrix4fv(mRenderer.mMVPMatrixHandle, 1, false, mRenderer.mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount);
//GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
}
Sphere creation code:
private void generateSphereCoords(float radius, int stacks, int slices)
{
for (int stackNumber = 0; stackNumber <= stacks; ++stackNumber)
{
for (int sliceNumber = 0; sliceNumber < slices; ++sliceNumber)
{
float theta = (float) (stackNumber * Math.PI / stacks);
float phi = (float) (sliceNumber * 2 * Math.PI / slices);
float sinTheta = FloatMath.sin(theta);
float sinPhi = FloatMath.sin(phi);
float cosTheta = FloatMath.cos(theta);
float cosPhi = FloatMath.cos(phi);
vertexBuffer.put(new float[]{radius * cosPhi * sinTheta, radius * sinPhi * sinTheta, radius * cosTheta});
}
}
for (int stackNumber = 0; stackNumber < stacks; ++stackNumber)
{
for (int sliceNumber = 0; sliceNumber <= slices; ++sliceNumber)
{
indexBuffer.put((short) ((stackNumber * slices) + (sliceNumber % slices)));
indexBuffer.put((short) (((stackNumber + 1) * slices) + (sliceNumber % slices)));
}
}
}
Any ideas how I can make this one full colour instead of a wireframe?
In the fragment shader I have
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform sampler2D u_Texture; // The input texture.
uniform vec4 v_Color;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
// The entry point for our fragment shader.
void main()
{
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.0);
// Add attenuation.
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));
// Add ambient lighting
diffuse = diffuse + 0.7;
gl_FragColor = v_Color;
}
Vertex shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
It looks like you are calculating vertex indices in your setup code, but not using them. Consider using glDrawElements instead of glDrawArrays.
It also looks like you are half way between using GL_TRIANGLE_STRIP (one new index per triangle after the first triangle) and GL_TRIANGLES (three indices per triangle). You will probably find it easier to use GL_TRIANGLES.
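A sketch of the indexed draw call (assuming indexBuffer is a ShortBuffer filled by the loops above, which produce stacks * (slices + 1) * 2 indices):
indexBuffer.position(0); // rewind to the first index
GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, stacks * (slices + 1) * 2, GLES20.GL_UNSIGNED_SHORT, indexBuffer);
Note that drawing all stacks as one continuous strip will join adjacent stacks with stray triangles unless you insert degenerate indices between them, which is another reason GL_TRIANGLES is the easier option.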