OpenGL ES 2.0 shader giving different lighting on different devices - Android

I am trying to write a shader for an OpenGL ES 2.0 view in Android.
My shaders are:
Vertex shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = normalize(vec3(u_MVMatrix * vec4(a_Normal, 0.0)));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
Fragment Shader:
precision highp float; // Set the default float precision to high.
uniform vec3 u_LightPos1; // The position of the light in eye space.
uniform vec3 u_LightDir1; // The direction of the light in eye space.
float l_spotCutOff = 45.0;
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
float cutoff = 0.1;
// The entry point for our fragment shader.
void main()
{
    // Get a lighting direction vector from the fragment to the light.
    vec3 lightVector1 = normalize(u_LightPos1 - v_Position);
    // Will be used for attenuation.
    float distance1 = length(u_LightPos1 - v_Position);
    float diffuse = 0.0;
    // Calculate the dot product of the light vector and the fragment normal. If the normal and
    // light vector point in the same direction the fragment gets max illumination.
    float diffuse1 = max(dot(v_Normal, lightVector1), 0.1);
    // Add attenuation.
    diffuse1 = diffuse1 * (1.0 / (1.0 + (0.25 * distance1)));
    // Add ambient lighting.
    diffuse = diffuse1 + 0.2;
    // Multiply the texture value by the diffuse illumination level to get the final output color.
    vec4 color = texture2D(u_Texture, v_TexCoordinate);
    color.rgb *= diffuse;
    if (color.a < cutoff)
        discard;
    gl_FragColor = color;
}
The shaders compile and run, but the lighting looks different on different devices:
Device 1 (Moto X Play): [screenshot 1]
Device 2 (Samsung S7): [screenshot 2]
Can anyone help?

The issue may be in the texture format/type you have used; not all devices support all texture formats.
For example, if your output color can have negative values and the device's texture format doesn't support them, they will get clamped to 0 and can give different results.
It is better to check the capabilities of both devices using:
GLES20.glGetString(GLES20.GL_EXTENSIONS);
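For instance, a minimal Java sketch of such a check (the class name, log tag, and the GL_OES_texture_float example are my own; it must run on the GL thread after the EGL context is current):

import android.opengl.GLES20;
import android.util.Log;

final class GlCaps {
    // Logs the GPU name and extension list so the two devices can be diffed.
    static void logCapabilities() {
        String renderer = GLES20.glGetString(GLES20.GL_RENDERER);
        String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        Log.d("GlCaps", "Renderer: " + renderer);
        Log.d("GlCaps", "Extensions: " + extensions);
        // Example capability check; substitute whatever extension you rely on.
        boolean hasFloatTextures =
                extensions != null && extensions.contains("GL_OES_texture_float");
        Log.d("GlCaps", "Float textures: " + hasFloatTextures);
    }
}

Diffing the two logs usually shows quickly which format or precision feature the devices disagree on.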

Related

OpenGL ES per-pixel lighting issue

There is a problem that I just can't seem to get a handle on.
I have a fragment shader:
precision mediump float;
uniform vec3 u_AmbientColor;
uniform vec3 u_LightPos;
uniform float u_Attenuation_Constant;
uniform float u_Attenuation_Linear;
uniform float u_Attenuation_Quadradic;
uniform vec3 u_LightColor;
varying vec3 v_Normal;
varying vec3 v_fragPos;
vec4 fix(vec3 v);
void main() {
    vec3 color = vec3(1.0,1.0,1.0);
    vec3 vectorToLight = u_LightPos - v_fragPos;
    float distance = length(vectorToLight);
    vec3 direction = vectorToLight / distance;
    float attenuation = 1.0/(u_Attenuation_Constant +
        u_Attenuation_Linear * distance + u_Attenuation_Quadradic * distance * distance);
    vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction,0.0);
    vec3 d = u_AmbientColor + diffuse;
    gl_FragColor = fix(color * d);
}
vec4 fix(vec3 v){
    float r = min(1.0,max(0.0,v.r));
    float g = min(1.0,max(0.0,v.g));
    float b = min(1.0,max(0.0,v.b));
    return vec4(r,g,b,1.0);
}
I've been following a tutorial I found on the web. The ambientColor and lightColor uniforms are (0.2,0.2,0.2) and (1.0,1.0,1.0) respectively. The v_Normal is calculated in the vertex shader using the inverted transpose of the model-view matrix.
The v_fragPos is the result of multiplying the position by the regular model-view matrix.
Now, I expect that when I move the light position closer to the cube I render, it will just appear brighter, but the resulting image is very different:
(the little square there is an indicator for the light position)
I just don't understand how this can happen. I mean, I multiply each of the color components by the SAME value, so how is it that it seems to vary so?
EDIT: I noticed that if I move the camera in front of the cube, the light is just shades of blue. This is the same problem, but maybe it's a clue, I don't know.
The Lambertian reflectance is computed with the dot product of the normal vector and the vector to the light source, not the component-wise product.
See How does the calculation of the light model work in a shader program?
Use the dot function instead of the * (multiplication) operator. Replace
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction,0.0);
with
vec3 diffuse = u_LightColor * attenuation * max(dot(normalize(v_Normal), direction), 0.0);
You can also simplify the code in the fix function: min and max can be substituted with clamp. These functions work component-wise, so they do not have to be called separately for each component:
vec4 fix(vec3 v)
{
    return vec4(clamp(v, 0.0, 1.0), 1.0);
}

Lighting on OpenGL ES sphere not smooth

In OpenGL ES 2.0 for Android, I am drawing a sphere. The sphere appears on the screen as a circle, so I need to add lighting. When I added lighting, instead of being smooth, as it would be in real life, there is a sharp line between light and dark, as shown here:
However, I want it to look like this, where the shading is much smoother and blended:
Here is my vertex shader code:
uniform mat4 u_Matrix;
uniform vec3 u_VectorToLight;
attribute vec4 a_Position;
attribute vec3 a_Color;
attribute vec3 a_Normal;
varying vec3 v_Color;
void main() {
    v_Color = a_Color;
    vec3 scaledNormal = a_Normal;
    scaledNormal = normalize(scaledNormal);
    float diffuse = max(dot(scaledNormal, u_VectorToLight), 0.0);
    v_Color *= diffuse;
    float ambient = 0.1;
    v_Color += ambient;
    gl_Position = u_Matrix * a_Position;
}
And my fragment shader:
precision mediump float;
varying vec3 v_Color;
void main() {
    gl_FragColor = vec4(v_Color, 1.0);
}
The normal is calculated by getting the vector from the center of the sphere to the point on the sphere, then normalizing it (giving it a length of 1).
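As a quick Java sketch of that calculation (the method and array names are my own):

// Writes the outward unit normal at surface point (px, py, pz) of a sphere
// centered at (cx, cy, cz): n = normalize(p - c).
static void sphereNormal(float[] out, int offset,
        float px, float py, float pz,
        float cx, float cy, float cz) {
    float nx = px - cx, ny = py - cy, nz = pz - cz;
    float invLen = 1.0f / (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
    out[offset]     = nx * invLen;
    out[offset + 1] = ny * invLen;
    out[offset + 2] = nz * invLen;
}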
Here is how I set the colors:
vertices[offset++] = Color.red(color);
vertices[offset++] = Color.green(color);
vertices[offset++] = Color.blue(color);
Where color is 0xffea00.
The problem is with the range of color values you use. OpenGL operates with color component values in the range [0.0, 1.0]. But you are specifying colors in the range [0, 255] in your Java code.
You have two options to fix this:
Divide the color values you get from the Color class by 255.0f (see the sketch at the end of this answer).
Specify the colors with type GL_UNSIGNED_BYTE. To do this, store the values in an array/buffer with element type byte, store those values in a VBO, and then set up the vertex attribute with:
glVertexAttribPointer(attrLoc, 3, GL_UNSIGNED_BYTE, GL_TRUE, stride, 0);
Note the value for the 4th argument. While it does not matter for GL_FLOAT attributes, it is critical that you use GL_TRUE in this case, because the byte values need to be normalized.
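For the first option, the fix is a one-line change per component in the snippet from the question, since Color.red()/green()/blue() return values in the 0..255 range:

vertices[offset++] = Color.red(color) / 255f;
vertices[offset++] = Color.green(color) / 255f;
vertices[offset++] = Color.blue(color) / 255f;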

OpenGL ES 2.0: Multiple light sources: Shader issue

UPDATE 3 (thanks so much for your help)
I removed what was suggested. Also, u_IT_MVMatrix seems wrong (whatever it is for). Things look a bit better, but the floor should glow, and the textured bricks should pick up light from the coloured bricks (blue, red, etc.).
Vertex shader (the fragment shader stayed the same) for textured objects:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
uniform vec4 u_PointLightPositions[3]; // In eye space
uniform vec3 u_PointLightColors[3];
vec4 eyeSpacePosition;
vec3 eyeSpaceNormal;
uniform vec4 v_Color;
varying vec3 lighting;
vec3 materialColor;
vec3 getAmbientLighting();
vec3 getDirectionalLighting();
vec3 getPointLighting();
// The entry point for our vertex shader.
void main()
{
    //materialColor = vec3(v_Color.xyz); // Will be modified by the texture later.
    materialColor = vec3(1.0, 1.0, 1.0);
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    eyeSpacePosition = u_MVMatrix * a_Position;
    // The model normals need to be adjusted as per the transpose of the inverse of the modelview matrix.
    eyeSpaceNormal = normalize(vec3(u_MVMatrix * vec4(a_Normal, 0.0)));
    gl_Position = u_MVPMatrix * a_Position;
    lighting = getAmbientLighting();
    lighting += getPointLighting();
}
vec3 getAmbientLighting()
{
    return materialColor * 0.2;
}
vec3 getPointLighting()
{
    vec3 lightingSum = vec3(0.0);
    for (int i = 0; i < 3; i++) {
        vec3 toPointLight = vec3(u_PointLightPositions[i]) - vec3(eyeSpacePosition);
        float distance = length(toPointLight);
        //distance = distance / 5.0;
        toPointLight = normalize(toPointLight);
        float cosine = max(dot(eyeSpaceNormal, toPointLight), 0.0);
        lightingSum += (materialColor * u_PointLightColors[i] * 20.0 * cosine)
                / distance;
    }
    return lightingSum;
}
Vertex shader for light bricks (no texture):
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
uniform vec4 u_PointLightPositions[3]; // In eye space
uniform vec3 u_PointLightColors[3];
vec4 eyeSpacePosition;
vec3 eyeSpaceNormal;
uniform vec4 v_Color;
varying vec3 lighting;
vec3 getAmbientLighting();
vec3 getDirectionalLighting();
vec3 getPointLighting();
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
    eyeSpacePosition = u_MVMatrix * a_Position;
    // The model normals need to be adjusted as per the transpose of the inverse of the modelview matrix.
    eyeSpaceNormal = normalize(vec3(u_MVMatrix * vec4(a_Normal, 0.0)));
    lighting = getAmbientLighting();
    lighting += getPointLighting();
}
vec3 getAmbientLighting()
{
    return v_Color.xyz * 0.2;
}
vec3 getPointLighting()
{
    vec3 lightingSum = vec3(0.0);
    for (int i = 0; i < 3; i++) {
        vec3 toPointLight = vec3(u_PointLightPositions[i]) - vec3(eyeSpacePosition);
        float distance = length(toPointLight);
        toPointLight = normalize(toPointLight);
        float cosine = max(dot(eyeSpaceNormal, toPointLight), 0.0);
        lightingSum += (v_Color.xyz * u_PointLightColors[i] * 20.0 * cosine)
                / distance;
    }
    return lightingSum;
}
I always struggled with using multiple light sources in a shader, but I found an example in my Android OpenGL 2.0 quick start book.
I thought I would give it a go. Sadly, whatever I do, I seem to be the light, so when I get closer to an object it gets lighter. What I want is to make 3 different places (say street lamps) act as light sources.
I define my light positions and colours in my renderer:
// new lighting
public final float[] pointLightPositions = new float[]
{0f, 1f, 0f, 1f,
100f, 1f, 0f, 1f,
50f, 1f, 0f, 1f};
public final float[] pointLightColors = new float[]
{1.00f, 0.20f, 0.20f,
0.02f, 0.25f, 0.02f,
0.02f, 0.20f, 1.00f};
On rendering
uPointLightPositionsLocation =
glGetUniformLocation(mProgramHandle, "u_PointLightPositions");
uPointLightColorsLocation =
glGetUniformLocation(mProgramHandle, "u_PointLightColors");
glUniform4fv(uPointLightPositionsLocation, 3, mRenderer.pointLightPositions, 0);
glUniform3fv(uPointLightColorsLocation, 3, mRenderer.pointLightColors, 0);
// not sure why I need this
// lighting
final float[] pointPositionsInEyeSpace = new float[12];
multiplyMV(pointPositionsInEyeSpace, 0, mVMatrix, 0, mRenderer.pointLightPositions, 0);
multiplyMV(pointPositionsInEyeSpace, 4, mVMatrix, 0, mRenderer.pointLightPositions, 4);
multiplyMV(pointPositionsInEyeSpace, 8, mVMatrix, 0, mRenderer.pointLightPositions, 8);
Matrix.multiplyMM(mRenderer.mMVPMatrix, 0, mVMatrix, 0, mRenderer.mModelMatrix, 0);
Shaders (vertex)
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
uniform vec4 u_PointLightPositions[3]; // In eye space
uniform vec3 u_PointLightColors[3];
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
Fragment
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
uniform vec4 v_Color;
uniform vec4 u_PointLightPositions[3]; // In eye space
uniform vec3 u_PointLightColors[3];
vec3 getPointLighting();
// The entry point for our fragment shader.
void main()
{
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);
    // Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));
    // Add ambient lighting
    diffuse = diffuse + 0.7;
    // Multiply the color by the diffuse illumination level and texture value to get final output color.
    //gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));
    gl_FragColor = diffuse * texture2D(u_Texture, v_TexCoordinate);
    gl_FragColor *= (v_Color * vec4(getPointLighting(), v_Color.w));
}
vec3 getPointLighting()
{
    vec3 lightingSum = vec3(0.0);
    for (int i = 0; i < 3; i++) {
        vec3 toPointLight = vec3(u_PointLightPositions[i])
                - vec3(v_Position);
        float distance = length(toPointLight);
        toPointLight = normalize(toPointLight);
        float cosine = max(dot(v_Normal, toPointLight), 0.0);
        //lightingSum += vec3(0.0, 0.0, 1.0);
        lightingSum += (vec3(v_Color.xyz) * u_PointLightColors[i] * 5.0 * cosine) / distance;
    }
    return lightingSum;
}
I would be extremely happy if someone could help :)
UPDATE 2
I have lighting in different colours, but the lights only glow when I get really near. I am sure it's something to do with the u_IT_MVMatrix matrix.
Fragment
precision mediump float; // Default float precision (required in ES 2.0 fragment shaders).
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
uniform vec4 v_Color;
varying vec3 lighting;
// The entry point for our fragment shader.
void main()
{
    gl_FragColor = texture2D(u_Texture, v_TexCoordinate);
    gl_FragColor *= vec4(lighting, 1.0);
}
Vertex
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
uniform vec4 u_PointLightPositions[3]; // In eye space
uniform vec3 u_PointLightColors[3];
uniform vec3 u_VectorToLight; // In eye space
uniform mat4 u_IT_MVMatrix;
vec4 eyeSpacePosition;
vec3 eyeSpaceNormal;
uniform vec4 v_Color;
varying vec3 lighting;
vec3 materialColor;
vec3 getAmbientLighting();
vec3 getDirectionalLighting();
vec3 getPointLighting();
// The entry point for our vertex shader.
void main()
{
    materialColor = vec3(1.0, 1.0, 1.0); // Will be modified by the texture later.
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    eyeSpacePosition = u_MVMatrix * a_Position;
    // The model normals need to be adjusted as per the transpose
    // of the inverse of the modelview matrix.
    eyeSpaceNormal = normalize(vec3(u_IT_MVMatrix * vec4(a_Normal, 0.0)));
    gl_Position = u_MVPMatrix * a_Position;
    lighting = getAmbientLighting();
    lighting += getDirectionalLighting();
    lighting += getPointLighting();
}
vec3 getAmbientLighting()
{
    return materialColor * 0.2;
}
vec3 getDirectionalLighting()
{
    return materialColor * max(dot(eyeSpaceNormal, u_VectorToLight), 0.0);
}
vec3 getPointLighting()
{
    vec3 lightingSum = vec3(0.0);
    for (int i = 0; i < 3; i++) {
        vec3 toPointLight = vec3(u_PointLightPositions[i]) - vec3(eyeSpacePosition);
        float distance = length(toPointLight);
        toPointLight = normalize(toPointLight);
        float cosine = max(dot(eyeSpaceNormal, toPointLight), 0.0);
        lightingSum += (materialColor * u_PointLightColors[i] * 5.0 * cosine)
                / distance;
    }
    return lightingSum;
}
So I believe it's something to do with my position transform:
//multiplyMM(mModelMatrix, 0, VMatrix, 0, mModelMatrix, 0);
//invertM(tempMatrix, 0, mModelMatrix, 0);
transposeM(it_modelViewMatrix, 0, VMatrix, 0);
In your code you actually have four lights, the fourth being positioned at u_LightPos.
I'd suggest you remove the diffuse variable (the fourth light) altogether, and also all references to v_Color (since you also have a texture). Then you should start seeing only the lighting of your three street lamps.
P.S. I'd also move the light calculations to the vertex shader for the sake of performance.
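On the "// not sure why I need this" comment in the rendering code: the shader declares u_PointLightPositions as eye-space positions, so the world-space values have to be re-transformed by the current view matrix every frame. Note that the question's snippet computes pointPositionsInEyeSpace but then uploads the raw pointLightPositions; presumably the transformed array is the one that should be passed. A sketch reusing the question's own names (and its static imports of Matrix.multiplyMV and GLES20):

// Transform the three world-space lamp positions into eye space...
final float[] pointPositionsInEyeSpace = new float[12];
for (int i = 0; i < 3; i++) {
    multiplyMV(pointPositionsInEyeSpace, i * 4, mVMatrix, 0,
            mRenderer.pointLightPositions, i * 4);
}
// ...and upload the transformed positions, not the world-space ones.
glUniform4fv(uPointLightPositionsLocation, 3, pointPositionsInEyeSpace, 0);
glUniform3fv(uPointLightColorsLocation, 3, mRenderer.pointLightColors, 0);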

OpenGL ES 2.0: parts of a model are occluded where they shouldn't be. Is the z-buffer to blame?

I'm using Assimp to render 3D models with OpenGL ES 2.0. I'm currently having a strange problem in which some parts of the model are not visible, even when they should be. It's easy to see in these pictures:
In the second image I rendered (a linearized version of) the z-buffer to the screen to see if it could be a z-buffer problem. Black pixels are near the camera:
I tried changing the z-near and z-far values without any effect. Right now I do this on initialisation:
glEnable(GL_CULL_FACE); // Cull back-facing polygons
glEnable(GL_DEPTH_TEST);
And I'm also doing this every frame:
glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
I thought it could be a face winding problem, so I tried to disable GL_CULL_FACE, but it didn't work. I'm pretty sure the model is fine, since Blender can render it correctly.
I'm using these shaders right now:
// vertex shader
uniform mat4 u_ModelMatrix; // A constant representing the model matrix.
uniform mat4 u_ViewMatrix; // A constant representing the view matrix.
uniform mat4 u_ProjectionMatrix; // A constant representing the projection matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
void main()
{
    // Transform the vertex into eye space.
    mat4 u_ModelViewMatrix = u_ViewMatrix * u_ModelMatrix;
    v_Position = vec3(u_ModelViewMatrix * a_Position);
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_ModelViewMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_ProjectionMatrix * u_ModelViewMatrix * a_Position;
}
And this is the fragment shader:
// fragment shader
precision mediump float; // Default float precision (required in ES 2.0 fragment shaders).
uniform sampler2D u_Texture; // The input texture.
uniform int u_TexCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
    vec3 u_LightPos = vec3(1.0);
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);
    // Add attenuation.
    diffuse = diffuse * (1.0 / distance);
    // Add ambient lighting
    diffuse = diffuse + 0.2;
    diffuse = 1.0;
    //gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate)); // Textured version
    float d = (2.0 * 0.1) / (100.0 + 0.1 - gl_FragCoord.z * (100.0 - 0.1));
    gl_FragColor = vec4(d, d, d, 1.0); // z-buffer render
}
I'm using VBOs with indices to load the geometry.
Of course I can paste whatever other code you think may be relevant, but for now I'm happy to get some ideas of what can cause this strange behavior, or some possible tests I could run.
OK, I solved the problem. I'm posting the solution since it may be useful to future googlers.
Basically, I didn't request a depth buffer. I do all the rendering in native code, but all the OpenGL context initialization is done on the Java side. I used one of the Android samples (GL2JNIActivity) as a starting point, but it didn't request a depth buffer, and I didn't notice that.
I solved it by setting the depth buffer size to 24 when setting the ConfigChooser:
setEGLConfigChooser(translucent ?
        new ConfigChooser(8, 8, 8, 8, 24 /*depth*/, 0) :
        new ConfigChooser(5, 6, 5, 0, 24 /*depth*/, 0));
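For a plain GLSurfaceView without a custom ConfigChooser, the built-in overload achieves the same thing; a minimal sketch (the view variable name is mine, and both calls must come before setRenderer()):

glSurfaceView.setEGLContextClientVersion(2);    // OpenGL ES 2.0 context
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8,   // RGBA channel sizes
        24,                                     // depth buffer bits
        0);                                     // stencil bits

Be aware that this chooser can fail if no EGL config offers at least the requested sizes, so requesting 16 depth bits is the safer fallback on very old devices.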

Scaled ModelMatrix messes with Diffuse lighting

So, I have a hopefully simple question:
I have a simple cube. I'm using Matrix.scaleM to scale the model-view matrix and compress the cube (there's a reason for this, trust me).
This works, the cube shrinks. However, my fragment shader no longer properly applies the diffuse light source to the top and bottom of the cube. The shader code is as follows:
precision mediump float;
uniform vec3 u_LightPos;
uniform sampler2D u_Texture;
uniform sampler2D u_Texture2;
varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
    float distance = length(u_LightPos - v_Position);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);
    mediump float emptyness = 0.0;
    mediump float half_emptyness = 0.1;
    // Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));
    // Add ambient lighting
    diffuse = diffuse + 0.3;
    vec4 textColor1 = texture2D(u_Texture, v_TexCoordinate);
    vec4 textColor2 = texture2D(u_Texture2, v_TexCoordinate);
    // Multiply the color by the diffuse illumination level and texture value to get final output color.
    if (textColor2.w == emptyness) {
        diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));
        gl_FragColor = (diffuse * textColor1); //v_Color *
        gl_FragColor.a = 1.0;
    } else {
        diffuse = diffuse * (1.0 / (1.0 + (0.75 * distance)));
        gl_FragColor = (diffuse * textColor1); //v_Color *
        gl_FragColor.a = 0.0;
    }
}
So, any ideas?
And I know the color is a little... odd. That's for a completely different reason.
EDIT: As requested, the vertex shader:
uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
attribute vec2 a_TexCoordinate;
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec4 v_Color; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the color.
    v_Color = a_Color;
    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    float halfer = 2.0;
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
You'll need an inverted transposed matrix like this:
Shader:
uniform mat4 u_IT_MVMatrix;
...
v_Normal = vec3(u_IT_MVMatrix * vec4(a_Normal, 0.0));
In your Java code you create the matrix from your regular MV matrix like this:
invertM(tempMatrix, 0, modelViewMatrix, 0);
transposeM(it_modelViewMatrix, 0, tempMatrix, 0);
Then you'll just need to pass this into the shader as a uniform.
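Putting the Java side together (a sketch using android.opengl.Matrix and GLES20; the handle and variable names are my own):

float[] tempMatrix = new float[16];
float[] it_modelViewMatrix = new float[16];
// Normal matrix = transpose(inverse(modelView)); a non-uniform scale in
// the model-view matrix skews normals unless they go through this matrix.
Matrix.invertM(tempMatrix, 0, modelViewMatrix, 0);
Matrix.transposeM(it_modelViewMatrix, 0, tempMatrix, 0);
int itMvHandle = GLES20.glGetUniformLocation(program, "u_IT_MVMatrix");
GLES20.glUniformMatrix4fv(itMvHandle, 1, false, it_modelViewMatrix, 0);

With a purely uniform scale, re-normalizing v_Normal in the shader would be enough; the inverse transpose only starts to matter once the cube is compressed along one axis, as in the question.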
