OpenGL ES per-pixel lighting issue - Android

There is a problem that I just can't seem to get a handle on.
I have a fragment shader:
precision mediump float;
uniform vec3 u_AmbientColor;
uniform vec3 u_LightPos;
uniform float u_Attenuation_Constant;
uniform float u_Attenuation_Linear;
uniform float u_Attenuation_Quadradic;
uniform vec3 u_LightColor;
varying vec3 v_Normal;
varying vec3 v_fragPos;
vec4 fix(vec3 v);
void main() {
vec3 color = vec3(1.0,1.0,1.0);
vec3 vectorToLight = u_LightPos - v_fragPos;
float distance = length(vectorToLight);
vec3 direction = vectorToLight / distance;
float attenuation = 1.0/(u_Attenuation_Constant +
u_Attenuation_Linear * distance + u_Attenuation_Quadradic * distance * distance);
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction,0.0);
vec3 d = u_AmbientColor + diffuse;
gl_FragColor = fix(color * d);
}
vec4 fix(vec3 v){
float r = min(1.0,max(0.0,v.r));
float g = min(1.0,max(0.0,v.g));
float b = min(1.0,max(0.0,v.b));
return vec4(r,g,b,1.0);
}
I've been following a tutorial I found on the web. Anyway, the u_AmbientColor and u_LightColor uniforms are (0.2, 0.2, 0.2) and (1.0, 1.0, 1.0) respectively. v_Normal is calculated in the vertex shader using the inverse-transposed model-view matrix, and v_fragPos is the result of multiplying the position by the model-view matrix.
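The vertex shader itself wasn't posted, but based on that description it presumably looks something like this minimal sketch (the uniform names u_MVPMatrix, u_MVMatrix and u_NormalMatrix are assumptions; the normal matrix is built on the CPU, since GLSL ES 1.00 has no inverse()):
uniform mat4 u_MVPMatrix;    // model-view-projection matrix (assumed name)
uniform mat4 u_MVMatrix;     // model-view matrix (assumed name)
uniform mat3 u_NormalMatrix; // inverse-transposed model-view matrix, computed CPU-side
attribute vec4 a_Position;
attribute vec3 a_Normal;
varying vec3 v_Normal;
varying vec3 v_fragPos;
void main() {
    v_Normal = u_NormalMatrix * a_Normal;      // normal in eye space
    v_fragPos = vec3(u_MVMatrix * a_Position); // fragment position in eye space
    gl_Position = u_MVPMatrix * a_Position;
}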
Now, I expect that when I move the light position closer to the cube I render, it will just appear brighter, but the resulting image is very different:
(the little square there is an indicator for the light position)
I just don't understand how this can happen. I mean, I multiply each of the color components by the SAME value, so how is it that the result varies so much?
EDIT: I noticed that if I move the camera in front of the cube, the light is just shades of blue. It's the same problem, but maybe it's a clue, I don't know.

The Lambertian reflectance is computed with the dot product of the normal vector and the vector to the light source, not the component-wise product. With the component-wise product, the red, green and blue channels each get scaled by a different factor (N.x*D.x, N.y*D.y and N.z*D.z respectively), which is exactly why the color varies with the direction to the light, and why you see shades of blue from some viewpoints.
See How does the calculation of the light model work in a shader program?
Use the dot function instead of the * (multiplication) operator:
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction, 0.0); // wrong: component-wise product
vec3 diffuse = u_LightColor * attenuation * max(dot(normalize(v_Normal), direction), 0.0); // correct: dot product
You can simplify the code in the fix function. min and max can be substituted with clamp. These functions work component-wise, so they do not have to be called separately for each component:
vec4 fix(vec3 v)
{
    return vec4(clamp(v, 0.0, 1.0), 1.0);
}
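Putting both changes together (and inlining the simplified fix), the fragment shader's main would look like this sketch, built only from the code above:
void main() {
    vec3 color = vec3(1.0, 1.0, 1.0);
    vec3 vectorToLight = u_LightPos - v_fragPos;
    float distance = length(vectorToLight);
    vec3 direction = vectorToLight / distance;
    float attenuation = 1.0 / (u_Attenuation_Constant +
        u_Attenuation_Linear * distance + u_Attenuation_Quadradic * distance * distance);
    // Lambertian term: dot product of the unit normal and the direction to the light
    vec3 diffuse = u_LightColor * attenuation * max(dot(normalize(v_Normal), direction), 0.0);
    gl_FragColor = vec4(clamp(color * (u_AmbientColor + diffuse), 0.0, 1.0), 1.0);
}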

Related

Anisotropic lighting in OpenGL ES 2.0/3.0. Black artifacts

I am trying to implement anisotropic lighting.
Vertex shader:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_vMatrix;
in vec4 a_position;
in vec3 a_normal;
...
out lowp float v_DiffuseIntensity;
out lowp float v_SpecularIntensity;
const vec3 lightPosition = vec3(-1.0, 0.0, 5.0);
const lowp vec3 grainDirection = vec3(15.0, 2.8, -1.0);
const vec3 eye_position = vec3(0.0, 0.0, 0.0);
void main() {
// transform normal orientation into eye space
vec3 modelViewNormal = mat3(u_mvMatrix) * a_normal;
vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
vec3 lightVector = normalize(lightPosition - modelViewVertex);
lightVector = mat3(u_vMatrix) * lightVector;
vec3 normalGrain = cross(modelViewNormal, grainDirection);
vec3 tangent = normalize(cross(normalGrain, modelViewNormal));
float LdotT = dot(tangent, normalize(lightVector));
float VdotT = dot(tangent, normalize(mat3(u_mvMatrix) * eye_position));
float NdotL = sqrt(1.0 - pow(LdotT, 2.0));
float VdotR = NdotL * sqrt(1.0 - pow(VdotT, 2.0)) - VdotT * LdotT;
v_DiffuseIntensity = max(NdotL * 0.4 + 0.6, 0.0);
v_SpecularIntensity = max(pow(VdotR, 2.0) * 0.9, 0.0);
...
}
Fragment shader:
...
in lowp float v_DiffuseIntensity;
in lowp float v_SpecularIntensity;
const lowp vec3 default_color = vec3(0.1, 0.7, 0.9);
void main() {
...
lowp vec3 resultColor = (default_color * v_DiffuseIntensity)
+ v_SpecularIntensity;
outColor = vec4(resultColor, 1.0);
}
Overall, the lighting works well on different devices, but an artifact appears on a Samsung tablet, as shown in the figure:
It seems that the darkest areas become completely black. Can anyone suggest why this is happening? Thanks for any answer/comment!
You've got a couple of expressions that risk undefined behaviour:
sqrt(1.0 - pow(LdotT, 2.0))
sqrt(1.0 - pow(VdotT, 2.0))
The pow function is undefined if x is negative. I suspect you're getting away with this because y is 2.0 so they're probably optimised to just be x * x.
The sqrt function is undefined if x is negative. Mathematically it never should be since the magnitude of the dot product of two normalized vectors should never be more than 1, but computations always have error. I think this is causing your rendering artifacts.
I'd change those two expressions to:
sqrt(max(0.0, 1.0 - pow(max(0.0, LdotT), 2.0)))
sqrt(max(0.0, 1.0 - pow(max(0.0, VdotT), 2.0)))
The code looks a lot uglier, but it's safer and max(0.0, x) is a pretty cheap operation.
Edit: I just noticed pow(VdotR, 2.0); I'd change that too.
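An equivalent variant, sketched here with the question's own variable names, is to clamp the dot products up front so the squared terms can never exceed 1.0 (note this keeps the true square for negative dot products, whereas max(0.0, x) flattens them to 0 first; pick whichever matches the intended shading):
float LdotT = clamp(dot(tangent, normalize(lightVector)), -1.0, 1.0);
// note: the question's eye_position is (0.0, 0.0, 0.0), so this normalize() receives
// a zero vector, which is a separate hazard worth fixing as well
float VdotT = clamp(dot(tangent, normalize(mat3(u_mvMatrix) * eye_position)), -1.0, 1.0);
// with |x| <= 1.0, the sqrt arguments below cannot go negative
float NdotL = sqrt(1.0 - LdotT * LdotT);
float VdotR = NdotL * sqrt(1.0 - VdotT * VdotT) - VdotT * LdotT;
v_DiffuseIntensity = max(NdotL * 0.4 + 0.6, 0.0);
v_SpecularIntensity = max(VdotR * VdotR * 0.9, 0.0); // VdotR * VdotR instead of pow(VdotR, 2.0)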

Directional lighting is not constant in OpenGL ES 2.0/3.0

Problem: The direction of the directional light changes when the position of the object changes.
I looked at posts about a similar problem:
Directional light in worldSpace is dependent on viewMatrix
OpenGL directional light shader
Diffuse lighting for a moving object
Based on these posts, I tried to apply this:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_vMatrix;
in vec4 a_position;
in vec3 a_normal;
const vec3 lightDirection = vec3(-1.9, 0.0, -5.0);
...
void main() {
vec3 modelViewNormal = vec3(u_mvMatrix * vec4(a_normal, 0.0));
vec3 lightVector = lightDirection * mat3(u_vMatrix);
float diffuseFactor = max(dot(modelViewNormal, -lightVector), 0.0);
...
}
But the result looks like this:
I also tried:
vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
vec3 lightVector = normalize(lightDirection - modelViewVertex);
float diffuseFactor = max(dot(modelViewNormal, lightVector), 0.0);
And:
vec3 lightVector = normalize(lightDirection - modelViewVertex);
lightVector = lightVector * mat3(u_vMatrix);
But the result:
What changes need to be made to the code so that all objects are lit identically?
Thanks in advance!
Solution:
In practice, creating directional lighting was not such an easy task for me. On Rabbid76's advice, I changed the order of multiplication. Following another piece of Rabbid76's advice (post), I also created a custom point of view:
Matrix.setLookAtM(pointViewMatrix, 0, 3.8f, 0.0f, 2.8f,
    0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f)
I also calculated the eye coordinates and the light vector, although the camera is set at [0, 0, 0]:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_pointViewMatrix;
in vec4 a_position;
in vec3 a_normal;
const vec3 lightPosition = vec3(-5.0, 0.0, 1.0);
...
void main() {
// transform normal orientation into eye space
vec3 modelViewNormal = vec3(u_mvMatrix * vec4(a_normal, 0.0));
vec3 modelViewVertex = vec3(u_mvMatrix * a_position); // eye coordinates
vec3 lightVector = normalize(lightPosition - modelViewVertex);
lightVector = mat3(u_pointViewMatrix) * lightVector;
float diffuseFactor = max(dot(modelViewNormal, lightVector), 0.0);
...
}
Only after these steps did the picture become good:
The small differences are probably caused by the strong perspective.
The vector has to stand on the right-hand side of the matrix (matrix * vector). See GLSL Programming/Vector and Matrix Operations.
vec3 lightVector = lightDirection * mat3(u_vMatrix); // wrong: row-vector multiplication
vec3 lightVector = mat3(u_vMatrix) * lightDirection; // correct
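The reason the order matters: in GLSL, multiplying a vector from the left is the same as multiplying by the transposed matrix. A minimal illustration (transpose() is available here since the question's shaders use #version 300 es):
mat3 V = mat3(u_vMatrix);
vec3 a = lightDirection * V;            // row-vector form: equals transpose(V) * lightDirection
vec3 b = transpose(V) * lightDirection; // same result as a
vec3 c = V * lightDirection;            // column-vector form: the intended transform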
If you want to do the light calculations in view space, then the normal vector has to be transformed from object (model) space to view space by the model-view matrix, and the light direction has to be transformed from world space to view space by the view matrix. For instance:
void main() {
    vec3 modelViewNormal = mat3(u_mvMatrix) * a_normal;
    vec3 lightVector = mat3(u_vMatrix) * lightDirection;
    float diffuseFactor = max(dot(modelViewNormal, -lightVector), 0.0);
    // [...]
}

GLSL 2.0 shader giving different light on different devices

I am trying to write a shader for an OpenGL ES 2.0 view in Android.
My shader is :
Vertex shader:
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = normalize(vec3(u_MVMatrix * vec4(a_Normal, 0.0)));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = (u_MVPMatrix * a_Position);
}
Fragment Shader:
precision highp float; // Set the default precision to high.
uniform vec3 u_LightPos1; // The position of the light in eye space.
uniform vec3 u_LightDir1; // The direction of the light in eye space.
float l_spotCutOff=45.0;
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
float cutoff = 0.1;
// The entry point for our fragment shader.
void main()
{
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector1 = normalize(u_LightPos1 - v_Position);
// Will be used for attenuation.
float distance1 = length(u_LightPos1 - v_Position);
float diffuse=0.0;
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse1 = max(dot(v_Normal, lightVector1), 0.1);
// Add attenuation.
diffuse1 = diffuse1 * (1.0 / (1.0+(0.25*distance1)));
// Add ambient lighting
diffuse = diffuse1+0.2;
// Multiply the color by the diffuse illumination level and texture value to get final output color.
vec4 color = (texture2D(u_Texture, v_TexCoordinate));
color.rgb *= (diffuse);
if( color.a < cutoff)
discard;
gl_FragColor = color;
}
Now the shaders are working, but they behave differently on different devices:
Device 1 (Moto X Play): [screenshot 1]
Device 2 (Samsung S7): [screenshot 2]
Can anyone help?
The issue may be in the texture format/type you have used. Not all devices support all texture formats.
For example, if your output color can have negative values and the device's texture format doesn't support them, they will get clamped to 0 and might give different results.
It is better to check the capabilities of both devices using:
GLES20.glGetString(GLES20.GL_EXTENSIONS);

Very slow fract operation on Galaxy SII and SIII

My terrain uses a shader which itself uses four different textures. It runs fine on Windows and Linux machines, but on Android it gets only ~25 FPS on both Galaxy phones. I thought that the textures were the problem, but no: as it turns out, the problem is the part where I divide the texture coordinates and use fract to get tiled coordinates. Without it, I get 60 FPS.
// Material data.
//uniform vec3 uAmbient;
//uniform vec3 uDiffuse;
//uniform vec3 uLightPos[8];
//uniform vec3 uEyePos;
//uniform vec3 uFogColor;
uniform sampler2D terrain_blend;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D dirt;
varying vec2 varTexCoords;
//varying vec3 varEyeNormal;
//varying float varFogWeight;
//------------------------------------------------------------------
// Name: fog
// Desc: applies calculated fog weight to fog color and mixes with
// specified color.
//------------------------------------------------------------------
//vec4 fog(vec4 color) {
// return mix(color, vec4(uFogColor, 1.0), varFogWeight);
//}
void main(void)
{
/*vec3 N = normalize(varEyeNormal);
vec3 L = normalize(uLightPos[0]);
vec3 H = normalize(L + normalize(uEyePos));
float df = max(0.0, dot(N, L));
vec3 col = uAmbient + uDiffuse * df;*/
// Take color information from textures and tile them.
vec2 tiledCoords = varTexCoords;
//vec2 tiledCoords = fract(varTexCoords / 0.05); // <========= HERE!!!!!!!!!
//vec4 colGrass = texture2D(grass, tiledCoords);
vec4 colGrass = texture2D(grass, tiledCoords);
//vec4 colDirt = texture2D(dirt, tiledCoords);
vec4 colDirt = texture2D(dirt, tiledCoords);
//vec4 colRock = texture2D(rock, tiledCoords);
vec4 colRock = texture2D(rock, tiledCoords);
// Take color information from not tiled blend map.
vec4 colBlend = texture2D(terrain_blend, varTexCoords);
// Find the inverse of all the blend weights.
float inverse = 1.0 / (colBlend.r + colBlend.g + colBlend.b);
// Scale colors by its corresponding weight.
colGrass *= colBlend.r * inverse;
colDirt *= colBlend.g * inverse;
colRock *= colBlend.b * inverse;
vec4 final = colGrass + colDirt + colRock;
//final = fog(final);
gl_FragColor = final;
}
Note: there's some more code for light calculation and fog, but it isn't used. I indicated the line that, when uncommented, causes massive lag. I tried using floor and calculating the fractional part manually, but the lag is the same. What might be wrong?
EDIT: Now here's what I don't understand.
This:
vec2 tiledCoords = fract(varTexCoords * 2.0);
Runs great.
This:
vec2 tiledCoords = fract(varTexCoords * 10.0);
Runs average on SIII.
This:
vec2 tiledCoords = fract(varTexCoords * 20.0);
Lags...
This:
vec2 tiledCoords = fract(varTexCoords * 100.0);
Well 5FPS is still better than I expected...
So what gives? Why is this happening? To my understanding this shouldn't make any difference. But it does. And a huge one.
I would run your code through a profiler (check the Mali-400 tools), but by the looks of it you are killing the texture cache. For the first pixel computed, all four texture look-ups are fetched, and the contiguous texel data is pulled into the texture cache along with them. For the next pixel, however, you are no longer accessing data in the cache but texels quite far away (scaled by 10, 20, etc.), which completely defeats the purpose of such a cache.
This is of course a guess; without proper profiling it is hard to tell.
EDIT: #harism also pointed you in that direction.

OpenGL ES 2.0: parts of a model are occluded where they shouldn't be. Is the z-buffer to blame?

I'm using Assimp to render 3D models with OpenGL ES 2.0. I'm currently having a strange problem in which some parts of the model are not visible, even when they should be. It's easy to see it in these pictures:
In this second image I rendered (a linearized version of) the z-buffer into screen to see if it could be a z-buffer problem. Black pixels are near the camera:
I tried changing the values for z-near and z-far without any effect. Right now I do this on initialisation:
glEnable(GL_CULL_FACE); // cull back-facing polygons
glEnable(GL_DEPTH_TEST);
And I'm also doing this for every frame:
glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
I thought it could be a face winding problem, so I tried to disable GL_CULL_FACE, but it didn't work. I'm pretty sure the model is fine, since Blender can render it correctly.
I'm using these shaders right now:
// vertex shader
uniform mat4 u_ModelMatrix; // A constant representing the model matrix.
uniform mat4 u_ViewMatrix; // A constant representing the view matrix.
uniform mat4 u_ProjectionMatrix; // A constant representing the projection matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
void main()
{
// Transform the vertex into eye space.
mat4 u_ModelViewMatrix = u_ViewMatrix * u_ModelMatrix;
v_Position = vec3(u_ModelViewMatrix * a_Position);
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_ModelViewMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_ProjectionMatrix * u_ModelViewMatrix * a_Position;
}
And this is the fragment shader:
// fragment shader
uniform sampler2D u_Texture; // The input texture.
uniform int u_TexCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
vec3 u_LightPos = vec3(1.0);
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.0);
// Add attenuation.
diffuse = diffuse * (1.0 / distance);
// Add ambient lighting
diffuse = diffuse + 0.2;
diffuse = 1.0;
//gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));// Textured version
float d = (2.0 * 0.1) / (100.0 + 0.1 - gl_FragCoord.z * (100.0 - 0.1)); // linearize depth, assuming near = 0.1 and far = 100.0
gl_FragColor = vec4(d, d, d, 1.0);// z-buffer render
}
I'm using VBOs with indices to load the geometry.
Of course I can paste any other code you think may be relevant, but for now I'm happy to get some ideas of what can cause this strange behavior, or some possible tests I can do.
OK, I solved the problem. I'm posting the solution since it may be useful to future googlers.
Basically, I didn't request a depth buffer. I'm doing all the rendering in native code, but all the OpenGL context initialization is done on the Java side. I used one of the Android samples (GL2JNIActivity) as a starting point, but it didn't request any depth buffer and I didn't notice that.
I solved it by setting the depth buffer size to 24 when setting the ConfigChooser:
setEGLConfigChooser(translucent ?
    new ConfigChooser(8, 8, 8, 8, 24 /*depth*/, 0) :
    new ConfigChooser(5, 6, 5, 0, 24 /*depth*/, 0));
