I am using OpenGL ES 3.0, and my GLSL version is #version 300 es.
I am trying to calculate a luminance histogram on the GPU.
I have verified that my device supports vertex texture fetch, and I am trying to read color information in the vertex shader using the texelFetch function. I am using texelFetch because I pass in every texture coordinate, so that I can read the color at every pixel.
Attaching the code:
#version 300 es
uniform sampler2D imageTexture;
in vec2 texturePosition;
const float SCREEN_WIDTH = 1024.0;
in vec4 position;
vec2 texturePos;
out vec3 colorFactor;
const vec3 W = vec3(0.299, 0.587, 0.114);
void main() {
texturePos = texturePosition / SCREEN_WIDTH;
vec3 color = texelFetch(imageTexture, texturePos.xy,0).rgb;
float luminance = dot(color.xyz,W);
colorFactor = vec3(1.0, 1.0, 1.0);
gl_Position = vec4(-1.0 + (luminance * 0.00784313725), 0.0, 0.0, 1.0);
gl_PointSize = 1.0;
}
Now I am getting the error 'texelFetch' : no matching overloaded function found.
Could somebody help with this error and suggest a way to calculate a luminance histogram on the GPU?
texturePos needs to be an ivec2 with integer texel coordinates (i.e. the pixel position in the texture) rather than normalized [0.0, 1.0] floating-point coordinates.
The code below should work:
#version 300 es
uniform sampler2D imageTexture;
in ivec2 texturePosition;
in vec4 position;
out vec3 colorFactor;
const vec3 W = vec3(0.299, 0.587, 0.114);
void main() {
vec3 color = texelFetch(imageTexture, texturePosition, 0).rgb;
float luminance = dot(color, W);
colorFactor = vec3(1.0, 1.0, 1.0);
gl_Position = vec4(-1.0 + (luminance * 0.00784313725), 0.0, 0.0, 1.0);
gl_PointSize = 1.0;
}
But you need to feed the 'texturePosition' input with an array of integers containing all the (x, y) pixel coordinates you want to include in your histogram (via VBOs).
As pointed out in the comments, you'll need to use glVertexAttribIPointer() to feed integer attribute types.
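As a sanity check of the binning math, here is a CPU-side sketch in plain Python (a hypothetical helper, not part of the GL code) of what the shader computes per pixel: the Rec. 601 luminance dot product, then one histogram bucket per 8-bit luminance value. The shader instead maps luminance to an x position via -1.0 + luminance * (2/255) (0.00784313725 is 2/255) and lets additive blending of the points do the counting.

```python
# CPU-side model of the GPU histogram: one point per pixel, binned by luminance.
# W matches the shader's Rec. 601 weights; input is assumed 8-bit RGB.
W = (0.299, 0.587, 0.114)

def luminance_histogram(pixels):
    """pixels: iterable of (r, g, b) tuples with 0..255 components."""
    bins = [0] * 256
    for r, g, b in pixels:
        lum = r * W[0] + g * W[1] + b * W[2]  # dot(color, W) in the shader
        bins[min(255, round(lum))] += 1
    return bins

# Two black pixels and one white pixel:
hist = luminance_histogram([(255, 255, 255), (0, 0, 0), (0, 0, 0)])
```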
There is a problem that I just can't seem to get a handle on.
I have a fragment shader:
precision mediump float;
uniform vec3 u_AmbientColor;
uniform vec3 u_LightPos;
uniform float u_Attenuation_Constant;
uniform float u_Attenuation_Linear;
uniform float u_Attenuation_Quadradic;
uniform vec3 u_LightColor;
varying vec3 v_Normal;
varying vec3 v_fragPos;
vec4 fix(vec3 v);
void main() {
vec3 color = vec3(1.0,1.0,1.0);
vec3 vectorToLight = u_LightPos - v_fragPos;
float distance = length(vectorToLight);
vec3 direction = vectorToLight / distance;
float attenuation = 1.0/(u_Attenuation_Constant +
u_Attenuation_Linear * distance + u_Attenuation_Quadradic * distance * distance);
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction,0.0);
vec3 d = u_AmbientColor + diffuse;
gl_FragColor = fix(color * d);
}
vec4 fix(vec3 v){
float r = min(1.0,max(0.0,v.r));
float g = min(1.0,max(0.0,v.g));
float b = min(1.0,max(0.0,v.b));
return vec4(r,g,b,1.0);
}
I've been following a tutorial I found on the web.
Anyway, the ambientColor and lightColor uniforms are (0.2, 0.2, 0.2) and (1.0, 1.0, 1.0) respectively. The v_Normal is calculated in the vertex shader using the inverse transpose of the model-view matrix.
The v_fragPos is the result of multiplying the position by the model-view matrix.
Now, I expect that when I move the light position closer to the cube I render, it will just appear brighter, but the resulting image is very different:
(the little square there is an indicator for the light position)
I just don't understand how this can happen. I mean, I multiply each color component by the SAME value, so how can the result vary so much?
EDIT: I noticed that if I move the camera in front of the cube, the light is just shades of blue. It is the same problem, but maybe it's a clue.
Lambertian reflectance is computed with the dot product of the normal vector and the vector to the light source, not with a component-wise product.
See How does the calculation of the light model work in a shader program?
Use the dot function instead of the * (multiplication) operator. Change
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction,0.0);
to
vec3 diffuse = u_LightColor * attenuation * max(dot(normalize(v_Normal), direction), 0.0);
You can also simplify the code in the fix function: min and max can be substituted with clamp. These functions work component-wise, so they do not have to be called separately for each component:
vec4 fix(vec3 v)
{
return vec4(clamp(v, 0.0, 1.0), 1.0);
}
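To see why the dot product cannot shift the hue, here is a small CPU sketch (plain Python, hypothetical names) of the corrected diffuse term: a single scalar N·L scales every channel by the same factor, so the light can only brighten or darken the color.

```python
import math

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot3(v, v))
    return tuple(x / n for x in v)

# Corrected diffuse term: one scalar N.L multiplies every channel equally,
# so the light can only brighten or darken the color, never change its hue.
def diffuse(light_color, attenuation, normal, direction):
    lambert = max(dot3(normalize(normal), direction), 0.0)
    return tuple(c * attenuation * lambert for c in light_color)

# White light shining straight along the (unnormalized) normal:
d = diffuse((1.0, 1.0, 1.0), 1.0, (0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
```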
I am trying to implement anisotropic lighting.
Vertex shader:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_vMatrix;
in vec4 a_position;
in vec3 a_normal;
...
out lowp float v_DiffuseIntensity;
out lowp float v_SpecularIntensity;
const vec3 lightPosition = vec3(-1.0, 0.0, 5.0);
const lowp vec3 grainDirection = vec3(15.0, 2.8, -1.0);
const vec3 eye_position = vec3(0.0, 0.0, 0.0);
void main() {
// transform normal orientation into eye space
vec3 modelViewNormal = mat3(u_mvMatrix) * a_normal;
vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
vec3 lightVector = normalize(lightPosition - modelViewVertex);
lightVector = mat3(u_vMatrix) * lightVector;
vec3 normalGrain = cross(modelViewNormal, grainDirection);
vec3 tangent = normalize(cross(normalGrain, modelViewNormal));
float LdotT = dot(tangent, normalize(lightVector));
float VdotT = dot(tangent, normalize(mat3(u_mvMatrix) * eye_position));
float NdotL = sqrt(1.0 - pow(LdotT, 2.0));
float VdotR = NdotL * sqrt(1.0 - pow(VdotT, 2.0)) - VdotT * LdotT;
v_DiffuseIntensity = max(NdotL * 0.4 + 0.6, 0.0);
v_SpecularIntensity = max(pow(VdotR, 2.0) * 0.9, 0.0);
...
}
Fragment shader:
...
in lowp float v_DiffuseIntensity;
in lowp float v_SpecularIntensity;
const lowp vec3 default_color = vec3(0.1, 0.7, 0.9);
void main() {
...
lowp vec3 resultColor = (default_color * v_DiffuseIntensity)
+ v_SpecularIntensity;
outColor = vec4(resultColor, 1.0);
}
Overall, the lighting works well on different devices, but an artifact appears on a SAMSUNG tablet, as shown in the figure:
It seems that the darkest areas become completely black. Can anyone suggest why this is happening? Thanks for any answer/comment!
You've got a couple of expressions that risk undefined behaviour:
sqrt(1.0 - pow(LdotT, 2.0))
sqrt(1.0 - pow(VdotT, 2.0))
The pow function is undefined if x is negative. I suspect you're getting away with this because y is 2.0, so these calls are probably optimised to just x * x.
The sqrt function is undefined if x is negative. Mathematically it never should be, since the magnitude of the dot product of two normalized vectors can never exceed 1, but computation always carries error. I think this is what is causing your rendering artifacts.
I'd change those two expressions to:
sqrt(max(0.0, 1.0 - pow(max(0.0, LdotT), 2.0)))
sqrt(max(0.0, 1.0 - pow(max(0.0, VdotT), 2.0)))
The code looks a lot uglier, but it's safer and max(0.0, x) is a pretty cheap operation.
Edit: Just noticed pow(VdotR, 2.0), I'd change that too.
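As a CPU-side illustration of the guarded expression (plain Python, hypothetical helper), note that a dot product of two "normalized" vectors can exceed 1.0 by rounding error, which would make the unguarded sqrt produce NaN on the GPU:

```python
import math

def safe_nsq(t):
    """sqrt(1 - t*t) with the guards suggested above (t is a dot product)."""
    t = max(0.0, t)                          # pow(x, 2.0) is undefined for x < 0 in GLSL
    return math.sqrt(max(0.0, 1.0 - t * t))  # guard sqrt against tiny negatives

# Slightly over 1.0 due to rounding: unguarded sqrt(1.0 - t * t) would be NaN
# (or raise, on the CPU); the guarded form returns 0.0.
val = safe_nsq(1.0000001)
```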
Problem: The direction of the directional light changes when the position of the object changes.
I looked at posts describing a similar problem:
Directional light in worldSpace is dependent on viewMatrix
OpenGL directional light shader
Diffuse lighting for a moving object
Based on these posts, I tried to apply this:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_vMatrix;
in vec4 a_position;
in vec3 a_normal;
const vec3 lightDirection = vec3(-1.9, 0.0, -5.0);
...
void main() {
vec3 modelViewNormal = vec3(u_mvMatrix * vec4(a_normal, 0.0));
vec3 lightVector = lightDirection * mat3(u_vMatrix);
float diffuseFactor = max(dot(modelViewNormal, -lightVector), 0.0);
...
}
But the result looks like this:
Tried also:
vec3 modelViewVertex = vec3(u_mvMatrix * a_position);
vec3 lightVector = normalize(lightDirection - modelViewVertex);
float diffuseFactor = max(dot(modelViewNormal, lightVector), 0.0);
And:
vec3 lightVector = normalize(lightDirection - modelViewVertex);
lightVector = lightVector * mat3(u_vMatrix);
But the result:
What changes need to be made to the code so that all objects are lit identically?
Thanks in advance!
Solution:
In practice, creating directional lighting was not such an easy task for me. On Rabbid76's advice, I changed the order of multiplication. On another piece of Rabbid76's advice (post), I also created a custom point of view:
Matrix.setLookAtM(pointViewMatrix, 0, 3.8f, 0.0f, 2.8f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f)
I also calculated eye coordinates and a light vector, even though the camera is set at [0, 0, 0]:
#version 300 es
uniform mat4 u_mvMatrix;
uniform mat4 u_pointViewMatrix;
in vec4 a_position;
in vec3 a_normal;
const vec3 lightPosition = vec3(-5.0, 0.0, 1.0);
...
void main() {
// transform normal orientation into eye space
vec3 modelViewNormal = vec3(u_mvMatrix * vec4(a_normal, 0.0));
vec3 modelViewVertex = vec3(u_mvMatrix * a_position); // eye coordinates
vec3 lightVector = normalize(lightPosition - modelViewVertex);
lightVector = mat3(u_pointViewMatrix) * lightVector;
float diffuseFactor = max(dot(modelViewNormal, lightVector), 0.0);
...
}
Only after these steps did the picture look right:
The small differences are probably caused by the strong perspective.
The vector has to be multiplied by the matrix from the right. See GLSL Programming/Vector and Matrix Operations. Change
vec3 lightVector = lightDirection * mat3(u_vMatrix);
to
vec3 lightVector = mat3(u_vMatrix) * lightDirection;
If you want to do the lighting calculations in view space, then the normal vector has to be transformed from object (model) space to view space by the model-view matrix, and the light direction has to be transformed from world space to view space by the view matrix. For instance:
void main() {
vec3 modelViewNormal = mat3(u_mvMatrix) * a_normal;
vec3 lightVector = mat3(u_vMatrix) * lightDirection;
float diffuseFactor = max(dot(modelViewNormal, -lightVector), 0.0);
// [...]
}
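To illustrate the difference between v * M and M * v, here is a plain-Python sketch (hypothetical helpers) of the two conventions with a 3x3 matrix stored as rows; v * M is equivalent to multiplying by the transpose, so for a non-symmetric matrix such as a rotation the results differ:

```python
# In GLSL, M * v treats v as a column vector, while v * M treats v as a row
# vector, which equals multiplying by transpose(M). Sketch with a 3x3 matrix
# stored as a tuple of rows:
def mat_vec(M, v):  # M * v (column-vector convention)
    return tuple(sum(M[r][c] * v[c] for c in range(3)) for r in range(3))

def vec_mat(v, M):  # v * M, i.e. transpose(M) * v
    return tuple(sum(v[r] * M[r][c] for r in range(3)) for c in range(3))

# A 90-degree rotation about z is not symmetric, so the two disagree:
M = ((0.0, 1.0, 0.0),
     (-1.0, 0.0, 0.0),
     (0.0, 0.0, 1.0))
v = (1.0, 0.0, 0.0)
a = mat_vec(M, v)  # column-vector convention
b = vec_mat(v, M)  # row-vector convention
```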
I am trying to implement refraction in OpenGL ES 2.0/3.0, using the following shaders:
Vertex shader:
#version 300 es
precision lowp float;
uniform mat4 u_mvMatrix;
in vec4 a_position;
in vec3 a_normal;
...
out mediump vec2 v_refractCoord;
const mediump float eta = 0.95;
void main() {
vec4 eyePositionModel = u_mvMatrix * a_position;
// eye direction in model space
mediump vec3 eyeDirectModel = normalize(a_position.xyz - eyePositionModel.xyz);
// calculate refraction direction in model space
mediump vec3 refractDirect = refract(eyeDirectModel, a_normal, eta);
// project refraction
refractDirect = (u_mvpMatrix * vec4(refractDirect, 0.0)).xyw;
// map refraction direction to 2d coordinates
v_refractCoord = 0.5 * (refractDirect.xy / refractDirect.z) + 0.5;
...
}
Fragment shader:
...
in mediump vec2 v_refractCoord;
uniform samplerCube s_texture; // skybox
void main() {
outColor = texture(s_texture, vec3(v_refractCoord, 1.0));
}
Method for loading texture:
#JvmStatic
fun createTextureCubemap(context: Context, rowID: Int): Int {
val input = context.resources.openRawResource(rowID)
val bitmap = BitmapFactory.decodeStream(input)
val textureId = IntArray(1)
glGenTextures(1, textureId, 0)
glBindTexture(GL_TEXTURE_CUBE_MAP, textureId[0])
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, bitmap, 0)
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, bitmap, 0)
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, bitmap, 0)
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, bitmap, 0)
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, bitmap, 0)
GLUtils.texImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, bitmap, 0)
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
return textureId[0]
}
But the texture comes out with large, blocky pixels, like:
What could be the reason for this? Maybe this is normal for a low-poly model? It seems that the texture is too close.
Note: the fewer polygons, the worse the quality becomes.
Thanks in advance for any comment/answer!
image from goodfon.ru
Solution: On #Rabbid76's advice, I changed the normal data. It turned out that in Blender you need to set the object's shading to smooth (not flat); this produces per-vertex normals when exporting to the *.obj format: Why OBJ export writes face normals instead of vertex normals
Also, on #Rabbid76's advice, I changed the line to:
vec3 eyeDirectModel = normalize(- eyePositionModel.xyz);
As a result, the pixelation disappeared:
In addition, pixel artifacts may also appear when refraction is calculated in the vertex shader, so I moved the calculation to the fragment shader. Here is the modified shader code:
Vertex shader:
#version 300 es
precision lowp float;
uniform mat4 u_mvpMatrix;
uniform mat4 u_mvMatrix;
in vec4 a_position;
in vec3 a_normal;
out vec3 v_normal;
out lowp float SpecularIntensity;
out vec3 v_eyeDirectModel;
float getSpecularIntensity(vec4 position, vec3 a_normal, vec3 eyeDirectModel) {
float shininess = 30.0;
vec3 lightPosition = vec3(-20.0, 0.0, 0.0);
mediump vec3 LightDirModel = normalize(lightPosition - position.xyz);
mediump vec3 halfVector = normalize(LightDirModel + eyeDirectModel);
lowp float NdotH = max(dot(a_normal, halfVector), 0.0);
return pow(NdotH, shininess);
}
void main() {
v_normal = a_normal;
vec4 eyePositionModel = u_mvMatrix * a_position;
// Eye direction in model space
vec3 eyeDirectModel = normalize(- eyePositionModel.xyz);
// specular lighting
SpecularIntensity = getSpecularIntensity(a_position, a_normal, eyeDirectModel);
v_eyeDirectModel = eyeDirectModel;
gl_Position = u_mvpMatrix * a_position;
}
Fragment shader:
#version 300 es
precision lowp float;
uniform mat4 u_mvpMatrix;
in vec3 v_normal;
in lowp float SpecularIntensity;
in vec3 v_eyeDirectModel;
out vec4 outColor;
uniform samplerCube s_texture; // skybox
const float eta = 0.65;
void main() {
// Calculate refraction direction in model space
vec3 refractDirect = refract(v_eyeDirectModel, normalize(v_normal), eta);
// Project refraction
refractDirect = (u_mvpMatrix * vec4(refractDirect, 0.0)).xyw;
// Map refraction direction to 2d coordinates
vec2 refractCoord = 0.5 * (refractDirect.xy / refractDirect.z) + 0.5;
vec4 glassColor = texture(s_texture, vec3(refractCoord, 1.0));
outColor = glassColor + SpecularIntensity;
outColor.a = 0.8; // transparent
}
First of all, there is a mistake in the shader code: a_position.xyz - eyePositionModel.xyz does not make any sense, since a_position is the vertex coordinate in model space, while eyePositionModel is the vertex coordinate in view space.
You have to compute the incident vector for refract in view space. That is the vector from the eye position to the vertex. Since the eye position in view space is (0, 0, 0), it is:
vec4 eyePositionView = u_mvMatrix * a_position;
// eye direction in view space
mediump vec3 eyeDirectView = normalize(- eyePositionView.xyz);
Furthermore, there is an issue with the normal vector attribute.
The problem is caused by the fact that the normal vectors are computed per face rather than per vertex.
Note that the refraction direction (refractDirect) depends on the vertex coordinate (eyeDirectModel) and the normal vector (a_normal):
mediump vec3 refractDirect = refract(eyeDirectModel, a_normal, eta);
Since the normal vectors differ between adjacent faces, you can see a noticeable edge between the faces of the mesh.
If the normal vectors are computed per vertex, then adjacent faces share both the vertex coordinates and the corresponding normal vectors. This causes a smooth transition from face to face.
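For reference, the GLSL refract(I, N, eta) built-in follows a simple closed formula; here is a plain-Python transcription (hypothetical helper, following the formula given in the GLSL specification):

```python
import math

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

# Plain-Python transcription of GLSL refract(I, N, eta): I is the incident
# vector (pointing toward the surface), N the unit normal, eta the ratio of
# indices of refraction.
def refract(I, N, eta):
    d = dot3(N, I)
    k = 1.0 - eta * eta * (1.0 - d * d)
    if k < 0.0:
        return (0.0, 0.0, 0.0)  # total internal reflection
    f = eta * d + math.sqrt(k)
    return tuple(eta * i - f * n for i, n in zip(I, N))

# Head-on incidence passes straight through for any eta:
r = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 0.65)
```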
For each point my OpenGL shader program receives, it creates a red ring that smoothly transitions between opaque and totally transparent. The shader program works, but it has banding artifacts.
The fragment shader is below.
#version 110
precision mediump float;
void main() {
float dist = distance(gl_PointCoord.xy, vec2(0.5, 0.5));
// Edge value is 0.5, it should be 1.
// Inner most value is 0 it should stay 0.
float inner_circle = 2.0 * dist;
float circle = 1.0 - inner_circle;
vec4 pixel = vec4(1.0, 0.0, 0.0, inner_circle * circle );
gl_FragColor = pixel;
}
Here's the less interesting vertex shader that I don't think is the cause of the problem.
#version 110
attribute vec2 aPosition;
uniform float uSize;
uniform vec2 uCamera;
void main() {
// Square the view and map the top of the screen to 1 and the bottom to -1.
gl_Position = vec4(aPosition, 0.0, 1.0);
gl_Position.x = gl_Position.x * uCamera.y / uCamera.x;
// Set point size
gl_PointSize = (uSize + 1.0) * 100.0;
}
Please help me figure out why my OpenGL shader program has banding artifacts.
P.S. Incidentally, this is on an Android Acer Iconia tablet.
Android's GLSurfaceView uses an RGB565 surface by default. Either enable dithering (glEnable(GL_DITHER)) or install a custom EGLConfigChooser to choose an RGBA or RGBX surface configuration with 8 bits per channel.
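To see why a 565 surface bands, here is a small Python sketch (hypothetical helper) that quantizes an 8-bit channel to 5 bits the way an RGB565 surface does; a smooth 256-level gradient collapses to 32 flat steps, which read as visible rings:

```python
# An RGB565 surface stores red and blue in 5 bits and green in 6; a smooth
# 8-bit gradient therefore collapses into a few dozen flat steps (bands).
def quantize_565_red(value8):
    """Map an 8-bit red value onto a 5-bit surface channel and back."""
    v5 = value8 >> 3              # keep the top 5 bits
    return (v5 << 3) | (v5 >> 2)  # expand back to the 0..255 range

levels = sorted({quantize_565_red(v) for v in range(256)})
```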