OpenGL ES 2.0 / GLSL not rendering data (Kotlin) - android

I am trying to implement a basic shading program with GLES 2/3, and have pieced together this code from various tutorials. Everything looks correct to me, and it compiles fine, but nothing appears on my screen.
It rendered fine until I added the normal and light-position data; then it broke, and I haven't been able to find a way to fix it.
Can anyone see what's wrong here?
class MyRenderer : GLSurfaceView.Renderer {
    var glProgram = -1
    var uMV = -1
    var uMVP = -1
    var uLightPos = -1
    var aPosition = -1
    var aNormal = -1
    var aColor = -1
    val verts = arrayOf(0f,1f,0f, -1f,0f,0f, 1f,0f,0f)
    val vBuf = allocateDirect(verts.size*4).order(nativeOrder()).asFloatBuffer()
    val norms = arrayOf(0f,0f,-1f, 0f,0f,-1f, 0f,0f,-1f)
    val nBuf = allocateDirect(norms.size*4).order(nativeOrder()).asFloatBuffer()

    override fun onSurfaceCreated(g: GL10, c: EGLConfig) {
        glClearColor(0f,0f,0f,1f)
        glClearDepthf(1f)
        glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
        glProgram = glCreateProgram()
        val vShader = glCreateShader(GL_VERTEX_SHADER)
        glShaderSource(vShader, "#version 100\n" +
            "uniform mat4 u_mvMat; " +
            "uniform mat4 u_mvpMat; " +
            "uniform vec3 u_LightPos; " +
            "attribute vec4 a_Position; " +
            "attribute vec3 a_Normal; " +
            "attribute vec4 a_Color; " +
            "varying vec4 v_Color; " +
            "void main(){ " +
            "  vec3 vertex = vec3(u_mvMat*a_Position); " +
            "  vec3 normal = vec3(u_mvMat*vec4(a_Normal,0.0)); " +
            "  vec3 lightVector = normalize(u_LightPos-vertex); " +
            "  float distance = length(u_LightPos-vertex); " +
            "  float diffuse = max(dot(normal,lightVector),0.1) " +
            "    / (1.0+distance*distance/4.0); " +
            "  v_Color = a_Color*diffuse; " +
            "  gl_Position = u_mvpMat*a_Position; }")
        glCompileShader(vShader)
        glAttachShader(glProgram, vShader)
        val fShader = glCreateShader(GL_FRAGMENT_SHADER)
        glShaderSource(fShader, "#version 100\n" +
            "precision mediump float; " +
            "varying vec4 v_Color; " +
            "void main(){ " +
            "  gl_FragColor = v_Color; }")
        glCompileShader(fShader)
        glAttachShader(glProgram, fShader)
        glLinkProgram(glProgram)
        glUseProgram(glProgram)
        uMVP = glGetUniformLocation(glProgram, "u_mvpMat")
        uMV = glGetUniformLocation(glProgram, "u_mvMat")
        uLightPos = glGetUniformLocation(glProgram, "u_LightPos")
        aPosition = glGetAttribLocation(glProgram, "a_Position")
        aNormal = glGetAttribLocation(glProgram, "a_Normal")
        aColor = glGetAttribLocation(glProgram, "a_Color")
        glVertexAttribPointer(aPosition, 4, GL_FLOAT, false, 3*4, vBuf)
        glEnableVertexAttribArray(aPosition)
        glVertexAttribPointer(aNormal, 4, GL_FLOAT, false, 3*4, nBuf)
        glEnableVertexAttribArray(aNormal)
        val modelM = FloatArray(16)
        setIdentityM(modelM, 0)
        val viewM = FloatArray(16)
        setLookAtM(viewM, 0, 0f,0f,-5f, 0f,0f,0f, 0f,0f,1f)
        val projM = FloatArray(16)
        frustumM(projM, 0, -2f,2f, 1f,-1f, 1f,50f)
        val mvM = FloatArray(16)
        multiplyMM(mvM, 0, viewM, 0, modelM, 0)
        glUniformMatrix4fv(uMV, 1, false, mvM, 0)
        val mvpM = FloatArray(16)
        multiplyMM(mvpM, 0, projM, 0, mvM, 0)
        glUniformMatrix4fv(uMVP, 1, false, mvpM, 0)
        glUniform3i(uLightPos, -1, -10, -1)
        glVertexAttribI4i(aColor, 1, 1, 1, 1)
        glDrawArrays(GL_TRIANGLES, 0, verts.size/3)
    }

    override fun onSurfaceChanged(g: GL10, w: Int, h: Int) {}
    override fun onDrawFrame(g: GL10) {}
}

If you want to use color values in the range [0.0, 1.0], then you have to use glVertexAttrib4f rather than glVertexAttribI4i:
glVertexAttribI4i(aColor,1,1,1,1)
glVertexAttrib4f(aColor,1.0f,1.0f,1.0f,1.0f)
glVertexAttribI* expects the values to be signed or unsigned integers in the range [-2147483648, 2147483647] or [0, 4294967295], respectively. On that scale a component value of 1 is almost black.
The type of u_LightPos is a floating-point vector (vec3):
uniform vec3 u_LightPos;
You have to use glUniform3f rather than glUniform3i to set the value of a floating-point uniform variable:
glUniform3i(uLightPos,-1,-10,-1)
glUniform3f(uLightPos,-1f,-10f,-1f)
I recommend verifying that the shader compiled successfully with glGetShaderiv (parameter GL_COMPILE_STATUS). Compile error messages can be retrieved with glGetShaderInfoLog.
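For example, a minimal Kotlin sketch of such a check (the function name and log tag are placeholders; it assumes the standard android.opengl.GLES20 bindings):

import android.opengl.GLES20.GL_COMPILE_STATUS
import android.opengl.GLES20.glDeleteShader
import android.opengl.GLES20.glGetShaderInfoLog
import android.opengl.GLES20.glGetShaderiv
import android.util.Log

// Returns true if the shader compiled; otherwise logs the driver's message and deletes the shader.
fun checkShaderCompiled(shader: Int): Boolean {
    val status = IntArray(1)
    glGetShaderiv(shader, GL_COMPILE_STATUS, status, 0)
    if (status[0] == 0) {
        Log.e("MyRenderer", "Shader compile error: " + glGetShaderInfoLog(shader))
        glDeleteShader(shader)
        return false
    }
    return true
}

Call it right after each glCompileShader call; the analogous link check uses glGetProgramiv with GL_LINK_STATUS and glGetProgramInfoLog after glLinkProgram.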
I also recommend adding an ambient light component (for debugging), e.g.:
v_Color = a_Color*(diffuse + 0.5);
If you can "see" the geometry with the ambient light, then there are some possible issues:
the light source is on the back side of the geometry, so the back side is lit, but not the front side. That cause that only the almost black, unlit side is visible from the point of view.
The distance of the light source to the geometry is "too large". distance becomes a very huge value and so diffuse is very small and all the geometry is almost black.
The light source is in the closed volume of the geometry.
All the normal vectors point away from the camera. That may cause that dot(normal, lightVector) is less than 0.0.
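As the debug sketch, the diffuse term in the question's vertex shader can be temporarily replaced so that none of the normal-direction or attenuation issues above can black out the geometry (a sketch, not a final lighting model):

// abs() lights both sides of the triangle and the attenuation term is dropped,
// so a flipped normal or a distant light can no longer black out the mesh.
float diffuse = abs(dot(normalize(normal), lightVector));
v_Color = a_Color * (diffuse + 0.5);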

Related

Negative image filter to Black & White mode

I am using the https://github.com/natario1/CameraView library to capture a negative image as a positive, and it uses OpenGL shaders. I need a filter with which I can capture a negative image as a positive in Black & White mode, not in the normal color mode that is currently available in the library. I tried to combine the two filters, i.e. first capture the negative image as a positive in color mode and then apply the Black & White filter, but as I am new to OpenGL I was unable to do this. Please help me in this regard; it would be highly appreciated. The shaders I am using are as follows:
This shader is used to convert the negative to positive in color mode.
private final static String FRAGMENT_SHADER = "#extension GL_OES_EGL_image_external : require\n"
+ "precision mediump float;\n"
+ "varying vec2 "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+";\n"
+ "uniform samplerExternalOES sTexture;\n"
+ "void main() {\n"
+ " vec4 color = texture2D(sTexture, "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+");\n"
+ " float colorR = (1.0 - color.r) / 1.0;\n"
+ " float colorG = (1.0 - color.g) / 1.0;\n"
+ " float colorB = (1.0 - color.b) / 1.0;\n"
+ " gl_FragColor = vec4(colorR, colorG, colorB, color.a);\n"
+ "}\n";
This shader is used to change the normal positive image to Black & White mode.
private final static String FRAGMENT_SHADER = "#extension GL_OES_EGL_image_external : require\n"
+ "precision mediump float;\n"
+ "varying vec2 "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+";\n"
+ "uniform samplerExternalOES sTexture;\n" + "void main() {\n"
+ " vec4 color = texture2D(sTexture, "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+");\n"
+ " float colorR = (color.r + color.g + color.b) / 3.0;\n"
+ " float colorG = (color.r + color.g + color.b) / 3.0;\n"
+ " float colorB = (color.r + color.g + color.b) / 3.0;\n"
+ " gl_FragColor = vec4(colorR, colorG, colorB, color.a);\n"
+ "}\n";
Please help in making a filter which can directly capture the negative image as a positive in Black & White mode.
Thanks.
You can do that with a one-liner in a single shader:
gl_FragColor = vec4(vec3(dot(1.0 - color.rgb, vec3(1.0/3.0))), color.a);
Explanation:
the inverse color is:
vec3 inverseColor = 1.0 - color.rgb;
For the grayscale there are two options. Either the straightforward average:
float gray = (inverseColor.r + inverseColor.g + inverseColor.b) / 3.0;
Or by using the dot product:
float gray = dot(inverseColor.rgb, vec3(1.0/3.0));
Finally construct a vec3 from gray:
gl_FragColor = vec4(vec3(gray), color.a);
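Put together as a complete fragment shader for the question's setup, this could look like the following sketch (it assumes the same samplerExternalOES/varying setup as the shaders in the question; vTextureCoord stands in for DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME):

#extension GL_OES_EGL_image_external : require
precision mediump float;

varying vec2 vTextureCoord;            // DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME
uniform samplerExternalOES sTexture;

void main() {
    vec4 color = texture2D(sTexture, vTextureCoord);
    // Invert the color, then average the inverted channels into a single gray value
    gl_FragColor = vec4(vec3(dot(1.0 - color.rgb, vec3(1.0/3.0))), color.a);
}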

Opengles How to change black pixels to transparent pixels

My code is
vec4 textureColor = texture2D(uTextureSampler, vTextureCoord);
if (textureColor.r * 0.299 + textureColor.g * 0.587 + textureColor.b * 0.114 < 0.1) {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
} else {
    gl_FragColor = vec4(textureColor.r, textureColor.g, textureColor.b, textureColor.w);
}
My problem is how to judge whether a pixel is black. How can I do that? Should I convert RGB to HSV?
return "precision mediump float; \n"+
" varying highp vec2 " + VARYING_TEXTURE_COORD + ";\n" +
" \n" +
" uniform sampler2D " + TEXTURE_SAMPLER_UNIFORM + ";\n" +
" \n" +
" void main()\n" +
" {\n" +
" vec3 keying_color = vec3(0.0, 0.0, 0.0);\n" +
" float thresh = 0.45; // [0, 1.732]\n" +
" float slope = 0.1; // [0, 1]\n" +
" vec3 input_color = texture2D(" + TEXTURE_SAMPLER_UNIFORM + ", " + VARYING_TEXTURE_COORD + ").rgb;\n" +
" float d = abs(length(abs(keying_color.rgb - input_color.rgb)));\n" +
" float edge0 = thresh * (1.0 - slope);\n" +
" float alpha = smoothstep(edge0, thresh, d);\n" +
" gl_FragColor = vec4(input_color,alpha);\n" +
" }";
The keying_color variable stores the actual color we want to replace. It uses the classic RGB model, but the intensity is not expressed as a 0-255 integer; it is a float value in the range 0-1 (so 0 = 0.0, 255 = 1.0, 122 ≈ 0.478…). In the shader above the keying color is black, (0.0, 0.0, 0.0); if you want to key out a different color instead (e.g. green-screen green, (0.647, 0.941, 0.29)), measure the color yourself.
Note: Make sure you have the right color. Some color measurement software automatically converts colors to slightly different formats, such as AdobeRGB.
So where’s the magic?
We load the current pixel color into input_color, then calculate the difference between the input color and the keying color. Based on this difference, an alpha value is calculated and used for that pixel.
You can control how strict the comparison is using the slope and threshold values. It is a bit more complicated, but the most basic rule is: the higher the threshold, the bigger the tolerance.
So, we are done, right? Nope.
You can look at this link: http://blog.csdn.net/u012847940/article/details/47441923
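If you prefer to keep keying on luminance, as in your original snippet, the same smoothstep idea applies directly. A sketch of the body of main(), reusing the sampler and varying names from your code (the 0.05/0.15 band is arbitrary and needs tuning):

vec4 textureColor = texture2D(uTextureSampler, vTextureCoord);
// Rec. 601 luma weights, the same ones used in the question
float luma = dot(textureColor.rgb, vec3(0.299, 0.587, 0.114));
// Fade alpha in smoothly as the luminance rises through the threshold band
float alpha = smoothstep(0.05, 0.15, luma);
gl_FragColor = vec4(textureColor.rgb, textureColor.a * alpha);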

Opengl-es : How to access corners of a texel

I have a texel (rectangle) and I need to access its 4 corners.
vec2 offset = vec2(1,1)/vec2(texWidth, texHeight)
texture2D (texSource, texCoord + 0.5 * offset * ???? )
What should I fill in here to get both the top 2 and the bottom 2 corners?
[Edit] : Code as per Tommy's answer
" vec2 pixelSize = vec2(offsetx,offsety);\n" +
" vec2 halfPixelSize = pixelSize * vec2(0.5);\n" +
" vec2 texCoordCentre = vTextureCoord - mod(vTextureCoord, pixelSize) + halfPixelSize;\n" +
" vec2 topLeft = texCoordCentre - halfPixelSize;\n" +
" vec2 bottomRight = texCoordCentre + halfPixelSize;\n" +
" vec2 topRight = texCoordCentre + vec2(halfPixelSize.x, -halfPixelSize.y);\n" +
" vec2 bottomLeft = texCoordCentre + vec2(-halfPixelSize.x, halfPixelSize.y);\n" +
" vec4 p00 = texture2D(sTexture, topLeft);\n" +
" vec4 p02 = texture2D(sTexture, bottomRight);\n" +
" vec4 p20 = texture2D(sTexture, topRight);\n" +
" vec4 p22 = texture2D(sTexture, bottomLeft);\n" +
" vec4 pconv = 0.25*(p00 + p02 + p20 + p22);\n" +
A texture is always addressed by numbers in the range [0, 1). Taking a texel as being an individual pixel within a texture, each of those is an equal subdivision of the range [0, 1), hence if there are 16 of them the first occupies the region [0, 1/16), the next [1/16, 2/16), etc.
So the boundaries of the texel at n in a texture of size p are at n/p and (n+1)/p, and the four corners are at the combinations of the boundary positions for x and y.
If you have linear filtering enabled then you'll get an equal mix of the four adjoining texels by sampling at those locations; if you've got nearest filtering enabled then you'll get one of the four but be heavily subject to floating point rounding errors.
So, I think:
vec2 pixelSize = vec2(1.0) / vec2(texWidth, texHeight);
vec2 halfPixelSize = pixelSize * vec2(0.5);
vec2 texCoordCentre = texCoord - mod(texCoord, pixelSize) + halfPixelSize;
vec2 topLeft = texCoordCentre - halfPixelSize;
vec2 bottomRight = texCoordCentre + halfPixelSize;
vec2 topRight = texCoordCentre + vec2(halfPixelSize.x, -halfPixelSize.y);
vec2 bottomLeft = texCoordCentre + vec2(-halfPixelSize.x, halfPixelSize.y);
(... and if you were targeting ES 3 instead of 2, you could just use the textureSize function instead of messing about with uniforms)
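For example, an OpenGL ES 3.00 version of the four-corner average could look like this sketch (texSource, texCoord and the averaging are carried over from the snippets above; the in/out variable names are otherwise arbitrary):

#version 300 es
precision mediump float;

uniform sampler2D texSource;
in vec2 texCoord;
out vec4 fragColor;

void main() {
    // textureSize returns the texture dimensions in texels, so no size uniforms are needed
    vec2 pixelSize = vec2(1.0) / vec2(textureSize(texSource, 0));
    vec2 halfPixelSize = pixelSize * 0.5;
    vec2 texCoordCentre = texCoord - mod(texCoord, pixelSize) + halfPixelSize;

    vec2 topLeft = texCoordCentre - halfPixelSize;
    vec2 bottomRight = texCoordCentre + halfPixelSize;
    vec2 topRight = texCoordCentre + vec2(halfPixelSize.x, -halfPixelSize.y);
    vec2 bottomLeft = texCoordCentre + vec2(-halfPixelSize.x, halfPixelSize.y);

    // Average of the four corner samples, as in the ES 2.0 version above
    fragColor = 0.25 * (texture(texSource, topLeft) + texture(texSource, bottomRight)
                      + texture(texSource, topRight) + texture(texSource, bottomLeft));
}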

OpenGL ES 2.0 Directional lighting issues (diffuse)

I'm writing a lighting shader at the moment for my OpenGL ES 2.0 renderer. I've messed around with a sample project's shader and have gotten directional lighting working on that. But when I try to implement it in our own code, it acts really strange: certain parts seem to light up when they shouldn't, etc.
I'll post the shader code here; if you'd like more, please request it. A team-mate and I have been trying to fix this for the past week and a half with no progress, and we really need some help. Thanks.
final String vertexShader =
"uniform mat4 u_MVPMatrix;" // A constant representing the combined model/view/projection matrix.
+ "uniform mat4 u_MVMatrix;" // A constant representing the combined model/view matrix.
+ "uniform vec3 u_LightPos;" // The position of the light in eye space.
+ "uniform vec3 u_VectorToLight;"
+ "attribute vec4 a_Position;" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color;" // Per-vertex color information we will pass in.
+ "attribute vec3 a_Normal;" // Per-vertex normal information we will pass in.
+ "varying vec4 v_Color;" // This will be passed into the fragment shader.
+"vec3 modelViewVertex;"
+"vec3 modelViewNormal;"
+"vec4 getPointLighting();"
+"vec4 getAmbientLighting();"
+"vec4 getDirectionalLighting();"
+ "void main()" // The entry point for our vertex shader.
+ "{"
// Transform the vertex into eye space.
+ " modelViewVertex = vec3(u_MVMatrix * a_Position);"
// Transform the normal's orientation into eye space.
+ " modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));"
// Will be used for attenuation.
// Multiply the color by the illumination level. It will be interpolated across the triangle.
// + " v_Color = getAmbientLighting();"
+ " v_Color = getDirectionalLighting();"
//+ " v_Color += getPointLighting(); \n"
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
+ " gl_Position = u_MVPMatrix * a_Position;"
+ "}"
+ "vec4 getAmbientLighting(){"
+ " return a_Color * 0.1;"
+ "}"
+ "vec4 getPointLighting(){"
+ " float distance = length(u_LightPos - modelViewVertex);"
// Get a lighting direction vector from the light to the vertex.
+ " vec3 lightVector = normalize(u_LightPos - modelViewVertex);"
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
+ " float diffuse = max(dot(modelViewNormal, lightVector), 0.0);"
// Attenuate the light based on distance.
+ " diffuse = diffuse * (1.0 / (1.0 + distance));" +
" return a_Color * diffuse;"
+"}"
+"vec4 getDirectionalLighting(){" +
"vec3 lightVector = normalize(u_VectorToLight - modelViewVertex);"+
"float diffuse = max(dot(modelViewNormal, lightVector), 0.0);" +
"diffuse *= .3;"+
"return a_Color * diffuse;"+
"}";
I've left all of it in there, even stuff that we've given up on.
Thanks again.
Edit: I should probably also mention that the transforms are standard; the camera is rotated and moved by changing the view matrix, and the object is rotated by changing the model matrix.
In a directional lighting model, all light rays travel in a single, uniform direction; in other words, they are all parallel to each other. However, in your function getDirectionalLighting() the lightVector depends on modelViewVertex, which seems to be causing the issue.
Try changing it to:
vec3 lightVector = normalize(u_VectorToLight);
or simply
vec3 lightVector = normalize(-u_LightPos);
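Dropped into the question's shader, the corrected function would look roughly like this (a sketch; only the lightVector line changes, and it assumes u_VectorToLight holds the direction towards the light in eye space):

vec4 getDirectionalLighting() {
    // A directional light has a constant direction, independent of the vertex position
    vec3 lightVector = normalize(u_VectorToLight);
    float diffuse = max(dot(modelViewNormal, lightVector), 0.0);
    diffuse *= 0.3;
    return a_Color * diffuse;
}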

How to print a float value in vertex shader

I am trying to write a small program in OpenGL ES 2.0, but I find it quite hard to inspect variables in the shader language.
For example, I want to know a value in the vertex shader. I pass the value to the fragment shader and use it as the red component of gl_FragColor. But I find it quite hard to pass the value: if I declare it as a varying, then the value changes.
Here is the code; log is the value I want to print.
public static final String VERTEX_SHADER =
"attribute vec4 position;\n" +
"attribute vec2 inputTextureCoordinate;\n" +
"\n" +
"uniform float texelWidthOffset; \n" +
"uniform float texelHeightOffset; \n" +
"\n" +
"varying vec2 centerTextureCoordinate;\n" +
"varying vec2 oneStepLeftTextureCoordinate;\n" +
"varying vec2 twoStepsLeftTextureCoordinate;\n" +
"varying vec2 oneStepRightTextureCoordinate;\n" +
"varying vec2 twoStepsRightTextureCoordinate;\n" +
"varying float log;\n" +
"\n" +
"void main()\n" +
"{\n" +
"log = -0.1;\n" +
"gl_Position = position;\n" +
"vec2 firstOffset;\n" +
"vec2 secondOffset;\n" +
// "if (sqrt(pow(position.x, 2) + pow(position.y, 2)) < 0.2) {\n" +
// "log = -position.x;\n" +
"if (position.x < 0.3) {\n" +
"log = 0.7;\n" +
"firstOffset = vec2(3.0 * texelWidthOffset, 3.0 * texelHeightOffset);\n" +
"secondOffset = vec2(3.0 * texelWidthOffset, 3.0 * texelHeightOffset);\n" +
"} else {\n" +
"firstOffset = vec2(texelWidthOffset, texelHeightOffset);\n" +
"secondOffset = vec2(texelWidthOffset, texelHeightOffset);\n" +
"log = -0.1;\n" +
"}\n" +
"\n" +
"centerTextureCoordinate = inputTextureCoordinate;\n" +
"oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;\n" +
"twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;\n" +
"oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;\n" +
"twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;\n" +
"}\n";
public static final String FRAGMENT_SHADER =
"precision highp float;\n" +
"\n" +
"uniform sampler2D inputImageTexture;\n" +
"\n" +
"varying vec2 centerTextureCoordinate;\n" +
"varying vec2 oneStepLeftTextureCoordinate;\n" +
"varying vec2 twoStepsLeftTextureCoordinate;\n" +
"varying vec2 oneStepRightTextureCoordinate;\n" +
"varying vec2 twoStepsRightTextureCoordinate;\n" +
"varying float log;\n" +
"\n" +
"void main()\n" +
"{\n" +
"if (log != -0.1) {\n" +
"gl_FragColor.rgba = vec4(log, 0.0, 0.0, 1.0);\n" +
// "return;\n" +
// "}\n" +
"} else { \n" +
"lowp vec4 fragmentColor;\n" +
"fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate) * 0.2;\n" +
"fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate) * 0.2;\n" +
"fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate) * 0.2;\n" +
"fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate) * 0.2;\n" +
"fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate) * 0.2;\n" +
"\n" +
"gl_FragColor = fragmentColor;\n" +
// "gl_FragColor.rgba = vec4(0.0, 1.0, 0.0, 1.0);\n" +
// "}\n" +
"}\n";
Or is there any better method to do this?
When comparing floating-point values, instead of doing this:
if (log != -0.1)
You should allow a little delta/tolerance on the value, to account for floating-point precision and the slight change the value may undergo when it is passed as a varying.
So you should do something like:
if (abs(log - (-0.1)) >= 0.0001)
Here the 0.0001 I chose is a bit arbitrary; it just has to be a small value.
Another example, with ==:
Instead of:
if (log == 0.7)
do
if (abs(log - 0.7) <= 0.0001)
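If these comparisons show up in several places, a tiny helper keeps the tolerance in one spot (a sketch; the epsilon value is arbitrary):

const float EPSILON = 0.0001; // arbitrary; tune it to the precision you need

bool approxEqual(float a, float b) {
    return abs(a - b) <= EPSILON;
}

// e.g.  if (!approxEqual(log, -0.1)) { ... }  or  if (approxEqual(log, 0.7)) { ... }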
However, here you probably also have another issue:
The vertex shader executes for each of the 3 vertices of all your triangles (or quads).
So for a specific triangle, you may set different values (-0.1 or 0.7) for log at each vertex.
Now the problem is that in the fragment shader the GPU will interpolate between the 3 log values depending on which pixel it is rendering, so in the end you can get any value in the [-0.1, 0.7] interval displayed on screen :-(
To avoid this kind of issue, I personally use #ifdefs in my shaders so I can toggle them between normal and debug mode with a keypress. I never try to mix normal and debug displays based on if tests, especially when the test is based on a vertex position.
So in your case I would first create a specific debug version of the shader, and then use 0.0 and 1.0 as the values for log. This way what you will see is a red gradient: the more red the color is, the closer you are to the case you want to test.
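A sketch of what such a debug switch could look like for the question's fragment shader (the normal branch is reduced to a single texture fetch here for brevity; the host code would prepend "#define DEBUG\n" to the source string to flip modes):

precision highp float;

uniform sampler2D inputImageTexture;
varying vec2 centerTextureCoordinate;
varying float log;

void main()
{
#ifdef DEBUG
    // Debug mode: visualize the varying as a red gradient (0.0 = black, 1.0 = full red)
    gl_FragColor = vec4(log, 0.0, 0.0, 1.0);
#else
    // Normal mode: the regular output (the blur from the question is omitted here)
    gl_FragColor = texture2D(inputImageTexture, centerTextureCoordinate);
#endif
}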
