I am using this library, https://github.com/natario1/CameraView, to convert a captured negative image to a positive, and it uses OpenGL shaders. I need a filter that converts a negative image to a positive in black-and-white mode, not in the normal color mode (which is what the library currently provides). I tried to combine the two filters, i.e. first convert the negative image to a positive in color mode and then apply the black-and-white filter, but as I am new to OpenGL I was unable to do this. Please help me in this regard; it would be highly appreciated. The shaders I am using are as follows:
This shader converts the negative image to a positive in color mode:
private final static String FRAGMENT_SHADER = "#extension GL_OES_EGL_image_external : require\n"
+ "precision mediump float;\n"
+ "varying vec2 "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+";\n"
+ "uniform samplerExternalOES sTexture;\n"
+ "void main() {\n"
+ " vec4 color = texture2D(sTexture, "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+");\n"
+ " float colorR = (1.0 - color.r) / 1.0;\n"
+ " float colorG = (1.0 - color.g) / 1.0;\n"
+ " float colorB = (1.0 - color.b) / 1.0;\n"
+ " gl_FragColor = vec4(colorR, colorG, colorB, color.a);\n"
+ "}\n";
This shader converts a normal positive image to black-and-white mode:
private final static String FRAGMENT_SHADER = "#extension GL_OES_EGL_image_external : require\n"
+ "precision mediump float;\n"
+ "varying vec2 "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+";\n"
+ "uniform samplerExternalOES sTexture;\n" + "void main() {\n"
+ " vec4 color = texture2D(sTexture, "+DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME+");\n"
+ " float colorR = (color.r + color.g + color.b) / 3.0;\n"
+ " float colorG = (color.r + color.g + color.b) / 3.0;\n"
+ " float colorB = (color.r + color.g + color.b) / 3.0;\n"
+ " gl_FragColor = vec4(colorR, colorG, colorB, color.a);\n"
+ "}\n";
Please help me make a filter that converts the captured negative image directly to a black-and-white positive.
Thanks.
You can do that with a one-liner in a single shader:
gl_FragColor = vec4(vec3(dot(1.0 - color.rgb, vec3(1.0/3.0))), color.a);
Explanation:
The inverted color is:
vec3 inverseColor = 1.0 - color.rgb;
For the grayscale conversion there are two options. Either the straightforward average:
float gray = (inverseColor.r + inverseColor.g + inverseColor.b) / 3.0;
Or by using the dot product:
float gray = dot(inverseColor, vec3(1.0/3.0));
Finally construct a vec3 from gray:
gl_FragColor = vec4(vec3(gray), color.a);
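For completeness, the two steps can be fused into a single filter in the same style as the shaders in the question. Note that the question's DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME constant is replaced here by a stand-in value ("vTextureCoord") purely so the snippet is self-contained:

```java
public class NegativeBWFilter {
    // Stand-in for the library's DEFAULT_FRAGMENT_TEXTURE_COORDINATE_NAME constant.
    static final String TEX_COORD = "vTextureCoord";

    // Inverts each channel, then averages the inverted channels to one gray value.
    static final String FRAGMENT_SHADER =
          "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "varying vec2 " + TEX_COORD + ";\n"
        + "uniform samplerExternalOES sTexture;\n"
        + "void main() {\n"
        + "  vec4 color = texture2D(sTexture, " + TEX_COORD + ");\n"
        + "  // invert first, then average the inverted channels:\n"
        + "  float gray = dot(1.0 - color.rgb, vec3(1.0 / 3.0));\n"
        + "  gl_FragColor = vec4(vec3(gray), color.a);\n"
        + "}\n";

    public static void main(String[] args) {
        System.out.println(FRAGMENT_SHADER);
    }
}
```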
I am trying to implement a basic shading program with GLES 2/3, and have pieced together this code from various tutorials. Everything looks correct to me, and it compiles fine, but nothing appears on my screen.
It rendered fine until I added Normal and Light Position data, then it broke and I haven't been able to find a way to fix it.
Can anyone see what's wrong here?
class MyRenderer:GLSurfaceView.Renderer{
var glProgram = -1
var uMV = -1
var uMVP = -1
var uLightPos = -1
var aPosition = -1
var aNormal = -1
var aColor = -1
val verts = arrayOf(0f,1f,0f, -1f,0f,0f, 1f,0f,0f)
val vBuf = allocateDirect(verts.size*4).order(nativeOrder()).asFloatBuffer()
val norms = arrayOf(0f,0f,-1f, 0f,0f,-1f, 0f,0f,-1f)
val nBuf = allocateDirect(norms.size*4).order(nativeOrder()).asFloatBuffer()
override fun onSurfaceCreated(g:GL10,c:EGLConfig) {
glClearColor(0f,0f,0f,1f)
glClearDepthf(1f)
glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
glProgram = glCreateProgram()
val vShader = glCreateShader(GL_VERTEX_SHADER)
glShaderSource(vShader,"#version 100\n " +
"uniform mat4 u_mvMat; " +
"uniform mat4 u_mvpMat; " +
"uniform vec3 u_LightPos; " +
"attribute vec4 a_Position; " +
"attribute vec3 a_Normal; " +
"attribute vec4 a_Color; " +
"varying vec4 v_Color; " +
"void main(){ " +
" vec3 vertex = vec3(u_mvMat*a_Position); " +
" vec3 normal = vec3(u_mvMat*vec4(a_Normal,0.0)); " +
" vec3 lightVector = normalize(u_LightPos-vertex); " +
" float distance = length(u_LightPos-vertex); " +
" float diffuse = max(dot(normal,lightVector),0.1) " +
" / (1.0+distance*distance/4.0); " +
" v_Color = a_Color*diffuse; " +
" gl_Position = u_mvpMat*a_Position;} " )
glCompileShader(vShader)
glAttachShader(glProgram,vShader)
val fShader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(fShader,"#version 100\n " +
"precision mediump float; " +
"varying vec4 v_Color; " +
"void main(){ " +
" gl_FragColor = v_Color;} " )
glCompileShader(fShader)
glAttachShader(glProgram,fShader)
glLinkProgram(glProgram)
glUseProgram(glProgram)
uMVP = glGetUniformLocation(glProgram,"u_mvpMat")
uMV = glGetUniformLocation(glProgram,"u_mvMat")
uLightPos = glGetUniformLocation(glProgram,"u_LightPos")
aPosition = glGetAttribLocation (glProgram,"a_Position")
aNormal = glGetAttribLocation (glProgram,"a_Normal")
aColor = glGetAttribLocation (glProgram,"a_Color")
glVertexAttribPointer(aPosition,4,GL_FLOAT,false,3*4,vBuf)
glEnableVertexAttribArray(aPosition)
glVertexAttribPointer(aNormal,4,GL_FLOAT,false,3*4,nBuf)
glEnableVertexAttribArray(aNormal)
val modelM = FloatArray(16)
setIdentityM(modelM,0)
val viewM = FloatArray(16)
setLookAtM(viewM,0, 0f,0f,-5f, 0f,0f,0f, 0f,0f,1f)
val projM = FloatArray(16)
frustumM(projM,0, -2f,2f, 1f,-1f, 1f,50f)
val mvM = FloatArray(16)
multiplyMM(mvM,0,viewM,0,modelM,0)
glUniformMatrix4fv(uMV,1,false,mvM,0)
val mvpM = FloatArray(16)
multiplyMM(mvpM,0,projM,0,mvM,0)
glUniformMatrix4fv(uMVP,1,false,mvpM,0)
glUniform3i(uLightPos,-1,-10,-1)
glVertexAttribI4i(aColor,1,1,1,1)
glDrawArrays(GL_TRIANGLES,0,verts.size/3)
}
override fun onSurfaceChanged(g:GL10,w:Int,h:Int){}
override fun onDrawFrame(g:GL10){}
}
If you want to use color values in the range [0.0, 1.0], then you have to use glVertexAttrib4f rather than glVertexAttribI4i:
glVertexAttribI4i(aColor,1,1,1,1)
glVertexAttrib4f(aColor,1.0f,1.0f,1.0f,1.0f)
glVertexAttribI* assumes the values to be signed or unsigned fixed-point values in the range [-2147483648, 2147483647] or [0, 4294967295]. A value of 1 is therefore almost black.
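A quick back-of-the-envelope check (plain Java, assuming the value is normalized into [0.0, 1.0] against the unsigned range as described above) shows how small a value of 1 becomes:

```java
public class AttribRange {
    public static void main(String[] args) {
        // A fixed-point attribute value of 1, mapped into [0.0, 1.0]
        // against the unsigned 32-bit range:
        double normalized = 1.0 / 4294967295.0;
        System.out.println(normalized); // on the order of 1e-10, i.e. almost black
    }
}
```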
The type of the u_LightPos is floating point (vec3):
uniform vec3 u_LightPos;
You have to use glUniform3f rather than glUniform3i to set the value of a floating-point uniform variable:
glUniform3i(uLightPos,-1,-10,-1)
glUniform3f(uLightPos,-1f,-10f,-1f)
I recommend verifying that the shader compiled successfully with glGetShaderiv (parameter GL_COMPILE_STATUS). Compile error messages can be retrieved with glGetShaderInfoLog.
I also recommend adding an ambient light component (for debugging purposes), e.g.:
v_Color = a_Color*(diffuse + 0.5);
If you can "see" the geometry with the ambient light, then there are several possible issues:
The light source is on the back side of the geometry, so the back side is lit but not the front side. That causes only the almost-black, unlit side to be visible from the point of view.
The distance from the light source to the geometry is too large: distance becomes a very large value, so diffuse is very small and all the geometry is almost black.
The light source is inside the closed volume of the geometry.
All the normal vectors point away from the camera, which may cause dot(normal, lightVector) to be less than 0.0.
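The second point can be checked numerically. A rough plain-Java evaluation of the shader's diffuse term for the question's light position (-1, -10, -1) and a vertex of the triangle (ignoring the model-view transform, which changes the exact numbers but not the order of magnitude) shows the lighting is almost entirely attenuated away:

```java
public class DiffuseCheck {
    // Same formula as the vertex shader: max(dot(normal, lightVector), 0.1) / (1 + d^2/4)
    public static double diffuse(double[] vertex, double[] normal, double[] light) {
        double dx = light[0] - vertex[0], dy = light[1] - vertex[1], dz = light[2] - vertex[2];
        double distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
        // normalize the vector from the vertex to the light
        double lx = dx / distance, ly = dy / distance, lz = dz / distance;
        double dot = normal[0] * lx + normal[1] * ly + normal[2] * lz;
        return Math.max(dot, 0.1) / (1.0 + distance * distance / 4.0);
    }

    public static void main(String[] args) {
        double d = diffuse(new double[]{0, 1, 0},      // top vertex of the triangle
                           new double[]{0, 0, -1},     // its normal
                           new double[]{-1, -10, -1}); // u_LightPos from the question
        System.out.println(d); // roughly 0.003: the triangle renders almost black
    }
}
```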
I want to implement a fisheye effect on Android using OpenGL ES 2.0. I can already do it without OpenGL, but that is not what I want, because it is inefficient and does not support video textures. I also tried the fisheye effect in the Android Media Effects API, but the result does not look good.
I also found this fisheye shader:
private static final String FISHEYE_FRAGMENT_SHADER =
"precision mediump float;\n" +
"uniform sampler2D u_Texture;\n" +
"uniform vec2 vScale;\n" +
"const float alpha = float(4.0 * 2.0 + 0.75);\n" +
"varying vec2 v_TexCoordinate;\n" +
"void main() {\n" +
" float bound2 = 0.25 * (vScale.x * vScale.x + vScale.y * vScale.y);\n" +
" float bound = sqrt(bound2);\n" +
" float radius = 1.15 * bound;\n" +
" float radius2 = radius * radius;\n" +
" float max_radian = 0.5 * 3.14159265 - atan(alpha / bound * sqrt(radius2 - bound2));\n" +
" float factor = bound / max_radian;\n" +
" float m_pi_2 = 1.5707963;\n" +
" vec2 coord = v_TexCoordinate - vec2(0.5, 0.5);\n" +
" float dist = length(coord * vScale);\n" +
" float radian = m_pi_2 - atan(alpha * sqrt(radius2 - dist * dist), dist);\n" +
" float scalar = radian * factor / dist;\n" +
" vec2 new_coord = coord * scalar + vec2(0.5, 0.5);\n" +
" gl_FragColor = texture2D(u_Texture, new_coord);\n" +
"}\n";
This is what I want, but I do not know how to use it. Can someone give me a clue?
Android OpenGL ES does (normally) support video textures. It's not strictly part of the OpenGL ES API, but you can normally import video surfaces as EGL external images via Android's SurfaceTexture.
There are lots of similar questions on the web, but this SO question should provide a useful starting point:
Android. How play video on Surface(OpenGL)
I have a texel (rectangle) and I need to access its 4 corners.
vec2 offset = vec2(1,1)/vec2(texWidth, texHeight)
texture2D (texSource, texCoord + 0.5 * offset * ???? )
What should I put here to get the two top and the two bottom corners?
[Edit]: Code as per Tommy's answer:
" vec2 pixelSize = vec2(offsetx,offsety);\n" +
" vec2 halfPixelSize = pixelSize * vec2(0.5);\n" +
" vec2 texCoordCentre = vTextureCoord - mod(vTextureCoord, pixelSize) + halfPixelSize;\n" +
" vec2 topLeft = texCoordCentre - halfPixelSize;\n" +
" vec2 bottomRight = texCoordCentre + halfPixelSize;\n" +
" vec2 topRight = texCoordCentre + vec2(halfPixelSize.x, -halfPixelSize.y);\n" +
" vec2 bottomLeft = texCoordCentre + vec2(-halfPixelSize.x, halfPixelSize.y);\n" +
" vec4 p00 = texture2D(sTexture, topLeft);\n" +
" vec4 p02 = texture2D(sTexture, bottomRight);\n" +
" vec4 p20 = texture2D(sTexture, topRight);\n" +
" vec4 p22 = texture2D(sTexture, bottomLeft);\n" +
" vec4 pconv = 0.25*(p00 + p02 + p20 + p22);\n" +
A texture is always addressed by numbers in the range [0, 1). Taking a texel as being an individual pixel within a texture, each of those is an equal subdivision of the range [0, 1), hence if there are 16 of them the first occupies the region [0, 1/16), the next [1/16, 2/16), etc.
So the boundaries of the texel at n in a texture of size p are at n/p and (n+1)/p, and the four corners are at the combinations of the boundary positions for x and y.
If you have linear filtering enabled then you'll get an equal mix of the four adjoining texels by sampling at those locations; if you've got nearest filtering enabled then you'll get one of the four but be heavily subject to floating point rounding errors.
So, I think:
vec2 pixelSize = vec2(1.0) / vec2(texWidth, texHeight);
vec2 halfPixelSize = pixelSize * vec2(0.5);
vec2 texCoordCentre = texCoord - mod(texCoord, pixelSize) + halfPixelSize;
vec2 topLeft = texCoordCentre - halfPixelSize;
vec2 bottomRight = texCoordCentre + halfPixelSize;
vec2 topRight = texCoordCentre + vec2(halfPixelSize.x, -halfPixelSize.y);
vec2 bottomLeft = texCoordCentre + vec2(-halfPixelSize.x, halfPixelSize.y);
(... and if you were targeting ES 3 instead of 2, you could just use the textureSize function instead of messing about with uniforms)
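As a numeric sanity check of the arithmetic above (plain Java, hypothetical 4×4 texture): the sample point 0.30 falls in texel 1, whose boundaries are n/p = 0.25 and (n+1)/p = 0.5, so its centre is 0.375.

```java
public class TexelCorners {
    // Centre of the texel containing coordinate t, for a texture p texels wide:
    // the same t - mod(t, pixelSize) + halfPixelSize computation as the shader.
    public static double centre(double t, int p) {
        double pixel = 1.0 / p;
        return t - (t % pixel) + 0.5 * pixel;
    }

    public static void main(String[] args) {
        double c = centre(0.30, 4);
        double half = 0.5 / 4.0;
        // centre of texel 1 and its two boundaries (corner coordinates in x)
        System.out.println(c + " " + (c - half) + " " + (c + half));
    }
}
```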
I am trying to achieve a fisheye effect on a BitMap image in Android. Is there an existing library or algorithm which can help?
I recommend using the Android Media Effects API. If you want more control over the effect (or need to target older Android versions), you can also use OpenGL directly to apply a fisheye effect to your photo. A tutorial on the subject: http://www.learnopengles.com/android-lesson-four-introducing-basic-texturing/ . Learning OpenGL will let you apply all kinds of effects to your photos; shader code can easily be found on the internet (e.g. https://github.com/BradLarson/GPUImage/tree/master/framework/Source).
Here is a shader code for a fisheye effect :
private static final String FISHEYE_FRAGMENT_SHADER =
"precision mediump float;\n" +
"uniform sampler2D u_Texture;\n" +
"uniform vec2 vScale;\n" +
"const float alpha = float(4.0 * 2.0 + 0.75);\n" +
"varying vec2 v_TexCoordinate;\n" +
"void main() {\n" +
" float bound2 = 0.25 * (vScale.x * vScale.x + vScale.y * vScale.y);\n" +
" float bound = sqrt(bound2);\n" +
" float radius = 1.15 * bound;\n" +
" float radius2 = radius * radius;\n" +
" float max_radian = 0.5 * 3.14159265 - atan(alpha / bound * sqrt(radius2 - bound2));\n" +
" float factor = bound / max_radian;\n" +
" float m_pi_2 = 1.5707963;\n" +
" vec2 coord = v_TexCoordinate - vec2(0.5, 0.5);\n" +
" float dist = length(coord * vScale);\n" +
" float radian = m_pi_2 - atan(alpha * sqrt(radius2 - dist * dist), dist);\n" +
" float scalar = radian * factor / dist;\n" +
" vec2 new_coord = coord * scalar + vec2(0.5, 0.5);\n" +
" gl_FragColor = texture2D(u_Texture, new_coord);\n" +
"}\n";
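To get a feel for what this shader does before wiring it into GL, here is a plain-Java port of its per-fragment math, under the assumption vScale = (1, 1) (a square texture); it maps a texture coordinate to the coordinate the fisheye actually samples:

```java
public class FisheyeMap {
    static final double ALPHA = 4.0 * 2.0 + 0.75; // the shader's "alpha" constant

    // Returns the source coordinate sampled for texture coordinate (tx, ty),
    // following the fragment shader line by line with vScale = (1, 1).
    public static double[] map(double tx, double ty) {
        double bound2 = 0.25 * (1.0 + 1.0);
        double bound = Math.sqrt(bound2);
        double radius = 1.15 * bound;
        double radius2 = radius * radius;
        double maxRadian = 0.5 * Math.PI
                - Math.atan(ALPHA / bound * Math.sqrt(radius2 - bound2));
        double factor = bound / maxRadian;
        double cx = tx - 0.5, cy = ty - 0.5;
        double dist = Math.sqrt(cx * cx + cy * cy);
        // GLSL's two-argument atan corresponds to Java's Math.atan2
        double radian = 0.5 * Math.PI
                - Math.atan2(ALPHA * Math.sqrt(radius2 - dist * dist), dist);
        double scalar = radian * factor / dist; // undefined at the exact centre (dist == 0)
        return new double[]{cx * scalar + 0.5, cy * scalar + 0.5};
    }

    public static void main(String[] args) {
        double[] p = map(0.7, 0.5); // a point right of centre
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

The remapped coordinate stays inside [0, 1] and on the same side of the centre, which is what you want before feeding it to texture2D.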
Have a look at OpenCV for Android:
http://opencv.org/platforms/android.html
And this answer:
How to simulate fisheye lens effect by openCV?
Perhaps a simpler solution would be to use the Android Media Effects API; however, it is only available on API level 14 and above.