I'm currently writing a lighting shader for my OpenGL ES 2.0 renderer. I've experimented with a sample project's shader and got directional lighting working there. But when I try to implement it in our own code, it behaves very strangely: certain parts light up when they shouldn't, and so on.
I'll post the shader code here; if you'd like more, please ask. A teammate and I have been trying to fix this for the past week and a half with no progress, so we really need some help. Thanks.
final String vertexShader =
      "uniform mat4 u_MVPMatrix;"     // A constant representing the combined model/view/projection matrix.
    + "uniform mat4 u_MVMatrix;"      // A constant representing the combined model/view matrix.
    + "uniform vec3 u_LightPos;"      // The position of the light in eye space.
    + "uniform vec3 u_VectorToLight;"
    + "attribute vec4 a_Position;"    // Per-vertex position information we will pass in.
    + "attribute vec4 a_Color;"       // Per-vertex color information we will pass in.
    + "attribute vec3 a_Normal;"      // Per-vertex normal information we will pass in.
    + "varying vec4 v_Color;"         // This will be passed into the fragment shader.
    + "vec3 modelViewVertex;"
    + "vec3 modelViewNormal;"
    + "vec4 getPointLighting();"
    + "vec4 getAmbientLighting();"
    + "vec4 getDirectionalLighting();"
    + "void main()"                   // The entry point for our vertex shader.
    + "{"
    // Transform the vertex into eye space.
    + "   modelViewVertex = vec3(u_MVMatrix * a_Position);"
    // Transform the normal's orientation into eye space.
    + "   modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));"
    // Multiply the color by the illumination level. It will be interpolated across the triangle.
    // + "   v_Color = getAmbientLighting();"
    + "   v_Color = getDirectionalLighting();"
    // + "   v_Color += getPointLighting();"
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    + "   gl_Position = u_MVPMatrix * a_Position;"
    + "}"
    + "vec4 getAmbientLighting(){"
    + "   return a_Color * 0.1;"
    + "}"
    + "vec4 getPointLighting(){"
    // The distance will be used for attenuation.
    + "   float distance = length(u_LightPos - modelViewVertex);"
    // Get a lighting direction vector from the light to the vertex.
    + "   vec3 lightVector = normalize(u_LightPos - modelViewVertex);"
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    + "   float diffuse = max(dot(modelViewNormal, lightVector), 0.0);"
    // Attenuate the light based on distance.
    + "   diffuse = diffuse * (1.0 / (1.0 + distance));"
    + "   return a_Color * diffuse;"
    + "}"
    + "vec4 getDirectionalLighting(){"
    + "   vec3 lightVector = normalize(u_VectorToLight - modelViewVertex);"
    + "   float diffuse = max(dot(modelViewNormal, lightVector), 0.0);"
    + "   diffuse *= 0.3;"
    + "   return a_Color * diffuse;"
    + "}";
I've left all of it in there, even the stuff we've given up on.
Thanks again.
Edit: I should probably also mention that the transforms are standard; the camera is rotated and moved by changing the view matrix, and the object is rotated by changing the model matrix.
In a directional lighting model, all light rays travel in a single, uniform direction; in other words, they are all parallel to each other. However, in your getDirectionalLighting() function the lightVector depends on modelViewVertex, which seems to be what is causing the issue.
Try changing it to
vec3 lightVector = normalize(u_VectorToLight);
or simply
vec3 lightVector = normalize(-u_LightPos);
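Putting it together, the corrected function would look like this sketch (assuming u_VectorToLight already points from the surface toward the light, in eye space):
vec4 getDirectionalLighting(){
    // A directional light has the same direction at every vertex,
    // so the light vector must not depend on modelViewVertex.
    vec3 lightVector = normalize(u_VectorToLight);
    float diffuse = max(dot(modelViewNormal, lightVector), 0.0);
    diffuse *= 0.3;
    return a_Color * diffuse;
}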
I am developing for Android with OpenGL ES 2 and two devices: a Google Nexus 3 and an Archos 80 G9 (8" tablet). On these devices my shaders work fine.
Now I have bought a Samsung Galaxy Tab 2, and a single vertex shader does not work on this device. This shader (from OSGEarth) removes the z-fighting of tracks rendered on 3D terrain by modulating the z value of the vertex. On the Samsung Galaxy Tab 2 this modulation does not work(!)
This is the shader
std::string vertexShaderSource =
"#version 100\n\n"
"float remap( float val, float vmin, float vmax, float r0, float r1 ) \n"
"{ \n"
" float vr = (clamp(val, vmin, vmax)-vmin)/(vmax-vmin); \n"
" return r0 + vr * (r1-r0); \n"
"} \n\n"
"mat3 castMat4ToMat3Function(in mat4 m) \n"
"{ \n"
" return mat3( m[0].xyz, m[1].xyz, m[2].xyz ); \n"
"} \n\n"
"uniform mat4 osg_ViewMatrix; \n"
"uniform mat4 osg_ViewMatrixInverse; \n"
"uniform mat4 osg_ModelViewMatrix; \n"
"uniform mat4 osg_ProjectionMatrix; \n"
"uniform mat4 osg_ModelViewProjectionMatrix; \n"
"uniform float osgearth_depthoffset_minoffset; \n"
"attribute vec4 osg_Vertex;\n"
"attribute vec4 osg_Color;\n"
"varying vec4 v_color; \n\n"
"varying vec4 adjV; \n"
"varying float simRange; \n\n"
"void main(void) \n"
"{ \n"
// transform the vertex into eye space:
" vec4 vertEye = osg_ModelViewMatrix * osg_Vertex; \n"
" vec3 vertEye3 = vertEye.xyz/vertEye.w; \n"
" float range = length(vertEye3); \n"
// "  vec3 adjVecEye3 = normalize(vertEye3); \n"
// calculate the "up" vector, that will be our adjustment vector:
" vec4 vertWorld = osg_ViewMatrixInverse * vertEye; \n"
" vec3 adjVecWorld3 = -normalize(vertWorld.xyz/vertWorld.w); \n"
" vec3 adjVecEye3 = castMat4ToMat3Function(osg_ViewMatrix) * adjVecWorld3; \n"
// remap depth offset based on camera distance to vertex. The farther you are away,
// the more of an offset you need.
" float offset = remap( range, 1000.0, 10000000.0, osgearth_depthoffset_minoffset, 10000.0); \n"
// adjust the Z (distance from the eye) by our offset value:
" vertEye3 -= adjVecEye3 * offset; \n"
" vertEye.xyz = vertEye3 * vertEye.w; \n"
// Transform the new adjusted vertex into clip space and pass it to the fragment shader.
" adjV = osg_ProjectionMatrix * vertEye; \n"
// Also pass along the simulated range (eye=>vertex distance). We will need this
// to detect when the depth offset has pushed the Z value "behind" the camera.
" simRange = range - offset; \n"
" gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex; \n"
" v_color = osg_Color; \n"
// transform clipspace depth into [0..1] for FragDepth:
" gl_Position.z = max(0.0, (adjV.z/adjV.w)*gl_Position.w ); \n"
// if the offset pushed the Z behind the eye, the projection mapping will
// result in a z>1. We need to bring these values back down to the
// near clip plane (z=0). We need to check simRange too before doing this
// so we don't draw fragments that are legitimately beyond the far clip plane.
" float z = 0.5 * (1.0+(adjV.z/adjV.w)); \n"
" if ( z > 1.0 && simRange < 0.0 ) { gl_Position.z = 0.0; } \n"
"} \n" ;
With this code the track on the terrain is not displayed at all on the SGT2.
If I comment out the instruction
gl_Position.z = max(0.0, (adjV.z/adjV.w)*gl_Position.w );
the shader works and the track is displayed, but of course the z-fighting is not removed.
My Google Nexus and Samsung Galaxy Tab 2 both use the PowerVR SGX 540, so maybe the problem is in the floating-point precision of the different devices? (Or some bug?)
Any ideas?
Thanks
[SOLVED]
I removed from the vertex shader
"varying vec4 adjV; \n"
"varying float simRange; \n\n"
and declared them as plain vec4/float globals.
I think they were being assigned mediump precision as varyings.
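For reference, the same fix expressed in the shader string would be an untested sketch like this: either plain globals, or explicit highp if they really had to stay varyings (GLSL ES allows a precision qualifier on the declaration):
"vec4 adjV; \n"
"float simRange; \n\n"
// or, keeping them as varyings with explicit precision:
"varying highp vec4 adjV; \n"
"varying highp float simRange; \n\n"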
Is there a way to transform a libgdx Texture to a grayscale image? So far I have duplicated the images I want to grayscale and converted them manually, but I don't think that is the best solution, because my game uses more and more images and they take up a lot of disk space.
Thought I'd share this for anyone wanting to use some copy/paste code.
import com.badlogic.gdx.graphics.glutils.ShaderProgram;
public class GrayscaleShader {
static String vertexShader = "attribute vec4 a_position;\n" +
"attribute vec4 a_color;\n" +
"attribute vec2 a_texCoord0;\n" +
"\n" +
"uniform mat4 u_projTrans;\n" +
"\n" +
"varying vec4 v_color;\n" +
"varying vec2 v_texCoords;\n" +
"\n" +
"void main() {\n" +
" v_color = a_color;\n" +
" v_texCoords = a_texCoord0;\n" +
" gl_Position = u_projTrans * a_position;\n" +
"}";
static String fragmentShader = "#ifdef GL_ES\n" +
" precision mediump float;\n" +
"#endif\n" +
"\n" +
"varying vec4 v_color;\n" +
"varying vec2 v_texCoords;\n" +
"uniform sampler2D u_texture;\n" +
"\n" +
"void main() {\n" +
" vec4 c = v_color * texture2D(u_texture, v_texCoords);\n" +
" float grey = (c.r + c.g + c.b) / 3.0;\n" +
" gl_FragColor = vec4(grey, grey, grey, c.a);\n" +
"}";
public static ShaderProgram grayscaleShader = new ShaderProgram(vertexShader,
fragmentShader);
}
To use it, call
spriteBatch.setShader(GrayscaleShader.grayscaleShader)
And when you're done with grayscale, don't forget to call
spriteBatch.setShader(null);
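Note that ShaderProgram compiles in its constructor, so if a device rejects the shader, setShader() will just draw nothing. A small check at startup (isCompiled() and getLog() are standard libgdx API) makes failures visible:
if (!GrayscaleShader.grayscaleShader.isCompiled()) {
    // Log the GLSL compile errors instead of failing silently.
    Gdx.app.error("GrayscaleShader", GrayscaleShader.grayscaleShader.getLog());
}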
You should be able to write a GLSL shader that renders a texture in grayscale. This requires OpenGL ES 2.x, and it doesn't really "transform" the texture; it just renders it to the display as grayscale.
For a detailed tutorial on shaders that includes a grayscale shader check out: https://github.com/mattdesl/lwjgl-basics/wiki/ShaderLesson3
(Libgdx doesn't really define the GLSL shader API, that's passed through from OpenGL, so most tutorials or code you find on the web for regular OpenGL should work.)
For a more direct hack, just take the Libgdx SpriteBatch shader and change the fragment shader so it averages the rgb components. (You can define your own ShaderProgram and provide it to a SpriteBatch to use.)
Change the body of the fragment shader to something like this (untested, so it may not compile):
+ "  vec4 c = v_color * texture2D(u_texture, v_texCoords);\n"
+ "  float grey = (c.r + c.g + c.b) / 3.0;\n"
+ "  gl_FragColor = vec4(grey, grey, grey, c.a);\n"
You can load textures as luminance-only, or luminance-and-alpha, in GLES (see glTexImage2D). In libgdx you can specify Pixmap.Format.Intensity (luminance) or Pixmap.Format.LuminanceAlpha (luminance and alpha) when instantiating the Texture. This will give you a grayscale texture.
You still need two textures loaded (one color, one grayscale), but they can be built from the same source image, and the luminance-only texture uses only 1 byte per pixel in memory.
A more efficient solution is to implement a shader as suggested by P.T., but that is only available with GLES 2.
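For completeness, a minimal sketch of the luminance-texture approach in libgdx (the file name is hypothetical; the pixel-format conversion is done by libgdx when the pixmap is copied):
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;

Pixmap source = new Pixmap(Gdx.files.internal("hero.png"));
// Copy into a luminance-only pixmap: 1 byte per pixel instead of 4.
Pixmap gray = new Pixmap(source.getWidth(), source.getHeight(), Pixmap.Format.Intensity);
gray.drawPixmap(source, 0, 0);
Texture grayTexture = new Texture(gray);
// The pixmaps can be discarded once the texture is on the GPU.
source.dispose();
gray.dispose();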
This vertex shader code works on every device except for the Galaxy Note 2.
gl_Position = uMVPMatrix * vPosition;
where if I reverse the matrix multiplication to
gl_Position = vPosition * uMVPMatrix;
I can actually get things to appear. Unfortunately, the reverse would require me to completely rewrite my transformations library.
Does anyone have any insight into what could be causing this? Is this an OpenGL driver error on the device?
Shader code
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"attribute vec2 a_TexCoordinate;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
"v_TexCoordinate = a_TexCoordinate;" +
"gl_Position = uMVPMatrix * vPosition;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform sampler2D u_Texture;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
" gl_FragColor = texture2D(u_Texture, v_TexCoordinate);" +
//" gl_FragColor = vec4(v_TexCoordinate, 0, 1);" +
"}";
This is not only about the Galaxy Note 2 platform; it is a mathematical question. Because GLSL (like HLSL) uses column-major order, the correct way is to multiply MATRIX x VECTOR.
Alternatively, you can have OpenGL transpose the matrix for you through the third parameter of
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE or GL_TRUE, g_proxtrans.s);
where GL_TRUE means the matrix is transposed on upload. It can be a problem not to pass this option consistently every frame, so just try it. Note, however, that OpenGL ES 2.0 requires this parameter to be GL_FALSE, so on ES the transpose has to be done on the CPU instead.
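On Android that could look like the sketch below (the handle and matrix names are placeholders):
// OpenGL ES 2.0 rejects transpose == GL_TRUE, so transpose manually:
float[] transposed = new float[16];
android.opengl.Matrix.transposeM(transposed, 0, mvpMatrix, 0);
// Upload with the transpose flag set to false, as ES requires.
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, transposed, 0);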
It has been quite a while since I last worked with matrix calculations and mapped them to expected results, and it's sort of intimidating. I'm curious whether anyone out there has mapped an ImageView matrix (android.graphics.Matrix) to an OpenGL matrix (android.opengl.Matrix) in order to update a texture based upon the image behind it.
There will always be a photograph directly behind the texture, and the texture in front should stay at the same scale and translation as the ImageView (I'm actually using the ImageTouchView library). The ImageView scaling and zooming works as expected and desired.
So, while the image is being resized, I want to update the OpenGL ES 2.0 shader inputs in the project so that the texture changes size with it. I thought I would be able to do this through callbacks that directly manipulate the shader inputs, but it doesn't seem to be working. I'm not sure if this is a shader problem or a problem with how the matrix is passed. The trigger is reached from the ImageTouchView when a matrix change is detected.
#Override
public void onMatrixChanged(Matrix matrix)
{
photoViewRenderer_.updateTextureMatrix(matrix);
photoSurfaceView_.requestRender();
}
When I receive that matrix data and attempt to display the new matrix in the texture, I get a black screen on top of the ImageTouchView. So this could very well just be an OpenGL problem in the code; this is what it looks like, so you can see exactly what is being described.
The shader code looks something like this. I'll start with this and add additional code as requested based upon feedback.
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute vec3 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position * vec4(a_translation, 1.0);\n" +
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
Before adding in the line with vec4(a_translation, 1.0) it seemed to be working as expected, in that the image was being displayed directly on top of the ImageTouchView at equal size just fine. So it is probably the shader. But I cannot rule out that the data going in from the image matrix is screwing up the texture and placing it way off screen. I don't know what default matrix to use for a_translation to check against that.
Edit:
The black screen is actually not an issue now. The default a_position set to private float[] positionTranslationData_ = {1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f}; brought the image back. But the texture is not being manipulated by the inputs from the onMatrixChanged(Matrix matrix) function called by the ImageTouchView.
If you want to translate the image (by moving the rendered pixels) you have two options.
Translate the gl_Position:
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute vec3 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position + vec4(a_translation, 1.0);\n" + // Note the + here
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
Apply an affine transform, using a 4x4 matrix, to gl_Position (be aware that the 4th component of a_position must be 1.0):
private final String vertexShader_ =
"uniform float zoom;\n" +
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"attribute mat4 a_translation;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position*a_translation;\n" +
" v_texCoord = a_texCoord.xy * zoom;\n" +
"}\n";
If you want to move the texture (not the quad or whatever you are rendering) you can apply the same logic to v_texCoord. This will apply a translation and/or rotation to the output texture coordinates.
Probably one reason for your problem is that right now your code does a component-wise multiply, which multiplies a_position.x by a_translation.x, and the same for y and so on, which I think is not what you are trying to achieve.
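If you want to drive that 4x4 from the onMatrixChanged() callback, one way (a sketch; the uniform handle name is an assumption, and the shader would take the matrix as a uniform rather than an attribute) is to expand the row-major 3x3 android.graphics.Matrix into a column-major 4x4 for glUniformMatrix4fv:
// android.graphics.Matrix.getValues() returns row-major 3x3 values:
// [scaleX, skewX, transX, skewY, scaleY, transY, persp0, persp1, persp2]
float[] m3 = new float[9];
matrix.getValues(m3);
// Expand to a column-major 4x4 acting on (x, y, 0, 1):
float[] m4 = {
    m3[0], m3[3], 0f, m3[6],  // column 0
    m3[1], m3[4], 0f, m3[7],  // column 1
    0f,    0f,    1f, 0f,     // column 2
    m3[2], m3[5], 0f, m3[8]   // column 3 (translation)
};
GLES20.glUniformMatrix4fv(imageMatrixHandle, 1, false, m4, 0);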
In my application I have to use the Android camera and OpenGL ES.
I also have to apply an effect to the camera image using two files called one.vsh and one.fsh, but I don't know how to use those files in OpenGL ES.
I also don't know how to make the Android camera work with OpenGL ES so the effect from those two files can be applied.
Please help me with this.
Thanks.
Well, I haven't tested this with the Android camera.
But of course you can load the shaders in the onSurfaceCreated method, like below:
//
// Initialize the shader and program object
//
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
String vShaderStr = "uniform mat4 u_mvpMatrix; \n"
+ "attribute vec4 a_position; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = u_mvpMatrix * a_position; \n"
+ "} \n";
String fShaderStr = "precision mediump float; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 ); \n"
+ "} \n";
// Load the shaders and get a linked program object
mProgramObject = ESShader.loadProgram(vShaderStr, fShaderStr);
// Get the attribute locations
mPositionLoc = GLES20.glGetAttribLocation(mProgramObject, "a_position");
// Get the uniform locations
mMVPLoc = GLES20.glGetUniformLocation(mProgramObject, "u_mvpMatrix");
// Generate the vertex data
mCube.genCube(1.0f);
// Starting rotation angle for the cube
mAngle = 45.0f;
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}
Just replace the strings with the vertex and fragment shader source you want to use.
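Since your effect lives in one.vsh and one.fsh, one way to get those strings is to read the files from the APK's assets folder (a sketch; the helper name and buffer size are assumptions):
// Reads a text file from assets into a String.
private String readAsset(Context context, String fileName) throws IOException {
    InputStream in = context.getAssets().open(fileName);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) out.write(buffer, 0, n);
    in.close();
    return out.toString("UTF-8");
}

// Then, in onSurfaceCreated:
// String vShaderStr = readAsset(context, "one.vsh");
// String fShaderStr = readAsset(context, "one.fsh");
// mProgramObject = ESShader.loadProgram(vShaderStr, fShaderStr);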
Hope this helps.