I've just switched my code over to using a separate shader instead of passing a boolean uniform to decide which algorithm to use. Unfortunately, after vigorous testing, I've discovered that one of the attributes (halo) is not being passed through to the new shader. The other attribute it uses (position) is passed through, though.
Abridged code follows:
Java code:
// Attributes
protected static int position = 0;
protected static int colour = 1;
protected static int texture = 2;
protected static int halo = 3;
protected static int normal = 4;
protected static int program1;
protected static int program2;
...
// Linking shader1
GLES20.glBindAttribLocation(program1, position, "position");
GLES20.glBindAttribLocation(program1, colour, "colour");
GLES20.glBindAttribLocation(program1, texture, "texCoord");
GLES20.glBindAttribLocation(program1, normal, "normal");
GLES20.glLinkProgram(program1);
...
// Linking shader2
GLES20.glBindAttribLocation(program2, position, "position");
GLES20.glBindAttribLocation(program2, halo, "halo");
GLES20.glLinkProgram(program2);
...
GLES20.glUseProgram(program1);
GLES20.glVertexAttribPointer(position, 3, GLES20.GL_FLOAT, false, 0, buffer);
...
//Render with program1
...
GLES20.glUseProgram(program2);
GLES20.glVertexAttribPointer(halo, 1, GLES20.GL_FLOAT, false, 0, doHaloBuffer);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
...
// Using lines for testing purposes
GLES20.glDrawElements(GLES20.GL_LINE_LOOP, haloIndexCount, GLES20.GL_UNSIGNED_SHORT, haloIndexBuffer);
...
Fragment shaders are just simple "Render the texture and colour you get" shaders
shader1.vsh:
attribute vec3 position;
attribute vec4 colour;
attribute vec2 texCoord;
attribute vec3 normal;
...
varying vec2 fragTexCoord;
varying vec4 fragColour;
...
// All attributes used at some point
shader2.vsh:
attribute vec3 position;
attribute float halo;
varying vec4 fragColour;
...
vec4 colour = vec4(1.0, 1.0, 0.0, 1.0);
if (halo > 0.5) {
    colour.g = 0.0;
    ...
}
fragColour = colour;
...
If I change halo > 0.5 to halo == 0.0, or swap the green values in the statements above, red is rendered; otherwise yellow is rendered.
I tried altering the input buffer to be all 1.0 for testing, but it made no difference. It seems that halo is not being passed through.
Previously, I had the two shaders merged and used a boolean uniform to decide which code path to run, and that worked fine. Nothing else has changed; the input buffers are the same and the counts are the same. The only difference is that I'm now using separate shaders.
Any thoughts?
Check whether the halo attribute array is actually enabled (glEnableVertexAttribArray) just before rendering with glDrawElements.
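For illustration, a minimal sketch of that check using the attribute indices from the question (assuming no VAOs, as in the code above). The enable flags are global context state, not per-program state, so switching programs does not enable any arrays for you; if index 3 (halo) was never enabled, the shader reads a constant instead of the buffer, which matches the symptoms:
GLES20.glUseProgram(program2);
// Attribute arrays must be enabled explicitly; glUseProgram does not do it.
GLES20.glEnableVertexAttribArray(position);
GLES20.glEnableVertexAttribArray(halo);
GLES20.glVertexAttribPointer(halo, 1, GLES20.GL_FLOAT, false, 0, doHaloBuffer);
GLES20.glDrawElements(GLES20.GL_LINE_LOOP, haloIndexCount, GLES20.GL_UNSIGNED_SHORT, haloIndexBuffer);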
Related
I applied a cell-shading effect to the object, like:
This works well, but there are many conditional checks ("if" statements) in the fragment shader:
#version 300 es
precision lowp float;
in float v_CosViewAngle;
in float v_LightIntensity;
out vec4 outColor;
const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);
void main() {
    lowp float intensity = 0.0;
    if (v_CosViewAngle > 0.33) {
        intensity = 0.33;
        if (v_LightIntensity > 0.76) {
            intensity = 1.0;
        } else if (v_LightIntensity > 0.51) {
            intensity = 0.84;
        } else if (v_LightIntensity > 0.26) {
            intensity = 0.67;
        } else if (v_LightIntensity > 0.1) {
            intensity = 0.50;
        }
    }
    outColor = vec4(defaultColor * intensity, 1.0);
}
I suspect that this many conditional checks in the fragment shader will ultimately hurt performance. In addition, the shader size keeps increasing, especially if there will be even more cell-shading levels.
Is there any other way to get this effect? Maybe some glsl-function can be used here?
Thanks in advance!
Store your color bands in an Nx1 texture and do a texture lookup using v_LightIntensity as your texture coordinate. If you want a different shading level count, just change the texture.
EDIT: Store an NxM texture instead, do the lookup using v_LightIntensity and v_CosViewAngle as a 2D coordinate, and you can kill the branches completely.
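For illustration, here is a rough sketch of the Nx1 variant on Android (all names hypothetical; the GLES20 calls also work in an ES 3.0 context). It packs the four intensity bands from the if/else chain into a 4x1 RGB texture sampled with GL_NEAREST, so each band stays a hard step. Note that the band boundaries become uniform quarters this way; use a wider texture (say 100x1) if you need the exact thresholds from the original code:
// import java.nio.ByteBuffer; import java.nio.ByteOrder;
// Band values mirror the if/else chain above (0.50, 0.67, 0.84, 1.0).
byte[] bands = {
    (byte) (0.50f * 255), (byte) (0.50f * 255), (byte) (0.50f * 255),
    (byte) (0.67f * 255), (byte) (0.67f * 255), (byte) (0.67f * 255),
    (byte) (0.84f * 255), (byte) (0.84f * 255), (byte) (0.84f * 255),
    (byte) 255,           (byte) 255,           (byte) 255,
};
ByteBuffer pixels = ByteBuffer.allocateDirect(bands.length).order(ByteOrder.nativeOrder());
pixels.put(bands).position(0);

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
// 4x1 RGB texture; 12-byte rows satisfy the default 4-byte unpack alignment.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, 4, 1, 0,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, pixels);
// NEAREST filtering keeps the cel bands as hard steps; clamp avoids wrap artifacts.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
In the fragment shader, the whole branch chain then collapses to a single lookup along the lines of defaultColor * texture(u_Bands, vec2(v_LightIntensity, 0.5)).r, where u_Bands is a hypothetical sampler2D uniform.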
All I need is to pass a texture through fragment shader 1, get the result, and pass it to fragment shader 2.
I know how to link a vertex shader and a fragment shader together into a program and get the shader object.
I don't know how to get the result of shader 1, switch shaders (GLES20.glUseProgram?), and pass the result of shader 1 to shader 2.
Any ideas how to do it?
UPDATE
This is an example of what I want to achieve.
Effect 1:
Effect 2:
My goal is to combine Effect 1 and Effect 2.
UPDATE 2
effect 2 function:
...
uniform float effect2;
vec2 getEffect_() {
    float mType = effect2;
    vec2 newCoordinate = vec2(textureCoordinate.x, textureCoordinate.y);
    vec2 res = vec2(textureCoordinate.x, textureCoordinate.y);
    // case 1
    if (mType == 3.5) {
        if (newCoordinate.x > 0.5) {
            res = vec2(1.25 - newCoordinate.x, newCoordinate.y);
        }
    }
    else
    // case 2
    ...
    return res;
}
...
If you want to pass the result to another shader as a texture, you should use RTT (render to texture); that way you get a texture you can pass to the other shader.
Yes, you should use glUseProgram(name) to switch to the other shader, but that alone is not enough; you also need to render the intermediate result off-screen and then draw it to the original FBO (the one you use now):
1. Make one FBO so that the result of the first pass becomes a texture.
2. Render to that texture using the first shader.
3. Draw the resulting texture with the second shader into the main FBO (the one you use now).
A sketch of this setup follows.
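For illustration, a minimal sketch of that setup (program and helper names are hypothetical, and most error handling is elided):
// One-time setup: an FBO with a texture colour attachment.
int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("FBO is incomplete");
}

// Pass 1: render with the first shader into the FBO (result lands in tex[0]).
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glUseProgram(effect1Program);
drawFullScreenQuad(); // hypothetical helper that draws the textured quad

// Pass 2: back to the default framebuffer, feed tex[0] to the second shader.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glUseProgram(effect2Program);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glUniform1i(effect2TextureUniform, 0);
drawFullScreenQuad();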
If you just want to combine the two effects, simply combine the two fragment shaders:
// At the end of the second fragment shader,
// skip this:
//gl_FragColor = result;
// and put this instead:
float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
Alternatively, use only one shader program, built with the second effect's fragment shader.
I will assume you don't need to show those 30 effects at once.
Define uniform float effect2 in the 10 fragment shaders, just like effect1.
Pass effect2 a value such as 0.5, 1.5, or 2.5, and mix the effects differently according to the value you pass.
For example,
if (effect2 > 2.0) {
    float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
} else if (effect2 > 1.0) {
    vec3 final_result_of_effect2_2 = fun_2(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_2, 1.0);
} else {
    vec3 final_result_of_effect2_3 = fun_3(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_3, 1.0);
}
I've ported a desktop OpenGL application to Android NDK (under OpenGL ES 2), and there seems to be a random deformation of my mesh. On most application runs, it looks 100% perfect, but sometimes it looks as follows:
The inconsistency of the problem is what concerns me most. I don't know if it's caused by my Android emulator or by something else. Through my testing, I can establish that it's either:
- an OpenGL setting that doesn't play nice on Android, but does on everything else,
- a bug in the Open Asset Import Library (Assimp), which I've compiled by hand to work on Android, or
- a bug in the Android emulator.
My model process looks as follows:
On every draw:
- bind the program
- change the uniforms
- if (has vao support)
- bind vao
- enable all vertex attribute arrays
- for every mesh
- bind array buffer
- set the attribute pointer for each vertex array
- bind element buffer
- bind texture & set uniform of texture location
- glDrawElements
- disable all vertex attribute arrays
And this is the actual code:
glUseProgram(program_);
if (loaded_vao_)
{
#if !defined(TARGET_OS_IPHONE) && !defined(__ANDROID__)
    glBindVertexArray(vao_);
#else
    glBindVertexArrayOES(vao_);
#endif
}
glEnableVertexAttribArray(vPosition_);
glEnableVertexAttribArray(vTexCoord_);
glEnableVertexAttribArray(boneids_);
glEnableVertexAttribArray(weights_);
for (unsigned int i = 0; i < vbo_.size(); i++)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo_[i]);
    glVertexAttribPointer(vPosition_, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glVertexAttribPointer(vTexCoord_, 2, GL_FLOAT, GL_FALSE, 0, reinterpret_cast<void*>(texcoord_locations_[i]));
#if !defined(TARGET_OS_IPHONE) && !defined(__ANDROID__)
    glVertexAttribIPointer(boneids_, 4, GL_INT, 0, reinterpret_cast<void*>(bone_id_locations_[i]));
#else // APPLE OR ANDROID
    glVertexAttribPointer(boneids_, 4, GL_FLOAT, GL_FALSE, 0, reinterpret_cast<void*>(bone_id_locations_[i]));
#endif
    glVertexAttribPointer(weights_, 4, GL_FLOAT, GL_FALSE, 0, reinterpret_cast<void*>(bone_weight_locations_[i]));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo_[i]);

    // Textures
    if (!textures_.empty())
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textures_[texture_numbers_[i]]);
        glUniform1i(textureSample_, 0);
    }
    glDrawElements(GL_TRIANGLES, ind_size_[i], GL_UNSIGNED_SHORT, 0);
}
glDisableVertexAttribArray(vPosition_);
glDisableVertexAttribArray(vTexCoord_);
glDisableVertexAttribArray(boneids_);
glDisableVertexAttribArray(weights_);
My vertex shader looks as follows:
precision mediump float;
attribute vec3 vPosition;
attribute vec2 vTexCoord;
attribute vec4 boneids;
attribute vec4 weights;
uniform mat4 pos;
uniform mat4 view;
uniform mat4 scale;
uniform mat4 rotate;
uniform mat4 proj;
uniform mat4 bones[50];
uniform int has_bones;
varying vec4 color;
varying vec2 texcoord;
void main()
{
    color = vec4(1.0);
    texcoord = vTexCoord;
    vec4 newPos = vec4(vPosition, 1.0);
    if (has_bones == 1)
    {
        mat4 bone_transform = bones[int(boneids[0])] * weights[0];
        bone_transform += bones[int(boneids[1])] * weights[1];
        bone_transform += bones[int(boneids[2])] * weights[2];
        bone_transform += bones[int(boneids[3])] * weights[3];
        newPos = bone_transform * newPos;
    }
    gl_Position = proj * view * pos * scale * rotate * newPos;
}
Do note that I've tried commenting out the bone_transform in the vertex shader, and the problem still persists.
EDIT:
It seems that I was able to recreate some deformations in my Linux OpenGL 3.3 build by removing all Assimp optimization post-process flags:
scene = importer.ReadFile(file_path.c_str(), aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_LimitBoneWeights | aiProcess_ValidateDataStructure);
Based on the output of the Assimp::DefaultLogger, there are no errors or vertex warnings.
The issue seemed to be part of Blender's COLLADA export, or of Assimp's COLLADA reader.
By exporting to FBX and using Autodesk's free FBX to DAE tool, the deformation was fixed.
Blender was version 2.71
Assimp was version 3.1.1
Assimp, even with all logging and data integrity flags on, did not post any errors about corruption, so I don't know which component to blame. Regardless, I'm happy I've found a workaround.
As a starting point I use the Vuforia (version 4) sample called MultiTargets which tracks a 3d physical "cube" in the camera feed and augments it with yellow grid lines along the cube edges.
What I want to achieve is remove the textures and use diffuse lighting on the cube faces instead, by setting my own light position.
I want to do this on native Android and I do NOT want to use Unity.
It's been a hard journey of several days of work and learning. This is my first time working with OpenGL of any kind, and OpenGL ES 2.0 doesn't exactly make it easy for the beginner.
So I have a light source positioned slightly above the top face of my cube. I found that I can get the diffuse effect right if I compute the Lambert factor in model space: everything remains in place regardless of my camera, and only the top face gets any light.
But when I move to eye space, it becomes weird and the light seems to follow my camera around. Other faces get light, not only the top face. I don't understand why that is. For testing, I made sure that the light position is as expected by rendering pixel brightness in the fragment shader from the distance to the light source alone. So I'm fairly confident in the correctness of my "lightDirectionEyespace", and my only explanation is that something must be wrong with the normals. But I think I followed the explanations for creating the normal matrix correctly...
Help please!
Then there is of course the question of whether those diffuse calculations SHOULD be performed in eye space at all. Will there be any disadvantages if I just do it in model space? I suspect that once I use more models and lights and add specular and transparency, it will no longer work, even though I don't yet see why.
My renderFrame method: (some variable names still contain "bottle", which is the object I want to light next after I get the cube right)
private void renderFrame()
{
    ShaderFactory.checkGLError("Check gl errors prior render Frame");

    // Clear color and depth buffer
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Get the state from Vuforia and mark the beginning of a rendering section
    final State state = Renderer.getInstance().begin();

    // Explicitly render the Video Background
    Renderer.getInstance().drawVideoBackground();

    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    // Did we find any trackables this frame?
    if (0 != state.getNumTrackableResults())
    {
        // Get the trackable:
        TrackableResult result = null;
        final int numResults = state.getNumTrackableResults();

        // Browse results searching for the MultiTarget
        for (int j = 0; j < numResults; j++)
        {
            result = state.getTrackableResult(j);
            if (result.isOfType(MultiTargetResult.getClassType()))
                break;
            result = null;
        }

        // If it was not found, exit
        if (null == result)
        {
            // Clean up and leave
            GLES20.glDisable(GLES20.GL_BLEND);
            GLES20.glDisable(GLES20.GL_DEPTH_TEST);
            Renderer.getInstance().end();
            return;
        }

        final Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
        final float[] modelViewMatrix = modelViewMatrix_Vuforia.getData();
        final float[] modelViewProjection = new float[16];
        Matrix.scaleM(modelViewMatrix, 0, CUBE_SCALE_X, CUBE_SCALE_Y, CUBE_SCALE_Z);
        Matrix.multiplyMM(modelViewProjection, 0,
            vuforiaAppSession.getProjectionMatrix().getData(), 0, modelViewMatrix, 0);

        GLES20.glUseProgram(bottleShaderProgramID);

        // Draw the cube:
        GLES20.glEnable(GLES20.GL_CULL_FACE);
        GLES20.glCullFace(GLES20.GL_BACK);

        GLES20.glVertexAttribPointer(vertexHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getVertices());
        GLES20.glVertexAttribPointer(normalHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getNormals());
        GLES20.glEnableVertexAttribArray(vertexHandleBottle);
        GLES20.glEnableVertexAttribArray(normalHandleBottle);

        // add light position and color
        final float[] lightPositionInModelSpace = new float[] {0.0f, 1.1f, 0.0f, 1.0f};
        GLES20.glUniform4f(lightPositionHandleBottle, lightPositionInModelSpace[0],
            lightPositionInModelSpace[1], lightPositionInModelSpace[2], lightPositionInModelSpace[3]);
        GLES20.glUniform3f(lightColorHandleBottle, 0.9f, 0.9f, 0.9f);

        // create the normalMatrix for lighting calculations
        final float[] normalMatrix = new float[16];
        Matrix.invertM(normalMatrix, 0, modelViewMatrix, 0);
        Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
        // pass the normalMatrix to the shader
        GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);

        // extract the camera position for lighting calculations (last column of matrix)
        // GLES20.glUniform3f(cameraPositionHandleBottle, normalMatrix[12], normalMatrix[13], normalMatrix[14]);

        // set material properties
        GLES20.glUniform3f(matAmbientHandleBottle, 0.0f, 0.0f, 0.0f);
        GLES20.glUniform3f(matDiffuseHandleBottle, 0.1f, 0.9f, 0.1f);

        // pass the model view matrix to the shader
        GLES20.glUniformMatrix4fv(modelViewMatrixHandleBottle, 1, false, modelViewMatrix, 0);

        // pass the model view projection matrix to the shader
        // (the "transpose" parameter must be "false" according to the spec; anything else is an error)
        GLES20.glUniformMatrix4fv(mvpMatrixHandleBottle, 1, false, modelViewProjection, 0);

        GLES20.glDrawElements(GLES20.GL_TRIANGLES,
            cubeObject.getNumObjectIndex(), GLES20.GL_UNSIGNED_SHORT, cubeObject.getIndices());

        GLES20.glDisable(GLES20.GL_CULL_FACE);

        // disable the enabled arrays after everything has been rendered
        GLES20.glDisableVertexAttribArray(vertexHandleBottle);
        GLES20.glDisableVertexAttribArray(normalHandleBottle);

        ShaderFactory.checkGLError("MultiTargets renderFrame");
    }

    GLES20.glDisable(GLES20.GL_BLEND);
    GLES20.glDisable(GLES20.GL_DEPTH_TEST);

    Renderer.getInstance().end();
}
My vertex shader:
attribute vec4 vertexPosition;
attribute vec3 vertexNormal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 normalMatrix;
// lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
// material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
// pass to fragment shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    // we can just take vec3() of a vec4 and it will take the first 3 entries
    vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
    vNormal = vertexNormal;
    vVertexEyespace = vec3(modelViewMatrix * vertexPosition);
    vVertex = vertexPosition;
    // light position
    vLightPositionEyespace = modelViewMatrix * uLightPosition;
    gl_Position = modelViewProjectionMatrix * vertexPosition;
}
And my fragment shader:
precision highp float; //apparently necessary to force same precision as in vertex shader
//lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
//material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
//from vertex shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    vec3 normalModel = normalize(vNormal);
    vec3 normalEyespace = normalize(vNormalEyespace);
    vec3 lightDirectionModel = normalize(uLightPosition.xyz - vVertex.xyz);
    vec3 lightDirectionEyespace = normalize(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
    vec3 ambientTerm = uMatAmbient;
    vec3 diffuseTerm = uMatDiffuse * uLightColor;
    // calculate the lambert factor via cosine law
    float diffuseLambert = max(dot(normalEyespace, lightDirectionEyespace), 0.0);
    // attenuate the light based on distance
    float distance = length(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
    float diffuseLambertAttenuated = diffuseLambert * (1.0 / (1.0 + (0.01 * distance * distance)));
    diffuseTerm = diffuseLambertAttenuated * diffuseTerm;
    gl_FragColor = vec4(ambientTerm + diffuseTerm, 1.0);
}
I finally solved all problems.
There were two issues that might be of interest for future readers.
1. The CubeObject class from the official Vuforia sample (current Vuforia version 4) has wrong normals; they do not all correspond to the vertex definition order. If you're using the CubeObject from the sample, make sure the normal definitions correctly correspond to the faces. Vuforia fail...
2. As suspected, my normalMatrix was wrongly built. We cannot just invert-transpose the 4x4 modelViewMatrix; we need to first extract the top-left 3x3 submatrix from it and then invert-transpose that.
Here is the code that works for me:
final Mat3 normalMatrixCube=new Mat3();
normalMatrixCube.SetFrom4X4(modelViewMatrix);
normalMatrixCube.invert();
normalMatrixCube.transpose();
This code by itself is not that useful, though, because it relies on a custom Mat3 class which I randomly imported from this guy, because neither Android nor Vuforia seems to offer a matrix class that can invert/transpose 3x3 matrices. This really makes me question my sanity - the only code that works for such a basic problem has to rely on a custom matrix class? Maybe I'm just doing it wrong, I don't know...
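For readers who would rather avoid the extra class, here is a possible workaround using only android.opengl.Matrix (a sketch, not the answer's original code). It relies on the fact that if you zero the translation of the model-view matrix, the remaining 4x4 is the 3x3 rotation/scale block padded with an identity row and column, so its 4x4 inverse-transpose carries exactly the 3x3 normal matrix in its upper-left block:
float[] tmp = modelViewMatrix.clone();
// Zero the translation column (column-major slots 12..14) so only rotation/scale remain.
tmp[12] = 0f;
tmp[13] = 0f;
tmp[14] = 0f;
float[] inverted = new float[16];
Matrix.invertM(inverted, 0, tmp, 0);
float[] normalMatrix = new float[16];
// Note: android.opengl.Matrix.transposeM must not be used in place.
Matrix.transposeM(normalMatrix, 0, inverted, 0);
// In the shader, transform the normal with w = 0.0:
// vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 0.0));
The upper-left 3x3 of normalMatrix is then the inverse-transpose of the model-view's 3x3 part, and the zeroed translation keeps the 4x4 padding harmless when it is passed as a mat4 uniform.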
Thumbs up for not using the fixed-function pipeline for this! I found your example quite useful for understanding that one also needs to translate the light to a position in eye space. All the questions I've found just recommend using glLight.
While this helped me handle a static light source, one thing missing from your code, if you also want to transform your model(s) while keeping the light source static (e.g. rotating the object), is to keep track of the original model-view matrix until the view changes, or until you draw another object that has a different model transform. Something like:
vLightPositionEyespace = fixedModelView * uLightPosition;
where fixedModelView can be updated in your renderFrame() method.
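A possible sketch of that bookkeeping (handle and variable names hypothetical): copy the pose's model-view matrix before applying any per-object animation, and hand the untouched copy to the light transform while the animated matrix drives the geometry:
// Keep the pose matrix for the light before animating the object.
float[] fixedModelView = modelViewMatrix.clone();
Matrix.rotateM(modelViewMatrix, 0, angleDegrees, 0f, 1f, 0f); // object animation
GLES20.glUniformMatrix4fv(fixedModelViewHandle, 1, false, fixedModelView, 0);
GLES20.glUniformMatrix4fv(modelViewMatrixHandle, 1, false, modelViewMatrix, 0);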
This thread on the OpenGL discussion boards helped :)
I am trying to use a point light animation for my game. It runs fine in the Editor with the Diffuse, Bumped Specular, and VertexLit shaders. However, it doesn't work with any of the mobile shaders provided by default.
Is there a way to use point lights on Android? Or is there a shader that works on mobile and supports point lights too?
Finally found the answer - this post on Unity Answers helped me. I am reposting the custom shader here:
// Specular, Normal Maps with Main Texture
// Fragment based
Shader "SpecTest/SpecTest5"
{
    Properties
    {
        _Shininess ("Shininess", Range (0, 1.5)) = 0.078125
        _Color ("Main Color", Color) = (1,1,1,1)
        _SpecColor ("Specular Color", Color) = (0, 0, 0, 0)
        _MainTex ("Texture", 2D) = "white" {}
        _BumpMap ("Bump Map", 2D) = "bump" {}
        _NormalStrength ("Normal Strength", Range (0, 1.5)) = 1
    } // eo Properties

    SubShader
    {
        // pass for 4 vertex lights, ambient light & first pixel light
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf MobileBlinnPhong

        fixed4 LightingMobileBlinnPhong (SurfaceOutput s, fixed3 lightDir, fixed3 halfDir, fixed atten)
        {
            fixed diff = saturate(dot (s.Normal, lightDir));
            // Instead of injecting the normalized light+view vector, we just inject view,
            // which is provided as halfasview in the initial surface shader CG parameters
            fixed nh = saturate(dot (s.Normal, halfDir));
            fixed spec = pow (nh, s.Specular*128) * s.Gloss;

            fixed4 c;
            c.rgb = (s.Albedo * _LightColor0.rgb * diff + _SpecColor.rgb * spec) * (atten*2);
            c.a = 0.0;
            return c;
        }

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };

        // User-specified properties
        uniform sampler2D _MainTex;
        uniform sampler2D _BumpMap;
        uniform float _Shininess;
        uniform float _NormalStrength;
        uniform fixed4 _Color;

        float3 expand(float3 v) { return (v - 0.5) * 2; } // eo expand

        void surf (Input IN, inout SurfaceOutput o) {
            half4 tex = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = tex.rgb;
            o.Gloss = tex.a;
            o.Alpha = tex.a;
            o.Specular = _Shininess;

            // fetch and expand range-compressed normal
            float3 normalTex = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
            float3 normal = normalTex * _NormalStrength;
            o.Normal = normal;
        } // eo surf
        ENDCG
    }
    //Fallback "Specular"
} // eo Shader
Remember to increase the normal strength, though. And obviously it's too costly on frame rate; I needed it for just an animation, so I used it.