I applied a cel-shading effect to my object. This works well, but there are many conditional checks ("if" statements) in the fragment shader:
#version 300 es
precision lowp float;

in float v_CosViewAngle;
in float v_LightIntensity;

out vec4 outColor;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    lowp float intensity = 0.0;
    if (v_CosViewAngle > 0.33) {
        intensity = 0.33;
        if (v_LightIntensity > 0.76) {
            intensity = 1.0;
        } else if (v_LightIntensity > 0.51) {
            intensity = 0.84;
        } else if (v_LightIntensity > 0.26) {
            intensity = 0.67;
        } else if (v_LightIntensity > 0.1) {
            intensity = 0.50;
        }
    }
    outColor = vec4(defaultColor * intensity, 1.0);
}
I suspect that so many checks in the fragment shader can ultimately hurt performance. In addition, the shader keeps growing, especially if I add even more cel-shading levels.
Is there another way to get this effect? Maybe some GLSL function can be used here?
Thanks in advance!
Store your color bands in an Nx1 texture and do a texture lookup using v_LightIntensity as your texture coordinate. If you want a different number of shading levels, just change the texture.
EDIT: Store an NxM texture and do the lookup using v_LightIntensity and v_CosViewAngle as a 2D coordinate; that kills the branches completely.
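A sketch of what the lookup version could look like, assuming an NxM band texture sampled with NEAREST filtering and both varyings in [0, 1] (the sampler name u_bandTexture is made up for this example):

```glsl
#version 300 es
precision lowp float;

in float v_CosViewAngle;
in float v_LightIntensity;

out vec4 outColor;

// NxM texture whose red channel holds the band values
// (e.g. 0.0, 0.5, 0.67, 0.84, 1.0); must use NEAREST filtering
// so the bands stay sharp instead of interpolating.
uniform sampler2D u_bandTexture;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    // both if-chains collapse into a single 2D lookup:
    // s selects the light band, t selects the view-angle band
    float intensity = texture(u_bandTexture,
                              vec2(v_LightIntensity, v_CosViewAngle)).r;
    outColor = vec4(defaultColor * intensity, 1.0);
}
```

Adding or removing shading levels then only changes the texture contents, not the shader.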
I'm working on an AR app and trying to replace a plane's texture.
I'm trying to render a texture on vertical and horizontal planes. It works fine for horizontal planes, but doesn't work well on vertical ones.
I found that something is wrong with the texture_coord calculation, but I can't figure it out (I'm new to OpenGL).
Here is my vertex shader:
in vec3 a_position;
uniform mat4 u_model;
uniform mat4 u_mvp;
uniform float u_scale;
out vec2 texture_coord;

void main()
{
    vec4 local_pos = vec4(a_position, 1.0);
    vec4 world_pos = u_model * local_pos;
    texture_coord = world_pos.sp * u_scale;
    gl_Position = u_mvp * local_pos;
}
And my fragment shader:
precision mediump float;

uniform sampler2D u_texture;
uniform vec4 u_gridControl;
uniform vec4 u_dotColor;
uniform vec4 u_lineColor;
in vec2 diffuse_coord;
out vec4 outColor;

void main()
{
    vec4 control = texture(u_texture, diffuse_coord);
    float dotScale = 1.0;
    float lineFade = 0.5;
    vec3 newColor = (control.r * dotScale > u_gridControl.x)
        ? u_dotColor.rgb
        : (control.g > u_gridControl.y)
            ? u_lineColor.rgb * lineFade
            : u_lineColor.rgb * 0.25 * lineFade;
    outColor = vec4(newColor, 1.0);
}
The important bit is texture_coord = world_pos.sp in your vertex shader.
There are 3 ways to refer to the components of a vector in GLSL. xyzw (the most common), rgba (more natural for colours), stpq (more natural for texture coordinates).
The line texture_coord = world_pos.sp would be clearer if it were written as texture_coord = world_pos.xz.
Once you realize that you're generating texture coordinates by ignoring the y-component it's obvious why vertical planes are not textured how you would like.
Unfortunately there's no simple one line fix. Perhaps tri-planar texturing might be an appropriate solution for you - this seems to be a good explanation of the technique.
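For reference, a minimal tri-planar sketch (the variable names here are hypothetical, not from the question's code): the surface is textured by all three axis-aligned projections, blended by the world-space normal, so vertical faces get sensible coordinates too.

```glsl
#version 300 es
precision mediump float;

uniform sampler2D u_texture;
uniform float u_scale;

in vec3 v_worldPos;     // world-space position from the vertex shader
in vec3 v_worldNormal;  // world-space normal from the vertex shader

out vec4 outColor;

void main()
{
    // blend weights: how much the surface faces each axis
    vec3 blend = abs(normalize(v_worldNormal));
    blend /= (blend.x + blend.y + blend.z);

    // sample the texture projected along each axis
    vec4 xProj = texture(u_texture, v_worldPos.yz * u_scale);
    vec4 yProj = texture(u_texture, v_worldPos.xz * u_scale);
    vec4 zProj = texture(u_texture, v_worldPos.xy * u_scale);

    outColor = xProj * blend.x + yProj * blend.y + zProj * blend.z;
}
```

Vertical planes mostly pick up the xProj/zProj samples, which use the y-component that the original world_pos.xz mapping threw away.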
The main texture of my surface shader is a Google Maps image tile.
I want to replace pixels that are close to a specified color with that from a separate texture. What is working now is the following:
Shader "MyShader"
{
    Properties
    {
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
        _GrassTexture ("Grass Texture", 2D) = "white" {}
        _RoadTexture ("Road Texture", 2D) = "white" {}
        _WaterTexture ("Water Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent-1" "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "RenderType" = "Opaque" }
        LOD 200
        CGPROGRAM
        #pragma surface surf Lambert alpha approxview halfasview noforwardadd nometa

        uniform sampler2D _MainTex;
        uniform sampler2D _GrassTexture;
        uniform sampler2D _RoadTexture;
        uniform sampler2D _WaterTexture;

        struct Input
        {
            float2 uv_MainTex;
        };

        void surf(Input IN, inout SurfaceOutput o)
        {
            fixed4 ct = tex2D(_MainTex, IN.uv_MainTex);

            // if the red (or blue) channel of the pixel is within a
            // specific range, get either a 1 or a 0 (true/false).
            int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
            int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
            int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);

            // if none of the above conditions is a 1, keep the
            // current pixel's color:
            half defaultCond = 1 - grassCond - waterCond - roadCond;

            // sample each texture and multiply by its check condition to get:
            // fixed4(0,0,0,0) if this isn't the right texture for this pixel,
            // or fixed4(r,g,b,1) from the texture if it is
            fixed4 grass = grassCond * tex2D(_GrassTexture, IN.uv_MainTex);
            fixed4 water = waterCond * tex2D(_WaterTexture, IN.uv_MainTex);
            fixed4 road = roadCond * tex2D(_RoadTexture, IN.uv_MainTex);
            fixed4 def = defaultCond * ct; // just use the MainTex pixel

            // then use the summed pixels as the Albedo
            o.Albedo = (grass + road + water + def).rgb;
            o.Alpha = 1;
        }
        ENDCG
    }
    Fallback "None"
}
This is the first shader I've ever written, and it probably isn't very performant. It seems counter intuitive to me to call tex2D on each texture for every pixel to just throw that data away, but I couldn't think of a better way to do this without if/else (which I read were bad for GPUs).
This is a Unity Surface Shader, and not a fragment/vertex shader. I know there is a step that happens behind the scenes that will generate the fragment/vertex shader for me (adding in the scene's lighting, fog, etc.). This shader is applied to 100 256x256px map tiles (2560x2560 pixels in total). The grass/road/water textures are all 256x256 pixels as well.
My question is: is there a better, more performant way of accomplishing what I'm doing here? The game runs on Android and iOS.
I'm not a specialist in shader performance, but assuming you have a relatively small number of source tiles to render in the same frame, it might make more sense to store the result of the pixel replacement and reuse it.
Since the resulting image is the same size as your source tile, just render the source tile with your surface shader (without any lighting, though; you may want to use a simple, flat pixel shader) into a RenderTexture once, and then use that RenderTexture as the source for your world rendering. That way you do the expensive work only once per source tile, so it hardly matters anymore whether your shader is well optimized.
If all textures are static, you might even consider not doing this at runtime, but translating them once in the Editor.
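As an aside, if the per-pixel approach is kept, the int() range checks can also be written with step(), which exists in both GLSL and CG/HLSL. A small sketch of the same trick (the helper name inRange is made up here):

```glsl
// step(edge, x) is 0.0 when x < edge and 1.0 otherwise, so the
// product of two steps is 1.0 exactly when lo <= x <= hi.
float inRange(float x, float lo, float hi)
{
    return step(lo, x) * step(x, hi);
}

// e.g. the grass check from the surface shader becomes:
// float grassCond = inRange(ct.r, 0.45, 0.46);
```

This is equivalent to the int() casts above, just expressed with a built-in that GPUs handle without branching.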
All I need is to pass a texture through fragment shader 1, get the result, and pass it to fragment shader 2.
I know how to link a vertex and fragment shader together into a program and get the shader object.
I don't know how to get the result of shader 1, switch shaders (GLES20.glUseProgram?), and pass the result of shader 1 to shader 2.
Any ideas how to do it?
UPDATE
This is an example of what I want to achieve: combining Effect 1 and Effect 2.
UPDATE 2
effect 2 function:
...
uniform float effect2;

vec2 getEffect_() {
    float mType = effect2;
    vec2 newCoordinate = vec2(textureCoordinate.x, textureCoordinate.y);
    vec2 res = vec2(textureCoordinate.x, textureCoordinate.y);
    // case 1
    if (mType == 3.5) {
        if (newCoordinate.x > 0.5) {
            res = vec2(1.25 - newCoordinate.x, newCoordinate.y);
        }
    }
    else
    // case 2
    ...
    return res;
}
...
If you want to pass the result as a texture to another shader, you should use RTT (render to texture) so that you get a texture you can pass to the second shader.
Yes, you use glUseProgram(name) to switch to the other shader, but that alone is not enough; you also need to render into the right framebuffer:
Make one FBO to capture the result as a texture.
Render to that texture using the first shader; now you have a texture.
Draw that texture with the second shader into the main FBO (the one you use now).
If you just want to combine the two effects, simply combine the two fragment shaders.
// At the end of the second frag shader,
// skip this:
// gl_FragColor = result;
// and put this instead:
float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
Use only one shader program, with the second effect's fragment shader.
I will assume you don't need to show those 30 effects at once.
Define uniform float effect2 in the 10 fragment shaders, like effect1.
Pass effect2 a value like 0.5, 1.5 or 2.5.
According to the value you pass, mix the effects differently.
For example,
if (effect2 > 2.0) {
    float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
} else if (effect2 > 1.0) {
    vec3 final_result_of_effect2_2 = fun_2(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_2, 1.0);
} else {
    vec3 final_result_of_effect2_3 = fun_3(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_3, 1.0);
}
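Put together, a single-pass combination could look roughly like this (the sampler name inputTexture and the simplified mirror remap are illustrative, not taken verbatim from the code above):

```glsl
precision mediump float;

varying vec2 textureCoordinate;
uniform sampler2D inputTexture;

// effect 1: remap the texture coordinate (here, mirror the right half)
vec2 getEffect_() {
    vec2 res = textureCoordinate;
    if (res.x > 0.5) {
        res = vec2(1.0 - res.x, res.y);
    }
    return res;
}

void main() {
    // effect 1 happens in the lookup, effect 2 on the sampled color
    vec4 result = texture2D(inputTexture, getEffect_());
    float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(grayScale), 1.0);
}
```

Because the effects compose as "remap coordinate, then recolor", they chain inside one fragment shader with no intermediate render target.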
As a starting point I use the Vuforia (version 4) sample called MultiTargets, which tracks a physical 3D "cube" in the camera feed and augments it with yellow grid lines along the cube edges.
What I want to achieve is remove the textures and use diffuse lighting on the cube faces instead, by setting my own light position.
I want to do this on native Android and I do NOT want to use Unity.
It's been a hard journey of several days of work and learning. This is my first time working with OpenGL of any kind, and OpenGL ES 2.0 doesn't exactly make it easy for the beginner.
So I have a light source positioned slightly above the top face of my cube. I found that I get the diffuse effect right if I compute the Lambert factor in model space: everything stays in place regardless of my camera, and only the top face gets any light.
But when I move to eye space, it becomes weird and the light seems to follow my camera around. Other faces get light, not only the top face. I don't understand why. For testing I have made sure that the light position is as expected, by using only the distance to the light source for the pixel brightness in the fragment shader. I'm therefore fairly confident in the correctness of my "lightDirectionEyespace", and my only explanation is that something with the normals must be wrong. But I think I followed the explanations for creating the normal matrix correctly...
Help please!
Then there is of course the question of whether these diffuse calculations SHOULD be performed in eye space at all. Are there any disadvantages to just doing it in model space? I suspect that when I later use more models and lights, and add specular and transparency, it will no longer work, even though I don't yet see why.
My renderFrame method: (some variable names still contain "bottle", which is the object I want to light next after I get the cube right)
private void renderFrame()
{
    ShaderFactory.checkGLError("Check gl errors prior render Frame");

    // Clear color and depth buffer
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    // Get the state from Vuforia and mark the beginning of a rendering section
    final State state = Renderer.getInstance().begin();

    // Explicitly render the Video Background
    Renderer.getInstance().drawVideoBackground();

    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    // Did we find any trackables this frame?
    if(0 != state.getNumTrackableResults())
    {
        // Get the trackable:
        TrackableResult result = null;
        final int numResults = state.getNumTrackableResults();

        // Browse results searching for the MultiTarget
        for(int j = 0; j < numResults; j++)
        {
            result = state.getTrackableResult(j);
            if(result.isOfType(MultiTargetResult.getClassType()))
                break;
            result = null;
        }

        // If it was not found, exit
        if(null == result)
        {
            // Clean up and leave
            GLES20.glDisable(GLES20.GL_BLEND);
            GLES20.glDisable(GLES20.GL_DEPTH_TEST);
            Renderer.getInstance().end();
            return;
        }

        final Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
        final float[] modelViewMatrix = modelViewMatrix_Vuforia.getData();
        final float[] modelViewProjection = new float[16];
        Matrix.scaleM(modelViewMatrix, 0, CUBE_SCALE_X, CUBE_SCALE_Y, CUBE_SCALE_Z);
        Matrix.multiplyMM(modelViewProjection, 0, vuforiaAppSession
            .getProjectionMatrix().getData(), 0, modelViewMatrix, 0);

        GLES20.glUseProgram(bottleShaderProgramID);

        // Draw the cube:
        GLES20.glEnable(GLES20.GL_CULL_FACE);
        GLES20.glCullFace(GLES20.GL_BACK);
        GLES20.glVertexAttribPointer(vertexHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getVertices());
        GLES20.glVertexAttribPointer(normalHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getNormals());
        GLES20.glEnableVertexAttribArray(vertexHandleBottle);
        GLES20.glEnableVertexAttribArray(normalHandleBottle);

        // add light position and color
        final float[] lightPositionInModelSpace = new float[] {0.0f, 1.1f, 0.0f, 1.0f};
        GLES20.glUniform4f(lightPositionHandleBottle, lightPositionInModelSpace[0], lightPositionInModelSpace[1],
            lightPositionInModelSpace[2], lightPositionInModelSpace[3]);
        GLES20.glUniform3f(lightColorHandleBottle, 0.9f, 0.9f, 0.9f);

        // create the normalMatrix for lighting calculations
        final float[] normalMatrix = new float[16];
        Matrix.invertM(normalMatrix, 0, modelViewMatrix, 0);
        Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
        // pass the normalMatrix to the shader
        GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);

        // extract the camera position for lighting calculations (last column of matrix)
        // GLES20.glUniform3f(cameraPositionHandleBottle, normalMatrix[12], normalMatrix[13], normalMatrix[14]);

        // set material properties
        GLES20.glUniform3f(matAmbientHandleBottle, 0.0f, 0.0f, 0.0f);
        GLES20.glUniform3f(matDiffuseHandleBottle, 0.1f, 0.9f, 0.1f);

        // pass the model view matrix to the shader
        GLES20.glUniformMatrix4fv(modelViewMatrixHandleBottle, 1, false, modelViewMatrix, 0);
        // pass the model view projection matrix to the shader
        // the "transpose" parameter must be "false" according to the spec, anything else is an error
        GLES20.glUniformMatrix4fv(mvpMatrixHandleBottle, 1, false, modelViewProjection, 0);

        GLES20.glDrawElements(GLES20.GL_TRIANGLES,
            cubeObject.getNumObjectIndex(), GLES20.GL_UNSIGNED_SHORT, cubeObject.getIndices());

        GLES20.glDisable(GLES20.GL_CULL_FACE);

        // disable the enabled arrays after everything has been rendered
        GLES20.glDisableVertexAttribArray(vertexHandleBottle);
        GLES20.glDisableVertexAttribArray(normalHandleBottle);

        ShaderFactory.checkGLError("MultiTargets renderFrame");
    }

    GLES20.glDisable(GLES20.GL_BLEND);
    GLES20.glDisable(GLES20.GL_DEPTH_TEST);
    Renderer.getInstance().end();
}
My vertex shader:
attribute vec4 vertexPosition;
attribute vec3 vertexNormal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 normalMatrix;
// lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
// material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
// pass to fragment shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    // we can just take vec3() of a vec4 and it will take the first 3 entries
    vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
    vNormal = vertexNormal;
    vVertexEyespace = vec3(modelViewMatrix * vertexPosition);
    vVertex = vertexPosition;

    // light position
    vLightPositionEyespace = modelViewMatrix * uLightPosition;

    gl_Position = modelViewProjectionMatrix * vertexPosition;
}
And my fragment shader:
precision highp float; //apparently necessary to force same precision as in vertex shader
//lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
//material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
//from vertex shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    vec3 normalModel = normalize(vNormal);
    vec3 normalEyespace = normalize(vNormalEyespace);
    vec3 lightDirectionModel = normalize(uLightPosition.xyz - vVertex.xyz);
    vec3 lightDirectionEyespace = normalize(vLightPositionEyespace.xyz - vVertexEyespace.xyz);

    vec3 ambientTerm = uMatAmbient;
    vec3 diffuseTerm = uMatDiffuse * uLightColor;

    // calculate the lambert factor via the cosine law
    float diffuseLambert = max(dot(normalEyespace, lightDirectionEyespace), 0.0);

    // attenuate the light based on distance
    float distance = length(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
    float diffuseLambertAttenuated = diffuseLambert * (1.0 / (1.0 + (0.01 * distance * distance)));
    diffuseTerm = diffuseLambertAttenuated * diffuseTerm;

    gl_FragColor = vec4(ambientTerm + diffuseTerm, 1.0);
}
I finally solved all problems.
There were 2 issues that might be of interest for future readers.
The Vuforia CubeObject class from the official sample (current Vuforia version 4) has wrong normals. They do not all correspond to the vertex definition order. If you're using the CubeObject from the sample, make sure the normal definitions correctly correspond to the faces. Vuforia fail...
As suspected, my normalMatrix was built wrongly. We cannot just invert-transpose the 4x4 modelViewMatrix; we need to first extract the top-left 3x3 submatrix from it and then invert-transpose that.
Here is the code that works for me:
final Mat3 normalMatrixCube = new Mat3();
normalMatrixCube.SetFrom4X4(modelViewMatrix);
normalMatrixCube.invert();
normalMatrixCube.transpose();
This code by itself is not that useful though, because it relies on a custom Mat3 class which I randomly imported from this guy, because neither Android nor Vuforia seems to offer a matrix class that can invert/transpose 3x3 matrices. This really makes me question my sanity: the only code that works for such a basic problem has to rely on a custom matrix class? Maybe I'm just doing it wrong, I don't know...
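For reference, if you can target OpenGL ES 3.0, the same extraction can be done in the vertex shader itself, since GLSL ES 3.00 provides inverse() and a mat3-from-mat4 constructor (this is a sketch, not code from the answer above):

```glsl
#version 300 es
uniform mat4 modelViewMatrix;

in vec3 vertexNormal;
out vec3 vNormalEyespace;

void main() {
    // take the upper-left 3x3 of the model-view matrix,
    // then invert-transpose it to get the normal matrix
    mat3 normalMatrix = transpose(inverse(mat3(modelViewMatrix)));
    vNormalEyespace = normalize(normalMatrix * vertexNormal);
    // ... position transform etc. as before
}
```

Note that inverse() per vertex is not free; computing the normal matrix once on the CPU, as above, is still the cheaper option.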
Thumbs up for not using the fixed functions on this! I found your example quite useful for understanding that one also needs to translate the light to a position in eye space. All the questions I've found just recommend using glLight.
While this helped me solve the static light source case, one thing missing from your code, if you also want to transform your model(s) while keeping the light source static (e.g. rotating the object), is to keep track of the original model-view matrix until the view changes, or until you draw another object with a different model matrix. Something like:
vLightPositionEyespace = fixedModelView * uLightPosition;
where fixedModelView can be updated in your renderFrame() method.
This thread on opengl discussion boards helped :)
I am trying to use a point light animation for my game. It runs fine in the Editor with the Diffuse, Bumped Specular and VertexLit shaders. However, it doesn't work with any of the Mobile shaders provided by default.
Is there a way to use point lights on Android? Or is there a shader that works on mobiles and supports point lights?
Finally found the answer: this post on UnityAnswers helped me. I am reposting the custom shader here:
// Specular, Normal Maps with Main Texture
// Fragment based
Shader "SpecTest/SpecTest5"
{
    Properties
    {
        _Shininess ("Shininess", Range (0, 1.5)) = 0.078125
        _Color ("Main Color", Color) = (1,1,1,1)
        _SpecColor ("Specular Color", Color) = (0, 0, 0, 0)
        _MainTex ("Texture", 2D) = "white" {}
        _BumpMap ("Bump Map", 2D) = "bump" {}
        _NormalStrength ("Normal Strength", Range (0, 1.5)) = 1
    } // eo Properties

    SubShader
    {
        // pass for 4 vertex lights, ambient light & first pixel light
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf MobileBlinnPhong

        fixed4 LightingMobileBlinnPhong (SurfaceOutput s, fixed3 lightDir, fixed3 halfDir, fixed atten)
        {
            fixed diff = saturate(dot (s.Normal, lightDir));
            // instead of injecting the normalized light+view vector, we just inject view,
            // which is provided as halfasview in the initial surface shader CG parameters
            fixed nh = saturate(dot (s.Normal, halfDir));
            fixed spec = pow (nh, s.Specular*128) * s.Gloss;

            fixed4 c;
            c.rgb = (s.Albedo * _LightColor0.rgb * diff + _SpecColor.rgb * spec) * (atten*2);
            c.a = 0.0;
            return c;
        }

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };

        // User-specified properties
        uniform sampler2D _MainTex;
        uniform sampler2D _BumpMap;
        uniform float _Shininess;
        uniform float _NormalStrength;
        uniform fixed4 _Color;

        float3 expand(float3 v) { return (v - 0.5) * 2; } // eo expand

        void surf (Input IN, inout SurfaceOutput o) {
            half4 tex = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = tex.rgb;
            o.Gloss = tex.a;
            o.Alpha = tex.a;
            o.Specular = _Shininess;

            // fetch and expand range-compressed normal
            float3 normalTex = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
            float3 normal = normalTex * _NormalStrength;
            o.Normal = normal;
        } // eo surf
        ENDCG
    }
    //Fallback "Specular"
} // eo Shader
Remember to increase the normal strength, though. And obviously it's too costly on frame rate; I needed it for just an animation, so I used it.