I am trying to write an OpenGL shader that applies a vignette effect, but the issue I am facing is that a visible circle shows up, as you can see in the image on the right.
I want to find out what exactly is going wrong in my code.
I have pasted the code below.
Code for my vignette shader:
precision mediump float;
uniform sampler2D u_Texture;
uniform sampler2D u_Vigenette;
uniform sampler2D u_Map;
varying vec2 v_TexCoordinate;
void main()
{
    vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
    texel.r = (texel.r == 1.0) ? .9961 : texel.r;
    texel.g = (texel.g == 1.0) ? .9961 : texel.g;
    texel.b = (texel.b == 1.0) ? .9961 : texel.b;
    texel = vec3(
        texture2D(u_Map, vec2(texel.r, .16666)).r,
        texture2D(u_Map, vec2(texel.g, .5)).g,
        texture2D(u_Map, vec2(texel.b, .83333)).b);
    texel.r = (texel.r == 1.0) ? .9961 : texel.r;
    texel.g = (texel.g == 1.0) ? .9961 : texel.g;
    texel.b = (texel.b == 1.0) ? .9961 : texel.b;
    vec2 tc = (2.0 * v_TexCoordinate) - 1.0;
    float d = dot(tc, tc);
    vec2 lookup = vec2(d, texel.r);
    texel.r = texture2D(u_Vigenette, lookup).r;
    lookup.y = texel.g;
    texel.g = texture2D(u_Vigenette, lookup).g;
    lookup.y = texel.b;
    texel.b = texture2D(u_Vigenette, lookup).b;
    gl_FragColor = vec4(texel, 1.0);
}
Thanks in advance for the help.
Bit of a guess because there's not enough information to be sure, but I think that your code is expecting the result of
float d = dot(tc, tc);
to be in the range 0 to 1, but it's actually in the range 0 to 2.
Perhaps you want to rescale it, or perhaps the u_Vigenette texture is set to repeat instead of clamp to edge.
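If the wrap mode is the culprit, the fix is host-side rather than in the shader. A minimal sketch, assuming the Android GLES20 API and that the u_Vigenette texture has just been created and bound (none of that setup is shown in the question):

import android.opengl.GLES20;

public final class VignetteTextureSetup {
    // Assumes the u_Vigenette texture object has just been created and is
    // currently bound to GL_TEXTURE_2D on the active texture unit.
    public static void clampToEdge() {
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    }
}

For the rescaling option, dividing d by 2.0 (or clamping it to the 0.0 .. 1.0 range) inside the shader keeps the lookup coordinate within the texture.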
Related
I've been trying to figure out why my shader doesn't work on Android while it does on desktop. By "not working" I mean the shader makes the texture invisible. Here's the code of my fragment shader:
precision mediump float;
varying vec4 v_color;
varying vec2 v_texCoord0;
varying float randomFloat;
uniform vec3 dif_color;
uniform sampler2D u_sampler2D;
vec4 color_right;
vec4 color_left;
vec4 color_bottom;
vec4 color_top;
vec3 colorNeighbors;
float neighbors;
void main(){
    vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
    ivec2 texSize = textureSize(u_sampler2D, 0);
    ivec2 texIndex = ivec2(int(v_texCoord0.x * float(texSize.x)), int(v_texCoord0.y * float(texSize.y)));
    color_right = texelFetch(u_sampler2D, texIndex + ivec2(1, 0), 0).rgba;
    color_left = texelFetch(u_sampler2D, texIndex + ivec2(-1, 0), 0).rgba;
    color_top = texelFetch(u_sampler2D, texIndex + ivec2(0, 1), 0).rgba;
    color_bottom = texelFetch(u_sampler2D, texIndex + ivec2(0, -1), 0).rgba;
    if(color_right.a != 0){ neighbors++; }
    if(color_left.a != 0){ neighbors++; }
    if(color_top.a != 0){ neighbors++; }
    if(color_bottom.a != 0){ neighbors++; }
    vec3 colorNeighbors = (color_right.rgb + color_left.rgb + color_top.rgb + color_bottom.rgb) / neighbors;
    if((color_right.a != 0 || color_left.a != 0 || color_top.a != 0 || color_bottom.a != 0) && color.a == 0){
        color.rgba = vec4(.1 + .3 * randomFloat + colorNeighbors, 1.0);
    }
    gl_FragColor = color;
}
I've read many questions similar to mine. Answers to these advise adding a certain line, precision mediump float;, and using floating-point values instead of integers. I've applied this to my code, but the result is no different: it still doesn't work.
Other answers are totally irrelevant to my code. I assume the problem in my code lies in this line:
ivec2 texSize = textureSize(u_sampler2D, 0);
When I remove this line (along with the lines dependent on this variable) the shader works, though of course not in the way I want it to.
So, maybe someone knows a different way to get the texture size? But I'd also like to know what causes the shader to malfunction.
According to the Khronos specification, textureSize is available only in OpenGL ES Shading Language 3.00. If you want to use it, you should rewrite your shaders for the OpenGL ES SL 3.00 format (#version 300 es).
Also, check the shader compilation log to see if there are other errors.
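If you do rewrite for ES 3.00, the host side also has to request an ES 3.0 context before the surface is created. A minimal sketch, assuming a plain GLSurfaceView setup rather than whatever framework the question's code actually uses:

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

public class Es3PreviewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        // Request an OpenGL ES 3.0 context so that #version 300 es shaders
        // (and with them textureSize/texelFetch) become available.
        view.setEGLContextClientVersion(3);
        // view.setRenderer(yourRenderer); // renderer setup omitted from this sketch
        setContentView(view);
    }
}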
Don't use textureSize. Pass the same values in as a Vector2 uniform.
Be careful to also send another float uniform with the devicePixelRatio; if you don't, the texture on devices with a retina display will look half-sized.
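A minimal host-side sketch of that approach, assuming the Android GLES20 API; u_texSize and u_devicePixelRatio are hypothetical uniform names, and the shader would read the dimensions from the vec2 instead of calling textureSize:

import android.opengl.GLES20;

public final class TextureSizeUniforms {
    // Uploads the texture dimensions and density scale by hand, since
    // textureSize() is unavailable in GLSL ES 1.00. Call after glUseProgram(program).
    public static void upload(int program, int texWidth, int texHeight,
                              float devicePixelRatio) {
        int sizeLocation = GLES20.glGetUniformLocation(program, "u_texSize");
        int ratioLocation = GLES20.glGetUniformLocation(program, "u_devicePixelRatio");
        GLES20.glUniform2f(sizeLocation, (float) texWidth, (float) texHeight);
        GLES20.glUniform1f(ratioLocation, devicePixelRatio);
    }
}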
I needed to create a rippling effect for one sprite in my game. Here's the vertex shader:
attribute vec4 a_position; // just taking in necessary attributes
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans; // Combination of view and projection matrix
varying vec2 v_texCoords;
void main() {
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position; //as I said, it is sprite so no need for modelMatrix
}
and here's the fragment:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
uniform sampler2D u_texture; //texture of sprite
uniform float time;
void main()
{
    vec2 uv;
    if (time > 0.0) { // time is > 0.0 when I want the rippling effect to be applied
        vec2 cPos = -1.0 + 2.0 * v_texCoords.xy; // converting tex coords to -1 .. 1
        float cLength = length(cPos); // taking its length
        uv = v_texCoords.xy + ((cPos / cLength) * cos(cLength * 12.0 - time * 4.0) * 0.03); // just some calculations for the rippling effect
    }
    else
        uv = v_texCoords.xy; // if I don't want to use the rippling effect, I use the normal texCoords
    vec4 tex = texture2D(u_texture, uv); // sampling the texture
    gl_FragColor = tex;
}
It all works fine and the performance is fine on PC, but when running it on Android the performance is a lot worse... As you can see, the shaders are trivial, but somehow they are expensive. Anyway, the sprite I draw is about 2000 - 4000 px wide and 720 px high. Also, when I replace v_texCoords with a different vector (for example vec2(1, 1)) in the cPos calculation (vec2 cPos = -1.0 + 2.0 * v_texCoords.xy;), the performance improves heavily.
I don't really know what's so expensive there. If anyone has some advice, I'd be happy. Thanks in advance.
My terrain uses a shader which itself uses four different textures. It runs fine on Windows and Linux machines, but on Android it gets only ~25 FPS on both Galaxy devices. I thought the textures were the problem, but no; as it turns out, the problem is the part where I divide the texture coordinates and use fract to get tiled coordinates. Without it, I get 60 FPS.
// Material data.
//uniform vec3 uAmbient;
//uniform vec3 uDiffuse;
//uniform vec3 uLightPos[8];
//uniform vec3 uEyePos;
//uniform vec3 uFogColor;
uniform sampler2D terrain_blend;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D dirt;
varying vec2 varTexCoords;
//varying vec3 varEyeNormal;
//varying float varFogWeight;
//------------------------------------------------------------------
// Name: fog
// Desc: applies calculated fog weight to fog color and mixes with
// specified color.
//------------------------------------------------------------------
//vec4 fog(vec4 color) {
// return mix(color, vec4(uFogColor, 1.0), varFogWeight);
//}
void main(void)
{
    /*vec3 N = normalize(varEyeNormal);
    vec3 L = normalize(uLightPos[0]);
    vec3 H = normalize(L + normalize(uEyePos));
    float df = max(0.0, dot(N, L));
    vec3 col = uAmbient + uDiffuse * df;*/
    // Take color information from textures and tile them.
    vec2 tiledCoords = varTexCoords;
    //vec2 tiledCoords = fract(varTexCoords / 0.05); // <========= HERE!!!!!!!!!
    //vec4 colGrass = texture2D(grass, tiledCoords);
    vec4 colGrass = texture2D(grass, tiledCoords);
    //vec4 colDirt = texture2D(dirt, tiledCoords);
    vec4 colDirt = texture2D(dirt, tiledCoords);
    //vec4 colRock = texture2D(rock, tiledCoords);
    vec4 colRock = texture2D(rock, tiledCoords);
    // Take color information from the non-tiled blend map.
    vec4 colBlend = texture2D(terrain_blend, varTexCoords);
    // Find the inverse of the sum of the blend weights.
    float inverse = 1.0 / (colBlend.r + colBlend.g + colBlend.b);
    // Scale each color by its corresponding weight.
    colGrass *= colBlend.r * inverse;
    colDirt *= colBlend.g * inverse;
    colRock *= colBlend.b * inverse;
    vec4 final = colGrass + colDirt + colRock;
    //final = fog(final);
    gl_FragColor = final;
}
Note: there's some more code for light calculation and fog, but it isn't used. I indicated the line that, when uncommented, causes massive lag. I tried using floor and calculating fractional part manually, but lag is the same. What might be wrong?
EDIT: Now here's what I don't understand.
This:
vec2 tiledCoords = fract(varTexCoords * 2.0);
Runs great.
This:
vec2 tiledCoords = fract(varTexCoords * 10.0);
Runs average on SIII.
This:
vec2 tiledCoords = fract(varTexCoords * 20.0);
Lags...
This:
vec2 tiledCoords = fract(varTexCoords * 100.0);
Well 5FPS is still better than I expected...
So what gives? Why is this happening? To my understanding this shouldn't make any difference. But it does. And a huge one.
I would run your code through a profiler (check the Mali-400 tools), but by the looks of it, you are thrashing the texture cache. For the first pixel computed, those 4 texture look-ups are fetched, and the contiguous data is also fetched into the texture cache. For the next pixel, you are no longer accessing data that is in the cache but looking quite far away (10, 20, etc.), which completely defeats the purpose of such a cache.
This is of course a guess; without proper profiling it is hard to tell.
EDIT: #harism also pointed you in that direction.
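If cache thrashing from the high tiling factor is indeed the cause, a common mitigation (my assumption, beyond what is said above) is to make sure the tiled detail textures are mipmapped so that minified fetches stay local. A host-side sketch, assuming the Android GLES20 API and that each detail texture has already been uploaded:

import android.opengl.GLES20;

public final class TerrainTextureMipmaps {
    // Assumes the detail texture (grass, rock or dirt) is bound to GL_TEXTURE_2D
    // and its level-0 image has already been uploaded with glTexImage2D.
    public static void enableMipmappedTiling() {
        GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
        // With GL_REPEAT, scaled coordinates tile without fract() in the shader.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    }
}

Keep in mind that OpenGL ES 2.0 only guarantees mipmapping and GL_REPEAT for power-of-two textures.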
I have an application drawing some objects with OpenGL ES 2.0. The application fails to render on some Samsung devices. I tried debugging, and it seems to be a problem with the vertex and fragment shaders.
Here is my shader code:
Vertex Shader:
attribute vec3 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat4 CordTransform;
attribute float flag;
attribute float clubColorFlag;
attribute vec2 TexCoordIn; // New
varying vec2 TexCoordOut; // New
varying float flagS;
varying float flagClubColorS;
void main(void) {
    gl_Position = Projection * Modelview * vec4(Position, 1.0);
    flagS = flag;
    flagClubColorS = clubColorFlag;
    if (clubColorFlag == 1.0) {
        DestinationColor = vec4(0.190, 0.309, 0.309, 1.0);
    }
    else {
        DestinationColor = SourceColor;
    }
    TexCoordOut = TexCoordIn;
    gl_PointSize = 1.0;
}
Fragment Shader:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut; // New
uniform sampler2D Texture; // New
varying lowp float flagS;
varying lowp float flagClubColorS;
void main(void) {
    gl_FragColor = DestinationColor;
    if (flagS == 1.0) {
        gl_FragColor = DestinationColor;
    }
    else if (flagClubColorS == 1.0) {
        gl_FragColor = DestinationColor;
    }
    else if (flagS == 0.0) {
        gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut); // New
    }
}
I am not sure what the problem is, but I get -1 for the Texture uniform location if I comment out the if-else part in the fragment shader; otherwise it is zero. Both shaders compile without any errors.
Is it related to precision? Please help me to debug the issue.
I am answering my own question for people who are going to refer to this later. I was able to more or less solve the issue.
The problem was not in either of the shaders. The issue was that I was using the glDrawElements() method to draw objects with vertex and index buffers. I replaced this call with the glDrawArrays() method using only the vertex buffer, and everything worked fine.
I am still not sure of the exact cause, but this may help somebody who has been struggling with a similar issue for a long time.
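For reference, a minimal sketch of the change described above, assuming the Android GLES20 API; the buffer names and counts are placeholders, not taken from the original code:

import android.opengl.GLES20;
import java.nio.ShortBuffer;

public final class DrawCallSketch {
    // Original approach: draw with an index buffer via glDrawElements().
    static void drawIndexed(ShortBuffer indexBuffer, int indexCount) {
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount,
                GLES20.GL_UNSIGNED_SHORT, indexBuffer);
    }

    // Replacement described above: draw the vertex buffer directly with
    // glDrawArrays(), with the vertices already laid out in triangle order.
    static void drawDirect(int vertexCount) {
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
    }
}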
These two are my vertex shader and fragment shader files:
Vertex Shader File:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
varying vec4 co;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    co = inputTextureCoordinate;
}
Fragment Shader File:
uniform sampler2D videoFrame; // the texture with the scene you want to blur
varying mediump vec2 textureCoordinate;
varying mediump vec4 co;
precision mediump float;
vec4 nightVision()
{
    float luminanceThreshold = 0.2; // 0.2
    float colorAmplification = 2.0; // 4.0
    float effectCoverage = 1.0;     // 1.0
    vec4 finalColor;
    // Set effectCoverage to 1.0 for normal use.
    if (co.x < effectCoverage)
    {
        vec3 c = texture2D(videoFrame, co.st).rgb;
        float lum = dot(vec3(0.30, 0.59, 0.11), c);
        if (lum < luminanceThreshold) {
            c *= colorAmplification;
        }
        vec3 visionColor = vec3(0.1, 0.95, 0.2);
        finalColor.rgb = (c) * visionColor;
    } else {
        finalColor = texture2D(videoFrame, co.st);
    }
    vec4 sum = vec4(0.0, 0.0, 0.0, 1.0);
    sum.rgb = finalColor.rgb;
    return sum;
}
void main(void)
{
    gl_FragColor = nightVision();
}
Now I want to use this code to apply the effect to the Android camera preview, and I also want to save the picture captured with that effect.
So is it possible to do so?
If yes, then please help me with some code, as I am new to OpenGL ES with the Android camera.
Yes, it's definitely possible. There are a lot of different approaches to building this, and a small code sample won't really help much. The basic idea is to feed the frames you get from the camera to OpenGL as textures.
Check out Camera image as an OpenGL texture on top of the native camera viewfinder. The source code should give you an idea for how to proceed.
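As a starting point, a minimal host-side sketch of that idea, using the old android.hardware.Camera API; the class and method names here are illustrative, not taken from the linked project:

import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import java.io.IOException;

public class CameraTextureSketch {
    private int textureId;
    private SurfaceTexture surfaceTexture;

    // Create an external OES texture and route camera preview frames into it.
    // Must be called on the GL thread, after the GL context is current.
    public void startPreview(Camera camera) throws IOException {
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        textureId = textures[0];

        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        surfaceTexture = new SurfaceTexture(textureId);
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();
    }

    // Call once per frame on the GL thread before drawing the full-screen quad
    // that the night-vision fragment shader samples.
    public void updateFrame() {
        surfaceTexture.updateTexImage();
    }
}

Note that sampling the camera frame in the fragment shader then requires declaring the sampler as samplerExternalOES (behind #extension GL_OES_EGL_image_external : require) instead of a plain sampler2D.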