This is my first post, and I've only recently started programming for Android and OpenGL, so be nice to me :)
I'm writing a little app that will include one screen using a palettised image for quick colour changes. For speed I thought I could implement this with a shader, so I have one up and running using the fantastic examples from the OpenGL ES 2.0 Programming Guide. The problem I've been banging my head against for the past couple of days is finding a way to reference my palette data within the shader.
My shader currently is:
precision lowp float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;
uniform int s_palette[192]; // Flat palette of 64 RGB entries (64 * 3 ints)
void main()
{
vec4 col = texture2D( s_texture, v_texCoord );
int index = int(col.r * 255.0) * 3;
gl_FragColor = vec4(float(s_palette[index]) / 255.0, float(s_palette[index+1]) / 255.0, float(s_palette[index+2]) / 255.0, 1.0);
}
After playing around with this for ages and not getting very far, I discovered that I get results if I reference my palette data with a constant index, but not with a variable one. Having searched for a while with Google, I discovered claims that this was just the way it was in GLSL 1.1 and was fixed in GLSL 1.3. I believe Android runs GLSL ES 1.0, which I understood to be based on GLSL 1.3, so it should work, but I can't for the life of me get it to. I can't find anything in the GLSL ES spec that suggests it's impossible either, so where am I going wrong?
If it simply isn't possible, then does anyone have any other ideas how to get around this rather crippling flaw in my plan?
No. Under GLSL ES 1.0 (see Appendix A of the spec), fragment shaders are only required to support constant index expressions when indexing uniform arrays, so dynamic indexing like yours may work on some GPUs but fail on others. You can, however, easily do a palette lookup using a texture instead of a constant data array: use an Nx1 2D texture and do the lookup with texture2D in GLSL.
Of course you will need to load the texture data from the CPU side using the normal texture functions in OpenGL.
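For illustration, a sketch of that CPU-side step, assuming the flat {r, g, b, ...} int array from the question and a 64-wide GL_RGB/GL_UNSIGNED_BYTE texture; the class and method names here are made up, and the actual glGenTextures/glTexImage2D upload calls are omitted:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PaletteTexture {
    // Pack a flat {r, g, b, r, g, b, ...} palette (one int per channel,
    // 0..255) into a direct ByteBuffer suitable for
    // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, 1, 0,
    //              GL_RGB, GL_UNSIGNED_BYTE, buffer).
    public static ByteBuffer packPalette(int[] rgb) {
        ByteBuffer buf = ByteBuffer.allocateDirect(rgb.length)
                                   .order(ByteOrder.nativeOrder());
        for (int channel : rgb) {
            buf.put((byte) (channel & 0xFF));
        }
        buf.position(0);
        return buf;
    }

    // Texture coordinate of entry `index` in an N-wide, 1-row palette:
    // sample at the texel center so GL_NEAREST hits the right entry.
    public static float texelCenter(int index, int width) {
        return (index + 0.5f) / width;
    }
}
```

With GL_NEAREST filtering and the half-texel offset above, the array lookup in the fragment shader becomes something like texture2D(s_palette, vec2((col.r * 255.0 + 0.5) / 64.0, 0.5)).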
When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide's example on OpenGL ES. When creating the shaders, it first builds a String containing a code segment. I tried to understand how this string segment connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
"attribute vec4 vPosition;"+
"void main(){"+
" gl_Position = vPosition;"+
"}";
Take a look at the graphics pipeline:
The main job of a vertex shader is transforming the position of each vertex from its original space (such as model or world space) into a special space called Normalized Device Space. The output position is stored in the built-in variable gl_Position. Each vertex is processed by an instance of the vertex shader, so if you have 100 vertices, 100 instances of the vertex shader are executed.
Your posted vertex shader does not actually perform any transformation (gl_Position = vPosition), but that is fine: the author intends the input positions to already be in Normalized Device Space.
In Normalized Device Space, these positions are assembled into primitives (e.g. triangles). Next, in the rasterization stage, those primitives are broken into fragments (think of them as pixels, for simplicity). Each fragment then goes through the fragment shader, which calculates that fragment's color; each fragment is processed by an instance of the fragment shader.
At any one time, exactly one pair of vertex shader and fragment shader is active in the pipeline. This is selected with the OpenGL ES command glUseProgram(program), where a program is simply a linked pair of vertex and fragment shaders.
The string you posted is the source code of a vertex shader; the example will also contain the source code of a corresponding fragment shader. OpenGL ES commands are used to create the shaders, set their source code (the string segment you saw), compile them, attach them to a program, link the program, and finally use the program.
To really understand all of this, I suggest you read this book. The picture above is taken from it.
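For completeness, the corresponding fragment shader in such examples is also just a Java string in the same style. A minimal sketch (the uniform name vColor follows the Android guide's convention, but treat this as illustrative rather than the exact source):

```java
public class ShaderSource {
    // Fragment shader in the same inline-string style as the posted
    // vertex shader: it writes a single flat color, supplied from Java
    // through a uniform, into the built-in output gl_FragColor.
    public static final String FRAGMENT_SHADER_CODE =
            "precision mediump float;" +
            "uniform vec4 vColor;" +
            "void main() {" +
            "  gl_FragColor = vColor;" +
            "}";
}
```

Both strings are then fed through glCreateShader / glShaderSource / glCompileShader, attached to one program, and linked with glLinkProgram before glUseProgram selects the pair.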
Let's say I play a video on a GLSurfaceView with a custom renderer, and in said renderer I use a fragment shader that takes an extra texture for lookup filtering. Said fragment shader looks as follows:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES u_Texture;
uniform sampler2D inputImageTexture2;
varying highp vec2 v_TexCoordinate;
void main()
{
vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
texel = vec3(
texture2D(inputImageTexture2, vec2(texel.r, .16666)).r,
texture2D(inputImageTexture2, vec2(texel.g, .5)).g,
texture2D(inputImageTexture2, vec2(texel.b, .83333)).b
);
gl_FragColor = vec4(texel, 1.0);
}
In the onDrawFrame() function, after the glUseProgram() call, I have the onPreDrawFrame() function that basically binds said texture into the shader's uniform. It currently looks like this:
public void onPreDrawFrame()
{
if (filterSourceTexture2 != -1 && filterInputTextureUniform2 != -1) {
GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, filterSourceTexture2);
GLES20.glUniform1i(filterInputTextureUniform2, 2);
}
}
filterSourceTexture2 is the texture object (the name returned by glGenTextures) for the extra texture.
What I'm confused about is that if I put the glUniform1i() call before glActiveTexture(), it still works fine, yet most of the tutorials I've seen order the calls like the code above.
So which is recommended?
The two orders make no difference. What you pass through the uniform is which texture unit the fragment shader should sample from. Separately, you bind the texture id to that texture unit. So the only ordering that matters here is that glActiveTexture must be called before glBindTexture.
In most cases you will see the sequence already mentioned: activate, bind, set uniform. But when it comes to optimizing the code, this changes.
Since you want to decrease traffic to the GPU, you will want to reduce redundant uniform calls. A single uniform call will not cost you much, but still: you can initialize the shader, set all the default uniforms once, and then avoid setting them on every draw-frame call. In your case that means you have two textures, the first always on texture unit 0 and the second always on texture unit 1. So set the two uniforms up front, and then simply bind the new textures when needed, preferably not on every draw call.
Some implementations go further and use a custom shader object that remembers the previous setting, so the same value is never sent to the shader twice:
if(this.myUniform != newValue) {
this.myUniform = newValue;
GLES20.glUniform1i(filterInputTextureUniform2, newValue);
}
But the problem is that this system may in the end be slower than just setting the uniform again.
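A minimal sketch of that caching idea, with the real GLES20.glUniform1i call left as a comment so the redundancy check itself can run anywhere; the class name and the upload counter are illustrative:

```java
public class CachedUniform {
    private final int location;                // from glGetUniformLocation
    private int lastValue = Integer.MIN_VALUE; // sentinel meaning "never sent"
    private int uploads = 0;                   // real GL calls we would have made

    public CachedUniform(int location) {
        this.location = location;
    }

    // Forward to the GL only when the value actually changed.
    public void set(int value) {
        if (value != lastValue) {
            lastValue = value;
            uploads++;
            // GLES20.glUniform1i(location, value); // the real call goes here
        }
    }

    public int uploadCount() {
        return uploads;
    }
}
```

As noted above, the branch and the extra object are not free; only profiling can tell whether this beats simply re-sending the uniform.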
So the actual answer to which is recommended: I recommend setting the uniform once during initialization if possible, which means the two calls are not even in the same method. If they are in the same method, I suggest setting the uniforms last for readability: set up both textures first (activate plus bind), then set the two uniforms one after the other.
And just a note about "most tutorials":
Most of the tutorials you will find are designed to show you how the API works. They are written so you can easily identify which calls must be made and in which order (if any). This usually leads to all the code living in a single source file, which should not be done in real applications. Remember to build your own tools as you see fit and separate them into different classes. In the end your "on draw" method should have no more than 50 lines, no matter the size of the project or the scenes you are drawing.
I am using a uniform variable to pass a floating point value into a GLES shader, but my code keeps generating a 1282 (GL_INVALID_OPERATION) error. I've followed a couple of examples on here and am pretty sure I am doing everything correctly. Can you spot anything wrong with my code? I am testing this on Android 4.4.2 on a Nexus 7.
I use this in onDrawFrame:
GLES20.glUseProgram(mProgram);
int aMyUniform = GLES20.glGetUniformLocation(mProgram, "myUniform");
glVar = frameNumber/100f;
GLES20.glUniform1f(aMyUniform, glVar);
System.out.println("aMyUniform = " + aMyUniform); //diagnostic check
This is in the top of the fragment shader:
"uniform float myUniform;\n" +
And this in the main routine of the fragment shader:
"gl_FragColor[2] = myUniform;\n" +
The variable myUniform does not appear in the vertex shader.
The value reported for aMyUniform is 0, which suggests the uniform has been found correctly (0 is a valid location; glGetUniformLocation returns -1 when the name is not found). If I change the fragment shader to remove the reference to myUniform and replace it with a hard-coded value, everything works as expected; aMyUniform then returns -1, but the scene draws correctly.
If you can't spot anything wrong with the code any hints on how to debug it would be appreciated.
After a lot of head scratching this turns out to have the same cause as this fault:
GLSL Shader will not render color from uniform variable
I was reusing the mvpMatrix uniform across additional shaders. This did not produce an error until this additional uniform was introduced.
The only clue I found was that the error did not occur until the next shader was used. I didn't pay much attention to this at the time, but it should have been a clue.
I want to pass the current time to my Shader for texture-animation like this:
float shaderTime = (float)((helper::getMillis() - device.stat.startTime));
glUniform1f(uTime, shaderTime);
To animate the texture I do:
GLSL Fragment Shader:
#if defined GL_ES && defined GL_FRAGMENT_PRECISION_HIGH
#define HIGHFLOAT highp
#else
#define HIGHFLOAT mediump
#endif
uniform HIGHFLOAT float uTime;
...
void main()
{
vec2 coord = vTexCoord.xy;
coord.x += uTime*0.001;
gl_FragColor = texture2D(uTex, coord);
}
My problem is: if I run the program for a while:
The first minute: Everything is animating fine
After ~5 minutes: The animation stutters
After ~10 minutes: The animation has stopped
Any ideas how to fix this?
Floats lose precision as they get larger, which causes the stuttering as the number grows. The coordinates you can use to sample a texture also have a numerical limit, so sampling will fail once you go past it.
You should wrap your time value so it doesn't go over a certain point. For example, adding 1.5 to the UV coordinates across an entire triangle is the same as adding 0.5, so just wrap your values so that uTime*0.001 is always between 0 and 1.
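A sketch of that wrapping on the CPU side, assuming the shader keeps the uTime*0.001 scaling from the question (one full texture repeat per 1000 ms); the class and method names here are made up:

```java
public class ShaderClock {
    // Around 2^24 (about 16.7 million) the gap between adjacent float
    // values reaches 1.0, so a raw millisecond counter passed as a float
    // stops advancing smoothly long before that. Wrapping at the
    // animation period keeps the value small and fully precise forever,
    // and the sampled texel is identical because the UVs repeat.
    public static float wrappedMillis(long elapsedMillis) {
        return (float) (elapsedMillis % 1000L); // uTime*0.001 stays in [0, 1)
    }
}
```

Then pass the wrapped value to glUniform1f(uTime, ...) exactly as before.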
You could also consider passing time as an integer so that it doesn't lose precision, and multiply it by a constant delta in the shader, e.g. 1/30 if your framerate is 30fps.
As Muzza says, the coordinates used to sample a texture have a limit, which leads to the problem you are seeing.
There are also alternative ways to do texture animation without modifying the fragment shader.
For example, you can revise the projection matrix to choose which part of the texture you want to see.
Modifying the viewport as time goes on is another approach that can help.
When creating a custom shader in GLSL for RenderScript, the program builder seems to convert all the members of the structure I bind as uniform constants to float or vec types, regardless of their declared type. Also, one uniform reports the following error at compile time: "Could not link program, L0010, Uniform "uniform name here" differ on precision." I have the same-named uniform in two different structures that bind separately to the vertex and fragment shaders.
[EDIT]
Thanks for the answer to the second part. Regarding the first part, I will try to be clearer. When building my shader programs on the Java side, I bind constants to the program builders (both vertex and fragment), with the input being the Java variable that is bound to a RenderScript structure. Everything works great; all my float members are accessible as uniforms in the shader programs. However, if the structure has a member of bool or int type and I attempt something like if (i == UNI_memberInt), where i is an integer counter declared in the shader, or if (UNI_memberBool), then I get errors along the lines of "cannot compare int to float" or "if() condition must be of a boolean type", which suggests the data is not making it to the GLSL program intact. I can work around this by making them float values and comparing against 0.0, since GLSL requires the float value of 0 to always be exact, but it seems crude. Similar things occur if I try to use UNI_memberInt as a stop condition in a for loop.
Thanks for asking this at the developer hangout. Unfortunately, an engineer on the RenderScript team doesn't quite understand the first part of your question; if you can clarify, that would be great. As for the second part, it is a known bug.
Basically, if you have a uniform "foo" in both the vertex and fragment shader, and the vertex shader declares it at high precision while the fragment shader uses medium, the GLSL compiler can't reconcile them. Unfortunately, the solution is to avoid any name collisions between the two.