When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide example on OpenGL ES. However, when creating the shaders, it first creates a String containing a code segment. I tried to understand how this string segment connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
    "attribute vec4 vPosition;" +
    "void main() {" +
    "  gl_Position = vPosition;" +
    "}";
Take a look at the graphics pipeline:
The main job of a vertex shader is converting/transforming the position of each vertex from camera (real-world) space to a special space called Normalized Device Space. The output position is stored in the built-in variable gl_Position. Each vertex is processed by one invocation of the vertex shader, so if you have 100 vertices, the vertex shader runs 100 times.
Your posted vertex shader code does not actually perform any transformation: gl_Position = vPosition. This is fine, because the author intends the input positions to already be in Normalized Device Space.
In Normalized Device Space, these positions are then assembled into primitives (e.g., triangles). Next, in the rasterization stage, each primitive is broken into fragments (which you can think of as pixels, for simplicity). Each fragment then goes through the fragment shader, which calculates that fragment's color; each fragment is processed by one invocation of the fragment shader.
At any one time, exactly one pair of vertex and fragment shaders is used in the pipeline. This is selected with the OpenGL ES command glUseProgram(program), where a program is simply a linked pair of vertex and fragment shaders.
The string you posted is the source code of a vertex shader; elsewhere in the guide you will find the source code of a corresponding fragment shader. We use OpenGL ES commands to create the shaders, set their source code (the string segment you saw), compile them, attach them to a program, link the program, and use the program.
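Conceptually, a vertex shader is just a function that the pipeline invokes once per vertex. Here is a purely illustrative, CPU-side sketch of that idea in plain Java (not how the GPU actually executes shaders, and not part of the Android API):

```java
import java.util.function.UnaryOperator;

public class VertexShaderSketch {
    // Model a vertex shader as a function from an input position (vec4 as
    // float[4]) to gl_Position. The pipeline applies it once per vertex.
    static float[][] runVertexShader(float[][] positions, UnaryOperator<float[]> shader) {
        float[][] out = new float[positions.length][];
        for (int i = 0; i < positions.length; i++) {
            out[i] = shader.apply(positions[i]); // one invocation per vertex
        }
        return out;
    }

    public static void main(String[] args) {
        // "gl_Position = vPosition" is an identity shader, like the posted code:
        float[][] ndc = runVertexShader(
            new float[][] {{0f, 0.5f, 0f, 1f}, {-0.5f, -0.5f, 0f, 1f}, {0.5f, -0.5f, 0f, 1f}},
            v -> v);
        System.out.println(ndc.length + " vertices -> " + ndc.length + " shader invocations");
    }
}
```

The real shader runs on the GPU and is written in GLSL, but the mental model of "a function applied once per vertex (or per fragment)" is the same.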
To really understand all of this, I suggest you read this book. The picture above is taken from that book.
Related
Let's say I play a video on a GLSurfaceView with a custom renderer, and in said renderer I use a fragment shader that takes an extra texture for lookup filtering. Said fragment shader looks as follows:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES u_Texture;
uniform sampler2D inputImageTexture2;
varying highp vec2 v_TexCoordinate;
void main()
{
    vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
    texel = vec3(
        texture2D(inputImageTexture2, vec2(texel.r, .16666)).r,
        texture2D(inputImageTexture2, vec2(texel.g, .5)).g,
        texture2D(inputImageTexture2, vec2(texel.b, .83333)).b
    );
    gl_FragColor = vec4(texel, 1.0);
}
In the onDrawFrame() function, after the glUseProgram() call, I have the onPreDrawFrame() function that basically binds said texture into the shader's uniform. It currently looks like this:
public void onPreDrawFrame()
{
    if (filterSourceTexture2 != -1 && filterInputTextureUniform2 != -1) {
        GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, filterSourceTexture2);
        GLES20.glUniform1i(filterInputTextureUniform2, 2);
    }
}
filterSourceTexture2 is the texture object id of the extra texture.
What I'm confused about is that if I put the glUniform1i() call before glActiveTexture(), it still works fine, yet most of the tutorials I've seen place the glUniform1i() call last, as in the code above.
So which is recommended?
The two orderings make no difference. What you pass through the uniform is the index of the texture unit the fragment shader should sample from. Separately, you need to bind the texture id to that texture unit. So the only ordering that matters here is that glActiveTexture must be called before glBindTexture.
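To see why the position of glUniform1i relative to glActiveTexture/glBindTexture doesn't matter, you can model the state involved as a toy sketch (this is an illustration of the GL state machine, not real GL code; the numbers are arbitrary):

```java
public class TextureUnitsSketch {
    // Toy model: glActiveTexture selects a unit, glBindTexture writes a texture
    // id into the selected unit, and a sampler uniform just stores a unit index.
    private final int[] units = new int[8]; // texture id bound to each unit
    private int active = 0;                 // currently active unit
    private int samplerUniform = 0;         // value set via glUniform1i

    void activeTexture(int unit) { active = unit; }
    void bindTexture(int texId)  { units[active] = texId; }
    void uniform1i(int unit)     { samplerUniform = unit; }

    // The texture the fragment shader would end up sampling:
    int sampledTexture() { return units[samplerUniform]; }

    public static void main(String[] args) {
        TextureUnitsSketch a = new TextureUnitsSketch();
        a.uniform1i(2); a.activeTexture(2); a.bindTexture(42); // uniform first
        TextureUnitsSketch b = new TextureUnitsSketch();
        b.activeTexture(2); b.bindTexture(42); b.uniform1i(2); // uniform last
        // Both orders leave identical state, so both sample texture 42.
        System.out.println(a.sampledTexture() + " " + b.sampledTexture());
    }
}
```

The uniform and the binding touch independent pieces of state; only at draw time are they combined, which is why either order works.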
In most cases you will see the sequence already mentioned: activate, bind, set uniform. But when it comes to optimizing the code, this changes:
Since you want to reduce traffic to the GPU, you should cut redundant uniform uploads. A single uniform call will not gain you much, but still: initialize the shader once and set all the default uniforms at that point, so you are not re-setting them on every frame. In your case, you have two textures: the first will always be on texture unit 0 and the second always on texture unit 1. So set the two sampler uniforms once up front, and then simply bind new textures when you need to, preferably not in every draw call.
Some other implementations include that you have a custom shader object which remembers what your previous setting was so it will not send the same value to the shader again:
if (this.myUniform != newValue) {
    this.myUniform = newValue;
    GLES20.glUniform1i(filterInputTextureUniform2, newValue);
}
But the problem is that this system may end up slower than simply setting the uniform again.
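If you do want such caching, it can live in a small helper. In this sketch the actual GL call is abstracted behind a callback so the code stays self-contained; the class name and the uploads counter are purely illustrative:

```java
import java.util.function.IntConsumer;

public class CachedUniform {
    private final IntConsumer upload;       // e.g. v -> GLES20.glUniform1i(location, v)
    private int cached = Integer.MIN_VALUE; // sentinel: nothing uploaded yet
    int uploads = 0;                        // counts real GL calls, for illustration

    CachedUniform(IntConsumer upload) {
        this.upload = upload;
    }

    void set(int newValue) {
        if (cached != newValue) { // skip redundant GPU traffic
            cached = newValue;
            uploads++;
            upload.accept(newValue);
        }
    }
}
```

Whether the extra branch and bookkeeping are cheaper than just re-issuing glUniform1i is exactly the trade-off described above.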
So, the actual answer to which is recommended: I recommend setting the uniform during initialization if possible, which means the two calls are not even in the same method. If they are in the same method, set the uniforms last for readability: set up both textures first (activate plus bind for each), then set the two uniforms one after the other.
And just a note about "most tutorials":
Most of the tutorials you will find are designed to show how the API works. They are written so that you can easily identify which calls must be made and in which order (if any). This usually leads to all the code living in a single source file, which should not be done in real applications. Remember to build your own tools as you see fit and separate them into different classes. In the end, your "on draw" method should be no more than 50 lines, no matter the size of the project or the scenes you are drawing.
I thought I could modify a uniform variable in the shader and then use this method to read the variable back after drawing, but it throws a "cannot modify uniform" error when building the shader.
glGetUniformfv(PROGRAM_INT, UNIFORM_INT, PARAMS, 0);
I want the shader to modify a variable and return that variable.
Is there a way to modify a shader variable and use a GL method to get that variable back?
No. Uniform variables are read-only in the shader code. They are used to pass values from your Java/C++ client code to the shader code, and not the other way.
In ES 2.0, the only way I can think of to get values that were produced by the shader back into the client code is to produce them as color values in the fragment shader output. They will then be part of the framebuffer content, which you can read back with glReadPixels().
In newer versions of OpenGL ES, as well as in recent versions of full OpenGL, there are additional options. For example, ES 3.1 introduces Shader Storage Buffers, which are also available in OpenGL 4.3 and later. They allow shaders to write values to buffers, which you could read back from client code.
I'm developing an app for Android devices using OpenGL ES 2.0 and am having trouble understanding how to move my camera - (ultimately, I'm attempting to produce a 'screen shaking' effect - so I need all currently rendered objects to move in unison).
So, at the moment, I have a SpriteBatch class which I use to batch up a load of entities and render them all in a single OpenGL call.
Within this class, I have my vertex and fragment shaders.
So, I simply create an instance of this class for my objects..... something like:
SpriteBatch tileSet1 = new SpriteBatch(100); //100 tiles
I'll then render it like this:
tileSet1.render();
Within my GLSurfaceView class (in onSurfaceChanged), I am setting up my view like this:
//offsetX, offsetY, width and height are all valid and previously declared)
GLES20.glViewport(offsetX, offsetY, width, height);
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
Then, within my onDrawFrame() method, I simply call all of my logic and rendering like this:
logic();
render();
(Obviously, my game loop has been greatly simplified for the purpose of this question).
I really am not sure where I'm supposed to put Matrix.setLookAtM.
I've tried placing it in my GLSurfaceView class, again in onSurfaceChanged, but it doesn't seem to have any effect. How does setLookAtM work in relation to the shaders? I ask because my shaders are in my SpriteBatch class and belong to instances of that class (though I could be misunderstanding this completely).
If any more code is required, please ask and I will post.
Would appreciate a pointer in the right direction.
Calls like Matrix.orthoM() and Matrix.setLookAtM() do not change your OpenGL state at all. They are just utility methods to calculate matrices. They set values in the matrix passed to them as the first argument, and nothing else.
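For instance, Matrix.orthoM only writes the standard orthographic projection into a column-major float[16]. Here is a plain-Java sketch of the math it performs (not the Android implementation itself), using the near=3, far=7 values from the question:

```java
public class OrthoSketch {
    // Fill a column-major 4x4 orthographic projection, as Matrix.orthoM does.
    static float[] ortho(float l, float r, float b, float t, float n, float f) {
        float[] m = new float[16];
        m[0]  = 2f / (r - l);          // scale x into [-1, 1]
        m[5]  = 2f / (t - b);          // scale y into [-1, 1]
        m[10] = -2f / (f - n);         // scale z into [-1, 1]
        m[12] = -(r + l) / (r - l);    // translate x
        m[13] = -(t + b) / (t - b);    // translate y
        m[14] = -(f + n) / (f - n);    // translate z
        m[15] = 1f;
        return m;
    }

    // Apply m to the point (x, y, z, 1), column-major convention.
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
            m[3] * x + m[7] * y + m[11] * z + m[15],
        };
    }

    public static void main(String[] args) {
        // The top-right corner of the near plane maps to (1, 1, -1) in
        // normalized device space -- the matrix is just data, no GL state.
        float[] m = ortho(-1f, 1f, -1f, 1f, 3f, 7f);
        float[] p = transform(m, 1f, 1f, -3f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

Nothing here touches OpenGL; the result only affects rendering once you upload it as a uniform, as described next.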
Applying those matrices to your OpenGL rendering is a separate task. The main steps you need for this in a typical use case are:
Declare a uniform variable of type mat4 in your vertex shader.
uniform mat4 ProjMat;
Use the matrix while transforming vertices in the vertex shader code:
gl_Position = ProjMat * ...;
In your Java code, get the location of the uniform variable after compiling and linking the shader:
int projMatLoc = GLES20.glGetUniformLocation(progId, "ProjMat");
Calculate the desired matrix, and use it to set the value of the uniform variable:
Matrix.orthoM(projMat, ...);
...
GLES20.glUseProgram(progId);
GLES20.glUniformMatrix4fv(projMatLoc, 1, false, projMat, 0);
Note that uniform values have shader program scope. So if you have multiple shader programs that use the matrix, and want to change it for all of them, you'll have to make the glUniformMatrix4fv() call for each of them, after making it active with glUseProgram().
I want to pass the current time to my Shader for texture-animation like this:
float shaderTime = (float)((helper::getMillis() - device.stat.startTime));
glUniform1f(uTime, shaderTime);
To animate the texture I do:
GLSL Fragment Shader:
#if defined GL_ES && defined GL_FRAGMENT_PRECISION_HIGH
#define HIGHFLOAT highp
#else
#define HIGHFLOAT mediump
#endif
uniform HIGHFLOAT float uTime;
...
void main()
{
    vec2 coord = vTexCoord.xy;
    coord.x += uTime * 0.001;
    gl_FragColor = texture2D(uTex, coord);
}
My problem is: if I run the program for a while:
The first minute: Everything is animating fine
After ~5 minutes: The animation stutters
After ~10 minutes: The animation has stopped
Any ideas how to fix this?
Floats lose precision as they get larger, which causes the stuttering as the value grows. The coordinates you can use to sample a texture also have a numerical limit, so sampling will fail once you exceed it.
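You can see this precision loss on the Java side too: the spacing between adjacent representable float values (the ULP) grows with magnitude, so a fixed small time step eventually rounds away to nothing. A small demonstration (the class and method names are illustrative):

```java
public class FloatPrecisionDemo {
    // True if adding a 0.5 ms step still changes the value at this magnitude.
    static boolean stepStillCounts(float millis) {
        return millis + 0.5f != millis;
    }

    public static void main(String[] args) {
        // At 1000 ms the spacing between floats is ~0.00006, so steps register.
        System.out.println(stepStillCounts(1_000f));      // true
        // At 10,000,000 ms (~2.8 h) the spacing is 1.0, so a 0.5 ms step is
        // rounded away entirely: the animation freezes.
        System.out.println(stepStillCounts(10_000_000f)); // false
    }
}
```

With mediump precision in the fragment shader (only a 10-bit mantissa), the same effect hits at far smaller values, which matches stuttering after just a few minutes.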
You should wrap your time value so it doesn't go over a certain point. For example, adding 1.5 to the UV coordinates across an entire triangle is the same as adding 0.5, so just wrap your values so that uTime*0.001 is always between 0 and 1.
You can also consider switching time to be an integer so that it doesn't lose precision. You can multiply this integer by a constant delta value in your shader. e.g. multiply by 1/30 if your framerate is 30fps.
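A minimal sketch of the wrapping idea on the CPU side, assuming a 1000 ms period (which is what the 0.001 scale in the shader implies); pass the wrapped value to glUniform1f instead of the raw elapsed time:

```java
public class ShaderTime {
    // Wrap elapsed milliseconds so that uTime * 0.001 stays in [0, 1).
    static float wrappedTime(long elapsedMillis) {
        return (float) (elapsedMillis % 1000L);
    }

    public static void main(String[] args) {
        // 2500 ms and 500 ms yield the same UV offset (0.5), so wrapping is
        // safe, and the uploaded value never grows large enough to lose precision.
        System.out.println(ShaderTime.wrappedTime(2500)); // 500.0
        System.out.println(ShaderTime.wrappedTime(500));  // 500.0
    }
}
```
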
As Muzza says, the coordinates used to sample a texture have a limit; this leads to the problem you're facing.
There are also some alternative ways to do texture animation without modifying the fragment shader.
For example, you can revise the projection matrix to choose which part of the texture you want to see.
Modifying the viewport as time passes is another approach that can help.
When creating a custom shader in GLSL for Renderscript, the program builder seems to convert all the members of the structure I bind as uniform constants to floats or vecs, regardless of their declared type. Also, one uniform reports the following error at compile time: "Could not link program, L0010, Uniform 'uniform name here' differ on precision." I have a uniform with the same name in two different structures that are bound separately to the vertex and fragment shaders.
[EDIT]
Thanks for the answer to the second part. Regarding the first part, I will try to be clearer. When building my shader programs on the Java side, I bind constants to the program builders (both vertex and fragment), with the input being a Java variable that is bound to a Renderscript structure. Everything works great: all my float variables are fully accessible as uniforms in the shader programs. However, if the structure has a member of bool or int type and I attempt something such as if (i == UNI_memberInt), where i is an integer counter declared in the shader, or if (UNI_memberBool), then I get errors along the lines of "cannot compare int to float" or "if() condition must be of a boolean type". This suggests to me that the data is not making it to the GLSL program intact. I can work around this by making them float values and comparing against literals like 0.0 (since GLSL guarantees the float value 0 is exact), but that seems crude. Similar things occur if I try to use UNI_memberInt as the stop condition in a for loop.
Thanks for asking this at the developer hangout. Unfortunately, an engineer on the Renderscript team didn't quite understand the first part of your question; if you can clarify, that would be great. As for the second part, it is a known bug.
Basically, if you have a uniform "foo" in both the vertex and the fragment shader, the vertex shader's copy is high precision while the fragment shader's is medium precision, which the GLSL linker can't reconcile. Unfortunately, the workaround is to avoid any name collisions between the two.