Changing camera position in OpenGL ES 2.0 - android

I'm developing an app for Android devices using OpenGL ES 2.0 and am having trouble understanding how to move my camera (ultimately, I'm attempting to produce a 'screen shaking' effect, so I need all currently rendered objects to move in unison).
So, at the moment, I have a SpriteBatch class which I use to batch up a load of entities and render them all in a single OpenGL call.
Within this class, I have my vertex and fragment shaders.
So, I simply create an instance of this class for my objects..... something like:
SpriteBatch tileSet1 = new SpriteBatch(100); //100 tiles
I'll then render it like this:
tileSet1.render();
Within my GLSurfaceView class (in onSurfaceChanged), I am setting up my view like this:
// offsetX, offsetY, width and height are all valid and previously declared
GLES20.glViewport(offsetX, offsetY, width, height);
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
Then, within my onDrawFrame() method, I simply call all of my logic and rendering like this:
logic();
render();
(Obviously, my game loop has been greatly simplified for the purpose of this question).
I really am not sure where I'm supposed to put Matrix.setLookAtM.
I've tried placing it into my 'GLSurfaceView' class - again, in onSurfaceChanged, but it doesn't seem to have any effect. How does setLookAtM work in relation to the shaders? I ask because my shaders are in my 'SpriteBatch' class and belong to instances of that class (I could be misunderstanding this completely though).
If any more code is required, please ask and I will post.
Would appreciate a pointer in the right direction.

Calls like Matrix.orthoM() and Matrix.setLookAtM() do not change your OpenGL state at all. They are just utility methods to calculate matrices. They set values in the matrix passed to them as the first argument, and nothing else.
Applying those matrices to your OpenGL rendering is a separate task. The main steps you need for this in a typical use case are:
Declare a uniform variable of type mat4 in your vertex shader.
uniform mat4 ProjMat;
Use the matrix while transforming vertices in the vertex shader code:
gl_Position = ProjMat * ...;
In your Java code, get the location of the uniform variable after compiling and linking the shader:
int projMatLoc = GLES20.glGetUniformLocation(progId, "ProjMat");
Calculate the desired matrix, and use it to set the value of the uniform variable:
Matrix.orthoM(projMat, ...);
...
GLES20.glUseProgram(progId);
GLES20.glUniformMatrix4fv(projMatLoc, 1, false, projMat, 0);
Note that uniform values have shader program scope. So if you have multiple shader programs that use the matrix, and want to change it for all of them, you'll have to make the glUniformMatrix4fv() call for each of them, after making it active with glUseProgram().
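Putting those steps together for the original screen-shake goal, a rough sketch (shakeX/shakeY, ratio, progId and the matrix arrays are assumed names, not code from the question; the shake simply offsets the eye and look-at point passed to Matrix.setLookAtM):
float[] projMatrix = new float[16];
float[] viewMatrix = new float[16];
float[] vpMatrix   = new float[16];
// Projection stays fixed; recompute the view each frame with a small offset.
Matrix.orthoM(projMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
Matrix.setLookAtM(viewMatrix, 0,
        shakeX, shakeY, 5f,   // eye
        shakeX, shakeY, 0f,   // center
        0f, 1f, 0f);          // up
Matrix.multiplyMM(vpMatrix, 0, projMatrix, 0, viewMatrix, 0);
// Upload to every program that uses the matrix.
GLES20.glUseProgram(progId);
int projMatLoc = GLES20.glGetUniformLocation(progId, "ProjMat");
GLES20.glUniformMatrix4fv(projMatLoc, 1, false, vpMatrix, 0);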

Related

How do the shaders work in Android OpenGL?

When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide example on OpenGL ES. However, when creating the shaders, it first creates a String containing a code segment. I tried to understand how this string segment connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
        "attribute vec4 vPosition;" +
        "void main() {" +
        "  gl_Position = vPosition;" +
        "}";
Take a look at the graphics pipeline:
The main job of a vertex shader is transforming the position of each vertex from its input space (e.g. model or world space) into a special space called Normalized Device Space (strictly, clip space, which becomes normalized device coordinates after the perspective divide). The output position is stored in the built-in variable gl_Position. One vertex shader invocation runs per vertex, so if you have 100 vertices, the vertex shader is executed 100 times.
Your posted vertex shader does not perform any significant transformation (gl_Position = vPosition;), but this is fine because the author intended the input positions to already be in Normalized Device Space.
In Normalized Device Space, these positions are assembled into primitives (e.g. triangles). Next, in the rasterization stage, these primitives are broken into fragments (which can be thought of as pixels for the sake of simplicity). Each fragment then goes through the fragment shader, which calculates the color of that fragment; one fragment shader invocation runs per fragment.
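For comparison, the same developer-guide pattern defines the fragment shader as a String as well; a typical minimal one just outputs a single color for every fragment (a sketch, with vColor as a uniform the Java side would set):
private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform vec4 vColor;" +
        "void main() {" +
        "  gl_FragColor = vColor;" +
        "}";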
At any one time, exactly one pair of vertex shader and fragment shader is used in the pipeline. This is specified by the OpenGL ES command glUseProgram(program), where a program is just a linked pair of vertex and fragment shaders.
The string you posted is the source code of a vertex shader; elsewhere in the guide you will see the source code of the corresponding fragment shader. We use OpenGL ES commands to create the shaders, set their source code (the string segment you saw), compile them, attach them to a program, link the program, and then use the program.
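A minimal sketch of that create/compile/link/use flow in Java (loadShader is a hypothetical helper, vertexShaderCode/fragmentShaderCode are the source strings, and error checking is omitted):
static int loadShader(int type, String shaderCode) {
    int shader = GLES20.glCreateShader(type);   // create an empty shader object
    GLES20.glShaderSource(shader, shaderCode);  // hand it the source string
    GLES20.glCompileShader(shader);             // compile it
    return shader;
}
// For example in onSurfaceCreated():
int vertexShader   = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);
int program = GLES20.glCreateProgram();
GLES20.glAttachShader(program, vertexShader);
GLES20.glAttachShader(program, fragmentShader);
GLES20.glLinkProgram(program);
// Later, before issuing draw calls:
GLES20.glUseProgram(program);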
In order to really understand all of this, I suggest you read this book. The picture above is taken from that book.

GLES20 Texturing - do I glUniform1i(uniformLoc, txtUnit) before or after glActiveTexture() + glBindTexture()?

Let's say I play a video on a GLSurfaceView with a custom renderer, and in said renderer I use a fragment shader that takes an extra texture for lookup filtering. Said fragment shader looks as follows:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES u_Texture;
uniform sampler2D inputImageTexture2;
varying highp vec2 v_TexCoordinate;
void main()
{
    vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
    texel = vec3(
        texture2D(inputImageTexture2, vec2(texel.r, .16666)).r,
        texture2D(inputImageTexture2, vec2(texel.g, .5)).g,
        texture2D(inputImageTexture2, vec2(texel.b, .83333)).b
    );
    gl_FragColor = vec4(texel, 1.0);
}
In the onDrawFrame() function, after the glUseProgram() call, I have the onPreDrawFrame() function that basically binds said texture into the shader's uniform. It currently looks like this:
public void onPreDrawFrame()
{
    if (filterSourceTexture2 != -1 && filterInputTextureUniform2 != -1) {
        GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, filterSourceTexture2);
        GLES20.glUniform1i(filterInputTextureUniform2, 2);
    }
}
filterSourceTexture2 is the texture handle (the name returned by glGenTextures) for the extra texture.
What I'm confused about is that if I put the glUniform1i() call before glActiveTexture(), it still works fine, but most of the tutorials I've seen place the glUniform1i() call last, as in the code above.
So which is recommended?
The two make no difference. What you pass through the uniform is which texture unit the fragment shader should sample from. Separately, you need to bind the texture ID to the active texture unit. So the only ordering constraint here is that glActiveTexture() must be called before binding.
Now in most cases you will see the same sequence as already mentioned: active, bind, set uniform. But when it comes to optimizing the code this will change:
Since you want to decrease the traffic to the GPU, you will want to reduce redundant uniform calls. A single uniform call will not cost you much, but still: you typically initialize the shader once and set all the default uniforms then, rather than on every draw-frame call. In your case that means you have two textures, and the first will always use texture unit 0 while the second always uses texture unit 1. So set the two uniforms once up front, and then, when you need to bind new textures, simply bind them, preferably not in every draw call.
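A rough sketch of that split (program, uTextureLoc, uLookupLoc, videoTex and lookupTex are assumed names):
// One-time setup, e.g. in onSurfaceCreated(): the unit assignments never change.
GLES20.glUseProgram(program);
GLES20.glUniform1i(uTextureLoc, 0);   // u_Texture samples from unit 0
GLES20.glUniform1i(uLookupLoc, 1);    // inputImageTexture2 samples from unit 1
// Per frame: only activate and bind (ideally only when the textures actually change).
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, videoTex);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, lookupTex);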
Some other implementations include that you have a custom shader object which remembers what your previous setting was so it will not send the same value to the shader again:
if (this.myUniform != newValue) {
    this.myUniform = newValue;
    GLES20.glUniform1i(filterInputTextureUniform2, newValue);
}
But the problem is that this system may in the end be slower than just setting the uniform again.
So the actual answer to which is recommended: I recommend setting the uniform up front in some initialization step if possible, which means the two are not even in the same method. If they are in the same method, I suggest setting the uniform last for readability, which means setting up both textures first (active plus bind) and then setting the two uniforms one after the other.
And just a note about "most tutorials":
Most of the tutorials you will find are designed to show you how the API works. They are written so you can easily identify which calls must be made and in which order (if any). This usually leads to all the code being in a single source file, which should not be done for real applications. Remember to build your tools as you see fit and separate them into different classes. In the end your "on draw" method should have no more than 50 lines, no matter the size of the project or scenes you are drawing.

Texture Program - OpenGL ES 2.0 Android

After a lot of reading I was able to understand the steps needed to load a texture in OpenGL ES 2.0, but some question are still not answered:
What is the code below actually doing?
glUniform1i(sampler2DLocation, 0);
If I erase this line from my code, nothing changes. Some books describe it as "Tell the texture uniform sampler to use this texture in the shader by telling it to read from texture unit 0"
This is called after the line:
glActiveTexture(GL_TEXTURE0);
But as stated on khronos.org, the default active texture is GL_TEXTURE0, so I guess the glActiveTexture(GL_TEXTURE0); call is just there as good practice?
One last thing, when I call:
glBindTexture(GL_TEXTURE_2D, genTextures[0]);
I'm saying that future calls that affect GL_TEXTURE_2D will affect the texture whose name is stored in genTextures[0], due to the binding. But is there any relation between GL_TEXTURE_2D and the active texture unit? I mean, is there an intrinsic chain between the three "components"?
genTextures[0] <---> GL_TEXTURE_2D <---> the active texture unit
Thank you,
The reason the code continues to work even if you remove "glUniform1i(sampler2DLocation, 0);" is most likely the default value the driver assigns to a uniform that is never set (uniforms are initialized to 0).
To be certain I would need to see the shader code and the way the uniform location is obtained, but I am pretty sure the instruction is there to tell the GPU, through the shader, to use texture unit 0.
The effect of calling "glUniform1i(sampler2DLocation, 0);" is to say "read from texture unit 0".
The effect of not calling it is that the sampler uniform is left at 0 anyway, so even though it is not formally correct, the behavior is the same.
According to the OpenGL standard, you activate a texture unit with glActiveTexture, which takes the number of the texture unit, and then you bind the texture with glBindTexture to that currently active texture unit.
In other words, first you select the texture unit you want to work on and then you tell the driver which texture needs to be bound to it.
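To make the chain between the three "components" concrete, a minimal sketch (texId is assumed to come from glGenTextures and sampler2DLocation from glGetUniformLocation):
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);         // select texture unit 0
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);  // bind texId to the 2D target of unit 0
GLES20.glUniform1i(sampler2DLocation, 0);           // the sampler reads from unit 0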
I hope this helps.
Maurizio

Passing current time to OpenGL ES 2.0 shader for texture-animation: animation stops after certain time

I want to pass the current time to my Shader for texture-animation like this:
float shaderTime = (float)((helper::getMillis() - device.stat.startTime));
glUniform1f(uTime, shaderTime);
To animate the texture I do:
GLSL Fragment Shader:
#if defined GL_ES && defined GL_FRAGMENT_PRECISION_HIGH
#define HIGHFLOAT highp
#else
#define HIGHFLOAT mediump // fallback so HIGHFLOAT is always defined
#endif
uniform HIGHFLOAT float uTime;
...
void main()
{
    vec2 coord = vTexCoord.xy;
    coord.x += uTime * 0.001;
    gl_FragColor = texture2D(uTex, coord);
}
My problem is: if I run the program for a while:
The first minute: Everything is animating fine
After ~5 minutes: The animation stutters
After ~10 minutes: The animation has stopped
Any ideas how to fix this?
Floats lose precision as they get larger, which results in the stuttering as the number grows. The coordinates you can use to sample a texture also have a numerical limit, so sampling will fail once you go over it.
You should wrap your time value so it doesn't go over a certain point. For example, adding 1.5 to the UV coordinates across an entire triangle is the same as adding 0.5, so just wrap your values so that uTime*0.001 is always between 0 and 1.
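For instance, the wrap can be done on the CPU side before uploading; a rough sketch (SystemClock.uptimeMillis() stands in for helper::getMillis(), and startTime and the uTime location are assumed):
long elapsed = SystemClock.uptimeMillis() - startTime;  // android.os.SystemClock
float shaderTime = (float) (elapsed % 1000L);           // keeps uTime * 0.001 in [0, 1)
GLES20.glUniform1f(uTime, shaderTime);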
You can also consider switching time to be an integer so that it doesn't lose precision, and multiply this integer by a constant delta value in your shader, e.g. by 1/30 if your framerate is 30 fps.
As Muzza says, the coordinates you use to sample a texture have a limit; this leads to the problem you face.
There are also some alternative ways to do texture animation without modifying the fragment shader.
For example, revise the projection matrix to choose which part of the texture you want to see.
Modifying the viewport as time goes on is another way that can help.

OpenGL ES 2.0: sharing constants between the main program and shader code

There is a constant I am using both in my main code (Android) and in the shader:
// Main code
private static final int XSIZE=16;
private float[] sinusoida = new float[XSIZE];
// shader
const int XSIZE = 16;
uniform float u_SinArray[XSIZE];
Both constants refer to the same thing, so obviously it would be optimal to share them and have one automatically change when you change the first one. Is that possible?
If you are asking whether the Java code and the shader code can literally access the same variable, then no. Especially if you are using a pre-compiled shader, the answer is no. If you are compiling the shader in your Java code, then you can simply use the Java constant to build the shader script (but it doesn't seem like that's what you're doing). An alternative would be to pass another uniform to the shader instead of using a constant. Assuming it wouldn't put you over the maximum number of uniforms in your shader, that is probably the safest way to go IMO.
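If you do compile the shader from a String in your Java code, a minimal sketch of building the script from the constant (the shader body here is only a placeholder):
private static final int XSIZE = 16;
private final String fragmentShaderCode =
        "const int XSIZE = " + XSIZE + ";\n" +
        "uniform float u_SinArray[XSIZE];\n" +
        "void main() {\n" +
        "    // ... use u_SinArray ...\n" +
        "}\n";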
Edit:
To future readers, never mind the uniform suggestion. Uniforms are implicitly constant during execution, but not at compile time, which would be necessary for an array declaration.
