I want to make some AR stuff. I have walked through several steps of some tutorials, such as
Displaying Graphics with OpenGL ES
Learn OpenGL ES
I am using OpenGL ES 2.0. In the first tutorial especially, they implement GLSL code for the vertex and fragment shaders and then compile it.
Do I need to implement such code for every primitive object I want to draw with OpenGL? Or can I reuse shader code for drawing different types of shapes and different instances of the same type of shape?
Furthermore: Can I only reuse shader code or can I also reuse a compiled shader program?
Reusing the same shader for several geometries is a common way of improving performance, since it avoids costly program switches between draw calls and makes it easy to batch geometry together.
Once you set the shader with glUseProgram, it remains active for any number of subsequent draw calls.
The tutorials are very basic, and you should abstract the shader code into a more object-oriented design.
For example:
public class Material {
    private String mVertexShaderCode;
    private String mFragmentShaderCode;
    private int mProgram;

    public void initialize() {
        // loadShader() is the tutorial's helper: glCreateShader + glShaderSource + glCompileShader
        int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, mVertexShaderCode);
        int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, mFragmentShaderCode);
        mProgram = GLES20.glCreateProgram();
        GLES20.glAttachShader(mProgram, vertexShader);
        GLES20.glAttachShader(mProgram, fragmentShader);
        GLES20.glLinkProgram(mProgram);
    }

    public void draw() {
        GLES20.glUseProgram(mProgram);
        // set attributes/uniforms and issue the actual draw call here
    }
}
Maybe this makes it easier to wrap your head around how you can use and reuse the shader code.
To answer the follow-up question: yes, you can reuse both. The limitation is that you can't use different shader-specific attributes if you also reuse the compiled shader. The limitations will become obvious as you start using them.
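For example, a hedged sketch of drawing two different shapes with one compiled program (the vertex buffers and color arrays are placeholders you would create yourself; the attribute/uniform names match the tutorial's shaders):

GLES20.glUseProgram(mProgram); // bind the shared program once

int positionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
int colorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
GLES20.glEnableVertexAttribArray(positionHandle);

// First shape: point the attribute at the triangle's vertices and draw.
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 12, triangleVertexBuffer);
GLES20.glUniform4fv(colorHandle, 1, red, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);

// Second shape: same program, different vertex data and uniform values.
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 12, squareVertexBuffer);
GLES20.glUniform4fv(colorHandle, 1, blue, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);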
Related
When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide example on OpenGL ES. However, when creating the shaders, it first creates a String containing a code segment. I tried to understand how this string connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
    "attribute vec4 vPosition;" +
    "void main() {" +
    "  gl_Position = vPosition;" +
    "}";
Take a look at the graphics pipeline:
The main job of a vertex shader is converting/transforming the position of each vertex from camera (real-world) space to a special space called normalized device space. The output position is stored in the built-in variable gl_Position. Each vertex is processed by one instance of the vertex shader, so if you have 100 vertices, 100 instances of the vertex shader are executed.
Your posted vertex shader code actually does not perform any significant transformation: gl_Position = vPosition. But this is fine, as the author intended the input positions to already be in normalized device space.
Then, in normalized device space, these positions are assembled into primitives (e.g., triangles). Next, in the rasterization stage, these primitives are broken into fragments (which can be considered pixels for the sake of simplicity). Each fragment then goes into the fragment shader to calculate the color of that fragment; each fragment is processed by one instance of the fragment shader.
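For comparison, the corresponding fragment shader in that example looks essentially like this; it outputs one uniform color for every fragment the rasterizer produces:

private final String fragmentShaderCode =
    "precision mediump float;" +
    "uniform vec4 vColor;" +
    "void main() {" +
    "  gl_FragColor = vColor;" +
    "}";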
At any one time, one and only one pair of vertex and fragment shaders is used in the pipeline. This is specified with the OpenGL ES command glUseProgram(program), in which a program is just a linked pair of a vertex shader and a fragment shader.
The string you posted is the source code of a vertex shader; you will see there is also the source code of a corresponding fragment shader. We use OpenGL ES commands to create the shaders, set their source code (the string segment you saw), compile them, attach them to a program, link the program, and use the program.
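In Android's Java bindings that sequence looks roughly like this (a sketch; fragmentShaderCode is assumed to hold the matching fragment shader source):

int vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
GLES20.glShaderSource(vertexShader, vertexShaderCode); // the string segment you saw
GLES20.glCompileShader(vertexShader);

int fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
GLES20.glCompileShader(fragmentShader);

int program = GLES20.glCreateProgram();
GLES20.glAttachShader(program, vertexShader);
GLES20.glAttachShader(program, fragmentShader);
GLES20.glLinkProgram(program);

GLES20.glUseProgram(program); // make this vertex/fragment pair current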
To really understand all of this, I suggest you read this book. The picture above is taken from that book.
I thought you could modify a uniform variable in the shader and then use this method to read the variable back after drawing, but it throws a 'cannot modify uniform' error when compiling the shader.
glGetUniformfv(PROGRAM_INT, UNIFORM_INT, PARAMS, 0);
I want the shader to modify a variable and return that variable.
Is there a way to modify a shader variable and then use a GL method to get that variable back?
No. Uniform variables are read-only in the shader code. They are used to pass values from your Java/C++ client code to the shader code, and not the other way.
In ES 2.0, the only way I can think of to get values that were produced by the shader back into the client code is to produce them as color values in the fragment shader output. They will then be part of the framebuffer content, which you can read back with glReadPixels().
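A sketch of that readback path, assuming the value was encoded into the red channel of a pixel you rendered (ByteBuffer and ByteOrder are from java.nio):

ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
float value = (pixel.get(0) & 0xFF) / 255.0f; // decode the red channel back to [0..1]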
In newer versions of OpenGL ES, as well as in recent versions of full OpenGL, there are additional options. For example, ES 3.1 introduces Shader Storage Buffers, which are also available in OpenGL 4.3 and later. They allow shaders to write values to buffers, which you could read back from client code.
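For illustration, here is a hedged ES 3.1 sketch using a compute shader, where storage buffer support is guaranteed (all names are made up for the example):

String computeSrc =
    "#version 310 es\n" +
    "layout(local_size_x = 1) in;\n" +
    "layout(std430, binding = 0) buffer Result { float values[]; };\n" +
    "void main() { values[0] = 42.0; }\n";

int shader = GLES31.glCreateShader(GLES31.GL_COMPUTE_SHADER);
GLES31.glShaderSource(shader, computeSrc);
GLES31.glCompileShader(shader);
int program = GLES31.glCreateProgram();
GLES31.glAttachShader(program, shader);
GLES31.glLinkProgram(program);

// allocate a 4-byte storage buffer at binding point 0
int[] ssbo = new int[1];
GLES31.glGenBuffers(1, ssbo, 0);
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, ssbo[0]);
GLES31.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 4, null, GLES31.GL_DYNAMIC_READ);

GLES31.glUseProgram(program);
GLES31.glDispatchCompute(1, 1, 1);
GLES31.glMemoryBarrier(GLES31.GL_SHADER_STORAGE_BARRIER_BIT);

// map the buffer and read the value the shader wrote
ByteBuffer result = (ByteBuffer) GLES31.glMapBufferRange(
        GLES31.GL_SHADER_STORAGE_BUFFER, 0, 4, GLES31.GL_MAP_READ_BIT);
float value = result.order(ByteOrder.nativeOrder()).getFloat(0);
GLES31.glUnmapBuffer(GLES31.GL_SHADER_STORAGE_BUFFER);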
There is a constant I am using both in my main code (Android) and in the shader:
// Main code
private static final int XSIZE=16;
private float[] sinusoida = new float[XSIZE];
// shader
const int XSIZE = 16;
uniform float u_SinArray[XSIZE];
Both constants refer to the same thing, so ideally I would define it in one place and have the other update automatically whenever I change it. Is that possible?
If you are asking whether the Java code and the shader code can literally access the same variable, then no; especially if you are using a pre-compiled shader, the answer is no. If you are compiling the shader in your Java code, then you can simply use the Java constant to build the shader source (but it doesn't seem like that's what you're doing). An alternative would be to pass another uniform to the shader instead of using a constant. Assuming it wouldn't put you over the maximum number of uniforms in your shader, that is probably the safest way to go IMO.
Edit:
To future readers, never mind the uniform suggestion. Uniforms are implicitly constant during execution, but not at compile time, which would be necessary for an array declaration.
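A sketch of the string-building approach mentioned above, assuming the shader is compiled at runtime from Java (the main() body is purely illustrative):

private static final int XSIZE = 16;

private final String fragmentShaderCode =
    "precision mediump float;" +
    // splice the Java constant into the source so XSIZE is defined in one place
    "const int XSIZE = " + XSIZE + ";" +
    "uniform float u_SinArray[XSIZE];" +
    "void main() {" +
    "  gl_FragColor = vec4(u_SinArray[0]);" +
    "}";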
I'm new to OpenGL on Android and I need to draw some text in my GLSurfaceView. I found only one solution: create a bitmap with the text and display it as a texture (like this, for example: http://giantandroid.blogspot.ru/2011/03/draw-text-in-opengl-es.html). I tried to do it like this, but it didn't work for me. What is the textures array in this code? And is there any simpler way to display text?
OpenGL by itself doesn't offer any "simpler" way to render text; it doesn't even "know" anything about the way glyphs may be represented in bitmap or outline fonts.
Using another library that knows how to handle different font formats and how to rasterize them, and letting it paint directly into an OpenGL texture, is not so complicated that any other approach could claim to be a lot easier.
You might want to take a look at this article here on Stack Overflow (see: Draw text in OpenGL ES) as well as the pages it links to, to get an overview of the available methods and pick the one that seems best to you.
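For completeness, a hedged sketch of the bitmap-as-texture approach from the linked article, written against GLES20 (the linked example uses the older GL10 interface, but the idea is identical):

// draw the text into an ordinary Android bitmap
Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(32);
paint.setColor(Color.WHITE);
canvas.drawText("Hello GL", 0, 40, paint);

// upload the bitmap as a texture, then render it on a textured quad as usual
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();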
Regarding your second question about the textures array in the code sample you linked: the array is only there to satisfy the API of glGenTextures(), which expects an array as its second argument.
In fact, just a single texture id is allocated in this line:
gl.glGenTextures(1, textures, 0);
Taking a look at the spec (see: http://docs.oracle.com/javame/config/cldc/opt-pkgs/api/jb/jsr239/javax/microedition/khronos/opengles/GL10.html), it turns out that just a single texture id is allocated and stored at index 0 of the textures array.
Why then a (pseudo) array at all?
The seemingly complex signature of the Java method
public void glGenTextures(int n, int[] textures, int offset)
is due to the fact that there are no pointers in Java, while the C function that gets wrapped requires a pointer as its second argument:
void glGenTextures(GLsizei n, GLuint *textures);
Since in C allocating a single texture can be done with:
GLuint texture_id;
glGenTextures (1, &texture_id);
there is no need for a separate function that returns just a single texture id, so only the one glGenTextures() function is provided.
To allow for allocations that can be done with pointer arithmetic like:
GLuint textures[10];
glGenTextures (3, textures + 5);
in C, the Java wrapper takes a third parameter specifying the starting index at which the allocated texture ids are stored.
The corresponding method invocation in Java would look like this:
int[] textures = new int[10];
glGenTextures (3, textures, 5);
Hence the extra parameter in the Java wrapper, and the requirement to pass an array even when only a single id is needed.
I am trying to do some GPGPU using OpenGL ES 2.0.
It seems to me that the GL_NV_draw_buffers and the GL_OES_texture_float extensions are some of the essentials here.
This question relates to the GL_OES_texture_float extension: from the desktop world I'm used to textures being in the [0..1] range when accessed in the shader if the format is fixed-point (like GL_RGBA). Consulting the respective OES extension page, it says: "... If the internal format of the texture is fixed-point, components are clamped to [0..1]. Otherwise, values are not modified."
Now I've heard several times on the web (for example the answer here: Do OpenGL GLSL samplers always return floats from 0.0 to 1.0?) that ES 2.0 supports access to unclamped values in the fragment shader, too. But where is this functionality specified? The extension says "otherwise, values are not modified", but since the OpenGL ES specification only knows fixed-point formats, this doesn't make sense to me.
Also, as I understand it, the extension only specifies that float values can be read from client memory into a texture, but it does not specify how the texture is represented in graphics memory (i.e., how many bits per channel). Is there any official spec on this?
Finally I'd like to write unclamped floating point values to an FBO color attachment in my fragment shader, preferably using 32 bits per channel. Is this possible?
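For concreteness, a hedged sketch of the setup being asked about: a float texture attached to an FBO. GL_OES_texture_float allows creating the texture, but renderability is not guaranteed by the extension itself, so the completeness check at the end is essential:

int width = 256, height = 256; // illustrative size

// create a float texture (float textures are generally not filterable here, hence NEAREST)
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_FLOAT, null);

// attach it as the color buffer of a framebuffer object
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

// if this is not GL_FRAMEBUFFER_COMPLETE, the device cannot render to float textures
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);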