I thought you could modify a uniform variable in the shader and then use this method to read it back after the draw call, but building the shader fails with a "cannot modify uniform" error.
glGetUniformfv(PROGRAM_INT, UNIFORM_INT, PARAMS, 0);
I want the shader to modify a variable and then return that value. Is there a way for a shader to modify one of its variables, and then to read that variable back with a GL call?
No. Uniform variables are read-only in the shader code. They are used to pass values from your Java/C++ client code to the shader code, and not the other way.
In ES 2.0, the only way I can think of to get values that were produced by the shader back into the client code is to produce them as color values in the fragment shader output. They will then be part of the framebuffer content, which you can read back with glReadPixels().
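For that approach, the readback itself is just a glReadPixels call on the currently bound framebuffer. A minimal sketch (the framebuffer dimensions and the java.nio buffer setup here are assumptions for illustration):

// Read the fragment shader's output back into client memory after drawing.
int width = 256, height = 256;   // assumed to match the framebuffer you rendered to
java.nio.ByteBuffer pixels = java.nio.ByteBuffer
        .allocateDirect(width * height * 4)
        .order(java.nio.ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// 'pixels' now holds the RGBA values the fragment shader wrote, clamped to [0..1].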
In newer versions of OpenGL ES, as well as in recent versions of full OpenGL, there are additional options. For example, ES 3.1 introduces Shader Storage Buffers, which are also available in OpenGL 4.3 and later. They allow shaders to write values to buffers, which you could read back from client code.
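As a rough sketch of the ES 3.1 route on Android (the buffer size, binding index, and usage hint below are made up for illustration, and whether a given shader stage may write to storage buffers depends on the implementation):

// Create a shader storage buffer, bind it to binding point 0, let the shader
// (declaring: layout(std430, binding = 0) buffer Out { float data[]; };) write
// into it, then map it back into client memory.
int[] ssbo = new int[1];
GLES20.glGenBuffers(1, ssbo, 0);
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, ssbo[0]);
GLES20.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 1024, null, GLES30.GL_DYNAMIC_READ);
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, ssbo[0]);

// ... issue the draw or compute dispatch that writes into the buffer ...

GLES31.glMemoryBarrier(GLES31.GL_SHADER_STORAGE_BARRIER_BIT);
java.nio.ByteBuffer result = (java.nio.ByteBuffer) GLES30.glMapBufferRange(
        GLES31.GL_SHADER_STORAGE_BUFFER, 0, 1024, GLES30.GL_MAP_READ_BIT);
// ... read the values out of 'result' ...
GLES30.glUnmapBuffer(GLES31.GL_SHADER_STORAGE_BUFFER);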
When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide example on OpenGL ES. However, when creating the shaders, it first creates a String containing a code segment. I tried to understand how this string segment connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
        "attribute vec4 vPosition;" +
        "void main() {" +
        "  gl_Position = vPosition;" +
        "}";
Take a look at how the graphics pipeline works:
The main job of a vertex shader is to transform the position of each vertex from its original space (typically model or world space) into clip space, which the GPU then maps to a special space called normalized device space. The output position is stored in the built-in variable gl_Position. Each vertex is processed by one invocation of the vertex shader, so if you have 100 vertices, the vertex shader is executed 100 times.
Your posted vertex shader code does not actually perform any transformation (gl_Position = vPosition), but that is fine, because the author assumes the input positions are already given in normalized device space.
Then, in normalized device space, these positions are assembled into primitives (e.g., triangles). Next, in the rasterization stage, these primitives are broken into fragments (which you can think of as pixels, for simplicity). Each fragment then goes through the fragment shader, which calculates the color of that fragment; each fragment is processed by one invocation of the fragment shader.
At any one time, exactly one pair of vertex shader and fragment shader is used in the pipeline. This is selected with the OpenGL ES command glUseProgram(program), where a program is simply a linked pair of vertex and fragment shaders.
The string you posted is the source code of a vertex shader; further along in the guide you will also find the source code of a corresponding fragment shader. We use OpenGL ES commands to create shaders, set their source code (the string segment you saw), compile them, attach them to a program, link the program, and use the program, as sketched below.
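A rough sketch of those steps in Java (error checking omitted; the fragment shader string here is just a minimal example analogous to the one in the guide):

// A minimal fragment shader: output a solid red color.
String fragmentShaderCode =
        "precision mediump float;" +
        "void main() {" +
        "  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);" +
        "}";

// Create and compile both shaders.
int vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
GLES20.glShaderSource(vertexShader, vertexShaderCode);   // the string segment you saw
GLES20.glCompileShader(vertexShader);

int fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
GLES20.glShaderSource(fragmentShader, fragmentShaderCode);
GLES20.glCompileShader(fragmentShader);

// Attach them to a program, link it, and make it the active pair in the pipeline.
int program = GLES20.glCreateProgram();
GLES20.glAttachShader(program, vertexShader);
GLES20.glAttachShader(program, fragmentShader);
GLES20.glLinkProgram(program);
GLES20.glUseProgram(program);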
In order to really understand all of this, I suggest you read this book, which also includes a diagram of the pipeline described above.
I want to make some AR stuff. I have walked through several steps of some tutorials, like
Displaying Graphics with OpenGL ES
Learn OpenGL ES
I am using OpenGL ES 2.0. In the first tutorial especially, they implement GLSL shader code for the vertex and fragment shaders and then compile it.
Do I need to implement such code for every primitive object I want to draw with OpenGL? Or can I reuse shader code for drawing different types of shapes and different instances of the same type of shape?
Furthermore: Can I only reuse shader code or can I also reuse a compiled shader program?
Reusing the same shader for several geometries is a common way of improving performance, since it avoids switching programs between draw calls (and, if you also batch the geometry, fewer draw calls are needed).
Once you have set the program with glUseProgram, it stays active for any number of subsequent draw calls.
The tutorials are very basic; you should abstract the shader handling into a more object-oriented approach.
For example:
public class Material {
    private String mVertexShaderCode;
    private String mFragmentShaderCode;
    private int mProgram;

    void initialize() {
        // loadShader() is the compile helper from the Android developer guide.
        int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, mVertexShaderCode);
        int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, mFragmentShaderCode);

        // Create the program, attach both shaders, and link.
        mProgram = GLES20.glCreateProgram();
        GLES20.glAttachShader(mProgram, vertexShader);
        GLES20.glAttachShader(mProgram, fragmentShader);
        GLES20.glLinkProgram(mProgram);
    }

    void draw() {
        GLES20.glUseProgram(mProgram);
        // bind attributes/uniforms and issue the draw call here
    }
}
Maybe this makes it easier to wrap your head around how you can use and reuse the shader code.
To answer the follow-up question: yes, you can reuse both. The limitation is that a compiled program comes with a fixed set of attributes and uniforms, so you can't use different shader-specific attributes if you also reuse the compiled program. The limitations will become obvious as you start using them.
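For instance, one compiled program can be shared by several shapes (a sketch only; Shape, scene, and bindBuffersAndDraw() are placeholders for whatever geometry classes you have):

Material material = new Material();
material.initialize();              // compile and link once

for (Shape shape : scene) {         // 'scene' is a placeholder collection of shapes
    material.draw();                // glUseProgram(mProgram)
    shape.bindBuffersAndDraw();     // set attributes/uniforms and issue the draw call
}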
There is a constant I am using both in my main code (Android) and in the shader:
// Main code
private static final int XSIZE = 16;
private float[] sinusoida = new float[XSIZE];
// shader
const int XSIZE = 16;
uniform float u_SinArray[XSIZE];
Both constants refer to the same thing, so ideally I would define it once and have the other update automatically when the first one changes. Is that possible?
If you are asking whether the Java code and the shader code can literally access the same variable, then no; especially if you are using a pre-compiled shader, the answer is no. If you are compiling the shader from source in your Java code, you can simply use the Java constant when building the shader source string (though it doesn't look like that's what you're doing). An alternative would be to pass another uniform to the shader instead of using a constant; assuming it wouldn't put you over the maximum number of uniforms in your shader, that is probably the safest way to go, IMO.
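For the first approach, a minimal sketch of building the GLSL source from the Java constant so the two can never drift apart (the shader body below is just a placeholder):

private static final int XSIZE = 16;

private final String fragmentShaderCode =
        "precision mediump float;" +
        "const int XSIZE = " + XSIZE + ";" +      // the Java constant is baked into the source
        "uniform float u_SinArray[XSIZE];" +
        "void main() {" +
        "  gl_FragColor = vec4(u_SinArray[0]);" + // placeholder use of the array
        "}";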
Edit:
To future readers: never mind the uniform suggestion. Uniforms are effectively constant during shader execution, but they are not compile-time constants, which is what an array size declaration requires.
I am trying to do some GPGPU using OpenGL ES 2.0.
It seems to me that the GL_NV_draw_buffers and the GL_OES_texture_float extensions are some of the essentials here.
This question relates to the GL_OES_texture_float extension. From the desktop world, I'm used to texture values being in the [0..1] range when accessed in the shader if the format is fixed-point (like GL_RGBA). Consulting the respective OES extension page, it says: "... If the internal format of the texture is fixed-point, components are clamped to [0..1]. Otherwise, values are not modified."
Now I've heard several times on the web (for example in the answer here: Do OpenGL GLSL samplers always return floats from 0.0 to 1.0?) that ES 2.0 also supports access to unclamped values in the fragment shader. But where is this functionality specified? The extension says "otherwise, values are not modified", but since the core OpenGL ES 2.0 specification only knows fixed-point formats, this doesn't make sense to me.
Also, as I understand it, the extension only specifies that float values can be read from client memory into a texture, but it does not specify how (i.e., with how many bits per channel) the texture is represented in graphics memory. Is there any official spec on this?
Finally, I'd like to write unclamped floating-point values to an FBO color attachment in my fragment shader, preferably with 32 bits per channel. Is this possible?
When creating a custom shader in GLSL for Renderscript, the program builder seems to convert all the members of the structure I bind as uniform constants to floats or vecs, regardless of their declared type. Also, I have a uniform that reports the following error when the program is built: "Could not link program, L0010, Uniform "uniform name here" differ on precision.". I have the same named uniform in two different structures that are bound separately to the vertex and fragment shaders.
[EDIT]
Thanks for the answer to the second part. Regarding the first part, I will try to be clearer. When building my shader programs on the Java side, I bind constants to the program builders (both vertex and fragment), with the input being the Java variable that is bound to a Renderscript structure. Everything works great; all my float variables are fully accessible as uniforms in the shader programs. However, if the structure has a member of bool or int type and I attempt something such as if (i == UNI_memberInt), where i is an integer counter declared in the shader, or if (UNI_memberBool), then I get errors along the lines of "cannot compare int to float" or "if() condition must be of a boolean type", which suggests to me that the data is not making it into the GLSL program intact. I can work around this by making them float values and comparing against literals like 0.0 (since GLSL requires the float value 0 to always be exact), but that seems crude. Similar things happen if I try to use UNI_memberInt as the stop condition in a for loop.
Thanks for asking this at the developer hangout. Unfortunately, an engineer on the Renderscript team didn't quite understand the first part of your question; if you can clarify it, that would be great. As for the second part, it is a known bug.
Basically, if you have a uniform "foo" in both the vertex and fragment shaders, it ends up declared as high precision in the vertex shader but medium precision in the fragment shader, which the GLSL compiler can't handle. Unfortunately, the solution is to not have any name collisions between the two.
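In GLSL terms, the generated programs end up containing roughly the following (illustrative Java strings only; the highp/mediump qualifiers are what the generated code declares, not something you wrote yourself):

// The same uniform name declared at different precisions in the two stages,
// which is what triggers the link error described above.
String vertexShaderCode =
        "uniform highp float foo;" +
        "attribute vec4 aPosition;" +
        "void main() { gl_Position = aPosition * foo; }";

String fragmentShaderCode =
        "precision mediump float;" +
        "uniform mediump float foo;" +
        "void main() { gl_FragColor = vec4(foo); }";

// Linking these fails with a precision mismatch on 'foo'; giving the uniform a
// different name in each stage avoids the collision.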