Renderscript GLSL shader converting all uniforms to floats - android

When creating a custom shader in GLSL for RenderScript, the program builder seems to be converting all the members of the structure I bind as uniform constants to floats or vecs, regardless of their specified type. Also, one of my uniforms reports the following error at compile time: "Could not link program, L0010, Uniform "uniform name here" differ on precision.". I have the same-named uniform in two different structures that bind separately to the vertex and fragment shaders.
[EDIT]
Thanks for the answer to the second part. Regarding the first part, I will try to be clearer. When building my shader programs on the Java side, I bind constants to the program builders (both vertex and fragment), with the input being the Java variable that is bound to a RenderScript structure. Everything works great; all my float variables are completely accessible as uniforms in the shader programs. However, if the structure has a member of bool or int type and I attempt something such as if (i == UNI_memberInt), where i is an integer counter declared in the shader, or if (UNI_memberBool), then I get errors along the lines of "cannot compare int to float" or "if() condition must be of a boolean type", which suggests to me that the data is not making it to the GLSL program intact. I can work around this by making them float values and comparing against things like 0.0, since GLSL requires the float value 0 to always be exact, but that seems crude to me. Similar things occur if I try to use UNI_memberInt as the stop condition in a for loop.

Thanks for asking this at the developer hangout. Unfortunately, an engineer on the RenderScript team doesn't quite understand the first part of your question; if you can clarify, that would be great. As for the second part, it is a known bug.
Basically, if you have a uniform "foo" in both the vertex and fragment shader, the vertex shader version is high precision and the fragment shader version is medium precision, which the GLSL compiler can't reconcile. Unfortunately, the solution is to avoid any name collisions between the two.

Related

How do the shaders work in Android OpenGL?

When developing Android OpenGL, how do the created vertex and fragment shaders work?
I am basically following the Android developer guide example on OpenGL ES. However, when creating the shaders, it first creates a String containing a code segment. I tried to understand how this string segment connects with the rest of the process, but I couldn't.
private final String vertexShaderCode =
    "attribute vec4 vPosition;" +
    "void main() {" +
    "  gl_Position = vPosition;" +
    "}";
Take a look at the graphics pipeline:
The main job of a vertex shader is converting/transforming the position of each vertex from camera (real-world) space to a special space called normalized device space. The output position is stored in the built-in variable gl_Position. Each vertex is processed by an instance of the vertex shader, so if you have 100 vertices, 100 instances of the vertex shader will be executed.
Your posted vertex shader code does not actually perform any significant transformation (gl_Position = vPosition), but this is fine, as the author intended the input positions to already be in normalized device space.
Then, in normalized device space, these positions are assembled into primitives (e.g., triangles). Next, in the rasterization stage, these primitives are broken into fragments (which can be thought of as pixels, for simplicity). Each fragment then goes into the fragment shader, which calculates the color of that fragment; each fragment is processed by an instance of the fragment shader.
At any one time, exactly one pair of vertex and fragment shaders is used in the pipeline. This is specified by the OpenGL ES command glUseProgram(program), where a program is simply a vertex shader and a fragment shader linked together.
The string you posted is the source code of a vertex shader; in the example you will also find the source code of a corresponding fragment shader. We use OpenGL ES commands to create the shaders, set their source code (the string segments you saw), compile them, attach them to a program, link the program, and use the program.
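For context, here is what such a pair might look like when stored as Java strings in the guide's style. This is a sketch rather than the guide's exact code; vColor is an assumed uniform name:

```java
public class ShaderSources {
    // Vertex shader: passes the input position straight through
    // (positions are assumed to already be in normalized device space).
    public static final String VERTEX_SHADER =
            "attribute vec4 vPosition;" +
            "void main() {" +
            "  gl_Position = vPosition;" +
            "}";

    // A matching fragment shader: writes a solid color supplied via a
    // uniform. ES 2.0 fragment shaders must declare a default precision.
    public static final String FRAGMENT_SHADER =
            "precision mediump float;" +
            "uniform vec4 vColor;" +
            "void main() {" +
            "  gl_FragColor = vColor;" +
            "}";
}
```

Both strings are later handed to glShaderSource, compiled with glCompileShader, attached to a program with glAttachShader, and linked with glLinkProgram.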
To really understand all of this, I suggest you read this book. The picture above is taken from that book.

Code possibly optimized away in OpenGL ES 2.0 Fragment Shader on iOS

I am writing yet another GPU mandelbrot renderer for iOS, and I have some unexpected results in the fragment shader.
I have 2 uniforms, and if I test their values independently:
if (u_h0 == 0.00130208337) {
return 200.; // this line is executed
}
comment out the above, and then:
if (u_h1 == -0.0000000000388051084) {
return 100.; // this line is executed
}
I hope these are valid tests. Now I call a function:
vec2 e_ty = ds_mul(vec2(1., 0.), vec2(0.00130208337, -0.0000000000388051084));
if (e_ty.y == -0.0000000000388051084) {
return 100.; // this line is executed (correct result)
}
But, the following does not yield the same result:
vec2 e_ty = ds_mul(vec2(1., 0.), vec2(u_h0, u_h1));
if (e_ty.y == -0.0000000000388051084) {
return 100.; // this is NOT executed
}
Looking a bit further:
vec2 e_ty = ds_mul(vec2(1., 0.), vec2(u_h0, u_h1));
if (e_ty.y == 0.) {//-0.0000000000388051084) {
return 100.; // this IS executed
}
What can be going on here? I suspect this is some compiler optimization type magic, but I cannot find any pragma-type options (to turn off fast math?) except (if I switch to OpenGL ES 3.0):
#pragma optimize({on, off}) - enable or disable shader optimization (default on)
Which does not solve my problem. I believe there are:
#pragma optionNV(fastmath off)
#pragma optionNV(fastprecision off)
for nVidia, but I cannot find an equivalent for iOS devices.
Does anyone have any ideas? This is driving me nuts..
Sorry, I meant: does anyone have any useful ideas?
Yes. Stop trying to equality-compare floating-point numbers. It's almost always a bad idea.
The problem you're having is a direct result of you expecting floating-point comparisons to be exact. They aren't going to be exact. They will never be exact. And there's no setting you can use to make them work.
The specific issue is this:
(u_h1 == -0.0000000000388051084)
This is a comparison of a uniform value with a floating-point literal. The uniform value will be provided by you on the CPU. The literal is also provided by you on the CPU, as interpreted by the GLSL compiler.
If the GLSL compiler uses the same float-parsing algorithm you used to get the float value you provide to the uniform, then odds are good this comparison will work. It's simply doing a floating-point comparison of the data you provided with other data that you also provided.
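To illustrate that point in plain Java (assuming both sides use correctly rounded decimal-to-float conversion, which Java's compiler and Float.parseFloat both do):

```java
public class LiteralParsing {
    public static void main(String[] args) {
        // The value your client code would pass to glUniform1f.
        float uploaded = 0.00130208337f;
        // The value a compiler's parser produces from the same literal text.
        float parsed = Float.parseFloat("0.00130208337");
        // Both are the nearest float to the decimal text, so the bits match
        // and an exact equality comparison succeeds.
        System.out.println(Float.floatToIntBits(uploaded)
                == Float.floatToIntBits(parsed));
    }
}
```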
The key point here is that no GLSL computations will be used.
vec2 e_ty = ds_mul(vec2(1., 0.), vec2(0.00130208337, -0.0000000000388051084));
Assuming that ds_mul is a pure function, this will boil down to a constant expression. Any compiler worth using will execute this function call on the CPU, simply storing the result. And in doing so, it will use the CPU's native floating-point precision and representation.
Indeed, any compiler worth using will realize that e_ty is a constant expression and therefore execute the conditional comparison on the CPU as well.
But either way, the point is the same as before: no GLSL computations will be executed.
vec2 e_ty = ds_mul(vec2(1., 0.), vec2(u_h0, u_h1));
This is an expression based on the value of 2 uniforms. As such, it cannot be optimized away; it must be executed as written on the GPU. Which means you are now at the mercy of the GPU's floating-point precision.
And on this issue, GPUs show no mercy.
Does the GPU permit 32-bit floats? You can use highp and hope for the best. Does the GPU properly handle denormalized IEEE-754 32-bit floats? Odds are good it doesn't, and there is absolutely no way for you to force it to.
So what is the result of that expression? It will be the result of the math, within the tolerance of the GPU's computation precision. Which you cannot control. Because the GPU used less precision, it computed a value of 0. Which is not equal to the small float value you provided.
Whatever algorithm you're trying to use relies on precise floating-point computations. Such things cannot be controlled in GLSL. Therefore, you must devise an algorithm that is more tolerant of floating-point imprecision.
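A minimal sketch of "more tolerant", in plain Java for clarity (in GLSL the same pattern works with abs() and max()); nearlyEqual and the epsilon value are illustrative choices, not part of any standard API:

```java
public class FloatCompare {
    // Returns true when a and b agree to within a tolerance, instead of
    // requiring bit-exact equality.
    public static boolean nearlyEqual(float a, float b, float eps) {
        float diff = Math.abs(a - b);
        if (diff <= eps) {
            return true; // absolute tolerance, handles values near zero
        }
        // relative tolerance for larger magnitudes
        return diff <= eps * Math.max(Math.abs(a), Math.abs(b));
    }

    public static void main(String[] args) {
        float sum = 0.0f;
        for (int i = 0; i < 10; i++) {
            sum += 0.1f; // each addition rounds to the nearest float
        }
        System.out.println(sum == 1.0f);                   // false
        System.out.println(nearlyEqual(sum, 1.0f, 1e-6f)); // true
    }
}
```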
Slightly tangential to the original question, but a footnote: in general it's bad practice to base control flow on uniform expressions in shaders.
The uniform is constant for the entire draw sequence, so why force the GPU to waste time evaluating the expression for every vertex or fragment? You might be lucky and the compiler might optimize it away, but given that your application knows which uniforms it is using when it issues the draw, it is entirely possible to avoid per-fragment control code.
Build one shader per constant code path you need, with the conditional stuff removed, and move the checks for which shader to use onto the CPU. The shader language supports a preprocessor, so you can build multiple variants of the same shader just by adding a #define before uploading the source.
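A sketch of that approach on the Java side; the FAST_PATH macro and the shader body are illustrative, not from any particular project:

```java
public class ShaderVariants {
    // Shared fragment shader template; the FAST_PATH macro selects the
    // code path at compile time instead of via a per-fragment branch.
    static final String TEMPLATE =
            "precision mediump float;\n" +
            "void main() {\n" +
            "#ifdef FAST_PATH\n" +
            "  gl_FragColor = vec4(1.0);\n" +
            "#else\n" +
            "  gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);\n" +
            "#endif\n" +
            "}\n";

    // Build one source string per variant by prepending #defines.
    public static String variant(boolean fastPath) {
        String defines = fastPath ? "#define FAST_PATH\n" : "";
        return defines + TEMPLATE;
    }
}
```

Each returned string is compiled into its own shader and program; the CPU then picks which program to glUseProgram for each draw.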

OpenGL ES 2.0 modify shader variable

I thought you could modify a uniform variable and then use this method to get the variable after the draw, but it throws a "cannot modify uniform" exception when building the shader.
glGetUniformfv(PROGRAM_INT, UNIFORM_INT, PARAMS, 0);
I want the shader to modify a variable and return that variable.
Is there a way to modify a shaders variable, and a use a GL method to get that variable?
No. Uniform variables are read-only in the shader code. They are used to pass values from your Java/C++ client code to the shader code, and not the other way.
In ES 2.0, the only way I can think of to get values that were produced by the shader back into the client code is to produce them as color values in the fragment shader output. They will then be part of the framebuffer content, which you can read back with glReadPixels().
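A plain-Java sketch of the precision implication of that trick (encode/decode are illustrative names): a value written to gl_FragColor lands in an 8-bit channel of a standard RGBA8888 framebuffer, so the round trip through glReadPixels quantizes it:

```java
public class ColorReadback {
    // Quantize a value in [0,1] to one 8-bit framebuffer channel, which
    // is effectively what writing it to gl_FragColor does with an
    // RGBA8888 framebuffer.
    public static int encode(float value) {
        return Math.round(value * 255.0f);
    }

    // Recover the (quantized) value from the byte glReadPixels returns.
    public static float decode(int channel) {
        return channel / 255.0f;
    }

    public static void main(String[] args) {
        float original = 0.7f;
        float roundTrip = decode(encode(original));
        // The round trip is only accurate to one 1/255 quantization step.
        System.out.println(Math.abs(roundTrip - original) <= 1.0f / 255.0f);
    }
}
```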
In newer versions of OpenGL ES, as well as in recent versions of full OpenGL, there are additional options. For example, ES 3.1 introduces Shader Storage Buffers, which are also available in OpenGL 4.3 and later. They allow shaders to write values to buffers, which you could read back from client code.

GLES glError 1282 when passing uniform variable to shader

I am using a uniform variable to pass a floating-point value into a GLES shader, but my code keeps generating a 1282 error. I've followed a couple of examples on here and am pretty sure I am doing everything correctly. Can you spot anything wrong with my code? I am testing this on Android 4.4.2 on a Nexus 7.
I use this in onDrawFrame:
GLES20.glUseProgram(mProgram);
int aMyUniform = GLES20.glGetUniformLocation(mProgram, "myUniform");
glVar = frameNumber/100f;
GLES20.glUniform1f(aMyUniform, glVar);
System.out.println("aMyUniform = " + aMyUniform); //diagnostic check
This is in the top of the fragment shader:
"uniform float myUniform;\n" +
And this in the main routine of the fragment shader:
"gl_FragColor[2] = myUniform;\n" +
The variable myUniform does not appear in the vertex shader.
The value reported for aMyUniform is 0, which suggests the uniform has been found correctly. If I change the fragment shader to remove the reference to myUniform and replace it with a hard-coded value, everything works as expected; aMyUniform returns a value of -1, but the scene draws correctly.
If you can't spot anything wrong with the code any hints on how to debug it would be appreciated.
After a lot of head scratching, this turns out to have the same cause as this fault:
GLSL Shader will not render color from uniform variable
I was reusing the mvpMatrix uniform across additional shaders. This did not produce an error until the additional uniform was introduced.
The only clue I found was that the error did not occur until the next shader was called; I didn't pay much attention to this at the time, but it should have been a clue.
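One way to avoid that trap is to never reuse a location handle across programs. A sketch in plain Java: cache locations keyed by program as well as name (the resolver parameter stands in for GLES20.glGetUniformLocation, and the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class UniformLocationCache {
    // Locations are only valid for the program they were queried from,
    // so the cache key includes the program handle, not just the name.
    private final Map<String, Integer> cache = new HashMap<>();

    public int get(int program, String name,
                   BiFunction<Integer, String, Integer> resolver) {
        return cache.computeIfAbsent(program + ":" + name,
                key -> resolver.apply(program, name));
    }

    // Stand-in resolver for demonstration: derives a fake location
    // from the program handle and the uniform name length.
    public static int fakeResolver(int program, String name) {
        return program * 100 + name.length();
    }

    public static void main(String[] args) {
        UniformLocationCache cache = new UniformLocationCache();
        // The same uniform name yields a different location per program.
        System.out.println(cache.get(1, "uMVPMatrix",
                UniformLocationCache::fakeResolver));
        System.out.println(cache.get(2, "uMVPMatrix",
                UniformLocationCache::fakeResolver));
    }
}
```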

OpenGL ES 2.0: sharing constants between the main program and shader code

There is a constant I am using both in my main code (Android) and in the shader:
// Main code
private static final int XSIZE=16;
private float[] sinusoida = new float[XSIZE];
// shader
const int XSIZE = 16;
uniform float u_SinArray[XSIZE];
Both constants refer to the same thing, so obviously it would be optimal to share them and have the second automatically change when you change the first. Is that possible?
If you are asking whether the Java code and the shader code can literally access the same variable, then no. Especially if you are using a pre-compiled shader, the answer is no. If you are compiling the shader in your Java code, then you can simply use the Java constant to build the shader script (but it doesn't seem like that's what you're doing). An alternative would be to pass another uniform to the shader instead of using a constant. Assuming it wouldn't put you over the maximum number of uniforms in your shader, that is probably the safest way to go IMO.
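The first suggestion (building the shader script from the Java constant) might look like this sketch, using the identifiers from the question:

```java
public class SinShader {
    private static final int XSIZE = 16;

    // Splice the Java constant into the shader source so both sides
    // always agree on the array size.
    public static String fragmentShaderSource() {
        return "const int XSIZE = " + XSIZE + ";\n"
             + "uniform float u_SinArray[XSIZE];\n"
             + "void main() {\n"
             + "  // ... use u_SinArray ...\n"
             + "}\n";
    }
}
```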
Edit:
To future readers: never mind the uniform suggestion. Uniforms are implicitly constant during shader execution, but they are not compile-time constants, which is what an array size declaration requires.
