Android OpenGL ES 3.0 Skeletal Animations glitch with UBO

I've been spending the better part of the last 2 days hunting down an OpenGL ES bug I've encountered only on some devices. In detail:
I'm trying to implement skeletal animations, using the following GLSL code:
#ifndef NR_BONES_INC
#define NR_BONES_INC

#ifndef NR_MAX_BONES
#define NR_MAX_BONES 256
#endif

in ivec4 aBoneIds;
in vec4 aBoneWeights;

layout (std140) uniform boneOffsets { mat4 offsets[NR_MAX_BONES]; } bones;

mat4 bones_get_matrix() {
    mat4 mat = bones.offsets[aBoneIds.x] * aBoneWeights.x;
    mat += bones.offsets[aBoneIds.y] * aBoneWeights.y;
    mat += bones.offsets[aBoneIds.z] * aBoneWeights.z;
    mat += bones.offsets[aBoneIds.w] * aBoneWeights.w;
    return mat;
}

#endif
This is then included in the vertex shader and used as such:
vec4 viewPos = mv * bones_get_matrix() * vec4(aPos, 1.0);
gl_Position = projection * viewPos;
The desired output, achieved for example on the Android Emulator (armv8) running on my M1 MacBook Pro is this:
I can't actually capture the output of the faulting device (Epson Moverio BT-350, Android on x86 Intel Atom), sadly, but it's basically the same picture without the head, arms, or legs.
The uniform buffer bound to boneOffsets is, for testing, a std::vector<glm::mat4> of 256 identity matrices, created and bound like this:
GLuint buffer = 0;
std::vector<glm::mat4> testData(256, glm::mat4(1.0));

glGenBuffers(1, &buffer);
glBindBuffer(GL_UNIFORM_BUFFER, buffer);
glBufferData(GL_UNIFORM_BUFFER, sizeof(glm::mat4) * testData.size(), &testData[0], GL_DYNAMIC_DRAW);

// Attach the buffer to binding point 0 and point the block at that same binding point.
glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
glUniformBlockBinding(programId, glGetUniformBlockIndex(programId, "boneOffsets"), 0);
Am I missing a step somewhere in my setup? Is this a GPU bug I'm encountering? Have I misunderstood the std140 layout?
P.S.: After every OpenGL call, I run glGetError(), but nothing shows up. Also nothing in the various info logs for Shaders and Programs.
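For completeness, a minimal sketch of extra checks that can be run against the block (same programId and block name as above); under std140 a mat4 array has a 64-byte stride, so offsets[256] should report exactly 16384 bytes, which is also the minimum GL_MAX_UNIFORM_BLOCK_SIZE an ES 3.0 implementation must support:
// Sketch only: sanity checks for the uniform block.
GLuint blockIndex = glGetUniformBlockIndex(programId, "boneOffsets");

// Size the implementation expects for the whole block; under std140,
// mat4 offsets[256] should come back as 256 * 64 = 16384 bytes.
GLint blockDataSize = 0;
glGetActiveUniformBlockiv(programId, blockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, &blockDataSize);

// ES 3.0 only guarantees 16384 bytes per uniform block, so 256 mat4s sit
// exactly at the minimum limit.
GLint maxBlockSize = 0;
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxBlockSize);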
EDIT
It's the next day, and I've tried skipping the UBO and using a plain uniform array (100 elements instead of 256, my model has 70-ish bones anyway). Same result.
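A rough sketch of that fallback, assuming a declaration like uniform mat4 boneOffsets[100]; in the shader and the same testData as above:
// Sketch of the plain-uniform-array fallback.
// Unlike the UBO path, the upload has to happen while the program is bound.
glUseProgram(programId);
GLint loc = glGetUniformLocation(programId, "boneOffsets[0]");
glUniformMatrix4fv(loc, 100, GL_FALSE, &testData[0][0][0]);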
I've also just tried a "texture": a 4×256 floating-point RGBA (GL_RGBA32F) texture, which is "sampled" like this:
uniform sampler2D bonesTxt;

mat4 read_bones_txt(int id) {
    return mat4(
        texelFetch(bonesTxt, ivec2(0, id), 0),
        texelFetch(bonesTxt, ivec2(1, id), 0),
        texelFetch(bonesTxt, ivec2(2, id), 0),
        texelFetch(bonesTxt, ivec2(3, id), 0));
}
Still no dice. As a comment suggested, I've checked my bone IDs and weights. What I send to glBufferData() is OK, but I can't actually check what ends up on the GPU because I can't get RenderDoc (or anything else) to work on my device.
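For reference, the upload side of that bone texture looks roughly like this (a sketch; bonesTex and boneMatrices are illustrative names):
// Sketch only: uploading the bone matrices as a 4x256 float texture.
// Width 4 = the four columns of each mat4, height 256 = one row per bone,
// matching the ivec2(column, boneId) lookups in read_bones_txt() above.
std::vector<glm::mat4> boneMatrices(256, glm::mat4(1.0));

GLuint bonesTex = 0;
glGenTextures(1, &bonesTex);
glBindTexture(GL_TEXTURE_2D, bonesTex);

// texelFetch() does no filtering, but the texture still has to be complete:
// NEAREST filtering, no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 4, 256, 0, GL_RGBA, GL_FLOAT, &boneMatrices[0]);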

I finally figured it out.
When binding my bone IDs I used glVertexAttribPointer() instead of glVertexAttribIPointer().
I was sending the correct type (GL_INT) to glVertexAttribPointer(), but I didn't read this line in the docs:
For glVertexAttribPointer() [...] values will be converted to floats [...]
As usual, RTFM people.
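For anyone hitting the same thing, the fix boils down to one call; a sketch, where boneIdLoc, stride and offset stand in for whatever the real attribute setup uses:
// Wrong for integer attributes: GL_INT data gets converted to float,
// so an ivec4 attribute like aBoneIds reads back garbage.
// glVertexAttribPointer(boneIdLoc, 4, GL_INT, GL_FALSE, stride, offset);

// Right: the "I" variant keeps the values as integers.
glVertexAttribIPointer(boneIdLoc, 4, GL_INT, stride, offset);
glEnableVertexAttribArray(boneIdLoc);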

Related

Colors using alpha blending are different on different Android phones with OpenGL ES

I am experiencing an issue on Android with OpenGL ES 3.1. I wrote an application that shows a liquid falling from the top of the screen. The liquid is made of many slightly transparent particles, but the colors produced by alpha blending are displayed differently on another phone.
The color of every drop is defined as follows:
private int particleColor = Color.argb(50, 0, 172, 231);
Each particle color is stored in a buffer:
private ByteBuffer colorBuffer = ByteBuffer.allocateDirect(MAX_PARTICLES * PARTICLE_COLOR_COUNT).order(ByteOrder.nativeOrder());
And this buffer is passed to the OpenGL entity to be drawn:
/**
 * Binds the data to OpenGL.
 * @param program as the program used for the binding.
 * @param positionBuffer as the position buffer.
 * @param colorBuffer as the color buffer.
 */
public void bindData(LiquidShaderProgram program, ByteBuffer positionBuffer, ByteBuffer colorBuffer) {
    glVertexAttribPointer(program.getPositionAttributeLocation(), POSITION_COMPONENT_COUNT, GL_FLOAT, false, POSITION_COMPONENT_COUNT * OpenGLUtils.BYTE_PER_FLOAT, positionBuffer);
    glEnableVertexAttribArray(program.getPositionAttributeLocation());

    glVertexAttribPointer(program.getColorAttributeLocation(), COLOR_COMPONENT_COUNT, GL_UNSIGNED_BYTE, true, COLOR_COMPONENT_COUNT, colorBuffer);
    glEnableVertexAttribArray(program.getColorAttributeLocation());
}
The rendering is done by calling this function:
/**
 * Render the liquid.
 * @param particleCount as the particle count.
 */
public void draw(int particleCount) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDrawArrays(GL_POINTS, 0, particleCount);
    glDisable(GL_BLEND);
}
The fragment shader just draws the color that it receives:
precision mediump float;

varying vec4 v_Color;

void main() {
    if (length(gl_PointCoord - vec2(0.5)) > 0.5) {
        discard;
    } else {
        gl_FragColor = v_Color;
    }
}
It works very well on one phone (Nexus 5X):
But on another phone (Galaxy S10), with the exact same code, the color is not the same:
Does anyone have any idea how to solve this issue? I would like to display the correct colors on the second phone as well.
I finally understood the issue and found the solution, after reading A LOT of documentation and browsing the web for many hours.
It looks like, on Android, alpha premultiplication is managed by the framework, while OpenGL and native code have to manage it manually (I still don't understand why it worked correctly on some phones and not on others). But after changing the blending function and performing alpha premultiplication in the fragment shader, the problem was fixed.
The correct blending function to use is the following one:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Using this function ensures that the colors are the same on all phones, but that alone is not enough because the colors are still not correct. To make it right, alpha premultiplication must be performed in the fragment shader:
gl_FragColor = vec4(v_Color.x * v_Color.w, v_Color.y * v_Color.w, v_Color.z * v_Color.w, v_Color.w);
After doing that, I was able to display the correct colors on all phones.
That's my understanding of the problem and the fix I found. If someone has a better explanation, I would be happy to hear it, otherwise I hope it will help someone.

Multiple Fragment Outputs in GLSL 300 es

While writing unit tests for a simple NDK OpenGL ES 3.0 demo, I encountered an issue using multiple render targets. Consider this simple fragment shader with two outputs, declared in a C++11 string literal.
const static std::string multipleOutputsFragment = R"(#version 300 es
precision mediump float;

layout(location = 0) out vec3 out_color1;
layout(location = 1) out vec3 out_color2;

void main()
{
    out_color1 = vec3(1.0, 0.0, 0.0);
    out_color2 = vec3(0.0, 0.0, 1.0);
}
)";
I have correctly set up an FBO with two color attachments (via glFramebufferTexture2D), and glCheckFramebufferStatus comes back as GL_FRAMEBUFFER_COMPLETE. I also call glDrawBuffers(2, &attachments[0]), where attachments is a vector of GL_COLOR_ATTACHMENTi enums. Afterwards, I compile and link the shader without any compile or link errors (the vertex shader is a simple passthrough that is irrelevant to this post).
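The attachment setup looks roughly like this (a sketch; fbo, colorTex0 and colorTex1 are illustrative names for objects created elsewhere with matching sizes and color-renderable formats):
// Sketch: FBO with two color attachments for the MRT test.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, colorTex1, 0);

// Route fragment output locations 0 and 1 to the two attachments.
std::vector<GLenum> attachments = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, &attachments[0]);

// Should report GL_FRAMEBUFFER_COMPLETE at this point.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);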
Is there any reason why I can get the fragment data location for out_color1 but not for out_color2, using the following OpenGL ES function?
auto location = glGetFragDataLocation(m_programId, name.c_str());
The correct location of 0 is returned for out_color1, but when "out_color2" is passed to glGetFragDataLocation, the function returns -1.
I'm testing this on a physical device: a Galaxy S4 (Android 4.4.4, Adreno 320, OpenGL ES 3.0).
If you call glReadBuffer(GL_COLOR_ATTACHMENT1) and then glReadPixels(...), can you see the data that you wrote to the FBO?
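Something like the following, right after the draw call (a sketch; a 1×1 read at the origin is enough for a spot check):
// Sketch: read back one pixel from the second attachment to verify it was written.
glReadBuffer(GL_COLOR_ATTACHMENT1);

GLubyte pixel[4] = { 0, 0, 0, 0 };
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
// out_color2 = vec3(0.0, 0.0, 1.0) should show up here as a blue value of 255.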

Array indexing with loop variable in fragment shader on Android devices

I'm writing shader code with the GPUImage framework on Android, and I've run into a problem with array indexing in the fragment shader.
According to the appendix of The OpenGL ES Shading Language spec, in the vertex shader uniform arrays can be indexed by any integer expression, and varying arrays can be indexed by a constant-index-expression. In the fragment shader, both uniform and varying arrays can only be indexed by a constant-index-expression. Under the definition of constant-index-expression, a for-loop index should be usable as an array index.
However, things go wrong when I use the loop index as an array index in the fragment shader. There is no compile error and the shader runs, but the program seems to treat the index as 0 on every iteration of the loop.
Here is my fragment shader code:
uniform sampler2D inputImageTexture;
uniform highp float sample_weights[9];      // passed by glUniform1fv
varying highp vec2 texture_coordinate;
varying highp vec2 sample_coordinates[9];   // computed in the vertex shader
...
void main()
{
    lowp vec3 sum = vec3(0.0);
    lowp vec4 fragment_color = texture2D(inputImageTexture, texture_coordinate);

    for (int i = 0; i < 9; i++)
    {
        sum += texture2D(inputImageTexture, sample_coordinates[i]).rgb * sample_weights[i];
    }

    gl_FragColor = vec4(sum, fragment_color.a);
}
The result is correct if I unroll the loop and access [0] to [8] explicitly.
However, when using the loop index, the result is wrong and is the same as running
sum += texture2D(inputImageTexture, sample_coordinates[0]).rgb * sample_weights[0];
nine times, and no compile error is reported during the process.
I've only tested one device so far, a Nexus 7 running Android 4.3. The GPUImage framework uses android.opengl.GLES20, not GLES30.
Is this an additional restriction on shader code on Android devices, or in OpenGL ES 2.0, or is it a device-dependent issue?
Update:
After testing more Android devices (4.1–4.4), it seems that only that Nexus 7 has this issue. The results on the other devices are correct. It's weird. Is this an implementation issue on individual devices?
This is a little-known fact, but texture lookups inside loops are undefined in GLSL. Specifically, note the tiny sentence "Derivatives are undefined within non-uniform control flow"; see section 8.9 of the GLSL ES spec, and section 3.9.2 for what counts as non-uniform control flow. The fact that this works on other devices is by chance.
Unfortunately, your only option may be to unroll the loop.

Texture not drawn using OpenGL ES 2.0 and Android NDK

I want to display an image as a texture on a quad with OpenGL ES 2.0, using the Android NDK.
I have the following simple vertex and fragment shader:
#define DISP_SHADER_V_SRC "\
attribute vec4 aPos;\
attribute vec2 aTexCoord;\
varying vec2 vTexCoord;\
void main() {\
gl_Position = aPos;\
vTexCoord = aTexCoord;\
}"
#define DISP_SHADER_F_SRC "\
precision mediump float;\n\
varying vec2 vTexCoord;\n\
uniform sampler2D sTexture;\n\
void main() {\n\
gl_FragColor = texture2D(sTexture, vTexCoord);\n\
}"
At first, a native "create" method is called when the GLSurfaceView is created. It sets the clear color, builds the shader and gets me a texture id using glGenTextures. A "resize" method sets the current view size. Another method sets the texture data like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
I don't believe there's anything wrong there. The important part should be the "draw" method. After glClear, glViewport, and glUseProgram, I do the following:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId);
glEnableVertexAttribArray(shAPos);
glVertexAttribPointer(shAPos, 3, GL_FLOAT, false, 0, quadVertices);
glVertexAttribPointer(shATexCoord, 2, GL_FLOAT, false, 0, quadTexCoordsStd);
glEnableVertexAttribArray(shATexCoord);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// now glDisableVertex...
I can confirm that the shader basically works, since gl_FragColor = vec4(1.0); results in a white screen. It just does not work when I load a texture. I tried setting the pixel data to all white using memset, just to confirm that the issue is not related to my image data, but the screen still stays black. What am I missing?
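For comparison, a texture setup that is known to sample correctly on ES 2.0 looks roughly like this (a sketch using the same texId, w, h and pixels as above); the filter and wrap calls matter because the default minification filter expects mipmaps, and an incomplete texture samples as black:
// Sketch: complete setup for a non-mipmapped RGBA texture.
glBindTexture(GL_TEXTURE_2D, texId);

// Default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR; without mipmaps
// the texture is incomplete, so use a non-mipmapped filter (or generate mipmaps).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// ES 2.0 requires CLAMP_TO_EDGE for non-power-of-two textures.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);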
IsaacKleiner's own comment was correct. I hit the same problem: I'm developing an Android app with OpenGL ES 2.0 in C++ using the NDK, and the texture-loading function was originally in the C++ part. For some reason, that did not work. Only after I moved the texture-loading code to the Java side did it work normally: I load and bind the texture in Java and pass the texture ID to the C++ JNI code. I do not know why this happens. I can provide a link to an example that uses OpenGL ES 1.0 and the NDK to show a cube. Although it is in Chinese, the code explains itself; pay attention to how the texture is generated and how the texture ID is passed between Java and C++.

OpenGL crashes when linking program, LG Nexus 4

I'm having another OpenGL ES driver error. This time I'm trying to compile the following lines:
precision mediump float;
varying highp vec2 textureCoordinate;

void main() {
    highp vec4 color = texture2D(input0, textureCoordinate);
    vec3 color3 = color.rgb;
    vec2 tc = (2.0 * textureCoordinate) - 1.0;
    float d = dot(tc, tc);
    vec2 lookup = vec2(d, color3.r);
    ..
    ..
}
but after the line:
GLES20.glLinkProgram(program);
I get a native crash: "Fatal signal 11 (SIGSEGV) at 0x00000060 (code=1), thread 1231".
I'm guessing this happens because the LG Nexus 4 uses an Adreno GPU; it also crashes for me with error code 14 in a different case, when using too many macros.
After you compile the shader, use glGetShaderiv to get the status of the shader compilation, like this:
GLint compiled;
glGetShaderiv(index, GL_COMPILE_STATUS, &compiled); //index is the shader value
Then, if compiled is returned as zero, get the info length first, and then the error message as follows:
GLint infoLen = 0;
glGetShaderiv(index, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1)
{
    char* infoLog = new char[infoLen];   // note: array new[], not new char(infoLen)
    glGetShaderInfoLog(index, infoLen, NULL, infoLog);
    // ... inspect infoLog, then release it with delete[] infoLog;
}
Finally, check infoLog to see the error message returned from shader compilation. The segmentation fault message in your original post does not give anything useful for solving the problem.
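Since the crash happens at glLinkProgram, it is also worth checking the link status and the program info log in the same way; a minimal sketch:
GLint linked = 0;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked == 0)
{
    GLint infoLen = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1)
    {
        std::vector<char> infoLog(infoLen);
        glGetProgramInfoLog(program, infoLen, NULL, infoLog.data());
        // infoLog.data() now holds the linker's error message.
    }
}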
As far as I can see from the short excerpt of your fragment shader, you haven't specified a default float precision. In ES 2.0 you must explicitly specify float precision:
precision mediump float;
Please read about this in the spec, section 4.5.3, "Default Precision Qualifiers".
A shader may work without a specified float precision on certain OpenGL ES drivers and fail to compile on others.
However, the full source code is needed to find out the exact cause of your issue.
I'd suggest you start commenting out parts of the shader code until it compiles correctly. This way you will narrow down the problematic line (probably even faster than waiting for an answer here on SO).
