I'm having another OpenGL ES driver error. This time I'm trying to compile the following lines:
precision mediump float;
varying highp vec2 textureCoordinate;
void main() {
highp vec4 color = texture2D(input0, textureCoordinate);
vec3 color3 = color.rgb;
vec2 tc = (2.0 * textureCoordinate) - 1.0;
float d = dot(tc, tc);
vec2 lookup = vec2(d, color3.r);
..
..
}
but after the line:
GLES20.glLinkProgram(program);
I get a native crash: "Fatal signal 11 (SIGSEGV) at 0x00000060 (code = 1), thread 1231"
I'm guessing this happens because the LG Nexus 4 uses an Adreno GPU; it also crashes for me with error code 14 in a different case, when using too many macros.
After you compile the shader, use glGetShaderiv to get the compilation status, like this:
GLint compiled;
glGetShaderiv(index, GL_COMPILE_STATUS, &compiled); //index is the shader value
Then, if compiled comes back as zero, get the info log length first, and then the error message, as follows:
GLint infoLen = 0;
glGetShaderiv(index, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1)
{
    char* infoLog = new char[infoLen]; // allocate an array, not a single char
    glGetShaderInfoLog(index, infoLen, NULL, infoLog);
    // print or log infoLog here
    delete[] infoLog;
}
Finally, check infoLog to see the error message returned by the shader compiler. The segmentation-fault message in your original post does not give anything useful for solving the problem.
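Since the crash happens at glLinkProgram, it is also worth checking the link status and program info log in the same way. A minimal sketch using the Android GLES20 bindings (matching the GLES20.glLinkProgram call in your code; the log tag is just an example):
// After GLES20.glLinkProgram(program) - uses android.opengl.GLES20 and android.util.Log
int[] linked = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linked, 0);
if (linked[0] == 0) {
    // The Java binding returns the info log directly as a String
    Log.e("ShaderDebug", "Link failed: " + GLES20.glGetProgramInfoLog(program));
    GLES20.glDeleteProgram(program);
}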
As far as I can see from your short code excerpt, you haven't specified the float precision in your fragment shader. In ES 2.0 you must explicitly specify the default float precision.
precision mediump float;
Please read about this in the spec, section 4.5.3 "Default Precision Qualifiers".
A shader may work without a specified float precision on certain OpenGL ES drivers and fail to compile on others.
However, the full source code is needed to find out the exact cause of your issue.
I'd suggest commenting out parts of the shader code until it compiles correctly. This way you will narrow down the problematic line (probably even faster than waiting for an answer here on SO).
I'm pretty new to OpenGL ES and currently have a problem rendering my video output on a screen in Unity.
I'm developing a video player project with Unity. I bought the EasyMovieTexture plugin and, years ago, replaced its video player module with another open-source video player (Ijkplayer), which has worked fine the whole time.
Now I want to replace it with a newer VLC-based player using libvlcjni. I compiled it and simply swapped out the old Ijkplayer, but it didn't work as I expected: the screen just flashes a single color taken from each video frame, even though the video keeps advancing and the audio track plays normally.
Screenshot: a test scene with a screen (sorry, there's a mistake with the texcoord there, but the only real issue is the color flashing).
I'd like to provide some further information in the hope that I can find some help (sorry if I have any mistakes or misunderstandings):
As far as I know, these video player modules need a Surface (or SurfaceTexture) in the Android layer, and the video decoder acts as the data producer. The SurfaceTexture consumes frames from the producer and exposes them as a texture of type GL_TEXTURE_EXTERNAL_OES, which can be consumed and displayed directly by components like TextureView. But this texture cannot be consumed in the Unity layer unless I use a GLSL shader that samples the OES texture directly, like this:
// Unity shader with GLSL
GLSLPROGRAM
#pragma only_renderers gles3
#include "UnityCG.glslinc"
// ...
// Ignoring vertex shader
// ...
in vec2 textureCoord;
layout(binding = 0) uniform samplerExternalOES _MainTex;
out vec4 fragColor;
void main()
{
fragColor = texture(_MainTex, textureCoord);
}
ENDGLSL
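For context, the Surface/SurfaceTexture producer/consumer setup described above is usually created on the Java side roughly like this (a sketch of my own, not the plugin's code; it uses android.opengl.GLES20/GLES11Ext, android.graphics.SurfaceTexture and android.view.Surface):
// Create the OES texture that the video decoder will render into
int[] oesTex = new int[1];
GLES20.glGenTextures(1, oesTex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Wrap it in a SurfaceTexture/Surface and hand the Surface to the player (Ijkplayer / libvlcjni)
SurfaceTexture surfaceTexture = new SurfaceTexture(oesTex[0]);
Surface surface = new Surface(surfaceTexture);
// On the GL thread, latch each new decoded frame before sampling the OES texture
surfaceTexture.updateTexImage();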
My approach was to convert the texture to GL_TEXTURE_2D with the native library that comes with the EasyMovieTexture plugin. I cannot provide the source code of this .so library, but I've decompiled it in IDA Pro and I know it works with GLES and renders the external texture data into another 2D texture using a framebuffer object and an external shader program.
Here is a rough example to explain the procedure; it is NOT the exact code from the binary: FilterFBOTexture.java
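In the same spirit, here is a compact sketch of what such an FBO-based OES-to-GL_TEXTURE_2D conversion typically looks like with GLES20 (illustrative only, not the plugin's code; videoWidth/videoHeight and drawFullScreenQuad() are assumptions, the latter standing for a quad draw with the samplerExternalOES program bound):
// One-time setup: a regular 2D texture plus an FBO that renders into it
int[] tex = new int[1];
int[] fbo = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, videoWidth, videoHeight, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
// Per frame: render the external texture into the FBO, then hand tex[0] to Unity
surfaceTexture.updateTexImage();
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glViewport(0, 0, videoWidth, videoHeight);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTex[0]);
drawFullScreenQuad(); // hypothetical helper: draws a quad with the OES-sampling shader
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);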
Though I cannot edit the .so file, luckily it reads the shader program from two external files:
// vertex shader
attribute highp vec3 inVertex;
attribute mediump vec3 inNormal;
attribute mediump vec2 inTexCoord;
uniform highp mat4 MVPMatrix;
uniform mediump vec2 TexCoordMove;
varying mediump vec2 TexCoord;
void main()
{
highp vec4 vPos = vec4(0,0,0,1);
vPos.x = ( inTexCoord.x * 2.0 - 1.0 );
vPos.y = ( inTexCoord.y * 2.0 - 1.0 );
gl_Position = vPos;
mediump vec4 vec4Temp = vec4(inTexCoord.x - TexCoordMove.x,inTexCoord.y - TexCoordMove.y,0,1.0);
vec4Temp = MVPMatrix * vec4Temp;
vec4Temp.xyz = vec4Temp.xyz / vec4Temp.w;
TexCoord = vec4Temp.xy;
}
// fragment shader
#extension GL_OES_EGL_image_external : require
uniform samplerExternalOES sTexture;
uniform lowp float AlphaValue;
varying mediump vec2 TexCoord;
void main()
{
lowp vec4 color = texture2D(sTexture, TexCoord);
color = vec4(color.rgb, color.a * AlphaValue);
gl_FragColor = color;
}
I don't know whether I should check the vertex shader or dive straight into the source code of libvlcjni so that I can render my video output correctly. Any ideas would be appreciated, thanks.
Update 2022-11-4:
I changed direction and began using VLC for Android.
Big thanks to @mfkl; I created an issue on the VLC repo a few days ago:
https://code.videolan.org/videolan/vlc-unity/-/issues/164
The problem still remains, but at least I have something to work with now.
Has anybody tried separate shader objects on Android with ESSL 3.1? I want to create a vertex shader with output variables, but use a fragment shader that ignores them.
Vertex Shader
#version 310 es
#extension GL_EXT_shader_io_blocks: enable
precision highp float;
out gl_PerVertex { vec4 gl_Position; float gl_PointSize; };
out vec2 texCoord0;
in vec4 attr_Vertex;
in vec2 attr_TexCoord;
void main()
{
gl_Position = attr_Vertex;
texCoord0 = attr_TexCoord;
}
Fragment Shader
#version 310 es
#extension GL_EXT_shader_io_blocks: enable
precision highp float;
void main() {} // Do nothing, (ie, for a depth only shader)
I get this error during pipeline validation:
glValidateProgramPipeline(ppo);
glGetProgramPipelineiv(ppo, GL_VALIDATE_STATUS, &status);
Output from glGetProgramPipelineInfoLog:
Error: output texCoord0 not declared in input from next stage.
On Android with ES 3.2, the message is
error: The fragment stage's input interface doesn't match preceding stage's output
However, this works on desktop OpenGL and on iOS with GL_EXT_separate_shader_objects.
I have many vertex shaders and want to perform a depth-only pass (or similar) with a special fragment shader that doesn't need the inputs, or ignores them.
Why is "output not declared in input from next stage" an error? It should be a warning at worst, not an error. And why does it happen on ESSL 3.1 and not on desktop (the same shaders work on desktop with no error)?
According to the documentation:
With separable program objects, interfaces between shader stages may
involve the outputs from one program object and the inputs from a
second program object
Apparently layout(location = xx) is required to keep validation from failing?
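For reference, the separable-program pipeline being validated above is set up roughly like this with the Android GLES31 bindings (a sketch; vertexSrc and fragmentSrc stand for the shader sources shown above, and whether adding matching layout(location = N) qualifiers to texCoord0 in both stages satisfies the ES matching rules is exactly the open question):
// Build two separable programs, one per stage (uses android.opengl.GLES31 and android.util.Log)
int vertProg = GLES31.glCreateShaderProgramv(GLES31.GL_VERTEX_SHADER, new String[] { vertexSrc });
int fragProg = GLES31.glCreateShaderProgramv(GLES31.GL_FRAGMENT_SHADER, new String[] { fragmentSrc });
// Attach the stages to a program pipeline object
int[] ppo = new int[1];
GLES31.glGenProgramPipelines(1, ppo, 0);
GLES31.glUseProgramStages(ppo[0], GLES31.GL_VERTEX_SHADER_BIT, vertProg);
GLES31.glUseProgramStages(ppo[0], GLES31.GL_FRAGMENT_SHADER_BIT, fragProg);
GLES31.glBindProgramPipeline(ppo[0]);
// Validation step that produces the error quoted above
GLES31.glValidateProgramPipeline(ppo[0]);
int[] status = new int[1];
GLES31.glGetProgramPipelineiv(ppo[0], GLES31.GL_VALIDATE_STATUS, status, 0);
if (status[0] == 0) {
    Log.e("SSO", GLES31.glGetProgramPipelineInfoLog(ppo[0]));
}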
As a continuation from my previous question (GLSL : Accessing an array in a for-loop hinders performance), I have encountered an entirely new and annoying problem.
So, I have a shader that performs a black hole effect.
The shader works perfectly on my computer, the Android emulator, and ShaderToy, but for some reason, even though the code is exactly the same, it does not work on my Android device.
The problem occurs when I zoom in too far. For whatever reason, when my zoom reaches a certain point, the whole background zooms in, then zooms back out and goes crazy. Like this:
When it should look like this:
However, it does work on my device if I change this:
#ifdef GL_ES
precision mediump float;
#endif
to this :
#ifdef GL_ES
precision highp float;
#endif
The problem with this is that it also decreases my FPS from 60 down to ~40.
I believe the problem is that my Android device's OpenGL version is "OpenGL ES 3.0" according to Gdx.gl.glGetString(GL20.GL_VERSION).
But I cannot figure out how to set my version to OpenGL ES 2.0, since the AndroidApplicationConfiguration class gives me little to no options.
I've tried putting <uses-feature android:glEsVersion="0x00020000" android:required="true" /> in the manifest, but it still prints "OpenGL ES 3.0".
And I still don't even know if this is actually the cause of the problem or not, so that's why I'm asking here. Thank you for taking the time to read/answer my question :).
P.S. Here's the Shader code:
#ifdef GL_ES
precision mediump float;
#endif
const int MAX_HOLES = 4;
uniform sampler2D u_sampler2D;
varying vec2 vTexCoord0;
struct BlackHole {
vec2 position;
float radius;
float deformRadius;
};
uniform vec2 screenSize;
uniform vec2 cameraPos;
uniform float cameraZoom;
uniform BlackHole blackHole[MAX_HOLES];
void main() {
vec2 pos = vTexCoord0;
float black = 0.0;
for (int i = 0; i < MAX_HOLES; i++) {
BlackHole hole = blackHole[i];
vec2 position = (hole.position - cameraPos.xy) / cameraZoom + screenSize*0.5;
float radius = hole.radius / cameraZoom;
float deformRadius = hole.deformRadius / cameraZoom;
vec2 deltaPos = vec2(position.x - gl_FragCoord.x, position.y - gl_FragCoord.y);
float dist = length(deltaPos);
float distToEdge = max(deformRadius - dist, 0.0);
float dltR = max(sign(radius - dist), 0.0);
black = min(black+dltR, 1.0);
pos += (distToEdge * normalize(deltaPos) / screenSize);
}
gl_FragColor = (1.0 - black) * texture2D(u_sampler2D, pos) + black * vec4(0, 0, 0, 1);
}
As you have found, the issue is down to a lack of precision in fp16 (mediump), which is fixed by using fp32 (highp). Most maths units have double the throughput for fp16 vs fp32, which also explains the drop in performance.
Querying the driver's GLES version returns the maximum supported version, not the version of the current EGL context, so what you are seeing is expected.
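If you want to confirm which context was actually created, you can query the EGL context itself rather than the GL version string; a sketch using android.opengl.EGL14 (assuming an EGL14-style context is current on the calling thread; the log tag is just an example):
// Ask EGL for the client API version of the current context
int[] contextVersion = new int[1];
EGL14.eglQueryContext(EGL14.eglGetCurrentDisplay(), EGL14.eglGetCurrentContext(),
        EGL14.EGL_CONTEXT_CLIENT_VERSION, contextVersion, 0);
Log.d("GLDebug", "EGL context client version: " + contextVersion[0]);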
Also please note that "highp" is optional in OpenGL ES 2.0 fragment shaders, so there is no guarantee that your shader will work on some GPUs in an OpenGL ES 2.0 context. The Mali-4xx series only support fp16 fragment shaders, for example (I think also some of the OpenGL ES 2.0 Vivante GPUs based on past experience).
In OpenGL ES 3.0 highp is mandatory in fragment shaders, so it would be guaranteed to work there.
On Android devices with Mali-400 GPU (Samsung Galaxy S II, Samsung Galaxy S3 Mini, Samsung Galaxy Note II), at random times the screen will start showing repeated frames.
Example from 0:51 until 1:01 on the following video https://www.youtube.com/watch?v=5-p6Oy0BZmg
It seems as if new frames aren't being rendered, and what was in the old buffer is being shown again. The game continues to advance behind the repeated frames.
This doesn't happen on other GPUs.
I read about using glFlush or glFinish, but GLSurfaceView takes care of this when doing eglSwapBuffers after onDrawFrame.
I've read about quirks of the Mali-400, like it being better to use varyings for texture coordinates, or to use lowp, but it doesn't help. Here are the shaders for reference:
Vertex shader:
// Vertex Shader
attribute vec4 position;
attribute vec4 colorModelIn;
attribute vec4 colorVertexIn;
varying lowp vec4 colorOut;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelMatrix;
attribute vec2 TexCoordIn;
varying lowp vec2 TexCoordOut;
uniform bool bUseVertexColor;
void main()
{
if( bUseVertexColor ){
colorOut = colorVertexIn * colorModelIn;
} else {
colorOut = colorModelIn;
}
TexCoordOut = TexCoordIn;
gl_Position = modelViewProjectionMatrix * modelMatrix * position;
}
Fragment shader:
// Fragment shader
varying lowp vec4 colorOut;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
uniform bool bUseTexture;
void main()
{
if( bUseTexture ){
gl_FragColor = colorOut * texture2D(Texture, TexCoordOut);
} else {
gl_FragColor = colorOut;
}
}
I'm aware that these shaders aren't optimal, and that I'm going down the path of reproducing the fixed pipeline.
The rendering goes back to normal after some time, or after touching the screen. The only reason I can think of for it recovering on touch is that I use color-coding to detect the touched object: I render an image to the back buffer and glReadPixels from it, then overwrite the back buffer with the normal game image.
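For reference, that color-picking read-back boils down to something like this (a sketch of the idea; the coordinate flip and buffer handling are simplified):
// Read the 1x1 pixel under the touch point from the back buffer (uses java.nio and android.opengl.GLES20)
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(touchX, viewHeight - touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
// pixel now holds the color code of the object under the touch point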
I'm out of ideas on how to attack this problem.
EDIT
After following Muzza's advice I started logging GL errors. Out-of-memory errors are reported after glGetIntegerv and glBindBuffer calls.
Above I said that the problem solves itself after a while. When that happens, these lines appear in the logs:
01-23 21:57:52.956: D/WebView(9860): onSizeChanged - w:480 h:75
01-23 21:57:53.126: D/TilesManager(9860): new EGLContext from framework: 40e00bd0
01-23 21:57:53.126: D/GLWebViewState(9860): Reinit shader
01-23 21:57:53.171: D/GLWebViewState(9860): Reinit transferQueue
This can happen if the OpenGL state becomes invalid in some way. The graphics drivers can just skip frames entirely. Check Logcat to see if there is any output from the drivers, and add glGetError() calls throughout your code to see if any error comes up there.
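A small helper along those lines (the method name and log tag are my own) that you can call after suspect GL calls:
// Drains the GL error queue and logs every pending error (uses android.opengl.GLES20 and android.util.Log)
static void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GLDebug", op + ": glError 0x" + Integer.toHexString(error));
    }
}
// Usage: checkGlError("glDrawArrays");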
I'm writing shader code with the GPUImage framework on Android, and I've run into a problem with array indexing in the fragment shader.
According to the appendix of The OpenGL ES Shading Language spec, in a vertex shader, uniform arrays can be indexed by any integer expression, and varying arrays can be indexed by a constant-index-expression. In a fragment shader, both uniform and varying arrays can only be indexed by a constant-index-expression. By the definition of constant-index-expression, a for-loop index should be usable as an array index.
However, things go wrong when I use the loop index as an array index in the fragment shader. There is no compile error and the shader runs, but the program seems to treat the index as 0 on every iteration of the loop.
Here is my fragment shader codes:
uniform sampler2D inputImageTexture;
uniform highp float sample_weights[9]; // passed by glUniform1fv
varying highp vec2 texture_coordinate;
varying highp vec2 sample_coordinates[9]; // computed in the vertex shader
...
void main()
{
lowp vec3 sum = vec3(0.0);
lowp vec4 fragment_color = texture2D(inputImageTexture, texture_coordinate);
for (int i = 0; i < 9; i++)
{
sum += texture2D(inputImageTexture, sample_coordinates[i]).rgb * sample_weights[i];
}
gl_FragColor = vec4(sum, fragment_color.a);
}
The result will be correct if I unroll the loop and access [0] to [8] for the arrays.
However, when using the loop index, the result is wrong and becomes the same as running
sum += texture2D(inputImageTexture, sample_coordinates[0]).rgb * sample_weights[0];
nine times, with no compile error reported during the process.
I've only tested one device, a Nexus 7 running Android 4.3. The GPUImage framework uses android.opengl.GLES20, not GLES30.
Is this an additional restriction on shader codes in Android devices, or in OpenGL ES 2.0, or it is a device-dependent issue?
Updated:
After testing more Android devices (4.1 to 4.4), it seems that only that Nexus 7 has this issue. The results on other devices are correct. It's weird. Is this an implementation issue on individual devices?
This is a little-known fact, but texture lookups inside loops like this are undefined in GLSL. Specifically, note the small sentence "Derivatives are undefined within non-uniform control flow" in section 8.9 of the GLSL ES spec, and see section 3.9.2 for what non-uniform control flow is. The fact that this works on other devices is by chance.
Your only option may be to unroll the loop, unfortunately.