I'm making a simple image filter app on Android and I implemented a low-pass filter using the same method as GPUImage (https://github.com/BradLarson/GPUImage): it mixes the previous and current camera frames and renders the result.
So I created a buffer FBO, render the current camera frame into it, and re-use it as a texture in the low-pass filter shader to mix with the next camera frame.
I tested my code on several smartphones (Galaxy S10, Nexus 6P, etc.) and it worked well. However, on a Galaxy S8 (Mali-G71) the result is strange and I don't know what is wrong.
These are the wrong results:
Here is my code:
Fragment shader:
varying vec2 vTextureCoord;
uniform sampler2D sTexture;
uniform sampler2D sTexture1;
uniform float filterStrength;
void main() {
vec4 texColor0 = texture2D(sTexture, vTextureCoord);
vec4 texColor1 = texture2D(sTexture1, vTextureCoord);
gl_FragColor = mix(texColor0, texColor1, filterStrength);
}
What can cause these results?
Thanks in advance.
The artifacts look tile-aligned for Mali, so if I had to guess you are reading the currently bound framebuffer color attachment as an input texture at the same time as writing into it.
This is "implementation defined" behavior in the specification, and concurrent reads and writes will definitely do bad things on a tile-based renderer like Mali.
I am using OpenGL ES to run some shaders on Android.
On some older/cheaper devices highp precision is not supported, so the shader output is incorrect.
I need to know when the app starts if the device can support high precision. That way I can tell the user "forget it, your device does not support high precision floats" rather than have it output garbage for them.
I found this query code online, but it seems to be only for WebGL:
var highp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
var highpSupported = highp.precision != 0;
Does anyone have a way I can query an android device (KitKat or higher) to see what precision the GLES shaders will support?
This is the final code I now use, but the contents of range and precision are always -999 no matter where I run the code in my app: before, during, or after the GLSurfaceView has been created and GLES output has run.
IntBuffer range = IntBuffer.allocate(2);      // log2 of the min/max representable magnitudes
IntBuffer precision = IntBuffer.allocate(1);  // log2 of the precision (mantissa bits)
// Sentinel values so I can tell whether the call wrote anything.
range.put(0, -999);
range.put(1, -999);
precision.put(0, -999);
android.opengl.GLES20.glGetShaderPrecisionFormat(
        android.opengl.GLES20.GL_FRAGMENT_SHADER,
        android.opengl.GLES20.GL_HIGH_FLOAT, range, precision);
String toastText = "Range[0]=" + range.get(0) + " Range[1]=" + range.get(1)
        + " Precision[0]=" + precision.get(0);
Toast.makeText(getApplicationContext(), toastText, Toast.LENGTH_SHORT).show();
The above code always returns -999 for all three values, and the Khronos documentation states that if an error occurs the values will be unchanged. So it looks like there is an error, or I am not calling it at the right time.
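For reference, glGetShaderPrecisionFormat only returns data when an OpenGL ES context is current on the calling thread, so one plausible explanation is that the query has to run on the GL thread, e.g. in the renderer callbacks. A minimal sketch of that idea (highpSupported is just an illustrative field name):

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

public class PrecisionCheckRenderer implements GLSurfaceView.Renderer {
    public volatile boolean highpSupported;

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // A GL context is current on this thread here, so the query can succeed.
        int[] range = new int[2];
        int[] precision = new int[1];
        GLES20.glGetShaderPrecisionFormat(GLES20.GL_FRAGMENT_SHADER,
                GLES20.GL_HIGH_FLOAT, range, 0, precision, 0);
        // As in the WebGL snippet: highp is supported if the precision is non-zero.
        highpSupported = precision[0] != 0;
    }

    @Override public void onSurfaceChanged(GL10 unused, int width, int height) { }
    @Override public void onDrawFrame(GL10 unused) { }
}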
I'm writing shader code in the GPUImage framework on Android, and I've encountered a problem with array indexing in the fragment shader.
According to the Appendix of The OpenGL ES Shading Language spec, in the vertex shader uniform arrays can be indexed by any integer and varying arrays by a constant-index-expression; in the fragment shader, both uniform and varying arrays can only be indexed by a constant-index-expression. Under the definition of constant-index-expression, a for-loop index should be usable as an array index.
However, things go wrong when I use the loop index as an array index in the fragment shader. There is no compile error and the shader runs, but the program seems to treat every index value as 0 on every iteration of the loop.
Here is my fragment shader code:
uniform sampler2D inputImageTexture;
uniform highp float sample_weights[9]; // passed by glUniform1fv
varying highp vec2 texture_coordinate;
varying highp vec2 sample_coordinates[9]; // computed in the vertex shader
...
void main()
{
lowp vec3 sum = vec3(0.0);
lowp vec4 fragment_color = texture2D(inputImageTexture, texture_coordinate);
for (int i = 0; i < 9; i++)
{
sum += texture2D(inputImageTexture, sample_coordinates[i]).rgb * sample_weights[i];
}
gl_FragColor = vec4(sum, fragment_color.a);
}
The result is correct if I unroll the loop and access indices [0] to [8] of the arrays explicitly.
However, when using the loop index, the result is wrong and becomes the same as running
sum += texture2D(inputImageTexture, sample_coordinates[0]).rgb * sample_weights[0];
9 times, and no compile error is reported during the process.
I've only tested one device so far, a Nexus 7 running Android 4.3. The GPUImage framework uses android.opengl.GLES20, not GLES30.
Is this an additional restriction on shader code on Android devices, or in OpenGL ES 2.0, or is it a device-dependent issue?
Update:
After testing more Android devices (4.1~4.4), it seems that only that Nexus 7 device has this issue. The results on the other devices are correct. It's weird. Is this an implementation issue on individual devices?
This is a little-known fact, but texture lookups inside loops can be undefined in GLSL. Specifically, note the tiny sentence "Derivatives are undefined within non-uniform control flow" in section 8.9 of the GLSL ES spec, and see section 3.9.2 for what counts as non-uniform control flow. The fact that this works on other devices is by chance.
Unfortunately, your only option may be to unroll the loop.
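For illustration, the unrolled version of the loop from the question (which the asker reports produces the correct result) looks like this:

lowp vec3 sum = vec3(0.0);
sum += texture2D(inputImageTexture, sample_coordinates[0]).rgb * sample_weights[0];
sum += texture2D(inputImageTexture, sample_coordinates[1]).rgb * sample_weights[1];
sum += texture2D(inputImageTexture, sample_coordinates[2]).rgb * sample_weights[2];
sum += texture2D(inputImageTexture, sample_coordinates[3]).rgb * sample_weights[3];
sum += texture2D(inputImageTexture, sample_coordinates[4]).rgb * sample_weights[4];
sum += texture2D(inputImageTexture, sample_coordinates[5]).rgb * sample_weights[5];
sum += texture2D(inputImageTexture, sample_coordinates[6]).rgb * sample_weights[6];
sum += texture2D(inputImageTexture, sample_coordinates[7]).rgb * sample_weights[7];
sum += texture2D(inputImageTexture, sample_coordinates[8]).rgb * sample_weights[8];
gl_FragColor = vec4(sum, fragment_color.a);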
I am trying to implement image processing algorithms such as Gaussian filtering and bilateral filtering on the GPU using GLSL.
I am getting confused about which part is "really" executed in parallel. For example, I have a 1280*720 preview as a texture, and I am not quite sure which part really runs 1280*720 times and which part does not.
What is the dispatching mechanism for GLSL code?
My Gaussian filtering code is:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES sTexture;
uniform sampler2D sTextureMask;
void main() {
float r=texture2D(sTexture, vTextureCoord).r;
float g=texture2D(sTexture, vTextureCoord).g;
float b=texture2D(sTexture, vTextureCoord).b;
// a test sample
float test=1.0*0.5;
float width=1280.0;
float height=720.0;
vec4 sum;
//offsets of a 3*3 kernel
vec2 offset0=vec2(-1.0,-1.0); vec2 offset1=vec2(0.0,-1.0); vec2 offset2=vec2(1.0,-1.0);
vec2 offset3=vec2(-1.0,0.0); vec2 offset4=vec2(0.0,0.0); vec2 offset5=vec2(1.0,0.0);
vec2 offset6=vec2(-1.0,1.0); vec2 offset7=vec2(0.0,1.0); vec2 offset8=vec2(1.0,1.0);
//Gaussian kernel with sigma == 100.0
float kernelValue0 = 0.999900; float kernelValue1 = 0.999950; float kernelValue2 = 0.999900;
float kernelValue3 = 0.999950; float kernelValue4 =1.000000; float kernelValue5 = 0.999950;
float kernelValue6 = 0.999900; float kernelValue7 = 0.999950; float kernelValue8 = 0.999900;
vec4 cTemp0;vec4 cTemp1;vec4 cTemp2;vec4 cTemp3;vec4 cTemp4;vec4 cTemp5;vec4 cTemp6;vec4 cTemp7;vec4 cTemp8;
//getting 3*3 pixel values around current pixel
vec2 src_coor_2;
src_coor_2=vec2(vTextureCoord[0]+offset0.x/width,vTextureCoord[1]+offset0.y/height);
cTemp0=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset1.x/width,vTextureCoord[1]+offset1.y/height);
cTemp1=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset2.x/width,vTextureCoord[1]+offset2.y/height);
cTemp2=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset3.x/width,vTextureCoord[1]+offset3.y/height);
cTemp3=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset4.x/width,vTextureCoord[1]+offset4.y/height);
cTemp4=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset5.x/width,vTextureCoord[1]+offset5.y/height);
cTemp5=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset6.x/width,vTextureCoord[1]+offset6.y/height);
cTemp6=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset7.x/width,vTextureCoord[1]+offset7.y/height);
cTemp7=texture2D(sTexture, src_coor_2);
src_coor_2=vec2(vTextureCoord[0]+offset8.x/width,vTextureCoord[1]+offset8.y/height);
cTemp8=texture2D(sTexture, src_coor_2);
//convolution
sum =kernelValue0*cTemp0+kernelValue1*cTemp1+kernelValue2*cTemp2+
kernelValue3*cTemp3+kernelValue4*cTemp4+kernelValue5*cTemp5+
kernelValue6*cTemp6+kernelValue7*cTemp7+kernelValue8*cTemp8;
float factor=kernelValue0+kernelValue1+kernelValue2+kernelValue3+kernelValue4+kernelValue5+kernelValue6+kernelValue7+kernelValue8;
gl_FragColor = sum/factor;
//gl_FragColor=texture2D(sTexture, vTextureCoord);
}
This code runs at a lower fps than the pure preview on my phone (Galaxy Nexus).
But if I change the last part of my code to output the original pixel value directly, like
//gl_FragColor = sum/factor;
gl_FragColor=texture2D(sTexture, vTextureCoord);
it runs fast, at the same fps as the pure preview.
The question is: things I wrote for testing at the beginning and that are never used, like
float test=1.0*0.5;
how many times is it executed?
Other parts, like:
sum =kernelValue0*cTemp0+kernelValue1*cTemp1+kernelValue2*cTemp2+
kernelValue3*cTemp3+kernelValue4*cTemp4+kernelValue5*cTemp5+
kernelValue6*cTemp6+kernelValue7*cTemp7+kernelValue8*cTemp8;
would these not run 1280*720 times just because I change
gl_FragColor = sum/factor;
to
gl_FragColor=texture2D(sTexture, vTextureCoord);?
What is the mechanism that decides which code runs 1280*720 times and which code is useless and skipped when running in parallel across the pixels? Is it done automatically?
What is the architecture and dispatching model of a GLSL program, and how does it organize the data for the GPU?
I am also wondering what I should do for more complicated operations like bilateral filtering, with a kernel size such as 9*9 (nine times as many samples per pixel as this 3*3 Gaussian kernel).
The entire fragment shader is executed as a whole for each and every fragment. A fragment approximates either an output pixel (if no antialiasing is done) or, with multisample antialiasing, a sample of the framebuffer. What a fragment exactly is, is not specified in detail by the OpenGL spec, other than that it is the output of the fragment stage, which is then turned into values on the framebuffer bitplanes:
The rasterizer produces a series of framebuffer addresses and values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed to the next stage that performs operations on individual fragments before they finally alter the framebuffer. These operations include [...]
[OpenGL 3.3 core spec, section 2.4]
would these not run 1280*720 times just because I change
gl_FragColor = sum/factor;
to
gl_FragColor=texture2D(sTexture, vTextureCoord);?
Division is a costly and complex operation. Since the sum of the kernel is a constant and doesn't change per fragment, you shouldn't evaluate it in the shader. Evaluate it on the CPU and supply 1./factor as a uniform (a constant equal for all fragments), and multiply that with sum, which is much faster than division.
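For illustration, a sketch of that on the host side (program is assumed to be the linked and bound shader program handle, and invFactor is just a placeholder uniform name):

// Sum the nine kernel weights once on the CPU...
float[] kernel = {
    0.999900f, 0.999950f, 0.999900f,
    0.999950f, 1.000000f, 0.999950f,
    0.999900f, 0.999950f, 0.999900f };
float factor = 0f;
for (float k : kernel) factor += k;
// ...and pass the reciprocal to the shader as a uniform (with the program in use).
int invFactorLoc = GLES20.glGetUniformLocation(program, "invFactor");
GLES20.glUniform1f(invFactorLoc, 1.0f / factor);
// Fragment shader side: uniform mediump float invFactor;  gl_FragColor = sum * invFactor;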
Your Gaussian kernel is actually a 3×3 matrix, for which there is a dedicated type in GLSL. The calculations you perform can be rewritten in terms of dot products (the mathematically correct term would be scalar or inner product), for which GPUs have dedicated, accelerated instructions.
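A sketch of what that could look like (not a drop-in replacement for the shader above; cTemp0..cTemp8 and the weights are the same as in the question):

// Kernel weights as a mat3; K[0], K[1], K[2] are its vec3 columns.
mat3 K = mat3(0.999900, 0.999950, 0.999900,
              0.999950, 1.000000, 0.999950,
              0.999900, 0.999950, 0.999900);
// Red channel of the 3x3 neighbourhood packed the same way.
mat3 R = mat3(cTemp0.r, cTemp1.r, cTemp2.r,
              cTemp3.r, cTemp4.r, cTemp5.r,
              cTemp6.r, cTemp7.r, cTemp8.r);
// Three dot products instead of nine scalar multiply-adds; repeat for .g and .b.
float sumR = dot(K[0], R[0]) + dot(K[1], R[1]) + dot(K[2], R[2]);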
Also, you shouldn't split up the components of a texture into individual floats.
All in all, you built quite a number of speed bumps into your code.
On a modern (Shader Model 3.0+) GPU, fragment shaders are scheduled to operate on 2x2 blocks of pixels (pixel quads) at a time. Fun fact: this was required in order to implement the derivative instruction in Shader Model 3.0, and it has remained part of GPU architecture design ever since. Pixel quads are the lowest level of granularity you can ever get in fragment shader scheduling. In fact, if you discard in a fragment shader, then unless all of the fragments in the pixel quad also discard, every instance of the fragment shader in the block continues running and the result is thrown out at the end for the individual fragments that requested the discard.
In addition to this, most GPUs have multiple stream processing units and will schedule pixel quads into larger workgroups (NV calls them warps, AMD calls them wavefronts). In a nutshell, everything is happening in parallel, that is the entire premise of GPUs - they apply a single task across multiple threads that all operate on the same data in parallel; this is why they scale so well when cores are increased as opposed to CPUs.
Put simply, rather than dispatching individual instructions in your GLSL shader to run on separate functional units, what really happens is this. Your GLSL shader is run on multiple processing units simultaneously (conceptually, one thread per-fragment), and these threads all execute the same sequence of instructions in a paradigm known as SIMT (Single Instruction Multiple Thread).
Getting back to the basic scheduling unit (warp/wavefront), if one instance of your shader stalls fetching memory the rest of the instances in said scheduling unit also stall, because they all run the same instruction simultaneously. This is why dependent texture reads and large filter kernels are bad mojo; since the texture memory needed by a particular group of fragments may be indeterminate until run-time or spread too far, efficiently pre-fetching and caching texture data within a scheduling unit can become difficult if not impossible.
The biggest problem with accurately describing the level of parallelism is that GPU architectures keep changing (most of the discussion above relates to Shader Model 3.0+ GPUs). Not too long ago, GPUs had vectorized ISAs, but now both AMD and NV have switched to superscalar because it actually improves instruction scheduling efficiency. Throw specialized embedded GPUs into the mix and you have a real nightmare on your hands; it is hard to really say what shader model they run (since the derivative instruction is optional in OpenGL ES 2.0).
See this other question on Stack Overflow for a more concise statement of what I just wrote.
For some pretty diagrams, here is a somewhat out-of-date but still useful presentation from NVIDIA.
I've been trying to make a 2.5D engine with depth and normal map textures for a few weeks now, not unlike what's used here: Linky. After concluding that drawing a depth map from a texture in the fragment shader was impossible, because ES 2.0 is missing the gl_FragDepth variable, I found a tutorial for iOS where they used glBlendEquation with the GL_MIN/GL_MAX modes to "fake" depth buffering of the fragment into a framebuffer texture: Linky. Unfortunately, GLES20.glBlendEquation makes the application crash on both my phones (SGS 1/2) with an UnsupportedOperationException. So I'm wondering if anyone has used this function with any success? GL_MIN/GL_MAX also seems to be missing from the Android OpenGL ES 2.0 spec, so I'm probably out of luck here...
Any ideas?
BTW, it does seem to work in GL11Ext, but since I'm using the fragment shader for normal mapping this won't work for me.
I was experimenting on my Vega tablet (Tegra) and this worked for me:
fragment shader:
#extension GL_NV_shader_framebuffer_fetch : require
// makes gl_LastFragColor accessible
precision highp float;
varying vec2 v_texcoord;
uniform sampler2D n_sampler;
void main()
{
vec4 v_tex = texture2D(n_sampler, v_texcoord);
gl_FragColor = min(gl_LastFragColor, v_tex); // MIN blending
}
Pretty easy, huh? But I'm afraid this will be NV-only.
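Since GL_NV_shader_framebuffer_fetch is vendor-specific, a reasonable precaution (a sketch, not part of the answer above) is to check the extension string at runtime and fall back to another technique when it is missing:

// Must run on the GL thread with a current context.
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasFramebufferFetch =
        extensions != null && extensions.contains("GL_NV_shader_framebuffer_fetch");
if (!hasFramebufferFetch) {
    // Fall back to a different blending/depth technique here.
}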