Query what shader precision an Android OpenGL ES device supports

I am using OpenGL ES to run some shaders on Android.
Some older/cheaper devices do not support highp precision, so the shader output is incorrect.
I need to know when the app starts whether the device supports high precision. That way I can tell the user "forget it, your device does not support high precision floats" rather than have it output garbage for them.
I found this query code online, but it seems to be for WebGL only:
var highp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
var highpSupported = highp.precision != 0;
Does anyone know a way to query an Android device (KitKat or higher) for the precision its GLES shaders will support?
This is the final code I now use, but the contents of range and precision are always -999 no matter where I run the code in my app: before, during, or after the GLSurfaceView has been created and GLES output has run.
// Sentinel values, so I can tell whether the call wrote anything
IntBuffer range = IntBuffer.allocate(2);
IntBuffer precision = IntBuffer.allocate(1);
range.put(0, -999);
range.put(1, -999);
precision.put(0, -999);

android.opengl.GLES20.glGetShaderPrecisionFormat(android.opengl.GLES20.GL_FRAGMENT_SHADER,
        android.opengl.GLES20.GL_HIGH_FLOAT, range, precision);

String toastText = "Range[0]=" + range.get(0) + " Range[1]=" + range.get(1)
        + " Precision[0]=" + precision.get(0);
Toast.makeText(getApplicationContext(), toastText, Toast.LENGTH_SHORT).show();
The above code always returns -999 for all three values, and the Khronos documentation states that if an error occurs the values are left unchanged. So it looks like there is an error, or I am not calling it at the right time.
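The likely reason the values come back unchanged is that glGetShaderPrecisionFormat needs a current GL context on the calling thread, and with a GLSurfaceView that is the GL thread, not the UI thread where a Toast runs. Below is a minimal sketch of running the query inside the renderer callbacks (class name and log tag are mine; Log is used because onSurfaceCreated does not run on the main thread):

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.util.Log;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class PrecisionProbeRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // A GL context is current on this (GL) thread, so the query is valid here.
        int[] range = new int[2];
        int[] precision = new int[1];
        GLES20.glGetShaderPrecisionFormat(GLES20.GL_FRAGMENT_SHADER,
                GLES20.GL_HIGH_FLOAT, range, 0, precision, 0);
        // highp is unsupported in fragment shaders if range and precision are all zero.
        boolean highpSupported = range[0] != 0 || range[1] != 0 || precision[0] != 0;
        Log.d("PrecisionProbe", "fragment highp supported: " + highpSupported);
    }

    @Override public void onSurfaceChanged(GL10 unused, int width, int height) { }
    @Override public void onDrawFrame(GL10 unused) { }
}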

Related

Android OpenGL ES 3.1 - Are Uniform Buffer Objects supported?

I have an OpenGL ES 3.1 application that renders fine on the desktop but does not render on Android.
The part that goes wrong involves uniform buffer objects. In the vertex shader I have the following, for example:
layout (std140, binding = 0) uniform matrixUbo
{
    mat4 projection;
    mat4 view;
};
This works fine with desktop drivers, but on Android it fails. The device I am testing on is OpenGL ES 3.2 compatible, and the function calls are available in Android.
I have tried both setting the bindings in the vertex shader and setting them with the glUniformBlockBinding method; neither works on Android (both work on the desktop).
If I don't use those two matrices, the objects render fine (I can see them on my Android phone), but when I include them nothing is drawn, which tells me the matrices are full of zeros.
Is there anything special that needs to be done for UBO's to be supported on android?
I'm happy to provide more information as required.
To answer my own question: they are supported on Android OpenGL ES 3.1, but when you update the data you need to use a ByteBuffer, not a FloatBuffer, even though the function signatures accept either. A strange issue and a pain to debug!
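Based on that finding, a sketch of the working update path (uboId, projectionMatrix, and viewMatrix are placeholders for the real buffer id and float[16] matrices):

import android.opengl.GLES30;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Two std140 mat4s: 2 * 16 floats * 4 bytes each.
ByteBuffer ubo = ByteBuffer.allocateDirect(2 * 16 * 4).order(ByteOrder.nativeOrder());
FloatBuffer floatView = ubo.asFloatBuffer();  // write floats through a view
floatView.put(projectionMatrix);              // float[16]
floatView.put(viewMatrix);                    // float[16]

GLES30.glBindBuffer(GLES30.GL_UNIFORM_BUFFER, uboId);
// Pass the ByteBuffer itself, not the FloatBuffer view.
GLES30.glBufferSubData(GLES30.GL_UNIFORM_BUFFER, 0, ubo.capacity(), ubo);
// Bind to binding point 0 to match layout(std140, binding = 0).
GLES30.glBindBufferBase(GLES30.GL_UNIFORM_BUFFER, 0, uboId);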

Noise screen on OpenGL ES Android

I am trying to create a noise screen on an Android device. To get random values I use this formula:
fract((sin(dot(gl_FragCoord.xy ,vec2(0.9898,78.233)))) * 4375.85453)
I tested it on http://glslsandbox.com/ and it works perfectly there.
But on a phone and a tablet I get different results (screenshots omitted): the output on a Nexus 9 is not so bad, but on an LG-D415 it is clearly broken.
Can somebody help me with it?
Shader:
#ifdef GL_ES
precision mediump float;
#endif
float rand(vec2 co) {
    return fract((sin(dot(co.xy, vec2(0.9898, 78.233)))) * 4375.85453);
}

void main() {
    vec2 st = gl_FragCoord.xy;
    float rnd = rand(st);
    gl_FragColor = vec4(vec3(rnd), 1.0);
}
It's almost certainly a precision issue. Your mediump-based code might be executed as standard single-precision float on some devices, or as half precision on others (or anywhere in between).
Half precision simply isn't enough to do those calculations with any degree of accuracy.
You could change your calculations to use highp, but then you'll run into the next problem which is that many older devices don't support highp in fragment shaders.
I'd strongly recommend you create a noise texture, and just sample from it. It'll produce much more consistent results across devices, and I'd expect it to be much faster on the majority of mobile GPUs.
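A minimal sketch of building such a noise texture (the size, method name, and parameter choices are my own): fill a byte array with random values once on the CPU, upload it as a tiling single-channel texture, and have the shader sample it instead of evaluating sin().

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.util.Random;

// Build a 256x256 single-channel noise texture once at startup (GL thread).
int createNoiseTexture() {
    final int size = 256;  // power of two, so GL_REPEAT works in ES 2.0
    byte[] noise = new byte[size * size];
    new Random().nextBytes(noise);
    ByteBuffer pixels = ByteBuffer.allocateDirect(noise.length);
    pixels.put(noise).position(0);

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
            size, size, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, pixels);
    // GL_REPEAT + GL_NEAREST so the texture tiles as raw, unfiltered noise.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
    return tex[0];
}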
This is an interesting article about floating point precision: http://www.youi.tv/mobile-gpu-floating-point-accuracy-variances/
This is the related app on the Google Play store.
This is a more in-depth discussion of GPU precision: https://community.arm.com/groups/arm-mali-graphics/blog/2013/05/29/benchmarking-floating-point-precision-in-mobile-gpus

Multiple Fragment Outputs in GLSL 300 es

While writing unit tests for a simple NDK OpenGL ES 3.0 demo, I encountered an issue using multiple render targets. Consider this simple fragment shader with two outputs, declared in a C++11 string literal:
const static std::string multipleOutputsFragment = R"(#version 300 es
precision mediump float;
layout(location = 0) out vec3 out_color1;
layout(location = 1) out vec3 out_color2;
void main()
{
    out_color1 = vec3(1.0, 0.0, 0.0);
    out_color2 = vec3(0.0, 0.0, 1.0);
}
)";
I have set up an FBO with two color attachments (via glFramebufferTexture2D), and glCheckFramebufferStatus comes back as GL_FRAMEBUFFER_COMPLETE. I also call glDrawBuffers(2, &attachments[0]), where attachments is a vector of GL_COLOR_ATTACHMENTi enums. Afterwards, I compile and link the shader without any compile or link errors (the vertex shader is a simple passthrough that is irrelevant to this post).
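For reference, that setup corresponds to roughly the following in Android Java rather than the NDK (a sketch, not the asker's actual code; colorTex0 and colorTex1 stand in for the attached textures):

import android.opengl.GLES30;

// Sketch of a two-attachment FBO (texture creation omitted).
int[] fbo = new int[1];
GLES30.glGenFramebuffers(1, fbo, 0);
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fbo[0]);
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
        GLES30.GL_TEXTURE_2D, colorTex0, 0);
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT1,
        GLES30.GL_TEXTURE_2D, colorTex1, 0);

// Route fragment output locations 0 and 1 to the two attachments.
int[] drawBuffers = { GLES30.GL_COLOR_ATTACHMENT0, GLES30.GL_COLOR_ATTACHMENT1 };
GLES30.glDrawBuffers(2, drawBuffers, 0);

if (GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER)
        != GLES30.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("FBO incomplete");
}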
Is there any reason why I can get the fragment data location for out_color1 but not for out_color2, using the following OpenGL ES function?
auto location = glGetFragDataLocation(m_programId, name.c_str());
The correct location, 0, is returned for out_color1, but when "out_color2" is passed to glGetFragDataLocation, the function returns -1.
I'm testing this on a physical device: a Galaxy S4, Android 4.4.4, Adreno 320, OpenGL ES 3.0.
If you call:
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(...);
can you see the data that you write to the FBO?
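In Android Java terms, that check could look something like this sketch (reading a single pixel back from the second attachment; the expected color assumes the shader above ran):

import android.opengl.GLES30;
import android.util.Log;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// With the FBO still bound, select the second color attachment and read one pixel.
GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT1);
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES30.glReadPixels(0, 0, 1, 1, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixel);
// out_color2 = vec3(0.0, 0.0, 1.0) should come back as roughly (0, 0, 255).
int r = pixel.get(0) & 0xFF;
int g = pixel.get(1) & 0xFF;
int b = pixel.get(2) & 0xFF;
Log.d("MRTCheck", "attachment 1 pixel: " + r + ", " + g + ", " + b);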

Android OpenGL ES not rasterizing - Matrix multiplication switched

I just bought a new SGS3 (I9300, NOT LTE) and hoped to continue developing an OpenGL ES (2) application.
Unfortunately, when I compile and run it I don't see anything.
I get the following LogCat error messages:
D/libEGL(6890): loaded /system/lib/egl/libEGL_mali.so
D/libEGL(6890): loaded /system/lib/egl/libGLESv1_CM_mali.so
D/libEGL(6890): loaded /system/lib/egl/libGLESv2_mali.so
E/(6890): Device driver API match
E/(6890): Device driver API version: 23
E/(6890): User space API version: 23
E/(6890): mali: REVISION=Linux-r3p2-01rel3 BUILD_DATE=Wed Oct 9 21:05:57 KST 2013
D/OpenGLRenderer(6890): Enabling debug mode 0
I also installed a custom rom (cyanogenmod 11 - snapshot M4) but I get the same problem.
When I start the app, I get a blank screen without any vertices rasterized. The clear color works, so the basic functionality of OpenGL is working.
To be sure, I tried the basic tutorial from the Google Developers page with both GLES 1 and GLES 2. Neither works (screenshots omitted)! The tutorial is here:
http://developer.android.com/training/graphics/opengl/index.html
The project itself works fine on my old Galaxy S1 (also on CyanogenMod) but shows nothing but a blank screen on my SGS3.
Is it possible that the Mali-400 MP4 graphics driver / system interprets GL commands differently? Is there a different way of calling GL than with the Hummingbird GPU in my SGS1?
Does anyone have an idea what to do? Is this a problem with my phone or with Eclipse? Or is this normal, just my lack of understanding? How can I fix this problem?
------- EDIT: SOLUTION FOUND -------
Okay, I found the error. The Google document shows a "wrong" multiplication of the matrices in the vertex shader:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
    gl_Position = uMVPMatrix * vPosition;
}
This doesn't seem to be a problem for my old Galaxy S1, but somehow the S3 (or the Mali GPU) is picky about it. I changed the order of multiplication to this:
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
void main() {
    gl_Position = vPosition * uMVPMatrix;
}
And it works (also on the S1). I'm still not sure why the S1 works fine with both versions, but this solves the problem.
Thanks for your help!
Just off the top of my head, did you check the projection settings? It's possible those shapes are being drawn, just not where you expect them to be.
Also, check the return values for the shader loading steps. If there's a compile issue, you'll get an invalid handle for the shader program.
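For example, a minimal sketch of such a check (the helper name is mine):

import android.opengl.GLES20;
import android.util.Log;

// Compile a shader and log the driver's error message on failure.
static int compileShader(int type, String source) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);

    int[] status = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
    if (status[0] == 0) {
        Log.e("ShaderCompile", GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        return 0;  // the invalid handle mentioned above
    }
    return shader;
}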
There may be some warnings or errors in your code which aren't bubbling up to LogCat. I've found that using a native OpenGL wrapper from LibGDX actually shows me those errors and is pretty helpful for debugging.
You can find some helpful pointers on setting up the LibGDX libraries here - http://www.learnopengles.com/android-lesson-seven-an-introduction-to-vertex-buffer-objects-vbos/

GLSL programming architecture: which part is "really" executed in parallel?

I am trying to implement image processing algorithms like Gaussian filtering and bilateral filtering on the GPU using GLSL.
I am getting confused about which parts are "really" executed in parallel. For example, I have a 1280*720 preview as a texture. I am not quite sure which parts really run 1280*720 times and which parts don't.
What is the dispatch mechanism for GLSL code?
My Gaussian filtering code looks like this:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES sTexture;
uniform sampler2D sTextureMask;
void main() {
    float r = texture2D(sTexture, vTextureCoord).r;
    float g = texture2D(sTexture, vTextureCoord).g;
    float b = texture2D(sTexture, vTextureCoord).b;
    // a test sample
    float test = 1.0*0.5;
    float width = 1280.0;
    float height = 720.0;
    vec4 sum;
    // offsets of a 3*3 kernel
    vec2 offset0 = vec2(-1.0,-1.0); vec2 offset1 = vec2(0.0,-1.0); vec2 offset2 = vec2(1.0,-1.0);
    vec2 offset3 = vec2(-1.0, 0.0); vec2 offset4 = vec2(0.0, 0.0); vec2 offset5 = vec2(1.0, 0.0);
    vec2 offset6 = vec2(-1.0, 1.0); vec2 offset7 = vec2(0.0, 1.0); vec2 offset8 = vec2(1.0, 1.0);
    // Gaussian kernel with sigma == 100.0
    float kernelValue0 = 0.999900; float kernelValue1 = 0.999950; float kernelValue2 = 0.999900;
    float kernelValue3 = 0.999950; float kernelValue4 = 1.000000; float kernelValue5 = 0.999950;
    float kernelValue6 = 0.999900; float kernelValue7 = 0.999950; float kernelValue8 = 0.999900;
    vec4 cTemp0; vec4 cTemp1; vec4 cTemp2; vec4 cTemp3; vec4 cTemp4;
    vec4 cTemp5; vec4 cTemp6; vec4 cTemp7; vec4 cTemp8;
    // fetch the 3*3 pixel neighbourhood around the current pixel
    vec2 src_coor_2;
    src_coor_2 = vec2(vTextureCoord[0] + offset0.x/width, vTextureCoord[1] + offset0.y/height);
    cTemp0 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset1.x/width, vTextureCoord[1] + offset1.y/height);
    cTemp1 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset2.x/width, vTextureCoord[1] + offset2.y/height);
    cTemp2 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset3.x/width, vTextureCoord[1] + offset3.y/height);
    cTemp3 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset4.x/width, vTextureCoord[1] + offset4.y/height);
    cTemp4 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset5.x/width, vTextureCoord[1] + offset5.y/height);
    cTemp5 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset6.x/width, vTextureCoord[1] + offset6.y/height);
    cTemp6 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset7.x/width, vTextureCoord[1] + offset7.y/height);
    cTemp7 = texture2D(sTexture, src_coor_2);
    src_coor_2 = vec2(vTextureCoord[0] + offset8.x/width, vTextureCoord[1] + offset8.y/height);
    cTemp8 = texture2D(sTexture, src_coor_2);
    // convolution
    sum = kernelValue0*cTemp0 + kernelValue1*cTemp1 + kernelValue2*cTemp2 +
          kernelValue3*cTemp3 + kernelValue4*cTemp4 + kernelValue5*cTemp5 +
          kernelValue6*cTemp6 + kernelValue7*cTemp7 + kernelValue8*cTemp8;
    float factor = kernelValue0 + kernelValue1 + kernelValue2 + kernelValue3 + kernelValue4 +
                   kernelValue5 + kernelValue6 + kernelValue7 + kernelValue8;
    gl_FragColor = sum/factor;
    //gl_FragColor = texture2D(sTexture, vTextureCoord);
}
This code runs at a lower fps than the pure preview on my phone (Galaxy Nexus).
But if I change the last part of my code to output the original pixel value directly, like
//gl_FragColor = sum/factor;
gl_FragColor=texture2D(sTexture, vTextureCoord);
it runs fast, at the same fps as the pure preview.
The question is: for statements I wrote just for testing that are never used, like
float test=1.0*0.5;
how many times are they executed?
And what about other parts like:
sum =kernelValue0*cTemp0+kernelValue1*cTemp1+kernelValue2*cTemp2+
kernelValue3*cTemp3+kernelValue4*cTemp4+kernelValue5*cTemp5+
kernelValue6*cTemp6+kernelValue7*cTemp7+kernelValue8*cTemp8;
would it no longer run 1280*720 times just because I change
gl_FragColor = sum/factor;
to
gl_FragColor = texture2D(sTexture, vTextureCoord);?
What mechanism decides which code runs 1280*720 times and which code is dead and can be skipped across the pixels? Is it done automatically?
What is the architecture and dispatch model of a GLSL program, and how is the data organized for the GPU?
I am also wondering what I should do for more complicated operations like bilateral filtering, with a kernel size of 9*9, which needs 9 times as many samples per pixel as this 3*3 Gaussian kernel.
The entire fragment shader code is executed as a whole for each and every fragment. A fragment approximates either an output pixel (if no antialiasing is done) or, with multisample antialiasing, the samples of the framebuffer. What a fragment exactly is, is not specified in detail by the OpenGL spec, other than that it is the output of the fragment stage, which is then turned into values on the framebuffer bitplanes.
"The rasterizer produces a series of framebuffer addresses and values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed to the next stage that performs operations on individual fragments before they finally alter the framebuffer. These operations include [...]"
[OpenGL-3.3 core spec, section 2.4]
would it no longer run 1280*720 times just because I change
gl_FragColor = sum/factor;
to
gl_FragColor = texture2D(sTexture, vTextureCoord);?
When nothing reads sum, the shader compiler's dead code elimination can strip out the entire convolution, which is why the pass-through version runs at preview speed. Beyond that: division is a costly and complex operation. Since the sum of the kernel is a constant and doesn't change per fragment, you shouldn't evaluate it in the shader. Evaluate it on the CPU and supply 1./factor as a uniform (which is a constant equal for all fragments), then multiply that with sum, which is much faster than division.
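A sketch of what that could look like on the application side, assuming the shader is changed to declare uniform float invFactor and to compute gl_FragColor = sum * invFactor; (the uniform name and programId are mine):

import android.opengl.GLES20;

// Kernel weights mirrored from the shader; summed once on the CPU.
float[] kernel = {
    0.999900f, 0.999950f, 0.999900f,
    0.999950f, 1.000000f, 0.999950f,
    0.999900f, 0.999950f, 0.999900f
};
float factor = 0f;
for (float k : kernel) factor += k;

GLES20.glUseProgram(programId);  // programId: the linked shader program
int loc = GLES20.glGetUniformLocation(programId, "invFactor");
GLES20.glUniform1f(loc, 1.0f / factor);  // multiply in the shader instead of dividing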
Your Gaussian kernel is actually a 3×3 matrix, for which there is a dedicated type in GLSL (mat3). The calculations you perform can be rewritten in terms of dot products (the mathematically correct term would be scalar or inner product), for which GPUs have dedicated, accelerated instructions.
Also, you shouldn't split the components of a texture sample into individual floats.
All in all, you've built quite a number of speed bumps into your code.
On a modern (Shader Model 3.0+) GPU, fragment shaders are scheduled to operate on 2x2 blocks of pixels (pixel quads) at a time. Fun fact, this was required in order to implement the derivative instruction in Shader Model 3.0 and it has remained part of GPU architecture design ever since. Pixel quads are the lowest-level of granularity you can ever get in fragment shader scheduling. In fact, if you were to discard in a fragment shader, unless all of the fragments in the pixel quad also discard, then every instance of the fragment shader in the block continues running and the result is thrown out at the end for the individual fragments that requested discard.
In addition to this, most GPUs have multiple stream processing units and will schedule pixel quads into larger workgroups (NV calls them warps, AMD calls them wavefronts). In a nutshell, everything is happening in parallel, that is the entire premise of GPUs - they apply a single task across multiple threads that all operate on the same data in parallel; this is why they scale so well when cores are increased as opposed to CPUs.
Put simply, rather than dispatching individual instructions in your GLSL shader to run on separate functional units, what really happens is this. Your GLSL shader is run on multiple processing units simultaneously (conceptually, one thread per-fragment), and these threads all execute the same sequence of instructions in a paradigm known as SIMT (Single Instruction Multiple Thread).
Getting back to the basic scheduling unit (warp/wavefront), if one instance of your shader stalls fetching memory the rest of the instances in said scheduling unit also stall, because they all run the same instruction simultaneously. This is why dependent texture reads and large filter kernels are bad mojo; since the texture memory needed by a particular group of fragments may be indeterminate until run-time or spread too far, efficiently pre-fetching and caching texture data within a scheduling unit can become difficult if not impossible.
The biggest problem with accurately describing the level of parallelism is that the GPU architectures keep changing (most of the discussion above related to Shader Model 3.0+ GPUs). Not too long ago, GPUs had vectorized ISAs but now both AMD and NV have switched to superscalar because it actually improves instruction scheduling efficiency. Throw specialized embedded GPUs into the mix and you have a real nightmare on your hands, it is hard to really say what shader model they run (since derivative is optional in OpenGL ES 2.0).
See this other question on Stack Overflow for a more concise statement of what I just wrote.
For some pretty diagrams, here is a somewhat out of date, but still useful presentation from nVIDIA.
