I have been trying to port this shader to a mobile device. I am doing this on an Android device with OpenGL ES 2.0. Here is the fragment shader code for reference:
void main(void)
{
    // map pixel position into [-1, 1]
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / iResolution.xy;
    vec2 uv;
    // angle of the current pixel from the origin;
    // atan returns values in [-pi, pi]
    float a = atan(p.y, p.x);
    // distance of the point from the origin
    //float r = sqrt(dot(p,p));
    float power = 7.0;
    // http://en.wikipedia.org/wiki/Minkowski_distance
    float r = pow( pow(p.x*p.x, power) + pow(p.y*p.y, power), 1.0/(2.0*power) );
    // add global time for a moving tunnel
    uv.x = .2/r + iGlobalTime/2.0;
    uv.y = a/(3.1416);
    // scale by (1.0 - r) for a darkening effect
    vec3 col = texture2D(iChannel0, uv).xyz * (1.0-r);
    //vec3 col = vec3(uv.y, 0.0, 0.0);
    gl_FragColor = vec4(col, 1.0);
}
I am getting the following result on my mobile phones (Moto G, Samsung Galaxy S Advance). Note how the texture looks flat and kind of clamped in the middle,
and the following when the same code is run on the emulator (Nexus 5, API 21, with the emulated host GPU option),
which is the expected output.
My texture wrapping mode is set to GL_REPEAT. What might be wrong?
This is a precision issue. mediump, which is the highest precision that is guaranteed to be supported in ES 2.0 shaders, has a floating point magnitude range of [2^-14, 2^14], as listed in the table on page 33 of the spec ("The OpenGL ES Shading Language", version 1.00, which can be found at https://www.khronos.org/registry/gles/).
The following sequence of statements will quickly produce underflow:
float power = 7.0;
float r = pow( pow(p.x*p.x,power) + pow(p.y*p.y,power), 1.0/(2.0*power) );
Looking at this sub-expression:
pow(p.x*p.x,power)
Based on the calculation/comment at the start of your shader, the range of p.x is [-1, 1]. Using 0.1 as the value:
pow(0.1 * 0.1, 7.0) = pow(0.01, 7.0) = 10^-14
This is right around the limit for a mediump value to underflow, so these sub-expressions will underflow to zero whenever p.x or p.y falls within roughly [-0.1, 0.1].
It's not obvious what the best workaround is. Some ideas to try:
Use highp precision if your device supports it. highp support is optional in ES 2.0, and only available if GL_FRAGMENT_PRECISION_HIGH is defined.
Try a lower exponent than 7.0, and see if the visual result still meets your requirement.
Use some form of clamping for the pixels that would underflow. You could either discard them or color them black. While this will not render the wall in the center, it will at least avoid the ugly artifacts (a sketch combining this with the highp fallback from the first idea follows below).
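A minimal sketch of what that could look like; the 0.1 cutoff comes from the analysis above, and the rest of the tunnel shader is assumed unchanged:
// Request highp when the GPU advertises it; otherwise fall back to mediump.
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

void main(void)
{
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / iResolution.xy;
    // On mediump-only devices pow(p.x*p.x, 7.0) underflows once |p.x| < ~0.1,
    // so paint the region where both terms underflow black instead of
    // letting it produce artifacts (harmless when highp is in use).
    if (max(abs(p.x), abs(p.y)) < 0.1) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
        return;
    }
    // ... rest of the tunnel shader unchanged ...
}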
Related
I am writing a game with Android OpenGL ES 2.0 and I have a problem with light brightness on some different devices. I use this formula to calculate the light in my fragment shader:
float shineDamper = 0.0;
float reflectivity = 0.001;
float ambient = 0.5;
vec3 unitNormal = normalize(surfaceNormal);
vec3 unitVectorToCamera = normalize(toCameraVector);
vec3 lightColor = vec3(0.5,0.5,0.5);
vec3 unitLightVector = normalize(toLightVector);
float nDot1 = dot(unitNormal,unitLightVector);
float brightness = max(nDot1,0.0);
vec3 lightDirection = -unitLightVector;
vec3 reflectedLightDirection = reflect(lightDirection,unitNormal);
float specularFactor = dot(reflectedLightDirection,unitVectorToCamera);
specularFactor = max(specularFactor,0.0);
float dampedFactor = pow(specularFactor,shineDamper);
vec3 diffuse = brightness * lightColor;
diffuse = max(diffuse,ambient);
vec3 finalSpecular = dampedFactor * reflectivity * lightColor;
gl_FragColor = vec4(diffuse, 1.0) * text + vec4(finalSpecular, 1.0);
The problem is that the light is very dark on some old devices, but if I change the light color to 10 it looks OK on the old devices while on other phones the light is very, very bright. The light position is very far from the models because I want to get a sun-like effect.
Solved!
I just moved the light calculation to the vertex shader.
Most likely it's caused by the fact that some older Android devices only support mediump in the fragment shader, which approximates a half-precision float. It was probably struggling to perform some of the calculations or hold some of the values at that limited precision.
The non-normalized vectors toCameraVector and toLightVector are the most likely culprits.
A decent fix would be to normalize them in the vertex shader (and still normalize in the fragment shader to minimize artifacts from denormalization during interpolation).
If you're getting good visual results by doing the calc in the vertex shader, then that's ideal, because you've probably massively improved performance too.
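A minimal sketch of the normalize-in-the-vertex-shader fix; the attribute, uniform, and worldPosition names are assumptions, not code from the question:
attribute vec3 position;
attribute vec3 normal;

uniform mat4 modelMatrix;
uniform mat4 mvpMatrix;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;

varying vec3 surfaceNormal;
varying vec3 toLightVector;
varying vec3 toCameraVector;

void main() {
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    gl_Position = mvpMatrix * vec4(position, 1.0);
    // Hand the fragment stage unit-length vectors so its mediump
    // floats never have to hold large world-space magnitudes.
    surfaceNormal  = normalize((modelMatrix * vec4(normal, 0.0)).xyz);
    toLightVector  = normalize(lightPosition - worldPosition.xyz);
    toCameraVector = normalize(cameraPosition - worldPosition.xyz);
}
The fragment shader keeps its normalize() calls, since interpolation between vertices can still shorten the vectors slightly.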
I'm trying to implement DepthBuffer-like functionality using OpenGL ES on Android.
In other words, I'm trying to get the 3D point on the surface that is rendered at point [x, y] on the user's device. In order to do that I need to be able to read the distance of the fragment at that given point.
Answer in different circumstances:
When using desktop OpenGL you could achieve this by creating a framebuffer and then attaching either a renderbuffer or a texture with a depth component to it.
Both of those approaches use glReadPixels with a format of GL_DEPTH_COMPONENT to retrieve the data from the buffer/texture. Unfortunately, OpenGL ES only supports GL_ALPHA, GL_RGB, and GL_RGBA as read-back formats, so there's really no way to reach the framebuffer's depth data directly.
The only viable approach that I can think of (and that I have found suggested on the internet) is to create a separate shader just for depth rendering. That shader should write the gl_FragCoord.z value (the distance value that we want to read) into gl_FragColor. However:
The actual Question:
When I write the gl_FragCoord.z value into gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0); and later use glReadPixels to read back the RGB values, the values read do not match the input.
What I have tried:
I realize that there are only 24 bits (r, g, b, 8 bits each) representing the depth data, so I tried shifting the returned value left by 8 bits to get 32 bits, but it didn't seem to work. I also tried to shift the distance when applying it to red, green, and blue, but that didn't work as expected either. I have been trying to figure out what's wrong by observing the bits; results are at the bottom.
fragmentShader.glsl (candidate #3):
void main() {
    highp float distance = 1.0; // currently just 1.0 to test the results with different values
    lowp float red = distance / exp2(16.0);
    lowp float green = distance / exp2(8.0);
    lowp float blue = distance / exp2(0.0);
    gl_FragColor = vec4(red, green, blue, 1.0);
}
Method used to read the values back (glReadPixels):
private float getDepth(int x, int y){
    FloatBuffer buffer = GeneralSettings.getFloatBuffer(1); // just creates a FloatBuffer with a capacity of 1 float value
    terrainDepthBuffer.bindFrameBuffer(); // bind the framebuffer before reading back
    GLES20.glReadPixels(x, y, 1, 1, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, buffer); // read the values from the previously bound framebuffer
    GeneralSettings.checkGlError("glReadPixels"); // make sure there are no GL-related errors
    terrainDepthBuffer.unbindCurrentFrameBuffer(); // remember to unbind the buffer after reading/writing
    System.out.println(buffer.get(0)); // print the value
    return buffer.get(0); // the method must return the value it read
}
Observations in bits using the shader & method above:
Value | Shader input | ReadPixels output
1.0f | 111111100000000000000000000000 | 111111110000000100000000
0.0f | 0 | 0
0.5f | 111111000000000000000000000000 | 100000000000000100000000
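For comparison, a widely used way to pack a depth value into 8-bit channels relies on fract() so that each channel keeps only its own 8 bits; this is a generic sketch with assumed names, not a verified fix for the code above:
// Pack a highp depth value in [0, 1) into three 8-bit channels.
// Each channel holds successive base-255 "digits" of the value; the
// subtraction removes the part already stored in the higher channel.
vec3 packDepth(highp float depth) {
    highp vec3 enc = fract(depth * vec3(1.0, 255.0, 65025.0));
    enc -= enc.gbb * vec3(1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}
On the CPU side, the bytes read back with GL_UNSIGNED_BYTE (so into a ByteBuffer rather than a FloatBuffer) decode as depth ≈ R/255.0 + G/65025.0 + B/16581375.0, with R, G, B in 0..255.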
As a continuation of my previous question (GLSL: Accessing an array in a for-loop hinders performance), I have encountered an entirely new and annoying problem.
So, I have a shader that performs a black hole effect.
The shader works perfectly on my computer, the Android emulator, and ShaderToy, but for some reason, even though the code is exactly the same, it does not work on my Android device.
The problem occurs when I zoom in too far. For whatever reason, when my zoom reaches a certain point, the whole background zooms in, then zooms out and goes crazy. Like this:
When it should look like this:
However, it does work on my device if I change this:
#ifdef GL_ES
precision mediump float;
#endif
to this:
#ifdef GL_ES
precision highp float;
#endif
The problem with this is that it also decreases my FPS from 60 down to ~40.
I believe the problem is that my Android device's OpenGL version is "OpenGL ES 3.0" according to Gdx.gl.glGetString(GL20.GL_VERSION).
But I cannot figure out how to set my version to OpenGL 2.0 since the AndroidApplicationConfiguration class is giving me little to no options.
I've tried putting <uses-feature android:glEsVersion="0x00020000" android:required="true" /> in the manifest, but it still prints "OpenGL ES 3.0".
And I still don't even know if this is actually the cause of the problem or not, so that's why I'm asking here. Thank you for taking the time to read/answer my question :).
P.S. Here's the Shader code:
#ifdef GL_ES
precision mediump float;
#endif

const int MAX_HOLES = 4;

uniform sampler2D u_sampler2D;
varying vec2 vTexCoord0;

struct BlackHole {
    vec2 position;
    float radius;
    float deformRadius;
};

uniform vec2 screenSize;
uniform vec2 cameraPos;
uniform float cameraZoom;
uniform BlackHole blackHole[MAX_HOLES];

void main() {
    vec2 pos = vTexCoord0;
    float black = 0.0;
    for (int i = 0; i < MAX_HOLES; i++) {
        BlackHole hole = blackHole[i];
        vec2 position = (hole.position - cameraPos.xy) / cameraZoom + screenSize*0.5;
        float radius = hole.radius / cameraZoom;
        float deformRadius = hole.deformRadius / cameraZoom;
        vec2 deltaPos = vec2(position.x - gl_FragCoord.x, position.y - gl_FragCoord.y);
        float dist = length(deltaPos);
        float distToEdge = max(deformRadius - dist, 0.0);
        float dltR = max(sign(radius - dist), 0.0);
        black = min(black + dltR, 1.0);
        pos += (distToEdge * normalize(deltaPos) / screenSize);
    }
    gl_FragColor = (1.0 - black) * texture2D(u_sampler2D, pos) + black * vec4(0.0, 0.0, 0.0, 1.0);
}
As you have found, the issue is down to a lack of precision in fp16 (mediump), which is fixed by using fp32 (highp). Most math units have double the throughput for fp16 vs fp32, which also explains the drop in performance.
Querying the driver for the GLES version will return the maximum supported version, not the version of the current EGL context, so what you are seeing is expected.
Also please note that highp is optional in OpenGL ES 2.0 fragment shaders, so there is no guarantee that your shader will work on every GPU in an OpenGL ES 2.0 context. The Mali-4xx series only supports fp16 fragment shaders, for example (I think the same is true of some of the OpenGL ES 2.0 Vivante GPUs, based on past experience).
In OpenGL ES 3.0, highp is mandatory in fragment shaders, so it would be guaranteed to work there.
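If you want to check at run time whether the fragment stage supports highp, rather than finding out from broken rendering, you can query the driver; a small sketch using the standard GLES20 API:
// Query the range/precision of highp floats in the fragment shader.
// OpenGL ES reports a precision of 0 when the format is unsupported.
int[] range = new int[2];     // log2 of the min/max representable magnitudes
int[] precision = new int[1]; // log2 of the relative precision
GLES20.glGetShaderPrecisionFormat(GLES20.GL_FRAGMENT_SHADER,
        GLES20.GL_HIGH_FLOAT, range, 0, precision, 0);
boolean fragmentHighpSupported = precision[0] > 0;
That result can then drive which precision line gets prepended to the shader source.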
I am new to OpenGL. I am drawing a map in Android using OpenGL ES 2.0. When I touch the screen I get screen coordinates, and I want these coordinates converted into world coordinates. The code I found through research is as follows:
vec3 UnProjectPoint( const vec3& Point, const mat4& Projection, const mat4& ModelView )
{
    vec4 R( Point, 1.0f );
    R.x = 2.0f * R.x - 1.0f;
    R.y = 2.0f * R.y - 1.0f;
    R.y = -R.y;
    R.z = 1.0f;
    R = Projection.GetInversed() * R;
    R = ModelView.GetInversed() * R;
    return R.ToVec3();
}
but Android does not let me use vec3 and vec4.
I also tried the built-in gluUnProject function, but it does not give me correct results either.
I suggest you just use an orthographic projection.
This worked for me in this 2d example project of mine:
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
And then: Matrix.multiplyMM(... Take a look at the aforementioned source.
In the vertex shader, you then simply use:
gl_Position = uMVPMatrix * vPosition;
The whole application logic works with the screen coordinates and these are projected onto OpenGL coordinates within that vertex shader.
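A sketch of the matrix setup being described; the screen-size variables and the trivial camera are placeholders, not code from the linked project:
// Typically done in onSurfaceChanged(GL10 gl, int width, int height),
// where screenWidth/screenHeight are the values passed in.
float[] mtrxProjection = new float[16];
float[] mtrxView = new float[16];
float[] mtrxProjectionAndView = new float[16];

// Orthographic projection mapping screen pixels 1:1 to world units.
Matrix.orthoM(mtrxProjection, 0, 0f, screenWidth, 0f, screenHeight, 0f, 50f);
// A trivial camera at z = 1 looking at the origin.
Matrix.setLookAtM(mtrxView, 0, 0f, 0f, 1f, 0f, 0f, 0f, 0f, 1f, 0f);
// This product is the uMVPMatrix used in the vertex shader above.
Matrix.multiplyMM(mtrxProjectionAndView, 0, mtrxProjection, 0, mtrxView, 0);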
I'm just an OpenGL noob, so I don't know whether this is an optimal solution.
BTW: on page 111 of the book Graphics Gems V one can find a general approach to such transformations.
HTH
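For completeness, the C++ helper from the question can be written in plain Java with android.opengl.Matrix, which sidesteps the missing vec3/mat4 types; a sketch, assuming column-major float[16] matrices and a window-space point with x and y in [0, 1]:
// Java translation of the UnProjectPoint() helper above.
static float[] unProjectPoint(float[] point,      // {x, y, z}, x and y in [0, 1]
                              float[] projection, // column-major float[16]
                              float[] modelView)  // column-major float[16]
{
    // Map to normalized device coordinates, flipping y as in the original.
    float[] r = { 2f * point[0] - 1f, -(2f * point[1] - 1f), 1f, 1f };
    float[] inv = new float[16];
    float[] tmp = new float[4];

    android.opengl.Matrix.invertM(inv, 0, projection, 0);
    android.opengl.Matrix.multiplyMV(tmp, 0, inv, 0, r, 0);

    android.opengl.Matrix.invertM(inv, 0, modelView, 0);
    android.opengl.Matrix.multiplyMV(r, 0, inv, 0, tmp, 0);

    return new float[] { r[0], r[1], r[2] };
}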
I did a lot of searching and nothing solved my problem. I'm new both to Android and to 3D programming. I'm working on an Android project where I need to draw a 3D object on the device using OpenGL ES. For each pixel I have a distance value between 200 and 9000, which needs to be mapped as a Z coordinate. The object is 320x240.
The questions are:
How do I map (x, y, z) to the OpenGL ES coordinate system? I have created a vertex array whose values are {50f, 50f, 400f, 50f, 51f, 290f, ...}. Each pixel is represented as 3 floats (x, y, z).
How can this vertex array be drawn using OpenGL on Android?
Is it possible to draw 320x240 pixels using OpenGL ES?
OpenGL doesn't really work well with large raw coordinates like these; without a projection matrix, only vertices whose coordinates fall in the [-1, 1] clip range are visible. It is better to convert your coordinates to be between -1 and 1 (i.e. normalize) than to try to make OpenGL use coordinates of 50f or 290f.
The reason the coordinates are normalized to between -1 and 1 is that model coordinates are only supposed to be relative to each other, not indicative of their actual dimensions in a specific game/app. The model could be used in many different games/apps with different coordinate systems, so you want all the model coordinates in some normalized standard form, which the programmer can then interpret in their own way.
To normalize, you loop through all your coordinates and find the value furthest from 0, i.e.:
float maxValueX = 0;
float maxValueY = 0;
float maxValueZ = 0;
// find the max absolute value of x, y and z
for (int i = 0; i < coordinates.length; i++) {
    maxValueX = Math.max(Math.abs(coordinates[i].getX()), maxValueX);
    maxValueY = Math.max(Math.abs(coordinates[i].getY()), maxValueY);
    maxValueZ = Math.max(Math.abs(coordinates[i].getZ()), maxValueZ);
}
// convert all the coordinates to be between -1 and 1
for (int i = 0; i < coordinates.length; i++) {
    Vector3f coordinate = coordinates[i];
    coordinate.setX(coordinate.getX() / maxValueX);
    coordinate.setY(coordinate.getY() / maxValueY);
    coordinate.setZ(coordinate.getZ() / maxValueZ);
}
You only need to do this once. Assuming you are storing your data in a file, you can write a little utility program that applies the above to the file and saves the result, rather than normalizing every time you load the data into your app.
Check out the GLSurfaceView activity in the ApiDemos that ship with the Android SDK. It will give you a basic primer on how Android handles rendering through OpenGL ES. It is located in android-sdk/samples/android-10/ApiDemos; make sure you have downloaded the 'Samples for SDK' for the given API level.
Here are a couple of resources to get you started as well:
Android Dev Blog on GLSurfaceView
Instructions on OpenGLES
Android Development Documentation on OpenGL
Hope that helps.
Adding to what James mentioned about normalizing to [-1, 1].
A little bit of code:
Fill the data into a flat array as x, y, z, assuming you are using a vertex shader similar to:
"attribute vec3 coord3d;" +
"uniform mat4 transform;" +
"void main(void) {" +
" gl_Position = transform * vec4(coord3d.xyz, 1.0f);" + // size of 3 with a=1.0f for all points
" gl_PointSize = 10.0;"+
"}"
Get the attribute:
attribute_coord3d = glGetAttribLocation(program, "coord3d");
Create the VBO:
glGenBuffers(1, vbo, 0);
Bind it:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
Put the data in (note that glBufferData takes the size in bytes, i.e. flatArray.length * 4):
glBufferData(GL_ARRAY_BUFFER, flatArray.length * 4, makeFloatBuffer(flatArray), GL_STATIC_DRAW);
where makeFloatBuffer is a function that creates a buffer:
private FloatBuffer makeFloatBuffer(float[] arr) {
    ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4); // 4 bytes per float
    bb.order(ByteOrder.nativeOrder()); // match the platform's byte order
    FloatBuffer fb = bb.asFloatBuffer();
    fb.put(arr);
    fb.position(0); // rewind so GL reads from the start
    return fb;
}
Bind and point to the buffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glEnableVertexAttribArray(attribute_coord3d);
glVertexAttribPointer(attribute_coord3d, 3, GL_FLOAT, false, vertexStride, 0);
where vertexStride = num_components * Float.BYTES; in our case num_components = 3 (x, y, z).
Draw:
glDrawArrays(GL_POINTS, 0, NUM_OF_POINTS);
Disable the attribute array:
glDisableVertexAttribArray(attribute_coord3d);