I have three different OBJs in my scene: a body, and a shirt and pants simulated over that body (rendered in that order).
Rendering shows the inner shirt poking through the pants at some points, in the form of 'holes', on my test Android devices, while it works just fine on desktops.
I guessed that some points are very close to each other, so I tried highp precision, and it started working fine on some of my devices (surprisingly, it still doesn't work on a year-old Nexus!).
Q. Have I identified the correct problem, or could it be caused by something else as well? Is there any way I can solve this issue on all devices?
Q. Can I somehow at least find out which GPUs have this problem, so that I can target my APK accordingly?
Using:
Android 5.0
OpenGL ES 3.0
Edit:
Just in case it's of any help: when rotating the scene or zooming in and out, these holes show a 'twinkling' behavior.
highp support is not mandatory, and AFAICT it's not an 'extension' either, so you can't query it, and support for it is not recorded in GLES capability databases like http://opengles.gpuinfo.org/
You can query the precision of the shader using glGetShaderPrecisionFormat ( http://docs.gl/es3/glGetShaderPrecisionFormat )
Of course, it's up to your application to know what precision is actually needed. And this is a runtime query; there is no way to know in advance.
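For reference, here is a minimal sketch of that query from Java, assuming the standard android.opengl.GLES20 bindings (the class name and the logging are just illustrative):

import android.opengl.GLES20;

public final class PrecisionQuery {
    public static void logFragmentHighpFloat() {
        int[] range = new int[2];     // log2 of the min/max representable magnitudes
        int[] precision = new int[1]; // log2 of the precision (significand bits)
        GLES20.glGetShaderPrecisionFormat(
                GLES20.GL_FRAGMENT_SHADER, GLES20.GL_HIGH_FLOAT,
                range, 0, precision, 0);
        android.util.Log.i("PrecisionQuery",
                "highp float range (log2): " + range[0] + ".." + range[1]
                + ", precision bits: " + precision[0]);
        // A returned precision of 0 indicates the format is not supported; a precision
        // of only ~10 bits means 'highp' is effectively mediump on this GPU.
    }
}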
Alright, so I seem to have solved this.
I quote from the OpenGL archives:
12.040 Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?
You may have configured your zNear and zFar clipping planes in a way
that severely limits your depth buffer precision. Generally, this is
caused by a zNear clipping plane value that's too close to 0.0. As the
zNear clipping plane is set increasingly closer to 0.0, the effective
precision of the depth buffer decreases dramatically. Moving the zFar
clipping plane further away from the eye always has a negative impact
on depth buffer precision, but it's not one as dramatic as moving the
zNear clipping plane.
The same page has lots of details about bounding boxes and related issues. A nice read.
I had a huge bounding box. I reduced its size and moved the camera nearer to the objects, and the issue is now resolved on all devices.
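For anyone hitting the same thing, here is roughly what the fix looks like with android.opengl.Matrix (the class name and the 1/200 near/far values are just example numbers for a scene that fits within a couple hundred units, not the exact values I used):

import android.opengl.Matrix;

public final class ProjectionSetup {
    // Builds a perspective projection whose zNear is kept well away from 0.
    public static float[] buildProjection(int viewportWidth, int viewportHeight) {
        float[] projection = new float[16];
        float aspect = (float) viewportWidth / viewportHeight;

        // Bad: a zNear of 0.001 concentrates almost all depth precision right at the eye.
        // Matrix.frustumM(projection, 0, -aspect, aspect, -1f, 1f, 0.001f, 10000f);

        // Better: the largest zNear the scene allows, and a zFar only just past the scene.
        Matrix.frustumM(projection, 0,
                -aspect, aspect,  // left, right
                -1f, 1f,          // bottom, top
                1f,               // zNear
                200f);            // zFar: just beyond the far side of the bounding box
        return projection;
    }
}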
Related
I have started reading about RenderScript in Android. I am following this particular blog.
While reading, I came across the part called Setting floating point precision.
It might seem like a noob question, but why do we need to change the floating point precision? What benefit do we get? Is it anything specific to RenderScript?
These precision settings are for the compute part of RenderScript. Generally they will not affect rendering, where you get GL precision, which is in general much lower than IEEE 754; but you shouldn't use that anyway, since the graphics part of RenderScript is deprecated.
Essentially, you should use rs_fp_relaxed, since that will get you onto the widest range of mobile GPUs and SIMD-capable CPUs.
rs_fp_relaxed enables flush-to-zero for denormals and round-to-zero operation. This affects the results when you do math on half, float and double types. You should also avoid double if you want to be accelerated by mobile GPUs and not take a speed hit, even on devices that natively support doubles.
I recommend checking out the Wikipedia page on floats: https://en.wikipedia.org/wiki/Single-precision_floating-point_format
The gist is that floats are stored in two parts, the exponent and the significand, similar to scientific notation such as 1.23 * 10^13. When the exponent field is all 0s, your number is denormal. With flush-to-zero, if your calculation produces a denormal result, the significand ends up being zero instead of the actual value. For float32 the denormal range runs from 1.1754942E-38 (bit pattern 0x007fffff) down to 1.4E-45 (0x00000001), plus the corresponding negative values.
Round-to-zero comes in when you do math on two floating point numbers: an implementation will not compute the extra digit of precision needed to know which way to round the last bit, so you can be off by 1 ulp from a round-to-even implementation. Generally 1 ulp is quite small, but the absolute difference depends on where your value lies on the real number line. For example, 1.0 is encoded as 0x3f800000; a 1 ulp error could give you 0x3f800001, which converts to 1.0000001.
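If you want to see those numbers yourself, here is a small plain-Java sketch (not RenderScript; the class name is arbitrary) that decodes the denormal range and a 1-ulp step around 1.0:

public final class FloatBitsDemo {
    public static void main(String[] args) {
        // Denormals: exponent bits all zero, so the value is carried by the significand alone.
        float largestDenormal  = Float.intBitsToFloat(0x007fffff); // ~1.1754942E-38
        float smallestDenormal = Float.intBitsToFloat(0x00000001); // ~1.4E-45
        System.out.println("denormal range: " + smallestDenormal + " .. " + largestDenormal);
        // Under rs_fp_relaxed's flush-to-zero, results in this range become 0.0 instead.

        // A 1-ulp step around 1.0 (0x3f800000 -> 0x3f800001).
        float one        = Float.intBitsToFloat(0x3f800000); // 1.0
        float onePlusUlp = Float.intBitsToFloat(0x3f800001); // 1.0000001
        System.out.println("1 ulp above 1.0: " + onePlusUlp
                + " (ulp size = " + Math.ulp(one) + ")");
    }
}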
Precision is basically what it sounds like: it determines how precisely things are computed and drawn on screen. In some cases floating point precision is insignificant on its own, or compared to other concerns like memory or performance. If you have a device with a small screen and little memory, you don't need double precision to draw a model.
I'm using SDL 2.0 with OpenGL 2.1 on Android (5.1 Lollipop on a Moto G).
I have essentially this code:
vec3 norm = normalize(var_normal);
vec3 sun = normalize(sun_position-var_position);
float diff = max(dot(norm,sun),0.);
in my fragment shader.
diff should always be a value between 0. and 1., right?
Unfortunately, due to the difficulty of debugging GLSL specifics, I'm not sure exactly what value diff ends up with, but it seems to be vastly blown out. If I set gl_FragColor = vec4(diff,diff,diff,1.); for debugging, I get pure white for 90% of the pixels; the only ones that aren't white are those with normals just shy of orthogonal to the camera.
Here's the interesting bit: when I run it on my computer (OS X El Capitan), it works as expected.
So, instead of maxing the dot product with 0., I clamp it between 0. and 1. Then everything starts to look more reasonable, but with a visibly small diffuse range.
Any ideas moving forward regarding the differences?
Shucks. Should have known.
My problem was floating point precision. My "sun_position" variable was set veeeerry far away. I just brought it closer and everything works fine.
I would have expected precision errors to result in a less precise direction (as in, the sun_position - var_position vector would 'snap' to the few values representable at such low precision at such a distance), but shouldn't normalize have brought that vector back down to unit length? So I'd expect any precision errors to result in a 'wrong' direction, not a blown-out value. (Clearly I'm wrong about this. I guess my understanding of IEEE floating point still needs some work...)
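To illustrate the 'swamping' part, here is a tiny plain-Java sketch with 32-bit floats (mediump on a phone GPU is only 16-bit, so the effect there is far worse, and a very distant sun_position can even overflow its range; the numbers here are just an assumption about what went wrong):

public final class DistantLightDemo {
    public static void main(String[] args) {
        // Two fragment positions roughly 1 unit apart, with the light very far away.
        float sunX = 1.0e8f;
        float fragA = 0.0f;
        float fragB = 1.0f;

        // ulp(1e8f) is 8, so subtracting values near 1.0 changes nothing at all:
        System.out.println(sunX - fragA);   // 1.0E8
        System.out.println(sunX - fragB);   // 1.0E8 -- the same direction for both fragments
        System.out.println(Math.ulp(sunX)); // 8.0
    }
}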
While working on my Android project, I discovered something strange.
I can display a map of the ocean with Android OpenGL ES 2D graphics.
Since the Z value is used only to determine the stacking order of the objects, I reduced the Z-axis separation between them to about 0.0001.
In the meantime, I also tried objects over 1000 times that size.
Then, depending on the zoom in / zoom out level, some objects started flickering.
Why does this problem occur?
Is it a device-specific problem that cannot be resolved?
Or is it a problem with Android OpenGL ES itself?
***More....
The photo below is a screenshot of the actual device's screen.
***This phenomenon occurs every time I zoom in / zoom out.
I assume what you are experiencing is z-fighting: http://en.wikipedia.org/wiki/Z-fighting
This happens because your objects are so close together that, for certain pixels, the z-buffer can't distinguish which surface is in front of the other.
You have three choices now:
1) Adjust your projection, specifically the zNear and zFar values. Read more here: http://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
2) Increase the distance between the objects.
3) Since you are drawing a 2D scene, you might use an orthographic projection. In that case it might be worth not using depth buffering at all and just drawing the objects from back to front (painter's algorithm, http://en.wikipedia.org/wiki/Painters_algorithm); see the sketch after this list.
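A rough sketch of option 3, assuming GLES 2.0 from Java with android.opengl.Matrix (the draw helpers are placeholders):

import android.opengl.GLES20;
import android.opengl.Matrix;

public final class PaintersAlgorithm2D {
    private final float[] projection = new float[16];

    public void onSurfaceChanged(int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        float aspect = (float) width / height;
        // Map a fixed-height 2-unit world onto the screen; the depth range barely matters.
        Matrix.orthoM(projection, 0, -aspect, aspect, -1f, 1f, -1f, 1f);
    }

    public void onDrawFrame() {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glDisable(GLES20.GL_DEPTH_TEST); // no z-buffer, so no z-fighting
        // Draw in back-to-front order instead of relying on tiny Z offsets:
        // drawOcean(projection);   // hypothetical draw helpers
        // drawIslands(projection);
        // drawShips(projection);
    }
}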
Drawing 2D graphics requires only two coordinates, and by default the Z coordinate is 0. Is it possible to use that Z coordinate to adjust the graphics' size? Let's say for larger screens I set Z to 0, but when the screen is small (ldpi) I set Z to, say, -5 units so the whole scene fits on the screen. Is that good practice? Is it even possible to do it like that?
To adjust your graphics to the screen size (and rotation) you should adjust the opengl viewport size.
I'm not sure what exactly you plan to do with the z-coordinate, but it doesn't look like a good way to me.
It looks like you plan to use the z coordinate to zoom in or out so that the scene fits the screen correctly. That is a valid approach; you can easily do it by "hacking" the projection matrix that way. The only drawback I really see is that you need to send one more coordinate per vertex down your pipeline. It would be much easier to just set a global scaling factor, stored either in the modelview-projection matrix or passed to the vertex shaders.
My guess is (and I mean no disrespect) that you know little about 2D rendering and came up with this idea. It's actually not that bad, it's a good first approach, but things are quite polished in this area. You should stick to the standard way of dealing with it unless you really know what you are doing.
The standard way is to use projection matrices (or cameras, at a higher level of abstraction). When using projections you define your "world coordinates". The projection maps your world to the GL viewport (usually the whole screen), so no matter the device screen size, you always show the same portion of the world. Note that you'll have to deal with stretching.
I don't know if I'm really answering your question. This is not really what you asked, but what I think you wanted to ask. You shouldn't bother with z-components if you use an orthographic projection (which is typical for 2D).
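A small sketch of that standard approach with android.opengl.Matrix (the 10-unit-tall world is an arbitrary example):

import android.opengl.Matrix;

public final class WorldProjection {
    // The world is always 10 units tall; its visible width follows the screen's aspect ratio.
    public static float[] forScreen(int widthPx, int heightPx) {
        float worldHalfHeight = 5f;
        float worldHalfWidth = worldHalfHeight * widthPx / (float) heightPx;
        float[] projection = new float[16];
        Matrix.orthoM(projection, 0,
                -worldHalfWidth, worldHalfWidth,
                -worldHalfHeight, worldHalfHeight,
                -1f, 1f);
        return projection; // Z stays 0 for every sprite; no per-vertex "zoom" needed
    }
}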
So you'd want to add a "fake" depth to your 2D app?
With an orthographic projection (used in most of the 2D rendering world), it would be completely useless.
With a perspective projection, it would surely lead to many subpixel glitches when texture minification occurs, or to blurring in the case of magnification.
You could resize your sprites, or better, create a set of pre-baked sprites of different sizes.
I'm currently using OpenGL on Android to draw fixed-width lines, which works great except that OpenGL ES on Android does not natively support anti-aliasing of such lines. I have done some research, but I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing. I have read this page on the subject but I'm struggling to understand how I could implement it.
First of all, I'm unsure about the entire concept of implementing FSAA here. The article states "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using an 800 x 480 resolution.)
Now, based on the information given in that article the 'jitter' coordinates should be relatively easy to compute. Based on my assumptions so far, here is what I have come up with (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g 2x, 4x and so on)
int jitterAmount = 2;
// don't know what these two are
int coordSceneDimensionX;
int coordSceneDimensionY;
// I assume screen size
int viewportX = 800;
int viewportY = 480;
float newX = (jitterAmount/coordSceneDimensionX)/viewportX;
float newY = (jitterAmount/coordSceneDimensionY)/viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA.
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with the use of textures. However, I don't know what the best way to go about implementing AA in this way would be and whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the article predates the time when GPUs were capable of performing antialiasing themselves. This jittered rendering to an accumulation buffer is not really state of the art for realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is request an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it on Android follows a similar path. AFAIK on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
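On Android the usual way to get that multisampled default framebuffer is to ask EGL for it when the GLSurfaceView is created. A sketch, assuming you use GLSurfaceView (the 4-sample request is just an example, and a real chooser should fall back when no such config exists):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;

public final class MsaaConfigChooser implements GLSurfaceView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 8,
                EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_SAMPLE_BUFFERS, 1, // ask for a multisampled config...
                EGL10.EGL_SAMPLES, 4,        // ...with 4 samples per pixel
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
        // A production chooser would retry without the sample attributes here instead
        // of returning null when no multisampled config is available.
        return numConfigs[0] > 0 ? configs[0] : null;
    }
}

Install it before calling setRenderer: glSurfaceView.setEGLConfigChooser(new MsaaConfigChooser());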
First of all, the article you refer to uses the accumulation buffer, whose existence in OpenGL ES I really doubt, but I might be wrong here. If the accumulation buffer really is supported in ES, then you at least have to explicitly request it when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also deprecated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisampled framebuffer during context creation or something similar.
Now back to the article. The basic idea is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different locations (at sub-pixel accuracy, in image space) and average these renderings together to get the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means that if you have a sample position of (0.25, 0.75), you move the whole scene a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. When you have done this for each sample, you average all these renderings together to get the final antialiased rendering.
The dimension of the object coordinate scene is basically the dimension of the screen (actually the near plane of the viewing volume) in object space, or more practically, the values you passed into glOrtho or glFrustum (or a similar function; with gluPerspective it is not that obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
The jitter amount is not the antialiasing factor, but the sub-pixel sample location. The antialiasing factor in this context is the number of samples and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you are trying to jitter only the line end points. You have to draw the whole scene multiple times using the jittered projection, not just this single line (it may work with a plain black background and appropriate blending, though).
You might also be able to achieve this without an accumulation buffer by using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), with n being the antialiasing factor/sample count). But keep in mind that you must render the whole scene like this, not just this single line.
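A sketch of that blending variant with the GLES 2.0 Java bindings and android.opengl.Matrix (the Scene callback and the 4-sample pattern are my own placeholders, not from the article):

import android.opengl.GLES20;
import android.opengl.Matrix;

public final class JitteredAA {
    // Sub-pixel sample offsets in [0,1), here a simple rotated-grid 4-sample pattern.
    private static final float[][] SAMPLES = {
            {0.375f, 0.125f}, {0.875f, 0.375f}, {0.125f, 0.625f}, {0.625f, 0.875f}
    };

    public interface Scene { void draw(float[] projection); }

    public static void drawAntialiased(float[] projection, int viewportW, int viewportH,
                                       Scene scene) {
        int n = SAMPLES.length;
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);
        GLES20.glBlendColor(1f / n, 1f / n, 1f / n, 1f / n); // each pass contributes 1/n

        float[] shift = new float[16];
        float[] jittered = new float[16];
        for (float[] s : SAMPLES) {
            // Convert the sub-pixel offset into a shift in normalized device coordinates
            // and apply it after the projection (jittered = shift * projection).
            Matrix.setIdentityM(shift, 0);
            shift[12] = 2f * s[0] / viewportW;
            shift[13] = 2f * s[1] / viewportH;
            Matrix.multiplyMM(jittered, 0, shift, 0, projection, 0);

            GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT); // keep color, reset depth per pass
            scene.draw(jittered); // must draw the whole scene, not just one line
        }
        GLES20.glDisable(GLES20.GL_BLEND);
    }
}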
But as said, this technique is completely outdated, and you should rather look for a way to enable MSAA on your ES platform.