Missing triangles on 3D sphere with LibGDX - Android

I'm trying to build a 3D transparent globe on Android (with transparent regions in place of the water regions). The way I'm doing it is by creating a sphere model with LibGDX and then filling it with a .png texture of the earth that has transparent water regions. It works fine, except that after I disable cull face (to be able to see the back face of the sphere), I observe some triangles missing, and the back face vanishes as I rotate the 3D model: Pic1, Pic2. If I rotate the sphere to some other angles it appears to work fine, and I can see the back face of the globe without problems.
I put here some relevant code:
render:
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
camController.update();
modelBatch.begin(cam);
modelBatch.render(instance, environment);
modelBatch.end();
I've tried all possible values for the DepthTestAttribute, but it seems there is no way to get rid of this very strange effect. Please give me some advice; many thanks in advance.

In the case of general geometry, one common approach is to sort the triangles, and render them in back to front order.
However, with the special properties of a sphere, I believe there is a simpler and more efficient approach that should work well. With a sphere, you always have exactly one layer of back-facing triangles and exactly one layer of front-facing triangles. So if you render the back-facing triangles first, followed by the front-facing triangles, you get an order where a triangle rendered earlier is never in front of a triangle rendered later, which is sufficient to get correct transparency rendering.
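The two-layer ordering argument above can be sketched without a GL context. For a sphere centered at the origin, the outward normal of a triangle is approximately the direction of its centroid, so one dot product decides which layer the triangle belongs to (a minimal sketch; the class and method names are illustrative, not LibGDX API):

```java
public class SphereFacing {
    // For a sphere centered at the origin, a triangle's outward normal is
    // approximately the direction of its centroid. The triangle is
    // back-facing for a viewer at `eye` when that normal points away from
    // the viewer, i.e. dot(normal, eye - centroid) < 0.
    static boolean isBackFacing(float[] centroid, float[] eye) {
        float dot = 0f;
        for (int i = 0; i < 3; i++) {
            dot += centroid[i] * (eye[i] - centroid[i]);
        }
        return dot < 0f;
    }
}
```

Rendering every triangle for which `isBackFacing` is true first, and the rest second, reproduces exactly the `glCullFace(GL_FRONT)` / `glCullFace(GL_BACK)` two-pass order described here.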
You already figured out how to render only the front-facing triangles: by enabling culling of the back faces. Rendering only the back-facing triangles is the same thing, except that you cull the front faces. So the code looks like this:
glBlendFunc(...);
glEnable(GL_BLEND);
glEnable(GL_CULL_FACE);
glClear(...);
glCullFace(GL_FRONT);
// draw sphere
glCullFace(GL_BACK);
// draw sphere

This looks like the typical conflict between transparency and depth testing.
A transparent pixel is drawn, but it still writes a value to the depth buffer, causing pixels behind it - which are supposed to be visible - to be discarded due to a failed depth test.
The same happens here. Some elements in front are drawn first - writing to the depth buffer even though parts of them are translucent - so elements behind them are not drawn.
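This failure mode can be reproduced with a toy single-pixel depth buffer (a sketch for illustration only; the class and names are made up):

```java
public class DepthSim {
    // Simulates one pixel under a GL_LESS depth test.
    float depth = Float.MAX_VALUE;
    java.util.List<String> visible = new java.util.ArrayList<>();

    // Draw one fragment at depth z; `writeDepth` mimics glDepthMask.
    // Returns false if the fragment is discarded by the depth test.
    boolean draw(String name, float z, boolean writeDepth) {
        if (z >= depth) return false;   // depth test failed: discarded
        if (writeDepth) depth = z;      // depth write enabled
        visible.add(name);
        return true;
    }
}
```

Drawing a transparent surface at z = 0.3 with depth writes on, then an opaque wall at z = 0.7 behind it, discards the wall even though it should show through; disabling depth writes for the transparent draw (or drawing the opaque geometry first) keeps it.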
A quick and dirty fix would be to discard pixels with an alpha value below a certain threshold by enabling alpha testing (or discarding them in your shader in later OpenGL versions). However, this will in most cases result in visible artifacts.
A better way would be to sort the individual elements/triangles of the globe from back to front relative to the camera and draw them in that order.
I would also suggest you read the OpenGL wiki on https://www.opengl.org/wiki/Transparency_Sorting
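A minimal version of that back-to-front sort, with centroids standing in for whole triangles (a sketch; the names are illustrative):

```java
public class PainterSort {
    // Sort triangle centroids back to front: farthest from the camera
    // first, so each triangle is blended over everything behind it.
    static void sortBackToFront(float[][] centroids, float[] cam) {
        java.util.Arrays.sort(centroids,
                (a, b) -> Float.compare(dist2(b, cam), dist2(a, cam)));
    }

    // Squared distance from the camera (the square root is not needed
    // for ordering).
    static float dist2(float[] p, float[] cam) {
        float d = 0f;
        for (int i = 0; i < 3; i++) {
            float t = p[i] - cam[i];
            d += t * t;
        }
        return d;
    }
}
```

In practice the sort must be redone whenever the camera moves relative to the model, which is why the cull-face two-pass trick is attractive for convex shapes like a sphere.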

Related

OpenGL: Create a "transparent window-like object"

I am developing an augmented reality app that should render a 3D model. So far so good. I am using Vuforia for AR and libgdx for graphics, everything is on Android, and it works like a charm...
The problem is that I need to create a "window-like" effect. I literally need to make the model look like a window you can look through and see behind. That means I have some kind of wall object which has a hole in it (a window). Through this hole, you can see another 3D model behind the wall.
The problem is, I also need to render the video background. And this background is also behind the wall. I can't just turn off blending when rendering the wall, because that would corrupt the video image.
So I need to make the wall and everything directly behind it transparent, but not the video background.
Is such marvel even possible using only OpenGL?
I have been thinking about some combination of front-to-back and back-to-front rendering: render the background first, then render the wall but blend it only into the alpha channel (making video visible only on pixels that are not covered by the wall), then render the actual content but blend it only into the visible pixels (those not behind the wall), and then "render" the wall once more, but this time making everything behind it visible. Would such a thing work?
I can't just turn off blending when rendering the wall
What makes you think that? OpenGL is not a scene graph. It's a drawing API, and everything happens in the order in which you call it.
So the order of operations would be:
Draw the video background with blending turned off.
Then draw the objects between the video and the wall (turn blending on or off as needed).
Draw the wall with blending or alpha test enabled, so that you can create the window.
Is such marvel even possible using only OpenGL?
The key to understanding OpenGL is that you don't use it to set up a 3D world scene, but instead use it to draw a 2D picture of a 3D world (because that's what OpenGL actually does). In the end, OpenGL is just a somewhat smarter brush for drawing onto a flat canvas. Think about how you'd paint a picture on paper and how you'd mask different parts, and then do that with OpenGL.
Update
Okay, now I see what you want to achieve. The wall is not really visible, but acts as a depth-dependent mask. Easy enough to achieve: use alpha testing instead of blending to produce the window in the depth buffer. Or, instead of alpha testing, you could just draw 4 quads which form a window between them.
The trick is, that you draw it into just the depth buffer, but not into the color buffer.
glDepthMask(GL_TRUE);                                 // write to the depth buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // but not to the color buffer
draw_wall();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // restore color writes afterwards
Blending will not work in this case, since even fully transparent fragments would end up in the depth buffer - hence the alpha test. In fixed-function OpenGL use glEnable(GL_ALPHA_TEST) and glAlphaFunc(…). On OpenGL ES 2, however, you have to implement it in a shader.
Say you've got a single-channel texture; in the fragment shader do
float opacity = texture2D(sampler, uv).r;
if (opacity < threshold) discard;

Transparent objects in OpenGl ES 2.0

So I've been playing around with OpenGL ES 2.0 on Android but now got to a problem I haven't been able to solve. Apologies in advance, it appears that I'm not allowed to post more that two links (yet), so I put my three images in a Photobucket album here.
I'm trying to create a 3D environment that is enclosed by transparent areas ("colored glass"). To see if it works I also put an opaque cube inside. I enabled the following capabilities:
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
Now the picture looks like this (screenshot 1). Not bad, but not exactly how I wanted it: A (lower) wall at the back as well as the wall on the right should be visible because the wall I'm looking through is transparent.
Then I found that and tried using GLES20.glDepthMask(true); before drawing the opaque and GLES20.glDepthMask(false); before drawing the transparent objects, as well as disabling blending while drawing the opaque objects.
The result (screenshot 2) looks quite messed up. But then I had another idea, not to turn off writing to the depth buffer but to turn off GLES20.DEPTH_TEST altogether, while drawing the transparent objects.
That (screenshot 3) got me closest to the picture I'm looking for. You can finally see the back wall as well as the right side wall, but because depth testing is disabled when drawing the transparent objects, the cube is partially covered by the back wall, which it shouldn't be.
Does anyone know how to get the effect I'm looking for?
I think I solved it. By that I mean it works in my case, but I can't tell whether that is just a coincidence...
I enable depth testing and blending as usual. Then, when drawing, I draw the opaque shapes first and the transparent shapes second, as before. But while drawing the transparent shapes, I turn GLES20.glDepthMask(..) off so they don't write to the depth buffer, and thus all transparent shapes that are not covered by opaque shapes get drawn. I tried something similar previously (picture 2) and it completely messed things up, but now I do it in reverse - disabling the depth mask for the transparent shapes, not the opaque ones.

Artifacts when rendering adjacent cubes with OpenGL ES?

I was trying to render Rubik's cubes with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces of the cubes that are covered are textured with a black bmp picture, and the faces that can be seen are textured with colorful pictures. I used cull face and depth test to avoid rendering useless faces. But look at what I got - it is pretty weird. The black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots:
With the benefit of screenshots it looks like the depth buffering simply isn't having any effect — would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient. You also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that's disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the one that lets you specify redSize, greenSize, etc. give you explicit control over the depth buffer size. So you'll want to check those.
If you're creating your framebuffer explicitly then make sure you are attaching a depth renderbuffer.
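For reference, a minimal configuration sketch of the GLSurfaceView setup described above, assuming the six-argument overload of setEGLConfigChooser (`MyRenderer` is a placeholder for your own renderer class):

```java
// In the Activity, before setRenderer(): request RGBA8888 plus a
// 16-bit depth buffer, so GL_DEPTH_TEST has a buffer to test against.
GLSurfaceView view = new GLSurfaceView(this);
view.setEGLContextClientVersion(2);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);  // r, g, b, a, depth, stencil
view.setRenderer(new MyRenderer());
```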

Seamlessly layering transparent sprites in OpenGL ES

I am working on an Android app based on the LibGDX framework (though I don't think that should affect this problem too much), and I am having trouble finding a way to get the results I want when drawing with transparent sprites. The problem is that the sprites visibly layer on top of each other where they overlap, similar to what is displayed in this image:
This is pretty unsightly for some of what I want to do, and even completely breaks other parts. What I would like them to do is merge together seamlessly, like so:
The only success I have had thus far is to draw the entire sequence of sprites onto a separate texture at full opacity, and then draw that texture back with the desired opacity. I had this working moderately well, and I could likely make it work for most of what I need, but the big problem is that these things are drawn onto the screen dynamically, and the process of modifying a fairly large texture and sending it back is pretty taxing on mobile devices and causes an unacceptable drop in performance.
I've spent a good chunk of time looking for more ideal solutions, including experimenting with blend modes and coming up with quirky formulas that balanced out alpha and color values in ways to even things out, but nothing was particularly successful. My guess is that the only viable route for this is the previously mentioned way of creating a texture and applying the alpha difference to that, but I am unsure of the best way to make that work with lower powered mobile devices.
There might be a few other ways to do this. The most straightforward would be to attach a stencil buffer: draw the circles into the stencil first, and then draw a full-screen rect with the desired color+alpha using the stencil test. This should be much faster than an FBO with a separate texture.
Another thing that might work is drawing those circles first with blending disabled and then drawing your whole scene over them with an inverted blend func, but note this might be impossible if other elements also need blending.
Third, instead of using the stencil you could use the alpha channel of your render buffer. Use a color mask to draw only to alpha and draw the circles, then re-enable RGB on the color mask and draw the full-screen rect with the appropriate blend func. Note that if previous shapes used blending, you will need to clear the alpha to 1.0 before doing this (color mask set to alpha only, blending disabled, draw a full-screen rect whose color has alpha 1.0).

OpenGL ES 1.1: Culling versus near clip plane -- performance

I'm writing an app in OpenGL ES 1.1. I'm rendering a textured sphere around the viewer, i.e. the eye viewpoint is in the centre of the sphere. Because I don't need any lighting effects, there's no need at present to have surface normals for each vertex. However, I'd need those to turn on backface culling. Is there any benefit in turning on backface culling though? Because my eye is at the centre of the sphere, any faces that got culled (i.e. behind the eye) would be dealt with by the near z plane clipping anyway.
So would adding surface normals and turning on backface culling get me any performance benefit, in this situation?
First of all, surface normals don't have anything to do with back-face culling, at least for OpenGL's part. The front or back-sidedness of a face is determined by the ordering of its vertices when projected into screen space, which by default (which can be changed using glFrontFace) is front-facing when the vertices are ordered counter-clockwise and back-facing when ordered clockwise.
So you don't need any normals. These are only used for lighting computations and nothing else. How would the normal of a single vertex even influence the orientation of the surrounding faces? Take the average of the vertex normals? What if these are completely opposite, or something weirder (since the user can specify any normals he wants)?
That said, you also don't need back-face culling for your sphere, since from the inside you always see the front side anyway. But it also doesn't hurt to turn it on, since you also don't see any back faces (and you can assume face culling to be essentially free). In this case, though, pay attention to the orientation of your faces: it may be that you need to enable front-face culling (glCullFace(GL_FRONT)) instead of back-face culling when you are inside the sphere, since most objects are tessellated with their triangles facing outward. Or you could just switch to glFrontFace(GL_CW).
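The winding-order test that OpenGL applies in screen space can be sketched as a signed-area computation on the projected vertices (illustrative names, not GL API; 2D points are {x, y}):

```java
public class Winding {
    // Twice the signed area of a projected triangle. In a y-up screen
    // space with the default glFrontFace(GL_CCW), a positive value means
    // the vertices appear counter-clockwise, i.e. the face is front-facing.
    static float signedArea2(float[] a, float[] b, float[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]);
    }

    static boolean isFrontFacingCCW(float[] a, float[] b, float[] c) {
        return signedArea2(a, b, c) > 0f;
    }
}
```

This is why no normals are needed: swapping any two vertices flips the sign of the area and therefore flips the facing, exactly as reordering a triangle's indices does in OpenGL.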
You certainly don't need normals in order to enable backface culling. The only thing that matters for culling is whether a polygon is frontfacing or backfacing. You can specify what front facing is by using the glFrontFace function, and specify exactly what needs to be culled by using the glCullFace function. No normals necessary.
That said, it would indeed not matter in this case, since your backfacing polygons would be clipped anyway.