I'm writing an app in OpenGL ES 1.1. I'm rendering a textured sphere around the viewer, i.e. the eye viewpoint is in the centre of the sphere. Because I don't need any lighting effects, there's no need at present to have surface normals for each vertex. However, I'd need those to turn on backface culling. Is there any benefit in turning on backface culling though? Because my eye is at the centre of the sphere, any faces that got culled (i.e. behind the eye) would be dealt with by the near z plane clipping anyway.
So would adding surface normals and turning on backface culling get me any performance benefit, in this situation?
First of all, surface normals have nothing to do with back-face culling, at least as far as OpenGL is concerned. Whether a face is front- or back-facing is determined by the winding order of its vertices when projected into screen space: by default (changeable with glFrontFace), a face is front-facing when its vertices are ordered counter-clockwise and back-facing when they are ordered clockwise.
So you don't need any normals; they are used for lighting computations and nothing else. How would the normal of a single vertex determine the orientation of the surrounding faces anyway? By averaging the vertex normals? And what if those point in opposite directions, or something stranger (the user can, after all, specify any normals they want)?
That said, you also don't need back-face culling for your sphere, since from the inside you always see the front side anyway. But it doesn't hurt to turn it on either, since you never see any back faces (and you can assume face culling to be essentially free). In that case, pay attention to the orientation of your faces: you may need to enable front-face culling (glCullFace(GL_FRONT)) instead of back-face culling when you are inside the sphere, since most objects are tessellated with their triangles facing outwards. Or you could just switch to glFrontFace(GL_CW).
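In OpenGL ES 1.1 on Android this is just a couple of state calls. A minimal sketch, assuming a GL10 renderer and a sphere whose triangles are wound to face outwards:
// In the Renderer, before drawing the sphere (GL10 gl):
gl.glEnable(GL10.GL_CULL_FACE);   // no normals needed for culling
gl.glCullFace(GL10.GL_FRONT);     // seen from inside, the visible triangles are back-facing,
                                  // so cull the outward-facing (front) side
// ... draw the sphere ...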
You certainly don't need normals in order to enable backface culling. The only thing that matters for culling is whether a polygon is frontfacing or backfacing. You can specify what front facing is by using the glFrontFace function, and specify exactly what needs to be culled by using the glCullFace function. No normals necessary.
That said, it would indeed not matter in this case, since your backfacing polygons would be clipped anyway.
I'm trying to build a 3D transparent globe on Android (with transparent regions in place of the water regions). The way I'm doing it is by creating a sphere model with libgdx and then texturing it with a .png of the earth that has transparent water regions. It works fine, except that after I disable cull face (to be able to see the back face of the sphere), I observe some triangles missing and the back face vanishes as I rotate the 3D model: Pic1, Pic2. If I rotate the sphere to certain other angles it appears to work fine and I can see the back face of the globe without problems.
I put here some relevant code:
render:
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
camController.update();
modelBatch.begin(cam);
modelBatch.render(instance, environment);
modelBatch.end();
I've tried all possible values for the DepthTestAttribute, but it seems there is no way to get rid of this very strange effect. Please give me some advice; many thanks in advance.
In the case of general geometry, one common approach is to sort the triangles, and render them in back to front order.
However, with the special properties of a sphere, I believe there is a simpler and more efficient approach that should work well. With a sphere, you always have exactly one layer of back-facing triangles and exactly one layer of front-facing triangles. So if you render the back-facing triangles first, followed by the front-facing triangles, you get an order where a triangle rendered earlier is never in front of a triangle rendered later, which is sufficient to get correct transparency rendering.
You already figured out how to render only the front-facing triangles: by enabling culling of the back faces. Rendering only the back-facing triangles is the same thing, except that you cull the front faces. So the code looks like this:
glBlendFunc(...);
glEnable(GL_BLEND);
glEnable(GL_CULL_FACE);
glClear(...);
glCullFace(GL_FRONT);
// draw sphere
glCullFace(GL_BACK);
// draw sphere
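If you are going through libgdx's ModelBatch as in the question, the same two passes can be expressed via the material's cull-face attribute. A rough sketch, assuming instance has a single material that already carries a BlendingAttribute for the transparency (IntAttribute and GL20 are the usual com.badlogic.gdx classes):
// Pass 1: draw only the back-facing triangles (cull the front faces).
instance.materials.get(0).set(IntAttribute.createCullFace(GL20.GL_FRONT));
modelBatch.begin(cam);
modelBatch.render(instance, environment);
modelBatch.flush(); // force pass 1 to be drawn before the attribute changes
// Pass 2: draw only the front-facing triangles (cull the back faces).
instance.materials.get(0).set(IntAttribute.createCullFace(GL20.GL_BACK));
modelBatch.render(instance, environment);
modelBatch.end();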
This looks like the typical conflict between transparency and depth testing.
A transparent pixel is drawn, but it still writes a value to the depth buffer, causing pixels behind it - which are supposed to be visible - to be discarded by the depth test.
The same happens here. Some elements in front are drawn first - writing to the depth buffer even though parts of them are translucent - so the elements behind them are not drawn.
A quick and dirty fix would be to discard pixels whose alpha value is below a certain threshold, by enabling alpha testing (or discarding them in your shader in later OpenGL versions). However, this will in most cases produce visible artifacts.
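In libgdx that quick fix can be expressed as material attributes. A small sketch, where the 0.5 threshold and the material index 0 are just guesses:
instance.materials.get(0).set(
        new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA),
        FloatAttribute.createAlphaTest(0.5f));  // discard pixels with alpha below 0.5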
A better way would be to sort the individual elements/triangles of the globe from back to front relative to the camera and draw them in that order.
I would also suggest reading the OpenGL wiki page on transparency sorting: https://www.opengl.org/wiki/Transparency_Sorting
I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics on Android with OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to implement in 3D modelling software like Blender (using the Cycles renderer); see the image below for the effect I'm looking for. Through some research I've learnt that it may be related to a blur or bloom effect implemented with shaders, but I'm not very sure, and I don't know how to implement them.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering. i.e. every time you specify a piece of geometry the calculation goes forwards from triangle to pixels, a function is applied to determine the colour for each of those pixels and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it. Each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering. It starts from each pixel and asks what geometry lies behind it. In doing so it explicitly has knowledge of all the geometry in the scene. So when it finds that the floor is behind a certain pixel it can then continue and ask "and which light sources can the floor see?" to establish the lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), there's only one source of light which is infinitely far away (i.e. it's in a fixed position as far as geometry is concerned; no need to know about the scene really).
So solutions are to invest the geometry with some knowledge of the whole scene. A common approach is to perform internal renderings of the scene from the point of view of the light source. That generates a depth buffer. By handing the light position and depth buffer off to every piece of geometry in the scene they can calculate whether they're visible to the light source. If so then they're illuminated by it. If not then they're not.
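On Android, libgdx's (experimental) shadow-mapping classes implement exactly this render-from-the-light's-point-of-view pass, if you happen to use that library. A rough sketch, with made-up numbers for the light's camera and reusing the usual cam/modelBatch/environment/instance objects:
// One-time setup:
DirectionalShadowLight shadowLight = new DirectionalShadowLight(
        1024, 1024,   // resolution of the depth map
        60f, 60f,     // width/height of the light's viewport, in world units
        0.1f, 100f);  // near/far planes of the light's camera
shadowLight.set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f); // colour and direction
environment.add(shadowLight);
environment.shadowMap = shadowLight;
ModelBatch shadowBatch = new ModelBatch(new DepthShaderProvider());

// Each frame: pass 1 renders depth as seen from the light...
shadowLight.begin(Vector3.Zero, cam.direction);
shadowBatch.begin(shadowLight.getCamera());
shadowBatch.render(instance);
shadowBatch.end();
shadowLight.end();

// ...pass 2 renders the scene normally, comparing against that depth map.
modelBatch.begin(cam);
modelBatch.render(instance, environment);
modelBatch.end();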
Another option is deferred rendering; you do a standard pass of your scene, populating at each pixel the depth, the surface colour, the surface normal, etc. So you get the full scene information broken down into pixel-by-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything there is. So you just need to pass that buffer around for pixels to be able to work out, approximately, which light sources they can and can't see. You can also have different parts of the screen consider only the lights they're close enough to, using a broad-phase 2D distance check, which saves time.
In either case we're actually talking about relatively advanced OpenGL stuff.
I was trying to render Rubik's cubes with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces of the cubes that are covered are textured with a black bmp picture, and the other faces, which can be seen, are textured with colorful pictures. I used cull face and depth test to avoid rendering useless faces. But look at what I got; it is pretty weird. The black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots:
With the benefit of screenshots it looks like the depth buffering simply isn't having any effect — would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient. You also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that's disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the one that lets you specify redSize, greenSize, etc. give you explicit control over whether a depth buffer is allocated and how large it is. So you'll want to check those.
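For example, on a GLSurfaceView the sized overload lets you ask for a depth buffer explicitly (16 bits here is just a typical choice, and the call has to happen before setRenderer):
// RGB 565, no alpha, 16-bit depth, no stencil.
glSurfaceView.setEGLConfigChooser(5, 6, 5, 0, 16, 0);
// Or, with the boolean variant: true means "give me a depth buffer".
glSurfaceView.setEGLConfigChooser(true);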
If you're creating your framebuffer explicitly then make sure you are attaching a depth renderbuffer.
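In OpenGL ES 2.0 terms that attachment looks roughly like this (width and height being the framebuffer's size; for ES 1.1 the equivalent calls live in the OES framebuffer object extension):
int[] depthRb = new int[1];
GLES20.glGenRenderbuffers(1, depthRb, 0);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depthRb[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
        GLES20.GL_DEPTH_COMPONENT16, width, height);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, depthRb[0]);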
I load an STL mesh, draw it correctly (using GL_TRIANGLES), rotate it nicely, change its colour; the lights stay in position while the mesh moves, and everything is great. Then I switch off the triangles and display just the vertices (using GL_POINTS). Now when I rotate (and even when I display the triangles and the vertices together) the points seem to fade out as I rotate, as if they are lit from only one side.
Does this ring any bells with anyone?
Thanks for any help.
Baz
It may just be a perception artefact. If you still have lighting on, the points are of course lit like the triangle vertices (depending on their normals), meaning they actually have an orientation, even if that is not intuitive for points. So they may get darker or brighter as you rotate them into or away from the light. It's just that this change seems more evident, because you don't have the other surface points to fill the gaps and compensate for the dimming. Try disabling lighting and they should keep their color when rotating.
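With fixed-function OpenGL ES 1.1 that check is a one-liner around the point pass; a sketch, assuming a GL10 context:
gl.glDisable(GL10.GL_LIGHTING);  // points get their plain vertex colour
// ... draw the vertices with GL_POINTS ...
gl.glEnable(GL10.GL_LIGHTING);   // restore lighting for the triangle pass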
Assume I have 3 cubes at random locations/orientations, and I want to detect whether any cube is overlapping (or colliding) with another cube. This overlap or collision could also happen as the cubes' locations/rotations change each frame. Please note that I am looking for an Android-based, OpenGL ES (1.0 or 1.1) based solution for this.
This isn't really an OpenGL problem - OpenGL just does rendering.
I don't know of any ready-made Android libraries for 3D collision detection, so you might just have to do the maths yourself. Efficient collision detection is generally the art of using quick, cheap tests to avoid more expensive analysis. For your problem, a good approach to detecting whether cube A intersects cube B would be to start with a quick rejection test (sketched in code after the two options below), either:
Compute the bounding spheres for A and B - if the distance between the two spheres' centers is greater than the sum of their radii, then A and B do not intersect
Compute the axis-aligned bounding boxes for A and B - if the bounds do not intersect (very easy to test), then neither do A and B
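A minimal sketch of the bounding-sphere rejection test in plain Java, assuming each cube is described by its centre and a bounding-sphere radius (for a cube with edge length s, that radius is s * sqrt(3) / 2):
// Returns true if the bounding spheres of A and B are far enough apart
// that the cubes cannot possibly intersect.
static boolean definitelySeparate(float[] centerA, float radiusA,
                                  float[] centerB, float radiusB) {
    float dx = centerA[0] - centerB[0];
    float dy = centerA[1] - centerB[1];
    float dz = centerA[2] - centerB[2];
    float distSq = dx * dx + dy * dy + dz * dz;
    float radii = radiusA + radiusB;
    // Compare squared distances to avoid the square root.
    return distSq > radii * radii;
}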
If the bounds test indicates a possible collision, it's time for some maths. There are two ways to go from here: testing for vertex inclusion and testing for edge/face intersection.
Vertex inclusion is testing the vertices of A to see if they lie within B: either rotate each vertex into B's frame of reference to test for inclusion (sketched at the end of this answer), or use the planes of B's faces directly in a frustum-culling style operation.
Edge/Face intersection is testing each of the edges of A for intersection with B's face triangles.
While the vertex-inclusion test is a bit cheaper than edge/face testing, it's possible for cubes to intersect without containing each other's vertices, so a negative result does not mean there is no intersection. Similarly, it's possible for cubes to intersect without any edge intersecting a face (if one lies entirely within the other). You'll have to do a little of both tests to catch every intersection. This can be avoided if you can make some assumptions about how the cubes move from frame to frame, e.g. if A and B were not touching last frame, it's unlikely that A is wholly within B now.
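For illustration, a hedged sketch of the vertex-inclusion half of that, using Android's Matrix helper; invB is assumed to be the inverse of B's model (rotation plus translation) matrix and halfSizeB half of B's edge length:
// Transform a world-space vertex of A into B's local frame and compare
// against B's half-extent on each axis.
static boolean vertexInsideCube(float[] vertex, float[] invB, float halfSizeB) {
    float[] local = new float[4];
    float[] v = { vertex[0], vertex[1], vertex[2], 1f };
    android.opengl.Matrix.multiplyMV(local, 0, invB, 0, v, 0);
    return Math.abs(local[0]) <= halfSizeB
        && Math.abs(local[1]) <= halfSizeB
        && Math.abs(local[2]) <= halfSizeB;
}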