Artifacts when rendering adjacent cubes with OpenGL ES? - android

I was trying to render a Rubik's cube with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces of the cubes that are covered are textured with a black BMP image, and the faces that can be seen are textured with colorful images. I use face culling and the depth test to avoid rendering useless faces. But look at what I got; it is pretty weird. The black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots:

With the benefit of screenshots it looks like the depth buffering simply isn't having any effect — would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient. You also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that's disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the one that lets you specify redSize, greenSize, etc. give you explicit control over whether there's a depth buffer and how big it is. So you'll want to check those.
If you're creating your framebuffer explicitly then make sure you are attaching a depth renderbuffer.
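For example, if you're using GLSurfaceView, a minimal way to request a depth buffer looks something like the sketch below (the class name and the 565 colour sizes are just illustrative; the key part is the non-zero depth size):

```java
import android.content.Context;
import android.opengl.GLSurfaceView;

public class DepthEnabledSurfaceView extends GLSurfaceView {
    public DepthEnabledSurfaceView(Context context, Renderer renderer) {
        super(context);
        // Request an RGB565 colour buffer plus a 16-bit depth buffer.
        // The fifth argument is the depth size; a depth size of 0 (or
        // setEGLConfigChooser(false)) makes the depth test a no-op.
        setEGLConfigChooser(5, 6, 5, 0, 16, 0);
        setRenderer(renderer); // your existing cube renderer
    }
}
```

The renderer still needs to enable GL_DEPTH_TEST and clear GL_DEPTH_BUFFER_BIT every frame for the depth buffer to have any effect.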


Dynamic Environment mapping from camera in Augmented Reality setting

I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy: I can use glCopyTexImage2D to copy that square area into my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection that is applied later, so that the two cancel each other out in the final render when I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step of putting the square X area on every side of the cubemap, just to see if glCopyTexImage2D can even be used for this. It can, but the results are not oriented correctly; some faces are "upside down" when I render the cubemap as a skybox. A similar question arises: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the "texture coordinates" to make the necessary adjustments, but that also does not seem easy since the lookup in the fragment shader with "textureCube" is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article and render it, textured with the camera frame, into another texture. Then use that texture as the cubemap.
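If it helps, here is a rough GLES20 sketch of that render-to-texture step, assuming the six cube map faces have already been allocated with glTexImage2D and that drawCameraMappedMesh() stands in for drawing the UV-mapped mesh textured with the current camera frame:

```java
import android.opengl.GLES20;

public final class CubeMapCapture {
    private static final int FACE_SIZE = 256; // illustrative face resolution

    /** Renders the camera-textured mesh into one cube map face via an FBO. */
    public static void renderFace(int cubeMapTexture, int faceTarget) {
        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);

        // Attach the requested cube map face as the colour target.
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, faceTarget, cubeMapTexture, 0);

        GLES20.glViewport(0, 0, FACE_SIZE, FACE_SIZE);

        // Draw the UV-mapped mesh textured with the current camera frame here;
        // the mesh's UVs decide which (possibly trapezoidal) region of the
        // frame ends up on this face. drawCameraMappedMesh() is hypothetical.
        // drawCameraMappedMesh(faceTarget);

        // Switch back to the default framebuffer for the normal AR render,
        // and free the temporary FBO.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glDeleteFramebuffers(1, fbo, 0);
    }
}
```

Doing it this way keeps the copy on the GPU and also sidesteps the rotation problem from the question, since the mesh's UVs control exactly which part of the camera frame lands where on each face.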

OpenGL ES - how to create a plane emitting light

I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics with Android OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to do in 3D modelling software like Blender (using the Cycles renderer); see the image below for the effect I'm looking for. Through some research I learnt that it may be related to blur or bloom effects done with shaders, but I'm not very sure, and I don't know how to implement them.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering. i.e. every time you specify a piece of geometry the calculation goes forwards from triangle to pixels, a function is applied to determine the colour for each of those pixels and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it. Each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering. It starts from each pixel and asks what geometry lies behind it. In doing that it explicitly has an idea of all the geometry in the scene. So when it spots that the floor is behind a certain position it can then continue and ask "and which light sources can the floor see?" to establish lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), there's only one source of light which is infinitely far away (i.e. it's in a fixed position as far as geometry is concerned; no need to know about the scene really).
So solutions are to invest the geometry with some knowledge of the whole scene. A common approach is to perform internal renderings of the scene from the point of view of the light source. That generates a depth buffer. By handing the light position and depth buffer off to every piece of geometry in the scene they can calculate whether they're visible to the light source. If so then they're illuminated by it. If not then they're not.
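As a very rough sketch of the first step of that shadow-mapping approach in GLES20 terms (it assumes the GL_OES_depth_texture extension is available; the class and method names are illustrative):

```java
import android.opengl.GLES20;

public final class ShadowMapPass {
    /** Creates a depth texture plus an FBO and returns the texture id.
     *  Assumes the GL_OES_depth_texture extension is supported. */
    public static int createShadowMap(int size, int[] fboOut) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        // Allocate a depth texture; null means "no initial data".
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_DEPTH_COMPONENT,
                size, size, 0, GLES20.GL_DEPTH_COMPONENT,
                GLES20.GL_UNSIGNED_SHORT, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

        // Attach the depth texture to an FBO so the light-view pass
        // writes its depth values straight into the texture.
        GLES20.glGenFramebuffers(1, fboOut, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboOut[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_TEXTURE_2D, tex[0], 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        return tex[0];
    }
}
```

You would then render the scene into this FBO from the light's point of view and pass the resulting texture plus the light's view-projection matrix to your normal shaders for the visibility comparison.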
Another option is deferred rendering; you do a standard pass of your scene, populating at each pixel the depth, the surface colour, the surface normal, etc. So you get the full scene information broken down into pixel-by-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything that there is. So you just need to pass that buffer around for pixels to be able to work out, approximately, which light sources they can and can't see. You can also have different parts of the screen only consider which lights they're close enough to by a broad-phase 2d distance check, which saves time.
In either case we're actually talking about relatively advanced OpenGL stuff.

Clipping object in openGL ES

I want to crop an object in my OpenGL ES application. It should be done in the following manner:
The left is the initial image, middle is the stencil buffer matrix, and right is the result.
From what I have read here, discard with stencil might have performance issues, and since the model that will be clipped is going to be rotated and translated, I honestly don't know whether the drawn model will end up clipped in the wrong places after those transformations. Will it?
So, I thought about the depth buffer.
Again, an example:
(This photo was taken from this question.)
Assume that the black square is movable, and might not be just a simple square, but a complex UIBezierPath.
I was wondering how to use the depth buffer so that everything drawn outside the square (or UIBezierPath) is clipped out, meaning the z values of the excluded pixels are pushed to some threshold value so that they won't be shown on screen.
So to summarise:
1) Will using the stencil buffer be as expensive as stated?
2) Is it possible to use the stencil buffer on a rotated and translated object so that it is always clipped correctly?
3) Using the depth buffer, is it possible to find out what is inside and what is outside the square (or UIBezierPath), and how? By masking it somehow?
4) Which is the better approach?
I know it's a lot to answer, but since the questions all relate to each other I thought it was better to ask them in a single question.
The stencil buffer is the way to go here. The discard answer you refer to is about using the discard function in fragment shaders, which is very expensive for tile-based deferred rendering GPUs (i.e. basically every mobile GPU).
Using the stencil buffer however is very cheap, as it is present on chip for each tile and does not interfere with deferred rendering.
To summarise:
1) No.
2) Yes; the stencil buffer operates in 2D over the whole viewport, on the transformed vertices, so it will clip the cube after its model transforms have been applied.
3) Yes, but needless to say this is complicated; it sounds somewhat similar to shadow volumes.
4) Use the stencil buffer (a rough sketch follows below).
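For what it's worth, a minimal GLES20-style sketch of that stencil approach might look like the following; the two Runnables stand in for your own draw calls, and the EGL surface is assumed to have been created with a stencil buffer:

```java
import android.opengl.GLES20;

public final class StencilClip {
    /** Draws drawModel clipped to the region covered by drawMask.
     *  Both Runnables are placeholders for the caller's own draw code. */
    public static void drawClipped(Runnable drawMask, Runnable drawModel) {
        GLES20.glEnable(GLES20.GL_STENCIL_TEST);
        GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);

        // Pass 1: write 1s into the stencil buffer where the mask shape is,
        // without touching the colour or depth buffers.
        GLES20.glColorMask(false, false, false, false);
        GLES20.glDepthMask(false);
        GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
        GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
        drawMask.run();

        // Pass 2: draw the (rotated/translated) model only where stencil == 1.
        GLES20.glColorMask(true, true, true, true);
        GLES20.glDepthMask(true);
        GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
        GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
        drawModel.run();

        GLES20.glDisable(GLES20.GL_STENCIL_TEST);
    }
}
```

Because the mask is written in screen space every frame, it doesn't matter how the model is rotated or translated; only the fragments that land inside the mask survive.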

Android: Creating blurred texture using blending in OpenGL 1.1

Has anyone had much success on Android with creating blurred textures using blending to blur a texture?
I'm thinking of the technique described here but the crux is to take a loaded texture and then apply a blur to it so that the bound texture itself is blurred.
"Inplace blurring" is something a CPU can do, but using a GPU, which generally does things in parallel, you must have another image buffer as render target.
Even with newer shaders, reads and writes from/to the same buffer can lead to corruption because they may be reordered. A related issue is that a Gaussian blur kernel, which can do the blurring in one pass, depends on neighbouring fragments that may already have been modified by the kernel applied at their own fragment coordinates.
If you don't have the 'framebuffer_object' extension available for rendering into renderbuffers or even textures (the latter additionally requires the 'render_texture' extension), you have to render into the back buffer as in the example and then either do glReadPixels() to get the image back for uploading it to the source texture, or do a fast and direct glCopyTexImage2D() (OpenGL ES 1.1 has this).
If the render target is too small, you can render multiple tiles.
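As an illustrative ES 1.1 sketch of that back-buffer-and-copy approach (drawTexturedQuadWithOffset() and OFFSETS are placeholders for however you draw the blended, offset passes):

```java
import javax.microedition.khronos.opengles.GL10;

public final class BlurInPlace {
    /** Renders several slightly offset, semi-transparent copies of the
     *  texture into the back buffer, then copies the result back into
     *  the same texture. size must not exceed the viewport. */
    public static void blur(GL10 gl, int textureId, int size) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);

        // Accumulate a few offset copies; the offsets and alpha values
        // define the blur kernel (purely illustrative here).
        // for (float[] offset : OFFSETS) drawTexturedQuadWithOffset(gl, offset, 0.25f);

        // Copy the blurred back-buffer region back into the source texture.
        gl.glCopyTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB,
                0, 0, size, size, 0);

        gl.glDisable(GL10.GL_BLEND);
    }
}
```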

Options for offscreen rendering on Android with OpenGL ES 1.0?

I'm working on implementing picking for an OpenGL game I'm writing for Android. It uses the "unique color" method: each touchable object is drawn in a solid color that is unique to it, and the touch handler then calls glReadPixels() at the location of the touch. I've got the coloring working, and glReadPixels() working, but I have been unable to separate the "color" rendering from the main, actual rendering, which complicates the use of glReadPixels().
Supposedly the trick to making this work is to render the second scene (for input) into an offscreen buffer, but this seems to be a bit problematic. I've investigated using OpenGL ES 1.1 FBOs to act as an offscreen buffer, but it seems my handset (Samsung Galaxy S Vibrant (2.2)) does not support FBOs. I'm at a loss as to how to correctly render this scene (and run glReadPixels() on it) without the user seeing it.
Any ideas how offscreen rendering of this sort can be done?
If FBOs are not supported, you can always resort to rendering into your normal back buffer.
Typical usage would be:
1) Clear the back buffer
2) Draw the "color-as-ID" objects
3) Call glReadPixels() at the touch location
4) Clear the back buffer again
5) Draw the scene normally
6) SwapBuffers
The second clear makes sure the picking colors never show up in the final image; a sketch of this follows below.
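A sketch of what that can look like inside a GLSurfaceView.Renderer (the abstract methods and the pick fields are placeholders for your own code; pickY is assumed to already be flipped to GL's bottom-left origin):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

public abstract class PickingRenderer implements GLSurfaceView.Renderer {
    // Set from the touch handler; y already flipped to GL's origin.
    public volatile boolean pickRequested;
    public volatile int pickX, pickY;

    /** Draw every touchable object in its unique, flat ID color. */
    protected abstract void drawPickingColors(GL10 gl);
    /** Draw the scene as it should actually appear. */
    protected abstract void drawScene(GL10 gl);
    /** Map the RGBA bytes read back from the picking pass to an object. */
    protected abstract void handlePickedColor(ByteBuffer rgba);

    @Override public void onDrawFrame(GL10 gl) {
        if (pickRequested) {
            // Pass 1: picking colors only; this never reaches the screen.
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            drawPickingColors(gl);

            // Read the touched pixel before the buffer is cleared again.
            ByteBuffer pixel = ByteBuffer.allocateDirect(4)
                    .order(ByteOrder.nativeOrder());
            gl.glReadPixels(pickX, pickY, 1, 1,
                    GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);
            handlePickedColor(pixel);
            pickRequested = false;
        }

        // Pass 2: the normal frame; only this one survives to the swap.
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        drawScene(gl);
    }
}
```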
