Options for offscreen rendering on Android with OpenGL ES 1.0?

I'm working on implementing picking for an OpenGL game I'm writing for Android. It uses the "unique color" method: each touchable object is drawn as a solid color that is unique to that object, and the touch handler then calls glReadPixels() at the location of the touch to identify what was hit. I've gotten the coloring working, and glReadPixels() working, but I have been unable to separate the "color" rendering from the main, actual rendering, which complicates the use of glReadPixels().
Supposedly the trick is to render the second scene (the picking pass) into an offscreen buffer, but this seems to be a bit problematic. I've investigated using OpenGL ES 1.1 FBOs to act as an offscreen buffer, but it seems my handset (Samsung Galaxy S Vibrant, Android 2.2) does not support FBOs. I'm at a loss for how to render this scene (and run glReadPixels() on it) without the user seeing it.
Any ideas how offscreen rendering of this sort can be done?
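For reference, the encoding I'm using is roughly this (a sketch; the helper names are made up):

// Each object gets a unique ID packed into 24 bits of RGB.
static float[] idToColor(int id) {
    return new float[] {
        ((id >> 16) & 0xFF) / 255f,
        ((id >> 8)  & 0xFF) / 255f,
        ( id        & 0xFF) / 255f,
        1f };
}

// glReadPixels(x, y, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel)
// fills a 4-byte buffer; this turns it back into the ID.
static int colorToId(java.nio.ByteBuffer pixel) {
    return ((pixel.get(0) & 0xFF) << 16)
         | ((pixel.get(1) & 0xFF) << 8)
         |  (pixel.get(2) & 0xFF);
}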

If FBOs are not supported, you can always fall back to rendering into your normal back buffer.
Typical usage would be:
1. Clear the back buffer.
2. Draw the "color-as-id" objects.
3. Call glReadPixels() at the touch location.
4. Clear the back buffer again.
5. Draw the scene normally.
6. Swap buffers.
The second clear makes sure the picking pass never shows up in the final image.
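A minimal sketch of that sequence in a GL10 renderer; drawPickingPass(), drawScene(), touchX/touchY and viewHeight are placeholder names, and GLSurfaceView is assumed to swap buffers automatically after onDrawFrame() returns:

public void onDrawFrame(GL10 gl) {
    // Pass 1: picking. Each object is drawn in its unique ID color.
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    drawPickingPass(gl);

    // Read the pixel under the touch (GL's origin is the bottom-left corner).
    java.nio.ByteBuffer pixel = java.nio.ByteBuffer.allocateDirect(4);
    gl.glReadPixels(touchX, viewHeight - touchY, 1, 1,
            GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);

    // Pass 2: the real frame. This clear wipes the picking colors before
    // the buffer swap, so the user never sees them.
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    drawScene(gl);
}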

Related

OpenGL ES - how to create a plane emitting light

I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics with Android OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to implement in 3D modelling software like Blender (using the Cycles renderer); see the image below for the effect I'm looking for. Through some research I've learnt that it may be related to a blur or bloom effect implemented with shaders, but I'm not sure, and I don't know how to implement either.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering. i.e. every time you specify a piece of geometry the calculation goes forwards from triangle to pixels, a function is applied to determine the colour for each of those pixels and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it. Each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering: it starts from each pixel and asks what geometry lies behind it. In doing so it explicitly has knowledge of all the geometry in the scene, so when it finds that the floor is behind a certain position it can go on to ask "and which light sources can the floor see?" to establish lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), and there's only one light source, infinitely far away (i.e. in a fixed position as far as the geometry is concerned, so no real knowledge of the scene is needed).
So the solutions invest the geometry with some knowledge of the whole scene. A common approach is to render the scene internally from the point of view of the light source, generating a depth buffer (a shadow map). By handing the light position and that depth buffer to every piece of geometry in the scene, each fragment can calculate whether it is visible to the light source: if so it is illuminated, if not it isn't.
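As a rough sketch of the depth pass on Android with GLES20, assuming the device supports depth textures (the OES_depth_texture extension, which varies by device) and with drawScene() as a hypothetical placeholder:

// A 512x512 depth texture holding the scene as seen from the light.
int[] depthTex = new int[1];
GLES20.glGenTextures(1, depthTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, depthTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_DEPTH_COMPONENT, 512, 512, 0,
        GLES20.GL_DEPTH_COMPONENT, GLES20.GL_UNSIGNED_INT, null);

// Attach it to an FBO and render from the light's point of view.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_TEXTURE_2D, depthTex[0], 0);
GLES20.glColorMask(false, false, false, false); // depth is all we need
drawScene(/* from the light's view */);
GLES20.glColorMask(true, true, true, true);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
// The main pass then samples depthTex to decide whether each fragment
// can see the light.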
Another option is deferred rendering: you do a standard pass of your scene, storing at each pixel the depth, the surface colour, the surface normal, etc. You end up with the full scene information broken down into per-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything there is, so pixels only need that buffer to work out, approximately, which light sources they can and can't see. You can also save time by having each region of the screen consider only the lights it is close enough to, via a broad-phase 2D distance check.
In either case we're actually talking about relatively advanced OpenGL stuff.

OpenGL: Create a "transparent window-like object"

I am developing an augmented reality app that should render a 3D model. So far so good. I am using Vuforia for AR and libgdx for graphics, everything is on Android, works like a charm...
The problem is that I need to create a "window-like" effect: I literally need to make the model look like a window you can look through and see behind it. That means I have some kind of wall object with a hole in it (the window), and through this hole you can see another 3D model behind the wall.
The problem is that I also need to render the video background, and this background is also behind the wall. I can't just turn off blending when rendering the wall, because that would corrupt the video image.
So I need to make the wall and everything directly behind it transparent, but not the video background.
Is such marvel even possible using only OpenGL?
I have been thinking about some combination of front-to-back and back-to-front rendering: render the background first, then render the wall but blend it only into the alpha channel (making the video visible only on pixels not covered by the wall), then render the actual content but blend it only into the visible pixels (those not behind the wall), and then "render" the wall once more, this time making everything behind it visible. Would such a thing work?
I can't just turn of blending when rendering the wall
What makes you think that? OpenGL is not a scene graph. It's a drawing API, and everything happens in the order, and in the way, that you call it.
So the order of operations would be:
1. Draw the video background with blending turned off.
2. Draw the objects between the video and the wall (turn blending on or off as needed).
3. Draw the wall with blending or alpha test enabled, so that you can create the window.
Is such marvel even possible using only OpenGL?
The key to understanding OpenGL is that you don't use it to set up a 3D world scene; you use it to draw a 2D picture of a 3D world (because that's what OpenGL actually does). In the end OpenGL is just a slightly smarter brush for painting on a flat canvas. Think about how you'd paint a picture on paper and how you'd mask off different parts, and then do that with OpenGL.
Update
Okay, now I see what you want to achieve. The wall is not really visible; it is a depth-dependent mask. That's easy enough to achieve: use alpha testing instead of blending to punch the window into the depth buffer. Or, instead of alpha testing, you could just draw 4 quads that form a window between them.
The trick is that you draw it into just the depth buffer, not into the color buffer.
glDepthMask(GL_TRUE);                                /* write depth */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* but no color */
draw_wall();
Blending will not work in this case, since even fully transparent fragments would still end up in the depth buffer; hence the alpha test. In fixed-function OpenGL that's glEnable(GL_ALPHA_TEST) and glAlphaFunc(…). On OpenGL ES 2, however, you have to implement it in the shader.
Say you've got a single-channel texture; in the fragment shader do
float opacity = texture2D(sampler, uv).r; // GLSL ES 1.00: texture2D(), not texture()
if (opacity < threshold) discard;
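Putting the whole frame together, a minimal sketch in GLES20-style Java; drawVideoBackground(), drawWall() and drawInnerScene() are hypothetical placeholders, and drawWall()'s shader is assumed to discard the window area as above:

GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);

// 1. Video background: color only, no depth writes.
GLES20.glDepthMask(false);
drawVideoBackground();
GLES20.glDepthMask(true);

// 2. Wall: depth only, no color writes. Where the shader discards
//    (the window), the depth buffer stays untouched.
GLES20.glColorMask(false, false, false, false);
drawWall();
GLES20.glColorMask(true, true, true, true);

// 3. The model behind the wall: an ordinary depth-tested draw, so it
//    survives only where the window let it through.
drawInnerScene();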

Artifacts when rendering adjacent cubes with OpenGL ES?

I was trying to render a Rubik's cube with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces of the cubes that are covered are textured with a black bitmap, and the faces that can be seen are textured with a colorful picture. I use face culling and the depth test to avoid rendering useless faces. But look at what I got; it is pretty weird. The black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots: (images not reproduced here)
With the benefit of screenshots it looks like the depth buffering simply isn't having any effect — would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient. You also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that's disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the version that lets you specify redSize, greenSize, etc. give you explicit control over the depth buffer size. So you'll want to check those.
If you're creating your framebuffer explicitly then make sure you are attaching a depth renderbuffer.
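For the common GLSurfaceView case, you could request a depth buffer explicitly before setting the renderer; a sketch, where glSurfaceView and renderer are placeholder names and the bit sizes are just one plausible choice:

// red, green, blue, alpha, depth, stencil bits:
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.setRenderer(renderer);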

Getting part of the (already rendered) screen as a texture

I'm making an Android OpenGL ES 2D app, and trying to use a part of my rendered screen as a texture for a billboard.
So far I've had partial success with glCopyTexSubImage2D; it only works on some phones.
Everywhere I read recommends using a framebuffer object to render to a texture, but I can't grasp how to use one, so if anyone can help me get this working I would thank them greatly.
If I use an FBO that is bound to a texture, is it possible to render just part of the screen? If not, isn't that a bit of overkill? (It's also much more work to map and move the texture, and the texture would have to be big enough for the part I need not to be blurry.)
I need a snapshot of something that should be rendered to the screen anyway; does that mean I have to render my scene twice every frame (once for the texture and once for the actual render)? Am I missing something here?
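For reference, the usual GLES20 render-to-texture setup looks roughly like this; texWidth and texHeight are placeholder values:

// Color texture that will receive the rendering.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texWidth, texHeight, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

// FBO with the texture as its color attachment.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

// While the FBO is bound, all drawing lands in the texture; a glViewport()
// call restricted to a sub-rectangle renders into just part of it.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the screen

That said, for grabbing a region that is being drawn to the screen anyway, glCopyTexSubImage2D after the draw remains the cheaper route when it works.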

Specular over texture in OpenGL ES?

I'm trying to get specular highlights over objects that are texture mapped. As far as I know, the only direct way to make OpenGL compute the specular term separately from the texture color (so the highlight can be, for example, plain white) is the call glLightModelf(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR), but that is not supported in OpenGL ES.
So, how can I do this? Do I have to use another texture for the specular highlight, or is there an easier way?
Thank you!
P.S. I'm using OpenGL ES 1.x
One workaround is to render in two passes: the first pass renders the texture with ambient and diffuse lighting; the second pass renders the specular highlights on top of that (with texturing disabled).
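A minimal sketch of the two passes with the GL10 bindings; drawObject() and the material arrays are hypothetical, and GL_EQUAL makes the second pass land exactly on the pixels the first pass drew:

float[] black = { 0f, 0f, 0f, 1f };
float[] white = { 1f, 1f, 1f, 1f };

// Pass 1: texture modulated by ambient + diffuse, specular forced to black.
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, black, 0);
drawObject(gl);

// Pass 2: untextured, additive specular only.
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, black, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, black, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, white, 0);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE); // add highlights on top
gl.glDepthFunc(GL10.GL_EQUAL);            // only where pass 1 already drew
drawObject(gl);
gl.glDepthFunc(GL10.GL_LEQUAL);           // restore whatever you use normally
gl.glDisable(GL10.GL_BLEND);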
Thanks for the great suggestions, they saved my day. The only problem was flicker. I first thought there was a problem with the depth buffer and limited depth resolution, but that was not the case; I had to use gl.glDepthFunc(GL10.GL_ALWAYS); to get perfect blending. However, with complex primitives you will be able to see through the object.
After playing with it for another week, I figured out that one simply has to disable the depth test when blending and take care of the order in which objects are rendered in each pass: typically you want to finish all rendering passes for a far object before drawing a closer one.
That takes care of the flicker problem completely.
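In other words, something along these lines, where GameObject, objectsSortedFarToNear and drawBothPasses() are hypothetical names:

gl.glDisable(GL10.GL_DEPTH_TEST);
// Far-to-near ordering stands in for the depth test during blending.
for (GameObject o : objectsSortedFarToNear) {
    o.drawBothPasses(gl);
}
gl.glEnable(GL10.GL_DEPTH_TEST);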
