OpenGL: Create a "transparent window-like object" - android

I am developing an augmented reality app that should render a 3D model. So far so good. I am using Vuforia for AR and libgdx for graphics, everything is on Android, and it works like a charm...
The problem is that I need to create a "window-like" effect. I literally need to make the model look like a window you can look through and see behind it. That means I have some kind of wall object with a hole in it (a window). Through this hole, you can see another 3D model behind the wall.
The problem is, I also need to render the video background, and this background is also behind the wall. I can't just turn off blending when rendering the wall, because that would corrupt the video image.
So I need to make the wall and everything directly behind it transparent, but not the video background.
Is such marvel even possible using only OpenGL?
I have been thinking about some combination of front-to-back and back-to-front rendering: render the background first, then render the wall but blend it only into the alpha channel (making the video visible only on pixels that are not covered by the wall), then render the actual content but blend it only into the visible pixels (those not behind the wall), and then "render" the wall once more, this time making everything behind it visible. Would such a thing work?

I can't just turn off blending when rendering the wall
What makes you think that? OpenGL is not a scene graph. It's a drawing API, and everything happens in the order in which you call it.
So the order of operations would be (see the sketch after this list):
Draw the video background with blending turned off.
Then draw the objects between the video and the wall (turn blending on or off as needed).
Draw the wall with blending or alpha test enabled, so that you can create the window.
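A minimal sketch of that order in GLES20/Java; the draw*() calls are placeholders standing in for your own rendering code:
GLES20.glDisable(GLES20.GL_BLEND);                 // 1. video background, no blending needed
drawVideoBackground();
GLES20.glEnable(GLES20.GL_BLEND);                  // 2. objects between video and wall,
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA,            //    blending on or off per object as needed
        GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawObjectsBehindWall();
drawWall();                                        // 3. the wall last, with blending or alpha test
                                                   //    so the window hole stays open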
Is such marvel even possible using only OpenGL?
The key to understanding OpenGL is that you don't use it to set up a 3D world scene, but instead use it to draw a 2D picture of a 3D world (because that's what OpenGL actually does). In the end OpenGL is just a slightly smarter brush for drawing onto a flat canvas. Think about how you'd paint a picture on paper and how you'd mask off different parts, and then do that with OpenGL.
Update
Okay, now I see what you want to achieve. The wall is not really visible but acts as a depth-dependent mask. Easy enough to achieve: use alpha testing instead of blending to produce the window in the depth buffer. Or, instead of alpha testing, you could just draw 4 quads which form a window between them.
The trick is, that you draw it into just the depth buffer, but not into the color buffer.
glDepthMask(1);            /* the wall still writes its depth */
glColorMask(0, 0, 0, 0);   /* but no color, so whatever is already on screen stays visible */
draw_wall();
Blending will not work in this case, since even fully transparent fragments will end up in the depth buffer. Hence the alpha test. In fixed-function OpenGL that's glEnable(GL_ALPHA_TEST) and glAlphaFunc(…). On OpenGL ES 2, however, you have to implement it in a shader.
Say you've got a single-channel texture; in the fragment shader do:
float opacity = texture2D(sampler, uv).r;  // texture2D in GLSL ES 1.00 (OpenGL ES 2)
if (opacity < threshold) discard;          // punch the hole for the window
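Putting the two pieces together, here is a GLES20/Java sketch of the depth-only mask pass described above; drawWall() and drawSceneBehindWindow() are placeholders for your own draw calls:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthMask(true);                          // the wall writes depth...
GLES20.glColorMask(false, false, false, false);    // ...but no color, so the video underneath stays visible
drawWall();                                        // its fragment shader discards the window hole
GLES20.glColorMask(true, true, true, true);        // re-enable color writes
drawSceneBehindWindow();                           // passes the depth test only inside the hole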

Related

Rendering virtual objects transparent in augmented reality

I want to place virtual objects in my augmented reality scene which are rendered transparent.
For the sake of simplicity, let's take Google's SimpleAugmentedReality scene. When you take the scene as it comes from the example project, an earth and a moon are placed in front of you, rendered opaque with a texture.
When I change the material's "Rendering Mode" to "Transparent" I expect them to become transparent, but nothing changes.
I also tried custom shaders in which I rendered just a cube and set the alpha value below 1.0. It is also drawn opaque in the AR scene.
Am I missing some configuration in the Tango Camera prefab?
This is not supported in the TangoSDK right now, but I think it's a very reasonable request.
The reason it's not working is that the TangoSDK injects the background camera render command into Unity's command buffer and renders it with glDepthMask(GL_FALSE);
this causes alpha blending not to take in the color from the background camera render.
Actually it works now:
I was not familiar with shaders in Unity, so I had forgotten the Tags{"Queue" = "Transparent"} tag within the SubShader. The Blend SrcAlpha OneMinusSrcAlpha directive was also not set within the pass.
Now my objects are rendered transparent "on top" of the RGB camera video frame depending on the alpha value defined in the shader.

OpenGL ES - how to create a plane emitting light

I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics with Android OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to do in 3D modelling software like Blender (using the Cycles renderer); see the image below for the effect I'm looking for. Through some research I learnt that it may be related to a blur or bloom effect done with shaders, but I'm not very sure, and I don't know how to implement it.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering, i.e. every time you specify a piece of geometry the calculation goes forwards from triangle to pixels: a function is applied to determine the colour of each of those pixels, and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it; each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering. It starts from each pixel and asks what geometry lies behind it. In doing that it explicitly has an idea of all the geometry in the scene. So when it spots that the floor is behind a certain position it can continue and ask "and which light sources can the floor see?" to establish lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), there's only one source of light which is infinitely far away (i.e. it's in a fixed position as far as geometry is concerned; no need to know about the scene really).
So solutions are to invest the geometry with some knowledge of the whole scene. A common approach is to perform internal renderings of the scene from the point of view of the light source. That generates a depth buffer. By handing the light position and depth buffer off to every piece of geometry in the scene they can calculate whether they're visible to the light source. If so then they're illuminated by it. If not then they're not.
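As a rough illustration of that light-view depth pass, here is a hedged GLES20/Java sketch; it assumes the OES_depth_texture extension is available, and the size, matrices and draw calls are placeholders rather than a definitive implementation:
final int shadowMapSize = 1024;
int[] fbo = new int[1];
int[] depthTex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, depthTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, depthTex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_DEPTH_COMPONENT,
        shadowMapSize, shadowMapSize, 0,
        GLES20.GL_DEPTH_COMPONENT, GLES20.GL_UNSIGNED_INT, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_TEXTURE_2D, depthTex[0], 0);
// Render the scene here using the light's view/projection matrices. In the main pass,
// bind depthTex and compare depths to decide whether each fragment can see the light.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);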
Another option is deferred rendering; you do a standard pass of your scene, populating at each pixel the depth, the surface colour, the surface normal, etc. So you get the full scene information broken down into pixel-by-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything that there is. So you just need to pass that buffer around for pixels to be able to work out, approximately, which light sources they can and can't see. You can also have different parts of the screen only consider which lights they're close enough to by a broad-phase 2d distance check, which saves time.
In either case we're actually talking about relatively advanced OpenGL stuff.

Transparent objects in OpenGL ES 2.0

So I've been playing around with OpenGL ES 2.0 on Android but have now got to a problem I haven't been able to solve. Apologies in advance: it appears that I'm not allowed to post more than two links (yet), so I put my three images in a Photobucket album here.
I'm trying to create a 3D environment that is enclosed by transparent areas ("colored glass"). To see if it works I also put an opaque cube inside. I enabled the following capabilities:
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
Now the picture looks like this (screenshot 1). Not bad, but not exactly how I wanted it: A (lower) wall at the back as well as the wall on the right should be visible because the wall I'm looking through is transparent.
Then I found a suggestion and tried using GLES20.glDepthMask(true); before drawing the opaque objects and GLES20.glDepthMask(false); before drawing the transparent objects, as well as disabling blending while drawing the opaque objects.
The result (screenshot 2) looks quite messed up. But then I had another idea: not to turn off writing to the depth buffer, but to turn off GLES20.GL_DEPTH_TEST altogether while drawing the transparent objects.
That (screenshot 3) got me closest to the picture I'm looking for. You can finally see the back wall as well as the right side wall, but because depth testing is disabled while the transparent walls are drawn, the cube is partially covered by the back wall, which it shouldn't be.
Does anyone know how to get the effect I'm looking for?
I think I solved it. By that I mean that it works in my case, but I can't tell whether that is just a coincidence...
I enable depth testing and blending as usual. Then, when drawing, I draw the opaque shapes first and the transparent shapes second, like before. But while drawing the transparent shapes I turn GLES20.glDepthMask(..) off so as not to write to the depth buffer, and thus draw all transparent shapes that are not covered by opaque shapes. I did that previously (picture 2) and it completely messed things up, but now I do it the other way around: disabling the depth mask for the transparent shapes, not the opaque ones.
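A GLES20/Java sketch of that order, with drawOpaqueShapes() and drawTransparentShapes() standing in for the actual draw calls:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glDepthMask(true);       // opaque geometry writes depth as usual
drawOpaqueShapes();
GLES20.glDepthMask(false);      // transparent geometry is depth-tested but does not write depth
drawTransparentShapes();        // ideally sorted roughly back-to-front
GLES20.glDepthMask(true);       // restore before clearing the depth buffer next frame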

Seamlessly layering transparent sprites in OpenGL ES

I am working on an Android app, based on the LibGDX framework (though I don't think that should affect this problem too much), and I am having trouble finding a way to get the results I want when drawing with transparent sprites. The problem is that the sprites visibly layer on top of each other where they overlap, similar to what is displayed in this image:
This is pretty unsightly for some of what I want to do, and even completely breaks other parts. What I would like them to do is merge together seamlessly, like so:
The only success I have had thus far is to draw the entire sequence of sprites onto a separate texture at full opacity, and then draw that texture back with the desired opacity. I had this working moderately well, and I could likely make it work for most of what I need, but the big problem right now is that these things are drawn onto the screen dynamically, and the process of modifying a fairly large texture and sending it back is pretty taxing on mobile devices and causes an unacceptable level of performance.
I've spent a good chunk of time looking for more ideal solutions, including experimenting with blend modes and coming up with quirky formulas that balanced out alpha and color values in ways to even things out, but nothing was particularly successful. My guess is that the only viable route for this is the previously mentioned way of creating a texture and applying the alpha difference to that, but I am unsure of the best way to make that work with lower powered mobile devices.
There might be a few other ways to do this. The most straightforward would be to attach a stencil buffer, draw the circles into the stencil first, and then draw a full-screen rect with the desired color+alpha using the stencil test; this should be much faster than an FBO with a separate texture (a sketch of this option follows after the alternatives below).
Another thing that might work is drawing those circles first with blending disabled and then your whole scene over them with an inverted blendFunc, but note that this might be impossible if other elements also need blending.
Third, instead of using the stencil you could just use the alpha channel of your render buffer: use a color mask to draw only to alpha and draw the circles, then re-enable RGB on the color mask and draw the full-screen rect with the appropriate blendFunc. Also note that if the previous shapes used blending you will need to clear the alpha to 1.0 first (color mask set to alpha only, blending disabled, draw a full-screen rect with a color whose alpha is 1.0).
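A sketch of the first (stencil) option in GLES20/Java; drawCircles() and drawFullScreenRect() are placeholders, and the EGL config of your GLSurfaceView has to request a stencil buffer for this to work:
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);
// Pass 1: mark every pixel covered by at least one circle, without touching color.
GLES20.glColorMask(false, false, false, false);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
drawCircles();
// Pass 2: blend one translucent full-screen rect only where the stencil was marked,
// so overlapping circles are tinted exactly once.
GLES20.glColorMask(true, true, true, true);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenRect();
GLES20.glDisable(GLES20.GL_STENCIL_TEST);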

Android: how to have an animation on top of a background image

What I'm trying to do is have a background image, for sake of simplicity, lets say it's a picture of the front of a house. Then, I want to have a red ball move from window to window.
I want to have a background picture, and a picture on top of it.
I then want to be able to tell the top picture EXACTLY where to go.
How can I do this?
I'm just beginning to learn about animations in Android, and have not yet run across any way to do this.
There are two routes to animation in Android: Canvas and OpenGL ES.
I would recommend OpenGL for anything requiring smoothness and speed, like a moving ball.
You should create a view using the helper class GLSurfaceView (http://android-developers.blogspot.com/2009/04/introducing-glsurfaceview.html) and implement a Renderer.
I assume you have the images saved in your res/drawable folders in a format like PNG, and that the ball image contains an alpha channel.
You can find many tutorials online, but basically you need to load your background image and your ball resource in onSurfaceCreated and store them in textures using GLUtils.texImage2D.
In the onDrawFrame method, you should set up a 2D projection (for example with glOrthof or GLU.gluOrtho2D), then draw the background.
Then, just before you draw the ball texture, use glTranslatef(x, y, 0) to move the ball over the house. Use alpha blending for the ball:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // standard alpha blending for the ball sprite
glEnable(GL_BLEND);
Unfortunately writing in OpenGL isn't as straightforward as you might hope. Everything is done with 3D coordinates, despite the fact that you only want a 2D image. But hopefully this gives you enough info to google for good examples, which are abundant!
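For reference, a rough onDrawFrame sketch under these assumptions (GLES 1.x inside a GLSurfaceView.Renderer); the coordinates, sizes, texture handles and drawQuad() helper are all placeholders for your own code:
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0, screenWidth, 0, screenHeight, -1, 1);   // simple 2D projection
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    gl.glBindTexture(GL10.GL_TEXTURE_2D, backgroundTex);   // the house picture first
    drawQuad(gl, 0, 0, screenWidth, screenHeight);

    gl.glEnable(GL10.GL_BLEND);                            // the ball's alpha cuts out the circle
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
    gl.glTranslatef(ballX, ballY, 0);                      // move the ball from window to window
    gl.glBindTexture(GL10.GL_TEXTURE_2D, ballTex);
    drawQuad(gl, 0, 0, ballSize, ballSize);
    gl.glDisable(GL10.GL_BLEND);
}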
