Changing texture brightness on Android in OpenGL ES

First, I must say I'm as green as grass when it comes to OpenGL.
I'm writing a simple 2D game for Android using OpenGL ES. I use GL11Ext.glDrawTexfOES() to draw the textures (just like in the sample code named 'sprite-method-test'), and gl.glColor4f() to color them (and set transparency). But I'm completely helpless when I need to apply some brightness or contrast to the rendered texture.
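For reference, here is a trimmed sketch of my draw path (textureId, cropRect and the coordinates stand in for my real values); as far as I understand, with the default GL_MODULATE texture environment, glColor4f scales the texel color, which at least gives a crude darkening control:

// GL ES 1.x draw path: GL11Ext.glDrawTexfOES plus glColor4f tinting.
// Under GL_MODULATE, the current color multiplies the texel color,
// so values below 1.0 darken the sprite.
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);

float brightness = 0.5f; // 0.0 = black, 1.0 = original texture
gl.glColor4f(brightness, brightness, brightness, 1.0f);

// Crop rectangle (x, y, width, height in texels) required by glDrawTexfOES.
((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D,
        GL11Ext.GL_TEXTURE_CROP_RECT_OES, cropRect, 0);
((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);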
I really appreciate any help.

Related

Rendering virtual objects transparent in augmented reality

I want to place virtual objects in my augmented reality scene which are rendered transparent.
For the sake of simplicity, let's take Google's SimpleAugmentedReality scene. When you take the scene as it comes from the example project, an Earth and a Moon are placed in front of you, rendered opaque with a texture.
When I change the material's "Rendering Mode" to "Transparent", I expect them to become transparent, but nothing changes.
I also tried custom shaders in which I rendered just a cube and set the alpha value below 1.0. It is also drawn opaque in the AR scene.
Am I missing some configuration in the Tango Camera prefab?
This is not supported in the TangoSDK right now, but I think it's a very reasonable ask.
The reason it's not working is that the TangoSDK injects the background camera render command into Unity's command buffer, and it is rendered with glDepthMask(GL_FALSE);
This prevents alpha blending from taking in the color from the background camera render.
Actually it works now:
I was not familiar with shaders in Unity, so I had forgotten the Tags{"Queue" = "Transparent"} tag within the SubShader. Also, the Blend SrcAlpha OneMinusSrcAlpha directive was not set within the pass; see the skeleton below.
Now my objects are rendered transparent "on top" of the RGB camera video frame depending on the alpha value defined in the shader.
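Putting both pieces together, the relevant ShaderLab skeleton looks like this (a minimal sketch; everything outside the two directives is standard boilerplate and may differ from the actual shader):

SubShader {
    Tags { "Queue" = "Transparent" }

    Pass {
        // Standard alpha blending over the RGB camera frame.
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off   // typical for transparent geometry

        // CGPROGRAM ... fragment output with alpha < 1.0 ... ENDCG
    }
}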

OpenGL: Create a "transparent window-like object"

I am developing an augmented reality app that should render a 3D model. So far so good. I am using Vuforia for AR and libgdx for graphics, everything is on Android, and it works like a charm...
The problem is that I need to create a "window-like" effect. I literally need to make the model look like a window you can look through to see what's behind it. That means I have some kind of wall object with a hole in it (a window). Through this hole, you can see another 3D model behind the wall.
The problem is that I also need to render the video background. And this background is also behind the wall. I can't just turn off blending when rendering the wall, because that would corrupt the video image.
So I need to make the wall and everything directly behind it transparent, but not the video background.
Is such a marvel even possible using only OpenGL?
I have been thinking about some combination of front-to-back and back-to-front rendering: render the background first, then render the wall, but blend it only into the alpha channel (making the video visible only on pixels that are not covered by the wall); then render the actual content, but blend it only into the visible pixels (those not behind the wall); and then "render" the wall once more, but this time making everything behind it visible. Would such a thing work?
I can't just turn off blending when rendering the wall
What makes you think that? OpenGL is not a scene graph. It's a drawing API, and everything happens in the order in which you call it.
So the order of operations would be (a code sketch follows the list):
Draw the video background with blending turned off.
Then the objects between the video and the wall (turn blending on or off as needed).
Draw the wall, with blending or alpha test enabled, so that you can create the window.
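A minimal sketch of that order in raw GLES20 calls (the draw* methods are placeholders for your own rendering code; in libgdx you would go through Gdx.gl instead):

// 1. Video background: no blending, no depth writes, so it can never
//    occlude geometry drawn afterwards.
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDepthMask(false);
drawVideoBackground();
GLES20.glDepthMask(true);

// 2. Objects between the video and the wall.
drawObjectsBehindWall();

// 3. The wall, with blending (or an alpha-test shader) enabled so the
//    window region stays see-through.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawWall();
GLES20.glDisable(GLES20.GL_BLEND);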
Is such a marvel even possible using only OpenGL?
The key to understanding OpenGL is that you don't think of it as setting up a 3D world scene; you use it to draw a 2D picture of a 3D world (because that's what OpenGL actually does). In the end, OpenGL is just a somewhat smarter brush for drawing onto a flat canvas. Think about how you'd paint a picture on paper and how you'd mask different parts. Then do that with OpenGL.
Update
Okay, now I see what you want to achieve. The wall is not really visible, but acts as a depth-dependent mask. Easy enough to achieve: use alpha testing instead of blending to punch the window into the depth buffer. Or, instead of alpha testing, you could just draw 4 quads that form a window between them.
The trick is that you draw the wall into just the depth buffer, not into the color buffer:
glDepthMask(GL_TRUE);                                /* write the wall's depth   */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* ...but none of its color */
draw_wall();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     /* restore color writes     */
Blending will not work in this case, since even fully transparent fragments will end up in the depth buffer. Hence the alpha test. In fixed-function OpenGL that's glEnable(GL_ALPHA_TEST) and glAlphaFunc(…). On OpenGL ES 2, however, you have to implement it through a shader.
Say you've got a single-channel texture; in the fragment shader do
float opacity = texture2D(sampler, uv).r; // ES2 uses texture2D (texture() is ES 3.0+)
if (opacity < threshold) discard;

Getting part of the (already rendered) screen as a texture

I'm making an Android OpenGL ES 2D app and trying to use part of my rendered screen as a texture for a billboard.
So far, I've had partial success with glCopyTexSubImage, but it only works on some phones.
Everywhere I read recommends using a framebuffer object (FBO) to render to a texture, but I can't grasp how to use one, so if anyone can help me get this, I would thank them greatly.
If I use an FBO that is bound to a texture, is it possible to render just part of the screen? If not, isn't that a bit of overkill? (It's also much more work to map and move the texture, and the texture would have to be big enough for the part I need not to look blurry.)
I need a snapshot of something that should be rendered to the screen anyway. Does that mean I have to render my scene twice every frame (once for the texture and once for the actual render)? Am I missing something here?
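For what it's worth, the usual GLES20 render-to-texture setup looks roughly like this (a sketch; width and height are whatever size you need, and error handling is minimal):

// Create the texture that will receive the rendered image.
int[] ids = new int[1];
GLES20.glGenTextures(1, ids, 0);
int textureId = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Create a framebuffer and attach the texture as its color target.
GLES20.glGenFramebuffers(1, ids, 0);
int fboId = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, textureId, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("framebuffer incomplete");
}

// Draw here; glViewport can restrict rendering to just the region needed.
// ...

// Back to the default framebuffer for the normal frame.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

Since the captured region ends up in a texture, you can draw it back to the screen as a textured quad rather than rendering that part of the scene a second time.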

Android graphic effects on an entire canvas

I am writing my own 2D game for Android. So far I've been using some home-brew graphics: drawing frame by frame on a Canvas backed by a SurfaceView. I've been able to draw a variety of lines, shapes and bitmaps with solid performance.
I am wondering whether it is possible (or whether anyone has clever ideas) to apply certain effects to the entire canvas. For instance, it would be cool if I could add a changing Gaussian blur effect to simulate movement. I have found tutorials on how to apply a Gaussian blur to a bitmap, but I need to apply it to my entire canvas (which is made up of a bitmap with shapes drawn in front of it).
Any suggestions?
Once something has been rendered into the hardware back buffer, getting access to it is slow and awkward, if not entirely impossible.
The standard way to build a performant post-process effect is to render everything into an image or texture you created yourself. You then apply whatever effect you want to that texture, or use that texture to render into the back buffer with your effect.
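In a Canvas/SurfaceView setup, that translates to something like this (a sketch; drawGame and applyBlur stand in for your existing drawing and bitmap-effect code):

// Offscreen target: draw the whole frame into a Bitmap instead of the
// SurfaceView's canvas.
Bitmap frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas offscreen = new Canvas(frame);
drawGame(offscreen);                 // all the existing lines/shapes/bitmaps

Bitmap processed = applyBlur(frame); // any bitmap-level effect

// Blit the processed frame to the real surface in one go.
Canvas screen = surfaceHolder.lockCanvas();
try {
    screen.drawBitmap(processed, 0, 0, null);
} finally {
    surfaceHolder.unlockCanvasAndPost(screen);
}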

Drawing a 3D object inside a 3D object using Java OpenGL 1.0 for Android

I have a 3D cube created using GL_TRIANGLE_STRIP. Is it possible to draw points (using GL_POINTS) or a triangle (using GL_TRIANGLES) on or inside my 3D cube? How could that be achieved?
If you want to draw something directly on the face of another object (using the exact same vertex coordinates), you will need to use glPolygonOffset to prevent stitching (z-fighting). There is a chapter in the Red Book that explains it.
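In GL ES terms that looks roughly like this (a sketch; ES only offsets filled polygons, so the offset goes on the cube rather than on the points):

// Push the cube's faces slightly back in depth so coplanar points or
// triangles drawn afterwards win the depth test.
gl.glEnable(GL10.GL_POLYGON_OFFSET_FILL);
gl.glPolygonOffset(1.0f, 1.0f);
drawCube(gl);            // the GL_TRIANGLE_STRIP cube
gl.glDisable(GL10.GL_POLYGON_OFFSET_FILL);

drawFaceDecorations(gl); // the GL_POINTS / GL_TRIANGLES geometry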
If by inside you mean drawing something in the volume of the cube, then there is nothing stopping you. You just need to get the alpha values and blending right to actually see through the cube. Look for a generic tutorial on transparency in OpenGL.
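The usual recipe for that looks like the following (a sketch, assuming the inner object is drawn before the translucent cube and no per-vertex colors are in use):

// Standard alpha blending; back-to-front draw order matters.
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

drawInnerObject(gl);             // content inside the cube
gl.glColor4f(1f, 1f, 1f, 0.4f);  // cube alpha < 1.0 so you can see through it
drawCube(gl);

gl.glDisable(GL10.GL_BLEND);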
But maybe I'm horribly mistaken and what you are looking for is textures.
If I understand you correctly, you could just generate the appropriate texture with the points on it and apply it to the cube.
