How to put a semi-opaque layer on a frame? - android

Usually, when clearing the frame for a new draw, one uses glClear() together with glClearColor(). But that completely removes the previous frame.
I'd like to make the frames disappear gradually, i.e. with each new frame put a semi-transparent overlay over what's already on the canvas. I tried using the alpha parameter of glClearColor(), but it doesn't seem to have any effect.
What should I do to achieve this gradual disappearing effect?

If you just want to draw the clear color over the last frame without getting rid of it entirely, draw a screen-size quad over the viewport with the same color as what you'd pass to glClearColor, and skip calling glClear(GL_COLOR_BUFFER_BIT) (you should probably still clear the depth/stencil buffers if you're using either of them). So, if you're using a depth buffer, first clear the depth buffer if need be, disable depth testing (this is important mainly to make sure that your quad does not update the depth buffer), draw your screen-size quad, and then re-enable depth testing. Draw anything else afterward if you need to.
What follows assumes you're using OpenGL ES 2.0
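Here is a rough per-frame sketch of the quad-over-the-frame idea; drawFullScreenQuad() stands for a hypothetical helper that draws a clip-space quad (two triangles) in the given RGBA color with a trivial shader.
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT);              // clear depth only, keep the previous color buffer
GLES20.glDisable(GLES20.GL_DEPTH_TEST);                  // the quad must not update the depth buffer
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(0f, 0f, 0f, 0.1f);                    // the "clear" color with a low alpha, so old content fades gradually
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
// ... draw everything else afterwards ...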
If you need to blend two different frames together and you realistically never actually see the clear color, you should probably render the last frame to a texture and draw that over the new frame. For that, you can either read the current framebuffer and copy it to a texture, or create a new framebuffer and attach a texture to it (see glFramebufferTexture2D). Once the framebuffer is set up, you can happily draw into that. After you've drawn the frame into the texture, you can go ahead and bind the texture and draw a quad over the screen (remembering of course to switch the framebuffer back to your regular framebuffer).
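As a rough sketch of that framebuffer route (assuming GLES 2.0; width and height are your surface size, and error checking is omitted):
int[] ids = new int[1];
GLES20.glGenTextures(1, ids, 0);
int frameTex = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, frameTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glGenFramebuffers(1, ids, 0);
int fbo = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, frameTex, 0);
// ... draw the frame here; it ends up in frameTex ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);      // back to the window framebuffer
// ... now bind frameTex and draw it as a screen-size quad over the new frame ...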
If you're using OpenGL ES 1.1, you will instead want to use glCopyTexImage2D (or glCopyTexSubImage2D) to copy the color buffer to a texture.
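A minimal sketch of that copy inside a GLES 1.x renderer, where savedTex, width and height are assumed to be managed elsewhere (and the texture may need power-of-two dimensions on older devices):
gl.glBindTexture(GL10.GL_TEXTURE_2D, savedTex);
gl.glCopyTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, 0, 0, width, height, 0);   // copies the current color buffer into the texture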

Related

Android, Buffers in textureView

I want to use TextureView to draw math curves (a lot of them) whose data comes from an external device.
Every region I draw must add lines on top of what was drawn previously.
Since TextureView renders using 3 buffers, I would like the buffer I am drawing into at any given moment to start from the contents of the buffer I have just released.
That is, I want the contents of the buffer I release to fill the next buffer before I draw on it.
Another possibility would be to force the use of only one buffer.
I see it is possible to use getBitmap and setBitmap, but I would like to do this without keeping that in memory.
Does anyone know if this is possible?
I would recommend two things:
Assuming you're rendering with Canvas, don't use TextureView. Use a custom View instead. TextureView doesn't really give you an advantage, and you lose hardware-accelerated rendering.
Render to an off-screen Bitmap, then blit the Bitmap to the View (a sketch follows below this answer). Offscreen rendering is not hardware-accelerated, but you avoid re-drawing the entire scene. You will have to experiment to determine which is most efficient.
If you're drawing with OpenGL ES, just draw everything to the TextureView on every frame (unless you want to play with FBOs).
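A minimal sketch of the second recommendation (draw into an off-screen Bitmap and blit it in onDraw); the class and field names are illustrative, not from the question:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

public class CurveView extends View {
    private Bitmap curveBitmap;   // accumulates everything drawn so far
    private Canvas curveCanvas;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CurveView(Context context) { super(context); }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        curveBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        curveCanvas = new Canvas(curveBitmap);
    }

    // Called whenever new data arrives from the external device.
    public void appendSegment(float x0, float y0, float x1, float y1) {
        curveCanvas.drawLine(x0, y0, x1, y1, paint);    // draw only the new segment
        invalidate();                                   // ask the View to repaint
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (curveBitmap != null) {
            canvas.drawBitmap(curveBitmap, 0, 0, null); // blit the accumulated curves
        }
    }
}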
You can try Surface.lockCanvas(null):
Use TextureView.getSurfaceTexture() to get a SurfaceTexture.
Use new Surface(surfaceTexture) to create a Surface.
Use Surface.lockCanvas() or Surface.lockHardwareCanvas() to get a Canvas.
Then you can do all your drawing on the TextureView with this Canvas.
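Roughly like this, assuming the TextureView's SurfaceTexture is already available and paint is a Paint you set up elsewhere:
SurfaceTexture surfaceTexture = textureView.getSurfaceTexture();
Surface surface = new Surface(surfaceTexture);
Canvas canvas = surface.lockCanvas(null);          // or surface.lockHardwareCanvas() on API 23+
try {
    canvas.drawColor(Color.BLACK);                 // draw whatever you need here
    canvas.drawLine(0, 0, 200, 200, paint);
} finally {
    surface.unlockCanvasAndPost(canvas);           // push this buffer to the TextureView
}
surface.release();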

Clipping object in openGL ES

I want to crop an object in my OpenGL ES application; it should be done in the following manner:
The left is the initial image, the middle is the stencil buffer matrix, and the right is the result.
From what I have read here, discard with stencil might have performance issues,
and since the model that will be clipped is going to be rotated and translated, I honestly don't know whether the drawn model will be clipped in the wrong places after these transformations.
Will it?
So I thought about the depth buffer.
Again, an example:
(This photo was taken from this question.)
Assume that the black square is movable, and might not be just a simple square, but a complex UIBezierPath.
I was wondering how to use the depth buffer so that everything drawn outside the square (or UIBezierPath) is clipped out, meaning the z values of the excluded pixels are pushed to some threshold value that won't be shown on screen.
So to summarise:
1) Is using the stencil buffer going to be expensive, as stated?
2) Is it possible to use the stencil buffer on a rotated and translated object so that it is always clipped correctly?
3) Using the depth buffer, is it possible to find out what is inside and what is outside the square (or UIBezierPath), and how? By masking it somehow?
4) Which is the better approach?
I know it's a lot to answer, but since these all relate to each other I thought they were better asked in the same question.
The stencil buffer is the way to go here. The discard answer you refer to is about using the discard function in fragment shaders, which is very expensive for tile-based deferred rendering GPUs (i.e. basically every mobile GPU).
Using the stencil buffer however is very cheap, as it is present on chip for each tile and does not interfere with deferred rendering.
To summarise:
No.
Yes, the stencil buffer operates in 2D over the whole viewport on the transformed vertices. It will clip the cube after its model transforms have been applied.
Yes, but needless to say this is complicated; it sounds somewhat similar to shadow volumes.
Use the stencil buffer.
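A minimal GLES 2.0 sketch of the two-pass stencil clip; drawMaskShape() and drawModel() are hypothetical helpers for your own geometry, and the EGL config must include stencil bits (e.g. setEGLConfigChooser(8, 8, 8, 8, 16, 8) on a GLSurfaceView):
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_STENCIL_BUFFER_BIT);
// Pass 1: write the clip shape into the stencil buffer only (no color, no depth writes).
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glColorMask(false, false, false, false);
GLES20.glDepthMask(false);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
drawMaskShape();
// Pass 2: draw the model only where the stencil value equals 1.
GLES20.glColorMask(true, true, true, true);
GLES20.glDepthMask(true);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
drawModel();
GLES20.glDisable(GLES20.GL_STENCIL_TEST);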

Seamlessly layering transparent sprites in OpenGL ES

I am working on an Android app, based on the LibGDX framework (though I don't think that should affect this problem too much), and I am having trouble finding a way to get the results I want when drawing with transparent sprites. The problem is that the sprites visibly layer on top of each other where they overlap, similar to what is displayed in this image:
This is pretty unsightly for some of what I want to do, and even completely breaks other parts. What I would like them to do is merge together seamlessly, like so:
The only success I have had thus far is to draw the entire sequence of sprites on a separate texture at full opacity, and then draw that texture back with the desired opacity. I had this working moderately well, and I could likely make it work for most of what I need it to, but the large problem right now is that these things are dynamically drawn onto the screen, and the process of modifying a fairly large texture and sending it back are pretty taxing on mobile devices, and causes an unacceptable level of performance.
I've spent a good chunk of time looking for more ideal solutions, including experimenting with blend modes and coming up with quirky formulas that balanced out alpha and color values in ways to even things out, but nothing was particularly successful. My guess is that the only viable route for this is the previously mentioned way of creating a texture and applying the alpha difference to that, but I am unsure of the best way to make that work with lower powered mobile devices.
There might be a few other ways to do this. The most straightforward would be to attach a stencil buffer, draw the circles into the stencil first, and then draw a full-screen rect with the desired color+alpha using the stencil test; this should be much faster than an FBO with a separate texture.
Another thing that might work is drawing those circles first with blending disabled and then drawing your whole scene over them with an inverted "blendFunc", but note this might be impossible if other elements also need blending.
Third, instead of using the stencil you could just use the alpha channel of your render buffer (sketched below). Use a color mask to draw only to alpha and draw the circles, then re-enable RGB in the color mask and draw the full-screen rect with an appropriate "blendFunc". Also note that if previous shapes have used blending you will need to clear the alpha to 1.0 before doing this (color mask to alpha only, blending disabled, draw a full-screen rect with a color whose alpha is 1.0).
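A rough sketch of that third option in plain GLES 2.0 (the same calls are available via Gdx.gl in LibGDX); drawCircles(r, g, b, a) and drawFullScreenRect(r, g, b, a) are hypothetical helpers that draw with a constant color, and the surface needs a destination alpha channel (RGBA8888):
// 1) Reset destination alpha to 1.0 (write alpha only, blending off).
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glColorMask(false, false, false, true);
drawFullScreenRect(0f, 0f, 0f, 1f);
// 2) Mark coverage: every circle writes 1 - opacity into destination alpha.
//    Overlapping circles just overwrite the same value, so there is no double-darkening.
float opacity = 0.5f;                                // the opacity you want for the merged shape
drawCircles(0f, 0f, 0f, 1f - opacity);
// 3) Draw the color once, weighted by the stored coverage.
GLES20.glColorMask(true, true, true, true);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE_MINUS_DST_ALPHA, GLES20.GL_DST_ALPHA);
drawFullScreenRect(1f, 0f, 0f, 1f);                  // e.g. red: red*opacity + background*(1-opacity) inside the circles, untouched outside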

Animating color values in open gles 1.0 on Android

I am working on an application in which I will perform some drawing using OpenGL. In this application I will draw a 3D object with each vertex a different color. This can be done by using a color pointer (glColorPointer).
Now, my problem is that I would like these colors to animate over time.
As the color values are given using a buffer, I would have to either recreate the buffer every frame with new colors, or replacing the values in the buffer somehow (which is probably quite error prone). I also thought about the possibility of using two buffers and switching between them (drawing with one buffer, and changing the other, then switch).
And in any case, I would have to upload the buffer to the video memory every frame...
So, my question is this: how do I animate the different colors of an object in GL10 as efficiently as possible?
Note: it would of course be easy to do this using shaders in GLES 2.0, but I would prefer to just use GL10 (or 11) for this project.
Instead of using vertex colors, maybe you could come up with a clever way to use a texture instead, and animate this using the texture matrix? That way you wouldn't have to update your vertex buffers ever.
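A rough GLES 1.x sketch of that idea, inside onDrawFrame(GL10 gl); gradientTex (a small gradient texture) and time are assumed to be managed elsewhere, and the object keeps its static vertex and texcoord buffers:
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, gradientTex);
gl.glMatrixMode(GL10.GL_TEXTURE);
gl.glLoadIdentity();
gl.glTranslatef(time * 0.1f, 0f, 0f);    // slide the gradient across the object a little each frame
gl.glMatrixMode(GL10.GL_MODELVIEW);
// ... draw the object as usual; its colors now come from the moving texture ...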

How to set background Image for OpenGL in Android?

I am new to OpenGL. How do I set a background image in OpenGL? Actually, when I render a textured square together with a normal square (one that just uses colors), the texture also changes its color...
I don't completely understand your question, but there is no background image in OpenGL. If you want to have an image as background of your rendering, just draw a textured square covering the whole screen before drawing everything else.
In case you have depth buffering enabled, you should also make sure your background image doesn't write to the depth buffer, so that the other things you render after it are actually rendered on top of the background. This can either be done by rendering it at the far plane so it gets the maximum depth of 1, or by just disabling depth writes using
glDepthMask(GL_FALSE);
and of course enabling it again (using glDepthMask(GL_TRUE)) after it is drawn.
But of course OpenGL is not a scene or image management system; it has no notion of any persistent scene or images and forgets about anything once it has been drawn. This means that, like everything else, you have to draw this background image each frame before the other scene objects are drawn.
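Putting it together, a per-frame sketch assuming GLES 2.0 and a hypothetical drawTexturedFullScreenQuad() helper that draws a clip-space quad with the given texture bound:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glDepthMask(false);                    // the background must not write depth
drawTexturedFullScreenQuad(backgroundTex);    // backgroundTex is your loaded background texture
GLES20.glDepthMask(true);                     // restore depth writes for the actual scene
// ... draw the rest of the scene here ...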
