glBlendEquation(GL_MIN) replacement in OpenGL ES 2.0 - Android

I've searched around, and it seems glBlendEquation has some issues on Android; GL_MAX/GL_MIN aren't even listed in the OpenGL ES 2.0 spec.
I need a workaround for the GL_MIN blend equation mode. Is that possible? I want to write to the color buffer only if the alpha already there is greater than the alpha of the pixel I'm trying to write. Can it be done without any extension and without using more than one texture?

The dirty solution I was trying to avoid uses framebuffer objects (FBOs) and shaders to emulate the color buffer and the blend mode:
Modify the existing shaders to blend the scene with FBO_1, rendering into FBO_2.
Render FBO_2 to the screen.
On the next draw call, swap FBO_1 with FBO_2, since FBO_2 now holds the current color buffer contents.
An "unintrusive" and more inefficient alternative is to use 3 FBOs and a shader, and make an additional pass.
Render scene to FBO_1 //without any modification to existing shaders
Blend FBO_1 with FBO_2 into FBO_3 //with new shader.
Render FBO_3 to the screen.
The next drawing call swap FBO_2 with FBO_3. The only advantage of this alternative is that i dont have to modify the existing drawing logic.
I really don't like either of these ideas. I'll gladly accept better answers!
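For what it's worth, here is a minimal sketch of the blending shader the "Blend FBO_1 with FBO_2" step would need, assuming the previous color buffer is bound as a sampler named u_Dst and the viewport size is passed as u_ScreenSize; those names and the varying v_Color are made up for illustration. It emulates GL_MIN per channel:

// Hedged sketch: fragment shader emulating glBlendEquation(GL_MIN) by sampling
// the previous color buffer (ping-pong FBO texture). Uniform names are assumed.
private static final String MIN_BLEND_FRAGMENT_SHADER =
        "precision mediump float;\n"
        + "uniform sampler2D u_Dst;\n"      // previous color buffer (FBO texture)
        + "uniform vec2 u_ScreenSize;\n"    // viewport size in pixels
        + "varying vec4 v_Color;\n"         // incoming source color
        + "void main() {\n"
        + "    vec2 uv = gl_FragCoord.xy / u_ScreenSize;\n"
        + "    vec4 dst = texture2D(u_Dst, uv);\n"
        + "    gl_FragColor = min(v_Color, dst);\n"  // per-channel GL_MIN
        + "}\n";

The alpha test described in the question (write the new pixel only if the destination alpha is greater) would just replace the min() line with something like gl_FragColor = (dst.a > v_Color.a) ? v_Color : dst;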

Related

Is there a way, in an OpenGL ES 2.0 fragment shader, to get the previous fragment color?

My code draws a background image and then draws some other images (particles) on top of it.
I want the particles to have blending effects like darken, lighten, burn, and so on, the same as the Canvas globalCompositeOperation does.
So in the fragment shader I need to get the previous fragment color and blend it with the new color,
but I could not find a way to do it.
No, there is no way to do this within the core specification. However, with the extensions EXT_shader_framebuffer_fetch (non-Mali devices) and ARM_shader_framebuffer_fetch (Mali devices), a value can be read back from the framebuffer (available since OpenGL 2.0 and for OpenGL ES 2.0 / 3.0):
This extension provides a mechanism whereby a fragment shader may read existing framebuffer data as input. This can be used to implement compositing operations that would have been inconvenient or impossible with fixed-function blending. It can also be used to apply a function to the framebuffer color, by writing a shader which uses the existing framebuffer color as its only input.
Note that there is no guarantee that the hardware will support either extension; you need to test for support at runtime.
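A minimal sketch of such a runtime check on Android, assuming a current GL context (the boolean name is made up):

// Query the extension string once and look for either framebuffer-fetch variant.
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasFramebufferFetch = extensions != null
        && (extensions.contains("GL_EXT_shader_framebuffer_fetch")
                || extensions.contains("GL_ARM_shader_framebuffer_fetch"));
// Only compile shaders that use the extension when hasFramebufferFetch is true.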
If you want to read fragments from a previous rendering pass, the usual way is to implement multiple render passes and render to a texture. See also LearnOpenGL - Deferred Shading.
In many cases there is no need to read fragments in the fragment shader at all; a lot of rendering effects can be implemented with the standard blending functionality. The blend function is set with glBlendFunc and the blend equation with glBlendEquation.
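For example, standard alpha blending needs no framebuffer read at all; a typical GLES 2.0 setup looks like this (the function and equation chosen here are just the common defaults):

// Enable blending and configure classic src-alpha / one-minus-src-alpha compositing.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);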
Yes, it is possible.
First, render the result to a texture-backed framebuffer.
Then pass that texture to a second shader to apply the effects.
To find out how to do this, look for the keywords:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebuff);
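For context, a hedged sketch of creating such a texture-backed framebuffer; width, height, and the array names here are placeholders:

int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);

// Allocate the texture that will receive the rendered result.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Attach it as the FBO's color buffer; subsequent draws land in the texture,
// which the second (effect) shader can then sample.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);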

Clipping an object in OpenGL ES

I want to crop an object in my OpenGL ES application; it should be done in the following manner:
The left is the initial image, the middle is the stencil buffer matrix, and the right is the result.
From what I have read here: Discard, using the stencil might have performance issues,
and since the model that will be clipped is going to be rotated and translated, I honestly don't know whether the drawn model will end up clipped in the wrong places after these transformations.
Will it?
So I thought about the depth buffer.
Again, an example:
(This photo was taken from this question.)
Assume that the black square is movable and might not be just a simple square, but a complex UIBezierPath.
I was wondering how to use the depth buffer so that everything drawn outside the square (or UIBezierPath) is clipped out; that is, adjusting the z values of the excluded pixels to some threshold value that won't be shown on screen.
So, to summarise:
1) Will using the stencil buffer be as expensive as stated?
2) Is it possible to use the stencil on a rotated and translated object so that it is always clipped correctly?
3) Using the depth buffer, is it possible to find out what is inside and what is outside the square (or UIBezierPath), and how? By masking it somehow?
4) Which is the better approach?
I know it's a lot to answer, but since the questions all relate to each other I thought it would be better to ask them together.
The stencil buffer is the way to go here. The discard answer you refer to is about the discard function in fragment shaders, which is very expensive on tile-based deferred rendering GPUs (i.e., basically every mobile GPU).
Using the stencil buffer, however, is very cheap: it is present on-chip for each tile and does not interfere with deferred rendering.
To summarise:
1) No.
2) Yes: the stencil buffer operates in 2D over the whole viewport, on the transformed vertices, so it will clip the cube after its model transforms have been applied.
3) Yes, but needless to say this is complicated; it sounds somewhat similar to shadow volumes.
4) Use the stencil buffer. A minimal sketch follows.
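Here is a hedged GLES 2.0 sketch of that two-pass stencil clip; drawMaskShape() and drawObject() are hypothetical helpers, and the surface is assumed to have been created with a stencil buffer:

GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);

// Pass 1: write 1 into the stencil wherever the mask covers, with color writes off.
GLES20.glColorMask(false, false, false, false);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
drawMaskShape();

// Pass 2: draw the (rotated, translated) object only where the stencil equals 1.
GLES20.glColorMask(true, true, true, true);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
drawObject();

GLES20.glDisable(GLES20.GL_STENCIL_TEST);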

Artifacts when rendering adjacent cubes with OpenGL ES?

I was trying to render a Rubik's cube with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces that are covered are textured with a black bitmap, and the faces that can be seen are textured with a colorful picture. I use face culling and the depth test to avoid rendering useless faces. But look at what I got, it is pretty weird: the black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots:
With the benefit of screenshots, it looks like the depth buffering simply isn't having any effect. Would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff, but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient; you also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that is disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the one that lets redSize, greenSize, etc. be specified give you explicit control over the depth buffer size, so you'll want to check those.
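For example, on a GLSurfaceView (the 16-bit depth size is just a common choice, and renderer is whatever Renderer you already use):

GLSurfaceView view = new GLSurfaceView(context);
// red, green, blue, alpha, depth, stencil bit sizes:
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
view.setRenderer(renderer);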
If you're creating your framebuffer explicitly, then make sure you are attaching a depth renderbuffer, roughly as sketched below.
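In GLES 2.0 terms that attachment looks roughly like this (width and height are placeholders, and the FBO is assumed to be bound already):

int[] depthRb = new int[1];
GLES20.glGenRenderbuffers(1, depthRb, 0);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depthRb[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
        GLES20.GL_DEPTH_COMPONENT16, width, height);
// Attach the depth storage so depth testing actually works for this FBO.
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_RENDERBUFFER, depthRb[0]);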

Specular over texture in OpenGL ES?

I'm trying to get specular highlights over objects that are texture mapped. As far as I know, the only direct way to have OpenGL compute the specular term separately from the texture color (so it shows up as, for example, plain white on top) is the call glLightModelf(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR), but that is not supported in OpenGL ES.
So, how can I do this? Do I have to use another texture for the specular highlight, or is there an easier way?
Thank you!
P.S. I'm using OpenGL ES 1.x
One workaround would be to render in two passes: the first pass renders the texture with ambient and diffuse lighting, and the second pass renders the specular highlights on top of that (without texturing enabled).
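A hedged GL10 sketch of that two-pass idea; gl is the GL10 instance and drawMesh() is a hypothetical helper that issues the geometry:

float[] black = {0f, 0f, 0f, 1f};
float[] white = {1f, 1f, 1f, 1f};

// Pass 1: textured geometry, lit with ambient + diffuse only (specular forced to black).
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, black, 0);
drawMesh(gl);

// Pass 2: same geometry, untextured, specular only, blended additively on top.
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, black, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, black, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, white, 0);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE);  // additive: highlights add to pass 1
gl.glDepthFunc(GL10.GL_EQUAL);             // only touch pixels pass 1 already wrote
drawMesh(gl);
gl.glDisable(GL10.GL_BLEND);
gl.glDepthFunc(GL10.GL_LESS);

Using GL_EQUAL on the second pass is one way to sidestep the z-fighting flicker discussed below, since both passes rasterize identical geometry.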
Thanks for the great suggestions, they saved my day. The only problem was flicker. I first thought there was a problem with the depth buffer and limited depth resolution, but that was not the case. I had to use gl.glDepthFunc(GL10.GL_ALWAYS); for perfect blending. However, for complex primitives you will be able to see through the object.
After playing with it for another week, I figured out that one simply has to disable the depth test when blending and take care of the order in which objects are rendered on each pass. Typically you want to finish all rendering passes for a far object before drawing a closer one.
This completely took care of the flicker problem I had.

Animating color values in OpenGL ES 1.0 on Android

I am working on an application in which I will do some drawing using OpenGL. In this application I will draw a 3D object with each vertex a different color; this can be done using a color pointer (glColorPointer).
Now, my problem is that I would like these colors to animate over time.
Since the color values are supplied through a buffer, I would have to either recreate the buffer every frame with new colors, or replace the values in the buffer somehow (which is probably quite error-prone). I also thought about using two buffers and switching between them (drawing with one while changing the other, then swapping).
And in any case, I would have to upload the buffer to video memory every frame...
So, my question is this: how do I animate the different colors of an object in GL10 as efficiently as possible?
Note: it would of course be easy to do this using shaders in GLES 2.0, but I would prefer to use GL10 (or 11) for this project.
Instead of using vertex colors, maybe you could come up with a clever way to use a texture instead, and animate it using the texture matrix. That way you would never have to update your vertex buffers.
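A rough GL10 sketch of that texture-matrix trick; a small gradient texture is assumed to be bound, and elapsedSeconds and the draw helper are made-up names:

// Scroll the texture lookup over time instead of rewriting vertex colors.
gl.glMatrixMode(GL10.GL_TEXTURE);
gl.glLoadIdentity();
gl.glTranslatef(elapsedSeconds * 0.1f, 0f, 0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
drawMeshWithFixedTexCoords(gl); // hypothetical: vertices carry static UVs into the gradient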
