I'm trying to get specular highlights over objects that are texture mapped. As far as I know, the only direct way to make OpenGL apply the specular highlight after texturing (so that it stays, for example, plain white instead of being modulated by the texture color) is the call glLightModelf(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR), but that is not supported in OpenGL ES.
So, how can I do this? Do I have to use another texture for the specular highlight, or is there another, easier way?
Thank you!
P.S. I'm using OpenGL ES 1.x
One workaround would be to run in two passes: first pass renders the texture with ambient & diffuse lighting, the second pass renders the specular highlights on top of that (without texturing enabled).
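In rough outline, the two passes on the ES 1.x fixed-function pipeline could look something like this (an untested sketch; gl is the GL10 instance and drawMesh() is a hypothetical helper that issues your vertex arrays and draw call):

// Pass 1: textured geometry, lit with ambient + diffuse only (specular forced to black).
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, new float[] {0f, 0f, 0f, 1f}, 0);
drawMesh(gl);

// Pass 2: same geometry again, untextured, specular only, added on top.
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, new float[] {0f, 0f, 0f, 1f}, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, new float[] {0f, 0f, 0f, 1f}, 0);
gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, new float[] {1f, 1f, 1f, 1f}, 0);
gl.glMaterialf(GL10.GL_FRONT_AND_BACK, GL10.GL_SHININESS, 64f);
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE);   // additive: highlights brighten whatever is already there
gl.glDepthFunc(GL10.GL_EQUAL);              // only touch pixels the first pass already wrote
drawMesh(gl);

// Restore state for the rest of the frame.
gl.glDepthFunc(GL10.GL_LESS);
gl.glDisable(GL10.GL_BLEND);

Using GL_EQUAL as the depth function on the second pass restricts it to exactly the pixels the first pass wrote, which also avoids z-fighting between the two identical passes.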
Thanks for the great suggestions, they saved my day. The only problem was a flicker. I first thought it was a problem with the depth buffer and limited depth resolution, but that was not the case. I had to use gl.glDepthFunc(GL10.GL_ALWAYS); to get perfect blending. However, for complex primitives you will be able to see through the object.
After playing with it for another week, I figured out that one simply has to disable the depth test when doing the blending and take care of the order in which objects are rendered in each pass. Typically you want to finish all rendering passes for a far object before drawing a closer one.
This completely takes care of the flicker problem I had.
My code is drawing a background image and drawing some other images (particles) on top of that image.
I want to give the particles some blending effects, like darken, lighten, burn, ... the same as Canvas globalCompositeOperation does.
So in the fragment shader, I need to get the previous fragment color and blend it with the new color.
But I could not find a way to do it.
No, there is no way to do this in standard OpenGL ES. However, with the extensions EXT_shader_framebuffer_fetch (non-Mali devices) and ARM_shader_framebuffer_fetch (Mali devices), a value can be read back from the framebuffer in the fragment shader (for OpenGL ES 2.0 / 3.0):
This extension provides a mechanism whereby a fragment shader may read existing framebuffer data as input. This can be used to implement compositing operations that would have been inconvenient or impossible with fixed-function blending. It can also be used to apply a function to the framebuffer color, by writing a shader which uses the existing framebuffer color as its only input.
Note that there is no guarantee that hardware will support an extension. You need to test whether the extension is supported or not at runtime.
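For example, a quick runtime check from the Android Java bindings might look like this (sketch; it must run with a current GL context, e.g. in onSurfaceCreated):

String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasFramebufferFetch = extensions != null
        && (extensions.contains("GL_EXT_shader_framebuffer_fetch")
            || extensions.contains("GL_ARM_shader_framebuffer_fetch"));
if (!hasFramebufferFetch) {
    // Fall back to render-to-texture or plain blending (see below).
}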
If you want to read fragments from the previous rendering, the usual way is to implement multiple rendering passes and render to a texture. See also LearnOpenGL - Deferred Shading.
In many cases, there is no need to read fragments in the fragment shader. A lot of rendering effects can be implemented using the standard Blending functionality. The blend function can be changed with glBlendFunc and the blend equation can be changed with glBlendEquation.
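For instance, a few globalCompositeOperation-style modes map directly onto blend state, with no shader changes at all (illustrative sketch; drawParticles() stands in for whatever issues your particle draw calls):

GLES20.glEnable(GLES20.GL_BLEND);

// "multiply"-style darkening: result = src * dst
GLES20.glBlendFunc(GLES20.GL_DST_COLOR, GLES20.GL_ZERO);

// "screen"-style lightening: result = src + dst - src * dst
// GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_COLOR);

// plain additive lightening: result = src + dst
// GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);

drawParticles();

True darken/lighten (per-component min/max) needs glBlendEquation with GL_MIN/GL_MAX, which is only core in OpenGL ES 3.0 or available via the EXT_blend_minmax extension.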
Yes, it is possible.
First, render the result to a texture, using a framebuffer object.
Then feed that texture to a second shader that applies the effects.
To find out how to do this, look for keywords such as:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebuff);
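Roughly, the setup could look like this (a sketch using the Android GLES20 bindings; width, height, drawScene() and drawFullScreenQuadWithEffectShader() are placeholders):

int[] ids = new int[1];

// Color texture that will receive the first pass.
GLES20.glGenTextures(1, ids, 0);
int colorTex = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, colorTex);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Framebuffer object with that texture as its color attachment.
GLES20.glGenFramebuffers(1, ids, 0);
int mFramebuff = ids[0];
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebuff);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, colorTex, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("FBO not complete");
}

// First pass: draw the scene into the texture.
drawScene();

// Second pass: back to the default framebuffer, sample colorTex in the effect shader.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, colorTex);
drawFullScreenQuadWithEffectShader();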
I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics with Android OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to implement in 3D modelling software like Blender (using the Cycles renderer); see the image below for the effect I'm looking for. Through some research, I learnt that it may be related to a blur or bloom effect done with shaders, but I'm not very sure, and I don't know how to implement it.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering: every time you specify a piece of geometry, the calculation goes forwards from triangles to pixels; a function is applied to determine the colour of each of those pixels, and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it; each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering. It starts from each pixel and asks what geometry lies behind it. In doing that it explicitly has knowledge of all the geometry in the scene. So when it spots that the floor is behind a certain position it can then continue and ask "and which light sources can the floor see?" to establish lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), there's only one source of light which is infinitely far away (i.e. it's in a fixed position as far as geometry is concerned; no need to know about the scene really).
So solutions are to invest the geometry with some knowledge of the whole scene. A common approach is to perform internal renderings of the scene from the point of view of the light source. That generates a depth buffer. By handing the light position and depth buffer off to every piece of geometry in the scene they can calculate whether they're visible to the light source. If so then they're illuminated by it. If not then they're not.
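The per-fragment test against that depth buffer is essentially a one-line comparison. As a rough sketch (assuming depth-texture support such as OES_depth_texture on ES 2.0, and a vertex shader that outputs vShadowCoord, the position transformed by the light's view-projection and a 0..1 bias matrix):

private static final String SHADOW_TEST_FS =
        "precision mediump float;\n" +
        "uniform sampler2D uShadowMap;\n" +   // depth rendered from the light's point of view
        "varying vec4 vShadowCoord;\n" +
        "void main() {\n" +
        "    vec3 coord = vShadowCoord.xyz / vShadowCoord.w;\n" +
        "    float nearestToLight = texture2D(uShadowMap, coord.xy).r;\n" +
        "    float lit = (coord.z - 0.005 > nearestToLight) ? 0.3 : 1.0;\n" +  // small bias against shadow acne
        "    gl_FragColor = vec4(vec3(lit), 1.0);\n" +
        "}\n";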
Another option is deferred rendering; you do a standard pass of your scene, populating at each pixel the depth, the surface colour, the surface normal, etc. So you get the full scene information broken down into pixel-by-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything that there is. So you just need to pass that buffer around for pixels to be able to work out, approximately, which light sources they can and can't see. You can also have different parts of the screen only consider which lights they're close enough to by a broad-phase 2d distance check, which saves time.
In either case we're actually talking about relatively advanced OpenGL stuff.
I want to crop an object in my OpenGL ES application; it should be done in the following manner:
The left is the initial image, middle is the stencil buffer matrix, and right is the result.
From what I have read here: discard with stencil might have performance issues,
and since the model that will be clipped is going to be rotated and translated, I honestly don't know whether the drawn model will end up clipped in the wrong places after these transformations.
Will it?
So, I thought about the depth buffer.
Again, an example:
(This photo was taken from this question.)
Assume that the black square is movable, and might not be just a simple square, but a complex UIBezierPath.
I was wondering how to use the depth buffer so that everything drawn outside the square (or UIBezierPath) is clipped out, meaning adjusting the z values of the excluded pixels to some threshold value that won't be shown on screen.
So to summarise:
1) Is using the stencil going to be as expensive as stated?
2) Is it possible to use the stencil on a rotated and translated object so that it will always be drawn correctly?
3) Using the depth buffer, is it possible to find out what is inside and what is outside the square (or UIBezierPath), and how? By masking it somehow?
4) What is the better approach?
I know it's a lot to answer, but since the questions all relate to each other I thought they were better asked together.
The stencil buffer is the way to go here. The discard answer you refer to is about using the discard function in fragment shaders, which is very expensive for tile-based deferred rendering GPUs (i.e. basically every mobile GPU).
Using the stencil buffer, however, is very cheap, as it is present on-chip for each tile and does not interfere with deferred rendering.
To summarise:
1) No.
2) Yes; the stencil buffer operates in 2D over the whole viewport, on the transformed vertices, so it will clip the cube after its model transforms have been applied.
3) Yes, but needless to say this is complicated; it somewhat resembles shadow volumes.
4) Use the stencil buffer.
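For reference, a minimal stencil setup could look like this (a sketch using the Android GLES20 bindings for illustration; drawMaskShape() and drawObject() are placeholders, and the surface needs stencil bits, e.g. setEGLConfigChooser(8, 8, 8, 8, 16, 8) on a GLSurfaceView):

GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);

// Pass 1: draw the mask shape (the square / tessellated UIBezierPath) into the stencil only.
GLES20.glColorMask(false, false, false, false);
GLES20.glDepthMask(false);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
drawMaskShape();

// Pass 2: draw the object, but only where the stencil value is 1.
GLES20.glColorMask(true, true, true, true);
GLES20.glDepthMask(true);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 1, 0xFF);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_KEEP);
drawObject();   // the object's own model transforms do not move the clip region

GLES20.glDisable(GLES20.GL_STENCIL_TEST);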
I'm trying to figure out a way to recreate, or at least get a result similar to, the max blend equation (GL_MAX) in OpenGL ES 2.0 on Android devices.
Unfortunately, glBlendEquation(GL_MAX_EXT) is not supported on Android. The GL_MAX enum is defined in the GL header on Android, but executing it results in a GL_INVALID_ENUM (0x0500) error.
I have a solution using shaders and off-screen textures where each render ping-pongs back and forth between two textures, using the shader to calculate the max pixel value.
However, this solution isn't fast enough for real-time execution on most Android devices.
So given this limitation, is there any way to recreate a similar result using just different blend equations and blend factors?
I have tried many blend function combinations; the closest have been:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA): this comes close, but textures become too transparent; textures with low alpha values are difficult to see.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_SRC_ALPHA): this also comes somewhat close, but the alpha accumulates too much and the colors become darker than intended.
If you could do GL_MAX blending without needing a special blend equation... OpenGL would never have added it in the first place. So your options are to do without or to use your shader method.
I've searched around and it seems that glBlendEquation has some issues on Android; GL_MAX/GL_MIN is not even listed in the OpenGL ES 2.0 spec.
I need to find a workaround for the GL_MIN blend equation mode. Is that possible? I want to write to the color buffer only if the alpha already there is greater than the alpha of the pixel I'm trying to write. Is it possible to do this without any extension and without using more than one texture?
The dirty solution I was trying to avoid is this: using framebuffer objects and shaders to emulate the color buffer and the blending mode:
Modify the existing shaders to blend the scene with FBO_1, rendering into FBO_2.
Render FBO_2 to the screen.
On the next drawing call, swap FBO_1 with FBO_2, since FBO_2 now holds the color buffer contents.
An "unintrusive" but more inefficient alternative is to use 3 FBOs and a shader, and make an additional pass.
Render the scene to FBO_1 // without any modification to the existing shaders
Blend FBO_1 with FBO_2 into FBO_3 // with the new shader (sketched below)
Render FBO_3 to the screen.
On the next drawing call, swap FBO_2 with FBO_3. The only advantage of this alternative is that I don't have to modify the existing drawing logic.
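The blend pass in either variant boils down to a fragment shader that keeps the "smaller" of the two samples; a rough sketch (uAccum is the previously accumulated buffer, uScene the freshly rendered scene, names purely illustrative):

private static final String MIN_BLEND_FS =
        "precision mediump float;\n" +
        "uniform sampler2D uAccum;\n" +
        "uniform sampler2D uScene;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    vec4 a = texture2D(uAccum, vTexCoord);\n" +
        "    vec4 s = texture2D(uScene, vTexCoord);\n" +
        "    // component-wise GL_MIN; use (s.a < a.a) ? s : a to match the\n" +
        "    // 'write only if the existing alpha is greater' rule exactly.\n" +
        "    gl_FragColor = min(a, s);\n" +
        "}\n";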
I really don't like any of these ideas. I'll gladly accept better answers!