I am using OpenGL ES with Android. I have three objects on the screen: two that use no texture map, just colors, and one that does. When I comment out the code that draws the two color objects, the texture maps onto my other object fine, but when the two color objects are present, the texture does not map onto my object and I just get a white square. Is there a call I need to make to OpenGL after I draw the color objects so that the texture will render on the other object?
Before you draw the two color objects, I guess you are calling glDisable(GL_TEXTURE_2D); if so, you need to call glEnable(GL_TEXTURE_2D) before you draw the object with the texture.
So your code should be something like this:
glDisable(GL_TEXTURE_2D);
drawColorObject1();
drawColorObject2();
glEnable(GL_TEXTURE_2D);
drawTextureObject();
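In Android Java with the GL10 interface (ES 1.x), the same pattern looks roughly like the sketch below; the draw methods are placeholders for your own code. Toggling the texture coordinate array alongside GL_TEXTURE_2D is also worth trying, since stale GL_TEXTURE_COORD_ARRAY state is another common cause of white textures:

// Sketch only; drawColorObject1/2 and drawTextureObject stand in for your code.
public void onDrawFrame(GL10 gl) {
    gl.glDisable(GL10.GL_TEXTURE_2D);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    drawColorObject1(gl);
    drawColorObject2(gl);

    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    drawTextureObject(gl);
}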
I am using OpenGL ES 2.0 (on Android) to draw a simple 2D scene that has a few images: a background image and some others that have an alpha channel.
I would like to draw an outline around the non-transparent pixels in a texture using only shader programs. After a somewhat extensive search I failed to find example code; it looks like GLES 2.0 is still not that popular.
Can you provide some sample code, or point me in the right direction where I can find more information on how to do this?
There are a couple of ways of doing this, depending on a) the quality and b) the speed you need. The common search terms are:
"glow outline"
"bloom"
"toon shader" or "toon shading"
"edge detection"
"silhouette extraction"
"mask"
1) The traditional approach is to use the stencil buffer and render to texture (a code sketch follows these steps):
Clear the stencil buffer (usually done once per frame)
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
Render to Texture
Disable Depth Writes
glDepthMask( GL_FALSE );
Disable Color Buffer Writes
glColorMask( 0, 0, 0, 0 );
Enable the stencil buffer; set stencil to always pass and replace
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glStencilFunc( GL_ALWAYS, 1, 1 );
Draw object into texture
Disable stencil
Enable Color Buffer Writes
Enable Depth Writes
Do an N-pass "tap", such as a 5- or 7-pass tap, where you blur the texture by rendering it to itself in both the vertical and horizontal directions (another option is to scale up when drawing the texture image)
Switch to orthographic projection
Draw & Blend the texture image back into the framebuffer
Restore perspective projection
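A minimal sketch of the stencil setup above in Android Java (GLES20); the render-to-texture and blur passes are elided, and drawObject() is a placeholder for your own code:

import android.opengl.GLES20;

void renderSilhouetteToStencil() {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT
            | GLES20.GL_DEPTH_BUFFER_BIT
            | GLES20.GL_STENCIL_BUFFER_BIT);

    GLES20.glDepthMask(false);                       // disable depth writes
    GLES20.glColorMask(false, false, false, false);  // disable color writes

    GLES20.glEnable(GLES20.GL_STENCIL_TEST);         // always pass, write 1
    GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
    GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);

    drawObject();                                    // silhouette -> stencil

    GLES20.glDisable(GLES20.GL_STENCIL_TEST);
    GLES20.glColorMask(true, true, true, true);      // re-enable color writes
    GLES20.glDepthMask(true);                        // re-enable depth writes
    // ... then blur the rendered texture and blend it back into the framebuffer.
}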
2) Pass along extra vertex data, namely which vertices are adjacent in the proper winding order, and dynamically generate extra outline triangles.
See: http://www.gamasutra.com/view/feature/1644/sponsored_feature_inking_the_.php?print=1
3) Use cheap edge detection. In the vertex shader, check the dot product of the normal with the view vector. If it falls within
-epsilon < dot(N, V) < epsilon
then you have an edge.
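A minimal sketch of that test, here done per-fragment (the shader source written as a Java string; the vNormal/vViewDir varyings and the epsilon value are assumptions):

// GLSL ES fragment shader as a Java string; vNormal and vViewDir are
// assumed to be passed in from the vertex shader.
static final String EDGE_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "varying vec3 vNormal;\n"
      + "varying vec3 vViewDir;\n"
      + "void main() {\n"
      + "    float d = dot(normalize(vNormal), normalize(vViewDir));\n"
      + "    float epsilon = 0.2;  // tune for edge thickness\n"
      + "    if (abs(d) < epsilon) {\n"
      + "        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // edge color\n"
      + "    } else {\n"
      + "        discard;  // not an edge\n"
      + "    }\n"
      + "}\n";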
4) Use cheap-o-rama object scaling. It doesn't work for concave objects, of course, but depending on your quality needs it may be "good enough" (a sketch follows these steps):
Switch to a "flat" shader
Enable Alpha Testing
Draw the model scaled up slightly
Disable Alpha Testing
Draw the model but at the normal size
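A rough Java sketch of approach 4; useOutlineProgram(), useNormalProgram(), drawModel() and the 5% scale factor are all illustrative assumptions. Note that ES 2.0 has no fixed-function alpha test, so it is emulated with discard in the fragment shader:

import android.opengl.Matrix;

// Sketch only; the program/draw helpers stand in for your own code.
void drawWithOutline(float[] mvp) {
    // 1. Flat outline color; the fragment shader emulates alpha testing
    //    with "if (texColor.a < 0.5) discard;".
    useOutlineProgram();
    float[] scaled = mvp.clone();
    Matrix.scaleM(scaled, 0, 1.05f, 1.05f, 1.05f);  // draw slightly bigger
    drawModel(scaled);

    // 2. Draw the model normally, at its real size, on top.
    useNormalProgram();
    drawModel(mvp);
}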
References:
https://developer.valvesoftware.com/wiki/L4D_Glow_Effect
http://prideout.net/blog/?p=54
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html
Related SO questions:
Outline effects in OpenGL
To get the pixel shader drawing something, there needs to be geometry.
As far as I understand, you want to draw a border around these images, but the outermost fragments generated would be image pixels in a basic implementation, so you'd overdraw them with any border.
If you want a 'line border', you cannot do anything other than draw the image triangles/quads (GL_TRIANGLES; GL_QUADS only exists in desktop GL) and, in an additional call, the outline (using GL_LINES), where you may share the vertices of a single quad (see the sketch below).
Consider that lines can't be drawn efficiently by many GPUs.
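A minimal sketch of that 'line border' idea, sharing one set of four vertices between the fill and the outline (the index order and buffer setup are illustrative):

import java.nio.ShortBuffer;
import android.opengl.GLES20;

// Vertices 0..3 = bottom-left, top-left, bottom-right, top-right.
short[] triIndices  = { 0, 1, 2, 2, 1, 3 };  // two triangles for the image quad
short[] lineIndices = { 0, 1, 3, 2 };        // perimeter order for the outline
ShortBuffer triBuf  = ShortBuffer.wrap(triIndices);
ShortBuffer lineBuf = ShortBuffer.wrap(lineIndices);

// ... bind program, vertex attributes and texture, then:
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_UNSIGNED_SHORT, triBuf);
// Switch to the border color (e.g. via a uniform), then draw the outline:
GLES20.glDrawElements(GLES20.GL_LINE_LOOP, 4, GLES20.GL_UNSIGNED_SHORT, lineBuf);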
Otherwise, see the solutions below:
Solution 1:
Draw the rectangle as big as the image plus border will be, and adjust the texture coordinates for the image so that it is placed within the rectangle appropriately.
This way, no extra geometry or draw calls are required.
Set the texture border property (a single 4-component color); there will be no need for extra fragment shader calculations, as the texture unit/sampler does all the work.
Texture properties:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,borderColor4f)
I've never used a color border for a single-channel texture, so this approach needs to be verified. (Note that GL_CLAMP_TO_BORDER and GL_TEXTURE_BORDER_COLOR are not in the ES 2.0 core specification, so an extension may be required.)
Solution 2:
Similar to 1, but with calculations in the fragment shader to check whether the texture coordinates are within the border area, instead of relying on the texture border. Without modification, the components of a texture coordinate range from 0.0 to 1.0.
Texture properties may be:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE)
The fragment color could be determined by any of these methods (a shader sketch follows the list):
an additional border color attribute for the rectangle, where either the texel or that border color is selected (this could be a vertex attribute, but is more likely a uniform or constant)
a combination of the alpha texture with a second texture as a background for the whole rectangle (like a picture frame), where again either texel is chosen
some other math function
Of course, the color values could be mixed for image/border gradients.
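A minimal fragment shader sketch of Solution 2, using the first method (a uniform border color); uBorderWidth is the border thickness in texture-coordinate units, and all names are illustrative:

// GLSL ES fragment shader as a Java string; names are illustrative.
static final String BORDER_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "uniform sampler2D uTexture;\n"
      + "uniform vec4 uBorderColor;\n"
      + "uniform float uBorderWidth;  // texture-coordinate units, e.g. 0.05\n"
      + "varying vec2 vTexCoord;\n"
      + "void main() {\n"
      + "    bool border = vTexCoord.x < uBorderWidth\n"
      + "                || vTexCoord.x > 1.0 - uBorderWidth\n"
      + "                || vTexCoord.y < uBorderWidth\n"
      + "                || vTexCoord.y > 1.0 - uBorderWidth;\n"
      + "    gl_FragColor = border ? uBorderColor\n"
      + "                          : texture2D(uTexture, vTexCoord);\n"
      + "}\n";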
EDIT:
As the number, length and position of such outline segments will vary and can even form concave shapes, you'd need to do this with a geometry shader, which is not available in ES 2.0 core. The best thing you can do is to precompute a line loop for each image on the CPU. Doing such tests in a shader is rather inefficient and even overkill, depending on image size, the hardware you actually run it on, etc. If you draw a fixed number of line segments and transform them in the vertex shader, you cannot properly cover all cases, at least not without immense effort and GPU workload.
Should you instead intend to change the color values of the corresponding texels, your fragment shader would need to fetch a massive and varying number of texels for each pixel near the texture edges, as in all other implementations. Such brute-force techniques are usually a replacement for recursive and iterative algorithms, for which the CPU is the better choice. So I suggest you do it there, by either modifying the texture or generating a second one to combine with it in the fragment shader.
Basically, you need to implement a path-finding algorithm that tries to 'get around' opaque pixels towards any edge.
Your alpha channel can be seen as a grayscale image. Look for any edge detection/drawing algorithm, for example the Canny edge detector (http://en.wikipedia.org/wiki/Canny_edge_detector). Alternatively, and probably a much better idea if your images are not procedural, pre-compute the edges.
If your goal is to blend various images and then apply the contour from the result of that blending, try rendering to a texture, then render that texture over the screen again and perform the edge detection algorithm.
I have an Android 4.0 application that uses the GL_OES_EGL_image_external method of rendering video as an OpenGL surface. That works great. In addition, I would like to stretch/warp a few patches on top of that video, so I'm currently drawing the areas I would like to warp with additional shaders on quads placed over them. I'm stuck on how to get the underlying color: how does the shader on my quad, which sits on top of the video quad, warp the underlying image? Is it possible?
I'm on iOS, but my app does something very similar.
How I've achieved it is based on some sample code from Apple (look at RippleModel.m in particular). How it works is that it places the video texture not on a quad, but on a highly tessellated grid, so you've got a ton of triangles with a ton of texture coordinates. It creates the vertices of this grid programmatically, and more importantly, it creates the texture coordinates programmatically as well, and holds them in an array.
For each frame, it iterates through all the vertices and updates the texture coordinates for each, 'warping' them in a ripple pattern based on where the user has touched and on how much texture offset the surrounding vertices have. So the geometry isn't changed at all, and the warp isn't performed in the shader either; it's all done in the texture coordinates. The shader then just does a straight texture lookup on the coordinates it receives.
So it's hard to say if this approach will work for your needs, but if your warps only happen in 2d, and if you can figure out how to define your warp as texture coordinate adjustments, this may help.
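A rough Java sketch of that idea (this is not Apple's actual RippleModel code; the ripple function and its constants are made up for illustration):

// Update per-vertex texture coordinates on a (cols+1) x (rows+1) grid each
// frame; the geometry itself never changes. touchX/touchY are in 0..1
// texture space; the wave constants are arbitrary.
void updateTexCoords(float[] texCoords, int cols, int rows,
                     float touchX, float touchY, float time) {
    for (int y = 0; y <= rows; y++) {
        for (int x = 0; x <= cols; x++) {
            int i = 2 * (y * (cols + 1) + x);
            float u = (float) x / cols;
            float v = (float) y / rows;
            float dx = u - touchX, dy = v - touchY;
            float dist = (float) Math.sqrt(dx * dx + dy * dy);
            // Decaying sine ripple radiating from the touch point.
            float amp = 0.02f * (float) Math.sin(30.0 * dist - 6.0 * time)
                              * (float) Math.exp(-4.0 * dist);
            texCoords[i]     = u + (dist > 0f ? amp * dx / dist : 0f);
            texCoords[i + 1] = v + (dist > 0f ? amp * dy / dist : 0f);
        }
    }
    // Re-upload texCoords (e.g. via glBufferSubData) before drawing the grid.
}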
I am currently using VBOs and triangle fans to draw circles. Someone told me that it was more efficient to map a texture of a circle onto a quad, and then apply transparency. My circle needs to gradually change color over time (hundreds of possible colors).
Is texturing a quad really more efficient? If so, could someone please provide me with a relevant link or some code/pseudocode (specifically how to change the colors for just the circular region, and the appropriate blending filter) as to how to make this dream a reality?
If your circle always has the same color over its whole region (colors don't change in different regions independently), you can just change the color of your quad and multiply it by a white circle texture, either using the GL_MODULATE texture environment (if using fixed function) or by writing the constant color in place of the texture color (if using shaders).
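A minimal fixed-function (ES 1.x) sketch of the GL_MODULATE variant; drawTexturedQuad() and the color values are placeholders:

// The white circle texture is multiplied by the current color, so changing
// glColor4f tints the circle; alpha blending keeps the corners transparent.
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
gl.glColor4f(r, g, b, 1.0f);   // the circle's current color, animated over time
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
drawTexturedQuad(gl);          // quad with the white circle texture bound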
Along with mapping a white texture with texture coordinates and vertex coordinates, supplying a valid color pointer with the required color values in it worked for me. I did not use GL_MODULATE in my 1.x code.
I'm new to OpenGL ES on Android and struggling to get my head around the concept of texturing.
I am looking to produce a tilemap of various different textures. I understand that it is better to use an atlas of all the combined textures so I don't repeatedly rebind. However, I am unsure how to then map these textures onto my tilemap.
I understand the process of specifying vertices and then the coordinates of where on the texture map I wish to take them from (I drew a picture too!)
Click for image - curse newbies not allowed to post images :(
But my question is: can I draw a triangle strip that is, in effect, longer than one "tile", but map a different area of the texture to each "tile"?
So instead of drawing a triangle strip pretending to be a quad, one at a time for each tile, can I somehow draw a whole row of the tilemap (like 1, 2, 3, 4) and cleverly shift the texture coordinates around so that each "tile" is now from a different area of the texture? So, for example, I draw a triangle strip four tiles long but shift the texture coordinates so the first "tile" is the yellow of my texture, the second red, the third blue... etc.
If I've not explained myself too well apologies!
It might just be that this is not possible and I have to draw each one individually, which would seem like I'd saved effort with an atlas only to slowly draw every tile out one by one anyway. Hmm.
Sure, just adjust the texture coordinates per tile; that is how texture atlases work.
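One wrinkle: vertices shared between neighbouring tiles in a single strip can only carry one texture coordinate each, so to give every tile its own atlas region you duplicate the boundary vertices with degenerate (zero-area) triangles. A rough sketch, assuming a 4-column atlas and illustrative names:

import java.util.ArrayList;
import java.util.List;

// Builds one triangle strip covering a whole row of tiles, each sampling a
// different column of a 4x1 atlas, drawable with a single
// glDrawArrays(GL_TRIANGLE_STRIP, ...) call.
List<Float> verts = new ArrayList<Float>(); // interleaved x, y, u, v

void addTile(int i, int atlasColumn, float tileSize) {
    float x0 = i * tileSize, x1 = x0 + tileSize;
    float u0 = atlasColumn * 0.25f, u1 = u0 + 0.25f; // 4 columns in the atlas
    if (!verts.isEmpty()) {
        // Degenerate bridge: repeat the previous vertex and the next quad's
        // first vertex, producing invisible zero-area triangles between tiles.
        int n = verts.size();
        addVertex(verts.get(n - 4), verts.get(n - 3), verts.get(n - 2), verts.get(n - 1));
        addVertex(x0, 0f, u0, 1f);
    }
    addVertex(x0, 0f,       u0, 1f); // bottom-left
    addVertex(x0, tileSize, u0, 0f); // top-left
    addVertex(x1, 0f,       u1, 1f); // bottom-right
    addVertex(x1, tileSize, u1, 0f); // top-right
}

void addVertex(float x, float y, float u, float v) {
    verts.add(x); verts.add(y); verts.add(u); verts.add(v);
}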