I am currently using VBOs and triangle fans to draw circles. Someone told me that it was more efficient to map a texture of a circle onto a quad, and then apply transparency. My circle needs to gradually change color over time (hundreds of possible colors).
Is texturing a quad really more efficient? If so, could someone please provide me with a relevant link or some code/pseudocode (specifically how to change the colors for just the circular region, and the appropriate blending filter) as to how to make this dream a reality?
If your circle always has the same color over its whole region (colors don't change on different regions independently), you can just change the color of your quad and multiply it by a white circle texture, either using the GL_MODULATE texture environment (if using fixed function) or by multiplying the sampled texel by that constant color (if using shaders).
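In shader terms, that boils down to multiplying a uniform color with the white circle texture. A minimal fragment-shader sketch (names are illustrative; blending has to be enabled so the transparent area around the circle stays invisible):

/* Multiply a constant (animatable) color with a white-circle texture so only
 * the circular region is tinted; the texture's alpha cuts away the corners
 * of the quad once blending is enabled.  Names are illustrative. */
static const char *kCircleFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_circleTex;  /* white circle on a transparent background */\n"
    "uniform vec4      u_color;      /* the color you animate over time          */\n"
    "varying vec2      v_texCoord;\n"
    "void main() {\n"
    "    gl_FragColor = u_color * texture2D(u_circleTex, v_texCoord);\n"
    "}\n";

/* Fixed-function (ES 1.x) equivalent: set the quad color and let GL_MODULATE
 * combine it with the texture:
 *     glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
 *     glColor4f(r, g, b, 1.0f);
 */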
Along with mapping a white texture using texture coordinates and vertex coordinates, supplying a valid color pointer with the required color values worked for me. I did not set GL_MODULATE explicitly in my 1.x code (it is the default texture environment, which is why the vertex color still tints the texture).
As far as I know, we can set the line color before drawing lines. Is it possible to write a fragment shader that makes the model's edges a different color, so that we do not need to draw the edges in a different color separately?
I know it's an old thread, but better than scaling up the model is a two-pass shader: in the extra pass, push each vertex position out a bit along the direction of that vertex's normal. This works much better than scaling the model; scaling only works for convex shapes (cubes, balls), while for concave and complex shapes (humanoids) the normal-push version still works.
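A minimal sketch of the outline-pass vertex shader (uniform/attribute names are illustrative): the first pass draws this pushed-out version in the outline color, and the second pass draws the model normally on top.

/* Outline pass: push each vertex out along its (normalized) normal.
 * u_outlineWidth is in object-space units; names are illustrative. */
static const char *kOutlineVertSrc =
    "uniform mat4  u_mvpMatrix;\n"
    "uniform float u_outlineWidth;\n"
    "attribute vec3 a_position;\n"
    "attribute vec3 a_normal;\n"
    "void main() {\n"
    "    vec3 displaced = a_position + normalize(a_normal) * u_outlineWidth;\n"
    "    gl_Position = u_mvpMatrix * vec4(displaced, 1.0);\n"
    "}\n";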
I have a 3D mesh, which is a terrain. This runs perfectly fine, by the way, but I want to have shapes moving across this terrain. These shapes are flat on the landscape and blob-like: they can change shape and should follow the contours and the heightmap of the terrain. These shapes can be painted on the landscape or flow over it; that doesn't matter.
The shapes are meant to be blocks of armies moving across the map, and this should happen in real time! Also: they are 2D convex hull shapes, and they are just one color with an alpha value (like blue with alpha 0.25f).
The only problem is: I can't figure out how to do this and the question is: Can anyone tell me how to do it?
My first thoughts were just to copy the terrain vertex matrix, push it up a bit so it will be on top of the terrain, load this buffer into a VBO and update the index buffer according to the position and shape needed and then draw the shape. This is rather slow and inefficient, especially when the shape is moving and changing. Also, the resolution of the heightmap is 175x175, so the movement is not at all smooth but rather jaggy.
Then I thought (though I'm rather new to this area) of uploading the shape outlines to the terrain's fragment shader and letting the shader decide whether a point lies inside that area, changing the color accordingly. This also turned out to be a really slow option, but if anyone sees potential and a good way to do this, tell me!
The next option was to draw directly onto the texture, which I haven't gotten working yet. If someone has any good ideas on how to draw a scene to a flat area and then put that on a terrain mesh, that would be great!
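For what it's worth, one way to realize that last idea is to render the blobs top-down into an offscreen texture that covers the whole map, and then sample that texture in the terrain's fragment shader using each fragment's world-space XZ position. A sketch of the terrain fragment shader under those assumptions (all names are placeholders):

/* Blend a top-down overlay texture onto the terrain.  The world-space XZ
 * position, mapped into [0,1] by the map extent, is used as the overlay
 * texture coordinate.  All names are placeholders. */
static const char *kTerrainOverlayFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_terrainTex;   /* regular terrain texture      */\n"
    "uniform sampler2D u_overlayTex;   /* blobs rendered top-down      */\n"
    "uniform vec2      u_mapSize;      /* world-space size of the map  */\n"
    "varying vec2      v_texCoord;\n"
    "varying vec2      v_worldXZ;\n"
    "void main() {\n"
    "    vec4 terrain = texture2D(u_terrainTex, v_texCoord);\n"
    "    vec4 blob    = texture2D(u_overlayTex, v_worldXZ / u_mapSize);\n"
    "    /* the blob's alpha (e.g. 0.25) controls how strongly it tints */\n"
    "    gl_FragColor = vec4(mix(terrain.rgb, blob.rgb, blob.a), terrain.a);\n"
    "}\n";

Because the overlay is just a texture, the blobs can move and change shape every frame by redrawing the offscreen texture, and they automatically follow the terrain's contours.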
So, does anyone have a solution for drawing a shape (or several) on a terrain? That would be awesome. Thanks in advance!
I'm trying to get my point sprites to display with the correct opacity.
Originally, I was getting my sprite texture on a black square.
So, I added the following to my fragment shader:
"if(color.a < 0.5) "+
"discard;"+
Now, this does seem to work, in that my sprite displays without the black background. However, my texture itself is partially transparent, and it isn't showing this partial transparency: it appears solid. It's a bit difficult to explain, but I hope you understand what I mean. If I draw the same texture using a Canvas/SurfaceView, it displays correctly.
Basically I'm trying to get my textures to display in their original form (i.e. as they appear in the software they were created in, such as GIMP or Photoshop).
Would appreciate any help - thanks
First make sure your textures are loaded from transparent PNGs through a Bitmap with either the ARGB_8888 or ARGB_4444 configuration, so you don't lose the alpha channel.
Second you need to enable GL_BLEND with the glEnable() command. On Android you will write it like this: GLES20.glEnable(GLES20.GL_BLEND);. This allows you to blend the already drawn color with the new color, achieving a transparent look.
The blending function is set with glBlendFunc(). For textures with straight (non-premultiplied) alpha, use GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for regular transparency: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
If your textures are premultiplied (which is what GLUtils.texImage2D produces on Android), use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); instead.
Finally, you do not need to use discard, just set the gl_FragColor to a 4-component vector with the alpha in the fourth channel (which is what you get when reading a texture from a sampler), e.g. you could just do gl_FragColor = texture2D(sampler, texCoord); if you wanted to.
You will most likely have to turn off depth-testing with glDisable(GL_DEPTH_TEST) to avoid problems with unsorted triangles.
You can read a little bit more about transparency here.
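Putting this together, a sketch of the setup plus a pass-through fragment shader (C-style GLES 2.0 calls; on Android prefix them with GLES20.):

#include <GLES2/gl2.h>

/* State setup described above; call before drawing the point sprites. */
static void setupSpriteTransparency(void)
{
    glEnable(GL_BLEND);
    /* Straight (non-premultiplied) alpha: */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* If the texture was uploaded premultiplied (GLUtils.texImage2D does
     * this), use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead. */

    /* Avoid artifacts from unsorted transparent geometry. */
    glDisable(GL_DEPTH_TEST);
}

/* Fragment shader: pass the texture's RGBA straight through, no discard. */
static const char *kSpriteFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_sampler;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_sampler, gl_PointCoord);\n"
    "}\n";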
I am using OpenGL ES 2.0 (on Android) to draw simple 2D scene has few images. I have background image and some others which have alpha channel.
I would like to draw outline around non-transparent pixels in texture using only shader programs. After somewhat extensive search I failed to find example code. It looks like GLES 2.0 is still not that popular.
Can you provide some sample code or point me in right direction where I can find more information on how to do this?
There are a couple of ways of doing this depending on a) the quality and b) the speed you need. The common search terms are:
"glow outline"
"bloom"
"toon shader" or "toon shading"
"edge detection"
"silhouette extraction"
"mask"
1) The traditional approach is to use the stencil buffer and render to texture (a sketch of the mask pass follows these steps)
Clear the stencil buffer (usually done once per frame)
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT )
Render to Texture
Disable Depth Writes
glDepthMask( GL_FALSE );
Disable Color Buffer Writes
glColorMask( 0, 0, 0, 0 );
Enable the stencil test; set the stencil to always pass and replace
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glStencilFunc( GL_ALWAYS, 1, 1 );
Draw object into texture
Disable stencil
Enable Color Buffer Writes
Enable Depth Writes
Do an N-tap blur (such as a 5- or 7-tap), blurring the texture by rendering it onto itself in both the vertical and horizontal directions (another option is to draw the texture image scaled up slightly)
Switch to orthographic projection
Draw & Blend the texture image back into the framebuffer
Restore perspective projection
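A rough C sketch of the mask pass above (the render-to-texture setup and the blur passes are omitted; drawObject() is a placeholder for your own draw call):

#include <GLES2/gl2.h>

extern void drawObject(void);   /* placeholder: draws the object to outline */

static void renderOutlineMask(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    glDepthMask(GL_FALSE);                               /* disable depth writes */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* disable color writes */

    glEnable(GL_STENCIL_TEST);                 /* always pass, write 1 */
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glStencilFunc(GL_ALWAYS, 1, 1);

    drawObject();                              /* object marks the stencil */

    glDisable(GL_STENCIL_TEST);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}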
2) Pass along extra vertex data, namely which vertices are adjacent in the proper winding order, and dynamically generate extra outline triangles.
See: http://www.gamasutra.com/view/feature/1644/sponsored_feature_inking_the_.php?print=1
3) Use cheap edge detection. In the vertex (or fragment) shader, check the dot product of the normal with the view direction. If it lies within
-epsilon < dot(N, V) < epsilon
then you are on a silhouette edge.
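A sketch of this test done per-fragment (varying names are placeholders; epsilon controls the edge thickness):

/* Silhouette test: fragments whose normal is nearly perpendicular to the
 * view direction get the edge color.  Varying names are placeholders. */
static const char *kEdgeFragSrc =
    "precision mediump float;\n"
    "uniform vec4 u_baseColor;\n"
    "uniform vec4 u_edgeColor;\n"
    "varying vec3 v_normal;    /* view-space normal           */\n"
    "varying vec3 v_viewDir;   /* fragment towards the camera */\n"
    "void main() {\n"
    "    float d = dot(normalize(v_normal), normalize(v_viewDir));\n"
    "    float epsilon = 0.2;\n"
    "    gl_FragColor = (abs(d) < epsilon) ? u_edgeColor : u_baseColor;\n"
    "}\n";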
4) Use cheap-o-rama object scaling. It doesn't work for concave objects, of course, but depending on your quality needs it may be "good enough" (a sketch follows these steps)
Switch to a "flat" shader
Enable Alpha Testing
Draw the model scaled up slightly
Disable Alpha Testing
Draw the model but at the normal size
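In rough C pseudocode (all helper names are placeholders; in ES 2.0 the "alpha test" becomes a discard inside the flat shader and the scale is baked into the model matrix):

/* Placeholders for your own code: */
extern void useFlatShader(void);    /* constant outline color + alpha-threshold discard */
extern void useNormalShader(void);  /* regular shading                                  */
extern void setModelScale(float s);
extern void drawModel(void);

static void drawWithCheapOutline(void)
{
    useFlatShader();
    setModelScale(1.05f);   /* model scaled up slightly -> becomes the outline */
    drawModel();

    useNormalShader();
    setModelScale(1.0f);    /* normal size on top                              */
    drawModel();
}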
References:
https://developer.valvesoftware.com/wiki/L4D_Glow_Effect
http://prideout.net/blog/?p=54
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html
Related SO questions:
Outline effects in OpenGL
To get the pixel shader drawing something, there needs to be geometry.
As far as I understand, you want to draw a border around these images, but the outermost fragments generated would be image pixels in a basic implementation, so you'd overdraw them with any border.
If you want a 'line border', you cannot do anything other than draw the image triangles/quads (GL_TRIANGLES, GL_QUADS) and, in an additional call, the outline (using GL_LINES), where you may share the vertices of a single quad.
Consider that lines can't be drawn efficiently by many GPUs.
Otherwise, see below solutions:
Solution 1:
Draw the rectangle as big as the image + border will be and adjust texture coords for the image, so that it will be placed within the rectangle appropriately.
This way, no extra geometry or draw calls are required.
Set the texture border property (a single 4-component color); there will be no need for extra fragment shader calculations, since the texture unit/sampler does all the work.
Texture properties:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,borderColor4f)
I've never used a color border for a single channel texture, so this approach needs to be verified.
Solution 2:
Similar to 1, but with calculations in the fragment shader to check whether the texture coords are within the border area, instead of using the texture border (a shader sketch follows at the end of this solution). Without modification, the scalars of a texture coord range from 0.0 to 1.0.
Texture properties may be:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE)
The fragment color could be determined by any of these methods:
an additional border color attribute for the rectangle, where either the texel or that border color is then selected (could be a vertex attribute, but more likely a uniform or constant).
combination of the alpha texture with a second texture as background for the whole rectangle (like a picture frame), where again either texel is chosen.
some other math function
Of course, the color values could be mixed for image/border gradients.
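A sketch of such a border check in the fragment shader (u_borderSize is the border width in texture-coordinate units; all names are placeholders):

/* Pick the border color when the texcoord lies within u_borderSize of the
 * rectangle's edge, otherwise sample the image texture. */
static const char *kBorderFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_tex;\n"
    "uniform vec4      u_borderColor;\n"
    "uniform float     u_borderSize;   /* e.g. 0.05 */\n"
    "varying vec2      v_texCoord;\n"
    "void main() {\n"
    "    bool inBorder = any(lessThan(v_texCoord, vec2(u_borderSize))) ||\n"
    "                    any(greaterThan(v_texCoord, vec2(1.0 - u_borderSize)));\n"
    "    gl_FragColor = inBorder ? u_borderColor : texture2D(u_tex, v_texCoord);\n"
    "}\n";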
EDIT:
As the number, length and position of such outline segments will vary and can even form concave shapes, you'd need a geometry shader to do this, which is not available in ES 2.0 core. The best thing you can do is to precompute a line loop for each image on the CPU. Doing such tests in a shader is rather inefficient and even overkill, depending on image size, the hardware you actually run it on, etc. If you drew a fixed number of line segments and transformed them in the vertex shader, you could not properly cover all cases, at least not without immense effort and GPU workload.
Should you intend to change the color values of the corresponding texels instead, your fragment shader would need to fetch a massive and varying number of texels for each pixel near the texture edges, as in all other implementations. Such brute-force techniques are usually a replacement for recursive and iterative algorithms, for which the CPU is a better choice. So I suggest that you do it there, either by modifying the texture or by generating a second one for combination in the fragment shader.
Basically, you need to implement a path-finding algorithm that tries to 'get around' opaque pixels towards any edge.
Your alpha channel can be seen as a grayscale image. Look for any edge detection/drawing algorithm, for example the Canny edge detector (http://en.wikipedia.org/wiki/Canny_edge_detector). Alternatively, and probably a much better idea if your images are not procedural, pre-compute the edges.
If your goal is to blend various images and then apply the contour from the result of that blending, try rendering to a texture, then render that texture over the screen and perform the edge detection there.
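If a rough approximation is enough, a common single-pass trick is to sample the alpha of a few neighbouring texels and output the outline color where a transparent texel has an opaque neighbour. A sketch (the 4-tap neighbourhood and all names are illustrative):

/* Alpha-edge sketch: if this texel is transparent but a neighbour is opaque,
 * draw the outline color.  u_texelSize = 1.0 / texture dimensions. */
static const char *kAlphaOutlineFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_tex;\n"
    "uniform vec2      u_texelSize;\n"
    "uniform vec4      u_outlineColor;\n"
    "varying vec2      v_texCoord;\n"
    "void main() {\n"
    "    vec4 c = texture2D(u_tex, v_texCoord);\n"
    "    float a = texture2D(u_tex, v_texCoord + vec2( u_texelSize.x, 0.0)).a\n"
    "            + texture2D(u_tex, v_texCoord + vec2(-u_texelSize.x, 0.0)).a\n"
    "            + texture2D(u_tex, v_texCoord + vec2(0.0,  u_texelSize.y)).a\n"
    "            + texture2D(u_tex, v_texCoord + vec2(0.0, -u_texelSize.y)).a;\n"
    "    if (c.a < 0.1 && a > 0.0)\n"
    "        gl_FragColor = u_outlineColor;\n"
    "    else\n"
    "        gl_FragColor = c;\n"
    "}\n";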
This is my first question...
I have a square (triangle strip) with a texture (.png).
This .png has rounded corners, like a playing card.
The four extremities (corners) are transparent.
When I draw the square, I see the white color of the shape in the corners
(like a background behind the texture).
My question is:
How can I draw a transparent color for the shape, but maintain the color of the texture at full alpha?
(If I set transparent colors, then the texture also becomes transparent.)
How can I separate the two?
Thanks in advance, and sorry for my bad English.
First of all, you must use a texture format with alpha channel.
Then, if you are using the fixed function pipeline, you must enable blending:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
If you are not using the fixed function pipeline, please share the fragment shader code and a few more details on how you are using it.
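For reference, with shaders a typical fragment stage for such a textured quad just passes the texture's RGBA through and lets blending (enabled as above) cut away the transparent corners; names are illustrative:

/* Pass-through fragment shader for an RGBA-textured quad (ES 2.0). */
static const char *kCardFragSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2      v_texCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texCoord);\n"
    "}\n";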