OpenGL ES Degenerate Triangles With TRIANGLE_FAN? - android

In OpenGL ES, is it possible to use degenerate triangles (triangles with 0 area) to separate TRIANGLE_FAN objects in a vertex array? Or is this only possible with TRIANGLE_STRIP?
If the answer is no, what would be the best way to batch multiple TRIANGLE_FAN vertex array draw calls into one?

You are correct, the answer is no, since all triangles in a fan share the same first vertex; a degenerate triangle cannot break that sharing to start a new fan.
If you want to batch multiple triangle fans, it is better to use vertex buffer objects (VBOs) with GL_TRIANGLES mode and an index buffer instead. There is a small index-buffer overhead, but it provides much better flexibility.
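As a rough sketch of that batching (the buffer layout is an assumption), each fan expands into a GL_TRIANGLES index list, so many fans can share one indexed draw call:

```python
def fan_indices(start, count):
    """Index list (GL_TRIANGLES) equivalent to one triangle fan whose
    `count` vertices occupy [start, start + count) in the batched array."""
    idx = []
    for i in range(1, count - 1):
        idx += [start, start + i, start + i + 1]
    return idx

def batch_fans(counts):
    """Concatenate several fans into one GL_TRIANGLES index buffer."""
    idx, start = [], 0
    for c in counts:
        idx += fan_indices(start, c)
        start += c
    return idx

print(fan_indices(0, 4))        # [0, 1, 2, 0, 2, 3]
print(len(batch_fans([4, 5])))  # 15 indices: 2 + 3 triangles
```

Each fan of n vertices becomes n - 2 triangles, so the index buffer grows, but all fans draw in a single glDrawElements call.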

You could also triangulate your surface as a strip instead of a fan. It would allow you to use degenerate triangles and batch your draw calls into one.
If your surface is an n-gon it's easy. Just change the order of vertex creation: instead of going around the center, pick a vertex to start and generate the others by alternating between the two sides of the polygon. For a hexagon with perimeter vertices 0 to 5, the fan order 0, 1, 2, 3, 4, 5 becomes the strip order 0, 1, 5, 2, 4, 3.
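That reordering can be sketched like this (vertex numbering around the perimeter is hypothetical):

```python
def fan_to_strip(n):
    """Reorder the perimeter vertices 0..n-1 of a convex n-gon so they
    triangulate as a triangle strip: 0, 1, n-1, 2, n-2, ..."""
    order = [0]
    lo, hi = 1, n - 1
    while lo <= hi:
        order.append(lo)
        if lo != hi:
            order.append(hi)
        lo += 1
        hi -= 1
    return order

def strip_triangles(order):
    """Expand a strip vertex order into its triangles."""
    return [tuple(sorted(order[i:i + 3])) for i in range(len(order) - 2)]

# A hexagon yields n - 2 = 4 triangles, same count as a fan,
# but in a strip-friendly order that degenerates can join.
print(fan_to_strip(6))                   # [0, 1, 5, 2, 4, 3]
print(strip_triangles(fan_to_strip(6)))
```

The triangulation differs from the fan's (it zig-zags instead of pivoting around one vertex) but covers the same polygon.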

Related

Android OpenGL ES2 Many textures for one VBO

I have many fixed objects, like terrains and buildings, and I want to merge them all into one VBO to reduce draw calls and improve performance when there are many objects. I load textures and store their IDs in an array. My question is: can I bind multiple textures to that one VBO, or must I make a separate VBO for each texture? Or can I make many glDrawArrays calls for one VBO based on offset and length? If I can do that, will it perform well?
In ES 2.0, if you want to use multiple textures in a single draw call, your only good option is to use a texture atlas. Essentially, you store the texture data from multiple logical textures in a single OpenGL texture, and the texture coordinates are chosen so that the desired texture data is used for each primitive. This could be done by adjusting the original texture coordinates, or by feeding an id into the shader and applying an offset to the texture coordinates based on the id.
Of course you can use multiple glDrawArrays() calls for a single VBO, binding a different texture between them. But that goes against your goal of reducing the number of draw calls. You should certainly make sure that the number of draw calls really is a bottleneck before you spend a lot of time on these types of optimizations.
In more advanced versions of OpenGL you have additional features that can help with this use case, like array textures.
There are a couple of standard techniques that many game engines use to achieve low draw calls.
Batching: This technique combines all objects that reference the same material into one mesh. The objects do not even have to be static; if they are dynamic, you can still batch them by passing an array of model matrices.
Texture Atlas: Creating a texture atlas for all static meshes is the best way, as said in the other answer. However, you'll have to do a lot of work to combine the textures efficiently and update their UVs accordingly.
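A minimal sketch of the atlas bookkeeping, assuming a uniform grid of tiles addressed by a row-major id: each primitive's local 0..1 UVs are remapped into its tile's sub-rectangle, either on the CPU or in the shader.

```python
def atlas_uv(u, v, tile_id, cols, rows):
    """Remap a local (u, v) in [0, 1] into the sub-rectangle of tile
    `tile_id` inside a cols x rows texture atlas (row-major ids)."""
    col = tile_id % cols
    row = tile_id // cols
    return ((col + u) / cols, (row + v) / rows)

# Tile 5 of a 4x4 atlas occupies column 1, row 1:
print(atlas_uv(0.0, 0.0, 5, 4, 4))  # (0.25, 0.25)
print(atlas_uv(1.0, 1.0, 5, 4, 4))  # (0.5, 0.5)
```

In practice you would also inset the coordinates by half a texel to avoid bleeding between neighbouring tiles when filtering.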

Fastest way to draw sprites in opengles 2.0 on android

So I'm trying to figure out how to draw a single textured quad many times. My issue is that these are created and deleted, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution, as I've heard modifying buffers is extremely slow on Android, and it seems I would need to create a new one each frame since different quads might disappear randomly (collide with an enemy). If I simply do a draw call for each one, I get 20 fps at around 100 quads, which is unusable. Any advice?
Edit: I'm trying to create a bullet hell, but figuring out how to draw 500+ things is hurting my head.
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.
Using point sprites is quite cheap, but you have to do extra work in the fragment shader. GLES2 does support gl_PointSize in the vertex shader for differently sized particles, though the maximum point size is implementation dependent. See: gl_PointSize Corresponding to World Space Size
My go-to particle system is storing positions in a double buffered texture, then draw using a single draw call and a static array of quads. This is related but I'll describe it a bit more here...
Create a texture (floating point if you can, but this may limit the supported devices). Each pixel holds the particle position and maybe rotation information.
[EDITED] If you need to animate the particles you want to change the values in the texture each frame. To make it fast, get the GPU to do it in a shader. Using an FBO, draw a fullscreen polygon and update the values in the fragment shader. The problem is you can't read and write to the same texture (or shouldn't). The common approach is to double buffer the texture by creating a second one to render to while you read from the first, then ping-pong between them.
Create a VBO for drawing triangles. The positions are all the same, filling a -1 to 1 quad. However make texture coordinates for each quad address the correct pixel in the above texture.
Draw the VBO, binding your positions texture. In the vertex shader, read the position given the vertex texture coordinate. Scale the -1 to 1 vertex positions to the right size, apply the position and any rotation. Use the original -1 to 1 position as the texture coordinate to pass to the fragment shader to add any regular colour textures.
If you ever have a GLSL version with gl_VertexID, I quite like generating these coordinates in the vertex shader, which saves storing unnecessarily trivial data just to draw simple objects.
To spawn particles, use glTexSubImage2D and write a block of particles into the position texture. You may need a few textures if you start storing more particle attributes.
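The bookkeeping for the position texture can be sketched as follows, assuming a square side x side texture: each particle index maps to one texel, and all four vertices of that particle's quad carry that texel's coordinate so the vertex shader can fetch the position.

```python
def particle_texcoord(index, side):
    """Texture coordinate of the texel holding particle `index`
    in a side x side position texture (row-major)."""
    x = index % side
    y = index // side
    # Sample texel centers, not corners, to avoid filtering neighbours.
    return ((x + 0.5) / side, (y + 0.5) / side)

# 500+ particles fit comfortably in a 32x32 texture (1024 texels):
print(particle_texcoord(0, 32))   # (0.015625, 0.015625)
print(particle_texcoord(33, 32))  # second row, second column
```

The same index math addresses the glTexSubImage2D region when spawning a block of particles.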

Shared vertex indices with normals in opengl

In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal on a vertex may change depending on the face. Does this mean you are essentially forced to scrap vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? It seems like this would increase the size of my model, though, as I iterate and recreate the array, since I'm repeating tons of verts and their tex/normal attributes.
The link you posted explains the situation well. I had the same question in mind a couple of months ago, and I remember reading that tutorial.
If a vertex needs two (or more) different normals, you should add that vertex to your vertex list once per normal. For example, if your mesh is a cube, each corner vertex must be duplicated for every face that meets there.
Otherwise, indexing one vertex and calculating an averaged normal smooths the normal transitions across your mesh. For a terrain or a detailed player model, this technique saves space and gives a better-looking result.
If you're wondering how to calculate the average normal: I used the averaging algorithm from this question, and the result is fast and good.
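The averaging itself is plain vector math; here is a sketch (helper names are mine, not from the linked question):

```python
import math

def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def average_normal(face_normals):
    """Sum the normals of all faces sharing a vertex and normalize."""
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = math.sqrt(sum(c * c for c in s))
    return [c / length for c in s]

# Two faces tilted left and right average to a smoothed, upward normal:
print(average_normal([[1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]]))  # [0.0, 0.0, 1.0]
```

Weighting each face normal by its area (i.e. skipping per-face normalization, as above) usually gives nicer results on irregular meshes.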
If the normals are flat face normals, you can annotate the varying in the shader with the "flat" qualifier. This means only the value from the provoking vertex is used. With a good model exporter you can still get relatively good vertex sharing this way.
The flat qualifier is not available in GLES2, but it is part of GLES3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/V1/V2 and V1/V2/V3. If you mark the varying for the normal as "flat", only the provoking vertex's normal is used for the whole triangle. In OpenGL ES the provoking vertex is the last vertex of each primitive, so the first triangle uses Norm2 and the second uses Norm3; only the provoking vertex needs to carry the correct normal. This means you can safely reuse a vertex in other triangles, even if its normal is "wrong" for them, provided you make sure it isn't the provoking vertex of those triangles.
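The strip bookkeeping can be sketched like this (the convention parameter exists because desktop GL lets you pick the provoking vertex via glProvokingVertex, while OpenGL ES 3.0 fixes it to the last vertex):

```python
def strip_triangle(i, strip):
    """Vertices of triangle i in a triangle strip."""
    return strip[i:i + 3]

def provoking_vertex(i, strip, convention="last"):
    """Provoking vertex of triangle i in a strip: the vertex whose
    flat-qualified values apply to the whole triangle."""
    return strip[i + 2] if convention == "last" else strip[i]

strip = ["V0", "V1", "V2", "V3"]
print(strip_triangle(0, strip))    # ['V0', 'V1', 'V2']
print(provoking_vertex(0, strip))  # 'V2' under the last-vertex convention
print(provoking_vertex(1, strip))  # 'V3'
```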

Batching Multiple Rectangles in OpenGL ES

I currently am experiencing very slow performance by iterating through quad triangle strips and drawing each one separately, so I would like to batch all of my rectangles into one single draw call.
Looking around, it seems the best way to do this is to simply accept the overhead of duplicating vertices and use GL_TRIANGLES instead of GL_TRIANGLE_STRIP, drawing two separate triangles for each rectangle.
The problem is that each rectangle can have a different color, and I need to programmatically change the color of any of the rectangles. So simply using one GL_TRIANGLES call does not do the trick. Instead, it looks like I'll need to somehow index color data with my vertex data, associating a color with each rectangle. How would I go about this?
Thank you!
You can use vertex coloring.
Vertices can each have multiple channels of data, including position, color, (multiple) texture, normal, and more.
I recommend interleaving your vertices to include position and color one after the other, directly. Although you can set up a separate array of just colors and do it that way as well (just make sure you line up the colors with the positions correctly).
(Most tutorials on this are iPhone-oriented, but the OpenGL ES code should work fine on Android)
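An interleaved buffer can be sketched as follows (the field sizes are assumptions: 3 floats position, 4 floats color), together with the stride and byte offsets you would hand to glVertexAttribPointer:

```python
import struct

FLOATS_POS, FLOATS_COL = 3, 4
STRIDE = (FLOATS_POS + FLOATS_COL) * 4    # bytes per interleaved vertex
POS_OFFSET, COL_OFFSET = 0, FLOATS_POS * 4

def interleave(positions, colors):
    """Pack [(x, y, z)] and [(r, g, b, a)] into one interleaved byte buffer."""
    buf = b""
    for p, c in zip(positions, colors):
        buf += struct.pack("3f", *p) + struct.pack("4f", *c)
    return buf

quad = interleave([(0, 0, 0), (1, 0, 0)], [(1, 0, 0, 1), (0, 1, 0, 1)])
print(STRIDE, COL_OFFSET)  # 28 12 -> stride/offset for glVertexAttribPointer
print(len(quad))           # 56 bytes for two vertices
```

To recolor one rectangle, you overwrite just its color fields at the known offsets and re-upload that range with glBufferSubData.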

Draw outline using shader program in OpenGL ES 2.0 on Android

I am using OpenGL ES 2.0 (on Android) to draw a simple 2D scene with a few images. I have a background image and some others that have an alpha channel.
I would like to draw an outline around the non-transparent pixels of a texture using only shader programs. After a somewhat extensive search I failed to find example code. It looks like GLES 2.0 is still not that popular.
Can you provide some sample code or point me in right direction where I can find more information on how to do this?
There are a couple of ways of doing this depending on the a) quality and b) speed you need. The common search terms are:
"glow outline"
"bloom"
"toon shader" or "toon shading"
"edge detection"
"silhouette extraction"
"mask"
1) The traditional approach is to use the stencil buffer and render to texture
Clear the stencil buffer (usually done once per frame)
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT )
Render to Texture
Disable Depth Writes
glDepthMask( GL_FALSE );
Disable Color Buffer Writes
glColorMask( 0, 0, 0, 0 );
Enable the stencil buffer. Set stencil to always pass and replace:
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glStencilFunc( GL_ALWAYS, 1, 1 );
Draw object into texture
Disable stencil
Enable Color Buffer Writes
Enable Depth Writes
Do an N-pass "tap", such as a 5 or 7 pass tap, where you blur the texture by rendering it to itself in both the vertical and horizontal directions (another option is to scale drawing the texture image up)
Switch to orthographic projection
Draw & Blend the texture image back into the framebuffer
Restore perspective projection
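The weights for the N-tap blur in the step above can be sketched like this (binomial weights as a cheap Gaussian approximation; the tap count is an assumption to tune):

```python
def binomial_taps(n):
    """Normalized binomial weights, a cheap Gaussian approximation
    for an n-tap separable blur kernel (n odd)."""
    row = [1]
    for _ in range(n - 1):
        # Build the next row of Pascal's triangle.
        row = [a + b for a, b in zip([0] + row, row + [0])]
    total = sum(row)
    return [w / total for w in row]

print(binomial_taps(5))  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

Because the kernel is separable, one horizontal and one vertical pass with these weights approximate a full 2D Gaussian blur at a fraction of the texture fetches.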
2) Pass along extra vertex, namely which vertices are adjacent in the proper winding order, and dynamically generate extra outline triangles.
See: http://www.gamasutra.com/view/feature/1644/sponsored_feature_inking_the_.php?print=1
3) Use cheap edge detection. In the vertex shader, check the dot product of the normal with the view vector. If it falls within:
-epsilon < dot(N, V) < epsilon
then you have an edge.
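Evaluated on the CPU for illustration (the epsilon value is an assumption to tune per scene; both vectors are expected normalized):

```python
def is_silhouette(normal, view, eps=0.1):
    """A surface point faces edge-on when dot(N, V) is near zero."""
    d = sum(n * v for n, v in zip(normal, view))
    return -eps < d < eps

view = (0.0, 0.0, 1.0)
print(is_silhouette((1.0, 0.0, 0.0), view))  # True: normal perpendicular to view
print(is_silhouette((0.0, 0.0, 1.0), view))  # False: facing the camera
```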
4) Use cheap-o-rama object scaling. It doesn't work for concave objects of course but depending on your quality needs may be "good enough"
Switch to a "flat" shader
Enable Alpha Testing
Draw the model scaled up slightly
Disable Alpha Testing
Draw the model but at the normal size
References:
https://developer.valvesoftware.com/wiki/L4D_Glow_Effect
http://prideout.net/blog/?p=54
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html
Related SO questions:
Outline effects in OpenGL
To get the pixel shader drawing something, there needs to be geometry. As far as I understand, you want to draw a border around these images, but in a basic implementation the outermost fragments generated would be image pixels, so you'd overdraw them with any border.
If you want a 'line border', you cannot do much else than drawing the image triangles/quads (GL_TRIANGLES, GL_QUADS) and, in an additional call, the outline (using GL_LINES), where you may share the vertices of a single quad. Consider that lines can't be drawn efficiently by many GPUs.
Otherwise, see the solutions below:
Solution 1:
Draw the rectangle as big as the image + border will be and adjust texture coords for the image, so that it will be placed within the rectangle appropriately.
This way, no extra geometry or draw calls are required.
Set the texture border property (single 4 component color), there will be no need to do extra fragment shader calculations, the texture unit/sampler does all the work.
Texture properties:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,borderColor4f)
I've never used a color border for a single channel texture, so this approach needs to be verified. Note also that GL_CLAMP_TO_BORDER is not part of core OpenGL ES 2.0; it requires desktop GL or an extension such as OES_texture_border_clamp.
Solution 2:
Similar to 1, but with calculations in the fragment shader to check, whether the texture coords are within the border area, instead of the texture border. Without modification, the scalars of a texture coord range from 0.0 to 1.0.
Texture properties may be:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE)
The fragment color could be determined by any of these methods:
an additional border color attribute for the rectangle, where either the texel or that border color is then selected (this could be a vertex attribute, but more likely a uniform or constant).
a combination of the alpha texture with a second texture as background for the whole rectangle (like a picture frame), where again either texel is chosen.
some other math function
Of course, the color values could be mixed for image/border gradients.
EDIT:
As the number, length and position of such outline segments will vary and can even form concave shapes, you'd need to do this with a geometry shader, which is not available in ES 2.0 core. The best thing you can do is to precompute a line loop for each image on the CPU. Doing such tests in a shader is rather inefficient and even overkill, depending on image size, the hardware you actually run it on etc. If you'd draw a fixed amount of line segments and transform them using the vertex shader, you can not properly cover all cases, at least not without immense effort and GPU workload.
Should you intend to change the color values of corresponding texels, your fragment shader would need to fetch a massive and varying amount of texels for each neighbour pixel towards the texture edges as in all other implementations. Such brute force techniques are usually a replacement for recursive and iterative algos, for which the CPU is a better choice. So I suggest that you do it there by either modifying the texture or generate a second one for combination in the fragment shader.
Basically, you need to implement a path finding algo, which tries to 'get around' opaque pixels towards any edge.
Your alpha channel can be seen as a grayscale image. Look for any edge detection/drawing algorithm, for example the Canny edge detector (http://en.wikipedia.org/wiki/Canny_edge_detector). Alternatively, and probably a much better idea if your images are not procedural, pre-compute the edges.
If your goal is to blend various images and then apply the contour to the result of that blending, try rendering to a texture, then render that texture to the screen and perform the edge detection algorithm.
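As a minimal CPU-side sketch of that outline idea (a simple 4-neighbour test rather than a full Canny pass): a pixel belongs to the outline if it is opaque but touches a transparent neighbour.

```python
def outline_mask(alpha):
    """Mark opaque pixels that have at least one transparent 4-neighbour
    (pixels outside the image count as transparent)."""
    h, w = len(alpha), len(alpha[0])
    def transparent(y, x):
        return y < 0 or y >= h or x < 0 or x >= w or alpha[y][x] == 0
    return [[1 if alpha[y][x] and any(
                transparent(y + dy, x + dx)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)))
             else 0
             for x in range(w)]
            for y in range(h)]

alpha = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(outline_mask(alpha))  # every opaque pixel here touches the border
```

A fragment shader version does the same thing with four extra texture fetches per fragment, which is why precomputing the edges is attractive for static images.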
