I'm trying to use vertex arrays to draw a reasonably large mesh with a large number of vertices. Texture coordinates have been determined for these vertices, and it's easy enough to draw the mesh in immediate mode along the lines of:
glBegin(GL_TRIANGLES);
for (int faceIdx = 0; faceIdx < nFaces; ++faceIdx) {
    for (int corner = 0; corner < 3; ++corner) {
        // The texture coordinate must be set before the vertex call it applies to.
        glTexCoord2fv(&texCoord[2 * vIdx]);     // 2 floats per tex coord
        glVertex3fv(&vertexArray[3 * vIdx]);    // 3 floats per position
        ++vIdx;
    }
}
glEnd();
However, for readability, speed, and the rest, I want to use vertex arrays (with a view to moving to VBOs). Is there a way to avoid putting a single vertex into the array multiple times?
As I understand it, at the moment it's necessary to specify each vertex of the mesh as many times as it appears in a face of the mesh, because each vertex maps to multiple texture coordinates (the textures are captured from a real-world image of the object the mesh approximates); i.e. my vertex/tex-coord array reads as if I'd filled it in immediate mode.
Is there any way to use vertex arrays, whilst specifying the texture coordinates, without using redundant (by which I mean repeated) vertices?
A single vertex comprises all of the attributes that make up that vertex. So two vertices that share the same position but have different texture coordinates are conceptually different vertices. So no, there is no simple way around repeating vertex positions for different texCoords.
But usually such vertex duplication is only necessary in a few regions (like sharp edges, because of different normals, or, as in your case, a texture seam). So do all your faces' corners really have different texCoords? Maybe you can pre-process your data a bit and find neighbouring faces that share both positions and texCoords and can therefore share a vertex. I would be surprised if that weren't the case for the vast majority of vertices, leaving you with only a small number of duplicated ones.
If I understand your question correctly, there is indeed a way to avoid putting the same vertex into the vertex buffer multiple times. You can use an index buffer and reference the same vertex through its index. This can also speed up rendering significantly.
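For example, here is a minimal sketch of indexed drawing with OpenGL ES 2.0 on Android (the vertexBuffer/texCoordBuffer data and the positionHandle/texCoordHandle attribute locations are placeholders for your own setup; glDrawElements works the same way in desktop GL):

// Each unique position/texCoord combination is stored once; faces are lists of indices.
short[] indices = { 0, 1, 2,   2, 1, 3 };   // two triangles sharing an edge
ShortBuffer indexBuffer = ByteBuffer.allocateDirect(indices.length * 2)
        .order(ByteOrder.nativeOrder()).asShortBuffer();
indexBuffer.put(indices).position(0);

GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glVertexAttribPointer(texCoordHandle, 2, GLES20.GL_FLOAT, false, 0, texCoordBuffer);
GLES20.glEnableVertexAttribArray(texCoordHandle);

// Shared vertices are referenced by index instead of being repeated in the vertex data.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indexBuffer);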
I have many static objects, like terrain and buildings, and I want to merge them all into one VBO to reduce draw calls and improve performance when there are many objects. I load textures and store their IDs in an array. My question is: can I bind several textures to that one VBO, or must I make a separate VBO for each texture? Or can I issue many glDrawArrays calls for one VBO, each with its own offset and length? If I can do that, will it still perform well?
In ES 2.0, if you want to use multiple textures in a single draw call, your only good option is to use a texture atlas. Essentially, you store the texture data from multiple logical textures in a single OpenGL texture, and the texture coordinates are chosen so that the desired texture data is used for each primitive. This could be done by adjusting the original texture coordinates, or by feeding an id into the shader and applying an offset to the texture coordinates based on the id.
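As a rough illustration of the first option (remapping the original texture coordinates at mesh-build time), assuming the atlas is a simple grid of equally sized tiles, which is purely an assumption for this sketch:

// Remap a (u, v) coordinate from a logical texture into its tile of a cols x rows atlas.
static float[] toAtlasUV(float u, float v, int tileIndex, int cols, int rows) {
    float tileW = 1.0f / cols;
    float tileH = 1.0f / rows;
    float offsetU = (tileIndex % cols) * tileW;
    float offsetV = (tileIndex / cols) * tileH;
    return new float[] { offsetU + u * tileW, offsetV + v * tileH };
}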
Of course you can use multiple glDrawArrays() calls for a single VBO, binding a different texture between them. But that goes against your goal of reducing the number of draw calls. You should certainly make sure that the number of draw calls really is a bottleneck for you before you spend a lot of time on these types of optimizations.
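For completeness, that per-texture variant would look roughly like this (mergedVbo, positionHandle, and the per-object offset/count arrays are placeholders for however you track your merged objects):

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mergedVbo);
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glEnableVertexAttribArray(positionHandle);
for (int i = 0; i < objectCount; i++) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[i]);
    // Each object occupies a contiguous range of vertices inside the shared VBO.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, objectFirstVertex[i], objectVertexCount[i]);
}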
In more advanced versions of OpenGL you have additional features that can help with this use case, like array textures.
There are a couple of standard techniques that many game engines use to keep draw-call counts low.
Batching: This technique gathers all objects that use the same material and combines them into one mesh. The objects do not even have to be static: if they are dynamic, you can still batch them by passing an array of model matrices (see the sketch after this list).
Texture Atlas: Creating a texture atlas for all static meshes is the best way, as said in the other answer. However, you'll have to do a fair amount of work to pack the textures efficiently and update their UVs accordingly.
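A minimal sketch of the dynamic-batching idea, assuming each batched vertex carries an index identifying its object and the shader declares something like uniform mat4 u_ModelMatrix[MAX_OBJECTS] (all names here are illustrative, not from any particular engine):

// Upload one model matrix per batched object, then draw the whole batch in one call.
float[] modelMatrices = new float[16 * objectCount];
for (int i = 0; i < objectCount; i++) {
    Matrix.setIdentityM(modelMatrices, 16 * i);
    Matrix.translateM(modelMatrices, 16 * i, objectX[i], objectY[i], objectZ[i]);
}
int handle = GLES20.glGetUniformLocation(program, "u_ModelMatrix");
GLES20.glUniformMatrix4fv(handle, objectCount, false, modelMatrices, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, batchVertexCount);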
So I'm trying to figure out how to draw a single textured quad many times. My issue is that the quads are created and deleted, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution, as I've heard modifying buffers is extremely slow on Android, and it seems I would need to create a new one each frame, since different quads might disappear at random (collide with an enemy). If I simply issue a draw call for each one, I get about 20 fps with around 100 quads, which is unusable. Any advice?
Edit: I'm trying to create a bullet hell game, but figuring out how to draw 500+ things is hurting my head.
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.
Using point sprites is quite cheap, but you have to do extra work in the fragment shader. GLES2 does let you write gl_PointSize in the vertex shader if you need different-sized particles, although the maximum point size is implementation-dependent (see: gl_PointSize Corresponding to World Space Size).
My go-to particle system stores positions in a double-buffered texture, then draws everything with a single draw call and a static array of quads. This is related, but I'll describe it a bit more here...
Create a texture (floating point if you can, but this may limit the supported devices). Each pixel holds the particle position and maybe rotation information.
If you need to animate the particles, you want to change the values in the texture each frame. To make it fast, get the GPU to do it in a shader: using an FBO, draw a full-screen polygon and update the values in the fragment shader. The problem is you can't (or shouldn't) read and write the same texture. The common approach is to double buffer the texture by creating a second one to render to while you read from the first, then ping-pong between them.
Create a VBO for drawing the quads as triangles. The positions are all the same, each filling a -1 to 1 quad; however, make the texture coordinates for each quad address the correct pixel in the above texture.
Draw the VBO, binding your positions texture. In the vertex shader, read the position given the vertex texture coordinate. Scale the -1 to 1 vertex positions to the right size, apply the position and any rotation. Use the original -1 to 1 position as the texture coordinate to pass to the fragment shader to add any regular colour textures.
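A rough sketch of such a vertex shader (names are made up for illustration; rotation is omitted, the stored positions are assumed to already be in clip space, and note that vertex texture fetch is optional in GLES2):

// Vertex shader source: fetch per-particle data from the positions texture.
private static final String PARTICLE_VERTEX_SHADER =
        "uniform sampler2D u_Positions;\n" +
        "uniform float u_ParticleSize;\n" +
        "attribute vec2 a_Corner;      // -1..1 quad corner\n" +
        "attribute vec2 a_ParticleUV;  // which texel holds this particle\n" +
        "varying vec2 v_TexCoord;\n" +
        "void main() {\n" +
        "  vec2 centre = texture2D(u_Positions, a_ParticleUV).xy;\n" +
        "  gl_Position = vec4(centre + a_Corner * u_ParticleSize, 0.0, 1.0);\n" +
        "  v_TexCoord = a_Corner * 0.5 + 0.5;  // reuse the corner as a colour-texture coordinate\n" +
        "}\n";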
If you ever have a GLSL version with gl_VertexID, I quite like generating these coordinates in the vertex shader, which saves storing unnecessarily trivial data just to draw simple objects. This, for example.
To spawn particles, use glTexSubImage2D and write a block of particles into the position texture. You may need a few textures if you start storing more particle attributes.
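Spawning could then look something like this (a sketch; assumes a floating-point RGBA position texture and that newParticleData is packed to match its layout):

// Write a block of freshly spawned particles into one row of the position texture.
FloatBuffer newParticleData = ByteBuffer.allocateDirect(spawnCount * 4 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
// ... fill with x, y, rotation, age for each new particle, then rewind ...
newParticleData.position(0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, positionTexture);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, firstColumn, row,
        spawnCount, 1, GLES20.GL_RGBA, GLES20.GL_FLOAT, newParticleData);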
In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean you are essentially forced to give up vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/ seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share vertices? It seems like this would increase the size of my model, though, since as I iterate over and recreate the array I am repeating tons of vertices and their tex/normal attributes.
The link you posted explains the situation well. I had the same question in mind a couple of months ago; I remember reading that tutorial.
If a vertex really needs two different normals, then you have to store that vertex twice (once per normal) and reference the appropriate copy from your index list. For example, if your mesh is a cube, every corner has to be duplicated, because each of its faces needs a different normal.
Otherwise, indexing one shared vertex and calculating an averaged normal effectively smooths the normal transitions across your mesh. For example, if your mesh is a terrain or a detailed player model, you can use this technique; it saves space and gives a better-looking result.
If you're asking how to calculate the average normal: I used the average-normal algorithm from this question, and the result is fast and looks good.
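For reference, a minimal sketch of the usual per-vertex averaging (plain Java over flat positions/indices arrays; this is not taken from the linked answer):

// Accumulate each face normal into its three vertices, then normalize per vertex.
float[] normals = new float[positions.length];   // 3 floats per vertex, initially zero
for (int f = 0; f < indices.length; f += 3) {
    int i0 = 3 * indices[f], i1 = 3 * indices[f + 1], i2 = 3 * indices[f + 2];
    // Two edge vectors of the triangle.
    float e1x = positions[i1] - positions[i0], e1y = positions[i1 + 1] - positions[i0 + 1], e1z = positions[i1 + 2] - positions[i0 + 2];
    float e2x = positions[i2] - positions[i0], e2y = positions[i2 + 1] - positions[i0 + 1], e2z = positions[i2 + 2] - positions[i0 + 2];
    // Face normal = cross(e1, e2); its length also weights the average by triangle area.
    float nx = e1y * e2z - e1z * e2y, ny = e1z * e2x - e1x * e2z, nz = e1x * e2y - e1y * e2x;
    for (int i : new int[] { i0, i1, i2 }) {
        normals[i] += nx; normals[i + 1] += ny; normals[i + 2] += nz;
    }
}
for (int v = 0; v < normals.length; v += 3) {
    float len = (float) Math.sqrt(normals[v] * normals[v] + normals[v + 1] * normals[v + 1] + normals[v + 2] * normals[v + 2]);
    if (len > 0) { normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len; }
}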
If the normals represent flat faces, then you can annotate the corresponding varying with the "flat" interpolation qualifier (in both the vertex and fragment shaders). This means only the value from the provoking vertex is used. With a good model exporter you can get relatively good vertex sharing this way.
It is not available in GLES2 (GLSL ES 1.00 has no interpolation qualifiers), but it is part of GLES3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/V1/V2 and V1/V2/V3. If you mark the varying for the normal as "flat", each triangle takes the value from a single vertex, known as the provoking vertex; in OpenGL ES 3.0 this is the last vertex of the triangle, so the first triangle will use Norm2 and the second will use Norm3 (i.e. only the provoking vertex needs to have the correct normal). This means that you can safely reuse vertices in other triangles, even if their normal is "wrong", provided you make sure the reused vertex isn't the provoking vertex of that triangle.
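For reference, the declarations would look something like this in GLSL ES 3.00 (held here as Java string constants; the variable names are illustrative):

// The "flat" qualifier must match between the vertex output and the fragment input.
private static final String VERTEX_SNIPPET =
        "#version 300 es\n" +
        "in vec3 a_Normal;\n" +
        "flat out vec3 v_Normal;  // not interpolated: the provoking vertex's value is used\n" +
        "// ...\n";
private static final String FRAGMENT_SNIPPET =
        "#version 300 es\n" +
        "precision mediump float;\n" +
        "flat in vec3 v_Normal;\n" +
        "// ...\n";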
Dabbling around with OGLES in Android. I want to create some triangles and translate them. Straightforward enough.
I look at the examples and see that FloatBuffers are recommended as the storage construct for polygon coordinates using GLES in Android, because they are on native code steroids. Good stuff! Explanation here.
Why FloatBuffer instead of float[]?
Now what's confusing to me is that the Matrix class in Android operates almost exclusively on Java float arrays. So on the one hand I'm being told to use FloatBuffers for speed, but on the other hand every transformation I want to do now has to look like this?
float[] myJavaFloatArray = new float[polylength];
myFloatBuffer.position(0);    // rewind before copying out
myFloatBuffer.get(myJavaFloatArray);
Matrix.translateM(myJavaFloatArray, 0, -1.0f, 0.0f, 0.0f);
myFloatBuffer.position(0);    // rewind again before copying back
myFloatBuffer.put(myJavaFloatArray);
Is this really the best practice? Is the assumption just that my ratio of translates to draws will be low enough that this expense is worth it? What if I'm doing lots of translates, one per object per frame? Is that an argument for using Java float arrays throughout?
First of all, isn't there an array() method on FloatBuffer that returns a float[] backed directly by the buffer data? If your buffer has a backing array, you can pass that to the translate call so you don't copy the data around memory, which turns your lines of code into a single line. (Note that a direct buffer created with allocateDirect(), which is what you normally hand to OpenGL, generally has no backing array, and array() will then throw an exception.)
Matrix.translateM(myFloatBuffer.array(), 0, -1.0f, 0.0f, 0.0f);
Second, it is not best practice to transform your vertex data on the CPU; you have a vertex shader for that. Build matrices and push them to the GPU so you can benefit from it. Also, if you transform the data the way you do, you effectively corrupt it: if the float buffer contains the vertex array for some object you want to draw in several locations, you would have to either translate it back to the origin before translating it to a new location for each draw call, or keep multiple copies of the vertex data.
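A minimal sketch of that approach with the Android Matrix class (the u_MvpMatrix uniform name, the viewProjection matrix, and the handles are placeholders; the vertex shader is assumed to multiply each position by this uniform):

float[] model = new float[16];
float[] mvp = new float[16];
Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, -1.0f, 0.0f, 0.0f);          // per-object transform, applied to a 16-float matrix
Matrix.multiplyMM(mvp, 0, viewProjection, 0, model, 0);
int mvpHandle = GLES20.glGetUniformLocation(program, "u_MvpMatrix");
GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvp, 0);  // the vertex data itself stays untouched
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);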
Anyway, although it can be very handy to be able to transform the buffer data directly, you should avoid doing it per frame if possible. Do it at load time or in the background when preparing some element. For per-frame work, rather use the vertex shader. If you are dealing with ES 1.x and have no vertex shader, there are still built-in matrix operations that work the same way (translate, rotate, scale, load identity, push, pop...).
In OpenGL ES, is it possible to use degenerate triangles (triangles with 0 area) to separate TRIANGLE_FAN objects in a vertex array? Or is this only possible with TRIANGLE_STRIP?
If the answer is no, what would be the best way to batch multiple TRIANGLE_FAN vertex array draw calls into one?
You are correct, the answer is no, since all the triangles in a fan share the fan's first vertex.
If you want to batch multiple triangle fans, it would be better to use vertex buffer objects (VBOs) with GL_TRIANGLES mode instead. There will be a small index-buffer overhead, but it provides better flexibility.
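Converting a fan into indexed GL_TRIANGLES is mechanical; here is a sketch in Java (firstIndex is the fan's offset inside the shared vertex buffer, fanVertexCount the number of vertices in that fan):

// A fan with N vertices becomes N - 2 triangles, all sharing the centre vertex.
short[] fanToTriangles(int firstIndex, int fanVertexCount) {
    short[] indices = new short[(fanVertexCount - 2) * 3];
    for (int i = 0; i < fanVertexCount - 2; i++) {
        indices[3 * i]     = (short) firstIndex;            // centre vertex
        indices[3 * i + 1] = (short) (firstIndex + i + 1);
        indices[3 * i + 2] = (short) (firstIndex + i + 2);
    }
    return indices;
}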
You could also triangulate your surface as a strip instead of a fan. It would allow you to use degenerate triangles and batch your draw calls into one.
If your surface is an n-gon, it's easy: just change the order of vertex creation. Instead of going around the center, pick a vertex to start from and generate the others by alternating between the two sides. Here is an example with a hexagon; the left image uses a triangle strip, the middle one uses a fan.
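A sketch of that reordering in Java, purely illustrative: for an n-gon whose perimeter vertices are numbered 0..n-1 in order, it emits the strip order 0, n-1, 1, n-2, 2, ...

// Reorder an n-gon's perimeter vertices so they can be drawn as one triangle strip.
short[] ngonStripOrder(int n) {
    short[] order = new short[n];
    int lo = 0, hi = n;               // walk inwards from both ends of the perimeter
    for (int i = 0; i < n; i++) {
        order[i] = (short) ((i % 2 == 0) ? lo++ : --hi);
    }
    return order;
}

For a hexagon this gives 0, 5, 1, 4, 2, 3, so consecutive triples form the strip triangles (0,5,1), (5,1,4), (1,4,2), (4,2,3), which together cover the hexagon.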