Shared vertex indices with normals in OpenGL - Android

In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean you are essentially forced to scrap vertex sharing in OpenGL? This article, http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/, seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? It seems like this would increase the size of my model, though, since I'm repeating tons of verts and their tex/normal attributes as I iterate and recreate the array.

The link you posted explains the situation well. I had the same question in mind a couple of months ago, and I remember reading that tutorial.
If you need two genuinely different normals at a vertex, you have to add that vertex twice to your vertex data and index each copy separately. For example, if your mesh is a cube, each corner needs one copy per face that meets there, since each face has its own flat normal.
Otherwise, indexing a single shared vertex and calculating an average normal effectively smooths the normal transitions across your mesh. If your mesh is, for example, a terrain or a detailed player model, you can use this technique: you save space and get a better-looking result.
If you're asking how to calculate the average normal: I used the normal-averaging algorithm from this question, and the result is fast and good.
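For reference, here is a minimal sketch of that averaging approach in Java (hypothetical names: positions holds x/y/z per vertex, indices holds three vertex indices per triangle):

// Accumulate each face's normal into its three vertices, then normalize.
static float[] averageNormals(float[] positions, int[] indices) {
    float[] normals = new float[positions.length];
    for (int i = 0; i < indices.length; i += 3) {
        int a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
        // Two edge vectors of the triangle.
        float e1x = positions[b] - positions[a], e1y = positions[b + 1] - positions[a + 1], e1z = positions[b + 2] - positions[a + 2];
        float e2x = positions[c] - positions[a], e2y = positions[c + 1] - positions[a + 1], e2z = positions[c + 2] - positions[a + 2];
        // Their cross product is the face normal; its length is proportional
        // to the face area, which conveniently weights larger faces more.
        float nx = e1y * e2z - e1z * e2y;
        float ny = e1z * e2x - e1x * e2z;
        float nz = e1x * e2y - e1y * e2x;
        for (int v : new int[] { a, b, c }) {
            normals[v] += nx; normals[v + 1] += ny; normals[v + 2] += nz;
        }
    }
    for (int v = 0; v < normals.length; v += 3) {  // normalize each sum
        float len = (float) Math.sqrt(normals[v] * normals[v]
                + normals[v + 1] * normals[v + 1] + normals[v + 2] * normals[v + 2]);
        if (len > 0f) { normals[v] /= len; normals[v + 1] /= len; normals[v + 2] /= len; }
    }
    return normals;
}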

If the normals are constant across each face, you can annotate the varying in the fragment shader with the "flat" qualifier. This means only the value from the provoking vertex is used for the whole triangle. With a good model exporter you can get relatively good vertex sharing this way.
The "flat" qualifier is not available in GLES2 (GLSL ES 1.00), but it is part of GLES3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/V1/V2 and V1/V2/V3. If you mark the varying for the normal as "flat", each triangle takes its normal from a single vertex, the provoking vertex; in GLES3 this is the last vertex of the primitive, so the first triangle uses Norm2 and the second uses Norm3. This means you can safely reuse vertices in other triangles, even if their normal is "wrong" for those triangles, provided you make sure such a vertex is never the provoking vertex there.
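As a sketch, assuming GLES3, this is what the qualifier looks like in a GLSL ES 3.00 shader pair (written as Android-style Java string constants; all names are illustrative):

// "flat" disables interpolation: every fragment of a triangle receives the
// value from the triangle's provoking vertex (the last vertex, in GLES3).
static final String VERTEX_SHADER =
        "#version 300 es\n" +
        "in vec3 aPosition;\n" +
        "in vec3 aNormal;\n" +
        "flat out vec3 vNormal;\n" +
        "void main() {\n" +
        "    vNormal = aNormal;\n" +
        "    gl_Position = vec4(aPosition, 1.0);\n" +
        "}\n";

static final String FRAGMENT_SHADER =
        "#version 300 es\n" +
        "precision mediump float;\n" +
        "flat in vec3 vNormal;\n" +
        "out vec4 fragColor;\n" +
        "void main() {\n" +
        "    // Visualize the un-interpolated normal.\n" +
        "    fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);\n" +
        "}\n";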

Related

using glDrawElements with texture co-ordinates [duplicate]

I'm trying to use vertex arrays to draw a reasonably large mesh containing a large number of vertices. The texture coordinates have already been determined for these, and it's easy enough to draw in immediate mode, along the lines of:
glBegin(GL_TRIANGLES);
for ( int faceIdx = 0; faceIdx < nFaces; ++faceIdx ) {
    glTexCoord2fv(&texCoord[vIdx]);      // texcoord must be set before the vertex it applies to
    glVertex3fv(&vertexArray[vIdx++]);
    glTexCoord2fv(&texCoord[vIdx]);
    glVertex3fv(&vertexArray[vIdx++]);
    glTexCoord2fv(&texCoord[vIdx]);
    glVertex3fv(&vertexArray[vIdx++]);
}
glEnd();
However, for readability, speed, and the rest, I want to use vertex arrays (with a view to moving to VBOs). Is there a way of getting around putting a single vertex into the array multiple times?
As I understand it at the moment, it's necessary to specify each vertex of the mesh as many times as it appears in a face of the mesh, since each vertex maps to multiple texture coordinates (the textures are captured from a real-world image of the object the mesh approximates), i.e. my vertex/texcoord array reads as if I'd filled it in immediate mode.
Is there any way to use vertex arrays, whilst specifying the texture coordinates, without using redundant (by which I mean repeated) vertices?
A single vertex consists of all the attributes that make it up. So two vertices that share the same position but have different texture coordinates are conceptually different vertices, and no, there is no simple way around repeating vertex positions for different texCoords.
But usually such vertex duplications are only necessary in a few isolated regions (sharp edges, where the normals differ, or, as in your case, texture seams). So do all your faces' corners really have different texCoords? Maybe you can pre-process your data a bit and find neighbouring faces that share both positions and texCoords, and can therefore share a vertex. I would be surprised if that weren't the case for a great many vertices, leaving you with only a small number of duplicated ones.
If I understand your question correctly, there is indeed a way to avoid putting the same vertex into the vertex buffer multiple times: use an index buffer, and use indices to refer to the same vertex. This can also speed up rendering significantly.
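As a rough sketch of the index-buffer route on Android (GLES2-style Java; attribute setup for the interleaved position/texcoord data is omitted, and all names are illustrative):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
import android.opengl.GLES20;

// A quad as two triangles that share two corners: four unique vertices
// instead of six, with the sharing expressed through small indices.
float[] vertices = {
    // x,   y,  z,    u,  v
    -1f, -1f, 0f,   0f, 0f,   // 0: bottom-left
     1f, -1f, 0f,   1f, 0f,   // 1: bottom-right
     1f,  1f, 0f,   1f, 1f,   // 2: top-right
    -1f,  1f, 0f,   0f, 1f    // 3: top-left
};
short[] indices = { 0, 1, 2,   0, 2, 3 };  // vertices 0 and 2 are reused

ShortBuffer indexBuffer = ByteBuffer.allocateDirect(indices.length * 2)
        .order(ByteOrder.nativeOrder()).asShortBuffer();
indexBuffer.put(indices).position(0);

// Draw all six corners from four unique vertices in one call.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length,
        GLES20.GL_UNSIGNED_SHORT, indexBuffer);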

How to deal with multiple objects in OpenGL ES 2

There are many examples for OpenGL ES 2 showing how to draw a single triangle or rectangle.
Google provides an example for drawing shapes (triangles, rectangles) by creating a Triangle and a Rectangle class which basically do all the OpenGL work required to draw these objects.
But what should you do if you have more than one triangle? What if you have objects consisting of hundreds of triangles with different colors, sizes, and positions? I can't find any good tutorial for dealing with complex scenarios in OpenGL ES.
My approaches:
So I tried it out. First of all I slightly changed the Triangle class into something more dynamic (the constructor now takes the position and color of the triangle). Basically this is "enough" for drawing complex scenes: every object would consist of hundreds of these Triangle instances, and I would render each of them separately. But this consumes a lot of computing power, and I think most of the steps in the rendering process are redundant.
So I tried to "group" triangles into different categories. Now every object has its own vertex buffer and puts all of its triangles into it at once. The performance is far better than before (when every triangle had its own buffer), but I still think this isn't the correct way to go.
Is there any good example on the internet where someone draws more than simple triangles, or do you know where I can get this information? I really like OpenGL, but it's pretty hard for beginners because of the lack of tutorials (for OpenGL ES 2 on Android).
The standard representation of (triangle) meshes for rendering is a vertex array containing all the vertices in the mesh, plus an index array storing the connectivity (the triangles). You definitely want at most one draw call per object (and you might even be able to coalesce several objects).
Interleaved attribute arrays are the most efficient variant with respect to cache efficiency, so one buffer object per object for the whole vertex array is enough. You might even combine several objects into one buffer object, even if you cannot use a single draw call for both.
As GLES may be limited to 16-bit indices, large models must be split into several "patches".
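A sketch of that layout (GLES2-style Java; vertexData, indexData, the attribute locations, and indexCount are assumed to exist, and the glEnableVertexAttribArray calls are omitted):

// One interleaved VBO for the whole object: pos(3) + normal(3) + uv(2) per vertex.
final int STRIDE = (3 + 3 + 2) * 4;  // bytes per vertex

int[] buffers = new int[2];
GLES20.glGenBuffers(2, buffers, 0);

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.capacity() * 4,
        vertexData, GLES20.GL_STATIC_DRAW);

GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity() * 2,
        indexData, GLES20.GL_STATIC_DRAW);

// All attributes share the stride and differ only in their byte offset.
GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, STRIDE, 0);
GLES20.glVertexAttribPointer(aNormal,   3, GLES20.GL_FLOAT, false, STRIDE, 3 * 4);
GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, STRIDE, 6 * 4);

// One draw call for the whole object; GL_UNSIGNED_SHORT caps a patch at 65536 vertices.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount, GLES20.GL_UNSIGNED_SHORT, 0);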

Translating - GLES translateM() - a floatbuffer in android

Dabbling around with OGLES in Android. I want to create some triangles and translate them. Straightforward enough.
I look at the examples and see that FloatBuffers are recommended as the storage construct for polygon coordinates using GLES in Android, because they are on native code steroids. Good stuff! Explanation here.
Why FloatBuffer instead of float[]?
Now, what's confusing to me is that the Matrix class in Android operates almost exclusively on Java float arrays. So on the one hand I'm being told to use FloatBuffers for speed, but on the other hand every transformation I want to do now has to look like this?
float[] myJavaFloatArray = new float[polylength];
myFloatBuffer.position(0);                 // rewind before reading
myFloatBuffer.get(myJavaFloatArray);
Matrix.translateM(myJavaFloatArray, 0, -1.0f, 0.0f, 0.0f);
myFloatBuffer.position(0);                 // rewind again before writing back
myFloatBuffer.put(myJavaFloatArray);
Is this really the best practice? Is the assumption just that my ratio of translates to draws will be low enough that this expense is worth it? What if I'm doing lots of translates, one on every object per frame, is that an argument for using java float arrays throughout?
First of all, isn't there a method array() on FloatBuffer that returns a float[] pointing directly to the buffer data? You could use that in the translate call so you don't copy the data around in memory, which would turn your four lines into one:
Matrix.translateM(myFloatBuffer.array(), 0, -1.0f, 0.0f, 0.0f);
(Caveat: array() only works on buffers backed by an accessible Java array; a direct buffer, which is what you normally allocate for GLES, throws UnsupportedOperationException.)
Second, it is not best practice to transform your vertex data on the CPU; you have a vertex shader for that. Use matrices and push them to the GPU so you can benefit from it. Also, if you transform the data the way you do, you effectively corrupt it: if the float buffer contained the vertex array for an object you want to draw in several locations, you would have to either translate it back to the origin before translating it to each new location, or keep multiple instances of the vertex data.
Anyway, although it can be very handy to be able to transform the array buffer, you should avoid doing it per frame if possible. Do it at loading time or in the background when preparing some element. Per frame, rather use the vertex shader for that. If you are dealing with ES1 and have no vertex shader, there are still matrix operations that work the same way (translate, rotate, scale, set, push, pop...).
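Sketched out, the recommended per-frame pattern looks like this (a minimal example; program is assumed to be a linked GLES2 program whose vertex shader declares uniform mat4 uModelMatrix):

// Leave the FloatBuffer of vertices untouched; move the object by
// updating a uniform matrix instead.
float[] modelMatrix = new float[16];
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, -1.0f, 0.0f, 0.0f);  // cheap, per frame

int uModel = GLES20.glGetUniformLocation(program, "uModelMatrix");
GLES20.glUniformMatrix4fv(uModel, 1, false, modelMatrix, 0);
// The vertex shader then applies it:
//     gl_Position = uProjection * uView * uModelMatrix * vec4(aPosition, 1.0);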

In OpenGL ES for Android, what is the optimal way to translate objects onto a map?

I've begun working with Android, so I'm developing a little game (mostly for learning purposes). My game has a simple 2D non-scrolling map, and I have many objects that will be placed on the map. The objects are already modeled statically in their classes, and I understand how to shift these into a float buffer and send them to the shaders.
I understand the gist of the model, view, and projection matrices, but I've heard that translating in the shader, or passing a specific model matrix for each object, is inefficient.
How do I optimally take the modeled objects and place them in the appropriate spots on the map (world coordinates)? Where should that translation occur (before or during the shaders? As part of the model matrix?)
(Pseudo-code is sufficient for an answer if necessary.)
Thank you!
It all comes down to the ratio of how many vertices (of a model) are drawn per translation (i.e. per change of transformation). As a general rule, for rendering the bulk of your geometry, at least about 100 vertices should be sent per uniform change if you want to max out the pipeline. Note that if your total number of vertices is relatively small (below about 10,000), you'll probably not notice any performance penalty on modern systems.
So, if your objects are nontrivial, i.e. have a significant number of vertices, changing the uniforms is the way to go. If your objects are simple, placing them into a shared vertex array, with an additional vertex attribute indexing into a uniform array of transformations, will make better use of the pipeline (see the sketch after this answer).
It really depends on how complex your objects are.
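To illustrate the second technique, a hedged sketch of such a batching vertex shader (GLSL ES written as a Java string; the array size and all names are arbitrary):

// Each vertex carries an index that selects its object's transform from a
// uniform array, so many small objects can share a single draw call.
static final String BATCHED_VERTEX_SHADER =
        "uniform mat4 uTransforms[16];\n" +   // one matrix per small object
        "attribute vec3 aPosition;\n" +
        "attribute float aObjectIndex;\n" +   // which transform this vertex uses
        "void main() {\n" +
        "    gl_Position = uTransforms[int(aObjectIndex)] * vec4(aPosition, 1.0);\n" +
        "}\n";

All 16 matrices can then be uploaded at once with glUniformMatrix4fv(location, 16, false, matrices, 0) before the shared draw call.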

Android OpenGL ES glDrawArrays or glDrawElements?

Which is better to use: glDrawArrays or glDrawElements? Is there any difference?
For both, you pass OpenGL some buffers containing vertex data.
glDrawArrays is basically "draw this contiguous range of vertices, using the data I gave you earlier".
Good:
You don't need to build an index buffer
Bad:
If you organise your data into GL_TRIANGLES, you will have duplicate vertex data for adjacent triangles. This is obviously wasteful.
If you use GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN to try to avoid duplicating data, it isn't terribly effective, and you'd have to make a rendering call for each strip and fan. OpenGL calls are expensive and should be avoided where possible.
With glDrawElements, you pass in a buffer containing the indices of the vertices you want to draw.
Good:
No duplicate vertex data - you just index the same data for different triangles
You can just use GL_TRIANGLES and rely on the vertex cache to avoid processing the same data twice - no need to re-organise your geometry data or split rendering over multiple calls
Bad:
Memory overhead of the index buffer
My recommendation is to use glDrawElements.
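The two calls side by side, for the quad case (GLES2-style Java; buffer and attribute setup omitted):

// glDrawArrays: the quad needs 6 vertices, two of them duplicates.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);

// glDrawElements: 4 unique vertices; the duplication is expressed through
// 6 small indices in the bound GL_ELEMENT_ARRAY_BUFFER instead.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_UNSIGNED_SHORT, 0);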
The performance implications are probably similar on the iPhone; the OpenGL ES Programming Guide for iOS recommends using triangle strips and joining multiple strips through degenerate triangles.
The link has a nice illustration of the concept. This way you can reuse some vertices and still do all the drawing in one step.
For best performance, your models should be submitted as a single unindexed triangle strip using glDrawArrays with as few duplicated vertices as possible. If your models require many vertices to be duplicated (because many vertices are shared by triangles that do not appear sequentially in the triangle strip or because your application merged many smaller triangle strips), you may obtain better performance using a separate index buffer and calling glDrawElements instead. There is a trade off: an unindexed triangle strip must periodically duplicate entire vertices, while an indexed triangle list requires additional memory for the indices and adds overhead to look up vertices. For best results, test your models using both indexed and unindexed triangle strips, and use the one that performs the fastest.
Where possible, sort vertex and index data so that triangles that share common vertices are drawn reasonably close to each other in the triangle strip. Graphics hardware often caches recent vertex calculations, so locality of reference may allow the hardware to avoid calculating a vertex multiple times.
The downside is that you probably need a preprocessing step that sorts your mesh in order to obtain long enough strips.
I have not come up with a nice algorithm for this yet, so I cannot give any performance or space numbers compared to GL_TRIANGLES. Of course this also depends strongly on the meshes you want to draw.
Actually, you can use degenerate triangles to join strips into one continuous strip, so that you don't have to split the draw into several calls when using glDrawArrays.
I have been using glDrawElements with GL_TRIANGLES, but I'm thinking about switching to glDrawArrays with GL_TRIANGLE_STRIP. That way there is no need to create an indices vector.
Does anyone know more about the vertex cache that was mentioned in one of the posts above? I'm wondering about the performance of glDrawElements/GL_TRIANGLES versus glDrawArrays/GL_TRIANGLE_STRIP.
The accepted answer is slightly outdated. Following the doc link in Jorn Horstmann's answer, the OpenGL ES Programming Guide for iOS, Apple describes how to use the "degenerate triangles" trick with DrawElements, thereby gaining the best of both worlds.
The minor saving of omitting the indices with DrawArrays isn't worth what you gain by combining all your data into a single GL call with DrawElements. (You could combine everything using DrawArrays, but then any "wasted elements" would be wasted vertices, which are much larger than indices and cost more render time too.)
This also means you don't need to carefully consider all your models, as to whether most of them can be rendered as a minimal number of strips or whether they are too complex for that. One uniform solution handles everything. (But do try to organize your data into strips where possible, to minimize the data sent and to maximize the GPU's chances of reusing recently cached vertex calculations.)
BEST: A single DrawElements call with GL_TRIANGLE_STRIP, containing all your data (that is changing in each frame).
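For illustration, joining two strips with degenerate triangles just means repeating the last index of one strip and the first index of the next (a hypothetical example in Java):

// Strip A uses vertices 0,1,2,3; strip B uses 4,5,6,7. Repeating 3 and 4
// produces zero-area (degenerate) triangles that the GPU discards, so both
// strips render in one glDrawElements(GL_TRIANGLE_STRIP, ...) call.
short[] joinedIndices = { 0, 1, 2, 3,   3, 4,   4, 5, 6, 7 };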
