I need a bit of an explanation of exactly how to apply textures to different entities. My understanding is that there can only be one bound texture at a time. So, if I have many entities all using different textures, how do I go about applying a texture to an entity, rendering that entity, then binding the next texture to apply to the next entity?
I guess I'm confused about the timing of applying a texture to an entity and rendering it with the correct texture. I am planning on using texture atlases for similar sprites, animations, and such. But I don't know how to have a texture, or a portion of a texture (from an atlas), saved to an entity before rendering so I can move on to applying the next texture to the other entities.
Similarly, if I have a texture atlas loaded and use it to animate one entity, but also need to animate a different entity that uses a different texture atlas, do I need the game to load the other atlas and apply it to achieve the other animation?
I'm familiar with the OpenGL ES 2.0 API. I just need help with how to apply it.
If I understand correctly, you are looking for a nice application structure to achieve sprite animation using atlas textures. Note that there are many good ways to do this, so I will explain only one.
Your situation is best handled with a few classes that manage textures and models.
At the bottom, you should create a class that wraps a texture. It contains a texture ID and, if it is loaded from a file, a file name (or some custom ID). Its public interface should include the following (a sketch follows the list):
constructors as needed: with a size (for an FBO), or with a name (loaded from a file)
bind (binds the texture)
texture size
image size on the texture (for cases where a non-power-of-two image is applied to a power-of-two texture)
explicit cleanup (deletes the texture)
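A minimal sketch of such a class, assuming textures are loaded from Android assets (the class and method names here are my own, for illustration, not from any library):

import java.io.IOException;
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Sketch of a texture wrapper; names are illustrative.
public class Texture {
    private final int id;       // GL texture object
    private final String name;  // file name or custom ID; null for FBO textures
    private int width, height;  // texture size

    // Constructor with a size, e.g. for use as an FBO color attachment.
    public Texture(int width, int height) {
        this.name = null;
        this.width = width;
        this.height = height;
        id = generate();
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    }

    // Constructor with a name; loads the image from the app's assets.
    public Texture(AssetManager assets, String name) throws IOException {
        this.name = name;
        Bitmap bmp = BitmapFactory.decodeStream(assets.open(name));
        width = bmp.getWidth();
        height = bmp.getHeight();
        id = generate();
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        bmp.recycle();
    }

    // Creates and binds a texture object with basic filtering.
    private int generate() {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        return ids[0];
    }

    public void bind() { GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, id); }
    public int getId() { return id; }
    public String getName() { return name; }
    public int getWidth() { return width; }
    public int getHeight() { return height; }

    // Explicit cleanup: deletes the GL texture object.
    public void delete() { GLES20.glDeleteTextures(1, new int[]{ id }, 0); }
}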
Since you are talking about animation, you presumably have an image with multiple subimages (a walking character, for instance). It is best to subclass the texture class for this, adding the extra methods such as getting the texture coordinates for a specific frame, as sketched below.
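For example, assuming the atlas is a regular grid of equally sized frames, the subclass could look like this (a sketch building on the texture class above):

import java.io.IOException;
import android.content.res.AssetManager;

// Sketch: an atlas laid out as a regular grid of equally sized frames.
public class AtlasTexture extends Texture {
    private final int columns, rows;

    public AtlasTexture(AssetManager assets, String name, int columns, int rows)
            throws IOException {
        super(assets, name);
        this.columns = columns;
        this.rows = rows;
    }

    // Returns {u0, v0, u1, v1} for the given frame, counted row by row.
    public float[] coordinatesForFrame(int frame) {
        int col = frame % columns;
        int row = frame / columns;
        float w = 1f / columns;
        float h = 1f / rows;
        return new float[]{ col * w, row * h, (col + 1) * w, (row + 1) * h };
    }
}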
Then, a level higher, you need a class that represents a texture pool. This class caches the textures so you can reuse them. It should hold a collection of all the texture objects currently loaded and have methods such as the following (a sketch follows the list):
texture named (returns the texture with a specific name, either from the cache or by creating a new texture object with this name and storing it)
explicit cleanup (simply deletes all the textures and empties the cache)
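A sketch of such a pool, again assuming the texture class above (error handling elided):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import android.content.res.AssetManager;

// Sketch of a texture pool that caches textures by name.
public class TexturePool {
    private final Map<String, Texture> cache = new HashMap<>();
    private final AssetManager assets;

    public TexturePool(AssetManager assets) { this.assets = assets; }

    // Returns the cached texture for this name, loading it on first use.
    public Texture textureNamed(String name) throws IOException {
        Texture t = cache.get(name);
        if (t == null) {
            t = new Texture(assets, name);
            cache.put(name, t);
        }
        return t;
    }

    // Explicit cleanup: delete every texture and empty the cache.
    public void cleanup() {
        for (Texture t : cache.values()) t.delete();
        cache.clear();
    }
}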
After that, you create a model class which contains all the data needed for drawing. It either has a vertex buffer ID or can generate all the vertex data on the fly. Besides that, it holds a link to its texture(s), grabbed from the texture pool class. At this point there are basically two ways of drawing this model. One is to do it all in your main drawing method, which I discourage, but it is quick to do. The other is to implement a method on the model class such as "draw with shader", to which you pass your custom shader class; the model then contains the code to draw itself (see the sketch after this list). The pipeline is something like:
Bind linked texture
Get and bind the texture coordinates from the texture (or the atlas subclass)
Get and bind the position coordinates from the model
Do any additional setup needed on the shader
Draw
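A sketch of that "draw with shader" method; the Shader class holding the attribute handles (aPosition, aTexCoord) is an assumption of mine:

import java.nio.FloatBuffer;
import android.opengl.GLES20;

// Sketch of a model that draws itself; Shader is assumed to expose the
// attribute handles of your compiled program.
public class Model {
    private final Texture texture;       // linked texture from the pool
    private final FloatBuffer positions; // 4 vertices of a quad (x, y)
    private final FloatBuffer texCoords; // UVs, e.g. one frame of an atlas

    public Model(Texture texture, FloatBuffer positions, FloatBuffer texCoords) {
        this.texture = texture;
        this.positions = positions;
        this.texCoords = texCoords;
    }

    public Texture getTexture() { return texture; }

    public void drawWithShader(Shader shader) {
        texture.bind();                                    // 1. bind linked texture
        GLES20.glVertexAttribPointer(shader.aTexCoord, 2,  // 2. bind texture coordinates
                GLES20.GL_FLOAT, false, 0, texCoords);
        GLES20.glEnableVertexAttribArray(shader.aTexCoord);
        GLES20.glVertexAttribPointer(shader.aPosition, 2,  // 3. bind position coordinates
                GLES20.GL_FLOAT, false, 0, positions);
        GLES20.glEnableVertexAttribArray(shader.aPosition);
        // 4. any additional shader setup (uniforms, blending) goes here
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4); // 5. draw
    }
}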
This way you have ultimate control over what is going on, and you can optimize the system to a great extent. For instance, when you iterate through the models to be drawn, you may sort them by texture so you avoid unnecessary texture switches, you may create a large vertex buffer to flush them in a single call, you can automatically check whether a specific texture is no longer needed...
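For instance, sorting by texture before drawing can be as simple as this (assuming each model exposes its texture, as in the sketch above):

import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Group models that share a texture so they are drawn back to back,
// which minimizes texture binds.
void sortByTexture(List<Model> models) {
    Collections.sort(models, new Comparator<Model>() {
        @Override public int compare(Model a, Model b) {
            return Integer.compare(a.getTexture().getId(), b.getTexture().getId());
        }
    });
}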
Besides that, this approach keeps the memory footprint of your application to a minimum as far as texture data goes. And the models themselves are insignificantly small as far as resources go; each contains, for instance, some frame structure with 4 floating-point values and a pointer to a texture from the pool.
I hope this explanation helps you.
Related
I am following this tutorial to add OpenGL to my Android app: https://www3.ntu.edu.sg/home/ehchua/programming/android/Android_3D.html. In all the examples the shapes are created in the MyGLRenderer constructor, but I want to know how I can add OpenGL shapes dynamically after the renderer has been created. How can this be done?
You create an interface or class called Shape. This contains the data needed to render a 3D shape (vertices, indices, color data, etc.), or alternatively the VAO/VBO/texture IDs and other IDs used for rendering.
The advantage of using a class is that you can implement the methods once and keep everything in a single class, while keeping the ability to extend it into more classes (Cube, Pyramid, Model, etc.) to customize your objects, or just use different instances loaded with different data. There are lots of ways to do the same thing here.
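A bare-bones version of that base class might look like this (a sketch; the field names are illustrative):

import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

// Sketch of a Shape base class holding the data needed for rendering.
public abstract class Shape {
    protected FloatBuffer vertexBuffer; // vertex positions
    protected FloatBuffer colorBuffer;  // per-vertex colors
    protected ShortBuffer indexBuffer;  // triangle indices

    // Subclasses (Cube, Pyramid, Model, ...) fill the buffers in their
    // constructors and issue their own draw calls here.
    public abstract void draw(float[] mvpMatrix);
}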
You can also add more data after renderer initialization, but by storing your data you can reuse it later. Assuming the raw model data is stored in a form that is reusable for your purposes, you can apply matrices to get different positions and "instances" of the same object. The point is that you can create new shapes at any time; OpenGL doesn't care whether that happens when the renderer is initialized or a while after the fact, as long as all the data is in place when you try to use it.
After you create the class(es) and relevant instances of them, you create a new list or map:
public List<Shape> shapes = new ArrayList<>();
// Add whatever shapes you want: create them at runtime (generated)
// or keep them static. It is up to you.
In the class you create, you can also implement a rendering method that draws the object itself; this is another advantage of using a class. If you don't define a draw method in the class, you have to manually draw every object in the main rendering method.
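With a draw method on the class, the renderer's frame method stays trivial (a sketch; shapes is the list above, and mvpMatrix is assumed to be computed elsewhere in the renderer):

// Inside your GLSurfaceView.Renderer implementation:
@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // Each shape knows how to draw itself; the renderer just iterates.
    for (Shape s : shapes) {
        s.draw(mvpMatrix);
    }
}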
Personally, I separate the raw model data (vertices, UV coordinates, etc.) into a single class that is sourced into a tree of abstraction. For textured models, I currently have a Renderable Entity (where Renderable is an interface containing a draw(Shader) function - basically one level of abstraction up from Shape). The Entity contains a TexturedModel, which contains a Texture and a Model. The Model contains the raw data, and the Texture contains the relevant texture (because it has to be bound before the model is rendered).
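Sketched in code, that tree might look like this (my own structure, not a standard API; Shader, Texture, and Model stand in for your own classes):

// One possible abstraction tree; names mirror the description above.
interface Renderable {
    void draw(Shader shader);
}

class TexturedModel {
    private final Texture texture; // bound before the model is rendered
    private final Model model;     // the raw vertex data
    TexturedModel(Texture texture, Model model) {
        this.texture = texture;
        this.model = model;
    }
    void draw(Shader shader) {
        texture.bind();
        model.draw(shader);
    }
}

class Entity implements Renderable {
    private final TexturedModel texturedModel;
    Entity(TexturedModel texturedModel) { this.texturedModel = texturedModel; }
    @Override public void draw(Shader shader) {
        texturedModel.draw(shader);
    }
}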
There might be more efficient design options than that, but there are lots of ways to abstract object rendering. Like I mentioned, OpenGL doesn't care when you initialize your data, as long as you don't expect it to render data you haven't given it yet. Abstracting your models, shapes, or whatever you want to render into classes gives you a single, manageable, renderable unit. You can add more on the fly as well -- this is proven by every single game with changing scenes or characters/entities in the scene.
To connect this back to the link you provided: you already have two classes. If you add a superclass and a single list, you can render any number of them. By using matrices (C++/general OpenGL, Android), you can draw these in different positions, if that's what you meant by adding more.
Strictly speaking, with the code in the link, you don't even need to worry about reusability. You need matrices to get several instances of a single shape in different positions. You can also use uniform variables if you want different colors, but that's a different can of worms you didn't ask about (AKA this is an exercise for the reader; uniforms are a critical part of shaders, and you'll have to learn them at some point anyway).
It's somewhat unclear what you mean by "dynamically" in this case. If you just mean you want more objects with manually generated data, added at arbitrary times, a Shape class/interface is the way to go.
If you want dynamic position, you want matrices in the shader.
If you want a pre-made abstraction tree, honestly, there isn't one. You'll have to make one based on what your project needs. In a case where you only have a few simple geometric shapes, a Shape class makes sense, potentially in combination with the advice from the previous line. When it comes to rendering objects in OpenGL, to put it mildly, many roads lead to Rome, depending on what you need and what you intend to render.
If this answer doesn't directly apply to your situation (either you OP, or you, random reader who happened to stumble over this), I cannot recommend experimenting highly enough. What works is substantially different when you leave the world of tutorials, books, and courses, and enter the world of your own projects. The key takeaways from this answer (TL;DR):
Abstract to a class or interface that describes the smallest unit of whatever you want to render; whether that's a Renderable, Shape, Model3D, or something completely different is purely down to what you're making
Inheritance and instance reuse are your friends in cases like these
Additional models can be created after your renderer has started if you need them; OpenGL doesn't care when the data is sourced, as long as you source it before you try to use it.
Dynamic changes to model data (such as position or color) can easily be done with shaders, uniform values, and matrices. Don't forget: if you use VBOs, you can reuse the same model and give it different positions or other attributes, altering how it is rendered through, among other things, uniform variables, as the sketch after this list shows.
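For example, positioning a reused model through a model-matrix uniform might look like this (the uniform name, program, and x/y/z are assumptions):

import android.opengl.GLES20;
import android.opengl.Matrix;

// Reuse one model: change only the model matrix uniform per instance.
float[] model = new float[16];
Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, x, y, z); // this instance's position

int uModel = GLES20.glGetUniformLocation(program, "uModelMatrix");
GLES20.glUniformMatrix4fv(uModel, 1, false, model, 0);
// ...issue the draw call, then repeat with a different translation
// for the next instance of the same VBO.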
Further reading/watching
ThinMatrix' series (LWJGL, but explains a lot of theory)
opengl-tutorial.org (C++; some of the API matches, but OpenGL and OpenGL ES are different - I largely recommend the theory bits over the code bits)
Android developer documentation (covers implementation on Android, as well as theory)
LWJGL GitBook (LWJGL again, but also contains a decent chunk of theory that generalizes to OpenGL ES)
docs.gl - API documentation for OpenGL ES 2 and 3, as well as OpenGL 2-4
Derive Triangle, Quad, Circle, etc. from a 'Shape' interface that defines the draw() method. http://tutorials.jenkov.com/java/interfaces.html
Then create a List and shove the shapes into and out of it as needed.
http://www.codejava.net/java-core/collections/java-list-collection-tutorial-and-examples
In your onDrawFrame(GL10 gl) method, loop over the shape list:
for( Shape s : shapeList ) s.draw(gl);
Also, you should probably store the Shape's position in the Shape class for the glTranslatef calls, as sketched below.
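A sketch of that interface, matching the GL10 parameter used in the tutorial:

import javax.microedition.khronos.opengles.GL10;

// Each implementation stores its own position and calls
// gl.glTranslatef(x, y, z) before drawing itself.
public interface Shape {
    void draw(GL10 gl);
}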
I'm trying to make a 2D map (think tiled world map) in OpenGL ES 2.0 for an Android game. Basically, there are a few tile types that have different textures, and the map is randomly generated from these types, so the map changes from game to game but stays the same for the duration of a single game.
My first thought was to generate a single large texture / image / bitmap (independent of OpenGL) beforehand, basically stitching duplicate tile textures together to make the larger map, and then using this single texture for one large map rectangle. In theory I think this is simple and would work fine, but I'm worried that it won't scale well for larger maps, and especially on mobile I'll run out of memory with such a large image. Plus, there's a small set of tiles duplicated over and over, so it seems like a tremendous waste to repeat the same pixel data in a big texture.
My second thought was having many textures, one for each of the tile textures. But I'm not sure how this would work texture-binding-wise: would I need the shaders to contain multiple texture references, with logic inside the shader for choosing the right one?
Finally, I thought a texture atlas could work: have one texture / image with all of the tile data in it, which would be relatively small. But I'm struggling to imagine how to get the math to work out such that "tiles", or subsections of the map rectangle, would use completely different texture coordinates.
Am I approaching this the wrong way? Should I be using a rectangle for each tile? At least that way I can pass the shaders both vertex and texture coordinates for each tile independently. This seems easier, but also seems wrong, since the map really is just one rectangle that won't be changing.
My first thought was to generate a single large texture...
Actually, something like this has already been used in id Software's id Tech engine since version 4. It's called MegaTexture. Basically, it's one big texture, which can also hold additional data.
My second thought was having many textures...
You don't need to hold all the textures in a shader. Do it like this:
Implement a loop with n iterations, where n is the number of different texture types in use.
Inside the loop, bind the current texture type.
Pass any data, like position/color/texture coords, to the shaders.
Draw all tiles that use the bound texture. You could use GLES30.glDrawElementsInstanced or GLES30.glDrawArraysInstanced if you are targeting devices with GLES 3.x or appropriate extension support. Otherwise, draw your tiles using GLES20.glDrawArrays or GLES20.glDrawElements.
Shaders won't be complicated with this approach.
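A sketch of that loop, assuming the tiles have been grouped by type and the per-type vertex ranges (firstVertex, vertexCount) were filled when the map was generated:

// One texture bind per tile type, then draw every tile of that type.
for (int type = 0; type < textureIds.length; type++) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[type]);
    // pass per-type data (positions, texture coords) to the shaders here
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, firstVertex[type], vertexCount[type]);
}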
Finally, I thought using a texture atlas could work...
You could use a loop here too, computing the texture coordinates for each tile type on the CPU and passing them to the shaders.
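The math is straightforward if the atlas is a regular grid; a sketch (atlasCols/atlasRows and tileIndex are my own names):

// UVs for one tile in an atlasCols x atlasRows grid, counted row by row.
float u0 = (tileIndex % atlasCols) / (float) atlasCols;
float v0 = (tileIndex / atlasCols) / (float) atlasRows;
float u1 = u0 + 1f / atlasCols;
float v1 = v0 + 1f / atlasRows;
// Write (u0,v0)..(u1,v1) into this tile's four texture coordinates.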
Considering your map does not change during a game session, the MegaTexture approach looks good. However, it depends on how large your map is and how much memory is available. Also, note that the maximum texture size is limited. It differs from device to device but should be (AFAIK) equal to or greater than the screen size, and is at least 64 texels (16 for cube-mapped textures). You can query it on any device with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...).
I have many fixed objects, like terrain and buildings, and I want to merge them all into one VBO to reduce draw calls and improve performance when there are many objects. I load the textures and store their IDs in an array. My question is: can I bind several textures to that one VBO, or must I make a separate VBO for each texture? Or can I make many glDrawArrays calls for one VBO based on offset and length, and if so, will that perform well?
In ES 2.0, if you want to use multiple textures in a single draw call, your only good option is a texture atlas. Essentially, you store the texture data of multiple logical textures in a single OpenGL texture, and choose the texture coordinates so that the desired texture data is used for each primitive. This can be done by adjusting the original texture coordinates, or by feeding an ID into the shader and applying an offset to the texture coordinates based on that ID.
Of course you can use multiple glDrawArrays() calls for a single VBO, binding a different texture between them. But that goes against your goal of reducing the number of draw calls. You should certainly make sure that the number of draw calls really is a bottleneck before you spend a lot of time on this type of optimization.
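For reference, drawing slices of a single VBO looks like this (a sketch; the offset/count arrays are my own names):

// One VBO, many draws: each object occupies a contiguous vertex range.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
// set up glVertexAttribPointer(...) once for the whole buffer here
for (int i = 0; i < objectCount; i++) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[i]);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, firstVertex[i], vertexCount[i]);
}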
In more advanced versions of OpenGL you have additional features that can help with this use case, like array textures.
There are a couple of standard techniques that many game engines use to achieve low draw-call counts.
Batching: This technique combines all objects that use the same material into one mesh. The objects do not even have to be static; if they are dynamic, you can still batch them by passing an array of model matrices.
Texture Atlas: Creating a texture atlas for all static meshes is the best way, as the other answer says. However, you'll have to do a fair amount of work to combine the textures efficiently and update their UVs accordingly.
There are many examples for OpenGL ES 2 showing how to visualize a single triangle or rectangle.
Google provides an example for drawing shapes (triangles, rectangles) by creating a Triangle and a Rectangle class, which do all the OpenGL work required to visualize these objects.
But what should you do if you have more than one triangle? What if you have objects consisting of hundreds of triangles with different colors, sizes, and positions? I can't find any good tutorial on dealing with such complex scenarios in OpenGL ES.
My approaches:
So I tried it out. First of all, I slightly changed the Triangle class into a more dynamic class (the constructor now takes the position and the color of the triangle). Basically this is "enough" for drawing complex scenes: every object would consist of hundreds of these Triangle instances, and I render each of them separately. But this consumes a lot of computing power, and I think most of the steps in the rendering process are redundant.
So I tried to "group" triangles into different categories. Now every object has its own vertex buffer and puts all of its triangles into it at once. The performance is far better than before (where every triangle had its own buffer), but I still think it's not the correct way to go.
Is there any good example on the internet where someone draws more than simple triangles, or do you know where I can find this information? I really like OpenGL, but it's pretty hard for beginners because of the lack of tutorials (for OpenGL ES 2 on Android).
The standard representation of (triangle) meshes for rendering is a vertex array containing all the vertices of the mesh and an index array storing the connectivity (the triangles). You definitely want at most one draw call per object (and you might even be able to coalesce several objects).
Interleaved attribute arrays are the most efficient variant with respect to cache efficiency, so one buffer object for the vertex array per object is enough. You might even combine several objects into one buffer object, even if you cannot use a single draw call for both.
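For example, interleaving position (3 floats) and texture coordinates (2 floats) with a shared stride looks like this (a sketch; the attribute handles and vboId are my own names):

// Per-vertex layout: x, y, z, u, v  ->  stride = 5 floats = 20 bytes.
int stride = 5 * 4;
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
GLES20.glVertexAttribPointer(aPosition, 3, GLES20.GL_FLOAT, false, stride, 0);
GLES20.glEnableVertexAttribArray(aPosition);
GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, stride, 3 * 4);
GLES20.glEnableVertexAttribArray(aTexCoord);
// One draw call per object, using 16-bit indices.
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indexCount, GLES20.GL_UNSIGNED_SHORT, 0);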
As GLES might be limited to 16-bit indices, large models must be split into several "patches".
Although I'm technically working on the Android platform with OpenGL ES 2.0, I believe this applies to other OpenGL variants as well.
I have a list of objects (enemies, characters, etc.) that I'm attempting to draw onto a grid, each space being 1x1, and each object matching that size. Presently, each object is self-translating: it takes its model coordinates and goes through a simple loop to adjust them to the world coordinates of its grid location (i.e., if it should be at (3,2), it translates its coordinates accordingly).
The problem I've reached is that I'm not sure how to efficiently draw them. I have a loop going through all the objects and calling draw for each one, similar to the Android tutorial, but this seems wildly inefficient.
The objects are each textured with their own square images, matching the 1x1 grid cells they fill. They will likely never need their own unique shaders, so the only things that seem to change between objects are the vertices and the textures.
Is there an efficient way to get each model into the pipeline without flushing because of uniform changes?
This probably requires some trial and error and is probably hardware dependent. I would use buffer objects for the meshes with GL_STATIC_DRAW, pack several textures into a bigger one, and draw all objects that depend on that bigger texture in a batch to avoid state changes as much as possible. Profile, and give us more information on where your bottleneck is.
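A sketch of the buffer setup (vertexData is assumed to be a filled FloatBuffer):

// Upload the static mesh once at load time; GL_STATIC_DRAW tells the
// driver the data will not change.
int[] vbo = new int[1];
GLES20.glGenBuffers(1, vbo, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
        vertexData.capacity() * 4, vertexData, GLES20.GL_STATIC_DRAW);
// At draw time: bind the packed texture once, then draw every object
// that uses it before changing any state.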