Dynamically add Opengl shapes - android

I am following this tutorial to add OpenGL to my Android app: https://www3.ntu.edu.sg/home/ehchua/programming/android/Android_3D.html. In all the examples, the shapes are created in the MyGLRenderer constructor, but I want to know how I can add OpenGL shapes dynamically after the renderer has been created. How can this be done?

You create an interface or class called Shape. This will contain the data needed to render a 3D shape (vertices, indices, color data, etc.), or alternatively the VAO/VBO/texture IDs and other handles used for rendering.
The advantage of using a class is that you can keep the data and the methods together in one place while retaining the ability to extend it into subclasses (Cube, Pyramid, Model, etc.) that customize your objects, or simply use different instances loaded with different data. There are lots of ways to do the same thing here.
You can add more data after renderer initialization, but by storing your data you can reuse it later. Assuming the raw model data is stored in a way that can be reused for your purpose (all data can technically be reused, but it's pointless if you can't apply it to your use case), you can apply matrices to get different positions and "instances" of the same object. The point is that you can create new shapes at any time; OpenGL doesn't care whether that happens when the renderer is initialized or long after the fact, as long as all the data is in place when you try to use it.
After you create the class(es) and relevant instances of them, you create a new list or map:
public List<Shape> shapes = new ArrayList<>();
// Add whatever shapes you want. Create them at runtime (generated)
// or keep them static. It's up to you.
In the class you create, you can also implement a rendering method that draws the object. The advantage of putting the drawing code in the class itself is encapsulation: if you don't define a draw method on the class, you have to draw every object manually in the main rendering method.
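As a minimal sketch of that idea (the names Shape, Triangle, and addShape are illustrative, not from the tutorial), the interface and render loop could look like this; the actual GLES20 calls would go inside each implementation's draw():

```java
import java.util.ArrayList;
import java.util.List;

// Smallest renderable unit: anything that can draw itself given an MVP matrix.
interface Shape {
    void draw(float[] mvpMatrix);
}

// Example implementation; real code would issue GLES20 draw calls here.
class Triangle implements Shape {
    @Override
    public void draw(float[] mvpMatrix) {
        // glUseProgram(...); glVertexAttribPointer(...); glDrawArrays(...);
    }
}

class MyGLRenderer {
    private final List<Shape> shapes = new ArrayList<>();

    // Can be called at any time, not just from the constructor.
    public void addShape(Shape s) { shapes.add(s); }

    public void onDrawFrame(float[] mvpMatrix) {
        for (Shape s : shapes) {
            s.draw(mvpMatrix);
        }
    }
}
```

Because addShape just appends to a list that the frame loop iterates over, new shapes show up on the next rendered frame, whenever they were created.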
Personally, I keep the raw model data (vertices, UV coordinates, etc.) in a single class that feeds into a tree of abstractions. For textured models, I currently have a Renderable Entity (where Renderable is an interface containing a draw(Shader) method - basically one level of abstraction above Shape). The Entity contains a TexturedModel, which contains a Texture and a Model. The Model contains the raw data, and the Texture contains the relevant texture (because it has to be bound before the model is rendered).
There might be more efficient designs than that, but there are lots of ways to abstract object rendering. Like I mentioned, OpenGL doesn't care when you initialize your data, as long as you don't expect it to render data you haven't given it yet. Abstracting your models, shapes, or whatever you want to render into classes gives you a single, manageable, renderable unit. You can add more on the fly as well - every game with changing scenes or with characters/entities entering and leaving the scene proves this works.
To connect this back to the link you've provided, you already have two classes. If you add a superclass and a single list, you can render any number of them. By using matrices (C++/general OpenGL, Android), you can draw these in different positions, if that's what you meant by adding more.
Strictly speaking, with the code in the link, you don't even need to worry about reusability. You need matrices to draw several copies of a single shape in different positions. You can also use uniform variables if you want different colors, but that's a different can of worms you didn't ask about (i.e., an exercise for the reader; uniforms are a critical part of shaders, and you'll have to learn them at some point anyway).
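To illustrate the matrix idea in plain Java (no GL calls; on-device, android.opengl.Matrix does this job for you): a column-major 4x4 translation matrix, the layout OpenGL expects, moves every vertex of a reused model, so the same vertex data can appear at many positions. The class name here is made up for the sketch:

```java
// Column-major 4x4 translation matrix, the layout OpenGL expects.
class MatrixUtil {
    static float[] translation(float tx, float ty, float tz) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = tx; m[13] = ty; m[14] = tz; // translation in the last column
        return m;
    }

    // Transform a point (x, y, z, 1) by m: out = m * v.
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
        };
    }
}
```

The same vertex buffer drawn twice with two different translation matrices (uploaded via glUniformMatrix4fv) gives two "instances" of the shape.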
It's somewhat unclear what you mean by "dynamically" in this case. If you just mean more objects with manually generated data, added at arbitrary times, a Shape class/interface is the way to go.
If you want dynamic position, you want matrices in the shader.
If you want a pre-made abstraction tree, honestly, there isn't one. You'll have to make one based on what your project needs. In a case where you only have a few simple geometric shapes, a Shape class makes sense, potentially combined with the advice from the previous line. When it comes to rendering objects in OpenGL, to put it mildly, many roads lead to Rome, depending on what you need and what you intend to render.
If this answer doesn't directly apply to your situation (either you, OP, or you, random reader who happened to stumble over this), I cannot recommend experimenting highly enough. What works is substantially different once you leave the world of tutorials, books, and courses and enter the world of your own projects. The key takeaways from this answer (TL;DR):
Abstract to a class or interface that describes the smallest unit of whatever you want to render; whether that's a Renderable, Shape, Model3D or something completely different is purely down to what you're making
Inheritance and instance reuse is your friend in cases like these
Additional models can be created after your renderer has started if you need it; OpenGL doesn't care when the data is sourced as long as you source it before you try to use it.
Dynamic changes to model data (such as position or color) can easily be done with shaders and uniform values and matrices. Don't forget: if you use VBOs, you can reuse the same model, and have different positions or other attributes and alter how your model is rendered through, among other things, uniform variables.
Further reading/watching
ThinMatrix' series (LWJGL, but explains a lot of theory)
opengl-tutorial.org (C++, a bit of the API matches, but OpenGL and OpenGL ES are different - largely recommend the theory bits over the code bits)
Android developer documentation (covers implementation on Android, as well as theory)
LWJGL GitBook (LWJGL again, but also contains a decent chunk of theory that generalizes to OpenGL ES)
docs.gl - API documentation for OpenGL ES 2 and 3, as well as OpenGL 2-4

Derive Triangle, Quad, Circle, etc. from a Shape interface that defines the draw() method: http://tutorials.jenkov.com/java/interfaces.html
Then create a List and shove the shapes into and out of it as needed.
http://www.codejava.net/java-core/collections/java-list-collection-tutorial-and-examples
In your onDrawFrame(GL10 gl) method, loop over the shape list.
for( Shape s : shapeList ) s.draw(gl);
Also, you should probably add the shape's position to Shape for the glTranslate calls.
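A sketch of that (class and field names are illustrative; the GL10 calls are shown as comments since they only run on-device):

```java
import java.util.ArrayList;
import java.util.List;

// Each shape carries its own position so the render loop can translate to it.
abstract class Shape {
    float x, y, z;

    Shape(float x, float y, float z) {
        this.x = x; this.y = y; this.z = z;
    }

    abstract void draw(); // issue the actual GL draw calls here
}

class Renderer {
    final List<Shape> shapeList = new ArrayList<>();

    public void onDrawFrame(/* GL10 gl */) {
        for (Shape s : shapeList) {
            // gl.glPushMatrix();
            // gl.glTranslatef(s.x, s.y, s.z);
            s.draw();
            // gl.glPopMatrix();
        }
    }
}
```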

Related

Texturing Multiple Entities with OpenGL ES 2.0 on Android

I need a bit of an explanation of exactly how to apply textures to different entities. My understanding is that only one texture can be bound at a time. So, if I have many entities all using different textures, how do I go about applying a texture to an entity, rendering the entity, then binding the next texture to apply to the next entity?
I guess I'm confused about the timing of applying a texture to an entity and rendering it with the correct texture. I am planning on using texture atlases for similar sprites, animations, and so on. But I don't know how to have a texture, or a portion of a texture (from an atlas), associated with an entity before rendering so I can move on to applying the next texture to the other entities.
Similarly, if I have a texture atlas loaded and use it to animate one entity but also need to animate a different entity that uses a different texture atlas, do I need to have the game load the other atlas and apply it to achieve the other animation?
I'm familiar with the OpenGL ES 2.0 API. I just need help with how to apply it.
If I understand correctly, you are looking for a clean application structure to achieve sprite animation using atlas textures. There are very many good ways to do that, so I will try to explain only one.
It is best to handle your situation with a few classes that control textures and models.
At the bottom you should create a class that handles a texture. It contains a texture ID and, if loaded from a file, the file name (or some custom ID). Its public interface should contain:
constructors as needed: with size (FBO), with name (from file)
bind (binds the texture)
texture size
image size on texture (for cases where a non-power-of-two (NPOT) image is applied to a POT texture)
Explicit cleanup (deletes the texture)
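A sketch of such a texture class (names are illustrative; the GL calls are comments since they only run on-device). The POT math is the interesting part, because it determines the UV range the image actually occupies:

```java
// Sketch of a texture wrapper. The GL calls are shown as comments; the
// power-of-two math is what the UV coordinates depend on.
class Texture {
    final int textureId;
    final String name;                     // file name or custom ID
    final int imageWidth, imageHeight;     // size of the source image
    final int textureWidth, textureHeight; // POT size actually allocated

    Texture(int textureId, String name, int imageWidth, int imageHeight) {
        this.textureId = textureId;
        this.name = name;
        this.imageWidth = imageWidth;
        this.imageHeight = imageHeight;
        this.textureWidth = nextPowerOfTwo(imageWidth);
        this.textureHeight = nextPowerOfTwo(imageHeight);
    }

    // Smallest power of two >= n, for padding NPOT images onto POT textures.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Fraction of the texture the image actually covers (max U/V coordinate).
    float maxU() { return (float) imageWidth / textureWidth; }
    float maxV() { return (float) imageHeight / textureHeight; }

    void bind()   { /* GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId); */ }
    void delete() { /* GLES20.glDeleteTextures(1, new int[]{textureId}, 0); */ }
}
```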
Since you are talking about animation, you presumably have an image with multiple subimages (a walking character, for instance). In that case it is best to subclass the texture class so it contains the additional methods, such as getting the coordinates of a specific frame.
Then, a level higher, you need a class that represents a texture pool. This class caches the textures so you can reuse them. It should hold all the texture objects currently loaded and have methods such as:
texture named (returns the texture with a given name, either from the cache or by creating and storing a new texture with that name)
explicit cleanup (simply delete all the textures and empty the cache)
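The pool boils down to a map keyed by name. A sketch (holding bare texture IDs for brevity; in practice you would store the texture class from the previous step, and the loader function here is a hypothetical stand-in for your file-loading code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a texture pool: caches textures by name so each file is loaded once.
class TexturePool {
    private final Map<String, Integer> cache = new HashMap<>();
    private final Function<String, Integer> loader; // creates a GL texture ID

    TexturePool(Function<String, Integer> loader) {
        this.loader = loader;
    }

    // Returns the cached texture for this name, loading it on first request.
    int textureNamed(String name) {
        return cache.computeIfAbsent(name, loader);
    }

    // Explicit cleanup: delete all textures and empty the cache.
    void cleanup() {
        // for (int id : cache.values()) GLES20.glDeleteTextures(...);
        cache.clear();
    }
}
```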
After that you create a model class that contains all the data needed for drawing. It either has a vertex buffer ID or can generate all the vertex data on the fly. Besides that, it holds a reference to its texture(s), grabbed from the texture pool. At this point there are basically two ways of drawing the model. One is to do it all in your main drawing method, which I discourage, although it is quick to do. The other is to implement a method on the model class such as "draw with shader", to which you pass your custom shader class, so the model itself contains the code to draw itself. The pipeline is something like:
Bind linked texture
Get and bind the texture coordinates from the texture (or the atlas subclass)
Get and bind the position coordinates from the model
Do any additional setting needed on the shader
Draw
This way you will have full control over what is going on, and you can optimize the system to a great extent. For instance, when you iterate through the models to be drawn, you may sort them by texture to avoid unnecessary texture rebinding, you may create a large vertex buffer to flush them in a single call, and you can automatically check whether a specific texture is no longer needed...
Besides that, this approach will keep your application's memory footprint minimal as far as texture data goes. The models themselves are insignificantly small; each contains some frame structure with 4 floating-point values and a pointer to a texture from the pool.
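To illustrate why sorting by texture pays off, here is a small sketch where plain texture IDs stand in for models in draw order; a glBindTexture call is only needed when the texture differs from the previously bound one:

```java
import java.util.List;

// Counts how many glBindTexture calls a given draw order would cost.
class BindCounter {
    static int bindsNeeded(List<Integer> textureIdsInDrawOrder) {
        int binds = 0, bound = -1;
        for (int id : textureIdsInDrawOrder) {
            if (id != bound) { // texture switch: a bind is required
                binds++;
                bound = id;
            }
        }
        return binds;
    }
}
```

Drawing models with textures 1 and 2 alternately costs a bind per model; sorting the same models by texture first reduces it to one bind per texture.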
I hope this explanation helps you.

OpenGL ES : Understanding Vertex Buffer Objects

I am working on an Android project a bit like Minecraft. I am finding this a great way to learn about OpenGL Performance.
I have moved over to a vertex buffer object which has given me huge performance gains but now I am seeing the down sides.
I am right in thinking I need a vertex buffer object per:
Different mesh
Different texture
Different colour
Am I also right in thinking that every time the player adds a cube I need to add that on to the end of the VBO and every time the user removes a cube I need to regenerate the VBO?
I can't see how you could map an object with properties to its place in the VBO.
Does anyone know if Minecraft-type games use VBOs?
Yes - if you malloc() a memory region and later need more, you have to create a new VBO. If you want to show fewer cubes, you could play with IBOs, but at some point you still have to rearrange the VBO.
I'm not entirely sure what you mean by object properties, but if you want them to affect rendering, you'll need a separate VBO for each property/cube-type/shader combination, and draw them in groups.
If you want to store other kinds of properties, you shouldn't store them in the VBO you pass to OpenGL.
I have no idea what Minecraft uses, but my best advice is to store the cubes the player is unlikely to reach in VBOs and the cubes likely to change in an easy-to-modify container. (I don't know whether that would help or not.)
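A sketch of the "regenerate on change" idea using java.nio (the buffer you would then upload with glBufferData). Each cube contributes a fixed-size block of floats, so removing a cube means rebuilding the whole array; the class name is made up, and the per-cube vertex data is a placeholder rather than real cube geometry:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.List;

class ChunkMesher {
    static final int FLOATS_PER_CUBE = 36 * 3; // 12 triangles * 3 vertices * xyz

    // Rebuild the whole vertex buffer from the current cube list. Illustrative:
    // real code would emit 36 distinct cube vertices instead of repeating one.
    static FloatBuffer buildVertexBuffer(List<float[]> cubePositions) {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(cubePositions.size() * FLOATS_PER_CUBE * 4)
                .order(ByteOrder.nativeOrder()) // required for OpenGL ES
                .asFloatBuffer();
        for (float[] p : cubePositions) {
            for (int v = 0; v < 36; v++) {
                buf.put(p[0]).put(p[1]).put(p[2]); // placeholder vertex data
            }
        }
        buf.position(0);
        // Upload: GLES20.glBufferData(GL_ARRAY_BUFFER, buf.capacity() * 4, buf, GL_STATIC_DRAW);
        return buf;
    }
}
```

The buffer size is a pure function of the cube list, which is why add/remove operations force a rebuild (or at least a glBufferSubData over the affected range).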

How to deal with multiple objects in OpenGL ES 2

There are many examples for OpenGL ES 2 in how to visualize a single triangle or rectangle.
Google provides an example for drawing shapes (triangles, rectangles) by creating a Triangle and a Rectangle class which do all the OpenGL work required to visualize these objects.
But what should you do if you have more than one triangle? What if you have objects consisting of hundreds of triangles with different colors, sizes, and positions? I can't find any good tutorial for dealing with complex scenarios in OpenGL ES.
My approaches:
So I tried it out. First of all, I slightly changed the Triangle class into something more dynamic (the constructor now takes the position and the color of the triangle). Basically this is "enough" for drawing complex scenes. Every object would consist of hundreds of these Triangle instances, and I would render each of them separately. But this consumes a lot of computing power, and I think most of the steps in the rendering process are redundant.
So I tried to "group" triangles into different categories. Now every object has its own vertex buffer and puts all its triangles into it at once. The performance is far better than before (when every triangle had its own buffer), but I still think this isn't the right way to go.
Is there any good example on the internet where someone draws more than simple triangles, or do you know where I can get this information? I really like OpenGL, but it's pretty hard for beginners because of the lack of tutorials (for OpenGL ES 2 on Android).
The standard representation of (triangle) meshes for rendering is a vertex array containing all the vertices in the mesh and an index array storing the connectivity (the triangles). You definitely want at most one draw call per object (and you might even be able to coalesce several objects).
Interleaved attribute arrays are the most efficient variant with respect to cache efficiency, so one buffer object for the vertex array per object is enough. You might even combine several objects into one buffer object, even if you cannot use a single draw call for both.
As GLES might be limited to 16-bit indices, large models must be split into several "patches".
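A sketch of building those two arrays with java.nio (the class name is made up; the buffers are what you would upload and use with glDrawElements(..., GL_UNSIGNED_SHORT, ...)). Interleaving means each vertex's attributes sit next to each other, and the 16-bit limit shows up as a range check on the indices:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

// One interleaved buffer (position xyz + color rgb per vertex) plus a 16-bit
// index buffer, as used with glDrawElements(..., GL_UNSIGNED_SHORT, ...).
class Mesh {
    static final int FLOATS_PER_VERTEX = 6;           // 3 position + 3 color
    static final int STRIDE_BYTES = FLOATS_PER_VERTEX * 4;

    static FloatBuffer interleave(float[] positions, float[] colors) {
        int vertexCount = positions.length / 3;
        FloatBuffer buf = ByteBuffer
                .allocateDirect(vertexCount * STRIDE_BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (int i = 0; i < vertexCount; i++) {
            buf.put(positions, i * 3, 3); // position attribute
            buf.put(colors, i * 3, 3);    // color attribute
        }
        buf.position(0);
        return buf;
    }

    static ShortBuffer indices(int[] triangleIndices) {
        ShortBuffer buf = ByteBuffer
                .allocateDirect(triangleIndices.length * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer();
        for (int i : triangleIndices) {
            if (i > 0xFFFF) throw new IllegalArgumentException(
                    "index exceeds 16 bits; split the model into patches");
            buf.put((short) i);
        }
        buf.position(0);
        return buf;
    }
}
```

With this layout, both attributes come from one buffer object: glVertexAttribPointer is called twice with the same stride (STRIDE_BYTES) and different offsets (0 for position, 12 bytes for color).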

Android 2D Graphics, Comparing Objects, OOP

I want to develop an Android 2D Game, with a playing field full of rectangles and circles.
The rectangles and circles will be displaying objects, consisting of numbers, that have to be compared at some point.
Furthermore, all the objects (rectangles and circles) need to be draggable. I want to drag one object to another position, and then the surrounding objects' values should be compared with the value of the dragged object.
What I have so far is one abstract base class and two subclasses that represent the objects displayed as rectangles and circles.
Furthermore, the base class extends View, so that I can override the onDraw method for each of the two subclasses. I draw a rectangle for the one object and a circle for the other, and I also draw text containing the numbers of each object.
My question is, am I on the correct path concerning the development of an application like this, or would there be a better approach?
Thank you very much in advance.
Regarding your class hierarchy, I would say you are on the correct path, assuming the base class implements the drag mechanism. Your base class should also define the method for comparing two objects, so that the subclasses inherit it. Where the actual comparison logic lives depends on the algorithm: the parent class can do the comparison if it only needs members of the base class plus the objects' concrete types; if you need anything specific to the distinct subclasses, you will need to override the comparison method in each subclass.
What you also need is a mechanism to find the neighbouring objects. This could be done with a simple coordinate-check (if you can drag around freely) on a List of all game objects, or with the base-class containing references to all adjacent objects that is updated whenever a piece is dragged around (if it is more like a jigsaw puzzle game).
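The simple coordinate check could be sketched like this (class and method names are hypothetical, and a fixed radius stands in for whatever "adjacent" means in your game):

```java
import java.util.ArrayList;
import java.util.List;

// Simple coordinate check: after a piece is dropped, find all pieces whose
// centers lie within a given radius of it.
class GamePiece {
    float x, y;
    int value;

    GamePiece(float x, float y, int value) {
        this.x = x; this.y = y; this.value = value;
    }

    static List<GamePiece> neighboursOf(GamePiece dragged,
                                        List<GamePiece> all, float radius) {
        List<GamePiece> result = new ArrayList<>();
        for (GamePiece p : all) {
            if (p == dragged) continue;
            float dx = p.x - dragged.x, dy = p.y - dragged.y;
            if (dx * dx + dy * dy <= radius * radius) { // avoid sqrt
                result.add(p);
            }
        }
        return result;
    }
}
```

After the drop, you would compare dragged.value against each neighbour's value. This linear scan is fine for a playing field of dozens of objects; a grid or the adjacency-reference approach only becomes worthwhile with far more pieces.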
However, I would strongly recommend using OpenGL for any complex game. Unfortunately, I only have a good tutorial in German, but I'm sure there are many English tutorials available as well.
If you have any more detailed questions let me know.

SVG conversion - efficient way to store Path, Paint and Matrix objects?

In my Android application I have created an SVG image converter class. It parses through the SVG XML data and converts it into the appropriate Path, Paint and Matrix objects which can then be applied to the Canvas. Using this class I have then implemented a View which uses my SVG converter class to draw images I've produced in Inkscape on the screen. So far, so good. (I appreciate that writing my own SVG converter could be considered reinvention of the wheel considering it's been done before, but for me it's a useful learning exercise in my first Android application, and shall hopefully give me some extra flexibility.)
The purpose of using SVG is so that I can quickly and easily author various designs of graphical gauge. Each gauge typically consists of a section of graphics which only need to be drawn once (e.g. the gauge face and legends), plus graphics which are regularly updated (pointer angle, numeric value text).
At present my gauge View is not very efficient because every time onDraw() is called, my SVG class is called to rattle through the entire SVG file to produce all the vector data for the Canvas.
What I would like to do is to have some intermediate storage of the vector data so that the SVG XML file only need be parsed once. Therefore, I was thinking that the View could lazy-initialize itself from the SVG file on the first onDraw() and then store all of the resulting Paths, Paint and Matrix objects to various Lists. Then, on each subsequent onDraw(), I just pull those out of the List(s) and rattle through those onto the Canvas.
A further advantage of this would be to use separate Lists to store sections of vector graphics that are "moving", e.g. the gauge pointer. I thought of doing this by assigning a certain 'magic' ID to the group of paths in Inkscape that represent the pointer; the SVG parser class would then recognise that this separate group needs to be stored separately to the 'still' graphics. Then, whenever I need to refresh the angle of the pointer in accordance to measurement data, the View will only apply the rotational transform to that bunch of vector data. In fact, I'm thinking of doing it so that the moving pointer graphics are actually drawn in a child View, so that just the child View is redrawn when the pointer has to be refreshed.
The end objective of all of this is this: I (or perhaps even users) could fire up a vector imaging program like Inkscape and quickly produce a new design of gauge widget. I embed a bit of metadata to indicate which bits of the graphics have to be manipulated according to measurement data.
Rather than asking for a solution to a problem as such, I'd like to hear opinions about what I'm doing here, and whether what I'm proposing could be done in a much more optimised way. Could it be very memory inefficient to cache groups of Path and Paint objects?
Furthermore, once it's good enough(!) I'll gladly publish my SVG class somewhere if anyone would find it useful.
Implement and measure!
Start with the straightforward approach--store your parsed vector data in lists in memory. If memory usage and rendering speed is acceptable: problem solved.
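The straightforward approach is essentially a lazy-initialized cache. A sketch (names hypothetical; a Supplier of strings stands in for your SVG parser and its Path/Paint/Matrix output):

```java
import java.util.List;
import java.util.function.Supplier;

// Lazy-initialized display list: the expensive SVG parse runs on the first
// draw only; every later draw reuses the cached vector data.
class GaugeView {
    private final Supplier<List<String>> parser; // stands in for the SVG parser
    private List<String> cachedPaths;            // would hold Path/Paint/Matrix
    int parseCount = 0;

    GaugeView(Supplier<List<String>> parser) {
        this.parser = parser;
    }

    void onDraw() {
        if (cachedPaths == null) {   // first draw: parse and cache
            cachedPaths = parser.get();
            parseCount++;
        }
        for (String path : cachedPaths) {
            // canvas.drawPath(...) in the real View
        }
    }
}
```

Keeping the "moving" paths (the pointer group) in a second list works the same way; only that list gets a new transform per frame.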
If not, try other things and measure. Some ideas:
parse SVG once, render once to Bitmaps, reuse bitmaps
render SVG as a part of build process, ship raster bitmaps with app
Except for the simplest cases, we're not very good at assessing how effective a particular technique is going to be. Hence the popular saying that premature optimization is the root of all evil.
