I'm implementing some native 2D draw functions in my graphics engine for Android, and a question came up when I observed the performance of my program.
At the moment I'm implementing drawLine/drawImage functions. In summary, the following values differ for each line/image drawn:
the color
the alpha value
the width of the line
rotation (only for images)
size/scale (also for images)
blending method (subtract, add, normal alpha)
Now, when an imageLine is drawn, I put the CPU-calculated vertex positions and UV values for 6 vertices (2 triangles) into a FloatBuffer and draw it immediately with drawArrays, after passing the drawing information (color, alpha, etc.) to the shader via uniforms. When I draw an image, a pre-built VBO is drawn directly after that information is passed.
The first thing I noticed is, of course, that drawing images is much faster than imageLines (because of the VBOs), but also that I cannot put vertex data for imageLines into a VBO up front, because imageLines have no static shape like normal images do (the line length and line width vary, and the vertex positions x1,y1 and x2,y2 change too often). That's why I use a plain FloatBuffer instead of a VBO.
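For context, the dynamic line path described above might look roughly like the following sketch; names such as lineVertices, aPositionLoc, aTexCoordLoc and uColorLoc are illustrative placeholders, not the engine's actual API.

    // Stream 6 vertices (2 triangles) per line, recomputed on the CPU each frame.
    // Layout per vertex: x, y, u, v (4 floats, 16-byte stride).
    FloatBuffer buffer = ByteBuffer
            .allocateDirect(6 * 4 * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    buffer.put(lineVertices).position(0);      // interleaved x,y,u,v data

    GLES20.glVertexAttribPointer(aPositionLoc, 2, GLES20.GL_FLOAT, false, 16, buffer);
    buffer.position(2);
    GLES20.glVertexAttribPointer(aTexCoordLoc, 2, GLES20.GL_FLOAT, false, 16, buffer);
    GLES20.glUniform4f(uColorLoc, red, green, blue, alpha); // per-draw state via uniforms
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);

An alternative worth trying is a single VBO created with GL_DYNAMIC_DRAW usage and updated via glBufferSubData, so that many lines can be batched into one draw call instead of one call per line.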
So my question is: what's the best way to manage images and other 2D graphics functions? It is quite important to me that a user of the engine can draw as many images/2D graphics as possible without losing too much performance.
You can find the functions for drawing images, imagelines, rects, quads, etc. here:
https://github.com/Chrise55/LLama3D/blob/master/Llama3DLibrary/src/com/llama3d/object/graphics/image/ImageBase.java
Here's an example of how it looks with many images (testing artificial neural networks). It works fine, but it is already a little slow with that many images... :(
I am drawing some dots to represent players of two teams on a map.
Each team has its own colour.
It's important to note that each dot consists of two circles, an outer border and an inner fill, so there will be two colours, with the border always being the same.
It makes sense for me to generate this at runtime rather than packing a texture for each combination.
From my research, there seem to be many ways to achieve this, but each has an associated problem.
ShapeRenderer
ShapeRenderer is intended for debugging purposes and should not be used for regular drawing, as stated by a LibGdx developer here:
http://badlogicgames.com/forum/viewtopic.php?t=8573&p=38930
For this reason I avoided using this
Pixmap
This was very promising; I liked the idea that I could just generate two textures and re-use them for each sprite. The biggest problem is that textures made via Pixmap are unmanaged, so they are lost if the OpenGL context is lost and regained (easily reproduced in an Android application when the user backgrounds the app and restores it to the foreground). I am primarily targeting Android, so this is an issue for me.
Texture Re-Colour
I was thinking I could create a greyscale dot and re-colour it, but since my asset has two parts to it, I am not sure how I could selectively choose the inner circle and fill it.
Question 1 How Do I Restore Pixmap Texture On Context Loss?
I have not found an example which details how to do this. I assume it is done in the resume lifecycle callback, but what exactly do I need to do?
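For what it's worth, one common pattern is simply to rebuild the Pixmap-backed texture in resume(). This is a sketch under assumptions, not a definitive recipe; createDotTexture and the sizes/colours are hypothetical:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.graphics.Color;
    import com.badlogic.gdx.graphics.Pixmap;
    import com.badlogic.gdx.graphics.Texture;

    public class DotGame extends ApplicationAdapter {
        private Texture dotTexture;

        // Hypothetical helper: draws the two-circle dot into a Pixmap.
        private Texture createDotTexture(Color border, Color fill) {
            Pixmap pixmap = new Pixmap(32, 32, Pixmap.Format.RGBA8888);
            pixmap.setColor(border);
            pixmap.fillCircle(16, 16, 15);  // outer border circle
            pixmap.setColor(fill);
            pixmap.fillCircle(16, 16, 11);  // inner fill circle
            Texture texture = new Texture(pixmap);
            pixmap.dispose();               // pixel data now lives on the GPU
            return texture;
        }

        @Override
        public void resume() {
            // Pixmap-backed textures are unmanaged: rebuild them whenever
            // the GL context is restored after the app was backgrounded.
            if (dotTexture != null) dotTexture.dispose();
            dotTexture = createDotTexture(Color.BLACK, Color.RED);
        }
    }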
Question 2 Alternative Way?
Is there an alternative way for my issue perhaps?
Thanks for reading!
Load just one texture with a white circle. Use SpriteBatch to draw the players: first call batch.setColor(borderColor) and draw the circle texture at the outer radius, then call batch.setColor(fillColor) and draw it again at the inner radius. Sure, there is some performance impact from drawing the fill part twice, but if the circles are small enough the impact will be negligible.
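A minimal sketch of that idea, assuming whiteCircle is a texture of a plain filled white circle and Player/players/teamFillColor are placeholder names from your own code:

    batch.begin();
    for (Player p : players) {
        float outer = 10f, inner = 7f;     // example radii in world units
        batch.setColor(borderColor);       // tint pass 1: shared border colour
        batch.draw(whiteCircle, p.x - outer, p.y - outer, outer * 2, outer * 2);
        batch.setColor(p.teamFillColor);   // tint pass 2: team fill colour
        batch.draw(whiteCircle, p.x - inner, p.y - inner, inner * 2, inner * 2);
    }
    batch.setColor(Color.WHITE);           // reset the tint for later draws
    batch.end();

Since both passes use the same texture, everything stays within one SpriteBatch batch, so the extra fill draw costs some fill rate but no extra draw calls.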
I'm trying to make a 2D map (think tiled world map) in OpenGL ES 2.0 for an Android game. Basically, there are a few tile types with different textures, and the map is randomly generated from these types, so the map changes from game to game but stays the same for the duration of a single game.
My first thought was to generate a single large texture/image/bitmap (independent of OpenGL) beforehand, basically stitching duplicate tile textures together to make the larger map, and then using this single texture on one large map rectangle. In theory I think this is simple and would work fine, but I'm worried it won't scale well for larger maps; especially on mobile, I'll run out of memory with such a large image. Plus, there's a small set of tiles duplicated over and over, so it seems like a tremendous waste to repeat the same pixel data in a big texture.
My second thought was having many textures, one for each of the tile textures. But I'm not sure how this would work texture-binding-wise: would the shaders need to contain multiple texture references, with logic inside the shader for picking the right one?
Finally, I thought using a texture atlas could work, have one texture / image with all of the tile data in it, this would be relatively small. But I'm struggling to imagine how to get the maths to work out such that "tiles" or subsections of the map rectangle would use completely different texture coordinates.
Am I approaching this the wrong way? Should I be using a rectangle for each tile? At least this way I can pass the shaders both vertex and texture coordinates for each tile independently. This seems easier, but also seems wrong since the map really is just one rectangle that won't be changing.
My first thought was to generate a single large texture...
Actually, something like this has already been used in id Software's id Tech engine since version 4. It's called MegaTexture. Basically, it's one big texture which can also hold additional data.
My second thought was having many textures...
You don't need to hold all the textures in a shader. Do it like this:
Implement a loop with n iterations, where n is the number of different texture types used.
Inside the loop, bind the current texture type.
Pass any data, like position/color/texture coords, to the shaders.
Draw all tiles that use the bound texture. You could use GLES30.glDrawElementsInstanced or GLES30.glDrawArraysInstanced if you are targeting devices with GLES 3.x or appropriate extension support. Otherwise, draw your tiles using GLES20.glDrawArrays or GLES20.glDrawElements.
Shaders won't be complicated with this approach.
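A rough GLES 2.0 sketch of that loop; tilesByType, tileCounts and the attribute/uniform locations are assumed to be set up elsewhere and are illustrative names only:

    // One bind per texture type; all tiles sharing a texture are drawn in
    // a single glDrawArrays call from a prebuilt interleaved x,y,u,v buffer.
    for (int type = 0; type < textureIds.length; type++) {
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[type]);
        GLES20.glUniform1i(uSamplerLoc, 0);        // sampler reads texture unit 0

        FloatBuffer vertices = tilesByType[type];
        vertices.position(0);
        GLES20.glVertexAttribPointer(aPositionLoc, 2, GLES20.GL_FLOAT, false, 16, vertices);
        vertices.position(2);
        GLES20.glVertexAttribPointer(aTexCoordLoc, 2, GLES20.GL_FLOAT, false, 16, vertices);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, tileCounts[type] * 6);
    }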
Finally, I thought using a texture atlas could work...
You could use a loop here too and compute the texture coordinates for each tile type on the CPU, then just pass them to the shaders.
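For a regular grid atlas, the CPU-side maths is just an index-to-cell mapping; this sketch assumes cols x rows equally sized tiles (names are illustrative):

    // Map a tile type index to its UV rectangle in the atlas.
    float tileW = 1f / cols;
    float tileH = 1f / rows;
    float u0 = (index % cols) * tileW;   // left edge of the atlas cell
    float v0 = (index / cols) * tileH;   // top edge of the atlas cell
    float u1 = u0 + tileW;               // right edge
    float v1 = v0 + tileH;               // bottom edge
    // (u0,v0)..(u1,v1) become the texture coordinates of the tile's quad.

In practice you may want to inset these by half a texel so that linear filtering does not bleed neighbouring atlas cells into the tile edges.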
Considering that your map does not change during a game session, the MegaTexture approach looks good. However, it depends on how large your map is and how much memory is available. Also note that the maximum texture size is limited. The maximum differs from device to device, but should be (AFAIK) at least the screen size and no less than 64 texels (16 for cube-mapped textures). You can query the maximum texture size on any device using glGetIntegerv with GL_MAX_TEXTURE_SIZE.
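On Android the query looks like this:

    // Query the largest texture dimension the device supports before
    // deciding how big a MegaTexture can be.
    int[] maxTextureSize = new int[1];
    GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);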
So I'm trying to figure out how to draw a single textured quad many times. My issue is that the quads are constantly created and deleted, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution, as I've heard modifying buffers is extremely slow on Android, and it seems I would need to create a new one each frame since different quads might disappear at random (collide with an enemy). If I simply issue a draw call for each one, I get 20 fps at around 100 quads, which is unusable. Any advice?
Edit: I'm trying to create a bullet hell, but figuring out how to draw 500+ things is hurting my head.
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.
Using point sprites is quite cheap, but you have to do extra work in the fragment shader, and although GLES2 lets the vertex shader write gl_PointSize, the maximum point size is device-dependent, which matters if you need different sized particles. gl_PointSize Corresponding to World Space Size
My go-to particle system is storing positions in a double buffered texture, then draw using a single draw call and a static array of quads. This is related but I'll describe it a bit more here...
Create a texture (floating point if you can, but this may limit the supported devices). Each pixel holds the particle position and maybe rotation information.
[EDITED] If you need to animate the particles, you want to change the values in the texture each frame. To make this fast, get the GPU to do it in a shader: using an FBO, draw a fullscreen polygon and update the values in the fragment shader. The problem is that you can't (or shouldn't) read and write the same texture. The common approach is to double buffer the texture: create a second one to render into while you read from the first, then ping-pong between them.
Create a VBO for drawing the triangles. The positions are all the same, filling a -1 to 1 quad, but make the texture coordinates of each quad address the correct pixel in the texture above.
Draw the VBO, binding your positions texture. In the vertex shader, read the position given the vertex texture coordinate. Scale the -1 to 1 vertex positions to the right size, apply the position and any rotation. Use the original -1 to 1 position as the texture coordinate to pass to the fragment shader to add any regular colour textures.
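The vertex shader for that step might look like the sketch below (illustrative names, embedded as an Android GLSL string). Note that reading a texture in the vertex shader requires vertex texture fetch support, which not every GLES2 device provides:

    // aCorner is the -1..1 quad corner; aParticleCoord addresses this
    // particle's texel in the position texture (x, y, rotation, unused).
    private static final String PARTICLE_VERTEX_SHADER =
            "attribute vec2 aCorner;\n" +
            "attribute vec2 aParticleCoord;\n" +
            "uniform sampler2D uPositions;\n" +
            "uniform float uSize;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "  vec4 p = texture2D(uPositions, aParticleCoord);\n" +
            "  float c = cos(p.z);\n" +               // p.z holds the rotation
            "  float s = sin(p.z);\n" +
            "  vec2 corner = mat2(c, s, -s, c) * (aCorner * uSize);\n" +
            "  gl_Position = vec4(p.xy + corner, 0.0, 1.0);\n" +
            "  vTexCoord = aCorner * 0.5 + 0.5;\n" +  // reuse -1..1 corner as UV
            "}";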
If you ever have a GLSL version with gl_Vertex, I quite like generating these coordinates in the vertex shader, which saves storing unnecessarily trivial data just to draw simple objects. This for example.
To spawn particles, use glTexSubImage2D and write a block of particles into the position texture. You may need a few textures if you start storing more particle attributes.
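Spawning might look like this sketch; it assumes an RGBA float position texture (which needs the OES_texture_float extension on GLES2) with one particle per texel, and positionTexture/offset/count are placeholder names:

    // Write 'count' new particles into row 0 of the position texture,
    // starting at column 'offset'. Each texel stores (x, y, rotation, unused).
    FloatBuffer spawn = ByteBuffer
            .allocateDirect(count * 4 * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    for (int i = 0; i < count; i++) {
        spawn.put(spawnX[i]).put(spawnY[i]).put(spawnRotation[i]).put(0f);
    }
    spawn.position(0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, positionTexture);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, offset, 0,
            count, 1, GLES20.GL_RGBA, GLES20.GL_FLOAT, spawn);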
Currently I'm working on an application involving OpenGL ES 2.0. I'm using the Java wrapper for it, since the OpenGL part will probably not have the biggest complexity ever. Nonetheless, I'm currently stuck.
First, I'm trying to draw something like this:
So I just want to draw some sort of indicator of how big my "space" is, if there even are limitations. How would I draw such a cage around the center of the camera? (Of course I just want a simple one, basically a square indicating the boundaries, not something with rounded borders etc.)
To draw something like this without rounded corners, I suggest you simply draw a textured cube (there are plenty of examples around the web). For it to look as nice as the one in the image, you will also need to add some lights to the scene, as lighting is what gives the true 3D effect (a sphere without shading/lights will always appear as a 2D circle).
As for the limitations: there are no specific size limitations except overflow. In most cases you have 32-bit floating-point values in your vectors, so their maximum value is how big your space can be. Other limitations are more visual: you usually use a frustum for this type of scene, which has zNear and zFar clipping planes. These define that you cannot see pixels nearer than zNear or farther than zFar. Although you can set your own value for zFar, and it can be very large, there is a penalty in depth-buffer precision for doing so (the result can be incorrect drawing, i.e. z-fighting, when two objects are too close together).
So in general you are the one that has to take care of the scene scale or size and consider your field of view.
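To make the zNear/zFar point concrete, a typical Android projection setup (using android.opengl.Matrix) looks something like this; the actual values are scene-dependent examples:

    // zNear and zFar bound the visible depth range. Keep zNear as large and
    // zFar as small as the scene allows to preserve depth-buffer precision.
    float[] projection = new float[16];
    float aspect = (float) viewWidth / viewHeight;
    Matrix.frustumM(projection, 0,
            -aspect, aspect,   // left, right
            -1f, 1f,           // bottom, top
            1f,                // zNear (must be > 0)
            100f);             // zFar: anything farther is clipped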
We are developing a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World of Goo. More like the latter, as the world will not consist of repeated layers as featured in Angry Birds, but of one large image. As the scene needs to scroll and zoom, a lot of it will not be visible at any given time, so I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world, but the background layers).
We will be using an orthographic projection.
The first idea that comes to mind is creating a large four-vertex rectangle at world size with the background texture mapped to it, and translating/scaling it using glTranslatef/glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still rendered by OpenGL, since it cannot be culled (with only 4 vertices you would lose the visible area as well). Would it therefore be more efficient to subdivide this rectangle, so that the non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen, then moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
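For reference, the texture-coordinate variant can be driven by a couple of uniforms; the names below are illustrative, not from any existing code:

    // Scroll and zoom by remapping the screen quad's UVs each frame.
    GLES20.glUniform2f(uTexOffsetLoc, cameraX / worldWidth, cameraY / worldHeight);
    GLES20.glUniform1f(uTexScaleLoc, 1f / zoom);
    // In the vertex shader: vTexCoord = aTexCoord * uTexScale + uTexOffset;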
Maybe there is another way..?
If someone has an idea of how it might be done in Angry Birds / World of Goo, please share, as I'd love to hear it. They seem to have implemented a system that allows the world to be moved and zoomed very (World of Goo = VERY) smoothly.
Subdividing into tiles is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be memory transfer speeds and fill rate rather than any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible ones. You won't have one big rectangle with one big texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
AFAIK there would be no performance drop due to large areas being rendered off-screen; geometry outside the view volume is clipped before rasterization. Subdividing and culling are normally done just to reduce vertex count, but here you would actually be adding vertices.
Putting that aside for now: from the way you phrased the question, I am unsure whether you have one large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method: fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both it is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
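Setting that up is a one-time texture parameter change (sketch; backgroundTexture is a placeholder handle):

    // With GL_REPEAT, texture coordinates beyond 1.0 wrap around, so a small
    // tile repeats across an arbitrarily large quad. Note that GLES 2.0 only
    // allows GL_REPEAT on power-of-two texture dimensions.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, backgroundTexture);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);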
But for both it is a problem for a single large texture unless you subdivide, which would make the texture-coordinate tactic far more complicated than necessary. In this case, subdividing the mesh along texture subdivisions would be best, culling off-screen sections. Deciding which parts to cull should be trivial with this technique.
Cheers.