I am working on an Android OpenGL ES tutorial and it says: "defining triangles is pretty easy in OpenGL, but what if you want to get just a little more complex? Say, a square? There are a number of ways to do this, but a typical path to drawing such a shape in OpenGL ES is to use two triangles drawn together"
http://developer.android.com/training/graphics/opengl/shapes.html
Why is the typical path to drawing such a shape to use two triangles instead of drawing the four corner coordinates of the square?
Graphics cards and rendering pipelines are really only built to render triangles, not anything more complex. The idea is that every other shape anyone could ever think of can be duplicated or approximated with triangles, sometimes millions of them. When you see GPUs being compared, you sometimes hear "maximum polygons on the screen" or something similar. They really mean triangles, but polygons sounds cooler. Triangles are simple to create but provide fantastic utility: any three points always form a valid, flat, convex triangle, no matter what order you list them in, which is a huuuge help.
The tl;dr answer is that GPUs render triangles really well, so much so that they don't bother knowing how to render much else.
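For concreteness, here is a minimal sketch of a square built from two triangles, in the spirit of the linked tutorial: four shared corner vertices plus a short index array that stitches them together (class and field names here are my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

public class Square {
    // Corner coordinates (x, y, z), listed counter-clockwise.
    static final float[] SQUARE_COORDS = {
            -0.5f,  0.5f, 0.0f,   // top left
            -0.5f, -0.5f, 0.0f,   // bottom left
             0.5f, -0.5f, 0.0f,   // bottom right
             0.5f,  0.5f, 0.0f }; // top right

    // Two triangles sharing corners 0 and 2: (0,1,2) and (0,2,3).
    static final short[] DRAW_ORDER = { 0, 1, 2, 0, 2, 3 };

    final FloatBuffer vertexBuffer;
    final ShortBuffer indexBuffer;

    public Square() {
        vertexBuffer = ByteBuffer.allocateDirect(SQUARE_COORDS.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        vertexBuffer.put(SQUARE_COORDS).position(0);

        indexBuffer = ByteBuffer.allocateDirect(DRAW_ORDER.length * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        indexBuffer.put(DRAW_ORDER).position(0);
        // At draw time the square is rendered as two triangles with:
        // GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6,
        //         GLES20.GL_UNSIGNED_SHORT, indexBuffer);
    }
}
```

Note that the four corners are only stored once; the index array is what turns them into the two triangles the GPU actually understands.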
Related
I've been programming for about a couple of years now, and I know the syntax of different languages pretty well, but I've spent most of my time making programs in basic Lua. I have played around with game development in Java and C#, with DirectX and OpenGL, but most of the time I don't even know terms that others seem to know really well, such as 'shaders' and 'VBOs'. I've heard of them, but I really don't know what they mean, so please explain them to me like I'm five if you have an answer.
I'm trying to make a simple Android game using OpenGL ES 2.0, and I've seen many tutorials explaining how to draw one triangle, but not many explaining how to draw many. In my case, I have a world full of triangles; it's simply an array of trues and falses, where if there is a true, a triangle will be drawn in that position, and if false, the triangle won't be drawn. I've made it so it only draws triangles while they are on screen, but I realized there is still one problem: rendering is still super slow, taking about 2 seconds to render one frame.
At the moment, I have one triangle class, and when the code decides a triangle belongs at a given position, it creates vertex points for that triangle and passes them into the triangle class, creating a new triangle object with those updated vertices. Now I can see how wrong this can go, but it's honestly the only thing I can come up with, given my very limited knowledge of OpenGL ES.
What I am looking for is a way to draw all of these triangles, in their correct positions, with the same color and size, efficiently enough that it doesn't take about 2 seconds to draw one frame.
If anyone has a solution, thank you.
What you are looking for is instanced drawing. That basically means you send one VBO (Vertex Buffer Object; this holds all the vertex information for a mesh) and an array of MVP matrices (Model View Projection matrices; these hold the transformation information for every instance of that VBO you want to draw) to the shader, all in one single draw call.
Unfortunately for us (I'm using OpenGL ES 2.0 right now as well), there is no way provided by OpenGL ES 2.0 to do this.
Anyway, there is an article about faking it, but I think it's too much trouble.
In other words, you are pretty much stuck with one draw call per mesh.
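To illustrate what that loop looks like in ES 2.0: you can still share one VBO across all the draws, but you end up uploading a fresh MVP uniform before each draw call. A minimal sketch (handle and parameter names like mvpHandle are my own assumptions, set up elsewhere in the usual way):

```java
import android.opengl.GLES20;

// Sketch of the "one draw call per mesh" loop in OpenGL ES 2.0.
// Vertex data is shared via a single VBO; only the MVP uniform changes.
// vboId, program, positionHandle, mvpHandle and mvpMatrices are assumed
// to have been created elsewhere.
static void drawAll(int vboId, int program, int positionHandle,
                    int mvpHandle, float[][] mvpMatrices, int vertexCount) {
    GLES20.glUseProgram(program);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
    GLES20.glEnableVertexAttribArray(positionHandle);
    // 3 floats per vertex, tightly packed, starting at offset 0 in the VBO.
    GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT,
            false, 0, 0);

    for (float[] mvp : mvpMatrices) {
        // One uniform upload plus one draw call per instance: this is
        // exactly the overhead real instancing would collapse into one call.
        GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvp, 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
    }

    GLES20.glDisableVertexAttribArray(positionHandle);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
```

At least the vertex data is uploaded only once; the per-frame cost is the loop of uniform updates and draw calls.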
I am trying to make a game. The game won't have any textures, because I want to change, for example, the color of every line or square. I heard that ShapeRenderer isn't efficient and shouldn't be used intensively. My game will be 2D and is for Android.
What can I do?
Maybe libGDX isn't good for shape rendering?
What should I do?
To answer, "What should I do?", ShapeRenderer can take you a long way. Try it, look at the source to understand how it works, measure its performance, and if it doesn't perform well enough then roll your own solution.
I'm currently working through a set of tutorials on Android OpenGL ES (1.1) and feel like I'm starting to get a grasp of how vertices and textures work, along with some sprite animation. As I understand it, the only primitives here are points, straight lines, and triangles.
I'm now trying to create a simple curve and really don't know where to start.
I want the curve to be drawn dynamically to represent something like a beam deflection, where I could input a force and have the curve change.
Is it something I would create with a line loop or triangle fan with a ton of vertices? Or perhaps a texture that I then manipulate?
Any input or a point in the right direction is much appreciated, thanks.
I can recommend this blog post: http://blog.uncle.se/2012/02/opengl-es-tutorial-for-android-part-ii-building-a-polygon/ Sadly, the original source returns a 404; hopefully the link provides the same quality of information. Anyhow, it's a good read for OpenGL.
You have the general idea. Whatever you draw must be made of points, lines, or triangles. You can generate the numbers for any pseudo-curve however you want, but in the end you're always passing the resulting vertices to OpenGL and connecting them with lines or triangles.
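As a minimal sketch of that idea for ES 1.1: sample your deflection function at N points and draw the samples as a line strip, rebuilding the buffer whenever the force changes. The cantilever formula used here, y = F·x²·(3L − x) / (6·E·I), is just an illustrative stand-in for whatever curve you actually need:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Sample a beam-deflection curve at n points and pack (x, y) pairs
// into a direct FloatBuffer for the fixed-function ES 1.1 pipeline.
static FloatBuffer buildCurve(int n, float length, float force,
                              float stiffness /* E * I */) {
    float[] verts = new float[n * 2];
    for (int i = 0; i < n; i++) {
        float x = length * i / (n - 1);
        float y = -force * x * x * (3 * length - x) / (6 * stiffness);
        verts[i * 2] = x;
        verts[i * 2 + 1] = y;
    }
    FloatBuffer fb = ByteBuffer.allocateDirect(verts.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    fb.put(verts).position(0);
    return fb;
}

// Draw the sampled curve as a connected line strip.
static void drawCurve(GL10 gl, FloatBuffer curve, int n) {
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, curve);
    gl.glDrawArrays(GL10.GL_LINE_STRIP, 0, n);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}
```

With enough samples (a few dozen is usually plenty for a smooth-looking deflection) the line strip is indistinguishable from a true curve; a triangle strip works the same way if you want the curve to have thickness.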
I've got an OpenGL scene rendered with a bunch of sprites, and I'd like to automagically add drop shadows to all of them. Here's a picture showing what I mean:
The scene uses orthographic projection, the sprites are textured quads, and I'm using the depth buffer to draw them front to back. I'm working with OpenGL ES 2.0, but thoughts from the iOS or non-ES worlds would be appreciated as well. I've tossed a few ideas around in my head of how I can go about this, and I'd like to find out which has the most promise.
Draw each sprite twice, the first normally, the second with some kind of drop shadow shader a bit deeper in the scene. Not sure if this is possible?
Draw a sprite, then draw it again, darkened and with some alpha, several times with some random jitter applied to the vertices. This may look silly and not at all like a shadow.
Draw the base scene without background to a texture, then blur and darken it to create one large drop shadow. Then draw the base scene over the drop shadow texture, then finally over the background. This would lose the shadows between sprites, though.
SSAO in a post-processing pass. Might be the most dynamic and automatic, but could look fuzzy/grainy and really slow things down.
At creation time, generate a shadow texture for each sprite. For rendering, draw a sprite and then its shadow texture a bit deeper in the scene. I think I'd like to avoid this due to the loading time and extra memory requirements, but this may be the fastest and best looking?
I don't want to do any shadow work with external textures, since I use the same sprite textures at varying scales, and pre-baked shadows would scale unnaturally.
So are any of these better than the others? Are there other options I'm not thinking of? Thanks!
Those are all well-thought-out options; here are my thoughts on each:
It is definitely possible to do this with a shader, but it might not be the most performant option, since the blurring has to be done inside the shader and might involve multiple texture lookups.
Drawing the texture multiple times would work and would look like a shadow, because each "jittered" image would have slightly modified alpha values. But again, blending and multiple renders of each sprite would add up and might affect performance.
I like and recommend this option, because you can use a shader that writes black pixels instead of colored pixels (respecting alpha) into a render target smaller than the screen (a quarter size?) and then use that as the shadow texture. Since the texture is then being stretched, you get the "blurring" for free, too. The pixel shader that does the "blackening" is very simple and shouldn't affect performance much (see the shader sketch after these points).
Unless you really need high-quality shadows (and the previous method doesn't suffice) I wouldn't recommend this.
This is of course the most flexible option, at roughly 2x rendering complexity. Unfortunately, it will also consume more memory than all of the options above.
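To make the third point concrete, here is a minimal sketch of that "blackening" fragment shader, embedded as a Java string the usual Android way (uniform and varying names are my own; pair it with your standard textured-quad vertex shader and render into the smaller offscreen target):

```java
// Fragment shader for the shadow pass: sample the sprite's texture but
// output black, keeping only the alpha so transparent areas cast no
// shadow. The 0.5 factor softens the shadow and is purely illustrative.
static final String SHADOW_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D u_texture;\n" +
        "varying vec2 v_texCoord;\n" +
        "void main() {\n" +
        "    float a = texture2D(u_texture, v_texCoord).a;\n" +
        "    gl_FragColor = vec4(0.0, 0.0, 0.0, a * 0.5);\n" +
        "}\n";
```

The whole sprite pass is rendered once with this shader into the quarter-size target, and the later upscale does the blurring for free.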
Hope this helps!
We are developing a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World of Goo. More like the latter, as the world will not consist of repeated layers as featured in Angry Birds, but of one large image. As the scene needs to scroll and zoom, a lot of it will not be visible at any given time, so I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world, just the background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size with the background texture mapped to it, and translating/scaling it using glTranslatef / glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still rendered by OpenGL, since the quad cannot be culled (you would lose the visible area as well, as there are only 4 vertices). Would it therefore be more efficient to subdivide this rectangle, so that the non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen and moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way..?
If someone has an idea of how it might be done in Angry Birds / World of Goo, please share, as I'd love to hear it. They both seem to have implemented a system that lets the world be moved and zoomed very smoothly (World of Goo especially).
Subdividing the background into tiles is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be memory transfer speed and fill rate rather than any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible ones. You won't have a big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
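The visibility arithmetic for a regular grid of tiles is simple. A minimal sketch, assuming world-unit tiles and an axis-aligned camera rectangle (loadTileIfNeeded and drawTile are hypothetical hooks for your own texture management and quad rendering):

```java
// Pick which background tiles overlap the camera's view rectangle and
// render only those; everything else can stay unloaded.
static void drawVisibleTiles(float camX, float camY, float viewW, float viewH,
                             float tileSize, int tileCols, int tileRows) {
    int firstCol = Math.max(0, (int) Math.floor(camX / tileSize));
    int firstRow = Math.max(0, (int) Math.floor(camY / tileSize));
    int lastCol  = Math.min(tileCols - 1,
            (int) Math.floor((camX + viewW) / tileSize));
    int lastRow  = Math.min(tileRows - 1,
            (int) Math.floor((camY + viewH) / tileSize));

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            loadTileIfNeeded(col, row); // hypothetical texture-cache hook
            drawTile(col, row);         // hypothetical quad-draw hook
        }
    }
}
```

Zooming just changes viewW/viewH, so the same loop handles both scrolling and zooming.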
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done to reduce vertex count, and you would actually be adding vertices here.
Putting that aside for now: from the way you phrased the question, I am unsure whether you have one large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method: fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both methods this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
But for both methods a single large texture is a problem unless you subdivide, which would make the texture coordinate tactic far more complicated than necessary. In that case, subdividing the mesh along texture subdivisions would be best, culling the off-screen sections. Deciding which parts to cull should be trivial with this technique.
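For the repeating-texture case, a minimal ES 1.1 sketch of the texture-coordinate approach (function names are my own; note that GL_REPEAT requires power-of-two texture dimensions in ES 1.1/2.0):

```java
import javax.microedition.khronos.opengles.GL10;

// Enable wrapping so texture coordinates outside [0, 1] repeat the image.
static void setRepeatWrap(GL10 gl) {
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
            GL10.GL_REPEAT);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
            GL10.GL_REPEAT);
}

// Rebuild the screen quad's (s, t) pairs each frame: scrolling is just an
// offset into the texture, zooming is the size of the window you sample.
static float[] scrolledTexCoords(float scrollX, float scrollY, float zoom) {
    float w = zoom, h = zoom;
    return new float[] {
            scrollX,     scrollY + h,   // bottom left
            scrollX + w, scrollY + h,   // bottom right
            scrollX,     scrollY,       // top left
            scrollX + w, scrollY        // top right
    };
}
```

The quad's vertex positions never change; only the texture coordinates move, which is why this approach is so cheap for repeating backgrounds.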
Cheers.