I'm new to opengl-es on android and struggling to get my head around the concept of texturing.
I am looking to produce a tilemap of various different textures. I understand that it is better to use an atlas of all the combined textures so I don't repeatedly rebind. However, I am unsure how to then map these textures onto my tilemap.
I understand the process of specifying vertices and then the coordinates on the texture map I wish to take them from (I drew a picture too!)
Click for image - curse newbies not allowed to post images :(
But my question is: can I draw a triangle strip that is, in effect, longer than one "tile" but map a different area of the texture to each "tile"?
So instead of drawing one quad-shaped triangle strip at a time for each tile, can I somehow draw a whole row of the tilemap (tiles 1, 2, 3, 4) and cleverly shift around the texture coordinates so each "tile" samples a different area of the texture? For example, I draw a triangle strip four tiles long but shift the texture coordinates so the first "tile" shows the yellow part of my texture, the second red, the third blue, and so on.
If I've not explained myself too well, apologies!
It might just be that this is not possible and I have to draw each tile individually, which would feel like saving effort with an atlas only to slowly draw everything out one by one anyway. Hmm.
Sure, just adjust the texture coordinates; that is how texture atlases work.
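One subtlety: adjacent tiles in a single strip share vertices, so to give each tile its own atlas region you duplicate the shared vertices and join the quads with degenerate (zero-area) triangles. Here is a minimal sketch in Java of building such a buffer, assuming a 2x2 atlas where the tile index selects a quadrant (the names and atlas layout are mine, purely for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class TileRowBuilder {
    // Builds interleaved x,y,u,v data for one row of tiles, each quad sampling
    // its own atlas region. Quads are joined by degenerate triangles so the
    // whole row draws with a single glDrawArrays(GL_TRIANGLE_STRIP, ...).
    public static FloatBuffer buildRow(int[] tiles, float tileSize) {
        final int atlasCols = 2;                    // assumed 2x2 atlas
        final float du = 0.5f, dv = 0.5f;           // one quadrant in UV space
        // 4 vertices per quad + 2 degenerate vertices between quads
        int vertCount = tiles.length * 4 + (tiles.length - 1) * 2;
        float[] data = new float[vertCount * 4];
        int p = 0;
        for (int i = 0; i < tiles.length; i++) {
            float x = i * tileSize;
            float u0 = (tiles[i] % atlasCols) * du; // quadrant's left edge
            float v0 = (tiles[i] / atlasCols) * dv; // quadrant's top edge
            // strip order: bottom-left, top-left, bottom-right, top-right
            float[][] quad = {
                { x,            0f,       u0,      v0 + dv },
                { x,            tileSize, u0,      v0      },
                { x + tileSize, 0f,       u0 + du, v0 + dv },
                { x + tileSize, tileSize, u0 + du, v0      },
            };
            if (i > 0) {
                // repeat the previous last vertex and the next first vertex,
                // creating two zero-area triangles that render nothing
                System.arraycopy(data, p - 4, data, p, 4); p += 4;
                for (float f : quad[0]) data[p++] = f;
            }
            for (float[] v : quad) for (float f : v) data[p++] = f;
        }
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }
}
```

The whole row then draws with one glDrawArrays call and a single texture bind; only the UVs differ per tile.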
I have a 3D mesh, which is a terrain. This runs perfectly fine, btw, but I want to have shapes moving across this terrain. These shapes are flat on the landscape and are blob-like: they can change shape and should follow the contours and the heightmap of the terrain. Whether these shapes are painted onto the landscape or flow over it doesn't matter.
The shapes are meant to be blocks of armies moving across the map, and this should happen in real time! They are 2D convex hull shapes, and they are just one color with an alpha value (like blue with alpha 0.25f).
The only problem is that I can't figure out how to do this, so the question is: can anyone tell me how?
My first thought was simply to copy the terrain vertex matrix, push it up a bit so it sits on top of the terrain, load this buffer into a VBO, update the index buffer according to the position and shape needed, and then draw the shape. This is rather slow and inefficient, especially when the shape is moving and changing. Also, the resolution of the heightmap is 175x175, so the movement is not at all smooth but rather jagged.
Then I thought (though I'm rather new to this area) of uploading the shape outlines to the fragment shader of the terrain and letting the shader decide whether a point lies in that area, changing its color accordingly. This also turned out to be a really slow option, but if anyone sees potential and a good way to do this, tell me!
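To make that idea concrete, the fragment shader I had in mind looked roughly like this (just a sketch; the uniform names and the half-plane encoding of the convex hull are made up for illustration):

```java
// Fragment shader sketch (GLSL ES 2.0): tint terrain fragments that fall
// inside a convex hull given as up to MAX_EDGES half-planes in world XZ.
// A fragment is inside when a*x + b*z + c >= 0 holds for every edge (a,b,c).
private static final String TINT_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "const int MAX_EDGES = 8;\n"
      + "uniform vec3 uEdges[MAX_EDGES];\n"
      + "uniform int uEdgeCount;\n"
      + "uniform vec4 uShapeColor;      // e.g. vec4(0.0, 0.0, 1.0, 0.25)\n"
      + "varying vec2 vWorldXZ;         // from the vertex shader\n"
      + "varying vec4 vTerrainColor;\n"
      + "void main() {\n"
      + "    bool inside = true;\n"
      + "    for (int i = 0; i < MAX_EDGES; i++) {\n"
      + "        if (i >= uEdgeCount) break;\n"
      + "        if (dot(uEdges[i].xy, vWorldXZ) + uEdges[i].z < 0.0) {\n"
      + "            inside = false;\n"
      + "        }\n"
      + "    }\n"
      + "    vec4 c = vTerrainColor;\n"
      + "    if (inside) c = mix(c, uShapeColor, uShapeColor.a);\n"
      + "    gl_FragColor = c;\n"
      + "}";
```

The per-fragment loop over all the edges is presumably why this was so slow for me.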
The next option was to draw directly onto the texture, which I still haven't gotten working. If someone has any good ideas on how to draw a scene to a flat area and then put that on a terrain mesh, that would be great!
So, does anyone have a solution for drawing a shape (or several) on a terrain? That would be awesome. Thanks in advance!
I have an Android 4.0 application that uses the GL_OES_EGL_image_external method of rendering video as an OpenGL surface. That works great. In addition, I would like to stretch/warp a few patches on top of that. I'm currently marking the areas I would like to warp with some additional shaders on quads drawn on top of those areas. I'm stuck on how to get the underlying color: how does the shader on my quad, on top of the video quad, warp the underlying image? Is it possible?
I'm on iOS, but my app does something very similar.
How I've achieved it is based on some sample code from Apple (look at RippleModel.m in particular). How it works is that it places the video texture not on a quad, but on a highly tessellated grid, so you've got a ton of triangles with a ton of texture coordinates. It creates the vertices of this grid programmatically -- and more importantly, it creates the texture coordinates programmatically as well -- and holds them in an array.
For each frame, it iterates through all the vertices and updates the texture coordinates for each, 'warping' them in a ripple pattern based on where the user has touched and on how much texture offset the surrounding vertices have. So the geometry isn't changed at all, and they don't perform the warp in the shader; it's all done in the texture coordinates. The shader then just does a straight texture lookup on the coordinates it receives.
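In rough Java terms, the per-frame update looks something like this (my own sketch of the idea, not a port of RippleModel.m; the ripple formula here is a toy one):

```java
// Per-frame texture coordinate update for a (cols+1) x (rows+1) vertex grid.
// texCoords holds interleaved u,v pairs; restCoords holds the undisturbed
// coordinates. The radial formula is illustrative, not Apple's exact math.
void updateTexCoords(float[] texCoords, float[] restCoords,
                     int cols, int rows,
                     float touchU, float touchV, float time) {
    for (int y = 0; y <= rows; y++) {
        for (int x = 0; x <= cols; x++) {
            int i = 2 * (y * (cols + 1) + x);
            float u = restCoords[i];
            float v = restCoords[i + 1];
            float dx = u - touchU;
            float dy = v - touchV;
            float dist = (float) Math.sqrt(dx * dx + dy * dy);
            // amplitude decays with distance; the phase travels outward
            float amp = 0.02f * (float) Math.sin(30f * dist - 6f * time)
                              / (1f + 20f * dist);
            texCoords[i]     = u + (dist > 0f ? amp * dx / dist : 0f);
            texCoords[i + 1] = v + (dist > 0f ? amp * dy / dist : 0f);
        }
    }
    // re-upload texCoords (e.g. glBufferSubData) and draw; the geometry
    // itself is never touched
}
```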
So it's hard to say whether this approach will work for your needs, but if your warps only happen in 2D, and if you can figure out how to define your warp as texture coordinate adjustments, this may help.
I've got an OpenGL scene rendered with a bunch of sprites, and I'd like to automagically add drop shadows to all of them. Here's a picture showing what I mean:
The scene uses orthographic projection, the sprites are textured quads, and I'm using the depth buffer to draw them front to back. I'm working with OpenGL ES 2.0, but thoughts from the iOS or non-ES worlds would be appreciated as well. I've tossed a few ideas around in my head of how I can go about this, and I'd like to find out which has the most promise.
1. Draw each sprite twice: the first time normally, the second with some kind of drop shadow shader a bit deeper in the scene. Not sure if this is possible?
2. Draw a sprite, then draw it again, darkened and with some alpha, several times with some random jitter applied to the vertices. This may look silly and not at all like a shadow.
3. Draw the base scene without the background to a texture, then blur and darken it to create one large drop shadow. Then draw the base scene over the drop shadow texture, then finally over the background. This would lose the shadows between sprites, though.
4. SSAO in a post-processing pass. Might be the most dynamic and automatic, but could look fuzzy/grainy and really slow things down.
5. At creation time, generate a shadow texture for each sprite. For rendering, draw a sprite and then its shadow texture a bit deeper in the scene. I think I'd like to avoid this due to the loading time and extra memory requirements, but it may be the fastest and best looking?
I don't want to do any shadow work with external textures, since I use the same sprite textures at varying scales, and pre-baked shadows would scale unnaturally.
So are any of these better than the others? Are there other options I'm not thinking of? Thanks!
Those are all well-thought-out options; here are my thoughts on each:
1. It is definitely possible to use a shader, but it might not be the most performant option, since the blurring has to be done inside the shader and might involve multiple texture lookups.
2. Drawing the sprite multiple times would work and would look like a shadow, because each "jittered" image would have slightly modified alpha values. But again, the blending and multiple renders of each sprite add up and might affect performance.
3. I like and recommend this option, because you can set a shader that writes black pixels instead of colored pixels (taking alpha into account) into a render target smaller than the screen (1/4th?) and then use this as the shadow texture; see the sketch below. Since the texture is then stretched, you get the "blurring" for free, too. The pixel shader that does the "blackening" would be very simple and would not affect performance much.
4. Unless you really need high-quality shadows (and the previous method doesn't suffice), I wouldn't recommend this.
5. This is of course the most flexible option, though it roughly doubles the rendering work. Unfortunately, it will also consume more memory than all the other options above.
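For option 3, the "blackening" fragment shader really can be tiny; something like this sketch (the names are illustrative, and keeping only the texture's alpha is one way to "take alpha into account"):

```java
// The "blackening" pass for option 3: draw each sprite into a smaller
// offscreen render target, keeping only its alpha as a black silhouette.
private static final String SHADOW_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "uniform sampler2D uTexture;\n"
      + "uniform float uShadowOpacity;   // e.g. 0.5\n"
      + "varying vec2 vTexCoord;\n"
      + "void main() {\n"
      + "    float a = texture2D(uTexture, vTexCoord).a;\n"
      + "    gl_FragColor = vec4(0.0, 0.0, 0.0, a * uShadowOpacity);\n"
      + "}";
```

Render the sprites with that shader into a quarter-size FBO, then draw the FBO's texture stretched under the real scene; the upscaling supplies the blur for free.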
Hope this helps!
We need to develop a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds, but even more like a level in World of Goo: the world will not consist of repeated layers, as featured in Angry Birds, but of one large image. As the scene needs to scroll/zoom, and therefore a lot of it will not be visible, I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world, but the background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size with the background texture mapped to it, and translating/scaling it using glTranslatef / glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still rendered by OpenGL, since it cannot be culled (with only four vertices, culling would lose the visible area as well). Would it therefore be more efficient to subdivide this rectangle, so that the non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen, then moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way..?
If someone has an idea of how it might be done in Angry Birds / World of Goo, please share, as I'd love to hear it. They seem to have implemented a system that allows the world to be moved and zoomed very smoothly (World of Goo especially).
Subdividing the world-size rectangle is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be memory transfer speeds and fill rate rather than any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible ones. You won't have one big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
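Deciding which of those smaller rectangles are visible is just arithmetic on the camera rectangle; a sketch (assuming an axis-aligned orthographic camera in world units, with hypothetical names):

```java
// Compute the range of background tiles that intersect the camera rectangle:
// the camera sits at (camX, camY) and shows viewW x viewH world units over a
// grid of tileSize-sized tiles. Ranges are clamped to the world's tile counts.
void drawVisibleTiles(float camX, float camY, float viewW, float viewH,
                      float tileSize, int tilesAcross, int tilesDown) {
    int x0 = Math.max(0, (int) Math.floor(camX / tileSize));
    int y0 = Math.max(0, (int) Math.floor(camY / tileSize));
    int x1 = Math.min(tilesAcross - 1,
                      (int) Math.floor((camX + viewW) / tileSize));
    int y1 = Math.min(tilesDown - 1,
                      (int) Math.floor((camY + viewH) / tileSize));
    for (int ty = y0; ty <= y1; ty++)
        for (int tx = x0; tx <= x1; tx++)
            drawTile(tx, ty); // bind (or reuse) this tile's texture and draw
}

void drawTile(int tx, int ty) { /* bind texture, set uniforms, draw quad */ }
```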
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling are normally done just to reduce vertex count, but you would actually be adding to it here.
Putting that aside for now: from the way you phrased the question I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method: fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation when talking about the texture coordinate method and not the large quad method. Surely for both methods this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
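For the repeating case, the wrap mode plus a texture coordinate offset is all it takes; a quick sketch (the GLES20 calls are standard, the offset bookkeeping and names are my own):

```java
import android.opengl.GLES20;

public class ScrollingBackground {
    // GL_REPEAT on ES 2.0 requires a power-of-two texture.
    static void enableRepeat(int textureId) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    }

    // UVs for a full-screen quad (triangle strip order), shifted by the
    // camera position. zoom is pixels per world unit; texWorldW/texWorldH is
    // the size one repeat of the texture covers in world units.
    static float[] scrolledUvs(float camX, float camY, float zoom,
                               float texWorldW, float texWorldH,
                               float screenW, float screenH) {
        float u0 = camX / texWorldW;
        float v0 = camY / texWorldH;
        float u1 = u0 + (screenW / zoom) / texWorldW;
        float v1 = v0 + (screenH / zoom) / texWorldH;
        return new float[] { u0, v1,  u0, v0,  u1, v1,  u1, v0 };
    }
}
```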
But for both it is a problem for a single large texture unless you subdivide, which would make the texture coordinate tactic way more complicated than necessary. In this case subdividing the mesh along texture subdivisions would be best, and culling off-screen sections. Deciding which parts to cull should be trivial with this technique.
Cheers.
I am using textured quads to render a grid of tiles from a sprite sheet. Unfortunately when rendered, there are small gaps between the individual tiles:
Changing the texture parameters to scale the texture using GL_NEAREST rather than GL_LINEAR fixes this, but results in artifacts within the textured quad itself. Is there some way to prevent GL_LINEAR from interpolating using pixels outside of the specified UV coordinates? Any other suggestions for how to fix this?
For reference, here's the sprite sheet I am using:
This looks like a precision problem with your texture coordinates. Are you using floats (32-bit) or something smaller? And how do you calculate the coordinates?
Also, leaving a 1-pixel border between textures sometimes helps (sometimes you get a rounding error no matter what you do).
Myself, I use this program: http://www.texturepacker.com/ (not affiliated in any way). You get the texture map and UV coordinates from it; you can also specify padding around the textures, and it can extrude the last color around your texture's edge, so even if you get weird rounding problems you can always get a perfect seam.
I would check your precision and calculations first, though.
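For completeness, another standard trick along the same lines is to inset each tile's UVs by half a texel, so that GL_LINEAR can never sample across into the neighbouring tile (a sketch; the atlas parameters are examples, not taken from your sheet):

```java
// Compute UVs for the tile at (col, row) in an atlas, inset by half a texel
// on each side so bilinear filtering never reaches the neighbouring tile.
// tilePx is the tile size in pixels, atlasPx the (square) atlas size.
float[] tileUvs(int col, int row, int tilePx, int atlasPx) {
    float texel = 1f / atlasPx;
    float u0 = col * tilePx * texel + 0.5f * texel;
    float v0 = row * tilePx * texel + 0.5f * texel;
    float u1 = (col + 1) * tilePx * texel - 0.5f * texel;
    float v1 = (row + 1) * tilePx * texel - 0.5f * texel;
    return new float[] { u0, v0, u1, v1 };
}
```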