Preventing gaps/borders when using a texture for a sprite sheet - Android

I am using textured quads to render a grid of tiles from a sprite sheet. Unfortunately, when rendered there are small gaps between the individual tiles.
Changing the texture parameters to scale the texture using GL_NEAREST rather than GL_LINEAR fixes this, but results in artifacts within the textured quad itself. Is there some way to prevent GL_LINEAR from interpolating using pixels outside of the specified UV coordinates? Any other suggestions for how to fix this?
For reference, here's the sprite sheet I am using.

Looks like a precision problem with your texture maps. Are you using 32-bit floats or something smaller? And how do you calculate the coordinates?
Leaving a one-pixel border between textures also sometimes helps (rounding errors are hard to avoid entirely).
Myself, I use http://www.texturepacker.com/ (not affiliated in any way). It gives you the texture map and the UV coordinates, lets you specify padding around the textures, and can also extrude the last color around each texture, so even if you get weird rounding problems you always get a perfect seam.
I would check your precision and calculations first, though.
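A common companion fix is to inset each tile's UV rectangle by half a texel, so that GL_LINEAR never samples across a tile boundary. A minimal Java sketch, assuming a uniform grid atlas (the tile size and atlas dimensions here are illustrative parameters):

    // Returns {u0, v0, u1, v1} for one tile in a uniform grid atlas,
    // inset by half a texel so GL_LINEAR cannot bleed into neighbouring tiles.
    static float[] tileUv(int col, int row, int tileSize,
                          int atlasWidth, int atlasHeight) {
        float halfU = 0.5f / atlasWidth;   // half a texel in U
        float halfV = 0.5f / atlasHeight;  // half a texel in V
        float u0 = (col * tileSize) / (float) atlasWidth + halfU;
        float v0 = (row * tileSize) / (float) atlasHeight + halfV;
        float u1 = ((col + 1) * tileSize) / (float) atlasWidth - halfU;
        float v1 = ((row + 1) * tileSize) / (float) atlasHeight - halfV;
        return new float[] { u0, v0, u1, v1 };
    }

Combined with a pixel or two of padding between tiles (as TexturePacker's extrude option provides), this removes the seams at any zoom level.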

Related

Trying to figure out OpenGLES and its "space"

Currently I'm working on an application involving OpenGL ES 2.0. I'm using the Java wrapper for it, since the OpenGL part will probably not be terribly complex. Nonetheless, I'm currently stuck.
First, I'm trying to draw something like this (the picture showed a cage-like boundary around the scene).
So I just want to draw some sort of indicator of how big my "space" is - if there even are limitations. How would I draw such a cage around the center of the camera? (Of course I just want a simple one, basically a square, indicating boundaries, not something with rounded borders etc.)
To draw something like this without rounded corners, I suggest you simply draw a textured cube (there are plenty of examples around the web). For it to look as nice as the one in the image you will also need to add some lights to the scene, as they are what gives a true 3D effect (a sphere without shading/lights will always appear as a 2D circle).
As for the limitations: there are no specific size limits apart from numeric overflow. In most cases you have 32-bit floating-point values in your vectors, so their maximum value is effectively how big your space can be. The other limitations are visual. You usually use a frustum for this type of scene, which has zNear and zFar clipping planes; these define that you cannot see anything nearer than zNear or farther than zFar. Although you can choose your own value for zFar, and it can be very large, you should know there is a penalty in depth-buffer precision for doing so (the result can be incorrect drawing when two objects are very close together).
So in general you are the one who has to take care of the scene's scale or size, and consider your field of view.
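For illustration, with the Android Java bindings the near and far planes are the last two arguments of Matrix.frustumM; the values below are only an example (viewWidth/viewHeight are assumed to hold the viewport size):

    import android.opengl.Matrix;

    float[] proj = new float[16];
    float aspect = (float) viewWidth / viewHeight;  // viewport size, assumed known
    // near = 1, far = 100: geometry outside this range is clipped away;
    // pushing far out much further costs depth-buffer precision.
    Matrix.frustumM(proj, 0, -aspect, aspect, -1f, 1f, 1f, 100f);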

Draw outline using a shader program in OpenGL ES 2.0 on Android

I am using OpenGL ES 2.0 (on Android) to draw a simple 2D scene that has a few images. I have a background image and some others that have an alpha channel.
I would like to draw an outline around the non-transparent pixels in a texture using only shader programs. After a somewhat extensive search I failed to find example code. It looks like GLES 2.0 is still not that popular.
Can you provide some sample code or point me in the right direction where I can find more information on how to do this?
There are a couple of ways of doing this depending on (a) the quality and (b) the speed you need. The common search terms are:
"glow outline"
"bloom"
"toon shader" or "toon shading"
"edge detection"
"silhouette extraction"
"mask"
1) The traditional approach is to use the stencil buffer and render to texture
Clear the stencil buffer (usually done once per frame)
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT )
Render to Texture
Disable Depth Writes
glDepthMask( 0 );
Disable Color Buffer Writes
glColorMask( 0, 0, 0, 0 );
Enable the stencil buffer. Set the stencil test to always pass and replace:
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glStencilFunc( GL_ALWAYS, 1, 1 );
Draw object into texture
Disable stencil
Enable Color Buffer Writes
Enable Depth Writes
Do an N-tap blur, such as a 5- or 7-tap pass, where you blur the texture by rendering it onto itself in both the vertical and horizontal directions (another option is to draw the texture image scaled up)
Switch to orthographic projection
Draw & Blend the texture image back into the framebuffer
Restore perspective projection
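As a rough GLES20 (Java) sketch of the stencil state for the mask pass above; the render-to-texture plumbing and drawObject() are assumed to exist elsewhere:

    import android.opengl.GLES20;

    // Mask pass: write 1s into the stencil wherever the object covers,
    // touching neither the color nor the depth buffer.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT
            | GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_STENCIL_BUFFER_BIT);
    GLES20.glDepthMask(false);                       // disable depth writes
    GLES20.glColorMask(false, false, false, false);  // disable color writes
    GLES20.glEnable(GLES20.GL_STENCIL_TEST);
    GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
    GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 0xFF);
    drawObject();                                    // hypothetical draw call
    // Restore state for the blur and composite passes.
    GLES20.glDisable(GLES20.GL_STENCIL_TEST);
    GLES20.glColorMask(true, true, true, true);
    GLES20.glDepthMask(true);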
2) Pass along extra vertex data, namely which vertices are adjacent in the proper winding order, and dynamically generate extra outline triangles.
See: http://www.gamasutra.com/view/feature/1644/sponsored_feature_inking_the_.php?print=1
3) Use cheap edge detection. In the vertex shader, check the dot product of the normal with the view direction. If it is between
-epsilon < dot(normal, view) < epsilon
then you have an edge.
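As a sketch, here is that test moved to a fragment shader (an ES 2.0 GLSL string in Java; the varying names and uEpsilon are made up):

    // Hypothetical fragment shader: flags fragments whose normal is nearly
    // perpendicular to the view direction, i.e. silhouette edges.
    String edgeFs =
            "precision mediump float;\n" +
            "varying vec3 vNormal;\n" +    // interpolated normal (eye space)
            "varying vec3 vViewDir;\n" +   // direction to the camera (eye space)
            "uniform float uEpsilon;\n" +
            "void main() {\n" +
            "  float d = dot(normalize(vNormal), normalize(vViewDir));\n" +
            "  if (abs(d) < uEpsilon) {\n" +
            "    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);\n" +  // edge color
            "  } else {\n" +
            "    discard;\n" +
            "  }\n" +
            "}\n";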
4) Use cheap-o-rama object scaling. It doesn't work for concave objects, of course, but depending on your quality needs it may be "good enough":
Switch to a "flat" shader
Enable alpha testing (in ES 2.0 there is no fixed-function alpha test; discard transparent fragments in the shader instead)
Draw the model scaled up slightly
Disable Alpha Testing
Draw the model but at the normal size
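A pseudo-Java sketch of those steps (the helpers here are hypothetical):

    // Pass 1: the silhouette. Draw the model slightly enlarged with a flat
    // single-color shader; the "alpha test" is a discard in that shader.
    useFlatColorShader();             // hypothetical helper
    drawModel(scaled(model, 1.05f));  // model scaled up by ~5%
    // Pass 2: the model itself at normal size, drawn over the silhouette.
    useNormalShader();
    drawModel(model);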
References:
https://developer.valvesoftware.com/wiki/L4D_Glow_Effect
http://prideout.net/blog/?p=54
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html
Related SO questions:
Outline effects in OpenGL
To get the pixel shader drawing something, there needs to be geometry.
As far as I understand, you want to draw a border around these images, but in a basic implementation the outermost fragments generated would be image pixels, so you'd overdraw them with any border.
If you want a "line border", you cannot do anything other than draw the image triangles/quads (GL_TRIANGLES, GL_QUADS) and, in an additional call, the outline (using GL_LINES), where you may share the vertices of a single quad.
Consider that lines can't be drawn efficiently by many GPUs.
Otherwise, see the solutions below:
Solution 1:
Draw the rectangle as big as the image plus border will be, and adjust the texture coords for the image so that it is placed within the rectangle appropriately.
This way, no extra geometry or draw calls are required.
Set the texture border property (a single 4-component color); there will be no need for extra fragment-shader calculations, the texture unit/sampler does all the work.
Texture properties:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,borderColor4f)
I've never used a color border for a single-channel texture, so this approach needs to be verified (note also that GL_CLAMP_TO_BORDER is not part of core OpenGL ES 2.0).
Solution 2:
Similar to 1, but with calculations in the fragment shader to check whether the texture coords are within the border area, instead of using the texture border. Without modification, the scalars of a texture coord range from 0.0 to 1.0.
Texture properties may be:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE)
The fragment color could be determined by any of these methods:
an additional border color attribute for the rectangle, where either the texel or that border color is then selected (could be a vertex attribute, but more likely a uniform or constant).
combination of the alpha texture with a second texture as background for the whole rectangle (like a picture frame), where again one of the two texels is chosen.
some other math function
Of course, the color values could be mixed for image/border gradients.
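As a sketch of the first option (an additional border color uniform), assuming made-up uniform and varying names: texture coords outside [0, 1], i.e. the oversized quad's margin, get the border color instead of a texel.

    // Hypothetical ES 2.0 fragment shader as a Java string.
    String borderFs =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexture;\n" +
            "uniform vec4 uBorderColor;\n" +
            "void main() {\n" +
            "  bool inside = vTexCoord.x >= 0.0 && vTexCoord.x <= 1.0 &&\n" +
            "                vTexCoord.y >= 0.0 && vTexCoord.y <= 1.0;\n" +
            "  gl_FragColor = inside ? texture2D(uTexture, vTexCoord)\n" +
            "                        : uBorderColor;\n" +
            "}\n";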
EDIT:
As the number, length and position of such outline segments will vary, and can even form concave shapes, you'd need to do this with a geometry shader, which is not available in ES 2.0. The best thing you can do is precompute a line loop for each image on the CPU. Doing such tests in a shader is rather inefficient and even overkill, depending on image size, the hardware you actually run it on, etc. If you drew a fixed number of line segments and transformed them in the vertex shader, you could not properly cover all cases, at least not without immense effort and GPU workload.
Should you intend to change the color values of the corresponding texels, your fragment shader would need to fetch a massive and varying number of texels for each neighbour pixel towards the texture edges, as in all the other implementations. Such brute-force techniques are usually a replacement for recursive and iterative algorithms, for which the CPU is a better choice. So I suggest you do it there, either by modifying the texture or by generating a second one to combine in the fragment shader.
Basically, you need to implement a path-finding algorithm which tries to "get around" opaque pixels towards any edge.
Your alpha channel can be seen as a grayscale image. Look for any edge detection/drawing algorithm, for example the Canny edge detector (http://en.wikipedia.org/wiki/Canny_edge_detector). Alternatively, and probably a much better idea if your images are not procedural, pre-compute the edges.
If your goal is to blend various images and then apply the contour to the result of that blending, try rendering to a texture, then render that texture over the screen and perform the edge detection there.
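For example, the cheapest per-pixel variant of that idea (uTexel and the 0.1 threshold are made-up values): a fragment belongs to the contour if it is transparent but has an opaque neighbour.

    // Hypothetical fragment shader: marks transparent texels that have at
    // least one opaque 4-neighbour, i.e. the contour around the image.
    String contourFs =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexture;\n" +
            "uniform vec2 uTexel;\n" +  // 1.0 / texture size
            "void main() {\n" +
            "  float a = texture2D(uTexture, vTexCoord).a;\n" +
            "  float n = texture2D(uTexture, vTexCoord + vec2(uTexel.x, 0.0)).a\n" +
            "          + texture2D(uTexture, vTexCoord - vec2(uTexel.x, 0.0)).a\n" +
            "          + texture2D(uTexture, vTexCoord + vec2(0.0, uTexel.y)).a\n" +
            "          + texture2D(uTexture, vTexCoord - vec2(0.0, uTexel.y)).a;\n" +
            "  if (a < 0.1 && n > 0.1) gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);\n" +
            "  else discard;\n" +
            "}\n";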

Scrolling/zooming a scene in OpenGL and subdivision

We are to develop a scrolling/zooming scene in OpenGL ES on Android, much like a level in Angry Birds, but more like a level in World of Goo: the world will not consist of repeated layers as in Angry Birds but of one large image. As the scene needs to scroll and zoom, a lot of it will not be visible at any time, so I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. the background layers, not the objects within the world).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size with the background texture mapped to it, and translating/scaling it using glTranslatef/glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still rendered by OpenGL, since it cannot be culled (with only four vertices, culling it would lose the visible area as well). Would it therefore be more efficient to subdivide this rectangle, so that non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen and moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way?
If someone has an idea of how it might be done in Angry Birds / World of Goo, please share, as I'd love to hear it. They seem to have implemented a system that allows the world to be moved and zoomed very (World of Goo = VERY) smoothly.
This is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be with memory transfer speeds and fill rate instead of any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible ones. You won't have one big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done just to reduce vertex count, and you would actually be adding to it here.
Putting that aside for now: from the way you phrased the question I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method: fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large-quad method. Surely for both it is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
But for both it is a problem for a single large texture unless you subdivide, which would make the texture coordinate tactic far more complicated than necessary. In that case, subdividing the mesh along texture subdivisions and culling the off-screen sections would be best. Deciding which parts to cull should be trivial with this technique.
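To make that concrete, a minimal sketch (all names and units hypothetical) that picks the range of background tiles overlapping the camera rectangle, so every other tile can simply be skipped:

    // Returns {firstCol, firstRow, lastCol, lastRow}: the inclusive range of
    // background tiles overlapping the camera rectangle; draw only these.
    // camX/camY: camera centre in world units; viewW/viewH: visible world size.
    static int[] visibleTiles(float camX, float camY, float viewW, float viewH,
                              float tileSize, int tilesAcross, int tilesDown) {
        int c0 = Math.max(0, (int) Math.floor((camX - viewW / 2) / tileSize));
        int r0 = Math.max(0, (int) Math.floor((camY - viewH / 2) / tileSize));
        int c1 = Math.min(tilesAcross - 1, (int) Math.floor((camX + viewW / 2) / tileSize));
        int r1 = Math.min(tilesDown - 1, (int) Math.floor((camY + viewH / 2) / tileSize));
        return new int[] { c0, r0, c1, r1 };
    }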
Cheers.

Emulating constant size of point sprites using OpenGL ES

I am trying to emulate point sprites in Android using OpenGL, specifically the characteristic that their size stays the same when the view is zoomed.
I do not want to use point sprites themselves, as they pop out of the frustum when the "point" reaches the edge, regardless of size. I do not want to go down the route of orthographic projection either.
Say I have a billboard square with a size of 1. When the user zooms in, I would need to decrease the size of the square so it looks the same size; if the user zooms out, I increase it. I have the projection and model matrices to hand, as well as the FOV, if these are required. My head just goes blank every time I sit down and think about it! Any ideas on the necessary algorithm?
OK, since I zoom into the environment by changing the field of view, I divide the quad size by (max_fov / current_fov). It works for me.
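As a tiny sketch of that correction (FOV values in degrees, names made up):

    // Keeps the billboard's on-screen size constant while zooming via FOV:
    // the narrower the current FOV (zoomed in), the smaller the quad in world units.
    static float zoomCorrectedSize(float baseSize, float maxFovDeg, float currentFovDeg) {
        return baseSize / (maxFovDeg / currentFovDeg);
    }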

Texture mapping to a triangle strip from an atlas in OpenGL ES

I'm new to OpenGL ES on Android and struggling to get my head around the concept of texturing.
I am looking to produce a tilemap of various different textures. I understand that it is better to use an atlas of all the combined textures so I don't have to rebind repeatedly. However, I am unsure quite how to then map these textures onto my tilemap.
I understand the process of specifying vertices and then the coordinates on the texture map I wish to take them from (I drew a picture too!).
Click for image - curse newbies not allowed to post images :(
But my question is: can I draw a triangle strip that is, in effect, longer than one "tile" but map a different area of the texture to each "tile"?
So instead of drawing one triangle strip pretending to be a quad at a time for each tile, can I somehow draw a whole row of the tilemap (like 1, 2, 3, 4) and cleverly shift the texture coordinates around so each "tile" comes from a different area of the texture? For example, I draw a triangle strip four tiles long but shift the texture coordinates so the first "tile" is the yellow of my texture, the second red, the third blue... etc.
If I've not explained myself too well, apologies!
It might just be that this is not possible and I have to draw each one individually, which would mean I've saved effort with an atlas only to have to draw them all out slowly one at a time anyway. Hmm.
Sure, just adjust the texture coordinates; that is how texture atlases work.
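A sketch of what this looks like in practice, with hypothetical values: each quad in the row gets its own UV sub-rectangle of the atlas. One caveat: where two tiles meet, the shared position needs different texture coordinates on each side, so the vertices must be duplicated (stitch a strip with degenerate triangles, or just use indexed GL_TRIANGLES as assumed here).

    // Builds interleaved x,y,u,v vertices for one row of quads, each mapped to
    // its own column of a horizontal atlas strip (layout is hypothetical).
    // Shared edges are duplicated; draw as indexed GL_TRIANGLES.
    static float[] buildRow(int tiles, float tileSize,
                            int[] atlasCols, int atlasTileCount) {
        float[] v = new float[tiles * 4 * 4];  // 4 corners per quad, 4 floats each
        int i = 0;
        for (int t = 0; t < tiles; t++) {
            float x0 = t * tileSize, x1 = x0 + tileSize;
            float u0 = atlasCols[t] / (float) atlasTileCount;        // left edge in atlas
            float u1 = (atlasCols[t] + 1) / (float) atlasTileCount;  // right edge in atlas
            float[] quad = {
                x0, 0f,       u0, 1f,    // bottom-left
                x0, tileSize, u0, 0f,    // top-left
                x1, 0f,       u1, 1f,    // bottom-right
                x1, tileSize, u1, 0f };  // top-right
            System.arraycopy(quad, 0, v, i, quad.length);
            i += quad.length;
        }
        return v;
    }

With degenerate triangles you could also stitch these quads into a single strip, at the cost of two extra vertices per join.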
