So I'm trying to figure out how to draw a single textured quad many times. My issue is that the quads are constantly created and deleted, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution: I've heard modifying buffers is extremely slow on Android, and it seems I would need to rebuild the buffer every frame, since different quads can disappear at any moment (e.g. by colliding with an enemy). If I simply issue a draw call for each one, I get 20fps at around 100 quads, which is unusable. Any advice?
Edit: I'm trying to create a bullet hell game, but figuring out how to draw 500+ things is hurting my head.
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.
Using point sprites is quite cheap, but you have to do extra work in the fragment shader, and while GLES2 does let you write gl_PointSize in the vertex shader for different sized particles, the maximum point size is implementation-dependent. See: gl_PointSize Corresponding to World Space Size
My go-to particle system stores positions in a double-buffered texture and draws everything with a single draw call over a static array of quads. This is related, but I'll describe it a bit more here...
Create a texture (floating point if you can, though that may limit the devices you support). Each pixel holds a particle's position and maybe rotation information.
[EDITED] If you need to animate the particles, you want to change the values in the texture each frame. To make this fast, get the GPU to do it in a shader: using an FBO, draw a fullscreen polygon and update the values in the fragment shader. The problem is that you can't (or at least shouldn't) read and write the same texture, so the common approach is to double buffer: create a second texture to render to while you read from the first, then ping-pong between them.
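A minimal sketch of the ping-pong step, assuming GLES20 on Android and that the two position textures and FBOs were created elsewhere at the particle-texture size; posTex, fbo, updateProgram and drawFullscreenQuad() are all hypothetical names:

int read = 0, write = 1;

void updateParticles() {
    // Render into the "write" texture while sampling the "read" one.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[write]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
            GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, posTex[write], 0);
    GLES20.glViewport(0, 0, texW, texH);

    GLES20.glUseProgram(updateProgram);           // fragment shader integrates positions
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, posTex[read]);
    drawFullscreenQuad();

    // Swap roles so the next frame reads what was just written.
    int tmp = read; read = write; write = tmp;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}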
Create a VBO for drawing the quads as triangles. The vertex positions are all the same, filling a -1 to 1 quad, but make the texture coordinates of each quad address its own pixel in the texture above.
Draw the VBO with your positions texture bound. In the vertex shader, read the position at the vertex's texture coordinate, scale the -1 to 1 vertex positions to the right size, and apply the position and any rotation. Reuse the original -1 to 1 position as the texture coordinate to pass to the fragment shader for any regular colour textures.
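A sketch of such a vertex shader, embedded as a Java string the way Android GLES code usually ships it. All names are assumptions, and note that vertex texture fetch is optional in GLES2 (MAX_VERTEX_TEXTURE_IMAGE_UNITS may be 0 on older devices):

static final String PARTICLE_VS =
        "attribute vec2 aPos;\n" +            // shared -1..1 quad corner
        "attribute vec2 aParticleUV;\n" +     // this quad's texel in the position texture
        "uniform sampler2D uPositions;\n" +
        "uniform mat4 uViewProj;\n" +
        "uniform float uSize;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    vec4 data = texture2DLod(uPositions, aParticleUV, 0.0);\n" +
        "    vec2 center = data.xy;\n" +      // particle position
        "    float angle = data.z;\n" +       // optional rotation
        "    vec2 p = aPos * uSize;\n" +
        "    p = vec2(p.x * cos(angle) - p.y * sin(angle),\n" +
        "             p.x * sin(angle) + p.y * cos(angle));\n" +
        "    gl_Position = uViewProj * vec4(center + p, 0.0, 1.0);\n" +
        "    vTexCoord = aPos * 0.5 + 0.5;\n" + // reuse the corner as a texture coord
        "}\n";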
If you ever have a GLSL version with gl_VertexID, I quite like generating these coordinates in the vertex shader, which saves storing unnecessarily trivial data just to draw simple objects. This for example.
To spawn particles, use glTexSubImage2D to write a block of particles into the position texture. You may need a few textures if you start storing more particle attributes.
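A hedged sketch of that spawn, assuming a float texture (OES_texture_float) and that the block fits within one row of the texture; posTex, texW and the attribute layout are illustrative:

// java.nio buffers; each particle is one RGBA texel: x, y, rotation, alive flag.
float[] spawn = new float[count * 4];
for (int i = 0; i < count; i++) {
    spawn[i * 4]     = emitterX;
    spawn[i * 4 + 1] = emitterY;
    spawn[i * 4 + 2] = 0f;    // rotation
    spawn[i * 4 + 3] = 1f;    // alive flag, for example
}
FloatBuffer buf = ByteBuffer.allocateDirect(spawn.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
buf.put(spawn).position(0);

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, posTex[read]);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0,
        firstTexel % texW, firstTexel / texW,  // x, y offset of the block
        count, 1,                              // width, height
        GLES20.GL_RGBA, GLES20.GL_FLOAT, buf);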
Related
I'm trying to make a 2d map (for a game, think tiled world map) in OpenGL ES 2.0 for an android game. Basically, there are a few tile types that have different textures, and the map is randomly generated from these types, so from game-to-game the map changes but for the duration of a single game it stays the same.
My first thought was to generate a single large texture / image / bitmap (independent of OpenGL) beforehand, basically stitching duplicate tile textures together to make the larger map, and then using this single texture for one large map rectangle. In theory I think this is simple and would work fine, but I'm worried that it won't scale well for larger maps, and that especially on mobile I'll run out of memory with such a large image. Plus, there's only a small set of tiles duplicated over and over, so it seems like a tremendous waste to duplicate the pixel data in a big texture again and again.
My second thought was having many textures, one for each of the tile textures. But I'm not sure how this would work texture-binding-wise: would I need the shaders to contain multiple texture references, with logic inside the shader for using the right one?
Finally, I thought a texture atlas could work: one texture / image with all of the tile data in it, which would be relatively small. But I'm struggling to see how the maths works out so that "tiles", i.e. subsections of the map rectangle, would use completely different texture coordinates.
Am I approaching this the wrong way? Should I be using a rectangle for each tile? At least this way I can pass the shaders both vertex and texture coordinates for each tile independently. This seems easier, but also seems wrong since the map really is just one rectangle that won't be changing.
My first thought was to generate a single large texture...
Actually, something like this has already been used in id Software's id Tech engine since version 4. It's called MegaTexture. Basically, it's a big texture which can also hold additional data.
My second thought was having many textures...
You don't need to hold all the textures in a shader. Do it like this:
Implement a loop with n iterations, where n is the number of different texture types used.
Inside the loop, bind the current texture type.
Pass any data, like position/color/texture coords to shaders.
Draw all tiles that use the bound texture. You could use GLES30.glDrawElementsInstanced or GLES30.glDrawArraysInstanced if you are targeting devices with GLES 3.x or appropriate extension support. Otherwise, draw your tiles using GLES20.glDrawArrays or GLES20.glDrawElements.
Shaders won't be complicated with this approach.
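A minimal sketch of that loop on the GLES20 fallback path; textureIds, tilesByType and the TileBatch grouping are assumed helpers, not a fixed API:

for (int type = 0; type < textureIds.length; type++) {
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[type]);
    GLES20.glUniform1i(uTextureLoc, 0);          // sampler always reads unit 0

    TileBatch batch = tilesByType[type];         // all tiles sharing this texture
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, batch.vbo);
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, batch.ibo);
    // ... glVertexAttribPointer setup for positions/texcoords goes here ...
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, batch.indexCount,
            GLES20.GL_UNSIGNED_SHORT, 0);
}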
Finally, I thought using a texture atlas could work...
You could use a loop here too and compute the texture coordinates for each tile type on the CPU, then just pass them to the shaders.
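The maths is less scary than it looks. Assuming a grid atlas of atlasCols x atlasRows equal tiles, a sketch of the per-tile coordinates:

float u0 = (tileIndex % atlasCols) / (float) atlasCols;
float v0 = (tileIndex / atlasCols) / (float) atlasRows;
float u1 = u0 + 1.0f / atlasCols;
float v1 = v0 + 1.0f / atlasRows;
// Write (u0,v0)..(u1,v1) into the tile quad's texcoord attribute. Inset by
// half a texel (0.5f / atlasWidthPx) if you see bleeding between tiles.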
Considering your map doesn't change during a game session, the MegaTexture approach looks good. However, it depends on how large your map is and how much memory is available. Also note that the maximum texture size is limited. It differs from device to device, but should be (AFAIK) equal to or greater than the screen size, and is at least 64 texels (16 for cube-mapped textures). You can query it on any device.
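On Android that query looks like:

int[] maxSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxSize, 0);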
I have imported a model (e.g. a teapot) using Rajawali into my scene.
What I would like is to label parts of the model (e.g. the lid, body, foot, handle and the spout) using plain Android views, but I have no idea how this could be achieved. Specifically, positioning the labels in the right place seems challenging. The idea is that when I transform my model's position in the scene, the tips of the labels are still correctly positioned.

The Rajawali tutorial shows how Android views can be placed on top of the scene here: https://github.com/Rajawali/Rajawali/wiki/Tutorial-08-Adding-User-Interface-Elements. I also understand how, using the transformation matrices, a 3D coordinate on the model can be transformed into a 2D coordinate on the screen, but I have no idea how to determine the exact 3D coordinates on the model itself. The model is exported to OBJ format using Blender, so I assume there is some clever way of determining the coordinates in Blender and exporting them to a separate file, or including them somehow in the OBJ file (not rendering those points, just including them as metadata), but I have no idea how I could do that.
Any ideas are very appreciated! :)
I would use a screenquad, not a view. This is a general GL solution, and will also work with iOS.
You must determine the indices of the desired model vertices. Using the text rendering steps below, you can just fiddle with them until you hit the right ones.
1. Create a reasonable ARGB bitmap with the same aspect ratio as the screen.
2. Create the screenquad texture using this bitmap.
3. Create a canvas using this bitmap.
The rest happens in onDrawFrame():
4. Clear the canvas using clear paint.
5. Use the MVP matrix to convert the desired model vertices to canvas coordinates.
6. Draw your desired text at the canvas coordinates.
7. Update the texture.
Your text will render very precisely at the vertices you specified. The GL thread will double-buffer and loop you back to #4. Super smooth 3D text animation!
Use double-precision floating point math to avoid loss of precision during the coordinate conversion, which shows up as wobbly text. You could even use the z value of the vertex to scale the text. Fancy!
The performance bottleneck is #7 since the entire bitmap must be copied to GL texture memory, every frame. Try to keep the bitmap as small as possible, maintaining aspect ratio. Maybe let the user toggle the labels.
Note that the copy to GL texture memory is redundant since in OpenGL-ES, GL memory is just regular memory. For compatibility reasons, a redundant chunk of regular memory is reserved to artificially enforce the copy.
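A hedged sketch of steps 4-7 on Android; labelBitmap, labelCanvas, labelTex, textPaint, the mvp matrix and the vertex (vx, vy, vz) are assumed to come from the setup steps above:

public void onDrawFrame(GL10 unused) {
    // 4. Clear the canvas with clear paint.
    labelCanvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);

    // 5. MVP transform, perspective divide, then map NDC to canvas pixels.
    float[] clip = new float[4];
    Matrix.multiplyMV(clip, 0, mvp, 0, new float[]{vx, vy, vz, 1f}, 0);
    float x = (clip[0] / clip[3] * 0.5f + 0.5f) * labelBitmap.getWidth();
    float y = (1f - (clip[1] / clip[3] * 0.5f + 0.5f)) * labelBitmap.getHeight();

    // 6. Draw the label text at the canvas coordinates.
    labelCanvas.drawText("lid", x, y, textPaint);

    // 7. Update the texture (the bottleneck mentioned above).
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, labelTex);
    GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, labelBitmap);

    // ... then render the scene and the screenquad as usual ...
}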
In OpenGL ES, is it possible to use degenerate triangles (triangles with 0 area) to separate TRIANGLE_FAN objects in a vertex array? Or is this only possible with TRIANGLE_STRIP?
If the answer is no, what would be the best way to batch multiple TRIANGLE_FAN vertex array draw calls into one?
You are correct, the answer is no, since all triangles in a fan share the same first vertex.
If you want to batch multiple triangle fans, it would be better to use Vertex Buffer Objects (VBOs) with GL_TRIANGLES mode instead. There is a small index buffer overhead, but it gives you much better flexibility; see the sketch below.
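A sketch of flattening one fan into an indexed GL_TRIANGLES batch; fanStart is the fan's center vertex index in the shared VBO, fanVertexCount its total vertex count (center plus perimeter):

short[] indices = new short[(fanVertexCount - 2) * 3];
for (int i = 0; i < fanVertexCount - 2; i++) {
    indices[i * 3]     = (short) fanStart;           // center vertex
    indices[i * 3 + 1] = (short) (fanStart + i + 1);
    indices[i * 3 + 2] = (short) (fanStart + i + 2);
}
// Append each fan's indices to one index buffer, then issue a single
// glDrawElements(GL_TRIANGLES, ...) call for the whole batch.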
You could also triangulate your surface as a strip instead of a fan. It would allow you to use degenerate triangles and batch your draw calls into one.
If your surface is an n-gon it's easy: just change the order of vertex creation. Instead of going around the centre, pick a vertex to start and generate the others by iterating from both sides. Here is an example with a hexagon; the left image uses a triangle strip, the middle uses a fan.
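A sketch of that reordering, producing perimeter indices in strip order by alternating between the two ends:

short[] strip = new short[n];          // n = number of perimeter vertices
int lo = 0, hi = n - 1;
for (int k = 0; k < n; k++) {
    strip[k] = (short) ((k % 2 == 0) ? lo++ : hi--);
}
// For a hexagon (n = 6) this yields 0, 5, 1, 4, 2, 3 -- a valid
// GL_TRIANGLE_STRIP ordering with no center vertex.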
I am using OpenGL ES 2.0 (on Android) to draw a simple 2D scene with a few images. I have a background image and some others that have an alpha channel.
I would like to draw an outline around the non-transparent pixels of a texture using only shader programs. After a somewhat extensive search I failed to find example code. It looks like GLES 2.0 is still not that popular.
Can you provide some sample code or point me in right direction where I can find more information on how to do this?
There are a couple of ways of doing this depending on (a) the quality and (b) the speed you need. The common search terms are:
"glow outline"
"bloom"
"toon shader" or "toon shading"
"edge detection"
"silhouette extraction"
"mask"
1) The traditional approach is to use the stencil buffer and render to texture (a GLES20 sketch of the stencil part follows the steps):
Clear the stencil buffer (usually done once per frame)
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT )
Render to Texture
Disable Depth Writes
glDepthMask( GL_FALSE );
Disable Color Buffer Writes
glColorMask( 0, 0, 0, 0 );
Enable the stencil test; set stencil to always pass and replace
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glStencilFunc( GL_ALWAYS, 1, 1 );
Draw object into texture
Disable stencil
Enable Color Buffer Writes
Enable Depth Writes
Do an N-pass "tap", such as a 5 or 7 pass tap, where you blur the texture by rendering it to itself in both the vertical and horizontal directions (another option is to draw the texture image scaled up)
Switch to orthographic projection
Draw & Blend the texture image back into the framebuffer
Restore perspective projection
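A hedged GLES20 sketch of the stencil-mask portion of those steps; drawObject() stands in for rendering your model:

GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT
        | GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_STENCIL_BUFFER_BIT);

GLES20.glDepthMask(false);                       // disable depth writes
GLES20.glColorMask(false, false, false, false);  // disable color writes

GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_REPLACE);
GLES20.glStencilFunc(GLES20.GL_ALWAYS, 1, 1);

drawObject();                                    // writes 1s where the object is

GLES20.glDisable(GLES20.GL_STENCIL_TEST);
GLES20.glColorMask(true, true, true, true);
GLES20.glDepthMask(true);
// ... then blur the captured texture and blend it back into the framebuffer.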
2) Pass along extra vertex data, namely which vertices are adjacent in the proper winding order, and dynamically generate extra outline triangles.
See: http://www.gamasutra.com/view/feature/1644/sponsored_feature_inking_the_.php?print=1
3) Use cheap edge detection. In the shader, check the dot product of the normal with the view direction. If it is in the range:
-epsilon < dot(N, V) < epsilon
Then you have an edge.
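A sketch of that test as a per-fragment variant, with the GLSL embedded as a Java string; the varyings and uEpsilon are assumptions:

static final String EDGE_FS =
        "precision mediump float;\n" +
        "varying vec3 vNormal;\n" +      // interpolated surface normal
        "varying vec3 vViewDir;\n" +     // fragment-to-eye direction
        "uniform float uEpsilon;\n" +
        "void main() {\n" +
        "    float d = dot(normalize(vNormal), normalize(vViewDir));\n" +
        "    if (abs(d) < uEpsilon) {\n" +
        "        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);\n" +  // silhouette: black
        "    } else {\n" +
        "        discard;\n" +           // not an edge
        "    }\n" +
        "}\n";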
4) Use cheap-o-rama object scaling. It doesn't work for concave objects, of course, but depending on your quality needs it may be "good enough":
Switch to a "flat" shader
Enable Alpha Testing
Draw the model scaled up slightly
Disable Alpha Testing
Draw the model but at the normal size
References:
https://developer.valvesoftware.com/wiki/L4D_Glow_Effect
http://prideout.net/blog/?p=54
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html
Related SO questions:
Outline effects in OpenGL
To get the pixel shader drawing something, there needs to be geometry. As far as I understand, you want to draw a border around these images, but the outermost fragments generated would be image pixels in a basic implementation, so you'd overdraw them with any border.

If you want a "line border", you cannot do much else than draw the image triangles (GL_TRIANGLES; note that GL_QUADS does not exist in ES) and, in an additional call, the outline using GL_LINES, where you may share the vertices of a single quad. Consider that lines can't be drawn efficiently by many GPUs.

Otherwise, see the solutions below:
Solution 1:
Draw the rectangle as big as the image plus border will be, and adjust the texture coords for the image so that it is placed within the rectangle appropriately.
This way, no extra geometry or draw calls are required.
Set the texture border property (a single 4-component color); there will be no need for extra fragment shader calculations, the texture unit/sampler does all the work.
Texture properties:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER)
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,borderColor4f)
Note that GL_CLAMP_TO_BORDER and GL_TEXTURE_BORDER_COLOR are not in core GLES 2.0; they need an extension such as OES_texture_border_clamp (or GLES 3.2). I've also never used a color border for a single channel texture, so this approach needs to be verified.
Solution 2:
Similar to 1, but with calculations in the fragment shader to check whether the texture coords are within the border area, instead of using the texture border. Without modification, the scalars of a texture coord range from 0.0 to 1.0.
Texture properties may be:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE)
The fragment color could be determined by any of these methods:
an additional border color attribute for the rectangle, where either the texel or that border color is selected (could be a vertex attribute, but more likely a uniform or constant).
a combination of the alpha texture with a second texture as background for the whole rectangle (like a picture frame); here too, either texel is chosen.
some other math function
Of course, the color values could be mixed for image/border gradients.
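A sketch of Solution 2's fragment shader as a Java string; uBorder is the border width in texture-coordinate units and uBorderColor the border color, both assumed uniforms:

static final String BORDER_FS =
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform sampler2D uTexture;\n" +
        "uniform float uBorder;\n" +
        "uniform vec4 uBorderColor;\n" +
        "void main() {\n" +
        "    bool inside = all(greaterThanEqual(vTexCoord, vec2(uBorder)))\n" +
        "              && all(lessThanEqual(vTexCoord, vec2(1.0 - uBorder)));\n" +
        "    // Remap the inner region back to the full 0..1 image range.\n" +
        "    vec2 uv = (vTexCoord - uBorder) / (1.0 - 2.0 * uBorder);\n" +
        "    gl_FragColor = inside ? texture2D(uTexture, uv) : uBorderColor;\n" +
        "}\n";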
EDIT:
As the number, length and position of such outline segments will vary and can even form concave shapes, you'd need a geometry shader to do this, which is not available in ES 2.0 core. The best thing you can do is precompute a line loop for each image on the CPU. Doing such tests in a shader is rather inefficient and even overkill, depending on image size, the hardware you actually run it on, etc. If you draw a fixed number of line segments and transform them in the vertex shader, you cannot properly cover all cases, at least not without immense effort and GPU workload.
Should you intend to change the color values of the corresponding texels, your fragment shader would need to fetch a massive and varying number of texels for each neighbouring pixel towards the texture edges, as in all other implementations. Such brute-force techniques are usually a replacement for recursive and iterative algorithms, for which the CPU is the better choice. So I suggest you do it there, either by modifying the texture or by generating a second one to combine with it in the fragment shader.
Basically, you'd need to implement a path-finding algorithm that tries to "get around" opaque pixels towards any edge.
Your alpha channel can be seen as a grayscale image. Look for any edge detection/drawing algorithm, for example the Canny edge detector (http://en.wikipedia.org/wiki/Canny_edge_detector). Alternatively, and probably a much better idea if your images are not procedural, pre-compute the edges.
If your goal is to blend various images and then apply the contour to the result of that blending, try rendering to a texture, then render that texture over the screen and perform the edge detection algorithm there.
I have an Android 4.0 application that uses the GL_OES_EGL_image_external method of rendering video as an OpenGL surface. That works great. In addition, I would like to stretch/warp a few patches on top of that. I'm currently shading those areas I would like to warp with some additional shaders on some quads on top of those areas. I'm stuck on how to get the underlying color. How does the shader on my quad on top of the video quad warp the underlying image? Is it possible?
I'm on iOS, but my app does something very similar.
How I've achieved it is based on some sample code from Apple (look at RippleModel.m in particular). It works by placing the video texture not on a quad, but on a highly tessellated grid, so you've got a ton of triangles with a ton of texture coordinates. It creates the vertices of this grid programmatically, and more importantly it creates the texture coordinates programmatically as well, holding them in an array.
For each frame, it iterates through all the vertices and updates the texture coordinates for each, "warping" them in a ripple pattern based on where the user has touched and on how much texture offset the surrounding vertices have. So the geometry isn't changed at all, and the warp isn't performed in the shader; it's all done in the texture coordinates. The shader then just does a straight texture lookup on the coordinates it receives.
So it's hard to say if this approach will work for your needs, but if your warps only happen in 2d, and if you can figure out how to define your warp as texture coordinate adjustments, this may help.
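A hedged Android sketch of the same idea, displacing only texture coordinates each frame and re-uploading them; baseTexCoords, the touch point and the ripple constants are all illustrative, not Apple's actual values:

// texCoords/baseTexCoords are interleaved u,v pairs for the tessellated grid.
for (int i = 0; i < texCoords.length; i += 2) {
    float dx = baseTexCoords[i]     - touchU;
    float dy = baseTexCoords[i + 1] - touchV;
    float dist = (float) Math.sqrt(dx * dx + dy * dy);
    float ripple = amplitude * (float) Math.sin(dist * frequency - time * speed)
            / (1.0f + falloff * dist);           // decays with distance
    texCoords[i]     = baseTexCoords[i]     + dx * ripple;
    texCoords[i + 1] = baseTexCoords[i + 1] + dy * ripple;
}
texCoordBuffer.position(0);
texCoordBuffer.put(texCoords).position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, texCoordVbo);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, texCoords.length * 4, texCoordBuffer);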