Has anyone had much success on Android with using blending to blur a texture?
I'm thinking of the technique described here, but the crux is to take an already-loaded texture and then apply a blur to it, so that the bound texture itself ends up blurred.
"Inplace blurring" is something a CPU can do, but using a GPU, which generally does things in parallel, you must have another image buffer as render target.
Even with new shaders, reads and writes from/to the same buffer can lead to corruption because they can be done reordered. A similar issue is, that a gaussian blurring kernel, which can handle blurring in one pass, depends on neighbor fragments which could have been modified by the kernel applied at their fragment coordinate.
If you don't have the framebuffer_object extension available for rendering into renderbuffers or even textures (the latter additionally requires the render_texture extension),
you have to render into the back buffer as in the example, and then either call glReadPixels() to read the image back and re-upload it to the source texture, or do a fast, direct glCopyTexImage2D() (OpenGL ES 1.1 already has this).
If the render target is too small, you can render multiple tiles.
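To make that concrete, here is a rough sketch of the copy-back variant in Android Java (GLES 1.1); drawBlurPass(), sourceTexture, texWidth, and texHeight are hypothetical stand-ins for your own code:
// Draw the blur pass into the back buffer, then copy the result back
// over the source texture so the bound texture itself becomes blurred.
GLES11.glBindTexture(GLES11.GL_TEXTURE_2D, sourceTexture);
drawBlurPass();  // renders the blurred quad(s) into the back buffer
GLES11.glCopyTexImage2D(GLES11.GL_TEXTURE_2D, 0, GLES11.GL_RGBA,
        0, 0, texWidth, texHeight, 0);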
I was trying to render a Rubik's cube with OpenGL ES on Android. Here is how I do it: I render 27 adjacent cubes. The faces that are covered are textured with a black bitmap, and the faces that can be seen are textured with a colorful picture. I use face culling and the depth test to avoid rendering hidden faces. But look at what I got; it is pretty weird. The black faces show up sometimes. Can anyone tell me how to get rid of the artifacts?
Screenshots:
With the benefit of screenshots it looks like the depth buffering simply isn't having any effect — would it be safe to conclude that you render the side of the cube with the blue faces first, then the central section behind it, then the back face?
I'm slightly out of my depth with the Android stuff but I think the confusion is probably just that enabling the depth test within OpenGL isn't sufficient. You also have to ensure that a depth buffer is allocated.
Probably you have a call to setEGLConfigChooser that's disabling the depth buffer. There are a bunch of overloaded variants of that method, but the single-boolean version and the one that lets you specify redSize, greenSize, etc. give you explicit control over the depth buffer size, so check those.
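For example (a minimal sketch; MyRenderer stands for your own renderer class):
GLSurfaceView view = new GLSurfaceView(context);
// red, green, blue, alpha, depth, stencil bit sizes;
// the non-zero depth size is what actually allocates a depth buffer.
// Note: this must be called before setRenderer().
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
view.setRenderer(new MyRenderer());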
If you're creating your framebuffer explicitly then make sure you are attaching a depth renderbuffer.
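In ES 2.0 terms that would look roughly like this (in ES 1.1 the same calls exist with an OES suffix via GLES11Ext); fboWidth/fboHeight should match your color attachment:
int[] rb = new int[1];
GLES20.glGenRenderbuffers(1, rb, 0);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, rb[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
        GLES20.GL_DEPTH_COMPONENT16, fboWidth, fboHeight);
// Attach it to the currently bound framebuffer.
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, rb[0]);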
I have around 2 GiB of textures (all are 256x256 tiles) in uncompressed RGBA 8888 format (the RGB 565 texture format is not an option, because there are lots of smooth gradients and shades of gray, which take on a green tint in the 565 format). So I load them on demand, when they are about to become visible, and delete the old ones. The problem is that there is an annoying FPS drop when I upload them to OpenGL. I'm currently using OpenGL ES 1.1.
I decode the textures in a separate thread (i.e. BitmapFactory.decodeStream(...)) and then send the Bitmap to the GL thread and upload it as a texture. When this happens, the GL thread is sometimes slowed down a bit by the upload. I measured the texture upload time: mostly it varies from 1-8 ms, ~2 ms on average. But from time to time it is 40-70 ms. What can cause this drop?
I also generate mipmaps on the GPU (disabling mipmaps does not affect this behaviour) and here are all the texture parameters:
GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D, GLES11.GL_TEXTURE_MIN_FILTER, GLES11.GL_LINEAR_MIPMAP_NEAREST);
GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D, GLES11.GL_TEXTURE_MAG_FILTER, GLES11.GL_LINEAR);
GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D, GLES11.GL_TEXTURE_WRAP_S, GLES11.GL_CLAMP_TO_EDGE);
GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D, GLES11.GL_TEXTURE_WRAP_T, GLES11.GL_CLAMP_TO_EDGE);
GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D, GLES11.GL_GENERATE_MIPMAP, GLES11.GL_TRUE);
GLES11.glTexEnvx(GLES11.GL_TEXTURE_ENV, GLES11.GL_TEXTURE_ENV_MODE, GLES11.GL_MODULATE);
GLUtils.texImage2D(GLES11.GL_TEXTURE_2D, 0, GLES11.GL_RGBA, bmp, 0);
How can I do this better? E.g., how do OpenGL video players manage it, when they need to load and display many frames per second? Or current web browsers, which render pages as tiles? Is EGL_image an option worth looking at? Will OpenGL ES 2.0 be any different?
EDIT:
The slowdown during loading was caused by garbage collection on the GL thread.
In addition to making sure the GC does not kick in, you could do all the bitmap generation and texture upload on a separate thread as described in my answer to a similar question: Threading textures load process for android opengl game
This prevents small frame drops when the upload itself takes too long.
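The handoff itself can be as simple as this sketch (glSurfaceView and the input stream in are assumed to exist, and a texture is assumed to be bound on the GL thread):
// Worker thread: decode the bitmap off the GL thread.
final Bitmap bmp = BitmapFactory.decodeStream(in);
// Hand it to the GL thread, which owns the current GL context.
glSurfaceView.queueEvent(new Runnable() {
    @Override public void run() {
        GLUtils.texImage2D(GLES11.GL_TEXTURE_2D, 0, bmp, 0);
        bmp.recycle();
    }
});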
Finally I solved it, though not with native code or EGL_image, but thanks to this video. Fortunately I had almost all textures at 256x256, and I am targeting Android 3.2 and newer, so for this scenario there is an easy solution:
BitmapFactory.Options.inBitmap, which reuses one Bitmap for every new tile, so no more furious GC-ing. I had to enlarge the few bitmaps that were smaller than 256x256.
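In sketch form (reusableBitmap is a hypothetical mutable 256x256 ARGB_8888 Bitmap allocated once up front):
BitmapFactory.Options opts = new BitmapFactory.Options();
// Decode into the existing bitmap instead of allocating a new one.
// Requires API 11+; before API 19 the reused bitmap must match the
// decoded size exactly, hence padding every tile to 256x256.
opts.inBitmap = reusableBitmap;
opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
opts.inSampleSize = 1;  // no scaling allowed when reusing a bitmap
Bitmap tile = BitmapFactory.decodeStream(in, null, opts);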
My Android app needs to display a full-screen bitmap as a background, then on top of that display some dynamic 3D graphics using OpenGL ES (either 1.1 or 2.0; not decided yet). The background image is a snapshot of a WebView component in the same app, so its dimensions already fit the screen perfectly.
I'm new to OpenGL, but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D), configuring the matrices, creating some vertices for a rectangle, and drawing that with glDrawArrays. That seems to be a lot of extra work (with loss of quality when down-scaling the image to a POT size) when all that's needed is to draw a bitmap to the screen at 1:1 scale.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES. Is there any way to copy pixels to the screen buffer in GLES, circumventing the 3D pipeline? Or is there any way to draw OpenGL graphics on top of a "flat" background drawn by regular Android means? Or making a translucent GLView (there is RSTextureView for Renderscript-based display, but I couldn't find an equivalent for GL)?
but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D)
Then your knowledge is outdated. Modern OpenGL (version 2 and later) is fine with arbitrary image dimensions for textures.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES.
Well, modern "desktop" OpenGL, namely version 3 core and later, doesn't have glDrawPixels either.
However appealing this function is/was, it offers only poor performance and has so many caveats that it's rarely used whenever its use can be avoided.
Just upload your unscaled image into a texture, disable mipmapping and draw it onto a fullscreen quad.
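In GLES 1.1 Java that could look roughly like this (width/height are the surface dimensions; the texture is assumed to be uploaded and bound already; note that a screen-sized, non-POT texture needs NPOT support on ES 1.1):
// Two-component (x, y) positions for a fullscreen triangle strip,
// and texture coordinates with V flipped since bitmap rows are top-down.
float[] verts = { 0, 0,  width, 0,  0, height,  width, height };
float[] uvs   = { 0, 1,  1, 1,  0, 0,  1, 0 };
FloatBuffer vb = ByteBuffer.allocateDirect(verts.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
vb.put(verts).position(0);
FloatBuffer tb = ByteBuffer.allocateDirect(uvs.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
tb.put(uvs).position(0);

// 1:1 pixel mapping: one GL unit per screen pixel.
GLES11.glMatrixMode(GLES11.GL_PROJECTION);
GLES11.glLoadIdentity();
GLES11.glOrthof(0, width, 0, height, -1, 1);
GLES11.glMatrixMode(GLES11.GL_MODELVIEW);
GLES11.glLoadIdentity();

GLES11.glEnable(GLES11.GL_TEXTURE_2D);
GLES11.glEnableClientState(GLES11.GL_VERTEX_ARRAY);
GLES11.glEnableClientState(GLES11.GL_TEXTURE_COORD_ARRAY);
GLES11.glVertexPointer(2, GLES11.GL_FLOAT, 0, vb);
GLES11.glTexCoordPointer(2, GLES11.GL_FLOAT, 0, tb);
GLES11.glDrawArrays(GLES11.GL_TRIANGLE_STRIP, 0, 4);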
I'm working on implementing picking for an OpenGL game I'm writing for Android. It uses the "unique color" method: each touchable object is drawn in a solid color that is unique to it, and the touch handler then calls glReadPixels() at the location of the touch. I've gotten the coloring working, and glReadPixels working, but I have been unable to separate the "color" rendering from the actual rendering, which complicates the use of glReadPixels.
Supposedly the trick to working with this is to render the second scene (for input) into an offscreen buffer, but this seems to be a bit problematic. I've investigated using OpenGL ES1.1 FBO's to act as an offscreen buffer, but it seems my handset (Samsung Galaxy S Vibrant (2.2)) does not support FBO's. I'm at a loss for how to correctly render this scene (and run glReadPixels on it) without the user witnessing it.
Any ideas how offscreen rendering of this sort can be done?
If FBOs are not supported, you can always fall back to rendering into your normal back buffer.
Typical usage would be:
Clear back buffer
Draw "color-as-id" objects
glReadPixels at the touch point
Clear back buffer
Draw normally
SwapBuffers
The second clear makes sure the picking pass does not show up in the final image.
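A rough sketch of that sequence in GLES 1.1 Java (drawSceneWithPickColors() and drawSceneNormally() are hypothetical):
GLES11.glClear(GLES11.GL_COLOR_BUFFER_BIT | GLES11.GL_DEPTH_BUFFER_BIT);
drawSceneWithPickColors();  // every pickable object in its unique flat color
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
// GL's origin is bottom-left, so flip the touch Y coordinate.
GLES11.glReadPixels(touchX, viewHeight - touchY, 1, 1,
        GLES11.GL_RGBA, GLES11.GL_UNSIGNED_BYTE, pixel);
GLES11.glClear(GLES11.GL_COLOR_BUFFER_BIT | GLES11.GL_DEPTH_BUFFER_BIT);
drawSceneNormally();
// The buffer swap happens after onDrawFrame() when using GLSurfaceView,
// so the picking colors are never presented.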
This is just a quick question before I dive deeper into converting my current rendering system to OpenGL. I heard that textures need to have power-of-two dimensions in order to be stored for rendering. Is this true?
My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-power-of-two textures consume more memory?
It depends on the OpenGL ES version: OpenGL ES 1.0/1.1 has the power-of-two restriction. OpenGL ES 2.0 doesn't have the limitation, but it restricts the wrap modes for non-power-of-two textures.
Creating bigger textures to match POT dimensions does waste texture memory.
Suresh, the power of 2 limitation was built into OpenGL back in the (very) early days of computer graphics (before affordable hardware acceleration), and it was done for performance reasons. Low-level rendering code gets a decent performance boost when it can be hard-coded for power-of-two textures. Even in modern GPU's, POT textures are faster than NPOT textures, but the speed difference is much smaller than it used to be (though it may still be noticeable on many ES devices).
GuyNoir, what you should do is build a texture atlas. I just solved this problem myself this past weekend for my own Android game. I created a class called TextureAtlas, and its constructor calls glTexImage2D() to create a large texture of any size I choose (passing null for the pixel values). Then I can call add(id, bitmap), which calls glTexSubImage2D(), repeatedly to pack in the smaller images. The TextureAtlas class tracks the used and free space within the larger texture and the rectangles each bitmap is stored in. Then the rendering code can call get(id) to get the rectangle for an image within the atlas (which it can then convert to texture coordinates).
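The core of that idea, as a minimal sketch (this is not the author's actual class; atlasSize and the rectangle coordinates x, y come from your own packing logic):
int[] tex = new int[1];
GLES11.glGenTextures(1, tex, 0);
GLES11.glBindTexture(GLES11.GL_TEXTURE_2D, tex[0]);
// Allocate an empty atlasSize x atlasSize texture (null = no pixel data yet).
GLES11.glTexImage2D(GLES11.GL_TEXTURE_2D, 0, GLES11.GL_RGBA, atlasSize, atlasSize,
        0, GLES11.GL_RGBA, GLES11.GL_UNSIGNED_BYTE, null);
// For each small bitmap, copy it into its free rectangle at (x, y).
GLUtils.texSubImage2D(GLES11.GL_TEXTURE_2D, 0, x, y, bitmap);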
Side note #1: Choosing the best way to pack in various texture sizes is NOT a trivial task. I chose to start with simple logic in the TextureAtlas class (think typewriter + carriage return + line feed) and make sure I load the images in the best order to take advantage of that logic. In my case, that was to start with the smallest square-ish images and work my way up to the medium square-ish images. Then I load any short+wide images, force a CR+LF, and then load any tall+skinny images. I load the largest square-ish images last.
Side note #2: If you need multiple texture atlases, try to group images inside each that will be rendered together to minimize the number of times you need to switch textures (which can kill performance). For example, in my Android game I put all the static game board elements into one atlas and all the frames of various animation effects in a second atlas. That way I can bind atlas #1 and draw everything on the game board, then I can bind atlas #2 and draw all the special effects on top of it. Two texture selects per frame is very efficient.
Side note #3: If you need repeating/mirrored textures, they need to go into their own power-of-two textures, and you need to scale them to fit (not pad the edges with black pixels).
No, it must be a power of two. However, you can get around this by padding your image with black and then using the texture coordinate array to restrict where the texture is sampled from. For example, let's say you have a 16 x 13 pixel image. You can pad it with 3 rows of black pixels to make it 16 x 16 and then do the following:
static const GLfloat texCoords[] = {
0.0, 0.0,
0.0, 13.0/16.0,
1.0, 0.0,
1.0, 13.0/16.0
};
Now you have a power-of-two image file, but a non-power-of-two texture. Just make sure you use linear filtering :)
This is a bit late, but non-power-of-two textures are supported under OpenGL ES 1.x and 2.0 through extensions.
The main one is GL_OES_texture_npot. There are also GL_IMG_texture_npot and GL_APPLE_texture_2D_limited_npot for iOS devices.
Check for these extensions by calling glGetString(GL_EXTENSIONS) and searching for the extension you need.
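For example:
String extensions = GLES11.glGetString(GLES11.GL_EXTENSIONS);
boolean npotSupported = extensions != null
        && extensions.contains("GL_OES_texture_npot");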
I would also advise keeping your textures to sizes that are multiples of 4 as some hardware stretches textures if not.