Displaying a full-screen background in OpenGL - Android

My Android app needs to display a full-screen bitmap as a background, then on top of that display some dynamic 3D graphics using OpenGL ES (either 1.1 or 2.0 - not decided yet). The background image is a snapshot of a WebView component in the same app, so its dimensions already fit the screen perfectly.
I'm new to OpenGL, but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D), configuring the matrices, creating some vertices for the rectangle and displaying that with glDrawArrays. That seems like a lot of extra work (with loss of quality when down-scaling the image to a POT size) when all that's needed is to draw a bitmap to the screen at 1:1 scale.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES. Is there any way to copy pixels to the screen buffer in GLES, circumventing the 3D pipeline? Or is there any way to draw OpenGL graphics on top of a "flat" background drawn by regular Android means? Or making a translucent GLView (there is RSTextureView for Renderscript-based display, but I couldn't find an equivalent for GL)?

but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D)
Then your knowledge is outdated. Modern OpenGL (version 2 and later) is fine with arbitrary image dimensions for its textures.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES.
Well, modern "desktop" OpenGL, namely version 3 core and later, doesn't have glDrawPixels either.
However appealing this function is/was, it offers only poor performance and has so many caveats that its use is avoided wherever possible.
Just upload your unscaled image into a texture, disable mipmapping and draw it onto a fullscreen quad.
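For example, a minimal sketch in Android Java, assuming a GLES 2.0 context and a bitmap that already matches the screen (class and method names here are illustrative):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public final class BackgroundTexture {
    // Uploads the bitmap at its native size; no POT rescaling needed on ES 2.0.
    public static int upload(Bitmap bitmap) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        // No mipmapping: plain linear filtering keeps the image 1:1.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        // NPOT textures in ES 2.0 require clamp-to-edge wrapping.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        return tex[0];
    }
}

Then draw that texture on two triangles covering clip space (-1 to 1 in x and y) with the depth test disabled, and render the 3D content on top of it.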

Related

Android: Creating blurred texture using blending in OpenGL 1.1

Has anyone had much success on Android with using blending to blur a texture?
I'm thinking of the technique described here but the crux is to take a loaded texture and then apply a blur to it so that the bound texture itself is blurred.
"Inplace blurring" is something a CPU can do, but using a GPU, which generally does things in parallel, you must have another image buffer as render target.
Even with new shaders, reads and writes from/to the same buffer can lead to corruption because they can be done reordered. A similar issue is, that a gaussian blurring kernel, which can handle blurring in one pass, depends on neighbor fragments which could have been modified by the kernel applied at their fragment coordinate.
If you don't have the 'framebuffer_object' extension available for rendering into renderbuffers or even textures (the latter additionally requires the 'render_texture' extension),
you have to render into the back buffer as in the example and then either do glReadPixels() to read the image back for uploading to the source texture, or do a fast and direct glCopyTexImage2D() (OpenGL ES 1.1 has this).
If the render target is too small, you can render multiple tiles.
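For the copy-back route, a rough sketch with ES 2.0 bindings; srcTexture, width and height are placeholder names:

import android.opengl.GLES20;

final class BlurCopy {
    // Call after the blurred quad has been drawn into the back buffer.
    static void copyBackBufferToTexture(int srcTexture, int width, int height) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, srcTexture);
        // Replaces level 0 of the bound texture with the current color buffer contents.
        GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 0, 0, width, height, 0);
    }
}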

Scrolling/zooming a scene in OpenGL and subdivision

We are to develop a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World Of Goo. More like the latter as the world will not consist of repeated layers as featured in Angry Birds but of a large image. As the scene needs to scroll/zoom and therefore a lot of it will not be visible, I was wondering about the most efficient way to implement the rendering, focusing on the environment only (ie not the objects within the world but background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size, which has the background texture mapped to it, and translating/scaling this using glTranslatef / glScalef. However, I was wondering whether the non-visible area outside of the screen's boundaries is still rendered by OpenGL, as it is not being culled (the quad cannot be culled as a whole, or you would lose the visible area too, since there are only 4 vertices). Therefore, would it be more efficient to subdivide this rectangle, so non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that would fill the screen, then moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way?
If someone has an idea on how it might be done in AngryBirds / World of Goo, please share as I'd love to hear. They seem to have implemented a system that allows for the world to be moved and zoomed very (WorldOfGoo = VERY) smoothly.
Tiling is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be with memory transfer speeds and fill rate instead of any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible rectangles. You won't have one big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
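A rough sketch of that visibility test with an orthographic camera, assuming a fixed grid of tiles (all names here are hypothetical):

final class TileCuller {
    // World units covered by one tile, and the grid dimensions (illustrative).
    static final float TILE_SIZE = 256f;
    static final int NUM_COLS = 16, NUM_ROWS = 8;

    // Draws only the tiles that intersect the camera's world-space rectangle.
    static void drawVisibleTiles(float camLeft, float camRight, float camBottom, float camTop) {
        int firstCol = Math.max(0, (int) Math.floor(camLeft / TILE_SIZE));
        int lastCol = Math.min(NUM_COLS - 1, (int) Math.floor(camRight / TILE_SIZE));
        int firstRow = Math.max(0, (int) Math.floor(camBottom / TILE_SIZE));
        int lastRow = Math.min(NUM_ROWS - 1, (int) Math.floor(camTop / TILE_SIZE));
        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                drawTile(row, col); // binds that tile's texture and draws its quad
            }
        }
    }

    static void drawTile(int row, int col) { /* bind texture, draw quad */ }
}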
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done just to reduce vertex count, but you would actually be adding to it here.
Putting that aside for now; from the way you phrased the question I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method, fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
But for both it is a problem for a single large texture unless you subdivide, which would make the texture coordinate tactic way more complicated than necessary. In this case subdividing the mesh along texture subdivisions would be best, and culling off-screen sections. Deciding which parts to cull should be trivial with this technique.
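For the repeating-texture case, the wrap mode is just a texture parameter, e.g. with ES 2.0 bindings (note that on ES 2.0, GL_REPEAT still requires a power-of-two texture unless an NPOT extension is present):

import android.opengl.GLES20;

final class ScrollingBackground {
    // GL_REPEAT wrapping lets texture coordinates outside 0..1 tile the image,
    // so scrolling becomes a per-frame offset added to the quad's texcoords.
    static void configureRepeat(int texture) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    }
}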
Cheers.

How can I automatically scale my OpenGL ES 2.0 window?

I'm writing an Android and iOS engine in C++ and currently focusing on Android with the NDK.
I'd like to render to a viewport of a smaller size (say 600x360) and automatically upscale this to the native resolution (say 800x480). Currently the smaller viewport displays in a lower corner of my screen with black regions.
My problem is I don't know of a simple way to do this transparently using the NDK. There is a GLSurfaceView.setScaleX (and Y) function in API level 11, which would be perfect, but it doesn't exist in API level 9, which I am targeting. Another bad solution is to render to an FBO and blit that to the screen as a final step.
I am considering simply storing a scaling matrix and asking the user of the engine (for now just me) to always multiply vertices by this when drawing to the screen. This would be similar to using glPushMatrix.
I searched for a while and couldn't find a good solution. Does anyone know how to help?
What you can do is get the SurfaceHolder from the GLSurfaceView via GLSurfaceView.getHolder(), and then set the resolution you desire by calling SurfaceHolder.setFixedSize(width, height).
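Roughly like this (the 600x360 target size is illustrative):

import android.content.Context;
import android.opengl.GLSurfaceView;

final class ScaledSurfaceFactory {
    // Renders at 600x360; the compositor scales the surface up to the
    // native display resolution, so the GL code never sees the difference.
    static GLSurfaceView create(Context context) {
        GLSurfaceView view = new GLSurfaceView(context);
        view.getHolder().setFixedSize(600, 360);
        return view;
    }
}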
In my case the GLSurfaceView has a FrameLayout root which fills the screen. I am not sure if that's required - I have it because I add other elements on top - but if you set the size and it doesn't fill the screen, then you know what's missing!
Using a framebuffer object is also a valid way, and you could draw some cool effects with it as well; the way above is just faster when the only thing you want to do is scale the rendering down (or possibly up? I haven't tried).

Android: Canvas vs OpenGL

I have a drawing app where the user can draw lines with their finger, adjust the color, thickness, etc. As the user is drawing, I am converting the massed X/Y points from MotionEvent into SVG Paths, as well as creating Android Paths and then drawing the Android Paths to the screen via a Canvas, and committing the SVG Paths to the app's database.
I am following the model used in FingerPaint, in that the in-progress lines are drawn on the fly by repeated calls to invalidate() (and thus onDraw()), and once a line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in-progress lines again generating repeated re-draws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, etc. So for example, with the device rotated and the drawing scaled in, when the user is drawing, we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on as in a game (for instance, when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphics components of a game library built on top of OpenGL, such as:
Cocos2d
libgdx
any of the engines listed here
Well, this question was asked 6 years ago, maybe before Android 4.0 had come out?
Actually, since Android 4.0 the Canvas used by android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need to switch to another approach for performance.
You can look at https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of the normal canvas in a View with GLSurfaceView.
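You can also check at runtime whether a particular Canvas is hardware accelerated, for example:

import android.content.Context;
import android.graphics.Canvas;
import android.view.View;

public class AcceleratedCheckView extends View {
    public AcceleratedCheckView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // True when the view is drawn through the OpenGL-backed hardware
        // renderer (the default for application windows since Android 4.0).
        boolean hw = canvas.isHardwareAccelerated();
    }
}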
You are right that SurfaceView uses Canvas under the hood. The main difference is that SurfaceView uses another thread to do the actual drawing, which generally improves performance. It sounds like it would not help you a great deal, though.
You are correct that OpenGL can do rotations very quickly, so if you need more performance, that is the way to go. You should probably use GLSurfaceView. The main drawback of using OpenGL is that it is a real pain to do text. Basically you have to (okay, don't have to, but it seems to be the best option) render bitmaps of text.
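A common pattern for that, sketched under the assumption of a GLES 2.0 context (names are illustrative): rasterize the string into a Bitmap with Canvas, then upload that bitmap as a texture.

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

final class TextTexture {
    // Draws the string into an offscreen Bitmap, then uploads it to GL.
    static int create(String text, float textSizePx) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setTextSize(textSizePx);
        paint.setColor(Color.WHITE);
        int w = (int) Math.ceil(paint.measureText(text));
        int h = (int) Math.ceil(paint.descent() - paint.ascent());
        Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        new Canvas(bmp).drawText(text, 0f, -paint.ascent(), paint); // baseline at -ascent
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        bmp.recycle();
        return tex[0];
    }
}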

Can OpenGL ES render textures of non base 2 dimensions?

This is just a quick question before I dive deeper into converting my current rendering system to OpenGL. I heard that textures need to have power-of-two dimensions in order to be stored for rendering. Is this true?
My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-power-of-two textures consume more memory?
It's true depending on the OpenGL ES version: OpenGL ES 1.0/1.1 have the power-of-two restriction. OpenGL ES 2.0 doesn't have the limitation, but it restricts the wrap modes for non-power-of-two textures.
Creating bigger textures to match POT dimensions does waste texture memory.
Suresh, the power-of-two limitation was built into OpenGL back in the (very) early days of computer graphics (before affordable hardware acceleration), and it was done for performance reasons. Low-level rendering code gets a decent performance boost when it can be hard-coded for power-of-two textures. Even in modern GPUs, POT textures are faster than NPOT textures, but the speed difference is much smaller than it used to be (though it may still be noticeable on many ES devices).
GuyNoir, what you should do is build a texture atlas. I just solved this problem myself this past weekend for my own Android game. I created a class called TextureAtlas, and its constructor calls glTexImage2D() to create a large texture of any size I choose (passing null for the pixel values). Then I can call add(id, bitmap), which calls glTexSubImage2D(), repeatedly to pack in the smaller images. The TextureAtlas class tracks the used and free space within the larger texture and the rectangles each bitmap is stored in. Then the rendering code can call get(id) to get the rectangle for an image within the atlas (which it can then convert to texture coordinates).
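A stripped-down sketch of such a class, using the simple typewriter-style packing described in the side notes below (all names and the packing policy are illustrative):

import android.graphics.Bitmap;
import android.graphics.Rect;
import android.opengl.GLES20;
import android.opengl.GLUtils;

import java.util.HashMap;
import java.util.Map;

final class TextureAtlas {
    private final int size;
    private final Map<String, Rect> rects = new HashMap<String, Rect>();
    private int cursorX = 0, cursorY = 0, rowHeight = 0; // typewriter-style packing state

    TextureAtlas(int size) {
        this.size = size;
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        // Allocate the full atlas without pixel data (null buffer).
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, size, size, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    }

    // Packs a bitmap left to right, wrapping to a new row when the current one is full.
    void add(String id, Bitmap bitmap) {
        if (cursorX + bitmap.getWidth() > size) { // "carriage return + line feed"
            cursorX = 0;
            cursorY += rowHeight;
            rowHeight = 0;
        }
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, cursorX, cursorY, bitmap);
        rects.put(id, new Rect(cursorX, cursorY,
                cursorX + bitmap.getWidth(), cursorY + bitmap.getHeight()));
        cursorX += bitmap.getWidth();
        rowHeight = Math.max(rowHeight, bitmap.getHeight());
    }

    // Pixel rectangle for an image; divide by the atlas size to get texture coordinates.
    Rect get(String id) {
        return rects.get(id);
    }
}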
Side note #1: Choosing the best way to pack in various texture sizes is NOT a trivial task. I chose to start with simple logic in the TextureAtlas class (think typewriter + carriage return + line feed) and make sure I load the images in the best order to take advantage of that logic. In my case, that was to start with the smallest square-ish images and work my way up to the medium square-ish images. Then I load any short+wide images, force a CR+LF, and then load any tall+skinny images. I load the largest square-ish images last.
Side note #2: If you need multiple texture atlases, try to group images inside each that will be rendered together to minimize the number of times you need to switch textures (which can kill performance). For example, in my Android game I put all the static game board elements into one atlas and all the frames of various animation effects in a second atlas. That way I can bind atlas #1 and draw everything on the game board, then I can bind atlas #2 and draw all the special effects on top of it. Two texture selects per frame is very efficient.
Side note #3: If you need repeating/mirroring textures, they need to go into their own textures, and you need to scale them (not add black pixels to fill in the edges).
No, in that case it must be padded to a power of two. However, you can get around this by adding black bars to the top and/or bottom of your image, then using the texture coordinates array to restrict where the texture will be mapped from your image. For example, let's say you have a 16 x 13 pixel image. You can pad it to 16 x 16 with 3 rows of black along the top, then do the following:
static const GLfloat texCoords[] = {
    0.0, 0.0,        /* bottom-left of the image region */
    0.0, 13.0/16.0,  /* top-left: stop at the image's real height */
    1.0, 0.0,        /* bottom-right */
    1.0, 13.0/16.0   /* top-right */
};
Now you have a power-of-two texture, but only the original image region is sampled. Just make sure you use linear scaling :)
This is a bit late, but non-power-of-two textures are supported under OpenGL ES 1/2 through extensions.
The main one is GL_OES_texture_npot. There are also GL_IMG_texture_npot and GL_APPLE_texture_2D_limited_npot for iOS devices.
Check for these extensions by calling glGetString(GL_EXTENSIONS) and searching for the extension you need.
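For example (requires a current GL context, e.g. inside Renderer.onSurfaceCreated):

import android.opengl.GLES20;

final class NpotSupport {
    static boolean hasNpotExtension() {
        // The extension string lists every extension the driver supports.
        String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        return extensions != null && extensions.contains("GL_OES_texture_npot");
    }
}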
I would also advise keeping your textures to sizes that are multiples of 4 as some hardware stretches textures if not.
