Android - OpenGL ES 1.1 - drawing efficiency

Take an application containing a GLSurfaceView which loads several separate images on start (calling GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); for each image).
This application then has to call gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId) before it can proceed to draw a different texture onto a "texturable square" GL object.
Is it recommended to instead load one bitmap containing all the textures (i.e. a sprite sheet), and then create an array of "texturable squares" that each map a different area of the single large image, so that gl.glBindTexture(...) only needs to be called once?
Or perhaps is there no significant difference between the two techniques?

As far as I know, once a texture has been loaded via texImage2D, binding a texture is simply a matter of pointing the native OpenGL library to the correct preloaded texture, so performance wise, it shouldn't be costly.
However, you raise a good option which you should probably consider regardless of performance issues.
Normally, the textures you need don't have to be power-of-two in size, but are set to those dimensions anyway because of OpenGL's requirements. This often results in very wasteful memory allocation. Utilising "sprite sheets", as you put it, can help save the time and memory of loading multiple bitmaps into textures that are usually larger than the parts you'll actually be rendering anyway.
For this reason, I would recommend using sprite sheets anyway, simply because it saves calls to texImage2D (which are quite costly) and potentially saves memory as well. Provided you properly manage the texture coordinates when switching between the objects you want to render, this is the method I would recommend.
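For illustration, here is a minimal sketch assuming a 512x512 sheet of 64x64 sprites (the class name, sizes and method names are placeholders, not from the question); once the sheet is bound, choosing a sprite is purely a matter of texture coordinates:

import javax.microedition.khronos.opengles.GL10;

public class SpriteSheet {
    private static final float SHEET_SIZE = 512f;   // assumed atlas dimensions
    private static final float SPRITE_SIZE = 64f;   // assumed sprite dimensions
    private final int textureId;                    // id from glGenTextures + GLUtils.texImage2D

    public SpriteSheet(int textureId) {
        this.textureId = textureId;
    }

    // Bind once; every "texturable square" then samples from the same texture.
    public void bind(GL10 gl) {
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    }

    // (s, t) coordinates for the sprite in grid cell (col, row), ordered for a
    // two-triangle strip: bottom-left, bottom-right, top-left, top-right.
    public float[] texCoordsFor(int col, int row) {
        float s0 = (col * SPRITE_SIZE) / SHEET_SIZE;
        float t0 = (row * SPRITE_SIZE) / SHEET_SIZE;
        float s1 = s0 + SPRITE_SIZE / SHEET_SIZE;
        float t1 = t0 + SPRITE_SIZE / SHEET_SIZE;
        return new float[] { s0, t1, s1, t1, s0, t0, s1, t0 };
    }
}

Each square keeps only its own texture-coordinate array; the bind happens once for the whole sheet.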

Related

How to texture a rectangle with multiple textures in OpenGL

I'm trying to make a 2D map (think tiled world map) in OpenGL ES 2.0 for an Android game. Basically, there are a few tile types that have different textures, and the map is randomly generated from these types, so from game to game the map changes, but for the duration of a single game it stays the same.
My first thought was to generate a single large texture / image / bitmap (independent from OpenGL) beforehand basically stitching duplicate tile textures together to make the larger map, and then using this single texture for one large map rectangle. In theory I think this is simple and would work fine, but I'm worried that it won't scale well for larger maps and especially on mobile I'll run out of memory with such a large image map. Plus, there's a small set of tiles that are duplicated over and over so it seems like a tremendous waste to duplicate the pixel data in a big texture over and over.
My second thought was having many textures, one for each of the tile textures. But I'm not sure how this would work texture-binding-wise: would I need the shaders to contain multiple texture references, with logic inside the shader for choosing the right one?
Finally, I thought using a texture atlas could work, have one texture / image with all of the tile data in it, this would be relatively small. But I'm struggling to imagine how to get the maths to work out such that "tiles" or subsections of the map rectangle would use completely different texture coordinates.
Am I approaching this the wrong way? Should I be using a rectangle for each tile? At least this way I can pass the shaders both vertex and texture coordinates for each tile independently. This seems easier, but also seems wrong since the map really is just one rectangle that won't be changing.
My first thought was to generate a single large texture...
Actually, something like this has already been used in id Software's id Tech engine since version 4. It's called MegaTexture. Basically, it's one big texture, which can also hold additional data.
My second thought was having many textures...
You don't need to hold all the textures in a shader. Do it like this:
Implement a loop with n iterations, where n is the number of different texture types used.
Inside a loop, bind the current texture type.
Pass any data, like position/color/texture coords to shaders.
Draw all tiles that use the currently bound texture. You could use GLES30.glDrawElementsInstanced or GLES30.glDrawArraysInstanced if you are targeting devices with GLES 3.x or appropriate extension support. Otherwise, draw your tiles using GLES20.glDrawArrays or GLES20.glDrawElements.
Shaders won't be complicated with this approach.
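A rough GLES 2.0 sketch of that loop (the TileBatch type and the attribute/uniform handles are assumed placeholders, not from the answer):

import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import java.util.List;
import android.opengl.GLES20;

// Assumed helper type holding the geometry of every tile that shares one texture.
class TileBatch {
    int textureId;
    FloatBuffer positions;   // 2 floats per vertex
    FloatBuffer texCoords;   // 2 floats per vertex
    ShortBuffer indices;
    int indexCount;
}

class TileMapRenderer {
    // Handles obtained elsewhere via glGetAttribLocation / glGetUniformLocation.
    int aPositionHandle, aTexCoordHandle, uTextureHandle;

    // One iteration per texture type: bind the texture, then draw all its tiles at once.
    void drawTiles(List<TileBatch> batches) {
        for (TileBatch batch : batches) {
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, batch.textureId);
            GLES20.glUniform1i(uTextureHandle, 0);   // sampler reads from texture unit 0

            GLES20.glEnableVertexAttribArray(aPositionHandle);
            GLES20.glVertexAttribPointer(aPositionHandle, 2, GLES20.GL_FLOAT, false, 0, batch.positions);
            GLES20.glEnableVertexAttribArray(aTexCoordHandle);
            GLES20.glVertexAttribPointer(aTexCoordHandle, 2, GLES20.GL_FLOAT, false, 0, batch.texCoords);

            // All tiles of this type go out in a single draw call.
            GLES20.glDrawElements(GLES20.GL_TRIANGLES, batch.indexCount,
                    GLES20.GL_UNSIGNED_SHORT, batch.indices);
        }
    }
}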
Finally, I thought using a texture atlas could work...
You could use a loop here too and compute the texture coordinates for each tile type on the CPU, then just pass them to the shaders.
Considering your map does not change during a game session, the MegaTexture approach looks good. However, it depends on how large your map is and how much memory is available. Also, note that the maximum texture size is limited. It differs from device to device but should (AFAIK) be at least as large as the screen and no less than 64 texels (16 for cube-mapped textures). You can query the maximum texture size on any device with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...).
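For example, on Android the query might look like this (GLES20 shown; it must run on the GL thread with a current context):

import android.opengl.GLES20;

int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
// maxTextureSize[0] now holds the largest supported texture width/height in texels.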

Android Bitmap Object as a draw buffer

Setup:
I have implemented a native (read: JNI) mechanism to copy pixels from a Bitmap object to native memory. This is done by malloc()-ing a uint32_t array in native memory and later using memcpy() to copy pixels to/from the Bitmap's native pointer. This works well and has been tested: pixels are successfully saved in native memory from a Bitmap object, copied back to a Bitmap object, and visible on screen. It's pretty fast at copying, on the order of several milliseconds even for fairly large bitmaps, but extremely slow at rendering.
Intention:
The above was done to break free of the heap limit on default Android Bitmaps (refer to https://stackoverflow.com/a/1949205/1531054). There would be only one Java Bitmap object acting as a buffer between native memory and the target canvas.
Save a shape:
Clear Buffer Bitmap.
Draw shape on Bitmap.
Copy pixels to native memory, and save the memory pointer.
Clear Buffer Bitmap.
So, any number of shapes can be saved to native memory, without running into heap size limits. This works.
Later when need to draw a shape (say in onDraw()):
Clear Buffer Bitmap.
Copy pixels from native memory, to Buffer Bitmap, using the saved memory pointer.
Draw Buffer Bitmap on canvas.
Clear Buffer Bitmap.
Repeat again for the next shape (a rough sketch of this loop follows below).
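A minimal sketch of that sequence inside the custom View (nativeCopyToBitmap is a hypothetical JNI helper standing in for the memcpy mechanism described above):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;

// Replay every saved shape through the single buffer Bitmap.
void drawSavedShapes(Canvas canvas, Bitmap buffer, long[] savedShapePointers) {
    for (long nativePtr : savedShapePointers) {
        buffer.eraseColor(Color.TRANSPARENT);      // clear buffer Bitmap
        nativeCopyToBitmap(nativePtr, buffer);     // hypothetical JNI call: native memory -> Bitmap
        canvas.drawBitmap(buffer, 0f, 0f, null);   // draw buffer Bitmap on the canvas
    }
}

// Implemented in native code (the memcpy mechanism described above); declared here only for context.
private native void nativeCopyToBitmap(long nativePtr, Bitmap target);

Under hardware acceleration, however, drawBitmap() only records a reference to the buffer Bitmap for later playback, which is why only the last shape survives, as the answers below explain.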
Problem: when quickly drawing many shapes from memory, the buffer Bitmap sort of lags. Basically, when we do
clear bitmap -> load pixels from memory onto it -> draw it on view canvas
in quick succession inside onDraw(), only the latest shape's pixels end up drawn onto the canvas. It appears as if:
The internal canvas.drawBitmap() is asynchronous and copies pixels off the bitmap later sometimes.
Android's Bitmaps have some hidden caching mechanism.
Has anyone run into this before, or does anyone have some insight regarding it?
I know one can get the native skia library's canvas instance in JNI and draw on it, but this is a non-standard way.
In recent Android versions (3.0 and on, which covers the majority of devices), Bitmap pixels are stored on the regular Java heap. With the introduction of hardware acceleration, bitmaps are drawn asynchronously, and there is a caching system that manages bitmaps uploaded as textures to the GPU. Therefore the hack you are attempting will probably degrade performance on new devices. If you need more memory, try setting android:largeHeap="true" in your manifest.
On relatively new Android versions (from 3.0, if I recall correctly) with hardware acceleration, the canvas.drawBitmap method does not actually draw anything (and neither do dispatchDraw, draw, and onDraw). Instead, it creates a record in a display list which:
Might be cached for an indefinite amount of time.
Might be (and will be) drawn in the future, not right away. It is not exactly asynchronous for now; it is just executed later on the same thread.
Those two points, I think are the answer to your question.
Alternatively, you can disable hardware acceleration for your view/window and see if your approach is working.
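For example, software rendering can be forced for a single view like this (standard API call, not taken from the answer):

import android.view.View;

// Force this view to render in software, bypassing the display-list/texture path.
myCustomView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

The same can be done per window or activity with the android:hardwareAccelerated manifest attribute.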
For further reading:
http://android-developers.blogspot.de/2011/03/android-30-hardware-acceleration.html
http://developer.android.com/guide/topics/graphics/hardware-accel.html#model

Android / Offscreen rendering to texture

I am making a 2D graphical app that will display planets. I say 2D because the majority of the app will be 2D. I however want to render some 3D objects into dynamic sprites offscreen (to a texture), with transparent (possibly translucent) areas, and subsequently render those rendered textures to the active screen as 2D textured quads. Rendering directly to the screen as 3D objects is not optimal in this case, because it would require me to implement some sort of 3D picking. I am not that advanced in math yet. Note also that the main screen render will be orthographic, while the offscreen render would be perspective.
How can I accomplish this (general idea, no need for specifics), and what would be the most efficient way to do this? Would this reduce support for a wide variety of devices? Also, if the 3D sprite renderings were constantly refreshed every frame (such as being rotated fine amounts) would that kill framerates with continuous unloading/reloading of texture to memory? I suppose that some scenes could have as many as 10 of these 3D offscreen sprites.
Thanks for the help
If you really must use offscreen rendering, just search for FBO (framebuffer object), attach a texture to it, then use that texture in your main view as 2D. It is quite a straightforward procedure but might decrease speed. You will probably not be able to do any multithreading on it, so you should create just one FBO. Its dimensions will probably have to be a power of 2, so the resolution might be different than you wish. This procedure does not continually load/unload anything; the data is allocated when creating the texture and GL draws/reads directly from it. The largest drawback here will be memory: you will create as many as 10 of these textures just to draw on them and present them once.
It might be very easy to place these objects at a specific spot on your main buffer, though: set up all the logic as if you wanted to draw a full-screen planet, but use the viewport (glViewport) to place it on a specific part of the screen.
If those planet images are updated only on user request (you don't want to draw them every frame), then I suggest you try a combination of both: create an FBO with a texture the same size as or larger than the main view, and draw all the planets into this single texture using the viewport method. Then you can update any one you want; just don't clear the whole buffer, rather draw a clear rect over the specific part of the buffer/texture. And keep drawing the whole texture to the main buffer.
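A minimal GLES 2.0 sketch of creating such a texture-backed FBO (the size parameters, RGBA format and method name are illustrative assumptions):

import android.opengl.GLES20;

// Create a texture-backed framebuffer; render into it, then bind the texture for the 2D pass.
int[] createOffscreenTarget(int width, int height) {
    int[] tex = new int[1];
    int[] fbo = new int[1];

    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);   // allocate storage only
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, tex[0], 0);                    // texture becomes the color buffer

    if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("FBO not complete");
    }
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);          // back to the default framebuffer
    return new int[] { fbo[0], tex[0] };
}

Render the 3D planet while fbo[0] is bound, then bind tex[0] and draw it as an ordinary textured quad in the orthographic pass.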

Correct handling of drawing multiple objects?

Although I'm technically working on the Android platform with OpenGL ES 2.0, I believe this can be applied to other OpenGL technologies.
I have a list of objects (enemies, characters, etc.) that I'm attempting to draw onto a grid, each space being 1x1 and each object matching. Presently, each object is self-translating: it takes its model coordinates and goes through a simple loop to adjust them to world coordinates in its appropriate grid location (i.e. if it should be at (3,2), it translates its coordinates accordingly).
The problem I've reached is that I'm not sure how to efficiently draw them. I have a loop going through all the objects and calling draw for each one, similar to the Android tutorial, but this seems wildly inefficient.
The objects are each textured with their own square images, matching the 1x1 grid cells they fill. They likely will never need their own unique shaders, so the only things that seem to change between objects are the vertices and the textures.
Is there an efficient way to get each model into the pipeline without flushing because of uniform changes?
This probably requires some trial and error and is probably hardware dependent. I would use buffer objects for the meshes with GL_STATIC_DRAW, pack several textures into a bigger one, and draw all objects that depend on that bigger texture in a batch to avoid state changes as much as possible. Profile and give us more information on where your bottleneck is.
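As a sketch of the buffer-object part (the quad data and names are placeholders), the mesh for a 1x1 grid cell could be uploaded once with GL_STATIC_DRAW and reused for every object:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

// Upload a unit quad once; every grid object reuses it with a different translation/texture region.
int createStaticQuadVbo() {
    float[] quad = {        // x, y for a 1x1 cell, ordered as a two-triangle strip
            0f, 0f,
            1f, 0f,
            0f, 1f,
            1f, 1f,
    };
    FloatBuffer data = ByteBuffer.allocateDirect(quad.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    data.put(quad).position(0);

    int[] vbo = new int[1];
    GLES20.glGenBuffers(1, vbo, 0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, quad.length * 4, data, GLES20.GL_STATIC_DRAW);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    return vbo[0];
}

At draw time the grid position can then go in as a uniform offset, so between objects only the texture coordinates into the packed texture change.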

Can OpenGL ES render textures of non base 2 dimensions?

This is just a quick question before I dive deeper into converting my current rendering system to openGL. I heard that textures needed to be in base 2 sizes in order to be stored for rendering. Is this true?
My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-base 2 textures consume more memory?
It depends on the OpenGL ES version: OpenGL ES 1.0/1.1 have the power-of-two restriction. OpenGL ES 2.0 doesn't have that limitation, but it restricts the wrap modes (and mipmapping) for non-power-of-two textures.
Creating bigger textures to match POT dimensions does waste texture memory.
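For instance, padding a 640 x 480 bitmap up to the nearest power-of-two texture of 1024 x 512 means storing 524,288 texels instead of 307,200, roughly 70% more memory for the same image.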
Suresh, the power of 2 limitation was built into OpenGL back in the (very) early days of computer graphics (before affordable hardware acceleration), and it was done for performance reasons. Low-level rendering code gets a decent performance boost when it can be hard-coded for power-of-two textures. Even in modern GPU's, POT textures are faster than NPOT textures, but the speed difference is much smaller than it used to be (though it may still be noticeable on many ES devices).
GuyNoir, what you should do is build a texture atlas. I just solved this problem myself this past weekend for my own Android game. I created a class called TextureAtlas, and its constructor calls glTexImage2D() to create a large texture of any size I choose (passing null for the pixel values). Then I can call add(id, bitmap), which calls glTexSubImage2D(), repeatedly to pack in the smaller images. The TextureAtlas class tracks the used and free space within the larger texture and the rectangles each bitmap is stored in. Then the rendering code can call get(id) to get the rectangle for an image within the atlas (which it can then convert to texture coordinates).
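A stripped-down sketch of such a class (GLES 2.0 calls and a simple left-to-right row packer are assumed here; the real class described above tracks free space more carefully):

import java.util.HashMap;
import java.util.Map;
import android.graphics.Bitmap;
import android.graphics.Rect;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Minimal texture atlas: one big GL texture, smaller bitmaps packed into left-to-right rows.
class TextureAtlas {
    private final int size;                         // atlas width/height in texels
    private final int textureId;
    private final Map<String, Rect> regions = new HashMap<>();
    private int cursorX = 0, cursorY = 0, rowHeight = 0;

    TextureAtlas(int size) {
        this.size = size;
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        textureId = tex[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        // Allocate the full atlas with no pixel data yet (null pixels).
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, size, size,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    }

    // Copy a bitmap into the next free spot ("typewriter + carriage return" packing, no overflow check).
    void add(String id, Bitmap bitmap) {
        if (cursorX + bitmap.getWidth() > size) {   // carriage return + line feed
            cursorX = 0;
            cursorY += rowHeight;
            rowHeight = 0;
        }
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, cursorX, cursorY, bitmap);
        regions.put(id, new Rect(cursorX, cursorY,
                cursorX + bitmap.getWidth(), cursorY + bitmap.getHeight()));
        cursorX += bitmap.getWidth();
        rowHeight = Math.max(rowHeight, bitmap.getHeight());
    }

    // Pixel rectangle of an image; divide by the atlas size to get texture coordinates.
    Rect get(String id) {
        return regions.get(id);
    }
}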
Side note #1: Choosing the best way to pack in various texture sizes is NOT a trivial task. I chose to start with simple logic in the TextureAtlas class (think typewriter + carriage return + line feed) and make sure I load the images in the best order to take advantage of that logic. In my case, that was to start with the smallest square-ish images and work my way up to the medium square-ish images. Then I load any short+wide images, force a CR+LF, and then load any tall+skinny images. I load the largest square-ish images last.
Side note #2: If you need multiple texture atlases, try to group images inside each that will be rendered together to minimize the number of times you need to switch textures (which can kill performance). For example, in my Android game I put all the static game board elements into one atlas and all the frames of various animation effects in a second atlas. That way I can bind atlas #1 and draw everything on the game board, then I can bind atlas #2 and draw all the special effects on top of it. Two texture selects per frame is very efficient.
Side note #3: If you need repeating/mirroring textures, they need to go into their own textures, and you need to scale them (not add black pixels to fill in the edges).
No, it must be a power of two. However, you can get around this by adding black bars to the top and/or bottom of your image, then using the texture coordinate array to restrict where the texture will be mapped from your image. For example, let's say you have a 16 x 13 pixel image. You can add 3 rows of black pixels to the bottom to pad it to 16 x 16, then do the following:
static const GLfloat texCoords[] = {
0.0, 0.0,
0.0, 13.0/16.0,
1.0, 0.0,
1.0, 13.0/16.0
};
Now the image file you upload is a power of two, but the region you actually map is not. Just make sure you use linear filtering :)
This is a bit late, but non-power-of-two textures are supported under OpenGL ES 1.x/2.0 through extensions.
The main one is GL_OES_texture_npot. There are also GL_IMG_texture_npot and GL_APPLE_texture_2D_limited_npot for iOS devices.
Check for these extensions by calling glGetString(GL_EXTENSIONS) and searching for the extension you need.
I would also advise keeping your texture dimensions to multiples of 4, as some hardware stretches textures if they are not.
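For example, a quick runtime NPOT check (must run on the GL thread with a current context):

import android.opengl.GLES20;

// Returns true if the driver reports non-power-of-two texture support.
boolean supportsNpotTextures() {
    String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
    return extensions != null && extensions.contains("GL_OES_texture_npot");
}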
