I'm programming an Android game. To reduce the number of textures that need to be loaded (OpenGL ES 2.0), I've created several sprite sheets of size 1024x1024. Some frames of the same animation are on different sprite sheets. My question is whether that is bad for performance, since I have to bind (glBindTexture) a different texture for each animation frame.
Yes, changing the texture binding has some performance cost compared to not doing it. How much is probably best determined by empirical testing.
If you can switch to OpenGL ES 3, you can use a texture array rather than separate textures, so all the animation frames live in a single texture object.
However, if that's not an option, why not simply bind all your sprite sheets at once? If you have fewer sheets than GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, then you don't need to change the texture binding at all; just provide some way of letting the shader know which bound texture it should sample, as in the sketch below.
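As a rough sketch, assuming a fragment shader that declares samplers named u_Sheet0, u_Sheet1, ... plus a u_SheetIndex uniform it branches on (all names here are placeholders, and note that ES 2.0 GLSL only permits constant sampler indexing, so the shader must branch rather than index an array):

    // Uses android.opengl.GLES20. Bind every sprite sheet once, each to
    // its own texture unit; "spriteSheetIds" and "program" are assumed.
    int[] maxUnits = new int[1];
    GLES20.glGetIntegerv(GLES20.GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, maxUnits, 0);

    for (int i = 0; i < spriteSheetIds.length && i < maxUnits[0]; i++) {
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + i);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, spriteSheetIds[i]);
        // Point the matching sampler uniform at unit i.
        int loc = GLES20.glGetUniformLocation(program, "u_Sheet" + i);
        GLES20.glUniform1i(loc, i);
    }

    // Per frame: no glBindTexture needed, just tell the shader which
    // sampler to read from.
    int indexLoc = GLES20.glGetUniformLocation(program, "u_SheetIndex");
    GLES20.glUniform1i(indexLoc, currentSheet);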
I've been having fun with Android and OpenGL in a little side project meant to teach me about it, but now I want to implement some animations and I'm having trouble finding information on how to proceed.
Let's say I have a square with a texture on it, and I want to create it very small and then gradually stretch it to its normal size. Only that square should be affected, nothing else around it. I assume that building a new vertex buffer every time is expensive, and for the animation to be fluid this would need to happen very frequently. Is that the norm, or is there a better way of doing this?
To stretch/scale objects you should use matrices rather than recreating the vertices, as in the sketch below.
You can read a tutorial here: http://www.learnopengles.com/understanding-opengls-matrices/ or search for another one; there are lots of educational materials on OpenGL ES 2.0.
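A minimal sketch of the idea in Android Java, assuming a shader with an MVP matrix uniform (uMvpLoc here), a precomputed view-projection matrix, and a 0-to-1 progress value driving the animation (all placeholders):

    // Uses android.opengl.Matrix and android.opengl.GLES20.
    float[] model = new float[16];
    float[] mvp = new float[16];

    float scale = 0.1f + 0.9f * progress;   // grows from 10% to full size
    Matrix.setIdentityM(model, 0);
    Matrix.scaleM(model, 0, scale, scale, 1f);

    // viewProjection is your precomputed view * projection matrix.
    Matrix.multiplyMM(mvp, 0, viewProjection, 0, model, 0);
    GLES20.glUniformMatrix4fv(uMvpLoc, 1, false, mvp, 0);

    // Draw the same, unchanged vertex buffer every frame.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);

The vertex data never changes; only the per-frame uniform does, which is cheap.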
I'm working on a photo app that uses OpenGL ES 2.0 with a Renderer, an off-screen GLSurfaceView and some shader scripts (*.fsh and *.vsh).
After loading the shader scripts from the Assets folder, preparing the GL surface and context, etc., we finally call GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4); and it works quite nicely, generating the bitmaps with the effects.
The problem, OF COURSE, is the memory limitation: any large enough bitmap (the threshold varies by device; not so big on old Gingerbread phones, very large images on the Nexus 10) will produce an OutOfMemoryError.
I'm not so knowledgeable in OpenGL, and the way I know to deal with very large amounts of data is to use a stream, so it isn't necessary to hold it all in memory.
So the question is: is there a way to apply an OpenGL shader/renderer through a stream instead of an in-memory Bitmap? If yes, any pointer to a link or base procedure?
Not exactly sure what you mean by stream, but here's another solution: split rendering up into multiple passes. For instance, if you have a 512x512 texture and a corresponding quad to texture, but can only afford to upload 256x256 at a time due to memory restrictions, do the following:
split up the texture into 4 chunks
create a single, fitting texture object
for each chunk:
    upload the current chunk into the texture object's data store
    draw 1/4 of the quad, e.g. the top-left quarter, and texture it accordingly
Note that the above example assumes a 512x512 texture and screen size; a code sketch follows below. In any case, I think you get the idea.
Obviously, this is the usual memory/performance trade-off. You circumvent memory restrictions by using more bandwidth for transfers and doing more rendering.
Note: I'm a desktop GL guy and I'm not quite sure how memory is split up between the GPU and the rest, or if there even is dedicated VRAM. I assume you've got a limited amount available for GL resources which is even smaller than the overall system memory.
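Here's a rough sketch of that loop in Android Java; loadChunkPixels (decoding one 256x256 region at a time) and drawQuarterQuad (positioning one quarter of the quad) are hypothetical helpers:

    // Uses android.opengl.GLES20. One 256x256 texture object is reused
    // for all four chunks of a 512x512 image.
    int chunk = 256;
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tileTexId);
    // Allocate the data store once; passing null leaves it uninitialized.
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            chunk, chunk, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    // (Set filter/wrap parameters on tileTexId once, not shown.)

    for (int ty = 0; ty < 2; ty++) {
        for (int tx = 0; tx < 2; tx++) {
            // Re-fill the same data store with the current chunk's pixels.
            java.nio.Buffer pixels = loadChunkPixels(tx, ty, chunk); // hypothetical
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                    chunk, chunk, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
            // Draw one quarter of the quad, textured with this chunk.
            drawQuarterQuad(tx, ty); // hypothetical
        }
    }

Only one chunk's pixels ever need to be resident at a time, at the cost of four uploads and four draw calls per image.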
I found this post, but it's too slow for a smooth live wallpaper. Is it possible to do the same with OpenGL, which should be faster?
It is definitely possible with OpenGL. You would load your two textures and then decide which to show on a per-pixel basis using a fragment shader, as in the sketch below. The actual OpenGL part won't be too complicated, as you are effectively just drawing a screen-aligned quad. For an idea of how to write the shaders, I'd look here.
As for which would be faster, it's hard to say, although I'd think OpenGL would be faster.
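For illustration, a minimal fragment shader embedded as a Java string; the mask texture driving the per-pixel choice is an assumption here, and any per-pixel rule would do in its place:

    // GLSL ES 2.0 fragment shader: sample both textures and blend them
    // per pixel using the red channel of a mask texture.
    private static final String FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "varying vec2 v_TexCoord;\n" +
            "uniform sampler2D u_TextureA;\n" +
            "uniform sampler2D u_TextureB;\n" +
            "uniform sampler2D u_Mask;\n" +
            "void main() {\n" +
            "    vec4 a = texture2D(u_TextureA, v_TexCoord);\n" +
            "    vec4 b = texture2D(u_TextureB, v_TexCoord);\n" +
            "    float m = texture2D(u_Mask, v_TexCoord).r;\n" +
            "    gl_FragColor = mix(a, b, m);\n" +
            "}\n";

With a hard black/white mask this shows exactly one of the two textures per pixel; intermediate values give a soft blend.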
I found a 3D graphics framework for Android called Rajawali and I am learning how to use it. I followed the most basic tutorial, which renders a sphere object with a 1024x512 jpg image for the texture. It worked fine on the Galaxy Nexus, but it didn't work on the Galaxy Player GB70.
When I say it didn't work, I mean that the object appears but the texture is not rendered. Eventually, I changed some parameters that I use for the Rajawali framework when creating textures and got it to work. Here is what I found out.
The cause was the value being set for GL_TEXTURE_MIN_FILTER. Among the following four values,
GLES20.GL_LINEAR_MIPMAP_LINEAR
GLES20.GL_NEAREST_MIPMAP_NEAREST
GLES20.GL_LINEAR
GLES20.GL_NEAREST
the texture is only rendered when GL_TEXTURE_MIN_FILTER is not set to a filter that uses mipmaps. So it works when GL_TEXTURE_MIN_FILTER is set to either of the last two.
Now here is what I don't understand and am curious about: when I shrink the image I'm using as the texture to 512x512, the GL_TEXTURE_MIN_FILTER setting does not matter. All four settings of the min filter work.
So my question is: is there a requirement on the dimensions of the image when using a min filter for the texture? For example, am I required to use a square image? Could other things, such as the wrap mode or the configuration of the mag filter, be a problem? Or does it seem like an OpenGL implementation bug in the device?
Good morning, this is a typical example of non-power-of-2 textures.
Textures need to have power-of-2 dimensions for a multitude of reasons; this is a very common mistake, and everybody falls into this pitfall at some point :) me too.
The fact that non-power-of-2 textures work smoothly on some devices/GPUs depends merely on the OpenGL driver implementation: some GPUs support them, some others don't. I strongly suggest you go for power-of-2 textures in order to guarantee correct functioning on all devices.
Last but not least, using non-power-of-2 textures can lead to catastrophic scenarios in GPU memory utilization, since most of the drivers which accept non-power-of-2 textures need to rescale them in memory to the nearest higher power-of-2 size. For instance, a texture of 520x520 could lead to an actual memory mapping of 1024x1024.
This is something you don't want because in real world "size matters", especially on mobile devices.
You can find a quite good explanation in the OpenGL ES 2.0 Programming Guide (the "Gold Book"):
In OpenGL ES 2.0, textures can have non-power-of-two (npot) dimensions. In other words, the width and height do not need to be a power of two. However, OpenGL ES 2.0 does have a restriction on the wrap modes that can be used if the texture dimensions are not power of two. That is, for npot textures, the wrap mode can only be GL_CLAMP_TO_EDGE and the minification filter can only be GL_NEAREST or GL_LINEAR (in other words, not mipmapped). The extension GL_OES_texture_npot relaxes these restrictions and allows wrap modes of GL_REPEAT and GL_MIRRORED_REPEAT and also allows npot textures to be mipmapped with the full set of minification filters.
I suggest evaluating this book, since it covers this topic quite well.
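Following the quoted restriction, if you do keep an npot texture on stock ES 2.0, a safe parameter combination might look like this (textureId is assumed to be an already-created texture):

    // Uses android.opengl.GLES20. For npot textures without
    // GL_OES_texture_npot: no mipmapped min filter, and
    // CLAMP_TO_EDGE wrapping on both axes.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);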
In OpenGL ES 1.1, I would like to take multiple texture IDs and combine them into a single texture ID, so that I could use the resulting texture multiple times in the future. My texture sources could be transparent PNGs that I want to stack together. This would be a huge optimization since I wouldn't have to render multiple textures every frame.
I have seen examples like the wiki Texture_Combiners, but it doesn't seem like the results are reusable.
Also, if there is a way to mask an image with another into a reusable texture, that would be extremely helpful too.
What you want to do is render to texture. If you're writing for iOS you're guaranteed that the OES framebuffer extension will be available, so you can use that. If you're writing for Android or another platform then the extension may be available but isn't guaranteed. If it isn't available you can fall back on glCopyTexImage2D.
So in the first case you'd create a framebuffer which has a texture as its colour buffer. Render to that, then switch to another framebuffer, and you can henceforth draw from the texture.
In the second case you'd draw into whatever framebuffer you have, then use glCopyTexImage2D to copy from the current colour buffer into a texture. This will be a little slower because it's a copy, but it'll still probably be a lot faster than reading back the rendered content and then uploading it yourself.
ES 2.0 makes the functions contained in the framebuffer extension mandatory, so ES 2.0 capable GPUs are very likely to support the extension.
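A rough sketch of the first path on Android ES 1.1, using the GL_OES_framebuffer_object entry points from GLES11Ext; texId is assumed to be an existing texture sized to your render target, and you should check GL_EXTENSIONS for the extension first:

    // Uses android.opengl.GLES11Ext and android.opengl.GLES10.
    int[] fbo = new int[1];
    GLES11Ext.glGenFramebuffersOES(1, fbo, 0);
    GLES11Ext.glBindFramebufferOES(GLES11Ext.GL_FRAMEBUFFER_OES, fbo[0]);
    GLES11Ext.glFramebufferTexture2DOES(GLES11Ext.GL_FRAMEBUFFER_OES,
            GLES11Ext.GL_COLOR_ATTACHMENT0_OES,
            GLES10.GL_TEXTURE_2D, texId, 0);

    if (GLES11Ext.glCheckFramebufferStatusOES(GLES11Ext.GL_FRAMEBUFFER_OES)
            != GLES11Ext.GL_FRAMEBUFFER_COMPLETE_OES) {
        // Attachment unsupported: fall back to drawing into the default
        // framebuffer and copying with glCopyTexImage2D as described above.
    }

    // Draw the stacked PNGs here; the result lands in texId.
    GLES11Ext.glBindFramebufferOES(GLES11Ext.GL_FRAMEBUFFER_OES, 0);

After unbinding, texId holds the composited result and can be drawn from like any other texture.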