OpenGL textures into AHardwareBuffer (Android)

I came across AHardwareBuffer in Android. I wanted to make use of AHardwareBuffer to store textures so that I can use them on different threads where I don't have an OpenGL context. Currently, I'm doing the following:
Generate a texture and bind it to GL_TEXTURE_2D.
Create an EGLClientBuffer from the AHardwareBuffer and an EGLImageKHR from that. Attach the EGLImage as the texture target.
Generate an FBO and attach the texture to it using glFramebufferTexture2D.
To draw a texture (say tex), I render it onto the AHardwareBuffer using shaders.
However, I wanted a way to avoid re-rendering onto the hardware buffer and instead store the texture's data in the hardware buffer directly.
I was thinking of using glCopyTexImage2D for this. Is that fine, and would it work?
Also (a dumb question, but I cannot get over it): if I attach my EGLImage, which was created from the hardware buffer, to GL_TEXTURE_2D and define the texture using glTexImage2D, would that not store the pixel data passed to glTexImage2D in the hardware buffer?
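For reference, here is a minimal NDK-style sketch in C (API 26+) of the setup described above. It assumes a current EGL context, omits all error checking, and assumes the extension entry points are available (e.g. by defining EGL_EGLEXT_PROTOTYPES / GL_GLEXT_PROTOTYPES as below, or by resolving them via eglGetProcAddress):

    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Allocate a GPU-visible buffer that can be sampled and rendered to. */
    AHardwareBuffer_Desc desc = {
        .width = 512, .height = 512, .layers = 1,
        .format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM,
        .usage  = AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE |
                  AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT,
    };
    AHardwareBuffer *hwb = NULL;
    AHardwareBuffer_allocate(&desc, &hwb);

    /* Wrap the buffer as an EGLClientBuffer, then as an EGLImageKHR. */
    EGLDisplay dpy = eglGetCurrentDisplay();
    EGLClientBuffer cb = eglGetNativeClientBufferANDROID(hwb);
    EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID, cb, attrs);

    /* Attach the EGLImage as the backing store of a GL texture. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);

    /* Attach the texture to an FBO and render into it with shaders. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);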

I solved this issue using glTexSubImage2D.
First create an OpenGL texture and bind it to GL_TEXTURE_2D.
Then use glEGLImageTargetTexture2DOES to bind the texture to the EGLImageKHR created from the EGLClientBuffer. This takes the place of glTexImage2D: any subsequent call to glTexImage2D will break the relationship between the texture and the EGLClientBuffer.
refer: https://www.khronos.org/registry/OpenGL/extensions/OES/OES_EGL_image_external.txt
glTexSubImage2D, however, preserves that relationship, so we can load data with this call and have it stored in the AHardwareBuffer.
PS: This is just one way; if there are other ways, I'll accept that answer.
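To make that concrete: with the EGLImage-backed texture bound as in the sketch above, the CPU-side upload looks roughly like this (pixels is a hypothetical RGBA buffer matching the AHardwareBuffer's size):

    /* glTexSubImage2D writes into the existing EGLImage backing store,
     * so the data lands in the AHardwareBuffer. A glTexImage2D call here
     * would instead respecify the texture and break the EGLImage link. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,          /* x/y offset into the texture */
                    512, 512,      /* region must fit the buffer  */
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);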

Related

Android OpenGL ES2 Many textures for one VBO

I have many fixed objects like terrains and buildings, and I want to merge them all into one VBO to reduce draw calls and improve performance when there are many objects. I load textures and store their IDs in an array. My question is: can I bind several textures to that one VBO, or must I make a separate VBO for each texture? Or can I issue many glDrawArrays calls for one VBO based on offset and length? If I can do that, will it be smooth and perform well?
In ES 2.0, if you want to use multiple textures in a single draw call, your only good option is to use a texture atlas. Essentially, you store the texture data from multiple logical textures in a single OpenGL texture, and the texture coordinates are chosen so that the desired texture data is used for each primitive. This could be done by adjusting the original texture coordinates, or by feeding an id into the shader and applying an offset to the texture coordinates based on the id.
Of course you can issue multiple glDrawArrays() calls for a single VBO, binding a different texture between them. But that goes against your goal of reducing the number of draw calls. You should certainly make sure that the number of draw calls really is a bottleneck for you before you spend a lot of time on these types of optimizations.
In more advanced versions of OpenGL you have additional features that can help with this use case, like array textures.
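To illustrate the id-plus-offset variant of the atlas approach: for an atlas laid out as a uniform grid, mapping a tile id to a UV scale and offset is one line per axis. A minimal sketch in C, assuming a hypothetical 4x4 grid of equally sized tiles:

    /* Map a tile index in a 4x4 atlas to a UV offset; the shader then
     * samples at uv = uvOffset + localUV * uvScale. */
    enum { ATLAS_COLS = 4, ATLAS_ROWS = 4 };

    void atlas_uv(int tile, float *uOff, float *vOff, float *uvScale) {
        *uvScale = 1.0f / ATLAS_COLS;              /* square grid assumed */
        *uOff = (float)(tile % ATLAS_COLS) * (*uvScale);
        *vOff = (float)(tile / ATLAS_COLS) * (*uvScale);
    }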
There are a couple of standard techniques that many game engines use to keep the number of draw calls low.
Batching: This technique combines all objects that share the same material into one mesh. The objects do not even have to be static; if they are dynamic you can still batch them by passing an array of model matrices (a sketch follows below).
Texture Atlas: Creating a texture atlas for all static meshes is the best way, as said in the other answer. However, you will have to do a fair amount of work to combine the textures efficiently and update their UVs accordingly.
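A hedged sketch of the model-matrix-array idea in C: the matrices for a batch are uploaded as one uniform array, each vertex carries an index attribute selecting its object's matrix, and the whole batch is drawn in a single call (program and vertexCount are hypothetical):

    /* In the vertex shader: uniform mat4 uModel[16]; each vertex picks
     * its matrix via an index attribute. 16 mat4s = 64 uniform vectors,
     * safely within the ES 2.0 minimum of 128. */
    GLfloat modelMatrices[16 * 16];   /* filled per frame, column-major */
    GLint loc = glGetUniformLocation(program, "uModel");
    glUniformMatrix4fv(loc, 16, GL_FALSE, modelMatrices);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  /* one call, 16 objects */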

Rendering a cubemap in GLES20

I have a bitmap for a 6x1 cubemap obtained from a URI, which needs to be rendered using the renderer.
How do I upload the cubemap faces to the GPU? What would be the set of GLES20 calls I need to make in surfaceCreated()?
You can use the Cube and Plane classes I prepared for my most recent article.
For those classes it would be best to convert your texture into 6 textures, one for each face of the cube.
The "easiest" way to add textures is to pass them to the constructor as bitmaps. If you want to create the Cube first and load the textures afterwards, you will have to deal with thread safety and make sure that the texture update is picked up in the onDraw method of your planes.

Android: Attach SurfaceTexture to FrameBuffer

I am performing a video effect that requires dual-pass rendering (the texture needs to be passed through multiple shader programs). Attaching the SurfaceTexture to a GL_TEXTURE_EXTERNAL_OES texture passed in the constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first rendering can be done to a FrameBuffer, and then the resulting texture can be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, not a FrameBuffer. I'm not sure if there is a workaround for this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see source code, start in updateTexImage(). Note the name of the class ("GLConsumer") is a more accurate description of the function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
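A common workaround sketch for the dual-pass case: the first pass samples the external texture and renders into an ordinary GL_TEXTURE_2D attached to an FBO; the second shader then operates on that ordinary texture (fbo and intermediateTex are hypothetical, created beforehand):

    /* First-pass fragment shader: must declare the external sampler. */
    static const char *kPass1Frag =
        "#extension GL_OES_EGL_image_external : require\n"
        "precision mediump float;\n"
        "uniform samplerExternalOES uCamTex;\n"
        "varying vec2 vUV;\n"
        "void main() { gl_FragColor = texture2D(uCamTex, vUV); }\n";

    /* Render the external texture into the intermediate texture... */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, intermediateTex, 0);
    /* ...draw with the pass-1 program, then bind intermediateTex as a
     * normal 2D texture for the second (effect) pass. */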

Texturing Multiple Entities with OpenGL ES 2.0 on Android

I need a bit of an explanation of exactly how to apply textures to different entities. My understanding is that only one texture can be bound at a time. So, if I have many entities all using different textures, how do I go about applying a texture to an entity, rendering that entity, and then binding the next texture for the next entity?
I guess I'm confused about the timing of applying a texture to an entity and rendering it with the correct texture. I am planning on using texture atlases for similar sprites, animations, and so on. But I don't know how to have a texture, or a portion of a texture (texture atlas), associated with an entity before rendering so I can move on to applying the next texture to the other entities.
Similarly, if I have a texture atlas loaded and use it to animate one entity, but also need to animate a different entity that uses a different texture atlas, do I need to have the game load the other atlas and apply it to achieve the other animation?
I'm familiar with the OpenGL ES 2.0 API; I just need help with how to apply it.
If I understand correctly, you are looking for a clean application structure to achieve sprite animation using atlas textures. There are very many good ways to do that, so I will try to explain only one.
It is best to split your situation across a few classes that control textures and models.
At the bottom you should create a class that handles a texture. It contains a texture ID and, if loaded from a file, a file name (or some custom ID). Its public interface should include the following (a struct-based sketch follows the list):
constructors as needed: with size (FBO), with name (from file)
bind (binds the texture)
texture size
image size on texture (for cases where a non-POT image is applied to a POT, power-of-two, texture)
explicit cleanup (deletes the texture)
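In C terms (the answer assumes an object-oriented language; this is a hedged struct-based sketch with hypothetical names):

    /* The bottom-level texture "class" described above. */
    typedef struct {
        GLuint id;                   /* OpenGL texture handle */
        char   name[64];             /* file name / custom ID; empty for FBO use */
        int    texWidth, texHeight;  /* allocated (POT) texture size */
        int    imgWidth, imgHeight;  /* actual image size on the texture */
    } Texture;

    void texture_bind(const Texture *t) {
        glBindTexture(GL_TEXTURE_2D, t->id);
    }

    void texture_destroy(Texture *t) {   /* explicit cleanup */
        glDeleteTextures(1, &t->id);
        t->id = 0;
    }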
Since you are talking about animation, you presumably have an image with multiple sub-images (a walking character, for instance). It is best to subclass the texture class so it contains the additional methods, such as getting the texture coordinates for a specific frame.
Then, a level higher, you need a class that represents a texture pool. This class caches the textures so you can reuse them. It should have an array of all the texture objects currently loaded, plus methods such as the following (a sketch of the lookup follows the list):
texture named (returns the texture with a given name, either from the cache or by creating a new texture object with that name and storing it)
explicit cleanup (simply deletes all the textures and empties the cache)
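Continuing the sketch above, the pool's lookup method might look like this (texture_load is a hypothetical file loader; bounds checking omitted):

    #include <string.h>

    Texture texture_load(const char *name);   /* hypothetical file loader */

    #define POOL_MAX 128
    static Texture pool[POOL_MAX];
    static int poolCount = 0;

    /* Return a cached texture by name, loading it on first use. */
    Texture *texture_named(const char *name) {
        for (int i = 0; i < poolCount; ++i)
            if (strcmp(pool[i].name, name) == 0)
                return &pool[i];                   /* cache hit */
        pool[poolCount] = texture_load(name);      /* cache miss: load */
        return &pool[poolCount++];
    }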
After that you create a model class which contains all the data needed for drawing. It either has a vertex buffer ID or can generate all the vertex data on the fly. Besides that, it holds a reference to its texture(s), grabbed from the texture pool class. At this point there are basically two ways of drawing this model. One is to do it all in your main drawing method, which I discourage, but it is quick to do. The other is to implement a method on the model class such as "draw with shader", to which you pass your custom shader class, so the model itself contains the code to draw itself. The pipeline is something like this (a sketch follows the list):
Bind linked texture
Get and bind the texture coordinates from the texture (or the atlas subclass)
Get and bind the position coordinates from the model
Do any additional setting needed on the shader
Draw
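A hedged sketch of that pipeline, continuing the structs above (the Shader and Model fields are hypothetical, and the attribute arrays are assumed to be enabled already):

    typedef struct { GLint aPos, aUV, uModel; } Shader;  /* cached locations */
    typedef struct {
        const Texture *texture;                  /* from the texture pool */
        const GLfloat *positions, *uvs, *modelMatrix;
        GLsizei vertexCount;
    } Model;

    void model_draw(const Model *m, const Shader *s) {
        texture_bind(m->texture);                          /* 1. bind texture */
        glVertexAttribPointer(s->aUV, 2, GL_FLOAT, GL_FALSE,
                              0, m->uvs);                  /* 2. tex coords   */
        glVertexAttribPointer(s->aPos, 3, GL_FLOAT, GL_FALSE,
                              0, m->positions);            /* 3. positions    */
        glUniformMatrix4fv(s->uModel, 1, GL_FALSE,
                           m->modelMatrix);                /* 4. extra state  */
        glDrawArrays(GL_TRIANGLES, 0, m->vertexCount);     /* 5. draw         */
    }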
This way you will have ultimate control over what is going on, and you can optimize the system to a great extent. For instance, when you iterate through the models to be drawn you may sort them by texture so you avoid unnecessary texture binds, you may create a large vertex buffer to flush them in a single call, and you can automatically check whether a specific texture is no longer needed...
Besides that, this approach will have a minimal memory footprint as far as texture data goes. And as far as the models themselves go, their resources are insignificantly small; for instance, each of them contains some frame structure with 4 floating-point values and a pointer to a texture from the pool.
I hope this explanation helps you.

Android OpenGL ES 2.0 text rendering

There seems to be a distinct lack of support on the web for how to display text in OpenGL ES 2.0. JVitela's answer at Draw text in OpenGL ES says to use a Canvas, paint the text onto it to generate a bitmap, and then use GLUtils to upload the text bitmap, but the answer only shows the part directly about painting the text, not what else goes around it.
I've also been trying to go off the lessons at http://www.learnopengles.com , in this case Lesson 4 which deals with basic Textures.
How is JVitela's method passed to a vertex or fragment shader? Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL surface? What exactly is the textures variable he used? I think it's a texture data handle (comparing his bind() to that of learnopengles), but why an array? Is it shared with other textures?
I have a programme with a heap of stuff displayed on it already with OpenGL ES 2.0, and need some basic text (some static, some updating every 1 to 5 Hz) printed over it. My understanding is that texture mapping bitmap glyphs is quite expensive.
Are there any good tutorials to do what I need? Any advice from anyone?
How is JVitela's method passed to a vertex or fragment shader?
It is passed like any other texture; the steps are expanded in the sketch after this list:
Create a texture
Set filtering and wrapping
Upload data via GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
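Expanded into raw GL calls (shown in C; in Java, step 3 is the GLUtils.texImage2D line above, which infers format and size from the Bitmap — width, height, and bitmapPixels are hypothetical here):

    GLuint texHandle;
    glGenTextures(1, &texHandle);                 /* 1. create the texture */
    glBindTexture(GL_TEXTURE_2D, texHandle);
    /* 2. filtering and wrapping */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    /* 3. upload; this is what GLUtils.texImage2D does for a Bitmap */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, bitmapPixels);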
Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL Surface?
It is not strictly necessary, but leaving it out will give you black pixels around the text, because the code erases the bitmap with black before drawing. If you want just the text over the rest of the GL surface, you need to enable blending or use a special shader that knows the color of the text (a minimal blending setup is sketched below).
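For the blending route, the usual setup before drawing the text quad is:

    /* Standard alpha blending: only the glyph pixels cover the scene. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);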
What exactly is the textures variable he used?
It is an int[1], see API docs on GL10.
I think it's a texture data handle (comparing his bind() to that of learnopengles), but why an array? Is it shared with other textures?
This way, more than one texture handle can be created using glGenTextures.
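In C the same pattern is explicit; the count argument is what makes the array useful:

    GLuint handles[3];
    glGenTextures(3, handles);   /* handles[0..2] are now valid texture names */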
