I have a bitmap for a 6x1 cubemap obtained from a URI, which needs to be rendered using the renderer.
How do I upload the cubemap faces to the GPU? What would be the set of GLES20 calls I need to make in surfaceCreated()?
You can use the Cube and Plane classes I prepared for my most recent article.
For those classes it would be best to convert your texture into six textures, one for each face of the cube.
The easiest way to add textures is to pass them to the constructor as bitmaps. If you want to create the Cube first and load textures afterwards, you will have to deal with thread safety, and you'll have to make sure the texture update is recognized in the onDraw method of your planes.
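For reference, a minimal sketch of the raw GLES20 calls for a cubemap upload in surfaceCreated(). It assumes the 6x1 strip is laid out +X, -X, +Y, -Y, +Z, -Z with square faces (verify your source's face order), and `strip` is a placeholder name for your decoded Bitmap:

```java
// In surfaceCreated(): split the 6x1 strip and upload one face at a time.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, tex[0]);

int faceSize = strip.getHeight(); // each face is square, so width = 6 * height
for (int i = 0; i < 6; i++) {
    Bitmap face = Bitmap.createBitmap(strip, i * faceSize, 0, faceSize, faceSize);
    // The six face targets are consecutive GL constants, so POSITIVE_X + i works.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, face, 0);
    face.recycle();
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
```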
I came across AHardwareBuffer in Android. I wanted to make use of AHardwareBuffer to store textures so that I can use them on different threads where I don't have an OpenGL context. Currently, I'm doing the following:
Generate a texture and bind it to GL_TEXTURE_2D.
Create EGLClientBuffer and EGLImageKHR from it. Attach the EGLImage as texture target.
Generate an FBO and bind it to the texture using glFramebufferTexture2D.
To draw the texture (say tex), I'm currently rendering it onto the AHardwareBuffer using shaders.
However, I wanted a way to avoid re-rendering the texture onto the hardware buffer and instead store the texture's data directly in the hardware buffer.
I was thinking of using glCopyTexImage2D for this. Is that fine, and would it work?
Also (a dumb question, but I cannot get over it): if I attach my EGLImage, which comes from the hardware buffer, to GL_TEXTURE_2D and then define the texture with glTexImage2D, wouldn't that store the data passed to glTexImage2D into the hardware buffer?
I solved this issue using glTexSubImage2D.
First create an OpenGL texture and bind it to GL_TEXTURE_2D.
Then use glEGLImageTargetTexture2DOES to bind the texture to the EGLImageKHR created from the EGLClientBuffer. This call takes the place of glTexImage2D: any subsequent call to glTexImage2D will break the relationship between the texture and the EGLClientBuffer.
refer: https://www.khronos.org/registry/OpenGL/extensions/OES/OES_EGL_image_external.txt
glTexSubImage2D, however, preserves that relationship. Hence we can load data with this API and have it stored in the AHardwareBuffer.
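In code, the update step looks something like the sketch below. `texId`, `width`, `height`, and `pixelBuffer` are placeholders, and the EGLImage attachment is assumed to have been done already (typically via glEGLImageTargetTexture2DOES in NDK code):

```java
// texId was earlier attached to an EGLImageKHR created from the
// AHardwareBuffer, so its storage IS the hardware buffer.
// glTexSubImage2D only updates existing storage in place, which is why it
// keeps that relationship intact, while glTexImage2D would allocate fresh
// storage and orphan the EGLImage.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, /*xoffset=*/0, /*yoffset=*/0,
        width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
```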
PS: This might be one way of many; if there are other ways, I'll accept those answers.
I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I run into problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes to the screen, but I can't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do this in GeneratedMovie?
Examples from anybody who has written more complicated shapes into video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
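With the single-context approach, the "render twice" idea boils down to something like the sketch below. The EGL objects, the encoder surface (from MediaCodec#createInputSurface()), and drawFrame() are all assumed to be set up already:

```java
// One EGLContext, two EGLSurfaces: draw each frame once per surface.
EGL14.eglMakeCurrent(eglDisplay, displayEglSurface, displayEglSurface, eglContext);
drawFrame();                                    // render pass 1: to the screen
EGL14.eglSwapBuffers(eglDisplay, displayEglSurface);

EGL14.eglMakeCurrent(eglDisplay, encoderEglSurface, encoderEglSurface, eglContext);
drawFrame();                                    // render pass 2: same scene, to the encoder
EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderEglSurface, ptsNanos);
EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);  // submits the frame to MediaCodec
```

Because all GLES state lives in the one context, textures and programs created for the display pass are directly usable in the encoder pass.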
Some additional background material is available here.
I'm trying to find a way to draw part of a texture in OpenGL (for example, for a sprite I need to draw different parts of the image) and I can't find one. In the questions I have looked into, people talk about glDrawTexfOES, but from what I understand it's just a shortcut for drawing a texture onto a rectangle.
Thanks in advance.
Yes, those texture coordinates are the ones. You can change them at runtime, but I'd need some info about your pipeline: how and where do you push vertex and texture coordinates to GL? If you do that every frame with something like glTexCoordPointer, you just need your buffer to be non-constant and can change the values whenever you want. If you use GPU-side buffers, you will need to retrieve the buffer pointer and change the values there. In both cases it would be wise to do that on the same thread as your draw method.
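For example, here is a hypothetical helper that computes the (u, v) coordinates for one frame of a horizontal sprite strip; the class name, frame layout, and vertex order are illustrative, not from the question's code:

```java
public class SpriteUV {
    // Returns 4 (u, v) pairs for frame `index` of a strip with `frames`
    // equally sized frames, in triangle-strip order:
    // (u0,1), (u1,1), (u0,0), (u1,0). Flip v if your image origin differs.
    public static float[] frameCoords(int index, int frames) {
        float u0 = (float) index / frames;       // left edge of the frame
        float u1 = (float) (index + 1) / frames; // right edge of the frame
        return new float[] { u0, 1f, u1, 1f, u0, 0f, u1, 0f };
    }
}
```

Feeding the returned array into your texture-coordinate buffer each time the frame index changes is all that "animating" the sprite requires.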
There seems to be a distinct lack of support on the web for how to display text in OpenGL ES 2.0. JVitela's answer at Draw text in OpenGL ES says to use a Canvas, paint the text onto it to generate a bitmap, and then use GLUtils to upload the text bitmap, but the answer only shows the part directly about painting the text, not what else goes around it.
I've also been trying to go off the lessons at http://www.learnopengles.com , in this case Lesson 4 which deals with basic Textures.
How is JVitela's method passed to a vertex or fragment shader? Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL surface? What exactly is the textures variable he used? I think it's a texture data handle (comparing his bind() to that of learnopengles), but why an array? Is it shared with other textures?
I have a program with a heap of stuff displayed already with OpenGL ES 2.0, and I need some basic text (some static, some updating at 1 to 5 Hz) printed over it. My understanding is that texture mapping bitmap glyphs is quite expensive.
Are there any good tutorials to do what I need? Any advice from anyone?
How is JVitela's method passed to a vertex or fragment shader?
It is passed like any other texture:
Create a texture
Set filtering and wrapping
Upload data via GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
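Putting those steps together, a sketch of the whole path (text string, bitmap size, and filter choices are placeholders):

```java
// 1. Paint the text into a Bitmap (JVitela's approach).
Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
bitmap.eraseColor(Color.TRANSPARENT);   // or an opaque color for a background
Canvas canvas = new Canvas(bitmap);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(32);
paint.setColor(Color.WHITE);
canvas.drawText("Hello GL", 8, 40, paint);

// 2. Create a texture and set filtering/wrapping.
int[] textures = new int[1];            // the "textures" variable from the answer
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// 3. Upload the bitmap. The fragment shader then samples this like any
//    other texture via a sampler2D uniform on a textured quad.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
```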
Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL Surface?
It is not necessary (you will simply get a black background): leaving it out still writes black pixels, because the code erases the bitmap with black beforehand. If you wanted to draw just the text, you would need to enable blending or use a special shader that knows the color of the text.
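For the "text only" case, the blending setup would look something like this (drawTextQuad() is a hypothetical stand-in for your textured-quad draw call):

```java
// Draw the text quad with alpha blending so only the glyphs cover what is
// already on screen. Requires an ARGB bitmap with a transparent background.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawTextQuad();
GLES20.glDisable(GLES20.GL_BLEND);
```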
What exactly is the textures variable he used?
It is an int[1], see API docs on GL10.
I think it's a texture data handle (comparing his bind() to at that of learnopengles) but why an array? is it shared with other textures?
This way, more than one texture handle can be created using glGenTextures.
I have adapted lesson six of insanitydesign's Android examples (http://insanitydesign.com/wp/projects/nehe-android-ports/) to work for a 2D square, and the texture displays fine, but I also have other (non-textured) shapes drawn on screen, and the texture from the square "spills over" onto them.
In my onSurfaceCreated method I have the line
squaretexture.loadGLTexture(gl, this.context);
which I think may be the problem.
My question is where should I put this line in order to fix my problem?
You need to enable texturing when you want to draw textured primitives, and disable texturing when you want primitives without a texture. For example:
glEnable(GL_TEXTURE_2D);
drawObjectA();
glDisable(GL_TEXTURE_2D);
drawObjectB();
Object A will be textured, but object B won't.