There seems to be a distinct lack of support on the web for how to display text in OpenGL ES 2.0. JVitela's answer at Draw text in OpenGL ES says to paint the text onto a Canvas to generate a bitmap, and then use GLUtils to render the text bitmap, but the answer only shows the part directly about painting the text, not what else goes around it.
I've also been trying to follow the lessons at http://www.learnopengles.com, in this case Lesson 4, which deals with basic textures.
How is JVitela's method passed to a vertex or fragment shader? Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL surface? What exactly is the textures variable he used? I think it's a texture data handle (comparing his bind() to that of learnopengles), but why an array? Is it shared with other textures?
I have a program with a heap of stuff already displayed with OpenGL ES 2.0, and need some basic text (some static, some updating at 1 to 5 Hz) drawn over it. My understanding is that texture mapping bitmap glyphs is quite expensive.
Are there any good tutorials to do what I need? Any advice from anyone?
How is JVitela's method passed to a vertex or fragment shader?
It is passed like any other texture (see the sketch after these steps):
Create a texture
Set filtering and wrapping
Upload data via GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
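Putting those three steps together, a minimal sketch in Java (assuming android.opengl.GLES20 and android.opengl.GLUtils, and a bitmap already painted with the text via Canvas/Paint):

int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Filtering and wrapping
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// Upload the bitmap pixels to the bound texture
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle(); // GL keeps its own copy of the pixel data

The resulting handle is bound to a texture unit before drawing, and the fragment shader samples it through an ordinary sampler2D uniform.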
Is the section about the background necessary, or will leaving the background out result in just the text over the rest of the GL Surface?
It is not necessary; leaving it out will still produce black pixels, because the code erases the bitmap with black before drawing the text, so you would get text on a black background rather than text alone. If you want just the text over the rest of the GL surface, you need to enable blending or use a special shader that knows the color of the text.
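For example, to composite only the text over the scene, a sketch (this assumes the bitmap was erased to transparent instead of black; drawTextQuad() is a hypothetical helper that draws the quad textured with the text bitmap):

GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
drawTextQuad(); // hypothetical: draws the quad textured with the text bitmap
GLES20.glDisable(GLES20.GL_BLEND);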
What exactly is the textures variable he used?
It is an int[1], see API docs on GL10.
I think it's a texture data handle (comparing his bind() to at that of learnopengles) but why an array? is it shared with other textures?
This way, more than one texture handle can be created with a single glGenTextures call.
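For example, a single call can fill several handles at once (sketch):

int[] textures = new int[3]; // room for three texture handles
GLES20.glGenTextures(3, textures, 0); // fills all three in one call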
Related
My code draws a background image and then draws some other images (particles) on top of it.
I want the particles to have blending effects like darken, lighten, or burn, the same as Canvas globalCompositeOperation does.
So in the fragment shader, I need to get the previous fragment color and blend it with the new color.
But I could not find a way to do it.
No, there is no way to do this within the standard. However, with the extensions EXT_shader_framebuffer_fetch (non-Mali devices) and ARM_shader_framebuffer_fetch (Mali devices), a value can be read from the framebuffer (since OpenGL 2.0 and for OpenGL ES 2.0 / 3.0):
This extension provides a mechanism whereby a fragment shader may read existing framebuffer data as input. This can be used to implement compositing operations that would have been inconvenient or impossible with fixed-function blending. It can also be used to apply a function to the framebuffer color, by writing a shader which uses the existing framebuffer color as its only input.
Note that there is no guarantee that hardware will support an extension. You need to test whether the extension is supported or not at runtime.
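A sketch of such a runtime check (the extension string must be queried while a GL context is current):

String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasFramebufferFetch = extensions != null
        && (extensions.contains("GL_EXT_shader_framebuffer_fetch")
            || extensions.contains("GL_ARM_shader_framebuffer_fetch"));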
If you want to read fragments from the previous rendering, the usual way is to implement multiple rendering passes and render to a texture. See also LearnOpenGL - Deferred Shading.
In many cases, there is no need to read fragments in the fragment shader at all. A lot of rendering effects can be implemented using the standard blending functionality: the blend function can be changed with glBlendFunc and the blend equation with glBlendEquation.
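For instance, an effect close to a Canvas "multiply" composite can be had from fixed-function blending alone (a sketch; exact equivalence to each globalCompositeOperation mode varies):

GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
// result = src * dst: each particle fragment is multiplied with what is
// already in the framebuffer, in the spirit of a "multiply" composite
GLES20.glBlendFunc(GLES20.GL_DST_COLOR, GLES20.GL_ZERO);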
Yes, it is possible.
First, render the result into a framebuffer object (FBO) that has a texture attachment.
Then sample that texture in a second shader pass to apply the effects.
To find out how to do this, look for the keyword:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebuff);
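A sketch of that setup, assuming fboTexture is an already-created texture sized to match the viewport:

int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, fboTexture, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("Framebuffer is not complete");
}
// First pass: draw the scene here; the output lands in fboTexture
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default framebuffer
// Second pass: bind fboTexture and draw a fullscreen quad with the effect shader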
I have imported a model (e.g. a teapot) into my scene using Rajawali. What I would like is to label parts of the model (e.g. the lid, body, foot, handle and the spout) using plain Android views, but I have no idea how this could be achieved. Specifically, positioning the labels in the right place seems challenging. The idea is that when I transform the model's position in the scene, the tips of the labels stay correctly positioned.
The Rajawali tutorials show how Android views can be placed on top of the scene here: https://github.com/Rajawali/Rajawali/wiki/Tutorial-08-Adding-User-Interface-Elements. I also understand how a 3D coordinate on the model can be transformed into a 2D coordinate on the screen using the transformation matrices, but I have no idea how to determine the exact 3D coordinates on the model itself. The model is exported to OBJ format using Blender, so I assume there is some clever way of determining the coordinates in Blender and exporting them to a separate file, or including them in the OBJ file as metadata (without rendering those points), but I have no idea how I could do that.
Any ideas are very appreciated! :)
I would use a screenquad, not a view. This is a general GL solution, and will also work with iOS.
You must determine the indices of the desired model vertices. Using the text rendering algorithm below, you can just fiddle with them until you hit the right ones.
1. Create a reasonable ARGB bitmap with the same aspect ratio as the screen.
2. Create the screenquad texture using this bitmap.
3. Create a canvas using this bitmap.

The rest happens in onDrawFrame():

4. Clear the canvas using clear paint.
5. Use the MVP matrix to convert the desired model vertices to canvas coordinates.
6. Draw your desired text at the canvas coordinates.
7. Update the texture.
Your text will render very precisely at the vertices you specified. The GL thread will double-buffer and loop you back to step 4. Super smooth 3D text animation!
Use double-precision floating-point math to avoid loss of precision during the coordinate conversion, which otherwise results in wobbly text. You could even use the z value of the vertex to scale the text. Fancy!
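A sketch of that conversion (step 5) in double math; mvp is assumed to hold the combined model-view-projection matrix in OpenGL column-major order:

// Project a model-space vertex to canvas pixel coordinates.
static double[] toCanvas(double[] mvp, double x, double y, double z,
        int canvasW, int canvasH) {
    double cx = mvp[0] * x + mvp[4] * y + mvp[8] * z + mvp[12];
    double cy = mvp[1] * x + mvp[5] * y + mvp[9] * z + mvp[13];
    double cw = mvp[3] * x + mvp[7] * y + mvp[11] * z + mvp[15];
    double ndcX = cx / cw; // clip space -> normalized device coordinates
    double ndcY = cy / cw;
    double px = (ndcX * 0.5 + 0.5) * canvasW; // NDC -> pixels
    double py = (1.0 - (ndcY * 0.5 + 0.5)) * canvasH; // canvas y axis points down
    return new double[] { px, py };
}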
The performance bottleneck is step 7, since the entire bitmap must be copied to GL texture memory every frame. Try to keep the bitmap as small as possible while maintaining the aspect ratio. Maybe let the user toggle the labels.
Note that the copy to GL texture memory is redundant, since in OpenGL ES, GL memory is just regular memory. For compatibility reasons, a separate chunk of regular memory is reserved to artificially enforce the copy.
I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I have some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes to the screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally, there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do it in GeneratedMovie?
Any examples of writing more complicated shapes into video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
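With the single-context approach, rendering twice is just a matter of switching which surface is current (a sketch using EGL14; drawFrame() is a hypothetical method that issues your GLES draw calls, and eglDisplay, displaySurface, encoderSurface, and eglContext are placeholders for the objects created during your EGL setup):

// Pass 1: draw to the on-screen surface
EGL14.eglMakeCurrent(eglDisplay, displaySurface, displaySurface, eglContext);
drawFrame();
EGL14.eglSwapBuffers(eglDisplay, displaySurface);

// Pass 2: draw the same frame to the video encoder's input surface
EGL14.eglMakeCurrent(eglDisplay, encoderSurface, encoderSurface, eglContext);
drawFrame();
EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderSurface, presentationTimeNs);
EGL14.eglSwapBuffers(eglDisplay, encoderSurface);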
Some additional background material is available here.
I'm trying to find a way to draw part of a texture in OpenGL (for example, for a sprite I need to draw different parts of the image) and I can't find it. In the questions I have been looking at, people talk about glDrawTexfOES, but from what I understand it's just a shortcut for drawing a rectangular texture.
Thanks in advance.
Yes, those texture coordinates are the ones. You can change them at runtime, but I'd need some info about your pipeline: how and where do you push vertex and texture coordinates to GL? If you do that every frame with something like glTexCoordPointer, you just need your buffer to be non-constant, and you can change the values whenever you want. If you use GPU buffers, you will need to retrieve the buffer pointer and change the values there. In both cases it would be wise to do this on the same thread as your draw method.
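As an illustration, to show only one quarter of a sprite sheet you shrink the coordinate range from 0..1 down to the sub-rectangle you want (a sketch for the GL10/glTexCoordPointer path, using java.nio buffers; which corner each pair maps to depends on your quad's vertex order):

float[] texCoords = {
    0.0f, 0.0f, // pairs with vertex 0 of the quad
    0.5f, 0.0f, // vertex 1
    0.0f, 0.5f, // vertex 2
    0.5f, 0.5f, // vertex 3
};
ByteBuffer bb = ByteBuffer.allocateDirect(texCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
FloatBuffer texBuffer = bb.asFloatBuffer();
texBuffer.put(texCoords).position(0);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texBuffer);

Refill the buffer with a different sub-rectangle each frame to animate the sprite.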
I have adapted lesson six of insanitydesign's Android examples (http://insanitydesign.com/wp/projects/nehe-android-ports/) to work for a 2D square. The texture displays fine, but I also have other (non-textured) shapes drawn on screen, and the texture from the square "spills over" onto them.
In my on surface created method I have the line
squaretexture.loadGLTexture(gl, this.context);
which I think may be the problem.
My question is where should I put this line in order to fix my problem?
You need to enable texturing when you want to draw textured primitives, and disable texturing when you want primitives without a texture. For example:
glEnable(GL_TEXTURE_2D);
drawObjectA();
glDisable(GL_TEXTURE_2D);
drawObjectB();
Object A will be textured, but object B won't.
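In the GL10-based NeHe ports, the same idea looks like this (a sketch; drawTexturedSquare and drawPlainShapes are hypothetical stand-ins for your own draw calls):

gl.glEnable(GL10.GL_TEXTURE_2D); // texturing on
drawTexturedSquare(gl); // hypothetical: your textured square
gl.glDisable(GL10.GL_TEXTURE_2D); // texturing off
drawPlainShapes(gl); // hypothetical: the non-textured shapes

The loadGLTexture() call itself can stay in onSurfaceCreated(); the spill-over comes from leaving GL_TEXTURE_2D enabled while drawing the other shapes, not from where the texture is loaded.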