I am working on a small game which uses OpenGL.
I was getting a GL_STACK_UNDERFLOW error. I have gone through the code, and there is one glPushMatrix for each glPopMatrix. Any ideas what else could be causing this error?
Did you maybe do a
glMatrixMode(GL_MODELVIEW);
/* ... */
glPushMatrix();
"balanced" by a
glMatrixMode(GL_PROJECTION);
/* ... */
glPopMatrix();
It matters which matrix stack is active at the time of the push/pop operation.
Anyway, you shouldn't use the OpenGL built-in matrix operations at all. Use something like GLM, Eigen or linmath.h to build the matrices as part of your program's data structures, and just load the matrices you require with glLoadMatrix or, once you eventually move to shaders, glUniform.
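For illustration, a minimal sketch of that approach using GLM with the fixed-function pipeline (variable names are placeholders; with shaders you would upload the matrix with glUniformMatrix4fv instead):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the matrix on the CPU, as part of your own data structures.
float angle = 0.5f; // radians
glm::mat4 modelView = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
modelView = glm::rotate(modelView, angle, glm::vec3(0.0f, 1.0f, 0.0f));

// Hand the finished matrix to OpenGL in one go.
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(glm::value_ptr(modelView));

// With shaders, the equivalent would be:
// glUniformMatrix4fv(modelViewLocation, 1, GL_FALSE, glm::value_ptr(modelView));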
No, OpenGL built-in matrix operations are not GPU accelerated, so there's no benefit at all in using them.
I'm trying to build an overlay for an Android application that uses GLESv2.
I've hooked eglSwapBuffers in order to insert my rendering code just before the frame finishes.
I'm able to do simple things like drawing a square with the scissor test:
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 200, 200);
glClearColor(1, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);
I've also had success drawing simple shapes with the following code, but as soon as I start using vertex attrib pointers the application stops rendering correctly and shows a mostly-black screen with a small section that still displays correctly. I'm sure there's some OpenGL state that I'm clobbering here, but I'm not sure what it is. What would I need to save/restore before/after my draw calls in order to allow the app to continue to render correctly with my overlay?
// Save application state
GLint prev_program;
glGetIntegerv(GL_CURRENT_PROGRAM, &prev_program);
// Do overlay drawing
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, RectangleVertices);
glEnableVertexAttribArray(vPosition);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableVertexAttribArray(vPosition);
// Trying to restore application state here - there are probably more things that I'm missing.
glUseProgram(prev_program);
What would I need to save/restore before/after my draw calls in order to allow the app to continue to render correctly with my overlay?
Everything that you modified ...
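To make "everything" a bit more concrete for the snippet above, here is a rough sketch of saving and restoring just the state this overlay touches (GLES2 calls; vPosition and program are the names from the question, the rest is illustrative and almost certainly incomplete):

// Save the pieces of state the overlay is about to touch.
GLint prevProgram, prevArrayBuffer, prevTexture;
GLint prevEnabled, prevSize, prevType, prevNormalized, prevStride;
GLvoid* prevPointer;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &prevArrayBuffer);
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevTexture);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &prevEnabled);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_SIZE, &prevSize);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_TYPE, &prevType);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_NORMALIZED, &prevNormalized);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_STRIDE, &prevStride);
glGetVertexAttribPointerv(vPosition, GL_VERTEX_ATTRIB_ARRAY_POINTER, &prevPointer);

// ... overlay drawing as in the question ...

// Restore.
glBindBuffer(GL_ARRAY_BUFFER, prevArrayBuffer);
glVertexAttribPointer(vPosition, prevSize, (GLenum)prevType, (GLboolean)prevNormalized,
                      prevStride, prevPointer);
if (prevEnabled) glEnableVertexAttribArray(vPosition);
else glDisableVertexAttribArray(vPosition);
glBindTexture(GL_TEXTURE_2D, prevTexture);
glUseProgram(prevProgram);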
Note that even full state restoration can be inadequate for some use cases. Normally relying on this is a bug in the application (the application making assumptions about object ID assignment), but such assumptions are legitimately made by some development tooling, such as verbatim API trace replay tools.
I want to integrate OSG scene into my Qt Quick application.
It seems that the proper way to do it is to use QQuickFramebufferObject class and call osgViewer::Viewer::frame() inside QQuickFramebufferObject::Renderer::render(). I've tried to use https://bitbucket.org/leon_manukyan/qtquick2osgitem/overview.
However, it seems this approach doesn't work correctly in all cases. For example, on the Android platform this code renders only the first frame.
I think the problem is that QQuickFramebufferObject uses the same OpenGL context both for Qt Quick Scene Graph and code called within QQuickFramebufferObject::Renderer::render().
So I'm wondering: is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation based on QQuickItem and a separate OpenGL context, such as https://github.com/podsvirov/osgqtquick?
Is it possible to integrate OpenSceneGraph into Qt Quick using
QQuickFramebufferObject correctly, or is it better to use an
implementation based on QQuickItem and a separate OpenGL context?
The easiest way would be to use QQuickPaintedItem, which is derived from QQuickItem. While by default it offers raster-image style drawing, you can switch its render target to an OpenGL framebuffer object:
QPainter paints into a QOpenGLFramebufferObject using the GL paint
engine. Painting can be faster as no texture upload is required, but
anti-aliasing quality is not as good as if using an image. This render
target allows faster rendering in some cases, but you should avoid
using it if the item is resized often.
MyQQuickItem::MyQQuickItem(QQuickItem* parent) : QQuickPaintedItem(parent)
{
// Unless we set the render target below, painting would use the slower
// raster engine; this switches it to the GL paint engine:
this->setRenderTarget(QQuickPaintedItem::FramebufferObject);
}
How do we render with this OpenGL target then? The answer is still good old QPainter: store the image and call update(), then draw it in paint():
void MyQQuickItem::presentImage(const QImage& img)
{
m_image = img;
update();
}
// must implement
// virtual void QQuickPaintedItem::paint(QPainter *painter) = 0
void MyQQuickItem::paint(QPainter* painter)
{
// or we can precalculate the required output rect
painter->drawImage(this->boundingRect(), m_image);
}
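For completeness, the item can then be exposed to QML the usual way (module and type names below are just placeholders):

#include <QtQml/qqml.h>

// In main(), before loading the QML scene:
qmlRegisterType<MyQQuickItem>("MyComponents", 1, 0, "MyQQuickItem");

// QML side:
//   import MyComponents 1.0
//   MyQQuickItem { anchors.fill: parent }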
While the QOpenGLFramebufferObject used behind the scenes here is not a QQuickFramebufferObject, its semantics are pretty much what the question is about, and the question's author confirmed that a QImage can be used as the source for rendering in OpenGL.
P.S. I have been using this technique successfully since Qt 5.7, on desktop PCs and on a single-board touchscreen Linux device. I'm just a bit unsure about Android.
I have an Android OpenGL-ES application featuring two threads. Call Thread 1 the "display thread", which "blends" its current texture with a texture coming from Thread 2, a.k.a. the "worker thread". Thread 2 performs off-screen rendering (render to texture), and then Thread 1 combines this texture with its own texture to generate the frame which is displayed to the user.
I have a working solution, but I know it is inefficient and am trying to improve upon it. In its onSurfaceCreated() method, Thread 1 creates two textures. Thread 2, in its draw method, does a glReadPixels() into a ByteBuffer (let's refer to it as bb). Thread 2 then signals to Thread 1 that a new frame is ready, at which point Thread 1 invokes glTexSubImage2D(bb) to update its texture with the new data from Thread 2 and proceeds with its "blending" in order to generate a new frame.
This architecture works better on some Android devices than others, and I have been able to gain a slight improvement in performance by using PBOs. But I figured that by using so-called "direct textures" via the EGLImage extension (https://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis) I would gain some benefit by removing the need for the costly glTexSubImage2D() call. Yes, I'd still have the glReadPixels() call, which still bothers me, but at least I should measure some improvement. In fact, at least on a Samsung Galaxy Tab S (Mali T628 GPU), my new code is dramatically slower than before! How can this be?
In the new code Thread 1 instantiates the EGLImage object by using gralloc and proceeds to bind it to a texture:
// note gbuffer::create() is a wrapper around gralloc
buffer = gbuffer::create(width, height, gbuffer::FORMAT_RGBA_8888);
EGLClientBuffer anb = buffer->getNativeBuffer();
EGLImageKHR pEGLImage = _eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)anb, attrs);
glBindTexture(GL_TEXTURE_2D, texid); // texid from glGenTextures(...)
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, pEGLImage);
Then Thread 2, in its main loop, does its off-screen render-to-texture work and essentially pushes the data back over to Thread 1 via glReadPixels(), with the destination address being the backing storage behind the EGLImage:
void* vaddr = buffer->lock();
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, vaddr);
buffer->unlock();
How can this be slower than glReadPixels() into a ByteBuffer followed by glTexSubImage2D from the aforementioned ByteBuffer? I'm also interested in alternative techniques as I am not limited to OpenGL-ES 2.0 and can use OpenGL-ES 3.0. I have tried FBOs but ran into some issues.
In response to the first answer I decided to take a stab at implementing a different approach, namely sharing the textures between Thread 1 and Thread 2. While I don't have the sharing part working yet, I do pass Thread 1's EGLContext down as the share context for Thread 2's EGLContext, so that in theory Thread 2 can share textures with Thread 1. With these changes in place, and with the glReadPixels() and glTexSubImage2D() calls remaining, the app works but is far slower than before. Strange.
The other oddity I ran into is the difference between javax.microedition.khronos.egl.EGLContext and android.opengl.EGLContext. GLSurfaceView exposes an interface method setEGLContextFactory() which allows me to pass Thread 1's EGLContext to Thread 2, as in the following:
public class Thread1SurfaceView extends GLSurfaceView {
public Thread1SurfaceView(Context context) {
super(context);
// here is how I pass Thread 1's EGLContext to Thread 2
setEGLContextFactory(new EGLContextFactory() {
@Override
public javax.microedition.khronos.egl.EGLContext createContext(
final javax.microedition.khronos.egl.EGL10 egl,
final javax.microedition.khronos.egl.EGLDisplay display,
final javax.microedition.khronos.egl.EGLConfig eglConfig) {
// Configure context for OpenGL ES 3.0.
int[] attrib_list = {EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL14.EGL_NONE};
javax.microedition.khronos.egl.EGLContext renderContext =
egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
mThread2 = new Thread2(renderContext);
return renderContext;
}
// destroyContext() omitted for brevity
});
}
}
Previously I used classes from the EGL14 namespace, but since the GLSurfaceView interface apparently relies on EGL10, I had to change the implementation for Thread 2: everywhere I used EGL14 I replaced it with javax.microedition.khronos.egl.EGL10. Then my shaders stopped compiling until I added the OpenGL ES 3 client version to the attribute list. Now things work, albeit slower than before (but next I will remove the calls to glReadPixels and glTexSubImage2D).
My follow-on question is, is this the right way to handle the javax.microedition.khronos.egl.* versus android.opengl.* issue? Can I typecast javax.microedition.khronos.egl.EGL10 to android.opengl.EGL14, javax.microedition.khronos.egl.EGLDisplay to android.opengl.EGLDisplay, and javax.microedition.khronos.egl.EGLContext to android.opengl.EGLContext? What I have right now just seems ugly and doesn't feel right, although this proposed casting doesn't sit right either. Am I missing something?
Based on the description of what you're trying to do, both approaches sound way more complicated and inefficient than necessary.
The way I've always understood EGLImage, it's a mechanism for sharing images between different processes, and possibly different APIs.
For multiple OpenGL ES contexts in the same process, you can simply share the textures. All you need to do is make both contexts part of the same share group, and they can both use the same textures. In your use case, you can then have one thread rendering to a texture using an FBO, and then sample from it in the other thread. This way, there is no extra data copying, and your code should become much simpler.
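A rough sketch of the render-to-texture side of that in the worker thread (GLES2 calls; width/height and the sampling code in the display thread are left out):

// Worker thread: create a texture and attach it to an FBO, then render into it.
GLuint sharedTex, fbo;
glGenTextures(1, &sharedTex);
glBindTexture(GL_TEXTURE_2D, sharedTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sharedTex, 0);

// ... draw the off-screen content here ...
// No glReadPixels / glTexSubImage2D needed: the display thread (whose context
// is in the same share group) simply binds sharedTex and samples from it.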
The only slightly tricky aspect is synchronization. ES 2.0 does not have synchronization mechanisms in the API. The best you can probably do is call glFinish() in one thread (e.g. after your thread 2 finished rendering to the texture), and then use standard IPC mechanisms to signal the other thread. ES 3.0 has sync objects, which make this much more elegant.
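With ES 3.0, the hand-off could look roughly like this (error handling omitted; passing the GLsync handle between threads is up to you):

// Worker thread, after rendering to the shared texture:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush(); // make sure the fence is actually submitted to the GPU
// ...signal the display thread via your usual threading mechanism...

// Display thread, before sampling the texture:
glWaitSync(fence, 0, GL_TIMEOUT_IGNORED); // server-side wait, does not stall the CPU
glDeleteSync(fence);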
My earlier answer here sketches some of the steps needed to create multiple contexts in the same share group: about opengles and texture on android. The key part is the 3rd argument to eglCreateContext, where you specify the context you want to share objects with.
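In EGL terms, that looks roughly like this (attribute list and variable names are illustrative):

// Thread 1's context, created first.
EGLint attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
EGLContext ctx1 = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs);

// Thread 2's context: pass ctx1 as the share_context argument, so both
// contexts are in the same share group and can use the same texture objects.
EGLContext ctx2 = eglCreateContext(display, config, ctx1, attribs);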
I'm trying to draw two objects using two different textures with one shader program in OpenGL ES 2.0 for Android. The first object should have texture0 and the second should have texture1.
In fragment shader I have:
uniform sampler2D tex;
and in the Java code:
int tiu0 = 0;
int tiu1 = 1;
int texLoc = glGetUniformLocation(program, "tex");
glUseProgram(program);
// bind texture0 to texture image unit 0
glActiveTexture(GL_TEXTURE0 + tiu0);
glBindTexture(GL_TEXTURE_2D, texture0);
// bind texture1 to texture image unit 1
glActiveTexture(GL_TEXTURE0 + tiu1);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(texLoc, tiu0);
// success: glGetError returns GL_NO_ERROR, glGetUniformiv returns 0 for texLoc
drawFirstObject(); // should have texture0
glUniform1i(texLoc, tiu1);
// success: glGetError returns GL_NO_ERROR, glGetUniformiv returns 1 for texLoc
drawSecondObject(); // should have texture1
Running on a Samsung Galaxy Ace with Android 2.3.3, both objects get the same texture0. Similar code runs correctly with OpenGL 2.0 on my desktop computer.
If I remove drawFirstObject, the second object will have texture1.
If I remove drawSecondObject, the first object will have texture0.
If somewhere between drawFirstObject and drawSecondObject I change the program for a while:
glUseProgram(0); // can be also any valid program other than the program from the next call
glUseProgram(program);
then both objects will have texture1.
Values of uniforms other than sampler2D are always set correctly.
I know I can draw the two objects with different textures using only one texture image unit and binding the appropriate texture to that unit before drawing each object, but I also want to know what's going on here.
Is something wrong with my code? Is it possible in OpenGL ES 2.0 to draw the objects with different textures by only switching between texture image units, as shown in the code? If it's impossible, is that difference between OpenGL 2.0 (where it's possible) and OpenGL ES 2.0 documented anywhere? I can't find it.
After hours of further research I've found out that this problem is specific to the Adreno 200 GPU used in my Samsung Galaxy Ace (GT-S5830). It seems the Adreno 200 driver assigns the texture to the sampler in the first call to a drawing function and after that ignores any changes to the sampler value (glUniform1i(samplerLocation, textureImageUnit)) until one of two things occurs:
glUseProgram is called with a different shader program,
a different texture is bound to any texture image unit used by the shader program.
There's a thread in the forums of the manufacturer of Adreno 200 GPU describing the very same problem.
So if you call drawing functions several times with the same shader program and with different textures bound beforehand, there are two workarounds to the described problem:
Call glUseProgram(0); glUseProgram(yourDrawingProgram); before every drawing call.
Before every drawing call, bind a different texture to at least one texture image unit used by your shader program. This solution can be difficult to maintain, because if you bind the same texture that is already bound to that texture image unit, the problem will remain. So in this case the easiest solution is to simply not change sampler values, and instead bind the textures for all texture image units used by the shader program before every drawing call.
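Applied to the code from the question, workaround 1 would look something like this:

glUniform1i(texLoc, tiu0);
drawFirstObject();

// Rebinding the program forces the Adreno 200 driver to pick up
// the new sampler value on the next draw call.
glUseProgram(0);
glUseProgram(program);

glUniform1i(texLoc, tiu1);
drawSecondObject();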
I have written a test class that should simply paint the application icon onto the screen. So far no luck. What am I doing wrong?
public class GLTester
{
Bitmap bitmap;

void test(final Context context)
{
BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.icon, options);
setupGLES();
createProgram();
setupTexture();
draw();
}
void draw()
{
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(glProgramHandle);
}
}
A couple of things.
I presume your squareVertices buffer is supposed to contain 4 vec3s, but your shader is set up for vec4s. Maybe this is OK, but it seems odd to me.
You are also not setting up any sort of projection matrix with glFrustum or glOrtho, and are not setting up any sort of viewing matrix with something like Matrix.setLookAtM. You should always try to keep the vertex pipeline in mind as well. Look at slide 2 from this lecture: https://wiki.engr.illinois.edu/download/attachments/195761441/3-D+Transformational+Geometry.pptx?version=2&modificationDate=1328223370000
I think what is happening is that your squareVertices are going through this pipeline and coming out on the other side as pixel coordinates. So your image is probably a very tiny speck in the corner of your screen, since you are using vertices from -1.0 to 1.0.
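For example, a minimal orthographic projection (shown here in C++ with GLM just to illustrate the idea; on Android you would typically build the same matrix with android.opengl.Matrix.orthoM, and the uniform name u_mvpMatrix is hypothetical):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Map x in [-aspect, aspect] and y in [-1, 1] onto the viewport, so vertices
// in the -1..1 range cover a predictable part of the screen.
float aspect = (float)viewportWidth / (float)viewportHeight;
glm::mat4 projection = glm::ortho(-aspect, aspect, -1.0f, 1.0f, -1.0f, 1.0f);

// Upload it to the uniform that the vertex shader multiplies positions by.
GLint mvpLoc = glGetUniformLocation(programHandle, "u_mvpMatrix");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(projection));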
As a shameless side note, I posted some code on SourceForge that makes it possible to work on and load shaders from a file in your assets folder, rather than keeping them inside your .java files as strings. https://sourceforge.net/projects/androidopengles/
There's an example project in the files section that uses this shader helper.
I hope some part of this rambling was helpful. :)
It looks pretty good, but I see that, for one thing, you are calling glUniform inside setupTexture while the shader program is not currently bound. You should only call glUniform after calling glUseProgram.
I don't know if this is the problem, because I would guess that it would probably default to 0 anyway, but I don't know for sure.
Other than that, you should get familiar with calling glGetError to check whether there are any error conditions pending.
Also, when creating shaders, it's a good habit to check their success status with glGetShaderiv(GL_COMPILE_STATUS), plus glGetShaderInfoLog if the compile fails, and similarly for programs with glGetProgramiv / glGetProgramInfoLog.
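For example, a typical check looks roughly like this (C-style GL calls; the Android Java equivalents are GLES20.glGetShaderiv, GLES20.glGetShaderInfoLog and so on):

GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    // log now contains the compiler error message
}

// Same idea after linking the program:
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
}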