Target: Android API >=23, OpenGL ES 2.
The following code
private void deleteFBO()
{
    android.util.Log.e("FBO", "deleting " + mFramebufferID);

    // The GLES20 delete functions take arrays, so wrap the two IDs.
    int[] textureIds = new int[1];
    int[] mFBORenderToTexture = new int[1];
    textureIds[0] = mTextureID;
    mFBORenderToTexture[0] = mFramebufferID;

    if (GLES20.glGetError() != GLES20.GL_NO_ERROR)
        android.util.Log.e("FBO", "error before deleting");

    GLES20.glDeleteTextures(1, textureIds, 0);
    GLES20.glDeleteFramebuffers(1, mFBORenderToTexture, 0);

    if (GLES20.glGetError() != GLES20.GL_NO_ERROR)
        android.util.Log.e("FBO", "error after deleting");
}
doesn't give me any errors (i.e. I never see 'error before/after deleting' in the log), even though it is definitely called from a thread which does NOT hold any OpenGL context.
How is that possible? Or do the glDelete*() calls really fail, and my code simply fails to detect it?
It seems I don't understand WHICH OpenGL calls need to be made while holding the context. glDrawArrays certainly gives me an error when I call it without holding the context, and I thought I needed to hold it in every single case, including the two glDelete*() calls above?
WHICH OpenGL calls need to be made when holding the context?
All of them. That includes glGetError(), which means your error checks themselves are invalid if there is no current context.
That said, I have found some claims that glGetError() returns GL_INVALID_OPERATION if there is no current context, but I have not been able to find that behavior defined in the spec. So until somebody shows me otherwise, I'll stick to my claim that calling glGetError() without a current context gives undefined (i.e. implementation-dependent) results.
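To make the failure visible rather than silent, you can check that the calling thread actually has a current context before touching GL at all. A minimal sketch, assuming the stated API >= 23 target (EGL14 needs API 17+); deleteFBOSafely() is an illustrative wrapper name, not from the original post:

private boolean hasCurrentContext() {
    // eglGetCurrentContext() is per-thread; EGL_NO_CONTEXT means nothing is bound here.
    return !android.opengl.EGL14.eglGetCurrentContext()
            .equals(android.opengl.EGL14.EGL_NO_CONTEXT);
}

private void deleteFBOSafely() {
    if (!hasCurrentContext()) {
        android.util.Log.e("FBO", "no current EGL context on this thread; GL calls would be no-ops");
        return;
    }
    deleteFBO();
}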
Related
I'm tasked with converting an Android app from GLES1 to GLES3. The app does all its OpenGL calls in JNI, runs in a thread, and calls into Java to run basic functions like:
InitOpenGL();
MakeContextCurrent();
GLSwapBuffer();
The app maintains its own clock, etc-- so basically the main loop of the app looks something like this (simplified).
JavaBridge->InitOpenGL();
while (stillRunning())
{
    SleepUntilRightTime();
    UpdateEverything();
    if (shouldDraw())
    {
        JavaBridge->MakeContextCurrent();
        DrawEverything();
        JavaBridge->GLSwapBuffers();
    }
}
So, to accomplish this, the app has its own OpenGL factory, which initializes OpenGL ES 1.1.
I'll cut out everything unnecessary for brevity; here are the basics (all error checking removed to keep it short):
public class GLView extends SurfaceView implements SurfaceHolder.Callback
{
    EGL10 m_GL;
    EGLContext m_GLContext;
    EGLDisplay m_GLDisplay = null;
    EGLSurface m_GLSurface = null;
    EGLConfig m_GLConfig = null;

    public void InitOpenGL()
    {
        m_GL = (EGL10) EGLContext.getEGL();
        m_GLDisplay = m_GL.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        m_GL.eglInitialize(m_GLDisplay, null);

        EGLConfig[] configs = new EGLConfig[1];
        int[] config_count = new int[1];
        int[] specs = { EGL10.EGL_ALPHA_SIZE, 8, EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_STENCIL_SIZE, 8, EGL10.EGL_NONE };
        m_GL.eglChooseConfig(m_GLDisplay, specs, configs, 1, config_count);
        m_GLConfig = configs[0]; // keep the chosen config (omitted in the original excerpt; without it m_GLConfig stays null)

        m_GLContext = m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, null);

        SurfaceHolder h = getHolder();
        m_GLSurface = m_GL.eglCreateWindowSurface(m_GLDisplay, m_GLConfig, h, null);
        m_GL.eglMakeCurrent(m_GLDisplay, m_GLSurface, m_GLSurface, m_GLContext);
    }

    public void MakeContextCurrent()
    {
        m_GL.eglMakeCurrent(m_GLDisplay, m_GLSurface, m_GLSurface, m_GLContext);
    }

    public void SwapBuffers()
    {
        m_GL.eglSwapBuffers(m_GLDisplay, m_GLSurface);
    }
}
This all works beautifully: the app runs in its thread and paints the screen whenever it deems fit (it's a game, by the way; that's why the constant loop).
Now: I was hoping that to turn this into an OpenGL ES 3.0 context, I could just request a version number or something like that. I've tried a few things without success (setting GL_VERSION in an attrib list for eglCreateContext, making sure the right libraries are linked, fixing the manifest, etc., on and on), but no matter what I do, OpenGL calls in the JNI either do nothing or crash the program.
I can't even get glClear to work unless I revert everything back to square one. Does anyone have any advice on how to turn this thing into a 3.0-capable context?
Okay, managed to figure this out. For anyone using modern Android: you'll find that EGL10.EGL_CONTEXT_CLIENT_VERSION is not defined. It seems like EGL_VERSION would be the substitute, but it's not.
Why isn't EGL_CONTEXT_CLIENT_VERSION defined? Is it deprecated? Is it shunned? We'll never know. But we DO know that if it WAS defined, it would be 0x3098.
So, making this all magically work was as simple as saying:
int[] attrib_list = new int[] { 0x3098 /* EGL_CONTEXT_CLIENT_VERSION */, 3, EGL10.EGL_NONE };
m_GLContext = m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
Is it dangerous to do this? Probably. I'll do a little more research into it. If I never return to edit this, then I found no real answer yea or nay.
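For what it's worth, the same constant is exposed with a proper name as android.opengl.EGL14.EGL_CONTEXT_CLIENT_VERSION (API 17+), so a sketch that avoids the bare magic number, assuming you can reference EGL14 alongside the EGL10 code above, would be:

// EGL14 (API 17+) defines the constant that the EGL10 binding is missing; its value is 0x3098.
int[] attrib_list = new int[] { android.opengl.EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL10.EGL_NONE };
m_GLContext = m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, attrib_list);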
(via Xamarin C#) On Android I am unable to create the framebuffer.
I have a subclassed Android.OpenGL.GLSurfaceView, set to GL ES 2, with preserve EGL Context, and my other GL commands are drawing successfully, with no GLErrors:
public class PaintingView : GLSurfaceView, GLSurfaceView.IRenderer
{
    public PaintingView( Context context, IAttributeSet attrs ) :
        base( context, attrs )
    {
        SetEGLContextClientVersion( 2 );
        // Set the Renderer for drawing on the GLSurfaceView
        SetRenderer( this );
        PreserveEGLContextOnPause = true;
    }

    public void OnSurfaceCreated( IGL10 gl, Javax.Microedition.Khronos.Egl.EGLConfig config )
    {
        ...
    }
}
OnSurfaceCreated does get called, and it invokes the following code on the UI thread via myActivity.RunOnUiThread( Action ), as part of creating a buffer of desired size for offscreen rendering:
frameBuffer = -123; // Invalid value, so we can test whether it was set.
GL.GenFramebuffers( 1, out frameBuffer );
Shader.CheckGLError( "Offscreen.Resize 1" ); // My method that checks result of "GL.GetErrorCode()"
if (frameBuffer <= -1) {
    ... failed to assign an FBO!
}
Problem:
GenFramebuffers failed to assign an FBO - "frameBuffer" is still set to "-123".
Q: Why might this occur? Is there something I was supposed to do first? Is there a working Xamarin example of using an FBO on Android?
NOTES:
On iOS we successfully used code that created a second EGLContext and pbuffer for offscreen rendering, but after reading that switching between two EGLContexts has poor performance on Android, we are attempting an FBO approach for Android.
The problem seems to be specific to Android, as it involves setting up the FBO.
I've seen several java examples of using an FBO on Android, but have been unable to get past this first step, using Xamarin.
This isn't a Xamarin-specific issue. The problem is attempting to create and use an FBO at a time when the current (active) EGLContext is not able to handle GL calls (even though GLError did not report an error).
The code needs to be part of the draw cycle for the GL view that the FBO will be rendered to.
For an Android.OpenGL.GLSurfaceView, that means "during GLSurfaceView.IRenderer.OnDrawFrame( IGL10 gl )":
public class PaintingView : GLSurfaceView, GLSurfaceView.IRenderer
{
    ...
    public void OnDrawFrame( IGL10 gl )
    {
        Int32 frameBuffer = -123; // Invalid value, so we can test whether it was set.
        GL.GenFramebuffers( 1, out frameBuffer );
        Shader.CheckGLError( "PaintingView.OnDrawFrame 1" ); // My method that checks result of "GL.GetErrorCode()"
        if (frameBuffer <= -1) {
            ...
        }
        ...
    }
}
Here, frameBuffer gets assigned "1". (On the first call; on the second call it is "2", and so on. TBD whether these increasing values are a bad sign.)
TBD whether we should create it once and re-use it in future calls, or recreate it each time.
TBD whether there is an earlier point in GLSurfaceView's lifecycle, at which this can be initialized. If done too early, the view is not attached to the window, so there is no surface, so all GL calls fail because Android has not initialized an EGLContext yet for this view. In addition, it needs to be done on the GL rendering thread, which AFAIK is only accessible via OnDrawFrame.
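One option worth noting here: the underlying Android API provides GLSurfaceView.queueEvent(Runnable), which runs code on the GL rendering thread outside of OnDrawFrame. A sketch in Java (Xamarin binds the same GLSurfaceView method), assuming the surface has already been created:

// Runs on the GL thread with the view's EGLContext current, so GenFramebuffers works here too.
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        int[] fbo = new int[1];
        android.opengl.GLES20.glGenFramebuffers(1, fbo, 0);
        // fbo[0] now holds a valid framebuffer name for this context.
    }
});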
EDIT:
More debugging led me to the fact that glGetAttribLocation returns -1, except on the first start of the Application. The program ID is valid (I guess?); it was 12 in my testing just now. I also tried to retrieve the attribute locations again right before drawing, but that did not work out either.
My shader "architecture" now looks like this:
I've turned the shader into a singleton, i.e. there is only one instance. Using it:
public void useProgram() {
    GLES20.glUseProgram(iProgram);
    getUniformLocations();
    getAttributeLocations();
}
I.e. the program is sent to OpenGL, and afterwards I retrieve the uniform and attribute locations for all my variables; they are stored within a HashMap (one for each shader):
@Override
protected void getAttributeLocations() {
    addToGLMap(A_NORMAL, GLES20.glGetAttribLocation(iProgram, A_NORMAL));
    addToGLMap(A_POSITION, GLES20.glGetAttribLocation(iProgram, A_POSITION));
    addToGLMap(A_COLOR, GLES20.glGetAttribLocation(iProgram, A_COLOR));
}
I don't understand why the program's ID is, for example, 12, yet all the attribute locations are non-existent in the second and following runs of my Application...
In my Application, I am loading a Wavefront object, as well as I am drawing several lines and cubes, just to try something. After starting the Application "clean", i.e. after rebooting or installing it, everything looks as intended. But if I close the Application and re-open it, it looks weird, screenshot is at the bottom.
What I'm currently doing in onSurfaceCreated:
- Take care of culling, clear color, etc.
- Clear all loaded objects (just for testing; a later phase will of course not delete memory).
- Reload objects (threaded).
My objects are stored like this:
public class WavefrontObject {
    private FloatBuffer mPositionBuffer = null;
    private FloatBuffer mColorBuffer = null;
    private FloatBuffer mNormalBuffer = null;
    private ShortBuffer mIndexBuffer = null;
}
Buffers are filled upon creation of the element.
They are drawn:
mColorBuffer.position(0);
mNormalBuffer.position(0);
mIndexBuffer.position(0);
mPositionBuffer.position(0);

GLES20.glVertexAttribPointer(mShader.getGLLocation(BaseShader.A_POSITION), 3, GLES20.GL_FLOAT, false, 0, mPositionBuffer);
GLES20.glEnableVertexAttribArray(mShader.getGLLocation(BaseShader.A_POSITION));

// etc...

GLES20.glDrawElements(GLES20.GL_TRIANGLES, mIndexBuffer.capacity(), GLES20.GL_UNSIGNED_SHORT, mIndexBuffer);
Do I need to disable the vertex attrib arrays after drawing? I am currently overwriting the buffers in each drawing loop, but could they interact with other models being drawn?
The model I am loading is a small toy plane. After starting the Application "clean" it is drawn correctly, but after restarting, it looks like the screenshot at the bottom (loading the object, with all colors set to white for testing).
So it looks to me like the buffers have left-over stuff in them? What's the "best practice" for using these buffers? Disable the arrays? Does OpenGL ES 2.0 offer some sort of "clear buffer" method that I can use before putting my values in them?
What was expected to be drawn: at the point where the "weird triangles" and colors originate, there should be the plane model, all in white.
When your application goes to the background, its OpenGL context is destroyed.
All GL objects (programs and their uniform/attribute handles, textures, buffers, etc.) are invalidated with it.
On reopening, you have to clear/invalidate all cached state, including singletons like yours, and recreate it against the new context...
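A minimal sketch of that re-initialization, hooked into the point where the new context becomes available; reset() and init() are hypothetical names for whatever your singleton provides:

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // The old context is gone, so the cached program ID and every entry in the
    // uniform/attribute HashMap are stale. Drop them and rebuild on the new context.
    BaseShader.getInstance().reset();  // hypothetical: forget iProgram and the location map
    BaseShader.getInstance().init();   // hypothetical: recompile, relink, re-query locations
}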
So I have done a lot of looking around and the answer to this seems to be to use:
int[] maxSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
to detect the maximum texture size. Now my issue is: how do I create or get access to the gl variable that holds the function I need? Is it already there somewhere? I would like to support Android 2.2 and above, so the 4.0+ trick won't work. If this is a repeat question, just point me in the right direction in the comments and I will take it down. I couldn't find a good explanation of how to set this up properly anywhere, just those two lines of code.
If you take a look at how OpenGL apps are made, you will notice there is the main app thread (the main activity) and a renderer class (http://developer.android.com/guide/topics/graphics/opengl.html). The heart of the renderer class is the method public void onDrawFrame(GL10 gl), which is called by the Android infrastructure whenever the frame needs to be redrawn.
So basically, a context object (the GL10 gl variable) is passed to your renderer when required, and there you can check your max texture size.
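A minimal sketch of that setup; the renderer class name is illustrative:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

public class MyRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // A GL context is current on this thread, so the query is valid here.
        int[] maxSize = new int[1];
        gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
        android.util.Log.i("GL", "max texture size: " + maxSize[0]);
    }

    @Override public void onSurfaceChanged(GL10 gl, int width, int height) { }
    @Override public void onDrawFrame(GL10 gl) { }
}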
I have a large number of textures in JPG format,
and I need to preload them into OpenGL memory before the actual drawing starts.
I've asked a question about this and was told that the way to do it is to separate the JPEG unpacking from the glTexImage2D(...) calls, moving the unpacking to another thread.
The problem is I'm not quite sure how to do this.
The OpenGL handle needed to execute glTexImage2D is only available in GLSurfaceView.Renderer's onSurfaceCreated and onDrawFrame methods.
I can't unpack all my textures first and then load them into OpenGL in onSurfaceCreated(...),
because they probably won't fit in the VM's limited memory (20-40 MB?).
That means I have to unpack and load them one by one, but in that case I can't get an OpenGL pointer outside those methods.
Could someone please give me an example of threaded texture loading for an OpenGL game?
It must be a fairly typical procedure, but I can't find any info anywhere.
As explained in 'OpenGLES preloading textures in other thread', there are two separate steps: bitmap creation and bitmap upload. In most cases you should be fine just doing the bitmap creation on a secondary thread, which is fairly easy.
If you experience frame drops while uploading the textures, call texImage2D from a background thread. To do so you'll need to create a new OpenGL context which shares its textures with your rendering thread, because each thread needs its own OpenGL context.
EGLContext textureContext = egl.eglCreateContext(display, eglConfig, renderContext, null);
Getting the parameters for eglCreateContext is a little bit tricky. You need to use setEGLContextFactory on your SurfaceView to hook into the EGLContext creation:
@Override
public EGLContext createContext(final EGL10 egl, final EGLDisplay display, final EGLConfig eglConfig) {
    EGLContext renderContext = egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, null);
    // create your texture context here
    return renderContext;
}
Then you are ready to start a texture loading thread:
public void run() {
    int[] pbufferAttribs = { EGL10.EGL_WIDTH, 1, EGL10.EGL_HEIGHT, 1,
            EGL14.EGL_TEXTURE_TARGET, EGL14.EGL_NO_TEXTURE,
            EGL14.EGL_TEXTURE_FORMAT, EGL14.EGL_NO_TEXTURE,
            EGL10.EGL_NONE };
    EGLSurface localSurface = egl.eglCreatePbufferSurface(display, eglConfig, pbufferAttribs);
    egl.eglMakeCurrent(display, localSurface, localSurface, textureContext);

    int textureId = loadTexture(R.drawable.waterfalls);
    // here you can pass the textureId to your
    // render thread to be used with glBindTexture
}
I've created a working demonstration of the above code snippets at https://github.com/perpetual-mobile/SharedGLContextsTest.
This solution is based on many sources around the internet. The most influential ones were these three:
http://www.khronos.org/message_boards/showthread.php/9029-Loading-textures-in-a-background-thread-on-Android
http://www.khronos.org/message_boards/showthread.php/5843-Texture-Sharing
Why is eglMakeCurrent() failing with EGL_BAD_MATCH?
You just have your main thread with the uploading routine, which has access to OpenGL and calls glTexImage2D, while the other thread loads (and decodes) the image from file into memory. While the secondary thread loads the next image, the main thread uploads the previously loaded image into the texture. So you only need memory for two images: the one currently being loaded from file and the one currently being uploaded into the GL (which is the one loaded previously). Of course you need a bit of synchronization, to prevent the loader thread from overwriting the memory that the main thread is currently sending to GL, and to prevent the main thread from sending unfinished data.
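A minimal sketch of that handoff, using a bounded queue for the synchronization; textureResIds and res are illustrative names, not from the original answer:

// The bounded queue is the synchronization: the loader blocks while one decoded
// bitmap is waiting, so at most ~two images are in memory at once.
final java.util.concurrent.BlockingQueue<Bitmap> pending =
        new java.util.concurrent.ArrayBlockingQueue<>(1);

// Loader thread: decode only, no GL calls.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            for (int resId : textureResIds) {
                pending.put(BitmapFactory.decodeResource(res, resId)); // blocks while the queue is full
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}).start();

// GL thread, e.g. at the top of onDrawFrame(): upload at most one bitmap per frame.
Bitmap bitmap = pending.poll();
if (bitmap != null) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
}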
"There has to be a way to call GL functions outside of the initialization function." - Yes. Just copy the pointer to gl and use it anywhere.
"Just be sure to only use OpenGL in the main thread." Very important. You cannot call in your Game Engine (which may be in another thread) a texture-loading function which is not synchronized with the gl-thread. Set there a flag to signal your gl-thread to load a new texture (for example, you can place a function in OnDrawFrame(GL gl) which checks if there must be a new texture loaded.
To add to Rodja's answer, if you want an OpenGL ES 2.0 context, then use the following to create the context:
final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
int[] contextAttributes = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
EGLContext renderContext = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, contextAttributes);
You still need to call setEGLContextClientVersion(2) as well, as that is also used by the default config chooser.
This is based on Attribute list in eglCreateContext
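Wiring both pieces onto the view might look like this (a sketch; the factory body is the createContext override shown earlier plus a matching destroy):

glSurfaceView.setEGLContextClientVersion(2);  // keeps the default config chooser consistent
glSurfaceView.setEGLContextFactory(new GLSurfaceView.EGLContextFactory() {
    @Override
    public EGLContext createContext(EGL10 egl, EGLDisplay display, EGLConfig eglConfig) {
        final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
        int[] attribs = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
        return egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, attribs);
    }

    @Override
    public void destroyContext(EGL10 egl, EGLDisplay display, EGLContext context) {
        egl.eglDestroyContext(display, context);
    }
});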
Found a solution for this, which is actually very easy: after you load the bitmap (on a separate thread), store it in an instance variable, and in the draw method check whether it's initialized; if yes, load the texture. Something like this:
if (bitmap != null && textureId == -1) {
    initTexture(gl, bitmap);
}
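For completeness, a sketch of the surrounding fields; initTexture() is the poster's own helper, and volatile is an assumption added here so the GL thread reliably sees the loader thread's write:

private volatile Bitmap bitmap;  // written by the loader thread, read by the GL thread
private int textureId = -1;      // touched only on the GL thread

@Override
public void onDrawFrame(GL10 gl) {
    if (bitmap != null && textureId == -1) {
        initTexture(gl, bitmap);  // uploads via glTexImage2D and sets textureId
    }
    // ... draw using textureId ...
}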