I'm tasked with converting an Android app from GLES1 to GLES3. The app does all its OpenGL calls in JNI, runs in a thread, and calls into Java to run basic functions like:
InitOpenGL();
MakeContextCurrent();
GLSwapBuffer();
The app maintains its own clock and so on, so the main loop of the app looks something like this (simplified):
JavaBridge->InitOpenGL();
while (stillRunning())
{
SleepUntilRightTime();
UpdateEverything();
if (shouldDraw())
{
JavaBridge->MakeContextCurrent();
DrawEverything();
JavaBridge->GLSwapBuffers();
}
}
So, to accomplish this, the app has its own OpenGL factory, which initializes an OpenGL ES 1.1 context.
I'll try to cut out everything unnecessary for brevity; here are the basics (all error checking removed to keep it short):
public class GLView extends SurfaceView implements SurfaceHolder.Callback
{
    EGL10 m_GL;
    EGLContext m_GLContext;
    EGLDisplay m_GLDisplay = null;
    EGLSurface m_GLSurface = null;
    EGLConfig m_GLConfig = null;

    public void InitOpenGL()
    {
        m_GL = (EGL10) EGLContext.getEGL();
        m_GLDisplay = m_GL.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        m_GL.eglInitialize(m_GLDisplay, null);
        EGLConfig[] configs = new EGLConfig[1];
        int[] config_count = new int[1];
        int[] specs = { EGL10.EGL_ALPHA_SIZE, 8, EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_STENCIL_SIZE, 8, EGL10.EGL_NONE };
        m_GL.eglChooseConfig(m_GLDisplay, specs, configs, 1, config_count);
        m_GLConfig = configs[0]; // keep the chosen config for context/surface creation
        m_GLContext = m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, null);
        SurfaceHolder h = getHolder();
        m_GLSurface = m_GL.eglCreateWindowSurface(m_GLDisplay, m_GLConfig, h, null);
        m_GL.eglMakeCurrent(m_GLDisplay, m_GLSurface, m_GLSurface, m_GLContext);
    }

    public void MakeContextCurrent()
    {
        m_GL.eglMakeCurrent(m_GLDisplay, m_GLSurface, m_GLSurface, m_GLContext);
    }

    public void SwapBuffers()
    {
        m_GL.eglSwapBuffers(m_GLDisplay, m_GLSurface);
    }
}
This all works beautifully: the app runs in its thread and paints the screen whenever it sees fit (it's a game, btw, which is why the constant loop).
Now: I was hoping that to turn this into an OpenGL ES 3.0 context, I'd just say "hey, request version number" or something like that. I've tried a few things without success (setting the version in an attrib list passed to eglCreateContext, making sure the right libraries are linked, fixing the manifest, etc., on and on), but no matter what I do, OpenGL calls in the JNI either do nothing or crash the program.
I can't even get glClear to work unless I revert everything back to square one. Does anyone have any advice on how to turn this thing into a 3.0-capable context?
Okay, I managed to figure this out. For anyone using modern Android, you'll find that EGL10.EGL_CONTEXT_CLIENT_VERSION is not defined. It seems like EGL_VERSION would be the substitute, but it's not.
Why isn't EGL_CONTEXT_CLIENT_VERSION defined? Is it deprecated? Is it shunned? We'll never know. But we DO know that if it WERE defined, it would be 0x3098.
So, making this all magically work was as simple as saying:
int[] attrib_list = new int[] { 0x3098, 3, EGL10.EGL_NONE };
m_GLContext = m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
Is it dangerous to do this? Probably. I'll do a little more research into it. If I never return to edit this, then I found no real answer yea or nay.
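For what it's worth, the magic number can be wrapped in a named constant so the intent survives in the code. This is a sketch with made-up class and method names; the values 0x3098 (EGL_CONTEXT_CLIENT_VERSION) and 0x3038 (EGL_NONE) are fixed by the EGL specification, so duplicating them locally is safe:

```java
public class EglAttribs {
    // EGL10 omits this constant, but its value is fixed by the EGL spec.
    public static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
    // Same value as EGL10.EGL_NONE; duplicated here so the class is self-contained.
    public static final int EGL_NONE = 0x3038;

    // Builds the attrib_list for eglCreateContext requesting the given ES version.
    public static int[] contextAttribs(int glesVersion) {
        return new int[] { EGL_CONTEXT_CLIENT_VERSION, glesVersion, EGL_NONE };
    }
}
```

With this, the call becomes m_GL.eglCreateContext(m_GLDisplay, m_GLConfig, EGL10.EGL_NO_CONTEXT, EglAttribs.contextAttribs(3)), which at least documents what the 0x3098 means. (On API 17+, android.opengl.EGL14.EGL_CONTEXT_CLIENT_VERSION provides the same constant officially.)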
Related
Previous versions of the Skobbler SDK allowed the developer to request a screenshot of the current mapView state by calling something like
mapView.requestScreenshot();
Followed by overriding a callback to listen for the bitmap result similar to:
@Override
public void onScreenshotReady(Bitmap bitmap) {
    // do something with passed bitmap
}
This code worked as expected in SDK version 2.5.1 but somewhere along the line the SDK appears to have changed such that this code no longer works. Instead now, when we receive the bitmap object, we see that the dimensions of the bitmap match those of the mapview, but the content is all just transparent pixels.
It's almost as if the bitmap was properly initialized with a transparent background with the correct width and height but when the surfaceview request to render to the bitmap was made internally, that portion failed.
Some additional details, the call to mapView.requestScreenshot() is made on the main UI thread, as is the handling of the callback onScreenshotReady().
Looking at the logs, the only output I see in between making the requestScreenshot() call and the calling of the callback is this possibly related error:
D/SKMapActivity: requesting screenshot
E/libEGL: call to OpenGL ES API with no current context (logged once per thread)
D/SKMapActivity: screenshot ready
Since I'm not sure of the internals of how the SKMapSurfaceView class requests the render, I'm not sure if there is an additional step I need to take to ensure that a current OpenGL ES context is in place when the screenshot request is made.
Does anyone have any thoughts on the matter? Thanks!
Keith
You could force it to use OpenGL ES version 2.0. Something like this:
EGL10 mEgl = (EGL10) EGLContext.getEGL();
int[] version = new int[2];
EGLDisplay display = mEgl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
boolean success = mEgl.eglInitialize(display, version);
int EGL_OPENGL_ES2_BIT = 4;
int[] configAttribs = { EGL10.EGL_RED_SIZE, 4, EGL10.EGL_GREEN_SIZE, 4, EGL10.EGL_BLUE_SIZE, 4,
        EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE };
EGLConfig[] configs = new EGLConfig[10];
int[] num_config = new int[1];
mEgl.eglChooseConfig(display, configAttribs, configs, 10, num_config);
Log.d("OpenGL", "eglInitialize success: " + String.valueOf(success));
EGLConfig eglConfig = this.mapView.chooseConfig(mEgl, display);
Target: Android API >=23, OpenGL ES 2.
The following code
private void deleteFBO()
{
    android.util.Log.e("FBO", "deleting " + mFramebufferID);
    int[] textureIds = new int[1];
    int[] mFBORenderToTexture = new int[1];
    textureIds[0] = mTextureID;
    mFBORenderToTexture[0] = mFramebufferID;
    if (GLES20.glGetError() != GLES20.GL_NO_ERROR)
        android.util.Log.e("FBO", "error before deleting");
    GLES20.glDeleteTextures(1, textureIds, 0);
    GLES20.glDeleteFramebuffers(1, mFBORenderToTexture, 0);
    if (GLES20.glGetError() != GLES20.GL_NO_ERROR)
        android.util.Log.e("FBO", "error after deleting");
}
doesn't give me any errors (i.e. I cannot see the 'error before/after deleting') even though it is for sure called from a thread which does NOT hold any OpenGL contexts.
How is that possible? Or maybe the glDelete() calls really DO fail, but my code fails to detect this?
It seems I don't understand WHICH OpenGL calls need to be made while holding the context. Certainly glDrawArrays gives me an error when I try to call it without holding the context, and I thought I needed to hold it in every single case, including the two glDelete*() calls above.
WHICH OpenGL calls need to be made when holding the context?
All of them. Which includes glGetError(). This means that your error checks themselves are invalid if there is no current context.
That said, I have found some claims that glGetError() returns GL_INVALID_OPERATION if there is no current context. But I have not been able to find that behavior defined in the spec. So until somebody shows me otherwise, I'll stick to my claim that calling glGetError() without a current context gives undefined (i.e. implementation-dependent) results.
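Since glGetError() itself cannot be trusted without a context, one way to catch this class of bug in development is app-side bookkeeping: record which thread last made the context current and assert before issuing GL calls. A sketch with made-up names (it is not an EGL query, it only knows what you tell it via the two callbacks):

```java
public class GlContextGuard {
    private volatile Thread glThread = null;

    // Call right after eglMakeCurrent(display, surface, surface, context) succeeds.
    public void onMakeCurrent() { glThread = Thread.currentThread(); }

    // Call right after unbinding with eglMakeCurrent(..., EGL_NO_CONTEXT).
    public void onRelease() { glThread = null; }

    public boolean isCurrentOnThisThread() {
        return Thread.currentThread() == glThread;
    }

    // Place before glDeleteTextures / glDeleteFramebuffers etc. in debug builds.
    public void check() {
        if (!isCurrentOnThisThread())
            throw new IllegalStateException("GL call without a current context on this thread");
    }
}
```

A guard like this fails loudly exactly where the silent no-op in the question would otherwise go unnoticed.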
I am porting my game to Android and decided to go with NativeActivity instead of a Java activity and JNI calls (I am not avoiding JNI; I just thought it would be more convenient to set up callbacks and OpenGL context creation/destruction purely in C/C++).
I know that GLSurfaceView has a setPreserveEGLContextOnPause function, but that is in Java, not in native app. I create my context with the following code:
EGLConfig config;
EGLSurface surface;
EGLContext context;
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, 0, 0);
eglChooseConfig(display, attribs, &config, 1, &numConfigs);
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(engine->app->window, 0, 0, format);
surface = eglCreateWindowSurface(display, config, engine->app->window, NULL);
const EGLint contextAttribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 2,
    EGL_NONE
};
context = eglCreateContext(display, config, NULL, contextAttribs);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
    ERR("Unable to eglMakeCurrent");
    return -1;
}
I also know that setPreserveEGLContextOnPause is not 100% reliable and I should check if stuff is destroyed manually, but if it's not - I'd like to skip the asset reloading part for the sake of faster loading.
Basically, what I want to do is use setPreserveEGLContextOnPause (or its equivalent in the NDK world). Is it possible? Is GLSurfaceView instantiated behind the curtains by Android's EGL calls?
GLSurfaceView is a Java-language utility class that sits on top of SurfaceView and GLES. Nothing is creating or calling into GLSurfaceView from EGL.
The "preserve EGL context" code in GLSurfaceView exists because GLSurfaceView does its own management of the EGL context on the render thread. The idea was to set things up so the app doesn't have to deal with it if it wants to use GLSurfaceView. If you want to do your own EGL management, don't use GLSurfaceView; when writing code in Java you'd use SurfaceView or TextureView instead.
You can see multiple examples in Grafika. The Java-language GLES implementation is a thin wrapper around the native implementation, so the way EGL is used in Grafika closely mirrors how you would use it in native code.
If you manage the EGL context yourself, it will not go away when Activities are torn down and recreated, but it will go away if the process is killed, so it's best to tie its creation and teardown to the activity's onPause() / onResume(). It's also bad form to continue holding contexts (and their associated textures and buffers) while the app is in the background. See this article on SurfaceView and Activity lifecycle interaction for some notes about working with surfaces. (And read the rest of the article if you'd like to understand how the Android graphics architecture works.)
So I have done a lot of looking around and the answer to this seems to be to use:
int[] maxSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
to detect the maximum texture size. Now my issue is: how do I create or get access to the gl variable that holds the function I need? Is it already there somewhere? I would like to support Android 2.2 and above, so the new 4.0+ trick won't work. If this is a repeat question, just point me in the right direction in the comments and I will take it down. I couldn't find a good explanation of how to set this up properly anywhere, just those two lines of code.
If you take a look at how OpenGL apps are made, you will notice there is the main app thread (main activity) and a renderer class (http://developer.android.com/guide/topics/graphics/opengl.html). The heart of the renderer class is the method public void onDrawFrame(GL10 gl), which is called by the Android infrastructure whenever the frame needs to be redrawn.
So basically, a context object (the GL10 gl variable) is passed to your renderer when required, and there you can check your max texture size.
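Once onSurfaceCreated or onDrawFrame has given you the limit via glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, ...), a small pure-Java helper (made up here for illustration) can pick a power-of-two downsample factor so a bitmap fits within it, e.g. to feed BitmapFactory.Options.inSampleSize:

```java
public class TextureLimits {
    // Smallest power-of-two sample size so both dimensions fit maxTextureSize.
    public static int sampleSizeFor(int width, int height, int maxTextureSize) {
        int sample = 1;
        while (width / sample > maxTextureSize || height / sample > maxTextureSize) {
            sample *= 2;
        }
        return sample;
    }
}
```

For example, a 4096x2048 image on a device reporting a 2048 limit would be decoded at sample size 2.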
I have a large amount of textures in JPG format.
And I need to preload them in opengl memory before the actual drawing starts.
I've asked a question and I've been told that the way to do this is to separate JPEG unpacking from glTexImage2D(...) calls to another thread.
The problem is I'm not quite sure how to do this.
The OpenGL handle needed to execute glTexImage2D is only available in GLSurfaceView.Renderer's onSurfaceCreated and onDrawFrame methods.
I can't unpack all my textures and then load them into OpenGL in onSurfaceCreated(...), because they probably won't fit in the VM's limited memory (20-40MB?).
That means I have to unpack and load them one by one, but in that case I can't get an OpenGL handle.
Could someone please give me an example of threaded texture loading for an OpenGL game?
It must be a typical procedure, but I can't find any info anywhere.
As explained in 'OpenGLES preloading textures in other thread', there are two separate steps: bitmap creation and bitmap upload. In most cases you should be fine just doing the bitmap creation on a secondary thread, which is fairly easy.
If you experience frame drops while uploading the textures, call texImage2D from a background thread. To do so you'll need to create a new OpenGL context that shares its textures with your rendering thread, because each thread needs its own OpenGL context.
EGLContext textureContext = egl.eglCreateContext(display, eglConfig, renderContext, null);
Getting the parameters for eglCreateContext is a little bit tricky. You need to use setEGLContextFactory on your SurfaceView to hook into the EGLContext creation:
@Override
public EGLContext createContext(final EGL10 egl, final EGLDisplay display, final EGLConfig eglConfig) {
    EGLContext renderContext = egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, null);
    // create your texture context here
    return renderContext;
}
Then you are ready to start a texture loading thread:
public void run() {
    int[] pbufferAttribs = { EGL10.EGL_WIDTH, 1, EGL10.EGL_HEIGHT, 1,
            EGL14.EGL_TEXTURE_TARGET, EGL14.EGL_NO_TEXTURE,
            EGL14.EGL_TEXTURE_FORMAT, EGL14.EGL_NO_TEXTURE,
            EGL10.EGL_NONE };
    EGLSurface localSurface = egl.eglCreatePbufferSurface(display, eglConfig, pbufferAttribs);
    egl.eglMakeCurrent(display, localSurface, localSurface, textureContext);
    int textureId = loadTexture(R.drawable.waterfalls);
    // here you can pass the textureId to your
    // render thread to be used with glBindTexture
}
I've created a working demonstration of the above code snippets at https://github.com/perpetual-mobile/SharedGLContextsTest.
This solution is based on many sources around the internet. The most influential ones were these three:
http://www.khronos.org/message_boards/showthread.php/9029-Loading-textures-in-a-background-thread-on-Android
http://www.khronos.org/message_boards/showthread.php/5843-Texture-Sharing
Why is eglMakeCurrent() failing with EGL_BAD_MATCH?
You just have your main thread with the uploading routine, that has access to OpenGL and calls glTexImage2D. The other thread loads (and decodes) the image from file to memory. While the secondary thread loads the next image, the main thread uploads the previously loaded image into the texture. So you only need memory for two images, the one currently loaded from file and the one currently uploaded into the GL (which is the one loaded previously). Of course you need a bit of synchronization, to prevent the loader thread from overwriting the memory, that the main thread currently sends to GL and to prevent the main thread from sending unfinished data.
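The two-image scheme described above can be sketched as a bounded handoff queue. In this sketch byte[] stands in for a decoded image, and the glTexImage2D upload would happen wherever the GL thread drains the queue; the class name is made up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TextureStream {
    // Capacity 1: at most one decoded image waits while another is being
    // uploaded, so memory holds two images at most, as described above.
    private final BlockingQueue<byte[]> pending = new ArrayBlockingQueue<>(1);

    // Loader thread: blocks while the queue is full, which is exactly the
    // synchronization that keeps it from overwriting data still being uploaded.
    public void offerDecoded(byte[] pixels) {
        try {
            pending.put(pixels);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // GL thread (e.g. at the top of onDrawFrame): returns null if nothing is
    // ready; otherwise the caller uploads the pixels with glTexImage2D.
    public byte[] pollDecoded() {
        return pending.poll();
    }
}
```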
"There has to be a way to call GL functions outside of the initialization function." - Yes. Just copy the pointer to gl and use it anywhere.
"Just be sure to only use OpenGL in the main thread." Very important. You cannot call in your Game Engine (which may be in another thread) a texture-loading function which is not synchronized with the gl-thread. Set there a flag to signal your gl-thread to load a new texture (for example, you can place a function in OnDrawFrame(GL gl) which checks if there must be a new texture loaded.
To add to Rodja's answer, if you want an OpenGL ES 2.0 context, then use the following to create the context:
final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
int[] contextAttributes =
{ EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
EGLContext renderContext = egl.eglCreateContext(
display, config, EGL10.EGL_NO_CONTEXT, contextAttributes);
You still need to call setEGLContextClientVersion(2) as well, as that is also used by the default config chooser.
This is based on Attribute list in eglCreateContext
Found a solution for this, which is actually very easy: after you load the bitmap (in a separate thread), store it in an instance variable; in the draw method, check whether it's initialized, and if so, load the texture. Something like this:
if (bitmap != null && textureId == -1) {
    initTexture(gl, bitmap);
}
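That flag check can be packaged into a tiny state holder. In this sketch Object stands in for Bitmap and upload() is a placeholder for the initTexture(gl, bitmap) call from the answer, so it stays runnable off-device; the names are made up:

```java
public class LazyTexture {
    private volatile Object pendingBitmap = null; // written by the loader thread
    private int textureId = -1;                   // owned by the GL thread

    // Loader thread: publish the decoded bitmap.
    public void onBitmapLoaded(Object bitmap) { pendingBitmap = bitmap; }

    // GL thread, called each frame: upload once when a bitmap is ready.
    public boolean uploadIfReady() {
        Object bmp = pendingBitmap;
        if (bmp != null && textureId == -1) {
            textureId = upload(bmp);  // initTexture(gl, bmp) would run here
            pendingBitmap = null;     // let the bitmap memory be reclaimed
            return true;
        }
        return false;
    }

    // Placeholder for glGenTextures + glTexImage2D; returns the texture name.
    protected int upload(Object bitmap) { return 1; }

    public int getTextureId() { return textureId; }
}
```

The volatile field gives the same safe publication the instance-variable trick relies on, and the textureId == -1 test ensures the upload happens exactly once on the GL thread.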