I have a large number of textures in JPG format, and I need to preload them into OpenGL memory before the actual drawing starts. I've asked a question before and was told that the way to do this is to separate the JPEG unpacking from the glTexImage2D(...) calls, moving the unpacking to another thread.
The problem is I'm not quite sure how to do this. The OpenGL handle needed to execute glTexImage2D is only available in GLSurfaceView.Renderer's onSurfaceCreated and onDrawFrame methods. I can't unpack all my textures first and then load them into OpenGL in onSurfaceCreated(...), because they probably won't fit in the VM's limited memory (20-40 MB?). That means I have to unpack and load them one by one, but in that case I can't get an OpenGL handle outside the renderer callbacks.
Could someone please give me an example of threading texture loading for an OpenGL game? It must be a fairly typical procedure, but I can't find any info anywhere.
As explained in 'OpenGLES preloading textures in other thread', there are two separate steps: bitmap creation and bitmap upload. In most cases you should be fine just doing the bitmap creation on a secondary thread, which is fairly easy.
If you experience frame drops while uploading the textures, call texImage2D from a background thread as well. To do so you'll need to create a new OpenGL context that shares its textures with your rendering thread, because each thread needs its own OpenGL context.
EGLContext textureContext = egl.eglCreateContext(display, eglConfig, renderContext, null);
Getting the parameters for eglCreateContext is a little bit tricky. You need to use setEGLContextFactory on your GLSurfaceView to hook into the EGLContext creation:
@Override
public EGLContext createContext(final EGL10 egl, final EGLDisplay display, final EGLConfig eglConfig) {
    EGLContext renderContext = egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, null);
    // create your texture context here, as shown with eglCreateContext above
    return renderContext;
}
Then you are ready to start a texture loading thread:
public void run() {
    int[] pbufferAttribs = { EGL10.EGL_WIDTH, 1, EGL10.EGL_HEIGHT, 1,
            EGL14.EGL_TEXTURE_TARGET, EGL14.EGL_NO_TEXTURE,
            EGL14.EGL_TEXTURE_FORMAT, EGL14.EGL_NO_TEXTURE,
            EGL10.EGL_NONE };
    EGLSurface localSurface = egl.eglCreatePbufferSurface(display, eglConfig, pbufferAttribs);
    egl.eglMakeCurrent(display, localSurface, localSurface, textureContext);

    int textureId = loadTexture(R.drawable.waterfalls);
    // here you can pass the textureId to your
    // render thread to be used with glBindTexture
}
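loadTexture itself isn't shown above. A minimal sketch of what it might do on the loader thread, assuming an upload via android.opengl.GLUtils and a Context field for resource access (names are illustrative; the actual demo code may differ):

private int loadTexture(int resourceId) {
    // Decode the JPG on this background thread.
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId);

    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    // The upload goes through this thread's context; because it shares
    // textures with the render context, the id is valid there too.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    return textures[0];
}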
I've created a working demonstration of the above code snippets at https://github.com/perpetual-mobile/SharedGLContextsTest.
This solution is based on many sources around the internet. The most influential ones were these three:
http://www.khronos.org/message_boards/showthread.php/9029-Loading-textures-in-a-background-thread-on-Android
http://www.khronos.org/message_boards/showthread.php/5843-Texture-Sharing
Why is eglMakeCurrent() failing with EGL_BAD_MATCH?
You just have your main thread with the uploading routine, which has access to OpenGL and calls glTexImage2D. The other thread loads (and decodes) the image from file into memory. While the secondary thread loads the next image, the main thread uploads the previously loaded image into the texture. So you only need memory for two images: the one currently being loaded from file and the one currently being uploaded to GL (which is the one loaded previously). Of course you need a bit of synchronization, to prevent the loader thread from overwriting the memory that the main thread is currently sending to GL, and to prevent the main thread from sending unfinished data.
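A minimal sketch of that scheme, assuming the GL thread drains a small queue at the start of each frame; the queue capacity of two bounds memory to the two images described above (all names here are illustrative):

// Shared between the two threads.
private final BlockingQueue<Bitmap> pending = new ArrayBlockingQueue<>(2);

// Loader thread: decodes ahead, blocking while the queue is full.
void decodeAll(List<Integer> resourceIds) throws InterruptedException {
    for (int id : resourceIds) {
        pending.put(BitmapFactory.decodeResource(resources, id));
    }
}

// GL thread, e.g. at the top of onDrawFrame: upload one image per frame.
void uploadPending() {
    Bitmap bitmap = pending.poll();
    if (bitmap != null) {
        // assumes the destination texture is already bound
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
}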
"There has to be a way to call GL functions outside of the initialization function." - Yes. Just copy the pointer to gl and use it anywhere.
"Just be sure to only use OpenGL in the main thread." Very important. You cannot call in your Game Engine (which may be in another thread) a texture-loading function which is not synchronized with the gl-thread. Set there a flag to signal your gl-thread to load a new texture (for example, you can place a function in OnDrawFrame(GL gl) which checks if there must be a new texture loaded.
To add to Rodja's answer, if you want an OpenGL ES 2.0 context, then use the following to create the context:
final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
int[] contextAttributes =
{ EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
EGLContext renderContext = egl.eglCreateContext(
display, config, EGL10.EGL_NO_CONTEXT, contextAttributes);
You still need to call setEGLContextClientVersion(2) as well, as that is also used by the default config chooser.
This is based on Attribute list in eglCreateContext
Found a solution for this, which is actually very easy: after you load the bitmap (in a separate thread), store it in an instance variable, and in the draw method check whether it's initialized; if yes, load the texture. Something like this:
if (bitmap != null && textureId == -1) {
    initTexture(gl, bitmap);
}
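initTexture isn't shown above either; a minimal sketch of what it could look like, using the GL10 handle passed into the draw method (assumed names):

private void initTexture(GL10 gl, Bitmap bitmap) {
    int[] ids = new int[1];
    gl.glGenTextures(1, ids, 0);
    textureId = ids[0]; // once set, the check above stops re-running the upload
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
}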
Related
I am aware that different versions of this question have been answered before, and I have read a lot of them. Yet none seems to help with my problem.
I am using SDL to create a context to hold OpenGL ES. If I make OpenGL ES calls from the thread that contains the current context I am able to render as normal. However if I make an OpenGL ES call from another thread I'm unable to do any rendering and I get the error message in the title.
E/libEGL: call to OpenGL ES API with no current context (logged once per thread)
I am using the Android project template provided in the SDL source package. It uses a class called SDLActivity to communicate between Android and the native C/C++ code.
The question is how can I create multiple SDL contexts so that I can still make OpenGL ES calls from another thread?
EDIT: Here is where I create the SDL context in my code:
int SDL_main(int argc, char **argv)
{
    window = nullptr;

    if (SDL_Init(SDL_INIT_EVERYTHING) < 0)
        return false;

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

    SDL_DisplayMode currentDisplayMode;
    int success = SDL_GetCurrentDisplayMode(0, &currentDisplayMode);
    assert(success == 0);

    WIDTH = currentDisplayMode.w;
    HEIGHT = currentDisplayMode.h;

    window = SDL_CreateWindow(Name.c_str(),
                              SDL_WINDOWPOS_CENTERED,
                              SDL_WINDOWPOS_CENTERED,
                              WIDTH, HEIGHT, SDL_WINDOW_OPENGL);
    assert(window != nullptr);

    SDL_SetHint("SDL_HINT_ORIENTATIONS", "LandscapeRight LandscapeLeft");

    context = SDL_GL_CreateContext(window);
    assert(context != nullptr);

    glGetError();

    // OpenGL configuration
    glViewport(0, 0, WIDTH, HEIGHT);
    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // initialise
    obj.functionA();
    obj.functionB();

    SDL_SetEventFilter(FilteringEvents, NULL);

    SDL_Event event;
    shouldQuit = false;
    while (!shouldQuit)
    {
        SDL_PollEvent(&event);

        // Render
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        obj.Render();

        SDL_GL_SwapWindow(window);
    }

    SDL_Quit();
    return 0;
}
For the sake of simplicity I have included only the OpenGL ES context creation via SDL and a few simple obj function calls, such as the call to render.
As you can see, the render call is made on the thread that owns the OpenGL context created by SDL. So how do I create a shared context using SDL, given that I get the error when I make OpenGL ES calls outside this thread? Note that I don't have this problem when I run the same code on my PC, so I'm sure it's an issue peculiar to Android.
Traditionally, OpenGL ES applications only render to one surface from one thread.
A rendering thread is one CPU thread associated with one graphics context. By default, each graphics context will not be able to access the resources (textures, shaders and vertex buffers) of another context. For this reason, shared contexts are used so one or more background loading threads can access the resources of a primary thread.
I think your problem is that the EGL rendering context isn't bound to the current rendering thread.
Take a look at eglCreateContext to see how to make a shared context, and at eglMakeCurrent to bind an EGL rendering context to the current rendering thread.
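For reference, a rough sketch of those two calls, shown here with Android's EGL14 Java API (the same sequence applies from C; display, config, and mainContext are assumed to exist already):

// On the loader thread: create a context in the same share group as
// mainContext, then bind it with a tiny pbuffer surface.
int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext loaderContext =
        EGL14.eglCreateContext(display, config, mainContext, ctxAttribs, 0);

int[] surfAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(display, config, surfAttribs, 0);

EGL14.eglMakeCurrent(display, pbuffer, pbuffer, loaderContext);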
Using SDL with OpenGL ES
This can be accomplished with an SDL_GetWMInfo() call to get the required handles, like:
SDL_SysWMinfo sysWmInfo;
SDL_VERSION(&sysWmInfo.version);
SDL_GetWMInfo(&sysWmInfo);
eglDisplay = eglGetDisplay((EGLNativeDisplayType)sysWmInfo.info.x11.display);
...
eglSurface = eglCreateWindowSurface(eglDisplay, eglConfig, (NativeWindowType)sysWmInfo.info.x11.window, NULL);
Alternatively, did you try using SDL_GL_MakeCurrent()?
(via Xamarin C#) On Android I am unable to create the framebuffer.
I have a subclassed Android.OpenGL.GLSurfaceView, set to GL ES 2, with preserve EGL Context, and my other GL commands are drawing successfully, with no GLErrors:
public class PaintingView : GLSurfaceView, GLSurfaceView.IRenderer
{
    public PaintingView( Context context, IAttributeSet attrs ) :
        base( context, attrs )
    {
        SetEGLContextClientVersion( 2 );
        // Set the Renderer for drawing on the GLSurfaceView
        SetRenderer( this );
        PreserveEGLContextOnPause = true;
    }

    public void OnSurfaceCreated( IGL10 gl, Javax.Microedition.Khronos.Egl.EGLConfig config )
    {
        ...
    }
}
OnSurfaceCreated does get called, and it invokes the following code on the UI thread via myActivity.RunOnUiThread( Action ), as part of creating a buffer of desired size for offscreen rendering:
frameBuffer = -123;   // Invalid value, so we can test whether it was set.
GL.GenFramebuffers( 1, out frameBuffer );
Shader.CheckGLError( "Offscreen.Resize 1" );   // My method that checks result of "GL.GetErrorCode()"
if (frameBuffer <= -1) {
    ... failed to assign an FBO!
}
Problem:
GenFramebuffers failed to assign an FBO - "frameBuffer" is still set to "-123".
Q: Why might this occur? Is there something I was supposed to do first? Is there a working Xamarin example of using an FBO on Android?
NOTES:
On iOS we successfully used code that created a second EGLContext and pbuffer for offscreen rendering, but after reading that switching between two EGLContexts has poor performance on Android, we are attempting an FBO approach for Android.
The problem seems to be specific to Android, as it involves setting up the FBO.
I've seen several java examples of using an FBO on Android, but have been unable to get past this first step, using Xamarin.
This isn't a Xamarin-specific issue. The problem is attempting to create and use an FBO at a time when the current (active) EGLContext is not able to handle GL calls (even though GLError did not report an error).
The code needs to be part of the draw cycle for the GL view that the FBO will be rendered to.
For an Android.OpenGL.GLSurfaceView, that means "during GLSurfaceView.IRenderer.OnDrawFrame( IGL10 gl )":
public class PaintingView : GLSurfaceView, GLSurfaceView.IRenderer
{
    ...

    public void OnDrawFrame( IGL10 gl )
    {
        Int32 frameBuffer = -123;   // Invalid value, so we can test whether it was set.
        GL.GenFramebuffers( 1, out frameBuffer );
        Shader.CheckGLError( "PaintingView.OnDrawFrame 1" );   // My method that checks result of "GL.GetErrorCode()"
        if (frameBuffer <= -1) {
            ...
        }
        ...
    }
}
Here, frameBuffer gets assigned "1" on the first call; on the second call it is "2", and so on. TBD whether these increasing values are a bad sign.
TBD whether we should create it once and re-use it in future calls, or recreate it each time.
TBD whether there is an earlier point in GLSurfaceView's lifecycle at which this can be initialized. If done too early, the view is not attached to the window, so there is no surface, and all GL calls fail because Android has not yet initialized an EGLContext for this view. In addition, it needs to be done on the GL rendering thread, which AFAIK is only accessible via OnDrawFrame.
I have an Android OpenGL ES application featuring two threads. Call Thread 1 the "display thread", which "blends" its current texture with a texture emanating from Thread 2, a.k.a. the "worker thread". Thread 2 performs off-screen rendering (render to texture), and then Thread 1 combines this texture with its own texture to generate the frame that is displayed to the user.
I have a working solution, but I know it is inefficient and am trying to improve upon it. In its onSurfaceCreated() method, Thread 1 creates two textures. Thread 2, in its draw method, does a glReadPixels() into a ByteBuffer (let's refer to it as bb). Thread 2 then signals to Thread 1 that a new frame is ready, at which point Thread 1 invokes glTexSubImage2D(bb) to update its texture with the new data from Thread 2 and proceeds with its "blending" in order to generate a new frame.
This architecture works better on some Android devices than others, and I have been able to garner a slight improvement in performance by using PBOs. But I figured that by using so-called "direct textures" via the EGLImage extension (https://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis) I would gain some benefit by removing the need for the costly glTexSubImage2D() call. Yes, I'd still have the glReadPixels() call, which still bothers me, but at least I should measure some improvement. In fact, at least on a Samsung Galaxy Tab S (Mali T628 GPU), my new code is dramatically slower than before! How can this be?
In the new code Thread 1 instantiates the EGLImage object using gralloc and proceeds to bind it to a texture:
// note gbuffer::create() is a wrapper around gralloc
buffer = gbuffer::create(width, height, gbuffer::FORMAT_RGBA_8888);
EGLClientBuffer anb = buffer->getNativeBuffer();
EGLImageKHR pEGLImage = _eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)anb, attrs);
glBindTexture(GL_TEXTURE_2D, texid); // texid from glGenTextures(...)
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, pEGLImage);
Then Thread 2, in its main loop, does its off-screen render-to-texture work and essentially pushes the data back over to Thread 1 via glReadPixels(), with the destination address being the backing storage behind the EGLImage:
void* vaddr = buffer->lock();
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, vaddr);
buffer->unlock();
How can this be slower than glReadPixels() into a ByteBuffer followed by glTexSubImage2D from that ByteBuffer? I'm also interested in alternative techniques, as I am not limited to OpenGL ES 2.0 and can use OpenGL ES 3.0. I have tried FBOs but ran into some issues.
In response to the first answer, I decided to take a stab at implementing a different approach: sharing the textures between Thread 1 and Thread 2. While I don't have the sharing part working yet, I do have Thread 1's EGLContext passed down to Thread 2's EGLContext so that, in theory, Thread 2 can share textures with Thread 1. With these changes in place, and with the glReadPixels() and glTexSubImage2D() calls still present, the app works but is far slower than before. Strange.
The other oddity I uncovered is the difference between javax.microedition.khronos.egl.EGLContext and android.opengl.EGLContext. GLSurfaceView exposes a method setEGLContextFactory() which allows me to pass Thread 1's EGLContext to Thread 2, as in the following:
public class Thread1SurfaceView extends GLSurfaceView {
    public Thread1SurfaceView(Context context) {
        super(context);

        // here is how I pass Thread 1's EGLContext to Thread 2
        setEGLContextFactory(new EGLContextFactory() {
            @Override
            public javax.microedition.khronos.egl.EGLContext createContext(
                    final javax.microedition.khronos.egl.EGL10 egl,
                    final javax.microedition.khronos.egl.EGLDisplay display,
                    final javax.microedition.khronos.egl.EGLConfig eglConfig) {
                // Configure context for OpenGL ES 3.0.
                int[] attrib_list = {EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL14.EGL_NONE};
                javax.microedition.khronos.egl.EGLContext renderContext =
                        egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
                mThread2 = new Thread2(renderContext);
                return renderContext;
            }
        });
    }
}
Previously I used classes from the EGL14 namespace, but since the GLSurfaceView interface relies on EGL10, I had to change the implementation for Thread 2: everywhere I used EGL14 I replaced it with javax.microedition.khronos.egl.EGL10. Then my shaders stopped compiling until I added the ES 3 client version to the attribute list. Now things work, albeit slower than before (but next I will remove the calls to glReadPixels and glTexSubImage2D).
My follow-on question is, is this the right way to handle the javax.microedition.khronos.egl.* versus android.opengl.* issue? Can I typecast javax.microedition.khronos.egl.EGL10 to android.opengl.EGL14, javax.microedition.khronos.egl.EGLDisplay to android.opengl.EGLDisplay, and javax.microedition.khronos.egl.EGLContext to android.opengl.EGLContext? What I have right now just seems ugly and doesn't feel right, although this proposed casting doesn't sit right either. Am I missing something?
Based on the description of what you're trying to do, both approaches sound way more complicated and inefficient than necessary.
The way I've always understood EGLImage, it's a mechanism for sharing images between different processes, and possibly different APIs.
For multiple OpenGL ES contexts in the same process, you can simply share the textures. All you need to do is make both contexts part of the same share group, and they can both use the same textures. In your use case, you can then have one thread rendering to a texture using an FBO, and then sample from it in the other thread. This way, there is no extra data copying, and your code should become much simpler.
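A sketch of the worker-thread side of that setup, assuming sharedTextureId was created in the share group (GLES20 calls; names are illustrative):

// One-time setup on the worker thread's context.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, sharedTextureId, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
        != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new IllegalStateException("FBO incomplete");
}
// Draw here; the other thread's context can then sample sharedTextureId.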
The only slightly tricky aspect is synchronization. ES 2.0 does not have synchronization mechanisms in the API. The best you can probably do is call glFinish() in one thread (e.g. after your thread 2 finishes rendering to the texture) and then use standard IPC mechanisms to signal the other thread. ES 3.0 has sync objects, which make this much more elegant.
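With ES 3.0 the hand-off could look roughly like this (GLES30; sketch only, and the fence handle still has to be passed between threads by your own means):

// Producer (thread 2), after rendering into the shared texture:
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLES30.glFlush(); // ensure the fence command is submitted

// Consumer (thread 1), before sampling the texture:
GLES30.glWaitSync(fence, 0, GLES30.GL_TIMEOUT_IGNORED);
GLES30.glDeleteSync(fence);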
My earlier answer here sketches some of the steps needed to create multiple contexts that are in the same share group: about opengles and texture on android. The key part of creating multiple contexts in the same share group is the 3rd argument to eglCreateContext, where you specify the context that you want to share objects with.
I'm having the following situation:
In a cross-platform rendering library for iOS and Android (written in C(++)), I have two threads that each need their own EGLContext:
Thread A is the main thread; it renders to the Window.
Thread B is a generator thread, that does various calculations and renders the results into textures that are later used by thread A.
Since I can't use EGL on iOS, the library uses function pointers to static Obj-C functions to create a new context and set it current.
This already works; I create the context for thread A using
EAGLContext *contextA = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
The context for thread B is created using
EAGLContext *contextB = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[contextA sharegroup]];
I can then set either of the two current:
[EAGLContext setCurrentContext:context];
To use the same logic (function pointers passed to the library) on Android, I want to do this on the C side of the JNI bindings, this time using real EGL instead of Apple's EAGL.
I can easily create contextA using a WindowSurface and the native window, and I can create contextB and pass contextA to the shareContext parameter of the eglCreateContext call.
But when I want to make contextB current, I have to pass a surface to the eglMakeCurrent call, and I'm trying to figure out what kind of surface to pass there.
I cannot use the WindowSurface i use for contextA as the spec says in section 3.7 that "At most one context for each supported client API may be current to a particular thread at a given time, and at most one context may be bound to a particular surface at a given time."
I cannot specify EGL_NO_SURFACE, because that would result in an EGL_BAD_MATCH error in the eglMakeCurrent call.
It seems I could use a PBuffer surface, but I hesitate because I'd have to specify the width and height when I create such a surface, and thread B might want to create textures of different sizes. In addition to that, the "OpenGL ES 2.0 Programming Guide" by Munshi, Ginsburg, and Shreiner states in section 3.8 that "Pbuffers are most often used for generating texture maps. If all you want to do is render to a texture, we recommend using framebuffer objects [...] instead of pbuffers because they are more efficient", which is exactly what I want to do in thread B.
I don't understand what Munshi, Ginsburg, and Shreiner mean by that sentence. How would a framebuffer object be a replacement for a pbuffer surface? What if I create a very small (say 1x1 px) pbuffer surface to make the context current - can I then still render into arbitrarily large FBOs? Are there any other possibilities I'm not yet aware of?
Thanks a lot for your help!
The surface you pass to eglMakeCurrent() must be an EGL surface from eglCreateWindowSurface(). For example:
EGLSurface EglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
mEgl.eglMakeCurrent(mEglDisplay, EglSurface, EglSurface, mEglContext);
But eglCreateWindowSurface() requires a SurfaceTexture, which is provided to the onSurfaceTextureAvailable() callback when a TextureView is created; you can also create off-screen SurfaceTextures without any View.
There is an example app that uses TextureView in the Android SDK here, although it uses the SurfaceTexture for camera video rather than OpenGL ES rendering:
sources\android-17\com\android\test\hwui\GLTextureViewActivity.java
By default, the EGL surface for FBOs will have the same size as the SurfaceTexture they were created from. You can change the size of a SurfaceTexture with:
surfaceTexture.setDefaultBufferSize(width, height);
Don't use pbuffers on Android because some platforms (Nvidia Tegra) do not support them.
This article explains the advantages of FBOs over pbuffers in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
I ended up using a PBuffer surface (sized 1x1); I then create an FBO and render into textures just fine. For displaying them (in a different thread and a different (shared) OpenGL context), I use a window surface with an ANativeWindow (there's a sample of that somewhere in the SDK).
If drawing to an FBO is the only thing you want to do, you can grab any EGLContext that was already created by you or someone else (e.g. by GLSurfaceView), make it current, and then just generate your FBO and draw with it.
The problem is how to share a context created, say, by GLSurfaceView with your cross-platform C++ library. I did this by calling a static function inside the C++ code to grab the EGL context and surface immediately after the context has been made current by the Java layer, like the code below:
// This is a GLSurfaceView renderer method
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // up to this point, we know the EGLContext
    // has already been set current on this thread.

    // call JNI function to set up the context
    graphics_library_setup_context();
}
The C++ counterpart:
void setup_context() {
    context = eglGetCurrentContext();
    display = eglGetCurrentDisplay();
    surface = eglGetCurrentSurface(EGL_DRAW);
}
I'm working on an app that needs to load textures for frame animations at certain times while it's executing. The rendering thread needs to keep running, so I need to load the textures on a background thread. Is there a way to do this on Android? I was able to on iOS by creating a separate OpenGL context on the other thread that used the same sharegroup, but I'm not sure whether there is a similar facility on Android.
Yes, you can share textures between contexts (as long as your driver supports it). Create your background loading context like this (meaning you want to share objects with rendering_context):
eglCreateContext(display, config, rendering_context, attrs);
Then after doing something like this in your background context:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(...);
You can then bind and use tex from your rendering context.
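For completeness, the consuming side is then just ordinary texture use; a sketch in Java (assuming tex was handed over after the loader thread called glFinish(), so the upload is complete before you sample it):

// Rendering thread, with the rendering context current:
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);
// ... draw with a sampler uniform set to texture unit 0 ...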