Android OpenGL texture loading thread

I'm working on an app that needs to load textures for frame animations at certain points while it's running. The rendering thread needs to keep running, so I need to load the textures on a background thread. Is there a way to do this on Android? On iOS I was able to do it by creating a separate OpenGL context on the other thread that used the same sharegroup, but I'm not sure whether there is a similar facility on Android.

Yes, you can share textures between contexts (as long as your driver supports it). Create your background loading context like this, passing rendering_context as the share_context argument (meaning you want to share objects with rendering_context):
eglCreateContext(display, config, rendering_context, attrs);
Then after doing something like this in your background context:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(...);
You can then bind and use tex from your rendering context.
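On Android in Java, the same flow might look roughly like the sketch below. This is only a sketch assuming an EGL14-based setup (API 17+): display, config, renderContext, and bitmap are placeholders for objects from your existing rendering code, and the 1x1 pbuffer exists only so the loader thread can make its context current.
// Create the loader context, sharing objects with the existing render context.
android.opengl.EGLContext loaderContext = EGL14.eglCreateContext(
        display, config, renderContext,
        new int[] { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE }, 0);

// On the loader thread: a tiny pbuffer just satisfies eglMakeCurrent.
int[] surfaceAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
android.opengl.EGLSurface loaderSurface =
        EGL14.eglCreatePbufferSurface(display, config, surfaceAttribs, 0);
EGL14.eglMakeCurrent(display, loaderSurface, loaderSurface, loaderContext);

// Upload the texture; after glFinish() the render thread can safely bind tex[0].
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
GLES20.glFinish();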

Related

glReadPixels to EGLImage direct texture slower than glReadPixels to ByteBuffer and glTexSubImage2D?

I have an Android OpenGL-ES application featuring two threads. Call Thread 1 the "display thread", which "blends" its current texture with a texture emanating from Thread 2, a.k.a. the "worker thread". Thread 2 performs off-screen rendering (render to texture), and then Thread 1 combines this texture with its own texture to generate the frame which is displayed to the user.
I have a working solution but I know it is inefficient and am trying to improve upon it. In its onSurfaceCreated() method, Thread 1 creates two textures. Thread 2, in its draw method, does a glReadPixels() into a ByteBuffer (let's refer to it as bb). Thread 2 then signals to Thread 1 that a new frame is ready, at which point Thread 1 invokes glTexSubImage2D(bb) to update its texture with the new data from Thread 2 and proceeds with its "blending" in order to generate a new frame.
This architecture works better on some Android devices than others, and I have been able to garner a slight improvement in performance by using PBOs. But I figured that by using so-called "direct textures" via the EGLImage extension (https://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis) I would gain some benefit by removing the need for the costly glTexSubImage2D() call. Yes, I'd still have the glReadPixels() call, which still bothers me, but at least I should measure some improvement. In fact, at least on a Samsung Galaxy Tab S (Mali T628 GPU), my new code is dramatically slower than before! How can this be?
In the new code Thread 1 instantiates the EGLImage object by using gralloc and proceeds to bind it to a texture:
// note gbuffer::create() is a wrapper around gralloc
buffer = gbuffer::create(width, height, gbuffer::FORMAT_RGBA_8888);
EGLClientBuffer anb = buffer->getNativeBuffer();
EGLImageKHR pEGLImage = _eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)anb, attrs);
glBindTexture(GL_TEXTURE_2D, texid); // texid from glGenTextures(...)
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, pEGLImage);
Then Thread 2, in its main loop, does its off-screen render-to-texture work and essentially pushes the data back over to Thread 1 via glReadPixels(), with the destination address being the backing storage behind the EGLImage:
void* vaddr = buffer->lock();
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, vaddr);
buffer->unlock();
How can this be slower than glReadPixels() into a ByteBuffer followed by glTexSubImage2D from the aforementioned ByteBuffer? I'm also interested in alternative techniques as I am not limited to OpenGL-ES 2.0 and can use OpenGL-ES 3.0. I have tried FBOs but ran into some issues.
In response to the first answer I decided to take a stab at implementing a different approach, namely sharing the textures between Thread 1 and Thread 2. While I don't have the sharing part working yet, I do have Thread 1's EGLContext passed down as the share context for Thread 2's EGLContext, so that, in theory, Thread 2 can share textures with Thread 1. With these changes in place, and with the glReadPixels() and glTexSubImage2D() calls remaining, the app works but is far slower than before. Strange.
The other oddity I uncovered is the difference between javax.microedition.khronos.egl.EGLContext and android.opengl.EGLContext. GLSurfaceView exposes an interface method setEGLContextFactory() which allows me to pass Thread 1's EGLContext to Thread 2, as in the following:
public class Thread1SurfaceView extends GLSurfaceView {
    public Thread1SurfaceView(Context context) {
        super(context);
        // here is how I pass Thread 1's EGLContext to Thread 2
        setEGLContextFactory(new EGLContextFactory() {
            @Override
            public javax.microedition.khronos.egl.EGLContext createContext(
                    final javax.microedition.khronos.egl.EGL10 egl,
                    final javax.microedition.khronos.egl.EGLDisplay display,
                    final javax.microedition.khronos.egl.EGLConfig eglConfig) {
                // Configure context for OpenGL ES 3.0.
                int[] attrib_list = {EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL14.EGL_NONE};
                javax.microedition.khronos.egl.EGLContext renderContext =
                        egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, attrib_list);
                mThread2 = new Thread2(renderContext);
                return renderContext;
            }
        });
    }
}
Previously, I used stuff out of the EGL14 namespace but since the interface for GLSurfaceView apparently relies on EGL10 stuff I had to change the implementation for Thread 2. Everywhere I used EGL14 I replaced with javax.microedition.khronos.egl.EGL10. Then my shaders stopped compiling until I added GLES3 to the attribute list. Now things work, albeit slower than before (but next I will remove the calls to glReadPixels and glTexSubImage2D).
My follow-on question is: is this the right way to handle the javax.microedition.khronos.egl.* versus android.opengl.* issue? Can I typecast javax.microedition.khronos.egl.EGL10 to android.opengl.EGL14, javax.microedition.khronos.egl.EGLDisplay to android.opengl.EGLDisplay, and javax.microedition.khronos.egl.EGLContext to android.opengl.EGLContext? What I have right now just seems ugly, but the proposed casting doesn't sit right either. Am I missing something?
Based on the description of what you're trying to do, both approaches sound way more complicated and inefficient than necessary.
The way I've always understood EGLImage, it's a mechanism for sharing images between different processes, and possibly different APIs.
For multiple OpenGL ES contexts in the same process, you can simply share the textures. All you need to do is make both contexts part of the same share group, and they can both use the same textures. In your use case, you can then have one thread rendering to a texture using an FBO, and then sample from it in the other thread. This way, there is no extra data copying, and your code should become much simpler.
The only slightly tricky aspect is synchronization. ES 2.0 does not have synchronization mechanisms in the API. The best you can probably do is call glFinish() in one thread (e.g. after your thread 2 finished rendering to the texture), and then use standard IPC mechanisms to signal the other thread. ES 3.0 has sync objects, which make this much more elegant.
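With ES 3.0, that hand-off can be expressed with fence syncs. Here is a minimal sketch using android.opengl.GLES30; how the fence handle gets passed between the two threads is up to whatever signalling you already use:
// Worker thread (thread 2), right after rendering into the texture:
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLES30.glFlush(); // make sure the fence command is submitted to the GPU
// ...hand 'fence' over to thread 1...

// Display thread (thread 1), before sampling the texture:
GLES30.glWaitSync(fence, 0, GLES30.GL_TIMEOUT_IGNORED); // server-side wait, does not block the CPU
GLES30.glDeleteSync(fence);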
My earlier answer here sketches some of the steps needed to create multiple contexts that are in the same share group: about opengles and texture on android. The key part of creating multiple contexts in the same share group is the 3rd argument to eglCreateContext, where you specify the context that you want to share objects with.

What Surface to use for eglMakeCurrent for context that only renders into FBOs

I have the following situation:
In a cross-platform rendering library for iOS and Android (written in C/C++), I have two threads that each need their own EGLContext:
Thread A is the main thread; it renders to the Window.
Thread B is a generator thread, that does various calculations and renders the results into textures that are later used by thread A.
Since I can't use EGL on iOS, the library uses function pointers to static Objective-C functions to create a new context and make it current.
This already works; I create the context for thread A using
EAGLContext *contextA = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
The context for thread B is created using
EAGLContext *contextB = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[contextA sharegroup]];
I can then set either of the two current:
[EAGLContext setCurrentContext:context];
To use the same logic (function pointers passed to the library) on Android, I want to do this on the C side of the JNI bindings, this time using real EGL instead of Apple's EAGL.
I can easily create contextA using a WindowSurface and the native Window, and I can create contextB by passing contextA to the share_context parameter of the eglCreateContext call.
But when I want to make contextB current, I have to pass a surface to the eglMakeCurrent call, and I'm trying to figure out what kind of surface to pass there.
I cannot use the WindowSurface i use for contextA as the spec says in section 3.7 that "At most one context for each supported client API may be current to a particular thread at a given time, and at most one context may be bound to a particular surface at a given time."
I cannot specify EGL_NO_SURFACE, because that would result in an EGL_BAD_MATCH error in the eglMakeCurrent call.
It seems I could use a pbuffer surface, but I hesitate because I'd have to specify the width and height when I create such a surface, and thread B might want to create textures of different sizes. In addition to that, the "OpenGL ES 2.0 Programming Guide" by Munshi, Ginsburg, and Shreiner states in section 3.8 that "Pbuffers are most often used for generating texture maps. If all you want to do is render to a texture, we recommend using framebuffer objects [...] instead of pbuffers because they are more efficient", which is exactly what I want to do in thread B.
I don't understand what Munshi, Ginsburg, and Shreiner mean by that sentence; how would a framebuffer object be a replacement for a pbuffer surface? What if I create a very small (say 1x1 px) pbuffer surface to make the context current - can I then still render into arbitrarily large FBOs? Are there any other possibilities I'm not yet aware of?
Thanks a lot for your help!
The surface you pass to eglMakeCurrent() must be an EGL surface from eglCreateWindowSurface(). For example:
EGLSurface EglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
mEgl.eglMakeCurrent(mEglDisplay, EglSurface, EglSurface, mEglContext);
However, eglCreateWindowSurface() requires a SurfaceTexture, which is provided to the onSurfaceTextureAvailable() callback when a TextureView is created; you can also create off-screen SurfaceTextures without any View.
There is an example app that uses TextureView in the Android SDK here, although it uses the SurfaceTexture for camera video rather than OpenGL ES rendering:
sources\android-17\com\android\test\hwui\GLTextureViewActivity.java
By default, the EGL surface for FBOs will have the same size as the SurfaceTexture they were created from. You can change the size of a SurfaceTexture with:
surfaceTexture.setDefaultBufferSize(width, height);
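For a purely off-screen setup, a rough sketch of creating such a SurfaceTexture without any View might look like the following. It reuses the mEgl, mEglDisplay, maEGLconfigs, and mEglContext names from the snippet above; consumerTexName is assumed to be a texture name generated on a thread that already has a GL context current, and width/height are whatever you need:
SurfaceTexture surfaceTexture = new SurfaceTexture(consumerTexName); // no View involved
surfaceTexture.setDefaultBufferSize(width, height);

EGLSurface eglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
mEgl.eglMakeCurrent(mEglDisplay, eglSurface, eglSurface, mEglContext);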
Don't use pbuffers on Android because some platforms (Nvidia Tegra) do not support them.
This article explains the advantages of FBOs over pbuffers in detail:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
I ended up using a pbuffer surface (sized 1x1); I then create an FBO and render into textures just fine. For displaying them (in a different thread and a different, shared OpenGL context), I use a window surface with an ANativeWindow (there's a sample of that somewhere in the SDK).
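The FBO part of that setup might look like the following GLES20 sketch; texWidth and texHeight are whatever the generator thread needs, independent of the 1x1 pbuffer that made the context current:
int[] tex = new int[1];
int[] fbo = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texWidth, texHeight,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
// ...render into the FBO here; tex[0] can then be sampled from the sharing context...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);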
If drawing to an FBO is the only thing you want to do, you can grab any EGLContext that has already been created by you or someone else (e.g. GLSurfaceView), make it current, and then generate your FBO and draw with it.
The problem is how to share a context created, say, by GLSurfaceView with your cross-platform C++ library. I did this by calling a static function inside the C++ code to grab the EGL context and surface immediately after the context has been made current by the Java layer, like the code below:
// This is a GLSurfaceView renderer method
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // up to this point, we know the EGLContext
    // has already been set current on this thread.
    // call JNI function to setup context
    graphics_library_setup_context();
}
The C++ counterpart:
void setup_context() {
    context = eglGetCurrentContext();
    display = eglGetCurrentDisplay();
    surface = eglGetCurrentSurface(EGL_DRAW);
}

How to detect if bitmap is too large for texture

So I have done a lot of looking around and the answer to this seems to be to use:
int[] maxSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
to detect the maximum texture size. Now my issue is: how do I create or get access to the gl var that holds the function I need? Is it already there somewhere? I would like to support Android 2.2 and above, so the new 4.0+ trick won't work. If this is a repeat question, just point me in the right direction in the comments and I will take it down. I couldn't find a good explanation of how to set this up properly anywhere, just those two lines of code.
If you take a look at how OpenGL apps are made, you will notice there is the main app thread (main activity) and a renderer class (http://developer.android.com/guide/topics/graphics/opengl.html). The heart of the renderer class is the method public void onDrawFrame(GL10 gl), which is called by the Android infrastructure whenever the frame needs to be redrawn.
So basically, a context object (the GL10 gl var) is passed to your renderer when required, and there you can check your max texture size.
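Put together, a minimal renderer that records the limit could look like this sketch (the class name is hypothetical; it uses javax.microedition.khronos.opengles.GL10 and javax.microedition.khronos.egl.EGLConfig):
public class MaxTextureSizeRenderer implements GLSurfaceView.Renderer {
    private int mMaxTextureSize;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        int[] maxSize = new int[1];
        gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
        mMaxTextureSize = maxSize[0]; // compare your bitmap's width/height against this
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override
    public void onDrawFrame(GL10 gl) { }
}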

How to load a lot of textures in AndEngine

I am trying to load textures as follows:
private Texture mTexture;
...
public Textures(final BaseGameActivity activity, final Engine engine) {
    this.mTexture = new Texture(2048, 1024,
            TextureOptions.BILINEAR_PREMULTIPLYALPHA);
    this.mBackgroundTextureRegion = TextureRegionFactory.createFromAsset(
            this.mTexture, activity, "img/back.png", 0, 0);
    this.mSwingBackTextureRegion = TextureRegionFactory.createFromAsset(
            this.mTexture, activity, "img/player.png", 836, 0);
    ...
I want to load more than 200 textures. However, the method I am currently using takes too long.
Are there faster methods to complete it?
I am working in GLES1.
The easiest way to do it is with Texture Packer, found here
This allows you to add multiple image files into one easy-to-load spritesheet. The engine loads this spritesheet into a texture and creates a class that lets you easily reference each image from that spritesheet. Turn 200 TextureRegions into 1 TexturePack.
I'm using GLES2 and I'm not sure where the source files are for GLES1. Poke around the forums and you should be able to find out how to use them. There has been plenty of talk about it.
There is a texture packer built into AndEngine which does this automagically. Try searching the AndEngine forum.
http://www.andengine.org/forums/

Threading the texture load process for an Android OpenGL game

I have a large number of textures in JPG format.
And I need to preload them into OpenGL memory before the actual drawing starts.
I've asked a question and was told that the way to do this is to separate the JPEG unpacking from the glTexImage2D(...) calls and move it to another thread.
The problem is I'm not quite sure how to do this.
The OpenGL (handle?) needed to execute glTexImage2D is only available in GLSurfaceView.Renderer's onSurfaceCreated and onDrawFrame methods.
I can't unpack all my textures and then load them into OpenGL in onSurfaceCreated(...),
because they probably won't fit in the VM's limited memory (20-40 MB?).
That means I have to unpack and load them one by one, but in that case I can't get an OpenGL pointer.
Could someone please give me an example of threaded texture loading for an OpenGL game?
It must be some typical procedure, and I can't find any info anywhere.
As explained in 'OpenGLES preloading textures in other thread' there are two separate steps: bitmap creation and bitmap upload. In most cases you should be fine by just doing the bitmap creation on a secondary thread --- which is fairly easy.
If you experience frame drops while uploading the textures, call texImage2D from a background thread. To do so you'll need to create a new OpenGL context which shares its textures with your rendering thread, because each thread needs its own OpenGL context.
EGLContext textureContext = egl.eglCreateContext(display, eglConfig, renderContext, null);
Getting the parameters for eglCreateContext is a little bit tricky. You need to use setEGLContextFactory on your SurfaceView to hook into the EGLContext creation:
@Override
public EGLContext createContext(final EGL10 egl, final EGLDisplay display, final EGLConfig eglConfig) {
    EGLContext renderContext = egl.eglCreateContext(display, eglConfig, EGL10.EGL_NO_CONTEXT, null);
    // create your texture context here
    return renderContext;
}
Then you are ready to start a texture loading thread:
public void run() {
    int[] pbufferAttribs = { EGL10.EGL_WIDTH, 1, EGL10.EGL_HEIGHT, 1, EGL14.EGL_TEXTURE_TARGET,
            EGL14.EGL_NO_TEXTURE, EGL14.EGL_TEXTURE_FORMAT, EGL14.EGL_NO_TEXTURE,
            EGL10.EGL_NONE };
    EGLSurface localSurface = egl.eglCreatePbufferSurface(display, eglConfig, pbufferAttribs);
    egl.eglMakeCurrent(display, localSurface, localSurface, textureContext);

    int textureId = loadTexture(R.drawable.waterfalls);
    // here you can pass the textureId to your
    // render thread to be used with glBindTexture
}
I've created a working demonstration of the above code snippets at https://github.com/perpetual-mobile/SharedGLContextsTest.
This solution is based on many sources around the internet. The most influential ones were these three:
http://www.khronos.org/message_boards/showthread.php/9029-Loading-textures-in-a-background-thread-on-Android
http://www.khronos.org/message_boards/showthread.php/5843-Texture-Sharing
Why is eglMakeCurrent() failing with EGL_BAD_MATCH?
You just have your main thread with the uploading routine, that has access to OpenGL and calls glTexImage2D. The other thread loads (and decodes) the image from file to memory. While the secondary thread loads the next image, the main thread uploads the previously loaded image into the texture. So you only need memory for two images, the one currently loaded from file and the one currently uploaded into the GL (which is the one loaded previously). Of course you need a bit of synchronization, to prevent the loader thread from overwriting the memory, that the main thread currently sends to GL and to prevent the main thread from sending unfinished data.
"There has to be a way to call GL functions outside of the initialization function." - Yes. Just copy the pointer to gl and use it anywhere.
"Just be sure to only use OpenGL in the main thread." Very important. You cannot call in your Game Engine (which may be in another thread) a texture-loading function which is not synchronized with the gl-thread. Set there a flag to signal your gl-thread to load a new texture (for example, you can place a function in OnDrawFrame(GL gl) which checks if there must be a new texture loaded.
To add to Rodja's answer, if you want an OpenGL ES 2.0 context, then use the following to create the context:
final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
int[] contextAttributes = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
EGLContext renderContext = egl.eglCreateContext(
        display, config, EGL10.EGL_NO_CONTEXT, contextAttributes);
You still need to call setEGLContextClientVersion(2) as well, as that is also used by the default config chooser.
This is based on Attribute list in eglCreateContext
Found a solution for this, which is actually very easy: after you load the bitmap (in a separate thread), store it in an instance variable, and in the draw method check whether it's initialized; if so, load the texture. Something like this:
if (bitmap != null && textureId == -1) {
    initTexture(gl, bitmap);
}
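A possible initTexture() to go with that check; this is a hypothetical helper in GL ES 1.x style to match the GL10 parameter, with GLUtils doing the bitmap upload:
private void initTexture(GL10 gl, Bitmap bitmap) {
    int[] names = new int[1];
    gl.glGenTextures(1, names, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, names[0]);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    textureId = names[0]; // the "textureId == -1" check above now passes
}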
