OpenGL ES 2.0 Load scene (shaders and textures) asynchronously - android

I'm writing an Android game that uses OpenGL ES 2.0. Suppose one scene is being drawn: how can I load another scene in the background and switch to it once it is ready? Loading the background scene requires loading textures, generating their IDs, and compiling GL programs (shaders). But if I just create a new Thread the game crashes, because the game entities don't have access to the GL context, and if I try to do it via GLSurfaceView.queueEvent I get the same error.
How can I do this? I haven't found anything on this topic.
P.S. Sorry for my English.

My advice is to keep the texture creation on the GL thread (the thread that owns the GL context).
Just do the image loading on a different thread.
Once the images are all loaded, notify the GL thread, so that it can do the actual creation of the GL resources.
The file I/O is probably slower than the actual GL texture creation anyway.
Also, for loading scenes, it pays to do the collision mesh generation on a helper thread, as that can be quite costly as well for large triangle meshes.
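A minimal sketch of that pattern, assuming the question's GLSurfaceView setup (the helper class and method names below are illustrative, not from either post): decode the Bitmap on a worker thread, then hand the result to the GL thread via queueEvent to create the texture.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.GLUtils;

// Hypothetical helper: decodes on a worker thread, creates the texture on the GL thread.
final class AsyncTextureLoader {
    static void load(final Context context, final GLSurfaceView view, final int resId) {
        new Thread(new Runnable() {
            @Override public void run() {
                // Slow part: file I/O and bitmap decoding, safe off the GL thread.
                final Bitmap bitmap =
                        BitmapFactory.decodeResource(context.getResources(), resId);

                // Fast part: queueEvent runs this on the render thread that owns the GL context.
                view.queueEvent(new Runnable() {
                    @Override public void run() {
                        int[] tex = new int[1];
                        GLES20.glGenTextures(1, tex, 0);
                        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
                        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
                        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
                        bitmap.recycle();
                        // Hand tex[0] to the scene here, e.g. mark the new scene as ready.
                    }
                });
            }
        }).start();
    }
}

queueEvent runs the Runnable on the GLSurfaceView's render thread, which is the thread that owns the GL context, so the texture creation happens where it is legal.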

Related

Android OpenGLES Separate Texture Loading Thread

I am using OpenGL ES on Android with a GLSurfaceView, and I want to be able to load textures on a thread separate from the render thread used by GLSurfaceView, since I load textures on demand rather than loading them all in onSurfaceCreated.
Currently I load bitmap files on a background thread; then, at the start of onDrawFrame, any pending bitmaps are uploaded with GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0). However, this can cause some laggy behaviour.
Based upon some other articles I believe that I need to create an off-screen surface using eglCreatePbufferSurface with an associated thread separate to the thread used in the GLSurfaceView for rendering, but with a shared context being the EGLContext used by the rendering thread. This would enable me to have a dedicated thread for uploading textures but the context would be shared so then the rendering thread could access the uploaded textures.
Is this the correct way to approach this problem or am I misunderstanding what is required?
I have seen this post, which describes a similar issue: Threading textures load process for android opengl game. Unfortunately it uses OpenGL ES 1.0 and the example project doesn't seem to work. I have tried to modify it for OpenGL ES 2.0, but I am unable to generate a texture, which suggests an issue with associating the EGLContext with the thread. I have also looked at the Grafika project, which has examples of creating a WindowSurface and an OffscreenSurface, but I haven't seen them used together, and I would probably need to migrate to using SurfaceView, which is fine to do if necessary.
The basic approach is to create a secondary EGLContext in the same sharegroup as the original application context. You can create and upload textures in the secondary context, and use them in the primary context.
Importantly, note that you will need synchronization between the contexts to ensure that the data upload to the texture has actually completed before the primary thread starts to use it. OpenGL ES provides automatic synchronization of the texture object state when bound, but not of the data the texture contains.
Most modern devices allow a surfaceless context, so you may not need to allocate a dummy Pbuffer surface for context creation - YMMV here.
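A rough sketch of that setup with the EGL14/GLES20 Java bindings, to be run on the dedicated upload thread (the display, config, and render-context handles are assumed to come from the renderer's existing EGL setup; they are placeholders here):

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;

// Sketch for the dedicated texture-upload thread. 'display', 'config' and
// 'renderContext' are placeholders assumed to come from the renderer's EGL setup.
final class UploadContextSketch {
    static void runUploads(EGLDisplay display, EGLConfig config, EGLContext renderContext) {
        int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        // Secondary context in the same share group as the renderer's context.
        EGLContext uploadContext =
                EGL14.eglCreateContext(display, config, renderContext, ctxAttribs, 0);

        // 1x1 pbuffer as a dummy draw/read surface; may be skippable on devices
        // that support surfaceless contexts.
        int[] pbufferAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
        EGLSurface pbuffer =
                EGL14.eglCreatePbufferSurface(display, config, pbufferAttribs, 0);

        EGL14.eglMakeCurrent(display, pbuffer, pbuffer, uploadContext);

        // ... glGenTextures / GLUtils.texImage2D uploads go here; the resulting
        // texture IDs are visible to the shared render context ...

        // Crude but portable synchronization: ensure the uploads have completed
        // before the render thread is told the texture IDs are usable.
        GLES20.glFinish();
    }
}

A fence sync (EGL_KHR_fence_sync, or glFenceSync on GLES 3) is the better way to do the final synchronization where available; glFinish() is shown only because it works everywhere.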

How to synchronize native OpenGL rendering when using Flutter's Texture widget on Android

I'm using a Texture widget, rendering its content from native code using OpenGL ES. In native code I call ANativeWindow_fromSurface and from that create an EGL surface. As I understand it, what happens is:
The ANativeWindow represents the producer side of a buffer queue.
Calling eglSwapBuffers causes a texture to be sent to this queue.
Flutter receives the texture and renders it using Skia when the TextureLayer is painted.
The texture is scaled to match the size of the TextureLayer (the scaling happens in AndroidExternalTextureGL::Paint()).
I'm trying to figure out how to synchronise the OpenGL rendering. I think I can use the choreographer to synchronise with the display vsync, but I'm unclear on how much latency this bufferqueue-then-render-with-skia mechanism introduces. I don't see any means to explicitly synchronise my native code's generation of textures with the TextureLayer's painting of them.
The scaling appears to be a particularly tricky aspect. I would like to avoid it entirely, by ensuring that the textures the native code generates are always of the right size. However there doesn't appear to be any direct link between the size of the TextureLayer and the size of the Surface/ANativeWindow. I could use a SizeChangedLayoutNotifier (or one of various alternative hacks) to detect changes in the size and communicate them to the native code, but I think this would lag by at least a frame so scaling would still take place when resizing.
I did find this issue, which talks about similar resizing challenges, but in the context of using an OEM web view. I don't understand Hixie's detailed proposal in that issue, but it appears to be specific to embedding of OEM views so I don't think it would help with my case.
Perhaps using a Texture widget here is the wrong approach. It seems to be designed mainly for displaying things like videos and camera previews. Is there another way to host natively rendered, interactive OpenGL graphics in Flutter?

Record frames displayed on TextureView to mp4

I managed to write a demo displaying a 3D model on a TextureView, and the model can move according to the phone's sensors. The 3D engine is written in C++, and what I need to do is hand the TextureView's SurfaceTexture to the 3D engine.
The engine calls ANativeWindow_fromSurface to retrieve a native window and draws the 3D model on it. The 3D engine is not the key point of this question.
Now I want to record the moving 3D model to a video. One way is to use a GL_TEXTURE_EXTERNAL_OES texture, just like Grafika: have the 3D engine draw frames to the OES texture and draw the texture content to the screen after every call to updateTexImage(). But due to some restrictions, I am not allowed to take this approach.
I plan to use the SurfaceTexture of the TextureView directly. I think functions such as attachToGLContext() and detachFromGLContext() will be useful for my work.
Could anyone give me some advice?
Grafika's "record GL app" has three different modes of operation:
1. Draw everything twice.
2. Render to an offscreen pbuffer, then blit that twice.
3. Draw once, then copy between framebuffers (requires GLES 3).
If you can configure the EGL surface that is rendered to, approaches 2 and 3 will work. For approach #3, bear in mind that the pixels don't go to the Surface (that's the Android Surface, not the EGL surface) until you call eglSwapBuffers().
If the engine code is managing the EGL surface and calling eglSwapBuffers() for you, then things are a bit more annoying. The SurfaceTexture attach/detach calls will let you access the GLES texture with the output from a different EGL context, but the render thread needs that when rendering the View UI. I'm not entirely sure how that's going to work out.
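For reference, approach #3's copy is a single blit between framebuffers. A hedged sketch with the GLES30 Java bindings (the framebuffer ID and dimensions are placeholders, and this is a simplified illustration rather than Grafika's exact code):

import android.opengl.GLES30;

// Illustration of the GLES 3 framebuffer-to-framebuffer copy used by approach #3.
// 'offscreenFbo' is a framebuffer the scene was already rendered into; 0 is the
// default framebuffer of whichever EGL surface is current (display or encoder input).
final class BlitSketch {
    static void blitToCurrentSurface(int offscreenFbo, int width, int height) {
        GLES30.glBindFramebuffer(GLES30.GL_READ_FRAMEBUFFER, offscreenFbo);
        GLES30.glBindFramebuffer(GLES30.GL_DRAW_FRAMEBUFFER, 0);
        GLES30.glBlitFramebuffer(
                0, 0, width, height,        // source rectangle
                0, 0, width, height,        // destination rectangle
                GLES30.GL_COLOR_BUFFER_BIT, // copy color only
                GLES30.GL_NEAREST);
        // The pixels still only reach the Android Surface once eglSwapBuffers() is called.
    }
}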

Which View is the best choice for Android camera preview?

As we know, we can choose TextureView, SurfaceView, or GLSurfaceView for an Android camera preview.
Which one is the best choice for camera preview? I'm focused on camera performance.
From a performance perspective, SurfaceView is the winner.
With SurfaceView, frames come from the camera and are forwarded to the system graphics compositor (SurfaceFlinger) with no copying. In most cases, any scaling will be done by the display processor rather than the GPU, which means that instead of scanning the pixels once for scaling and again for scan-out, they're only scanned once.
GLSurfaceView is a SurfaceView with some wrapper classes that handle EGL setup and thread management. You can't use OpenGL ES on a Surface that is receiving camera frames, so you're doing extra work with no benefit. (The overhead is minor one-time setup, not per-frame, so you likely won't be able to measure the difference.)
TextureView receives the frames in a SurfaceTexture as an "external" OpenGL ES texture, then uses GLES to render them onto the app's UI surface. The scaling and rendering are performed by the GPU, and the result is then forwarded to SurfaceFlinger. This is the slowest option, but also the most flexible of the Views.
If you'd like to learn more about how the system works, see the Android Graphics Architecture document.
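To make the zero-copy SurfaceView path concrete, here is a hedged sketch using the old android.hardware.Camera API (deprecated, but short; with Camera2 the idea is the same): hand the SurfaceHolder's Surface to the camera and let the frames flow to the compositor without touching the GPU in the app.

import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Sketch: camera frames go straight into the SurfaceView's Surface and on to
// SurfaceFlinger, with no GPU copy inside the app process.
final class PreviewStarter implements SurfaceHolder.Callback {
    private Camera camera;

    PreviewStarter(SurfaceView view) {
        view.getHolder().addCallback(this);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
            // Connect the producer (camera) directly to the consumer (the SurfaceView's Surface).
            camera.setPreviewDisplay(holder);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
        camera.startPreview();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        camera.stopPreview();
        camera.release();
    }
}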
" The SurfaceView creates a new window in the Android Windowsystem. Its advantage is, that if the SurfaceView gets refreshed, only this window will be refreshed. If you additionally update UI Elements (which are in another window of the windowsystem), then both refresh operations block themselfes (especially when ui drawing is hardwaresupported) because opengl cannot handle multi thread drawing properly.
For such a case it could be better using the TextureView, cause it's not another window of the Android Windowsystem. so if you refresh your View, all UI elements get refreshed as well. (Probably) everything in one Thread.
Hope I could help some of you! "
Source : stackoverflow.com
GLSurfaceView is a SurfaceView with a wrapper class that does all the EGL setup and inter-thread messaging for you.
It's completely up to you which one you use; they each have their pros and cons. :)

Difference between SurfaceView and GLSurfaceView in Android

Can anyone tell me what the basic difference is between SurfaceView and GLSurfaceView? When should I use SurfaceView, and when should I use GLSurfaceView?
I read some already answered questions on Stack Overflow, but they did not satisfy my queries.
Any help would be appreciated.
A GLSurfaceView is a SurfaceView that you can render into with OpenGL. Choosing between them is simple:
If you're familiar with OpenGL and need what it provides, use a GLSurfaceView.
Otherwise, use a SurfaceView.
OpenGL is low-level. If you're not already familiar with it, it's an undertaking to learn. If you only need 2D drawing, SurfaceView uses the high-level, reasonably high-performance Canvas. It's very easy to work with.
Unless you have a strong reason to use a GLSurfaceView, you should use a regular SurfaceView. I would suggest that if you don't already know that you need GL, then you probably don't.
GLSurfaceView is the primary building block for 3D applications, as View is for 2D applications. It is widely used not only in 3D games but also in multimedia applications, such as camera apps that create special preview effects.
GLSurfaceView extends SurfaceView and additionally owns a render thread and a render object set by the client. The render thread keeps running, continuously or on demand, and delegates to the render object to draw each frame using the OpenGL API. For both SurfaceView and GLSurfaceView, rendering is performed on a separate thread rather than the main thread. The difference is that with SurfaceView the rendering thread is created by the client, while with GLSurfaceView it is created by the framework. What's more, GLSurfaceView internally handles the synchronization between the main thread and the rendering thread.
For more, check out this and this
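As a small illustration of that division of labour (the renderer class below is generic, not from the linked material): the client supplies a Renderer object, and GLSurfaceView's own render thread invokes its callbacks.

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// The render object the client hands to GLSurfaceView. All three callbacks are
// invoked on the render thread that GLSurfaceView creates and manages.
class ClearRenderer implements GLSurfaceView.Renderer {
    @Override public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);
    }
    @Override public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }
    @Override public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
}

// Wiring in an Activity:
// GLSurfaceView view = new GLSurfaceView(this);
// view.setEGLContextClientVersion(2);
// view.setRenderer(new ClearRenderer());   // render thread starts here
// setContentView(view);

By default the render thread draws continuously; calling setRenderMode(RENDERMODE_WHEN_DIRTY) switches it to on-demand rendering driven by requestRender().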
SurfaceView
AFAIK, Canvas is simple to implement and effective for 2D drawing, but 3D drawing is not supported on it.
GLSurfaceView
If you want to build a 3D game then you should go with GLSurfaceView and OpenGL ES.
In my experience, if you just want to do 2D drawing then choose Canvas, because it's easier to implement and effective compared to GLSurfaceView.
