Android: Is it possible to create a SurfaceTexture without a SurfaceView?

I want to create a SurfaceTexture with an OpenGL texture that I can manage, in a setup similar to this answer. (Quoted here:)
Create a texture through OpenGL.
Pass this texture to the constructor of a new SurfaceTexture.
Give this new SurfaceTexture to the camera.
Make sure you are using the external OES texture target, GL_TEXTURE_EXTERNAL_OES (see the documentation for details).
However, creating an OpenGL texture (as in step 1 of the answer) requires an EGL context, which requires an EGLSurface to be made current, which in turn requires a SurfaceTexture. It seems the only way of creating the EGL context is to create a SurfaceView (or another view that has a SurfaceTexture), use it to initialise the EGLSurface, and then make the EGLContext current.
My objective is to create an EGLContext and make it current in a background thread, to do some offscreen computation on the camera preview image (mostly using the NDK). I want to create a library, and make it as independent of the UI as possible. Two related questions:
On the Java side, is it possible to create an EGLContext without having a SurfaceTexture created beforehand?
On the NDK side, there used to be a private API call to create native windows android_createDisplaySurface(), but it doesn't work anymore, and well, it's a private API. Is there any way of creating a surface with the NDK?
I'm quite new to using EGL, and I fail to understand why you need an EGLSurface for an EGLContext to be made current. In iOS, EAGLContexts can be created first, and then framebuffers can be created as needed. Using EGL it seems you always need a native window.

You can see a number of examples that manipulate Camera output, SurfaceTexture, and EGL in Grafika. The "Continuous capture" activity is one, but it uses the technique you mentioned: to avoid having to create an EGLSurface, it just borrows the one from the nearby SurfaceView.
You do need to have an EGLSurface, but it doesn't need to be a window surface. You can create a 1x1 pbuffer surface and just use that. This is done with the eglCreatePbufferSurface() call; see the EglCore class in Grafika for an example.
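A minimal sketch of that setup using EGL14, roughly what Grafika's EglCore does (error checking trimmed; all names here are illustrative):

```java
// Sketch: create an EGL context with a 1x1 pbuffer surface, no window needed.
// Assumes API level 17+ (android.opengl.EGL14).
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0],
        EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

int[] surfaceAttribs = { EGL14.EGL_WIDTH, 1, EGL14.EGL_HEIGHT, 1, EGL14.EGL_NONE };
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(display, configs[0],
        surfaceAttribs, 0);

EGL14.eglMakeCurrent(display, pbuffer, pbuffer, context);
// From here you can glGenTextures(), hand the texture id to a new
// SurfaceTexture, and give that to the camera.
```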
These examples are in Java, but the Java implementation just wraps the native EGL/GLES calls.
android_createDisplaySurface() is an internal call that, as you discovered, doesn't work on newer devices. Search the NDK for ANativeWindow instead.
Update: for anyone who gets here by searching, android_createDisplaySurface() relied on an internal class called FramebufferNativeWindow, which was marked obsolete in Android 5.0 in this change. The internal OpenGL ES test code that used it was updated with a SurfaceFlinger-based replacement in this change. The original version of the test code required shutting down the Android app framework so it could grab the frame buffer; the newer version just asks SurfaceFlinger for a window that covers the screen and is composited on top of everything else.


Android OpenGLES Separate Texture Loading Thread

I am using OpenGL ES on Android with a GLSurfaceView, and I want to be able to load textures on a thread separate from the render thread used by GLSurfaceView, since I want to load textures on demand rather than loading them all in onSurfaceCreated.
Currently I load bitmap files on a background thread, and then at the start of onDrawFrame, if there are any bitmaps to load, GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0) is called; however, this can cause some laggy behaviour.
Based upon some other articles I believe that I need to create an off-screen surface using eglCreatePbufferSurface with an associated thread separate to the thread used in the GLSurfaceView for rendering, but with a shared context being the EGLContext used by the rendering thread. This would enable me to have a dedicated thread for uploading textures but the context would be shared so then the rendering thread could access the uploaded textures.
Is this the correct way to approach this problem or am I misunderstanding what is required?
I have seen this post, which describes a similar issue: Threading textures load process for android opengl game but unfortunately it uses OpenGLES 1.0 and the example project doesn't seem to work. I have tried to modify it for OpenGLES 2.0 but I am unable to generate a texture, which suggests an issue with associating the EGLContext to the thread. I have also looked at the Grafika project, which has examples of creating a WindowSurface and an OffscreenSurface but I haven't seen these used together and probably need to migrate to using SurfaceView which if necessary is fine to do.
The basic approach is to create a secondary EGLContext in the same sharegroup as the original application context. You can create and upload textures in the secondary context, and use them in the primary context.
Importantly, note that you will need synchronization between the contexts to ensure that the data upload to the texture has actually completed before the primary thread starts to use it. OpenGL ES automatically synchronizes texture object state when the texture is bound, but not the data the texture contains.
Most modern devices allow a surfaceless context, so you may not need to allocate a dummy Pbuffer surface for context creation - YMMV here.
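A sketch of the sharegroup setup, assuming `display`, `config`, `contextAttribs`, and the renderer's `rendererContext` already exist (the fence calls need GLES 3.0; `glFinish()` is the GLES 2.0 fallback):

```java
// On the loader thread: create a context in the same sharegroup as the
// renderer's context, so texture objects are visible to both.
EGLContext loaderContext = EGL14.eglCreateContext(display, config,
        rendererContext /* sharegroup */, contextAttribs, 0);
// ... make it current with a 1x1 pbuffer, or EGL_NO_SURFACE on devices
// that support EGL_KHR_surfaceless_context ...

// Upload the texture on the loader thread:
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Synchronize before the render thread samples the texture. With GLES 3.0:
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLES30.glClientWaitSync(fence, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT,
        GLES30.GL_TIMEOUT_IGNORED);
GLES30.glDeleteSync(fence);
// On GLES 2.0-only devices, GLES20.glFinish() is the blunt but safe alternative.
```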

Record frames displayed on TextureView to mp4

I managed to write a demo displaying a 3D model on a TextureView, and the model can move according to the phone's sensors. The 3D engine is written in C++, and what I need to do is give the SurfaceTexture of the TextureView to the 3D engine.
The engine calls ANativeWindow_fromSurface to retrieve a native window and draws the 3D model on it. The 3D engine itself is not the key point of this question.
Now I want to record the moving 3D model to a video. One way is to use a GL_TEXTURE_EXTERNAL_OES texture, just like Grafika: make the 3D engine draw frames to the OES texture, and draw the texture content to the screen after every call of updateTexImage(). But due to some restrictions, I am not allowed to use this approach.
I plan to use the SurfaceTexture of the TextureView directly. I think functions such as attachToGLContext() and detachFromGLContext() will be useful for this.
Could anyone give me some advice?
Grafika's "record GL app" has three different modes of operation:
Draw everything twice.
Render to an offscreen pbuffer, then blit that twice.
Draw once, then copy between framebuffers (requires GLES 3).
If you can configure the EGL surface that is rendered to, approaches 2 and 3 will work. For approach #3, bear in mind that the pixels don't go to the Surface (that's the Android Surface, not the EGL surface) until you call eglSwapBuffers().
If the engine code is managing the EGL surface and calling eglSwapBuffers() for you, then things are a bit more annoying. The SurfaceTexture attach/detach calls will let you access the GLES texture with the output from a different EGL context, but the render thread needs that when rendering the View UI. I'm not entirely sure how that's going to work out.
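If you do try the attach/detach route, the hand-off looks roughly like this (a sketch only; the thread calling attachToGLContext() must have a current GLES context, and only one context may own the SurfaceTexture at a time):

```java
// Render-thread side: release the SurfaceTexture from its current GLES context.
surfaceTexture.detachFromGLContext();

// Recording-thread side (its own EGL context current):
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
surfaceTexture.attachToGLContext(tex[0]);
surfaceTexture.updateTexImage();       // latch the latest frame into tex[0]
// ... draw tex[0] as GL_TEXTURE_EXTERNAL_OES into the encoder's input surface ...
surfaceTexture.detachFromGLContext();  // hand it back when done
```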

Android MediaCodec/NdkMediaCodec GLES2 interop

We are trying to decode AVC/H.264 bitstreams using the new NdkMediaCodec API. While decoding works fine now, we are struggling to get the contents of the decoded video frame mapped to GLES2 for rendering.
The API allows passing an ANativeWindow at configuration time, but we want to control the scheduling of the video rendering and ultimately just provide N textures which are filled with the decoded frame data.
All attempts to map the memory returned by getOutputBuffer() to GLES via eglCreateImageKHR/external image failed. NdkMediaCodec seems to use libstagefright/OMX internally, so the output buffers are very likely allocated using gralloc - aren't they? Is there a way to get the gralloc handle/GraphicBuffer to bind the frame to EGL/GLES2?
Since there are lots of pixel formats for the media frame without any further documentation on their memory layout, it's hard to use NdkMediaCodec robustly.
Thanks a lot for any hints!
For general MediaCodec in java, create a SurfaceTexture for the GL ES texture you want to have the data in, then create a Surface out of this SurfaceTexture, and use this as target for the MediaCodec decoder. See http://bigflake.com/mediacodec/ (e.g. EncodeDecodeTest) for an example on doing this.
The SurfaceTexture and Surface classes aren't available directly in the NDK right now (as far as I know), though, so you'll need to call these via JNI. Then you can create an ANativeWindow from the Surface using ANativeWindow_fromSurface.
You're right that the output buffers are gralloc buffers, but since there's public APIs for doing this it's safer to rely on those than trying to take shortcuts.
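A rough Java sketch of that wiring (the decode loop and error handling are omitted; `texId` is assumed to be a GL_TEXTURE_EXTERNAL_OES texture created in your GLES context, and `format` a MediaFormat parsed from the stream):

```java
// Create a SurfaceTexture backed by your GLES texture, wrap it in a Surface,
// and give that Surface to the decoder as its output target.
SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
Surface outputSurface = new Surface(surfaceTexture);

MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, outputSurface, null, 0);
decoder.start();

// In the decode loop, release each output buffer with render=true:
//   decoder.releaseOutputBuffer(index, true);
// then wait for onFrameAvailable and, on the GLES thread, call:
surfaceTexture.updateTexImage();  // texId now holds the decoded frame
```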

Using OpenGL for Image Processing - Setting up an OpenGL context

I want to do image processing on a raw image without displaying it on screen (I want to do some calculations based on the image data and display some results on screen based on these calculations.) I found an interesting answer to this question, shown here:
Do your actual processing on the GPU: Set up an OpenGL context (OpenGL
ES 2 tutorial), and create a SurfaceTexture object in that context.
Then pass that object to setPreviewTexture, and start preview. Then,
in your OpenGL code, you can call SurfaceTexture.updateTexImage, and
the texture ID associated with the SurfaceTexture will be updated to
the latest preview frame from the camera. You can also read back the
RGB texture data to the CPU for further processing using glReadPixels,
if desired.
I have a question on how to go about implementing it though.
Do I need to create a GLSurfaceView and a Renderer? I don't actually want to use OpenGL to draw anything on screen, so I am not sure if I need them. From what I have read online, though, it seems essential to have these in order to set up an OpenGL context. Any pointers anybody can give me on this?
You don't have to use a GLSurfaceView. GLSurfaceView is a convenience class written purely in Java. It simplifies the setup part for applications that want to use OpenGL rendering in Android, but all of its functionality is also available through lower level interfaces in the Android frameworks.
For purely offscreen rendering, you can use the EGL interfaces to create contexts, surfaces, etc. Somewhat confusingly, there are two versions in completely different parts of the Android frameworks:
EGL10 and EGL11 in the javax.microedition.khronos.egl package, available since API level 1.
EGL14 in the android.opengl package, available since API level 17.
They are fairly similar, but the newer EGL14 obviously has some more features. If you're targeting at least API level 17, I would go with the newer version.
Using the methods in the EGL14 class, you can then create contexts, surfaces, etc. For offscreen rendering, one option is to create a Pbuffer surface for rendering. To complete the setup, you will typically use functions like:
eglInitialize
eglChooseConfig
eglCreatePbufferSurface
eglCreateContext
eglMakeCurrent
The Android documentation does not really describe these functions, but you can find the actual documentation at the Khronos web site: https://www.khronos.org/registry/egl/sdk/docs/man/.
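Once a context and pbuffer surface are current (via the calls listed above), the readback step the quoted answer mentions might look like this (a sketch; `width` and `height` are assumed to be your pbuffer dimensions):

```java
// Read RGBA pixels back from the current framebuffer to the CPU.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
pixels.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// pixels now holds the frame, bottom row first (GL's origin is bottom-left).
```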

Is there a way to share a texture between two contexts / threads using OpenGL on Android?

I want to have two threads. One thread writes into a texture using an FBO and the other uses it to render to the screen.
This can be done on windows etc, but how do I do it on Android?
I am using GL ES 2 and a TextureView.
I have read about the EGL image extensions but I cannot figure out how to use them:
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image.txt
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt
I read that the EGL image extensions are not fully supported on all platforms. Is it OK to use them on Android?
I cannot use something that is not assured to work properly.
This is what I read at this link:
http://www.khronos.org/message_boards/showthread.php/7319-EGLImage-on-Android-NDK
The EGL image extensions are not as necessary on Android now that the new TextureView class has been added with Android 4.0. Use TextureView to transfer texture images between OpenGL ES and the Canvas API.
I am using a TextureView. How do I use it to 'transfer texture images' ?
Also, I read somewhere that EGL defines textures as shared by default. What does this mean? How do I use the texture in a different context if it is already defined as shared?
I do not want to make the same context current in the other thread as I want the texture loading to be done without blocking the render to screen. Does this make sense?
I do not have much experience with OpenGL.
Apparently, Firefox uses the same approach I am trying to use:
http://snorp.net/2011/12/16/android-direct-texture.html
But I can't understand how I should do it.
I am using Java, not NDK.
You have described the way OpenGL ES works on Android by default. In other words, you use the SurfaceTexture provided by TextureView to render OpenGL ES from one thread. Android's SurfaceFlinger will then composite that SurfaceTexture to the display as part of its normal View compositing - on another thread.
The EGL Image extensions are for getting pointers to the surfaces, which requires native code and is unnecessary. Use TextureView instead. There is an example app that uses TextureView in the Android SDK (although it uses the SurfaceTexture for camera video rather than OpenGL ES rendering):
sources\android-17\com\android\test\hwui\GLTextureViewActivity.java
So, use the SurfaceTexture (which is provided to the onSurfaceTextureAvailable() callback when the TextureView is created) to create the EGL Surface with eglCreateWindowSurface(). That SurfaceTexture will be the target of your OpenGL ES rendering and it will be displayed in the associated TextureView.
EGLSurface EglSurface = mEgl.eglCreateWindowSurface(mEglDisplay, maEGLconfigs[0], surfaceTexture, null);
mEgl.eglMakeCurrent(mEglDisplay, EglSurface, EglSurface, mEglContext);
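Putting that together, the wiring typically lives in a SurfaceTextureListener (a sketch; mEgl, mEglDisplay, maEGLconfigs, and mEglContext are assumed to be set up as in the snippet above, and startRenderThread/stopRenderThread are hypothetical helpers):

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        // Create the EGL window surface against the TextureView's SurfaceTexture
        // and start rendering to it.
        startRenderThread(st, width, height);  // hypothetical helper
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        stopRenderThread();  // hypothetical helper
        return true;         // true = TextureView releases the SurfaceTexture
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) { }
});
```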
I think this article will help:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
