How can GLSurfaceView use my EGLDisplay, EGLContext and EGLSurface?

I want to render alternately to an EGLSurface created with eglCreateWindowSurface and to one created with eglCreatePbufferSurface, reusing the same EGLDisplay and EGLContext. I am using a GLSurfaceView for the case where I want the result to be visible to the user, but I don't know how to initialize it to use my EGLDisplay, EGLContext and EGLSurface. I wanted to use GLSurfaceView.EGLWindowSurfaceFactory, but its createWindowSurface override already receives those objects as input parameters, so I suppose they are already created by GLSurfaceView. How can it be done?

The whole point of GLSurfaceView is to manage things like that for you, so it's hard to make it do what you want.
One thing you can do is to wait until the GLSurfaceView is created and then create a second EGL context in a share group. This is a bit awkward but can be made to work. In many ways it's simpler to just switch to SurfaceView or TextureView and manage EGL and threading yourself.
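One way to wire up a share group is a sketch along these lines (an assumption-laden illustration, not Grafika code; sSharedContext is a hypothetical holder for the context you want to share with — you could equally let GLSurfaceView create its context first and share from the other side). GLSurfaceView lets you install a custom EGLContextFactory, as long as you do it before setRenderer():
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

class SharedContextFactory implements GLSurfaceView.EGLContextFactory {
    private static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;

    // Hypothetical: however you hold the context you want to share with.
    static EGLContext sSharedContext = EGL10.EGL_NO_CONTEXT;

    @Override
    public EGLContext createContext(EGL10 egl, EGLDisplay display, EGLConfig config) {
        int[] attribs = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
        // Passing sSharedContext as share_context puts the new context in
        // the same share group, so textures and buffers are visible to both.
        return egl.eglCreateContext(display, config, sSharedContext, attribs);
    }

    @Override
    public void destroyContext(EGL10 egl, EGLDisplay display, EGLContext context) {
        egl.eglDestroyContext(display, context);
    }
}

// Usage: glSurfaceView.setEGLContextFactory(new SharedContextFactory());
// This must be called before setRenderer().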
You can see various implementations in Grafika. "Show + capture camera" uses GLSurfaceView with a shared EGLContext, "Record GL app with FBO" uses SurfaceView, "Play movie (TextureView)" uses a TextureView, etc.

Related

How does "ImageReader.getSurface()" work?

I am working with the camera2 API in Android and am trying to understand this code I am using. Part of the code goes like this:
previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, 4);
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
// This adds another output surface but not sure where this surface comes from..
previewRequestBuilder.addTarget(previewReader.getSurface());
imageListener is an object of another class that implements android.media.ImageReader.OnImageAvailableListener, and backgroundHandler is a Handler running on a background thread. I am not including the code for these two, or for previewRequestBuilder, as they do not seem important for understanding my question.
I have searched extensively, but it seems as if some magic happens and previewReader finds some surface somewhere, somehow. According to the documentation, what getSurface() does is:
Get a Surface that can be used to produce Image for this ImageReader
Can anyone explain where it gets this?
That Surface belongs to the ImageReader; it was created in the native equivalent of the ImageReader's constructor, and is (effectively) an ImageReader private member, with a getter.
Here is the line in the native constructor that sets up the IGraphicBufferProducer (gbProducer), which is basically the native equivalent of a Surface.
Here is where you can see that the native code uses that same member to form the return value from getSurface()/nativeGetSurface() (you may have to trace through the code a bit, but it's all there).
So that's the literal answer to your question. But maybe you were asking because it isn't clear why the camera doesn't create the Surface itself and force you to hand it to the ImageReader instead: a Surface is a complex object (actually, a buffer queue) and shouldn't be thought of as a simple, pre-allocated bitmap. At the time the capture takes place, the camera pipeline will communicate with its output Surfaces and set up the correct dimensions, color planes, and so forth. (Note that you can add multiple targets via addTarget(); the camera can use each of them.) All the camera needs to know is where it's going to send its output; it doesn't need to create the output Surfaces itself.
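To make that concrete, here is a hedged sketch of a request that sends each frame to two output Surfaces (previewSurface, cameraDevice, and captureSession are assumed to already exist; exception handling is elided):
CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);              // e.g. a SurfaceView's Surface
builder.addTarget(previewReader.getSurface());  // the ImageReader's own Surface
captureSession.setRepeatingRequest(builder.build(), null, backgroundHandler);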

How can Vulkan interact with an Android Java Activity?

Currently, Android Vulkan only supports NativeActivity, but is there any way to use a Java Activity with a SurfaceView (or any other view) and drop into native code through JNI to get the native window handle?
I tried looking around and linking up my SurfaceView, but it didn't work for me; any sample code or example would be appreciated.
I don't know of any sample code off-hand, but if you have a SurfaceView you can get the Surface from it, and from that you can get (in C) the ANativeWindow for creating the VkSurfaceKHR/VkSwapchainKHR. The sequence is something like:
Java: surface = surfaceView.getHolder().getSurface();
Pass surface to a JNI call into C as a jobject.
C: window = ANativeWindow_fromSurface(env, jsurface);
That function is declared in the NDK android/native_window_jni.h header.
You'll want to register callbacks with the SurfaceView's SurfaceHolder and manage the window lifecycle (which is tied to the Activity lifecycle) correctly.
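A hedged Java-side sketch of that sequence (the class, native method, and library name are made up for illustration; the C side would call ANativeWindow_fromSurface on the jobject it receives):
import android.content.Context;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class VulkanSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    static { System.loadLibrary("vulkan_app"); }  // hypothetical .so name

    // Implemented in C; calls ANativeWindow_fromSurface(env, surface) there.
    private native void nativeSetSurface(Surface surface);

    public VulkanSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // The ANativeWindow only becomes valid here; create the
        // VkSurfaceKHR/VkSwapchainKHR on the native side now.
        nativeSetSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Recreate the swapchain if the size changed (elided).
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // The window is going away; tear down the swapchain first.
        nativeSetSurface(null);
    }
}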

Why will the OpenGL ES texture object be deleted as a result of a call to SurfaceTexture.detachFromGLContext?

I am referring to SurfaceTexture.detachFromGLContext, whose documentation says: "The OpenGL ES texture object will be deleted as a result of this call."
Can anyone explain why the framework needs to delete this texture object when detach is called? The texture is created outside the SurfaceTexture and provided to its constructor, so I would expect to be able to use the texture even after detaching, with its creator controlling its life cycle.
We are trying to use this in combination with the attach method and MediaCodec. In our use case, we need to keep a copy of the video frame texture for future use.
The following is the sample code to create the SurfaceTexture:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
SurfaceTexture surfaceTexture = new SurfaceTexture(textures[0]);
Then a Surface is created and passed to MediaCodec, as in the following code sample:
Surface surface = new Surface(surfaceTexture);
videoDecoder.configure(..., surface, null, 0);
While decoding, once a video frame has been decoded into the texture, I am trying to keep that texture and assign a different texture to the SurfaceTexture, so that I can use the old one later while the decoder decodes another frame into the new texture.
surfaceTexture.detachFromGLContext();
surfaceTexture.attachToGLContext(newTexture);
But the issue is that the Android framework deletes the texture. My point is that either this is a bug, or there should be a reason to delete the texture.
Each OpenGL object is bound to a context, so when it is detached it must also be deleted.
The context is responsible for creating buffers and handling state, which are bound to some ID or enumeration (the same thing, but an enumeration is predefined). So when a context is deleted, all of its objects are deleted as well. If you have two contexts, none of their states or buffers are connected in any way, and any call you make to OpenGL will only affect the currently bound context. So detaching the object means deleting it; the only connection the SurfaceTexture has is the texture ID, which is then not usable anywhere else. It might be confusing, but think of it this way: if you deleted the connection between your CPU object (the SurfaceTexture) and the GPU data (the texture), then either the texture should be deleted, or the texture data becomes garbage and you have a memory leak.
It is hard to understand why you would even want to detach the texture from the context, but if you need to move it out of the context for some reason, you will need another context that is the actual owner of the texture. For this you will need a shared context, which means creating a new context with the main one as an argument in the constructor. This way your main context can access the textures owned by the shared context, and the other way around. This will also allow you to use multiple threads, since the "current context" is per thread.
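For reference, a minimal sketch of creating such a shared context with android.opengl.EGL14 (eglDisplay, eglConfig, and firstContext are assumed to be set up already); passing the existing context as share_context puts both contexts in one share group:
int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext secondContext = EGL14.eglCreateContext(
        eglDisplay, eglConfig, firstContext, attribs, 0);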
"The OpenGL ES texture object will be deleted as a result of this call."
Note that for external surfaces, deleting the texture object is not the same as deleting the physical memory; the physical external memory buffers are not owned by the GL, and will still be allocated and available for reuse (you'll just need to create a new GL texture object to wrap them again if you reimport into GL).
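In practice, that means a detach followed by a re-attach with a fresh texture name should work, along these lines (a sketch; it assumes a GL context is current on the calling thread):
surfaceTexture.detachFromGLContext();     // the old texture name is deleted here
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);          // obtain an unused name for the re-wrap
surfaceTexture.attachToGLContext(tex[0]); // wraps the same underlying buffers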

Where should I initialize resources for an Android Live Wallpaper

I am developing a live wallpaper for Android. The wallpaper allocates some resources such as background bitmaps, sprites, textures, etc.
The question is: where should I allocate and initialize all of the resources? Should I allocate them in the constructor of my WallpaperService.Engine-derived object, or in its onCreate(SurfaceHolder surfaceHolder) method?
Short answer: use the onCreate(SurfaceHolder) method of the Engine (or the WallpaperService lifecycle callbacks), since explaining the whole procedure here would be pretty long.
Instead, here are some nice tutorials; just follow them:
http://www.rajeeshcv.com/post/details/36/create-a-live-aquarium-wallpaper-in-android
Another one
http://learnandroideasily.blogspot.ae/2013/07/android-livewallpaer-tutorial.html
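As a rough sketch of where that allocation would go (the class and resource names here are hypothetical):
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class MyWallpaperService extends WallpaperService {
    @Override
    public Engine onCreateEngine() {
        return new MyEngine();
    }

    class MyEngine extends Engine {
        private Bitmap background;

        @Override
        public void onCreate(SurfaceHolder surfaceHolder) {
            super.onCreate(surfaceHolder);
            // Allocate bitmaps, sprites, etc. here, once the engine is attached.
            background = BitmapFactory.decodeResource(
                    getResources(), R.drawable.background);  // hypothetical resource
        }

        @Override
        public void onDestroy() {
            if (background != null) {
                background.recycle();
                background = null;
            }
            super.onDestroy();
        }
    }
}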

Loading textures at random place

I'm trying to load all the game data at once, in my game scene's constructor. But this fails, because texture loading only works in an OpenGL context, i.e. when the load method is called from onDrawFrame or onSurfaceChanged. But I think it's ugly to load textures when onDrawFrame is first called, or something similar. So is it possible to somehow separate my loading code from the OpenGL functions?
I have exactly the same problem.
My solution is to use proxy textures. That means that when you create a texture from data in memory or from a file, you actually create a dummy texture that holds a copy of that memory data or the file path (you can preload the data into memory for faster loading).
Then, the next time my renderer calls bind() (which is something like glBindTexture), I check whether there is data to load, and if there is, I create the real texture and upload the data.
This approach suits me best, because in my case textures can be created from any thread, at any time.
But if you want to preload all textures, you can just do that in onSurfaceCreated or onSurfaceChanged.
The same applies to buffers.
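A minimal sketch of that proxy idea (the names are illustrative): the object can be created on any thread, but the real GL texture is created lazily on the first bind(), which runs on the GL thread.
import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

class ProxyTexture {
    private Bitmap pending;  // data held until a GL context is current
    private int textureId;   // 0 until the real texture exists

    ProxyTexture(Bitmap data) {
        this.pending = data;
    }

    // Call from the GL thread, e.g. inside onDrawFrame.
    void bind() {
        if (pending != null) {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);
            textureId = ids[0];
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, pending, 0);
            pending.recycle();
            pending = null;
        }
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    }
}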
Another approach is to use a native activity (check the NDK examples). In that case you can handle the context manually, but it requires API level 9.
But I think it's ugly to load textures when onDrawFrame is first called, or something similar.
Actually, deferred texture loading is among the most elegant methods. It's one of the key ingredients for games that let you travel the world without interruptions from loading screens. Just don't try to load the whole world at once; load things as soon as they are about to become visible. Use Pixel Buffer Objects to do the transfers asynchronously.
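A hedged sketch of the PBO route with android.opengl.GLES30 (ES 3.0+ only; width, height, textureId, and pixelData are assumed to exist, and textureId must already have storage allocated via glTexImage2D): the final glTexSubImage2D sources from the bound unpack buffer rather than client memory, so the driver can complete the upload asynchronously.
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_UNPACK_BUFFER, pbo[0]);
int sizeInBytes = width * height * 4;  // RGBA8
GLES30.glBufferData(GLES30.GL_PIXEL_UNPACK_BUFFER, sizeInBytes, null,
        GLES30.GL_STREAM_DRAW);
java.nio.ByteBuffer mapped = (java.nio.ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_UNPACK_BUFFER, 0, sizeInBytes, GLES30.GL_MAP_WRITE_BIT);
mapped.put(pixelData);  // pixelData: decoded RGBA bytes (assumption)
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_UNPACK_BUFFER);
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
GLES30.glTexSubImage2D(GLES30.GL_TEXTURE_2D, 0, 0, 0, width, height,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);  // sourced from the PBO
GLES30.glBindBuffer(GLES30.GL_PIXEL_UNPACK_BUFFER, 0);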
