Resize Surface When Resizing VirtualDisplay - android

A while ago I asked this question, which received an answer.
I have implemented an intermediary Surface as the answer suggested, but now I've run into another problem. At certain points during my application's lifetime, my VirtualDisplay can change resolution, so I'd like to also update the size of my intermediary Surface to match. I was hoping this would be a simple call to setDefaultBufferSize on the Surface's underlying SurfaceTexture, but that doesn't appear to work.
I've poked around at releasing my intermediary Surface and SurfaceTexture and making new ones, but then I have to set the output surface for the VirtualDisplay to be null and do some other synchronization steps which I'd like to avoid if possible.
Is there a way to dynamically update the size of a Surface/SurfaceTexture after creation?
UPDATE:
I've tried calling VirtualDisplay.setSurface(null) along with VirtualDisplay.resize(newSize.width, newSize.height), then sending a message to the thread that handles the callbacks for the intermediary SurfaceTexture to resize the texture via setDefaultBufferSize, then having the main thread poll the secondary thread until that call has finished, and finally calling VirtualDisplay.setSurface(surfaceFromSecondaryThread), roughly as sketched below.
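For reference, a minimal sketch of that sequence (newWidth/newHeight, densityDpi, and the cross-thread signaling are placeholders; note the real resize() also takes a density argument):

    // Main thread: detach the output and resize the display.
    virtualDisplay.setSurface(null);
    virtualDisplay.resize(newWidth, newHeight, densityDpi);

    // SurfaceTexture callback thread: resize the buffer queue to match.
    surfaceTexture.setDefaultBufferSize(newWidth, newHeight);

    // Main thread again, once the callback thread signals completion:
    virtualDisplay.setSurface(intermediateSurface);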
This works sometimes. Other times the texture is all green with a gray bar across it (green is also my glClearColor, though I'm not sure whether that's related). Sometimes the current screen image appears duplicated/smaller in my VirtualDisplay. So it seems like a timing issue, but which timing I should wait for, I'm unsure. The documentation for setDefaultBufferSize states:
For OpenGL ES, the EGLSurface should be destroyed (via eglDestroySurface), made not-current (via eglMakeCurrent), and then recreated (via eglCreateWindowSurface) to ensure that the new default size has taken effect.
The problem is that my code does not create an EGLSurface from the SurfaceTexture/Surface, so I have no way of destroying it. I'm assuming that the producer (VirtualDisplay) does, but there are no public APIs for me to get at that EGLSurface.
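For completeness, if you did own the EGLSurface, the documented destroy/recreate dance would look roughly like this (an EGL14 sketch; eglDisplay, eglConfig, eglContext, and the producer-side surface are assumed to exist):

    // Make the surface not-current and destroy it...
    EGL14.eglMakeCurrent(eglDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
            EGL14.EGL_NO_CONTEXT);
    EGL14.eglDestroySurface(eglDisplay, eglSurface);
    // ...then recreate it so the new default buffer size takes effect.
    int[] surfaceAttribs = { EGL14.EGL_NONE };
    eglSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig, surface,
            surfaceAttribs, 0);
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);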
UPDATE 2:
So, when I see the problem (green screen with the bar, corruption, perhaps because my glClearColor is green), a glReadPixels done before I call eglSwapBuffers to write to the Surface for the MediaCodec reads back green pixels. This tells me it isn't a MediaCodec problem: either the information written to the Surface by the VirtualDisplay is corrupt (and remains corrupt), or the YUV-to-RGBA conversion that happens when going from Surface to OpenGL texture is broken somehow. I'm leaning towards a problem with VirtualDisplay.
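The check itself is only a few lines; a hypothetical version, reading one pixel right before presenting to the encoder surface:

    // Sample a pixel from the framebuffer before eglSwapBuffers().
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    Log.d(TAG, "RGBA = " + (pixel.get(0) & 0xff) + " " + (pixel.get(1) & 0xff)
            + " " + (pixel.get(2) & 0xff) + " " + (pixel.get(3) & 0xff));
    EGL14.eglSwapBuffers(eglDisplay, encoderSurface);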

Related

Is it possible to take Screenshot of GLSurfaceView from another thread in android?

I am trying to capture a screenshot of my Android application from the onDrawFrame function of my Renderer, but it drops frames and freezes the application until the process is complete. I have tried doing it on a different thread, with no success. Is it possible to take the screenshot on a different thread?
You can't perform the glReadPixels() from another thread, because the EGL context can only be current in one thread at a time, and you have no control over it with GLSurfaceView.
However, 95+% of the time spent grabbing a screen shot from GLES is spent on PNG/JPEG compression and disk I/O. So if you make the glReadPixels() call from onDrawFrame(), and then hand the data off to a new thread, you should be able to continue running while the screen shot is processed in the background.
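A minimal sketch of that split (GLES2; width/height, the screenshotRequested flag, drawScene(), and saveAsPng() are all hypothetical names):

    @Override
    public void onDrawFrame(GL10 gl) {
        drawScene();
        if (screenshotRequested) {
            screenshotRequested = false;
            // The read must happen on the GL thread; it is the only part
            // that blocks rendering.
            final ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
                    .order(ByteOrder.nativeOrder());
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
            // Compression and disk I/O happen off the GL thread.
            new Thread(() -> saveAsPng(buf, width, height)).start();
        }
    }

(Remember that glReadPixels() returns rows bottom-up, so the image needs a vertical flip before encoding.)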
As already mentioned, a direct read is not possible from another thread, but there are other ways.
Instead of drawing your scene directly to the surface view, you can draw it to a framebuffer object (FBO) with an attached texture and then redraw that texture to your main buffer. This is a pretty standard procedure for post-processing and the like, and it does not have a high performance impact. To realize this you just need to look into how to create the FBO and bind it before the drawing begins; the rest of your code can stay as it is. Then add the code to redraw the FBO texture on the screen by binding the main buffer again (index 0 should do), as sketched below.
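The FBO setup is only a handful of calls; a GLES2 sketch (error checking omitted; drawScene() and drawFullScreenQuad() stand in for your existing drawing code):

    // One-time setup: a texture-backed framebuffer object.
    int[] ids = new int[1];
    GLES20.glGenTextures(1, ids, 0);
    int fboTexture = ids[0];
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexture);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glGenFramebuffers(1, ids, 0);
    int fbo = ids[0];
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, fboTexture, 0);

    // Per frame: render into the FBO, then blit its texture to the main buffer.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
    drawScene();
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    drawFullScreenQuad(fboTexture);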
For the screenshot on a separate thread you now need to create a new thread and a new context which is made current on that thread. The new context must be shared with the main one (I believe there is a constructor that accepts the main context for that) so you can share the texture. Now the fun part: when you want to take the screenshot, create a new texture, attach it to the FBO, and detach the old texture. By doing so you effectively steal the texture the content was drawn into, and you can do whatever you want with it on any thread (see the sketch after this paragraph). So hand it to your secondary thread; in that context you can create another FBO, bind the stolen texture to it, and read the pixels from it there. Do not forget to clean up, though.
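The "steal the texture" step then reduces to swapping the FBO's color attachment; a sketch (freshTexture allocated like fboTexture above, and handToScreenshotThread() is a placeholder for your own cross-thread handoff):

    // Detach the rendered texture by attaching a fresh one in its place.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, freshTexture, 0);
    // fboTexture still holds the last frame; since the contexts are shared,
    // the screenshot thread can bind it to its own FBO and glReadPixels() it.
    handToScreenshotThread(fboTexture);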
Some caution is advised, though. This kind of procedure is fine when you take a relatively small number of screenshots. If one screenshot is not completely processed before another one begins, memory usage will balloon and your app will most likely crash. So be careful not to do that: create some kind of locking mechanism, or limit the number of screenshots being processed at once. And this is not unique to OpenGL; the same issue occurs when simply encoding the image data.
And just a note on your original idea: even if you could use the surface view on a separate thread and read pixels from its main buffer, you should not expect a good result. The buffer might be drawn to while you are reading, so you could get chunks of data from different frames. Still, this is just in theory, because in practice the buffer would simply be locked and your application would crash while trying to access it. So this is in no way possible, and even if it were, the result would be unpredictable.

CPU processing on camera image

Currently I'm showing a camera preview on the screen by supplying the camera preview texture via camera.setPreviewTexture(...) (drawing it with OpenGL, of course).
I have a native library which takes a byte[] as an image and returns a byte[], the result image derived from the input image. I want to call it, and then draw the input image and the result to the screen, one on top of the other.
I know that in OpenGL, to get texture data back to the CPU it must be read with glReadPixels, and after processing I would have to load the result into a texture again, which would have a big performance impact if done every frame.
I thought about using camera.setPreviewCallback(...): there I would get the frame (call the process method and transfer the result to my SurfaceView) while continuing to use the preview-texture technique for drawing on the screen in parallel. But then I'm worried about synchronizing the frames I get in the previewCallback with those I get in the texture.
Am I missing anything? Or is there no easy way to solve this issue?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
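A sketch of that hookup (the camera-to-Surface plumbing, e.g. a camera2 capture session, is assumed; processFrame() and backgroundHandler are placeholders):

    // Create a reader whose Surface the camera can target directly.
    ImageReader reader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, /*maxImages=*/ 2);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        // The planes expose the YUV data without a copy; mind the
        // row and pixel strides when feeding it to native code.
        Image.Plane yPlane = image.getPlanes()[0];
        processFrame(yPlane.getBuffer(), yPlane.getRowStride());
        image.close();
    }, backgroundHandler);
    // reader.getSurface() then becomes an output target for the camera.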
Since you want the camera image to be presented simultaneously with the processing output, you can't send frames down two independent paths.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
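The upload itself is one call per frame; a sketch assuming the conversion produced a tightly packed RGBA buffer:

    // Upload the converted RGBA pixels into the preview texture.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, previewTexture);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgbaBuffer);

(For repeated uploads at a fixed size, glTexSubImage2D() avoids reallocating the texture storage each frame.)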
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.

Move SurfaceView across Activities

I'm working on a video app where the user can watch a video, open it in fullscreen if needed, come back to the default view, and so on. I was using ExoPlayer and recently switched to the default MediaPlayer for the reasons explained below.
I need to change the player's Surface on the fly: the same player must display video across activities, with no delay in displaying the image. With ExoPlayer, the decoder waits for the next keyframe before drawing pixels onto a fresh, empty Surface.
So I want to keep using the same Surface rather than pushing a new surface each time, just attaching the surface to a new View parent. The Surface itself could stay the same, but if I detach the SurfaceView to retrieve it in another activity and reattach it, the inner Surface is destroyed.
So is there a way to keep the same Surface across different activities? With a Service?
I know the question is a bit hard to follow; I'll clarify any part on request in the comments.
The Surface associated with a SurfaceView or TextureView will generally be destroyed when the Activity stops. It is possible to work around this behavior.
One approach is built into TextureView, and is described in the architecture doc, and demonstrated in the "double decode" activity in Grafika. The goal of the activity is to continue playing a pair of videos while the activity restarts due to screen rotation, not pausing at all. If you follow the code you can see how the return value from onSurfaceTextureDestroyed() is used to keep the SurfaceTexture alive, and how TextureView#setSurfaceTexture() attaches the SurfaceTexture to the new View. There's a bit of a trick to it -- the setSurfaceTexture() needs to happen in onCreate(), not onSurfaceTextureAvailable() -- but it's reasonably straightforward.
The example uses MediaCodec output for video playback, but it'll work equally well with anything that takes a Surface for output -- just create a Surface from the SurfaceTexture.
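Condensed, the trick looks something like this (a sketch; sCachedSurfaceTexture would live somewhere that survives the Activity, e.g. a static field or a retained object):

    // In onCreate(): re-attach a SurfaceTexture that survived the restart.
    if (sCachedSurfaceTexture != null) {
        mTextureView.setSurfaceTexture(sCachedSurfaceTexture);
    }

    // In the TextureView.SurfaceTextureListener:
    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        sCachedSurfaceTexture = st;
        return false;  // false = "don't release it; I'll manage its lifetime"
    }

    // Anything that wants a Surface for output can then use:
    Surface outputSurface = new Surface(sCachedSurfaceTexture);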
If you don't mind getting ankle-deep into OpenGL ES, you can just create your own SurfaceTexture, independent of Views and Activities, and render it yourself to the current SurfaceView. Grafika's "texture from camera" activity does this with live video from the camera (though it doesn't try to preserve it across Activity restarts).

Android MediaMuxer with openGL

I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I run into some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes onto the screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do it in GeneratedMovie?
Any examples of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
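In code, "one context, two EGLSurfaces" comes down to switching the current surface between draws; an EGL14 sketch (display/config/context setup and renderScene() are assumed):

    int[] attribs = { EGL14.EGL_NONE };
    // One window surface for the screen, one for the encoder's input Surface.
    EGLSurface screenSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig,
            surfaceView.getHolder().getSurface(), attribs, 0);
    EGLSurface encoderSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig,
            mediaCodec.createInputSurface(), attribs, 0);

    // Render twice with the same context, switching the current surface.
    EGL14.eglMakeCurrent(eglDisplay, screenSurface, screenSurface, eglContext);
    renderScene();
    EGL14.eglSwapBuffers(eglDisplay, screenSurface);

    EGL14.eglMakeCurrent(eglDisplay, encoderSurface, encoderSurface, eglContext);
    renderScene();
    EGL14.eglSwapBuffers(eglDisplay, encoderSurface);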
Some additional background material is available here.

TextureView vs. GLSurfaceView or How to use GLSurfaceView with EGL14

I am getting confused with EGL.
My GLSurfaceView creates an EGLContext. Now I create a shared context, and I need to use an EGL extension.
The method I have to use is (API >= 18):
EGLExt.eglPresentationTimeANDROID(android.opengl.EGLDisplay display, android.opengl.EGLSurface surface, long time);
The problem is that GLSurfaceView only creates javax.microedition.khronos.egl.EGLContext instances.
That tells me not to use GLSurfaceView. So I tried TextureView, which is fairly similar, except that you have to handle your own EGL stuff, which is good for this purpose.
But:
TextureView seemed slower, or at least it looked that way, so I recorded some traces with the method profiler.
Here is the TextureView with its own EGL handling:
The thread at the top is a clock that wakes the thread in the middle, which renders onto the TextureView. The main thread is then invoked to redraw the TextureView.
...and here is the GLSurfaceView with its own EGL handling:
The clock is in the middle this time; it tells the thread at the top to render my image into a framebuffer, which I hand directly to the SurfaceView (RENDERMODE_WHEN_DIRTY), and I call requestRender to ask the view to render.
Even at a glance, the GLSurfaceView trace looks much cleaner than the TextureView one.
In both examples I had nothing else on the screen, and they rendered exactly the same meshes with the same shaders.
My questions:
Is there a way to use GLSurfaceView with EGL14 contexts?
Or did I do something wrong?
What you probably want to do is use a plain SurfaceView.
Here's the short version:
SurfaceView has two parts, the Surface and a bit of fake stuff in the View. The Surface gets passed directly to the surface compositor (SurfaceFlinger), so when you draw on it with OpenGL there's relatively little overhead. This makes it fast, but it also makes it not play quite right with the View hierarchy, because the Surface is on one layer and the View-based UI is on a different layer.
TextureView also has two parts, but the part you draw on lives behind the scenes (that's where the SurfaceTexture comes in). When the frame is complete, the stuff you drew is blitted onto the View layer. The GPU can do this quickly, but "some work" is always slower than "no work".
GLSurfaceView is a SurfaceView with a wrapper class that does all the EGL setup and inter-thread messaging for you.
Edit: the long version is available here.
If you can do the GL/EGL setup and thread management yourself -- which, if you're now running on a TextureView, you clearly can -- then you should probably use a plain SurfaceView.
Having said all that, it should be possible to make your original code work with GLSurfaceView. I expect you want to call eglPresentationTimeANDROID() on the EGL context that's shared with the GLSurfaceView, not from within GLSurfaceView itself, so it doesn't matter that GLSurfaceView is using EGL10 internally. What matters for sharing the context is the context client version (e.g. GLES2 vs. GLES3), not the EGL interface version used to configure the context.
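A sketch of that arrangement (hypothetical names; the context is grabbed on the GLSurfaceView's render thread, then shared into an EGL14 context that drives the encoder surface):

    // On the GLSurfaceView render thread: EGL14 can still report the current
    // context, even though GLSurfaceView created it through the EGL10 interface.
    EGLContext sharedContext = EGL14.eglGetCurrentContext();

    // On the encoder thread: create a context that shares with it...
    int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext encoderContext = EGL14.eglCreateContext(eglDisplay, eglConfig,
            sharedContext, ctxAttribs, 0);

    // ...and tag each encoder frame with a timestamp before presenting it.
    EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderSurface, frameTimeNanos);
    EGL14.eglSwapBuffers(eglDisplay, encoderSurface);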
You can see examples of all of this working in Grafika. In particular:
"Show + capture camera" uses a GLSurfaceView, the camera, and the video encoder. Note the EGL context is shared. The example is convoluted and somewhat painful, mostly because it's deliberately trying to use GLSurfaceView and a shared EGL context. (Update: note this issue about race conditions with shared contexts.)
"Play video (TextureView)" and "Basic GL in TextureView" show TextureView in action.
"Record GL app with FBO" uses a plain SurfaceView.
Thanks to fadden! It worked as expected.
To everyone who thinks about doing something similar:
Using the (GL)SurfaceView to render images has advantages AND disadvantages.
My test results in the post above had nothing on the screen other than the rendered image itself.
If you have other UI elements on the screen, especially ones that are updated frequently, you should reconsider my choice of preferring the (GL)SurfaceView.
The SurfaceView creates a new window in the Android window system. Its advantage is that if the SurfaceView gets refreshed, only that window needs to be refreshed. If you additionally update UI elements (which live in another window of the window system), then the two refresh operations block each other (especially when UI drawing is hardware-accelerated), because OpenGL cannot handle multi-threaded drawing properly.
For such a case it could be better to use the TextureView, since it is not a separate window in the Android window system; if you refresh your view, all UI elements get refreshed as well, (probably) all on one thread.
Hope I could help some of you!
