I'm working on a video app where the user can watch a video, open it in fullscreen if needed, come back to the default view, and so on. I was using ExoPlayer and recently switched to the default MediaPlayer for the reasons explained below.
I need to change the player's Surface "on the fly". I need to use the same player to display video across activities, with no delay in displaying the image. With ExoPlayer, the decoder waits for the next keyframe before drawing pixels onto an empty Surface.
So I want to keep using the same Surface, so that I don't have to push a new one each time, just attach the existing Surface to a parent View. The Surface itself could stay the same, but if I detach the SurfaceView to retrieve it in another activity and reattach it, the inner Surface is destroyed.
So is there a way to keep the same Surface across different activities? With a Service?
I know the question is a bit hard to follow; I'll explain any specific part on request in the comments.
The Surface associated with a SurfaceView or TextureView will generally be destroyed when the Activity stops. It is possible to work around this behavior.
One approach is built into TextureView, and is described in the architecture doc, and demonstrated in the "double decode" activity in Grafika. The goal of the activity is to continue playing a pair of videos while the activity restarts due to screen rotation, not pausing at all. If you follow the code you can see how the return value from onSurfaceTextureDestroyed() is used to keep the SurfaceTexture alive, and how TextureView#setSurfaceTexture() attaches the SurfaceTexture to the new View. There's a bit of a trick to it -- the setSurfaceTexture() needs to happen in onCreate(), not onSurfaceTextureAvailable() -- but it's reasonably straightforward.
The example uses MediaCodec output for video playback, but it'll work equally well with anything that takes a Surface for output -- just create a Surface from the SurfaceTexture.
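A minimal sketch of that pattern, assuming an Activity that implements TextureView.SurfaceTextureListener and a static field to carry the SurfaceTexture across the restart (the field and ID names here are mine, not Grafika's):

private static SurfaceTexture sSavedSurfaceTexture;  // survives the Activity restart
private TextureView mTextureView;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    mTextureView = (TextureView) findViewById(R.id.texture_view);
    mTextureView.setSurfaceTextureListener(this);
    if (sSavedSurfaceTexture != null) {
        // The trick: re-attach here, not in onSurfaceTextureAvailable().
        mTextureView.setSurfaceTexture(sSavedSurfaceTexture);
    }
}

@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
    sSavedSurfaceTexture = st;
    return false;  // false == we manage the SurfaceTexture's lifetime ourselves
}

private void attachPlayer(MediaPlayer player) {
    // Anything that takes a Surface for output works here.
    player.setSurface(new Surface(sSavedSurfaceTexture));
}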
If you don't mind getting ankle-deep into OpenGL ES, you can just create your own SurfaceTexture, independent of Views and Activities, and render it yourself to the current SurfaceView. Grafika's "texture from camera" activity does this with live video from the camera (though it doesn't try to preserve it across Activity restarts).
A while ago I asked this question, which received an answer.
I have implemented an intermediary Surface as the answer suggested, but now I've run into another problem. At certain points during my application's lifetime, the VirtualDisplay can change resolution, so I'd like to update the size of my intermediary Surface to match. I was hoping this would be a simple call to setDefaultBufferSize on the Surface's underlying SurfaceTexture, but that doesn't appear to work.
I've poked around at releasing my intermediary Surface and SurfaceTexture and making new ones, but then I have to set the output surface for the VirtualDisplay to be null and do some other synchronization steps which I'd like to avoid if possible.
Is there a way to dynamically update the size of a Surface/SurfaceTexture after creation?
UPDATE:
I've tried calling VirtualDisplay.setSurface(null) along with VirtualDisplay.resize(newSize.width, newSize.height), then sending a message to the thread which handles the callbacks for the intermediary SurfaceTexture to resize the texture via setDefaultBufferSize, then having the main thread poll the secondary thread until that call is finished, and finally calling VirtualDisplay.setSurface(surfaceFromSecondaryThread).
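In code, the sequence looks roughly like this (a sketch with my own names; note that VirtualDisplay.resize() also takes a density parameter):

mVirtualDisplay.setSurface(null);                          // detach the output first
mVirtualDisplay.resize(newWidth, newHeight, mDensityDpi);  // resize() wants a density too
mTextureThreadHandler.post(new Runnable() {
    @Override public void run() {
        mSurfaceTexture.setDefaultBufferSize(newWidth, newHeight);
        mResized = true;  // the main thread polls this flag
    }
});
// ... the main thread waits for mResized, then re-attaches:
mVirtualDisplay.setSurface(mIntermediarySurface);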
This works sometimes. Other times the texture is all green with a gray bar across it (green is also my glClearColor; not sure if that is related, as seen here). Sometimes the current screen image appears duplicated/smaller in my VirtualDisplay. So it seems like a timing issue, but what timing I should wait for, I am unsure. The documentation for setDefaultBufferSize states:
For OpenGL ES, the EGLSurface should be destroyed (via eglDestroySurface), made not-current (via eglMakeCurrent), and then recreated (via eglCreateWindowSurface) to ensure that the new default size has taken effect.
The problem is that my code does not create an EGLSurface from the SurfaceTexture/Surface, so I have no way of destroying one. I assume the producer (VirtualDisplay) does, but there are no public APIs for me to get at its EGLSurface.
UPDATE 2:
So, when I see the problem (green screen with bar, corruption, perhaps because my glClearColor is green), if I do a glReadPixels before I call eglSwapBuffers to write to the Surface for the MediaCodec, I read green pixels. This tells me it isn't a MediaCodec problem: either the information written to the Surface by the VirtualDisplay is corrupt (and remains corrupt), or the YUV-to-RGBA conversion when going from Surface -> OpenGL texture is broken somehow. I'm leaning towards there being a problem with the VirtualDisplay.
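For reference, the readback check is just a one-pixel glReadPixels right before the swap (variable names assumed):

// Debug readback just before presenting the frame to MediaCodec's input surface.
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
// A green value here means the frame is already corrupt before MediaCodec sees it.
EGL14.eglSwapBuffers(mEglDisplay, mEncoderSurface);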
I have a special design requirement for the app I'm developing right now.
Right now, I have a third-party private video library which plays a video stream. The design of this screen includes a translucent panel overlaid on top of the video, blurring the portion of the video that lies behind it.
Normally in order to blur the background, you are supposed to take a screenshot of the view behind, blur it and use it as an image for the foreground view.
In this case, the video keeps on playing, so the blurred image changes every frame. How would you implement this then?
A possible solution would be to create a thread that takes screenshots, crops them, and sets them as the background. Even better if that view is a SurfaceView, I guess. But I'm wondering what the best approach would be here. Would a thread that continually takes screenshots have a huge performance impact? Is it possible to feed a SurfaceView's buffer with these images?
Thanks!
A SurfaceView surface is a consumer of graphics buffers. You can't have two producers for one consumer, which means you can't send the video to it and draw on it at the same time.
You can have multiple layers; the SurfaceView surface is on a separate layer behind the View UI layer. So you could play the video to the SurfaceView's surface, and draw your blur rectangle on the SurfaceView's view. (Normally the SurfaceView's view is completely transparent, and is just used as a place-holder for layout purposes.)
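A sketch of that layering, assuming a simple translucent panel serves as the "blur" (a hypothetical SurfaceView subclass whose normally transparent View part draws the overlay):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.SurfaceView;

public class OverlaySurfaceView extends SurfaceView {
    private final Paint mPanelPaint = new Paint();

    public OverlaySurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setWillNotDraw(false);             // the View part skips drawing by default
        mPanelPaint.setColor(0x80000000);  // translucent panel over the video
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // The video plays on the surface layer underneath; this paints the View layer.
        canvas.drawRect(0, getHeight() * 2 / 3f, getWidth(), getHeight(), mPanelPaint);
    }
}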
Another option would be to render the video frame to a SurfaceTexture. You would then render that texture to the SurfaceView surface with GLES, and render the blur rectangle on top. You can find an example of treating live camera input as a GLES texture in Grafika ("texture from camera" activity). This has the additional advantage that, since you're not interacting with the View system -- the SurfaceView surface is composited by the system, not the app -- you can do it all on an independent thread.
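The core of that second approach, stripped down (the full version is Grafika's "texture from camera"; EGL setup, error handling, and the quad drawing are omitted here):

// On the GL thread: create an external texture and wrap it in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture videoTexture = new SurfaceTexture(tex[0]);
Surface videoSurface = new Surface(videoTexture);  // hand this to the video player

// Per frame, still on the GL thread:
videoTexture.updateTexImage();  // latch the newest video frame into the texture
// ... draw a full-screen quad sampling GL_TEXTURE_EXTERNAL_OES,
// draw the translucent panel on top, then eglSwapBuffers() to the SurfaceView.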
In any event, rendering, grabbing a screenshot, and re-rendering is going to be slower than the options described above.
For more details about why things work the way they do, see the Android System-Level Graphics architecture doc.
I am getting confused with EGL.
My GLSurfaceView creates an EGLContext. Now I create a shared context, and I need to use an EGL extension.
The method I have to use (API >= 18) is:
EGLExt.eglPresentationTimeANDROID(android.opengl.EGLDisplay display, android.opengl.EGLSurface surface, long time);
The problem is that GLSurfaceView only creates javax.microedition.khronos.egl.EGLContext objects, i.e. the old EGL10 interface, while EGLExt wants the android.opengl (EGL14) types.
That tells me not to use GLSurfaceView. So I tried TextureView, which is broadly similar, with the difference that you have to handle your own EGL setup, which is good for this purpose.
But:
TextureView seemed slower, at least it looked that way, so I recorded some diagrams with the method profiler.
Here is the TextureView with its own EGL handling:
The thread at the top is a clock that wakes the thread in the middle, which renders onto the TextureView. The main thread is then invoked to redraw the TextureView.
... and here the GLSurfaceView with its own EGL handling:
The clock is in the middle this time; it wakes the thread at the top, which renders my image into a framebuffer that I hand directly to the SurfaceView (RENDERMODE_WHEN_DIRTY) and then calls requestRender() to ask the view to render.
Even at a glance you can see that the GLSurfaceView timeline looks much cleaner than the TextureView one.
In both examples nothing else was on the screen, and they rendered exactly the same meshes with the same shaders.
So, to my questions:
Is there a way to use GLSurfaceView with EGL14 Contexts?
Did I do something wrong?
What you probably want to do is use a plain SurfaceView.
Here's the short version:
SurfaceView has two parts, the Surface and a bit of fake stuff in the View. The Surface gets passed directly to the surface compositor (SurfaceFlinger), so when you draw on it with OpenGL there's relatively little overhead. This makes it fast, but it also makes it not play quite right with the View hierarchy, because the Surface is on one layer and the View-based UI is on a different layer.
TextureView also has two parts, but the part you draw on lives behind the scenes (that's where the SurfaceTexture comes in). When the frame is complete, the stuff you drew is blitted onto the View layer. The GPU can do this quickly, but "some work" is always slower than "no work".
GLSurfaceView is a SurfaceView with a wrapper class that does all the EGL setup and inter-thread messaging for you.
Edit: the long version is available here.
If you can do the GL/EGL setup and thread management yourself -- which, if you're now running on a TextureView, you clearly can -- then you should probably use a plain SurfaceView.
Having said all that, it should be possible to make your original code work with GLSurfaceView. I expect you want to call eglPresentationTimeANDROID() on the EGL context that's shared with the GLSurfaceView, not from within GLSurfaceView itself, so it doesn't matter that GLSurfaceView is using EGL10 internally. What matters for sharing the context is the context client version (e.g. GLES2 vs. GLES3), not the EGL interface version used to configure the context.
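A rough sketch of that setup, assuming you grab the GLSurfaceView's context on its render thread and do the rest on your own encoder thread (eglDisplay, eglConfig, and encoderSurface are assumed to be set up already):

// On the GLSurfaceView's render thread (e.g. in onSurfaceCreated):
EGLContext sharedContext = EGL14.eglGetCurrentContext();

// On the encoder thread: create an EGL14 context that shares with it.
int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext encoderContext = EGL14.eglCreateContext(
        eglDisplay, eglConfig, sharedContext, ctxAttribs, 0);

// ... make encoderContext current on encoderSurface, render the frame, then:
EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderSurface, presentationTimeNs);
EGL14.eglSwapBuffers(eglDisplay, encoderSurface);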
You can see examples of all of this working in Grafika. In particular:
"Show + capture camera" uses a GLSurfaceView, the camera, and the video encoder. Note the EGL context is shared. The example is convoluted and somewhat painful, mostly because it's deliberately trying to use GLSurfaceView and a shared EGL context. (Update: note this issue about race conditions with shared contexts.)
"Play video (TextureView)" and "Basic GL in TextureView" show TextureView in action.
"Record GL app with FBO" uses a plain SurfaceView.
Thanks to fadden! It worked as expected.
To everyone who thinks about doing something similar:
Using a (GL)SurfaceView to render images has advantages AND disadvantages.
My test results in the post above had nothing else on the screen besides the rendered image itself.
If you have other UI elements on the screen, especially ones that update frequently, you should reconsider my choice of preferring the (GL)SurfaceView.
A SurfaceView creates a new window in the Android window system. The advantage is that when the SurfaceView is refreshed, only that window needs to be refreshed. If you additionally update UI elements (which live in another window of the window system), the two refresh operations block each other (especially when UI drawing is hardware-accelerated), because OpenGL cannot handle multi-threaded drawing well.
In such a case it may be better to use a TextureView, because it is not a separate window in the Android window system: if you refresh your view, all UI elements get refreshed as well, (probably) all on one thread.
Hope I could help some of you!
I'm trying to do camera recording with drawings on top, on Google Glass, using a LiveCard.
In a regular activity this would be achieved with a FrameLayout containing a SurfaceView for the camera preview 'in the back' and another View in front of it used for drawing.
But with a LiveCard, if one needs sub-second updates, one has to use the LiveCard itself as a Surface, according to the documentation: https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/LiveCard
If your application requires more frequent updates (several times per second) or rendering more elaborate graphics than the standard widgets support, enable direct rendering and add a SurfaceHolder.Callback to the card's surface.

LiveCard liveCard; // initialized elsewhere
liveCard.setDirectRenderingEnabled(true);
liveCard.getSurfaceHolder().addCallback(callback);

You can then draw directly on the surface inside a background thread or in response to external events (for example, sensor or location updates). Use the surfaceCreated and surfaceDestroyed methods to start and stop your rendering logic when the card is displayed or hidden.
Now I can either draw my own stuff on this surface, or I can give it to the MediaRecorder as the camera preview surface, but I can't do both: it fails with an error.
I wonder if anyone has ideas on how to make this work?
The way I'd draw into the LiveCard myself is to 'manually' lock the canvas and call FrameLayout.draw(canvas). One option would be a layout containing two SurfaceViews -- one for the camera preview and one for my own drawings -- using the same approach. But even if I define such a layout in XML, I can't get the SurfaceViews created (e.g. the appropriate SurfaceView callbacks are never called, and any attempt to draw on them fails).
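For reference, the 'manual' drawing path I mean is roughly this (frameLayout stands in for my inflated layout):

SurfaceHolder holder = liveCard.getSurfaceHolder();
Canvas canvas = holder.lockCanvas();
if (canvas != null) {
    try {
        // Size the off-screen View hierarchy to the card, then paint it.
        frameLayout.measure(
                View.MeasureSpec.makeMeasureSpec(canvas.getWidth(), View.MeasureSpec.EXACTLY),
                View.MeasureSpec.makeMeasureSpec(canvas.getHeight(), View.MeasureSpec.EXACTLY));
        frameLayout.layout(0, 0, canvas.getWidth(), canvas.getHeight());
        frameLayout.draw(canvas);
    } finally {
        holder.unlockCanvasAndPost(canvas);
    }
}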
I have two Activities which use OpenGL for drawing. At the transition from one activity to the next I get an unsightly empty screen filled with my OpenGL clear colour (so it's not quite as bad as a black screen).
I want to transition seamlessly between Activities, but there are several high-load phases when a GLSurfaceView is created; the main one is texture loading, which is the slowest part.
Is there any way to double-buffer between Activities so that the last Activity's view stays frozen until I explicitly tell the next Activity to draw? I want the transitions to be seamless.
Moving everything into one GLSurfaceView instance isn't really an option I want to consider.
You can use setRenderMode(RENDERMODE_WHEN_DIRTY) on your GLSurfaceView, so the surface will only be redrawn when you call requestRender().
This way, whatever you drew before switching to another surface view will only be cleared when you request a new draw.
You can go back to continuous drawing by setting the render mode to RENDERMODE_CONTINUOUSLY.
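For example:

glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
// ... the last rendered frame stays on screen until:
glSurfaceView.requestRender();
// ... and to resume per-vsync drawing:
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);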
This is hard to do on Android 2.x because of its OpenGL ES support. Also, it is not recommended to use two OpenGL surfaces in one application when rendering continuously; if you do, you will need RENDERMODE_WHEN_DIRTY to control them easily.
On Android 4.x, TextureView is an option.
TextureView does the same job as GLSurfaceView but is View-compatible, which means you can use view animations with a TextureView.
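For example, a simple property animation on a TextureView (textureView is assumed to be in the layout):

// TextureView participates in the View hierarchy, so normal View animations work.
textureView.animate().alpha(0f).setDuration(300).start();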