I need to record a video that contains just one single frame: an image specified by the user (the video can be of any length, but it will show only that one static image). So, I figured I could use the new MediaRecorder.VideoSource.SURFACE and just draw to the Surface being used by the recorder. I initialize the recorder properly, and I can even call MediaRecorder.getSurface() without an exception (something that is apparently tricky).
My problem is somewhat embarrassing: I don't know what to do with the surface returned. I need to draw to it somehow, but all examples I can find involve drawing to a SurfaceView. Is this surface the same surface used by MediaRecorder.setPreviewDisplay()? How do I draw something to it?
In theory you can use Surface#lockCanvas() to get a Canvas to draw on if you want to render in software. There used to be problems with this on some platforms; not sure if that has been fixed.
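A minimal sketch of that approach, assuming recorder is your configured MediaRecorder and bitmap is the user's image (both names illustrative):

    Surface surface = recorder.getSurface();
    Canvas canvas = surface.lockCanvas(null);   // null = lock the whole surface
    try {
        canvas.drawBitmap(bitmap, 0, 0, null);  // draw the static image
    } finally {
        surface.unlockCanvasAndPost(canvas);    // submit the frame to the encoder
    }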
The other option is to create an EGLSurface from the Surface and render onto it with OpenGL ES. You can find examples of this, with some code to manage all of the EGL setup, in Grafika.
The GLES recording examples use MediaCodec rather than MediaRecorder, but the idea is the same, and it should be much simpler with MediaRecorder.
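For the GLES route, a rough sketch of the EGL14 setup, essentially what Grafika's EglCore does, with error checking omitted (surface is the one returned by MediaRecorder.getSurface()):

    // EGL_RECORDABLE_ANDROID isn't in EGL14, so define it locally (as Grafika does)
    final int EGL_RECORDABLE_ANDROID = 0x3142;

    EGLDisplay eglDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    int[] version = new int[2];
    EGL14.eglInitialize(eglDisplay, version, 0, version, 1);

    int[] cfgAttribs = {
            EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL_RECORDABLE_ANDROID, 1,          // required for encoder surfaces
            EGL14.EGL_NONE };
    EGLConfig[] configs = new EGLConfig[1];
    int[] num = new int[1];
    EGL14.eglChooseConfig(eglDisplay, cfgAttribs, 0, configs, 0, 1, num, 0);

    int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext eglContext = EGL14.eglCreateContext(eglDisplay, configs[0],
            EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(eglDisplay, configs[0],
            surface, new int[] { EGL14.EGL_NONE }, 0);
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

    // ... draw the image with GLES20 here, then push the frame:
    EGL14.eglSwapBuffers(eglDisplay, eglSurface);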
I want to use TextureView to draw math curves (a lot of them) whose data source is an external device.
Each zone I draw must add lines to what was drawn previously.
Since TextureView renders using three buffers, I would like the buffer I draw into at any moment to start with the contents of the buffer I have just released.
That is, I want the contents of the buffer I release to fill the next buffer before I draw on it.
Another possibility would be to force the use of only one buffer.
I see it is possible to use getBitmap and setBitmap, but I would like to do this without loading a bitmap into memory.
Does anyone know if this is possible?
I would recommend two things:
Assuming you're rendering with Canvas, don't use TextureView. Use a custom View instead. TextureView doesn't really give you an advantage, and you lose hardware-accelerated rendering.
Render to an off-screen Bitmap, then blit the Bitmap to the View. Off-screen rendering is not hardware-accelerated, but you avoid re-drawing the entire scene on every frame. You will have to experiment to determine which approach is more efficient; see the sketch below.
If you're drawing with OpenGL ES, just draw everything to the TextureView on every frame (unless you want to play with FBOs).
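A sketch of the second recommendation, accumulating segments in an off-screen Bitmap inside a custom View (names like CurveView and addSegment are illustrative, not from any library):

    public class CurveView extends View {
        private Bitmap mBuffer;
        private Canvas mBufferCanvas;
        private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public CurveView(Context context) {
            super(context);
        }

        @Override
        protected void onSizeChanged(int w, int h, int oldw, int oldh) {
            // (Re)allocate the accumulation buffer once the view size is known
            mBuffer = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            mBufferCanvas = new Canvas(mBuffer);
        }

        // Called whenever the external device delivers new curve data;
        // only the new line is drawn, everything else persists in the Bitmap.
        public void addSegment(float x0, float y0, float x1, float y1) {
            mBufferCanvas.drawLine(x0, y0, x1, y1, mPaint);
            invalidate();
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawBitmap(mBuffer, 0, 0, null);  // blit the accumulated scene
        }
    }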
You can try Surface#lockCanvas(null):

1. Use TextureView.getSurfaceTexture() to get a SurfaceTexture.
2. Use new Surface(surfaceTexture) to create a Surface.
3. Use Surface.lockCanvas() or Surface.lockHardwareCanvas() to get a Canvas.

Then you can do a lot of drawing on the TextureView with this canvas.
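A minimal sketch of those steps (assuming textureView already has a valid SurfaceTexture; use isAvailable() or a SurfaceTextureListener to know when it does):

    SurfaceTexture surfaceTexture = textureView.getSurfaceTexture();
    Surface surface = new Surface(surfaceTexture);
    Canvas canvas = surface.lockCanvas(null);    // or surface.lockHardwareCanvas()
    try {
        // ... draw the curves onto the canvas here ...
    } finally {
        surface.unlockCanvasAndPost(canvas);     // push the frame to the TextureView
    }
    surface.release();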
A while ago I asked this question, which received an answer.
I have implemented an intermediary Surface as the answer suggested, but now I've run into another problem. At some points in time during my application, my VirtualDisplay can change resolution. So, I'd like to also update the size of my intermediary Surface to match the change in resolution of the VirtualDisplay. I was hoping this would be a simple call to setDefaultBufferSize on the Surface's underlying SurfaceTexture, but that doesn't appear to work.
I've poked around at releasing my intermediary Surface and SurfaceTexture and making new ones, but then I have to set the output surface for the VirtualDisplay to be null and do some other synchronization steps which I'd like to avoid if possible.
Is there a way to dynamically update the size of a Surface/SurfaceTexture after creation?
UPDATE:
I've tried calling VirtualDisplay.setSurface(null) along with VirtualDisplay.resize(newSize.width, newSize.height), then sending a message to the thread that handles the callbacks for the intermediary SurfaceTexture to resize the texture via setDefaultBufferSize, and then having the main thread poll the secondary thread until that call has finished before calling VirtualDisplay.setSurface(surfaceFromSecondaryThread).
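In rough code, the sequence I'm attempting looks like this (sketched with a CountDownLatch where my real code polls; note the real VirtualDisplay.resize() also takes a densityDpi argument, and names here are illustrative):

    virtualDisplay.setSurface(null);                        // detach the old surface
    virtualDisplay.resize(newWidth, newHeight, densityDpi);

    // On the thread that owns the intermediary SurfaceTexture:
    final CountDownLatch resizeDone = new CountDownLatch(1);
    surfaceTextureHandler.post(new Runnable() {
        @Override public void run() {
            surfaceTexture.setDefaultBufferSize(newWidth, newHeight);
            resizeDone.countDown();                         // signal the main thread
        }
    });

    resizeDone.await();                                     // main thread waits
    virtualDisplay.setSurface(surfaceFromSecondaryThread);  // re-attach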
This works sometimes. Other times the texture is all green with a gray bar across it (green is also my glClearColor; not sure if that is related, as seen here). Sometimes the current screen image appears duplicated/smaller in my VirtualDisplay. So it seems like a timing issue, but what timing I should wait for, I am unsure. The documentation for setDefaultBufferSize states:
For OpenGL ES, the EGLSurface should be destroyed (via eglDestroySurface), made not-current (via eglMakeCurrent), and then recreated (via eglCreateWindowSurface) to ensure that the new default size has taken effect.
The problem is that my code does not create an EGLSurface from the SurfaceTexture/Surface, so I have no way of destroying it. I'm assuming that the producer (VirtualDisplay) does, but there are no public APIs for me to get at that EGLSurface.
UPDATE 2:
So, when I see the problem (green screen with a gray bar, corruption, perhaps because my glClearColor is green), if I do a glReadPixels before I call eglSwapBuffers to write to the Surface for the MediaCodec, I read green pixels. This tells me it isn't a MediaCodec problem: either the information written to the Surface by the VirtualDisplay is corrupt (and remains corrupt), or the conversion from YUV space to RGBA space when going from Surface to OpenGL texture is broken somehow. I'm leaning towards there being a problem with the VirtualDisplay.
I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I run into some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well when I draw the shapes on the screen, but I can't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do it in GeneratedMovie?
Any examples of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
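With the single-context approach, the render-twice loop might look like this sketch (displayEglSurface and encoderEglSurface are assumed to have been created from their respective Surfaces with the same EGLConfig; drawFrame() and frameTimeNanos are stand-ins for your rendering and timestamp):

    // Pass 1: render to the screen
    EGL14.eglMakeCurrent(eglDisplay, displayEglSurface, displayEglSurface, eglContext);
    drawFrame();
    EGL14.eglSwapBuffers(eglDisplay, displayEglSurface);

    // Pass 2: render the same scene to the encoder's input surface
    EGL14.eglMakeCurrent(eglDisplay, encoderEglSurface, encoderEglSurface, eglContext);
    drawFrame();
    EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderEglSurface, frameTimeNanos);
    EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);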
Some additional background material is available here.
I am getting confused with EGL.
My GLSurfaceView creates an EGLContext. Now I create a shared context, and I need to use an EGL extension.
The method I have to use (API >= 18) is:
EGLExt.eglPresentationTimeANDROID(android.opengl.EGLDisplay display, android.opengl.EGLSurface surface, long time);
The problem is that GLSurfaceView only creates javax.microedition.khronos.egl.EGLContext instances.
Which tells me NOT to use GLSurfaceView. So I tried TextureView, which is somewhat similar, except that you have to handle the EGL stuff yourself, which is good for this purpose.
But:
TextureView is slower, or at least it looked that way, so I recorded some diagrams with the method profiler:
Here is the TextureView with its own EGL handling:
The thread at the top is a clock that wakes the thread in the middle, which renders onto the TextureView. The main thread is called after that, to redraw the TextureView.
... and here is the GLSurfaceView with its own EGL handling:
The clock is in the middle this time; it calls the thread at the top to render my image into a framebuffer, which I hand directly to the SurfaceView (RENDERMODE_WHEN_DIRTY), and then call requestRender to request that the view render.
As you can see at a glance, the GLSurfaceView version looks much cleaner than the TextureView one.
In both examples I had nothing else on the screen, and they rendered exactly the same meshes with the same shaders.
To my questions:
Is there a way to use GLSurfaceView with EGL14 Contexts?
Did I do something wrong?
What you probably want to do is use a plain SurfaceView.
Here's the short version:
SurfaceView has two parts, the Surface and a bit of fake stuff in the View. The Surface gets passed directly to the surface compositor (SurfaceFlinger), so when you draw on it with OpenGL there's relatively little overhead. This makes it fast, but it also makes it not play quite right with the View hierarchy, because the Surface is on one layer and the View-based UI is on a different layer.
TextureView also has two parts, but the part you draw on lives behind the scenes (that's where the SurfaceTexture comes in). When the frame is complete, the stuff you drew is blitted onto the View layer. The GPU can do this quickly, but "some work" is always slower than "no work".
GLSurfaceView is a SurfaceView with a wrapper class that does all the EGL setup and inter-thread messaging for you.
Edit: the long version is available here.
If you can do the GL/EGL setup and thread management yourself -- which, if you're now running on a TextureView, you clearly can -- then you should probably use a plain SurfaceView.
Having said all that, it should be possible to make your original code work with GLSurfaceView. I expect you want to call eglPresentationTimeANDROID() on the EGL context that's shared with the GLSurfaceView, not from within GLSurfaceView itself, so it doesn't matter that GLSurfaceView is using EGL10 internally. What matters for sharing the context is the context client version (e.g. GLES2 vs. GLES3), not the EGL interface version used to configure the context.
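For reference, context sharing is expressed through the share_context argument of eglCreateContext(). A sketch, where sharedContext would be the GLSurfaceView's context, obtainable on its render thread via EGL14.eglGetCurrentContext():

    int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext second = EGL14.eglCreateContext(eglDisplay, eglConfig,
            sharedContext,   // the context created by GLSurfaceView
            attribs, 0);     // client version must match the shared context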
You can see examples of all of this working in Grafika. In particular:
"Show + capture camera" uses a GLSurfaceView, the camera, and the video encoder. Note the EGL context is shared. The example is convoluted and somewhat painful, mostly because it's deliberately trying to use GLSurfaceView and a shared EGL context. (Update: note this issue about race conditions with shared contexts.)
"Play video (TextureView)" and "Basic GL in TextureView" show TextureView in action.
"Record GL app with FBO" uses a plain SurfaceView.
Thanks to fadden! It worked as expected.
To everyone who thinks about doing something similar:
Using the (GL)SurfaceView to render images has advantages AND disadvantages.
My test results in the post above had nothing on the screen other than the rendered image itself.
If you have other UI elements on the screen, especially if they are updated frequently, you should reconsider my choice of preferring the (GL)SurfaceView.
The SurfaceView creates a new window in the Android window system. Its advantage is that if the SurfaceView gets refreshed, only this window is refreshed. If you additionally update UI elements (which live in another window of the window system), then both refresh operations block each other (especially when UI drawing is hardware-accelerated), because OpenGL cannot handle multi-threaded drawing properly.
For such a case it could be better to use the TextureView, since it is not a separate window in the Android window system: if you refresh your view, all UI elements get refreshed as well, (probably) everything in one thread.
Hope I could help some of you!
I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that in Android 4.3, MediaCodec was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example of recording a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can then create the video from the image sequence, a separate task where MediaCodec should help.
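A minimal sketch of the screenshot step; View.draw() is the public entry point (onDraw() itself is protected):

    Bitmap bitmap = Bitmap.createBitmap(webView.getWidth(), webView.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    webView.draw(canvas);          // render the WebView's current content
    // feed "bitmap" frames to the encoder as the separate next step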
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:

1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
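A compact sketch of steps 1-4 (run the GL parts on the thread that owns the GL context; viewWidth/viewHeight are the target size and are illustrative):

    // 1. Create an OpenGL texture (external OES, as SurfaceTexture requires)
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    // 2. Wrap it in a SurfaceTexture and reserve enough buffer space
    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
    surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight);

    // 3. Create a Surface backed by the SurfaceTexture
    Surface surface = new Surface(surfaceTexture);

    // 4. On the UI thread: draw the view into the Surface's Canvas
    Canvas canvas = surface.lockCanvas(null);
    webView.draw(canvas);
    surface.unlockCanvasAndPost(canvas);

    // Later, on the GL thread (after onFrameAvailable fires):
    surfaceTexture.updateTexImage();   // the texture now holds the view's pixels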
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering and some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/