How to record webview activity screen using Android MediaCodec?

I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that Android 4.3 expanded MediaCodec to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
I even found an example where you can record a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?

Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and is not supported by all devices.
Once you are able to obtain screenshots, you can create the video (creating a video from an image sequence on Android is a separate task where MediaCodec should help).

"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL (a minimal sketch follows the list):
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, not just WebView. Plus, Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
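For illustration, here is a minimal sketch of those steps. It assumes an EGL context is already current on the calling thread; the class and method names are made up for this answer and are not from the linked project:

    import android.graphics.Canvas;
    import android.graphics.SurfaceTexture;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import android.view.Surface;
    import android.webkit.WebView;

    public class ViewToGlSketch {
        private SurfaceTexture surfaceTexture;
        private Surface surface;

        // Steps 1-3: create a texture, wrap it in a SurfaceTexture, build a Surface.
        public void setUp(int width, int height) {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
            surfaceTexture = new SurfaceTexture(tex[0]);
            surfaceTexture.setDefaultBufferSize(width, height); // room for the view
            surface = new Surface(surfaceTexture);
        }

        // Step 4: draw the view into the Surface; any View works, not just WebView.
        public void drawView(WebView view) throws Surface.OutOfResourcesException {
            Canvas canvas = surface.lockCanvas(null);
            try {
                view.draw(canvas); // software-draws the view into the Surface buffer
            } finally {
                surface.unlockCanvasAndPost(canvas);
            }
            // Back on the GL thread, surfaceTexture.updateTexImage() latches the
            // posted frame onto the external texture.
        }
    }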
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering
And you can find some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/

Related

How to crop Camera2 preview without overlay object?

I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting, and this is the result I want to achieve (the before/after screenshots are not reproduced here).
I don't want to just overlay an object on the TextureView; I want the preview actually to be this size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you likely want to implement the cropping in OpenGL ES; so you'll need a GLSurfaceView, and then you need to use the OpenGL context of that GLSurfaceView to create a SurfaceTexture object, and then using that texture, draw a quadrilateral with the cropping behavior you want in the fragment shader.
That's fairly basic EGL/GLES use, but it's quite a bit to take in if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
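To give a flavor of the cropping step, here is a hedged sketch of a fragment shader for that quadrilateral. The crop rectangle values and the varying/uniform names are placeholders chosen for this answer:

    // Samples only a sub-rectangle of the camera's external texture; the quad
    // it is drawn on fills the view, so the result is a crop, not a squeeze.
    private static final String CROP_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform samplerExternalOES sTexture;\n" +
            "void main() {\n" +
            // Remap [0,1] texture coordinates onto the center half of the source.
            "    vec2 cropped = vec2(0.25, 0.25) + vTexCoord * 0.5;\n" +
            "    gl_FragColor = texture2D(sTexture, cropped);\n" +
            "}\n";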

Need to draw to surface used by MediaRecorder

I need to record a video that contains just one single frame: an image specified by the user (the video can be of any length, but every frame will be the same static image). So I figured I could use the new MediaRecorder.VideoSource.SURFACE and just draw to the Surface being used by the recorder. I initialize the recorder properly, and I can even call MediaRecorder.getSurface() without an exception (something that is apparently tricky).
My problem is somewhat embarrassing: I don't know what to do with the surface returned. I need to draw to it somehow, but all examples I can find involve drawing to a SurfaceView. Is this surface the same surface used by MediaRecorder.setPreviewDisplay()? How do I draw something to it?
In theory you can use Surface#lockCanvas() to get a Canvas to draw on if you want to render in software. There used to be problems with this on some platforms; not sure if that has been fixed.
The other option is to create an EGLSurface from the Surface and render onto it with OpenGL ES. You can find examples of this, with some code to manage all of the EGL setup, in Grafika.
The GLES recording examples use MediaCodec rather than MediaRecorder, but the idea is the same, and it should be much simpler with MediaRecorder.
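As a rough sketch of the software (Canvas) path, assuming a recorder already configured with MediaRecorder.VideoSource.SURFACE and started, and a userImage bitmap supplied by the caller:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.media.MediaRecorder;
    import android.view.Surface;

    // One frame per call; repeat at your target frame rate for the clip length.
    void drawFrame(MediaRecorder recorder, Bitmap userImage)
            throws Surface.OutOfResourcesException {
        Surface surface = recorder.getSurface();
        Canvas canvas = surface.lockCanvas(null);
        try {
            canvas.drawColor(Color.BLACK);
            canvas.drawBitmap(userImage, 0, 0, null); // the static image
        } finally {
            surface.unlockCanvasAndPost(canvas); // submits the frame to the encoder
        }
    }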

Android: Attach SurfaceTexture to FrameBuffer

I am implementing a video effect that requires dual-pass rendering (the texture needs to be passed through multiple shader programs). Attaching the SurfaceTexture to the GL_TEXTURE_EXTERNAL_OES texture passed to its constructor does not seem to be a solution, since the displayed result is only rendered once.
One solution I am aware of is that the first rendering can be done to a FrameBuffer, and then the resulting texture can be rendered to where it actually gets displayed.
However, it seems that a SurfaceTexture must be attached to a GL_TEXTURE_EXTERNAL_OES texture, and not a FrameBuffer. I'm not sure if there is a way around this, or if there is a different approach I should take.
Thank you.
SurfaceTexture receives a buffer of graphics data and essentially wraps it up as an "external" texture. If it helps to see source code, start in updateTexImage(). Note the name of the class ("GLConsumer") is a more accurate description of the function than "SurfaceTexture": it consumes frames of graphic data and makes them available to GLES.
SurfaceTexture is expected to work with formats that OpenGL ES doesn't "naturally" work with, notably YUV, so it always uses external textures.
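For the dual-pass question above, the usual workaround is exactly the FBO route: pass one samples the external texture into an ordinary GL_TEXTURE_2D attached to a framebuffer, and pass two samples that texture with the next shader program. A hedged sketch of the framebuffer setup, where offscreenTex is an assumed, already-allocated GL_TEXTURE_2D color texture:

    import android.opengl.GLES20;

    int createFboFor(int offscreenTex) {
        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, offscreenTex, 0);
        // Pass one: draw a full-screen quad here that samples the external
        // texture (after SurfaceTexture.updateTexImage()) via samplerExternalOES.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to default target
        // Pass two: sample offscreenTex with the second shader program as usual.
        return fbo[0];
    }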

How to take snapshot of surfaceview?

I am working on H.264 video rendering in an Android application, using SurfaceView. It has a feature to take a snapshot while rendering the video on the surface view. Whenever I take a snapshot, I get only a transparent/black screen. I use the getDrawingCache() method to capture the screen, and it returns only a null value. I use the code below to capture the screen.
    SurfaceView mSurfaceView = new SurfaceView(this); // member variable
    if (mSurfaceView != null) {
        // After the video renders on the SurfaceView, enable the drawing cache
        mSurfaceView.setDrawingCacheEnabled(true);
    }
    Bitmap bm = mSurfaceView.getDrawingCache(); // returns null
Unless you're rendering H.264 video frames in software with Canvas onto a View, the drawing-cache approach won't work (see e.g. this answer).
You cannot read pixels from the Surface part of the SurfaceView. The basic problem is that a Surface is a queue of buffers with a producer-consumer interface, and your app is on the producer side. The consumer, usually the system compositor (SurfaceFlinger), is able to capture a screen shot because it's on the other end of the pipe.
To grab snapshots while rendering video you can render video frames to a SurfaceTexture, which provides both producer and consumer within your app process. You can then render the texture for display with GLES, optionally grabbing pixels with glReadPixels() for the snapshot.
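A hedged sketch of the glReadPixels() step, essentially what Grafika's saveFrame() does; width and height are assumed to match the EGL surface being read:

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    Bitmap grabFrame(int width, int height) {
        ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);
        return bmp; // note: vertically flipped, since GL's origin is bottom-left
    }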
The Grafika app demonstrates various pieces, though none of the activities specifically solves your problem. For example, "continuous capture" directs the camera preview to a SurfaceTexture and then renders it twice (once for display, once for video encoding), which is similar to what you want to do. The GLES utility classes include a saveFrame() function that shows how to use glReadPixels() to create a bitmap.
See also the Android System-Level Graphics Architecture document.

Android MediaMuxer with openGL

I am trying to generate a movie using MediaMuxer. The Grafika example is an excellent effort, but when I try to extend it, I have some problems.
I am trying to draw some basic shapes like squares, triangles, and lines into the movie. My OpenGL code works well if I draw the shapes onto the screen, but I couldn't draw the same shapes into the video.
I also have questions about setting up the OpenGL matrix, program, shader, and viewport. Normally there are methods like onSurfaceCreated and onSurfaceChanged where I can set these things up. What is the best way to do it in GeneratedMovie?
Any examples of writing more complicated shapes into a video would be welcome.
The complexity of what you're drawing shouldn't matter. You draw whatever you're going to draw, then call eglSwapBuffers() to submit the buffer. Whether you draw one flat-shaded triangle or 100K super-duper-shaded triangles, you're still just submitting a buffer of data to the video encoder or the surface compositor.
There is no equivalent to SurfaceView's surfaceCreated() and surfaceChanged(), because the Surface is created by MediaCodec#createInputSurface() (so you know when it's created), and the Surface does not change.
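A minimal sketch of that creation point; the mime type is real ("video/avc"), but the resolution, bitrate, and frame-rate values are placeholders:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;

    Surface createEncoderInputSurface() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface surface = encoder.createInputSurface(); // the Surface in question
        encoder.start();
        return surface; // the encoder itself must also be kept and drained
    }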
The code that uses GeneratedMovie does some fairly trivial rendering (set scissor rect, call clear). The code in RecordFBOActivity is what you should probably be looking at -- it has a bouncing rect and a spinning triangle, and demonstrates three different ways to deal with the fact that you have to render twice.
(The code in HardwareScalerActivity uses the same GLES routines and demonstrates texturing, but it doesn't do recording.)
The key thing is to manage your EGLContext and EGLSurfaces carefully. The various bits of GLES state are held in the EGLContext, which can be current on only one thread at a time. It's easiest to use a single context and set up a separate EGLSurface for each Surface, but you can also create separate contexts (with or without sharing) and switch between them.
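A hedged sketch of the single-context, two-surface pattern; all the EGL objects are assumed to have been created elsewhere (for example with Grafika's EglCore helpers), and drawScene() stands in for your own GLES drawing code:

    import android.opengl.EGL14;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLExt;
    import android.opengl.EGLSurface;

    void renderTwice(EGLDisplay display, EGLContext context,
                     EGLSurface displaySurface, EGLSurface encoderSurface,
                     long ptsNanos) {
        // Pass 1: render to the display.
        EGL14.eglMakeCurrent(display, displaySurface, displaySurface, context);
        drawScene();
        EGL14.eglSwapBuffers(display, displaySurface);

        // Pass 2: render the same scene to the video encoder's input surface.
        EGL14.eglMakeCurrent(display, encoderSurface, encoderSurface, context);
        drawScene(); // one context, so the same programs and state are available
        EGLExt.eglPresentationTimeANDROID(display, encoderSurface, ptsNanos);
        EGL14.eglSwapBuffers(display, encoderSurface); // submits frame to encoder
    }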
Some additional background material is available here.
