Encoding a video with MediaCodec on Android

Currently, I am playing a video on a GLSurfaceView using OpenGL ES 2.0. Now I am looking for a way to encode the video played on that surface view into an MP4 file using MediaCodec.
I found the bigflake example, which seems to solve my issue perfectly (http://bigflake.com/mediacodec/EncodeAndMuxTest.java.txt).
However, I can't figure out how to set up the input source correctly. The example uses mEncoder.createInputSurface() to create the input source, but in my case the video is actually played on a GLSurfaceView. So how do I set my own surface as the input source for the encoder?

Since you are using GLSurfaceView, you need to insert the intercepting code in onDrawFrame(), and allocate the surface in onSurfaceCreated().
The input surface can be created as usual, after setting up the encoder parameters.
The interception can be done by copying the EGL scene into a framebuffer with a copy shader, and then calling swapBuffers on the encoder surface to encode the frame.
Take a look at the tutorial on capturing an arbitrary EGL scene at
https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials-video-capturing-for-opengl-applications
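For orientation, here is a minimal sketch of the encoder setup such a tutorial assumes (resolution, bitrate and frame rate are placeholder values); the Surface returned by createInputSurface() is what you then wrap as an EGL window surface and render the copied scene into:

```java
import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

private MediaCodec mEncoder;
private Surface mInputSurface;

// Sketch: configure an H.264 encoder and grab its input Surface.
private void prepareEncoder() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    mEncoder = MediaCodec.createEncoderByType("video/avc");
    mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    // createInputSurface() must be called after configure() and before start().
    mInputSurface = mEncoder.createInputSurface();
    mEncoder.start();
}
```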

Related

How to feed the video frames directly to Encoder surface, without using Decoder?

I am developing a video compressor app. The general architecture everyone follows is to first decode the video frames onto the decoder's output surface, then swap those buffers directly to the encoder's input surface and encode them. That uses both a decoder and an encoder, which takes twice the time.
My question is: is it possible to pass my video frames directly to the encoder's input surface, without sending them through the decoder's output surface? I don't want to use a decoder.
If it is possible, kindly suggest a solution.
I looked into the EncodeAndMux example, where the program creates frames and passes them directly to the encoder's input surface. I want to pass my video frames to the encoder surface in a similar way.
Thank you in advance.
If you already have decoded or raw frames, then yes, you can pass them to the encoder's input surface without using a decoder (check this answer). Otherwise you have to decode the frames first, using the decoder's input/output buffers.
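If the raw frames you have are available as Bitmaps, one way to push them straight to the encoder is to draw them onto the input Surface with a hardware canvas. This is only a hedged sketch, assuming API 23+ and an encoder set up as in the EncodeAndMux example; `inputSurface` and `frames` are placeholder names:

```java
import java.util.List;

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.Surface;

// Sketch: push already-available frames to the encoder's input Surface,
// no decoder involved.
void feedFrames(Surface inputSurface, List<Bitmap> frames) {
    for (Bitmap frame : frames) {
        // lockHardwareCanvas() (API 23+) gives a GPU-backed canvas; a plain
        // lockCanvas() is not guaranteed to work with a MediaCodec input Surface.
        Canvas canvas = inputSurface.lockHardwareCanvas();
        try {
            canvas.drawBitmap(frame, 0f, 0f, null);
        } finally {
            // unlockCanvasAndPost() submits this frame to the encoder.
            inputSurface.unlockCanvasAndPost(canvas);
        }
    }
}
```

The alternative used by the bigflake example is to render each frame with OpenGL ES into an EGL window surface created from the input Surface, which also lets you stamp an explicit presentation time on every frame.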

MediaCodec record video with timestamp on each video frame

I need to record video with a timestamp on each video frame. I saw an example in the CTS which uses InputSurface.java and OutputSurface.java to connect a decoder and an encoder to transcode video files. Is it possible to reuse these two Android Java classes to implement a timestamped video recorder?
I tried to use OutputSurface as the camera preview output and InputSurface as the MediaCodec encoder input, but it seems to record only 2 or 3 frames and then hangs there forever!
Take your time and explore this link to get an idea of how to feed the camera preview into a video file. Once you are confident about the mechanism, you should be able to feed the MediaCodec input surface with some kind of OpenGL magic to put extra graphics on top of the camera's preview. I would recommend tweaking the example code's drawExtra() as a start.
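As a rough illustration (not the CTS code itself), a drawExtra()-style helper could render the current time into a small Bitmap each frame and upload it to a GL texture; your renderer then draws that texture as a quad over the camera frame before swapping buffers to the encoder surface:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Sketch: refresh a GL texture with the current timestamp text.
void updateTimestampTexture(int textureId) {
    String stamp = new SimpleDateFormat("HH:mm:ss.SSS", Locale.US).format(new Date());

    Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.WHITE);
    paint.setTextSize(32f);
    canvas.drawText(stamp, 8f, 44f, paint);

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    // After this, draw a small quad textured with textureId on top of the
    // camera frame (this is roughly what a drawExtra()-style helper would do).
}
```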

Multiple videos on one Surface

I have a single, fullscreen SurfaceView, and I have multiple network streams with H.264 video which I can decode using MediaCodec. Is it possible to specify to which coordinates of the Surface the video will be rendered, so that I can create a kind of video mosaic?
No, that's not possible. You'll need to use multiple SurfaceTextures instead, one per video decoder, and render all the textures into one view using OpenGL.
See https://source.android.com/devices/graphics/architecture.html for more explanations on how this works; in particular, each Surface can only have one producer and one consumer.
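A hedged sketch of that setup, per video stream (your decoder and MediaFormat are assumed to exist already; the actual quad drawing with an external-OES shader is left out):

```java
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

// Sketch: give one decoder its own external-OES texture to render into.
SurfaceTexture attachDecoderToTexture(MediaCodec decoder, MediaFormat format) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

    SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
    Surface decoderSurface = new Surface(surfaceTexture);
    decoder.configure(format, decoderSurface, null, 0);
    return surfaceTexture;
}

// In the renderer, per frame and per stream: latch the newest decoded image,
// set the viewport to that stream's tile, and draw the texture as a quad
// with an external-OES fragment shader (quad drawing omitted here).
void drawStream(SurfaceTexture surfaceTexture, int x, int y, int w, int h) {
    surfaceTexture.updateTexImage();
    GLES20.glViewport(x, y, w, h);
    // ... render a full-viewport quad sampling the external texture here ...
}
```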
With a single SurfaceView - no. For more information you can explore the SurfaceView source code. You might be able to build some kind of mosaic effect by using a few SurfaceViews and splitting the byte buffers so that one video is distributed across several SurfaceViews, together forming the full picture.
But either way, it would not be a good idea if we are talking about performance.

Processing Frames from Mediacodec Output and Update the Frames on Android

I am doing a project on image processing stuff. I receive a raw h264 video stream in real time and decode it using MediaCodec. I have successfully displayed the decoded video on a TextureView or SurfaceView. Now I want to process each frame, do something to it using OpenCV4Android and show the updated video frame on the screen. I know OpenCV has a sample project that demonstrates how to process video frames from the phone camera, but I wonder how to do it if I have another video source.
Also I have some questions on TextureView:
What does onSurfaceTextureUpdated() from SurfaceTextureListener do? If I call getBitmap() in this callback, does that mean I get each frame of the video? And what about SurfaceTexture.OnFrameAvailableListener?
Is it possible to use a hidden TextureView as an intermediate, extract its frames for processing, and render them back to another surface, say an OpenGL ES texture, for display?
The various examples in Grafika that use Camera as input can also work with input from a video stream. Either way you send the video frame to a Surface.
If you want to work with a frame of video in software, rather than on the GPU, things get more difficult. You either have to receive the frame on a Surface and copy it to a memory buffer, probably performing an RGB-to-YUV color conversion in the process, or you have to get the YUV buffer output from MediaCodec. The latter is tricky because a few different formats are possible, including Qualcomm's proprietary tiled format.
With regard to TextureView:
onSurfaceTextureUpdated() is called whenever TextureView receives a new frame. You can use getBitmap() to get every frame of the video, but you need to pace the video playback to match your filter speed -- TextureView will drop frames if you fall behind.
You could create a "hidden TextureView" by putting other View elements on top of it, but that would be silly. TextureView uses a SurfaceTexture to convert the video frames to OpenGL ES textures, then renders them as part of drawing the View UI. The bitmap data is retrieved with glReadPixels(). You can just use these elements directly. The bigflake ExtractMpegFramesTest demonstrates this.
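To make the getBitmap() path concrete, here is a hedged sketch (assuming the OpenCV4Android library has already been loaded) that pulls each frame out of a TextureView and hands it to OpenCV; keep the pacing caveat above in mind:

```java
import android.graphics.Bitmap;
import android.graphics.SurfaceTexture;
import android.view.TextureView;

import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: process every frame that the TextureView latches.
void attachFrameProcessor(final TextureView textureView) {
    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) { }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) {
            // Called once for each new frame the TextureView receives.
            Bitmap frame = textureView.getBitmap();   // software copy of the frame
            if (frame == null) return;
            Mat mat = new Mat();
            Utils.bitmapToMat(frame, mat);
            Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2GRAY);  // example processing
            // ... display or re-encode the processed frame elsewhere ...
            mat.release();
        }
    });
}
```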

Recording a Surface using MediaCodec

So, in my application I am able to apply effects (like a blur or Gaussian filter) to video that comes from the camera, using the GPUImage library.
Basically, the library takes the input from the camera, gets the raw byte data, converts it from YUV to RGBA format, then applies the effects to this image and displays it on the Surface of a GLSurfaceView using OpenGL. Finally, to the user it looks like a video with effects applied.
Now I want to record the frames of that Surface as a video using the MediaCodec API,
but this discussion says that we cannot pass a predefined Surface to MediaCodec.
I have seen some samples at bigflake where the Surface is created with MediaCodec.createInputSurface(), but for me the Surface comes from the GLSurfaceView.
So, how can I record the frames of a Surface as a video?
I will record the audio in parallel, merge the video and audio using FFmpeg, and present it to the user as a video with the effects applied.
You can see a complete example of this in Grafika.
In particular, the "Show + capture camera" activity records camera output to .mp4. It also demonstrates applying some simple image processing techniques in the GL shader. It uses a GLSurfaceView and a convoluted dance to keep the recording going across orientation changes.
Also possibly of interest, the "Record GL app with FBO" activity records OpenGL ES rendering in a couple of different ways. It uses a plain SurfaceView and is much more straightforward.
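The core of that recording trick, reduced to a hedged sketch (assuming an existing EGL14 setup; eglDisplay, eglConfig, eglContext, the encoder and the per-frame timestamp are your own objects), is to create a second EGL window surface from the encoder's input Surface and render each frame to it as well as to the display:

```java
import android.media.MediaCodec;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.view.Surface;

// Sketch: wrap the encoder's input Surface in an EGL window surface.
EGLSurface createEncoderEglSurface(EGLDisplay eglDisplay, EGLConfig eglConfig,
                                   MediaCodec encoder) {
    Surface encoderInput = encoder.createInputSurface();  // call before encoder.start()
    int[] surfaceAttribs = { EGL14.EGL_NONE };
    return EGL14.eglCreateWindowSurface(eglDisplay, eglConfig, encoderInput,
            surfaceAttribs, 0);
}

// Per frame, after drawing to the display surface: make the encoder surface
// current, repeat the same draw calls, stamp a presentation time, and swap.
void renderFrameToEncoder(EGLDisplay eglDisplay, EGLContext eglContext,
                          EGLSurface encoderEglSurface, long timestampNs) {
    EGL14.eglMakeCurrent(eglDisplay, encoderEglSurface, encoderEglSurface, eglContext);
    // ... re-issue the scene's GL draw calls here (same code as for the display) ...
    EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderEglSurface, timestampNs);
    EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);  // queues the frame to the encoder
}
```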
