What is the relation between MediaExtractor, MediaCodec and MediaMuxer in the Android SDK?

I'm trying to run some deep learning experiments on video samples on Android, and I've got stuck on remuxing videos. I have a couple of questions to arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest but I'm still confused.
My questions:
1. Can I read video with MediaExtractor and then pass the data to MediaMuxer to save the video into another file, without using MediaCodec?
2. If I want to modify frames before saving, can I do that without using a Surface, just by modifying ByteBuffers? I assume that I need to decode the data from MediaExtractor, then modify the content, then encode it to MediaMuxer.
3. Is a sample the same as a frame in the context of the method MediaExtractor::readSampleData?
4. Do I need to decode samples?

This is a brief description of what each class does:
MediaExtractor: Extracts encoded video/audio data from a container file.
MediaCodec: Depending on how it's configured, it can act as a decoder or an encoder.
MediaMuxer: Muxes streams of encoded data into an output file.
This is how your pipeline should generally look:
MediaExtractor -> MediaCodec(As Decoder) -> Your editing -> MediaCodec(As Encoder) -> MediaMuxer
To answer your questions:
1. MediaExtractor will give you encoded data; if you want to do anything with it, you will have to decode it using a MediaCodec.
2. It might be possible to do so without a Surface, but it will be pretty limited. Surfaces are the way to go. You can find more info here: Editing frames and encoding with MediaCodec
3. A sample can be a video frame or an audio sample.
4. Yes, you do need to decode samples to edit them.
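For question 1 specifically: a plain remux, with no frame editing, does not need MediaCodec at all; you can feed MediaExtractor's encoded samples straight into MediaMuxer. A minimal sketch (the file paths and buffer size are placeholders; MediaMuxer needs API 18+):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaMuxer;

final class Remuxer {
    // Pass-through remux: encoded samples are copied as-is, no decode/encode.
    static void remux(String inPath, String outPath) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inPath);
        MediaMuxer muxer = new MediaMuxer(outPath,
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Select every track and register its format with the muxer.
        int[] trackMap = new int[extractor.getTrackCount()];
        for (int i = 0; i < trackMap.length; i++) {
            extractor.selectTrack(i);
            trackMap[i] = muxer.addTrack(extractor.getTrackFormat(i));
        }
        muxer.start();

        ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // must hold the largest sample
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            info.size = extractor.readSampleData(buffer, 0); // one sample = one encoded frame/packet
            if (info.size < 0) break;                        // end of stream
            info.offset = 0;
            info.presentationTimeUs = extractor.getSampleTime();
            // SAMPLE_FLAG_SYNC happens to match BUFFER_FLAG_SYNC_FRAME (both 1),
            // which is what the muxer needs to mark keyframes.
            info.flags = extractor.getSampleFlags();
            muxer.writeSampleData(trackMap[extractor.getSampleTrackIndex()], buffer, info);
            extractor.advance();
        }
        muxer.stop();
        muxer.release();
        extractor.release();
    }
}
```

Each iteration of the loop copies exactly one sample, which also illustrates question 3: for video tracks, one sample is one encoded frame.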

Related

Is it possible to render frames in ExoPlayer?

I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all the cases. I was wondering whether it's possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports normal files and streams, but not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream and then want to work on them individually or display them individually, you may find that OpenCV suits your use case.
You can fairly simply open a stream or file, go frame by frame, and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
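As an illustration of that loop, a minimal sketch using OpenCV's Java bindings; the path is a placeholder, and whether file/stream decoding is available on Android depends on how the OpenCV build was configured:

```java
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

final class FrameLoop {
    static void process(String path) {
        // Assumes the OpenCV native library is already loaded
        // (e.g. via OpenCVLoader.initDebug() on Android).
        VideoCapture cap = new VideoCapture(path); // file path or stream URL
        Mat frame = new Mat();
        while (cap.read(frame)) {  // returns false at end of stream
            // work on the decoded frame (a BGR Mat) here
        }
        cap.release();
    }
}
```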

Android MediaCodec: is it possible to encode audio and video at the same time using MediaCodec and MediaMuxer?

There is some good documentation on the bigflake site about how to use MediaMuxer and MediaCodec to encode and then decode video as MP4, or extract video and then encode it again, and more.
But there doesn't seem to be a way to encode audio together with video; there is no documentation or code about this. It doesn't seem impossible.
Question
Do you know any stable way of doing it that will work on all devices running API level 18 (Android 4.3) and above?
Why has no one implemented it? Is it hard to implement?
You have to create two MediaCodec instances, one for video and one for audio, and then use MediaMuxer to mux the video with the audio after encoding. You can take a look at ExtractDecodeEditEncodeMuxTest.java, and at this project, which captures camera/mic input and saves it to an MP4 file using MediaMuxer and MediaCodec.
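A rough sketch of that two-instance setup; all the format parameters here are illustrative, not required values:

```java
import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

final class EncoderSetup {
    // Returns {videoEncoder, audioEncoder}; both then feed one MediaMuxer.
    static MediaCodec[] build() throws IOException {
        MediaFormat video = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        video.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // Surface input, API 18+
        video.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        video.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        video.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec videoEncoder = MediaCodec.createEncoderByType("video/avc");
        videoEncoder.configure(video, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // videoEncoder.createInputSurface() would be called here for Surface input.

        MediaFormat audio = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 1);
        audio.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        audio.setInteger(MediaFormat.KEY_BIT_RATE, 128_000);
        MediaCodec audioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
        audioEncoder.configure(audio, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

        return new MediaCodec[] { videoEncoder, audioEncoder };
    }
}
```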

How to record video and audio with MediaCodec and MediaMuxer

I am able to record (encode) video with the help of MediaCodec and MediaMuxer. Next, I need to work on the audio part and mux the audio with the video, again with MediaCodec and MediaMuxer.
I am facing two problems:
1. How do I encode audio with MediaCodec? Do I need to encode audio and video in separate threads?
2. How can I pass both audio and video data to MediaMuxer, given that the writeSampleData() method takes only one type of data at a time?
I referred to MediaMuxerTest, but it uses MediaExtractor. I need to use MediaCodec, since the video encoding is done with MediaCodec. Please correct me if I am wrong.
Any suggestion or advice will be very helpful, as there is no proper documentation available for these new APIs.
Note:
My app targets API 18+ (Android 4.3+).
I have referred to Grafika for the video encoding.
No, you don't necessarily need a separate thread for audio; just use two separate MediaCodec instances.
The first parameter of writeSampleData is trackIndex, which allows you to specify which track each packet corresponds to. (By running addTrack twice, once for each track, you get two separate track IDs.)
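A sketch of that pattern with a hypothetical helper class; the class and method names are made up, but the calls are the real MediaMuxer API:

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;

// Hypothetical helper showing the two-track pattern: addTrack is called once
// per stream, start() waits until both formats are known, and
// writeSampleData() routes each packet by its track index.
final class MuxerSink {
    private final MediaMuxer muxer;
    private int videoTrack = -1;
    private int audioTrack = -1;
    private boolean started;

    MuxerSink(MediaMuxer muxer) {
        this.muxer = muxer;
    }

    // Call these when each encoder reports INFO_OUTPUT_FORMAT_CHANGED.
    synchronized void setVideoFormat(MediaFormat format) {
        videoTrack = muxer.addTrack(format);
        maybeStart();
    }

    synchronized void setAudioFormat(MediaFormat format) {
        audioTrack = muxer.addTrack(format);
        maybeStart();
    }

    private void maybeStart() {
        if (videoTrack >= 0 && audioTrack >= 0 && !started) {
            muxer.start();
            started = true;
        }
    }

    // Real code should queue packets that arrive before start(); here they
    // are simply dropped to keep the sketch short.
    synchronized void writeVideo(ByteBuffer buf, MediaCodec.BufferInfo info) {
        if (started) muxer.writeSampleData(videoTrack, buf, info);
    }

    synchronized void writeAudio(ByteBuffer buf, MediaCodec.BufferInfo info) {
        if (started) muxer.writeSampleData(audioTrack, buf, info);
    }
}
```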

Android - using MediaCodec with byte array only

I'm working through some samples of MediaCodec usage from the BigFlake and Grafika examples.
I can't find any example of, or way of, creating video without a GL surface as the buffer.
My app uses a custom byte array and sends it to the FFmpeg library, which creates and encodes the video file, but I'd like to add support for Android 4.3/4.4 with MediaCodec.
Is there any way to send a byte[] array to the encoder?
For example, I'd like to use onPreviewFrame(byte[] frame) from the camera.
I don't want to send it to the GL surface, just encode it as is.
Any idea?
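For context, a MediaCodec encoder configured without an input Surface exposes input ByteBuffers that can be filled with raw YUV, which is the byte[]-only route being asked about. A rough sketch in the pre-API-21 buffer style that 4.3/4.4 requires; the catch is the color format, since the camera's NV21 usually has to be converted to whatever format the device's encoder actually supports:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

final class ByteArrayEncoder {
    static MediaCodec build(int width, int height) throws IOException {
        MediaFormat fmt = MediaFormat.createVideoFormat("video/avc", width, height);
        // Device-dependent: query the codec's capabilities for the real value.
        fmt.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        fmt.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        fmt.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        fmt.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    }

    // Call from onPreviewFrame(byte[] frame, Camera camera) after converting
    // the frame to the encoder's color format.
    static void feed(MediaCodec encoder, byte[] frame, long ptsUs) {
        int inIndex = encoder.dequeueInputBuffer(10_000); // 10 ms timeout
        if (inIndex >= 0) {
            ByteBuffer in = encoder.getInputBuffers()[inIndex]; // pre-API-21 style
            in.clear();
            in.put(frame);
            encoder.queueInputBuffer(inIndex, 0, frame.length, ptsUs, 0);
        }
        // Drain encoder.dequeueOutputBuffer(...) elsewhere to collect the H.264 output.
    }
}
```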

How to capture H.264 encoded frame in android?

As I understand it, there are two ways of capturing video in Android:
1) using the SurfaceView API
2) using the MediaRecorder API
I want to capture H.264 encoded frames using Android (3.0+)'s default encoder, to send them over the network using RTP.
While using preview callbacks with the SurfaceView and SurfaceHolder classes, we are able to get the raw frames shown as the preview to the user. We were getting the frames in the onPreviewFrame method of the PreviewCallback class.
But those frames are not H.264 encoded.
So I tried the MediaRecorder API to set H.264 encoding, together with SurfaceView to get the preview frames.
In this case, the preview callbacks are not getting called.
Can you please let me know how to achieve this? Our main aim is to get the H.264 encoded frames (which have been encoded using Android's default codec).
Ref: 1) https://stackoverflow.com/a/8655244/698316
2) Similar issue: http://permalink.gmane.org/gmane.comp.handhelds.android.devel/214422
Can you suggest a way to capture H.264 encoded frames using Android's default H.264 codec support?
See Spydroid http://code.google.com/p/spydroid-ipcamera/
Basically, you let the video encoder write a .mp4 with H.264 to a special file descriptor that calls your code on write, then strip off the MP4 header and turn the H.264 NALUs into RTP packets.
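A rough sketch of that trick using a pipe (Spydroid itself uses a LocalSocket, but the idea is the same). Note that the container header written to a non-seekable descriptor is never finalized, which is why the stream has to be parsed for NALUs rather than treated as a valid file:

```java
import java.io.IOException;
import java.io.InputStream;
import android.hardware.Camera;
import android.media.MediaRecorder;
import android.os.ParcelFileDescriptor;

final class PipeRecorder {
    static InputStream start(Camera camera) throws IOException {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();

        MediaRecorder recorder = new MediaRecorder();
        camera.unlock();                    // hand the camera to MediaRecorder
        recorder.setCamera(camera);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(pipe[1].getFileDescriptor()); // write end of the pipe
        recorder.prepare();
        recorder.start();

        // Read the encoded bytes as they arrive: skip the container header,
        // split the payload into H.264 NALUs, and wrap each one in RTP.
        return new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);
    }
}
```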
