I am using Google's open source example Grafika, specifically its ContinuousCaptureActivity.java. The circular-buffer implementation is demonstrated in this Activity, but there is no audio in the resulting video file.
I want to add audio recording to this Activity and feed the recorded audio into the video in the same circular-buffered fashion.
To achieve this I have explored the MediaCodec API, and I have also used MediaMuxer (introduced in Android 4.3, API 18) to capture video and audio streams and mux them into a single video.
However, I am not sure how to implement the audio recording inside the ContinuousCaptureActivity.java class. Any help is highly appreciated.
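For reference, here is a minimal sketch of what the audio side could look like: an AudioRecord feeding a MediaCodec AAC encoder on its own thread, with each encoded packet handed to a circular buffer the same way Grafika's CircularEncoderBuffer holds video packets. The mCircBuffer hand-off is hypothetical, and the sample rate, bit rate, and buffer sizes are placeholder values.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaRecorder;
import java.nio.ByteBuffer;

public class CircularAudioEncoder {
    private static final String MIME = "audio/mp4a-latm";   // AAC
    private static final int SAMPLE_RATE = 44100;

    private AudioRecord mRecord;
    private MediaCodec mEncoder;
    private long mSamplesRead;          // running sample count, used for timestamps
    private volatile boolean mRunning;

    public void start() throws Exception {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        mRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

        MediaFormat format = MediaFormat.createAudioFormat(MIME, SAMPLE_RATE, 1);
        format.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);

        mEncoder = MediaCodec.createEncoderByType(MIME);
        mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        mEncoder.start();
        mRecord.startRecording();
        mRunning = true;
        new Thread(new Runnable() {
            @Override public void run() { encodeLoop(); }
        }, "audio-encoder").start();
    }

    private void encodeLoop() {
        ByteBuffer[] inputs = mEncoder.getInputBuffers();    // API 18-safe buffer access
        ByteBuffer[] outputs = mEncoder.getOutputBuffers();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (mRunning) {
            // Feed raw PCM from the mic into the encoder's input buffers.
            int in = mEncoder.dequeueInputBuffer(10000);
            if (in >= 0) {
                ByteBuffer buf = inputs[in];
                buf.clear();
                int bytes = mRecord.read(buf, buf.capacity());
                // Derive the timestamp from the number of PCM samples captured so far.
                long ptsUs = mSamplesRead * 1000000L / SAMPLE_RATE;
                mSamplesRead += Math.max(bytes, 0) / 2;      // 16-bit mono: 2 bytes/sample
                mEncoder.queueInputBuffer(in, 0, Math.max(bytes, 0), ptsUs, 0);
            }
            // Drain encoded AAC packets; this is where the circular buffer comes in.
            int out = mEncoder.dequeueOutputBuffer(info, 0);
            if (out == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                outputs = mEncoder.getOutputBuffers();
            } else if (out >= 0) {
                ByteBuffer encoded = outputs[out];
                // Hypothetical hand-off, mirroring Grafika's video path, e.g.:
                // mCircBuffer.add(encoded, info.flags, info.presentationTimeUs);
                mEncoder.releaseOutputBuffer(out, false);
            }
        }
    }
}
```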
Related
There is some good documentation on a site called Bigflake about how to use MediaMuxer and MediaCodec to encode and then decode video as MP4, extract video and re-encode it, and more.
But there doesn't seem to be a way to encode audio together with video at the same time; there is no documentation or code about this. It doesn't seem impossible, though.
Question
Do you know of a stable way of doing it that will work on all devices running API level 18 and above?
Why has no one implemented it? Is it hard to implement?
You have to create two MediaCodec instances, one for video and one for audio, and then use MediaMuxer to mux the video with the audio after encoding. Take a look at ExtractDecodeEditEncodeMuxTest.java, and at this project, which captures camera and microphone input and saves it to an MP4 file using MediaMuxer and MediaCodec.
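A rough sketch of that setup, with illustrative formats and an assumed output path, might look like this (the track registration and draining happen later, once each encoder reports its output format):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;
import java.io.IOException;

/** Sets up the two encoders plus the shared muxer; all values are illustrative. */
static void setUpEncoders(String outputPath) throws IOException {
    MediaFormat video = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    video.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    video.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
    video.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    video.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaFormat audio = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 1);
    audio.setInteger(MediaFormat.KEY_AAC_PROFILE,
            MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audio.setInteger(MediaFormat.KEY_BIT_RATE, 64000);

    MediaCodec videoEncoder = MediaCodec.createEncoderByType("video/avc");
    videoEncoder.configure(video, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface inputSurface = videoEncoder.createInputSurface(); // camera renders here

    MediaCodec audioEncoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    audioEncoder.configure(audio, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

    // One muxer receives the output of both encoders. addTrack()/start() are
    // called later, after each encoder signals INFO_OUTPUT_FORMAT_CHANGED.
    MediaMuxer muxer = new MediaMuxer(outputPath,
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
}
```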
I know Twilio doesn't support video-call recording on the server, but I've been trying to figure out how to do it locally on the Android end. I have studied the video-quickstart-android code, trying to figure out how I can extract the video stream from the LocalVideoTrack and VideoTrack classes of the Twilio Android Conversations API, but I couldn't find any method for extracting the underlying video stream and recording it locally on the device.
Does anyone have any idea how I can get the video stream from the Twilio Conversations API for Android, in order to record the video locally on the device?
You would have to write a custom video renderer that takes each frame and converts it into your preferred media format.
As an example, the VideoViewRenderer takes frames and passes them to org.webrtc.SurfaceViewRenderer, rendering them to a View. In this case you would write another renderer, perhaps named VideoRecorderRenderer, that implements the VideoRenderer interface and does the work of taking each I420Frame and converting it to a media type. You could then add the VideoRecorderRenderer to the VideoTrack. However, this alone may not be the solution you are looking for, since it covers only the video portion of the media and does not provide the audio. The AudioTrack does not expose an interface for capturing the audio output at the moment.
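A skeleton of such a VideoRecorderRenderer, assuming the renderFrame(I420Frame) callback shape described above, might look like this. The exact package and signatures of VideoRenderer and I420Frame vary by SDK version, so treat this as a shape rather than a drop-in class:

```java
// Hypothetical skeleton of the renderer described above.
public class VideoRecorderRenderer implements VideoRenderer {
    @Override
    public void renderFrame(I420Frame frame) {
        // Each callback delivers one decoded frame in I420 (YUV) layout.
        // Instead of drawing it, copy the Y/U/V planes into a MediaCodec
        // video encoder's input buffer (or convert to a color format the
        // encoder accepts first), then queue it with a presentation timestamp.
    }
}

// Attached alongside the on-screen renderer:
// videoTrack.addRenderer(new VideoRecorderRenderer());
```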
I have an MPEG-2 transport stream file, and using the newfangled MediaCodec API in Android, I can play the video frames. I'm using MediaExtractor to load the file and calling getTrackFormat to get a MediaFormat. The reported MIME type for the format is "video/avc".
Is there a straightforward way to play the audio at the same time, and make sure the video and audio are synchronized?
I'm aware of the SimplePlayer sample code for stagefright, but that's in C and uses undocumented interfaces. Can it be done in Java?
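For context, a minimal sketch of locating both tracks with MediaExtractor (the same API the question already uses) could look like the following, with the file path as a placeholder and one common sync approach noted in the comments:

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.io.IOException;

/** Minimal sketch: find the audio and video tracks in the transport stream. */
static void findTracks(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);

    int videoTrack = -1, audioTrack = -1;
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        if (videoTrack < 0 && mime.startsWith("video/")) videoTrack = i; // e.g. video/avc
        if (audioTrack < 0 && mime.startsWith("audio/")) audioTrack = i;
    }
    // Decode the audio track with a second MediaCodec, write the PCM it
    // produces to an android.media.AudioTrack, and use each output buffer's
    // presentationTimeUs as the shared clock that the video render loop waits
    // on; that is one common way to keep the two streams in sync in Java.
}
```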
I am able to record (encode) video with the help of MediaCodec and MediaMuxer. Next, I need to work on the audio part and mux the audio with the video, again with MediaCodec and MediaMuxer.
I am facing two problems:
1. How do I encode audio with MediaCodec? Do I need to encode audio and video in separate threads?
2. How can I pass audio and video data to MediaMuxer, given that the writeSampleData() method takes only one track's data at a time?
I referred to MediaMuxerTest, but it uses MediaExtractor. I need to use MediaCodec, since the video encoding is done with MediaCodec. Please correct me if I am wrong.
Any suggestion or advice will be very helpful as there is no proper documentation available for these new APIs.
Note:
My app targets API 18+ (Android 4.3+).
I have referred to Grafika for video encoding.
No, you don't necessarily need a separate thread for audio; just use two separate MediaCodec instances.
The first parameter of writeSampleData() is trackIndex, which lets you specify which track each packet corresponds to. (By calling addTrack() twice, once for each track, you get two separate track IDs.)
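A sketch of that flow, with the encoder-draining logic elided and assuming both encoders have already signaled INFO_OUTPUT_FORMAT_CHANGED before the tracks are added:

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.io.IOException;

/** Illustrative two-track muxing flow; the per-encoder drain loops are elided. */
static void muxBothTracks(MediaCodec videoEncoder, MediaCodec audioEncoder,
                          String outputPath) throws IOException {
    MediaMuxer muxer = new MediaMuxer(outputPath,
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    // Register both tracks before start(); each addTrack() call returns the
    // track index to pass to writeSampleData() later.
    int videoTrack = muxer.addTrack(videoEncoder.getOutputFormat());
    int audioTrack = muxer.addTrack(audioEncoder.getOutputFormat());
    muxer.start();

    // While draining each encoder, the trackIndex argument routes every
    // packet to the right track, so one muxer interleaves both streams:
    //   muxer.writeSampleData(videoTrack, encodedBuf, bufferInfo);
    //   muxer.writeSampleData(audioTrack, encodedBuf, bufferInfo);

    // Once both streams have seen BUFFER_FLAG_END_OF_STREAM:
    muxer.stop();
    muxer.release();
}
```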
I am trying to add a short audio clip on top of a video clip. I have been able to use Android's MediaMuxer to combine a video-only .mp4 with an audio-only .mp4, but my current task is to overlay an audio-only .mp4 clip onto the middle of a video+audio .mp4 clip. I have tried the mp4parser library suggested in other threads, but ran into trouble with that route (the SampleDescriptionBoxes never match).
My idea was to call writeSampleData() on the MediaMuxer with data extracted from the separate .mp4 clips: I would write the audio from the original video, then at the given point start writing the new audio, and finally return to the audio from the original video. Does anyone know if this is feasible? I am struggling mostly because I don't understand how presentationTimeUs is supposed to work, and the result is not streamable.
Any suggestions/help are appreciated!
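One hedged sketch of the writeSampleData() idea: copy samples from each source with MediaExtractor and shift presentationTimeUs by where that clip should land on the timeline, since timestamps must increase monotonically within a track. The helper name and buffer size here are illustrative.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

/** Copies one source's selected track into the muxer, shifted by offsetUs. */
static void copySamples(MediaExtractor extractor, MediaMuxer muxer,
                        int dstTrack, long offsetUs) {
    ByteBuffer buf = ByteBuffer.allocate(1 << 20);   // 1 MiB scratch buffer
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        int size = extractor.readSampleData(buf, 0);
        if (size < 0) break;                          // end of this source
        // Shift every sample by where this clip starts on the output timeline.
        long ptsUs = extractor.getSampleTime() + offsetUs;
        int flags = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                ? MediaCodec.BUFFER_FLAG_SYNC_FRAME : 0;
        info.set(0, size, ptsUs, flags);
        muxer.writeSampleData(dstTrack, buf, info);
        extractor.advance();
    }
}
```

You would call this once per segment (original audio up to the splice point, then the overlay clip with its offset, then the original audio again), adjusting offsetUs and the extractor's seek position for each segment. On streamability: MediaMuxer writes the MP4 moov box at the end of the file, so its output generally isn't streamable without a post-processing "faststart" pass; that is likely separate from the timestamp handling.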