How to combine 2 .mp4 videos on an Android device?

The goal is to combine two .mp4 videos so that one plays after the other. There are countless references to FFmpeg, which is not a great option here since it involves the NDK and makes the project really heavy for a fairly minor feature.
I came to know that MediaCodec has been improved a lot, and I need a guide to help me through it, but I couldn't find any.
I am also looking to combine an .mp3 and a single photo into an .mp4.
Please help.
For the second task I tried the JavaCV library, but it was a dead end.
The code is as follows:
FFmpegFrameGrabber videoFrames = FFmpegFrameGrabber.createDefault(videoSource);
videoFrames.start();
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(videoFile, 200, 200);
recorder.setFrameRate(10);
recorder.start();
// grab a single frame and repeat it for the desired duration
Frame frame = videoFrames.grab();
for (int i = 0; i < (numSeconds * 10); i++) {
    recorder.record(frame);
}
recorder.stop();
recorder.release();
videoFrames.stop();
videoFrames.release();
It takes 12 seconds just to make a 5-frame video, and the fps is 25.

If you want to merge two video files, the easy way is mp4parser (it has everything you need). The other way is to extract the audio and video tracks with MediaExtractor, then use MediaMuxer to merge the two files into one. Also, make sure the files are identical in terms of resolution, frame rate and bitrate, otherwise the merge may not be seamless. In that case you may need to re-encode the files; you can do that with FFmpeg, or you might want to consider Android's own tools such as MediaCodec.
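For the MediaExtractor/MediaMuxer route, a minimal sketch could look like the following. It copies only the video track, assumes both clips share the same codec parameters, and uses placeholder paths and a placeholder buffer size; the audio track would be handled the same way with a second muxer track, and exception handling is omitted.
MediaMuxer muxer = new MediaMuxer("/sdcard/joined.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int dstTrack = -1;
long timeOffsetUs = 0;
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024); // must be large enough for the biggest sample
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
for (String path : new String[] {"/sdcard/clip1.mp4", "/sdcard/clip2.mp4"}) {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);
    // select the first video track of this clip
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        if (format.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
            extractor.selectTrack(i);
            if (dstTrack < 0) {
                // add the output track once, using the first clip's format
                dstTrack = muxer.addTrack(format);
                muxer.start();
            }
            break;
        }
    }
    long lastSampleTimeUs = 0;
    while (true) {
        int size = extractor.readSampleData(buffer, 0);
        if (size < 0) break;
        lastSampleTimeUs = extractor.getSampleTime();
        boolean keyFrame = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0;
        info.set(0, size, lastSampleTimeUs + timeOffsetUs, keyFrame ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0);
        muxer.writeSampleData(dstTrack, buffer, info);
        extractor.advance();
    }
    // shift the next clip's timestamps behind this one (plus one frame duration, omitted here)
    timeOffsetUs += lastSampleTimeUs;
    extractor.release();
}
muxer.stop();
muxer.release();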
UPDATE:
Creating an audio .mp4 with a single frame should be easiest with FFmpeg. If you need a lighter solution, you have to encode your image with MediaCodec and extract the audio from the .mp3 first. After that, you have to mux the resulting video and audio samples into a video file. You can find brilliant MediaCodec examples in Grafika and BigFlake.
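For reference, the FFmpeg route is roughly a single command along these lines (the file names are placeholders, and libx264/aac availability depends on your FFmpeg build):
ffmpeg -loop 1 -i photo.jpg -i audio.mp3 -c:v libx264 -tune stillimage -c:a aac -shortest output.mp4
Here -loop 1 repeats the single image and -shortest stops the output when the audio ends.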

Related

Is it possible to render frames in ExoPlayer?

I am pulling H.264 and AAC frames, and at the moment I am feeding them to MediaCodec, decoding and rendering them myself, but the code is getting too complicated and I need to cover all cases. I was wondering whether it's possible to set up an ExoPlayer instance and feed the frames to it as a source.
I can only find that it supports normal files and streams, but not separate frames. Do I need to mux the frames myself, and if so, is there an easy way to do it?
If you mean that you are extracting frames from a video file or a live stream, and then want to work on them individually or display them individually, you may find that OpenCV would suit your use case.
You can fairly simply open a stream or file, go frame by frame and do what you want with the resulting decoded bitmap.
This answer has a Python and Android example that might be useful: https://stackoverflow.com/a/58921325/334402
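As a rough sketch of that approach with OpenCV's Java bindings (assuming the OpenCV library has been loaded, e.g. via OpenCVLoader, and that the videoio backend in your build can open the file; the path is a placeholder):
VideoCapture capture = new VideoCapture("/sdcard/input.mp4"); // org.opencv.videoio.VideoCapture
Mat frame = new Mat();
while (capture.read(frame)) {
    // convert the decoded frame to an Android Bitmap
    Imgproc.cvtColor(frame, frame, Imgproc.COLOR_BGR2RGBA);
    Bitmap bitmap = Bitmap.createBitmap(frame.cols(), frame.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(frame, bitmap); // org.opencv.android.Utils
    // ... process or display the bitmap ...
}
capture.release();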

I want to attach pre-rolls to videos taken on Android devices

I'm using mp4parser, and the videos need to be of the same kind.
I was thinking of using Android's MediaCodec to decode and re-encode the pre-roll video to match the encoding output of the cameras (front and back).
Any suggestion on how this can be done (how to get the specific device encoding parameters)?
If you want to find out what encoding your Android camera is using, try CamcorderProfile: https://developer.android.com/reference/android/media/CamcorderProfile
That should suffice to detect the video encoding parameters, including: the file output format, video codec format, video bit rate in bits per second, video frame rate in frames per second, video frame width and height, audio codec format, audio bit rate in bits per second, audio sample rate, and the number of audio channels for recording.
Much of the above information is also covered here: https://developer.android.com/guide/topics/media/camera#capture-video
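As a quick sketch, reading those parameters looks roughly like this (camera id 0, usually the back camera, is assumed):
CamcorderProfile profile = CamcorderProfile.get(0, CamcorderProfile.QUALITY_HIGH);
int width = profile.videoFrameWidth;          // e.g. 1920
int height = profile.videoFrameHeight;        // e.g. 1080
int videoBitRate = profile.videoBitRate;      // bits per second
int videoFrameRate = profile.videoFrameRate;  // frames per second
int videoCodec = profile.videoCodec;          // MediaRecorder.VideoEncoder constant
int audioCodec = profile.audioCodec;          // MediaRecorder.AudioEncoder constant
int audioBitRate = profile.audioBitRate;      // bits per second
int audioSampleRate = profile.audioSampleRate;
int audioChannels = profile.audioChannels;
int fileFormat = profile.fileFormat;          // MediaRecorder.OutputFormat constant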
As for transcoding videos that are already in the user's camera roll, I found this useful transcoder written in pure Java using the Android MediaCodec API: https://github.com/ypresto/android-transcoder
Also, as rupps mentioned below, you can use FFmpeg, which has been proven to work countless times on Android. However, the reason I link the other transcoder first is that, as its author states, "FFmpeg is the most famous solution for transcoding. But using FFmpeg binary on Android can cause GPL and/or patent issues. Also using native code for Android development can be troublesome because of cross-compiling, architecture compatibility, build time and binary size." So use whichever one you believe suits you better. Here is the link to FFmpeg for Android:
https://github.com/WritingMinds/ffmpeg-android
If you don't want to use a transcoder that someone else made, then I recommend writing your own using the MediaCodec API, which can be found here: https://developer.android.com/reference/android/media/MediaCodec
If you want magic, try this library.
https://github.com/INDExOS/media-for-mobile/
Take a look at the MediaComposer class.
Here's also a code snippet on how it's done.
org.m4m.MediaComposer mediaComposer = new org.m4m.MediaComposer(factory, progressListener);
mediaComposer.addSourceFile(mediaUri1);
int orientation = mediaFileInfo.getRotation(); // mediaFileInfo describes the source file
mediaComposer.setTargetFile(dstMediaPath, orientation);
// set video encoder
VideoFormatAndroid videoFormat = new VideoFormatAndroid(videoMimeType, width, height);
videoFormat.setVideoBitRateInKBytes(videoBitRateInKBytes);
videoFormat.setVideoFrameRate(videoFrameRate);
videoFormat.setVideoIFrameInterval(videoIFrameInterval);
mediaComposer.setTargetVideoFormat(videoFormat);
// set audio encoder (audioFormat is the source file's audio format)
AudioFormatAndroid aFormat = new AudioFormatAndroid(audioMimeType, audioFormat.getAudioSampleRateInHz(), audioFormat.getAudioChannelCount());
aFormat.setAudioBitrateInBytes(audioBitRate);
aFormat.setAudioProfile(MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaComposer.setTargetAudioFormat(aFormat);
mediaComposer.start();

Get audio from one mp4 and use it in a resampled (smaller) mp4 on Android

I have found a solution for resampling an .mp4 video taken with the camera on the device to make it smaller (reducing the resolution, bitrate, and frame rate). The problem is that it doesn't carry the audio over.
I have looked at several different options for getting the audio out of my source (large) mp4 and pushing it into my smaller mp4, and I can't seem to get any of these procedures to work correctly.
I've tried the following:
1) extracting the PCM audio from the source using: How do I extractor audio to mp3 from mp4 using java in Android?
2) converting the PCM to M4A and then adding the M4A to the smaller MP4 using: https://github.com/tqnst/MP4ParserMergeAudioVideo/blob/master/Mp4ParserSample-master/src/jp/classmethod/sample/mp4parser/MainActivity.java
That's the method I got closest with, but the audio was really slow and didn't match up at all with the video in the smaller mp4.
I also tried a "direct copy" from one mp4 to the other with a variation of this: Concatenate multiple mp4 audio files using Android's MediaMuxer
That made my smaller mp4 actually larger (in file size) than my source mp4, and it didn't actually move the sound over.
The Android documentation for MediaMuxer is pretty terrible and I can't make heads or tails of what I need to do to get this to work. It seems like it should be a pretty trivial task...
Any suggestions or advice would be greatly appreciated.
TIA
I ended up just using ffmpeg with this solution:
https://github.com/WritingMinds/ffmpeg-android-java
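For reference, with that library the work is done by an ordinary FFmpeg command passed to its execute call; something along these lines copies the audio track from the original into the resized file without re-encoding (the file names are placeholders):
ffmpeg -i small_video.mp4 -i original_with_audio.mp4 -map 0:v -map 1:a -c copy output.mp4
Here -map 0:v takes the video from the first input, -map 1:a takes the audio from the second, and -c copy avoids re-encoding either stream.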

Android record square video and concat

Is there a way to record square (640x640) videos and concatenate them on Android? I looked around on the Internet and found some solutions. The solution always seems to be "ffmpeg". However, to use ffmpeg I need to dive into the NDK and build ffmpeg from source. Is there a solution that only uses the Android SDK?
My basic needs are:
Record multiple videos (square format)
Resize captured videos (e.g. 480x480 to 640x640)
Concat captured videos
Rotate final video (clockwise 90)
Final output will be in mp4 or mpg format
Is there a solution by only using the Android SDK?
Not really.
Your primary video recording option is MediaRecorder, and it supports exactly nothing of what you list. For example, there is no requirement for any Android device to support taking square videos.
You are also welcome to use the camera preview stuff to assemble your own videos from individual frames. Vine does this, AFAIK. There, you could perhaps use existing Bitmap facilities to handle the cropping, resizing, and rotating. However, this will be slow, and doing this work in a way that can keep up with a reasonable frame rate will be difficult. Also, I do not know if there is a library that can stitch those frames together into a video, or blend in any sort of audio (camera previews are pure images).

Trying to capture a video on Android with timestamps

I'm trying to implement an Android app that records a video, while also writing a file containing the times at which each frame was taken. I tried using MediaRecorder, got to a point where I can get the video saved, but I couldn't find a way to get the timestamps. I tried doing something like:
while (previous video file length != current video file length)
write current time to text file;
but this did not seem to work, as file length doesn't seem to be updated frequently enough (or am I wrong?).
I then tried using OpenCV and managed to capture the images frame by frame (and therefore getting timestamps was easy), but I couldn't find a way to join all the frames to one video. I saw answers referring to using NDK and FFmpeg, but I feel like there should be an easier solution (perhaps similar to the one I tried at the top?).
You could use MediaCodec to capture the video, but that gets complicated quickly, especially if you want audio as well. You can recover timestamps from a video file by walking through it with MediaExtractor. Just call getSampleTime() on each frame to get the presentation time in microseconds.
This question is old, but I just needed the same thing. With the camera2 API you can stream to multiple targets, so you can stream from one and the same physical camera to a MediaRecorder and at the same time to an ImageReader. The ImageReader has an ImageReader.OnImageAvailableListener whose "onImageAvailable" callback you can use to query the frames for their timestamps:
Image image = reader.acquireLatestImage();
long cameraTime = image.getTimestamp(); // nanoseconds
image.close();
Not sure whether streaming to multiple targets requires certain hardware capabilities, though (such as multiple image processors). I guess modern devices have around 2-3 of them.
EDIT: It looks like every camera2-capable device can stream to up to 3 targets.
EDIT: Apparently, you can use an ImageReader for capturing the individual frames and their timestamps, and then use an ImageWriter to provide those images to downstream consumers such as a MediaCodec, which records the video for you.
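A rough sketch of that dual-target setup, assuming the usual camera2 boilerplate (an opened camera, a prepared MediaRecorder, a background handler) is already in place; the variable names are illustrative:
// cameraDevice is an opened CameraDevice, mediaRecorder has already been prepare()'d
ImageReader imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 4);
imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();
    if (image != null) {
        long frameTimestampNs = image.getTimestamp(); // timestamp of this frame in nanoseconds
        // ... write frameTimestampNs to your log file ...
        image.close();
    }
}, backgroundHandler);
Surface recorderSurface = mediaRecorder.getSurface();
Surface readerSurface = imageReader.getSurface();
CaptureRequest.Builder request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
request.addTarget(recorderSurface);
request.addTarget(readerSurface);
cameraDevice.createCaptureSession(Arrays.asList(recorderSurface, readerSurface),
        new CameraCaptureSession.StateCallback() {
            @Override public void onConfigured(CameraCaptureSession session) {
                try {
                    session.setRepeatingRequest(request.build(), null, backgroundHandler);
                    mediaRecorder.start();
                } catch (CameraAccessException e) { /* handle */ }
            }
            @Override public void onConfigureFailed(CameraCaptureSession session) { /* handle */ }
        }, backgroundHandler);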
Expanding on fadden's answer, the easiest way to go is to record using MediaRecorder (the Camera2 example helped me a lot) and to extract the timestamps afterwards via MediaExtractor. Here is a working MediaExtractor code sample:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(pathToVideo);
// select the video track so getSampleTime() returns valid values
for (int i = 0; i < extractor.getTrackCount(); i++) {
    if (extractor.getTrackFormat(i).getString(MediaFormat.KEY_MIME).startsWith("video/")) {
        extractor.selectTrack(i);
        break;
    }
}
do {
    long sampleMillis = extractor.getSampleTime() / 1000;
    // do something with sampleMillis
} while (extractor.advance() && extractor.getSampleTime() != -1);
