I need to rotate a video to suit my needs. I'll explain the details below.
I'm creating a Vine-like app: I have to record video segments and then merge all the parts into a single file. I'm doing this without issue in an Android app using the mp4parser library, latest version 1.0-RC-26, following the append example provided on their website: here
The append example works fine as long as all the videos have the same orientation, but I discovered some issues recording video from the front camera, so the quick fix was to set the recording orientation to 270. The downside of this fix is that segments recorded with that orientation appear with the wrong orientation in the merged video.
My idea is to rotate each video as needed for the different situations, but I don't have a working example with my code. Searching the internet I found solutions like this one here. The problem with that code is that it is not compatible with the latest version of the library (it gives a compilation error). I also tried to understand the logic of the library myself, but without results. For example, I experimented with the setMatrix instruction on the Movie object, but it simply doesn't work:
public static void mergeVideo(int SegmentNumber) throws Exception {
    Log.d("PM", "Merge process started");
    Movie[] inMovies = new Movie[SegmentNumber];
    for (int i = 1; i <= SegmentNumber; i++) {
        File file = new File(getCompleteFilePath(i));
        if (file.exists()) {
            FileInputStream fis = new FileInputStream(getCompleteFilePath(i));
            inMovies[i - 1] = MovieCreator.build(fis.getChannel());
            // Set rotation: I tried to experiment with this instruction, but it is not working
            inMovies[i - 1].setMatrix(Matrix.ROTATE_90);
            Log.d("PM", "Video " + i + " merged");
        }
        // The channel is read lazily, so fis must stay open until the merged file is written.
    }
    List<Track> videoTracks = new LinkedList<Track>();
    List<Track> audioTracks = new LinkedList<Track>();
    for (Movie m : inMovies) {
        if (m == null) continue; // skip segments whose file was missing
        for (Track t : m.getTracks()) {
            if (t.getHandler().equals("soun")) {
                audioTracks.add(t);
            }
            if (t.getHandler().equals("vide")) {
                videoTracks.add(t);
            }
        }
    }
    Movie result = new Movie();
    if (audioTracks.size() > 0) {
        result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
    }
    if (videoTracks.size() > 0) {
        result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
    }
    Container out = new DefaultMp4Builder().build(result);
    //out.getMovieBox().getMovieHeaderBox().setMatrix(Matrix.ROTATE_180); // set orientation; by default the merged video has the wrong orientation
    // Create the media file name
    String filename = getCompleteMergedVideoFilePath();
    FileChannel fc = new RandomAccessFile(filename, "rw").getChannel();
    out.writeContainer(fc);
    fc.close();
    // Don't leave until the file is in place
    File outFile = new File(filename);
    while (!outFile.exists()) {
        Log.d("PM", "Result file not ready");
    }
    Log.d("PM", "Merge process finished");
}
Has anyone rotated video with the very latest version of mp4parser? English is not my native language, so I apologize for any grammar errors.
for (int i = 1; i <= SegmentNumber; i++) {
    IsoFile isoFile = new IsoFile(getCompleteFilePath(i));
    Movie m = new Movie();
    // Set the rotation matrix on each track header instead of on the Movie:
    List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(TrackBox.class);
    for (TrackBox trackBox : trackBoxes) {
        trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_90);
        m.addTrack(new Mp4TrackImpl(trackBox));
    }
    inMovies[i - 1] = m;
}
This is what I did to rotate a video.
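Building on that, here is a minimal sketch of how per-segment rotation could slot into the same loop, e.g. compensating only the segments recorded at 270 with the front camera. The isFrontCameraSegment helper is hypothetical; in practice you would track which camera recorded each segment:
// Sketch only: rotate only the front-camera segments before merging.
// isFrontCameraSegment(i) is a hypothetical helper for illustration.
for (int i = 1; i <= SegmentNumber; i++) {
    IsoFile isoFile = new IsoFile(getCompleteFilePath(i));
    Movie m = new Movie();
    for (TrackBox trackBox : isoFile.getMovieBox().getBoxes(TrackBox.class)) {
        if (isFrontCameraSegment(i)) {
            // the front camera recorded at 270, so compensate here
            trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_270);
        }
        m.addTrack(new Mp4TrackImpl(trackBox));
    }
    inMovies[i - 1] = m;
}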
I have an audio recording split across multiple files. I am creating one continuous audio file using the com.googlecode.mp4parser:isoparser:1.0.2 library.
Below is my code:
String mediaKey = isAudio ? "soun" : "vide";
List<Movie> listMovies = new ArrayList<>();
for (String filename : sourceFiles) {
    listMovies.add(MovieCreator.build(filename));
}
List<Track> listTracks = new LinkedList<>();
for (Movie movie : listMovies) {
    for (Track track : movie.getTracks()) {
        if (track.getHandler().equals(mediaKey)) {
            listTracks.add(track);
        }
    }
}
Movie outputMovie = new Movie();
if (!listTracks.isEmpty()) {
    outputMovie.addTrack(new AppendTrack(listTracks.toArray(new Track[listTracks.size()])));
}
Container container = new DefaultMp4Builder().build(outputMovie);
FileChannel fileChannel = new RandomAccessFile(targetFile, "rw").getChannel();
container.writeContainer(fileChannel);
fileChannel.close();
The above code runs on an Android phone, and since it is a mobile environment there are per-application memory limits.
The problem with the code is this: loading the Movies and building the track list works fine for small files, but as the file size grows the operation becomes unresponsive and takes a lot of memory. How can I make it memory efficient? Is there any way of doing these operations in small streams, as we do for file copy operations in Java?
Update:
For recording the audio files I am using the Android MediaRecorder, with the output format set to MPEG_4 and the audio encoder set to AAC:
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
I'm building an Android app using Xamarin. The app has to capture video from the camera and encode it to send it across to a server.
Initially, I was using an encoder library on the server side to encode the recorded video, but it was proving to be extremely unreliable and inefficient, especially for large video files. I have posted my issues on another thread here.
I then decided to encode the video on the client side and send it to the server. I've found encoding to be a bit complicated, and there isn't much information available on how this can be done, so I searched for the only way I knew to encode a video: using the FFmpeg codec. I've found some solutions. There's a project on GitHub that demonstrates how FFmpeg is used inside a Xamarin Android project. However, running the solution doesn't give any output. The project has an FFmpeg binary which is installed to the phone directory using the code below:
_ffmpegBin = InstallBinary(XamarinAndroidFFmpeg.Resource.Raw.ffmpeg, "ffmpeg", false);
Below is the example code for encoding video into several different outputs:
_workingDirectory = Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
var sourceMp4 = "cat1.mp4";
var destinationPathAndFilename = System.IO.Path.Combine (_workingDirectory, "cat1_out.mp4");
var destinationPathAndFilename2 = System.IO.Path.Combine (_workingDirectory, "cat1_out2.mp4");
var destinationPathAndFilename4 = System.IO.Path.Combine (_workingDirectory, "cat1_out4.wav");
if (File.Exists (destinationPathAndFilename))
    File.Delete (destinationPathAndFilename);
CreateSampleFile(Resource.Raw.cat1, _workingDirectory, sourceMp4);

var ffmpeg = new FFMpeg (this, _workingDirectory);
var sourceClip = new Clip (System.IO.Path.Combine(_workingDirectory, sourceMp4));
var result = ffmpeg.GetInfo (sourceClip);
var br = System.Environment.NewLine;

// There are callbacks based on standard output and standard error while the ffmpeg binary runs as a process:
var onComplete = new MyCommand ((_) => {
    RunOnUiThread(() => _logView.Append("DONE!" + br + br));
});
var onMessage = new MyCommand ((message) => {
    RunOnUiThread(() => _logView.Append(message + br + br));
});
var callbacks = new FFMpegCallbacks (onComplete, onMessage);

// 1. The idea of this first test is to show that video editing is possible via FFmpeg:
// It results in a 150x150 movie that eventually zooms in on a cat ear. It is desaturated, and there's a fade-in.
var filters = new List<VideoFilter> ();
filters.Add (new FadeVideoFilter ("in", 0, 100));
filters.Add (new CropVideoFilter ("150", "150", "0", "0"));
filters.Add (new ColorVideoFilter (1.0m, 1.0m, 0.0m, 0.5m, 1.0m, 1.0m, 1.0m, 1.0m));
var outputClip = new Clip (destinationPathAndFilename) { videoFilter = VideoFilter.Build (filters) };
outputClip.H264_CRF = "18"; // the quality coefficient for H264 - default is 28; I think 18 is pretty good
ffmpeg.ProcessVideo(sourceClip, outputClip, true, new FFMpegCallbacks(onComplete, onMessage));

// 2. A similar version using the command line only:
string[] cmds = new string[] {
    "-y",
    "-i",
    sourceClip.path,
    "-strict",
    "-2",
    "-vf",
    "mp=eq2=1:1.68:0.3:1.25:1:0.96:1",
    destinationPathAndFilename2,
    "-acodec",
    "copy",
};
ffmpeg.Execute (cmds, callbacks);

// 3. This lists the available codecs:
string[] cmds3 = new string[] {
    "-codecs",
};
ffmpeg.Execute (cmds3, callbacks);

// 4. This converts to WAV.
// Note that the cat movie just has some silent house noise.
ffmpeg.ConvertToWaveAudio(sourceClip, destinationPathAndFilename4, 44100, 2, callbacks, true);
I have tried different commands, but no output file is generated. I also tried to use another project found here, but it has the same issue: I don't get any errors, yet no output file is generated. I'm really hoping someone can help me find a way to use FFmpeg in my project, or some other way to compress video for transport to the server.
I would really appreciate it if someone could point me in the right direction.
Just figured out how to get the output: declare the write permission in the AndroidManifest file.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Please read the update on the repository: it says that there is a second package, Xamarin.Android.MP4Transcoder, for Android 6.0 onwards.
Install the NuGet package: https://www.nuget.org/packages/Xamarin.Android.MP4Transcoder/
await Xamarin.MP4Transcoder.Transcoder
    .For720pFormat()
    .ConvertAsync(inputFile, outputFile, f => {
        onProgress?.Invoke((int)(f * (double)100), 100);
    });
return outputFile;
For previous Android versions:
Source code: https://github.com/neurospeech/xamarin-android-ffmpeg
Install-Package Xamarin.Android.FFmpeg
Use the following as a template; it lets you log the output as well as calculate progress.
You can take a look at the source; on first use it downloads ffmpeg and verifies its SHA1 hash.
public class VideoConverter
{
    public VideoConverter()
    {
    }

    public async Task<File> ConvertFile(Context context,
        File inputFile,
        Action<string> logger = null,
        Action<int, int> onProgress = null)
    {
        File outputFile = new File(inputFile.CanonicalPath + ".mpg");
        outputFile.DeleteOnExit();

        List<string> cmd = new List<string>();
        cmd.Add("-y");
        cmd.Add("-i");
        cmd.Add(inputFile.CanonicalPath);

        // Read the rotation flag from the source so the output can be rotated to compensate
        MediaMetadataRetriever m = new MediaMetadataRetriever();
        m.SetDataSource(inputFile.CanonicalPath);
        string rotate = m.ExtractMetadata(Android.Media.MetadataKey.VideoRotation);
        int r = 0;
        if (!string.IsNullOrWhiteSpace(rotate))
        {
            r = int.Parse(rotate);
        }

        cmd.Add("-b:v");
        cmd.Add("1M");
        cmd.Add("-b:a");
        cmd.Add("128k");

        switch (r)
        {
            case 270:
                cmd.Add("-vf scale=-1:480,transpose=cclock");
                break;
            case 180:
                cmd.Add("-vf scale=-1:480,transpose=cclock,transpose=cclock");
                break;
            case 90:
                cmd.Add("-vf scale=480:-1,transpose=clock");
                break;
            case 0:
                cmd.Add("-vf scale=-1:480");
                break;
            default:
                break;
        }

        cmd.Add("-f");
        cmd.Add("mpeg");
        cmd.Add(outputFile.CanonicalPath);

        string cmdParams = string.Join(" ", cmd);

        int total = 0;
        int current = 0;

        await FFMpeg.Xamarin.FFMpegLibrary.Run(
            context,
            cmdParams,
            (s) => {
                logger?.Invoke(s);
                // "Duration:" reports the total length; "time=" reports the current position
                int n = Extract(s, "Duration:", ",");
                if (n != -1)
                {
                    total = n;
                }
                n = Extract(s, "time=", " bitrate=");
                if (n != -1)
                {
                    current = n;
                    onProgress?.Invoke(current, total);
                }
            });

        return outputFile;
    }

    int Extract(String text, String start, String end)
    {
        int i = text.IndexOf(start);
        if (i != -1)
        {
            text = text.Substring(i + start.Length);
            i = text.IndexOf(end);
            if (i != -1)
            {
                text = text.Substring(0, i);
                return parseTime(text);
            }
        }
        return -1;
    }

    // Parses "hh:mm:ss.xx" into centiseconds
    public static int parseTime(String time)
    {
        time = time.Trim();
        String[] tokens = time.Split(':');
        int hours = int.Parse(tokens[0]);
        int minutes = int.Parse(tokens[1]);
        float seconds = float.Parse(tokens[2]);
        int s = (int)(seconds * 100);
        return hours * 360000 + minutes * 6000 + s;
    }
}
The idea
I am creating a save-to-device feature for a movie editing application that merges one video track with one (or optionally two) audio tracks.
First, there are multiple video clips that I merge into one single video track using MP4Parser (link).
Then, there are multiple audio clips that I would like to merge into one single audio track. These clips should not simply be appended end to end, but placed in a single audio track at specific times. E.g. we have two audio clips (A1, A2) and a 60-second video track (V1). The audio clips can overlap or have silence between them. The length of the whole audio track has to match the video track, which can be up to 60 seconds. There can be up to 100 audio clips added to audio track 1.
V1 - 60.0 s
A1 - 0.3 s
A2 - 1.1 s
Last, there might be an optional second audio track containing a soundtrack, also fit to the V1 video track.
Summary
This is what it would look like:
Video track 1: [--------------------------------------------------------------------------------] 60 sec
Audio track 1: [-A1--A2--------------------------------------------------------------------] 60 sec
Audio track 2: [-------------------------------------------------------------------------------] 60 sec
The problem
I tried approaching the problem by appending x seconds of silence (an empty WAV file) between the clips to get a full-length track as described above, but that obviously does not work when the sounds overlap. What other ways can I try to tackle this problem?
private static final String OUTPUT_FILENAME = "output.mp4";
private static final String STORED_LOCATION = "/storage/emulated/0/";

/**
 * Merges the videos located in /storage/emulated/0/ and saves the result to the same place.
 * Uses the mp4parser library. All this is done in an AsyncTask, not blocking the UI thread
 * but showing a progress bar and a toast at the end.
 */
private void mergeVideosAsync()
{
    new AsyncTask<Void, Void, String>()
    {
        @Override
        protected String doInBackground(Void... arg0)
        {
            try
            {
                List<Movie> movieList = new ArrayList<>();
                for (int i = 0; i < mVideoPathList.size(); ++i)
                {
                    movieList.add(MovieCreator.build(new File(mVideoPathList.get(i)).getAbsolutePath()));
                }
                List<Track> videoTracks = new LinkedList<>();
                List<Track> audioTracks = new LinkedList<>();
                for (Movie m : movieList)
                {
                    for (Track t : m.getTracks())
                    {
                        if (t.getHandler().equals("soun"))
                        {
                            //TODO: Add audio tracks here to the merging process
                            // audioTracks.add(t);
                        }
                        if (t.getHandler().equals("vide"))
                        {
                            videoTracks.add(t);
                        }
                    }
                }
                Movie result = new Movie();
                if (audioTracks.size() > 0)
                {
                    result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
                }
                if (videoTracks.size() > 0)
                {
                    result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
                }
                BasicContainer out = (BasicContainer) new DefaultMp4Builder().build(result);
                mOutputPath = STORED_LOCATION + OUTPUT_FILENAME;
                WritableByteChannel fc = new RandomAccessFile(mOutputPath, "rw").getChannel();
                out.writeContainer(fc);
                fc.close();
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
            return mOutputPath;
        }
    }.execute();
}
If your audio tracks overlap, then you have a problem, as you'll need to re-encode the audio.
If the audio tracks are non-overlapping, then you might be able to use the SilenceTrackImpl:
// SilenceTrackImpl takes a template track (so the silence matches that track's format) and a duration in ms
Track nuAudio = new AppendTrack(
        audioTrackA1, new SilenceTrackImpl(audioTrackA1, 100),
        audioTrackA2, new SilenceTrackImpl(audioTrackA2, 500),
        audioTrackA3, new SilenceTrackImpl(audioTrackA3, 1000),
        audioTrackA4, new SilenceTrackImpl(audioTrackA4, 50)
);
and so on.
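To turn clip start times into silence durations, compute each gap as the next clip's start minus the previous clip's end. Below is a minimal sketch of that bookkeeping, assuming a hypothetical TimedClip holder (a Track plus its intended start in milliseconds) and a hypothetical getDurationMs helper that derives a track's length from getDuration() and its timescale:
// Sketch only: clips are sorted by startMs and assumed non-overlapping.
List<Track> pieces = new LinkedList<Track>();
long cursorMs = 0;
Track template = null;
for (TimedClip clip : clips) {
    long gapMs = clip.startMs - cursorMs;
    if (gapMs > 0) {
        pieces.add(new SilenceTrackImpl(clip.track, gapMs)); // silence formatted like the next clip
    }
    pieces.add(clip.track);
    cursorMs = clip.startMs + getDurationMs(clip.track);
    template = clip.track;
}
if (cursorMs < 60000 && template != null) {
    pieces.add(new SilenceTrackImpl(template, 60000 - cursorMs)); // pad to the 60 s video length
}
Track audioTrack1 = new AppendTrack(pieces.toArray(new Track[pieces.size()]));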
How to write (wrap) MPEG-4 data into an MP4 file in Android?
I am doing some video processing on the Android platform, but I don't know how to write the processed data (encoded to some standard, like MPEG-4) back into a video file like MP4. I think it is best to use an API to do this, but I can't find the one I need.
Does anyone have any ideas?
mp4parser can only work with fully created frame streams; you can't write frame by frame with it. Correct me if I'm wrong.
H264TrackImpl h264Track = new H264TrackImpl(new BufferedInputStream(inputStream)); // inputStream: your raw H.264 elementary stream
Movie m = new Movie();
m.addTrack(h264Track); // without this, the resulting file would be empty
IsoFile out = new DefaultMp4Builder().build(m);
File file = new File("/sdcard/encoded.mp4");
FileOutputStream fos = new FileOutputStream(file);
out.getBox(fos.getChannel());
fos.close();
Now we still need to find out how to write frame by frame there.
OpenCV might be a little too much for the job, but I can't think of anything easier. OpenCV is a computer vision library that offers APIs for C, C++ and Python.
Since you are using Android, you'll have to use a Java wrapper for OpenCV named JavaCV, which is a 3rd-party API. I wrote a small post with instructions to install OpenCV/JavaCV on Windows and use it with Netbeans, but in the end you'll have to search for a tutorial that shows how to install OpenCV/JavaCV for the Android platform.
This is a C++ example that shows how to open an input video and copy the frames to an output file. But since you are using Android, an example using JavaCV is better, so the following code copies frames from an input video and writes them to an output file named out.mp4:
package opencv_videowriter;

import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;

public class OpenCV_videowriter
{
    public static void main(String[] args)
    {
        CvCapture capture = cvCreateFileCapture("cleanfish47.mp4");
        if (capture == null)
        {
            System.out.println("!!! Failed cvCreateFileCapture");
            return;
        }

        int fourcc_code = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FOURCC);
        double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
        int w = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);
        int h = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);

        CvVideoWriter writer = cvCreateVideoWriter("out.mp4",    // filename
                                                   fourcc_code,  // video codec
                                                   fps,          // fps
                                                   cvSize(w, h), // video dimensions
                                                   1);           // is colored
        if (writer == null)
        {
            System.out.println("!!! Failed cvCreateVideoWriter");
            return;
        }

        IplImage captured_frame = null;
        while (true)
        {
            // Retrieve a frame from the input file
            captured_frame = cvQueryFrame(capture);
            if (captured_frame == null)
            {
                System.out.println("!!! Failed cvQueryFrame");
                break;
            }

            // TODO: write code to process the captured frame (if needed)

            // Store the frame in the output file
            if (cvWriteFrame(writer, captured_frame) == 0)
            {
                System.out.println("!!! Failed cvWriteFrame");
                break;
            }
        }

        cvReleaseCapture(capture);
        cvReleaseVideoWriter(writer);
    }
}
Note: frames in OpenCV store pixels in the BGR order.
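If your frame processing assumes RGB input, you can convert each frame first. A minimal sketch, using the same static imports as the example above (the rgb_frame image is allocated here purely for illustration):
// Convert a captured BGR frame to RGB before processing it.
IplImage rgb_frame = cvCreateImage(cvGetSize(captured_frame), captured_frame.depth(), 3);
cvCvtColor(captured_frame, rgb_frame, CV_BGR2RGB);
// ... process rgb_frame instead of captured_frame ...
cvReleaseImage(rgb_frame);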
Your question doesn't quite make sense. MPEG-4 is a family of specifications (all of ISO/IEC 14496-*), and MP4 is the file format specified in ISO/IEC 14496-14.
If you want to create an MP4 file from a raw AAC and/or H264 stream I would suggest using the mp4parser library. There is an example that shows how to mux AAC and H264 into an MP4 file.
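That example boils down to roughly the following sketch (the input file names are placeholders; in mp4parser 1.x both track implementations accept a DataSource such as FileDataSourceImpl):
// Mux a raw H.264 stream and a raw AAC stream into one MP4 file.
H264TrackImpl h264Track = new H264TrackImpl(new FileDataSourceImpl("video.h264"));
AACTrackImpl aacTrack = new AACTrackImpl(new FileDataSourceImpl("audio.aac"));
Movie movie = new Movie();
movie.addTrack(h264Track);
movie.addTrack(aacTrack);
Container mp4file = new DefaultMp4Builder().build(movie);
FileChannel fc = new FileOutputStream(new File("output.mp4")).getChannel();
mp4file.writeContainer(fc);
fc.close();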
// Full working solution:
// 1. Add to app/build.gradle -> implementation 'com.googlecode.mp4parser:isoparser:1.1.22'
// 2. Add to your code:
try {
    File mpegFile = new File(/* ... your mpeg file */);
    File mp4file = new File(/* ... your mp4 file */);
    DataSource channel = new FileDataSourceImpl(mpegFile);
    IsoFile isoFile = new IsoFile(channel);
    List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(TrackBox.class);
    Movie movie = new Movie();
    for (TrackBox trackBox : trackBoxes) {
        movie.addTrack(new Mp4TrackImpl(channel.toString()
                + "[" + trackBox.getTrackHeaderBox().getTrackId() + "]", trackBox));
    }
    movie.setMatrix(isoFile.getMovieBox().getMovieHeaderBox().getMatrix());
    Container out = new DefaultMp4Builder().build(movie);
    FileChannel fc = new FileOutputStream(mp4file).getChannel();
    out.writeContainer(fc);
    fc.close();
    isoFile.close();
    channel.close();
    Log.d("TEST", "file mpeg " + mpegFile.getPath() + " was changed to " + mp4file.getPath());
    // mpegFile.delete(); // if you wish!
} catch (Exception e) {
    e.printStackTrace();
}
// That's all! Happy coding =)
I'm working on an Android project where I record several audio files using the parameters below. The recording works fine.
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
My problem is that each time I record, the output is written to a separate file. Now I need to combine these files into one. Does anyone have an idea how to combine several MPEG-4 files in an Android project?
Thanks for your help.
I would suggest using my mp4parser library, then you don't have to deal with native libs. Have a look at the AppendExample; it does exactly what you want to do. Make sure to use the latest version.
See AppendExample below to get an idea of how it works.
Movie[] inMovies = new Movie[]{
        MovieCreator.build(Channels.newChannel(AppendExample.class.getResourceAsStream("/count-deutsch-audio.mp4"))),
        MovieCreator.build(Channels.newChannel(AppendExample.class.getResourceAsStream("/count-english-audio.mp4")))};

List<Track> videoTracks = new LinkedList<Track>();
List<Track> audioTracks = new LinkedList<Track>();
for (Movie m : inMovies) {
    for (Track t : m.getTracks()) {
        if (t.getHandler().equals("soun")) {
            audioTracks.add(t);
        }
        if (t.getHandler().equals("vide")) {
            videoTracks.add(t);
        }
    }
}

Movie result = new Movie();
if (audioTracks.size() > 0) {
    result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
}
if (videoTracks.size() > 0) {
    result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
}

IsoFile out = new DefaultMp4Builder().build(result);
FileChannel fc = new RandomAccessFile("output.mp4", "rw").getChannel();
fc.position(0);
out.getBox(fc);
fc.close();
Android has no built-in functionality for combining two audio files. Plain file operations will not work either, because each audio file has its own headers, so simply concatenating the bytes produces an invalid file.
I recommend using the external FFmpeg library in your Android application.
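For example, FFmpeg's concat demuxer can join recordings that share the same codec without re-encoding (the file names here are placeholders):
ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp4
where list.txt names the input files, one per line:
file 'recording1.mp4'
file 'recording2.mp4'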