I'm using the ffmpeg command below to generate a video (slideshow) from a list of images, but the issue is that it only displays the first image.
ffmpeg -loop 1 -t 3 -i image1.jpg -i image2.jpg -i image3.jpg -filter_complex [v][v1][v2] concat=n=3:v=1,format=yuv422p[a] -map [a] out.mp4
Any help will be appreciated.
Thanks.
Finally, after a lot of experimenting, I got a working solution and a better ffmpeg command than the one above.
ffmpeg -f concat -safe 0 -i img-list.txt -f concat -safe 0 -i audio-list.txt -c:a aac -pix_fmt yuv420p -crf 23 -r 24 -shortest -y -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4
In the above command, img-list.txt contains the paths of the image files, one per line (and audio-list.txt does the same for the audio files).
img-list.txt
file '*/image1.jpg'
file '*/image2.jpg'
file '*/image3.jpg'
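For completeness, audio-list.txt uses the same concat-demuxer syntax, one file per line; the names below are just placeholders for your own audio files.
file 'audio1.mp3'
file 'audio2.mp3'
file 'audio3.mp3'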
I have two commands, one for an overlay (works alone) and one for adding text (works alone). I want to combine these two commands into one.
ffmpeg -i myvideo.mp4 -i image.png -filter_complex [0:v][1:v]overlay=5:5,drawtext=fontfile=:text=mytext:fontcolor=orange#1.0:fontsize=30:x=30:y=200[v] -map [output] output.mp4
This command generates an empty file without any error.
Your -map option is using a label that does not reference anything.
You should be getting this error:
Output with label 'output' does not exist in any defined filter graph, or was already used elsewhere.
The -filter_complex output and the -map option should use the same label. It can be almost any arbitrary name as long as they match. Also, your fontfile is missing the font path. You may have to quote your text string, but you're using Android and it is weird with quoting. Lastly, you should stream copy the audio.
Use this: both filter output and -map are using [v]
ffmpeg -i myvideo.mp4 -i image.png -filter_complex [0:v][1:v]overlay=5:5,drawtext=text=mytext:fontcolor=orange#1.0:fontsize=30:x=30:y=200[v] -map [v] -map 0:a -c:a copy output.mp4
or this: both filter output and -map are using [output]
ffmpeg -i myvideo.mp4 -i image.png -filter_complex [0:v][1:v]overlay=5:5,drawtext=text=mytext:fontcolor=orange#1.0:fontsize=30:x=30:y=200[output] -map [output] -map 0:a -c:a copy output.mp4
or this: use the default stream selection
ffmpeg -i myvideo.mp4 -i image.png -filter_complex [0:v][1:v]overlay=5:5,drawtext=text=mytext:fontcolor=orange#1.0:fontsize=30:x=30:y=200 -c:a copy output.mp4
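If you do want a custom font, drawtext needs a real font file path. A minimal sketch, assuming a typical Android system font path (adjust it, or bundle your own .ttf with the app):
ffmpeg -i myvideo.mp4 -i image.png -filter_complex "[0:v][1:v]overlay=5:5,drawtext=fontfile=/system/fonts/Roboto-Regular.ttf:text='mytext':fontcolor=orange:fontsize=30:x=30:y=200[v]" -map [v] -map 0:a -c:a copy output.mp4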
I am using FFmpeg on Android to make a video from a single image merged with an audio file. It works, but the output video does not seek to any timestamp,
e.g. it always starts from 0:00, and seeking ahead just restarts the whole video from the beginning.
The command I used is:
-y -framerate 30 -i "+ImagePath+" -i "+AudioPath+" -vsync vfr -c:v libx264 -codec:a copy -pix_fmt yuv420p -crf 23 "+OutputVideoPath
Can this be because there is only a single frame (one image) in the video?
If so, what can be used to convert a single image into a seekable video?
This command will produce a seekable MP4 (using an image file and MP3 audio as input).
ffmpeg -y -i AUDIO.file -f image2 -loop 1 -r 2 -i IMAGE.file -shortest -c:a copy -c:v libx264 -crf 18 -framerate 30 -preset veryfast -movflags +faststart OUTPUT.mp4
Just replace AUDIO.file, IMAGE.file and OUTPUT.mp4 with your own file names.
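If you want to confirm that -movflags +faststart actually moved the moov atom to the front of the output (which is what lets the file be seeked while it is still streaming), one way is to inspect the atom order at trace log level; OUTPUT.mp4 here stands for whatever file you produced:
ffprobe -v trace OUTPUT.mp4 2>&1 | grep -m 2 -e "type:'moov'" -e "type:'mdat'"
If 'moov' is printed before 'mdat', the flag did its job.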
Try this in your Android code:
-y -i "+AudioPath+" -f image2 -loop 1 -r 2 -i "+ImagePath+" -shortest -c:a copy -c:v libx264 -crf 18 -framerate 30 -preset veryfast -movflags +faststart "+OutputVideoPath+"
Or set it up in your code as a single concatenated string:
"-y -i "+AudioPath+" -f image2 -loop 1 -r 2 -i "+ImagePath+" -shortest -c:a copy -c:v libx264 -crf 18 -framerate 30 -preset veryfast -movflags +faststart "+OutputVideoPath
I have used this command to concatenate multiple images with transition effects to create a video.
"-y -f concat -safe 0 -i <txt file path> -filter_complex [0:0][1:0]concat=n=2:v=0:a=1[out] -map [v] -shortest -vf fps=40 -pix_fmt yuv420p <video path>"
But it shows this error:
Stream specifier ':0' in filtergraph description [0:0][1:0]concat=n=2:v=0:a=1[out] matches no streams.
Here is my txt file
file '/storage/emulated/0/image1.jpg'
duration 5
file '/storage/emulated/0/image2.jpg'
duration 5
file '/storage/emulated/0/image3.jpg'
However, if I do not apply any filter, it successfully creates a video.
The concat demuxer exposes everything in the list file as a single input (input 0), so the [1:0] label in your filtergraph matches no stream, which is what the error is telling you. The following command creates the video at a rate of 1 frame per 5 seconds instead:
ffmpeg -y -r 1/5 -i image1.jpg -r 1/5 -i image2.jpg -r 1/5 -i image3.jpg -filter_complex 'concat=n=3:v=1:a=0 [out]' -map '[out]' -c:v libx264 output.mp4
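If you would rather keep your txt file with the duration entries, here is a sketch of the demuxer-based variant (just drop the [0:0]/[1:0] labels, since the demuxer exposes only one input; <txt file path> and <video path> are your own placeholders):
ffmpeg -y -f concat -safe 0 -i <txt file path> -vf fps=40 -pix_fmt yuv420p -c:v libx264 <video path>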
I am new to FFmpeg and am testing commands in the FFmpeg console.
I have already managed to join 2 videos with a crossfade, following an existing question.
Now I am merging 5 videos with crossfades and I am about 90% done;
I just need to manage setpts=PTS-STARTPTS correctly. Please take a look.
ffmpeg -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -filter_complex \
"[0:v]trim=0:4,setpts=PTS-STARTPTS,fade=out:st=4:d=1:alpha=1[1]; \
 [1:v]trim=1:4,setpts=PTS-STARTPTS,format=yuva420p,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[2]; \
 [2:v]trim=1:4,setpts=PTS-STARTPTS,format=yuva420p,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[3]; \
 [3:v]trim=1:4,setpts=PTS-STARTPTS,format=yuva420p,fade=in:st=0:d=1:alpha=1,fade=out:st=4:d=1:alpha=1[4]; \
 [4:v]trim=1:4,setpts=PTS-STARTPTS,format=yuva420p,fade=in:st=0:d=1:alpha=1[5]; \
 [1][2]overlay,format=yuv420p[12]; [12][3]overlay,format=yuv420p[123]; \
 [4][5]overlay,format=yuv420p[45]; [123][45]concat=n=2 [v]" \
-map [v] result.mp4
Note that every input video big_buck.mp4 is 5 seconds long. Now, regarding setpts=PTS-STARTPTS in the command: how do I manage that for every video input?
I have also looked through various forums about this but did not find an answer.
Thank you
Use
ffmpeg -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -filter_complex \
"[0:v]setpts=PTS-STARTPTS[v1]; \
 [1:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(4/TB)[v2]; \
 [2:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(8/TB)[v3]; \
 [3:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(12/TB)[v4]; \
 [4:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(16/TB)[v5]; \
 [v1][v2]overlay[v12]; [v12][v3]overlay[v123]; [v123][v4]overlay[v1234]; [v1234][v5]overlay,format=yuv420p[v]" \
-map [v] result.mp4
The PTS has to be modified so that each new clip starts 1 second before the current combination of clips ends, i.e. the 3rd clip should start fading in at 8 seconds, since the combination of the first two clips is 9 seconds long (4 seconds of the first clip + 1 second of transition + 4 seconds of the 2nd clip).
You don't need the fade out as the next clip is fading in on top. The concat is only required if you want a cut.
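As a quick sanity check of the offsets used above: with 5-second clips and a 1-second fade, clip k starts at k*(5-1) seconds. A throwaway one-liner to print them:
for k in 0 1 2 3 4; do echo "clip $k starts at $((k * 4))s"; done    # 0, 4, 8, 12, 16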
With audio crossfades:
ffmpeg -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -i big_buck.mp4 -filter_complex \
"[0:v]setpts=PTS-STARTPTS[v1]; \
 [1:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(4/TB)[v2]; \
 [2:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(8/TB)[v3]; \
 [3:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(12/TB)[v4]; \
 [4:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(16/TB)[v5]; \
 [v1][v2]overlay[v12]; [v12][v3]overlay[v123]; [v123][v4]overlay[v1234]; [v1234][v5]overlay,format=yuv420p[v]; \
 [0:a][1:a]acrossfade=d=1[a1]; [a1][2:a]acrossfade=d=1[a2]; [a2][3:a]acrossfade=d=1[a3]; [a3][4:a]acrossfade=d=1[a]" \
-map [v] -map [a] result.mp4
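For reference, five 5-second clips overlapped by four 1-second crossfades should give an output of roughly 5*5 - 4*1 = 21 seconds, for both the video and the audio chains.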
Based on Gyan's answer, I have created a convenient Bash script, video_crossfade.sh, to crossfade any number of videos of different durations.
#!/bin/bash
# Usage: ./video_crossfade.sh '<wildcard pattern>', e.g. ./video_crossfade.sh '*.MP4'
INPUT="$1"
CMD="ffmpeg"

SIZE=$(find . -iname "$INPUT" | wc -l)
if (( SIZE < 2 ))
then
    echo "2 or more videos are required"
    exit 1
fi

VIDEO=""
OUT=""
i="0"
total_duration="0"

# Note: relies on word splitting of the find output, so file names must not contain spaces.
for file in $(find . -iname "$INPUT" | sort)
do
    echo "$file"
    CMD="$CMD -i $file"
    # Whole-second duration of the current clip, via ffprobe.
    duration=$(ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 "$file" | cut -d'.' -f1)
    if [[ "$i" == "0" ]]
    then
        VIDEO="[0:v]setpts=PTS-STARTPTS[v0];"
    else
        # Each new clip starts 1 second before the already-combined clips end.
        fade_start=$((total_duration-i))
        VIDEO="${VIDEO}[${i}:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(${fade_start}/TB)[v${i}];"
        if (( i < SIZE-1 ))
        then
            if (( i == 1 ))
            then
                OUT="${OUT}[v0][v1]overlay[outv1];"
            else
                OUT="${OUT}[outv$((i-1))][v${i}]overlay[outv${i}];"
            fi
        else
            # Last clip: close the chain and convert back to yuv420p.
            if (( SIZE == 2 ))
            then
                OUT="${OUT}[v0][v1]overlay,format=yuv420p[outv]"
            else
                OUT="${OUT}[outv$((i-1))][v${i}]overlay,format=yuv420p[outv]"
            fi
        fi
    fi
    total_duration=$((total_duration+duration))
    i=$((i+1))
done

CMD="$CMD -filter_complex \"${VIDEO}${OUT}\" -c:v libx264 -map [outv] crossfade.MP4"
echo "$CMD"
bash -c "$CMD"
Example:
./video_crossfade.sh '*.MP4'
The script takes all videos matched by the wildcard pattern and uses ffprobe to get each video's duration.
I am cutting segments out of a long mp4 file and then rejoining some of them. However, since FFmpeg apparently keeps the same MOOV atom for the trimmed files as the original, the trimmed videos all look identical to FFmpeg, and it therefore only uses the first segment when trying to join the videos. Is there a way around this? Unfortunately, since FFmpeg is embedded in an Android app, I can only use version 0.11.
Edit:
This is a sample of the process:
ffmpeg -i /sdcard/path/movie.mp4 -ss 00:00:06.000 -t 00:00:05.270 -c:a aac -c:v libx264 /sdcard/path/file1.mp4
ffmpeg -i /sdcard/path/movie.mp4 -ss 00:00:12.000 -t 00:00:04.370 -c:a aac -c:v libx264 /sdcard/path/file2.mp4
ffmpeg -i /sdcard/path/movie.mp4 -ss 00:00:23.000 -t 00:00:03.133 -c:a aac -c:v libx264 /sdcard/path/file3.mp4
ffmpeg -i "concat:/sdcard/path/file1.mp4|/sdcard/path/file2.mp4|/sdcard/path/file3.mp4" -c:a aac -c:v libx264 /sdcard/path/output.mp4
I've also tried using the copy codec option but that hasn't helped.
Can you try something like:
ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts
ffmpeg -i input2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts
ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy -bsf:a aac_adtstoasc output.mp4
I hope this works with ffmpeg version 0.11. We need to make sure you can concat at all before applying the cuts.
Also, can you check that the intermediate file*.mp4 segments play correctly? And can you try putting your video files in the same folder as ffmpeg? I am not sure how the concat protocol reacts to /sdcard/path/file1.mp4|/sdcard/path/file2.mp4.
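Applied to your three trimmed segments, the whole flow would look roughly like this (paths copied from your question):
ffmpeg -i /sdcard/path/file1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts /sdcard/path/file1.ts
ffmpeg -i /sdcard/path/file2.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts /sdcard/path/file2.ts
ffmpeg -i /sdcard/path/file3.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts /sdcard/path/file3.ts
ffmpeg -i "concat:/sdcard/path/file1.ts|/sdcard/path/file2.ts|/sdcard/path/file3.ts" -c copy -bsf:a aac_adtstoasc /sdcard/path/output.mp4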