Splitting audio tracks with incorrect length - FFmpeg - Android

Version: com.writingminds:FFmpegAndroid:0.3.2
I have an audio file with length 43 seconds. And I wrote an algorithm to split at each 10 seconds mark where a word ends (For this I used IBM Watson to get ending timestamp). So cropping duration is always around 10 seconds to 11 seconds. Of course except the 5th one. I have printed my commands so that you will understand my use-case better.
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:00.000 -codec copy -t 00:00:10.010 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_1.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -t 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:21.090 -codec copy -t 00:00:30.480 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_3.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:30.480 -codec copy -t 00:00:40.120 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_4.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:40.120 -codec copy -t 00:00:43.000 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_5.wav
However, when playing the cropped files I noticed that segment_1 is about 10 seconds but segment_2 is about 20 seconds, and so on. As a result, some of the audio belonging to segment_1 also appears in segment_2, etc. Why is this happening?
Appreciate your response.

-t specifies a duration, not an end time, and your values are absolute end timestamps. Use -to instead.
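For example, the second split command above would then look something like this (same paths and timestamps, with only -t replaced by -to):
-y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -to 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav
With -to the value is treated as an end position rather than a duration, so the segment covers 10.010 s to 21.090 s instead of lasting 21.09 seconds.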

Related

Splitting a wav file into multiple files for an Android application

I need to split a given .wav file into 1-second wav files for an Android application. Is there a library I can use, and if not, what method should I follow?
Here is a bash shell script which splits a wav file into 1-second clips by calling ffmpeg. I run this on my Linux laptop; I do not know whether ffmpeg is available on Android, but it is available as a library callable from, say, C or Java.
#!/bin/bash
# https://stackoverflow.com/questions/50087271/how-to-preprocess-audio-data-for-input-into-a-neural-network/50088265#50088265
# Split a wav file into multiple optionally overlapping .wav files using ffmpeg
# https://stackoverflow.com/questions/51188268/split-a-wav-file-into-multiple-overlapping-wav-files-using-ffmpeg
input_audio=${HOME}/Lee_Smolin_Physics_Envy_and_Economic_Theory-cWn86ESze6M_mono_1st_few_seconds.wav
# input_audio=${HOME}/Lee_Smolin_Physics_Envy_and_Economic_Theory-cWn86ESze6M_mono_1st_few_seconds.mp3
output_dir="./output_v08"
if [[ ! -d "$output_dir" ]]; then
    echo "mkdir -p ${output_dir}"
    mkdir -p "${output_dir}"
fi
# output_audio_prefix=output_v03/aaa
output_audio_prefix="${output_dir}/aaa"
snip_duration=1.0 # in seconds
# snip_duration=1.5 # in seconds
# https://ffmpeg.org/ffmpeg.html
# https://ffmpeg.org/ffmpeg-utils.html#time-duration-syntax
# ffmpeg -i input.mp3 -ss 10 -t 6 -acodec copy output.mp3
# ffmpeg -i $input_audio -ss 0 -t 1 -acodec copy $output_audio_prefix
:<<'good_here' # this is a bulk comment to show an example of what below loop is doing
ffmpeg -i $input_audio -ss 0 -t $snip_duration -acodec copy ${output_audio_prefix}.0.00.wav
ffmpeg -i $input_audio -ss 0.20 -t $snip_duration -acodec copy ${output_audio_prefix}.0.20.wav
ffmpeg -i $input_audio -ss 0.40 -t $snip_duration -acodec copy ${output_audio_prefix}.0.40.wav
ffmpeg -i $input_audio -ss 0.60 -t $snip_duration -acodec copy ${output_audio_prefix}.0.60.wav
ffmpeg -i $input_audio -ss 0.80 -t $snip_duration -acodec copy ${output_audio_prefix}.0.80.wav
ffmpeg -i $input_audio -ss 1.00 -t $snip_duration -acodec copy ${output_audio_prefix}.1.00.wav
good_here
start_point=0 # start from beginning of audio file
# slide_window_over_time=500 # in milliseconds
slide_window_over_time=1000 # in milliseconds
# ffmpeg -i $input_audio -af astats -f null -
echo input_audio $input_audio
if [[ ! -f $input_audio ]]; then
    echo "ERROR - input file does not exist -->$input_audio<-- "
    exit
fi
# duration_input_audio=$( ffprobe -i $input_audio -show_entries format=duration -v quiet -of csv="p=0" | bc * 1000 )
duration_input_audio=$( ffprobe -i $input_audio -show_entries format=duration -v quiet -of csv="p=0" )
echo duration_input_audio $duration_input_audio
duration_in_milli_float=$( echo "$duration_input_audio * 1000" | bc )
echo duration_in_milli_float $duration_in_milli_float
duration_in_milli_int=${duration_in_milli_float%.*}
echo duration_in_milli_int $duration_in_milli_int
echo start_point $start_point
for (( curr_window_start=$start_point; $curr_window_start<=$duration_in_milli_int; )); do
    echo curr_window_start $curr_window_start
    curr_window_start_seconds=$( echo "$curr_window_start / 1000" | bc -l )
    echo curr_window_start_seconds $curr_window_start_seconds
    curr_output_file=${output_audio_prefix}.${curr_window_start_seconds}.wav
    echo curr_output_file $curr_output_file
    # ffmpeg -i $input_audio -ss 0.60 -t $snip_duration -acodec copy ${output_audio_prefix}.0.60.wav
    ffmpeg -i $input_audio -ss $curr_window_start_seconds -t $snip_duration -acodec copy ${curr_output_file}
    # ...
    curr_window_start=$( echo "$curr_window_start + $slide_window_over_time" | bc )
done
# .......... above can also be achieved directly by executing
# ffprobe -f lavfi -i amovie=${input_audio},astats=metadata=1:reset=1 -show_entries frame=pkt_pts_time:frame_tags=lavfi.astats.Overall.RMS_level,lavfi.astats.1.RMS_level,lavfi.astats.2.RMS_level -of csv=p=0
The above works; however, you can certainly roll your own code in, say, Java to split a wav file into clips. That may take a couple of days of coding or less once you are up to speed on digital audio, or a couple of weeks if you are starting from a standstill, since it involves notions like data endianness and multi-byte storage of 16-bit integers.
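As a side note, when the clips do not need to overlap, ffmpeg's segment muxer can produce the fixed-length pieces in a single invocation instead of a loop (a sketch, assuming an input file named input.wav):
ffmpeg -i input.wav -f segment -segment_time 1 -c copy out.%03d.wav
The overlapping windows the loop above can produce (for example the commented-out 500 ms slide) still need the one-ffmpeg-call-per-clip approach.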

How to draw a colon with pts gmtime in drawtext using ffmpeg?

I am adding a timestamp to a video using FFmpeg with the command below:
ffmpeg -y -i input.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=36:fontcolor=yellow:text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H:%M}'" -preset ultrafast -f mp4 output.mp4
In this command I am using a colon between %H and %M in the text attribute of drawtext:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H:%M}'
because I want to print the time like this: 06:25.
It shows me this error:
Unterminated %{} near '{pts:gmtime:1575526882:%d/%m/%y %H'
How can I print a colon between %H and %M, where %H is the hours and %M is the minutes?
The lazy method is to use %R:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %R}'
Otherwise you'll have to deal with the annoyance of escaping:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H\\\\\:%M}'
You may have to vary the number of backslashes depending on your environment.
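Plugged back into the command from the question, the %R variant would look something like this (a sketch; the font, size, and epoch value are taken from the question unchanged):
ffmpeg -y -i input.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=36:fontcolor=yellow:text='%{pts\:gmtime\:1575526882\:%d/%m/%y %R}'" -preset ultrafast -f mp4 output.mp4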

FFmpeg slideshow - only displays the first frame

I am using the ffmpeg command below to generate a video (slideshow) from a list of images, but the issue is that it only displays the first image.
ffmpeg -loop 1 -t 3 -i image1.jpg -i image2.jpg -i image3.jpg -filter_complex [v][v1][v2] concat=n=3:v=1,format=yuv422p[a] -map [a] out.mp4
Any help will be appreciated.
Thanks.
Finally, after a lot of experimenting, I got the solution and a better ffmpeg command than the one above.
ffmpeg -f concat -safe 0 -i img-list.txt -f concat -safe 0 -i audio-list.txt -c:a aac -pix_fmt yuv420p -crf 23 -r 24 -shortest -y -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" video.mp4
In the above command, img-list.txt contains the paths of the image files, one per line.
img-list.txt
file '*/image1.jpg'
file '*/image2.jpg'
file '*/image3.jpg'
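If each image should be shown for a fixed length of time, the concat demuxer also accepts a duration directive after each file entry; a sketch of such a list, with an illustrative 3-second duration per image:
file '*/image1.jpg'
duration 3
file '*/image2.jpg'
duration 3
file '*/image3.jpg'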

Command in ffmpeg to merge images with a generated video

I am using ffmpeg on Android to create a video from multiple images, and that works. Now I want to add more images to the generated video. I have searched many times but have not found a proper ffmpeg command for that. Please help.
Try using this command:
1.
ffmpeg -y -framerate 1/2 -start_number 1 -i img/%1d.jpg -vf scale=720:756 -c:v libx264 -r 10 -pix_fmt yuv420p videos.mp4
If you want to add a watermark image to your generated video, you can use this one:
2.
ffmpeg -i videos.mp4 -i watermark_img.jpg -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy watermark_videos.mp4
To create a video from a set of images, you have to keep all the images in a directory called img, as in the first command, and name them sequentially: 1.jpg, 2.jpg, ..., N.jpg. This sequential numbering format is required by the -i img/%1d.jpg pattern in the first command; -start_number 1 indicates that ffmpeg should start from the image numbered 1 and then go through all the images stored in the img/ directory in order.
To understand the ffmpeg command, please read about all of its CLI parameters:
man ffmpeg
OR
ffmpeg --help
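If the goal is specifically to append new images to an already generated video, one possible approach (a sketch: videos_extra.mp4 is a hypothetical second clip rendered from the new images with the first command, and both clips are assumed to share the same resolution, frame rate, and pixel format) is to join the two files with the concat demuxer:
join.txt:
file 'videos.mp4'
file 'videos_extra.mp4'
ffmpeg -f concat -safe 0 -i join.txt -c copy combined.mp4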

How do I write the ffmpeg command line to generate a video from images that have different sizes and irregular names?

I want to use images to generate a video. I found an ffmpeg option named '-f concat'; can anybody give me more information?
I tried this:
ffmpeg -r 1/5 -s 480*480 -f concat -safe 0 -i test/b.txt -c:v libx264 test/test.mp4
but an error appears:
Option video_size not found.
