How to drawtext colon with pts gmtime using ffmpeg? - android

I am adding a timestamp to a video using FFmpeg with the command below:
ffmpeg -y -i input.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=36:fontcolor=yellow:text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H:%M}'" -preset ultrafast -f mp4 output.mp4
In this command I am using a colon between %H and %M in the text attribute of drawtext:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H:%M}'
because I want to print the time like this: 06:25.
It shows me this error:
Unterminated %{} near '{pts:gmtime:1575526882:%d/%m/%y %H'
How can I print a colon between %H and %M, where %H is the hours and %M is the minutes?

The lazy method is to use %R, which is equivalent to %H:%M:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %R}'
Otherwise you'll have to deal with the annoyance of escaping:
text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H\\\\\:%M}'
You may have to vary the number of backslashes depending on your environment.
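Applied to the original command, the escaped variant looks like this (the same command as in the question, with only the text expression changed):
ffmpeg -y -i input.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=36:fontcolor=yellow:text='%{pts\:gmtime\:1575526882\:%d/%m/%y %H\\\\\:%M}'" -preset ultrafast -f mp4 output.mp4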

Related

Splitting audio tracks with incorrect length - FFMPEG

Version: com.writingminds:FFmpegAndroid:0.3.2
I have an audio file 43 seconds long, and I wrote an algorithm to split it at the first word boundary after each 10-second mark (I used IBM Watson to get each word's ending timestamp). So each segment's duration should always be around 10 to 11 seconds, except of course the 5th one. I have printed my commands so that you will understand my use case better.
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:00.000 -codec copy -t 00:00:10.010 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_1.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -t 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:21.090 -codec copy -t 00:00:30.480 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_3.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:30.480 -codec copy -t 00:00:40.120 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_4.wav
System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:40.120 -codec copy -t 00:00:43.000 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_5.wav
However, when playing the cropped audio files I noticed that segment_1 is about 10 seconds but segment_2 is about 20 seconds, and so on. As a result, audio that belongs in segment_1 also shows up in segment_2, etc. Why is this happening?
Appreciate your response.
-t represents duration. Use -to instead.
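For example, the second split command becomes (a sketch, keeping the same argument format as the printed commands; -ss is moved after -i so that -ss and -to are both read on the output timeline, since with -ss before -i ffmpeg typically resets timestamps and -to would behave like a duration again):
-y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -to 00:00:21.090 -codec copy /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav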

ffmpeg command to merge images and a generated video

I am using ffmpeg on Android to create a video from multiple images, and that works. Now I want to add more images to the generated video. I have searched many times but have not found a proper ffmpeg command for that. Please help.
Try using this command,
1.
ffmpeg -y -framerate 1/2 -start_number 1 -i img/%1d.jpg -vf scale=720:756 -c:v libx264 -r 10 -pix_fmt yuv420p videos.mp4
If you want to add a watermark image to your generated video, you can use this one:
2.
ffmpeg -i videos.mp4 -i watermark_img.jpg -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy watermark_videos.mp4
In order to create a video from random images, you have to keep all the images in a directory called img, as you can see in the first command, and name them with sequential numbers: 1.jpg, 2.jpg, ..., N.jpg. This sequential naming format is required by the -i img/%1d.jpg pattern in the first command. Here -start_number 1 indicates it should start from the image numbered 1 and go on through all the images stored in the img/ directory, sequentially, as illustrated below.
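For example, the img/ directory would contain (hypothetical file names following that convention):
img/1.jpg
img/2.jpg
img/3.jpg
...
img/N.jpg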
In order to understand the ffmpeg command, please read about all of its CLI parameters:
man ffmpeg
OR
ffmpeg --help

How do I write the ffmpeg command line to generate a video from images that have different sizes and are named irregularly?

I want to use images to generate a video. I found the ffmpeg option '-f concat'; can anybody give me more information?
I tried this:
ffmpeg -r 1/5 -s 480*480 -f concat -safe 0 -i test/b.txt -c:v libx264 test/test.mp4
but an error appears:
Option video_size not found.
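For reference, the concat demuxer reads a plain-text list file, so test/b.txt would need to contain entries like the following (a hypothetical sketch based on the concat demuxer's documented format; duration sets how long each image is shown):
file '1.jpg'
duration 5
file '2.jpg'
duration 5
file '3.jpg'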

Android - Concat 2 videos using com.netcompss.loader.LoadJNI

I'm using com.netcompss.loader.LoadJNI (FFmpeg4Android) and it was working fine until I tried to concat one video with an audio track and another video without an audio track.
I was using this command line:
ffmpeg -y -i /storage/emulated/0/app/1.mp4 -i /storage/emulated/0/DCIM/2.mp4 -strict experimental -filter_complex [0:v]scale=640x360,setsar=1:1[v0];[1:v]scale=1280x720,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1 -ab 48000 -ac 2 -ar 22050 -s 640x360 -r 30 -vcodec libx264 -acodec aac -crf 18 /storage/emulated/0/app/out.mp4
1.mp4 has a resolution of 640x360 and has an audio track.
2.mp4 has a resolution of 1280x720 and has no audio track.
In vk.log it was showing this:
Setting 'n' to value '2'
Setting 'v' to value '1'
Setting 'a' to value '1'
Stream specifier ':a' in filtergraph description [0:v]scale=1280x720,setsar=1:1[v0];[1:v]scale=1280x720,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1 matches no streams.
exit_program: 1
Cannot find a matching stream for unlabeled input pad 3 on filter Parsed_concat_4
I'm not good with ffmpeg, so I made some changes to the command line, without success.
When all the videos being concatenated have audio tracks, there's no problem. But when one of the videos has no audio track, it fails.
What would be the correct command line in this case?
The concat filter requires all segments to have the same number of (matching) streams, and in this case, since you only have two files, where the first one has the audio, you can skip the audio concat.
ffmpeg -y -i /storage/emulated/0/app/1.mp4 -i /storage/emulated/0/DCIM/2.mp4 \
-strict experimental -filter_complex \
"[0:v]scale=640x360,setsar=1[v0];[1:v]scale=640x360,setsar=1[v1];[v0][v1]concat=n=2[v]" \
-map "[v]" -r 30 -vcodec libx264 -crf 18 \
-map 0:a -acodec aac -ab 48000 -ac 2 -ar 22050 /storage/emulated/0/app/out.mp4
Since you are scaling the final video to 640x360, there's no point scaling the 2nd video to 1280x720; just scale it directly to final size. And once you do that, there's no need to insert another scaler using the -s option.
The answer from Mulvya works fine and answers what I asked, but it keeps only the audio of the first video. In my case a user may want to concat, for example, 4 videos of which only one has no audio track; the user wants to keep the audio of the other videos.
My solution was to check each video before the concat. If a video has no audio track, I add a silent audio track. Then my concat command line runs without any problem; see the sketch below.
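A minimal sketch of that pre-processing step, using ffmpeg's anullsrc audio source (the file names are placeholders, and the sample rate and channel layout are assumptions chosen to match the concat command above):
ffmpeg -y -i no_audio.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=22050 \
-shortest -c:v copy -c:a aac -strict experimental with_silent_audio.mp4
The -shortest flag stops the output when the video stream ends, so the generated silence matches the video's duration.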

Live streaming of mp4 from Linux to Android with ffmpeg

I want to stream endless live video data, e.g. from my webcam of my Ubuntu machine to an Android client. Therefore, I use ffmpeg (2.2.git) and ffserver (2.2.git) in order to encode with H.264, wrap into mp4 and, finally, stream via RTSP.
I succeed in streaming (sending, receiving, and playing) files, e.g. with ffmpeg configured like this:
ffmpeg -re -i input.mp4 \
-pix_fmt yuv420p \
-c:v libx264 -profile:v baseline -level 3.0 \
-c copy http://127.0.0.1:8888/feed1.ffm
However, even with the help of (1), (2), (3) and other articles, I have not managed to stream 'endless' webcam data successfully - let alone anything Android-compatible. The idea is to use fragmented mp4.
When I try the following:
ffmpeg -re -f v4l2 -video_size 1280x720 -i /dev/video0 \
-pix_fmt yuv420p \
-c:v libx264 -profile:v baseline -level 3.0 \
-f mp4 -movflags frag_keyframe+empty_moov \
-c copy http://127.0.0.1:8888/feed1.ffm
ffmpeg shows errors:
[NULL @ 0x26b3ce0] [Eval @ 0x7fffa0738a20] Undefined constant or missing '(' in 'baseline'
[NULL @ 0x26b3ce0] Unable to parse option value "baseline"
[NULL @ 0x26b3ce0] Error setting option profile to value baseline.
[mp4 @ 0x26b2f20] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[mp4 @ 0x26b2f20] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
With the following slight differences:
ffmpeg \
-re -f v4l2 -i /dev/video0 \
-pix_fmt yuv420p \
-c:v libx264 \
-f mp4 -movflags frag_keyframe+empty_moov http://127.0.0.1:8888/feed2.ts
ffmpeg starts encoding but stops after ~2 seconds with the following output:
[libx264 @ 0x2549aa0] frame= 0 QP=23.20 NAL=3 Slice:I Poc:0 I:1200 P:0 SKIP:0 size=15471 bytes
It crashes the shell.
I also tried multiple variants of the configurations above. v4l2 itself works fine; I assume that because I can, for example, record videos from my webcam.
What do I have to do in order to stream webcam data?
EDIT: I use the combination of H.264, its baseline profile, and mp4 because I know they are compatible with Android. As I said, streaming works well with ordinary files.
Try outputting HLS for Android, which is basically the stream cut up into a playlist of short MPEG-TS segments.
E.g.
ffmpeg -re -f v4l2 -video_size 1280x720 -i /dev/video0 \
-acodec libfdk_aac -b:a 64k -pix_fmt yuv420p \
-vcodec libx264 -x264opts level=41 -r 25 -profile:v baseline \
-b:v 1500k -maxrate 2000k -force_key_frames 50 -s 640x360 \
-map 0 -flags -global_header -f segment -segment_list index_1500.m3u8 \
-segment_time 10 -segment_format mpegts -segment_list_type m3u8 segment%05d.ts
I haven't had a chance to test this, but it is copied from previous code snippets, so it should work. There have also been a few recent changes to ffmpeg that I need to catch up on, and one of your error messages has an open bug.
