Compression in Android taking a long time

I am new to video-related operations. I am using FFmpeg to compress a video in my Android application.
I am trying to compress a 50 MB video down to 10 MB by executing the following command:
ffmpeg -y -i /videos/in.mp4 -strict experimental -s 640x480 -r 25 -vcodec mpeg4 -b 150k -ab 48000 -ac 2 -ar 22050 /videos/out.mp4
The video compresses successfully, but it takes more than 150 seconds, and I cannot figure out how to reduce that time.
How can I change this command so the process completes in less time?

You can't do much here, but there are a few things you can try:
1. Set the -preset value to fast/veryfast/ultrafast.
2. Set the -crf value (usually 18 to 28).
3. If you do not want to alter your audio/video codecs at all, retain the original streams with -c copy; this can drastically reduce the execution time, depending on your use case.
Note that the -preset options (including "ultrafast") apply to x264; if you want the mpeg4 video codec instead, try decreasing the resolution or other options it allows.
Also read this: https://github.com/WritingMinds/ffmpeg-android-java/issues/54
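Since the goal here is a specific output size (roughly 10 MB from 50 MB), it can help to compute the video bitrate from the target size and the clip duration instead of guessing a value like 150k. A minimal sketch; the function name is mine, and the 150-second duration and 48 kb/s audio budget in the example are assumptions for illustration:

```python
def target_video_bitrate_kbps(target_mb: float, duration_s: float,
                              audio_kbps: float = 48.0) -> float:
    """Video bitrate (kb/s) that should make the output file fit in
    target_mb megabytes, after reserving room for the audio track."""
    # megabytes -> kilobits, spread over the clip's duration
    total_kbps = target_mb * 8 * 1024 / duration_s
    # whatever is not audio can go to video; never return a nonsensical value
    return max(total_kbps - audio_kbps, 1.0)

# e.g. a 150 s clip that should come out around 10 MB:
kbps = target_video_bitrate_kbps(10, 150)  # ~498 kb/s, i.e. pass -b:v 498k
```

The resulting number would replace the fixed `-b 150k` (or better, `-b:v`) in the command above.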


Compress Video with FFmpeg

I want to compress a 1-minute video in less than a minute on Android using ffmpeg, at the best possible quality. But when the time is short, the quality is low, and when the quality is good, it takes a long time to compress.
Do you know the right command?
For "best quality" at fastest encoding speed:
ffmpeg -i input.mp4 -c:v libx264 -crf 18 -preset veryfast output.mp4
This assumes you want H.264 + AAC in MP4.
If input audio is AAC add -c:a copy.
If -preset veryfast is too slow use -preset ultrafast.
See FFmpeg Wiki: H.264 for more info and examples.
FFmpeg cannot do any magic: lower quality takes less time to encode than higher quality, so you have to choose the right compromise between compression speed and video quality.
You didn't tell us the video's dimensions (WxH), its size in bytes, or the current audio and video codecs (H.264 or something else).
For example, one minute of 8K video takes more than one minute to convert with the HEVC codec at a 10 Mbps bitrate, and that cannot be reduced.
Compress video using FFMPEG:
ffmpeg -i videoPath -s 1280x720 -vf transpose=2 -metadata:s:v:0 rotate=90 -b:v 150k -vcodec mpeg4 -acodec copy outputFile
I solved the problem with this command:
ffmpeg -i innerPath -c:v libx264 -preset superfast -b:v 1.5M -c:a aac -b:a 256k targetFilePath
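When the real constraint is the output file size rather than a quality level, two-pass encoding with a computed bitrate is the usual approach: the first pass only analyzes the input, the second allocates bits to hit the target. A sketch, not tested against this thread's files; the 1500k video and 128k audio budgets are placeholder values to adapt to your clip's duration and size target:

```shell
# pass 1: analysis only, no audio, output discarded
ffmpeg -y -i input.mp4 -c:v libx264 -preset veryfast -b:v 1500k -pass 1 -an -f mp4 /dev/null
# pass 2: real encode using the pass-1 statistics
ffmpeg -i input.mp4 -c:v libx264 -preset veryfast -b:v 1500k -pass 2 -c:a aac -b:a 128k output.mp4
```

Two passes take roughly twice as long as one, so this trades speed for size accuracy, which is the opposite trade-off from the `-preset ultrafast` advice above.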

Videos splitting using FFMpeg in android

Hi, I am using FFmpeg to split a video (39 MB) into 15 parts using the command below, and the splitting itself works; the parts are not damaged.
The problem is that my main file is 39 MB, but the split files together are much larger than the original: when I add up their sizes, I get 69 MB.
Can someone please suggest a fix?
-i "input.mp4" -ss 00:00:01 -t 00:00:03 -acodec copy -vcodec copy "output.mp4"
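A likely cause of the size growth: with stream copy, a cut can only begin at a keyframe, so when you run one `-ss`/`-t` command per part, consecutive parts can overlap and the totals add up to more than the original. If the goal is simply to split one file into equal, non-overlapping parts, the segment muxer may be a better fit. A sketch, assuming the ffmpeg binary is invokable and segment length (here 3 seconds) is adapted to your clip:

```shell
# split input.mp4 into consecutive ~3 s parts without re-encoding
ffmpeg -i input.mp4 -c copy -f segment -segment_time 3 -reset_timestamps 1 part%03d.mp4
```

Because cut points still snap to keyframes, the parts will not be exactly 3 s each, but nothing is duplicated between them.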

Command in ffmpeg for merge image and generated video

I am using FFmpeg in Android to create a video from multiple images; that works, and now I want to add more images to the generated video. I have searched many times but have not found a proper command for that. Please help.
Try using these commands.
1.
ffmpeg -y -framerate 1/2 -start_number 1 -i img/%1d.jpg -vf scale=720:756 -c:v libx264 -r 10 -pix_fmt yuv420p videos.mp4
If you want to add watermark image to your generated video, you can use this one:
2.
ffmpeg -i videos.mp4 -i watermark_img.jpg -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy watermark_videos.mp4
To create a video from a set of images, keep all the images in a directory called img, as in the first command, and name them with sequential numbers: 1.jpg, 2.jpg, ..., N.jpg. The sequential naming is required by the -i img/%1d.jpg pattern in the first command; -start_number 1 tells FFmpeg to start with the image numbered 1 and then work through all the images stored in the img/ directory in order.
To understand the ffmpeg command, read through its CLI parameters:
man ffmpeg
OR
ffmpeg --help
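For the actual question asked, adding more images to an already generated video, one approach is to render the new images to a second clip with the same first command and then join the two clips with the concat demuxer. A sketch, assuming both clips were produced with identical codec, resolution and frame-rate settings (required for `-c copy` concatenation); `more.mp4`, `list.txt` and `combined.mp4` are hypothetical names:

```shell
# videos.mp4 = existing clip, more.mp4 = clip rendered from the new images
printf "file 'videos.mp4'\nfile 'more.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp4
```

If the two clips differ in resolution or codec, they must be re-encoded with the concat filter instead of copied.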

Android - Concat 2 videos using com.netcompss.loader.LoadJNI

I'm using com.netcompss.loader.LoadJNI (FFmpeg4Android), and it was working fine until I tried to concat one video with an audio track and another video without an audio track.
I was using this command line:
ffmpeg -y -i /storage/emulated/0/app/1.mp4 -i /storage/emulated/0/DCIM/2.mp4 -strict experimental -filter_complex [0:v]scale=640x360,setsar=1:1[v0];[1:v]scale=1280x720,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1 -ab 48000 -ac 2 -ar 22050 -s 640x360 -r 30 -vcodec libx264 -acodec aac -crf 18 /storage/emulated/0/app/out.mp4
1.mp4 has resolution at 640x360 and has audio track.
2.mp4 has resolution at 1280x720 and has no audio track.
On vk.log it was showing this:
Setting 'n' to value '2'
Setting 'v' to value '1'
Setting 'a' to value '1'
Stream specifier ':a' in filtergraph description [0:v]scale=1280x720,setsar=1:1[v0];[1:v]scale=1280x720,setsar=1:1[v1];[v0][0:a][v1][1:a] concat=n=2:v=1:a=1 matches no streams.
exit_program: 1
Cannot find a matching stream for unlabeled input pad 3 on filter Parsed_concat_4
I'm not good with ffmpeg, so I did some changes to the command line without success.
When all the videos being concatenated have audio tracks, there's no problem. But when one of the videos has no audio track, it fails.
What would be the correct command line in this case?
The concat filter requires all segments to have the same number of (matching) streams, and in this case, since you only have two files, where the first one has the audio, you can skip the audio concat.
ffmpeg -y -i /storage/emulated/0/app/1.mp4 -i /storage/emulated/0/DCIM/2.mp4 \
-strict experimental -filter_complex \
[0:v]scale=640x360,setsar=1[v0];[1:v]scale=640x360,setsar=1[v1]; \
[v0][v1]concat=n=2[v] -map "[v]" -r 30 -vcodec libx264 -crf 18 \
-map 0:a -acodec aac -ab 48000 -ac 2 -ar 22050 /storage/emulated/0/app/out.mp4
Since you are scaling the final video to 640x360, there's no point scaling the 2nd video to 1280x720; just scale it directly to final size. And once you do that, there's no need to insert another scaler using the -s option.
The answer from Mulvya works fine and answers what I asked, but it keeps only the audio from the first video. In my case a user may want to concat, for example, 4 videos of which only one has no audio track, and keep the audio of the other videos.
My solution was to check each video before the concat: if a video has no audio track, I add a silent audio track to it. Then my concat command line runs without any problem.
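The pre-processing step described above can be done with FFmpeg's anullsrc source: mux a silent AAC track into any clip that lacks audio before concatenating. A sketch; the sample rate and channel layout are chosen to match the concat command earlier in this thread, and `noaudio.mp4`/`withaudio.mp4` are placeholder names:

```shell
# add a silent stereo track to a clip that has no audio, without re-encoding video
ffmpeg -i noaudio.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=22050 \
  -shortest -c:v copy -c:a aac withaudio.mp4
```

With every input carrying both a video and an audio stream, the original `concat=n=2:v=1:a=1` filtergraph finds a matching `:a` stream in each segment.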

Live streaming of mp4 from Linux to Android with ffmpeg

I want to stream endless live video data, e.g. from my webcam of my Ubuntu machine to an Android client. Therefore, I use ffmpeg (2.2.git) and ffserver (2.2.git) in order to encode with H.264, wrap into mp4 and, finally, stream via RTSP.
I succeed in streaming (send, receive and play) files, e.g. with ffmpeg being configured like that:
ffmpeg -re -i input.mp4 \
-pix_fmt yuv420p \
-c:v libx264 -profile:v baseline -level 3.0 \
-c copy http://127.0.0.1:8888/feed1.ffm
However, even with the help of (1), (2), (3) and other articles, I have not succeeded in streaming 'endless' webcam data - let alone anything Android-compatible. The idea is to use fragmented mp4.
When I try the following:
ffmpeg -re -f v4l2 -video_size 1280x720 -i /dev/video0 \
-pix_fmt yuv420p \
-c:v libx264 -profile:v baseline -level 3.0 \
-f mp4 -movflags frag_keyframe+empty_moov \
-c copy http://127.0.0.1:8888/feed1.ffm
ffmpeg shows errors:
[NULL # 0x26b3ce0] [Eval # 0x7fffa0738a20] Undefined constant or missing '(' in 'baseline'
[NULL # 0x26b3ce0] Unable to parse option value "baseline"
[NULL # 0x26b3ce0] Error setting option profile to value baseline.
[mp4 # 0x26b2f20] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
[mp4 # 0x26b2f20] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
With the following slight differences:
ffmpeg \
-re -f v4l2 -i /dev/video0 \
-pix_fmt yuv420p \
-c:v libx264 \
-f mp4 -movflags frag_keyframe+empty_moov http://127.0.0.1:8888/feed2.ts
ffmpeg starts encoding but stops after ~2 seconds with the following output:
[libx264 # 0x2549aa0] frame= 0 QP=23.20 NAL=3 Slice:I Poc:0 I:1200 P:0 SKIP:0 size=15471 bytes
It crashes the shell.
I also tried multiple variants of the configurations above. v4l2 itself seems fine - I assume so because I can, for example, record videos from my webcam.
What do I have to do in order to stream webcam data?
EDIT: I use the combination of H.264, H.264's baseline profile and mp4 because I know about Android's compatibility. As I said, streaming works well when used with ordinary files.
Try outputting HLS for Android, which is basically the stream cut up into short segments listed in a playlist.
E.g.
ffmpeg -re -f v4l2 -video_size 1280x720 -i /dev/video0 -acodec libfdk_aac -b:a 64k -pix_fmt yuv420p -vcodec libx264 -x264opts level=41 -r 25 -profile:v baseline -b:v 1500k -maxrate 2000k -force_key_frames 50 -s 640x360 -map 0 -flags -global_header -f segment -segment_list index_1500.m3u8 -segment_time 10 -segment_format mpegts -segment_list_type m3u8 segment%05d.ts
I haven't had a chance to test this, but it is copied from previous code snippets, so it should work. There have also been a few recent changes in ffmpeg that I need to catch up on, and one of your error messages has an open bug.
