I have recently used the ffmpeg library for Android to compress a 10-second video of roughly 25 MB. Following are the commands I tried:
ffmpeg -i /video.mp4 -vcodec h264 -b:v 1000k -acodec mp2 /output.mp4
OR
ffmpeg -i input.mp4 -vcodec h264 -crf 20 output.mp4
Both commands were too slow. I canceled the task before it completed because it was taking too long: more than 8 minutes to process just 20% of the video. Time is really critical for me, so I can't opt for ffmpeg as it stands. I have the following questions:
Is there something wrong with the commands, or is ffmpeg simply slow?
If it's slow, is there another well-documented and reliable way/library for video compression that I can use on Android?
Your file is in an mp4 container and already has its streams encoded with some predefined codec.
Now, the size of any container (not specifically mp4) depends on what kind of compression (loosely, the codec) is used for the data. This is why you will see different sizes for the same content in different formats.
Other parameters also affect the size of the file, e.g. frame rate, resolution, audio bitrate, etc. If you reduce them, the file becomes smaller. For example, on YouTube you can choose to play a video at a lower resolution when bandwidth is an issue.
However, if you choose to do this you will have to re-process the entire file, and that is going to take a lot of time, since you are demuxing the container, decoding the streams, applying filters (reducing the frame rate, etc.), re-encoding, and then remuxing. This entire process is not worth a few extra MB of savings unless you have a compelling use case.
One solution is to use a more powerful machine, but that is limited by the architecture/constraints of the application. To answer specifically for ffmpeg: tweaking the command won't make much difference.
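If you do have to re-encode, the single biggest speed lever in ffmpeg is the x264 preset. A sketch along these lines (the bitrate and scale values are illustrative assumptions, not tuned for your content) trades compression efficiency for encoding speed, and lowering the resolution with the scale filter also reduces the work per frame:
ffmpeg -i /video.mp4 -vcodec libx264 -preset ultrafast -b:v 1000k -vf scale=-2:480 -acodec aac /output.mp4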
Related
I have been working with FFmpeg for the past 7 days. I need to create a video where I have to do the following:
Concatenate a few images, picked from Android storage, into a video.
Add a music file [also picked from Android storage].
Set the duration each image is shown in the video, e.g. if 3 images are picked, the total duration of the video should be 3 × the duration chosen by the user.
What I have done so far:
I am using implementation 'nl.bravobit:android-ffmpeg:1.1.5' for the FFmpeg prebuilt binaries.
Following is the command used to concatenate the images and set the duration:
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
Note:
By passing the duration chosen by the user as the frame rate, I am able to set how long each image is shown.
Concat works well and the images are merged to form the video.
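For reference, the concat demuxer expects path_of_concat_file.txt to contain one file directive per image, along these lines (hypothetical paths):
file '/storage/emulated/0/Pictures/img1.jpg'
file '/storage/emulated/0/Pictures/img2.jpg'
file '/storage/emulated/0/Pictures/img3.jpg'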
Problem:
When I try to add audio in the same process, it runs for a very long time, even for a small audio file, using the following command:
ffmpeg -r 1/duration_selected_by_user -i path_of_audio_file.mp3 -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
It looks like there is some issue with the command, as I am not very familiar or experienced with the technology. How can I improve the performance of merging images and audio to get better results?
Important Links related to context:
https://github.com/bravobit/FFmpeg-Android
As @HB mentioned in the comments:
Try adding -preset ultrafast after -pix_fmt yuv420p
A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.
The available presets in descending order of speed are:
ultrafast
superfast
veryfast
faster
fast
medium – default preset
slow
slower
veryslow
placebo – ignore this as it is not useful
Reference: https://trac.ffmpeg.org/wiki/Encode/H.264#Preset
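Applied to the audio-merge command from the question, the result would look something like this (untested sketch; the optional -shortest ends the output when the shorter of the two input streams runs out):
ffmpeg -r 1/duration_selected_by_user -i path_of_audio_file.mp3 -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -preset ultrafast -shortest path_of_output_video_file.mp4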
I have a project based off of https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/, which uses an older copy of the ffmpeg libraries from http://bambuser.com/opensource. Within the C++ code in this project we have the following lines of code:
unsigned long long current = GetCurrentTimeInNanoseconds();
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
__android_log_print(ANDROID_LOG_DEBUG, "getFrame>>>>", "decode video time: %llu", (GetCurrentTimeInNanoseconds() - current)/1000000);
This code continually reports between 60 and 90 ms to decode each frame on an Xperia Ion, using a 1280x720 h264 source video file. Other processing to get the frame out to the screen takes an average of 30ms more with very little variation. This leads to frame rates of 10-11fps.
Ignoring that other processing, a decode that takes an average of 75ms would result in 13fps. However, when I browse my SD card and click on that mp4 file to open it in the native viewer, it shows at a full 30fps. Further, when I open a 1920x1080 version of the same mp4 in the native viewer it also runs at a full 30fps without stutter or lag. This implies (to my novice eye) that something is very very wrong, as the hardware is obviously capable of decoding many times faster.
What flags or options can be passed to avcodec_decode_video to optimize decode speed to match that of the native viewer? Can optimizations be made elsewhere to speed things up further? Is there a reason the native viewer can decode almost an order of magnitude faster (taking into account the 1920x1080 results)?
EDIT
The answer below is very helpful, but is not practical for me at this time. In the meantime I have managed to decrease decoding time by 70% with some optimized encoding flags found through many, many hours of trial and error. Here are the ffmpeg arguments I'm using for encoding, in case they help anyone else who stumbles across this post:
ffmpeg.exe -i "#inputFilePath#" -c:v libx264 -preset veryslow -g 2 -y -s 910x512 -b 5000k -minrate 2000k -maxrate 8000k -pix_fmt yuv420p -tune fastdecode -coder 0 -flags -loop -profile:v main -x264-params subme=5:ref=4 "#ouputFilePath#"
With these settings ffmpeg is decoding frames in 20-25 ms, though with the sws_scale and then writing out to the texture I'm still hovering at ~22 fps on an Xperia Ion, at a lower resolution than I'd like.
The native viewer uses the hardware h264 decoder, while ffmpeg is usually compiled software-only. You would need to build ffmpeg with libstagefright support.
(Note: the libstagefright option has since been pulled from FFmpeg.)
I'm not an expert in video editing, but I want to understand the logic of WhatsApp's video processing.
First of all, I have noticed that whatever the file is, WhatsApp limits uploaded videos to 16 MB, beyond which it crops the video so it doesn't exceed the limit. Is this a convention or a personal choice?
Secondly, when a video is recorded using the camera it's not compressed by default, so WhatsApp compresses it, using FFMPEG I guess, and it takes no time. (I tried a 1-minute 1920x1080 video of 125 MB; it became 640x360 at 5 MB in no time, and the upload started automatically.) How might they do this? And why the choice of 640x360? It seems very fast for two asynchronous tasks: compression + upload.
When I run the compression command ffmpeg -y -i in.mp4 -codec:v libx264 -crf 23 -preset medium -codec:a libfdk_aac -vbr 4 -vf scale=-1:640,format=yuv420p out.mp4, it takes approximately 1 minute and the video gets rotated!
Finally, when we download a video from YouTube it's already compressed (I guess) and WhatsApp doesn't even try to compress it. So I think it automatically detects that the video is already compressed. How can we detect this?
Thank you.
Here are possible answers to your questions:
Question 1: It's a personal choice. The WhatsApp team is trying to offer the best user experience (UX) they can to users of their app, which is why they keep a 16 MB limit for video files. Imagine how long it would take to upload a file of about 125 MB. Hence, the app compresses the file for a quicker upload and a seamless experience.
Question 2: I guess you already answered this yourself: asynchronous programming. The large video file you feed it gets encoded into a compressed format according to the algorithm they have written for the app. As devs, we all know there are things you can do to speed up execution. I guess they implemented their own algorithm, using asynchronous programming to speed up the process. The ffmpeg library you mentioned is, I guess, coded in C, which I think doesn't support async calls (not so sure though). After this, the upload takes over.
Questions 3 and final: Codecs are standards. If you encode a video file to MPEG-4, then try to re-encode it to MPEG-4, even using another program, you will get much the same result, as long as both programs use the same encoding standard, i.e. they didn't implement a program-specific algorithm (that takes years of work). So when WhatsApp tries to encode such a file, it gets the same result.
Hope I have been able to answer your questions.
Use MediaCodec instead of ffmpeg for better performance. If your use case is only to compress video, MediaCodec is the best option on Android. It lets you write the code asynchronously and gives you a lot of freedom to optimize your algorithm.
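A minimal sketch of what configuring such an encoder looks like (the class name, resolution, and bitrate are illustrative assumptions; the decode/mux plumbing around it is omitted):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

class EncoderSketch {
    // Configure a hardware H.264 encoder for a 640x360, ~1 Mbps output.
    // Frames are fed by rendering into the encoder's input surface; the
    // compressed output buffers still have to be muxed (e.g. with MediaMuxer).
    static MediaCodec createEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 360);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 1_000_000);  // ~1 Mbps target
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);  // keyframe every second

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface input = encoder.createInputSurface();  // render source frames here
        encoder.start();
        return encoder;
    }
}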
I have been playing around with ffmpeg and video encoding, and even though my mp4s work great on desktop (they are smooth, etc.), they are terrible on mobile devices: they stutter and load very slowly, and I am trying to figure out why.
As an example, I made a page using the MediaElement.js plugin: http://mediaelementjs.com/ and on it I first placed the video that comes with MediaElement.js. It worked well: it scaled to desktop and mobile, loaded quickly, and played without any stutter.
However, when I loaded my video it was slow and full of stutter, but only on mobile. So I thought it might be S3 (where it is hosted), but I saved the file locally and got the same thing.
I am hoping someone who knows h.264 and/or ffmpeg can point me toward the cause; here is the command I am currently running:
ffmpeg -i $input_file_name -vcodec libx264 -r 100 -bt 300k -ac 2 -ar 48000 -ab 192k -strict -2 -y $output_temp_file 2>&1
So what have I missed?
So what have I missed?
Mobile devices have very limited computing power. You are trying to play a 100 fps video file; there isn't a mobile device I know of that can handle such a framerate.
First, change the framerate to a reasonable value, then adjust the resolution, set an encoding profile (baseline, for example) and the video bitrate (quality, rate factor). After that you can try out your files; a starting point is sketched below.
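As a concrete starting point (the values here are illustrative, not tuned), something along these lines should be far friendlier to phones; -movflags +faststart also moves the mp4 index to the front of the file so playback can begin before the whole file has downloaded:
ffmpeg -i $input_file_name -vcodec libx264 -profile:v baseline -level 3.0 -r 30 -crf 23 -ac 2 -ar 44100 -b:a 128k -movflags +faststart -y $output_temp_file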
I am developing a player on Android using ffmpeg. However, I have found that avcodec_decode_video2 is very slow: sometimes it takes about 0.1, even 0.2 seconds to decode a frame from a video with 1920×1080 resolution.
How can I improve the speed of avcodec_decode_video2() ?
If your device has the necessary hardware+firmware capabilities, you could use ffmpeg with libstagefright support.
Update: here is an easy procedure to decide whether it is worthwhile to switch to libstagefright on your device for a given class of videos: use ffmpeg on your PC to convert a representative video stream into mp4:
ffmpeg -i your_video -an -vcodec copy test.mp4
and try to open the resulting file with the stock video player on your device. If the video plays with reasonable quality, you can use libstagefright with ffmpeg to improve your player app. If you see "Cannot Play Video", your device hw+fw does not support the video.
That sounds about right. HD video takes a lot of CPU. Some codecs support multithreaded decoding if your device has multiple cores, but that will consume massive amounts of battery and heat up the device. This is why most mobile devices use specialized hardware decoders instead of the CPU. On Android, using the MediaCodec API instead of libavcodec should invoke the hardware decoder.
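For reference, a minimal MediaCodec decode loop looks roughly like this (the class name is an illustrative assumption; the file path and output Surface come from the caller, and timeout/error handling is trimmed):

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;
import java.nio.ByteBuffer;

class HwDecodeSketch {
    // Decode the first video track of 'path' to 'surface', letting Android
    // pick the (usually hardware) decoder for the track's mime type.
    static void decode(String path, Surface surface) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);

        MediaFormat format = null;
        String mime = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            String m = f.getString(MediaFormat.KEY_MIME);
            if (m != null && m.startsWith("video/")) {
                extractor.selectTrack(i);
                format = f;
                mime = m;
                break;
            }
        }
        if (format == null) return;  // no video track found

        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, surface, null, 0);  // decode straight to the surface
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int in = decoder.dequeueInputBuffer(10_000);
                if (in >= 0) {
                    ByteBuffer buf = decoder.getInputBuffer(in);
                    int size = extractor.readSampleData(buf, 0);
                    if (size < 0) {  // end of stream: flush the decoder
                        decoder.queueInputBuffer(in, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(in, 0, size,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int out = decoder.dequeueOutputBuffer(info, 10_000);
            if (out >= 0) {
                decoder.releaseOutputBuffer(out, true);  // true = render to surface
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }
}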