I have been working with FFmpeg for the past 7 days. I am required to create a video, for which I need to perform the following:
Concatenate a few images, picked from Android storage, into a video.
Add a music file [also picked from Android storage].
Set the duration each image is shown in the video, e.g. if 3 images are picked then the total duration of the video should be 3 × the duration chosen by the user.
What I have done so far:
I am using implementation 'nl.bravobit:android-ffmpeg:1.1.5' for the prebuilt FFmpeg binaries.
Following is the command that has been used to concatenate the images and set the duration:
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
Note:
By using the duration chosen by the user as the inverse frame rate (e.g. -r 1/3 shows each image for 3 seconds), I am able to set how long each image is shown.
The concat works well and the images are merged to form the video.
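For reference, the concat list file (path_of_concat_file.txt above) is a plain text file in the concat demuxer's format; with purely illustrative paths, it might look like this:
file '/storage/emulated/0/Pictures/img1.jpg'
file '/storage/emulated/0/Pictures/img2.jpg'
file '/storage/emulated/0/Pictures/img3.jpg'
(The concat demuxer also accepts per-entry duration directives as an alternative to setting -r, if that turns out to be more convenient.)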
Problem:
When I try to add audio in the same process, the progress runs for a long time, even for a small audio file, using the following command:
ffmpeg -r 1/duration_selected_by_user -i path_of_audio_file.mp3 -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
It looks like there is some issue in the command, as I am not very familiar or experienced with this technology. How can I improve the performance of merging images and audio to get better results?
Important Links related to context:
https://github.com/bravobit/FFmpeg-Android
As #HB mentioned in the comments:
Try adding -preset ultrafast after -pix_fmt yuv420p
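Applied to the command from the question, that suggestion would look something like the sketch below. Note that the audio input has also been moved after the image list here, so the -r input option stays attached to the concat input, and -shortest is an extra, commonly paired tweak (not part of the original suggestion) that stops encoding when the shorter stream ends:
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -i path_of_audio_file.mp3 -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -preset ultrafast -shortest path_of_output_video_file.mp4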
A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.
The available presets in descending order of speed are:
ultrafast
superfast
veryfast
faster
fast
medium – default preset
slow
slower
veryslow
placebo – ignore this as it is not useful
Reference: https://trac.ffmpeg.org/wiki/Encode/H.264#Preset
Related
I am adding a watermark to a video using FFmpeg, where I use -preset ultrafast in the FFmpeg command. This adds the watermark to the video very fast, but because of it my output video size increased.
ffmpeg -i input.mp4 -i mt.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy -preset ultrafast output.mp4
Without using -preset ultrafast:
input video size 5 MB, output video size 5 MB
Using -preset ultrafast:
input video size 5 MB, output video size 11 MB
As the FFmpeg documentation says:
A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.
In other words, there is a trade-off between encoding speed and space optimization/compression.
Try going for another preset like veryfast or superfast.
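For example, a sketch based on the watermark command above (the -crf value is an assumption to tune, not taken from the question): keeping an explicit quality target alongside a faster-but-not-ultrafast preset lets you trade encoding speed against output size deliberately:
ffmpeg -i input.mp4 -i mt.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy -preset veryfast -crf 23 output.mp4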
I'm streaming live from my GoPro inside my Android app. I use FFmpeg to receive the streaming data from the GoPro and VLC to play it in a SurfaceView. I used the code provided by KonradIT here. The main command used for FFmpeg is:
-fflags nobuffer -f mpegts -i udp://:8554 -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=64
and the options for libVLC are:
options.add("--aout=opensles");
options.add("--audio-time-stretch");
options.add("-vvv");
The output is rather poor. It's laggy and runs at about 17 FPS. And one annoying thing is that the streamed picture is very small, and as far as I tried, there was no way to make it larger or stretch it.
I want to know if there is any command to speed up the streaming (in any way, even by reducing the quality), either on the side of FFmpeg or VLC.
If it is only doing a relay of packets, try this:
ffmpeg -fflags nobuffer -f mpegts -i udp://:8554 -c:v copy -c:a copy -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=1316
You can play with different packet sizes based on the MTU size of your network (<1500). Check the delay.
With this command we are not transcoding or resizing the incoming packets, only relaying them.
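On the VLC side, one knob worth experimenting with (an assumption, not part of the original setup) is the network cache; lowering it can reduce latency at the cost of smoothness. It would be added to the same options list as above, e.g.:
options.add("--network-caching=150"); // illustrative value in ms; tune for your network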
I have recently used the FFmpeg library for Android to compress a video of length 10 seconds and size nearly 25 MB. Following are the commands I tried to use:
ffmpeg -i /video.mp4 -vcodec h264 -b:v 1000k -acodec mp2 /output.mp4
OR
ffmpeg -i input.mp4 -vcodec h264 -crf 20 output.mp4
Both of the commands were too slow. I canceled the task before it completed because it was taking too much time; it took more than 8 minutes to process JUST 20% of the video. Time is really critical for me, so I can't opt for FFmpeg. I have the following questions:
Is there something wrong with the commands, or is FFmpeg slow anyway?
If it's slow, is there any other well-documented and reliable way/library for video compression that I can use on Android?
Your file is in an MP4 container and already has its streams encoded with some predefined codec.
Now, the size of any container (not specifically MP4) depends on what kind of compression (loosely, the codec) is used for compressing the data. This is why you will see different sizes for the same content in different formats.
There are other parameters which can affect the size of the file, i.e. frame rate, resolution, audio bitrate, etc. If you reduce them, the size of the file becomes smaller; e.g. on YouTube you can choose to play a video at a lower resolution when bandwidth is an issue.
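As a sketch of that idea (all values here are illustrative, not recommendations): lowering the resolution, frame rate, and bitrates in a single pass might look like this:
ffmpeg -i input.mp4 -vf scale=-2:480 -r 24 -b:v 1000k -c:a aac -b:a 96k output.mp4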
However, if you choose to do this you will have to re-process the entire file, and that is going to take a lot of time, since you are demuxing the container, decoding the codec, applying filters (reducing the frame rate, etc.), then re-encoding, and then remuxing again. This entire process is not worth a few extra MB of savings unless you have some compelling use case.
One solution is to use a more powerful machine, but this is limited by the architecture/constraints of the application in utilizing a powerful machine. To answer specifically for FFmpeg: it won't make much difference.
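That said, if you do have to re-encode with FFmpeg, the preset discussion earlier in this thread applies here as well: a faster preset is the main speed lever, at some cost in output size. A sketch based on the commands in the question (the -crf value is an assumption):
ffmpeg -i input.mp4 -vcodec h264 -preset ultrafast -crf 23 -acodec copy output.mp4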
I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.
The user-created content is rendered with OpenGL. I was thinking I could render it to a framebuffer and save it as a temporary image file, then take this image file and add it to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting these temporary image files as I go while I build the MP4 file.
The issue is I will never have all of these image files at one time, so I can't just use the typical call:
ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4
Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one by one while keeping the correct framerate, etc.
Use standard I/O to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire image files; all you need is the pixel data. Something like this:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Using -i - means FFmpeg will read from the pipe.
I think from there you would just send in the raw pixel values via the pipe, one byte per color channel per pixel. FFmpeg will know when you're done with each frame since you've passed the frame size to it.
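On Android, a typical way to do this is to launch the FFmpeg binary as a child process and write the raw bytes into its stdin. A minimal Java sketch of the idea (the binary path, frame size, and frame source are assumptions; this just pipes black RGB frames):

import java.io.OutputStream;

public class RawFramePiper {
    public static void main(String[] args) throws Exception {
        int width = 1920, height = 1080, frames = 90; // illustrative values

        // Same command as above; "ffmpeg" stands in for the path to your bundled binary.
        Process p = new ProcessBuilder(
                "ffmpeg",
                "-f", "rawvideo", "-vcodec", "rawvideo",
                "-s", width + "x" + height,
                "-pix_fmt", "rgb24", "-r", "30",
                "-i", "-",               // "-" = read the video stream from stdin
                "-vcodec", "mpeg4", "test.mp4")
                .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                .redirectError(ProcessBuilder.Redirect.INHERIT)
                .start();

        OutputStream ffmpegStdin = p.getOutputStream();
        byte[] frame = new byte[width * height * 3]; // 3 bytes per pixel: R, G, B

        for (int i = 0; i < frames; i++) {
            // Fill `frame` with your pixel data here (e.g. copied out of the framebuffer).
            // This sketch leaves it zeroed, i.e. black frames.
            ffmpegStdin.write(frame);
        }
        ffmpegStdin.close(); // closing stdin signals EOF, so FFmpeg finalizes the file
        p.waitFor();
    }
}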
I have a project based on https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/, which uses an older copy of the FFmpeg libraries from http://bambuser.com/opensource. Within the C++ code in this project we have the following lines:
// Time how long a single call to the decoder takes, in nanoseconds
unsigned long long current = GetCurrentTimeInNanoseconds();
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
// Log the elapsed decode time, converted to milliseconds
__android_log_print(ANDROID_LOG_DEBUG, "getFrame>>>>", "decode video time: %llu", (GetCurrentTimeInNanoseconds() - current)/1000000);
This code continually reports between 60 and 90 ms to decode each frame on an Xperia Ion, using a 1280x720 h264 source video file. Other processing to get the frame out to the screen takes an average of 30ms more with very little variation. This leads to frame rates of 10-11fps.
Ignoring that other processing, a decode that takes an average of 75ms would result in 13fps. However, when I browse my SD card and click on that mp4 file to open it in the native viewer, it shows at a full 30fps. Further, when I open a 1920x1080 version of the same mp4 in the native viewer it also runs at a full 30fps without stutter or lag. This implies (to my novice eye) that something is very very wrong, as the hardware is obviously capable of decoding many times faster.
What flags or options can be passed to avcodec_decode_video to optimize decode speed to match that of the native viewer? Can optimizations be made elsewhere to speed things up further? Is there a reason that the native viewer can decode almost an order of magnitude faster (taking the 1920x1080 results into account)?
EDIT
The answer below is very helpful, but is not practical for me at this time. In the meantime I have managed to decrease decoding time by 70% with some optimal encoding flags found through many, many hours of trial and error. Here are the FFmpeg arguments I'm using for encoding, in case it helps anyone else who stumbles across this post:
ffmpeg.exe -i "#inputFilePath#" -c:v libx264 -preset veryslow -g 2 -y -s 910x512 -b 5000k -minrate 2000k -maxrate 8000k -pix_fmt yuv420p -tune fastdecode -coder 0 -flags -loop -profile:v main -x264-params subme=5:ref=4 "#ouputFilePath#"
With these settings FFmpeg is decoding frames in 20-25 ms, though with the sws_scale and then writing out to the texture I'm still hovering at ~22 FPS on an Xperia Ion, at a lower resolution than I'd like.
The native viewer uses a hardware H.264 decoder, while FFmpeg is usually compiled software-only. You must build FFmpeg with libstagefright.
Note: the libstagefright option has since been pulled from FFmpeg.