FFmpeg adding image watermark to video process is very slow - android

I am adding an image watermark to a video with the help of FFmpeg, but FFmpeg takes an inordinate amount of time with the below command:
String[] cmd = {"-i",videoPath, "-i", waterMark.toString(),"-filter_complex","overlay=5:5","-codec:a", "copy", outputPath};
So I tried another command, which was a little faster but increases the output file size (which I do not want):
String[] cmd = {"-y","-i", videoPath, "-i", waterMark.toString(), "-filter_complex", "overlay=5:5", "-c:v","libx264","-preset", "ultrafast", outputPath};
Can someone please explain how to increase the speed of FFmpeg watermarking without increasing the size of the output?
Thanks.

You mentioned that a 7MB video takes between 30-60 seconds.
There is always a trade off when choosing between speed and quality.
I tested on my phone using a 7MB file and it took 13 seconds; still slow, but we can't expect much better than that.
Ways to increase speed:
Lowering the frame rate, using the -r option
Changing the bitrate, using the -b:v and -b:a options
Changing the Constant Rate Factor, using -crf. The default value is 23.
The range of the quantizer scale is 0-51: where 0 is lossless, 23 is default, and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless.
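As a hedged sketch of how these options can be combined (the paths, CRF of 28, and frame rate of 24 are placeholder values to experiment with, not a definitive recipe):

```java
public class WatermarkCommand {
    // Builds a watermark command; every argument value here is a placeholder.
    public static String[] build(String videoPath, String watermarkPath,
                                 String crf, String frameRate, String outputPath) {
        return new String[]{
                "-y", "-i", videoPath, "-i", watermarkPath,
                "-filter_complex", "overlay=5:5",
                "-c:v", "libx264", "-preset", "ultrafast",
                "-crf", crf,       // higher CRF -> smaller file, lower quality
                "-r", frameRate,   // lower frame rate -> fewer frames to encode
                "-c:a", "copy",    // never re-encode the audio
                outputPath};
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                build("/sdcard/in.mp4", "/sdcard/wm.png", "28", "24", "/sdcard/out.mp4")));
    }
}
```

Raising -crf is usually the most effective lever, since it shrinks the file and speeds up encoding at the same time.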
This is what I have found works the best on most android devices:
String[] s = {"-i", VideoPath, "-i", ImagePath, "-filter_complex", "[0:v]pad=iw:if(lte(ih\\,iw)\\,ih\\,2*trunc(iw*16/9/2)):(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", myFrameRate, directoryToStore[0] + "/" + SavedVideoName};
I reduced my framerate slightly, you can experiment what works best for you. I'm using mp4parser to retrieve the frame rate.
I have to give credit to @Gyan, who provided me with a way to perfectly scale an image being placed on top of a video; you can look at the question I asked here.
If you are unsure about the frame rate, you can remove it from the command and first test if your speed is reduced.
Try it, if you have any questions, please ask.
OP opted to go with the following command:
String[] cmd = {"-y","-i", videoPath, "-i", waterMark.toString(), "-filter_complex", "overlay=(main_w-overlay_w-10):5", "-map", "0:a","-c:v", "libx264", "-crf", "28","-preset", "ultrafast" ,outputPath};
Edit
Just to expand on the command I mentioned and provide a detailed explanation of how to use it:
String[] cmd = {"-i", videoPath, "-i", waterMark.toString(), "-filter_complex", "[0:v]pad=iw:if(lte(ih\\,iw)\\,ih\\,2*trunc(iw*16/9/2)):(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", myFrameRate, outputPath};
This is for devices that have a display aspect ratio of 16:9. If you want this filter to work on all devices, you will have to get the aspect ratio of the device and change the 16/9/2 part of the filter accordingly.
You can get the device aspect ratio by creating these methods:
int gcd(int p, int q) {
    if (q == 0) return p;
    else return gcd(q, p % q);
}

void ratio(int a, int b) {
    final int gcd = gcd(a, b);
    if (a > b) {
        setAspectRatio(a / gcd, b / gcd);
    } else {
        setAspectRatio(b / gcd, a / gcd);
    }
}

void setAspectRatio(int a, int b) {
    System.out.println("aspect ratio = " + a + " " + b);
    // This is the string that will be used in the filter (instead of hardcoding 16/9/2)
    filterAspectRatio = a + "/" + b + "/" + "2";
}
Now you have the correct aspect ratio and you can change the filter accordingly.
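For example, a 1080x1920 display reduces to 16:9 and keeps the 16/9/2 term, while an 18.5:9-class 1440x2960 panel would give 37/18/2. A self-contained sketch of that check (the resolutions are just example values):

```java
public class AspectRatio {
    static int gcd(int p, int q) { return q == 0 ? p : gcd(q, p % q); }

    // Returns the "W/H/2" term used in the pad filter, larger side first.
    static String filterTerm(int a, int b) {
        int g = gcd(a, b);
        int hi = Math.max(a, b) / g;
        int lo = Math.min(a, b) / g;
        return hi + "/" + lo + "/2";
    }

    public static void main(String[] args) {
        System.out.println(filterTerm(1080, 1920)); // prints 16/9/2
        System.out.println(filterTerm(1440, 2960)); // prints 37/18/2
    }
}
```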
Next, create a watermark and add it to a view, make that view the size of the device (match_parent), and scale/place the watermark where you would like it to be. After enabling the view's drawing cache with watermarkView.setDrawingCacheEnabled(true), you can then get the bitmap by calling:
Bitmap waterMarkBitmap = watermarkView.getDrawingCache();
and create a file from the Bitmap, like this:
String outputFilename = "myCoolWatermark.png"; // provide a name for your saved watermark
File path = Environment.getExternalStorageDirectory(); // this can be changed to where you want to store the bitmap
File waterMark = new File(path, outputFilename);
try (FileOutputStream out = new FileOutputStream(waterMark)) {
    waterMarkBitmap.compress(Bitmap.CompressFormat.PNG, 100, out); // PNG is a lossless format, the compression factor (100) is ignored
} catch (IOException e) {
    e.printStackTrace();
}
The watermark is created and can be reused, or you can delete it when you are done with it.
Now you can call the command mentioned above.

This is a very common question here. The simple answer is that you can't meaningfully increase the encoding speed of ffmpeg on Android. You're encoding on a phone, so you can't expect desktop/server performance from a software encoder with no hardware acceleration support.
There are a few things users can do:
Stream copy the audio with -c:a copy (you're already doing that).
Use -preset ultrafast to give up encoding efficiency for encoding speed (you're also already doing that).
Make the output width x height smaller with the scale filter (probably not an acceptable option for you).
Make sure your x264 was not compiled with --disable-asm so you can take advantage of the various ARM and NEON optimizations in x264 for a significant increase in encoding speed. However, I don't know which Android devices support that, but it's something to look into. For a quick check to see if you are using any optimizations refer to the console output from ffmpeg and search for using cpu capabilities. If none! then it is not using any optimizations, otherwise it may say ARMv7 NEON or something like that.
Offload the encoding to a server. Saves your users' battery life too.
All this for an annoying watermark? Avoid re-encoding and use a player to overlay the watermark.
Apparently FFmpeg has MediaCodec decoding support on Android, but encoding is the bottleneck here. However maybe it will save a few fps.
Send a patch to FFmpeg that enables MediaCodec encoding support or wait a few years for someone else to do so.
Forget ffmpeg and use MediaCodec directly. I am clueless about this and too lazy to look it up, but I assume it uses hardware to encode and I'll guess you can use it to make an overlay. Someone correct me if I am wrong.
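To illustrate the scale-filter option from the list above, here is a hedged sketch that halves the output resolution before overlaying (paths are placeholders; whether the smaller frame is acceptable depends on your use case):

```java
public class ScaledOverlayCommand {
    // Halving width and height cuts the pixel count to a quarter,
    // which reduces encoding work roughly proportionally.
    public static String[] build(String videoPath, String watermarkPath, String outputPath) {
        return new String[]{
                "-i", videoPath, "-i", watermarkPath,
                // scale=iw/2:-2 halves the width and keeps the height even,
                // as libx264 requires even dimensions
                "-filter_complex", "[0:v]scale=iw/2:-2[bg];[bg][1:v]overlay=5:5",
                "-c:v", "libx264", "-preset", "ultrafast",
                "-c:a", "copy", outputPath};
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", build("in.mp4", "wm.png", "out.mp4")));
    }
}
```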

Related

Overlay text using FFmpeg command in android programmatically

I need to know what the issue is with my FFmpeg command for overlaying text on a video in Android.
command = new String[]{"ffmpeg", "-i", original_path, "-vf", "drawtext=text='SiteName.local': fontsize=18: fontcolor=white: x=10:y=h-th-10", "-acodec", "copy", "-y", dest.getAbsolutePath()};
I am trying to create a video with a text overlay. However, I'm getting an error:
[NULL # 0xea699600] Unable to find a suitable output format for 'ffmpeg'
ffmpeg: Invalid argument
I have tested the input file and output file using a different command for trimming videos, and it worked. However, the FFmpeg command for overlaying text does not work. I kindly ask for help.
Besides, I also need to know how I can animate the text to scroll from left to right, bounce, etc., using FFmpeg commands in Android.
Finally, after a long search, I managed to solve the issue: drawtext can't work here without including the fontfile. So the below command is what settled it for me:
command = new String[]{"-i", original_path, "-vf", "drawtext=fontfile=/system/fonts/DroidSans.ttf:text='SiteName hulluway':fontsize=40:fontcolor=black: x=w-(t-4.5)*(w+tw)/5.5:y=100", "-acodec", "copy", "-y", dest.getAbsolutePath()};
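For the scrolling part of the question: drawtext accepts time-based expressions for x, so the text can move each frame. A hedged sketch (the 150 px/s speed, the text, and the font path are assumptions):

```java
public class ScrollingTextCommand {
    public static String[] build(String inputPath, String outputPath) {
        String drawtext = "drawtext=fontfile=/system/fonts/DroidSans.ttf:"
                + "text='SiteName':fontsize=40:fontcolor=black:y=100:"
                // mod() wraps the text back to the right edge after it
                // scrolls off-screen; the comma inside mod() must be
                // escaped so the filter parser does not split on it
                + "x=w-mod(t*150\\,w+tw)";
        return new String[]{"-i", inputPath, "-vf", drawtext,
                "-acodec", "copy", "-y", outputPath};
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", build("in.mp4", "out.mp4")));
    }
}
```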

How to limit Seek Bar on the basis of video size?

I need to know how to limit a SeekBar based on a particular video size, rather than limiting by time. Say, like WhatsApp's 17 MB limit. I have done an example using time, but I need to consider size.
I was using This example
The FFmpeg option -fs lets you specify an output file size limit; FFmpeg stops writing once that size is reached, so the output is effectively trimmed to the given size:
String[] cm = {"-i", path, "-fs", "17M", "-c", "copy", videopath.getPath()};
After trimming check for duration of the trimmed video with maximum duration set for the seek bar.

Reverse video in android

I have recorded a video from the camera in my app and saved it in device storage. Now I want to reverse the video so that it plays backwards, i.e. if the video is 10 seconds long, the last frame at the 10th second becomes the first frame and it plays from there back to the first frame at the 1st second. I want to save the reversed video in a file. How should I proceed?
If you are prepared to use ffmpeg you can use this approach - it essentially breaks the video into frames and then builds it again in reverse order:
https://stackoverflow.com/a/8137637/334402
There are several ways to use ffmpeg in Android, but the 'wrapper' approach is one in which I have found a reasonable blend of performance and ease of use. Some example Android ffmpeg wrappers:
http://hiteshsondhi88.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
It's worth being aware that this will be time-consuming on a Mobile - if you have the luxury of being able to upload to a server and doing the reversal there it might be quicker.
Thanks to Mick for giving me an idea to use ffmpeg for reversing video.
I have posted code at github for reversing video along with performing other video editing operation using ffmpeg and complete tutorial in my blog post here.
As written in my blog post:
For reversing video, we first need to divide the video into segments with a duration of 10 seconds or less, because the reverse command in ffmpeg will not work for long-duration videos unless your device has 32 GB of RAM.
Hence, to reverse a video:
1. Divide the video into segments with a duration of 10 seconds or less.
2. Reverse the segmented videos.
3. Concatenate the reversed segments in reverse order.
For dividing a video into segments with a duration of 6 seconds we can use the below command:
String[] complexCommand = {"-i", inputFileAbsolutePath, "-c:v", "libx264", "-crf", "22", "-map", "0", "-segment_time", "6", "-g", "9", "-sc_threshold", "0", "-force_key_frames", "expr:gte(t,n_forced*6)", "-f", "segment", outputFileAbsolutePath};
Here,
-c:v libx264
encodes all video streams with libx264
-crf
Set the quality for constant quality mode.
-segment_time
time for each segment of video
-g
GOP size
-sc_threshold
set scene change threshold.
-force_key_frames expr:gte(t,n_forced*n)
Forcing a keyframe every n seconds
After segmenting the video, we need to reverse the segmented videos. For that we run a loop in which each segmented video file is reversed.
To reverse a video with audio (without removing its audio) we can use the below command:
String command[] = {"-i", inputFileAbsolutePath, "-vf", "reverse", "-af", "areverse", outputFileAbsolutePath};
To reverse a video while removing its audio we can use the below command:
String command[] = {"-i", inputFileAbsolutePath, "-an", "-vf", "reverse", outputFileAbsolutePath};
To reverse a video without audio we can use the below command:
String command[] = {"-i", inputFileAbsolutePath, "-vf", "reverse", outputFileAbsolutePath};
After reversing the segmented videos, we need to concatenate the reversed segments in reverse order. For that we sort the videos by last-modified time using Arrays.sort(files, LastModifiedFileComparator.LASTMODIFIED_REVERSE).
Then, to concatenate the reversed segments (with audio) we can use the below command:
String command[] = {"-i", inputFile1AbsolutePath, "-i", inputFile2AbsolutePath, ..., "-i", inputFileNAbsolutePath, "-filter_complex", "[0:v] [0:a] [1:v] [1:a] ... [N:v] [N:a] concat=n=N:v=1:a=1 [v] [a]", "-map", "[v]", "-map", "[a]", outputFileAbsolutePath};
To concatenate the reversed segments (without audio) we can use the below command:
String command[] = {"-i", inputFile1AbsolutePath, "-i", inputFile2AbsolutePath, ..., "-i", inputFileNAbsolutePath, "-filter_complex", "[0:v] [1:v] ... [N:v] concat=n=N:v=1:a=0", outputFileAbsolutePath};
Here,
-filter_complex [0:v] [0:a] [1:v] [1:a] ... [N:v] [N:a] tells ffmpeg which streams to send to the concat filter. In the above case, the video stream [0:v] and audio stream [0:a] from input 0, the video stream [1:v] and audio stream [1:a] from input 1, and so on.
The concat filter concatenates audio and video streams, joining them together one after the other. The filter accepts the following options:
n
Set the number of segments. Default is 2.
v
Set the number of output video streams, that is also the number of
video streams in each segment. Default is 1.
a
Set the number of output audio streams, that is also the number of
audio streams in each segment. Default is 0.
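Because the number of segments varies at runtime, the concat command above is easiest to build programmatically. A minimal sketch (file paths are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

public class ConcatCommand {
    // Builds the concat command for N segments that each have video and audio.
    static String[] build(List<String> inputs, String outputPath) {
        List<String> cmd = new ArrayList<>();
        StringBuilder filter = new StringBuilder();
        for (int i = 0; i < inputs.size(); i++) {
            cmd.add("-i");
            cmd.add(inputs.get(i));
            // One "[i:v][i:a]" pair per input, in the order given
            filter.append("[").append(i).append(":v][").append(i).append(":a]");
        }
        filter.append("concat=n=").append(inputs.size()).append(":v=1:a=1[v][a]");
        cmd.add("-filter_complex");
        cmd.add(filter.toString());
        cmd.add("-map"); cmd.add("[v]");
        cmd.add("-map"); cmd.add("[a]");
        cmd.add(outputPath);
        return cmd.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // Pass the segments already sorted into reverse order
        String[] cmd = build(List.of("/tmp/seg2.mp4", "/tmp/seg1.mp4"), "/tmp/out.mp4");
        System.out.println(String.join(" ", cmd));
    }
}
```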

After compressing the video, its quality gets dull in Android

I have done video compression using ffmpeg in Android and I am having a problem with it.
I captured a video of exactly one minute, and it is 123 MB in size on my Nexus 5. I compressed that video from 123 MB down to approximately 1.30 MB, which took about 2 minutes and completed successfully.
But the problem is that when I play the compressed video from my SD card, its quality is totally dull. Below is my code using ffmpeg:
String[] complexCommand = {"ffmpeg", "-i", videoPath, "-strict", "experimental", "-s", "160x120", "-r", "25", "-vcodec", "mpeg4", "-b", "150k", "-ab", "48000", "-ac", "2", "-ar", "22050", demoVideoFolder + "Compressed_Video.mp4"};
LoadJNI vk = new LoadJNI();
try {
    vk.run(complexCommand, workFolder, getApplicationContext(), false);
    GeneralUtils.copyFileToFolder(vkLogPath, demoVideoFolder);
} catch (Throwable e) {
    Log.e(Prefs.TAG, "vk run exception.", e);
} finally {
    if (wakeLock.isHeld()) {
        wakeLock.release();
    } else {
        Log.i(Prefs.TAG, "Wake lock is already released, doing nothing");
    }
}
Log.i(Prefs.TAG, "doInBackground finished");
Here videoPath is my input file path and demoVideoFolder is my output folder. I have attached a snapshot; have a look at it.
Please tell me what I should do. Thanks in advance; your efforts will be highly appreciated.
"Dull" is very subjective, I really don't know what to make of that. If you have specific artifacts you want to discuss, please post screenshots. I can make some general comments on your commandline that may or may not be helpful:
-s 160x120 - are we back in 1995? This is what we used to refer to when we said "stamp-sized video" in the mid-90s. In case you didn't know, this resizes the video to a resolution of 160x120, which destroys quality.
-r 25 you're dropping and adding frames at random here. You most likely want to use a fps filter, or remove this option altogether.
-vcodec mpeg4 - people use H.264 nowadays (-vcodec libx264), if not HEVC/VP9 (-vcodec libx265/libvpx-vp9).
-b 150k - this is a very low bitrate. If you don't like the video quality, please increase the bitrate.
-strict experimental - don't use this unless you know what you're doing.
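Putting those points together, a hedged alternative command might look like this (the 480-pixel height, CRF of 26, and audio bitrate are assumptions to tune for your devices, not a definitive recipe):

```java
public class CompressCommand {
    // Scale to a 480p-class resolution and let CRF pick the bitrate,
    // instead of forcing 160x120 at a fixed 150k.
    public static String[] build(String videoPath, String outputPath) {
        return new String[]{"-i", videoPath,
                "-vf", "scale=-2:480",  // keep aspect ratio; -2 keeps the width even
                "-c:v", "libx264", "-preset", "fast", "-crf", "26",
                "-c:a", "aac", "-b:a", "96k",
                outputPath};
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", build("in.mp4", "out.mp4")));
    }
}
```

The output will be larger than 1.30 MB, but that is the trade-off: the original command destroyed quality precisely because it squeezed a minute of video into so few bits.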

extract all video frames in Android

I recorded a video for a limited time. Now I want to fetch all frames of the video. I am using the below code, and with it I am able to get frames, but I am not getting all of the video's frames. 3 to 4 frames are repeated, then I get a different frame. But as we all know, video displays 25-30 frames per second for smooth playback. How do I get all frames?
for (int i = 0; i < 30; i++) {
    Bitmap bArray = mediaMetadataRetriever.getFrameAtTime(
            1000000 * i, MediaMetadataRetriever.OPTION_CLOSEST);
    savebitmap(bArray, 33333 * i);
}
I don't want to use the NDK. I found the link below but don't know what the value for "argb8888" should be, and I am getting an error there. Can anyone explain how to do it?
Getting frames from Video Image in Android
I faced the same problem before, and Android's MediaMetadataRetriever does not seem appropriate for this task, since it doesn't have good precision.
I used a library called "FFmpegMediaMetadataRetriever" in android studio:
Add this line to your build.gradle under the app module:
compile 'com.github.wseemann:FFmpegMediaMetadataRetriever:1.0.14'
Rebuild your project.
Use the FFmpegMediaMetadataRetriever class to grab frames with higher precision:
FFmpegMediaMetadataRetriever med = new FFmpegMediaMetadataRetriever();
med.setDataSource("your data source");
and in your loop you can grab frame using:
Bitmap bmp = med.getFrameAtTime(i*1000000, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
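Note that getFrameAtTime expects microseconds, so i*1000000 samples only one frame per second; to approximate 30 fps the step would be 1000000/30 ≈ 33333 µs. A minimal sketch of that timestamp math (the frame rate and count are example values):

```java
public class FrameTimestamps {
    // Timestamps in microseconds for frameCount frames at the given fps.
    static long[] timestampsUs(int fps, int frameCount) {
        long[] t = new long[frameCount];
        for (int i = 0; i < frameCount; i++) {
            t[i] = i * 1_000_000L / fps; // step by 1/fps seconds, not one second
        }
        return t;
    }

    public static void main(String[] args) {
        long[] t = timestampsUs(30, 3);
        System.out.println(t[0] + " " + t[1] + " " + t[2]); // prints 0 33333 66666
    }
}
```

Each value would then be passed to getFrameAtTime in place of i*1000000.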
To get image frames from a video we can use ffmpeg. To integrate FFmpeg into Android we can use precompiled libraries like ffmpeg-android.
To extract image frames from a video we can use the below command:
String[] complexCommand = {"-y", "-i", inputFileAbsolutePath, "-an", "-r", "1/2", "-ss", "" + startMs / 1000, "-t", "" + (endMs - startMs) / 1000, outputFileAbsolutePath};
Here,
-y
Overwrite output files
-i
ffmpeg reads from an arbitrary number of input “files” specified by the -i option
-an
Disable audio recording.
-r
Set frame rate
-ss
seeks to position
-t
limit the duration of data read from the input file
Here, in place of inputFileAbsolutePath you have to specify the absolute path of the video file from which you want to extract images.
For the complete code, check out my repository. Inside the extractImagesVideo() method I am running the command for extracting images from video.
For a complete tutorial on integrating the ffmpeg library and using ffmpeg commands to edit videos, check out the post I have written on my blog.
You need to:
1. Decode the video.
2. Present the decoded images at least as fast as 24 images/second. I suppose you can skip this step.
3. Save the decoded images.
It appears that decoding the video would be the most challenging step. People and companies have spent years developing codecs (encoder / decoder) for various video formats.
Use this library JMF for FFMPEG.
