Using ffmpeg to watermark a video while it is being captured on Android

I'm researching a project that uses ffmpeg to watermark captured video on Android. At the moment I capture a video from the camera first and then use ffmpeg to watermark it. Is it possible to process the video while it is still being captured?

Here is an example of how to capture webcam video on my Windows system and draw a running timestamp.
To list all devices that can be used as input:
ffmpeg -list_devices true -f dshow -i dummy
To use my webcam as input and draw the timestamp (with -t 00:01:00, 1 minute is recorded):
ffmpeg -f dshow -i video="1.3M HD WebCam" -t 00:01:00 -vf "drawtext=fontfile=Arial.ttf: timecode='00\:00\:00\:00': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -an -y output.mp4
The font file Arial.ttf was located in the folder my terminal was in.
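The same drawtext filter also works on a file that has already been recorded, which is closer to the Android case since dshow capture is Windows-only. A minimal sketch, assuming the recording is called captured.mp4 and that a font file exists at the path shown (the Android system font path is an assumption):
ffmpeg -i captured.mp4 -vf "drawtext=fontfile=/system/fonts/Roboto-Regular.ttf: text='%{pts\:hms}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -c:a copy watermarked.mp4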
(Sources: http://trac.ffmpeg.org/wiki/How%20to%20capture%20a%20webcam%20input and http://trac.ffmpeg.org/wiki/FilteringGuide)
I hope this helps.
Have a nice day ;)

Related

Add graphical annotations to a video, e.g. random drawings, using Flutter

I have developed a Flutter app in which I am using the FFmpeg library for video manipulation.
I want to draw some random drawings onto a video and make them part of the video itself.
What I am already doing is adding annotated images onto a video using the ffmpeg library in Flutter, but the problem with ffmpeg is that it is very time-consuming: it takes almost 10 seconds to process a 10-second video clip.
ffmpeg command i am using is:
ffmpeg.execute("-y -i input.mp4 -i input.png -filter_complex \"[0:v][1:v] overlay=(W-w)/2:(H-h)/2\" -pix_fmt yuv420p -c:a copy output.mp4")
I want it to be as fast as WhatsApp's annotations...
Any idea how to do that?
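One knob that usually dominates encode time here is the x264 preset (see the preset notes further down this page). A hedged variant of the same overlay command that trades file size for speed:
ffmpeg -y -i input.mp4 -i input.png -filter_complex "[0:v][1:v] overlay=(W-w)/2:(H-h)/2" -c:v libx264 -preset ultrafast -pix_fmt yuv420p -c:a copy output.mp4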

How to speed up ffmpeg streaming on android using libvlc?

I'm streaming live from my GoPro inside my Android app. I use ffmpeg to receive the streaming data from the GoPro and libVLC to play it in a SurfaceView. I used the code provided by KonradIT here. The main ffmpeg command is:
-fflags nobuffer -f mpegts -i udp://:8554 -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=64
and the options for libVLC are:
options.add("--aout=opensles");
options.add("--audio-time-stretch");
options.add("-vvv");
The output is quite poor: it's laggy, running at only about 17 FPS. Another annoying thing is that the streamed picture is very small, and as far as I tried there was no way to make it larger or stretch it.
I want to know if there is any option to speed up the streaming (in any way, even by reducing the quality), either on the ffmpeg side or the VLC side.
If it is only relaying packets, try this:
ffmpeg -fflags nobuffer -f mpegts -i udp://:8554 -c:v copy -c:a copy -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=1316
You can experiment with different packet sizes based on the MTU size of your network (<1500) and check the delay.
With this command we are not transcoding the incoming packets, just repacketizing and relaying them.
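If relaying alone is not enough and reduced quality is acceptable, a possible next step is to transcode with x264's low-latency settings. A sketch only, with the 640x360 target picked as an arbitrary example:
ffmpeg -fflags nobuffer -f mpegts -i udp://:8554 -vf scale=640:360 -c:v libx264 -preset ultrafast -tune zerolatency -c:a copy -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=1316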

FFmpeg: merge images with audio for a specific duration

I have been working with FFmpeg for the past 7 days. I need to create a video where I perform the following:
Concatenate a few images, picked from Android storage, into a video.
Add a music file [also picked from Android storage].
Set the duration each image is shown in the video, e.g. if 3 images are picked then the total duration of the video should be 3 * the duration chosen by the user.
What I have done so far:
I am using implementation 'nl.bravobit:android-ffmpeg:1.1.5' for the FFmpeg prebuilt binaries.
Following is the command used to concatenate the images and set the duration:
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
Note:
By passing the duration chosen by the user as the frame rate (-r 1/duration), I am able to set how long each image is shown.
Concat works well and the images are merged to form the video.
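For reference, a concat list of the kind path_of_concat_file.txt contains would look like this (the image paths below are hypothetical placeholders):
file '/storage/emulated/0/Pictures/img1.jpg'
file '/storage/emulated/0/Pictures/img2.jpg'
file '/storage/emulated/0/Pictures/img3.jpg'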
Problem:
When I try to add audio in the same process, the command runs for a very long time even for a small audio file:
ffmpeg -r 1/duration_selected_by_user -i path_of_audio_file.mp3 -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
It looks like there is some issue with the command, as I am not very familiar or experienced with this technology. How can I improve the performance of merging images and audio to get better results?
Important links related to context:
https://github.com/bravobit/FFmpeg-Android
As @HB mentioned in the comments:
Try adding -preset ultrafast after -pix_fmt yuv420p
A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.
The available presets in descending order of speed are:
ultrafast
superfast
veryfast
faster
fast
medium – default preset
slow
slower
veryslow
placebo – ignore this as it is not useful
Reference: https://trac.ffmpeg.org/wiki/Encode/H.264#Preset
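Putting the comment's suggestion into the question's command, and moving the audio input after the concat input so that -r keeps applying to the images, might look like this sketch (the -c:a aac and -shortest additions are assumptions beyond the original suggestion; -shortest ends the output with the shorter stream):
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -i path_of_audio_file.mp3 -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -preset ultrafast -c:a aac -shortest path_of_output_video_file.mp4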

FFmpeg - Add frames to MP4 in multiple calls

I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.
The user-created content is rendered with OpenGL. I was thinking I could render it to a FrameBuffer and save it as a temporary image file. Then, take this image file and add it to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting these temporary image files as I go while I build the MP4 file.
The issue is I will never have all of these image files at one time, so I can't just use the typical call:
ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4
Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one by one while keeping the correct framerate, etc.
Use a pipe (stdin) to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire images; all you need is the pixel data. Something like this:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Using -i - means FFmpeg will read from the pipe.
From there you would send the raw pixel values through the pipe, one byte per color channel per pixel. FFmpeg knows when each frame is complete because you've told it the frame size.
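A quick way to sanity-check this pipeline from a shell is to let ffmpeg itself act as the producer of raw RGB frames (testsrc and all flags here are standard ffmpeg; the size and duration are arbitrary examples):
ffmpeg -f lavfi -i testsrc=size=1920x1080:rate=30 -t 2 -f rawvideo -pix_fmt rgb24 - | ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Each frame is exactly 1920*1080*3 bytes, which is how the reader knows where one frame ends and the next begins.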

FFMPEG: Position images in video when creating slide show

I'm using the FFMPEG shell utility in an Android app to convert users' pictures to video. Here's an example command:
cat *.jpg | ffmpeg -f image2pipe -r 10 -vcodec mjpeg -i - -vcodec libx264 -s 1280x720 -preset ultrafast slideshow.mp4
I used to crop images when the user imported them into the app, but now I would like to allow the user to reposition the image later: the user could drag or zoom the image to position it within the clear area (the video's aspect ratio).
So, using the ffmpeg shell command, can I specify coordinates for each image and position it in the video?
I solved it by processing the images in memory serially, piping each one to FFMPEG (the command-line tool) and then reusing the same reference to load the next image. For the UI, I used PhotoView and stored the position info provided by the view to crop the image later.
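For the positioning step itself, ffmpeg can place a picture at given coordinates on a fixed-size canvas with the overlay filter. A minimal sketch, assuming a hypothetical photo.jpg and example coordinates taken from the UI:
ffmpeg -f lavfi -i color=black:s=1280x720:d=3 -i photo.jpg -filter_complex "[0:v][1:v]overlay=x=200:y=100" -r 10 positioned.mp4
Each image segment could be rendered this way with per-image x/y values and then concatenated as in the existing pipeline.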
