I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.
The user-created content is rendered with OpenGL. My idea was to render each frame to a framebuffer and save it as a temporary image file, then add that image file to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting the temporary image files as I go while building up the MP4.
The issue is that I will never have all of these image files at once, so I can't use the typical call:
ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4
Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one by one while keeping the correct frame rate, etc.
Use stdin to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire image files; all you need is the raw pixel data. Something like this:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Using -i - tells FFmpeg to read its input from the pipe (stdin).
From there you just send the raw pixel values through the pipe, one byte per color channel per pixel. FFmpeg knows where each frame ends because you've told it the frame size.
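For illustration, a minimal sketch of the writing side in Java (the resolution, frame count, and fillFrameFromGl helper are all assumptions; on Android you would point ProcessBuilder at your bundled ffmpeg binary):

import java.io.OutputStream;

public class RawFramePipe {
    public static void main(String[] args) throws Exception {
        int width = 1920, height = 1080, frameCount = 300; // placeholders

        // Launch ffmpeg with the command shown above, reading rgb24 frames from stdin.
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-f", "rawvideo", "-vcodec", "rawvideo",
                "-s", width + "x" + height, "-pix_fmt", "rgb24", "-r", "30",
                "-i", "-", "-vcodec", "mpeg4", "test.mp4")
                .redirectError(ProcessBuilder.Redirect.INHERIT)
                .start();

        OutputStream pipe = ffmpeg.getOutputStream();
        byte[] frame = new byte[width * height * 3]; // 3 bytes per pixel for rgb24

        for (int i = 0; i < frameCount; i++) {
            fillFrameFromGl(frame); // hypothetical: copy glReadPixels output here
            pipe.write(frame);      // ffmpeg carves the stream into frames by size
        }
        pipe.close();               // closing stdin tells ffmpeg to finalize the MP4
        ffmpeg.waitFor();
    }

    // Hypothetical placeholder for reading back the OpenGL framebuffer.
    private static void fillFrameFromGl(byte[] frame) {
        // In the real app this would fill the buffer with pixels from glReadPixels.
    }
}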
I'm streaming live video from my GoPro inside my Android app. I use FFmpeg to receive the streaming data from the GoPro and VLC to play it in a SurfaceView. I used the code provided by KonradIT here. The main FFmpeg command is:
-fflags nobuffer -f mpegts -i udp://:8554 -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=64
and the options for libVLC are:
options.add("--aout=opensles");
options.add("--audio-time-stretch");
options.add("-vvv");
The output is poor: it's laggy and runs at only about 17 FPS. Another annoyance is that the streamed picture is very small, and nothing I tried would make it larger or stretch it.
Is there any command to speed up the streaming (in any way, even by reducing quality), on either the FFmpeg side or the VLC side?
If it is only relaying packets, try this:
ffmpeg -fflags nobuffer -f mpegts -i udp://:8554 -c:v copy -c:a copy -f mpegts udp://127.0.0.1:8555/gopro?pkt_size=1316
You can experiment with different packet sizes based on your network's MTU (<1500) and check the delay.
With this command we are not transcoding or resizing the incoming packets; we are simply relaying them.
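On the VLC side, lowering the player's network buffer may also reduce latency. A sketch using the standard libVLC option (the 150 ms value is an arbitrary starting point, not from the original post):

options.add("--network-caching=150"); // buffer length in ms; lower means less delay but more risk of stutter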
I have been working with FFmpeg for the past 7 days. I need to create a video that does the following:
Concatenate a few images, picked from Android storage, into a video.
Add a music file (also picked from Android storage).
Set the duration each image is shown in the video; e.g. if 3 images are picked, the total video duration should be 3 × the duration chosen by the user.
What I have done so far:
I am using implementation 'nl.bravobit:android-ffmpeg:1.1.5' for the FFmpeg prebuilt binaries.
The following command concatenates the images and sets the duration:
ffmpeg -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
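For reference, the concat list file passed via -i is plain text, one file directive per image. With placeholder paths it looks something like this:

file '/storage/emulated/0/Pictures/img1.jpg'
file '/storage/emulated/0/Pictures/img2.jpg'
file '/storage/emulated/0/Pictures/img3.jpg'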
Note:
By passing the user-chosen duration as the frame rate, I am able to set how long each image is shown.
The concat works well and the images are merged into the video.
Problem:
When I try to add audio in the same pass, the job runs for a very long time, even for a short audio file, using the following command:
ffmpeg -r 1/duration_selected_by_user -i path_of_audio_file.mp3 -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p path_of_output_video_file.mp4
It looks like there is some issue with the command, as I am not very familiar or experienced with the technology. How can I improve the performance of merging images and audio to get better results?
Important Links related to context:
https://github.com/bravobit/FFmpeg-Android
As #HB mentioned in the comments:
Try adding -preset ultrafast after -pix_fmt yuv420p
A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.
The available presets in descending order of speed are:
ultrafast
superfast
veryfast
faster
fast
medium – default preset
slow
slower
veryslow
placebo – ignore this as it is not useful
Reference: https://trac.ffmpeg.org/wiki/Encode/H.264#Preset
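Applied to the question's command, the result would look something like this (the paths and duration are the question's placeholders; note that -r is moved so it applies to the image input rather than the audio, and -shortest is an extra assumption that stops encoding when the shorter stream ends):

ffmpeg -i path_of_audio_file.mp3 -r 1/duration_selected_by_user -f concat -safe 0 -i path_of_concat_file.txt -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -preset ultrafast -shortest path_of_output_video_file.mp4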
I'm trying to encode a local/static input file (an MP4, for example) into a smaller video file (by resizing, lowering the quality, etc.) and stream it in parallel (i.e. I can't wait for the encoding process to finish before streaming it back), so it can be played by an Android client (the standard Android video player).
So I've tried using ffmpeg as follows:
ffmpeg -re -i input.mp4 -g 52 -acodec libvo_aacenc -ab 64k -vcodec libx264 -vb 448k -f mp4 -movflags frag_keyframe+empty_moov -
Notice I'm using stdout as the output, so I can run ffmpeg and stream its output on the fly.
However, such methods (and other similar methods) don't seem to work on Android - it simply can't play such "non-standard" files (fragmented MP4s, for example); the empty moov atom seems to trip it up.
I also tried other container formats, such as 3GPP and WebM.
I'd love to hear any kind of input on this issue...
Thanks
You can specify multiple outputs in ffmpeg; see http://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
For Android newer than 3.0, try HLS as the output format.
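A minimal HLS sketch under that suggestion (the segment length and output name are assumptions; the bitrates and GOP size are reused from the question's command):

ffmpeg -re -i input.mp4 -c:v libx264 -b:v 448k -g 52 -c:a aac -b:a 64k -f hls -hls_time 4 -hls_list_size 0 playlist.m3u8

The Android client can then be pointed at the playlist URL served over HTTP, and playback can start while segments are still being produced.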
I'm working on a project that uses ffmpeg to watermark captured video on Android. At the moment I can capture a video from the camera first and then use ffmpeg to watermark it. Is it possible to process the video while it is still being captured?
Here is an example of how to capture webcam video on my Windows system and draw a running timestamp.
To list all devices that can be used as input:
ffmpeg -list_devices true -f dshow -i dummy
To use my webcam as input and draw the timestamp (with -t 00:01:00, one minute is recorded):
ffmpeg -f dshow -i video="1.3M HD WebCam" -t 00:01:00 -vf "drawtext=fontfile=Arial.ttf: timecode='00\:00\:00\:00': r=25: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1" -an -y output.mp4
The font file Arial.ttf was located in the folder my terminal was in.
(Sources: http://trac.ffmpeg.org/wiki/How%20to%20capture%20a%20webcam%20input and http://trac.ffmpeg.org/wiki/FilteringGuide)
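If the watermark is an image rather than drawn text, the overlay filter works the same way. A sketch with placeholder file names (captured.mp4 and logo.png are assumptions):

ffmpeg -i captured.mp4 -i logo.png -filter_complex "overlay=10:10" -c:a copy watermarked.mp4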
I hope it may help.
Have a nice day ;)
I am using ffmpeg to create video from an image sequence taken from the Android camera's PreviewCallback method onPreviewFrame.
The images are written to a pipe that is connected to ffmpeg's stdin using the command:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f flv -vcodec libx264 <output_file>
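For context, the writing side looks roughly like this. This is a sketch rather than my exact code: it assumes NV21 preview frames and an OutputStream (ffmpegStdin) already connected to the command above.

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.OutputStream;

class PreviewToPipe implements Camera.PreviewCallback {
    private final OutputStream ffmpegStdin; // stream connected to ffmpeg's stdin

    PreviewToPipe(OutputStream ffmpegStdin) {
        this.ffmpegStdin = ffmpegStdin;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Preview frames arrive as NV21; compress each one to JPEG and
        // write it straight into the image2pipe/mjpeg input shown above.
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, ffmpegStdin);
    }
}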
The problem is that the output video is very short compared to the actual recording time, and all of the frames play back very rapidly.
But when the frame size is set to the lowest supported preview size, the video appears to be in sync with the actual recording time.
As far as I can tell, this is a mismatch between the frame rate of the input image sequence and that of the output video.
The main problem is that the frames generated by onPreviewFrame arrive at a variable rate.
Is there any way to construct a smooth video from an image sequence with a variable frame rate?
Also, the image sequence is muxed with audio from the microphone, which also ends up out of sync with the video.
Can the video generated by the above process and the audio from the microphone be muxed in perfect synchronization?