Video from VFR Image Sequence - android

I am using ffmpeg to create a video from an image sequence taken from the Android Camera's PreviewCallback method onPreviewFrame.
The images are written to a pipe connected to ffmpeg's stdin using the command:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f flv -vcodec libx264 <output_file>
The problem is that the output video is very short compared to the actual recording time, and all of the frames play back very rapidly.
But when the frame size is set to the lowest supported preview size, the video appears to be in sync with the actual recording time.
As far as I can tell, this seems to be an issue with the frame rate of the input image sequence versus that of the output video.
But the main problem is that the frames generated by onPreviewFrame arrive at a variable rate.
Is there any way to construct a smooth video from an image sequence having a variable frame rate?
Also, the image sequence is muxed with audio from the microphone, which also appears to be out of sync with the video.
Could the video generated by the above process and the audio from the microphone be muxed in perfect synchronization?
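One aggravating factor: image2pipe carries no timestamps, so ffmpeg treats the piped JPEGs as a fixed-rate sequence (25 fps by default), which would explain the sped-up output; measuring the actual average preview rate and passing it with -r before -i - may reduce the drift. For reference, here is a minimal sketch (not the asker's code; the pipe wiring and JPEG quality are assumptions) of the callback side of such a setup, compressing each NV21 preview frame to JPEG and writing it into ffmpeg's stdin:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.OutputStream;

public class PreviewPiper implements Camera.PreviewCallback {
    private final OutputStream ffmpegStdin; // getOutputStream() of the ffmpeg Process
    private final int width, height;        // must match the chosen preview size

    public PreviewPiper(OutputStream ffmpegStdin, int width, int height) {
        this.ffmpegStdin = ffmpegStdin;
        this.width = width;
        this.height = height;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        try {
            // The default camera preview format is NV21; compress straight into the pipe
            YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuv.compressToJpeg(new Rect(0, 0, width, height), 80, ffmpegStdin);
        } catch (Exception e) {
            e.printStackTrace(); // the pipe may break if ffmpeg exits early
        }
    }
}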

Related

How to trim a video file at a precise position without re-encoding using FFMPEG

I am having trouble trimming video files without causing the Video/Audio to go out of sync.
From what I understand, using the seek argument -ss before or after the input file results in two different behaviors.
For Example:
ffmpeg -ss 00:00:00.600 -i in.mp4 -c copy out.mp4
This produces a trim with accurate A/V sync but a rough audio trim (the trim happens at a video keyframe, not at the precise seek value).
ffmpeg -i in.mp4 -ss 00:00:00.600 -c copy out.mp4
This produces a more accurate audio trim but causes the A/V to go out of sync. Frames after the trim position may depend on frames before it; these frames are assigned negative timestamps and copied to the output file, which results in the video being out of sync with the audio during playback.
On the other hand,
ffmpeg -i "in.mp4" -ss 00:00:00.600 -strict -2 out.mp4
This produces a more precise trim and correct A/V sync, but since it re-encodes, the task takes a long time to run and results in quality loss.
My Question is:
Is there a way to get an accurate trim without re-encoding the video?
Maybe by discarding the extra frames at the beginning that cause the A/V to get out of sync?
In my case, I can live with a few black frames in the beginning, as long as the audio is trimmed at the precise position and A/V sync is preserved.
Is it possible to accomplish this using FFMPEG?
If not, can MediaCodec on Android handle such precision when trimming?
Thanks for your time.
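Two hedged sketches rather than a definitive answer. First, newer FFmpeg builds offer -avoid_negative_ts, which shifts the negative timestamps produced by the second command and may restore sync while keeping -c copy:
ffmpeg -i in.mp4 -ss 00:00:00.600 -c copy -avoid_negative_ts make_zero out.mp4
Second, regarding MediaCodec: a stream-copy trim needs no decoder at all; MediaExtractor plus MediaMuxer can do it (API 21+ as written below). The video track is cut at the nearest preceding keyframe and all timestamps are rebased to it, so instead of black frames you get a short run of pre-trim video, while the audio starts essentially at the precise position. The buffer size and error handling are simplifications:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

public class StreamCopyTrimmer {
    // Copy everything from startUs onward into outPath without re-encoding.
    public static void trim(String inPath, String outPath, long startUs) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inPath);
        MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Select every track and mirror it in the output.
        int[] dstTrack = new int[extractor.getTrackCount()];
        for (int i = 0; i < dstTrack.length; i++) {
            extractor.selectTrack(i);
            dstTrack[i] = muxer.addTrack(extractor.getTrackFormat(i));
        }
        // Video lands on the previous keyframe; audio frames are all sync
        // frames, so audio lands essentially at the requested position.
        extractor.seekTo(startUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
        muxer.start();

        ByteBuffer buffer = ByteBuffer.allocate(1 << 20); // 1 MiB; must fit one sample
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        long base = -1;
        while (true) {
            int size = extractor.readSampleData(buffer, 0);
            if (size < 0) break; // end of stream
            long pts = extractor.getSampleTime();
            if (base < 0) base = pts; // rebase so no timestamp is negative
            int flags = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                    ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;
            info.set(0, size, pts - base, flags);
            muxer.writeSampleData(dstTrack[extractor.getSampleTrackIndex()], buffer, info);
            extractor.advance();
        }
        muxer.stop();
        muxer.release();
        extractor.release();
    }
}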

FFmpeg - Add frames to MP4 in multiple calls

I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.
The user-created content is rendered with OpenGL. I was thinking I can render it to a FrameBuffer and save it as a temporary image file. Then, take this image file and add it to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting these temporary image files as I go while I build the MP4 file.
The issue is I will never have all of these image files at one time, so I can't just use the typical call:
ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4
Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one by one while keeping the correct framerate, etc.
Use STDIO to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire images; all you need is the pixel data. Something like this:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Using -i - means FFmpeg will read from the pipe.
I think from there you would just send in the raw pixel values via the pipe, one byte per color per pixel. FFmpeg will know when you're done with each frame since you've passed the frame size to it.
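A hedged sketch of the feeding side, matching the command above. The ffmpeg binary location is an assumption, and each write must deliver exactly width * height * 3 bytes, i.e. one complete RGB24 frame:

import java.io.OutputStream;

public class RawVideoPipe {
    private static final String FFMPEG = "/data/local/tmp/ffmpeg"; // assumed location

    public static void main(String[] args) throws Exception {
        Process ffmpeg = new ProcessBuilder(
                FFMPEG, "-f", "rawvideo", "-vcodec", "rawvideo",
                "-s", "1920x1080", "-pix_fmt", "rgb24", "-r", "30",
                "-i", "-", "-vcodec", "mpeg4", "test.mp4")
                .redirectErrorStream(true)
                .start();
        OutputStream stdin = ffmpeg.getOutputStream();

        byte[] frame = new byte[1920 * 1080 * 3]; // one RGB24 frame, 3 bytes per pixel
        for (int i = 0; i < 300; i++) {           // e.g. 300 frames = 10 s at 30 fps
            // ... fill `frame` with pixel data (e.g. from glReadPixels) ...
            stdin.write(frame);                   // exactly one frame per iteration
        }
        stdin.close();   // EOF on the pipe lets ffmpeg finalize the MP4
        ffmpeg.waitFor();
    }
}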

How can ffmpeg be made as efficient as Android's built-in video viewer?

I have a project based off of https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/, which uses an older copy of the ffmpeg libraries from http://bambuser.com/opensource. Within the C++ code in this project we have the following lines of code:
unsigned long long current = GetCurrentTimeInNanoseconds();
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
__android_log_print(ANDROID_LOG_DEBUG, "getFrame>>>>", "decode video time: %llu", (GetCurrentTimeInNanoseconds() - current)/1000000);
This code continually reports between 60 and 90 ms to decode each frame on an Xperia Ion, using a 1280x720 h264 source video file. Other processing to get the frame out to the screen takes an average of 30ms more with very little variation. This leads to frame rates of 10-11fps.
Ignoring that other processing, a decode that takes an average of 75ms would result in 13fps. However, when I browse my SD card and click on that mp4 file to open it in the native viewer, it shows at a full 30fps. Further, when I open a 1920x1080 version of the same mp4 in the native viewer it also runs at a full 30fps without stutter or lag. This implies (to my novice eye) that something is very very wrong, as the hardware is obviously capable of decoding many times faster.
What flags or options can be passed to avcodec_decode_video to optimize decode speed to match that of the native viewer? Can optimizations be made elsewhere to improve speed further? Is there a reason the native viewer can decode almost an order of magnitude faster (taking the 1920x1080 results into account)?
EDIT
The answer below is very helpful, but is not practical for me at this time. In the meantime I have managed to decrease decoding time by 70% with some optimal encoding flags found through many hours of trial and error. Here are the ffmpeg arguments I'm using for encoding, in case it helps anyone else who stumbles across this post:
ffmpeg.exe -i "#inputFilePath#" -c:v libx264 -preset veryslow -g 2 -y -s 910x512 -b 5000k -minrate 2000k -maxrate 8000k -pix_fmt yuv420p -tune fastdecode -coder 0 -flags -loop -profile:v main -x264-params subme=5:ref=4 "#ouputFilePath#"
With these settings ffmpeg is decoding frames in 20-25 ms, though with the sws_scale and then writing out to the texture I'm still hovering at ~22 FPS on an Xperia Ion, at a lower resolution than I'd like.
The native viewer uses a hardware H.264 decoder, while ffmpeg is usually compiled software-only. You must build ffmpeg with libstagefright.
(Note: the libstagefright option has since been removed from FFmpeg.)
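With libstagefright gone, the usual route to the same hardware decoder today is Android's MediaCodec API (a replacement suggestion, not what the linked project does). A minimal sketch (API 21+) that picks the first video track and renders straight to a Surface, which also avoids the sws_scale and texture-upload steps; pacing frames to their presentation time is left out:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;

public class HwDecoder {
    public static void playVideoTrack(String path, Surface surface) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        MediaFormat format = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            if (f.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
                extractor.selectTrack(i);
                format = f;
                break;
            }
        }
        MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        codec.configure(format, surface, null, 0); // decode straight onto the Surface
        codec.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        while (true) {
            if (!inputDone) {
                int in = codec.dequeueInputBuffer(10_000);
                if (in >= 0) {
                    int size = extractor.readSampleData(codec.getInputBuffer(in), 0);
                    if (size < 0) { // no more samples: signal end of stream
                        codec.queueInputBuffer(in, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        codec.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int out = codec.dequeueOutputBuffer(info, 10_000);
            if (out >= 0) {
                codec.releaseOutputBuffer(out, true); // true = render this frame
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
            }
        }
        codec.stop();
        codec.release();
        extractor.release();
    }
}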

Transcoding and Streaming a video file for Android

I'm trying to encode a local/static input file (MP4, for example) into a smaller video file (by resizing, lowering the video quality, etc.) and stream it in parallel (i.e. I can't wait for the encoding process to finish before streaming it back), so it can be played by an Android client (the standard Android video player).
So I've tried using ffmpeg as follows:
ffmpeg -re -i input.mp4 -g 52 -acodec libvo_aacenc -ab 64k -vcodec libx264 -vb 448k -f mp4 -movflags frag_keyframe+empty_moov -
Notice I'm using stdout as the output so I can run ffmpeg and stream its output on the fly.
However, such methods (and other similar methods) don't seem to work on Android - it simply can't play "non-standard" files (such as fragmented MP4s) - it seems like the empty moov atom messes it up.
I also tried other container formats, such as 3GPP and WebM.
I'd love to hear any kind of input on this issue...
Thanks
You can specify multiple outputs in ffmpeg; see http://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
For Android newer than 3.0, try HLS as the output format.
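For reference, a hedged HLS variant of the command above (the segment length and playlist name are arbitrary choices):
ffmpeg -re -i input.mp4 -g 52 -acodec libvo_aacenc -ab 64k -vcodec libx264 -vb 448k -f hls -hls_time 4 -hls_list_size 0 stream.m3u8
The client then plays the .m3u8 URL over HTTP with the standard Android player.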

To capture frame from video

I want to implement functionality that performs video recording, but instead of saving the video I want to save the individual frames from the recording. How can I do this?
If the aim is to create JPEGs from the video, the Camera class will not be of much help here. It captures videos or JPEGs; it does not do any conversion.
If you want a full video to be converted to a set of images, the simplest way is to use ffmpeg. Install it, and after capturing the video run a command to convert it to images:
ffmpeg -i input -s widthxheight -f image2 out%05d.jpg
That will create out00000.jpg, out00001.jpg, and so on from the video. You can use the -r option to lower the extraction rate, e.g. to grab one frame per second instead of every frame, as shown below.
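For example, extracting one frame per second (a hedged illustration; input and size placeholders as above):
ffmpeg -i input -r 1 -s widthxheight -f image2 out%05d.jpg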
