I am using the ffmpeg library in my project with a custom VideoPlayer. I can convert video from .avi to .mp4, but the conversion takes a long time. Can I convert video on the fly using ffmpeg?
EDIT
So I want to play videos in formats that the standard MediaPlayer doesn't currently support. I've built ffmpeg for my project, so I now have an ffmpeg.so file and can convert video with ffmpeg, for example:
ffmpeg -i input.avi -vcodec mpeg4 -acodec aac -strict -2 output.mp4
But this can take a long time if the video is big. How can I implement support for different formats using ffmpeg?
Yes, you can convert video on the fly using ffmpeg (the proper term is transcoding).
Here's an older guide written for Linux. All of it should still work on Android.
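As a minimal sketch of the idea (file names and encoder settings are placeholders you'd adapt), you can have ffmpeg transcode into a streamable container and write it to stdout, then consume that pipe while encoding is still running:
ffmpeg -re -i input.avi -vcodec libx264 -preset veryfast -acodec aac -strict -2 -f mpegts -
Here -re makes ffmpeg read the input at its native frame rate, and -f mpegts - writes an MPEG-TS stream to standard output, so playback or serving can start before the conversion has finished.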
Related
I have recently used the ffmpeg library for Android to compress a video of length 10 seconds and size nearly 25 MB. Following are the commands I tried to use:
ffmpeg -i /video.mp4 -vcodec h264 -b:v 1000k -acodec mp2 /output.mp4
OR
ffmpeg -i input.mp4 -vcodec h264 -crf 20 output.mp4
Both of the commands were too slow. I canceled the task before it completed because it was taking too much time: it took more than 8 minutes to process JUST 20% of the video. Time is really critical for me, so I can't opt for ffmpeg. I have the following questions:
Is there something wrong with the commands, or is ffmpeg just slow in general?
If it's slow, is there any other well-documented and reliable way/library for video compression that I can use on Android?
Your file is in an MP4 container and its streams are already encoded with some codec.
Now, the size of any container (not specifically MP4) will depend on what kind of compression (loosely, which codec) is used to compress the data. This is why you will see different sizes for the same content in different formats.
There are other parameters that can affect the size of the file, e.g. frame rate, resolution, audio bitrate, etc. If you reduce them, the file gets smaller; on YouTube, for example, you can choose to play a video at a lower quality when bandwidth is an issue.
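For illustration only (codec choice and the numbers are arbitrary), a command along these lines shrinks the file by lowering the resolution and the video/audio bitrates:
ffmpeg -i input.mp4 -vf scale=-2:480 -vcodec libx264 -b:v 700k -acodec aac -strict -2 -b:a 96k output.mp4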
However, if you choose to do this you will have to re-process the entire file, and that is going to take a lot of time, since you are demuxing the container, decoding the streams, applying filters (reducing the frame rate, etc.), re-encoding, and then remuxing again. This entire process is not worth a few extra MB of savings unless you have some compelling use case.
One solution is to use a more powerful machine, but again this is limited by the architecture/constraints of the application and how well it can use that extra power. To answer specifically for ffmpeg: it won't make much difference.
I'm trying to use FFmpeg to create an MP4 file from user-created content in an Android app.
The user-created content is rendered with OpenGL. I was thinking I could render it to a FrameBuffer and save it as a temporary image file, then take this image file and add it to an MP4 file with FFmpeg. I'd do this frame by frame, creating and deleting these temporary image files as I go while I build the MP4 file.
The issue is I will never have all of these image files at one time, so I can't just use the typical call:
ffmpeg -start_number n -i test_%d.jpg -vcodec mpeg4 test.mp4
Is this possible with FFmpeg? I can't find any information about adding frames to an MP4 file one-by-one and keeping the correct framerate, etc...
Use a pipe (stdin) to get the raw frames to FFmpeg. Note that this doesn't mean exporting entire images; all you need is the pixel data. Something like this:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -pix_fmt rgb24 -r 30 -i - -vcodec mpeg4 test.mp4
Using -i - means FFmpeg will read from the pipe.
I think from there you would just send in the raw pixel values via the pipe, one byte per color channel per pixel. FFmpeg will know when each frame ends since you've told it the frame size.
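For example, assuming a hypothetical file frames.rgb containing nothing but concatenated 1920x1080 RGB24 frames, the same idea looks like this from a shell (in the app you would write those bytes to the ffmpeg process's standard input instead of using cat):
cat frames.rgb | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 30 -i - -vcodec mpeg4 test.mp4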
I'm trying to encode a local/static input file (an MP4, for example) into a smaller video file (either by resizing, lowering the quality, etc.) and stream it in parallel (i.e. I can't wait for the encoding process to finish before streaming it back), so it can be played by an Android client (the standard Android video player).
So I've tried using ffmpeg as follows:
ffmpeg -re -i input.mp4 -g 52 -acodec libvo_aacenc -ab 64k -vcodec libx264 -vb 448k -f mp4 -movflags frag_keyframe+empty_moov -
Notice I'm using stdout as the output so I can run ffmpeg and stream its output on the fly.
However, such methods (and other similar methods) don't seem to work on Android: it simply can't play "non-standard" files (such as fragmented MP4s) once it receives them; it seems like the empty moov atom trips it up.
I also tried other container formats, such as 3GPP and WebM.
I'd love to hear any kind of input on this issue...
Thanks
You can specify multiple outputs in ffmpeg; see http://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
For Android newer than 3.0, try HLS as an output.
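A rough sketch of an HLS output (segment length and file names are just examples, and your ffmpeg build needs the hls muxer):
ffmpeg -re -i input.mp4 -vcodec libx264 -acodec aac -strict -2 -f hls -hls_time 4 -hls_list_size 0 stream.m3u8
The client then plays the stream.m3u8 playlist while ffmpeg keeps appending segments.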
I have the following stream: mms://77.238.11.5:8080. You can access it using Windows Media Player.
I can't find any way to view it on Android devices using MediaPlayer or VideoView, so my idea is to convert it with VLC or FFMPEG to a different format like MP4 or something else.
You can use ffmpeg to convert the stream to a file:
ffmpeg -i "mmsh://77.238.11.5:8080/" output.mp4
The -t duration option limits the length of the recording, if needed.
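For example, to record only the first 60 seconds (the duration is arbitrary):
ffmpeg -i "mmsh://77.238.11.5:8080/" -t 60 output.mp4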
They say that VPlayer can play such streams.
I have successfully ported the ffmpeg library to Android using Bambuser's ffmpeg port. I'm currently investigating ffmpeg's source code, especially the ffplay.c and api-examples.c files.
I want to extract elementary streams from videos recorded on Android 2.2. For example, I can record an H.263-encoded video in an MPEG-4 container; let's say a test.mp4 file.
What I want to achieve is to extract the H.263 elementary video stream from the test.mp4 file into something like test.h263.
It can be extracted using the ffmpeg CLI, like this:
ffmpeg -i test.mp4 -vcodec copy test.h263
But unfortunately I don't know how to extract an elementary stream from a container using the ffmpeg API.
Thanks.