We're trying to replicate the functionality of this ffmpeg command-line invocation using the FFmpeg C API through JNI calls on Android.
ffmpeg -ss 2 -t 120 -i input.file -vcodec copy -acodec copy output.file
Basically, given a start and end time, we wish to copy (not re-encode) a small(er) segment of video from the larger (input) video source.
We've been using the wonderful JavaCV wrapper for OpenCV and FFmpeg, but we just cannot figure out how to do this simple bit of work. We've been scouring ffmpeg.c and the related sources, and while I now understand that it switches to stream copy and remuxing rather than re-encoding when the codec is specified as copy, I cannot for the life of me identify what series of method calls to make to replicate this through the C API. Does anyone have an example JNI file for doing this? Or are there rockstar C types who can explain how I get from that command line to API calls? We've spent the better part of two weeks working on this (we're not native C guys) and we're at the point where we just need to ship some code. Any example code, especially JNI code or method call maps, would be greatly appreciated!
You have to include a JNI wrapper method in your Java code.
Maybe these two links are useful: Link 1 Link 2
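In case it helps, below is a rough sketch of the call sequence the C API needs for a stream copy (remux) of a time range, i.e. what the -ss/-t/-vcodec copy/-acodec copy combination boils down to. This is not production code: error handling is minimal, the function name trim_copy is made up, it assumes a reasonably recent libavformat, and you would call it from your own JNI wrapper.

#include <libavformat/avformat.h>

/* Hypothetical helper: copy the packets between start_sec and end_sec
   from in_path to out_path without re-encoding. */
static int trim_copy(const char *in_path, const char *out_path,
                     double start_sec, double end_sec)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket pkt;
    int ret, i;

    if ((ret = avformat_open_input(&in, in_path, NULL, NULL)) < 0)
        return ret;
    if ((ret = avformat_find_stream_info(in, NULL)) < 0)
        goto end;

    /* The output muxer is guessed from the extension of out_path. */
    if ((ret = avformat_alloc_output_context2(&out, NULL, NULL, out_path)) < 0)
        goto end;

    /* One output stream per input stream, codec parameters copied:
       this is what "-vcodec copy -acodec copy" amounts to. */
    for (i = 0; i < (int)in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        if (!os) { ret = AVERROR(ENOMEM); goto end; }
        if ((ret = avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar)) < 0)
            goto end;
        os->codecpar->codec_tag = 0;
    }

    if (!(out->oformat->flags & AVFMT_NOFILE) &&
        (ret = avio_open(&out->pb, out_path, AVIO_FLAG_WRITE)) < 0)
        goto end;
    if ((ret = avformat_write_header(out, NULL)) < 0)
        goto end;

    /* Seek near the start point (the "-ss" part). */
    av_seek_frame(in, -1, (int64_t)(start_sec * AV_TIME_BASE), AVSEEK_FLAG_BACKWARD);

    while (av_read_frame(in, &pkt) >= 0) {
        AVStream *ist = in->streams[pkt.stream_index];
        AVStream *ost = out->streams[pkt.stream_index];

        /* Stop once we are past the end point (the "-t" part). */
        if (pkt.pts != AV_NOPTS_VALUE &&
            pkt.pts * av_q2d(ist->time_base) > end_sec) {
            av_packet_unref(&pkt);
            break;
        }

        /* Rescale timestamps from the input to the output time base. */
        pkt.pts = av_rescale_q_rnd(pkt.pts, ist->time_base, ost->time_base,
                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
        pkt.dts = av_rescale_q_rnd(pkt.dts, ist->time_base, ost->time_base,
                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
        pkt.duration = av_rescale_q(pkt.duration, ist->time_base, ost->time_base);
        pkt.pos = -1;

        if ((ret = av_interleaved_write_frame(out, &pkt)) < 0)
            break;
        av_packet_unref(&pkt);
    }

    av_write_trailer(out);
end:
    avformat_close_input(&in);
    if (out && !(out->oformat->flags & AVFMT_NOFILE))
        avio_closep(&out->pb);
    avformat_free_context(out);
    return ret;
}

This mirrors the stream-copy path you found in ffmpeg.c; the real tool additionally offsets timestamps so the trimmed output starts at zero and handles many edge cases this sketch ignores.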
I have a video (.mp4) file on my SD card. I want to reduce the size of the .mp4 file and upload it to a server.
One way you can do this is to use ffmpeg.
There are several ways of using ffmpeg in an Android program:
use the native libraries directly from C using JNI
use a library which provides a wrapper around the 'ffmpeg' cmd line utility (also uses JNI in the wrapper library)
call the ffmpeg cmd line via 'exec' from within your Android app
Of the three, I personally have used the wrapper approach in the past and found it worked well. IMHO, the documentation and examples available with the native libraries represented quite a steep learning curve.
Note, if you do use 'exec' there are some things it is worth being aware of - see bottom of this answer: https://stackoverflow.com/a/25002844/334402.
The wrapper does have limitations - at heart, the ffmpeg cmd line tool is not intended to be used this way and you have to keep that in mind, but it does work. There is an example project available on GitHub which seems to have a reasonable user base - I did not use it myself, but I did refer to it and found it useful, especially for an issue you will hit if you need to call your ffmpeg wrapper more than once from the same activity or task:
https://github.com/jhotovy/android-ffmpeg
See this answer (and the questions and answers it is part of) for some more specifics on the 'calling ffmpeg two times' solution:
https://stackoverflow.com/a/28752190/334402
I have successfully compiled and built the ffmpeg library on Android after 3-4 days of research work.
Now I want to grab frames from a video, but I don't know which ffmpeg method or command to call from a Java class to grab all the frames. Does anyone have an idea about this? Also, I want to overlay 2 videos. Is there any direct method available in ffmpeg to merge two videos one over the other? If yes, how do I call it from a Java class?
Compile ffmpeg.c and invoke its main() via JNI.
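As a rough sketch of that approach (the Java class com.example.FFmpegBridge and its run() method below are invented names, so adjust them to your own package): rename main() in your copy of ffmpeg.c to something like ffmpeg_main(), then expose it through a JNI function that turns a Java String[] into argc/argv.

#include <jni.h>
#include <stdlib.h>
#include <string.h>

/* Renamed main() from ffmpeg.c (or build ffmpeg.c with -Dmain=ffmpeg_main). */
int ffmpeg_main(int argc, char **argv);

/* Matches a hypothetical Java declaration in com.example.FFmpegBridge:
       public static native int run(String[] args);                      */
JNIEXPORT jint JNICALL
Java_com_example_FFmpegBridge_run(JNIEnv *env, jclass clazz, jobjectArray args)
{
    int i, ret, argc = (*env)->GetArrayLength(env, args);
    char **argv = malloc((argc + 1) * sizeof(char *));

    /* Copy the Java strings into a plain C argv array. */
    for (i = 0; i < argc; i++) {
        jstring s = (jstring)(*env)->GetObjectArrayElement(env, args, i);
        const char *utf = (*env)->GetStringUTFChars(env, s, NULL);
        argv[i] = strdup(utf);
        (*env)->ReleaseStringUTFChars(env, s, utf);
        (*env)->DeleteLocalRef(env, s);
    }
    argv[argc] = NULL;

    /* e.g. args = {"ffmpeg", "-i", in, "-vframes", "1", "frame.png"} */
    ret = ffmpeg_main(argc, argv);

    for (i = 0; i < argc; i++)
        free(argv[i]);
    free(argv);
    return ret;
}

Be aware that ffmpeg.c keeps global state and calls exit() on fatal errors, which is why calling it more than once per process typically needs extra care (see the 'calling ffmpeg two times' discussion above).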
For more details see how to Use FFMPEG on Android.
Also refer to this GitHub project. And this tutorial will help you too.
I hope this will help you.
I want to develop a video merge application on Android. I have been able to generate the ffmpeg.so file using the NDK and JNI. But now I want to get the frames of an input video (around 15 sec.). Does anyone have an idea how to get the frames from a video using ffmpeg? Is there a direct native method available in ffmpeg to get frames? And how do I pass a command from a Java class to a native method to perform the video merging functionality? Just give me a solution for that.
The only tutorial I know is http://dranger.com/ffmpeg/. It is not Android-related, but they do get frames at the very beginning of the tutorial.
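In terms of API calls, grabbing frames is a plain demux-and-decode loop. A heavily trimmed sketch (most error handling omitted, the function name grab_frames is made up, and the send/receive decoding API of a recent libavcodec is assumed):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Decode every video frame of `path` and hand it to your own code
   (save it, convert it with sws_scale, pass it back to Java, ...). */
static int grab_frames(const char *path)
{
    AVFormatContext *fmt = NULL;
    AVCodecContext *dec = NULL;
    const AVCodec *codec = NULL;
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    int vstream, n = 0;

    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);

    /* Pick the "best" video stream and open a decoder for it. */
    vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
    if (vstream < 0)
        goto done;
    dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt->streams[vstream]->codecpar);
    avcodec_open2(dec, codec, NULL);

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream) {
            avcodec_send_packet(dec, pkt);
            while (avcodec_receive_frame(dec, frame) == 0) {
                /* frame->data / frame->linesize now hold one decoded
                   picture (usually YUV420P); process or convert it here. */
                n++;
            }
        }
        av_packet_unref(pkt);
    }
    /* A complete version would also flush the decoder here with
       avcodec_send_packet(dec, NULL). */

done:
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    avformat_close_input(&fmt);
    return n; /* number of frames decoded */
}

For overlaying one video on another, the usual route with the command-line tool is the overlay filter (-filter_complex overlay); from the C API that means pulling in libavfilter, which is noticeably more code than the loop above.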
With ffmpeg you can get frames or thumbnails; try these commands via JNI and the NDK.
To extract a thumbnail:
ffmpeg -i videojoin.mpg -vf "thumbnail=25,scale=iw/4:ih/4" -frames:v 1 -y thumbs.png
To extract the first frame from a video:
ffmpeg -i input_video_file_name -vframes 1 -f image2 output_frame.bmp
For creating thumbnails or frames you can find more commands in this FFMPEG wiki link.
I need to merge images into a video as an overlay and export it. I have found ways to create a video from images using javacv, but didn't find any jar or library which adds images as an overlay to an existing video. Some links suggest using FFMPEG and JNI to achieve this, but sadly I don't have any knowledge of JNI. On iOS they use the AVFoundation framework to achieve the same thing.
The above image is a replica of my requirements; if anyone can guide me in the right direction and point me to some useful material to start with, it would be appreciated.
What I have achieved so far:
1) Compiled FFMPEG.
2) Generated the .so files.
3) Compiled and am able to run the Hello JNI project.
What I am searching for:
1) Splitting the video into frames.
2) Merging my overlay images with the video frames.
3) Recreating the video with audio.
I know JNI is the only way to achieve this, so I searched a lot but didn't find any good JNI material to start with. I am not asking for the whole code, but if someone could point me to a good tutorial or blog it would be a great help.
Thanks!!
The way to accomplish this is, as you said, to incorporate a video codec and use it to re-compose the video:
Decode the original video
Draw your overlay over the original video frames
Encode the frames again.
Using FFMPEG with JNI is the obvious solution, but if you find any other codec library that can accomplish the same with pure Java, it will work too.
No knowledge of JNI? Then it's about time to learn it =)
References for learning:
NDK docs on your file system: http://developer.android.com/tools/sdk/ndk/index.html#Docs
JNI - http://192.9.162.55/docs/books/jni/html/jniTOC.html
FFMPEG for Android - there are many tutorials out there and many source trees that contain a ready build environment.
You can either follow them or just do it on your own - provided that you understand the NDK environment and can read makefiles.
This is an example: http://vec.io/posts/how-to-build-ffmpeg-with-android-ndk
Using FFmpeg to encode/decode frames - sadly, there are no good up-to-date tutorials here, but there is the API documentation: http://ffmpeg.org/doxygen/trunk/index.html
Blending bitmaps - use Android's Canvas infrastructure, or just manually copy pixels over each other and blend according to alpha values.
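If you go the manual route, the per-pixel blend itself is short. A sketch, assuming both the decoded frame and the overlay have already been converted to same-sized RGBA buffers (e.g. with sws_scale) with straight, non-premultiplied alpha:

#include <stdint.h>

/* Source-over blend of an RGBA overlay onto an RGBA frame, in place.
   stride is in bytes; both buffers are width x height pixels. */
static void blend_rgba(uint8_t *frame, const uint8_t *overlay,
                       int width, int height, int stride)
{
    for (int y = 0; y < height; y++) {
        uint8_t *dst = frame + y * stride;
        const uint8_t *src = overlay + y * stride;
        for (int x = 0; x < width; x++, dst += 4, src += 4) {
            int a = src[3];   /* overlay alpha, 0..255 */
            dst[0] = (src[0] * a + dst[0] * (255 - a)) / 255;
            dst[1] = (src[1] * a + dst[1] * (255 - a)) / 255;
            dst[2] = (src[2] * a + dst[2] * (255 - a)) / 255;
        }
    }
}

Alternatively, libavfilter's overlay filter does exactly this for you if you'd rather not hand-roll the blending.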
Warning - this is a complex library to build and you'd better experiment with easier NDK projects before attempting this one.
I have a very basic question regarding Android and ffmpeg. I obtained ffmpeg from http://bambuser.com/opensource and was able to compile it for ARM.
The results are the binaries (ffmpeg) as well as several libsomething.so files.
My question is: Is this enough to decode videos? How do I actually use ffmpeg then?
To load the library I have:
static {
    System.load("/data/data/com.package/lib/libavcodec.so");
}
It loads fine. But what then?
More explanation: I saw other projects where people had their ffmpeg source in a JNI directory in the project. They also created some Android.mk files and some C code along with it. Would I need this as well? Why would I create the .so files first and then copy the ffmpeg source code again?
I know the NDK and how it should work but I've never seen an example of how one would actually call ffmpeg functions using it, because people seem to be hiding their implementations (which is sort of understandable) but not even giving useful pointers or examples.
Let's just say I wanted to decode a video file. Which kind of native methods would I need to implement? How do I run the project? Which data types need to be passed? etc. There are certainly a few people here who have at least done that, I know this from searching for hours and hours.
For your first question:
Just building is not enough for proper use of the ffmpeg libraries. You also need to wrap those .so files in the right order, because these .so files NEED other libraries at link time. You can display the header information of a .so file by using:
objdump -x libavcodec.so | grep NEEDED
So you need to wrap these .so files through Android.mk. You may check this link.
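For illustration, an Android.mk along these lines declares the prebuilt .so files and links your own JNI module against them (the module names and paths are just examples, and you would repeat the prebuilt block for every library you actually need, in dependency order):

LOCAL_PATH := $(call my-dir)

# Prebuilt ffmpeg libraries
include $(CLEAR_VARS)
LOCAL_MODULE := avutil
LOCAL_SRC_FILES := prebuilt/libavutil.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := avcodec
LOCAL_SRC_FILES := prebuilt/libavcodec.so
include $(PREBUILT_SHARED_LIBRARY)

# Your own JNI glue code that calls into the ffmpeg API
include $(CLEAR_VARS)
LOCAL_MODULE := mydecoder
LOCAL_SRC_FILES := mydecoder.c
LOCAL_C_INCLUDES := $(LOCAL_PATH)/ffmpeg
LOCAL_SHARED_LIBRARIES := avutil avcodec
LOCAL_LDLIBS := -llog
include $(BUILD_SHARED_LIBRARY)

On the Java side you then load the libraries in dependency order (libavutil before libavcodec, and so on) before loading your own module.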
For the second one:
You only need the header files from the ffmpeg project; the implementation will be linked from the .so libraries. The source is probably copied over simply because developers didn't bother to filter out just the header files.
And for the last one:
Your thoughts seem right for the time being; most current developers are struggling to use ffmpeg, and there is a lack of documentation and sample code.
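To make the "which native methods and which data types" part a bit more concrete, a minimal bridge could look like the sketch below. The class name com.example.Player and the method openAndGetDurationMs are invented, and returning the duration is only a stand-in for whatever decoding work you actually want to do:

#include <jni.h>
#include <libavformat/avformat.h>

/* Java side:  public static native long openAndGetDurationMs(String path); */
JNIEXPORT jlong JNICALL
Java_com_example_Player_openAndGetDurationMs(JNIEnv *env, jclass clazz, jstring jpath)
{
    const char *path = (*env)->GetStringUTFChars(env, jpath, NULL);
    AVFormatContext *fmt = NULL;
    jlong duration_ms = -1;

    /* Older ffmpeg versions also need av_register_all() before this. */
    if (avformat_open_input(&fmt, path, NULL, NULL) == 0 &&
        avformat_find_stream_info(fmt, NULL) >= 0)
        duration_ms = fmt->duration / (AV_TIME_BASE / 1000); /* to milliseconds */

    avformat_close_input(&fmt);
    (*env)->ReleaseStringUTFChars(env, jpath, path);
    return duration_ms;
}

Strings cross the JNI boundary as jstring and are converted with GetStringUTFChars; decoded pixel data usually goes back the other way through a jbyteArray or a direct ByteBuffer, or gets rendered straight to a Surface from native code.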