I want to develop a video merge application for Android. I have managed to build ffmpeg.so using the NDK and JNI. Now I want to extract the frames of an input video (around 15 seconds long). Does anyone have an idea how to get the frames from a video using ffmpeg? Is there a direct native method in ffmpeg for getting frames? And how do I pass a command from a Java class to a native method to perform the video merging? Please point me to a solution.
The only tutorial I know of is http://dranger.com/ffmpeg/. It is not Android-related, but getting frames is covered at the very beginning of the tutorial.
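If you would rather stay on the Java side than port that C tutorial, JavaCV's FFmpegFrameGrabber wraps the same libavformat/libavcodec decode loop. A minimal sketch, assuming JavaCV is on your classpath (the file path is a placeholder):

    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.Frame;

    public class FrameExtractor {
        // Decodes every video frame of the input file.
        public static void extractFrames(String path) throws Exception {
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(path);
            grabber.start();
            int count = 0;
            Frame frame;
            while ((frame = grabber.grabImage()) != null) {
                // frame.image holds the raw pixel buffers; use
                // AndroidFrameConverter if you need an android.graphics.Bitmap.
                count++;
            }
            grabber.stop();
            grabber.release();
            System.out.println("Decoded " + count + " frames");
        }
    }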
With ffmpeg you can extract frames or thumbnails; try these commands from your JNI/NDK build.
To extract a thumbnail:
ffmpeg -i videojoin.mpg -vf "thumbnail=25,scale=iw/4:ih/4" -frames:v 1 -y thumbs.png
To extract the first frame from a video:
ffmpeg -i input_video_file_name -vframes 1 -f image2 output_frame.bmp
You can find more commands for creating thumbnails and frames in the FFmpeg wiki.
I want to compress video, and the only solution I have found is to use ffmpeg.
I found "Trying to use FFMPEG for android. Compiling but still not working", but I was not able to work out how the author compiled it.
You could use precompiled libs from the JavaCPP Presets along with FFmpegFrameRecorder from JavaCV to compress images, as shown in the RecordActivity sample, for example.
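A rough sketch of setting up FFmpegFrameRecorder for that (the resolution, frame rate, and bitrate below are illustrative values, not taken from the sample):

    import org.bytedeco.javacv.FFmpegFrameRecorder;

    public class Compressor {
        // Creates a recorder that encodes frames to a lower-bitrate MP4.
        public static FFmpegFrameRecorder createRecorder(String outPath) throws Exception {
            FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outPath, 640, 480);
            recorder.setFormat("mp4");
            recorder.setFrameRate(30);
            recorder.setVideoBitrate(500000); // lower bitrate = stronger compression
            recorder.start();
            return recorder; // call record(frame) per frame, then stop() and release()
        }
    }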
I am working on an Android camera application which is capable of capturing images and video. Later, users can annotate the image and add a watermark to the video. Everything went fine when drawing annotations on the image, but I have found no solution for the video. On iPhone there is the AVComposition library for drawing a watermark on videos. I don't know whether such a library exists for Android, but I would like to know if someone has come across this requirement and found a solution.
Can someone guide me on how to get started with composing an image onto a video, or at least adding text to the video somewhere?
In case anyone is still looking for this: ffmpeg can be used on Android to add a watermark to a video, but be aware that video processing on a power- and processing-limited device like a phone may be slow and may run down the battery if used extensively.
An example command to add a watermark to a video, from the ffmpeg documentation, is:
ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' out.mkv
See here for the official documentation: https://ffmpeg.org/ffmpeg.html and look at '-filter_complex filtergraph (global)'
ffmpeg can be used in Android projects via one of the wrapper libraries that exist to support this. Although ffmpeg was designed as a command line tool, this approach works reliably in my experience.
The library below is a good example of a well-supported Android ffmpeg wrapper library. It includes a sample APK, so you may be able to try out the ffmpeg command you need with that before adding it to your own project.
https://github.com/WritingMinds/ffmpeg-android-java
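With that library, the watermark command from above can be run from Java roughly like this (a sketch, assuming the library's loadBinary/execute API; the file paths are placeholders):

    import android.content.Context;
    import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
    import com.github.hiteshsondhi88.libffmpeg.FFmpeg;

    public class WatermarkRunner {
        // Runs the overlay filter through the wrapper library.
        // Call ffmpeg.loadBinary(...) once at startup before execute().
        public static void addWatermark(Context context) throws Exception {
            String[] cmd = {"-i", "/sdcard/video.mkv", "-i", "/sdcard/image.png",
                    "-filter_complex", "[0:v][1:v]overlay[out]",
                    "-map", "[out]", "/sdcard/out.mkv"};
            FFmpeg ffmpeg = FFmpeg.getInstance(context);
            ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                @Override public void onSuccess(String message) { /* watermark written */ }
                @Override public void onFailure(String message) { /* inspect ffmpeg output */ }
            });
        }
    }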
I have successfully compiled and built the ffmpeg library for Android after 3-4 days of research.
Now I want to grab frames from a video, but I don't know which ffmpeg method, and with which command, to call from a Java class to grab all the frames. Does anyone have an idea? Alternatively, I want to overlay two videos: is there a direct method in ffmpeg to merge two videos one over the other? If so, how do I call it from a Java class?
Compile ffmpeg.c and invoke its main() via JNI.
For more details, see how to Use FFMPEG on Android.
Also refer to this GitHub project, and this tutorial will help too.
I hope this helps.
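The Java side of that approach looks roughly like this (FFmpegBridge and run() are hypothetical names for your own JNI wrapper, not an existing API):

    public class FFmpegBridge {
        static {
            System.loadLibrary("ffmpeg"); // your NDK-built ffmpeg.so
        }

        // Hypothetical wrapper around ffmpeg.c's main(); the matching C symbol
        // would be Java_your_package_FFmpegBridge_run(JNIEnv*, jclass, jobjectArray).
        public static native int run(String[] args);
    }

    // Usage: pass the same arguments you would type on the command line, e.g.
    // FFmpegBridge.run(new String[]{"ffmpeg", "-i", "/sdcard/in.mp4",
    //         "-frames:v", "1", "/sdcard/frame.png"});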
I want to create a video with ffmpeg from byte[] data taken from the Android camera. The problem is that I don't have much knowledge of ffmpeg, so I need some documentation. I would appreciate it if anyone could provide a useful tutorial, sample code, or example of how ffmpeg works and how it can be used to create video programmatically. Thanks.
ffmpeg is a C library, so you will have to use the NDK to build it and bridge it to your Android code with a JNI interface. As far as I know, it is not possible to record video directly using ffmpeg alone. However, you can use OpenCV to capture the video stream and then decode/encode it with ffmpeg if you decide to go this route. Once again, all of this must be done in C/C++, and the results can be passed back to the Java side via JNI once you have finished processing with ffmpeg.
Here is the link to OpenCV Library for Android
http://opencv.org/downloads.html
Once downloaded, there are sample projects which show you how to record video using the native Android camera as well as the OpenCV features.
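For the camera side, the old android.hardware.Camera API hands you each preview frame as a byte[], which you can then push into your encoder. A sketch (encodeFrame() stands in for your own JNI call and is hypothetical):

    import android.hardware.Camera;

    public class PreviewCapture {
        // Hypothetical JNI call into your native ffmpeg encoder.
        private static native void encodeFrame(byte[] nv21Frame);

        public static Camera startCapture() {
            Camera camera = Camera.open();
            camera.setPreviewCallback(new Camera.PreviewCallback() {
                @Override
                public void onPreviewFrame(byte[] data, Camera cam) {
                    // data is one preview frame (NV21 by default).
                    encodeFrame(data);
                }
            });
            camera.startPreview(); // a preview surface must also be set in a real app
            return camera;
        }
    }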
We're trying to replicate the functionality of this ffmpeg command line using the FFmpeg C API through JNI calls on Android:
ffmpeg -ss 2 -t 120 -i input.file -vcodec copy -acodec copy output.file
Basically, given a start and end time, we wish to copy (not re-encode) a small(er) segment of video from the larger (input) video source.
We've been using the wonderful JavaCV wrapper for OpenCV and FFmpeg, but we just cannot figure out how to do this simple bit of work. We've been scouring ffmpeg.c and related sources, and while I now understand that it switches to stream_copy and remuxes rather than re-encodes when the codec is specified as copy, I cannot for the life of me identify which series of method calls replicates this through the C API. Does anyone have an example JNI file for doing this? Or are there rockstar C types who can explain how I get from that command line to API calls? We've spent the better part of two weeks working on this (we're not native C guys) and we're at the point where we just need to ship some code. Any example code, especially JNI code or method-call maps, would be greatly appreciated!
You have to include a JNI wrapper method in your Java code.
Maybe these two links are useful: Link 1 Link 2
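For what it's worth, the Java side of such a wrapper could look like the sketch below (ClipTrimmer and trimCopy() are hypothetical names). On the native side, the usual call sequence for a stream copy is avformat_open_input(), av_seek_frame() to the start point, then a loop of av_read_frame()/av_interleaved_write_frame() with timestamp rescaling, as in ffmpeg's remuxing example:

    public class ClipTrimmer {
        static {
            System.loadLibrary("clip-trim"); // your NDK-built native library
        }

        // Hypothetical JNI wrapper: the C implementation remuxes packets
        // between startSec and startSec + durationSec without re-encoding.
        public static native int trimCopy(String inPath, String outPath,
                                          double startSec, double durationSec);
    }

    // Usage, mirroring "ffmpeg -ss 2 -t 120 ... -vcodec copy -acodec copy":
    // int rc = ClipTrimmer.trimCopy("/sdcard/input.mp4", "/sdcard/output.mp4", 2.0, 120.0);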