FFmpeg - Finally compiled. Now what? - android

OK, so here is my story:
I am creating an app that requires me to take a couple of images and a video and merge them together. At first I had no idea what to use; I had never heard of FFmpeg or the NDK. After around 5 days of battling the NDK, switching to Ubuntu, and going crazy with ndk-build commands, I finally got FFmpeg to compile using the dolphin-player example. Now that I can run FFmpeg on my computer and my Android device, I have no idea what to do next.
Here are the main questions I have:
To use FFmpeg, I saw that I need to use some sort of commands. First off, what are these commands, and where do I run them?
Second, are the commands all I need? By that I mean, can I just run my application normally, execute the commands somewhere in it, and FFmpeg will do the rest for me? Or do I need some sort of element in the code, for example a VideoEncoder instance or something similar?
Third, I saw people using the NDK to use FFmpeg. Do I have to, or is it optional? I would like to avoid C if possible, as I don't know it at all.
OPTIONAL: Last but not least, is this the best way to handle what I need to do in my application? If so, can someone briefly guide me through using FFmpeg to accomplish said task (mentioning commands or anything like that)?
I know it's a wall of text but every question is important to me!
Thank you very much stackoverflow community!

I see my answer may no longer be relevant to your question, but I'm still putting it here, as I've recently gone down that very same path and I understand the pain as well as the confusion this matter causes (setting up the NDK with the mixed Gradle plugin took me 1 day, building FFmpeg took 2 days, and then I got stuck at "what am I supposed to do next??").
So in short, as @Daniel has pointed out, if you just want to use FFmpeg to run commands such as compressing, cutting, or inserting keyframes, then WritingMinds' prebuilt ffmpeg-android-java is the easiest way to get FFmpeg running in your app. The downside is that since it just runs commands, it needs an input file and an output file for each process. See my question here for further clarification.
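For a sense of what that command-based approach looks like, the strings you hand to such a wrapper are ordinary ffmpeg invocations; the filenames below are placeholders:

```shell
# Compress a video by re-encoding with a higher CRF (smaller file, lower quality):
ffmpeg -i in.mp4 -c:v libx264 -crf 28 compressed.mp4

# Cut a 10-second clip starting at 5 seconds, without re-encoding:
ffmpeg -ss 00:00:05 -i in.mp4 -t 10 -c copy cut.mp4
```

Both read one input file and write one output file, which is exactly the limitation mentioned above.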
If you need to do more complex tasks than this, then you have no choice but to build FFmpeg as a set of libraries and call the APIs from them. I've written down step-by-step instructions that work for me (May 2016). You can see them here:
Building FFmpeg v3.0.2 with NDK r11c (please use Ubuntu if you don't want to redo the whole thing; Linux Mint failed me)
Using FFmpeg in Android Studio 2.1.1
Please don't ask me to copy the whole thing here, as it's a very long set of instructions and it's easier for me to keep one source of information up to date. I hope this can save someone's keyboard ;).

1. FFmpeg can be either an app or a set of libraries. If you use it as an app (with an executable binary installed), you can type the commands in a terminal. The app only has limited functions and may not solve your problem; in that case you need to use the ffmpeg libraries and call their APIs from your program.
2. To my understanding, the commands alone cannot solve your problem. You need to call the ffmpeg APIs. There is plenty of sample code for video/image encoding and decoding. You will probably also need a container to package the outcome, and the ffmpeg libraries can do that too.
3. I prefer the NDK, since ffmpeg is written in C. There are Java wrappers for ffmpeg; if you use them, the NDK is not required. However, not all ffmpeg functions are wrapped well, so try the wrappers first, and if they fall short, go back to the NDK solution.
4. The simplest way is to decode all your videos/images into raw frames, combine them in the desired order, and encode them. In practice, however, this consumes too much memory. The key question then becomes: how do I do the same on the fly? It's not too hard once you reach this step.
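To make this concrete, here is a command-line sketch of the asker's task (a couple of images plus a video, merged into one clip). Filenames, durations, and frame rates are placeholders, and the concat filter requires all inputs to share the same resolution and pixel format:

```shell
# Turn each still image into a short 2-second clip at 25 fps:
ffmpeg -loop 1 -i img1.jpg -t 2 -r 25 -pix_fmt yuv420p clip1.mp4
ffmpeg -loop 1 -i img2.jpg -t 2 -r 25 -pix_fmt yuv420p clip2.mp4

# Concatenate image clips and the video into a single output (video only):
ffmpeg -i clip1.mp4 -i video.mp4 -i clip2.mp4 \
       -filter_complex "[0:v][1:v][2:v]concat=n=3:v=1:a=0[out]" \
       -map "[out]" merged.mp4
```

Doing the same thing through the libraries means replicating this pipeline with the decoding, filtering, and encoding APIs instead of one command.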


Android video remove chroma key background

I have checked this question.
It is very similar:
I want to record a video with the Android camera.
After that, remove the chroma-key background with a library.
At first I thought I should use the Android NDK in order to escape the SDK memory limitation and use all the available memory.
The video is short, a few seconds, so maybe the SDK can handle it.
I would prefer an SDK implementation with android:largeHeap="true" set, because of mismatched .so file architectures.
Any library suggestions for the SDK or NDK, please?
IMO you should prefer an NDK-based solution, since video processing is a CPU-consuming operation and Java code won't give you better performance. Moreover, the most popular and reliable media-processing libraries are often written in C or C++.
I'd recommend you take a look at FFmpeg. It offers rich capabilities for working with multimedia. The chromakey filter may help you remove the green background (or whatever color you want). Then you can use another video as the new background, if needed. See the blend filter docs.
Filters are a nice and powerful concept. They may be used either via the ffmpeg command-line tool or via the libavfilter API. For the former, you should find an ffmpeg binary compiled for Android and run it with the traditional Runtime.exec(). For the latter, you need to write native code that creates the proper filter graph and performs the processing; this code must be linked against the FFmpeg libraries.
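A minimal command-line sketch of the chromakey-plus-overlay idea (input names are placeholders, and the key color and tolerance values will need tuning for your footage):

```shell
# Key out green from the foreground clip, then overlay it on a new background.
# chromakey args: color, similarity, blend.
ffmpeg -i background.mp4 -i greenscreen.mp4 \
       -filter_complex "[1:v]chromakey=0x00FF00:0.1:0.2[fg];[0:v][fg]overlay[out]" \
       -map "[out]" output.mp4
```

The same filter graph can be built in native code via avfilter_graph_parse and friends if you go the libavfilter route.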

How to use prebuilt FFmpeg in Android Studio

I'm sure this is a very basic question, but since this is my first time messing around with the NDK, a lot of things are still very unclear to me.
Use case:
I'm trying to develop a video scrubbing feature, so fast and accurate frame seeking is crucial. I've tried most of the available players out there, but the performance is still not up to my requirements. That's why I'm going down the FFmpeg route.
Basically, what I'm looking for is FFmpeg input seeking. I've tried WritingMinds' ffmpeg-android-java. However, it is a file-based implementation, which means the out.jpg needs to be written to external storage and read back, and that is a big performance hit (roughly 1000 milliseconds per seek).
That's why I'm trying to build my own FFmpeg player to do the input seeking in JNI and push the byte[] back to be displayed in Java.
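For reference, "input seeking" in command-line terms means placing -ss before -i, so the demuxer jumps near the target by keyframe instead of decoding everything up to it; the rough native-side equivalent is av_seek_frame() in libavformat. A sketch with placeholder filenames:

```shell
# Fast input seek to ~83.4 seconds, then decode exactly one frame to JPEG:
ffmpeg -ss 00:01:23.400 -i input.mp4 -frames:v 1 out.jpg
```

The JNI approach described above replaces the out.jpg round-trip through storage with an in-memory buffer.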
Question: After a lot of struggling with the NDK, I've managed to set it up and successfully call the JNI method from my Java code. The structure is as below:
MyApp
  app
  MyFFmpegPlayer
    build
    libs
    src
      main
        java
          com.example.myffmpegplayer
            HelloJNI.java
        jni
          MyFFmpegPlayer.c
After some failed attempts to build FFmpeg on Windows, I've decided to use WritingMinds' prebuilt FFmpeg. However, after extraction they just come up as plain ffmpeg files (not .so files), so I don't really know how to use them.
I would be very grateful if someone could chime in and give me a good starting point for my next step.
Thank you so much for your time.
So to answer my own question, which I should change to "How to build FFmpeg and use it with Android Studio": I have created detailed step-by-step instructions that work for me (as of May 24th, 2016).
Sorry I can't post the whole instructions here, as they are very long, plus it is easier for me to keep one source of information up to date.
I hope this can help someone, as I know there is way too much confusing and contradictory information on this topic out there.

Debug (ASM) running Android Application

So I have googled this but just can't find a definitive answer.
I have an Android application which does a type of background calculation. I no longer have access to the source code, and furthermore it's written using the NDK, so I can't use dex2jar.
What I'd like to do is somehow attach a debugger and see the assembly of the calculation to work it out, as I can't think of any other way to do this.
There doesn't seem to be much information on the web about this.
If the APK is built in debug mode (containing gdbserver), you might get it to work, but you may need to either dissect the ndk-gdb script quite a bit or build a mock project that ndk-gdb can look at to do its magic.
Alternatively, if you're OK with not stepping through it at runtime (inspecting the registers etc.), you can just disassemble the .so files. Try <NDK>/toolchains/arm-linux-androideabi-*/prebuilt/*/bin/arm-linux-androideabi-objdump -d libfoo.so. If the detection of ARM vs. Thumb mode doesn't work, you might want to add -M force-thumb. This obviously requires more work, and you can't check intermediate values, but it should be doable with almost any library.

Simple Build of Libavcodec.so and libavformat.so on Mac

Who needs ffmpeg? Not me. What I need is to be able to decode a video stream along with its audio stream, so that I can put the frames on an OpenGL surface in sync with the audio.
FFmpeg is a tool that transcodes video. That is not what I need; I need its libraries.
The problem is that every example for building FFmpeg includes junk I just don't need. The latest example I wasted my time on:
https://github.com/appunite/AndroidFFmpeg
uses things like freetype2 that I really, REALLY, do not need. What's more annoying is that it won't even build as described, because the example references freetype, not freetype2, so the build steps are broken. Don't even get me started on the problems I had with libtool.
The kicker is finding libav.org, whose about page describes the chaos in the ffmpeg project. Perhaps that is why this is so difficult.
So, should it be this hard to build just the shared libs? Can someone point me to documentation, or a tutorial that works? I admit that this is new territory for me, but all I have found using Google is chaos.
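For what it's worth, a lean build that produces only the shared libraries is possible with FFmpeg's own configure switches. This is a sketch against a stock FFmpeg source tree on the host machine; Android cross-compilation flags (--target-os, --cross-prefix, etc.) are omitted here:

```shell
# Build only the shared libraries: no ffmpeg/ffprobe binaries, no docs,
# and drop components a decode-to-OpenGL pipeline doesn't need.
./configure \
    --disable-programs --disable-doc \
    --disable-static --enable-shared \
    --disable-avdevice --disable-avfilter --disable-postproc
make -j4
# The libraries land in their subdirectories, e.g.:
#   libavcodec/libavcodec.so
#   libavformat/libavformat.so
```

None of the freetype/libtool machinery from full-featured examples is involved; external dependencies like freetype are opt-in via --enable-* flags and simply stay out of a default build.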

cvCreateVideoWriter (OpenCV 2.2 + FFMPEG)

I'm working with OpenCV 2.2 for Android under Windows, and I've run into a problem using cvCreateVideoWriter: it always returns NULL. I'm guessing it has something to do with the FFMPEG library not being properly built. The thing is, I followed the instructions at http://opencv.willowgarage.com/wiki/Android2.2, and since FFMPEG is included as a 3rd-party library (at least I can see the source within the whole OpenCV package), I thought I didn't have to do anything extra to get this library installed. I might be wrong. How do I check whether the library was correctly built (or built at all)? Do I need to make any changes to the default make files?
Any help is much appreciated.
Thanks!
There are 2 important things to consider when using cvCreateVideoWriter():
Your application needs the rights to create files and write to them. Make sure you have set up the necessary directory permissions for it to do so.
The second argument of the function is the FOURCC code of the codec used to compress the frames. For instance, CV_FOURCC('P','I','M','1') is the MPEG-1 codec and CV_FOURCC('M','J','P','G') is Motion JPEG.
A typical call may look like this:
CvVideoWriter *writer = cvCreateVideoWriter("video.avi", CV_FOURCC('M','J','P','G'), fps, size, 0);
if (!writer)
{
    // handle error: the writer could not be created
}
I suggest calling cvCreateVideoWriter with different codecs. It may be that your platform doesn't support the one you are using right now.
I don't know if the default build for Android enables the HAVE_FFMPEG flag, but you need to have ffmpeg installed, and it's best to make sure this flag is enabled when compiling OpenCV.
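One way to check and fix this is at CMake configure time; WITH_FFMPEG is OpenCV's standard CMake option, though the exact build steps for the Android port may differ:

```shell
# From a fresh OpenCV build directory, request FFmpeg support explicitly:
cmake -D WITH_FFMPEG=ON ..
# Inspect the configuration summary CMake prints; it should report
# FFMPEG as found/enabled before you run make.
make
```

If the summary reports FFMPEG as missing, the writer backend was never compiled in, which would explain cvCreateVideoWriter returning NULL.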
