I'm working with OpenCV 2.2 for Android under Windows, and I've run into a problem with cvCreateVideoWriter: it always returns NULL. I'm guessing it has something to do with the FFMPEG library not being properly built. The thing is that I followed the instructions at http://opencv.willowgarage.com/wiki/Android2.2, and since FFMPEG is included as a 3rd-party library (at least I can see its source within the OpenCV package) I thought I didn't have to do anything extra to get it installed. I might be wrong. How do I check whether the library was correctly built (or built at all)? Do I need to make any changes to the default make files?
Any help is much appreciated.
Thanks!
There are two important things to consider when using cvCreateVideoWriter():
Your application needs the rights to create files and write to them. Make sure you have set up the necessary directory permissions for it to do so.
The 2nd argument of the function is the four-character code (FOURCC) of the codec used to compress the frames. For instance, CV_FOURCC('P','I','M','1') is the MPEG-1 codec and CV_FOURCC('M','J','P','G') defines motion-jpeg.
A typical call may look like this:
CvVideoWriter *writer = cvCreateVideoWriter("video.avi", CV_FOURCC('M','J','P','G'), fps, size, 0);
if (!writer)
{
// handle error
}
I suggest calling cvCreateVideoWriter with different codecs. It may be that your platform doesn't support the one you are using right now.
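As a rough illustration, here is a minimal C sketch (against the OpenCV 2.x C API) that tries a few candidate FOURCC codes in turn; the codec list and the helper name are only examples, not a definitive set:
#include <opencv/highgui.h>

/* Minimal sketch: try candidate FOURCC codes until one yields a valid
   writer. The codec list below is illustrative, not exhaustive. */
CvVideoWriter *open_first_working_writer(const char *path, double fps, CvSize size)
{
    int codecs[] = {
        CV_FOURCC('M','J','P','G'),  /* motion-jpeg   */
        CV_FOURCC('P','I','M','1'),  /* MPEG-1        */
        CV_FOURCC('X','V','I','D')   /* MPEG-4 (Xvid) */
    };
    int i;
    for (i = 0; i < (int)(sizeof(codecs) / sizeof(codecs[0])); i++) {
        CvVideoWriter *w = cvCreateVideoWriter(path, codecs[i], fps, size, 1);
        if (w)
            return w;   /* this codec is supported on the platform */
    }
    return NULL;        /* none of the tried codecs are available */
}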
I don't know if the default build for Android enables the HAVE_FFMPEG flag, but you need to have FFmpeg installed, and it's best to make sure this flag is enabled when compiling OpenCV.
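If you configure OpenCV yourself, you can ask CMake for FFmpeg support explicitly and then read the configuration summary it prints to see whether FFmpeg was actually picked up. WITH_FFMPEG is the standard OpenCV CMake option on desktop builds; whether the Android 2.2 port honors it is an assumption on my part:
cmake -D WITH_FFMPEG=ON <path-to-opencv-source>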
I have checked this question.
It is very similar:
I want to record a video with the Android camera.
After that, I want to remove the background, which is a chroma key (green screen), using a library.
At first I thought I should use the Android NDK in order to escape the SDK memory limitation and use the whole memory.
The video is short, just a few seconds, so maybe the device is able to handle it.
I would prefer to use an SDK implementation and set android:largeHeap="true", to avoid mismatched .so file architectures.
Any library suggestions for SDK or NDK, please?
IMO you should prefer an NDK-based solution, since video processing is a CPU-intensive operation and Java code won't give you better performance. Moreover, the most popular and reliable media-processing libraries are often written in C or C++.
I'd recommend you take a look at FFmpeg. It offers rich capabilities for working with multimedia. The chromakey filter may help you remove the green background (or whatever color you want). Then you can use another video as the new background, if needed. See the blend filter docs.
Filters are a nice and powerful concept. They may be used either via the ffmpeg command-line tool or via the libavfilter API. For the former case you should find an ffmpeg binary compiled for Android and run it with the traditional Runtime.exec(). For the latter case you need to write native code that creates the proper filter graph and performs the processing. This code must be linked against the FFmpeg libraries.
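To give a rough idea of the command-line route, here is a hypothetical invocation that keys the green out of in.mp4 and composites the result over bg.mp4. The file names and the similarity/blend parameters are placeholders you would tune for your footage, and I use the overlay filter for the compositing step (the blend filter mentioned above can serve a similar purpose):
ffmpeg -i bg.mp4 -i in.mp4 -filter_complex "[1:v]chromakey=green:0.1:0.1[keyed];[0:v][keyed]overlay[out]" -map "[out]" output.mp4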
OK So here is my story:
I am creating an app that requires me to take a couple of images and a video and merge them together. At first I had no idea what to use; I had never heard of FFmpeg or the NDK. After around 5 days of battling the NDK, switching to Ubuntu and going crazy with ndk-build commands, I finally got FFmpeg to compile using the dolphin-player example. Now that I can run ffmpeg on my computer and Android device, I have no idea what to do next.
Here are the main questions I have:
To use FFmpeg, I saw that I need to use some sort of commands. First off, what are these commands, and where do I run them?
Second of all, are the commands all I need? By that I mean: can I just run my application normally and, somewhere in it, execute the commands in some way that does the rest for me? Or do I need some sort of element in the code, for example a VideoEncoder instance or something similar?
Third of all, I saw people using the NDK to use FFmpeg. Do I have to? Or is it optional? I would like to avoid using C if possible, as I don't know it at all.
OPTIONAL: Last but not least, is this the best way of handling what I need to do in my application? If so, can someone briefly guide me on how to use FFmpeg to accomplish said task (mention commands or anything like this)?
I know it's a wall of text but every question is important to me!
Thank you very much stackoverflow community!
I see my answer may no longer be relevant to your question, but I still put it here, as I've recently gone through that very same path and I understand the pain as well as the confusion caused by this matter (setting up the NDK with the mixed Gradle plugin took me 1 day, building FFmpeg took 2 days, and then I was stuck at "what am I supposed to do next??").
So in short, as @Daniel has pointed out, if you just want to use FFmpeg to run commands such as compressing, cutting, or inserting keyframes, then WritingMinds' prebuilt FFmpeg Android Java library is the easiest way to get FFmpeg running in your app. The downside is that, since it just runs commands, it needs an input and an output file for the process. See my question here for further clarification.
If you need to do more complex tasks than this, then you have no choice but to build FFmpeg as a library and call its API. I've written down step-by-step instructions that worked for me (May 2016). You can see them here:
Building FFmpeg v3.0.2 with NDK r11c (please use Ubuntu if you don't want to rebuild the whole thing; Linux Mint failed me)
Using FFmpeg in Android Studio 2.1.1
Please don't ask me to copy the whole thing here, as it's a very long set of instructions and it's easier for me to keep one source of information up to date. I hope this can save someone's keyboard ;).
1. FFmpeg can be either an app or a set of libraries. If you use it as an app (with an executable binary installed), you can type the commands in a terminal. The app only has limited functions and may not solve your problem. In that case you need to use FFmpeg as libraries and call its APIs from your program.
2. To my understanding, the commands cannot solve your problem. You need to call the FFmpeg APIs. There are a bunch of code samples for video/image encoding and decoding. You will probably also need a container to package the outcome, and the FFmpeg libraries can do that too.
3. I prefer the NDK, since FFmpeg is written in C/C++. There are Java wrappers for FFmpeg; if you use them, the NDK is not required. However, not all FFmpeg functions are wrapped well; you may try, and if that fails, go back to the NDK solution.
4. The simplest way is to decode all your video/images into raw frames, combine them in the desired order, and encode them (see the decoding sketch after this list). In practice, however, this consumes too much memory. The key point then becomes: how can I do the same on the fly? It's not too hard once you reach this step.
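To make point 4 more concrete, here is a minimal C sketch of the decoding half using the FFmpeg libraries. It is written against the modern avcodec_send_packet/avcodec_receive_frame API of recent FFmpeg releases, the callback is a placeholder, and error handling and decoder flushing are abbreviated:
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Sketch: decode every video frame of `path` and hand it to `on_frame`. */
static void decode_frames(const char *path, void (*on_frame)(AVFrame *))
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return;
    avformat_find_stream_info(fmt, NULL);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vstream < 0) {
        avformat_close_input(&fmt);
        return;
    }

    const AVCodec *codec =
        avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
    avcodec_open2(ctx, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream &&
            avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0)
                on_frame(frame);   /* one raw decoded frame */
        }
        av_packet_unref(pkt);
    }

    /* Flushing the decoder with a NULL packet is omitted for brevity. */
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
}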
I am currently working on an Android application that evaluates images in different aspects, and I found that there are lots of great open-source algorithms that can be used.
Problem 1: Most of the algorithms are implemented in C/C++/Matlab and cannot be applied directly.
I've read that the NDK is a tool that allows us to develop Android applications in other languages, but the setup procedure is quite complicated and I've been stuck on it for days. So before I go further, I would like to first ask whether I can include someone else's C/C++ source code directly, like calling a Java library?
Problem 2: For example, I would like to use the Point Matching algorithm's source code in my application, but there are lots of files inside, as it's just source code, not a library/plugin. What are the steps to use the required functions in my Android application?
(The most desired way is to blindly feed some images to the algorithm and have it return the results, like calling functions, without my having to understand how it works internally.)
You can't directly use other C++ libraries; you have to build them for Android first using the NDK. If there is a version of the library already built for Android, then of course you can use it directly by linking against it with the NDK.
You have two options here. First, you create a regular Java application for Android, write a C++ wrapper to handle calls to the native side, and build the necessary source files using the NDK. Your Java application will make calls to the wrapper using JNI, and the wrapper will call your actual C++ functions, as in: Java -> JNI wrapper in C++ -> your actual C++ source. (A minimal JNI sketch follows after the two options.)
The second option is going fully native, which leaves out Java, the JNI calls, and the wrapper. You can access your source files directly, as if you were writing a desktop C++ application. In the end you will have to build using the NDK, but this step is required in either case.
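For the first option, a minimal JNI bridge might look like the sketch below. Everything named here is hypothetical: the package and class (com.example.app.NativeBridge), the method, and the algorithm call are placeholders, since the exported function name must match whatever Java class actually declares the native method:
#include <jni.h>

/* Hypothetical bridge for a Java declaration such as:
       package com.example.app;
       public class NativeBridge {
           public native int processImage(byte[] pixels, int width, int height);
       }
*/
JNIEXPORT jint JNICALL
Java_com_example_app_NativeBridge_processImage(JNIEnv *env, jobject thiz,
                                               jbyteArray pixels,
                                               jint width, jint height)
{
    jbyte *buf = (*env)->GetByteArrayElements(env, pixels, NULL);

    /* ... hand `buf` to the third-party C/C++ algorithm here ... */
    int result = 0;

    /* Mode 0 copies any changes back to the Java array and frees `buf`. */
    (*env)->ReleaseByteArrayElements(env, pixels, buf, 0);
    return result;
}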
As a suggestion, you can also take a look at OpenCV for image-processing purposes. It has libraries built for Android; all you will have to do is link against them.
Short version.
Download the opencv4android library. Import it into Eclipse and see if everything is fine (compile errors, output, etc.).
Secondly, try to import the face-detection sample application and try to understand how it works.
It has a Java part and a native part.
In order to understand how it works, you must understand how Java interacts with C++, which is more or less how the NDK works.
One important aspect is to learn how to create your interfaces in C++, based on the native methods declared in Java. When you get there, you can try creating your own application.
After that you can come here and ask specific questions.
I want to use the codecs in Android from my application. For now I just want to use the H.264 codec for testing, unless the MP3 or AAC codecs provide functions for sending audio to the device's speaker, in which case I would prefer one of those.
I have the NDK installed along with Cygwin, GNU Make, and GNU Awk. I can't figure out what I need to do from here, though. I'm downloading the entire OpenCORE tree right now, but I don't even know how to build it or how to make the Eclipse plugin know it needs to include the files.
An example or a tutorial would be much appreciated.
EDIT:
It looks like I can use JNI like P/Invoke which would mean I don't have to build the OpenCORE libraries myself. However, I can't find any documentation on the names of the libraries I need to load.
I'm also confused as to how to do it. I'm looking at http://www.koushikdutta.com/2009/01/jni-in-android-and-foreword-of-why-jni.html and I don't understand what the purpose of writing a library to access a library is. Couldn't you just use something like System.loadLibrary("opencore.so")?
You cannot build OpenCORE separately; it has to be built with the whole source tree. What are you trying to achieve? If you just want to play video/audio, use a VideoView or a MediaPlayer object.
Build the Android source and use the headers and the static library from it. This will propel you straight to the zone of unsupported APIs.
I was wondering if someone could provide me with a bit of theory. I read that;
Page alignment causes changes in linking. Of very high impact on the success of compiling software for Android is the fact that Google forces compatible binaries to not be page aligned for the text and data section. This requires changes in the way of linking object files. For self-written software, one can take precautions and react on this fact with compiling all shared libraries accordingly. For already existing source code, changing the linker's behavior can present a very tiring and, often, an even impossible task.
But I personally know very little about page alignment; what does this actually mean? Is this the reason we must change the asx file when compiling native libraries for use on Android? Here's the context for that statement.
Apologies if you think I should just Google this, I did try but I'm just looking for a bit more of an explanation than there is at that link.
Kind regards,
Gavin
Using CodeSourcery and a linker script is not a valid way to create a native library on Android anymore.
Take a look at the native library example at development/samples/PlatformLibrary/ to see how you should build a native library.
For a start, you can modify the PlatformLibrary example to make your own library. Compiling the code should create the following files:
out/target/product/generic/system/app/PlatformLibraryClient.apk
out/target/product/generic/system/etc/permissions/com.example.android.platform_library.xml
out/target/product/generic/system/framework/com.example.android.platform_library.jar
out/target/product/generic/system/lib/libplatform_library_jni.so
You do not have to worry about those linker issues anymore.
Regards,
Manny