I want to use the codecs in Android from my application. For now I just want to use the H.264 codec for testing, unless the mp3 or aac codecs provide functions for sending the audio to the device's speaker in which case I would prefer one of those.
I have the NDK installed along with Cygwin, GNU Make, and GNU Awk, but I can't figure out what I need to do from here. I'm downloading the entire OpenCORE tree right now, but I don't even know how to build it or how to make the Eclipse plugin aware that it needs to include the files.
An example or a tutorial would be much appreciated.
EDIT:
It looks like I can use JNI like P/Invoke which would mean I don't have to build the OpenCORE libraries myself. However, I can't find any documentation on the names of the libraries I need to load.
I'm also confused as to how to do it. I'm looking at http://www.koushikdutta.com/2009/01/jni-in-android-and-foreword-of-why-jni.html and I don't understand what the purpose of writing a library to access a library is. Couldn't you just use something like System.loadLibrary("opencore.so")?
You cannot build OpenCORE separately; it has to be built with the whole source tree. What are you trying to achieve? If you just want to play video/audio, use a VideoView or MediaPlayer object.
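For basic playback, a minimal SDK-only sketch (assuming the usual imports and a placeholder file path) might look like this:

// A minimal SDK-only playback sketch (no NDK/OpenCORE needed).
// "/sdcard/test.mp4" is just a placeholder path for illustration.
MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("/sdcard/test.mp4"); // H.264 video or MP3/AAC audio file
    player.prepare();                         // use prepareAsync() for network streams
    player.start();                           // audio goes to the device's speaker by default
} catch (IOException e) {
    // handle the error (missing file, unsupported format, ...)
}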
Build the Android source and use the headers and the static library from it. This will propel you straight to the zone of unsupported APIs.
I have checked this question.
It is very similar:
I want to record a video with the Android camera.
After that, I want to use a library to remove the background, which is a chroma-key (green-screen) background.
At first I thought I should use the Android NDK in order to escape the SDK memory limitations and use all of the available memory.
The video is short, only a few seconds, so maybe the SDK can handle it.
I would prefer an SDK implementation with android:largeHeap="true" set, to avoid problems with mismatched .so file architectures.
Any library suggestions for either the SDK or the NDK would be appreciated.
IMO you should prefer an NDK-based solution, since video processing is a CPU-intensive operation and Java code won't give you better performance. Moreover, the most popular and reliable media-processing libraries are written in C or C++.
I'd recommend you take a look at FFmpeg. It offers rich capabilities for working with multimedia. The chromakey filter may help you remove the green background (or whatever color you want). Then you can use another video as the new background, if needed. See the blend filter docs.
Filters are a nice and powerful concept. They may be used either via the ffmpeg command-line tool or via the libavfilter API. For the former case you need to find an ffmpeg binary compiled for Android and run it with the traditional Runtime.exec(). For the latter case you need to write native code that creates the proper filter graph and performs the processing; this code must be linked against the FFmpeg libraries.
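As a rough illustration of the command-line route, the sketch below runs a bundled ffmpeg binary from Java and keys the recording over a new background. The binary location, file names, and key color are assumptions; check the exact chromakey/overlay syntax against the FFmpeg filter documentation.

// Sketch only: assumes an ffmpeg binary was bundled with the app and copied to
// an executable location such as getFilesDir() (the path below is hypothetical).
// Exception handling (IOException, InterruptedException) omitted for brevity.
String ffmpeg = getFilesDir().getAbsolutePath() + "/ffmpeg";
String[] cmd = {
    ffmpeg,
    "-i", "background.mp4",   // new background video
    "-i", "recorded.mp4",     // camera recording with the green screen
    "-filter_complex",
    "[1:v]chromakey=0x00FF00:0.1:0.2[fg];[0:v][fg]overlay[out]",
    "-map", "[out]",
    "output.mp4"
};
Process p = Runtime.getRuntime().exec(cmd);
int exitCode = p.waitFor();   // 0 means ffmpeg finished successfully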
I'm looking to write an application that can combine images into a video. I'm working with the PhoneGap framework so it can be used on Android and iOS.
My question is what sort of process is involved to achieve this?
At this stage I've tried to read about FFmpeg. Most of the existing questions on Stack Overflow talk about having to get the source and compile it into a series of libraries. Do those libraries then need to be tied in with the Android/iOS libraries? (I notice there is an 'android.jar' with the project file in Eclipse. Would it exist in there?) After that, my confusion lies with how this is implemented in PhoneGap. Do I develop a plugin?
Just to add: according to the wiki, libav has hardware-accelerated H.264 decoding, while x264 is used for encoding on Android. How does that work? Is this something accessed from the libav libraries that then has to be compiled in within android.jar?
I may have confused terms in trying to describe what I do not know.
Any help would be appreciated.
I'm working with OpenCV 2.2 for Android under Windows, and I've run into a problem when using cvCreateVideoWriter: it always returns NULL. I'm guessing it has something to do with the FFMPEG library not being properly built. The thing is that I followed the instructions at http://opencv.willowgarage.com/wiki/Android2.2, and since FFMPEG is included as a 3rd-party library (at least I can see the source within the whole OpenCV package), I thought I didn't have to do anything extra to get this library installed. I might be wrong. How do I check whether the library was correctly built (or built at all)? Do I need to make any changes to the default make files?
Any help is much appreciated.
Thanks!
There are 2 important things to consider when using cvCreateVideoWriter():
Your application needs the rights to create files and be able to write to them. Make sure you have set up the necessary directory permissions for it to do so.
The 2nd argument of the function is the code of the codec used to compress the frames. For instance, CV_FOURCC('P','I','M','1') is the MPEG-1 codec and CV_FOURCC('M','J','P','G') defines motion-JPEG.
A typical call may look like this:
// Open an AVI file for writing, using the motion-JPEG codec
CvVideoWriter *writer = cvCreateVideoWriter("video.avi", CV_FOURCC('M','J','P','G'), fps, size, 0);
if (!writer)
{
    // handle error: the writer could not be created (unsupported codec, bad path, ...)
}
I suggest calling cvCreateVideoWriter with different codecs. It may be that your platform doesn't support the one you are using right now.
I don't know if the default build for Android enables the flag HAVE_FFMPEG, but you need to have FFmpeg installed, and it's best to make sure this flag is enabled when compiling OpenCV.
I am writing an app which needs to decode an H.264 (AVC) bitstream. I found that AVC codec sources exist in /frameworks/base/media/libstagefright/codecs/avc; does anyone know how one can get access to those codecs in an Android app? I guess it's through JNI, but I'm not clear about how this can be done.
After some investigation, I think one approach is to create my own classes and JNI interfaces in the Android source to enable using the codecs in an Android app.
Another way, which does not require any changes to the Android source, is to include the codecs as a shared library in my application using the NDK. Any thoughts on these? Which way is better (if feasible)?
I didn't find much information about Stagefright; it would be great if anyone could point some out. I am developing on Android 2.3.3.
Any comments are highly appreciated. Thanks!
Stagefright does not support elementary H.264 decoding. However, it does have an H.264 decoder component. In theory, this library could be used, but in reality it will be tough to use as a standalone library due to its dependencies.
The best approach would be to use a JNI-wrapped independent H.264 decoder (like the one available with FFmpeg).
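For illustration, the Java side of such a JNI wrapper might look roughly like the sketch below. The class, library name, and native method signatures are hypothetical; the matching C/C++ implementation would be built with the NDK and linked against the decoder (e.g. FFmpeg's).

public class H264Decoder {                       // hypothetical wrapper class
    static {
        // Loads libh264decoder.so from the app's native library directory
        // (the library name is an assumption, not an existing Android library).
        System.loadLibrary("h264decoder");
    }

    // Hypothetical native methods, implemented in C/C++ via the NDK:
    public native long init();
    public native int decodeFrame(long handle, byte[] nalUnit, byte[] outYuv);
    public native void release(long handle);
}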
I'm looking for a way to decode AAC natively to PCM on Android. The decoder source code is at https://android.googlesource.com/platform/external/opencore/+/master/codecs_v2/audio/aac/dec, but I'm not familiar with NDK at all.
1) There's no way of doing this directly using the Android SDK, but can this be done via the NDK?
2) I would especially be interested in a simple way of accessing the decoder from SDK, with a short "bridge" through the NDK. Is this feasible?
3) Would such a solution work on all Android versions (1.5-2.2)?
4) I guess I could use http://code.google.com/p/aacplayer-android/ instead, but it looks like this implementation is fairly CPU intensive. Does anyone have experiences with this?
Not sure what the policy is here for answering really old questions, but what is working well for me is using OpenSL with the NDK; it comes built in, and in fact the NDK comes with an example, "native-audio", that demonstrates what you need.
One thing you may look into is the FFmpeg stuff; it is GPL, and TuneIn Radio posted their mods here: http://radiotime.com/mobile/android#/support/open-source