I developed an Android app that reads an .mp3 file, does some manipulation on the PCM data, and writes the result to another .mp3 file. So far I have been using JLayer for decoding and LAME (via the NDK) for mp3 encoding. Everything works fine; the only issue is that JLayer is very slow (as discussed here). I would therefore like to switch to mpg123, which is presumably much faster. For LAME there are nice Android NDK tutorials around, but for mpg123 I have not found anything. The only functionality I need is to decode an mp3 file frame by frame and obtain the data as PCM. I also checked this, this and this question, but my problem remains unsolved. As I am new to the Android NDK, I have two questions:
Do any of you know a step-by-step tutorial showing how to compile mpg123 for Android using the NDK? In particular, I do not know what the Android.mk and the wrapper.c should look like (see my rough attempt in the edit below).
Do any of you have a good alternative? In principle the only thing I am looking for is a fast version of JLayer: the functionality is excellent, but the performance is poor.
Any help will be greatly appreciated!
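Edit: to make question 1 more concrete, here is my rough, untested guess at the two files. It assumes the mpg123 sources live under jni/mpg123/ and that a config.h/mpg123.h suitable for the target has already been generated; all module, package and function names are made up.

```makefile
# jni/Android.mk (sketch): build libmpg123 as a static library,
# then link it into a small JNI wrapper library.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := mpg123
MPG123_SRC   := $(wildcard $(LOCAL_PATH)/mpg123/src/libmpg123/*.c)
LOCAL_SRC_FILES  := $(MPG123_SRC:$(LOCAL_PATH)/%=%)
LOCAL_C_INCLUDES := $(LOCAL_PATH)/mpg123/src/libmpg123
LOCAL_CFLAGS     := -DOPT_GENERIC   # plain C decoder, no assembly optimizations
include $(BUILD_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE           := mp3decoder
LOCAL_SRC_FILES        := wrapper.c
LOCAL_C_INCLUDES       := $(LOCAL_PATH)/mpg123/src/libmpg123
LOCAL_STATIC_LIBRARIES := mpg123
include $(BUILD_SHARED_LIBRARY)
```

And the wrapper, using mpg123's standard open/read API:

```c
/* jni/wrapper.c (sketch): open an mp3 and pull decoded PCM chunk by
 * chunk. The Java-side class com.example.Mp3Decoder is made up. */
#include <jni.h>
#include <stdint.h>
#include <mpg123.h>

JNIEXPORT jlong JNICALL
Java_com_example_Mp3Decoder_open(JNIEnv *env, jclass cls, jstring path)
{
    mpg123_init();                      /* once per process is enough */
    mpg123_handle *mh = mpg123_new(NULL, NULL);
    const char *cpath = (*env)->GetStringUTFChars(env, path, NULL);
    int err = mpg123_open(mh, cpath);
    (*env)->ReleaseStringUTFChars(env, path, cpath);
    if (err != MPG123_OK) { mpg123_delete(mh); return 0; }

    /* rate/channels could be reported back to Java; mpg123's default
     * output encoding is 16-bit signed PCM */
    long rate; int channels, encoding;
    mpg123_getformat(mh, &rate, &channels, &encoding);
    return (jlong)(intptr_t)mh;
}

/* Fills 'out' with decoded PCM; returns bytes written, -1 on error. */
JNIEXPORT jint JNICALL
Java_com_example_Mp3Decoder_read(JNIEnv *env, jclass cls,
                                 jlong handle, jbyteArray out)
{
    mpg123_handle *mh = (mpg123_handle *)(intptr_t)handle;
    jbyte *buf = (*env)->GetByteArrayElements(env, out, NULL);
    size_t done = 0;
    int err = mpg123_read(mh, (unsigned char *)buf,
                          (size_t)(*env)->GetArrayLength(env, out), &done);
    (*env)->ReleaseByteArrayElements(env, out, buf, 0);
    return (err == MPG123_OK || err == MPG123_DONE) ? (jint)done : -1;
}
```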
For days I have been trying to find a working library that can decode the video stream of the Parrot AR Drone 2.0. The problem is that FFmpeg isn't working in Xamarin Android, and Xuggle-Xuggler is made for Java only, which makes it really difficult.
Furthermore, I tried to use FFmpeg, but every time I got errors like this: DllImport error loading 'libavcodec-55': 'dlopen failed: library "libavcodec-55" not found'. I have seen a lot of possible solutions, but nothing works. I also tried to compile the FFmpeg source code into .dll files myself, but unfortunately I got the same errors as before.
I just want to create a TCP connection to the video stream at "192.168.1.1:5555". After that I want to use some decoder class/library that can turn the incoming bytes into frames and put the result on screen using a VideoView, so the frames are shown on the smartphone (see the sketch below for the kind of shim I have in mind).
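To illustrate what I mean, the kind of native shim I was hoping to call via DllImport looks roughly like this (an untested sketch against the old libavcodec-55 API; the function names are my own):

```c
/* Rough sketch of a native decode shim for libavcodec-55
 * (avcodec_decode_video2 was the current API then). A Xamarin app
 * would DllImport decoder_init/decoder_feed from the built .so. */
#include <libavcodec/avcodec.h>

static AVCodecContext *ctx;
static AVFrame *frame;

int decoder_init(void)
{
    avcodec_register_all();
    AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) return -1;
    ctx = avcodec_alloc_context3(codec);
    frame = av_frame_alloc();
    return avcodec_open2(ctx, codec, NULL);
}

/* Feed raw bytes read from the TCP socket (192.168.1.1:5555);
 * returns 1 when a complete frame was decoded into 'frame'.
 * A real shim also needs av_parser_init/av_parser_parse2 to split
 * the byte stream into whole packets before decoding. */
int decoder_feed(const uint8_t *buf, int size)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = (uint8_t *)buf;
    pkt.size = size;
    int got = 0;
    if (avcodec_decode_video2(ctx, frame, &got, &pkt) < 0) return -1;
    return got;  /* frame->data[0..2] now hold the YUV planes */
}
```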
Does anyone have experience with this? Or does someone know a working library for decoding the drone's TCP video stream?
Thanks.
Good news: I just solved the problem.
There is a possibility to use FFmpeg, but you need to compile it specifically for your platform; this is actually a little harder on Windows than on Ubuntu/Linux. I tried to use a pre-compiled library in Xamarin Android, but there were errors like DllImport error loading 'libavcodec-55': 'dlopen failed: library "libavcodec-55" not found', so that didn't work. Xuggle-Xuggler is a video decoder as well, but it is made for Java only and I am working in Xamarin Android, so I had to find something else.
After several weeks I saw a project which uses OpenCV and could decode the video stream of the drone. Then I found this repository: https://github.com/AJRdev/ARDrone-Android-GEII, whose author implements the video stream in two different ways, namely via OpenCV and via a library called "Vitamio".
What I did was try the Vitamio library, which Xamarin Android supports. There is a known Xamarin Android binding (https://components.xamarin.com/gettingstarted/vitamiobinding), but that is an old version, so I decided to use the Vitamio library found here: https://github.com/shaxxx/Xamarin.Vitamio. I am using this library because it ships as an .AAR containing the same files as the Vitamio library in the project I mentioned above, and, most importantly, no errors appeared :)
Unfortunately there is no information on the internet about using the Parrot AR Drone 2.0 with Xamarin Android. So if someone else runs into this problem, you could also use the source code of the official app, "Freeflight 2.4", since that one is made specifically for Android. However, there is a lot of code in the Freeflight 2.4 app, and isolating the video-stream part takes a lot of time; I did not have that time, so I chose the easier way explained above.
After the implementation you should be able to see the video on your smartphone!
Good luck!
I am working on Android 2.2, and my goal is to convert a sequence of images to an mp4 video, MPEG-4 Part 14, to be exact.
The most reasonable solution would be to use the FFmpeg libraries, compiled for Android with the NDK. However, I am looking for a solution without the NDK.
I do not expect you to build this tool for me, or to find someone else who did. I have spent quite some time searching, and it seems to be something that probably no one has done yet.
So the only thing I am asking is: help me find the specs so I can build the encoder myself. I know it's not a trivial task (maybe that's why no one has done it yet), but I want to do it anyway. So please just help me get started.
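Edit: for anyone else looking, the container itself is specified in ISO/IEC 14496-12 (the ISO base media file format) plus ISO/IEC 14496-14 (the MP4 extensions). Everything in the file is a "box": a 32-bit big-endian size, a four-character type, then the payload. A tiny illustration (in C for brevity; the same bytes are just as easy to write from Java):

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal illustration of the ISO base media "box" structure
 * (ISO/IEC 14496-12): every box is a 32-bit big-endian size, a
 * 4-byte type, then the payload. Writes a bare 'ftyp' box. */
static void write_u32_be(FILE *f, uint32_t v)
{
    uint8_t b[4] = { v >> 24, v >> 16, v >> 8, v };
    fwrite(b, 1, 4, f);
}

int main(void)
{
    FILE *f = fopen("skeleton.mp4", "wb");
    if (!f) return 1;
    /* ftyp box: size(4) + type(4) + major_brand(4) + minor_version(4)
     * + one compatible brand(4) = 20 bytes */
    write_u32_be(f, 20);
    fwrite("ftyp", 1, 4, f);
    fwrite("isom", 1, 4, f);   /* major brand */
    write_u32_be(f, 0);        /* minor version */
    fwrite("isom", 1, 4, f);   /* compatible brand */
    fclose(f);
    return 0;
}
```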
I'm looking to write an application that can combine images to form a video. I'm working with the Phonegap framework so it will be available on Android and iOS.
My question is: what sort of process is involved in achieving this?
At this stage I've tried to read about ffmpeg; most of the questions on Stack Overflow talk of having to get the source and compile it into a series of libraries. Do those libraries then need to be tied in with the Android/iOS libraries? (I notice there is an 'android.jar' with the project file in Eclipse. Would they live in there?) After that, my confusion lies in how this is implemented in Phonegap. Do I develop a plugin?
Just to add: according to its wiki, libav has hardware-accelerated H.264 decoding on Android, while x264 is used for encoding. How does that work? Is this something accessed from the libav libraries that then has to be compiled in with android.jar? (A rough sketch of the native encode loop is below.)
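From skimming the x264 docs, the encoding side appears to be a plain C API exposed by the compiled library; if I understand correctly, a native encode loop would look roughly like this (untested sketch, error handling and flushing of delayed frames omitted; it would still need to be wrapped for a Phonegap plugin):

```c
#include <x264.h>

/* Untested sketch of x264's C encode loop: feed I420 frames,
 * get back H.264 NAL units to hand to a file writer or muxer. */
int encode_frames(int width, int height, int nframes)
{
    x264_param_t param;
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_width   = width;
    param.i_height  = height;
    param.i_fps_num = 25;
    param.i_fps_den = 1;

    x264_t *enc = x264_encoder_open(&param);
    x264_picture_t pic_in, pic_out;
    x264_picture_alloc(&pic_in, X264_CSP_I420, width, height);

    for (int i = 0; i < nframes; i++) {
        /* fill pic_in.img.plane[0..2] with the next image's YUV data */
        pic_in.i_pts = i;
        x264_nal_t *nals;
        int n;
        int bytes = x264_encoder_encode(enc, &nals, &n, &pic_in, &pic_out);
        if (bytes > 0) {
            /* nals[0].p_payload holds 'bytes' of H.264 bitstream */
        }
    }
    x264_picture_clean(&pic_in);
    x264_encoder_close(enc);
    return 0;
}
```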
I may have confused terms in trying to describe what I do not know.
Any help would be appreciated.
I am working on an Android audio project which requires BPM tracking. I decided that writing my own tracker would not be a good idea, and after looking around I found a few libraries that do BPM tracking, such as aubio, Vamp, and EchoNest. Of the lot, aubio seemed a good choice. The problem is that I cannot find good documentation to help me understand how to use the library, for example which input audio formats are compatible (should I pre-process the audio before passing it to the function?), and so on.
Can you point me to some documentation, or to implementations of aubio in open source projects (on Android would be a bonus)?
If you think there is an easier way (another algorithm/library) to port to Android (preferably in C), let me know.
Thanks.
I used the make files provided with aubio to cross-compile it for Android. I followed some tutorials such as this one, which shows how to cross-compile open source libraries. As for the documentation for aubio, I just used the library until I understood how it works (I studied how the bundled examples work) and read the author's PhD thesis to get a rough idea of the technical details. Roughly, the pattern from the examples looks like the sketch below.
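In case it helps someone, this is the shape of the beat-tracking loop I took from the examples (a sketch from memory, so check aubio's headers for the exact signatures):

```c
#include <aubio/aubio.h>
#include <stdio.h>

/* Rough sketch of aubio's tempo/beat tracking loop, based on the
 * bundled examples. Input must be mono float samples, so decode
 * and downmix your audio to float PCM first. */
int track_bpm(const float *samples, unsigned int nsamples,
              unsigned int samplerate)
{
    uint_t buf_size = 1024, hop_size = 512;
    aubio_tempo_t *tempo = new_aubio_tempo("default", buf_size,
                                           hop_size, samplerate);
    fvec_t *in  = new_fvec(hop_size);
    fvec_t *out = new_fvec(2);   /* beat position lands in out->data[0] */

    for (unsigned int pos = 0; pos + hop_size <= nsamples; pos += hop_size) {
        for (uint_t i = 0; i < hop_size; i++)
            in->data[i] = samples[pos + i];
        aubio_tempo_do(tempo, in, out);
        if (out->data[0] != 0)   /* non-zero means a beat was detected */
            printf("beat, current BPM estimate: %f\n",
                   aubio_tempo_get_bpm(tempo));
    }

    del_fvec(in);
    del_fvec(out);
    del_aubio_tempo(tempo);
    aubio_cleanup();
    return 0;
}
```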
I'm looking for a way to decode AAC natively to PCM on Android. The decoder source code is at https://android.googlesource.com/platform/external/opencore/+/master/codecs_v2/audio/aac/dec, but I'm not familiar with the NDK at all.
1) There's no way of doing this directly with the Android SDK, but can it be done via the NDK?
2) I would especially be interested in a simple way of accessing the decoder from the SDK, with a short "bridge" through the NDK. Is this feasible?
3) Would such a solution work on all Android versions (1.5-2.2)?
4) I guess I could use http://code.google.com/p/aacplayer-android/ instead, but it looks like this implementation is fairly CPU-intensive. Does anyone have experience with it?
Not sure what the policy is here for answering really old questions, but what is working well for me is using OpenSL ES with the NDK; it comes built in, and in fact the NDK ships an example, "native-audio", that demonstrates what you need.
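For reference, the engine bring-up from that sample boils down to the snippet below (error checking stripped; link with -lOpenSLES). One caveat: OpenSL ES is only available from Android 2.3 (API 9) onward, so it will not cover the 1.5-2.2 range asked about in question 3.

```c
#include <SLES/OpenSLES.h>

/* Engine bring-up as in the NDK's native-audio sample; the player
 * object with its buffer queue is then created from engineEngine. */
static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine = NULL;

void createEngine(void)
{
    slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
    (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE,
                                  &engineEngine);
}
```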
One thing you may also look into is FFmpeg; it is GPL, and TuneIn Radio posted their mods here: http://radiotime.com/mobile/android#/support/open-source