For days I have been trying to find a working library that can decode the video stream of the Parrot AR Drone 2.0. The problem is that FFmpeg isn't working in Xamarin Android, and Xuggle-Xuggler is Java-only, which makes it really difficult.
Furthermore, I tried to use FFmpeg, but every time I got errors like this: DllImport error loading 'libavcodec-55': 'dlopen failed: "libavcodec-55" not found'. I have seen a lot of possible solutions, but nothing works. I also tried to compile some .dll files containing the FFmpeg source code, but unfortunately I got the same errors as before.
I just want to create a TCP video stream to "192.168.1.1:5555". After that I want to use some decoder class/library that can turn the bytes into frames and display them in a VideoView, so the frames are shown on the smartphone.
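To make the first part concrete, here is a minimal sketch of the reading side I have in mind (written Java-style; the Xamarin C# version would use the same Socket idea, and everything except the drone's address and port is illustrative):

```java
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal sketch: connect to the AR Drone 2.0's TCP video port and pull raw bytes.
// The drone serves its H.264 video (wrapped in Parrot's PaVE headers) on port 5555.
public class DroneVideoReader {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("192.168.1.1", 5555), 5000);
        try (InputStream in = socket.getInputStream()) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                handleVideoBytes(buffer, read); // hand off to whatever decoder ends up working
            }
        }
        socket.close();
    }

    private static void handleVideoBytes(byte[] data, int length) {
        // Placeholder: this is exactly the part I am missing - decoding these bytes to frames.
    }
}
```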
Does anyone have experience with this? Or does someone know a working library for decoding the TCP video stream of the drone?
Thanks.
Good news: I just solved the problem.
There is a possibility to use FFmpeg, but you need to compile it specifically for your platform, and this is actually a little bit harder on Windows than on Ubuntu/Linux. I tried to implement a pre-compiled library into Xamarin Android, but there were errors like DllImport error loading 'libavcodec-55': 'dlopen failed: "libavcodec-55" not found', so that didn't work. Xuggle-Xuggler is a video decoder as well, but it is made for Java only and I am working in Xamarin Android, so I had to find something else.
After several weeks I found a project that uses OpenCV to decode the video stream of the drone. Its author (https://github.com/AJRdev/ARDrone-Android-GEII) implemented the video stream in two different ways: via OpenCV and via a library called "Vitamio".
What I did was try the Vitamio library, which Xamarin Android supports. There is a known Xamarin Android binding (https://components.xamarin.com/gettingstarted/vitamiobinding), but that's an old version, so I decided to use the Vitamio library found here: https://github.com/shaxxx/Xamarin.Vitamio. I am using this library because it uses an .AAR that contains the same files as the Vitamio library in the project mentioned above and, most importantly, no errors appeared :)
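For reference, the playback part boils down to very little code. Shown here in Java, since Vitamio itself is a Java library and the Xamarin.Vitamio binding mirrors the same API; the layout ids are made up, and the exact initialization call may differ between Vitamio versions:

```java
import android.app.Activity;
import android.os.Bundle;

import io.vov.vitamio.LibsChecker;
import io.vov.vitamio.widget.VideoView;

// Sketch: play the drone's TCP video stream with Vitamio's VideoView.
public class StreamActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Vitamio ships its own native decoders; this loads them (call may vary by version).
        if (!LibsChecker.checkVitamioLibs(this)) {
            return;
        }
        setContentView(R.layout.activity_stream); // hypothetical layout

        VideoView videoView = (VideoView) findViewById(R.id.video_view); // hypothetical id
        videoView.setVideoPath("tcp://192.168.1.1:5555"); // the drone's video endpoint
        videoView.setOnPreparedListener(mp -> videoView.start());
    }
}
```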
Unfortunately there is no information on the internet about using the Parrot AR Drone 2.0 with Xamarin Android. So if someone runs into this problem, you could also use the source code of the official app, "Freeflight 2.4", because that one is made specifically for Android. However, there is a lot of code in the Freeflight 2.4 app, so isolating the video stream part takes a lot of time; I did not have that time, so I chose the easier way explained above.
After the implementation you should be able to see the video on your smartphone!
Good luck!
Related
I developed an Android app that reads an .mp3 file, does some manipulation on the PCM data and writes the result to another .mp3 file. So far I have been using JLayer for decoding and LAME (with the NDK) for mp3 encoding. Everything works fine - the only issue is that JLayer is very slow (as discussed here). Thus, I would like to switch to mpg123, which is presumably much faster. For LAME there are nice Android NDK tutorials around, but for mpg123 I have not found anything. The only functionality I need is to decode an mp3 file frame by frame and obtain the data as PCM. I also checked this, this and this question, but my problem remains unsolved. As I am new to the Android NDK, I have two questions:
Do any of you know a step-by-step tutorial showing how to compile mpg123 for Android using the NDK? In particular, I do not know what the Android.mk and the wrapper.c should look like (I sketch the Java side I imagine after these questions).
Do any of you have a good alternative? In principle, the only thing I am looking for is a fast version of JLayer, because the functionality is excellent whereas the performance is poor.
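For concreteness, this is roughly the Java side I picture for question 1. All of the native names here are hypothetical - mpg123 does not expose any JNI entry points itself, so wrapper.c would have to implement them on top of mpg123's C API, and Android.mk would build the shared library:

```java
// Hypothetical Java-side declarations for an mpg123 JNI bridge.
public class Mpg123Decoder {
    static {
        // Loads libmpg123-wrapper.so, the library Android.mk would produce.
        System.loadLibrary("mpg123-wrapper");
    }

    // Open an mp3 file and return an opaque native handle (implemented in wrapper.c).
    public static native long open(String path);

    // Decode the next frame into pcmBuffer; return samples written, or -1 at end of file.
    public static native int decodeFrame(long handle, short[] pcmBuffer);

    // Release the native decoder state.
    public static native void close(long handle);
}
```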
Any help will be greatly appreciated!
I am following up on this article: TarsosDSP with Android.
I am trying to implement an android application that reads mp3 files and processes them using WEKA.
TarsosDSP seems to be a good step in the right direction, especially since the Berkley guys seem to have implemented an Android fork.
When I tried downloading their source code here: TarsosDSPAndroid Source Code
I still found a lot of references to javax.sound, which is kind of counter-productive, since javax.sound is not available on Android.
So is something mixed up with their uploaded source code or am I looking in the wrong place?
Perhaps some background on what I am trying to accomplish overall:
I am writing an Android app that will read the entire mp3 library and, using WEKA and pre-loaded test groups, classify each song into the appropriate genre.
The part of reading the mp3 library is all done, and so is the classification using WEKA; now I am stuck joining the two up. What worked fine with jAudio in a plain Java project does not work on Android because of the dependency on javax.sound, so I am trying to bypass that with a different library that works on Android.
Thanks in advance!
-Alex
Version 2.0 of TarsosDSP supports Android out of the box. There are no more dependencies on javax.sound.*, which makes it a lot easier to work with on Android. There is even a TarsosDSP Android jar file that can be included in your project directly.
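A minimal sketch of how the Android flavour wires up, assuming version 2.0's be.tarsos.dsp package layout; I use microphone input here since file input depends on your decoding pipeline, and the pitch detector just stands in for whatever feature extraction you feed into WEKA:

```java
import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.io.android.AudioDispatcherFactory;
import be.tarsos.dsp.pitch.PitchProcessor;
import be.tarsos.dsp.pitch.PitchProcessor.PitchEstimationAlgorithm;

// Sketch: attach a YIN pitch detector to the default microphone with TarsosDSP 2.0.
public class PitchDemo {
    public static void start() {
        // 22050 Hz sample rate, 1024-sample buffer, no overlap.
        AudioDispatcher dispatcher =
                AudioDispatcherFactory.fromDefaultMicrophone(22050, 1024, 0);
        dispatcher.addAudioProcessor(new PitchProcessor(
                PitchEstimationAlgorithm.YIN, 22050, 1024,
                (pitchResult, audioEvent) -> {
                    float hz = pitchResult.getPitch(); // -1 when no pitch is detected
                    // collect features like this for the WEKA classifier
                }));
        new Thread(dispatcher, "audio-dispatcher").start(); // AudioDispatcher is a Runnable
    }
}
```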
I am working on Android 2.2, and my goal is to convert a sequence of images to an mp4 video - MPEG-4 Part 14, to be exact.
The most reasonable solution would be to use the FFmpeg libraries and compile them for Android using the NDK. However, I am looking for a solution without the NDK.
I do not expect you to build this tool for me, or to find someone else who did. I have spent quite some time searching for that, and it's something that probably no one has done yet.
So the only thing I am asking is that you help me find the specs so I can build the encoder myself. I know it's not a trivial task (maybe that is why no one has done it yet), but I want to do it anyway. So please just help me get started.
I am working on an Android app that creates a video file from an initial video followed by a set of images, and saves it.
Is there any way to accomplish that?
I tried JCodec, but it has broken libraries, untrusted code floating around the web, and very little documentation.
I tried FFmpeg, but it is poorly supported on Android and involves working with the NDK.
I tried to create an animation with AnimationDrawable and save it as a video, but I can't find a way to save an animation as a video except with the KitKat 4.4 screen-recording feature, which requires connecting to a computer and having root.
Are there any other solutions, or a trusted and well-explained way to do this using the approaches above?
Thanks in advance
I would vote for FFmpeg. You don't need the NDK or other sorcery if you can afford a prebuilt solution, like FFmpeg 4 Android.
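To show the shape of that approach, here is a sketch. The wrapper classes follow the WritingMinds ffmpeg-android-java binding, so treat the exact class names and calls as an assumption (other prebuilt packages expose a similar "run this command" entry point); the ffmpeg command itself - numbered frames in, mp4 out - is standard:

```java
import android.content.Context;

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;

// Sketch: encode img001.jpg, img002.jpg, ... into out.mp4 via a prebuilt ffmpeg.
// In real use the bundled binary must be loaded once first (ffmpeg.loadBinary(...)).
public class ImagesToVideo {
    public static void encode(Context context, String dir) throws Exception {
        String[] cmd = {
                "-framerate", "25",           // 25 input images per second
                "-i", dir + "/img%03d.jpg",   // numbered input frames
                "-c:v", "mpeg4",              // software encoder, no NDK work needed
                dir + "/out.mp4"
        };
        FFmpeg.getInstance(context).execute(cmd, new ExecuteBinaryResponseHandler() {
            @Override public void onSuccess(String message) { /* video written */ }
            @Override public void onFailure(String message) { /* inspect ffmpeg output */ }
        });
    }
}
```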
I'm trying to get an RTSP video stream to play in my Android app using the built-in VideoView/MediaPlayer, but there are always various problems on different ROMs or under different network conditions (UDP packets blocked). It's really annoying, so I want to implement my own RTSP client with the live555 sources, GLES and ffmpeg. I can figure out how to use ffmpeg and GLES to show a video, but I'm not familiar with live555.
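For reference, the built-in path I am trying to replace is only a few lines (standard Android API; the URL is a placeholder), and it is exactly this stack that misbehaves when the network blocks the transport it negotiates:

```java
import android.net.Uri;
import android.widget.MediaController;
import android.widget.VideoView;

// The stock approach: hand the RTSP URL to VideoView and let the platform's
// media stack negotiate the transport - which is where the ROM-specific
// and UDP-blocked failures show up.
public class RtspPlayback {
    public static void play(VideoView videoView) {
        videoView.setVideoURI(Uri.parse("rtsp://example.com/stream")); // placeholder URL
        videoView.setMediaController(new MediaController(videoView.getContext()));
        videoView.requestFocus();
        videoView.start();
    }
}
```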
Are there any compiled versions of live555 for Android? Or how could I compile it myself?
Thanks.
I think I found sample code on GitHub; it works for me.
Bad news - I think you won't find any precompiled versions of live555, only a config/makefile structure for several platforms, and Android is not among them.
Since live555 is a pure C++ library, you will most likely have problems using it directly from Android.
jens.