I am following up on this article: TarsosDSP with Android
I am trying to implement an Android application that reads mp3 files and processes them using WEKA.
TarsosDSP seems to be a good step in the right direction, especially since the Berkeley folks seem to have implemented an Android fork.
When I tried downloading their source code here: TarsosDSPAndroid Source Code
I still found a lot of references to javax.sound, which is kind of counter-productive.
So is something mixed up in their uploaded source code, or am I looking in the wrong place?
Perhaps some background to what I am trying to accomplish overall:
I am writing an Android app that will read the entire mp3 library and, using WEKA with pre-loaded training groups, classify each song into the appropriate genre.
The part that reads the mp3 library is all done, and so is the classification using WEKA; now I am stuck joining the two up. What worked fine with jAudio in a plain Java project doesn't work on Android because of the dependency on javax.sound, so I am trying to bypass that with a different library that works on Android.
Thanks in advance!
-Alex
Version 2.0 of TarsosDSP supports Android out of the box. There are no more dependencies on javax.sound.*, which makes it much easier to work with on Android. There is even a TarsosDSP Android jar file that can be included in your project directly.
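Whichever DSP library ends up supplying the samples, the hand-off to WEKA (as described in the question) is just a vector of numeric features per song. As an illustration of that bridge, here is a minimal, library-free sketch of one classic feature, RMS energy, computed from a raw PCM buffer. The class and method names are my own, not part of TarsosDSP or WEKA.

```java
// Illustration only: whichever DSP library delivers the samples, the
// hand-off to WEKA is a fixed-length vector of numeric features per
// song. RMS energy is one such feature, computed here from a raw PCM
// float buffer with no javax.sound dependency.
public final class RmsFeature {

    /** Root-mean-square energy of a buffer of samples in [-1, 1]. */
    public static double rms(float[] samples) {
        if (samples == null || samples.length == 0) {
            throw new IllegalArgumentException("empty buffer");
        }
        double sumOfSquares = 0.0;
        for (float s : samples) {
            sumOfSquares += (double) s * s;
        }
        return Math.sqrt(sumOfSquares / samples.length);
    }
}
```

A handful of such numbers per track (RMS, zero-crossing rate, spectral centroid, ...) is exactly the kind of instance vector WEKA's classifiers consume.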
Related
I am currently using Gzip to compress attachments in Couchbase on Android. Recently bumped into Snappy, that seems to be an efficient solution, so decided to use Snappy instead of GZip.
Snappy github - https://github.com/xerial/snappy-java
What I am confused about is how to use the Snappy library in Android. I downloaded the latest version of Snappy (1.1.2.1) from http://central.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2.1/ and dropped it in the libs folder of the Android project. I can now reference the Snappy class methods in my Android source code, which made me think everything was going great so far. But when I run the app, I get the following error when I call Snappy.compress(byte[] data):
org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux and os.arch=aarch64
org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
I created a sample Java class to test Snappy, and it works great.
So, from my understanding, it's missing the native library for this platform (Linux/aarch64), but even from the description on the GitHub page I cannot figure out a way to build a jar file that I can use on Android.
Any help would be greatly appreciated.
Thanks
Looks like Linux/aarch64 support for Snappy is in the works, but not yet released:
https://github.com/xerial/snappy-java/issues/134
You could give https://github.com/dain/snappy a shot. It's a pure-Java port, so there is no need for native libraries.
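If the pure-Java port still gives trouble, note that the GZip path the question started from needs nothing beyond java.util.zip, which ships with the Android runtime (no native libraries involved). A minimal round-trip sketch; the class name is my own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Fallback sketch: GZip compression of an attachment using only
// java.util.zip, which is part of the Android runtime.
public final class GzipCodec {

    public static byte[] compress(byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams shouldn't fail
        }
        return bos.toByteArray();
    }

    public static byte[] decompress(byte[] compressed) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }
}
```

Snappy trades some compression ratio for speed, so whether the switch is worth carrying native (or pure-Java) dependencies depends on how large the attachments are.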
I'm trying to convert an XNA project to Android. I saw a video saying I need to have "opengl mono for android" as an option when creating project, but I don't.
Can anyone tell me how to download this extension?
Use MonoGame. If you don't, you will need to rewrite your game, including all the rendering logic, either in C# using Xamarin.Android or as a complete rewrite in Java.
So your best choice would be MonoGame:
monogame.net
EDIT
I just found another option on a different site; it's called ExEn. I still think MonoGame is the better choice, though, because it has many more users.
link to Exen:
http://andrewrussell.net/exen/
I would like to edit an audio file. As Java doesn't support voice libraries (to my knowledge), I would like to use the JUCE library for this.
From some resources found on Google, I came to know that this can be done using the Introjucer,
but I couldn't find proper tutorials for making Android projects with the Introjucer. Can anyone help me out with this? Please correct me if I've misinterpreted any concept.
The most useful library for audio editing is Ringdroid: https://code.google.com/p/ringdroid/
Android audio reference: http://developer.android.com/reference/android/media/AudioTrack.html is the Android API for handling audio at the lowest level.
Second, if you don't want to use Ringdroid, check out this -> Getting started with programmatic audio.
JUCE Hello World tutorial: http://jucevst.wordpress.com/2011/08/17/hello-world-with-juce-actually-making-something/
JUCE beginner tutorial: http://www.rawmaterialsoftware.com/viewtopic.php?f=13&t=10953#p61988
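Whether you go through Ringdroid, JUCE, or raw AudioTrack, audio edits ultimately come down to manipulating raw samples. As a minimal, dependency-free illustration of that kind of work, here is a sketch that applies a gain to 16-bit little-endian PCM (the byte layout used by WAV files and by AudioTrack in ENCODING_PCM_16BIT mode); the class name is a placeholder of my own:

```java
// Sketch of the sample-level work an audio edit boils down to:
// scaling 16-bit little-endian PCM by a gain factor, with clipping.
// No Android or JUCE classes involved.
public final class PcmGain {

    public static byte[] applyGain(byte[] pcm, double gain) {
        byte[] out = new byte[pcm.length];
        for (int i = 0; i + 1 < pcm.length; i += 2) {
            // Little-endian: low byte first; sign-extend via the high byte.
            int sample = (pcm[i] & 0xFF) | (pcm[i + 1] << 8);
            int scaled = (int) Math.round(sample * gain);
            // Clip to the 16-bit signed range instead of wrapping around.
            scaled = Math.max(-32768, Math.min(32767, scaled));
            out[i] = (byte) (scaled & 0xFF);
            out[i + 1] = (byte) ((scaled >> 8) & 0xFF);
        }
        return out;
    }
}
```

Trimming (what Ringdroid does for ringtones) is even simpler: it is just copying a sub-range of the sample bytes, aligned to the 2-byte frame size.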
For training and fun I want to build an android app, that is able to stream audio from one device to another. It will be a simple baby-phone app.
I've tried using GStreamer, but had some trouble including the binaries and building the Eclipse project. So now I am looking for alternatives. Does anyone know a simple one? Or is there even something in the Android API I can use? Please note: the difficult part is not receiving a stream, but providing one...
Thanks a lot in advance!
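The "providing a stream" part largely reduces to pushing raw PCM buffers over a socket. Below is a pure java.net/java.io sketch of that plumbing, which runs unchanged on Android; on the sending device the byte source would be an AudioRecord read loop (not shown here), and the class and method names are placeholders of my own:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Plumbing sketch for the "provide a stream" side: push raw PCM
// buffers over a plain TCP socket. On Android the byte source would
// be an AudioRecord loop feeding stream() repeatedly.
public final class PcmStreamer {

    /** Sender: writes one buffer straight to the socket's stream. */
    public static void stream(Socket socket, byte[] buffer, int length)
            throws IOException {
        OutputStream out = socket.getOutputStream();
        out.write(buffer, 0, length);
        out.flush();
    }

    /** Receiver helper: blocks until exactly n bytes have arrived. */
    public static byte[] readFully(InputStream in, int n) throws IOException {
        byte[] data = new byte[n];
        int off = 0;
        while (off < n) {
            int read = in.read(data, off, n - off);
            if (read == -1) throw new IOException("stream closed early");
            off += read;
        }
        return data;
    }
}
```

For a baby-phone app, raw PCM over TCP on a local Wi-Fi network is usually good enough; compression and lossy transports only matter once bandwidth becomes a constraint.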
I'm looking to write an application that combines images to form a video. I'm working with the PhoneGap framework so it can be used on both Android and iOS.
My question is what sort of process is involved to achieve this?
At this stage I've tried to read about ffmpeg; most of the existing questions on Stack Overflow talk of having to get the source and compile it to produce a set of libraries. Do those libraries then need to be tied in with the Android/iOS libraries? (I notice there is an 'android.jar' with the project file in Eclipse. Would it exist in there?) After that, my confusion lies with how this is implemented in PhoneGap. By developing a plugin?
Just to add: according to Wikipedia, libav has hardware-accelerated H.264 decoding while using x264 for encoding on Android. How does that work? Is this something accessed from the libav libraries that then has to be compiled in within the android.jar?
I may have confused terms in trying to describe what I do not know.
Any help would be appreciated.
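However it ends up wired into PhoneGap, one common route is for the plugin's native Java side to shell out to an ffmpeg binary bundled with the app, rather than linking the libav libraries into the jar. A hedged sketch of what that hypothetical plugin code might look like; the binary path, file names, and class name are placeholders of my own, and the ffmpeg arguments are the standard image-sequence recipe:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the native side of a hypothetical PhoneGap/Cordova plugin:
// run an ffmpeg binary bundled with the app and wait for it to finish.
public final class VideoEncoder {

    /** Runs the given command and returns its exit code. */
    public static int run(List<String> command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true) // merge stderr into stdout
                .start();
        // Drain the output so the child can't block on a full pipe.
        byte[] buf = new byte[4096];
        while (p.getInputStream().read(buf) != -1) { /* discard */ }
        return p.waitFor();
    }

    /** ffmpeg invocation for stitching img001.jpg, img002.jpg, ... */
    public static List<String> ffmpegCommand(String ffmpegPath, String outFile) {
        List<String> cmd = new ArrayList<>();
        cmd.add(ffmpegPath);
        cmd.add("-framerate"); cmd.add("1");      // one input image per second
        cmd.add("-i"); cmd.add("img%03d.jpg");    // numbered image sequence
        cmd.add("-c:v"); cmd.add("libx264");      // H.264 encode via x264
        cmd.add("-pix_fmt"); cmd.add("yuv420p");  // widely decodable pixel format
        cmd.add(outFile);
        return cmd;
    }
}
```

The JavaScript side of the plugin then just passes the image paths across the Cordova bridge and waits for the callback; the heavy lifting stays in the native process.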