Sound Analysis/Editing in Android

I'd like to know how difficult it is to analyze and edit sound in Android. My project would be a kind of DJ application.
I think AudioTrack is the most appropriate class, right? How do the few Android DJ apps work to display the spectrum of the sound, apply effects, change the speed, mix, etc.? Do they use a more powerful external sound library?
As for performance, is Java fast enough (or is there a risk of latency)?
And a last question: how did Korg port its synth to the iPad, how was Wormux ported to Android, and how does Angry Birds now run on both iPhone AND Android? All these apps seem to be ported to every platform without being rewritten. How do they do it? Is it because they are written in native code? Should I consider this option so I can use other sound libraries?
Thanks.

For purposes of editing sound or applying sound effects to a stream, you can use the AudioTrack class in conjunction with one of the AudioEffect subclasses from the android.media.audiofx package, e.g. Equalizer.
In order to analyse sound, you should most probably apply an FFT algorithm to the bytes you retrieve from AudioRecord or some Reader.
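To illustrate the analysis side: once you have a buffer of 16-bit PCM samples (e.g. read from AudioRecord), a Fourier transform gives you the magnitude spectrum. The sketch below uses a naive DFT for clarity; a real app would use an optimized FFT library (such as JTransforms), and the class name here is illustrative.

```java
// Minimal sketch: magnitude spectrum of 16-bit PCM samples via a naive DFT.
// A real app would read the samples from AudioRecord and use an optimized
// FFT; this O(n^2) loop is only meant to show what the math looks like.
public class SpectrumSketch {
    // Returns the magnitude of each frequency bin for the given samples.
    public static double[] magnitudeSpectrum(short[] samples) {
        int n = samples.length;
        double[] mags = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mags[k] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }

    public static void main(String[] args) {
        // 64 samples of a sine that completes exactly 4 cycles in the window,
        // so its energy lands in bin 4.
        int n = 64;
        short[] buf = new short[n];
        for (int t = 0; t < n; t++) {
            buf[t] = (short) (10000 * Math.sin(2 * Math.PI * 4 * t / n));
        }
        double[] mags = magnitudeSpectrum(buf);
        int peak = 0;
        for (int k = 1; k < mags.length; k++) if (mags[k] > mags[peak]) peak = k;
        System.out.println("peak bin = " + peak); // prints "peak bin = 4"
    }
}
```

A spectrum display would map each bin `k` to the frequency `k * sampleRate / n` and draw the magnitudes as bars.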

Related

How to create a multi track player (like a dj mixer) for iOS and Android?

I would like to create a multi-track player (like a DJ mixer) for a website.
I've tried to create it with HTML5, but it doesn't work in Safari on iOS (iPad, iPhone) because it's not possible to play several sounds at the same time with <audio>:
<audio src='audio/son-AMBIANCE.mp3' id="yourAudio" preload='auto' autoplay loop></audio>
<audio src='audio/son-MUSIQUE.mp3' id="yourAudio" preload='auto' autoplay loop></audio>
<audio src='audio/son-VOIX.mp3' id="yourAudio" preload='auto' autoplay loop></audio>
On iOS, only the first mp3 is played.
So I'm thinking about this solution: creating an app.
Do you think it's possible to create a DJ mixer in Objective-C? Or, even better for me, create a DJ mixer with Flash and export it for Android and iOS (better for me because I know ActionScript 3)?
Do you think it is a good idea, and is it technically feasible?
You can create an AIR application with the latest Gaming SDK from Adobe, which works well on Android and iOS. It will give you everything you need to create a smooth DJ-mixer sort of application and play multiple sound files at once. However, if you display any very long lists, some users might complain about their sluggishness (over 30 items).
You can also create the application natively on iOS and, provided you use the correct file formats, play multiple layers of audio.
I recommend using AIR if the application is visually simple, with only a few objects in a list displayed at a time, and Objective-C for iOS or Java for Android if you have very heavy animations or extremely large lists of songs.
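Whatever platform ends up being used, the core of a multi-track mixer is the same: decode each track to PCM and sum the samples, clamping the result to avoid wrap-around distortion. A minimal Java sketch (class and method names are illustrative):

```java
// Minimal sketch: mixing several 16-bit PCM tracks into one output buffer,
// with hard clipping. In a real app the track data would come from decoded
// audio files; here it is just arrays of samples.
public class MixerSketch {
    public static short[] mix(short[][] tracks) {
        int len = 0;
        for (short[] t : tracks) len = Math.max(len, t.length);
        short[] out = new short[len];
        for (int i = 0; i < len; i++) {
            int sum = 0; // sum in int so it cannot overflow 16 bits silently
            for (short[] t : tracks) if (i < t.length) sum += t[i];
            // Clamp to the 16-bit range to avoid wrap-around distortion.
            out[i] = (short) Math.max(Short.MIN_VALUE,
                                      Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```

A real mixer would also apply a per-track gain before summing; hard clipping is the crudest possible limiter and a DJ app would want something gentler.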

How to play mp3+g on mediaplayer android

I know we can play an mp3 file with MediaPlayer, but can we play mp3+g on Android?
I looked in the Android documentation, but didn't see it listed:
http://developer.android.com/guide/appendix/media-formats.html
Is there any work around or library to do this?
Thanks
I don't think Android is going to support mp3+g playback anytime soon. That being said, an mp3+g "file" should be either one zipped file (with two files inside) or two separate files with the same name except for the file extension. So other than playing the MP3, there is really nothing else MediaPlayer can do, and changing MediaPlayer in the Android framework to make this work would not be portable from device to device.
Workaround 1
Use FFMPEG to transcode and mux these files to a different format that is supported such as mp4. Here is an example of someone using ffmpeg to mux mp3+g into FLV.
Workaround 2
Another option would be to use VLC for Android, which is in pre-alpha (found here). Now, I'm not sure that VLC for Android will support mp3+g, but libvlc does support decoding the two files, so I'm guessing it would work, or you could alter the code a bit to get it working. I have checked out the VLC for Android code recently and I have to say it's a CPU hog, but since mp3 and cdg files are generally small and not very CPU-intensive, I think Android devices could handle the workload using VLC.
Workaround 3
As for more complex options, you could use the Android NDK and create a decoder yourself (this would take you a lot of time).
Hope some of this helps you.
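Since an mp3+g "file" is often just a zip containing an .mp3 and a .cdg, the first step of any of these workarounds is splitting it, so the .mp3 can go to MediaPlayer and the .cdg to whatever renders the graphics. A minimal sketch using java.util.zip (class name and entry names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Minimal sketch: read every entry of an mp3+g zip into memory so the .mp3
// can be handed to MediaPlayer and the .cdg to a custom renderer.
public class Mp3gSketch {
    public static Map<String, byte[]> unpack(InputStream zipStream) throws IOException {
        Map<String, byte[]> entries = new HashMap<>();
        try (ZipInputStream zin = new ZipInputStream(zipStream)) {
            ZipEntry e;
            byte[] buf = new byte[4096];
            while ((e = zin.getNextEntry()) != null) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                int n;
                while ((n = zin.read(buf)) > 0) out.write(buf, 0, n);
                entries.put(e.getName(), out.toByteArray());
            }
        }
        return entries;
    }
}
```

On Android you would write the extracted .mp3 to a temp file (or use a file descriptor) rather than keeping a large track in memory.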
I have found a solution:
http://code.google.com/p/cdg-toolkit/
It is written in Java, so you would have to port it to Android first if you want to use it.

Walkie Talkie without SIP

I am trying to build a kind of walkie-talkie in Android. I did it using the AudioRecord and AudioTrack classes, but these are not suitable for transmitting raw PCM data. I would like to use another codec, like AMR, which consumes less bandwidth. Can you tell me how to implement this, i.e. which classes or methods to use to convert PCM to another codec?
Many thanks for your support.
There is no built-in solution for this one in Android, and you will have to bring your own.
Such a codec will usually run in the NDK, so some stitching using JNI will be required as well.
There should be several such libraries available, both free and paid.
Search Google for an AMR codec. Focus on those that run on ARM CPUs or are specific to Android.
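For a sense of the savings: 8 kHz, 16-bit mono PCM is 128 kbit/s on the wire, while AMR-NB's highest mode is 12.2 kbit/s, roughly a tenfold reduction before you even count packet overhead. As back-of-the-envelope arithmetic:

```java
// Back-of-the-envelope comparison of raw PCM vs. AMR-NB bandwidth.
public class BandwidthSketch {
    // Raw PCM bit rate: every sample is sent as-is.
    public static int pcmBitsPerSecond(int sampleRateHz, int bitsPerSample, int channels) {
        return sampleRateHz * bitsPerSample * channels;
    }

    public static void main(String[] args) {
        int pcm = pcmBitsPerSecond(8000, 16, 1); // 128000 bits/s for telephone-quality PCM
        int amr = 12200;                         // AMR-NB's highest mode, 12.2 kbit/s
        System.out.println(pcm / amr);           // prints 10
    }
}
```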

android sound app sdk or ndk?

I've been experimenting with making an android app for the past week or two and have reached a point where I need to decide on a general plan of action concerning the code I'm writing.
I started with SoundPool: easy to use, but not very flexible. I went on to AudioTrack: it seems good, but slow.
So now I'm looking at the NDK.
Does the NDK have direct access to AudioTrack? Or something else?
What is the general consensus on this kind of thing?
My guess is to build the UI in Java and the 'sound engine' in C++.
I'd like to get started on the right track before I write too much.
edit:
It is a music studio that plays and manipulates wav files from the sdcard as well as realtime sound synthesis.
The Android SDK doesn't offer a fully featured audio processing solution like Core Audio on iOS. The available features are exposed through the OpenSL ES interface. Whether any given audio processing feature is available depends on the device manufacturer's Android flavor and configuration, so you cannot rely on them.
To wit, the infamous Android fragmentation problem is even bigger in audio.
Even worse, if a device reports an audio feature available, it may still not work properly. For example: audio playback rate on a Samsung Galaxy Note III.
Realtime sound synthesis? None of the Android SDK’s features are designed for that.
The best way is doing the UI in Java and dealing with sound in C++.
There are some 'audio engine' offers on the web, but many of them are just “wrappers” around the same thing.
As cofounder of Superpowered, allow me to recommend the Superpowered Audio SDK, a complete audio processing solution designed for real-time, high-performance use, without any dependency on Android's audio offerings, and which runs on all devices.
http://superpowered.com/
There are a ton of audio latency issues in Android, and there's really not much that can be done about them. It seems ICS (4.0) may have brought some improvements, from what I've read.
You could subscribe to Andraudio; you'd actually be better off directing Android audio questions to their mailing list than to Stack Overflow:
http://music.columbia.edu/mailman/listinfo/andraudio
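Whichever layer ends up doing the work, realtime synthesis reduces to repeatedly filling small PCM buffers. Below is a minimal Java sketch of that inner loop; on Android the filled buffer would be passed to AudioTrack.write() or an OpenSL ES buffer-queue callback, and the same loop ports almost verbatim to C++ (class and method names are illustrative):

```java
// Minimal sketch: filling a 16-bit PCM buffer with a sine tone, the core of
// a realtime synth loop. On Android the buffer would go to AudioTrack.write()
// or an OpenSL ES buffer queue; in C++ the same loop looks nearly identical.
public class SynthSketch {
    // Fills buf with a sine wave and returns the phase to pass into the next
    // call, so consecutive buffers join without an audible click.
    public static double fillSine(short[] buf, double freqHz, int sampleRate,
                                  double phase, double amplitude) {
        double step = 2 * Math.PI * freqHz / sampleRate;
        for (int i = 0; i < buf.length; i++) {
            buf[i] = (short) (amplitude * Short.MAX_VALUE * Math.sin(phase));
            phase += step;
        }
        return phase;
    }
}
```

The key design point for low latency is that this fill must complete faster than the buffer's duration, every time; that is why the answer above recommends keeping the engine out of Java's garbage-collected hot path.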

Sound effect mixing with OpenSL on Android

I'm currently implementing a sound effect mixing on Android via OpenSL. I have an initial implementation going, but I've encountered some issues.
My implementation is as follows:
1) For each sound effect I create several AudioPlayer objects (one for each simultaneous sound) that use an SLDataLocator_AndroidFD data source, which in turn refers to an OGG file. For example, if I have a gun-firing sound (let's call it gun.ogg) that is played in rapid succession, I use around 5 AudioPlayer objects that refer to the same gun.ogg audio source and the same output mix object.
2) When I need to play that sound effect, I search through all the AudioPlayer objects I created and find one that isn't currently in the SL_PLAYSTATE_PLAYING state and use it to play the effect.
3) Before playing a clip, I seek to the start of it using SLPlayItf::SetPosition.
This is working out alright so far, but there is some crackling noise that occurs when playing sounds in rapid succession. I read on the Android NDK newsgroup that OpenSL on Android has problems with switching data sources. Has anyone come across this issue?
I'm also wondering if anyone else seen or come up with a sound mixing approach for OpenSL on Android. If so, does your approach differ from mine? Any advice on the crackling noise?
I've scoured the internet for OpenSL documentation and example code, but there isn't much out there with regards to mixing (only loading which I've figured out already). Any help would be greatly appreciated.
This is probably not the best approach (creating many instances of audio players). Unfortunately the Android version (2.3) of OpenSL ES doesn't support SLDynamicSourceItf. Which would be similar to OpenAL's source binding interface. One approach would be to create multiple stream players. You would then search for a stream player that isn't currently playing and start streaming your sound effect to it from memory. It's not ideal but it's doable.
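The idle-player search from the question can be isolated into a small pool class, whether the voices are OpenSL audio players or the streaming players suggested above. A minimal sketch, with a plain Java interface standing in for the real player handle (all names are illustrative):

```java
import java.util.List;

// Minimal sketch of a "find an idle voice" pool. Voice stands in for a real
// OpenSL ES player (or AudioTrack); isPlaying() would check the play state
// and play() would seek to 0 and set SL_PLAYSTATE_PLAYING.
public class VoicePool {
    public interface Voice {
        boolean isPlaying();
        void play();
    }

    private final List<Voice> voices;

    public VoicePool(List<Voice> voices) { this.voices = voices; }

    // Returns true if a free voice was found and triggered; false means every
    // voice is busy and the effect is dropped (a real engine might instead
    // steal the oldest voice).
    public boolean trigger() {
        for (Voice v : voices) {
            if (!v.isPlaying()) {
                v.play();
                return true;
            }
        }
        return false;
    }
}
```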
You should probably not use the ogg format for sound effects either. You're better off with WAV (PCM) as it won't need to be decoded.
Ogg is fine for streaming background music.
