I've been experimenting with an Android app for the past week or two and have reached a point where I need to decide on a general plan for the code I'm writing.
I started with SoundPool, which is easy to use but not very flexible. Then I went on to AudioTrack, which seems capable but slow.
So now I'm looking at the NDK.
Does the NDK have direct access to AudioTrack, or something else?
What is the general consensus on this kind of thing?
My guess is to build the UI in Java and the 'sound engine' in C++.
I'd like to get started on the right track before I write too much.
edit:
It is a music studio that plays and manipulates WAV files from the SD card, and also does realtime sound synthesis.
The Android SDK doesn't offer a fully featured audio processing solution like Core Audio on iOS. The available features are exposed through the OpenSL ES interface. The availability of any audio processing feature depends on the device manufacturer's Android flavor and configuration, so you cannot rely on them.
To wit, the infamous Android fragmentation problem is even bigger in audio.
Even worse, if a device reports an audio feature available, it may still not work properly. For example: audio playback rate on a Samsung Galaxy Note III.
Realtime sound synthesis? None of the Android SDK’s features are designed for that.
The best way is to do the UI in Java and handle the sound in C++.
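To make the Java-UI/C++-engine split concrete, here is a minimal, illustrative sketch of the C++ side, assuming the platform layer (e.g. an OpenSL ES buffer-queue callback on Android) calls render() once per audio buffer. The class name and interface are my own assumptions, not part of any real SDK:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

constexpr float kTwoPi = 6.28318530718f;

// Hypothetical "sound engine" core: a sine oscillator whose render()
// would be driven from the platform audio callback, filling one
// buffer of mono float samples per call.
class SineVoice {
public:
    SineVoice(float frequencyHz, float sampleRate)
        : phase_(0.0f), increment_(kTwoPi * frequencyHz / sampleRate) {}

    // Fill `buffer` with `frames` mono float samples in [-1, 1].
    void render(float* buffer, int frames) {
        for (int i = 0; i < frames; ++i) {
            buffer[i] = std::sin(phase_);
            phase_ += increment_;
            if (phase_ >= kTwoPi) phase_ -= kTwoPi;  // keep phase bounded
        }
    }

private:
    float phase_;      // current oscillator phase in radians
    float increment_;  // phase advance per sample
};
```

In this arrangement the Java UI would only send parameter changes (frequency, gain, note on/off) across JNI; all per-sample work stays on the C++ side, which is what keeps it realtime-safe.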
There are some 'audio engine' offerings on the web, but many of them are just "wrappers" around the same thing.
As a cofounder of Superpowered, allow me to recommend the Superpowered Audio SDK, a complete audio processing solution designed for realtime use and the highest performance, without any dependency on Android's audio offerings, and it runs on all devices.
http://superpowered.com/
There are a ton of audio latency issues in Android, and there's really not much that can be done about them. From what I've read, it seems like ICS (4.0) may have brought some improvements.
You could subscribe to Andraudio; you'd actually be better off directing Android audio questions to their mailing list than to Stack Overflow:
http://music.columbia.edu/mailman/listinfo/andraudio
Related
I am developing an app for kids to learn to compose music, similar to a drum pad machine.
Is it possible to play multiple audio files simultaneously, with the minimum possible delay between them (like in Audacity), on Android and iOS?
I have already checked nearly all the related questions on Stack Overflow (and Google). But the posts are old (2016, 2017, ...), and it seemed that playing sounds simultaneously was difficult back then. Maybe now, in 2019, it is easier to do.
As far as I know, it is possible to use SoundPool (but it is limited to 1 MB file size and I need more than that) and MediaPlayer. About MediaPlayer, I could not find much information or many tutorials.
Also, there is the new Flutter framework. Is it possible to do this in Flutter? That would be great, since the same code could run on Android and iOS.
For flutter, you should try this resource: https://pub.dev/packages/audioplayers
It supports playing multiple audio files, preloading them, and playing them with as little delay as possible.
I have a NativeScript app that targets Android and iOS and produces audio, but I want to be able to play concurrent audio files with low latency, and at the moment I'm using nativescript-audio, which doesn't have either of those features.
I found EZAudio which claims to have low latency but it's only for iOS and I want something for Android as well.
I also found a nativescript-audio issue requesting those features but it's remained open for a while.
I've seen some JavaScript audio libraries like howler.js, but I'm not sure how to use them in a NativeScript app.
Any help would be appreciated.
I want to create an Android app that plays multiple mp3s simultaneously, with precise sync (less than 1/10 of a second off) and independent volume control. Size of each mp3 could be over 1MB, run time up to several minutes. My understanding is that MediaPlayer will not do the precise sync, and SoundPool can't handle files over 1MB or 5 seconds run time. I am experimenting with superpowered and may end up using that, but I'm wondering if there's anything simpler, given that I don't need any processing (reverb, flange, etc.), which is superpowered's focus.
Also ran across the YouTube video on Android high-performance audio, from Google I/O 2016. Wondering if anyone has any experience with this.
https://www.youtube.com/watch?v=F2ZDp-eNrh4
Superpowered was originally made for my DJ app (DJ Player in the App Store), where precisely syncing multiple tracks is a requirement.
Therefore, syncing multiple mp3s and independent volume control is definitely possible and core to Superpowered. All you need is the SuperpoweredAdvancedAudioPlayer class for this.
The CrossExample project in the SDK has two players playing in sync.
The built-in audio features in Android are highly device and/or build dependent. You can't get a consistent feature set with those. In general, the audio features of Android are not stable. That's why you need a specialized audio library which does everything "inside" your application (so is not a "wrapper" around Android's audio features).
When you are playing compressed files (AAC, MP3, etc.) on Android, in most situations they are decoded in hardware to save power, except when the output goes to a USB audio interface. The hardware codec accepts data in big chunks (again, to save power). Since it's not possible to issue a command to start playing multiple streams at once, what will often happen is that one stream has already sent a chunk of compressed audio to the hardware codec and started playing, while the others haven't yet sent their data.
You really need to decode these files in your app and mix the output yourself to produce a single audio stream; then you can guarantee the desired synchronization. The built-in mixing facilities are mostly intended to let multiple apps share the same sound output; they are not designed for multitrack mixing.
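The decode-and-mix approach can be sketched in a few lines. This is a minimal illustration (function name and C++17 assumed, not any particular SDK's API) that sums already-decoded float PCM tracks with per-track gains and hard-clips the result. Because every track goes through the same loop, sample N of each input lands at sample N of the output, which is exactly what gives sample-accurate sync:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Mix already-decoded float PCM tracks into one stream.
// `gains[t]` is the independent volume for track t.
std::vector<float> mixTracks(const std::vector<std::vector<float>>& tracks,
                             const std::vector<float>& gains) {
    std::size_t longest = 0;
    for (const auto& t : tracks) longest = std::max(longest, t.size());

    std::vector<float> out(longest, 0.0f);
    for (std::size_t t = 0; t < tracks.size(); ++t)
        for (std::size_t i = 0; i < tracks[t].size(); ++i)
            out[i] += gains[t] * tracks[t][i];

    // Hard clip to [-1, 1]; a real mixer would use a limiter instead.
    for (float& s : out) s = std::clamp(s, -1.0f, 1.0f);
    return out;
}
```

The single mixed stream can then be handed to one audio output, sidestepping the per-stream start-time skew of the hardware codec path entirely.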
I'm writing a small call-recording application for my rooted phone.
I'm looking for a way to record audio directly from the Android system, or at as low a level as possible, without using the standard MediaRecorder or AudioRecord APIs.
How can I do this? I can't find any good example of it, although I see it implemented in many apps (that require root).
My team and I are nearly done developing a music application for iPhone and Android that allows users to create their own music, built by playing and overlapping sampled sounds (up to 16 at a time). We are looking for a way to allow users to share these songs by embedding an audio player in our website which will (like the Android and iPhone applications already do) take the songs, which are expressed as a string representing pitch, duration, start time, and instrument, and convert them into a single playable audio file (any format).
We have looked into SoundManager 2 and WebAudio, and have run into the same problem with both: stopping sounds creates beeping or popping sounds that cannot be removed. Does anyone know of another framework or API that we should look into? A little googling also made sfplay stand out, but there isn't very much documentation on it. Any other suggestions?
Thanks!
There are still a lot of problems with JavaScript/HTML5 audio. WebAudio is very powerful but not very portable; SoundManager 2 is less powerful but more portable. You should not hear clicks or discontinuities with either library unless you are doing something wrong, but there are problems with synchronization and so on in browser-based audio.
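The click on stop typically comes from cutting a waveform off at a non-zero sample. A generic fix, whatever the playback framework: ramp the signal to zero over a few milliseconds before stopping. A minimal sketch of the idea (function name is my own, shown in C++ for illustration; the same arithmetic applies in any language):

```cpp
#include <cassert>
#include <vector>

// Apply a short linear fade to the tail of a buffer so the signal
// reaches zero before playback stops; stopping at a non-zero sample
// is what produces the audible click or pop.
void fadeOutTail(std::vector<float>& samples, int fadeFrames) {
    const int n = static_cast<int>(samples.size());
    if (n == 0 || fadeFrames < 2) return;
    if (fadeFrames > n) fadeFrames = n;
    const int start = n - fadeFrames;
    for (int i = start; i < n; ++i) {
        // Gain ramps from 1.0 at the start of the fade to 0.0 at the end.
        const float gain = static_cast<float>(n - 1 - i) /
                           static_cast<float>(fadeFrames - 1);
        samples[i] *= gain;
    }
}
```

At 44.1 kHz, a 5 ms fade is only about 220 frames: short enough not to be heard as a fade, long enough to remove the discontinuity.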
That's why most people doing "serious" audio on the web use Java applets or Flash, which won't work on mobile devices.