I wrote an iPhone app some time ago that creates sound programmatically. It uses an AudioQueue to generate sound. With the AudioQueue, I can register for a callback whenever the system needs sound, and respond by filling a buffer with raw audio data. The buffers are small, so the sound can respond to user inputs with reasonably low latency.
I'd like to do a similar app on Android, but I'm not sure how. The MediaPlayer and SoundPool classes seem to be for playing canned media from files, which is not what I need. The JetPlayer appears to be some sort of MIDI playback engine.
Is there an equivalent to AudioQueue in the Android Java API? Do I have to use native code to accomplish what I want?
Thanks.
With the AudioQueue, I can register for a callback whenever the system needs sound, and respond by filling a buffer with raw audio data.
The closest analogy to this in Android is AudioTrack. Rather than the callback (pull) mechanism you are using, AudioTrack is more of a push model, where you keep writing to the track (presumably in a background thread) using blocking calls.
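A minimal sketch of that push model, assuming a 44.1 kHz mono stream generated on a background thread (the sine-wave fill is just a stand-in for whatever your app synthesizes):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Sketch: stream a generated sine wave through AudioTrack in MODE_STREAM.
// write() blocks until the track has room, which paces the loop.
int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();

short[] buf = new short[minBuf / 2];
double phase = 0, step = 2 * Math.PI * 440 / sampleRate;
while (!Thread.interrupted()) {
    for (int i = 0; i < buf.length; i++) {
        buf[i] = (short) (Math.sin(phase) * Short.MAX_VALUE);
        phase += step;
    }
    track.write(buf, 0, buf.length);  // blocking call; run on a worker thread
}
track.release();
```

Keeping the buffer near the minimum size keeps latency low, at the cost of more frequent writes.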
Related
How can I play background audio, in Android, without interrupting the MediaPlayer playback, by either using MediaPlayer (preferred) or OpenSL ES?
I know SoundPool is able to play sound effects without interrupting any MediaPlayer playback, but the size is limited to 1M per effect, which is far too little. Not requesting audio focus via AudioManager doesn't seem to work either; in that case the audio doesn't play at all.
And in the case of OpenSL ES, all audio generally stops when I start to play a longer asset file. It's similar to the behaviour of SoundPool described above.
Edit from the comments:
I don't want to interrupt other music players; it's the background audio of a game, which should play without interrupting, for example, a player's own music. Games like Subway Surfer, Clash Royale and such seem to have achieved this somehow, but I could not achieve it via OpenSL ES or MediaPlayer.
To play sound in the background you can use SoundPool, AudioTrack, or OpenSL ES.
SoundPool: Use small files and make a sequence. In my last project I used 148 small sound files in different scenarios and played them using SoundPool: make a list and play them one by one or in parallel. Also, in games you usually have a small set of sounds for a particular scenario, and they loop; SoundPool is best for that. You can also apply effects such as rate change. OGG files are also very small, so use them.
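A minimal SoundPool sketch along those lines (the resource ID `R.raw.jump` and the `context` reference are placeholders; the rate argument is the rate-change effect mentioned above):

```java
import android.media.AudioManager;
import android.media.SoundPool;

// Sketch: load a small OGG effect and play it with a rate change.
SoundPool pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
final int jumpId = pool.load(context, R.raw.jump, 1);  // placeholder resource

pool.setOnLoadCompleteListener((soundPool, sampleId, status) -> {
    if (status == 0) {
        // args: leftVol, rightVol, priority, loop (-1 = forever), rate (0.5..2.0)
        soundPool.play(jumpId, 1f, 1f, 1, 0, 1.5f);
    }
});
```

Loading is asynchronous, which is why the play call sits in the load-complete listener.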
AudioTrack: You can feed raw PCM data to an AudioTrack. If you want, you can extract PCM data from almost any format using MediaExtractor. That will be a little more work in your case, but it is really good (it supports huge amounts of data, even live streaming).
OpenSL ES: Android apparently uses OpenSL ES under the hood for most of its audio, so using it will help you a lot. But it's not easy to get everything done with it; you need to learn a lot even for small tasks.
I have been working intensively with OpenSL ES for about 20 days, and I would still say: for small purposes use SoundPool, for medium- to high-level implementations use AudioTrack, and for a kickass implementation use OpenSL ES.
PS: It makes for a bad user experience if you play game sound in the background while users are playing their own music or are on a call. Personal experience.
Looking for some help with audio playback on Android. We have an OpenGL app (Java + C++) and now we want to play sound effects. Players should be able to modify playback rate and volume during playback.
It might be OpenSL ES or AudioTrack.
First question: is there any free or commercial library/wrapper that can do this (Java or native)?
If not, I'll explain what we've made so far and the problems we're experiencing.
We created a MusicPlayer class (extending AsyncTask) with an AudioTrack instance. In the activity's onResume() we create five instances of it and execute them on a thread pool. In the task's doInBackground() we have a running loop that checks for state changes, loads files, and writes to the buffer. In JNI we have a singleton that stores an event queue and sends the events to Java once every 10 milliseconds. It somehow works, but is far from acceptable. We experience the following problems:
When a file starts to play we can hear a short noise, like a click.
Even if we flush or release the AudioTrack, the queued sound still seems to play (especially when we need to change buffers quickly).
We can't create a loop in MODE_STREAM
When we modify the AsyncTask's local variables CHANGE_RATE and RATE, audioTrack.setPlaybackRate(RATE) gets called, but nothing happens.
I used to write in Obj-C for iOS, where there are plenty of "ready-to-use" solutions (e.g. cocoacontrols). I never thought dealing with sound on Android would be such a nightmare. Any help will be highly appreciated!
Android 4.0 and higher no longer supports playback rate control. I'm not sure what the reason for disabling it was. I had to adapt the SOLA algorithm implemented by the SoundTouch library to add playback rate control to my app.
If you want to use it, you need to write a player using OpenSL ES. There are a lot of working examples around. Take the algorithm and use it to modify the PCM stream before sending it to the output sink.
Update 22.01.2016: Android appears to include the Sonic library in the 6.0 release, so playback rate control should be available starting from Android M.
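A sketch of what that rate control looks like on Android M+, assuming `player` is an already-prepared MediaPlayer (PlaybackParams requires API 23):

```java
import android.media.MediaPlayer;
import android.media.PlaybackParams;
import android.os.Build;

// Sketch: on Android 6.0+ the framework exposes time-stretching
// through PlaybackParams, so no external library is needed.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    PlaybackParams params = player.getPlaybackParams();
    params.setSpeed(1.5f);  // 1.5x speed; pitch is controlled separately
    player.setPlaybackParams(params);
}
```

AudioTrack gained the same `setPlaybackParams` method in API 23, so the approach applies to raw PCM playback as well.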
Is there any way to intercept or just-read the audio output in android device?
I need to read the whole audio output in PCM from inside myActivity, including a media player application in the background, voice from calls, MediaPlayer instances inside myActivity, etc.: everything that's going to be played through the speakers. Actually, being able to read them separately would be great as well.
I tried AudioRecord, passing every constant found in MediaRecorder.AudioSource as the audioSource parameter, with no luck. Should I try different audio sources?
Is this such a low-level task that it has to be handled in the native layer?
The Visualizer class is helpful. I use it to play back currently playing audio here.
This audio comes in very low quality, however, so it's only really good for visualization.
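A sketch of that approach: attaching a Visualizer to audio session 0 captures the global output mix (this needs the RECORD_AUDIO permission, and the waveform is 8-bit mono, which is the low quality mentioned above):

```java
import android.media.audiofx.Visualizer;

// Sketch: capture the output mix as 8-bit waveform data.
Visualizer visualizer = new Visualizer(0);  // session 0 = global output mix
visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
visualizer.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    @Override
    public void onWaveFormDataCapture(Visualizer v, byte[] waveform, int rate) {
        // waveform holds unsigned 8-bit PCM of whatever is currently playing
    }
    @Override
    public void onFftDataCapture(Visualizer v, byte[] fft, int rate) { }
}, Visualizer.getMaxCaptureRate() / 2, true, false);
visualizer.setEnabled(true);
```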
I'm writing my first Android app, trying to play back two 10-minute sound files synchronously (imagine an instrumental track and an a cappella track), and to be able to change the volume of each track independently. I am using two MediaPlayers for this, since SoundPool is targeted at shorter audio samples, as far as I've read.
Now my problem is that when pausing and resuming playback, the players are sometimes no longer in sync, even though I set their positions to the same value before resuming.
I know that this is kind of inevitable, because they cannot be started at exactly the same moment and they may require different amounts of time for starting playback, but: Is there maybe any other approach to meet my requirements?
You can take a look at JetPlayer; it may accomplish what you want as far as synchronization.
To use it, you author your audio channels (your instrument channel and vocal channel) as MIDI tracks in a JET file, and the player keeps them synchronized while allowing you to mute or unmute the different channels as appropriate.
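A sketch of the JetPlayer calls involved, assuming a .jet file authored with the JetCreator tool (the file path, segment number, and track index are placeholders):

```java
import android.media.JetPlayer;

// Sketch: load a JET file and play it with one track muted.
JetPlayer jet = JetPlayer.getJetPlayer();
jet.clearQueue();
jet.loadJetFile("/sdcard/music.jet");  // placeholder path
// args: segmentNum, libNum (-1 = none), repeatCount, transpose, muteFlags, userID
jet.queueJetSegment(0, -1, 0, 0, 0, (byte) 0);
jet.play();
jet.setMuteFlag(1, true, true);  // mute track 1, taking effect immediately
```

Because the tracks live in one JET segment, muting a track never drifts relative to the others.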
The user guide for creating JET resources can be found here.
I have some design questions that I want to discuss with people interested in helping me. I am planning to develop a simple VoIP program that allows two Android phones in the same network to use VoIP. My goal is simply to capture sound, send the data with UDP, receive UDP data and play sound.
My current design is to have 2 threads: one captures the microphone and sends the data; the other one receives bytes and plays them.
I started implementing this using MediaPlayer and MediaRecorder. The issue that came up is how to record and play the sound: do I need to go through a file (which seems slow), or is there any way to have the recording sent straight to my UDP socket?
Basically, I wonder if I have to record to a file, then to be able to play it, or if I could just pass a socket (for recording and playing).
Does anyone have any suggestions?
Thank you very much
MediaRecorder needs an FD, so you can use sockets as well. I don't see any issues with that; it all depends on how you design your system.
Don't use those classes for streaming audio - use AudioTrack and AudioRecord instead.
They provide the functionality you need for playing and recording raw audio data, without dealing with an FD.
When you record a frame (either byte[] or short[]), wrap it with a UDP packet.
When you receive a UDP packet, unpack the relevant byte[] or short[] and play it.
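A minimal, pure-Java sketch of that framing step (on Android, `frame` would come from AudioRecord.read() and the received samples would go to AudioTrack.write(); the loopback socket here just demonstrates the round trip):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmUdp {
    // Pack 16-bit PCM samples into a byte[] payload (little-endian).
    static byte[] pack(short[] frame) {
        ByteBuffer bb = ByteBuffer.allocate(frame.length * 2)
                                  .order(ByteOrder.LITTLE_ENDIAN);
        bb.asShortBuffer().put(frame);
        return bb.array();
    }

    // Unpack a received payload back into samples.
    static short[] unpack(byte[] payload, int length) {
        short[] frame = new short[length / 2];
        ByteBuffer.wrap(payload, 0, length).order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer().get(frame);
        return frame;
    }

    public static void main(String[] args) throws Exception {
        short[] frame = {0, 1000, -1000, Short.MAX_VALUE};  // stand-in for AudioRecord output

        try (DatagramSocket receiver = new DatagramSocket();
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = pack(frame);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet);
            short[] received = unpack(packet.getData(), packet.getLength());
            System.out.println(received[1] + " " + received[2]);  // prints "1000 -1000"
        }
    }
}
```

In a real VoIP loop you would keep frames small (e.g. 20 ms of audio) so that a lost packet costs little and latency stays low.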