In my application I issue the following statement:
toneGenerator.startTone(ToneGenerator.TONE_PROP_ACK, 600);
This works very well on a cheap LG LS670 running Android 2.3.3, but produces no sound at all on any of the other phones I have, ranging from Android 2.2.1 to Android 2.3.4.
So I know the OS version doesn't play a role here (I also verified in the documentation that it has been supported since API 1).
Also, both Ringer volume and Media volume are set to maximum and toneGenerator is initialized with:
toneGenerator = new ToneGenerator(ToneGenerator.TONE_DTMF_1, 100);
And I verified that Settings.System.DTMF_TONE_WHEN_DIALING is set to 1.
Baffled by this inconsistent behavior across different phones, I examined the system logs when this happens, and the only suspicious difference I have been able to find is that the phones that fail to play TONE_PROP_ACK have this line in their log:
AudioFlinger setParameters(): io 25, keyvalue routing=0, tid 155, calling tid 121
What is the purpose of AudioFlinger and what could be its connection to muting TONE_PROP_ACK?
Any idea how to fix my code so that TONE_PROP_ACK always sounds, regardless of phone model?
One workaround is to generate the tone in something like Audacity and play it through SoundPool or the API of your choice.
According to the Android docs ToneGenerator.TONE_PROP_ACK is:
1200Hz, 100ms ON, 100ms OFF 2 bursts
If you choose SoundPool, I suggest saving in Ogg format and looping the tone until complete. This will provide seamless audio with a very small clip without using a lot of resources.
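A rough sketch of that approach, assuming the generated tone has been exported as res/raw/ack.ogg (the resource name and the surrounding Activity context are placeholders):

// Sketch only: play a short pre-generated tone via SoundPool.
// res/raw/ack.ogg is assumed to hold the 1200 Hz bursts exported from Audacity.
SoundPool pool = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
final int ackId = pool.load(context, R.raw.ack, 1);
pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
        if (status == 0) {
            // loop = 0 plays the clip once; pass -1 to loop it and call
            // soundPool.stop(streamId) when you want it to end
            soundPool.play(ackId, 1.0f, 1.0f, 1, 0, 1.0f);
        }
    }
});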
The parsing/decoding is handled by Stagefright, which is used by the media player service. The decoded data is written to an AudioTrack through an AudioSink, and the tracks are then mixed by the AudioFlinger's mixer thread(s) and written to an output stream (the audio hardware). The output stream object fills up its own buffer(s) and then writes the data to the PCM output device file (which may or may not be an ALSA driver).
Related
I'm using mp3 files to play sounds in my Android game, developed in Libgdx. The sounds play fine when they happen every now and then, but when I play them fast (footsteps in a running animation for example) the game freezes/stutters.
Every time a sound is played, I get this in the logs:
W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client; transfer 4, track 44100 Hz, output 48000 Hz
I use libktx AssetStorage to store the sounds. I've been searching for this issue for a few days now and haven't had any luck with any of the following solutions:
Override createAudio in AndroidLauncher and use AsynchronousAndroidAudio (roughly as in the sketch after this list)
Convert mp3 to ogg (using Audacity)
Convert to 48k rate sample (using Audacity)
Add 1 or 2 seconds of silence to the file
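For reference, this is roughly how I overrode createAudio in the launcher (a sketch; MyGdxGame is a placeholder for my game class, and AsynchronousAndroidAudio requires libGDX 1.9.12 or later):

import android.content.Context;
import android.os.Bundle;
import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;
import com.badlogic.gdx.backends.android.AndroidAudio;
import com.badlogic.gdx.backends.android.AsynchronousAndroidAudio;

public class AndroidLauncher extends AndroidApplication {
    @Override
    public AndroidAudio createAudio(Context context, AndroidApplicationConfiguration config) {
        // Return the asynchronous audio backend instead of the default one
        return new AsynchronousAndroidAudio(context, config);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        initialize(new MyGdxGame(), new AndroidApplicationConfiguration());
    }
}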
I test on my own device, a Samsung Galaxy S5, which is quite old and runs Android 6.0.1.
What can I do to resolve this error and stuttering?
Decoding compressed audio can be a significant processing load. If it's a short recording (e.g., one footstep that is being repeated), I'd either package the sound file as a .wav or decode it once into PCM held in memory, and use it that way. I don't know if it's possible to output PCM directly with libgdx, but I do recall inspecting and tinkering with an Ogg utility to have it decode into an array, and outputting that with a SourceDataLine in a non-libgdx Java project. I realize that SourceDataLine output is not an option on Android, but Android does have provisions for playing back raw PCM.
Another idea to explore is raising the priority of the thread that processes the audio to Thread.MAX_PRIORITY, if libgdx allows this. In theory the audio processing thread spends most of its time blocked, so doing this shouldn't hurt performance unless you are really going overboard with your audio requests.
I just saw the sample-rate mismatch. It's wasteful to repeatedly convert on the fly when you can do the conversion once in Audacity. The load difference between outputting at 48000 and 44100 probably isn't large either way: 44100 should be fine, and I doubt 48000 adds much CPU load (or perceivable audio fidelity). So, whichever one you pick, spend a little time making sure all the assets match that format.
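If raw PCM output does turn out to be possible from your setup, a rough Android-level sketch (outside of libgdx, with pcmData and sampleRate standing in for whatever your one-time decode produced) would be:

// Sketch: play a short clip that has already been decoded to 16-bit mono PCM
// and kept in memory, using AudioTrack in static mode.
void playFootstep(byte[] pcmData, int sampleRate) {
    AudioTrack track = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            pcmData.length,
            AudioTrack.MODE_STATIC);
    track.write(pcmData, 0, pcmData.length);
    track.play(); // for repeats: track.stop(); track.reloadStaticData(); track.play();
}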
We're playing a wav file through Android's SoundPool API. We've created a device that loops one of the stereo channels back through the microphone, but not the other.
I should also mention that we've tested our loopback device with a computer and have confirmed it works correctly.
However, when we try to play a sound through the channel that is not looped back to the microphone, we're still "hearing" the sound from that channel on the microphone. When we plug headphones in, it seems Android is still sending a quieter version of the one-channel sound through to the other channel even when it's not supposed to.
We've tried stereo WAV files with silence on one channel and we've tried mono sound files -- both are played the same. Can anyone explain this and how to stop it? The code we're using is:
_Pool = new SoundPool(3, Stream.Music, 0);
var beep = assets.OpenFd("beep.wav");
var beepId = _Pool.Load(beep, 1);
//later, after the sound is loaded we call:
_Pool.Play(beepId, 1, 0, 1, 0, 1);
I should also mention that we've tried variations of the volume levels (0.01 instead of 0.0 and 0.99 instead of 1.0). We've also tried it on multiple test devices including a Google Pixel, Samsung Note, and an LG. Nothing seems to work. What gives?
This "feature" is inherent in all the Android.Media.* package media output and it happens on physical and emulated devices due to the Android audio stack due to the fact Android supports up to 8 channel audio and mixes all available channels to produce the audio output.
i.e. If you create an AudioFormat that masks out all channels except the right channel, you will still have some output on the left channel (assuming you are using a 2-channel output device) regardless of the AudioAttributes:
var audioFormat = new AudioFormat.Builder()
.SetChannelIndexMask(2) // Right channel
.SetSampleRate(8000)
.SetEncoding(Encoding.Pcm16bit)
.Build();
These APIs sit above the AudioFlinger, libmedia, the HAL, etc., and thus are subject to the final mixing provided by the FastMixer/NormalMixer/AudioMixer, etc.
You would have to look at ALSA (Advanced Linux Sound Architecture) or OSS (Open Sound System) to access the hardware audio devices directly and bypass the normal audio handling.
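In plain Android Java terms, the same kind of setup looks roughly like this (a sketch only; the buffer size and AudioAttributes values are illustrative, and the single-channel bleed described above can still be expected):

// Sketch: restrict output to channel index 1 ("right" on a stereo device).
AudioFormat format = new AudioFormat.Builder()
        .setChannelIndexMask(0x2)                 // only channel index 1
        .setSampleRate(8000)
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .build();

AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build())
        .setAudioFormat(format)
        .setBufferSizeInBytes(AudioTrack.getMinBufferSize(
                8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT))
        .build();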
I am currently working with AudioTrack. I am loading an mp3 file and playing it. However, depending on the device, the music plays either at half the rate or at the normal rate.
My code:
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        initBuffer,
        AudioTrack.MODE_STREAM);
sampleRate is the sample rate returned by AudioFormat, and initBuffer is the buffer size from AudioTrack.getMinBufferSize().
I have tried to change the sample rate but no difference. Buffer size also has no impact.
In fact, switching to CHANNEL_CONFIGURATION_STEREO does make the music play at the normal rate on the devices that were slow, but it also makes the ones that were working fine play at twice the normal speed. My problem is that I want all devices to play at the normal speed.
Any suggestions?
I have read this thread: Android AudioTrack slow playback, but it doesn't tell me how to find out which devices should play in mono or stereo.
Devices at normal speed: Urbano 4.2.2, Galaxy S4 4.3
Devices at half speed: Galaxy S4 4.2.2, Xperia Z 4.2.2
BTW, I cannot use MediaPlayer for playback. The AudioTrack is included in a custom player and I need to write audio data as I extract it. MediaPlayer won't do the trick.
Thanks.
I experienced this when trying to decode a mono file using MediaCodec. On some devices the codec would output a stereo stream, while on others it output mono. When setting up the AudioTrack I used the MediaFormat returned by MediaExtractor, which was mono. On devices where the codec produced a stereo stream, the AudioTrack was fed twice as many samples as expected. The solution is to listen for the output-format-changed event from MediaCodec and adjust the parameters of the AudioTrack accordingly.
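A rough sketch of what that looks like in the decode loop (the codec, the timeout, and recreateAudioTrack are placeholders for your own plumbing):

// Watch for the output format change while draining the decoder and rebuild
// the AudioTrack from the channel count / sample rate the codec actually uses.
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outIndex = codec.dequeueOutputBuffer(bufferInfo, 10000 /* us */);
if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat newFormat = codec.getOutputFormat();
    int channels = newFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    int sampleRate = newFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelConfig = (channels == 2)
            ? AudioFormat.CHANNEL_OUT_STEREO
            : AudioFormat.CHANNEL_OUT_MONO;
    recreateAudioTrack(sampleRate, channelConfig); // placeholder helper
} else if (outIndex >= 0) {
    // write the decoded PCM from the output buffer to the AudioTrack as usual
}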
AudioTrack only accepts PCM audio data; you cannot play an mp3 with AudioTrack directly. You have to decode your mp3 to PCM or use something other than AudioTrack (e.g. MediaPlayer).
I finally solved my problem. Not the best fix but it works at least.
For those who are interested: I read the BufferInfo size and, based on it, decide which playback configuration to use. Basically, when playback is slow the size is twice as big as for normal-speed playback. Just a guess, but the MediaCodec might be duplicating the data for a stereo configuration.
This one is a little tricky but here is what you can do using adb and ALSA.
Android internally uses ALSA.
Your device should have ALSA; try:
root#user:/$ adb shell cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.25.
Just take a look at Reading ALSA Device names.
Every device (subdivision of a card) has a capture and a playback component.
Like /proc/asound/card0/pcm1p and /proc/asound/card0/pcm1c, where card is 0 and device is 1.
pcm1p is your playback and pcm1c is capture (for recording).
Access your device using adb:
root#user:/$ adb shell
shell#android:/$:
Identifying your device:
As you can see, /proc/asound/pcm will give you a long list:
shell#android:/$: cat /proc/asound/pcm
00-00: MultiMedia1 (*) : : playback 1 : capture 1
00-01: MultiMedia2 (*) : : playback 1 : capture 1
00-02: CS-Voice (*) : : playback 1 : capture 1
From the above I find that 00-00: MultiMedia1 (*), i.e. card 0, device 0, is for multimedia playback.
Getting Playback Parameter:
Play your loaded mp3 file using your standard music player application.
While the song is playing, issue the following command for card 0, device 0 (p = playback), subdevice 0:
shell#android:/$: cat /proc/asound/card0/pcm0p/sub0/hw_params
access: RW_INTERLEAVED
format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 1920
buffer_size: 3840
So try using the same values when you call AudioTrack track = new AudioTrack(...), for example:
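(A sketch matching the hw_params shown above; keep getMinBufferSize() for the buffer instead of hard-coding one.)

// Match what the kernel reported: 44100 Hz, 2 channels, S16_LE.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int bufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfig, encoding);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate, channelConfig, encoding,
        bufferSize, AudioTrack.MODE_STREAM);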
These hw_params values will only be visible while the device is open (i.e., while some audio is playing).
If you issue the command against the wrong device (say pcm1p), you will see the following:
shell#android:/$: cat /proc/asound/card0/pcm1p/sub0/hw_params
closed
Note:
Step 1 doesn't require the phone to be rooted; see How to Setup ADB.
Alternatives:
Have you tried the AudioTrack APIs like getChannelConfiguration() or getChannelCount()? Just asking.
Have you tried looking at the MP3 file's properties? The metadata has information on the audio parameters.
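For the second alternative, MediaExtractor will report the file's channel count and sample rate before you ever create the AudioTrack (a sketch; the path is a placeholder and the first track is assumed to be the audio track):

// Read the audio parameters from the file itself instead of guessing.
int[] readChannelsAndRate(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);
    MediaFormat format = extractor.getTrackFormat(0);
    int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    extractor.release();
    return new int[] { channels, sampleRate };
}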
I'm working on an audio capture application on a rooted MK809 (Android 4.1.1). There is no internal mic, so I am trying to use a USB one, which is correctly detected as "USB Audio Device" under Settings/Sound/Sound Devices Manager/Sound Input Devices when connected.
What is this device's AudioSource value to pass into the AudioRecord constructor (first argument)? I tried every one in MediaRecorder.AudioSource; none worked. I am only interested in reading the capture buffer, not saving to a file.
Answering my own question. The following values did work: DEFAULT, MIC, CAMCORDER, and probably others too, as it is the only input device.
I was trying to use a sample rate of 48000 (which works on Windows) and AudioRecord creation failed with:
ERROR/AudioRecord(1615): Could not get audio input for record source 1
ERROR/AudioRecord-JNI(1615): Error creating AudioRecord instance: initialization check failed.
ERROR/AudioRecord-Java(1615): [ android.media.AudioRecord ] Error code -20 when initializing native AudioRecord object.
Somewhat misleading, considering that a call to getMinBufferSize() with the same set of arguments does not return an error as it is supposed to, so I assumed it was a valid sample rate for the device. Setting it to 44100 (which is guaranteed to be supported) fixed the problem.
USB audio input devices do work on Android, Jelly Bean at least. Hope this helps someone.
FWIW, this is implementation specific (it can differ between different platform vendors and OEMs).
On the devices I've worked on, the USB accessory's mic would be chosen if the AudioSource is DEFAULT, MIC or VOICE_RECOGNITION, and the only sample rates supported in the audio HAL for USB audio recording were 8, 16 and 48 kHz (although the AudioFlinger is able to resample to other rates within a certain range).
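Since the supported rates vary by device, one cautious approach is simply to try candidate rates until an AudioRecord initializes (a sketch; 44100 goes first because it is the only rate guaranteed to work on all devices):

// Probe common rates and return the first AudioRecord that initializes.
AudioRecord openUsbMic() {
    int[] candidateRates = { 44100, 48000, 16000, 8000 };
    for (int rate : candidateRates) {
        int minBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (minBuf <= 0) continue; // rate not supported at all
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
        if (rec.getState() == AudioRecord.STATE_INITIALIZED) {
            return rec;
        }
        rec.release();
    }
    return null; // no supported configuration found
}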
I'm writing an audio streaming app that buffers AAC file chunks, decodes those chunks to PCM byte arrays, and writes the PCM audio data to AudioTrack. Occasionally, I get the following error when I try to either skip to a different song, call AudioTrack.pause(), or AudioTrack.flush():
obtainbuffer timed out -- is cpu pegged?
And then what happens is that a split second of audio continues to play. I've tried reading a set of AAC files from the sdcard and got the same result. The behavior I'm expecting is that the audio stops immediately. Does anyone know why this happens? I wonder if it's an audio latency issue with Android 2.3.
Edit: The AAC audio contains an ADTS header. The header plus the audio payload constitute what I'm calling an ADTS frame. These are fed to the decoder one frame at a time, and the resulting PCM byte array returned from the C layer to the Java layer is fed to Android's AudioTrack API.
Edit 2: I got my Nexus 7 (Android 4.1) today, loaded the same app onto the device, and didn't have any of these problems at all.
It is quite possibly a sample rate issue. One of your devices might support the sample rate you used while the other does not; please check it. I had the same issue, and it was caused by the sample rate. Use 44.1 kHz (44100) and try again.
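A minimal sketch of that suggestion (creating the track at 44100; you can also compare against AudioTrack.getNativeOutputSampleRate() to see what rate the device's mixer actually runs at):

// Create the AudioTrack at 44100 Hz, as suggested above.
int sampleRate = 44100;
int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);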