Android SoundPool Cannot Play Through Single Channel

We're playing a wav file through Android's SoundPool API. We've created a device that loops one of the stereo channels back through the microphone, but not the other.
I should also mention that we've tested our loopback device with a computer and have confirmed it works correctly.
However, when we play a sound through the channel that is not looped back to the microphone, we still "hear" that sound on the microphone. When we plug headphones in, it becomes clear that Android is sending a quieter copy of the single-channel sound through the other channel even when it shouldn't.
We've tried stereo WAV files with silence on one channel and we've tried mono sound files -- both are played the same. Can anyone explain this and how to stop it? The code we're using is:
_Pool = new SoundPool(3, Stream.Music, 0);
var beep = assets.OpenFd("beep.wav");
var beepId = _Pool.Load(beep, 1);
//later, after the sound is loaded, we call
//Play(soundId, leftVolume: 1, rightVolume: 0, priority: 1, loop: 0, rate: 1):
_Pool.Play(beepId, 1, 0, 1, 0, 1);
I should also mention that we've tried variations of the volume levels (0.01 instead of 0.0 and 0.99 instead of 1.0). We've also tried it on multiple test devices including a Google Pixel, Samsung Note, and an LG. Nothing seems to work. What gives?

This "feature" is inherent in all the Android.Media.* package media output and it happens on physical and emulated devices due to the Android audio stack due to the fact Android supports up to 8 channel audio and mixes all available channels to produce the audio output.
That is, if you create an AudioFormat that masks out all channels except the right one, you will still get some output on the left channel (assuming you are using a 2-channel output device), regardless of the AudioAttributes:
var audioFormat = new AudioFormat.Builder()
    .SetChannelIndexMask(1 << 1) // index mask 0b10: right channel only
    .SetSampleRate(8000)
    .SetEncoding(Encoding.Pcm16bit)
    .Build();
These APIs sit above AudioFlinger, libmedia, the HAL, etc., and thus are subject to the final mixing provided by the FastMixer/NormalMixer/AudioMixer.
You can look at ALSA (Advanced Linux Sound Architecture) and OSS (Open Sound System) if you need to access the audio hardware directly and bypass the normal audio handling.
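For example, on devices that ship the TinyALSA utilities (availability varies by device and ROM; the file paths and card/device numbers below are only examples), you can exercise the ALSA PCM devices directly from an adb shell, bypassing AudioFlinger entirely:
shell#android:/$: tinyplay /sdcard/beep.wav -D 0 -d 0
shell#android:/$: tinycap /sdcard/loopback.wav -D 0 -d 0 -c 2 -r 44100 -b 16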

Related

Android AudioRecord Configuration does not match recorded Audio

I intend to record stereo audio on an Android 4.4.2 device. However, the audio recorded via a simple recording app (using AudioRecord) does not match the supplied configuration. I would expect error messages in logcat if the device were falling back to default configuration values, but I can see that the supplied values appear to be accepted by AudioHardware and AudioPolicyManagerBase.
The current configuration is:
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate,
        AudioFormat.CHANNEL_IN_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        audioBufferSizeInBytes);
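For reference, a quick way to confirm what the framework itself accepted after construction (as opposed to what the HAL does further downstream); the log tag is arbitrary:
if (recorder.getState() == AudioRecord.STATE_INITIALIZED) {
    Log.d("AudioRecordTest", "rate=" + recorder.getSampleRate()
            + " channels=" + recorder.getChannelCount());
}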
Changing the MediaRecorder.AudioSource has been raised as an option for resolving this issue, but it has not changed how the Android stack behaves, except to (understandably) fail to initialize the recorder when the configuration is invalid.
Changing the sample rate has also produced no visible change in the output: both 44.1 kHz and 16 kHz are valid options, yet both produce 16 kHz audio when examined. The output audio also appears to be one channel of audio upmixed to stereo.
TinyALSA/Tinycap is available to capture the audio, and this appears to behave as expected.
Could this be an issue within the Android Stack? Or is this more likely to be an issue with the code supplied by the OEM?
The reason for the downmixed audio in this case was that the Speex Codec was being used in the HAL to downmix and de-noise the stereo input.
This was configured under:
<android source tree>/hardware/<OEM>/audio/audiohardware.h
An alternative would be to route the audio out of ALSA and around the Android stack via a Unix domain stream socket, which would be accessible in the application layer with Android's LocalSocket.
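A minimal sketch of the application-layer end of that idea, assuming a hypothetical native-side daemon is already streaming raw PCM to an abstract socket named "pcm_stream" (the socket name and buffer handling are placeholders):
import android.net.LocalSocket;
import android.net.LocalSocketAddress;
import java.io.IOException;
import java.io.InputStream;

void readPcmFromNativeSide() throws IOException {
    LocalSocket socket = new LocalSocket();
    // "pcm_stream" is a hypothetical name; it must match the native side.
    socket.connect(new LocalSocketAddress("pcm_stream",
            LocalSocketAddress.Namespace.ABSTRACT));
    InputStream in = socket.getInputStream();
    byte[] buffer = new byte[4096];
    int read;
    while ((read = in.read(buffer)) != -1) {
        // Hand the raw PCM to an AudioTrack, a file, or your own analysis.
    }
    socket.close();
}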

Android selecting top microphone

How can I select the top microphone as the audio source for recording audio in Android?
I am currently using MediaRecorder.AudioSource.CAMCORDER as the audio source, but it does not use the top mic for recording.
The developer documentation lists the supported audio sources. I am unsure what you mean by "top microphone", but I think you should use MediaRecorder.AudioSource.MIC.
In my case I used audioSource = MediaRecorder.AudioSource.MIC and channelConfig = AudioFormat.CHANNEL_IN_STEREO, then chose the RIGHT channel of the input, which receives sound from the top mic on the HUAWEI Mate 10/Mate 9. A sketch of this is shown below.
Note that this approach depends on the hardware; the code only works on specific models.
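A minimal sketch of that approach (stereo capture, then keeping only the right channel from the interleaved PCM16 buffer; which physical mic feeds which channel is device-specific, and the RECORD_AUDIO permission is required):
int sampleRate = 44100;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
short[] interleaved = new short[2048];            // L, R, L, R, ...
short[] right = new short[interleaved.length / 2];
recorder.startRecording();
int read = recorder.read(interleaved, 0, interleaved.length);
for (int i = 0; i < read / 2; i++) {
    right[i] = interleaved[2 * i + 1];            // odd indices = right channel
}
recorder.stop();
recorder.release();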

AudioTrack playing with inconsistent speed on different devices

I am currently working with AudioTrack. I am loading an mp3 file and playing it. However, depending on the device, the music plays either at half speed or at normal speed.
My code:
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        initBuffer,
        AudioTrack.MODE_STREAM);
sampleRate is the sample rate returned by the AudioFormat, and initBuffer is the buffer size from AudioTrack.getMinBufferSize().
I have tried changing the sample rate, but it makes no difference. The buffer size also has no impact.
In fact, switching to CHANNEL_CONFIGURATION_STEREO does make the music play at normal speed on the devices that were slow, but it also makes the ones that were working fine play at twice the normal speed. My problem is that I want all devices to play at normal speed.
Any suggestions?
I have read this thread: Android AudioTrack slow playback, but it doesn't tell me how to find out which devices should play in mono or stereo.
Devices at normal speed: Urbano 4.2.2, Galaxy S4 4.3.
Devices at half speed: Galaxy S4 4.2.2, Xperia Z 4.2.2.
BTW, I cannot use MediaPlayer for playback. The AudioTrack is included in a custom player and I need to write audio data as I extract it. MediaPlayer won't do the trick.
Thanks.
I experienced this when decoding a mono file using MediaCodec. On some devices the codec outputs a stereo stream, while on others it outputs mono. When setting up the AudioTrack I used the MediaFormat returned by MediaExtractor, which was mono. On devices where the codec produced a stereo stream, the AudioTrack was fed twice as many samples. The solution is to watch for the output-format change reported by MediaCodec (INFO_OUTPUT_FORMAT_CHANGED) and adjust the AudioTrack to the codec's actual output MediaFormat.
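A minimal sketch of that adjustment inside a typical decode loop; codec, track, and bufferInfo stand for the usual decode-loop variables and are placeholders here:
int outIndex = codec.dequeueOutputBuffer(bufferInfo, 10_000);
if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat fmt = codec.getOutputFormat();
    int rate = fmt.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channels = fmt.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
    int config = (channels == 2) ? AudioFormat.CHANNEL_OUT_STEREO
                                 : AudioFormat.CHANNEL_OUT_MONO;
    // Recreate the AudioTrack to match the decoder's real output format.
    track.release();
    track = new AudioTrack(AudioManager.STREAM_MUSIC, rate, config,
            AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(rate, config,
                    AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);
    track.play();
}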
AudioTrack only accepts PCM audio data; you cannot play an mp3 with AudioTrack directly. You have to decode the mp3 to PCM or use something else (e.g. MediaPlayer) instead of AudioTrack.
I finally solved my problem. It is not the best fix, but it works at least.
For those who are interested: I read the BufferInfo size and, based on it, decide at which playback rate to play. When playback is slow, the size is twice as big as for normal-speed playback. Just a guess, but the MediaCodec might be duplicating samples for a stereo configuration.
This one is a little tricky but here is what you can do using adb and ALSA.
Android internally uses ALSA.
Your device should have ALSA; try:
root#user:/$ adb shell cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.25.
Just take a look at Reading ALSA Device names.
Every device (a subdivision of a card) has a capture and a playback component,
like /proc/asound/card0/pcm1p and /proc/asound/card0/pcm1c, where the card is 0 and the device is 1:
pcm1p is your playback device and pcm1c is your capture device (for recording).
Access your device using adb:
root#user:/$ adb shell
shell#android:/$:
Identifying your device:
Reading /proc/asound/pcm will give you a long list:
shell#android:/$: cat /proc/asound/pcm
00-00: MultiMedia1 (*) : : playback 1 : capture 1
00-01: MultiMedia2 (*) : : playback 1 : capture 1
00-02: CS-Voice (*) : : playback 1 : capture 1
From the above, 00-00: MultiMedia1 (*) tells me that card 0, device 0 is for multimedia playback.
Getting Playback Parameter:
Play your loaded mp3 file using your standard music player application.
While the song is playing, issue the following command for card0, device0 (p = playback), subdevice0:
shell#android:/$: cat /proc/asound/card0/pcm0p/sub0/hw_params
access: RW_INTERLEAVED
format: S16_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 1920
buffer_size: 3840
So try using the same values when you call AudioTrack track = new AudioTrack(...);
These values are only visible while the device is open (i.e. while audio is playing).
If you issue this against the wrong device (say pcm1p), you will see the following:
shell#android:/$: cat /proc/asound/card0/pcm1p/sub0/hw_params
closed
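For instance, a minimal sketch plugging the hw_params above into the constructor (CHANNEL_OUT_STEREO corresponds to channels: 2, and ENCODING_PCM_16BIT to format: S16_LE; the buffer size is still best taken from getMinBufferSize()):
int sampleRate = 44100;  // "rate" from hw_params
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,        // "channels: 2"
        AudioFormat.ENCODING_PCM_16BIT,        // "format: S16_LE"
        AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);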
Note: Step 1 doesn't require the phone to be rooted; see How to Setup ADB.
Alternatives:
Have you tried the AudioTrack APIs like getChannelConfiguration() or getChannelCount()? Just asking.
Have you tried looking at the MP3 file's properties? The metadata has information on the audio parameters.
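For that second alternative, a sketch reading the MP3's real sample rate and channel count with MediaExtractor before creating the AudioTrack (the file path is a placeholder):
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource("/sdcard/song.mp3");   // placeholder path
MediaFormat format = extractor.getTrackFormat(0);
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
int channelConfig = (channels == 2) ? AudioFormat.CHANNEL_OUT_STEREO
                                    : AudioFormat.CHANNEL_OUT_MONO;
extractor.release();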

What is AudioFlinger and why does it fail TONE_PROP_ACK?

In my application I issue the following statement:
toneGenerator.startTone(ToneGenerator.TONE_PROP_ACK, 600);
This works very well on a cheap LG LS670 running Android 2.3.3, but it doesn't sound at all on any of the other phones I have, ranging from Android 2.2.1 to 2.3.4.
So I know the OS version doesn't play a role here (I also verified in the documentation that it has been supported since API level 1).
Also, both Ringer volume and Media volume are set to maximum and toneGenerator is initialized with:
toneGenerator = new ToneGenerator(ToneGenerator.TONE_DTMF_1, 100);
And I verified that Settings.System.DTMF_TONE_WHEN_DIALING is set to 1.
Baffled by this inconsistent behavior across different phones, I examined the system logs, and the only suspicious difference I have found is that the phones that fail to sound TONE_PROP_ACK have this line in their log:
AudioFlinger setParameters(): io 25, keyvalue routing=0, tid 155, calling tid 121
What is the purpose of AudioFlinger and what could be its connection to muting TONE_PROP_ACK?
Any idea how to fix my code so that TONE_PROP_ACK always sounds, regardless of phone model?
One workaround is to generate the tone in something like Audacity and play it through SoundPool or the API of your choice.
According to the Android docs ToneGenerator.TONE_PROP_ACK is:
1200Hz, 100ms ON, 100ms OFF 2 bursts
If you choose SoundPool, I suggest saving in Ogg file format and looping the tone until complete. This will provide seamless audio with a very small clip, without using a lot of resources.
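A sketch of that workaround, assuming the tone rendered in Audacity is saved as res/raw/ack (a hypothetical resource name; context is your Context):
SoundPool pool = new SoundPool(1, AudioManager.STREAM_MUSIC, 0);
pool.setOnLoadCompleteListener((p, sampleId, status) -> {
    int streamId = p.play(sampleId, 1f, 1f, 1, -1, 1f); // loop = -1: repeat
    // TONE_PROP_ACK is two 100 ms bursts; stop the loop after 600 ms
    // to mirror startTone(TONE_PROP_ACK, 600).
    new Handler(Looper.getMainLooper()).postDelayed(() -> p.stop(streamId), 600);
});
int toneId = pool.load(context, R.raw.ack, 1);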
The parsing/decoding is handled by Stagefright, which is used by the media player service. The decoded data is written to an AudioTrack through an audio sink, and the tracks are then mixed by AudioFlinger's mixer thread(s) and written to an output stream (the audio hardware). The output stream object fills up its own buffer(s) and then writes the data to the PCM output device file (which may or may not be an ALSA driver).

Audio Sound Too Low in Android App

I recorded around 50 audio files to use in my app, so I would rather not record all of them again. I recently used SoundPool to play the files on a real device instead of the emulator, and you can barely hear them. On the emulator, with my PC volume and the device volume set to max, I can hear them fine. Should I re-record the files louder, or is there another option?
I've found that when targeting mobile devices (and cheap/small laptop speakers for that matter), it is best to do two things to your audio:
Compression: I do not mean data compression, I mean dynamic range compression. This removes some of the level difference between the loud and soft parts of the recording, allowing all of it to be heard better.
Normalization: When you normalize audio, you find the loudest part of the clip and scale the entire clip up so that the loudest part sits at the maximum level the audio file can store.
You can do both of these easily with any audio editing software, such as Audacity.
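For illustration, here is what peak normalization boils down to on a raw PCM16 buffer; a sketch of the same operation an editor like Audacity performs, not a replacement for doing it there:
static void normalize(short[] samples) {
    int peak = 1;
    for (short s : samples) {
        peak = Math.max(peak, Math.abs(s));   // find the loudest sample
    }
    double gain = 32767.0 / peak;             // scale factor to reach full scale
    for (int i = 0; i < samples.length; i++) {
        samples[i] = (short) Math.round(samples[i] * gain);
    }
}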
Finally, you should also keep in mind which frequencies such small speakers can actually reproduce.
Most of these speakers are built with speech in mind, so you will find that they tend to be loudest in the 700 Hz to 2.5 kHz range.
That is, if your sound effects are low in frequency (think bass), it will be almost impossible to hear them on a phone's small speaker, which cannot reproduce such low frequencies.
If you have more questions on the matter, please visit https://video.stackexchange.com/.
If it is the volume of the recorded files, you can change it using a normalizer like MP3Gain.
