What is the link between Usages and the Audio HAL?

I am trying to get audio working on a board to which we're porting Android.
Currently, sounds played with usages such as USAGE_ALARM (4) are audible, whereas sounds played with usages such as USAGE_MEDIA (1) are silent.
For the audible usages, we can see calls to the Audio HAL:
...
D AudioFlinger: Client defaulted notificationFrames to 11025 for frameCount 22050
D AF::TrackHandle: OpPlayAudio: track:64 usage:4 not muted
D audio_hw_primary: out_set_parameters: enter: kvpairs: routing=2
D audio_hw_primary: out_set_parameters: exit: code(0)
I audio_hw_primary: start_output_stream_primary... 0xf3856000, device 2, address , mode 0
I audio_hw_primary: select_output_device(), headphone 0 ,headset 0 ,speaker 2, earpiece 0,
I audio_hw_primary: get_card_for_device adev: 0xf3828000, device: 2, flag: 0, card_index: 0xf3856158
W audio_hw_primary: card 0, port 0 device 0x2
W audio_hw_primary: rate 48000, channel 2 period_size 0xc0
W StreamHAL: Error from HAL stream in function get_presentation_position: Operation not permitted
...
Whereas for silent usages, we see only:
...
D AudioFlinger: Client defaulted notificationFrames to 11025 for frameCount 22050
D AF::TrackHandle: OpPlayAudio: track:65 usage:1 not muted
W StreamHAL: Error from HAL stream in function get_presentation_position: Operation not permitted
...
I thought that /vendor/etc/audio_policy_configuration.xml might be important. I've pared it back to just one output, but nothing has changed.
<attachedDevices>
<item>Speaker</item>
...
<defaultOutputDevice>Speaker</defaultOutputDevice>
<mixPorts>
<mixPort name="primary output" role="source" flags="AUDIO_OUTPUT_FLAG_PRIMARY">
<profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
...
<devicePorts>
<devicePort tagName="Speaker" type="AUDIO_DEVICE_OUT_SPEAKER" role="sink" >
</devicePort>
...
<routes>
<route type="mix" sink="Speaker"
sources="primary output"/>
What part of AOSP is responsible for the routing of different usages?
Why do some usages work while others are silent?
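For anyone reproducing the two cases: the usage is attached to a player through AudioAttributes. A minimal sketch (context and R.raw.beep are placeholders from the app side, not from the question):

```java
import android.media.AudioAttributes;
import android.media.SoundPool;

// Same sound, two different usages: on the board above, USAGE_ALARM (4)
// is audible while USAGE_MEDIA (1) stays silent. R.raw.beep is hypothetical.
AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA) // swap for USAGE_ALARM to compare
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
SoundPool pool = new SoundPool.Builder().setAudioAttributes(attrs).build();
int soundId = pool.load(context, R.raw.beep, 1);
pool.setOnLoadCompleteListener((p, id, status) ->
        p.play(id, 1f, 1f, 1, 0, 1f));
```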

Related

Media volume either 0% or 100%

I have a custom Android AOSP ROM with a peculiar problem: the volume can only be set to either 0% or 100%. As a result, the volume buttons just turn the sound on or off. If I use the volume slider instead, and the volume is not muted, it jumps immediately to 100%. The volume is not reduced even momentarily.
Interestingly, the volume for ringtones and alarms is not affected and can be set as usual.
The problem occurs via headphones, internal speakers, and HDMI out.
I tried setprop ro.config.media_vol_steps 30, and that does work in that it changes the number of volume steps in the slider, but it does not affect the output volume. I found nothing in logcat; this is the only suspicious thing (I set the volume via the slider to a low value):
02-08 06:18:43.117 1813 2298 V audio_hw_primary: out_set_parameters: routing=1024
02-08 06:18:43.670 22493 22493 I vol.Events: writeEvent touch_level_changed STREAM_MUSIC 3
02-08 06:18:44.127 22493 22493 I vol.Events: writeEvent touch_level_done STREAM_MUSIC 3
02-08 06:18:46.575 22493 22493 I vol.Events: writeEvent dismiss_dialog touch_outside
02-08 06:18:46.581 22493 24066 I vol.Events: writeEvent active_stream_changed UNKNOWN_STREAM_-1
02-08 06:18:46.695 1813 2298 V audio_hw_primary: out_set_parameters: routing=1024
02-08 06:18:49.842 1813 2298 D audio_hw_primary: out_standby
What could cause this? For example, does the hardware report the current volume back to the UI (in which case it could be a driver problem)?
In your ROM, a fixed (full) volume is set. It's a resource configuration, usually done for HDMI devices.
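The resource in question is most likely config_useFixedVolume in the framework config; device makers override it via a resource overlay. A sketch of the overlay entry to look for (the overlay path varies per device tree):

```xml
<!-- e.g. overlay/frameworks/base/core/res/res/values/config.xml.
     When true, the framework pins media volume to maximum and the
     slider effectively only mutes/unmutes; set it to false to restore
     normal volume steps. -->
<bool name="config_useFixedVolume">false</bool>
```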

How can I get a Bluetooth keyboard input event in Android?

I would like to handle specific events when typing on a Bluetooth keyboard.
My app always runs as a service.
When a key is pressed on the Bluetooth keyboard, the following log is recorded.
/? I/WCNSS_FILTER: do_write: IBS write: fc
/? I/WCNSS_FILTER: Direction (1): bytes: 19: bytes_written: 19
/? D/InputReader: Input event (15): value = 1 when = 586669729008000
/? I/InputDispatcher: Delivering key to (1099): action: 0x0 (0)
/? D/ViewRootImpl: ViewPostImeInputStage processKey 0
/? D/SecContentProvider2: query(), uri = 15 selection = getToastEnabledState
/? D/SecContentProvider2: query(), uri = 15 selection = getToastShowPackageNameState
Is there a way to receive the 'InputReader: Input event' call from my service, or should I use another method?
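A plain Service cannot observe global key events. One supported route (assuming the use case allows asking the user for accessibility permission) is an AccessibilityService with key-event filtering, sketched here:

```java
import android.accessibilityservice.AccessibilityService;
import android.view.KeyEvent;
import android.view.accessibility.AccessibilityEvent;

// Requires android:canRequestFilterKeyEvents="true" in the service's XML
// config and FLAG_REQUEST_FILTER_KEY_EVENTS in its accessibility flags.
public class KeyWatcherService extends AccessibilityService {
    @Override
    protected boolean onKeyEvent(KeyEvent event) {
        if (event.getAction() == KeyEvent.ACTION_DOWN) {
            android.util.Log.d("KeyWatcher", "keyCode=" + event.getKeyCode());
        }
        return false; // don't consume; let the event reach the foreground app
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) { }

    @Override
    public void onInterrupt() { }
}
```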

Android Emulator Seems to Record Audio at 96khz

My app records audio from the phone's microphones and does some real-time processing on it. It works fine on physical devices, but acts "funny" in the emulator. It records something, but I'm not quite sure what it's recording.
It appears that on the emulator the audio samples are being read at about double the rate of actual devices. In the app I have a visual progress widget (a horizontally moving recording head), which moves about twice as fast in the emulator.
Here is the recording loop:
int FREQUENCY = 44100;
int BLOCKSIZE = 110;
int bufferSize = AudioRecord.getMinBufferSize(FREQUENCY,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT) * 10;
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
        FREQUENCY, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize);
short[] signal = new short[BLOCKSIZE * 2]; // times two for stereo
audioRecord.startRecording();
while (!isCancelled()) {
    int bufferReadResult = audioRecord.read(signal, 0, BLOCKSIZE * 2);
    if (bufferReadResult != BLOCKSIZE * 2)
        throw new RuntimeException("Recorded less than BLOCKSIZE x 2 samples: "
                + bufferReadResult);
    // process the `signal` array here
}
audioRecord.stop();
audioRecord.release();
The audio source is set to CAMCORDER and it records in stereo. The idea is that if the phone has multiple microphones, the app will process data from both and use whichever has the better SNR. But I have the same problem when recording mono from AudioSource.MIC. It reads audio data in a while loop; I am assuming that audioRecord.read() is a blocking call and will not let me read the same data twice.
The recorded data looks OK: the record buffer contains 16-bit PCM samples for two channels. The loop just seems to run at twice the speed it does on real devices, which leads me to think the emulator may be using a higher sampling rate than the specified 44100 Hz. However, querying the rate with audioRecord.getSampleRate() returns the correct value.
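One way to quantify the "twice as fast" observation: at the requested rate the read loop should complete a predictable number of iterations per second, so timing the loop gives an effective sample rate independent of what getSampleRate() claims. A small plain-Java sketch of the arithmetic:

```java
public class ReadRateCheck {
    public static void main(String[] args) {
        int frequency = 44100; // requested sample rate in Hz
        int blockSize = 110;   // frames consumed per read() call

        // A correctly clocked stream yields frequency / blockSize reads
        // per second, so the loop should iterate about 401 times a second.
        double expectedReadsPerSecond = (double) frequency / blockSize;
        System.out.printf("expected reads/s: %.1f%n", expectedReadsPerSecond); // 400.9

        // The progress head moving twice as fast means ~2x the reads,
        // i.e. data arriving as if sampled at twice the requested rate.
        double effectiveRate = 2.0 * frequency;
        System.out.printf("effective rate: %.0f Hz%n", effectiveRate); // 88200 Hz
    }
}
```

Counting actual read() returns over a wall-clock second and comparing against the expected ~401 would show whether the emulator stream is really delivering data at double speed.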
Also there are some interesting audio related messages in logcat while recording:
07-13 12:22:02.282 1187 1531 D AudioFlinger: mixer(0xf44c0000) throttle end: throttle time(154)
(...)
07-13 12:22:02.373 1187 1817 E audio_hw_generic: Error opening input stream format 1, channel_mask 0010, sample_rate 16000
07-13 12:22:02.373 1187 3036 I AudioFlinger: AudioFlinger's thread 0xf3bc0000 ready to run
07-13 12:22:02.403 1187 3036 W AudioFlinger: RecordThread: buffer overflow
(...)
07-13 12:22:24.792 1187 3036 W AudioFlinger: RecordThread: buffer overflow
07-13 12:22:30.677 1187 3036 W AudioFlinger: RecordThread: buffer overflow
07-13 12:22:37.722 1187 3036 W AudioFlinger: RecordThread: buffer overflow
I'm using up-to-date Android Studio and Android SDK, and I have tried emulator images running API levels 21-24. My dev environment is Ubuntu 16.04.
Has anybody experienced something similar?
Am I doing something wrong in my recording loop?
I suspect it is caused by AudioFormat.CHANNEL_IN_STEREO. The microphone on a device is typically a mono audio source. If for some reason the emulator supports stereo, you will receive twice as much data on the emulator (for both channels). To verify this, try switching to AudioFormat.CHANNEL_IN_MONO, which is guaranteed to work on all devices, and see whether you then receive the same amount of data on the emulator.
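A minimal sketch of the suggested check, reusing the FREQUENCY and BLOCKSIZE constants from the question's code:

```java
// Mono configuration for the same pipeline; if the progress head now moves
// at the same speed on the emulator as on devices, the stereo channel
// handling was the culprit.
int bufferSize = AudioRecord.getMinBufferSize(FREQUENCY,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT) * 10;
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        FREQUENCY, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize);
short[] signal = new short[BLOCKSIZE]; // one channel: no factor of two
```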

Soundpool Frequently Showing Significant Latency And Issuing "deep-buffer-playback" Message

I am testing SoundPool on a Moto E running 5.1. It often starts with excellent latency, but then the audio begins hanging for a hundred milliseconds or more, with the following message:
06-26 15:03:49.213 3865-9536/? E/DEBUG MESSAGE: Play Note BEFORE
06-26 15:03:49.331 299-876/? D/audio_hw_primary: out_set_parameters: enter: usecase(0: deep-buffer-playback) kvpairs: routing=8
06-26 15:03:49.331 299-876/? V/msm8916_platform: platform_get_output_snd_device: enter: output devices(0x8)
06-26 15:03:49.331 299-876/? V/msm8916_platform: platform_get_output_snd_device: exit: snd_device(headphones)
06-26 15:03:49.331 299-876/? D/audio_hw_extn: audio_extn_set_anc_parameters: anc_enabled:0
06-26 15:03:49.331 299-876/? E/soundtrigger: audio_extn_sound_trigger_set_parameters: str_params NULL
06-26 15:03:49.334 3865-9536/? E/DEBUG MESSAGE: Play Note AFTER
The DEBUG messages are mine; the others are system-generated. Notice I am losing over 100 ms. I checked my sample rate and it is good. It also doesn't happen for every note. Is anyone familiar with this type of error?
It is not an error. Your phone enters sleep mode: music can be streamed with long buffers (via the deep buffer), and between buffer refills the CPU goes to sleep.
This is normal behavior, intended to spare the battery.
A quick fix is to comment out the section containing the flag AUDIO_OUTPUT_FLAG_DEEP_BUFFER in $system/etc/audio_policy.conf:
deep_buffer {
    sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
    channel_masks AUDIO_CHANNEL_OUT_STEREO
    formats AUDIO_FORMAT_PCM_16_BIT
    devices AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL|AUDIO_DEVICE_OUT_PROXY|AUDIO_DEVICE_OUT_LINE
    flags AUDIO_OUTPUT_FLAG_DEEP_BUFFER
}
Or use a tool like this one: https://forum.xda-developers.com/apps/magisk/module-universal-deepbuffer-remover-t3577067

obtainBuffer timed out due to pcm_read() returned error n -5 on Galaxy S4

I have an application that uses the AudioRecord API to capture audio on Android devices, and it repeatedly fails on Galaxy S4 devices. This also occurs in other applications that record audio with either AudioRecord or MediaRecorder (AudioRec HQ, for example). I was able to reproduce it in a test application using the code below:
final int bufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize << 2);
mAudioRecord.startRecording();
mRecordThread = new Thread(new Runnable() {
    @Override
    public void run() {
        BufferedOutputStream fileOutputStream = null;
        try {
            fileOutputStream = new BufferedOutputStream(new FileOutputStream(String.format(Locale.US, "/sdcard/%1$d.pcm", System.currentTimeMillis())));
            final byte[] buffer = new byte[bufferSize];
            int bytesRead;
            do {
                bytesRead = mAudioRecord.read(buffer, 0, buffer.length);
                if (bytesRead > 0) {
                    fileOutputStream.write(buffer, 0, bytesRead);
                }
            } while (bytesRead > 0);
        } catch (Exception e) {
            Log.e("RecordingTestApp", e.toString());
        }
    }
});
mRecordThread.start();
These are the relevant logcat entries:
02-03 15:36:10.913: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=001699a0, server=001699a0
02-03 15:36:11.394: E/alsa_pcm(208): Arec: error5
02-03 15:36:11.394: W/AudioStreamInALSA(208): pcm_read() returned error n -5, Recovering from error
02-03 15:36:11.424: D/ALSADevice(208): close: handle 0xb7730148 h 0x0
02-03 15:36:11.424: D/ALSADevice(208): open: handle 0xb7730148, format 0x2
02-03 15:36:11.424: D/ALSADevice(208): Device value returned is hw:0,0
02-03 15:36:11.424: V/ALSADevice(208): flags 11000000, devName hw:0,0
02-03 15:36:11.424: V/ALSADevice(208): pcm_open returned fd 39
02-03 15:36:11.424: D/ALSADevice(208): handle->format: 0x2
02-03 15:36:11.434: D/ALSADevice(208): setHardwareParams: reqBuffSize 320 channels 1 sampleRate 8000
02-03 15:36:11.434: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=001699a0, server=001699a0
02-03 15:36:11.444: D/ALSADevice(208): setHardwareParams: buffer_size 640, period_size 320, period_cnt 2
02-03 15:36:20.933: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=0017ade0, server=0017ade0
02-03 15:36:21.394: E/alsa_pcm(208): Arec: error5
02-03 15:36:21.394: W/AudioStreamInALSA(208): pcm_read() returned error n -5, Recovering from error
02-03 15:36:21.424: D/ALSADevice(208): close: handle 0xb7730148 h 0x0
02-03 15:36:21.424: D/ALSADevice(208): open: handle 0xb7730148, format 0x2
02-03 15:36:21.424: D/ALSADevice(208): Device value returned is hw:0,0
02-03 15:36:21.424: V/ALSADevice(208): flags 11000000, devName hw:0,0
02-03 15:36:21.424: V/ALSADevice(208): pcm_open returned fd 39
02-03 15:36:21.424: D/ALSADevice(208): handle->format: 0x2
02-03 15:36:21.434: D/ALSADevice(208): setHardwareParams: reqBuffSize 320 channels 1 sampleRate 8000
02-03 15:36:21.434: D/ALSADevice(208): setHardwareParams: buffer_size 640, period_size 320, period_cnt 2
02-03 15:36:21.454: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=0017ade0, server=0017ade0
Here is the full logcat:
http://pastebin.com/y3XQ1rMf
No exceptions are thrown when this happens; AudioRecord.read just blocks until the hardware has recovered and started recording again, but 2-4 seconds of audio are lost. It is very annoying for users to find their audio files missing large sections without any explanation as to why.
Is this a known hardware issue, or are there things I should be doing differently to record more reliably?
Is there any way to detect that this issue has occurred?
After trying a wide variety of frequencies, buffer sizes, and audio sources with AudioRecord, and several formats with MediaRecorder, I was not able to record audio without these PCM errors. The same errors happen with several audio recording applications I have downloaded from the Play Store.
I followed this tutorial to create an OpenSL ES JNI library and it has been working well; I would recommend this approach to anyone who is seeing these errors on the Galaxy S4.
I don't have a Galaxy at hand, but I see a couple of things wrong with your example.
new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, 8000,
First, you initialize the AudioRecord at a sample rate of 8000 Hz. Per the documentation, only 44100 Hz is guaranteed to be available; the 8000 Hz sample rate might simply not be there. So you should check whether the AudioRecord is in a correct state, that is:
mAudioRecord.getState()==STATE_INITIALIZED
You also might want to check the return value of getMinBufferSize. If it is ERROR_BAD_VALUE, then the parameters you passed were incorrect.
Then, once done, you might want to start recording within the thread that will read the data. This is very likely not the cause of your problem, but it is often unclear what happens with audio drivers when an overflow occurs, e.g. the hardware might have produced too much data and you didn't read it fast enough. ALSA drivers often behave differently depending on who made them. So, to avoid that problem, it is better to call startRecording directly before you start reading, in this case within the runnable.
Once that is done, you might want to check
mAudioRecord.getRecordingState()==RECORDSTATE_RECORDING
If it is not recording, the driver is already telling you there is a problem.
Once you're finished recording you should also stop the device. Some ALSA drivers have a timeout and close themselves if you don't close them, meaning that the next time you try to open them you might simply have no access (this is of course very Linux-specific).
At first glance these are the avenues I would take, and my guess is that the sample-rate/channel combination is just not available.
Lastly, I also have a feeling that the VOICE_RECOGNITION audio source might be off; maybe just replace it with DEFAULT. I have had problems with this myself in the past.
