I have an Android application which uses android.media.SoundPool to play audio cues for the user. It works as expected, but I'd like to be able to add effects to the playback (via android.media.audiofx.AudioEffect subclasses).
I understand how audio effects work on (e.g.) MediaPlayer, but I don't see how to use them with SoundPool. In particular, I'm not sure what to use for the session ID. (I happen to be using Xamarin Android because... reasons. So my example looks weird -- but I'm happy to adapt native Java or Kotlin answers!)
int session = ???; // What goes here?
var fx = new LoudnessEnhancer(session);
fx.SetTargetGain(2400);
fx.SetEnabled(true);
Pool.Play(soundID, 1.0F, 1.0F, 1, 0, 1.00F);
I tried using session ID 0, but (on my WT6000 running Android 7.1.1) it crashes in the LoudnessEnhancer ctor:
Java.Lang.RuntimeException: Cannot initialize effect engine for type: fe3199be-aed0-413f-87bb-11260eb63cf1 Error: -3
(and anyway, my understanding is that applying effects to session ID zero is deprecated).
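A possible workaround, sketched in plain Java (not a drop-in replacement): since SoundPool never exposes a per-stream session ID, play the sample through an AudioTrack instead and attach the effect to that track's session. Here pcmData is a hypothetical short[] holding the decoded 16-bit mono samples:
// Sketch: attach the LoudnessEnhancer to an AudioTrack's session instead of SoundPool.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        pcmData.length * 2, AudioTrack.MODE_STATIC);

LoudnessEnhancer fx = new LoudnessEnhancer(track.getAudioSessionId());
fx.setTargetGain(2400);   // same gain (in millibels) as in the question
fx.setEnabled(true);

track.write(pcmData, 0, pcmData.length);   // MODE_STATIC: write before play
track.play();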
In a Xamarin Android (not Xamarin Forms) application on KitKat (API 19), using Visual Studio 2015 Update 3, I'm having trouble with playback of sounds from a SoundPool. Sometimes my sound plays fine. Other times it stutters.
Weirdest of all, on rare occasion, playback fails completely with a warning in the debug log like:
W/SoundPool(30751): sample 2 not READY
This is despite the fact that I have absolutely positively waited for the sample to be loaded before trying to play it, and despite the same sample ID playing just fine immediately before and after the "sample X not READY" failure.
The TL;DR version of the code:
var Pool = new SoundPool(6, Stream.Music, 0);
var ToneID = await Pool.LoadAsync(Android.App.Application.Context, Resource.Raw.tone, 1);
beepButton.Click += (sender, e) => Pool.Play(ToneID, 1.0F, 1.0F, 1, 0, 1.0F);
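(For reference, the awaited LoadAsync above just wraps SoundPool's load-complete callback; a minimal sketch of the same load-then-play flow in plain Java, with context assumed in scope:)
SoundPool pool = new SoundPool(6, AudioManager.STREAM_MUSIC, 0);
final int toneId = pool.load(context, R.raw.tone, 1);
pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sp, int sampleId, int status) {
        // status == 0 means the sample decoded successfully and is READY
        if (sampleId == toneId && status == 0) {
            sp.play(toneId, 1.0f, 1.0f, 1, 0, 1.0f);
        }
    }
});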
The Resource.Raw.tone is little-endian 16-bit PCM mono audio with a sample rate of 44100Hz and a Microsoft "WAV" header. It is a 0.25s 440Hz sine wave created using Audacity 2.1.3.
I know all about SoundPool.Builder. It was introduced in API 21. I'm targeting API 19.
The actual code disables the button for 500ms after each press, so there's no chance I'm trying to play the sample more than once concurrently. Indeed, I can reproduce the problem with individual presses some tens of seconds apart.
Things I have tried without success:
using smaller or larger maxStreams (first arg) values when calling the SoundPool ctor
using different Android.Media.Stream enumeration values for streamType (second arg) when calling the SoundPool ctor
using different srcQuality (third arg) values when calling the SoundPool ctor (despite the Android documentation saying it does nothing and to always use zero)
using different priority (third arg) values when calling the SoundPool.Load() instance method
using sub-unity volume arguments (like 0.99F) when calling the SoundPool.Play() instance method
using different priority (fourth arg) values when calling the SoundPool.Play() instance method
using different file formats (MP3, OggVorbis) for my sound resource
Here is a .zip file containing my entire ready-to-build application. This is essentially just the template app except for the MainActivity.cs.
What the app should do: beep each time you press the (one and only) button on the UI. It will probably do just that for the first dozen presses or so. But then you'll hear "BaBeep" or "BeeeeBip" or "Bee<pause>beep" or silence.
FWIW, the same resource plays back just fine every time using MediaPlayer.
(Edited to link a much-simplified and cleaned-up .zip file example.)
I would like to modify Android OS (an official image from AOSP) to add preprocessing to the playback audio of a normal phone call.
I've already achieved this filtering for app audio playback (by modifying HAL and audioflinger).
I'm OK with targeting only a specific device (Nexus 5X). Also, I only need to filter playback - I don't care about recording (uplink).
UPDATE #1:
To make it clear - I'm OK with modifying Qualcomm-specific drivers, or whatever part that it is that runs on Nexus 5X and can help me modify in-call playback.
UPDATE #2:
I'm attempting to create a Java layer app that routes the phone playback to the music stream in real time.
I've already succeeded in installing it as a system app, getting permissions for initializing AudioRecord with AudioSource.VOICE_DOWNLINK. However, the recording gives blank samples; it doesn't record the voice call.
This is the code inside my worker thread:
// Start recording the call downlink
int recBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, recBufferSize);

// Start playback on the music stream
int playBufferSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, playBufferSize, AudioTrack.MODE_STREAM);

mRecord.startRecording();
mTrack.play();

int bufSize = 1024;
short[] buffer = new short[bufSize];
int res;
while (!interrupted())
{
    // Pull recorded samples and play them back. The non-blocking read may
    // return 0 or a negative error code, so only write what was actually read.
    res = mRecord.read(buffer, 0, bufSize, AudioRecord.READ_NON_BLOCKING);
    if (res > 0)
        mTrack.write(buffer, 0, res, AudioTrack.WRITE_BLOCKING);
}

// Stop recording
mRecord.stop();
mRecord.release();
mRecord = null;

// Stop playback
mTrack.stop();
mTrack.release();
mTrack = null;
I'm running on a Nexus 5X, my own AOSP custom ROM, Android 7.1.1. I need to find the place which will allow call recording to work - probably somewhere in hardware/qcom/audio/hal in platform code.
I've also been looking at the function voice_check_and_set_incall_rec_usecase in hardware/qcom/audio/hal/voice.c. However, I wasn't able to make sense of it (i.e., how to make it work the way I want it to).
UPDATE #3:
I've opened a more-specific question about using AudioSource.VOICE_DOWNLINK, which might draw the right attention and will eventually help me solve this question's problem as well.
There are several possible issues that come to mind. The blank buffer might indicate that you have the wrong source selected. Also, according to https://developer.android.com/reference/android/media/AudioRecord.html#AudioRecord(int,%20int,%20int,%20int,%20int) you might not always get an exception even if something is wrong with the configuration, so you might want to confirm via getState() that your object has actually been initialized. If all else fails, you could also set the recorder's preferred device to route the call audio path directly into its input. Note that setPreferredDevice() takes an AudioDeviceInfo object, not a type constant like AudioDeviceInfo.TYPE_BUILTIN_EARPIECE, so the device has to be looked up first. Yeah, it's kinda dirty and hacky, but perhaps it suits the purpose.
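A minimal sketch of that lookup (assuming API 23+; that TYPE_TELEPHONY is the device type carrying the call downlink is my assumption and may differ per device):
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
for (AudioDeviceInfo dev : am.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
    // TYPE_TELEPHONY is an assumption; dump getType() of all inputs to verify
    if (dev.getType() == AudioDeviceInfo.TYPE_TELEPHONY) {
        mRecord.setPreferredDevice(dev);
        break;
    }
}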
The other thing that puzzled me is that instead of using the builder class, you configured the object directly via its constructor. Is there a specific reason why you don't want to use AudioRecord.Builder (there's even a nice example at https://developer.android.com/reference/android/media/AudioRecord.Builder.html ) instead?
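Roughly like this (API 23+), with the initialization check included, since the constructor path can fail silently and leave an uninitialized object rather than throwing:
AudioRecord rec = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.VOICE_DOWNLINK)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .build();
if (rec.getState() != AudioRecord.STATE_INITIALIZED) {
    // configuration was rejected; bail out instead of reading blank buffers
}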
I am attempting to intersperse sound effects with text in a Xamarin Android project, but I get an opaque error callback.
I first cache the earcon in a path given by
var filename = Path.Combine(Application.Context.CacheDir.ToString(), id + ".wav");
(where id is unique to the earcon; the result on one emulator is /data/data/My.Package/cache/fx1.wav) and then call
var result = _TTS.AddEarcon(id.ToString(), filename);
The result is SUCCESS.
Later I attempt to play it using
var parms = new Android.OS.Bundle();
// https://code.google.com/p/android/issues/detail?id=64925
parms.PutInt(TextToSpeech.Engine.KeyParamStream, 3); // AudioManager.STREAM_MUSIC
parms.PutString(TextToSpeech.Engine.KeyParamUtteranceId, "earcon");
var result = _TTS.PlayEarcon(id.ToString(), QueueMode.Add, parms, "earcon");
The result is a callback to OnError(string) on my registered UtteranceProgressListener.
I'm using API level 21, so I would expect the OnError(string, TextToSpeechError) overload to be called instead; without an error code, I'm unsure what the problem is.
My best guess was that the audio format was unsupported, but this happens even with a mono 16000 Hz 16-bit stream, which should be supported unconditionally. The file plays back from the cached location via MediaPlayer without problems, and in fact my best workaround is to play a silent utterance of the right length and use the UtteranceProgressListener to trigger a MediaPlayer playing the file.
I observe the same problem on an emulated machine and on a real tablet.
What other causes for the error could there be? Is there any way to tell what the cause is other than guessing and trying a workaround?
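For what it's worth, the silent-utterance workaround mentioned above looks roughly like this in native Java terms (API 21+; tts, context, earconUri, and earconMs are placeholders for my engine, context, cached file, and its duration):
// Queue silence of the earcon's length, then start a MediaPlayer on completion.
tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
    @Override public void onStart(String utteranceId) {}
    @Override public void onError(String utteranceId) {}
    @Override public void onDone(String utteranceId) {
        if ("earcon".equals(utteranceId)) {
            MediaPlayer.create(context, earconUri).start();
        }
    }
});
tts.playSilentUtterance(earconMs, TextToSpeech.QUEUE_ADD, "earcon");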
I have been trying to capture audio within a native Linux program running on an Android device via adb shell.
Since I seemed to be getting only (very quiet) noise, i.e. no actual signal (interestingly, an Android/Java program doing something similar did show there was a signal on that input), I executed alsa_amixer, which had one entry that looked like the right one:
Simple mixer control 'Capture',0
Capabilities: cvolume cswitch penum
Capture channels: Front Left - Front Right
Limits: Capture 0 - 63
Front Left: Capture 31 [49%] [0.00dB] [off]
Front Right: Capture 31 [49%] [0.00dB] [off]
"off". That would explain the noise.
So I looked for examples of how to use alsa_amixer to unmute the channels. I found different suggestions for parameters like "49% on" or "49% unmute", or just "unmute", none of which works. (If the volume% is left out, it says "Invalid command!"; otherwise the volume is set, but the on/unmute part is ignored.)
I also searched for how to do this programmatically (which I'll ultimately need to do, although the manual approach would be helpful for now), but wasn't too lucky there.
The only ALSA lib function I found that sounds like it could do something like that is snd_mixer_selem_set_capture_switch_all, but the docs don't say what the parameter does (1/0 is not on/off, I tried that ;) )
The manual approach of setting these things via alsa_amixer does work - but only if Android is built with BoardConfigCommon.mk modified at the entry BOARD_USES_ALSA_AUDIO := false, instead of true.
Yes, this will probably disable ALSA for Android, which is why it no longer meddles with the mixer settings.
To the Android programmers out there: note that this is a very niche use case, of course, as was to be expected from my original post to begin with.
This is not what most people would want to do.
I just happen to be tinkering with an Android device here in unusual ways ;-)
Just posting the code inline, as the asker suggested; I also don't like external links.
#include <alsa/asoundlib.h>

int main()
{
    snd_mixer_t *handle;
    snd_mixer_selem_id_t *sid;

    // Open the mixer for the default card and load its elements
    snd_mixer_open(&handle, 0);
    snd_mixer_attach(handle, "default");
    snd_mixer_selem_register(handle, NULL, NULL);
    snd_mixer_load(handle);

    // Look up the simple element 'Capture',0 seen in the alsa_amixer output
    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_index(sid, 0);
    snd_mixer_selem_id_set_name(sid, "Capture");
    snd_mixer_elem_t *elem = snd_mixer_find_selem(handle, sid);

    // Unmute both channels (nonzero = capture enabled) and set the level to 0.00 dB
    snd_mixer_selem_set_capture_switch_all(elem, 1);
    snd_mixer_selem_set_capture_dB_all(elem, 0, 0);

    snd_mixer_close(handle);
    return 0;
}
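(For reference, on a regular desktop Linux box this builds with something like gcc unmute.c -o unmute -lasound; for the adb-shell use case above it would need to be cross-compiled against the ALSA library from the Android tree instead.)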
I'm working on some music analysis using the Visualizer class on Android 2.3.1. I am finding that the FFT and waveform magnitudes are affected by the volume of the device. This means that if the user has the volume turned down, I receive little or no FFT data.
I've tested this on a Motorola Xoom, Samsung Galaxy Tab and the emulator and it behaves this way.
I am using the code below:
mp = new MediaPlayer();
mp.setDataSource("/sdcard/sine1.wav");
mp.prepare();
mp.setLooping(true);
mp.start();
int audioSessionID = mp.getAudioSessionId();
v = new Visualizer(audioSessionID);
v.setEnabled(true);
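(For completeness, the magnitudes in question are then read via a capture listener, which has to be registered before setEnabled(true); roughly:)
v.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
v.setDataCaptureListener(new Visualizer.OnDataCaptureListener() {
    @Override
    public void onWaveFormDataCapture(Visualizer vis, byte[] waveform, int rate) { }
    @Override
    public void onFftDataCapture(Visualizer vis, byte[] fft, int rate) {
        // fft[0] and fft[1] are the DC and Nyquist real parts; the rest are
        // real/imaginary pairs - these are the magnitudes that vary with volume
    }
}, Visualizer.getMaxCaptureRate() / 2, false, true);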
Looking at the docs for the Visualizer class, it seems that if we pass in a valid audio session ID, the visualizer should operate on that audio session. Instead, it appears that the Visualizer is operating on the output mix.
Has anyone else encountered this or found a way around it?
Thanks
I was also facing the same problem, but it works when I enable an Equalizer and a Visualizer for the same session ID. I don't know the reason for it; I checked by removing the Equalizer from the Visualizer example in the API demos, and it then behaves as you said.
Equalizer mEqualizer = new Equalizer(0, SessionId);
mEqualizer.setEnabled(true); // need to enable equalizer
Visualizer mVisualizer = new Visualizer(SessionId);
There are two options for the Visualizer scaling mode:
SCALING_MODE_AS_PLAYED and SCALING_MODE_NORMALIZED
If you want the Visualizer to be normalized, as in it's consistent no matter what the volume is, then use SCALING_MODE_NORMALIZED.
mVisualizer.scalingMode = Visualizer.SCALING_MODE_NORMALIZED
Keep in mind though that this drastically changes the values being sent to the Visualizer, so other adjustments may be needed.