Audio Codec For Android

Following is some code I came across while making changes to the audio files. Can you please tell me what exactly this code does, and what "RX" in the following code specifies? Any leads would be great.
SectionDevice
Name "OutputLime"
Comment "Rx Lime jack output"
EnableSequence
'SLIM_0_RX Channels':0:Two
'RX3 MIX1 INP1':0:RX1
'RX5 MIX1 INP1':0:RX2
'RX4 DSM MUX':0:CIC_OUT
'RX6 DSM MUX':0:CIC_OUT
'LINEOUT1 Volume':1:66
'LINEOUT2 Volume':1:66
'LINEOUT3 Volume':1:66
'LINEOUT4 Volume':1:66
EndSequence
DisableSequence
'RX3 MIX1 INP1':0:ZERO
'RX5 MIX1 INP1':0:ZERO
'RX4 DSM MUX':0:DSM_INV
'RX6 DSM MUX':0:DSM_INV
'LINEOUT1 Volume':1:0
'LINEOUT2 Volume':1:0
'LINEOUT3 Volume':1:0
'LINEOUT4 Volume':1:0
EndSequence

what does "RX" in the following code specify
Output devices or paths are typically labeled RX, and conversely, input devices/paths are labeled TX. You can remember that by thinking of an RX device as something that Receives audio data from the system (e.g. a speaker), and a TX device as something that Transmits audio data to the system (e.g. a microphone).
What this code does is define an audio output device named "OutputLime" (is that a typo of "OutputLine", by the way?), and the actions that should be taken by the ALSA Use Case Manager (UCM) when that device is enabled or disabled.
Each line in the enable/disable sequences specifies an ALSA control (on the ALSA card corresponding to your codec, which typically would be card 0), and what value to write to the control.
SLIM_0_RX refers to a channel on the SLIMBus connecting the DSP and the codec. Typically you'll see a corresponding 'SLIMBUS_0_RX Audio Mixer MultiMedia1':1:1 in the verbs in your UCM file that refer to playback that should be routed through the codec, which basically says that anything written to MultiMedia1 (pcmC0D0p) should go to SLIM_0_RX.
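For illustration, here is a sketch of how that routing line typically sits in a verb section of the same UCM file; the verb name "HiFi" is an assumption, and names vary per platform:
SectionVerb
Name "HiFi"
EnableSequence
'SLIMBUS_0_RX Audio Mixer MultiMedia1':1:1
EndSequence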
So, given 'SLIM_0_RX Channels':0:Two, the code is setting this up as a stereo output device. It looks a lot like the loudspeaker device, actually.
I don't remember exactly what all those other controls represent. Some are obviously volumes, and it's not a wild guess that the others are for specifying which channel on the physical stereo device should get the left output and which should get the right output. Perhaps you can look it up in the codec data sheet if you've got one. Otherwise, you can check if the driver source code for your codec is available and look there for clues (or perhaps in the msm-pcm-routing code, assuming that this is a Qualcomm platform).

Related

Phone crashes when calling mediaCodec.configure with error MediaCodec$CodecException: Error 0x80001001

The app I am working on gets the video from the camera through a Surface and encodes it to video/avc (H264). I am doing that successfully, and it is working great on phones like the Galaxy Note 10+, but on phones like the Xiaomi Note 10S, which is a new phone, I am having this issue. Here is what I am doing:
1. Create the format:
format = MediaFormat.createVideoFormat(
    H264, videoWidth, videoHeight
).apply {
    setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0)
    setInteger(MediaFormat.KEY_BIT_RATE, bitrate)
    setInteger(MediaFormat.KEY_FRAME_RATE, videoFrameRate)
    setInteger(
        MediaFormat.KEY_COLOR_FORMAT,
        CodecCapabilities.COLOR_FormatSurface
    )
    setFloat(MediaFormat.KEY_I_FRAME_INTERVAL, 1f)
}
2. Then create encoderName:
val encoderName = MediaCodecList(
    MediaCodecList.ALL_CODECS
).findEncoderForFormat(format) // using the format I shared in the first step
3. Then create the codec:
codec = MediaCodec.createByCodecName(encoderName)
Then .setCallback(callback) // not important since we won't make it to this point; it will crash before that.
4. And this is the line where it crashes:
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE) // CRASH => MediaCodec$CodecException: Error 0x80001001
The rest:
codec.setInputSurface(surface)
codec.start()
I am suspecting the
setInteger(
    MediaFormat.KEY_COLOR_FORMAT,
    CodecCapabilities.COLOR_FormatSurface
) // I tried changing the value and completely removing this setInteger, no luck :/
Error 0x80001001, also known as OMX_ErrorUndefined, says:
"There was an error, but the cause of the error could not be determined".
The most likely cause of this error is insufficient resources. This can happen, for example, if you try to configure a hardware codec but there is not enough graphics memory available at the moment.
Suggestion 1: Make sure you release the codecs when you are done using them. You need to check all code paths.
Suggestion 2: Knowing that this can happen, you can filter the MediaCodecList, keeping all the encoders that support the given format. Then wrap the configure() call in a try/catch block and, if the call fails, try the next option from the list of codecs (see the sketch below).
Note that on most devices there are at least two codecs for H264: a hardware codec and a software codec, the former having better performance and the latter being more resilient.
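A minimal sketch of that fallback approach, assuming the format from the question (MediaFormat.MIMETYPE_VIDEO_AVC standing in for its H264 constant); the helper name createConfiguredEncoder is illustrative:
import android.media.MediaCodec
import android.media.MediaCodecList
import android.media.MediaFormat

fun createConfiguredEncoder(format: MediaFormat): MediaCodec? {
    val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
    for (info in codecList.codecInfos) {
        if (!info.isEncoder) continue
        // Skip codecs that don't handle H264 or can't take this format.
        val caps = try {
            info.getCapabilitiesForType(MediaFormat.MIMETYPE_VIDEO_AVC)
        } catch (e: IllegalArgumentException) {
            continue
        }
        if (!caps.isFormatSupported(format)) continue
        val codec = MediaCodec.createByCodecName(info.name)
        try {
            codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
            return codec // configured successfully
        } catch (e: MediaCodec.CodecException) {
            codec.release() // free the failed instance and try the next candidate
        }
    }
    return null // no encoder could be configured
}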

MediaCodec decoding to buffer does not work while decoding to surface works

The video decoding code of the app is typical, just like the example code in the MediaCodec documentation. Nothing special. The configuration statement is the following:
myMediaCodec.configure(myMediaFormat, mySurface, null, 0);
Everything works fine. However, if I change the above code to the following to decode the video to a buffer instead of a surface:
myMediaCodec.configure(myMediaFormat, null, null, 0);
then the following code:
int iOutputBufferIndex = myMediaCodec.dequeueOutputBuffer(myBufferInfo, 100000);
will always return MediaCodec.INFO_TRY_AGAIN_LATER. Even more strangely, any subsequent call of myMediaCodec.stop() or myMediaCodec.release() will hang (i.e. the call never returns or generates an exception).
This happens on a generic (AGPTek) tablet (Allwinner A31S, 1.5GHz Cortex A7 Quad Core). On a simulator and another tablet (Asus Memo Pad), everything works fine.
I am asking for any tip to help get around this problem.
Do you provide one single input buffer's worth of data before trying this, or do you pass as many packets as you can before dequeueInputBuffer also blocks or returns INFO_TRY_AGAIN_LATER? A decoder might not output data after only one packet of input (if the decoder has some delay), but if it works with Surface output, it should probably behave the same way with buffer output.
If that (queueing as many input buffers as possible) doesn't work, I would say that this sounds like a decoder bug.
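To illustrate queueing as many input buffers as possible, here is a sketch using the modern (API 21+) buffer accessors; the codec and the extractor (a MediaExtractor used as the packet source) are assumptions:
import android.media.MediaCodec

// Feed input until the decoder stops handing out input buffers,
// and only then look for output.
val bufferInfo = MediaCodec.BufferInfo()
var inputDone = false
while (!inputDone) {
    val inIndex = codec.dequeueInputBuffer(10_000)
    if (inIndex < 0) break // no free input buffer right now
    val inBuf = codec.getInputBuffer(inIndex)!!
    val size = extractor.readSampleData(inBuf, 0)
    if (size < 0) {
        // End of stream: signal it so the decoder flushes its delay.
        codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
        inputDone = true
    } else {
        codec.queueInputBuffer(inIndex, 0, size, extractor.sampleTime, 0)
        extractor.advance()
    }
}
// A decoder with delay may return INFO_TRY_AGAIN_LATER until it has
// received several packets, so keep feeding before giving up here.
val outIndex = codec.dequeueOutputBuffer(bufferInfo, 100_000)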

Who will call "Visualizer_process" in Android

I want to capture the audio waveform from the audio buffer. I found that android.media.audiofx.Visualizer can do such a thing, but it can only return partial and low-quality audio content.
I found that android.media.audiofx.Visualizer calls the function Visualizer_command(VISUALIZER_CMD_CAPTURE) in android4.0\frameworks\base\media\libeffects\visualizer.
I found that the function Visualizer_process reduces the audio content to low quality. I want to rewrite Visualizer_process, and want to find who calls Visualizer_process, but I cannot find the caller in the Android source code. Can anyone help me?
Thanks very much!
The AudioFlinger::PlaybackThread::threadLoop calls AudioFlinger::EffectChain::process_l, which calls AudioFlinger::EffectModule::process, which finally calls the actual effect's process function.
As you can see in AudioFlinger::EffectModule::process, there's the call
int ret = (*mEffectInterface)->process(mEffectInterface,
                                       &mConfig.inputCfg.buffer,
                                       &mConfig.outputCfg.buffer);
mEffectInterface is an effect_handle_t, which is an effect_interface_s**. The effect_interface_s struct contains a number of function pointers (process, command, ...). These are filled out with pointers to the actual effect's functions when the effect is loaded. The effects provide these pointers through a struct (in EffectVisualizer it's gVisualizerInterface).
Note that the exact location of these functions may differ between different Android releases. So if you're looking at Android 4.0 you might find some of them in AudioFlinger.cpp (or somewhere else).

ALSA - unmuting devices?

I have been trying to capture audio within a native Linux program running on an Android device via adb shell.
Since I seemed to be getting only (very quiet) noise, i.e. no actual signal (interestingly, an Android/Java program doing something similar did show there was a signal on that input), I executed alsa_amixer, which had one entry that looked like the right one:
Simple mixer control 'Capture',0
Capabilities: cvolume cswitch penum
Capture channels: Front Left - Front Right
Limits: Capture 0 - 63
Front Left: Capture 31 [49%] [0.00dB] [off]
Front Right: Capture 31 [49%] [0.00dB] [off]
"off". That would explain the noise.
So I looked for examples of how to use alsa_amixer to unmute the channels. I found different suggestions for parameters like "49% on" or "49% unmute", or just "unmute", none of which work. (If the volume % is left out, it says "Invalid command!"; otherwise, the volume is set but the on/unmute is ignored.)
I also searched for how to do this programmatically (which I'll ultimately need to do, although the manual approach would be helpful for now), but wasn't too lucky there.
The only ALSA lib function I found that sounds like it could do something like that is snd_mixer_selem_set_capture_switch_all, but the docs don't say what the parameter does (1/0 is not on/off, I tried that ;)).
The manual approach to set these things via alsa_amixer does work - but only if Android is built with 'BoardConfigCommon.mk' modified so that the entry reads BOARD_USES_ALSA_AUDIO := false instead of true.
Yeah, this will probably disable ALSA for Android, which is why it no longer meddles with the mixer settings.
To the Android programmers out there: note that this is a very niche use case, of course, as was to be expected from my original post.
This is not what most people would want to do.
I just happen to tinker with an android device here in unusual ways ;-)
Just posting the code as the question asker suggested; I also don't like external links.
#include <alsa/asoundlib.h>

int main()
{
    snd_mixer_t *handle;
    snd_mixer_selem_id_t *sid;

    /* Open the mixer on the default card and load its elements. */
    snd_mixer_open(&handle, 0);
    snd_mixer_attach(handle, "default");
    snd_mixer_selem_register(handle, NULL, NULL);
    snd_mixer_load(handle);

    /* Look up the simple mixer element named "Capture", index 0. */
    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_index(sid, 0);
    snd_mixer_selem_id_set_name(sid, "Capture");
    snd_mixer_elem_t *elem = snd_mixer_find_selem(handle, sid);

    /* Capture switch: 1 = on (unmuted), 0 = off (muted). */
    snd_mixer_selem_set_capture_switch_all(elem, 1);

    /* Set the capture volume to 0 dB; the last argument is the
       rounding direction. */
    snd_mixer_selem_set_capture_dB_all(elem, 0, 0);

    snd_mixer_close(handle);
    return 0;
}

How avoid automatic gain control with AudioRecord?

How can I do audio recordings using android.media.AudioRecord without any smartphone-manufacturer-dependent fancy signal processing like automatic gain control (AGC), equalization, noise suppression, or echo cancellation - just the pure microphone signal?
Background
MediaRecorder.AudioSource provides nine constants,
DEFAULT and MIC initially being there,
VOICE_UPLINK, VOICE_DOWNLINK, and VOICE_CALL added in API level 4,
CAMCORDER and VOICE_RECOGNITION added in API 7,
VOICE_COMMUNICATION added in API 11,
REMOTE_SUBMIX added in API 19 but not available to third-party applications.
But none of them does a clean job across all smartphones. Rather, it seems I have to find out for myself which device uses which combination of signal-processing blocks for which MediaRecorder.AudioSource constant.
Would be nice to have a tenth constant like PURE_MIC added in API level 20.
But as long as this is not available, what can I do instead?
Short answer is "Nothing".
The AudioSources correspond to various logical audio input devices depending on the accessories that you have connected to the phone and the current use-case, which in turn corresponds to physical devices (primary built-in mic, secondary mic, wired headset mic, etc) with different tunings.
Each such combination of physical device and tuning is trimmed by the OEM to meet both external requirements (e.g. CTS, operator requirements, etc) and internal acoustic requirements set by the OEM itself. This process may cause the introduction of various filters - such as AGC, noise suppression, equalization, etc - into the audio input path at the hardware codec or multimedia DSP level.
While a PURE_MIC source might be useful for some applications, it's not something that's available today.
On many devices you can control things like microphone gain, and possibly even the filter chain, by using amixer to write to the hardware codec's ALSA controls. However, this would obviously be a very platform-specific approach, and I also suspect that you have to be running as either the root or audio user to be allowed to do this.
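For example, something along these lines; the card number and the control name 'ADC1 Volume' are assumptions (list the real ones with amixer -c 0 controls):
amixer -c 0 cset iface=MIXER,name='ADC1 Volume' 80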
Some devices add an AGC effect to the sound input path by default. Therefore, you need to obtain a reference to the corresponding AudioEffect object and force it to be disabled.
First, obtain the AutomaticGainControl object linked to the AudioRecord audio session, and then just set it disabled:
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(
        myAudioRecord.getAudioSessionId()
    );
    agc.setEnabled(false);
}
Note: Most of the audio sources (including DEFAULT) apply processing to the audio signal. To record raw audio, select UNPROCESSED. Some devices do not support unprocessed input, so first call AudioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) to verify it's available. If it is not, try using VOICE_RECOGNITION instead, which does not employ AGC or noise suppression. You can use UNPROCESSED as an audio source even when the property is not supported, but there is no guarantee whether the signal will be unprocessed or not in that case.
Android documentation link: https://developer.android.com/guide/topics/media/mediarecorder.html#example
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
if (audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) != null)
    mRecorder.setAudioSource(MediaRecorder.AudioSource.UNPROCESSED);
else
    mRecorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
MIC should be fine, and for the rest you need to know whether they are supported.
I've made a class for this:
enum class AudioSource(val audioSourceValue: Int, val minApi: Int) {
    VOICE_CALL(MediaRecorder.AudioSource.VOICE_CALL, 4),
    DEFAULT(MediaRecorder.AudioSource.DEFAULT, 1),
    MIC(MediaRecorder.AudioSource.MIC, 1),
    VOICE_COMMUNICATION(MediaRecorder.AudioSource.VOICE_COMMUNICATION, 11),
    CAMCORDER(MediaRecorder.AudioSource.CAMCORDER, 7),
    VOICE_RECOGNITION(MediaRecorder.AudioSource.VOICE_RECOGNITION, 7),
    VOICE_UPLINK(MediaRecorder.AudioSource.VOICE_UPLINK, 4),
    VOICE_DOWNLINK(MediaRecorder.AudioSource.VOICE_DOWNLINK, 4),
    @TargetApi(Build.VERSION_CODES.KITKAT)
    REMOTE_SUBMIX(MediaRecorder.AudioSource.REMOTE_SUBMIX, 19),
    @TargetApi(Build.VERSION_CODES.N)
    UNPROCESSED(MediaRecorder.AudioSource.UNPROCESSED, 24);

    fun isSupported(context: Context): Boolean =
        when {
            Build.VERSION.SDK_INT < minApi -> false
            this != UNPROCESSED -> true
            else -> {
                val audioManager: AudioManager =
                    context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
                Build.VERSION.SDK_INT >= Build.VERSION_CODES.N &&
                    "true" == audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED)
            }
        }

    companion object {
        fun getAllSupportedValues(context: Context): ArrayList<AudioSource> {
            val values = AudioSource.values()
            val result = ArrayList<AudioSource>(values.size)
            for (value in values)
                if (value.isSupported(context))
                    result.add(value)
            return result
        }
    }
}
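A usage sketch; the recorder (a MediaRecorder) is an assumption:
val supported = AudioSource.getAllSupportedValues(context)
// Prefer UNPROCESSED when the device supports it; fall back to plain MIC.
val source = if (AudioSource.UNPROCESSED in supported) AudioSource.UNPROCESSED else AudioSource.MIC
recorder.setAudioSource(source.audioSourceValue)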
