How to read the voice intensity from the microphone in Android? (voice)

I need to implement functionality in my app that reads the intensity of the voice through the microphone. I don't need to recognize any words, just read the intensity. Can someone help me, please?
I understand that I have to use the AudioRecord class, but I don't understand what steps I need to write in my code. Is it really necessary to save a bit of the voice to the SD card, convert it to PCM, and then read the maximum of that signal?

The AudioRecord class will let you record into a buffer. You can then choose to process the buffer or save it to the SD card, depending on your needs. Which you want to do depends entirely on your application. Do you need the data after you process it? Or is the processed result all you need? Do you intend to play back the recordings?
A simplified example of how to use the AudioRecord class follows:
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.startRecording();
short[] buf = new short[bufferSize];
int n = 0;
while (<some condition>) {
    n = recorder.read(buf, 0, bufferSize);
    process(buf);   // only the first n samples of buf are valid
}
recorder.stop();
recorder.release();
You would obviously want to put the above code in a thread outside of the main UI thread.
You need to make sure that whatever you do in process is quick enough that you can get back around to reading the data before the buffer fills up, or you will drop data. Sample rate and buffer size will depend on how you are processing the data and what your latency requirements are.
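For the original question of reading the voice intensity (not recognizing words), process() could simply compute the RMS amplitude of each buffer. A minimal sketch, assuming 16-bit PCM samples and an illustrative helper name:
static double rmsLevel(short[] buf, int numSamples) {
    // Estimate the "loudness" of one buffer as its root-mean-square amplitude.
    // For 16-bit PCM the result ranges from 0 (silence) to 32767 (full scale).
    double sumSquares = 0;
    for (int i = 0; i < numSamples; i++) {
        sumSquares += (double) buf[i] * buf[i];
    }
    return numSamples > 0 ? Math.sqrt(sumSquares / numSamples) : 0;
}
If a logarithmic scale is more convenient, you can convert that value to a rough dBFS figure with 20 * Math.log10(rms / 32767.0).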
After you get all that working, you may decide that you want to put the phone into 'Speaker Phone' in order to get better gain through the mic:
AudioManager amAudioManager;
amAudioManager = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
amAudioManager.setMode(AudioManager.MODE_IN_CALL);
amAudioManager.setSpeakerphoneOn(true);
Yes, you have to put the phone in IN_CALL in order to enable the speaker phone. Yes, some phones apparently disable the ability to record when IN_CALL.

Related

Modifying in-call voice playback in Android custom ROM

I would like to modify Android OS (official image from AOSP) to add preprocessing to the playback audio of a normal phone call.
I've already achieved this filtering for app audio playback (by modifying HAL and audioflinger).
I'm OK with targeting only a specific device (Nexus 5X). Also, I only need to filter playback - I don't care about recording (uplink).
UPDATE #1:
To make it clear - I'm OK with modifying Qualcomm-specific drivers, or whatever part that it is that runs on Nexus 5X and can help me modify in-call playback.
UPDATE #2:
I'm attempting to create a Java layer app that routes the phone playback to the music stream in real time.
I've already succeeded in installing it as a system app, getting permissions for initializing AudioRecord with AudioSource.VOICE_DOWNLINK. However, the recording gives blank samples; it doesn't record the voice call.
This is the code inside my worker thread:
// Start recording
int recBufferSize = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK, 44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, recBufferSize);
// Start playback
int playBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        playBufferSize, AudioTrack.MODE_STREAM);
mRecord.startRecording();
mTrack.play();

int bufSize = 1024;
short[] buffer = new short[bufSize];
int res;
while (!interrupted()) {
    // Pull recording buffers and play back
    res = mRecord.read(buffer, 0, bufSize, AudioRecord.READ_NON_BLOCKING);
    mTrack.write(buffer, 0, res, AudioTrack.WRITE_BLOCKING);
}

// Stop recording
mRecord.stop();
mRecord.release();
mRecord = null;
// Stop playback
mTrack.stop();
mTrack.release();
mTrack = null;
I'm running on a Nexus 5X, my own AOSP custom ROM, Android 7.1.1. I need to find the place which will allow call recording to work - probably somewhere in hardware/qcom/audio/hal in platform code.
Also, I've been looking at the function voice_check_and_set_incall_rec_usecase in hardware/qcom/audio/hal/voice.c. However, I wasn't able to make sense of it (how to make it work the way I want it to).
UPDATE #3:
I've opened a more-specific question about using AudioSource.VOICE_DOWNLINK, which might draw the right attention and will eventually help me solve this question's problem as well.
There are several possible issues that come to mind. The blank buffer might indicate that you have the wrong source selected. Also, according to https://developer.android.com/reference/android/media/AudioRecord.html#AudioRecord(int,%20int,%20int,%20int,%20int), you won't always get an exception even if something is wrong with the configuration, so you might want to confirm that your AudioRecord object has actually been initialized properly. If all else fails, you could also call setPreferredDevice() to route the phone's built-in earpiece (AudioDeviceInfo.TYPE_BUILTIN_EARPIECE) directly to the input of your recorder. Yeah, it's kinda dirty and hacky, but perhaps it suits the purpose.
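A hedged sketch of that idea: setPreferredDevice() takes an AudioDeviceInfo rather than a type constant, so you would look the device up first (whether the earpiece is actually exposed as a routable device for a recorder is device-specific):
// Illustrative only (API 23+): find the built-in earpiece among the known
// audio devices and ask the recorder to prefer it.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_ALL)) {
    if (device.getType() == AudioDeviceInfo.TYPE_BUILTIN_EARPIECE) {
        mRecord.setPreferredDevice(device);
        break;
    }
}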
The other thing that puzzled me is that instead of using the builder class, you configured the object directly via its constructor. Is there a specific reason why you don't want to use AudioRecord.Builder instead (there's even a nice example at https://developer.android.com/reference/android/media/AudioRecord.Builder.html)?
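For reference, a minimal sketch of the Builder form, mirroring the parameters from the constructor call in the question (not a tested VOICE_DOWNLINK configuration):
// AudioRecord.Builder construction (API 23+); build() throws if the combination is unsupported.
AudioRecord rec = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.VOICE_DOWNLINK)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .setBufferSizeInBytes(recBufferSize)
        .build();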

Android AudioTrack stream cuts out early

I'm trying to play some looping sound in Android, and I have that going pretty well for me. All good things must come to an end, though, and I would like for that to include my audio loop. However, if I call AudioTrack.release() after this loop, as I should, the end of my audio stream gets cut off - there is extra data that I know I'm supposed to hear, but don't.
I've verified this by putting in a Thread.sleep(2000) before the release - the sound plays correctly with that in there. My code looks something like this:
// Initialize AudioTrack
int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 2 * minBufferSize, AudioTrack.MODE_STREAM);
mAudioTrack.play();
// Play looping sound
while (stuff) {
mAudioTrack.write(stuff);
}
// Play one last bit of sound before returning
mAudioTrack.write(lastSound);
// Block until the AudioTrack has played everything we've given it
Thread.sleep(2000);
// Get rid of the AudioTrack
mAudioTrack.release();
I suppose I could leave the Thread.sleep(2000) in there and call it a day, but that sounds messy and irresponsible to me. I'd like to either have a while() loop block for the most appropriate amount of time, or use AudioTrack.setPlaybackPositionUpdateListener() and put the release() in there.
If I go the first route, I need something to block on, and AudioTrack.getPlayState() appears to always report the track as playing. So I'm stuck there.
If I go the second route, I need a way of getting the position in the AudioTrack buffer that was written to last, so I can tell the AudioTrack what position I'm waiting for it to play up to. I don't have any ideas as to how to get that information, though.
I guess I don't really care which way I do it, so any help towards solving the problem one way or the other would be much appreciated.
The problem is related to the buffer size in the AudioTrack.
Imagine minBufferSize is 8k. This means that the AudioTrack will only play sound once the buffer is full.
mAudioTrack.write(stuff);
If stuff is only 4K, the AudioTrack will wait for the next write() call until it has enough data to play.
Conclusion: you need to keep track of how much data you have written, and at the end of your playback feed the AudioTrack enough dummy bytes to complete minBufferSize. To make things easier you could just feed a whole minBufferSize worth of silence.
By the way, to feed dummy bytes or silence, just fill the data with zeroes.
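A hedged sketch of that suggestion, reusing the names from the question's snippet (getMinBufferSize() returns a size in bytes, so a short array of that length is more than enough padding):
// After the last real audio, write a buffer's worth of zero samples (silence)
// so the tail of the real data is pushed out of the AudioTrack's internal buffer.
mAudioTrack.write(lastSound, 0, lastSound.length);
short[] silence = new short[minBufferSize]; // Java zero-fills new arrays
mAudioTrack.write(silence, 0, silence.length);
mAudioTrack.release();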

Need to understand how AudioRecord and AudioTrack work for raw PCM capture and playback

I use the following code in a Thread to capture raw audio samples from the microphone and play them back through the speaker.
public void run() {
    short[] lin = new short[SIZE_OF_RECORD_ARRAY];
    int num = 0;
    // am = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE); // -> MOVED THESE TO init()
    // am.setMode(AudioManager.MODE_IN_COMMUNICATION);
    record.startRecording();
    track.play();
    while (passThroughMode) {
    // while (!isInterrupted()) {
        num = record.read(lin, 0, SIZE_OF_RECORD_ARRAY);
        for (int i = 0; i < lin.length; i++)
            lin[i] *= WAV_SAMPLE_MULTIPLICATION_FACTOR;
        track.write(lin, 0, num);
    }
    // /*
    record.stop();
    track.stop();
    record.release();
    track.release();
    // */
}
where record is an AudioRecord and track is an AudioTrack. I need to know in detail (and in a simplified way, if possible) how AudioRecord stores PCM data and how AudioTrack plays it back. This is how I have understood it so far:
As the while() loop runs continuously, record obtains SIZE_OF_RECORD_ARRAY samples (1024 for now), as shown in the figure. The samples are saved contiguously in the lin[] array of shorts (16-bit shorts, as I am using 16-bit PCM encoding). This is done by record.read(). Then track.write() passes these samples to the speaker, where they are played by the hardware. Is this correct, or am I missing something here?
As for how the samples are laid out in memory: they're just arrays of linear approximations to a sound wave, taken at discrete times (like your figure shows). In the case of stereo, the samples will be interleaved (LRLRLRLR...).
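For illustration, a minimal sketch (hypothetical helper, not from the question) of what that interleaving means when you want the channels separated:
// Split an interleaved stereo PCM-16 buffer (L, R, L, R, ...) into separate
// left and right channel arrays; each must hold stereo.length / 2 samples.
static void deinterleave(short[] stereo, short[] left, short[] right) {
    for (int i = 0; i < stereo.length / 2; i++) {
        left[i] = stereo[2 * i];       // even indices: left channel
        right[i] = stereo[2 * i + 1];  // odd indices: right channel
    }
}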
When it comes to the path the audio takes, you're essentially right, although there are a few more steps involved:
Writing data to your Java AudioTrack causes it to make a JNI (Java Native Interface) call to a native helper class, which in turn calls the native AudioTrack class.
The AudioTracks are owned by the AudioFlinger, which periodically takes data from all the AudioTracks on a given output thread (which have been mixed by the AudioMixer) and writes it to the audio HAL output stream class.
From there the data goes to the user-space ALSA library, and through a couple of intermediate steps to the kernel-space PCM driver. From there it typically passes through some kind of DSP that applies various acoustic compensation filters, and eventually makes its way to the hardware codec, which controls the speaker DAC and amplifiers.
When recording from the internal microphone(s) you'd have more or less the same steps, except that they'd be done in the opposite order.
Note that some of these steps (essentially everything from the audio HAL and below) are platform-specific, and therefore might differ between platforms from different vendors (and even different platforms from the same vendor).

Playing back sound coming from microphone in real-time

I've been trying to get my application recording the sound coming from the microphone and playing it back in (approximately) real-time, however without success.
I'm using the AudioRecord and AudioTrack classes for recording and playback, respectively. I've tried different approaches: I've tried to record the incoming sound and write it to a file, and it worked fine. I've also tried to play back sound from that file afterwards with AudioTrack, and it worked fine too. The problem is when I try to play the sound in real time, instead of reading a file after it's written.
Here is the code:
// variables
private int audioSource = MediaRecorder.AudioSource.MIC;
private int samplingRate = 44100; /* in Hz */
private int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
private int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private int bufferSize = AudioRecord.getMinBufferSize(samplingRate, channelConfig, audioFormat);
private int sampleNumBits = 16;
private int numChannels = 1;
// …
AudioRecord recorder = new AudioRecord(audioSource, samplingRate, channelConfig, audioFormat, bufferSize);
recorder.startRecording();
isRecording = true;
AudioTrack audioPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
if (audioPlayer.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
    audioPlayer.play();
// capture data and play it back
byte[] data = new byte[bufferSize];
int readBytes = 0, writtenBytes = 0;
do {
    readBytes = recorder.read(data, 0, bufferSize);
    if (AudioRecord.ERROR_INVALID_OPERATION != readBytes) {
        writtenBytes += audioPlayer.write(data, 0, readBytes);
    }
} while (isRecording);
A java.lang.IllegalStateException is thrown, caused by "play() called on an uninitialized AudioTrack".
However, if I change the AudioTrack initialization, for example to use a sampling rate of 8000 Hz and an 8-bit sample format (instead of 16), it no longer throws the exception and the application runs, although it produces horrible noise.
When I play the AudioTrack from a file, there is no problem with its initialization; I tried 44100 Hz and 16 bits and it worked properly, producing the correct sound.
Any help?
All native Android audio is encoded. You can only play out PCM formats in real time, or use a special streaming codec, which I don't think is trivial on Android.
The point is that if you want to record/play out audio simultaneously, you would have to create your own audio buffer and store raw PCM-encoded audio samples in there (I'm not sure if you're thinking duh! or whether this is all over your head, so I'll try to be clear but not to chew your own gum).
PCM is a digital representation of an analog signal in which your audio samples are a set of "snapshots" of the original acoustic wave. Because all kinds of clever mathematicians and engineers saw the potential in trying to reduce the number of bits you represent this data with, they came up with all sorts of encoders. The encoded (compressed) signal is represented very differently from the raw PCM signal and has to be decoded (en-cod-er+dec-oder = codec). Unless you're using special algorithms and media streaming codecs, it's impossible to play back an encoded signal like you're trying to, because it's not encoded sample by sample, but rather frame by frame, where you need the whole frame of samples, if not the complete signal, to decode this frame.
The way to do it is to manually store the audio samples coming from the microphone buffer and manually feed them to the output buffer. You will have to do some coding for that, but I believe there are some open-source apps that you can look at and take a peek at their source (unless you're willing to sell your app later on, of course, but that's a whole different discussion).
If you're developing for Android 2.3 or later and are not too scared of programming in native code, you can try using OpenSL ES. The Android-specific features of OpenSL ES are listed here. This platform allows you somewhat more flexible audio manipulation and you might find just what you need, if your app will be highly reliant on audio processing.
A java.lang.IllegalStateException is thrown, caused by "play() called on an uninitialized AudioTrack".
It is because the buffer size is too small. I tried bufferSize += 2048; and it was OK then.
I had a similar problem and I solved it by adding this permission to the manifest file:
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
Make sure that your data array is large enough for your samplingRate.
For example: if you use a samplingRate of 44100, your data byte array's length should be 44101 or more.

Cannot access AudioRecorder

I am trying to take small recordings to find the Sound Pressure Level from a service, but Android won't give me access to the hardware. I get the following errors in Logcat:
The error comes from the following code:
AudioRecord recordInstance = null;
// We're important...
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
short bufferSize = 4096; // 2048;
recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,   // line 167
        this.getFrequency(), this.getChannelConfiguration(),
        this.getAudioEncoding(), bufferSize);                      // object not created
tempBuffer = new short[bufferSize];
recordInstance.startRecording();
What happens is that recordInstance is never properly created, so when it gets to the end and calls recordInstance.startRecording(), recordInstance is still null. Android rejects my program's request at the construction. Does anyone know what those errnos indicate? I couldn't find a list online.
AudioRecord Docs
Thanks
Check three things:
1. Your permissions: you need to declare the RECORD_AUDIO permission.
2. Have you run this once and not called recordInstance.release()? If so, you may have tied up the audio resources, and it will likely not work until you restart the phone. I have found that in my experience, anyway.
3. The buffer size: there is a static method, AudioRecord.getMinBufferSize(), for finding the minimum valid buffer size. A sketch combining these checks follows.
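Putting those three checks together, a minimal sketch (the sample rate and channel configuration here are illustrative, not taken from the question):
// Manifest: <uses-permission android:name="android.permission.RECORD_AUDIO"/>
int minSize = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize);
if (rec.getState() == AudioRecord.STATE_INITIALIZED) {
    rec.startRecording();
    // ... read(), then stop() when finished ...
    rec.stop();
}
rec.release(); // always release, even on failure, or the mic may stay tied up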
