Record audio loops through sample rates? - android

I am trying to read and understand audio in Android. In my search I came across this article, where the author has written code to record audio in WAV format. But there is one thing I don't fully understand, and that is the first loop of his code:
public class ExtAudioRecorder
{
    private final static int[] sampleRates = {44100, 22050, 11025, 8000};

    public static ExtAudioRecorder getInstanse(Boolean recordingCompressed)
    {
        ExtAudioRecorder result = null;
        if (recordingCompressed)
        {
            result = new ExtAudioRecorder( false,
                                           AudioSource.MIC,
                                           sampleRates[3],
                                           AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                           AudioFormat.ENCODING_PCM_16BIT);
        }
        else
        {
            int i = 0;
            do
            {
                result = new ExtAudioRecorder( true,
                                               AudioSource.MIC,
                                               sampleRates[i],
                                               AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                               AudioFormat.ENCODING_PCM_16BIT);
            } while((++i < sampleRates.length) & !(result.getState() == ExtAudioRecorder.State.INITIALIZING));
        }
        return result;
    }
}
He gives some basic information about it, but I don't completely understand it. Does this have anything to do with the performance of different types of Android devices? Anyway, I hope somebody can clear this up for me :)

He is trying to initialize the audio recorder with different sample rates from the set {44100, 22050, 11025, 8000}.
Depending on the underlying hardware, not all sample rates may be supported by the device.
The documentation says:
"44100Hz is currently the only rate that is guaranteed to work on all devices, but other rates such as 22050, 16000, and 11025 may work on some devices."
So the author has written the loop to make sure that if initialization at one sample rate fails, an attempt is made at the next rate, until initialization succeeds; that is what the check in the loop condition does.
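A minimal sketch of the same fallback idea, written against the framework AudioRecord directly rather than the article's ExtAudioRecorder wrapper (the method name and buffer handling here are my own, just for illustration):

private static final int[] CANDIDATE_RATES = {44100, 22050, 11025, 8000};

public static AudioRecord createRecorder() {
    for (int rate : CANDIDATE_RATES) {
        int minBuffer = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (minBuffer <= 0) continue; // rate rejected outright by the buffer-size query
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                rate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuffer);
        if (recorder.getState() == AudioRecord.STATE_INITIALIZED) {
            return recorder; // first rate that actually initialized on this device
        }
        recorder.release(); // clean up the failed attempt before trying the next rate
    }
    return null; // no candidate rate worked
}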

Related

Why Can't I Play Raw Audio Bytes Using AudioTrack's Static Mode?

I have an Android app where some raw audio bytes are stored in a variable.
If I use an AudioTrack to play this audio data, it only works if I use AudioTrack.MODE_STREAM:
byte[] recordedAudioAsBytes;

public void playButtonPressed(View v) {
    // this verifies that audio data exists as expected
    for (int i = 0; i < recordedAudioAsBytes.length; i++) {
        Log.i("ABC", "byte[" + i + "] = " + recordedAudioAsBytes[i]);
    }

    // STREAM MODE ACTUALLY WORKS!!
    /*
    AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, CHANNELS,
            ENCODING, MY_CHOSEN_BUFFER_SIZE, AudioTrack.MODE_STREAM);
    player.play();
    player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length);
    */

    // STATIC MODE DOES NOT WORK
    AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, PLAYBACK_CHANNELS,
            ENCODING, MY_CHOSEN_BUFFER_SIZE, AudioTrack.MODE_STATIC);
    player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length);
    player.play();
}
If I use AudioTrack.MODE_STATIC, the output is glitchy -- it just makes a nasty pop and sounds very short, with hardly anything audible.
So why is that? Does MODE_STATIC require that the audio data have a header?
That's all I can think of.
If you'd like to see all the code, check this question.
It seems to me that you are using the same MY_CHOSEN_BUFFER_SIZE for streaming and static mode. That might explain why it sounds so short.
To use AudioTrack's static mode you have to pass the size of your byte array (larger will also work) as the buffer size; the audio is then treated as one big chunk of data.
See: AudioTrack.Builder
setBufferSizeInBytes()... "If using the AudioTrack in static mode (see AudioTrack#MODE_STATIC), this is the maximum size of the sound that will be played by this instance."
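For example, a minimal sketch of that fix, reusing the question's names (SAMPLERATE, PLAYBACK_CHANNELS, ENCODING, recordedAudioAsBytes) and assuming the recorded data is plain PCM:

// Static mode: the buffer must be large enough to hold the whole clip,
// so pass the array length instead of MY_CHOSEN_BUFFER_SIZE.
AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE,
        PLAYBACK_CHANNELS, ENCODING,
        recordedAudioAsBytes.length,
        AudioTrack.MODE_STATIC);
player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length); // load the clip first
player.play();                                                      // then start playback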

Android DSP - Trouble with AudioTrack and Sub-Audible Signals

I'm having some trouble getting the AudioTrack class to do what I want it to do. I'm trying to make an app that first lets the user draw a single-cycle waveform on the screen, and then outputs an arbitrary number of cycles of that waveform at an arbitrary frequency through the headphone out. For the primary use, the frequency will be below the audible range, something like 1/60Hz - 20Hz (If anyone's familiar with Eurorack/modular synths, I'm hoping to use the headphone out as a CV source).
The problem I'm having is that the AudioTrack seems to output a highly inaccurate reproduction of the waveform at the low end of this frequency range, even though it does output an accurate reproduction at the high end. The lower the frequency gets, the more the waveform gets 'squished' to the left on my oscilloscope. The pictures below show this phenomenon on a waveform that is supposed to start each cycle low and increase linearly to a max value, but this phenomenon happens with other waveshapes too.
So far, I have the app set up (1) to create an ArrayList to hold the data points from the user's input, (2) to convert the ArrayList into a float[] to feed to the AudioTrack, and (3) to set up the AudioTrack and write the float[] to it when needed. Since the ArrayList and float[] both maintain an accurate reproduction of the waveform, I'm pretty sure my problem is at (3), or else it's a hardware limitation of my phone/scope.
Here's the relevant code for the AudioTrack.
Method that (re)initializes the AudioTrack:
public void updateTrackForWave(Waveform wave, Context context) {
    if (wave.getInterpolatedWaveData() == null) return;

    int sampleRate = Integer.parseInt(
            ((AudioManager) context.getSystemService(Context.AUDIO_SERVICE))
                    .getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE));

    if (wave.getOutputChannel() == Waveform.OutputChannelEnum.left) {
        if (mTrackL != null) mTrackL.release();
        mTrackL = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_FLOAT,
                Float.BYTES * wave.getInterpolatedWaveData().length,
                AudioTrack.MODE_STATIC);
        mTrackL.setVolume(AudioTrack.getMaxVolume());
    } else if (wave.getOutputChannel() == Waveform.OutputChannelEnum.right) {
        if (mTrackR != null) mTrackR.release();
        mTrackR = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_FLOAT,
                Float.BYTES * wave.getInterpolatedWaveData().length,
                AudioTrack.MODE_STATIC);
        mTrackR.setVolume(AudioTrack.getMaxVolume());
    }
}
Method that plays AudioTrack:
private void playWaveOnTrack(Waveform wave, AudioTrack track) {
    if (track == null) return;

    if (track.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        return;
    } else {
        if (wave.isCycleMode()) track.setLoopPoints(0,
                wave.getInterpolatedWaveData().length, -1);
        else if (wave.isOneShotMode()) track.setLoopPoints(0,
                wave.getInterpolatedWaveData().length, 0);

        if (track.getPlayState() == AudioTrack.PLAYSTATE_PAUSED) {
            track.play();
        } else if (track.getPlayState() == AudioTrack.PLAYSTATE_STOPPED) {
            track.write(wave.getInterpolatedWaveData(), 0,
                    wave.getInterpolatedWaveData().length, AudioTrack.WRITE_BLOCKING);
            track.play();
        }
    }
}
It's worth noting that wave.getInterpolatedWaveData() returns the float[] from (3) above.
Anyway, any help you could spare on this would be greatly appreciated! I'm not sure if I've made a mistake in the code I have, if there's some code I should add (maybe an AudioEffect of some kind?), or if I'm asking too much of my phone.
PS: I'm new here and to Android programming in general, so please do point out any forum norms I should be following but am not, or any alternative coding approaches I might not know about.
Pictures (oscilloscope screenshots, omitted here): the waveform at 100Hz, at 10Hz, and at 2Hz.
It's a hardware limitation of your phone. In order to avoid a constant DC current flowing through each headphone speaker, they are capacitively coupled -- each channel is in series with one or more capacitors. This gives your output waveform a time constant; the average voltage level of the waveform will always tend towards 0 (GND), which makes it impossible to output DC, or even low-frequency signals.
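As a rough worked example (the component values here are assumptions for illustration, not measurements of any particular phone): the coupling capacitor and the headphone load form a first-order high-pass filter with cutoff frequency f_c = 1 / (2 * pi * R * C). With, say, a 100 uF capacitor driving a 32 ohm load, f_c = 1 / (2 * pi * 32 * 0.0001) which is roughly 50 Hz, so a 2 Hz ramp is largely differentiated away (hence the waveform being squished towards the start of each cycle on the scope), while a 100 Hz ramp passes mostly intact.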

How To Record Sound in Android with Better Quality and Reduce Noise

I'm trying to build a music analytics app for the Android platform.
The app uses MediaRecorder.AudioSource.MIC
to record the music from the microphone and then encodes it as 16-bit PCM at 11025 Hz, but the recorded audio samples are of very low quality. Is there any way to make it better and decrease the noise?
mRecordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, minBufferSize);
mRecordInstance.startRecording();

do
{
    samplesIn += mRecordInstance.read(audioData, samplesIn, bufferSize - samplesIn);
    if (mRecordInstance.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED)
        break;
}
while (samplesIn < bufferSize);
Thanks in Advance
The solution above didn't work for me.
So, I searched around and found this article.
Long story short, I used MediaRecorder.AudioSource.VOICE_RECOGNITION instead of AudioSource.MIC, which gave me really good results, and the noise in the background was reduced very much.
The great thing about this solution is that it can be used with both the AudioRecord and MediaRecorder classes.
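For reference, a minimal sketch of what that change looks like with AudioRecord (the sample rate and the mono/16-bit format here are just assumptions for illustration):

int sampleRate = 44100; // or whatever rate the device supports
int minBuffer = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_RECOGNITION, // instead of AudioSource.MIC
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuffer);
recorder.startRecording();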
The best combination of sample rate and buffer size is very device dependent, so your results will vary depending on the hardware. I use this utility to figure out what the best combination is for devices running Android 4.2 and above:
public static DeviceValues getDeviceValues(Context context) {
    try {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getProperty = AudioManager.class.getMethod("getProperty", String.class);
            Field bufferSizeField = AudioManager.class.getField("PROPERTY_OUTPUT_FRAMES_PER_BUFFER");
            Field sampleRateField = AudioManager.class.getField("PROPERTY_OUTPUT_SAMPLE_RATE");
            int bufferSize = Integer.valueOf((String) getProperty.invoke(am, (String) bufferSizeField.get(am)));
            int sampleRate = Integer.valueOf((String) getProperty.invoke(am, (String) sampleRateField.get(am)));
            return new DeviceValues(sampleRate, bufferSize);
        } catch (NoSuchMethodException e) {
            return selectBestValue(getValidSampleRates(context));
        }
    } catch (Exception e) {
        return new DeviceValues(DEFAULT_SAMPLE_RATE, DEFAULT_BUFFER_SIZE);
    }
}
This uses reflection to check whether the getProperty method is available, because this method was introduced in API level 17. If you are developing for a specific device type, you might want to experiment with various buffer sizes and sample rates. The defaults that I use as a fallback are:
private static final int DEFAULT_SAMPLE_RATE = 22050;
private static final int DEFAULT_BUFFER_SIZE = 1024;
Additionally, I check the various sample rates by seeing whether getMinBufferSize returns a reasonable value for use:
private static List<DeviceValues> getValidSampleRates(Context context) {
    List<DeviceValues> available = new ArrayList<DeviceValues>();
    for (int rate : new int[] {8000, 11025, 16000, 22050, 32000, 44100, 48000, 96000}) { // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0 && bufferSize < 2048) {
            available.add(new DeviceValues(rate, bufferSize * 2));
        }
    }
    return available;
}
This relies on the logic that if getMinBufferSize does not return a usable (positive) value, the sample rate is not available on the device. You should experiment with these values for your particular use case.
Though it is an old question, the following solution may be helpful.
We can use MediaRecorder to record audio with ease.
private void startRecording() {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioEncodingBitRate(96000);
    recorder.setAudioSamplingRate(44100);
    recorder.setOutputFile(".../audioName.m4a");

    try {
        recorder.prepare();
    } catch (IOException e) {
        Log.e(LOG_TAG, "prepare() failed");
    }

    recorder.start();
}
Note:
MediaRecorder.AudioEncoder.AAC is used because MediaRecorder.AudioEncoder.AMR_NB encoding is no longer supported on iOS. Reference
AudioEncodingBitRate should be set to either 96000 or 128000, as required for clarity of sound.

android java audio dsp sites or android sound library?

Does anyone know of any useful links for learning audio DSP for Android,
or a sound library?
I'm trying to make a basic mixer for playing WAV files but realised I don't know enough about DSP, and I can't find anything at all for Android.
I have a WAV file loaded into a byte array and an AudioTrack on a short loop.
How can I feed the data in?
I expect this post will be ignored, but it's worth a try.
FileInputStream is = new FileInputStream(filePath);
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);

int i = 0;
while (dis.available() > 0) {
    byteData[i] = dis.readByte(); //byteData
    i++;
}

final int minSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT);
track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minSize, AudioTrack.MODE_STREAM);
track.play();

bRun = true;
new Thread(new Runnable() {
    public void run() {
        track.write(byteData, 0, minSize);
    }
}).start();
I'll give this a shot, just because I was in your position a few months ago...
If you already have the WAV file's audio samples in a byte array, you simply need to pass the samples to the AudioTrack object (look up the write() methods).
To mix audio together you simply add the samples from each track: add the first sample of track 1 to the first sample of track 2, add the second sample of track 1 to the second sample of track 2, and so on. Ideally, the end result is a third array containing the summed samples, which you pass to the write() method of your AudioTrack instance.
You must be mindful of clipping here. If your data type is short, then the maximum value allowed is 32767. A simple way to ensure that your added samples do not exceed this limit is to perform the addition, store the result in a variable whose data type is larger than a short (e.g. int), and evaluate the result. If it's greater than 32767, make it equal to 32767 and cast it back to a short.
int result = track1[i] + track2[i];
if (result > 32767) {
    result = 32767;
}
else if (result < -32768) {
    result = -32768;
}
mixedAudio[i] = (short) result;
Notice how the snippet above also tests for the minimum range of a short.
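Putting the pieces above together, a self-contained sketch of the mixing loop might look like this (the method name and the assumption of two equally long 16-bit sample arrays are mine, not from the original code):

// Mix two 16-bit PCM tracks of equal length into a third, clamping to the
// legal short range so the sum never wraps around.
public static short[] mixTracks(short[] track1, short[] track2) {
    short[] mixedAudio = new short[track1.length];
    for (int i = 0; i < track1.length; i++) {
        int result = track1[i] + track2[i];                       // widen to int before adding
        if (result > Short.MAX_VALUE) result = Short.MAX_VALUE;   // 32767
        if (result < Short.MIN_VALUE) result = Short.MIN_VALUE;   // -32768
        mixedAudio[i] = (short) result;
    }
    return mixedAudio;  // pass this to AudioTrack.write()
}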
Apologies for the lack of formatting here; I'm on my mobile phone on a train :-)
Good luck.

Android AudioRecord Supported Sampling Rates

I'm trying to figure out what sampling rates are supported for phones running Android 2.2 and greater. We'd like to sample at a rate lower than 44.1kHz and not have to resample.
I know that all phones support 44100Hz, but I was wondering if there's a table out there that shows what sampling rates are valid for specific phones. I've seen Android's documentation (http://developer.android.com/reference/android/media/AudioRecord.html) but it doesn't help much.
Has anyone found a list of these sampling rates??
The original poster has probably long since moved on, but I'll post this in case anyone else finds this question.
Unfortunately, in my experience, each device can support different sample rates. The only sure way of knowing which sample rates a device supports is to test them individually, by checking that AudioRecord.getMinBufferSize() returns a valid minimum buffer size rather than a negative value (which means there was an error).
public void getValidSampleRates() {
    for (int rate : new int[] {8000, 11025, 16000, 22050, 44100}) { // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0) {
            // buffer size is valid, sample rate supported
        }
    }
}
Android has the AudioManager.getProperty() function to acquire the minimum buffer size and the preferred sample rate for audio record and playback. Of course, AudioManager.getProperty() is not available on API level < 17. Here's an example of how to use this API.
// To get preferred buffer size and sampling rate.
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Log.d("Buffer Size and sample rate", "Size :" + size + " & Rate: " + rate);
Though it's a late answer, I thought this might be useful.
Unfortunately, not even all phones support the supposedly guaranteed 44.1kHz rate :(
I've been testing a Samsung Galaxy Y (GT-S5360L), and if you record from the Camcorder source (ambience microphone), the only supported rates are 8kHz and 16kHz. Recording at 44.1kHz produces utter garbage, and at 11.025kHz it produces a pitch-altered recording with slightly less duration than the original sound.
Moreover, both strategies suggested by @Yahma and @Tom fail on this particular phone, as it is possible to receive a positive minimum buffer size for an unsupported configuration; worse, I've been forced to reset the phone to get the audio stack working again after attempting to use an AudioRecord class initialized from parameters that produce a supposedly valid (non-exception-raising) AudioTrack or AudioRecord instance.
I'm frankly a little bit worried about the problems I envision when releasing a sound app into the wild. In our case, we are being forced to introduce a costly sample-rate-conversion layer if we expect to reuse our algorithms (which expect a 44.1kHz recording rate) on this particular phone model.
:(
I have a phone (Acer Z3) where I get a positive buffer size returned from AudioRecord.getMinBufferSize(...) when testing 11025 Hz. However, if I subsequently run
audioRecord = new AudioRecord(...);
int state = audioRecord.getState();
if (state != AudioRecord.STATE_INITIALIZED) ...
I can see that this sampling rate in fact does not represent a valid configuration (as pointed out by user1222021 on Jun 5 '12). So my solution is to run both tests to find a valid sampling rate.
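For example, a small helper along these lines (the method name and the mono/16-bit parameters are my own choices) runs both tests:

// Returns true only if the rate both yields a usable minimum buffer size and
// actually produces an initialized AudioRecord on this device.
public static boolean isRateSupported(int sampleRate) {
    int minBuffer = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (minBuffer <= 0) return false;                 // first test: buffer size query
    AudioRecord probe = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuffer);
    boolean ok = probe.getState() == AudioRecord.STATE_INITIALIZED; // second test
    probe.release();                                  // free the hardware path again
    return ok;
}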
This method gives the minimum audio sample rate supported by your device.
NOTE: You may reverse the for loop to get the maximum sample rate supported by your device (don't forget to change the method name).
NOTE 2: Though the Android docs say that sample rates up to 48000 Hz (48 kHz) are supported, I have added all the possible sampling rates (as listed on Wikipedia), since newer devices may record audio at higher sampling rates.
private int getMinSupportedSampleRate() {
    /*
     * Valid audio sample rates
     *
     * @see <a
     * href="http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29"
     * >Wikipedia</a>
     */
    final int validSampleRates[] = new int[] { 8000, 11025, 16000, 22050,
            32000, 37800, 44056, 44100, 47250, 48000, 50000, 50400, 88200,
            96000, 176400, 192000, 352800, 2822400, 5644800 };
    /*
     * Selecting the default audio input source for recording, since
     * AudioFormat.CHANNEL_CONFIGURATION_DEFAULT is deprecated, and selecting
     * the default encoding format.
     */
    for (int i = 0; i < validSampleRates.length; i++) {
        int result = AudioRecord.getMinBufferSize(validSampleRates[i],
                AudioFormat.CHANNEL_IN_DEFAULT,
                AudioFormat.ENCODING_DEFAULT);
        if (result != AudioRecord.ERROR
                && result != AudioRecord.ERROR_BAD_VALUE && result > 0) {
            // return the minimum supported audio sample rate
            return validSampleRates[i];
        }
    }
    // If none of the sample rates are supported, return -1 and handle it in
    // the calling method
    return -1;
}
I'd like to provide an alternative to Yahma's answer.
I agree with the proposition that it must be tested (though presumably support varies by model rather than by individual device), but using getMinBufferSize seems a bit indirect to me.
In order to test whether a desired sample rate is supported, I suggest attempting to construct an AudioTrack instance with the desired sample rate; if the specified sample rate is not supported, you will get an exception of the form:
"java.lang.IllegalArgumentException: 2756Hz is not a supported sample rate"
public class Bigestnumber extends AsyncTask<String, String, String> {
    ProgressDialog pdLoading = new ProgressDialog(MainActivity.this);

    @Override
    protected String doInBackground(String... params) {
        final int validSampleRates[] = new int[] {
                5644800, 2822400, 352800, 192000, 176400, 96000,
                88200, 50400, 50000, 48000, 47250, 44100, 44056, 37800, 32000, 22050, 16000, 11025, 4800, 8000};
        TrueMan = new ArrayList<Integer>();
        for (int sample : validSampleRates) {
            if (validSampleRate(sample) == true) {
                TrueMan.add(sample);
            }
        }
        return null;
    }

    @Override
    protected void onPostExecute(String result) {
        Integer largest = Collections.max(TrueMan);
        System.out.println("Largest " + String.valueOf(largest));
    }
}
public boolean validSampleRate(int sample_rate) {
    AudioRecord recorder = null;
    try {
        int bufferSize = AudioRecord.getMinBufferSize(sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    } catch (IllegalArgumentException e) {
        return false;
    } finally {
        if (recorder != null)
            recorder.release();
    }
    return true;
}
This code will give you the maximum supported sample rate on your device. Just declare ArrayList<Integer> TrueMan; at the beginning of the class. Then you can use a high sample rate in AudioTrack and AudioRecord to get better sound quality. Reference.
Just some updated information here. I spent some time trying to get microphone recording to work with Android 6 (4.4 KitKat was fine). The error shown was the same as I got on 4.4 when using the wrong settings for sample rate/PCM etc., but my problem was in fact that the permissions in AndroidManifest.xml are no longer sufficient to request access to the microphone; this now needs to be done at run time:
https://developer.android.com/training/permissions/requesting.html
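A minimal sketch of that runtime request (the request code constant is an arbitrary placeholder):

// Ask for the microphone permission at run time (required on Android 6.0+ even
// if RECORD_AUDIO is declared in AndroidManifest.xml).
private static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code

private void ensureMicPermission(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.RECORD_AUDIO},
                REQUEST_RECORD_AUDIO);
        // The result arrives in onRequestPermissionsResult(); only start
        // recording after PERMISSION_GRANTED comes back.
    }
}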
