I'm working on a recorder app for iOS, and a friend of mine is building the Android counterpart. When he tries to play the sound file I record and upload to our server (which plays back fine for me), he hears a lot of noise instead of my actual recording.
func setupRecorder() {
    let myID = userDefaults.value(forKey: "user_id") as! Int
    let message_token = generateUniqueToken()
    self.curentFileName = message_token

    let currentFileName = "\(self.curentFileName!).\(myID).m4a"
    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    self.soundFileURL = documentsDirectory.appendingPathComponent(currentFileName)
    print("writing to soundfile url: '\(soundFileURL!)'")

    // fileExists(atPath:) expects a file-system path, not a URL string
    if FileManager.default.fileExists(atPath: soundFileURL.path) {
        print("soundfile \(soundFileURL.path) exists")
    }

    let recordSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatAppleLossless,
        AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue,
        AVEncoderBitRateKey: 320000,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100.0
    ]

    do {
        soundRecorder = try AVAudioRecorder(url: soundFileURL, settings: recordSettings)
        soundRecorder.delegate = self
        soundRecorder.isMeteringEnabled = true
        soundRecorder.prepareToRecord()
    } catch {
        soundRecorder = nil
        print(error.localizedDescription)
    }
}
These are my recorder settings. All I have from him is:
private static final int SAMPLE_PER_SEC = 8000;
private static final int CHANNEL_TYPE = AudioFormat.CHANNEL_IN_MONO;
private static final int AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private static int BUFFER_ELEMENT_TO_REC = 1024; // want 2048 bytes (2K) per read; since each element is 2 bytes, we use 1024 elements
private static int BYTES_PER_ELEMENT = 2;
He said I'm encoding my sound somehow while he's doing a raw recording... I have no idea where I'm messing it up, since it's my first time with AVFoundation.
Instead of using kAudioFormatAppleLossless, change it to kAudioFormatLinearPCM. kAudioFormatAppleLossless is an Apple-only format. It is better to use a format common to Android and iOS devices, so that if the server does any processing on the audio, it doesn't have to handle it in two different ways.
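The mismatch is also visible in the raw numbers. A minimal sketch comparing the stream parameters taken from the two snippets above (the byte-rate figures are just arithmetic, not measurements):

```java
public class FormatMismatch {
    // His Android reader expects raw PCM at these parameters:
    static final int ANDROID_BYTES_PER_SEC = 8000 /* Hz */ * 1 /* mono */ * 2 /* bytes per 16-bit sample */;

    // The iOS side records 44.1 kHz stereo, and on top of that wraps the
    // samples in an ALAC-compressed .m4a container instead of raw PCM.
    static final int IOS_BYTES_PER_SEC_UNCOMPRESSED = 44100 /* Hz */ * 2 /* stereo */ * 2 /* bytes per sample */;

    public static void main(String[] args) {
        System.out.println("Android expects " + ANDROID_BYTES_PER_SEC + " B/s of raw PCM");
        System.out.println("iOS produces " + IOS_BYTES_PER_SEC_UNCOMPRESSED + " B/s before ALAC compression");
        // Feeding compressed .m4a bytes straight into a raw-PCM player decodes
        // container and codec data as if they were samples, which sounds like noise.
    }
}
```

Even after switching to linear PCM, the sample rate and channel count still need to match on both ends (or be converted somewhere).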
I am trying to develop an application which continuously plays one of several predefined arrays of sounds, and also changes the tempo and pitch (individually), etc., using Android's SoundPool class.
I know how to change the pitch of a sound.
But I do not know how to play the whole array, nor how to change the tempo of the whole array.
Please help!
Thank you in advance!
EDIT: I think I will have to use a Handler in order to play the array of sounds, but I don't know how!
public class Sound {
    private static int length = 22050 * 10; // in bytes; at 16-bit mono, 22050 Hz this is ~5 seconds of audio
    private static byte[] data = new byte[length];

    static void fillRandom() {
        new Random().nextBytes(data); // Create some random noise to listen to.
    }

    static void play() {
        fillRandom();
        final int TEST_SR = 22050; // This is from an example I found online.
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STATIC; // I need static mode.
        final int TEST_STREAM_TYPE = AudioManager.STREAM_ALARM;

        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, length, TEST_MODE);
        track.write(data, 0, length);
        track.play();
    }
}
This code plays random noise. All you have to do is fill the byte[] data array with the audio you actually intend to play...
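For example, to fill the array with something more musical than noise, you could synthesize a tone directly into the 16-bit little-endian PCM layout that AudioTrack expects for ENCODING_PCM_16BIT. A minimal sketch (sineTone is a hypothetical helper, not part of the Android API):

```java
public class ToneFill {
    // Fills a byte[] with a sine tone encoded as 16-bit little-endian PCM,
    // the layout AudioTrack.write() expects for ENCODING_PCM_16BIT mono.
    static byte[] sineTone(int sampleRate, double freqHz, double seconds) {
        int frames = (int) (sampleRate * seconds);
        byte[] out = new byte[frames * 2]; // 2 bytes per 16-bit mono frame
        for (int i = 0; i < frames; i++) {
            // half-amplitude to leave headroom
            short sample = (short) (Math.sin(2 * Math.PI * freqHz * i / sampleRate) * Short.MAX_VALUE * 0.5);
            out[2 * i] = (byte) (sample & 0xFF);            // low byte first (little-endian)
            out[2 * i + 1] = (byte) ((sample >> 8) & 0xFF); // high byte second
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] data = sineTone(22050, 440.0, 5.0); // 5 s of A4 at 22050 Hz
        System.out.println("generated " + data.length + " bytes");
    }
}
```

You would then pass the generated array to track.write(data, 0, data.length) in place of the random bytes.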
I am trying to read about and understand audio in Android. In my search I came across this article, where he has written code to record audio in WAV format. But there is one thing I don't fully understand, and that is the first loop of his code:
public class ExtAudioRecorder
{
    private final static int[] sampleRates = {44100, 22050, 11025, 8000};

    public static ExtAudioRecorder getInstanse(Boolean recordingCompressed)
    {
        ExtAudioRecorder result = null;
        if (recordingCompressed)
        {
            result = new ExtAudioRecorder(false,
                                          AudioSource.MIC,
                                          sampleRates[3],
                                          AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                          AudioFormat.ENCODING_PCM_16BIT);
        }
        else
        {
            int i = 0;
            do
            {
                result = new ExtAudioRecorder(true,
                                              AudioSource.MIC,
                                              sampleRates[i],
                                              AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                              AudioFormat.ENCODING_PCM_16BIT);
            } while ((++i < sampleRates.length) & !(result.getState() == ExtAudioRecorder.State.INITIALIZING));
        }
        return result;
    }
}
He gives some basic information about it, but I don't completely get it. Does this have anything to do with the performance of different types of Android devices? Anyway, I hope somebody can clear this up for me :)
He is trying to initialize the audio recorder with different sample rates, from these {44100, 22050, 11025, 8000}.
Depending on the underlying hardware, not all sample rates may be supported by the device.
Although the documentation says:
"44100Hz is currently the only rate that is guaranteed to work on all devices, but other rates such as 22050, 16000, and 11025 may work on some devices."
I think the author has written the code this way to make sure that if initialization at one sample rate fails, an attempt is made at the next rate, until initialization succeeds; that is what the check in the loop condition does.
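Stripped of the Android specifics, the loop is just a "first candidate that works wins" pattern. A minimal, self-contained sketch (tryInit is a hypothetical stand-in for constructing the recorder and checking its state; here we pretend only 22050 and 8000 succeed):

```java
import java.util.function.IntPredicate;

public class FallbackInit {
    static final int[] SAMPLE_RATES = {44100, 22050, 11025, 8000};

    // Walk the candidates in order of preference and keep the first one
    // for which initialization succeeds; -1 means none worked.
    static int firstSupported(IntPredicate tryInit) {
        for (int rate : SAMPLE_RATES) {
            if (tryInit.test(rate)) {
                return rate;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int chosen = firstSupported(rate -> rate == 22050 || rate == 8000);
        System.out.println("chosen rate: " + chosen); // 44100 fails, 22050 wins
    }
}
```

Note that the original do-while keeps the last (failed) attempt in `result` if no rate works, so callers still need to check the recorder's state afterwards.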
I'm trying to figure out what sampling rates are supported for phones running Android 2.2 and greater. We'd like to sample at a rate lower than 44.1kHz and not have to resample.
I know that all phones support 44100Hz, but was wondering if there's a table out there that shows which sampling rates are valid for specific phones. I've seen Android's documentation (http://developer.android.com/reference/android/media/AudioRecord.html), but it doesn't help much.
Has anyone found a list of these sampling rates??
The original poster has probably long since moved on, but I'll post this in case anyone else finds this question.
Unfortunately, in my experience, each device can support different sample rates. The only sure way of knowing which sample rates a device supports is to test them individually, by checking that AudioRecord.getMinBufferSize() returns a valid (positive) minimum buffer size; a negative result means there was an error.
public void getValidSampleRates() {
    for (int rate : new int[] {8000, 11025, 16000, 22050, 44100}) { // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0) {
            // buffer size is valid, sample rate supported
        }
    }
}
Android has the AudioManager.getProperty() function to query the preferred buffer size and sample rate for audio record and playback. Note, however, that AudioManager.getProperty() is not available on API levels below 17. Here's an example of how to use this API.
// To get preferred buffer size and sampling rate.
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Log.d("Buffer Size and sample rate", "Size :" + size + " & Rate: " + rate);
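Since getProperty() returns its values as Strings (and may return null on some devices), they need parsing before use. A small sketch with fallback defaults (parseOrDefault is a helper I'm introducing for illustration, not part of the framework):

```java
public class PreferredAudioParams {
    // Parse a property string from AudioManager.getProperty(), falling back
    // to a safe default when the device returns null or a malformed value.
    static int parseOrDefault(String value, int fallback) {
        if (value == null) {
            return fallback;
        }
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Simulated values as they might come back from getProperty():
        System.out.println(parseOrDefault("48000", 44100)); // device reported 48000
        System.out.println(parseOrDefault(null, 44100));    // no answer -> fall back to 44100
    }
}
```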
Though it's a late answer, I thought this might be useful.
Unfortunately not even all phones support the supposedly guaranteed 44.1kHz rate :(
I've been testing a Samsung Galaxy Y (GT-S5360L), and if you record from the Camcorder source (the ambience microphone), the only supported rates are 8kHz and 16kHz. Recording at 44.1kHz produces utter garbage, and at 11.025kHz it produces a pitch-altered recording with slightly less duration than the original sound.
Moreover, both strategies suggested by @Yahma and @Tom fail on this particular phone: it is possible to receive a positive minimum buffer size for an unsupported configuration, and worse, I've been forced to reset the phone to get the audio stack working again after attempting to use an AudioRecord initialized from parameters that produced a supposedly valid (non-exception-raising) AudioTrack or AudioRecord instance.
I'm frankly a little worried about the problems I envision when releasing a sound app into the wild. In our case, we are being forced to introduce a costly sample-rate-conversion layer if we expect to reuse our algorithms (which expect a 44.1kHz recording rate) on this particular phone model.
:(
I have a phone (Acer Z3) where I get a positive buffer size returned from AudioRecord.getMinBufferSize(...) when testing 11025 Hz. However, if I subsequently run
audioRecord = new AudioRecord(...);
int state = audioRecord.getState();
if (state != AudioRecord.STATE_INITIALIZED) ...
I can see that this sampling rate in fact does not represent a valid configuration (as pointed out by user1222021 on Jun 5 '12). So my solution is to run both tests to find a valid sampling rate.
This method gives the minimum audio sample rate supported by your device.
NOTE: You may reverse the for loop to get the maximum sample rate supported by your device (don't forget to rename the method).
NOTE 2: Though the Android docs say sample rates up to 48000Hz (48kHz) are supported, I have added all the possible sampling rates (as listed on Wikipedia), since who knows, new devices may record UHD audio at higher sampling rates.
private int getMinSupportedSampleRate() {
/*
* Valid Audio Sample rates
*
* #see <a
* href="http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29"
* >Wikipedia</a>
*/
final int validSampleRates[] = new int[] { 8000, 11025, 16000, 22050,
32000, 37800, 44056, 44100, 47250, 48000, 50000, 50400, 88200,
96000, 176400, 192000, 352800, 2822400, 5644800 };
/*
* Selecting default audio input source for recording since
* AudioFormat.CHANNEL_CONFIGURATION_DEFAULT is deprecated and selecting
* default encoding format.
*/
for (int i = 0; i < validSampleRates.length; i++) {
int result = AudioRecord.getMinBufferSize(validSampleRates[i],
AudioFormat.CHANNEL_IN_DEFAULT,
AudioFormat.ENCODING_DEFAULT);
if (result != AudioRecord.ERROR
&& result != AudioRecord.ERROR_BAD_VALUE && result > 0) {
// return the minimum supported audio sample rate
return validSampleRates[i];
}
}
// If none of the sample rates are supported return -1 handle it in
// calling method
return -1;
}
I'd like to provide an alternative to Yahma's answer.
I agree with his/her proposition that it must be tested (though presumably it varies according to the model, not the device), but using getMinBufferSize seems a bit indirect to me.
In order to test whether a desired sample rate is supported I suggest attempting to construct an AudioTrack instance with the desired sample rate - if the specified sample rate is not supported you will get an exception of the form:
"java.lang.IllegalArgumentException: 2756Hz is not a supported sample rate"
public class Bigestnumber extends AsyncTask<String, String, String> {
    ProgressDialog pdLoading = new ProgressDialog(MainActivity.this);

    @Override
    protected String doInBackground(String... params) {
        final int validSampleRates[] = new int[]{
                5644800, 2822400, 352800, 192000, 176400, 96000,
                88200, 50400, 50000, 48000, 47250, 44100, 44056, 37800, 32000, 22050, 16000, 11025, 4800, 8000};
        TrueMan = new ArrayList<Integer>();
        for (int sample : validSampleRates) {
            if (validSampleRate(sample)) {
                TrueMan.add(sample);
            }
        }
        return null;
    }

    @Override
    protected void onPostExecute(String result) {
        Integer largest = Collections.max(TrueMan);
        System.out.println("Largest " + String.valueOf(largest));
    }
}
public boolean validSampleRate(int sample_rate) {
AudioRecord recorder = null;
try {
int bufferSize = AudioRecord.getMinBufferSize(sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
} catch(IllegalArgumentException e) {
return false;
} finally {
if(recorder != null)
recorder.release();
}
return true;
}
This code will give you the maximum supported sample rate on your Android OS. Just declare ArrayList<Integer> TrueMan; at the beginning of the class. Then you can use a high sample rate in AudioTrack and AudioRecord to get better sound quality. Reference.
Just some updated information here. I spent some time trying to get recording from the microphone to work with Android 6 (4.4 KitKat was fine). The error shown was the same one I got on 4.4 when using the wrong sample-rate/PCM settings. But my problem was in fact that the permissions in AndroidManifest.xml are no longer sufficient to request access to the microphone; this now needs to be done at run time:
https://developer.android.com/training/permissions/requesting.html
I am testing this on a Samsung Galaxy S i9000.
int sampleRate = 44100;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_8BIT);
It returns -2 ERROR_BAD_VALUE.
The native sample rate is 44100Hz, as returned by
AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_SYSTEM).
I have tried setting sampleRate to 1000, 8000, 22100 and 44100. I have also tried changing AudioFormat.CHANNEL_IN_MONO to AudioFormat.CHANNEL_CONFIGURATION_MONO. I have also tried STEREO (both IN_STEREO and CONFIGURATION_STEREO). I have also tried 16 bit encoding instead of 8 bit.
Update: my Manifest has AUDIO_RECORD as permission.
I keep getting -2 as a result. Why is this happening?
From the platform source file AudioRecord.java:
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
...
// PCM_8BIT is not supported at the moment
if (audioFormat != AudioFormat.ENCODING_PCM_16BIT) {
loge("getMinBufferSize(): Invalid audio format.");
return AudioRecord.ERROR_BAD_VALUE;
}
...
}
Looks like your choice is 16-bit or nothing. :\
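If you really need 8-bit data anyway (say, to halve memory use), one workaround is to record in 16-bit and down-convert yourself. A minimal sketch of the conversion (assuming unsigned 8-bit PCM as the target, which is the convention WAV files use for 8-bit audio):

```java
public class PcmConvert {
    // Convert signed 16-bit samples to unsigned 8-bit PCM:
    // keep the high byte of each sample and offset by 128,
    // so silence (0) maps to the unsigned midpoint (128).
    static byte[] to8Bit(short[] samples16) {
        byte[] out = new byte[samples16.length];
        for (int i = 0; i < samples16.length; i++) {
            out[i] = (byte) ((samples16[i] >> 8) + 128);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] samples = {0, Short.MAX_VALUE, Short.MIN_VALUE};
        byte[] converted = to8Bit(samples);
        // silence -> 128, full positive -> 255, full negative -> 0
        for (byte b : converted) {
            System.out.println(b & 0xFF);
        }
    }
}
```

This obviously discards the low byte of each sample, so it trades quality for size; it is only worth it when 8-bit output is a hard requirement.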
In the emulator it will always return -2. The same code works fine on a real device.