Get information about audio file in Android

I am new to Android and I want to load an audio file (WAV or MP3) from the file system and display information about it, such as the sampling rate.
How can I do this? Do you know any examples?

You can approximate the bitrate by dividing the file size by the length of the audio in seconds. For instance, from a random AAC-encoded M4A in my library:
File Size: 10.3MB (87013064 bits)
Length: 5:16 (316 Seconds)
Which gives: 87013064 bits / 316 seconds = 273426.147 bits/sec or ~273kbps
Actual Bitrate: 259kbps
Since most audio files have a known set of valid bitrate levels, you can use that to step the bit rate to the appropriate level for display.
Link to original answer by Jake Basile
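For reference, a rough sketch of that estimate in code, assuming path points to a readable local audio file (on API 14+, MediaMetadataRetriever.METADATA_KEY_BITRATE can often return the real bitrate directly when the container exposes it):
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(path); // path to the audio file
long durationMs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));
retriever.release();

long fileSizeBits = new File(path).length() * 8;
long approxBitsPerSecond = (fileSizeBits * 1000) / durationMs; // file size divided by duration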
Or use this code to get a much more accurate result:
MediaExtractor mex = new MediaExtractor();
try {
    mex.setDataSource(path); // path is the location of the audio file, e.g. on the SD card
} catch (IOException e) {
    e.printStackTrace();
}
MediaFormat mf = mex.getTrackFormat(0);
int bitRate = mf.getInteger(MediaFormat.KEY_BIT_RATE);
int sampleRate = mf.getInteger(MediaFormat.KEY_SAMPLE_RATE);
Link to original answer by architjn

Related

Getting bit rate or bit depth of an audio wav file

I am using AudioTrack to play a .wav audio file. Everything is fine, but for now I have hard-coded the bit depth of the audio file while initializing the AudioTrack object in STATIC_MODE.
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, mSampleRate, mChannels,
AudioFormat.ENCODING_PCM_16BIT, dataSize, AudioTrack.MODE_STATIC);
I want to get the bit-depth/bitrate of the .wav file programmatically and then set the encoding in the AudioTrack object. I have tried to use MediaExtractor and MediaFormat but it gives me only the following information:
mediaFormat:{mime=audio/raw, durationUs=10080000, channel-count=1, channel-mask=0, sample-rate=16000}
In the documentation of MediaFormat, it says that KEY_BIT_RATE is encoder-only. Does that mean I can only use this option while encoding raw PCM bits? If so, is there any other way to read the bitrate/bit depth programmatically? I have already tried getting the information for the same file on the terminal using the mediainfo binary, and it gives me the correct bit depth.
You could always look at bytes 34 and 35 of the WAV file's header, which hold the bits-per-sample field. See this resource.
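If you go the header route, a minimal sketch might look like this; it assumes a canonical WAV layout where the "fmt " chunk starts right after the RIFF header (files with extra chunks before "fmt " need proper chunk parsing):
int readWavBitDepth(String wavPath) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(wavPath, "r");
    try {
        raf.seek(34);           // bits-per-sample field of the canonical fmt chunk
        int lo = raf.read();
        int hi = raf.read();
        return (hi << 8) | lo;  // 16-bit little-endian value
    } finally {
        raf.close();
    }
}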
MediaExtractor mediaExtractor = new MediaExtractor();
try {
mediaExtractor.setDataSource(path);
return mediaExtractor.getTrackFormat(0).getInteger("bit-per-sample");
} catch (Exception e) {
e.printStackTrace();
}
int currentApiVersion = android.os.Build.VERSION.SDK_INT;
MediaFormat format = mediaExtractor.getTrackFormat(0); // the same MediaFormat obtained from the extractor above
int bitDepth;
if (currentApiVersion >= android.os.Build.VERSION_CODES.N) {
    bitDepth = format.getInteger("pcm-encoding");
} else {
    bitDepth = format.getInteger("bit-width");
}
On Android 7.0 and above the track format looks like:
{mime: string(audio/raw), channel-count: int32(2), sample-rate: int32(48000), pcm-encoding: int32(2)}
Below Android 7.0 it looks like:
{mime: string(audio/raw), channel-count: int32(2), sample-rate: int32(48000), bit-width: int32(16), what: int32(1869968451)}
https://developer.android.com/reference/android/media/MediaFormat.html#KEY_PCM_ENCODING
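Note that on API 24+ the "pcm-encoding" value is an android.media.AudioFormat encoding constant rather than a bit count (ENCODING_PCM_16BIT happens to be 2, which matches the pcm-encoding: int32(2) dump above). A small hypothetical helper to translate it:
int bitDepthFromPcmEncoding(int pcmEncoding) {
    switch (pcmEncoding) {
        case AudioFormat.ENCODING_PCM_8BIT:  return 8;
        case AudioFormat.ENCODING_PCM_16BIT: return 16;
        case AudioFormat.ENCODING_PCM_FLOAT: return 32;
        default: return -1; // unknown or unsupported encoding
    }
}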

Using MediaCodec to save series of images as Video

I am trying to use MediaCodec to save a series of Images, saved as Byte Arrays in a file, to a video file. I have tested these images on a SurfaceView (playing them in series) and I can see them fine. I have looked at many examples using MediaCodec, and here is what I understand (please correct me if I am wrong):
Get InputBuffers from MediaCodec object -> fill it with your frame's
image data -> queue the input buffer -> get coded output buffer ->
write it to a file -> increase presentation time and repeat
However, I have tested this a lot and I end up with one of two cases:
All sample projects I tried to imitate caused the media server to die when calling queueInputBuffer for the second time.
I tried calling codec.flush() at the end (after saving the output buffer to file, although none of the examples I saw did this) and the media server did not die. However, I am not able to open the output video file with any media player, so something is wrong.
Here is my code:
MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE);
MediaFormat mediaFormat = null;
if(CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)){
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 1280 , 720);
} else {
mediaFormat = MediaFormat.createVideoFormat(MIMETYPE, 720, 480);
}
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 700000);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 10);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
codec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
boolean sawInputEOS = false;
int inputBufferIndex=-1,outputBufferIndex=-1;
BufferInfo info=null;
//loop to read YUV byte array from file
inputBufferIndex = codec.dequeueInputBuffer(WAITTIME);
if(bytesread<=0)sawInputEOS=true;
if(inputBufferIndex >= 0){
if(!sawInputEOS){
int samplesiz=dat.length;
inputBuffers[inputBufferIndex].put(dat);
codec.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
presentationTime += 100;
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
Log.i("BATA", "outputBufferIndex="+outputBufferIndex);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
if(sawInputEOS) break;
}
}else{
codec.queueInputBuffer(inputBufferIndex, 0, 0, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
info = new BufferInfo();
outputBufferIndex = codec.dequeueOutputBuffer(info, WAITTIME);
if(outputBufferIndex >= 0){
byte[] array = new byte[info.size];
outputBuffers[outputBufferIndex].get(array);
if(array != null){
try {
dos.write(array);
} catch (IOException e) {
e.printStackTrace();
}
}
codec.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
break;
}
}
}
}
codec.flush();
try {
fstream2.close();
dos.flush();
dos.close();
} catch (IOException e) {
e.printStackTrace();
}
codec.stop();
codec.release();
codec = null;
return true;
}
My question is: how can I get a working video from a stream of images using MediaCodec? What am I doing wrong?
Another question (if I am not being too greedy): I would like to add an audio track to this video. Can it be done with MediaCodec as well, or must I use FFmpeg?
Note: I know about MediaMuxer in Android 4.3, however, it is not an option for me as my app must work on Android 4.1+.
Update
Thanks to fadden's answer, I was able to reach EOS without the media server dying (the code above is after modification). However, the file I am getting produces gibberish. Here is a snapshot of the video I get (it only plays as a raw .h264 file).
My input image format is YUV (NV21 from the camera preview). I can't get the output into any playable format. I tried all the COLOR_FormatYUV420 formats and got the same gibberish output. And I still can't find a way (using MediaCodec) to add audio.
I think you have the right general idea. Some things to be aware of:
Not all devices support COLOR_FormatYUV420SemiPlanar. Some only accept planar. (Android 4.3 introduced CTS tests to ensure that the AVC codec supports one or the other.)
It's not the case that queueing an input buffer will immediately result in the generation of one output buffer. Some codecs may accumulate several frames of input before producing output, and may produce output after your input has finished. Make sure your loops take that into account (e.g. your inputBuffers[].clear() will blow up if it's still -1).
Don't try to submit data and send EOS with the same queueInputBuffer call. The data in that frame may be discarded. Always send EOS with a zero-length buffer.
The output of the codecs is generally pretty "raw", e.g. the AVC codec emits an H.264 elementary stream rather than a "cooked" .mp4 file. Many players won't accept this format. If you can't rely on the presence of MediaMuxer you will need to find another way to cook the data (search around on stackoverflow for ideas).
It's certainly not expected that the mediaserver process would crash.
You can find some examples and links to the 4.3 CTS tests here.
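To make the "one input does not guarantee one output" point concrete, here is a rough sketch of a drain loop, reusing the question's codec, outputBuffers, dos and WAITTIME variables (this is illustrative, not fadden's code):
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    int outIndex = codec.dequeueOutputBuffer(info, WAITTIME);
    if (outIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
        break; // nothing ready yet; go queue more input
    } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
        outputBuffers = codec.getOutputBuffers(); // output buffers were re-allocated
    } else if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat newFormat = codec.getOutputFormat(); // e.g. hand this to a muxer later
    } else if (outIndex >= 0) {
        byte[] chunk = new byte[info.size];
        outputBuffers[outIndex].position(info.offset);
        outputBuffers[outIndex].limit(info.offset + info.size);
        outputBuffers[outIndex].get(chunk);
        try {
            dos.write(chunk); // still a raw elementary stream, see the note above
        } catch (IOException e) {
            e.printStackTrace();
        }
        codec.releaseOutputBuffer(outIndex, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            break; // encoder has flushed everything after EOS
        }
    }
}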
Update: As of Android 4.3, MediaCodec and Camera have no ByteBuffer formats in common, so at the very least you will need to fiddle with the chroma planes. However, that sort of problem manifests very differently (as shown in the images for this question).
The image you added looks like video, but with stride and/or alignment issues. Make sure your pixels are laid out correctly. In the CTS EncodeDecodeTest, the generateFrame() method (line 906) shows how to encode both planar and semi-planar YUV420 for MediaCodec.
The easiest way to avoid the format issues is to move the frames through a Surface (like the CameraToMpegTest sample), but unfortunately that's not possible in Android 4.1.

How to mix / overlay two mp3 audio file into one mp3 file (not concatenate)

I want to merge two MP3 files into one MP3 file. For example, if the first file is 1 minute long and the second is 30 seconds, the output should be one minute long, and during that minute it should play both files.
First of all, in order to mix two audio files you need to manipulate their raw representation; since an MP3 file is compressed, you don't have direct access to the signal's raw representation. You need to decode the compressed MP3 stream in order to "understand" the waveform of your audio signals, and then you will be able to mix them.
Thus, in order to mix two compressed audio files into a single compressed audio file, the following steps are required:
decode the compressed file using a decoder to obtain the raw data (NO PUBLIC SYSTEM API available for this, you need to do it manually!).
mix the two raw uncompressed data streams, applying audio clipping if necessary. For this you need to consider the raw data format produced by your decoder (PCM); see the sketch below.
encode the raw mixed data into a compressed MP3 file (as per the decoder, you need to do it manually using an encoder)
More info about MP3 decoders can be found here.
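As an illustration of step 2, here is a minimal sketch that mixes two already-decoded 16-bit PCM buffers with simple clipping (it assumes both buffers share the same sample rate and channel layout; steps 1 and 3 are left to whatever MP3 decoder/encoder you pick):
short[] mixPcm(short[] a, short[] b) {
    int longest = Math.max(a.length, b.length);
    short[] mixed = new short[longest];
    for (int i = 0; i < longest; i++) {
        int sa = i < a.length ? a[i] : 0;
        int sb = i < b.length ? b[i] : 0;
        int sum = sa + sb;                                  // mix by summing samples
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;   // clip to the 16-bit range
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
        mixed[i] = (short) sum;
    }
    return mixed;
}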
I am not sure if you want to do this on an Android phone (it looks like it, given your tags), but if I'm right you could try LoopStack, a mobile DAW (I have not tried it myself).
If you just "mix" two files without adjusting the output volume, your output might clip. However, I am not sure it's possible to "mix" two MP3 files without decoding them.
If it is okay for you to merge them on your PC, try Audacity, a free desktop DAW.
I have not done this on Android, but I have done it using Adobe Flex. I guess the logic remains the same. I followed these steps:
I extracted both the mp3s into two byte arrays. (song1ByteArray, song2ByteArray)
Find out the bigger byte array. (Let's say song1ByteArray is the larger one).
Create a function which returns the mixed byte array.
private ByteArray mix2Songs(ByteArray song1ByteArray, ByteArray song2ByteArray) {
    ByteArray returnResultArr = new ByteArray(); // holds the mixed output
    int arrLength = song1ByteArray.length;
    for (int i = 0; i < arrLength; i += 8) { // increment by 8 because stereo sound has left and right channels: 4 bytes for left + 4 bytes for right
        // read left and right channel values for the first song
        float source1_L = song1ByteArray.readFloat(); // I'm not sure readFloat() exists in Android, but there will be an equivalent
        float source1_R = song1ByteArray.readFloat();
        float source2_L = 0;
        float source2_R = 0;
        if (song2ByteArray.bytesAvailable > 0) {
            source2_L = song2ByteArray.readFloat(); // left channel of song2ByteArray
            source2_R = song2ByteArray.readFloat(); // right channel of song2ByteArray
        }
        returnResultArr.writeFloat((source1_L + source2_L) / 2); // average of the two left channels
        returnResultArr.writeFloat((source1_R + source2_R) / 2); // average of the two right channels
    }
    return returnResultArr;
}
1. Post on Audio mixing in Android
2. Another post on mixing audio in Android
3. You could leverage Java Sound to mix two audio files
Example:
// First convert the audio files to AudioInputStreams
audioInputStream = AudioSystem.getAudioInputStream(soundFile);
audioInputStream2 = AudioSystem.getAudioInputStream(soundFile2);
// Create a collection (e.g. an ArrayList) and add both AudioInputStreams
Collection list = new ArrayList();
list.add(audioInputStream2);
list.add(audioInputStream);
// Then pass the audio format and the collection to the MixingAudioInputStream constructor
MixingAudioInputStream mixer = new MixingAudioInputStream(audioFormat, list);
// Finally read data from the mixed AudioInputStream and give it to a SourceDataLine
nBytesRead = mixer.read(abData, 0, abData.length);
int nBytesWritten = line.write(abData, 0, nBytesRead);
4. Try AudioConcat, which has a -m option for mixing
java AudioConcat [ -D ] [ -c ] | [ -m ] | [ -f ] -o outputfile inputfile ...
Parameters.
-c
selects concatenation mode
-m
selects mixing mode
-f
selects float mixing mode
-o outputfile
The filename of the output file
inputfile
the name(s) of input file(s)
5. You could use an ffmpeg Android wrapper, using the syntax and approach explained here
This guy used the JLayer library in a project quite similar to yours. He also provides a guide on how to integrate that library into your Android application by recompiling the JAR directly.
Paraphrasing his code, it is easy to accomplish your task:
public static byte[] decode(String path, int startMs, int maxMs)
throws IOException, com.mindtherobot.libs.mpg.DecoderException {
ByteArrayOutputStream outStream = new ByteArrayOutputStream(1024);
float totalMs = 0;
boolean seeking = true;
File file = new File(path);
InputStream inputStream = new BufferedInputStream(new FileInputStream(file), 8 * 1024);
try {
Bitstream bitstream = new Bitstream(inputStream);
Decoder decoder = new Decoder();
boolean done = false;
while (! done) {
Header frameHeader = bitstream.readFrame();
if (frameHeader == null) {
done = true;
} else {
totalMs += frameHeader.ms_per_frame();
if (totalMs >= startMs) {
seeking = false;
}
if (! seeking) {
SampleBuffer output = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream);
if (output.getSampleFrequency() != 44100
|| output.getChannelCount() != 2) {
throw new com.mindtherobot.libs.mpg.DecoderException("mono or non-44100 MP3 not supported");
}
short[] pcm = output.getBuffer();
for (short s : pcm) {
outStream.write(s & 0xff);
outStream.write((s >> 8 ) & 0xff);
}
}
if (totalMs >= (startMs + maxMs)) {
done = true;
}
}
bitstream.closeFrame();
}
return outStream.toByteArray();
} catch (BitstreamException e) {
throw new IOException("Bitstream error: " + e);
} catch (DecoderException e) {
Log.w(TAG, "Decoder error", e);
throw new com.mindtherobot.libs.mpg.DecoderException(e);
} finally {
IOUtils.safeClose(inputStream);
}
}
public static byte[] mix(String path1, String path2)
        throws IOException, com.mindtherobot.libs.mpg.DecoderException {
    byte[] pcm1 = decode(path1, 0, 60000);
    byte[] pcm2 = decode(path2, 0, 60000);
    int len1 = pcm1.length;
    int len2 = pcm2.length;
    byte[] pcmL;
    byte[] pcmS;
    int lenL; // length of the longest
    int lenS; // length of the shortest
    if (len2 > len1) {
        lenL = len2;
        pcmL = pcm2;
        lenS = len1;
        pcmS = pcm1;
    } else {
        lenL = len1;
        pcmL = pcm1;
        lenS = len2;
        pcmS = pcm2;
    }
for (int idx = 0; idx < lenL; idx++) {
int sample;
if (idx >= lenS) {
sample = pcmL[idx];
} else {
sample = pcmL[idx] + pcmS[idx];
}
sample=(int)(sample*.71);
if (sample>127) sample=127;
if (sample<-128) sample=-128;
pcmL[idx] = (byte) sample;
}
return pcmL;
}
Note that I added attenuation and clipping in the last lines: you always have to do both when mixing two waveforms.
If you don't have memory/time constraints, you can build an int[] of the summed samples and then work out the best attenuation to avoid clipping.
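A sketch of that two-pass idea, assuming the decoded data has been assembled into 16-bit samples (a short[] per file) rather than the raw byte[] used above:
short[] mixWithAutoAttenuation(short[] a, short[] b) {
    int n = Math.max(a.length, b.length);
    int[] sum = new int[n];
    int peak = 1;
    for (int i = 0; i < n; i++) {
        sum[i] = (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0); // sum without clipping
        peak = Math.max(peak, Math.abs(sum[i]));
    }
    // attenuate only as much as needed to keep the peak inside the 16-bit range
    float gain = peak > Short.MAX_VALUE ? (float) Short.MAX_VALUE / peak : 1f;
    short[] out = new short[n];
    for (int i = 0; i < n; i++) {
        out[i] = (short) (sum[i] * gain);
    }
    return out;
}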
To merge (overlap) two sound files, you can use this FFmpeg library.
Here is the documentation.
In their sample you can just enter the command you want, so let's talk about the command we need.
-i [FISRST_FILE_PATH] -i [SECOND_FILE_PATH] -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 [OUTPUT_FILE_PATH]
For the first and second file paths, use the absolute path of the sound file.
1- If it is on storage, it is a subfolder of Environment.getExternalStorageDirectory().getAbsolutePath()
2- If it is in assets, it should be a subpath of file:///android_asset/
For the output path, make sure to add the file extension,
ex.
String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/File Name.mp3"
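For example, the argument list passed to whichever wrapper you bundle could be built like this (the file names are placeholders; the execution call itself depends on the wrapper you use):
String base = Environment.getExternalStorageDirectory().getAbsolutePath();
String[] cmd = new String[] {
        "-i", base + "/first.mp3",      // placeholder input 1
        "-i", base + "/second.mp3",     // placeholder input 2
        "-filter_complex", "amerge",
        "-ac", "2",
        "-c:a", "libmp3lame",
        "-q:a", "4",
        base + "/mixed.mp3"             // placeholder output, extension included
};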
I didn't find a proper solution, but we can do a trick here. :)
You can assign both MP3 files to two different MediaPlayer objects, then play both files at the same time with a button. Compare the two MP3 files to find the longer duration, and after that use AudioRecord to record for that duration. That will solve your problem. I know it's not the right way, but I hope it helps you. :)
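In code, that trick is essentially two MediaPlayer instances started back to back (the paths are placeholders):
MediaPlayer playerA = new MediaPlayer();
MediaPlayer playerB = new MediaPlayer();
try {
    playerA.setDataSource(pathToFirstMp3);   // placeholder path
    playerB.setDataSource(pathToSecondMp3);  // placeholder path
    playerA.prepare();
    playerB.prepare();
} catch (IOException e) {
    e.printStackTrace();
}
playerA.start(); // starting both right after each other plays them together
playerB.start();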

AAC format in Android

I have a problem with audio on android.
Short Question:
I need to play and record audio in AAC format on all Android devices.
I found that this should be possible starting from API 10; on my BLU device (2.3.5) it works using MediaRecorder and MediaPlayer.
But on an HTC Nexus One it doesn't work.
Do you have any suggestions?
Long question:
To record and play audio in AAC format I'm using the following code. It's pretty simple and crude, but it works for testing.
String pathForAppFiles = getFilesDir().getAbsolutePath();
pathForAppFiles += "/bla.mp4";
if (audioRecorder == null) {
    File file = new File(pathForAppFiles);
    if (file.exists())
        file.delete();
    audioRecorder = new MediaRecorder();
    audioRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    audioRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    audioRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    audioRecorder.setOutputFile(pathForAppFiles);
    try {
        audioRecorder.prepare();
    } catch (IllegalStateException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    audioRecorder.start();
} else {
    audioRecorder.stop();
    audioRecorder.release();
    audioRecorder = null;
    new AudioUtils().playSound(pathForAppFiles);
}
new AudioUtils().playSound(pathForAppFiles); creates a MediaPlayer and plays the sound from the file.
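The playSound(...) helper itself is not shown in the question; a minimal hypothetical version using MediaPlayer could look like this:
public void playSound(String path) {
    MediaPlayer player = new MediaPlayer();
    try {
        player.setDataSource(path);
        player.prepare();
        player.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
    player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mp) {
            mp.release(); // free the player once playback finishes
        }
    });
}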
To make it work on the Nexus I tried aac-decoder, but it doesn't play the file to the end (it plays only 6 seconds of a 10-second file), and it doesn't play the sound recorded by the code above.
I also tried to install FFmpeg, but I don't have the experience to make that library work.
So can you recommend something?
I resolved this issue by changing the audio type to MP3, because some devices (like the Kindle Fire) do not play AAC from anywhere.
My recommendation is:
If you want cross-platform sounds, use MP3. You can convert any sound to MP3 with the LAME encoder.

Android AudioRecord Supported Sampling Rates

I'm trying to figure out what sampling rates are supported for phones running Android 2.2 and greater. We'd like to sample at a rate lower than 44.1kHz and not have to resample.
I know that all phones support 44100 Hz, but I was wondering if there's a table out there that shows which sampling rates are valid for specific phones. I've seen Android's documentation (http://developer.android.com/reference/android/media/AudioRecord.html) but it doesn't help much.
Has anyone found a list of these sampling rates?
The original poster has probably long since moved on, but I'll post this in case anyone else finds this question.
Unfortunately, in my experience, each device can support different sample rates. The only sure way of knowing which sample rates a device supports is to test them individually, checking that AudioRecord.getMinBufferSize() returns a valid (positive) minimum buffer size rather than a negative error code.
public void getValidSampleRates() {
for (int rate : new int[] {8000, 11025, 16000, 22050, 44100}) { // add the rates you wish to check against
int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
if (bufferSize > 0) {
// buffer size is valid, Sample rate supported
}
}
}
Android has the AudioManager.getProperty() function to acquire the preferred buffer size and preferred sample rate for audio recording and playback. Of course, AudioManager.getProperty() is not available on API levels below 17. Here's an example of how to use this API.
// To get preferred buffer size and sampling rate.
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Log.d("Buffer Size and sample rate", "Size :" + size + " & Rate: " + rate);
Though it's a late answer, I thought this might be useful.
Unfortunately not even all phones support the supposedly guaranteed 44.1kHz rate :(
I've been testing a Samsung Galaxy Y (GT-S5360L), and if you record from the Camcorder source (ambience microphone), the only supported rates are 8 kHz and 16 kHz. Recording at 44.1 kHz produces utter garbage, and at 11.025 kHz it produces a pitch-altered recording with slightly less duration than the original sound.
Moreover, both strategies suggested by @Yahma and @Tom fail on this particular phone, as it is possible to receive a positive minimum buffer size for an unsupported configuration; worse, I've been forced to reset the phone to get the audio stack working again after attempting to use an AudioRecord instance initialized from parameters that produce a supposedly valid (non-exception-raising) AudioTrack or AudioRecord instance.
I'm frankly a little bit worried about the problems I envision when releasing a sound app into the wild. In our case, we are being forced to introduce a costly sample-rate-conversion layer if we expect to reuse our algorithms (which expect a 44.1 kHz recording rate) on this particular phone model.
:(
I have a phone (Acer Z3) where I get a positive buffer size returned from AudioRecord.getMinBufferSize(...) when testing 11025 Hz. However, if I subsequently run
audioRecord = new AudioRecord(...);
int state = audioRecord.getState();
if (state != AudioRecord.STATE_INITIALIZED) ...
I can see that this sampling rate in fact does not represent a valid configuration (as pointed out by user1222021 on Jun 5 '12). So my solution is to run both tests to find a valid sampling rate.
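Combining the two checks into one helper might look like this (the mono/16-bit parameters are arbitrary choices for the test):
boolean isSampleRateSupported(int sampleRate) {
    int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (bufferSize <= 0) {
        return false; // ERROR or ERROR_BAD_VALUE
    }
    try {
        AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        boolean ok = record.getState() == AudioRecord.STATE_INITIALIZED;
        record.release();
        return ok;
    } catch (IllegalArgumentException e) {
        return false; // constructor rejected the configuration
    }
}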
This method gives the minimum audio sample rate supported by your device.
NOTE : You may reverse the for loop to get the maximum sample rate supported by your device (Don't forget to change the method name).
NOTE 2 : Though the Android docs say sample rates up to 48000 Hz (48 kHz) are supported, I have added all the possible sampling rates (as listed on Wikipedia), since who knows, new devices may record UHD audio at higher sampling rates.
private int getMinSupportedSampleRate() {
/*
* Valid Audio Sample rates
*
* #see <a
* href="http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29"
* >Wikipedia</a>
*/
final int validSampleRates[] = new int[] { 8000, 11025, 16000, 22050,
32000, 37800, 44056, 44100, 47250, 48000, 50000, 50400, 88200,
96000, 176400, 192000, 352800, 2822400, 5644800 };
/*
* Selecting default audio input source for recording since
* AudioFormat.CHANNEL_CONFIGURATION_DEFAULT is deprecated and selecting
* default encoding format.
*/
for (int i = 0; i < validSampleRates.length; i++) {
int result = AudioRecord.getMinBufferSize(validSampleRates[i],
AudioFormat.CHANNEL_IN_DEFAULT,
AudioFormat.ENCODING_DEFAULT);
if (result != AudioRecord.ERROR
&& result != AudioRecord.ERROR_BAD_VALUE && result > 0) {
// return the mininum supported audio sample rate
return validSampleRates[i];
}
}
// If none of the sample rates are supported return -1 handle it in
// calling method
return -1;
}
I'd like to provide an alternative to Yahma's answer.
I agree with his/her proposition that it must be tested (though presumably it varies according to the model, not the device), but using getMinBufferSize seems a bit indirect to me.
In order to test whether a desired sample rate is supported I suggest attempting to construct an AudioTrack instance with the desired sample rate - if the specified sample rate is not supported you will get an exception of the form:
"java.lang.IllegalArgumentException: 2756Hz is not a supported sample rate"
public class Bigestnumber extends AsyncTask<String, String, String> {
    ProgressDialog pdLoading = new ProgressDialog(MainActivity.this);

    @Override
    protected String doInBackground(String... params) {
        final int validSampleRates[] = new int[]{
                5644800, 2822400, 352800, 192000, 176400, 96000,
                88200, 50400, 50000, 48000, 47250, 44100, 44056, 37800, 32000, 22050, 16000, 11025, 4800, 8000};
        TrueMan = new ArrayList<Integer>();
        for (int sample : validSampleRates) {
            if (validSampleRate(sample) == true) {
                TrueMan.add(sample);
            }
        }
        return null;
    }

    @Override
    protected void onPostExecute(String result) {
        Integer largest = Collections.max(TrueMan);
        System.out.println("Largest " + String.valueOf(largest));
    }
}
public boolean validSampleRate(int sample_rate) {
AudioRecord recorder = null;
try {
int bufferSize = AudioRecord.getMinBufferSize(sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
} catch(IllegalArgumentException e) {
return false;
} finally {
if(recorder != null)
recorder.release();
}
return true;
}
This code will give you the maximum supported sample rate on your Android OS. Just declare ArrayList<Integer> TrueMan; at the beginning of the class. Then you can use a high sample rate in AudioTrack and AudioRecord to get better sound quality. Reference.
Just some updated information here. I spent some time trying to get microphone recording to work on Android 6 (4.4 KitKat was fine). The error shown was the same one I got on 4.4 when using the wrong settings for sample rate/PCM etc. But my problem was in fact that the permissions in AndroidManifest.xml are no longer sufficient to request access to the microphone; this now also needs to be done at run time:
https://developer.android.com/training/permissions/requesting.html
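For completeness, a minimal sketch of that runtime request using the support/androidx helpers (this assumes the code lives in an Activity; the request code value is arbitrary):
private static final int REQUEST_RECORD_AUDIO = 1;

private void ensureMicPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[] { Manifest.permission.RECORD_AUDIO },
                REQUEST_RECORD_AUDIO);
    }
}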
