I have a screen recording app that uses a MediaCodec encoder to encode the video frames. Here's how I retrieve the video encoder:
videoCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
I then try to determine the best bitrate mode that this encoder supports, my order of preference being Constant Quality mode, then Variable Bitrate mode, then Constant Bitrate mode. This is how I try to do it:
MediaCodecInfo.CodecCapabilities capabilities = videoCodec.getCodecInfo().getCapabilitiesForType(MediaFormat.MIMETYPE_VIDEO_AVC);
MediaCodecInfo.EncoderCapabilities encoderCapabilities = capabilities.getEncoderCapabilities();
if (encoderCapabilities.isBitrateModeSupported(MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CQ)) {
    Timber.i("Setting bitrate mode to constant quality");
    videoFormat.setInteger(MediaFormat.KEY_BITRATE_MODE, MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CQ);
} else if (encoderCapabilities.isBitrateModeSupported(MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR)) {
    Timber.w("Setting bitrate mode to variable bitrate");
    videoFormat.setInteger(MediaFormat.KEY_BITRATE_MODE, MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR);
} else if (encoderCapabilities.isBitrateModeSupported(MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR)) {
    Timber.w("Setting bitrate mode to constant bitrate");
    videoFormat.setInteger(MediaFormat.KEY_BITRATE_MODE, MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR);
}
Running this on my Samsung Galaxy S7 ends up selecting VBR mode, i.e. Constant Quality mode is supposedly not supported. However, if I just set the BITRATE_MODE to Constant Quality, it not only works but in fact produces a better quality video than VBR mode.
So, if Constant Quality mode is apparently supported by this encoder, why do I get a false negative from isBitrateModeSupported()? Am I missing something here?
It's super late, but maybe I can help people that arrive here in the future.
At least up to Android API level 26, the framework will only report BITRATE_MODE_CQ as supported for MIMETYPE_AUDIO_FLAC codecs.
I don't know why, but this is hard-coded into the class:
/**
 * Query whether a bitrate mode is supported.
 */
public boolean isBitrateModeSupported(int mode) {
    for (Feature feat: bitrates) {
        if (mode == feat.mValue) {
            return (mBitControl & (1 << mode)) != 0;
        }
    }
    return false;
}

private void applyLevelLimits() {
    String mime = mParent.getMimeType();
    if (mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_FLAC)) {
        mComplexityRange = Range.create(0, 8);
        mBitControl = (1 << BITRATE_MODE_CQ);
    } else if (mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_AMR_NB)
            || mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_AMR_WB)
            || mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_G711_ALAW)
            || mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_G711_MLAW)
            || mime.equalsIgnoreCase(MediaFormat.MIMETYPE_AUDIO_MSGSM)) {
        mBitControl = (1 << BITRATE_MODE_CBR);
    }
}
Considering that BITRATE_MODE_CQ is 0, isBitrateModeSupported() will only return true for it when MIMETYPE_AUDIO_FLAC is selected.
That is why it returns false, at least up to Android API level 26.
Why it is coded like this, I don't know.
I guess a simple way to check is to try to create and configure the encoder with the format you wish and catch any exception it throws.
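For example, a rough sketch of that probe-and-fall-back idea (the format passed in is whatever you already build elsewhere; the assumption being tested is that configure() rejects a CQ format on encoders that can't do it):

// Sketch only: probe CQ support by actually configuring the encoder,
// instead of trusting isBitrateModeSupported().
MediaCodec createCqOrVbrEncoder(MediaFormat videoFormat) throws IOException {
    MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    try {
        videoFormat.setInteger(MediaFormat.KEY_BITRATE_MODE,
                MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CQ);
        codec.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    } catch (MediaCodec.CodecException | IllegalArgumentException e) {
        // CQ was rejected: reset (API 21+) and reconfigure with VBR instead.
        codec.reset();
        videoFormat.setInteger(MediaFormat.KEY_BITRATE_MODE,
                MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR);
        codec.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    }
    return codec;
}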
I'm developing an Android native app where I record the screen video stream (encoding it with the native AMediaCodec library to video/avc) and mux it with an AAC (audio/mp4a-latm) audio track. My code works just fine on several devices, but I have problems on some others (Huawei P8 and P10 lite, running Android 6.0.0 and 7.0 respectively, and a Nexus 5 running Android 6.0.1). The issue is that, whenever I try to add the second track to the muxer (no matter the order I add them in), it fails, returning a -10000 error code.
I've simplified the problem, trying to just mux an audio and a video file together; the results are the same. In this simplified version I use two AMediaExtractors to get the audio and video formats to configure the AMediaMuxer, but when I add the second track I still get an error. Here's the code:
const auto videoSample = "videosample.mp4";
const auto audioSample = "audiosample.aac";
const std::string filePath = "muxed_file.mp4";
auto* extractorV = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorV, videoSample);
AMediaExtractor_selectTrack(extractorV, 0U); // here I take care to select the right "video/avc" track
auto* videoFormat = AMediaExtractor_getTrackFormat(extractorV, 0U);
auto* extractorA = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorA, audioSample);
AMediaExtractor_selectTrack(extractorA, 0U); // here I take care to select the right "mp4a-latm" track
auto* audioFormat = AMediaExtractor_getTrackFormat(extractorA, 0U);
auto fd = open(filePath.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
auto* muxer = AMediaMuxer_new(fd, AMEDIAMUXER_OUTPUT_FORMAT_MPEG_4);
auto videoTrack = AMediaMuxer_addTrack(muxer, videoFormat); // the operation succeeds: videoTrack is 0
auto audioTrack = AMediaMuxer_addTrack(muxer, audioFormat); // error: audioTrack is -10000
AMediaExtractor_seekTo(extractorV, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaExtractor_seekTo(extractorA, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaMuxer_start(muxer);
Is there something wrong with my code? Is it something that is not supposed to work on Android prior to 8, or is it pure coincidence? I've read a lot of posts (especially by @fadden) here on SO, but I'm not able to figure it out.
Let me give you some context:
the failure is independent of the order in which I add the two tracks: it is always the second AMediaMuxer_addTrack() call that fails
the audio and video tracks should be OK: when I mux only one of the tracks, everything works well even on the Huaweis and the Nexus 5, and I obtain correct output files with either the audio or the video track alone
I tried to move the AMediaExtractor_seekTo() calls to other positions, without success
the same code works just fine on other devices (OnePlus 5 and Nokia 7 plus, both running Android >= 8.0)
Just for completeness, this is the code I later use to obtain the output mp4 file:
AMediaMuxer_start(muxer);
// mux the VIDEO track
std::array<uint8_t, 256U * 1024U> videoBuf;
AMediaCodecBufferInfo videoBufInfo{};
videoBufInfo.flags = AMediaExtractor_getSampleFlags(extractorV);
bool videoEos{};
while (!videoEos) {
    auto ts = AMediaExtractor_getSampleTime(extractorV);
    videoBufInfo.presentationTimeUs = std::max(videoBufInfo.presentationTimeUs, ts);
    videoBufInfo.size = AMediaExtractor_readSampleData(extractorV, videoBuf.data(), videoBuf.size());
    if (videoBufInfo.presentationTimeUs == -1 || videoBufInfo.size < 0) {
        videoEos = true;
    } else {
        AMediaMuxer_writeSampleData(muxer, videoTrack, videoBuf.data(), &videoBufInfo);
        AMediaExtractor_advance(extractorV);
    }
}
// mux the audio track
std::array<uint8_t, 256U * 1024U> audioBuf;
AMediaCodecBufferInfo audioBufInfo{};
audioBufInfo.flags = AMediaExtractor_getSampleFlags(extractorA);
bool audioEos{};
while (!audioEos) {
    audioBufInfo.size = AMediaExtractor_readSampleData(extractorA, audioBuf.data(), audioBuf.size());
    if (audioBufInfo.size < 0) {
        audioEos = true;
    } else {
        audioBufInfo.presentationTimeUs = AMediaExtractor_getSampleTime(extractorA);
        AMediaMuxer_writeSampleData(muxer, audioTrack, audioBuf.data(), &audioBufInfo);
        AMediaExtractor_advance(extractorA);
    }
}
AMediaMuxer_stop(muxer);
AMediaMuxer_delete(muxer);
close(fd);
AMediaFormat_delete(audioFormat);
AMediaExtractor_delete(extractorA);
AMediaFormat_delete(videoFormat);
AMediaExtractor_delete(extractorV);
I am using the MediaCodec API to decode an H.264 video stream, using a SurfaceView as the output surface. The decoder is configured successfully without any errors. When I finally try to render the decoded video frame onto the SurfaceView using releaseOutputBuffer(bufferIndex, true), it throws MediaCodec.CodecException; however, the video is rendered correctly.
Calling getDiagnosticInfo() and getErrorCode() on the exception object returns an error code of -34, but I can't find what this error code means in the docs. The documentation is also very unclear about when this exception is thrown.
Has anyone faced this exception/error code before? How can I fix this?
PS: Although the video works fine, this exception is thrown on every releaseOutputBuffer(bufferIndex, true) call.
Android MediaCodec behaviour is very dependent on the device vendor. Samsung is incredibly problematic; other devices running the same code will run fine. This has been my life for the last 6 months.
The best approach, although it can feel wrong, is to try + catch + retry. There are 4 distinct places where the MediaCodec will throw exceptions:
Configuration - NativeDecoder.Configure(...);
Start - NativeDecoder.Start();
Render output - NativeDecoder.ReleaseOutputBuffer(...);
Input - codec.QueueInputBuffer(...);
NOTE: my code is in Xamarin, but the calls map very closely to raw Java.
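For the render-output case specifically, a plain-Java guard might look roughly like this (a sketch only; decoder, outputIndex and TAG are whatever your code already has, and resetDecoder() is a hypothetical helper that performs the flush/stop/release/recreate sequence shown further down):

try {
    decoder.releaseOutputBuffer(outputIndex, true /* render */);
} catch (MediaCodec.CodecException e) {
    // Log and decide whether to retry or tear the codec down.
    Log.w(TAG, "releaseOutputBuffer failed: " + e.getDiagnosticInfo(), e);
    if (!e.isTransient() && !e.isRecoverable()) {
        resetDecoder();   // hypothetical helper: flush/stop/release + recreate
    }
} catch (IllegalStateException e) {
    resetDecoder();
}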
The way you configure your format description also matters. The MediaCodec can crash on Nexus devices if you don't specify:
formatDescription.SetInteger(MediaFormat.KeyMaxInputSize, currentPalette.Width * currentPalette.Height);
When you catch any exception you will need to ensure the MediaCodec is reset. Unfortunately reset() isn't available on older API levels, but you can simulate the same effect with:
#region Close + Release Native Decoder

void StopAndReleaseNativeDecoder() {
    FlushNativeDecoder();
    StopNativeDecoder();
    ReleaseNativeDecoder();
}

void FlushNativeDecoder() {
    if (NativeDecoder != null) {
        try {
            NativeDecoder.Flush();
        } catch {
            // ignore
        }
    }
}

void StopNativeDecoder() {
    if (NativeDecoder != null) {
        try {
            NativeDecoder.Stop();
        } catch {
            // ignore
        }
    }
}

void ReleaseNativeDecoder() {
    while (NativeDecoder != null) {
        try {
            NativeDecoder.Release();
        } catch {
            // ignore
        } finally {
            NativeDecoder = null;
        }
    }
}

#endregion
Once you've caught the error, the next time you pass new input you can check:
if (!DroidDecoder.IsRunning && streamView != null && streamView.VideoLayer.IsAvailable) {
    DroidDecoder.StartDecoder(streamView.VideoLayer.SurfaceTexture);
}
DroidDecoder.DecodeH264FrameBuffer(payload, payloadSize, frameDuration, presentationTime, isKeyFrame);
Rendering to a TextureView seems to be the most stable option currently, but the device fragmentation has really hurt Android in this area. We have found cheaper devices such as the Tesco Hudl to be among the most stable for video; we even had up to 21 concurrent videos on screen at one time. A Samsung S4 can manage around 4-6 depending on the resolution/fps, but something like the HTC can work as well as the Hudl. It's been a wake-up call and made me realise Samsung devices are literally copying Apple's design and twiddling with the Android SDK, breaking a lot of functionality along the way.
It is most probably an issue with the codec you are using. Try using something like this to select a codec that explicitly supports your MIME type:
private static MediaCodecInfo selectCodec(String mime) {
    int numCodecs = MediaCodecList.getCodecCount();
    for (int i = 0; i < numCodecs; i++) {
        MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
        if (!codecInfo.isEncoder()) {
            continue;
        }
        String[] types = codecInfo.getSupportedTypes();
        for (int j = 0; j < types.length; j++) {
            if (types[j].equalsIgnoreCase(mime)) {
                return codecInfo;
            }
        }
    }
    return null;
}
And then setting your encoder with:
MediaCodecInfo codecInfo = selectCodec(MIME_TYPE);
mEncoder = MediaCodec.createCodecByName(codecInfo.getName());
That may resolve your error by ensuring that the Codec you've chosen is fully supported.
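If you can target API 21+, another option is to let the framework pick an encoder for the exact format you plan to use; a sketch (width and height are whatever your format needs, and createByCodecName() can throw IOException):

MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
MediaCodecList codecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
String encoderName = codecList.findEncoderForFormat(format);
if (encoderName != null) {
    mEncoder = MediaCodec.createByCodecName(encoderName);
}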
I'm working on video transcoding in Android, using the standard approach from these samples to extract/decode a video. I tested the same process with different videos on different devices, and I found a problem with the decoder's input/output frame count.
For the timecode issues described in this question, I use a queue to record the sample times of the extracted video samples, and check the queue when I get a decoded frame output, as in the following code:
(I omit the encoding-related code to make it clearer)
Queue<Long> sample_time_queue = new LinkedList<Long>();
....
// in transcoding loop
if (is_decode_input_done == false)
{
    int decode_input_index = decoder.dequeueInputBuffer(TIMEOUT_USEC);
    if (decode_input_index >= 0)
    {
        ByteBuffer decoder_input_buffer = decode_input_buffers[decode_input_index];
        int sample_size = extractor.readSampleData(decoder_input_buffer, 0);
        if (sample_size < 0)
        {
            decoder.queueInputBuffer(decode_input_index, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            is_decode_input_done = true;
        }
        else
        {
            long sample_time = extractor.getSampleTime();
            decoder.queueInputBuffer(decode_input_index, 0, sample_size, sample_time, 0);
            sample_time_queue.offer(sample_time);
            extractor.advance();
        }
    }
    else
    {
        DumpLog(TAG, "Decoder dequeueInputBuffer timed out! Try again later");
    }
}

....

if (is_decode_output_done == false)
{
    int decode_output_index = decoder.dequeueOutputBuffer(decode_buffer_info, TIMEOUT_USEC);
    switch (decode_output_index)
    {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
        {
            ....
            break;
        }
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
        {
            ....
            break;
        }
        case MediaCodec.INFO_TRY_AGAIN_LATER:
        {
            DumpLog(TAG, "Decoder dequeueOutputBuffer timed out! Try again later");
            break;
        }
        default:
        {
            ByteBuffer decode_output_buffer = decode_output_buffers[decode_output_index];
            long ptime_us = decode_buffer_info.presentationTimeUs;
            boolean is_decode_EOS = ((decode_buffer_info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0);
            if (is_decode_EOS)
            {
                // Decoder gives an EOS output.
                is_decode_output_done = true;
                ....
            }
            else
            {
                // The frame time may not be consistent for some videos.
                // As a workaround, we use a frame time queue to guard this.
                long sample_time = sample_time_queue.poll();
                if (sample_time == ptime_us)
                {
                    // Very good, the decoder input/output time is consistent.
                }
                else
                {
                    // If the decoder input/output frame count is consistent, we can trust the sample time.
                    ptime_us = sample_time;
                }
                // process this frame
                ....
            }
            decoder.releaseOutputBuffer(decode_output_index, false);
        }
    }
}
In some cases, the queue can "correct" the PTS if the decoder gives erroneous values (e.g. a lot of 0s). However, there are still some issues with the decoder input/output frame count.
On an HTC One 801e device, I use the codec OMX.qcom.video.decoder.avc to decode the video (with MIME type video/avc). The sample time and PTS match well for the frames, except the last one.
For example, if the extractor feeds 100 frames and then EOS to the decoder, the first 99 decoded frames have exactly the same time values, but the last frame is missing and I get the EOS output from the decoder. I have tested different videos encoded by the built-in camera, by the ffmpeg muxer, and by a video processing application on Windows. All of them have the last frame disappear.
On some pads with the OMX.MTK.VIDEO.DECODER.AVC codec, things become more confusing. Some videos have good PTS from the decoder and the input/output frame count is correct (i.e. the queue is empty when the decoding is done). Some videos have a consistent input/output frame count but bad PTS in the decoder output (and I can still correct them with the queue). For some videos, a lot of frames are missing during decoding. For example, the extractor gets 210 frames from a 7-second video, but the decoder only outputs the last 180 frames. It is impossible to recover the PTS using the same workaround.
Is there any way to predict the input/output frame count for a MediaCodec decoder? Or, more precisely, to know which frame(s) are dropped by the decoder while the extractor feeds it video samples with correct sample times?
Same basic story as in the other question. Pre-4.3, there were no tests confirming that every frame fed to an encoder or decoder came out the other side. I recall that some devices would reliably drop the last frame in certain tests until the codecs were fixed in 4.3.
I didn't search for a workaround at the time, so I don't know if one exists. Delaying before sending EOS might help if it's causing something to shut down early.
I don't believe I ever saw a device drop large numbers of frames. This seems like an unusual case, as it would have been noticeable in any apps that exercised MediaCodec in similar ways even without careful testing.
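If you mainly need to detect the problem rather than fix it, the sample-time queue from the question already tells you how many frames went missing; a rough sketch of the check at EOS, reusing the question's variables:

// Sketch: once the decoder signals EOS, anything still left in the queue
// corresponds to input frames that never produced an output frame.
if (is_decode_EOS) {
    int dropped = sample_time_queue.size();
    if (dropped > 0) {
        DumpLog(TAG, "Decoder dropped " + dropped + " frame(s)");
    }
    is_decode_output_done = true;
}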
I tried to get the SLDeviceVolumeItf interface of the RecorderObject on Android but I got the error: SL_RESULT_FEATURE_UNSUPPORTED.
I read that the Android implementation of OpenSL ES does not support volume setting for the AudioRecorder. Is that true?
If yes, is there a workaround? I have a VoIP application that does not work well on the Galaxy Nexus because of the very high mic gain.
I also tried to get the SL_IID_ANDROIDCONFIGURATION interface to set the stream type to the new VOICE_COMMUNICATION audio source, but again I get error 12 (feature unsupported).
// create audio recorder
const SLInterfaceID id[2] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_ANDROIDCONFIGURATION };
const SLboolean req[2] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
result = (*engine)->CreateAudioRecorder(engine, &recorderObject, &audioSrc, &audioSnk, 2, id, req);
if (SL_RESULT_SUCCESS != result) {
    return false;
}

SLAndroidConfigurationItf recorderConfig;
result = (*recorderObject)->GetInterface(recorderObject, SL_IID_ANDROIDCONFIGURATION, &recorderConfig);
if (result != SL_RESULT_SUCCESS) {
    error("failed to get SL_IID_ANDROIDCONFIGURATION interface. e == %d", result);
}
The recorderObject is created but I can't get the SL_IID_ANDROIDCONFIGURATION interface.
I tried it on Galaxy Nexus (ICS), HTC sense (ICS) and Motorola Blur (Gingerbread).
I'm using NDK version 6.
Now I can get the interface. I had to use NDK 8 and target platform android-14; when I tried to use 10 as the target, I got an error compiling the native code (dirent.h was not found).
I ran into a similar problem. My results were returning the error code for not implemented. However, my problem was that I wasn't creating the recorder with the SL_IID_ANDROIDCONFIGURATION interface flag.
apiLvl = (*env)->GetStaticIntField(env, versionClass, sdkIntFieldID);

SLint32 streamType = SL_ANDROID_RECORDING_PRESET_GENERIC;
if (apiLvl > 10) {
    streamType = SL_ANDROID_RECORDING_PRESET_VOICE_COMMUNICATION;
    I("set SL_ANDROID_RECORDING_PRESET_VOICE_COMMUNICATION");
}

result = (*recorderConfig)->SetConfiguration(recorderConfig, SL_ANDROID_KEY_RECORDING_PRESET, &streamType, sizeof(SLint32));
if (SL_RESULT_SUCCESS != result) {
    return 0;
}
I too tried to find a way to change the gain in OpenSL; it looks like there is no API/interface for that. I implemented a workaround with a simple shift-based gain function:
void multiply_gain(void *buffer, int bytes, int gain_val)
{
    int i = 0, j = 0;
    short *buffer_samples = (short*)buffer;
    for (i = 0, j = 0; i < bytes; i += 2, j++)
    {
        buffer_samples[j] = (buffer_samples[j] >> gain_val);
    }
}
But here the gain is multiplied/divided (based on << or >>) by a factor of 2. If you need a smoother gain curve, you need to write a more complex digital gain function.
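For illustration, the same idea with a fractional gain factor and clipping, written here in Java terms for a 16-bit PCM buffer (the native version would be analogous):

// Apply a fractional gain to 16-bit PCM samples and clip to avoid wrap-around.
void applyGain(short[] samples, int count, float gain) {
    for (int i = 0; i < count; i++) {
        int scaled = Math.round(samples[i] * gain);
        if (scaled > Short.MAX_VALUE) scaled = Short.MAX_VALUE;
        if (scaled < Short.MIN_VALUE) scaled = Short.MIN_VALUE;
        samples[i] = (short) scaled;
    }
}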
I'm trying to figure out what sampling rates are supported for phones running Android 2.2 and greater. We'd like to sample at a rate lower than 44.1kHz and not have to resample.
I know that all phones support 44100 Hz, but I was wondering if there's a table out there that shows what sampling rates are valid for specific phones. I've seen Android's documentation (http://developer.android.com/reference/android/media/AudioRecord.html) but it doesn't help much.
Has anyone found a list of these sampling rates??
The original poster has probably long since moved on, but I'll post this in case anyone else finds this question.
Unfortunately, in my experience, each device can support different sample rates. The only sure way of knowing what sample rates a device supports is to test them individually, by checking that AudioRecord.getMinBufferSize() does not return a negative value (which would indicate an error) and instead returns a valid minimum buffer size.
public void getValidSampleRates() {
    for (int rate : new int[] {8000, 11025, 16000, 22050, 44100}) {  // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0) {
            // buffer size is valid, Sample rate supported
        }
    }
}
Android has the AudioManager.getProperty() function to acquire the minimum buffer size and the preferred sample rate for audio recording and playback. But of course, AudioManager.getProperty() is not available on API level < 17. Here's an example of how to use this API.
// To get preferred buffer size and sampling rate.
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Log.d("Buffer Size and sample rate", "Size :" + size + " & Rate: " + rate);
Though it's a late answer, I thought this might be useful.
Unfortunately, not even all phones support the supposedly guaranteed 44.1 kHz rate :(
I've been testing a Samsung Galaxy Y (GT-S5360L) and, if you record from the Camcorder source (ambience microphone), the only supported rates are 8 kHz and 16 kHz. Recording at 44.1 kHz produces utter garbage, and at 11.025 kHz it produces a pitch-altered recording with slightly less duration than the original sound.
Moreover, both strategies suggested by @Yahma and @Tom fail on this particular phone, as it is possible to receive a positive minimum buffer size for an unsupported configuration; worse, I've been forced to reset the phone to get the audio stack working again after attempting to use an AudioRecord instance initialized from parameters that produce a supposedly valid (non-exception-raising) AudioTrack or AudioRecord instance.
I'm frankly a little bit worried about the problems I envision when releasing a sound app into the wild. In our case, we are being forced to introduce a costly sample-rate-conversion layer if we expect to reuse our algorithms (which expect a 44.1 kHz recording rate) on this particular phone model.
:(
I have a phone (Acer Z3) where I get a positive buffer size returned from AudioRecord.getMinBufferSize(...) when testing 11025 Hz. However, if I subsequently run
audioRecord = new AudioRecord(...);
int state = audioRecord.getState();
if (state != AudioRecord.STATE_INITIALIZED) ...
I can see that this sampling rate in fact does not represent a valid configuration (as pointed out by user1222021 on Jun 5 '12). So my solution is to run both tests to find a valid sampling rate.
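Combining the two checks, a sketch of such a probe could look like this (the MIC source and mono 16-bit PCM are just example parameters):

boolean isRecordingRateSupported(int sampleRate) {
    int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (bufferSize <= 0) {
        return false;                 // ERROR or ERROR_BAD_VALUE
    }
    AudioRecord record = null;
    try {
        record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        return record.getState() == AudioRecord.STATE_INITIALIZED;
    } catch (IllegalArgumentException e) {
        return false;
    } finally {
        if (record != null) {
            record.release();
        }
    }
}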
This method gives the minimum audio sample rate supported by your device.
NOTE: You may reverse the for loop to get the maximum sample rate supported by your device (don't forget to change the method name).
NOTE 2: Though the Android docs say that sample rates up to 48000 Hz (48 kHz) are supported, I have added all the possible sampling rates (as listed on Wikipedia), since who knows, new devices may record UHD audio at higher sampling rates.
private int getMinSupportedSampleRate() {
    /*
     * Valid Audio Sample rates
     *
     * @see <a
     * href="http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29"
     * >Wikipedia</a>
     */
    final int validSampleRates[] = new int[] { 8000, 11025, 16000, 22050,
            32000, 37800, 44056, 44100, 47250, 48000, 50000, 50400, 88200,
            96000, 176400, 192000, 352800, 2822400, 5644800 };
    /*
     * Selecting default audio input source for recording since
     * AudioFormat.CHANNEL_CONFIGURATION_DEFAULT is deprecated and selecting
     * default encoding format.
     */
    for (int i = 0; i < validSampleRates.length; i++) {
        int result = AudioRecord.getMinBufferSize(validSampleRates[i],
                AudioFormat.CHANNEL_IN_DEFAULT,
                AudioFormat.ENCODING_DEFAULT);
        if (result != AudioRecord.ERROR
                && result != AudioRecord.ERROR_BAD_VALUE && result > 0) {
            // return the minimum supported audio sample rate
            return validSampleRates[i];
        }
    }
    // If none of the sample rates are supported, return -1 and handle it in
    // the calling method
    return -1;
}
I'd like to provide an alternative to Yahma's answer.
I agree with his/her proposition that it must be tested (though presumably it varies according to the model, not the device), but using getMinBufferSize seems a bit indirect to me.
In order to test whether a desired sample rate is supported, I suggest attempting to construct an AudioTrack instance with the desired sample rate; if the specified sample rate is not supported, you will get an exception of the form:
"java.lang.IllegalArgumentException: 2756Hz is not a supported sample rate"
public class Bigestnumber extends AsyncTask<String, String, String> {

    ProgressDialog pdLoading = new ProgressDialog(MainActivity.this);

    @Override
    protected String doInBackground(String... params) {
        final int validSampleRates[] = new int[]{
                5644800, 2822400, 352800, 192000, 176400, 96000,
                88200, 50400, 50000, 48000, 47250, 44100, 44056, 37800, 32000, 22050, 16000, 11025, 4800, 8000};
        TrueMan = new ArrayList<Integer>();
        for (int sample : validSampleRates) {
            if (validSampleRate(sample) == true) {
                TrueMan.add(sample);
            }
        }
        return null;
    }

    @Override
    protected void onPostExecute(String result) {
        Integer largest = Collections.max(TrueMan);
        System.out.println("Largest " + String.valueOf(largest));
    }
}

public boolean validSampleRate(int sample_rate) {
    AudioRecord recorder = null;
    try {
        int bufferSize = AudioRecord.getMinBufferSize(sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sample_rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    } catch (IllegalArgumentException e) {
        return false;
    } finally {
        if (recorder != null)
            recorder.release();
    }
    return true;
}
This code will give you the maximum supported sample rate on your Android OS. Just declare ArrayList<Integer> TrueMan; at the beginning of the class. Then you can use a high sample rate in AudioTrack and AudioRecord to get better sound quality.
Just some updated information here. I spent some time trying to get recording from the microphone to work with Android 6 (4.4 KitKat was fine). The error shown was the same as I got on 4.4 when using the wrong settings for sample rate/PCM etc. But my problem was in fact that the permissions in AndroidManifest.xml are no longer sufficient to request access to the microphone; this now needs to be done at run time:
https://developer.android.com/training/permissions/requesting.html
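A minimal sketch of that runtime request (REQUEST_RECORD_AUDIO is an arbitrary request code; ContextCompat and ActivityCompat come from the support/AndroidX libraries):

private static final int REQUEST_RECORD_AUDIO = 1;   // arbitrary request code

void ensureMicPermission(Activity activity) {
    // RECORD_AUDIO is a "dangerous" permission on Android 6.0+, so the manifest
    // entry alone is not enough; the user has to grant it at run time.
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[] { Manifest.permission.RECORD_AUDIO },
                REQUEST_RECORD_AUDIO);
        // The result is delivered to onRequestPermissionsResult().
    }
}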