I’ve been struggling for days trying to obtain a raw audio stream from the microphone. I have tried different approaches: the low-level JNI route with the Oboe library (both its AAudio and OpenSL ES backends) and Android’s AudioRecord Java classes.
The problem I am facing is that I cannot retrieve amplitudes near ±1.0, even though I am certain I am saturating the microphone input with a calibrated pure tone of sufficiently high amplitude.
I think the problem is that I am not able to effectively disable Android’s signal preprocessing (Automatic Gain Control or Noise Cancelling).
AutomaticGainControl.create(id).setEnabled(false)
(not working!)
It also seems impossible to disable any microphone other than the "selected" one (done by calling setPreferredDevice on the AudioRecord instance). Audio sources used: UNPROCESSED, MIC, VOICE_RECOGNITION.
Is there any way to do this, or am I missing something?
Thank you
Which audio source are you using for your recording? VOICE_RECOGNITION and UNPROCESSED are mandated to have no pre-processing enabled by default (see https://source.android.com/compatibility/10/android-10-cdd#5_11_capture_for_unprocessed) and would therefore let you check your signal path.
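A minimal sketch of that check and setup (classes are from android.media; assumes API 24+ and a context in scope; the sample rate and float encoding are arbitrary choices):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
// UNPROCESSED is only honored when the device advertises support for it.
boolean unprocessedOk = "true".equals(
        am.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED));
int source = unprocessedOk
        ? MediaRecorder.AudioSource.UNPROCESSED
        : MediaRecorder.AudioSource.VOICE_RECOGNITION; // also mandated to skip pre-processing

int rate = 48000;
int minBuf = AudioRecord.getMinBufferSize(
        rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_FLOAT);
AudioRecord recorder = new AudioRecord(
        source, rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_FLOAT, minBuf);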
Related
I have a cross-platform (iOS and Android) app where I record audio clips and then send them to the server for some machine-learning operations. In the iOS app I use AVAudioRecorder for recording the audio; in the Android app I use MediaRecorder. On the devices I initially use the m4a format because of size constraints. After the clips reach the server I convert them to WAV format before using them in the ML operations.
My problem is that on iOS, AVAudioRecorder by default applies some amplification factor to the raw audio data before the developer gets access to it, but on Android, MediaRecorder applies no such default amplification. In other words, on iOS I never get the raw audio stream from the microphone, whereas on Android I only ever get the raw audio stream from the microphone. The distinction is clearly visible if you record the same audio source side by side on an iPhone and an Android phone and then import both recordings into Audacity for a visual representation. I have attached a sample representation screenshot below.
In the image, the first track is the Android recording and the second track is the iOS recording. Listening to both through headphones I can only vaguely distinguish them, but when the data points are visualized you can clearly see the difference. These distinctions are bad for the ML operations.
Clearly on the iPhone there is some amplification factor involved, which I would like to implement on Android as well.
Is anyone aware of the amplification factor? Or are there any other possible alternatives?
It's quite possible that the difference is the effect of Automatic Gain Control.
You can disable this in your app's AVAudioSession by setting its mode to AVAudioSessionModeMeasurement, which you do once in your application, usually at startup. This disables a great deal of input signal processing.
Reading your problem description, you might be better off enabling AGC on Android.
If neither of these yields results, you might want to gain-scale both signals so they are just below clipping (see the sketch after the snippet below).
import AVFoundation

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setMode(AVAudioSessionModeMeasurement) // disables most input processing
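And for the gain-scaling fallback mentioned above, a minimal Java sketch for the Android side (the 0.95 target peak is an arbitrary choice):

// Peak-normalize a float PCM buffer so its loudest sample sits just below clipping.
static void normalize(float[] samples, float targetPeak) { // e.g. targetPeak = 0.95f
    float peak = 0f;
    for (float s : samples) peak = Math.max(peak, Math.abs(s));
    if (peak == 0f) return; // silence: nothing to scale
    float gain = targetPeak / peak;
    for (int i = 0; i < samples.length; i++) samples[i] *= gain;
}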
I am developing an Android application which is supposed to capture audio with the built-in mic of the smartphone and save it.
For further processing purposes I would like to have some control over the quality of the captured audio. For instance, to my knowledge some smartphones have a high-quality audio recording mode, and I would like to make use of it if that is possible.
I am aware of MediaRecorder, but I am not sure how to use its methods or input arguments to get the best sound quality possible. I would be very grateful if somebody could point that out for me, or provide references to other libraries that allow adjusting the quality of recorded sound.
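For reference, the plain MediaRecorder chain I am starting from is below; the sampling rate, bit rate, and output path are just guesses on my part, not known-good values:

MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioSamplingRate(48000);     // guessing higher is better?
recorder.setAudioEncodingBitRate(192000); // guessing higher is better?
recorder.setOutputFile("/path/to/recording.m4a"); // illustrative path
recorder.prepare(); // throws IOException
recorder.start();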
I'm trying to get PCM data with the AudioRecord class. The audio source is the headset input, which is connected to a device that sends waveforms to my app (I hope you understand what I mean). A picture of the different devices' waves is at http://i.stack.imgur.com/5iZnY.png
Looking at the picture, with wave 1 and wave 2 I can get the right result, because I can calculate the points of one cycle. But on a Sony XL36H the wave I receive is not close to the real wave; the device actually sends a signal close to wave 1.
My question is: what causes this phenomenon, and how can I get a wave close to wave 1? I suspect Sony optimizes the low-level audio layer; if so, should I use the NDK to avoid that?
Should I use the NDK to avoid that?
No, you will get the same results with the NDK.
AudioRecord already provides access to the raw PCM data. The difference between devices occurs because they use different audio modules. The modules have different hardware characteristics (low-pass filters, sensitivity), and you cannot disable them through software. The reason is that these features reduce noise.
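To illustrate the first point, a minimal sketch of pulling raw PCM out of AudioRecord (classes are from android.media; assumes the RECORD_AUDIO permission is granted; rate and format are arbitrary):

int rate = 44100;
int minBuf = AudioRecord.getMinBufferSize(
        rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);

short[] buf = new short[minBuf / 2];
rec.startRecording();
int read = rec.read(buf, 0, buf.length); // buf now holds raw 16-bit PCM samples
rec.stop();
rec.release();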
I want to develop an Android app that takes in the audio signal and generates the time-domain and frequency-domain values.
I have the audio in a buffer obtained from the Android mic. I want to use these buffer values to generate a frequency-domain graph. Is there some code that would help me find the FFT of an audio signal?
I have seen Moonblink's Audalyzer code, but there are components missing from it; I could find lots of missing modules. I want a better approach that takes in audio and performs some calculations on it.
I found these for you using DuckDuckGo:
Android audio FFT to retrieve specific frequency magnitude using audiorecord
http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/
This should help
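If you'd rather not depend on a library, a radix-2 FFT is short enough to write yourself. Below is a sketch assuming 16-bit PCM buffers from AudioRecord whose length is a power of two (e.g. 1024); bin i of the result corresponds to frequency i * sampleRate / n:

public final class Spectrum {

    // In-place iterative radix-2 Cooley-Tukey FFT; re and im must have power-of-two length.
    static void fft(double[] re, double[] im) {
        int n = re.length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j |= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly passes.
        for (int len = 2; len <= n; len <<= 1) {
            double ang = -2 * Math.PI / len;
            double wRe = Math.cos(ang), wIm = Math.sin(ang);
            for (int i = 0; i < n; i += len) {
                double curRe = 1, curIm = 0;
                for (int k = 0; k < len / 2; k++) {
                    int a = i + k, b = i + k + len / 2;
                    double tRe = re[b] * curRe - im[b] * curIm;
                    double tIm = re[b] * curIm + im[b] * curRe;
                    re[b] = re[a] - tRe; im[b] = im[a] - tIm;
                    re[a] += tRe;        im[a] += tIm;
                    double nRe = curRe * wRe - curIm * wIm;
                    curIm = curRe * wIm + curIm * wRe;
                    curRe = nRe;
                }
            }
        }
    }

    // Converts one 16-bit PCM buffer (e.g. from AudioRecord.read) to magnitudes per FFT bin.
    static double[] magnitudes(short[] pcm) {
        double[] re = new double[pcm.length];
        double[] im = new double[pcm.length];
        for (int i = 0; i < pcm.length; i++) re[i] = pcm[i] / 32768.0; // normalize to [-1, 1)
        fft(re, im);
        double[] mag = new double[pcm.length / 2]; // only the first half is unique for real input
        for (int i = 0; i < mag.length; i++) mag[i] = Math.hypot(re[i], im[i]);
        return mag;
    }
}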
I am new to Android and am presently working on an Android voice-recording application. I want to know which format is best for saving the audio file in Android (i.e. RAW-AMR, 3GP, or MP4), so that we can hear the playback loudly on the device.
Is there an alternative way to increase the audio volume through voice processing in Android?
Thanks in advance.
Question: Which bear is best? Answer: Black Bear
Seriously though, you would need to state your criteria for the audio file for us to make a codec recommendation. Does it need to be portable? Best compression? Highest fidelity?
The codec that you choose has no effect on the loudness of the audio that will be played on the device, so this should not factor into your criteria.
Is there an alternative way to increase audio?
Yes, if you are recording audio from the microphone then you can amplify the audio data before you save it to a file.
Let an audio sample from the microphone be represented by the function:
f(t)
Amplification is achieved by multiplying the audio sample by some factor A:
A * f(t)
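In practice, with 16-bit PCM you also have to clamp the scaled samples so they don't wrap around. A minimal Java sketch (the gain parameter is whatever factor A you choose):

// Amplify 16-bit PCM samples in place by a gain factor, clamping to avoid overflow.
static void amplify(short[] samples, float gain) {
    for (int i = 0; i < samples.length; i++) {
        int v = Math.round(samples[i] * gain);
        // Clamp to the 16-bit range so loud samples clip instead of wrapping around.
        samples[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, v));
    }
}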
You can use the AGC (Automatic Gain Control) module from WebRTC to increase the sound level.
I haven't found a simple Java API for it yet, but you can use the C++ API via JNI.
Have a look here: WebRTC AGC (Automatic Gain Control).
I want to know which format is best for saving the audio file in Android.
To save voice audio on Android (or any other platform), take a look at Opus. It's a free, state-of-the-art audio codec that also supports voice mode.
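On Android 10 (API 29) and later, MediaRecorder can write Opus in an Ogg container directly; a minimal sketch (the output path is illustrative):

// Record voice as Opus in an Ogg container (requires API 29+).
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.OGG);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.OPUS);
recorder.setAudioSamplingRate(48000); // Opus natively operates at 48 kHz
recorder.setOutputFile("/path/to/voice.ogg"); // illustrative path
recorder.prepare(); // throws IOException
recorder.start();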