Lowest level of access to real-time microphone data on Android

I've just written some iOS code that uses Audio Units to get a mono float stream from the microphone at the hardware sampling rate.
It ended up being quite a lot of code! First I have to set up an audio session, specifying a desired sample rate of 48 kHz. I then have to start the session and inspect the sample rate that was actually returned; this will be the actual hardware sampling rate. I then have to set up an audio unit and implement a render callback.
But I am at least able to use the hardware sampling rate (so I can be certain that no information is lost through software resampling), and I am able to set the smallest possible buffer size, so that I achieve minimal latency.
What is the analogous process on Android?
How can I get down to the wire?
P.S. Nobody has mentioned it yet, but it appears to be possible to work at the JNI level.

The AudioRecord class should be able to do what you need from the Java/Kotlin side of things. It will give you raw PCM data at the sampling rate you requested (assuming the hardware supports it). It's up to your app to read the data out of the AudioRecord instance efficiently and in a timely manner, so that it doesn't overflow its buffer and drop data.
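A minimal sketch of that approach (the class and field names are mine, and the 48 kHz request just mirrors the question; nothing here is mandated by the API):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

class MicReader {
    volatile boolean keepRecording = true;

    void record() {
        int sampleRate = 48000;   // requested rate; the hardware may prefer another
        int minBuf = AudioRecord.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);   // smallest legal buffer, for low latency

        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf);

        recorder.startRecording();
        short[] buffer = new short[minBuf / 2];    // 2 bytes per 16-bit sample
        while (keepRecording) {
            int read = recorder.read(buffer, 0, buffer.length);
            // consume `read` samples quickly so the internal buffer
            // doesn't overflow and drop data
        }
        recorder.stop();
        recorder.release();
    }
}

You can call recorder.getSampleRate() to confirm the configured rate, but note that unlike iOS, Android may resample internally to honor the rate you asked for rather than hand you the raw hardware rate.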

Related

How to use the noise_meter package in Flutter to give only a few decibel readings per second

I am using noise_meter to read noise in decibels. When I run the app it records almost 120 readings per second. I don't want that many readings. Is there any way to specify that I want only one or two readings per second? Thanks in advance. noise_meter package.
I am using code from GitHub which is already written using noise_meter: github repo, noise_meter example.
I tried to calculate the number of samples using the sample rate, which is 40100 in the package, but I can't understand it.
As you can see in the source code, audio_streamer uses a fixed-size buffer and an audio sample rate of 41000, and includes this comment: "Uses a buffer array of size 512. Whenever buffer is full, the content is sent to Flutter." So small audio blocks will arrive at the consumer frequently (as you might expect from a streamer). It doesn't seem possible to adjust this.
The noise_meter package simply takes each block of audio and calculates the noise level, so those readings arrive at exactly the same rate as the audio blocks from the underlying package.
Given the simplicity of the noise_meter calculation, you could replace it with your own code directly on top of audio_streamer. You just need to collect multiple blocks of audio together before performing the simple decibel calculation, as sketched below.
Alternatively, you could simply discard N out of every N+1 readings.
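The packages above are Dart, but the idea sketched in Java is just to keep a running sum of squares until roughly a second of samples has accumulated (the class and callback names are illustrative, not part of noise_meter's API):

class NoiseAggregator {
    static final int SAMPLE_RATE = 44100;   // match whatever the streamer actually uses
    long sumSquares = 0;
    int count = 0;

    // called once per small buffer delivered by the audio streamer
    void onAudioBlock(short[] block) {
        for (short s : block) {
            sumSquares += (long) s * s;
            count++;
        }
        if (count >= SAMPLE_RATE) {          // roughly one second collected
            double rms = Math.sqrt((double) sumSquares / count);
            double db = 20.0 * Math.log10(rms / 32768.0);   // dB relative to full scale
            onReading(db);                   // one reading per second
            sumSquares = 0;
            count = 0;
        }
    }

    void onReading(double db) {
        System.out.printf("noise: %.1f dBFS%n", db);
    }
}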

Using AudioRecord's data outside Android

I'm trying to stream audio recorded on Android to a microcontroller for playback. The audio is recorded using the AudioRecord class and sent over UDP. On the receiving side, the microcontroller receives the data and plays it using PWM. There are a couple of problems though:
I don't exactly know what format the AudioRecord class uses. I'm using ENCODING_PCM_16BIT, but I don't even know if it's bipolar or not, and how to convert it to unipolar if it is.
Due to limited bandwidth, I can't send more than 8 bits per sample. Since 8-bit PCM isn't supported on my phone, I've used the 16-bit version, but for the conversion I've just used the upper 8 bits. I'm not sure if that's right.
Since I've used a weird crystal oscillator for my circuit, the audio has to be sampled at 7.2 kHz. My phone supports 8 kHz sampling, so I just use that and send 90% of the recorded data (using a for loop with a float as the index variable).
I've hooked up a 2W speaker to the OC2 pin on my ATmega32 using a 220 Ohm resistor and a 100 nF capacitor to act as a filter (Schematic), but again I'm not sure if it's the correct way to do it.
All of this put together produces nothing but noise as output. The only thing that changes when I "make some noise" near the mic is the volume and the pattern of the output noise. The pattern doesn't make any sense though, and is the same for human voice or music.
This is the piece of code I wrote to convert the data before sending it over UDP:
float divider = 8 / 7.2f;                        // step > 1 skips ~10% of samples (8 kHz -> ~7.2 kHz)
int index = 0;
recorder.read(record_buffer, 0, buffer_size);    // record_buffer is a short[] of 16-bit PCM
for (float i = 0; i < buffer_size; i += divider)
{
    // keep only the upper 8 bits of each 16-bit sample
    send_buffer[index++] = (byte) (record_buffer[(int) i] >> 8);
}
I don't know where to go from here. Any suggestion is appreciated.
Update:
I took RussSchultz's advice, sent a sine wave over UDP, and hooked up the output to my cheap oscilloscope. This is what I get:
No Data : http://i.stack.imgur.com/1XYE6.png
No Data Close-up: http://i.stack.imgur.com/ip0ip.png
Sine : http://i.stack.imgur.com/rhtn0.png
Sine Close-up: http://i.stack.imgur.com/12JxZ.png
There are gaps when I start sending the sine wave, which could be the result of a buffer overflow on the hardware. Since the gaps follow a pattern, it can't be UDP data loss.
So after working on this for a month, I got it to work.
I don't exactly know what format the AudioRecord class uses. I'm using ENCODING_PCM_16BIT, but I don't even know if it's bipolar or not, and how to convert it to unipolar if it is.
Due to limited bandwidth, I can't send more than 8 bits per sample. Since 8-bit PCM isn't supported on my phone, I've used the 16-bit version, but for the conversion I've just used the upper 8 bits. I'm not sure if that's right.
It was bipolar (signed). I had to convert it to 8 bits by adding half the dynamic range to each sample and then taking the upper 8 bits.
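A sketch of that conversion, reusing the variables from the loop above:

// convert signed 16-bit PCM (-32768..32767) to unsigned 8-bit (0..255)
for (float i = 0; i < buffer_size; i += divider)
{
    int unipolar = record_buffer[(int) i] + 32768;   // shift into 0..65535
    send_buffer[index++] = (byte) (unipolar >>> 8);  // keep the upper 8 bits
}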
Since I've used a weird crystal oscillator for my circuit, the audio has to be sampled at 7.2 kHz. My phone supports 8 kHz sampling, so I just use that and send 90% of the recorded data (using a for loop with a float as the index variable).
Even though I have a slight frequency shift, it's still acceptable.
I've hooked up a 2W speaker to the OC2 pin on my ATmega32 using a 220 Ohm resistor and a 100 nF capacitor to act as a filter (Schematic), but again I'm not sure if it's the correct way to do it.
I changed the filter to an exact 3.6 kHz low-pass RC filter (using one of the many online calculators). The speaker should not be connected directly, because it draws more current than a uC pin can provide. You will still get an output, but the quality is not good at all. What you should do is drive the speaker using a Darlington pair or (as I did) a simple op-amp circuit.
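For reference, the cutoff of an RC low-pass filter is f_c = 1 / (2πRC). Keeping the 100 nF capacitor (an assumption; the final component values aren't stated above), a resistor of about 440 Ohm gives f_c = 1 / (2π × 440 Ω × 100 nF) ≈ 3.6 kHz.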

Adjusting the mic sensitivity while recording audio in android

I am developing an Android application which records audio in PCM using the AudioRecord API. I want to adjust the mic sensitivity to low, medium, or high as the user chooses in the settings.
Is it possible to adjust the mic sensitivity? Your answers will be highly appreciated :)
Not really. It's usually possible to get at least two different "sensitivities" (acoustic tunings used by the platform) implicitly by using different AudioSources. There should be at least one tuning for handset recording and one for far-field recording. On some devices you might also have different far-field tunings, e.g. one for recording audio a few decimeters away and one for recording audio a few meters away.
The problem is that you can't really know which AudioSource corresponds to which tuning, as there's no standard for it. CAMCORDER typically means far-field and VOICE_RECOGNITION often means handset mode, but there's no guarantee. You should also keep in mind that vendors typically apply automatic gain control, noise reduction, etc., which you as a user / app developer can't disable, in order to meet the acoustic requirements for their products.
Your best bet would probably be to use a single AudioSource and then do attenuation of the signal in your app to simulate a lower mic sensitivity. You could do amplification as well, but that would be akin to using digital zoom in the camera app (it works, but doesn't look all that good because you're just scaling the existing data).
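A minimal sketch of that attenuation, assuming 16-bit PCM buffers read from AudioRecord (the gain values are illustrative):

// scale 16-bit PCM in place to simulate lower mic sensitivity,
// e.g. gain = 1.0f for "high", 0.5f for "medium", 0.25f for "low"
static void applySensitivity(short[] buffer, int validSamples, float gain) {
    for (int i = 0; i < validSamples; i++) {
        int scaled = Math.round(buffer[i] * gain);
        // clamp in case a gain above 1.0 is ever used for amplification
        if (scaled > Short.MAX_VALUE) scaled = Short.MAX_VALUE;
        if (scaled < Short.MIN_VALUE) scaled = Short.MIN_VALUE;
        buffer[i] = (short) scaled;
    }
}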

Process the sound wave just before going to the speaker

Is there any way I can process the sound wave that goes to the input of the speaker before it gets played? I want to change the decibel values for the different frequencies.
Thank you
It depends on what kind of effect you want to apply. You can use SoundPool.setRate to simply change the pitch. If you want more complicated effects, consider using AudioEffect.
I want to change the Decibel values for the different frequencies.
That's exactly what the Equalizer effect does. You can retrieve the band for the desired frequency using Equalizer.getBand and then change its level with Equalizer.setBandLevel.
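A minimal sketch, assuming audioSessionId comes from your own MediaPlayer or AudioTrack (the 1 kHz target and maximum boost are just illustrative):

import android.media.audiofx.Equalizer;

void boostBand(int audioSessionId) {
    Equalizer eq = new Equalizer(0 /* priority */, audioSessionId);
    eq.setEnabled(true);

    // getBand expects the frequency in milliHertz: 1 kHz = 1,000,000 mHz
    short band = eq.getBand(1000000);

    // band levels are set in millibels, within the device-supported range
    short[] range = eq.getBandLevelRange();   // [min, max] in mB
    eq.setBandLevel(band, range[1]);          // raise this band to its maximum
}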
If you mean right before the digital-to-analog conversion, then no, you can't do that from an app. What you can do is process the audio before writing it to your AudioTrack instance, or use an AudioEffect as Andrei suggested. In both cases the audio might still pass through additional filters in the platform's audio DSP (e.g. multiband compression, peak limiting, equalization to compensate for the particular speaker component used, etc.) before reaching the DAC.
It sounds to me like you want to modify the audio signal in the frequency domain, so you could take a look at e.g. FFTW, which has a C interface as well as a Java wrapper, so you can use it from either native code or Java code, depending on what you feel most comfortable with. I've never used it myself, so I can't provide any info on how to integrate it into an Android project.

Android Stagefright unable to set video frame rate

I have an application streaming video from the device to a remote computer. When trying to set the frame rate I keep getting:
ERROR/StagefrightRecorder(131): Failed to set frame rate to 15 fps. The actual frame rate is 30
The code I use is:
video = new MediaStreamer();
video.setVideoSource(MediaRecorder.VideoSource.CAMERA);
video.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
video.setVideoFrameRate(frameRate);
Any ideas on how to fix this?
The decoders usually come from the semiconductor vendor, like TI, Qualcomm, etc. Whether the frame-rate modification call is honored depends on those decoders; from the app layer, you cannot do much about it. The calls you are making are the right ones: if the underlying decoder supports it, you can modify the frame rate, otherwise you cannot.
Vibgyor
I guess the documentation says that you may or may not be able to set the frame rate from the application layer; it depends on whether the underlying decoder gives the app that flexibility. I vaguely remember trying to set the frame rate down to even 3-4 frames, but it still gave only the default frame rate. I have seen in the Stagefright framework that it passes the frame-rate call down to the decoder, and it then depends on the decoder whether to honor the call or not.
Vibgyor
