I am working on an Android project in which I want the same functionality as the Android native audio recorder, specifically its sound quality option. I can't use the native recorder, so I'm recording audio with the AudioRecord class. How do I process the data coming from AudioRecord to implement the quality levels? The application defines three quality levels which I need to implement:
Low - recordings will contain only high-pitched sounds,
Medium - recordings will include some background sound,
High - everything that reaches the microphone will be recorded.
Please suggest some way to do it.
Related
I have a calling app that uses WebRTC. During a call I need to record the microphone. WebRTC has a WebRTCAudioRecord object to record audio, but the resulting audio file is very large (16-bit PCM). I want to record, but at a smaller size.
I've tried MediaRecorder, but it doesn't work because WebRTC is already recording and MediaRecorder does not have permission to record during the call.
Has anyone done this, or have any idea that could help me?
WebRTC is considered a comparatively strong pre-processing tool for audio and video.
WebRTC native development includes fully optimized native C and C++ classes in order to maintain very good speech quality and intelligibility of audio and video.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
I want to record, but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is recording and MediaRecorder does not have permission to record while calling.
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which reduce the size of recorded data while maintaining sound quality at a reasonable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it basically depends on the sample rate and the duration of each recorded chunk or audio packet.
Suppose duration = 40 ms at 8000 Hz, 16-bit mono ---then---> recorded data = 640 bytes (or 320 shorts).
The size of the recorded data is **directly proportional** to both the duration and the sample rate.
Sample rate = 8000 Hz, 16000 Hz, etc. (the greater the sample rate, the greater the size).
To see this in more detail, read up on the fundamentals of audio data representation. WebRTC mainly processes 10 ms chunks of audio for pre-processing, which brings the packet size down to 160 bytes.
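As a quick illustration of that arithmetic, a small helper like the following (hypothetical class and method names) computes the raw 16-bit PCM frame size for a given sample rate, duration and channel count:

```java
// Hypothetical helper illustrating the size arithmetic above.
public final class PcmMath {
    /** Bytes of 16-bit PCM for the given sample rate (Hz), duration (ms) and channel count. */
    public static int frameSizeBytes(int sampleRateHz, int durationMs, int channels) {
        int samplesPerChannel = sampleRateHz * durationMs / 1000;
        return samplesPerChannel * channels * 2; // 2 bytes per 16-bit sample
    }

    public static void main(String[] args) {
        // 8000 Hz, 40 ms, mono -> 640 bytes (320 shorts), matching the example above.
        System.out.println(frameSizeBytes(8000, 40, 1));
        // 8000 Hz, 10 ms, mono -> 160 bytes, the WebRTC pre-processing chunk size.
        System.out.println(frameSizeBytes(8000, 10, 1));
    }
}
```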
Secondly, using multiple audio recorder instances at the same time is practically impossible. Since WebRTC is already recording from the microphone, a MediaRecorder instance would not do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC has the following methods to manage audio bytes (see the sketch after this list):
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send to far-end.
3. The far end pushes the received data into `ProcessRenderStream`.
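A hedged sketch of where those calls sit in the capture/render path; the `ProcessCaptureStream` and `ProcessRenderStream` names come from the list above, but the wrapper interface and signatures below are hypothetical, for illustration only:

```java
// Hypothetical wrapper around WebRTC's audio processing; interface and signatures are illustrative only.
public interface AudioProcessingWrapper {
    /** In-place processing of one 10 ms capture frame (near-end microphone PCM). */
    void ProcessCaptureStream(short[] capturePcm10Ms);
    /** Processing of one 10 ms render frame (far-end PCM about to be played out). */
    void ProcessRenderStream(short[] renderPcm10Ms);
}

// Typical call order per 10 ms tick, following steps 1-3 above:
// wrapper.ProcessCaptureStream(micFrame);   // 1. push mic PCM, processed in place
// sendToFarEnd(micFrame);                   // 2. send the processed PCM to the far end
// wrapper.ProcessRenderStream(remoteFrame); // 3. push received far-end PCM before playback
```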
I maintain a complete tutorial on audio processing with WebRTC; for more details see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from webrtc
Save them to a local file in a compressed format so that they can be played back later
For the first part you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback will give you the raw audio frames captured from the mic by webrtc's AudioRecord.
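A minimal sketch of that first part, assuming a recent org.webrtc Android SDK where JavaAudioDeviceModule.builder exposes setSamplesReadyCallback and the callback delivers JavaAudioDeviceModule.AudioSamples:

```java
import android.content.Context;
import org.webrtc.audio.JavaAudioDeviceModule;

// Sketch: tap the raw PCM frames that webrtc captures from the mic.
final class RecordingAdmFactory {
    static JavaAudioDeviceModule create(Context appContext) {
        return JavaAudioDeviceModule.builder(appContext)
                .setSamplesReadyCallback(samples -> {
                    byte[] pcm = samples.getData();           // raw PCM for one capture period
                    int sampleRate = samples.getSampleRate(); // format of that buffer
                    int channels = samples.getChannelCount();
                    // Hand the frame to an encoder (see the encoding sketch below).
                })
                .createAudioDeviceModule();
    }
}

// Pass the returned module to PeerConnectionFactory.builder().setAudioDeviceModule(...) when building the factory.
```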
For the second part you have to encode the raw frames and write them to a file. Check out this sample from Google on how to do it - https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
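A condensed sketch of that second part, assuming 16-bit PCM input and AAC/MP4 output via MediaCodec and MediaMuxer (error handling and end-of-stream flushing omitted; this is not the code from the linked sample):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch only: encodes 16-bit PCM frames to AAC inside an .m4a container.
final class PcmToAacWriter {
    private final MediaCodec encoder;
    private final MediaMuxer muxer;
    private final int sampleRate, channels;
    private int trackIndex = -1;
    private boolean muxerStarted = false;
    private long presentationUs = 0;

    PcmToAacWriter(String outputPath, int sampleRate, int channels) throws IOException {
        this.sampleRate = sampleRate;
        this.channels = channels;
        MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, channels);
        format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64_000); // much smaller than raw PCM
        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    /** Feed one raw PCM frame (e.g. from the SamplesReadyCallback above) and drain encoded output. */
    void writePcm(byte[] pcm) {
        int inIndex = encoder.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = encoder.getInputBuffer(inIndex);
            in.clear();
            in.put(pcm);
            encoder.queueInputBuffer(inIndex, 0, pcm.length, presentationUs, 0);
            // Advance the timestamp by the frame duration: samples per channel / sample rate.
            presentationUs += 1_000_000L * (pcm.length / 2 / channels) / sampleRate;
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex;
        while ((outIndex = encoder.dequeueOutputBuffer(info, 0)) != MediaCodec.INFO_TRY_AGAIN_LATER) {
            if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                trackIndex = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
                muxerStarted = true;
            } else if (outIndex >= 0) {
                ByteBuffer out = encoder.getOutputBuffer(outIndex);
                if (info.size > 0 && muxerStarted) {
                    muxer.writeSampleData(trackIndex, out, info);
                }
                encoder.releaseOutputBuffer(outIndex, false);
            }
        }
    }
}
```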
I am developing an Android app based on sound and video recording. I would like to get real-time playback of the mic audio in the headphones while previewing AND capturing the video and sound.
What I have now, each working fine on its own:
1) use the Superpowered library to record audio and play it back in real time (during preview and record). Behind the scenes it does the work of AudioRecord directly in C++, pushing the buffer to the output (headphones). The goal is to apply audio effects to the raw sound in real time.
2) capture the video with MediaRecorder
When audio playback is running and I try to launch the video recording, it crashes on start:
E/MediaRecorder: start failed: -2147483648
I imagine that I can't launch two recording processes at the same time. I think using AudioRecord or the Superpowered lib is the right way to process the raw audio, but I can't figure out how to record video without conflicting with the current audio recording.
So is there a way to achieve this feature?
(minSdk 16)
According to bigflake:
The MediaCodec class first became available in Android 4.1 (API 16). It was added to allow direct access to the media codecs on the device.
In Android 4.3 (API 18), MediaCodec was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering.
So if it's possible, please consider raising minSdk to 18 and using AudioVideoRecordingSample or HWEncoderExperiments as examples.
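A minimal sketch of the API-18 path those samples build on: a video MediaCodec configured for Surface input, so camera preview or OpenGL ES frames feed the encoder directly and the microphone stays free for AudioRecord/Superpowered. The class name and parameters here are illustrative, not taken from those projects:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

// Sketch: H.264 encoder fed through an input Surface (requires API 18+).
final class SurfaceVideoEncoder {
    final MediaCodec encoder;
    final Surface inputSurface; // draw the camera preview or GL output onto this Surface

    SurfaceVideoEncoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // The encoder consumes frames rendered onto this Surface, so video capture
        // never opens the microphone and the audio path stays with AudioRecord / Superpowered.
        inputSurface = encoder.createInputSurface();
        encoder.start();
        // Drain encoder.dequeueOutputBuffer(...) on another thread and feed a MediaMuxer,
        // as the linked sample projects do.
    }
}
```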
I am developing an Android application which is supposed to capture audio with the built-in mic of the smartphone and save it.
For further processing purposes I would like to have some control over the quality of the captured audio. For instance, to my knowledge some smartphones have a high-quality audio recording mode, and I would like to make use of it if that is possible.
I am aware of MediaRecorder, but I am not sure how to use its methods or input arguments to get the best sound quality possible. I would be very grateful if somebody could point that out for me or provide references to other libraries that allow adjusting the quality of the recorded sound.
I have developed an app which records sound and plays it back at the same time with approximately 100 ms delay, using the AudioTrack and AudioRecord classes in Android. But the played audio has a lot of background noise. Is there any way in Android to reduce the background noise during playback?
Please don't tell me to use the Audacity software to reduce the noise, because I am not saving the recorded audio; I just keep it in a buffer and play that buffer with AudioTrack. Can a filter be implemented with the Android NDK to reduce the background noise?
The simple thing to do is to play the audio back using the voice stream, AudioManager.STREAM_VOICE_CALL; it will filter out most of the "hum" noise. You can also use the NoiseSuppressor effect if it is available: NoiseSuppressor.create(record.getAudioSessionId());
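A minimal sketch of that suggestion on top of the AudioRecord/AudioTrack loop described in the question (permission checks, threading, and null checks omitted; names are illustrative):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.media.audiofx.NoiseSuppressor;

final class VoiceLoopback {
    // Sketch: mic -> optional NoiseSuppressor -> playback on the voice-call stream.
    static void run() {
        int sampleRate = 8000;
        int inBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        int outBuf = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, inBuf);
        // Attach the platform noise suppressor to the recording session, if the device has one.
        if (NoiseSuppressor.isAvailable()) {
            NoiseSuppressor.create(record.getAudioSessionId());
        }
        // STREAM_VOICE_CALL routes playback through the voice path, which filters most of the hum.
        AudioTrack track = new AudioTrack(AudioManager.STREAM_VOICE_CALL, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, outBuf,
                AudioTrack.MODE_STREAM);

        record.startRecording();
        track.play();
        short[] buffer = new short[inBuf / 2];
        while (!Thread.interrupted()) {
            int read = record.read(buffer, 0, buffer.length);
            if (read > 0) track.write(buffer, 0, read);
        }
        record.stop(); record.release();
        track.stop(); track.release();
    }
}
```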
You can use the ClearRecord application. It is free. You can play audio with noise reduction on; while playing, the background noise is eliminated and you hear a clear sound.
Is there any way to record audio in high quality?
And how can I detect that the user is saying something? In the Audio Recording application you can see such an indicator (I don't know the right name for it).
At the moment, a big reason for poor audio quality recording on Android is the codec used by the MediaRecorder class (the AMR-NB codec). However, you can get access to uncompressed audio via the AudioRecord class, and record that into a file directly.
The Rehearsal Assistant app does this to save uncompressed audio into a WAV file - take a look at the RehearsalAudioRecord class source code.
The RehearsalAudioRecord class also provides a getMaxAmplitude method, which you can use to detect the maximum audio level since the last time you called the method (MediaRecorder also provides this method).
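For the level indicator, a rough equivalent of getMaxAmplitude can be computed directly from the raw AudioRecord buffer; a hypothetical sketch (not the RehearsalAudioRecord code, permission checks omitted):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch: read uncompressed PCM and track the peak amplitude since the last query,
// roughly what RehearsalAudioRecord's getMaxAmplitude / MediaRecorder.getMaxAmplitude report.
final class LevelMeter {
    private final AudioRecord record;
    private final short[] buffer;
    private int maxAmplitude = 0;

    LevelMeter(int sampleRate) {
        int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        buffer = new short[bufSize / 2];
        record.startRecording();
    }

    /** Call from your recording loop; this is also where you would write the samples to a WAV file. */
    void pump() {
        int read = record.read(buffer, 0, buffer.length);
        for (int i = 0; i < read; i++) {
            int level = Math.abs(buffer[i]);
            if (level > maxAmplitude) maxAmplitude = level;
        }
    }

    /** Peak 16-bit amplitude since the last call; drive the "user is speaking" indicator from this. */
    int getMaxAmplitude() {
        int peak = maxAmplitude;
        maxAmplitude = 0;
        return peak;
    }
}
```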
For recording and monitoring: You can use the sound recorder activity.
Here's a snippet of code:
private static final int REQUEST_CODE_RECORD = 1; // arbitrary request code

// Launch the system sound recorder; onActivityResult receives a Uri to the recorded audio.
Intent recordIntent = new Intent(MediaStore.Audio.Media.RECORD_SOUND_ACTION);
startActivityForResult(recordIntent, REQUEST_CODE_RECORD);
For a perfect working example of how to record audio, including an input monitor, download the open-source Ringdroid project: https://github.com/google/ringdroid
Look at the screenshots and you'll see the monitor.
For making the audio higher quality, you'd need a better mic. The built-in mic can only capture so much (and it is not that good). Again, look at the Ringdroid project and glean some info from there. From there you could implement normalization and amplification routines to improve the sound.
I'll give you a simple answer.
For sample rate, in terms of quality, 48000 Hz is almost the same as 16000 Hz.
For bit rate, in terms of quality, 96 kbps is much better than 16 kbps.
You can try stereo (channelCount = 2), but it makes little difference.
So, for Android phones, just set the audio bit rate higher and you will get better quality.
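A hedged sketch of those settings applied through MediaRecorder; the output path is a placeholder and the values simply mirror the advice above:

```java
import android.media.MediaRecorder;
import java.io.IOException;

final class HighBitrateRecorder {
    // Sketch: quality comes mainly from the encoding bit rate, per the advice above.
    static MediaRecorder start(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setAudioSamplingRate(48000);    // 48 kHz; 16 kHz sounds nearly the same per the answer
        recorder.setAudioEncodingBitRate(96000); // 96 kbps; the setting that matters most here
        recorder.setAudioChannels(2);            // stereo; optional, makes little difference
        recorder.setOutputFile(outputPath);      // placeholder path
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}
```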