Query Supported Bitrate while recording videos - android

I am using MediaRecorder to record videos and want to change certain parameters, such as the video bitrate and frame rate, like this:
profile.videoBitRate = 50000000;
But the Android docs say:
MediaRecorder errors if the output bit rate exceeds the encoder limit.
So how do I query the supported bitrates and frame rates? Above, I have hardcoded the bitrate to 50 Mbps, but that may exceed the encoder limit on some devices.
I checked MediaRecorder docs and couldn't find any reference to this.
Is there a way I can get the supported bitrates and frame rates for video recording with MediaRecorder?
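There is no query API on MediaRecorder itself, but one possible check (a sketch, assuming API 21+ and that you know the MIME type you will record with, e.g. video/avc) is to read the encoder's VideoCapabilities through MediaCodecList:
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Range;
// Sketch: log the bitrate and frame-rate ranges of the device's AVC encoders.
public static void logAvcEncoderLimits() {
    MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
    for (MediaCodecInfo info : list.getCodecInfos()) {
        if (!info.isEncoder()) continue;
        for (String type : info.getSupportedTypes()) {
            if (!type.equalsIgnoreCase("video/avc")) continue;
            MediaCodecInfo.VideoCapabilities caps =
                    info.getCapabilitiesForType(type).getVideoCapabilities();
            Range<Integer> bitrates = caps.getBitrateRange();          // bits per second
            Range<Integer> frameRates = caps.getSupportedFrameRates(); // frames per second
            android.util.Log.d("EncoderCaps", info.getName()
                    + " bitrate=" + bitrates + " frameRate=" + frameRates);
        }
    }
}
You could then clamp the hardcoded value, e.g. profile.videoBitRate = Math.min(50000000, bitrates.getUpper());, instead of hoping the encoder accepts 50 Mbps.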

Related

Which AudioEncoder codec is best for highest quality audio recording on android

I am recording both audio and video in my Xamarin.Forms project. Some people tell me that when they use it, the recorded audio quality is low.
After some research I think it has something to do with the encoder.
This is my line for setting the codec
this.mediaRecorder.SetAudioEncoder(AudioEncoder.Aac);
And these are the available AudioEncoder options: Default, AmrNb, AmrWb, Aac, HeAac, AacEld, and Vorbis.
In some threads I see people recommend using AmrNb or Aac (which I already use). I tried all options and can't see any noticeable difference.
Which one is best for recording audio on smartphones?
Should I manually set audio bitrate, and if yes how do I determine it?
this.mediaRecorder.SetAudioSource(AudioSource.VoiceRecognition);
this.mediaRecorder.SetAudioEncodingBitRate(128000);
this.mediaRecorder.SetAudioSamplingRate(16000);
I added these lines (basically I increased the bitrate and changed the audio source) and I think it's better now.
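For the "how do I determine it" part, one option (a sketch in plain Android Java rather than Xamarin, assuming API 21+ and the audio/mp4a-latm MIME type for AAC) is to query the AAC encoder's supported bitrate and sample-rate ranges:
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Range;
// Sketch: log the bitrate and sample-rate ranges the device's AAC encoder accepts.
public static void logAacEncoderLimits() {
    MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
    for (MediaCodecInfo info : list.getCodecInfos()) {
        if (!info.isEncoder()) continue;
        for (String type : info.getSupportedTypes()) {
            if (!type.equalsIgnoreCase("audio/mp4a-latm")) continue;
            MediaCodecInfo.AudioCapabilities caps =
                    info.getCapabilitiesForType(type).getAudioCapabilities();
            Range<Integer> bitrates = caps.getBitrateRange();
            Range<Integer>[] sampleRates = caps.getSupportedSampleRateRanges();
            android.util.Log.d("AacCaps", info.getName()
                    + " bitrate=" + bitrates
                    + " sampleRates=" + java.util.Arrays.toString(sampleRates));
        }
    }
}
Picking a bitrate inside that range (for voice, something in the 64-128 kbps region at 16-44.1 kHz) and listening to the result is usually more reliable than guessing.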

How to record microphone to more compressed format during WebRTC call on Android?

I have an app that makes calls using WebRTC. During a call, I need to record the microphone. WebRTC has a class, WebRtcAudioRecord, that records audio, but the recorded file is very large (raw 16-bit PCM). I want to record to a smaller size.
I've tried MediaRecorder, but it doesn't work because WebRTC is already recording from the microphone and MediaRecorder cannot record at the same time during the call.
Has anyone done this, or have any idea that could help me?
WebRTC is considered a comparatively strong pre-processing tool for audio and video.
The WebRTC native stack includes fully optimized native C and C++ classes designed to maintain good speech quality and intelligibility for audio and video.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
I want to record, but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is recording and MediaRecorder does not have permission to record while calling.
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at the different speech codecs, which reduce the size of the recorded data while maintaining sound quality at an acceptable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it basically depends on the sample rate and the duration of each recorded chunk or audio packet.
Suppose sample rate = 8000 Hz (16-bit mono) and chunk time = 40 ms; then the recorded data = 640 bytes (or 320 shorts).
The size of the recorded data is **directly proportional** to both the time and the sample rate.
Sample rate = 8000 or 16000 Hz, etc. (the greater the sample rate, the greater the size).
To see this in more detail, visit: fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms frames for pre-processing, for which the packet size at 8000 Hz comes down to 160 bytes.
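As a quick illustration of that arithmetic (a sketch; 16-bit mono PCM is assumed, matching the numbers above):
// PCM size = sample rate * seconds * bytes per sample * channels
static long pcmBytes(int sampleRateHz, double seconds, int bytesPerSample, int channels) {
    return (long) (sampleRateHz * seconds * bytesPerSample * channels);
}
// pcmBytes(8000, 0.040, 2, 1) -> 640 bytes, one 40 ms chunk (320 shorts)
// pcmBytes(8000, 0.010, 2, 1) -> 160 bytes, one 10 ms WebRTC frame
// pcmBytes(8000, 60,    2, 1) -> 960000 bytes, 60 seconds of raw PCM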
Secondly, using multiple recorder instances on the microphone at the same time is practically impossible: since WebRTC is already recording from the microphone, a MediaRecorder instance will not be able to do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC provides the following methods to manage audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send it to the far end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; for more details see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from WebRTC.
Save them to a local file in a compressed format so that they can be played back later.
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback will give you the raw audio frames captured from the mic by WebRTC's AudioRecord.
For the second part, you have to encode the raw frames and write them to a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
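A minimal sketch of the first part (assuming the org.webrtc Android library's JavaAudioDeviceModule builder, and with encodeAndWrite as a hypothetical placeholder for the MediaCodec/MediaMuxer encoding step from the linked sample):
import android.content.Context;
import org.webrtc.PeerConnectionFactory;
import org.webrtc.audio.AudioDeviceModule;
import org.webrtc.audio.JavaAudioDeviceModule;
// Sketch: tap the raw mic frames WebRTC captures and hand them to your own encoder.
// Assumes PeerConnectionFactory.initialize(...) has already been called elsewhere.
public static PeerConnectionFactory createRecordingFactory(Context context) {
    AudioDeviceModule adm = JavaAudioDeviceModule.builder(context)
            .setSamplesReadyCallback(samples -> {
                byte[] pcm = samples.getData();            // raw PCM from WebRTC's AudioRecord
                int sampleRate = samples.getSampleRate();
                int channels = samples.getChannelCount();
                encodeAndWrite(pcm, sampleRate, channels); // hypothetical helper: e.g. MediaCodec AAC + MediaMuxer
            })
            .createAudioDeviceModule();
    return PeerConnectionFactory.builder()
            .setAudioDeviceModule(adm)
            .createPeerConnectionFactory();
}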

Android capturing slow motion video using CamcorderProfile

I am trying to capture slow motion video on my Nexus 5x. This is how I am configuring the media recorder:
CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH_SPEED_HIGH);
mMediaRecorder = new MediaRecorder();
// Step 1: Unlock and set camera to MediaRecorder
mCamera.unlock();
mMediaRecorder.setCamera(mCamera);
// Step 2: Set sources
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
// Step 3: Set the high speed CamcorderProfile
mMediaRecorder.setProfile(profile);
// Step 4: Set output file
// Step 5: Prepare media recorder
// Step 6: Capture video
The problem is, the captured videos are not the 120 fps slow motion videos that my device supports. They are the regular 29 fps videos.
I went through this answer, which talks about the following in the official documentation:
For all the high speed profiles defined below (from
QUALITY_HIGH_SPEED_LOW to QUALITY_HIGH_SPEED_2160P), they are similar
as normal recording profiles, with just higher output frame rate and
bit rate. Therefore, setting these profiles with
setProfile(CamcorderProfile) without specifying any other encoding
parameters will produce high speed videos rather than slow motion
videos that have different capture and output (playback) frame rates.
To record slow motion videos, the application must set video output
(playback) frame rate and bit rate appropriately via
setVideoFrameRate(int) and setVideoEncodingBitRate(int) based on the
slow motion factor. If the application intends to do the video
recording with MediaCodec encoder, it must set each individual field
of MediaFormat similarly according to this CamcorderProfile.
The thing that I don't get is, setProfile already calls the two methods setVideoFrameRate and setVideoEncodingBitRate with parameters derived from the chosen CamcorderProfile. Why do I need to call them again? What am I missing here?
Any help would be greatly appreciated. For the life of me, I cannot get this to work!
EDIT: I have tried calling the methods like so but it still captures normal speed video:
mMediaRecorder.setVideoFrameRate(profile.videoFrameRate/4);
mMediaRecorder.setVideoEncodingBitRate(profile.videoBitRate/4);
I divide by 4 because the frame rate advertised by CamcorderProfile.QUALITY_HIGH_SPEED_HIGH is 120 fps and I want to capture a 30 fps (playback) video, as stated in the documentation here:
public int videoFrameRate
Added in API level 8. The target video frame rate in frames per second.
This is the target recorded video output frame rate per second if the
application configures the video recording via
setProfile(CamcorderProfile) without specifying any other
MediaRecorder encoding parameters. For example, for high speed quality
profiles (from QUALITY_HIGH_SPEED_LOW to QUALITY_HIGH_SPEED_2160P),
this is the frame rate where the video is recorded and played back
with. If the application intends to create slow motion use case with
the high speed quality profiles, it must set a different video frame
rate that is corresponding to the desired output (playback) frame rate
via setVideoFrameRate(int). For example, if QUALITY_HIGH_SPEED_720P
advertises 240fps videoFrameRate in the CamcorderProfile, and the
application intends to create 1/8 factor slow motion recording videos,
the application must set 30fps via setVideoFrameRate(int). Failing to
do so will result in high speed videos with normal speed playback
frame rate (240fps for above example). If the application intends to
do the video recording with MediaCodec encoder, it must set each
individual field of MediaFormat similarly according to this
CamcorderProfile.
mMediaRecorder.setVideoFrameRate(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH_SPEED_LOW).videoFrameRate);
or
mMediaRecorder.setVideoFrameRate(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH_SPEED_HIGH).videoFrameRate);
(Note that setVideoFrameRate expects a frame rate in fps, not a CamcorderProfile quality constant.)
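One thing worth checking before configuring the recorder (a sketch; camera id 0 and the log tag are assumptions) is whether the device actually reports a high speed profile and what capture frame rate it advertises, since not every device exposes these profiles:
// Sketch: inspect the advertised high speed profile before setting up slow motion.
int cameraId = 0; // back camera assumed
if (CamcorderProfile.hasProfile(cameraId, CamcorderProfile.QUALITY_HIGH_SPEED_HIGH)) {
    CamcorderProfile p = CamcorderProfile.get(cameraId, CamcorderProfile.QUALITY_HIGH_SPEED_HIGH);
    Log.d("SlowMo", "capture " + p.videoFrameRate + " fps, "
            + p.videoFrameWidth + "x" + p.videoFrameHeight + ", " + p.videoBitRate + " bps");
    // For a 4x slow-motion effect the playback rate would be p.videoFrameRate / 4 via setVideoFrameRate().
} else {
    Log.d("SlowMo", "no high speed profile reported for camera " + cameraId);
}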

How to record Ultra low size audio on Android

I tried using the AMR_NB encoder and 3gp output format in MediaRecorder and got a fairly low file size (104 KB for 60 seconds of audio). But for my application (audio chat over low-bandwidth, unreliable networks, especially in the developing world), I need even smaller audio files.
I tried the following options along with the AMR_NB encoder, 3gp output format and setAudioChannels(1):
a. setAudioSamplingRate(8000) and setAudioEncodingBitRate(4750)
b. setAudioSamplingRate(4000) and setAudioEncodingBitRate(4000)
c. setAudioSamplingRate(2000) and setAudioEncodingBitRate(2000)
But whether or not any of the above options are used, the audio file size remains the same.
My questions are as follows:
1. Why don't any of these options have any effect on the file size?
2. What should I do to reduce the file size (by sacrificing sound quality, of course)?
For comparison, Facebook Messenger records 60 seconds of audio in 90 KB as a .mp4 file.
recorder = new MediaRecorder();
recorder.setAudioChannels(1);
recorder.setAudioSamplingRate(8000);
recorder.setAudioEncodingBitRate(4750);
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
Look into the official docs: http://developer.android.com/reference/android/media/MediaRecorder.html
It says that, for AMR_NB, setAudioSamplingRate() only supports 8000 Hz!
Can you post some of your code? It might help to see what your problem is.
EDIT: Try recording in AAC format, like Facebook Messenger does. I don't think you will get a smaller size with AMR_NB/3GP.
Android encoders override the programmer's parameters if they detect a mistake. AMR_NB only works with 8000 Hz and 1 channel, so if you set different values, like 4000 Hz or 2 channels, the encoder will silently change them back to 8000 Hz and 1 channel.
If you reduce the audio sampling rate below 8000 Hz, human voice becomes undecipherable.
For a lower sampling rate, you will have to use the AudioRecord class, as it lets you go as low as 4000 Hz.
For higher compression, you will have to use a different encoder, or write one yourself.
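To illustrate the "try AAC" suggestion, a sketch of a low-bitrate AAC configuration (the 8000 Hz / 12 kbps values are assumptions for speech; individual device encoders may clamp them to their own minimums):
import android.media.MediaRecorder;
import java.io.IOException;
// Sketch: small AAC recording in an MPEG-4 container, like the Messenger example.
public static MediaRecorder startSmallAacRecording(String outputPath) throws IOException {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioChannels(1);
    recorder.setAudioSamplingRate(8000);     // speech-oriented; 16000 gives better quality
    recorder.setAudioEncodingBitRate(12000); // ~12 kbps, roughly 90 KB per minute plus container overhead
    recorder.setOutputFile(outputPath);
    recorder.prepare();
    recorder.start();
    return recorder;
}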

Dynamic Sampling Rate for Audio Recording Android

Is it possible to change the sampling rate while recording audio on Android using AudioRecord or MediaRecorder?
Both of these classes require the sampling rate to be set before recording starts, but I was wondering if I can change the sampling rate, say from 8000 to 16000 Hz and vice versa, in the middle of a recording.
What would you expect to happen when you change the sampling rate once recording has started? Setting the rate directly is not supported by AudioRecord (it is fixed when the instance is constructed), so that is a definite no.
Setting the rate directly with MediaRecorder is allowed, but it is expected to be done before starting the recording. I would not expect many, if any, implementations of the Android OS to handle changing it mid-recording.
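If the rate really has to change mid-session, one workaround (a sketch, not from the answers above: stop and release the current AudioRecord, create a new one at the new rate, and treat the two segments as separate streams) could look like this:
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
// Sketch: AudioRecord fixes its sample rate at construction, so "changing" the rate
// means tearing the recorder down and rebuilding it. Requires the RECORD_AUDIO permission.
public class SwitchableRecorder {
    private AudioRecord record;
    public void startAt(int sampleRateHz) {
        stop(); // release any previous instance first
        int minBuf = AudioRecord.getMinBufferSize(sampleRateHz,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRateHz,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
        record.startRecording();
        // A reader thread would now pull PCM with record.read(...) and tag each segment with its rate.
    }
    public void stop() {
        if (record != null) {
            record.stop();
            record.release();
            record = null;
        }
    }
}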
