My app records audio from the phone's microphones and does some real-time processing on it. It works fine on physical devices, but acts "funny" in the emulator. It records something, but I'm not quite sure what it is recording.
It appears that on the emulator the audio samples are being read at about double the rate of actual devices. In the app I have a visual progress widget (a horizontally moving recording head), which moves about twice as fast in the emulator.
Here is the recording loop:
int FREQUENCY = 44100;
int BLOCKSIZE = 110;
int bufferSize = AudioRecord.getMinBufferSize(FREQUENCY,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT) * 10;
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
        FREQUENCY, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize);

short[] signal = new short[BLOCKSIZE * 2]; // times two for stereo

audioRecord.startRecording();
while (!isCancelled()) {
    int bufferReadResult = audioRecord.read(signal, 0, BLOCKSIZE * 2);
    if (bufferReadResult != BLOCKSIZE * 2)
        throw new RuntimeException("Recorded less than BLOCKSIZE x 2 samples: "
                + bufferReadResult);
    // process the `signal` array here
}
audioRecord.stop();
audioRecord.release();
The audio source is set to CAMCORDER and it records in stereo. The idea is that if the phone has multiple microphones, the app will process data from both and use whichever has the better SNR. But I have the same problem when recording mono from AudioSource.MIC. The loop reads the audio data continuously; I am assuming that audioRecord.read() is a blocking call and will not let me read the same data twice.
The recorded data looks OK – the record buffer contains 16-bit PCM samples for two channels. The loop just seems to run at twice the speed it does on real devices, which leads me to think that maybe the emulator is using a higher sampling rate than the specified 44100 Hz. However, if I query the sample rate with audioRecord.getSampleRate(), it returns the correct value.
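For what it's worth, here is a minimal diagnostic sketch (not from my recording loop; the log tag is made up) that dumps what the platform actually configured, in case the emulator silently substitutes a different channel count or rate:

Log.d("RecDebug", "rate=" + audioRecord.getSampleRate()
        + " channels=" + audioRecord.getChannelCount()
        + " state=" + audioRecord.getState());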
Also, there are some interesting audio-related messages in logcat while recording:
07-13 12:22:02.282 1187 1531 D AudioFlinger: mixer(0xf44c0000) throttle end: throttle time(154)
(...)
07-13 12:22:02.373 1187 1817 E audio_hw_generic: Error opening input stream format 1, channel_mask 0010, sample_rate 16000
07-13 12:22:02.373 1187 3036 I AudioFlinger: AudioFlinger's thread 0xf3bc0000 ready to run
07-13 12:22:02.403 1187 3036 W AudioFlinger: RecordThread: buffer overflow
(...)
07-13 12:22:24.792 1187 3036 W AudioFlinger: RecordThread: buffer overflow
07-13 12:22:30.677 1187 3036 W AudioFlinger: RecordThread: buffer overflow
07-13 12:22:37.722 1187 3036 W AudioFlinger: RecordThread: buffer overflow
I'm using up-to-date Android Studio and Android SDK, and I have tried emulator images running API levels 21-24. My dev environment is Ubuntu 16.04.
Has anybody experienced something similar?
Am I doing something wrong in my recording loop?
I suspect it is caused by AudioFormat.CHANNEL_IN_STEREO. A mic on a device is typically a mono audio source. If for some reason the emulator supports stereo, you will be receiving twice as much data on the emulator (for both channels). To verify this, try switching to AudioFormat.CHANNEL_IN_MONO, which is guaranteed to work on all devices, and see whether you then receive the same amount of data on the emulator.
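A minimal sketch of the mono configuration to test with (reusing FREQUENCY and BLOCKSIZE from the question; untested):

int bufferSize = AudioRecord.getMinBufferSize(FREQUENCY,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT) * 10;
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        FREQUENCY, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize);
short[] signal = new short[BLOCKSIZE]; // one channel, so no doubling
int read = audioRecord.read(signal, 0, BLOCKSIZE);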
Related
Android MediaCodec decode takes a super long time, about 115 to 118 ms per frame. These are H.264 frames. The Android device has a Qualcomm Snapdragon 845 processor, so I assume the Android MediaCodec APIs target the Qualcomm GPU and not the ARM CPU cores.
Wondering if anyone has experienced such issues before and can provide guidance on how to make this decode go faster?
The code is all native, no Java at all. With no Java, I have no active window and no surface texture... so the Grafika examples don't help here. I am using Android P (9.0), API 28, NDK 19.2.5x.
Here's how my code is set up:
Step 1: I have two codec instances configured on two separate threads as follows:
codecData.codec = AMediaCodec_createDecoderByType("video/avc");
codecData.format_eye = AMediaFormat_new(); // added for completeness; the format must exist before use
AMediaFormat_setString(codecData.format_eye, AMEDIAFORMAT_KEY_MIME, "video/avc");
AMediaFormat_setInt32(codecData.format_eye, AMEDIAFORMAT_KEY_HEIGHT, 1920);
AMediaFormat_setInt32(codecData.format_eye, AMEDIAFORMAT_KEY_WIDTH, 1080);
AMediaFormat_setFloat(codecData.format_eye, AMEDIAFORMAT_KEY_FRAME_RATE, 60.0f);
// configure/start not shown in the original; included for completeness (no surface, as stated)
AMediaCodec_configure(codecData.codec, codecData.format_eye, nullptr, nullptr, 0);
AMediaCodec_start(codecData.codec);
Step 2: I enqueue the encoded buffers using the following calls, which take 14 to 17 ms on 60 FPS input, with two separate threads populating the two codecs' input queues:
bufIdx = AMediaCodec_dequeueInputBuffer(codecData.codec, -1); // -1 makes it a blocking call
if (bufIdx >= 0) { // negative values are status codes, not buffer indices
    auto buf = AMediaCodec_getInputBuffer(codecData.codec, bufIdx, &bufSize);
    uint64_t presentTime = presentTimer.getTimeUs();
    memcpy(buf, data, size); // size is assumed to be <= bufSize
    AMediaCodec_queueInputBuffer(codecData.codec, bufIdx, 0, size, presentTime, 0);
}
Step 3: I dequeue the decoded buffers as follows; this takes 115 to 118 ms per frame per codec on 60 FPS output. The dequeue for both codecs is done by one consumer thread, which goes through the two codec instances one at a time:
AMediaCodecBufferInfo info_eye;
bufIdx = AMediaCodec_dequeueOutputBuffer(codecData.codec, &info_eye, 1); // 1 us timeout
if (bufIdx >= 0) { // negative values are status codes such as AMEDIACODEC_INFO_TRY_AGAIN_LATER
    auto decodedBuf = AMediaCodec_getOutputBuffer(codecData.codec, bufIdx, &bufSize);
    // ... consume decodedBuf, then release it back to the codec
}
Step 4: The decoded buffer is then fed to an NV12-to-RGBA shader on the render thread, which populates the texture in about 2 ms. That texture then gets displayed.
I am expecting 60 FPS but get about 50 FPS because of the delays in Step 3, i.e. the 115 to 118 ms latency is killing me :-(
Any ideas? Appreciate any and all help.
I'm trying to record some audio from the microphone using AudioRecord.
The recording works, but the volume is way too loud and I'm getting horrible clipping.
I tried to use AutomaticGainControl, but it is not available on my device.
Is there any other way to lower the volume either automatically or manually?
This is the code I'm using:
sampleRate = 44100
channel = AudioFormat.CHANNEL_IN_MONO
encoding = AudioFormat.ENCODING_PCM_16BIT
// bufferSize was not shown in the original; getMinBufferSize is the usual source
bufferSize = AudioRecord.getMinBufferSize(sampleRate, channel, encoding)
audioRecord = AudioRecord(MediaRecorder.AudioSource.DEFAULT, // also tried VOICE_RECOGNITION
    sampleRate, channel, encoding,
    bufferSize)
audioRecord.startRecording()
Turns out what I was hearing wasn't clipping caused by volume, but by wrong byte order. The clip was recorded little-endian and I was replaying it as big-endian. Because of that, quiet samples were merely amplified but otherwise intact, since their most significant byte was all zeros; when the volume was higher, the most significant byte was non-zero and the byte-swapped values turned into white noise.
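For illustration, a minimal sketch (variable names invented) using java.nio.ByteBuffer / ByteOrder to interpret the raw bytes explicitly as little-endian 16-bit samples, so playback matches the recording:

ByteBuffer bb = ByteBuffer.wrap(rawBytes).order(ByteOrder.LITTLE_ENDIAN);
short[] samples = new short[rawBytes.length / 2];
bb.asShortBuffer().get(samples);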
I am using MediaCodec to encode video. Frames come from the camera preview callback into the MediaCodec instance (no Surface used). I am using the JCodec library for muxing, and I am able to stream the produced video (the video player shows the correct duration and I can change the position with the seek bar).
Today I tried to use MediaMuxer instead of JCodec, and I got a video which still looks fine, but the duration is wildly incorrect (a few hours instead of one minute) and the seek bar does not work at all.
mediaMuxer = new MediaMuxer("/path/to/video.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
The following code is lazily called when I receive MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
videoTrackIndex = mediaMuxer.addTrack(encoder.getMediaFormat());
mediaMuxer.start();
I write each encoded frame to the muxer with the following code:
mediaMuxer.writeSampleData(videoTrackIndex, byteBuffer, bufferInfo);
byteBuffer and bufferInfo are coming directly from MediaCodec after some positioning stuff:
byteBuffer.position(bufferInfo.offset);
byteBuffer.limit(bufferInfo.offset + bufferInfo.size);
Presentation time is set correctly:
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, (int) (getWidth() * getHeight() * 1.5), System.nanoTime() / 1000, 0);
And at the end of the record I do:
mediaMuxer.stop();
mediaMuxer.release();
Logs:
I/MPEG4Writer﹕ setStartTimestampUs: 0
I/MPEG4Writer﹕ Earliest track starting time: 0
D/MPEG4Writer﹕ Stopping Video track
I/MPEG4Writer﹕ Received total/0-length (770/0) buffers and encoded 770 frames. - video
D/MPEG4Writer﹕ Stopping Video track source
D/MPEG4Writer﹕ Video track stopped
D/MPEG4Writer﹕ Stopping writer thread
D/MPEG4Writer﹕ 0 chunks are written in the last batch
D/MPEG4Writer﹕ Writer thread stopped
I/MPEG4Writer﹕ The mp4 file will not be streamable.
D/MPEG4Writer﹕ Stopping Video track
I guess the "The mp4 file will not be streamable." message signals the problem.
Update:
I've tested my app on another device (an LG G2), which does more verbose logging. The same file with the huge duration is produced. Logs are here and the video file is here.
Thanks to @fadden I was able to figure out the problem. I was actually sending my first frame with presentationTimeUs = 0. It happened because I was not handling frames with the MediaCodec.BUFFER_FLAG_CODEC_CONFIG flag properly. I was feeding them to the muxer, but what I should have done is skip them with the following code (as per the example):
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    mBufferInfo.size = 0;
}
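For context, here is a sketch of where that check sits in a typical drain loop (variable names and TIMEOUT_US are assumptions, following the EncodeAndMuxTest pattern): the codec-config buffer carries the SPS/PPS data the muxer already received via addTrack(), so it must not also be written as a sample.

ByteBuffer[] outputBuffers = mMediaCodec.getOutputBuffers();
int outIndex = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_US);
if (outIndex >= 0) {
    ByteBuffer encodedData = outputBuffers[outIndex];
    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
        mBufferInfo.size = 0; // skip CSD; it was already passed to the muxer
    }
    if (mBufferInfo.size != 0) {
        encodedData.position(mBufferInfo.offset);
        encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
        mediaMuxer.writeSampleData(videoTrackIndex, encodedData, mBufferInfo);
    }
    mMediaCodec.releaseOutputBuffer(outIndex, false);
}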
I'm using AudioRecord to record the audio stream during a camera capture process on an Android device.
Since I want to process the frame data and handle the audio/video samples myself, I do not use MediaRecorder.
I run AudioRecord in another thread, calling read() to gather the raw audio data.
Once I get a data chunk, I feed it into a MediaCodec configured as an AAC audio encoder.
Here is some of my code for the audio recorder / encoder:
m_encode_audio_mime = "audio/mp4a-latm";
m_audio_sample_rate = 44100;
m_audio_channels = AudioFormat.CHANNEL_IN_MONO;
m_audio_channel_count = (m_audio_channels == AudioFormat.CHANNEL_IN_MONO ? 1 : 2);
int audio_bit_rate = 64000;
int audio_data_format = AudioFormat.ENCODING_PCM_16BIT;
m_audio_buffer_size = AudioRecord.getMinBufferSize(m_audio_sample_rate, m_audio_channels, audio_data_format) * 2;
m_audio_recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, m_audio_sample_rate,
m_audio_channels, audio_data_format, m_audio_buffer_size);
m_audio_encoder = MediaCodec.createEncoderByType(m_encode_audio_mime);
MediaFormat audio_format = new MediaFormat();
audio_format.setString(MediaFormat.KEY_MIME, m_encode_audio_mime);
audio_format.setInteger(MediaFormat.KEY_BIT_RATE, audio_bit_rate);
audio_format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, m_audio_channel_count);
audio_format.setInteger(MediaFormat.KEY_SAMPLE_RATE, m_audio_sample_rate);
audio_format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audio_format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, m_audio_buffer_size);
m_audio_encoder.configure(audio_format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
I found that the first call of AudioRecord.read() takes longer to return, while successive read()s have time intervals closer to the real duration of the audio data.
For example, my audio format is 44100 Hz, 16 bit, 1 channel, and the buffer size of the AudioRecord is 16384 bytes, so a full buffer holds 8192 samples, i.e. 185.76 ms of audio. When I record the system time before each call of read() and subtract a base time, I get the following sequence:
time before each read(): 0ms, 345ms, 543ms, 692ms, 891ms, 1093ms, 1244ms, ...
I feed this raw data to the audio encoder with the above time values as the PTS, and the encoder outputs encoded audio samples with the following PTS:
encoder output PTS: 0ms, 185ms, 371ms, 557ms, 743ms, 928ms, ...
It looks like the encoder treats each chunk of data as covering the same time period. I believe the encoder works correctly, since I give it raw data of the same size (16384 bytes) every time. However, if I use the encoder output PTS as the input to the muxer, I get a video whose audio content runs faster than the video content.
I want to ask:
Is it expected that the first call of AudioRecord.read() blocks longer? I'm sure that the call takes more than 300 ms while it only returns 16384 bytes, i.e. about 186 ms of audio. Is this something that depends on the device / Android version?
What should I do to achieve audio/video synchronization? I have a workaround: measure the delay time of the first call of read(), then shift the PTS of the audio samples by that delay. Is there a better way to handle this?
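For reference, a minimal sketch of that workaround (names invented; 16-bit mono assumed, so bytes / 2 = samples):

long startUs = System.nanoTime() / 1000;
int read = m_audio_recorder.read(buffer, 0, buffer.length); // the slow first call
long bufferDurationUs = 1_000_000L * (read / 2) / m_audio_sample_rate;
// start-up delay = wall-clock duration of the first read minus the audio it delivered
long startupDelayUs = (System.nanoTime() / 1000 - startUs) - bufferDurationUs;
// for each subsequent buffer: shift the wall-clock PTS back by that constant delay
long ptsUs = (System.nanoTime() / 1000) - startUs - startupDelayUs;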
Convert the mono input to stereo. I was pulling my hair out for some time before I realised the AAC encoder exposed by MediaCodec only works with stereo input.
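A minimal conversion sketch (array names invented): duplicate each 16-bit mono sample into the left and right slots before queueing the buffer to the encoder, and set KEY_CHANNEL_COUNT to 2.

short[] stereo = new short[mono.length * 2];
for (int i = 0; i < mono.length; i++) {
    stereo[2 * i] = mono[i];     // left
    stereo[2 * i + 1] = mono[i]; // right
}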
I have an application that uses the AudioRecord API to capture audio on Android devices and it repeatedly fails on Galaxy S4 devices. This also occurs in other applications that try to record audio with both AudioRecord and MediaRecorder (AudioRec HQ for example). I was able to reproduce it in a test application using the code below:
final int bufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize << 2);
mAudioRecord.startRecording();
mRecordThread = new Thread(new Runnable() {
    @Override
    public void run() {
        BufferedOutputStream fileOutputStream = null;
        try {
            fileOutputStream = new BufferedOutputStream(new FileOutputStream(String.format(Locale.US, "/sdcard/%1$d.pcm", System.currentTimeMillis())));
            final byte[] buffer = new byte[bufferSize];
            int bytesRead;
            do {
                bytesRead = mAudioRecord.read(buffer, 0, buffer.length);
                if (bytesRead > 0) {
                    fileOutputStream.write(buffer, 0, bytesRead);
                }
            } while (bytesRead > 0);
        } catch (Exception e) {
            Log.e("RecordingTestApp", e.toString());
        }
    }
});
mRecordThread.start();
These are the relevant logcat entries:
02-03 15:36:10.913: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=001699a0, server=001699a0
02-03 15:36:11.394: E/alsa_pcm(208): Arec: error5
02-03 15:36:11.394: W/AudioStreamInALSA(208): pcm_read() returned error n -5, Recovering from error
02-03 15:36:11.424: D/ALSADevice(208): close: handle 0xb7730148 h 0x0
02-03 15:36:11.424: D/ALSADevice(208): open: handle 0xb7730148, format 0x2
02-03 15:36:11.424: D/ALSADevice(208): Device value returned is hw:0,0
02-03 15:36:11.424: V/ALSADevice(208): flags 11000000, devName hw:0,0
02-03 15:36:11.424: V/ALSADevice(208): pcm_open returned fd 39
02-03 15:36:11.424: D/ALSADevice(208): handle->format: 0x2
02-03 15:36:11.434: D/ALSADevice(208): setHardwareParams: reqBuffSize 320 channels 1 sampleRate 8000
02-03 15:36:11.434: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=001699a0, server=001699a0
02-03 15:36:11.444: D/ALSADevice(208): setHardwareParams: buffer_size 640, period_size 320, period_cnt 2
02-03 15:36:20.933: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=0017ade0, server=0017ade0
02-03 15:36:21.394: E/alsa_pcm(208): Arec: error5
02-03 15:36:21.394: W/AudioStreamInALSA(208): pcm_read() returned error n -5, Recovering from error
02-03 15:36:21.424: D/ALSADevice(208): close: handle 0xb7730148 h 0x0
02-03 15:36:21.424: D/ALSADevice(208): open: handle 0xb7730148, format 0x2
02-03 15:36:21.424: D/ALSADevice(208): Device value returned is hw:0,0
02-03 15:36:21.424: V/ALSADevice(208): flags 11000000, devName hw:0,0
02-03 15:36:21.424: V/ALSADevice(208): pcm_open returned fd 39
02-03 15:36:21.424: D/ALSADevice(208): handle->format: 0x2
02-03 15:36:21.434: D/ALSADevice(208): setHardwareParams: reqBuffSize 320 channels 1 sampleRate 8000
02-03 15:36:21.434: D/ALSADevice(208): setHardwareParams: buffer_size 640, period_size 320, period_cnt 2
02-03 15:36:21.454: W/AudioRecord(20986): obtainBuffer timed out (is the CPU pegged?) user=0017ade0, server=0017ade0
Here is the full logcat:
http://pastebin.com/y3XQ1rMf
No exceptions are thrown when this happens; AudioRecord.read just blocks until the hardware has recovered and started recording again, but 2-4 seconds of audio data are lost. It is very annoying for users that their audio files are missing large sections without any explanation as to why.
Is this a known hardware issue or are there things I should be doing differently to record more reliably?
Is there any way to detect that this issue has occurred?
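The best detection workaround I can think of (sketched here with invented names) is to timestamp each read() and flag gaps much longer than the audio the buffer covers:

long lastUs = System.nanoTime() / 1000;
while ((bytesRead = mAudioRecord.read(buffer, 0, buffer.length)) > 0) {
    long nowUs = System.nanoTime() / 1000;
    long bufferUs = 1_000_000L * (bytesRead / 2) / 8000; // 16-bit mono at 8000 Hz
    if (nowUs - lastUs > bufferUs * 4) {
        Log.w("RecordingTestApp", "possible capture stall: " + (nowUs - lastUs) + " us gap");
    }
    lastUs = nowUs;
    // ... write buffer to file as before ...
}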
After trying a wide variety of frequencies, buffer sizes, and audio sources with AudioRecord, and several formats with MediaRecorder, I was not able to record audio without these PCM errors. The same errors happen with several audio recording applications I have downloaded from the Play Store.
I followed this tutorial to create an OpenSL ES JNI library and it has been working well; I would recommend this approach to anyone who is seeing these errors on the Galaxy S4.
I don't have a Galaxy at hand, but I see a couple of things wrong with your example.
new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, 8000,
First, you initialize the AudioRecord at a sample rate of 8000 Hz. Per the documentation, only 44100 Hz is guaranteed to be available; the 8000 Hz sample rate might simply not be there. So you should check whether the AudioRecord is in a correct state, that is:
mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED
You also might want to check the return value of getMinBufferSize. If it is ERROR_BAD_VALUE, then the parameters you passed were not supported.
Then, you might want to start recording within the thread that will read the data. This is very likely not the cause of your problem, but it is often unclear what happens with audio drivers when an overrun occurs, e.g. the hardware might have produced more data than you managed to read in time. ALSA drivers often behave differently depending on who made them. So, to avoid that class of problem, it is better to call startRecording() directly before you start reading, in this case within the Runnable.
Once that is done, you might want to check:
mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING
If it is not recording, then the driver is already telling you there is a problem.
Once you're finished recording you should also stop and release the device. Again, some ALSA drivers only release the device after a timeout if you don't close them yourself, meaning that the next time you try to open them you might simply have no access (this is of course very Linux-specific).
So, at first glance, these are the avenues I would take, and my guess is that the sample rate / channel configuration combination is just not available.
Lastly, I also have a feeling that the VOICE_RECOGNITION audio source might be off. Maybe just replace it with DEFAULT. I have had some problems with this myself in the past.
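A minimal sketch pulling these checks together (structure only; names invented, error handling elided):

int minSize = AudioRecord.getMinBufferSize(8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
if (minSize == AudioRecord.ERROR_BAD_VALUE) {
    // the parameter combination is not supported; fall back to 44100 Hz
}
final AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
        8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minSize * 4);
if (rec.getState() != AudioRecord.STATE_INITIALIZED) {
    // construction failed; bail out before starting the thread
}
new Thread(new Runnable() {
    @Override
    public void run() {
        rec.startRecording(); // start inside the reading thread
        if (rec.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            // the driver is already telling you there is a problem
        }
        // ... read loop ...
        rec.stop();
        rec.release();
    }
}).start();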