I'm working on a DSP project on Android which requires low-latency audio I/O. For this reason, I'm using the Oboe library. The LiveEffect example demonstrates synchronous recording and playback. However, for acoustic feedback neutralization, I need it the other way around, that is, to generate a white noise signal through the built-in speaker first and then record it with the mic. I tried to modify the LiveEffect example following this question, i.e. setting the recording stream as master (callback) and using the non-blocking write method for the playback stream. But I got the following error when I ran my code on a Pixel XL (Android 9.0):
D/AudioStreamInternalCapture_Client: processDataNow() wait for valid timestamps
D/AudioStreamInternalCapture_Client: advanceClientToMatchServerPosition() readN = 0, writeN = 384, offset = -384
--------- beginning of crash
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x5800003f666c66 in tid 2852 (AAudio_1), pid 2796 (ac.oiinitialize)
Here is my callback:
oboe::DataCallbackResult
AudioEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    assert(oboeStream == mRecordingStream);
    int32_t framesToWrite = mPlayStream->getFramesPerBurst();
    oscillator_->whiteNoise(framesToWrite); // write white noise into buffer
    // oscillator_->write() returns a const void* buffer
    oboe::ResultWithValue<int32_t> result = mPlayStream->write(oscillator_->write(), framesToWrite, 0);
    if (result != oboe::Result::OK) {
        LOGE("output stream write error: %s", oboe::convertToText(result.error()));
        return oboe::DataCallbackResult::Stop;
    }
    // add Adaptive Feedback Neutralization Algorithm here....
    return oboe::DataCallbackResult::Continue;
}
Is my approach correct for generating a signal and then capturing it through a mic? If so, can anyone help me with this error? Thank you in advance.
However, for acoustic feedback neutralization, I need the other way around, that is to generate White Noise signal through a built-in speaker first, then record it using a mic
You can still do this using an output stream callback and a non-blocking read on the input stream. This is the more common (and tested) way of doing synchronous I/O. A Larsen effect will work fine this way.
Your approach should still work; however, I'd stick to the LiveEffect way of setting up the streams since it is known to work.
In terms of your error, SIGSEGV usually means an invalid memory access such as a null pointer dereference - are you starting your input stream before the output stream? That could mean you're attempting to write to an output stream which hasn't yet been opened.
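If you do switch to the output-callback layout, the callback could look roughly like the sketch below (untested). It keeps the names from your code (mRecordingStream, oscillator_); mInputBuffer is an assumed pre-allocated scratch buffer, and a mono 16-bit stream is assumed:

oboe::DataCallbackResult
AudioEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    // oboeStream is now the *output* stream: generate the white noise
    // excitation straight into its buffer.
    oscillator_->whiteNoise(numFrames);
    memcpy(audioData, oscillator_->write(), numFrames * oboeStream->getBytesPerFrame());

    // Drain whatever the mic has captured so far without blocking
    // (timeoutNanos = 0). Fewer than numFrames may come back while the
    // streams are still stabilizing.
    oboe::ResultWithValue<int32_t> result = mRecordingStream->read(mInputBuffer, numFrames, 0);
    if (result != oboe::Result::OK) {
        LOGE("input stream read error: %s", oboe::convertToText(result.error()));
        return oboe::DataCallbackResult::Stop;
    }
    int32_t framesRead = result.value();

    // feed framesRead frames of mInputBuffer to the Adaptive Feedback
    // Neutralization Algorithm here....
    return oboe::DataCallbackResult::Continue;
}

This way the output stream drives the timing (which is what gives you the low-latency path) and the capture side can never stall the callback.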
Related
I have a problem with the Android AudioRecord library.
I need to record an audio stream from the device's microphone.
The initialization of the class is as follows:
recorder = new AudioRecord(
        currentAudioSource,
        SAMPLE_RATE_IN_8KHZ,
        CHANNEL,    // Mono
        ENCODING,   // ENCODING_PCM_16BIT
        bufferSize  // 2048
);
Then, I call recorder.startRecording() to make the audio stream active.
And to acquire this flow I call the method in a loop:
recorder.read(samples, 0, currentBufferSize)
The variable samples is a short[] and currentBufferSize is the length of the buffer.
The read method works correctly for the first N iterations.
At iteration N + 1, the method blocks waiting to return 2048 shorts, until I call stopRecording(), which gives me the values recorded so far (fewer than 2048 shorts).
On the next recording, the read method returns an empty short[] and the error code -1 (general error).
I've been stuck on this for a few days, partly because the error does not occur systematically but randomly across multiple devices.
Do you have any ideas to resolve this situation?
Thank you
I solved it with the following approach:
while (runWorker) {
    if (recorder != null && recorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
        ...
        Thread.sleep(40);
        sampled = this.recorder.read(samples, 0, currentBufferSize, AudioRecord.READ_NON_BLOCKING);
        ...
I inserted a 40 ms pause and used the non-blocking read.
This works on all devices except some Samsung tablets. This is the related post: AudioRecord.read with read mode: READ_NON_BLOCKING not working on Tablet Samsung
On Samsung tablets, the non-blocking read always returns 0, as if it were not recording audio. If I use AudioRecord.READ_BLOCKING it works correctly.
Do you have any ideas about it?
Thanks.
Michele
I'm using the Android oboe library for high-performance audio in a music game.
In the assets folder I have 2 .raw files (both 48000 Hz, 16-bit PCM and about 60 kB each):
std_kit_sn.raw
std_kit_ht.raw
These are loaded into memory as SoundRecordings and added to a Mixer. kSampleRateHz is 48000:
stdSN= SoundRecording::loadFromAssets(mAssetManager, "std_kit_sn.raw");
stdHT= SoundRecording::loadFromAssets(mAssetManager, "std_kit_ht.raw");
mMixer.addTrack(stdSN);
mMixer.addTrack(stdFT);
// Create a builder
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
builder.setSampleRate(kSampleRateHz);
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
LOGD("After creating a builder");
// Open stream
Result result = builder.openStream(&mAudioStream);
if (result != Result::OK){
LOGE("Failed to open stream. Error: %s", convertToText(result));
}
LOGD("After openstream");
// Reduce stream latency by setting the buffer size to a multiple of the burst size
mAudioStream->setBufferSizeInFrames(mAudioStream->getFramesPerBurst() * 2);
// Start the stream
result = mAudioStream->requestStart();
if (result != Result::OK){
LOGE("Failed to start stream. Error: %s", convertToText(result));
}
LOGD("After starting stream");
They are triggered to play at the required times with standard code (as per the Google tutorials):
stdSN->setPlaying(true);
stdHT->setPlaying(true); //Nasty Sound
The audio callback is standard (as per Google tutorials):
DataCallbackResult SoundFunctions::onAudioReady(AudioStream *mAudioStream, void *audioData, int32_t numFrames) {
// Play the stream
mMixer.renderAudio(static_cast<int16_t*>(audioData), numFrames);
return DataCallbackResult::Continue;
}
The std_kit_sn.raw plays fine, but std_kit_ht.raw has a nasty distortion. Both play with low latency. Why does one play fine while the other is badly distorted?
I loaded your sample project and I believe the distortion you hear is caused by clipping/wraparound during mixing of sounds.
The Mixer object from the sample is a summing mixer. It just adds the values of each track together and outputs the sum.
You need to add some code to reduce the volume of each track to avoid exceeding the limits of an int16_t (although you're welcome to file a bug on the oboe project and I'll try to add this in an upcoming version). If you exceed this limit you'll get wraparound which is causing the distortion.
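Something along these lines inside the mixer's render loop would give each track headroom before summing; this is purely illustrative (mTracks and getSample() are made-up names, not the sample's actual Mixer code), assuming a mono int16_t stream:

void renderAudio(int16_t *audioData, int32_t numFrames) {
    const float kTrackGain = 0.5f;              // per-track headroom
    for (int32_t i = 0; i < numFrames; ++i) {
        float mixed = 0.0f;
        for (auto *track : mTracks) {           // hypothetical track container
            mixed += track->getSample(i) * kTrackGain;
        }
        // Hard clamp as a last resort so the sum can never wrap around.
        if (mixed > 32767.0f) mixed = 32767.0f;
        if (mixed < -32768.0f) mixed = -32768.0f;
        audioData[i] = static_cast<int16_t>(mixed);
    }
}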
Additionally, your app is hardcoded to run at 22050 frames/sec. This will result in sub-optimal latency across most mobile devices because the stream is forced to upsample to the audio device's native frame rate. A better approach would be to leave the sample rate undefined when opening the stream - this will give you the optimal frame rate for the current audio device - then use a resampler on your source files to supply audio at this frame rate.
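A minimal sketch of that, based on the stream setup from your question (everything else stays the same; only the setSampleRate() call is dropped):

AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
// No builder.setSampleRate(...) call - let the device choose its native rate.
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);

Result result = builder.openStream(&mAudioStream);
if (result == Result::OK) {
    int32_t deviceSampleRate = mAudioStream->getSampleRate();
    // Resample (or pre-render) std_kit_sn.raw and std_kit_ht.raw to
    // deviceSampleRate before handing them to the Mixer.
}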
I'm working on an Android app dealing with a device which is basically a USB microphone. I need to read the input data and process it. Sometimes I need to send data to the device (4 shorts * the number of channels, which is usually 2), and this data does not depend on the input.
I'm using Oboe, and all the phones I use for testing use AAudio underneath.
The reading part works, but when I try to write data to the output stream, I get the following warning in logcat and nothing is written to the output:
W/AudioTrack: releaseBuffer() track 0x78e80a0400 disabled due to previous underrun, restarting
Here's my callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    // check if there's data to write, agcData is a buffer previously allocated
    // and h2iaudio::getAgc() returns true if data's available
    if (h2iaudio::getAgc(this->agcData)) {
        // padding the buffer
        short *padPos = this->agcData + 4 * playStream->getChannelCount();
        memset(padPos, 0,
               static_cast<size_t>((numFrames - 4) * playStream->getBytesPerFrame()));
        // write the data
        oboe::ResultWithValue<int32_t> result =
                this->playStream->write(this->agcData, numFrames, 1);
        if (result != oboe::Result::OK) {
            LOGE("Failed to create stream. Error: %s",
                 oboe::convertToText(result.error()));
            return oboe::DataCallbackResult::Stop;
        }
    } else {
        // if there's nothing to write, write silence
        memset(this->agcData, 0,
               static_cast<size_t>(numFrames * playStream->getBytesPerFrame()));
    }

    // data processing here
    h2iaudio::processData(static_cast<short *>(audioData),
                          static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
                          oboeStream->getSampleRate());

    return oboe::DataCallbackResult::Continue;
}
//...
oboe::AudioStreamBuilder *OboeEngine::setupRecordingStreamParameters(
        oboe::AudioStreamBuilder *builder) {
    builder->setCallback(this)
            ->setDeviceId(this->recordingDeviceId)
            ->setDirection(oboe::Direction::Input)
            ->setSampleRate(this->sampleRate)
            ->setChannelCount(this->inputChannelCount)
            ->setFramesPerCallback(1024);
    return setupCommonStreamParameters(builder);
}
As seen in setupRecordingStreamParameters, I'm registering the callback on the input stream. In all the Oboe examples, the callback is registered on the output stream and the reading is blocking. Does this matter? If not, how many frames do I need to write to the stream to avoid underruns?
EDIT
In the meantime, I found the source of the underruns. The output stream was not consuming the same number of frames as the input stream was producing (which in hindsight seems logical), so writing the number of frames given by playStream->getFramesPerBurst() fixed my issue. Here's my new callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    int framesToWrite = playStream->getFramesPerBurst();
    memset(agcData, 0, static_cast<size_t>(framesToWrite *
                                           this->playStream->getChannelCount()));
    h2iaudio::getAgc(agcData);

    oboe::ResultWithValue<int32_t> result =
            this->playStream->write(agcData, framesToWrite, 0);
    if (result != oboe::Result::OK) {
        LOGE("Failed to write AGC data. Error: %s",
             oboe::convertToText(result.error()));
    }

    // data processing here
    h2iaudio::processData(static_cast<short *>(audioData),
                          static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
                          oboeStream->getSampleRate());

    return oboe::DataCallbackResult::Continue;
}
It works this way; I'll change which stream has the callback attached if I notice any performance issues, but for now I'll keep it like this.
Sometimes I need to send data to the device
You always need to write data to the output. Generally you need to write at least numFrames, maybe more. If you don't have any valid data to send then write zeros.
Warning: in your else block you are calling memset() but not writing to the stream.
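For example, the else branch could do something like this (a sketch that keeps the names from your code):

} else {
    // Nothing to send: still push a full buffer of silence to the output
    // stream, otherwise it underruns. timeoutNanos = 0 keeps it non-blocking.
    memset(this->agcData, 0,
           static_cast<size_t>(numFrames * playStream->getBytesPerFrame()));
    this->playStream->write(this->agcData, numFrames, 0);
}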
->setFramesPerCallback(1024);
Do you need 1024 specifically? Is that for an FFT? If not then AAudio can optimize the callbacks better if the FramesPerCallback is not specified.
In all the Oboe examples, the callback is registered on the output stream,
and the reading is blocking. Does this have an importance?
Actually the read is NON-blocking. Whatever stream does not have the callback should be non-blocking. Use a timeoutNanos=0.
It is important to use the output stream for the callback if you want low latency. That is because the output stream can only provide low-latency mode with callbacks and not with direct write()s. But an input stream can provide low latency both with a callback and with read()s.
Once the streams are stabilized you can read or write the same number of frames in each callback. But before they are stable, you may need to read or write extra frames.
With an output callback you should drain the input for a while so that it is running close to empty.
With an input callback you should fill the output for a while so that it is running close to full.
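For example, with the callback on the input stream you could pre-fill the output with a couple of bursts of silence before starting the capture; this is a rough sketch and the recordingStream/silence names are assumptions, not from your code:

// Start the output first and push a couple of bursts of silence into it so
// it is running close to full before the input callback starts pulling data.
std::vector<int16_t> silence(
        playStream->getFramesPerBurst() * playStream->getChannelCount(), 0);
playStream->requestStart();
for (int i = 0; i < 2; ++i) {
    playStream->write(silence.data(), playStream->getFramesPerBurst(), 0);
}
recordingStream->requestStart();   // the input stream owns the callback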
write(this->agcData, numFrames, 1);
Your 1 nanosecond timeout is very small. But Oboe will still block. You should use a timeoutNanos of 0 for non-blocking mode.
According to the Oboe documentation, during the onAudioReady callback you have to write exactly numFrames frames directly into the buffer pointed to by audioData, and you should not call Oboe's write() function but, instead, fill the buffer yourself.
I'm not sure how your getAgc() function works, but maybe you can pass it the audioData pointer as an argument to avoid copying data from one buffer to another.
If you really need the onAudioReady callback to request the same number of frames each time, then you have to set that number while building the AudioStream using:
oboe::AudioStreamBuilder::setFramesPerCallback(int framesPerCallback)
Look here at the things that you should not do during an onAudioReady callback and you will find that calling Oboe's write() function is not allowed:
https://google.github.io/oboe/reference/classoboe_1_1_audio_stream_callback.html
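A minimal sketch of that idea, assuming the callback is attached to the output stream and that getAgc() could be adapted to fill a caller-supplied buffer (that two-argument variant is hypothetical):

oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    auto *out = static_cast<int16_t *>(audioData);
    int32_t numSamples = numFrames * oboeStream->getChannelCount();

    // Start from silence so any samples getAgc() does not fill are still valid.
    memset(out, 0, static_cast<size_t>(numSamples) * sizeof(int16_t));

    // Hypothetical variant of getAgc() that writes straight into the
    // callback's buffer instead of an intermediate one.
    h2iaudio::getAgc(out, numSamples);

    return oboe::DataCallbackResult::Continue;
}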
I am using the new MediaCodec API on Jelly Bean to decode an h264 stream.
Using the code snippets on the developer page, I instantiated a decoder by name (taken from media_codec.xml), passed a surface and configured the codec.
The problem I am facing is that dequeueOutputBuffer always returns -1.
I tried a negative timeout to wait indefinitely; no luck with that.
Whenever I get -1, I refreshed the buffers using getOutputBuffers().
Please note that the same issue is seen when a custom app is used to parse the data from a media source and provide it to the decoder.
Any input on the above will be helpful.
I faced the same problem. Incrementing the presentationTimeUs parameter of queueInputBuffer() on each call solved the issue.
For example,
codec.queueInputBuffer(inputBufferIndex, 0, data.size, time, 0)
time += 66 //incrementing by 1 works too
If anyone else is facing this problem (as I did today) while starting with MediaCodec, make sure to release the output buffers after you're done with them:
mediaCodec.releaseOutputBuffer(index, render);
or else the codec will run out of available buffers pretty soon.
It may be necessary to feed several input buffers before obtaining data in the output buffer.
-1 is INFO_TRY_AGAIN_LATER, meaning the output buffer queue is still being prepared and you just need to call dequeueOutputBuffer again.
Try using a work loop that calls dequeueOutputBuffer in a loop similar to ExoPlayer:
while (drainOutputBuffer(positionUs, elapsedRealtimeUs)) {}
if (feedInputBuffer(true)) {
while (feedInputBuffer(false)) {}
}
where drainOutputBuffer is a method that calls dequeueOutputBuffer.
I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
        FREQUENCY,
        CHANNEL_CONFIG_OUT,
        AUDIO_ENCODING,
        PLAYER_CAPACITY,
        PLAY_MODE);
Where the parameters are defined like:
private static final int FREQUENCY = 8000,
        CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
        AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
        PLAYER_CAPACITY = 2048,
        STREAM_TYPE = AudioManager.STREAM_MUSIC,
        PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), the playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea why it sounds choppy? Maybe it has something to do with the Wi-Fi connection itself? I don't think so, as the audio doesn't sound bad the other way around, when I send data from the Android phone to another source over UDP - then the sound is complete and not choppy at all... So does anybody have an idea why this is happening?
Do you know how many bytes per second you are receiving, the average time between packets, and the maximum time between packets? If not, can you add code to calculate it?
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length); call in its own Thread. This does take away some of the choppiness (due to the fact that write is a blocking call), but it still sounds choppy after a good second or two. It still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and the C++ code of AudioTrack, and I've noticed that the issue can only occur in the C++ code.
if (__builtin_expect(result != NO_ERROR, false)) {
    LOGW("obtainBuffer timed out (is the CPU pegged?) "
         "user=%08x, server=%08x", u, s);
    mAudioTrack->start(); // FIXME: Wake up audioflinger
    timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with it, but it was never as complicated as this...
Edit 2:
I've tried something somewhat different now: I buffer the data I receive, and when the buffer is filled with some data, it is written to the player. However, the player keeps up with consuming it for a few cycles, then the obtainBuffer timed out (is the CPU pegged?) warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it will continually get data written to it until the buffer is emptied.
Another slight difference is that I stream a file to the player now. That is, I read it in chunks, then write those chunks to the buffer. This simulates the packets being received over Wi-Fi...
I am beginning to wonder if this is just an OS issue that Android has, and it isn't something I can solve on my own... Anybody got any ideas on that?
Edit 3:
I've done more testing, but it doesn't help me any further. The test shows that I only get lag when I try to write to the AudioTrack for the first time. This takes somewhere between 1 and 3 seconds to complete. I did this using the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
Logcat Image http://img810.imageshack.us/img810/3453/logcatimage.png
These show that the lag only occurs at the beginning, after which the AudioTrack keeps receiving data continuously... I really need to get this one fixed...