I am trying to take small recordings to find the Sound Pressure Level from a service, but Android won't give me access to the hardware. I get the following errors in Logcat:
The error comes from the following code:
AudioRecord recordInstance = null;

// We're important...
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

short bufferSize = 4096; // 2048;
recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,   // line 167
        this.getFrequency(), this.getChannelConfiguration(),
        this.getAudioEncoding(), bufferSize);                     // object not created
tempBuffer = new short[bufferSize];
recordInstance.startRecording();
What happens is that recordInstance is never properly created, so when execution reaches recordInstance.startRecording(), recordInstance is still null. Android rejects my program's request at the constructor call. Does anyone know what those errno values indicate? I couldn't find a list online.
AudioRecord Docs
Thanks
Check three things:
1. Your permissions: you need to request the RECORD_AUDIO permission.
2. Have you run this once and not called recordInstance.release()? If so, you may have tied up the audio resources, and in my experience it won't work again until you restart the phone.
3. The buffer size. There is a static method, AudioRecord.getMinBufferSize(), that gives you the smallest buffer the hardware will accept for a given configuration.
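For example, a minimal sketch of point 3 (the sample rate, channel config, and encoding below are placeholders; substitute whatever your getFrequency(), getChannelConfiguration(), and getAudioEncoding() return):

int sampleRate = 8000;                              // assumption: replace with this.getFrequency()
int channelConfig = AudioFormat.CHANNEL_IN_MONO;    // assumption: replace with your channel config
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT; // assumption: replace with your encoding

// Ask the framework for the minimum buffer size instead of hard-coding 4096
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioEncoding);
if (minBufferSize == AudioRecord.ERROR_BAD_VALUE || minBufferSize == AudioRecord.ERROR) {
    throw new IllegalStateException("Recording parameters not supported by the hardware");
}

AudioRecord recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelConfig, audioEncoding, minBufferSize);
if (recordInstance.getState() != AudioRecord.STATE_INITIALIZED) {
    // Construction does not always throw; check the state before calling startRecording()
    recordInstance.release();
    throw new IllegalStateException("AudioRecord failed to initialize");
}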
I would like to modify Android OS (official image from AOSP) to add preprocessing to a normal phone call playback sound.
I've already achieved this filtering for app audio playback (by modifying HAL and audioflinger).
I'm OK with targeting only a specific device (Nexus 5X). Also, I only need to filter playback - I don't care about recording (uplink).
UPDATE #1:
To make it clear: I'm OK with modifying Qualcomm-specific drivers, or whatever other component runs on the Nexus 5X and can help me modify in-call playback.
UPDATE #2:
I'm attempting to create a Java layer app that routes the phone playback to the music stream in real time.
I've already succeeded in installing it as a system app, getting permissions for initializing AudioRecord with AudioSource.VOICE_DOWNLINK. However, the recording gives blank samples; it doesn't record the voice call.
This is the code inside my worker thread:
// Start recording the call downlink
int recBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, recBufferSize);
// Set up playback on the music stream
int playBufferSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, playBufferSize, AudioTrack.MODE_STREAM);
mRecord.startRecording();
mTrack.play();
int bufSize = 1024;
short[] buffer = new short[bufSize];
int res;
while (!interrupted())
{
    // Pull recorded samples and play them back
    res = mRecord.read(buffer, 0, bufSize, AudioRecord.READ_NON_BLOCKING);
    if (res > 0) {
        mTrack.write(buffer, 0, res, AudioTrack.WRITE_BLOCKING);
    }
}
// Stop recording
mRecord.stop();
mRecord.release();
mRecord = null;
// Stop playback
mTrack.stop();
mTrack.release();
mTrack = null;
I'm running on a Nexus 5X with my own AOSP custom ROM, Android 7.1.1. I need to find the place that will allow call recording to work, probably somewhere in hardware/qcom/audio/hal in the platform code.
I've also been looking at the function voice_check_and_set_incall_rec_usecase in hardware/qcom/audio/hal/voice.c. However, I wasn't able to make sense of it (how to make it work the way I want it to).
UPDATE #3:
I've opened a more specific question about using AudioSource.VOICE_DOWNLINK, which might draw the right attention and eventually help solve this question's problem as well.
There are several possible issues that come to mind. The blank buffer might indicate that you have the wrong source selected. Also, according to https://developer.android.com/reference/android/media/AudioRecord.html#AudioRecord(int,%20int,%20int,%20int,%20int) you don't always get an exception even when something is wrong with the configuration, so you might want to confirm that your object has been initialized properly (getState() should return STATE_INITIALIZED). If all else fails, you could also call mRecord.setPreferredDevice(...) to route the phone's built-in earpiece directly to the input of your recorder; note that setPreferredDevice() takes an AudioDeviceInfo object, not a type constant like AudioDeviceInfo.TYPE_BUILTIN_EARPIECE. Yeah, it's kind of dirty and hacky, but perhaps it suits the purpose.
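A rough sketch of that device lookup (untested; whether the downlink/earpiece path is exposed as an input device at all is device-dependent, and context here is whatever Context your service has):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
AudioDeviceInfo target = null;
for (AudioDeviceInfo dev : am.getDevices(AudioManager.GET_DEVICES_INPUTS)) {
    // TYPE_TELEPHONY is the telephony RX/TX device; TYPE_BUILTIN_EARPIECE may not appear as an input
    if (dev.getType() == AudioDeviceInfo.TYPE_TELEPHONY) {
        target = dev;
        break;
    }
}
if (target != null) {
    mRecord.setPreferredDevice(target);
}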
The other thing that puzzled me is that instead of using the builder class you configure the object directly via its constructor. Is there a specific reason why you don't want to use AudioRecord.Builder instead (there's even a nice example at https://developer.android.com/reference/android/media/AudioRecord.Builder.html)?
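For reference, a sketch of the same configuration built with the builder (the values mirror your current code; I haven't verified this against VOICE_DOWNLINK specifically):

AudioRecord rec = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.VOICE_DOWNLINK)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
                .build())
        .setBufferSizeInBytes(recBufferSize)
        .build();
if (rec.getState() != AudioRecord.STATE_INITIALIZED) {
    // build() can still hand back an object that failed to initialize; check before use
    rec.release();
}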
I am creating an AudioTrack with the following definition.
audioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC,
44100,
AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT,
buffer.length * 2,
AudioTrack.MODE_STATIC);
audioTrack.write(buffer, 0, buffer.length);
audioTrack.setPositionNotificationPeriod(500);
audioTrack.setNotificationMarkerPosition(buffer.length);
progressListener = new PlaybackProgress(buffer.length);
audioTrack.setPlaybackPositionUpdateListener(progressListener);
When the audioTrack finishes, the following is called to stop the audio and reset the head position.
private void resetAudioPlayback() {
ViewGroup.LayoutParams params = playbackView.getLayoutParams();
params.width = 0;
playbackView.setLayoutParams(params);
audioTrack.stop();
audioTrack.reloadStaticData();
playImage.animate().alpha(100).setDuration(500).start();
}
The above code works perfectly fine with Android 5.1, but I'm having issues with 4.4.4. audioTrack.stop() is called but the audio does not stop, and since reloadStaticData rewinds the audio back to the start position, it replays the audio. With 5.1 it correctly stops and resets the buffer back to the start of the playback, and when the play button is pressed it plays from the beginning.
Can someone help me fix this issue with Android 4.4.4?
I'm not absolutely certain this will solve your problem, but consider using pause() instead of stop(). According to the documentation, stop() for MODE_STREAM will actually keep playing the remainder of the last buffer that was written. You're using MODE_STATIC, but it might be worth trying.
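A minimal sketch of what resetAudioPlayback() might do instead (same MODE_STATIC setup as in your code):

audioTrack.pause();              // halt immediately instead of letting stop() play out the buffer
audioTrack.reloadStaticData();   // rewind the static buffer so the next play() starts from the beginning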
Also (possibly unrelated), consider that write() returns the number of elements it actually wrote, so you shouldn't depend on a single write filling the entire buffer of the AudioTrack every time. write() should be treated like an OutputStream write in that it may not write the entire contents of the buffer it was given, so it's better to write a loop, check how much has been written with each call to write(), and continue writing from a new index in the buffer array until the sum of all the writes equals the length of the buffer.
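A sketch of that loop (assuming buffer is the short[] you pass to write() in your snippet):

int offset = 0;
while (offset < buffer.length) {
    int written = audioTrack.write(buffer, offset, buffer.length - offset);
    if (written <= 0) {
        break;   // ERROR, ERROR_BAD_VALUE or ERROR_INVALID_OPERATION: bail out instead of looping forever
    }
    offset += written;
}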
I need to implement functionality in my app that reads the intensity of the voice through the microphone, but I don't know how to do it. Can someone help me, please? I just need to read the intensity of the voice, not recognize any words.
I understand that I have to use the AudioRecord class, but I don't understand what steps I must write in my code, because I don't know whether it's really necessary to save a bit of the voice to the SD card, then convert it to PCM, and then read the maximum of that signal.
The AudioRecord class will let you record into a buffer. You can then chose to process the buffer or save it to the SD card, depending on your needs. Which you want to do depends entirely on your application. Do you need the data after you process it? Or is the processed result all you need? Do you intend to play back the recordings?
A simplified example of how to use the AudioRecord class follows:
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
sampleRate, AudioFormat.CHANNEL_IN_STEREO,
AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.startRecording();
short[] buf = new short[bufferSize];
int n = 0;
while (<some condition>) {   // e.g. a volatile "keep recording" flag
    n = recorder.read(buf, 0, bufferSize);
    process(buf, n);         // n = number of shorts actually read
}
recorder.stop();
recorder.release();
You would obviously want to put the above code in a thread outside of the main UI thread.
You need to make sure that whatever you do in process is quick enough that you can get back around to reading the data before the buffer fills up, or you will drop data. Sample rate and buffer size will depend on how you are processing the data and what your latency requirements are.
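For the original question (voice intensity), process() could simply compute an RMS level in decibels from each buffer. A rough sketch (my own addition, not part of the original answer; the 0 dB point here is full scale, so add a calibration offset if you need actual SPL):

// RMS level of one PCM-16 buffer, in dB relative to full scale (dBFS)
private double process(short[] buf, int n) {
    double sum = 0;
    for (int i = 0; i < n; i++) {
        double s = buf[i] / 32768.0;               // normalize to [-1, 1)
        sum += s * s;
    }
    double rms = Math.sqrt(sum / Math.max(n, 1));
    return 20.0 * Math.log10(Math.max(rms, 1e-9)); // clamp to avoid log10(0)
}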
After you get all that working, you may decide that you want to put the phone into 'Speaker Phone' in order to get better gain through the mic:
AudioManager amAudioManager;
amAudioManager = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
amAudioManager.setMode(AudioManager.MODE_IN_CALL);
amAudioManager.setSpeakerphoneOn(true);
Yes, you have to put the phone in IN_CALL in order to enable the speaker phone. Yes, some phones apparently disable the ability to record when IN_CALL.
I've been trying to get my application recording the sound coming from the microphone and playing it back in (approximately) real-time, however without success.
I'm using AudioRecord and AudioTrack classes for record and playback, respectively. I've tried different approaches, I've tried to record the incoming sound and write it to a file and it worked fine. I've also tried to playback sound from that file AFTER with AudioTrack and it worked fine too. The problem is when I try to play the sound in real-time, instead of reading a file after it's written.
Here is the code:
//variables
private int audioSource = MediaRecorder.AudioSource.MIC;
private int samplingRate = 44100; /* in Hz*/
private int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
private int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private int bufferSize = AudioRecord.getMinBufferSize(samplingRate, channelConfig, audioFormat);
private int sampleNumBits = 16;
private int numChannels = 1;
// …
AudioRecord recorder = new AudioRecord(audioSource, samplingRate, channelConfig, audioFormat, bufferSize);
recorder.startRecording();
isRecording = true;
AudioTrack audioPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);
if(audioPlayer.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
audioPlayer.play();
// Capture data from the microphone and write it straight to the player
int readBytes = 0, writtenBytes = 0;
do {
    readBytes = recorder.read(data, 0, bufferSize);
    if (AudioRecord.ERROR_INVALID_OPERATION != readBytes) {
        writtenBytes += audioPlayer.write(data, 0, readBytes);
    }
} while (isRecording);
A java.lang.IllegalStateException is thrown, caused by "play() called on an uninitialized AudioTrack".
However, if I change the AudioTrack initialization to use, for example, a sampling rate of 8000 Hz and 8-bit samples (instead of 16-bit), it no longer throws the exception and the application runs, although it produces horrible noise.
When I play an AudioTrack from a file, there is no problem initializing the AudioTrack; I tried 44100 Hz and 16 bits and it worked properly, producing the correct sound.
Any help ?
All native Android audio is encoded. You can only play out PCM formats in real time, or use a special streaming codec, which I don't think is trivial on Android.
The point is that if you want to record and play back audio simultaneously, you have to create your own audio buffer and store raw PCM-encoded audio samples in there (I'm not sure if you're thinking "duh!" or whether this is all over your head, so I'll try to be clear without over-explaining).
PCM is a digital representation of an analog signal in which your audio samples are a set of "snapshots" of the original acoustic wave. Because all kinds of clever mathematicians and engineers saw the potential in trying to reduce the number of bits you represent this data with, they came up with all sorts of encoders. The encoded (compressed) signal is represented very differently from the raw PCM signal and has to be decoded (en-cod-er+dec-oder = codec). Unless you're using special algorithms and media streaming codecs, it's impossible to play back an encoded signal like you're trying to, because it's not encoded sample by sample, but rather frame by frame, where you need the whole frame of samples, if not the complete signal, to decode this frame.
The way to do it is to manually store the audio samples coming from the microphone buffer and manually feed them to the output buffer. You will have to do some coding for that, but I believe there are some open-source apps you can look at and take a peek at their source (unless you're planning to sell your app later on, of course, but that's a whole different discussion).
If you're developing for Android 2.3 or later and are not too scared of programming in native code, you can try using OpenSL ES. The Android-specific features of OpenSL ES are listed here. This platform allows you somewhat more flexible audio manipulation and you might find just what you need, if your app will be highly reliant on audio processing.
It is thrown a java.lang.IllegalStateException with the reason being
caused by "play() called on a uninitialized AudioTrack".
It is because the buffer size is too small. I tried bufferSize += 2048; and it was OK then.
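A less ad-hoc variant (my own suggestion, not what the answerer tested) is to size the AudioTrack from AudioTrack.getMinBufferSize() instead of reusing the AudioRecord buffer size, since the playback minimum can be larger than the recording minimum:

int trackBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audioPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(trackBufferSize, bufferSize),   // never smaller than what the platform requires
        AudioTrack.MODE_STREAM);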
I had a similar problem and I solved it by adding this permission to the manifest file:
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
Make sure that your data variable is large enough for the samplingRate.
For example, if you use a samplingRate of 44100, your data byte array's length should be 44101 or more.
I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
FREQUENCY,
CHANNEL_CONFIG_OUT,
AUDIO_ENCODING,
PLAYER_CAPACITY,
PLAY_MODE);
Where the parameters are defined like:
private static final int FREQUENCY = 8000,
CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
PLAYER_CAPACITY = 2048,
STREAM_TYPE = AudioManager.STREAM_MUSIC,
PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea why it sounds choppy? Maybe it has something to do with the WiFi connection itself? I don't think so, since the sound doesn't sound bad the other way around, when I send data from the Android phone to another source over UDP; that sound is complete and not choppy at all. So does anybody have an idea why this is happening?
Do you know how many bytes per second you are receiving, how the average time between packets compares, and what the maximum time between packets is? If not, can you add code to calculate it?
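Something along these lines could be dropped into the class that owns the receive loop (a sketch; onPacket() is a hypothetical hook called wherever packet.getData() arrives):

private long lastPacketTime = 0;
private long windowStart = System.currentTimeMillis();
private long bytesInWindow = 0;
private long maxGapMs = 0;

private void onPacket(byte[] audio) {
    long now = System.currentTimeMillis();
    if (lastPacketTime != 0) {
        maxGapMs = Math.max(maxGapMs, now - lastPacketTime);
    }
    lastPacketTime = now;
    bytesInWindow += audio.length;
    if (now - windowStart >= 1000) {
        Log.d("AudioStats", bytesInWindow + " bytes/s, max gap " + maxGapMs + " ms");
        windowStart = now;
        bytesInWindow = 0;
        maxGapMs = 0;
    }
}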
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
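As a sketch of that, PLAYER_CAPACITY could be derived from the platform minimum rather than fixed at 2048 (the multiplier is an arbitrary starting point to tune):

int minBuf = AudioTrack.getMinBufferSize(FREQUENCY, CHANNEL_CONFIG_OUT, AUDIO_ENCODING);
int playerCapacity = Math.max(PLAYER_CAPACITY, 4 * minBuf);   // a few times the minimum absorbs network jitter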
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length); call in its own Thread. This takes away some of the choppiness (due to the fact that write() is a blocking call), but it still sounds choppy after a good second or two. It still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
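One thing worth noting about this approach: spawning a new Thread per packet doesn't guarantee the packets are written in order. A single long-lived writer thread draining a queue would keep ordering while still keeping write() off the receive path; a rough sketch (packetQueue is a made-up java.util.concurrent.BlockingQueue<byte[]> that the receive loop offers each packet's data into):

final BlockingQueue<byte[]> packetQueue = new LinkedBlockingQueue<>();

Thread writer = new Thread() {
    public void run() {
        try {
            while (!isInterrupted()) {
                byte[] audio = packetQueue.take();      // blocks until a packet arrives
                mPlayer.write(audio, 0, audio.length);  // blocking write, but off the network thread
            }
        } catch (InterruptedException e) {
            // interrupted while waiting: let the thread exit
        }
    }
};
writer.start();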
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and the C++ code of AudioTrack, and I've noticed that the warning can only come from the C++ code.
if (__builtin_expect(result!=NO_ERROR, false)) {
LOGW( "obtainBuffer timed out (is the CPU pegged?) "
"user=%08x, server=%08x", u, s);
mAudioTrack->start(); // FIXME: Wake up audioflinger
timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with C++, but it was never as complicated as this...
Edit 2:
I've tried something different now: I buffer the data I receive, and once the buffer has some data in it, it is written to the player. However, the player keeps up with consuming it for a few cycles, then the obtainBuffer timed out (is the CPU pegged?) warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it continually has data written to it until the buffer is emptied.
Another slight difference is that I now stream a file to the player. That is, I read it in chunks, then write those chunks to the buffer. This simulates the packets being received over WiFi...
I am beginning to wonder if this is just an OS issue that Android has and isn't something I can solve on my own... Anybody got any ideas on that?
Edit 3:
I've done more testing, but it doesn't get me any further. This test shows that I only get lag when I write to the AudioTrack for the first time. That takes somewhere between 1 and 3 seconds to complete. I did this by using the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
Logcat screenshot: http://img810.imageshack.us/img810/3453/logcatimage.png
These show that the lag initially occurs at the beginning, after which the AudioTrack keeps getting data continuously... I really need to get this one fixed...