I am trying to record audio from the device. I created an AudioRecord object and manage it over the lifecycle of the activity: when my app goes to the background, recording stops, and when it returns to the foreground, it continues. While the recording is running, I want to read the samples from the recorder into a byte array.
This is the code I use to do it:
private void startRecorder() {
    Log.d(TAG, "before start recording");
    myBuffer = new byte[2048];
    audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
    audioManager.requestAudioFocus(mAudioFocusListener, AudioManager.STREAM_DTMF,
            AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
    myRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 2048);
    myThread = new Thread() {
        public void run() {
            while (true) {
                if (myRecorder != null && myRecorder.getState() == AudioRecord.RECORDSTATE_RECORDING) {
                    myRecorder.read(myBuffer, 0, 2048);
                    recordingSampleNumber++;
                    if (recordingSampleNumber % 10 == 0) {
                        Log.d(TAG, "recording sample number:" + recordingSampleNumber);
                    }
                }
            }
        }
    };
    myThread.setPriority(Thread.MAX_PRIORITY);
    myRecorder.startRecording();
    myThread.start();
    Log.d(TAG, "after start recording");
}
My problem is that every once in a while I get the following error:
06-22 11:44:21.057: E/AndroidRuntime(17776): Process: com.example.microphonetestproject2, PID: 17776
06-22 11:44:21.057: E/AndroidRuntime(17776): java.lang.NullPointerException: Attempt to invoke virtual method 'int android.media.AudioRecord.getState()' on a null object reference
06-22 11:44:21.057: E/AndroidRuntime(17776): at com.example.microphonetestproject2.MicrophoneTestApp$3.run(MicrophoneTestApp.java:108)
My question is: why would I get an NPE on myRecorder.getState() when, just half a line earlier, I check if (myRecorder != null)?
This looks like a concurrency problem. After the myRecorder != null check passes, another thread can set the variable to null before getState() runs; the check and the call are not atomic, and as you probably know, the two threads run in parallel. I'd recommend guarding every access with a lock, so no other thread can null the field mid-check. Note that you cannot synchronize on myRecorder itself, because if it is null the synchronized statement throws the same NPE; use a dedicated lock object that both threads share. (Incidentally, the recording state is reported by getRecordingState(), not getState(), so the condition below uses that method.)
private final Object recorderLock = new Object(); // shared with the code that sets myRecorder to null

while (true) {
    synchronized (recorderLock) {
        if (myRecorder != null && myRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            myRecorder.read(myBuffer, 0, 2048);
            recordingSampleNumber++;
            if (recordingSampleNumber % 10 == 0) {
                Log.d(TAG, "recording sample number:" + recordingSampleNumber);
            }
        }
    }
}
Although this may fix your problem, you should deal with it in another way: instead of setting the variable to null to cancel the thread, use the built-in interrupt() and join() methods:
private Thread mRecorderThread;

private void startRecorder() {
    myRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 2048);
    mRecorderThread = new Thread() {
        public void run() {
            // Exit the loop when interrupted; otherwise join() in
            // stopRecorder() would wait forever.
            while (!isInterrupted()) {
                if (myRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                    myRecorder.read(myBuffer, 0, 2048);
                    recordingSampleNumber++;
                    if (recordingSampleNumber % 10 == 0) {
                        Log.d(TAG, "recording sample number:" + recordingSampleNumber);
                    }
                }
            }
        }
    };
    mRecorderThread.setPriority(Thread.MAX_PRIORITY);
    myRecorder.startRecording();
    mRecorderThread.start();
    Log.d(TAG, "after start recording");
}
private void stopRecorder() {
    mRecorderThread.interrupt();
    // Wait for the thread to finish (for the interruption to take effect)
    try {
        mRecorderThread.join();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    myRecorder.stop();
}
interrupt(), as you may have guessed, interrupts the thread, but it does not kill it instantly: the loop must notice the interruption (the isInterrupted() check) and fall out of run(). With join() you then wait for the thread to finish after you interrupted it.
In my app I want to record the user's speech, run it through a band-pass filter, and then pass the resulting audio file (PCM/WAV) to the text-to-speech engine to speak the filtered results. I have everything working except that I cannot find a way to pass an audio file to the TTS engine. I have googled this for a long time now (2 weeks) with no luck. Is there any workaround for achieving this?
What I tried was calling the RecognizerIntent and then starting the band-pass filter via recording; I also tried the other way around, starting the band-pass method first and then calling the recognizer intent, but either way kills the TTS instance even though it's running on a separate thread. I also tested this using the normal TTS procedure in the recognizer intent as well as the web-search version of the recognizer intent, both with the same results. If I don't apply the band-pass filter (note that a recording thread is started at this point) it works fine, but as soon as I apply the band-pass filter it fails, with the helpful message, when in web-search mode, that "Google is unavailable". Here's my current code:
RecognizerIntent, normal version:
public void getMic() { // bring up the "speak now" message window
    tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                result = tts.setLanguage(Locale.US);
                if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                    l = new Intent();
                    l.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                    startActivity(l);
                }
            }
        }
    });
    k = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    k.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something");
    try {
        startActivityForResult(k, 400);
    } catch (ActivityNotFoundException a) {
        Log.i("CrowdSpeech", "Your device doesn't support Speech Recognition");
    }
    if (crowdFilter && running == 4) {
        try {
            startRecording();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}
RecognizerIntent, web search version:
public void getWeb() { // search the web from voice input
    k = new Intent(RecognizerIntent.ACTION_WEB_SEARCH);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    k.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something");
    try {
        startActivityForResult(k, 400);
    } catch (ActivityNotFoundException a) {
        Log.i("CrowdSpeech", "Your device doesn't support Speech Recognition");
    }
    if (crowdFilter && running == 4) {
        try {
            startRecording();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}
And the startRecording method that applies the bandpass filter:
private void startRecording() throws FileNotFoundException {
    if (running == 4) { // record from mic, apply band-pass filter, save as WAV using the TARSOS library
        dispatcher = AudioDispatcherFactory.fromDefaultMicrophone(RECORDER_SAMPLERATE, bufferSize, 0);
        AudioProcessor p = new BandPass(freqChange, tollerance, RECORDER_SAMPLERATE);
        dispatcher.addAudioProcessor(p);
        isRecording = true;
        // Output
        File f = new File(myFilename.toString() + "/Filtered result.wav");
        RandomAccessFile outputFile = new RandomAccessFile(f, "rw");
        TarsosDSPAudioFormat outputFormat = new TarsosDSPAudioFormat(44100, 16, 1, true, true);
        WriterProcessor writer = new WriterProcessor(outputFormat, outputFile);
        dispatcher.addAudioProcessor(writer);
        recordingThread = new Thread(new Runnable() {
            @Override
            public void run() {
                dispatcher.run();
            }
        }, "Crowd_Speech Thread");
        recordingThread.start();
    }
}
The only reason I'm doing it this way is in the hope that, by applying the filter, the TTS engine would receive the modified audio. The filtered audio is also saved to a file, because originally I just wanted to pass the file to TTS to read after recording. Is there any way to accomplish this?
Another thought: is there any way, inside my project, to modify the source code of the library that the recognizer intent references, so that I can add a parameter to take audio from a file?
I have set up two commands in onCreate, and both send data to a Bluetooth device. I need the second command to wait until a data string has been received in response to the first command before it runs; each command just sends one byte over Bluetooth. I tried a while loop on a boolean, but it does not seem to work and hangs on the while statement. I assume the busy loop is not letting the handler fire. Both commands work fine individually, as long as I don't send both.
This is the code in onCreate with both commands and the while loop:
looping = true;
intByteCount = 9;
GetData(intCommand);  // (command 1) send byte to get data from receiver
while (looping) {     // wait for data to be received before the next command
    Log.d("TAG", "On Hold ? ");
}
intByteCount = 160;   // (command 2)
GetTitle(intCommand);
This is the code in the Bluetooth handler that sets looping to false once all the bytes have been received:
Handler h = new Handler() {
    @Override
    public void handleMessage(android.os.Message msg) {
        byte[] readBuf = (byte[]) msg.obj;
        if (intByteCount == 9) {
            // Data is channel status and Master value
            byte[] encodedBytes = new byte[5];
            System.arraycopy(readBuf, 0, encodedBytes, 0, encodedBytes.length);
            looping = false;
        }
        // ...
    }
};
GetTitle() and GetData() are basically the same. Here is GetData():
private void GetData(int FixtureNumber) {
    Log.d("TAG", "Value " + intArrayToInt(intArray1));
    intByteCount = 9; // set to receive 9 bytes
    byte buffer[] = new byte[6];
    buffer[0] = ((byte) 1); // command (get data)
    buffer[1] = ((byte) Master_value);
    buffer[2] = ((byte) intArrayToInt(intArray1));
    buffer[3] = ((byte) intArrayToInt(intArray2));
    buffer[4] = ((byte) 3);
    buffer[5] = ((byte) 4);
    if (isBTConnected) {
        try {
            mmOutputStream.write(buffer);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
You can use a Thread and wait a fixed amount of time:
new Thread(new Runnable() {
    @Override
    public void run() {
        // your 1st command
        // you can loop here and check whether the command was executed,
        // or just wait and then execute the 2nd command
        try {
            Thread.sleep(2000); // wait 2 seconds
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // your 2nd command
    }
}).start();
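If you would rather wait for the actual response than for a fixed delay, a variation on the same idea is to block the background thread on a CountDownLatch and have the Bluetooth handler release it. This is only a sketch; the firstResponse name is mine, not from the question:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// One-shot latch; in real code, recreate it for each request.
final CountDownLatch firstResponse = new CountDownLatch(1);

new Thread(new Runnable() {
    @Override
    public void run() {
        intByteCount = 9;
        GetData(intCommand); // 1st command
        try {
            // Blocks this background thread (never the main thread) until
            // the handler calls firstResponse.countDown(), or 5 s pass.
            firstResponse.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        intByteCount = 160;
        GetTitle(intCommand); // 2nd command
    }
}).start();

In the handler, replace looping = false; with firstResponse.countDown();.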
Thank you Agustin, that worked fine. The data comes back quickly, so a plain delay works without the control boolean. Here is the new code:
new Thread(new Runnable() {
    @Override
    public void run() {
        intByteCount = 9;
        GetData(intCommand); // send byte to get data from receiver
        try {
            Thread.sleep(1000); // wait 1 second
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        intByteCount = 160; // set incoming data byte count
        GetTitle(intCommand);
    }
}).start();
You are currently holding up the main thread there, which is not advised. The best option is to restructure the code so that the second Bluetooth command is sent from the handler, at the point where looping == false is currently set.
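For illustration, a minimal sketch of that restructuring, reusing the names from your code (intByteCount, GetTitle, intCommand):

Handler h = new Handler() {
    @Override
    public void handleMessage(android.os.Message msg) {
        byte[] readBuf = (byte[]) msg.obj;
        if (intByteCount == 9) {
            byte[] encodedBytes = new byte[5];
            System.arraycopy(readBuf, 0, encodedBytes, 0, encodedBytes.length);
            // The first response is complete; fire the second command from
            // here instead of unblocking a busy-wait loop on the main thread.
            intByteCount = 160;
            GetTitle(intCommand);
        }
    }
};

This way nothing blocks, and the second command fires exactly when the first response has arrived.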
I want to know how to send my screen over RTP. Here is my approach.
First, I'm using MediaProjection to capture my screen. The referenced URL is http://mattsnider.com/video-recording-with-mediaprojectionmanager/
Second, I'm trying to modify the spydroid library to stream my screen instead of the camera view. The referenced URL is https://github.com/fyhertz/spydroid-ipcamera
I've done the screen capture using MediaProjection. Here is part of the sample code:
private boolean drainEncoder() {
    mDrainHandler.removeCallbacks(mDrainEncoderRunnable);
    while (true) {
        int bufferIndex = mVideoEncoder.dequeueOutputBuffer(mVideoBufferInfo, 0);
        if (bufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
            // nothing available yet
            break;
        } else if (bufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // should happen before receiving buffers, and should only happen once
            if (mTrackIndex >= 0) {
                throw new RuntimeException("format changed twice");
            }
            mTrackIndex = mMuxer.addTrack(mVideoEncoder.getOutputFormat());
            if (!mMuxerStarted && mTrackIndex >= 0) {
                mMuxer.start();
                mMuxerStarted = true;
            }
        } else if (bufferIndex < 0) {
            // not sure what's going on, ignore it
        } else {
            ByteBuffer encodedData = mVideoEncoder.getOutputBuffer(bufferIndex);
            if (encodedData == null) {
                throw new RuntimeException("couldn't fetch buffer at index " + bufferIndex);
            }
            if ((mVideoBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                mVideoBufferInfo.size = 0;
            }
            if (mVideoBufferInfo.size != 0) {
                if (mMuxerStarted) {
                    encodedData.position(mVideoBufferInfo.offset);
                    encodedData.limit(mVideoBufferInfo.offset + mVideoBufferInfo.size);
                    mMuxer.writeSampleData(mTrackIndex, encodedData, mVideoBufferInfo);
                } else {
                    // muxer not started
                }
            }
            mVideoEncoder.releaseOutputBuffer(bufferIndex, false);
            if ((mVideoBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                break;
            }
        }
    }
}
As you can see, the encoded frames end up in
mMuxer.writeSampleData(mTrackIndex, encodedData, mVideoBufferInfo);
If it were possible to send encodedData via RTP instead, then mirroring would work fine.
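What I have in mind is roughly this (an untested sketch: the pipe and mEncoderSink are my names; everything else comes from the two code samples):

// Create a pipe; give the read end to spydroid's packetizer in place of
// the camera socket, and write encoder output into the write end.
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
InputStream packetizerSource = new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);
OutputStream mEncoderSink = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);

mPacketizer.setDestination(mDestination, mRtpPort, mRtcpPort);
mPacketizer.setInputStream(packetizerSource);
mPacketizer.start();

// ...and inside drainEncoder(), in place of mMuxer.writeSampleData(...):
encodedData.position(mVideoBufferInfo.offset);
encodedData.limit(mVideoBufferInfo.offset + mVideoBufferInfo.size);
byte[] accessUnit = new byte[mVideoBufferInfo.size];
encodedData.get(accessUnit);
mEncoderSink.write(accessUnit);

I assume the packetizer would also need the SPS/PPS data (the codec-config buffer that the muxer path above zeroes out), so that part needs extra care.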
Here is the spydroid code, VideoStream.java:
protected void encodeWithMediaRecorder() throws IOException {
    Log.d(TAG, "Video encoded using the MediaRecorder API");
    // We need a local socket to forward data output by the camera to the packetizer
    createSockets();
    // Reopens the camera if needed
    destroyCamera();
    createCamera();
    // The camera must be unlocked before the MediaRecorder can use it
    unlockCamera();
    try {
        mMediaRecorder = new MediaRecorder();
        mMediaRecorder.setCamera(mCamera);
        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mMediaRecorder.setVideoEncoder(mVideoEncoder);
        mMediaRecorder.setPreviewDisplay(mSurfaceView.getHolder().getSurface());
        mMediaRecorder.setVideoSize(mRequestedQuality.resX, mRequestedQuality.resY);
        mMediaRecorder.setVideoFrameRate(mRequestedQuality.framerate);
        // The bandwidth actually consumed is often above what was requested
        mMediaRecorder.setVideoEncodingBitRate((int) (mRequestedQuality.bitrate * 0.8));
        // We write the output of the camera to a local socket instead of a file!
        // This one little trick makes streaming feasible quite simply: data from
        // the camera can then be manipulated at the other end of the socket
        mMediaRecorder.setOutputFile(mSender.getFileDescriptor());
        mMediaRecorder.prepare();
        mMediaRecorder.start();
    } catch (Exception e) {
        throw new ConfNotSupportedException(e.getMessage());
    }
    // This skips the MPEG-4 header; if this step fails we can't stream anything :(
    InputStream is = mReceiver.getInputStream();
    try {
        byte buffer[] = new byte[4];
        // Skip all atoms preceding the mdat atom
        while (!Thread.interrupted()) {
            while (is.read() != 'm');
            is.read(buffer, 0, 3);
            if (buffer[0] == 'd' && buffer[1] == 'a' && buffer[2] == 't') break;
        }
    } catch (IOException e) {
        Log.e(TAG, "Couldn't skip mp4 header :/");
        stop();
        throw e;
    }
    // The packetizer encapsulates the bit stream in an RTP stream and sends it over the network
    mPacketizer.setDestination(mDestination, mRtpPort, mRtcpPort);
    mPacketizer.setInputStream(mReceiver.getInputStream());
    mPacketizer.start();
    mStreaming = true;
}
Can anyone advise me, or share more information on how to modify the spydroid library to send the mirroring data?
I would like to know if there is a way to fix the duration of a recording made with the phone's microphone. For example, when I click a button the recording should start, and it should stop after 5 seconds on its own. What method do you propose? :-)
Edit:
Sorry for the confusion, but I am using the AudioRecord class to record data, and I don't think the MediaRecorder functions work properly (or at all) for it.
If you just use a timer, I do not think you can accurately control how much data is in the buffer when your app reads it. I think the way to record 5 seconds of audio data is to use the technique from this class. The code there carefully sets the size of the audio buffer so that it calls back after it has recorded data for the requested amount of time. Here is a snippet from that class:
public boolean startRecordingForTime(int millisecondsPerAudioClip,
                                     int sampleRate, int encoding) {
    float percentOfASecond = (float) millisecondsPerAudioClip / 1000.0f;
    int numSamplesRequired = (int) ((float) sampleRate * percentOfASecond);
    int bufferSize = determineCalculatedBufferSize(sampleRate, encoding,
            numSamplesRequired);
    return doRecording(sampleRate, encoding, bufferSize,
            numSamplesRequired, DEFAULT_BUFFER_INCREASE_FACTOR);
}
Then, later on, your code just does this:

while (continueRecording) {
    int bufferResult = recorder.read(readBuffer, 0, readBufferSize);
    // do stuff
}

Since readBufferSize is just right, you will get the amount of data you want (with some slight variation).
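For reference, here is a sketch of what a calculation like determineCalculatedBufferSize() presumably does for 16-bit mono PCM; the actual implementation in the linked class may differ:

// Size the buffer to hold the required number of samples, but never
// below the hardware minimum that AudioRecord will accept.
int determineCalculatedBufferSize(int sampleRate, int encoding,
                                  int numSamplesRequired) {
    int bytesPerSample = (encoding == AudioFormat.ENCODING_PCM_16BIT) ? 2 : 1;
    int requestedSize = numSamplesRequired * bytesPerSample;
    int minSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_MONO, encoding);
    return Math.max(requestedSize, minSize);
}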
This is all you need.
@Override
public void onClick(View view) {
    if (view.getId() == R.id.Record) {
        new Timer().schedule(new TimerTask() {
            @Override
            public void run() {
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        mediaRecorder.stop();
                        mediaRecorder.reset();
                        mediaRecorder.release();
                        files.setEnabled(true);
                        record.setEnabled(true);
                        stop.setEnabled(false);
                    }
                });
            }
        }, 5000);
        record.setEnabled(false);
        files.setEnabled(false);
        stop.setEnabled(true);
        try {
            File file = new File(Environment.getExternalStorageDirectory(),
                    "" + new Random().nextInt(50) + ".3gp");
            adapter.add(file.getAbsolutePath());
            adapter.notifyDataSetChanged();
            mediaRecorder = new MediaRecorder();
            mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            mediaRecorder.setOutputFile(file.getAbsolutePath());
            mediaRecorder.prepare();
            mediaRecorder.start();
            stop.setEnabled(true);
        } catch (IllegalStateException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Use setMaxDuration() from the MediaRecorder class.
Alternately, when you start recording, start a new thread and put it to sleep for 5 seconds; when it wakes, stop the recording. Or use a TimerTask that calls stop recording after a 5-second delay, as in the answer above.
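A minimal sketch of the setMaxDuration() approach, assuming the recorder is set up as in the answer above:

mediaRecorder.setMaxDuration(5000); // call after setOutputFormat(), before prepare()
mediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // Recording stops by itself at the limit; just clean up.
            mr.reset();
            mr.release();
        }
    }
});
mediaRecorder.prepare();
mediaRecorder.start();

Note that this applies to MediaRecorder; for the AudioRecord approach mentioned in the question's edit, the buffer-sizing technique from the other answer is the closer fit.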
I am trying to run the audio recording sample from http://developer.android.com/guide/topics/media/index.html. It's working fine; what I need is to continuously show the max amplitude while recording voice. What is the best approach for that?
getMaxAmplitude() returns the maximum amplitude sampled since the last call, so I sampled it every 250 milliseconds:
public void run() {
    int i = 0;
    while (i == 0) {
        Message msg = mHandler.obtainMessage();
        Bundle b = new Bundle();
        try {
            sleep(250);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        if (mRecorder != null) {
            amplitude = mRecorder.getMaxAmplitude();
            b.putLong("currentTime", amplitude);
            Log.i("AMPLITUDE", new Integer(amplitude).toString());
        } else {
            b.putLong("currentTime", 0);
        }
        msg.setData(b);
        mHandler.sendMessage(msg);
    }
}
I used a message handler to update the front end from the background thread.
Create a thread which runs all the time.
In the thread do this:
int amp = mrec.getMaxAmplitude();
if (amp > 0)
    yourcode;
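For example, a sketch of such a polling thread, assuming mrec is a started MediaRecorder and amplitude updates are posted to the UI through a Handler as in the question's code (the polling flag is my addition):

private volatile boolean polling;

private void startAmplitudePolling() {
    polling = true;
    new Thread(new Runnable() {
        @Override
        public void run() {
            while (polling) {
                int amp = mrec.getMaxAmplitude(); // returns 0 until audio has been captured
                if (amp > 0) {
                    Message msg = mHandler.obtainMessage();
                    Bundle b = new Bundle();
                    b.putLong("currentTime", amp);
                    msg.setData(b);
                    mHandler.sendMessage(msg);
                }
                try {
                    Thread.sleep(250);
                } catch (InterruptedException e) {
                    return; // stop polling when interrupted
                }
            }
        }
    }).start();
}

Set polling = false (or interrupt the thread) when recording stops.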
Do you need more information on the thread?