I am trying to write a short[] to a WAV audio file using a FileOutputStream, but the file only contains scratchy noise.
The reason I am using short[] rather than byte[] is that I am using an external library that provides voice activity detection. I added the WAV header from "Android Audio Record to wav" and tried to convert the short[] to a byte[] following "Converting Short array from Audio Record to Byte array without degrading audio quality?", but neither of those links helped.
Here is my code:
private class ProcessVoice implements Runnable {

    @Override
    public void run() {
        File fl = new File(filePath, AUDIO_RECORDING_FILE_NAME);
        try {
            os = new BufferedOutputStream(new FileOutputStream(fl));
        } catch (FileNotFoundException e) {
            Log.w(TAG, "File not found for recording ");
        }

        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO);

        while (!Thread.interrupted() && isListening && audioRecord != null) {
            short[] buffer = new short[vad.getConfig().getFrameSize().getValue() * getNumberOfChannels() * 2];
            audioRecord.read(buffer, 0, buffer.length);
            isSpeechDetected(buffer);
        }
    }

    private void isSpeechDetected(final short[] buffer) {
        vad.isContinuousSpeech(buffer, new VadListener() {
            @Override
            public void onSpeechDetected() {
                callback.onSpeechDetected();
                bytes2 = new byte[buffer.length * 2];
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
                //Log.w(TAG, String.valueOf(buffer));
                try {
                    // writes the data to file from buffer
                    // stores the voice buffer
                    os.write(header, 0, 44);
                    working = true;
                    os.write(bytes2, 0, bytes2.length);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onNoiseDetected() {
                callback.onNoiseDetected();
                if (working) {
                    working = false;
                    try {
                        doneRec();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
                //Log.w(TAG, String.valueOf(bytes2));
            }
        });
    }
}
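One thing that stands out above: the 44-byte header is written on every onSpeechDetected() callback, which embeds header bytes in the middle of the PCM data. For reference, below is a minimal sketch of the short[]-to-WAV path; the class name, method names, and parameters are illustrative, not taken from the code or library above. The header is written once with a placeholder size and patched after recording finishes:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavSketch {

    // Append one frame of 16-bit samples as little-endian bytes.
    static void writeFrame(RandomAccessFile raf, short[] frame) throws IOException {
        byte[] bytes = new byte[frame.length * 2];
        ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(frame);
        raf.write(bytes);
    }

    // Write (or re-write) the standard 44-byte PCM WAV header at the start of the file.
    // RandomAccessFile writes big-endian, so reverseBytes produces the little-endian
    // fields WAV requires.
    static void writeWavHeader(RandomAccessFile raf, int sampleRate, int channels,
                               int pcmDataLength) throws IOException {
        int byteRate = sampleRate * channels * 2;                       // 16-bit samples
        raf.seek(0);
        raf.writeBytes("RIFF");
        raf.writeInt(Integer.reverseBytes(36 + pcmDataLength));         // RIFF chunk size
        raf.writeBytes("WAVE");
        raf.writeBytes("fmt ");
        raf.writeInt(Integer.reverseBytes(16));                         // fmt chunk size
        raf.writeShort(Short.reverseBytes((short) 1));                  // format: PCM
        raf.writeShort(Short.reverseBytes((short) channels));
        raf.writeInt(Integer.reverseBytes(sampleRate));
        raf.writeInt(Integer.reverseBytes(byteRate));
        raf.writeShort(Short.reverseBytes((short) (channels * 2)));     // block align
        raf.writeShort(Short.reverseBytes((short) 16));                 // bits per sample
        raf.writeBytes("data");
        raf.writeInt(Integer.reverseBytes(pcmDataLength));              // data chunk size
    }
}

Usage would be: open the file in "rw" mode, call writeWavHeader(raf, 44100, 1, 0) before the first frame, writeFrame(...) once per captured frame, then writeWavHeader(raf, 44100, 1, totalPcmBytes) again before closing.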
Related
I want to make a dubbing app in Android.
The flow of the app is:
Get a video and an audio file from the gallery.
Reduce the original sound of the video file, and mix (dub) the selected audio onto the video.
After mixing the audio onto the video file, save it to external storage.
I am using MediaMuxer for this, but have not succeeded. Please help me with this.
Regards,
Prateek
I was also looking to dub a video with an audio track using MediaMuxer. MediaMuxer was a difficult concept for me to understand as a beginner, so I ended up referring to this GitHub project: https://github.com/tqnst/MP4ParserMergeAudioVideo
It was my saviour; really, thanks to that person.
I just picked up the code I wanted from it, i.e. dubbing a video with an audio track I specify.
Here is the code I used in my project:
private void mergeAudioVideo(String originalVideoPath, String audioPath, String outputVideoPath) {
    Movie video = null;
    try {
        video = MovieCreator.build(originalVideoPath);
    } catch (RuntimeException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }

    Movie audio = null;
    try {
        audio = MovieCreator.build(audioPath);
    } catch (IOException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
        e.printStackTrace();
    }

    // separate the video track from the original video
    List<Track> videoTracks = new LinkedList<Track>();
    for (Track t : video.getTracks()) {
        if (t.getHandler().equals("vide")) {
            videoTracks.add(t);
        }
    }

    Track audioTrack = audio.getTracks().get(0); // the audio track to dub onto the video

    Movie result = new Movie();
    result.addTrack(videoTracks.get(0)); // add the video track separated from the original
    result.addTrack(audioTrack);         // add the audio track to the result video

    Container out = new DefaultMp4Builder().build(result);

    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(outputVideoPath);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }

    BufferedWritableFileByteChannel byteBufferByteChannel = new BufferedWritableFileByteChannel(fos);
    try {
        out.writeContainer(byteBufferByteChannel);
        byteBufferByteChannel.close();
        fos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And here is the BufferedWritableFileByteChannel class that writes the output video data to disk:
public class BufferedWritableFileByteChannel implements WritableByteChannel {
    private static final int BUFFER_CAPACITY = 1000000;

    private boolean isOpen = true;
    private final OutputStream outputStream;
    private final ByteBuffer byteBuffer;
    private final byte[] rawBuffer = new byte[BUFFER_CAPACITY];

    public BufferedWritableFileByteChannel(OutputStream outputStream) {
        this.outputStream = outputStream;
        this.byteBuffer = ByteBuffer.wrap(rawBuffer);
    }

    @Override
    public int write(ByteBuffer inputBuffer) throws IOException {
        int inputBytes = inputBuffer.remaining();
        if (inputBytes > byteBuffer.remaining()) {
            dumpToFile();
            byteBuffer.clear();
            if (inputBytes > byteBuffer.remaining()) {
                throw new BufferOverflowException();
            }
        }
        byteBuffer.put(inputBuffer);
        return inputBytes;
    }

    @Override
    public boolean isOpen() {
        return isOpen;
    }

    @Override
    public void close() throws IOException {
        dumpToFile();
        isOpen = false;
    }

    private void dumpToFile() {
        try {
            outputStream.write(rawBuffer, 0, byteBuffer.position());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
And don't forget to add the libraries to your project.
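For illustration, a hedged usage sketch: the file paths are hypothetical, and the Maven coordinate in the comment is an assumption to verify against that project's README.

// Assumed dependency in build.gradle: implementation 'com.googlecode.mp4parser:isoparser:1.1.22'
// Run off the main thread; parsing and re-writing a video is slow.
new Thread(new Runnable() {
    @Override
    public void run() {
        mergeAudioVideo("/sdcard/Movies/original.mp4",  // hypothetical input video
                "/sdcard/Music/dub.m4a",                // hypothetical audio to dub in
                "/sdcard/Movies/dubbed.mp4");           // hypothetical output path
    }
}).start();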
This may not be the exact answer to your question, but at least it should shed some light on a probable solution.
I want to play an MP3 with mono and stereo effects. Currently I am working on the stereo effect. I have read all the documents, but I am getting only white noise.
My code is:
public class AudioTest extends Activity {
    byte[] b;

    public void onCreate(Bundle savedInstanceState) {
        AndroidAudioDevice device = new AndroidAudioDevice();
        super.onCreate(savedInstanceState);
        File f = new File("/sdcard/lepord.mp3");
        try {
            FileInputStream in = new FileInputStream(f);
            int size = in.available();
            b = new byte[size];
            in.read(b);
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        while (true) {
            device.writeSamples(b);
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
And my AndroidAudioDevice class is:
public class AndroidAudioDevice {
    AudioTrack track;
    byte[] buffer = new byte[158616];

    @SuppressWarnings("deprecation")
    public AndroidAudioDevice() {
        int minSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                158616, AudioTrack.MODE_STREAM);
        Log.e("", "size we are using for track buffer is 158616");
        track.setStereoVolume(.6f, .6f);
        track.play();
    }

    public void writeSamples(byte[] b) {
        Log.e("", "bytes to be written to track: " + b.length);
        fillBuffer(b);
        track.write(buffer, 0, b.length);
    }

    private void fillBuffer(byte[] samples) {
        Log.e("", "track buffer length=" + buffer.length + " sample length=" + samples.length);
        if (buffer.length < samples.length)
            buffer = new byte[samples.length];
        for (int i = 0; i < samples.length; i++)
            buffer[i] = (byte) (samples[i] * Byte.MAX_VALUE);
    }
}
First, I am not getting any sound at all, only white noise. I just want to play a sound through AudioTrack, and then I will work on the mono and stereo effects.
Please help me.
Thanks in advance.
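A note on the likely cause: AudioTrack consumes raw PCM samples, so writing an MP3 file's bytes into a PCM_16BIT track comes out as noise. Below is a minimal sketch, with illustrative parameters, that plays a generated sine tone through AudioTrack to verify the playback path; an actual MP3 would first need decoding (e.g. via MediaPlayer, or MediaCodec with MediaExtractor).

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class ToneSketch {
    // Plays one second of a 440 Hz sine tone: raw 16-bit PCM, which is
    // what AudioTrack expects in ENCODING_PCM_16BIT mode.
    public static void playTone() {
        int sampleRate = 44100;
        short[] samples = new short[sampleRate]; // 1 second, mono
        for (int i = 0; i < samples.length; i++) {
            samples[i] = (short) (Math.sin(2 * Math.PI * 440 * i / sampleRate) * Short.MAX_VALUE * 0.5);
        }
        int minSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minSize, samples.length * 2), AudioTrack.MODE_STREAM);
        track.play();
        track.write(samples, 0, samples.length); // short[] overload writes 16-bit samples
        track.stop();
        track.release();
    }
}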
I am trying to use an AudioRecord object in Android to record audio data into a byte array and simultaneously perform some analysis on the recorded data, but I am unsure how to do it.
If I use the byte array directly, the application crashes. I need a byte array as input for the analysing thread. I am relatively new to Android development and would appreciate any help on this topic.
Thanks
byte[] data;

public void Record() throws IOException {
    int bufferSize = AudioRecord.getMinBufferSize(RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);
    recorder.startRecording();
    isRecording = true;
    boolean flag = true;
    data = new byte[bufferSize];
    while (isRecording) {
        try {
            int result = recorder.read(data, 0, bufferSize);
            if (flag) {
                Thread analyseThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        theAnalysingFunction();
                    }
                }, "AudioRecorder Thread");
                analyseThread.start();
                flag = false;
            }
            if (AudioRecord.ERROR_INVALID_OPERATION != result) {
            } else if (result == AudioRecord.ERROR_INVALID_OPERATION) {
                Log.e("Recording", "Invalid operation error");
                break;
            } else if (result == AudioRecord.ERROR_BAD_VALUE) {
                Log.e("Recording", "Bad value error");
                break;
            } else if (result == AudioRecord.ERROR) {
                Log.e("Recording", "Unknown error");
                break;
            }
        } catch (Exception e) {
            Log.i("Error", "AudioRecord error");
        }
    }
}

public void theAnalysingFunction() {
    //
    // Analyse the byte array named data
    //
}
This is multithreading: you are trying to analyze a buffer while another thread changes it simultaneously.
About your crash: if you use a byte buffer, make sure you use ENCODING_PCM_8BIT for the encoding.
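To keep the analysis from racing the recorder, one common approach (a sketch, not the poster's code; it assumes theAnalysingFunction is changed to take the chunk as a parameter) is to hand the analysis thread its own copy of each chunk through a queue:

import java.util.Arrays;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Shared queue: the recording loop puts copies in, the analysis thread takes them out.
final BlockingQueue<byte[]> chunks = new LinkedBlockingQueue<byte[]>();

// In the recording loop, after a successful read of `result` bytes:
//     chunks.put(Arrays.copyOf(data, result)); // copy, so the recorder can safely reuse `data`

Thread analyseThread = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (true) {
                byte[] chunk = chunks.take();    // blocks until a chunk is available
                theAnalysingFunction(chunk);     // assumed signature: analyse one chunk
            }
        } catch (InterruptedException e) {
            // exit when the recorder shuts the analysis down
        }
    }
}, "Analysis Thread");
analyseThread.start();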
I am trying to stream live audio from an Axis network security camera over a multipart HTTP stream; the audio is G.711 µ-law, 8 kHz, 8-bit samples, played on an Android phone. It seems like this should be pretty straightforward, and this is the basis of my code: I reused some streaming code that grabbed JPEG frames from an MJPEG stream, and now it grabs 512-byte blocks of audio data and hands them down to the AudioTrack. The audio sounds all garbled and distorted though; am I missing something obvious?
@Override
public void onResume() {
    super.onResume();
    int bufferSize = AudioTrack.getMinBufferSize(8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_8BIT);
    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_8BIT, bufferSize, AudioTrack.MODE_STREAM);
    mAudioTrack.play();
    thread.start();
}
class StreamThread extends Thread {
    public boolean running = true;

    public void run() {
        try {
            MjpegStreamer streamer = MjpegStreamer.read("/axis-cgi/audio/receive.cgi?httptype=multipart");
            while (running) {
                byte[] buf = streamer.readMjpegFrame();
                if (buf != null && mAudioTrack != null) {
                    mAudioTrack.write(buf, 0, buf.length);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
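A likely culprit (an assumption, since it depends on what the camera actually sends): G.711 µ-law is companded 8-bit audio, not linear PCM, so the bytes must be decoded before AudioTrack can play them. A sketch of the standard µ-law-to-16-bit-PCM expansion:

public class UlawSketch {
    // Standard G.711 mu-law decode: invert the bits, then expand the
    // sign / exponent / mantissa fields back to a linear 16-bit sample.
    static short ulawToPcm16(byte ulaw) {
        int u = ~ulaw & 0xFF;
        int sign = u & 0x80;
        int exponent = (u >> 4) & 0x07;
        int mantissa = u & 0x0F;
        int sample = (((mantissa << 3) + 0x84) << exponent) - 0x84; // 0x84 is the mu-law bias
        return (short) (sign != 0 ? -sample : sample);
    }

    static short[] decode(byte[] ulawBuf) {
        short[] pcm = new short[ulawBuf.length];
        for (int i = 0; i < ulawBuf.length; i++) {
            pcm[i] = ulawToPcm16(ulawBuf[i]);
        }
        return pcm;
    }
}

The AudioTrack would then be created with ENCODING_PCM_16BIT and fed with the short[] overload of write(). The multipart boundary and part headers also need to be stripped before decoding; whether readMjpegFrame() does that for audio parts is worth checking.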
I am using AudioRecord to record raw audio for processing.
The audio records entirely without any noise, but when the raw PCM data is played back, it sounds sped up considerably (up to about twice as fast).
I am viewing and playing the PCM data in Audacity, and testing on an actual phone (Samsung Galaxy S5670).
The recording is done at 44100 Hz, 16-bit. Any idea what might cause this?
Following is the recording code:
public class TestApp extends Activity {
    File file;
    OutputStream os;
    BufferedOutputStream bos;
    AudioRecord recorder;
    int iAudioBufferSize;
    boolean bRecording;
    int iBytesRead;

    Thread recordThread = new Thread() {
        @Override
        public void run() {
            byte[] buffer = new byte[iAudioBufferSize];
            int iBufferReadResult;
            iBytesRead = 0;
            while (!interrupted()) {
                iBufferReadResult = recorder.read(buffer, 0, iAudioBufferSize);
                // Android may read fewer bytes than requested.
                if (iAudioBufferSize > iBufferReadResult) {
                    iBufferReadResult = iBufferReadResult +
                            recorder.read(buffer, iBufferReadResult - 1, iAudioBufferSize - iBufferReadResult);
                }
                iBytesRead = iBytesRead + iBufferReadResult;
                for (int i = 0; i < iBufferReadResult; i++) {
                    try {
                        bos.write(buffer[i]);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    };
    @Override
    public void onCreate(Bundle savedInstanceState) {
        // File creation and UI init stuff etc.
        bRecording = false;
        bPlaying = false;
        int iSampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_SYSTEM);
        iAudioBufferSize = AudioRecord.getMinBufferSize(iSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, iSampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, iAudioBufferSize);

        bt_Record.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                if (!bRecording) {
                    try {
                        recorder.startRecording();
                        bRecording = true;
                        recordThread.start();
                    } catch (Exception e) {
                        tv_Error.setText(e.getLocalizedMessage());
                    }
                } else {
                    recorder.stop();
                    bRecording = false;
                    recordThread.interrupt();
                    try {
                        bos.close();
                    } catch (IOException e) {
                    }
                    tv_Hello.setText("Recorded successfully. Total " + iBytesRead + " bytes.");
                }
            }
        });
    }
}
RESOLVED: I posted this after struggling with it for a day or two but, ironically, found the solution soon after posting. The BufferedOutputStream write was taking too much time in the for loop, so the stream was skipping samples. I changed it to a block write, removing the for loop, and it works perfectly.
The audio skipping was caused by the delay in writing to the buffer.
The solution is to replace this for loop:
for (int i = 0; i < iBufferReadResult; i++) {
    try {
        bos.write(buffer[i]);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
with a single block write, like so:
bos.write(buffer, 0, iBufferReadResult);
I had used code from a book which worked, I guess, for lower sample rates and buffer updates.
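For completeness, a sketch of the corrected read loop with the block write in place (same variable names as above):

byte[] buffer = new byte[iAudioBufferSize];
iBytesRead = 0;
while (!interrupted()) {
    int iBufferReadResult = recorder.read(buffer, 0, iAudioBufferSize);
    if (iBufferReadResult > 0) {
        iBytesRead += iBufferReadResult;
        try {
            // One block write per read keeps up with the recorder,
            // where the per-byte loop could not.
            bos.write(buffer, 0, iBufferReadResult);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}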