I have recorded voice with Android's AudioRecord and I would like to convert it to Ogg Vorbis, since it is patent-free. I have tried the vorbis-java beta, but it doesn't seem to work, or I'm making some mistake.
Here is my code:
int frequency = 44100;
int channel = AudioFormat.CHANNEL_IN_STEREO;
int mAudioSource = MediaRecorder.AudioSource.MIC;
int mAudioEncoder = AudioFormat.ENCODING_PCM_16BIT;

try {
    final File outputFile = new File(mOutputPath);
    DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(outputFile)));
    int bufferSize = AudioRecord.getMinBufferSize(frequency, channel, mAudioEncoder);
    AudioRecord audioRecord = new AudioRecord(mAudioSource, frequency, channel, mAudioEncoder, bufferSize);
    short[] buffer = new short[bufferSize];
    audioRecord.startRecording();
    while (isRecordStart) {
        int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
        for (int i = 0; i < bufferReadResult; i++) {
            dos.writeShort(buffer[i]);
        }
    }
    audioRecord.stop();
    dos.close();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
I save it to a file with a .wav extension and use the vorbis-java encoder example on it, but the output is just noise (zzz...).
How do I encode this to Ogg Vorbis on Android?
I think I read this question a few weeks ago and was also super frustrated. I ended up writing the NDK wrapper needed to use Xiph.org's libraries. The only catch is that, in order to make it run well, I had to enable floating-point instructions. Emulators don't have floating point, so it'll crash the emulator. Run it on pretty much any phone, though, and you'll be good to go. It's designed to emulate a FileInputStream and FileOutputStream for interfacing with Vorbis files.
https://github.com/nwertzberger/libogg-vorbis-android
You seem to be writing raw audio data to a file rather than actual WAV format. A WAV file has a header, not just audio data. Also note that DataOutputStream.writeShort() writes big-endian values, whereas WAV expects little-endian PCM, which by itself is enough to turn the recording into noise.
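For reference, a minimal sketch of a 44-byte PCM WAV header writer (writeWavHeader is my own name, not an Android API; it assumes 16-bit little-endian PCM):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Writes a canonical 44-byte WAV header for 16-bit PCM data.
static void writeWavHeader(OutputStream out, int sampleRate,
                           int channels, int dataLength) throws IOException {
    int byteRate = sampleRate * channels * 2;       // 2 bytes per sample
    ByteBuffer h = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    h.put("RIFF".getBytes());
    h.putInt(36 + dataLength);                      // overall chunk size
    h.put("WAVE".getBytes());
    h.put("fmt ".getBytes());
    h.putInt(16);                                   // fmt chunk size
    h.putShort((short) 1);                          // 1 = uncompressed PCM
    h.putShort((short) channels);
    h.putInt(sampleRate);
    h.putInt(byteRate);
    h.putShort((short) (channels * 2));             // block align
    h.putShort((short) 16);                         // bits per sample
    h.put("data".getBytes());
    h.putInt(dataLength);                           // PCM payload size in bytes
    out.write(h.array());
}

Write this header first, then write the samples in little-endian order (for example via a little-endian ByteBuffer rather than DataOutputStream).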
Note: don't use vorbis-java; instead, compile libogg and libvorbis from the sources at http://www.xiph.org/downloads/
Use the Android NDK to compile them for embedding in your APK file.
Then you can call the native code from your app to encode the audio data.
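The Java side of such an NDK binding could look roughly like this (the library name and the native method are illustrative assumptions, not an existing API):

// Hypothetical JNI facade; assumes libogg/libvorbis were compiled into a
// shared library named "vorbis-jni" that exposes a matching C function.
public class VorbisEncoder {
    static {
        System.loadLibrary("vorbis-jni"); // loads libvorbis-jni.so from the APK
    }

    // Encodes a raw 16-bit PCM file to Ogg Vorbis; returns 0 on success.
    public static native int encodePcmToOgg(String pcmPath, String oggPath,
                                            int sampleRate, int channels);
}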
Related
I'm working on an audio recorder app using AudioRecord, not MediaRecorder.
I'm using this code to record:
private void startRecord() {
    File file = new File(Environment.getExternalStorageDirectory(), "test.pcm");
    try {
        file.createNewFile();
        OutputStream outputStream = new FileOutputStream(file);
        BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream);
        DataOutputStream dataOutputStream = new DataOutputStream(bufferedOutputStream);
        int minBufferSize = AudioRecord.getMinBufferSize(8000,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        short[] audioData = new short[minBufferSize];
        AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                8000,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize);
        audioRecord.startRecording();
        while (recording) {
            int numberOfShort = audioRecord.read(audioData, 0, minBufferSize);
            for (int i = 0; i < numberOfShort; i++) {
                dataOutputStream.writeShort(audioData[i]);
            }
        }
        audioRecord.stop();
        dataOutputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
It works fine, but the file is large: recording for about 1 minute produced a file of about 1.2 MB.
I tried MediaRecorder using this code:
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mMediaRecorder.setOutputFile(fileName);
mMediaRecorder.prepare();
mMediaRecorder.start();
It's great: it recorded 5 minutes with a file size of about 500 KB or less. But I have to use AudioRecord, because I need to process the audio byte by byte.
Can I get the same file size with AudioRecord as I do with MediaRecorder?
Thank you very much.
With the AudioRecord class you actually get raw, uncompressed data from the sound source in the buffer you work with, whereas the MediaRecorder class provides only basic functionality for recording media from the available sources, without direct access to the data buffers.
I suggest you use AudioRecord to capture the audio, apply your byte-by-byte processing to the data in the AudioRecord buffer, and then compress the modified data as you write it to a file. As far as I remember, there is no ready-made audio-compression functionality in the Android API, so you should use a third-party library (for example LAME) or write the compression yourself. You can check these sources for recording MP3 audio with LAME: https://github.com/yhirano/Mp3VoiceRecorderSampleForAndroid
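A rough sketch of that flow, reusing the names from your recording loop (Mp3Encoder.encode() and processSample() are hypothetical placeholders for your chosen encoder binding and your per-byte task; they are not Android APIs):

short[] pcm = new short[minBufferSize];
byte[] mp3Buf = new byte[minBufferSize * 2];
audioRecord.startRecording();
while (recording) {
    int n = audioRecord.read(pcm, 0, pcm.length);    // raw PCM samples
    for (int i = 0; i < n; i++) {
        pcm[i] = processSample(pcm[i]);              // your per-sample task
    }
    int encoded = Mp3Encoder.encode(pcm, n, mp3Buf); // hypothetical native call
    outputStream.write(mp3Buf, 0, encoded);          // compressed data to file
}
audioRecord.stop();
outputStream.close();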
Is it possible to play two sound (MP3) files at the same time? I have tried using two different MediaPlayer objects:
MediaPlayer mediaPlayer;
MediaPlayer mediaPlayer2;
to play the sounds, but that does not work. I cannot use SoundPool either, as the sound files in use are around 10 MB each (SoundPool doesn't work well with sound files larger than 3 MB).
Here is some code to get familiar with my situation-
@Override
public void onResume() {
    super.onResume();
    if (mediaPlayer == null) {
        mediaPlayer = MediaPlayer.create(getActivity().getApplicationContext(), R.raw.song1);
    }
    if (mediaPlayer2 == null) {
        mediaPlayer2 = MediaPlayer.create(getActivity().getApplicationContext(), R.raw.song2);
    }
}

private void startPlaying() {
    mediaPlayer.setLooping(true);
    mediaPlayer.start();
    mediaPlayer2.start();
}
Any suggestions? Is there some way to make this two-MediaPlayer approach work? If not, what other options are there? Code would be helpful!
Playing two audio files simultaneously is definitely an issue, so I thought another way to look at the problem would be to combine the two audio files programmatically into one and play that. That turned out to be simple to implement with WAV files (MP3 and other compressed formats would need to be decompressed first). Anyway, here's how I did it:
InputStream is = getResources().openRawResource(R.raw.emokylotheme); // file 1
byte[] bytesTemp2 = fullyReadFileToBytes(new File(
        Environment.getExternalStorageDirectory().getAbsolutePath() +
        "/Kylo Ren/" + filename + "_morphed.wav")); // file 2
byte[] sample2 = convertInputStreamToByteArray(is);
byte[] temp2 = bytesTemp2.clone();
RandomAccessFile randomAccessFile2 = new RandomAccessFile(new File(
        Environment.getExternalStorageDirectory().getAbsolutePath() +
        "/Kylo Ren/" + filename + "_morphed.wav"), "rw");
// Seek past the 44-byte header of the WAV format
randomAccessFile2.seek(44);
for (int n = 0; n < bytesTemp2.length; n++) {
    bytesTemp2[n] = (byte) (temp2[n] + sample2[n]);
}
randomAccessFile2.write(bytesTemp2);
randomAccessFile2.close();
And here are the support functions:
public byte[] convertInputStreamToByteArray(InputStream inputStream) {
    byte[] bytes = null;
    try {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] data = new byte[1024];
        int count;
        while ((count = inputStream.read(data)) != -1) {
            bos.write(data, 0, count);
        }
        bos.flush();
        bos.close();
        inputStream.close();
        bytes = bos.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return bytes;
}
byte[] fullyReadFileToBytes(File f) throws IOException {
    int size = (int) f.length();
    byte[] bytes = new byte[size];
    byte[] tmpBuff = new byte[size];
    FileInputStream fis = new FileInputStream(f);
    try {
        int read = fis.read(bytes, 0, size);
        if (read < size) {
            int remain = size - read;
            while (remain > 0) {
                read = fis.read(tmpBuff, 0, remain);
                System.arraycopy(tmpBuff, 0, bytes, size - remain, read);
                remain -= read;
            }
        }
    } finally {
        fis.close();
    }
    return bytes;
}
In essence, the code gets the bytes of the two audio files (one inside the app in R.raw, the other in the external storage directory), sums them, and writes the new bytes to another file.
The issue with this code is that it generates some background noise. It isn't a lot, but I believe the summing of the bytes at certain points (maybe at the extremes?) leads to the noise. If someone knows how to fix this, please edit the answer and let me know.
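A likely culprit is that the addition is done byte by byte: each 16-bit sample spans two bytes, so byte-wise sums corrupt the high byte and let overflows wrap around. A minimal sketch of sample-wise mixing with clamping (assuming both buffers hold 16-bit little-endian PCM) might look like:

// Mix two 16-bit little-endian PCM buffers sample by sample, clamping the
// sum to the short range instead of letting it wrap around.
static byte[] mixPcm16(byte[] a, byte[] b) {
    int len = Math.min(a.length, b.length) & ~1;            // whole samples only
    byte[] out = new byte[len];
    for (int i = 0; i < len; i += 2) {
        int sa = (short) ((a[i] & 0xFF) | (a[i + 1] << 8)); // decode sample
        int sb = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
        int sum = sa + sb;
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;   // clamp positive peaks
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;   // clamp negative peaks
        out[i] = (byte) (sum & 0xFF);                       // low byte
        out[i + 1] = (byte) ((sum >> 8) & 0xFF);            // high byte
    }
    return out;
}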
P.S. This was for an open-source voice changer app with dramatic background noise effects called Kylo Ren Voice Changer (https://github.com/advaitsaravade/Kylo-Ren-Voice-Changer).
I have used two instances of MediaPlayer in a Service. My code is similar to yours. I had some problems but finally solved them. Please check my answer here: Unable to play two MediaPlayer at same time in Nexus 5
If you still have problems, please post your full code and the logcat error.
You can also try creating two fragments and having each play a sound.
I had another problem: if I played an mp3 file, then clicked the back button and started the activity again, I could play the same file a second time, simultaneously. You can read Android Mediaplayer multiple instances when activity resumes play sound in the same time
In short, I'm looking for a way to get the byte stream from the camera while recording video.
The aim is to record continuously while saving certain portions of the current recording, without stopping the actual recording process to access the output file. Is this even possible, or will I need to actually stop the recording and save it before it is playable?
I've seen projects and open-source libraries that allow live streaming from the camera to a server via a local socket and the ParcelFileDescriptor class, so I assume (maybe incorrectly) that the recorder's byte stream must be accessible somehow.
Any suggestions or help would be greatly appreciated.
Set the output file to a FileDescriptor:
mRecorder.setOutputFile(getStreamFd());
Then use this function:
private FileDescriptor getStreamFd() {
    ParcelFileDescriptor[] pipe = null;
    try {
        pipe = ParcelFileDescriptor.createPipe();
        new TransferThread(new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]),
                new FileOutputStream(getOutputFile())).start();
    } catch (IOException e) {
        Log.e(getClass().getSimpleName(), "Exception opening pipe", e);
    }
    return pipe[1].getFileDescriptor();
}

private File getOutputFile() {
    return new File(Environment.getExternalStorageDirectory().getPath() + "/YourDirectory/filename");
}
New thread code:
static class TransferThread extends Thread {
    InputStream in;
    FileOutputStream out;

    TransferThread(InputStream in, FileOutputStream out) {
        this.in = in;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[8192];
        int len;
        try {
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            in.close();
            out.flush();
            out.getFD().sync();
            out.close();
        } catch (IOException e) {
            Log.e(getClass().getSimpleName(),
                    "Exception transferring file", e);
        }
    }
}
Don't forget to add permissions to your manifest file:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
I had a similar problem and wanted to record H.264 into an MP4 file while accessing the H.264 NAL units from the camera byte stream (to redirect them to libRTMP). The following examples helped a lot (they require at least Android 4.3):
http://bigflake.com/mediacodec/ <- the CameraToMpegTest.java and "Android Breakout game recorder patch" examples
Basically, Android's MediaCodec class provides low-level access to the device's encoders/decoders. Take a look at the drainEncoder() function in the examples above:
the video data is sent to MediaMuxer to create an output file
you can easily access the H264 NAL units from the encodedData ByteBuffer and process them the way you want
Example:
int old_pos = encodedData.position();
encodedData.position(0);
byte[] encoded_array = new byte[encodedData.remaining()];
encodedData.get(encoded_array);   // copy the NAL unit bytes out of the buffer
encodedData.position(old_pos);    // restore the position for the muxer
I'm a beginner in Android programming!
My hardware is a Samsung Galaxy Young GT-S5360.
With my app I want to record something using the mic source.
If I record a 1000 Hz sine or other sound samples, I always get two transients or clicking sounds at the beginning of the sample. After 0.200 s the sample looks OK to me.
How can I eliminate these transients?
Here's my code, which I found on the web:
private void startrec() {
    File file = new File(Environment.getExternalStorageDirectory(), "test.pcm");
    // getMinBufferSize() is a static method, so call it on the class
    int minBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    short[] audioData = new short[minBufferSize];
    try {
        file.createNewFile();
        OutputStream outputStream = new FileOutputStream(file);
        BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream);
        DataOutputStream dataOutputStream = new DataOutputStream(bufferedOutputStream);
        AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize);
        audioRecord.startRecording();
        while (isRecording) {
            int numberOfShort = audioRecord.read(audioData, 0, minBufferSize);
            for (int i = 0; i < numberOfShort; i++) {
                dataOutputStream.writeShort(audioData[i]);
            }
        }
        audioRecord.stop();
        audioRecord.release();
        dataOutputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The startrec() function is called with a button click and stopped with another button.
Maybe the button click sound causes the transients at the beginning, because it also uses the same source (MIC).
It could also be a setup problem, but I don't know yet.
I also want to record at 44100 samples per second.
I have tried different sample rates, but the problem stays the same.
I hope someone can help me and can give me some advice!
Have a nice day!
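One workaround worth trying (an educated guess, not something verified on this device): discard the first couple of hundred milliseconds after startRecording() before you begin writing, so any start-up transients never reach the file:

// Skip roughly the first 200 ms of captured audio (mono, 16-bit, 44.1 kHz);
// the input path often needs a moment to settle after startRecording().
int samplesToSkip = 44100 / 5; // 200 ms worth of samples
while (samplesToSkip > 0) {
    int n = audioRecord.read(audioData, 0, Math.min(audioData.length, samplesToSkip));
    if (n <= 0) break;
    samplesToSkip -= n;
}
// ...then continue with the normal read/write loop.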
I am trying to play a larger WAV audio file (>20 MB) using the following AudioTrack code on my Android 1.6 HTC device, which has very little memory. The device crashes as soon as it starts reading, writing, and playing, yet the same code works fine and plays smaller WAV files (10 KB, 20 KB, etc.) very well.
P.S.: I need to play a PCM (.wav) buffer, which is the reason I use AudioTrack here.
Given that my device has little memory, how can I read bigger audio files piece by piece and play the sound, to avoid crashing due to memory constraints?
private void AudioTrackPlayPCM() throws IOException {
    String filePath = "/sdcard/myWav.wav"; // 8 kb file
    byte[] byteData = null;
    File file = new File(filePath);
    byteData = new byte[(int) file.length()];
    FileInputStream in = null;
    try {
        in = new FileInputStream(file);
        in.read(byteData);
        in.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    int intSize = android.media.AudioTrack.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_8BIT);
    AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_8BIT, intSize, AudioTrack.MODE_STREAM);
    at.play();
    at.write(byteData, 0, byteData.length);
    at.stop();
    at.release();
}
Could someone guide me on how to get this AudioTrack code to play bigger WAV files?
Your intSize is the minimum buffer size needed to initialize the AudioTrack; try using a multiple of it to create a bigger buffer:
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_8BIT, intSize * 10,
        AudioTrack.MODE_STREAM);
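Also, since the track is created with AudioTrack.MODE_STREAM, you can avoid loading the whole file into memory and feed it in chunks instead; a rough sketch, assuming the same 8 kHz mono 8-bit format as above:

// Stream the file in small chunks; write() blocks until the AudioTrack has
// room, so memory use stays bounded regardless of the file size.
FileInputStream in = new FileInputStream("/sdcard/myWav.wav");
byte[] chunk = new byte[intSize];
at.play();
int read;
while ((read = in.read(chunk)) != -1) {
    at.write(chunk, 0, read);
}
in.close();
at.stop();
at.release();
// Note: a real WAV file starts with a 44-byte header you may want to skip first.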