I want to do some FSK modulation over the audio port. The problem is that my sine wave isn't very good: it is distorted at regular intervals. I used the original code from http://marblemice.blogspot.com/2010/04/generate-and-play-tone-in-android.html with the further modifications from Playing an arbitrary tone with Android and https://market.android.com/details?id=re.serialout&feature=search_result .
So where is the failure? What am I doing wrong?
private static int bitRate = 300;
private static int sampleRate = 48000;
private static int freq1 = 600;

public static void loopOnes() {
    playque.add(UARTHigh());
    athread.interrupt();
}

private static byte[] UARTHigh() {
    int numSamples = sampleRate / bitRate;
    double sample[] = new double[numSamples];
    byte[] buffer = new byte[numSamples * 2];
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i * freq1 / sampleRate);
    }
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) (dVal * 32767);
        // in 16 bit wav PCM, first byte is the low order byte
        buffer[idx++] = (byte) (val & 0x00ff);
        buffer[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    return buffer;
}
private static void playSound() {
    active = true;
    while (active) {
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            while (playque.isEmpty() == false) {
                if (atrk != null) {
                    if (generatedSnd != null) {
                        // let the previous sample finish playing first
                        // SystemClock.sleep(xx): xx could be tuned
                        while (atrk.getPlaybackHeadPosition() < generatedSnd.length)
                            SystemClock.sleep(50); // let the existing sample finish first; this could probably be set to a smarter number using the information above
                    }
                    atrk.release();
                }
                UpdateParameters(); // might as well do it at every iteration, it's cheap
                generatedSnd = playque.poll();
                length = generatedSnd.length;
                if (minbufsize < length)
                    minbufsize = length;
                atrk = new AudioTrack(AudioManager.STREAM_MUSIC,
                        sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minbufsize,
                        AudioTrack.MODE_STATIC);
                atrk.setStereoVolume(1, 1);
                atrk.write(generatedSnd, 0, length);
                atrk.play();
            }
            // playque is empty => send the stop bit!
            // set loop points
            int setLoopError = atrk.setLoopPoints(0, length, -1);
            atrk.play();
        }
    }
}
So the answer is to switch from MODE_STATIC to MODE_STREAM and not to use loop points. A busy loop in a new low-priority thread writes the buffers to the track.
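For reference, a minimal sketch of that streaming approach, reusing the question's fields (playque, sampleRate, active); this is an illustration of the idea, not the poster's exact code:

private static void playSoundStreaming() {
    int minBufSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufSize, AudioTrack.MODE_STREAM);
    track.play();
    Thread writer = new Thread(new Runnable() {
        @Override
        public void run() {
            while (active) {
                byte[] chunk = playque.poll(); // the busy loop described above
                if (chunk != null) {
                    // write() blocks until there is room in the track's buffer,
                    // so consecutive chunks play back gap-free
                    track.write(chunk, 0, chunk.length);
                }
            }
            track.release();
        }
    });
    writer.setPriority(Thread.MIN_PRIORITY);
    writer.start();
}

Because write() paces the loop itself, no playback-head polling or setLoopPoints() bookkeeping is needed.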
I am sorry if this is a trivial question, but I am new to Android and have spent a few days searching without finding an answer that satisfies me.
I want to record an audio clip of approximately 3 seconds every 30 seconds using an Android phone. Every clip is sent to my PC (using the TCP/IP protocol) for further processing.
Here is the code on the Android side (I refer to the code of @TechEnd in this question: Android AudioRecord example):
private final int AUD_RECODER_SAMPLERATE = 44100; // 44.1 kHz
private final int AUD_RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private final int AUD_RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private final int AUD_RECORDER_BUFFER_NUM_ELEMENTS = 131072; // ~~ 1.486 second ???
private final int AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT = 2;
private AudioRecord audioRecorder = null;
private boolean isAudioRecording = false;
private Runnable runnable = null;
private Handler handler = null;
private final int AUD_RECORDER_RECORDING_PERIOD = 30000; // one fire every 30 seconds
private byte[] bData = new byte[AUD_RECORDER_BUFFER_NUM_ELEMENTS*AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT];
public void start() {
    audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, AUD_RECODER_SAMPLERATE, AUD_RECORDER_CHANNELS, AUD_RECORDER_AUDIO_ENCODING, AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT);
    audioRecorder.startRecording();
    isAudioRecording = true;
    handler = new Handler();
    runnable = new Runnable() {
        @Override
        public void run() {
            if (isAudioRecording) {
                int nElementRead = audioRecorder.read(bData, 0, bData.length);
                net_send(bData, 0, nElementRead);
            }
            handler.postDelayed(this, AUD_RECORDER_RECORDING_PERIOD);
        }
    };
    handler.postDelayed(runnable, AUD_RECORDER_RECORDING_PERIOD);
}
public void stop() {
    isAudioRecording = false;
    if (audioRecorder != null) {
        audioRecorder.stop();
        audioRecorder.release();
        audioRecorder = null;
    }
    handler.removeCallbacks(runnable);
}
public void net_send(byte[] data, int offset, int nbytes) {
    try {
        dataOutputStream.writeInt(nbytes);
        dataOutputStream.write(data, offset, nbytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And on the PC side (a server written in C), after receiving a record (I checked: each one is 262144 bytes), I first write the byte array to a binary file (with extension .raw) and open it with Free Audio Editor (http://www.free-audio-editor.com/), obtaining a result with a duration of 1.486 seconds:
https://www.dropbox.com/s/xzml51jzvagl6dy/aud1.PNG?dl=0
And then I convert every two consecutive bytes into a 2-byte integer using this function
short bytes2short(const char num_buf[2])
{
    return (((num_buf[1] & 0xFF) << 8) |
            (num_buf[0] & 0xFF));
}
and write them to a file (length is 131072 bytes) and plot the normalized values with Excel; a similar graph is obtained.
As I calculate it, the number of bytes recorded in one second is 44100 (samples/sec) * 1 (sec) * 2 (bytes/sample/channel) * 1 (channel) = 88200 bytes.
So with my buffer of length 131072*2 bytes, the corresponding duration should be 262144/88200 = 2.97 seconds. But the result I obtain is only half of that. I tried three different devices running Android OS versions 2.3.3, 2.3.4 and 4.3 and obtained the same result, so the problem must be in my own code.
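(Restating that arithmetic as code, using only the formats given in the question:)

int sampleRate = 44100;      // samples per second
int bytesPerSample = 2;      // ENCODING_PCM_16BIT
int channels = 1;            // CHANNEL_IN_MONO
int bytesPerSecond = sampleRate * bytesPerSample * channels; // 88200
double expectedSeconds = 262144.0 / bytesPerSecond;          // ~2.97 s, not 1.486 s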
Could anyone tell me where the problem is, in my calculation or in my code? Is my understanding correct?
Any comments or suggestions would be appreciated.
I am currently developing an Android application that has to record the microphone input as a PCM stream.
Whenever I record something, I experience some strange stutter and I can't find a solution to it.
Here's my code:
In my MainActivity I have an AsyncTask for the microphone input:
ArrayList<byte[]> mBufferList = new ArrayList<byte[]>();

@Override
protected String doInBackground(String... params) {
    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
    mMicrophone = new Microphone();
    mMicrophone.init();
    byte[] buffer;
    while (mRecord) {
        try {
            mMicrophone.record();
            buffer = mMicrophone.getBuffer();
            mBufferList.add(buffer);
        } catch (IOException e) {
            // ignore and keep recording
        }
    }
    return null;
}
In my Microphone class I initialize the AudioRecorder:
public void init() {
    Log.d("DEBUG", "Microphone: Recording started");
    mBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    mRecorder = new AudioRecord(AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
    mRecorder.startRecording();
    mBuffer = new short[mBufferSize];
}
The record method:
public void record() throws IOException {
    mRecorder.read(mBuffer, 0, mBufferSize);
}
Short[] to Byte[]:
public byte[] shortToBytes(short[] sData) {
    int shortArrsize = sData.length;
    byte[] bytes = new byte[shortArrsize * 2];
    for (int i = 0; i < shortArrsize; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);
        bytes[(i * 2) + 1] = (byte) (sData[i] >> 8);
        sData[i] = 0;
    }
    return bytes;
}
Method to retrieve the buffer:
public byte[] getBuffer() {
    byte[] buffer = shortToBytes(mBuffer);
    return buffer;
}
I have uploaded a wav file which demonstrates the stutter effect; I'm saying 'One':
Wav-File
I already tried changing the sample rates, buffer sizes et cetera, but to no avail.
Any help is very much appreciated! It would be great if anyone could help me out!
Please note: this error is not caused by the way I replay the PCM stream, since I have tested it on an Android device and even sent the raw data to a server to convert the file to a wav there.
After hours and hours of desperately searching for a solution, I have finally found the error.
I accidentally created my short buffer in the Microphone class like this:
mBuffer = new short[mBufferSize];
The buffer size is in bytes, though, so of course I have to use mBufferSize/2:
mBuffer = new short[mBufferSize/2];
I will keep my question online in case anyone is interested in the code and/or has a similar problem.
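For anyone hitting the same issue, the key fact is that AudioRecord.getMinBufferSize() returns a size in bytes, while a short holds two bytes. A minimal sketch (recorder stands for an already-initialized AudioRecord):

int bufferSizeInBytes = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
short[] buffer = new short[bufferSizeInBytes / 2]; // element count, not bytes
int shortsRead = recorder.read(buffer, 0, buffer.length); // read() counts shorts here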
I am reading values from a wav file, selecting only some of those values, and writing them into another wav file (in order to remove silence periods from the wav file). The problem is that when I create this new wav file, it has background noise which is not present in the original. Here is the part of the code which does the file writing:
private void writeToFile(String filePath) {
    short nChannels = 1;
    int sRate = 16000;
    short bSamples = 16;
    audioShorts = new short[size];
    int nSamples = 0;
    for (int i = 0; i < size - 1; i++) {
        //audioShorts[i] = Short.reverseBytes((short)(zff[i]*0x8000));
        if (slope[i] >= slopeThreshold) { // voice region -- should be written to output
            audioShorts[nSamples] = Short.reverseBytes((short) (a[i] * 0x8000));
            audioShorts[nSamples + 1] = Short.reverseBytes((short) (a[i + 1] * 0x8000));
            nSamples += 2;
            i++;
        }
        /*else
            audioShorts[i] = 0;*/
    }
    finalShorts = new short[nSamples];
    for (int i = 0; i < nSamples; i++) {
        finalShorts[i] = audioShorts[i];
    }
    data = new byte[finalShorts.length * 2];
    ByteBuffer buffer = ByteBuffer.wrap(data);
    ShortBuffer sbuf = buffer.asShortBuffer();
    sbuf.put(finalShorts);
    data = buffer.array();
    Log.d("Data length------------------------------", Integer.toString(data.length));

    RandomAccessFile randomAccessWriter;
    try {
        randomAccessWriter = new RandomAccessFile(filePath, "rw");
        randomAccessWriter.setLength(0); // set file length to 0, to prevent unexpected behaviour in case the file already existed
        randomAccessWriter.writeBytes("RIFF");
        randomAccessWriter.writeInt(Integer.reverseBytes(36 + data.length)); // file length
        randomAccessWriter.writeBytes("WAVE");
        randomAccessWriter.writeBytes("fmt ");
        randomAccessWriter.writeInt(Integer.reverseBytes(16)); // sub-chunk size, 16 for PCM
        randomAccessWriter.writeShort(Short.reverseBytes((short) 1)); // AudioFormat, 1 for PCM
        randomAccessWriter.writeShort(Short.reverseBytes(nChannels)); // number of channels, 1 for mono, 2 for stereo
        randomAccessWriter.writeInt(Integer.reverseBytes(sRate)); // sample rate
        randomAccessWriter.writeInt(Integer.reverseBytes(sRate * bSamples * nChannels / 8)); // byte rate: SampleRate*NumberOfChannels*BitsPerSample/8
        randomAccessWriter.writeShort(Short.reverseBytes((short) (nChannels * bSamples / 8))); // block align: NumberOfChannels*BitsPerSample/8
        randomAccessWriter.writeShort(Short.reverseBytes(bSamples)); // bits per sample
        randomAccessWriter.writeBytes("data");
        randomAccessWriter.writeInt(Integer.reverseBytes(data.length)); // data chunk size in bytes (not the number of samples)
        randomAccessWriter.write(data);
        randomAccessWriter.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Your code snippet leaves some details out (like what slope and slopeThreshold are), so treat this answer as a suggestion only.
In general, this kind of chopping of audio data will introduce noise. It depends on where the cut happens. If the last sample before a cut is identical to the first one after it, you're safe, but otherwise you will introduce a click.
If the cuts are infrequent, you will be hearing individual clicks but if the chopping happens often enough, it might sound like continuous noise.
To do this without clicks, you would need to add a short fade out and fade in around each cut.
EDIT: try removing the "if (slope[i] >= slopeThreshold)" condition and see if the noise disappears. If so, the noise is very likely a result of what I described. Otherwise, you probably have some error with the various byte conversions.
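To make the fade suggestion concrete, here is a minimal sketch, assuming mono 16-bit samples in a short[]; fadeAroundCut, cutIndex and fadeLength are hypothetical names (at the question's 16 kHz rate, a 5 ms fade is about 80 samples):

void fadeAroundCut(short[] samples, int cutIndex, int fadeLength) {
    for (int i = 0; i < fadeLength; i++) {
        // gain rises linearly from 0 at the cut to 1 at fadeLength samples away
        float gain = (float) i / fadeLength;
        int before = cutIndex - 1 - i; // fade out the tail of the preceding region
        int after = cutIndex + i;      // fade in the head of the following region
        if (before >= 0)
            samples[before] = (short) (samples[before] * gain);
        if (after < samples.length)
            samples[after] = (short) (samples[after] * gain);
    }
}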
Instead of:
data = new byte[finalShorts.length*2];
ByteBuffer buffer = ByteBuffer.wrap(data);
ShortBuffer sbuf = buffer.asShortBuffer();
sbuf.put(finalShorts);
data = buffer.array();
wouldn't it be necessary to convert from short[] to byte[]?
data = shortToBytes(finalShorts);
public byte[] shortToBytes(short[] input) {
    int short_index, byte_index;
    int iterations = input.length;
    byte[] buffer = new byte[input.length * 2];
    short_index = byte_index = 0;
    while (short_index != iterations) {
        buffer[byte_index] = (byte) (input[short_index] & 0x00FF);
        buffer[byte_index + 1] = (byte) ((input[short_index] & 0xFF00) >> 8);
        ++short_index;
        byte_index += 2;
    }
    return buffer;
}
This works for me.
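One caveat about the java.nio version in the question: ByteBuffer.wrap() defaults to big-endian order, while WAV PCM data is little-endian (which is also why the manual loop above, writing the low byte first, works). If the ByteBuffer route is kept, the order can be set explicitly, and the Short.reverseBytes() calls on the sample data then become unnecessary:

ByteBuffer buffer = ByteBuffer.allocate(finalShorts.length * 2);
buffer.order(ByteOrder.LITTLE_ENDIAN); // java.nio.ByteOrder; WAV data is little-endian
buffer.asShortBuffer().put(finalShorts);
data = buffer.array();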
I want to send an audio stream from a PC (a C++ application, using the FMOD API to decode audio data and send it via a UDP socket) to an Android device. The communication already works and I can hear "sound" (100 ms of sound followed by 900 ms of silence, alternating) on the Android device.
I don't know why the sound is stuttering; on the PC the same audio stream plays fine in good quality. I think the problem is on the Android side.
Here is the code:
DatagramSocket sock = new DatagramSocket(12345);
byte[] bSockBuffer = new byte[1024];
byte[] bRecvBufTmp;
int iAudioBufSize, iCurAudioBufPos = 0;

sock.setReceiveBufferSize(bSockBuffer.length);

// initialize the audio stream:
iAudioBufSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, iAudioBufSize, AudioTrack.MODE_STREAM);
track.play();

while (true) {
    DatagramPacket pack = new DatagramPacket(bSockBuffer, bSockBuffer.length);
    // receive a packet:
    sock.receive(pack);
    track.write(pack.getData(), 0, pack.getLength());
}
I'm sure the 'AudioTrack' object is set up correctly; the settings match those in my C++ application.
Another attempt was pre-buffering the received socket data in a temporary 'byte[]' variable and writing it to the AudioTrack object once the buffer size 'iAudioBufSize' was reached.
This did not help.
Any ideas?
Thanks
[EDIT]
Code of the C++ application, based on the "manualdecode" sample of the FMOD API examples:
FMOD_RESULT F_CALLBACK pcmreadcallback(FMOD_SOUND *sound, void *data, unsigned int datalen)
{
    CCtrlSocket *cClientTmp = /* Obtaining target client sock here */;
    FMOD_RESULT result;
    unsigned int read, uSentTmp, uSizeTmp;

    EnterCriticalSection(&decodecrit);
    if (!decodesound)
    {
        // release the lock on the early-exit path as well
        LeaveCriticalSection(&decodecrit);
        return (FMOD_ERR_FILE_EOF);
    }
    result = decodesound->readData(data, datalen, &read);
    if (result == FMOD_ERR_FILE_EOF)
    {
        // handle looping:
        decodesound->seekData(0);
        datalen -= read;
        result = decodesound->readData((char*) data + read, datalen, &read);
    }

    // split the package into multiple parts:
    uSentTmp = 0;
    do
    {
        uSizeTmp = (read - uSentTmp);
        if (uSizeTmp > 1024)
            uSizeTmp = 1024;
        uSentTmp += cClientTmp->SendAudioData((char*) data + uSentTmp, uSizeTmp);
    } while (uSentTmp < read);
    LeaveCriticalSection(&decodecrit);

    return (FMOD_OK);
}
I've solved this problem.
The culprit was a log-file entry that cost a lot of time and created a lag :(
Now I can hear the streamed music on my Android client, but there are still some lags. I've experimented with a LOT of values for the socket and AudioTrack buffers.
I have compared the number of sent and received bytes: in 20 seconds, sending 9170000 bytes of data results in receiving 8120000 bytes on the Android device. At first the stream plays fast for 3 seconds (which means the buffer is full?). After 30 seconds the stream lags (which means the buffer is empty?).
In general the music quality is very good, but there is a sizzling noise all the time (which indicates lost socket packets?).
My 'PlaybackStart()' function has changed; I'm not using a PCM read callback anymore:
FMOD_RESULT CAudioStream::PlaybackStart()
{
    CCtrlSocket *cClientTmp;
    unsigned int read, uSentTmp, uSizeTmp;
    FMOD_RESULT result;

    result = system->createStream("C:\\test.mp3", FMOD_OPENONLY | FMOD_ACCURATETIME, 0, &sound);
    if (result != FMOD_OK)
        return (result);

    int iChannels, iBits;
    FMOD_SOUND_FORMAT fFormat;
    FMOD_SOUND_TYPE fType;
    result = sound->getFormat(&fType, &fFormat, &iChannels, &iBits);
    if (result != FMOD_OK)
        return (result);

    void *data;
    unsigned int length = 0;
    int iSampleSec = 1; // playtime
    int iSampleSize = (44100 * 2 * sizeof(signed short) * iSampleSec);
    int iSleep = 6; // sleep after sending a package
    DWORD dSleepTotal;

    result = sound->getLength(&length, FMOD_TIMEUNIT_PCMBYTES);
    if (result != FMOD_OK)
        return (result);

    data = malloc(iSampleSize);
    if (!data)
        return (FMOD_RESULT_FORCEINT);

    cClientTmp = (CCtrlSocket*) CCtrlSocket::cServerSock.GetClientSock(CCtrlSocket::cServerSock.GetClientSockCount() - 1);

    do
    {
        result = sound->readData((char*) data, iSampleSize, &read);
        if ((result != FMOD_OK) && (result != FMOD_ERR_FILE_EOF))
            ASSERT(FALSE);
        else if (read > 0)
        {
            dSleepTotal = 0;
            for (int i = 0; i < read; i += NET_SVR_AUDIO_BUFFER)
            {
                // MIN_VAL_LIMITED ((MIN_VAL(VAL1, VAL2) <= LIMIT) ? LIMIT : MIN_VAL(VAL1, VAL2))
                cClientTmp->SendAudioData((char*) data + i, MIN_VAL_LIMITED(NET_SVR_AUDIO_BUFFER, (read - i), 0));
                // sleep after sending each package:
                Sleep(iSleep);
                dSleepTotal += iSleep;
            }
            if (dSleepTotal < (iSampleSec * 1000))
            {
                dSleepTotal = (iSampleSec * 1000) - dSleepTotal;
                // sleep after sending every second of playtime:
                Sleep(dSleepTotal);
            }
        }
    } while (read > 0);

    result = sound->release();
    if (result != FMOD_OK)
        return (result);
    result = system->close();
    if (result != FMOD_OK)
        return (result);
    result = system->release();
    if (result != FMOD_OK)
        return (result);

    return (result);
}
I have experimented with different sleep-timings, too.
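On the receiving side, one way to smooth the fast-start-then-lag pattern described above is a small jitter buffer: create the AudioTrack with a larger buffer and delay play() until enough data has arrived. A sketch using the question's 44.1 kHz stereo settings; the half-second figure is an illustrative choice, and sock, bSockBuffer and iAudioBufSize are the question's variables:

int prebufferBytes = 44100 * 2 * 2 / 2; // ~0.5 s of 16-bit stereo (illustrative)
// the track's internal buffer must be able to hold the pre-buffered data:
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(iAudioBufSize, prebufferBytes), AudioTrack.MODE_STREAM);
int buffered = 0;
boolean started = false;
while (true) {
    DatagramPacket pack = new DatagramPacket(bSockBuffer, bSockBuffer.length);
    sock.receive(pack);
    track.write(pack.getData(), 0, pack.getLength()); // MODE_STREAM accepts writes before play()
    buffered += pack.getLength();
    if (!started && buffered >= prebufferBytes) {
        track.play(); // start playback only once the jitter buffer is filled
        started = true;
    }
}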
I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer containing the sound.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz), for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // first fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // for debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's playing state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding AudioTrack and short[] input: the size passed to write() should be the number of shorts in the buffer (buffer.length), not the size in bytes.
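So the fix in addToQueue() above is simply to pass the count in shorts:

// write() for a short[] takes the count in shorts, not bytes
audioTrack.write(buffer, 0, buffer.length);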