Play sound from array on Android - android

Solved: I forgot the track.play(); at the end...
I want to play a sound on my Android smartphone (4.0.4, API level 15).
I tried to generate some random noise to listen to, but it's not working:
import java.util.Random;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class Sound {

    private static int length = 22050 * 10; // 10 seconds long
    private static byte[] data = new byte[length];

    static void fillRandom() {
        new Random().nextBytes(data); // Create some random noise to listen to.
    }

    static void play() {
        fillRandom();
        final int TEST_SR = 22050; // This is from an example I found online.
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STATIC; // I need static mode.
        final int TEST_STREAM_TYPE = AudioManager.STREAM_ALARM;
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, length, TEST_MODE);
        track.write(data, 0, length);
    }
}
I have played around a little with the variables, but could not get it to work.

All you have left to do is play it. Add this line to the end of your play() function:
track.play();
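Once that works, it is also good practice to stop and release the track when you are finished with it, so the native audio resources are freed (a small optional addition, not required for the fix itself):

// later, when the sound is no longer needed:
track.stop();
track.release(); // frees the native resources held by the AudioTrack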

Related

Audio file recorded on iOS plays back on Android as noise

I'm working on a recorder app on iOS and a friend of mine is doing the same on Android. When he plays the audio URL that I upload to the server (which plays back fine for me), he hears a lot of noise instead of my actual recording.
func setupRecorder() {
    let myID = userDefaults.value(forKey: "user_id") as! Int
    let message_token = generateUniqueToken()
    self.curentFileName = message_token
    let currentFileName = "\(String(describing: self.curentFileName!))" + "." + "\(myID).m4a"

    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    self.soundFileURL = documentsDirectory.appendingPathComponent(currentFileName)
    print("writing to soundfile url: '\(soundFileURL!)'")

    if FileManager.default.fileExists(atPath: soundFileURL.path) { // use .path (not .absoluteString) for file-system checks
        print("soundfile \(soundFileURL.path) exists")
    }

    let recordSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatAppleLossless,
        AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue,
        AVEncoderBitRateKey: 320000,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100.0
    ]

    do {
        soundRecorder = try AVAudioRecorder(url: soundFileURL, settings: recordSettings)
        soundRecorder.delegate = self
        soundRecorder.isMeteringEnabled = true
        soundRecorder.prepareToRecord()
    } catch {
        soundRecorder = nil
        print(error.localizedDescription)
    }
}
These are my recorder settings.
All I have from him is:
private static final int SAMPLE_PER_SEC = 8000;
private static final int CHANNEL_TYPE = AudioFormat.CHANNEL_IN_MONO;
private static final int AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private static int BUFFER_ELEMENT_TO_REC = 1024; // 1024 elements of 2 bytes each = 2048 (2K) bytes per read
private static int BYTES_PER_ELEMENT = 2;
He said I'm encoding my sound somehow while he's doing a raw recording... I have no idea where I'm messing it up, since it's my first time with AVFoundation.
Instead of using kAudioFormatAppleLossless, change it to kAudioFormatLinearPCM. kAudioFormatAppleLossless is an Apple-only format. It is better to use a format that is common to both Android and iOS devices, so that if the server does any processing on the audio, it doesn't have to handle two different formats.
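For the Android side, here is a minimal sketch of playing back headerless 16-bit linear PCM with AudioTrack. This is an illustration only, not the friend's actual code: the file path is a placeholder, and the sample rate and channel mask are assumed to match what the iOS recorder actually produced (44.1 kHz, 2 channels, per the settings above).

// Sketch: stream raw 16-bit PCM from a file and play it with AudioTrack.
// "recording.pcm" is a placeholder path; adjust sampleRate/channelMask to the real format.
int sampleRate = 44100;
int channelMask = AudioFormat.CHANNEL_OUT_STEREO;
int encoding = AudioFormat.ENCODING_PCM_16BIT;
int minBuf = AudioTrack.getMinBufferSize(sampleRate, channelMask, encoding);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        channelMask, encoding, minBuf, AudioTrack.MODE_STREAM);
track.play();

byte[] buffer = new byte[minBuf];
try (FileInputStream in = new FileInputStream("recording.pcm")) {
    int n;
    while ((n = in.read(buffer)) > 0) {
        track.write(buffer, 0, n); // blocks until the data has been queued
    }
} catch (IOException e) {
    e.printStackTrace();
}
track.stop();
track.release();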

play an array of sounds and change the tempo using soundpool

I am trying to develop an application which continuously plays one of several predefined arrays of sounds, and also changes the tempo and pitch (each individually) using Android's SoundPool class.
I know how to change the pitch of a single sound.
But I do not know how to play the whole array, nor how to change the tempo of that whole array.
Please help!
Thank you in advance!
EDIT: I think I will have to use a Handler in order to play the array of sounds, but I don't know how!
public class Sound {

    private static int length = 22050 * 10; // 10 seconds long
    private static byte[] data = new byte[length];

    static void fillRandom() {
        new Random().nextBytes(data); // Create some random noise to listen to.
    }

    static void play() {
        fillRandom();
        final int TEST_SR = 22050; // This is from an example I found online.
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STATIC; // I need static mode.
        final int TEST_STREAM_TYPE = AudioManager.STREAM_ALARM;
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, length, TEST_MODE);
        track.write(data, 0, length);
        track.play();
    }
}
This code plays some random noise. All you have to do is fill the byte[] data array with the actual audio data you intend to play...
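If you do want to stay with SoundPool and a Handler, as the edit in the question suggests, here is a rough sketch. The resource IDs (R.raw.note1, etc.) and the context variable are placeholders, not from the question; the rate argument of SoundPool.play() changes the pitch (and speed) of each sound, while the Handler interval sets the tempo at which successive sounds start.

// Sketch: step through an array of SoundPool sounds on a Handler timer.
final SoundPool soundPool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
final int[] soundIds = {
        soundPool.load(context, R.raw.note1, 1), // placeholder resources
        soundPool.load(context, R.raw.note2, 1),
        soundPool.load(context, R.raw.note3, 1)
};
// Note: load() is asynchronous; in real code wait for OnLoadCompleteListener before playing.

final Handler handler = new Handler();
final float playbackRate = 1.0f;  // 0.5f .. 2.0f, changes the pitch/speed of each sound
final long tempoIntervalMs = 500; // time between sound starts, i.e. the tempo

handler.post(new Runnable() {
    int index = 0;

    @Override
    public void run() {
        soundPool.play(soundIds[index], 1f, 1f, 1, 0, playbackRate);
        index = (index + 1) % soundIds.length;      // loop over the array
        handler.postDelayed(this, tempoIntervalMs); // schedule the next sound
    }
});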

Android AudioRecord calculate duration of PCM buffer

I am sorry if this is a trivial question, but I am new to Android and have spent a few days searching without finding an answer or information that satisfies me.
I want to record an audio clip of approximately 3 seconds every 30 seconds using an Android phone. Every record is sent to my PC (over TCP/IP) for further processing.
Here is the code on the Android side (I refer to the code of #TechEnd in this question: Android AudioRecord example):
private final int AUD_RECODER_SAMPLERATE = 44100; // 44.1 kHz
private final int AUD_RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private final int AUD_RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private final int AUD_RECORDER_BUFFER_NUM_ELEMENTS = 131072; // ~~ 1.486 second ???
private final int AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT = 2;
private AudioRecord audioRecorder = null;
private boolean isAudioRecording = false;
private Runnable runnable = null;
private Handler handler = null;
private final int AUD_RECORDER_RECORDING_PERIOD = 30000; // one fire every 30 seconds
private byte[] bData = new byte[AUD_RECORDER_BUFFER_NUM_ELEMENTS*AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT];
public void start() {
    audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, AUD_RECODER_SAMPLERATE, AUD_RECORDER_CHANNELS, AUD_RECORDER_AUDIO_ENCODING, AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT);
    audioRecorder.startRecording();
    isAudioRecording = true;
    handler = new Handler();
    runnable = new Runnable() {
        @Override
        public void run() {
            if (isAudioRecording) {
                int nElementRead = audioRecorder.read(bData, 0, bData.length); // returns the number of bytes read
                net_send(bData, nElementRead);
            }
            handler.postDelayed(this, AUD_RECORDER_RECORDING_PERIOD);
        }
    };
    handler.postDelayed(runnable, AUD_RECORDER_RECORDING_PERIOD);
}

public void stop() {
    isAudioRecording = false;
    if (audioRecorder != null) {
        audioRecorder.stop();
        audioRecorder.release();
        audioRecorder = null;
    }
    handler.removeCallbacks(runnable);
}

public void net_send(byte[] data, int nbytes) {
    try {
        // dataOutputStream is the TCP socket's output stream, set up elsewhere
        dataOutputStream.writeInt(nbytes);
        dataOutputStream.write(data, 0, nbytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And on the PC side (a server written in C), after receiving a record (I checked, and they are all 262144 bytes), I first write the byte array to a binary file (with the extension .raw) and open it with Free Audio Editor (http://www.free-audio-editor.com/); the result has a duration of 1.486 seconds:
https://www.dropbox.com/s/xzml51jzvagl6dy/aud1.PNG?dl=0
I then convert every two consecutive bytes into a 2-byte integer using this function
short bytes2short(const char num_buf[2])
{
    return (((num_buf[1] & 0xFF) << 8) |
            (num_buf[0] & 0xFF));
}
and write the result to a file (131072 values) and plot the normalized data with Excel; a similar graph is obtained.
As I calculate it, the number of bytes recorded in one second is 44100 (samples/sec) * 1 (sec) * 2 (bytes/sample/channel) * 1 (channel) = 88200 bytes.
So with my buffer of 131072 * 2 bytes, the corresponding duration should be 262144 / 88200 = 2.97 seconds. But the result I obtain is only half of that. I tried on three different devices running Android 2.3.3, 2.3.4 and 4.3 and obtained the same result, so the problem must be on my side.
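For reference, here is the same arithmetic as a small sanity-check sketch (values copied from the constants above; this is not part of my app):

// Expected duration of one record, from the constants above.
int sampleRate = 44100;        // samples per second
int bytesPerSample = 2;        // ENCODING_PCM_16BIT
int channels = 1;              // CHANNEL_IN_MONO
int bufferBytes = 131072 * 2;  // 262144 bytes per record

double bytesPerSecond = sampleRate * bytesPerSample * channels; // 88200
double expectedSeconds = bufferBytes / bytesPerSecond;          // ~2.97 s
System.out.println("expected duration: " + expectedSeconds + " s");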
Could anyone tell me where the problem is, in my calculation or in my code? Is my understanding correct?
Any comments or suggestions would be appreciated.

split two channels of AudioRecord of CHANNEL_IN_STEREO

I'm working on a project using the stereo recording of Android phones (a Note 3). I need to split the data from the two channels (right and left). Any idea how to do that?
Right now I use AudioRecord to record the sound from the internal microphones, and I can record and save the sound to .raw and .wav files.
Some of the code follows.
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 44100;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
audioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes);

// some other code....
// get the data from audioRecord
readsize = audioRecord.read(audiodata, 0, bufferSizeInBytes);
Finally, I got the answer. I used the stereo recording of the Android phone, with audioFormat set to PCM_16BIT.
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 48000;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
which means the data is stored in the buffer interleaved as follows (byte indices):
left channel data: [0,1], [4,5], ...
right channel data: [2,3], [6,7], ...
So this is the code for splitting the data of a stereo recording:
readSize = audioRecord.read(audioData, 0, bufferSizeInBytes);
for (int i = 0; i < readSize / 2; i = i + 2) {
    leftChannelAudioData[i] = audioData[2 * i];
    leftChannelAudioData[i + 1] = audioData[2 * i + 1];
    rightChannelAudioData[i] = audioData[2 * i + 2];
    rightChannelAudioData[i + 1] = audioData[2 * i + 3];
}
Then you can write the data to file.
leftChannelFos = new FileOutputStream(rawLeftChannelDataFile);
rightChannelFos = new FileOutputStream(rawRightChannelDataFile);
leftChannelBos = new BufferedOutputStream(leftChannelFos);
rightChannelBos = new BufferedOutputStream(rightChannelFos);
leftChannelDos = new DataOutputStream(leftChannelBos);
rightChannelDos = new DataOutputStream(rightChannelBos);
leftChannelDos.write(leftChannelAudioData);
rightChannelDos.write(rightChannelAudioData);
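If you prefer to work with 16-bit samples directly, an alternative sketch (just an illustration under the same recorder settings, not the code above) is to read into a short[] and de-interleave sample by sample:

// Sketch: read interleaved 16-bit samples and split them into left/right.
short[] stereoBuffer = new short[bufferSizeInBytes / 2];
int readShorts = audioRecord.read(stereoBuffer, 0, stereoBuffer.length);

short[] leftChannel = new short[readShorts / 2];
short[] rightChannel = new short[readShorts / 2];
for (int i = 0; i < readShorts / 2; i++) {
    leftChannel[i] = stereoBuffer[2 * i];      // even samples = left channel
    rightChannel[i] = stereoBuffer[2 * i + 1]; // odd samples = right channel
}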
Happy coding!

mp3 encoder parameters and settings

I tried to encode MP3 audio on an Android device, so I used the LAME encoder. It was successful, but there are some settings and parameters for encoding, such as sample rate, bitrate, etc. I searched to find out what they are and what choices are possible, but did not find anything useful. Can anyone help me with them?
Here they are:
public static final int NUM_CHANNELS = 1;    // number of input channels (1 = mono)
public static final int SAMPLE_RATE = 16000; // input PCM sample rate in Hz
public static final int BITRATE = 128;       // target MP3 bitrate in kbps
public static final int MODE = 1;            // channel mode flag (wrapper-specific; in LAME itself 1 = joint stereo)
public static final int QUALITY = 2;         // LAME quality: 0 = best/slowest ... 9 = worst/fastest
To find out more about these parameters, take a look at http://en.wikipedia.org/wiki/MP3 first. You may also need to check the more specific details in the MP3 codec (LAME) documentation.
