These are my arrays of audio files:
int[] rawQuetion = {R.raw.alikhlas, R.raw.alkafirun}; // questions
int[] rawAnswer = {R.raw.jwbaliklas, R.raw.alfalaq};  // answers
and this is the method that randomizes the questions:
// Fisher-Yates shuffle
public void playSoal() {
    shuffleArray(rawQuetion);
    try {
        int idx = new Random().nextInt(rawQuetion.length);
        mp = MediaPlayer.create(this, rawQuetion[idx]);
        mp.start();
    } catch (Exception e) {
        Log.e("ERROR", "Media Player", e);
    }
}
static void shuffleArray(int[] arr) {
    Random rnd = new Random();
    for (int i = arr.length - 1; i > 0; i--) {
        int index = rnd.nextInt(i + 1);
        // swap arr[i] and arr[index]
        int a = arr[index];
        arr[index] = arr[i];
        arr[i] = a;
    }
}
public void audioFile() throws IOException {
    InputStream is = getResources().openRawResource(R.raw.jwbaliklas); // I want to get the audio file from rawAnswer based on the question
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BufferedInputStream in = new BufferedInputStream(is);
    int read;
    byte[] buff = new byte[1024];
    while ((read = in.read(buff)) > 0) {
        out.write(buff, 0, read);
    }
    out.flush();
    byte[] audioBytes = out.toByteArray();
    for (int i = 0; i < audioBytes.length; i++) {
        audioBytes[i] = (byte) (audioBytes[i] & 0xFF);
    }
    absNormalizedSignal = hitungFFT(audioBytes);
    AppLog.logString("===== This is from audioFile");
}
public void audioFile() throws IOException {
    InputStream is = getResources().openRawResource(R.raw.jwbaliklas); // I want to get the audio file from rawAnswer based on the question
You have your answer right here: use openRawResource() to open a raw resource. You don't need to hard-code specific argument values in your code. The method takes an int argument, and how you determine the value to pass is entirely up to you. For example, you can declare your audioFile() method to take an integer argument and pass that on to openRawResource():
public void audioFile(final int resid) throws IOException {
    InputStream is = getResources().openRawResource(resid);
Then, where you have your corresponding question/answer audio ids, you can pass the correct id.
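For example, a rough sketch of one way to wire this up (the currentIdx field and the playJawaban() method are illustrative names, not code from the question; it also assumes rawQuetion[i] and rawAnswer[i] are kept as matching pairs, so the question array is not shuffled in place):
// Sketch only: pick a random question index and remember it,
// so the matching answer resource can be passed to audioFile() later.
int currentIdx;

public void playSoal() {
    currentIdx = new Random().nextInt(rawQuetion.length);
    mp = MediaPlayer.create(this, rawQuetion[currentIdx]);
    mp.start();
}

public void playJawaban() throws IOException {
    // hands over the answer id that corresponds to the question just played
    audioFile(rawAnswer[currentIdx]);
}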
I get two different audio samples from two sources.
For microphone sound:
audioRecord = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, 44100,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT) * 5);
For Internal sound:
audioRecord = new AudioRecord.Builder()
.setAudioPlaybackCaptureConfig(config)
.setAudioFormat(new AudioFormat.Builder()
.setEncoding(AudioFormat.ENCODING_PCM_16BIT)
.setSampleRate(44100)
.setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
.build())
.setBufferSizeInBytes((AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT)*5))
.build();
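(The config object passed to setAudioPlaybackCaptureConfig() isn't shown in the question. For context, a minimal sketch of how an AudioPlaybackCaptureConfiguration is typically built on API 29+, assuming a mediaProjection obtained through the usual MediaProjectionManager consent flow:)
// Sketch only: playback-capture config for internal audio (API 29+).
// mediaProjection is assumed to come from MediaProjectionManager's consent flow.
AudioPlaybackCaptureConfiguration config =
        new AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
                .addMatchingUsage(AudioAttributes.USAGE_GAME)
                .build();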
For reading from the audioRecord object, we create individual Frame objects (a custom class called Frame):
private ByteBuffer pcmBuffer = ByteBuffer.allocateDirect(4096);

private Frame read() {
    pcmBuffer.rewind();
    int size = audioRecord.read(pcmBuffer, pcmBuffer.remaining());
    if (size <= 0) {
        return null;
    }
    return new Frame(pcmBuffer.array(), pcmBuffer.arrayOffset(), size);
}
We create two separate LinkedLists to hold the frames that we get from the read() function.
private LinkedList<Frame> internalAudioQueue = new LinkedList<>();
private LinkedList<Frame> microphoneAudioQueue = new LinkedList<>();

public void onFrameReceived(Frame frame, boolean isInternalAudio) {
    if (isInternalAudio) {
        internalAudioQueue.add(frame);
    } else {
        microphoneAudioQueue.add(frame);
    }
    checkAndPoll();
}
Every time we add a frame to the respective list, we call the following checkAndPoll() function and, depending on the case, pass the frame on to the audioEncoder.
public void checkAndPoll() {
    Frame frame1 = internalAudioQueue.poll();
    Frame frame2 = microphoneAudioQueue.poll();
    if (frame1 == null && frame2 != null) {
        audioEncoder.inputPCMData(frame2);
    } else if (frame1 != null && frame2 == null) {
        audioEncoder.inputPCMData(frame1);
    } else if (frame1 != null && frame2 != null) {
        Frame frame = new Frame(PCMUtil.mix(frame1.getBuffer(), frame2.getBuffer(), frame1.getSize(), frame2.getSize(), false),
                frame1.getOrientation(), frame1.getSize());
        audioEncoder.inputPCMData(frame);
    }
}
Now we mix the audio samples (byte arrays) from the two sources in the following way, with help from Hendrik's answer at this link.
public static byte[] mix(final byte[] a, final byte[] b, final boolean bigEndian) {
    final byte[] aa;
    final byte[] bb;
    final int length = Math.max(a.length, b.length);
    // ensure same lengths
    if (a.length != b.length) {
        aa = new byte[length];
        bb = new byte[length];
        System.arraycopy(a, 0, aa, 0, a.length);
        System.arraycopy(b, 0, bb, 0, b.length);
    } else {
        aa = a;
        bb = b;
    }
    // convert to samples
    final int[] aSamples = toSamples(aa, bigEndian);
    final int[] bSamples = toSamples(bb, bigEndian);
    // mix by adding
    final int[] mix = new int[aSamples.length];
    for (int i = 0; i < mix.length; i++) {
        mix[i] = aSamples[i] + bSamples[i];
        // enforce min and max (may introduce clipping)
        mix[i] = Math.min(Short.MAX_VALUE, mix[i]);
        mix[i] = Math.max(Short.MIN_VALUE, mix[i]);
    }
    // convert back to bytes
    return toBytes(mix, bigEndian);
}
private static int[] toSamples(final byte[] byteSamples, final boolean bigEndian) {
    final int bytesPerChannel = 2;
    final int length = byteSamples.length / bytesPerChannel;
    if ((length % 2) != 0) throw new IllegalArgumentException("For 16 bit audio, length must be even: " + length);
    final int[] samples = new int[length];
    for (int sampleNumber = 0; sampleNumber < length; sampleNumber++) {
        final int sampleOffset = sampleNumber * bytesPerChannel;
        final int sample = bigEndian
                ? byteToIntBigEndian(byteSamples, sampleOffset, bytesPerChannel)
                : byteToIntLittleEndian(byteSamples, sampleOffset, bytesPerChannel);
        samples[sampleNumber] = sample;
    }
    return samples;
}
private static byte[] toBytes(final int[] intSamples, final boolean bigEndian) {
    final int bytesPerChannel = 2;
    final int length = intSamples.length * bytesPerChannel;
    final byte[] bytes = new byte[length];
    for (int sampleNumber = 0; sampleNumber < intSamples.length; sampleNumber++) {
        final byte[] b = bigEndian
                ? intToByteBigEndian(intSamples[sampleNumber], bytesPerChannel)
                : intToByteLittleEndian(intSamples[sampleNumber], bytesPerChannel);
        System.arraycopy(b, 0, bytes, sampleNumber * bytesPerChannel, bytesPerChannel);
    }
    return bytes;
}
// from https://github.com/hendriks73/jipes/blob/master/src/main/java/com/tagtraum/jipes/audio/AudioSignalSource.java#L238
private static int byteToIntLittleEndian(final byte[] buf, final int offset, final int bytesPerSample) {
    int sample = 0;
    for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
        final int aByte = buf[offset + byteIndex] & 0xff;
        sample += aByte << (8 * byteIndex);
    }
    return (short) sample;
}
// from https://github.com/hendriks73/jipes/blob/master/src/main/java/com/tagtraum/jipes/audio/AudioSignalSource.java#L247
private static int byteToIntBigEndian(final byte[] buf, final int offset, final int bytesPerSample) {
    int sample = 0;
    for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
        final int aByte = buf[offset + byteIndex] & 0xff;
        sample += aByte << (8 * (bytesPerSample - byteIndex - 1));
    }
    return (short) sample;
}
private static byte[] intToByteLittleEndian(final int sample, final int bytesPerSample) {
    byte[] buf = new byte[bytesPerSample];
    for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
        buf[byteIndex] = (byte) ((sample >>> (8 * byteIndex)) & 0xFF);
    }
    return buf;
}
private static byte[] intToByteBigEndian(final int sample, final int bytesPerSample) {
    byte[] buf = new byte[bytesPerSample];
    for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
        buf[byteIndex] = (byte) ((sample >>> (8 * (bytesPerSample - byteIndex - 1))) & 0xFF);
    }
    return buf;
}
The mixed samples that I am getting have both distortion and noise, and I am not able to figure out what needs to be done to remove them. Any help here is appreciated.
Thanks in advance!
I think if you're mixing, you should take the (weighted) average of both.
If you have samples of 128 and 128, the result would then still be 128, not 256, which could be out of range.
So just change your code to:
// mix by averaging instead of adding
final int[] mix = new int[aSamples.length];
for (int i = 0; i < mix.length; i++) {
    // take the average of the two samples
    mix[i] = (aSamples[i] + bSamples[i]) >> 1;
}
Does that work for you?
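If a plain 50/50 average leaves one source too quiet, a weighted variant is a small change. A minimal sketch (the 0.7/0.3 weights below are only illustrative, not part of the original answer):
// mix with adjustable weights; keeping the weights summing to roughly 1.0
// avoids pushing the result out of the 16-bit range
final double wA = 0.7; // e.g. microphone
final double wB = 0.3; // e.g. internal audio
final int[] mix = new int[aSamples.length];
for (int i = 0; i < mix.length; i++) {
    mix[i] = (int) (wA * aSamples[i] + wB * bSamples[i]);
}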
This is my code, but the second split of the video comes out different; as a result, when the split parts are combined, the video plays the first part but not the second.
int bufferSize = (int) video_size_bytes;
byte[] buffer = new byte[bufferSize];
ByteArrayOutputStream byteBuffer = new ByteArrayOutputStream();
int len = 0;
try {
    while ((len = inputStream.read(buffer)) != -1) {
        byteBuffer.write(buffer, 0, len);
    }
} catch (IOException e) {
    e.printStackTrace();
    Log.i(PostsActivity.TAG, "IOException: " + e.getMessage());
}
int offset = 0;
int addition = 100000;
int length = 100000;
int limit = (int) video_size_bytes;
boolean stop_loop = false;
boolean loop_mock = true;
do {
    // Converting bytes into base64
    String video_string_raw = Base64.encodeToString(byteBuffer.toByteArray(), offset, length, Base64.DEFAULT);
    String video_string_raw_refined = video_string_raw.trim();
    video_string = video_string_raw_refined.replaceAll("\n", "");
    video_parts_array_string.add(video_string);
    if (stop_loop) {
        break;
    }
    offset = offset + addition;
    if ((offset + addition) > limit) {
        length = limit - offset;
        stop_loop = true;
    } else {
        offset = offset + addition;
    }
} while (loop_mock);
These are my arrays of questions and answers; for example, R.raw.alikhlas is the question whose answer is R.raw.jwbaliklas:
int[] rawQuetion = {R.raw.alfalaq, R.raw.alikhlas, R.raw.alkafirun, R.raw.allahab};
int[] rawAnswer = {R.raw.jwbaliklas};
This is the method that randomizes the questions:
// Fisher-Yates shuffle
public void playSoal() {
    shuffleArray(rawQuetion);
    try {
        int idx = new Random().nextInt(rawQuetion.length);
        mp = MediaPlayer.create(this, rawQuetion[idx]);
        mp.start();
    } catch (Exception e) {
        Log.e("ERROR", "Media Player", e);
        e.printStackTrace();
        if (mp != null) {
            // release the player instead of calling methods on a null reference
            mp.release();
            mp = null;
        }
    }
}
static void shuffleArray(int[] arr) {
    Random rnd = new Random();
    for (int i = arr.length - 1; i > 0; i--) {
        int index = rnd.nextInt(i + 1);
        // swap arr[i] and arr[index]
        int a = arr[index];
        arr[index] = arr[i];
        arr[i] = a;
    }
}
When a quiz question is selected at random, I want its answer to be picked up here:
public void audioFile() throws IOException {
    InputStream is = getResources().openRawResource(R.raw.jwbaliklas); // I want this to be obtained from the above array
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BufferedInputStream in = new BufferedInputStream(is);
    int read;
    byte[] buff = new byte[1024];
    while ((read = in.read(buff)) > 0) {
        out.write(buff, 0, read);
    }
    out.flush();
    byte[] audioBytes = out.toByteArray();
    for (int i = 0; i < audioBytes.length; i++) {
        audioBytes[i] = (byte) (audioBytes[i] & 0xFF);
    }
    absNormalizedSignal = hitungFFT(audioBytes);
    AppLog.logString("===== From audio file");
}
If you pass the index of the resource you want to open, you can use it directly against the array of integers (as long as the array is visible from the audioFile() scope):
public void audioFile(@RawRes int i) throws IOException {
    InputStream is = getResources().openRawResource(rawQuestion[i]);
    ...
}
Also, you should include the @RawRes annotation on the arrays:
@RawRes int[] rawQuestion = {R.raw.alfalaq, R.raw.alikhlas, R.raw.alkafirun, R.raw.allahab};
@RawRes int[] rawAnswer = {R.raw.jwbaliklas};
I want to play my .mp3 files sequentially, but they are playing almost at the same time. How can I solve this?
I parsed the "long list of station names" into groups of 3 station names. I send them, get an InputStream back, convert it into an mp3, and then try to play the files in the media player one after another.
Here is the code as it flows; the first method to run is readStationNames():
public void readStationNames(String[] arrayOfStations) throws IOException {
    int mod = (arrayOfStations.length) % 3;
    int length = (arrayOfStations.length);
    int m = 0;
    for (int k = 1; k <= (length) / 3; k++) {
        String stationNamesAsString = "";
        for (int j = m; j < 3 * k; j++) {
            stationNamesAsString = stationNamesAsString + "\t" + "," + arrayOfStations[j];
        }
        speak(stationNamesAsString);
        m += 3;
    }
    if (mod != 0) {
        String stationNamesAsString = "";
        for (int g = (length - 1) - mod; g < length; g++) {
            stationNamesAsString = stationNamesAsString + "\t" + "," + arrayOfStations[g];
        }
        speak(stationNamesAsString);
    }
    ListenMicrophone();
}
Then the speak() method, which is called from readStationNames(), is below:
private void speak(String StationNamesAsString) throws IOException {
    String encodedString = URLEncoder.encode(StationNamesAsString, "UTF-8");
    sound = audio.getAudio(encodedString, Language.ENGLISH);
    File convertedFile = File.createTempFile("convertedFile", ".mp3", null); // getDir("filez", 0)
    FileOutputStream out = new FileOutputStream(convertedFile);
    byte buffer[] = new byte[16384];
    int length = 0;
    while ((length = sound.read(buffer)) != -1) {
        out.write(buffer, 0, length);
    }
    out.close();
    playFile(convertedFile);
}
The method that plays the converted file is below:
public void playFile(File playThis) throws IllegalArgumentException, IllegalStateException, IOException {
    MediaPlayer mp = new MediaPlayer();
    FileInputStream fis = new FileInputStream(playThis);
    mp.setDataSource(fis.getFD());
    fis.close();
    mp.prepare();
    mp.start();
    mp.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mp) {
            mp.stop();
        }
    });
}
For downloading, I work with the Apache classes HttpResponse, HttpClient, etc.
I check for a valid download like this:
entity.writeTo(new FileOutputStream(outfile));
if (outfile.length() != entity.getContentLength()) {
    long fileLength = outfile.length();
    outfile.delete();
    throw new Exception("Incomplete download, " + fileLength + "/"
            + entity.getContentLength() + " bytes downloaded");
}
But it seems that the exception is never triggered. How do I properly handle this? Is entity.getContentLength() the length of the file on the server or the amount of data received?
The response to the file request should always come with an MD5 checksum. If you have an MD5 header, all you need to do is check it against the MD5 generated from the file, and then you're done. It's better to do it this way because you can have a file with the correct number of bytes in which one byte got garbled in transmission.
entity.writeTo(new FileOutputStream(outfile));
String md5 = response.getHeaders("Content-MD5")[0].getValue();
byte[] b64 = Base64.decode(md5, Base64.DEFAULT);
String sB64 = IntegrityUtils.toASCII(b64, 0, b64.length);
if (outfile.exists()) {
    String orgMd5 = null;
    try {
        orgMd5 = IntegrityUtils.getMD5Checksum(outfile);
    } catch (Exception e) {
        Log.d(TAG, "Exception in file hex...");
    }
    if (orgMd5 != null && orgMd5.equals(sB64)) {
        Log.d(TAG, "MD5 is equal to files MD5");
    } else {
        Log.d(TAG, "MD5 does not equal files MD5");
    }
}
Add this class to your project:
public class IntegrityUtils {

    public static String toASCII(byte b[], int start, int length) {
        StringBuffer asciiString = new StringBuffer();
        for (int i = start; i < (length + start); i++) {
            // exclude nulls from the ASCII representation
            if (b[i] != (byte) 0x00) {
                asciiString.append((char) b[i]);
            }
        }
        return asciiString.toString();
    }

    public static String getMD5Checksum(File file) throws Exception {
        byte[] b = createChecksum(file);
        String result = "";
        for (int i = 0; i < b.length; i++) {
            result += Integer.toString((b[i] & 0xff) + 0x100, 16).substring(1);
        }
        return result;
    }

    public static byte[] createChecksum(File file) throws Exception {
        InputStream fis = new FileInputStream(file);
        byte[] buffer = new byte[1024];
        MessageDigest complete = MessageDigest.getInstance("MD5");
        int numRead;
        do {
            numRead = fis.read(buffer);
            if (numRead > 0) {
                complete.update(buffer, 0, numRead);
            }
        } while (numRead != -1);
        fis.close();
        return complete.digest();
    }
}