How can I get a short[] from a ByteBuffer on Android?

I am using JNI code in an Android project in which the JNI native function requires a short[] argument. However, the original data is stored in a ByteBuffer, so I'm trying to convert the data as follows.
ByteBuffer rgbBuf = ByteBuffer.allocate(size);
...
short[] shortArray = (short[]) rgbBuf.asShortBuffer().array().clone();
But I encounter the following problem when running the second line of code shown above:
E/AndroidRuntime(23923): Caused by: java.lang.UnsupportedOperationException
E/AndroidRuntime(23923): at java.nio.ShortToByteBufferAdapter.protectedArray(ShortToByteBufferAdapter.java:169)
Could anyone suggest a means to implement the conversion?

The way to do this is a bit odd, actually. You can do it as below; setting the byte order is important when converting to a short array.
short[] shortArray = new short[size/2];
rgbBuf.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortArray);
Additionally, you may have to use allocateDirect instead of allocate.
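Applied to a small buffer, the approach above looks like this (a minimal sketch; the method name is illustrative, and the byte order must match how your data was written):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ShortBufferDemo {
    // Bulk-copy a ByteBuffer into a short[], interpreting each pair of
    // bytes as a little-endian short.
    static short[] toShorts(ByteBuffer rgbBuf) {
        short[] shortArray = new short[rgbBuf.capacity() / 2];
        rgbBuf.rewind();
        rgbBuf.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortArray);
        return shortArray;
    }

    public static void main(String[] args) {
        // Four bytes encoding the shorts 1 and 2.
        short[] s = toShorts(ByteBuffer.wrap(new byte[] {0x01, 0x00, 0x02, 0x00}));
        System.out.println(s[0] + " " + s[1]); // prints "1 2"
    }
}
```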

I had the same error with anything that used asShortBuffer(). Here's a way around it (adapted from "2 bytes to short java"):
short[] shortArray = new short[rgbBuf.capacity() / 2];
for (int i = 0; i < shortArray.length; i++)
{
    ByteBuffer bb = ByteBuffer.allocate(2);
    bb.order(ByteOrder.LITTLE_ENDIAN);
    bb.put(rgbBuf.get(2 * i));
    bb.put(rgbBuf.get(2 * i + 1));
    shortArray[i] = bb.getShort(0);
}
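A lighter variant of the same idea avoids allocating a temporary buffer per element: getShort(index) is an absolute read, so you can set the order once and pull each short directly (a sketch; the method name is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class GetShortDemo {
    // Per-element conversion without asShortBuffer(): getShort(index)
    // reads two bytes at the given offset without moving the position.
    static short[] toShorts(ByteBuffer rgbBuf) {
        rgbBuf.order(ByteOrder.LITTLE_ENDIAN);
        short[] shortArray = new short[rgbBuf.capacity() / 2];
        for (int i = 0; i < shortArray.length; i++) {
            shortArray[i] = rgbBuf.getShort(2 * i);
        }
        return shortArray;
    }

    public static void main(String[] args) {
        short[] s = toShorts(ByteBuffer.wrap(new byte[] {0x01, 0x00, (byte) 0xFF, 0x7F}));
        System.out.println(s[0] + " " + s[1]); // prints "1 32767"
    }
}
```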

Related

Android AudioRecord delay when using bytes, not when using shorts

I'm developing an Xamarin Android application where video and audio are recorded with some playing music. I need to merge all these streams together.
I'm trying to figure out what I'm missing when recording audio using byte[] (or ByteBuffer, which I also tried) in the audioRecord.read() function. The output WAV file seems right (it is clearly playable at a 44100 Hz sample rate), but a delay appears after a couple of seconds and tends to get bigger and bigger.
When using shorts, I don't have any delay in the MIC-recorded audio. The big issue using shorts is that no matter what I do, I can't get a sample rate higher than 8000 Hz (that isn't the current issue, although if someone knows how to fix it I'll take it :) )
The final merged file is an mp4 with AAC audio, merged using ffmpeg, but I don't think this is the issue.
Could it be related to 8000 Hz (using short) versus 44100 Hz (using byte)? Or am I adding something when using byte[], since I don't check how many bytes are read?
Here are the parts involved in the issue:
//output file initialization
mDataOutputStream = new FileOutputStream(new Java.IO.File(mRawFilePath));
public void Run()
{
...
short[] shortBuf = new short[bufferSize / 2];
//byte[] byteBuf = new byte[bufferSize];
while(isRecording) {
//using shorts
audioRecorder.Read(shortBuf, 0, shortBuf.Length);
WriteShortsToFile(shortBuf);
//using byte[]
//audioRecorder.Read(byteBuf, 0, byteBuf.Length);
//WriteBytesToFile(byteBuf);
}
...
}
public void WriteShortsToFile(short[] shorts)
{
for (int i = 0; i < shorts.Length; i++)
{
mDataOutputStream.WriteByte((byte)(shorts[i] & 0xFF));
mDataOutputStream.WriteByte((byte)((shorts[i] >> 8) & 0xFF));
}
}
public void WriteBytesToFile(byte[] buf)
{
mDataOutputStream.Write(buf, 0, buf.Length);
}
Finally got it working as it should.
I changed the allocation size of the short array to bufferSize.
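The byte packing in WriteShortsToFile above is plain little-endian PCM. For illustration, here is the same round trip written in Java (method names are mine, not from the original code):

```java
public class PackDemo {
    // Pack a short into two little-endian bytes, as WriteShortsToFile does.
    static byte[] pack(short s) {
        return new byte[] { (byte) (s & 0xFF), (byte) ((s >> 8) & 0xFF) };
    }

    // Reassemble the short from its low and high bytes.
    static short unpack(byte lo, byte hi) {
        return (short) ((lo & 0xFF) | ((hi & 0xFF) << 8));
    }

    public static void main(String[] args) {
        short sample = -12345;
        byte[] b = pack(sample);
        System.out.println(unpack(b[0], b[1])); // prints "-12345"
    }
}
```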

Hex QString to hex qByteArray

I am trying to implement an OTP generator for Blackberry OS10. I already use the reference implementation on Android side, you can find it here:
So I would like to convert it to C++ / QNX code, and I'm having some trouble with the hexadecimal conversion...
In java:
private static byte[] hexStr2Bytes(String hex){
// Adding one byte to get the right conversion
// Values starting with "0" can be converted
byte[] bArray = new BigInteger("10" + hex,16).toByteArray();
// Copy all the REAL bytes, not the "first"
byte[] ret = new byte[bArray.length - 1];
for (int i = 0; i < ret.length; i++)
ret[i] = bArray[i+1];
return ret;
}
In QNX:
QByteArray msg = QByteArray::fromHex(m.toLocal8Bit());
The problem is that "m" starts with '00', and so my final msg array is zero-length...
For example, I try to encode the hex QString: 0000000002ca4e32
In blackberry: m=""
In Android: m="?M?"
So can someone explain how to deal with such a conversion?
Thanks!
What I would do is translate your Java function to plain C++, i.e. not Qt-specific code, and then adapt the data types to Qt.
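For reference, the Java helper above keeps leading zero bytes precisely because of the "10" prefix; a quick check with the example string from the question:

```java
import java.math.BigInteger;

public class HexDemo {
    // Same logic as the question's hexStr2Bytes: the "10" prefix forces
    // BigInteger to keep leading zero bytes, which are then copied out.
    static byte[] hexStr2Bytes(String hex) {
        byte[] bArray = new BigInteger("10" + hex, 16).toByteArray();
        byte[] ret = new byte[bArray.length - 1];
        System.arraycopy(bArray, 1, ret, 0, ret.length);
        return ret;
    }

    public static void main(String[] args) {
        byte[] msg = hexStr2Bytes("0000000002ca4e32");
        // Eight bytes, including the four leading zeros.
        System.out.println(msg.length + " " + msg[4]); // prints "8 2"
    }
}
```

Whatever C++ translation you end up with should reproduce exactly this behaviour, leading zero bytes included.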

AudioRecord read() returns strange values

I'm trying to read raw data from the mic with the following code:
short buffer[] = new short[AudioRecord.getMinBufferSize(8000,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT)];
Log.d("O_o",""+buffer.length);
AudioRecord rec = new AudioRecord(
MediaRecorder.AudioSource.MIC, 8000,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT, buffer.length);
rec.startRecording();
int read = rec.read(buffer, 0, buffer.length);
for (int i = 0; i < read; i++) {
Log.d("O_o",i+" "+buffer[i]);
}
rec.stop();
rec.release();
But the buffer is always filled with the value 257.
What's wrong?
UPD: Looks like those are initial values. Calling read() in a loop produces normal values.
You should definitely take a look at this question and answer. It shows some code that would improve yours considerably.
Basically, your problem is that you're trying to read synchronously. Audio capture usually has to be handled asynchronously, and you'll be getting 256-byte chunks of audio at any one time.
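The read-in-a-loop pattern can be sketched like this. PcmSource here is a hypothetical stand-in for AudioRecord so the logic is self-contained; with the real class you would call rec.read(buffer, offset, length) inside the loop and check its return value the same way:

```java
public class ReadLoopDemo {
    // Stand-in for AudioRecord.read(short[], int, int): returns samples read.
    interface PcmSource {
        int read(short[] buf, int off, int len);
    }

    // Keep reading until `total` samples have arrived; each call may
    // deliver fewer samples than requested.
    static short[] record(PcmSource src, int total) {
        short[] out = new short[total];
        int filled = 0;
        while (filled < total) {
            int n = src.read(out, filled, total - filled);
            if (n <= 0) break; // error or end of stream
            filled += n;
        }
        return out;
    }

    public static void main(String[] args) {
        // Fake source that delivers at most 3 samples per call.
        PcmSource fake = (buf, off, len) -> {
            int n = Math.min(3, len);
            for (int i = 0; i < n; i++) buf[off + i] = (short) (off + i);
            return n;
        };
        short[] data = record(fake, 8);
        System.out.println(data[7]); // prints "7"
    }
}
```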

How to send int array to Java code from C

On the Android platform, in my native code, I have allocated an int array:
mBuffer = new int[BUFSIZE];
I want to send this to the Java side; the Java method is this:
public void WriteBuffer(int[] buffer, int size)
{
}
I call back to java code like this
const char* callback = "WriteBuffer";
mWriteMethod = env->GetMethodID(cls, callback, "([II)V");
This calls the Java method; it's just that in my Java code the buffer is null. Since I am really passing a pointer to dynamically allocated memory rather than an actual array, that is probably why it doesn't work, but I don't know how to pass a pointer to Java.
I need the buffer parameter as an integer array on the Java side anyway.
Anyone know how I can modify the above to get it to work?
Thanks
My understanding of your question is that you want to call the Java method WriteBuffer and pass an int[] to it.
Some pseudo-code for the JNI side (C++ style, to match your GetMethodID call; obj is the Java object you are calling back into):
jintArray buffer = env->NewIntArray(BUFSIZE);
env->SetIntArrayRegion(buffer, 0, BUFSIZE, mBuffer);
env->CallVoidMethod(obj, mWriteMethod, buffer, BUFSIZE);
SetIntArrayRegion() copies from mBuffer into the Java array, and CallVoidMethod() then invokes WriteBuffer with it.

android java audio dsp sites or android sound library?

Does anyone know of any useful links for learning audio DSP for Android, or a sound library?
I'm trying to make a basic mixer for playing wav files but realised I don't know enough about DSP, and I can't find anything at all for Android.
I have a wav file loaded into a byte array and an AudioTrack on a short loop.
How can I feed the data in?
I expect this post will be ignored but it's worth a try.
FileInputStream is = new FileInputStream(filePath);
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);
byte[] byteData = new byte[dis.available()]; // assumes the whole file fits in memory
int i = 0;
while (dis.available() > 0) {
    byteData[i] = dis.readByte();
    i++;
}
final int minSize = AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT );
track = new AudioTrack( AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT,
minSize, AudioTrack.MODE_STREAM);
track.play();
bRun=true;
new Thread(new Runnable() {
public void run() {
track.write(byteData, 0, minSize);
}
}).start();
I'll give this a shot just because I was in your position a few months ago...
If you already have the wav file's audio samples in a byte array, you simply need to pass the samples to the AudioTrack object (look up the write() methods).
To mix audio together you simply add the samples from each track: add the first sample of track 1 to the first sample of track 2, then the second to the second, and so on. The end result would ideally be a third array containing the summed samples, which you pass to the write() method of your AudioTrack instance.
You must be mindful of clipping here. If your data type is short, the maximum value allowed is 32767. A simple way to ensure that your summed samples do not exceed this limit is to perform the addition in a variable whose data type is larger than a short (e.g. int) and evaluate the result. If it's greater than 32767, make it 32767 before casting back to a short.
int result = track1[i] + track2[i];
if (result > 32767) {
    result = 32767;
}
else if (result < -32768) {
    result = -32768;
}
mixedAudio[i] = (short) result;
Notice how the snippet above also clamps to the minimum value of a short.
Apologies for the lack of formatting here; I'm on my mobile phone on a train :-)
Good luck.
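The clamping logic above fits naturally into a small mix helper (a sketch; the class and method names are illustrative, and Short.MAX_VALUE/MIN_VALUE are just 32767 and -32768):

```java
public class MixDemo {
    // Mix two equal-length 16-bit PCM tracks, clamping to the short range.
    static short[] mix(short[] track1, short[] track2) {
        short[] mixed = new short[track1.length];
        for (int i = 0; i < track1.length; i++) {
            int result = track1[i] + track2[i];
            if (result > Short.MAX_VALUE) result = Short.MAX_VALUE;      // 32767
            else if (result < Short.MIN_VALUE) result = Short.MIN_VALUE; // -32768
            mixed[i] = (short) result;
        }
        return mixed;
    }

    public static void main(String[] args) {
        short[] mixed = mix(new short[] {30000, -30000, 100},
                            new short[] {10000, -10000, 200});
        // Clipped high, clipped low, and a plain sum.
        System.out.println(mixed[0] + " " + mixed[1] + " " + mixed[2]); // prints "32767 -32768 300"
    }
}
```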
