I want to read some bytes from a file, starting at "offset" and with length "size", so I am using FileInputStream and this code:
byte[] data = new byte[size];
FileInputStream fis=new FileInputStream(inputFile);
System.out.println("offset:"+offset+","+"size:"+size);
fis.read(data, offset, size);
The values of offset and size are correct, but I get an IndexOutOfBoundsException. I don't understand why. Can anyone show me where I went wrong, and whether there is another, correct way to do this?
The JavaDoc tells you:
public int read(byte[] b, int off, int len) throws IOException
Throws:
IndexOutOfBoundsException - If off is negative, len is negative, or len is
greater than b.length - off
Be aware that the indexes are 0-based.
I'm not sure what you've got in offset here, but that parameter is the offset (i.e. starting index) in the destination array where the bytes should be stored, not a position in the file.
So you're trying to read size bytes into your array starting at array position offset. Since the array is only size bytes long, any offset > 0 makes len greater than b.length - off, hence the IndexOutOfBoundsException. Pass 0 as the offset and it will work.
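If the goal is to start reading at a byte offset within the file, you have to position yourself in the file first; the off parameter of read() won't do that for you. A minimal sketch, assuming offset and size are valid for the file (using RandomAccessFile instead of FileInputStream):
byte[] data = new byte[size];
try (RandomAccessFile raf = new RandomAccessFile(inputFile, "r")) {
    raf.seek(offset);       // jump to the byte offset in the file
    raf.readFully(data);    // fill the whole array, starting at array index 0
}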
OpenGL ES 3.1, Android.
I have set up an SSBO with the intention of writing something in the fragment shader and reading it back in the application. Things almost work, i.e. I can read back the value I have written, with one issue: when I read an int, its bytes come back reversed (a '17' = 0x00000011 written in the shader comes back as '285212672' = 0x11000000).
Here's how I do it:
Shader
(...)
layout (std140,binding=0) buffer SSBO
{
int ssbocount[];
};
(...)
ssbocount[0] = 17;
(...)
Application code
int SIZE = 40;
int[] mSSBO = new int[1];
ByteBuffer buf = ByteBuffer.allocateDirect(SIZE).order(ByteOrder.nativeOrder());
(...)
glGenBuffers(1,mSSBO,0);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mSSBO[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, SIZE, null, GL_DYNAMIC_READ);
buf = (ByteBuffer) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT );
glBindBufferBase(GL_SHADER_STORAGE_BUFFER,0, mSSBO[0]);
(...)
int readValue = buf.getInt(0);
Now print out the value, and the '17' comes back with its bytes reversed (as 285212672).
Notice I DO allocate the ByteBuffer with 'nativeOrder'. Of course, I could manually flip the bytes, but the concern is this would only sometimes work, depending on the endianness of the host machine...
The fix is to set native byte order on the mapped buffer and read it through an integer view created with ByteBuffer.asIntBuffer(). The nativeOrder() you set on the buffer you allocated is lost here: glMapBufferRange returns a new ByteBuffer wrapping the mapped memory, that buffer starts out with the Java default order (big-endian), and so getInt() on it swaps the bytes on a little-endian device.
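A minimal sketch of the read-back path, reusing the names from the question (the dispatch and memory-barrier calls stay elided, as above):
glBindBuffer(GL_SHADER_STORAGE_BUFFER, mSSBO[0]);
ByteBuffer mapped = (ByteBuffer) glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, SIZE, GL_MAP_READ_BIT);
mapped.order(ByteOrder.nativeOrder());   // the freshly mapped buffer defaults to big-endian
IntBuffer ints = mapped.asIntBuffer();   // the int view keeps the order set above
int readValue = ints.get(0);             // 17
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);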
I am using the following code to get the R, G, B values of a grayscale image (they will all be the same).
But the output gives me negative values.
Why is this so? I am totally confused.
for (int i=0;i<bb.getWidth();i++){
for (int j=0;j<bb.getHeight();j++){
long temp=bb.getPixel(i, j);
int a=(int)temp;
ByteBuffer b = ByteBuffer.allocate(4);
b.putInt(a );
byte[] result = b.array();
Log.d("gray value is=", String.valueOf((int)result[1]));
// Log.d("gray value is=", String.valueOf(getLastBytes(a)));
}
}
Here result[1] should correspond to the 'R' value. So how is it negative?
Try this
int temp = bb.getPixel(i, j);   // getPixel() returns an int ARGB value
int R = Color.red(temp);        // Color.red/green/blue return the 0-255 channel values
int G = Color.green(temp);
int B = Color.blue(temp);
It is because of all the type changes, the casting associated with them, and the fact that those types are signed.
First of all, the initial return value of getPixel() is an int (32 bits or 4 bytes, one byte each for A, R, G and B). Placing it into a long seems unnecessary here, because you just cast it back into an int. But so far so good.
When you get the byte array, you are correct that the four bytes for ARGB are in order so result[1] should be the Red value. However, when you implicitly cast that byte into an int in your log statement, a problem happens.
Because the int is four bytes long, and both int and byte are signed types, sign extension applies to the result, i.e.
The byte 0x48 will extend into an int as 0x00000048
The byte 0x88 will extend into an int as 0xFFFFFF88
So when you print the result to the log, it interprets the decimal value of your data to be a negative number. Admittedly, even if you didn't cast and just printed the value of byte, that type is still signed (i.e. it goes from -128 to 127 and not 0 to 255, and 0x88 means -120), so the log output would be the same.
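A quick illustration of the two cases above (plain Java, just to show the widening):
byte positive = 0x48;            // sign bit clear
byte negative = (byte) 0x88;     // sign bit set
System.out.println(positive);    // 72, widened to 0x00000048
System.out.println(negative);    // -120, widened to 0xFFFFFF88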
If you want to log out the actual 0-255 value, modify your code to bitmask the pixel data to defeat sign extension in the int:
int red = (int)(result[1] & 0xFF);
Log.d("gray value is=", String.valueOf(red));
I am trying to build some kind of sound meter for the Android platform (I am using an HTC Wildfire). I use the AudioRecord class for that goal; however, the values returned from its "read" do not seem reasonable.
This is how I created the AudioRecord object:
int minBufferSize =
AudioRecord.getMinBufferSize(sampleRateInHz,
android.media.AudioFormat.CHANNEL_IN_MONO,
android.media.AudioFormat.ENCODING_PCM_16BIT);
audioRecored = new AudioRecord( MediaRecorder.AudioSource.MIC,
44100,
android.media.AudioFormat.CHANNEL_IN_MONO,
android.media.AudioFormat.ENCODING_PCM_16BIT,
minBufferSize );
This is how I try to read data from it:
short[] audioData = new short[minBufferSize];
int offset =0;
int shortRead = 0;
int sampleToReadPerGet = 1000; // some value, in order to avoid momentary noises.
//start tapping into the microphone
audioRecored.startRecording();
//start reading from the microphone into an internal buffer - chunk by chunk
while (offset < sampleToReadPerGet)
{
shortRead = audioRecored.read(audioData, offset, sampleToReadPerGet - offset);
offset += shortRead;
}
//stop tapping into the microphone
audioRecored.stop();
//average the buffer
int averageSoundLevel = 0;
for (int i = 0 ; i < sampleToReadPerGet ; i++)
{
averageSoundLevel += audioData[i];
}
averageSoundLevel /= sampleToReadPerGet;
What are those values? Are they decibels?
Edit:
The values go from -200 to 3000.
The value of shortRead is sampleToReadPerGet (1000).
Not sure which "values" you are referring to, the raw output or the averaged values, but the raw output values are instantaneous amplitude levels. It's important to realize that such values are not referenced to anything in particular; just because you read a 20 does not tell you 20 of what.
Taking the average of these values doesn't make any sense, because those values swing above and below zero. Do it long enough and you'll just get zero.
It might make sense to take the average of the squares, and then find the square root of the average. This is called the RMS. However, without a fixed buffer size to average over, this is hazardous at best.
To measure dB, you will have to use the formula dB = 20 log_10 (|A|/A_r) where A is the amplitude and A_r is the reference amplitude -- clearly, you must decide what you are referencing (you can calibrate the HTC, or measure against the maximum level or something like that).
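A minimal sketch of that calculation over the buffer from the question, using the 16-bit full-scale value 32768 as an assumed (uncalibrated) reference amplitude:
double sumOfSquares = 0;
for (int i = 0; i < sampleToReadPerGet; i++) {
    sumOfSquares += (double) audioData[i] * audioData[i];
}
double rms = Math.sqrt(sumOfSquares / sampleToReadPerGet);   // RMS amplitude of the chunk
double db = 20.0 * Math.log10(rms / 32768.0);                // dB relative to full scale (dBFS)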
You should not get negative values. The values span 16 or 8 bits, so your max is about 32000 or something. The values have no units.
Also, I recommend root-mean-squared instead of an average for determining volume. It is more stable.
What you should try:
Increase the buffer size by a factor of 3: your app may not be reading it fast enough, so you need some headroom. Otherwise you might be getting buffer overrun errors (which you are not checking for in your code).
Try the code in gast-lib: it helps you record audio periodically and also provides you with an AsyncTask.
Root mean squared:
public static double rootMeanSquared(short[] nums)
{
double ms = 0;
for (int i = 0; i < nums.length; i++)
{
ms += nums[i] * nums[i];
}
ms /= nums.length;
return Math.sqrt(ms);
}
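For example, called on the buffer read in the question (a hypothetical usage, reusing the audioData array from above):
double level = rootMeanSquared(audioData);
Log.d("sound level", String.valueOf(level));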
What am I doing:
I add four integers in C. Along the way, I lose information.
See the code below:
//c-file
jbyte *inputByteArray = (*env)->GetDirectBufferAddress (env, obj);
// checked every value, also sizeof(val1)= 4 etc...
int val1 = (int) *(inputByteArray + 1); //120
int val2 = (int) *(inputByteArray + 2); //120
int val3 = (int) *(inputByteArray + 3); //180
int val4 = (int) *(inputByteArray + 4); //180
int result = val1 + val2 + val3 + val4;
return result;
//return type is int
//output: 88, should be 600
// 88 binary: 0000 0000 0101 1000
//600 binary: 0000 0010 0101 1000
The special thing about this, which might be causing the problem, is the following:
The four input values come from a buffer handed over from Java, which is a direct ByteBuffer. It is allocated directly in Java so that it is NOT moved by the garbage collector. On the C side I obtain a pointer to the buffer via "GetDirectBufferAddress" (see code), and the individual values do match the values in the array.
Does anyone know about this strange behaviour?
By the way, when I use an IntBuffer to hand over the numbers, it works.
I'm working on performance here, so I want small buffers, and my data values are small enough to fit in a ByteBuffer. (This is only a fragment of a larger calculation on the C side.)
Since this is on Android, I have not managed to debug into the C code...
Edit: I am using Eclipse/SDK/NDK in current versions, on an Android 3.2.1 test device.
As #Deucalion says, your array indexing looks dodgy, unless you really do mean to add array[1], array[2], array[3] and array[4] and skip array[0].
Anyway, assuming what you have done is what you intended, the value you get is exactly what you should expect.
The range of a signed byte is -128 to +127, so 180 is actually stored as -76, and 120 + 120 + (-76) + (-76) = 88. Voilà! If you want the unsigned 0-255 value of each byte, mask it (e.g. with & 0xFF, or cast through unsigned char) before adding.
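The same signed-byte behaviour is easy to reproduce on the Java side, since jbyte and Java's byte are both signed 8-bit types (just an illustration, not the original JNI code):
byte b = (byte) 180;                               // stored as -76
int broken = 120 + 120 + b + b;                    // 88, the output seen above
int fixed = 120 + 120 + (b & 0xFF) + (b & 0xFF);   // 600, masking recovers the unsigned value
System.out.println(broken + " / " + fixed);        // 88 / 600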
What is the best value for the buffer size when implementing a guitar tuner using FFT? I am getting an output, but the displayed value is not as accurate as I expected. I think it's an issue with the buffer size I allocated; I'm using 8000 as the buffer size. Are there any other suggestions for getting a more accurate result?
You can kinda wiggle the results around a bit. It's been a while since I've done FFT work, but if I recall, with a buffer of 8000, the Nth bucket would be (8000 / 2) / N Hz (is that right? It's been a long time). So the 79th through 81st buckets are 50.63, 50, and 49.38 Hz.
You can then do an FFT with a slightly different number of buckets. So if you dropped down to 6000 buckets, the 59th through 61st buckets would be 50.84, 50, and 49.18 Hz.
Now you've got an algorithm that you can use to home in on the specific frequency. I think it's O((log M) * (N log N)) where N is roughly the number of buckets you use each time, and M is the precision.
Update: Sample Stretching
public byte[] stretch(byte[] input, int newLength) {
    // assumes newLength <= input.length (shrinking), so j >= 1 inside the loop
    byte[] result = new byte[newLength];
    result[0] = input[0];
    for (int i = 1; i < newLength; i++) {
        float t = (float) i * input.length / newLength;   // float division, otherwise d is always 0
        int j = (int) t;
        float d = t - j;
        result[i] = (byte) (input[j - 1] * d + input[j] * (1 - d));
    }
    return result;
}
You might have to fix some of the casting to make sure you get the right numbers, but that looks about right.
i = index in result[]
j = index in input[] (truncated, i.e. rounded down)
d = percentage of input[j - 1] to use
1 - d = percentage of input[j] to use
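For example, to feed the FFT 6000 samples instead of 8000 (assuming the raw audio sits in a byte[] called samples, which is a hypothetical name):
byte[] resampled = stretch(samples, 6000);
// run the FFT over 'resampled' and compare the bucket frequencies with the 8000-sample run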