I wrote a conversion from YUV_420_888 to Bitmap, considering the following logic (as I understand it):
To summarize the approach: the kernel's x and y coordinates match both the x and y of the non-padded part of the Y plane (a 2-D allocation) and the x and y of the output Bitmap. The U and V planes, however, are structured differently from the Y plane: each byte covers 4 pixels, the PixelStride may be greater than one, and they may have padding that differs from the Y plane's. Therefore, to let the kernel access the U and V values efficiently, I put them into 1-D allocations and created an index "uvIndex" that gives the position of the corresponding U and V within those 1-D allocations for given (x,y) coordinates in the (non-padded) Y plane (and thus in the output Bitmap).
In order to keep the rs-Kernel lean, I excluded the padding area of the Y plane by capping the x-range via LaunchOptions (this accounts for the RowStride of the Y plane, which can therefore be ignored WITHIN the kernel). So we only need to consider uvPixelStride and uvRowStride within uvIndex, i.e. the index used to access the u- and v-values.
This is my code:
Renderscript Kernel, named yuv420888.rs
#pragma version(1)
#pragma rs java_package_name(com.xxxyyy.testcamera2);
#pragma rs_fp_relaxed
int32_t width;
int32_t height;
uint picWidth, uvPixelStride, uvRowStride ;
rs_allocation ypsIn,uIn,vIn;
// The LaunchOptions ensure that the Kernel does not enter the padding zone of Y, so yRowStride can be ignored WITHIN the Kernel.
uchar4 __attribute__((kernel)) doConvert(uint32_t x, uint32_t y) {
// index for accessing the uIn's and vIn's
uint uvIndex= uvPixelStride * (x/2) + uvRowStride*(y/2);
// get the y,u,v values
uchar yps= rsGetElementAt_uchar(ypsIn, x, y);
uchar u= rsGetElementAt_uchar(uIn, uvIndex);
uchar v= rsGetElementAt_uchar(vIn, uvIndex);
// calc argb
int4 argb;
argb.r = yps + v * 1436 / 1024 - 179;
argb.g = yps - u * 46549 / 131072 + 44 - v * 93604 / 131072 + 91;
argb.b = yps + u * 1814 / 1024 - 227;
argb.a = 255;
uchar4 out = convert_uchar4(clamp(argb, 0, 255));
return out;
}
Java side:
private Bitmap YUV_420_888_toRGB(Image image, int width, int height){
// Get the three image planes
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
byte[] y = new byte[buffer.remaining()];
buffer.get(y);
buffer = planes[1].getBuffer();
byte[] u = new byte[buffer.remaining()];
buffer.get(u);
buffer = planes[2].getBuffer();
byte[] v = new byte[buffer.remaining()];
buffer.get(v);
// get the relevant RowStrides and PixelStrides
// (we know from documentation that PixelStride is 1 for y)
int yRowStride= planes[0].getRowStride();
int uvRowStride= planes[1].getRowStride(); // we know from documentation that RowStride is the same for u and v.
int uvPixelStride= planes[1].getPixelStride(); // we know from documentation that PixelStride is the same for u and v.
// rs creation just for demo. Create rs just once in onCreate and use it again.
RenderScript rs = RenderScript.create(this);
//RenderScript rs = MainActivity.rs;
ScriptC_yuv420888 mYuv420=new ScriptC_yuv420888 (rs);
// Y,U,V are defined as global allocations, the out-Allocation is the Bitmap.
// Note also that uAlloc and vAlloc are 1-dimensional while yAlloc is 2-dimensional.
Type.Builder typeUcharY = new Type.Builder(rs, Element.U8(rs));
//using safe height
typeUcharY.setX(yRowStride).setY(y.length / yRowStride);
Allocation yAlloc = Allocation.createTyped(rs, typeUcharY.create());
yAlloc.copyFrom(y);
mYuv420.set_ypsIn(yAlloc);
Type.Builder typeUcharUV = new Type.Builder(rs, Element.U8(rs));
// note that the size of the u's and v's are as follows:
// ( (width/2)*PixelStride + padding ) * (height/2)
// = (RowStride ) * (height/2)
// but I noted that on the S7 it is 1 less...
typeUcharUV.setX(u.length);
Allocation uAlloc = Allocation.createTyped(rs, typeUcharUV.create());
uAlloc.copyFrom(u);
mYuv420.set_uIn(uAlloc);
Allocation vAlloc = Allocation.createTyped(rs, typeUcharUV.create());
vAlloc.copyFrom(v);
mYuv420.set_vIn(vAlloc);
// handover parameters
mYuv420.set_picWidth(width);
mYuv420.set_uvRowStride (uvRowStride);
mYuv420.set_uvPixelStride (uvPixelStride);
Bitmap outBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Allocation outAlloc = Allocation.createFromBitmap(rs, outBitmap, Allocation.MipmapControl.MIPMAP_NONE, Allocation.USAGE_SCRIPT);
Script.LaunchOptions lo = new Script.LaunchOptions();
lo.setX(0, width); // by this we ignore the y’s padding zone, i.e. the right side of x between width and yRowStride
//using safe height
lo.setY(0, y.length / yRowStride);
mYuv420.forEach_doConvert(outAlloc,lo);
outAlloc.copyTo(outBitmap);
return outBitmap;
}
Testing on a Nexus 7 (API 22) this returns nice color Bitmaps. That device, however, has trivial pixel strides (=1) and no padding (i.e. rowstride = width). Testing on the brand-new Samsung S7 (API 23) I get pictures whose colors are not correct, except for the green ones. The picture does not show a general bias towards green; it just seems that non-green colors are not reproduced correctly. Note that the S7 applies a u/v pixel stride of 2, and no padding.
Since the most crucial line is the access of the u/v planes within the rs code, uint uvIndex= (...), I think the problem could be there, probably an incorrect treatment of the pixel strides. Does anyone see the solution? Thanks.
UPDATE: I checked everything, and I am pretty sure that the code accessing y, u and v is correct. So the problem must be with the u and v values themselves. Non-green colors have a purple tilt, and looking at the u, v values they seem to lie in a rather narrow range of about 110-150. Is it really possible that we need to cope with device-specific YUV -> RGB conversions...?! Did I miss anything?
UPDATE 2: I have corrected the code; it works now, thanks to Eddy's feedback.
Look at
floor((float) uvPixelStride*(x)/2)
which calculates your U,V row offset (uv_row_offset) from the Y x-coordinate.
if uvPixelStride = 2, then as x increases:
x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 1
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 3
and this is incorrect. There's no valid U/V pixel value at uv_row_offset = 1 or 3, since uvPixelStride = 2.
You want
uvPixelStride * floor(x/2)
(assuming you don't trust yourself to remember the critical round-down behavior of integer divide; if you do, then):
uvPixelStride * (x/2)
should be enough
With that, your mapping becomes:
x = 0, uv_row_offset = 0
x = 1, uv_row_offset = 0
x = 2, uv_row_offset = 2
x = 3, uv_row_offset = 2
See if that fixes the color errors. In practice, the incorrect addressing here would mean every other color sample would be from the wrong color plane, since it's likely that the underlying YUV data is semiplanar (so the U plane starts at V plane + 1 byte, with the two planes interleaved)
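As a quick illustration (my addition, not part of the original answer), a tiny Java sketch reproduces both mappings and shows the round-down behavior the fix relies on:

// Hedged sketch: compare the incorrect and corrected U/V offset formulas for uvPixelStride = 2.
int uvPixelStride = 2;
for (int x = 0; x < 4; x++) {
    int wrong = (int) Math.floor((float) uvPixelStride * x / 2); // 0, 1, 2, 3
    int right = uvPixelStride * (x / 2);                         // 0, 0, 2, 2
    System.out.println("x=" + x + " wrong=" + wrong + " right=" + right);
}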
For people who encounter error
android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type
use buffer.capacity() instead of buffer.remaining()
and if you have already performed some operations on the image, you'll need to call the rewind() method on the buffer.
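A hedged sketch of that defensive read (plane and alloc are placeholder names for your own plane and Allocation objects, not part of the original answer):

// Hedged sketch: rewind and size by capacity() before copying a plane into an Allocation.
ByteBuffer buffer = plane.getBuffer();
buffer.rewind();                              // reset the position if the buffer was already read
byte[] bytes = new byte[buffer.capacity()];   // capacity() instead of remaining()
buffer.get(bytes, 0, buffer.remaining());
alloc.copyFrom(bytes);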
Furthermore, for anyone else getting
android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type
I fixed it by changing yAlloc.copyFrom(y); to yAlloc.copy1DRangeFrom(0, y.length, y);
Posting a full solution to convert YUV -> BGR (it can be adapted for other formats too) and also rotate the image to upright using RenderScript. An Allocation is used as input and a byte array as output. It was tested on Android 8+, including Samsung devices.
Java
/**
* Renderscript-based process to convert YUV_420_888 to BGR_888 and rotation to upright.
*/
public class ImageProcessor {
protected final String TAG = this.getClass().getSimpleName();
private Allocation mInputAllocation;
private Allocation mOutAllocLand;
private Allocation mOutAllocPort;
private Handler mProcessingHandler;
private ScriptC_yuv_bgr mConvertScript;
private byte[] frameBGR;
public ProcessingTask mTask;
private ImageListener listener;
private Supplier<Integer> rotation;
public ImageProcessor(RenderScript rs, Size dimensions, ImageListener listener, Supplier<Integer> rotation) {
this.listener = listener;
this.rotation = rotation;
int w = dimensions.getWidth();
int h = dimensions.getHeight();
Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.YUV(rs));
yuvTypeBuilder.setX(w);
yuvTypeBuilder.setY(h);
yuvTypeBuilder.setYuvFormat(ImageFormat.YUV_420_888);
mInputAllocation = Allocation.createTyped(rs, yuvTypeBuilder.create(),
Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
//keep 2 allocations to handle different image rotations
mOutAllocLand = createOutBGRAlloc(rs, w, h);
mOutAllocPort = createOutBGRAlloc(rs, h, w);
frameBGR = new byte[w*h*3];
HandlerThread processingThread = new HandlerThread(this.getClass().getSimpleName());
processingThread.start();
mProcessingHandler = new Handler(processingThread.getLooper());
mConvertScript = new ScriptC_yuv_bgr(rs);
mConvertScript.set_inWidth(w);
mConvertScript.set_inHeight(h);
mTask = new ProcessingTask(mInputAllocation);
}
private Allocation createOutBGRAlloc(RenderScript rs, int width, int height) {
//Stored as Vec4, it's impossible to store as Vec3, buffer size will be for Vec4 anyway
//using RGB_888 as alternative for BGR_888, can be just U8_3 type
Type.Builder rgbTypeBuilderPort = new Type.Builder(rs, Element.RGB_888(rs));
rgbTypeBuilderPort.setX(width);
rgbTypeBuilderPort.setY(height);
Allocation allocation = Allocation.createTyped(
rs, rgbTypeBuilderPort.create(), Allocation.USAGE_SCRIPT
);
//Use auto-padding to be able to copy to x*h*3 bytes array
allocation.setAutoPadding(true);
return allocation;
}
public Surface getInputSurface() {
return mInputAllocation.getSurface();
}
/**
* Simple class to keep track of incoming frame count,
* and to process the newest one in the processing thread
*/
class ProcessingTask implements Runnable, Allocation.OnBufferAvailableListener {
private int mPendingFrames = 0;
private Allocation mInputAllocation;
public ProcessingTask(Allocation input) {
mInputAllocation = input;
mInputAllocation.setOnBufferAvailableListener(this);
}
@Override
public void onBufferAvailable(Allocation a) {
synchronized(this) {
mPendingFrames++;
mProcessingHandler.post(this);
}
}
@Override
public void run() {
// Find out how many frames have arrived
int pendingFrames;
synchronized(this) {
pendingFrames = mPendingFrames;
mPendingFrames = 0;
// Discard extra messages in case processing is slower than frame rate
mProcessingHandler.removeCallbacks(this);
}
// Get to newest input
for (int i = 0; i < pendingFrames; i++) {
mInputAllocation.ioReceive();
}
int rot = rotation.get();
mConvertScript.set_currentYUVFrame(mInputAllocation);
mConvertScript.set_rotation(rot);
Allocation allocOut = rot==90 || rot== 270 ? mOutAllocPort : mOutAllocLand;
// Run processing
// ain allocation isn't really used, global frame param is used to get data from
mConvertScript.forEach_yuv_bgr(allocOut);
//Save to byte array, BGR 24bit
allocOut.copyTo(frameBGR);
int w = allocOut.getType().getX();
int h = allocOut.getType().getY();
if (listener != null) {
listener.onImageAvailable(frameBGR, w, h);
}
}
}
public interface ImageListener {
/**
* Called when there is available image, image is in upright position.
*
* @param bgr BGR 24bit bytes
* @param width image width
* @param height image height
*/
void onImageAvailable(byte[] bgr, int width, int height);
}
}
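For context, here is a hedged usage sketch (my addition; context, captureRequestBuilder and the chosen size are placeholders for your own Camera2 setup) showing how the processor's input Surface could be wired into a capture session:

// Hedged sketch: construct the processor and feed it frames from Camera2.
ImageProcessor processor = new ImageProcessor(
        RenderScript.create(context),
        new Size(1280, 720),                        // the stream size you configured
        (bgr, w, h) -> { /* consume upright BGR bytes here */ },
        () -> 90);                                  // supply the current rotation
// Add processor.getInputSurface() to the session's output surfaces and to the request:
captureRequestBuilder.addTarget(processor.getInputSurface());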
RS
#pragma version(1)
#pragma rs java_package_name(com.affectiva.camera)
#pragma rs_fp_relaxed
//Script converts YUV to BGR (uchar3)
//current YUV frame to read pixels from
rs_allocation currentYUVFrame;
//input image rotation: 0,90,180,270 clockwise
uint32_t rotation;
uint32_t inWidth;
uint32_t inHeight;
//method returns uchar3 BGR which will be set to x,y in output allocation
uchar3 __attribute__((kernel)) yuv_bgr(uint32_t x, uint32_t y) {
// Read in pixel values from latest frame - YUV color space
uchar3 inPixel;
uint32_t xRot = x;
uint32_t yRot = y;
//Do not rotate if 0
if (rotation==90) {
//rotate 270 clockwise
xRot = y;
yRot = inHeight - 1 - x;
} else if (rotation==180) {
xRot = inWidth - 1 - x;
yRot = inHeight - 1 - y;
} else if (rotation==270) {
//rotate 90 clockwise
xRot = inWidth - 1 - y;
yRot = x;
}
inPixel.r = rsGetElementAtYuv_uchar_Y(currentYUVFrame, xRot, yRot);
inPixel.g = rsGetElementAtYuv_uchar_U(currentYUVFrame, xRot, yRot);
inPixel.b = rsGetElementAtYuv_uchar_V(currentYUVFrame, xRot, yRot);
// Convert YUV to RGB, JFIF transform with fixed-point math
// R = Y + 1.402 * (V - 128)
// G = Y - 0.34414 * (U - 128) - 0.71414 * (V - 128)
// B = Y + 1.772 * (U - 128)
int3 bgr;
//get red pixel and assign to b
bgr.b = inPixel.r +
inPixel.b * 1436 / 1024 - 179;
bgr.g = inPixel.r -
inPixel.g * 46549 / 131072 + 44 -
inPixel.b * 93604 / 131072 + 91;
//get blue pixel and assign to red
bgr.r = inPixel.r +
inPixel.g * 1814 / 1024 - 227;
// Write out
return convert_uchar3(clamp(bgr, 0, 255));
}
On a Samsung Galaxy Tab 5 (tablet), Android version 5.1.1 (22), with alleged YUV_420_888 format, the following RenderScript math works well and produces correct colors:
uchar yValue = rsGetElementAt_uchar(gCurrentFrame, x + y * yRowStride);
uchar vValue = rsGetElementAt_uchar(gCurrentFrame, ( (x/2) + (y/4) * yRowStride ) + (xSize * ySize) );
uchar uValue = rsGetElementAt_uchar(gCurrentFrame, ( (x/2) + (y/4) * yRowStride ) + (xSize * ySize) + (xSize * ySize) / 4);
I do not understand why the vertical value (i.e., y) is scaled by a factor of four instead of two, but it works well. I also needed to avoid use of rsGetElementAtYuv_uchar_Y|U|V. I believe the associated allocation stride value is set to zero instead of something proper. Use of rsGetElementAt_uchar() is a reasonable workaround.
On a Samsung Galaxy S5 (smartphone), Android version 5.0 (21), with alleged YUV_420_888 format, I cannot recover the u and v values; they come through as all zeros. This results in a green-looking image. Luminance is OK, but the image is vertically flipped.
This code requires the use of the RenderScript compatibility library (android.support.v8.renderscript.*).
In order to get the compatibility library to work with Android API 23, I updated to gradle-plugin 2.1.0 and Build-Tools 23.0.3 as per Miao Wang's answer at How to create Renderscript scripts on Android Studio, and make them run?
If you follow his answer and the error "Gradle version 2.10 is required" appears, do NOT change
classpath 'com.android.tools.build:gradle:2.1.0'
Instead, update the distributionUrl field of the Project\gradle\wrapper\gradle-wrapper.properties file to
distributionUrl=https\://services.gradle.org/distributions/gradle-2.10-all.zip
and change File > Settings > Build, Execution, Deployment > Build Tools > Gradle > Gradle to "Use default gradle wrapper", as per the "Gradle Version 2.10 is required" error.
Re: RSIllegalArgumentException
In my case the cause was that buffer.remaining() was not a multiple of the stride:
the length of the last row was less than the stride (i.e. it only extended to where the actual data was).
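One possible workaround (a sketch under the assumption that the allocation is sized rowStride * rows; rowStride, rows, planeBytes and alloc are placeholder names) is to pad the plane bytes to a full multiple of the row stride before copying:

// Hedged sketch: pad the short last row so the array matches rowStride * rows.
byte[] padded = new byte[rowStride * rows];
System.arraycopy(planeBytes, 0, padded, 0, planeBytes.length);
alloc.copyFrom(padded);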
An FYI in case someone else gets this: I was also getting "android.support.v8.renderscript.RSIllegalArgumentException: Array too small for allocation type" when trying out the code. In my case it turned out that, when allocating the buffer for Y, I had to rewind the buffer because it was left at the wrong end and wasn't copying the data. Calling buffer.rewind(); before allocating the new byte array makes it work fine now.
I'm trying to implement a high-pass audio filter on the microphone data that I get from the AudioRecord.
The data I get from the microphone is a 16-bit PCM audio byte array. I was trying to use TarsosDSP, which provides an API for high-pass filtering. However, as input it requires a float array, so I converted the byte array into a float array and ran the high-pass filter. To confirm the results I saved the filtered data to a wave file, but it sounds totally distorted.
public static byte[] highPassFilter( byte[] buffer, WaveHeader waveHeader, float frequency) {
HighPass highPass = new HighPass(frequency, waveHeader.getSampleRate());
TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(waveHeader.getSampleRate(),waveHeader.getBitsPerSample(),waveHeader.getChannels(),true, false);
AudioEvent audioEvent = new AudioEvent(format);
float[] f_buffer = bytesToFloats(buffer);
audioEvent.setFloatBuffer(f_buffer);
highPass.process(audioEvent);
buffer = audioEvent.getByteBuffer();
byte[] data = PCMtoWav(buffer, waveHeader.getSampleRate(), waveHeader.getChannels(), waveHeader.getBitsPerSample());
writeWavFile(data);
return buffer;
}
public static float[] bytesToFloats(byte[] bytes) {
float[] floats = new float[bytes.length / 2];
for(int i=0; i < bytes.length; i+=2) {
floats[i/2] = bytes[i] | (bytes[i+1] < 128 ? (bytes[i+1] << 8) : ((bytes[i+1] - 256) << 8));
}
return floats;
}
The data in the waveHeader is:
Sample rate = 11025
getBitsPerSample = 16
getChannels = 1
My best guess is that the bytesToFloats conversion is wrong. To verify this I just set the float buffer of the audioEvent with audioEvent.setFloatBuffer and then retrieved it with audioEvent.getByteBuffer, which also resulted in a totally distorted audio file.
The byte buffer is read from the audioRecord:
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 220500);
....
buffer = new byte[frameByteSize];
audioRecord.read(buffer, 0, frameByteSize);
Does anybody have any idea how to fix this, or suggestions for different high-pass filters that I could use on a byte array in Android?
Update: I figured it out. This is my updated function to convert from bytes to floats:
public static float[] bytesToFloats(byte[] bytes) {
float[] floats = new float[bytes.length / 2];
short[] shorts = new short[bytes.length/2];
ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
for(int i=0; i < bytes.length; i+=2) {
floats[i/2] = shorts[i/2] / 32768f;
}
return floats;
}
Do the two-byte samples really represent float values? They could be signed shorts within the range of -32,768 to 32,767. Also, for a floating-point representation of samples, values within the range of -1.0 to 1.0 are common.
I would try:
short sample = (short) ((bytes[i] & 0xFF) | (bytes[i+1] < 128 ? (bytes[i+1] << 8) : ((bytes[i+1] - 256) << 8)));
floats[i/2] = (float) sample / 32768f;
You need to convert pairs of bytes into signed short and then scale it to a float in the range of -1.0 to 1.0.
One of the following lines depending on the endianness of the data will convert to signed 16-bit.
short shortSample = (short) ((bytes[i] & 0xFF) | (bytes[i+1] << 8)); // little-endian
short shortSample = (short) ((bytes[i] << 8) | (bytes[i+1] & 0xFF)); // big-endian
And then scale to float:
float sample = shortSample / 32768f;
I have sample code with a fixed sampling rate and FFT point count for audio recording. The code is:
private static final String FILE_NAME = "audiorecordtest.raw";
private static final int SAMPLING_RATE = 44100;
private static final int FFT_POINTS = 1024;
private static final int MAGIC_SCALE = 10;
private void proceed() {
double temp;
Complex[] y;
Complex[] complexSignal = new Complex[FFT_POINTS];
for (int i=0; i<FFT_POINTS; i++) {
temp = (double)((audioBuffer[2*i] & 0xFF) | (audioBuffer[2*i+1] << 8)) / 32768.0F;
complexSignal[i] = new Complex(temp * MAGIC_SCALE, 0d);
}
y = FFT.fft(complexSignal);
/*
* See http://developer.android.com/reference/android/media/audiofx/Visualizer.html#getFft(byte[]) for format explanation
*/
final byte[] y_byte = new byte[y.length*2];
y_byte[0] = (byte) y[0].re();
y_byte[1] = (byte) y[y.length - 1].re();
for (int i = 1; i < y.length - 1; i++) {
y_byte[i*2] = (byte) y[i].re();
y_byte[i*2+1] = (byte) y[i].im();
}
if (handler != null) {
handler.onFftDataCapture(y_byte);
}
}
That code is used to record a raw file from audio recording. However, I want to change SAMPLING_RATE to 16000. Can I use the same FFT_POINTS value of 1024? If not, please suggest how to compute it and MAGIC_SCALE. I tried using the same values but the sound appears noisy. Thanks.
The reference link is here
The FFT algorithm doesn't care about the sampling rate. I know that sounds somewhat non-intuitive, but each sample of the output (referred to as a bin) represents the magnitude of the content that is (SAMPLING_FREQUENCY / FFT_POINTS) Hz wide.
MAGIC_SCALE is just a value to scale the data and doesn't have a real impact when you're dealing with doubles. If it were a DFFT using 16-bit integers, you'd have a scaling factor to ensure your input doesn't saturate/overflow during its calculations.
Notice that the FFT function is never told what SAMPLING_FREQUENCY or MAGIC_SCALE is.
In the case of 44100 and 1024, each bin covers the spectral content of ~43 Hz. In the case of 16000, it's ~15 Hz.
If 44100 works and 16000 doesn't, the problem is probably in the code that manages your audioBuffer[] variable.
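To make the bin-width figures above concrete, a small sketch (my addition, not from the original answer):

// Hedged sketch: the spectral width of one FFT bin depends only on these two numbers.
static double binWidthHz(int samplingRate, int fftPoints) {
    return (double) samplingRate / fftPoints;
}
// binWidthHz(44100, 1024) is about 43.1 Hz; binWidthHz(16000, 1024) is about 15.6 Hz.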
I am using the Android platform. From the reference question below I learned that, using the AudioRecord class, which returns raw data, I can filter a range of audio frequencies depending on my needs, but for that I will need an algorithm. Can somebody please help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?
I tried "JTransform" but I don't know whether I can achieve this with JTransform or not. Currently I am using "jfftpack" to display visual effects, which works very well, but I can't achieve audio filtering with it.
Reference here
Help appreciated. Thanks in advance.
Following is my code. As I mentioned above, I am using the "jfftpack" library for display; you may find references to this library in the code, please don't get confused by that.
private class RecordAudio extends AsyncTask<Void, double[], Void> {
@Override
protected Void doInBackground(Void... params) {
try {
final AudioRecord audioRecord = findAudioRecord();
if(audioRecord == null){
return null;
}
final short[] buffer = new short[blockSize];
final double[] toTransform = new double[blockSize];
audioRecord.startRecording();
while (started) {
final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
}
transformer.ft(toTransform);
publishProgress(toTransform);
}
audioRecord.stop();
audioRecord.release();
} catch (Throwable t) {
Log.e("AudioRecord", "Recording Failed");
}
return null;
}
/**
* @param toTransform
*/
protected void onProgressUpdate(double[]... toTransform) {
canvas.drawColor(Color.BLACK);
for (int i = 0; i < toTransform[0].length; i++) {
int x = i;
int downy = (int) (100 - (toTransform[0][i] * 10));
int upy = 100;
canvas.drawLine(x, downy, x, upy, paint);
}
imageView.invalidate();
}
}
There are a lot of tiny details in this process that can potentially hang you up here. This code isn't tested and I don't do audio filtering very often so you should be extremely suspicious here. This is the basic process you would take for filtering audio:
Get audio buffer
Possible audio buffer conversion (byte to float)
(optional) Apply a windowing function, e.g. Hann (a sketch appears after the code below)
Take the FFT
Filter frequencies
Take inverse FFT
I'm assuming you have some basic knowledge of Android and audio recording, so I will cover steps 4-6 here.
//it is assumed that a float array audioBuffer exists with even length equal to
//the capture size of your audio buffer
//The size of the FFT will be the size of your audioBuffer / 2
int FFT_SIZE = bufferSize / 2;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type
//Take the FFT
mFFT.realForward(audioBuffer);
//The first 1/2 of audioBuffer now contains bins that represent the frequency
//of your wave, in a way. To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE
//assuming the length of audioBuffer is even, the real and imaginary parts will be
//stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2
//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;
//Loop through the fft bins and filter frequencies
for(int fftBin = 0; fftBin < FFT_SIZE; fftBin++){
//Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
float frequency = (float)fftBin * 44100F / (float)FFT_SIZE;
//Now filter the audio, I'm assuming you wanted to keep the
//frequencies of interest rather than discard them.
if(frequency < freqMin || frequency > freqMax){
//Calculate the index where the real and imaginary parts are stored
int real = 2 * fftBin;
int imaginary = 2 * fftBin + 1;
//zero out this frequency
audioBuffer[real] = 0;
audioBuffer[imaginary] = 0;
}
}
//Take the inverse FFT to convert signal from frequency to time domain
mFFT.realInverse(audioBuffer, false);
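The code above skips step 3; a minimal Hann-window sketch (my addition, assuming the same float[] audioBuffer) would be applied to the time-domain samples before realForward():

// Hedged sketch of the optional windowing step: multiply the samples by a Hann
// window before the forward FFT to reduce spectral leakage.
static void applyHannWindow(float[] audioBuffer) {
    int n = audioBuffer.length;
    for (int i = 0; i < n; i++) {
        float w = (float) (0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (n - 1))));
        audioBuffer[i] *= w;
    }
}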
What is the best value for the buffer size when implementing a guitar tuner using FFT? I am getting an output, but it seems that the value displayed is not as accurate as I expected. I think it's an issue with the buffer size I allocated. I'm using 8000 as the buffer size. Are there any other suggestions to get a more accurate result?
You can kinda wiggle the results around a bit. It's been a while since I've done FFT work, but if I recall, with a buffer of 8000, the Nth bucket would be (8000 / 2) / N Hz (is that right? It's been a long time). So the 79th through 81st buckets are 50.63, 50, and 49.38 Hz.
You can then do a FFT with a slightly different number of buckets. So if you dropped down to 6000 buckets, the 59th through 61st buckets would be 50.84, 50, and 49.18 Hz.
Now you've got an algorithm that you can use to home in on the specific frequency. I think it's O((log M) * (N log N)) where N is roughly the number of buckets you use each time, and M is the precision.
Update: Sample Stretching
public byte[] stretch(byte[] input, int newLength) {
byte[] result = new byte[newLength];
result[0] = input[0];
for (int i = 1; i < newLength; i++) {
float t = (float) i * (input.length - 1) / newLength; // avoid integer division and keep j in bounds
int j = (int) Math.ceil(t); // index in input[], rounded up
float d = j - t; // fraction of input[j - 1] to use
result[i] = (byte) (input[j - 1] * d + input[j] * (1 - d));
}
return result;
}
You might have to fix some of the casting to make sure you get the right numbers, but that looks about right.
i = index in result[]
j = index in input[] (rounded up)
d = percentage of input[j - 1] to use
1 - d = percentage of input[j] to use
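A hedged usage sketch of the idea (readBuffer and computeFftPeakHz are hypothetical stand-ins for your capture and FFT code, not real APIs):

// Hedged sketch: run the FFT at two effective resolutions by stretching the same
// capture to a different length, then compare where the peak lands in each pass.
byte[] captured = readBuffer(8000);           // hypothetical capture helper
byte[] stretched = stretch(captured, 6000);   // same audio, different bin spacing
double peakA = computeFftPeakHz(captured);    // hypothetical FFT peak helper
double peakB = computeFftPeakHz(stretched);
// Comparing the two estimates narrows down the true frequency; repeat with other lengths to home in.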