I've been scratching my head over this for a bit and the only thing I can conclude is that rsRand() is not implemented on the processor that is usually meant to run the script (e.g. GPU or CPU) or that it cannot be run in parallel.
Can anyone confirm this? If that is the case, is there a reference somewhere listing what functions are safe to use in relation to performance?
Is there any other way to get a random number without using rsRand()?
Here is my renderscript file:
#pragma version(1)
#pragma rs java_package_name(com.example.app)
#pragma rs_fp_relaxed
float width;
float height;
float3 p0, p1, p2, p3;
uchar4 __attribute__((kernel)) gradGen(uint32_t x, uint32_t y)
{
float3 result;
float hd = x / width;
float vd = y / height;
float noise = rsRand((float) 1 / 256) - ((float) 1 / 512); // CULPRIT
hd = 3 * hd * hd - 2 * hd * hd * hd;
vd = 3 * vd * vd - 2 * vd * vd * vd;
result.r = (1 - vd) * ((1 - hd) * p0.r + hd * p1.r) + vd * ((1 - hd) * p3.r + hd * p2.r) + noise;
result.g = (1 - vd) * ((1 - hd) * p0.g + hd * p1.g) + vd * ((1 - hd) * p3.g + hd * p2.g) + noise;
result.b = (1 - vd) * ((1 - hd) * p0.b + hd * p1.b) + vd * ((1 - hd) * p3.b + hd * p2.b) + noise;
return rsPackColorTo8888(result);
}
rsRand() calls the platform rand() on most implementations (that's how it's implemented in the CPU backend, I don't know that any RS GPU drivers actually implement RNGs in their drivers), so it's going to be significantly more heavyweight and slower than something like simple shifts and XORs.
And yeah, looking at the bionic implementation of rand(), you're right that it's serialized. Maybe I'll get someone to port a Mersenne Twister sometime.
Instead of wondering, I decided to do the dumb thing and write my own replacement for rsRand(). Xorshift was simple enough; here is the extra code for implementing the PRNG:
// xorshift128 state (the seed values are arbitrary non-zero constants)
uint32_t r0 = 0x6635e5ce, r1 = 0x13bf026f, r2 = 0x43225b59, r3 = 0x3b0314d0;
uchar4 __attribute__((kernel)) gradGen(uint32_t x, uint32_t y)
{
...
// Generate a random number between 0 and 1
uint32_t t = r0 ^ (r0 << 11);
r0 = r1; r1 = r2; r2 = r3;
r3 = r3 ^ (r3 >> 19) ^ t ^ (t >> 8);
float rnd = (float) r3 / 0xffffffff;
...
}
The above is fast, and the quality of the random numbers is good enough for my application. I'd still be interested to know the details behind the rsRand() slowdown.
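One caveat with the snippet above: the four state words are script globals shared by every kernel invocation, so parallel threads race on their updates (often acceptable for visual noise, but worth knowing). A race-free alternative, sketched below under my own assumptions (the hash constants and the function name are illustrative, not from the original post), derives the value from the pixel coordinates alone:
// Hypothetical per-pixel variant: hash the coordinates instead of
// sharing mutable state, so every invocation is independent.
static float randForPixel(uint32_t x, uint32_t y) {
    uint32_t h = (x * 0x9E3779B9u) ^ (y * 0x85EBCA6Bu) ^ 0x6635e5ceu;
    // one xorshift32 mixing step (shift amounts 13/17/5)
    h ^= h << 13;
    h ^= h >> 17;
    h ^= h << 5;
    return (float) h / 0xffffffff; // map to [0, 1]
}
Inside gradGen(), the noise term would then become randForPixel(x, y) / 256 - 1.0f / 512.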
Related
Based on the discussion I had at Camera2 api Imageformat.yuv_420_888 results on rotated image, I wanted to know how to adjust the lookup done via the rsGetElementAt_uchar methods so that the YUV data is rotated by 90 degrees.
I also have a project like the HdrViewfinder provided by Google. The problem is that the output is in landscape, because the target output surface is connected to the YUV allocation, which does not care whether the device is in landscape or portrait mode. I want to adjust the code so that the output is in portrait mode.
Therefore, I took a custom YUVToRGBA renderscript, but I do not know what to change to rotate the output.
Can somebody help me adjust the following custom YUVtoRGBA script by 90 degrees, because I want to use the output in portrait mode:
// Needed directive for RS to work
#pragma version(1)
// The java_package_name directive needs to use your Activity's package path
#pragma rs java_package_name(net.hydex11.cameracaptureexample)
rs_allocation inputAllocation;
int wIn, hIn;
int numTotalPixels;
// Function to invoke before applying conversion
void setInputImageSize(int _w, int _h)
{
wIn = _w;
hIn = _h;
numTotalPixels = wIn * hIn;
}
// Kernel that converts a YUV element to a RGBA one
uchar4 __attribute__((kernel)) convert(uint32_t x, uint32_t y)
{
// YUV 4:2:0 planar image, with 8 bit Y samples, followed by
// interleaved V/U plane with 8bit 2x2 subsampled chroma samples
int baseIdx = x + y * wIn;
int baseUYIndex = numTotalPixels + (y >> 1) * wIn + (x & 0xfffffe);
uchar _y = rsGetElementAt_uchar(inputAllocation, baseIdx);
uchar _u = rsGetElementAt_uchar(inputAllocation, baseUYIndex);
uchar _v = rsGetElementAt_uchar(inputAllocation, baseUYIndex + 1);
_y = _y < 16 ? 16 : _y;
short Y = ((short)_y) - 16;
short U = ((short)_u) - 128;
short V = ((short)_v) - 128;
uchar4 out;
out.r = (uchar) clamp((float)(
(Y * 298 + V * 409 + 128) >> 8), 0.f, 255.f);
out.g = (uchar) clamp((float)(
(Y * 298 - U * 100 - V * 208 + 128) >> 8), 0.f, 255.f);
out.b = (uchar) clamp((float)(
(Y * 298 + U * 516 + 128) >> 8), 0.f, 255.f);
out.a = 255;
return out;
}
I have found that custom script at https://bitbucket.org/cmaster11/rsbookexamples/src/tip/CameraCaptureExample/app/src/main/rs/customYUVToRGBAConverter.fs .
Here someone has put the Java code to rotate YUV data. But I want to do it in Renderscript since that is faster.
Any help would be great.
I'm assuming you want the output to be in RGBA, as in your conversion script. You should be able to use an approach like that used in this answer; that is, simply modify the x and y coordinates as the first step in the convert kernel:
//Rotate 90 deg clockwise during the conversion
uchar4 __attribute__((kernel)) convert(uint32_t inX, uint32_t inY)
{
uint32_t x = wIn - 1 - inY;
uint32_t y = inX;
//...rest of the function
Note the changes to the parameter names.
This presumes you have set up the output dimensions correctly (see linked answer). A 270 degree rotation can be accomplished in a similar way.
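For completeness, here is a rough sketch of the host-side setup this presumes; the variable names are mine, not from the linked answer. The key point is that the output Allocation is created with width and height swapped, since a 90-degree rotation exchanges the two dimensions:
// Hypothetical host-side wiring for the rotated conversion.
Type.Builder outType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(inputHeight)   // rotated output width = input height
        .setY(inputWidth);   // rotated output height = input width
Allocation outAlloc = Allocation.createTyped(rs, outType.create());
script.set_inputAllocation(yuvAlloc);                     // raw YUV bytes
script.invoke_setInputImageSize(inputWidth, inputHeight); // sets wIn/hIn
script.forEach_convert(outAlloc); // the kernel runs over the output coordinates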
I use RenderScript to do a Gaussian blur on an image,
but no matter what I do, ScriptIntrinsicBlur is much faster.
Why is that? Does ScriptIntrinsicBlur use a different method?
This is my RS code:
#pragma version(1)
#pragma rs java_package_name(top.deepcolor.rsimage.utils)
//Gaussian blur algorithm.
//the max radius of the gaussian blur
static const int MAX_BLUR_RADIUS = 1024;
//the normalized gaussian weights used in the blur
float blurRatio[(MAX_BLUR_RADIUS << 2) + 1];
//the default blur radius
int blurRadius = 0;
//the width and height of bitmap
uint32_t width;
uint32_t height;
//bind to the input bitmap
rs_allocation input;
//the temp alloction
rs_allocation temp;
//set the radius
void setBlurRadius(int radius)
{
if(1 > radius)
radius = 1;
else if(MAX_BLUR_RADIUS < radius)
radius = MAX_BLUR_RADIUS;
blurRadius = radius;
/**
calculate the blur weights with the Gaussian function;
when a pixel is far away from the center, it contributes little to the center,
so take sigma as blurRadius / 2.57
*/
float sigma = 1.0f * blurRadius / 2.57f;
float deno = 1.0f / (sigma * sqrt(2.0f * M_PI));
float nume = -1.0 / (2.0f * sigma * sigma);
//calculate the gaussian function
float sum = 0.0f;
for(int i = 0, r = -blurRadius; r <= blurRadius; ++i, ++r)
{
blurRatio[i] = deno * exp(nume * r * r);
sum += blurRatio[i];
}
//normalization to 1
int len = radius + radius + 1;
for(int i = 0; i < len; ++i)
{
blurRatio[i] /= sum;
}
}
/**
the gaussian blur is decomposed into two steps:
1.blur in the horizontal
2.blur in the vertical
*/
uchar4 RS_KERNEL horizontal(uint32_t x, uint32_t y)
{
float a = 0, r = 0, g = 0, b = 0; // accumulators must start at zero
for(int k = -blurRadius; k <= blurRadius; ++k)
{
int horizontalIndex = x + k;
if(0 > horizontalIndex) horizontalIndex = 0;
if(width <= horizontalIndex) horizontalIndex = width - 1;
uchar4 inputPixel = rsGetElementAt_uchar4(input, horizontalIndex, y);
int blurRatioIndex = k + blurRadius;
a += inputPixel.a * blurRatio[blurRatioIndex];
r += inputPixel.r * blurRatio[blurRatioIndex];
g += inputPixel.g * blurRatio[blurRatioIndex];
b += inputPixel.b * blurRatio[blurRatioIndex];
}
uchar4 out;
out.a = (uchar) a;
out.r = (uchar) r;
out.g = (uchar) g;
out.b = (uchar) b;
return out;
}
uchar4 RS_KERNEL vertical(uint32_t x, uint32_t y)
{
float a = 0, r = 0, g = 0, b = 0; // accumulators must start at zero
for(int k = -blurRadius; k <= blurRadius; ++k)
{
int verticalIndex = y + k;
if(0 > verticalIndex) verticalIndex = 0;
if(height <= verticalIndex) verticalIndex = height - 1;
uchar4 inputPixel = rsGetElementAt_uchar4(temp, x, verticalIndex);
int blurRatioIndex = k + blurRadius;
a += inputPixel.a * blurRatio[blurRatioIndex];
r += inputPixel.r * blurRatio[blurRatioIndex];
g += inputPixel.g * blurRatio[blurRatioIndex];
b += inputPixel.b * blurRatio[blurRatioIndex];
}
uchar4 out;
out.a = (uchar) a;
out.r = (uchar) r;
out.g = (uchar) g;
out.b = (uchar) b;
return out;
}
Renderscript intrinsics are implemented very differently from what you can achieve with a script of your own. This is for several reasons, but mainly because they are built by the RS driver developer of individual devices in a way that makes the best possible use of that particular hardware/SoC configuration, and most likely makes low level calls to the hardware that is simply not available at the RS programming layer.
Android does provide a generic implementation of these intrinsics though, to sort of "fall back" in case no lower hardware implementation is available. Seeing how these generic ones are done will give you some better idea of how these intrinsics work. For example, you can see the source code of the generic implementation of the 3x3 convolution intrinsic here rsCpuIntrinsicConvolve3x3.cpp.
Take a very close look at the code starting from line 98 of that source file, and notice how they use no for loops whatsoever to do the convolution. This is known as loop unrolling, where you explicitly add and multiply the 9 corresponding memory locations in the code, thereby avoiding the need for a for loop structure. This is the first rule you must take into account when optimizing parallel code: you need to get rid of all branching in your kernel. Looking at your code, you have a lot of if's and for's that cause branching -- this means the control flow of the program is not straight through from beginning to end.
If you unroll your for loops, you will immediately see a boost in performance. Note that by removing your for structures you will no longer be able to generalize your kernel for all possible radius amounts. In that case, you would have to create fixed kernels for different radii, and this is exactly why you see separate 3x3 and 5x5 convolution intrinsics, because this is just what they do. (See line 99 of the 5x5 intrinsic at rsCpuIntrinsicConvolve5x5.cpp).
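To make that concrete, here is a sketch of what an unrolled horizontal pass could look like for a fixed radius of 2; edge clamping is omitted for brevity (assume the original clamp logic or a restricted launch range), and the float4 vector math is my addition, not part of the question's code:
// Hypothetical unrolled horizontal pass for a fixed radius of 2.
// blurRatio[0..4] is assumed to hold the normalized Gaussian weights.
uchar4 RS_KERNEL horizontal5(uint32_t x, uint32_t y)
{
    float4 sum = convert_float4(rsGetElementAt_uchar4(input, x - 2, y)) * blurRatio[0]
               + convert_float4(rsGetElementAt_uchar4(input, x - 1, y)) * blurRatio[1]
               + convert_float4(rsGetElementAt_uchar4(input, x,     y)) * blurRatio[2]
               + convert_float4(rsGetElementAt_uchar4(input, x + 1, y)) * blurRatio[3]
               + convert_float4(rsGetElementAt_uchar4(input, x + 2, y)) * blurRatio[4];
    return convert_uchar4(sum); // weights sum to 1, so no extra clamping needed
}
Working on all four channels as a single float4 also turns four scalar multiply-adds per tap into one vector operation, which is exactly the kind of thing the intrinsics exploit.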
Furthermore, the fact that you have two separate kernels doesn't help. If you're doing a gaussian blur, the convolutional kernel is indeed separable and you can do 1xN + Nx1 convolutions as you've done there, but I would recommend putting both passes together in the same kernel.
Keep in mind though, that even doing these tricks will probably still not give you as fast results as the actual intrinsics, because those have probably been highly optimized for your specific device(s).
Background
I've made a tiny Android library for handling bitmaps using JNI (link here)
A long time ago, I wrote some Bilinear Interpolation code as a possible algorithm for scaling images. The algorithm is a bit complex and uses the surrounding pixels to form the target pixel.
The problem
Even though there are no errors (no compilation errors and no runtime errors), the output image looks like this (width scaled by x2):
The code
Basically, the original Java code used SWT and supported only RGB, but it's the same for the alpha channel. It worked perfectly before (though now that I look at it, it seems to create a lot of objects along the way).
Here's the Java code:
/** class for resizing imageData using the Bilinear Interpolation method */
public class BilinearInterpolation
{
/** the method for resizing the imageData using the Bilinear Interpolation algorithm */
public static void resize(final ImageData inputImageData,final ImageData newImageData,final int oldWidth,final int oldHeight,final int newWidth,final int newHeight)
{
// position of the top left pixel of the 4 pixels to use interpolation on
int xTopLeft,yTopLeft;
int x,y,lastTopLefty;
final float xRatio=(float)newWidth/(float)oldWidth,yratio=(float)newHeight/(float)oldHeight;
// Y color ratio to use on left and right pixels for interpolation
float ycRatio2=0,ycRatio1=0;
// pixel target in the src
float xt,yt;
// X color ratio to use on left and right pixels for interpolation
float xcRatio2=0,xcratio1=0;
// copy data from source image to RGB values:
RGB rgbTopLeft,rgbTopRight,rgbBottomLeft=null,rgbBottomRight=null,rgbTopMiddle=null,rgbBottomMiddle=null;
RGB[][] startingImageData;
startingImageData=new RGB[oldWidth][oldHeight];
for(x=0;x<oldWidth;++x)
for(y=0;y<oldHeight;++y)
{
rgbTopLeft=inputImageData.palette.getRGB(inputImageData.getPixel(x,y));
startingImageData[x][y]=new RGB(rgbTopLeft.red,rgbTopLeft.green,rgbTopLeft.blue);
}
// do the resizing:
for(x=0;x<newWidth;x++)
{
xTopLeft=(int)(xt=x/xRatio);
// when meeting the most right edge, move left a little
if(xTopLeft>=oldWidth-1)
xTopLeft--;
if(xt<=xTopLeft+1)
{
// we are between the left and right pixel
xcratio1=xt-xTopLeft;
// color ratio in favor of the right pixel color
xcRatio2=1-xcratio1;
}
for(y=0,lastTopLefty=Integer.MIN_VALUE;y<newHeight;y++)
{
yTopLeft=(int)(yt=y/yratio);
// when meeting the most bottom edge, move up a little
if(yTopLeft>=oldHeight-1)
yTopLeft--;
// we went down only one rectangle
if(lastTopLefty==yTopLeft-1)
{
rgbTopLeft=rgbBottomLeft;
rgbTopRight=rgbBottomRight;
rgbTopMiddle=rgbBottomMiddle;
rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
rgbBottomMiddle=new RGB((int)(rgbBottomLeft.red*xcRatio2+rgbBottomRight.red*xcratio1),(int)(rgbBottomLeft.green*xcRatio2+rgbBottomRight.green*xcratio1),(int)(rgbBottomLeft.blue*xcRatio2+rgbBottomRight.blue*xcratio1));
}
else if(lastTopLefty!=yTopLeft)
{
// we went to a totally different rectangle (happens in every loop start,and might happen more when making the picture smaller)
rgbTopLeft=startingImageData[xTopLeft][yTopLeft];
rgbTopRight=startingImageData[xTopLeft+1][yTopLeft];
rgbTopMiddle=new RGB((int)(rgbTopLeft.red*xcRatio2+rgbTopRight.red*xcratio1),(int)(rgbTopLeft.green*xcRatio2+rgbTopRight.green*xcratio1),(int)(rgbTopLeft.blue*xcRatio2+rgbTopRight.blue*xcratio1));
rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
rgbBottomMiddle=new RGB((int)(rgbBottomLeft.red*xcRatio2+rgbBottomRight.red*xcratio1),(int)(rgbBottomLeft.green*xcRatio2+rgbBottomRight.green*xcratio1),(int)(rgbBottomLeft.blue*xcRatio2+rgbBottomRight.blue*xcratio1));
}
lastTopLefty=yTopLeft;
if(yt<=yTopLeft+1)
{
// color ratio in favor of the bottom pixel color
ycRatio1=yt-yTopLeft;
ycRatio2=1-ycRatio1;
}
// prepared all pixels to look at, so finally set the new pixel data
newImageData.setPixel(x,y,inputImageData.palette.getPixel(new RGB((int)(rgbTopMiddle.red*ycRatio2+rgbBottomMiddle.red*ycRatio1),(int)(rgbTopMiddle.green*ycRatio2+rgbBottomMiddle.green*ycRatio1),(int)(rgbTopMiddle.blue*ycRatio2+rgbBottomMiddle.blue*ycRatio1))));
}
}
}
}
And here's the C/C++ code I've tried to make from it:
typedef struct
{
uint8_t alpha, red, green, blue;
} ARGB;
int32_t convertArgbToInt(ARGB argb)
{
// pack in the exact inverse order of convertIntToArgb below
return (argb.alpha) | (argb.red << 24) | (argb.green << 16)
| (argb.blue << 8);
}
void convertIntToArgb(uint32_t pixel, ARGB* argb)
{
argb->red = ((pixel >> 24) & 0xff);
argb->green = ((pixel >> 16) & 0xff);
argb->blue = ((pixel >> 8) & 0xff);
argb->alpha = (pixel & 0xff);
}
...
/**scales the image using a high-quality algorithm called "Bilinear Interpolation" */
JNIEXPORT void JNICALL Java_com_jni_bitmap_1operations_JniBitmapHolder_jniScaleBIBitmap(
JNIEnv * env, jobject obj, jobject handle, uint32_t newWidth,
uint32_t newHeight)
{
JniBitmap* jniBitmap = (JniBitmap*) env->GetDirectBufferAddress(handle);
if (jniBitmap->_storedBitmapPixels == NULL)
return;
uint32_t oldWidth = jniBitmap->_bitmapInfo.width;
uint32_t oldHeight = jniBitmap->_bitmapInfo.height;
uint32_t* previousData = jniBitmap->_storedBitmapPixels;
uint32_t* newBitmapPixels = new uint32_t[newWidth * newHeight];
// position of the top left pixel of the 4 pixels to use interpolation on
int xTopLeft, yTopLeft;
int x, y, lastTopLefty;
float xRatio = (float) newWidth / (float) oldWidth, yratio =
(float) newHeight / (float) oldHeight;
// Y color ratio to use on left and right pixels for interpolation
float ycRatio2 = 0, ycRatio1 = 0;
// pixel target in the src
float xt, yt;
// X color ratio to use on left and right pixels for interpolation
float xcRatio2 = 0, xcratio1 = 0;
ARGB rgbTopLeft, rgbTopRight, rgbBottomLeft, rgbBottomRight, rgbTopMiddle,
rgbBottomMiddle, result;
for (x = 0; x < newWidth; ++x)
{
xTopLeft = (int) (xt = x / xRatio);
// when meeting the most right edge, move left a little
if (xTopLeft >= oldWidth - 1)
xTopLeft--;
if (xt <= xTopLeft + 1)
{
// we are between the left and right pixel
xcratio1 = xt - xTopLeft;
// color ratio in favor of the right pixel color
xcRatio2 = 1 - xcratio1;
}
for (y = 0, lastTopLefty = -30000; y < newHeight; ++y)
{
yTopLeft = (int) (yt = y / yratio);
// when meeting the most bottom edge, move up a little
if (yTopLeft >= oldHeight - 1)
--yTopLeft;
if (lastTopLefty == yTopLeft - 1)
{
// we went down only one rectangle
rgbTopLeft = rgbBottomLeft;
rgbTopRight = rgbBottomRight;
rgbTopMiddle = rgbBottomMiddle;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
else if (lastTopLefty != yTopLeft)
{
// we went to a totally different rectangle (happens in every loop start,and might happen more when making the picture smaller)
//rgbTopLeft=startingImageData[xTopLeft][yTopLeft];
convertIntToArgb(previousData[(yTopLeft * oldWidth) + xTopLeft],
&rgbTopLeft);
//rgbTopRight=startingImageData[xTopLeft+1][yTopLeft];
convertIntToArgb(
previousData[(yTopLeft * oldWidth) + (xTopLeft + 1)],
&rgbTopRight);
rgbTopMiddle.alpha = rgbTopLeft.alpha * xcRatio2
+ rgbTopRight.alpha * xcratio1;
rgbTopMiddle.red = rgbTopLeft.red * xcRatio2
+ rgbTopRight.red * xcratio1;
rgbTopMiddle.green = rgbTopLeft.green * xcRatio2
+ rgbTopRight.green * xcratio1;
rgbTopMiddle.blue = rgbTopLeft.blue * xcRatio2
+ rgbTopRight.blue * xcratio1;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
lastTopLefty = yTopLeft;
if (yt <= yTopLeft + 1)
{
// color ratio in favor of the bottom pixel color
ycRatio1 = yt - yTopLeft;
ycRatio2 = 1 - ycRatio1;
}
// prepared all pixels to look at, so finally set the new pixel data
result.alpha = rgbTopMiddle.alpha * ycRatio2
+ rgbBottomMiddle.alpha * ycRatio1;
result.blue = rgbTopMiddle.blue * ycRatio2
+ rgbBottomMiddle.blue * ycRatio1;
result.red = rgbTopMiddle.red * ycRatio2
+ rgbBottomMiddle.red * ycRatio1;
result.green = rgbTopMiddle.green * ycRatio2
+ rgbBottomMiddle.green * ycRatio1;
newBitmapPixels[(y * newWidth) + x] = convertArgbToInt(result);
}
}
//get rid of old data, and replace it with new one
delete[] previousData;
jniBitmap->_storedBitmapPixels = newBitmapPixels;
jniBitmap->_bitmapInfo.width = newWidth;
jniBitmap->_bitmapInfo.height = newHeight;
}
The question
What am I doing wrong?
Is it also possible to make the code a bit more readable? I'm a bit rusty on C/C++ and I was more of a C developer than a C++ developer.
EDIT: now it works fine. I've edited and fixed the code.
The only thing you can still help with is giving tips on how to make it better.
OK, it all started with a bad conversion of the colors, then moved on to the usage of pointers, and finally to the basic question of where to put the pixels.
The code I've written now works fine (added all of the needed fixes).
Soon you will all be able to use the new code in the GitHub project.
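As a readability tip, the per-pixel work can be collapsed into one small helper that applies the standard bilinear formula directly. This is only a sketch of the idea, not the code that ended up in the repository, and it assumes the same ARGB struct shown above:
// Hypothetical compact reference for the inner interpolation step.
// fx, fy are the fractional offsets inside the source cell; tl, tr,
// bl, br are the four surrounding source pixels.
static ARGB bilinear(ARGB tl, ARGB tr, ARGB bl, ARGB br, float fx, float fy)
{
    ARGB out;
    // interpolate horizontally on both rows, then vertically between them
    out.alpha = (uint8_t) ((tl.alpha * (1 - fx) + tr.alpha * fx) * (1 - fy)
            + (bl.alpha * (1 - fx) + br.alpha * fx) * fy);
    out.red = (uint8_t) ((tl.red * (1 - fx) + tr.red * fx) * (1 - fy)
            + (bl.red * (1 - fx) + br.red * fx) * fy);
    out.green = (uint8_t) ((tl.green * (1 - fx) + tr.green * fx) * (1 - fy)
            + (bl.green * (1 - fx) + br.green * fx) * fy);
    out.blue = (uint8_t) ((tl.blue * (1 - fx) + tr.blue * fx) * (1 - fy)
            + (bl.blue * (1 - fx) + br.blue * fx) * fy);
    return out;
}
The caching of the two row-middles in the original is an optimization on top of this; the helper just makes the underlying math visible.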
I have been trying to animate an image of a fly which moves along a path like the one in the following image (which I have added for a clearer idea), on Android 2.2.
Well, this can be done in a very simple manner on the iPhone, as it has a property for setting this auto-rotation after the path is drawn:
animation.rotationMode = kCAAnimationRotateAuto;
which I believe rotates the object based on the path.
I am able to animate my fly along this path using the nineoldandroids library with the methods
path.moveTo(float x, float y);
path.lineTo(float x, float y);
path.curveTo(float c0X, float c0Y, float c1X, float c1Y, float x, float y);
Such that the curves are drawn as cubic Bézier curves.
Now, what I have been trying is to implement something that would allow my fly to rotate itself along the path, and I just can't seem to get anywhere.
Please help me out with some ideas! :(
You have to download the nineoldandroids demo and library, plus these 4 Java files, if you want to use my solution.
That was easy; I modified the evaluator in the nineoldandroids demo.
It's too much to post here:
Just to get the idea:
I extended PathPoint with an angle field.
Then I write all calculated points into a stack (a simple float[][]).
After the first calculation, the angle can be computed with atan from the last 2 points in the stack.
If you don't want to use a stack, you can modify the time parameter to look ahead to where the next point will be drawn, and calculate the angle from those two points.
Just think about it: do you first look where you are walking and then walk, or do you walk first and then choose the angle toward the destination?
In practice it hardly matters, since display densities are high enough and the angle is calculated for each pixel.
Here's the PathEvaluator
public class PathEvaluatorAngle implements TypeEvaluator<PathPointAngle> {
private static final int POINT_COUNT = 5000;
private float[][] stack = new float[POINT_COUNT][2];
private int stackC = 0;
@Override
public PathPointAngle evaluate(float t, PathPointAngle startValue, PathPointAngle endValue) {
float x, y;
if (endValue.mOperation == PathPointAngle.CURVE) {
float oneMinusT = 1 - t;
x = oneMinusT * oneMinusT * oneMinusT * startValue.mX +
3 * oneMinusT * oneMinusT * t * endValue.mControl0X +
3 * oneMinusT * t * t * endValue.mControl1X +
t * t * t * endValue.mX;
y = oneMinusT * oneMinusT * oneMinusT * startValue.mY +
3 * oneMinusT * oneMinusT * t * endValue.mControl0Y +
3 * oneMinusT * t * t * endValue.mControl1Y +
t * t * t * endValue.mY;
} else if (endValue.mOperation == PathPointAngle.LINE) {
x = startValue.mX + t * (endValue.mX - startValue.mX);
y = startValue.mY + t * (endValue.mY - startValue.mY);
} else {
x = endValue.mX;
y = endValue.mY;
}
// check the bounds before writing, otherwise the write itself overflows
if (stackC >= POINT_COUNT){
throw new IllegalStateException("set the stack POINT_COUNT higher!");
}
stack[stackC][0] = x;
stack[stackC][1] = y;
double angle;
if (stackC == 0){
angle = 0;
} else {
angle = Math.atan(
(stack[stackC][1] - stack[stackC-1][1]) /
(stack[stackC][0] - stack[stackC-1][0])
) * 180d/Math.PI;
}
stackC++;
return PathPointAngle.moveTo(x, y, angle);
}
}
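One refinement worth considering (my suggestion, not part of the answer above): Math.atan only returns angles in (-90°, 90°), so the sprite will flip whenever the path moves right to left. Math.atan2 takes the two deltas separately and resolves the full 360° range:
// Hypothetical replacement for the atan branch above: atan2 keeps
// the correct quadrant when the path direction reverses.
angle = Math.toDegrees(Math.atan2(
        stack[stackC][1] - stack[stackC - 1][1],   // dy
        stack[stackC][0] - stack[stackC - 1][0])); // dx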
Please check the link below. Hope it will help.
https://github.com/JakeWharton/NineOldAndroids
How do I filter noise out of accelerometer data in Android? I would like to create a high-pass filter for my sample data so that I can eliminate low-frequency components and focus on the high-frequency ones. I have read that a Kalman filter might be the best candidate for this, but how do I integrate or use this method in my application, which will be written mostly in Java for Android? Or can it be done in the first place? Or through the Android NDK? Is there any chance this can be done in real time?
Any idea will be much appreciated. Thank you!
The samples from Apple's SDK actually implement the filtering in an even simpler way, by using ramping:
//ramp-speed - play with this value until satisfied
const float kFilteringFactor = 0.1f;
//last result storage - keep definition outside of this function, eg. in wrapping object
float accel[3];
//acceleration.x,.y,.z is the input from the sensor
//result.x,.y,.z is the filtered result
//high-pass filter to eliminate gravity
accel[0] = acceleration.x * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
accel[1] = acceleration.y * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);
accel[2] = acceleration.z * kFilteringFactor + accel[2] * (1.0f - kFilteringFactor);
result.x = acceleration.x - accel[0];
result.y = acceleration.y - accel[1];
result.z = acceleration.z - accel[2];
Here's the code for Android, adapted from the Apple adaptive high-pass filter example. Just plug this in and implement onFilteredAccelerometerChanged():
private static final boolean ADAPTIVE_ACCEL_FILTER = true;
float lastAccel[] = new float[3];
float accelFilter[] = new float[3];
public void onAccelerometerChanged(float accelX, float accelY, float accelZ) {
// high pass filter
float updateFreq = 30; // match this to your update speed
float cutOffFreq = 0.9f;
float RC = 1.0f / cutOffFreq;
float dt = 1.0f / updateFreq;
float filterConstant = RC / (dt + RC);
float alpha = filterConstant;
float kAccelerometerMinStep = 0.033f;
float kAccelerometerNoiseAttenuation = 3.0f;
if(ADAPTIVE_ACCEL_FILTER)
{
float d = clamp(Math.abs(norm(accelFilter[0], accelFilter[1], accelFilter[2]) - norm(accelX, accelY, accelZ)) / kAccelerometerMinStep - 1.0f, 0.0f, 1.0f);
alpha = d * filterConstant / kAccelerometerNoiseAttenuation + (1.0f - d) * filterConstant;
}
accelFilter[0] = (float) (alpha * (accelFilter[0] + accelX - lastAccel[0]));
accelFilter[1] = (float) (alpha * (accelFilter[1] + accelY - lastAccel[1]));
accelFilter[2] = (float) (alpha * (accelFilter[2] + accelZ - lastAccel[2]));
lastAccel[0] = accelX;
lastAccel[1] = accelY;
lastAccel[2] = accelZ;
onFilteredAccelerometerChanged(accelFilter[0], accelFilter[1], accelFilter[2]);
}
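To plug this in, the listener side could look something like this (the wiring is my sketch, not part of the original answer):
// Hypothetical glue: forward SensorEvent values into the filter above
// from a standard SensorEventListener.
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        onAccelerometerChanged(event.values[0], event.values[1], event.values[2]);
    }
}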
For those wondering what the norm() and clamp() methods in the answer from rbgrn do, you can see them here:
http://developer.apple.com/library/IOS/samplecode/AccelerometerGraph/Listings/AccelerometerGraph_AccelerometerFilter_m.html
double norm(double x, double y, double z)
{
return Math.sqrt(x * x + y * y + z * z);
}
double clamp(double v, double min, double max)
{
if(v > max)
return max;
else if(v < min)
return min;
else
return v;
}
I seem to remember this being done in Apple's sample code for the iPhone. Let's see...
Look for AccelerometerFilter.h / .m on Google (or grab Apple's AccelerometerGraph sample) and this link: http://en.wikipedia.org/wiki/High-pass_filter (that's what Apple's code is based on).
There is some pseudo-code in the Wiki, too. But the math is fairly simple to translate to code.
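A direct translation of that pseudo-code into Java could look like this (a sketch; the alpha value is a tuning assumption, derived from RC / (RC + dt)):
// Simple discrete-time RC high-pass filter, per the Wikipedia article:
// y[i] = alpha * (y[i-1] + x[i] - x[i-1])
float alpha = 0.8f;          // tuning assumption; closer to 1 = lower cutoff
float lastInput, lastOutput; // keep these as fields between samples

float highPass(float input) {
    lastOutput = alpha * (lastOutput + input - lastInput);
    lastInput = input;
    return lastOutput;
}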
IMO, designing a Kalman filter as your first attempt is over-complicating what's probably a fairly simple problem. I'd start with a simple FIR filter, and only try something more complex when/if you've tested that and found with reasonable certainty that it can't provide what you want. My guess, however, is that it will be able to do everything you need, and do it much more easily and efficiently.
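For concreteness, the kind of FIR filter meant here can be as simple as a moving average subtracted from the raw signal, leaving the high-frequency part (a sketch; the window length is an assumption to tune against your sample rate):
// Minimal FIR low-pass (moving average over the last N samples);
// subtracting it from the raw sample leaves the high-pass component.
float[] window = new float[8]; // window length is a tuning assumption
int idx = 0;

float highPassFIR(float sample) {
    window[idx] = sample;
    idx = (idx + 1) % window.length;
    float avg = 0;
    for (float v : window) avg += v;
    return sample - avg / window.length; // high-frequency residue
}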