I started working on a concept that requires me to find a way to move a rectangle toward a given point at a given speed. I'm developing for Android so this is relatively speed critical (it's going to be calculated every frame for potentially hundreds of objects, as well.)
The solutions I could think of are as follows:
float diff_x = x2 - x1;
float diff_y = y2 - y1;
float length = sqrt((diff_x * diff_x) + (diff_y * diff_y));
float dir_x = diff_x / length;
float dir_y = diff_y / length;
float move_x = dir_x * MOVE_SPEED;
float move_y = dir_y * MOVE_SPEED;
As you can see, this way requires a square root, which I know to be quite costly. I thought of an alternative, which uses trigonometry, but it's costly as well.
float diff_x = x2 - x1;
float diff_y = y2 - y1;
float angle = atan2(diff_y, diff_x);
float move_x = cos(angle) * MOVE_SPEED;
float move_y = sin(angle) * MOVE_SPEED;
Are there any other ways? If not, which of my solutions would be faster? Thanks for any help.
A very common trick you can use is to keep everything squared (power of two, ^2).
This way, instead of using sqrt you just use:
length = (diff_x * diff_x) + (diff_y * diff_y);
diff_x*diff_x/length
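One place the squared form pays off directly (a minimal sketch, not part of the answer above) is comparing the squared distance against the squared step size, so the sqrt is only paid when a real unit direction is needed:
// Hedged sketch: use the squared length for comparisons and only take
// one sqrt when an actual unit direction is required.
static final float MOVE_SPEED = 4.0f; // assumed per-frame speed in pixels

static void moveToward(float[] pos, float targetX, float targetY) {
    float diffX = targetX - pos[0];
    float diffY = targetY - pos[1];
    float lengthSq = diffX * diffX + diffY * diffY; // no sqrt yet

    // Cheap arrival test: squared distance vs. squared step size.
    if (lengthSq <= MOVE_SPEED * MOVE_SPEED) {
        pos[0] = targetX;
        pos[1] = targetY;
        return;
    }

    // Only now pay for one sqrt to normalize the direction.
    float invLength = 1.0f / (float) Math.sqrt(lengthSq);
    pos[0] += diffX * invLength * MOVE_SPEED;
    pos[1] += diffY * invLength * MOVE_SPEED;
}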
I've just started learning OpenGL ES 2.0 on Android and I've run into a problem that I don't know how to solve.
I want to create a large plane field. I created it and put a texture on it, but here comes my problem:
It doesn't draw all of it; it only displays about 10 units on the Z axis. X is fine.
So I want to create a big square but it displays a rectangle. It is as if someone took scissors and cut it off at a certain Z coordinate.
I don't even know what part of my code I should put here: the shader? The plane coordinates? The camera settings?
Thank you for your patience.
It sounds like your plane is getting clipped by the frustum, or viewing volume. That is typically set by either glOrtho() or gluPerspective(). Try increasing the distance between the near and far plane parameters to these functions.
If you are relying on a default frustum provided by Android, you may have to construct your own frustum, which would look something like this for glOrtho():
typedef struct
{
    float f0,  f1,  f2,  f3;
    float f4,  f5,  f6,  f7;
    float f8,  f9,  f10, f11;
    float f12, f13, f14, f15;
} Mat4;
void Ortho(Mat4 * pMat4, float left, float top, float right, float bottom, float nearPlane, float farPlane)
{
float rcplmr = 1.0f / (left - right);
float rcpbmt = 1.0f / (bottom - top);
float rcpnmf = 1.0f / (nearPlane - farPlane);
pMat4->f0 = -2.0f * rcplmr;
pMat4->f1 = 0.0f;
pMat4->f2 = 0.0f;
pMat4->f3 = 0.0f;
pMat4->f4 = 0.0f;
pMat4->f5 = -2.0f * rcpbmt;
pMat4->f6 = 0.0f;
pMat4->f7 = 0.0f;
pMat4->f8 = 0.0f;
pMat4->f9 = 0.0f;
pMat4->f10 = -2.0f * rcpnmf;
pMat4->f11 = 0.0f;
pMat4->f12 = (right + left) * rcplmr;
pMat4->f13 = (top + bottom) * rcpbmt;
pMat4->f14 = (nearPlane + farPlane) * rcpnmf;
pMat4->f15 = 1.0f;
}
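On Android with OpenGL ES 2.0 specifically, the same kind of projection can also be built with the framework's android.opengl.Matrix helpers and passed to the shader as a uniform. A minimal sketch follows; the uniform name "uProjection" and the 0.1/100 clip distances are placeholder assumptions, not values from the question:
import android.opengl.GLES20;
import android.opengl.Matrix;

// Hedged sketch: widen the near/far range so geometry is not clipped along Z.
static void setProjection(int program, float aspectRatio) {
    float[] projection = new float[16];
    Matrix.orthoM(projection, 0,
            -aspectRatio, aspectRatio,  // left, right
            -1f, 1f,                    // bottom, top
            0.1f, 100f);                // near, far: increase far if the plane is cut off
    // "uProjection" is a hypothetical uniform name in your vertex shader.
    int location = GLES20.glGetUniformLocation(program, "uProjection");
    GLES20.glUniformMatrix4fv(location, 1, false, projection, 0);
}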
I am trying to draw an arrow to point to objects in an image. I have been able to write code to draw the line but can't seem to find a way to draw the arrowhead. The code I wrote to draw a draggable line is as follows. I need to draw an arrowhead on the ACTION_UP event, pointing in the direction of the line.
if(event.getAction() ==MotionEvent.ACTION_DOWN) {
if (count==1){
x1 = event.getX();
y1 = event.getY();
System.out.println(count+"count of value a;skd");
Toast.makeText(getApplicationContext(), ""+(radius+count), Toast.LENGTH_LONG).show();
Log.i(TAG, "coordinate x1 : "+String.valueOf(x1)+" y1 : "+String.valueOf(y1));
}
}
else if(event.getAction() ==MotionEvent.ACTION_MOVE){
imageView.setImageBitmap(bmp2);
x2 = event.getX();
y2 = event.getY();
posX=(float)(x1+x2)/2;
posY=(float)(y1+y2)/2;
radius=(float) Math.sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))/2;
onDraw();
Toast.makeText(getApplicationContext(), ""+radius, Toast.LENGTH_LONG).show();
}
Hi, for anyone still needing help: this is how I did it in the end.
float h=(float) 30.0;
float phi = (float) Math.atan2(y2 - y1, x2 - x1);
float angle1 = (float) (phi - Math.PI / 6);
float angle2 = (float) (phi + Math.PI / 6);
float x3 = (float) (x2 - h * Math.cos(angle1));
float x4 = (float) (x2 - h * Math.cos(angle2));
float y3 = (float) (y2 - h * Math.sin(angle1));
float y4 = (float) (y2 - h * Math.sin(angle2));
c.drawLine(x1, y1,x2,y2 ,pnt);
c.drawLine(x2, y2,x3,y3 ,pnt);
c.drawLine(x2, y2,x4,y4 ,pnt);
I got help from the accepted answer and the iOS section on Stack Overflow.
How I would do this is to find the slope of the line, which is drawn between two points (start and end). The slope is (dy/dx), and that is a good starting point for your arrow. Assuming you want the base of the arrowhead to be perpendicular to the line of the arrow, to find the slope of the base you take the negative reciprocal of the slope of the line. For example, let's say your line has a slope of 2. The slope of the base of your triangle would then be -1/2, because you take 1/(oldslope) and multiply by -1.

I don't know Android very well, but if I remember correctly, in Java you would use a drawPolygon method, and you would have to specify 4 points (3 unique and 1 the same as the first to close it). Given the slope of the base of the tip, we can get our first two points and our final point. You should know the dimensions of the arrowhead you want to draw before you start, so in this case b will be the length of your baseline. If you take θ = arctan(dy/dx) using the baseline's slope, that gives you the angle between the x axis and your baseline. With that θ value, ydif = b*sin(θ) gives you the difference in y between the two base corners of your arrow, and xdif = b*cos(θ) gives you the difference in x between the two base points. If the location of the final point of the line that the user drew is, say, (x1, y1), then the base points of the triangle are (x1 - xdif/2, y1 - ydif/2) and (x1 + xdif/2, y1 + ydif/2). These two points, p1 and p2, are the first and second points in the drawPolygon call, and p1 is repeated as the fourth point to close the polygon.

To find the third point, we need the angle of the original line, again θ = arctan(dy/dx), this time using your original dy/dx. You also have to decide how far beyond the end of the line the tip of the arrow should be; in my case I will use h = 10. Assuming the coordinate of the line tip is (x1, y1), the third point is (x1 + h*cos(θ), y1 + h*sin(θ)). Use that for the third value in drawPolygon(), and you should be done. Sorry if I rushed at the end; comment if you need help.
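On Android there is no drawPolygon(), but the same filled-triangle idea can be done with a Path. Here is a minimal sketch of the approach described above; the head sizes are arbitrary example values and the Paint is assumed to be FILL-style for a solid head:
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;

// Hedged sketch of the filled-triangle arrowhead described above.
// (x1, y1)-(x2, y2) is the shaft; headLength/headWidth are example sizes.
static void drawArrow(Canvas canvas, Paint paint,
                      float x1, float y1, float x2, float y2) {
    float headLength = 30f;
    float headWidth = 20f;

    float theta = (float) Math.atan2(y2 - y1, x2 - x1); // angle of the shaft

    // Point where the head's base crosses the shaft.
    float baseX = x2 - headLength * (float) Math.cos(theta);
    float baseY = y2 - headLength * (float) Math.sin(theta);

    // Offsets perpendicular to the shaft give the two base corners.
    float perpX = (headWidth / 2f) * (float) Math.sin(theta);
    float perpY = (headWidth / 2f) * (float) Math.cos(theta);

    canvas.drawLine(x1, y1, baseX, baseY, paint);   // the shaft

    Path head = new Path();
    head.moveTo(x2, y2);                            // tip
    head.lineTo(baseX + perpX, baseY - perpY);      // one base corner
    head.lineTo(baseX - perpX, baseY + perpY);      // other base corner
    head.close();
    canvas.drawPath(head, paint);                   // solid arrowhead
}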
If you managed to draw a line from the input event, you might additionally draw a triangle on its end indicating the direction.
On another project I drew a square every time a magnetic point on a grid was touched (as you can see here). Sorry, I cannot provide any sample code right now, but if that's a suitable approach for you, I might post it later.
Here is some good code. It's not mine; it was Java Graphics2D code that I converted to Canvas. All credit goes to the original author.
private void drawArrowHead(Canvas canvas, Point tip, Point tail)
{
double dy = tip.y - tail.y;
double dx = tip.x - tail.x;
double theta = Math.atan2(dy, dx);
int tempX = tip.x, tempY = tip.y;
//make arrow touch the circle
if(tip.x>tail.x && tip.y==tail.y)
{
tempX = (tip.x-10);
}
else if(tip.x<tail.x && tip.y==tail.y)
{
tempX = (tip.x+10);
}
else if(tip.y>tail.y && tip.x==tail.x)
{
tempY = (tip.y-10);
}
else if(tip.y<tail.y && tip.x==tail.x)
{
tempY = (tip.y+10);
}
else if(tip.x>tail.x || tip.x<tail.x)
{
int rCosTheta = (int) (10 * Math.cos(theta));
int xx = tip.x - rCosTheta;
int yy = (int) ((xx-tip.x)*(dy/dx) + tip.y);
tempX = xx;
tempY = yy;
}
// Note: 'phi' (the barb angle; Math.PI / 6 is a typical assumed value) and
// 'arrowLength' are fields of the enclosing class in the original code.
double x, y, rho = theta + phi;
for(int j = 0; j < 2; j++)
{
x = tempX - arrowLength * Math.cos(rho);
y = tempY - arrowLength * Math.sin(rho);
canvas.drawLine(tempX,tempY,(int)x,(int)y,this.paint);
rho = theta - phi;
}
}
Just call this for both ends of your line and it will draw an arrowhead at each end!
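For example, a minimal usage sketch (x1, y1, x2, y2 and paint are the values from the touch-handling code above; Point is android.graphics.Point):
// Hedged usage sketch: draw the shaft, then a head at each end of the line.
canvas.drawLine(x1, y1, x2, y2, paint);
drawArrowHead(canvas, new Point((int) x2, (int) y2), new Point((int) x1, (int) y1));
drawArrowHead(canvas, new Point((int) x1, (int) y1), new Point((int) x2, (int) y2));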
I have circular sprites and I need to check to see if they collide with any other circle. I tried:
public boolean collision(){
boolean collide=false;
if(spriteNum>0)
for(int x=0;x<spriteNum;x++)
if(yourSprite[spriteNum].collidesWith(yourSprite[x]))
collide=true;
return collide;
}
But that creates a rectangle around it, which kind of throws it off. I could use the distance formula to manually calculate whether two sprites are in contact, but that seems taxing, and each sprite is attached to a circular physics body, meaning their centers are constantly moving (and I don't know how to find the centers). Any ideas?
As Alexandru points out, no circle collision detection is supported by AndEngine so far. The best way is to implement it yourself. His solution works fine (fast), but just in case you need a bit more precision, I will post another approximation:
// There is no need to use Sprites, we will use the superclass Entity
boolean collidesWith(Entity circle){
final float x1 = this.getX();
final float y1 = this.getY();
final float x2 = circle.getX();
final float y2 = circle.getY();
final float xDifference = x2 - x1;
final float yDifference = y2 - y1;
// The ideal would be to provide a radius, but as
// we assume they are perfect circles, half the
// width will be just as good
final float radius1 = this.getWidth()/2;
final float radius2 = circle.getWidth()/2;
// Note we are using inverseSqrt rather than normal sqrt;
// see below for a fast implementation of it.
// Using normal sqrt would not need the "1.0f /"; it is more
// precise but less efficient
final float euclideanDistance = 1.0f/inverseSqrt(
xDifference*xDifference +
yDifference*yDifference);
return euclideanDistance < (radius1+radius2);
}
/**
 * Gets an approximation of the inverse square root with float precision.
 * @param x float to be inverse-square-rooted
 * @return an approximation of 1/sqrt(x)
 */
public static float inverseSqrt(float x) {
float xhalf = 0.5f*x;
int i = Float.floatToIntBits(x);
i = 0x5f3759df - (i>>1);
x = Float.intBitsToFloat(i);
x = x*(1.5f - xhalf*x*x);
return x;
}
Note that I am not the author of the fast inverseSqrt method; it works in Java (and hence on Android) because of the float type's bit-level representation (see IEEE 754 floating point representation and Java's floatToIntBits/intBitsToFloat conversion).
For further research, see:
Quake3 fast inverse Sqrt origins
Fast inverse Sqrt implementation in Java
Because there is no circle collision detection in AndEngine, the only way is to calculate the distance between them:
boolean collidesWithCircle(Sprite circle) {
float x1 = this.getX();
float y1 = this.getY();
float x2 = circle.getX();
float y2 = circle.getY();
double a = x1 - x2;
double b = y1 - y2;
double c = (a * a) + (b * b);
// getWidth() stands in for (radius1 + radius2) here, which assumes
// both circles have the same diameter.
return c <= this.getWidth() * this.getWidth();
}
You can create circular bodies if you are using physics world by using PhysicsFactory.createCircularBody() method.
I am using an adapted version of Android's getRotationMatrix in a C++ program that reads the phone's sensor data over the network and calculates the device's matrix.
The function works fine and calculates the device's orientation. Unfortunately, Ogre3d has a different axis system than the device, so even though rotation about the x-axis works fine, the y and z axes are wrong. Holding the device level and pointing north gives the identity matrix. When I pitch, the rotation is correct, but when I roll and yaw the rotations are swapped: roll is yaw in Ogre3d and vice versa.
(Ogre3d)                          (Device)
^ +y-axis                         ^ +z-axis
*                                 *
*                                 *
*                                 *     ^ +y-axis
*                                 *    *
*                                 *   *
*                                 *  *
************> +x-axis             ************> +x-axis
*
*
v +z-axis
A quick look at the two axis systems suggests that Ogre's system (on the left) is essentially the device's system rotated 90 degrees counter-clockwise about the x-axis.
I tried to experiment with various combinations when I first assign the sensor values, before the matrix is calculated, but no combination seems to work correctly. How would I make sure that the rotation matrix produced by getRotationMatrix() displays correctly in Ogre3D?
For Reference here is the function that calculates the matrix:
bool getRotationMatrix() {
//sensor data coming through the network are
//stored in accel(accelerometer) and mag(geomagnetic)
//vars which the function has access to
float Ax = accel[0]; float Ay = accel[1]; float Az = accel[2];
float Ex = mag[0]; float Ey = mag[1]; float Ez = mag[2];
float Hx = Ey * Az - Ez * Ay;
float Hy = Ez * Ax - Ex * Az;
float Hz = Ex * Ay - Ey * Ax;
float normH = (float) Math::Sqrt(Hx * Hx + Hy * Hy + Hz * Hz);
if (normH < 0.1f) {
// device is close to free fall (or in space?), or close to
// magnetic north pole. Typical values are > 100.
return false;
}
float invH = 1.0f / normH;
Hx *= invH;
Hy *= invH;
Hz *= invH;
float invA = 1.0f / (float) Math::Sqrt(Ax * Ax + Ay * Ay + Az * Az);
Ax *= invA;
Ay *= invA;
Az *= invA;
float Mx = Ay * Hz - Az * Hy;
float My = Az * Hx - Ax * Hz;
float Mz = Ax * Hy - Ay * Hx;
//Ogre3d's Matrix3 is column-major whereas getRotationMatrix produces
//a row-major matrix, thus I have transposed it here
orientation[0][0] = Hx; orientation[0][1] = Mx; orientation[0][2] = Ax;
orientation[1][0] = Hy; orientation[1][1] = My; orientation[1][2] = Ay;
orientation[2][0] = Hz; orientation[2][1] = Mz; orientation[2][2] = Az;
return true;
}
Why not just add the one additional rotation you've already identified before you use it in Ogre?
I found the problem. In my function, I put the unit vectors calculated from the cross products into columns, whereas I should have been putting them into the rows of their appointed Matrix3 cells as usual. Something about row-major versus column-major confused me, even though I was addressing the elements with 2D [][] indexing.
Multiplying the outcome of the matrix calculation function with this matrix:
1 0 0
0 0 1
0 -1 0
Then pitching the whole result by another π/2 solved the remap problem, but I fear my geometry is inverted.
I don't know much about matrix rotation, but if the systems rotate like you are showing, I think that you should do the following:
The X axis stays the same, so:
float Ax = accel[0];
float Ex = mag[0];
The Y axis in Ogre3d is the Z axis on the device, so:
float Ay = accel[2];
float Ey = mag[2];
The Z axis in Ogre3d is the opposite of the Y axis on the device, so:
float Az = accel[1] * (-1);
float Ez = mag[1] * (-1);
Try that
How do I filter noise out of the accelerometer data in Android? I would like to create a high-pass filter for my sample data so that I can eliminate low-frequency components and focus on the high-frequency components. I have read that a Kalman filter might be the best candidate for this, but how do I integrate or use this method in my application, which will mostly be written in Java for Android? Or can it be done at all? Or through the Android NDK? Is there any chance this can be done in real time?
Any ideas will be much appreciated. Thank you!
The samples from Apple's SDK actually implement the filtering in an even simpler way, by using a ramping (smoothing) factor:
//ramp-speed - play with this value until satisfied
const float kFilteringFactor = 0.1f;
//last result storage - keep definition outside of this function, eg. in wrapping object
float accel[3];
//acceleration.x,.y,.z is the input from the sensor
//result.x,.y,.z is the filtered result
//high-pass filter to eliminate gravity
accel[0] = acceleration.x * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
accel[1] = acceleration.y * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);
accel[2] = acceleration.z * kFilteringFactor + accel[2] * (1.0f - kFilteringFactor);
result.x = acceleration.x - accel[0];
result.y = acceleration.y - accel[1];
result.z = acceleration.z - accel[2];
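For Android, the same simple ramping filter translates almost line for line into a SensorEventListener. A minimal sketch, assuming the accelerometer is already registered for this listener:
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Hedged sketch: gravity[] holds the slowly varying (low-pass) part;
// subtracting it from the raw sample leaves the high-pass result.
public class HighPassListener implements SensorEventListener {
    private static final float FILTERING_FACTOR = 0.1f; // ramp speed, tune to taste
    private final float[] gravity = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
        float[] result = new float[3];
        for (int i = 0; i < 3; i++) {
            gravity[i] = event.values[i] * FILTERING_FACTOR
                    + gravity[i] * (1.0f - FILTERING_FACTOR);
            result[i] = event.values[i] - gravity[i];
        }
        // result[] now holds the high-pass-filtered acceleration.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}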
Here's the code for Android, adapted from Apple's adaptive high-pass filter example. Just plug this in and implement onFilteredAccelerometerChanged():
private static final boolean ADAPTIVE_ACCEL_FILTER = true;
float lastAccel[] = new float[3];
float accelFilter[] = new float[3];
public void onAccelerometerChanged(float accelX, float accelY, float accelZ) {
// high pass filter
float updateFreq = 30; // match this to your update speed
float cutOffFreq = 0.9f;
float RC = 1.0f / cutOffFreq;
float dt = 1.0f / updateFreq;
float filterConstant = RC / (dt + RC);
float alpha = filterConstant;
float kAccelerometerMinStep = 0.033f;
float kAccelerometerNoiseAttenuation = 3.0f;
if(ADAPTIVE_ACCEL_FILTER)
{
float d = clamp(Math.abs(norm(accelFilter[0], accelFilter[1], accelFilter[2]) - norm(accelX, accelY, accelZ)) / kAccelerometerMinStep - 1.0f, 0.0f, 1.0f);
alpha = d * filterConstant / kAccelerometerNoiseAttenuation + (1.0f - d) * filterConstant;
}
accelFilter[0] = (float) (alpha * (accelFilter[0] + accelX - lastAccel[0]));
accelFilter[1] = (float) (alpha * (accelFilter[1] + accelY - lastAccel[1]));
accelFilter[2] = (float) (alpha * (accelFilter[2] + accelZ - lastAccel[2]));
lastAccel[0] = accelX;
lastAccel[1] = accelY;
lastAccel[2] = accelZ;
onFilteredAccelerometerChanged(accelFilter[0], accelFilter[1], accelFilter[2]);
}
For those wondering what the norm() and clamp() methods in the answer from rbgrn do, you can see them here:
http://developer.apple.com/library/IOS/samplecode/AccelerometerGraph/Listings/AccelerometerGraph_AccelerometerFilter_m.html
double norm(double x, double y, double z)
{
return Math.sqrt(x * x + y * y + z * z);
}
double clamp(double v, double min, double max)
{
if(v > max)
return max;
else if(v < min)
return min;
else
return v;
}
I seem to remember this being done in Apple's sample code for the iPhone. Let's see...
Look for AccelerometerFilter.h / .m on Google (or grab Apple's AccelerometerGraph sample) and this link: http://en.wikipedia.org/wiki/High-pass_filter (that's what Apple's code is based on).
There is some pseudo-code in the Wikipedia article, too, but the math is fairly simple to translate into code.
IMO, designing a Kalman filter as your first attempt is over-complicating what's probably a fairly simple problem. I'd start with a simple FIR filter, and only try something more complex when/if you've tested that and found with reasonable certainty that it can't provide what you want. My guess, however, is that it will be able to do everything you need, and do it much more easily and efficiently.
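As a concrete starting point (a sketch of that suggestion, not code from the answer), a moving-average FIR filter over the last N samples is only a few lines and easily runs in real time on a phone; to emphasize the high-frequency components instead, subtract the smoothed value from the raw sample:
// Hedged sketch: a simple moving-average FIR low-pass over the last WINDOW samples.
public class MovingAverageFilter {
    private static final int WINDOW = 8; // arbitrary example size
    private final float[] history = new float[WINDOW];
    private int index = 0;
    private int count = 0;
    private float sum = 0f;

    public float filter(float sample) {
        sum -= history[index];   // drop the oldest sample from the running sum
        history[index] = sample;
        sum += sample;
        index = (index + 1) % WINDOW;
        if (count < WINDOW) count++;
        return sum / count;      // average of the samples seen so far
    }
}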