I am implementing 2-finger-zoom.
When the 2nd finger goes down, remember the finger distance (done once):
event.getPointerCoords(0, finger1_start);
event.getPointerCoords(1, finger2_start);
start_distance = VecLength(finger1_start, finger2_start);
With 2 fingers down, on every ACTION_MOVE:
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance;
VecLength method - returns the distance between 2 points:
double VecLength (MotionEvent.PointerCoords a, MotionEvent.PointerCoords b)
{
return Math.sqrt(Math.pow(b.x - a.x, 2) + Math.pow(b.y - a.y, 2));
}
Problem: it jitters when I use view.setScaleX. I zoomed in as smoothly as possible with my fingers while recording the zoom value in logcat.
Not using view.setScaleX/Y
Using view.setScaleX/Y
Which proves that I do see the jitter. I narrowed it down to the pointers actually having different coordinates each frame going back/forth.
I assume the scale somehow affects the view, but I don't understand how to undo that. How do I get "raw" finger coordinates or respect the zoom in my calculation?
Note that I cannot use getRawX, as it only returns one finger. I obviously need both.
It seems I can mostly fix this by factoring the view's current scale into the calculation:
So change
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance;
to
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance * view.getScaleX(); // x and y scale are the same in my case
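For context, here is a minimal sketch of how that corrected calculation might sit inside a touch listener attached to the scaled view. This is only an illustration: the listener class and its setup are assumptions, while VecLength is the method from the question and the formula is the one above.

import android.view.MotionEvent;
import android.view.View;

// Sketch only: pinch-to-zoom listener using the scale-compensated formula above.
// Assumes the listener is set on the view being scaled and that
// getScaleX() == getScaleY(), as stated in the answer.
class PinchZoomListener implements View.OnTouchListener {
    private double startDistance = 1.0;
    private final MotionEvent.PointerCoords p0 = new MotionEvent.PointerCoords();
    private final MotionEvent.PointerCoords p1 = new MotionEvent.PointerCoords();

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getPointerCount() < 2) {
            return false;
        }
        event.getPointerCoords(0, p0);
        event.getPointerCoords(1, p1);
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_POINTER_DOWN:
                // Second finger just went down: remember the start distance once.
                startDistance = VecLength(p0, p1);
                break;
            case MotionEvent.ACTION_MOVE: {
                double distanceNow = VecLength(p0, p1);
                // Multiply by the current scale so the pointer coordinates
                // (reported relative to the scaled view) don't feed back as jitter.
                float zoom = (float) (distanceNow / startDistance * v.getScaleX());
                v.setScaleX(zoom);
                v.setScaleY(zoom);
                break;
            }
        }
        return true;
    }

    private double VecLength(MotionEvent.PointerCoords a, MotionEvent.PointerCoords b) {
        return Math.sqrt(Math.pow(b.x - a.x, 2) + Math.pow(b.y - a.y, 2));
    }
}

It would be attached with something like view.setOnTouchListener(new PinchZoomListener()).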
From the man page for XFillPolygon:
If shape is Complex, the path may self-intersect. Note that contiguous coincident points in the path are not treated as self-intersection.
If shape is Convex, for every pair of points inside the polygon, the line segment connecting them does not intersect the path. If known by the client, specifying Convex can improve performance. If you specify Convex for a path that is not convex, the graphics results are undefined.
If shape is Nonconvex, the path does not self-intersect, but the shape is not wholly convex. If known by the client, specifying Nonconvex instead of Complex may improve performance. If you specify Nonconvex for a self-intersecting path, the graphics results are undefined.
I am having performance problems with XFillPolygon and, as the man page suggests, the first step I want to take is to specify the correct shape of the polygon. I am currently using Complex to be on the safe side.
Is there an efficient algorithm to determine if a polygon (defined by a series of coordinates) is convex, non-convex or complex?
You can make things a lot easier than the Gift-Wrapping Algorithm... that's a good answer when you have a set of points w/o any particular boundary and need to find the convex hull.
In contrast, consider the case where the polygon is not self-intersecting, and it consists of a set of points in a list where the consecutive points form the boundary. In this case it is much easier to figure out whether a polygon is convex or not (and you don't have to calculate any angles, either):
For each consecutive pair of edges of the polygon (i.e. each triplet of consecutive points), compute the z-component of the cross product of the two edge vectors, taking the points in increasing order:
given p[k], p[k+1], p[k+2] each with coordinates x, y:
dx1 = x[k+1]-x[k]
dy1 = y[k+1]-y[k]
dx2 = x[k+2]-x[k+1]
dy2 = y[k+2]-y[k+1]
zcrossproduct = dx1*dy2 - dy1*dx2
The polygon is convex if the z-components of the cross products are either all positive or all negative. Otherwise the polygon is nonconvex.
If there are N points, make sure you calculate N cross products, e.g. be sure to use the triplets (p[N-2],p[N-1],p[0]) and (p[N-1],p[0],p[1]).
If the polygon is self-intersecting, then it fails the technical definition of convexity even if its directed angles are all in the same direction, in which case the above approach would not produce the correct result.
This question is now the first item in either Bing or Google when you search for "determine convex polygon." However, none of the answers are good enough.
The (now deleted) answer by @EugeneYokota works by checking whether an unordered set of points can be made into a convex polygon, but that's not what the OP asked for. He asked for a method to check whether a given polygon is convex or not. (A "polygon" in computer science is usually defined [as in the XFillPolygon documentation] as an ordered array of 2D points, with consecutive points joined with a side as well as the last point to the first.) Also, the gift wrapping algorithm in this case would have the time-complexity of O(n^2) for n points - which is much larger than actually needed to solve this problem, while the question asks for an efficient algorithm.
@JasonS's answer, along with the other answers that follow his idea, accepts star polygons such as a pentagram or the one in @zenna's comment, but star polygons are not considered to be convex. As @plasmacel notes in a comment, this is a good approach to use if you have prior knowledge that the polygon is not self-intersecting, but it can fail if you do not have that knowledge.
@Sekhat's answer is correct but it also has the time-complexity of O(n^2) and thus is inefficient.
@LorenPechtel's added answer after her edit is the best one here but it is vague.
A correct algorithm with optimal complexity
The algorithm I present here has the time-complexity of O(n), correctly tests whether a polygon is convex or not, and passes all the tests I have thrown at it. The idea is to traverse the sides of the polygon, noting the direction of each side and the signed change of direction between consecutive sides. "Signed" here means left-ward is positive and right-ward is negative (or the reverse) and straight-ahead is zero. Those angles are normalized to be between minus-pi (exclusive) and pi (inclusive).
Summing all these direction-change angles (a.k.a. the deflection angles) together will result in plus-or-minus one turn (i.e. 360 degrees) for a convex polygon, while a star-like polygon (or a self-intersecting loop) will have a different sum (n * 360 degrees, for n turns overall, for polygons where all the deflection angles are of the same sign). So we must check that the sum of the direction-change angles is plus-or-minus one turn. We also check that the direction-change angles are all positive or all negative and not reverses (pi radians), all points are actual 2D points, and that no consecutive vertices are identical. (That last point is debatable--you may want to allow repeated vertices but I prefer to prohibit them.) The combination of those checks catches all convex and non-convex polygons.
Here is code for Python 3 that implements the algorithm and includes some minor efficiencies. The code looks longer than it really is due to the comment lines and the bookkeeping involved in avoiding repeated point accesses.
from math import pi, atan2

TWO_PI = 2 * pi

def is_convex_polygon(polygon):
    """Return True if the polygon defined by the sequence of 2D
    points is 'strictly convex': points are valid, side lengths non-
    zero, interior angles are strictly between zero and a straight
    angle, and the polygon does not intersect itself.

    NOTES:  1.  Algorithm: the signed changes of the direction angles
                from one side to the next side must be all positive or
                all negative, and their sum must equal plus-or-minus
                one full turn (2 pi radians). Also check for too few,
                invalid, or repeated points.
            2.  No check is explicitly done for zero internal angles
                (180 degree direction-change angle) as this is covered
                in other ways, including the `n < 3` check.
    """
    try:  # needed for any bad points or direction changes
        # Check for too few points
        if len(polygon) < 3:
            return False
        # Get starting information
        old_x, old_y = polygon[-2]
        new_x, new_y = polygon[-1]
        new_direction = atan2(new_y - old_y, new_x - old_x)
        angle_sum = 0.0
        # Check each point (the side ending there, its angle) and accum. angles
        for ndx, newpoint in enumerate(polygon):
            # Update point coordinates and side directions, check side length
            old_x, old_y, old_direction = new_x, new_y, new_direction
            new_x, new_y = newpoint
            new_direction = atan2(new_y - old_y, new_x - old_x)
            if old_x == new_x and old_y == new_y:
                return False  # repeated consecutive points
            # Calculate & check the normalized direction-change angle
            angle = new_direction - old_direction
            if angle <= -pi:
                angle += TWO_PI  # make it in half-open interval (-Pi, Pi]
            elif angle > pi:
                angle -= TWO_PI
            if ndx == 0:  # if first time through loop, initialize orientation
                if angle == 0.0:
                    return False
                orientation = 1.0 if angle > 0.0 else -1.0
            else:  # if other time through loop, check orientation is stable
                if orientation * angle <= 0.0:  # not both pos. or both neg.
                    return False
            # Accumulate the direction-change angle
            angle_sum += angle
        # Check that the total number of full turns is plus-or-minus 1
        return abs(round(angle_sum / TWO_PI)) == 1
    except (ArithmeticError, TypeError, ValueError):
        return False  # any exception means not a proper convex polygon
The following Java function/method is an implementation of the algorithm described in this answer.
public boolean isConvex()
{
    if (_vertices.size() < 4)
        return true;

    boolean sign = false;
    int n = _vertices.size();

    for (int i = 0; i < n; i++)
    {
        double dx1 = _vertices.get((i + 2) % n).X - _vertices.get((i + 1) % n).X;
        double dy1 = _vertices.get((i + 2) % n).Y - _vertices.get((i + 1) % n).Y;
        double dx2 = _vertices.get(i).X - _vertices.get((i + 1) % n).X;
        double dy2 = _vertices.get(i).Y - _vertices.get((i + 1) % n).Y;
        double zcrossproduct = dx1 * dy2 - dy1 * dx2;

        if (i == 0)
            sign = zcrossproduct > 0;
        else if (sign != (zcrossproduct > 0))
            return false;
    }

    return true;
}
The algorithm is guaranteed to work as long as the vertices are ordered (either clockwise or counter-clockwise), and you don't have self-intersecting edges (i.e. it only works for simple polygons).
Here's a test to check if a polygon is convex.
Consider each set of three points along the polygon--a vertex, the vertex before, the vertex after. If every angle is 180 degrees or less you have a convex polygon. When you figure out each angle, also keep a running total of (180 - angle). For a convex polygon, this will total 360.
This test runs in O(n) time.
Note, also, that in most cases this calculation is something you can do once and save — most of the time you have a set of polygons to work with that don't go changing all the time.
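A rough sketch of that test in Java (working in radians rather than degrees, and assuming an ordered, non-self-intersecting vertex list; the method name and parameters are illustrative):

// Sketch: sum the deflection angles (180 degrees minus each interior angle,
// here in radians) and require that every turn bends the same way and the
// turns add up to one full circle.
static boolean isConvexByAngles(double[] x, double[] y) {
    int n = x.length;
    if (n < 3) return false;
    double turnSum = 0.0;
    Integer turnSign = null;                 // +1 or -1 once the first non-zero turn is seen
    for (int i = 0; i < n; i++) {
        double prevDir = Math.atan2(y[i] - y[(i + n - 1) % n], x[i] - x[(i + n - 1) % n]);
        double nextDir = Math.atan2(y[(i + 1) % n] - y[i], x[(i + 1) % n] - x[i]);
        double turn = nextDir - prevDir;     // deflection at vertex i
        if (turn <= -Math.PI) turn += 2 * Math.PI;   // normalize to (-pi, pi]
        else if (turn > Math.PI) turn -= 2 * Math.PI;
        int sign = turn > 0 ? 1 : (turn < 0 ? -1 : 0);
        if (turnSign == null && sign != 0) turnSign = sign;
        else if (turnSign != null && sign != 0 && sign != turnSign) return false;  // mixed turn directions
        turnSum += turn;
    }
    return Math.abs(Math.round(turnSum / (2 * Math.PI))) == 1;  // exactly one full turn
}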
To test if a polygon is convex, every point of the polygon should be level with or behind each line.
Here's an example picture:
The answer by @RoryDaulton seems the best to me, but what if one of the angles is exactly 0?
Some may want such an edge case to return True, in which case, change "<=" to "<" in the line:
if orientation * angle < 0.0: # not both pos. or both neg.
Here are my test cases which highlight the issue:
# A square
assert is_convex_polygon( ((0,0), (1,0), (1,1), (0,1)) )
# This LOOKS like a square, but it has an extra point on one of the edges.
assert is_convex_polygon( ((0,0), (0.5,0), (1,0), (1,1), (0,1)) )
The 2nd assert fails in the original answer. Should it?
For my use case, I would prefer it didn't.
This method would work on simple polygons (no self-intersecting edges), assuming that the vertices are ordered (either clockwise or counter-clockwise).
For an array of vertices:
vertices = [(0,0),(1,0),(1,1),(0,1)]
The following Python implementation checks whether the z-components of all the cross products have the same sign:
def zCrossProduct(a, b, c):
    return (a[0]-b[0])*(b[1]-c[1]) - (a[1]-b[1])*(b[0]-c[0])

def isConvex(vertices):
    if len(vertices) < 4:
        return True
    signs = [zCrossProduct(a, b, c) > 0 for a, b, c in zip(vertices[2:], vertices[1:], vertices)]
    return all(signs) or not any(signs)
I implemented both algorithms in Java: the one posted by @UriGoren (with a small improvement - integer math only) and the one from @RoryDaulton. I had some problems because my polygon is closed, so both algorithms were considering the second (convex) polygon as concave. So I changed them to prevent that situation. My methods also use a base index (which may or may not be 0).
These are my test vertices:
// concave
int []x = {0,100,200,200,100,0,0};
int []y = {50,0,50,200,50,200,50};
// convex
int []x = {0,100,200,100,0,0};
int []y = {50,0,50,200,200,50};
And now the algorithms:
private boolean isConvex1(int[] x, int[] y, int base, int n) // Rory Daulton
{
    final double TWO_PI = 2 * Math.PI;

    // points is 'strictly convex': points are valid, side lengths non-zero, interior angles are strictly between zero and a straight
    // angle, and the polygon does not intersect itself.
    // NOTES: 1. Algorithm: the signed changes of the direction angles from one side to the next side must be all positive or
    //           all negative, and their sum must equal plus-or-minus one full turn (2 pi radians). Also check for too few,
    //           invalid, or repeated points.
    //        2. No check is explicitly done for zero internal angles (180 degree direction-change angle) as this is covered
    //           in other ways, including the `n < 3` check.

    // needed for any bad points or direction changes
    // Check for too few points
    if (n <= 3) return true;
    if (x[base] == x[n-1] && y[base] == y[n-1]) // if it's a closed polygon, ignore last vertex
        n--;

    // Get starting information
    int old_x = x[n-2], old_y = y[n-2];
    int new_x = x[n-1], new_y = y[n-1];
    double new_direction = Math.atan2(new_y - old_y, new_x - old_x), old_direction;
    double angle_sum = 0.0, orientation = 0;

    // Check each point (the side ending there, its angle) and accumulate the angles
    for (int i = 0; i < n; i++)
    {
        // Update point coordinates and side directions, check side length
        old_x = new_x; old_y = new_y; old_direction = new_direction;
        int p = base++;
        new_x = x[p]; new_y = y[p];
        new_direction = Math.atan2(new_y - old_y, new_x - old_x);
        if (old_x == new_x && old_y == new_y)
            return false; // repeated consecutive points

        // Calculate & check the normalized direction-change angle
        double angle = new_direction - old_direction;
        if (angle <= -Math.PI)
            angle += TWO_PI; // make it in half-open interval (-Pi, Pi]
        else if (angle > Math.PI)
            angle -= TWO_PI;

        if (i == 0) // if first time through loop, initialize orientation
        {
            if (angle == 0.0) return false;
            orientation = angle > 0 ? 1 : -1;
        }
        else // if other time through loop, check orientation is stable
            if (orientation * angle <= 0) // not both pos. or both neg.
                return false;

        // Accumulate the direction-change angle
        angle_sum += angle;
    }

    // Check that the total number of full turns is plus-or-minus 1
    return Math.abs(Math.round(angle_sum / TWO_PI)) == 1;
}
And now from Uri Goren
private boolean isConvex2(int[] x, int[] y, int base, int n)
{
    if (n < 4)
        return true;

    boolean sign = false;
    if (x[base] == x[n-1] && y[base] == y[n-1]) // if it's a closed polygon, ignore last vertex
        n--;

    for (int p = 0; p < n; p++)
    {
        int i = base++;
        int i1 = i+1; if (i1 >= n) i1 = base + i1-n;
        int i2 = i+2; if (i2 >= n) i2 = base + i2-n;
        int dx1 = x[i1] - x[i];
        int dy1 = y[i1] - y[i];
        int dx2 = x[i2] - x[i1];
        int dy2 = y[i2] - y[i1];
        int crossproduct = dx1*dy2 - dy1*dx2;
        if (p == 0) // first triplet: remember the sign of its cross product
            sign = crossproduct > 0;
        else if (sign != (crossproduct > 0))
            return false;
    }
    return true;
}
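For example, with base 0 the Rory Daulton port can be exercised directly against the test vertices above (a quick sketch; the methods are private, so this would run from within the same class):

// Quick sanity check of isConvex1 using the test vertices above.
int[] cx = {0, 100, 200, 200, 100, 0, 0};   // concave test polygon
int[] cy = {50, 0, 50, 200, 50, 200, 50};
int[] vx = {0, 100, 200, 100, 0, 0};        // convex test polygon
int[] vy = {50, 0, 50, 200, 200, 50};

System.out.println(isConvex1(cx, cy, 0, cx.length));  // false (concave)
System.out.println(isConvex1(vx, vy, 0, vx.length));  // true  (convex)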
For a non-complex (non-self-intersecting) polygon to be convex, the vector frames obtained from any two connected, linearly independent lines a, b must be point-convex; otherwise the polygon is concave.
For example, the lines a, b are convex to the point p in one case and concave to it in the other, i.e. above: p lies inside a, b; and below: p lies outside a, b.
Similarly, for each polygon below, if each line pair making up a sharp edge is point-convex to the centroid c, then the polygon is convex; otherwise it's concave.
Blunt edges (marked green) are to be ignored.
N.B.: This approach requires you to compute the centroid of your polygon beforehand, since it doesn't employ angles but vector algebra/transformations.
Adapted Uri's code into MATLAB. Hope this may help.
Be aware that Uri's algorithm only works for simple polygons! So, be sure to test if the polygon is simple first!
% M [ x1 x2 x3 ...
%     y1 y2 y3 ... ]
% test if a polygon is convex
function ret = isConvex(M)
    N = size(M, 2);
    if (N < 4)
        ret = 1;
        return;
    end

    x0 = M(1, 1:end);
    x1 = [x0(2:end), x0(1)];
    x2 = [x0(3:end), x0(1:2)];

    y0 = M(2, 1:end);
    y1 = [y0(2:end), y0(1)];
    y2 = [y0(3:end), y0(1:2)];

    dx1 = x2 - x1;
    dy1 = y2 - y1;
    dx2 = x0 - x1;
    dy2 = y0 - y1;

    zcrossproduct = dx1 .* dy2 - dy1 .* dx2;

    % equality allows two consecutive edges to be parallel
    t1 = sum(zcrossproduct >= 0);
    t2 = sum(zcrossproduct <= 0);
    ret = t1 == N || t2 == N;
end
I am trying to pick objects in the bullet physics world but all I seem to be able to pick is the floor/ground plane!!! I am using the Vuforia SDK and have altered the ImageTargets demo code. I have used the following code to project my touched screen points to the 3d world:
void projectTouchPointsForBullet(QCAR::Vec2F point, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd, QCAR::Matrix44F &modelViewMatrix)
{
    QCAR::Vec4F normalisedVector((2 * point.data[0] / screenWidth - 1),
                                 (2 * (screenHeight - point.data[1]) / screenHeight - 1),
                                 -1,
                                 1);

    QCAR::Matrix44F modelViewProjection;
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
    QCAR::Matrix44F inversedMatrix = SampleMath::Matrix44FInverse(modelViewProjection);

    QCAR::Vec4F near_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    near_point.data[3] = 1.0 / near_point.data[3];
    near_point = QCAR::Vec4F(near_point.data[0]*near_point.data[3], near_point.data[1]*near_point.data[3], near_point.data[2]*near_point.data[3], 1);

    normalisedVector.data[2] = 1.0; // z coordinate now 1
    QCAR::Vec4F far_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    far_point.data[3] = 1.0 / far_point.data[3];
    far_point = QCAR::Vec4F(far_point.data[0]*far_point.data[3], far_point.data[1]*far_point.data[3], far_point.data[2]*far_point.data[3], 1);

    lineStart = QCAR::Vec3F(near_point.data[0], near_point.data[1], near_point.data[2]);
    lineEnd = QCAR::Vec3F(far_point.data[0], far_point.data[1], far_point.data[2]);
}
When I try a ray test in my physics world, I only seem to be hitting the ground plane! Here is the code for the ray test call:
QCAR::Vec3F intersection, lineStart, lineEnd;
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, inverseProjMatrix, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
btVector3 btRayTo = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btCollisionWorld::ClosestRayResultCallback rayCallback(btRayFrom, btRayTo);
dynamicsWorld->rayTest(btRayFrom, btRayTo, rayCallback);
if (rayCallback.hasHit())
{
    char* pPhysicsData = reinterpret_cast<char*>(rayCallback.m_collisionObject->getUserPointer()); // my bodies have char* messages attached to them to determine what has been touched
    btRigidBody* pBody = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (pBody && pPhysicsData)
    {
        LOG("handleTouches:: notifyOnTouchEvent from physics world!!!");
        notifyOnTouchEvent(env, obj, 0, 0, pPhysicsData);
    }
}
I know I am predominantly looking top-down, so I am bound to hit the ground plane; at least I know my touch is being correctly projected into the world. But I have objects lying on the ground plane and I can't seem to be able to touch them! Any pointers would be greatly appreciated :)
I found out why I wasn't able to touch the objects - I am scaling the objects up when they are drawn, so I had to scale the view matrix by the same value before I projected my touch point into the 3d world (EDIT: I also had the btRayFrom and btRayTo input coordinates reversed; it is now fixed):
//top of code
float kObjectScale = 100.0f;
....
...
//inside touch handler method
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,&modelViewMatrix.data[0]);
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd,inverseProjMatrix, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btVector3 btRayTo = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
My touches are projected correctly now :)
I'm creating an application where I need to position an ImageView depending on the orientation of the device.
I use the values from the magnetic field and accelerometer sensors to calculate the device orientation with
SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerValues, magneticFieldValues)
SensorManager.getOrientation(rotationMatrix, values);
double degrees = Math.toDegrees(values[0]);
My problem is that the positioning of the ImageView is very sensitive to changes in the orientation, making the ImageView constantly jump around the screen (because the degrees change).
I read that this can be because my device is close to things that can affect the magnetic field readings, but this does not seem to be the only reason.
I tried downloading some applications and found that "3D compass" and "Compass" remain extremely steady in their readings (when the noise filter is turned up); I would like the same behavior in my application.
I read that I can reduce the "noise" of my readings by adding a low-pass filter, but I have no idea how to implement this (because of my lack of math).
I'm hoping someone can help me create a steadier reading on my device, where a little movement of the device won't affect the current orientation.
Right now I do a small
if (Math.abs(lastReadingDegrees - newReadingDegrees) > 1) { updatePosition() }
to filter a bit of the noise, but it's not working very well :)
Though I haven't used the compass on Android, the basic processing shown below (in JavaScript) will probably work for you.
It's based on the low pass filter on the accelerometer that's recommended by the Windows Phone team, with modifications to suit a compass (the cyclic behavior every 360°).
I assume the compass reading is in degrees, a float between 0-360, and the output should be similar.
You want to accomplish 2 things in the filter:
If the change is small, to prevent jitter, gradually turn to that direction.
If the change is big, to prevent lag, turn to that direction immediately (and it can be canceled if you want the compass to move only in a smooth way).
For that we will have 2 constants:
The easing float that defines how smooth the movement will be (1 is no smoothing and 0 is never updating, my default is 0.5). We will call it SmoothFactorCompass.
The threshold at which the distance is big enough to turn immediately (0 is jump always, 360 is never jumping, my default is 30). We will call it SmoothThresholdCompass.
We have one variable saved across the calls, a float called oldCompass and it is the result of the algorithm.
So the variable definition is:
var SmoothFactorCompass = 0.5;
var SmoothThresholdCompass = 30.0;
var oldCompass = 0.0;
and the function receives newCompass and returns oldCompass as the result.
if (Math.abs(newCompass - oldCompass) < 180) {
    if (Math.abs(newCompass - oldCompass) > SmoothThresholdCompass) {
        oldCompass = newCompass;
    }
    else {
        oldCompass = oldCompass + SmoothFactorCompass * (newCompass - oldCompass);
    }
}
else {
    if (360.0 - Math.abs(newCompass - oldCompass) > SmoothThresholdCompass) {
        oldCompass = newCompass;
    }
    else {
        if (oldCompass > newCompass) {
            oldCompass = (oldCompass + SmoothFactorCompass * ((360 + newCompass - oldCompass) % 360) + 360) % 360;
        }
        else {
            oldCompass = (oldCompass - SmoothFactorCompass * ((360 - newCompass + oldCompass) % 360) + 360) % 360;
        }
    }
}
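Since the question is about Android, a rough Java port of the same smoothing logic might look like this. It is a sketch under the same assumptions (headings as float degrees in [0, 360), state kept across calls); the class and method names are illustrative and it has not been device-tested:

// Rough Java port of the JavaScript smoothing above.
class CompassSmoother {
    private static final float SMOOTH_FACTOR_COMPASS = 0.5f;
    private static final float SMOOTH_THRESHOLD_COMPASS = 30.0f;
    private float oldCompass = 0.0f;

    float smooth(float newCompass) {
        if (Math.abs(newCompass - oldCompass) < 180) {
            if (Math.abs(newCompass - oldCompass) > SMOOTH_THRESHOLD_COMPASS) {
                oldCompass = newCompass;   // big change: follow immediately
            } else {
                oldCompass = oldCompass + SMOOTH_FACTOR_COMPASS * (newCompass - oldCompass);
            }
        } else {                           // readings straddle the 0/360 wrap-around
            if (360.0f - Math.abs(newCompass - oldCompass) > SMOOTH_THRESHOLD_COMPASS) {
                oldCompass = newCompass;
            } else if (oldCompass > newCompass) {
                oldCompass = (oldCompass + SMOOTH_FACTOR_COMPASS * ((360 + newCompass - oldCompass) % 360) + 360) % 360;
            } else {
                oldCompass = (oldCompass - SMOOTH_FACTOR_COMPASS * ((360 - newCompass + oldCompass) % 360) + 360) % 360;
            }
        }
        return oldCompass;
    }
}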
I see that the issue was opened 5 months ago and probably isn't relevant anymore, but I'm sure other programmers might find it useful.
Oded Elyada.
This lowpass filter works for angles in radians. Use the add function for each compass reading, then call average to get the average.
import java.util.ArrayDeque;

public class AngleLowpassFilter {

    private final int LENGTH = 10;

    private float sumSin, sumCos;

    private ArrayDeque<Float> queue = new ArrayDeque<Float>();

    public void add(float radians) {
        sumSin += (float) Math.sin(radians);
        sumCos += (float) Math.cos(radians);
        queue.add(radians);
        if (queue.size() > LENGTH) {
            float old = queue.poll();
            sumSin -= Math.sin(old);
            sumCos -= Math.cos(old);
        }
    }

    public float average() {
        int size = queue.size();
        return (float) Math.atan2(sumSin / size, sumCos / size);
    }
}
Use Math.toDegrees() or Math.toRadians() to convert.
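A usage sketch (names are illustrative; the azimuth would be values[0] from SensorManager.getOrientation, which is already in radians):

// Sketch: smooth the azimuth before positioning the ImageView.
AngleLowpassFilter filter = new AngleLowpassFilter();

void onNewAzimuth(float azimuthRadians) {   // e.g. values[0] from getOrientation()
    filter.add(azimuthRadians);
    float smoothedDegrees = (float) Math.toDegrees(filter.average());
    // position the ImageView using smoothedDegrees
}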
Keep in mind that, for example, the average of 350 and 10 is not 180 but 0. My solution:
int difference = 0;
for (int i = 1; i < numberOfAngles; i++) {
    difference += ((angles[i] - angles[0] + 180 + 360) % 360) - 180;
}
averageAngle = (360 + angles[0] + (difference / numberOfAngles)) % 360;
A low pass filter (LPF) blocks fast-changing signals and allows only slow changes through. This means any small sudden changes will be ignored.
The standard way to implement this in software is to take a running average of the last N samples and report that value. Start with N as small as 3 and keep increasing N until you find a sufficiently smoothed-out response in your app. Do keep in mind that the higher you make N, the slower the response of the system.
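A rough sketch of that idea (a plain running average over the last N samples; note that for compass headings the wrap-around at 0/360 still needs the circular handling shown in the other answers; class and method names are illustrative):

import java.util.ArrayDeque;

// Minimal running-average filter: keep the last N samples and report their mean.
// Increase N for more smoothing (and more lag).
class MovingAverage {
    private final int n;
    private final ArrayDeque<Float> window = new ArrayDeque<Float>();
    private float sum;

    MovingAverage(int n) { this.n = n; }

    float add(float sample) {
        sum += sample;
        window.add(sample);
        if (window.size() > n) {
            sum -= window.poll();   // drop the oldest sample from the sum
        }
        return sum / window.size();
    }
}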
See my answer to this related question: Smoothing data from a sensor
A software low pass filter is basically a modified version of that. Indeed, in that answer I even provided this link to another related question: Low pass filter software?