2D Rectangle Collision detection in Android

I have many images that I need to place on a canvas over a long period of time so that they look random. However, I don't want any of the images to overlap with each other. My solution so far is to randomly place the image somewhere on the canvas. If it overlaps I'll generate a new random location to try.
Now the tricky part is to see if where I am about to place the image is going to overlap with another image.
I was going to make a large array of 1's and 0's and manually mark off where I put the images. However, I was wondering if anyone knew of a way to "auto detect" using a method if where I am about to place an image will overlap with an existing image? Or if there is a way to do collision detection using some Android function?

Checking to see if two rectangles overlap is really simple: just use the static Rect.intersects(Rect a, Rect b) method (the instance method Rect.intersect() also modifies the rectangle to the intersection, so prefer the static one for a pure check).
Check out the Rect docs for more information:
http://developer.android.com/reference/android/graphics/Rect.html
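For instance, here is a minimal sketch of the place-and-retry approach from the question, using Rect.intersects() for the overlap check. The field names (placedRects, random) and the retry limit are purely illustrative, and it assumes the image fits inside the canvas:
// Keep the bounds of every placed image; reject candidate positions that overlap any of them.
// Needs android.graphics.Rect and java.util.{ArrayList, List, Random}.
List<Rect> placedRects = new ArrayList<>();
Random random = new Random();

Rect findFreeSpot(int imageWidth, int imageHeight, int canvasWidth, int canvasHeight) {
    for (int attempt = 0; attempt < 1000; attempt++) {        // give up eventually
        int left = random.nextInt(canvasWidth - imageWidth);
        int top = random.nextInt(canvasHeight - imageHeight);
        Rect candidate = new Rect(left, top, left + imageWidth, top + imageHeight);
        boolean overlaps = false;
        for (Rect placed : placedRects) {
            if (Rect.intersects(candidate, placed)) {         // true if the rectangles overlap
                overlaps = true;
                break;
            }
        }
        if (!overlaps) {
            placedRects.add(candidate);
            return candidate;                                 // draw the image at this Rect
        }
    }
    return null;                                              // no free spot found
}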
Although I would recommend you try something different than what you have described above. In the beginning the probability of a collision will be very low, but as the screen fills up the probability of a collision will rise. This results in a lot of collisions and wasted computational power.
You should use something more efficient; off the top of my head you could try something like this (a rough code sketch follows the list):
1. Split the screen into a grid of size MxN
2. Keep a list of all unpopulated grid locations
3. Pick a random grid location for a new image i
4. Pick a random width and height for image i
5. If i intersects a grid location that is already populated, or if it goes off the screen, shrink it
6. Draw i
7. If all grid locations are taken, quit; else go to 3
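Purely as an illustration of those steps (a sketch, not tested code: the grid size, the image size measured in cells, and the crude "shrink to one cell" rule are all my own assumptions; you would map cells to pixel rectangles when actually drawing):
// Needs java.util.{ArrayList, List, Random}. Each cell is either free or taken,
// so overlap checking becomes a simple array lookup instead of rectangle tests.
int cols = 8, rows = 12;                                          // step 1: MxN grid
boolean[][] taken = new boolean[cols][rows];
List<int[]> freeCells = new ArrayList<>();                        // step 2: unpopulated cells
for (int c = 0; c < cols; c++) {
    for (int r = 0; r < rows; r++) {
        freeCells.add(new int[]{c, r});
    }
}
Random rng = new Random();
while (!freeCells.isEmpty()) {                                    // step 7: quit when full
    int[] cell = freeCells.remove(rng.nextInt(freeCells.size())); // step 3
    if (taken[cell[0]][cell[1]]) continue;                        // stale entry, already covered
    int w = 1 + rng.nextInt(3);                                   // step 4: random size, in cells
    int h = 1 + rng.nextInt(3);
    w = Math.min(w, cols - cell[0]);                              // step 5: keep it on screen...
    h = Math.min(h, rows - cell[1]);
    for (int c = cell[0]; c < cell[0] + w; c++) {                 // ...and shrink (crudely, to 1x1)
        for (int r = cell[1]; r < cell[1] + h; r++) {             // if it would hit a populated cell
            if (taken[c][r]) { w = 1; h = 1; }
        }
    }
    for (int c = cell[0]; c < cell[0] + w; c++) {                 // mark the covered cells
        for (int r = cell[1]; r < cell[1] + h; r++) {
            taken[c][r] = true;
        }
    }
    // step 6: draw image i into the pixel rectangle covered by these cells
}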

A simple 2D isInBox function could be:
boolean isInBox(int x1, int y1, int width1, int height1, int x2, int y2, int width2, int height2) {
    int right1 = x1 + width1;
    int right2 = x2 + width2;
    int bottom1 = y1 + height1;
    int bottom2 = y2 + height2;
    // Check if box 2's top-left point is inside box 1
    if (x2 >= x1 && x2 <= right1 && y2 >= y1 && y2 <= bottom1) return true;
    // Check if box 2's bottom-right point is inside box 1
    if (right2 >= x1 && right2 <= right1 && bottom2 >= y1 && bottom2 <= bottom1) return true;
    return false;
}
Not sure if it covers every case, though (it misses overlaps where neither of those two corners of box 2 falls inside box 1).
Or you could use Rect.intersects().

Related

How to trigger specific action if two circle meet on any of their point using View in Android?

I'm doing a simple Android animation using my self-customized View. I have two circles drawn in the onDraw() method of a class that extends View. One circle moves when dragged, using MotionEvent, while the other one stays static at a certain position. If the moving circle touches any point of the static circle, the color of the moving circle should change to the color of the static circle.
For example
int circle_radius = 50;
int circle1_x = 0;
int circle1_y = 0;
int circle2_x = 200;
int circle2_y = 200;
Let's assume that the moving circle, which is circle 1, is dragged and dropped onto a certain point of circle 2.
I tried using the formula below, but circle 1's color only changes if it goes to the exact location of circle 2.
if (circle1_x == circle2_x && circle1_y == circle2_y){
    paint.setColor(Color.RED);
}
I know the problem here is that a circle has many points on its circumference, but how can I trigger a specific action if one circle touches any point of another circle? Thanks.
You can simply calculate the distance between the centers of the two circles. If the distance is less than or equal to two times the radius (the sum of the two equal radii), the circles are intersecting. Calculating that is easy. You cannot expect to get a MotionEvent at the exact moment the distance equals twice the radius, so you have to check for a distance that is less than or equal to it:
int deltaX = circle1_x - circle2_x;
int deltaY = circle1_y - circle2_y;
if(Math.sqrt(Math.pow(deltaX, 2) + Math.pow(deltaY, 2)) <= 2 * circle_radius) {
paint.setColor(Color.RED);
}
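A small side note (mine, not part of the original answer): you can avoid Math.sqrt entirely by comparing squared distances, which behaves the same for this check:
// Equivalent check without Math.sqrt: compare squared distances.
int deltaX = circle1_x - circle2_x;
int deltaY = circle1_y - circle2_y;
int radiusSum = 2 * circle_radius; // sum of the two radii (they are equal here)
if (deltaX * deltaX + deltaY * deltaY <= radiusSum * radiusSum) {
    paint.setColor(Color.RED);
}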

How to display X and Y axis for XYPlot in AndroidPlot

Background
I'm developing an app for Android that plots data as a line graph using AndroidPlot. Because of the nature of the data, it's important that it be pannable and zoomable. I'm using AndroidPlot's sample code on bitbucket for panning and zooming, modified to allow panning and zooming in both X and Y directions.
Everything works as desired except that there are no X and Y axis lines. It is very disorienting to look at the data without them. The grid helps, but there's no guarantee that grid lines will actually fall on the axes.
To remedy this I have tried adding two series, one that falls on just the X axis and the other on the Y. The problem with this is that if one zooms out too far the axes simply end, and it becomes apparent that I have applied a 'hack'.
Question
Is it possible to add X and Y axis lines to AndroidPlot? Or will my sad hack have to do?
EDIT
Added tags
I figured it out. It wasn't trivial, took a joint effort with a collaborator, and sucked up many hours of our time.
Starting with the sample mentioned in my question, I had to extend XYPlot (which I called GraphView) and override the onPreInit method. Note that I have two PointF's, minXY and maxXY, that are defined in my overridden XYPlot and manipulated when I zoom or scroll.
@Override
protected void onPreInit() {
super.onPreInit();
final Paint axisPaint = new Paint();
axisPaint.setColor(getResources().getColor(R.color.MY_AXIS_COLOR));
axisPaint.setStrokeWidth(3); //or whatever stroke width you want
XYGraphWidget oldWidget = getGraphWidget();
XYGraphWidget widget = new XYGraphWidget(getLayoutManager(),
this,
new SizeMetrics(
oldWidget.getHeightMetric(),
oldWidget.getWidthMetric())) {
//We now override XYGraphWidget methods
RectF mGridRect;
@Override
protected void doOnDraw(Canvas canvas, RectF widgetRect)
throws PlotRenderException {
//In order to draw the x axis, we must obtain gridRect. I believe this is the only
//way to do so as the more convenient routes have private rather than protected access.
mGridRect = new RectF(widgetRect.left + ((isRangeAxisLeft())?getRangeLabelWidth():1),
widgetRect.top + ((isDomainAxisBottom())?1:getDomainLabelWidth()),
widgetRect.right - ((isRangeAxisLeft())?1:getRangeLabelWidth()),
widgetRect.bottom - ((isDomainAxisBottom())?getDomainLabelWidth():1));
super.doOnDraw(canvas, widgetRect);
}
@Override
protected void drawGrid(Canvas canvas) {
super.drawGrid(canvas);
if(mGridRect == null) return;
//minXY and maxXY are PointF's defined elsewhere. See my comment in the answer.
if(minXY.y <= 0 && maxXY.y >= 0) { //Draw the x axis
RectF paddedGridRect = getGridRect();
//Note: GraphView.this is the extended XYPlot instance.
XYStep rangeStep = XYStepCalculator.getStep(GraphView.this, XYAxisType.RANGE,
paddedGridRect, getCalculatedMinY().doubleValue(),
getCalculatedMaxY().doubleValue());
double rangeOriginF = paddedGridRect.bottom;
float yPix = (float) (rangeOriginF + getRangeOrigin().doubleValue() * rangeStep.getStepPix() /
rangeStep.getStepVal());
//Keep things consistent with drawing y axis even though drawRangeTick is public
//drawRangeTick(canvas, yPix, 0, getRangeLabelPaint(), axisPaint, true);
canvas.drawLine(mGridRect.left, yPix, mGridRect.right, yPix, axisPaint);
}
if(minXY.x <= 0 && maxXY.x >= 0) { //Draw the y axis
RectF paddedGridRect = getGridRect();
XYStep domianStep = XYStepCalculator.getStep(GraphView.this, XYAxisType.DOMAIN,
paddedGridRect, getCalculatedMinX().doubleValue(),
getCalculatedMaxX().doubleValue());
double domainOriginF = paddedGridRect.left;
float xPix = (float) (domainOriginF - getDomainOrigin().doubleValue() * domianStep.getStepPix() /
domianStep.getStepVal());
//Unfortunately, drawDomainTick has private access in XYGraphWidget
canvas.drawLine(xPix, mGridRect.top, xPix, mGridRect.bottom, axisPaint);
}
}
};
widget.setBackgroundPaint(oldWidget.getBackgroundPaint());
widget.setMarginTop(oldWidget.getMarginTop());
widget.setMarginRight(oldWidget.getMarginRight());
widget.setPositionMetrics(oldWidget.getPositionMetrics());
getLayoutManager().remove(oldWidget);
getLayoutManager().addToTop(widget);
setGraphWidget(widget);
//More customizations can go here
}
And that was that. I sure wish this was built into AndroidPlot; it'll be nasty trying to fix this when it breaks in an AndroidPlot update...

Draw a segmented circle in Android: OpenGL vs Canvas?

I need to draw something like this:
I was hoping that this guy posted some code of how he drew his segmented circle to begin with, but alas he didn't.
I also need to know which segment is where after interaction with the wheel - for instance if the wheel is rotated, I need to know where the original segments are after the rotation action.
Two questions:
Do I draw this segmented circle (with varying colours and content placed on the segment) with OpenGL or using Android Canvas?
Using either of the options, how do I register which segment is where?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EDIT:
Ok, so I've figured out how to draw the segmented circle using Canvas (I'll post the code as an answer), and I'm sure I'll figure out how to rotate the circle soon. But I'm still unsure how I'll recognize a separate segment of the drawn wheel after the rotation action.
What I'm thinking of doing is drawing the segmented circle with these wedges, and then sort of handling the entire Canvas as an ImageView when I want to rotate it as if it's spinning. But when the spinning stops, how do I differentiate between the original segments drawn on the Canvas?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I've read about how to draw a segment on its own (here also), OpenGL, Canvas and even drawing shapes and layering them, but I've yet to see someone explaining how to recognize the separate segments.
Can drawBitmap() or createBitmap() perhaps be used?
If I go with OpenGL, I'll probably be able to rotate the segmented wheel using OpenGL's rotation, right?
I've also read that OpenGL might be too powerful for what I'd like to do, so should I rather consider "the graphic components of a game library built on top of OpenGL"?
This kind of answers my first question above - how to draw the segmented circle using Android Canvas:
Using the code found here, I do this in the onDraw function:
// Starting values
private int startAngle = 0;
private int numberOfSegments = 11;
private int sweepAngle = 360 / numberOfSegments;
@Override
protected void onDraw(Canvas canvas) {
setUpPaint();
setUpDrawingArea();
colours = getColours();
Log.d(TAG, "Draw the segmented circle");
for (int i = 0; i < numberOfSegments; i++) {
// pick a colour that is not the previous colour
paint.setColor(colours.get(pickRandomColour()));
// Draw arc
canvas.drawArc(rectF, startAngle, sweepAngle, true, paint);
// Set variable values
startAngle -= sweepAngle;
}
}
This is how I set up the drawing area based on the device's screen size:
private void setUpDrawingArea() {
Log.d(TAG, "Set up drawing area.");
// First get the screen dimensions
Point size = new Point();
Display display = DrawArcActivity.this.getWindowManager().getDefaultDisplay();
display.getSize(size);
int width = size.x;
int height = size.y;
Log.d(TAG, "Screen size = "+width+" x "+height);
// Set up the padding
int paddingLeft = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
int paddingTop = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
int paddingRight = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
int paddingBottom = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
// Then get the left, top, right and bottom Xs and Ys for the rectangle we're going to draw in
int left = 0 + paddingLeft;
int top = 0 + paddingTop;
int right = width - paddingRight;
int bottom = width - paddingBottom;
Log.d(TAG, "Rectangle placement -> left = "+left+", top = "+top+", right = "+right+", bottom = "+bottom);
rectF = new RectF(left, top, right, bottom);
}
That (and the other functions, which are pretty straightforward, so I'm not going to paste their code here) draws this:
The segments are different colours with every run.
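As for telling which segment is which after the wheel has spun (the part still open above), one possible sketch - not from the original post - is to keep track of the total rotation applied to the wheel and map the touch point back into the un-rotated wheel. The fields centerX, centerY and rotationDegrees are assumptions here, and the index origin/direction has to be adjusted to match the drawing loop above (which decrements startAngle per segment):
private int segmentAt(float touchX, float touchY) {
    // Angle of the touch point around the wheel centre, in degrees. Both drawArc()
    // and atan2() in screen coordinates measure angles clockwise from 3 o'clock.
    double angle = Math.toDegrees(Math.atan2(touchY - centerY, touchX - centerX));
    // Undo the wheel's current rotation so the angle refers to the segments as drawn
    double unrotated = angle - rotationDegrees;
    // Normalise into [0, 360)
    unrotated = ((unrotated % 360) + 360) % 360;
    // Each segment covers sweepAngle degrees
    return (int) (unrotated / sweepAngle) % numberOfSegments;
}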

Detect whether a polygon is well formed or not (Google Map Android) [duplicate]

From the man page for XFillPolygon:
If shape is Complex, the path may self-intersect. Note that contiguous coincident points in the path are not treated as self-intersection.
If shape is Convex, for every pair of points inside the polygon, the line segment connecting them does not intersect the path. If known by the client, specifying Convex can improve performance. If you specify Convex for a path that is not convex, the graphics results are undefined.
If shape is Nonconvex, the path does not self-intersect, but the shape is not wholly convex. If known by the client, specifying Nonconvex instead of Complex may improve performance. If you specify Nonconvex for a self-intersecting path, the graphics results are undefined.
I am having performance problems with XFillPolygon and, as the man page suggests, the first step I want to take is to specify the correct shape of the polygon. I am currently using Complex to be on the safe side.
Is there an efficient algorithm to determine if a polygon (defined by a series of coordinates) is convex, non-convex or complex?
You can make things a lot easier than the Gift-Wrapping Algorithm... that's a good answer when you have a set of points w/o any particular boundary and need to find the convex hull.
In contrast, consider the case where the polygon is not self-intersecting, and it consists of a set of points in a list where the consecutive points form the boundary. In this case it is much easier to figure out whether a polygon is convex or not (and you don't have to calculate any angles, either):
For each consecutive pair of edges of the polygon (each triplet of points), compute the z-component of the cross product of the vectors defined by the edges pointing towards the points in increasing order. Take the cross product of these vectors:
given p[k], p[k+1], p[k+2] each with coordinates x, y:
dx1 = x[k+1]-x[k]
dy1 = y[k+1]-y[k]
dx2 = x[k+2]-x[k+1]
dy2 = y[k+2]-y[k+1]
zcrossproduct = dx1*dy2 - dy1*dx2
The polygon is convex if the z-components of the cross products are either all positive or all negative. Otherwise the polygon is nonconvex.
If there are N points, make sure you calculate N cross products, e.g. be sure to use the triplets (p[N-2],p[N-1],p[0]) and (p[N-1],p[0],p[1]).
If the polygon is self-intersecting, then it fails the technical definition of convexity even if its directed angles are all in the same direction, in which case the above approach would not produce the correct result.
This question is now the first item in either Bing or Google when you search for "determine convex polygon." However, none of the answers are good enough.
The (now deleted) answer by @EugeneYokota works by checking whether an unordered set of points can be made into a convex polygon, but that's not what the OP asked for. He asked for a method to check whether a given polygon is convex or not. (A "polygon" in computer science is usually defined [as in the XFillPolygon documentation] as an ordered array of 2D points, with consecutive points joined with a side as well as the last point to the first.) Also, the gift wrapping algorithm in this case would have the time-complexity of O(n^2) for n points - which is much larger than actually needed to solve this problem, while the question asks for an efficient algorithm.
@JasonS's answer, along with the other answers that follow his idea, accepts star polygons such as a pentagram or the one in @zenna's comment, but star polygons are not considered to be convex. As @plasmacel notes in a comment, this is a good approach to use if you have prior knowledge that the polygon is not self-intersecting, but it can fail if you do not have that knowledge.
@Sekhat's answer is correct but it also has the time-complexity of O(n^2) and thus is inefficient.
@LorenPechtel's added answer after her edit is the best one here but it is vague.
A correct algorithm with optimal complexity
The algorithm I present here has the time-complexity of O(n), correctly tests whether a polygon is convex or not, and passes all the tests I have thrown at it. The idea is to traverse the sides of the polygon, noting the direction of each side and the signed change of direction between consecutive sides. "Signed" here means left-ward is positive and right-ward is negative (or the reverse) and straight-ahead is zero. Those angles are normalized to be between minus-pi (exclusive) and pi (inclusive).
Summing all these direction-change angles (a.k.a. the deflection angles) together will result in plus-or-minus one turn (i.e. 360 degrees) for a convex polygon, while a star-like polygon (or a self-intersecting loop) will have a different sum (n * 360 degrees for n turns overall, for polygons where all the deflection angles are of the same sign). So we must check that the sum of the direction-change angles is plus-or-minus one turn.
We also check that the direction-change angles are all positive or all negative and not reverses (pi radians), all points are actual 2D points, and that no consecutive vertices are identical. (That last point is debatable--you may want to allow repeated vertices but I prefer to prohibit them.) The combination of those checks catches all convex and non-convex polygons.
Here is code for Python 3 that implements the algorithm and includes some minor efficiencies. The code looks longer than it really is due to the comment lines and the bookkeeping involved in avoiding repeated point accesses.
from math import pi, atan2  # imports needed by the function below
TWO_PI = 2 * pi
def is_convex_polygon(polygon):
"""Return True if the polynomial defined by the sequence of 2D
points is 'strictly convex': points are valid, side lengths non-
zero, interior angles are strictly between zero and a straight
angle, and the polygon does not intersect itself.
NOTES: 1. Algorithm: the signed changes of the direction angles
from one side to the next side must be all positive or
all negative, and their sum must equal plus-or-minus
one full turn (2 pi radians). Also check for too few,
invalid, or repeated points.
2. No check is explicitly done for zero internal angles
(180 degree direction-change angle) as this is covered
in other ways, including the `n < 3` check.
"""
try: # needed for any bad points or direction changes
# Check for too few points
if len(polygon) < 3:
return False
# Get starting information
old_x, old_y = polygon[-2]
new_x, new_y = polygon[-1]
new_direction = atan2(new_y - old_y, new_x - old_x)
angle_sum = 0.0
# Check each point (the side ending there, its angle) and accum. angles
for ndx, newpoint in enumerate(polygon):
# Update point coordinates and side directions, check side length
old_x, old_y, old_direction = new_x, new_y, new_direction
new_x, new_y = newpoint
new_direction = atan2(new_y - old_y, new_x - old_x)
if old_x == new_x and old_y == new_y:
return False # repeated consecutive points
# Calculate & check the normalized direction-change angle
angle = new_direction - old_direction
if angle <= -pi:
angle += TWO_PI # make it in half-open interval (-Pi, Pi]
elif angle > pi:
angle -= TWO_PI
if ndx == 0: # if first time through loop, initialize orientation
if angle == 0.0:
return False
orientation = 1.0 if angle > 0.0 else -1.0
else: # if other time through loop, check orientation is stable
if orientation * angle <= 0.0: # not both pos. or both neg.
return False
# Accumulate the direction-change angle
angle_sum += angle
# Check that the total number of full turns is plus-or-minus 1
return abs(round(angle_sum / TWO_PI)) == 1
except (ArithmeticError, TypeError, ValueError):
return False # any exception means not a proper convex polygon
The following Java function/method is an implementation of the algorithm described in this answer.
public boolean isConvex()
{
if (_vertices.size() < 4)
return true;
boolean sign = false;
int n = _vertices.size();
for(int i = 0; i < n; i++)
{
double dx1 = _vertices.get((i + 2) % n).X - _vertices.get((i + 1) % n).X;
double dy1 = _vertices.get((i + 2) % n).Y - _vertices.get((i + 1) % n).Y;
double dx2 = _vertices.get(i).X - _vertices.get((i + 1) % n).X;
double dy2 = _vertices.get(i).Y - _vertices.get((i + 1) % n).Y;
double zcrossproduct = dx1 * dy2 - dy1 * dx2;
if (i == 0)
sign = zcrossproduct > 0;
else if (sign != (zcrossproduct > 0))
return false;
}
return true;
}
The algorithm is guaranteed to work as long as the vertices are ordered (either clockwise or counter-clockwise), and you don't have self-intersecting edges (i.e. it only works for simple polygons).
Here's a test to check if a polygon is convex.
Consider each set of three points along the polygon--a vertex, the vertex before, the vertex after. If every angle is 180 degrees or less you have a convex polygon. When you figure out each angle, also keep a running total of (180 - angle). For a convex polygon, this will total 360.
This test runs in O(n) time.
Note, also, that in most cases this calculation is something you can do once and save — most of the time you have a set of polygons to work with that don't go changing all the time.
To test if a polygon is convex, every point of the polygon should be level with or behind each line.
Here's an example picture:
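A minimal sketch of that test (my own illustration, not code from the answer): walk every directed edge and check that all the other vertices lie on the same side of it. This is O(n^2), but easy to follow, and assumes an ordered, simple polygon:
// O(n^2) half-plane check: for each directed edge, every other vertex must lie on
// the same side of the edge's line. Names and signature are illustrative only.
static boolean isConvexHalfPlane(double[] xs, double[] ys) {
    int n = xs.length;
    if (n < 4) return true;                  // triangles and smaller are trivially convex
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        double ex = xs[j] - xs[i];           // edge vector
        double ey = ys[j] - ys[i];
        boolean leftSide = false, rightSide = false;
        for (int k = 0; k < n; k++) {
            if (k == i || k == j) continue;
            double cross = ex * (ys[k] - ys[i]) - ey * (xs[k] - xs[i]);
            if (cross > 0) leftSide = true;
            if (cross < 0) rightSide = true;
        }
        if (leftSide && rightSide) return false; // vertices on both sides of this edge
    }
    return true;
}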
The answer by @RoryDaulton seems the best to me, but what if one of the angles is exactly 0?
Some may want such an edge case to return True, in which case, change "<=" to "<" in the line:
if orientation * angle < 0.0: # not both pos. or both neg.
Here are my test cases which highlight the issue :
# A square
assert is_convex_polygon( ((0,0), (1,0), (1,1), (0,1)) )
# This LOOKS like a square, but it has an extra point on one of the edges.
assert is_convex_polygon( ((0,0), (0.5,0), (1,0), (1,1), (0,1)) )
The 2nd assert fails in the original answer. Should it?
For my use case, I would prefer it didn't.
This method would work on simple polygons (no self intersecting edges) assuming that the vertices are ordered (either clockwise or counter)
For an array of vertices:
vertices = [(0,0),(1,0),(1,1),(0,1)]
The following python implementation checks whether the z component of all the cross products have the same sign
def zCrossProduct(a,b,c):
return (a[0]-b[0])*(b[1]-c[1])-(a[1]-b[1])*(b[0]-c[0])
def isConvex(vertices):
if len(vertices)<4:
return True
signs= [zCrossProduct(a,b,c)>0 for a,b,c in zip(vertices[2:],vertices[1:],vertices)]
return all(signs) or not any(signs)
I implemented both algorithms in Java: the one posted by @UriGoren (with a small improvement - only integer math) and the one from @RoryDaulton. I had some problems because my polygon is closed, so both algorithms were considering the second one as concave when it was actually convex. So I changed them to prevent such a situation. My methods also use a base index (which may or may not be 0).
These are my test vertices:
// concave
int []x = {0,100,200,200,100,0,0};
int []y = {50,0,50,200,50,200,50};
// convex
int []x = {0,100,200,100,0,0};
int []y = {50,0,50,200,200,50};
And now the algorithms:
private boolean isConvex1(int[] x, int[] y, int base, int n) // Rory Daulton
{
final double TWO_PI = 2 * Math.PI;
// points is 'strictly convex': points are valid, side lengths non-zero, interior angles are strictly between zero and a straight
// angle, and the polygon does not intersect itself.
// NOTES: 1. Algorithm: the signed changes of the direction angles from one side to the next side must be all positive or
// all negative, and their sum must equal plus-or-minus one full turn (2 pi radians). Also check for too few,
// invalid, or repeated points.
// 2. No check is explicitly done for zero internal angles(180 degree direction-change angle) as this is covered
// in other ways, including the `n < 3` check.
// needed for any bad points or direction changes
// Check for too few points
if (n <= 3) return true;
if (x[base] == x[n-1] && y[base] == y[n-1]) // if its a closed polygon, ignore last vertex
n--;
// Get starting information
int old_x = x[n-2], old_y = y[n-2];
int new_x = x[n-1], new_y = y[n-1];
double new_direction = Math.atan2(new_y - old_y, new_x - old_x), old_direction;
double angle_sum = 0.0, orientation=0;
// Check each point (the side ending there, its angle) and accum. angles for ndx, newpoint in enumerate(polygon):
for (int i = 0; i < n; i++)
{
// Update point coordinates and side directions, check side length
old_x = new_x; old_y = new_y; old_direction = new_direction;
int p = base++;
new_x = x[p]; new_y = y[p];
new_direction = Math.atan2(new_y - old_y, new_x - old_x);
if (old_x == new_x && old_y == new_y)
return false; // repeated consecutive points
// Calculate & check the normalized direction-change angle
double angle = new_direction - old_direction;
if (angle <= -Math.PI)
angle += TWO_PI; // make it in half-open interval (-Pi, Pi]
else if (angle > Math.PI)
angle -= TWO_PI;
if (i == 0) // if first time through loop, initialize orientation
{
if (angle == 0.0) return false;
orientation = angle > 0 ? 1 : -1;
}
else // if other time through loop, check orientation is stable
if (orientation * angle <= 0) // not both pos. or both neg.
return false;
// Accumulate the direction-change angle
angle_sum += angle;
// Check that the total number of full turns is plus-or-minus 1
}
return Math.abs(Math.round(angle_sum / TWO_PI)) == 1;
}
And now from Uri Goren
private boolean isConvex2(int[] x, int[] y, int base, int n)
{
if (n < 4)
return true;
boolean sign = false;
if (x[base] == x[n-1] && y[base] == y[n-1]) // if its a closed polygon, ignore last vertex
n--;
for(int p=0; p < n; p++)
{
int i = base++;
int i1 = i+1; if (i1 >= n) i1 = base + i1-n;
int i2 = i+2; if (i2 >= n) i2 = base + i2-n;
int dx1 = x[i1] - x[i];
int dy1 = y[i1] - y[i];
int dx2 = x[i2] - x[i1];
int dy2 = y[i2] - y[i1];
int crossproduct = dx1*dy2 - dy1*dx2;
if (p == 0) // first iteration: initialize the sign
sign = crossproduct > 0;
else
if (sign != (crossproduct > 0))
return false;
}
return true;
}
For a non-complex (non-self-intersecting) polygon to be convex, the vector frames obtained from any two connected, linearly independent lines a, b must be point-convex; otherwise the polygon is concave.
For example, the lines a, b are convex to the point p in the upper case and concave to it in the lower case, i.e. above: p lies inside a, b; below: p lies outside a, b.
Similarly, for each polygon below, if each line pair making up a sharp edge is point-convex to the centroid c then the polygon is convex; otherwise it's concave.
Blunt edges (wronged green) are to be ignored.
N.B.
This approach requires you to compute the centroid of your polygon beforehand, since it employs vector algebra/transformations rather than angles.
I adapted Uri's code into MATLAB. Hope this may help.
Be aware that Uri's algorithm only works for simple polygons! So, be sure to test whether the polygon is simple first!
% M [ x1 x2 x3 ...
% y1 y2 y3 ...]
% test if a polygon is convex
function ret = isConvex(M)
N = size(M,2);
if (N<4)
ret = 1;
return;
end
x0 = M(1, 1:end);
x1 = [x0(2:end), x0(1)];
x2 = [x0(3:end), x0(1:2)];
y0 = M(2, 1:end);
y1 = [y0(2:end), y0(1)];
y2 = [y0(3:end), y0(1:2)];
dx1 = x2 - x1;
dy1 = y2 - y1;
dx2 = x0 - x1;
dy2 = y0 - y1;
zcrossproduct = dx1 .* dy2 - dy1 .* dx2;
% equality allows two consecutive edges to be parallel
t1 = sum(zcrossproduct >= 0);
t2 = sum(zcrossproduct <= 0);
ret = t1 == N || t2 == N;
end

Mapping A "Touch Region" to a Bitmap

I am trying to gain some more familiarity with the Android SurfaceView class, and in doing so am attempting to create a simple application that allows a user to move a Bitmap around the screen. The troublesome part of this implementation is that I am also including the functionality that the user may drag the image again after it has been placed. In order to do this, I am mapping the bitmap to a simple set of coordinates that define the Bitmap's current location. The region I am mapping the image to, however, does not match up with the image.
The Problem
After placing an image on the SurfaceView using canvas.drawBitmap(), and recording the coordinates of the placed image, the mapping system that I have set up misinterprets the Bitmap's coordinates somehow and does not display correctly. As you can see in this image, I have simply used canvas.drawLine() to draw lines representing the space of my touch region, and the image is always off and to the right:
The Code
Here, I shall provide the relevant code excerpts to help answer my question.
CustomSurface.java
This method encapsulates the drawing of the objects onto the canvas. The comments clarify each element:
public void onDraw(Canvas c){
//Simple black paint
Paint paint = new Paint();
//Draw a white background
c.drawColor(Color.WHITE);
//Draw the bitmap at the coordinates
c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
//Draws the actual surface that is receiving touch input
c.drawLine(g.left, g.top, g.right, g.top, paint);
c.drawLine(g.right, g.top, g.right, g.bottom, paint);
c.drawLine(g.right, g.bottom, g.left, g.bottom, paint);
c.drawLine(g.left, g.bottom, g.left, g.top, paint);
}
This method encapsulates how I capture touch events:
public boolean onTouchEvent(MotionEvent e){
switch(e.getAction()){
case MotionEvent.ACTION_DOWN:{
if(g.contains((int) e.getX(), (int) e.getY()))
item_selected = true;
break;
}
case MotionEvent.ACTION_MOVE:{
if(item_selected)
g.move((int) e.getX(), (int) e.getY());
break;
}
case MotionEvent.ACTION_UP:{
item_selected = false;
break;
}
default:{
//Do nothing
break;
}
}
return true;
}
Graphic.java
This method is used to construct the Graphic:
//Initializes the graphic assuming the coordinate is in the upper left corner
public Graphic(Bitmap image, int start_x, int start_y){
resource = image;
left = start_x;
top = start_y;
right = start_x + image.getWidth();
bottom = start_y + image.getHeight();
}
This method detects if a user is clicking inside the image:
public boolean contains(int x, int y){
if(x >= left && x <= right){
if(y >= top && y <= bottom){
return true;
}
}
return false;
}
This method is used to move the graphic:
public void move(int x, int y){
left = x;
top = y;
right = x + resource.getWidth();
bottom = y + resource.getHeight();
}
I also have 2 methods that determine the center of the region (used for redrawing):
public int getCenterX(){
return (right - left) / 2 + left;
}
public int getCenterY(){
return (bottom - top) / 2 + top;
}
Any help would be greatly appreciated, I feel as though many other StackOverflow users could really benefit from a solution to this issue.
There's a very nice and thorough explanation of touch/multitouch/gestures on the Android Developers blog, which includes a free and open-source code example at Google Code.
Please take a look. If you don't need gestures, just skip that part and read about touch events only.
This issue ended up being much simpler than I had thought, and after some tweaking I realized that this was an issue of image width compensation.
This line in the above code is where the error stems from:
c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
As you can tell, I manipulated the coordinates from within the Graphic class to produce the center of the bitmap, and then called canvas.drawBitmap() assuming that it would draw from the center outward.
Obviously, this would not work because the canvas always draws from the top left of an image downwards and to the right, so the solution was simple.
The Solution
Create the touch region with regard to the touch location, but draw the bitmap offset from the center location by half the image's width and height in the x and y directions. I basically changed the architecture of the Graphic class to implement getDrawX() and getDrawY() methods that return the modified x and y coordinates of where the bitmap should be drawn so that the center_x and center_y values (determined in the constructor) actually appear to be at the center of the region.
It all comes down to the fact that in an attempt to compensate for the way the canvas draws bitmaps, I unfortunately incorporated some bad behaviors and in the end had to handle the offset in a completely different way.
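For reference, a minimal sketch (my code, not the original poster's) of what those two accessors could look like inside the Graphic class; since drawBitmap() positions the top-left corner, the draw position is the stored center minus half the bitmap's width and height:
// Sketch only: offset the stored center by half the bitmap's dimensions so that
// the bitmap's visual center lands on (getCenterX(), getCenterY()).
public int getDrawX(){
    return getCenterX() - resource.getWidth() / 2;
}
public int getDrawY(){
    return getCenterY() - resource.getHeight() / 2;
}
// onDraw() would then call:
// c.drawBitmap(g.getResource(), g.getDrawX(), g.getDrawY(), null);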
