I'm making an application to track a vehicle based on GPS coordinates.
I created a SurfaceView to draw the field, the vehicle and the path (route) it travels.
The result looked like this:
The black dots represent the incoming GPS coordinates, and the blue rectangles are the area covered by the path traveled. (The width of the path is configurable.)
The way I'm drawing that covered area with blue rectangles is what my question is about.
Along the way I had to handle a few situations:
I needed to calculate the field's rotation angle so that the traveled path is always left behind the vehicle. (completed)
I needed to calculate the rotation angle of each rectangle so that it faces the direction of travel of the vehicle. (completed)
In the future I will also need to:
Detect when the vehicle passes twice over the same place (based on the path traveled).
Calculate the total area (m²) covered by the vehicle.
I would like some tips on how to draw this path.
My code:
public void draw(Canvas canvas) {
Log.d(getClass().getSimpleName(), "draw");
canvas.save();
// translate canvas to vehicle position
canvas.translate((float) center.cartesian(0), (float) center.cartesian(1));
float fieldRotation = 0;
if (trackerHistory.size() > 1) {
/*
Before drawing the path, take only the last position and find the rotation angle of the field.
*/
Vector lastPosition = new Vector(convertToTerrainCoordinates(lastPoint));
Vector preLastPosition = new Vector(convertToTerrainCoordinates(preLastPoint));
float shift = (float) lastPosition.distanceTo(preLastPosition);
/*
Treating the last segment as a right triangle, 'preLastPosition' holds the legs, while 'shift' is the hypotenuse
*/
// If the Y offset is negative, then the opposite side is the Y displacement
if (preLastPosition.cartesian(1) < 0) {
// dividing the opposite side by the hypotenuse, we have the sine of the angle that must be rotated.
double sin = preLastPosition.cartesian(1) / shift;
// when Y is negative, it is necessary to add or subtract 90 degrees depending on the value of X
// The "Math.asin()" calculates the radian arc to the sine previously calculated.
// And the "Math.toDegress()" converts degrees to radians from 0 to 360.
if (preLastPosition.cartesian(0) < 0) {
fieldRotation = (float) (Math.toDegrees(Math.asin(sin)) - 90d);
} else {
fieldRotation = (float) (Math.abs(Math.toDegrees(Math.asin(sin))) + 90d);
}
}
// if not, the opposite side is the X offset
else {
// dividing the opposite side by the hypotenuse gives the sine of the angle that must be rotated.
double senAngulo = preLastPosition.cartesian(0) / shift;
// The "Math.asin()" calculates the radian arc to the sine previously calculated.
// And the "Math.toDegress()" converts degrees to radians from 0 to 360.
fieldRotation = (float) Math.toDegrees(Math.asin(senAngulo));
}
}
final float dpiTrackerWidth = Navigator.meterToDpi(trackerWidth); // width of rect
final Path positionHistory = new Path(); // to draw the route
final Path circle = new Path(); // to draw the positions
/*
Iterate the historical positions and draw the path
*/
for (int i = 1; i < trackerHistory.size(); i++) {
Vector currentPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i))); // vector with X and Y position
Vector lastPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i - 1))); // vector with X and Y position
circle.addCircle((float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1), 3, Path.Direction.CW);
circle.addCircle((float) lastPosition.cartesian(0), (float) lastPosition.cartesian(1), 3, Path.Direction.CW);
if (isInsideOfScreen(currentPosition.cartesian(0), currentPosition.cartesian(1)) ||
isInsideOfScreen(lastPosition.cartesian(0), lastPosition.cartesian(1))) {
/*
Calculate the angle from the triangle sides
*/
float shift = (float) currentPosition.distanceTo(lastPosition);
Vector dif = lastPosition.minus(currentPosition);
float sin = (float) (dif.cartesian(0) / shift);
float degress = (float) Math.toDegrees(Math.asin(sin));
/*
Create a Rect to draw displacement between two coordinates
*/
RectF rect = new RectF();
rect.left = (float) (currentPosition.cartesian(0) - (dpiTrackerWidth / 2));
rect.right = rect.left + dpiTrackerWidth;
rect.top = (float) currentPosition.cartesian(1);
rect.bottom = rect.top - shift;
Path p = new Path();
Matrix m = new Matrix();
p.addRect(rect, Path.Direction.CCW);
m.postRotate(-degress, (float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1));
p.transform(m);
positionHistory.addPath(p);
}
}
// rotates the map to make the route down.
canvas.rotate(fieldRotation);
canvas.drawPath(positionHistory, paint);
canvas.drawPath(circle, paint2);
canvas.restore();
}
My goal is to have something like this application: https://play.google.com/store/apps/details?id=hu.zbertok.machineryguide (but only in 2D for now)
EDIT:
To clarify my question a bit more:
I do not have much experience with this. I would like a better way to draw the path; with rectangles it did not turn out very well. Note that in the curves there are some empty spaces.
Another point is the rotation of the rectangles: I'm rotating them at drawing time, and I believe this will make it difficult to detect overlaps.
I believe I need help with the math for rotating objects and detecting overlaps, and also with drawing the path as a filled shape.
After some time researching, I came to a successful outcome. I will describe my reasoning and the solution.
As I explained in the question, along the way I have the coordinates traveled by the vehicle, and also a setting for the width of the path to be drawn.
The LibGDX library provides a number of features ready to use, such as an orthographic camera to work with positioning, rotation, etc.
With LibGDX I converted the GPS coordinates into the side points of the traveled path. Like this:
The next challenge was to fill the path traveled. First I tried using rectangles, but the result was as shown in my question.
So the solution was to build triangles using the side points of the path as vertices. Like this:
Then simply fill in the triangles. Like this:
Finally, using the stencil buffer, I set up OpenGL to highlight overlaps. Like this:
Other issues fixed:
To calculate the covered area, sum the areas of the triangles along the path.
To detect overlapping, check whether the current vehicle position falls inside one of the triangles already drawn. (A sketch of both calculations follows below.)
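A minimal sketch of both calculations (plain Java with illustrative names; it assumes each slice of the path is stored as a triangle with vertices in metric coordinates, as described above):

static double triangleArea(double ax, double ay, double bx, double by, double cx, double cy) {
    // shoelace formula for the area of a triangle in metric coordinates
    return Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0;
}

static double sign(double px, double py, double ax, double ay, double bx, double by) {
    // which side of the edge (a -> b) the point p lies on
    return (px - bx) * (ay - by) - (ax - bx) * (py - by);
}

static boolean pointInTriangle(double px, double py,
                               double ax, double ay, double bx, double by, double cx, double cy) {
    double d1 = sign(px, py, ax, ay, bx, by);
    double d2 = sign(px, py, bx, by, cx, cy);
    double d3 = sign(px, py, cx, cy, ax, ay);
    boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos); // same side of all three edges -> inside (or on an edge)
}

Summing triangleArea over all triangles gives the covered area in m², and running pointInTriangle against the current GPS position flags an overlap.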
Thanks to:
AlexWien for the attention and for their time.
Conner Anderson for his LibGDX videos.
And a special thanks to Luis Eduardo for sharing his knowledge; he helped me a lot.
The sample source code.
Usually such a path is drawn using a "path" method from the graphics lib.
In that lib you can create a polyline, and give a line width.
You further specify how corners are filled (e.g. BEVEL or MITER joins).
The main question is whether the path is drawn while driving or afterwards.
Afterwards it is no problem.
Drawing while driving might be a bit tricky if you want to avoid redrawing the whole path every second.
When you use a Path with moveTo and lineTo to create a polyline, you can set a stroke width and the graphics lib will do all of that for you.
Then there will be no gaps, since it is a polyline.
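A minimal sketch of that approach on Android (it reuses trackerHistory, Vector, convertToTerrainCoordinates and dpiTrackerWidth from the question's code, so treat it as an outline rather than drop-in code):

Paint routePaint = new Paint();
routePaint.setStyle(Paint.Style.STROKE);
routePaint.setStrokeWidth(dpiTrackerWidth);   // configurable path width
routePaint.setStrokeJoin(Paint.Join.ROUND);   // fills the corners, no gaps
routePaint.setStrokeCap(Paint.Cap.ROUND);
routePaint.setAntiAlias(true);

Path route = new Path();
for (int i = 0; i < trackerHistory.size(); i++) {
    Vector p = new Vector(convertToTerrainCoordinates(trackerHistory.get(i)));
    if (i == 0) {
        route.moveTo((float) p.cartesian(0), (float) p.cartesian(1));
    } else {
        route.lineTo((float) p.cartesian(0), (float) p.cartesian(1));
    }
}
canvas.drawPath(route, routePaint);

Because the whole route is one stroked polyline, the join style fills the corners that per-segment rectangles leave empty.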
Related
I have an object that moves on a terrain and a third-person camera that follows it. After I move it for some distance in different directions, it begins to shake or vibrate, even if it is not moving and the camera is just rotating around it. This is the movement code of the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
//translate the object's matrix before rendering
matrix = glm::translate(matrix, t);
//get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and the camera rotation is
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
xoffset *= (delta * this->rotSpeed);
yoffset *= (delta * this->rotSpeed);
pitch += yoffset;
yaw += xoffset;
if (constrainpitch) {
if (pitch >= maxPitch) {
pitch = maxPitch;
yoffset = 0;
}
if (pitch <= minPitch) {
pitch = minPitch;
yoffset = 0;
}
}
glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
glm::mat4 rotX = glm::mat4_cast(Qx);
glm::mat4 rotY = glm::mat4_cast(Qy);
view = glm::translate(view, c);
view = rotX * view;
view = view * rotY;
view = glm::translate(view, -c);
}
float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems. But as you are on Android it might not be possible. For the GPU use floats again, as there are no 64-bit interpolators yet.
Big numbers are usually the problem
If your world is big, then you are passing big numbers into the equations, multiplying any errors, and only at the final stage is everything translated relative to the camera position, meaning the errors stay multiplied while the numbers get clamped, so the error/data ratio gets big.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations; just offset the positions.
This way you will only get larger errors far away from the camera, which with perspective is not really visible anyway... For more info see:
ray and ellipsoid intersection accuracy improvement
Use cumulative transform matrix instead of Euler angles
for more info see Understanding 4x4 homogenous transform matrices and all the links at bottom of that answer.
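A minimal sketch of the camera-relative offset idea (plain Java with illustrative values; the question's code is C++/GLM, so this only shows the principle): keep the absolute positions in double precision and convert only the small camera-relative offset to float.

// world positions kept in double precision (illustrative values)
double[] objectWorldPos = { 123456.789, 0.0, -987654.321 };
double[] cameraWorldPos = { 123450.000, 0.0, -987650.000 };

// subtract first, cast last: the offset is small, so float precision is enough
float relX = (float) (objectWorldPos[0] - cameraWorldPos[0]);
float relY = (float) (objectWorldPos[1] - cameraWorldPos[1]);
float relZ = (float) (objectWorldPos[2] - cameraWorldPos[2]);
// build the model matrix from (relX, relY, relZ) and use a view matrix whose
// translation part is zero, since the camera is now the origin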
This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera with small movements / rotations and it looks like a vibrating object / camera.
So what you can do is:
Check whether the movement is above a threshold value before calculating a new rotation for your camera.
When you are above this threshold, do a linear interpolation between the old and the new rotation using the lerp algorithm for the quaternion (see this Unity answer to get a better idea of how your code could look: Unity lerp discussion); a sketch follows below.
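A minimal sketch of that threshold-plus-interpolation idea (plain Java, quaternions as {x, y, z, w} float arrays, illustrative names; a normalized lerp is used as a cheap stand-in for a full slerp):

static final float MOVE_THRESHOLD = 0.01f;  // tune for your world scale

static float[] updateCameraRotation(float[] current, float[] target, float movedDistance, float t) {
    if (movedDistance < MOVE_THRESHOLD) {
        return current;                     // movement too small: keep the old rotation
    }
    // normalized linear interpolation (nlerp) between the two quaternions
    float dot = current[0] * target[0] + current[1] * target[1]
              + current[2] * target[2] + current[3] * target[3];
    float sign = dot < 0 ? -1f : 1f;        // take the shorter arc
    float[] out = new float[4];
    float len = 0f;
    for (int i = 0; i < 4; i++) {
        out[i] = current[i] * (1f - t) + sign * target[i] * t;
        len += out[i] * out[i];
    }
    len = (float) Math.sqrt(len);
    for (int i = 0; i < 4; i++) {
        out[i] /= len;
    }
    return out;
}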
I have an orthogonal perspective which I initialize like so:
gl.glViewport(0, 0, Constants.SCREEN_WIDTH, Constants.SCREEN_HEIGHT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0,Constants.GAME_AREA_WIDTH, Constants.GAME_AREA_HEIGHT, 0, 1, 10);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
What I want to do here is have a square start off the top of the screen (at, say, (x, -100, z)), and that square should descend (on y) while at the same time rotating (on z).
The square's upper-left is what I use as reference for the square's position.
Ok, now, I think I get how to rotate it around itself. I translate the thing to (-squareSize/2, -squareSize/2, z), rotate it along z, then translate back. And indeed, if I only test this rotation it works ok:
gl.glLoadIdentity();
angle = angle + 3;
if(angle>360) {
angle = angle - 360;
}
gl.glTranslatef(xCurrent+size/2, yCurrent+size/2,0);
gl.glRotatef(angle, 0, 0, 1);
gl.glTranslatef(-(xCurrent+size/2), -(yCurrent+size/2),0);
//omitted: enable client state, draw elements, disable client state.
With just this, no matter where I place my square (even small negative values for x and y which only make it partially show on the screen), it will rotate around its center.
However I can't figure out how to add the downwards translation on y. If I do something like this:
angle = angle + 3;
if(angle>360) {
angle = angle - 360;
}
gl.glTranslatef(xCurrent+size/2, yCurrent+size/2,0);
gl.glRotatef(angle, 0, 0, 1);
gl.glTranslatef(-(xCurrent+size/2), -(yCurrent+size/2),0);
yCurrent = yCurrent + realSpeed;
if(yCurrent>Constants.GAME_AREA_HEIGHT+size) {
yCurrent=-size;
}
gl.glTranslatef(0f, yCurrent,0f);
it will only work OK if my square starts at (0,0,z), in which case it will move down and rotate around its center.
If however I start it at any positive or negative non-zero value for either x or y, it will still move down, but do a weird spiral motion instead of rotating around its center.
The OpenGL matrix stack post-multiplies, which effectively means that you should do the most local transformation last.
So what you probably want to do is to perform a glTranslatef to the tile's current position, then do the translate/rotate/untranslate sequence to effect your rotation.
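A minimal sketch of that ordering (it assumes the square's vertices are defined around its own origin; xCurrent, yCurrent, size and angle are the question's variables):

gl.glLoadIdentity();
//most global transform first: move the square to its current position
gl.glTranslatef(xCurrent, yCurrent, 0f);
//then the local rotation about the square's own center
gl.glTranslatef(size / 2f, size / 2f, 0f);
gl.glRotatef(angle, 0f, 0f, 1f);
gl.glTranslatef(-size / 2f, -size / 2f, 0f);
//draw the square here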
Editor's Note: This answer was moved from a question edit, it is written by the Original Poster.
First off, what Tommy says in the answer below is right: I should first code the translation to the new position, and THEN add the lines of code that do translate/rotate/translate.
Also, the values I assign to x and y when wanting to translate the center of the square to coordinates (0,0,z) are simply wrong; I miscalculated them. The basic idea here is this. Let's say a square has the following vertices:
private static float xLeft = -0.75f;
private static float xRight = +0.25f;
private static float yTop = 2f;
private static float yBottom = 1f;
protected static float vertices[] = {
//x y z
xLeft, yTop, -5f, //Top left triangle1-1 triangle2-1
xRight, yTop, -5f, //Top right triangle1-2
xLeft, yBottom, -5f, //Bottom left triangle2-3
xRight, yBottom, -5f //Bottom right triangle1-3 triangle2-2
};
then the translation amounts needed to place this square's center at (0,0,z) are:
private float xCenterTranslation = (xRight+xLeft)/2f;
private float yCenterTranslation = (yTop+yBottom)/2f;
and the code for translating the square on the y axis while at the same time rotating it along its center is:
gl.glTranslatef(0, translationAmountLinearY, 0); //translate on y
//update Y translation for next rendering
translationAmountLinearY+=translationDeltaLinearY;
gl.glTranslatef(xCenterTranslation, yCenterTranslation, 0);//translate BACK from center
gl.glRotatef(rotationAmountZDegrees, 0, 0, 1);//rotate
gl.glTranslatef(-xCenterTranslation, -yCenterTranslation, 0);//translate to center
//increment z rotation for next rendering:
rotationAmountZDegrees+=0.04f;
I have created this post -> plot a real world lat lon into different angle still image map
from that, I can successfully mark the given lat/lon given the two (2) coordinates (upper left, lower right), but due to the incorrect angle of the still image map compared to the real-world map angle, my mark was displaced.
Now, I am thinking of using four (4) coordinates of the image (upper left, lower left, upper right, lower right), so that I could plot the given lat/lon without considering the angle.
I think even someone without Android experience could answer this question. I'm just a bit slow with the mathematics involved.
Is it possible to implement this? If yes, any guidance and code snippets are appreciated.
UPDATE 1
The main goal is to mark the given lat/lon on an image map which is at a different angle from the real-world map.
UPDATE 2
I am using the code below to compute my angle. Could you check whether it is reliable for getting the angle, and then converting it to pixels? NOTE: this code uses only two coordinates of the image plus the target coordinate.
public static double[] calc_xy (double imageSize, Location target, Location upperLeft, Location upperRight) {
double newAngle = -1;
try {
double angle = calc_radian(upperRight.getLongitude(), upperRight.getLatitude(),
upperLeft.getLongitude(), upperLeft.getLatitude(),
target.getLongitude(), target.getLatitude());
newAngle = 180-angle;
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
double upperLeft_Target_dist = upperLeft.distanceTo(target);
double upperLeft_Right_dist = upperLeft.distanceTo(upperRight);
double distancePerPx = imageSize /upperLeft_Right_dist;
double distance = upperLeft_Target_dist * distancePerPx;
double radian = newAngle * Math.PI/180;
double[] result = radToPixel(distance, radian);
return result;
}
public static double[] radToPixel(double distance, double radian) {
double[] result = {-1,-1};
result[Functions.Location.PIXEL_X_LON] = distance * Math.cos(radian);
result[Functions.Location.PIXEL_Y_LAT] = distance * Math.sin(radian);
return result;
}
public static double calc_radian(Double x1, Double y1, Double x2, Double y2, Double x3, Double y3)
throws Exception{
double rad = 0.0;
if((Double.compare(x1, x2) == 0 && Double.compare(y1, y2) == 0) ||
(Double.compare(x3, x2) == 0 && Double.compare(y3, y2) == 0))
{
Log.d(tag, "Same place") ;
return rad;
}
/* compute vector */
double BAx = x2 - x1;
double BAy = y2 - y1;
double BCx = x3 - x2;
double BCy = y3 - y2;
double cosA = BAx / Math.sqrt( BAx * BAx + BAy * BAy ) ;
double cosC = BCx / Math.sqrt( BCx * BCx + BCy * BCy ) ;
double radA = Math.acos( cosA ) * 180.0 / Math.PI ;
double radC = Math.acos( cosC ) * 180.0 / Math.PI ;
if( BAy < 0.0 )
{
radA = radA * -1.0 ;
}
if( BCy < 0.0 )
{
radC = radC * -1.0 ;
}
rad = radC - radA ;
if( rad > 180.0 )
{
rad = rad - 360;
}
if( rad < -180.0 )
{
rad = rad + 360;
}
return rad ;
}
This looks like you want to plot the user's current geo-location on an image of, say a building or campus. Assuming this, my approach would be to 'map' the still image to the screen which is likely to require a translation transform, a rotation transform and a scaling transform. In addition, you will need to know the actual geo-location coordinates of at least two points on your image. Given the image in your previous post, I would assume you have the geo coordinates of the bottom left corner and the bottom right corner. You already have the information to convert a geo coordinate into a screen coordinate so the image can be drawn matching up the bottom left corner of your image with the pixel coordinate which you've calculated. I will call this point your anchor point.
At this stage you probably have an image with one corner at the correct location but now it needs to be scaled down or up and then rotated about your anchor point. You can get the current zoom level from your mapView or you can get the latitudeSpan and you can calculate the scale factor to be applied to your image.
Lastly, if you have the geo coordinates of the two corners of the image, you can calculate the angle the image should be rotated. This can be calculated using Pythagoras, or you can convert from Cartesian coordinates to polar coordinates (see here). This calculation doesn't have to be done by your app; it can be calculated separately and put in as a constant. Now you can apply the rotation transform around your fixed anchor point.
You may also want to make use of handy built-in functions such as mapController.zoomInFixing() which takes pixel coordinates or one of the other zoomTo() or animateTo() functions.
Edit: If you're not using a mapview to manage your geo-coordinates then you can apply the image transformations using code like this:
// create a matrix for the manipulation
Matrix matrix = new Matrix();
// resize the bit map
matrix.postScale(scaleWidth, scaleHeight);
// rotate the Bitmap
matrix.postRotate(angle);
// recreate the new Bitmap
Bitmap resizedBitmap = Bitmap.createBitmap(bitmapOrg, 0, 0,
width, height, matrix, true);
// make a Drawable from Bitmap to allow to set the BitMap
// to the ImageView, ImageButton or what ever
BitmapDrawable bmd = new BitmapDrawable(resizedBitmap);
ImageView imageView = new ImageView(this);
// set the Drawable on the ImageView
imageView.setImageDrawable(bmd);
Edit: With the upper left and lower right coordinates, you can calculate the angle as follows:
angle = sin⁻¹((right.y - left.y) / sqrt((right.x - left.x)² + (right.y - left.y)²))
(where sqrt is the square root and sin⁻¹ is the inverse sine, i.e. Math.asin)
If you know the angle, it is a simple Cartesian rotation of the axes.
Let x be the old longitude,
y be the old latitude,
and b be the angle
The new longitude x' = x*cos(b) - y*sin(b)
The new latitude y' = x*sin(b) + y*cos(b)
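A minimal sketch of that rotation in code (illustrative Java; Math.sin and Math.cos expect the angle b in radians, so convert from degrees first if needed):

//rotate the point (x, y) by the angle b (radians) around the origin
static double[] rotate(double x, double y, double b) {
    double xNew = x * Math.cos(b) - y * Math.sin(b);
    double yNew = x * Math.sin(b) + y * Math.cos(b);
    return new double[] { xNew, yNew };
}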
I'm not sure I understand, but it seems to me like you want to calculate the angle of the image using two points, then rotate and resize it based on the number of pixels between points A and B (two corners), using the lat/lon-to-pixel conversion and the distance formula.
Can anyone help me draw a cylinder in OpenGL ES on Android? Whatever I draw looks like a rectangle.
I would appreciate any tips or links.
Here is the code I've tried:
int VERTICES=180; // more than needed
float coords[] = new float[VERTICES * 3];
float theta = 0;
for (int i = 0; i < VERTICES * 3; i += 3) {
coords[i + 0] = (float) Math.cos(theta);
coords[i + 1] = (float) Math.sin(theta);
coords[i + 2] = 0;
_vertexBuffer.put(coords[i + 0]);
_vertexBuffer.put(coords[i + 1]);
_vertexBuffer.put(coords[i + 2]);
theta += Math.PI / 90;
}
This will only draw a circle. A cylinder is more complicated, as you will need to define vertices in a second, translated z plane, and define them with correct normals (either facing in, as if you were inside the cylinder, i.e. a tunnel, or facing out, as when looking at a pipe), which is the trickier part.
I'm currently doing this now (which is what brought me here) and have the cylinder drawn, but I'm pretty sure my normals are incorrect as my lighting looks a bit off. I'll post some code when I figure it out.
Edit: realized the code also doesn't actually draw a circle. Here is how to do that (in 2D):
float R = 1.0f;                                      // radius of the circle
int NUM_VERTICES = 36;                               // number of vertices you want in the circle
float delta = (float) (2 * Math.PI / NUM_VERTICES);  // angle in radians between vertex definitions
float[] vertices = new float[NUM_VERTICES * 3];
for (int i = 0; i < NUM_VERTICES; i++) {
    float x = (float) (R * Math.cos(delta * i));
    float y = (float) (R * Math.sin(delta * i));
    vertices[i * 3]     = x;
    vertices[i * 3 + 1] = y;
    vertices[i * 3 + 2] = 0f;
}
//note: you may need to repeat the first vertex to close the circle, depending on which
//GL draw type you are using. If so, just take the argument to sin/cos to be 0 to complete the loop.
Last edit: I just realized I was overcomplicating the normals by reusing some calculate-normal-from-triangle code I had. The normal calculation is actually simple for a cylinder if you consider the origin (0,0) to be the center of each circular strip: the normal is the vertex position scaled to length 1. For normals facing in on a cylinder (i.e. a tunnel), the x,y values would be inverted (this is assuming you are looking down the -z axis).
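A minimal sketch of that wall-plus-normals construction (plain Java, reusing R, delta and NUM_VERTICES from the circle code above; height is an illustrative value):

//two rings of vertices form the cylinder wall; the outward normal of each side
//vertex is simply its (x, y) direction from the ring center, normalized
float height = 2.0f;
float[] sideVertices = new float[NUM_VERTICES * 2 * 3];
float[] sideNormals  = new float[NUM_VERTICES * 2 * 3];
for (int i = 0; i < NUM_VERTICES; i++) {
    float x = (float) (R * Math.cos(delta * i));
    float y = (float) (R * Math.sin(delta * i));
    float nx = x / R, ny = y / R;              // unit-length outward normal, z = 0
    int bottom = i * 6, top = bottom + 3;      // interleave bottom/top ring vertices
    sideVertices[bottom] = x;  sideVertices[bottom + 1] = y;  sideVertices[bottom + 2] = 0f;
    sideVertices[top]    = x;  sideVertices[top + 1]    = y;  sideVertices[top + 2]    = height;
    sideNormals[bottom]  = nx; sideNormals[bottom + 1]  = ny; sideNormals[bottom + 2]  = 0f;
    sideNormals[top]     = nx; sideNormals[top + 1]     = ny; sideNormals[top + 2]     = 0f;
    //for an inward-facing cylinder (a tunnel), negate nx and ny
}
//draw as a GL_TRIANGLE_STRIP; repeat the first pair of vertices to close the loop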
I'm trying to draw a rectangle with rounded corners. I have a javascript path that does this, but the javascript arcTo method takes a rectangle (to define its oval) and then one param which sets the sweep.
However, in the Android version there are three params: the rectangle oval (which I think I have defined correctly), and then the startAngle and sweepAngle (which I'm not understanding the usage of), but my arcs don't look anything like what I'm expecting when I noodle with how I'm guessing they should work.
Does anyone know of a good tutorial on this?
Specifically I'm trying to understand what would the two params look like if I was trying to draw an arc (on a clock face) from 12 - 3, and then assuming I had a line that ran down from the 3 and then needed to round the corner from 3 to 6 and so forth.
Here's my code (disregard the arc numbers in there now... that's just the latest iteration of my guessing at how this may work, having failed on the previous, more sensible attempts):
Path ctx = new Path();
ctx.moveTo(X+5,Y); //A
ctx.lineTo(X+W-5,Y);//B
ctx.arcTo(new RectF(X+W, Y, X+W, Y+5), -180, 90); //B arc
ctx.lineTo(X+W,Y+H-5); //C
ctx.arcTo(new RectF(X+W,Y+H,X+W-5,Y+H),90,180); //C arc
ctx.lineTo(X+W/2 +6,Y+H);
ctx.lineTo(X+W/2,Y+H+8);
ctx.lineTo(X+W/2-6,Y+H);
ctx.lineTo(X+5,Y+H);
ctx.arcTo(new RectF(X,Y+H,X,Y+H-5),180,270);
ctx.lineTo(X,Y+5);
ctx.arcTo(new RectF(X,Y,X+5,Y),270,0);
Paint p = new Paint();
p.setColor(0xffff00ff);
canvas.drawPath(ctx, p);
much obliged.
Odd that no one piped in with an answer; once I found it (it wasn't easy to find), it was really straightforward.
So, the way it works is this:
Assuming you want to draw a rounded corner at 12 - 3 (using clock reference):
you start your path, and when you need the line to arc, you define a rectangle whose upper left corner is the place where your line currently terminates and whose lower right corner is the place that you want the arc to go to. So if you imagine a square whose X,Y is 12 (on the clock) and whose X+W,Y+H is 3, that's the square you need.
Now, imagine that you have an oval in that square (in this example it's a circular oval; if you want your curve to be more oval-ish, then define your square as a rectangle). You can take any slice of that circle using the last two params of the method. The first param defines the angle where you want to start cutting. If we're using a compass, 0 degrees is East (not sure why, I'm not a geometry expert... is this normal? I always think of 0 as being North, but all the programming geometry examples I see have 0 as East; maybe someone will comment on why that is).
The second param defines how much of the circle you want. If you want the whole circle you put 360 if you want half the circle you put 180 etc.
So, in our case since we want to round the corner from 12 to 3, we put 270 as our starting degree and grab 90 degrees of the circle.
Lastly, when you're done with this process, the line now thinks of itself as being at 3pm so you can continue lineTo(ing) from there.
So... here's my fixed code for my shape (it has a little triangle in it, but that's neither here nor there; the actual rounded parts are B-C, D-E, I-J, and K-A, all the rest are straight lines).
int arc = 25;
public Cursor(int X, int Y, int W, int H){
/*
* A B
* K C
* J D
* I H F E
G
*/
int Ax = X+ arc;
int Ay = Y;
int Bx = X + W - arc;
int By = Y;
int Cx = X + W;
int Cy = Y + arc;
int Dx = Cx;
int Dy = (Y + arc) + (H - arc*2);
int Ex = Bx;
int Ey = Y + H;
int Fx = X+W/2 +6;
int Fy = Ey;
int Gx = X+W/2;
int Gy = Y+H+8;
int Hx = X+W/2-6;
int Hy = Ey;
int Ix = Ax;
int Iy = Hy;
int Jx = X;
int Jy = Dy;
int Kx = X;
int Ky = Cy;
Path ctx = new Path();
ctx.moveTo(Ax,Ay); //A
ctx.lineTo(Bx,By);//B
ctx.arcTo(new RectF(Bx, By, Cx, Cy), 270, 90); //B-C arc
ctx.lineTo(Dx,Dy); //D
ctx.arcTo(new RectF(Dx - arc, Dy, Ex + arc, Ey),0,90); //D-E arc
ctx.lineTo(Fx, Fy); //E-F
ctx.lineTo(Gx, Gy); //F-G
ctx.lineTo(Hx, Hy); //G-H
ctx.lineTo(Ix, Iy); //H - I
ctx.arcTo(new RectF(Jx, Jy, Ix, Iy),90,90);// I - J arc
ctx.lineTo(Kx, Ky); //K
ctx.arcTo(new RectF(Ax - arc, Ay, Kx + arc, Ky),180,90); //K - A arc
ctx.lineTo(Ax, Ay); //A
Paint p = new Paint();
p.setAntiAlias(true);
p.setColor(0xffffffff);
p.setStyle(Style.FILL);
canvas.drawPath(ctx, p);
p.setColor(0xff000000);
p.setStyle(Style.STROKE);
p.setStrokeWidth(3);
canvas.drawPath(ctx, p);
}
This answer visually explains all arcTo parameters using four examples.
arcTo takes the following parameters:
public void arcTo(RectF oval,
float startAngle,
float sweepAngle,
boolean forceMoveTo)
where RectF's constructor takes:
RectF(float left, float top, float right, float bottom)
(Hopefully this visualization is less painful and less mystifying than reading the official arcTo documentation.)
Thanks for this example, it makes the parameters very easy to understand.
From what I read in the dev docs of Android you can probably spare yourself some of the "lineTo()" calls (except those to points F,G,H), since arcTo automatically adds a lineTo when the first point of the arc is not the last point drawn...
As for why 0 starts at East: math and trigonometry lessons generally assume that the 0-degree mark is the point where the trigonometric circle (the circle with center (0,0) and radius 1) intersects the X-axis, which is East. These same lessons, however, generally count angles counter-clockwise, so 90 degrees becomes North and 270 South, whereas on Android the angles are counted clockwise.
Here's some sample code (pieced together from one of my classes) to draw a filled, rounded-corner rectangle and then add a stroked rectangle to give it a border:
//Initializing some stuff
_paint = new Paint();
_rect = new RectF();
_radius = 10;
_bgColor = 0xFFFFFFFF;
_borderColor = 0xFFCCCCCC;
//Doing dimension calculations
_rect.left = 0;
_rect.top = 0;
_rect.right = this.getWidth() - 1;
_rect.bottom = this.getHeight() - 1;
//painting
//draw the background
_paint.setColor(_bgColor);
_paint.setStyle(Style.FILL_AND_STROKE);
canvas.drawRoundRect(_rect, _radius, _radius, _paint);
//draw the border
_paint.setStrokeWidth(1);
_paint.setColor(_borderColor);
_paint.setStyle(Style.STROKE);
canvas.drawRoundRect(_rect, _radius, _radius, _paint);