Show FloorPlan and get Location with IndoorAtlas - Android

Is there any resource on how to use the IndoorAtlas SDK?
I'm getting confused about how to show the floor plan and get the current location.
Please kindly help me.

Here's very roughly how:
1) Initialize IndoorAtlas instance:
IndoorAtlas ia = IndoorAtlasFactory.createIndoorAtlas(context, listener, apiKey, apiSecret);
2) Obtain an instance of FloorPlan:
FutureResult<FloorPlan> result = ia.fetchFloorPlan(floorPlanId);
result.setCallback(new ResultCallback<FloorPlan>() {
    @Override
    public void onResult(final FloorPlan result) {
        mFloorPlan = result;
        loadFloorPlanImage(result);
    }
    // handle error conditions too
});
3) Obtain the actual image:
void loadFloorPlanImage(FloorPlan floorPlan) {
    BitmapFactory.Options options = createBitmapOptions(floorPlan);
    FutureResult<Bitmap> result = ia.fetchFloorPlanImage(floorPlan, options);
    result.setCallback(new ResultCallback<Bitmap>() {
        @Override
        public void onResult(final Bitmap result) {
            // now you have the floor plan bitmap, do something with it
            updateImageViewInUiThread(result);
        }
        // handle error conditions too
    });
}
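Note that createBitmapOptions is not an SDK call, just a local helper; here is a minimal sketch, assuming you only want to subsample very large floor plan images to save memory (floorPlan.dimensions is used the same way as in step 5):
private BitmapFactory.Options createBitmapOptions(FloorPlan floorPlan) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 1;
    // halve the decode resolution until the larger dimension fits into ~2048 px
    float maxDim = Math.max(floorPlan.dimensions[0], floorPlan.dimensions[1]);
    while (maxDim / options.inSampleSize > 2048f) {
        options.inSampleSize *= 2;
    }
    return options;
}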
4) Start positioning:
ia.startPositioning(venueId, floorId, floorPlanId);
5) Show the position on the floor plan:
public void onServiceUpdate(ServiceState state) {
    // get the position on the original floor plan image
    int i = state.getImagePoint().getI();
    int j = state.getImagePoint().getJ();
    // take into account how your floor plan image has been scaled
    // and draw the position
    PointF scaledPoint = new PointF();
    Utils.calculateScaledPoint((int) floorPlan.dimensions[0],
            (int) floorPlan.dimensions[1], i, j, mImageView, scaledPoint);
    drawNewPositionInUiThread(scaledPoint.x, scaledPoint.y);
}
Of course you can start positioning first and then obtain the image. You could also cache the image locally, but as said, this is roughly how it goes.
Utils.java:
public class Utils {

    /**
     * Calculates the scaling factor for an image with original dimensions of
     * {@code originalWidth x originalHeight} being displayed with {@code imageView}.
     *
     * The assumption with this example code is that a) layout has already been performed
     * for {@code imageView} and b) {@link android.widget.ImageView.ScaleType#CENTER_INSIDE}
     * is used.
     *
     * @param originalWidth width of the original bitmap to be displayed using {@code imageView}
     * @param originalHeight height of the original bitmap to be displayed using {@code imageView}
     */
    public static float calculateScaleFactor(int originalWidth, int originalHeight,
                                             ImageView imageView) {
        if (imageView.getScaleType() != ImageView.ScaleType.CENTER_INSIDE) {
            throw new IllegalArgumentException("only scale type of CENTER_INSIDE supported, was: "
                    + imageView.getScaleType());
        }
        final int availableX = imageView.getWidth()
                - (imageView.getPaddingLeft() + imageView.getPaddingRight());
        final int availableY = imageView.getHeight()
                - (imageView.getPaddingTop() + imageView.getPaddingBottom());
        if (originalWidth > availableX || originalHeight > availableY) {
            // original image would not fit without scaling; CENTER_INSIDE scales
            // uniformly, so use the smaller of the two ratios
            return Math.min(availableX / (float) originalWidth,
                    availableY / (float) originalHeight);
        } else {
            return 1f; // no scaling required
        }
    }

    /**
     * Calculates the point where to draw coordinates {@code x} and {@code y} in a bitmap whose
     * original dimensions were {@code originalWidth x originalHeight} and which may now be
     * scaled down as it's being displayed with {@code imageView}.
     *
     * @param originalWidth width of the original bitmap before any scaling
     * @param originalHeight height of the original bitmap before any scaling
     * @param x x-coordinate on the original bitmap
     * @param y y-coordinate on the original bitmap
     * @param imageView view that will be used to display the bitmap
     * @param point point where the result is to be stored
     * @see #calculateScaleFactor(int, int, ImageView)
     */
    public static void calculateScaledPoint(int originalWidth, int originalHeight,
                                            int x, int y,
                                            ImageView imageView,
                                            PointF point) {
        final float scale = calculateScaleFactor(originalWidth, originalHeight, imageView);
        final float scaledWidth = originalWidth * scale;
        final float scaledHeight = originalHeight * scale;
        // when the image inside the view is smaller than the view itself and the image
        // is centered (assumption), there will be some empty space around the image (offset)
        final float offsetX = Math.max(0, (imageView.getWidth() - scaledWidth) / 2);
        final float offsetY = Math.max(0, (imageView.getHeight() - scaledHeight) / 2);
        point.x = offsetX + (x * scale);
        point.y = offsetY + (y * scale);
    }
}
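For example, if the original floor plan bitmap is 1000x800 px and the position estimate maps to pixel (250, 400), the on-screen point could be computed like this (the view name is hypothetical):
PointF screenPoint = new PointF();
Utils.calculateScaledPoint(1000, 800, 250, 400, mImageView, screenPoint);
// screenPoint.x / screenPoint.y are now coordinates inside mImageView
// and can be used to draw the position marker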

Related

Rotated Dragshadow (Xamarin, Android)

I have a rotated TextView and I want to drag and drop this view.
The problem is that the drag shadow has no rotation.
I found a solution for Android in Java, but it does not work for me.
Maybe I translated the code wrong:
How to drag a rotated DragShadow?
class CustomDragShdowBuilder : View.DragShadowBuilder
{
    private View _view;

    public CustomDragShdowBuilder(View view)
    {
        _view = view;
    }

    public override void OnDrawShadow(Canvas canvas)
    {
        double rotationRad = Math.ToRadians(_view.Rotation);
        int w = (int)(_view.Width * _view.ScaleX);
        int h = (int)(_view.Height * _view.ScaleY);
        double s = Math.Abs(Math.Sin(rotationRad));
        double c = Math.Abs(Math.Cos(rotationRad));
        int width = (int)(w * c + h * s);
        int height = (int)(w * s + h * c);
        canvas.Scale(_view.ScaleX, _view.ScaleY, width / 2, height / 2);
        canvas.Rotate(_view.Rotation, width / 2, height / 2);
        canvas.Translate((width - _view.Width) / 2, (height - _view.Height) / 2);
        base.OnDrawShadow(canvas);
    }

    public override void OnProvideShadowMetrics(Point shadowSize, Point shadowTouchPoint)
    {
        shadowTouchPoint.Set(shadowSize.X / 2, shadowSize.Y / 2);
        base.OnProvideShadowMetrics(shadowSize, shadowTouchPoint);
    }
}
I found a solution for Android in Java, but it does not work for me. Maybe I translated the code wrong.
Yes, you are translating it wrong: you changed the code of OnDrawShadow, but you didn't pay attention to OnProvideShadowMetrics, which is what sets the size of the canvas drawing area. You need to pass it the same width and height that the rotation math in OnDrawShadow produces.
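To see why the canvas has to grow, here is the bounding-box math as a standalone check (plain Java; the numbers are made up):
// Bounding box of a 200x100 view rotated by 30 degrees:
double rad = Math.toRadians(30);
int w = 200, h = 100;
int width = (int) (w * Math.cos(rad) + h * Math.sin(rad)); // ~223
int height = (int) (w * Math.sin(rad) + h * Math.cos(rad)); // ~186
// The rotated view no longer fits into 200x100, so the shadow canvas must be
// 223x186, otherwise the corners get clipped.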
Here is the modified version of the drag shadow builder:
public class MyDragShadowBuilder : DragShadowBuilder
{
    private int width, height;

    // Defines the constructor for MyDragShadowBuilder
    public MyDragShadowBuilder(View v) : base(v)
    {
    }

    // Defines a callback that sends the drag shadow dimensions and touch point
    // back to the system.
    public override void OnProvideShadowMetrics(Android.Graphics.Point outShadowSize, Android.Graphics.Point outShadowTouchPoint)
    {
        double rotationRad = Java.Lang.Math.ToRadians(View.Rotation);
        int w = (int)(View.Width * View.ScaleX);
        int h = (int)(View.Height * View.ScaleY);
        double s = Java.Lang.Math.Abs(Java.Lang.Math.Sin(rotationRad));
        double c = Java.Lang.Math.Abs(Java.Lang.Math.Cos(rotationRad));
        // calculate the size of the canvas:
        // width = view's width * cos(rad) + view's height * sin(rad)
        width = (int)(w * c + h * s);
        // height = view's width * sin(rad) + view's height * cos(rad)
        height = (int)(w * s + h * c);
        outShadowSize.Set(width, height);
        // Sets the touch point's position to be in the middle of the drag shadow
        outShadowTouchPoint.Set(outShadowSize.X / 2, outShadowSize.Y / 2);
    }

    // Defines a callback that draws the drag shadow in a Canvas that the system
    // constructs from the dimensions passed to OnProvideShadowMetrics().
    public override void OnDrawShadow(Canvas canvas)
    {
        canvas.Scale(View.ScaleX, View.ScaleY, width / 2, height / 2);
        canvas.Rotate(View.Rotation, width / 2, height / 2);
        canvas.Translate((width - View.Width) / 2, (height - View.Height) / 2);
        base.OnDrawShadow(canvas);
    }
}
And here is the complete sample: RotatedTextViewSample

Crop rectangle from image issue

I need to take a picture from the camera in landscape orientation (capturing the full screen) with a rectangular mask overlay, and then crop that rectangle from the image.
I have some problems with cropping.
This is what I do (Android):
public Bitmap getCropImage() {
    Bitmap bitmap = getBitmap();
    int captureBitmapWidth = bitmap.getWidth();
    int captureBitmapHeight = bitmap.getHeight();
    // to get a multiplicative factor for each axis
    float xCoefficient = captureBitmapWidth / 720.0f;
    float yCoefficient = captureBitmapHeight / 480.0f;
    int cropRectangleWidth = 200;
    int cropRectangleHeight = 100;
    int cropRectangle_a_x = 200;
    int cropRectangle_a_y = 300;
    bitmap = Bitmap.createBitmap(bitmap,
            Math.round(cropRectangle_a_x * xCoefficient),
            Math.round(cropRectangle_a_y * yCoefficient),
            Math.round(cropRectangleWidth * xCoefficient),
            Math.round(cropRectangleHeight * yCoefficient));
    return bitmap;
}
iPhone:
extension UIImage {
    func crop() -> UIImage {
        // to get a multiplicative factor for each axis
        let xCoefficient: CGFloat = self.size.width / RectangleConfig.WIDTH
        let yCoefficient: CGFloat = self.size.height / RectangleConfig.HEIGTH
        var cropRect = CGRectMake(RectangleConfig.x * xCoefficient,
                                  RectangleConfig.y * yCoefficient,
                                  RectangleConfig.cropRectWidth * xCoefficient,
                                  RectangleConfig.cropRectHeight * yCoefficient)
        cropRect.origin.x *= self.scale
        cropRect.origin.y *= self.scale
        cropRect.size.width *= self.scale
        cropRect.size.height *= self.scale
        let imageRef = CGImageCreateWithImageInRect(self.CGImage, cropRect)
        let image = UIImage(CGImage: imageRef!, scale: self.scale, orientation: self.imageOrientation)
        return image
    }
}
On some mobile devices I get the correct result; on others the crop rectangle is shifted down and to the left. Where is the mistake?
My guess is that the problem with your code is that you always assume a 720x480 screen, which is not the case across different Android devices.
Maybe more like:
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
float xCoefficient = captureBitmapWidth / (float) metrics.widthPixels;
float yCoefficient = captureBitmapHeight / (float) metrics.heightPixels;
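Putting it together, a sketch of the crop method with display-relative coefficients, assuming the camera preview fills the screen and the mask rectangle is given in screen coordinates (the parameter names are hypothetical):
public Bitmap getCropImage(Activity activity, Bitmap bitmap,
                           int maskX, int maskY, int maskWidth, int maskHeight) {
    DisplayMetrics metrics = new DisplayMetrics();
    activity.getWindowManager().getDefaultDisplay().getMetrics(metrics);
    // scale factors from screen coordinates to captured-bitmap coordinates
    float xCoefficient = bitmap.getWidth() / (float) metrics.widthPixels;
    float yCoefficient = bitmap.getHeight() / (float) metrics.heightPixels;
    return Bitmap.createBitmap(bitmap,
            Math.round(maskX * xCoefficient),
            Math.round(maskY * yCoefficient),
            Math.round(maskWidth * xCoefficient),
            Math.round(maskHeight * yCoefficient));
}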

android-gpuimage - image straightening with GPUImageView

In my Android app I want to build an image straightening edit feature using the android-gpuimage library, but GPUImageView doesn't expose getBitmap() or setMatrix(), so how is this possible? Please let me know. Here is the code to review:
@Override
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
    if (isStraightenEffectEnabled) {
        float angle = (progress - 45);
        float width = mGPUImageView.getWidth();
        float height = mGPUImageView.getHeight();
        if (width > height) {
            width = mGPUImageView.getHeight();
            height = mGPUImageView.getWidth();
        }
        float a = (float) Math.atan(height / width);
        // the length from the center to the corner of the green
        float len1 = (width / 2) / (float) Math.cos(a - Math.abs(Math.toRadians(angle)));
        // the length from the center to the corner of the black
        float len2 = (float) Math.sqrt(Math.pow(width / 2, 2) + Math.pow(height / 2, 2));
        // compute the scaling factor
        float scale = len2 / len1;
        Matrix matrix = mGPUImageView.getMatrix();
        if (mMatrix == null) {
            mMatrix = new Matrix(matrix);
        }
        matrix = new Matrix(mMatrix);
        float newX = (mGPUImageView.getWidth() / 2) * (1 - scale);
        float newY = (mGPUImageView.getHeight() / 2) * (1 - scale);
        matrix.postScale(scale, scale);
        matrix.postTranslate(newX, newY);
        matrix.postRotate(angle, mGPUImageView.getWidth() / 2, mGPUImageView.getHeight() / 2);
        // Is it possible to do: mGPUImageView.setMatrix(matrix); ?
    }
}
getMatrix() is available on GPUImageView (inherited from View), but setMatrix() and getBitmap() are not methods of the GPUImageView class. Are there any workarounds?
Add a getGPUImage() accessor to the GPUImageView class:
public GPUImage getGPUImage() {
    return mGPUImage;
}
Then you can get the bitmap like this:
mGPUImageView.getGPUImage().getBitmapWithFilterApplied();
You can also get the bitmap like this:
Bitmap bitmap = mGPUImageView.capture();
You can also get the filtered bitmap via the renderer and a PixelBuffer. This might be helpful for you:
GPUImageLookupFilter amatorka = new GPUImageLookupFilter();
amatorka.setBitmap(BitmapFactory.decodeResource(getResources(), getResources().getIdentifier("fil_" + position, "drawable", getPackageName())));
GPUImageRenderer renderer = new GPUImageRenderer(amatorka);
renderer.setImageBitmap(bitmap, false);
PixelBuffer buffer = new PixelBuffer(80, 80);
buffer.setRenderer(renderer);
buffer.getBitmap();
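Note that none of the above actually applies your straightening matrix, since GPUImageView has no setMatrix(). One possible workaround (a sketch, not part of the library) is to bake the transform into the source bitmap and hand it back via setImage(), which GPUImageView does provide:
// Workaround sketch: apply the straighten transform to the source bitmap
// itself, then re-set it on the GPUImageView. Assumes mSourceBitmap holds
// the unmodified original image (hypothetical field).
private void applyStraighten(float angle, float scale) {
    Matrix matrix = new Matrix();
    matrix.postRotate(angle);
    matrix.postScale(scale, scale);
    Bitmap straightened = Bitmap.createBitmap(mSourceBitmap, 0, 0,
            mSourceBitmap.getWidth(), mSourceBitmap.getHeight(), matrix, true);
    mGPUImageView.setImage(straightened); // re-renders with the current filter
}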

Android bilinear image downscaling

I am trying to downscale a bitmap using bilinear filtering, but apparently there is something wrong in my code: the result looks better than nearest neighbour or Android's own downsampling, but not as good as ImageMagick's bilinear filter.
Do you see something wrong in my resize method?
private Bitmap resize(Bitmap immutable, int reqWidth, int reqHeight) {
    Bitmap bitmap = Bitmap.createBitmap(reqWidth, reqHeight, Config.ARGB_8888);
    float scaleFactor = immutable.getHeight() / (float) reqHeight;
    for (int x = 1; x < reqWidth - 1; x++) {
        for (int y = 1; y < reqHeight - 1; y++) {
            float sx = x * scaleFactor;
            float sy = y * scaleFactor;
            int rx = (int) (x * scaleFactor);
            int ry = (int) (y * scaleFactor);
            final int tl = immutable.getPixel(rx, ry);
            final int tr = immutable.getPixel(rx + 1, ry);
            final int bl = immutable.getPixel(rx, ry + 1);
            final int br = immutable.getPixel(rx + 1, ry + 1);
            float xC1 = sx - rx;
            float xC2 = 2 - xC1;
            float yC1 = sy - ry;
            float yC2 = 2 - yC1;
            xC1 /= 2;
            xC2 /= 2;
            yC1 /= 2;
            yC2 /= 2;
            final float firstAlpha = Color.alpha(tl) * xC2 + Color.alpha(tr) * xC1;
            final float firstRed = Color.red(tl) * xC2 + Color.red(tr) * xC1;
            final float firstBlue = Color.blue(tl) * xC2 + Color.blue(tr) * xC1;
            final float firstGreen = Color.green(tl) * xC2 + Color.green(tr) * xC1;
            final float secondAlpha = Color.alpha(bl) * xC2 + Color.alpha(br) * xC1;
            final float secondRed = Color.red(bl) * xC2 + Color.red(br) * xC1;
            final float secondGreen = Color.green(bl) * xC2 + Color.green(br) * xC1;
            final float secondBlue = Color.blue(bl) * xC2 + Color.blue(br) * xC1;
            int finalColor = Color.argb(
                    (int) (yC2 * firstAlpha + yC1 * secondAlpha),
                    (int) (yC2 * firstRed + yC1 * secondRed),
                    (int) (yC2 * firstGreen + yC1 * secondGreen),
                    (int) (yC2 * firstBlue + yC1 * secondBlue));
            bitmap.setPixel(x, y, finalColor);
        }
    }
    return bitmap;
}
Android applies a bilinear downscaling algorithm when BitmapFactory.decodeResource() is called with BitmapFactory.Options::inSampleSize set.
So a good downscaling algorithm (not a nearest-neighbour-like one) consists of just 2 steps (plus calculation of the exact Rect for input/output rectangle cropping):
1) downscale using BitmapFactory.Options::inSampleSize and BitmapFactory.decodeResource() as close as possible to the resolution that you need, but not below it
2) get to the exact resolution by downscaling a little bit more using Canvas::drawBitmap()
Here is a detailed explanation of how SonyMobile solved this task: http://developer.sonymobile.com/2011/06/27/how-to-scale-images-for-your-android-application/
And here is the source code of SonyMobile's scale utils: http://developer.sonymobile.com/downloads/code-example-module/image-scaling-code-example-for-android/
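A minimal sketch of that two-step approach (this is not SonyMobile's code; the helper name and the power-of-two loop are my own). Incidentally, the quality gap in your own resize method likely comes from the halved weights: after xC1 /= 2, the right and bottom neighbours never contribute more than half their correct share, biasing every output pixel toward the top-left neighbour.
public static Bitmap decodeScaled(Resources res, int resId, int dstWidth, int dstHeight) {
    // Step 1: decode bounds only, to learn the source dimensions.
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(res, resId, options);
    // Pick the largest power-of-two inSampleSize that still decodes the
    // image at least as large as the target.
    options.inSampleSize = 1;
    while (options.outWidth / (options.inSampleSize * 2) >= dstWidth
            && options.outHeight / (options.inSampleSize * 2) >= dstHeight) {
        options.inSampleSize *= 2;
    }
    options.inJustDecodeBounds = false;
    Bitmap decoded = BitmapFactory.decodeResource(res, resId, options);
    // Step 2: filtered draw onto a canvas of the exact target size.
    Bitmap result = Bitmap.createBitmap(dstWidth, dstHeight, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(result);
    Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
    canvas.drawBitmap(decoded, null, new Rect(0, 0, dstWidth, dstHeight), paint);
    return result;
}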

Google Maps API v2 draw part of circle on MapFragment

I need to draw something like this (a circle segment), painted with a little transparency.
It also needs to be clickable (onTouch events etc.).
I know that in API v1 you had to use an Overlay and extend it using a canvas and some mathematics.
What is the easiest way to do this in Google Maps API v2?
PS: The radius is variable.
EDIT 1 (for future reference):
I implemented a CanvasTileProvider subclass and overrode its onDraw() method:
@Override
void onDraw(Canvas canvas, TileProjection projection) {
    LatLng tempLocation = moveByDistance(mSegmentLocation, mSegmentRadius, mSegmentAngle);
    DoublePoint segmentLocationPoint = new DoublePoint(0, 0);
    DoublePoint tempLocationPoint = new DoublePoint(0, 0);
    projection.latLngToPoint(mSegmentLocation, segmentLocationPoint);
    projection.latLngToPoint(tempLocation, tempLocationPoint);
    float radiusInPoints = FloatMath.sqrt((float) (Math.pow(
            (segmentLocationPoint.x - tempLocationPoint.x), 2) + Math.pow(
            (segmentLocationPoint.y - tempLocationPoint.y), 2)));
    RectF segmentArea = new RectF();
    segmentArea.set((float) segmentLocationPoint.x - radiusInPoints,
            (float) segmentLocationPoint.y - radiusInPoints,
            (float) segmentLocationPoint.x + radiusInPoints,
            (float) segmentLocationPoint.y + radiusInPoints);
    canvas.drawArc(segmentArea, getAdjustedAngle(mSegmentAngle),
            getAdjustedAngle(mSegmentAngle + 60), true, getOuterCirclePaint());
}
Also, I added this in my MapActivity:
private void loadSegmentTiles() {
    TileOverlay tileOverlay = mMap.addTileOverlay(new TileOverlayOptions()
            .tileProvider(new SegmentTileProvider(new LatLng(45.000000, 15.000000), 250, 30)));
}
Now I'm wondering: why isn't my arc on the map?
For drawing the circle segments, I would register a TileProvider if the segments are mainly static. (Tiles are typically loaded only once and then cached.) For click events, you can register an OnMapClickListener and loop over your segments to check whether the clicked LatLng is inside one of them (see below for more details).
Here is a TileProvider example, which you can subclass, implementing just the onDraw method.
One important note: the subclass must be thread-safe! The onDraw method will be called by multiple threads simultaneously, so avoid any globals that are changed inside onDraw!
/* imports should be obvious */
public abstract class CanvasTileProvider implements TileProvider {
    private static final int TILE_SIZE = 256;
    private BitMapThreadLocal tlBitmap;

    @SuppressWarnings("unused")
    private static final String TAG = CanvasTileProvider.class.getSimpleName();

    public CanvasTileProvider() {
        super();
        tlBitmap = new BitMapThreadLocal();
    }

    // Warning: must be thread-safe. To still avoid creating lots of bitmaps,
    // I use a subclass of ThreadLocal!
    @Override
    public Tile getTile(int x, int y, int zoom) {
        TileProjection projection = new TileProjection(TILE_SIZE, x, y, zoom);
        Bitmap image = getNewBitmap();
        Canvas canvas = new Canvas(image);
        onDraw(canvas, projection);
        byte[] data = bitmapToByteArray(image);
        return new Tile(TILE_SIZE, TILE_SIZE, data);
    }

    /** Must be implemented by a concrete TileProvider. */
    abstract void onDraw(Canvas canvas, TileProjection projection);

    /**
     * Get an empty bitmap, which may however be reused from a previous call in
     * the same thread.
     */
    private Bitmap getNewBitmap() {
        Bitmap bitmap = tlBitmap.get();
        // Clear the previous bitmap
        bitmap.eraseColor(Color.TRANSPARENT);
        return bitmap;
    }

    private static byte[] bitmapToByteArray(Bitmap bm) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bm.compress(Bitmap.CompressFormat.PNG, 100, bos);
        return bos.toByteArray();
    }

    class BitMapThreadLocal extends ThreadLocal<Bitmap> {
        @Override
        protected Bitmap initialValue() {
            return Bitmap.createBitmap(TILE_SIZE, TILE_SIZE, Config.ARGB_8888);
        }
    }
}
Use the projection, which is passed into the onDraw method, first to get the bounds of the tile. If no segment lies inside the bounds, just return. Otherwise draw your segment into the canvas. The method projection.latLngToPoint helps you convert from LatLng to the pixels of the canvas.
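For example, the early-out at the top of onDraw could look roughly like this (a crude sketch that only tests the segment's centre; a real check should expand the tile bounds by the segment radius):
@Override
void onDraw(Canvas canvas, TileProjection projection) {
    LatLngBounds tileBounds = projection.getTileBounds();
    if (!tileBounds.contains(mSegmentLocation)) {
        return; // this tile does not touch the segment, nothing to draw
    }
    // ... otherwise convert with projection.latLngToPoint(...) and draw
}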
/** Converts between LatLng coordinates and the pixels inside a tile. */
public class TileProjection {
    private int x;
    private int y;
    private int zoom;
    private int TILE_SIZE;
    private DoublePoint pixelOrigin_;
    private double pixelsPerLonDegree_;
    private double pixelsPerLonRadian_;

    TileProjection(int tileSize, int x, int y, int zoom) {
        this.TILE_SIZE = tileSize;
        this.x = x;
        this.y = y;
        this.zoom = zoom;
        pixelOrigin_ = new DoublePoint(TILE_SIZE / 2, TILE_SIZE / 2);
        pixelsPerLonDegree_ = TILE_SIZE / 360d;
        pixelsPerLonRadian_ = TILE_SIZE / (2 * Math.PI);
    }

    /** Get the dimensions of the tile in LatLng coordinates. */
    public LatLngBounds getTileBounds() {
        DoublePoint tileSW = new DoublePoint(x * TILE_SIZE, (y + 1) * TILE_SIZE);
        DoublePoint worldSW = pixelToWorldCoordinates(tileSW);
        LatLng SW = worldCoordToLatLng(worldSW);
        DoublePoint tileNE = new DoublePoint((x + 1) * TILE_SIZE, y * TILE_SIZE);
        DoublePoint worldNE = pixelToWorldCoordinates(tileNE);
        LatLng NE = worldCoordToLatLng(worldNE);
        return new LatLngBounds(SW, NE);
    }

    /**
     * Calculate the pixel coordinates inside a tile, relative to the left upper
     * corner (origin) of the tile.
     */
    public void latLngToPoint(LatLng latLng, DoublePoint result) {
        latLngToWorldCoordinates(latLng, result);
        worldToPixelCoordinates(result, result);
        result.x -= x * TILE_SIZE;
        result.y -= y * TILE_SIZE;
    }

    private DoublePoint pixelToWorldCoordinates(DoublePoint pixelCoord) {
        int numTiles = 1 << zoom;
        return new DoublePoint(pixelCoord.x / numTiles, pixelCoord.y / numTiles);
    }

    /**
     * Transform the world coordinates into pixel coordinates relative to the
     * whole tile area (i.e. the coordinate system that spans all tiles).
     *
     * Takes the resulting point as parameter, to avoid creation of new objects.
     */
    private void worldToPixelCoordinates(DoublePoint worldCoord, DoublePoint result) {
        int numTiles = 1 << zoom;
        result.x = worldCoord.x * numTiles;
        result.y = worldCoord.y * numTiles;
    }

    private LatLng worldCoordToLatLng(DoublePoint worldCoordinate) {
        DoublePoint origin = pixelOrigin_;
        double lng = (worldCoordinate.x - origin.x) / pixelsPerLonDegree_;
        double latRadians = (worldCoordinate.y - origin.y) / -pixelsPerLonRadian_;
        double lat = Math.toDegrees(2 * Math.atan(Math.exp(latRadians)) - Math.PI / 2);
        return new LatLng(lat, lng);
    }

    /**
     * Get the coordinates in a system describing the whole globe in a
     * coordinate range from 0 to TILE_SIZE (type double).
     *
     * Takes the resulting point as parameter, to avoid creation of new objects.
     */
    private void latLngToWorldCoordinates(LatLng latLng, DoublePoint result) {
        DoublePoint origin = pixelOrigin_;
        result.x = origin.x + latLng.longitude * pixelsPerLonDegree_;
        // Truncating to 0.9999 effectively limits latitude to 89.189. This is
        // about a third of a tile past the edge of the world tile.
        double siny = bound(Math.sin(Math.toRadians(latLng.latitude)), -0.9999, 0.9999);
        result.y = origin.y + 0.5 * Math.log((1 + siny) / (1 - siny)) * -pixelsPerLonRadian_;
    }

    /** Return the value clamped to the range [min, max]. */
    private double bound(double value, double min, double max) {
        return Math.min(Math.max(value, min), max);
    }

    /** A point in an x/y coordinate system with coordinates of type double. */
    public static class DoublePoint {
        double x;
        double y;

        public DoublePoint(double x, double y) {
            this.x = x;
            this.y = y;
        }
    }
}
Finally you need something to check whether a clicked LatLng coordinate is inside one of your segments.
I would approximate each segment by a list of LatLng coordinates, where in your case a simple triangle may be sufficient. For each list of LatLng coordinates, i.e. for each segment, you can then call something like the following:
private static boolean isPointInsidePolygon(List<LatLng> vertices, LatLng point) {
    /*
     * The test is based on a horizontal ray, starting from point to the right.
     * If the ray is crossed by an odd number of polygon sides, the point is
     * inside the polygon, otherwise it is outside.
     */
    int i, j;
    boolean inside = false;
    int size = vertices.size();
    for (i = 0, j = size - 1; i < size; j = i++) {
        LatLng vi = vertices.get(i);
        LatLng vj = vertices.get(j);
        if ((vi.latitude > point.latitude) != (vj.latitude > point.latitude)) {
            /* The polygon side crosses the horizontal level of the ray. */
            if (point.longitude <= vi.longitude && point.longitude <= vj.longitude) {
                /* Start and end of the side are right of the point: the side crosses the ray. */
                inside = !inside;
            } else if (point.longitude >= vi.longitude && point.longitude >= vj.longitude) {
                /* Start and end of the side are left of the point: no crossing of the ray. */
            } else {
                double crossingLongitude = (vj.longitude - vi.longitude)
                        * (point.latitude - vi.latitude)
                        / (vj.latitude - vi.latitude) + vi.longitude;
                if (point.longitude < crossingLongitude) {
                    inside = !inside;
                }
            }
        }
    }
    return inside;
}
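Wiring the hit test up could look like this (the OnMapClickListener API is standard Maps v2; the segment list and the click handler are hypothetical):
mMap.setOnMapClickListener(new GoogleMap.OnMapClickListener() {
    @Override
    public void onMapClick(LatLng point) {
        // mSegmentOutlines: one List<LatLng> outline per drawn segment
        for (List<LatLng> outline : mSegmentOutlines) {
            if (isPointInsidePolygon(outline, point)) {
                onSegmentClicked(outline); // hypothetical handler
                break;
            }
        }
    }
});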
As you may see, I had a very similar task to solve :-)
Create a View, override its onDraw method to use drawArc on its canvas, and add it to your MapFragment. You can specify the radius in drawArc. Set an OnClickListener on the View (or onTouch; any listener you can use for normal views, really).
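A rough sketch of that approach (the color, angles and radius are made up; the arc geometry would come from your map projection):
public class ArcView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float radius = 250f; // pixels; make this settable as needed

    public ArcView(Context context) {
        super(context);
        paint.setColor(Color.argb(96, 0, 0, 255)); // semi-transparent fill
        paint.setStyle(Paint.Style.FILL);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float cx = getWidth() / 2f;
        float cy = getHeight() / 2f;
        RectF bounds = new RectF(cx - radius, cy - radius, cx + radius, cy + radius);
        canvas.drawArc(bounds, 30f, 60f, true, paint); // 60-degree wedge
    }
}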
