BoundingBox pixel coordinates for OSM - android

I have the lat/lon for the two BoundingBox corners
and I need to get the "Projection.toPixels()" for those two
points for use with OpenStreetMap.
I already have the basic functions, like getting the Tile
numbers for a lat/lon location, and given a lat/lon in a Tile
getting the X and Y pixels, but I can't find any code that
will help solve my BoundingBox problem.
It's a large BoundingBox, and at zoom 8 the tiles containing the BoundingBox corners are usually outside the display's normal grid of tiles, so the point on the "left" is something like X: -169, Y: -343
and the point on the "right" is like X: 841, Y: 849.
So I need a formula that, given the lat/lon of the BoundingBox corner
points, ends up with rectangle points like the pixel offsets above.
Currently I'm using Google Maps, and I can get its version of the
BoundingBox points with "Projection.toPixels()", but I'm
converting to use OpenStreetMap. They are both based on 256 x 256 tiles
and use the same projection, but I don't want to use any Google Maps API calls to do it.
I have seen some Google Maps-like source in JavaScript that looks pretty good, but nothing about using a BoundingBox.
I already have the display center lat/lon,
zoom level 8, and other needed data, but not the needed formula(s).
The goal is to call an Android draw() function like:
canvas.drawBitmap(
imageBitmap, // bitmap
null, // src rect
BBRect, // dest rect
paint); // paint
I'd like to see code that's similar to Java so I can understand it
enough to convert it if needed.
Thanks!

I think I finally have it "working".
This isn't the ideal way to do it, but it seems to "work".
For the proper solution, the Projection has to take the display
size into account. But I was thinking about it from the point of view
of the grid of tiles it drew to make up the display.
For my display, it used a grid of three columns
and five rows of tiles. You know the X and Y at which each tile was drawn,
you can figure out which corner tiles of the BoundingBox the lat/lon values fall in,
and you can get the X/Y pixel offsets for the lat/lon within each corner tile.
So you can just "extend" the X/Y tile draw points out to the corner tiles, then adjust for the pixel offsets in each corner tile, and you should have the rectangle points
for the BoundingBox.
My problem was that initially, when I tried to do that,
the image was a good ways off. I finally realized that
Android Canvas drawing coordinates are measured from the
top-left corner, but for whatever reason I was thinking they started from the bottom-left; once I re-calculated,
it finally drew the image in the right spot.
I'd like to be able to use the Projection, and properly
figure out how to get the toPixels(), but for now I think
this solution will work.
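For reference, here is the kind of math involved, sketched in plain Java. The method and class names below are mine, not part of any OSM or Google API: the standard Web Mercator formulas map a lat/lon to "global" pixel coordinates at a zoom level, and subtracting the display center's global pixels gives screen-relative offsets (which can be negative or off-screen, like the X: -169, Y: -343 in the question):

```java
// Sketch of toPixels()-style math for a 256x256-tile Web Mercator map.
// All names here are hypothetical, not from any real API.
public class MercatorSketch {
    static final int TILE_SIZE = 256;

    // Global pixel coordinates of a lat/lon at a given zoom level.
    static double[] toGlobalPixels(double lat, double lon, int zoom) {
        double worldPx = TILE_SIZE * Math.pow(2, zoom); // world size in pixels
        double x = (lon + 180.0) / 360.0 * worldPx;
        double latRad = Math.toRadians(lat);
        double y = (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI)
                   / 2.0 * worldPx;
        return new double[] { x, y };
    }

    // Screen-relative pixels given the display center lat/lon and screen size;
    // results may be negative or beyond the screen for a large BoundingBox.
    static int[] toScreenPixels(double lat, double lon,
                                double centerLat, double centerLon,
                                int zoom, int screenW, int screenH) {
        double[] p = toGlobalPixels(lat, lon, zoom);
        double[] c = toGlobalPixels(centerLat, centerLon, zoom);
        return new int[] { (int) Math.round(p[0] - c[0] + screenW / 2.0),
                           (int) Math.round(p[1] - c[1] + screenH / 2.0) };
    }
}
```

Calling toScreenPixels() for both BoundingBox corners would give the two points of the dest rect to pass to canvas.drawBitmap().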

Related

How to find corner points of any object in point cloud and find distance between corner points in inch/cm/m?

We are creating an app which is used for calculating the measurements of a window/door using a Project Tango device. For that we need to follow the steps below:
1. Capture Image (normal 2D image). Store this image.
2. Also capture point cloud while capturing the Image. Store the point cloud in a PCD file.
3. Indicate the position of the window/door in the image displayed on a canvas by drawing a rectangle on the image. See Image.
4. As the rectangle is drawn, automatically calculate the width and height of the window/door using the stored PointCloud data.
We have managed to do 1, 2 and 3.
For 4 we have two issues:
A. Determine the points in the PointCloud corresponding to the drawn rectangle, i.e. the window/door. We believe this involves determining the plane in which the window/door is located: e.g. assuming the axis along the depth (i.e. from the camera to the object) is the Z-axis, we need to determine the value(s) of Z that correspond to the plane in which the window/door is located in the PointCloud. How can this be done? Can someone please suggest a feasible and efficient way of doing so?
B. Once we determine the sub-PointCloud corresponding to the drawn rectangle find the distance between the minimum and maximum points along the X & Y axis to determine the width and height respectively. How can this be done?
Any help with demo code or app reference is appreciated.
Find the contour of the point cloud. Use iterative RANSAC to fit lines to the contour, then intersect the lines to get the corner points.
For 3D, compute the surface normals, then compute the curvature, which is a differential of the surface normal; the high-curvature points are corner points.
PCL (Point Cloud Library) has all of these functions.
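For issue B from the question, once the sub-PointCloud is isolated, the width/height part is just the extents along X and Y. A minimal sketch in Java, assuming the points have already been transformed into a frame where X/Y lie in the window's plane (that transformation is the hard part, per issue A; the method name is mine):

```java
public class ExtentSketch {
    // Width and height of a sub-point-cloud as the X and Y extents.
    // Assumes each point is {x, y, z} already expressed in the plane's frame.
    static float[] extentXY(float[][] points) {
        float minX = Float.POSITIVE_INFINITY, maxX = Float.NEGATIVE_INFINITY;
        float minY = Float.POSITIVE_INFINITY, maxY = Float.NEGATIVE_INFINITY;
        for (float[] p : points) {
            minX = Math.min(minX, p[0]);
            maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]);
            maxY = Math.max(maxY, p[1]);
        }
        return new float[] { maxX - minX, maxY - minY }; // { width, height }
    }
}
```

Tango point clouds are in meters, so the result can be converted to inches/cm directly.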

Draw using graph coordinates in android

In my Android app, I want to draw circles and then find their intersection points. Then I need to draw lines between the intersection points I got. I know how to implement this logic but don't know where to draw, like on a graph.
I tried to draw this on a MapView using GeoPoints, treating latitude as the "x" axis and longitude as the "y" axis. It's not as good as I expected (sometimes lines disappear when zooming).
Which feature of Android facilitates this? Guide me to implement this.
Thanks in advance...
You want a custom view. Custom views allow you to draw to a Canvas. Canvas has functions like drawLine and drawCircle. It takes x,y coordinates, with 0,0 being the upper left corner and x and y growing bigger as you go right and down. If you don't want 0,0 to be the upper left, you can always use some basic algebra to translate it wherever you want.
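The intersection math itself is plain geometry and independent of the Canvas, so it can live in ordinary Java and the results fed to drawLine. A sketch (the method name is mine; returns null when the circles don't intersect):

```java
public class CircleMath {
    // Intersection points of two circles (x0,y0,r0) and (x1,y1,r1),
    // or null if they don't intersect (or are concentric).
    static double[][] circleIntersections(double x0, double y0, double r0,
                                          double x1, double y1, double r1) {
        double dx = x1 - x0, dy = y1 - y0;
        double d = Math.hypot(dx, dy);
        if (d > r0 + r1 || d < Math.abs(r0 - r1) || d == 0) return null;
        double a = (r0 * r0 - r1 * r1 + d * d) / (2 * d); // distance along center line
        double h = Math.sqrt(r0 * r0 - a * a);            // perpendicular offset
        double mx = x0 + a * dx / d, my = y0 + a * dy / d;
        return new double[][] { { mx + h * dy / d, my - h * dx / d },
                                { mx - h * dy / d, my + h * dx / d } };
    }
}
```

In the custom view's onDraw you would drawCircle for each circle, then drawLine between the points this returns.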

Influence the Tile size in Android Maps TileProvider?

I have been playing around with the TileOverlay in Android Maps v2 and I have built a custom TileProvider very very similar to this one
But there is something that strikes me as odd. No matter which number I pass on to the Tile constructor, the image on the screen is always the same - 4 to 9 Tiles sharing the screen space evenly, like this:
Of course this is something you would expect from reading the documentation:
The coordinates of the tiles are measured from the top left (northwest) corner of the map. At zoom level N, the x values of the tile coordinates range from 0 to 2^N - 1 and increase from west to east, and the y values range from 0 to 2^N - 1 and increase from north to south.
But you might guess that there is in fact such a functionality from looking at the Constructors documentation
Constructs a Tile. Parameters
width the width of the image in pixels
height the height of the image in pixels
data A byte array containing the image data. The image will be created from this data by calling decodeByteArray(byte[], int, int).
So obviously I misunderstood something here. My personal guess is that the tiles have to cover an entire "map tile" and can therefore not be shrunk.
My goal would be to make my tiles about 10dp of the screen. Therefore again my question to you:
Can I realize this with TileOverlay or will I end up using custom Markers?
The size of the tile specified in the constructor is the size of (every) bitmap tile you are supplying to the map. This allows you to provide tiles at different densities for different screens if you have such resources.
It will not change the size of the image that is drawn on the map. The physical size of a map tile is defined by the zoom level, where a zoom level of 0 is a single tile covering the entire world, 1 is 2x2 tiles, etc. This is part of an open web map standard for map tiles, not defined by Google.
API docs:
https://developers.google.com/maps/documentation/android/tileoverlay
Ref:
http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/
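The "open web map standard" tiling mentioned above is easy to verify in code. These are the standard slippy-map formulas for the lat/lon of a tile's northwest corner (plain Java; not from any Google class):

```java
public class TileMath {
    // Longitude of the west edge of tile column x at the given zoom.
    static double tile2lon(int x, int zoom) {
        return x / Math.pow(2, zoom) * 360.0 - 180.0;
    }

    // Latitude of the north edge of tile row y at the given zoom
    // (Web Mercator, so it tops out near +/-85.05 degrees).
    static double tile2lat(int y, int zoom) {
        double n = Math.PI - 2.0 * Math.PI * y / Math.pow(2, zoom);
        return Math.toDegrees(Math.atan(Math.sinh(n)));
    }
}
```

At zoom 0 the single tile spans the whole world, which is why a Tile bitmap can never cover less than its map tile's fixed geographic footprint.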

OSMDroid PathOverlay drawing is corrupted at high zoom levels

I'm using OSMdroid to implement a mapping application.
I have implemented a custom MapTileProvider that uses a tile source that allows zoom levels up to 22.
The default MAPNIK provider only allows zooms to level 18.
The problem is that any PathOverlay instances draw perfectly up to zoom level 19, but
are not drawn properly at zoom levels 20-22. It looks like someone has rubbed out the path with an eraser over 90% of the path length (see screenshots below).
I've stepped through the draw() method of PathOverlay and everything seems to be calculating correctly (the intermediate points appear correct for zoom level 22, and the XY projections are then scaled down by the zoom difference (22 - ZoomLevel) to get current screen coordinates).
Can anyone provide some insight as to what the problem is, and how to resolve it?
The same thing happens if I invoke the MapView using Cloudmade small tiles, which allows zooms up until level 20 and is a 'built-in' osmDroid tile provider class.
//mMapTileProvider = new HighResMapTileProvider(this);
mMapTileProvider = new MapTileProviderBasic(this,TileSourceFactory.CLOUDMADESMALLTILES);
mMapView = new MapView(this, 256, mResourceProxy,mMapTileProvider);
So the problem does not appear to be with the tile source or provider but with the canvas drawing method. Any ideas on how to resolve this?
At zoomLevel 19 I can see my paths nicely:
But here is that same path at the next zoom level:
I've found a workaround. It's not ideal, but it does work.
Firstly, if I add a
canvas.drawCircle(screenPoint1.x, screenPoint1.y, 5, cpaint);
to PathOverlay's draw() method I see this, so I know the co-ordinates are at least being calculated correctly.
So the problem seems to be related to the underlying line draw method in Android.
After some trial and error, I found that setting the stroke width to 0.0 for the PathOverlay's Paint object fixes the problem, but the line is obviously only 1 pixel wide.
Adding a check for the current zoom level in PathOverlay.draw() to set the stroke width will keep the current behaviour for levels <20 and draw a hairline path for higher zoom levels.
Some other things I noticed:
The circles become squares at zoom levels 21 and 22. This strongly suggests there are floating-point precision issues when passing very large (x, y) coordinates to Path.lineTo / canvas.drawCircle etc., e.g. mPath.lineTo(131000001, 38000001)
Setting the stroke width to, say, 5 sort-of works up to zoom level 21, but the same problem crops up again at level 22
Update: This has been fixed in osmdroid 3.0.9.
Original Answer:
This issue appears to be rooted in Android. I believe it is due to a rounding error that happens when you scroll a view to a large offset (possibly due to the use of SkScalar). This bug can be isolated by creating a new Android project with an activity and a view:
Begin with the view at the origin with no scroll offset yet.
Draw a circle on the canvas: canvas.drawCircle(screenCenterX, screenCenterY, 100, mPaint)
Scroll the view to a large number: mainView.scrollTo(536870912, 536870912)
Draw a circle on the canvas: canvas.drawCircle(newScreenCenterX, newScreenCenterY, 100, mPaint)
The first circle draws ok, the second one draws distorted. For further evidence, try to draw your path near 0 lat/0 long and zoom in - notice the distortion no longer appears.
I will update the osmdroid ticket with some possible solutions/workarounds.
I am fairly sure this is because the default PathOverlay only draws the line between points in the view. If a point is outside of the current view, that line segment is not drawn. At lower zoom levels you just don't see that the bit of the line going off the view is not shown, as all the sections are small.
I think you have 2 options.
The easy, but maybe not the best, is to put more points into the path; then at least the problem will be less noticeable. Whether this is a good idea will depend on how many points you already have.
The correct solution would be to extend the class and write your own draw method, then clip the line segment that goes off the view to the view edge. Ideally you would contribute your code back to the source.
I have the same problem and I posted a bug on OSMDroid here:
http://code.google.com/p/osmdroid/issues/detail?can=2&start=0&num=100&q=221&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&groupby=&sort=&id=221
I'm not sure if it's a problem with OSMDroid or just a problem of too big a canvas.
Another workaround would be drawing on a new, smaller canvas (the size of the currently visible MapView) and using drawBitmap at the top left corner of the big canvas. Just remember not to create a new bitmap on each draw - that's expensive.
All shapes are drawn perfectly fine, but I'm struggling with another problem: panning is not smooth at levels > 18. You can see that when you pan the map it is not moving pixel by pixel as it does at levels < 18; it jumps over a few pixels.
I agree that this is an OSMDroid-related bug because I got the same DrawPath function working (zoom levels 21 and 22) using the Google MapView. I hope they will be able to address this issue.
To solve this problem, we need to implement a line clipping algorithm.
Clipping against the map viewport adds more vertices to the path,
so it will be drawn correctly at high zoom levels.
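For what it's worth, a sketch of such clipping in plain Java, using the Liang-Barsky algorithm (one of several standard choices; the method name is mine, not an osmdroid API). It returns the portion of a segment inside a rectangle, which keeps all coordinates passed to the Canvas small:

```java
public class ClipSketch {
    // Clip segment (x0,y0)-(x1,y1) to the rect [xmin,xmax]x[ymin,ymax]
    // (Liang-Barsky). Returns {cx0, cy0, cx1, cy1} or null if fully outside.
    static double[] clipSegment(double x0, double y0, double x1, double y1,
                                double xmin, double ymin, double xmax, double ymax) {
        double dx = x1 - x0, dy = y1 - y0;
        double[] p = { -dx, dx, -dy, dy };
        double[] q = { x0 - xmin, xmax - x0, y0 - ymin, ymax - y0 };
        double t0 = 0, t1 = 1; // parametric range kept inside the rect
        for (int i = 0; i < 4; i++) {
            if (p[i] == 0) {
                if (q[i] < 0) return null;  // parallel to this edge and outside
            } else {
                double t = q[i] / p[i];
                if (p[i] < 0) t0 = Math.max(t0, t); // entering the rect
                else          t1 = Math.min(t1, t); // leaving the rect
                if (t0 > t1) return null;           // no overlap left
            }
        }
        return new double[] { x0 + t0 * dx, y0 + t0 * dy,
                              x0 + t1 * dx, y0 + t1 * dy };
    }
}
```

In a custom draw() you would clip each path segment against the visible viewport before calling lineTo, avoiding the huge coordinates that trigger the precision bug.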

Android: Determine what pixels an Android Path occupies

I have a drawing program where a user can trace with their finger, and in a manner similar to the FingerPaint program, a series of Paths are drawn to represent the lines.
Now, I am doing some collision detection to allow the user to enter an 'erase' mode and delete selected lines, and am trying to determine how to track the individual pixels of the Path. Essentially, I am tracking the RectF that encompasses the Path, and if the RectF is intersected when in erase mode, I'd like to then do pixel-by-pixel intersection tests. So, I need to create some structure for storing the pixels, likely a two-dimensional array where each element will be a 1 or 0, based on whether or not the underlying pixel is occupied by the drawn Path.
It is this last part that I am struggling with. While the user is drawing the line, I am feeding the passed X/Y values in as control points for a quadratic Bezier curve via Path.quadTo(). The problem is that while Path uses these points to represent a continuous line, I am only being fed discontinuous X/Y points from the touch device. Essentially, I need a way to duplicate what the Path object itself is doing: take the passed X/Y points and interpolate them into a continuous curve, but as a set of X/Y coordinates rather than a Path object...
Any pointers to get started on this?
Thanks
EDIT/MORE:
Ok, so as I mentioned, each Path is created (roughly) using the method found in FingerPaint, which means that it is a series of segments, where each segment is a quadratic Bezier curve. Given that I know P0, P1 and P2 when I add these curved segments to the larger Path, I can determine the X/Y coordinates along the curve with the standard quadratic Bezier formula: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2, for t from 0 to 1.
So, my only problem now is determining a 'continuous' set of adjacent X/Y coordinates, such that there are no gaps in this set that a user's finger might pass through without hitting one. This would mean determining each X/Y point at 1-pixel intervals. As the above formula would yield points at an infinite number of intervals, given values of T ranging from 0 through 1, any idea how to programmatically determine the right values of T that will yield points at 1-pixel intervals?
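One hedged way to pick the T values: the speed of a quadratic Bezier, |B'(t)| = |2(1-t)(P1-P0) + 2t(P2-P1)|, is at most 2·max(|P0P1|, |P1P2|), so using that many uniform t-steps guarantees adjacent samples are at most ~1 pixel apart. A sketch (the method name is mine):

```java
public class BezierSample {
    // Sample a quadratic Bezier (P0, P1, P2) at roughly 1-pixel spacing.
    // |B'(t)| <= 2*max(|P0P1|, |P1P2|), so n uniform t-steps with
    // n = ceil(2*max leg) bounds the distance between adjacent samples by ~1.
    static float[][] samplePixels(float x0, float y0, float x1, float y1,
                                  float x2, float y2) {
        double l1 = Math.hypot(x1 - x0, y1 - y0);
        double l2 = Math.hypot(x2 - x1, y2 - y1);
        int n = Math.max(1, (int) Math.ceil(2 * Math.max(l1, l2)));
        float[][] pts = new float[n + 1][2];
        for (int i = 0; i <= n; i++) {
            float t = (float) i / n, u = 1 - t;
            pts[i][0] = u * u * x0 + 2 * u * t * x1 + t * t * x2; // B(t).x
            pts[i][1] = u * u * y0 + 2 * u * t * y1 + t * t * y2; // B(t).y
        }
        return pts;
    }
}
```

This over-samples a little (adjacent samples can land in the same pixel), but it is cheap and guarantees no gaps; rounding each point to ints and deduplicating gives the occupancy grid.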
I would do all collision detection using not curves or pixels, but just lines; it's much easier to find intersecting lines, i.e. to intersect the line between two sequential x/y coordinates of the user's swipe with the segments of the existing lines.
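A sketch of that segment-vs-segment test in plain Java (standard cross-product method; the name is mine):

```java
public class SegMath {
    // Intersection of segments a1-a2 and b1-b2, or null if none.
    static double[] segmentIntersection(double ax1, double ay1, double ax2, double ay2,
                                        double bx1, double by1, double bx2, double by2) {
        // d = cross(a2-a1, b2-b1); zero means parallel or collinear.
        double d = (ax2 - ax1) * (by2 - by1) - (ay2 - ay1) * (bx2 - bx1);
        if (d == 0) return null;
        double t = ((bx1 - ax1) * (by2 - by1) - (by1 - ay1) * (bx2 - bx1)) / d;
        double u = ((bx1 - ax1) * (ay2 - ay1) - (by1 - ay1) * (ax2 - ax1)) / d;
        if (t < 0 || t > 1 || u < 0 || u > 1) return null; // misses a segment
        return new double[] { ax1 + t * (ax2 - ax1), ay1 + t * (ay2 - ay1) };
    }
}
```

Testing each eraser-swipe segment against each stored line's segments avoids the per-pixel occupancy grid entirely.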
