I'm making an online map and trying to add touchable paths/tracks (I change their color when the user touches them). On one map I have 6-7 PathOverlays with onDown event handling added:
private class PathOverlayExtended extends PathOverlay {

    private long trackIndex;
    private HistoryDetailFragment fragment;

    public PathOverlayExtended(int color, Context ctx, long trackId, HistoryDetailFragment currentFragment) {
        super(color, ctx);
        trackIndex = trackId;
        fragment = currentFragment;
    }

    @Override
    public boolean onDown(final MotionEvent event, final MapView mapView) {
        fragment.onRoadClicked(trackIndex);
        return super.onDown(event, mapView);
    }
}
When I touch one path on the screen, it catches the event and then the event proceeds through every path. Importantly, it always starts from the same path (the one added to the Overlays list last).
When I replace "return super.onDown(event, mapView)" with "return true", only one path catches the event, and it is not the one I touched but the one added to the Overlays list last.
How can I determine which path was actually touched?
I implemented something similar for detecting a touch on filled Polygons.
It's using Android Regions.
The principle is to "put" the Path that has been drawn in a "Region":
region.setPath(mPath, new Region((int) bounds.left, (int) bounds.top, (int) bounds.right, (int) bounds.bottom));
Then you check if the touched point is in this region with:
region.contains(point.x, point.y);
No idea how this "contains" method is implemented, but it works, and seems quite efficient. Magic. I imagine it should also work for polylines.
You can look at the full code here:
http://code.google.com/p/osmbonuspack/source/browse/trunk/OSMBonusPack/src/org/osmdroid/bonuspack/overlays/Polygon.java
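As a rough sketch of that idea (assuming you already have the polyline as an android.graphics.Path in screen coordinates and the touch point in the same pixel space; the method name is just illustrative):
boolean isPointOnPath(Path screenPath, Point touchPoint) {
    // Bounding box of the path, used as the clip for the Region.
    RectF bounds = new RectF();
    screenPath.computeBounds(bounds, true);
    Region region = new Region();
    region.setPath(screenPath, new Region(
            (int) bounds.left, (int) bounds.top,
            (int) bounds.right, (int) bounds.bottom));
    // The actual hit test.
    return region.contains(touchPoint.x, touchPoint.y);
}
Note that Region treats the Path as a filled area, so for an open polyline you would probably want to thicken the line first (for example by converting the stroke to a fill with Paint.getFillPath()) before putting it into the Region.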
I couldn't find a ready-made solution to my problem, so in the end I did it the following way.
Firstly, I made my own PathOverlayExtended class which inherits from PathOverlay. Then I added some variables: the bounds of the path region (maximum and minimum latitude and longitude).
Secondly, I checked whether the tap coordinates fit within these bounds. That way, I keep only those paths that could plausibly be related to my tap.
Finally, I computed the distance from the tap coordinates to every line segment and chose the path with the smallest one. That's it.
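Roughly, the two checks look like this (only a sketch; minLat/maxLat/minLon/maxLon are the bounds fields I added to PathOverlayExtended, and the method names are illustrative):
// Step 1: cheap bounding-box test against the precomputed bounds of this path.
boolean boundsContain(double lat, double lon) {
    return lat >= minLat && lat <= maxLat && lon >= minLon && lon <= maxLon;
}
// Step 2: squared distance from the tap to one segment (x1,y1)-(x2,y2); the path whose
// nearest segment is closest to the tap is taken as the touched one.
double distanceToSegmentSquared(double px, double py,
                                double x1, double y1, double x2, double y2) {
    double dx = x2 - x1, dy = y2 - y1;
    double len2 = dx * dx + dy * dy;
    if (len2 == 0) {                      // degenerate segment: just a point
        double ddx = px - x1, ddy = py - y1;
        return ddx * ddx + ddy * ddy;
    }
    // Projection of the tap onto the segment, clamped to [0, 1].
    double t = Math.max(0, Math.min(1, ((px - x1) * dx + (py - y1) * dy) / len2));
    double projX = x1 + t * dx;
    double projY = y1 + t * dy;
    double ddx = px - projX, ddy = py - projY;
    return ddx * ddx + ddy * ddy;
}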
I used viesturz's answer, which helped me very much:
https://code.google.com/p/osmdroid/issues/detail?id=36
Thanks for all answers!
Related
I have an ImageView with a number of touch points on it. Basically, the app detects a swipe between 2 touch points and must not allow the user to swipe to any other point or in any other direction; it should restrict the user to swiping only between two touch points.
Just take a look at the following picture:
Now the user should start swiping from point 1 to point 2. If the swipe is not started from point 1, it should not color the path between point 1 and point 2.
But if the user successfully swipes from point 1 to point 2, the swipe from point 2 to point 3 should then be enabled. Thus the user should go through point 1 to 2, point 2 to 3, point 3 to 4, and point 4 to 5 to complete round 1.
Please tell me how to achieve this functionality. I know about gestures, gesture overlays etc., but none of them fit my requirement, as they use general touch events and gesture directions.
Please suggest a way to achieve this, and keep in mind that I want this app to run on all types of devices, so I cannot simply use hard-coded x,y values.
Edit (on demand):
I am posting the link to an app on the Play Store that has the same functionality, but I do not know how they achieve it:
https://play.google.com/store/apps/details?id=al.trigonom.writeletters
If each touch point can be created as an individual view (e.g. an ImageView), then you can create an inViewInBounds() function.
I used code from here to create code where I needed to detect finger-press movement over multiple ImageViews:
Rect outRect = new Rect();
int[] location = new int[2];
// Determine if a touch movement is currently over a given view.
private boolean inViewInBounds(View view, int x, int y) {
    view.getDrawingRect(outRect);
    view.getLocationOnScreen(location);
    outRect.offset(location[0], location[1]);
    return outRect.contains(x, y);
}
To use this function, on a parent view to all those child touch points, set the Click and Touch Listeners:
//This might look redundant but is actually required: "The empty OnClickListener is required to keep OnTouchListener active until ACTION_UP"
parentView.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {}
});
//All the work gets done in this function:
parentView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        int x = (int) event.getRawX();
        int y = (int) event.getRawY();
        // ** myTouchPoint might be an array that you loop through here...
        if (inViewInBounds(myTouchPoint, x, y)) doLogic(myTouchPoint);
        return false;
    }
});
The code above only shows how to detect when one of your views is 'touched'.
If none are 'touched' but a view is 'active' (e.g. when a touch is detected, set a variable like viewLastTouched = myTouchPoint), then you would call something like a drawingLine(viewLastTouched, x, y) function to do whatever is needed to draw the line and/or detect boundaries, etc.
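For the sequencing part of the original question, a rough sketch of how the pieces could fit together (touchPoints, nextIndex and drawSegment() are hypothetical names; each numbered point is assumed to be its own view, and inViewInBounds() is the function from above):
// Hypothetical state: the point views in the order they must be connected,
// and the index of the point the user has to reach next.
private View[] touchPoints;
private int nextIndex = 0;
// e.g. inside onCreate(), after the views are wired up:
parentView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        int x = (int) event.getRawX();
        int y = (int) event.getRawY();
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // The round must start on point 1; starting anywhere else is ignored.
            if (nextIndex == 0 && inViewInBounds(touchPoints[0], x, y)) {
                nextIndex = 1;
            }
        } else if (event.getAction() == MotionEvent.ACTION_MOVE) {
            // While dragging, only the next point in the sequence is accepted.
            if (nextIndex > 0 && nextIndex < touchPoints.length
                    && inViewInBounds(touchPoints[nextIndex], x, y)) {
                drawSegment(nextIndex - 1, nextIndex);   // hypothetical: color that segment
                nextIndex++;
            }
        }
        return false;
    }
});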
They are not using native Android Java code to build this app.
The app runs with this code:
import Runtime.MMFRuntime;
public class Main extends MMFRuntime {
}
This in turn is from https://github.com/ClickteamLLC/android/blob/master/docs/index.md
This is used to package apps/games written with http://www.clickteam.com/clickteam-fusion-2-5
I am attempting to translate an object depending on the touch position of the user.
The problem is that when I test it out, the object disappears as soon as I drag my finger on my phone screen. I'm not entirely sure what's going on.
If somebody can guide me that would be great :)
Thanks
This is the Code:
#pragma strict
function Update () {
    for (var touch : Touch in Input.touches) {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world, it's a point on the touch screen. What you'll probably want is Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply plugging the above into it still won't work as you intend.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
You'll want to ask this question at Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
There are very few people who come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem, touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position and the screen world point. With this vector you can either intersect some geometry in the scene or simply use a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
I have a Path which is created from a JSON object. I pass the JSON object to my overlay in its constructor.
Objects that extend Overlay have draw() called any time the map is moved, rendered, or touched, even if your overlay isn't visible or nearby.
I want to generate my Path object in a for loop, actually a nested loop, since the JSON object contains nested JSON arrays. This is computationally expensive, so I don't want to do it in the draw() method. Instead I tried to do the logic in my constructor, building the Path there, and then use that Path only once in the draw() method where required: canvas.drawPath(path, mPaint);
Unfortunately, when I create my Path in the constructor, it does not pan with the map. But when I create it with the exact same code in the draw() method, it behaves as desired: a path that fences off a portion of the map.
The problem is that the draw() method will then call my double for loop over and over again, and the performance hit is obvious and debilitating. Even putting the loop in a new Thread() within draw() does not help performance. Running it in the constructor would be ideal, but then the path does not pan with the map.
Similarly, putting a private boolean flag in the draw() method, so that the path-building code runs only once, does not work either: the path will not appear on the map unless it is constantly redrawn, which is too arduous a task.
The problem with other answers on this site regarding this issue is that people are drawing squares, circles and images, which only require one call within draw(), not a loop to generate the path.
Any suggestions? Something about the draw() method makes overlays stick to the map; how can I run my for loop only once?
A few considerations for you:
You should not be overriding the draw() method. The Overlay class already takes care of the whole panning operation for you; read the docs and you'll find it.
The new API is a few thousand times easier to use, with simple code such as:
PolylineOptions p = new PolylineOptions().width(3).color(0xFFEE8888);
for ( /* ... first loop ... */ ) {
    for ( /* ... as many levels as you need to nest ... */ ) {
        // parse your JSON and get a lat/long pair
        p.add(new LatLng(lat, lon));
    }
}
getMap().addPolyline(p);
and let the map class deal with the panning and zooming.
Although you have already accepted an answer, here is the approach I use for this. In the example code below I'm using the old MapView, but the concept should work with any version. I'm also using it with Mapsforge (with minimal adjustments).
Concept
Build your path on the first draw() call, and record the screen position of point(0).
On the next draw(), if the position of point(0) has changed, the map has moved. Offset the path by the same amount before drawing it.
If the zoom changed, recreate the path object.
Performance
With a 10,000-point path on a mid-range device it takes about 2 ms per cached draw() call. When the path needs to be rebuilt (when the zoom changes) it takes about 80 ms.
Of course you can also cache the paths for the different zoom levels, trading some more memory for a little more performance (see the sketch after the example code).
Example Code
The draw() method only checks whether there is a zoom change (if so, it asks for the path to be rebuilt) and whether the map has moved (if so, it offsets the path), and finally draws the path.
@Override
public void draw(Canvas canvas, MapView mapview, boolean shadow) {
    super.draw(canvas, mapview, shadow);
    if (shadow) return;
    if (mp.getPoints() == null || mp.getPoints().size() < 2) return;
    Projection projection = mapview.getProjection();
    int lonSpanNew = projection.fromPixels(0, mapview.getHeight() / 2).getLongitudeE6()
            - projection.fromPixels(mapview.getWidth(), mapview.getHeight() / 2).getLongitudeE6();
    if (lonSpanNew != pathInitialLonSpan) {
        pathBuild();    // zoom changed: rebuild the path
    } else {            // check if the path needs to be offset
        projection.toPixels(mp.getPoints().get(0), p1);
        if (p1.x != pathInitialPoint.x || p1.y != pathInitialPoint.y) {
            path.offset(p1.x - pathInitialPoint.x, p1.y - pathInitialPoint.y);
            pathInitialPoint.x = p1.x;
            pathInitialPoint.y = p1.y;
        }
    }
    canvas.drawPath(path, paint);
}
The path has to be rebuilt every time the zoom changes. The zoom change is detected using pathInitialLonSpan, since getZoomLevel() is not synchronous with the map zoom animation.
private void pathBuild() {
    path.rewind();
    if (mp.getPoints() == null || mp.getPoints().size() < 2) return;
    Projection projection = mapView.getProjection();
    pathInitialLonSpan = projection.fromPixels(0, mapView.getHeight() / 2).getLongitudeE6()
            - projection.fromPixels(mapView.getWidth(), mapView.getHeight() / 2).getLongitudeE6();
    projection.toPixels(mp.getPoints().get(0), pathInitialPoint);
    path.moveTo(pathInitialPoint.x, pathInitialPoint.y);
    pPrev.set(pathInitialPoint.x, pathInitialPoint.y);    // start measuring from the first point
    for (int i = 1; i < mp.getPoints().size(); i++) {
        projection.toPixels(mp.getPoints().get(i), p1);
        // Skip points closer than 3 px to the last drawn point to keep the path light.
        int distance2 = (pPrev.x - p1.x) * (pPrev.x - p1.x) + (pPrev.y - p1.y) * (pPrev.y - p1.y);
        if (distance2 > 9) {
            path.lineTo(p1.x, p1.y);
            pPrev.set(p1.x, p1.y);
        }
    }
}
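If you want to try the per-zoom caching mentioned above, a possible sketch (untested; the restoreOrBuildPath() helper and the two SparseArray caches are only illustrative, not part of the code above):
// Sketch: keep one built Path per lon span (i.e. per zoom level), together with the
// screen position of point(0) it was built for, so draw() can still offset it.
private final SparseArray<Path> pathCache = new SparseArray<Path>();
private final SparseArray<Point> initialPointCache = new SparseArray<Point>();
private void restoreOrBuildPath(int lonSpan) {
    Path cached = pathCache.get(lonSpan);
    if (cached == null) {
        pathBuild();                                          // builds 'path' and 'pathInitialPoint'
        pathCache.put(lonSpan, new Path(path));               // store copies: draw() will offset the originals
        initialPointCache.put(lonSpan, new Point(pathInitialPoint));
    } else {
        path.set(cached);                                     // reuse the cached geometry
        Point cachedStart = initialPointCache.get(lonSpan);
        pathInitialPoint.set(cachedStart.x, cachedStart.y);
        pathInitialLonSpan = lonSpan;
    }
}
In draw() you would then call restoreOrBuildPath(lonSpanNew) instead of pathBuild() when the lon span changes, and run the offset check unconditionally afterwards (not only in the else branch), so a restored path gets shifted to the current position of point(0) in the same frame.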
Regards.
I'd previously managed to incorporate OpenStreetMaps into my application using osmdroid-android-3.0.1.jar with the great assistance of the answer to my question
Porting a Google Maps app to Osmdroid - problem with overlay
I've now upgraded to using the 3.0.3 jar and can't get my overlay to display.
My app can switch its display from Google Maps to OSM and show the same overlay, consisting of lines and text, on top of either. The alternative displays each run in their own activity for now, as the classes are similar but do not have absolutely identical methods, or at least they didn't in 3.0.1. (Ultimately I'd like to use the osmdroid wrapper jar to combine them in one activity and reduce the duplicated code, but that's a question for a later date.) Right now I'd like to get the same functionality I had with the now-deprecated 3.0.1 jar using the new 3.0.3 version.
With the new jar I still get the map view displayed OK, but the overlay has disappeared. I've had to make some changes to the code, as the 3.0.1 onDraw() method has been replaced with draw() (just like Google's) in my MapOverlay class, which extends org.osmdroid.views.overlay.Overlay.
All the code in the previous onDraw() (now draw()) method was copied verbatim from the answer to the question referred to above. It worked fine, although I confess to not fully understanding the concepts of world size, bounding boxes and the transformation described.
I notice that the method
final Point upperLeft = org.osmdroid.views.util.Mercator
.projectGeoPoint(boundingBox.getLatNorthE6(), boundingBox.getLonWestE6(),
zoomLevel + tileZoom, null);
is now deprecated, and I had to remove tileZoom and
final int tileZoom = projection.getTileMapZoom();
to get it to compile.
When I get to the code in the draw() method, I can see in the debugger that all the data necessary to draw the overlay is still present and correct. The drawing is done by lines such as canvas.drawLine(....) and canvas.drawText(....). I've not used the extra parameter to the function (boolean shadow) at all.
My redrawOverlay() method remains as:
private void redrawOverlay() {
    mGpt = mMapVw.getMapCenter();
    if (mmapOverlay == null)
        mmapOverlay = new MapOverlay(this);
    mmapOverlay.setEnabled(true);
    List<Overlay> listOfOverlays = mMapVw.getOverlays();
    int ovlSize = listOfOverlays.size();
    if (ovlSize > 1)
        listOfOverlays.remove(1);
    listOfOverlays.add(mmapOverlay);
    mMapVw.invalidate();
}
(The Google-style listOfOverlays.clear() shouldn't be used, as osmdroid 3.0.1 had element 0 as the map itself, hence the remove(1).)
What do I have to do to modify the existing 3.0.1 code to work with 3.0.3? I'm hoping one of the osmdroid authors might read this question.
Update
By adapting the MiniMap overlay as suggested in the answer to the question referred to above, the draw() method, for drawing an overlay from top left to bottom right, now becomes:
@Override
protected void draw(Canvas pC, MapView pOsmv, boolean shadow) {
    if (shadow)
        return;
    Paint lp3 = new Paint();
    lp3.setColor(Color.RED);
    lp3.setAntiAlias(true);
    lp3.setStyle(Style.STROKE);
    lp3.setStrokeWidth(1);
    lp3.setTextAlign(Paint.Align.LEFT);
    lp3.setTextSize(12);
    // Calculate the half-world size
    final Rect viewportRect = new Rect();
    final Projection projection = pOsmv.getProjection();
    final int zoomLevel = projection.getZoomLevel();
    int mWorldSize_2 = TileSystem.MapSize(zoomLevel) / 2;
    // Save the Mercator coordinates of what is on the screen
    viewportRect.set(projection.getScreenRect());
    // DON'T set the offset with either of the lines below
    //viewportRect.offset(-mWorldSize_2, -mWorldSize_2);
    //viewportRect.offset(mWorldSize_2, mWorldSize_2);
    // Draw a line from one corner to the other
    pC.drawLine(viewportRect.left, viewportRect.top, viewportRect.right, viewportRect.bottom, lp3);
}
This works OK
Let me try to suggest a few things:
In the new jar build, calling getOverlays().clear() will no longer remove the map tile overlay. Calling remove(1) may be incorrect now since the map tile overlay is no longer in position 0.
The projection has a method that will get you the screen-coordinate rectangle of what is currently on the screen. You can check the coordinates you use to draw your lines against this rectangle to make sure they intersect it (see the sketch after these suggestions).
Take a look at the new TileSystem class. Anything that was deprecated can be recreated using these functions.
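For the second suggestion, a minimal sketch of that check inside your draw() method (minX/minY/maxX/maxY stand for the pixel bounds of your own line coordinates and are hypothetical):
// Sketch: before drawing, verify that the pixel bounds of your lines intersect the screen.
final Rect screenRect = projection.getScreenRect();          // what is currently visible, in pixels
final Rect overlayRect = new Rect(minX, minY, maxX, maxY);   // hypothetical bounds of your drawLine() coordinates
if (!Rect.intersects(screenRect, overlayRect)) {
    Log.d("overlay", "lines " + overlayRect + " do not intersect screen " + screenRect);
    return;                                                   // nothing would be visible anyway
}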
I thought I had understood this question, but something is quite wrong here. When the user (me, so far) tries to press keys, nothing really happens, and I am having a lot of trouble understanding what it is that I've missed.
Consider this before I present some code to help clarify my problem: I am using Android's Lunar Lander example to make my first "real" Android program. In that example there is, of course, a class LunarView and, nested within it, a class LunarThread. In my code the equivalents of these classes are Graphics and GraphicsThread, respectively.
Also I can make sprite animations in 2D just fine on Android. I have a Player class, and let's say GraphicsThread has a Player member referred to as "player". This class has four coordinates - x1, y1, x2, and y2 - and they define a rectangle in which the sprite is to be drawn. I've worked it out so that I can handle that perfectly. Whenever the doDraw(Canvas canvas) method is invoked, it'll just look at the values of those coordinates and draw the sprite accordingly.
Now let's say (and this isn't really what I'm trying to do with the program) I want the program to do nothing except display the Player sprite at one location on the screen UNTIL the FIRST time the user presses the D-pad's left button. Then the location changes to another set position on the screen, and the sprite is drawn at that position for the rest of the program, invariably.
Also note that the GraphicsThread member in Graphics is called "thread", and that the SurfaceHolder member in GraphicsThread is called "mSurfaceHolder".
So consider this method in class Graphics:
@Override
public boolean onKeyDown(int keyCode, KeyEvent msg) {
    return thread.keyDownHandler(keyCode, msg);
}
Also please consider this method in class GraphicsThread:
boolean keyDownHandler(int keyCode, KeyEvent msg) {
    synchronized (mSurfaceHolder) {
        if (keyCode == KeyEvent.KEYCODE_DPAD_LEFT) {
            player.x1 = 100;
            player.y1 = 100;
            player.x2 = 120;
            player.y2 = 150;
        }
    }
    return true;
}
Now then, assuming that the player's coordinates start off as (200, 200, 220, 250), why doesn't anything change when I press D-pad left?
Thanks!
Before worrying about actual movement and the like, I would consider Log...
Something like:
Log.d("lunar", "keyCode = ["+String.valueOf(keyCode)+"] // msg = ["+String.valueOf(msg)+"]");
In doing so I can get a feel for what the system is registering before I worry about what I do with said registered data... After that you can decide if you're even sending it the right stuff and can then worry about thread work etc.
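For instance, dropped into the onKeyDown() override from the question (just a sketch of where the logging would go):
@Override
public boolean onKeyDown(int keyCode, KeyEvent msg) {
    // Log first, so logcat shows whether key events reach this view at all.
    Log.d("lunar", "keyCode = [" + keyCode + "] // msg = [" + msg + "]");
    return thread.keyDownHandler(keyCode, msg);
}
If nothing gets logged at all, the view is probably never receiving key events in the first place (for example because it is not focusable), which would be a separate problem from the handler logic.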
Hopefully that can help with diagnosis, etc. (All of this was written freehand and may contain errors.)
Throw away LunarLander and use a real guide: Playing with graphics in Android