Finger coordinates jitter when view is zoomed - android

I am implementing 2-finger-zoom.
2nd finger down - remember finger distance: (done once)
event.getPointerCoords(0, finger1_start);
event.getPointerCoords(1, finger2_start);
start_distance = VecLength(finger1_start, finger2_start);
2 fingers down + ACTION_MOVE
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance;
VecLength method - returns the distance between 2 points
double VecLength(MotionEvent.PointerCoords a, MotionEvent.PointerCoords b)
{
    return Math.sqrt(Math.pow(b.x - a.x, 2) + Math.pow(b.y - a.y, 2));
}
The problem: it jitters when I use view.setScaleX. I zoomed in as smoothly as possible with my fingers while recording the zoom value in logcat.
Not using view.setScaleX/Y
Using view.setScaleX/Y
Which shows that I really do see the jitter. I narrowed it down to the pointer coordinates bouncing back and forth between different values each frame.
I assume the scale somehow affects the view, but I don't understand how to undo that. How do I get "raw" finger coordinates, or take the zoom into account in my calculation?
Note that I cannot use getRawX, as it only returns coordinates for one finger; I obviously need both.

It seems I can mostly fix this by multiplying the computed zoom ratio by the current scale:
So change
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance;
to
event.getPointerCoords(0, finger1_now);
event.getPointerCoords(1, finger2_now);
double distance_now = VecLength(finger1_now, finger2_now);
zoom = distance_now / start_distance * view.getScaleX(); // x and y scale are the same in my case
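For context, here is a minimal sketch of how the corrected calculation could sit inside a touch handler. It assumes the handler lives in the View subclass being scaled, that the fields from the question (finger1_start, finger2_start, finger1_now, finger2_now, start_distance) are members of that class, and it only logs the corrected zoom value, as the question does:

// Minimal sketch; assumes this is a member of the scaled View subclass and
// that the PointerCoords fields and start_distance come from the question.
@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_POINTER_DOWN:
            if (event.getPointerCount() == 2) {
                event.getPointerCoords(0, finger1_start);
                event.getPointerCoords(1, finger2_start);
                start_distance = VecLength(finger1_start, finger2_start);
            }
            break;
        case MotionEvent.ACTION_MOVE:
            if (event.getPointerCount() == 2) {
                event.getPointerCoords(0, finger1_now);
                event.getPointerCoords(1, finger2_now);
                double distance_now = VecLength(finger1_now, finger2_now);
                // The coordinates are reported in the view's already-scaled
                // coordinate space, so compensate with the current scale factor.
                double zoom = distance_now / start_distance * getScaleX();
                Log.d("Zoom", "zoom = " + zoom);
            }
            break;
    }
    return true;
}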

Related

DJI MobileSDK I want to specify the distance with VirtualStick

I want to move forward by specifying a distance of 5 meters.
I want to know the relationship between the values of mPitch, mRoll, mYaw, and mThrottle (and their units) passed to mFlightController.sendVirtualStickFlightControlData and the resulting flight distance.
What value must be set for mRoll to advance MAVIC by 5 meters?
I want to know the calculation method.
This is the example code.
// What value must be set for mRoll to advance MAVIC by 5 meters?
float distance = 0.5f;
float rollJoyControlMaxSpeed = 10;
float mPitch = 0.0f;
float mRoll = (float) (rollJoyControlMaxSpeed * distance);
float mYaw = 0.0f;
float mThrottle = 0.0f;
mFlightController.sendVirtualStickFlightControlData(
        new FlightControlData(mPitch, mRoll, mYaw, mThrottle),
        new CommonCallbacks.CompletionCallback() {
            @Override
            public void onResult(DJIError djiError) {
                if (djiError != null) {
                    setResultToToast(djiError.getDescription());
                }
            }
        });
What you're asking for is inertial navigation and the DJI SDK contains no class or function for that at the moment.
In inertial navigation the standard approach is to use the velocity and acceleration in order to calculate the distance moved so far.
The velocity is provided by FlightControllerStateCallback:
aircraft.getFlightController().setStateCallback(new FlightControllerState.Callback() {
    @Override
    public void onUpdate(@NonNull FlightControllerState flightControllerState) {
        // get current velocity
        float velocityX = flightControllerState.getVelocityX();
        float velocityY = flightControllerState.getVelocityY();
        float velocityZ = flightControllerState.getVelocityZ();
    }
});
The acceleration values are not accessible.
In theory you need to fly at a speed of 1 m/s for 5 seconds to reach your goal of 5 meters. In reality you need to account for the acceleration and braking distance.
The most promising approach I see is to test how much time the Mavic needs to accelerate to a certain speed and also how long the braking distance is in relation to the speed. Based on these you should be able to approximate the distance flown at an acceptable level. Here's some pseudocode of how I imagine it to be working:
set velocity to 1 m/s
-> the drone needs 2 seconds to achieve the speed and will fly 1.5m during that time. [1]
once the velocity has been reached (information from the callback), wait 2 seconds.
-> the drone flies 2m
set velocity to 0
-> the braking distance for 1m/s is 1.5m [1]
the total distance flown: 5m
Please note that these values [1] are purely made up and you need to get approximate values for these via probably tedious testing.
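For illustration, a rough Java sketch of that pseudocode could look like the following. It assumes virtual stick mode is already enabled, that the roll value is interpreted as a velocity in m/s (this depends on the configured control modes), that the command has to be re-sent periodically, and it skips the "wait until the velocity has been reached" check; the durations are placeholders that would have to be measured as described above:

// Sketch only: interpreting mRoll as a velocity in m/s is an assumption, and
// the timing constants are placeholders to be determined experimentally.
private final Handler handler = new Handler(Looper.getMainLooper());

private void flyApproximatelyFiveMeters() {
    final long sendIntervalMs = 100;      // re-send the command periodically
    final long cruiseDurationMs = 2000;   // "wait 2 seconds" from the pseudocode
    final float cruiseVelocity = 1.0f;    // 1 m/s

    handler.post(new Runnable() {
        long elapsedMs = 0;

        @Override
        public void run() {
            // Fly forward while cruising, then send a zero command to brake.
            float roll = elapsedMs < cruiseDurationMs ? cruiseVelocity : 0f;
            mFlightController.sendVirtualStickFlightControlData(
                    new FlightControlData(0f, roll, 0f, 0f),
                    null /* completion callback omitted for brevity */);
            elapsedMs += sendIntervalMs;
            if (elapsedMs <= cruiseDurationMs) {
                handler.postDelayed(this, sendIntervalMs);
            }
        }
    });
}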

How to bounce two moving objects

I'm making a ConSumo game from the arcade machine in Bully. Basically there's an enemy wrestler that moves in a straight line and bounces the player back on collision. I can't seem to figure out the logic for the angle at which to bounce the player when it collides with an enemy wrestler.
I tried calculating the collision angle using the arctan of (player.centerY - enemy.centerY) / (player.centerX - enemy.centerX) and then adding 180 degrees to mirror the angle.
double angle = Math.atan(((player.getCenterY() - enemies[i].getCenterY())/ (player.getCenterX() - enemies[i].getCenterX())));
angle = Math.toDegrees(angle);
angle += 180;
angle = Math.toRadians(angle);
player.addX(Math.cos(angle) * strength);
player.addY(-(Math.sin(angle) * strength));
I tried to just make the player bounce back at the same angle (I know this is not the ideal result, but I want to at least get the hang of it first; if you can suggest better ways, I will appreciate it), but it only works on one or two sides of the collision, and the other sides seem to pull the player through the enemy instead of bouncing it back.
Maybe you can try the physics approach, which takes into account conservation of momentum and conservation of energy.
Basically, the player, with mass mp, has velocity [vp; 0], and the enemy, with mass me, has velocity [ve; 0]. So there are no y components, because they move horizontally only. Now, at the time of collision t = t_col, assume the center of mass of the player has coordinates [xp; yp] and the enemy's center of mass has coordinates [xe; ye] (you can always tweak them to make sure there is a greater bouncing-off effect, by making the y coordinates more different if you wish).
Conservation of momentum tells us that the velocities of the two objects right after the collision, call them [Vp; Wp] and [Ve; We], are calculated as follows:
[Vp; Wp] = [vp; 0] + (1/mp)*[I1; I2];
[Ve; We] = [ve; 0] - (1/me)*[I1; I2];
where, since the impact is typically assumed to be normal to the surfaces of the objects, the impulse vector [I1; I2] can be taken to be aligned with the vector connecting the two centers, [xp - xe; yp - ye]. Combining this information with conservation of energy, one can calculate the magnitude of that vector and find that
k = (mp*me/(mp+me)) * (vp - ve)*(xp - xe) / ((xp - xe)^2 + (yp - ye)^2);
I1 = k*(xp - xe);
I2 = k*(yp - ye);
So basically, at time of collision you have as input:
the position and velocity of the player: [xp; yp], [vp; 0]
the position and velocity of the enemy: [xe; ye], [ve; 0]
the mass of the player mp and the mass of the enemy me
Then calculate
k = (mp*me/(mp+me)) * (vp - ve)*(xp - xe) / ((xp - xe)^2 + (yp - ye)^2);
I1 = k*(xp - xe);
I2 = k*(yp - ye);
Vp = vp + (1/mp)*I1;
Wp = (1/mp)*abs(I2);
Ve = ve - (1/me)*I1;
We = (1/me)*abs(I2);
Observe that I used abs(I2) which is the absolute value of I2. This is because for one of the two objects the y-component of the velocity after collision is going to be positive (so no difference there), but for the other one will be negative. For the negative one, we can also add the fact that the object may bounce off the ground immediately after collision (so collision with object and then collision with the ground). So we use the reflection law, kind of like the way light is reflected by a mirror.
After collision, at time t = t_col the parabolic trajectories of the two players (before they land back on the ground) will be
xp(t) = xp + Vp * (t - t_col);
yp(t) = yp + Wp * (t - t_col) - (g/2) * (t - t_col)^2;
xe(t) = xe + Ve * (t - t_col);
ye(t) = ye + We * (t - t_col) - (g/2) * (t - t_col)^2;
If you want angles:
cos(angle_p) = Vp / sqrt(Vp^2 + Wp^2);
sin(angle_p) = Wp / sqrt(Vp^2 + Wp^2);
cos(angle_e) = Ve / sqrt(Ve^2 + We^2);
sin(angle_e) = We / sqrt(Ve^2 + We^2);
where angle_p is the angle of the player and angle_e is the angle of the enemy.
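Put into code, the calculation above could look roughly like this (a sketch only; the method and parameter names are illustrative, and it returns the four post-collision velocity components):

// Sketch of the impulse-based bounce described above. Inputs follow the
// answer: player position (xp, yp), enemy position (xe, ye), horizontal
// velocities vp and ve, masses mp and me. Returns {Vp, Wp, Ve, We}.
static double[] bounceVelocities(double xp, double yp, double vp, double mp,
                                 double xe, double ye, double ve, double me) {
    double dx = xp - xe;
    double dy = yp - ye;
    double k = (mp * me / (mp + me)) * (vp - ve) * dx / (dx * dx + dy * dy);
    double i1 = k * dx;
    double i2 = k * dy;

    double Vp = vp + i1 / mp;          // player x-velocity after collision
    double Wp = Math.abs(i2) / mp;     // abs() as explained above
    double Ve = ve - i1 / me;          // enemy x-velocity after collision
    double We = Math.abs(i2) / me;

    return new double[] { Vp, Wp, Ve, We };
}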

How to focus on location on Google maps, considering a view is on top of it?

Background
Suppose I have a Google Maps view and another view on top of it that covers part of it, hiding some of the map's content.
The problem
I need the map "camera" to focus on a coordinate and put a marker there, yet have it all appear in the middle of the visible part of the map.
Something like this:
The original code was focusing on (about) the center of the entire screen, making the marker almost invisible (as the bottom view covers it).
The thing is, I can't find the proper way to set the correct value for the Y coordinate of the map itself (meaning the latitude).
What I've tried
Given the height of the bottom view and the coordinate I've put the marker on, I tried to calculate the delta (without, of course, changing the marker itself):
final float neededZoom = 6.5f;
int bottomViewHeight = bottomView.getHeight();
LatLng posToFocusOn = ...;
final Point point = mMap.getProjection().toScreenLocation(posToFocusOn);
final float curZoom = mMap.getCameraPosition().zoom;
point.y += bottomViewHeight * curZoom / neededZoom;
posToFocusOn = mMap.getProjection().fromScreenLocation(point);
final CameraUpdate cameraPosition = CameraUpdateFactory.newCameraPosition(new Builder().target(posToFocusOn).zoom(neededZoom).build());
Sadly, this focuses way above the marker.
The question
What's wrong with what I wrote? What can I do to fix it?
OK, I've found a workaround, which I think works on all devices (tested on 3, each with a different screen resolution and size):
I've measured how many pixels (and then converted to DP) a change of one degree has on the marker itself.
From this, I measured the height of each view, and calculated the delta needed to move the camera.
In my case, it's this way (supposing the zoom is 6.5f):
//measured as 223 pixels on Nexus 5, which has xxhdpi, so divide by 3
final float oneDegreeInPixels = convertDpToPixels( 223.0f / 3.0f);
final float mapViewCenter = mapViewHeight / 2.0f;
final float bottomViewHeight = ...;
final float posToFocusInPixelsFromTop = (mapViewHeight - bottomViewHeight) / 2.0f ;// can optionally add the height of the view on the top area
final float deltaLatDegreesToMove = (mapViewCenter - posToFocusInPixelsFromTop) / oneDegreeInPixels;
LatLng posToFocusOn = new LatLng(latitude - deltaLatDegreesToMove, longitude);
final CameraUpdate cameraPosition = CameraUpdateFactory.newCameraPosition(new Builder().target(posToFocusOn).zoom(neededZoom).build());
And it worked.
I wonder if it can be adjusted to support any value of zoom.
Your code is almost right, but it goes above the marker because you are taking into account bottomViewHeight when computing point.y instead of bottomViewHeight/2 (when your view's size is 200px, you only need to displace the map by 100px to recenter it):
point.y += (bottomViewHeight / 2) * curZoom / neededZoom;
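Folded back into the snippet from the question, the corrected calculation might look roughly like this (posToFocusOn and bottomView are the variables from the question; moving the camera at the end is an assumption about how the resulting CameraUpdate is used):

// Corrected version of the question's snippet: displace by half the height
// of the covering view before converting back to a LatLng.
final float neededZoom = 6.5f;
final int bottomViewHeight = bottomView.getHeight();
final Point point = mMap.getProjection().toScreenLocation(posToFocusOn);
final float curZoom = mMap.getCameraPosition().zoom;
point.y += (bottomViewHeight / 2) * curZoom / neededZoom;
final LatLng adjustedTarget = mMap.getProjection().fromScreenLocation(point);
mMap.moveCamera(CameraUpdateFactory.newCameraPosition(
        new CameraPosition.Builder().target(adjustedTarget).zoom(neededZoom).build()));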
Update:
This is a more general approach that takes into account the map bounds and calculates new map bounds according to the height of your bottomView. It is zoom independent.
public void recenter() {
    LatLngBounds mapBounds = mMap.getProjection().getVisibleRegion().latLngBounds;
    Point northEastPoint = mMap.getProjection().toScreenLocation(mapBounds.northeast);
    Point southWestPoint = mMap.getProjection().toScreenLocation(mapBounds.southwest);
    Point newNorthEast = new Point(northEastPoint.x, northEastPoint.y + bottomView.getHeight() / 2);
    Point newSouthWest = new Point(southWestPoint.x, southWestPoint.y + bottomView.getHeight() / 2);
    LatLngBounds newBounds = LatLngBounds.builder()
            .include(mMap.getProjection().fromScreenLocation(newNorthEast))
            .include(mMap.getProjection().fromScreenLocation(newSouthWest))
            .build();
    mMap.moveCamera(CameraUpdateFactory.newLatLngBounds(newBounds, 0));
}
Note that each time you call recenter() the map will move.

GPS accuracy circle (in meters) intersects a line on map

I'm drawing a point which reflects my current location. I've got some shapes on the map as well. However, due to changing GPS accuracy, I need to show that I've crossed a line on the map even if the LatLng point is still before it but the accuracy is ~20 m.
What I have:
- line's start and end points
- circle's center
- circle's radius (in meters)
I'm able to calculate the distance between the line and the circle's center point, but that gives me a value like 0.00987506668990474, which tells me nothing, because it does not reflect the distance in meters, and I can't convert it to meters and compare it with the accuracy radius.
I'm not even sure I'm on the right track here. Maybe there's another method to determine whether there's an intersection or not.
Thanks for any hints.
update:
Using the distance * 110 km I'm getting much better results.
Happy face = intersection not detected - distance < radius, Sad face = intersection detected - distance > radius.
The yellow zone edge is my line segment. As you can see, it works when the intersection is at the top (or bottom), but not on the left (or right) side.
This is my algorithm for calculating the distance from a line segment to a point. Forgive the mess... I'm still struggling with it, so it's not optimized yet:
public static double pointLineSegmentDistance(final List<Double> point, final List<List<Double>> line)
{
    List<Double> v = line.get(0);
    List<Double> w = line.get(1);
    double d = pointPointSquaredDistance(v, w);
    List<Double> calculateThis;
    if (d > 0)
    {
        // Projection parameter of the point onto the segment v -> w
        double t = ((point.get(0) - v.get(0)) * (w.get(0) - v.get(0))
                + (point.get(1) - v.get(1)) * (w.get(1) - v.get(1))) / d;
        if (t < 0)
        {
            calculateThis = v;
        }
        else if (t > 1)
        {
            calculateThis = w;
        }
        else
        {
            calculateThis = new ArrayList<Double>();
            calculateThis.add(v.get(0) + t * (w.get(0) - v.get(0)));
            calculateThis.add(v.get(1) + t * (w.get(1) - v.get(1)));
        }
    }
    else
    {
        calculateThis = v;
    }
    return Math.sqrt(pointPointSquaredDistance(point, calculateThis));
}

public static double pointPointSquaredDistance(final List<Double> v, final List<Double> w)
{
    double dx = v.get(0) - w.get(0);
    double dy = v.get(1) - w.get(1);
    return dx * dx + dy * dy;
}
final List<Double> point - contains 2 elements: (0) latitude, (1) longitude
List<List<Double>> line - contains 2 lists: (0) line segment start point ((0/0) lat / (0/1) long), (1) line segment end point ((1/0) lat / (1/1) long)
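For example, a call with this input format might look like this (the coordinates are made up):

// Illustrative call: distance (in degrees) from a point to a line segment.
List<Double> point = Arrays.asList(52.2297, 21.0122);   // lat, lng
List<List<Double>> line = Arrays.asList(
        Arrays.asList(52.2300, 21.0100),                 // segment start (lat, lng)
        Arrays.asList(52.2290, 21.0150));                // segment end (lat, lng)
double distanceInDegrees = pointLineSegmentDistance(point, line);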
Finally, I've solved my problem. It turns out the solution is really simple, much simpler than I thought. There's the maps utility library available here:
https://github.com/googlemaps/android-maps-utils
That library provides a function named "isLocationOnPath":
boolean isLocationOnPath(LatLng point, List<LatLng> polyline, boolean geodesic, double tolerance)
point - the center of my circle (my location point).
polyline - for my needs, an edge of the polygon.
geodesic - true (of course).
tolerance - (in meters) my GPS accuracy, which is basically the circle radius.
Details about the method:
http://googlemaps.github.io/android-maps-utils/javadoc/com/google/maps/android/PolyUtil.html#isLocationOnPath-LatLng-java.util.List-boolean-
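A minimal usage sketch (the variable names are made up; the polygon edge is passed as a two-point polyline and the GPS accuracy is used as the tolerance):

// Does the accuracy circle around "current" touch the edge edgeStart-edgeEnd?
LatLng current = new LatLng(currentLatitude, currentLongitude);
List<LatLng> edge = Arrays.asList(edgeStart, edgeEnd);
double toleranceMeters = location.getAccuracy();   // GPS accuracy radius in meters

boolean crossesEdge = PolyUtil.isLocationOnPath(current, edge, true, toleranceMeters);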
I hope that helps.
You have two possibilities:
1) The cheap way:
You calculate the distances between the line's start/end points and the circle center with the Android API (distanceBetween()). The result will be in meters and is correct as a point-to-point distance calculation. Take the minimum of both start/end point to circle center distances.
If the line segment is very long relative to the circle radius, the normal distance from the line to the center computed this way is wrong.
2) The better way, but more complex:
You transform the line start and end and the circle center to cartesian x,y space with unit = meter and calculate the distance to the line
segment using cartesian math. (see distance to line segment calculation)
First check option 1, because it is just one line of code.
Your result of 0.00987506668990474 looks like a distance in degrees instead of meters. Multiplying it by 111 km gives a value of about 1 km (at the equator).
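A sketch of option 1, using android.location.Location.distanceBetween(), which returns its result in meters (the variable names are illustrative):

// Approximate the line-to-circle distance by the smaller of the two
// endpoint-to-center distances, both in meters.
float[] result = new float[1];

Location.distanceBetween(centerLat, centerLng, lineStartLat, lineStartLng, result);
float distanceToStart = result[0];

Location.distanceBetween(centerLat, centerLng, lineEndLat, lineEndLng, result);
float distanceToEnd = result[0];

float approxDistanceMeters = Math.min(distanceToStart, distanceToEnd);
boolean intersects = approxDistanceMeters <= accuracyRadiusMeters;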

Rotating the Canvas impacts TouchEvents

I have a map application using an in-house map engine on Android. I'm working on a rotating map view that rotates the map based on the phone's orientation using the sensor service. All works fine except for dragging the map when the phone is pointing somewhere other than north. For example, if the phone is facing west, dragging the map up still moves the map to the south instead of to the east, as would be expected. I assume translating the canvas is one possible solution, but I'm honestly not sure of the correct way to do this.
Here is the code I'm using to rotate the Canvas:
public void dispatchDraw(Canvas canvas)
{
    canvas.save(Canvas.MATRIX_SAVE_FLAG);
    // mHeading is the orientation from the Sensor
    canvas.rotate(-mHeading, origin[X], origin[Y]);
    mCanvas.delegate = canvas;
    super.dispatchDraw(mCanvas);
    canvas.restore();
}
What is the best approach to make dragging the map consistent regardless of the phone's orientation? The SensorManager has a remapCoordinateSystem() method, but it's not clear that this will resolve my problem.
You can trivially get the delta x and delta y between two consecutive move events. To correct these values for your canvas rotation you can use some simple trigonometry:
void correctPointForRotate(PointF delta, float rotation) {
    // Get the angle of movement (0=up, 90=right, 180=down, 270=left)
    double a = Math.atan2(-delta.x, delta.y);
    a = Math.toDegrees(a); // a now ranges -180 to +180
    a += 180;
    // Adjust angle by the amount the map is rotated around the center point
    a += rotation;
    a = Math.toRadians(a);
    // Calculate new corrected panning deltas
    double hyp = Math.sqrt(delta.x * delta.x + delta.y * delta.y);
    delta.x = (float) (hyp * Math.sin(a));
    delta.y = -(float) (hyp * Math.cos(a));
}
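A sketch of how this helper might be called from a touch handler (mLastTouch, mHeading and scrollMapBy() are illustrative names, not part of the answer):

// Correct the drag delta for the current map rotation before applying it.
@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_DOWN:
            mLastTouch.set(event.getX(), event.getY());
            break;
        case MotionEvent.ACTION_MOVE: {
            PointF delta = new PointF(event.getX() - mLastTouch.x,
                                      event.getY() - mLastTouch.y);
            correctPointForRotate(delta, mHeading);   // undo the canvas rotation
            scrollMapBy(delta.x, delta.y);            // hypothetical map-engine call
            mLastTouch.set(event.getX(), event.getY());
            break;
        }
    }
    return true;
}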
