Pinch zoom and pan to specific screen coordinate - android

I want to pinch zoom around a specific coordinate on a tiled 15f x 15f 3D board, i.e. I don't want to zoom around the origin, so I need to pan the board accordingly.
I am using a PerspectiveCamera (near=0.1f, far=100f). For a perfect fit of the screen, the camera is located at approximately z=13.4 before zooming.
Now what (I think) I want to do is to:
Unproject the screen coordinates (in GestureDetector's pinch method), done once at the start of each pinch zoom:
float icx = (initialPointer1.x + initialPointer2.x) / 2;
float icy = (initialPointer1.y + initialPointer2.y) / 2;
pointToMaintain = new Vector3(icx, icy, 0f);
mCamera.unproject(pointToMaintain);
Now for each zoom cycle (as I adjust the mCamera.z accordingly and do mCamera.update()) I project the point back to screen coordinates:
Vector3 pointNewPos = new Vector3(pointToMaintain);
mCamera.project(pointNewPos);
Then calculate the delta and pan accordingly:
int dX = (int) (pointNewPos.x - icx);
int dY = (int) (pointNewPos.y - icy);
pan(...); /* I.e. mCamera.translate(...) */
My problem is that the mCamera.z is initially above pointToMaintain.z and then goes below as the user moves the fingers:
     cam.z   ptm.z    dX     dY
(0)  13.40
(1)  13.32   13.30    12    134
(2)  13.24   13.30    12   -188
...
(0) is the original value of mCamera.z before zooming starts. (1) is not valid? However, (2) should be OK.
My questions are:
(1) How can I get a "valid" pointToMaintain when unprojecting the screen coordinates on the camera, i.e. a point that is not less than cam.z? (The reason I get the point at 13.30 is, I guess, because near=0.1f. But as seen above, this results in weird(?) screen coordinates.)
(2) Is this a good strategy for moving the tiles board closer to the coordinates the user pinched zoomed on?

To maintain the focus point, I use this code:
Note: this code relies on overloaded operators; you need to replace the vector operators with the corresponding methods (addMe, subtract, etc.).
void zoomAt(float changeAmount, Vector2D focus) {
    float newZoom = thisZoom + changeAmount;
    offset = focus - ((focus - offset) * newZoom / thisZoom);
    thisZoom = newZoom;
}
Where
focus = current center point to maintain
offset = distance from (0,0)
thisZoom = current zoom amount (starts at 1)
changeAmount = value to increase or decrease the zoom
It took me four tries over three years to get this done, and it was pretty easy once I drew it out: it's just two triangles.
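For reference, here is a minimal Java sketch of the same formula without operator overloading, assuming libGDX's Vector2 (any 2D vector class with copy/subtract/scale methods would do):
float thisZoom = 1f;            // current zoom amount, starts at 1
Vector2 offset = new Vector2(); // distance from (0,0)

void zoomAt(float changeAmount, Vector2 focus) {
    float newZoom = thisZoom + changeAmount;
    // offset = focus - ((focus - offset) * newZoom / thisZoom)
    Vector2 scaled = new Vector2(focus).sub(offset).scl(newZoom / thisZoom);
    offset.set(focus).sub(scaled);
    thisZoom = newZoom;
}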

Related

how to check ray intersection with object in ARCore

Is there a way to check if I touched an object on the screen? As I understand it, the HitResult class lets me check whether I touched the recognized and mapped surface, but I want to check whether I touched the object that is placed on that surface.
ARCore doesn't really have a concept of an object, so we can't directly provide that. I suggest looking at ray-sphere tests for a starting point.
However, I can help with getting the ray itself (to be added to HelloArActivity):
/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogenous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);

    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space
    // to get a point along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // use points[8,9,10] as a zero vector to get the ray head position in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}
If you're calling this several times per frame see the comment about the getProjectionMatrix and invertM calls.
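As for the ray-sphere test suggested above, a minimal sketch could look like the following. The sphere center and radius are assumptions you would supply per object (e.g. a bounding sphere around your rendered model); this is not part of the ARCore API:
/**
 * Returns true if the given ray (origin in ray[0..2], normalized direction
 * in ray[3..5], as returned by screenPointToWorldRay) hits a sphere.
 */
static boolean rayHitsSphere(float[] ray, float cx, float cy, float cz, float radius) {
    // vector from the ray origin to the sphere center
    float ox = cx - ray[0];
    float oy = cy - ray[1];
    float oz = cz - ray[2];
    // projection of that vector onto the ray direction
    float t = ox * ray[3] + oy * ray[4] + oz * ray[5];
    if (t < 0f) return false; // sphere is behind the ray origin
    // squared distance from the sphere center to the closest point on the ray
    float px = ray[0] + t * ray[3] - cx;
    float py = ray[1] + t * ray[4] - cy;
    float pz = ray[2] + t * ray[5] - cz;
    return px * px + py * py + pz * pz <= radius * radius;
}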
Apart from Mouse Picking with Ray Casting (cf. Ian's answer), the other commonly used technique is a picking buffer, explained in detail (with C++ code) here:
The trick behind 3D picking is very simple. We will attach a running
index to each triangle and have the FS output the index of the
triangle that the pixel belongs to. The end result is that we get a
"color" buffer that doesn't really contain colors. Instead, for each
pixel which is covered by some primitive we get the index of this
primitive. When the mouse is clicked on the window we will read back
that index (according to the location of the mouse) and render the
selected triangle red. By combining a depth buffer in the process we
guarantee that when several primitives are overlapping the same pixel
we get the index of the top-most primitive (closest to the camera).
So in a nutshell:
Every object's draw method needs a running index and a boolean for whether this draw renders to the picking buffer or not.
The render method converts the index into a grayscale color and the scene is rendered.
After the whole rendering is done, retrieve the pixel color at the touch position with GL11.glReadPixels(x, y, 1, 1, ...), where x and y are the coordinates of the pixel you want the color of. Then translate the color back to an index and the index back to an object. Voilà, you have your clicked object.
To be fair, for a mobile use case you should probably read a 10x10 rectangle, iterate through it and pick the first non-background color you find - because touches are never that precise.
This approach works independently of the complexity of your objects.
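As an illustrative sketch of the index/color round trip (the helper names are hypothetical; glReadPixels is the standard GL call, shown here via Android's GLES20), packing an object index into the 24 RGB bits could look like this:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

// pack an object index into the 24 RGB bits of a color
static float[] indexToColor(int index) {
    return new float[] {
            ((index >> 16) & 0xFF) / 255f,
            ((index >> 8) & 0xFF) / 255f,
            (index & 0xFF) / 255f,
            1f };
}

// recover the index from the pixel bytes read back from the picking buffer
static int colorToIndex(byte r, byte g, byte b) {
    return ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
}

// after the picking pass (note GL's y axis runs bottom-up, unlike Android UI):
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(touchX, viewHeight - touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
int hitIndex = colorToIndex(pixel.get(0), pixel.get(1), pixel.get(2));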

Offset in Tilting Phone. Accelerometer Bug

I am making a somewhat-racing game. The car automatically moves forward; to turn it sideways, I measure the rotation of the phone. Since I have to measure the acceleration on the x axis, I use:
Direction.x = Input.acceleration.x * Time.deltaTime;
transform.Translate(Direction.x * 5f, 0f, 0f);
When I play the game, the car turns the way I want when I tilt the phone on the x axis. The problem is that when I place the phone on the table, the car drifts left very slowly, which doesn't make sense since it is at a 0 degree angle. To make sure this wasn't because of the table surface, I ran it in the Unity simulator and the same thing happened: the car drifts left very slowly. When I Debug.Log it, Direction.x is about -0.000147..., a very small number. Is there a way to fix this so that when the phone is still the car's Direction.x is 0, or is something wrong with my code?
The Translate function in Unity sometimes glitches, so it is better to use standard vector operations. Also, just cut off the minimum values of the accelerometer (a dead zone):
float min_value = 0.01f;
if (Mathf.Abs(Input.acceleration.x) > min_value)
    Direction.x = Input.acceleration.x * Time.deltaTime;
else
    Direction.x = 0;
transform.position = transform.position + new Vector3(Direction.x * 5f, 0f, 0f);

Perform multi click event on a single image in Android

Take my example of a PIANO image: on every key press on its keyboard I want to perform a different event, but I am not able to figure out where to start.
How can I perform this in Android?
Take the image below:
As you can see in the image, I have a piano image with red and blue lines drawn in some shapes. Now I want to perform a different event for each shape on this image.
How can I do this in Android programming?
This seems like a basic Collision Detection problem.
Basically what you want to do is listen to the screen touches and get the X and Y of the touch location. (You can of course use multi-touch; just remember to do this for every pointer.)
When a touch occurs, you run a containsRectanglePoint check for each key on the image.
So basically you will split the image into a set of rectangles, like so:
[Image: the piano keyboard split into numbered rectangles (source: gyazo.com)]
Then you check whether any of the rectangles contains the point.
If the touch X and Y is inside 1 or 2, perform the event for that key.
If the touch X and Y is inside 3, 4 or 5, perform the event for that key.
If the touch X and Y is inside 6, perform the event for that key.
If the touch X and Y is inside 7, perform the event for that key.
You will of course do that for all the keys.
So thereby when a collision happens you look through all that.
Simple Rectangle vs Point Collision Detection
The following code checks for a collision between a rectangle and a point. If the point is within the rectangle's bounds, the method returns true; otherwise it returns false.
public static boolean containsRectanglePoint(double x, double y, double w, double h, double px, double py)
{
    if (px < x) { return false; }
    if (py < y) { return false; }
    if (px > (x + w)) { return false; }
    if (py > (y + h)) { return false; }
    return true;
}
x = Rectangle X (or AABB Minimum X)
y = Rectangle Y (or AABB Minimum Y)
w = Rectangle Width (or AABB Maximum X - AABB Minimum X)
h = Rectangle Height (or AABB Maximum Y - AABB Minimum Y)
px = Point X
py = Point Y
In your case px & py is the location of the touch.
You could also use Java's standard Rectangle2D class to both store and calculate the collisions, but that requires creating a lot of instances of the class. Memory-wise it is a lot cheaper to just store the coordinates and use the function I provided above.
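As an illustrative sketch, this could be wired into a custom View's onTouchEvent as follows (the key bounds and the playKey handler are hypothetical placeholders):
// hypothetical key bounds: {x, y, w, h} per key, in view coordinates
private final double[][] keys = {
    {0, 0, 60, 300},    // key 0
    {60, 0, 60, 300},   // key 1
    {120, 0, 60, 300},  // key 2
};

@Override
public boolean onTouchEvent(MotionEvent event) {
    // check every pointer so multi-touch chords work too
    for (int p = 0; p < event.getPointerCount(); p++) {
        double px = event.getX(p);
        double py = event.getY(p);
        for (int i = 0; i < keys.length; i++) {
            double[] k = keys[i];
            if (containsRectanglePoint(k[0], k[1], k[2], k[3], px, py)) {
                playKey(i); // hypothetical per-key event handler
                break;
            }
        }
    }
    return true;
}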

Moving object to touch x y

I'm trying to move an object to a new x,y position based on the user's touch location, but I've hit a brick wall.
Currently I'm coding the movement of each axis manually, which produces a scripted "x then y" path and results in a squared-off movement. Ideally I want a straight, linear path to the touch position, not a right angle.
My basic movement calculation is here:
// check target not met on x axis
if (spriteX != spriteTargetX) {
    // check if it's left or right
    if (spriteTargetX < spriteX) {
        spriteX -= spriteSpeed;
    } else {
        spriteX += spriteSpeed;
    }
}
if (spriteY != spriteTargetY) {
    // check if it's up or down
    if (spriteTargetY < spriteY) {
        spriteY -= spriteSpeed;
    } else {
        spriteY += spriteSpeed;
    }
}
The above algorithm always results in a square movement. I honestly don't know whether I should be performing some kind of distance/angle calculation. Any ideas?
One simple way to do this is to represent the direction of movement as a unit vector, and multiply by the speed. I'll list the basic steps, and give an example where you are at (1,1) and wish to move to (4,5) with speed 2:
Get difference between destination and current. (diff.x and diff.y)
diff.x = 3
diff.y = 4
Get the total distance from destination to current.
dist = 5 ( sqrt(3^2 + 4^2) = 5 )
Divide diff.x and diff.y by the distance
diff.x = 0.6
diff.y = 0.8
Multiply diff.x and diff.y by desired speed
diff.x = 1.2
diff.y = 1.6
Add diff.x and diff.y to sprite's x and y, respectively
sprite.x = 2.2
sprite.y = 2.6
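Applied to the question's variables, a compact sketch could look like this (the snap-to-target check is an addition, to avoid overshooting when the remaining distance is less than the speed):
void moveTowardsTarget() {
    float diffX = spriteTargetX - spriteX;
    float diffY = spriteTargetY - spriteY;
    float dist = (float) Math.sqrt(diffX * diffX + diffY * diffY);
    if (dist <= spriteSpeed) {
        // close enough: snap to the target so we never overshoot
        spriteX = spriteTargetX;
        spriteY = spriteTargetY;
        return;
    }
    // unit direction vector times speed
    spriteX += diffX / dist * spriteSpeed;
    spriteY += diffY / dist * spriteSpeed;
}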

How to rotate map view smoothly with bearing in android

I am trying to rotate the map view when the user changes direction, i.e. if the user takes left and right turns it should rotate accordingly. I am rotating the map view based on the current location's bearing; it rotates correctly, but it jitters. Here is the code I use for the rotation:
public void onGPSUpdate(Location location)
{
    boolean check = isBetterLocation(location, tempLoc);
    tempLoc = location;
    if (check) {
        showLocation(location);
    }
}
The isBetterLocation method is copied from the Google docs for choosing a better location.
private void showLocation(Location loc) {
    mRotateView.rotate(-loc.getBearing());
}
I registered for location updates with a time interval of 0 and a minimum distance of 10 to get frequent updates. My problem is that the map view always jitters. Can anyone tell me how to rotate the map view smoothly, like other applications such as Waze do? Thanks...
Are you trying to rotate the map in a smooth way, e.g. by one degree at a time, or just have it go from degree A to degree B on a location update?
Something like
while (oldAngle != newAngle)
{
    // step oldAngle one degree toward newAngle, then rotate
    oldAngle += (newAngle > oldAngle) ? 1 : -1;
    mapView.rotate(oldAngle);
}
Not sure if this would work exactly as written, since the loop would run very quickly, so maybe run it as an AsyncTask and add a pause in there to simulate a smooth rotation.
double angle = Math.atan2(userstartPoint.getX() - userendPoint.getX(),
        userstartPoint.getY() - userendPoint.getY());
angle = Math.toDegrees(angle);
map.setRotationAngle(angle);
So basically I get the start point (new location) and the end point (old location) and do a Math.atan2 on them, as you can see. Then I convert that to degrees and set it as my map rotation.
It does not do a smooth rotation, but I don't need that. This is where you could set up your own stepper for a smooth rotation, unless Google Maps already has one.
As the bearing values of the Location are not very exact and tend to jump a little, you should use a filter for the bearing. For example, keep the last 5 bearing-values in an array and use the average of those values as the bearing to rotate the map to. Or use the filter explained in the SensorEvent docs - it's easier to use and can be tweaked better.
This will smooth out the rotation of the map and keep it more stable.
EDIT:
A version of the low-pass filter:
public static float exponentialSmoothing(float input, float output, float alpha) {
    output = output + alpha * (input - output);
    return output;
}
use it like so:
final static float ALPHA = 0.33f; // values between 0 and 1
float bearing;
// on location/bearing changed:
bearing = exponentialSmoothing(bearing, newBearing, ALPHA);
bearing would be the value used to actually rotate the map, and newBearing would be the bearing you get from every event. With ALPHA you control how quickly or slowly the rotation reacts to a new orientation by weighting how much of the old and the new bearing is taken into account for the result: a small value weighs the old value higher, a high value weighs the new value higher.
I hope that works out better.
To change the bearing of your map, use the Camera class. You can define a new CameraPosition with the new bearing and tell the camera to move with either GoogleMap.moveCamera or GoogleMap.animateCamera if you want a smooth movement.
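For example, a minimal sketch with the Google Maps Android API (assuming map is your GoogleMap instance and newBearing the smoothed bearing from above):
CameraPosition rotated = new CameraPosition.Builder(map.getCameraPosition())
        .bearing(newBearing)
        .build();
map.animateCamera(CameraUpdateFactory.newCameraPosition(rotated));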
I have implemented this in my app. What I basically did is take the last and the second-to-last LatLng of my path and calculate the bearing using:
public static float getRotationAngle(LatLng secondLastLatLng, LatLng lastLatLng)
{
    double x1 = secondLastLatLng.latitude;
    double y1 = secondLastLatLng.longitude;
    double x2 = lastLatLng.latitude;
    double y2 = lastLatLng.longitude;
    float xDiff = (float) (x2 - x1);
    float yDiff = (float) (y2 - y1);
    return (float) (Math.atan2(yDiff, xDiff) * 180.0 / Math.PI);
}
Set this angle as the bearing of the camera position.
Note: sometimes (rarely) it rotates the map in the opposite direction. I am looking into it, but if anyone knows the reason, do reply.
