I have a SurfaceView that is responsible for drawing a Bitmap as a background and another one that will be used as an overlay. So I've decided to do all transformations using a Matrix that can be used for both bitmaps, as it is (I think) one of the fastest ways to do it without using OpenGL.
I've been able to implement panning around and zooming, but I have some problems with what I've come up with:
- I wasn't able to find a way to focus on the center of the two fingers while zooming; the image always resets to its initial state (that is, without panning or scaling) before the new scale is applied. Besides looking wrong, that doesn't allow the user to zoom out to see the whole image and then zoom in on the part that is important.
- After the scaling operation the image won't be in the same place on the next draw pass, because the translation value will be different.
Is there a way to achieve that using a Matrix or is there another solution?
Code is below (I use a SurfaceHolder in a separate thread to lock the SurfaceView canvas and call its doDraw method; a rough sketch of that thread follows the class):
public class MapSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
public void doDraw(Canvas canvas) {
canvas.drawColor(Color.BLACK);
canvas.drawBitmap(mBitmap, mTransformationMatrix, mPaintAA);
}
@Override
public boolean onTouchEvent(MotionEvent event) {
switch (event.getAction() & MotionEvent.ACTION_MASK) {
case MotionEvent.ACTION_POINTER_DOWN: {
if (event.getPointerCount() == 2) {
mOriginalDistance = MathUtils.distanceBetween(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
mScreenMidpoint = MathUtils.midpoint(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
mImageMidpoint = MathUtils.midpoint((mXPosition+event.getX(0))/mScale, (mXPosition+event.getX(1))/mScale, (mYPosition+event.getY(0))/mScale, (mYPosition+event.getY(1))/mScale);
mOriginalScale = mScale;
}
}
case MotionEvent.ACTION_DOWN: {
mOriginalTouchPoint = new Point((int)event.getX(), (int)event.getY());
mOriginalPosition = new Point(mXPosition, mYPosition);
break;
}
case MotionEvent.ACTION_MOVE: {
if (event.getPointerCount() == 2) {
final double currentDistance = MathUtils.distanceBetween(event.getX(0), event.getX(1), event.getY(0), event.getY(1));
if (mIsZooming || currentDistance - mOriginalDistance > mPinchToZoomTolerance || mOriginalDistance - currentDistance > mPinchToZoomTolerance) {
final float distanceRatio = (float) (currentDistance / mOriginalDistance);
float tempZoom = mOriginalScale * distanceRatio;
mScale = Math.min(10, Math.max(Math.min((float)getHeight()/(float)mBitmap.getHeight(), (float)getWidth()/(float)mBitmap.getWidth()), tempZoom));
mScale = (float) MathUtils.roundToDecimals(mScale, 1);
mIsZooming = true;
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);//, mImageMidpoint.x, mImageMidpoint.y);
} else {
System.out.println("Dragging");
mIsZooming = false;
final int deltaX = (int) ((int) (mOriginalTouchPoint.x - event.getX()));
final int deltaY = (int) ((int) (mOriginalTouchPoint.y - event.getY()));
mXPosition = mOriginalPosition.x + deltaX;
mYPosition = mOriginalPosition.y + deltaY;
validatePositions();
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);
mTransformationMatrix.postTranslate(-mXPosition, -mYPosition);
}
}
break;
}
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_POINTER_UP: {
mIsZooming = false;
validatePositions();
mTransformationMatrix = new Matrix();
mTransformationMatrix.setScale(mScale, mScale);
mTransformationMatrix.postTranslate(-mXPosition, -mYPosition);
}
}
return true;
}
private void validatePositions() {
// Lower right corner
mXPosition = Math.min(mXPosition, (int)((mBitmap.getWidth() * mScale)-getWidth()));
mYPosition = Math.min(mYPosition, (int)((mBitmap.getHeight() * mScale)-getHeight()));
// Upper left corner
mXPosition = Math.max(mXPosition, 0);
mYPosition = Math.max(mYPosition, 0);
// Image smaller than the container, should center it
if (mBitmap.getWidth() * mScale <= getWidth()) {
mXPosition = (int) -((getWidth() - (mBitmap.getWidth() * mScale))/2);
}
if (mBitmap.getHeight() * mScale <= getHeight()) {
mYPosition = (int) -((getHeight() - (mBitmap.getHeight() * mScale))/2);
}
}
}
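For context, the separate drawing thread mentioned at the top might look roughly like this. It is only a sketch under my own assumptions (the DrawThread name and the mRunning flag are not from the question); the relevant part is the lockCanvas() / doDraw() / unlockCanvasAndPost() cycle:
private class DrawThread extends Thread {
    private final SurfaceHolder mHolder;
    private volatile boolean mRunning = true;

    DrawThread(SurfaceHolder holder) {
        mHolder = holder;
    }

    @Override
    public void run() {
        while (mRunning) {
            Canvas canvas = null;
            try {
                // Lock the SurfaceView's canvas, draw, then post it back.
                canvas = mHolder.lockCanvas();
                if (canvas != null) {
                    doDraw(canvas); // the method shown above
                }
            } finally {
                if (canvas != null) {
                    mHolder.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}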
Instead of resetting the transformation matrix every time using new Matrix(), try updating it using post*(). This way, you only do operations relative to the screen. It is easier to think in terms of "zoom to this point on the screen".
Now some code. Having calculated mScale in zooming part:
...
mScale = (float) MathUtils.roundToDecimals(mScale, 1);
float ratio = mScale / mOriginalScale;
mTransformationMatrix.postScale(ratio, ratio, mScreenMidpoint.x, mScreenMidpoint.y);
It might be even better to recalculate mScreenMidpoint on each zooming touch event. This would allow the user to shift the focus point a bit while zooming. For me, it is more natural than having the focus point frozen after the first two-finger touch.
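A sketch of that variation (the mPreviousScale field is my own assumption, used so that each postScale() step is relative to the previous event rather than to the start of the gesture):
// Inside the two-finger branch of ACTION_MOVE (sketch):
mScreenMidpoint = MathUtils.midpoint(event.getX(0), event.getX(1),
        event.getY(0), event.getY(1));
float stepRatio = mScale / mPreviousScale;   // mPreviousScale: assumed field, updated below
mTransformationMatrix.postScale(stepRatio, stepRatio, mScreenMidpoint.x, mScreenMidpoint.y);
mPreviousScale = mScale;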
During dragging, you translate using deltaX and deltaY instead of absolute points:
mTransformationMatrix.postTranslate(-deltaX, -deltaY);
Of course now you have to change your validatePositions() method to either:
- ensure deltaX and deltaY do not make the image move too much, or
- use the transformation matrix to check whether the image is off screen, and then move it to counter that.
I will describe the second method, as it is more flexible and allows you to validate zooming as well.
We calculate how much the image is off screen and then move it back using those values:
void validate() {
RectF rect = new RectF(0, 0, mBitmap.getWidth(), mBitmap.getHeight());
mTransformationMatrix.mapRect(rect);
float height = rect.height();
float width = rect.width();
float deltaX = 0, deltaY = 0;
// Vertical delta
if (height < mScreenHeight) {
deltaY = (mScreenHeight - height) / 2 - rect.top;
} else if (rect.top > 0) {
deltaY = -rect.top;
} else if (rect.bottom < mScreenHeight) {
deltaY = mScreenHeight - rect.bottom;
}
// Horizontal delta
if (width < mScreenWidth) {
deltaX = (mScreenWidth - width) / 2 - rect.left;
} else if (rect.left > 0) {
deltaX = -rect.left;
} else if (rect.right < mScreenWidth) {
deltaX = mScreenWidth - rect.right;
}
mTransformationMatrix.postTranslate(deltaX, deltaY);
}
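For completeness, one way to wire this in (my assumption, not spelled out in the answer) is to run the validation right after every gesture update, so the next doDraw() already sees a clamped matrix:
// Sketch: apply a drag step and immediately clamp the result.
private void dragBy(float deltaX, float deltaY) {
    mTransformationMatrix.postTranslate(-deltaX, -deltaY);
    validate();
}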
Thanks in advance for your help.
I have found many examples using the gyroscope, but I couldn't find an adequate one for me.
I'd like to make a simple quiz game that performs actions when I tilt the device 90 degrees forward and backward. Many examples said I might use the "pitch" value of the gyroscope. Could you give me some advice?
I have done a similar thing, where I needed to draw a rectangle that includes nearby places and must point to the place and show details.
public void onSensorChanged(SensorEvent event) {
final Handler handler = new Handler();
switch (event.sensor.getType()) {
case Sensor.TYPE_ACCELEROMETER:
mAcceleromterReading =
SensorUtilities.filterSensors(event.values, mAcceleromterReading);
break;
case Sensor.TYPE_MAGNETIC_FIELD:
mMagnetometerReading =
SensorUtilities.filterSensors(event.values, mMagnetometerReading);
break;
}
float[] orientation =
SensorUtilities.computeDeviceOrientation(mAcceleromterReading, mMagnetometerReading);
if (orientation != null) {
float azimuth = (float) Math.toDegrees(orientation[0]);
if (azimuth < 0) {
azimuth += 360f;
}
// Convert pitch and roll from radians to degrees
float pitch = (float) Math.toDegrees(orientation[1]);
float roll = (float) Math.toDegrees(orientation[2]);
if (abs(pitch - pitchPrev) > PITCH_THRESHOLD && abs(roll - rollPrev) > ROLL_THRESHOLD
&& abs(azimuth - azimuthPrev) > AZIMUTH_THRESHOLD) { // && abs(roll - rollPrev) > rollThreshold
if (DashplexManager.getInstance().mlocation != null) {
mOverlayDisplayView.setHorizontalFOV(mPreview.getHorizontalFOV());
mOverlayDisplayView.setVerticalFOV(mPreview.getVerticalFOV());
mOverlayDisplayView.setAzimuth(azimuth);
mOverlayDisplayView.setPitch(pitch);
mOverlayDisplayView.setRoll(roll);
// Update the OverlayDisplayView to redraw when sensor data changes,
// redrawing only when the camera is not pointing straight up or down
if (pitch <= 75 && pitch >= -75) {
//Log.d("issueAR", "invalidate: ");
mOverlayDisplayView.invalidate();
}
}
pitchPrev = pitch;
rollPrev = roll;
azimuthPrev = azimuth;
}
}
}
computeDeviceOrientation method
public static float[] computeDeviceOrientation(float[] accelerometerReading, float[] magnetometerReading) {
if (accelerometerReading == null || magnetometerReading == null) {
return null;
}
final float[] rotationMatrix = new float[9];
SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerReading, magnetometerReading);
// Remap the coordinates with the camera pointing along the Y axis.
// This way, portrait and landscape orientation return the same azimuth to magnetic north.
final float cameraRotationMatrix[] = new float[9];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_X,
SensorManager.AXIS_Z, cameraRotationMatrix);
final float[] orientationAngles = new float[3];
SensorManager.getOrientation(cameraRotationMatrix, orientationAngles);
// Return a float array containing [azimuth, pitch, roll]
return orientationAngles;
}
onDraw method
@SuppressLint("DrawAllocation")
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
// Log.d("issueAR", "onDraw: ");
// Log.d("issueAR", "mVerticalFOV: "+mVerticalFOV+" "+"mHorizontalFOV"+mHorizontalFOV);
// Get the viewports only once
if (!mGotViewports && mVerticalFOV > 0 && mHorizontalFOV > 0) {
mViewportHeight = canvas.getHeight() / mVerticalFOV;
mViewportWidth = canvas.getWidth() / mHorizontalFOV;
mGotViewports = true;
//Log.d("onDraw", "mViewportHeight: " + mViewportHeight);
}
if (!mGotViewports) {
return;
}
// Set the paints that remain constant only once
if (!mSetPaints) {
mTextPaint.setTextAlign(Paint.Align.LEFT);
mTextPaint.setTextSize(getResources().getDimensionPixelSize(R.dimen.canvas_text_size));
mTextPaint.setColor(Color.WHITE);
mOutlinePaint.setStyle(Paint.Style.STROKE);
mOutlinePaint.setStrokeWidth(mOutline);
mBubblePaint.setStyle(Paint.Style.FILL);
mSetPaints = true;
}
// Center of view
float x = canvas.getWidth() / 2;
float y = canvas.getHeight() / 2;
/*
* Uncomment line below to allow rotation of display around the center point
* based on the roll. However, this "feature" is not very intuitive, and requires
* locking device orientation to portrait or changes the sensor rotation matrix
* on device rotation. It's really quite a nightmare.
*/
//canvas.rotate((0.0f - mRoll), x, y);
float dy = mPitch * mViewportHeight;
if (mNearbyPlaces != null) {
//Log.d("OverlayDisplayView", "mNearbyPlaces: "+mNearbyPlaces.size());
// Iterate backwards to draw more distant places first
for (int i = mNearbyPlaces.size() - 1; i >= 0; i--) {
NearbyPlace nearbyPlace = mNearbyPlaces.get(i);
float xDegreesToTarget = mAzimuth - nearbyPlace.getBearingToPlace();
float dx = mViewportWidth * xDegreesToTarget;
float iconX = x - dx;
float iconY = y - dy;
if (isOverlapping(iconX, iconX).isOverlapped()) {
PointF point = calculateNewXY(new PointF(iconX, iconY + mViewportHeight));
iconX = point.x;
iconY = point.y;
}
nearbyPlace.setIconX(iconX);
nearbyPlace.setIconY(iconY);
Bitmap icon = getIcon(nearbyPlace.getIcon_id());
float width = icon.getWidth() + mTextPaint.measureText(nearbyPlace.getName()) + mMargin;
RectF recf=new RectF(iconX, iconY, width, icon.getHeight());
nearbyPlace.setRect(recf);
float angleToTarget = xDegreesToTarget;
if (xDegreesToTarget < 0) {
angleToTarget = 360 + xDegreesToTarget;
}
if (angleToTarget >= 0 && angleToTarget < 90) {
nearbyPlace.setQuadrant(1);
mQuad1Places.add(nearbyPlace);
} else if (angleToTarget >= 90 && angleToTarget < 180) {
nearbyPlace.setQuadrant(2);
mQuad2Places.add(nearbyPlace);
} else if (angleToTarget >= 180 && angleToTarget < 270) {
nearbyPlace.setQuadrant(3);
mQuad3Places.add(nearbyPlace);
} else {
nearbyPlace.setQuadrant(4);
mQuad4Places.add(nearbyPlace);
}
//Log.d("TAG", " - X: " + iconX + " y: " + iconY + " angle: " + angleToTarget + " display: " + nearbyPlace.getIcon_id());
}
drawQuadrant(mQuad1Places, canvas);
drawQuadrant(mQuad2Places, canvas);
drawQuadrant(mQuad3Places, canvas);
drawQuadrant(mQuad4Places, canvas);
}
}
It does not contain the full code, but you may understand how pitch and azimuth are used along with roll. Best of luck.
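To tie this back to the original question, here is a minimal sketch (my own assumptions, not part of the answer above; the context/listener variables and the TILT_TOLERANCE constant are placeholders) of registering the two sensors and reacting when the pitch crosses roughly 90 degrees:
SensorManager sensorManager =
        (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
sensorManager.registerListener(listener,
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
        SensorManager.SENSOR_DELAY_UI);
sensorManager.registerListener(listener,
        sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
        SensorManager.SENSOR_DELAY_UI);

// Later, once pitch (in degrees) has been computed as in onSensorChanged() above:
if (Math.abs(Math.abs(pitch) - 90f) < TILT_TOLERANCE) { // TILT_TOLERANCE is a placeholder
    // The device is tilted roughly 90 degrees forward or backward.
}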
I'm new to Android, and I'm trying to get the hang of multi touch input. I've begun with a simple app that allows the user to create rectangles on a Canvas by dragging and releasing with one finger, which I have working. To expand upon that, I now want a user to be able to rotate the rectangle they are drawing using a second finger, which is where my problems begin. As it stands, adding a second finger will cause multiple rectangles to rotate, instead of just the current one, but they will revert to their default orientation as soon as the second finger is released.
I've been working at it for a while, and I think my core problem is that I'm mishandling the multiple MotionEvents that come with two (or more) fingers. Logging statements I left to display the coordinates on the screen for each event stay tied to the first finger touching the screen, instead of switching to the second. I've tried multiple configurations of accessing and changing the event pointer ID, and still no luck. If anyone could provide some guidance in the right direction, I would be extremely grateful.
My code is as follows:
public class BoxDrawingView extends View {
private static final String TAG = "BoxDrawingView";
private static final int INVALID_POINTER_ID = -1;
private int mActivePointerId = INVALID_POINTER_ID;
private Box mCurrentBox;
private List<Box> mBoxen = new ArrayList<>();
private Float mLastTouchX;
private Float mLastTouchY;
...
@Override
public boolean onTouchEvent(MotionEvent event) {
switch(MotionEventCompat.getActionMasked(event)) {
case MotionEvent.ACTION_DOWN:
mActivePointerId = MotionEventCompat.getPointerId(event, 0);
current = new PointF(MotionEventCompat.getX(event, mActivePointerId),
MotionEventCompat.getY(event, mActivePointerId));
action = "ACTION_DOWN";
// Reset drawing state
mCurrentBox = new Box(current);
mBoxen.add(mCurrentBox);
mLastTouchX = MotionEventCompat.getX(event, MotionEventCompat.getPointerId(event, 0));
mLastTouchY = MotionEventCompat.getY(event, MotionEventCompat.getPointerId(event, 0));
break;
case MotionEvent.ACTION_POINTER_DOWN:
action = "ACTION_POINTER_DOWN";
mActivePointerId = MotionEventCompat.getPointerId(event, 0);
mLastTouchX = MotionEventCompat.getX(event, MotionEventCompat.getPointerId(event, 0));
mLastTouchY = MotionEventCompat.getY(event, MotionEventCompat.getPointerId(event, 0));
break;
case MotionEvent.ACTION_MOVE:
action = "ACTION_MOVE";
current = new PointF(MotionEventCompat.getX(event, mActivePointerId),
MotionEventCompat.getY(event, mActivePointerId));
if (mCurrentBox != null) {
mCurrentBox.setCurrent(current);
invalidate();
}
if(MotionEventCompat.getPointerCount(event) > 1) {
int pointerIndex = MotionEventCompat.findPointerIndex(event, mActivePointerId);
float currX = MotionEventCompat.getX(event, pointerIndex);
float currY = MotionEventCompat.getY(event, pointerIndex);
if(mLastTouchX < currX) {
// simplified: only use x coordinates for rotation for now.
// +X for clockwise, -X for counter clockwise
Log.d(TAG, "Clockwise");
mRotationAngle = 30;
}
else if (mLastTouchX > getX()) {
Log.d(TAG, "Counter clockwise");
mRotationAngle = -30;
}
}
break;
case MotionEvent.ACTION_UP:
action = "ACTION_UP";
mCurrentBox = null;
mLastTouchX = null;
mLastTouchY = null;
mActivePointerId = INVALID_POINTER_ID;
break;
case MotionEvent.ACTION_POINTER_UP:
action = "ACTION_POINTER_UP";
int pointerIndex = event.getActionIndex();
int pointerId = event.getPointerId(pointerIndex);
if(pointerId == mActivePointerId){
mActivePointerId = INVALID_POINTER_ID;
}
break;
case MotionEvent.ACTION_CANCEL:
action = "ACTION_CANCEL";
mCurrentBox = null;
mActivePointerId = INVALID_POINTER_ID;
break;
}
return true;
}
@Override
protected void onDraw(Canvas canvas){
// Fill the background
canvas.drawPaint(mBackgroundPaint);
for(Box box : mBoxen) {
// Box is a custom object. Origin is the origin point,
// Current is the point of the opposite diagonal corner
float left = Math.min(box.getOrigin().x, box.getCurrent().x);
float right = Math.max(box.getOrigin().x, box.getCurrent().x);
float top = Math.min(box.getOrigin().y, box.getCurrent().y);
float bottom = Math.max(box.getOrigin().y, box.getCurrent().y);
if(mRotationAngle != 0) {
canvas.save();
canvas.rotate(mRotationAngle);
canvas.drawRect(left, top, right, bottom, mBoxPaint);
canvas.rotate(-mRotationAngle);
canvas.restore();
mRotationAngle = 0;
} else {
canvas.drawRect(left, top, right, bottom, mBoxPaint);
}
}
}
}
There are several ways to draw things, not just in Android, but in Java as well. The thing is that you are trying to draw the rectangles by rotating the Canvas. That's a way, but in my personal experience I think that is only a good choice if you want to rotate the whole picture. If not, it may get a little tricky, because you need to place a rotation axis, which it seems you are not using, so Android will assume that you want to rotate around the top-left corner or the center of the view (I don't remember).
If you are opting for that choice, you may try to do it like this:
Matrix matrix = new Matrix();
matrix.setRotate(angle, rectangleCenterX, rectangleCenterY);
canvas.setMatrix(matrix);
But I recommend trying a different approach: do the rotation directly on the rectangle that you are moving, by calculating the corner points of the polygon. You can do this using Java Math operations:
public void formShape(int cx[], int cy[], double scale) {
double xGap = (width / 2) * Math.cos(angle) * scale;
double yGap = (width / 2) * Math.sin(angle) * scale;
cx[0] = (int) (x * scale + xGap);
cy[0] = (int) (y * scale + yGap);
cx[1] = (int) (x * scale - xGap);
cy[1] = (int) (y * scale - yGap);
cx[2] = (int) (x * scale - xGap - length * Math.cos(radians) * scale);
cy[2] = (int) (y * scale - yGap - length * Math.sin(radians) * scale);
cx[3] = (int) (x * scale + xGap - length * Math.cos(radians) * scale);
cy[3] = (int) (y * scale + yGap - length * Math.sin(radians) * scale);
}
So (x, y) is the center of your rectangle, and width and length tell you how big it is. In the formShape(int[], int[], double) method, cx and cy are going to be used to draw your shape, and scale is the value to use if you want to zoom in or out later; if not, just use scale = 1.
Now for drawing your rectangles, this is how you do it:
Paint paint = new Paint();
paint.setColor(Color.GRAY);
paint.setStyle(Style.FILL);
int[] cx = new int[4];
int[] cy = new int[4];
Box box = yourBoxHere;
box.formShape(cx, cy, 1);
Path path = new Path();
path.reset(); // only needed when reusing this path for a new build
path.moveTo(cx[0], cy[0]); // used for first point
path.lineTo(cx[1], cy[1]);
path.lineTo(cx[2], cy[2]);
path.lineTo(cx[3], cy[3]);
path.lineTo(cx[0], cy[0]); // repeat the first point
canvas.drawPath(path, paint);
For a multitouch rotation listener, you should override two methods in your Activity or View:
@Override
public boolean onTouch(View v, MotionEvent event) {
if (event.getActionMasked() == MotionEvent.ACTION_UP) {
this.points = null;
}
return false;
}
@Override
public boolean dispatchTouchEvent(MotionEvent event) {
if(event.getPointerCount() >= 2) {
float newPoints[][] = new float[][] {
{event.getX(0), event.getY(0)},
{event.getX(1), event.getY(1)}
};
double angle = angleBetweenTwoPoints(newPoints[0][0], newPoints[0][1], newPoints[1][0], newPoints[1][1]);
if(points != null) {
double difference = angle - initialAngle;
if(Math.abs(difference) > rotationSensibility) {
listener.onGestureListener(GestureListener.ROTATION, Math.toDegrees(difference));
this.initialAngle = angle;
}
} else {
this.initialAngle = angle;
}
this.points = newPoints;
}
return super.dispatchTouchEvent(event);
}
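// Note (not part of the original answer): the snippet above relies on a few
// fields that are not shown. They might be declared roughly like this; the
// names come from the code, but the types and the threshold value are assumptions.
private float[][] points;                 // last known positions of the two pointers
private double initialAngle;              // angle when rotation tracking (re)started
private double rotationSensibility = 0.1; // minimum change in radians (to match angleBetweenTwoPoints) before reporting
private GestureListener listener;         // the answer's own callback interface (not shown)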
public static double angleBetweenTwoPoints(double xHead, double yHead, double xTail, double yTail) {
if(xHead == xTail) {
if(yHead > yTail)
return Math.PI/2;
else
return (Math.PI*3)/2;
} else if(yHead == yTail) {
if(xHead > xTail)
return 0;
else
return Math.PI;
} else if(xHead > xTail) {
if(yHead > yTail)
return Math.atan((yHead-yTail)/(xHead-xTail));
else
return Math.PI*2 - Math.atan((yTail-yHead)/(xHead-xTail));
} else {
if(yHead > yTail)
return Math.PI - Math.atan((yHead-yTail)/(xTail-xHead));
else
return Math.PI + Math.atan((yTail-yHead)/(xTail-xHead));
}
}
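As a side note (my simplification, not part of the original answer), the branching above can be collapsed into a single Math.atan2 call normalized to [0, 2π); it behaves the same except in the degenerate case where both points coincide:
public static double angleBetweenTwoPoints(double xHead, double yHead,
                                           double xTail, double yTail) {
    // atan2 handles all quadrants; shift negative results into [0, 2*PI).
    double angle = Math.atan2(yHead - yTail, xHead - xTail);
    return (angle + 2 * Math.PI) % (2 * Math.PI);
}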
Sorry, but this answer is getting long. If you have further questions about any of those operations, or you want to change the approach of your solution, please ask again and tell me in the comments.
I hope this was helpful.
I am developing an Android app which visualizes the map of an environment. Currently I am using libgdx to draw the map, and like any map application, the user should be able to zoom, rotate, and move the map.
I have developed a GestureHandler class which implements the GestureListener interface and interacts with a PerspectiveCamera (since I will use 3D components in the future):
@Override
public boolean pan(float x, float y, float deltaX, float deltaY) {
float tempX = (mapView.getCamera().position.x - deltaX * 0.5f);
float tempY = (mapView.getCamera().position.y + deltaY * 0.5f);
mapView.getCamera().position.set(
MathUtils.lerp(mapView.getCamera().position.x, tempX, mapView.getCamera().fieldOfView / 100),
MathUtils.lerp(mapView.getCamera().position.y, tempY, mapView.getCamera().fieldOfView / 100),
mapView.getCamera().position.z);
mapView.getCamera().update();
return false;
}
float initialDistance = 0;
float initialAngle = 0;
float distance = 0;
private void zoom(Vector2 initialPointer1, Vector2 initialPointer2, Vector2 pointer1, Vector2 pointer2)
{
initialDistance = initialPointer1.dst(initialPointer2);
float iDeltaX = initialPointer2.x - initialPointer1.x;
float iDeltaY = initialPointer2.y - initialPointer1.y;
initialAngle = (float)Math.atan2((double)iDeltaY,(double)iDeltaX) * MathUtils.radiansToDegrees;
if(initialAngle < 0)
initialAngle = 360 - (-initialAngle);
distance = initialPointer1.dst(pointer2);
float deltaX = pointer2.x - initialPointer1.x;
float deltaY = pointer2.y - initialPointer1.y;
newAngle = (float)Math.atan2((double)deltaY,(double)deltaX) * MathUtils.radiansToDegrees;
if(newAngle < 0)
newAngle = 360 - (-newAngle);
//Log.e("test", distance + " " + initialDistance);
//Log.e("test", newAngle + " " + initialAngle);
float ratio = initialDistance/distance;
mapView.getCamera().fieldOfView = MathUtils.clamp(initialZoomScale * ratio, 1f, 100.0f);
Log.e("zoom", String.valueOf(mapView.getCamera().fieldOfView));
mapView.getCamera().update();
}
@Override
public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2, Vector2 pointer1, Vector2 pointer2) {
zoom(initialPointer1, initialPointer2, pointer1, pointer2);
float delta1X = pointer2.x - pointer1.x;
float delta1Y = pointer2.y - pointer1.y;
newAngle = (float)Math.atan2((double)delta1Y,(double)delta1X) * MathUtils.radiansToDegrees;
if(newAngle < 0)
newAngle = 360 - (-newAngle);
System.out.println("new "+newAngle);
if(newAngle - currentAngle >= 0.01000f)
{
System.out.println("Increasing");
mapView.getCamera().rotate(0.5f,0,0,1);
}
else if(newAngle - currentAngle <= -0.010000f) {
System.out.println("DEcreasing");
mapView.getCamera().rotate(-0.5f,0,0,1);
}
if(Math.abs(newAngle - currentAngle) >= 0.01000f)
{
currentAngle = newAngle;
}
return true;
}
Everything is fine as long as I don't rotate the camera. Just like in this unsolved similar question, after rotating the camera, movements are affected by the applied rotation. Any help, especially sample code?
Edit:
After lots of effort I finally solved it.
As Tenfour04 said in his answer, I had to use two separate matrices for translation and rotation, and finally set the result of their multiplication as the view matrix of the camera using:
camera.view.set(position).mul(orientation);
Also, the most important thing is to set the transformation matrix of my batch to camera.view:
batch.setTransformationMatrix(camera.view);
Instead of applying the gestures directly to the camera, apply them to a pair of Matrix4's that you use to store the orientation and position separately. Then in the render method, multiply the two matrices and apply them to your camera's view.
In the render() method:
camera.view.set(orientation).mul(position); //Might need to swap orientation/position--don't remember.
camera.update();
Your zoom method is fine because field of view affects the camera's projection matrix rather than its view matrix.
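A rough sketch of how that separation could look in the gesture callbacks (my reading of the advice, using libgdx's Matrix4; the translation amounts and rotation step mirror the question's code):
// Sketch (assumption): two Matrix4 fields updated by the gestures; camera.view
// is rebuilt from them each frame in render(), as shown above.
private final Matrix4 position = new Matrix4();     // accumulates panning
private final Matrix4 orientation = new Matrix4();  // accumulates rotation

@Override
public boolean pan(float x, float y, float deltaX, float deltaY) {
    position.translate(-deltaX * 0.5f, deltaY * 0.5f, 0f);
    return false;
}

@Override
public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2,
                     Vector2 pointer1, Vector2 pointer2) {
    // ...compute newAngle / currentAngle exactly as in the question...
    orientation.rotate(Vector3.Z, 0.5f); // or -0.5f, depending on the angle delta
    return true;
}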
I have a circle drawn on a canvas. On top of this circle I draw an angle, which starts from the part of the circle corresponding to the current time.
Then, in onTouchEvent, the angle should be redrawn from the start point (current time) to the end point, based on a point on the circle.
The problem is with dynamically calculating the sweep angle in the onTouchEvent method.
I've tried different calculations, taken from different Stack Overflow posts and suggestions, and none worked as I expected. The reaction of the angle in onTouchEvent was always a bit unpredictable.
My code snippet:
@Override
public void draw(Canvas canvas) {
super.draw(canvas);
int radius = getRadius();
int startAngle = getStartAngle();
canvas.drawCircle(getWidth() / 2, getHeight() / 2, radius, this.bkgPaint);
this.arcRect = new RectF((getWidth() / 2) - radius, (getHeight() / 2) - radius, (getWidth() / 2) + radius, (getHeight() / 2) + radius);
canvas.drawArc(this.arcRect, startAngle, sweepAngle, true, arcPaint);
}
@Override
public boolean onTouchEvent(MotionEvent e) {
if (e.getAction() == MotionEvent.ACTION_MOVE) {
sweepAngle = (int) ((360.0D + Math.toDegrees(Math.atan2(e.getX() - 360.0D, 360.0D - e.getY()))) % 360.0D);
invalidate();
}
return true;
}
private int getStartAngle() {
Calendar cal = Calendar.getInstance();
int minutes = cal.get(Calendar.HOUR_OF_DAY) * 60 + cal.get(Calendar.MINUTE);
if (minutes > 720) {
minutes -= 720;
}
int angle = minutes / 2;
return (angle += 270) % 360;
}
private int getRadius() {
return ((80 * getWidth()) / 100) / 2;
}
I've used similar calculations in my color picker. It's open source, you can find sources here.
In your case I would start with calculating end angle, like this:
@Override
public boolean onTouchEvent(MotionEvent e) {
int action = e.getAction();
switch (action) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
int x = (int) e.getX();
int y = (int) e.getY();
int cx = x - getWidth() / 2;
int cy = y - getHeight() / 2;
endAngle = (float) (Math.toDegrees(Math.atan2(cy, cx)) + 360f) % 360f;
invalidate();
return true;
}
return super.onTouchEvent(e);
}
Adding 360 degrees with modulo is required. After this operation you'll have 0 degrees on the right side of the circle, 90 at the bottom, 180 on the left and 270 at the top.
Now your sweep angle will be endAngle - startAngle. And that's it!
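One detail worth handling (my addition, not from the answer): if the touch point ends up "behind" the start angle, endAngle - startAngle goes negative, so wrap it back into [0, 360) before drawing:
// Sketch: sweep for drawArc(), wrapped so the arc always sweeps clockwise
// from the start angle; startAngle, arcRect and arcPaint come from the question.
float sweepAngle = (endAngle - startAngle + 360f) % 360f;
canvas.drawArc(arcRect, startAngle, sweepAngle, true, arcPaint);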
I'm writing an app for Android (although I think this is a generic question) and I need to display a large image (in an ImageView) that can be scrolled and zoomed. I've managed to get scrolling to work by capturing touch events and performing matrix translations, and I'm now working on zooming.
If I simply apply a scale transformation to the image, it zooms in at the origin, which is the top left-hand corner of the screen. I would like to zoom in at the center of the screen.
From what I've read, this means I need a transformation to make the origin the center of the screen. I think what is required is something like the following (assume the center of the screen is (5, 5) for simplicity):
-Translate by (-5, -5)
-Scale by the zoom factor
-Translate by (+5, +5)*zoomfactor
Unfortunately, this doesn't seem to work: the zoom seems to go anywhere BUT the center... can someone help me out here?
EDIT: This is the code that now works
Matrix zoommatrix = new Matrix();
float[] centerpoint = {targetimageview.getWidth()/2.0f, targetimageview.getHeight()/2.0f};
zoommatrix.postScale(zoomfactor, zoomfactor, centerpoint[0], centerpoint[1]);
zoommatrix.preConcat(targetimageview.getImageMatrix());
targetimageview.setImageMatrix(zoommatrix);
targetimageview.invalidate();
Check ImageViewTouchBase in the Android source code's Camera app; its "zoomTo" method does this:
protected void zoomTo(float scale, float centerX, float centerY) {
if (scale > mMaxZoom) {
scale = mMaxZoom;
}
float oldScale = getScale();
float deltaScale = scale / oldScale;
mSuppMatrix.postScale(deltaScale, deltaScale, centerX, centerY);
setImageMatrix(getImageViewMatrix());
center(true, true);
}
That center method is probably the bit you'll really care about:
protected void center(boolean horizontal, boolean vertical) {
if (mBitmapDisplayed.getBitmap() == null) {
return;
}
Matrix m = getImageViewMatrix();
RectF rect = new RectF(0, 0,
mBitmapDisplayed.getBitmap().getWidth(),
mBitmapDisplayed.getBitmap().getHeight());
m.mapRect(rect);
float height = rect.height();
float width = rect.width();
float deltaX = 0, deltaY = 0;
if (vertical) {
int viewHeight = getHeight();
if (height < viewHeight) {
deltaY = (viewHeight - height) / 2 - rect.top;
} else if (rect.top > 0) {
deltaY = -rect.top;
} else if (rect.bottom < viewHeight) {
deltaY = getHeight() - rect.bottom;
}
}
if (horizontal) {
int viewWidth = getWidth();
if (width < viewWidth) {
deltaX = (viewWidth - width) / 2 - rect.left;
} else if (rect.left > 0) {
deltaX = -rect.left;
} else if (rect.right < viewWidth) {
deltaX = viewWidth - rect.right;
}
}
postTranslate(deltaX, deltaY);
setImageMatrix(getImageViewMatrix());
}
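The two helpers used above, postTranslate() and getImageViewMatrix(), are also part of ImageViewTouchBase. Roughly (from memory, so treat this as a sketch rather than the exact AOSP source), they combine a base "fit the bitmap to the view" matrix with the user's cumulative changes:
protected void postTranslate(float dx, float dy) {
    // mSuppMatrix accumulates the user's pan/zoom on top of the base matrix.
    mSuppMatrix.postTranslate(dx, dy);
}

protected Matrix getImageViewMatrix() {
    // The displayed matrix is the base matrix concatenated with the user's changes.
    mDisplayMatrix.set(mBaseMatrix);
    mDisplayMatrix.postConcat(mSuppMatrix);
    return mDisplayMatrix;
}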