I'm trying to develop a simple game. My problem is that I am trying to rotate an arrow image at the bottom of the screen by following the "x" value of the events from the touch listener.
Below is my Arrow class:
public class Arrow extends GameObject{
boolean show = true;
Bitmap bmp;
Game game;
Matrix matrix;
int i = 0;
public Arrow(Handler handler, int x, int y, int xSpeed, int ySpeed , Game game) {
super(handler, x, y, xSpeed, ySpeed);
this.game = game;
bmp = BitmapFactory.decodeResource(game.getResources(), R.drawable.arrow);
matrix = new Matrix();
}
@Override
public void tick() {
}
@Override
public void render(Canvas c) {
matrix.postRotate(x);
c.drawBitmap(bmp,matrix, null);
}
}
And this is the event listener:
@Override
public boolean onTouch(View v, MotionEvent event) {
x = (int) event.getX();
y = (int) event.getY();
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
arrow.setX(x);
break;
case MotionEvent.ACTION_MOVE :
touch.move();
break;
case MotionEvent.ACTION_UP:
break;
}
return true;
}
How do I write code which rotates the bitmap?
Here is an example.
Matrix matrix = new Matrix();
//Calculate the rotation of the bitmap.
rotation += 10;
matrix.postRotate(rotation); // or matrix.postRotate(rotation,cx,cy);
canvas.drawBitmap(bitmap, matrix, null);
As an optimization, create the Matrix once outside this method and replace the creation with a call to matrix.reset(). This way the canvas stays oriented as before, and you can do more with your Matrix, like translating and scaling, while the matrix's contents encapsulate the real meaning of your manipulation.
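A minimal sketch of that reuse pattern (the rotation field and the render signature mirror the Arrow class above; adapt the names to your own code):

private final Matrix matrix = new Matrix(); // created once, reused every frame
private float rotation;                     // accumulated angle in degrees, updated elsewhere

public void render(Canvas c) {
    matrix.reset();                         // clear last frame's transform
    matrix.postRotate(rotation);            // or postRotate(rotation, cx, cy) for a pivot
    c.drawBitmap(bmp, matrix, null);
}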
Calculating the angle:
For calculating the angle you first need to know the total progress range of the SeekBar (max value minus min value) and the change in the value after the seek.
Consider:
Min value of the SeekBar = 0
Max value of the SeekBar = 30
Now calculate the angle per unit, that is,
1 unit = 360 / 30 = 12 degrees.
Suppose the user has changed the SeekBar position from 10 to 20. Now calculate the difference:
int diff = 10 - 20 = -10;
rotation angle = (-10 * 12) = -120;
Example 2:
If the user has changed the SeekBar position from 15 to 10, calculate the difference:
int diff = 15 - 10 = 5;
rotation angle = (5 * 12) = 60;
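Put together, a sketch of that calculation inside an OnSeekBarChangeListener; the rotation field, the max value of 30, and redrawArrow() are illustrative assumptions, not part of the original code:

seekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    private int lastProgress = 0;                        // previous SeekBar value
    private final float degreesPerUnit = 360f / 30f;     // assumes max progress = 30

    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        int diff = lastProgress - progress;              // e.g. 10 - 20 = -10
        rotation += diff * degreesPerUnit;               // e.g. -10 * 12 = -120 degrees
        lastProgress = progress;
        redrawArrow();                                   // hypothetical: trigger a redraw
    }

    @Override
    public void onStartTrackingTouch(SeekBar seekBar) { }

    @Override
    public void onStopTrackingTouch(SeekBar seekBar) { }
});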
Here is the answer to my own question; it rotates the image according to the x input data:
// Signed distance of the touch from the horizontal centre of the screen.
float zeroPoint = MainGame.SCREEN_W / 2;
zeroPoint -= x;
// Divisor that maps that distance to an angle (y adjusts the sensitivity).
float lala = MainGame.SCREEN_W / (180 - y);
// Rotate around the bottom-centre of the bitmap.
matrix.setRotate(zeroPoint /= lala, bmp.getWidth() / 2, bmp.getHeight());
I'm writing an app which displays a map. The user can zoom and pan. The map is rotated according to the magnetometer's value (the map is rotated in the opposite direction of the device's rotation).
For scaling I'm using a ScaleGestureDetector and passing the scale factor to Matrix.scaleM.
For panning I'm using this code:
GLSurfaceView side:
private void handlePanAndZoom(MotionEvent event) {
int action = MotionEventCompat.getActionMasked(event);
// Get the index of the pointer associated with the action.
int index = MotionEventCompat.getActionIndex(event);
int xPos = (int) MotionEventCompat.getX(event, index);
int yPos = (int) MotionEventCompat.getY(event, index);
mScaleDetector.onTouchEvent(event);
switch (action) {
case MotionEvent.ACTION_DOWN:
mRenderer.handleStartPan(xPos, yPos);
break;
case MotionEvent.ACTION_MOVE:
if (!mScaleDetector.isInProgress()) {
mRenderer.handlePan(xPos, yPos);
}
break;
}
}
Renderer side:
private static final PointF mPanStart = new PointF();
public void handleStartPan(final int x, final int y) {
runOnGlThread(new Runnable() {
@Override
public void run() {
windowToWorld(x, y, mPanStart);
}
});
}
private static final PointF mCurrentPan = new PointF();
public void handlePan(final int x, final int y) {
runOnGlThread(new Runnable() {
@Override
public void run() {
windowToWorld(x, y, mCurrentPan);
float dx = mCurrentPan.x - mPanStart.x;
float dy = mCurrentPan.y - mPanStart.y;
mOffsetX += dx;
mOffsetY += dy;
updateModelMatrix();
mPanStart.set(mCurrentPan);
}
});
}
The windowToWorld function uses gluUnProject and works correctly, since I use it for many other tasks. updateModelMatrix:
private void updateModelMatrix() {
Matrix.setIdentityM(mScaleMatrix,0);
Matrix.scaleM(mScaleMatrix, 0, mScale, mScale, mScale);
Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, 1.0f);
Matrix.setIdentityM(mTranslationMatrix,0);
Matrix.translateM(mTranslationMatrix, 0, mOffsetX, mOffsetY, 0);
// Model = Scale * Rotate * Translate
Matrix.multiplyMM(mIntermediateMatrix, 0, mScaleMatrix, 0, mRotationMatrix, 0);
Matrix.multiplyMM(mModelMatrix, 0, mIntermediateMatrix, 0, mTranslationMatrix, 0);
}
The same mModelMatrix is used in the gluUnProject call of the windowToWorld function for point translation.
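For reference, a windowToWorld along these lines could look like the sketch below; this is only a guess at the questioner's implementation, assuming mProjectionMatrix, mViewportWidth/mViewportHeight fields and a depth of 0:

private void windowToWorld(int x, int y, PointF out) {
    float[] obj = new float[4];
    int[] viewport = {0, 0, mViewportWidth, mViewportHeight};
    // Window Y grows downward while GL Y grows upward, so flip it.
    GLU.gluUnProject(x, mViewportHeight - y, 0f,
            mModelMatrix, 0,
            mProjectionMatrix, 0,
            viewport, 0,
            obj, 0);
    // gluUnProject yields homogeneous coordinates; divide by w.
    out.set(obj[0] / obj[3], obj[1] / obj[3]);
}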
So my problem is two-fold:
The panning happens at half the speed of the finger's movement on the device's screen.
At some point, when panning continuously for a few seconds (making circles on the screen, for example), the map starts to 'shake'. The amplitude of this shaking gets bigger and bigger. It looks like some value adds up across handlePan iterations and causes this effect.
Any idea why these happen?
Thank you in advance, Greg.
Well, the problem with my code is this line:
mPanStart.set(mCurrentPan);
Simply because I drag in world coordinates and update the offset, while the current location stays the same. This was my bug.
Removing this line will fix everything.
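So handlePan reduces to the same code as above with only that line removed:

public void handlePan(final int x, final int y) {
    runOnGlThread(new Runnable() {
        @Override
        public void run() {
            windowToWorld(x, y, mCurrentPan);
            // The delta is always measured from the point where the pan started;
            // mPanStart is intentionally NOT updated here.
            float dx = mCurrentPan.x - mPanStart.x;
            float dy = mCurrentPan.y - mPanStart.y;
            mOffsetX += dx;
            mOffsetY += dy;
            updateModelMatrix();
        }
    });
}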
I have a Canvas that is scaled so everything fits better:
@Override
public void draw(Canvas c){
super.draw(c);
final float scaleFactorX = getWidth()/(WIDTH*1.f);
final float scaleFactorY = getHeight()/(HEIGHT*1.f);
if(c!=null) {
final int savedState = c.save();
c.scale(scaleFactorX, scaleFactorY);
// ... rendering ...
c.restoreToCount(savedState);
}
}
It scales based on these two:
public static final int WIDTH = 856;
public static final int HEIGHT = 1050;
This causes the problem that the coordinates of the MotionEvent that handles touch events are not equal to the coordinates used on the scaled Canvas. That becomes an issue when I try to check collision between the Rect built from the MotionEvent and the Rect of a class that is drawn at the rendering scale: the SuperCoin class's X coordinate never matches the MotionEvent's X coordinate.
Usually the MotionEvent's coordinates, both X and Y, are far larger than the virtual screen's maximum size (defined by WIDTH and HEIGHT).
@Override
public boolean onTouchEvent(MotionEvent e) {
super.onTouchEvent(e);
switch (MotionEventCompat.getActionMasked(e)) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_POINTER_DOWN:
(...)
Rect r = new Rect((int)e.getX(), (int)e.getY(), (int)e.getX() + 3, (int)e.getY() + 3);
if(superCoins.size() != 0) {
for (SuperCoin sc : superCoins) {
if (sc.checkCollision(r)) {
progress++;
superCoins.remove(sc);
}
}
}
break;
}
return true;
}
And the SuperCoin:
public class SuperCoin {
private Bitmap bm;
public int x, y, orgY;
Clicker cl;
private Long startTime;
Random r = new Random();
public SuperCoin(Bitmap bm, int x, int y, Clicker c){
this.x = x;
this.y = y;
this.orgY = y;
this.bm = bm;
this.cl = c;
startTime = System.nanoTime();
bounds = new Rect(x, y, x + bm.getWidth(), y + bm.getHeight());
}
private Rect bounds;
public boolean checkCollision(Rect second){
if(second.intersect(bounds)){
return true;
}
return false;
}
private int velX = 0, velY = 0;
public void render(Canvas c){
long elapsed = (System.nanoTime()-startTime)/1000000;
if(elapsed>50) {
int cx;
cx = r.nextInt(2);
if(cx == 0){
velX = r.nextInt(4);
}else if(cx == 1){
velX = -r.nextInt(4);
}
velY = r.nextInt(10) + 1;
startTime = System.nanoTime();
}
if(x < 0) velX = +2;
if(x > Clicker.WIDTH) velX = -2;
x += velX;
y -= velY;
c.drawBitmap(bm, x, y, null);
}
}
How can I check collision between the two when the MotionEvent's X coordinate is bigger than the screen's scaled maximum coordinates?
Honestly, I am not completely sure why the Rect defined in the SuperCoin class is different from the one defined in the onTouchEvent method. I'm guessing it's because the X and Y values from the MotionEvent are permanently different from the ones used on the scaled canvas. The Rect in the SuperCoin class is sized by the width and height of the Bitmap it was passed.
After looking through StackOverflow and Google for the past two days for something close to a solution, I came across this: Get Canvas coordinates after scaling up/down or dragging in android, which solved the problem. It was really hard to find because the other question's title was slightly misleading.
float px = e.getX() / mScaleFactorX;
float py = e.getY() / mScaleFactorY;
int ipy = (int) py;
int ipx = (int) px;
Rect r = new Rect(ipx, ipy, ipx+2, ipy+2);
I added this as an answer and am accepting it so the question no longer shows as unanswered, since it is solved. The code above divides the touch coordinates by the scale factors and converts them to integers so they can be used to check collision between the finger and the object.
Don't scale the canvas directly. Make a Matrix object, scale that once. Then concat that to the canvas. Then you can make an inverted matrix for your touch events.
And just make the invert matrix whenever you change the view matrix:
viewMatrix = new Matrix();
viewMatrix.setScale(scaleFactor, scaleFactor);
invertMatrix = new Matrix(viewMatrix);
invertMatrix.invert(invertMatrix);
Then apply these two matrices to the relevant events.
@Override
public boolean onTouchEvent(MotionEvent event) {
event.transform(invertMatrix);
And then on the draw events, concat the matrix.
@Override
protected void onDraw(Canvas canvas) {
canvas.concat(viewMatrix);
And you're done. Everything is taken care of for you. Whatever modifications you make to the view matrix will change your viewbox, and your touch events will be translated into that same scene too.
If you want to add panning or rotation, or even skew the view, it's all taken care of. Just apply that to the matrix, get the inverted matrix, and the view will look that way and the touch events will respond as you expect.
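For example, panning could be folded into the same matrix and the inverse recomputed whenever it changes (a sketch; the dx/dy drag deltas and the helper name are illustrative, not from the original answer):

// Hypothetical helper: call whenever the scale or pan changes.
private void updateViewMatrix(float scaleFactor, float dx, float dy) {
    viewMatrix.reset();
    viewMatrix.postScale(scaleFactor, scaleFactor);
    viewMatrix.postTranslate(dx, dy);   // pan on top of the zoom
    viewMatrix.invert(invertMatrix);    // keep the touch-space inverse in sync
    invalidate();                       // redraw with the new view matrix
}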
I'm new to Android, and I'm trying to get the hang of multi touch input. I've begun with a simple app that allows the user to create rectangles on a Canvas by dragging and releasing with one finger, which I have working. To expand upon that, I now want a user to be able to rotate the rectangle they are drawing using a second finger, which is where my problems begin. As it stands, adding a second finger will cause multiple rectangles to rotate, instead of just the current one, but they will revert to their default orientation as soon as the second finger is released.
I've been working at it for a while, and I think my core problem is that I'm mishandling the multiple MotionEvents that come with two (or more fingers). Logging statements I left to display the coordinates on the screen for each event stay tied to the first finger touching the screen, instead of switching to the second. I've tried multiple configurations of accessing and changing the event pointer ID, and still no luck. If anyone could provide some guidance in the right direction, I would be extremely grateful.
My code is as follows:
public class BoxDrawingView extends View {
private static final String TAG = "BoxDrawingView";
private static final int INVALID_POINTER_ID = -1;
private int mActivePointerId = INVALID_POINTER_ID;
private Box mCurrentBox;
private List<Box> mBoxen = new ArrayList<>();
private Float mLastTouchX;
private Float mLastTouchY;
...
@Override
public boolean onTouchEvent(MotionEvent event) {
switch(MotionEventCompat.getActionMasked(event)) {
case MotionEvent.ACTION_DOWN:
mActivePointerId = MotionEventCompat.getPointerId(event, 0);
current = new PointF(MotionEventCompat.getX(event, mActivePointerId),
MotionEventCompat.getY(event, mActivePointerId));
action = "ACTION_DOWN";
// Reset drawing state
mCurrentBox = new Box(current);
mBoxen.add(mCurrentBox);
mLastTouchX = MotionEventCompat.getX(event, MotionEventCompat.getPointerId(event, 0));
mLastTouchY = MotionEventCompat.getY(event, MotionEventCompat.getPointerId(event, 0));
break;
case MotionEvent.ACTION_POINTER_DOWN:
action = "ACTION_POINTER_DOWN";
mActivePointerId = MotionEventCompat.getPointerId(event, 0);
mLastTouchX = MotionEventCompat.getX(event, MotionEventCompat.getPointerId(event, 0));
mLastTouchY = MotionEventCompat.getY(event, MotionEventCompat.getPointerId(event, 0));
break;
case MotionEvent.ACTION_MOVE:
action = "ACTION_MOVE";
current = new PointF(MotionEventCompat.getX(event, mActivePointerId),
MotionEventCompat.getY(event, mActivePointerId));
if (mCurrentBox != null) {
mCurrentBox.setCurrent(current);
invalidate();
}
if(MotionEventCompat.getPointerCount(event) > 1) {
int pointerIndex = MotionEventCompat.findPointerIndex(event, mActivePointerId);
float currX = MotionEventCompat.getX(event, pointerIndex);
float currY = MotionEventCompat.getY(event, pointerIndex);
if(mLastTouchX < currX) {
// simplified: only use x coordinates for rotation for now.
// +X for clockwise, -X for counter clockwise
Log.d(TAG, "Clockwise");
mRotationAngle = 30;
}
else if (mLastTouchX > getX()) {
Log.d(TAG, "Counter clockwise");
mRotationAngle = -30;
}
}
break;
case MotionEvent.ACTION_UP:
action = "ACTION_UP";
mCurrentBox = null;
mLastTouchX = null;
mLastTouchY = null;
mActivePointerId = INVALID_POINTER_ID;
break;
case MotionEvent.ACTION_POINTER_UP:
action = "ACTION_POINTER_UP";
int pointerIndex = event.getActionIndex();
int pointerId = event.getPointerId(pointerIndex);
if(pointerId == mActivePointerId){
mActivePointerId = INVALID_POINTER_ID;
}
break;
case MotionEvent.ACTION_CANCEL:
action = "ACTION_CANCEL";
mCurrentBox = null;
mActivePointerId = INVALID_POINTER_ID;
break;
}
return true;
}
@Override
protected void onDraw(Canvas canvas){
// Fill the background
canvas.drawPaint(mBackgroundPaint);
for(Box box : mBoxen) {
// Box is a custom object. Origin is the origin point,
// Current is the point of the opposite diagonal corner
float left = Math.min(box.getOrigin().x, box.getCurrent().x);
float right = Math.max(box.getOrigin().x, box.getCurrent().x);
float top = Math.min(box.getOrigin().y, box.getCurrent().y);
float bottom = Math.max(box.getOrigin().y, box.getCurrent().y);
if(mRotationAngle != 0) {
canvas.save();
canvas.rotate(mRotationAngle);
canvas.drawRect(left, top, right, bottom, mBoxPaint);
canvas.rotate(-mRotationAngle);
canvas.restore();
mRotationAngle = 0;
} else {
canvas.drawRect(left, top, right, bottom, mBoxPaint);
}
}
}
}
There are several ways to draw things, not just in Android but in Java as well. The thing is that you are trying to draw the rectangles by rotating the Canvas. That's one way, but in my personal experience it is only a good choice if you want to rotate the whole picture. If not, it may get a little tricky, because you need to place a rotation axis, which it seems you are not using, so Android will assume that you want to rotate around the top-left corner or the center of the view (I don't remember which).
If you are opting for that choice, you may try to do it like this:
Matrix matrix = new Matrix();
matrix.setRotate(angle, rectangleCenterX, rectangleCenterY);
canvas.setMatrix(matrix);
But I recommend trying a different approach: do the rotation directly on the rectangle you are moving, by calculating the corner points of the polygon. You can do this using Java Math operations:
// Assumes x, y (centre), width, length, angle and radians are fields of the enclosing class.
public void formShape(int cx[], int cy[], double scale) {
double xGap = (width / 2) * Math.cos(angle) * scale;
double yGap = (width / 2) * Math.sin(angle) * scale;
cx[0] = (int) (x * scale + xGap);
cy[0] = (int) (y * scale + yGap);
cx[1] = (int) (x * scale - xGap);
cy[1] = (int) (y * scale - yGap);
cx[2] = (int) (x * scale - xGap - length * Math.cos(radians) * scale);
cy[2] = (int) (y * scale - yGap - length * Math.sin(radians) * scale);
cx[3] = (int) (x * scale + xGap - length * Math.cos(radians) * scale);
cy[3] = (int) (y * scale + yGap - length * Math.sin(radians) * scale);
}
So (x, y) is the center of your rectangle, and width and height tell you how big it is. In the formShape(int[], int[], double) method, cx and cy are going to be used to draw your shape, and scale is the value to use if you want to zoom in or out later; if not, just use scale = 1.
Now for drawing your rectangles, this is how you do it:
Paint paint = new Paint();
paint.setColor(Color.GRAY);
paint.setStyle(Style.FILL);
int[] cx = new int[4];
int[] cy = new int[4];
Box box = yourBoxHere;
box.formShape(cx, cy, 1);
Path path = new Path();
path.reset(); // only needed when reusing this path for a new build
path.moveTo(cx[0], cy[0]); // used for first point
path.lineTo(cx[1], cy[1]);
path.lineTo(cx[2], cy[2]);
path.lineTo(cx[3], cy[3]);
path.lineTo(cx[0], cy[0]); // repeat the first point
canvas.drawPath(path, paint);
For a multi-touch rotation listener you should override two methods in your Activity or View:
@Override
public boolean onTouch(View v, MotionEvent event) {
if (event.getAction() == MotionEvent.ACTION_UP)
this.points = null;
return true;
}
@Override
public boolean dispatchTouchEvent(MotionEvent event) {
if(event.getPointerCount() >= 2) {
float newPoints[][] = new float[][] {
{event.getX(0), event.getY(0)},
{event.getX(1), event.getY(1)}
};
double angle = angleBetweenTwoPoints(newPoints[0][0], newPoints[0][1], newPoints[1][0], newPoints[1][1]);
if(points != null) {
double difference = angle - initialAngle;
if(Math.abs(difference) > rotationSensibility) {
listener.onGestureListener(GestureListener.ROTATION, Math.toDegrees(difference));
this.initialAngle = angle;
}
} else {
this.initialAngle = angle;
}
this.points = newPoints;
}
return super.dispatchTouchEvent(event);
}
public static double angleBetweenTwoPoints(double xHead, double yHead, double xTail, double yTail) {
if(xHead == xTail) {
if(yHead > yTail)
return Math.PI/2;
else
return (Math.PI*3)/2;
} else if(yHead == yTail) {
if(xHead > xTail)
return 0;
else
return Math.PI;
} else if(xHead > xTail) {
if(yHead > yTail)
return Math.atan((yHead-yTail)/(xHead-xTail));
else
return Math.PI*2 - Math.atan((yTail-yHead)/(xHead-xTail));
} else {
if(yHead > yTail)
return Math.PI - Math.atan((yHead-yTail)/(xTail-xHead));
else
return Math.PI + Math.atan((yTail-yHead)/(xTail-xHead));
}
}
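As a side note, the branching above can be collapsed with Math.atan2, which handles the quadrants for you; this is an equivalent reformulation, not the answerer's original code:

public static double angleBetweenTwoPoints(double xHead, double yHead, double xTail, double yTail) {
    // atan2 returns a value in (-PI, PI]; shift negative results into [0, 2*PI).
    double angle = Math.atan2(yHead - yTail, xHead - xTail);
    return angle < 0 ? angle + 2 * Math.PI : angle;
}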
Sorry, but this answer is getting long. If you have further questions about any of these operations, or you want to change the approach of your solution, please ask and tell me in the comments.
I hope this was helpful.
I am trying to implement zooming on a canvas which should focus on a pivot point. Zooming works fine, but afterwards the user should be able to select elements on the canvas. The problem is that my translation values seem to be incorrect, because they have a different offset than the ones I get when I don't zoom to the pivot point (zooming without a pivot point plus dragging works fine).
I used some code from this example.
The relevant code is:
class DragView extends View {
private static float MIN_ZOOM = 0.2f;
private static float MAX_ZOOM = 2f;
// These constants specify the mode that we're in
private static int NONE = 0;
private int mode = NONE;
private static int DRAG = 1;
private static int ZOOM = 2;
public ArrayList<ProcessElement> elements;
// Visualization
private boolean checkDisplay = false;
private float displayWidth;
private float displayHeight;
// These two variables keep track of the X and Y coordinate of the finger when it first
// touches the screen
private float startX = 0f;
private float startY = 0f;
// These two variables keep track of the amount we need to translate the canvas along the X
//and the Y coordinate
// Also the offset from initial 0,0
private float translateX = 0f;
private float translateY = 0f;
private float lastGestureX = 0;
private float lastGestureY = 0;
private float scaleFactor = 1.f;
private ScaleGestureDetector detector;
...
private void sharedConstructor() {
elements = new ArrayList<ProcessElement>();
flowElements = new ArrayList<ProcessFlow>();
detector = new ScaleGestureDetector(getContext(), new ScaleListener());
}
/**
* checked once to get the measured screen height/width
* @param hasWindowFocus
*/
@Override
public void onWindowFocusChanged(boolean hasWindowFocus) {
super.onWindowFocusChanged(hasWindowFocus);
if (!checkDisplay) {
displayHeight = getMeasuredHeight();
displayWidth = getMeasuredWidth();
checkDisplay = true;
}
}
@Override
public boolean onTouchEvent(MotionEvent event) {
ProcessBaseElement lastElement = null;
switch (event.getAction() & MotionEvent.ACTION_MASK) {
case MotionEvent.ACTION_DOWN:
mode = DRAG;
// Check if an Element has been touched.
// Need to use the absolute Position that's why we take the offset into consideration
touchedElement = isElementTouched(((translateX * -1) + event.getX()) / scaleFactor, (translateY * -1 + event.getY()) / scaleFactor);
if (touchedElement == null) {
//We assign the current X and Y coordinate of the finger to startX and startY minus the previously translated
//amount for each coordinate. This works even when we are translating the first time because the initial
//values for these two variables are zero.
startX = event.getX() - translateX;
startY = event.getY() - translateY;
}
// if an element has been touched -> no need to take offset into consideration, because there's no dragging possible
else {
startX = event.getX();
startY = event.getY();
}
break;
case MotionEvent.ACTION_MOVE:
if (mode != ZOOM) {
if (touchedElement == null) {
translateX = event.getX() - startX;
translateY = event.getY() - startY;
} else {
startX = event.getX();
startY = event.getY();
}
}
if(detector.isInProgress()) {
lastGestureX = detector.getFocusX();
lastGestureY = detector.getFocusY();
}
break;
case MotionEvent.ACTION_UP:
mode = NONE;
break;
case MotionEvent.ACTION_POINTER_DOWN:
mode = ZOOM;
break;
case MotionEvent.ACTION_POINTER_UP:
break;
}
detector.onTouchEvent(event);
invalidate();
return true;
}
private ProcessBaseElement isElementTouched(float x, float y) {
for (int i = elements.size() - 1; i >= 0; i--) {
if (elements.get(i).isTouched(x, y))
return elements.get(i);
}
return null;
}
@Override
public void onDraw(Canvas canvas) {
super.onDraw(canvas);
canvas.save();
if(detector.isInProgress()) {
canvas.scale(scaleFactor,scaleFactor,detector.getFocusX(),detector.getFocusY());
} else
canvas.scale(scaleFactor, scaleFactor,lastGestureX,lastGestureY); // zoom
// canvas.scale(scaleFactor,scaleFactor);
//We need to divide by the scale factor here, otherwise we end up with excessive panning based on our zoom level
//because the translation amount also gets scaled according to how much we've zoomed into the canvas.
canvas.translate(translateX / scaleFactor, translateY / scaleFactor);
drawContent(canvas);
canvas.restore();
}
/**
* scales the canvas
*/
private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
@Override
public boolean onScale(ScaleGestureDetector detector) {
scaleFactor *= detector.getScaleFactor();
scaleFactor = Math.max(MIN_ZOOM, Math.min(scaleFactor, MAX_ZOOM));
return true;
}
}
}
Elements are saved with their absolute position on the canvas (with dragging in mind). I suspect that I don't take the new offset from the pivot point into account in translateX and translateY, but I can't figure out where and how I should do this.
Any help would be appreciated.
Okay, so you're basically trying to figure out what a certain screen X/Y coordinate corresponds to after the view has been scaled (s) around a certain pivot point {Px, Py}.
So, let's try to break it down.
For the sake of argument, let's assume that Px & Py = 0, and that s = 2. This means the view was zoomed by a factor of 2, around the top-left corner of the view.
In this case, the screen coordinate {0, 0} corresponds to {0, 0} in the view, because that point is the only point which hasn't changed. Generally speaking, if the screen coordinate is equal to the pivot point, then there is no change.
What happens if the user clicks on some other point, let's say {2, 3}? In this case, what was once {2, 3} has now moved by a factor of 2 away from the pivot point (which is {0, 0}), and so the corresponding position is {4, 6}.
All this is easy when the pivot point is {0, 0}, but what happens when it's not?
Well, let's look at another case: the pivot point is now the bottom-right corner of the view (width = w, height = h, i.e. the point {w, h}). Again, if the user clicks at that same position, then the corresponding position is also {w, h}; but let's say the user clicks on some other position, for example {w - 2, h - 3}. The same logic applies: the translated position is {w - 4, h - 6}.
To generalize, what we're trying to do is convert the screen coordinates to the translated coordinate. We need to perform the same action on this X/Y coordinate we received that we performed on every pixel in the zoomed view.
Step 1 - we'd like to translate the X/Y position according to the pivot point:
X = X - Px
Y = Y - Py
Step 2 - Then we scale X & Y:
X = X * s
Y = Y * s
Step 3 - Then we translate back:
X = X + Px
Y = Y + Py
If we apply this to the last example I gave (I will only demonstrate for X):
Original value: X = w - 2, Px = w
Step 1: X <-- X - Px = w - 2 - w = -2
Step 2: X <-- X * s = -2 * 2 = -4
Step 3: X <-- X + Px = -4 + w = w - 4
Once you apply this to any X/Y you receive which is relevant prior to the zoom, the point will be translated so that it is relative to the zoomed state.
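Written out in code, the three steps collapse to one line per axis (a sketch; x/y, px/py and s are the point, pivot and scale from the explanation above):

// Forward mapping: where a pre-zoom point lands after scaling by s around the pivot (px, py).
float zoomedX = (x - px) * s + px;
float zoomedY = (y - py) * s + py;

// The inverse (screen touch back to pre-zoom coordinates) divides instead of multiplying:
float originalX = (touchX - px) / s + px;
float originalY = (touchY - py) / s + py;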
Hope this helps.
I want to crop an image like the Facebook profile image selection on Android, where the user can move and scale an image, causing it to be resized and/or cropped:
How might I accomplish this?
I had the same requirement. I solved it by combining PhotoView and Cropper, replacing the ImageView with a PhotoView in the Cropper lib.
I had to modify the CropWindow class so that touch events are handled correctly:
public void setImageView(PhotoView pv){
mPhotoView = pv;
}
@Override
public boolean onTouchEvent(MotionEvent event) {
// If this View is not enabled, don't allow for touch interactions.
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
boolean dispatch = onActionDown(event.getX(), event.getY());
if(!dispatch)
mPhotoView.dispatchTouchEvent(event);
return dispatch;
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_CANCEL:
getParent().requestDisallowInterceptTouchEvent(false);
onActionUp();
return true;
case MotionEvent.ACTION_MOVE:
onActionMove(event.getX(), event.getY());
getParent().requestDisallowInterceptTouchEvent(true);
return true;
default:
return false;
}
}
In the CropImageView class I changed a few things as well:
private void init(Context context) {
final LayoutInflater inflater = LayoutInflater.from(context);
final View v = inflater.inflate(R.layout.crop_image_view, this, true);
mImageView = (PhotoView) v.findViewById(R.id.ImageView_image2);
setImageResource(mImageResource);
mCropOverlayView = (CropOverlayView) v.findViewById(R.id.CropOverlayView);
mCropOverlayView.setInitialAttributeValues(mGuidelines, mFixAspectRatio, mAspectRatioX, mAspectRatioY);
mCropOverlayView.setImageView(mImageView);
}
You can see that I have replaced the ImageView with a PhotoView inside R.layout.crop_image_view in the Cropper library.
The Cropper library supports a fixed crop size and PhotoView allows you to move and scale the photo, giving you the best of both worlds. :)
Hope it helps.
Edit, for those who asked how to get only the part of the image inside the crop area:
private Bitmap getCurrentDisplayedImage(){
Bitmap result = Bitmap.createBitmap(mImageView.getWidth(), mImageView.getHeight(), Bitmap.Config.RGB_565);
Canvas c = new Canvas(result);
mImageView.draw(c);
return result;
}
public Bitmap getCroppedImage() {
Bitmap mCurrentDisplayedBitmap = getCurrentDisplayedImage();
final Rect displayedImageRect = ImageViewUtil2.getBitmapRectCenterInside(mCurrentDisplayedBitmap, mImageView);
// Get the scale factor between the actual Bitmap dimensions and the
// displayed dimensions for width.
final float actualImageWidth =mCurrentDisplayedBitmap.getWidth();
final float displayedImageWidth = displayedImageRect.width();
final float scaleFactorWidth = actualImageWidth / displayedImageWidth;
// Get the scale factor between the actual Bitmap dimensions and the
// displayed dimensions for height.
final float actualImageHeight = mCurrentDisplayedBitmap.getHeight();
final float displayedImageHeight = displayedImageRect.height();
final float scaleFactorHeight = actualImageHeight / displayedImageHeight;
// Get crop window position relative to the displayed image.
final float cropWindowX = Edge.LEFT.getCoordinate() - displayedImageRect.left;
final float cropWindowY = Edge.TOP.getCoordinate() - displayedImageRect.top;
final float cropWindowWidth = Edge.getWidth();
final float cropWindowHeight = Edge.getHeight();
// Scale the crop window position to the actual size of the Bitmap.
final float actualCropX = cropWindowX * scaleFactorWidth;
final float actualCropY = cropWindowY * scaleFactorHeight;
final float actualCropWidth = cropWindowWidth * scaleFactorWidth;
final float actualCropHeight = cropWindowHeight * scaleFactorHeight;
// Crop the subset from the original Bitmap.
final Bitmap croppedBitmap = Bitmap.createBitmap(mCurrentDisplayedBitmap,
(int) actualCropX,
(int) actualCropY,
(int) actualCropWidth,
(int) actualCropHeight);
return croppedBitmap;
}
public RectF getActualCropRect() {
final Rect displayedImageRect = ImageViewUtil.getBitmapRectCenterInside(mBitmap, mImageView);
final float actualImageWidth = mBitmap.getWidth();
final float displayedImageWidth = displayedImageRect.width();
final float scaleFactorWidth = actualImageWidth / displayedImageWidth;
// Get the scale factor between the actual Bitmap dimensions and the displayed
// dimensions for height.
final float actualImageHeight = mBitmap.getHeight();
final float displayedImageHeight = displayedImageRect.height();
final float scaleFactorHeight = actualImageHeight / displayedImageHeight;
// Get crop window position relative to the displayed image.
final float displayedCropLeft = Edge.LEFT.getCoordinate() - displayedImageRect.left;
final float displayedCropTop = Edge.TOP.getCoordinate() - displayedImageRect.top;
final float displayedCropWidth = Edge.getWidth();
final float displayedCropHeight = Edge.getHeight();
// Scale the crop window position to the actual size of the Bitmap.
float actualCropLeft = displayedCropLeft * scaleFactorWidth;
float actualCropTop = displayedCropTop * scaleFactorHeight;
float actualCropRight = actualCropLeft + displayedCropWidth * scaleFactorWidth;
float actualCropBottom = actualCropTop + displayedCropHeight * scaleFactorHeight;
// Correct for floating point errors. Crop rect boundaries should not exceed the
// source Bitmap bounds.
actualCropLeft = Math.max(0f, actualCropLeft);
actualCropTop = Math.max(0f, actualCropTop);
actualCropRight = Math.min(mBitmap.getWidth(), actualCropRight);
actualCropBottom = Math.min(mBitmap.getHeight(), actualCropBottom);
final RectF actualCropRect = new RectF(actualCropLeft,
actualCropTop,
actualCropRight,
actualCropBottom);
return actualCropRect;
}
private boolean onActionDown(float x, float y) {
final float left = Edge.LEFT.getCoordinate();
final float top = Edge.TOP.getCoordinate();
final float right = Edge.RIGHT.getCoordinate();
final float bottom = Edge.BOTTOM.getCoordinate();
mPressedHandle = HandleUtil.getPressedHandle(x, y, left, top, right, bottom, mHandleRadius);
if (mPressedHandle == null)
return false;
mTouchOffset = HandleUtil2.getOffset(mPressedHandle, x, y, left, top, right, bottom);
invalidate();
return true;
}
I have some additions to @Nikola Despotoski's answer.
Firstly, you don't have to change the ImageView in R.layout.crop_image_view to a PhotoView, because the PhotoView logic can simply be attached in code with new PhotoViewAttacher(mImageView).
Also, in the default logic the crop overlay's size is calculated only at initialization, according to the ImageView's bitmap size. That is not appropriate for us, because we change the bitmap size with touches, per the requirement. So we should update the stored bitmap rect in the CropOverlayView and invalidate it every time the main image changes.
And the last issue is the range in which the user can crop. Normally it is based on the image size, but if we make the image bigger it can go beyond the border of the screen, so the user would be able to move the crop view beyond the border, which is incorrect. So we should also handle this situation and impose a limit.
And the corresponding code for these three issues:
In CropImageView:
private void init(Context context) {
final LayoutInflater inflater = LayoutInflater.from(context);
final View v = inflater.inflate(R.layout.crop_image_view, this, true);
mImageView = (ImageView) v.findViewById(R.id.ImageView_image);
setImageResource(mImageResource);
mCropOverlayView = (CropOverlayView) v.findViewById(R.id.CropOverlayView);
mCropOverlayView.setInitialAttributeValues(mGuidelines, mFixAspectRatio, mAspectRatioX, mAspectRatioY);
mCropOverlayView.setOutlineTouchEventReceiver(mImageView);
final PhotoViewAttacher photoAttacher = new PhotoViewAttacher(mImageView);
photoAttacher.setOnMatrixChangeListener(new PhotoViewAttacher.OnMatrixChangedListener() {
@Override
public void onMatrixChanged(RectF imageRect) {
final Rect visibleRect = ImageViewUtil.getBitmapRectCenterInside(photoAttacher.getVisibleRectangleBitmap(), photoAttacher.getImageView());
imageRect.top = Math.max(imageRect.top, visibleRect.top);
imageRect.left = Math.max(imageRect.left, visibleRect.left);
imageRect.right = Math.min(imageRect.right, visibleRect.right);
imageRect.bottom = Math.min(imageRect.bottom, visibleRect.bottom);
Rect bitmapRect = new Rect();
imageRect.round(bitmapRect);
mCropOverlayView.changeBitmapRectInvalidate(bitmapRect);
}
});
}
In CropOverlayView:
@Override
public boolean onTouchEvent(MotionEvent event) {
// If this View is not enabled, don't allow for touch interactions.
if (!isEnabled()) {
return false;
}
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
return onActionDown(event.getX(), event.getY());
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_CANCEL:
getParent().requestDisallowInterceptTouchEvent(false);
return onActionUp();
case MotionEvent.ACTION_MOVE:
boolean result = onActionMove(event.getX(), event.getY());
getParent().requestDisallowInterceptTouchEvent(true);
return result;
default:
return false;
}
}
public void changeBitmapRectInvalidate(Rect bitmapRect) {
mBitmapRect = bitmapRect;
invalidate();
}
private boolean onActionDown(float x, float y) {
final float left = Edge.LEFT.getCoordinate();
final float top = Edge.TOP.getCoordinate();
final float right = Edge.RIGHT.getCoordinate();
final float bottom = Edge.BOTTOM.getCoordinate();
mPressedHandle = HandleUtil.getPressedHandle(x, y, left, top, right, bottom, mHandleRadius);
if (mPressedHandle == null){
return false;
}
// Calculate the offset of the touch point from the precise location
// of the handle. Save these values in a member variable since we want
// to maintain this offset as we drag the handle.
mTouchOffset = HandleUtil.getOffset(mPressedHandle, x, y, left, top, right, bottom);
invalidate();
return true;
}
/**
* Handles a {@link MotionEvent#ACTION_UP} or
* {@link MotionEvent#ACTION_CANCEL} event.
* @return true if the event was handled, false otherwise
*/
private boolean onActionUp() {
if (mPressedHandle == null)
return false;
mPressedHandle = null;
invalidate();
return true;
}
/**
* Handles a {@link MotionEvent#ACTION_MOVE} event.
*
* @param x the x-coordinate of the move event
* @param y the y-coordinate of the move event
*/
private boolean onActionMove(float x, float y) {
if (mPressedHandle == null)
return false;
// Adjust the coordinates for the finger position's offset (i.e. the
// distance from the initial touch to the precise handle location).
// We want to maintain the initial touch's distance to the pressed
// handle so that the crop window size does not "jump".
x += mTouchOffset.first;
y += mTouchOffset.second;
// Calculate the new crop window size/position.
if (mFixAspectRatio) {
mPressedHandle.updateCropWindow(x, y, mTargetAspectRatio, mBitmapRect, mSnapRadius);
} else {
mPressedHandle.updateCropWindow(x, y, mBitmapRect, mSnapRadius);
}
invalidate();
return true;
}
For properly getting the cropped image you should use the second part of @Nikola Despotoski's answer.
What you want can be achieved exactly with this lib: simple-crop-image-lib.
Thanks to all. I was able to achieve this using the answers above, with the PhotoView and Cropper libraries. I added options to pick images from the camera or gallery. I'm sharing the project on GitHub, with an APK file included in the project. Use a real device for testing the camera, as the emulator doesn't handle the camera well. Here's the link to my project:
https://github.com/ozeetee/AndroidImageZoomCrop