I am having a problem zooming the canvas. I have made a custom view in which I draw relationship diagrams, but when I zoom out, the canvas shrinks towards position (0,0). I have looked at different threads and questions but could not find an appropriate answer.
What I am doing in the onDraw method is:
canvas.scale(mScaleFactor, mScaleFactor);
I have also seen the canvas.scale(x, y, px, py) method, but I do not know how to work out the pivot point px, py.
public boolean onScale(ScaleGestureDetector detector) {
    mScaleFactor *= detector.getScaleFactor();
    // Don't let the object get too small or too large.
    mScaleFactor = Math.max(0.4f, Math.min(mScaleFactor, 5.0f));
    if (mScaleFactor >= 1)
        mScaleFactor = 1f;
    invalidate();
    return true;
}
The pivot point is the point that your canvas is transformed around, so scaling with a pivot of (0,0) makes the canvas shrink towards the top-left corner.
Using the following method you can move the pivot point wherever you want:
canvas.scale(x, y, px, py);
Now for the new stuff:
If you want your canvas to be scaled towards its centre, you will just have to know the point in the middle of your canvas:
float cX = canvas.getWidth()/2.0f; //Width/2 gives the horizontal centre
float cY = canvas.getHeight()/2.0f; //Height/2 gives the vertical centre
And then you can scale it using those coordinates:
canvas.scale(x, y, cX, cY);
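Putting it together, a minimal onDraw sketch might look like this (mScaleFactor is the field from your onScale listener; the rest is an assumption about your view):
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.save();
    // Scale around the centre of the canvas instead of (0,0).
    float cX = canvas.getWidth() / 2.0f;
    float cY = canvas.getHeight() / 2.0f;
    canvas.scale(mScaleFactor, mScaleFactor, cX, cY);
    // ... draw your relationship diagram here ...
    canvas.restore();
}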
Take a look at these two answers; they describe the problem and its solutions very well.
Android Bitmap/Canvas offset after scale
Scaling image of ImageView while maintaining center point in same place
When drawing on the view without zoom, it works fine. See the screenshot.
But after zooming and then drawing on the view, the stroke lands slightly above or below the touch point. See the screenshot.
Here is my code for the custom view http://www.paste.org/78026, for the zoom http://www.paste.org/78027, and my XML http://www.paste.org/78028.
Can you please tell me where I am going wrong?
Finally, after a lot of searching, I found how to get the relative X,Y when the view is zoomed. It may be helpful to someone:
// Get the values of the matrix
float[] values = new float[9];
matrix.getValues(values);
// values[2] and values[5] are the x,y coordinates of the top left corner of the drawable image, regardless of the zoom factor.
// values[0] and values[4] are the zoom factors for the image's width and height respectively. If you zoom at the same factor, these should both be the same value.
// event is the touch event for MotionEvent.ACTION_UP
float relativeX = (event.getX() - values[2]) / values[0];
float relativeY = (event.getY() - values[5]) / values[4];
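For context, a sketch of how this might sit inside onTouchEvent (assuming matrix is the same Matrix you apply to the canvas; handleTapAt is a hypothetical hit-test helper):
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_UP) {
        float[] values = new float[9];
        matrix.getValues(values);
        // Undo the translation and the scale to get back to canvas coordinates.
        float relativeX = (event.getX() - values[Matrix.MTRANS_X]) / values[Matrix.MSCALE_X];
        float relativeY = (event.getY() - values[Matrix.MTRANS_Y]) / values[Matrix.MSCALE_Y];
        handleTapAt(relativeX, relativeY); // hypothetical: hit-test your diagram here
    }
    return true;
}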
Is there any way of implementing smooth pinch-to-zoom without using a matrix?
I am building a drawing app and I want it to have pinch-to-zoom.
I get the pivot point for scaling with:
centerposX = mScaleDetector.getFocusX();
centerposY = mScaleDetector.getFocusY();
and then for scaling I use:
canvas.scale(scaleFactor, scaleFactor, centerposX, centerposY);
But the problem is that it immediately centers the view at the pivot point and then zooms it, rather than using the pivot as a guide for centering.
I've seen that this problem has been solved using a matrix, but I don't want to use one, because I need to keep track of offsets and zoom translations, which are calculated from centerposX/Y and scaleFactor, to put drawings where they belong on screen.
So is there any way to solve this pivot point problem smoothly?
Thanks!
After a week I realized that I need to use a matrix and get absolute coordinates on the screen, so I used the scale gesture detector to set the matrix scale:
matrix.postScale(mScaleFactor, mScaleFactor, focusX, focusY);
In my onDraw method I used canvas.concat(matrix); so that not just the Bitmap but the WHOLE canvas gets the matrix transformation, and to get real screen coordinates I used a method I found on Stack Overflow:
public float[] getAbsolutePosition(float Ax, float Ay) {
    matrix.getValues(m);
    float x = width - ((m[Matrix.MTRANS_X] - Ax) / m[Matrix.MSCALE_X])
            - (width - getTranslationX());
    float y = height - ((m[Matrix.MTRANS_Y] - Ay) / m[Matrix.MSCALE_X])
            - (height - getTranslationY());
    return new float[] { x, y };
}
I call this every time in my onTouchEvent() method, supplying event.getX() and event.getY() as arguments.
After that, everything is easy; thanks to this method I also got rid of the ~4-5 variables I had been using to find the real touch location.
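For completeness, here is a rough sketch of how these pieces might be wired up (class and field names are mine, not the poster's):
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;
import android.view.View;

public class ZoomDrawView extends View {
    private final Matrix matrix = new Matrix();
    private final float[] m = new float[9];
    private final ScaleGestureDetector detector;

    public ZoomDrawView(Context context) {
        super(context);
        detector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                    @Override
                    public boolean onScale(ScaleGestureDetector d) {
                        // Accumulate the scale into the matrix, around the gesture focus.
                        matrix.postScale(d.getScaleFactor(), d.getScaleFactor(),
                                d.getFocusX(), d.getFocusY());
                        invalidate();
                        return true;
                    }
                });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        detector.onTouchEvent(event);
        // ... use getAbsolutePosition(event.getX(), event.getY()) for drawing ...
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // The whole canvas, not just a bitmap, gets the transformation.
        canvas.concat(matrix);
        // ... drawing code ...
    }
}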
I'm trying to show a picture that can be zoomed and panned and that rotates with a compass reading. With the code below, all three actions kind of work, but they influence each other.
Here is what I want to achieve:
1. Rotate around the center of the screen
2. Scale leaving the same part of the picture in the center
3. Pan to the desired spot in the picture
Here is what is actually happening with the code below:
1. Rotation works as intended, around the center of the screen
2. Scaling works, but it scales around the center of the picture
3. Translation only works as intended if the angle is zero; otherwise it moves in the wrong direction
// the center of the view port
float centerX = screen.right/2;
float centerY = screen.bottom/2;
Matrix m = new Matrix();
m.preRotate(angle, centerX, centerY);
m.preScale(mScaleFactor, mScaleFactor, centerX, centerY);
// scaling the amount of translation,
// rotating the translation here gave crazy results
float x = mPosX / mScaleFactor;
float y = mPosY / mScaleFactor;
m.preTranslate(x, y);
canvas.drawBitmap(pic, m, null);
If I translate first, and later rotate, the translation works as intended but the rotation is not around the center of the view port anymore.
How can I apply all three transformations, without them influencing each other?
I'm not sure about the scaling around the centre of the image, but as for the translation being in the wrong direction: isn't that a consequence of rotating the image but not the translation? Maybe try something like this:
float x = (mPosX * (cos(angle) + sin(angle))) / mScaleFactor;
float y = (mPosY * (cos(angle) - sin(angle))) / mScaleFactor;
m.preTranslate(x, y);
Also, does the Matrix object have a method to apply an affine transformation directly? Then you wouldn't need to think about the order of operations.
The affine transformation might look something like this:
M = | mScaleFactor*cos(angle)    sin(angle)                  x |
    | -sin(angle)                mScaleFactor*cos(angle)     y |
    | 0                          0                           1 |
But this will rotate around the corner of the image so you need to use the preTranslate() function first like so:
Mt.preTranslate(-centerX,-centerY);
Apply Mt to pic before applying M, and afterwards apply the inverse of Mt (the translation back by (centerX, centerY)).
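For reference, one common arrangement (a sketch using the field names from the question, not necessarily what the answer above intends) is to build the matrix with post-operations, so that the pan is applied in screen space and is independent of the rotation angle:
Matrix m = new Matrix();
// Scale and rotate the picture around the centre of the view port...
m.postScale(mScaleFactor, mScaleFactor, centerX, centerY);
m.postRotate(angle, centerX, centerY);
// ...then pan in screen coordinates, so the direction no longer depends on the angle.
m.postTranslate(mPosX, mPosY);
canvas.drawBitmap(pic, m, null);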
I'm trying to write a graph class I can use in Android (I'm aware pre-made ones exist), but converting all of my coordinates would be a pain. Is there an easy way to make the screen coordinates start at the bottom left?
No, I don't know of a way to move 0,0 to the bottom left and get what you would typically think of as "normal" coordinates.
But combining scale() and translate() might do the trick to achieve the same effect.
canvas.translate(0,canvas.getHeight()); // reset where 0,0 is located
canvas.scale(1,-1); // invert
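For example, a minimal onDraw sketch (assuming only a Paint field named paint) might look like:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.save();
    canvas.translate(0, canvas.getHeight()); // move the origin to the bottom-left
    canvas.scale(1, -1);                     // flip the y-axis so it grows upwards
    canvas.drawLine(0, 0, 200, 100, paint);  // drawn from the bottom-left corner
    canvas.restore();
}
Note that anything drawn while the canvas is flipped (text in particular) will appear mirrored, so you may want to restore the canvas before drawing labels.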
You can flip your Canvas with something like canvas.scale(1, -1) and then translate it to the right place.
You can use canvas.translate() http://developer.android.com/reference/android/graphics/Canvas.html#translate(float, float) to move the origin to where you want.
The Android canvas has the origin at the top left. You want to move it to the bottom left. To do this, subtract each y coordinate from the canvas height.
float X1 = xStart;
float Y1 = canvas.getHeight() - yStart; //canvas is a Canvas object
float X2 = xEnd;
float Y2 = canvas.getHeight() - yEnd;
canvas.drawLine(X1, Y1, X2, Y2, paint ); //paint is a Paint object
This should make your line start from the bottom left.
I am working on my first "real" Android application, a graphical workflow editor. The drawing is done in a custom class that is a subclass of View. At the moment my elements are rectangles, which are drawn on a canvas. To detect actions on elements I compare coordinates and check for elements at the touch location.
To implement a zoom gesture I tried http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html
With the four-argument canvas.scale(...) function the centered zooming works well, but I lose the ability to calculate the canvas coordinates from the offsets mPosX and mPosY, so I can no longer detect whether a touch after a zoom hits an element.
I tried to change the example in the blogpost above to center the canvas on the zoom gesture with:
canvas.save();
canvas.translate(mPosX, mPosY);
canvas.scale(mScaleFactor, mScaleFactor, mScalePivotX, mScalePivotY);
//drawing ....
canvas.restore();
I did not find any examples on how this could be done without losing the reference offset to calculate the coordinates. Is there an easy workaround? I tried to calculate the offset with the gesture center and the scaling factor, but failed :/
I have seen that other examples which use an ImageView often use a Matrix to transform the image. Could this be done with a custom View and a Canvas? If yes, how can I get the x and y offset to check the coordinates?
Also, if my ideas are completely wrong, I would be very happy to see some examples on how this is done properly.
Thx! ;)
Perhaps the following code will help you calculate the coordinates from the gesture center and the scaling factor. I use this method in my class representing an OpenGL sprite.
void zoom(float scale, PointF midPoint) {
    if (zoomFactor == MAX_ZOOM_FACTOR && scale > 1) return;
    if (zoomFactor == MIN_ZOOM_FACTOR && scale < 1) return;

    zoomFactor *= scale;
    x = (x - midPoint.x) * scale + midPoint.x;
    y = (y - height + midPoint.y) * scale + height - midPoint.y;

    if (zoomFactor >= MAX_ZOOM_FACTOR) {
        zoomFactor = MAX_ZOOM_FACTOR;
    } else if (zoomFactor < MIN_ZOOM_FACTOR) {
        zoomFactor = MIN_ZOOM_FACTOR;
        x = 0;
        y = 0;
    }
}
The X and Y coordinates are processed differently because of the difference in direction between the OpenGL coordinate system (right and up) and midPoint's coordinate system (right and down). midPoint is taken from the MotionEvent's coordinates.
Everything else should be self-explanatory, I think.
Hope it will help you.
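The y-flip in the method above is specific to OpenGL. For a canvas-based custom view like the one in the question, both axes point down/right, so a sketch (with assumed fields offsetX, offsetY and zoomFactor) might look like this:
// Called from onScale() with detector.getScaleFactor(), getFocusX(), getFocusY().
void zoom(float scale, float focusX, float focusY) {
    zoomFactor *= scale;
    // Keep the point under the gesture focus fixed on screen.
    offsetX = (offsetX - focusX) * scale + focusX;
    offsetY = (offsetY - focusY) * scale + focusY;
    invalidate();
}

@Override
protected void onDraw(Canvas canvas) {
    canvas.save();
    canvas.translate(offsetX, offsetY);
    canvas.scale(zoomFactor, zoomFactor);
    // ... draw the rectangles in content coordinates ...
    canvas.restore();
}

// Map a screen touch back to content coordinates for hit-testing.
float toContentX(float screenX) { return (screenX - offsetX) / zoomFactor; }
float toContentY(float screenY) { return (screenY - offsetY) / zoomFactor; }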