Android - Transforming user coordinate system based on imageview.matrix

I have a graphing application that has an image displayed on screen. The image is dependent on internal variables for minimum and maximum X and Y coordinates.
For example the default coordinates may be from (-5,-5) to (+5,+5). Using these coordinates I draw the contents of the image. Think of a complex graph/image that can be panned and zoomed.
In Android I use an onTouchListener, postScale, postTranslate and setImageMatrix. This allows the user to drag and pinch zoom the onscreen image. This part works as expected. The user can pinch zoom and the onscreen image is zoomed in and gets blurry. The problem comes when they release their finger(s) and I need to regenerate the imageview contents. I need to know how the internal minx, maxx, miny and maxy coordinates changed in relation to the android coordinates.
How can I translate and scale my coordinate system by the amounts the user pans and zooms? I have set up all sorts of bad code to try to push/pull/scale my coordinate system based on the ImageView matrix transforms, but nothing works reliably. Is there a simple set of math formulas to help me do what I want?
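To make the question concrete, what I'm after is roughly a mapping like this (a sketch with placeholder names such as matrix, imageView, bitmapWidth/bitmapHeight and minX/maxX/minY/maxY, not working code from my app):

Matrix inverse = new Matrix();
if (matrix.invert(inverse)) {
    // Map the visible view rectangle back into bitmap pixel space.
    RectF visible = new RectF(0, 0, imageView.getWidth(), imageView.getHeight());
    inverse.mapRect(visible);

    // Convert bitmap pixels to graph units by linear interpolation.
    float graphWidth  = maxX - minX;
    float graphHeight = maxY - minY;
    float newMinX = minX + (visible.left   / bitmapWidth)  * graphWidth;
    float newMaxX = minX + (visible.right  / bitmapWidth)  * graphWidth;
    // Screen y grows downward, so the top of the view corresponds to maxY.
    float newMaxY = maxY - (visible.top    / bitmapHeight) * graphHeight;
    float newMinY = maxY - (visible.bottom / bitmapHeight) * graphHeight;
    // Regenerate the image with the new bounds, then reset the ImageView matrix.
}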
I hope this isn't too vague.

Related

Set Matrix minimum and maximum scale ImageView?

I have code which zooms and pans the ImageView matrix. It works well, but I don't want the image zoomed smaller than my screen, and I don't want it zoomed in too far, so I want to set limits on zooming. I want to do the same thing for dragging (panning): it should pan horizontally only if the image's width is larger than the screen, and vertically only if the image's height is larger than the screen. How can I achieve this? I tried some of Mike Ortiz's methods but I couldn't get them to work.
I coded this for my app, and it's tricky to get it all right.
I created some Rects and RectFs to do a lot of the interim calculations right off the bat. It's more efficient when you don't have to allocate these on every operation.
I used Matrix.setRectToRect() to find the minimum scale factor, and 3x that for the maximum. Then after the postScale on zoom, I clamp the new [absolute] scale factor to min/max.
After the postScale, I also compare the rect coordinates to the screen coordinates and add a translation to keep the image corners outside the screen boundaries. The same logic is applied for dragging operations.
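Roughly, that setup looks something like this (a simplified sketch with placeholder names, not my exact code):

// Minimum scale fits the whole image in the view; maximum is 3x that.
RectF imageRect = new RectF(0, 0, bitmapWidth, bitmapHeight);
RectF viewRect  = new RectF(0, 0, viewWidth, viewHeight);
Matrix fitMatrix = new Matrix();
fitMatrix.setRectToRect(imageRect, viewRect, Matrix.ScaleToFit.CENTER);
float[] values = new float[9];
fitMatrix.getValues(values);
float minScale = values[Matrix.MSCALE_X];
float maxScale = 3f * minScale;

// After a pinch, clamp the new absolute scale back into [minScale, maxScale].
matrix.postScale(scaleFactor, scaleFactor, midX, midY);
matrix.getValues(values);
float currentScale = values[Matrix.MSCALE_X];
if (currentScale > maxScale) {
    float fix = maxScale / currentScale;
    matrix.postScale(fix, fix, midX, midY);
} else if (currentScale < minScale) {
    float fix = minScale / currentScale;
    matrix.postScale(fix, fix, midX, midY);
}

The translation correction works the same way: map the image rect through the matrix, compare its edges to the view edges, and postTranslate by whatever amount is needed to keep the image covering the screen.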
Look into this library:
https://github.com/davemorrissey/subsampling-scale-image-view
Highly configurable, easily extendable view with pan and zoom gestures for displaying huge images without loss of detail. Perfect for photo galleries, maps, building plans etc.

How do I draw with the screen coordinates on canvas?

As is known, drawCircle(x, y, radius, paint) takes canvas coordinates, which may be different from the screen coordinates.
So if I just want to draw something on one particular point on the screen, how would I do that with that method?
I am asking this because the canvas may be moved or even zoomed, but I do NOT wish my circle to move. I want it to stay at that particular point on screen.
Here is the description of drawCircle() method.
It seems that there are no direct conversion methods, so we have to convert manually with a bit of math. We can process the coordinates according to whatever transformations were applied to the canvas, such as translation, scaling, or even rotation.
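For example, if the canvas has only been translated by (panX, panY) and then scaled by scale (placeholder names), a rough sketch of the manual conversion would be:

// Convert a fixed screen point to the equivalent canvas point.
float canvasX = (screenX - panX) / scale;
float canvasY = (screenY - panY) / scale;
// Divide the radius too (assuming uniform scale) so the on-screen size stays constant.
canvas.drawCircle(canvasX, canvasY, radius / scale, paint);

// If the full transform (including rotation) lives in a Matrix, the general
// version is to map the screen point through its inverse.
// canvasMatrix is whatever Matrix you applied to the canvas.
Matrix inverse = new Matrix();
canvasMatrix.invert(inverse);
float[] pt = { screenX, screenY };
inverse.mapPoints(pt);
canvas.drawCircle(pt[0], pt[1], radius / scale, paint);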

How to create 3D rotation effect in Android OpenGL?

I am currently working on a 3D model viewer for Android using OpenGL-ES. I want to create a rotation effect according to the gesture given.
I know how to do single-axis rotation, such as rotate solely on the x-, y- or z-axis. However, my problem is that I don't know how to combine them all 3 together and have my app know in which axis I want to rotate depending on the touch gesture.
Gestures I have in mind were:
Swipe up/down for x-axis
Swipe left/right for y-axis
swipe in circular motion for z-axis
How can I do this?
EDIT: I found out that three types of swipes can make the movement very ugly. Therefore, what I did was remove the z-axis motion. After removing that condition, I found that the other two work really well in conjunction with the same algorithm.
http://developer.android.com/resources/articles/gestures.html has some info on building a 'gesture library'. I haven't checked it out myself, but it seems to fit what you're looking for.
Alternatively, try something like gestureworks.com (again, I've not tried it myself)
I use a ProgressBar view for zooming in and out. I then use movement in x on the GLSurfaceView to rotate around the y axis, and movement in y to rotate around the x axis.
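A rough sketch of that touch handling, assuming a GLSurfaceView subclass and a renderer that exposes angleX/angleY fields which the draw loop feeds into glRotatef (all placeholder names):

private float lastX, lastY;
private static final float TOUCH_SCALE_FACTOR = 0.5f; // degrees per pixel, tune to taste

@Override
public boolean onTouchEvent(MotionEvent event) {
    float x = event.getX();
    float y = event.getY();
    if (event.getAction() == MotionEvent.ACTION_MOVE) {
        renderer.angleY += (x - lastX) * TOUCH_SCALE_FACTOR; // horizontal drag -> rotate about y axis
        renderer.angleX += (y - lastY) * TOUCH_SCALE_FACTOR; // vertical drag -> rotate about x axis
        requestRender();
    }
    lastX = x;
    lastY = y;
    return true;
}

Because the rotation tracks the finger directly, the feedback is immediate, which is the advantage over gesture recognition.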
The problem with a gesture is that the response is not instant, as the app first has to determine which gesture the user made. If you then use how far the user moved their finger to determine the amount to rotate/zoom, there is no instant feedback, so it takes time for the user to learn how to control the rotate/zoom amount. I guess it would work if you were rotating/zooming by a set amount per gesture.
It sounds like what you are looking to do is more math intensive than you might expect. There are two ways to do this mathematically: (1) using quaternions, or (2) using basic linear algebra (which can result in gimbal lock if you aren't careful, but since you are just spinning, that is not a concern for you). Let's go the second route since it's easier. You need to receive the beginning and end points of the swipe via a gesture implementation, and once you have those two points, calculate the line they make. From that line you can easily find the perpendicular vector with high-school math. That perpendicular should now be your axis of rotation in your rotation matrix:
//Rotate around the axis based on the rotation matrix (rotation, x, y, z)
gl.glRotatef(rot, 1.0f, 1.0f, 0.0f);
No need for the Z rotation, since you cannot rotate in the Z plane with a 2D tablet. The 1.0f, 1.0f values should be variables representing the x and y of your perpendicular vector, and rot should be the magnitude of the distance between the two points.
I haven't done this in a while, so let me know if you need more precision.
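If it helps, a rough sketch of the idea (placeholder names such as startX/endX and DEGREES_PER_PIXEL; not tested recently):

float dx = endX - startX;
float dy = endY - startY;
float len = (float) Math.sqrt(dx * dx + dy * dy);
if (len > 0) {
    // The perpendicular to (dx, dy) in screen space is (-dy, dx); screen y grows
    // downward, so you may need to flip a sign to match your GL axes.
    float axisX = -dy / len;
    float axisY = dx / len;
    // The swipe length drives the rotation amount; DEGREES_PER_PIXEL is a tuning constant.
    float rot = len * DEGREES_PER_PIXEL;
    gl.glRotatef(rot, axisX, axisY, 0.0f);
}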

How do I rotate a canvas without disturbing the coordinate system in Android?

I am trying to rotate a canvas with canvas.rotate and move an object on it at the same time. The problem is that with the rotation, the coordinate system of the canvas rotates as well, so I get cases when my object is supposed to be moving along the y axis, but the y axis is rotated on place of the x axis. It is a mess. Is there a way to go around this?
It's using matrix math; if you do things in the opposite order (translate then rotate, or vice versa), you'll get the opposite effect.
Also, use setMatrix(null) to clear the matrix to the identity between operations; not sure if that's the kind of mess you're having trouble with.
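For example, a common pattern is to bracket the rotation with save()/restore() so it only applies to one object (a sketch with placeholder names, not the asker's code):

canvas.save();
canvas.rotate(angle, pivotX, pivotY);            // rotate about the object's pivot
canvas.drawBitmap(objectBitmap, objectX, objectY, paint);
canvas.restore();                                // back to the original coordinate system

// Drawing done after restore() is unaffected, so moving along the y axis here
// really moves along the screen's y axis.
canvas.drawCircle(x, y + dy, radius, paint);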

matrix help: how does postScale affect the translation part of a matrix?

I've been trying to implement a limit to prevent the user from scaling the image too much in my multitouch zoom app. The problem is, when I set the max zoom level by dumping the matrix, the image starts to translate downward once the overall scale of the image hits my limit. I believe it is doing this because the matrix is still being affected by postScale(theScaleFactorX, theScaleFactorY, myMidpointX, myMidpointY), where theScaleFactorX/Y is the amount to multiply the overall scale of the image by (so if theScaleFactorX/Y is recorded as 1.12 and the image is at 0.60 of its original size, the overall zoom is now 0.67). It seems like some sort of math is going on that creates this translation, and I was wondering if anyone knew what it was, so I can prevent the translation and only allow the user to zoom back out.
I'm still unsure how postScale affects translation, but I fixed it with an if statement: as long as we are within the zoom limit I set, postScale as normal; otherwise, divide the zoom limit by the saved global zoom level recorded on ACTION_DOWN and set the scale so it keeps the image at the proper zoom level.
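In code, the fix looks roughly like this (variable names are from memory, not exactly what is in the app):

float newScale = savedScale * theScaleFactorX;    // savedScale recorded on ACTION_DOWN

if (newScale <= zoomLimit) {
    // Still within the limit: scale as normal.
    matrix.postScale(theScaleFactorX, theScaleFactorY, myMidpointX, myMidpointY);
} else {
    // Over the limit: rebuild from the matrix saved on ACTION_DOWN and scale
    // only up to the limit, so no extra translation accumulates past the cap.
    float clamped = zoomLimit / savedScale;
    matrix.set(savedMatrix);
    matrix.postScale(clamped, clamped, myMidpointX, myMidpointY);
}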
