I want to scale a view on the ACTION_MOVE touch event.
All bitmaps are square (point1 = top-left, point2 = top-right, point3 = bottom-left, point4 = bottom-right), and I already have all four points.
When I drag the yellow gripper on top of the bitmap, the red ball resizes accordingly (the gripper can be moved in any direction).
My questions:
--> How should I calculate the distance between the gripper's center point and the bitmap's top-left point (point1), and scale the bitmap accordingly?
That is, if I drag the bitmap from the top-left corner, it should scale/resize via point1, point2 and point3 only (the bottom-right corner, point4, remains at its position).
--> I am using a canvas to draw the bitmap; is that the right way to handle bitmap scaling/rotation?
You can get the x,y coordinates on ACTION_DOWN and the x',y' coordinates on ACTION_UP. With those two points you can compute your measure (the Euclidean distance, for example).
gripper.setOnTouchListener(new OnTouchListener() {
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == android.view.MotionEvent.ACTION_DOWN) {
            // First point: where the drag starts
            x1 = event.getX();
            y1 = event.getY();
        } else if (event.getAction() == android.view.MotionEvent.ACTION_UP) {
            // Second point: where the drag ends
            x2 = event.getX();
            y2 = event.getY();
        }
        return false;
    }
});
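With those two points stored, the Euclidean distance is straightforward; a minimal sketch using the x1/y1/x2/y2 fields captured in the listener above:

    // Straight-line (Euclidean) distance between the ACTION_DOWN and ACTION_UP points
    float dx = x2 - x1;
    float dy = y2 - y1;
    float distance = (float) Math.sqrt(dx * dx + dy * dy);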
You can also get the left/top position of the bitmap with the getLeft() and getTop() methods (note that these return the position relative to the View's parent layout).
There is a Matrix parameter on the createBitmap() method that will allow you to resize your Bitmap, so you can store it at the resolution you want. See how here (it's an excellent tutorial, by the way). You probably already know how to do that, but I see no problem in sharing it anyway =P.
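For reference, a minimal sketch of scaling a Bitmap with a Matrix through Bitmap.createBitmap() (the names original, newWidth and newHeight are just illustrative):

    // Scale a bitmap to new dimensions using a Matrix
    Matrix matrix = new Matrix();
    float scaleX = newWidth / (float) original.getWidth();
    float scaleY = newHeight / (float) original.getHeight();
    matrix.postScale(scaleX, scaleY);
    Bitmap scaled = Bitmap.createBitmap(original, 0, 0,
            original.getWidth(), original.getHeight(), matrix, true);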
Hope it helps
I'm working on face recognition in Android. If I have an ImageView, can I draw a rectangle with a name over it, assuming I know the four coordinates to draw at, and have it call a function when the rectangle or the text is clicked?
P.S.: If that is too complicated, drawing clickable text on an ImageView would be nice too. Thank you.
I believe that if your ImageView is the background of your app, you could use a method like this:
public void drawRect(Canvas canvas) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    // Draw the rectangle around the detected face
    canvas.drawRect(left, top, right, bottom, paint);
    // Draw the name label
    canvas.drawText(name, x, y, paint);
}
I don't know how your face recognition program works, but you could call this method whenever you recognize a face.
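Since drawRect() needs a Canvas to draw on, one way to get one is a small overlay View stacked on top of the ImageView (for example in a FrameLayout). A minimal sketch; the FaceOverlay class, its fields, and showFace() are my own illustration, not from the question:

    public class FaceOverlay extends View {
        private final Paint boxPaint = new Paint();
        private final Paint textPaint = new Paint();
        private Rect faceRect;   // set when a face is recognized
        private String name;     // label to draw above the rectangle

        public FaceOverlay(Context context) {
            super(context);
            boxPaint.setColor(Color.GREEN);
            boxPaint.setStyle(Paint.Style.STROKE);
            boxPaint.setStrokeWidth(4f);
            textPaint.setColor(Color.GREEN);
            textPaint.setTextSize(40f);
        }

        // Call this whenever your recognizer finds a face
        public void showFace(Rect rect, String faceName) {
            faceRect = rect;
            name = faceName;
            invalidate();   // schedules a redraw, which calls onDraw()
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            if (faceRect != null) {
                canvas.drawRect(faceRect, boxPaint);
                canvas.drawText(name, faceRect.left, faceRect.top - 10, textPaint);
            }
        }
    }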
For the "clicking part", you would use a MouseListener by creating a class mouseListener implementing View.onTouchListener or you can simply add the method onTouchEvent(MotionEvent me) into your main_activity.
If you go with the first choice of creating a different class (don't forget to create the object mouseListener to main and set it with setOnTouchListener), you'd have the usual onTouch(View view, MotionEvent me) method and to handle the clicks, you could do something like this:
int x = (int) me.getX();
int y = (int) me.getY();
if (me.getAction() == MotionEvent.ACTION_DOWN) {
    if (x >= lowerXBoundaryOfRectangle && x <= biggerXBoundaryOfRectangle &&
        y >= lowerYBoundaryOfRectangle && y <= biggerYBoundaryOfRectangle) {
        // Basically, just check whether x and y fall inside the rectangle with the known coordinates.
        function(); // Call the function you need here.
    }
}
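Either way, remember to actually register the listener on the view showing the image; a minimal sketch, assuming the ImageView has id image and MyTouchListener is a hypothetical name for your View.OnTouchListener class containing the onTouch() code above:

    // In onCreate(), after setContentView():
    ImageView imageView = (ImageView) findViewById(R.id.image);
    imageView.setOnTouchListener(new MyTouchListener());  // touch events now reach onTouch()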
I am drawing a complicated shape using canvas.drawPath. The result of these drawPath calls is something like a map which is bigger than the screen. I would like the user to be able to do the following: move the map using one finger, or rotate it around the center of the screen using two fingers. Basically it should behave exactly like Google Maps, only without scaling. So far I have been able to get the desired movement and rotation:
private void handleTouch(MotionEvent event) {
    switch (event.getAction() & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_POINTER_DOWN: {
            rotate = true;
            move = false;
            // Angle between the two pointers when the second finger goes down
            a = Math.toDegrees(-Math.atan2(event.getX(0) - event.getX(1), event.getY(0) - event.getY(1)));
            if (a > 360) a = a - 360;
            if (a < -360) a = a + 360;
            da = a - angle;
            if (da > 360) da = da - 360;
            if (da < -360) da = da + 360;
            break;
        }
        case MotionEvent.ACTION_POINTER_UP: {
            rotate = false;
            move = false;
            x = (int) event.getX();
            y = (int) event.getY();
            dx = x - translate.x;
            dy = y - translate.y;
            break;
        }
        case MotionEvent.ACTION_DOWN: {
            move = true;
            x = (int) event.getX();
            y = (int) event.getY();
            dx = x - translate.x;
            dy = y - translate.y;
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            if (rotate && event.getPointerCount() == 2) {
                angle = Math.toDegrees(-Math.atan2(event.getX(0) - event.getX(1), event.getY(0) - event.getY(1))) - da;
                if (angle > 360) angle = angle - 360;
                if (angle < -360) angle = angle + 360;
                vmap.invalidate();
            } else if (move == true) {
                translate.set((int) event.getX() - dx, (int) event.getY() - dy);
                //trans.setTranslate(translate.x,translate.y);
                vmap.invalidate();
            }
            break;
        }
        case MotionEvent.ACTION_UP: {
            //your stuff
            break;
        }
    }
}
The problem is correctly applying them together. I can easily make it rotate around its own center if I first translate and then rotate around the center coordinate of the map. However, if I try to rotate around the center of the screen, things start to get complicated. Let's say I moved the canvas 20 pixels left and then rotated it 30 degrees, then moved 50 pixels down and then rotated it another 50 degrees. If I try to add the two angles together during the second rotation, I will now be rotating around a different coordinate, meaning that as soon as I do the new rotation the map is going to suddenly jump.
I looked into using matrices, but so far that didn't help.
Take a look at the Matrix class. It lets you transform ImageViews in many ways. For this problem, I created a test project that included the members:
ImageView imageView;
Matrix imageMatrix = new Matrix();
PointF screenCenter = new PointF();
Then, in onCreate():
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    DisplayMetrics metrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(metrics);
    screenCenter.x = metrics.widthPixels / 2;
    screenCenter.y = metrics.heightPixels / 2;

    imageView = (ImageView) findViewById(R.id.image);
    imageView.setScaleType(ImageView.ScaleType.MATRIX);
    imageMatrix.set(imageView.getImageMatrix());
}
The screenCenter object now contains the coordinates of the center of the screen, which will be used later. The imageView scale type has been set to MATRIX, and the imageMatrix has been set with the starting matrix that was applied to the view when we changed the scale type.
Now, when you translate you modify the imageMatrix, then apply the new matrix to the view with setImageMatrix():
void translate(float x, float y) {
imageMatrix.postTranslate(x, y);
imageView.setImageMatrix(imageMatrix);
}
Rotation works the same way, but we need the coordinates of the center of the screen:
void rotate(float angle) {
imageMatrix.postRotate(angle, screenCenter.x, screenCenter.y);
imageView.setImageMatrix(imageMatrix);
}
This works on my emulator: translate() moves in the expected direction and rotate() always rotates around the center of the screen, no matter how the image has been translated. Admittedly, it's a test project and it doesn't look very pretty.
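If it helps, here is a rough sketch of how the question's ACTION_MOVE handling could feed these two methods. The lastX/lastY/lastAngle bookkeeping fields are my own illustration (initialize them in ACTION_DOWN and ACTION_POINTER_DOWN respectively), not part of the answer's test project:

    case MotionEvent.ACTION_MOVE:
        if (event.getPointerCount() == 2) {
            // Two fingers: rotate by the change in the angle between them since the last event
            float newAngle = (float) Math.toDegrees(
                    -Math.atan2(event.getX(0) - event.getX(1), event.getY(0) - event.getY(1)));
            rotate(newAngle - lastAngle);
            lastAngle = newAngle;
        } else {
            // One finger: translate by the finger movement since the last event
            translate(event.getX() - lastX, event.getY() - lastY);
        }
        lastX = event.getX();
        lastY = event.getY();
        break;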
OpenGL ES is included in the Android framework, and you might get better graphics performance if you go that route. If you do, then this answer might be helpful.
Hello, I am trying to place an ImageView at the area where the user touches, using a MotionEvent event.
Simply doing imageview.setX(event.getX()) and imageview.setY(event.getY()) is not the complete solution.
I realize these are pixel values, so I tried converting the event values to density-independent values using
(int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, event.getX(), getResources().getDisplayMetrics());
but this still does not give me coordinates that match where I touch when I try to show the ImageView at that location.
Also, I suspect that the ImageView draws its upper-left corner at these coordinates, when I want the coordinates to be at the center of the ImageView.
Insight appreciated.
You will most likely want to do something like the following:
If you are trying to find the touch-event points on a scaled image:
public static float[] getPointerCoords(ImageView view, MotionEvent e) {
    final int index = e.getActionIndex();
    final float[] coords = new float[] { e.getX(index), e.getY(index) };
    // Invert the image matrix so the screen coordinates map back to image coordinates
    Matrix matrix = new Matrix();
    view.getImageMatrix().invert(matrix);
    matrix.postTranslate(view.getScrollX(), view.getScrollY());
    matrix.mapPoints(coords);
    return coords;
}

public boolean onTouch(View v, MotionEvent event) {
    float[] returnedXY = getPointerCoords((ImageView) v, event);
    // Offset by half the view's size so the ImageView is centered on the touch point
    imageView.setLeft((int) (returnedXY[0] - (imageView.getWidth() / 2)));
    imageView.setTop((int) (returnedXY[1] - (imageView.getHeight() / 2)));
    return true;
}
If not, just use the event's getX() and getY(). You may need to use the event's getRawX() and getRawY() instead.
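On the centering point from the question, a minimal sketch (my own illustration, not from the answer) that offsets by half the view's size; note that getRawX()/getRawY() are window coordinates, so any offset of the parent layout within the window may still need to be subtracted:

    // Center the ImageView on the touch point instead of its top-left corner
    imageView.setX(event.getRawX() - imageView.getWidth() / 2f);
    imageView.setY(event.getRawY() - imageView.getHeight() / 2f);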
I am trying to gain some more familiarity with the Android SurfaceView class, and in doing so am attempting to create a simple application that allows a user to move a Bitmap around the screen. The troublesome part of this implementation is that I am also including the functionality that the user may drag the image again after it has been placed. In order to do this, I am mapping the bitmap to a simple set of coordinates that define the Bitmap's current location. The region I am mapping the image to, however, does not match up with the image.
The Problem
After placing an image on the SurfaceView using canvas.drawBitmap(), and recording the coordinates of the placed image, the mapping system that I have set up misinterprets the Bitmap's coordinates somehow and does not display correctly. As you can see in this image, I have simply used canvas.drawLine() to draw lines representing the space of my touch region, and the image is always off and to the right:
The Code
Here, I shall provide the relevant code excerpts to help answer my question.
CustomSurface.java
This method encapsulates the drawing of the objects onto the canvas. The comments clarify each element:
public void onDraw(Canvas c){
//Simple black paint
Paint paint = new Paint();
//Draw a white background
c.drawColor(Color.WHITE);
//Draw the bitmap at the coordinates
c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
//Draws the actual surface that is receiving touch input
c.drawLine(g.left, g.top, g.right, g.top, paint);
c.drawLine(g.right, g.top, g.right, g.bottom, paint);
c.drawLine(g.right, g.bottom, g.left, g.bottom, paint);
c.drawLine(g.left, g.bottom, g.left, g.top, paint);
}
This method encapsulates how I capture touch events:
public boolean onTouchEvent(MotionEvent e) {
    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN: {
            if (g.contains((int) e.getX(), (int) e.getY()))
                item_selected = true;
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            if (item_selected)
                g.move((int) e.getX(), (int) e.getY());
            break;
        }
        case MotionEvent.ACTION_UP: {
            item_selected = false;
            break;
        }
        default: {
            // Do nothing
            break;
        }
    }
    return true;
}
Graphic.java
This method is used to construct the Graphic:
//Initializes the graphic assuming the coordinate is in the upper left corner
public Graphic(Bitmap image, int start_x, int start_y){
resource = image;
left = start_x;
top = start_y;
right = start_x + image.getWidth();
bottom = start_y + image.getHeight();
}
This method detects if a user is clicking inside the image:
public boolean contains(int x, int y) {
    if (x >= left && x <= right) {
        if (y >= top && y <= bottom) {
            return true;
        }
    }
    return false;
}
This method is used to move the graphic:
public void move(int x, int y){
left = x;
top = y;
right = x + resource.getWidth();
bottom = y + resource.getHeight();
}
I also have 2 methods that determine the center of the region (used for redrawing):
public int getCenterX(){
return (right - left) / 2 + left;
}
public int getCenterY(){
return (bottom - top) / 2 + top;
}
Any help would be greatly appreciated; I feel as though many other StackOverflow users could really benefit from a solution to this issue.
There's a very nice and thorough explanation of touch/multitouch/gestures on the Android Developers blog, which includes a free and open-source code example at Google Code.
Please take a look. If you don't need gestures, just skip that part and read about touch events only.
This issue ended up being much simpler than I had thought, and after some tweaking I realized that this was an issue of image width compensation.
This line in the above code is where the error stems from:
c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
As you can tell, I manipulated the coordinates from within the Graphic class to produce the center of the bitmap, and then called canvas.drawBitmap() assuming that it would draw from the center outward.
Obviously, this would not work because the canvas always draws from the top left of an image downwards and to the right, so the solution was simple.
The Solution
Create the touch region with regard to the touch location, but draw the bitmap offset by half the image width and half the image height from the center location in the x and y directions. I basically changed the architecture of the Graphic class to implement getDrawX() and getDrawY() methods that return the modified x and y coordinates of where the bitmap should be drawn, so that the center_x and center_y values (determined in the constructor) actually appear at the center of the region.
It all comes down to the fact that in an attempt to compensate for the way the canvas draws bitmaps, I unfortunately incorporated some bad behaviors and in the end had to handle the offset in a completely different way.
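For illustration, a minimal sketch of what those accessors might look like on the Graphic class shown earlier (this is my reading of the fix, not the asker's exact code):

    // x coordinate to pass to canvas.drawBitmap() so the bitmap's visual center
    // coincides with the center of the touch region
    public int getDrawX() {
        return getCenterX() - resource.getWidth() / 2;
    }

    public int getDrawY() {
        return getCenterY() - resource.getHeight() / 2;
    }

onDraw() would then call c.drawBitmap(g.getResource(), g.getDrawX(), g.getDrawY(), null) instead of passing the center coordinates directly.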
I've drawn 5 bitmaps from .png files on a canvas - a head, a body and two arms and legs.
How can I detect which of these has been touched in onTouch? And, more specifically, can I detect whether the touch was within the actual shape of the body part touched?
What I mean is, obviously, the .pngs themselves are rectangular, but does Android know, or can I tell it, to ignore the transparency within the image?
You could get the colour of the touched pixel and compare it to the colour of the background pixel at those coordinates.
EDIT: OK, ignore that, you can't get the colour of a pixel on the canvas. Instead, get the x,y of the touch and check whether any of the body-part images has been touched; if so, subtract the image's x,y from the touch x,y, then get that pixel of the image, which will be either transparent or coloured.
public boolean onTouchEvent(MotionEvent event) {
    int x = (int) event.getX();
    int y = (int) event.getY();
    int offsetx, offsety;
    for (int i = 0; i < NUM_OF_BODY_PARTS; i++) {
        if (bodyPartRect[i].intersects(x, y, x + 1, y + 1)) {
            // Convert the touch point into the bitmap's own coordinates
            offsetx = x - bodyPartRect[i].left;
            offsety = y - bodyPartRect[i].top;
            if (bodyPartBMP[i].getPixel(offsetx, offsety) == Color.TRANSPARENT) {
                //whatever
            }
        }
    }
    return true;
}
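A slightly more robust variant of that pixel test is to look only at the alpha channel, since a "transparent" pixel can still carry colour information. A minimal sketch that slots into the same loop, using the same bodyPartBMP array and offsets:

    // Treat the touch as a hit only if the pixel under it is not fully transparent
    int pixel = bodyPartBMP[i].getPixel(offsetx, offsety);
    if (Color.alpha(pixel) != 0) {
        // The visible part of body part i was touched
    }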