I've found that the coordinate system of the Canvas is different from that of the screen.
For example in my case as below:
For one particular point, its on-screen coordinate obtained from ImageView.getX() and ImageView.getY() is (336, 578).
Then, by trial and error, I drew a dot on the Canvas so that the dot falls at EXACTLY the same position as the ImageView. I called canvas.drawCircle(330, 440, radius, paint); to achieve this.
Here comes the question:
Why would the two coordinates, (336, 578) and (330, 440), be different?
Is it because the screen and the canvas use different units?
Is it an issue regarding pixel, dp and all that?
You could try converting to and from window coordinates using View.getLocationInWindow; this way you'll always be sure you are using the same coordinate system.
The coordinates from ImageView.getX() and getY() are relative to the parent. Canvas coordinates are relative to the view. This could cause the difference.
Edit:
I assume in one place you get the coordinates of the view (using getX and getY). Convert these to window coordinates. Call these viewXY[].
When you need to use them to draw in your canvas, also convert the canvas view's top and left coordinates to window coordinates. Call these canvasXY[].
Now, the X position to draw at would be viewXY[0] - canvasXY[0], and the Y position would be viewXY[1] - canvasXY[1].
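A minimal sketch of the subtraction above. The viewXY/canvasXY names follow the answer; the concrete canvasXY values are a hypothetical example chosen to match the numbers in the question, and in a real app both arrays would be filled by View.getLocationInWindow:

```java
public class WindowCoords {
    // Given the window coordinates of the target view (viewXY) and of the
    // view hosting the canvas (canvasXY), compute where to draw on the canvas.
    static int[] toCanvas(int[] viewXY, int[] canvasXY) {
        return new int[] { viewXY[0] - canvasXY[0], viewXY[1] - canvasXY[1] };
    }

    public static void main(String[] args) {
        // On Android you would fill these with:
        //   imageView.getLocationInWindow(viewXY);
        //   canvasHostView.getLocationInWindow(canvasXY);
        int[] viewXY = { 336, 578 }; // window position of the ImageView
        int[] canvasXY = { 6, 138 }; // hypothetical window position of the canvas view
        int[] draw = toCanvas(viewXY, canvasXY);
        System.out.println(draw[0] + "," + draw[1]); // 330,440
    }
}
```

With those (made-up) window coordinates for the canvas view, the subtraction reproduces exactly the (330, 440) that the asker found by trial and error.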
A view's position coordinates are always relative to its parent. It's like saying: where, with respect to its parent, is this view placed?
There is another thing: screen coordinates. These say where, with respect to the window's top/left (think of the actual physical screen), the object is placed. They can be obtained through View.getLocationOnScreen.
If you want to compare two locations, they will only be equivalent if both locations share the same parent view, or if you're using absolute screen coordinates. Otherwise, you'll get results that only "seem" different.
If you want to unify the two, either take them to a common parent or use absolute coordinates.
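The "common ancestor" idea can be sketched as walking up the view tree and summing each view's relative offset. This uses a toy Node class rather than the real Android View API, and the concrete offsets are hypothetical numbers chosen to echo the question above:

```java
public class AbsolutePosition {
    static class Node {
        Node parent;
        int x, y; // position relative to the parent, like View.getX()/getY()
        Node(Node parent, int x, int y) { this.parent = parent; this.x = x; this.y = y; }
    }

    // Sum offsets up the tree to get coordinates relative to the root ("screen").
    static int[] toAbsolute(Node n) {
        int ax = 0, ay = 0;
        for (Node cur = n; cur != null; cur = cur.parent) {
            ax += cur.x;
            ay += cur.y;
        }
        return new int[] { ax, ay };
    }

    public static void main(String[] args) {
        Node layout = new Node(null, 6, 138);        // hypothetical container offset
        Node imageView = new Node(layout, 330, 440); // position relative to the container
        int[] abs = toAbsolute(imageView);
        System.out.println(abs[0] + "," + abs[1]);   // 336,578
    }
}
```

Two positions expressed this way (relative to the same root) are directly comparable, which is exactly what getLocationOnScreen gives you on a real View.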
I am trying to implement a hand-painted drawing app on Android (like Infinite Design),
and I decided to use vectors because they can scale without distortion.
I thought about it a lot and decided to use a viewport-horizon-world model:
Viewport: sized to the phone screen (e.g. 1080x1920); it is the part you see and touch.
Horizon: the region of the world that is currently displayed in the viewport.
World: the real coordinates of the points which make up the paths (lines, Bezier curves, etc.).
This model works like this:
First, you touch the screen of the phone, and the horizon translates the touch point into real world coordinates (accounting for any current pan or scale) and saves the value into the world.
Second, you can pan and zoom by gestures, which changes the attributes of the horizon. E.g. if you move left 100 and down 100, the horizon knows its offset is now (100, 100), and its bounds change: ((0,0),(1080,1920)) -> ((100,100),(1180,2020)).
Last, when drawing, I find the paths that fall inside the horizon (by comparing the bounds of the horizon with the bounds of each path), then calculate the display coordinates relative to the horizon and draw the paths with canvas.drawPath() etc.
Now the problem: when I only offset the horizon, calculating the display coordinates just means adding the offset values. But when scaling, it becomes difficult. For example, take a path with bounds ((0,0),(100,100)) and scale the horizon by 0.5 about the point (500,500): I don't know the new position of the bounds, and I don't know how to calculate the anchor of the new path, or its size and stroke width (maybe just multiply them by the scale factor?).
The function I want to implement is like the viewport in SVG.
I think it should use coordinate mapping, but how?
Please give me some clue.
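One way to set up the coordinate mapping described above, as a sketch under my own naming (Horizon here stores an offset in world units plus a scale factor; scaleAbout keeps the given viewport point fixed while zooming, which is the "scale 0.5 at point (500,500)" case from the question):

```java
public class Horizon {
    double offsetX = 0, offsetY = 0; // world coordinate shown at the viewport's top-left
    double scale = 1.0;

    // world -> viewport
    double[] worldToView(double wx, double wy) {
        return new double[] { (wx - offsetX) * scale, (wy - offsetY) * scale };
    }

    // viewport -> world (used when saving a touch point into the world)
    double[] viewToWorld(double vx, double vy) {
        return new double[] { vx / scale + offsetX, vy / scale + offsetY };
    }

    // Zoom by 'factor' while keeping the viewport point (px, py) fixed on screen.
    void scaleAbout(double px, double py, double factor) {
        double worldPx = px / scale + offsetX; // world point currently under the pivot
        double worldPy = py / scale + offsetY;
        scale *= factor;
        offsetX = worldPx - px / scale; // re-derive the offset so the pivot stays put
        offsetY = worldPy - py / scale;
    }

    public static void main(String[] args) {
        Horizon h = new Horizon();
        h.scaleAbout(500, 500, 0.5); // the question's example
        double[] a = h.worldToView(0, 0);     // path bound corner (0,0)
        double[] b = h.worldToView(100, 100); // path bound corner (100,100)
        System.out.println(a[0] + "," + a[1] + " -> " + b[0] + "," + b[1]);
        // The path's bounds ((0,0),(100,100)) map to ((250,250),(300,300)).
    }
}
```

With this mapping, a path's displayed bounds are just worldToView of its world bounds, and its displayed stroke width is the world stroke width multiplied by scale.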
I have a 600x1000 pixel screen and my player starts near the bottom left. His x and y coordinates are (50, 900). I'm drawing this player to a canvas and using
canvas.translate(-player.getX()+GAMEWIDTH/2, 0)
to make sure the player stays in the middle of the screen with respect to the x-axis. This results in my player being rendered at (300, 900); however, the player's x and y coordinates remain (50, 900). This presents an issue, because I need the player's coordinates to match his rendered location: I use touch events on the player's collision rectangles to move him around. Is there any way to make my screen's coordinates relative to my canvas coordinates, so that my player's coordinates correspond to where he actually gets rendered? Or is there maybe another way to do it?
Touch events are always reported in x and y coordinates relative to the view itself. Since you center the character by translating the view with canvas.translate(-player.getX() + GAMEWIDTH/2, 0);, you need to apply that same translation to the touch event. You can do it in the touch handler itself, or you could store both a relative and an absolute position for the items in your game world. This becomes more important if items get nested in one another; it is typically done by storing the parent element of a sprite/object.
//example of how I usually handle object hierarchy
this._x = this.x + this.parent._x;
this._y = this.y + this.parent._y;
The canvas/stage would also store its center as a ._x and ._y, and it would be the parent of the objects added to it. This way, when you generate the global position to test your touch events against, instead of passing in .x or .y you pass in ._x and ._y, which are already translated by GAMEWIDTH/2.
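The "undo the translation in the touch handler" approach can be sketched like this. GAMEWIDTH and the player numbers come from the question; toWorldX is a hypothetical helper name:

```java
public class TouchMapping {
    static final int GAMEWIDTH = 600;

    // The canvas is drawn with canvas.translate(-playerX + GAMEWIDTH / 2, 0),
    // so a touch at screen x maps back to world x by subtracting that translation.
    static float toWorldX(float touchX, float playerX) {
        return touchX - (-playerX + GAMEWIDTH / 2f);
    }

    public static void main(String[] args) {
        // The player at world x = 50 is rendered at screen x = 300;
        // touching that screen position maps back onto the player's world x.
        System.out.println(toWorldX(300f, 50f)); // 50.0
    }
}
```

On Android you would call this from onTouchEvent with event.getX() before testing the touch against the player's collision rectangle, so the rectangle can stay in world coordinates.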
I need to apply click/touch events to only the visible part of a View. Say, for example, an image of size 200x200. Apart from the center 50x50, the remaining part is transparent. I want to get touch events only for that visible 50x50 part, not for the remaining transparent part.
In the above image (it's a single image), only the inner diamond is the visible part; everything outside the diamond is transparent. So only if I touch the diamond do I want to do something; otherwise, ignore the touch.
Edit :
Rachita's link helped me. I went through it and got an idea of how I can implement this. But I could not understand some constants, like 320 and 240, used while creating the Points. In my case, I know the diamond's x and y points (hard-coded values, actually). So, using those, how can I determine whether I touched inside or outside the diamond?
My diamond points are as below:
pointA = new Point(0, 183);
pointB = new Point(183, 0);
pointC = new Point(366, 183);
pointD = new Point(183, 366);
Edit :
Finally got a solution from Luksprog. It's based on checking the color of the touched pixel. If the color is 0, you touched the transparent layer; otherwise you touched some colored part of the image. Simple, but very effective. Have a look at it here.
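The pixel test boils down to checking the alpha channel of an ARGB int. A minimal sketch; the bitmap/touch wiring in the comments is the Android part, while isTransparent itself is plain bit arithmetic:

```java
public class AlphaHitTest {
    // An ARGB pixel is fully transparent when its alpha byte (top 8 bits) is 0.
    static boolean isTransparent(int argbPixel) {
        return (argbPixel >>> 24) == 0;
    }

    public static void main(String[] args) {
        // In an OnTouchListener you would obtain the pixel with:
        //   int pixel = bitmap.getPixel((int) event.getX(), (int) event.getY());
        // and ignore the touch when isTransparent(pixel) is true.
        System.out.println(isTransparent(0x00000000)); // true  - transparent area
        System.out.println(isTransparent(0xFF00FF00)); // false - opaque green
    }
}
```

Checking only the alpha byte (rather than the whole color being 0) also handles visible black pixels, whose ARGB value is 0xFF000000.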
AFAIK you cannot implement this with an OnClickListener or any other direct way. You will have to use an OnTouchListener.
1. First, set your view dynamically at a specific (x, y) position using this: How can I dynamically set the position of view in Android?
2. Calculate the region your diamond will occupy (you should know the size of the image in order to calculate the area of the diamond).
3. Trigger an action in the OnTouchListener only when x, y fall in the required region. Use How to get the Touch position in android?
Check this link to calculate whether a given point lies in the required square.
EDIT
To understand the coordinate system of android refer to this link How do android screen coordinates work?
Display mdisp = getWindowManager().getDefaultDisplay();
int maxX= mdisp.getWidth();
int maxY= mdisp.getHeight();
(x,y) :-
1) (0,0) is top left corner.
2) (maxX,0) is top right corner
3) (0,maxY) is bottom left corner
4) (maxX,maxY) is bottom right corner
Here, maxX and maxY are the screen's maximum width and height in pixels, which we retrieved in the code given above (note that Display.getWidth()/getHeight() are deprecated on newer APIs in favour of Display.getSize(Point)).
Remember, if you want to support multiple devices with different screen sizes, make sure you use relative values for x and y, i.e. some ratio of the screen width or height, as different devices have different ppi.
Check if touched point lies in the required polygon
I think these links might help you determine whether the touched point (you can get x, y from the onTouch event, e.g. event.getX()) lies in the required polygon whose points you have mentioned in the question: determine if a given point is inside the polygon and How can I determine whether a 2D Point is within a Polygon?
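For the diamond in the question, the standard ray-casting point-in-polygon test is enough. A sketch using the four points from the question (not taken verbatim from any of the linked answers):

```java
public class DiamondHit {
    // Ray-casting test: count how many polygon edges a horizontal ray
    // from (x, y) crosses; an odd count means the point is inside.
    static boolean contains(int[] xs, int[] ys, float x, float y) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            if ((ys[i] > y) != (ys[j] > y)
                    && x < (float) (xs[j] - xs[i]) * (y - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // Diamond points A, B, C, D from the question, in order around the shape.
        int[] xs = { 0, 183, 366, 183 };
        int[] ys = { 183, 0, 183, 366 };
        System.out.println(contains(xs, ys, 183f, 100f)); // true  - inside the diamond
        System.out.println(contains(xs, ys, 10f, 10f));   // false - transparent corner
    }
}
```

In an OnTouchListener you would pass event.getX() and event.getY() (adjusted to the image's coordinates) as the test point.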
"Hit rectangle in parent's coordinates". But what does that mean?
To amplify: what I really want to know is the meaning of the phrase "hit rectangle". What is it for? How does the framework process it? When in the lifecycle is the return value meaningful? How might it differ from the rectangle defined by getLeft(), getTop(), getRight(), getBottom()?
Based on the name of the function I can of course guess at an answer, and try a few examples, but that's not satisfactory. I can find nothing useful about this function on the Android Developer website, or anywhere else I've looked.
Here appears to be the most complete explanation.
The getHitRect() method gets the child's hit rectangle (touchable
area) in the parent's coordinates.
The example snippet uses the function to determine the current touchable area of a child view (after layout) in order to effectively extend it by creating a TouchDelegate
They should certainly do a better job of documenting it. If you look at the source of View#getHitRect(Rect), you'll see that if the view has an identity matrix, or is not attached to a window, it returns exactly what we're thinking: the left/top/right/bottom rectangle. The alternate branch means the view has a transform; therefore, to get the hit rect in the parent's coordinates (the smallest possible rect in the parent's coordinate system that covers the view), you have to move the rect to the origin, run the transform, and then add back its original position.
So you can use it as a shortcut for this purpose if there's no transform. If there's a transform, remember that you'll be getting values in the rect that may be outside or inside the view as currently displayed.
The Rect that is returned contains 4 values:
bottom
left
right
top
bottom specifies the y coordinate of the bottom of the rectangle. left specifies the x coordinate of the left side of the rectangle. etc.
The parent coordinate means that the hit rectangle values are specified in the parent's coordinate system.
Imagine yourself standing in an open field at night, looking up at the moon. Your position on the earth can be expressed in many ways.
If we expressed your position in your local coordinate system, you would be located at (latitude/longitude) (0, 0). As you walk around, your local coordinate system never changes, you are always centered at (0,0) in your local coordinate system.
However, if we expressed your location using the earth's coordinate system, you might be at (latitude, longitude) (16, 135).
You are standing on earth, so earth is your parent coordinate system.
In the same way, a View may be contained in a LinearLayout. The LinearLayout would be the View's parent and so the values from getHitRect() would be expressed in the coordinate system of the LinearLayout.
EDIT
In generic terms hit rectangle is a term used to define a rectangular area used for collision detection. In terms of Android, a hit rectangle is just an instance of type Rect.
The methods getLeft() etc are just accessors for the data in the Rect, so the members of a Rect define the same rectangle that you would get by calling the methods.
A common usage scenario for a Rect would be handling tap events:
//Imagine you have some Rect named myRect
//And some tap event named event
//Check if the tap event was inside the rectangle
if (myRect.contains((int) event.getX(), (int) event.getY())) {
//it's a hit!
}
You might also want to see whether two rectangles intersect each other (note that Rect.contains(Rect) tests full containment; for overlap, use the static Rect.intersects):
if (Rect.intersects(myRect, myRect2)) {
//collision!
}
For a View, the hit rectangle isn't really used directly, as you can see in the source. Top, bottom, left and right are used a ton inside View, but getHitRect() is really more of a convenience that passes those parameters (top/bottom/left/right) back in a tidy package to anyone who needs them.
Assume I have an Android phone with max x = 400 and max y = 300.
I have a 5000 by 5000 px bitmap, and I display a 400 by 300 part of it on screen. I scroll it around using touch events. I want to draw a smaller bitmap (30 by 30 px) onto that larger bitmap at location (460, 370). But I can't do it statically (as my game requires): I want to draw and undraw that smaller image according to the player's input or collision detections.
What I am confused about is this: suppose currently (50, 50) to (450, 350) of the larger image is displayed on the screen, and now, due to a certain requirement of the game, I have to draw that 30 by 30 bitmap at (460, 370). This point exists in two systems: in the big bitmap's coordinate system it's (460, 370), but when it comes inside the visible area of the screen it will have values somewhere between (0, 0) and (400, 300), depending on its position on the screen as per the player's movements.
So how do I track that image (I mean, using which coordinate system), and how do I go about doing all this?
I know I've put it here in a very confusing way, but that's the best way I can put it, and I'll be really glad and grateful if somebody could help me with this.
Terminology:
Let's call the coordinates on the big map world coordinates
Let's call the coordinates on the screen view coordinates. You are talking to the drawing system (canvas?) in view coordinates.
Your currently displayed region of the bigger map is 100% defined by an offset, let's call this (offsetX, offsetY). Formally, this offset maps your world coordinates onto your view coordinates.
If the small image sits at position (worldX, worldY) in world coordinates, all you need to do is subtract your offset to get to view coordinates, i.e.
viewX = worldX - offsetX;
viewY = worldY - offsetY;
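The formulas above as a runnable sketch, using the question's numbers (the names and the onScreen helper are mine; the offset is (50, 50) because the region (50,50)-(450,350) is what's currently displayed):

```java
public class WorldView {
    static final int VIEW_W = 400, VIEW_H = 300;

    // Map world coordinates to view coordinates by subtracting the scroll offset.
    static int[] worldToView(int worldX, int worldY, int offsetX, int offsetY) {
        return new int[] { worldX - offsetX, worldY - offsetY };
    }

    // Only draw the small bitmap when it actually overlaps the viewport.
    static boolean onScreen(int viewX, int viewY, int w, int h) {
        return viewX + w > 0 && viewY + h > 0 && viewX < VIEW_W && viewY < VIEW_H;
    }

    public static void main(String[] args) {
        // The 30x30 bitmap lives at world (460, 370); the view shows from (50, 50).
        int[] v = worldToView(460, 370, 50, 50);
        System.out.println(v[0] + "," + v[1]);            // 410,320
        System.out.println(onScreen(v[0], v[1], 30, 30)); // false - just off screen
    }
}
```

So the small image is tracked only in world coordinates; the view coordinates are recomputed from the current offset each frame, and the bitmap is simply skipped when it falls outside the viewport.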