I can easily use the following code to scroll the view to the left using
TouchUtils.dragViewToX(this, myView, Gravity.LEFT, -1000);
It will even just sit there for a second if it can't scroll anymore, like it's still trying to, which is the expected behavior, so the contents of the view shouldn't be the issue.
But if I do the opposite, it acts as though it's not even there.
TouchUtils.dragViewToX(this, myView, Gravity.LEFT, 1000);
It doesn't even pause for a second to simulate the dragging like the other one does, no matter what. It even returns the proper pixel value for the distance covered! Why does it only respond to this function when it's supplied with a negative value? Why won't it drag in the opposite direction?
It's not even a positive/negative issue: it will scroll to the left if I supply a positive value and a different Gravity (like RIGHT or END), but no matter what, it won't scroll to the right.
First of all, dragViewToX will try to drag the view you specify to the X-coordinate you specify.
The Gravity field specifies which part of the view is to be dragged.
So I'd suggest you use the rect values rather than hard-coded coordinates, so that the test works independently of the device specification. Here's some sample code:
View view = sListView.findViewById(R.id.foo);
Rect rect = new Rect();
view.getHitRect(rect);
TouchUtils.dragViewToX(this, view, Gravity.CENTER, rect.left); // To drag left. Make sure the view is to the right of rect.left
TouchUtils.dragViewToX(this, view, Gravity.CENTER, rect.right); // To drag right. Make sure the view is to the left of rect.right
The only thing wrong with your code is that the value 1000 is already off the screen, so there is nowhere to drag to :)
I come from an ActionScript/Flash/AIR background and I'm new to the Android View hierarchy. Given that, I found something very odd about ViewGroup and its coordinate system.
Let's say we have the following parent/child relationship inside the main Activity View which extends RelativeView (or basically some group layout that allows absolute coordinates and scale):
circle1 = new ImageView(context);
circle1.setImageBitmap(...);
circle2 = new ImageView(context);
circle2.setImageBitmap(...);
cont = new RelativeView(context);
cont.addView(circle1);
cont.addView(circle2);
this.addView(cont);
The idea is to create a container which is centered on the main view and to center two circles inside of the container. This allows the container to be animated, grouping the two circles.
cont.setX(getWidth() / 2);
cont.setY(getHeight() / 2);
circle1.setX(-circle1.getWidth() / 2);
circle1.setY(-circle1.getHeight() / 2);
circle2.setX(-circle2.getWidth() / 2);
circle2.setY(-circle2.getHeight() / 2);
Using negative coordinates inside a ViewGroup, I immediately noticed that 'cont' clips them at the [0, 0] coordinate, so cont.setClipChildren(false); has to be called. I'm guessing this is a bad idea, because it looks like an optimization for the invalidate() area. After disabling the clipping, the result renders as expected, but there is another problem:
illustration
Adding a touch event listener to circle2 returns a bogus touch rectangle (marked in purple) instead of the expected one (marked in cyan), which should start at the negative [X, Y] offset of circle2. The resulting rectangle starts at [0, 0] of 'cont' and ends at [circle2X + circle2W, circle2Y + circle2H], as if it's clipped at [0, 0] of 'cont'.
I know that you can solve the issue by not using negative coordinates, but that's not really a good solution if you are porting from ActionScript where negative coordinates make perfect sense (as in any real world coordinate system) and touch rectangles are calculated correctly in the DisplayObjectContainer class.
So what other solutions are there?
Should a custom ViewGroup class be created and what has to be done there?
Is there a magical setting which can allow touch rectangles of children Views not to be clipped at [0, 0] of the parent ViewGroup?
Something else?
Thanks.
You can use View#scrollTo to move the origin so that your content is shown entirely within the View's bounding rect.
Many things in the Android UI toolkit rely on the measured and laid out bounds of Views being accurate. In general you don't want to go against the grain here.
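One way to read that suggestion, reusing the names from the question (cont, circle1, circle2 as above; this sketch assumes both circles are the same size): keep the children at non-negative coordinates and move the origin with scrollTo rather than giving the children negative positions.

```java
// Lay the children out at (0, 0) so nothing sits outside cont's bounds:
circle1.setX(0);
circle1.setY(0);
circle2.setX(0);
circle2.setY(0);
// scrollTo(x, y) draws the content offset by (-x, -y). Scrolling to
// (circleW / 2, circleH / 2) therefore draws the circles centered on
// cont's origin, just like the negative coordinates did, while the views'
// laid-out bounds (and hence their touch rectangles) stay positive.
cont.scrollTo(circle1.getWidth() / 2, circle1.getHeight() / 2);
```

Since touch dispatch takes the parent's scroll offset into account, the touch rectangles should then line up with what's drawn.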
I'm trying to move a view to the upper right corner of the screen using ObjectAnimator.ofFloat(...), but I'm not getting the results I expect. I'm getting the coordinates of the view beforehand, using a ViewTreeObserver listener, etc., and I already know the x value I need to offset from the end of the overall width. I can't get either dimension to move to where I want it. Relevant code:
Getting the starting coordinate; where the view currently is:
int[] userCoords = new int[]{0,0};
userControlLayout.getLocationInWindow(userCoords);
//also tried getLocationInScreen(userCoords); same result
userUpLeft = userCoords[0];
userUpTop = userCoords[1];
Surprisingly, I get the same value as userUpLeft (which is in screen coordinates, not relative to the parent) when I call userControlLayout.getLeft(). I'd expect them to be different, per my understanding of the docs. Anyway...
Constructing the ObjectAnimators:
// testXTranslate is a magic number of 390 that works, arrived at by trial.
// No idea why that value puts the view where I want it; I can't find any
// correlation between the dimensions and the measurements I've got.
ObjectAnimator translateX = ObjectAnimator.ofFloat(userControlLayout, "translationX",
        testXTranslate);

// Again, a magic number of -410f puts me at the Y I want, but no idea why.
// Currently trying the two-argument call, which I understand is from...to.
// If userUpTop was derived from screen coordinates, isn't it logical to assume
// that -userUpTop would move the view to Y 0, which is what I want? Doesn't happen.
ObjectAnimator translateY = ObjectAnimator.ofFloat(userControlLayout, "translationY",
        userUpTop, -userUpTop);
My understanding is that the one-arg call is equivalent to specifying the end coordinate you want to translate to, and the two-arg version is starting at...ending at, or from...to. I've messed around with both and can't get there.
Clearly, I'm missing very fundamental knowledge, just trying to figure out what exactly that is. Any guidance much appreciated. Thanks.
First, userControlLayout.getLeft() is relative to the parent view. If this parent is aligned to the left edge of the screen, those values will match. For getTop() it's generally different simply because getLocationInWindow() returns absolute coordinates, which means that y = 0 is the very top left of the window -- i.e. behind the action bar.
Generally you want to translate the control relative to its parent (since it won't even be drawn if it moves outside those bounds). So supposing you want to position the control at (targetX, targetY), you should use:
int deltaX = targetX - button.getLeft();
int deltaY = targetY - button.getTop();
ObjectAnimator translateX = ObjectAnimator.ofFloat(button, "translationX", deltaX);
ObjectAnimator translateY = ObjectAnimator.ofFloat(button, "translationY", deltaY);
When you supply multiple values to an ObjectAnimator, you're indicating intermediate values in the animation. So in your case userUpTop, -userUpTop would cause the translation to go down first and then up. Remember that translation (as well as rotation and all the other transformations) is always relative to the original position.
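To make that concrete, a hedged sketch (view stands for any laid-out View; the pixel values are invented): each extra value passed to ofFloat becomes an intermediate keyframe, and every value is an offset from the view's original laid-out position.

```java
// Animates translationY through keyframes 0 -> 100 -> -50: the view starts
// at its laid-out position, dips 100px below it, then ends 50px above it.
// None of these values are absolute screen or window coordinates.
ObjectAnimator dip = ObjectAnimator.ofFloat(view, "translationY", 0f, 100f, -50f);
dip.setDuration(600);
dip.start();
```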
I have a custom ImageView, and in onDraw I have added some bitmap overlays. I can capture the click event for any overlay using an action-up event in the onTouchEvent callback. The thing is, to display a popup window I must supply an anchor view, but I am using the bitmaps as anchors, not a particular view, so I am stuck. I cannot find any solution on the web regarding this issue. Passing the parent ImageView and adding the overlay x/y offsets (with Gravity.NO_GRAVITY) does not produce the desired result (also, the window arrow will not point correctly).
Use PopupWindow.showAtLocation
The problem is that the x/y params to that function are offsets relative to the Window to which the anchor view belongs. You can see it clearly from the function's source, where nothing from 'parent' is used except its window token.
public void showAtLocation(View parent, int gravity, int x, int y) {
    showAtLocation(parent.getWindowToken(), gravity, x, y);
}
So to get your offsets right, you have to determine the anchor's position relative to its window.
For this, use
Rect rc = new Rect();
view.getWindowVisibleDisplayFrame(rc);
int[] xy = new int[2];
view.getLocationInWindow(xy);
rc.offset(xy[0], xy[1]);
now you have
int x = rc.left, y = rc.top;
This is the point of your ImageView's top-left corner relative to its window.
Finally show your PopupWindow at location relative to this point, and it should display correctly on top of your ImageView.
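Putting those steps together for the original bitmap-overlay case, a sketch (imageView, popupWindow, and the overlayX/overlayY offsets of the tapped bitmap inside the ImageView are assumptions standing in for the asker's own fields):

```java
// Top-left of the ImageView relative to its window:
int[] xy = new int[2];
imageView.getLocationInWindow(xy);

// Add the tapped overlay's offset inside the ImageView (tracked in your
// own onDraw/onTouchEvent bookkeeping) to get window coordinates:
int popupX = xy[0] + overlayX;
int popupY = xy[1] + overlayY;

// NO_GRAVITY means x/y are taken as-is, relative to the window:
popupWindow.showAtLocation(imageView, Gravity.NO_GRAVITY, popupX, popupY);
```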
It sounds like you're referring to the kind of view below. I would suggest you follow the link below and implement the same in your case. If you find it difficult, post a comment and I will get back to you: MAPViewBallons
"Hit rectangle in parent's coordinates". But what does that mean?
To amplify, what I really want to know is the meaning of the phrase "hit rectangle". What is it for? How does the framework process it? When in the lifecycle is the return value meaningful? How might it differ from the rectangle defined by getLeft(), getTop(), getRight(), getBottom()?
Based on the name of the function I can of course guess at an answer, and try a few examples, but that's not satisfactory. I can find nothing useful about this function on the Android Developer website, or anywhere else I've looked.
Here appears to be the most complete explanation.
The getHitRect() method gets the child's hit rectangle (touchable
area) in the parent's coordinates.
The example snippet uses the function to determine the current touchable area of a child view (after layout) in order to effectively extend it by creating a TouchDelegate.
They should certainly do a better job of documenting this. If you look at the source of View#getHitRect(Rect), you'll see that if the view has an identity matrix, or is not attached to a window, it returns exactly what we're thinking. The alternate branch means the view has a transform; to get the hit rect for the parent, which is the smallest possible rect in the parent's coordinate system that covers the view, you have to move the rect to the origin, apply the transform, and then add back the view's original position.
So you can use it as a shortcut for this purpose if there's no transform. If there's a transform, remember that you'll be getting values in the rect that may be outside or inside the view as currently displayed.
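The transform branch is easy to mirror in plain arithmetic. This is a pure-Java illustration (not the framework code) of what getHitRect() effectively computes for a view scaled about its default pivot (the view's center); the numbers and the helper name are made up:

```java
public class HitRectDemo {
    // Plain-Java sketch of the hit-rect math for a scaled view. A view laid
    // out at (left, top)-(right, bottom) with scale s about its center pivot
    // yields a hit rect that shrinks or grows around that center.
    static int[] hitRectWithScale(int left, int top, int right, int bottom, float s) {
        float pivotX = (right - left) / 2f;   // default pivot: view center
        float pivotY = (bottom - top) / 2f;
        // Move to origin, scale about the pivot, then add the position back:
        int l = Math.round(left + (0 - pivotX) * s + pivotX);
        int t = Math.round(top + (0 - pivotY) * s + pivotY);
        int r = Math.round(left + ((right - left) - pivotX) * s + pivotX);
        int b = Math.round(top + ((bottom - top) - pivotY) * s + pivotY);
        return new int[]{l, t, r, b};
    }
}
```

For a view laid out at (100, 100)-(200, 200) scaled to 0.5, this gives (125, 125)-(175, 175): the hit rect hugs the view as drawn, not as laid out. With scale 1 (the identity case) it returns the laid-out bounds unchanged.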
The Rect that is returned contains 4 values:
bottom
left
right
top
bottom specifies the y coordinate of the bottom of the rectangle. left specifies the x coordinate of the left side of the rectangle. etc.
"Parent's coordinates" means that the hit rectangle values are specified in the parent's coordinate system.
Imagine yourself standing in an open field at night, looking up at the moon. Your position on the earth can be expressed in many ways.
If we expressed your position in your local coordinate system, you would be located at (latitude/longitude) (0, 0). As you walk around, your local coordinate system never changes, you are always centered at (0,0) in your local coordinate system.
However, if we expressed your location using the earth's coordinate system, you might be at (latitude, longitude) (16, 135).
You are standing on earth, so earth is your parent coordinate system.
In the same way, a View may be contained in a LinearLayout. The LinearLayout would be the View's parent and so the values from getHitRect() would be expressed in the coordinate system of the LinearLayout.
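As a concrete, pure-Java version of that (the numbers and the helper are invented for illustration): a child laid out 30px from its parent's left edge and 40px from its top, with no transform, reports its hit rect in the parent's coordinates.

```java
public class ParentCoordsDemo {
    // With an identity transform, getHitRect() is just the laid-out bounds
    // (getLeft/getTop/getRight/getBottom) expressed in the parent's
    // coordinate system, where the parent's own top-left corner is (0, 0).
    static int[] hitRect(int left, int top, int width, int height) {
        return new int[]{left, top, left + width, top + height};
    }
}
```

So a 100x50 child at (30, 40) inside a LinearLayout reports (30, 40, 130, 90), no matter where the LinearLayout itself sits on screen.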
EDIT
In generic terms hit rectangle is a term used to define a rectangular area used for collision detection. In terms of Android, a hit rectangle is just an instance of type Rect.
The methods getLeft() etc are just accessors for the data in the Rect, so the members of a Rect define the same rectangle that you would get by calling the methods.
A common usage scenario for a Rect would be handling tap events:
//Imagine you have some Rect named myRect
//and some tap event named event.
//Check if the tap event was inside the rectangle
//(Rect.contains takes ints, so cast the float event coordinates):
if (myRect.contains((int) event.getX(), (int) event.getY())) {
    //it's a hit!
}
You might also want to see if two rectangles intersect each other:
if (Rect.intersects(myRect, myRect2)) {
    //collision!
}
For a View, the hit rectangle isn't really used directly; you can see in the source that top, bottom, left, and right are used a lot in View, but getHitRect() is really more of a convenience that passes back those parameters (top/bottom/left/right) in a tidy package to anyone who needs them.
I have a listview that I am implementing pinch zooming on. I resize each child view of the listview which changes the row height. When these rows are resized, the top item of the list view remains stationary and the items below it move up to recover the slack space.
I'm trying to implement the pinch zoom in 2 parts. First, make the zoom origin the center of the visible list (right now it's the top, like I said). Second, adjust the origin based on the focal point of the zoom gesture.
My problem is that I can't smoothly adjust the origin of the zoom. The method I have right now does scroll the list, but it does it a few milliseconds after the view is updated (it actually looks pretty bad... It's really choppy).
Is there a better way to do what I'm doing in the code below?
mMsgs is a ListView.
// Center align the scroll
int yviewspace = mMsgs.getHeight();
int newyviewspace = (int) (yviewspace * zoomratio);
//mMsgs.scrollBy(0, -(yviewspace - newyviewspace) / 2); //This is smooth, but, sadly, it doesn't work on listviews.
mMsgs.smoothScrollBy(-(yviewspace - newyviewspace) / 2, 0);