Jumping ImageView while dragging. getX() and getY() values are jumping - android

I've created an onTouchListener for dragging Views. Images drag smoothly if I use getRawX() and getRawY(). The problem with that is the image will jump to the second pointer when you place a second pointer down then lift the first pointer.
This onTouchListener attempts to fix that issue by keeping track of the pointerId. The problem with this onTouchListener is while dragging an ImageView, the ImageView jumps around pretty crazily. The getX() and getY() values jump around.
I feel like I'm doing this correctly. I don't want to have to write a custom view for this because I've already implemented a scaleGestureDetector and written a custom rotateGestureDetector that work. Everything works fine but I need to fix the issue I get when using getRawX() and getRawY().
Does anybody know what I'm doing wrong here?
Here's my onTouchListener:
final View.OnTouchListener onTouchListener = new View.OnTouchListener()
{
@Override
public boolean onTouch(View v, MotionEvent event)
{
relativeLayoutParams = (RelativeLayout.LayoutParams) v.getLayoutParams();
final int action = event.getAction();
switch (action & MotionEvent.ACTION_MASK)
{
case MotionEvent.ACTION_DOWN:
{
final float x = event.getX();
final float y = event.getY();
// Where the user started the drag
lastX = x;
lastY = y;
activePointerId = event.getPointerId(0);
break;
}
case MotionEvent.ACTION_MOVE:
{
// Where the user's finger is during the drag
final int pointerIndex = event.findPointerIndex(activePointerId);
final float x = event.getX(pointerIndex);
final float y = event.getY(pointerIndex);
// Calculate change in x and change in y
final float dx = x - lastX;
final float dy = y - lastY;
// Update the margins to move the view
relativeLayoutParams.leftMargin += dx;
relativeLayoutParams.topMargin += dy;
v.setLayoutParams(relativeLayoutParams);
// Save where the user's finger was for the next ACTION_MOVE
lastX = x;
lastY = y;
v.invalidate();
break;
}
case MotionEvent.ACTION_UP:
{
activePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_CANCEL:
{
activePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_POINTER_UP:
{
// Extract the index of the pointer that left the touch sensor
final int pointerIndex = (action & MotionEvent.ACTION_POINTER_INDEX_MASK) >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
final int pointerId = event.getPointerId(pointerIndex);
if(pointerId == activePointerId)
{
// This was our active pointer going up. Choose a new
// active pointer and adjust accordingly
final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
lastX = (int) event.getX(newPointerIndex);
lastY = (int) event.getY(newPointerIndex);
activePointerId = event.getPointerId(newPointerIndex);
}
break;
}
}
return true;
}
};
image1.setOnTouchListener(onTouchListener);
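The ACTION_POINTER_UP branch above unpacks the departing pointer's index from the packed action value. That bit arithmetic is plain Java and can be checked outside Android; the constants below mirror the documented MotionEvent values (a standalone sketch, not the framework class):

```java
// Standalone demo of the bitmask math used in the ACTION_POINTER_UP branch.
// Constant values mirror the documented MotionEvent constants.
public class PointerIndexDemo {
    static final int ACTION_MASK = 0x00ff;               // MotionEvent.ACTION_MASK
    static final int ACTION_POINTER_INDEX_MASK = 0xff00; // MotionEvent.ACTION_POINTER_INDEX_MASK
    static final int ACTION_POINTER_INDEX_SHIFT = 8;     // MotionEvent.ACTION_POINTER_INDEX_SHIFT
    static final int ACTION_POINTER_UP = 6;              // MotionEvent.ACTION_POINTER_UP

    // Unpack the action code (which kind of event happened).
    static int actionCode(int action) {
        return action & ACTION_MASK;
    }

    // Unpack the index of the pointer the action refers to.
    static int pointerIndex(int action) {
        return (action & ACTION_POINTER_INDEX_MASK) >> ACTION_POINTER_INDEX_SHIFT;
    }

    public static void main(String[] args) {
        // Second finger (index 1) lifting: packed action = (1 << 8) | ACTION_POINTER_UP
        int action = (1 << 8) | ACTION_POINTER_UP;
        System.out.println(actionCode(action));   // 6
        System.out.println(pointerIndex(action)); // 1
    }
}
```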

The issue was simple, but unexpected. getRawX/Y() returns absolute screen coordinates, while getX/Y() returns coordinates relative to the view. I would move the view and then reset lastX/Y, but since the view itself had moved, the new relative values were measured against a different origin, so the next deltas were off. In this case I only needed the point where I originally pressed the image (which isn't the case when using getRawX/Y()).
So, the solution was to simply remove the following:
// Save where the user's finger was for the next ACTION_MOVE
lastX = x;
lastY = y;
I hope this will help somebody in the future, because I've seen others with this problem, and they had code similar to mine (resetting lastX/Y).
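To see why keeping the original press point works, the view-relative coordinate math can be simulated without Android at all. A minimal sketch (plain Java; step() and the variable names are hypothetical, not Android API):

```java
// Simulates view-relative drag coordinates to show why the press anchor
// should stay fixed while dragging by margins.
public class RelativeDragDemo {
    // One drag step: given the view's left margin, the finger's screen
    // position, and the fixed anchor (where the finger first pressed, in
    // view coordinates), return the updated margin.
    static int step(int leftMargin, int fingerScreenX, int anchorX) {
        int relativeX = fingerScreenX - leftMargin; // what getX() would report
        int dx = relativeX - anchorX;
        return leftMargin + dx;                     // move the view by the delta
    }

    public static void main(String[] args) {
        int margin = 100; // view starts at x = 100
        int anchorX = 20; // finger pressed 20px into the view

        // Finger moves to screen x = 150: view should end at 150 - 20 = 130.
        margin = step(margin, 150, anchorX);
        System.out.println(margin); // 130

        // After the layout is applied, getX() reports 150 - 130 = 20 again,
        // equal to the anchor. Resetting lastX to a relative value measured
        // against the view's old position is what produced wrong deltas.
        margin = step(margin, 180, anchorX);
        System.out.println(margin); // 160
    }
}
```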

After a lot of research, I found this to be the issue: getRawX() is absolute, while getX() is relative to the view. Hence you can use this to convert one to the other (assuming the view's parent sits at the screen origin, so view.getX() is the view's position on screen):
// rawX = getX + the view's x position
event.getRawX() == event.getX(event.findPointerIndex(ptrID1)) + view.getX()
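That relationship can be sanity-checked in plain Java. This sketch assumes viewScreenX really is the view's left edge on screen; with nested or offset parents, you would need the view's absolute screen position rather than view.getX():

```java
// Converting between raw (screen) and view-relative x coordinates,
// under the assumption that viewScreenX is the view's left edge on screen.
public class RawRelativeDemo {
    static float toRaw(float relativeX, float viewScreenX) {
        return relativeX + viewScreenX;
    }

    static float toRelative(float rawX, float viewScreenX) {
        return rawX - viewScreenX;
    }

    public static void main(String[] args) {
        // A touch 40px into a view whose left edge sits at screen x = 200.
        System.out.println(toRaw(40f, 200f));       // 240.0
        System.out.println(toRelative(240f, 200f)); // 40.0
    }
}
```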

I have one tip that I think will help.
invalidate() only redraws a view; it doesn't change the view's size or position.
What you should use is
requestLayout(), which runs the measure and layout passes. I think requestLayout() is already called when you call setLayoutParams(),
so try removing v.invalidate(),
or try using view.layout(left, top, right, bottom) instead of setting LayoutParams.

Related

How to move an object on a SurfaceView without touching the object itself?

I found this question, but it's for iOS development. I'm doing something in Android, so it wasn't any help.
I have a simple white rectangle on a black canvas, and I have an onTouchEvent set up. I can move the rectangle as expected, but this is not the functionality I'm looking for.
I need to know how I can touch anywhere on the screen and move the object. I don't want the object's X and Y to become my touch event's getX() and getY().
For now, I have:
@Override
public boolean onTouchEvent(MotionEvent event) {
switch(event.getAction()) {
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_MOVE:
myPoint.set((int)event.getX(), (int)event.getY());
}
return true;
}
My reasoning for this is so the object can be seen without a finger hovering over it. Is this possible?
Here is a visual representation of what I'm wanting to achieve:
Why not store the previous MotionEvent coordinates in the onTouchEvent callback? Then, when onTouchEvent fires again (the first ACTION_MOVE after ACTION_DOWN, and so on for each subsequent ACTION_MOVE), work out the offset between the old and new values, and add that offset to myPoint's current coordinates. This way you're only moving the myPoint object by the position offset, not setting it to the actual touch coordinates.
This code isn't tested and I've not used an IDE but something like this:
private float oldX;
private float oldY;
@Override
public boolean onTouchEvent(MotionEvent event) {
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
oldX = event.getX();
oldY = event.getY();
break;
case MotionEvent.ACTION_MOVE:
float newX = event.getX();
float newY = event.getY();
float offsetX = getOffset(oldX, newX, myPoint.x);
float offsetY = getOffset(oldY, newY, myPoint.y);
myPoint.set((int) offsetX, (int) offsetY);
oldX = newX;
oldY = newY;
break;
}
return true;
}
private float getOffset(float oldVal, float newVal, float current) {
return current + (newVal - oldVal);
}

Android Dragging and Scaling in FrameLayout

Recently I have been trying to implement dragging and scaling on a picture that I place in a FrameLayout. What I want to achieve is simple: to be able to drag the picture around and zoom it. I went to the Android Developer website and followed the guide there.
Then following the code examples on that website I wrote MyCustomView:
public class MyCustomView extends ImageView {
private static final int INVALID_POINTER_ID = 0xDEADBEEF;
private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;
private float mLastTouchX, mLastTouchY;
private int mActivePointerId = INVALID_POINTER_ID;
private LayoutParams mLayoutParams;
private int mPosX, mPosY;
public MyCustomView(Context context) {
super(context);
mScaleDetector = new ScaleGestureDetector(context, new CustomScaleListener());
mLayoutParams = (LayoutParams) super.getLayoutParams();
if (mLayoutParams != null) {
mPosX = mLayoutParams.leftMargin;
mPosY = mLayoutParams.topMargin;
} else {
mLayoutParams = new LayoutParams(300, 300);
mLayoutParams.leftMargin = 0;
mLayoutParams.topMargin = 0;
}
}
@Override
public void onDraw(Canvas canvas) {
super.onDraw(canvas);
canvas.save();
canvas.scale(mScaleFactor, mScaleFactor);
canvas.restore();
}
@Override
public boolean onTouchEvent(MotionEvent ev) {
// Let the ScaleGestureDetector inspect all events
mScaleDetector.onTouchEvent(ev);
final int action = MotionEventCompat.getActionMasked(ev);
switch (action) {
case MotionEvent.ACTION_DOWN: {
final int pointerIndex = MotionEventCompat.getActionIndex(ev);
//final float x = MotionEventCompat.getX(ev, pointerIndex);
//final float y = MotionEventCompat.getY(ev, pointerIndex);
final float x = ev.getRawX();
final float y = ev.getRawY();
// Remember where we started (for dragging)
mLastTouchX = x;
mLastTouchY = y;
// Save the ID of this pointer (for dragging)
mActivePointerId = MotionEventCompat.getPointerId(ev, 0);
break;
}
case MotionEvent.ACTION_MOVE: {
// Find the index of the active pointer and fetch its position
final int pointerIndex = MotionEventCompat.findPointerIndex(ev, mActivePointerId);
//final float x = MotionEventCompat.getX(ev, pointerIndex);
//final float y = MotionEventCompat.getY(ev, pointerIndex);
final float x = ev.getRawX();
final float y = ev.getRawY();
final float dx = x - mLastTouchX;
final float dy = y - mLastTouchY;
//TODO: Update the location of this view
mPosX += dx;
mPosY += dy;
mLayoutParams.leftMargin += dx;
mLayoutParams.topMargin += dy;
super.setLayoutParams(mLayoutParams);
invalidate();
mLastTouchX = x;
mLastTouchY = y;
break;
}
case MotionEvent.ACTION_UP: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_CANCEL: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_POINTER_UP: {
final int pointerIndex = MotionEventCompat.getActionIndex(ev);
final int pointerID = MotionEventCompat.getPointerId(ev, pointerIndex);
if (pointerID == mActivePointerId) {
// This was our active pointer going up. Choose a new active pointer and
// adjust accordingly
final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
//mLastTouchX = MotionEventCompat.getX(ev, newPointerIndex);
//mLastTouchY = MotionEventCompat.getY(ev, newPointerIndex);
mLastTouchX = ev.getRawX();
mLastTouchY = ev.getRawY();
mActivePointerId = MotionEventCompat.getPointerId(ev, newPointerIndex);
}
break;
}
}
return true;
}
private class CustomScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
@Override
public boolean onScale(ScaleGestureDetector detector) {
mScaleFactor *= detector.getScaleFactor();
mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));
invalidate();
return true;
}
}
In the MainActivity I simply instantiated a MyCustomView object and attached it to the root ViewGroup, which is a FrameLayout. The XML layout file contains nothing but that FrameLayout.
public class MainActivity extends AppCompatActivity {
private ViewGroup layoutRoot;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
layoutRoot = (ViewGroup) findViewById(R.id.view_root);
final MyCustomView ivAndroid = new MyCustomView(this);
ivAndroid.setImageResource(R.mipmap.ic_launcher);
ivAndroid.setLayoutParams(new FrameLayout.LayoutParams(300, 300));
layoutRoot.addView(ivAndroid);
}
}
And here comes the problem that troubles me:
The Android Developer website uses this to obtain the coordinates of the finger that touches the picture:
final float x = MotionEventCompat.getX(ev, pointerIndex);
final float y = MotionEventCompat.getY(ev, pointerIndex);
But it works horribly! The picture moves, but it does not follow my finger exactly; it always moves LESS than my finger does, and most importantly, it flashes.
So that is why you can see in MyCustomView that I have commented out this line and instead used this code:
final float x = ev.getRawX();
final float y = ev.getRawY();
While this time the picture moves smoothly in accordance with my finger, this change introduces a new problem. On the Android Developer website for dragging and scaling, there is a design principle that says:
In a drag (or scroll) operation, the app has to keep track of the original pointer (finger), even if additional fingers get placed on the screen. For example, imagine that while dragging the image around, the user places a second finger on the touch screen and lifts the first finger. If your app is just tracking individual pointers, it will regard the second pointer as the default and move the image to that location.
After I started using ev.getRawX() and ev.getRawY(), adding a second finger to the screen gives me exactly the problem stated above. But MotionEventCompat.getX(ev, pointerIndex) and MotionEventCompat.getY(ev, pointerIndex) do not.
Can somebody help me explain why it happens? I know that MotionEventCompat.getX(ev, pointerIndex) returns the coordinate after some sort of adjustment, and that ev.getRawX() returns the absolute coordinate. But I don't understand how exactly the adjustment works (Is there a formula or graphical explanation for it?). I also want to know why using MotionEventCompat.getX(...) would prevent the picture from jumping to the second finger on screen (after the first finger has been lifted).
Last but not least, the scaling code simply doesn't work AT ALL. If someone can teach me about that, it will also be greatly appreciated!
This question is long, so I will partition it into smaller parts. Also, English is not my native language, so I had some difficulty writing the answer. Comment if a part is not clear.
Can somebody help me explain why it happens?
getRawX() and getRawY() will both give you the absolute pixel position of the event. For the sake of backwards compatibility (from when most screens could only track one "region" at a time), they always report the first (and only) finger interacting with the device.
Then came improvements that allowed tracking pointer IDs; the MotionEventCompat.getX(ev, pointerIndex) and MotionEventCompat.getY(ev, pointerIndex) functions allow for finer control when creating onTouch() listeners.
Is there a formula or graphical explanation for it?
Basically, you need to take the "Screen Density" of the device into consideration. For example:
float SCREEN_DENSITY = getResources().getDisplayMetrics().density;
protected void updateFrame(FrameLayout frameLayout, int h, int w, int x, int y) {
FrameLayout.LayoutParams params = new FrameLayout.LayoutParams(
(int) ((w * SCREEN_DENSITY) + 0.5),
(int) ((h * SCREEN_DENSITY) + 0.5)
);
params.leftMargin = (int) ((x * SCREEN_DENSITY) + 0.5);
params.topMargin = (int) ((y * SCREEN_DENSITY) + 0.5);
frameLayout.setLayoutParams(params);
}
I also want to know why using MotionEventCompat.getX(...) would prevent the picture from jumping to the second finger on screen (after the first finger has been lifted)
If you consider that the first finger was lifted, then the new one has a different initial point and a different history. Because of that, its events can be applied as movement deltas rather than as a final position on screen. This way the image won't jump to where the finger is, but will move according to the amount of x and y units traversed.
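The difference can be sketched in plain Java: delta-based code moves the object by the active pointer's movement, so handing off to a second finger just re-anchors the reference point instead of teleporting the object (moveByDelta() is illustrative, not Android API):

```java
// Demonstrates why delta-based tracking survives a pointer handoff
// while absolute-position tracking would make the object jump.
public class HandoffDemo {
    // Delta-based update: the object moves by the active pointer's movement.
    static float moveByDelta(float objX, float lastX, float newX) {
        return objX + (newX - lastX);
    }

    public static void main(String[] args) {
        float objX = 100f;
        float finger1 = 50f, finger2 = 400f;

        // Finger 1 drags from 50 to 60: the object follows by +10.
        objX = moveByDelta(objX, finger1, 60f);
        System.out.println(objX); // 110.0

        // Finger 1 lifts; finger 2 becomes active. Delta code re-anchors
        // lastX to finger 2's position (400), so its first move to 405
        // shifts the object by only +5, with no jump.
        objX = moveByDelta(objX, finger2, 405f);
        System.out.println(objX); // 115.0

        // Absolute-position code would instead snap objX straight to 405.
    }
}
```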
Last but not least, the scaling code simply doesn't work AT ALL. If someone can teach me about that, it will also be greatly appreciated!
You are consuming the event (by returning true in your onTouch listener); because of that, no other listener can continue reading the event, so you can't trigger further listeners.
If you want both behaviors, handle both gestures (move and resize) inside the same onTouch. My onTouch listener has over 1700 lines of code (because it does a lot of stuff, including programmatically creating views and adding listeners to them), so I can't post it here, but basically:
1 finger = move the frame. Get the raw values, and use updateFrame.
2 fingers = resize the frame. Get the raw values, and use updateFrame.
3+ fingers = drop the first finger, and treat it as 2 fingers.

pinch to zoom don't center zoom

I'm trying to implement pinch-to-zoom on an ImageView. The problem is that when I zoom, it doesn't scale around the point where I pinched; it scales toward the top-left corner. I'm not sure why it does this, and it seems a lot of people have the same problem, but I haven't found a solution yet.
public override bool OnTouchEvent(MotionEvent ev)
{
_scaleDetector.OnTouchEvent(ev);
MotionEventActions action = ev.Action & MotionEventActions.Mask;
int pointerIndex;
switch (action)
{
case MotionEventActions.Down:
_lastTouchX = ev.GetX();
_lastTouchY = ev.GetY();
_activePointerId = ev.GetPointerId(0);
break;
case MotionEventActions.Move:
pointerIndex = ev.FindPointerIndex(_activePointerId);
float x = ev.GetX(pointerIndex);
float y = ev.GetY(pointerIndex);
if (!_scaleDetector.IsInProgress)
{
// Only move if the ScaleGestureDetector isn't already processing a gesture.
float deltaX = x - _lastTouchX;
float deltaY = y - _lastTouchY;
_posX += deltaX;
_posY += deltaY;
Invalidate();
}
_lastTouchX = x;
_lastTouchY = y;
break;
case MotionEventActions.Up:
case MotionEventActions.Cancel:
// We no longer need to keep track of the active pointer.
_activePointerId = InvalidPointerId;
break;
case MotionEventActions.PointerUp:
// check to make sure that the pointer that went up is for the gesture we're tracking.
pointerIndex = (int) (ev.Action & MotionEventActions.PointerIndexMask) >> (int) MotionEventActions.PointerIndexShift;
int pointerId = ev.GetPointerId(pointerIndex);
if (pointerId == _activePointerId)
{
// This was our active pointer going up. Choose a new
// action pointer and adjust accordingly
int newPointerIndex = pointerIndex == 0 ? 1 : 0;
_lastTouchX = ev.GetX(newPointerIndex);
_lastTouchY = ev.GetY(newPointerIndex);
_activePointerId = ev.GetPointerId(newPointerIndex);
}
break;
}
return true;
}
protected override void OnDraw(Canvas canvas)
{
base.OnDraw(canvas);
canvas.Save();
canvas.Translate(_posX, _posY);
canvas.Scale(_scaleFactor, _scaleFactor, _lastTouchX, _lastTouchY);
_icon.Draw(canvas);
canvas.Restore();
}
I think it might have to do with this code at the beginning of the class, where the bounds of the image are set starting at 0. But if I delete that code, the image won't render.
public GestureRecognizer(Context context, ImageView imgview)
: base(context, null, 0)
{
//_icon = context.Resources.GetDrawable(Resource.Drawable.ic_launcher);
_icon = imgview.Drawable;
_icon.SetBounds(0, 0, _icon.IntrinsicWidth, _icon.IntrinsicHeight);
_scaleDetector = new ScaleGestureDetector(context, new MyScaleListener(this));
}
Have a look at the MultiTouchController library. It makes it much easier to write multi-touch applications for Android, and pinch-to-zoom, rotate, and drag are nicely implemented. Here is the link.
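If you'd rather not pull in a library, the underlying fix is to scale around the gesture's focal point: a coordinate p maps to f + s * (p - f), where f is the pinch focus and s the scale factor. On Android this corresponds to passing the scale detector's focal point as the pivot of canvas.Scale, rather than the last touch point. A plain-Java sketch of the mapping (scaleAbout() is illustrative, not Android API):

```java
// Scaling about a focal point: p -> f + s * (p - f).
public class FocalScaleDemo {
    static float scaleAbout(float p, float f, float s) {
        return f + s * (p - f);
    }

    public static void main(String[] args) {
        // Zooming 2x about focal point 100: the focal point stays put...
        System.out.println(scaleAbout(100f, 100f, 2f)); // 100.0
        // ...while a point 30px right of it ends up 60px right of it.
        System.out.println(scaleAbout(130f, 100f, 2f)); // 160.0
        // Scaling about 0 (the symptom in the question) pushes everything
        // away from the top-left corner instead of the pinch location.
        System.out.println(scaleAbout(130f, 0f, 2f));   // 260.0
    }
}
```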

Detect scale and translate gestures without ScaleGestureDetector

I'm trying to implement multi-touch gesture detection for API level 7, which means I do not have ScaleGestureDetector. At the moment I have something like this, but it does not work well, and I have more open questions about it than a full understanding:
public boolean onTouchEvent(MotionEvent ev)
{
final int action = ev.getAction();
switch (action & MotionEvent.ACTION_MASK) // why mask it with ACTION_MASK?
{
case MotionEvent.ACTION_DOWN:
{
mLastTouchX=ev.getX();
mLastTouchY=ev.getY();
mActivePointerId = ev.getPointerId(0);
break;
}
case MotionEvent.ACTION_POINTER_DOWN:
{
mLastTouchX2=ev.getX();
mLastTouchY2=ev.getY();
if (ev.getPointerCount()>1)
mActivePointerId2 = ev.getPointerId(1);
break;
}
case MotionEvent.ACTION_MOVE:
{
int pointerIndex;
float x=0.0f,y=0.0f;
try
{
if ((mActivePointerId!=INVALID_POINTER_ID) || (mActivePointerId2!=INVALID_POINTER_ID))
{
// get one of the active pointers - unfortunately here I'm not sure which one is the active one so I only can guess
pointerIndex= ev.findPointerIndex(mActivePointerId);
x= ev.getX(pointerIndex);
y= ev.getY(pointerIndex);
}
if ((mActivePointerId!=INVALID_POINTER_ID) && (mActivePointerId2!=INVALID_POINTER_ID))
{
float d1,d2;
pointerIndex = ev.findPointerIndex(mActivePointerId2);
if (pointerIndex<0) return false;
float x2 = ev.getX(pointerIndex);
float y2 = ev.getY(pointerIndex);
d1=android.util.FloatMath.sqrt((x-x2)*(x-x2)+(y-y2)*(y-y2));
d2=android.util.FloatMath.sqrt((mLastTouchX-mLastTouchX2)*(mLastTouchX-mLastTouchX2)+
(mLastTouchY-mLastTouchY2)*(mLastTouchY-mLastTouchY2));
if ((d1>0) && (d2>0)) // seems to be a scale gesture with two pointers
{
float w,h,s;
transOffsetX=0.0f;
transOffsetY=0.0f;
s=d1/d2;
mScaleFactor*=s;
matrix.postScale(s,s);
w=(scrWidth-(scrWidth*s))/2.0f;
h=(scrHeight-(scrHeight*s))/2.0f;
matrix.postTranslate(w,h);
imgOffsetX+=w;
imgOffsetY+=h;
}
mLastTouchX2 = x2;
mLastTouchY2 = y2;
}
else if (mScaleFactor==1.0) // seems to be a translate gesture with only one pointer
{
mScaleFactor=1.0f;
transOffsetX+=(x-mLastTouchX);
transOffsetY+=(y-mLastTouchY);
matrix.setTranslate(transOffsetX,transOffsetY);
}
if ((mActivePointerId!=INVALID_POINTER_ID) || (mActivePointerId2!=INVALID_POINTER_ID))
{
mLastTouchX = x;
mLastTouchY = y;
}
}
catch (ArrayIndexOutOfBoundsException aioobe)
{
// this is really strange, this exception can be caused by
// pointerIndex= ev.findPointerIndex(mActivePointerId);
// x= ev.getX(pointerIndex);
// above which seems to be a Android bug?
}
break;
}
case MotionEvent.ACTION_UP:
case MotionEvent.ACTION_POINTER_UP:
{
breakMapThread=true;
mActivePointerId = INVALID_POINTER_ID;
mActivePointerId2 = INVALID_POINTER_ID;
// gestrue seems to be finished so trigger update of the view here
...
break;
}
}
return true;
}
The whole thing works really poorly. Scale gestures cause large additional translations, a single tap into the view causes a translation too, and translations are not very accurate. Besides that, I found some MotionEvent constants (ACTION_POINTER_1/2/3_DOWN/UP) which never seem to be used, so I'm absolutely unsure whether my whole assignment of _DOWN/_UP events to pointers one and two is correct.
Any ideas, hints, tips to get this thing working?
One idea is to find the source of ScaleGestureDetector and include it in your project.
But there are problems: sometimes the system detects a click or long-click instead (especially when a finger goes out of the window boundaries).
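For reference, the two-pointer branch in the question computes the per-frame scale step as the ratio of the current inter-pointer distance to the previous one. That ratio logic can be checked in plain Java (names are illustrative):

```java
// Pinch scale step: current inter-pointer distance over previous distance.
public class PinchRatioDemo {
    static float distance(float x1, float y1, float x2, float y2) {
        float dx = x1 - x2, dy = y1 - y2;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // Guard against zero distances, as the original code does with d1/d2 > 0.
    static float scaleStep(float dNow, float dPrev) {
        return (dPrev > 0f && dNow > 0f) ? dNow / dPrev : 1f;
    }

    public static void main(String[] args) {
        // Fingers move from 100px apart to 200px apart: a 2x zoom step.
        float dPrev = distance(0f, 0f, 100f, 0f);
        float dNow = distance(0f, 0f, 200f, 0f);
        System.out.println(scaleStep(dNow, dPrev)); // 2.0
    }
}
```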

Motionevent.ACTION_UP firing constantly?

In my program I have it draw a rectangle while the finger is down and moving, then erase it after the finger is up. This is to show the user the range of values he/she is using as a "guess" to find the root. However, the rectangle never shows up! But if I remove the line that clears the rectangle in the ACTION_UP case, the user can draw the rectangle.
Here's the code:
in the on draw function:
if(dataline>1)//if greater than 1, draw rectangle
{
myPaint.setColor(Color.CYAN);
canvas.drawRect(tX1,0, tX2,canvas.getHeight(),myPaint);
}
in the motion event function:
public boolean onTouchEvent(MotionEvent ev) {
final int action = ev.getAction();
switch (action) {
case MotionEvent.ACTION_DOWN: {
final float x = ev.getX();
final float y = ev.getY();
// Remember where we started
mLastTouchX = x;
mLastTouchY = y;
tX1=(int)ev.getX();
tX2=tX1;
x_1 = ev.getX();
x_1=(x_1-X1)/(zoom_x);
clicks= 1;
tX1=(int) ev.getX();//set first x coord
tX2=tX1;// make second x coord equal to the first
}
case MotionEvent.ACTION_MOVE: {
final float x = ev.getX();
final float y = ev.getY();
// Calculate the distance moved
final float dx = x - mLastTouchX;
final float dy = y - mLastTouchY;
mLastTouchX = x;
mLastTouchY = y;
dataline=2;//let onDraw() draw the rectangle while dragging finger
tX2+= (int)dx;// find new second coordinate
}
case MotionEvent.ACTION_UP: {
dataline=0;//if commented out, rectangle is drawn otherwise, it is never seen.
}
}
return true;
}
Problem solved! I have learned that you have to put a break (or return) statement in each case; otherwise execution falls through and runs the following cases as well.
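The fall-through is plain Java switch behavior, not anything Android-specific: without a break (or return), an ACTION_MOVE continues into the ACTION_UP case and immediately clears the flag that lets onDraw() paint the rectangle. A minimal sketch (hypothetical constants standing in for the MotionEvent actions):

```java
// Shows how switch fall-through clears the draw flag set by the MOVE case.
public class FallthroughDemo {
    static final int DOWN = 0, UP = 1, MOVE = 2;

    // Buggy version: no breaks, so every case below the match also runs.
    static int handleWithoutBreaks(int action) {
        int dataline = 0;
        switch (action) {
            case DOWN:
            case MOVE:
                dataline = 2; // falls through...
            case UP:
                dataline = 0; // ...so this also runs for DOWN and MOVE
        }
        return dataline;
    }

    // Fixed version: each case ends with a break.
    static int handleWithBreaks(int action) {
        int dataline = 0;
        switch (action) {
            case DOWN:
                break;
            case MOVE:
                dataline = 2;
                break;
            case UP:
                dataline = 0;
                break;
        }
        return dataline;
    }

    public static void main(String[] args) {
        System.out.println(handleWithoutBreaks(MOVE)); // 0, rectangle never shows
        System.out.println(handleWithBreaks(MOVE));    // 2, rectangle is drawn
    }
}
```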
