I am using Chris Banes' PhotoView library to handle all zoom events for my gallery.
I want to detect whether the user has zoomed the image in or out, without overriding all of the double-tap and touch event methods. Is there an efficient way to achieve this?
To answer my own question:
I found a much better way to implement zoom detection that works for pinch zoom as well as double tap (any zooming event, without having to override each method). I couldn't find this anywhere on the web, so if there's a more efficient way, please let me know :)
(Also, I am using Chris Banes' PhotoView library to handle the zooming.)
To detect a zooming event, store the display rectangle of the current PhotoView; when a zoom event happens, OnMatrixChangedListener gets called, and there you compare the stored rectangle with the new one to see whether the image has been zoomed.
(Simply using this listener to handle a zooming event won't work on its own, because the listener also gets called every time you change the image (in case you're using it in a gallery), when the screen orientation changes, and when simple touches that don't zoom are made on the screen.)
Also, when the screen orientation changes, the PhotoView sometimes returns a zero rectangle, so you have to check for that as well. Here's my code:
if (savedInstanceState.getBoolean(Constants.ZOOM)) {
    photoViewAttacher = new PhotoViewAttacher(backgroundImage);
    mWindowRect = new RectF(photoViewAttacher.getDisplayRect());
    photoViewAttacher.setOnMatrixChangeListener(new PhotoViewAttacher.OnMatrixChangedListener() {
        @Override
        public void onMatrixChanged(RectF rect) {
            // Need to differentiate a screen orientation change: right after one,
            // the stored default rect can be all zeros, so re-capture it here.
            if (mWindowRect.left == 0 && mWindowRect.top == 0 && mWindowRect.right == 0 && mWindowRect.bottom == 0) {
                mWindowRect = new RectF(rect);
            }
            // If the current rect is (almost) identical to the default rect, the image is not zoomed.
            if (Math.abs(mWindowRect.left - rect.left) < 2 && Math.abs(mWindowRect.top - rect.top) < 2
                    && Math.abs(mWindowRect.right - rect.right) < 2 && Math.abs(mWindowRect.bottom - rect.bottom) < 2) {
                viewPager.setLocked(false);
                thumbnailsContainer.startAnimation(appear);
                thumbnailsContainer.setClickable(true);
            } else {
                viewPager.setLocked(true);
                thumbnailsContainer.startAnimation(disappear);
                thumbnailsContainer.setClickable(false);
            }
            Log.i("ZOOM", "default rect: " + mWindowRect);
            Log.i("ZOOM", "zoom rect: " + rect);
        }
    });
}
I can see a bunch of listeners that can be notified when a certain event occurs on a PhotoView.
Refer to:
IPhotoView
Here I can see the listeners:
void setOnDoubleTapListener(GestureDetector.OnDoubleTapListener newOnDoubleTapListener);
void setOnScaleChangeListener(PhotoViewAttacher.OnScaleChangeListener onScaleChangeListener);
void setOnSingleFlingListener(PhotoViewAttacher.OnSingleFlingListener onSingleFlingListener);
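For example, a minimal sketch using setOnScaleChangeListener might look like the following, assuming the callback signature onScaleChange(scaleFactor, focusX, focusY) in this version of the library; imageView is a placeholder:

// Sketch: scaleFactor > 1 means the gesture is zooming in, < 1 means zooming out.
PhotoViewAttacher attacher = new PhotoViewAttacher(imageView);
attacher.setOnScaleChangeListener(new PhotoViewAttacher.OnScaleChangeListener() {
    @Override
    public void onScaleChange(float scaleFactor, float focusX, float focusY) {
        if (scaleFactor > 1f) {
            Log.i("ZOOM", "zooming in");
        } else if (scaleFactor < 1f) {
            Log.i("ZOOM", "zooming out");
        }
    }
});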
Hope this helps.
I am trying to detect whether the user is swiping horizontally or vertically using a GestureDetector widget. For some reason I am unable to make the onVerticalDragEnd property work; onHorizontalDragEnd works just fine.
Am I supposed to add anything else?
child: new GestureDetector(
  onHorizontalDragEnd: (DragEndDetails details) {
    print("horizontal drag");
  },
  onVerticalDragEnd: (DragEndDetails details) {
    print("vertical drag");
  },
  child: new GridView.count(..
Try logging when your GestureDetector and/or its parent widgets are being created, and see if the tree is being rebuilt before the End callback fires. I had the same problem; it was caused by side effects of the Update callback I was using, which triggered a rebuild. After making some modifications to prevent that from happening, the End callback started working.
Instead of GestureDetector, you can use Listener; that worked for me. Here's how I implemented it:
import 'dart:math';

// The pointer delta's direction is reported in radians; convert it to degrees.
double degrees(double radians) {
  return (radians * 180) / pi;
}

void swipe(moveEvent) {
  double angle = degrees(moveEvent.delta.direction);
  if (angle >= -45 && angle <= 45) {
    print("Swipe Right");
  } else if (angle >= 45 && angle <= 135) {
    print("Swipe Down");
  } else if (angle <= -45 && angle >= -135) {
    print("Swipe Up");
  } else {
    print("Swipe Left");
  }
}

child: Listener(
  onPointerMove: (moveEvent) => swipe(moveEvent),
  child: GridView.count(...),
)
But this calls the function multiple times during a single swipe.
It looks like the GridView widget is catching the vertical drag gesture. It may not be an optimal solution, but by making a GestureDetector widget the parent of EACH GridView child, I was able to make this work.
Good day,
The reason this did not work is that you cannot specify onHorizontalDrag and onVerticalDrag callbacks at the same time. Doing so will result in one of the recognizers being ignored; in this case, it seems the vertical gesture was ignored in favor of the horizontal one.
Ref: https://github.com/flutter/flutter/blob/03a1f4acb315bd5cd99c5cafe19a4875f9f98422/packages/flutter/lib/src/widgets/gesture_detector.dart#L187
Hope this clears things up for you :)
Using setTranslationX, I'm trying to animate a view as I swipe it across the screen. Then, after it passes a threshold X, I assign the view a new RelativeLayout.RIGHT_OF anchor.
I want it to stop animating at that point (whether or not I continue swiping) and basically lock to that new anchor.
This is where the problem is: the view suddenly jumps its X position to the right of its new anchor.
I've tried setting setTranslationX(0) once it's >= the threshold, but then I see the view twitch/flash twice, once at its original 0 and then at the new 0.
I would love to get rid of that double twitch/flash, but I don't know how at this point.
@Override
public void onChildDraw(Canvas c ... float dX) {
    threshold = anchorView.getRight();
    if (animate) {
        if (dX >= 0) {
            translationX = Math.min(dX, threshold);
            if (dX >= threshold) {
                translationX = 0; // (A) if I do this, then mainView flashes twice: original 0, then new 0
                setToRightOf(mainView, anchorView);
                mainView.invalidate(); // has no effect
            }
        } else {
            translationX = 0;
        }
        // if I don't do (A), then mainView will suddenly jump to 2*threshold
        mainView.setTranslationX(translationX);
        return;
    }
    super.onChildDraw(c ... dX);
}
Okay, instead of assigning RelativeLayout.RIGHT_OF during onDraw to set the threshold boundary, I took it out and assigned it when my touch left the screen.
But to ensure I wouldn't swipe back behind that threshold while swiping, I had to add another case that checks translationX, instead of relying on the RelativeLayout anchor as I tried before.
Now, I'm using setTag() and getTag() to help confirm the threshold during the swipe:
if (dX >= 0) {
    // 'past' is a marker object stored as the anchor view's tag once the threshold
    // has been crossed; 'tag' is the anchor view's current tag.
    if ((Object) past != tag)
        translationX = Math.min(dX, threshold);
    else
        translationX = threshold;
    if (dX >= threshold) {
        if ((Object) past != tag) {
            anchorView.setTag(past);
        }
    }
} else {
    ...
}
Plus a couple of other places to make sure I reset anchorView's tag and the translationX when needed, and then it's all good.
It works for now!
(doesn't directly solve the double flash/twitch issue, but a different approach to the same goal)
(any other recommendations besides using setTag()?)
P.S. In my earlier attempts, instead of invalidate(), I also tried mainView.requestLayout(), with no success either, thinking requestLayout() would also factor in position.
I am trying to change the value of a number based on sliding my finger. I am currently using ACTION_MOVE to change the value when I drag across my view, but if I drag too fast the number barely changes. If I drag slowly I can get to the correct number.
Is there a way to make the change quicker depending on the speed of the motion? I am looking into VelocityTracker, but it only returns the speed of the move, and I need to make the change while dragging my finger.
Is there an optimization needed to detect ACTION_MOVE in real-time?
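For context, this is roughly the direction I have been experimenting in (a sketch only; the value field, lastX, and the tuning factor are placeholders rather than my actual code):

// Sketch: scale the per-move change by the current finger velocity.
private VelocityTracker velocityTracker;
private float value; // the number being adjusted
private float lastX;

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_DOWN:
            velocityTracker = VelocityTracker.obtain();
            velocityTracker.addMovement(event);
            lastX = event.getX();
            return true;
        case MotionEvent.ACTION_MOVE:
            velocityTracker.addMovement(event);
            velocityTracker.computeCurrentVelocity(1000); // pixels per second
            float speed = Math.abs(velocityTracker.getXVelocity());
            // Faster drags produce bigger steps; 0.01f is an arbitrary tuning factor.
            float step = 1 + speed * 0.01f;
            value += event.getX() >= lastX ? step : -step;
            lastX = event.getX();
            return true;
        case MotionEvent.ACTION_UP:
        case MotionEvent.ACTION_CANCEL:
            if (velocityTracker != null) {
                velocityTracker.recycle();
                velocityTracker = null;
            }
            return true;
    }
    return false;
}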
This is more pseudocode than anything right now, but here is what I would do:
Extend the image view to check whether the event's x value is between the image view's starting x and its x plus its width, like so:
private void checkFingerPosition(int eventXPosition) {
    if (eventXPosition > this.getX() && eventXPosition < this.getX() + this.getWidth()) {
        imageInterface.showImage(this.getDrawable());
    }
}
Then I would create an interface that your activity has to implement:
public interface ImageInterface {
    void showImage(Drawable drawable);
}
In your activity, implement the ImageInterface like so
implements ImageInterface
@Override
public void showImage(Drawable drawable) {
    //TODO - show drawable here
}
Then, in your touch event handling for ACTION_MOVE, report the x value to all of your image views like so:
case MotionEvent.ACTION_MOVE:
    for (ExtendedImageView extendedImageView : extendedImageViews) {
        extendedImageView.checkFingerPosition((int) event.getX());
    }
    break;
I have had pinch-to-zoom working well for a few application versions now, using Mike Ortiz's TouchImageView and a custom TouchViewPager so that it works inside a view pager. The only method TouchViewPager overrides is onInterceptTouchEvent:
public boolean onInterceptTouchEvent(MotionEvent ev) {
    TouchImageView view = getTouchView();
    if (null != view) {
        if (!view.isAtLimit() && view.getCurrentScale() > 1) {
            return false;
        } else {
            Log.v(TAG, "View limit = " + view.isAtLimit());
            Log.v(TAG, "View scale = " + view.getCurrentScale());
        }
    }
    return super.onInterceptTouchEvent(ev);
}
Recently, we created a new view that extends FrameLayout and manages downloading the bitmap from the internet. While the bitmap download is in progress, this layout shows a ProgressBar, and then adds the TouchImageView once it's ready. Visually, this seems to work - we see a ProgressBar and then the TouchImageView loads - but the touches get all messed up and generally just don't work consistently.
I've tried delegating onTouchEvent from the progress image view to the touch image view - no luck. I've also tried overriding onInterceptTouchEvent in the progress image view to always return false, and that didn't work either.
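For clarity, the second attempt looked roughly like this (ProgressImageView is just a placeholder name for the FrameLayout subclass described above):

// Sketch of the attempt described above: never intercept, so all touches
// should go straight to the TouchImageView child.
public class ProgressImageView extends FrameLayout {

    public ProgressImageView(Context context) {
        super(context);
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent ev) {
        return false;
    }
}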
Can anyone help with some ideas on where to continue debugging this problem? I've been working on this for about a week now, with little success.
This is the behavior I am seeing:
1. The activity loads and the image is visible inside the ViewPager -> FrameLayout -> TouchImageView.
2. The ViewPager works to swipe and change images, but pinch zoom does not work on the first or second image. Eventually you get to a position that pinch zoom will work on; I can't figure out what's different about it.
3. If you do try to pinch zoom before swiping, then the ViewPager swipes won't work, and it will take several swipes to get the ViewPager to page.
4. Once you are on a "working position", swipe-to-page works fine, as does pinch zoom and panning around a zoomed image. After this, all positions work.
I was able to get this working, strangely, by implementing a reset method inside the TouchImageView class:
public void resetZoom() {
    Matrix imageMatrix = getImageMatrix();
    RectF drawableRect = new RectF(0, 0, bmWidth, bmHeight);
    RectF viewRect = new RectF(0, 0, getWidth(), getHeight());
    imageMatrix.setRectToRect(drawableRect, viewRect, Matrix.ScaleToFit.CENTER);
    matrix = imageMatrix;
    setImageMatrix(matrix);
    fitAndCenterView();
    invalidate();
}
This method resets the zoom on the image when called, and calling it when we set up the image inside the FrameLayout gets everything working. Oddly, this is not required when the image is not inside a FrameLayout.
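For reference, this is roughly how we call it once the download finishes; the view and bitmap names here are placeholders:

// Sketch: swap the ProgressBar for the TouchImageView, then reset the zoom
// so the image matrix matches the new layout.
progressBar.setVisibility(View.GONE);
touchImageView.setImageBitmap(downloadedBitmap);
addView(touchImageView);
touchImageView.resetZoom();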
I'd of course appreciate any code cleanup and whatnot that we can do.
I'm learning to use libgdx with the universal-tween-engine and haven't been able to figure out how to touch (or click, in the desktop app) a point on the screen and have a texture move all the way to the touched location without keeping the touch or click active until the end point is reached.
When the touch event is initiated, the animation begins and the graphic moves towards the location. The graphic will follow the finger/mouse pointer if a touch-and-drag is initiated. If I touch a point, the graphic will move towards it only until the touch is released, and then it stops wherever it was at that moment.
I'm looking to touch-and-release and have the graphic move to the touched point, so I'm probably not understanding something about the tween engine implementation. I've pasted the tweening code below.
public void render() {
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    batch.draw(texture.getTexture(), texture.getBoundingBox().x, texture.getBoundingBox().y);
    batch.end();

    Tween.registerAccessor(Plane.class, new TextureAccessor());
    TweenManager planeManager = new TweenManager();

    float newX = 0;
    float newY = 0;
    boolean animateOn = false;

    if (Gdx.input.isTouched()) {
        newX = Gdx.input.getX();
        newY = Gdx.input.getY();
        animateOn = true;
    }

    if (animateOn == true && (texture.getX() != newX || texture.getY() != newY)) {
        Tween.to(texture, TextureAccessor.POSITION_XY, 10)
                .target(newX, newY)
                .ease(TweenEquations.easeNone)
                .start(planeManager);
        planeManager.update(1);
        if (texture.getX() == newX && texture.getY() == newY) {
            animateOn = false;
        }
    }
}
Originally, I had the tweening code inside the conditional for isTouched() and didn't use the newX, newY or animateOn variables. I thought using isTouched() to only set the new coordinates and animation state would then make the loop trigger the tween. The older code looked like this:
if (Gdx.input.isTouched()) {
    newX = Gdx.input.getX();
    newY = Gdx.input.getY();
    Tween.to(texture, TextureAccessor.POSITION_XY, 10)
            .target(newX, newY)
            .ease(TweenEquations.easeNone)
            .start(planeManager);
    planeManager.update(1);
}
I've also tried using justTouched(), but the graphic would only move very slightly toward the touched point.
I've been struggling with this for a few hours; I'd really appreciate it if anyone could point me in the right direction.
Thanks.
Tween.registerAccessor(Plane.class, new TextureAccessor());
TweenManager planeManager = new TweenManager();
These two lines should go in the create() method, not in render()! Here you're instantiating a new manager on every frame; you only need one manager, that's all, not an army of them!
Also, you need to update the manager on every frame, not just when animateOn is true, or else you'll need to keep your finger pressed...
The correct code is as follows. Study it and you'll get a better understanding of how the Tween Engine works :)
// Only one manager is needed, like a SpriteBatch.
private TweenManager planeManager;

public void create() {
    Tween.registerAccessor(Plane.class, new TextureAccessor());
    planeManager = new TweenManager();
}

public void render() {
    // The manager needs to be updated on every frame.
    planeManager.update(Gdx.graphics.getDeltaTime());

    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    batch.draw(texture.getTexture(), texture.getBoundingBox().x, texture.getBoundingBox().y);
    batch.end();

    // When the user touches the screen, we start an animation.
    // The animation is managed by the TweenManager, so there is
    // no need for an "animateOn" boolean.
    if (Gdx.input.justTouched()) {
        // Bonus: if there is already an animation running,
        // we kill it to prevent conflicts with the new animation.
        planeManager.killTarget(texture);

        // Fire the animation! :D
        Tween.to(texture, TextureAccessor.POSITION_XY, 10)
                .target(Gdx.input.getX(), Gdx.input.getY())
                .ease(TweenEquations.easeNone)
                .start(planeManager);
    }
}
I was trying to implement this behavior in the wrong way. Instead of using isTouched() or justTouched(), I needed to use touchDown() from GestureListener.
I created a class implementing GestureDetector's GestureListener interface (call it TouchListener) inside my main class (the one that implements ApplicationListener) in the main libgdx project, and put the x and y capturing code inside touchDown() (I noticed tap() was also being triggered). I moved the tween functions (the actual tweening, the call to registerAccessor(), and the creation of the new TweenManager) into the update() method of TouchListener.
I added a call to TouchListener's update method inside the render() loop of the main libgdx class.
I doubt I did this in the best way, but I hope it's helpful to someone else in the future.
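In case it helps, here is a rough sketch of the shape of that class; the names are placeholders, and here the tween is started directly inside touchDown() rather than in a separate update() method:

// Sketch: a GestureListener whose touchDown() fires the tween once per touch.
public class TouchListener extends GestureDetector.GestureAdapter {

    private final TweenManager planeManager;
    private final Plane texture;

    public TouchListener(TweenManager planeManager, Plane texture) {
        this.planeManager = planeManager;
        this.texture = texture;
    }

    @Override
    public boolean touchDown(float x, float y, int pointer, int button) {
        // Kill any running animation on the same target, then start a new one.
        planeManager.killTarget(texture);
        Tween.to(texture, TextureAccessor.POSITION_XY, 10)
                .target(x, y) // convert to world coordinates here if your camera needs it
                .ease(TweenEquations.easeNone)
                .start(planeManager);
        return true;
    }
}

// Registered once, e.g. in create():
// Gdx.input.setInputProcessor(new GestureDetector(new TouchListener(planeManager, texture)));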