The Android Gesture API is a real help and removes the need to reimplement a pretty complex wheel.
However, I need the API to be more "responsive".
By way of example, let's say I define a Gesture such as "Circle", in which a single-finger gesture event is defined to be, yep you guessed it, a single-finger circle.
If the user performs this gesture continuously, without lifting the finger, I would like onGesturePerformed() to be called repeatedly, i.e. each time the user completes the same gesture.
I would like a firing granularity of maybe 1/4 second. I have seen a similar question, but in that case the user wanted a longer delay, not a shorter one where the finger is never lifted.
Many thanks in advance.
Paul
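One way to approximate this with the existing API is to skip waiting for onGesturePerformed() and re-run recognition yourself while the stroke is still in progress. A minimal sketch, assuming a GestureLibrary already loaded into a field named gestureLib, a GestureOverlayView named overlay, and an onCircleDetected() callback of your own:

overlay.addOnGestureListener(new GestureOverlayView.OnGestureListener() {
    private long lastCheck;

    @Override
    public void onGestureStarted(GestureOverlayView o, MotionEvent e) {
        lastCheck = 0;
    }

    @Override
    public void onGesture(GestureOverlayView o, MotionEvent e) {
        long now = SystemClock.uptimeMillis();
        if (now - lastCheck < 250) return;           // ~1/4-second granularity
        lastCheck = now;
        ArrayList<Prediction> predictions = gestureLib.recognize(o.getGesture());
        if (!predictions.isEmpty()
                && predictions.get(0).score > 1.0    // confidence cutoff; tune it
                && "Circle".equals(predictions.get(0).name)) {
            onCircleDetected();                      // assumed callback of your own
        }
    }

    @Override
    public void onGestureEnded(GestureOverlayView o, MotionEvent e) { }

    @Override
    public void onGestureCancelled(GestureOverlayView o, MotionEvent e) { }
});

How well getGesture() recognizes a partial stroke mid-gesture will vary, so treat the 250 ms interval and the score cutoff as starting points to tune.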
Sometimes when I tap on the map, the tap is recognized as onPanStart. I need to do one thing when the user swipes on the map and something different when they tap, but there is no onSwipe gesture in MapGesture.OnGestureListener. Using onPanStart, the wrong action is sometimes triggered. Is there a better way to handle this?
Differentiating between gestures is somewhat subjective and as a result the parameters are generally tweaked to provide a good UX for the given use case.
Without knowing more about your exact use case: if you want the actions to correspond directly to how the HERE SDK interprets the input, then using the onPanStart and onTapEvent callbacks is the right thing to do. Note that even though a Pan is technically triggered, it can have such a small velocity that the Map barely moves; "Pan" is equivalent to a "Swipe" gesture.
If you do want to tweak the UX a bit, one option is to write your own Android GestureDetector to get the feel you would like (potentially fusing its results with the OnGestureListener events as well). Alternatively, you could check that the Map actually moves a certain amount after onPanStart is called before triggering your action, perhaps using Map#OnTransformListener, though this could be tricky to get right.
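For the first option, a minimal sketch of layering a standard GestureDetector over the map view (mapView, handleSwipe and handleTap are placeholders for your own names):

final GestureDetector detector = new GestureDetector(context,
        new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                handleSwipe(velocityX, velocityY);   // a fast pan: treat as a swipe
                return true;
            }

            @Override
            public boolean onSingleTapUp(MotionEvent e) {
                handleTap(e.getX(), e.getY());
                return true;
            }
        });

mapView.setOnTouchListener((v, event) -> {
    detector.onTouchEvent(event);
    return false;   // return false so the HERE SDK still sees the event
});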
I can detect touches on my tiles and it works well from a programmer's point of view, but I would love to improve usability as well. See the screenshot:
The idea is simple: if the user touches a tile, an action is performed. The problem is when the user touches a point close to two tiles. Currently I just do some simple maths and select one, but this is not ergonomic: the user might have wanted to touch the second tile, and the touch was simply not precise enough.
I am looking for the best practice in this situation. onTouchEvent returns a single event, so I cannot guess the touch direction. I lean toward building in some tolerance (e.g. 5 dp) and ignoring ambiguous events. Is there a better practice? And how big should the tolerance area be?
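For what it's worth, the tolerance check itself is only a few lines. A sketch, assuming tiles of tileWidthPx by tileHeightPx laid out in a grid starting at (0, 0):

private boolean isAmbiguousTouch(float x, float y) {
    float tolerancePx = 5f * getResources().getDisplayMetrics().density;  // ~5 dp
    float dx = x % tileWidthPx;       // offset of the touch inside its tile
    float dy = y % tileHeightPx;
    return dx < tolerancePx || tileWidthPx - dx < tolerancePx
            || dy < tolerancePx || tileHeightPx - dy < tolerancePx;
}

The right tolerance depends on the tile size; the check simply ignores taps that land near any tile edge.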
Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points touched by the user? How can I tell the difference between when users are drawing with their thumb and when they are drawing with the tip of a finger? I would like to reflect the brush difference depending on how users touch the screen, and would also like to track x-y coordinates of all the pixels being touched over time. Thanks so much in advance for any help.
This would be very tricky primarily because every android phone is going to behave differently. There are some touch screen devices that are very, very sensitive and some that are basically "dull" by comparison.
It also sounds like you want to track pressure (how hard the user is pushing on the screen), which is actually supported on Android devices.
I think part of your answer may be found by monitoring all of the touch events. In practice, most applications ignore a great number of events or perform some kind of "smoothing", since there is literally a deluge of touch events while the user is manipulating the screen. Doing this may negatively impact your application's performance, though.
I would recommend that you look into pressure sensitivity and calculate a circular region around the primary touch point based on pressure, then build your brush around that.
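A sketch of that idea, also consuming the batched "historical" samples mentioned above so fast strokes stay smooth (drawDab and baseRadiusPx are assumed to be your own drawing helper and base brush size):

@Override
public boolean onTouchEvent(MotionEvent event) {
    // older samples batched into this event since the last delivery
    for (int i = 0; i < event.getHistorySize(); i++) {
        drawDab(event.getHistoricalX(i), event.getHistoricalY(i),
                brushRadius(event.getHistoricalPressure(i), event.getHistoricalSize(i)));
    }
    drawDab(event.getX(), event.getY(),
            brushRadius(event.getPressure(), event.getSize()));
    return true;
}

private float brushRadius(float pressure, float size) {
    // getSize() roughly reflects the contact area (thumb vs. fingertip) and
    // getPressure() how hard the press is; both are normalized, roughly 0..1,
    // and vary between devices, so calibrate the scaling below per device.
    return baseRadiusPx * (0.5f + size) * (0.5f + pressure);
}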
Another idea would be to incorporate more of a gesture approach into what you are trying to do. For example, visualize touching the screen with the tips of two fingers together (index and middle) and rolling the middle finger around the index finger, or simply moving the middle finger up and down relative to the index finger, with both fingers moving together to paint. This could be used to manipulate the drawing angle on the fly, toggle between a set of pre-selected brushes, or change the brush size as you paint.
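The two-finger variant can be read straight off the pointer coordinates. A sketch, inside onTouchEvent (all mappings and thresholds are up to you):

if (event.getPointerCount() >= 2) {
    float dx = event.getX(1) - event.getX(0);
    float dy = event.getY(1) - event.getY(0);
    float angleDeg = (float) Math.toDegrees(Math.atan2(dy, dx));  // middle vs. index
    float spreadPx = (float) Math.hypot(dx, dy);                   // finger separation
    // map angleDeg to the drawing angle, or spreadPx to the brush size
}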
Some of the above ideas I would love to see implemented - let me know when you have your app ready.
Good luck!
Rodney
If you have a listener on your image, it will basically report any touch within that view's bounding box.
So, to get what you want, you could (though I would never do this) create a box around every pixel, or small group of pixels, and listen for a touch on each.
Wherever you get a touch, an event can fire and you can react accordingly.
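In code, the single-listener approach is just this (reactToTouch stands in for whatever per-coordinate logic you need):

view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // coordinates are relative to the view; one event per touch update
        reactToTouch(event.getX(), event.getY());
        return true;
    }
});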
I can't think of any other solution that will give you each pixel that a person touched, at one time.
You may want to read up on multitouch though, as there are some suggestions in here that may help you:
http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html
If you're looking for a way to get your content view as a View after Activity#setContentView(int), you can set an id on the outer-most element of your layout:
android:id="@+id/entire_view"
and reference it in your onCreate() method after setContentView():
View view = findViewById(R.id.entire_view);
view.setOnTouchListener( ... );
I am trying to automatically scroll the browser using monkeyrunner. So far I can scroll with a "drag" event, but how can I scroll with a "flick"? I would appreciate any hints or instructions.
Using drag:

from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

device = MonkeyRunner.waitForConnection()
for i in range(1, 40):
    device.drag((400, 700), (400, 300), 0.15, 1)   # start, end, duration (s), steps
    MonkeyRunner.sleep(.7071)
edit
We cannot replicate pressure using monkeyrunner, so we cannot do a true flick. Dragging is the only way we have for now.
MonkeyDevice.java doesn't have any flick method in it, but you can adjust the duration parameter to drag, which is the third argument. A fling is basically a very quick drag, so by reducing the duration to a very small number (0.01, maybe?) you may be able to get the emulator or device to respond as if flung.
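Something like this, with the numbers being guesses to tune per device:

device.drag((400, 700), (400, 300), 0.01, 2)   # very short duration ~ fling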
As a workaround, why not just 'drag' it many times?
It may take a little bit of work, but you should be able to reproduce the flick effect by performing lots of little drags, as sketched below.
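For example, splitting one long swipe into a burst of short, fast drags:

# sketch: approximate a flick as several quick 50-pixel drags
y = 700
for step in range(8):
    device.drag((400, y), (400, y - 50), 0.02, 1)
    y -= 50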
Sorry I can't provide much more than that.
I want to move an image around the screen according to the trackball movement. I am able to capture the movements using the onTrackballEvent method, but it is called with very small float values (I believe for each rotation?) of the trackball.
Since the X and Y positions of views must be integers when specified with LayoutParams, it makes no sense to move the view on every rotation. Instead, I want to move the view only after the user stops rotating the trackball.
Is there any method by which we can tell whether the user has stopped using the trackball?
If there isn't (I don't know Android), the classic approach is to start a timer in each event and keep track of the deltas it reports (by adding them to a running sum, one for X and one for Y). On any subsequent event, before the timer has fired, just restart the same timer, pushing its trigger further into the future.
When the user stops generating events, the timer will eventually trigger, and then you can apply the sum of all the movements, and clear the sums.
A suitable timeout must be chosen so that the user interaction is not perceived as being too sluggish. Perhaps something on the order of 200-300 milliseconds, or more. This is easy to tweak until it feels "right".
I've also gotten a lot of value out of setting a threshold as well: if the total movement in a given direction adds up to a certain amount, trigger a navigational event immediately.
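Putting both ideas together in Android terms, a sketch using a Handler as the restartable timer (moveImageBy, SCALE and THRESHOLD are assumed names to fill in):

private final Handler handler = new Handler(Looper.getMainLooper());
private float sumX, sumY;

private final Runnable applyMove = new Runnable() {
    @Override
    public void run() {
        // the trackball has gone quiet: apply the accumulated movement
        moveImageBy(Math.round(sumX * SCALE), Math.round(sumY * SCALE));
        sumX = 0;
        sumY = 0;
    }
};

@Override
public boolean onTrackballEvent(MotionEvent event) {
    sumX += event.getX();    // trackball events report small relative deltas
    sumY += event.getY();
    handler.removeCallbacks(applyMove);      // restart the timer...
    if (Math.abs(sumX) >= THRESHOLD || Math.abs(sumY) >= THRESHOLD) {
        applyMove.run();                     // ...unless the threshold is hit: act now
    } else {
        handler.postDelayed(applyMove, 250); // ~200-300 ms, per the timeout above
    }
    return true;
}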