I have an Android app which I'm running on a Chromebook. I have views which scale with pinch-and-zoom gestures when the user touches the device's screen, and these work fine on the Chromebook. I'm trying to get pinch-and-zoom working with the touchpad as well.
I can three-finger drag scrollable elements. I can two-finger drag and it drags around screen elements where dragging makes sense. I still get hover events and the events claim there are two pointers, so that's all good. However, as soon as the fingers start moving in opposing directions, the event stream stops.
Is there any way I can get the raw, unfiltered input event stream so I can see what's going on? I feel like the emulation layer's best-effort attempt to make everything "just work" (and it's a really good effort!) is biting me here. I also notice that some events come in as generic motion events and some as touch events, and some, like tap-to-click, do a bit of each. If it matters, the input device data for ChromeOS Mouse claims it has the ( touchscreen mouse ) sources, which mostly makes sense. Except shouldn't it be touchpad instead, since it's not directly attached to a display?
On this page, list item #5 implies that some kind of synthetic event might be created and used somehow. Is there any way to see whether those are being generated? And if so, how would I take advantage of them?
Help!
A little more detail: Single finger operation of the touchpad gives me ACTION_HOVER_MOVE generic events. Two-finger drag gives me ACTION_MOVE touch events so long as both fingers are moving together. As soon as they start heading in different directions, the event stream stops.
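For reference, here is roughly the logging I'm using to watch the stream. It's a minimal sketch; the class and tag names are just placeholders:

    import android.content.Context
    import android.util.Log
    import android.view.MotionEvent
    import android.view.View

    // Minimal event-dumping view: logs every touch and generic motion event it
    // receives, including the pointer count and the reporting device's sources.
    class EventLogView(context: Context) : View(context) {

        private fun dump(kind: String, ev: MotionEvent) {
            Log.d(
                "EventLog",
                "$kind action=${MotionEvent.actionToString(ev.actionMasked)} " +
                    "pointers=${ev.pointerCount} source=0x${ev.source.toString(16)} " +
                    "device=${ev.device?.name}"
            )
        }

        override fun onTouchEvent(event: MotionEvent): Boolean {
            dump("touch", event)
            return true // claim the stream so MOVE events keep arriving
        }

        override fun onGenericMotionEvent(event: MotionEvent): Boolean {
            dump("generic", event) // hover moves, scroll, etc. arrive here
            return true
        }
    }

With this in place I can see the hover traffic and the two-finger ACTION_MOVE traffic described above, and also exactly where the stream goes quiet once the fingers diverge.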
Pinch-to-zoom support for the touchpad is still a work in progress. Once it lands, it will work seamlessly with the standard gesture recognizer used for touchscreen zoom as well; you should not have to do anything.
I can highly recommend upgrading to API level 24 if you want to target Chromebooks. There are also more details on input devices on Chromebooks to be found here: https://developer.android.com/topic/arc/input-compatibility.html
Edit: the "touchpad" device type is very confusingly named; it is reserved for off-screen devices. The Chromebook's touchpad is treated as a mouse since it moves the mouse cursor on screen.
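For reference, the recognizer in question is ScaleGestureDetector. Below is a minimal sketch of the usual touchscreen wiring (the class name and scale limits are placeholders); nothing in it is touchpad-specific, so the touchpad gesture should be able to feed the same code path once support arrives:

    import android.content.Context
    import android.view.MotionEvent
    import android.view.ScaleGestureDetector
    import android.widget.FrameLayout

    // Standard pinch-to-zoom wiring for touchscreens. The detector only ever
    // sees MotionEvents, so it does not care where they came from.
    class ZoomableLayout(context: Context) : FrameLayout(context) {

        private var scaleFactor = 1f

        private val scaleDetector = ScaleGestureDetector(
            context,
            object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
                override fun onScale(detector: ScaleGestureDetector): Boolean {
                    scaleFactor = (scaleFactor * detector.scaleFactor).coerceIn(0.5f, 4f)
                    scaleX = scaleFactor
                    scaleY = scaleFactor
                    return true
                }
            }
        )

        override fun onTouchEvent(event: MotionEvent): Boolean {
            scaleDetector.onTouchEvent(event)
            return true
        }
    }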
Related
I am working on a project for which I have to measure the touch surface area. This works on both Android and iOS as long as the surface area is small (e.g. when using the thumb). However, when the touch area increases (e.g. when using the ball of the hand), touch events are no longer passed to the application.
On my iPhone X (software version 14.6), the events were no longer passed to the app when UITouch.majorRadius exceeded 170. On my Android device (Redmi 9, Android 10), they stopped when MotionEvent.getPressure exceeded 0.44.
I couldn't find any documentation on this behavior, but I assume it's there to protect against erroneous input.
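For the Android side, the measurement was essentially just logging each pointer's reported pressure and size until events stopped arriving. A minimal sketch (class and tag names are placeholders):

    import android.content.Context
    import android.util.Log
    import android.view.MotionEvent
    import android.view.View

    // Logs action, pressure, and size per pointer so the cut-off can be observed:
    // on the Redmi 9 here, events stopped arriving once pressure went above ~0.44.
    class PressureProbeView(context: Context) : View(context) {

        override fun onTouchEvent(event: MotionEvent): Boolean {
            val action = MotionEvent.actionToString(event.actionMasked)
            for (i in 0 until event.pointerCount) {
                Log.d(
                    "PressureProbe",
                    "$action pointer=$i pressure=${event.getPressure(i)} size=${event.getSize(i)}"
                )
            }
            return true
        }
    }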
I looked in the settings of both devices, but I did not find a way to turn this behavior off.
Is there any way to still receive touch events when the touch area is large?
Are there other Android or iOS devices that don't show this behavior?
I would appreciate any help.
So I've actually done some work on touch with unusual contact areas. I was focusing on multitouch, but it's somewhat comparable. The quick answer is no, because natively, to the hardware, there is no such thing as a "touch event".
You have capacitance changes being detected. That data is HEAVILY filtered by the drivers, which try to take capacitance differences and turn them into events. The OS does not deliver raw capacitance data to the apps; it assumes you always want the filtered version. And even if it did deliver the raw data, it would be very hardware-specific, and you'd have to reinterpret it into touch events yourself.
Here are a few things you're going to find out about touch:
1) Pressure on Android isn't what you should be looking at. Pressure is meant for things like styluses. You want getSize, which returns the normalized size (see the sketch after this list). Pressure is more about how hard someone is pushing, which really doesn't apply to finger touches these days.
2) Your results will vary GREATLY by hardware. Every sensor behaves differently from every other.
3) The OS will confuse large touch areas with multitouch. Part of this is because when you make contact with a large area, like the heel of your hand, the contact is not uniform throughout. That means the capacitances will differ, which can make it think it's seeing multiple fingers. When doing heavy multitouch you'll see the reverse as well (several nearby fingers look like one large touch). This is because the difference between the two, on a physical level, is hard to tell.
4) We were writing an app that enabled 10-finger multitouch actions on keyboards. We found that we missed high-count multitouch from women (especially Asian women) more than from others. Hand size greatly affected this, as did how much they hover versus press down. The idea that there were physical capacitance differences in the skin was considered; we believed it was more due to touching the device more lightly, but we can't rule out actual physical differences.
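To make point 1 concrete, here is a rough sketch of reading size rather than pressure, plus an approximate physical-area estimate from the touch ellipse axes. The conversion is only approximate, and as noted above the reported numbers vary a lot by hardware:

    import android.content.Context
    import android.util.Log
    import android.view.MotionEvent
    import android.view.View

    // getSize() is the normalized contact size the OS reports; getTouchMajor()/
    // getTouchMinor() are the touch ellipse axes in pixels, so dividing by the
    // display's pixels-per-millimetre gives a rough contact area in mm^2.
    class TouchSizeView(context: Context) : View(context) {

        override fun onTouchEvent(event: MotionEvent): Boolean {
            val dm = resources.displayMetrics
            val pxPerMm = (dm.xdpi + dm.ydpi) / 2f / 25.4f // averaged, so approximate
            for (i in 0 until event.pointerCount) {
                val majorMm = event.getTouchMajor(i) / pxPerMm
                val minorMm = event.getTouchMinor(i) / pxPerMm
                val areaMm2 = (Math.PI / 4.0).toFloat() * majorMm * minorMm // ellipse area
                Log.d(
                    "TouchSize",
                    "pointer=$i size=${event.getSize(i)} " +
                        "pressure=${event.getPressure(i)} areaMm2=${"%.1f".format(areaMm2)}"
                )
            }
            return true
        }
    }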
Some of that is just a brain dump, but I think you'll need to look out for it as you continue. I'm not sure exactly what you're trying to do, but best of luck.
I'm developing an Android application in which a user may be rapidly tapping with multiple fingers. Initial testing on the Nexus 10 tablet reveals that a tap rapidly followed by another tap nearby is often interpreted as a single touch-and-slide motion, with both touch points attributed to a single pointer index. This appears to be a hardware issue, or at least an issue with some low-level touch processing in Android.
Similarly, I've seen tapping with two fingers placed together simultaneously being interpreted as a single touch from one large finger.
Has anyone experienced similar difficulty with rapid pairs of touches being mistakenly interpreted as a single touch? I'm hoping someone has clever workarounds, or ways to infer that what is reported as a single moving touch may actually be two separate taps (one crude idea I've been considering is sketched below).
To be clear, my observations are based on visualizing touches by turning on "Pointer Location" from the developer settings, and so are independent of my application.
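The only workaround I've come up with so far is a crude distance heuristic: if a single pointer jumps much farther between two consecutive events than a finger could plausibly slide, treat it as a second tap that got merged into the first pointer. A rough sketch is below; the threshold is a guess and would need per-device tuning, and only the primary pointer is tracked:

    import android.view.MotionEvent

    // Heuristic: flag MOVE events where the pointer "teleports" farther than a
    // plausible slide distance, and treat them as a merged second tap.
    class MergedTapSplitter(private val jumpThresholdPx: Float = 150f) {

        private var lastX = Float.NaN
        private var lastY = Float.NaN

        /** Returns true when this event looks like a merged second tap rather than a slide. */
        fun looksLikeSecondTap(event: MotionEvent): Boolean {
            val jumped = !lastX.isNaN() &&
                event.actionMasked == MotionEvent.ACTION_MOVE &&
                Math.hypot((event.x - lastX).toDouble(), (event.y - lastY).toDouble()) > jumpThresholdPx
            if (event.actionMasked == MotionEvent.ACTION_UP) {
                lastX = Float.NaN
                lastY = Float.NaN
            } else {
                lastX = event.x
                lastY = event.y
            }
            return jumped
        }
    }

It obviously can't recover the exact position or timing of the missed tap, but it would at least let me react to the second contact.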
Is there any technique to differentiate between a finger touch and a palm resting on the surface while drawing on a touch surface, in iOS or any other touch-based OS?
Edit: added the android tag, as the question is relevant to any touch-based OS.
Thanks
I can't speak for Android systems, but on iOS I do not believe there is any definitive distinction, as all touches, regardless of surface area, are resolved to individual points.
That said, there are ways you could determine whether a specific touch is a palm or a finger. If you are making a handwriting app, for instance, and you ask the user whether they are left- or right-handed, you could ignore all touches to the right/left of the touch that would logically be the finger/stylus. Your method for eliminating the "palm touches" would be specific to the app you are writing.
Something else I have noticed (although I hasten to add this is not particularly reliable!) is that a palm touch will often result in many small touches around the palm area rather than a single touch due to the shape of the palm, small palm movement/rolling, etc. If you know you only ever need to receive and use one touch, this may be something you can manipulate.
In the end, any technique you use to differentiate a palm touch from a finger touch will probably be logical rather than API/hardware-provided. Again, this is iOS-specific; I am not at all familiar with Android's touch system.
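That said, since the question was also tagged android, here is a very rough sketch of this kind of app-level filtering expressed against the Android MotionEvent API. The size threshold is arbitrary and I can't say how well the reported sizes behave on any given device:

    import android.view.MotionEvent

    // App-level palm filter sketch: treat unusually large contacts as palm and
    // only draw with contacts below a size threshold. 0.3 is an arbitrary value
    // for MotionEvent.getSize() and needs per-device tuning.
    const val PALM_SIZE_THRESHOLD = 0.3f

    fun isLikelyPalm(event: MotionEvent, pointerIndex: Int): Boolean =
        event.getSize(pointerIndex) > PALM_SIZE_THRESHOLD

    /** Indices of pointers that look like finger/stylus contacts rather than a palm. */
    fun likelyFingerPointers(event: MotionEvent): List<Int> =
        (0 until event.pointerCount).filter { !isLikelyPalm(event, it) }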
I am trying to make a mobile game with Adobe AIR. Everything went smoothly until I encountered problems with mouse click events. I see a very slow response on a button/movie clip when adding a mouse click event listener to mimic tap/touch on mobile. The delay between the player's finger tapping the button/movie clip and the handler executing is 1-2 seconds (really annoying).
So I wondered whether I should switch to the touch_tap event instead of the mouse click event and hoped things would change for the better. Unfortunately it doesn't really make a difference.
I have played a lot of games on Android (and I think they are made with Flash) and I cannot understand why their tap response time is unbelievably fast (almost instant after my touch/tap on the button/movie clip). Could anyone help me shed light on this?
I don't think handling TouchEvent makes THAT much difference, since the Flex framework currently deals with MouseEvents and there's basically no such delay there.
What it reminds me of, though, is a rare bug I met in some previous versions of Flash Player and (desktop) AIR where mouse and keyboard events were delayed by up to several minutes(!) on some specific hardware in some specific views (I mean a particular set of objects on the screen). The important point here is that the framerate was high and constant(!), so it's not a general performance issue. Even though Adobe says it was fixed, I'm not really sure, as they didn't show any certainty about it.
So check whether the framerate is OK; if it is, you've hit that nasty runtime bug... and you should try playing around with the display list, blend modes, and cacheAsBitmap (if present).
Make sure you disable doubleClick. Sometimes this is the reason for a delayed response... I guess the double-click timeout for touch input is even longer than on desktop...
In AS3 on Android is it bad from a performance perspective to attach mouse event listeners to individual sprites rather than to the stage?
I am writing an app for an Android phone using AS3 in Flash Builder. The app has multiple screens that respond to user touch. The screens are arranged in a hierarchy and show list data so that when you click on an item in a list you are presented with a new screen with a new sub list on it.
I have been using an event listener to detect mouse/touch input and, based on something I read indicating that performance is much better if you keep the number of objects you are listening to to a minimum, I have attached each screen's mouse listeners to the stage object.
This all works fine, but I am finding that as I move between screens (and they get popped or pushed onto the display stack) I have to keep track of a lot of adding and removing of listeners on the stage object. If I don't, windows higher up the hierarchy than the current screen keep receiving mouse events.
If I instead attached listeners to the sprites in each window, then when a window was removed from the display (even though it is kept in memory, ready to be popped back when a child window is closed) it would not receive any mouse events.
Performance doesn't seem to be impacted by using listeners directly on sprites when testing on my HTC phone; however, I obviously don't know what it will be like on other phones. Does anyone have any experience either way, or a view on the best approach?
I would recommend using listeners on specific sprites, as that will be easier to code and maintain. Coordinate conversion can also get cumbersome to manage with different screen/sprite sizes, and removing/adding listeners may not be so easy to maintain as you add more screens.
As for performance, I don't believe listeners will have any impact; a listener is just a function that gets called when the sprite is clicked. If you don't set a listener, the OS registers the click anyway and sends it down to the lower-level view until it eventually finds a listener for the event, or drops it.