I'm trying to measure the pressure of an object on an Android screen, but it can't be detected using an onTouch method because it isn't anything like a finger (it's something like a bottle, a book, etc.). Is there a way around this, or can the touch screen simply not register these kinds of objects?
I don't need a specific pressure measurement, just either 0 or 1, but nothing I've found ever addresses this.
To give you a complete answer:
"A capacitive touch screen is a control display that uses the conductive touch of a human finger or a specialized device for input.
Unlike resistive and surface wave panels, which can sense input from either fingers or simple styluses, capacitive touch screen panels must be touched with a finger or a special capacitive pen or glove. The panel is coated with a material that can store electrical charges and the location of touch to the screen is signaled by the change in capacitance in that location."
Most devices these days use capacitive screens, so you probably won't be able to achieve what you want. (Quote taken from https://whatis.techtarget.com/definition/capacitive-touch-screen.)
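If the object is conductive enough to register at all, your 0/1 signal is just a standard touch listener. A minimal sketch (ContactProbeActivity and the onContact callback are hypothetical names, not anything from your project):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.MotionEvent;
    import android.view.View;

    // Minimal 0/1 contact probe: fills the screen with a plain View and
    // reports 1 while anything the capacitive sensor can see is touching
    // it, 0 otherwise. A non-conductive bottle or book will never fire.
    public class ContactProbeActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            View view = new View(this);
            view.setOnTouchListener((v, event) -> {
                switch (event.getActionMasked()) {
                    case MotionEvent.ACTION_DOWN:
                        onContact(1); // contact began
                        break;
                    case MotionEvent.ACTION_UP:
                    case MotionEvent.ACTION_CANCEL:
                        onContact(0); // contact ended
                        break;
                }
                return true;
            });
            setContentView(view);
        }

        private void onContact(int state) { /* hypothetical 0/1 consumer */ }
    }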
Related
I am working on a project for which I have to measure the touch surface area. This works on both Android and iOS as long as the surface area is small (e.g. using the thumb). However, when the touch area increases (e.g. using the ball of the hand), the touch events are no longer passed to the application.
On my iPhone X (iOS 14.6), the events were no longer passed to the app when UITouch.majorRadius exceeded 170, and on my Android device (Redmi 9, Android 10) when MotionEvent.getPressure exceeded 0.44.
I couldn't find any documentation on this behavior, but I assume it's there to protect against erroneous input.
I looked in the settings of both devices, but I did not find a way to turn this behavior off.
Is there any way to still receive touch events when the touch area is large?
Are there other Android or iOS devices that don't show this behavior?
I would appreciate any help.
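For reference, the Android side of my measurement is roughly the following (the class name is just for illustration; the iOS side reads UITouch.majorRadius analogously):

    import android.content.Context;
    import android.util.Log;
    import android.view.MotionEvent;
    import android.view.View;

    // Probe view that logs the raw per-pointer readings described above.
    // Past a device-specific threshold the events simply stop arriving,
    // so the log goes quiet.
    public class TouchAreaProbe extends View {
        public TouchAreaProbe(Context context) {
            super(context);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            for (int i = 0; i < event.getPointerCount(); i++) {
                Log.d("TouchArea", "pointer=" + event.getPointerId(i)
                        + " pressure=" + event.getPressure(i)
                        + " size=" + event.getSize(i));
            }
            return true; // keep receiving MOVE/UP events
        }
    }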
So I've actually done some work on touch with unusual areas. I was focusing on multitouch, but it's somewhat comparable. The quick answer is no, because natively, at the hardware level, there is no such thing as a "touch event".
You have capacitance changes being detected. That data is HEAVILY filtered by the drivers, which try to take capacitance differences and turn them into events. The OS does not deliver raw capacitance data to the apps; it assumes you always want the filtered versions. And even if it did deliver that, it would be very hardware specific, and you'd have to reinterpret it into touch events yourself.
Here are a few things you're going to find out about touch:
1) Pressure on Android isn't what you should be looking at. Pressure is meant for things like styluses. You want getSize, which returns the normalized contact size (see the sketch after this list). Pressure is more about how hard someone is pushing, which doesn't really apply to finger touches these days.
2) Your results will vary GREATLY by hardware. Every sensor differs from every other one.
3) The OS will confuse large touch areas and multitouch. Part of this is because when you make contact with a large area like the heel of your hand, the contact is not uniform throughout. That means the capacitances will differ, which will make it think it's seeing multiple fingers. When doing heavy multitouch, you'll see the reverse as well (several nearby fingers look like one large touch). This is because the difference between the two, on a physical level, is hard to tell.
4) We were writing an app that enabled 10-finger multitouch actions on keyboards. We found that we missed high-count multitouch from women (especially Asian women) more than others; hand size greatly affected this, as did how much they hovered versus pressed down. The idea that there were physical capacitance differences in the skin was considered. We believed it was more due to touching the device more lightly, but we can't rule out actual physical differences.
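To make point 1) concrete, here's a minimal sketch; the threshold is a made-up illustration, not a calibrated value, and per point 2) it would have to be tuned per device:

    import android.view.MotionEvent;

    // Sketch of point 1): prefer getSize() (normalized 0..1 contact area)
    // over getPressure() when you care about how much of the screen is
    // covered.
    public final class TouchSizeHelper {
        private static final float LARGE_TOUCH = 0.25f; // hypothetical cutoff

        public static boolean isLargeContact(MotionEvent event) {
            float total = 0f;
            for (int i = 0; i < event.getPointerCount(); i++) {
                total += event.getSize(i);
            }
            // Per point 3), one big blob and several nearby fingers can look
            // alike, so total size across pointers is a more stable signal
            // than pointer count.
            return total > LARGE_TOUCH;
        }
    }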
Some of that is just a brain dump, but I think you'll need to look out for these things as you continue. I'm not sure exactly what you're trying to do, but best of luck.
Given that I can read the effects of gravity on my device along three axes, I'm looking for some math that will give me a more logical view of what's happening to the device in my hands. I want to create a two-dimensional control based on tilting the device forward and back, and tilting it left and right.
The seemingly complex part is that I'd like to have the controls behave in a predictable way regardless of the starting position that the device happened to be in when starting the controls. For instance, if the user is in bed holding the phone upside down above their head, everything should still work the same from the user's perspective even though the numbers coming off of the accelerometer will be entirely different. I can envision some sort of transformation that would yield numbers that look like the device is starting off on a flat table, given the actual state of the device when the controls start to be used.
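A sketch of one way to get that transformation on Android (names are illustrative; this uses the rotation-vector sensor rather than the raw accelerometer values, since it handles arbitrary starting poses like the upside-down-in-bed case more gracefully): capture the device's rotation matrix at the moment the controls start, then report every later pose relative to that baseline.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Reports pitch/roll relative to whatever orientation the device had
    // when the controls started, so "flat on a table" and "held overhead
    // in bed" both begin at (0, 0). Register this listener for
    // Sensor.TYPE_ROTATION_VECTOR.
    public class RelativeTiltListener implements SensorEventListener {
        private float[] baseline;                     // rotation matrix at start
        private final float[] current = new float[9];
        private final float[] angleChange = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
            SensorManager.getRotationMatrixFromVector(current, event.values);
            if (baseline == null) {
                baseline = current.clone();           // first reading = neutral pose
                return;
            }
            // Difference between the current pose and the starting pose:
            // angleChange[1] ~ pitch (forward/back), angleChange[2] ~ roll
            // (left/right), both in radians.
            SensorManager.getAngleChange(angleChange, current, baseline);
            onTilt(angleChange[1], angleChange[2]);   // hypothetical callback
        }

        private void onTilt(float pitch, float roll) { /* feed the control */ }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }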
I'm developing an Android application in which a user may be rapidly tapping with multiple fingers. Initial testing on the Nexus 10 tablet reveals that a tap rapidly followed by another tap nearby is often interpreted as a single touch-and-slide motion, with both touch points attributed to a single pointer index. This appears to be a hardware issue, or at least an issue with some low-level touch processing in Android.
Similarly, I've seen tapping with two fingers placed together simultaneously being interpreted as a single touch from one large finger.
Has anyone experienced similar difficulty with rapid pairs of touches being mistakenly interpreted as a single touch? I'm hoping that someone has a clever workaround, or a way to infer that a "moving" touch may actually be two separate taps?
To be clear, my observations are based on visualizing touches by turning on "Pointer Location" from the developer settings, and so are independent of my application.
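One workaround sketch (the threshold is made up and would need per-device tuning): treat an implausibly large jump between consecutive samples of the primary pointer as a lift-and-retap rather than a slide.

    import android.view.MotionEvent;

    // Heuristic: if the primary pointer "moves" farther between two
    // consecutive events than a finger plausibly could, treat it as two
    // separate taps that the firmware merged into one slide.
    public final class TapSplitDetector {
        private static final float MAX_SLIDE_PX = 150f; // hypothetical threshold
        private float lastX = Float.NaN;
        private float lastY = Float.NaN;

        /** Returns true if this MOVE event looks like a merged second tap. */
        public boolean looksLikeSecondTap(MotionEvent event) {
            boolean jumped = false;
            if (event.getActionMasked() == MotionEvent.ACTION_MOVE
                    && !Float.isNaN(lastX)) {
                float dx = event.getX() - lastX;
                float dy = event.getY() - lastY;
                jumped = (dx * dx + dy * dy) > MAX_SLIDE_PX * MAX_SLIDE_PX;
            }
            lastX = event.getX();
            lastY = event.getY();
            if (event.getActionMasked() == MotionEvent.ACTION_UP) {
                lastX = Float.NaN;  // gesture over; reset for the next one
                lastY = Float.NaN;
            }
            return jumped;
        }
    }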
I am doing some testing in the Chrome browser on my Nexus 7 tablet, and found the curious webkitForce property in the touch object provided by touch events.
Sure enough, it appears to be a scalar, usually set between about 0.05 and 1.2, that scales with the finger pressure on the capacitive touch screen. It works with multiple simultaneous touches; I have a test page that draws circles scaled to this value, and it correlates with finger pressure/position quite well, providing a three-dimensional quantity for each touch.
There is some slight inconsistency: changing pressure on one finger can affect the reading for another finger that is close to it on either axis. This looks to be a limitation of either the capacitive touch hardware or the software that processes its output.
I have googled this and found practically nothing. It is really strange that Google does not have any sort of site (that I can find through the search engine) that documents this.
So my question is where can I find more information about this neat little feature? How come iOS devices (with arguably more responsive and capable touch screens) provide nothing of this sort? Which Android devices and OS/Browser combinations provide this feature?
.force is part of the Touch Events v2 spec: http://dvcs.w3.org/hg/webevents/raw-file/default/touchevents.html#widl-Touch-force
"a relative value of pressure applied, in the range 0 to 1, where 0 is no pressure, and 1 is the highest level of pressure the touch device is capable of sensing; 0 if no value is known. In environments where force is known, the absolute pressure represented by the force attribute, and the sensitivity in levels of pressure, may vary."
Is there any technique to differentiate between a finger touch and a palm resting on the surface while drawing on a touch screen, in iOS or any touch-based OS?
Edit: Added the android tag, as the question is relevant to any touch-based OS.
Thanks
I can't speak for Android systems, but on iOS I do not believe there is any definitive distinction, as all touches, regardless of surface area, are resolved to individual points.
That said, there are ways you could determine whether a specific touch is a palm or a finger. If you are making a handwriting app, for instance, you could ask the user whether they are left- or right-handed and then ignore all touches to the right/left of the touch that would logically be the finger/stylus. Your method for eliminating the "palm touches" will be specific to the app you are writing.
Something else I have noticed (although I hasten to add this is not particularly reliable!) is that a palm touch will often result in many small touches around the palm area rather than a single touch due to the shape of the palm, small palm movement/rolling, etc. If you know you only ever need to receive and use one touch, this may be something you can manipulate.
In the end, any technique you use to differentiate a palm touch from a finger touch will probably be heuristic rather than something the API or hardware gives you. Again, this is iOS-specific; I am not at all familiar with Android's touch system.
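Since the question is also tagged android, here is a rough sketch of the handedness heuristic on that platform (the class and method names are illustrative only):

    import android.view.MotionEvent;

    // Heuristic: for a right-handed user, treat the left-most pointer as
    // the pen/finger and ignore anything to its right as probable palm
    // (mirror the comparison for left-handed users).
    public final class PalmRejection {
        /** Returns the pointer index most likely to be the drawing finger. */
        public static int drawingPointerIndex(MotionEvent event, boolean rightHanded) {
            int best = 0;
            for (int i = 1; i < event.getPointerCount(); i++) {
                boolean moreLikely = rightHanded
                        ? event.getX(i) < event.getX(best)   // left-most wins
                        : event.getX(i) > event.getX(best);  // right-most wins
                if (moreLikely) best = i;
            }
            return best;
        }
    }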