I am looking into developing a new input method on the Android platform that would emulate touch input on the screen. Is it possible to create a service that can interact, directly or indirectly, with the touch API to achieve this?
Specifics:
The interactions will come in the form of colour tracking from the camera, which is processed into x/y coordinates and touch: 0/1 events. Is it possible to have these interact with the touchscreen just as if they were touches on the screen itself?
I understand that there may be permission problems with this kind of 'injection' or piggybacking approach.
Also, this is a technical exercise for an experimental report rather than a distributable app or piece of software, so root/modifications are not a problem.
I have searched to no avail on the subject (at least not on the Android platform), and I would like to find out the feasibility/difficulty of the project before undertaking it, so any input would be much appreciated!
I'm sort of guessing here, but MotionEvent has some setter methods like setLocation(float x, float y). There are also MotionEvent.PointerCoords and MotionEvent.PointerProperties to play with.
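One concrete route, given that root is on the table, is to build MotionEvents yourself and push them through Instrumentation.sendPointerSync. A minimal Kotlin sketch, assuming the process can hold the signature-level INJECT_EVENTS permission (e.g. an instrumentation test, the shell user, or a rooted process); injectTap is just an illustrative helper name:

    import android.app.Instrumentation
    import android.os.SystemClock
    import android.view.MotionEvent

    // Injects a synthetic tap at (x, y) into whatever window currently has focus.
    // Call this off the main thread; sendPointerSync blocks until the event is handled.
    fun injectTap(x: Float, y: Float) {
        val instrumentation = Instrumentation()
        val downTime = SystemClock.uptimeMillis()

        val down = MotionEvent.obtain(
            downTime, downTime, MotionEvent.ACTION_DOWN, x, y, /* metaState = */ 0
        )
        instrumentation.sendPointerSync(down)
        down.recycle()

        val up = MotionEvent.obtain(
            downTime, SystemClock.uptimeMillis(), MotionEvent.ACTION_UP, x, y, 0
        )
        instrumentation.sendPointerSync(up)
        up.recycle()
    }

Your colour-tracking loop would then call injectTap(x, y) whenever the touch: 0/1 flag goes to 1.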
Related
I am working on a project for which I have to measure the touch surface area. This works on both Android and iOS as long as the surface area is small (e.g. using the thumb). However, when the touch area increases (e.g. using the ball of the hand), the touch events are no longer passed to the application.
On my iPhone X (iOS 14.6), the events were no longer passed to the app when UITouch.majorRadius exceeded 170. And on my Android device (Redmi 9, Android 10), this happened when MotionEvent.getPressure exceeded 0.44.
I couldn't find any documentation on this behavior, but I assume it's there to protect against erroneous input.
I looked in the settings of both devices, but I did not find a way to turn this behavior off.
Is there any way to still receive touch events when the touch area is large?
Are there other Android or iOS devices that don't show this behavior?
I would appreciate any help.
So I've actually done some work on touch with unusual areas. I was focusing on multitouch, but it's somewhat comparable. The quick answer is no, because natively, at the hardware level, there is no such thing as a "touch event".
You have capacitance changes being detected. That is HEAVILY filtered by the drivers, which try to take capacitance differences and turn them into events. The OS does not deliver raw capacitance data to apps; it assumes you always want the filtered version. And even if it did deliver that data, it would be very hardware-specific, and you'd have to reinterpret it into touch events yourself.
Here are a few things you're going to find out about touch:
1) Pressure on Android isn't what you should be looking at; pressure is meant for things like styluses. You want getSize(), which returns the normalized size (see the sketch after this list). Pressure is more about how hard someone is pushing, which really doesn't apply to finger touches these days.
2) Your results will vary GREATLY by hardware. Every sensor model behaves differently.
3) The OS will confuse large touch areas with multitouch. Part of this is because when you make contact with a large area, like the heel of your hand, the contact is not uniform, so the capacitances differ, which makes the OS think it's seeing multiple fingers. When doing heavy multitouch you'll see the reverse as well (several nearby fingers look like one large touch). This is because the difference between the two, on a physical level, is hard to tell.
4) We were writing an app that enabled 10-finger multitouch actions on keyboards. We found that we missed high-count multitouch from women (especially Asian women) more than from others; hand size greatly affected this, as did how much they hovered versus pressed down. The idea that there were physical capacitance differences in the skin was considered, but we believed it was more due to touching the device more lightly. We can't rule out actual physical differences, though.
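To make points 1) and 2) concrete, here is a minimal Kotlin sketch of reading both values in a custom view (TouchAreaView is just an illustrative name); what the numbers correspond to physically is entirely hardware-dependent, as noted above:

    import android.content.Context
    import android.util.Log
    import android.view.MotionEvent
    import android.view.View

    // Logs the normalized contact size and pressure for every pointer in each event.
    class TouchAreaView(context: Context) : View(context) {
        override fun onTouchEvent(event: MotionEvent): Boolean {
            for (i in 0 until event.pointerCount) {
                val size = event.getSize(i)         // normalized touch area, roughly 0..1
                val pressure = event.getPressure(i) // normalized "pressure", roughly 0..1
                Log.d("Touch", "pointer=$i size=$size pressure=$pressure")
            }
            return true // keep receiving MOVE and UP events
        }
    }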
Some of that is just a brain dump, but I think you'll need to look out for these things as you continue. I'm not sure exactly what you're trying to do, but best of luck.
I want to make a drawing app using Flutter. There is this widget called CustomPaint that allows you to easily have a Canvas and draw on it with your fingers.
Let's say I want to use a tablet with a dedicated stylus: will CustomPaint take pressure sensitivity into account automatically?
If not, what should I do for my app to support the stylus?
I've been looking around for example apps, and the only ones I found don't even mention the possibility of pressure sensitivity, or even plain stylus usage.
Example apps
https://github.com/vemarav/signature
https://github.com/psuzn/draw-it
For basic input handling you would use the GestureDetector widget.
For low-level input detection you can use the Listener widget, which has onPointerDown, onPointerMove, onPointerHover and onPointerUp event listeners (and many more) that you can use to get information about your stylus.
That information is found in the PointerEvent passed to each event listener. One of the fields you can read from PointerEvent is the pressure.
You can find a basic introduction to input detection under Taps, drags, and other gestures.
I don't have much experience in Android development.
I'm considering a big project and before getting deeply into it, I want to check whether my requirements are even possible:
My goal is to manipulate the system by changing the coordinates of a user's touch on the touchscreen. For example: If a user is touching the screen on point (X,Y), I want any opened application to act like the user touched (X+5,Y-3).
I have thought of a few levels at which this might be possible:
the touchscreen driver level, the OS level, or the application level (i.e. a background application).
A big advantage would be to build it in a way that allows as much compatibility as possible.
What is the best/right way to do it?
I'm not looking for a full solution, only a hint regarding the best direction to start digging...
Thanks in advance.
What is the complete flow of events when touching the screen in Android?
As per my understanding, when the user touches the screen:
The touch driver will find the coordinates and pass them to the kernel.
The kernel will pass them to the framework.
The framework will ask the graphics library to perform the zoom and render (after it determines how much to zoom).
How do the drivers, kernel, native libraries, framework and application interact to achieve the desired action? It'd be great to have some light shed on this.
Please take a look at [diagram here]. In a nutshell, these are system calls, and they are handled by the OS.
Hope this will help you out.
Touch Event Flow in Android
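To make the last step of that flow concrete from the application side: the framework hands MotionEvents to the focused view, and the decision of how much to zoom is made in app code before anything is re-rendered. A minimal Kotlin sketch (ZoomableImageView is just an illustrative name):

    import android.content.Context
    import android.view.MotionEvent
    import android.view.ScaleGestureDetector
    import android.widget.ImageView

    // Receives MotionEvents from the framework and turns pinch gestures into a zoom.
    class ZoomableImageView(context: Context) : ImageView(context) {
        private var scale = 1f
        private val detector = ScaleGestureDetector(context,
            object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
                override fun onScale(d: ScaleGestureDetector): Boolean {
                    scale *= d.scaleFactor // how much to zoom is decided here, in app code
                    this@ZoomableImageView.scaleX = scale
                    this@ZoomableImageView.scaleY = scale
                    return true
                }
            })

        override fun onTouchEvent(event: MotionEvent): Boolean {
            detector.onTouchEvent(event)
            return true
        }
    }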
I am developing a simple mobile app for the iPhone and Android platforms, and I am looking for algorithms that would allow me to trigger certain events (functions) when a certain gesture is detected using the internal accelerometer. I work with PhoneGap, which utilizes HTML5 and JavaScript and reads three coordinates (x, y and z) from the accelerometer at a pre-set interval (e.g. every 0.04 sec).
I wrote a simple function that detects a shaking motion, and it works quite well, but it is primitive (it only detects shaking, not the direction), and I want to detect some other gestures, such as:
- tilt (to the left/right)
- shake up/down
- shake left/right
- circular motion
- turn upside down
- etc....
Does anybody have algorithms (or at least mathematical formulas/functions) that can detect these kinds of gestures based on the input values I have (x, y, z and the time interval for each call)?
I am looking for code in any programming language (I will rewrite it in JavaScript myself). Thanks in advance!
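For the simplest items on that list (tilt and upside-down), a full gesture recognizer isn't strictly needed; the gravity components already encode the orientation. A rough Kotlin sketch (the thresholds, axis signs and the Tilt type are assumptions of this sketch; the maths ports directly to JavaScript):

    import kotlin.math.PI
    import kotlin.math.atan2

    enum class Tilt { LEFT, RIGHT, NONE }

    // Roll angle in degrees derived from the gravity vector, device roughly face-up.
    // Axis conventions vary between platforms, so treat the signs as an assumption.
    fun rollDegrees(ax: Double, az: Double): Double = atan2(ax, az) * 180.0 / PI

    fun detectTilt(ax: Double, az: Double, thresholdDeg: Double = 25.0): Tilt {
        val roll = rollDegrees(ax, az)
        return when {
            roll > thresholdDeg  -> Tilt.LEFT   // which side is "left" depends on the device's axes
            roll < -thresholdDeg -> Tilt.RIGHT
            else                 -> Tilt.NONE
        }
    }

    // Face-down / upside-down check: on Android the accelerometer reads roughly +9.81
    // on z when the device lies face-up, so a negative z means it has been flipped over.
    fun isUpsideDown(az: Double): Boolean = az < 0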
Dynamic Time Warping (DTW) does a good job; however, I would recommend using Fast Dynamic Time Warping (FastDTW). Especially for mobile scenarios, FastDTW is very suitable!
For a detailed version, take a look at this research paper: http://cs.fit.edu/~pkc/papers/tdm04.pdf
Edit: Some time ago, I wrote my thesis about 3D gestures for controlling devices in a smart-home setting. See it in action here (there is a link to the PDF, too). I used FastDTW for recognizing gestures on an iPhone.
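For reference, the unoptimized algorithm that FastDTW approximates is only a few lines. Here is a plain O(n*m) DTW sketch over 3-axis samples, written in Kotlin purely for illustration; Sample and dtw are illustrative names:

    import kotlin.math.min
    import kotlin.math.sqrt

    // One accelerometer reading.
    data class Sample(val x: Double, val y: Double, val z: Double)

    private fun dist(a: Sample, b: Sample): Double {
        val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
        return sqrt(dx * dx + dy * dy + dz * dz)
    }

    // Plain O(n*m) DTW cost between a recorded template gesture and a live reading.
    fun dtw(template: List<Sample>, reading: List<Sample>): Double {
        val n = template.size
        val m = reading.size
        val cost = Array(n + 1) { DoubleArray(m + 1) { Double.POSITIVE_INFINITY } }
        cost[0][0] = 0.0
        for (i in 1..n) {
            for (j in 1..m) {
                val d = dist(template[i - 1], reading[j - 1])
                cost[i][j] = d + min(cost[i - 1][j], min(cost[i][j - 1], cost[i - 1][j - 1]))
            }
        }
        return cost[n][m]
    }

Record one template sequence per gesture, then classify a live window of samples by the template with the smallest dtw cost.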
You might want to try dynamic time warping. An illustrative example is here.
If I may be so bold, FastDTW (and the related but different FTW of Sakurai and Faloutsos) are not good solutions. If you constrain the warping (a):
- Using a lower bound, DTW is as fast as Euclidean distance (b, c).
- The accuracy will improve (a, b).
For constrained warping, FTW and FastDTW are actually slower than brute force because of their overhead (Ira Assent, among others, has shown this).
a) Ratanamahatana, C. A. and Keogh, E. (2004). Everything You Know About Dynamic Time Warping Is Wrong.
b) Wang, X., Ding, H., Trajcevski, G., Scheuermann, P. and Keogh, E. J. (2010). Experimental Comparison of Representation Methods and Distance Measures for Time Series Data. CoRR abs/1012.2789.
c) http://www.cs.ucr.edu/~eamonn/LB_Keogh.htm
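For completeness, here is what the lower-bounding idea behind (b) and (c) looks like in practice: a Kotlin sketch of the LB_Keogh bound under a Sakoe-Chiba band of half-width r (the names and the squared-distance convention are assumptions of this sketch). If the bound already exceeds the best DTW distance found so far, the full DTW computation for that candidate can be skipped:

    import kotlin.math.max
    import kotlin.math.min

    // LB_Keogh lower bound on the (squared) constrained DTW distance, warping window r.
    fun lbKeogh(query: DoubleArray, candidate: DoubleArray, r: Int): Double {
        require(query.size == candidate.size) { "sequences must have equal length" }
        var bound = 0.0
        for (i in query.indices) {
            // Envelope of the candidate inside the band around position i.
            var upper = Double.NEGATIVE_INFINITY
            var lower = Double.POSITIVE_INFINITY
            for (j in max(0, i - r)..min(candidate.size - 1, i + r)) {
                upper = max(upper, candidate[j])
                lower = min(lower, candidate[j])
            }
            val q = query[i]
            if (q > upper) bound += (q - upper) * (q - upper)
            else if (q < lower) bound += (q - lower) * (q - lower)
        }
        return bound
    }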