Android: MotionEvent - Need some clarification on GetHistoricalX and GetHistoricalY

In my application, I need to draw strokes to the view with touches. I have to save the previous coordinates on touch down and touch move (after the touch move is processed). When I looked at the API, I found GetHistoricalX and GetHistoricalY.
1) How do these historical data work? Will they ever be removed?
2) Does it start keeping all the historical data once the touch starts moving?
Since I am using Xamarin.Forms, which also targets iOS: does iOS have the same thing as this?

Android:
On Android, GetHistoricalX|Y will contain the X/Y samples that have not been reported since the last ACTION_MOVE event (i.e. these are batched into a single touch event for efficiency).
The coordinates are "historical" only insofar as they are older than the current coordinates in the batch; however, they are still distinct from any other coordinates reported in prior motion events. To process all coordinates in the batch in time order, first consume the historical coordinates then consume the current coordinates.
Note: Since there is no standard input sampling rate defined for /dev/input/event0, the rate is determined by the hardware developer and how their digitizer grid driver is written/configured. Android will then collect the number of samples available and offer those to the developer within the historical data along with the most current X/Y sample. If anyone knows how to get this frequency from the OS, I would love to know ;-)
You can use GetHistorySize to get the number of "points" available, process those first and then process the current X/Y, but remember these are only the locations reported since the last batched move event.
There is sample Java code under the Batching section at https://developer.android.com/reference/android/view/MotionEvent.html
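Along the same lines, here is a minimal sketch (my own, not the documentation's sample) of consuming the batched historical samples before the current one in order to extend a stroke; the StrokeView class and its fields are hypothetical:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.Path;
    import android.view.MotionEvent;
    import android.view.View;

    // Hypothetical custom view that extends a stroke from touch input,
    // consuming batched historical samples in time order before the current sample.
    public class StrokeView extends View {
        private final Path strokePath = new Path();
        private final Paint paint = new Paint();

        public StrokeView(Context context) {
            super(context);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(4f);
        }

        @Override
        public boolean onTouchEvent(MotionEvent ev) {
            switch (ev.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:
                    strokePath.moveTo(ev.getX(), ev.getY());
                    break;
                case MotionEvent.ACTION_MOVE: {
                    // Consume the historical samples batched since the last ACTION_MOVE
                    // (oldest to newest)...
                    final int historySize = ev.getHistorySize();
                    for (int h = 0; h < historySize; h++) {
                        strokePath.lineTo(ev.getHistoricalX(h), ev.getHistoricalY(h));
                    }
                    // ...then the current sample for this event.
                    strokePath.lineTo(ev.getX(), ev.getY());
                    break;
                }
            }
            invalidate();
            return true;
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.drawPath(strokePath, paint);
        }
    }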
iOS:
On iOS, the number of touch events reported is based on a 60 Hz sampling rate of the digitizer. Some iDevices have a faster frequency (newer iPads at 120 Hz and the iPad Pro at 240 Hz). These "extra" points are reported within the coalescedTouchesForTouch method (Xamarin.iOS = GetCoalescedTouches).
Note: iOS even has predictedTouchesForTouch (Xamarin.iOS = GetPredictedTouches), which might be available within the UIEvent. These can be used to "pre-draw" where the user might be moving to; Apple has dev code samples of this for the Apple Pencil to prevent a visual "lag" from the tip of the pencil...
Net Result:
In the end, if you need to preserve a history of X/Y touch locations in order to replay them, you will need to store these yourself, as neither iOS nor Android is going to buffer them outside of a touch event.

Related

How to receive touch events when touch area is very large (e.g. ball of the hand)? (Android/iOS)

I am working on a project for which I have to measure the touch surface area. This works for both Android and iOS as long as the surface area is small (e.g. using the thumb). However, when the touch area increases (e.g. using the ball of the hand), the touch events are no longer passed to the application.
On my iPhone X (software version 14.6), the events were no longer passed to the app when UITouch.majorRadius exceeded 170, and on my Android device (Redmi 9, Android version 10) when MotionEvent.getPressure exceeded 0.44.
I couldn't find any documentation on this behavior, but I assume it's there to protect against erroneous inputs.
I looked in the settings of both devices, but I did not find a way to turn this behavior off.
Is there any way to still receive touch events when the touch area is large?
Are there other Android or iOS devices that don't show this behavior?
I would appreciate any help.
So I've actually done some work on touch with unusual areas. I was focusing on multitouch, but it's somewhat comparable. The quick answer is no, because natively, to the hardware, there is no such thing as a "touch event".
You have capacitance changes being detected. That is HEAVILY filtered by the drivers, which try to take capacitance differences and turn them into events. The OS does not deliver raw capacitance data to the apps; it assumes you always want the filtered versions. And even if it did deliver that, it would be very hardware specific, and you'd have to reinterpret it into touch events yourself.
Here are a few things you're going to find out about touch:
1) Pressure on Android isn't what you should be looking at. Pressure is meant for things like styluses. You want getSize, which returns the normalized size. Pressure is more about how hard someone is pushing, which really doesn't apply to finger touches these days (a small logging sketch appears at the end of this answer).
2) Your results will vary GREATLY by hardware. Every different sensor will behave differently from every other.
3) The OS will confuse large touch areas and multitouch. Part of this is because when you make contact with a large area like the heel of your hand, the contact is not uniform throughout, which means the capacitances will differ, which will make it think it's seeing multiple fingers. Also, when doing heavy multitouch you'll see the reverse (several nearby fingers look like 1 large touch). This is because the difference between the two, on a physical level, is hard to tell.
4) We were writing an app that enabled 10-finger multitouch actions on keyboards. We found that we missed high-level multitouch from women (especially Asian women) more than others - hand size greatly affected this, as did how much they hovered vs. pressed down. The idea that there were physical capacitance differences in the skin was considered; we believed it was more due to touching the device more lightly, but we can't rule out actual physical differences.
Some of that is just a dump of things I think you'll need to look out for as you continue. I'm not sure exactly what you're trying to do, but best of luck.
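To illustrate point 1, here is a minimal logging sketch comparing getSize and getPressure per pointer; the TouchAreaLogger class and where you call it from are assumptions:

    import android.util.Log;
    import android.view.MotionEvent;

    // Minimal sketch: log normalized touch size vs. pressure for each pointer.
    // Call this from a View's onTouchEvent (hypothetical integration point).
    public final class TouchAreaLogger {
        private static final String TAG = "TouchArea";

        public static void log(MotionEvent ev) {
            for (int i = 0; i < ev.getPointerCount(); i++) {
                // getSize(): normalized contact area (0..1), a better proxy for "how big"
                // getPressure(): how hard, mostly meaningful for styluses / pressure hardware
                Log.d(TAG, "pointer " + ev.getPointerId(i)
                        + " size=" + ev.getSize(i)
                        + " pressure=" + ev.getPressure(i)
                        + " majorAxis=" + ev.getTouchMajor(i));
            }
        }
    }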

Does Tensorflow TFDetect demo track objects end-to-end?

This question is regarding the TFDetect demo, which is available as part of the TensorFlow Android Camera Demo. The description says:
Demonstrates a model based on Scalable Object Detection using Deep Neural Networks to localize and track people in the camera preview in real-time.
When I ran the demo, the app was drawing boxes around detected objects with a fractional number assigned to each object (I guess the confidence score). My question is: how is tracking being performed here? Is it multiple object tracking (described here), where there is an ID assigned to each track and the tracks are stored in memory, or is it just detection of objects across multiple frames to see how the object is moving?
Please correct me if I missed out on anything.
Two main things are going on here:
1: detection is being done in a background thread. This takes on the order of 100-1000ms depending on the device, so it is not sufficient on its own to maintain smooth tracking.
2: tracking is being done in the UI thread. This generally takes < 5ms per frame, and can be done on every frame once the position of objects is known. The tracker implements pyramidal Lucas-Kanade optical flow on the median movement of FAST features -- press the volume key and you'll see the individual features being tracked.
The tracker runs on every frame, storing optical flow keypoints at every timestamp. Thus, when a detection comes in, the tracker is able to figure out where it currently is by walking the position forward along the collected keypoint deltas. There is also some non-max suppression being done by MultiBoxTracker.
Once an object is tracked by the tracker, no further input from the detector is required. The tracker will automatically drop the track when the normalized cross-correlation with the original detection drops below a certain threshold, or update the position/appearance when the detector finds a better match with significant overlap.
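Not the demo's actual code, but a rough sketch of the threading split described above (slow detector on a background thread, lightweight per-frame tracker on the camera/UI thread); the Detector and Tracker interfaces are hypothetical stand-ins, not classes from the demo:

    import android.os.Handler;
    import android.os.HandlerThread;

    // Rough sketch of the detect-in-background / track-every-frame pattern.
    public class DetectTrackPipeline {
        private final HandlerThread detectorThread = new HandlerThread("detector");
        private final Handler detectorHandler;
        private final Detector detector;   // slow: ~100-1000 ms per frame
        private final Tracker tracker;     // fast: optical-flow update, < 5 ms per frame
        private volatile boolean detectionPending = false;

        public DetectTrackPipeline(Detector detector, Tracker tracker) {
            this.detector = detector;
            this.tracker = tracker;
            detectorThread.start();
            detectorHandler = new Handler(detectorThread.getLooper());
        }

        // Called for every camera frame, on the camera/UI thread.
        public void onFrame(final byte[] frame, final long timestampNs) {
            tracker.trackFrame(frame, timestampNs);   // store keypoint deltas for every frame
            if (!detectionPending) {
                detectionPending = true;
                detectorHandler.post(new Runnable() {
                    @Override public void run() {
                        // Slow detection on an older frame; the tracker can later walk the
                        // resulting boxes forward along the deltas it stored in the meantime.
                        tracker.addDetections(detector.detect(frame), timestampNs);
                        detectionPending = false;
                    }
                });
            }
        }

        interface Detector { Object detect(byte[] frame); }
        interface Tracker {
            void trackFrame(byte[] frame, long timestampNs);
            void addDetections(Object detections, long frameTimestampNs);
        }
    }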

Detect pattern of motion on an Android device

I want to detect a specific pattern of motion on an Android mobile phone, e.g. if I do five sit-stands.
[Note: I am currently detecting the motion, but the motion in all directions looks the same.]
What I need is:
I need to differentiate the motion downward, upward, forward and backward.
I need to find the height of the mobile phone from ground level (and the height of the person holding it).
Is there any sample project which has pattern motion detection implemented?
This isn't impossible, given that the accuracy of the accelerometers and gyroscopes in phones has improved a lot, but it may not be extremely accurate.
What your app will be doing is taking sensor data and doing a regression analysis.
1) You will need to build a model of data that you classify as five sit and stands. This could be done by asking the user to do five sit and stands, or by loading the app with a more fine-tuned model from data that you've collected beforehand. There may be tricks you could do, such as loading several models of people with different heights, and asking the user to submit their own height in the app, to use the best model.
2) When run, your app will be trying to fit the data from the sensors (Android has great libraries for this) to the model that you've made. Hopefully, when the user performs five sit-stands, he will generate a set of motion data similar enough to your definition of five sit-stands that your algorithm accepts it as such.
A lot of the work here is assembling and classifying your model, and playing with it until you get an acceptable accuracy. Focus on what makes a stand-sit unique to other up and down motions - For instance, there might be a telltale sign of extending the legs in the data, followed by a different shape for straightening up fully. Or, if you expect the phone to be in the pocket, you may not have a lot of rotational motion, so you can reject test sets that registered lots of change from the gyroscope.
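To make the "fit the live data to your model" idea a bit more concrete, here is a small sketch (my own assumption, not the only approach) that scores a window of live accelerometer magnitudes against a recorded template using normalized cross-correlation:

    // Minimal sketch: score a live window of accelerometer magnitudes against a
    // recorded "five sit-stands" template using normalized cross-correlation.
    // A score close to 1 means the live motion closely resembles the template.
    public final class TemplateMatcher {

        public static double normalizedCrossCorrelation(double[] template, double[] window) {
            if (template.length != window.length) {
                throw new IllegalArgumentException("template and window must be the same length");
            }
            double meanT = mean(template);
            double meanW = mean(window);
            double num = 0, denT = 0, denW = 0;
            for (int i = 0; i < template.length; i++) {
                double dt = template[i] - meanT;
                double dw = window[i] - meanW;
                num += dt * dw;
                denT += dt * dt;
                denW += dw * dw;
            }
            double den = Math.sqrt(denT * denW);
            return den == 0 ? 0 : num / den;
        }

        private static double mean(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum / xs.length;
        }
    }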
It is impossible. You can recognize downward and upward motion by comparing acceleration with the gravity vector, but how do you know whether your phone is in your back pocket when you rise, or just in your waving hand when you say hello? Was it 5 stand-ups or 5 hellos?
Forward and backward are even more unpredictable. What is forward for an upside-down phone? What is forward at all, from the phone's point of view?
And ground level, as well as height, is completely out of reach of measurement. The phone will move and produce accelerations in exactly the same way for a dwarf or a giant - it depends more on the person's behavior than on their height.
It's a topic of research and probably I'm way too late to post it here, but I'm foraging the literature anyway, so what?
All kinds of machine learning approaches have been applied to the issue; I'll mention some along the way. Andrew Ng's MOOC on machine learning gives you an entry point to the field and into Matlab/Octave that you can instantly put into practice, and it demystifies the monsters too ("support vector machine").
I'd like to detect whether somebody is drunk from phone acceleration and maybe angle, so I'm flirting with neural networks for the issue (they're good for basically every issue, if you can afford the hardware), since I don't want to assume pre-defined patterns to look for.
Your task could, it seems, be approached pattern-based - an approach that has been applied to classify golf swings, dancing, everyday walking patterns, and twice to drunk-driving detection, where one paper addresses the issue of finding a baseline for what is actually longitudinal motion as opposed to every other direction, which might help you find the baselines you need, like what ground level is.
It is a dense shrub of aspects and approaches; below are just some more.
Lim et al. 2009: Real-time End Point Detection Specialized for Acceleration Signal
He & Yin 2009: Activity Recognition from Acceleration Data Based on Discrete Cosine Transform and SVM
Dhoble et al. 2012: Online Spatio-Temporal Pattern Recognition with Evolving Spiking Neural Networks utilising Address Event Representation, Rank Order, and Temporal Spike Learning
Panagiotakis et al.: Temporal segmentation and seamless stitching of motion patterns for synthesizing novel animations of periodic dances
This one uses visual data, but walks you through a Matlab implementation of a neural network classifier:
Symeonidis 2000: Hand Gesture Recognition Using Neural Networks
I do not necessarily agree with Alex's response. This is possible (although maybe not as accurate as you would like) using the accelerometer, device rotation, and a LOT of trial/error and data mining.
The way I see this working is by defining a specific way that the user holds the device (or the device is locked and positioned on the user's body). As they go through the motions, the orientation combined with acceleration and time will determine what sort of motion is being performed. You will need to use classes like OrientationEventListener, SensorEventListener, SensorManager and Sensor, and various timers, e.g. Runnables or TimerTasks.
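As a starting point, here is a minimal sketch of wiring up the SensorManager / SensorEventListener classes mentioned above to record linear acceleration and the rotation vector; the MotionRecorder class and the choice of sensors are my own assumptions:

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Minimal sketch: register for linear acceleration and rotation vector data,
    // recorded with timestamps for later analysis.
    public class MotionRecorder implements SensorEventListener {
        private final SensorManager sensorManager;

        public MotionRecorder(Context context) {
            sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        }

        public void start() {
            Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
            Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
            sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
            sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
        }

        public void stop() {
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // event.timestamp is in nanoseconds; store (timestamp, values) pairs so the
            // recorded traces can be mined for the movement phases described below.
            if (event.sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
                // event.values = acceleration minus gravity, in m/s^2 on x, y, z
            } else if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
                // event.values describe the device orientation as a rotation vector
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }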
From there, you need to gather a lot of data. Observe, record and study what the numbers are for doing specific actions, and then come up with a range of values that define each movement and sub-movements. What I mean by sub-movements is, maybe a situp has five parts:
1) Rest position where phone orientation is x-value at time x
2) Situp started where phone orientation is range of y-values at time y (greater than x)
3) Situp is at final position where phone orientation is range of z-values at time z (greater than y)
4) Situp is in rebound (the user is falling back down to the floor) where phone orientation is range of y-values at time v (greater than z)
5) Situp is back at rest position where phone orientation is x-value at time n (greatest and final time)
Add acceleration to this as well, because there are certain circumstances where acceleration can be assumed. For example, my hypothesis is that people perform the actual situp (steps 1-3 in my breakdown above) at a faster acceleration than when they are falling back. In general, most people fall back more slowly because they cannot see what's behind them. That can also be used as an additional condition to determine the direction of the user. This is probably not true for all cases, however, which is why your data mining is necessary: I can also hypothesize that if someone has done many situps, the final situp is very slow and then they just collapse back down to the rest position due to exhaustion. In that case the acceleration will be the opposite of my initial hypothesis.
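As a rough illustration of the five sub-movements above, here is a hypothetical sketch of a small state machine that steps through those phases; the pitch thresholds (and the choice of pitch as the signal at all) are pure assumptions that your data mining would have to replace:

    // Hypothetical sketch of the five-phase situp breakdown as a small state machine.
    // The threshold values below are placeholders, not measured data.
    public class SitupCounter {
        private enum Phase { REST, RISING, TOP, FALLING }
        private Phase phase = Phase.REST;
        private int count = 0;

        // pitchDegrees: device tilt derived from the rotation sensor (assumption)
        public void onSample(float pitchDegrees) {
            switch (phase) {
                case REST:
                    if (pitchDegrees > 20) phase = Phase.RISING;            // step 2: situp started
                    break;
                case RISING:
                    if (pitchDegrees > 70) phase = Phase.TOP;               // step 3: final position
                    break;
                case TOP:
                    if (pitchDegrees < 60) phase = Phase.FALLING;           // step 4: rebound
                    break;
                case FALLING:
                    if (pitchDegrees < 10) { phase = Phase.REST; count++; } // step 5: back at rest
                    break;
            }
        }

        public int getCount() { return count; }
    }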
Lastly, check out Motion Sensors: http://developer.android.com/guide/topics/sensors/sensors_motion.html
All in all, it is really a numbers game combined with your own "guesstimation". But you might be surprised at how well it works. Perhaps (hopefully) good enough for your purposes.
Good luck!

Xe5 locationsensor distance doesn't work?

I use TLocationSensor in my app on Android, but I have a problem with the Distance property. If I set it to 10 meters and I don't move, OnLocationChange is still fired.
What should I set, and how do I make it work?
The XE5 location sensor component probably does work - it is just that the device is incapable of providing Delphi with accurate enough data under the given circumstances.
You can take some mapping software like MapSoft Navigator and record a track for a few hours. If that track shows significant deviations, then it means the device thinks it is being sporadically moved, and it reports those movements to Delphi, which triggers the events.
See the data presumably gathered under open skies, even without reinforced concrete walls making reflections and distortions: https://gis.stackexchange.com/questions/12011

Drawing in 2D space using Accelerometer (gyroscope?)

I am trying to create an application that will track movement of the device in 2D space. After doing research online, all I could find is that one way to do it is to integrate linear acceleration twice, but the error is horrible.
Are there any solutions to this problem? I would like to be able to move my phone up, which would cause a vertical line to be drawn on the screen, to scale of how far the phone was moved. Then if I move the phone to the left, horizontal line would be drawn - effectively allowing me to draw on the screen using movements of the phone.
Can this be done at all? If so, what direction should I take in the development? I don't know where to start...
EDIT: More about the project:
I am trying to make an exercise app that will track the movement of the leg/arm: for example, when you are doing stomach crunches and the phone is attached with an armstrap to your ankle.
The app would track repeated movements of the leg.
Unfortunately the accelerometers in these phones are nowhere near what you need to implement an inertial measurement unit. The big problem is that since you are integrating twice, each integration comes with a constant: ∫x dx = x²/2 + C. This constant is what makes this difficult, and to make things worse you get it twice - once when integrating to get velocity and once to get position.
One method of fixing this that I have seen in commercial inertial measurement units is called a zero velocity null: you use some other source of data to tell it when the motion of the device has stopped, so you can zero out the velocity. For example, I saw a project that put an inertial measurement unit on a shoe and zeroed the velocity whenever it detected the shoe being put on the ground, which vastly improved the accuracy. It's possible that you could use a camera or something to determine this, however I have not seen it done. If you would like to start messing with this then you are an awesome person and I would love to hear how it turns out.
Edit: I should clarify that the constant I mention above is where the error accumulates. If you can apply a zero velocity null, then you periodically drop the accumulated error from your stored current velocity. The error in position will still accumulate; however, this would keep it from drifting while the user is holding it relatively still, which may make it passable for drawing.
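For illustration, a bare-bones sketch of the double integration with a zero velocity null applied; the stationary check is a placeholder, since in practice it has to come from some other data source as described above:

    // Bare-bones sketch: integrate acceleration to velocity and position, and zero
    // the velocity whenever some external signal says the device is stationary.
    public class DeadReckoning1D {
        private double velocity = 0;   // m/s
        private double position = 0;   // m

        // accel in m/s^2, dt in seconds, stationary from an external check (placeholder)
        public void step(double accel, double dt, boolean stationary) {
            velocity += accel * dt;    // first integration (error grows linearly)
            position += velocity * dt; // second integration (error grows quadratically)
            if (stationary) {
                velocity = 0;          // zero velocity null: drop accumulated velocity error
            }
        }

        public double getPosition() { return position; }
    }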
I know of no way other than integrating the acceleration twice.
Moreover, I think it's not possible without also using knowledge about the other sensors that might be in your device (for example, one of my devices has 7 (seven) sensors related to various physical signals the device might be receiving).
Other than that, remember that the sensor data is noisy and almost always must be pre-filtered. For example, you can use the geometric mean of the last 10 samples. That should lower your error by providing smoother input data to the integrating function.
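As an illustration of the pre-filtering idea, here is a minimal sliding-window smoother; note it uses an arithmetic mean rather than the geometric mean suggested above, since raw axis values can be negative:

    import java.util.ArrayDeque;

    // Minimal sketch of pre-filtering: a sliding-window mean over the last N samples,
    // fed to the integration step instead of the raw sensor values.
    public class MovingAverage {
        private final ArrayDeque<Double> window = new ArrayDeque<>();
        private final int size;
        private double sum = 0;

        public MovingAverage(int size) {
            this.size = size;
        }

        public double add(double sample) {
            window.addLast(sample);
            sum += sample;
            if (window.size() > size) {
                sum -= window.removeFirst();
            }
            return sum / window.size();
        }
    }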
