I'd like my app to be able to detect when the user carrying the phone falls, using only accelerometer data (because it's the only motion sensor I can count on being present on every smartphone).
I first tried to implement an algorithm that detects free fall (total acceleration nearing zero, followed by a high-acceleration spike when the body hits the ground, then a short period of motionlessness to rule out false positives such as the user quickly walking downstairs), but there are many ways to fall, and for my implementation I can always find a case where a fall is not detected, or where one is wrongly detected.
I think machine learning can help me solve this issue by learning, from a lot of sensor values coming from different devices with different sampling rates, what is a fall and what is not.
TensorFlow seems to be what I need, since it can apparently run on Android, but while I could find tutorials on using it for offline image classification (here, for example), I didn't find any help on building a model that learns patterns from motion sensor values.
I tried to learn how to use TensorFlow from the Getting Started page, but failed, probably because I'm not fluent in Python and have no machine learning background. (I'm fluent in Java and Kotlin, and used to the Android APIs.)
I'm looking for help from the community to use TensorFlow (or something else in machine learning) to train my app to recognize falls and other motion sensor patterns.
As a reminder, Android reports motion sensor values at an irregular rate, but provides a nanosecond timestamp for each sensor event, which can be used to infer the time elapsed since the previous event; the readings are provided as one 32-bit float per axis (x, y, z).
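For reference, collecting those events on Android could look like this minimal Kotlin sketch (windowing and labelling are left out; the sensor delay is an arbitrary choice):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Collects raw accelerometer samples together with their timestamps, so that the
// (irregular) time between events can be reconstructed later.
class AccelerometerRecorder(context: Context) : SensorEventListener {

    data class Sample(val timestampNs: Long, val x: Float, val y: Float, val z: Float)

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

    val samples = mutableListOf<Sample>()

    fun start() {
        accelerometer?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // event.timestamp is in nanoseconds; values are one 32-bit float per axis.
        samples.add(Sample(event.timestamp, event.values[0], event.values[1], event.values[2]))
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Not needed for simple recording.
    }
}
```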
If you have your data well organized, then you might be able to use the Java-based Weka machine learning environment:
http://www.cs.waikato.ac.nz/ml/weka/
You can use Weka to play around with all the different algorithms on your data. Weka uses an ARFF file for the data, which is pretty easy to create if you have your data in JSON or CSV.
Once you find an algorithm/model that works, you can easily put it into your Android app:
http://weka.wikispaces.com/Use+Weka+in+your+Java+code
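For illustration, training and querying a Weka classifier from JVM code could look roughly like this (a minimal Kotlin sketch; the ARFF file name, the choice of J48, and the attribute layout are assumptions, not requirements):

```kotlin
import weka.classifiers.Classifier
import weka.classifiers.trees.J48
import weka.core.DenseInstance
import weka.core.Instances
import weka.core.converters.ConverterUtils.DataSource

// Train a Weka classifier on an ARFF file whose last attribute is the class label.
fun trainFromArff(arffPath: String): Pair<Classifier, Instances> {
    val data = DataSource(arffPath).dataSet
    data.setClassIndex(data.numAttributes() - 1)
    val classifier = J48()          // any Weka classifier can be swapped in here
    classifier.buildClassifier(data)
    return classifier to data
}

// Classify one feature vector (same attribute order as the training data).
fun classify(classifier: Classifier, header: Instances, features: DoubleArray): String {
    // The instance needs one slot per attribute; the class attribute stays missing.
    val instance = DenseInstance(header.numAttributes())
    instance.setDataset(header)
    features.forEachIndexed { i, v -> instance.setValue(i, v) }
    val index = classifier.classifyInstance(instance)
    return header.classAttribute().value(index.toInt())
}
```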
You really don't need TensorFlow if you don't require deep learning algorithms, which I don't think you do. If you did need deep learning, then DeepLearning4J is a Java-based open source solution for Android:
https://deeplearning4j.org/android
STEP 1)
Create a training database.
You need samples of accelerometer data labelled 'falling' and 'not falling'.
So you will basically record the acceleration in different situations and label each recording. To give an order of magnitude of the amount of data: 1,000 to 100,000 periods of 0.5 to 5 seconds each.
STEP 2)
Use scikit-learn with Python. Try different models to classify your data.
X is your feature matrix, containing your samples of the 3 acceleration axes.
Y is your target (falling / not falling).
You will create a classifier that maps X to Y.
STEP 3)
Make your classifier compatible with Android.
sklearn-porter will port your trained classifier to the programming language that you like.
https://github.com/nok/sklearn-porter
STEP 4)
Implement this ported classifier in your app and feed it with live sensor data.
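To make step 4 a bit more concrete, here is a minimal Kotlin sketch of feeding a window of accelerometer samples to the ported classifier. The `predict` parameter stands in for whatever class sklearn-porter actually generates (its real name and signature depend on the estimator), and the window/feature layout is an assumption that has to match the training data:

```kotlin
// `predict` stands in for the classifier class generated by sklearn-porter
// (the actual generated class and method names depend on the estimator and settings).
fun onWindowReady(window: List<FloatArray>, predict: (DoubleArray) -> Int) {
    // Flatten a fixed-length window of (x, y, z) samples into one feature vector,
    // using the same layout as when the scikit-learn model was trained.
    val features = DoubleArray(window.size * 3)
    window.forEachIndexed { i, sample ->
        features[3 * i] = sample[0].toDouble()
        features[3 * i + 1] = sample[1].toDouble()
        features[3 * i + 2] = sample[2].toDouble()
    }

    if (predict(features) == 1) {   // e.g. 1 = "falling", 0 = "not falling"
        // TODO: trigger the fall alarm / notification
    }
}
```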
Related
I'm working on an indoor positioning system and I have a question after all my real-world tests failed:
I have done some work with Android sensor values and some machine learning algorithms (with good theoretical results), but in a real environment I ran into some problems.
My proposal was to have three phases:
The first phase consists of collecting data through an Android app that shows a map with some points. You move to the real position of a point and save the sensor values associated with that point's coordinates.
The second phase consists of creating a machine learning model (in this case, a classifier) to predict the user's position from the sensor values at any given time.
Then we export the classifier to the device and get predictions of the user's position in real time.
The data we stored in the fingerprinting phase (phase 1) was the x, y, z values of the accelerometer, magnetometer and gyroscope given by the Android SensorManager. In a second approach, we used a median filter to remove noise from those values. Our problem is that the way you hold the phone changes the measurements: Android sensor values are given in the device coordinate system, so they vary with the phone's orientation and tilt.
Android Device Coordinate System
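For illustration, the per-axis median filter mentioned above could be sketched in Kotlin like this (the window size is an arbitrary choice):

```kotlin
// Simple sliding median filter applied independently to one axis of sensor readings.
// A window of 5 samples is an arbitrary choice for this sketch.
fun medianFilter(values: FloatArray, windowSize: Int = 5): FloatArray {
    val half = windowSize / 2
    return FloatArray(values.size) { i ->
        val from = maxOf(0, i - half)
        val to = minOf(values.size - 1, i + half)
        val window = values.copyOfRange(from, to + 1).sorted()
        window[window.size / 2]
    }
}
```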
So, the question is:
Is it possible, or is there a way, to build an indoor localization system (with a positioning accuracy of around 2-3 meters) that works in real environments using only the Android smartphone sensors (accelerometer, gyroscope and magnetometer) together with machine learning algorithms (or other algorithms)?
Thanks in advance!!
There are a few companies that started doing fingerprinting solely based on magnetometer, but as far as I know they ended up at least mixing it with other technologies, like BLE beacons or similar.
From what I was told, the problem is that magnetic fields can change drastically due to changes inside your building, but also due to things outside of your scope (e.g. thunderstorms).
Taking one step back, I see another problem with your approach: different device models behave radically differently in terms of the data their sensors provide. To make things worse, the same device may provide very different data today than it did yesterday. This is especially true for the magnetometer, at least in my experience.
I'm trying to do some pattern detection experiments with Android motion sensor data, especially accelerometer data. First, in a recording mode, I capture the sensor data associated with my pattern. Then, in a detection mode, I would like to perform the same pattern again and match the two datasets somehow (using data analytics, machine learning, etc.) to detect whether I did the same pattern again, and raise a flag if so.
For instance, I gather sensor data while rotating the phone in a circle or in an S-like movement. Then I would like to detect whether a new recording matches that pattern or not.
OR
One way I think I could do it is to keep 100 samples of my gesture, say, if we want the gesture to be 5 seconds long (at 20 Hz). Then we have to apply some ML or comparison to the second dataset to see whether it is almost a "match".
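As one very simple way to quantify "almost a match" between the recorded template and a new window of the same length, you could use an average per-sample distance and a tuned threshold; a minimal Kotlin sketch (the threshold value is only a placeholder):

```kotlin
import kotlin.math.sqrt

// Compares a recorded gesture template against a new window of the same length
// using the mean Euclidean distance per sample; the threshold is something to
// tune on your own recordings.
fun matchesTemplate(
    template: List<FloatArray>,   // e.g. 100 samples of (x, y, z) recorded earlier
    candidate: List<FloatArray>,  // e.g. 100 samples of (x, y, z) just captured
    threshold: Float = 2.0f       // arbitrary starting point, tune empirically
): Boolean {
    require(template.size == candidate.size) { "Windows must have the same length" }
    var total = 0.0
    for (i in template.indices) {
        val dx = template[i][0] - candidate[i][0]
        val dy = template[i][1] - candidate[i][1]
        val dz = template[i][2] - candidate[i][2]
        total += sqrt((dx * dx + dy * dy + dz * dz).toDouble())
    }
    return (total / template.size) < threshold
}
```

A plain distance like this is sensitive to timing differences; if the gesture speed varies between recordings, something like dynamic time warping usually copes better.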
Does anyone have experience with this kind of sensor data recognition? Any help or suggestions on how to achieve it?
I'm trying to write Android code that uses the device sensors to detect a free-fall scenario.
I searched the web a lot for a solution to this problem, but I was unable to find anything useful.
I did see that there are several apps that do exactly this, so it is possible, but I didn't find any code sample or tutorial on how to do it.
Can anyone please help me with a code snippet, or even with the mathematical calculation using the sensor data?
Thanks in advance
The device is in free fall if the length of the vector given by TYPE_ACCELEROMETER is approximately zero. In theory it should be exactly zero; in practice it will only be near zero. So you need to come up with a threshold by trial and error and declare that the device is in free fall whenever the length of that vector is below this threshold.
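A minimal Kotlin sketch of that check (the threshold value is only a starting point to tune by trial and error):

```kotlin
import android.hardware.SensorEvent
import kotlin.math.sqrt

// Free fall: the magnitude of the TYPE_ACCELEROMETER vector stays close to zero
// (instead of ~9.81 m/s^2 when the device is at rest). The threshold must be
// found by trial and error; 2.0 m/s^2 is only a starting point.
const val FREE_FALL_THRESHOLD = 2.0f

fun isInFreeFall(event: SensorEvent): Boolean {
    val x = event.values[0]
    val y = event.values[1]
    val z = event.values[2]
    val magnitude = sqrt(x * x + y * y + z * z)
    return magnitude < FREE_FALL_THRESHOLD
}
```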
Check out the API here SensorEvent Values and the math behind FreeFall here Wikipedia.
You are trying to detect speed in some direction. Look at the motion equations on wikipedia. You are detecting acceleration over time that is normalized for the gyroscopic rotation of the device.
Also see: How to approach Fall Detection Algorithm
From what I've read, the accelerometer normally measures gravity. Thus, if you're in freefall and the device is not being moved laterally, all accelerometer readings should be zero. (Disclaimer: I have not written any accelerometer code.)
Google the iFall project by a group at Florida State University. They have published a paper describing the approach they took for their Android iFall application, which gives a host of references for further/extended study. They also have an API available and explain how to use it, if you want a fast shortcut approach. (To use their API, I believe you just need to download and install their iFall app from the Play Store.)
I'm building an application for Android devices that needs to recognize, from accelerometer data, the difference between walking noise and double tapping the device. I'm trying to solve this problem using neural networks.
At the start it went pretty well: I taught it to recognize taps versus noise such as standing up/sitting down and walking around at a slower pace. But when it came to normal walking it never seemed to learn, even though I fed it a large proportion of noise data.
My question: Are there any serious flaws in my approach? Is the problem simply a lack of data?
The network
I've chosen a 25-input, 1-output multi-layer perceptron, which I am training with backpropagation. The input is the change in acceleration every 20 ms and the output ranges from -1 (no tap) to 1 (tap). I've tried pretty much every configuration of hidden units there is, but had most luck with 3-10.
I'm using Neuroph's easyNeurons for the training and exporting to Java.
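For clarity, building one such input vector could look like the following Kotlin sketch; it assumes the 25 inputs are differences between 26 consecutive readings (one axis or the vector magnitude) and that the ±10 range is scaled into [-1, 1], both of which are assumptions about this particular setup:

```kotlin
// Builds one 25-element network input from 26 consecutive acceleration readings
// sampled every 20 ms, using the change between successive samples and scaling it
// into roughly [-1, 1] (the /10 divisor matches the +-10 range of the raw data).
fun buildInputVector(readings: FloatArray): DoubleArray {
    require(readings.size == 26) { "Need 26 samples to get 25 differences" }
    return DoubleArray(25) { i ->
        ((readings[i + 1] - readings[i]) / 10.0).coerceIn(-1.0, 1.0)
    }
}
```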
The data
My total training data is about 50 double-tap samples and about 3k noise samples. I've also tried training with proportional amounts of noise and double taps.
The data looks like this (values range from +10 to -10):
Sitting double taps:
Fast walking:
So to reiterate my questions: Are there any serious flaws in my approach here? Do I need more data for it to recognize the difference between walking and double tapping? Any other tips?
Update
OK, so after much adjusting we've boiled the essential problem down to being able to recognize double taps while taking a brisk walk. Sitting and regular (in-house) walking we can handle pretty well.
Brisk walk
This is some test data of me first walking, then stopping and standing still, then walking again and doing 5 double taps while walking.
If anyone is interested in the raw data, I have linked the latest (brisk walk) data here.
Do you insist on using a neural network? If not, here is an idea:
Take a window of 0.5 seconds and consider the area under the curve (or, since your signal is discrete, the sum of the absolute values of each sensor reading -- the red area in the attached image). You will probably find that this sum is high when the user is walking and much, much lower when they are sitting and/or tapping. You can set a threshold above which you consider a given window to have been recorded while the user was walking. Alternatively, since you have labelled data, you can train any binary classifier to differentiate between walking and not walking.
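A minimal Kotlin sketch of that thresholding idea (the window holds the readings for 0.5 s; the threshold is a placeholder to tune on your labelled data):

```kotlin
import kotlin.math.abs

// Decides "walking" vs. "not walking" for one 0.5 s window of readings by summing
// the absolute values of the signal, as described above. The threshold is a
// placeholder to be tuned on labelled recordings.
fun isWalkingWindow(window: FloatArray, threshold: Float = 50.0f): Boolean {
    var sum = 0.0f
    for (value in window) {
        sum += abs(value)
    }
    return sum > threshold
}
```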
You can probably improve your system by considering other features of the signal, such as how jagged the line is. If the phone is sitting on a table, the line will be almost flat. If the user is typing, the line will be kind of flat, and you will see a spike every now and then. If they are walking, you will see something like a sine wave.
Have you considered that the "fast walking" and "fast walking + double tapping" signals might be too similar to differentiate using only accelerometer data? It may simply not be possible to achieve accuracy above a certain amount.
Otherwise, neural networks are probably a good choice for your data, and it still may be possible to get better performance out of them.
This very useful paper (http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf) recommends that you whiten your dataset so that it has a mean of zero and unit covariance.
Also, since your problem is a classification problem, you should make sure that you are training your network using a cross-entropy criterion (http://arxiv.org/pdf/1103.0398v1.pdf) rather than RMSE. (I have no idea whether Neuroph supports cross-entropy or not.)
Another relatively simple thing you could try, as other posters suggested, is transforming your data. Using an FFT or DCT to transform your data to the frequency domain is relatively standard for time-series classification.
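If you want to try the frequency-domain route without pulling in a library first, a naive DFT is enough to experiment with short windows (an FFT implementation is the better choice for anything longer); a Kotlin sketch:

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

// Naive discrete Fourier transform of a real-valued window, returning the magnitude
// of each frequency bin. O(n^2), fine for short windows; use an FFT library for
// anything longer.
fun dftMagnitudes(window: DoubleArray): DoubleArray {
    val n = window.size
    return DoubleArray(n) { k ->
        var re = 0.0
        var im = 0.0
        for (t in 0 until n) {
            val angle = 2.0 * PI * k * t / n
            re += window[t] * cos(angle)
            im -= window[t] * sin(angle)
        }
        sqrt(re * re + im * im)
    }
}
```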
You could also try training networks on different sized windows and averaging the results.
If you want to try a more difficult NN architecture, you could look at the Time-Delay Neural Network (just Google it for the paper), which takes multiple windows into account in its structure. It should be relatively straightforward to use one of the Torch libraries (http://www.torch.ch/) to implement it, but it might be hard to export the network to an Android environment.
Finally, another method of getting better classification performance on time-series data is to consider the relationships between adjacent labels. Conditional Neural Fields (http://code.google.com/p/cnf/ - note: I have never used this code) do this by integrating neural networks into conditional random fields and, depending on the patterns of behavior in your actual data, may do a better job.
What would probably work is to filter the data using a Fourier transform first. Walking has a sine-like amplitude, so your double taps would stand out in the transform result as a different frequency. I guess a neural network can then determine whether the data contains your double taps because of that extra frequency (the double-tap frequency). Some questions remain:
How long does the sample of data need to be?
Can your phone do all the work it needs to do? Does it have enough processing power?
You might even want to consider using the GPU for this.
Another option is to use the Fourier output and some good old Fuzzy Logic.
This sounds like fun...
I've been researching for a bit now and now I have to decide which road to take.
My requirements: I need to know the device's orientation relative to true heading (the geographic north pole, not the magnetic pole).
For that I must use the compass, and now I have to decide which other sensor to use: the accelerometer or the gyroscope.
Since this is a new thing to me, I've spent the last few hours reading Stack Overflow answers and Wikipedia articles, and I am still confused.
I am targeting both platforms (iOS and Android) and I am developing with Appcelerator Titanium. With Titanium I can easily get the accelerometer's values (x, y, z) and trueHeading.
Since the iPhone 3GS does not have a gyroscope, I obviously can't use it on that device. Newer iPhones and Android devices have one.
So the questions are:
Are the accelerometer's XYZ values and the compass's trueHeading enough for me to calculate device pitch, roll and yaw? It has to be accurate.
Is it more accurate to use trueHeading from the compass together with the gyroscope's values instead of the accelerometer's?
Is it clever to combine both the accelerometer and the gyroscope with trueHeading?
If I take the first road, I don't have to write a Titanium module to fetch the gyroscope data (Titanium only gives me accelerometer data out of the box), and I can also use this on the 3GS iPhone.
If I take the second road, I have to write two modules (iOS and Android) to fetch the gyroscope data, and I lose 3GS support.
If I take the third road, I again have to write Titanium modules, and I lose 3GS support.
First of all, if you don't have a huge installed base of 3GS users and are writing a new app, don't worry about old hardware. IMO supporting it doesn't make sense from an economic point of view and will only shrink your number of alternatives in system architecture.
What you are looking for is called sensor fusion. Doing it properly requires some heavy math, like Kalman filters. The good news is that it already exists on the iPhone (Core Motion) and AFAIK on Android as well (it sounds like it is covered in "Sensor fusion implemented on Android?").
I don't know much about Appcelerator aside from the name and thus cannot say anything about an easy way to use it. Anyway, if this isn't implemented at an abstraction layer, I assume Appcelerator provides you with the possibility to do native API calls, so you should be able to embed the native API (after fiddling around for some time ;-)).
In general it depends on your requirements. The faster you need to have an exact result the more I'd recommend real sensor fusion including all 3 sensors. If you are fine with a slower response time, a combination of compass and accelerometer will do.
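For the compass-plus-accelerometer road on Android, the SDK already does most of the math; a minimal Kotlin sketch (note that the azimuth it returns is relative to magnetic north, so the local declination still has to be added for true heading):

```kotlin
import android.hardware.SensorManager

// Computes azimuth (yaw), pitch and roll from the latest accelerometer and
// magnetometer readings using the standard Android helpers. The azimuth is
// relative to magnetic north; add the local magnetic declination
// (see android.hardware.GeomagneticField) to get true heading.
fun orientationAngles(gravity: FloatArray, geomagnetic: FloatArray): FloatArray? {
    val rotationMatrix = FloatArray(9)
    val inclinationMatrix = FloatArray(9)
    val ok = SensorManager.getRotationMatrix(rotationMatrix, inclinationMatrix, gravity, geomagnetic)
    if (!ok) return null

    val angles = FloatArray(3)            // [azimuth, pitch, roll] in radians
    SensorManager.getOrientation(rotationMatrix, angles)
    return angles
}
```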