Finding the Cartesian coordinates of another smartphone? - android

Suppose I have two smartphones, A and B. If I am holding smartphone A, is there a way to determine the location of B relative to myself?
So if we had the situation shown in this image, it would tell me B is at position (2, 1).
Inventive methods like using the strength of wifi signals to get position are more than welcome. Could I also determine if there is a wall between the two phones?

As far as I understand, both Bluetooth and Wi-Fi signals radiate in all directions, so while you may be able to estimate the distance between the two devices, that alone won't give you a position: B could be anywhere on a circle of that radius around you, on your "side" or at any other point equidistant from the source of the signal.
While GPS may be the obvious solution since it provides exactly what you're looking for, I'm not sure if you're including this as an option. Once you get the two coordinate sets for the devices, it's a matter of calculating the offset (N/S and E/W) from device 1.
That raises the question of GPS accuracy, though: you tagged the question with Bluetooth, and since Bluetooth (Class 2-3) has a range of around 15-30 feet while GPS has an error margin of 25-35 feet, this may not be good enough either.
If you do manage to get a Bluetooth connection between the two devices, you'd already know you're within that range, but not in which direction. You can get a signal-strength measure from Android 2.1: How do I poll the RSSI value of an existing Bluetooth connection? But then again, I'm not sure how to detect what direction the other user is in relative to you, just how close they are, in any direction. Hell, the other device could be directly above or below you and you'd get virtually the same reading as if it were next to you, given equal distances.
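For what it's worth, on newer Android versions a BLE connection exposes an RSSI read directly through the public API; a minimal sketch, assuming the callback is the one you pass to device.connectGatt() elsewhere:

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.util.Log;

    // Sketch: polling RSSI on an existing BLE connection (API 18+); classic
    // Bluetooth needed the workaround from the linked question.
    BluetoothGattCallback callback = new BluetoothGattCallback() {
        @Override
        public void onReadRemoteRssi(BluetoothGatt gatt, int rssi, int status) {
            if (status == BluetoothGatt.GATT_SUCCESS) {
                // rssi is in dBm; higher usually means closer, but walls and
                // reflections make it a rough proxy at best.
                Log.d("RSSI", "Remote RSSI: " + rssi + " dBm");
            }
        }
    };

    // Later, on the connected BluetoothGatt:
    // gatt.readRemoteRssi();  // asynchronous; answered via onReadRemoteRssi()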
This all assumes a "static" approach, meaning both devices are stationary. However, if you measure that value, then take a step to your left and re-measure, you can tell whether you're closer to or further from the source. So with a little trial and error, moving one of the devices around, you could determine a relative position. This may not be practical, though: you'd either need to tell the phone manually that you moved left and to re-measure, or do something more complicated like monitoring the phone's accelerometer, which could tell in what direction the phone moved, and map the signal strength against that.
Am I losing my mind? Probably.
No answer as far as I'm concerned for now, just thoughts. Depending on what the application will do, there may be other viable approaches. This is my brain-dump answer, so hopefully someone else can read it and come up with a more elaborate answer instead of rambling thoughts.

If the distance from A to B is more than a few metres, and you can get the GPS location of both A and B, you can easily calculate the distance and bearing between them using the Location.distanceTo() and Location.bearingTo() methods.
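A minimal sketch of that calculation, including the N/S and E/W offsets mentioned above (the helper name is mine; the Location methods are the standard API):

    import android.location.Location;

    // Sketch: distance, bearing and N/S-E/W offsets between two GPS fixes.
    // How the fixes are exchanged between the phones is left open here.
    static double[] offsetNorthEast(Location a, Location b) {
        float meters  = a.distanceTo(b);  // great-circle distance in metres
        float bearing = a.bearingTo(b);   // degrees east of true north, -180..180
        double rad = Math.toRadians(bearing);
        // Decompose into the Cartesian offsets the question asks for:
        return new double[] {
            meters * Math.cos(rad),  // +north / -south
            meters * Math.sin(rad)   // +east  / -west
        };
    }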
Depending on the physical environment, it is probable that two GPS receivers which are physically close together will be using the same satellites for their calculations, and that the errors will be similar in both. So in some cases using GPS may work even for small distances.

Related

How to detect passing over a speedbreaker via Android

I am new to Android and not familiar with the different types of sensors. I am working on an app, and part of it has to count the number of speedbreakers that a car will pass over during its journey.
The phone will remain fixed in one position inside the car.
I have tried using the accelerometer, treating a peak in vertical acceleration followed by a negative vertical acceleration as the indicator of a speedbreaker, but there is too much fluctuation for an accurate result.
Here speedbreakers are smooth slopes of cement, usually a few inches high.
Any help or guidance would be greatly appreciated.
I would start out with logging the accelerometer data (and probably the speed data from GPS as well) and manually marking the points where you pass a speedbreaker.
Then the first step would be to see if there is anything to see: maybe there is a clear "signal" that stands apart from the normal fluctuation and you just haven't gotten the tweaking right.
If there isn't, you can always see if there is something there that you haven't recognized: some sort of normal behaviour that stops for a bit. These can be harder to detect visually, so you'd have to do something with the signal.
If you know nothing about signal processing it might be tricky, but as a random starting point, read up on how step-detection works: https://en.wikipedia.org/wiki/Step_detection
Some of the methods might be useful to you. Look at the FFT, and process your signal to filter out the points you need. Maybe even train a simple network to see if it finds anything going on at your desired points?
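A minimal sketch of such a logger (the class name and log tag are mine; a real app would write to a file alongside GPS speed and your manual speedbreaker marks):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.util.Log;

    // Sketch: log raw accelerometer samples for offline analysis.
    public class AccelLogger implements SensorEventListener {
        public void start(SensorManager sm) {
            Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            // SENSOR_DELAY_GAME gives roughly 50 Hz on most devices.
            sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Timestamp (ns) plus the three axes.
            Log.d("ACCEL", event.timestamp + "," + event.values[0] + ","
                    + event.values[1] + "," + event.values[2]);
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }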

Track a smartphone in 3D space with outside-source tracking

Unsure if this question is right for this site, but anyway...
In a few months I'll be beginning my uni dissertation. I have a few ideas, some of which revolve around tracking the position of a smartphone in 3D space, and I'm wondering if there is any way to do this.
I believe there is no way to do this solely with the phone and its gyroscope and accelerometer, even though I feel it might be possible. I am expecting the best case will be to use 1-3 Raspberry Pis and hopefully apply something to them which will enable them to track the phone in 3D space.
Ideally I'd like something which can get precise readings of the phone's movements, even through objects such as a TV screen or a wall, but simply getting the phone's location in space as a blank dot would do. I'm not sure what signals smartphones emit, or what the 1-3 external Raspberry Pis could give out, that could be used to precisely triangulate its position, but I feel it's possible.
Interesting question. I am probably just narrowing down the field of options suggested by Jim, but Android does have a WifiManager class: http://developer.android.com/reference/android/net/wifi/WifiManager.html, which seems to have methods for reading and comparing signal strengths. I would say that with a sufficient number of cheap wifi routers/access points at known locations, and a little bit of math, this would be very doable, not to mention fun. It almost seems too easy, but if you have to factor in some sort of calibration of the strength of each access point, etc., it might make a good dissertation.
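A rough sketch of reading those strengths (the method name is mine; note that recent Android versions also require location permission before getScanResults() returns anything):

    import android.content.Context;
    import android.net.wifi.ScanResult;
    import android.net.wifi.WifiManager;
    import android.util.Log;
    import java.util.List;

    // Sketch: read the RSSI of visible access points at known locations.
    static void logAccessPoints(Context context) {
        WifiManager wifi =
                (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        List<ScanResult> results = wifi.getScanResults();
        for (ScanResult r : results) {
            // r.level is the RSSI in dBm; r.BSSID identifies the access point.
            Log.d("WIFI", r.BSSID + " (" + r.SSID + "): " + r.level + " dBm");
        }
        // compareSignalLevel() handles the non-linear dBm scale,
        // returning <0, 0 or >0 like a comparator:
        int cmp = WifiManager.compareSignalLevel(-60, -75);
        Log.d("WIFI", "-60 dBm vs -75 dBm: " + cmp);
    }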
Your question is about "tracking with an outside source", but you later reference its internal sensors as if you will have access to them. You also reference a signal it might emit that you could measure and track.
Tracking in 3D space requires a reference point (a known location) and then coordinates indicating the object's position relative to that point (what most people call "triangulation"). Cell towers have enough data to triangulate a position, based on their constant collection of the phone's cellular radio signal and the ability of several known locations to collect that data. So, is it possible? Absolutely, and they do it all the time. Look up "cell tower hand off" or "handover" for more information. However, in most situations you will not have access to all of this data and several known locations to make the appropriate determinations. Also, they typically don't need very fine location data (less than a meter), so it isn't necessarily helpful to you. Maybe a good dissertation though...
Most other signals produced by the device (e.g., Bluetooth, wifi) use much shorter wavelengths (higher frequencies) and are subject to much more interference from objects like walls, which makes exact triangulation harder. However, it seems like that might make a good dissertation, since that's what you're interested in doing. It's heavy on signal processing and it may not even be possible. Further reading here:
http://www.networkworld.com/article/2170751/tech-primers/location-based-wi-fi-services-can-add-immediate-value-to-wi-fi-deployments.html
http://en.wikipedia.org/wiki/Radio_spectrum
So, if we assume access to the sensors, it helps. However, you still need "known locations", because sensor data is subject to error, and as the device moves and reports how it is moving, small errors turn into big ones. Think about shooting a laser across a room vs. across a city. Again, not a bad concept for a dissertation, wherein you might spend the time and energy collecting various sensor data in order to reduce the impact of the error on location calculations.
EDIT
Signal timing is radar territory, and the devices probably aren't precise enough, but maybe you could find a way. Signal strength is what is used currently. Roughly it works like this: Station A measures the device's signal at strength X. Station B measures the same signal at 1/8 of X. Because signal strength typically falls off with roughly the cube of distance in cluttered environments, you might assume Station B is twice as far from the signal as Station A. If you add another station, you could triangulate the position, if it were a perfect world. However, the phone might be in the person's pocket while they are standing next to Station B. How would you know? You need several stations. Also, if the phone is next to a surface that reflects its signal back to Station A, it may "appear" to be closer if the reflection occurs just right.
We haven't even covered things like station calibration (how do we know the signal is at "1/8" strength at B compared to A?), determining the actual location of each station, real-time signal processing, and other sources of interference.
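Still, to make the arithmetic concrete, here is a hedged sketch of the usual log-distance path-loss model, where the reference power p0, reference distance d0 and exponent n are exactly the calibration unknowns just mentioned:

    // Sketch: log-distance path-loss model, the usual way to turn RSSI
    // into a distance estimate.
    static double estimateDistance(double rssiDbm, double p0, double d0, double n) {
        // p0 = RSSI (dBm) measured at reference distance d0;
        // n  = path-loss exponent (~2 free space, ~3 cluttered; an assumption).
        return d0 * Math.pow(10.0, (p0 - rssiDbm) / (10.0 * n));
    }

    // e.g. with p0 = -40 dBm at d0 = 1 m and n = 3, an RSSI of -49 dBm
    // ("1/8 the power" is -9 dB) comes out at roughly 2 m.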
I don't do this kind of work. I did some signal processing a long time ago, so this is just high-level stuff that I find interesting. You should also look at this (in case you weren't aware):
http://en.wikipedia.org/wiki/IBeacon

Tracking linear movement on mobile devices [duplicate]

I was looking into implementing an Inertial Navigation System for an Android phone, which I realise is hard given the accelerometer accuracy and the constant fluctuation of readings.
To start with, I set the phone on a flat surface and sampled 1000 accelerometer readings in the X and Y directions (parallel to the table, so no gravity acting in these directions). I then averaged these readings and used this value to calibrate the phone (subtracting this value from each subsequent reading).
I then tested the system by again placing it on the table and sampling 5000 accelerometer readings in the X and Y directions. I would expect, given the calibration, that these accelerations should add up to 0 (roughly) in each direction. However, this is not the case, and the total acceleration over 5000 iterations is nowhere near 0 (averaging around 10 on each axis).
I realise without seeing my code this might be difficult to answer but in a more general sense...
Is this simply an example of how inaccurate the accelerometer readings are on a mobile phone (HTC Desire S), or is it more likely that I've made some errors in my coding?
You get position by integrating the linear acceleration twice but the error is horrible. It is useless in practice.
Here is an explanation why (Google Tech Talk) at 23:20. I highly recommend this video.
It is not the accelerometer noise that causes the problem but the gyro white noise; see subsection 6.2.3, Propagation of Errors. (By the way, you will need the gyroscopes too.)
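As a toy illustration of how badly the double integration behaves (synthetic numbers, not real sensor data), even a tiny constant bias grows linearly in velocity and quadratically in position:

    // Sketch: what a small constant bias does to naive dead reckoning.
    public class DriftDemo {
        public static void main(String[] args) {
            double bias = 0.01;  // m/s^2, a very optimistic accelerometer bias
            double dt = 0.02;    // 50 Hz sampling
            double v = 0, x = 0;
            for (int i = 0; i < 50 * 60; i++) {  // one minute of samples
                v += bias * dt;  // first integration: velocity
                x += v * dt;     // second integration: position
            }
            // Prints roughly v = 0.6 m/s and x = 18 m of pure drift.
            System.out.println("v = " + v + " m/s, x = " + x + " m");
        }
    }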
As for indoor positioning, I have found these useful:
RSSI-Based Indoor Localization and Tracking Using Sigma-Point Kalman Smoothers
Pedestrian Tracking with Shoe-Mounted Inertial Sensors
Enhancing the Performance of Pedometers Using a Single Accelerometer
I have no idea how these methods would perform in real-life applications or how to turn them into a nice Android app.
A similar question is this.
UPDATE:
Apparently there is a newer version of the above Oliver J. Woodman report, "An introduction to inertial navigation": his PhD thesis,
Pedestrian Localisation for Indoor Environments
I am just thinking out loud, and I haven't played with the Android accelerometer API yet, so bear with me.
First of all, traditionally, to get navigation from inertial sensors you need six axes: accelerations in X, Y, and Z, plus rotations Xr, Yr, and Zr. Without the rotation data, you don't have enough information to establish a vector unless you assume the device never changes its attitude, which would be pretty limiting.
Oh, and you know that an INS drifts with the rotation of the earth, right? So there's that too. One hour later and you're mysteriously climbing a 15° slope into space. That's assuming you had an INS capable of maintaining a location that long, which a phone can't do yet.
A better way to utilize accelerometers, even a 3-axis accelerometer, for navigation would be to tie into GPS to calibrate the INS whenever possible. Where GPS falls short, INS complements it nicely. GPS can suddenly shoot you off 3 blocks away because you got too close to a tree. INS isn't great, but at least it knows you weren't hit by a meteor.
What you could do is log the phone's accelerometer data, and a lot of it. Weeks' worth. Compare it with good (I mean really good) GPS data and use data mining to establish correlations between the accelerometer trends and the known GPS data. (Pro tip: you'll want to check the GPS almanac for days with good geometry and a lot of satellites. Some days you may only have 4 satellites, and that's not enough.)
What you might be able to do is find that when a person is walking with their phone in their pocket, the accelerometer data logs a very specific pattern. Based on the data mining, you establish a profile for that device, with that user, and what sort of velocity that pattern represents when it had GPS data to go along with it. You should be able to detect turns, climbing stairs, sitting down (calibration to 0 velocity time!) and various other tasks. How the phone is being held would need to be treated as a separate data input entirely.
I smell a neural network being used to do the data mining: something blind to what the inputs mean, in other words. The algorithm would only look for trends in the patterns, not paying attention to the actual measurements of the INS. All it would know is that, historically, when this pattern occurs the device is traveling at 2.72 m/s in X, 0.17 m/s in Y, 0.01 m/s in Z, so the device must be doing that now, and it would move the piece forward accordingly. It's important that it's completely blind, because a phone just put in your pocket might end up in one of 4 different orientations, or 8 if you switch pockets. And there are many ways to hold your phone, as well. We're talking a lot of data here.
You'll obviously still have a lot of drift, but I think you'd have better luck this way, because the device will know when you stopped walking and the positional drift will not perpetuate. It knows that you're standing still based on historical data. Traditional INS systems don't have this feature: the drift perpetuates into all future measurements and compounds exponentially. Ungodly accuracy, or a secondary navigation source to check against at regular intervals, is absolutely vital with a traditional INS.
Each device and each person would have to have their own profile. It's a lot of data and a lot of calculation. Everyone walks at different speeds, with different strides, and puts their phone in different pockets, etc. Surely implementing this in the real world would require the number-crunching to be handled server-side.
If you did use GPS for the initial baseline, part of the problem there is that GPS tends to have its own migrations over time, but they are non-perpetuating errors. Sit a receiver in one location and log the data: without WAAS corrections, you can easily see location fixes drift in random directions within 100 feet of you. With WAAS, maybe down to 6 feet. You might actually have better luck with a sub-meter RTK system on a backpack, at least while getting the ANN's algorithm right.
You will still have angular drift with the INS using my method. This is a problem. But if you went so far as to build an ANN to pore over weeks' worth of GPS and INS data from n users, and actually got it working to this point, you obviously don't mind big data. Keep going down that path and use more data to help resolve the angular drift: people are creatures of habit. We pretty much do the same things, like walking on sidewalks, through doors, and up stairs, and we don't do crazy things like walking across freeways, through walls, or off balconies.
So let's say you are taking a page from Big Brother and start storing data on where people are going. You can start mapping where people would be expected to walk. It's a pretty sure bet that if a user starts walking up stairs, she's at the same base of stairs that the person before her walked up. After 1000 iterations and some least-squares adjustments, your database pretty much knows where those stairs are with great accuracy. Now you can correct angular drift and location as the person starts walking: when she hits those stairs, or turns down that hall, or travels down a sidewalk, any drift can be corrected. Your database would contain sectors weighted by the likelihood that a person would walk there, or that this user has walked there in the past. Spatial databases are optimized for this, using divide and conquer to allocate only the sectors that are meaningful. It would be sort of like those MIT projects where the laser-equipped robot starts off with a black image and paints the maze into memory by taking every turn, illuminating where all the walls are.
Areas of high traffic would get higher weights, and areas where no one has ever been get zero weight. Higher-traffic areas get higher resolution. You would essentially end up with a map of everywhere anyone has been and use it as a prediction model.
I wouldn't be surprised if you could determine what seat a person took in a theater using this method. Given enough users going to the theater and enough resolution, you would have data mapping each row of the theater and how wide each row is. The more people visit a location, the higher the fidelity with which you could predict where a person is located.
Also, I highly recommend you get a (free) subscription to GPS World magazine if you're interested in the current research into this sort of stuff. Every month I geek out with it.
I'm not sure how great your offset is, because you forgot to include units. ("Around 10 on each axis" doesn't say much. :P) That said, it's still likely due to inaccuracy in the hardware.
The accelerometer is fine for things like determining the phone's orientation relative to gravity, or detecting gestures (shaking or bumping the phone, etc.)
However, trying to do dead reckoning using the accelerometer will subject you to a lot of compounding error. The accelerometer would need to be insanely accurate for that to work, and since it isn't a common use case, I doubt hardware manufacturers are optimizing for it.
The Android accelerometer is digital: it samples acceleration using a fixed number of "buckets". Let's say there are 256 buckets and the accelerometer is capable of sensing from -2g to +2g. This means your output is quantized in terms of these buckets and will jump around within some set of discrete values.
To calibrate an Android accelerometer, you need to sample a lot more than 1000 points and find the "mode", the value around which the accelerometer is fluctuating. Then find the number of quantization steps by which the output fluctuates, and use that for your filtering.
I recommend Kalman filtering once you have the mode and the +/- fluctuation.
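A sketch of that mode-finding step (plain Java; `samples` stands for a large batch of raw readings captured while the phone is at rest; quantized readings repeat exactly, so simple counting works):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: find the mode of a large batch of raw axis readings.
    static float findMode(float[] samples) {
        Map<Float, Integer> counts = new HashMap<>();
        for (float s : samples) {
            counts.merge(s, 1, Integer::sum);
        }
        float mode = samples[0];
        int best = 0;
        for (Map.Entry<Float, Integer> e : counts.entrySet()) {
            if (e.getValue() > best) {
                best = e.getValue();
                mode = e.getKey();
            }
        }
        return mode;  // subtract this from subsequent readings as the zero offset
    }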
I realise this is quite old, but the issue at hand is not addressed in ANY of the answers given.
What you are seeing is the acceleration of the device including the effect of gravity. If you lay the phone on a flat surface, the sensor will report the acceleration due to gravity, which is approximately 9.80665 m/s², hence the 10 you are seeing. The sensors are inaccurate, but they are not THAT inaccurate! See here for some useful links and information about the sensor you may be after.
You are making the assumption that the accelerometer readings in the X and Y directions, which in this case are entirely hardware noise, would form a normal distribution around your average. Apparently that is not the case.
One thing you can try is to plot these values on a graph and see whether any pattern emerges. If not, then the noise is statistically random and cannot be calibrated against, at least for your particular phone's hardware.

Implementing a Pedometer: How to find a local peak?

I am trying to make a very simple Android pedometer, but so far it's failing pretty badly. I got some advice here and there on the internet, but nothing seems to be working.
I basically set up an acceleration sensor and get the values of the x, y and z axes. After that I calculate their distance from the origin, which is basically d = sqrt(x² + y² + z²), followed by the calculation of their moving average. My idea was that whenever I find a local peak I should count it as a step. The issue is, I have no idea how to find the local peak right away in order to count the step. I am sorry if this seems like a simple problem, but I really have no idea how to go on from here.
Thank you very much.
I tried to implement this, and the approach you are taking is subject to substantial measurement errors. You should just accept that. The reasons are:
a phone can be carried in any location, not only a trouser pocket
phone accelerometers are not precision instruments; they can deviate and "flow" even given exactly the same position in space
a moving average is not the best technique available for this; a better one would use some form of wavelet analysis
one step has two local maxima and two local minima (if I remember correctly)
there is no strict, globally accepted definition of a "step"; this is due to physiology, measurement limits and the various techniques used in the research field
Now to your question:
Plot the signal from the three axes you have (signal vs. time); this will dramatically help you
Define a window of a fixed (or slightly varying) size; a moving window is needed to detect people who walk slower, run or have a disability
Every time you have a new measurement (a usual sampling frequency is about 20-30 Hz), push it onto the tail of the window (your queue of signal measurements) and pop one from the head. This way you always have a queue with the last N measurements
Then, for every new measurement, recalculate your statistics and decide whether the window contains one (or two!) minima, and count that as a step (see the sketch below)
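A sketch of that window (the window size and peak threshold are guesses you would tune against your plotted data; this version counts clear maxima of the magnitude signal, and flipping the comparisons gives you the minima described above):

    import java.util.ArrayDeque;

    // Sketch: sliding-window step detector over d = sqrt(x^2 + y^2 + z^2).
    public class StepDetector {
        private static final int WINDOW_SIZE = 25;    // ~1 s at 25 Hz (tune!)
        private static final double THRESHOLD = 1.5;  // m/s^2 above the mean (tune!)
        private final ArrayDeque<Double> window = new ArrayDeque<>();
        private int steps = 0;

        // Feed each new magnitude sample here.
        public void add(double d) {
            window.addLast(d);
            if (window.size() < WINDOW_SIZE) return;

            double mean = 0;
            for (double v : window) mean += v;
            mean /= window.size();

            // Count a step when the sample in the middle of the window is a
            // local peak clearly above the window mean. Each sample passes
            // through the middle exactly once, so peaks are counted once.
            Double[] w = window.toArray(new Double[0]);
            int mid = WINDOW_SIZE / 2;
            if (w[mid] > mean + THRESHOLD
                    && w[mid] > w[mid - 1] && w[mid] >= w[mid + 1]) {
                steps++;
            }
            window.removeFirst();
        }

        public int getSteps() { return steps; }
    }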
good luck!

How to detect phone orientation relative to direction of movement

Problem: Consider an Android device mounted in a vehicle. We want to measure various things using the accelerometer. These measurements should be relative to the vehicle's coordinate system, so we need to figure out how the device is oriented in relation to the vehicle. The simple solution would be to just average the "early" acceleration after startup, but I'm worried that the first thing the driver will do is leave a parking lot or turn left onto the road, thus describing a curve. It would be feasible to ask the user to start measuring after getting on the road, but what if there is no acceleration at that point?
Question: Can someone suggest a strategy or an algorithm that would do a reasonable job of telling how the phone is oriented in relation to the vehicle? A pointer to some FOSS source that solves a similar problem would be even better.
Notes:
I do not want to use GPS for this as it would complicate things for the user.
We can interact with the user, for example by requesting that the user starts measurements before starting out.
The accelerometer alone would not provide sufficient information for your purpose, I would hazard: the forces acting upon the device, besides the vehicle's acceleration, will include the vibration of the vehicle itself, road inclines, braking and centripetal force from turns.
The amount of sensor data due to all those forces would be impractical to aggregate on a phone, so moving averages or other accumulation approaches would not give even vaguely precise results.
Also, a lot of the acceleration data would be lost between sensor sampling times, even if you were to use the highest available sensor rate.
Recommendation: Use GPS or network positioning information, generate moving averages to account for minor aberrations, and use the result.
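A minimal sketch of that recommendation (the class name, smoothing factor and speed gate are my guesses): smooth the GPS track bearing while the vehicle is actually moving; comparing the result against the device's own azimuth then gives the mounting orientation.

    import android.location.Location;
    import android.location.LocationListener;
    import android.os.Bundle;

    // Sketch: smooth the GPS track bearing while the vehicle is moving.
    public class BearingSmoother implements LocationListener {
        private static final float ALPHA = 0.2f;      // smoothing factor (a guess)
        private static final float MIN_SPEED = 2.0f;  // m/s; bearing is noise below this
        private Float smoothed = null;

        @Override
        public void onLocationChanged(Location loc) {
            if (!loc.hasBearing() || loc.getSpeed() < MIN_SPEED) return;
            float b = loc.getBearing();  // degrees east of true north
            if (smoothed == null) {
                smoothed = b;
            } else {
                // Naive smoothing; a real version must handle the 359->0 wrap-around.
                smoothed += ALPHA * (b - smoothed);
            }
        }

        public Float getBearing() { return smoothed; }

        @Override public void onStatusChanged(String p, int s, Bundle e) { }
        @Override public void onProviderEnabled(String p) { }
        @Override public void onProviderDisabled(String p) { }
    }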
