Finding the direction between two Bluetooth devices (localization) - Android

I have an app that scans for a specific UUID that another phone is broadcasting, measures the signal strength between the two devices, and tells you whether you're getting closer to or further away from the other phone. I want to add an arrow for direction. I know you can use triangulation if you have 3 points, but I want to get something somewhat accurate with 2 points.
Any algorithm or suggestion that would help?
My current idea (since for this app's use case one node will be relatively still) is to have an algorithm that learns as you walk: if you get further away, the arrow knows to disregard that direction and keeps refining itself as you walk in different directions.
I found a bunch of research papers on the topic, but I'm not an electrical engineer so it's easy to get lost. I also read this post and understand the many pitfalls: How to measure distance between two iPhone devices using Bluetooth?
Thanks!

Such a solution is extremely convoluted.
Getting the direction of the signal requires far more data than normal app usage can provide; the user would have to slowly rotate the phone while taking many samples in all directions.
Getting the relative distance, as you said, like "getting closer" or "going farther away", is kind of possible within a 12-meter range using the formula posted below. Above 12 meters it gets very unreliable. But it's quite complex and requires a moving average with a window length you consider adequate.
You can calculate the relation between RSSI and distance using the following formula:
RSSI = -(10*n*log10(d) + A)
In this case,
n = path loss exponent. Since you don't want an exact distance, just a way to check whether you're getting closer or further away, you can use 2; I found this the most realistic value in most cases.
A = the magnitude of the RSSI measured at a 1-meter distance. You would normally measure this in advance for your hardware, but since it varies a lot, just use 60 (i.e., about -60 dB at 1 meter, which is about average in most situations).
So getting a crudely guessed distance in meters can be achieved by the formula:
distance = 10 ^ ((-RSSI - 60) / (10 * 2))
This isn't very exact, but it's sufficient to tell whether the other device is getting closer or farther within a limited range.
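A minimal Java sketch of the above, assuming n = 2, A = 60, and a simple fixed-length moving average over the raw readings; the window length and class names are illustrative, not from the original answer:

import java.util.ArrayDeque;
import java.util.Deque;

public class RssiDistanceEstimator {
    private static final int WINDOW = 20;  // moving-average length; tune to taste
    private static final double A = 60.0;  // |RSSI| at 1 m, ideally measured in advance
    private static final double N = 2.0;   // path loss exponent

    private final Deque<Integer> window = new ArrayDeque<>();
    private double sum = 0;

    // Feed each raw RSSI reading (e.g., -72); returns a smoothed value.
    public double smooth(int rssi) {
        window.addLast(rssi);
        sum += rssi;
        if (window.size() > WINDOW) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }

    // Crude distance in meters: d = 10 ^ ((-RSSI - A) / (10 * n))
    public double distance(double smoothedRssi) {
        return Math.pow(10.0, (-smoothedRssi - A) / (10.0 * N));
    }
}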

Related

Accurate distance with GPS?

I'm developing an iOS/Android app that tracks the mileage a user has driven in his car.
Even though the task seems pretty trivial, there are 2 problems:
1) Mileage is not accurate compared to the car's odometer (odometer: 10 mi, app: 8.5 mi).
2) When the user stays still outside the car, mileage keeps accumulating (it can add up to around 4 mi within 30 minutes).
Is there any "easy" fix for this without adding complicated filtering, etc.?
There are two small but significant things you can do:
For each GPS sample, check its accuracy. If it's over some threshold (say 20 meters), ignore it.
Add a method that detects whether the device is static or not. You can do this by reading the device's accelerometer: if the delta between two readings is bigger than some threshold, the car is moving; if it's too small, ignore the GPS. You'll have to try some values until you find the right threshold. A sketch of both checks follows.
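A minimal sketch of both checks, assuming illustrative thresholds (ACCURACY_LIMIT, MOTION_THRESHOLD) and that you already receive Location and accelerometer callbacks:

import android.hardware.SensorEvent;
import android.location.Location;

public class MileageFilter {
    private static final float ACCURACY_LIMIT = 20f;    // meters; tune per device
    private static final float MOTION_THRESHOLD = 0.5f; // m/s^2 delta; tune per device

    private float[] lastAccel;
    private boolean moving;

    // Call from your SensorEventListener for TYPE_ACCELEROMETER.
    public void onAccelerometer(SensorEvent event) {
        if (lastAccel != null) {
            float dx = event.values[0] - lastAccel[0];
            float dy = event.values[1] - lastAccel[1];
            float dz = event.values[2] - lastAccel[2];
            moving = Math.sqrt(dx * dx + dy * dy + dz * dz) > MOTION_THRESHOLD;
        }
        lastAccel = event.values.clone();
    }

    // Returns true if this GPS sample should contribute to mileage.
    public boolean accept(Location location) {
        return moving && location.hasAccuracy()
                && location.getAccuracy() <= ACCURACY_LIMIT;
    }
}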
For question 1: in the US, vehicle speedometers are only required to be within 5 mph of the actual speed at 50 mph, and my experience shows most vehicles are more erroneous than the law requires. That kind of tolerance could easily become the 1.5 miles you saw.
In Europe, vehicle odometers are allowed to overestimate by up to 7%.
My car overestimates by about 3%.
There are simple solutions that work for cars, and they have been posted here on Stack Overflow multiple times, including by myself.
There is no simple solution for pedestrians.
Answer to question 2: the problem almost certainly comes from the accuracy of the GPS location.
Android Location object comes with an estimated accuracy for the given coordinates.
Suppose you stay at the absolute position (0,0) without moving. The Android device's GPS could produce the following stream of Locations:
(1,1) with an accuracy of 2m
(-2,3) with an accuracy of 5m
(0,0) with an accuracy of 1m
etc...
If you just keep adding the distances between successive Locations, the sum will increase indefinitely, even though you don't move.
One solution could be to take a new Location from the stream into account only if its accuracy is small enough compared to the distance to the last accepted Location.
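A minimal sketch of that idea; the exact acceptance rule (combined accuracy of both fixes must be smaller than the hop) and the class name are illustrative assumptions:

import android.location.Location;

public class DistanceAccumulator {
    private Location last;
    private double totalMeters;

    public void onNewLocation(Location current) {
        if (last == null) {
            last = current;
            return;
        }
        float distance = last.distanceTo(current);
        // Only count the hop if the combined uncertainty of the two
        // fixes is clearly smaller than the distance between them.
        if (distance > last.getAccuracy() + current.getAccuracy()) {
            totalMeters += distance;
            last = current;
        }
    }

    public double getTotalMeters() {
        return totalMeters;
    }
}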

Tracking linear movement on mobile devices [duplicate]

I was looking into implementing an Inertial Navigation System for an Android phone, which I realise is hard given the accelerometer accuracy and the constant fluctuation of readings.
To start with, I set the phone on a flat surface and sampled 1000 accelerometer readings in the X and Y directions (parallel to the table, so no gravity acting in these directions). I then averaged these readings and used this value to calibrate the phone (subtracting this value from each subsequent reading).
I then tested the system by again placing it on the table and sampling 5000 accelerometer readings in the X and Y directions. I would expect, given the calibration, that these accelerations should add up to 0 (roughly) in each direction. However, this is not the case, and the total acceleration over 5000 iterations is nowhere near 0 (averaging around 10 on each axis).
I realise without seeing my code this might be difficult to answer but in a more general sense...
Is this simply an example of how inaccurate the accelerometer readings are on a mobile phone (HTC Desire S), or is it more likely that I've made some errors in my coding?
You get position by integrating the linear acceleration twice but the error is horrible. It is useless in practice.
Here is an explanation why (Google Tech Talk) at 23:20. I highly recommend this video.
It is not the accelerometer noise that causes the problem but the gyro white noise, see subsection 6.2.3 Propagation of Errors. (By the way, you will need the gyroscopes too.)
As for indoor positioning, I have found these useful:
RSSI-Based Indoor Localization and Tracking Using Sigma-Point Kalman Smoothers
Pedestrian Tracking with Shoe-Mounted Inertial Sensors
Enhancing the Performance of Pedometers Using a Single Accelerometer
I have no idea how these methods would perform in real-life applications or how to turn them into a nice Android app.
A similar question is this.
UPDATE:
Apparently there is a newer version of Oliver J. Woodman's "An introduction to inertial navigation" mentioned above: his PhD thesis,
Pedestrian Localisation for Indoor Environments
I am just thinking out loud, and I haven't played with the Android accelerometer API yet, so bear with me.
First of all, traditionally, to get navigation from inertial sensors you would need six axes: accelerations in X, Y, and Z, but also rotations Xr, Yr, and Zr. Without the rotation data, you don't have enough information to establish a velocity vector unless you assume the device never changes its attitude, which would be pretty limiting.
Oh, and you know that INS drifts with the rotation of the earth, right? So there's that too. One hour later and you're mysteriously climbing on a 15° slope into space. That's assuming you had an INS capable of maintaining location that long, which a phone can't do yet.
A better way to utilize accelerometers, even a 3-axis accelerometer, for navigation would be to tie into GPS to calibrate the INS whenever possible. Where GPS falls short, INS complements it nicely. GPS can suddenly shoot you off 3 blocks away because you got too close to a tree. INS isn't great, but at least it knows you weren't hit by a meteor.
What you could do is log the phone's accelerometer data, and a lot of it. Like weeks' worth. Compare it with good (I mean really good) GPS data and use data mining to establish correlations between trends in the accelerometer data and the known GPS data. (Pro tip: you'll want to check the GPS almanac for days with good geometry and a lot of satellites. Some days you may only have 4 satellites, and that's not enough.)

What you might be able to do is find that when a person is walking with their phone in their pocket, the accelerometer data logs a very specific pattern. Based on the data mining, you establish a profile for that device, with that user, and what sort of velocity that pattern represents when it had GPS data to go along with it. You should be able to detect turns, climbing stairs, sitting down (calibration to 0 velocity time!) and various other tasks. How the phone is being held would need to be treated as a separate data input entirely.

I smell a neural network being used to do the data mining: something blind to what the inputs mean. The algorithm would only look for trends in the patterns, not really paying attention to the actual measurements of the INS. All it would know is that, historically, when this pattern occurs the device is traveling at 2.72 m/s X, 0.17 m/s Y, 0.01 m/s Z, so the device must be doing that now, and it would move the piece forward accordingly. It's important that it's completely blind, because a phone in your pocket might be oriented in one of 4 different orientations, and 8 if you switch pockets, and there are many ways to hold your phone as well. We're talking about a lot of data here.
You'll obviously still have a lot of drift, but I think you'd have better luck this way, because the device will know when you stopped walking and the positional drift will not perpetuate. It knows you're standing still based on historical data. Traditional INS systems don't have this feature: the drift perpetuates into all future measurements and compounds exponentially. Ungodly accuracy, or a secondary navigation source to check against at regular intervals, is absolutely vital with traditional INS.
Each device and each person would have to have their own profile. It's a lot of data and a lot of calculations. Everyone walks at different speeds, with different steps, and puts their phone in different pockets, etc. Implementing this in the real world would surely require the number-crunching to be handled server-side.
If you did use GPS for the initial baseline, part of the problem is that GPS tends to have its own migrations over time, but they are non-perpetuating errors. Sit a receiver in one location and log the data: without WAAS corrections, you can easily get location fixes drifting in random directions 100 feet around you; with WAAS, maybe down to 6 feet. You might actually have better luck with a sub-meter RTK system on a backpack to at least get the ANN's algorithm down.
You will still have angular drift with the INS using my method. This is a problem. But if you went so far as to build an ANN to pore over weeks' worth of GPS and INS data from n users, and actually got it working to this point, you obviously don't mind big data. Keep going down that path and use more data to help resolve the angular drift: people are creatures of habit. We pretty much do the same things, like walk on sidewalks, through doors, and up stairs, and don't do crazy things like walk across freeways, through walls, or off balconies.
So let's say you are taking a page from Big Brother and start storing data on where people are going. You can start mapping where people would be expected to walk. It's a pretty sure bet that if the user starts walking up stairs, she's at the same base of stairs that the person before her walked up. After 1000 iterations and some least-squares adjustments, your database pretty much knows where those stairs are with great accuracy. Now you can correct angular drift and location as the person starts walking: when she hits those stairs, or turns down that hall, or travels down a sidewalk, any drift can be corrected. Your database would contain sectors weighted by the likelihood that a person would walk there, or that this user has walked there in the past. Spatial databases are optimized for this, using divide and conquer to allocate only the sectors that are meaningful. It would be sort of like those MIT projects where the laser-equipped robot starts off with a black image and paints the maze into memory by taking every turn, illuminating where all the walls are.
Areas of high traffic would get higher weights, and areas where no one has ever been get 0 weight. Higher-traffic areas get higher resolution. You would essentially end up with a map of everywhere anyone has been and use it as a prediction model.
I wouldn't be surprised if you could determine what seat a person took in a theater using this method. Given enough users going to the theater and enough resolution, you would have data mapping each row of the theater and how wide each row is. The more people visit a location, the higher the fidelity with which you could predict where a person is located.
Also, I highly recommend you get a (free) subscription to GPS World magazine if you're interested in the current research into this sort of stuff. Every month I geek out with it.
I'm not sure how great your offset is, because you forgot to include units. ("Around 10 on each axis" doesn't say much. :P) That said, it's still likely due to inaccuracy in the hardware.
The accelerometer is fine for things like determining the phone's orientation relative to gravity, or detecting gestures (shaking or bumping the phone, etc.)
However, trying to do dead reckoning using the accelerometer is going to subject you to a lot of compound error. The accelerometer would need to be insanely accurate otherwise, and this isn't a common use case, so I doubt hardware manufacturers are optimizing for it.
The Android accelerometer is digital: it samples acceleration using a fixed number of "buckets". Let's say there are 256 buckets and the accelerometer is capable of sensing from -2g to +2g. This means that your output will be quantized in terms of these buckets, jumping around some set of values.
To calibrate an Android accelerometer, you need to sample a lot more than 1000 points and find the mode around which the accelerometer is fluctuating. Then find by how many digital points the output fluctuates and use that for your filtering.
I recommend Kalman filtering once you get the mode and +/- fluctuation.
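A minimal sketch of a 1-D Kalman filter under the simplest possible model (a roughly constant value plus noise); the q and r constants are illustrative and would be tuned from the fluctuation measured above:

public class Kalman1D {
    private double estimate;        // current filtered value
    private double errorCov = 1.0;  // uncertainty of the estimate
    private final double q;         // process noise: how fast the true value may drift
    private final double r;         // measurement noise: sensor jitter variance

    public Kalman1D(double initialValue, double q, double r) {
        this.estimate = initialValue;
        this.q = q;
        this.r = r;
    }

    public double update(double measurement) {
        errorCov += q;                           // predict: uncertainty grows over time
        double gain = errorCov / (errorCov + r); // how much to trust this sample
        estimate += gain * (measurement - estimate);
        errorCov *= (1.0 - gain);                // correct: uncertainty shrinks
        return estimate;
    }
}

For instance, new Kalman1D(firstSample, 0.001, 0.1) would track a slowly changing value while heavily smoothing jitter; both constants are assumptions to tune.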
I realise this is quite old, but the issue at hand is not addressed in ANY of the answers given.
What you are seeing is the acceleration of the device, including the effect of gravity. If you lay the phone on a flat surface, the sensor will report the acceleration due to gravity, which is approximately 9.80665 m/s², hence the 10 you are seeing. The sensors are inaccurate, but they are not THAT inaccurate! See here for some useful links and information about the sensor you may be after.
You are making the assumption that the accelerometer readings in the X and Y directions, which in this case are entirely hardware noise, form a normal distribution around your average. Apparently that is not the case.
One thing you can try is to plot these values on a graph and see whether any pattern emerges. If not, then the noise is statistically random and cannot be calibrated against, at least for your particular phone hardware.

Finding the cartesian coordinates of another smartphone?

Considering I have two smartphones, A and B. If I am holding smartphone A, is there a way to determine the location of B in relation to myself?
So if we had the situation of this image:
it would tell me B is at position (2, 1).
Inventive methods like using the strength of Wi-Fi signals to get position are more than welcome. Could I also determine whether there is a wall between the two phones?
As far as I understand, both Bluetooth and Wi-Fi signals propagate radially in all directions, so while you may be able to measure the distance between the two devices, that alone won't give you a position: B could be on your "side" of the circle or at any other point equidistant from the source of the signal.
While GPS may be the obvious solution since it provides exactly what you're looking for, I'm not sure if you're including this as an option. Once you get the two coordinate sets for the devices, it's a matter of calculating the offset (N/S and E/W) from device 1.
That said, I'm wary of the accuracy of GPS here: you included the Bluetooth tag in the question, and since Bluetooth has a range of around 15-30 feet (class 2-3) while GPS has an error margin of 25-35 feet, this may not be good either.
If you do manage to get a (Bluetooth) connection between the two devices, you'd already know you're in that range, but not in what direction. You can get a signal strength measure from Android 2.1: How do I poll the RSSI value of an existing Bluetooth connection? But there again, I'm not sure how to detect what direction the user is in relative to you, just how close he is, in any direction. Hell, the other device could be on top of you or under you and you'd get virtually the same reading as next to you, given equal distances.
This is all with a "static" approach, meaning both devices are stationary. However, if you measure that value, then take a step to your left and re-measure, you can tell whether you're closer to or further away from the source. So with a little trial and error, with one of the devices moving, you could determine a relative position. This may not be useful for you, though, since you'd either need to tell the phone manually that you moved left and to re-measure, or use something more complicated like monitoring the phone's accelerometer, which could tell in what direction the phone moved, and map that against the strength of the signal.
Am I losing my mind? Probably.
No answer as far as I'm concerned for now, just thoughts. Depending on what the application will do, there may be other viable approaches. This is my brain-dump answer, so hopefully someone else can read it and come up with a more elaborate answer instead of rambling thoughts.
If the distance from A to B is more than a few metres, and you can get the GPS location of both A and B, you can easily calculate the distance and bearing between them using the Location.distanceTo() and Location.bearingTo() methods.
Depending on the physical environment, it is probable that two GPSs which are physically close together will be using the same satellites for calculation, and that the errors will be similar in both. So in some cases using GPS may work for small distances.
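A minimal sketch using those two methods; how you obtain each phone's Location object is up to you:

import android.location.Location;

public class RelativePosition {
    // Returns { distance in meters, bearing in degrees east of true north }
    // from locationA to locationB.
    public static float[] relativeTo(Location locationA, Location locationB) {
        float distanceMeters = locationA.distanceTo(locationB);
        float bearingDegrees = locationA.bearingTo(locationB); // range -180..+180
        return new float[] { distanceMeters, bearingDegrees };
    }
}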

android detect distance from router

I need to estimate the direction of and distance from the router in an Android application.
In this way, for example, I can know which room of my house I am in.
Any ideas?
If you go to each room in your house and take average measurements of the signal strength, rate, etc., basically recording as much reference information as possible in every room and in various parts of each room, you could make a reasonable estimate.
Bearing in mind that all this information varies quite significantly even in different parts of a single room, you would probably end up with a reading that suggests the likelihood you are in any particular room, based perhaps on the number of times that combination of data occurred in different parts of that room.
It's quite a lot of work for a fairly inaccurate result, but it would be good fun as a development project if that's what you're after. The principle is that you need to take reference measurements and build a database to consult for suggestions with varying probability; a sketch of the matching step follows.
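A minimal sketch of that fingerprinting idea, assuming one averaged RSSI per room; a real deployment would fingerprint several access points and several spots per room:

import java.util.HashMap;
import java.util.Map;

public class RoomFingerprinter {
    // Reference data: average RSSI (dBm) measured in each room beforehand.
    private final Map<String, Double> referenceRssi = new HashMap<>();

    public void addReference(String room, double averageRssi) {
        referenceRssi.put(room, averageRssi);
    }

    // Guess the room whose reference reading is closest to the current one.
    public String guessRoom(double currentRssi) {
        String best = null;
        double bestDiff = Double.MAX_VALUE;
        for (Map.Entry<String, Double> entry : referenceRssi.entrySet()) {
            double diff = Math.abs(entry.getValue() - currentRssi);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = entry.getKey();
            }
        }
        return best;
    }
}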
You may be able to estimate distance, but it's not going to be reliable. The thing to do is to wander around your building, noting power levels at each point; you can then work out possible locations based on the current power level. You won't get a single value for your distance, though, because the signal will be affected by walls and other obstructions.
Automating any of this is left as an exercise to the reader :).
Unfortunately I don't think there is a way to find the direction of the signal, but there is a formula to estimate the distance. Just to let you know, the results are quite reliable if there are no obstacles between the Android device and the router (around 20 cm divergence), but if you stand between the router and the device, the results can be off by around 10 meters.
This is the formula (free-space path loss solved for distance, in meters):
public double getDistance(double signalLevelInDb, double freqInMHz) {
    // signalLevelInDb is the measured RSSI in dBm (negative, e.g. -60);
    // freqInMHz is the channel frequency (e.g. 2412 for Wi-Fi channel 1).
    double exp = (27.55 - (20 * Math.log10(freqInMHz)) - signalLevelInDb) / 20.0;
    return Math.pow(10, exp);
}
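For example, an RSSI of -60 dBm on Wi-Fi channel 1 (2412 MHz, an illustrative choice) works out to roughly 10 meters:

double meters = getDistance(-60, 2412); // ~9.9 m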

How to Calibrate Android Accelerometer & Reduce Noise, Eliminate Gravity

So, I've been struggling with this problem for some time, and haven't had any luck tapping the wisdom of the internets and related SO posts on the subject.
I am writing an Android app that uses the ubiquitous Accelerometer, but I seem to be getting an incredible amount of "noise" even while at rest, and can't seem to figure out how to deal with it as my readings need to be relatively accurate. I thought that maybe my phone (HTC Incredible) was dysfunctional, but the sensor seems to work well with other games and apps I've played.
I've tried to use various "filters", but I can't seem to wrap my mind around them. I understand that gravity must be dealt with in some way, and maybe that's where I am going wrong. Currently I have tried this, adapted from an SO answer, which refers to an example from the iPhone SDK:
// Low-pass filter: accel[] converges on the gravity component.
accel[0] = event.values[0] * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
accel[1] = event.values[1] * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);
// High-pass result: subtract the gravity estimate to leave linear acceleration.
double x = event.values[0] - accel[0];
double y = event.values[1] - accel[1];
The poster says to "play with" the kFilteringFactor value (kFilteringFactor = 0.1f in the example) until satisfied. Unfortunately I still seem to get a lot of noise, and all this seems to do is make the readings come in as tiny decimals, which doesn't help me all that much, and it appears to just make the sensor less sensitive. The math centers of my brain are also atrophied from years of neglect, so I don't completely understand how this filter is working.
Can someone explain to me in some detail how to go about getting a useful reading from the accelerometer? A succinct tutorial would be an incredible help, as I haven't found a really good one (at least aimed at my level of knowledge). I get frustrated because I feel like all of this should be more apparent to me. Any help or direction would be greatly appreciated, and of course I can provide more samples from my code if needed.
I hope I'm not asking to be spoon-fed too much; I wouldn't be asking unless I'd been trying to figure it out for a while. It also looks like there is some interest from other SO members.
To get a useful reading from the accelerometer you need the magnitude of the acceleration vector: magnitude = sqrt(x*x + y*y + z*z). When the phone is at rest, this magnitude will be that of gravity, about 9.8 m/s². So if you subtract that (SensorManager.GRAVITY_EARTH), a phone at rest will read roughly 0. As for noise, Blrfl might be right about cheap accelerometers: even when my phone is at rest, the reading continuously flickers by a few fractions of a m/s². You could just set a small threshold, e.g. 0.4 m/s², and if the magnitude doesn't go over that, treat the phone as at rest.
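A minimal sketch of that check; the 0.4 m/s² threshold is the answer's example value and worth tuning per device:

import android.hardware.SensorEvent;
import android.hardware.SensorManager;

public class RestDetector {
    private static final float THRESHOLD = 0.4f; // m/s^2; tune per device

    // Returns true if the phone appears to be at rest.
    public static boolean isAtRest(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        return Math.abs(magnitude - SensorManager.GRAVITY_EARTH) < THRESHOLD;
    }
}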
Partial answer:
Accuracy. If you're looking for high accuracy, the inexpensive accelerometers you find in handsets won't cut the mustard. For comparison, a three-axis sensor suitable for industrial or scientific use runs north of $1,500 for just the sensor; adding the hardware to power it and turn its readings into something a computer can use doubles the price. The sensor in a handset runs well below $5 in quantity.
Noise. Cheap sensors are inaccurate, and inaccuracy translates to noise. An inaccurate sensor that isn't moving won't always show zeros; it will show values on either side within some range. About the best you can do is characterize the sensor while motionless to get some idea how noisy it is, and use that to round your measurements to a less-precise scale based on expected error. (In other words, if it's within ±x m/s² of zero, it's safe to say the sensor's not moving, but you can't be precisely sure, because it could be moving very slowly.) You'll have to do this on every device, because they don't all use the same accelerometer and they all behave differently. I guess that's one advantage the iPhone has: the hardware's pretty much homogeneous.
Gravity. There's some discussion in the SensorEvent documentation about factoring gravity out of what the accelerometer says. You'll notice it bears a lot of similarity to the code you posted, except that it's clearer about what it's doing. :-)
HTH.
How do you deal with jitteriness? You smooth the data. Instead of taking the raw sequence of values from the sensor, you average them on an ongoing basis, and the new sequence formed becomes the values you use. This moves each jittery value closer to the moving average. Averaging necessarily gets rid of quick variations in adjacent values, which is why people use the terminology low (frequency) pass filtering: data that originally may have varied a lot per sample (or unit time) now varies more slowly.
E.g., instead of using the values 10, 6, 7, 11, 7, 10, you can average these in many ways. For example, we can compute the next value from an equal weight of the running average (i.e., of your last processed data point) and the next raw data point. Using a 50-50 mix for the above numbers, we'd get 10, 8, 7.5, 9.25, 8.125, 9.0625. This new sequence, our processed data, would be used in lieu of the noisy data. And we could use a different mix than 50-50, of course.
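A minimal sketch of that running average (an exponential moving average with a configurable mix, where 0.5 is the 50-50 case above):

public class ExponentialSmoother {
    private final double mix; // weight of the new sample, e.g. 0.5 for 50-50
    private Double smoothed;  // null until the first sample arrives

    public ExponentialSmoother(double mix) {
        this.mix = mix;
    }

    public double next(double sample) {
        smoothed = (smoothed == null) ? sample
                : mix * sample + (1 - mix) * smoothed;
        return smoothed;
    }
}

Feeding 10, 6, 7, 11, 7, 10 through next() with mix 0.5 reproduces the sequence 10, 8, 7.5, 9.25, 8.125, 9.0625.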
As an analogy, imagine you are reporting where a certain person is located using only your eyesight. You have a good view of the wider landscape, but the person is engulfed in a fog. You will see pieces of the body that catch your attention .. a moving left hand, a right foot, shine off eyeglasses, etc, that are jittery, BUT each value is fairly close to the true center of mass. If we run some sort of running averaging, we'd get values that approach the center of mass of that target as it moves through the fog and are in effect more accurate than the values we (the sensor) reported which was made noisy by the fog.
Now it seems like we are losing potentially interesting data to get a boring curve. It makes sense though. If we are trying to recreate an accurate picture of the person in the fog, the first task is to get a good smooth approximation of the center of mass. To this we can then add data from a complementary sensor/measuring process. For example, a different person might be up close to this target. That person might provide very accurate description of the body movements, but might be in the thick of the fog and not know overall where the target is ending up. This is the complementary position to what we first got -- the second data gives detail accurately without a sense of the approximate location. The two pieces of data would be stitched together. We'd low pass the first set (like your problem presented here) to get a general location void of noise. We'd high pass the second set of data to get the detail without unwanted misleading contributions to the general position. We use high quality global data and high quality local data, each set optimized in complementary ways and kept from corrupting the other set (through the 2 filterings).
Specifically, we'd mix in gyroscope data -- data that is accurate in the local detail of the "trees" but gets lost in the forest (drifts) -- into the data discussed here (from accelerometer) which sees the forest well but not the trees.
To summarize, we low-pass data from sensors that are jittery but stay close to the "center of mass". We combine this smooth base value with data that is accurate in the detail but drifts, so this second set is high-pass filtered. We get the best of both worlds as we process each group of data to clean it of its incorrect aspects. For the accelerometer, we smooth/low-pass the data effectively by running some variation of a running average on its measured values. If we were treating the gyroscope data, we'd do math that effectively keeps the detail (accepts deltas) while rejecting the accumulated error that would eventually grow and corrupt the smooth accelerometer curve. How? Essentially, we use the actual gyro values (not averages), but use only a small number of samples (of deltas) apiece when deriving our final clean values. Using a small number of deltas keeps the overall curve mostly along the same averages tracked by the low-pass stage (the averaged accelerometer data), which forms the bulk of each final data point.
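A minimal sketch of that accelerometer/gyro fusion as a complementary filter for a single tilt angle, assuming you already derive a (noisy, drift-free) angle from the accelerometer and have a gyro rate in rad/s; the coefficient is illustrative:

public class ComplementaryFilter {
    private static final double ALPHA = 0.98; // trust in the gyro's short-term detail
    private double angle; // fused tilt angle, radians

    // accelAngle: noisy but drift-free angle derived from the accelerometer.
    // gyroRate:   precise but drifting angular velocity (rad/s) from the gyro.
    // dt:         seconds since the last update.
    public double update(double accelAngle, double gyroRate, double dt) {
        // High-pass the gyro (integrate only the latest delta) and
        // low-pass the accelerometer (pull slowly toward its angle).
        angle = ALPHA * (angle + gyroRate * dt) + (1 - ALPHA) * accelAngle;
        return angle;
    }
}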
