I am developing an indoor positioning app based on the fingerprinting approach. I am stuck at the point where I store the Wi-Fi RSS values in the database during the training phase. Since RSS values vary significantly, will storing the absolute RSS values lead to large localization errors?
I have read many articles, and http://www.csd.uoc.gr/~hy439/papers/WILL-pre.pdf says that the absolute RSS value of each AP varies, but the difference relationship between APs is maintained. The author introduces a concept called RSS Stacking Difference, which is the cumulative difference between one AP's RSS and that of all the other APs. Can I store this RSS Stacking Difference in the database rather than the absolute values?
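To make the idea concrete, here is how I currently understand that quantity in code (the method name and array representation are my own, not from the paper):

```java
public class RssStacking {
    // Stacking difference for AP i: the cumulative difference between its RSS
    // and the RSS of every other AP heard at the same point. Adding a constant
    // offset to all readings leaves the result unchanged, which seems to be
    // exactly why this is more stable than the absolute values.
    public static double[] stackingDifferences(double[] rss) {
        double[] result = new double[rss.length];
        for (int i = 0; i < rss.length; i++) {
            double sum = 0.0;
            for (int j = 0; j < rss.length; j++) {
                if (j != i) {
                    sum += rss[i] - rss[j];
                }
            }
            result[i] = sum;
        }
        return result;
    }
}
```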
Thanks in advance.
Why don't you try collecting several RSS samples from each reference node for each cell or position of interest (depending on how you segmented the map)? That will mitigate the fluctuation of the RSS values. Then, by taking the mean value per reference node, you will have a set of mean values for each position or segment. In online mode, you determine the position by finding the minimum difference between the collected values and the data sets in the database.
For example, let the position at point (x=100, y=120) be associated with the following fingerprint:
{mac1=xx:xx:xx:xx:xx:xx, rssAverage=-47.54}; {mac2=xx:xx:xx:xx:xx:xx, rssAverage=-60.1}; ...
The values collected in online mode are structured the same way and compared entry by entry.
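In code, the matching step could look roughly like this (a sketch only: I represent a fingerprint as a map from MAC address to mean RSS, keyed by a position string; all names are illustrative):

```java
import java.util.Map;

public class FingerprintMatcher {

    // Returns the key (position) of the stored fingerprint whose mean RSS
    // values are closest, in Euclidean distance, to the online measurement.
    public static String bestMatch(Map<String, Map<String, Double>> database,
                                   Map<String, Double> online) {
        String best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Map.Entry<String, Map<String, Double>> entry : database.entrySet()) {
            double d = distance(entry.getValue(), online);
            if (d < bestDistance) {
                bestDistance = d;
                best = entry.getKey();
            }
        }
        return best;
    }

    private static double distance(Map<String, Double> stored, Map<String, Double> online) {
        double sum = 0.0;
        for (Map.Entry<String, Double> e : stored.entrySet()) {
            // Treat an AP that was not heard online as a very weak signal
            // (-100 dBm) so a missing entry does not break the comparison.
            double observed = online.getOrDefault(e.getKey(), -100.0);
            double diff = e.getValue() - observed;
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}
```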
I hope that is helpful. Good luck!
I have an Android app, and I want to find all records that are highly similar to a selected record. For example, I have a record with values like this:
No  Name         Distance  Rating  Price
1.  Coffee Shop  1.3 km    4.6     40
I want to display all records that are similar to the one above (assuming each field has a weight used to compute a 'similarity score').
What kind of algorithm is most suitable and easy to implement for my case?
From what I have found so far, there are several algorithms that I think could work:
- K-Means Clustering
- K-Nearest Neighbor
- ElasticSearch
- Cosine Similarity
My current inclination is still K-Means, because it is the only one of these I have learnt before.
K-Means will give you groups of data clustered together. But I think k-Nearest Neighbors suits your case better: as I understand it, you receive a query record and want to find the records most similar to it. With k-Nearest Neighbors you can simply adjust how many results to include, say the nearest 5 or 50 neighbors. So I would go with kNN in this case.
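A rough sketch of what a weighted kNN similarity score could look like (the Place fields mirror your example; the weights are made up, and you would want to normalize each attribute to a comparable scale first):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class Knn {

    static class Place {
        String name;
        double distanceKm;
        double rating;
        double price;

        Place(String name, double distanceKm, double rating, double price) {
            this.name = name;
            this.distanceKm = distanceKm;
            this.rating = rating;
            this.price = price;
        }
    }

    // Weighted distance between two places; smaller means more similar.
    // The weights are illustrative -- tune them, and normalize each field
    // (e.g. divide by its maximum) so no single field dominates the score.
    static double dissimilarity(Place a, Place b) {
        double wDistance = 1.0, wRating = 2.0, wPrice = 0.5;
        return wDistance * Math.abs(a.distanceKm - b.distanceKm)
             + wRating   * Math.abs(a.rating - b.rating)
             + wPrice    * Math.abs(a.price - b.price);
    }

    // Returns the k places most similar to the query record.
    static List<Place> nearest(List<Place> all, Place query, int k) {
        return all.stream()
                  .sorted(Comparator.comparingDouble(p -> dissimilarity(p, query)))
                  .limit(k)
                  .collect(Collectors.toList());
    }
}
```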
Use a database like MySQL. SQL has joins and ORDER BY, which let you sort records by a computed similarity score.
I'm working on a program that needs metadata in order to populate some arrays. Let's say I have information like "Countries", "Districts" and a bunch of other metadata, stored in an SQLite database. At some point the program needs to load all the countries and iterate over them to search for one. My question is: what is the best way to proceed? Should I keep the metadata in an array after querying it once, or query the database every time I need it?
Here's some more information so you can evaluate the performance:
Metadata tables (like countries): ~10
Estimated times I need to iterate the metadata: several (~100)
Each array element contains approx. 5 fields (primitive types).
If the amount of data is so large that it affects the memory available for your other data or for other apps, you should keep it in the database and access it dynamically.
If the amount of data is rather small, and it's queried rather often, keeping it in memory is more efficient.
If the amount of data is rather small, and it's queried not very often, it will not make any noticeable difference what you do.
Your particular case is one of these three, but the only way to find out which is to measure the performance yourself.
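For the in-memory case, a lazily loaded cache is the usual pattern. A sketch, assuming a countries table with a name column (both placeholders for your own schema):

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import java.util.ArrayList;
import java.util.List;

public class CountryCache {
    private static List<String> countries; // loaded once, reused afterwards

    public static List<String> getCountries(SQLiteDatabase db) {
        if (countries == null) {
            countries = new ArrayList<>();
            Cursor cursor = db.rawQuery("SELECT name FROM countries", null);
            try {
                while (cursor.moveToNext()) {
                    countries.add(cursor.getString(0));
                }
            } finally {
                cursor.close();
            }
        }
        return countries;
    }
}
```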
My app downloads info from the web (which is, by the way, legal according to the site's rules), and then stores it in an array after parsing each page (64 pages).
Those arrays of strings are converted into arrays of RelativeLayouts to make loading the data into the ScrollView much faster (so that if the same page comes up again, it doesn't have to format all the data again).
I have about 64 arrays of strings and, similarly, 64 arrays of RelativeLayouts. I need so many arrays because each one contains a specific type of data that has to be put into the RelativeLayouts in a different format.
The data consists of statistics and links.
How can I manage such a large amount of data?
I have come up with a way to manage it, but it requires a lot of switch cases and if/else statements.
Any other ideas?
Optimizing your code so that it takes the least amount of time is very specific to the information being processed, so it is difficult for me to optimize a process I have not seen.
However, I will say that this is a long-running task and should run in an AsyncTask. You can also update the views as the data is being processed, so each page is available the moment it has been parsed; that is done in onProgressUpdate().
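A skeleton of what that might look like (parsePage() stands in for your own download/parse logic):

```java
import android.os.AsyncTask;

// Skeleton only: doInBackground parses each page off the UI thread and
// publishes it as soon as it is ready; onProgressUpdate runs on the UI
// thread, so the views can be updated incrementally, page by page.
public class LoadPagesTask extends AsyncTask<Void, String[], Void> {

    @Override
    protected Void doInBackground(Void... unused) {
        for (int page = 0; page < 64; page++) {
            String[] parsed = parsePage(page); // your parsing code
            publishProgress(parsed);           // hand the result to the UI thread
        }
        return null;
    }

    @Override
    protected void onProgressUpdate(String[]... pages) {
        // Runs on the UI thread: pages[0] is the freshly parsed page;
        // add its views to the ScrollView here.
    }

    private String[] parsePage(int page) {
        // Placeholder for the download/parse logic described in the question.
        return new String[0];
    }
}
```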
What would be the best way to do this?
I have an application that receives two values about every 10 seconds (whenever the user touches the screen): the latitude and longitude of a sphere object that the user has touched.
Now I would like to compare those values against values from a file containing the real latitude and longitude of a location, and see how far away the user was.
My file will contain two values and one key (the location) in each entry.
What would be the best way to do this? Would it be to read the whole file at the beginning with a BufferedReader and store the entries in a HashMap<String, List<Float>>, or would I be better off using some kind of database such as SQLite?
Since I'm doing this on a mobile platform, performance is quite important, and that is mainly why I am asking.
Depending on the size of the data you need to compare against, you could either look each value up in a database (slower) or do a binary search in memory (faster).
If you go the in-memory route, note that a HashMap only gives you fast exact-key lookups; for maximum speed with nearest-value searches you would need to keep the values sorted and implement a binary search. Otherwise you will be iterating linearly through the whole collection (which might still be acceptable to you).
I would say that if you have a few thousand entries, do it in memory; if you have more, go down the database route.
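For the in-memory route, a sketch of loading the file and scanning linearly for the nearest entry (the one-entry-per-line file format is an assumption; squared planar distance is used for simplicity, so use the haversine formula or android.location.Location.distanceBetween() if you need real geographic distances):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class LocationStore {

    private final Map<String, float[]> locations = new HashMap<>();

    // Assumes one "name,latitude,longitude" entry per line.
    public void load(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(",");
                locations.put(parts[0], new float[] {
                        Float.parseFloat(parts[1]),
                        Float.parseFloat(parts[2]) });
            }
        } finally {
            reader.close();
        }
    }

    // Linear scan for the closest stored location; for a few thousand
    // entries this is fast enough on a phone.
    public String nearest(float lat, float lon) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, float[]> e : locations.entrySet()) {
            double dLat = e.getValue()[0] - lat;
            double dLon = e.getValue()[1] - lon;
            double d = dLat * dLat + dLon * dLon; // squared planar distance
            if (d < bestDist) {
                bestDist = d;
                best = e.getKey();
            }
        }
        return best;
    }
}
```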
I am looking into writing an Android app that has a database of approximately 2000 longitude/latitude pairs which are effectively hard-coded.
I assume that once my app is installed, I can put this information into the SQLite database, but how should I distribute this information when the app is downloaded?
One option I thought of was some kind of Patricia Trie to minimise the size of the data (the points will be in a number of clusters, rather than evenly distributed), but I'm not sure whether such a collection would work when there are two associated numbers to store, along with perhaps some other information such as place name.
Does anyone have any thoughts, input or suggestions?
Rich
2000 coordinate pairs is not much data.
In fact, I recently tried loading up my web app, which has a similar number of lat/lon points. I realized I needed to optimize a bit, but the load time wasn't completely terrible.
You may want to request only the data you need at any given moment. There must be some other data associated with the lat/lons that can help you with that; or maybe you should only display pins within some bounding box, say ±1 degree in every direction from the center of your map.
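A sketch of that bounding-box query against SQLite (table and column names are placeholders; note this simple form ignores longitude wrap-around at ±180°):

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class PointQuery {

    // Fetches only the points within +/-1 degree of the map center,
    // instead of loading all 2000 rows at once. SQLite's column affinity
    // converts the bound string arguments back to numbers for comparison.
    public static Cursor pointsNear(SQLiteDatabase db, double lat, double lon) {
        String sql = "SELECT name, lat, lon FROM points "
                   + "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?";
        String[] args = {
                String.valueOf(lat - 1), String.valueOf(lat + 1),
                String.valueOf(lon - 1), String.valueOf(lon + 1) };
        return db.rawQuery(sql, args);
    }
}
```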