Google Goggles is a new Android application designed to search the internet by photo.
You can upload a photo to the application, and it will find related profiles and other links on the internet.
I want to know which mechanism they are using behind this.
At first I thought about color intensity, but that probably would not work. Then I thought about the distribution of shapes along the x and y axes combined with color intensity, but I don't think that is right either.
Can anyone tell me which technology they are using in the back end?
There are several possibilities. They may use neural networks, as rofls says, but I think they are using data mining with a genetic algorithm; I believe that method is more effective for searching and clustering over very big data. Here is a very good explanation of data mining using a genetic algorithm, and another paper about it: Incremental Clustering in Data Mining.
Yes, they are using machine learning. Most likely something like neural networks, where there is essentially a "black box" that predicts the correct answer. See this for an example: Where to start Handwritten Recognition using Neural Network? They train their neural networks on huge servers, though, which is why they can deal with complicated images and the like in a way our computers could not.
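To make the "black box" idea concrete, here is a minimal sketch (Python with Keras) that trains a small network on MNIST handwritten digits. It is only an illustration of the technique, not Google's actual pipeline:

    # Illustrative sketch: a small "black box" classifier for
    # handwritten digits, trained on MNIST.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 digit classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3)
    print(model.evaluate(x_test, y_test))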
I'm trying to detect the class and location of handwritten alphanumeric characters in a given image. After looking around, I found that the best way to achieve this is with deep neural networks. Is there a way to do it using TensorFlow? (I would like to come up with something fast, as it will be used in a live detection app on Android and iOS.) More importantly, what kind of dataset can I use for this? Would EMNIST be good for it?
Or could I use YOLOv2, but with alphanumerics? If that's possible, could EMNIST be used?
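For reference, a minimal classification sketch assuming the EMNIST "balanced" split that ships with tensorflow_datasets; the detection/localization (YOLO-style) part would need more work on top of this:

    # Sketch: train a tiny CNN on EMNIST (letters + digits). This only
    # covers classification, not locating characters within an image.
    import tensorflow as tf
    import tensorflow_datasets as tfds

    (train_ds,), info = tfds.load("emnist/balanced", split=["train"],
                                  as_supervised=True, with_info=True)

    def prep(image, label):
        return tf.cast(image, tf.float32) / 255.0, label

    train_ds = train_ds.map(prep).batch(128)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(info.features["label"].num_classes,
                              activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=1)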
I am working on an Android mobile app with a feature that finds matches according to interest and location. Many dating apps already do something similar; for example, Tinder matches based on location, gender, age, and so on.
I do not want to reinvent the wheel if it has been done already. I have searched on Google, and some suggested using a clustering algorithm for this: Algorithm for clustering people with similar interests, User similarities algorithm.
Let's say I have data in this JSON format for users:
User1: {location: "Delhi, India", interests: ["Jogging", "Travelling", "Praying"] }
User2: {location: "Noida, India", interests: ["Running", "Eating", "Praying"] }
User3: {location: "Bangalore, India", interests: ["Exercise", "Visiting new places", "Chanting"] }
I am writing a matching algorithm that should satisfy the criteria below:
If user1 has an interest in "Jogging" and user2 has an interest in "Running", the two profiles should match, since jogging and running are both kinds of exercise. Matching should also be location-aware, with the nearest users on top.
The algorithm, when running at scale, should be fairly performant. This means I'd like to avoid comparing each user individually to each other user. For N users this is an O(N^2) operation. Ideally, I'd like to develop some sort of "score" that I can generate for each user in isolation since this involves looping through all users only once. Then I can find other users with similar scores and determine the best match based off that.
Can anyone suggest an implementation showing how I can achieve this with the help of firebase-cloud-function and firebase-database?
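For reference, one way to make the per-user "score" idea above concrete is a locality-sensitive signature: each user's interests are hashed into a fixed-length vector once, then reduced to a small bucket key, so only users in the same bucket need pairwise comparison. A rough Python sketch; the interest canonicalization map is purely hypothetical:

    # One-pass per-user signatures: hashing trick + random-projection LSH.
    # Users in the same bucket are candidate matches, avoiding full O(N^2).
    import hashlib
    import numpy as np

    DIM = 64       # length of the interest feature vector
    N_PLANES = 8   # random hyperplanes -> 8-bit bucket signature
    planes = np.random.default_rng(42).normal(size=(N_PLANES, DIM))

    # Hypothetical canonicalization so "Jogging" and "Running" share a token.
    CANONICAL = {"jogging": "exercise", "running": "exercise",
                 "praying": "spiritual", "chanting": "spiritual"}

    def interest_vector(interests):
        v = np.zeros(DIM)
        for word in interests:
            token = CANONICAL.get(word.lower(), word.lower())
            v[int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM] += 1.0
        return v

    def signature(interests):
        # Sign of the projection onto each hyperplane -> compact bucket key.
        return tuple(bool(s) for s in (planes @ interest_vector(interests)) > 0)

    print(signature(["Jogging", "Praying"]) == signature(["Running", "Chanting"]))  # True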
I think hard-coding similarity is the wrong approach. FYI, none of the major search engines rely on such mappings.
A better approach is to be more data-driven. Start with an ad hoc methodology, and once you have sufficient data, build machine learning models to rank matches. This way you do not have to assume anything.
For the location, have some kind of radius (preferably one the user can set) and match people within that radius.
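For illustration, a minimal radius filter using the haversine great-circle distance (the coordinates below are approximate):

    # Keep only users within a radius of the querying user.
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

    me = (28.61, 77.21)                       # Delhi (approximate)
    candidates = {"User2": (28.54, 77.39),    # Noida
                  "User3": (12.97, 77.59)}    # Bangalore
    radius_km = 50
    nearby = {u: p for u, p in candidates.items()
              if haversine_km(*me, *p) <= radius_km}
    print(nearby)  # User2 is within 50 km of Delhi; User3 is not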
First of all, I would get rid of the redundant features in your dataset; jogging and running could be one feature instead of two. After that, you can use the K-means algorithm to group the data in an unsupervised way (see the sketch after the link below).
To learn more about K-means, you can go to this link:
https://www.coursera.org/learn/machine-learning/lecture/93VPG/k-means-algorithm
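To illustrate, a small sketch using scikit-learn's KMeans on binary interest features; the encoding below is made up, with jogging/running already merged into one "exercise" column as suggested:

    # Cluster users by interest after merging redundant features.
    # Columns: [exercise, travelling, praying/chanting, eating] (made up).
    import numpy as np
    from sklearn.cluster import KMeans

    users = np.array([
        [1, 1, 1, 0],   # User1: jogging -> exercise, travelling, praying
        [1, 0, 1, 1],   # User2: running -> exercise, praying, eating
        [1, 1, 1, 0],   # User3: exercise, visiting new places, chanting
    ])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
    print(labels)  # users sharing a cluster label are candidate matches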
Also, as you're building an online system, it should improve itself every day.
You can watch this to learn a bit more about online learning:
https://www.coursera.org/learn/machine-learning/lecture/ABO2q/online-learning
This lecture on stochastic gradient descent will also be helpful to know: https://www.coursera.org/learn/machine-learning/lecture/DoRHJ/stochastic-gradient-descent
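A minimal illustration of the online-learning idea, assuming a recent scikit-learn (the feedback stream here is synthetic):

    # Update a model incrementally as new match feedback arrives,
    # instead of retraining from scratch (online learning via SGD).
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(loss="log_loss")  # logistic regression fit by SGD
    classes = np.array([0, 1])              # 1 = "was a good match"

    rng = np.random.default_rng(0)
    for _ in range(100):                    # pretend feedback arrives one by one
        x = rng.normal(size=(1, 4))         # features of a proposed match
        y = np.array([int(x[0, 0] > 0)])    # synthetic label
        model.partial_fit(x, y, classes=classes)

    print(model.predict(rng.normal(size=(1, 4))))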
These are conceptual videos; you do not have to implement everything yourself, since you can always use a library like TensorFlow: https://www.tensorflow.org/
I know this looks a bit hard to understand, but you'll need this knowledge in order to build your own custom recommendation system.
Since I needed to implement a "snap GPS location to road" function for an Android application, I modified the Android example from https://github.com/graphhopper to suit my needs. It actually did what was expected, but now I'm quite confused about the data format I should provide to the user's device.
Is it possible to provide pbf.osm files? What should I do to provide the user with the smallest possible data chunks?
Or is this a completely wrong approach for achieving "snap to road" in a native Android app (not web-based)?
I'm not that familiar with GraphHopper in detail, but please take into account that it's just a routing engine and is thus tuned for that purpose.
What you are looking for is a very simple form of "reverse geocoding" that just returns the closest point on a road for a given geoposition. This doesn't work on a (simplified) routing graph the way routers do, but on an optimized structure tuned for geospatial queries. Maybe there are existing offline map frameworks that already implement it?
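For reference, the geometric core of "snap to road" is just projecting the point onto each candidate road segment and keeping the nearest result. A rough Python sketch using a local flat-earth approximation (an on-device version would be ported to Java/Kotlin):

    # Snap a GPS point to the nearest point on a road polyline.
    # Equirectangular approximation; fine over short distances.
    from math import cos, radians

    def snap_to_polyline(lat, lon, polyline):
        """polyline: list of (lat, lon) road vertices."""
        kx = cos(radians(lat))          # shrink longitudes by latitude
        px, py = lon * kx, lat
        best, best_d2 = None, float("inf")
        for (alat, alon), (blat, blon) in zip(polyline, polyline[1:]):
            ax, ay, bx, by = alon * kx, alat, blon * kx, blat
            dx, dy = bx - ax, by - ay
            # projection parameter t, clamped to the segment
            t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
                ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            cx, cy = ax + t * dx, ay + t * dy
            d2 = (px - cx) ** 2 + (py - cy) ** 2
            if d2 < best_d2:
                best, best_d2 = (cy, cx / kx), d2   # back to (lat, lon)
        return best

    road = [(52.5200, 13.4040), (52.5210, 13.4060), (52.5225, 13.4075)]
    print(snap_to_polyline(52.5208, 13.4047, road))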
I need a map API for Android that can provide me with the indexed nodes and indices that make up the road network. The main idea is to determine whether two GPS devices are on the same road. Thank you in advance.
A map API by itself will not have that information. However, you can get it from OpenStreetMap freely; you can download it from here.
I can't tell from your question whether you intend to display the results on a map. If so, and you want a nice and free map API, I would suggest Leaflet. It's not as mature as the likes of OpenLayers, but as you've tagged this post with "android", Leaflet just kicks ass in the mobile department.
OpenStreetMap is definitely a good source of data for this kind of project. Unlike Google Maps, it gives developers access to the underlying vector data of a map (fully open). This allows interesting new use cases which simply are not possible with Google Maps, and anything involving geometric calculations like this definitely fits into that category. You either need OpenStreetMap or some other source of "vector" map data, and beyond OpenStreetMap that can be expensive.
Unfortunately, that's not the full answer to your question. You still have a lot of work to do to use the data in the way you intend. You need to calculate the proximity of two points (GPS readings from two devices?) to nearby roads, and figure out which road each point lies closest to. It's the kind of powerful geo calculation you might do using a GIS package such as QGIS, or the functions of a geo-aware database system such as PostGIS.
But that's still not the answer to your question, because you need to do these calculations on the device. I'm not aware of an off-the-shelf library to do this on Android; I think you would have to roll your own.
But another challenge, and the first thing to solve, is to get the vector data onto the device in a suitable format. You'd want the vector data either as a large download for a whole country, or perhaps for a smaller area, possibly with an on-the-fly download feature within the app. Whole countries are not infeasible when working with maps in vector form (ever tried the awesome MapDroyd app?), but they require some compact formatting. Happily, some of these problems are starting to be solved in open-source off-the-shelf libraries. You could try to build on top of MapsForge, for example.
So then you're back to the challenge of writing on-device code to poke around in this data and do the calculations you want. I suppose it would be rather good if projects like MapsForge included generic PostGIS-style geo functions to make this easier; something to ask the MapsForge developers about, perhaps.
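As a rough illustration of the on-device check itself, assuming the way geometries have already been extracted from OSM data (written in Python for brevity): find the nearest way to each GPS fix and compare way IDs:

    # Decide whether two GPS fixes lie on the same road: nearest OSM way
    # per fix, compared by way ID. Equirectangular approximation.
    from math import cos, radians

    def dist2_to_way(lat, lon, way):
        kx = cos(radians(lat))
        px, py = lon * kx, lat
        best = float("inf")
        for (alat, alon), (blat, blon) in zip(way, way[1:]):
            ax, ay, bx, by = alon * kx, alat, blon * kx, blat
            dx, dy = bx - ax, by - ay
            t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
                ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            best = min(best, (px - ax - t * dx) ** 2 + (py - ay - t * dy) ** 2)
        return best

    def nearest_way_id(lat, lon, ways):
        """ways: dict mapping OSM way id -> list of (lat, lon) vertices."""
        return min(ways, key=lambda wid: dist2_to_way(lat, lon, ways[wid]))

    ways = {101: [(48.1371, 11.5753), (48.1375, 11.5765)],   # sample data
            102: [(48.1380, 11.5740), (48.1390, 11.5745)]}
    print(nearest_way_id(48.1372, 11.5756, ways) ==
          nearest_way_id(48.1374, 11.5762, ways))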
First, sorry about my poor English.
I'm planning to build an augmented-reality app for the Android mobile platform. The main feature is that the user takes a photo of a shop and the application recognizes which shop is being photographed. I don't know whether the best option would be to use one of the many existing image recognition APIs; I think it needs to be something more specific. Maybe having my own bank of images would help.
My plan was to have a database of shops with their locations, use one of the many image recognition tools, and then search my database for the matching location. But I found that all the image search engines (Kooaba, IQ Engines, etc.) are paid and not exactly cheap. So I would like a tool that could work with a limited catalog, such as images of the shops in a shopping mall, and accept photos sent from smartphones (both Android and iPhone).
Can someone help me get started?
I did something similar for my dissertation at university. I developed an application which detected signposts, read the content on them, and then personalised/prioritised it depending on the user's preferences (with mixed success).
As part of this I had to look into Image Recognition.
Two things you may want to look at are:
The Qualcomm QCAR SDK. This was a little too image-specific for what I was after, but if you were to use it on a small range of shops it might work. It would require a collection of shop images to match against; I don't know how successful it would be.
What I implemented used JavaCV (a wrapper for OpenCV), which also has an Android port. It seems to allow for image recognition a bit more generally than the previous option, which is why I used it. It would require you to run your own training to create a classifier, though (unless there is another way of doing image recognition within it). But there are a number of guides which can help with that.
I used it for recognising signposts with reasonable success off just some basic training, though it did tend to produce a number of false positives.
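For reference, here is the general idea in a small OpenCV sketch (Python for brevity; the catalog file names are hypothetical): match ORB keypoints of the query photo against each image in a small shop catalog and pick the best-scoring one:

    # Recognise a shop by matching ORB features of the query photo
    # against a small catalog of reference images.
    import cv2

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = orb.detectAndCompute(img, None)
        return des

    catalog = {"coffee_shop": "shops/coffee.jpg",   # hypothetical paths
               "book_store": "shops/books.jpg"}
    query_des = descriptors("query_photo.jpg")

    def score(des):
        if des is None or query_des is None:
            return 0
        # count only reasonably close descriptor matches
        return sum(1 for m in matcher.match(query_des, des) if m.distance < 40)

    best = max(catalog, key=lambda name: score(descriptors(catalog[name])))
    print("best match:", best)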
Within my application I then used location to match up with previous detections etc.
Hopefully these will get you started.