Say I have a set of about 100 stored images (sports team logos, say, rather than very similar images like faces) in my Android app. In a manner similar to Google Goggles' continuous mode, I would like to use the camera to decide which image is being focused on.
What is the best, most efficient way to accomplish this? Any open source libraries, SDKs, etc. would be great.
Thanks.
If you would like server-side recognition, you should check out kooaba's API. It's free for small reference databases such as yours and among the best in the field:
www.kooaba.com
For my bachelor thesis I have to make an app that recognizes logos. For example: I see the logo of a car and I want to find out what car it is. I take a picture of the car's logo and the app should recognize the image and send me back the word "Mercedes" so that I can search for information about the car online. I would like the app to recognize what the logo represents regardless of the logo's position, the lighting, or the color.
I have tried the recognize.im API, but it doesn't work well because it is calibrated and tuned for comparison, not classification, and I definitely need classification.
I would like to go with on-cloud recognition, but on-device would work too (in which case, what algorithm should I use?).
Thank you very much
There are a few APIs which provide general image recognition, such as Google Vision or Imagga. These services can give you some general information about the scene; for example, they can tell whether or not there is a car in the image.
However, your car brand recognition task is very specific, and you might achieve better results with a customizable service such as vize.ai, which lets you train a task-specific API endpoint. To train it, you need to prepare example images of the logo for each car brand you want to recognize (30 to 50 images per brand). You upload these images to vize.ai through the web browser interface and you get back an API endpoint for a classifier trained on your task. Then you can simply classify new images by calling the API, as sketched below.
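A hypothetical sketch of what calling such a trained endpoint could look like from Java. The URL, auth header, and response format below are invented for illustration; they are not vize.ai's documented API, so check the real documentation before use:

```java
// Hypothetical example of classifying an image via a trained HTTP endpoint.
// The URL, auth header, and JSON shape are placeholders, not a real API.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

public class ClassifyExample {
    public static void main(String[] args) throws Exception {
        byte[] image = Files.readAllBytes(Paths.get("logo.jpg"));

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.example.com/v1/classify").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Token YOUR_API_TOKEN");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(image);  // send the raw image bytes
        }

        // Expect a JSON body such as {"label": "Mercedes", "score": 0.97}.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            System.out.println(in.useDelimiter("\\A").next());
        }
    }
}
```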
Edit: More details added (as requested by ρяσѕρєя K)
Disclaimer: I'm working at vize.ai.
Edit: Link changed
This is mainly a question for someone who has solid experience with mobile development on both Android and iOS and knows about mobile application optimization and performance, so please refrain from giving generic answers like "choose what you like" or "whatever suits your preference".
So, I am developing a mobile app for iOS and Android in PhoneGap, and it has graphics for almost all the major social networks: Facebook, Twitter, Snapchat, and so on. My question is whether I should maintain the icons/graphics for each network as individual files or as a combined sprite image.
I understand that on the web sprites are the best option, but since these graphics are embedded in the app, that concern should not apply to a mobile app. The only thing I am concerned about is how the number of embedded images and icons will affect the app's performance.
I prefer to keep each social media icon in a separate file, because each icon is used in various places in the app with different styles and sizes. Using a sprite would mean managing the background size and image width individually in each place, whereas individual icons are very straightforward. It also makes it easy to add or change networks without modifying the existing graphics.
So can someone please tell me what effect keeping individual icons and graphics in the PhoneGap app will have compared to using sprites, and whether it is the better option?
I have worked with Android for over two years, and I have found that PNG is the best image format to work with in an Android application. Note that resource file names must use underscores rather than hyphens, e.g. image_one.png for a PNG or image_one.jpeg for a JPEG. PNGs have never failed me. I would gladly help if you need more assistance.
First, sorry about my poor English.
I'm planning to build an augmented reality app for the Android platform, and its main feature is the ability for the user to take a shot of a shop and have the application recognize which shop is being photographed. I do not know if the best option would be to use one of the many existing image recognition APIs; I think it calls for something more specific. Maybe maintaining my own bank of images would help.
My plan was to have a database of shops with their locations, use one of the many image recognition tools, and search my database for the matching location. But I found that the image search engines (kooaba, IQ Engines, etc.) are not free, and not exactly cheap either. So I am looking for a tool that could work with a limited catalog, such as images of the shops in a single shopping mall, and accept photos sent from smartphones (both Android and iPhone).
Can someone help me get started?
I've been doing something similar for my dissertation at university. I developed an application which detected signposts, read the content on them, and then personalised/prioritised it depending on the user's preferences (with mixed success).
As part of this I had to look into Image Recognition.
Two things you may want to look at are:
The Qualcomm QCAR SDK. This was a little too image-specific for what I was after, but if you were working with a small range of shops it might work. It would require a collection of shop images to match against; I don't know how successful it would be.
What I implemented used JavaCV (a Java wrapper around OpenCV), which also runs on Android. It seems to allow for image recognition a bit more generally than the previous option, which is why I used it. It would require you to run your own training to create a classifier, though (unless there is another way of doing image recognition within it), but there are a number of guides which can help with that (see the sketch below).
I used it for recognising signposts with reasonable success from just some basic training, though it did tend to produce a number of false positives.
Within my application I then used location to match up with previous detections etc.
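For illustration, here is a minimal feature-matching sketch written against OpenCV's own Java bindings (3.x-era class names) rather than JavaCV itself; it assumes the OpenCV native library has already been loaded. Comparing a camera frame against each stored shop image and keeping the best score is one simple way to apply it:

```java
// Minimal ORB feature matching with OpenCV's Java API (assumes
// System.loadLibrary(Core.NATIVE_LIBRARY_NAME) has already run).
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

public class LogoMatcher {
    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    /** Counts "good" keypoint matches between two grayscale images. */
    public int countGoodMatches(Mat query, Mat reference) {
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        orb.detectAndCompute(query, new Mat(), kp1, desc1);
        orb.detectAndCompute(reference, new Mat(), kp2, desc2);

        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // Count matches under a distance threshold; tune this empirically.
        int good = 0;
        for (DMatch m : matches.toList()) {
            if (m.distance < 40) good++;
        }
        return good;
    }
}
```

The shop whose reference image yields the highest match count (above some minimum) would be reported as the detection; the distance threshold of 40 is just a starting point to tune.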
Hopefully these may get you started.
I've been thinking about working on an application: you take a picture of something at a yard sale and it compares the picture against an image database.
For example, say you take a picture of a spoon; the app compares the photo against the images in the database and returns the top 5 possible matches to the user.
Is this possible with current Android?
If so, point me in the right direction for the things I'd need.
Thanks,
abolbridge
Looking forward to your feedback.
That is quite possible, but it is too CPU-intensive to run on the phone itself, so you'd have to build a server-side application for it.
It is going to be hard, though. Quite.
Take a look at Google Goggles for an idea; its image processing is done entirely on the server side.
Check out OpenCV, as it contains a lot of useful object recognition functions and can be used on Android. However, this approach will push the limits of the phone's CPU and, even more so, its memory when using higher-resolution images. A server-side implementation may be more appropriate.
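If a full OpenCV pipeline is too heavy for the device, one lightweight alternative (my suggestion, not something from the answer above) is a perceptual "average hash": reduce every image to a 64-bit fingerprint and rank the database by Hamming distance to produce the top 5 matches the question asks for. A minimal sketch for Android:

```java
// Average-hash sketch: an 8x8 grayscale thumbnail becomes a 64-bit
// fingerprint; similar images tend to produce similar fingerprints.
import android.graphics.Bitmap;

public class AverageHash {
    /** Computes a 64-bit hash from an 8x8 grayscale thumbnail. */
    public static long hash(Bitmap source) {
        Bitmap small = Bitmap.createScaledBitmap(source, 8, 8, true);
        int[] gray = new int[64];
        int sum = 0;
        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 8; x++) {
                int p = small.getPixel(x, y);
                // Average the RGB channels to approximate luminance.
                int g = ((p >> 16 & 0xFF) + (p >> 8 & 0xFF) + (p & 0xFF)) / 3;
                gray[y * 8 + x] = g;
                sum += g;
            }
        }
        int mean = sum / 64;
        long bits = 0L;
        for (int i = 0; i < 64; i++) {
            if (gray[i] >= mean) bits |= 1L << i;  // 1 = brighter than mean
        }
        return bits;
    }

    /** Hamming distance between two hashes; smaller means more similar. */
    public static int distance(long a, long b) {
        return Long.bitCount(a ^ b);
    }
}
```

Hashes for the database images can be precomputed once, so ranking even thousands of candidates costs just an XOR and a popcount per image. It is far cruder than real object recognition, but it is cheap enough to run on the phone.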
I'm looking at developing an app that could benefit from an image recognition system. I've seen this sort of thing in iPhone and Android apps: take a picture of a book and the app takes you to Amazon where you can find that book. I'm not looking for general image recognition, but rather the ability to pick a single image out of a library of about 10k images.
Any ideas of what services are available for this sort of thing?
Google Goggles does something similar to Amazon Remembers. It uses OCR if text can be identified, and they want to combine it with the similar-image search from Google Images. I think they generate some kind of hash for each image, with the property that if two images are similar, their hashes are similar too.
My best guess would be to start with character recognition and do a text search for the title of your card. This means your user has to take a very clear picture, maybe even with the card in a specific position, but for a first version of the application this would already be great. As somebody who plays Magic, I would buy the tool for trading and cataloguing my cards.
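As a hedged starting point for that OCR step, here is a sketch using the tess-two Android wrapper for Tesseract (my choice of library; the answer above does not name one). It assumes the trained data file is already on the device:

```java
// Reads text from a card photo with Tesseract via the tess-two wrapper.
// Assumes dataPath contains a "tessdata" folder with eng.traineddata.
import android.graphics.Bitmap;
import com.googlecode.tesseract.android.TessBaseAPI;

public class CardTitleReader {
    public static String readTitle(Bitmap photo, String dataPath) {
        TessBaseAPI tess = new TessBaseAPI();
        tess.init(dataPath, "eng");
        tess.setImage(photo);
        String text = tess.getUTF8Text();
        tess.end();
        // Treat the first recognised line as the card title, then use it
        // as the query for a text search against the card database.
        return text.split("\n")[0].trim();
    }
}
```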
Actually, short of getting an actual Amazon employee to tell you, there is no way to confirm this, but I am fairly certain that the Amazon Remembers feature you refer to is actually the work of crowdsourcing: lots of people combing through data to make it look like a computer is doing it. I think they may actually be using their own Mechanical Turk system.
Edit: Also, I found this SO question that might interest you. It is specifically about playing cards, but some of the answers (such as the machine learning example) could be adapted to what you want to do with Magic cards.