I am trying to build an Android application that lets users log in by signing on a canvas, but I can't find an algorithm that would help me implement this functionality reliably. I have read a lot of papers on signature recognition methods, such as the Fourier transform approach, but they all involve very complex mathematical operations, which makes them very hard to implement using the Android SDK.
So is there any open-source algorithm that compares two signature bitmaps (drawables) and gives decent results?
If not, how should I go about this?
Comparing drawables may not be sufficient, as signatures can differ in size and position. You can attempt recognition based on invariant moments (I use this technique for OCR, but on printed characters; see the demos in the http://sourceforge.net/projects/javaocr/ project).
You may get better results by analysing the signing process dynamically: path, speed, energy, etc.
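For the static approach, a minimal sketch with Hu moments from the OpenCV 3.x Java bindings might look like this (the file names and the match threshold are invented for illustration; you would have to tune the threshold on real signatures):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class SignatureMoments {

    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Compute the 7 Hu moments of a binarized signature image.
    // Hu moments are invariant to translation, scale and rotation,
    // which addresses the size/position differences mentioned above.
    static double[] huMoments(String path) {
        Mat gray = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, 0, 255,
                Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);
        Moments m = Imgproc.moments(bin, true);
        Mat hu = new Mat();
        Imgproc.HuMoments(m, hu);
        double[] h = new double[7];
        for (int i = 0; i < 7; i++) {
            double v = hu.get(i, 0)[0];
            // Log-scale the values: raw Hu moments span many orders of magnitude.
            h[i] = -Math.signum(v) * Math.log10(Math.abs(v) + 1e-30);
        }
        return h;
    }

    public static void main(String[] args) {
        double[] a = huMoments("signature_stored.png");   // hypothetical files
        double[] b = huMoments("signature_attempt.png");
        double dist = 0;
        for (int i = 0; i < 7; i++) dist += Math.abs(a[i] - b[i]);
        // The 1.0 threshold below is purely illustrative.
        System.out.println(dist < 1.0 ? "match" : "no match");
    }
}
```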
I don't know much about signature recognition itself, but there is a sample app installed in the emulator called GestureBuilder that may help you. Have a look at the android.gesture.Gesture API for this.
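For reference, this is roughly how the stock android.gesture API is wired up once you have recorded gestures with GestureBuilder and bundled the output as a raw resource (the layout id, resource name, and score threshold below are illustrative):

```java
import android.app.Activity;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import android.os.Bundle;
import java.util.ArrayList;

public class SignatureActivity extends Activity
        implements GestureOverlayView.OnGesturePerformedListener {

    private GestureLibrary store;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // a layout containing a GestureOverlayView
        store = GestureLibraries.fromRawResource(this, R.raw.gestures);
        store.load();
        GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
        overlay.addOnGesturePerformedListener(this);
    }

    @Override
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        ArrayList<Prediction> predictions = store.recognize(gesture);
        // Predictions are sorted by score; a higher score means a closer match.
        if (!predictions.isEmpty() && predictions.get(0).score > 2.0) {
            String name = predictions.get(0).name; // the stored signature's label
            // treat as a successful match
        }
    }
}
```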
I'm new to the computer vision field, so I'm learning from scratch how to generate point clouds from multiple image captures. I'm not implementing any of this in code yet; first I want to learn how the whole process should be done, and then I'll code it.
So far I've learned about feature detection algorithms, mostly SIFT and the remarkably more accurate A-KAZE, which detects many more features in each image and thus generates denser clouds.
Then come the keypoint matching algorithms, mainly brute force (BF) and FLANN.
Finally, it should be a process in which you:
- first: recover each camera's orientation (pose);
- finally: generate the sparse point cloud.
But until now I've only found OpenCV examples in which just two images are matched and their matched features are drawn. I'm not able to find any example in which more images are matched and, more importantly, I'm not able to find out how to recover the cameras' orientation and generate point clouds in OpenCV. Please, I need some help with those last stages. If you know of any example of multi-image matching or point cloud generation, it would be very helpful. Thanks in advance!
OpenMVG has a nice structure-from-motion pipeline example that reconstructs sparse 3D point clouds from SIFT and AKAZE features. It even works without being given any camera intrinsics (focal length, principal point).
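OpenCV itself also has the two building blocks you were missing: recovering relative camera pose and triangulating points. Below is a hedged two-view sketch with the OpenCV 3.x Java bindings, assuming you already have matched point lists and rough intrinsics; it is not a full pipeline:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class TwoViewReconstruction {

    // points1/points2: matched keypoint coordinates from images 1 and 2
    // (e.g. from AKAZE + brute-force matching); focal/pp: camera intrinsics.
    static Mat triangulate(MatOfPoint2f points1, MatOfPoint2f points2,
                           double focal, Point pp) {
        // 1. Essential matrix from the matches (RANSAC rejects outlier matches).
        Mat E = Calib3d.findEssentialMat(points1, points2, focal, pp,
                Calib3d.RANSAC, 0.999, 1.0);

        // 2. Relative camera orientation: rotation R and translation t.
        Mat R = new Mat(), t = new Mat();
        Calib3d.recoverPose(E, points1, points2, R, t, focal, pp);

        // 3. Projection matrices P = K [R | t] for both views.
        Mat K = Mat.eye(3, 3, CvType.CV_64F);
        K.put(0, 0, focal); K.put(1, 1, focal);
        K.put(0, 2, pp.x);  K.put(1, 2, pp.y);

        Mat Rt1 = Mat.eye(3, 4, CvType.CV_64F);   // first camera at the origin
        Mat Rt2 = Mat.zeros(3, 4, CvType.CV_64F);
        R.copyTo(Rt2.colRange(0, 3));
        t.copyTo(Rt2.col(3));

        Mat P1 = new Mat(), P2 = new Mat();
        Core.gemm(K, Rt1, 1, new Mat(), 0, P1);
        Core.gemm(K, Rt2, 1, new Mat(), 0, P2);

        // 4. Triangulate: the result is a 4xN matrix of homogeneous points;
        // divide each column by its 4th row to get the sparse 3D cloud.
        Mat points4D = new Mat();
        Calib3d.triangulatePoints(P1, P2, points1, points2, points4D);
        return points4D;
    }
}
```

A real multi-view pipeline repeats this pairwise step incrementally, registering each new image against the current reconstruction, and refines all poses and points with bundle adjustment, which is what OpenMVG does for you.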
I want to do face recognition (not face detection) in my Android and iOS apps. I have studied a lot on the web and found the following possible solutions:
1.) OpenCV: I don't want to get into writing my own API using this. Also, I don't have prior experience with JNI for Android.
2.) Betaface API: so far this looks good.
3.) Sky Biometrics also looks good.
Now, I have been searching for a solution for 3-5 days and have learned that I can use the APIs above (so far I have decided to purchase a license for Sky Biometrics). This API will provide me with a list of features for the faces it recognises.
But now I am confused about how to use these features and save them in my local database to recognise faces in pictures. So my queries are the following:
1.) How do I convert face features into an actual working face recognition system? That is, what algorithm or approach can I use to merge different face features of the same person in order to identify him correctly?
2.) Uploading images and then building a database of face-feature sets is a very time-consuming process. Does anyone know an Android/iOS face recognition SDK that does this accurately and quickly, with little or no extra processing?
3.) Both solutions 2 and 3 can be used with images. Is there any other solution available that can do the same with less effort but more accuracy?
OpenBR may also be interesting for you: http://openbiometrics.org/
Finally, I am using the Rekognition API, and it is good enough to serve my purpose.
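For anyone landing here: assuming this refers to Amazon's Rekognition service, a face comparison with the AWS SDK for Java looks roughly like the sketch below (the image paths and the 80% similarity threshold are illustrative):

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CompareFacesMatch;
import com.amazonaws.services.rekognition.model.CompareFacesRequest;
import com.amazonaws.services.rekognition.model.CompareFacesResult;
import com.amazonaws.services.rekognition.model.Image;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FaceCompare {
    public static void main(String[] args) throws Exception {
        AmazonRekognition client = AmazonRekognitionClientBuilder.defaultClient();

        // Load a stored reference photo and a freshly captured one.
        Image source = new Image().withBytes(
                ByteBuffer.wrap(Files.readAllBytes(Paths.get("reference.jpg"))));
        Image target = new Image().withBytes(
                ByteBuffer.wrap(Files.readAllBytes(Paths.get("capture.jpg"))));

        CompareFacesRequest request = new CompareFacesRequest()
                .withSourceImage(source)
                .withTargetImage(target)
                .withSimilarityThreshold(80F); // illustrative threshold

        CompareFacesResult result = client.compareFaces(request);
        for (CompareFacesMatch match : result.getFaceMatches()) {
            System.out.println("similarity: " + match.getSimilarity());
        }
    }
}
```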
I'm trying to develop a mobile app for traffic sign recognition, and I want it to run in real time. For now I'm trying to detect only circular signs and then determine which sign it is, in order to notify the driver. I want to know which method I should use. So far I've tried using Java and OpenCV to find the circles in an image (using HoughCircles), but it's not quite what I expected: a lot of signs aren't detected. Then I tried to train an OpenCV cascade classifier to learn the signs and obtain an XML classifier, but it takes too long, and to work well it needs a really large amount of data. I don't know what to do... Thank you in advance.
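(For context, a typical HoughCircles invocation via the OpenCV 3.x Java bindings looks roughly like the sketch below; every numeric parameter is illustrative and usually needs per-dataset tuning.)

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class CircleSignDetector {

    // Returns a 1xN Mat of detected circles, each entry (x, y, radius).
    static Mat detectCircles(Mat bgrFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgrFrame, gray, Imgproc.COLOR_BGR2GRAY);
        // Blurring suppresses noise that otherwise produces spurious circles.
        Imgproc.GaussianBlur(gray, gray, new Size(9, 9), 2, 2);

        Mat circles = new Mat();
        Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
                1,                  // accumulator resolution (dp)
                gray.rows() / 8,    // min distance between circle centers
                100,                // upper Canny edge threshold
                30,                 // accumulator threshold: lower = more circles
                10, 100);           // min/max radius in pixels
        return circles;
    }
}
```

Lowering the accumulator threshold makes detection more permissive at the cost of false positives; a colour prefilter before the Hough step (e.g. a red mask in HSV space) is a common trick for traffic signs.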
I found this work after some research: https://www.academia.edu/4950526/Traffic_Sign_Recognition_system_on_Android_devices
I'm writing a game for Google Glass, but unfortunately the SpeechRecognizer API isn't available in the current builds of the Google Glass GDK.
So I've been thinking about implementing an algorithm for very simple voice recognition myself.
Let's say I want to recognize only: "Yes" and "No".
Do you know any example code or any helpful resources that could help me implement this?
Is it so hard that I should drop the idea and go with a big framework like CMUSphinx?
What about recognizing up, down, right, left, or the numbers from 1 to 10?
As far as I know, the usual approach is to transform the signal into the frequency domain with a fast Fourier transform (FFT) and analyse it there. You also need a dictionary of spoken words to correlate against in the frequency domain.
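To make that concrete, here is a minimal, self-contained radix-2 FFT in Java; the idea would be to slice the microphone signal into fixed-size frames, take the magnitude spectrum of each frame, and correlate the result against stored spectra of "yes" and "no" (the frame size mentioned in the comment is illustrative):

```java
public class SimpleFft {

    // In-place iterative radix-2 Cooley-Tukey FFT; length must be a power of two.
    static void fft(double[] re, double[] im) {
        int n = re.length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double tr = re[i]; re[i] = re[j]; re[j] = tr;
                double ti = im[i]; im[i] = im[j]; im[j] = ti;
            }
        }
        // Butterfly stages.
        for (int len = 2; len <= n; len <<= 1) {
            double ang = -2 * Math.PI / len;
            double wr = Math.cos(ang), wi = Math.sin(ang);
            for (int i = 0; i < n; i += len) {
                double cr = 1, ci = 0;
                for (int k = 0; k < len / 2; k++) {
                    int a = i + k, b = i + k + len / 2;
                    double vr = re[b] * cr - im[b] * ci;
                    double vi = re[b] * ci + im[b] * cr;
                    re[b] = re[a] - vr; im[b] = im[a] - vi;
                    re[a] += vr;        im[a] += vi;
                    double ncr = cr * wr - ci * wi;
                    ci = cr * wi + ci * wr;
                    cr = ncr;
                }
            }
        }
    }

    // Magnitude spectrum of one audio frame (e.g. 1024 samples at 16 kHz).
    static double[] magnitudes(double[] frame) {
        double[] re = frame.clone();
        double[] im = new double[frame.length];
        fft(re, im);
        double[] mag = new double[frame.length / 2];
        for (int i = 0; i < mag.length; i++) {
            mag[i] = Math.hypot(re[i], im[i]);
        }
        return mag;
    }
}
```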
Please see these links:
CMU Sphinx has a Java implementation.
David Wagner has a good article and a MATLAB implementation.
P.S. Oh, and if you speak Russian, have a look at this article: it's very simple, with Java examples.
P.P.S. Honestly, I have never used this framework, but if you have only superficial knowledge of speech recognition, the most robust and easiest way is to use an existing complete solution such as a framework or library; otherwise you will need to spend time acquiring the necessary background. In that case, you can read this article.
We're currently working on an Android OCR app using OpenCV. The pre-processing, segmentation, and feature extraction steps are done. Classification is the remaining step, and we're stuck. We're using a DB table that is filled with the features of each letter. At first we had only one feature per letter and used Euclidean distance, but the results weren't accurate, so we extracted more features. The problem now is that we have 7 features per letter and absolutely no idea how to classify input based on them. Some have recommended using kNN, but we can't figure out how, and the OpenCV documentation on that part isn't clear. If anybody can help, it would be great.
Thanks in advance
Briefly, and without going into the details: vector spaces come in handy here. You need to build a feature vector <feature1, feature2, ..., featureN> for each of the instances in your training set.
From each of these images you extract the features that you think, or that you have read in research articles, are important for image classification. For example, you can use centroids, Gaussian blur, histograms, etc.
Once you have these values, linear algebra comes into play with some classification algorithm: kNN, SVM, naive Bayes, etc., which you run on your training set; that is, you build your model.
Once the model is ready, you run it on your test set.
Use cross-validation for more reliable results.
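To make the kNN step concrete, here is a minimal plain-Java sketch, assuming a 7-dimensional feature vector per letter as in the question (the value of k and the data layout are up to you):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LetterKnn {

    // Euclidean distance between two feature vectors of equal length.
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Classify 'query' by majority vote among its k nearest training samples.
    static char classify(List<double[]> train, List<Character> labels,
                         double[] query, int k) {
        // Sort training indices by distance to the query vector.
        Integer[] idx = new Integer[train.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        java.util.Arrays.sort(idx, (x, y) -> Double.compare(
                distance(train.get(x), query), distance(train.get(y), query)));

        // Count votes among the k nearest neighbours.
        Map<Character, Integer> votes = new HashMap<>();
        for (int i = 0; i < k && i < idx.length; i++) {
            votes.merge(labels.get(idx[i]), 1, Integer::sum);
        }
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
    }
}
```

Note that with 7 heterogeneous features you should normalise each dimension (e.g. to zero mean and unit variance) before computing distances, or a single large-valued feature will dominate the result.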
For more details check the course notes:
http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/knn-2x2.pdf
or
http://www.inf.ed.ac.uk/teaching/courses/inf2b/lectureSchedule.html
I would like to add that OpenCV may not have the sort of classifiers you might prefer.
There are several libraries out there, though you may have to see which works best on a mobile platform. Could you give some details on the features you are using?
The simplest KNN (k-nearest neighbors) measure would be to find the Euclidean distance in n dimensions (for an n-dimensional feature vector) between the input sample's features and each of the vectors in your DB table. Also explore Mahalanobis distance (used to measure distance between a point and a dataset/class) if you have multiple classes and the input image is to be classified as one such 'type' or 'class' of image.
As @matcheek mentioned, more sophisticated approaches are possible using machine learning techniques such as SVMs, neural nets, etc. However, you might first consider something simpler like kNN, given that it's a mobile platform, which may limit the computational complexity you can afford.
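Since the question mentions that the OpenCV docs for this part are unclear: with the OpenCV 3.x Java bindings, the built-in kNN classifier is set up roughly as follows (the 7-column layout mirrors your 7 features; k=3 is illustrative, and each DB row becomes one training row):

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ml.KNearest;
import org.opencv.ml.Ml;

public class OpenCvKnnExample {

    // featureRows: one float[7] per letter sample; classIds: numeric label per row.
    static KNearest trainFromDb(float[][] featureRows, float[] classIds) {
        int n = featureRows.length;
        Mat samples = new Mat(n, 7, CvType.CV_32F);   // 7 features per letter
        Mat responses = new Mat(n, 1, CvType.CV_32F); // class id per row
        for (int i = 0; i < n; i++) {
            samples.put(i, 0, featureRows[i]);
            responses.put(i, 0, classIds[i]);
        }
        KNearest knn = KNearest.create();
        knn.train(samples, Ml.ROW_SAMPLE, responses);
        return knn;
    }

    static float classify(KNearest knn, float[] features) {
        Mat query = new Mat(1, 7, CvType.CV_32F);
        query.put(0, 0, features);
        Mat results = new Mat();
        // Returns the class id predicted from the 3 nearest neighbours.
        return knn.findNearest(query, 3, results);
    }
}
```

findNearest also fills `results` with one prediction per input row, which is handy when classifying several segmented letters in a single call.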