How to enhance feature detection using ORB in OpenCV? - Android

I'm developing an Android application that matches two images using ORB feature detection.
The processing and matching logic is called from Java through JNI functions.
The problem is that feature detection works well for some images but fails for others in certain cases.
Here is an example of an image that fails under conditions I haven't identified.
After some thought and discussion, I figured out that the problem is a lack of features; that's why the program fails. Someone in the OpenCV community tried this image and got 60 keypoints, none of which survive the RobustMatcher tests.
So I need to enhance the features in this image in order to make the matching work.
In addition to equalizeHist, what can I do?
I hope you can help me with some suggestions and maybe some examples.

One way is to enhance the edges of the image. Apply a Laplacian filter, for example, and multiply the result with the original image. This makes the features (edges) more salient. Of course, before everything convert the image to a float type, and at the end normalize your image.
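A minimal sketch of that idea using OpenCV's Java bindings, assuming an 8-bit greyscale input. Multiplying by the raw Laplacian magnitude suppresses flat regions entirely, so treat this as a starting point to tune rather than a finished recipe:

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public class EdgeEnhance {
        // Boost edges: float conversion -> Laplacian -> multiply -> normalize.
        public static Mat enhanceEdges(Mat gray8u) {
            Mat src = new Mat();
            gray8u.convertTo(src, CvType.CV_32F, 1.0 / 255.0); // work in float

            Mat lap = new Mat();
            Imgproc.Laplacian(src, lap, CvType.CV_32F, 3, 1.0, 0.0);
            Core.absdiff(lap, new Scalar(0), lap);             // edge magnitude

            Mat boosted = new Mat();
            Core.multiply(src, lap, boosted);                  // emphasize edges

            Mat out = new Mat();
            Core.normalize(boosted, out, 0, 255, Core.NORM_MINMAX);
            out.convertTo(out, CvType.CV_8U);
            return out;
        }
    }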

Related

OpenCV, Computer Vision

I'm new to the computer vision field, so I'm learning from scratch how to generate point clouds from multiple image captures. I'm not implementing any of this in code yet; first I want to learn how the whole process should be done, and then I'll code it.
So far I've learned about feature detection algorithms, mostly SIFT and the remarkably more accurate A-KAZE, which detects many more features in each image and thus generates denser clouds.
Then come the keypoint matching algorithms, mainly Brute Force (BF) and FLANN.
Finally there should be a process in which you:
-first: recover each camera's orientation
-finally: generate the sparse point cloud
But until now I've only found examples in OpenCV in which only two images are matched and their matched features are drawn. I'm not able to find any example in which more images are matched and, more importantly, I'm not able to find out how to recover the cameras' orientations and generate point clouds in OpenCV. Please, I need some help with those last stages. If you know of any example of multiple image matching or point cloud generation, it would be very helpful. Thanks in advance!
OpenMVG has a nice structure-from-motion pipeline example that reconstructs sparse 3D point clouds from SIFT and AKAZE features. It even works without being given any camera intrinsics (focal length, principal point).
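If you want to stay within OpenCV, here is a hedged sketch of the two-view core of such a pipeline. It assumes you already have matched keypoints from two images (pts1, pts2) plus a guessed focal length and principal point; it estimates the relative camera orientation and triangulates a sparse cloud. A full multi-view pipeline chains this pairwise step together with bundle adjustment, which is where OpenMVG does the heavy lifting:

    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Point;

    public class TwoViewSfm {
        // Estimate relative pose from matched points, then triangulate.
        public static Mat triangulate(MatOfPoint2f pts1, MatOfPoint2f pts2,
                                      double focal, Point pp) {
            Mat E = Calib3d.findEssentialMat(pts1, pts2, focal, pp,
                                             Calib3d.RANSAC, 0.999, 1.0);
            Mat R = new Mat(), t = new Mat();
            Calib3d.recoverPose(E, pts1, pts2, R, t, focal, pp); // camera orientation

            // Intrinsics matrix K from the assumed focal length / principal point.
            Mat K = Mat.eye(3, 3, CvType.CV_64F);
            K.put(0, 0, focal); K.put(1, 1, focal);
            K.put(0, 2, pp.x);  K.put(1, 2, pp.y);

            // Projection matrices: first camera at the origin, second at [R|t].
            Mat Rt1 = Mat.eye(3, 4, CvType.CV_64F);
            Mat Rt2 = new Mat(3, 4, CvType.CV_64F);
            R.copyTo(Rt2.colRange(0, 3));
            t.copyTo(Rt2.col(3));

            Mat P1 = new Mat(), P2 = new Mat();
            Core.gemm(K, Rt1, 1.0, Mat.zeros(3, 4, CvType.CV_64F), 0.0, P1);
            Core.gemm(K, Rt2, 1.0, Mat.zeros(3, 4, CvType.CV_64F), 0.0, P2);

            Mat points4D = new Mat();
            Calib3d.triangulatePoints(P1, P2, pts1, pts2, points4D);
            return points4D; // homogeneous 4xN; divide each column by its 4th row
        }
    }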

How to detect a logo using the camera in Android?

I'm working on an Android project that requires a real-time image recognition feature. I'm a newbie and don't have much knowledge of image processing. The application has to detect only one image, which is nothing more than a logo. The logo is circular in shape.
Please suggest an appropriate solution.
Thank you.
I recommend using the OpenCV library. It will allow you to teach your application to recognize different things. For example, I've made my application recognize cars based on the size and shape of the object.
There are a lot of examples of how to recognize a logo or similar things with OpenCV.
For detection of a specific object you can follow some basic object localization techniques such as normalized cross-correlation, also known as template matching. You prepare a template of your logo and use it as a convolution mask, then convolve your input image with that mask; ideally, at the location of the desired object the response of the convolution will be quite high, so you can further fine-tune the process to localize your object.
For how to use template matching in OpenCV you can refer to its documentation page: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
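A rough sketch of that approach with OpenCV's Java bindings (OpenCV 3+ API, where rectangle drawing lives in Imgproc); frame and logoTemplate are assumed to be 8-bit greyscale Mats, and the 0.9 score threshold is a tuning assumption:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.core.Scalar;
    import org.opencv.imgproc.Imgproc;

    public class LogoDetector {
        // Slide the logo template over the frame and check the best NCC score.
        public static void detect(Mat frame, Mat logoTemplate) {
            Mat result = new Mat();
            Imgproc.matchTemplate(frame, logoTemplate, result,
                                  Imgproc.TM_CCORR_NORMED);
            Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
            if (mmr.maxVal > 0.9) {                    // match threshold (assumed)
                Point tl = mmr.maxLoc;
                Point br = new Point(tl.x + logoTemplate.cols(),
                                     tl.y + logoTemplate.rows());
                Imgproc.rectangle(frame, tl, br, new Scalar(255), 2);
            }
        }
    }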
OR
As you have mentioned in your question, your region of interest is circular in shape, so you can use some shape measures after an initial segmentation of your image.
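One concrete shape measure for circles, not named in the answer but a natural fit, is OpenCV's Hough circle transform; the numeric parameters here are tuning assumptions for an 8-bit greyscale frame:

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class CircleFinder {
        // Propose candidate circular regions in a greyscale frame.
        public static Mat findCircles(Mat gray) {
            Mat blurred = new Mat();
            Imgproc.GaussianBlur(gray, blurred, new Size(9, 9), 2); // suppress noise

            Mat circles = new Mat();
            Imgproc.HoughCircles(blurred, circles, Imgproc.HOUGH_GRADIENT,
                    1.0,              // accumulator resolution (same as input)
                    gray.rows() / 8,  // minimum distance between circle centres
                    100, 30,          // Canny edge / accumulator thresholds
                    10, 200);         // min / max radius in pixels
            return circles;           // each entry holds (x, y, radius)
        }
    }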

Step-by-step object detection with ORB

I must create an Android app that recognizes some objects from the camera (a car steering wheel, a car wheel). I tried a Haar classifier without success, and I'm running out of time (it's a school project). So I decided to look for another way and found another method for my goal: ORB. I found what I should do in this answer. My problem is that things are messed up in my head. Can you give me a step-by-step answer on how to implement the answer from the linked question:
From extracting the feature points to training the KD-tree and using it for every frame from the camera.
Bonus questions:
Can you give a definition of a feature point? It's something I couldn't quite understand.
Will detection be slow using ORB? I know OpenCV can be used natively on Android; wouldn't that make things faster?
I need to create this app as soon as possible. Please help!
I am currently developing a similar application. I would recommend getting something working with a single reference image first, for a couple of reasons:
It's easier to do and understand if you're just starting out, and you can change it later.
For Android applications you have limited processing capabilities, so more images = lower fps.
You should have a look at the OpenCV tutorials, which are quite helpful. Once you go through the “OpenCV for Android SDK” section and understand the three tutorials, you can quite easily add functionality that will allow you to analyse the video feed.
The basic logic path I'd recommend following when making the app is (a sketch of steps 2-5 follows below):
1. Read in the reference image.
2. Create your FeatureDetector, DescriptorExtractor and DescriptorMatcher.
3. Use the first two of these to detect keypoints and then describe them (don't forget to convert the image to a Mat and then to greyscale).
4. Every time you get a frame from your camera, repeat step 3 on it and then compare the keypoints in the two images (using the matcher from step 2).
5. Use the result to determine whether there is a match (if there is, draw a box around it or something).
6. Get a new frame.
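A condensed sketch of steps 2-5, assuming the OpenCV 3/4 Java API (where ORB.create() replaces the older FeatureDetector/DescriptorExtractor factories); the Hamming distance threshold and the "good match" count are tuning assumptions:

    import org.opencv.core.DMatch;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDMatch;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.ORB;
    import org.opencv.imgproc.Imgproc;

    public class OrbMatcher {
        private final ORB orb = ORB.create();
        private final DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        private final MatOfKeyPoint refKeypoints = new MatOfKeyPoint();
        private final Mat refDescriptors = new Mat();

        // Steps 2-3: detect and describe keypoints in the greyscale reference image.
        public OrbMatcher(Mat referenceGray) {
            orb.detect(referenceGray, refKeypoints);
            orb.compute(referenceGray, refKeypoints, refDescriptors);
        }

        // Steps 4-5: repeat per camera frame and count "good" matches.
        public int countGoodMatches(Mat frameRgba) {
            Mat gray = new Mat();
            Imgproc.cvtColor(frameRgba, gray, Imgproc.COLOR_RGBA2GRAY);

            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            Mat descriptors = new Mat();
            orb.detect(gray, keypoints);
            orb.compute(gray, keypoints, descriptors);
            if (descriptors.empty()) return 0;         // nothing to match

            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(descriptors, refDescriptors, matches);

            int good = 0;
            for (DMatch m : matches.toList()) {
                if (m.distance < 40) good++;           // Hamming threshold (assumed)
            }
            return good;                               // e.g. declare a match above ~15
        }
    }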
Try making it work for a single object first and then add others later. Another thing you could add is a screen at the start that lets users pick what they want to search for.
Also, ORB is reasonably fast, especially compared to SIFT and SURF. I get about 3 fps on an HTC One with a single reference image.

Blur algorithm for an Android app

I am looking for a blur algorithm for an Android app. The algorithm I found here, Fast Bitmap Blur For Android SDK, doesn't work in an AsyncTask.
I receive data from a sensor over a long time (one to two hours). Depending on the data, an image must be blurred more or less. All the pure Java code I found is not fast enough, which is why I want to access native C code over JNI. Could anybody give me a hint?
Thanks, anko
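One option worth noting (an assumption on my part, not an answer from the thread): OpenCV's Java bindings already call optimized native C++ through JNI, so something like Imgproc.GaussianBlur gives native speed without hand-written JNI glue. Mapping the sensor value to a kernel size is the part you'd tune; the scaling below is arbitrary:

    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class SensorBlur {
        // Blur strength scales with a sensor reading normalized to [0, 1];
        // the factor of 12 is an arbitrary assumption to tune.
        public static Mat blurForSensor(Mat src, double sensorValue01) {
            int k = 2 * Math.max(1, (int) (sensorValue01 * 12)) + 1; // odd kernel size
            Mat dst = new Mat();
            Imgproc.GaussianBlur(src, dst, new Size(k, k), 0);
            return dst;
        }
    }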

Image classification with OpenCV

We're currently working on an Android OCR app using OpenCV. The pre-processing, segmentation and feature extraction steps are done; classification is the remaining step, and we're stuck. We're using a DB table filled with the features of each letter. At first we had only one feature per letter and used Euclidean distance, but the results weren't accurate and more features were needed, so we extracted them. The problem now is that we have 7 features per letter and absolutely no idea how to classify input based on them. Some have recommended using kNN, but we can't figure out how, and the OpenCV documentation on that part isn't clear. If anybody can help, it would be great.
Thanks in advance
Briefly, and without discussing the details: vector spaces come in handy here. You need to build a feature vector
<feature1, feature2, feature3, ..., featureN> for each of the instances in your training set.
From each of these images you extract the features that you think, or that you read in research articles, are important for image classification. For example, you can use centroids, Gaussian blur, histograms, etc.
Once you have these values, linear algebra comes into play with some classification algorithm: kNN, SVM, naive Bayes, etc., which you run on your training set; that is, you build your model.
Once the model is ready, you run it on your test set.
Use cross-validation for more comprehensive results.
For more details check the course notes:
http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/knn-2x2.pdf
or
http://www.inf.ed.ac.uk/teaching/courses/inf2b/lectureSchedule.html
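For the concrete kNN step, here is a minimal sketch using OpenCV's ml module (OpenCV 3+ Java API); it assumes your 7 features per letter are packed into one CV_32F row per training sample:

    import org.opencv.core.Mat;
    import org.opencv.ml.KNearest;
    import org.opencv.ml.Ml;

    public class LetterClassifier {
        private final KNearest knn = KNearest.create();

        // "samples" has one CV_32F row of 7 features per training letter;
        // "labels" is a single-column CV_32F Mat of letter class ids.
        public void train(Mat samples, Mat labels) {
            knn.train(samples, Ml.ROW_SAMPLE, labels);
        }

        // Classify one 1x7 CV_32F feature vector by a vote of k neighbours.
        public float classify(Mat featureRow, int k) {
            Mat results = new Mat();
            return knn.findNearest(featureRow, k, results); // predicted class id
        }
    }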
I would like to add that OpenCV may not have the sort of classifiers you might prefer. There are several libraries out there, though you may have to see which works best on a mobile platform. Could you give some details on the features you are using?
The simplest kNN (k-nearest neighbours) measure would be to find the Euclidean distance in n dimensions (for an n-dimensional feature vector) between the input sample's features and each of the vectors in your DB table. Also explore the Mahalanobis distance (used to measure the distance between a point and a dataset/class) if you have multiple classes and the input image is to be classified as one such 'type' or 'class' of image.
As @matcheek mentioned, more sophistication is possible using machine learning techniques such as SVMs, neural nets, etc. However, you might first consider something simpler like kNN, since it's a mobile platform, which limits the computational complexity you can afford.
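Written out by hand for the k = 1 case, that Euclidean measure is just this (plain Java, no OpenCV; the table/label layout is an assumption):

    public class NearestNeighbour {
        // Each row of "table" is one letter's 7 features; "labels" holds its class.
        public static int nearestLabel(float[] sample, float[][] table, int[] labels) {
            int best = -1;
            double bestDist = Double.MAX_VALUE;
            for (int i = 0; i < table.length; i++) {
                double d = 0;
                for (int j = 0; j < sample.length; j++) {
                    double diff = sample[j] - table[i][j];
                    d += diff * diff;                  // squared Euclidean distance
                }
                if (d < bestDist) { bestDist = d; best = labels[i]; }
            }
            return best;
        }
    }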
