I have devised some code for image comparison. The matching part is still a bit flawed and I would love some assistance. The project can be found on Github.
I have two images, img1 and img2.
These are images of the same person. When I compare them, the following code returns false.
bmpimg1 = Bitmap.createScaledBitmap(bmpimg1, 100, 100, true);
bmpimg2 = Bitmap.createScaledBitmap(bmpimg2, 100, 100, true);
Mat img1 = new Mat();
Utils.bitmapToMat(bmpimg1, img1);
Mat img2 = new Mat();
Utils.bitmapToMat(bmpimg2, img2);
Imgproc.cvtColor(img1, img1, Imgproc.COLOR_RGBA2GRAY);
Imgproc.cvtColor(img2, img2, Imgproc.COLOR_RGBA2GRAY);
img1.convertTo(img1, CvType.CV_32F);
img2.convertTo(img2, CvType.CV_32F);
//Log.d("ImageComparator", "img1:"+img1.rows()+"x"+img1.cols()+" img2:"+img2.rows()+"x"+img2.cols());
Mat hist1 = new Mat();
Mat hist2 = new Mat();
MatOfInt histSize = new MatOfInt(180);
MatOfInt channels = new MatOfInt(0);
ArrayList<Mat> bgr_planes1= new ArrayList<Mat>();
ArrayList<Mat> bgr_planes2= new ArrayList<Mat>();
Core.split(img1, bgr_planes1);
Core.split(img2, bgr_planes2);
MatOfFloat histRanges = new MatOfFloat (0f, 180f);
boolean accumulate = false;
Imgproc.calcHist(bgr_planes1, channels, new Mat(), hist1, histSize, histRanges, accumulate);
Core.normalize(hist1, hist1, 0, hist1.rows(), Core.NORM_MINMAX, -1, new Mat());
Imgproc.calcHist(bgr_planes2, channels, new Mat(), hist2, histSize, histRanges, accumulate);
Core.normalize(hist2, hist2, 0, hist2.rows(), Core.NORM_MINMAX, -1, new Mat());
img1.convertTo(img1, CvType.CV_32F);
img2.convertTo(img2, CvType.CV_32F);
hist1.convertTo(hist1, CvType.CV_32F);
hist2.convertTo(hist2, CvType.CV_32F);
double compare = Imgproc.compareHist(hist1, hist2, Imgproc.CV_COMP_CHISQR);
Log.d("ImageComparator", "compare: "+compare);
if (compare > 0 && compare < 1500) {
    Toast.makeText(MainActivity.this, "Images may be possible duplicates, verifying", Toast.LENGTH_LONG).show();
    new asyncTask(MainActivity.this).execute();
} else if (compare == 0) {
    Toast.makeText(MainActivity.this, "Images are exact duplicates", Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(MainActivity.this, "Images are not duplicates", Toast.LENGTH_LONG).show();
}
startTime = System.currentTimeMillis();
How can I change my code so that comparing these two images returns true? Any advice would be of great help.
You seem to be comparing only a limited feature vector, which in this case is just a histogram of the image.
The algorithm you have used is not suitable for facial recognition, as it only looks at the image's color spectrum.
See this possible duplicate on how to perform facial recognition.
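As a rough illustration of what comparing richer features can look like (this is only keypoint matching, not the facial recognition approach in the linked duplicate, and the distance cut-off is an arbitrary assumption to tune), here is a minimal sketch using ORB with the OpenCV 2.4-style Java API on the 8-bit grayscale Mats from before the CV_32F conversion:
// A minimal ORB keypoint-matching sketch (OpenCV 2.4-style features2d API).
// img1 and img2 are assumed to be the 8-bit grayscale Mats from above,
// taken BEFORE the convertTo(CV_32F) calls; ORB needs 8-bit input.
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfKeyPoint kp1 = new MatOfKeyPoint();
MatOfKeyPoint kp2 = new MatOfKeyPoint();
Mat desc1 = new Mat();
Mat desc2 = new Mat();
detector.detect(img1, kp1);
detector.detect(img2, kp2);
extractor.compute(img1, kp1, desc1);
extractor.compute(img2, kp2, desc2);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(desc1, desc2, matches);
// Count "good" matches; the Hamming-distance cut-off of 50 is an arbitrary assumption to tune.
int good = 0;
for (DMatch m : matches.toList()) {
    if (m.distance < 50) good++;
}
Log.d("ImageComparator", "good ORB matches: " + good);
Two photos of the same person taken from similar angles tend to share many keypoint matches, while a plain histogram only says the two images have similar overall brightness distributions.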
Related
I'm writing a simple Android application using OpenCV that will draw red contours around detected black objects.
Here is the processing code:
Mat hsv = new Mat();
Mat maskInrange = new Mat();
Mat dilateMat = new Mat();
List<MatOfPoint> contours= new ArrayList<>();
Imgproc.cvtColor(rgbaImage, hsv, Imgproc.COLOR_RGB2HSV);
Scalar lowerThreshold = new Scalar(0, 0, 0);
Scalar upperThreshold = new Scalar(15, 15, 15);
Core.inRange(hsv, lowerThreshold, upperThreshold, maskInrange);
Imgproc.dilate(maskInrange, dilateMat, new Mat());
Mat h = new Mat();
Imgproc.findContours(dilateMat, contours, h, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
Imgproc.drawContours(rgbaImage, contours, -1, new Scalar(255, 0, 0), 1);
The problem is that instead of one big contour representing the object, there are many small, unstable contours.
I guess it's caused by noise, but what should the next step be in improving the app?
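One common next step, sketched below under the assumption that the flicker really is noise (the kernel size and area cut-off are guesses you would need to tune), is to clean the mask with morphological opening and closing and then drop tiny contours by area:
// Sketch: denoise the binary mask before findContours and keep only large contours.
// The 5x5 elliptical kernel and the 500-pixel area threshold are assumptions to tune.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
Imgproc.morphologyEx(maskInrange, dilateMat, Imgproc.MORPH_OPEN, kernel);  // remove small speckles
Imgproc.morphologyEx(dilateMat, dilateMat, Imgproc.MORPH_CLOSE, kernel);   // fill small holes
List<MatOfPoint> cleanContours = new ArrayList<>();
Imgproc.findContours(dilateMat, cleanContours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
List<MatOfPoint> bigContours = new ArrayList<>();
for (MatOfPoint c : cleanContours) {
    if (Imgproc.contourArea(c) > 500) {
        bigContours.add(c);
    }
}
Imgproc.drawContours(rgbaImage, bigContours, -1, new Scalar(255, 0, 0), 1);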
In OpenCV I want to compare two histograms. First I transform a Bitmap to a Mat:
Bitmap puzzleBmp = BitmapFactory.decodeFile(photoPath, options);
mat = new Mat(puzzleBmp.getHeight(), puzzleBmp.getWidth(), CvType.CV_8U, new Scalar(4));
Utils.bitmapToMat(puzzleBmp, mat);
Next I want to create histogram of this image:
Mat mRgba = new Mat();
Imgproc.cvtColor(mat, mRgba, Imgproc.COLOR_RGBA2RGB);
Imgproc.GaussianBlur(mRgba, mRgba, new Size(5, 5), 0, Imgproc.BORDER_DEFAULT);
Mat mHSV = new Mat();
Imgproc.cvtColor(mRgba, mHSV, Imgproc.COLOR_RGB2HSV_FULL);
Mat hist = new Mat();
int h_bins = 30;
int s_bins = 32;
MatOfInt mHistSize = new MatOfInt (h_bins, s_bins);
MatOfFloat mRanges = new MatOfFloat(0, 179, 0, 255);
MatOfInt mChannels = new MatOfInt(0, 1);
List<Mat> lHSV = Arrays.asList(mHSV);
Mat mask2 = new Mat();
mask2 = Mat.zeros( mRgba.rows() + 2, mRgba.cols() + 2, CvType.CV_8UC1 );
Range rowRange = new Range( 1, mask2.rows() - 1 );
Range colRange = new Range( 1, mask2.cols() - 1 );
Mat mask = new Mat();
mask = mask2.submat(rowRange, colRange);
boolean accumulate = false;
Imgproc.calcHist(lHSV, mChannels, mask, hist, mHistSize, mRanges, accumulate);
Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX, -1, new Mat());
From this I get hist. But when I apply these transformations to two different images, Imgproc.compareHist() always returns the same values, which would mean they are the same pictures.
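One thing that stands out here, as an editorial observation rather than something stated in the post: the mask is built from Mat.zeros, and calcHist only counts pixels where the mask is non-zero, so both histograms may well come out empty and therefore always compare as equal. A minimal sketch reusing the variables above, passing an empty Mat so that every pixel is counted:
// Sketch: same calcHist call but with an empty Mat as the mask argument,
// which makes calcHist use every pixel instead of only the (all-zero) mask region.
Imgproc.calcHist(lHSV, mChannels, new Mat(), hist, mHistSize, mRanges, accumulate);
Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX, -1, new Mat());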
I'm trying to train my SVM with 4 images. All of my images are 300x400. I resize them to 304x400 so that I can compute the HOGDescriptor of my images with a 16x16 block. Then I use Core.hconcat(mats, trainData) to gather all of my images into one Mat. After that, when I try to set labels for my trainData, the train call produces the error below. I'm new to OpenCV. What is wrong?
Mat rose1 = new Mat();
Mat rose2 = new Mat();
Mat rose3 = new Mat();
Mat rose4 = new Mat();
Mat rose5 = new Mat();
try {
rose1 = org.opencv.android.Utils.loadResource(
getApplicationContext(), R.drawable.rose1);
rose2 = org.opencv.android.Utils.loadResource(
getApplicationContext(), R.drawable.rose2);
rose3 = org.opencv.android.Utils.loadResource(
getApplicationContext(), R.drawable.rose3);
rose4 = org.opencv.android.Utils.loadResource(
getApplicationContext(), R.drawable.rose4);
rose5 = org.opencv.android.Utils.loadResource(
getApplicationContext(), R.drawable.rose5);
} catch (IOException e) {
e.printStackTrace();
}
Mat rose1Resized = new Mat();
Mat rose2Resized = new Mat();
Mat rose3Resized = new Mat();
Mat rose4Resized = new Mat();
Size sz = new Size(304, 400);
Imgproc.resize(rose1, rose1Resized, sz);
Imgproc.resize(rose2, rose2Resized, sz);
Imgproc.resize(rose3, rose3Resized, sz);
Imgproc.resize(rose4, rose4Resized, sz);
// HOG
MatOfFloat rose1Float = new MatOfFloat();
MatOfFloat rose2Float = new MatOfFloat();
MatOfFloat rose3Float = new MatOfFloat();
MatOfFloat rose4Float = new MatOfFloat();
HOGDescriptor hog = new HOGDescriptor(new Size(304, 400), new Size(16, 16),
        new Size(new Point(8, 8)), new Size(new Point(8, 8)), 9);
hog.compute(rose1Resized, rose1Float);
hog.compute(rose2Resized, rose2Float);
hog.compute(rose3Resized, rose3Float);
hog.compute(rose4Resized, rose4Float);
ArrayList<Mat> mats = new ArrayList<>();
mats.add(rose1Float);
mats.add(rose2Float);
mats.add(rose3Float);
mats.add(rose4Float);
// SVM
Mat trainData = new Mat();
Core.hconcat(mats, trainData);
float[] lableFloat = { 1, 1, 1, 1 };
Mat lables = new Mat(1, 4, CvType.CV_32FC1);
lables.put(0, 0, lableFloat);
CvSVM svm = new CvSVM();
CvSVMParams params = new CvSVMParams();
params.set_svm_type(CvSVM.C_SVC);
params.set_kernel_type(CvSVM.LINEAR);
params.set_term_crit(new TermCriteria(TermCriteria.EPS, 100, 1e-6));
svm.train(trainData, lables, new Mat(), new Mat(), params);
Error is:
E/AndroidRuntime(27347): CvException [org.opencv.core.CvException: cv::Exception: /home/reports/ci/slave_desktop/50-SDK/opencv/modules/ml/src/inner_functions.cpp:671: error: (-209) Response array must contain as many elements as the total number of samples in function cvPreprocessCategoricalResponses
First of all, I reshape the MatOfFloat after computing HOG, because rose1Float was 65268x1 and I need it as a one-row Mat.
Mat roseReshaped1 = rose1Float.reshape(1, 1);
Mat roseReshaped2 = rose2Float.reshape(1, 1);
Mat roseReshaped3 = rose3Float.reshape(1, 1);
Mat roseReshaped4 = rose4Float.reshape(1, 1);
Then I used push_back instead of Core.hconcat(mats, trainData):
Mat trainData = new Mat(0, sizeOfCols, CvType.CV_32FC1);
trainData.push_back(roseReshaped1);
trainData.push_back(roseReshaped2);
trainData.push_back(roseReshaped3);
trainData.push_back(roseReshaped4);
My trainData is now 4x65268, and this is my label Mat, or, as OpenCV calls it, the response:
int[] l = { 1, 2, 3, 4 };
Mat lables = new Mat(4, 1, CvType.CV_32SC1);
lables.put(0, 0, l);
Now everything works perfectly fine. Thanks to @berak.
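Assuming the same OpenCV 2.4 API as above, prediction on a new image would then look roughly like the sketch below; testImage is a hypothetical 304x400 Mat prepared exactly like the training images:
// Sketch: train with the corrected data and labels, then classify one new sample.
// 'testImage' is a hypothetical Mat preprocessed the same way as the training images.
CvSVM svm = new CvSVM();
CvSVMParams params = new CvSVMParams();
params.set_svm_type(CvSVM.C_SVC);
params.set_kernel_type(CvSVM.LINEAR);
params.set_term_crit(new TermCriteria(TermCriteria.EPS, 100, 1e-6));
svm.train(trainData, lables, new Mat(), new Mat(), params);
MatOfFloat testFloat = new MatOfFloat();
hog.compute(testImage, testFloat);       // same HOGDescriptor as in the question
Mat testRow = testFloat.reshape(1, 1);   // 1 x 65268 row vector, CV_32F
float predictedLabel = svm.predict(testRow);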
I'm trying to get the red rectangle region below "C", as in the image below:
Below is my source code using OpenCV4Android:
public void threshold() {
Mat rgbMat = new Mat();
Mat grayMat = new Mat();
Mat edgeMat = new Mat();
Utils.bitmapToMat(bmp, rgbMat);
Mat intermediate = new Mat();
Imgproc.cvtColor(rgbMat, intermediate, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(intermediate, intermediate, new Size(3, 3), 0);
Imgproc.threshold(intermediate, intermediate, 190, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Imgproc.Canny(intermediate, intermediate, 60, 140);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat mHierarchy = new Mat();
Imgproc.findContours(intermediate, contours, mHierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Scalar CONTOUR_COLOR = new Scalar(255,0,0,255);
Log.e(TAG, "Contours count: " + contours.size());
Imgproc.drawContours(intermediate, contours, -1, CONTOUR_COLOR);
Bitmap edgeBmp = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), Config.ARGB_8888);
Utils.matToBitmap(intermediate, edgeBmp);
imageView.setImageBitmap(edgeBmp);
}
But the result is not what I expected, as in the image below:
As the log shows, Contours count: 372, and the rectangle region is discontinuous. How can I get the contour of the red rectangle region and filter out the other, useless regions? I have referenced some other questions, but the problem is still not solved. Could you do me a favor?
[update] I changed the code following the suggestion from Morotspaj:
public void thresholdNew() {
Mat rgbMat = new Mat();
Mat grayMat = new Mat();
Utils.bitmapToMat(bmp, rgbMat);
Imgproc.cvtColor(rgbMat, grayMat, Imgproc.COLOR_BGR2GRAY);
Vector<Mat> bgr_planes = new Vector<Mat>();
Core.split(rgbMat, bgr_planes);
Mat redMat = bgr_planes.get(2);
Mat redness = new Mat();
Core.subtract(redMat, grayMat, redness);
Mat intermediateMat1 = new Mat();
Mat intermediateMat2 = new Mat();
Imgproc.GaussianBlur(redness, intermediateMat1, new Size(15,15), 0);
Imgproc.GaussianBlur(redness, intermediateMat2, new Size(55,55), 0);
Mat red_mask = new Mat();
Core.subtract(intermediateMat1, intermediateMat2, red_mask );
Imgproc.threshold(red_mask , red_mask , 90, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Mat masked_image = rgbMat.clone();
masked_image = masked_image.setTo(new Scalar(255,0,0), red_mask );
Bitmap edgeBmp = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), Config.ARGB_8888);
Utils.matToBitmap(masked_image, edgeBmp);
imageView.setImageBitmap(edgeBmp);
}
But the result is not as I expected and differs from Morotspaj's.
Is there any error in the above code?
[update] Sorry, I have been very busy these days. I will try again later, and if I cannot implement it in Java I will use Morotspaj's code through JNI. I will update soon.
I made a filter to mask out the red rectangle region, just for you ;)
Mat rgbMat = imread("red_rectangle.jpg", -1);
Mat grayMat;
cvtColor(rgbMat, grayMat, COLOR_BGR2GRAY);
// Separate the red channel and compare it to the gray image
Mat channels[3];
split(rgbMat, channels);
Mat redness = Mat_<float>(channels[2]) - Mat_<float>(grayMat);
// Find the sharp red region
Mat red_blur1;
Mat red_blur2;
GaussianBlur(redness, red_blur1, Size(15,15), 0);
GaussianBlur(redness, red_blur2, Size(55,55), 0);
Mat red_mask = (red_blur1-red_blur2) > 2;
// Store result
Mat masked_image = rgbMat.clone();
masked_image.setTo(Scalar(0,0,255), red_mask);
imwrite("red_mask.png", red_mask);
imwrite("masked_image.png", masked_image);
The GaussianBlur method calls can be replaced by boxFilter if you need better performance, and the constants here and there can of course be tweaked. Hope this helps!
EDIT: Taking the difference of two differently blurred images is known as Difference of Gaussians (DoG). It finds changes in a certain scale depending on the size of the kernels. The smaller kernel is used to smooth away small details and noise. The bigger kernel destroys the details we are interested in but not the regions with very smooth changes that we don't want. By taking the difference between them we end up with only the details in the scale we are interested in! A mask can then be created easily by thresholding with the > operator.
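Since the question targets OpenCV4Android, a rough Java sketch of the same filter follows; it is untested against the original images, so treat the constants as assumptions. One detail worth checking in the [update] code above: Utils.bitmapToMat normally produces an RGBA Mat, so after Core.split the red plane is index 0, not index 2 as in the BGR ordering the C++ version assumes.
// Rough Java port of the C++ DoG filter above (a sketch; constants copied from the C++ code).
Mat rgbaMat = new Mat();
Utils.bitmapToMat(bmp, rgbaMat);                        // bitmapToMat yields RGBA channel order
Mat grayMat = new Mat();
Imgproc.cvtColor(rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
List<Mat> planes = new ArrayList<Mat>();
Core.split(rgbaMat, planes);
Mat redness = new Mat();
// Red plane is index 0 for RGBA input (it would be index 2 for BGR, as in the C++ code).
Core.subtract(planes.get(0), grayMat, redness, new Mat(), CvType.CV_32F);
Mat blurSmall = new Mat();
Mat blurLarge = new Mat();
Imgproc.GaussianBlur(redness, blurSmall, new Size(15, 15), 0);
Imgproc.GaussianBlur(redness, blurLarge, new Size(55, 55), 0);
Mat dog = new Mat();
Core.subtract(blurSmall, blurLarge, dog);
// Fixed threshold of 2, matching the C++ "> 2"; note that THRESH_OTSU is not used here.
Mat redMask = new Mat();
Imgproc.threshold(dog, redMask, 2, 255, Imgproc.THRESH_BINARY);
redMask.convertTo(redMask, CvType.CV_8UC1);             // setTo expects an 8-bit mask
Mat maskedImage = rgbaMat.clone();
maskedImage.setTo(new Scalar(255, 0, 0, 255), redMask); // paint the detected region red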
I want to compare the similarity of two pictures.
Code:
Mat mat1=Highgui.imread("/mnt/sdcard/91.png");
Mat mat2=Highgui.imread("/mnt/sdcard/92.png");
double distance = Imgproc.compareHist(mat1, mat2, Imgproc.CV_COMP_CORREL); //(this line throws an exception)
Exception information:
01-30 10:48:20.203: E/AndroidRuntime(3540): Caused by: CvException
[org.opencv.core.CvException:
/home/andreyk/OpenCV2/trunk/opencv/modules/imgproc/src/histogram.cpp:1387:
error: (-215) H1.type() == H2.type() && H1.type() == CV_32F in
function double cv::compareHist(const cv::_InputArray&, const
cv::_InputArray&, int)
Can anybody help me? How should I solve this?
First make sure that both images have 1 channel (if not, then convert them to grayscale with cvtColor or choose one channel with cvSplit) and have the same type, for instance CV_8UC1.
Then calculate histograms of these images.
Example of code:
int histSize = 180;
float range[] = {0, 180};
const float* histRange = {range};
bool uniform = true;
bool accumulate = false;
cv::Mat hist1, hist2;
cv::calcHist(&mat1, 1, 0, cv::Mat(), hist1, 1, &histSize, &histRange, uniform, accumulate );
cv::calcHist(&mat2, 1, 0, cv::Mat(), hist2, 1, &histSize, &histRange, uniform, accumulate );
double result = cv::compareHist( hist1, hist2, CV_COMP_CORREL);
On Android the code would be similar to:
Mat image0 = ...; //
Mat image1 = ...;
Mat hist0 = new Mat();
Mat hist1 = new Mat();
int hist_bins = 30; //number of histogram bins
int hist_range[]= {0,180};//histogram range
MatOfFloat ranges = new MatOfFloat(0f, 256f);
MatOfInt histSize = new MatOfInt(25);
Imgproc.calcHist(Arrays.asList(image0), new MatOfInt(0), new Mat(), hist0, histSize, ranges);
Imgproc.calcHist(Arrays.asList(image1), new MatOfInt(0), new Mat(), hist1, histSize, ranges);
double res = Imgproc.compareHist(image0, image01, Imgproc.CV_COMP_CORREL);
@skornos your code is wrong.
It should be
Mat image0 = ...; //
Mat image1 = ...;
Mat hist0 = new Mat();
Mat hist1 = new Mat();
int hist_bins = 30; //number of histogram bins
int hist_range[]= {0,180};//histogram range
MatOfFloat ranges = new MatOfFloat(0f, 256f);
MatOfInt histSize = new MatOfInt(25);
Imgproc.calcHist(Arrays.asList(image0), new MatOfInt(0), new Mat(), hist0, histSize, ranges);
Imgproc.calcHist(Arrays.asList(image1), new MatOfInt(0), new Mat(), hist1, histSize, ranges);
double res = Imgproc.compareHist(hist0, hist1, Imgproc.CV_COMP_CORREL);
Note the last line: it should compare hist0 to hist1, not the images themselves.
Mat hsvRef = new Mat();
Mat hsvCard = new Mat();
Mat srcRef = new Mat(refImage.getHeight(), refImage.getWidth(), CvType.CV_8UC2);
Utils.bitmapToMat(refImage, srcRef);
Mat srcCard = new Mat(cardImage.getHeight(), cardImage.getWidth(), CvType.CV_8UC2);
Utils.bitmapToMat(cardImage, srcCard);
/// Convert to HSV
Imgproc.cvtColor(srcRef, hsvRef, Imgproc.COLOR_BGR2HSV);
Imgproc.cvtColor(srcCard, hsvCard, Imgproc.COLOR_BGR2HSV);
/// Using 50 bins for hue and 60 for saturation
int hBins = 50;
int sBins = 60;
MatOfInt histSize = new MatOfInt( hBins, sBins);
// hue varies from 0 to 179, saturation from 0 to 255
MatOfFloat ranges = new MatOfFloat( 0f,180f,0f,256f );
// we compute the histogram from the 0-th and 1-st channels
MatOfInt channels = new MatOfInt(0, 1);
Mat histRef = new Mat();
Mat histCard = new Mat();
ArrayList<Mat> histImages=new ArrayList<Mat>();
histImages.add(hsvRef);
Imgproc.calcHist(histImages,
channels,
new Mat(),
histRef,
histSize,
ranges,
false);
Core.normalize(histRef,
histRef,
0,
1,
Core.NORM_MINMAX,
-1,
new Mat());
histImages=new ArrayList<Mat>();
histImages.add(hsvCard);
Imgproc.calcHist(histImages,
channels,
new Mat(),
histCard,
histSize,
ranges,
false);
Core.normalize(histCard,
histCard,
0,
1,
Core.NORM_MINMAX,
-1,
new Mat());
double resp1 = Imgproc.compareHist(histRef, histCard, 0);
Log.d(TAG, "HIST COMPARE 0" + resp1);
double resp2 = Imgproc.compareHist(histRef, histCard, 1);
Log.d(TAG, "HIST COMPARE 1" + resp2);
double resp3 = Imgproc.compareHist(histRef, histCard, 2);
Log.d(TAG, "HIST COMPARE 2" + resp3);
double resp4 = Imgproc.compareHist(histRef, histCard, 3);
Log.d(TAG, "HIST COMPARE 3" + resp4);