I am trying to make an automation Android application which finds an image (template/subimage) inside another image (main/bigger image).
The menu image is from a OnePlus 3T.
The WhatsApp icon image is from a Moto G3.
When I search for the OnePlus 3T's WhatsApp icon in its own menu image, it is found successfully.
But when I try to find a subimage taken from a different device with a different screen size, it does not work.
Can someone please help? Below is the code I am using.
class MatchingDemo {
    public Mat run(Mat img, Mat templ, String outFile, int match_method) {
        System.out.println("\nRunning Template Matching");

        // Create the result matrix
        int result_cols = img.cols() - templ.cols() + 1;
        int result_rows = img.rows() - templ.rows() + 1;
        Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);

        // Do the Matching and Normalize
        Imgproc.matchTemplate(img, templ, result, match_method);
        Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());

        // Localizing the best match with minMaxLoc
        MinMaxLocResult mmr = Core.minMaxLoc(result);
        Point matchLoc;
        if (match_method == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED) {
            matchLoc = mmr.minLoc;
        } else {
            matchLoc = mmr.maxLoc;
        }
        System.out.println("matchLoc.x " + matchLoc.x);
        System.out.println("templ.cols " + templ.cols());
        System.out.println("matchLoc.y " + matchLoc.y);
        System.out.println("templ.rows " + templ.rows());

        // Show me what you got: draw the match rectangle on the image
        Imgproc.rectangle(img, matchLoc, new Point(matchLoc.x + templ.cols(),
                matchLoc.y + templ.rows()), new Scalar(0, 255, 0), 20);

        // Save the visualized detection.
        System.out.println("Writing " + outFile);
        Imgcodecs.imwrite(outFile, img);
        return img;
    }
}
I cropped the template image from your given snapshot and everything worked fine:
New template Image:
Code:
import cv2

img_gray = cv2.imread("path/to/snapshot", 0)      # grayscale copy for matching
img_rgb = cv2.imread("path/to/snapshot")          # color copy for drawing
template_img = cv2.imread("path/to/template", 0)
h, w = template_img.shape

res = cv2.matchTemplate(img_gray, template_img, cv2.TM_CCOEFF)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

# For TM_CCOEFF the best match is at the maximum location;
# the color must be a plain tuple, not a numpy array
cv2.rectangle(img_rgb, max_loc, (max_loc[0] + w, max_loc[1] + h), (0, 0, 255), 3)
cv2.imwrite("./debug.png", img_rgb)
Output:
Note: matchTemplate is a very basic technique and is not scale-invariant; to match templates taken at a different resolution you may try SIFT features instead.
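Since matchTemplate is not scale-invariant, one common workaround for the cross-device case is to rerun the match at several template scales and keep the best score. A minimal Java sketch, assuming 8-bit grayscale inputs; the method, scale range, and class/method names here are my own illustrative choices:

import org.opencv.core.Core;
import org.opencv.core.Core.MinMaxLocResult;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

class MultiScaleMatcher {
    // Resize the template from 50% to 150% and keep the best normalized score.
    public static MinMaxLocResult bestMatch(Mat img, Mat templ) {
        MinMaxLocResult best = null;
        for (int i = 5; i <= 15; i++) {
            double scale = i / 10.0;
            Mat scaled = new Mat();
            Imgproc.resize(templ, scaled, new Size(templ.cols() * scale, templ.rows() * scale));
            if (scaled.cols() > img.cols() || scaled.rows() > img.rows()) continue;
            Mat result = new Mat();
            Imgproc.matchTemplate(img, scaled, result, Imgproc.TM_CCOEFF_NORMED);
            MinMaxLocResult mmr = Core.minMaxLoc(result);
            if (best == null || mmr.maxVal > best.maxVal) best = mmr;
            // In a full version, also remember the winning scale so the
            // bounding rectangle can be drawn with the scaled template size.
        }
        return best; // best.maxLoc is the top-left corner of the best match
    }
}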
Related
int match_method = Imgproc.TM_CCOEFF;
int result_cols = labReport.cols() - textMat.cols() + 1;
int result_rows = labReport.rows() - textMat.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
Imgproc.matchTemplate(labReport, textMat, result, match_method);
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
MinMaxLocResult mmr = Core.minMaxLoc(result);
Point matchLoc;
if (match_method == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED) {
matchLoc = mmr.minLoc;
} else {
matchLoc = mmr.maxLoc;
}
return matchLoc;
I'm using this code. I am trying to find a template in my image; however, even when I use a template that I know is not in the image, Imgproc.matchTemplate still returns a point as a result.
I need some kind of threshold for this: if the similarity is more than 80%, return the location, otherwise return some value like null.
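One way to get that behavior is to use a normalized method such as TM_CCOEFF_NORMED and skip the Core.normalize call, since NORM_MINMAX rescaling destroys the absolute score you want to threshold. A minimal sketch under that assumption, reusing the imports from the snippet above (the 0.8 cut-off matches the 80% asked for):

// Returns the best match location, or null when similarity is below 80%.
public static Point matchWithThreshold(Mat img, Mat templ) {
    Mat result = new Mat();
    // Do NOT normalize the result: TM_CCOEFF_NORMED already yields
    // comparable scores, with 1.0 being a perfect match.
    Imgproc.matchTemplate(img, templ, result, Imgproc.TM_CCOEFF_NORMED);
    MinMaxLocResult mmr = Core.minMaxLoc(result);
    return (mmr.maxVal >= 0.8) ? mmr.maxLoc : null;
}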
I have downloaded and successfully run the example provided in the OpenCV4Android SDK.
I am able to simply display the camera frames without any processing:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
return inputFrame.rgba();
}
I want to process the live frames against a predefined image template to recognize that template. I took the reference from this post and implemented it accordingly, but I get a black screen only.
private Mat mCameraMat = new Mat();
private Mat mTemplateMat;

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mCameraMat = inputFrame.rgba();
    initialize();

    int match_method = Imgproc.TM_SQDIFF;

    // Create the result matrix
    int result_cols = mCameraMat.cols() - mTemplateMat.cols() + 1;
    int result_rows = mCameraMat.rows() - mTemplateMat.rows() + 1;
    Log.d(TAG, " mCameraMat cols " + mCameraMat.cols());
    Log.d(TAG, " mCameraMat rows " + mCameraMat.rows());
    Log.d(TAG, " mTemplateMat cols " + mTemplateMat.cols());
    Log.d(TAG, " mTemplateMat rows " + mTemplateMat.rows());
    Mat result = new Mat(result_rows, result_cols, CvType.CV_32F);

    // Do the Matching and Normalize
    Imgproc.matchTemplate(mCameraMat, mTemplateMat, result, match_method);
    Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());

    // Localizing the best match with minMaxLoc
    MinMaxLocResult mmr = Core.minMaxLoc(result);
    Point matchLoc;
    if (match_method == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED) {
        matchLoc = mmr.minLoc;
    } else {
        matchLoc = mmr.maxLoc;
    }

    Rect roi = new Rect((int) matchLoc.x, (int) matchLoc.y, mTemplateMat.cols(), mTemplateMat.rows());
    Core.rectangle(mCameraMat, new Point(roi.x, roi.y), new Point(roi.width - 2, roi.height - 2), new Scalar(255, 0, 0, 255), 2);
    return result;
}

public void initialize() {
    try {
        if (mCameraMat.empty())
            return;
        if (mTemplateMat == null) {
            Mat temp = Utils.loadResource(Tutorial1Activity.this, R.drawable.icon);
            mTemplateMat = new Mat(temp.size(), CvType.CV_32F);
            Imgproc.cvtColor(temp, mTemplateMat, Imgproc.COLOR_BGR2RGBA);
            Log.d(TAG, "initialize mTemplateMat cols " + mTemplateMat.cols());
            Log.d(TAG, "initialize mTemplateMat rows " + mTemplateMat.rows());
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
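A likely cause of the black screen, as a hedged side note: onCameraFrame returns result, a single-channel CV_32F score map of a different size than the preview, instead of the annotated camera frame. A sketch of the corrected tail of the method under that diagnosis, which also fixes the rectangle's corner arithmetic:

// Draw the full match rectangle on the camera frame ...
Core.rectangle(mCameraMat, new Point(roi.x, roi.y),
        new Point(roi.x + roi.width - 2, roi.y + roi.height - 2),
        new Scalar(255, 0, 0, 255), 2);
// ... and return the frame itself, not the float score map.
return mCameraMat;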
Note:
My ultimate goal is to recognize playing cards from the live camera. Kindly suggest the best approach: should I use image templates or something else to make things faster?
This is how I want to recognize multiple cards from the live camera:
The result should be ♠A ♠K ♠Q ♠J ♠10 when the camera preview looks like the one below.
Template matching is unlikely to be the best approach here.
Try ASIFT for affine-invariant SIFT matching, or normal SIFT (an OpenCV implementation exists). However, since these are in C++, you may want to use JNI to call them from Java on an Android device. This is probably the best way to detect the suit of the card from the four symbols.
Another option for detecting and recognizing the numbers/letters on the cards is to use a text detector like MSER and then run a text recognizer on the regions of interest indicated by the MSER filter.
In any case, you are unlikely to produce the best results from the kind of image you've shown. You may be able to get acceptable performance for full-frontal, upright images with the first method.
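If calling C++ SIFT through JNI is too heavy, ORB is exposed directly in the OpenCV Java bindings and is also scale- and rotation-invariant; a minimal matching sketch, assuming the OpenCV 2.4/3.x Java API (ORB is my substitution here, not what this answer recommends):

import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

class OrbMatcher {
    // Counts descriptor matches between a template and a scene image (both grayscale).
    public static int countMatches(Mat templGray, Mat sceneGray) {
        FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        MatOfKeyPoint kpTempl = new MatOfKeyPoint();
        MatOfKeyPoint kpScene = new MatOfKeyPoint();
        Mat descTempl = new Mat();
        Mat descScene = new Mat();

        detector.detect(templGray, kpTempl);
        extractor.compute(templGray, kpTempl, descTempl);
        detector.detect(sceneGray, kpScene);
        extractor.compute(sceneGray, kpScene, descScene);

        // Brute-force Hamming matching suits ORB's binary descriptors.
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descTempl, descScene, matches);
        return matches.toArray().length;
    }
}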
I'm developing for Android using OpenCV in Eclipse. I'm trying to do template matching frame by frame. I can't convert the template and run template matching on it. I'm using this function:
public void initialize() {
    if (src.empty())
        return;
    if (template == null) {
        Mat templ = Highgui.imread(getFileAbsPath("1.png"),
                Highgui.CV_LOAD_IMAGE_UNCHANGED);
        template = new Mat(templ.size(), CvType.CV_32F);
        Imgproc.cvtColor(templ, (Mat) template, Imgproc.COLOR_BGR2RGBA);
    }
}

private String getFileAbsPath(String fileName) {
    File f = new File(cacheDir, fileName);
    return f.getAbsolutePath();
}
I get an error on:
Imgproc.cvtColor(templ, (Mat) template, Imgproc.COLOR_BGR2RGBA);
My image is this (it's the number 1):
https://drive.google.com/file/d/0B5tH_Qo3-GvhaV9QSTUteXFiQmM/view?usp=sharing
Next, I have my method:
@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    src = inputFrame.rgba();
    initialize();

    int match_method = Imgproc.TM_SQDIFF;

    // Create the result matrix
    int result_cols = src.cols() - ((Mat) template).cols() + 1;
    int result_rows = src.rows() - ((Mat) template).rows() + 1;
    Mat result = new Mat(result_rows, result_cols, CvType.CV_32F);

    // Do the Matching and Normalize
    Imgproc.matchTemplate(src, (Mat) template, result, match_method);
    Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());

    MinMaxLocResult mmr = Core.minMaxLoc(result);
    Point matchLoc;
    if (match_method == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED) {
        matchLoc = mmr.minLoc;
    } else {
        matchLoc = mmr.maxLoc;
    }

    Rect roi = new Rect((int) matchLoc.x, (int) matchLoc.y, ((Mat) template).cols(), ((Mat) template).rows());
    Core.rectangle(src, new Point(roi.x, roi.y), new Point(roi.width - 2, roi.height - 2), new Scalar(255, 0, 0, 255), 2);
    return src;
}
I get an error on this line:
Imgproc.matchTemplate(src, (Mat) template, result, match_method);
I can't do the match and I don't know why. Can someone help me?
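A plausible diagnosis, assuming the PNG has an alpha channel: CV_LOAD_IMAGE_UNCHANGED loads it as a 4-channel BGRA Mat, so COLOR_BGR2RGBA (which expects 3 channels) throws, and the template type then never matches the CV_8UC4 frame from inputFrame.rgba(), which makes matchTemplate throw as well. A sketch of a loading routine that keeps the types consistent:

public void initialize() {
    if (template == null) {
        // Force a 3-channel BGR load, then convert to RGBA so the template
        // is CV_8UC4, the same type as the camera frame.
        Mat templ = Highgui.imread(getFileAbsPath("1.png"), Highgui.CV_LOAD_IMAGE_COLOR);
        template = new Mat();
        Imgproc.cvtColor(templ, (Mat) template, Imgproc.COLOR_BGR2RGBA);
    }
}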
I'm a beginner in OpenCV4Android and I would like to get some help if possible.
I'm trying to detect colored triangles, squares, or circles using my Android phone camera, but I don't know where to start.
I have been reading the O'Reilly Learning OpenCV book and have gained some knowledge of OpenCV.
Here is what I want to make:
1- Get the tracking color (just the HSV color) of the object by touching the screen
- I have already done this by using the color blob example from the OpenCV4Android samples
2- Find shapes like triangles, squares, or circles in the camera view, based on the color chosen before.
I have only found examples of finding shapes within a static image. What I would like to do is find them using the camera in real time.
Any help would be appreciated.
Best regards and have a nice day.
If you plan to use the NDK for your OpenCV work, you can use the same idea they use in OpenCV Tutorial 2 (Mixed Processing).
// On each camera frame, call your native method
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    // Native call to perform color and shape detection.
    // getNativeObjAddr() passes the address of the Mat (camera frame) to the
    // native side as a long, so you don't have to create and destroy a Mat
    // object on each frame.
    Nativecleshpdetect(mRgba.getNativeObjAddr());
    return mRgba;
}

public native void Nativecleshpdetect(long matAddrRgba);
On the native side:

JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_Tutorial2Activity_Nativecleshpdetect(JNIEnv*, jobject, jlong addrRgba1) {
    // mRgb1 points to the input camera frame, so all manipulations done here
    // are reflected on the live camera frame.
    Mat& mRgb1 = *(Mat*) addrRgba1;
    // Once you have the Mat object (mRgb1), you can implement all the color
    // and shape detection algorithms you have learnt from the OpenCV book.
}
Since all manipulations are done through pointers, you have to be a bit careful when handling them. Hope this helps.
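One detail the tutorial relies on: the native library must be loaded before the native method is first called. A one-line sketch, assuming the library name used by Tutorial 2 (mixed_sample):

static {
    // Loads libmixed_sample.so; without this the native call throws UnsatisfiedLinkError.
    System.loadLibrary("mixed_sample");
}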
Why don't you make use of JavaCV? I think it's a better alternative; you don't have to use the NDK at all for this.
Try this:
http://code.google.com/p/javacv/
If you check OpenCV's Back Projection tutorial, it does what you are looking for (and a bit more).
Back Projection:
"In terms of statistics, the values stored in the BackProjection
matrix represent the probability that a pixel in a image belongs to
the region with the selected color."
I have converted that tutorial to OpenCV4Android (2.4.8), like you were looking for; it does not use the Android NDK. You can see all the code here on GitHub.
You can also check this answer for more details.
Though it's a bit late, I would like to make a contribution to the question.
1- Get the tracking color (just the HSV color) of the object by touching the screen - I have already done this by using the color blob example from the OpenCV4Android samples

Implement OnTouchListener in your activity. In the onTouch function:
int cols = mRgba.cols();
int rows = mRgba.rows();
int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;
int x = (int) event.getX() - xOffset;
int y = (int) event.getY() - yOffset;
Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");
if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;
Rect touchedRect = new Rect();
touchedRect.x = (x > 4) ? x - 4 : 0;
touchedRect.y = (y > 4) ? y - 4 : 0;
touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;
Mat touchedRegionRgba = mRgba.submat(touchedRect);
Mat touchedRegionHsv = new Mat();
Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);
// Calculate average color of touched region
mBlobColorHsv = Core.sumElems(touchedRegionHsv);
int pointCount = touchedRect.width * touchedRect.height;
for (int i = 0; i < mBlobColorHsv.val.length; i++)
mBlobColorHsv.val[i] /= pointCount;
mBlobColorRgba = converScalarHsv2Rgba(mBlobColorHsv);
mColor = mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] + ", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3];
Log.i(TAG, "Touched rgba color: (" + mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] +
", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3] + ")");
mRgba is a Mat object which was initialized in onCameraViewStarted as
mRgba = new Mat(height, width, CvType.CV_8UC4);
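The converScalarHsv2Rgba helper used above is not shown in the snippet; a minimal sketch along the lines of the OpenCV color-blob-detection sample it comes from:

private Scalar converScalarHsv2Rgba(Scalar hsvColor) {
    // Put the averaged HSV color into a 1x1 Mat, convert it, and read back RGBA.
    Mat pointMatRgba = new Mat();
    Mat pointMatHsv = new Mat(1, 1, CvType.CV_8UC3, hsvColor);
    Imgproc.cvtColor(pointMatHsv, pointMatRgba, Imgproc.COLOR_HSV2RGB_FULL, 4);
    return new Scalar(pointMatRgba.get(0, 0));
}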
And for the 2nd part:
2- Find shapes like triangles, squares, or circles in the camera view, based on the color chosen before.
I have tried to find out the shape of the selected contour using approxPolyDP:
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(0).toArray());
MatOfPoint2f approxCurve = new MatOfPoint2f();

// Approximate the contour with a tolerance of 2% of its perimeter
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);

// Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
System.out.println("points length " + points.toArray().length);

if (points.toArray().length == 5) {
    System.out.println("Pentagon");
    mShape = "Pentagon";
} else if (points.toArray().length > 5) {
    System.out.println("Circle");
    Imgproc.drawContours(mRgba, contours, 0, new Scalar(255, 255, 0, -1));
    mShape = "Circle";
} else if (points.toArray().length == 4) {
    System.out.println("Square");
    mShape = "Square";
} else if (points.toArray().length == 3) {
    System.out.println("Triangle");
    mShape = "Triangle";
}
This was done in the onCameraFrame function, after I obtained the contour list.
For me, if the length of the point array was more than 5, it was usually a circle. But there are other algorithms to obtain a circle and its attributes, as sketched below.
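One such alternative for circles is Imgproc.HoughCircles, which returns center and radius directly; a minimal sketch, assuming the OpenCV 2.4 Java API used elsewhere in this answer (all parameter values are illustrative):

Mat gray = new Mat();
Imgproc.cvtColor(mRgba, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(9, 9), 2);

// Each column of 'circles' holds (centerX, centerY, radius).
Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1,
        gray.rows() / 8, 100, 30, 10, 100);

for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i);
    Core.circle(mRgba, new Point(c[0], c[1]), (int) c[2], new Scalar(0, 255, 0, 255), 2);
}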
I'm trying to match an image against the camera input in Android using template matching. When I try this with 2 static images, as in OpenCV Template Matching example in Android, everything works just fine. But when I try to use images captured from the camera, I do not get the correct result. The following is the code that I have written:
String baseDir = Environment.getExternalStorageDirectory().getAbsolutePath();
Mat img = Highgui.imread(baseDir + "/mediaAppPhotos/img2.png");
Mat templ = Highgui.imread(baseDir + "/mediaAppPhotos/chars.png");

int match_method = Imgproc.TM_CCOEFF;
int result_cols = img.cols() - templ.cols() + 1;
int result_rows = img.rows() - templ.rows() + 1;
// Mat takes (rows, cols, type)
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);

// Do the Matching and Normalize
Imgproc.matchTemplate(img, templ, result, match_method);
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());

// Localizing the best match with minMaxLoc
MinMaxLocResult mmr = Core.minMaxLoc(result);
Point matchLoc;
if (match_method == Imgproc.TM_SQDIFF || match_method == Imgproc.TM_SQDIFF_NORMED) {
    matchLoc = mmr.minLoc;
} else {
    matchLoc = mmr.maxLoc;
}

// Show me what you got
Core.rectangle(img, matchLoc,
        new Point(matchLoc.x + templ.cols(), matchLoc.y + templ.rows()),
        new Scalar(0, 255, 0));

// Save the visualized detection.
System.out.println("Writing " + baseDir + "/mediaAppPhotos/result.png");
Highgui.imwrite(baseDir + "/mediaAppPhotos/result.png", img);
I want this template matching to work when the image is captured from the camera as well. Any help is greatly appreciated!
Maybe it's like this:
https://play.google.com/store/apps/details?id=in.mustafaak.imagematcher&hl=es_419
The code is available on GitHub.