Currently I'm developing an app that detects colored circles. I'm following this tutorial, where the author detects red circles in an image with Python. I've written the same code, just in Java.
Mat mat = new Mat(bitmap.getWidth(), bitmap.getHeight(), CvType.CV_8UC3);
Mat hsv_image = new Mat();
Utils.bitmapToMat(bitmap, mat);
Imgproc.cvtColor(mat, hsv_image, Imgproc.COLOR_BGR2HSV);
Mat lower_red_hue_range = new Mat();
Mat upper_red_hue_range = new Mat();
Core.inRange(hsv_image, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lower_red_hue_range);
Core.inRange(hsv_image, new Scalar(160, 100, 100), new Scalar(179, 255, 255), upper_red_hue_range);
Utils.matToBitmap(hsv_image, bitmap);
mutableBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
image.setImageBitmap(mutableBitmap);
The image I use is identical to the one from the tutorial:
This is the image after applying BGR2HSV:
When I execute the code with the lower red hue range, it detects the blue circle. When I use the upper red hue range, I get a black bitmap (it detects nothing). How can that be? What am I doing wrong? This is literally the Python code moved over to Java, so why is the result different?
Thanks in advance.
Your mat is a CvType.CV_8UC1 image, i.e. you are working on a grayscale image. Try CvType.CV_8UC3:
Mat mat = new Mat(bitmap.getWidth(), bitmap.getHeight(), CvType.CV_8UC3);
hsv_image should look like this:
How to select a custom range:
You may want to detect a green circle.
Well, in HSV, typically the range is:
H in [0,360]
S,V in [0,100]
However, for CV_8UC3 images, each component H,S,V can be represented by only 256 values at most, since it's stored in 1 byte. So, in OpenCV, the ranges H,S,V for CV_8UC3 are:
H in [0,180] <- halved to fit in the range
S,V in [0,255] <- stretched to fit the range
So to switch from the typical range to the OpenCV range you need to:
opencv_H = typical_H / 2;
opencv_S = typical_S * 2.55;
opencv_V = typical_V * 2.55;
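As a sanity check, those three formulas can be wrapped in a tiny helper. This is a minimal sketch in plain Java; the class and method names are made up for illustration and are not part of OpenCV:

```java
// Sketch: convert a "typical" HSV triple (H in [0,360], S,V in [0,100])
// into OpenCV's 8-bit CV_8UC3 convention (H in [0,180], S,V in [0,255]).
public class HsvRange {
    static int[] typicalToOpenCv(double h, double s, double v) {
        return new int[] {
            (int) Math.round(h / 2.0),   // hue is halved
            (int) Math.round(s * 2.55),  // saturation stretched to 0..255
            (int) Math.round(v * 2.55)   // value stretched to 0..255
        };
    }
}
```

For example, a typical green of H=120, S=V=100 maps to (60, 255, 255), which matches the green range used below.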
So, green colors are around the value of hue of 120. The hue can have a value in the interval [0,360].
However, for Mat3b HSV images, the range for H is in [0,180], i.e. is halved so it can fit in a 8 bit representation with at most 256 possible values.
So, you want the H value to be around 120 / 2 = 60, say from 50 to 70.
You also set a minimum value of 100 for S and V, to exclude very dark (almost black) colors.
Mat green_hue_range = new Mat();
Core.inRange(hsv_image, new Scalar(50, 100, 100), new Scalar(70, 255, 255), green_hue_range);
Use the following code: convert the selected color to HSV, pass it to the blob detector, and then pass an image to the detector.
private Scalar converScalarRgba2HSV(Scalar rgba) {
Mat pointMatRgba = new Mat(1, 1, CvType.CV_8UC3, rgba);
Mat pointMatHsv = new Mat();
Imgproc.cvtColor(pointMatRgba, pointMatHsv, Imgproc.COLOR_RGB2HSV_FULL, 4);
return new Scalar(pointMatHsv.get(0, 0));
}
// Blob Detector
public class ColorBlobDetector {
// Lower and Upper bounds for range checking in HSV color space
private Scalar mLowerBound = new Scalar(0);
private Scalar mUpperBound = new Scalar(0);
// Minimum contour area in percent for contours filtering
private static double mMinContourArea = 0.1;
// Color radius for range checking in HSV color space
private Scalar mColorRadius = new Scalar(25,50,50,0);
private Mat mSpectrum = new Mat();
private List<MatOfPoint> mContours = new ArrayList<MatOfPoint>();
Mat mPyrDownMat = new Mat();
Mat mHsvMat = new Mat();
Mat mMask = new Mat();
Mat mDilatedMask = new Mat();
Mat mHierarchy = new Mat();
public void setColorRadius(Scalar radius) {
mColorRadius = radius;
}
public void setHsvColor(Scalar hsvColor) {
double minH = (hsvColor.val[0] >= mColorRadius.val[0]) ? hsvColor.val[0]-mColorRadius.val[0] : 0;
double maxH = (hsvColor.val[0]+mColorRadius.val[0] <= 255) ? hsvColor.val[0]+mColorRadius.val[0] : 255;
mLowerBound.val[0] = minH;
mUpperBound.val[0] = maxH;
mLowerBound.val[1] = hsvColor.val[1] - mColorRadius.val[1];
mUpperBound.val[1] = hsvColor.val[1] + mColorRadius.val[1];
mLowerBound.val[2] = hsvColor.val[2] - mColorRadius.val[2];
mUpperBound.val[2] = hsvColor.val[2] + mColorRadius.val[2];
mLowerBound.val[3] = 0;
mUpperBound.val[3] = 255;
Mat spectrumHsv = new Mat(1, (int)(maxH-minH), CvType.CV_8UC3);
for (int j = 0; j < maxH-minH; j++) {
byte[] tmp = {(byte)(minH+j), (byte)255, (byte)255};
spectrumHsv.put(0, j, tmp);
}
Imgproc.cvtColor(spectrumHsv, mSpectrum, Imgproc.COLOR_HSV2RGB_FULL, 4);
}
public Mat getSpectrum() {
return mSpectrum;
}
public void setMinContourArea(double area) {
mMinContourArea = area;
}
public void process(Mat rgbaImage) {
Imgproc.pyrDown(rgbaImage, mPyrDownMat);
Imgproc.pyrDown(mPyrDownMat, mPyrDownMat);
Imgproc.cvtColor(mPyrDownMat, mHsvMat, Imgproc.COLOR_RGB2HSV_FULL);
Core.inRange(mHsvMat, mLowerBound, mUpperBound, mMask);
Imgproc.dilate(mMask, mDilatedMask, new Mat());
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(mDilatedMask, contours, mHierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// Find max contour area
double maxArea = 0;
Iterator<MatOfPoint> each = contours.iterator();
while (each.hasNext()) {
MatOfPoint wrapper = each.next();
double area = Imgproc.contourArea(wrapper);
if (area > maxArea)
maxArea = area;
}
// Filter contours by area and resize to fit the original image size
mContours.clear();
each = contours.iterator();
while (each.hasNext()) {
MatOfPoint contour = each.next();
if (Imgproc.contourArea(contour) > mMinContourArea*maxArea) {
Core.multiply(contour, new Scalar(4,4), contour);
mContours.add(contour);
}
}
}
public List<MatOfPoint> getContours() {
return mContours;
}}
Now set up the detector:
public void initDetector() {
mDetector = new ColorBlobDetector();
mSpectrum = new Mat();
mBlobColorRgba = new Scalar(255);
mBlobColorHsv = new Scalar(255);
SPECTRUM_SIZE = new org.opencv.core.Size(500, 64);
CONTOUR_COLOR = new Scalar(0, 255, 0, 255);
mDetector.setHsvColor(converScalarRgba2HSV(new Scalar(0,255,255,255)));
Imgproc.resize(mDetector.getSpectrum(), mSpectrum, SPECTRUM_SIZE, 0, 0, Imgproc.INTER_LINEAR_EXACT);
mIsColorSelected = true;
}
Now pass an image to the detector object:
Mat mRgba = new Mat(inputFrame.height(), inputFrame.width(), CvType.CV_8UC4);
mRgba = inputFrame;
mDetector.process(mRgba);
List<MatOfPoint> contours = mDetector.getContours();
Log.e(TAG, "Contours count: " + contours.size());
drawContours(mRgba, contours, -1, CONTOUR_COLOR);
return mRgba;
Happy Coding!
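For reference, the contour filter inside `process()` keeps only contours whose area exceeds `mMinContourArea` (10%) of the largest contour's area. That selection rule can be sketched without any OpenCV types; this is plain Java with a hypothetical `areas` array standing in for the per-contour areas:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the area filter used in ColorBlobDetector.process():
// keep the indices of all areas larger than minFraction of the maximum.
public class AreaFilter {
    static List<Integer> keepLarge(double[] areas, double minFraction) {
        double maxArea = 0;
        for (double a : areas) maxArea = Math.max(maxArea, a);
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > minFraction * maxArea) kept.add(i);
        }
        return kept;
    }
}
```

With areas {100, 5, 50} and minFraction 0.1, only the contours with areas 100 and 50 survive, exactly as in the two-pass loop above.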
Related
I'm detecting square patches on a strip image, but the boundaries are not clear. How can I extract these patches cleanly? Below are my code and the images.
Thanks
Code
Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.src2);
// bmp = changeBitmapContrastBrightness(bmp, (float)1.5, 0);
Mat src = new Mat();
Utils.bitmapToMat(bmp, src);
// Creating an empty matrix to store the result
Mat dst = new Mat();
// Creating kernel matrix
Mat kernel = Mat.ones(1,1, CvType.CV_32F);
for(int i = 0; i<kernel.rows(); i++) {
for(int j = 0; j<kernel.cols(); j++) {
double[] m = kernel.get(i, j);
for(int k = 1; k<m.length; k++) {
m[k] = m[k]/(2 * 2);
}
kernel.put(i,j, m);
}
}
Imgproc.filter2D(src, dst, -1, kernel);
Imgproc.cvtColor(dst, dst, Imgproc.COLOR_BGR2GRAY);
// Preparing the kernel matrix object
Mat kernel1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT,
new Size((2*2) + 1, (2*2)+1));
Imgproc.dilate(dst, dst, kernel1);
Imgproc.threshold(dst, dst, 160, 255, Imgproc.THRESH_BINARY);
// Creating kernel matrix
Mat kernel2 = Mat.ones(5,5, CvType.CV_32F);
Imgproc.morphologyEx(dst, dst, Imgproc.MORPH_OPEN, kernel2);
}
private static List<MatOfPoint> contourFind(Mat img){
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(img, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
List<MatOfPoint> squares = new ArrayList<>();
for(MatOfPoint cnt: contours){
MatOfPoint2f curve = new MatOfPoint2f(cnt.toArray());
MatOfPoint2f approxCurve = new MatOfPoint2f();
Imgproc.approxPolyDP(curve, approxCurve, 0.02 * Imgproc.arcLength(curve, true), true);
int numberVertices = (int) approxCurve.total();
double contourArea = Imgproc.contourArea(cnt);
if (Math.abs(contourArea) < img.size().area() / 10){
squares.add(cnt);
}
}
return squares;
}
Original Image
After process Image
You might want to look at segmentation algorithms and train a model on sample images for each category. For classical machine learning there are algorithms like the watershed algorithm; if you can use deep learning and neural networks, look at semantic segmentation.
I am trying to detect the dot of a laser pointer of any colour. I started from the reference code here: OpenCV Android Track laser dot.
That code runs perfectly, but only for red; I want to detect a laser dot of any colour.
I am new to OpenCV.
Here's what I have done so far:
Mat originalFrame= new Mat();
Mat frame = new Mat();
cvf.rgba().copyTo(originalFrame);
cvf.rgba().copyTo(frame);
Mat frameH;
Mat frameV;
Mat frameS;
mRgba = cvf.rgba();
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
// Mat frameS;
// Convert it to HSV
Imgproc.cvtColor(frame, frame, Imgproc.COLOR_RGB2HSV);
// Split the frame into individual components (separate images for H, S,
// and V)
mChannels.clear();
Core.split(frame, mChannels); // Split channels: 0-H, 1-S, 2-V
frameH = mChannels.get(0);
frameS = mChannels.get(1);
frameV = mChannels.get(2);
// Apply a threshold to each component
Imgproc.threshold(frameH, frameH, 155, 160, Imgproc.THRESH_BINARY);
// Imgproc.threshold(frameS, frameS, 0, 100, Imgproc.THRESH_BINARY);
Imgproc.threshold(frameV, frameV, 250, 256, Imgproc.THRESH_BINARY);
// Perform an AND operation
Core.bitwise_and(frameH, frameV, frame);
//
// Core.bitwise_and(frame,frameS,frame);
Imgproc.findContours(frame, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
hierarchy.release();
for ( int contourIdx=0; contourIdx < contours.size(); contourIdx++ )
{
// Minimum size allowed for consideration
MatOfPoint2f approxCurve = new MatOfPoint2f();
MatOfPoint2f contour2f = new MatOfPoint2f( contours.get(contourIdx).toArray() );
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true)*0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint( approxCurve.toArray() );
// Get bounding rect of contour
Rect rect = Imgproc.boundingRect(points);
Imgproc.rectangle(originalFrame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(0, 0, 255), 3);
}
This is an old question, but I found my solution using Core.inRange.
Here is my alternative version:
@Override
public void onCameraViewStarted(int width, int height) {
mat1 = new Mat(height, width, CvType.CV_16UC4);
mat2 = new Mat(height, width, CvType.CV_16UC4);
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
Mat src = inputFrame.rgba();
Imgproc.cvtColor(inputFrame.rgba(), mat1, Imgproc.COLOR_BGR2HSV);
// rangeLow and rangeHight are Scalars
Core.inRange(mat1, rangeLow, rangeHight, mat2);
Core.MinMaxLocResult mmG = Core.minMaxLoc(mat2);
Core.rotate(src, src, Core.ROTATE_90_CLOCKWISE);
Imgproc.circle(src,mmG.maxLoc,30,new Scalar(0,255,0), 5, Imgproc.LINE_AA);
return src;
}
The code you posted carries out two thresholding operations. One on the hue and one on the value. It then ANDs the results together. Because of the way it thresholds the hue, the effect is that it is looking for a bright red(ish) spot.
My first solution would be to look for just a bright spot (so threshold only the value frame). You might also try looking for high saturation (except that a laser spot may well overload the sensor and produce an apparently unsaturated pixel).
To select the appropriate threshold values, you will have to experiment with various images.
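The threshold-and-AND idea can be sketched without OpenCV, using plain int arrays as stand-ins for the split H and V planes. The helper names here are hypothetical; on real Mats, `Imgproc.threshold`/`Core.inRange` and `Core.bitwise_and` do the same work:

```java
// Sketch of per-channel thresholding followed by a pixel-wise AND,
// mirroring the hue/value masking in the question's code.
public class MaskAnd {
    // Binary mask: 255 where the channel value lies in [lo, hi], else 0.
    static int[] inRange(int[] channel, int lo, int hi) {
        int[] mask = new int[channel.length];
        for (int i = 0; i < channel.length; i++)
            mask[i] = (channel[i] >= lo && channel[i] <= hi) ? 255 : 0;
        return mask;
    }
    // Pixel-wise AND of two binary masks.
    static int[] and(int[] a, int[] b) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] & b[i];
        return out;
    }
}
```

A pixel survives only if its hue is in the target band and its value is near saturation, which is why both ranges need tuning against sample images.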
I'm working on Android with OpenCV 3.2. I want to apply a perspective transform, but I'm having some trouble.
I referenced this official OpenCV document (go to the perspective transform example): http://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html
However, I get a result image rotated by 90° or 180°.
(input image and the two output images, rotated 90° and 180°)
This is my code:
public void prespective(Bitmap img){
Mat imgSrc = new Mat();
Bitmap bmp32 = img.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, imgSrc);
Mat gray = new Mat();
cvtColor( imgSrc, gray, COLOR_RGBA2GRAY ); //Convert to gray
Mat thr = new Mat();
threshold( gray, thr, 125, 255, THRESH_BINARY ); //Threshold the gray
List<MatOfPoint> contours = new LinkedList<>(); // Vector for storing contours
findContours( thr, contours,new Mat(), RETR_CCOMP, CHAIN_APPROX_SIMPLE ); // Find the contours in the image
MatOfPoint m = biggestCountousMatOfPoint(contours);
RotatedRect box = Imgproc.minAreaRect(new MatOfPoint2f(m.toArray()));
Rect destImageRect = box.boundingRect();
Mat destImage = new Mat(destImageRect.size(),imgSrc.type());
final Point[] pts = new Point[4];
box.points(pts);
Mat _src = new MatOfPoint2f(pts[0], pts[1], pts[2], pts[3]);
Mat _dst = new MatOfPoint2f(new Point(0, 0), new Point(destImage.width() - 1, 0), new Point(destImage.width() - 1, destImage.height() - 1), new Point(0, destImage.height() - 1));
Mat perspectiveTransform=Imgproc.getPerspectiveTransform(_src, _dst);
Imgproc.warpPerspective(imgSrc, destImage, perspectiveTransform, destImage.size());
if(destImage.height() > 0 && destImage.width() > 0) {
Bitmap cinBitmapRotated = Bitmap.createBitmap(destImage.width(), destImage.height(), Bitmap.Config.ARGB_8888);
if (cinBitmapRotated != null) {
Utils.matToBitmap(destImage, cinBitmapRotated);
prenom2Cin.setImageBitmap(cinBitmapRotated);
}
}
}
What's the problem in my code?
I am trying to draw the contour of every element in a picture of separated musical notations.
This is the code I am running on Android/Java:
public static Bitmap findNotationContours(Bitmap inputImage) {
Mat inputImageMat = new Mat();
Utils.bitmapToMat(inputImage, inputImageMat);
Imgproc.cvtColor(inputImageMat, inputImageMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(inputImageMat, inputImageMat, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(inputImageMat, inputImageMat, 255, 1, 1, 11, 2);
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(inputImageMat, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
int contourColor = android.R.color.holo_red_dark;
Scalar contourScalar = new Scalar(Color.red(contourColor), Color.green(contourColor), Color.blue(contourColor));
for (int i = 0; i < contours.size(); i++) {
Rect rect = Imgproc.boundingRect(contours.get(i));
Imgproc.rectangle(inputImageMat,
new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
contourScalar, 3);
}
Utils.matToBitmap(inputImageMat, inputImage);
return inputImage;
}
The result I am getting is:
If you zoom in enough you can see that the contours of the notations are there, but I need to keep the original picture with just a rectangular outline around each notation, so I can save those as patterns.
Can you please tell me what I am doing wrong?
With the crucial information from @Dan Mašek the problem was fixed. I will add the revised code, which needs little explanation: we keep a second Mat holding the initial 3-channel image and draw the contour rectangles on that.
public static Bitmap findNotationContours(Bitmap inputImage) {
Mat inputImageMat = new Mat();
Mat resultImageMat = new Mat();
Utils.bitmapToMat(inputImage, inputImageMat);
Utils.bitmapToMat(inputImage, resultImageMat);
Imgproc.cvtColor(inputImageMat, inputImageMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(inputImageMat, inputImageMat, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(inputImageMat, inputImageMat, 255, 1, 1, 11, 2);
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(inputImageMat, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
Scalar contourScalar = new Scalar(255,0,0);
for (int i = 0; i < contours.size(); i++) {
Rect rect = Imgproc.boundingRect(contours.get(i));
Imgproc.rectangle(resultImageMat,
new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
contourScalar, 3);
Log.i("contour", "contour" + i + " x:" + rect.x + " y:" + rect.y);
}
Utils.matToBitmap(resultImageMat, inputImage);
return inputImage;
}
final result
I am trying to recognize a pedestrian traffic signal. I convert the image to HSV color space, then apply the inRange function to keep only the green lights. Here is my original image:
This is my code:
public void onManagerConnected(int status) {
switch (status) {
case LoaderCallbackInterface.SUCCESS:
{
Log.i(TAG, "OpenCV loaded successfully...................");
Mat img = null;
try {
img = Utils.loadResource(getBaseContext(), R.drawable.glarrygreen, Highgui.CV_LOAD_IMAGE_COLOR);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Mat mHSV = new Mat();
Mat mRgba2 = new Mat();
Mat mHSVThreshed = new Mat();
Imgproc.cvtColor(img, mHSV, Imgproc.COLOR_BGR2HSV, 3);
//This works for red lights
Core.inRange(mHSV, new Scalar(0, 64, 200), new Scalar(69, 255, 255), mHSVThreshed);
//this works for green lights
Core.inRange(mHSV, new Scalar(85, 64, 200), new Scalar(170, 255, 255), mHSVThreshed);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(mRgba2, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
double maxArea = -1;
int maxAreaIdx = -1;
for (int idx = 0; idx < contours.size(); idx++) {
Mat contour = contours.get(idx);
double contourarea = Imgproc.contourArea(contour);
if (contourarea > maxArea) {
maxArea = contourarea;
maxAreaIdx = idx;
}
}
Imgproc.cvtColor(mHSVThreshed, img, Imgproc.COLOR_GRAY2BGR, 0);
Imgproc.cvtColor(img, mRgba2, Imgproc.COLOR_BGR2RGBA, 0);
Bitmap bmp = Bitmap.createBitmap(img.cols(), img.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mRgba2, bmp);
}
}
}
This is my output image
Now I need to filter out the other green lights in the scene. How do I do this? How do I get the most prominent green signal in the scene?
EDIT 1:
I am trying to use the findContours() method to get the list of contours, iterate through the results to find the largest one, and then display only that contour. How do I remove the smaller contours?
You can try to filter your binary image with a non-maxima suppression algorithm.
Here is a Java Demo of non-max suppression.
Note that the NMS algorithm can be coded using morphological functions (erosion and dilation).
EDIT: it seems that OpenCV already has an NMS function with the following prototype:
void nonMaximaSuppression(const Mat& src, const int sz, Mat& dst, const Mat mask)
EDIT 2: From the OpenCV doc: for every possible (sz x sz) region within src, an element is a local maximum of src iff it is strictly greater than all other elements of the windows which intersect that element.
Try a 50x50 patch (sz = 50).
FYI The method is derived from the following paper: A. Neubeck and L. Van Gool. "Efficient Non-Maximum Suppression," ICPR 2006
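The morphological trick mentioned above can be sketched in one dimension: a sample is a local maximum iff it equals the maximum of its (2*sz+1)-wide neighbourhood, i.e. iff dilation leaves it unchanged. This is a minimal plain-Java sketch with illustrative names, not the OpenCV API, and it keeps ties rather than enforcing strict greatness:

```java
// 1-D sketch of morphological non-maximum suppression:
// a sample survives iff it equals the maximum of its
// (2*sz+1)-wide window, i.e. iff dilate(src) == src there.
// Assumes non-negative (8-bit-style) values.
public class Nms {
    static int[] suppress(int[] src, int sz) {
        int[] out = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            int localMax = 0;
            for (int j = Math.max(0, i - sz); j <= Math.min(src.length - 1, i + sz); j++)
                localMax = Math.max(localMax, src[j]);
            out[i] = (src[i] == localMax) ? src[i] : 0;
        }
        return out;
    }
}
```

For {1, 3, 2, 5, 4} with sz = 1 this keeps only the 3 and the 5, zeroing everything else.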
I finally got this working using the findContours() and drawContours() methods.
This is how it works:
Imgproc.GaussianBlur(mHSVThreshed, mHSVThreshed, new Size(5, 5), 5);
Imgproc.findContours(mHSVThreshed, contours,newcont, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
double maxArea = -1;
int maxAreaIdx = -1;
for (int idx = 0; idx < contours.size(); idx++) {
Mat contour = contours.get(idx);
double contourarea = Imgproc.contourArea(contour);
if (contourarea > maxArea) {
maxArea = contourarea;
maxAreaIdx = idx;
}
}
Imgproc.drawContours(img, contours, maxAreaIdx, new Scalar(120, 255, 120), 1);
Bitmap bmp = Bitmap.createBitmap(img.cols(), img.rows(), Bitmap.Config.ARGB_8888);
Scalar c = new Scalar(255, 0, 0, 255);
Core.putText(img, VAL, new Point(100,100), 3, 1, c, 2);
Imgproc.erode(mHSVThreshed, mHSVThreshed, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1.5,1.5)));
Utils.matToBitmap(img, bmp);