I am working on OpenCV in Android and I want to change the eye pupil color through the hue channel. I have already achieved this, but the problem is that the region I detect is a rectangle, while the eye pupil is a circular region. Kindly help me figure out how to make this region circular.
private Mat get_template(CascadeClassifier clasificator, Rect area,int size){
Mat template = new Mat();
Mat mROI = mGray.submat(area);
MatOfRect eyes = new MatOfRect();
Point iris = new Point();
Rect eye_template = new Rect();
clasificator.detectMultiScale(mROI, eyes, 1.15, 2,Objdetect.CASCADE_FIND_BIGGEST_OBJECT|Objdetect.CASCADE_SCALE_IMAGE, new Size(30,30),new Size());
Rect[] eyesArray = eyes.toArray();
for (int i = 0; i < eyesArray.length; i++){
Rect e = eyesArray[i];
e.x = area.x + e.x;
e.y = area.y + e.y;
Rect eye_only_rectangle = new Rect((int)e.tl().x,(int)( e.tl().y + e.height*0.4),(int)e.width,(int)(e.height*0.6));
mROI = mGray.submat(eye_only_rectangle);
Mat vyrez = mRgba.submat(eye_only_rectangle);
Core.MinMaxLocResult mmG = Core.minMaxLoc(mROI);
Core.circle(vyrez, mmG.minLoc,2, new Scalar(255, 255, 255, 255),2);
iris.x = mmG.minLoc.x + eye_only_rectangle.x;
iris.y = mmG.minLoc.y + eye_only_rectangle.y;
eye_template = new Rect((int)iris.x-size/2,(int)iris.y-size/2 ,size,size);
Core.rectangle(mRgba,eye_template.tl(),eye_template.br(),new Scalar(255, 0, 0, 255), 2);
template = (mGray.submat(eye_template)).clone();
return template;
}
return template;
}
Some potential solutions:
the simplest, although it might not be very robust, is to calculate the inscribed circle (the circle bounded by the rectangle) and change its color; if your pupil detection is very accurate this solution may work fine (a sketch of this idea follows this list).
a more robust solution would be to detect the area of the pupil based on color or gradient (edge detection)
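A minimal sketch of the first idea (the inscribed circle), reusing the mRgba frame and the eye_template Rect from your code; the replacement hue (145) is just an example value, and the OpenCV 2.4-style Core.circle call matches the API already used in the question:
// Recolor only the circle inscribed in the detected pupil rectangle
Mat pupilRgba = mRgba.submat(eye_template);
Mat pupilHsv = new Mat();
Imgproc.cvtColor(pupilRgba, pupilHsv, Imgproc.COLOR_RGB2HSV);
// Build a filled circular mask inscribed in the rectangle
Mat mask = Mat.zeros(pupilRgba.size(), CvType.CV_8UC1);
Point center = new Point(eye_template.width / 2.0, eye_template.height / 2.0);
int radius = Math.min(eye_template.width, eye_template.height) / 2;
Core.circle(mask, center, radius, new Scalar(255), -1);
// Shift the hue channel only where the mask is set
List<Mat> hsv = new ArrayList<Mat>();
Core.split(pupilHsv, hsv);
hsv.get(0).setTo(new Scalar(145), mask);
Core.merge(hsv, pupilHsv);
// Convert back to RGBA and copy the circular region into the frame
Mat recolored = new Mat();
Imgproc.cvtColor(pupilHsv, recolored, Imgproc.COLOR_HSV2RGB);
Imgproc.cvtColor(recolored, recolored, Imgproc.COLOR_RGB2RGBA);
recolored.copyTo(pupilRgba, mask);
Because pupilRgba is a submat of mRgba, the masked copy writes the recolored circle straight back into the frame.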
Related
I am trying to detect a laser light dot of any colour. I have taken some reference code from here: OpenCV Android Track laser dot
That code runs perfectly for red colour detection only, but I want to detect a laser dot of any colour.
I am new to OpenCV.
Here's what I have done so far:
Mat originalFrame= new Mat();
Mat frame = new Mat();
cvf.rgba().copyTo(originalFrame);
cvf.rgba().copyTo(frame);
Mat frameH;
Mat frameV;
Mat frameS;
mRgba = cvf.rgba();
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
// Mat frameS;
// Convert it to HSV
Imgproc.cvtColor(frame, frame, Imgproc.COLOR_RGB2HSV);
// Split the frame into individual components (separate images for H, S,
// and V)
mChannels.clear();
Core.split(frame, mChannels); // Split channels: 0-H, 1-S, 2-V
frameH = mChannels.get(0);
frameS = mChannels.get(1);
frameV = mChannels.get(2);
// Apply a threshold to each component
Imgproc.threshold(frameH, frameH, 155, 160, Imgproc.THRESH_BINARY);
// Imgproc.threshold(frameS, frameS, 0, 100, Imgproc.THRESH_BINARY);
Imgproc.threshold(frameV, frameV, 250, 256, Imgproc.THRESH_BINARY);
// Perform an AND operation
Core.bitwise_and(frameH, frameV, frame);
//
// Core.bitwise_and(frame,frameS,frame);
Imgproc.findContours(frame, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
hierarchy.release();
for ( int contourIdx=0; contourIdx < contours.size(); contourIdx++ )
{
// Minimum size allowed for consideration
MatOfPoint2f approxCurve = new MatOfPoint2f();
MatOfPoint2f contour2f = new MatOfPoint2f( contours.get(contourIdx).toArray() );
//Processing on mMOP2f1 which is in type MatOfPoint2f
double approxDistance = Imgproc.arcLength(contour2f, true)*0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint( approxCurve.toArray() );
// Get bounding rect of contour
Rect rect = Imgproc.boundingRect(points);
Imgproc.rectangle(originalFrame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(0, 0, 255), 3);
}
This is an old question, but I found my solution using Core.inRange.
Here is my alternative version:
@Override
public void onCameraViewStarted(int width, int height) {
mat1 = new Mat(height, width, CvType.CV_16UC4);
mat2 = new Mat(height, width, CvType.CV_16UC4);
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
Mat src = inputFrame.rgba();
Imgproc.cvtColor(inputFrame.rgba(), mat1, Imgproc.COLOR_BGR2HSV);
// rangeLow and rangeHight are Scalar fields
Core.inRange(mat1, rangeLow, rangeHight, mat2);
Core.MinMaxLocResult mmG = Core.minMaxLoc(mat2);
Core.rotate(src, src, Core.ROTATE_90_CLOCKWISE);
Imgproc.circle(src,mmG.maxLoc,30,new Scalar(0,255,0), 5, Imgproc.LINE_AA);
return src;
}
The code you posted carries out two thresholding operations. One on the hue and one on the value. It then ANDs the results together. Because of the way it thresholds the hue, the effect is that it is looking for a bright red(ish) spot.
My first solution would be to look for just a bright spot (so threshold only the value frame). You might also try looking for high saturation (except that a laser spot may well overload the sensor and result in an apparently unsaturated pixel).
To select the appropriate threshold values, you will have to experiment with various images.
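For example, here is a rough sketch of that first suggestion in Java, reusing the frameV channel and originalFrame from the code above; the 240 threshold is only a guess and needs tuning for your camera:
// Detect a laser dot of any colour by looking only for very bright pixels
Mat bright = new Mat();
Imgproc.threshold(frameV, bright, 240, 255, Imgproc.THRESH_BINARY);
List<MatOfPoint> brightContours = new ArrayList<MatOfPoint>();
Imgproc.findContours(bright, brightContours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint c : brightContours) {
Rect r = Imgproc.boundingRect(c);
Imgproc.rectangle(originalFrame, r.tl(), r.br(), new Scalar(0, 255, 0), 3);
}
If other bright areas (windows, lamps) show up, you can additionally keep only small contours, since a laser dot is usually just a few pixels wide.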
I am trying to find contours from the raw bytes received in onPreviewFrame.
These raw bytes are not rotated when we call setDisplayOrientation (according to the Android developer docs), and it is becoming difficult to rotate the contours alone. How can I rotate these bytes efficiently and then process them? I am using OpenCV to find the contours.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
previewSize = cameraConfigUtil.cameraInstance.getParameters().getPreviewSize();
Mat srcMat = new Mat(previewSize.height, previewSize.width, CvType.CV_8UC3);
srcMat.put(0, 0, data);
rect = ImageCorrection.getLargestContour(data, previewSize.height, previewSize.width);
android.graphics.Rect rectangle = new android.graphics.Rect();
rectangle.left = rect.x;
rectangle.top = rect.y;
rectangle.right = rect.x + rect.width;
rectangle.bottom = rect.y + rect.height;
mOverlay.clear();
BarcodeGraphic graphic = new BarcodeGraphic(mOverlay);
mOverlay.add(graphic);
graphic.updateItem(rectangle);
Imgproc.rectangle(srcMat, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0, 255), 3);
Utils.matToBitmap(srcMat, bitmap);
}
Here rectangle gives the four corner points of the contour; I want it rotated according to the angle set in setDisplayOrientation.
Rotating around a point is a fairly standard geometric process. Cribbing from this answer:
The way to rotate about an arbitrary point is to first subtract the point
coordinates, do the rotation about the origin, and then add the point
coordinates back.
x2 = px + (x1-px)*cos(q)-(y1-py)*sin(q)
y2 = py + (x1-px)*sin(q)+(y1-py)*cos(q)
where px, py are the rotation point coordinates, and x1,y1 the
original 2D shape vertex and x2,y2 the rotated coordinates, and q the
angle in radians.
I would imagine that the most efficient way to do this would be to apply the transformation to the 4 vertices and then draw a new rectangle on the rotated canvas.
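As a hedged sketch in Java, this applies those formulas to the four corners of rect from the code above and draws the result; angleDegrees is assumed to be whatever you passed to setDisplayOrientation, and the rotation centre here is simply the middle of the preview frame:
// Rotate the 4 corners of rect by angleDegrees around the frame centre (px, py)
double q = Math.toRadians(angleDegrees);
double px = previewSize.width / 2.0;
double py = previewSize.height / 2.0;
Point[] corners = {
new Point(rect.x, rect.y),
new Point(rect.x + rect.width, rect.y),
new Point(rect.x + rect.width, rect.y + rect.height),
new Point(rect.x, rect.y + rect.height)
};
Point[] rotated = new Point[4];
for (int i = 0; i < 4; i++) {
double dx = corners[i].x - px;
double dy = corners[i].y - py;
rotated[i] = new Point(px + dx * Math.cos(q) - dy * Math.sin(q),
py + dx * Math.sin(q) + dy * Math.cos(q));
}
// Draw the rotated rectangle as four line segments
for (int i = 0; i < 4; i++) {
Imgproc.line(srcMat, rotated[i], rotated[(i + 1) % 4], new Scalar(255, 0, 0, 255), 3);
}
Note that for a 90 or 270 degree rotation the rotated frame swaps width and height, so you may also need to translate the points into the new frame's coordinate system.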
Currently I'm developing an app that will detect colored circles. I'm trying to do this by following this tutorial, where the author detects red circles in an image with Python. I've written the same code, just in Java.
Mat mat = new Mat(bitmap.getWidth(), bitmap.getHeight(), CvType.CV_8UC1);
Mat hsv_image = new Mat();
Utils.bitmapToMat(bitmap, mat);
Imgproc.cvtColor(mat, hsv_image, Imgproc.COLOR_BGR2HSV);
Mat lower_red_hue_range = new Mat();
Mat upper_red_hue_range = new Mat();
Core.inRange(hsv_image, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lower_red_hue_range);
Core.inRange(hsv_image, new Scalar(160, 100, 100), new Scalar(179, 255, 255), upper_red_hue_range);
Utils.matToBitmap(hsv_image, bitmap);
mutableBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
image.setImageBitmap(mutableBitmap);
The image I use is identical to the one from the tutorial:
This is the image with BGR2HSV applied:
When I execute the code using the lower red hue range, it detects the blue circle. When I use the upper red hue range it gives me a black bitmap (it doesn't detect anything). How can that be? What am I doing wrong? This is literally a copy, moved from Python to Java. Why is the result different, then?
Thanks in advance.
Your mat is a CvType.CV_8UC1 image, i.e. you are working on a grayscale image. Try with CvType.CV_8UC3:
Mat mat = new Mat(bitmap.getWidth(), bitmap.getHeight(), CvType.CV_8UC3);
hsv_image should look like this:
How to select a custom range:
You may want to detect a green circle.
Well, in HSV, typically the range is:
H in [0,360]
S,V in [0,100]
However, for CV_8UC3 images, each component H,S,V can be represented by only 256 values at most, since it's stored in 1 byte. So, in OpenCV, the ranges H,S,V for CV_8UC3 are:
H in [0,180] <- halved to fit in the range
S,V in [0,255] <- stretched to fit the range
So to switch from typical range to OpenCV range you need to:
opencv_H = typical_H / 2;
opencv_S = typical_S * 2.55;
opencv_V = typical_V * 2.55;
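As a small illustrative helper (the name typicalHsvToOpenCv is mine, not from any OpenCV API), the conversion can be wrapped like this:
// Convert a colour from typical HSV ranges (H in [0,360], S and V in [0,100])
// to the ranges OpenCV uses for 8-bit images (H in [0,180], S and V in [0,255])
static Scalar typicalHsvToOpenCv(double h, double s, double v) {
return new Scalar(h / 2.0, s * 2.55, v * 2.55);
}
// Example: pure green, H = 120, S = V = 100 -> (60, 255, 255)
Scalar green = typicalHsvToOpenCv(120, 100, 100);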
So, green colors are around a hue value of 120. The hue can have a value in the interval [0,360].
However, for Mat3b HSV images, the range for H is [0,180], i.e. it is halved so it can fit in an 8-bit representation with at most 256 possible values.
So you want the H value to be around 120 / 2 = 60, say from 50 to 70.
You also set a minimum value of 100 for S and V in order to exclude very dark (almost black) colors.
Mat green_hue_range = new Mat();
Core.inRange(hsv_image, new Scalar(50, 100, 100), new Scalar(70, 255, 255), green_hue_range);
Use the following code to convert a colour and pass it to the blob detector, then pass an image to the detector:
private Scalar converScalarRgba2HSV(Scalar rgba) {
Mat pointMatHsv= new Mat();
Mat pointMatRgba = new Mat(1, 1, CvType.CV_8UC3, rgba);
Imgproc.cvtColor(pointMatRgba,pointMatHsv, Imgproc.COLOR_RGB2HSV_FULL, 4);
return new Scalar(pointMatHsv.get(0, 0));}
// Blob Detector
public class ColorBlobDetector {
// Lower and Upper bounds for range checking in HSV color space
private Scalar mLowerBound = new Scalar(0);
private Scalar mUpperBound = new Scalar(0);
// Minimum contour area in percent for contours filtering
private static double mMinContourArea = 0.1;
// Color radius for range checking in HSV color space
private Scalar mColorRadius = new Scalar(25,50,50,0);
private Mat mSpectrum = new Mat();
private List<MatOfPoint> mContours = new ArrayList<MatOfPoint>();
Mat mPyrDownMat = new Mat();
Mat mHsvMat = new Mat();
Mat mMask = new Mat();
Mat mDilatedMask = new Mat();
Mat mHierarchy = new Mat();
public void setColorRadius(Scalar radius) {
mColorRadius = radius;
}
public void setHsvColor(Scalar hsvColor) {
double minH = (hsvColor.val[0] >= mColorRadius.val[0]) ? hsvColor.val[0]-mColorRadius.val[0] : 0;
double maxH = (hsvColor.val[0]+mColorRadius.val[0] <= 255) ? hsvColor.val[0]+mColorRadius.val[0] : 255;
mLowerBound.val[0] = minH;
mUpperBound.val[0] = maxH;
mLowerBound.val[1] = hsvColor.val[1] - mColorRadius.val[1];
mUpperBound.val[1] = hsvColor.val[1] + mColorRadius.val[1];
mLowerBound.val[2] = hsvColor.val[2] - mColorRadius.val[2];
mUpperBound.val[2] = hsvColor.val[2] + mColorRadius.val[2];
mLowerBound.val[3] = 0;
mUpperBound.val[3] = 255;
Mat spectrumHsv = new Mat(1, (int)(maxH-minH), CvType.CV_8UC3);
for (int j = 0; j < maxH-minH; j++) {
byte[] tmp = {(byte)(minH+j), (byte)255, (byte)255};
spectrumHsv.put(0, j, tmp);
}
Imgproc.cvtColor(spectrumHsv, mSpectrum, Imgproc.COLOR_HSV2RGB_FULL, 4);
}
public Mat getSpectrum() {
return mSpectrum;
}
public void setMinContourArea(double area) {
mMinContourArea = area;
}
public void process(Mat rgbaImage) {
Imgproc.pyrDown(rgbaImage, mPyrDownMat);
Imgproc.pyrDown(mPyrDownMat, mPyrDownMat);
Imgproc.cvtColor(mPyrDownMat, mHsvMat, Imgproc.COLOR_RGB2HSV_FULL);
Core.inRange(mHsvMat, mLowerBound, mUpperBound, mMask);
Imgproc.dilate(mMask, mDilatedMask, new Mat());
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(mDilatedMask, contours, mHierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// Find max contour area
double maxArea = 0;
Iterator<MatOfPoint> each = contours.iterator();
while (each.hasNext()) {
MatOfPoint wrapper = each.next();
double area = Imgproc.contourArea(wrapper);
if (area > maxArea)
maxArea = area;
}
// Filter contours by area and resize to fit the original image size
mContours.clear();
each = contours.iterator();
while (each.hasNext()) {
MatOfPoint contour = each.next();
if (Imgproc.contourArea(contour) > mMinContourArea*maxArea) {
Core.multiply(contour, new Scalar(4,4), contour);
mContours.add(contour);
}
}
}
public List<MatOfPoint> getContours() {
return mContours;
}}
Now set up the detector:
public void initDetector() {
mDetector = new ColorBlobDetector();
mSpectrum = new Mat();
mBlobColorRgba = new Scalar(255);
mBlobColorHsv = new Scalar(255);
SPECTRUM_SIZE = new org.opencv.core.Size(500, 64);
CONTOUR_COLOR = new Scalar(0, 255, 0, 255);
mDetector.setHsvColor(converScalarRgba2HSV(new Scalar(0,255,255,255)));
Imgproc.resize(mDetector.getSpectrum(), mSpectrum, SPECTRUM_SIZE, 0, 0, Imgproc.INTER_LINEAR_EXACT);
mIsColorSelected = true;
}
Now pass an image to the detector object:
Mat mRgba = new Mat(inputFrame.height(), inputFrame.width(), CvType.CV_8UC4);
mRgba = inputFrame;
mDetector.process(mRgba);
List<MatOfPoint> contours = mDetector.getContours();
Log.e(TAG, "Contours count: " + contours.size());
drawContours(mRgba, contours, -1, CONTOUR_COLOR);
return mRgba;
Happy coding!
I am working on an OpenCV eye detection project and I have successfully detected the rectangular region of both eyes with the help of a Haar cascade for both eyes. Now I want to detect the eyeballs within both eyes; the problem is that I have no Haar cascade for eyeball tracking. Kindly help me if any of you have this XML, and suggest other solutions.
Here is my eye detection code:
private Mat get_template(CascadeClassifier clasificator, Rect area,int size)
{
Mat eye = new Mat();
Mat template = new Mat();
Mat mROI = mGray.submat(area);
MatOfRect eyes = new MatOfRect();
Point iris = new Point();
Rect eye_template = new Rect();
clasificator.detectMultiScale(mROI, eyes, 1.15, 2, Objdetect.CASCADE_FIND_BIGGEST_OBJECT|Objdetect.CASCADE_SCALE_IMAGE, new Size(30,30), new Size());
Rect[] eyesArray = eyes.toArray();
for (int i = 0; i < eyesArray.length; i++)
{
Rect e = eyesArray[i];
e.x = area.x + e.x;
e.y = area.y + e.y;
Core.rectangle(mROI, e.tl(), e.br(), new Scalar(25, 50, 0, 255));
Rect eye_only_rectangle = new Rect((int)e.tl().x, (int)( e.tl().y + e.height*0.4), (int)e.width, (int)(e.height*0.6));
//reduce ROI
mROI = mGray.submat(eye_only_rectangle);
Mat vyrez = mRgba.submat(eye_only_rectangle);
Core.MinMaxLocResult mmG = Core.minMaxLoc(mROI);
//Draw pink circle on eyeball
int radius = vyrez.height()/2;
// Core.circle(vyrez, mmG.minLoc, 2, new Scalar(0, 255, 0, 1), radius);
//Core.circle(vyrez, mmG.minLoc,2, new Scalar(255, 0, 255),1);
iris.x = mmG.minLoc.x + eye_only_rectangle.x;
iris.y = mmG.minLoc.y + eye_only_rectangle.y;
eye_template = new Rect((int)iris.x-size/2,(int)iris.y-size/2 ,size,size);
//draw red rectangle around eyeball
//Core.rectangle(mRgba,eye_template.tl(),eye_template.br(),new Scalar(255, 0, 0, 255), 2);
eye = (mRgba.submat(eye_only_rectangle));
template = (mGray.submat(eye_template)).clone();
//return template;
Mat eyeball_HSV = new Mat();
Mat dest = new Mat();
//Mat eye = new Mat();
//eye = mRgba.submat(eye_only_rectangle);
List<Mat> hsv_channel = new ArrayList<Mat>();
//convert image to HSV
Imgproc.cvtColor(eye, eyeball_HSV, Imgproc.COLOR_RGB2HSV, 0);
// get HSV channel
//hsv_channel[0] is hue
//hsv_channel[1] is saturation
//hsv_channel[2] is value
Core.split(eyeball_HSV, hsv_channel);
try
{
hsv_channel.get(0).setTo(new Scalar(145));
Log.v(TAG, "Got the Channel!");
}
catch(Exception ex)
{
ex.printStackTrace();
Log.v(TAG, "Didn't get any channel");
}
Core.merge(hsv_channel, eyeball_HSV);
Imgproc.cvtColor(eyeball_HSV, dest, Imgproc.COLOR_HSV2RGB);
Imgproc.cvtColor(dest, eye, Imgproc.COLOR_RGB2RGBA);
}
return eye;
}
If you are willing to consider solutions other than Haar cascades, you can use facial landmark detection code. Facial landmark packages can give the location of the eyes in the image (usually the center of the eye and the left and right borders).
Examples of landmark detection packages:
STASM:
http://www.milbo.users.sonic.net/stasm/
Flandmark detector:
http://cmp.felk.cvut.cz/~uricamic/flandmark/
I am developing an application in which I have to detect a rectangular object and draw an outline around it. I am using the OpenCV Android library.
I successfully detect circles and draw an outline inside the image, but I repeatedly fail to detect a square or rectangle and draw it. Here is my code for circles:
Bitmap imageBmp = BitmapFactory.decodeResource(MainActivityPDF.this.getResources(),R.drawable.loadingplashscreen);
Mat imgSource = new Mat(), imgCirclesOut = new Mat();
Utils.bitmapToMat(imageBmp , imgSource);
//grey opencv
Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur( imgSource, imgSource, new Size(9, 9), 2, 2 );
Imgproc.HoughCircles( imgSource, imgCirclesOut, Imgproc.CV_HOUGH_GRADIENT, 1, imgSource.rows()/8, 200, 100, 0, 0 );
float circle[] = new float[3];
for (int i = 0; i < imgCirclesOut.cols(); i++)
{
imgCirclesOut.get(0, i, circle);
org.opencv.core.Point center = new org.opencv.core.Point();
center.x = circle[0];
center.y = circle[1];
Core.circle(imgSource, center, (int) circle[2], new Scalar(255,0,0,255), 4);
}
Bitmap bmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(imgSource, bmp);
ImageView frame = (ImageView) findViewById(R.id.imageView1);
//Bitmap bmp = Bitmap.createBitmap(100, 100, Bitmap.Config.ARGB_8888);
frame.setImageBitmap(bmp);
Any help with detecting a square/rectangle on Android? I have been stuck on this for 2 days... every example is in C++, and I can't get through that language...
Thanks.
There are many ways of detecting a rectangle using OpenCV; the most appropriate way of doing this is by finding the contours after applying Canny edge detection.
The steps are as follows:
1. Convert the image to a Mat
2. Grayscale the image
3. Apply Gaussian blur
4. Apply morphology to fill the holes, if any
5. Apply Canny edge detection
6. Find the contours of the image
7. Find the largest contour
8. Draw the largest contour.
The code is as follows:
1. Convert the image to a Mat
Utils.bitmapToMat(image,src)
2. Grayscale the image
val gray = Mat(src.rows(), src.cols(), src.type())
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY)
3. Apply Gaussian blur
Imgproc.GaussianBlur(gray, gray, Size(5.0, 5.0), 0.0)
4. Apply morphology to fill the holes, if any, and also dilate the image
val kernel = Imgproc.getStructuringElement(
Imgproc.MORPH_ELLIPSE, Size(
5.0,
5.0
)
)
Imgproc.morphologyEx(
gray,
gray,
Imgproc.MORPH_CLOSE,
kernel
) // fill holes
Imgproc.morphologyEx(
gray,
gray,
Imgproc.MORPH_OPEN,
kernel
) //remove noise
Imgproc.dilate(gray, gray, kernel)
5. Apply Canny edge detection
val edges = Mat(src.rows(), src.cols(), src.type())
Imgproc.Canny(gray, edges, 75.0, 200.0)
6. Find the contours of the image
val contours = ArrayList<MatOfPoint>()
val hierarchy = Mat()
Imgproc.findContours(
edges, contours, hierarchy, Imgproc.RETR_LIST,
Imgproc.CHAIN_APPROX_SIMPLE
)
7. Find the largest contour
public int findLargestContour(ArrayList<MatOfPoint> contours) {
double maxVal = 0;
int maxValIdx = 0;
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
double contourArea = Imgproc.contourArea(contours.get(contourIdx));
if (maxVal < contourArea) {
maxVal = contourArea;
maxValIdx = contourIdx;
}
}
return maxValIdx;
}
8. Draw the largest contour, which is the rectangle
Imgproc.drawContours(src, contours, idx, Scalar(0.0, 255.0, 0.0), 3)
There you go, you have found the rectangle.
If any error persists in the process, try resizing the source image to half of its height and width, for example:
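This is a one-line Java sketch (the Kotlin equivalent is analogous), assuming src is the source Mat used in the steps above:
// Downscale the source image to half its width and height before running the pipeline
Imgproc.resize(src, src, new Size(src.cols() / 2.0, src.rows() / 2.0));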
Have a look at the links below for proper Java code of the steps explained above:
https://github.com/dhananjay-91/DetectRectangle
Also,
https://github.com/aashari/android-opencv-rectangle-detector
You are on the right track by using the Hough transformation. Instead of HoughCircles you have to use HoughLines and check the obtained lines for intersections. If you really have to find rectangles (and not just 4-edged polygons), you should look for lines with the same angle (plus or minus a small offset); once you have found at least one such pair, look for lines that lie perpendicular to them, find a pair of those as well, and check for intersections. It should not be a big deal to perform the angle and intersection tests using vectors (endpoint minus startpoint) and lines. A rough sketch of the first part of that approach follows.
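This is only a sketch under assumptions: it reuses the grayscale, blurred imgSource from the question, and the Canny thresholds, the HoughLinesP parameters and the 5 degree tolerance mentioned in the comments are guesses you would need to tune:
// Find line segments and compute their angles; perpendicular groups whose
// members intersect are rectangle candidates.
Mat edges = new Mat();
Imgproc.Canny(imgSource, edges, 75, 200);
Mat lines = new Mat();
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 50, 50, 10);
List<double[]> segments = new ArrayList<double[]>();
for (int i = 0; i < lines.rows(); i++) {
double[] l = lines.get(i, 0); // x1, y1, x2, y2
double angle = Math.toDegrees(Math.atan2(l[3] - l[1], l[2] - l[0]));
segments.add(new double[]{l[0], l[1], l[2], l[3], angle});
}
// Next (not shown): bucket segments whose angles differ by less than about 5 degrees,
// look for a bucket roughly 90 degrees away, and test those pairs for intersections;
// four intersection points that form a closed loop give you the rectangle corners.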