Update 1
I have an idea of what the inRange function does, but I don't want to apply a mask and show a new image containing only skin color. What I want is to know whether the image contains skin color and whether it covers a large enough area.
What I want to do
I want to capture a picture whenever a finger is detected inside a boundary whose dimensions are known.
Struggling points
Manipulate image data in native code.
Detecting skin in the live camera preview, so that whenever that particular area is in focus and skin is detected, a snapshot is taken.
What I have done
I am using the JNI layer to perform the operation. I am able to get a Mat from the image data using this tutorial, but I don't know how to manipulate poutPixels. The format is NV21 and I am not sure how to do operations on it.
I need to crop the image and then detect whether there is skin present in it. I have successfully cropped the image to the desired dimensions, but I have no clue how to move forward with detecting skin. I want this method to return true or false.
Here is the code:
jbyte * pNV21FrameData = env->GetByteArrayElements(NV21FrameData, 0);
jint * poutPixels = env->GetIntArrayElements(outPixels, 0);
Mat mNV(height, width, CV_8UC3, (unsigned char*)pNV21FrameData);
Mat finalImage(height, width, CV_8UC3, (unsigned char*) poutPixels);
jfloat wScale = (float) width/screenWidth;
jfloat hScale = (float) height/screenHeight;
float temp = rectX * wScale;
int x = (int) temp;
temp = rectY * hScale;
int y = (int) temp;
int cW = (int) (width * wScale);
int cH = (int) (height * hScale);
cH = cH/2;
Rect regionToCrop(x, y, cW, cH);
mNV = mNV(regionToCrop);
finalImage = finalImage(regionToCrop);
//detect skin and return true or false
I have read about the inRange function, but I don't know how to check whether there is skin or not.
Questions
Am I on the right path to proceed further?
The image format I am getting is NV21. Is it 8UC1, or can it be 8UC3 too?
How to proceed from here to start detecting skin?
Any help is appreciated.
I have solved my problem by extracting the skin color range with inRange and counting how many pixels fall inside it. Below are the steps.
Convert the image to HSV
First, convert the image to HSV.
Mat mHsv = new Mat(rows, cols, CvType.CV_8UC3);
Imgproc.cvtColor(mRgba, mHsv, Imgproc.COLOR_RGB2HSV);
Get range of skin color
Skin color range may vary, but this one is working fine for me.
Mat output = new Mat();
Core.inRange(mHsv, new Scalar(0, 0.18*255, 0), new Scalar(25, 0.68*255, 255), output);
Extract this Skin Range channel
Now extract this single channel. After inRange, pixels that fall inside the skin range are 255 and everything else is 0.
Mat mExtracted = new Mat();
Core.extractChannel(output, mExtracted, 0);
Now you have the mExtracted matrix, in which skin-colored pixels are 255 and the rest are 0.
Count the skin pixels
Since the non-zero pixels are the skin-colored area, you can define a threshold that suits your need. In my case I want skin to cover more than half of the area, so I wrote my check accordingly.
int n = Core.countNonZero(mExtracted);
int check = (mExtracted.rows() * mExtracted.cols())/2;
if(n >= check && isFocused) {
//Take picture
}
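Putting the steps above together, here is a minimal sketch of the whole check (isSkinDominant is a hypothetical helper name; it assumes mRgba is the cropped 3-channel RGB region, so drop the alpha channel first if your frame is RGBA, and it skips the extractChannel step because inRange already returns a single-channel mask):
private boolean isSkinDominant(Mat mRgba, boolean isFocused) {
    // Convert to HSV (assumes a 3-channel RGB Mat)
    Mat mHsv = new Mat();
    Imgproc.cvtColor(mRgba, mHsv, Imgproc.COLOR_RGB2HSV);
    // Pixels inside the skin range become 255, everything else 0
    Mat skinMask = new Mat();
    Core.inRange(mHsv, new Scalar(0, 0.18 * 255, 0), new Scalar(25, 0.68 * 255, 255), skinMask);
    // Require skin to cover more than half of the cropped region
    int skinPixels = Core.countNonZero(skinMask);
    int half = (skinMask.rows() * skinMask.cols()) / 2;
    return isFocused && skinPixels >= half;
}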
I am using the tess-two library, and I wish to convert all colors other than black in my image to white (black will be the text), thus making it easier for tess-two to read the text. I have tried various methods, but they take too much time because they convert the image pixel by pixel. Is there a way to achieve this using a canvas, or anything else that gives results faster?
UPDATE
Another problem that came up with this algorithm is that a printer doesn't print with exactly the same black and white values as Android uses, so the algorithm converts the whole picture to white.
Here is the pixel-by-pixel method I am currently using:
binarizedImage = convertToMutable(cropped); // the bitmap is made mutable
int width = binarizedImage.getWidth();
int height = binarizedImage.getHeight();
int[] pixels = new int[width * height];
binarizedImage.getPixels(pixels, 0, width, 0, 0, width, height);
for (int i = 0; i < width; i++) {
    for (int c = 0; c < height; c++) {
        int index = c * width + i;
        int pixel = pixels[index];
        // anything that is neither pure black nor pure white becomes white
        if (!(pixel == Color.BLACK || pixel == Color.WHITE)) {
            pixels[index] = Color.WHITE;
        }
    }
}
// write the modified pixel buffer back once, after the loops
binarizedImage.setPixels(pixels, 0, width, 0, 0, width, height);
Per Rishabh's comment, use a color matrix. Since black is RGB(0,0,0) with full alpha, it is immune to multiplication: if you multiply every color channel by 255, everything exceeds the limit and gets clamped to white, except for black, which stays black.
ColorMatrix bc = new ColorMatrix(new float[] {
255, 255, 255, 0, 0,
255, 255, 255, 0, 0,
255, 255, 255, 0, 0,
0, 0, 0, 1, 0,
});
ColorMatrixColorFilter filter = new ColorMatrixColorFilter(bc);
paint.setColorFilter(filter);
You can use that paint to paint that bitmap in only-black-stays-black colormatrix filter glory.
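For example, a minimal, self-contained sketch of applying it (bitmap here stands for the source Bitmap you want to clean up):
// Draw the source bitmap through the color-matrix paint onto a fresh bitmap.
Bitmap result = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
Paint paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(bc));
canvas.drawBitmap(bitmap, 0, 0, paint);
// 'result' now has everything except black pushed to white.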
Note: This is a quick and awesome trick, but it will ONLY work for black. While it's perfect for your use case and turns that lengthy operation into something nearly instant, it does not actually conform to the title question of "a particular color": my algorithm works for any color you want, so long as it is black.
Though @Tatarize's answer was perfect, I was having trouble reading a printed image, as its black is not always jet black.
This algorithm, which I found on Stack Overflow, works great: it checks whether a particular pixel is closer to black or to white and converts the pixel to the closer color, thus providing binarization with a tolerance range (https://stackoverflow.com/a/16534187/3710223).
What I am doing now is keeping the unwanted areas in light colors while the text is in black. This algorithm gives a binarized image in approximately 20-35 seconds. Still not that fast, but it works.
private static boolean shouldBeBlack(int pixel) {
int alpha = Color.alpha(pixel);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
if(alpha == 0x00) //if this pixel is transparent let me use TRASNPARENT_IS_BLACK
return TRASNPARENT_IS_BLACK;
// distance from the white extreme
double distanceFromWhite = Math.sqrt(Math.pow(0xff - redValue, 2) + Math.pow(0xff - blueValue, 2) + Math.pow(0xff - greenValue, 2));
// distance from the black extreme
double distanceFromBlack = Math.sqrt(Math.pow(0x00 - redValue, 2) + Math.pow(0x00 - blueValue, 2) + Math.pow(0x00 - greenValue, 2));
// distance between the extremes
double distance = distanceFromBlack + distanceFromWhite;
return ((distanceFromWhite/distance)>SPACE_BREAKING_POINT);
}
If the return value is true then we convert the pixel to black else we convert it to white.
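A sketch of the loop around this check, along the same lines as the pixel loop in the question (bitmap is assumed to be the mutable ARGB_8888 Bitmap being binarized):
// Classify every pixel as black or white using shouldBeBlack(), then write back once.
int width = bitmap.getWidth();
int height = bitmap.getHeight();
int[] pixels = new int[width * height];
bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = shouldBeBlack(pixels[i]) ? Color.BLACK : Color.WHITE;
}
bitmap.setPixels(pixels, 0, width, 0, 0, width, height);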
I know there can be better/faster answers and more answers are welcomed :)
The same thing, but done in RenderScript, takes about 60-100 ms. You won't even notice the delay.
Bitmap blackbitmap = Bitmap.createBitmap(bitmap.getWidth(),bitmap.getHeight(),bitmap.getConfig());
RenderScript mRS = RenderScript.create(TouchEmbroidery.activity);
ScriptC_blackcheck script = new ScriptC_blackcheck(mRS);
Allocation allocationRaster0 = Allocation.createFromBitmap(
mRS,
bitmap,
Allocation.MipmapControl.MIPMAP_NONE,
Allocation.USAGE_SCRIPT
);
Allocation allocationRaster1 = Allocation.createTyped(mRS, allocationRaster0.getType());
script.forEach_root(allocationRaster0, allocationRaster1);
allocationRaster1.copyTo(blackbitmap);
This does the allocations and uses RenderScript to write the data out to blackbitmap.
#pragma version(1)
#pragma rs java_package_name(<YOUR PACKAGENAME GOES HERE>)
void root(const uchar4 *v_in, uchar4 *v_out) {
    uint32_t value = (v_in->r * v_in->r);
    value = value + (v_in->g * v_in->g);
    value = value + (v_in->b * v_in->b);
    if (value > 1200) {
        v_out->r = 255;
        v_out->g = 255;
        v_out->b = 255;
    }
    else {
        v_out->r = 0;
        v_out->g = 0;
        v_out->b = 0;
    }
    v_out->a = 0xFF;
}
Note that 1200 is just the threshold I used; it corresponds to all three components being just under 20 (3 × 20² = 1200), or equivalently something like (0, 0, ~34), since sqrt(1200) ≈ 34. You can set the 1200 limit up or down accordingly.
And the build gradle needs Renderscript:
renderscriptTargetApi 22
The last few releases of the build tools claim to have fixed a bunch of the RenderScript headaches, so it might be perfectly reasonable to do this kind of thing in performance-critical places like yours. Twenty seconds is too long to wait; 60 milliseconds is not.
I want to detect eye irises and their centers using the Hough circle algorithm.
I'm using this code:
private void houghCircle()
{
Bitmap obtainedBitmap = imagesList.getFirst();
/* convert bitmap to mat */
Mat mat = new Mat(obtainedBitmap.getWidth(),obtainedBitmap.getHeight(),
CvType.CV_8UC1);
Mat grayMat = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(),
CvType.CV_8UC1);
Utils.bitmapToMat(obtainedBitmap, mat);
/* convert to grayscale */
int colorChannels = (mat.channels() == 3) ? Imgproc.COLOR_BGR2GRAY : ((mat.channels() == 4) ? Imgproc.COLOR_BGRA2GRAY : 1);
Imgproc.cvtColor(mat, grayMat, colorChannels);
/* reduce the noise so we avoid false circle detection */
Imgproc.GaussianBlur(grayMat, grayMat, new Size(9, 9), 2, 2);
// accumulator value
double dp = 1.2d;
// minimum distance between the center coordinates of detected circles in pixels
double minDist = 100;
// min and max radii (set these values as you desire)
int minRadius = 0, maxRadius = 1000;
// param1 = gradient value used to handle edge detection
// param2 = Accumulator threshold value for the
// cv2.CV_HOUGH_GRADIENT method.
// The smaller the threshold is, the more circles will be
// detected (including false circles).
// The larger the threshold is, the more circles will
// potentially be returned.
double param1 = 70, param2 = 72;
/* create a Mat object to store the circles detected */
Mat circles = new Mat(obtainedBitmap.getWidth(), obtainedBitmap.getHeight(), CvType.CV_8UC1);
/* find the circle in the image */
Imgproc.HoughCircles(grayMat, circles, Imgproc.CV_HOUGH_GRADIENT, dp, minDist, param1, param2, minRadius, maxRadius);
/* get the number of circles detected */
int numberOfCircles = (circles.rows() == 0) ? 0 : circles.cols();
/* draw the circles found on the image */
for (int i=0; i<numberOfCircles; i++) {
/* get the circle details, circleCoordinates[0, 1, 2] = (x,y,r)
* (x,y) are the coordinates of the circle's center
*/
double[] circleCoordinates = circles.get(0, i);
int x = (int) circleCoordinates[0], y = (int) circleCoordinates[1];
Point center = new Point(x, y);
int radius = (int) circleCoordinates[2];
/* circle's outline */
Core.circle(mat, center, radius, new Scalar(0,
255, 0), 4);
/* circle's center outline */
Core.rectangle(mat, new Point(x - 5, y - 5),
new Point(x + 5, y + 5),
new Scalar(0, 128, 255), -1);
}
/* convert back to bitmap */
Utils.matToBitmap(mat, obtainedBitmap);
MediaStore.Images.Media.insertImage(getContentResolver(),obtainedBitmap, "testgray", "gray" );
}
But it doesn't detect the iris correctly in all images, especially if the iris is a dark color like brown. How can I fix this code to detect the irises and their centers correctly?
EDIT: Here are some sample images (which I got from the web) that show the performance of the algorithm (please ignore the landmarks, which are represented by the red squares):
In these images the algorithm doesn't detect all irises:
This image shows how the algorithm couldn't detect irises at all:
EDIT 2: Here is code that uses Canny edge detection, but it causes the app to crash:
private void houghCircle()
{
Mat grayMat = new Mat();
Mat cannyEdges = new Mat();
Mat circles = new Mat();
Bitmap obtainedBitmap = imagesList.getFirst();
/* convert bitmap to mat */
Mat originalBitmap = new Mat(obtainedBitmap.getWidth(),obtainedBitmap.getHeight(),
CvType.CV_8UC1);
//Converting the image to grayscale
Imgproc.cvtColor(originalBitmap,grayMat,Imgproc.COLOR_BGR2GRAY);
Imgproc.Canny(grayMat, cannyEdges,10, 100);
Imgproc.HoughCircles(cannyEdges, circles,
Imgproc.CV_HOUGH_GRADIENT,1, cannyEdges.rows() / 15); //now circles is filled with detected circles.
//, grayMat.rows() / 8);
Mat houghCircles = new Mat();
houghCircles.create(cannyEdges.rows(),cannyEdges.cols()
,CvType.CV_8UC1);
//Drawing lines on the image
for(int i = 0 ; i < circles.cols() ; i++)
{
double[] parameters = circles.get(0,i);
double x, y;
int r;
x = parameters[0];
y = parameters[1];
r = (int)parameters[2];
Point center = new Point(x, y);
//Drawing circles on an image
Core.circle(houghCircles,center,r,
new Scalar(255,0,0),1);
}
//Converting Mat back to Bitmap
Utils.matToBitmap(houghCircles, obtainedBitmap);
MediaStore.Images.Media.insertImage(getContentResolver(),obtainedBitmap, "testgray", "gray" );
}
This is the error I get in the log
FATAL EXCEPTION: Thread-28685
CvException [org.opencv.core.CvException: cv::Exception: /hdd2/buildbot/slaves/slave_ardbeg1/50-SDK/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int)
]
at org.opencv.imgproc.Imgproc.cvtColor_1(Native Method)
at org.opencv.imgproc.Imgproc.cvtColor(Imgproc.java:4598)
Which is caused by this line: Imgproc.cvtColor(originalBitmap,grayMat,Imgproc.COLOR_BGR2GRAY);
Can anyone please tell me how this error can be solved? Perhaps adding Canny edge detection will improve the results.
Hough circles work better on well-defined circles. They are not good with things like irises.
After some thresholding, morphological operations, or Canny edge detection, feature detection methods like MSER work much better for iris detection.
Here is a similar question with a solution if you are looking for some code.
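A rough sketch of the MSER route with the 2.4-era Java bindings (grayMat stands for the preprocessed grayscale eye image; filtering the detected regions by size and circularity is left out):
// Detect MSER regions on the (preprocessed) grayscale eye image.
FeatureDetector mser = FeatureDetector.create(FeatureDetector.MSER);
MatOfKeyPoint keyPoints = new MatOfKeyPoint();
mser.detect(grayMat, keyPoints);
// Each keypoint has a center and a size; round blobs of plausible size are iris candidates.
for (KeyPoint kp : keyPoints.toArray()) {
    Core.circle(grayMat, kp.pt, (int) (kp.size / 2), new Scalar(255), 2);
}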
Since you want to detect the iris using the Hough transform (there are other options), you had better study the Canny edge detector and its parameters.
cv::HoughCircles takes the Canny hysteresis high threshold in param1. Investigating Canny on its own gives you a feel for a good threshold range.
Maybe instead of a Gaussian blur you can apply better denoising (non-local means with, say, h=32 and window sizes 5 and 15), and also try to harmonize the image contrast, e.g. using contrast-limited adaptive histogram equalization (cv::CLAHE).
Harmonization is meant to make sure all eyes (in highlight and in shadow) map to a similar intensity range.
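A sketch of that preprocessing in the Java bindings (assuming a build that exposes Photo.fastNlMeansDenoising and Imgproc.createCLAHE; the h value and window sizes are the ones suggested above, while the CLAHE clip limit and Hough parameters are example values you will need to tune):
// Non-local means denoising instead of Gaussian blur (h = 32, windows 5 and 15).
Mat denoised = new Mat();
Photo.fastNlMeansDenoising(grayMat, denoised, 32, 5, 15);
// Contrast-limited adaptive histogram equalization to harmonize intensities.
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
clahe.apply(denoised, denoised);
// Look at the Canny output alone to pick a sensible hysteresis threshold...
Mat edges = new Mat();
Imgproc.Canny(denoised, edges, 35, 70);
// ...then reuse that high threshold as param1 of HoughCircles.
Mat circles = new Mat();
Imgproc.HoughCircles(denoised, circles, Imgproc.CV_HOUGH_GRADIENT, 1.2, 100, 70, 40, 15, 60);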
I wanted to know whether those images are the ones you actually processed, or whether you took a cell phone snapshot of your screen to upload them here, because the irises are bigger than the maximum radius you set in your code. Therefore I don't understand how you could find any iris at all: the irises in the first image have a radius of over 20, so you shouldn't be able to detect them.
You should set the radii to the radius range you expect your irises to have.
I am new to OpenCV and am trying to count the number of objects in an image. I have done this before using the MATLAB Image Processing Toolbox and have adapted the same approach in OpenCV (Android).
The first step was to convert the image to grayscale, then threshold it, and then count the number of blobs. In MATLAB there is a command, bwlabel, which gives the number of blobs. I couldn't find anything like that in OpenCV (again, I am a noob in OpenCV as well as Android).
Here is my code,
//JPG to Bitmap to MAT
Bitmap i = BitmapFactory.decodeFile(imgPath + "mms.jpg");
Bitmap bmpImg = i.copy(Bitmap.Config.ARGB_8888, false);
Mat srcMat = new Mat ( bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(bmpImg, srcMat);
//convert to gray scale and save image
Mat gray = new Mat(srcMat.size(), CvType.CV_8UC1);
Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGB2GRAY,4);
//write bitmap
Boolean bool = Highgui.imwrite(imgPath + "gray.jpg", gray);
//thresholding
Mat threshed = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.adaptiveThreshold(gray, threshed, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 75, 5);//15, 8 were original tests. Casey was 75,10
Core.bitwise_not(threshed, threshed);
Utils.matToBitmap(threshed, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "threshed.jpg", threshed);
Toast.makeText(this, "Thresholded image saved!", Toast.LENGTH_SHORT).show();
In the next step, I tried to fill the holes and letters using dilation followed by erosion, but the blobs get attached to each other, which ultimately gives a wrong count. There is a tradeoff between filling the holes and keeping the blobs separated when tuning the parameters for dilation and erosion.
Here is the code,
//morphological operations
//dilation
Mat dilated = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.dilate(threshed, dilated, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size (16, 16)));
Utils.matToBitmap(dilated, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "dilated.jpg", dilated);
Toast.makeText(this, "Dilated image saved!", Toast.LENGTH_SHORT).show();
//erosion
Mat eroded = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.erode(dilated, eroded, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size(15, 15)));
Utils.matToBitmap(eroded, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "eroded.jpg", eroded);
Toast.makeText(this, "Eroded image saved!", Toast.LENGTH_SHORT).show();
Because sometimes my M&Ms might be just next to each other! ;)
I also tried to use Hough Circles but the result is very unreliable (tested with coin images as well as real coins)
Here is the code,
//hough circles
Mat circles = new Mat();
// parameters
int iCannyUpperThreshold = 100;
int iMinRadius = 20;
int iMaxRadius = 400;
int iAccumulator = 100;
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT,
1.0, gray.rows() / 8, iCannyUpperThreshold, iAccumulator,
iMinRadius, iMaxRadius);
// draw
if (circles.cols() > 0)
{
Toast.makeText(this, "Coins : " +circles.cols() , Toast.LENGTH_LONG).show();
}
else
{
Toast.makeText(this, "No coins found", Toast.LENGTH_LONG).show();
}
The problem with this approach is that the algorithm is limited to perfect circles only (AFAIK), so it doesn't work well when I try to scan and count M&Ms or coins lying on my desk (because the angle of the device changes). With this approach I sometimes get fewer coins detected and sometimes more (I don't get why more??).
On scanning this image, the app sometimes shows 19 coins counted and sometimes 38... I know there are other features which may be detected as circles, but I totally don't get why 38.
So my questions...
Is there a better way to fill holes without joining adjacent blobs?
How do I count the number of objects accurately? I don't want to limit my app to counting only circles with HoughCircles approach.
FYI : OpenCV-2.4.9-android-sdk. Kindly keep in mind that I am a newbie in OpenCV and Android too.
Any help is much appreciated.
Thanks & Cheers!
Jainam
To proceed, we take the threshold image you have already generated as input and modify it further. The code below is in C++, but I guess you can easily port it to Android.
Now, instead of dilation or blurring, you can try a flood fill from the background, which effectively fills the holes inside the blobs.
Finally, applying the contour detection algorithm gives the object count.
The code for the above is
Mat dst = imread($path to the threshold image); // image should be single channel black and white image
imshow("dst",dst);
cv::Mat mask = cv::Mat::zeros(dst.rows + 2, dst.cols + 2, CV_8U);
// A image with size greater than the present object is created
cv::floodFill(dst, mask, cv::Point(0,0), 255, 0, cv::Scalar(), cv::Scalar(), 4 + (255 << 8) + cv::FLOODFILL_MASK_ONLY);
erode(mask,mask,Mat());
// Now to remove the outer boundary
rectangle(mask,Rect(0,0,mask.cols,mask.rows), Scalar(255,255,255),2,8,0);
imshow("Mask",mask);
Mat copy;
mask.copyTo(copy);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( copy, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f>center( contours.size() );
vector<float>Distance( contours.size() );
vector<float>radius( contours.size() );
Mat drawing = cv::Mat::zeros(mask.rows, mask.cols, CV_8U);
int num_object = 0;
for( int i = 0; i < contours.size(); i++ ){
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    // To get rid of the smaller objects and the outer rectangle created
    // because of the additional mask image, we enforce a lower limit on area
    // to remove noise and an upper limit to remove the outer border.
    if (contourArea(contours_poly[i]) > (mask.rows*mask.cols/10000) && contourArea(contours_poly[i]) < mask.rows*mask.cols*0.9){
        boundRect[i] = boundingRect( Mat(contours_poly[i]) );
        minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
        circle(drawing, center[i], (int)radius[i], Scalar(255,255,255), 2, 8, 0);
        rectangle(drawing, boundRect[i], Scalar(255,255,255), 2, 8, 0);
        num_object++;
    }
}
cout <<"No. of object detected =" <<num_object<<endl;
imshow("drawing",drawing);
waitKey(2);
char key = (char) waitKey(20);
if(key == 32){
// You can save your images here using a space
}
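Since the question is about OpenCV4Android, the flood-fill step maps to the Java bindings roughly like this (a sketch; threshed is the thresholded Mat from the question):
// Flood-fill the background from (0,0) into a mask; holes inside blobs stay 0 in the mask.
Mat mask = Mat.zeros(threshed.rows() + 2, threshed.cols() + 2, CvType.CV_8U);
Imgproc.floodFill(threshed, mask, new Point(0, 0), new Scalar(255),
        new Rect(), new Scalar(0), new Scalar(0),
        4 | (255 << 8) | Imgproc.FLOODFILL_MASK_ONLY);
Imgproc.erode(mask, mask, new Mat());
// Remove the outer border introduced by the 2-pixel-larger mask.
Core.rectangle(mask, new Point(0, 0), new Point(mask.cols(), mask.rows()), new Scalar(255), 2);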
I hope this helps you in solving your problem
Just try this:
Blur the source image.
Apply an inverted binary threshold to the grayscale image.
Find the contours; note that you should use CV_RETR_EXTERNAL as the contour retrieval mode.
You can take the number of contours as your object count.
Code:
Mat tmp,thr;
Mat src=imread("img.jpg",1);
blur(src,src,Size(3,3));
cvtColor(src,tmp,CV_BGR2GRAY);
threshold(tmp,thr,220,255,THRESH_BINARY_INV);
imshow("thr",thr);
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
findContours( thr, contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image
for( int i = 0; i< contours.size(); i=hierarchy[i][0] ) // iterate through each contour.
{
Rect r= boundingRect(contours[i]);
rectangle(src,r, Scalar(0,0,255),2,8,0);
}
cout<<"Numeber of contour = "<<contours.size()<<endl;
imshow("src",src);
waitKey();
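For an Android version, the same approach in the Java bindings looks roughly like this (a sketch assuming srcMat is the RGBA input Mat and the 2.4-style API used elsewhere in this thread):
// Blur, grayscale, inverted binary threshold, then count external contours.
Mat gray = new Mat();
Imgproc.blur(srcMat, srcMat, new Size(3, 3));
Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGBA2GRAY);
Mat thr = new Mat();
Imgproc.threshold(gray, thr, 220, 255, Imgproc.THRESH_BINARY_INV);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(thr, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint contour : contours) {
    Rect r = Imgproc.boundingRect(contour);
    Core.rectangle(srcMat, r.tl(), r.br(), new Scalar(255, 0, 0, 255), 2); // red outline (RGBA order)
}
int objectCount = contours.size(); // number of external contours = object count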
I am trying to detect coins (circles) using OpenCV4Android.
So far I have tried two approaches
1) Regular method:
// convert image to grayscale
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
// apply Gaussian Blur
Imgproc.GaussianBlur(mGray, mGray, sSize5, 2, 2);
iMinRadius = 20;
iMaxRadius = 400;
iAccumulator = 300;
iCannyUpperThreshold = 100;
//apply houghCircles
Imgproc.HoughCircles(mGray, mIntermediateMat, Imgproc.CV_HOUGH_GRADIENT, 2.0, mGray.rows() / 8,
iCannyUpperThreshold, iAccumulator, iMinRadius, iMaxRadius);
if (mIntermediateMat.cols() > 0)
for (int x = 0; x < Math.min(mIntermediateMat.cols(), 10); x++) {
double vCircle[] = mIntermediateMat.get(0,x);
if (vCircle == null)
break;
pt.x = Math.round(vCircle[0]);
pt.y = Math.round(vCircle[1]);
radius = (int)Math.round(vCircle[2]);
// draw the found circle
Core.circle(mRgba, pt, radius, colorRed, iLineThickness);
}
2) Sobel and then Hough circles:
// apply Gaussian Blur
Imgproc.GaussianBlur(mRgba, mRgba, sSize3, 2, 2,
Imgproc.BORDER_DEFAULT);
// / Convert it to grayscale
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
// / Gradient X
Imgproc.Sobel(mGray, grad_x, CvType.CV_16S, 1, 0, 3, scale, delta,
Imgproc.BORDER_DEFAULT);
Core.convertScaleAbs(grad_x, abs_grad_x);
// / Gradient Y
Imgproc.Sobel(mGray, grad_y, CvType.CV_16S, 0, 1, 3, scale, delta,
Imgproc.BORDER_DEFAULT);
Core.convertScaleAbs(grad_y, abs_grad_y);
// / Total Gradient (approximate)
Core.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad);
iCannyUpperThreshold = 100;
Imgproc.HoughCircles(grad, mIntermediateMat,
Imgproc.CV_HOUGH_GRADIENT, 2.0, grad.rows() / 8,
iCannyUpperThreshold, iAccumulator, iMinRadius, iMaxRadius);
if (mIntermediateMat.cols() > 0)
for (int x = 0; x < Math.min(mIntermediateMat.cols(), 10); x++) {
double vCircle[] = mIntermediateMat.get(0, x);
if (vCircle == null)
break;
pt.x = Math.round(vCircle[0]);
pt.y = Math.round(vCircle[1]);
radius = (int) Math.round(vCircle[2]);
// draw the found circle
Core.circle(mRgba, pt, radius, colorRed, iLineThickness);
}
Method one gives a fair result for coin detection, and method two gives a better result.
Of the two, the second method is slower to process, but its results are better.
Both of these methods work when the camera frame is captured using JavaCameraView or NativeCameraView from the OpenCV library.
If I use the same procedure on an image captured with the Android native image-capture intent, which returns a Bitmap, I am unable to get any results at all, i.e. no circles are detected.
With method one I do sometimes get a circle detected when using a Bitmap captured via the Android camera intent.
I also tried changing the captured bitmap as suggested in this post, but there is still no circle detection.
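For completeness, the conversion from the intent Bitmap into the Mats used above looks roughly like this (a sketch; photo stands for the Bitmap returned by the capture intent, copied to ARGB_8888 so that Utils.bitmapToMat yields an RGBA Mat):
// Feed the Bitmap from the capture intent into the same pipeline as the camera frames.
Bitmap captured = photo.copy(Bitmap.Config.ARGB_8888, true);
Mat mRgba = new Mat();
Mat mGray = new Mat();
Utils.bitmapToMat(captured, mRgba);                       // CV_8UC4, RGBA order
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);  // same grayscale conversion as above
Imgproc.GaussianBlur(mGray, mGray, new Size(5, 5), 2, 2);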
Can anybody tell me what modifications I have to make?
I would also like to know which algorithm gives better results for coin (circle) detection with less processing.
I have played with various values of the HoughCircles parameters and also tried feeding Canny edge output into HoughCircles, but it is still not good enough.