I followed this tutorial to apply the GrabCut algorithm in OpenCV4Android, but my output image is not the same as the one shown in the tutorial; in fact, the image I get is completely black.
The output I get is this image: Output image. I used the same input image as the tutorial, but my output was black.
Here is code for GrabCut in Android OpenCV that should solve your problem.
public void grabcutAlgo(Bitmap bit) {
    Bitmap b = bit.copy(Bitmap.Config.ARGB_8888, true);

    // Convert the bitmap to a 3-channel Mat (grabCut needs CV_8UC3)
    Mat img = new Mat();
    Utils.bitmapToMat(b, img);
    Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2RGB);

    // Rectangle just inside the image border, used to initialise grabCut
    int r = img.rows();
    int c = img.cols();
    Point p1 = new Point(c / 100, r / 100);
    Point p2 = new Point(c - c / 100, r - r / 100);
    Rect rect = new Rect(p1, p2);

    Mat background = new Mat(img.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
    Mat firstMask = new Mat();
    Mat bgModel = new Mat();
    Mat fgModel = new Mat();
    Mat mask;
    Mat source = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_PR_FGD));
    Mat dst = new Mat();

    // Run grabCut and keep only the probable-foreground pixels
    Imgproc.grabCut(img, firstMask, rect, bgModel, fgModel, 5, Imgproc.GC_INIT_WITH_RECT);
    Core.compare(firstMask, source, firstMask, Core.CMP_EQ);

    // Copy the foreground onto a white canvas
    Mat foreground = new Mat(img.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
    img.copyTo(foreground, firstMask);

    // Draw the initialisation rectangle on the source image (debug only)
    Scalar color = new Scalar(255, 0, 0, 255);
    Imgproc.rectangle(img, p1, p2, color);

    // Make the background canvas the same size as the image
    Mat tmp = new Mat();
    Imgproc.resize(background, tmp, img.size());
    background = tmp;

    // Build a single-channel mask of the foreground (everything that is not pure white)
    mask = new Mat(foreground.size(), CvType.CV_8UC1, new Scalar(255, 255, 255));
    Imgproc.cvtColor(foreground, mask, Imgproc.COLOR_BGR2GRAY);
    Imgproc.threshold(mask, mask, 254, 255, Imgproc.THRESH_BINARY_INV);

    // Blend the foreground over the background using the mask
    Mat vals = new Mat(1, 1, CvType.CV_8UC3, new Scalar(0.0));
    background.copyTo(dst);
    background.setTo(vals, mask);
    Core.add(background, foreground, dst, mask);

    // Convert back to a bitmap and display it
    Bitmap grabCutImage = Bitmap.createBitmap(dst.cols(), dst.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(dst, grabCutImage);
    dst.copyTo(sampleImage);   // sampleImage is a Mat field on the enclosing class
    imageView.setImageBitmap(grabCutImage);

    // Release temporaries
    firstMask.release();
    source.release();
    bgModel.release();
    fgModel.release();
}
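A hedged aside on the compare step above: Imgproc.grabCut labels every pixel as GC_BGD, GC_FGD, GC_PR_BGD or GC_PR_FGD. With GC_INIT_WITH_RECT only the "probable" labels appear inside the rectangle, so comparing against GC_PR_FGD is enough; if the call is ever seeded with user strokes (GC_INIT_WITH_MASK), definite-foreground pixels would be dropped. A minimal sketch of a combined mask, assuming labels is a clone of firstMask taken right after grabCut returns and before the compare (the names labels, combinedMask, etc. are only for this sketch):
    Mat fgdVal   = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_FGD));
    Mat prFgdVal = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_PR_FGD));
    Mat fgd = new Mat();
    Mat prFgd = new Mat();
    Mat combinedMask = new Mat();
    Core.compare(labels, fgdVal, fgd, Core.CMP_EQ);       // definite foreground
    Core.compare(labels, prFgdVal, prFgd, Core.CMP_EQ);   // probable foreground
    Core.bitwise_or(fgd, prFgd, combinedMask);            // use combinedMask as the copyTo mask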
Related
I am new to image processing and I can't get fillPoly() working. Also, drawContours() is leaving some gaps while drawing contours. I am working on Android, and most of the references available online are for Python, MATLAB, or C++.
Size sSize5 = new Size(5, 5);
Mat mIntermediateMat = new Mat();
Mat mHierarchy = new Mat();   // hierarchy output for findContours

Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.contours_in_contours_red);
Mat rgba = new Mat();
Utils.bitmapToMat(bmp, rgba);   // bitmapToMat produces an RGBA (CV_8UC4) Mat

// Grayscale, blur and edge-detect
Mat greyInnerWindow = new Mat();
Mat mRgba = new Mat();
Imgproc.cvtColor(rgba, greyInnerWindow, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(greyInnerWindow, greyInnerWindow, sSize5, 2, 2);
Imgproc.Canny(greyInnerWindow, mIntermediateMat, 5, 35);
Imgproc.cvtColor(mIntermediateMat, mRgba, Imgproc.COLOR_GRAY2BGRA, 4);

// Find the external contours and fill them
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(mIntermediateMat, contours, mHierarchy, Imgproc.RETR_EXTERNAL,
        Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++) {
    Imgproc.drawContours(rgba, contours, i, new Scalar(0, 255, 0), -1);   // -1 = filled
    Imgproc.fillPoly(rgba, contours, new Scalar(0, 255, 0));
}

Bitmap resultBitmap = Bitmap.createBitmap(rgba.cols(), rgba.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgba, resultBitmap);
imageView.setImageBitmap(resultBitmap);
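One likely cause of the gaps (hedged, since the input image is not shown): Canny produces thin and frequently broken edges, so the contours found on them are open fragments that neither drawContours(..., -1) nor fillPoly can fill solidly. A small sketch of closing the edge map before findContours; the kernel size here is a guess to tune per image:
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
    Imgproc.morphologyEx(mIntermediateMat, mIntermediateMat, Imgproc.MORPH_CLOSE, kernel);
    // then run findContours on the closed edge map as above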
I am using OpenCV with Android and performing GrabCut on an image for foreground extraction. I am getting the output as expected, but the output image has a blue tone instead of the colors of the original image. What is the solution to this problem?
I am attaching the code that I am currently using.
private Mat performGrabCut(Mat image)
{
    Mat firstMask = new Mat();
    Mat foregroundModel = new Mat();
    Mat backgroundModel = new Mat();
    Mat mask;
    Mat source = new Mat(1, 1, CvType.CV_8U, new Scalar(3.0));   // 3 == Imgproc.GC_PR_FGD
    Mat destination = new Mat();
    Rect rect = new Rect(topLeft, bottomRight);

    // 0 == Imgproc.GC_INIT_WITH_RECT
    Imgproc.grabCut(image, firstMask, rect, backgroundModel, foregroundModel, 1, 0);
    Core.compare(firstMask, source, firstMask, Core.CMP_EQ);

    // Copy the probable-foreground pixels onto a white canvas
    Mat foreground = new Mat(image.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
    image.copyTo(foreground, firstMask);
    return foreground;
}
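A hedged guess at the blue tone, since the conversion and display code is not shown: Utils.bitmapToMat produces an RGBA Mat, while much OpenCV sample code assumes BGR order, so the red and blue channels can end up swapped somewhere between grabCut and the final bitmap. A minimal sketch of keeping the channel order explicit; performGrabCut is the method from the question, and the display step is an assumption:
    // grabCut needs a 3-channel image; bitmapToMat gives RGBA
    Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2RGB);
    Mat foreground = performGrabCut(image);
    // If the displayed result still has a blue cast, the R and B channels were
    // swapped somewhere; swapping them back before matToBitmap usually fixes it
    Imgproc.cvtColor(foreground, foreground, Imgproc.COLOR_BGR2RGB);
    Bitmap out = Bitmap.createBitmap(foreground.cols(), foreground.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(foreground, out);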
In OpenCV I want to compare two histograms. First I convert the Bitmap to a Mat:
Bitmap puzzleBmp = BitmapFactory.decodeFile(photoPath, options);
mat = new Mat(puzzleBmp.getHeight(), puzzleBmp.getWidth(), CvType.CV_8U, new Scalar(4));
Utils.bitmapToMat(puzzleBmp, mat);
Next I want to create histogram of this image:
Mat mRgba = new Mat();
Imgproc.cvtColor(mat, mRgba, Imgproc.COLOR_RGBA2RGB);
Imgproc.GaussianBlur(mRgba, mRgba, new Size(5, 5), 0, Imgproc.BORDER_DEFAULT);
Mat mHSV = new Mat();
Imgproc.cvtColor(mRgba, mHSV, Imgproc.COLOR_RGB2HSV_FULL);
Mat hist = new Mat();
int h_bins = 30;
int s_bins = 32;
MatOfInt mHistSize = new MatOfInt (h_bins, s_bins);
MatOfFloat mRanges = new MatOfFloat(0, 179, 0, 255);
MatOfInt mChannels = new MatOfInt(0, 1);
List<Mat> lHSV = Arrays.asList(mHSV);
Mat mask2 = new Mat();
mask2 = Mat.zeros( mRgba.rows() + 2, mRgba.cols() + 2, CvType.CV_8UC1 );
Range rowRange = new Range( 1, mask2.rows() - 1 );
Range colRange = new Range( 1, mask2.cols() - 1 );
Mat mask = new Mat();
mask = mask2.submat(rowRange, colRange);
boolean accumulate = false;
Imgproc.calcHist(lHSV, mChannels, mask, hist, mHistSize, mRanges, accumulate);
Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX, -1, new Mat());
From this part I get hist. But when I apply these transformations to two different images, Imgproc.compareHist() always returns the same values, as if they were identical pictures.
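One likely reason every comparison comes out identical (hedged, since the comparison code itself is not shown): the mask passed to calcHist above is a submat of Mat.zeros and is never set to anything non-zero, so calcHist counts no pixels at all and every histogram is all zeros. A minimal sketch of computing over all pixels and comparing, assuming hist1 and hist2 were built this way for the two images (those two names are only for the sketch):
    // Pass an empty Mat as the mask so every pixel is counted
    Imgproc.calcHist(lHSV, mChannels, new Mat(), hist, mHistSize, mRanges, accumulate);
    Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX, -1, new Mat());
    // Compare two histograms built this way (HISTCMP_CORREL in OpenCV 3+,
    // CV_COMP_CORREL in the older 2.4 bindings)
    double similarity = Imgproc.compareHist(hist1, hist2, Imgproc.HISTCMP_CORREL);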
I have a problem with this method of the OpenCV library...
// Convert to 3 channels and run grabCut (0 == GC_INIT_WITH_RECT)
Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2RGB);
Mat prob_fgd = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_PR_FGD));
try {
    Imgproc.grabCut(image, firstMask, rect, bgModel, fgModel, 3, 0);
} catch (Exception w) {
    System.out.println(w.getMessage());
}

// Keep only the probable-foreground pixels and copy them onto a white canvas
Core.compare(firstMask, prob_fgd, firstMask, Core.CMP_EQ);
foreground = new Mat(image.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
image.copyTo(foreground, firstMask);
Imgproc.resize(background, background, image.size());

// Build a single-channel mask of the foreground
mask = new Mat(image.size(), CvType.CV_8UC1, new Scalar(100, 255, 100));
foreground = overlay_colored_roi(foreground, new Scalar(100, 255, 100));
Imgproc.cvtColor(foreground, mask, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(mask, mask, 254, 255, Imgproc.THRESH_BINARY_INV);
mask.copyTo(ref);

// Blend the foreground over the background
vals = new Mat(1, 1, CvType.CV_8UC3, new Scalar(0.0));
background.copyTo(dst);
background.setTo(vals, mask);
The code works up to this point. It stops at the line below: the log says that the inputs of Core.add must have the same size, but background.size(), foreground.size(), dst.size() and mask.size() are all equal.
Core.add(background, foreground, dst, mask);
They must also have the same number of channels. Since the initialization of background is not shown in the code, I am assuming that is the problem. Secondly, try the normal addition function, i.e. without the mask, and check the output. If the problem still persists, post the full code. Hope this helps.
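A hedged sketch of the check this answer points at: Core.add with a mask needs src1, src2 and dst to match in both size and channel count, and the mask to be single-channel 8-bit. If background came straight from Utils.bitmapToMat it is CV_8UC4 while foreground is CV_8UC3, which would trigger exactly this assertion even though the sizes match:
    // Make sure both operands have the same number of channels before adding
    if (background.channels() == 4) {
        Imgproc.cvtColor(background, background, Imgproc.COLOR_RGBA2RGB);
    }
    Core.add(background, foreground, dst, mask);   // mask must be CV_8UC1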
I'm trying to get the red rectangle region below "C", as in the image below:
And below is my source code using OpenCV4Android:
public void threshold() {
    Mat rgbMat = new Mat();
    Mat intermediate = new Mat();
    Utils.bitmapToMat(bmp, rgbMat);

    // Grayscale, blur, threshold and edge-detect
    Imgproc.cvtColor(rgbMat, intermediate, Imgproc.COLOR_BGR2GRAY);
    Imgproc.GaussianBlur(intermediate, intermediate, new Size(3, 3), 0);
    Imgproc.threshold(intermediate, intermediate, 190, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    Imgproc.Canny(intermediate, intermediate, 60, 140);

    // Find the external contours and draw them
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat mHierarchy = new Mat();
    Imgproc.findContours(intermediate, contours, mHierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    Scalar CONTOUR_COLOR = new Scalar(255, 0, 0, 255);
    Log.e(TAG, "Contours count: " + contours.size());
    Imgproc.drawContours(intermediate, contours, -1, CONTOUR_COLOR);

    Bitmap edgeBmp = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), Config.ARGB_8888);
    Utils.matToBitmap(intermediate, edgeBmp);
    imageView.setImageBitmap(edgeBmp);
}
but the result is not what I expected, as the image below shows:
As the log shows, Contours count: 372, and the rectangle region is discontinuous. How can I get the contour of the red rectangle region and filter out the other, useless regions? I have looked at some other questions, but the problem is still not solved. Could you do me a favor?
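A hedged sketch of one way to keep only the rectangle, using the contours and intermediate Mat from the code above: filter the contours by area and by whether they approximate to four corners. The area threshold is a guess and needs tuning for the actual image:
    for (MatOfPoint contour : contours) {
        double area = Imgproc.contourArea(contour);
        if (area < 1000) continue;   // discard small fragments; threshold is a guess
        MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
        MatOfPoint2f approx = new MatOfPoint2f();
        double eps = 0.02 * Imgproc.arcLength(curve, true);
        Imgproc.approxPolyDP(curve, approx, eps, true);
        if (approx.total() == 4) {   // roughly rectangular
            Rect box = Imgproc.boundingRect(contour);
            Imgproc.rectangle(intermediate, box.tl(), box.br(), new Scalar(255), 2);
        }
    }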
[update] Changed the code following the suggestion from Morotspaj:
public void thresholdNew() {
    Mat rgbMat = new Mat();
    Mat grayMat = new Mat();
    Utils.bitmapToMat(bmp, rgbMat);
    Imgproc.cvtColor(rgbMat, grayMat, Imgproc.COLOR_BGR2GRAY);

    // Separate the red channel and compare it to the gray image
    Vector<Mat> bgr_planes = new Vector<Mat>();
    Core.split(rgbMat, bgr_planes);
    Mat redMat = bgr_planes.get(2);
    Mat redness = new Mat();
    Core.subtract(redMat, grayMat, redness);   // note: 8-bit subtraction saturates negative values to 0

    // Difference of Gaussians to find the sharp red region
    Mat intermediateMat1 = new Mat();
    Mat intermediateMat2 = new Mat();
    Imgproc.GaussianBlur(redness, intermediateMat1, new Size(15, 15), 0);
    Imgproc.GaussianBlur(redness, intermediateMat2, new Size(55, 55), 0);
    Mat red_mask = new Mat();
    Core.subtract(intermediateMat1, intermediateMat2, red_mask);
    Imgproc.threshold(red_mask, red_mask, 90, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

    // Paint the masked region red and display the result
    Mat masked_image = rgbMat.clone();
    masked_image.setTo(new Scalar(255, 0, 0), red_mask);
    Bitmap edgeBmp = Bitmap.createBitmap(bmp.getWidth(), bmp.getHeight(), Config.ARGB_8888);
    Utils.matToBitmap(masked_image, edgeBmp);
    imageView.setImageBitmap(edgeBmp);
}
But the result is not what I expected and differs from Morotspaj's.
Is there any error in the above code?
[update] Sorry, I have been very busy these days; I will try again later. If I cannot implement it in Java, I will use Morotspaj's code through JNI. I will update soon.
I made a filter to mask out the red rectangle region, just for you ;)
Mat rgbMat = imread("red_rectangle.jpg", -1);
Mat grayMat;
cvtColor(rgbMat, grayMat, COLOR_BGR2GRAY);
// Separate the red channel and compare it to the gray image
Mat channels[3];
split(rgbMat, channels);
Mat redness = Mat_<float>(channels[2]) - Mat_<float>(grayMat);
// Find the sharp red region
Mat red_blur1;
Mat red_blur2;
GaussianBlur(redness, red_blur1, Size(15,15), 0);
GaussianBlur(redness, red_blur2, Size(55,55), 0);
Mat red_mask = (red_blur1-red_blur2) > 2;
// Store result
Mat masked_image = rgbMat.clone();
masked_image.setTo(Scalar(0,0,255), red_mask);
imwrite("red_mask.png", red_mask);
imwrite("masked_image.png", masked_image);
The GaussianBlur method calls can be replaced by boxFilter if you need better performance, and the constants here and there can of course be tweaked. Hope this helps!
EDIT: Taking the difference of two differently blurred images is known as Difference of Gaussians (DoG). It finds changes in a certain scale depending on the size of the kernels. The smaller kernel is used to smooth away small details and noise. The bigger kernel destroys the details we are interested in but not the regions with very smooth changes that we don't want. By taking the difference between them we end up with only the details in the scale we are interested in! A mask can then be created easily by thresholding with the > operator.
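For the Java side (see the [update] above), a hedged sketch of the same filter using the OpenCV Java bindings. Two things the earlier Java port does differently, and which may explain the different result: Core.subtract on CV_8U Mats saturates negative values to 0, whereas this C++ version subtracts float Mats; and a Mat that came from Utils.bitmapToMat is in RGBA/RGB order, so the red plane is channel 0, not channel 2 as it is for imread's BGR. Here rgbMat is assumed to be a 3-channel RGB Mat (bitmapToMat followed by COLOR_RGBA2RGB):
    Mat grayMat = new Mat();
    Imgproc.cvtColor(rgbMat, grayMat, Imgproc.COLOR_RGB2GRAY);

    List<Mat> planes = new ArrayList<Mat>();
    Core.split(rgbMat, planes);

    // Work in float so the subtraction can go negative without saturating
    Mat red32 = new Mat(), gray32 = new Mat(), redness = new Mat();
    planes.get(0).convertTo(red32, CvType.CV_32F);   // channel 0 = red for an RGB Mat
    grayMat.convertTo(gray32, CvType.CV_32F);
    Core.subtract(red32, gray32, redness);

    // Difference of Gaussians, then threshold (the "> 2" of the C++ version)
    Mat blur1 = new Mat(), blur2 = new Mat(), dog = new Mat(), redMask = new Mat();
    Imgproc.GaussianBlur(redness, blur1, new Size(15, 15), 0);
    Imgproc.GaussianBlur(redness, blur2, new Size(55, 55), 0);
    Core.subtract(blur1, blur2, dog);
    Imgproc.threshold(dog, redMask, 2, 255, Imgproc.THRESH_BINARY);
    redMask.convertTo(redMask, CvType.CV_8UC1);   // setTo needs an 8-bit mask

    Mat maskedImage = rgbMat.clone();
    maskedImage.setTo(new Scalar(255, 0, 0), redMask);   // red in RGB order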