Image segmentation by background color - OpenCV Android

I'm trying to segment business cards and split them by background color to treat them as different regions of interest.
For example a card of this sort:
should be split into two images, as there are two background colors. Are there any suggestions on how to tackle this? I've tried doing some contour analysis, which didn't turn out too successful.
Other example cards:
This card should give 3 segmentations, as there are three portions even though it uses only 2 colors (though 2 segmentations would also be okay).
The above card should give just one segmentation, as it has just one background color.
I'm not trying to handle gradient backgrounds just yet.

It depends on how the other cards look, but if the images are all of such good quality, it should not be too hard.
In the example you posted, you could just collect the colors of the border pixels (leftmost column, rightmost column, first row, last row) and treat what you find as possible background colors. Perhaps check whether there are enough pixels with roughly the same color. You need some kind of distance measure; one easy solution is the Euclidean distance in RGB color space (a sketch follows below).
A more generic solution would be to find clusters in the color histogram of the whole image and treat every color (again with tolerance) that covers more than x% of the overall pixel count as a background color. But what you define as background depends on what you want to achieve and how your images look.
If you need further suggestions, you could post more images and tag which parts of the images you want to be detected as a background color and which parts not.
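A minimal Java sketch of the border-sampling idea (the class and method names are mine, and it assumes a 3-channel BGR Mat):

import org.opencv.core.Mat;
import java.util.ArrayList;
import java.util.List;

public class BorderColors {

    // Collect the colors of all border pixels (first/last row, first/last column).
    static List<double[]> collectBorderColors(Mat img) {
        List<double[]> colors = new ArrayList<>();
        for (int x = 0; x < img.cols(); x++) {
            colors.add(img.get(0, x));              // first row
            colors.add(img.get(img.rows() - 1, x)); // last row
        }
        for (int y = 1; y < img.rows() - 1; y++) {
            colors.add(img.get(y, 0));              // leftmost column
            colors.add(img.get(y, img.cols() - 1)); // rightmost column
        }
        return colors;
    }

    // Euclidean distance in RGB space as the color similarity measure.
    static double rgbDistance(double[] a, double[] b) {
        double d0 = a[0] - b[0], d1 = a[1] - b[1], d2 = a[2] - b[2];
        return Math.sqrt(d0 * d0 + d1 * d1 + d2 * d2);
    }
}

Colors with enough similar border pixels (under some distance threshold) become background candidates.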
-
Edit: Your two new images also show the same pattern. Background colors occupy a big part of the image, there is no noise and there are no color gradients. So a simple approach could look like the following:
Calculate the histogram of the image: see http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist and http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
Find the most prominent colors in the histogram. If you do not want to iterate over the Mat yourself, you can use minMaxLoc (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#minmaxloc) as shown in the calcHist documentation (see above). If the color takes up a large enough percentage of the pixel count, save it and set the corresponding bin in the histogram to zero. Repeat until your percentage is no longer reached. You will then have a list of the most prominent colors: your background colors. (A sketch of these steps follows below.)
Threshold the image for every background color you have. See: http://docs.opencv.org/doc/tutorials/imgproc/threshold/threshold.html
On the resulting thresholded images, find the corresponding region for every background color. See: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
If you have examples that do not work with this approach, just post them.
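A rough Java sketch of those steps. For brevity it replaces calcHist/minMaxLoc with a coarse 8x8x8 color count and per-pixel get() calls (slow but clear); minFraction and tol are made-up parameters you would have to tune:

import org.opencv.core.*;
import java.util.ArrayList;
import java.util.List;

public class BackgroundSegmenter {

    // Returns one binary mask per prominent color (a color owning at least
    // minFraction of all pixels). Feed each mask to findContours afterwards.
    static List<Mat> backgroundMasks(Mat bgr, double minFraction, double tol) {
        int bins = 8, step = 256 / bins;        // coarse 8x8x8 color histogram
        int[] hist = new int[bins * bins * bins];
        for (int y = 0; y < bgr.rows(); y++) {
            for (int x = 0; x < bgr.cols(); x++) {
                double[] p = bgr.get(y, x);
                int b = (int) p[0] / step, g = (int) p[1] / step, r = (int) p[2] / step;
                hist[(b * bins + g) * bins + r]++;
            }
        }
        int total = bgr.rows() * bgr.cols();
        List<Mat> masks = new ArrayList<>();
        for (int i = 0; i < hist.length; i++) {
            if (hist[i] < minFraction * total) continue;   // not prominent enough
            int b = i / (bins * bins), g = (i / bins) % bins, r = i % bins;
            // Threshold the image around the bin's center color, with tolerance.
            double cb = b * step + step / 2.0, cg = g * step + step / 2.0, cr = r * step + step / 2.0;
            Mat mask = new Mat();
            Core.inRange(bgr, new Scalar(cb - tol, cg - tol, cr - tol),
                    new Scalar(cb + tol, cg + tol, cr + tol), mask);
            masks.add(mask);
        }
        return masks;
    }
}

Each returned mask corresponds to one background color; running findContours on it then gives the region for that color.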

As an approach that also finds backgrounds with color gradients, one could use Canny. The following code (yes, not Android, I know, but the result should be the same if you port it) works fine with the three example images you posted so far. If you have other images that do not work with this, please let me know.
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

Mat src;
Mat src_gray;
int canny_thresh = 100;
int max_canny_thresh = 255;
int size_per_mill = 120;
int max_size_per_mill = 1000;
RNG rng(12345);

bool cmp_contour_area_less(const vector<Point>& lhs, const vector<Point>& rhs)
{
    return contourArea(lhs) < contourArea(rhs);
}

void Segment()
{
    Mat canny_output;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Canny(src_gray, canny_output, canny_thresh, canny_thresh*2, 3);

    // Draw a rectangle around the Canny image to also get regions touching the edges.
    rectangle(canny_output, Point(1, 1), Point(src.cols-2, src.rows-2), Scalar(255));
    namedWindow("Canny", CV_WINDOW_AUTOSIZE);
    imshow("Canny", canny_output);

    // Find the contours.
    findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
    if (contours.empty())
        return;

    // Remove the largest contour, because it always represents the whole image,
    // then sort the rest in descending order of area.
    sort(contours.begin(), contours.end(), cmp_contour_area_less);
    contours.resize(contours.size()-1);
    reverse(contours.begin(), contours.end());

    int image_pixels(src.cols * src.rows);
    cout << "image_pixels: " << image_pixels << "\n";

    // Filter the contours, keeping just the large enough ones.
    vector<vector<Point> > background_contours;
    for (size_t i(0); i < contours.size(); ++i)
    {
        double area(contourArea(contours[i]));
        double min_size((size_per_mill / 1000.0) * image_pixels);
        if (area >= min_size)
        {
            cout << "Background contour " << i << ") area: " << area << "\n";
            background_contours.push_back(contours[i]);
        }
    }

    // Draw the large contours.
    Mat drawing = Mat::zeros(canny_output.size(), CV_8UC3);
    for (size_t i(0); i < background_contours.size(); ++i)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(drawing, background_contours, i, color, 1, 8, hierarchy, 0, Point());
    }
    namedWindow("Contours", CV_WINDOW_AUTOSIZE);
    imshow("Contours", drawing);
}

void size_callback(int, void*)
{
    Segment();
}

void thresh_callback(int, void*)
{
    Segment();
}

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        cout << "Please provide an image file.\n";
        return -1;
    }
    src = imread(argv[1]);
    if (!src.data) // check the load result before using src
    {
        cout << "Unable to load " << argv[1] << ".\n";
        return -2;
    }
    cvtColor(src, src_gray, CV_BGR2GRAY);
    blur(src_gray, src_gray, Size(3,3));
    namedWindow("Source", CV_WINDOW_AUTOSIZE);
    imshow("Source", src);
    createTrackbar("Canny thresh:", "Source", &canny_thresh, max_canny_thresh, thresh_callback);
    createTrackbar("Size thresh:", "Source", &size_per_mill, max_size_per_mill, thresh_callback);
    Segment();
    waitKey(0);
}

Related

Detection of four corners of a document under different circumstances

I have tried two methodologies, as follows:
conversion of image to Mat
apply Gaussian blur
then Canny edge detection
find contours
The problems with this method are:
too many contours are detected
mostly open contours
it doesn't detect what I want to detect
Then I changed my approach and tried adaptive thresholding after a Gaussian/median blur. It is much better, and I am able to detect the corners in 50% of cases.
The current problem I am facing is that the page detection requires a contrasting, plain background without any reflections. I think that's too idealistic for real-world use.
This is where I would like some help. Even a direction towards the solution is highly appreciated, especially in Java. Thanks in anticipation.
It works absolutely fine with a significantly contrasting background like this:
Detected 4 corners
This picture gives troubles because the background isn't exactly the most contrasting
Initial largest contour found
Update: median blur did not help much, so I traced the cause and found that the page boundary was detected in bits and pieces rather than as a single contour, so the biggest contour detected was only a part of the page boundary. I therefore performed some morphological operations to close the relatively small gaps, and the resulting largest contour is definitely improved, but it's still not optimal. Any ideas how I can close the big gaps?
morphed original picture
largest contour found in the morphed image
P.S. Morphing the image in ideal scenarios has led to detection of false contour boundaries, so any condition that can be checked before morphing an image would also be a bonus. Thank you.
If you use methods like these:
public static RotatedRect getBestRectByArea(List<RotatedRect> boundingRects) {
    RotatedRect bestRect = null;
    if (boundingRects.size() >= 1) {
        RotatedRect boundingRect;
        Point[] vertices = new Point[4];
        Rect rect;
        double maxArea;
        int ixMaxArea = 0;
        // find the best rect by area
        boundingRect = boundingRects.get(ixMaxArea);
        boundingRect.points(vertices);
        rect = Imgproc.boundingRect(new MatOfPoint(vertices));
        maxArea = rect.area();
        for (int ix = 1; ix < boundingRects.size(); ix++) {
            boundingRect = boundingRects.get(ix);
            boundingRect.points(vertices);
            rect = Imgproc.boundingRect(new MatOfPoint(vertices));
            if (rect.area() > maxArea) {
                maxArea = rect.area();
                ixMaxArea = ix;
            }
        }
        bestRect = boundingRects.get(ixMaxArea);
    }
    return bestRect;
}

private static Bitmap findROI(Bitmap sourceBitmap) {
    Bitmap roiBitmap = Bitmap.createBitmap(sourceBitmap.getWidth(), sourceBitmap.getHeight(), Bitmap.Config.ARGB_8888);
    // Mat takes (rows, cols), i.e. (height, width); bitmapToMat reallocates anyway.
    Mat sourceMat = new Mat(sourceBitmap.getHeight(), sourceBitmap.getWidth(), CvType.CV_8UC3);
    Utils.bitmapToMat(sourceBitmap, sourceMat);
    final Mat mat = new Mat();
    sourceMat.copyTo(mat);
    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGB2GRAY);
    Imgproc.threshold(mat, mat, 146, 250, Imgproc.THRESH_BINARY);
    // find contours
    List<MatOfPoint> contours = new ArrayList<>();
    List<RotatedRect> boundingRects = new ArrayList<>();
    Imgproc.findContours(mat, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // find appropriate bounding rectangles
    for (MatOfPoint contour : contours) {
        MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
        RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
        boundingRects.add(boundingRect);
    }
    RotatedRect documentRect = getBestRectByArea(boundingRects);
    if (documentRect != null) {
        Point[] rect_points = new Point[4];
        documentRect.points(rect_points);
        for (int i = 0; i < 4; ++i) {
            Imgproc.line(sourceMat, rect_points[i], rect_points[(i + 1) % 4], ROI_COLOR, ROI_WIDTH);
        }
    }
    Utils.matToBitmap(sourceMat, roiBitmap);
    return roiBitmap;
}
you can achieve results like this for your source images:
or that:
If you adjust threshold values and apply filters you can achieve even better results.
You can pick a single contour by using one or both of the following:
Use boundingRect() and contourArea() to evaluate the squareness of each contour (see the sketch after the code below). boundingRect() returns axis-aligned rectangles; to handle arbitrary rotation better, use minAreaRect(), which returns optimally rotated ones.
Use Cv.ApproxPoly iteratively to reduce the contour to a four-sided shape:
var approxIter = 1;
while (true)
{
    var approxCurve = Cv.ApproxPoly(largestContour, 0, null, ApproxPolyMethod.DP, approxIter, true);
    var approxCurvePointsTmp = new[] { approxCurve.Select(p => new CvPoint2D32f((int)p.Value.X, (int)p.Value.Y)).ToArray() }.ToArray();
    if (approxCurvePointsTmp[0].Length == 4)
    {
        corners = approxCurvePointsTmp[0];
        break;
    }
    else if (approxCurvePointsTmp[0].Length < 4)
        throw new InvalidOperationException("Failed to decimate corner points");
    approxIter++;
}
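For the first option, a possible squareness score in Java (my own helper, not part of the answer's code) is the ratio of the contour area to the area of its minimum-area rectangle; values near 1.0 indicate rectangular contours:

import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.RotatedRect;
import org.opencv.imgproc.Imgproc;

public class Squareness {

    // Ratio of contour area to the area of its minimum-area (rotated) rectangle.
    // 1.0 means the contour fills its rotated bounding box perfectly.
    static double rectangularity(MatOfPoint contour) {
        MatOfPoint2f pts = new MatOfPoint2f(contour.toArray());
        RotatedRect rot = Imgproc.minAreaRect(pts);
        double rectArea = rot.size.width * rot.size.height;
        if (rectArea <= 0) return 0;
        return Imgproc.contourArea(contour) / rectArea;
    }
}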
However, neither of these will help if contour detection gives you two separate contours due to noise or low contrast.
I think it would be possible to use the Hough line transform to help detect cases where a line has been split into two contours.
If so, the search could be repeated for all combinations of joined contours to see if a bigger or more rectangular match is found.
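A hedged Java sketch of that idea: run HoughLinesP on the edge map and draw the detected lines back onto it, so a border split into pieces becomes connected before running findContours again. The parameter values are guesses, and the result layout assumes the OpenCV 3.x Java bindings:

import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class GapBridger {

    // Draw probabilistic Hough lines back onto the edge map so that a page
    // border split into pieces becomes one connected contour.
    static Mat bridgeGaps(Mat edges) {
        Mat lines = new Mat();
        // threshold=60, minLineLength=40, maxLineGap=20 are guesses to tune.
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 60, 40, 20);
        Mat closed = edges.clone();
        for (int i = 0; i < lines.rows(); i++) {
            double[] l = lines.get(i, 0); // {x1, y1, x2, y2}
            Imgproc.line(closed, new Point(l[0], l[1]), new Point(l[2], l[3]),
                    new Scalar(255), 2);
        }
        return closed;
    }
}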
Stop relying on edge detection, the worst methodology in the universe, and switch to some form of image segmentation.
The paper is white and the background is contrasting; that is the information you should use.
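As one possible reading of that advice, here is a minimal Java sketch that segments a bright page from a darker background with Otsu's threshold and keeps the largest contour as the page candidate (all names and parameters are mine):

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class PaperSegmenter {

    // Segment the bright paper from a darker background with Otsu's threshold,
    // then keep the largest external contour as the page candidate.
    static MatOfPoint findPage(Mat bgr) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
        Mat bin = new Mat();
        Imgproc.threshold(gray, bin, 0, 255,
                Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(bin, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        MatOfPoint best = null;
        double bestArea = 0;
        for (MatOfPoint c : contours) {
            double a = Imgproc.contourArea(c);
            if (a > bestArea) { bestArea = a; best = c; }
        }
        return best; // null if nothing was found
    }
}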

OpenCV speed traffic sign detection

I have a problem detecting speed traffic signs with OpenCV 2.4 for Android.
I do the following:
"capture frame -> convert it to HSV -> extract red areas -> detect signs with ellipse detection"
So far, ellipse detection works perfectly as long as the picture is of good quality.
But as you can see in the pictures below, the red extraction does not work well, in my opinion because of the poor quality of the picture frames.
Converting original image to HSV:
Imgproc.cvtColor(this.source, this.source, Imgproc.COLOR_RGB2HSV, 3);
Extracting red colors:
Core.inRange(this.source, new Scalar(this.h,this.s,this.v), new Scalar(230,180,180), this.source);
So my question is: is there another way of detecting a traffic sign like this, or of extracting the red areas out of it, which by the way can be very faint, as in the last picture?
This is the original image:
This is the image converted to HSV; as you can see, the red areas look the same color as the nearby trees. That's how I'm supposed to tell it's red, but I can't.
Converted to HSV:
This is with the red colors extracted. If the colors were correct, I should get an almost perfect circle/ellipse around the sign, but it is incomplete due to false colors.
Result after extraction:
Ellipse method:
private void findEllipses(Mat input) {
    Mat thresholdOutput = new Mat();
    int thresh = 150;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    MatOfInt4 hierarchy = new MatOfInt4();
    Imgproc.threshold(source, thresholdOutput, thresh, 255, Imgproc.THRESH_BINARY);
    //Imgproc.Canny(source, thresholdOutput, 50, 180);
    Imgproc.findContours(source, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    RotatedRect[] minEllipse = new RotatedRect[contours.size()];
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint2f temp = new MatOfPoint2f(contours.get(i).toArray());
        if (temp.size().height > minEllipseSize && temp.size().height < maxEllipseSize) {
            double a = Imgproc.fitEllipse(temp).size.height;
            double b = Imgproc.fitEllipse(temp).size.width;
            if (Math.abs(a - b) < 10)
                minEllipse[i] = Imgproc.fitEllipse(temp);
        }
    }
    detectedObjects.clear();
    for (int i = 0; i < contours.size(); i++) {
        Scalar color = new Scalar(180, 255, 180);
        if (minEllipse[i] != null) {
            detectedObjects.add(new DetectedObject(minEllipse[i].center));
            DetectedObject detectedObj = new DetectedObject(minEllipse[i].center);
            Core.ellipse(source, minEllipse[i], color, 2, 8);
        }
    }
}
Problematic sign:
You can find reviews of traffic sign detection methods here and here.
You'll see that there are 2 ways you can achieve this:
Color-based (like what you're doing now)
Shape-based
In my experience, I found that shape-based methods work pretty well, because the color may change a lot under different lighting conditions, camera quality, etc.
Since you need to detect speed traffic signs, which I assume are always circular, you can use an ellipse detector to find all circular objects in your image, and then apply some validation to determine if it's a traffic sign or not.
Why ellipse detection?
Well, since you're looking for perspective-distorted circles, you are in fact looking for ellipses. Real-time ellipse detection is an interesting (although limited) research topic. I'll point you to two papers with C++ source code available (which you can use in your app through native JNI calls):
L. Libuda, I. Grothues, K.-F. Kraiss, "Ellipse detection in digital image data using geometric features", in: J. Braz, A. Ranchordas, H. Araújo, J. Jorge (Eds.), Advances in Computer Graphics and Computer Vision, volume 4 of Communications in Computer and Information Science, Springer Berlin Heidelberg, 2007, pp. 229-239. link, code
M. Fornaciari, A. Prati, R. Cucchiara, "A fast and effective ellipse detector for embedded vision applications", Pattern Recognition, 2014. link, code
UPDATE
I tried method 2 without any preprocessing. You can see that at least the sign with the red border is detected very well:
Referring to your text:
This is converted to HSV, as you can see red areas look the same color
as nearby trees. Thats how I'm suppose to know it's red but I can't.
I want to show you my result of basically doing what you did (these simple operations should be easily transferable to Android OpenCV):
// convert to HSV
cv::Mat hsv;
cv::cvtColor(input, hsv, CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);

// in OpenCV, hue values are divided by 2 to fit the 8-bit range
float red1 = 25/2.0f;
// red has one part at the beginning and one at the end of the hue range
// (I assume 0° to 25° and 335° to 360°)
float red2 = (360-25)/2.0f;

// compute both thresholds
cv::Mat thres1 = channels[0] < red1;
cv::Mat thres2 = channels[0] > red2;

// choose some minimum saturation
cv::Mat saturationThres = channels[1] > 50;

// combine the results
cv::Mat redMask = (thres1 | thres2) & saturationThres;

// display result
cv::imshow("red", redMask);
cv::waitKey(0);
These are my results:
Regarding your result: please bear in mind that findContours alters the input image, so if you saved the image AFTER findContours, maybe you extracted the ellipse but just don't see it in the image anymore.
private void findEllipses(Mat input) {
    Mat thresholdOutput = new Mat();
    int thresh = 150;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    MatOfInt4 hierarchy = new MatOfInt4();
    Imgproc.threshold(source, thresholdOutput, thresh, 255, Imgproc.THRESH_BINARY);
    //Imgproc.Canny(source, thresholdOutput, 50, 180);
    Imgproc.findContours(source, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // source = thresholdOutput;
    RotatedRect[] minEllipse = new RotatedRect[contours.size()];
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint2f temp = new MatOfPoint2f(contours.get(i).toArray());
        if (temp.size().height > minEllipseSize && temp.size().height < maxEllipseSize) {
            double a = Imgproc.fitEllipse(temp).size.height;
            double b = Imgproc.fitEllipse(temp).size.width;
            if (Math.abs(a - b) < 10)
                minEllipse[i] = Imgproc.fitEllipse(temp);
        }
    }
    detectedObjects.clear();
    for (int i = 0; i < contours.size(); i++) {
        Scalar color = new Scalar(180, 255, 180);
        if (minEllipse[i] != null) {
            detectedObjects.add(new DetectedObject(minEllipse[i].center));
            DetectedObject detectedObj = new DetectedObject(minEllipse[i].center);
            Core.ellipse(source, minEllipse[i], color, 2, 8);
        }
    }
}
Have you tried using OpenCV ORB? It works really well.
I created a Haar cascade for a traffic sign (a roundabout in my case) and used OpenCV ORB to match features and remove any false positives.
For image recognition I used Google's TensorFlow, and the results were spectacular.
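For illustration, a rough Java sketch of such an ORB verification step, assuming the OpenCV 3.x Java bindings; the match-distance cutoff and minGood are made-up values to tune:

import org.opencv.core.*;
import org.opencv.features2d.BFMatcher;
import org.opencv.features2d.ORB;

public class OrbVerifier {

    // Match ORB features between a template sign and a detected candidate;
    // too few good matches suggests a false positive.
    static boolean looksLikeSign(Mat template, Mat candidate, int minGood) {
        ORB orb = ORB.create();
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat d1 = new Mat(), d2 = new Mat();
        orb.detectAndCompute(template, new Mat(), kp1, d1);
        orb.detectAndCompute(candidate, new Mat(), kp2, d2);
        if (d1.empty() || d2.empty()) return false;
        BFMatcher matcher = BFMatcher.create(Core.NORM_HAMMING, true); // cross-check on
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(d1, d2, matches);
        int good = 0;
        for (DMatch m : matches.toArray())
            if (m.distance < 40) good++; // Hamming distance cutoff, tune it
        return good >= minGood;
    }
}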

counting objects & a better way of filling holes

I am new to OpenCV and am trying to count the number of objects in an image. I have done this before using the MATLAB Image Processing Toolbox and adapted the same approach in OpenCV (Android) as well.
The first step was to convert the image to grayscale, then threshold it, then count the number of blobs. In MATLAB there is a command, bwlabel, which gives the number of blobs. I couldn't find anything like that in OpenCV (again, I am a noob in OpenCV as well as Android).
Here is my code,
// JPG to Bitmap to Mat
Bitmap i = BitmapFactory.decodeFile(imgPath + "mms.jpg");
Bitmap bmpImg = i.copy(Bitmap.Config.ARGB_8888, false);
Mat srcMat = new Mat(bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(bmpImg, srcMat);

// convert to grayscale and save the image
Mat gray = new Mat(srcMat.size(), CvType.CV_8UC1);
Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGB2GRAY, 4);
// write bitmap
Boolean bool = Highgui.imwrite(imgPath + "gray.jpg", gray);

// thresholding; note that Mat takes (rows, cols), i.e. height first
Mat threshed = new Mat(bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC1);
Imgproc.adaptiveThreshold(gray, threshed, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 75, 5); // 15, 8 were original tests. Casey was 75,10
Core.bitwise_not(threshed, threshed);
Utils.matToBitmap(threshed, bmpImg);
// write bitmap
bool = Highgui.imwrite(imgPath + "threshed.jpg", threshed);
Toast.makeText(this, "Thresholded image saved!", Toast.LENGTH_SHORT).show();
In the next step, I tried to fill the holes and letters using dilation followed by erosion, but the blobs get attached to each other, which ultimately gives a wrong count. There is a tradeoff between filling holes and keeping the blobs separate when tuning the parameters for dilation and erosion.
Here is the code,
// morphological operations
// dilation (again, Mat takes height first)
Mat dilated = new Mat(bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC1);
Imgproc.dilate(threshed, dilated, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size(16, 16)));
Utils.matToBitmap(dilated, bmpImg);
// write bitmap
bool = Highgui.imwrite(imgPath + "dilated.jpg", dilated);
Toast.makeText(this, "Dilated image saved!", Toast.LENGTH_SHORT).show();

// erosion
Mat eroded = new Mat(bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC1);
Imgproc.erode(dilated, eroded, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size(15, 15)));
Utils.matToBitmap(eroded, bmpImg);
// write bitmap
bool = Highgui.imwrite(imgPath + "eroded.jpg", eroded);
Toast.makeText(this, "Eroded image saved!", Toast.LENGTH_SHORT).show();
Because sometimes my M&Ms might be just next to each other! ;)
I also tried to use Hough circles, but the result is very unreliable (tested with coin images as well as real coins).
Here is the code,
// Hough circles
Mat circles = new Mat();

// parameters
int iCannyUpperThreshold = 100;
int iMinRadius = 20;
int iMaxRadius = 400;
int iAccumulator = 100;

Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1.0, gray.rows() / 8, iCannyUpperThreshold, iAccumulator,
        iMinRadius, iMaxRadius);

// draw
if (circles.cols() > 0) {
    Toast.makeText(this, "Coins : " + circles.cols(), Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(this, "No coins found", Toast.LENGTH_LONG).show();
}
The problem with this approach is that the algorithm is limited to perfect circles only (AFAIK), so it doesn't work well when I try to scan and count M&Ms or coins lying on my desk (because the angle of the device changes). With this approach I sometimes get fewer coins detected and sometimes more (I don't get why more).
On scanning this image, the app sometimes counts 19 coins and sometimes 38 coins... I know there are other features which may be detected as circles, but I totally don't get why 38.
So my questions...
Is there a better way to fill holes without joining adjacent blobs?
How do I count the number of objects accurately? I don't want to limit my app to counting only circles with the HoughCircles approach.
FYI : OpenCV-2.4.9-android-sdk. Kindly keep in mind that I am a newbie in OpenCV and Android too.
Any help is much appreciated.
Thanks & Cheers!
Jainam
So, to proceed, we take the threshold image you generated as input and modify it further. The present code is in C++, but I guess you can easily convert it to the Android platform.
Now, instead of dilation or blurring, you can try flood fill,
which results in
Finally, applying the contour detection algorithm, we get
The code for the above is
Mat dst = imread($path to the threshold image); // image should be a single-channel black and white image
imshow("dst", dst);

// A mask image with size greater than the present image is created
cv::Mat mask = cv::Mat::zeros(dst.rows + 2, dst.cols + 2, CV_8U);
cv::floodFill(dst, mask, cv::Point(0,0), 255, 0, cv::Scalar(), cv::Scalar(), 4 + (255 << 8) + cv::FLOODFILL_MASK_ONLY);
erode(mask, mask, Mat());

// Now to remove the outer boundary
rectangle(mask, Rect(0, 0, mask.cols, mask.rows), Scalar(255, 255, 255), 2, 8, 0);
imshow("Mask", mask);

Mat copy;
mask.copyTo(copy);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(copy, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

vector<vector<Point> > contours_poly(contours.size());
vector<Rect> boundRect(contours.size());
vector<Point2f> center(contours.size());
vector<float> Distance(contours.size());
vector<float> radius(contours.size());

Mat drawing = cv::Mat::zeros(mask.rows, mask.cols, CV_8U);
int num_object = 0;
for (int i = 0; i < contours.size(); i++) {
    approxPolyDP(Mat(contours[i]), contours_poly[i], 3, true);
    // To get rid of the smaller objects and the outer rectangle created
    // because of the additional mask image, we enforce a lower limit on area
    // to remove noise and an upper limit to remove the outer border.
    if (contourArea(contours_poly[i]) > (mask.rows * mask.cols / 10000)
        && contourArea(contours_poly[i]) < mask.rows * mask.cols * 0.9) {
        boundRect[i] = boundingRect(Mat(contours_poly[i]));
        minEnclosingCircle((Mat)contours_poly[i], center[i], radius[i]);
        circle(drawing, center[i], (int)radius[i], Scalar(255, 255, 255), 2, 8, 0);
        rectangle(drawing, boundRect[i], Scalar(255, 255, 255), 2, 8, 0);
        num_object++;
    }
}
cout << "No. of objects detected = " << num_object << endl;

imshow("drawing", drawing);
waitKey(2);
char key = (char)waitKey(20);
if (key == 32) {
    // You can save your images here using the space key
}
I hope this helps you solve your problem.
Just check this out:
Blur the source.
Apply an inverted binary threshold on the grayscale image.
Find contours; note that you should use CV_RETR_EXTERNAL as the contour retrieval mode.
You can take the size of the contours vector as your object count.
Code:
Mat tmp, thr;
Mat src = imread("img.jpg", 1);
blur(src, src, Size(3,3));
cvtColor(src, tmp, CV_BGR2GRAY);
threshold(tmp, thr, 220, 255, THRESH_BINARY_INV);
imshow("thr", thr);

vector<vector<Point> > contours; // Vector for storing contours
vector<Vec4i> hierarchy;
findContours(thr, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

// Iterate through the top-level contours; hierarchy[i][0] is the index of the
// next contour at the same level (-1 after the last one, which ends the loop).
for (int i = 0; i < contours.size(); i = hierarchy[i][0]) {
    Rect r = boundingRect(contours[i]);
    rectangle(src, r, Scalar(0, 0, 255), 2, 8, 0);
}
cout << "Number of contours = " << contours.size() << endl;
imshow("src", src);
waitKey();

How to plot the RGB intensity graph from an image in Android?

I want to plot the RGB intensity graph of an image, like sinusoidal waves for the three colors. Can anyone please suggest an idea of how to do this?
There are (in general) 256 levels (8 bits) for each color component. If the image has an alpha channel, the overall image will be 32 bits per pixel; for an RGB-only image it will be 24.
I will describe this algorithmically to get the image histogram; it will be up to you to write the drawing code.
// Arrays for the histogram data
int histoR[256]; // counts of how many pixels had each red VALUE
int histoG[256]; // counts of how many pixels had each green VALUE
int histoB[256]; // counts of how many pixels had each blue VALUE
int histoA[256]; // counts of how many pixels had each alpha VALUE

// Zeroize all histogram arrays
for (num = 0 through 255) {
    histoR[num] = 0;
    histoG[num] = 0;
    histoB[num] = 0;
    histoA[num] = 0;
}

// Move through all image pixels, counting up each time a pixel color value is used
for (x = 0 through image width) {
    for (y = 0 through image height) {
        histoR[image.pixel(x, y).red] += 1;
        histoG[image.pixel(x, y).green] += 1;
        histoB[image.pixel(x, y).blue] += 1;
        histoA[image.pixel(x, y).alpha] += 1;
    }
}
You now have the histogram data; it is up to you to plot it. PLEASE REMEMBER THAT THE ABOVE IS ONLY AN ALGORITHMIC DESCRIPTION, NOT ACTUAL CODE.
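As a concrete (hedged) Android translation of the pseudocode above, using Bitmap.getPixels and the Color helpers; the class and method names are mine:

import android.graphics.Bitmap;
import android.graphics.Color;

public class Histogram {

    // One 256-bin count per channel, filled by scanning every pixel.
    // Returns {histoR, histoG, histoB, histoA}.
    static int[][] rgbaHistogram(Bitmap image) {
        int[] histoR = new int[256], histoG = new int[256],
              histoB = new int[256], histoA = new int[256];
        int w = image.getWidth(), h = image.getHeight();
        int[] row = new int[w];
        for (int y = 0; y < h; y++) {
            image.getPixels(row, 0, w, 0, y, w, 1); // read one row at a time
            for (int x = 0; x < w; x++) {
                int p = row[x];
                histoR[Color.red(p)]++;
                histoG[Color.green(p)]++;
                histoB[Color.blue(p)]++;
                histoA[Color.alpha(p)]++;
            }
        }
        return new int[][] { histoR, histoG, histoB, histoA };
    }
}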

Find Skew Angle and Rotate the Image in C++ OpenCV

I need help; please see the following question. I'm asking this question a second time because last time I didn't get any answer. Consider the following two links:
1. Finding Skew Angle
2. Rotating Image as per skew angle
I want to do the same, and the code given at these links works well. But the problem is this: see the text displayed in the images. This code works fine for well-aligned text only (as displayed in the images at the given links), but it fails when the text is scattered. Please tell me how to do it for images that contain text in scattered form. Thanks in advance! [The main challenge here is TO FIND THE CORRECT SKEW ANGLE.]
I am very frustrated because of this problem... please help!
My code is as follows:
// Find the skew angle.
double compute_skew(const char* filename)
{
    // Load in grayscale.
    cv::Mat img = cv::imread(filename, 0);

    // Binarize.
    cv::threshold(img, img, 225, 255, cv::THRESH_BINARY);

    // Invert colors.
    cv::bitwise_not(img, img);

    cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 3));
    cv::erode(img, img, element);

    // Collect the positions of all non-zero (text) pixels.
    std::vector<cv::Point> points;
    cv::Mat_<uchar>::iterator it = img.begin<uchar>();
    cv::Mat_<uchar>::iterator end = img.end<uchar>();
    for (; it != end; ++it)
        if (*it)
            points.push_back(it.pos());

    cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
    double angle = box.angle;
    if (angle < -45.)
        angle += 90.;

    cv::Point2f vertices[4];
    box.points(vertices);
    for (int i = 0; i < 4; ++i)
        cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(255, 0, 0), 1, CV_AA);

    std::cout << "File **************Angle***************** " << filename << ": " << angle << std::endl;
    return angle;
}

// Rotate the image according to the skew angle.
void deskew(const char* filename, double angle)
{
    cv::Mat img = cv::imread(filename, 0);
    Point2f src_center(img.cols/2.0F, img.rows/2.0F);
    Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
    Mat rotated;
    warpAffine(img, rotated, rot_mat, img.size(), cv::INTER_CUBIC);
    imwrite(filename, rotated);
}
That approach is doomed to fail if you have scattered text, because it relies on finding long text lines. Also, the term "skew" is a bit unfortunate here, because in the posted examples it is pure rotation.
What I would do is compute line projections over a range of orientations (cf. the Radon transform or a Hough line search). When the projection has the right orientation, there will be a lot of zeros in it, caused by the interline gaps. The orientation that yields the most zero values in the projection is the most likely rotation angle of the text.
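A minimal Java sketch of that projection idea (the names, angle range, and step are mine; it assumes a binarized image where text pixels are non-zero): rotate over candidate angles, count fully empty rows in each horizontal projection, and keep the angle with the most empty rows.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class SkewEstimator {

    // Try angles in [-maxAngle, maxAngle] degrees; the angle that produces the
    // most empty rows (interline gaps) in the horizontal projection wins.
    static double estimateSkew(Mat binary, double maxAngle, double stepDeg) {
        double bestAngle = 0;
        int bestEmpty = -1;
        Point center = new Point(binary.cols() / 2.0, binary.rows() / 2.0);
        for (double a = -maxAngle; a <= maxAngle; a += stepDeg) {
            Mat rot = Imgproc.getRotationMatrix2D(center, a, 1.0);
            Mat rotated = new Mat();
            Imgproc.warpAffine(binary, rotated, rot, binary.size());
            int empty = 0;
            for (int y = 0; y < rotated.rows(); y++) {
                // A row sum of zero means no text pixels on this line.
                if (Core.sumElems(rotated.row(y)).val[0] == 0) empty++;
            }
            if (empty > bestEmpty) { bestEmpty = empty; bestAngle = a; }
        }
        return bestAngle;
    }
}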
