OpenCV4Android detect shape and color (HSV) - Android

I'm a beginner in OpenCV4Android and I would like to get some help if possible.
I'm trying to detect colored triangles, squares or circles using my Android phone camera, but I don't know where to start.
I have been reading the O'Reilly Learning OpenCV book and I have gained some knowledge of OpenCV.
Here is what I want to make:
1- Get the tracking color (just the HSV color) of the object by touching the screen
- I have already done this by using the color blob example from the OpenCV4Android samples
2- Find shapes like triangles, squares or circles in the camera view, based on the color chosen before.
I have only found examples of finding shapes within a static image. What I would like to do is find them in real time using the camera.
Any help would be appreciated.
Best regards and have a nice day.

If you plan to use the NDK for your OpenCV work, you can use the same idea they use in OpenCV tutorial 2 (Mixed Processing).
// on each camera frame, call your native method
public Mat onCameraFrame(CvCameraViewFrame inputFrame)
{
    mRgba = inputFrame.rgba();
    Nativecleshpdetect(mRgba.getNativeObjAddr()); // native call to perform color and shape detection
    // getNativeObjAddr() returns the address of the Mat object (the camera frame) and
    // passes it to the native side as a long, so you don't have to create and destroy
    // a Mat object on each frame
    return mRgba; // onCameraFrame must return the frame to be displayed
}
public native void Nativecleshpdetect(long matAddrRgba);
On the native side:
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_Tutorial2Activity_Nativecleshpdetect(JNIEnv*, jobject, jlong addrRgba1)
{
    Mat& mRgb1 = *(Mat*) addrRgba1;
    // mRgb1 refers to the address of the input camera frame, so all the manipulations
    // you do here are reflected on the live camera frame
    // once you have your Mat object (i.e. mRgb1) you can implement all the color and
    // shape detection algorithms you have learnt from the OpenCV book
}
Since all manipulations are done through pointers, you have to be a bit careful handling them. Hope this helps.
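One detail from the same tutorial that is easy to miss: the native library must be loaded before any native method is called. A minimal sketch, assuming the library name "mixed_sample" produced by the tutorial's Android.mk:
// load the native library once, before any native method is called
static {
    System.loadLibrary("mixed_sample"); // name built by the tutorial's Android.mk
}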

Why don't you make use of JavaCV? I think it's a better alternative; you don't have to use the NDK at all for this.
try this:
http://code.google.com/p/javacv/

If you check OpenCV's Back Projection tutorial, it does what you are looking for (and a bit more).
Back Projection:
"In terms of statistics, the values stored in the BackProjection
matrix represent the probability that a pixel in a image belongs to
the region with the selected color."
I have converted that tutorial to OpenCV4Android (2.4.8), as you were looking for; it does not use the Android NDK. You can see all the code here on GitHub.
You can also check this answer for more details.
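As a rough idea of what the linked code does, here is a minimal sketch of back projection with the OpenCV4Android Java API (mRgba and touchedRegionHsv are assumed to come from the color-blob sample above; needs java.util.Arrays and the org.opencv.core / org.opencv.imgproc imports):
Mat hsv = new Mat();
Imgproc.cvtColor(mRgba, hsv, Imgproc.COLOR_RGB2HSV_FULL);
// build a 1-D hue histogram from the patch of the selected color
Mat hist = new Mat();
Imgproc.calcHist(Arrays.asList(touchedRegionHsv), new MatOfInt(0),
        new Mat(), hist, new MatOfInt(30), new MatOfFloat(0, 256));
Core.normalize(hist, hist, 0, 255, Core.NORM_MINMAX);
// each pixel of backProj is the probability it belongs to the selected color
Mat backProj = new Mat();
Imgproc.calcBackProject(Arrays.asList(hsv), new MatOfInt(0), hist,
        backProj, new MatOfFloat(0, 256), 1.0);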

Though it's a bit late, I would like to make a contribution to the question.
1- Get the tracking color (just the HSV color) of the object by touching
the screen - I have already done this by using the color blob example
from the OpenCV4Android samples
Implement OnTouchListener in your activity; in the onTouch function:
int cols = mRgba.cols();
int rows = mRgba.rows();
int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;
int x = (int) event.getX() - xOffset;
int y = (int) event.getY() - yOffset;
Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");
if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;
Rect touchedRect = new Rect();
touchedRect.x = (x > 4) ? x - 4 : 0;
touchedRect.y = (y > 4) ? y - 4 : 0;
touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;
Mat touchedRegionRgba = mRgba.submat(touchedRect);
Mat touchedRegionHsv = new Mat();
Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);
// Calculate average color of touched region
mBlobColorHsv = Core.sumElems(touchedRegionHsv);
int pointCount = touchedRect.width * touchedRect.height;
for (int i = 0; i < mBlobColorHsv.val.length; i++)
mBlobColorHsv.val[i] /= pointCount;
mBlobColorRgba = converScalarHsv2Rgba(mBlobColorHsv);
mColor = mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] + ", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3];
Log.i(TAG, "Touched rgba color: (" + mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] +
", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3] + ")");
mRgba is a Mat object which was initialized in onCameraViewStarted as
mRgba = new Mat(height, width, CvType.CV_8UC4);
And for the 2nd part:
2- Find shapes like triangles, squares or circles in the camera view,
based on the color chosen before.
I tried to find out the shape of the selected contours using approxPolyDP:
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(0).toArray());
MatOfPoint2f approxCurve = new MatOfPoint2f();
// approximate the contour polygon, with a tolerance of 2% of the contour perimeter
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
// convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
System.out.println("points length " + points.toArray().length);
if (points.toArray().length == 5)
{
    System.out.println("Pentagon");
    mShape = "Pentagon";
}
else if (points.toArray().length > 5)
{
    System.out.println("Circle");
    // thickness -1 fills the contour
    Imgproc.drawContours(mRgba, contours, 0, new Scalar(255, 255, 0), -1);
    mShape = "Circle";
}
else if (points.toArray().length == 4)
{
    System.out.println("Square");
    mShape = "Square";
}
else if (points.toArray().length == 3) // a triangle has 3 vertices
{
    System.out.println("Triangle");
    mShape = "Triangle";
}
This was done in the onCameraFrame function, after I obtained the contour list.
For me, if the length of the point array was more than 5 it was usually a circle, but there are other algorithms to detect circles and their attributes.
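For completeness, a rough sketch of how that contour list can be obtained from the selected color (mLowerBound/mUpperBound are assumed Scalars built from mBlobColorHsv plus/minus a tolerance, as in the color-blob sample):
Mat hsv = new Mat();
Imgproc.cvtColor(mRgba, hsv, Imgproc.COLOR_RGB2HSV_FULL);
// keep only pixels whose HSV value is close to the picked color
Mat mask = new Mat();
Core.inRange(hsv, mLowerBound, mUpperBound, mask);
// extract the outer contours of the surviving blobs
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);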

Related

How to extract lines from each contour in OpenCV for Android?

I'd like to examine each Canny-detected edge and look for the main lines in it (to check whether they seem to form a rectangle, for example whether two pairs of lines are parallel, etc.).
Imgproc.HoughLinesP does what I want, but it gives the lines from the whole image, and I want to know which lines come from the same edges.
I also tried findContours, looking for the main lines in each contour with approxPolyDP, but this doesn't look well suited because there are often gaps in Canny-detected edges. Moreover, it gives the contours of the edges and not the edges themselves.
Here is a test image example:
How can I get a set of lines for each shape?
Based on Miki's answer, here is what I've done:
Canny
HoughLinesP (or LineSegmentDetector, as you prefer): to detect lines
connectedComponents: to find the Canny "contours" in the Canny image
Dilate with a 3x3 kernel (see below)
For each Hough line: take a few pixels from the line and look for the most frequent value (ignoring 0s).
For example, I chose {p1, 0.75*p1 + 0.25*p2, 0.5*p1 + 0.5*p2, 0.25*p1 + 0.75*p2, p2}, so if my values are {1,2,0,2,2} then the line belongs to connected component number 2.
Dilating makes sure you don't miss a contour by only 1 pixel (but don't use it if your objects are too close).
This allows you to "tag" the Hough lines with the index of the contour they belong to.
All of these functions can be found in the Imgproc module; this works in OpenCV 3.0 only and gives the desired result.
Here is the code:
// open image
File root = Environment.getExternalStorageDirectory();
File file = new File(root, "image_test.png");
Mat mRGBA = Imgcodecs.imread(file.getAbsolutePath());
Imgproc.cvtColor(mRGBA, mRGBA, Imgproc.COLOR_BGR2RGB);
Mat mGray = new Mat();
Imgproc.cvtColor(mRGBA, mGray, Imgproc.COLOR_RGB2GRAY);
Imgproc.medianBlur(mGray, mGray, 7);
/* Main part */
Imgproc.Canny(mGray, mGray, 50, 60, 3, true);
Mat aretes = new Mat();
// theta resolution: 0.01745329251 rad = 1 degree
Imgproc.HoughLinesP(mGray, aretes, 1, 0.01745329251, 30, 10, 4);
/**
 * Tag Canny edges in the gray picture with indexes from 1 to 65535 (0 = background)
 * (labels are written as CV_16U here, so up to 65535 components are supported)
 */
int nb = Imgproc.connectedComponents(mGray, mGray, 8, CvType.CV_16U);
Imgproc.dilate(mGray, mGray, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3)));
// for each Hough line
for (int x = 0; x < aretes.rows(); x++) {
    double[] vec = aretes.get(x, 0);
    double x1 = vec[0],
           y1 = vec[1],
           x2 = vec[2],
           y2 = vec[3];
    /**
     * Take 5 points from the line
     *
     * x----x----x----x----x
     * P1                  P2
     */
    double[] pixel_values = new double[5];
    pixel_values[0] = mGray.get((int) y1, (int) x1)[0];
    pixel_values[1] = mGray.get((int) (y1 * 0.75 + y2 * 0.25), (int) (x1 * 0.75 + x2 * 0.25))[0];
    pixel_values[2] = mGray.get((int) ((y1 + y2) * 0.5), (int) ((x1 + x2) * 0.5))[0];
    pixel_values[3] = mGray.get((int) (y1 * 0.25 + y2 * 0.75), (int) (x1 * 0.25 + x2 * 0.75))[0];
    pixel_values[4] = mGray.get((int) y2, (int) x2)[0];
    /**
     * Look for the most frequent value
     * (to keep it readable, the following code accepts the line only if there are at
     * least 3 good pixels)
     */
    double value;
    Arrays.sort(pixel_values);
    if (pixel_values[1] == pixel_values[3] || pixel_values[0] == pixel_values[2] || pixel_values[2] == pixel_values[4]) {
        value = pixel_values[2];
    } else {
        value = 0;
    }
    /**
     * Now value is the index of the connected component (or 0 if it's a bad line)
     * You can store it in another array; here I'll just draw the line with the value
     */
    if (value != 0) {
        Imgproc.line(mRGBA, new Point(x1, y1), new Point(x2, y2), new Scalar((value * 41 + 50) % 255, (value * 69 + 100) % 255, (value * 91 + 60) % 255), 3);
    }
}
Imgproc.cvtColor(mRGBA, mRGBA, Imgproc.COLOR_RGB2BGR);
File file2 = new File(root, "image_test_OUT.png");
Imgcodecs.imwrite(file2.getAbsolutePath(), mRGBA);
If you're using OpenCV 3.0.0 you can use LineSegmentDetector, and "AND" your detected lines with the contours.
I provide sample code below. It's C++ (sorry about that), but you can easily translate it to Java. At least you can see how to use LineSegmentDetector and how to extract the common lines for each contour. Lines on the same contour are drawn with the same color.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
RNG rng(12345);
Mat3b img = imread("path_to_image");
Mat1b gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
Mat3b result;
cvtColor(gray, result, COLOR_GRAY2BGR);
// Detect lines
Ptr<LineSegmentDetector> detector = createLineSegmentDetector();
vector<Vec4i> lines;
detector->detect(gray, lines);
// Draw lines
Mat1b lineMask(gray.size(), uchar(0));
for (int i = 0; i < lines.size(); ++i)
{
line(lineMask, Point(lines[i][0], lines[i][1]), Point(lines[i][2], lines[i][3]), Scalar(255), 2);
}
// Compute edges
Mat1b edges;
Canny(gray, edges, 200, 400);
// Find contours
vector<vector<Point>> contours;
findContours(edges.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); ++i)
{
// Draw each contour
Mat1b contourMask(gray.size(), uchar(0));
drawContours(contourMask, contours, i, Scalar(255), 2); // Better use 1 here. 2 is just for visualization purposes
// AND the contour and the lines
Mat1b bor;
bitwise_and(contourMask, lineMask, bor);
// Draw the common pixels with a random color
vector<Point> common;
findNonZero(bor, common);
Vec3b color = Vec3b(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
for (int j = 0; j < common.size(); ++j)
{
result(common[j]) = color;
}
}
imshow("result", result);
waitKey();
return 0;
}

PerspectiveTransform and crop rectangle in Android using OpenCV

I receive an image from the Android camera using CvCameraViewListener2.
My program detects a yellow rectangular plate.
The problem is that after I know the 4 corner points of the rectangle, I want to crop the rectangle and show it in the JavaCameraView.
Here is my program's screenshot when it detects a rectangle.
First, I apply perspectiveTransform to transform the image into the correct perspective, but the result is very bad.
I have no idea how to do this correctly.
The following is my code.
private Mat getTransform(Mat inputFrame, List<Point> corners) {
Mat outputFrame = inputFrame.clone();
//compute width of new image
double widthA = Math.sqrt(Math.pow(corners.get(3).x - corners.get(2).x, 2) + Math.pow(corners.get(3).y - corners.get(2).y, 2));
double widthB = Math.sqrt(Math.pow(corners.get(1).x - corners.get(0).x, 2) + Math.pow(corners.get(1).y - corners.get(0).y, 2));
double maxWidth = max(widthA, widthB);
// compute heigh of new image
double heightA = Math.sqrt(Math.pow(corners.get(1).x - corners.get(3).x, 2) + Math.pow(corners.get(1).y - corners.get(3).y, 2));
double heightB = Math.sqrt(Math.pow(corners.get(0).x - corners.get(2).x, 2) + Math.pow(corners.get(0).y - corners.get(2).y, 2));
double maxHeight = max(heightA, heightB);
Log.d(MAINTAG, "Width: " + maxWidth + ", Height: " + maxHeight);
List<Point> output = new ArrayList<>();
output.add(new Point(0, 0));
output.add(new Point(0, maxHeight));
output.add(new Point(maxWidth, maxHeight));
output.add(new Point(maxWidth, 0));
Mat inputMat = Converters.vector_Point2f_to_Mat(corners);
Mat outputMat = Converters.vector_Point2f_to_Mat(output);
Mat outputTransform = Imgproc.getPerspectiveTransform(inputMat, outputMat);
Imgproc.warpPerspective(inputFrame, outputFrame, outputTransform, new Size(outputMat.width(), outputMat.height()), Imgproc.INTER_LINEAR);
return outputFrame;
}
The corners are arranged as topLeft, topRight, bottomLeft, bottomRight.
After correcting the perspective, the screen freezes!
Second, I want to crop the rectangle after I correct its perspective and show it in the JavaCameraView. How do I do that?
Thank you for all solutions and suggestions.
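For the cropping part, a minimal sketch, assuming the warp succeeded with a destination size of (maxWidth, maxHeight) and using the names from the code above:
// after warpPerspective with dsize = (maxWidth, maxHeight), the plate fills the
// top-left maxWidth x maxHeight region, so cropping is just a submat over it
Rect cropRect = new Rect(0, 0, (int) maxWidth, (int) maxHeight);
Mat cropped = outputFrame.submat(cropRect).clone(); // clone() gives an independent Mat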

Threshold face images in various light

I want to ask about some ideas / study materials connected to binarization. I am trying to create a system that detects human emotions. I am able to get areas such as brows, eyes, nose, mouth etc., but then comes another stage -> processing...
My images are taken in various places, times of day, and weather conditions. This is problematic during binarization: with the same threshold value, some images are fully black while others look fine and provide the information I want.
What I want to ask you about is:
1) Is there a known way to bring all images to the same level of brightness?
2) How do I create a dependency between the threshold value and the brightness of an image?
What I have tried so far is normalizing the image... but it has no effect; maybe I'm doing something wrong. I'm using OpenCV (for Android):
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
EDIT:
I tried adaptive thresholding and OTSU - they didn't work for me. I had problems using CLAHE on Android, but I managed to implement the Niblack algorithm.
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
nibelBlackTresholding(cleanFaceMatGRAY, -0.2);
private void nibelBlackTresholding(Mat image, double parameter) {
    Mat meanPowered = image.clone();
    Core.multiply(image, image, meanPowered);
    Scalar mean = Core.mean(image);
    Scalar stdmean = Core.mean(meanPowered);
    double tresholdValue = mean.val[0] + parameter * stdmean.val[0];
    int totalRows = image.rows();
    int totalCols = image.cols();
    for (int cols = 0; cols < totalCols; cols++) {
        for (int rows = 0; rows < totalRows; rows++) {
            if (image.get(rows, cols)[0] > tresholdValue) {
                image.put(rows, cols, 255);
            } else {
                image.put(rows, cols, 0);
            }
        }
    }
}
The results are really good, but still not good enough for some images. I paste links because the images are big and I don't want to take up too much screen space:
For example, this one is thresholded really well:
https://dl.dropboxusercontent.com/u/108321090/a1.png
https://dl.dropboxusercontent.com/u/108321090/a.png
But bad light sometimes produces shadows, which gives this effect:
https://dl.dropboxusercontent.com/u/108321090/b1.png
https://dl.dropboxusercontent.com/u/108321090/b.png
Do you have any idea that could help me improve the thresholding of images with large light differences (shadows)?
EDIT2:
I found that my previous algorithm was implemented in the wrong way: the standard deviation was calculated incorrectly, and in Niblack thresholding the mean is a local value, not a global one. I fixed it according to this reference: http://arxiv.org/ftp/arxiv/papers/1201/1201.5227.pdf
private void niblackThresholding2(Mat image, double parameter, int window) {
    int totalRows = image.rows();
    int totalCols = image.cols();
    int offset = (window - 1) / 2;
    double tresholdValue = 0;
    double localMean = 0;
    double meanDeviation = 0;
    // y iterates over rows and x over columns, matching image.get(row, col)
    for (int y = offset + 1; y < totalRows - offset; y++) {
        for (int x = offset + 1; x < totalCols - offset; x++) {
            localMean = calculateLocalMean(x, y, image, window);
            meanDeviation = image.get(y, x)[0] - localMean;
            tresholdValue = localMean * (1 + parameter * ((meanDeviation / (1 - meanDeviation)) - 1));
            Log.d("QWERTY", "TRESHOLD " + tresholdValue);
            if (image.get(y, x)[0] > tresholdValue) {
                image.put(y, x, 255);
            } else {
                image.put(y, x, 0);
            }
        }
    }
}
private double calculateLocalMean(int x, int y, Mat image, int window) {
    int offset = (window - 1) / 2;
    // x is the column index and y the row index (OpenCV Point order)
    Point leftTop = new Point(x - (offset + 1), y - (offset + 1));
    Point bottomRight = new Point(x + offset, y + offset);
    Rect tempRect = new Rect(leftTop, bottomRight);
    Mat tempMat = new Mat(image, tempRect);
    return Core.mean(tempMat).val[0];
}
Results for a 7x7 window and the k parameter proposed in the reference (k = 0.34). I still can't get rid of the shadow on faces:
https://dl.dropboxusercontent.com/u/108321090/b2.png
https://dl.dropboxusercontent.com/u/108321090/b1.png
Things to look at:
http://docs.opencv.org/java/org/opencv/imgproc/CLAHE.html
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#adaptiveThreshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20int,%20int,%20int,%20double)
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#threshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20double,%20int) (THRESH_OTSU)
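A minimal sketch combining the first two suggestions (cleanFaceMatGRAY is the 8-bit grayscale face Mat from the question): CLAHE evens out local contrast, then adaptiveThreshold picks a cutoff per neighbourhood instead of one global value.
// CLAHE: histogram equalization on local tiles, with limited amplification
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
clahe.apply(cleanFaceMatGRAY, cleanFaceMatGRAY);
// adaptive threshold: cutoff computed per 15x15 neighbourhood (blockSize must be odd)
Imgproc.adaptiveThreshold(cleanFaceMatGRAY, cleanFaceMatGRAY, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 15, 5);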

OpenCV crop function fatal signal 11

Hello, I am doing an Android app which uses OpenCV to detect rectangles/squares. To detect them I am using functions (modified a bit) from squares.cpp. The points of every square found are stored in vector<vector<Point>> squares, then I pass it to a function which chooses the biggest one and stores it in vector<Point> theBiggestSq. The problem is with the cropping function, whose code I will paste below (I will also post a link to a YouTube video showing the problem). If the actual square is far enough from the camera it works fine, but if I bring it closer, at some point it will hang. I will post a screenshot of the problem from LogCat with the points printed out (the boundary points taken from the theBiggestSq vector; maybe it will help to find the solution).
void cutAndSave(vector<Point> theBiggestSq, Mat image) {
    RotatedRect box = minAreaRect(Mat(theBiggestSq));
    // Draw bounding box in the original image (debug purposes)
    //cv::Point2f vertices[4];
    //box.points(vertices);
    //for (int i = 0; i < 4; ++i)
    //{
    //    cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, CV_AA);
    //}
    //cv::imshow("box", img);
    //cv::imwrite("box.png", img);
    // Set Region of Interest to the area defined by the box
    Rect roi;
    roi.x = box.center.x - (box.size.width / 2);
    roi.y = box.center.y - (box.size.height / 2);
    roi.width = box.size.width;
    roi.height = box.size.height;
    // Crop the original image to the defined ROI
    //bmp = Bitmap.createBitmap(box.size.width / 2, box.size.height / 2, Bitmap.Config.ARGB_8888);
    Mat crop = image(roi);
    //Mat crop = image(Rect(roi.x, roi.y, roi.width, roi.height)).clone();
    //Utils.matToBitmap(crop.clone(), bmp);
    imwrite("/sdcard/OpenCVTest/1.png", crop);
    imshow("crop", crop);
}
video of my app and its problems
The coordinates printed are, respectively: roi.x, roi.y, roi.width, roi.height.
Another problem is that the boundaries drawn should be green, but as you can see in the video they are distorted (flexed, as if the boundaries were made of glass?).
Thank you for any help. I am new to OpenCV, having used it for only one month, so please be tolerant.
EDIT:
drawing code:
//draw//
for( size_t i = 0; i < squares.size(); i++ )
{
const Point* p = &squares[i][0];
int n = (int)squares[i].size();
polylines(mBgra, &p, &n, 1, true, Scalar(255,255,0), 5, 10);
//Rect rect = boundingRect(cv::Mat(squares[i]));
//rectangle(mBgra, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
}
This error basically tells you the cause: your ROI exceeds the image dimensions. This means that when you extract Rect roi from RotatedRect box, either x or y is smaller than zero, or the width/height pushes the ROI outside the image. You should check this using something like:
// Propose rectangle from data
int proposedX = box.center.x - (box.size.width / 2);
int proposedY = box.center.y - (box.size.height / 2);
int proposedW = box.size.width;
int proposedH = box.size.height;
// Ensure top-left edge is within image
roi.x = proposedX < 0 ? 0 : proposedX;
roi.y = proposedY < 0 ? 0 : proposedY;
// Ensure bottom-right edge is within image
roi.width =
(roi.x - 1 + proposedW) > image.cols ? // Will this roi exceed image?
(image.cols - 1 - roi.x) // YES: make roi go to image edge
: proposedW; // NO: continue as proposed
// Similar for height
roi.height = (roi.y - 1 + proposedH) > image.rows ? (image.rows - 1 - roi.y) : proposedH;
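The same clamping can be written more compactly with Math.min/Math.max; a sketch in Java, using the proposed* variables from above:
// clamp the proposed ROI to the image bounds
roi.x = Math.max(proposedX, 0);
roi.y = Math.max(proposedY, 0);
roi.width = Math.min(proposedW, image.cols() - roi.x);
roi.height = Math.min(proposedH, image.rows() - roi.y);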

Simple Image Sharpening algorithm for Android App

I am looking for a simple image sharpening algorithm to use in my Android application. I have a grayscale image captured from video (mostly used for text) and I would like to sharpen it, because my phone does not have autofocus and the close object distance blurs the text. I don't have any background in image processing, but as a user I am familiar with unsharp masking and other sharpening tools available in GIMP, Photoshop, etc. I didn't see any support for image processing in the Android API, and hence I am looking for a method to implement myself. Thanks.
This is a simple image sharpening algorithm.
Pass the width, height, and byte array of your grayscale image to this function and it will sharpen the image in that array in place.
#include <stdlib.h> // malloc, free
#include <string.h> // memcpy

void sharpen(int width, int height, unsigned char* yuv) {
    // work on a copy of the original pixels; unsigned char avoids sign errors
    // for gray values above 127 (plain char may be signed)
    unsigned char *mas = (unsigned char *) malloc(width * height);
    memcpy(mas, yuv, width * height);
    signed int res;
    int ywidth;
    for (int y = 1; y < height - 1; y++) {
        ywidth = y * width;
        for (int x = 1; x < width - 1; x++) {
            // 3x3 sharpening kernel: 5 * center minus the 4 direct neighbours
            res = mas[x + ywidth] * 5
                - mas[x - 1 + ywidth]
                - mas[x + 1 + ywidth]
                - mas[x + (ywidth + width)]
                - mas[x + (ywidth - width)];
            // clamp to the valid 8-bit range
            if (res > 255) res = 255;
            if (res < 0) res = 0;
            yuv[x + ywidth] = res;
        }
    }
    free(mas);
}
If you have access to pixel information, your most basic option would be a sharpening convolution kernel. Take a look at the following sites, where you can learn more about sharpening kernels and how to apply them (a short OpenCV sketch follows the links):
link1
link2
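If OpenCV is already available (as elsewhere on this page), the same idea can be expressed with filter2D; a minimal sketch, where src is an assumed 8-bit grayscale Mat:
// the classic 3x3 sharpening kernel (5 in the centre, -1 at the 4 neighbours)
Mat kernel = new Mat(3, 3, CvType.CV_32F);
kernel.put(0, 0,
         0, -1,  0,
        -1,  5, -1,
         0, -1,  0);
// ddepth -1 keeps the source depth; the kernel is convolved over every pixel
Imgproc.filter2D(src, src, -1, kernel);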
ImageJ has many algorithms in Java and is freely available.
