I am developing an Android application in which I need to detect eye blinking. So far I have been able to detect the face and eyes using OpenCV, but now I need to check whether the eyes are open or closed. I read somewhere that one way to do that is by measuring the pixel intensities (grey levels), but it was not explained step by step. I am new to OpenCV, so can anyone please help me with how to do that? It is really important.
Here is my onCameraFrame method:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    mGray = inputFrame.gray();
    if (mAbsoluteFaceSize == 0) {
        int height = mGray.rows();
        if (Math.round(height * mRelativeFaceSize) > 0) {
            mAbsoluteFaceSize = Math.round(height * mRelativeFaceSize);
        }
    }
    if (mZoomWindow == null || mZoomWindow2 == null)
        CreateAuxiliaryMats();
    MatOfRect faces = new MatOfRect();
    if (mJavaDetector != null)
        mJavaDetector.detectMultiScale(mGray, faces, 1.1, 2,
                2, // TODO: objdetect.CV_HAAR_SCALE_IMAGE
                new Size(mAbsoluteFaceSize, mAbsoluteFaceSize),
                new Size());
    Rect[] facesArray = faces.toArray();
    for (int i = 0; i < facesArray.length; i++) {
        Core.rectangle(mRgba, facesArray[i].tl(), facesArray[i].br(),
                FACE_RECT_COLOR, 2);
        xCenter = (facesArray[i].x + facesArray[i].width + facesArray[i].x) / 2;
        yCenter = (facesArray[i].y + facesArray[i].y + facesArray[i].height) / 2;
        Point center = new Point(xCenter, yCenter);
        Rect r = facesArray[i];
        // compute the eye area
        Rect eyearea = new Rect(r.x + r.width / 20,
                (int) (r.y + (r.height / 20)), r.width - 2 * r.width / 20,
                (int) (r.height / 9.0));
        // split it
        Rect eyearea_right = new Rect(r.x + r.width / 6,
                (int) (r.y + (r.height / 4)),
                (r.width - 2 * r.width / 16) / 3, (int) (r.height / 4.0));
        Rect eyearea_left = new Rect(r.x + r.width / 11
                + (r.width - 2 * r.width / 16) / 2,
                (int) (r.y + (r.height / 4)),
                (r.width - 2 * r.width / 16) / 3, (int) (r.height / 4.0));
        // draw the area - mGray is working grayscale mat, if you want to
        // see area in rgb preview, change mGray to mRgba
        Core.rectangle(mRgba, eyearea_left.tl(), eyearea_left.br(),
                new Scalar(255, 0, 0, 255), 2);
        Core.rectangle(mRgba, eyearea_right.tl(), eyearea_right.br(),
                new Scalar(255, 0, 0, 255), 2);
        if (learn_frames < 5) {
            teplateR = get_template(mJavaDetectorEye, eyearea_right, 24);
            teplateL = get_template(mJavaDetectorEye, eyearea_left, 24);
            learn_frames++;
        } else {
            // Learning finished, use the new templates for template matching
            match_eye(eyearea_right, teplateR, method);
            match_eye(eyearea_left, teplateL, method);
        }
    }
    return mRgba;
}
Thanks in advance.
I have already worked on this problem with this algorithm. There is a C++ implementation here: https://github.com/maz/blinking-angel , based on the algorithm described here: http://www.cs.bu.edu/techreports/pdf/2005-012-blink-detection.pdf .
As far as I can remember:
You take the current frame and the frame from about 100 ms ago, both in black and white (grayscale)
You compute the difference new - old (see line 154 in the GitHub code)
You apply a threshold and then a dilation filter
You compute the contours
If there is a blob at the eye location whose area is greater than a threshold, it means the user blinked
Take a look at the is_blink function at line 316. In his case, he checks whether w * h of the blob's bounding box is greater than a threshold.
In fact it relies on the difference between eye and skin color; in the GitHub implementation, a threshold of 5 is used.
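A minimal sketch of that frame-differencing idea in OpenCV for Android (the helper name, its arguments and the blob-area threshold are mine, not from the paper or the GitHub code):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

// Hypothetical helper: prevEyeGray/currEyeGray are the grayscale eye ROIs from
// two frames roughly 100 ms apart; blobAreaThreshold is tuned empirically.
static boolean isBlink(Mat prevEyeGray, Mat currEyeGray, double blobAreaThreshold) {
    Mat diff = new Mat();
    Core.absdiff(currEyeGray, prevEyeGray, diff);                  // new - old
    Imgproc.threshold(diff, diff, 5, 255, Imgproc.THRESH_BINARY);  // the GitHub code uses a threshold around 5
    Imgproc.dilate(diff, diff, new Mat());                         // default 3x3 kernel

    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(diff, contours, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    for (MatOfPoint c : contours) {
        Rect box = Imgproc.boundingRect(c);
        if (box.width * box.height > blobAreaThreshold) {          // large enough blob at the eye => blink
            return true;
        }
    }
    return false;
}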
What I did was convert the eye region from RGB to HSV and apply skin detection, having worked out the range of skin color in HSV. If the percentage of skin pixels is greater than a threshold value, the eye is closed; otherwise it is open. There is still some accuracy issue depending on the amount of light present. Thank you all for giving me a start :)
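A rough sketch of that check in OpenCV Java (the helper name, the HSV skin range and the 0.6 ratio are illustrative values, not the ones I actually tuned):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// Hypothetical helper: eyeRgba is the RGBA eye region cropped from the camera frame.
static boolean isEyeClosed(Mat eyeRgba) {
    Mat rgb = new Mat();
    Mat hsv = new Mat();
    Imgproc.cvtColor(eyeRgba, rgb, Imgproc.COLOR_RGBA2RGB);
    Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

    Mat skinMask = new Mat();
    // Example skin range in HSV; tune it for your camera and lighting.
    Core.inRange(hsv, new Scalar(0, 30, 60), new Scalar(25, 180, 255), skinMask);

    double skinRatio = (double) Core.countNonZero(skinMask) / (skinMask.rows() * skinMask.cols());
    return skinRatio > 0.6;   // mostly skin pixels => eyelid covers the eye => closed
}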
Generally there is no obvious solution for this problem, but the number of approaches is quite big, so I'm sure that after a bit of searching you will find something good enough for you. For example, you can use the algorithm I mentioned here (there is a link to this question in "Related", by the way - check the other links from that group as well).
Related
I want to print numbers from 0 to 100 clockwise on a canvas. It works when the values are from 0 to 12, but once I changed or increased the values it stopped working in my case. Can anyone help me here?
private int[] mClockHours = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};

private void drawNumeral(Canvas canvas) {
    mPaint.setTextSize(fontSize);
    for (int number : mClockHours) {
        String tmp = String.valueOf(number);
        mPaint.getTextBounds(tmp, 0, tmp.length(), mRect);
        double angle = Math.PI / 6 * (number - 2);
        Log.d("drawNumeral", "number: " + number);
        Log.d("drawNumeral", "Math.PI: " + Math.PI);
        Log.d("drawNumeral", "angle: " + angle);
        Log.d("drawNumeral", "temp: " + tmp);
        int x = (int) (width / 2 + Math.cos(angle) * mRadius - mRect.width() / 2);
        int y = (int) (height / 2 + Math.sin(angle) * mRadius - mRect.height() / 2);
        canvas.drawText(tmp, x, y, mPaint);
    }
}
You need to invalidate() the view to see changes you made. Here is a good tutorial to understand how to draw on canvas.
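For example (the setter below is illustrative, not part of the question's code), after updating the data the view draws from, trigger a redraw:

// Illustrative setter on the custom clock view: update the numbers,
// then ask Android to redraw the view so onDraw()/drawNumeral() runs again.
public void setClockHours(int[] hours) {
    mClockHours = hours;
    invalidate();          // schedules onDraw() on the UI thread
    // use postInvalidate() instead if you call this from a background thread
}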
Good day. I am creating a Siri-like wave for Android and I have encountered a big issue: I need the wave to be in 4 colors. Let's assume I only have a single line, which is drawn on the screen according to the voice decibels. I am able to do that, but I have found no way to give 4 different colors to the same path. Assume it is one single path which moves from the start of the screen to the end; I need that line to have 4 different colors. Essentially I have to divide the path into 4 parts and draw each part in its own color, but neither Google nor any other source gives me anything (I have not even found anything similar to what I want).
Meanwhile, here is the code where I actually draw the lines.
for (int l = 0; l < mWaveCount; ++l) {
    float midH = height / 2.0f;
    float midW = width / 2.0f;
    float maxAmplitude = midH / 2f - 4.0f;
    float progress = 1.0f - l * 1.0f / mWaveCount;
    float normalAmplitude = (1.5f * progress - 0.5f) * mAmplitude;
    float multiplier = (float) Math.min(1.0, (progress / 3.0f * 2.0f) + (1.0f / 3.0f));
    if (l != 0) {
        mSecondaryPaint.setAlpha((int) (multiplier * 255));
    }
    mPath.reset();
    for (int x = 0; x < width + mDensity; x += mDensity) {
        float scaling = 1f - (float) Math.pow(1 / midW * (x - midW), 2);
        float y = scaling * maxAmplitude * normalAmplitude * (float) Math.sin(
                180 * x * mFrequency / (width * Math.PI) + mPhase) + midH;
        // canvas.drawPoint(x, y, l == 0 ? mPrimaryPaint : mSecondaryPaint);
        //
        // canvas.drawLine(x, y, x, 2*midH - y, mSecondaryPaint);
        if (x == 0) {
            mPath.moveTo(x, y);
        } else {
            mPath.lineTo(x, y);
            // final float x2 = (x + mLastX) / 2;
            // final float y2 = (y + mLastY) / 2;
            // mPath.quadTo(x2, y2, x, y);
        }
        mLastX = x;
        mLastY = y;
    }
    if (l == 0) {
        canvas.drawPath(mPath, mPrimaryPaint);
    } else {
        canvas.drawPath(mPath, mSecondaryPaint);
    }
}
I tried to change the color in the if (l == 0) { canvas.drawPath(mPath, mPrimaryPaint); } block, but if I change it there, there is no result at all: either the line is separate and does not move (though it should), or the color is not applied, probably because I am doing it in a loop and every time the last color is picked for drawing. Can you help me out? Even a small reference is gold for me, because there is really nothing at all on the internet.
Anyway, even though Matt Horst's answer is fully correct, I found the simplest and easiest solution; I never thought it would be so easy. If anyone out there needs to make a path divided into multiple colors, here is what you can do:
int[] rainbow = getRainbowColors();
Shader shader = new LinearGradient(0, 0, 0, width, rainbow,
null, Shader.TileMode.REPEAT);
Matrix matrix = new Matrix();
matrix.setRotate(90);
shader.setLocalMatrix(matrix);
mPrimaryPaint.setShader(shader);
Here getRainbowColors() is an array of the colors you wish your line to have, and width is the length of the path, so the Shader knows how to spread the colors to fit the path's length. Easy, isn't it? It bugged me a lot to get to this simple point; you could hardly find it anywhere on the internet unless you were looking for something completely different and happened to come across it.
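For illustration, getRainbowColors() can be as simple as returning the four colors you want the path split into (the values below are placeholders):

private int[] getRainbowColors() {
    // Any four colors you want the path divided into; these are placeholders.
    return new int[] {Color.RED, Color.YELLOW, Color.GREEN, Color.BLUE};
}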
It seems to me like you could set up one paint for each section, each with a different color, and one path for each section too. Then, as you draw across the screen, wherever the changeover point between sections is, start drawing with the new path. And make sure to call moveTo() on the new path first so it starts off where the old one left off.
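A rough sketch of that approach (computeY(), segmentColors, width, mDensity and canvas here stand in for the question's own wave math and fields; they are assumptions, not code from it):

// Illustrative: build the wave as 4 consecutive Path segments, each drawn with its own Paint.
Paint[] paints = new Paint[4];
Path[] paths = new Path[4];
for (int i = 0; i < 4; i++) {
    paints[i] = new Paint(Paint.ANTI_ALIAS_FLAG);
    paints[i].setStyle(Paint.Style.STROKE);
    paints[i].setColor(segmentColors[i]);      // your 4 colors
    paths[i] = new Path();
}

float segmentWidth = width / 4f;
float prevX = 0f;
float prevY = computeY(0f);                    // computeY(): the existing wave function
for (float x = mDensity; x <= width; x += mDensity) {
    float y = computeY(x);
    int segment = Math.min(3, (int) (x / segmentWidth));
    if (paths[segment].isEmpty()) {
        paths[segment].moveTo(prevX, prevY);   // start where the previous segment left off
    }
    paths[segment].lineTo(x, y);
    prevX = x;
    prevY = y;
}
for (int i = 0; i < 4; i++) {
    canvas.drawPath(paths[i], paints[i]);
}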
For my solution, I tried changing the color of the linePaint in the onDraw call, but it kept drawing a single color.
So I used two different paints for the two different colors and drew a path for each on the canvas.
And it worked. Hope it helps someone out there.
I have a problem detecting road lanes with my phone.
I wrote some code for road lane detection, but it is not working for me.
From the camera I convert the normal view to BGR colors and try GaussianBlur and Canny, but I think I am not drawing the lanes correctly for detection.
Maybe someone has another idea of how to detect road lanes with OpenCV?
Mat mYuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
Mat mRgba = new Mat(height + height / 2, width, CvType.CV_8UC1);
Mat thresholdImage = new Mat(height + height / 2, width, CvType.CV_8UC1);
mYuv.put(0, 0, data);
Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_YUV420p2BGR, 4);
//convert to grayscale
Imgproc.cvtColor(mRgba, thresholdImage, Imgproc.COLOR_mRGBA2RGBA, 4);
// Perform a Gaussian blur (convolving in 5x5 Gaussian) & detect edges
Imgproc.GaussianBlur(mRgba, mRgba, new Size(5,5), 2.2, 2);
Imgproc.Canny(mRgba, thresholdImage, VActivity.CANNY_MIN_TRESHOLD, VActivity.CANNY_MAX_THRESHOLD);
Mat lines = new Mat();
double rho = 1;
double theta = Math.PI/180;
int threshold = 50;
//do Hough transform to find lanes
Imgproc.HoughLinesP(thresholdImage, lines, rho, theta, threshold, VActivity.HOUGH_MIN_LINE_LENGTH, VActivity.HOUGH_MAX_LINE_GAP);
for (int x = 0; x < lines.cols() && x < 1; x++) {
    double[] vec = lines.get(0, x);
    double x1 = vec[0],
           y1 = vec[1],
           x2 = vec[2],
           y2 = vec[3];
    Point start = new Point(x1, y1);
    Point end = new Point(x2, y2);
    Core.line(mRgba, start, end, new Scalar(255, 0, 0), 3);
}
This approach is fine and I've done something similar, not for road line detection but I did notice that it could be used for that purpose. Some comments:
Not sure why you do:
Imgproc.cvtColor(mRgba, thresholdImage, Imgproc.COLOR_mRGBA2RGBA, 4);
since 1. the comment says "convert to grayscale", which is a single channel, and 2. thresholdImage will get overwritten by the call to Canny later. You just need to dimension thresholdImage with:
thresholdImage = new Mat(mRgba.size(), CvType.CV_8UC1);
What are your parameter values to the call to Canny? I played about with mine considerably and ended up with values like: threshold1 = 441, threshold2 = 160, aperture = 3.
Likewise for Imgproc.HoughLinesP: I use Imgproc.HoughLines rather than Imgproc.HoughLinesP, with parameters threshold = 80, minLen = 30, maxLen = 10.
Also have a look at:
for (int x = 0; x < lines.cols() && x < 1; x++){
&& x < 1 means you will only take the first line that the call to HoughLinesP returns. I'd suggest you remove this and use some other criterion to reduce the number of lines; for example, I was interested in only horizontal and vertical lines, so I used atan2 to calculate the line angles and exclude those that deviate too much.
UPDATE
Here is how I get the angle of a line. Assuming the coordinates of one point are (x1, y1) and of the other (x2, y2), the angle is obtained with:
double lineAngle = Math.atan2(y2 - y1, x2 - x1);
This returns an angle in radians (Math.atan2 gives values between -PI and PI; if you ignore the direction of the line, that folds down to the range -PI/2 to PI/2).
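For example, a filter that keeps only near-horizontal and near-vertical lines from the HoughLinesP output could look like this (the 10-degree tolerance is an arbitrary choice, and lines/mRgba are the Mats from the question's code):

// Illustrative filter: draw only lines within ~10 degrees of horizontal or vertical.
double tolerance = Math.toRadians(10);
for (int i = 0; i < lines.cols(); i++) {
    double[] vec = lines.get(0, i);
    double x1 = vec[0], y1 = vec[1], x2 = vec[2], y2 = vec[3];
    double angle = Math.abs(Math.atan2(y2 - y1, x2 - x1));        // 0..PI
    boolean nearHorizontal = angle < tolerance || angle > Math.PI - tolerance;
    boolean nearVertical = Math.abs(angle - Math.PI / 2) < tolerance;
    if (nearHorizontal || nearVertical) {
        Core.line(mRgba, new Point(x1, y1), new Point(x2, y2), new Scalar(255, 0, 0), 3);
    }
}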
With regard to the Canny parameters, I would experiment - I set up onTouch so that I could adjust the threshold values by touching certain parts of the screen and see the effect in real time. Note that aperture is a rather disappointing parameter: it only seems to accept the odd values 3, 5 and 7, and 3 is the best I've found.
Something like in the onTouch method:
int w = mRgba.width();
int h = mRgba.height();
float x = event.getX();
float y = event.getY();
if ((x < w / 3) && (y < h / 2)) t1 += 20;
if ((x < w / 3) && (y >= h / 2)) t1 -= 20;
if ((x > 2 * w / 3) && (y < h / 2)) t2 += 20;
if ((x > 2 * w / 3) && (y >= h / 2)) t2 -= 20;
t1 and t2 being the threshold values passed to the Canny call.
I'm facing a very strange issue with OpenCV for Android: when I access a pixel with Mat.at it gives me the wrong pixel on the screen.
A simple example:
for (double y = (mat.rows - h) / 2; y < (mat.rows + h) / 2; y++) {
    for (double x = (mat.cols - w) / 2; x < (mat.cols + w) / 2; x++) {
        for (int c = 0; c < 3; c++) {
            mat.at<Vec3b>(y, x)[c] = saturate_cast<uchar>(255);
        }
    }
}
circle(mat, Point((mat.cols - w) / 2, (mat.rows - h) / 2), 10, Scalar(255, 0, 0, 255));
circle(mat, Point((mat.cols + w) / 2, (mat.rows - h) / 2), 10, Scalar(255, 0, 0, 255));
circle(mat, Point((mat.cols - w) / 2, (mat.rows + h) / 2), 10, Scalar(255, 0, 0, 255));
circle(mat, Point((mat.cols + w) / 2, (mat.rows + h) / 2), 10, Scalar(255, 0, 0, 255));
The circles should be aligned with the corners of the filled box, but they are not.
Is there a conversion to make in order to access the true coordinates?
You don't post the initialization of mat, but it appears to be initialized as type CV_8UC4. This means that accessing the image using mat.at<cv::Vec3b> will give you incorrect pixel locations. Four-channel images must be accessed using at<cv::Vec4b> to give correct pixel locations, even if you are only modifying three of the channels, as in your example.
Unrelated: it's not advisable to use double as a counter variable type.
This is a long shot, but it might help you. Is this running in the SDK emulator? I remember having issues with those. When I tried my code on a device it worked like a charm.
I am trying to change the alpha value of a bitmap per pixel in a for loop. The bitmap is created with createBitmap(source, x, y, w, h) from another bitmap. I've done a little test but I can't seem to alter the alpha. Is it the setPixel command, or the fact that the bitmap isn't ARGB?
I want to create a simple fade-out effect in the end, but for now I am not referencing the original pixel colors, just green with half alpha. Thanks if you can help :)
_left[1] = Bitmap.createBitmap(TestActivity.photo, 0, 0, 256, 256);
for (int i = 0; i < _left[1].getWidth(); i++)
    for (int t = 0; t < _left[1].getHeight(); t++) {
        int a = (_left[1].getWidth() / 2) - i;
        int b = (_left[1].getHeight() / 2) - t;
        double dist = Math.sqrt((a * a) + (b * b));
        if (dist > 20) _left[1].setPixel(i, t, Color.argb(128, 0, 255, 0));
    }
UPDATE:
OK, this is the result I came up with, if anyone wants to take a bitmap and fade it out radially. But yes, it is VERY SLOW without arrays... Thanks Reuben for a step in the right direction.
public void fadeBitmap(Bitmap input, double fadeStartPercent, double fadeEndPercent, Bitmap output) {
    Bitmap tempalpha = Bitmap.createBitmap(input.getWidth(), input.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas printcanvas = new Canvas(output);
    int radius = input.getWidth() / 2;
    double fadelength = (radius * (fadeEndPercent / 100));
    double fadestart = (radius * (fadeStartPercent / 100));
    for (int i = 0; i < input.getWidth(); i++)
        for (int t = 0; t < input.getHeight(); t++) {
            int a = (input.getWidth() / 2) - i;
            int b = (input.getHeight() / 2) - t;
            double dist = Math.sqrt((a * a) + (b * b));
            if (dist <= fadestart) {
                tempalpha.setPixel(i, t, Color.argb(255, 255, 255, 255));
            } else {
                int fadeoff = 255 - (int) ((dist - fadestart) * (255 / (fadelength - fadestart)));
                if (dist > radius * (fadeEndPercent / 100)) fadeoff = 0;
                tempalpha.setPixel(i, t, Color.argb(fadeoff, 255, 255, 255));
            }
        }
    Paint alphaP = new Paint();
    alphaP.setAntiAlias(true);
    alphaP.setXfermode(new PorterDuffXfermode(Mode.DST_IN));
    // printcanvas.setBitmap();
    printcanvas.drawBitmap(input, 0, 0, null);
    printcanvas.drawBitmap(tempalpha, 0, 0, alphaP);
}
The version of Bitmap.createBitmap() you are using returns an immutable bitmap. Bitmap.setPixel() will have no effect.
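One common workaround (the general pattern, not tested against the question's code) is to take a mutable copy before calling setPixel():

// createBitmap(source, x, y, w, h) can return an immutable bitmap;
// copy() with isMutable = true gives one that setPixel() can modify.
Bitmap cropped = Bitmap.createBitmap(TestActivity.photo, 0, 0, 256, 256);
_left[1] = cropped.copy(Bitmap.Config.ARGB_8888, true);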
setPixel is appallingly slow anyway. Aim to use setPixels(), or, best of all, find a better way than manipulating bitmap pixels directly. I expect you could do something clever with a separate alpha-only bitmap and the right PorterDuff mode.
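As a sketch of that last idea (the gradient stops are illustrative, and input/output reuse the names from the update above): draw the source once, then draw an ALPHA_8 radial mask with DST_IN so the mask supplies the alpha instead of per-pixel setPixel() calls.

// Illustrative radial fade without touching individual pixels.
Bitmap mask = Bitmap.createBitmap(input.getWidth(), input.getHeight(), Bitmap.Config.ALPHA_8);
Canvas maskCanvas = new Canvas(mask);
RadialGradient fade = new RadialGradient(
        input.getWidth() / 2f, input.getHeight() / 2f, input.getWidth() / 2f,
        new int[] {Color.BLACK, Color.BLACK, Color.TRANSPARENT},
        new float[] {0f, 0.5f, 1f},                 // fully opaque until 50% of the radius, then fade out
        Shader.TileMode.CLAMP);
Paint maskPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
maskPaint.setShader(fade);
maskCanvas.drawRect(0, 0, input.getWidth(), input.getHeight(), maskPaint);

Canvas out = new Canvas(output);                    // output: a mutable ARGB_8888 bitmap
out.drawBitmap(input, 0, 0, null);
Paint alphaPaint = new Paint();
alphaPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_IN));
out.drawBitmap(mask, 0, 0, alphaPaint);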