How to track the trajectory of a moving object with OpenCV C++ on Android

I am fairly new to the OpenCV libraries and I am trying to do real-time object detection for a school project in an Android app. I followed this tutorial (https://www.youtube.com/watch?v=bSeFrPrqZ2A) and I am able to detect an object by color on my Android phone. Now I am trying to map out the trajectory of the object, just like in this video (https://www.youtube.com/watch?v=QTYSRZD4vyI).
Below is some of the source code provided in the first YouTube video.
void searchForMovement(int& x, int& y, Mat& mRgb1, Mat& threshold){
    morphOps(threshold);
    Mat temp;
    threshold.copyTo(temp);
    // these two vectors are needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // find contours of the filtered image using the OpenCV findContours function.
    // In OpenCV, finding contours is like finding white objects on a black background,
    // so the object to be found should be white and the background should be black.
    // CV_CHAIN_APPROX_SIMPLE keeps only the end points of each contour segment
    // (e.g. 4 points for a rectangle)
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        // if the number of objects is greater than MAX_NUM_OBJECTS, we have a noisy filter
        if (numObjects < MAX_NUM_OBJECTS) {
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                // if the area is less than 20 px by 20 px it is probably just noise;
                // if the area takes up most of the frame it is probably just a bad filter.
                // we only want the object with the largest area, so we save a reference
                // area each iteration and compare it to the area in the next iteration.
                if (area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea) {
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                }
            }
            // let the user know you found an object
            if (objectFound == true) {
                putText(mRgb1, "Tracking Object", Point(0, 50), 2, 1, Scalar(0, 255, 0), 2);
                // draw the object's location on screen
                drawObject(x, y, mRgb1);
            }
        } else putText(mRgb1, "TOO MUCH NOISE! ADJUST FILTER", Point(0, 50), 1, 2, Scalar(0, 0, 255), 2);
    }
}
void drawObject(int x, int y, Mat &frame){
    // use some of the OpenCV drawing functions to draw crosshairs
    // on your tracked image!
    // UPDATE: JUNE 18TH, 2013
    // added 'if' and 'else' statements to prevent
    // memory errors from drawing off the screen (i.e. (-25,-25) is not within the window!)
    circle(frame, Point(x, y), 20, Scalar(0, 255, 0), 2);
    if (y - 25 > 0)
        line(frame, Point(x, y), Point(x, y - 25), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(x, 0), Scalar(0, 255, 0), 2);
    if (y + 25 < FRAME_HEIGHT)
        line(frame, Point(x, y), Point(x, y + 25), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(x, FRAME_HEIGHT), Scalar(0, 255, 0), 2);
    if (x - 25 > 0)
        line(frame, Point(x, y), Point(x - 25, y), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(0, y), Scalar(0, 255, 0), 2);
    if (x + 25 < FRAME_WIDTH)
        line(frame, Point(x, y), Point(x + 25, y), Scalar(0, 255, 0), 2);
    else line(frame, Point(x, y), Point(FRAME_WIDTH, y), Scalar(0, 255, 0), 2);
    putText(frame, intToString(x) + "," + intToString(y), Point(x, y + 30), 1, 1, Scalar(0, 255, 0), 2);
}
How can I add onto this code to get the trajectory of an object, as shown in the second video? Any suggestion would be much appreciated. Thank you.

http://opencv-srf.blogspot.co.uk/2010/09/object-detection-using-color-seperation.html
Found it. When doing this on Android, you need to make sure that lastX and lastY are updated as well.
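A minimal sketch of that idea in Java for Android, assuming trajMat is a member Mat of the same size and type as the camera frame (initialized to zeros) and lastX/lastY are member ints initialized to -1:

if (objectFound) {
    if (lastX >= 0 && lastY >= 0) {
        // draw a segment from the previous position to the current one on the persistent overlay
        Core.line(trajMat, new Point(lastX, lastY), new Point(x, y), new Scalar(0, 255, 0), 2);
    }
    // update every frame, otherwise the path never advances
    lastX = x;
    lastY = y;
}
// blend the accumulated trajectory onto the current frame
Core.add(trajMat, mRgb1, mRgb1);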

Related

Detection of four corners of a document under different circumstances

I have tried two methodologies, as follows:
1. convert the image to a Mat
2. apply Gaussian blur
3. then Canny edge detection
4. find contours
The problems with this method are:
- too many contours are detected
- mostly open contours
- it doesn't detect what I want to detect
Then I changed my approach and tried adaptive thresholding after a Gaussian/median blur. It is much better, and I am able to detect the corners in 50% of cases.
The current problem I am facing is that the page detection requires a contrasting, plain background without any reflections. I think that is too idealistic for real-world use.
This is where I would like some help. Even a direction towards the solution is highly appreciated, especially in Java. Thanks in anticipation.
It works absolutely fine with a significantly contrasting background, like this:
Detected 4 corners
This picture gives trouble because the background isn't exactly the most contrasting:
Initial largest contour found
Update: Median blur did not help much, so I traced the cause and found that the page boundary was being detected in bits and pieces rather than as a single contour, so the largest detected contour was only a part of the page boundary. I therefore performed some morphological operations to close the relatively small gaps, and the resulting largest contour is definitely improved, but it is still not optimal. Any ideas how I can close the big gaps?
morphed original picture
largest contour found in the morphed image
P.S. Morphing the image in ideal scenarios has led to the detection of false contour boundaries, so any condition that can be checked before morphing an image is also a bonus. Thank you.
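The morphological closing described above could look like this in OpenCV for Java; binaryEdges stands for the thresholded edge image, and the kernel size is a guess that needs tuning per image:

// close small gaps in the page boundary before searching for the largest contour
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(15, 15));
Imgproc.morphologyEx(binaryEdges, binaryEdges, Imgproc.MORPH_CLOSE, kernel);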
If you use methods like these:
public static RotatedRect getBestRectByArea(List<RotatedRect> boundingRects) {
    RotatedRect bestRect = null;
    if (boundingRects.size() >= 1) {
        RotatedRect boundingRect;
        Point[] vertices = new Point[4];
        Rect rect;
        double maxArea;
        int ixMaxArea = 0;
        // find the best rect by area
        boundingRect = boundingRects.get(ixMaxArea);
        boundingRect.points(vertices);
        rect = Imgproc.boundingRect(new MatOfPoint(vertices));
        maxArea = rect.area();
        for (int ix = 1; ix < boundingRects.size(); ix++) {
            boundingRect = boundingRects.get(ix);
            boundingRect.points(vertices);
            rect = Imgproc.boundingRect(new MatOfPoint(vertices));
            if (rect.area() > maxArea) {
                maxArea = rect.area();
                ixMaxArea = ix;
            }
        }
        bestRect = boundingRects.get(ixMaxArea);
    }
    return bestRect;
}
private static Bitmap findROI(Bitmap sourceBitmap) {
    Bitmap roiBitmap = Bitmap.createBitmap(sourceBitmap.getWidth(), sourceBitmap.getHeight(), Bitmap.Config.ARGB_8888);
    // note: the Mat constructor takes (rows, cols), i.e. (height, width)
    Mat sourceMat = new Mat(sourceBitmap.getHeight(), sourceBitmap.getWidth(), CV_8UC3);
    Utils.bitmapToMat(sourceBitmap, sourceMat);
    final Mat mat = new Mat();
    sourceMat.copyTo(mat);
    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGB2GRAY);
    Imgproc.threshold(mat, mat, 146, 250, Imgproc.THRESH_BINARY);
    // find contours
    List<MatOfPoint> contours = new ArrayList<>();
    List<RotatedRect> boundingRects = new ArrayList<>();
    Imgproc.findContours(mat, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // find appropriate bounding rectangles
    for (MatOfPoint contour : contours) {
        MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
        RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
        boundingRects.add(boundingRect);
    }
    RotatedRect documentRect = getBestRectByArea(boundingRects);
    if (documentRect != null) {
        Point[] rectPoints = new Point[4];
        documentRect.points(rectPoints);
        for (int i = 0; i < 4; ++i) {
            Imgproc.line(sourceMat, rectPoints[i], rectPoints[(i + 1) % 4], ROI_COLOR, ROI_WIDTH);
        }
    }
    Utils.matToBitmap(sourceMat, roiBitmap);
    return roiBitmap;
}
you can achieve results like these for your source images:
If you adjust the threshold values and apply filters, you can achieve even better results.
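For instance, swapping the fixed threshold for an adaptive one is a common refinement; the block size and constant below are guesses to tune:

// threshold relative to the local neighbourhood instead of one global value
Imgproc.adaptiveThreshold(mat, mat, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
        Imgproc.THRESH_BINARY, 51, 10);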
You can pick a single contour by using one or both of:
Use boundingRect() and contourArea() to evaluate the squareness of each contour. boundingRect() returns axis-aligned rectangles; to handle arbitrary rotation better, use minAreaRect(), which returns optimally rotated ones.
Use Cv.ApproxPoly iteratively to reduce the contour to a 4-sided shape:
var approxIter = 1;
while (true)
{
    var approxCurve = Cv.ApproxPoly(largestContour, 0, null, ApproxPolyMethod.DP, approxIter, true);
    var approxCurvePointsTmp = new[] { approxCurve.Select(p => new CvPoint2D32f((int)p.Value.X, (int)p.Value.Y)).ToArray() }.ToArray();
    if (approxCurvePointsTmp[0].Length == 4)
    {
        corners = approxCurvePointsTmp[0];
        break;
    }
    else if (approxCurvePointsTmp[0].Length < 4)
        throw new InvalidOperationException("Failed to decimate corner points");
    approxIter++;
}
However, neither of these will help if contour detection gives you two separate contours due to noise or low contrast.
I think it would be possible to use the Hough line transform to help detect cases where a line has been split into two contours.
If so, the search could be repeated for all combinations of joined contours to see whether a bigger / more rectangular match is found.
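In the Java OpenCV API, a probabilistic Hough transform over the edge image would recover line segments that span such gaps; binaryEdges is an assumed edge image and the parameters are guesses:

// rho = 1 px, theta = 1 degree, 50 votes, min length 50 px, max gap 10 px between collinear segments
Mat lines = new Mat();
Imgproc.HoughLinesP(binaryEdges, lines, 1, Math.PI / 180, 50, 50, 10);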
Stop relying on edge detection, the worst methodology in the universe, and switch to some form of image segmentation.
The paper is white and the background is contrasting: that is the information you should use.
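A minimal sketch of that idea with the Java OpenCV API, treating the page as the bright, low-saturation region; src is the input image and the thresholds are assumptions to tune:

Mat hsv = new Mat();
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
// white paper: any hue, low saturation, high value
Mat paperMask = new Mat();
Core.inRange(hsv, new Scalar(0, 0, 180), new Scalar(180, 60, 255), paperMask);
// the largest contour of paperMask should now outline the page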

OpenCV speed traffic sign detection

I have a problem detecting speed traffic signs with OpenCV 2.4 for Android.
I do the following:
"capture frame -> convert it to HSV -> extract red areas -> detect signs with ellipse detection"
So far, ellipse detection works perfectly as long as the picture quality is good.
But as you can see in the pictures below, the red extraction does not work well, in my opinion because of the poor quality of the picture frames.
Converting the original image to HSV:
Imgproc.cvtColor(this.source, this.source, Imgproc.COLOR_RGB2HSV, 3);
Extracting red colors:
Core.inRange(this.source, new Scalar(this.h,this.s,this.v), new Scalar(230,180,180), this.source);
So my question is: is there another way of detecting a traffic sign like this, or of extracting the red areas out of it, which, by the way, can be very faint, as in the last picture?
This is the original image:
This is converted to HSV; as you can see, the red areas look the same color as the nearby trees. That is how I am supposed to tell it is red, but I can't.
Converted to HSV:
This is with the red colors extracted. If the colors were correct, I should get an almost perfect circle/ellipse around the sign, but it is incomplete due to the false colors.
Result after extraction:
Ellipse method:
private void findEllipses(Mat input){
    Mat thresholdOutput = new Mat();
    int thresh = 150;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    MatOfInt4 hierarchy = new MatOfInt4();
    Imgproc.threshold(source, thresholdOutput, thresh, 255, Imgproc.THRESH_BINARY);
    //Imgproc.Canny(source, thresholdOutput, 50, 180);
    Imgproc.findContours(source, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    RotatedRect[] minEllipse = new RotatedRect[contours.size()];
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint2f temp = new MatOfPoint2f(contours.get(i).toArray());
        if (temp.size().height > minEllipseSize && temp.size().height < maxEllipseSize) {
            double a = Imgproc.fitEllipse(temp).size.height;
            double b = Imgproc.fitEllipse(temp).size.width;
            // keep only near-circular ellipses
            if (Math.abs(a - b) < 10)
                minEllipse[i] = Imgproc.fitEllipse(temp);
        }
    }
    detectedObjects.clear();
    for (int i = 0; i < contours.size(); i++) {
        Scalar color = new Scalar(180, 255, 180);
        if (minEllipse[i] != null) {
            detectedObjects.add(new DetectedObject(minEllipse[i].center));
            Core.ellipse(source, minEllipse[i], color, 2, 8);
        }
    }
}
Problematic sign:
You can find reviews of traffic sign detection methods here and here.
You'll see that there are two ways to achieve this:
Color-based (like what you're doing now)
Shape-based
In my experience, I have found that shape-based methods work pretty well, because the color may change a lot under different lighting conditions, camera quality, etc.
Since you need to detect speed traffic signs, which I assume are always circular, you can use an ellipse detector to find all circular objects in your image and then apply some validation to determine whether each one is a traffic sign.
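The validation can be as simple as checking that the fitted ellipse is nearly circular and large enough; contour2f is an assumed MatOfPoint2f of a detected contour (with at least five points, as fitEllipse requires) and the thresholds are illustrative:

RotatedRect e = Imgproc.fitEllipse(contour2f);
double ratio = e.size.width / e.size.height;
// near-circular and bigger than some minimum area
boolean looksLikeSign = ratio > 0.75 && ratio < 1.33 && e.size.area() > minSignArea;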
Why ellipse detection?
Well, since you're looking for perspective-distorted circles, you are in fact looking for ellipses. Real-time ellipse detection is an interesting (although limited) research topic. I'll point you to two papers with C++ source code available (which you can use in your app through native JNI calls):
L. Libuda, I. Grothues, K.-F. Kraiss, "Ellipse detection in digital image data using geometric features", in: J. Braz, A. Ranchordas, H. Araújo, J. Jorge (Eds.), Advances in Computer Graphics and Computer Vision, volume 4 of Communications in Computer and Information Science, Springer Berlin Heidelberg, 2007, pp. 229-239. link, code
M. Fornaciari, A. Prati, R. Cucchiara, "A fast and effective ellipse detector for embedded vision applications", Pattern Recognition, 2014. link, code
UPDATE
I tried method 2 without any preprocessing. You can see that at least the sign with the red border is detected very well:
Referring to your text:

This is converted to HSV; as you can see, the red areas look the same color as the nearby trees. That is how I am supposed to tell it is red, but I can't.
I want to show you my result of basically doing what you did (these simple operations should be easy to transfer to Android OpenCV):
// convert to HSV
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv,channels);
// in OpenCV, hue values are divided by 2 to fit the 8-bit range
float red1 = 25/2.0f;
// red has one part at the beginning and one part at the end of the hue range (I assume 0° to 25° and 335° to 360°)
float red2 = (360-25)/2.0f;
// compute both thresholds
cv::Mat thres1 = channels[0] < red1;
cv::Mat thres2 = channels[0] > red2;
// choose some minimum saturation
cv::Mat saturationThres = channels[1] > 50;
// combine the results
cv::Mat redMask = (thres1 | thres2) & saturationThres;
// display result
cv::imshow("red", redMask);
These are my results:
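If it helps, roughly the same mask can be built on Android with Core.inRange; an untested sketch using the same hue and saturation assumptions as above (input is an RGB frame):

Mat hsv = new Mat();
Imgproc.cvtColor(input, hsv, Imgproc.COLOR_RGB2HSV);
Mat lowerRed = new Mat(), upperRed = new Mat(), redMask = new Mat();
// OpenCV 8-bit hue runs 0..180, so 25 degrees ~ 12 and 335 degrees ~ 167; saturation must exceed 50
Core.inRange(hsv, new Scalar(0, 51, 0), new Scalar(12, 255, 255), lowerRed);
Core.inRange(hsv, new Scalar(167, 51, 0), new Scalar(180, 255, 255), upperRed);
Core.bitwise_or(lowerRed, upperRed, redMask);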
Regarding your result: keep in mind that findContours alters the input image, so you may well have extracted the ellipse but just don't see it anymore if you saved the image after calling findContours.
The relevant part of your function, with the line in question added as a comment (the rest is unchanged from what you posted):

Imgproc.threshold(source, thresholdOutput, thresh, 255, Imgproc.THRESH_BINARY);
//Imgproc.Canny(source, thresholdOutput, 50, 180);
Imgproc.findContours(source, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
// source = thresholdOutput;

Note that findContours reads and modifies source directly here, while thresholdOutput is never actually used.
Have you tried using OpenCV ORB? It works really well.
I created a Haar cascade for a traffic sign (a roundabout in my case) and used OpenCV ORB to match features and remove any false positives.
For image recognition I used Google's TensorFlow, and the results were spectacular.
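A rough sketch of that ORB verification step with the OpenCV 2.4 Java API; templateGray and candidateGray are assumed grayscale Mats of the reference sign and the detected region:

FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfKeyPoint kpTemplate = new MatOfKeyPoint(), kpCandidate = new MatOfKeyPoint();
Mat descTemplate = new Mat(), descCandidate = new Mat();
detector.detect(templateGray, kpTemplate);
extractor.compute(templateGray, kpTemplate, descTemplate);
detector.detect(candidateGray, kpCandidate);
extractor.compute(candidateGray, kpCandidate, descCandidate);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descTemplate, descCandidate, matches);
// treat a detection with too few good matches as a false positive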

GLSL: How to calculate a fragment's output RGB value based on Photoshop's curve value?

I am working on image editing using OpenGL in Android, and I have applied a filter to an image using a Photoshop curve. Now I want to reproduce the same effect in Android using GLSL. Is there a formula to calculate a single fragment's color from the Photoshop curve's output value?
EDIT
The math behind the Photoshop curve has already been answered in this question:
How to recreate the math behind photoshop curves
but I am not very clear on how to reproduce the same in a GLSL fragment shader.
Screenshot of my Photoshop curve
You're after a function fragColour = curves(inColour, constants...). If you have just the one curve for red, green and blue, you apply the same curve to each channel individually. This answer has a link (below) to code which plots points along the function. The key line is:
double y = ...
which you'd return from curves. The variable x in the loop is your inColour. All you need now are the constants, which come from the points and the second-derivative sd arrays; these you'll have to pass in as uniforms. The function first has to figure out which pair of points each colour x lies between (finding cur, next, sd[i] and sd[i+1]), then evaluate and return y.
EDIT:
If you just want to apply some curve you've created in Photoshop, the problem is much simpler. The easiest way is to create a simple function that gives a similar shape. I use these as a starting point. A gamma-correction curve is also quite common.
This is overkill, but if you do need a more exact result, you could create an image with a linear ramp (e.g. 256 pixels from black to white), apply your filter to it in Photoshop, and the result becomes a lookup table. Passing all 256 values to a shader is expensive, so if it's a smooth curve you could try some curve-fitting tools (for example).
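A possible sketch of the lookup-table route on Android; "ramp_filtered.png" is an assumed 256x1 asset containing the linear ramp after your Photoshop filter was applied to it:

int[] buildLut(Context context) throws IOException {
    Bitmap ramp = BitmapFactory.decodeStream(context.getAssets().open("ramp_filtered.png"));
    int[] lut = new int[256];
    for (int x = 0; x < 256; x++) {
        // the curve was applied identically to R, G and B, so sampling red is enough
        lut[x] = Color.red(ramp.getPixel(x, 0));
    }
    return lut;
}

The table could then be uploaded as a 256x1 texture and sampled per channel in the fragment shader, e.g. fragColour.r = texture2D(lutTex, vec2(inColour.r, 0.5)).r;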
Once you have a function, simply apply it to your colour in GLSL. Applying a gamma curve, for example, is done like this:
fragColour = vec4(pow(inColour.rgb, 1.0 / gamma.rgb), inColour.a);
EDIT2:
The curve you have looks very similar to this:
fragColour = vec4(pow(inColour.rgb, 1.0 / vec3(0.6)), inColour.a);
Or even simpler:
fragColour = vec4(inColour.rgb * inColour.rgb, inColour.a);
Just in case the link dies, I'll copy the code here (note that I haven't tested it):

Point[] points = /* list of points, sorted by increasing "x" */;
double[] sd = secondDerivative(points);
for (int i = 0; i < points.length - 1; i++) {
    Point cur = points[i];
    Point next = points[i + 1];
    for (int x = cur.x; x < next.x; x++) {
        double t = (double)(x - cur.x) / (next.x - cur.x);
        double a = 1 - t;
        double b = t;
        double h = next.x - cur.x;
        // cubic spline interpolation between cur and next using the second derivatives
        double y = a*cur.y + b*next.y + (h*h/6) * ((a*a*a - a)*sd[i] + (b*b*b - b)*sd[i+1]);
        draw(x, y); /* or any other use */
    }
}
And the second derivative:
public static double[] secondDerivative(Point... P) {
    int n = P.length;
    // build the tridiagonal system
    // (assume 0 boundary conditions: y2[0]=y2[n-1]=0)
    double[][] matrix = new double[n][3];
    double[] result = new double[n];
    matrix[0][1] = 1;
    for (int i = 1; i < n - 1; i++) {
        matrix[i][0] = (double)(P[i].x - P[i-1].x) / 6;
        matrix[i][1] = (double)(P[i+1].x - P[i-1].x) / 3;
        matrix[i][2] = (double)(P[i+1].x - P[i].x) / 6;
        result[i] = (double)(P[i+1].y - P[i].y) / (P[i+1].x - P[i].x) - (double)(P[i].y - P[i-1].y) / (P[i].x - P[i-1].x);
    }
    matrix[n-1][1] = 1;
    // solving pass 1 (top -> bottom)
    for (int i = 1; i < n; i++) {
        double k = matrix[i][0] / matrix[i-1][1];
        matrix[i][1] -= k * matrix[i-1][2];
        matrix[i][0] = 0;
        result[i] -= k * result[i-1];
    }
    // solving pass 2 (bottom -> top)
    for (int i = n - 2; i >= 0; i--) {
        double k = matrix[i][2] / matrix[i+1][1];
        matrix[i][1] -= k * matrix[i+1][0];
        matrix[i][2] = 0;
        result[i] -= k * result[i+1];
    }
    // return the second-derivative value for each point P
    double[] y2 = new double[n];
    for (int i = 0; i < n; i++) y2[i] = result[i] / matrix[i][1];
    return y2;
}

Android OpenGL ES 2: Draw frame by frame

I am quite new to OpenGL ES 2.0 on Android. I am working on a project which draws a few plane indicators on screen (like an altimeter, a compass, etc.). After doing the tutorial from the official Google developer site (http://developer.android.com/training/graphics/opengl/index.html) I just continued along this path, drawing circles, triangles, squares, etc. (only 2D stuff). I can make the drawn objects move using rotation and translation matrices, but the only way I know how to do this (apart from how they did it in the tutorial) is like this, in the onDrawFrame() method of my renderer class:
// set values for all indicators
try {
    Thread.sleep(1);
    // for roll + pitch:
    if (roll < 90) {
        roll += 1.5f;
    } else roll = 0;
    if (pitch < 90) {
        pitch += 0.5f;
    } else pitch = 0;
    // for the compass:
    if (compassDeg > 360) compassDeg = 0;
    else compassDeg += 1;
    // for the altimeter:
    if (realAltitude >= 20000) realAltitude = 0;
    else realAltitude += 12;
    // for the speedometer:
    if (realSpeed >= 161) realSpeed = 0;
    else realSpeed += 3;
} catch (InterruptedException e) {
    e.printStackTrace();
}
roll, pitch, compassDeg, speed, etc. are the parameters the indicators receive, and I designed the indicators to move accordingly (if compassDeg = 0, for example, the compass will point north, and so on). These parameters will eventually be received via Bluetooth, but for now I am modifying them from the code directly because I don't have a Bluetooth implementation yet.
I am pretty sure this is not the best way to do it: sometimes the drawn objects stutter and seem to go back a few frames, then forward again, and I don't think pausing the drawing method is a good idea in general.
I've seen that in the tutorial I mentioned at the beginning they use something like this:
//Use the following code to generate constant rotation.
//Leave this code out when using TouchEvents.
long time = SystemClock.uptimeMillis() % 4000L;
float contAngle = -0.090f * ((int) time);
Matrix.setRotateM(contRotationMatrix, 0, contAngle, 0, 0, -1.0f);
Matrix.multiplyMM(contMVPMatrix, 0, mMVPMatrix4, 0, contRotationMatrix, 0);
which I still find somewhat opaque; there has to be a more straightforward way to specify how to draw each frame, and to rotate and translate objects frame by frame.
So my question is: how do I make everything move frame by frame, or at least, how do I find out when one frame has finished drawing?
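One common pattern is to advance the animation by the elapsed wall-clock time in onDrawFrame rather than by a fixed step per frame; a minimal sketch, where lastFrameTime is an assumed member field initialized to SystemClock.uptimeMillis():

@Override
public void onDrawFrame(GL10 unused) {
    long now = SystemClock.uptimeMillis();
    float dt = (now - lastFrameTime) / 1000f; // seconds since the previous frame
    lastFrameTime = now;
    // e.g. rotate the compass at 45 degrees per second, independent of the frame rate
    compassDeg = (compassDeg + 45f * dt) % 360f;
    // ... update roll, pitch, altitude and speed the same way, then issue the draw calls
}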

Optical flow in Android

We have been working with OpenCV for two weeks, trying to make it run on Android.
Do you know where we can find an Android implementation of optical flow? It would be nice if it were implemented using OpenCV.
openFrameworks has OpenCV baked in, as well as many other interesting libraries. It has a very elegant structure, and I have used it with Android to make a virtual mouse for the phone using motion estimation from the camera.
See the Android ports here: http://openframeworks.cc/setup/android-studio/
It seems they recently added support for Android Studio; otherwise, Eclipse works great.
Try this:
@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    if (mMOP2fptsPrev.rows() == 0) {
        //Log.d("Baz", "First time opflow");
        // first time through the loop, so we need the prev and this mats
        // plus the prev points
        // get this mat
        Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
        // copy that to the prev mat
        matOpFlowThis.copyTo(matOpFlowPrev);
        // get prev corners
        Imgproc.goodFeaturesToTrack(matOpFlowPrev, MOPcorners, iGFFTMax, 0.05, 20);
        mMOP2fptsPrev.fromArray(MOPcorners.toArray());
        // get a safe copy of these corners
        mMOP2fptsPrev.copyTo(mMOP2fptsSafe);
    } else {
        //Log.d("Baz", "Opflow");
        // we've been through before, so this mat is valid.
        // Copy it to the prev mat
        matOpFlowThis.copyTo(matOpFlowPrev);
        // get this mat
        Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
        // get the corners for this mat
        Imgproc.goodFeaturesToTrack(matOpFlowThis, MOPcorners, iGFFTMax, 0.05, 20);
        mMOP2fptsThis.fromArray(MOPcorners.toArray());
        // retrieve the corners from the prev mat
        // (saves calculating them again)
        mMOP2fptsSafe.copyTo(mMOP2fptsPrev);
        // and save these corners for the next time through
        mMOP2fptsThis.copyTo(mMOP2fptsSafe);
    }
    /*
    Parameters:
        prevImg  first 8-bit input image
        nextImg  second input image
        prevPts  vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.
        nextPts  output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of the input features in the second image; when the OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as the input.
        status   output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding feature has been found, otherwise it is set to 0.
        err      output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found then the error is not defined (use the status parameter to find such cases).
    */
    Video.calcOpticalFlowPyrLK(matOpFlowPrev, matOpFlowThis, mMOP2fptsPrev, mMOP2fptsThis, mMOBStatus, mMOFerr);
    cornersPrev = mMOP2fptsPrev.toList();
    cornersThis = mMOP2fptsThis.toList();
    byteStatus = mMOBStatus.toList();
    y = byteStatus.size() - 1;
    for (x = 0; x < y; x++) {
        if (byteStatus.get(x) == 1) {
            pt = cornersThis.get(x);
            pt2 = cornersPrev.get(x);
            // draw the current point and a line back to where it was in the previous frame
            Core.circle(mRgba, pt, 5, colorRed, iLineThickness - 1);
            Core.line(mRgba, pt, pt2, colorRed, iLineThickness);
        }
    }
    return mRgba;
}
