How to implement the signum function in OpenCV? - android

The dst = signum(src) function sets the values of all positive elements in src to 1, and the values of all negative elements to -1.
However, it seems that it is not possible to implement the signum() function by applying the OpenCV threshold() function. I do not want to traverse src either, because that is inefficient.

I don't know which language you are using, but in OpenCV C++, the signum function can be implemented as follows:
Mat signum(Mat src)
{
    // (src >= 0) yields 255 where true; & 1 reduces that to a 0/1 mask
    Mat dst = (src >= 0) & 1;
    // map {0, 1} to {-1, 1}: dst = 2*dst - 1, stored as float
    dst.convertTo(dst, CV_32F, 2.0, -1.0);
    return dst;
}
Of course, the returned matrix must have a floating-point or signed type to store the value -1.
Update:
The previous implementation returns only 1 or -1 depending on the input values, but according to the definition of signum, 0 should remain 0 in the output. So, taking a cue from this answer, the standard signum function can be implemented with OpenCV as follows:
Mat signum(Mat src)
{
    Mat z = Mat::zeros(src.size(), src.type());
    Mat a = (z < src) & 1;   // 1 where src > 0
    Mat b = (src < z) & 1;   // 1 where src < 0
    Mat dst;
    // dst = a - b, i.e. +1 for positive, -1 for negative, 0 for zero
    addWeighted(a, 1.0, b, -1.0, 0.0, dst, CV_32F);
    return dst;
}
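Since the question is tagged android, here is a minimal sketch of the same zero-preserving trick using OpenCV's Java bindings (my own untested translation; note that Core.compare writes 255 where the condition holds, so the weights divide by 255):
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public static Mat signum(Mat src) {
    Mat zeros = Mat.zeros(src.size(), src.type());
    Mat pos = new Mat();
    Mat neg = new Mat();
    Core.compare(src, zeros, pos, Core.CMP_GT); // 255 where src > 0
    Core.compare(src, zeros, neg, Core.CMP_LT); // 255 where src < 0
    Mat dst = new Mat();
    // (pos - neg) / 255 yields +1, -1, or 0 per element, stored as float
    Core.addWeighted(pos, 1.0 / 255.0, neg, -1.0 / 255.0, 0.0, dst, CvType.CV_32F);
    return dst;
}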

Related

Run inference with an openCV image

I have an Android project with OpenCV 4.0.1 and TFLite installed.
I want to run inference with a pretrained MobileNetV2 on a cv::Mat which I extracted and cropped from a CameraBridgeViewBase (Android style).
But it's kind of difficult.
I followed this example.
It runs the inference on a ByteBuffer variable called "imgData" (line 71, class: org.tensorflow.lite.examples.classification.tflite.Classifier).
imgData appears to be filled in the method called "convertBitmapToByteBuffer" from the same class (line 185), adding pixel by pixel from a bitmap that appears to be cropped shortly before.
private int[] intValues = new int[224 * 224];
Mat _croppedFace = new Mat(); // Cropped image from the CvCameraViewFrame.rgba() method.
float[][] outputVal = new float[1][1]; // Output value from my trained MobileNetV2 model
                                       // (I've changed the output in training, tested in Python)
// Following: https://stackoverflow.com/questions/13134682/convert-mat-to-bitmap-opencv-for-android
Bitmap bitmap = Bitmap.createBitmap(_croppedFace.cols(), _croppedFace.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(_croppedFace, bitmap);
convertBitmapToByteBuffer(bitmap); // This call should be used like the example one.
// runInference();
_tflite.run(imgData, outputVal);
But it looks like the input_shape of my NN is not correct, even though I'm following the MobileNet example, because my NN is a MobileNetV2.
I've solved the error, but I'm sure that it isn't the best way to do it.
Keras MobileNetV2's input_shape is: (nBatches, 224, 224, nChannels).
I just want to predict a single image, so nBatches == 1, and I'm working in RGB mode, so nChannels == 3.
// Nasty nasty, but it works. Why nBatches == 2? -- _croppedFace.shape() == (224, 224), 3 channels.
float[][][][] _inputValue = new float[2][_croppedFace.cols()][_croppedFace.rows()][3];
// Fill _inputValue
for (int i = 0; i < _croppedFace.cols(); ++i)
    for (int j = 0; j < _croppedFace.rows(); ++j)
        for (int z = 0; z < 3; ++z)
            _inputValue[0][i][j][z] = (float) _croppedFace.get(i, j)[z] / 255; // DL works better with 0:1 values.
/*
The output value has this shape, but I don't really know why.
I'm sure one of those 2s is nClasses (I'm working with 2 classes),
but I don't really know why the other one is there.
*/
float[][] outputVal = new float[2][2];
// TensorFlow Lite interpreter
_tflite.run(_inputValue, outputVal);
The Python prediction has the same shape:
[[XXXXXX, YYYYY]] <- sure about this for the last layer that I made; this is just a prototype NN.
Hope this helps someone, and also that someone can improve the answer, because this is not very optimized.
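One possible cleanup (my own untested suggestion, not part of the original answer): the TFLite Java Interpreter has a resizeInput() method, so you can shrink the input tensor to a true batch of 1 instead of allocating a phantom second batch:
// Untested sketch: assumes input tensor 0 and a 224x224 RGB model, as in the question.
_tflite.resizeInput(0, new int[]{1, 224, 224, 3});
float[][][][] inputValue = new float[1][224][224][3];
for (int row = 0; row < 224; ++row)
    for (int col = 0; col < 224; ++col) {
        double[] px = _croppedFace.get(row, col); // Mat.get takes (row, col)
        for (int c = 0; c < 3; ++c)
            inputValue[0][row][col][c] = (float) px[c] / 255f; // normalize to 0:1
    }
float[][] outputVal = new float[1][2]; // 1 batch, 2 classes
_tflite.run(inputValue, outputVal);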

How to track trajectory of moving object openCV C++

I am fairly new to the OpenCV libraries and I am trying to do real-time object detection for a school project in an Android app. I followed this tutorial (https://www.youtube.com/watch?v=bSeFrPrqZ2A) and I am able to detect objects by color on my Android phone. Now I am trying to map out the trajectory of the object, just like in this video (https://www.youtube.com/watch?v=QTYSRZD4vyI).
Below is some of the source code provided in the first YouTube video.
void searchForMovement(int& x, int& y, Mat& mRgb1, Mat& threshold){
    morphOps(threshold);
    Mat temp;
    threshold.copyTo(temp);
    // these two vectors are needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // find contours of the filtered image using the OpenCV findContours function
    // In OpenCV, finding contours is like finding a white object on a black background,
    // so the object to be found should be white and the background should be black.
    // CV_CHAIN_APPROX_SIMPLE stores only the end points of each contour segment
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        // if the number of objects is greater than MAX_NUM_OBJECTS, we have a noisy filter
        if (numObjects < MAX_NUM_OBJECTS) {
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                // if the area is less than 20 px by 20 px, it is probably just noise
                // if the area is the same as 3/2 of the image size, it's probably a bad filter
                // we only want the object with the largest area, so we save a reference area
                // each iteration and compare it to the area in the next iteration
                if (area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea) {
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                } else objectFound = false;
            }
            // let the user know you found an object
            if (objectFound == true) {
                putText(mRgb1, "Tracking Object", Point(0,50), 2, 1, Scalar(0,255,0), 2);
                // draw the object location on screen
                drawObject(x, y, mRgb1);
            }
        } else putText(mRgb1, "TOO MUCH NOISE! ADJUST FILTER", Point(0,50), 1, 2, Scalar(0,0,255), 2);
    }
}
void drawObject(int x, int y, Mat &frame){
    Mat traj;
    traj = frame;
    // use some of the OpenCV drawing functions to draw crosshairs
    // on your tracked image!
    // UPDATE: JUNE 18TH, 2013
    // added 'if' and 'else' statements to prevent
    // memory errors from writing off the screen (i.e. (-25,-25) is not within the window!)
    circle(frame, Point(x,y), 20, Scalar(0,255,0), 2);
    if (y - 25 > 0)
        line(frame, Point(x,y), Point(x,y-25), Scalar(0,255,0), 2);
    else line(traj, Point(x,y), Point(x,0), Scalar(0,255,0), 2);
    if (y + 25 < FRAME_HEIGHT)
        line(frame, Point(x,y), Point(x,y+25), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(x,FRAME_HEIGHT), Scalar(0,255,0), 2);
    if (x - 25 > 0)
        line(traj, Point(x,y), Point(x-25,y), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(0,y), Scalar(0,255,0), 2);
    if (x + 25 < FRAME_WIDTH)
        line(frame, Point(x,y), Point(x+25,y), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(FRAME_WIDTH,y), Scalar(0,255,0), 2);
    // add(traj, frame, frame);
    putText(frame, intToString(x) + "," + intToString(y), Point(x,y+30), 1, 1, Scalar(0,255,0), 2);
}
How can I add onto this code to get the trajectory of an object as shown in the 2nd video? Any suggestion would be much appreciated. Thank you.
http://opencv-srf.blogspot.co.uk/2010/09/object-detection-using-color-seperation.html
Found it. When doing it on Android, you need to make sure that lastX and lastY are updated as well.
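For reference, here is a rough sketch of that idea in OpenCV-for-Android Java (my own untested adaptation, not from the tutorial): keep lastX/lastY as fields so they persist across frames, draw each new segment onto a persistent overlay Mat, and blend the overlay onto every frame.
private Mat trajectory;             // persistent overlay, same size/type as the frame
private int lastX = -1, lastY = -1; // fields, so they survive between frames

void updateTrajectory(Mat frame, int x, int y) {
    if (trajectory == null)
        trajectory = Mat.zeros(frame.size(), frame.type());
    if (lastX >= 0 && lastY >= 0)
        // append the newest segment of the path (Core.line in the OpenCV 2.4 Java API)
        Core.line(trajectory, new Point(lastX, lastY), new Point(x, y),
                new Scalar(0, 255, 0), 2);
    lastX = x;
    lastY = y;
    Core.add(trajectory, frame, frame); // overlay the accumulated path on this frame
}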

OpenCV Android - error on match template

I've seen some questions here that are related to my error, such as this and this, and I know that I can't execute the Imgproc.matchTemplate() method if the image and the template don't have the same datatype. But I'm still confused about how to know what type of Mat I'm using.
Below is my code, which I adapted from the example here:
for (int i = 0; i < 24; i++) {
    arrDraw[i] = getResources().getIdentifier("let" + i, "drawable", getPackageName());
}
Mat mImage = input.submat(bigRect);
for (int i = 0; i < 24; i++) {
    Mat mTemplate = Utils.loadResource(this, arrDraw[i], Highgui.CV_LOAD_IMAGE_COLOR);
    Mat mResult = new Mat(mImage.rows(), mImage.cols(), CvType.CV_32FC1);
    Imgproc.matchTemplate(mImage, mTemplate, mResult, match_method);
    Core.normalize(mResult, mResult, 0, 1, Core.NORM_MINMAX, -1, new Mat());
    ... // further processing
}
So basically what I'm trying to do is take mImage from a submat of inputFrame, run template matching against 24 other pictures, and decide which has the best value (either lowest or highest, depending on the method). Yet the error shows this:
OpenCV Error: Assertion failed ((img.depth() == CV_8U || img.depth() == CV_32F) && img.type() == templ.type()) in void cv::matchTemplate(cv::InputArray, cv::InputArray, cv::OutputArray, int), file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/imgproc/src/templmatch.cpp, line 249
I tried initializing mImage and mTemplate with the same type first, but still no luck. Any advice? Thanks in advance.
The error is telling you that image and template have different types.
Assertion failed ... img.type() == templ.type() ....
I'd be willing to bet (a small amount) that mTemplate is CV_8UC3 BGR ordered.
From the code you posted it's not possible to tell what mImage's type is, but if it's extracted from a camera frame and you did something like:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat mRgba = inputFrame.rgba();
    ....
}
then it's likely to be CV_8UC4, RGBA ordered. Which is not the same type.
Also, I'm not sure what the behaviour of submat() is on a 3D or 4D input matrix; I think it's designed to operate only on 2D matrices, so you may find that it returns either a 2D matrix (CV_8UC2) or some undefined weirdness.
I'd suggest dumping the type() and depth() of both image and template before your matchTemplate( ... ) call.
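For example (a sketch, assuming mImage comes from inputFrame.rgba() and mTemplate was loaded as 3-channel BGR; the cvtColor call is my guess at the fix, not confirmed):
// log both types, then convert the template so the two Mats match
Log.d("match", "image: " + CvType.typeToString(mImage.type())
        + ", template: " + CvType.typeToString(mTemplate.type()));
if (mImage.type() != mTemplate.type())
    Imgproc.cvtColor(mTemplate, mTemplate, Imgproc.COLOR_BGR2RGBA);
Imgproc.matchTemplate(mImage, mTemplate, mResult, match_method);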

OpenCV HoughCircles cvRound

I have just followed an OpenCV example on circle detection: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows/8, 200, 100, 0, 0);
/// Draw the circles detected
for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    ...
However, I'm having a problem with Eclipse not accepting the function call
cvRound(circles[i][0])
Invalid arguments ' Candidates are: int cvRound(double) '
I have tried to add a number of include directories for GNU C and C++ in Properties -> C/C++ General -> Paths and Symbols, for example:
ndkroot/sources/cxx-stl..../include
the native/jni/include for OpenCV, etc.
But it still won't accept the cvRound function. Is there something I'm missing?
Thanks in advance.
The cvRound function is just a rounding function that converts a double value to an integer. Two ways:
1- You can write your own rounding function and use it:
int Round(double x){
    int y;
    // round half up (this simple version assumes a non-negative input,
    // which holds for the circle coordinates and radii here)
    if (x >= (int)x + 0.5)
        y = (int)x + 1;
    else
        y = (int)x;
    return y;
}
2- Include not only the C++ headers, but also the C API of OpenCV (include/opencv/).

Optical flow in Android

We have been working with OpenCV for two weeks, trying to make it work on Android.
Do you know where can we find an Android implementation of optical flow? It would be nice if it's implemented using OpenCV.
openFrameworks has OpenCV baked in, as well as many other interesting libraries. It has a very elegant structure, and I have used it on Android to make a virtual mouse for the phone using motion estimation from the camera.
See the Android ports here: http://openframeworks.cc/setup/android-studio/
It seems they recently added support for Android Studio; otherwise Eclipse works great.
Try this
@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    if (mMOP2fptsPrev.rows() == 0) {
        //Log.d("Baz", "First time opflow");
        // first time through the loop, so we need prev and this mats
        // plus prev points
        // get this mat
        Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
        // copy that to prev mat
        matOpFlowThis.copyTo(matOpFlowPrev);
        // get prev corners
        Imgproc.goodFeaturesToTrack(matOpFlowPrev, MOPcorners, iGFFTMax, 0.05, 20);
        mMOP2fptsPrev.fromArray(MOPcorners.toArray());
        // get a safe copy of these corners
        mMOP2fptsPrev.copyTo(mMOP2fptsSafe);
    } else {
        //Log.d("Baz", "Opflow");
        // we've been through before, so
        // this mat is valid; copy it to prev mat
        matOpFlowThis.copyTo(matOpFlowPrev);
        // get this mat
        Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
        // get the corners for this mat
        Imgproc.goodFeaturesToTrack(matOpFlowThis, MOPcorners, iGFFTMax, 0.05, 20);
        mMOP2fptsThis.fromArray(MOPcorners.toArray());
        // retrieve the corners from the prev mat
        // (saves calculating them again)
        mMOP2fptsSafe.copyTo(mMOP2fptsPrev);
        // and save these corners for next time through
        mMOP2fptsThis.copyTo(mMOP2fptsSafe);
    }
    /*
    Parameters:
        prevImg  first 8-bit input image
        nextImg  second input image
        prevPts  vector of 2D points for which the flow needs to be found;
                 point coordinates must be single-precision floating-point numbers.
        nextPts  output vector of 2D points (with single-precision floating-point
                 coordinates) containing the calculated new positions of the input
                 features in the second image; when the OPTFLOW_USE_INITIAL_FLOW flag
                 is passed, the vector must have the same size as the input.
        status   output status vector (of unsigned chars); each element of the vector
                 is set to 1 if the flow for the corresponding feature has been found,
                 otherwise it is set to 0.
        err      output vector of errors; each element of the vector is set to an error
                 for the corresponding feature; the type of the error measure can be set
                 in the flags parameter; if the flow wasn't found then the error is not
                 defined (use the status parameter to find such cases).
    */
    Video.calcOpticalFlowPyrLK(matOpFlowPrev, matOpFlowThis, mMOP2fptsPrev, mMOP2fptsThis, mMOBStatus, mMOFerr);
    cornersPrev = mMOP2fptsPrev.toList();
    cornersThis = mMOP2fptsThis.toList();
    byteStatus = mMOBStatus.toList();
    y = byteStatus.size() - 1;
    for (x = 0; x < y; x++) {
        if (byteStatus.get(x) == 1) {
            pt = cornersThis.get(x);
            pt2 = cornersPrev.get(x);
            // draw the tracked point and a line back to its previous position
            Core.circle(mRgba, pt, 5, colorRed, iLineThickness - 1);
            Core.line(mRgba, pt, pt2, colorRed, iLineThickness);
        }
    }
    return mRgba;
}
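For completeness, the snippet assumes member fields roughly like these (my reconstruction from the names used above, not part of the original answer; the constant values are placeholders):
private Mat mRgba;
private Mat matOpFlowThis = new Mat();
private Mat matOpFlowPrev = new Mat();
private MatOfPoint MOPcorners = new MatOfPoint();
private MatOfPoint2f mMOP2fptsThis = new MatOfPoint2f();
private MatOfPoint2f mMOP2fptsPrev = new MatOfPoint2f();
private MatOfPoint2f mMOP2fptsSafe = new MatOfPoint2f();
private MatOfByte mMOBStatus = new MatOfByte();
private MatOfFloat mMOFerr = new MatOfFloat();
private int iGFFTMax = 40;          // max corners for goodFeaturesToTrack (placeholder)
private int iLineThickness = 3;     // placeholder
private Scalar colorRed = new Scalar(255, 0, 0, 255);
private List<Point> cornersPrev, cornersThis;
private List<Byte> byteStatus;
private Point pt, pt2;
private int x, y;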
