I am trying to write an Android app that processes the camera feed from the device in real time to detect contours. Despite following many examples online, when I run the following code an exception is thrown (also shown below).
Code:
Bitmap image = mTextureView.getBitmap();
Mat mat = new Mat();
Mat matConverted = new Mat();
Utils.bitmapToMat(image, mat);
mat.convertTo(matConverted, CvType.CV_32SC1);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(matConverted, contours, new Mat(), Imgproc.RETR_FLOODFILL,
Imgproc.CHAIN_APPROX_SIMPLE);
Exception:
CvException [org.opencv.core.CvException: cv::Exception:
/Volumes/Linux/builds/master_pack-android/opencv/modules/imgproc/src/contours.cpp:198:
error: (-210) [Start]FindContours supports only CV_8UC1 images when mode !=
CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function _CvContourScanner*
cvStartFindContours(void*, CvMemStorage*, int, int, int, CvPoint)
Am I missing something obvious here?
Try
Mat matConverted = new Mat(mat.size(), CvType.CV_8UC1);
In my case, that worked.
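For context on why the type matters: findContours only accepts CV_8UC1 input unless the mode is RETR_FLOODFILL (which wants CV_32SC1), and convertTo changes only the depth, not the channel count, so the Mat built from the RGBA bitmap stays 4-channel. A minimal sketch of the standard-retrieval route (assuming flood-fill retrieval isn't actually needed here) converts to a single-channel grey image first:
Bitmap image = mTextureView.getBitmap();
Mat mat = new Mat();
Utils.bitmapToMat(image, mat);                        // RGBA, CV_8UC4
Mat gray = new Mat();
Imgproc.cvtColor(mat, gray, Imgproc.COLOR_RGBA2GRAY); // single channel, CV_8UC1
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(gray, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);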
Related
I am developing an Android application using OpenCV in which I have to implement background subtraction. I am able to see some frames in grayscale with the background removed, but it only lasts for a while and then the application crashes.
Technique used: BackgroundSubtractorMOG2
This is my snippet of onCameraFrame:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
Mat frame = inputFrame.rgba();
Mat mRgb = new Mat();
Mat mFGMask = new Mat();
BackgroundSubtractorMOG2 mog2 = Video.createBackgroundSubtractorMOG2();
Imgproc.cvtColor(frame, mRgb, Imgproc.COLOR_RGBA2RGB);
mog2.apply(mRgb, mFGMask);
Imgproc.cvtColor(mFGMask, frame, Imgproc.COLOR_GRAY2RGBA);
return frame;
}
Thanks in advance.
I was able to solve my issue.
The new Mat() calls on every frame were causing a memory-management issue. The Mats have to be initialized just once in onCameraViewStarted, and the returned Mat has to be released in onCameraViewStopped. After modifying the code as per suggestions from the OpenCV community, I was able to run my application properly.
1. Declare the fields first
private Mat mRgb;
private Mat mFGMask;
private Mat frame;
private BackgroundSubtractorMOG2 mog2;
2. Initialize in onCameraViewStarted
public void onCameraViewStarted(int width, int height) {
mRgb = new Mat();
mFGMask = new Mat();
mog2 = Video.createBackgroundSubtractorMOG2();
}
3. Release the returned frame in onCameraViewStopped
public void onCameraViewStopped() {
frame.release();
}
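The onCameraFrame callback then reuses those fields instead of allocating new Mats on every frame. A minimal sketch of what that looks like (same logic as the original snippet, just without the per-frame allocations):
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    frame = inputFrame.rgba();
    Imgproc.cvtColor(frame, mRgb, Imgproc.COLOR_RGBA2RGB); // drop the alpha channel before apply()
    mog2.apply(mRgb, mFGMask);                             // foreground mask, CV_8UC1
    Imgproc.cvtColor(mFGMask, frame, Imgproc.COLOR_GRAY2RGBA);
    return frame;
}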
For full code: https://github.com/rishirajrsawant/OpenCV-Background-Subtraction
How can I draw the contours and create a bounding rectangle on the image in Android?
What I have so far:
Mat imageMat = new Mat();
Utils.bitmapToMat(photo, imageMat);
Imgproc.cvtColor(imageMat, imageMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.adaptiveThreshold(imageMat, imageMat, 255,
Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 7, 7);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(imageMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Bitmap resultBitmap = Bitmap.createBitmap(imageMat.cols(), imageMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(imageMat, resultBitmap);
((ImageView) findViewById(R.id.imageView)).setImageBitmap(resultBitmap);
Take a look at the documentation here for the drawContours method, and you can follow this example.
As for the bounding rectangle, have a look at the same documentation linked above and, following this example, you can do something like:
List<Rect> boundingRects = new ArrayList<>();
for (MatOfPoint points : contours) {
    boundingRects.add(Imgproc.boundingRect(points));
}
Although I have not tested the code (I can't at the moment), this should solve your problems; the examples are reasonable in my opinion and they're confirmed solutions for similar problems.
Let me know!
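Putting both together, a rough and untested sketch (assuming imageMat still holds the thresholded image from your snippet, and using Imgproc.rectangle, which lives in Core in the older 2.4 bindings) that draws every contour and its bounding rectangle onto a colour copy:
Mat drawing = new Mat();
Imgproc.cvtColor(imageMat, drawing, Imgproc.COLOR_GRAY2RGBA); // colour canvas to draw on
for (int i = 0; i < contours.size(); i++) {
    Imgproc.drawContours(drawing, contours, i, new Scalar(0, 255, 0, 255), 2);
    Rect box = Imgproc.boundingRect(contours.get(i));
    Imgproc.rectangle(drawing, box.tl(), box.br(), new Scalar(255, 0, 0, 255), 2);
}
Bitmap resultBitmap = Bitmap.createBitmap(drawing.cols(), drawing.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(drawing, resultBitmap);
((ImageView) findViewById(R.id.imageView)).setImageBitmap(resultBitmap);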
I am making an Android application that can detect an object in an image frame captured from a video.
The sample applications in OpenCV only have examples of real-time detection.
Additional info:
- I'm using a Haar classifier
As of now I'm storing the captured frames in an array of ImageViews. How can I use OpenCV to detect the object and draw a rectangle around it?
for(int i=0 ;i <6; i++)
{
ImageView imageView = (ImageView)findViewById(ids_of_images[i]);
imageView.setImageBitmap(retriever.getFrameAtTime(looper,MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
Log.e("MicroSeconds: ", ""+looper);
looper +=10000;
}
I hope you have integrated the OpenCV4Android library in your project.
Now you can convert the image to a Mat using the OpenCV function:
Mat srcMat = new Mat();
Utils.bitmapToMat(yourbitmap,srcMat);
Once you have the Mat, you can apply OpenCV functions to find rectangular objects in the image.
Now follow this code to detect rectangles:
Mat mGray = new Mat();
Imgproc.cvtColor(srcMat, mGray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(mGray, mGray, new Size(3, 3), 5, 10, Core.BORDER_DEFAULT);
// Otsu's method supplies a threshold value to feed into Canny
double otsuThreshold = Imgproc.threshold(mGray, new Mat(), 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Imgproc.Canny(mGray, mGray, otsuThreshold, otsuThreshold * 0.5, 3, true); // edge detection with the Canny algorithm
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(mGray, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Now you have the contours from the image. You can get the biggest one from them (see the sketch at the end of this answer), or draw them all using the drawContours() method:
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
    Imgproc.drawContours(srcMat, contours, contourIdx, new Scalar(0, 0, 255), -1);
}
And you're done! You can refer to this link:
Android using drawContours to fill region
Hope it will help!
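If you only want the single biggest object with a rectangle around it (as the question asks), a small sketch along these lines should work; contourArea, boundingRect and rectangle are standard Imgproc calls (rectangle is in Core in the older 2.4 bindings), and srcMat is the colour Mat from above:
double maxArea = 0;
MatOfPoint largest = null;
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largest = contour;
    }
}
if (largest != null) {
    Rect box = Imgproc.boundingRect(largest);
    Imgproc.rectangle(srcMat, box.tl(), box.br(), new Scalar(0, 255, 0), 3);
}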
I want to write an Android app that can track an object with OpenCV feature matching in real time.
For now I can do feature matching with two pictures; I want it to work in real time. Even if the camera frame rate will be pretty low, I still want to try it.
Any help, suggestion or references for me?
Edit:
I found this question and tried it like this:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
mGray = inputFrame.gray();
VideoCapture mcapture = new VideoCapture(0);
mcapture.open(Highgui.CV_CAP_ANDROID_COLOR_FRAME);
if(!mcapture.isOpened()){
Core.putText(mRgba, "Capture Fail", new Point(50, 50), BIND_AUTO_CREATE, BIND_AUTO_CREATE, Color_Green);
}else{
Mat frame = new Mat();
Imgproc.cvtColor(mRgba, frame, Imgproc.COLOR_RGB2GRAY);
mcapture.retrieve(frame, 3);
mRgba = frame;
}
return mRgba;
}
VideoCapture is not open. Any help?
I found this tutorial which explains eye tracking using Android. You can just follow it to use the main camera and track objects.
HTH
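A side note on the snippet in the question: with the camera-view callbacks you don't need a separate VideoCapture at all, because onCameraFrame already hands you the current frame. A rough sketch of real-time matching against a stored reference image using ORB and a Hamming-distance matcher (OpenCV 3.x+ Java bindings; the fields mOrb, mMatcher and mReferenceDescriptors are my own assumptions, created elsewhere with ORB.create() and DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING)):
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    mGray = inputFrame.gray();
    // Detect keypoints and compute descriptors on the live frame
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    Mat descriptors = new Mat();
    mOrb.detectAndCompute(mGray, new Mat(), keypoints, descriptors);
    // Match against the reference image's descriptors
    if (!descriptors.empty() && !mReferenceDescriptors.empty()) {
        MatOfDMatch matches = new MatOfDMatch();
        mMatcher.match(mReferenceDescriptors, descriptors, matches);
        // ... filter matches by distance and use the good ones for tracking
    }
    return mRgba;
}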
I'm trying to convert a bmp file into a Mat and then convert it to greyscale, but I'm having trouble getting it to work. Here's what I've got:
String filename = "/mnt/sdcard/DCIM/01.bmp";
Bitmap bmp = BitmapFactory.decodeFile(filename);
Mat imgToProcess = null;
Utils.bitmapToMat(bmp, imgToProcess);
But whenever that final line is used, the app just crashes (the rest of the time it continues on just fine).
The rest of the code was going to be:
Imgproc.cvtColor(imgToProcess, imgToProcess, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(imgToProcess, imgToProcess, Imgproc.COLOR_GRAY2RGBA, 4);
Utils.matToBitmap(imgToProcess, bmp);
I've no idea whether or not that works though, since I can't get the file converted to a Mat yet from the earlier part of the code. Looking at the documentation for Utils (found here) I'm using it correctly, but it's still not working.
Can anyone help me out here?
Change line:
Mat imgToProcess = null;
to this:
Mat imgToProcess = new Mat();
or this:
Mat imgToProcess = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8UC4);
And why don't you just use Highgui.imread instead?
Mat imgToProcess = Highgui.imread(filename);
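For completeness, a minimal sketch of the whole conversion with the first fix applied. Note that Utils.bitmapToMat produces an RGBA Mat, so COLOR_RGBA2GRAY is the safer first conversion here (the COLOR_BGR2GRAY call in the question assumes a 3-channel image):
Bitmap bmp = BitmapFactory.decodeFile("/mnt/sdcard/DCIM/01.bmp");
Mat imgToProcess = new Mat();
Utils.bitmapToMat(bmp, imgToProcess);                                     // CV_8UC4, RGBA
Imgproc.cvtColor(imgToProcess, imgToProcess, Imgproc.COLOR_RGBA2GRAY);    // to greyscale
Imgproc.cvtColor(imgToProcess, imgToProcess, Imgproc.COLOR_GRAY2RGBA, 4); // back to 4 channels
Utils.matToBitmap(imgToProcess, bmp);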