I want to write an Android app that can track an object with OpenCV feature matching in real time.
For now, I can do feature matching between two pictures. I want it to work in real time; even if the camera frame rate ends up pretty low, I still want to try it.
Any help, suggestions, or references for me?
Edit:
I found this question and I try it like this
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    mGray = inputFrame.gray();
    VideoCapture mcapture = new VideoCapture(0);
    mcapture.open(Highgui.CV_CAP_ANDROID_COLOR_FRAME);
    if (!mcapture.isOpened()) {
        Core.putText(mRgba, "Capture Fail", new Point(50, 50), BIND_AUTO_CREATE, BIND_AUTO_CREATE, Color_Green);
    } else {
        Mat frame = new Mat();
        Imgproc.cvtColor(mRgba, frame, Imgproc.COLOR_RGB2GRAY);
        mcapture.retrieve(frame, 3);
        mRgba = frame;
    }
    return mRgba;
}
VideoCapture is not opened. Any help?
I found this tutorial which explains eye tracking using Android. You can just follow it to use the main camera and track objects.
HTH
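As an addendum: since you already get each frame through onCameraFrame, there is no need to open a VideoCapture yourself; the frame is handed to you in inputFrame. Below is a rough, untested sketch of per-frame ORB matching against a reference image, assuming the OpenCV 3.x/4.x Java bindings. The fields mOrb, mMatcher, mTemplateGray, mTemplateKp and mTemplateDesc are my own placeholder names, and the distance filter of 50 is an arbitrary choice.
// Fields, assumed to be set up once (e.g. in onCameraViewStarted):
// private ORB mOrb = ORB.create();
// private DescriptorMatcher mMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
// private MatOfKeyPoint mTemplateKp = new MatOfKeyPoint();
// private Mat mTemplateDesc = new Mat();
// mOrb.detectAndCompute(mTemplateGray, new Mat(), mTemplateKp, mTemplateDesc);

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    Mat gray = inputFrame.gray();

    // detect keypoints and descriptors in the current frame
    MatOfKeyPoint frameKp = new MatOfKeyPoint();
    Mat frameDesc = new Mat();
    mOrb.detectAndCompute(gray, new Mat(), frameKp, frameDesc);

    if (!frameDesc.empty() && !mTemplateDesc.empty()) {
        // match the reference descriptors against the frame descriptors
        MatOfDMatch matches = new MatOfDMatch();
        mMatcher.match(mTemplateDesc, frameDesc, matches);

        // draw the matched keypoints of the current frame as circles
        List<KeyPoint> kpList = frameKp.toList();
        for (DMatch m : matches.toList()) {
            if (m.distance < 50) { // crude distance filter
                Point p = kpList.get(m.trainIdx).pt;
                Imgproc.circle(rgba, p, 5, new Scalar(0, 255, 0, 255), 2);
            }
        }
    }
    return rgba;
}
Expect a low frame rate with this on a phone; detecting on a downscaled gray image or limiting the number of ORB features helps.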
I am developing an Android application using OpenCV where I have to implement background subtraction. I am able to see some frames in grayscale with the background removed, but it only lasts for a while and then the application crashes.
Technique used: BackgroundSubtractorMOG2
This is my snippet of onCameraFrame:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();
    Mat mRgb = new Mat();
    Mat mFGMask = new Mat();
    BackgroundSubtractorMOG2 mog2 = Video.createBackgroundSubtractorMOG2();
    Imgproc.cvtColor(frame, mRgb, Imgproc.COLOR_RGBA2RGB);
    mog2.apply(mRgb, mFGMask);
    Imgproc.cvtColor(mFGMask, frame, Imgproc.COLOR_GRAY2RGBA);
    return frame;
}
Thanks in advance.
I was able to solve my issue.
The new Mat() calls were causing a memory-management issue. The Mats have to be initialized just once in onCameraViewStarted, and the returned Mat has to be released in onCameraViewStopped. After modifying the code as per suggestions from the OpenCV community, I was able to run my application properly.
1. Declare the fields first:
private Mat mRgb;
private Mat mFGMask;
private Mat frame;
private BackgroundSubtractorMOG2 mog2;
2. Initialize in onCameraViewStarted:
public void onCameraViewStarted(int width, int height) {
    mRgb = new Mat();
    mFGMask = new Mat();
    mog2 = Video.createBackgroundSubtractorMOG2();
}
3. Release the returned frame in onCameraViewStopped:
public void onCameraViewStopped() {
    frame.release();
}
For full code: https://github.com/rishirajrsawant/OpenCV-Background-Subtraction
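With those members in place, onCameraFrame only reuses them instead of allocating new Mats every frame. A minimal sketch of how it might then look (the linked repository has the full version):
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // reuse the member Mats; no per-frame new Mat() allocations
    frame = inputFrame.rgba();
    Imgproc.cvtColor(frame, mRgb, Imgproc.COLOR_RGBA2RGB);
    mog2.apply(mRgb, mFGMask);
    Imgproc.cvtColor(mFGMask, frame, Imgproc.COLOR_GRAY2RGBA);
    return frame;
}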
I get a Mat (320x480) from the phone's camera. Then I need to process a part of that frame, and I use an ROI for that:
mat2 = new Mat(width, 175, CvType.CV_8SC3);
Rect roi = new Rect(75, 0, 175, 320);
mat2 = new Mat(mat1, roi);
Now I want to create a new mat with dimensions 320x480 with a black background. How do I do that?
Then I want to copy the processed ROI to the new mat at the same place it was on the first mat. How do I do that?
I am using OpenCV 3.4.6 and android studio.
Thank you in advance
You just need to submat() an ROI from the original Mat (your camera frame) and then process the submat as a normal Mat, but be careful not to clone the submat if you want the original to get the effect.
Update according to the comment:
Mat originalMat = someMat;
Mat blackMat = Mat.zeros(size, CvType.CV_8UC1); // this is your black Mat

// create a submat from the original Mat, and clone it
Mat roiMat = originalMat.submat(rect).clone();

// ... do what you want with roiMat

// now copy the result back to the original Mat
Mat dst = originalMat.submat(rect); // do not clone this submat
roiMat.copyTo(dst);
// done!
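If, as in the question, the result should instead go onto a fresh black Mat of the original size at the same position, the same submat trick works there too. A small sketch, assuming mat1 is the 320x480 camera Mat and roi is the Rect from the question:
// black Mat with the same size and type as the camera frame
Mat blackMat = Mat.zeros(mat1.size(), mat1.type());

// processed ROI taken from the original frame
Rect roi = new Rect(75, 0, 175, 320);
Mat processedRoi = new Mat(mat1, roi); // ... after your processing

// copy the ROI into the black Mat at the same position
processedRoi.copyTo(blackMat.submat(roi));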
I am making an Android application that can detect an object in an image frame captured from a video.
The sample applications in OpenCV only have examples of real-time detection.
Additional Info:
- I'm using a Haar classifier
As of now I'm storing the captured frames in an array of ImageViews. How can I use OpenCV to detect the object and draw a rectangle around it?
for (int i = 0; i < 6; i++) {
    ImageView imageView = (ImageView) findViewById(ids_of_images[i]);
    imageView.setImageBitmap(retriever.getFrameAtTime(looper, MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
    Log.e("MicroSeconds: ", "" + looper);
    looper += 10000;
}
I hope you have integrated the OpenCV4Android library in your project.
Now you can convert the image to a Mat using an OpenCV function:
Mat srcMat = new Mat();
Utils.bitmapToMat(yourbitmap,srcMat);
Once you have the Mat, you can apply OpenCV functions to find rectangular objects in the image.
Now follow this code to detect rectangles:
Mat mGray = new Mat();
Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_BGR2GRAY, 1);
Imgproc.GaussianBlur(mGray, mGray, new Size(3, 3), 5, 10, Core.BORDER_DEFAULT);
// edge detection using the Canny edge detection algorithm (otsuThreshold is your chosen threshold value)
Imgproc.Canny(mGray, mGray, otsuThreshold, otsuThreshold * 0.5, 3, true);
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(mGray, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
Now you have the contours from the image, so you can get the largest contour from them and draw it using the drawContours() method:
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
    Imgproc.drawContours(src, contours, contourIdx, new Scalar(0, 0, 255), -1);
}
And you're done! You can refer to this link:
Android using drawContours to fill region
Hope it will help!
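Since the question mentions a Haar classifier, the detection on each extracted frame can also be done with CascadeClassifier and the result drawn back into the ImageView. A rough sketch, assuming a trained cascade XML at cascadePath (on Android the XML usually has to be copied from assets to a readable file path first) and a Bitmap frameBitmap from the retriever; these names are placeholders:
// load the trained Haar cascade
CascadeClassifier classifier = new CascadeClassifier(cascadePath);

// convert the frame bitmap to a Mat and to grayscale for detection
Mat srcMat = new Mat();
Utils.bitmapToMat(frameBitmap, srcMat);
Mat gray = new Mat();
Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGBA2GRAY);

// detect objects and draw a rectangle around each one
MatOfRect detections = new MatOfRect();
classifier.detectMultiScale(gray, detections);
for (Rect r : detections.toArray()) {
    Imgproc.rectangle(srcMat, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
}

// convert back so it can be shown in the ImageView
Bitmap resultBitmap = Bitmap.createBitmap(srcMat.cols(), srcMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(srcMat, resultBitmap);
imageView.setImageBitmap(resultBitmap);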
I am trying to write an Android app that allows real-time processing of the camera feed from a device to detect contours. Despite following many examples online, when I run the following code, an exception is thrown (also shown below).
Code:
Bitmap image = mTextureView.getBitmap();
Mat mat = new Mat();
Mat matConverted = new Mat();
Utils.bitmapToMat(image, mat);
mat.convertTo(matConverted, CvType.CV_32SC1);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(matConverted, contours, new Mat(), Imgproc.RETR_FLOODFILL, Imgproc.CHAIN_APPROX_SIMPLE);
Exception:
CvException [org.opencv.core.CvException: cv::Exception:
/Volumes/Linux/builds/master_pack-android/opencv/modules/imgproc/src/contours.cpp:198:
error: (-210) [Start]FindContours supports only CV_8UC1 images when mode !=
CV_RETR_FLOODFILL otherwise supports CV_32SC1 images only in function _CvContourScanner*
cvStartFindContours(void*, CvMemStorage*, int, int, int, CvPoint)
Am I missing something obvious here?
Try
Mat matConverted = new Mat(mat.size(), CvType.CV_8UC1);
In my case, it worked.
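For background: bitmapToMat produces a 4-channel RGBA Mat, and convertTo keeps the channel count, so matConverted ends up as CV_32SC4 rather than the single-channel CV_32SC1 the error message asks for. A sketch of one way to satisfy that requirement, going through a grayscale image first (using the same variable names as the question):
Bitmap image = mTextureView.getBitmap();
Mat mat = new Mat();
Utils.bitmapToMat(image, mat); // CV_8UC4 (RGBA)

// reduce to a single channel before converting the depth
Mat gray = new Mat();
Imgproc.cvtColor(mat, gray, Imgproc.COLOR_RGBA2GRAY); // CV_8UC1

Mat matConverted = new Mat();
gray.convertTo(matConverted, CvType.CV_32SC1); // single channel, as RETR_FLOODFILL expects

List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(matConverted, contours, new Mat(), Imgproc.RETR_FLOODFILL, Imgproc.CHAIN_APPROX_SIMPLE);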
I want to adjust the brightness of the OpenCV camera frame, which is called mRgba. After splitting the channels of the Lab image, I want to adjust the L channel, but I don't know how to change the values in the L channel.
Mat lab_image = new Mat();
//mRgba is the frame which shows in the camera
Imgproc.cvtColor(mRgba, lab_image, Imgproc.COLOR_mRGBA2RGBA);
Imgproc.cvtColor(lab_image, lab_image, Imgproc.COLOR_RGBA2RGB);
Imgproc.cvtColor(lab_image, lab_image, Imgproc.COLOR_RGB2Lab);
// Extract the L channel
List<Mat> lab_list = new ArrayList(3);
Core.split(lab_image,lab_list);
//lab_list.get(0).copyTo(mRgba);
Mat result_image = new Mat();
Core.merge(lab_list,result_image);
Imgproc.cvtColor(result_image, mRgba, Imgproc.COLOR_Lab2RGB);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_RGB2RGBA);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_RGBA2mRGBA);
I tried to use setTo() to set the value, but it changes the whole channel:
lab_list.get(0).setTo(new Scalar(255,255,255,0.1));
I want to add a value to increase the overall brightness. I hope the final result can look like the following photo. Please give me some help. Thank you.
http://i.stack.imgur.com/dSr4L.png
Let us say you want to increase your L channel by 50.
You can do it like this:
Mat dst = new Mat();
Core.add(lab_list.get(0), new Scalar(50), dst);
lab_list.set(0, dst);
And then merge the channels like you do already.
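Put together with the conversions from the question, the whole adjustment could look roughly like this (a sketch using the question's variable names, with 50 as an arbitrary brightness increase):
// split the Lab image into its channels
List<Mat> lab_list = new ArrayList<>(3);
Core.split(lab_image, lab_list);

// brighten: add a constant to the L channel
Mat brighterL = new Mat();
Core.add(lab_list.get(0), new Scalar(50), brighterL);
lab_list.set(0, brighterL);

// merge and convert back to the camera frame format
Mat result_image = new Mat();
Core.merge(lab_list, result_image);
Imgproc.cvtColor(result_image, mRgba, Imgproc.COLOR_Lab2RGB);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_RGB2RGBA);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_RGBA2mRGBA);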