I am trying out OpenCV. In my demo, I want to show the camera preview frames in plain black and white. My demo compiles and runs fine (I can see the black-and-white preview frames), but it crashes after running for a few seconds:
OpenCV Error: Insufficient memory (Failed to allocate 11059200 bytes)
...
cv::Mat::create(int, const int*, int)]
at org.opencv.core.Mat.n_Mat(Native Method)
at org.opencv.core.Mat.<init>(Mat.java:50)
My code:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
Mat src = inputFrame.rgba();
// insufficient memory error here?
Mat dest = new Mat(src.size(), src.type());
Imgproc.threshold(src, dest, 30, 255, Imgproc.THRESH_BINARY);
return dest;
}
What could be wrong?
I resolved this issue by adding this call:
Mat.release()
on every OpenCV Mat object, at the point where each object was no longer needed.
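For reference, a minimal sketch of how the callback above could look with that fix applied. This is my own restructuring, not code from the question: the preallocated mDest field and the onCameraViewStopped() cleanup are assumptions about where release fits in a typical CvCameraViewListener2 implementation.

```java
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class ThresholdPreview {
    // Reuse one output Mat across frames instead of allocating a new one
    // in every callback, so native memory stays bounded.
    private Mat mDest;

    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat src = inputFrame.rgba();
        if (mDest == null) {
            mDest = new Mat(src.size(), src.type());
        }
        Imgproc.threshold(src, mDest, 30, 255, Imgproc.THRESH_BINARY);
        return mDest;
    }

    public void onCameraViewStopped() {
        // Release the native buffer once the preview stops.
        if (mDest != null) {
            mDest.release();
            mDest = null;
        }
    }
}
```

Because the same Mat is returned each frame, nothing leaks even though release() is only called once, when the preview stops.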
I am using OpenCV in Android Studio to increase the red color proportion in an image, but after the function runs about 30 times, the program crashes and shows this error:
OpenCV Error: Insufficient memory(Failed to allocate *****bytes)
I have searched many related questions, and many people say adding Mat.release() could solve this problem. I added Mat.release() in my function, but it does not help. The program still crashes after I run it over 30 times.
Here is my code. Does somebody know how to solve this issue?
public void addRedColor(int red){
Mat img = new Mat();
Utils.bitmapToMat(src, img);
List<Mat> bitplane = new ArrayList<>(img.channels());
Core.split(img, bitplane);
Mat redChannel = new Mat();
Core.add(bitplane.get(0), new Scalar(red), redChannel);
bitplane.set(0, redChannel);
Core.merge(bitplane, img);
// release the Mat
img.release();
bitplane.get(0).release();
bitplane.get(1).release();
bitplane.get(2).release();
redChannel.release();
}
Avoid wasting memory on repeated new Mat allocations, and force the garbage collector before the end of the method:
System.gc();
System.runFinalization();
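Putting that together, the method might look like this. This is a sketch under the question's assumptions (the src bitmap field comes from the question); the in-place Core.add() replacing the separate redChannel Mat is my own simplification.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public void addRedColor(int red) {
    Mat img = new Mat();
    Utils.bitmapToMat(src, img);

    List<Mat> bitplane = new ArrayList<>();
    Core.split(img, bitplane);

    // Add the offset to the red plane in place instead of allocating
    // a separate redChannel Mat.
    Core.add(bitplane.get(0), new Scalar(red), bitplane.get(0));
    Core.merge(bitplane, img);

    // Release every native buffer created in this call.
    for (Mat plane : bitplane) {
        plane.release();
    }
    img.release();

    // Nudge the VM to finalize the Java-side wrappers as well.
    System.gc();
    System.runFinalization();
}
```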
I am trying to process Android video in real time with opencv-android. So far I am able to access the video with OpenCV and display it on an org.opencv.android.JavaCameraView (I referred to this link). I haven't been able to access the camera's video feed frame by frame. I need to get each and every frame in order to apply some OpenCV algorithms to them for object tracking. Please suggest a method to access and process the frames with OpenCV. (Redirect this if it's already been asked.)
Here is how to do it:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
//Do your processing here, for example:
Mat image = inputFrame.rgba();
Mat ret_mat = new Mat();
Core.add(image, new Scalar(40, 40, 40, 0), ret_mat); //change brightness of video frame
return ret_mat;
}
OpenCV functions operate on Mat objects, which represent the matrix of pixels of your image/frame. The code above makes each frame brighter by 40 units.
I'm a newbie OpenCV and Android developer. I want to use the Imgproc.GaussianBlur filter in my app, but when I use it the application crashes with "application stopped". I only added three lines to "OpenCV Tutorial 3 - Camera Control":
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
Mat mat = inputFrame.gray();
org.opencv.core.Size s = mat.size();
Imgproc.GaussianBlur(mat, mat, s, 2);
return mat;
}
What could be wrong? I have a Lenovo A820 with Android 4.1.2 and tried it on OpenCV 2.4.4, 2.4.5 and 2.4.6. I tried different API. The Imgproc.Sobel(mat, mat, -1, 1, 1); filter works fine.
Look at the docs for GaussianBlur.
It says: "ksize – Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd."
So, IMHO, you confused the kernel size with the Mat's size. Try something like:
Mat mat = inputFrame.gray();
org.opencv.core.Size s = new Size(3,3);
Imgproc.GaussianBlur(mat, mat, s, 2);
return mat;
I am developing an Android OpenCV app based on the OpenCV4Android SDK tutorial 2 - Mixed Processing.
In the frame-processing function public Mat onCameraFrame(CvCameraViewFrame inputFrame) {},
the frame is RGBA and I want to make it RGB by doing this:
mRgba = inputFrame.rgba();
mGray = inputFrame.gray();
Mat mRgb=new Mat(640,480,CvType.CV_8UC3);
mRgba.convertTo(mRgb, CvType.CV_8UC3);
//Imgproc.cvtColor(mRgba, mRgb, CvType.CV_8UC3);
PinkImage(mRgba.dataAddr());
But when I debug and log what I pass to the JNI part, I find it's not working at all:
mRgb is still CV_8UC4 even after calling convertTo().
What is the cause of this?
OK, the answer is here
Imgproc.cvtColor(mRgba,mRgb,Imgproc.COLOR_RGBA2RGB);
instead of
mRgba.convertTo(mRgb, CvType.CV_8UC3);
Thanks a lot!!
You never use the converted data. You still pass mRgba.dataAddr() to PinkImage(), which is the unmodified RGBA image. You need to pass in the modified data:
PinkImage(mRgb.dataAddr());
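Putting both answers together, the relevant part of the frame callback might look like this. A sketch only: the PinkImage() native method is assumed from the question.

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

Mat mRgba = inputFrame.rgba();

// Drop the alpha channel with cvtColor(); convertTo() only changes the
// element depth, not the number of channels, so the Mat would stay CV_8UC4.
Mat mRgb = new Mat();
Imgproc.cvtColor(mRgba, mRgb, Imgproc.COLOR_RGBA2RGB);

// Pass the address of the *converted* data to the native side.
PinkImage(mRgb.dataAddr());
```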
Currently trying:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
Imgproc.Canny(mRgba, markers, 80, 90);
Mat threeChannel = new Mat();
Imgproc.cvtColor(mRgba, threeChannel, Imgproc.COLOR_BGR2GRAY);
Imgproc.watershed(threeChannel, markers);
return threeChannel;
}
However, it fails with
CvException [org.opencv.core.CvException: /home/reports/ci/slave/50-SDK/opencv/modules/imgproc/src/segmentation.cpp:147: error: (-210) Only 8-bit, 3-channel input images are supported in function void cvWatershed(const CvArr*, CvArr*)
Could you advise how to appropriately use the markers from a Canny/Sobel edge detection to feed a watershed algorithm? Android specifics would be greatly appreciated, as this is my first Android project.
The error states that the input image for watershed() must be an 8-bit, 3-channel image. After calling cvtColor(), print the number of channels of threeChannel. Don't be surprised if it outputs 1.
Pass mRgba directly to watershed() and see what happens. One of my previous answers has working code using watershed; you can use that for testing.
You just need to convert your image from 4 channels to 3 channels.
For example
Imgproc.cvtColor(mat , mat, Imgproc.COLOR_BGRA2BGR);
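To sketch how markers can feed watershed(): the function wants an 8-bit 3-channel image plus a CV_32S markers Mat whose non-zero entries label the known regions, and raw Canny edges are not valid markers. The seed strategy below (Otsu threshold for foreground, eroded inverse for background) is my own illustrative choice, not code from the answers above.

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

// Watershed needs a 3-channel 8-bit input, so drop the alpha channel.
Mat bgr = new Mat();
Imgproc.cvtColor(mRgba, bgr, Imgproc.COLOR_RGBA2BGR);

Mat gray = new Mat();
Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);

// Derive rough foreground seeds from a threshold.
Mat fg = new Mat();
Imgproc.threshold(gray, fg, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

// Markers must be CV_32S; 0 = unknown, positive labels = seeds.
Mat markers = Mat.zeros(gray.size(), CvType.CV_32SC1);
markers.setTo(new Scalar(1), fg);            // label 1: candidate foreground
Core.bitwise_not(fg, fg);
Imgproc.erode(fg, fg, new Mat(), new Point(-1, -1), 3);
markers.setTo(new Scalar(2), fg);            // label 2: confident background

Imgproc.watershed(bgr, markers);             // boundary pixels become -1
```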