Explicitly releasing Mat with OpenCV 2.0 - Android

I'm working on a program where we do some image processing of full quality camera photos using the Android NDK. So, obviously memory usage is a big concern.
There are times when I don't need the contents of a Mat anymore - I know it will be released automatically when it goes out of scope, but is there a good way of releasing it earlier, so I can reduce memory usage?
It's running fine on my Galaxy S II right now, but obviously that is not representative of the capabilities of a lot of the older phones around!

If you have only one matrix pointing to your data, you can do this trick:
Mat img = imread("myImage.jpg");
// do some operations
img = Mat(); // release it
If more than one Mat is pointing to your data, you should release all of them:
Mat img = imread("myImage.jpg");
Mat img2 = img;
Mat roi = img(Rect(0,0,10,10));
// do some operations
img = Mat(); // reset each header; the data is freed once the last reference is gone
img2 = Mat();
roi = Mat();
Or use the bulldozer approach (are you sure? this sounds like inserting bugs into your code):
Mat img = imread("myImage.jpg");
Mat img2 = img;
Mat roi = img(Rect(0,0,10,10));
// do some operations
char* imgData = (char*)img.data;
free(imgData); // don't do this - the Mats still reference this buffer
imshow("Look, this is called access violation exception", roi);

Mat::release() should do the trick.
cf.: OpenCV Memory Management Documentation
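For reference, the Java bindings expose the same call on org.opencv.core.Mat, which is handy if part of the pipeline stays on the Java side. A minimal sketch (the buffer size is just an example):
import org.opencv.core.CvType;
import org.opencv.core.Mat;

Mat img = new Mat(2048, 1536, CvType.CV_8UC4); // full-resolution working buffer (example size)
// ... do some operations ...
img.release(); // drops this Mat's reference; the pixel buffer is freed once no other Mat refers to it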

Related

Super fast Bitmap 180 degrees rotation using OpenCV on Android

I am looking for a fast way to rotate a Bitmap 180 degrees using OpenCV on Android. Also, the Bitmap may be rotated in place, i.e. without allocation of additional memory.
Here, riwnodennyk described different methods of rotating Bitmaps and compared their performance: https://stackoverflow.com/a/29734593/1707617. Here is the GitHub repository for this study: https://github.com/riwnodennyk/ImageRotation.
This study doesn't include an OpenCV implementation, so I tried my own OpenCV rotator (180 degrees only).
The best rotation methods for my test picture are as follows:
OpenCV: 13 ms
Ndk: 15 ms
Usual: 29 ms
Because the OpenCV performance looks very promising and my implementation seems unoptimized, I decided to ask how to implement it better.
My implementation with comments:
@Override
public Bitmap rotate(Bitmap srcBitmap, int angleCcw) {
    Mat srcMat = new Mat();
    Utils.bitmapToMat(srcBitmap, srcMat); // Possible memory allocation and copying
    Mat rotatedMat = new Mat();
    Core.flip(srcMat, rotatedMat, -1); // Possible memory allocation
    Bitmap dstBitmap = Bitmap.createBitmap(srcBitmap.getWidth(), srcBitmap.getHeight(), Bitmap.Config.ARGB_8888); // Unneeded memory allocation
    Utils.matToBitmap(rotatedMat, dstBitmap); // Unneeded copying
    return dstBitmap;
}
There are three places where, I suppose, unnecessary allocations and copying may take place.
Is it possible to get rid of these unnecessary operations?
I implemented a fast bitmap rotation library.
Performance (approximately):
5 times faster than Matrix rotation method.
3.8 times faster than Ndk rotation method.
3.3 times faster than OpenCV rotation method.
https://github.com/ivankrylatskoe/fast-bitmap-rotation-lib
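If you want to stay with OpenCV rather than a dedicated library, one way to trim the allocations flagged in the question is to reuse the source Bitmap as the output and release the temporary Mats explicitly. A rough sketch, assuming srcBitmap is mutable, ARGB_8888, and allowed to be overwritten in place:
@Override
public Bitmap rotate(Bitmap srcBitmap, int angleCcw) {
    Mat srcMat = new Mat();
    Utils.bitmapToMat(srcBitmap, srcMat);      // copy into a Mat (hard to avoid)
    Mat rotatedMat = new Mat();
    Core.flip(srcMat, rotatedMat, -1);         // 180 degrees = flip around both axes
    Utils.matToBitmap(rotatedMat, srcBitmap);  // write back into the existing Bitmap, no new allocation
    srcMat.release();                          // free the temporaries right away
    rotatedMat.release();
    return srcBitmap;
}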

Blurring a Rect within a screenshot

I'm developing an Android app which uses a background Service to programmatically capture a screenshot of whatever is on the screen currently. I obtain the screenshot as a Bitmap.
Next, I successfully imported OpenCV into my Android project.
What I need to do now is blur a subset of this image, i.e. not the entire image itself, but a [rectangular] area or sub-region within the image. I have an array of Rect objects representing the rectangular regions that I need to blur within the screenshot.
I've been looking around for a tutorial on doing this with OpenCV in Java, and I haven't found a clear answer. The Mat and Imgproc classes are obviously the ones of interest, and there's the Mat.submat() method, but I've been unable to find a clear, straightforward tutorial on getting this done.
I've googled a lot, and none of the examples I've found are complete. I need to do this in Java, within the Android runtime.
What I need is: Bitmap >>> Mat >>> Imgproc >>> Rect >>> Bitmap with the ROI blurred.
Any experienced OpenCV devs out here, can you point me in the right direction? This is the only thing I'm stuck at.
Related:
Gaussian blurring with OpenCV: only blurring a subregion of an image?.
How to blur a rectangle with OpenCV.
How to blur some portion of Image in Android?.
The C++ code to achieve this task is shared below with comments:
// load an input image
Mat img = imread("C:\\elon_tusk.png");
// extract the subimage
Rect roi(113, 87, 100, 50);
Mat subimg = img(roi);
// blur the subimage
Mat blurred_subimage;
GaussianBlur(subimg, blurred_subimage, Size(0, 0), 5, 5);
// copy the blurred subimage back to the original image
blurred_subimage.copyTo(img(roi));
Android equivalent:
Mat img = Imgcodecs.imread("elon_tusk.png");
Rect roi = new Rect(113, 87, 100, 50);
Mat subimg = img.submat(roi).clone();
Imgproc.GaussianBlur(subimg, subimg, new Size(0, 0), 5, 5);
subimg.copyTo(img.submat(roi));
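Since submat() returns a view into the same pixel data, the clone and the copy back can likely be dropped by blurring the ROI in place (GaussianBlur accepts the same Mat as source and destination); a minimal sketch of that variant:
Mat roiView = img.submat(roi);                                // a view into img, no pixel copy
Imgproc.GaussianBlur(roiView, roiView, new Size(0, 0), 5, 5); // blur the region in place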
You could just implement your own helper function, let's call it roi (region of interest).
Since images in OpenCV's Python bindings are NumPy ndarrays, you can do something like this:
def roi(image: np.ndarray, region: QRect) -> np.ndarray:
    x1 = region.topLeft().x()
    y1 = region.topLeft().y()
    x2 = region.bottomRight().x()
    y2 = region.bottomRight().y()
    return image[y1:y2, x1:x2]  # NumPy indexes rows (y) first, then columns (x)
And just use this helper function to extract the subregions of the image that you are interested in, blur them, and put the result back on the original picture.
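Since the question targets the Android (Java) bindings rather than Python, a rough Java equivalent of that helper could look like this (a sketch using org.opencv.core.Mat.submat; the parameter names are just illustrative):
import org.opencv.core.Mat;

// Returns a view (no pixel copy) of the region spanning columns [left, right) and rows [top, bottom).
static Mat roi(Mat image, int left, int top, int right, int bottom) {
    return image.submat(top, bottom, left, right); // rowStart, rowEnd, colStart, colEnd
}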

Comparison of channel images in Android and MATLAB

So I have a 1024x1024 greyscale image. When I open this picture using MATLAB, MATLAB detects that the image is 1024x1024 uint8.
The problem is when I do image processing with Android, where I divide the image into several parts, apply the attacking process to some parts of the image, then re-combine the image. When I then open the attacked result in MATLAB, it detects an image of size 1024x1024x3 uint8. I've tried to change the image using the cvtColor function provided by OpenCV to change the image channels, but the image is still considered 3-channel by MATLAB. This is a sample image before and after image processing with Android (left: before attacking, right: after attacking).
This is one of the attacking functions, Gaussian noise, which I implemented on Android:
private Bitmap GaussianNoise(Bitmap src, double variance) {
    Bitmap hasil = src;
    Mat input = new Mat();
    Mat imGray = new Mat();
    Utils.bitmapToMat(src, input);
    Imgproc.cvtColor(input, imGray, Imgproc.COLOR_RGBA2GRAY);
    Mat noise = new Mat(imGray.size(), CvType.CV_64F);
    Mat resultMat = new Mat();
    Core.normalize(imGray, resultMat, 0.0, 1.0, Core.NORM_MINMAX, CvType.CV_64F);
    Core.randn(noise, 0, Math.sqrt(variance));
    Core.add(resultMat, noise, resultMat);
    Core.normalize(resultMat, resultMat, 0.0, 1.0, Core.NORM_MINMAX, CvType.CV_64F);
    resultMat.convertTo(resultMat, imGray.type(), 255, 0);
    Utils.matToBitmap(resultMat, hasil);
    return hasil;
}
All forms of assistance will be greatly appreciated.
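One likely cause, for what it's worth: an Android Bitmap is always multi-channel (e.g. ARGB_8888), so once the result goes through matToBitmap and is saved from the Bitmap, the file ends up with 3 channels even if the pixel values are grey. If the goal is a file that MATLAB reads as 1024x1024 uint8, one option is to write the single-channel Mat directly and skip the Bitmap. A sketch, assuming OpenCV 3.x+ Java bindings (in 2.4 the same call lives in Highgui) and a hypothetical output path:
import org.opencv.imgcodecs.Imgcodecs;

// resultMat is the single-channel CV_8U result after convertTo(..., imGray.type(), 255, 0).
Imgcodecs.imwrite("/sdcard/attacked_gray.png", resultMat); // hypothetical path; writes a true grayscale PNG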

Can't find correct FAST-SURF matches when using OpenCV for Android

I'm using OpenCV for Android to implement a logo detection algorithm. My goal now is to find a predefined logo in a picture I've taken with the Android camera.
I can't get ANY right matches. I find this very weird considering I'm almost only using OpenCV library functions.
First I detect keypoints using the FAST detector (my images are 500x500 in size);
afterwards I use SURF to describe these keypoints.
With kNN I ask for the 2 best matches and eliminate those that don't have a ratio smaller than 0.6 (first.distance / second.distance).
I'm getting around 10 matches, but they are all wrong; when I draw every match (100+), they all seem to be wrong as well.
I can't see what I'm doing wrong here, does anyone have the same problem, or know what I'm doing wrong?
FeatureDetector FAST = FeatureDetector.create(FeatureDetector.FAST);
// extract keypoints
FAST.detect(image1, keypoints);
FAST.detect(image2, logoKeypoints);
DescriptorExtractor SurfExtractor = DescriptorExtractor
.create(DescriptorExtractor.SURF);
Mat descriptors = new Mat();
Mat logoDescriptors = new Mat();
SurfExtractor.compute(image1, keypoints, descriptors);
SurfExtractor.compute(image2, logoKeypoints, logoDescriptors);
List<DMatch> matches = new ArrayList<DMatch>();
matches = knn(descriptors, logoDescriptors);
Scalar blue = new Scalar(0, 0, 255);
Scalar red = new Scalar(255, 0, 0);
Features2d.drawMatches(image2, logoKeypoints, image1, keypoints,
matches, rgbout, blue, red);
I think the problem is the matcher you are using. For float-based descriptors such as SURF, use FLANN or a brute-force (L2) matcher. Also, strive to use the same feature type for both detection and description, i.e. SURF descriptors on SURF keypoints.
Read this post on Stack Overflow, and the articles linked in it, for a better understanding:
How Does OpenCV ORB Feature Detector Work?
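A rough sketch of that suggestion against the 2.4-era Java API used in the question (untested; SURF lives in the non-free module, so it must be available in your OpenCV build): SURF keypoints, SURF descriptors, FLANN matching, and the same 0.6 ratio test.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DMatch;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);

MatOfKeyPoint kp1 = new MatOfKeyPoint();
MatOfKeyPoint kp2 = new MatOfKeyPoint();
detector.detect(image1, kp1);   // SURF keypoints instead of FAST
detector.detect(image2, kp2);

Mat desc1 = new Mat();
Mat desc2 = new Mat();
extractor.compute(image1, kp1, desc1);
extractor.compute(image2, kp2, desc2);

List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
matcher.knnMatch(desc1, desc2, knnMatches, 2); // 2 nearest neighbours per query descriptor

List<DMatch> good = new ArrayList<DMatch>();
for (MatOfDMatch pair : knnMatches) {
    DMatch[] m = pair.toArray();
    if (m.length == 2 && m[0].distance / m[1].distance < 0.6f) {
        good.add(m[0]); // keep only clearly better-than-second-best matches
    }
}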

Cropping an image in Android using OpenCV

I am using OpenCV 2.3.1 on Android. I need to crop the image in half.
What I am doing is:
Mat mIntermediateMat2 = new Mat(frame_height, frame_width, rgba.type());
mIntermediateMat2 = rgba.clone();
mIntermediateMat2 = mIntermediateMat2.rowRange(0,frame_height/2);
Will the third step do the job, or do I have to add something more?
I saw Mat::operator() in the OpenCV 2.3 documentation but unfortunately was not able to find it in the OpenCV Android package.
There are a few constructors for the Mat class, one of which takes a Mat and an ROI (region of interest).
Here's how to do it in Android/Java:
Mat uncropped = getUncroppedImage();
Rect roi = new Rect(x, y, width, height);
Mat cropped = new Mat(uncropped, roi);
I have always done cropping this way:
Mat image = ...; // fill it however you want to
Mat crop(image, Rect(0, 0, image.cols, image.rows / 2)); // NOTE: this will only give you a reference to the ROI of the original data
// if you want a copy of the crop do this:
Mat output = crop.clone();
Hope that helps!
It seems cv::getRectSubPix does what you want. Plus, you don't have to allocate more space than you need, and it does the necessary interpolation if the cropped area is not aligned on exact pixel boundaries.
This should do what you want. Input type will be the output type.
Mat dst;
getRectSubPix(src, Size(src.cols, src.rows / 2), Point2f(src.cols / 2.0f, src.rows / 4.0f), dst); // patch size is (width, height); this takes the top half
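Since the question is about the Java bindings, the same call is exposed there as Imgproc.getRectSubPix; a minimal sketch for cropping the top half (untested, assuming an 8-bit src Mat):
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

Mat dst = new Mat();
// Patch size is (width, height); the center of the top half is (cols/2, rows/4).
Imgproc.getRectSubPix(src, new Size(src.cols(), src.rows() / 2),
        new Point(src.cols() / 2.0, src.rows() / 4.0), dst);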
