I'd like to initialize a 3 by 3 cross-shaped kernel matrix and use it to dilate an image in OpenCV4Android. In native C++ OpenCV, you'd do:
Mat kernel = (Mat_<int>(3,3) << 0,1,0,1,1,1,0,1,0);
dilate(image, image, kernel);
but how can I do the equivalent of the first line in Java? A Mat cannot be treated like an array, and Java has no << operator. There seems to be an OpenCV function called cvCreateStructuringElementEx which initializes Mats for use as kernels, but I can't find this function in OpenCV4Android.
Thanks so much.
I have never tried this myself, but check whether it works; this is at least the OpenCV4Android way to create a structuring element:
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
Also, check out the copyTo() method; it can take a mask:
src_mat.copyTo(dst_mat, mask);
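Putting it together, here is a minimal sketch of the dilation (image stands for your input Mat). getStructuringElement with MORPH_CROSS reproduces exactly the cross-shaped kernel from the C++ snippet, and Mat.put() is the closest Java substitute for the << operator if you ever need arbitrary kernel values:

// Cross-shaped 3x3 kernel, same as (Mat_<int>(3,3) << 0,1,0,1,1,1,0,1,0)
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_CROSS, new Size(3, 3));
Imgproc.dilate(image, image, kernel);

// Alternative for non-standard kernels: fill the Mat by hand
Mat custom = new Mat(3, 3, CvType.CV_8U);
custom.put(0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0); // put(row, col, double... values)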
This is the first time I am asking for help here, so I will try to be as precise as possible in my question.
I am trying to develop a shape detection app for Android.
I first identified an algorithm that works for my case by experimenting in Python. Basically, for each frame I do this:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_color, upper_color)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # here I filter my results
With this algorithm I am able to run the analysis in real time on videos with a frame rate of 120 fps.
So I tried to implement the same algorithm in Android Studio, doing the following for each frame:
Imgproc.cvtColor(frameInput, tempFrame, Imgproc.COLOR_BGR2HSV);
Core.inRange(tempFrame,lowColorRoi,highColorRoi,tempFrame);
List<MatOfPoint> contours1 = new ArrayList<MatOfPoint>();
Imgproc.findContours(tempFrame /*.clone()*/, contours1, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint c : contours1) {
    // here I filter my results
}
and I see that the findContours call alone takes 500-600 ms per iteration (I noticed it takes even longer with tempFrame.clone()), which allows the analysis to run at only about 2 fps.
This speed is of course not acceptable at all. Do you have any suggestions on how to improve it? 30-40 fps would already be a good target for me.
I would really appreciate any help. Many thanks in advance.
I would suggest doing your shape analysis on a lower-resolution version of the image, if that is acceptable. Processing time is often directly proportional to the number of pixels and the number of channels of the image, so halving the width and height can yield roughly a 4x performance improvement. If that works, the first call should likely be a resize; every subsequent call then has a smaller burden, as in the sketch below.
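A minimal sketch of that idea, reusing the names from the question (the 0.5 scale factor is just an example):

// Downscale first so every later stage touches a quarter of the pixels.
Mat small = new Mat();
Imgproc.resize(frameInput, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);
Imgproc.cvtColor(small, tempFrame, Imgproc.COLOR_BGR2HSV);
Core.inRange(tempFrame, lowColorRoi, highColorRoi, tempFrame);
// findContours now runs on the smaller mask; remember to scale the
// resulting contour coordinates back up by 2 before using them on the
// full-size frame.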
Next, be careful using OpenCV from Java/Kotlin, because there is a definite cost to marshalling across the JNI interface. You could write the majority of your code in native C++ and then make just a single call across JNI to a C++ function that handles all of the shape analysis at once.
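On the Java side, that single crossing could look like the sketch below (analyzeFrame is a hypothetical name); the C++ implementation would do the cvtColor/inRange/findContours work plus your filtering in one call:

// Hypothetical native entry point: one JNI crossing per frame.
// The C++ side casts the longs back to cv::Mat* and does all the work.
private static native int analyzeFrame(long inputAddr, long maskAddr);

// Per frame:
int shapeCount = analyzeFrame(frameInput.getNativeObjAddr(),
        tempFrame.getNativeObjAddr());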
What does the submat function do in OpenCV-Android?
I am trying to understand some code I found that does exactly what I am looking for. I understand all of it except for the submat function, because I cannot find the details of its parameters in the documentation:
Mat zoomCorner = rgba.submat(0, rows / 2 - rows / 10, 0, cols / 2 - cols / 10);
Mat objects in OpenCV use a reference-counting approach: a Mat holds a reference to an image data buffer together with the parameters describing that buffer.
When you call submat(), you get a new Mat that shares the image data buffer with the original Mat. The four arguments are rowStart, rowEnd, colStart and colEnd: the ROI covers rows [rowStart, rowEnd) and columns [colStart, colEnd), with the end indices exclusive.
If you need an independent copy, copy the ROI into a new Mat with Mat.copyTo() and do all your calculations on that copy.
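A short sketch of the difference, using the line from the question:

// submat() only creates a new header: zoomCorner shares its pixel data
// with rgba, so writing into zoomCorner also changes rgba.
Mat zoomCorner = rgba.submat(0, rows / 2 - rows / 10, 0, cols / 2 - cols / 10);

// copyTo() allocates a separate buffer: roiCopy is independent of rgba.
Mat roiCopy = new Mat();
zoomCorner.copyTo(roiCopy);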
Okay, so I've been trying and searching online, but I can't find this.
I have:
OpenCV4Android, which I am using in a mixed fashion: Java and native.
a Mat obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
This cannot be changed to native code because it is someone else's library, and it is built entirely in a non-native way.
Native methods to which I pass this Mat via mFrame.nativeObj, like this:
JNIEXPORT int JNICALL Java_com_... ( ...jlong addr ... )
{
Mat& mrgba = *((Mat*)addr);
// do stuff
imwrite( ..., mrgba ); // note: imwrite takes the filename first, then the Mat
}
Now, I use this matrix and then write it with imwrite, all in this native part. Although imwrite does write a file, its colors are all wrong: red where they should be white, green where they should be black, and purple where they should be the color of my table, i.e. yellowish. Instead of blindly trying cvtColor and convertTo, I'd rather understand what is going on.
What is the number of channels, type, channel order and whatnot that I should know about a frame that was first retrieved with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
and then passed through JNI to native OpenCV? Effectively, what conversions do I need to do for native imwrite to behave?
For some reason, the image obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
needs to be converted like so:
cvtColor(mrgba, mbgr, CV_YCrCb2RGB, 4);
in order for imwrite to correctly output an image to SD Card.
I don't understand why (imwrite is supposed to accept BGR images), but at least this answers my question.
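For what it's worth, if the retrieved frame really is RGBA, the conventional conversion for imwrite would be RGBA to BGR. A sketch of that (untested), done on the Java side before the Mat ever crosses JNI:

// imwrite expects BGR channel order, so convert the RGBA frame first.
Mat mBgr = new Mat();
Imgproc.cvtColor(mFrame, mBgr, Imgproc.COLOR_RGBA2BGR);
// then pass mBgr.nativeObj across JNI and imwrite that Mat instead.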
Try
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_BGRA);
because OpenCV stores images with the channels in blue, green, red order instead of red, green, blue.
My Android application uses javaCV and calls the detectMultiScale() function with an LBP cascade to detect faces. It works completely fine on my emulator. However, when I tried to test it on my HTC Incredible S, it returned 0 and could not detect any face! Could anyone give me some hints as to why it does not work? Many thanks for your help!
Here is my code for face detection:
CASCADE_FILE = working_Dir.getAbsolutePath() + "/lbpcascade_frontalface.xml";

public static CvRect getFaceWithLBP(IplImage grayFaceImg)
{
    CascadeClassifier cascade = new CascadeClassifier(CASCADE_FILE);
    CvRect facesdetection = new CvRect(null);
    cascade.detectMultiScale(grayFaceImg, facesdetection, 1.1, 2,
            CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH,
            new CvSize(), new CvSize(grayFaceImg.width(), grayFaceImg.height()));
    return facesdetection;
}
Just a note: as per the OpenCV documentation, the flags (such as CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH) cannot be used with new cascades (like LBP ones).
void CascadeClassifier::detectMultiScale(const Mat& image, vector<Rect>& objects, double scaleFactor=1.1, int minNeighbors=3, int flags=0, Size minSize=Size(), Size maxSize=Size())
Parameters:
cascade – Haar classifier cascade (OpenCV 1.x API only). It can be loaded from XML or YAML file using Load(). When the cascade is not needed anymore, release it using cvReleaseHaarClassifierCascade(&cascade).
image – Matrix of the type CV_8U containing an image where objects are detected.
objects – Vector of rectangles where each rectangle contains the detected object.
scaleFactor – Parameter specifying how much the image size is reduced at each image scale.
minNeighbors – Parameter specifying how many neighbors each candidate rectangle should have to retain it.
flags – Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.
minSize – Minimum possible object size. Objects smaller than that are ignored.
maxSize – Maximum possible object size. Objects larger than that are ignored.
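For comparison, here is a minimal sketch of the same call using the official OpenCV Java bindings rather than javaCV, passing 0 for the flags (the 30x30 minimum size is illustrative, and grayFaceImg is assumed to be a Mat in these bindings):

import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

CascadeClassifier cascade = new CascadeClassifier(CASCADE_FILE);
MatOfRect faces = new MatOfRect();
// flags = 0: the flags parameter is ignored for new (LBP) cascades anyway.
cascade.detectMultiScale(grayFaceImg, faces, 1.1, 2, 0,
        new Size(30, 30), new Size(grayFaceImg.width(), grayFaceImg.height()));
for (Rect face : faces.toArray()) {
    // handle each detected face
}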
I want to replace the pixels of a bitmap without creating a new Bitmap instance. Is it possible to change the void* pixels of a bitmap that was passed to a native method, without creating a new Bitmap instance?
According to the Android NDK r8b documentation ("Stable APIs" section), you can pass a Bitmap to the NDK layer and process its pixels from there:
The 'jnigraphics' Library:
This is a tiny library that exposes a stable, C-based, interface that
allows native code to reliably access the pixel buffers of Java bitmap
objects.
To use it, include the <android/bitmap.h> header in your source code,
and link to the jnigraphics library as in:
LOCAL_LDLIBS += -ljnigraphics
For details, read the source header at the following location:
build/platforms/android-8/arch-arm/usr/include/android/bitmap.h
Briefly, typical usage should look like:
1/ Use AndroidBitmap_getInfo() to retrieve information about a
given bitmap handle from JNI (e.g. its width/height/pixel format)
2/ Use AndroidBitmap_lockPixels() to lock the pixel buffer and
retrieve a pointer to it. This ensures the pixels will not move
until AndroidBitmap_unlockPixels() is called.
3/ Modify the pixel buffer, according to its pixel format, width,
stride, etc.., in native code.
4/ Call AndroidBitmap_unlockPixels() to unlock the buffer.
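On the Java side, this only requires declaring a native method that takes the Bitmap. A minimal sketch (the library and method names here are hypothetical); the native implementation would use the jnigraphics calls above and modify the pixels in place, so no new Bitmap is ever created:

import android.graphics.Bitmap;

public class PixelFilter {
    static {
        System.loadLibrary("pixelfilter"); // hypothetical native library
    }

    // Implemented in C/C++ using AndroidBitmap_getInfo(),
    // AndroidBitmap_lockPixels() and AndroidBitmap_unlockPixels();
    // it writes directly into the bitmap's own pixel buffer.
    public static native void processInPlace(Bitmap bitmap);
}

Note that the Bitmap should be mutable (for example, obtained via copy(Bitmap.Config.ARGB_8888, true)) if the native code is going to write into its pixel buffer.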