In OpenCV I retrieve a Gabor kernel for image processing, which is a 9×9 matrix, using:
Imgproc.getGaborKernel(...)
I have a gray matrix of the original image. (I'm not even sure whether the kernel is supposed to be the size of the image or just a small segment, though I'm fairly certain it's the small kernel.)
How do I convolve the two and get the output of the convolution?
I'm trying to put together a Gabor wavelet filter for edge detection.
EDIT: as far as convolving the matrices is concerned, it looks like the OpenCV "filter2D" method is the one to use; it is found in the Imgproc class of the Android OpenCV API.
However, when I do my convolution and put it on the screen, it's just a black image.
Size size = new Size(9, 9);
// sigma = 3.0, theta = -PI/4, lambda = PI, gamma = 10.0, psi = PI/2
Mat gaborKernel = Imgproc.getGaborKernel(size, 3.0, -Math.PI/4, Math.PI, 10.0, Math.PI*0.5, CvType.CV_64F);
// convolve the grayscale image with the kernel; ddepth = -1 keeps the source depth
Imgproc.filter2D(intermediate, output, -1, gaborKernel);
Bitmap temp = Bitmap.createBitmap(intermediate.cols(), intermediate.rows(), Config.ARGB_8888);
Utils.matToBitmap(output, temp);
I printed the output with a system output to inspect it, and all of the values are extremely small.
You need to normalize your kernel.
Just loop over the kernel matrix and calculate the sum of the values. Then loop again and divide each value by the sum. This ensures that your kernel does not change the overall brightness.
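A minimal sketch of that normalization with the OpenCV Java API, assuming the gaborKernel Mat from the question:

Scalar kernelSum = Core.sumElems(gaborKernel);      // sum of all kernel coefficients
double sum = kernelSum.val[0];
if (sum != 0) {
    // divide every coefficient by the sum so the kernel sums to 1
    gaborKernel.convertTo(gaborKernel, -1, 1.0 / sum);
}
Imgproc.filter2D(intermediate, output, -1, gaborKernel);

(Depending on psi, a Gabor kernel can sum to a value close to zero; in that case dividing by the sum of absolute values is the safer choice.)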
I am using the com.otaliastudios.cameraview.CameraView component from the otaliastudios library. I need to convert some of the frames to Mat objects to process them with OpenCV. How can I convert an otaliastudios Frame object into an OpenCV Mat object?
Edit: The frame class I am using is located here: github.com/natario1/CameraView/blob/master/cameraview/src/main/java/com/otaliastudios/cameraview/Frame.java
How can I know which Image format this is? Does it make a difference?
You need to know the source format of your phone camera frame.
The Frame object contains a byte[] data field. This field is probably in the same ImageFormat as your camera. The two most common formats are NV21 and YUV_420_888.
A YUV format is composed of a luminance component (Y) and a chrominance component (U-V).
Typically the relation (and consequently the real bit/byte size) of these two components is defined by schemes that reduce the chrominance component, because human eyes are more sensitive to luminance variations than to color variations (see Chroma Subsampling). The reduction is expressed by a set of numbers like 4:2:0.
In this case the part related to chrominance is half the size of the luminance.
So the byte buffer of a Frame probably has a luminance part of width × height bytes and a chrominance part of width × (height/2) bytes; for example, a 640×480 NV21 frame occupies 640×480 + 640×240 = 460,800 bytes. This means that the byte size depends heavily on the image format you are acquiring, and you have to size the Mat and choose the CvType accordingly.
You have to allocate a Mat that has the same size as your frame and put the data into it (from this answer):
// one 8-bit channel; height * 3/2 rows: height rows of Y followed by height/2 rows of chroma
mYuv = new Mat(getFrameHeight() + getFrameHeight() / 2,
        getFrameWidth(), CvType.CV_8UC1);
....
// copy the raw frame bytes into the Mat
mYuv.put(0, 0, data);
And then you have your Mat. If you also need to convert it to an RGB format, check the bottom of this page.
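For instance, if the frame turns out to be NV21 (the usual default of the older Android camera API), the conversion could look like this sketch; the exact COLOR_YUV2RGBA_* constant depends on the real source format:

Mat mRgba = new Mat();
// NV21 -> RGBA; use another COLOR_YUV2RGBA_* constant if the source layout differs
Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_YUV2RGBA_NV21);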
Hope this will help you.
My issue is that during smooth scaling applied to a Skia canvas (with the concat method), the text appears to scale in "spurts", non-uniformly. The issue is particularly evident on the Android platform with the FreeType 2 back-end.
I believe this is how general text scaling works in Skia: first apply the text size to the font engine, then extract the glyph bitmap and transform it with the "remainder" matrix to achieve the desired final size. But somehow that final remaining scaling is not applied, which results in these spurts during the transition between integral text sizes. The same thing with the pure Java/Android canvas appears to work impeccably (text scales smoothly).
My question is: how can I fix that behavior? Maybe there is some build configuration flag I could tweak, maybe an SkPaint runtime flag?
Skia revision is m59.
I don't know Skia, but generally when I see this behavior with scaling text, it's because you're casting your scaling float to an int.
float scale = someValue;
int someOtherVar = (int) scale;     // truncation: the fractional part of the scale is lost here
// ... some scaling math on someOtherVar ...
text.setScale(someOtherVar);        // the scale only changes when the int value changes
This will cause the described behavior.
Never convert any scaling variables to ints until the very last step.
When painting text, try setting the linear text flag (Paint.setLinearText / SkPaint::setLinearText). This causes Skia to render the text as a path before applying transformations. My testing shows that this makes the scaling smooth.
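A small sketch of what that looks like from the Android Java side, assuming you already have a Canvas and draw coordinates (canvas, x and y are placeholders here):

Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
textPaint.setLinearText(true);      // hint to render glyphs without per-size snapping so scaling stays smooth
canvas.drawText("smoothly scaled text", x, y, textPaint);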
Does anybody know how to modify the picture size using OpenCV for Android ?
It seems that the sizes are set to a maximum that I haven't managed to change.
Using the tutorial ImageManipulations which is based on JavaCameraView, here are the maximum resolutions that I can get:
camera Preview Size. Width: 960 Height : 720
camera Picture Size. Width: 640 Height : 480
The problem is that I need a much higher resolution for the picture (I don't care about the preview size).
Maybe there's an answer on the OpenCV forum, but I can't access it since there seems to be maintenance work going on over there (OpenCVForum).
You can resize a Mat as follows:
Size szSource = new Size(640, 480);
Size szResized = new Size(2592, 1944);
Mat mSource = new Mat(szSource, CvType.CV_8U);   // read or fill something into mSource
Mat mResized = new Mat();
// mSource -> your source image, mResized -> the upscaled result
Imgproc.resize(mSource, mResized, szResized, 0, 0, Imgproc.INTER_NEAREST);
interpolation – the interpolation method, which can be any of the following:
INTER_NEAREST - a nearest-neighbor interpolation
INTER_LINEAR - a bilinear interpolation (used by default)
INTER_AREA - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.
INTER_CUBIC - a bicubic interpolation over 4x4 pixel neighborhood
INTER_LANCZOS4 - a Lanczos interpolation over 8x8 pixel neighborhood
For further reference please see this.
From this post I took the following code. It crops a region from an original image using OpenCV4Android.
Mat uncropped = getUncroppedImage();
Rect roi = new Rect(x, y, width, height);
Mat cropped = new Mat(uncropped, roi);
This works fine, but imagine that the memory for the Mat returned by getUncroppedImage is allocated only once, while the memory for the cropped image is re-allocated all the time. Is there a way to crop a region from an OpenCV Mat without using the Mat constructor?
@Matthias, the 'cropped' image in your code points to the same memory as 'uncropped'. No memory is reallocated. To test this, you can change the content of the cropped image (set it to white, for example) and you will see that the content of uncropped changes as well.
In OpenCV, when you want two images to have the same content but different memory, you have to say so explicitly, i.e. use functions like copyTo or clone. OpenCV tries to avoid memory reallocation and copying whenever possible.
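For example, a quick sketch in the Java API reusing the names from the question:

Mat uncropped = getUncroppedImage();
Mat cropped = new Mat(uncropped, new Rect(x, y, width, height));
cropped.setTo(new Scalar(255));     // also turns the ROI white inside 'uncropped' - same memory
Mat independent = cropped.clone();  // clone() allocates new memory and copies the pixels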
Yes, you can just use a subimage:
Mat uncropped = getUncroppedImage();
Rect roi = new Rect(x, y, width, height);
Mat cropped = uncropped(roi);
That way, cropped uses the same image data array as uncropped, but only in the subimage area. A new matrix header is created, but the pixel data array (which is what's expensive) isn't.
Nearly all OpenCV functions can work on subimages because they're just like normal images with a different width step (the byte length of a single row).
Edit: this is C++ syntax, not sure how Android OpenCV differs!
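In the Android/Java bindings, the equivalent of the C++ sub-header appears to be Mat.submat (or the Mat(Mat, Rect) constructor from the question); a sketch:

Mat uncropped = getUncroppedImage();
Rect roi = new Rect(x, y, width, height);
Mat cropped = uncropped.submat(roi);   // header only; the pixel data is shared with 'uncropped'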
I try to visualize the gradients and angles of an image computed by the HOGDescriptor of the OpenCV library for Android. At the beginning I have a 3-channel image Mat() with 8-bit unsigned ints (CV_8UC3). The result of the computation is a Mat() (CV_32FC2) of the gradients and a Mat() (CV_8UC2) of the angles. How can I visualize these results? What do the values represent? Why does the angle Mat() have 2 channels? Are the 2 channels of the gradient Mat() the x and y components of the gradient? I can't find documentation for the computeGradient method.
The HOG descriptor is a histogram of oriented gradients: it is a histogram where each bin represents the votes for gradients in the corresponding orientation.
In order to compute this descriptor, you should first convert your 3-channel color image into a grayscale image:
cv::cvtColor(src, gray, CV_BGR2GRAY);
The result of "ComputeGradient" method is for exemple two images (same size as the original): x-component and y-component.
You should then be able to compute for each pixel the gradient magnitude and orientation.
mag=sqrt(x*x+y*y)
alpha=atan(y/x)
Then you can fill your histogram. Note that the HOG descriptor is computed over blocks and cells. See this for more detail.
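If the goal is just to visualize magnitude and orientation, one option is to compute the gradients yourself and display the magnitude; here is a sketch using the Android Java API (Sobel stands in for HOGDescriptor's internal gradient computation, and rgbImage is a placeholder for your input):

Mat gray = new Mat();
Imgproc.cvtColor(rgbImage, gray, Imgproc.COLOR_BGR2GRAY);
gray.convertTo(gray, CvType.CV_32F);

Mat gx = new Mat(), gy = new Mat();
Imgproc.Sobel(gray, gx, CvType.CV_32F, 1, 0);   // x-component of the gradient
Imgproc.Sobel(gray, gy, CvType.CV_32F, 0, 1);   // y-component of the gradient

Mat mag = new Mat(), angle = new Mat();
Core.cartToPolar(gx, gy, mag, angle, true);     // magnitude and angle (in degrees) per pixel

// scale the magnitude to 8 bit so it can be shown as a grayscale image
Mat magVis = new Mat();
Core.normalize(mag, magVis, 0, 255, Core.NORM_MINMAX);
magVis.convertTo(magVis, CvType.CV_8U);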