Android floodFill OpenCV UnsatisfiedLinkError

Story:
On the Android screen I display a picture with a ruler in it. I click inside the ruler and flood-fill it like this:
(1) Imgproc.floodFill(image0, mask, seed, new Scalar(20,40,60));
Unfortunately the results are bad, because the contours of the ruler are not closed and the picture is filled nearly completely.
The floodfill.cpp sample on my desktop PC returned better results, so I tried to do the equivalent on Android like this:
(2) Imgproc.floodFill(image0, mask, seed, new Scalar(50,70,90), rect, new Scalar(20,20,20), new Scalar(20,20,20), Imgproc.FLOODFILL_FIXED_RANGE);
Here I get the UnsatisfiedLinkError.
How is this possible? Imgproc.floodFill(...) offers four different variations (with different parameters), and only the first one (number 1) works. All the others throw UnsatisfiedLinkError.
Question:
Does this mean that the other variations are offered but just not implemented?
Edit: The error occurs here in line 96, and the logcat output can be found here.
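A minimal sketch of the extended overload, in case it helps once the linking problem is solved. The names image0 and seed come from the question; everything else is illustrative. One common source of errors with this variant is the mask, which must be a CV_8UC1 Mat two pixels larger than the image in each dimension. (As for the UnsatisfiedLinkError itself: if the Java wrapper declares a native method that is not actually present in the bundled libopencv_java, calling it fails with exactly this error, so a library/wrapper version mismatch is one possible explanation.)

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// Sketch: flood fill with a fixed range, mirroring the floodfill.cpp sample.
// Assumes image0 is an 8-bit 3-channel Mat and seed is a Point inside the ruler.
static void fillRuler(Mat image0, Point seed) {
    // The mask must be 2 pixels larger than the image and of type CV_8UC1.
    Mat mask = Mat.zeros(image0.rows() + 2, image0.cols() + 2, CvType.CV_8UC1);
    Rect rect = new Rect();
    int flags = 4                              // 4-connectivity
            | Imgproc.FLOODFILL_FIXED_RANGE    // compare against the seed pixel, not the neighbour
            | (255 << 8);                      // value written into the mask
    Imgproc.floodFill(image0, mask, seed, new Scalar(50, 70, 90),
            rect, new Scalar(20, 20, 20), new Scalar(20, 20, 20), flags);
}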

Related

OpenCV findContours in Android seems much slower than findContours in Python. Do you have any suggestions to improve the algorithm speed?

This is the first time I'm asking for help here, so I will try to be as precise as possible in my question.
I am trying to develop a shape detection app for Android.
I first identified an algorithm that works for my case by playing with Python. Basically, for each frame I do this:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_color, upper_color)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # here I filter my results
With this algorithm I am able to run the analysis in real time on videos with a frame rate of 120 fps.
So I tried to implement the same algorithm in Android Studio, doing the following for each frame:
Imgproc.cvtColor(frameInput, tempFrame, Imgproc.COLOR_BGR2HSV);
Core.inRange(tempFrame,lowColorRoi,highColorRoi,tempFrame);
List<MatOfPoint> contours1 = new ArrayList<MatOfPoint>();
Imgproc.findContours(tempFrame /*.clone()*/, contours1, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint c : contours1) {
    // here I filter my results
}
and I see that the findContours call alone takes 500-600 ms per iteration (I noticed that it takes even longer using tempFrame.clone()), which means the analysis runs at only about 2 fps.
Of course this speed is not acceptable. Do you have any suggestions on how to improve it? 30-40 fps would already be a good target for me.
I will really appreciate any help from you all. Many thanks in advance.
I would suggest trying to do your shape analysis on a lower-resolution version of the image, if that is acceptable. I often see timing directly proportional to the number of pixels and the number of channels of the image - so if you can halve the width and height it could be a 4 times performance improvement. If that works, likely the first thing to do is a resize; then all subsequent calls have a smaller burden.
Next, be careful using OpenCV in Java/Kotlin because there is a definite cost to marshalling over the JNI interface. You could write the majority of your code in native C++, and then make just a single call across JNI to a C++ function that handles all of the shape analysis at once.
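A minimal sketch of the resize-first idea, assuming a half-resolution analysis is acceptable for the shapes involved. frameInput, lowColorRoi and highColorRoi are the names from the question; everything else is illustrative.

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// Sketch: run the same HSV/inRange/findContours pipeline on a half-resolution copy.
static List<MatOfPoint> findContoursDownscaled(Mat frameInput, Scalar lowColorRoi, Scalar highColorRoi) {
    Mat small = new Mat();
    Imgproc.resize(frameInput, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA); // quarters the pixel count
    Imgproc.cvtColor(small, small, Imgproc.COLOR_BGR2HSV);
    Core.inRange(small, lowColorRoi, highColorRoi, small);
    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(small, contours, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
    // Contour coordinates are now in the downscaled space; scale them back up if needed.
    return contours;
}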

JavaCV Convert color HSV2RGB very slow

I have recently started with JavaCV on Android for camera preview image processing.
Basically, I take the camera preview, do some processing, convert it to HSV to modify some colors, and then I want to convert it to RGBA to fill a bitmap.
Everything works normally, but quite slow. In order to find the slowest part I made some measurements, and to my surprise found this line:
cvCvtColor( hsvimage, imageBitmap, CV_HSV2RGB); //<-- 50msecs
where hsvimage is a 3-channel IplImage and imageBitmap is a 4-channel image. (The conversion is correct and sets the alpha channel to 255, giving an opaque bitmap as expected.)
Just for comparison, the following two lines take only 3 msec:
cvCvtColor(yuvimage, bgrimage, CV_YUV2BGR_NV21);
cvCvtColor(bgrimage, hsvimage, CV_BGR2HSV);
(yuvimage is a 1-channel IplImage; bgrimage and hsvimage are 3-channel IplImages)
It seems as if the first conversion (HSV2RGB) isn't as optimized as the others. I also tested it with a 3-channel destination image, just in case, but with the same results.
I would like to find a way to make it as fast as BGR2HSV. Possible ways:
- Find another "equivalent" constant to CV_HSV2RGB which is faster.
- Get direct access to the H-S-V byte arrays and make my own "fast" conversion in C.
Any idea to solve this issue will be welcome.
--EDIT--
All this is happening with a small 320x240 image, running on a Xiaomi Redmi Note 4. Most of the operations, such as converting color from RGB to HSV, take less than 1 msec. Canny takes 5 msec, flood fill takes about 5 or 6 msec. It is only this HSV2RGB conversion which gives such strange results.
I will try to use OpenCV directly (not JavaCV) to see if this behaviour disappears.
I was using an old JavaCV version (0.11). Now I have updated to 1.3 and the results are nearly the same:
...
long startTime=System.currentTimeMillis();
cvCvtColor(hsvimage, imageBitmap, CV_HSV2RGB);
Log.w(LOG_TAG, "Time:" + String.valueOf(System.currentTimeMillis() - startTime)); //<-- From 45 to 50msec
Log.w(LOG_TAG,"Channels:"+imageBitmap.nChannels()); // <-- returns 4
In this first case I can fill a 32-bit/pixel Android bitmap with the result.
Mat mim4C= new Mat(imageBitmap);
Mat mhsvimage = new Mat(hsvimage);
long startTime = System.currentTimeMillis();
cvtColor(mhsvimage, mim4C, CV_HSV2RGB);
Log.w(LOG_TAG, "Time:" + String.valueOf(System.currentTimeMillis() - startTime)); //<-- From 45 to 50msec
IplImage iim4C=new IplImage(mim4C);
Log.w(LOG_TAG,"Channels:"+iim4C.nChannels()); // <-- returns 3!!!
In this second case, if I try to fill a 32-bit/pixel Android bitmap (after converting mim4C back to an IplImage), it crashes since it has 3 channels.
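For what it's worth, a minimal sketch of the same conversion with the plain OpenCV Java API (which the edit above says will be tried) rather than JavaCV. mhsvimage stands for the HSV Mat from the snippet; the chain HSV -> RGB -> RGBA produces a 4-channel Mat that can be pushed into an Android Bitmap. This only illustrates the call sequence; it is not a claim that it avoids the 50 msec cost.

import android.graphics.Bitmap;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: HSV -> RGB -> RGBA with the OpenCV Java bindings, then into a Bitmap.
static Bitmap hsvToBitmap(Mat mhsvimage) {
    Mat rgb = new Mat();
    Imgproc.cvtColor(mhsvimage, rgb, Imgproc.COLOR_HSV2RGB);   // 3-channel RGB
    Mat rgba = new Mat();
    Imgproc.cvtColor(rgb, rgba, Imgproc.COLOR_RGB2RGBA);       // adds an opaque alpha channel
    Bitmap bmp = Bitmap.createBitmap(rgba.cols(), rgba.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(rgba, bmp);                               // expects CV_8UC4 (or 8UC1/8UC3)
    return bmp;
}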

Strange Delphi Android image assign / image garbled issue

I have some code that works fine on iOS, but which results in completely messed up images on Android. I have found a partial workaround (not calling some code), but it hints that something is terribly wrong:
// some bitmap object buffer for mainthread only
R.BitmapRef := FPersistentBitmapBuffer;
// this TImage now contains the original wrongly sized bitmap
ImageBackground.Bitmap.Assign(R.BitmapRef);
// calculated somewhere
TmpNewWidth := 500;
TmpNewHeight := 500;
// draw the bitmap resized to wanted size
R.BitmapRef.Width := Round(TmpNewWidth);
R.BitmapRef.Height := Round(TmpNewHeight);
R.BitmapRef.Canvas.BeginScene();
R.BitmapRef.Canvas.DrawBitmap(ImageBackground.Bitmap, RectF(0,0,ImageBackground.Bitmap.Width,ImageBackground.Bitmap.Height), RectF(0,0,TmpNewWidth,TmpNewHeight), 1);
R.BitmapRef.Canvas.EndScene();
// assign it back to the image
ImageBackground.Bitmap.Assign(R.BitmapRef);
// THIS code causes the image shown in ImageBackground to look completely garbled ... which would indicate something is sharing memory/references somewhere somehow. There is more odd behavior, like the debugger unhooking (it seems) if the mouse hovers over ImageBackground.Bitmap in the Delphi debugger - no error is reported
R.BitmapRef.Clear(TAlphaColorRec.White);
As can be seen, it is the last line that messes it up. In some tests it has seemed to be enough to remove the line, but not in others. This is my best lead/description/example of the problem.
Here is an example of what a garbled image looks like. Since the images look garbled the same way each time I run the app, I suspect it must somehow be related to the source image, but there is no visual similarity.
My question is: what could be wrong? I am testing the Delphi XE7 trial, so I cannot access the source. It worked flawlessly on iOS using XE4 and XE7, but with Android something is going on. I am thinking it could possibly be some bitmap data that is sharing a reference... Does anyone have any ideas on how to test this theory / possible workarounds?
This looks plainly wrong. I'd suggest that you file a bug report at http://quality.embarcadero.com
Try using CopyFromBitmap instead of the "Assign". This will create a unique copy of the image. You'll also get a new unique image if you call MyBitmap.Map(TMapAccess.Write, MyBitmapData); followed by MyBitmap.UnMap(MyBitmapData);.

OpenCV Android Histogram equalization reducing the range of intensities

In my Android project I am using OpenCV 2.4.8, and the function Imgproc.equalizeHist gives me strange results:
http://imgur.com/a/dhNqH
The first shows the original image, the second is what I get on Android, and the third is what I expected (made with ImageJ from the original using Process->Enhance Contrast).
Code:
Imgproc.equalizeHist(imageROI, imageROI); //src, dst
imageROI is CvType.CV_8UC1.
Am I supposed to do something with imageROI before calling equalizeHist? The OpenCV documentation is mostly C/C++, so I don't know if anything is different for Java on Android.
Any help would be welcome!
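One thing worth checking: ImageJ's Process->Enhance Contrast (without its "Equalize histogram" option) performs contrast stretching, which is a different operation from histogram equalization, so equalizeHist may simply not be the function that reproduces the expected third image. A small sketch under that assumption, with imageROI being the CV_8UC1 Mat from the question:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: two ways of boosting contrast on an 8-bit single-channel ROI.
static void enhance(Mat imageROI) {
    // 1) Histogram equalization - what the question already does.
    Mat equalized = new Mat();
    Imgproc.equalizeHist(imageROI, equalized);

    // 2) Min-max contrast stretching - closer to ImageJ's "Enhance Contrast"
    //    when its "Equalize histogram" option is not ticked (assumption).
    Mat stretched = new Mat();
    Core.normalize(imageROI, stretched, 0, 255, Core.NORM_MINMAX);
}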

OpenCV imwrite, the colors are wrong: Which argument for convertTo/cvtColor do I use?

Okay, so I've been trying and searching online, but I can't find this.
I have:
OpenCV4Android which I am using in a mixed fashion: Java and Native.
a Mat obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
This cannot be changed to native because it is someone else's library and it is entirely built in a non-native way.
Native methods to which I pass this Mat by using mFrame.nativeObj and using:
JNIEXPORT int JNICALL Java_com_... ( ...jlong addr ... )
{
    Mat& mrgba = *((Mat*)addr);
    // do stuff
    imwrite( ..., mrgba );  // imwrite takes the filename first, then the Mat
}
Now... I use this matrix and then I write it with imwrite, all in this native part. Although imwrite does write a file, its colors are all wrong: red where they should be white, green where they should be black, and purple where they should be the color of my table, i.e. yellowish. Now, instead of blindly trying cvtColor and convertTo, I'd rather understand what is going on.
What is the number of channels, type, channel order and whatnot that I should know about a frame that was first retrieved with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
and then passed through JNI to native OpenCV? Effectively, what conversions do I need to do for native imwrite to behave?
For some reason, the image obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
needs to be converted like so:
cvtColor(mrgba, mbgr, CV_YCrCb2RGB, 4);
in order for imwrite to correctly output an image to SD Card.
I don't understand why (imwrite is supposed to accept BGR images), but at least this answers my question.
Try
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_BGRA);
because OpenCV stores images with blue, green and red channels instead of red, green, blue.
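A minimal sketch of the straightforward conversion, assuming the frame really is delivered in RGBA order as the flag name suggests: imwrite expects BGR channel order, so convert before writing (the same cvtColor call works on the native side before cv::imwrite). The path argument is a placeholder.

import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

// Sketch: a frame retrieved with CV_CAP_ANDROID_COLOR_FRAME_RGBA is RGBA,
// while imwrite expects BGR, so convert before writing.
static void saveFrame(Mat mFrame, String path) {
    Mat bgr = new Mat();
    Imgproc.cvtColor(mFrame, bgr, Imgproc.COLOR_RGBA2BGR);
    Highgui.imwrite(path, bgr);   // Highgui.imwrite in the OpenCV 2.4 Java API
}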
