SIGSEGV on OpenCV JNI from Android

I'm trying to run a piece of code through OpenCV Java, then pass the Mat object to OpenCV JNI code which runs Canny edge detection on it and returns the Mat. But I'm repeatedly getting a SIGSEGV when the app launches, and I'm unsure why:
09-23 00:30:19.501 20399-20547/com.example.opencv.opencvtest A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x3 in tid 20547 (Thread-7450)
The Java code segment in question is:
@Override
public void onCameraViewStarted(int width, int height) {
    // Everything initialized
    mGray = new Mat(height, width, CvType.CV_8UC4);
    mGauss = new Mat(height, width, CvType.CV_8UC4);
    mCanny = new Mat(height, width, CvType.CV_8UC4);
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    mGray = inputFrame.rgba();
    Imgproc.GaussianBlur(mGray, mGauss, new Size(), 5);
    // This works perfectly fine:
    // Imgproc.Canny(mGauss, mCanny, 0, 20);
    // But this causes a SIGSEGV:
    nativeCanny(mGauss.getNativeObjAddr(), mCanny.getNativeObjAddr());
    return mCanny;
}
The JNI code is:
extern "C" {
JNIEXPORT jboolean JNICALL
Java_com_example_opencv_opencvtest_MainActivity_nativeCanny(JNIEnv *env, jobject instance, long iAddr, long oAddr) {
cv::Mat* blur = (cv::Mat*) iAddr;
cv::Mat* canny = (cv::Mat*) oAddr;
// This line is causing the SIGSEGV because if I comment it,
// everything works (but Mat* canny is empty so shows up black screen)
Canny(*blur, *canny, 10, 30, 3 );
return true;
}
}
Any idea why this is happening? I've spent the better part of the day trying to figure out why this is breaking, but I've made no headway other than isolating the problematic statements.
EDIT: From the comments
I think it was an error with the initialization of mCanny. If I change the JNI call to Canny(*blur, *blur, 10, 30, 3); and then in Java return mGauss instead of mCanny, it works fine. This fixes it for the moment, but I'm honestly still unsure why mCanny was causing the SIGSEGV.

SIGSEGV means you tried to read or write unmapped memory. The fault address is 3; an address that close to 0 almost always means you dereferenced a null pointer (here, null plus a small field offset). My guess is that either mGauss or mCanny had 0 for its native object addr.
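If that guess is right, a quick check on the Java side will confirm it without crashing (a minimal sketch using the fields from the question; the log tag is made up):

long gaussAddr = mGauss.getNativeObjAddr();
long cannyAddr = mCanny.getNativeObjAddr();
if (gaussAddr == 0 || cannyAddr == 0) {
    // One of the Mats was never initialized (or was released):
    // calling into JNI now would dereference a near-null pointer.
    Log.e("NativeCanny", "bad Mat addr: gauss=" + gaussAddr + ", canny=" + cannyAddr);
    return mGray; // show the unprocessed frame instead of crashing
}
nativeCanny(gaussAddr, cannyAddr);
return mCanny;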

Related

Is there an implicit limit on DNN input size in OpenCV Android?

I trained a Caffe model which does NOT specify input sizes. Then I ran it on Android using OpenCV. The code is below.
private void runEntireImageTest(Mat inY) {
    float scaleFactor = 1.0f / 255.0f;
    Scalar mean = new Scalar(0);
    Mat resized = new Mat();
    // (550, 441) runs OK (smaller sizes also run fine).
    // (551, 441) and (550, 442) crash.
    // (441, 550) crashes.
    Imgproc.resize(inY, resized, new Size(550, 441));
    Mat segBlob = Dnn.blobFromImage(resized, scaleFactor, resized.size(), mean, false, false);
    mNet.setInput(segBlob);
    // if the input size is above some value, the crash happens here
    Mat lastLayer = mNet.forward();
    Mat outY = lastLayer.reshape(1, 1);
}
As written in the comments, there seems to be some intrinsic or implicit limit on the input size. (550, 441) ran okay, but (551, 441) causes a SIGSEGV:
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x6d494bd744 in tid 19693 (myappname), pid 19661 (myappname)
I don't think it's a memory problem, because (550, 441) runs fine while (441, 550), with the same number of pixels, crashes. What is the cause of this problem?

OpenCV Native Android cvtColor crash

I need to convert an image to grayscale and then back to RGBA to be able to draw on it.
Currently I am doing it with two different cvtColor calls, which works fine, although the performance is not good on Android (RGBA -> GRAY -> RGBA).
Getting a gray image from the camera directly is faster, and only having to do one cvtColor call (GRAY -> RGBA) makes a huge difference.
The problem is that the second method makes the app close after a few seconds. The logcat in Android Studio does not show a crash for the app, but it shows some errors with the No Filters option selected. Here is the log: https://pastebin.com/jA7jFSvu. It seems to point to a problem with OpenCV's camera.
Below are the two different pieces of code.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Method 1 - works
    cameraImage = inputFrame.rgba();
    native.exampleProcessImage1(cameraImage.getNativeObjAddr(), cameraImage.getNativeObjAddr());
    return cameraImage;

    // Method 2 - app closes after a few seconds
    cameraImage = inputFrame.gray();
    Mat result = new Mat();
    native.exampleProcessImage2(cameraImage.getNativeObjAddr(), result.getNativeObjAddr());
    return result;
}
And this is my code in C++:
void Java_com_example_native_exampleProcessImage1(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // works!
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    Mat pivot;
    // src is RGBA
    cvtColor(src, pivot, COLOR_RGBA2GRAY);
    cvtColor(pivot, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}

void Java_com_example_native_exampleProcessImage2(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // does not work
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    // src is GRAY
    cvtColor(src, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}
This works as expected on desktop Linux with OpenCV.
Do you know what I am doing wrong? Is there another way to achieve the same thing? Performance is key, in particular on Android devices.
Thank you in advance.
In the second case you have a memory leak: a new result Mat is allocated on every frame and never released, which leaks roughly
~ 3 sec * fps * frame_resolution * 4 bytes
I think the crash happens once memory is full.
You need to call result.release(); somewhere after each exampleProcessImage2 call.
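A minimal sketch of that fix, assuming the same fields and native wrapper as in the question: allocate the output Mat once, reuse it on every frame, and release it when the view stops, so nothing is allocated per frame:

private Mat mResult;

@Override
public void onCameraViewStarted(int width, int height) {
    mResult = new Mat(); // allocated once, reused for every frame
}

@Override
public void onCameraViewStopped() {
    mResult.release(); // free the native buffer exactly once
}

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    cameraImage = inputFrame.gray();
    // cvtColor in the native code writes into the same reused buffer,
    // so no Mat is leaked per frame
    native.exampleProcessImage2(cameraImage.getNativeObjAddr(), mResult.getNativeObjAddr());
    return mResult;
}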

Android OpenCV poor performance with JNI

I have a huge problem with OpenCV 3.10 under Android. I am developing an app which does template matching on a camera preview.
The first approach was to use the OpenCV Java wrapper, which worked okay: one processing cycle took about 3.6 s. To speed this up, I redeveloped the code in C++. For some reason, since then one cycle has taken up to 35 s.
Trying to speed this up and leverage multithreading, I moved the JNI execution to an AsyncTask. Since then, a single execution takes up to 65 s.
I am using the Gradle experimental plugin 0.7.0, which is considered stable, and the most recent NDK (12.1 as of now).
Here's my module build.gradle:
ndk {
    moduleName "OpenCVWrapper"
    ldLibs.addAll(["android", "log", "z"])
    cppFlags.add("-std=c++11")
    cppFlags.add("-fexceptions")
    cppFlags.add("-I" + file("src/main/jni").absolutePath)
    cppFlags.add("-I" + file("src/main/jni/opencv2").absolutePath)
    cppFlags.add("-I" + file("src/main/jni/opencv").absolutePath)
    stl = "gnustl_shared"
    debuggable = "true"
}
productFlavors {
    create("arm") {
        ndk.with {
            abiFilters.add("armeabi")
            String libsDir = file('../openCVLibrary310/src/main/jniLibs/armeabi/').absolutePath + '/'
            ldLibs.add(libsDir + "libopencv_core.a")
            ldLibs.add(libsDir + "libopencv_highgui.a")
            ldLibs.add(libsDir + "libopencv_imgproc.a")
            ldLibs.add(libsDir + "libopencv_java3.so")
            ldLibs.add(libsDir + "libopencv_ml.a")
        }
    }
    create("armv7") {
        ndk.with {
            abiFilters.add("armeabi-v7a")
            String libsDir = file('../openCVLibrary310/src/main/jniLibs/armeabi-v7a/').absolutePath + '/'
            ldLibs.add(libsDir + "libopencv_core.a")
            [... and so on ...]
Here's the Android Java code, which executes in about 3-4 seconds:
// data is the byte[] from the camera
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat input = new Mat(height, width, CvType.CV_8UC3);
Imgproc.cvtColor(yuv, input, Imgproc.COLOR_YUV2RGB_NV12, 3);
yuv.release();
int midPoint = Math.min(input.cols(), input.rows()) / 2;
Mat rotated = new Mat();
Imgproc.warpAffine(input, rotated,
        Imgproc.getRotationMatrix2D(new Point(midPoint, midPoint), 270, 1.0),
        new Size(input.rows(), input.cols()));
input.release();
android.util.Size packageRect = midRect.getSize();
Rect r = new Rect(((rotated.cols() / 2) - (packageRect.getWidth() / 2)),
        ((rotated.rows() / 2) - (packageRect.getHeight() / 2)),
        packageRect.getWidth(), packageRect.getHeight());
Mat cut = new Mat(rotated, r);
Mat scaled = new Mat();
Imgproc.resize(cut, scaled, new Size(323, 339), 0, 0, Imgproc.INTER_AREA);
Imgcodecs.imwrite(getExternalFileName("cutout").getAbsolutePath(), cut);
cut.release();
Mat output = new Mat();
Imgproc.matchTemplate(pattern, scaled, output, Imgproc.TM_CCOEFF_NORMED);
Core.MinMaxLocResult tmplResult = Core.minMaxLoc(output);
findPackage(tmplResult.maxLoc.x + 150);
scaled.release();
output.release();
In turn, here's the C++ code that does exactly the same:
JNIEXPORT void JNICALL Java_at_identum_planogramscanner_ScanActivity_scanPackage(JNIEnv *env, jobject instance, jbyteArray input_, jobject data, jlong output, jint width, jint height, jint rectWidth, jint rectHeight) {
    jbyte *input = env->GetByteArrayElements(input_, NULL);
    jclass resultDataClass = env->GetObjectClass(data);
    jmethodID setResultMaxXPos = env->GetMethodID(resultDataClass, "setMaxXPos", "(I)V");
    jmethodID setResultMinXPos = env->GetMethodID(resultDataClass, "setMinXPos", "(I)V");
    jmethodID setResultMinVal = env->GetMethodID(resultDataClass, "setMinVal", "(F)V");
    jmethodID setResultMaxVal = env->GetMethodID(resultDataClass, "setMaxVal", "(F)V");
    LOGE("Before work");
    Mat convert(height + height / 2, width, CV_8UC1, (unsigned char *) input);
    Mat img(height, width, CV_8UC3);
    cvtColor(convert, img, CV_YUV2RGB_NV12, 3);
    convert.release();
    LOGE("After Colorconvert");
    int midCoord = min(img.cols, img.rows) / 2;
    Mat rot;
    Mat rotMat = getRotationMatrix2D(Point2f(midCoord, midCoord), 270, 1.0);
    warpAffine(img, rot, rotMat, Size(img.rows, img.cols));
    rotMat.release();
    LOGE("After Rotation");
    Rect r(
        (rot.cols / 2 - rectWidth / 2),
        (rot.rows / 2 - rectHeight / 2),
        rectWidth, rectHeight);
    Mat cut(rot, r);
    rot.release();
    LOGE("After Cutting");
    Mat scaled(Size(323, 339), CV_8UC3);
    resize(cut, scaled, Size(323, 339), 0, 0, INTER_AREA);
    cut.release();
    LOGE("After Scaling");
    Mat match(pattern.cols, 1, CV_8UC1);
    matchTemplate(pattern, scaled, match, TM_SQDIFF_NORMED);
    scaled.release();
    LOGE("After Templatematching and normalize");
    double minVal; double maxVal; Point minLoc; Point maxLoc;
    minMaxLoc(match, &minVal, &maxVal, &minLoc, &maxLoc, Mat());
    img.release();
    env->CallVoidMethod(data, setResultMinXPos, minLoc.x);
    env->CallVoidMethod(data, setResultMaxXPos, maxLoc.x);
    // setMinVal/setMaxVal take a float ("(F)V"), so cast explicitly
    env->CallVoidMethod(data, setResultMinVal, (jfloat) minVal);
    env->CallVoidMethod(data, setResultMaxVal, (jfloat) maxVal);
    LOGE("After Calling JNI funcs");
    env->ReleaseByteArrayElements(input_, input, 0);
}
As you can see, it is practically exactly the same work, and I expected it to run a little faster than the Android Java version, but certainly not 10 times slower, and definitely not 20 times slower when run from an AsyncTask.
My best conclusion is that the OpenCV .a archives need some kind of compiler settings (optimization flags, for example) to run as fast as possible. I hope someone can point me in the right direction!
Thanks in advance!
I recently built a real-time face recognition application using OpenCV's Java wrapper, and like you I wanted to squeeze more performance out of it, so I implemented a JNI version. Like in your case, the JNI version turned out to be slower than the Java wrapper version, albeit just a little.
In your case I can see why the performance suddenly suffered. It happens here:
jbyte *input = env->GetByteArrayElements(input_, NULL);
This is slow because GetByteArrayElements generally copies the array from Java to C++. Depending on the camera preview size, that copy can be very significant, especially for real-time processing.
Here's a way to speed up your code: instead of sending the Mat bytes to JNI, send the Mat's pointer address directly.
In Java:
public void processFrame(byte[] data) {
    // width/height are the preview dimensions (as passed to scanPackage);
    // the Mat must be allocated before put() can copy the bytes into it
    Mat raw = new Mat(height + height / 2, width, CvType.CV_8UC1);
    raw.put(0, 0, data); // place the bytes into the Mat
    scanPackage(..., raw.native_obj, ...);
}
where native_obj is the address of the Mat object, of type long.
To convert the jlong back to a Mat in C++, change your jbyteArray input_ parameter to jlong input_:
JNIEXPORT void JNICALL Java_at_identum_planogramscanner_ScanActivity_scanPackage(..., jlong input_, ...) {
    cv::Mat* pframe_addr = (cv::Mat*) input_;
    Mat img(height, width, CV_8UC3);
    cv::cvtColor(*pframe_addr, img, CV_YUV2RGB_NV12, 3);
    /* The rest of your code */
}
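Beyond the array copy, it may also be worth ruling out an unoptimized native build: the ndk block in the question sets debuggable = "true", and debug NDK builds are typically compiled without optimization, which can cost a large constant factor in image-processing loops. Enabling optimization in the same Gradle experimental DSL might look roughly like this (a sketch only; the flags are illustrative):

ndk {
    moduleName "OpenCVWrapper"
    ldLibs.addAll(["android", "log", "z"])
    cppFlags.add("-std=c++11")
    cppFlags.add("-fexceptions")
    cppFlags.add("-O3")      // optimize the hot image-processing loops
    cppFlags.add("-DNDEBUG") // strip assert() overhead
    stl = "gnustl_shared"
    // leave debuggable off when measuring performance
}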

Threshold in Image Processing with OpenCV for Android not working

I am working on an Android application that recognizes characters, using the OpenCV library for image processing.
I first wrote the code in Java with Eclipse, and I am now transferring the code to Android Studio. The problem I am facing is that the threshold line below doesn't seem to have any effect on the camera preview; it shows an ordinary image with no effects.
Here are some of my declarations:
Mat rgba = inputFrame.rgba();
Size sizeRgba = rgba.size();
Mat rgbaInnerWindow;
int rows = (int) sizeRgba.height;
int cols = (int) sizeRgba.width;
int left = cols / 8;
int top = rows / 2;
int width = cols * 3 / 4;
int height = rows * 20 / 100;
rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
And here are the methods I applied:
// Gaussian filter
Imgproc.GaussianBlur(mIntermediateMat, rgbaInnerWindow, new org.opencv.core.Size(7, 7), 0, 3);
// Binarization
Imgproc.threshold(mIntermediateMat, rgbaInnerWindow, 181, 255, Imgproc.THRESH_BINARY);
rgbaInnerWindow.release();
Thank you in advance for any help!
Problem solved. I had to grayscale the image before processing it, so I placed inputFrame.gray() as the first parameter of the GaussianBlur function:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Imgproc.GaussianBlur(inputFrame.gray(), mIntermediateMat, new org.opencv.core.Size(7, 7), 0, 3);
    Imgproc.threshold(mIntermediateMat, mRgba, 181, 255, Imgproc.THRESH_BINARY);
    return mRgba;
}
Also, setting up a new OpenCV project in Android Studio eliminated these errors:
E/LoadedApk: It takes too much time onReceive and the onReceive time is: 28051 ms intent is: Intent { act=android.net.conn.CONNECTIVITY_CHANGE_IMMEDIATE flg=0x4000010 (has extras) }
Skipped 38 frames! The application may be doing too much work on its main thread.
The threshold function works on a single-channel image and outputs another single-channel image. Since your rgbaInnerWindow is a 4-channel Mat whose dimensions don't match that output, its reference is replaced by a newly allocated matrix, so the result never lands in the rgba submat and you don't see it when displaying rgba.
Try this:
Imgproc.threshold(mIntermediateMat, mIntermediateMat, 181, 255, Imgproc.THRESH_BINARY);
Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2RGBA);
cvtColor()'s output will match your rgbaInnerWindow, and rgba will be modified.
You can also use Core.merge() to replicate the single-channel image across all 4 channels and push the result into rgbaInnerWindow.
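That variant might look like this (a minimal sketch reusing the Mats from the question):

// Replicate the thresholded single channel across all 4 RGBA channels,
// writing straight into the submat so the change shows up in rgba.
java.util.List<Mat> channels = java.util.Arrays.asList(
        mIntermediateMat, mIntermediateMat, mIntermediateMat, mIntermediateMat);
Core.merge(channels, rgbaInnerWindow);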

SIGNAL 11 SIGSEGV code=2 crash Android

I'm trying to process bitmaps from frames I grabbed from MediaMetadataRetriever using a native function, but I get a fatal crash saying:
SIGNAL 11 (SIGSEGV) at 0x422d8f20 (code=2)
SIGNAL 11 (SIGSEGV) at 0x42311320 (code=2)
I tried logging to see where it went wrong; it appears that it crashes when I call the native function. Below is the method that calls it:
protected Bitmap processFrame(Bitmap l_frame) {
    WarnC = 'a';
    int[] rgba = mRGBA;
    byte[] src_array = stream;
    ByteArrayOutputStream src_stream = new ByteArrayOutputStream();
    l_frame.compress(Bitmap.CompressFormat.PNG, 0, src_stream);
    src_array = src_stream.toByteArray();
    Log.i("test", "ok");
    WarnC = processcaller.LaneDetection(mFrameWidth, mFrameHeight, src_array, rgba);
    Bitmap bmp = g_frame;
    bmp.setPixels(rgba, 0 /* offset */, mFrameWidth /* stride */, 0, 0, mFrameWidth, mFrameHeight);
    rgba = null;
    src_array = null;
    return bmp;
}
The crash signal comes out right after the Log.i("test", "ok");.
I searched around the net, and most say a segmentation fault might be caused by dereferencing uninitialized pointers or calling functions that do not exist. But scanning through my code, I just can't find any of that. Any pointers?
