OpenCV Native Android cvtColor crash

I need to convert an image to grayscale and then back to RGBA so that I can draw on it.
Currently I do this with two different cvtColor calls (RGBA -> GRAY -> RGBA), which works fine, although the performance on Android is poor.
Getting a gray image from the camera directly is faster, and only having to do one cvtColor call (GRAY -> RGBA) makes a huge difference.
The problem is that the second method makes the app close after a few seconds. The logcat in Android Studio does not show a crash for the app, but with the No Filters option selected it shows some errors. Here is the log: https://pastebin.com/jA7jFSvu. It seems to point to a problem with OpenCV's camera.
Below are the two different pieces of code.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Method 1 - works
    cameraImage = inputFrame.rgba();
    native.exampleProcessImage1(cameraImage.getNativeObjAddr(), cameraImage.getNativeObjAddr());
    return cameraImage;

    // Method 2 - app closes after a few seconds (only one method is active at a time)
    cameraImage = inputFrame.gray();
    Mat result = new Mat();
    native.exampleProcessImage2(cameraImage.getNativeObjAddr(), result.getNativeObjAddr());
    return result;
}
And this is my code in C++:
void Java_com_example_native_exampleProcessImage1(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // works!
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    Mat pivot;
    // src is RGBA
    cvtColor(src, pivot, COLOR_RGBA2GRAY);
    cvtColor(pivot, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}
void Java_com_example_native_exampleProcessImage2(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // does not work
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    // src is GRAY
    cvtColor(src, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}
This works as expected with OpenCV on Linux.
Do you know what I am doing wrong? Is there another way to achieve the same result? Performance is key, in particular on Android devices.
Thank you in advance.

In the second case you have a memory leak: a new Mat is allocated for every frame and never released, so in the time before the crash you leak roughly
~ 3 sec * fps * frame_resolution * 4 bytes.
I think the crash happens once memory is exhausted.
You need to call result.release(); somewhere after each exampleProcessImage2 call.

Related

SIGSEGV on OpenCV JNI from Android

I'm trying to run a piece of code through OpenCV Java, then pass the Mat object to OpenCV JNI code, which runs Canny edge detection on it and returns the Mat. But somehow I'm repeatedly getting a SIGSEGV when the app launches, and I'm unsure why:
09-23 00:30:19.501 20399-20547/com.example.opencv.opencvtest A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x3 in tid 20547 (Thread-7450)
The Java code segment in question is:
@Override
public void onCameraViewStarted(int width, int height) {
    // Everything initialized
    mGray = new Mat(height, width, CvType.CV_8UC4);
    mGauss = new Mat(height, width, CvType.CV_8UC4);
    mCanny = new Mat(height, width, CvType.CV_8UC4);
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    mGray = inputFrame.rgba();
    Imgproc.GaussianBlur(mGray, mGauss, new Size(), 5);
    // This works perfectly fine
    // Imgproc.Canny(mGauss, mCanny, 0, 20);
    // But this causes a SIGSEGV
    nativeCanny(mGauss.getNativeObjAddr(), mCanny.getNativeObjAddr());
    return mCanny;
}
The JNI code is:
extern "C" {
JNIEXPORT jboolean JNICALL
Java_com_example_opencv_opencvtest_MainActivity_nativeCanny(JNIEnv *env, jobject instance, long iAddr, long oAddr) {
    cv::Mat* blur = (cv::Mat*) iAddr;
    cv::Mat* canny = (cv::Mat*) oAddr;
    // This line is causing the SIGSEGV because if I comment it out,
    // everything works (but Mat* canny is empty, so a black screen shows up)
    Canny(*blur, *canny, 10, 30, 3);
    return true;
}
}
Any idea why this is happening? I've spent the better half of the day trying to figure out why this is breaking but made no headway other than isolating the problematic statements.
EDIT: From the comments
I think it was an error with the initialization of mCanny. If I change the JNI call to Canny(*blur, *blur, 10, 30, 3); and then in Java return mGauss instead of mCanny, it works fine. This fixes it for the moment, but I'm honestly still unsure why mCanny causes the SIGSEGV.
SIGSEGV means you tried to read or write unallocated memory. The fault address is 3; something that close to 0 almost always means you dereferenced a null pointer. My guess is that either mGauss or mCanny had 0 for its native object address.

Android OpenCV poor performance with JNI

I have a huge problem with OpenCV 3.1.0 under Android. I am developing an app which does template matching on a camera preview.
The first approach was to use the OpenCV Java wrapper, which worked okay: one processing cycle took about 3.6 s. To speed this up I reimplemented the code in C++. For some reason, since then the execution of one cycle takes up to 35 s.
Trying to speed this up further and leverage multithreading, I moved the JNI execution to an AsyncTask. Since then, a single execution takes up to 65 s.
I am using the Gradle experimental plugin 0.7.0, which is considered stable, and the most recent NDK (12.1 as of now).
Here's my module build.gradle:
ndk {
    moduleName "OpenCVWrapper"
    ldLibs.addAll(["android", "log", "z"])
    cppFlags.add("-std=c++11")
    cppFlags.add("-fexceptions")
    cppFlags.add("-I" + file("src/main/jni").absolutePath)
    cppFlags.add("-I" + file("src/main/jni/opencv2").absolutePath)
    cppFlags.add("-I" + file("src/main/jni/opencv").absolutePath)
    stl = "gnustl_shared"
    debuggable = "true"
}
productFlavors {
    create("arm") {
        ndk.with {
            abiFilters.add("armeabi")
            String libsDir = file('../openCVLibrary310/src/main/jniLibs/armeabi/').absolutePath + '/'
            ldLibs.add(libsDir + "libopencv_core.a")
            ldLibs.add(libsDir + "libopencv_highgui.a")
            ldLibs.add(libsDir + "libopencv_imgproc.a")
            ldLibs.add(libsDir + "libopencv_java3.so")
            ldLibs.add(libsDir + "libopencv_ml.a")
        }
    }
    create("armv7") {
        ndk.with {
            abiFilters.add("armeabi-v7a")
            String libsDir = file('../openCVLibrary310/src/main/jniLibs/armeabi-v7a/').absolutePath + '/'
            ldLibs.add(libsDir + "libopencv_core.a")
            [... and so on ...]
Here's the Android Java code, which executes in about 3-4 seconds:
// data is byte[] from camera
Mat yuv = new Mat(height + height/2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat input = new Mat(height, width, CvType.CV_8UC3);
Imgproc.cvtColor(yuv, input, Imgproc.COLOR_YUV2RGB_NV12, 3);
yuv.release();
int midPoint = Math.min(input.cols(), input.rows())/2;
Mat rotated = new Mat();
Imgproc.warpAffine(input, rotated,
        Imgproc.getRotationMatrix2D(new Point(midPoint, midPoint), 270, 1.0),
        new Size(input.rows(), input.cols()));
input.release();
android.util.Size packageRect = midRect.getSize();
Rect r = new Rect(((rotated.cols()/2) - (packageRect.getWidth()/2)),
        ((rotated.rows()/2) - (packageRect.getHeight()/2)),
        packageRect.getWidth(), packageRect.getHeight());
Mat cut = new Mat(rotated, r);
Mat scaled = new Mat();
Imgproc.resize(cut, scaled, new Size(323, 339), 0, 0, Imgproc.INTER_AREA);
Imgcodecs.imwrite(getExternalFileName("cutout").getAbsolutePath(), cut);
cut.release();
Mat output = new Mat();
Imgproc.matchTemplate(pattern, scaled, output, Imgproc.TM_CCOEFF_NORMED);
Core.MinMaxLocResult tmplResult = Core.minMaxLoc(output);
findPackage(tmplResult.maxLoc.x + 150);
scaled.release();
output.release();
In turn, here is the C++ code doing exactly the same:
JNIEXPORT void JNICALL Java_at_identum_planogramscanner_ScanActivity_scanPackage(JNIEnv *env, jobject instance, jbyteArray input_, jobject data, jlong output, jint width, jint height, jint rectWidth, jint rectHeight) {
    jbyte *input = env->GetByteArrayElements(input_, NULL);
    jclass resultDataClass = env->GetObjectClass(data);
    jmethodID setResultMaxXPos = env->GetMethodID(resultDataClass, "setMaxXPos", "(I)V");
    jmethodID setResultMinXPos = env->GetMethodID(resultDataClass, "setMinXPos", "(I)V");
    jmethodID setResultMinVal = env->GetMethodID(resultDataClass, "setMinVal", "(F)V");
    jmethodID setResultMaxVal = env->GetMethodID(resultDataClass, "setMaxVal", "(F)V");
    LOGE("Before work");
    Mat convert(height + height/2, width, CV_8UC1, (unsigned char*) input);
    Mat img(height, width, CV_8UC3);
    cvtColor(convert, img, CV_YUV2RGB_NV12, 3);
    convert.release();
    LOGE("After Colorconvert");
    int midCoord = min(img.cols, img.rows)/2;
    Mat rot;
    Mat rotMat = getRotationMatrix2D(Point2f(midCoord, midCoord), 270, 1.0);
    warpAffine(img, rot, rotMat, Size(img.rows, img.cols));
    rotMat.release();
    LOGE("After Rotation");
    Rect r(
        (rot.cols/2 - rectWidth/2),
        (rot.rows/2 - rectHeight/2),
        rectWidth, rectHeight);
    Mat cut(rot, r);
    rot.release();
    LOGE("After Cutting");
    Mat scaled(Size(323, 339), CV_8UC3);
    resize(cut, scaled, Size(323, 339), 0, 0, INTER_AREA);
    cut.release();
    LOGE("After Scaling");
    Mat match(pattern.cols, 1, CV_8UC1);
    matchTemplate(pattern, scaled, match, TM_SQDIFF_NORMED);
    scaled.release();
    LOGE("After Templatematching and normalize");
    double minVal; double maxVal; Point minLoc; Point maxLoc;
    minMaxLoc(match, &minVal, &maxVal, &minLoc, &maxLoc, Mat());
    img.release();
    env->CallVoidMethod(data, setResultMinXPos, minLoc.x);
    env->CallVoidMethod(data, setResultMaxXPos, maxLoc.x);
    env->CallVoidMethod(data, setResultMinVal, minVal);
    env->CallVoidMethod(data, setResultMaxVal, maxVal);
    LOGE("After Calling JNI funcs");
    env->ReleaseByteArrayElements(input_, input, 0);
}
As you can see, it is practically the same work, and I expected it to run a little faster than the Android Java version, but certainly not 10 times slower, and definitely not 20 times slower when run from an AsyncTask.
My best guess is that the OpenCV .a archives need some kind of compiler settings to be as fast as possible. I hope someone can point me in the right direction!
Thanks in advance!
I recently built a real-time face recognition application using OpenCV's Java wrapper, and like you I wanted to squeeze more performance out of it, so I implemented a JNI version. As in your case, the JNI version turned out to be slower than the Java wrapper version, albeit only a little.
In your case I can see why the performance suddenly suffered. It occurs here:
jbyte *input = env->GetByteArrayElements(input_, NULL);
You can read more online about why this is slow: GetByteArrayElements always copies the data from Java to C++. Depending on the camera preview size, that copy can be very significant, especially for real-time processing.
Here's a way to speed up your code: instead of sending the Mat's bytes to JNI, send the Mat's pointer address directly.
In JAVA
public void processFrame(byte[] data) {
    // allocate with the frame's size and type so that put() actually stores the bytes
    Mat raw = new Mat(height + height/2, width, CvType.CV_8UC1);
    raw.put(0, 0, data); // place the bytes into the Mat
    scanPackage(..., raw.native_obj, ...);
}
where native_obj is the address of the Mat object, which has type long.
To convert the jlong back to a Mat in C++, change your jbyteArray input_ parameter to jlong input_:
JNIEXPORT void JNICALL Java_at_identum_planogramscanner_ScanActivity_scanPackage(..., jlong input_, ...) {
    cv::Mat* pframe_addr = (cv::Mat*) input_;
    Mat img(height, width, CV_8UC3);
    cv::cvtColor(*pframe_addr, img, CV_YUV2RGB_NV12, 3);
    /** The rest of your code */
}

Assigning OpenCV Mat object causes a memory leak

I want to assign a frame (Mat object) from a function parameter to an object variable, as shown in the code below. This function is called many times (once for each frame from the video camera), but this line
this->nFrame = frame;
causes a memory leak (when it is commented out there is no error!).
NOTE
The function setCurrentFrame is called inside a JNI function, which is called every time I want to process a frame from the video camera.
The JNI function looks like:
JNIEXPORT jbyteArray JNICALL Java_com_adhamenaya_Native_run(JNIEnv * env,
        jobject obj, jstring faceCascadeFile, jstring noseCascadeFile,
        jstring landmarks, jlong frame) {
    MyClass gsys;
    cv::Mat& inFrame = *(cv::Mat*) frame;
    gsys.setCurrentFrame(inFrame);
    // SOME PROCESSING FOR THE FRAME
    inFrame.release();
    gsys.release();
    ......
    ......
}
And here is the code for the C++ function setCurrentFrame:
void MyClass::setCurrentFrame(cv::Mat& frame) {
    cv::Size2d imgRes;
    float resRatio;
    if (frame.cols > frame.rows) {
        // landscape
        imgRes.width = 640.0f;
        resRatio = frame.cols / 640.0f;
        imgRes.height = floor(frame.rows / resRatio);
    } else {
        // portrait
        imgRes.height = 640.0f;
        resRatio = frame.rows / 640.0f;
        imgRes.width = floor(frame.cols / resRatio);
    }
    // save scaled height, width for further use
    this->frameWidth = nFrame.cols;
    this->frameHeight = nFrame.rows;
    // set frame and increment frameCount
    this->nFrame = frame;
    this->frameCount++;
}
Can you kindly help me solve this problem? I tried to release the frame by calling:
void MyClass::release(void) {
    this->nFrame = cv::Mat();
}
Nothing happened, even like this:
void MyClass::release(void) {
    this->nFrame.release();
}
Still the same error!
Edit
MyClass.h
class MyClass {
public:
    cv::Mat nFrame;
    MyClass();
    ~MyClass();
    void release(void);
    void setCurrentFrame(cv::Mat& frame);
};
In the JNI file, the order of releasing the frame objects can be erroneous:
inFrame.release();
gsys.release();
should be
gsys.release();
inFrame.release();
because when you free the source frame first, the frame reference inside gsys is invalidated.

Android: how to update a SurfaceView from the NDK by updating the Surface buffer?

I am working on an image processing project. I have a SurfaceView where I want to show "images" from the JNI side.
I followed this blog, the NDK ANativeWindow API instructions, to get a pointer to the buffer and update it from the C side.
I got the code to run, but my SurfaceView is not updating (not showing any image). Also, the surfaceChanged callback is not called when the buffer is updated.
Here is what I am doing:
JNI side:
/*
 * Class:     com_example_myLib
 * Method:    renderImage
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_example_myLib_renderImage
        (JNIEnv *env, jobject mjobject, jobject javaSurface) {
#ifdef DEBUG_I
    LOGI("renderImage attempt !");
#endif
    // load an ipl image. code tested and works well with ImageView.
    IplImage *iplimage = loadMyIplImage();
    int iplImageWidth = iplimage->width;
    int iplImageHeight = iplimage->height;
    char *javalBmpPointer = malloc(iplimage->width * iplimage->height * 4);
    int _javaBmpRowBytes = iplimage->width * 4;
    // code tested and works well with ImageView.
    copyIplImageToARGBPointer(iplimage, javalBmpPointer, _javaBmpRowBytes,
            iplimage->width, iplimage->height);
#ifdef DEBUG_I
    LOGI("ANativeWindow_fromSurface");
#endif
    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);
#ifdef DEBUG_I
    LOGI("Got window %p", window);
#endif
    if (window != 0) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
#ifdef DEBUG_I
            LOGI("ANativeWindow_lock succeeded");
#endif
            memcpy(buffer.bits, javalBmpPointer, iplimage->width * iplimage->height * 4); // ARGB_8888
            ANativeWindow_unlockAndPost(window);
        }
        ANativeWindow_release(window);
    }
    free(javalBmpPointer); // avoid leaking the temporary bitmap on every call
}
Java side:
// every time that I want to reload the image:
renderImage(mySurfaceView.getHolder().getSurface());
Thanks for your time and help!
One of the most common problems leading to a blank SurfaceView is setting a background for the View. The View contents are intended to be a transparent "hole", used only for layout. The Surface contents are on a separate layer, below the View layer, so they are only visible if the View remains transparent.
The View background should generally be disabled, and nothing drawn on the View, unless you want some sort of "mask" effect (like rounded corners).

Why does populating this array segfault?

I am having a bit of trouble with the following code, which appears to cause a segmentation fault at the indicated line. I'm trying to create an array of 8-bit unsigned integers to instantiate an OpenCV Mat object with; however, the segfault occurs partway through the loop that populates the array.
It appears to happen at a different iteration each time, leading me to suspect that something is being deallocated by the GC, but I can't determine what.
SignDetector.c
JNIEXPORT void JNICALL Java_org_xxx_detectBlobs(JNIEnv *env, jclass clazz, jintArray in)
{
    jint *contents = (*env)->GetIntArrayElements(env, in, NULL);
    threshold(contents, PIXEL_SAMPLE_RATE);
    detectBlobs(contents);
    (*env)->ReleaseIntArrayElements(env, in, contents, 0);
}
BlobDetector.cpp
void detectBlobs(jint *contents)
{
    LOGD("Call to detectBlobs in BlobDetector.cpp");
    uint8_t *thresholded = (uint8_t*) malloc(frame_size);
    int i;
    for (i = 0; i < frame_size - 1; i++)
        thresholded[i] = (contents[i] == WHITE) ? 0 : 1; // Segfaults partway through this loop.
frame_size is simply the number of pixels in the image, which is also the length of the jintArray in which the image is passed to native code.
Any suggestions?
I managed to resolve this by simply restarting my AVD in the Android emulator. The problem also does not occur on a real device, so I can only conclude that something went wrong in the virtual device's RAM.
