I'm trying to process bitmaps from frames I grabbed with MediaMetadataRetriever using a native function, but I get a fatal crash saying
SIGNAL 11 (SIGSEGV) at 0x422d8f20 (code=2)
SIGNAL 11 (SIGSEGV) at 0x42311320 (code=2)
I added logging to see where it goes wrong; it appears to crash when I call the native function. Below is the method that calls the native function.
protected Bitmap processFrame(Bitmap l_frame) {
    WarnC = 'a';
    int[] rgba = mRGBA;
    byte[] src_array = stream;
    ByteArrayOutputStream src_stream = new ByteArrayOutputStream();
    l_frame.compress(Bitmap.CompressFormat.PNG, 0, src_stream);
    src_array = src_stream.toByteArray();
    Log.i("test", "ok");
    WarnC = processcaller.LaneDetection(mFrameWidth, mFrameHeight, src_array, rgba);
    Bitmap bmp = g_frame;
    bmp.setPixels(rgba, 0 /* offset */, mFrameWidth /* stride */, 0, 0, mFrameWidth, mFrameHeight);
    rgba = null;
    src_array = null;
    return bmp;
}
The crash signal comes right after the Log.i("test", "ok");.
I searched around the net, and most answers say a segmentation fault can be caused by calling something uninitialized or a function that does not exist. But scanning through my code, I just can't find anything like that. Any pointers?
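For reference, here is a guard I could add before the native call if the issue is the output buffer size. This is only a sketch based on an assumption (that LaneDetection writes mFrameWidth * mFrameHeight pixels into rgba), not something confirmed by the post:

    // Hypothetical guard, assuming LaneDetection writes mFrameWidth * mFrameHeight
    // ints into rgba; if the buffer is smaller, the native write would SIGSEGV.
    int pixelCount = mFrameWidth * mFrameHeight;
    if (mRGBA == null || mRGBA.length < pixelCount) {
        mRGBA = new int[pixelCount];
    }
    int[] rgba = mRGBA;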
Related
I trained a Caffe model which does NOT specify input sizes. Then I ran it on Android using OpenCV. The code is shown below.
private void runEntireImageTest(Mat inY) {
    float scaleFactor = 1.0f / 255.0f;
    Scalar mean = new Scalar(0);
    Mat resized = new Mat();
    // (550, 441) runs OK (smaller sizes also run fine).
    // (551, 441) and (550, 442) crash.
    // (441, 550) crashes.
    Imgproc.resize(inY, resized, new Size(550, 441));
    Mat segBlob = Dnn.blobFromImage(resized, scaleFactor, resized.size(), mean, false, false);
    mNet.setInput(segBlob);
    // If the input size is above some value, the crash happens here.
    Mat lastLayer = mNet.forward();
    Mat outY = lastLayer.reshape(1, 1);
}
As written in the comments, there seems to be some intrinsic or implicit limitation on the input size. (550, 441) runs okay, but (551, 441) causes a SIGSEGV:
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x6d494bd744 in tid 19693 (myappname), pid 19661 (myappname)
I don't think it's a memory problem, because (550, 441) runs fine while (441, 550) crashes. What is the cause of this problem?
I need to convert an image to grayscale and then back to RGBA to be able to draw in it.
Currently, I am doing it with two different cvtColor calls, which works fine, although the performance is not good on Android (RGBA -> GRAY -> RGBA).
Getting a gray image from the camera directly is faster and only having to do one cvtColor call makes a huge difference (GRAY -> RGBA).
The problem is that the second method makes the app close after a few seconds. The logcat in Android Studio does not show a crash for the app, but it does show some errors with the No Filters option selected. Here is the log: https://pastebin.com/jA7jFSvu. It seems to point to a problem with OpenCV's camera.
Below are the two different pieces of code.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Method 1 - works
    cameraImage = inputFrame.rgba();
    native.exampleProcessImage1(cameraImage.getNativeObjAddr(), cameraImage.getNativeObjAddr());
    return cameraImage;

    // Method 2 - app closes after a few seconds
    cameraImage = inputFrame.gray();
    Mat result = new Mat();
    native.exampleProcessImage2(cameraImage.getNativeObjAddr(), result.getNativeObjAddr());
    return result;
}
And this is my code in C++:
void Java_com_example_native_exampleProcessImage1(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // works!
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    Mat pivot;
    // src is RGBA
    cvtColor(src, pivot, COLOR_RGBA2GRAY);
    cvtColor(pivot, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}

void Java_com_example_native_exampleProcessImage2(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // does not work
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    // src is GRAY
    cvtColor(src, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}
This works as expected on Linux with OpenCV.
Do you know what I am doing wrong? Is there another way to achieve the same result? Performance is key, particularly on Android devices.
Thank you in advance.
In the second case you have a memory leak, and after a few seconds it amounts to roughly
~ 3 sec * fps * frame_resolution * 4 bytes
I think the crash happens once memory is full.
You need to call result.release(); somewhere after each exampleProcessImage2 call.
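For example, one way to apply this (a sketch, not the asker's exact code; nativeLib stands in for the question's native wrapper) is to keep a single Mat as a member, so it is allocated once and released once instead of leaking a new Mat on every frame:

    private Mat result;

    @Override
    public void onCameraViewStarted(int width, int height) {
        result = new Mat();   // allocated once
    }

    @Override
    public void onCameraViewStopped() {
        result.release();     // released once, no per-frame leak
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat gray = inputFrame.gray();
        nativeLib.exampleProcessImage2(gray.getNativeObjAddr(), result.getNativeObjAddr());
        return result;
    }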
I'm trying to do the most basic of things using JavaCV on Android but still failing.
I want to convert the YUV byte[] array received in Camera.PreviewCallback's onPreviewFrame into a grayscale Mat for further processing.
public void onPreviewFrame(byte[] data, Camera camera) {
    /* Extract grayscale from YUV */
    ByteBuffer gray = ByteBuffer.allocate(data.length * 4);
    IntBuffer intBuffer = gray.asIntBuffer();
    int p;
    int size = width * height;
    for (int i = 0; i < size; i++) {
        p = data[i] & 0xFF;
        intBuffer.put(0xff000000 | p << 16 | p << 8 | p);
    }
    Mat g = new Mat(height, width, CV_8UC3);
    byte test[] = gray.array();
    g.data().put(test); /* crashes here */
From my debug inspection it seems to crash because of a lack of memory. It doesn't give any error trace, just a message like this:
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0xa0c6a000 in
tid 5475 (com.aztech.jcv)
I have tried CV_8UC1 but it still fails. The issue is probably a memory-size mismatch. Can someone please suggest a working alternative? Using the built-in cvCvtColor doesn't decode properly; it gives an all-black Mat with strange lines.
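In case it helps to see the size issue spelled out, here is a minimal sketch (untested, and assuming the preview format is the default NV21, so the first width * height bytes of data are the Y plane) that keeps one byte per pixel all the way through:

    // One byte per pixel: copy width * height Y bytes into a single-channel Mat.
    Mat g = new Mat(height, width, CV_8UC1);
    g.data().put(data, 0, width * height);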
I'm trying to run a piece of code through OpenCV Java, then pass the Mat object to OpenCV JNI code which does Canny Edge detection on it and returns the Mat. But somehow, I'm repeatedly getting a SIGSEGV when the app launches and I'm unsure why this is:
09-23 00:30:19.501 20399-20547/com.example.opencv.opencvtest A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x3 in tid 20547 (Thread-7450)
The Java code segment in question is:
@Override
public void onCameraViewStarted(int width, int height) {
    // Everything initialized
    mGray = new Mat(height, width, CvType.CV_8UC4);
    mGauss = new Mat(height, width, CvType.CV_8UC4);
    mCanny = new Mat(height, width, CvType.CV_8UC4);
}

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    mGray = inputFrame.rgba();
    Imgproc.GaussianBlur(mGray, mGauss, new Size(), 5);
    // This works perfectly fine
    // Imgproc.Canny(mGauss, mCanny, 0, 20);
    // But this causes a SIGSEGV
    nativeCanny(mGauss.getNativeObjAddr(), mCanny.getNativeObjAddr());
    return mCanny;
}
The JNI code is:
extern "C" {
JNIEXPORT jboolean JNICALL
Java_com_example_opencv_opencvtest_MainActivity_nativeCanny(JNIEnv *env, jobject instance, long iAddr, long oAddr) {
cv::Mat* blur = (cv::Mat*) iAddr;
cv::Mat* canny = (cv::Mat*) oAddr;
// This line is causing the SIGSEGV because if I comment it,
// everything works (but Mat* canny is empty so shows up black screen)
Canny(*blur, *canny, 10, 30, 3 );
return true;
}
}
Any idea why this is happening? I've spent the better part of the day trying to figure out why this is breaking, but have made no headway other than isolating the problematic statements.
EDIT: From the comments
I think it was an error with the initialization of mCanny. If I change the JNI call to Canny(*blur, *blur, 10, 30, 3); and then return mGauss instead of mCanny in Java, it works fine. This fixes it for the moment, but I'm honestly still unsure why mCanny is causing the SIGSEGV.
SEGV means you tried to read/write unallocated memory. The fault address is 3. Something that close to 0 almost always means you dereferenced a null pointer. My guess is that either mGauss or mCanny had a 0 for their native object addr.
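A quick way to check that guess from the Java side (a sketch added here, not from the original post) is to log the native addresses right before the call:

    long gaussAddr = mGauss.getNativeObjAddr();
    long cannyAddr = mCanny.getNativeObjAddr();
    Log.d("nativeCanny", "gauss=" + gaussAddr + ", canny=" + cannyAddr);
    if (gaussAddr != 0 && cannyAddr != 0) {
        nativeCanny(gaussAddr, cannyAddr);   // only call into native code with valid addresses
    }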
I'm looking for a way to get the drawing buffer very quickly from an Android GLSurfaceView.
Even though I know glReadPixels can do this job, it is too slow for grabbing the drawing buffer.
I want to read the buffer while maintaining 30 fps.
The ANativeWindow API seems like what I am looking for.
ANativeWindow api performance
I couldn't find any example of the ANativeWindow API used with a GLSurfaceView.
My procedure:
1. Send the GLSurfaceView surface to JNI code (using GLSurfaceView.getHolder().getSurface()).
2. Get the window handle using ANativeWindow_fromSurface.
3. Set the window buffer.
4. Lock the surface and get the window buffer.
5. Do something with this buffer.
6. Unlock and post the window.
I tried the JNI code below, based on the Android "BasicGLSurfaceView" example.
JNIEXPORT void JNICALL Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface(JNIEnv* jenv, jobject obj, jobject surface)
{
    LOG_INFO("Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface");
    if (surface != 0) {
        ANativeWindow *window = ANativeWindow_fromSurface(jenv, surface);
        LOG_INFO("Got window %p", window);
        if (window > 0)
        {
            int width = ANativeWindow_getWidth(window);
            int height = ANativeWindow_getHeight(window);
            LOG_INFO("Got window %d %d", width, height);
            ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
            ANativeWindow_Buffer buffer;
            memset((void*)&buffer, 0, sizeof(buffer));
            int lockResult = -22;
            lockResult = ANativeWindow_lock(window, &buffer, NULL);
            if (lockResult == 0) {
                LOG_INFO("ANativeWindow_locked");
                ANativeWindow_unlockAndPost(window);
            }
            else
            {
                LOG_INFO("ANativeWindow_lock failed error %d", lockResult);
            }
            LOG_INFO("Releasing window");
            ANativeWindow_release(window);
        }
    } else {
        LOG_INFO("surface is null");
    }
    return;
}
ANativeWindow_fromSurface, ANativeWindow_getWidth/getHeight, and ANativeWindow_setBuffersGeometry work well.
But ANativeWindow_lock always fails, returning -22.
Error Message
[Surfaceview] connect: already connected(cur=1, req=2)
I tried this code in the Renderer's onDrawFrame, on the main thread, and in onSurfaceChanged,
but it always returns -22.
Am I calling this API in the wrong place?
Is it possible to use ANativeWindow_lock for a GLSurfaceView?
Here is my example code
Any help will be really appreciated~
Try doing this instead:
http://forums.arm.com/index.php?/topic/15782-glreadpixels/
Hopefully you can use an ANativeWindow for the buffer so you don't have to call gralloc directly...
I'm not sure that the memset operation is needed.
This worked for me:
ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
if (window > 0) {
    unsigned char *data = ... // an RGBA(8888) image
    int32_t w = ANativeWindow_getWidth(window);
    int32_t h = ANativeWindow_getHeight(window);
    ANativeWindow_setBuffersGeometry(window, w, h, WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer buffer;
    int lockResult = -1;
    lockResult = ANativeWindow_lock(window, &buffer, NULL);
    if (lockResult == 0) {
        // write data
        memcpy(buffer.bits, data, w * h * 4);
        ANativeWindow_unlockAndPost(window);
    }
    // free data...
    ANativeWindow_release(window);
}
When you encounter the same error message,
[Surfaceview] connect: already connected
you can resolve it by calling GLSurfaceView.onPause() before calling the native part.
You can then render by locking with ANativeWindow_lock, writing the frame buffer, and calling ANativeWindow_unlockAndPost in native code.
// Java
SurfaceHolder holder = mGLSurfaceView.getHolder();
mPreviewSurface = holder.getSurface();
mGLSurfaceView.onPause();
camera.nativeSetPreviewDisplay(mPreviewSurface);
I call GLSurfaceView.onPause() only to release the EGL context. The GLSurfaceView will then be rendered by native code.
GLSurfaceView
At least I think this answers the question "Is it possible to use ANativeWindow_lock for GLSurfaceView?":
Yes, it is.
But this way you can render the view only through the window buffer. You cannot use the OpenGL API, because GLSurfaceView.Renderer.onDrawFrame() is not called after the GLSurfaceView is paused. It still gives better performance than using glReadPixels, but it is useless if you need the OpenGL API.
Shader calls should be made on the GL thread, that is, inside onSurfaceChanged(), onSurfaceCreated(), or onDrawFrame().
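For example (a generic sketch, not tied to the code above; mGLSurfaceView is assumed to be your GLSurfaceView instance), work that touches GL state can be posted onto the GL thread with GLSurfaceView.queueEvent():

    mGLSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            // Runs on the GL thread, where the EGL context is current,
            // so shader and other GLES calls are safe here.
        }
    });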