I am having a bit of trouble with the following code, which appears to be causing a segmentation fault at the indicated line. I'm trying to create an array of 8-bit unsigned integers to instantiate an OpenCV Mat object with; however, the segfault occurs partway through the loop that populates the array.
It appears to happen at a different iteration each time, leading me to suspect that something is getting deallocated by GC, but I can't determine what.
SignDetector.c
JNIEXPORT void JNICALL Java_org_xxx_detectBlobs(JNIEnv *env, jclass clazz, jintArray in)
{
    jint *contents = (*env)->GetIntArrayElements(env, in, NULL);
    threshold(contents, PIXEL_SAMPLE_RATE);
    detectBlobs(contents);
    (*env)->ReleaseIntArrayElements(env, in, contents, 0);
}
BlobDetector.cpp
void detectBlobs(jint *contents)
{
    LOGD("Call to detectBlobs in BlobDetector.cpp");
    uint8_t *thresholded = (uint8_t*) malloc(frame_size);
    int i;
    for (i = 0; i < frame_size - 1; i++)
        thresholded[i] = (contents[i] == WHITE) ? 0 : 1; // Segfaults partway through this loop.
frame_size is simply the number of pixels in the image, which is also the length of the jintArray in which the image is passed to native code.
Any suggestions?
Managed to resolve this by simply restarting my AVD in the Android emulator. The problem does not appear to occur on a real device either, so I can only conclude that something exploded in the virtual device's RAM.
Related
In my native code I generate a vector of float and need to send it to the Java side by converting it to a byte array (using a little-endian scheme). Later I pass this byte array back and need to convert it to the original float vector. I could not find an exact example, so I wrote the code below, which takes 4 bytes at a time, converts them to a float, and appends the result to the final vector of float. I will not be modifying any of the data, just performing some calculations, so I need this to be fast and to avoid memory copies wherever possible.
Currently it gives me the warning "Using unsigned char for signed value of type jbyte". Can someone guide me on how to proceed?
JNIEXPORT jfloat JNICALL Java_com_xyzxyzxcyzxczxczc(JNIEnv *env, jclass type, jlong hEngineHandle, jbyteArray feature1){
    try {
        PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
        jbyte *f1 = (jbyte *) env->GetByteArrayElements(feature1, NULL);
        if (obj->faceRecognitionByteArraySize == 0) { // Set once for future use, as it is not going to change for my use case
            obj->faceRecognitionByteArraySize = env->GetArrayLength(feature1);
        }
        union UStuff
        {
            float f;
            unsigned char c[4];
        };
        UStuff f1bb;
        std::vector<float> f1vec;
        // Convert every 4 bytes to float using a union
        for (int i = 0; i < obj->faceRecognitionByteArraySize; i += 4) {
            // Going backwards - due to endianness
            // Warning here: "Using unsigned char for signed value of type jbyte"
            f1bb.c[3] = f1[i];
            f1bb.c[2] = f1[i+1];
            f1bb.c[1] = f1[i+2];
            f1bb.c[0] = f1[i+3];
            f1vec.push_back(f1bb.f);
        }
        // release it
        env->ReleaseByteArrayElements(feature1, f1, 0);
        // Work with f1vec data
    }
UPDATES:
As suggested by @Alex, since both the consumer and the producer of the byte array will be C++, there is no need to handle endianness. So the approach I am going to take is as below:
A) On the Java end I initialize a byte[] of the needed length (4 * number of float values)
B) Pass this as a jbyteArray to the JNI function
Now, how do I fill this byte array from the C++ end?
JNIEXPORT void JNICALL Java_com_xyz_FaceRecognizeGenerateFeatureData(JNIEnv *env, jclass type, jlong hEngineHandle, jlong addrAlignedFaceMat, jbyteArray featureData){
    try {
        PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
        Mat *pMat = (Mat *) addrAlignedFaceMat;
        vector<float> vecFloatFeatureData = obj->faceRecognizeGenerateFeatureData(*pMat);
        void* data = env->GetDirectBufferAddress(featureData); // How to fill the byte array with values from vecFloatFeatureData? (If required, I can keep a constant holding the length of the array, or the number of actual float values, i.e. length of array / 4)
C) Later on I need to consume this data again by passing it from Java back to C++, so I pass the jbyteArray to the JNI function:
JNIEXPORT jfloat JNICALL Java_com_xyz_ConsumeData(JNIEnv *env, jclass type, jlong hEngineHandle, jbyteArray feature1){
    try {
        PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
        void* data = env->GetDirectBufferAddress(feature1);
        float *floatBuffer = (float *) data;
        vector<float> vecFloatFeature1Data(floatBuffer, floatBuffer + obj->_faceRecognitionByteArraySize); // _faceRecognitionByteArraySize contains the byte array size, i.e. 4 * no. of floats
Would this work?
Unfortunately, the updated code won't work either.
But first, let's address the answer you gave to @Botje.
java end just stores the data in the database and maybe send this data further to servers
These are two significant restrictions. First, if your database interface takes only byte arrays as blobs, this would prevent you from using DirectByteBuffer. Second, if the array may be sent to a different machine, you must be sure that the floating point values are stored there exactly as on the machine that produced the byte array. Mostly, this won't be a problem, but you had better check before deploying your distributed solution.
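For illustration, here is the kind of sanity check one could compile into both the producer and the consumer before trusting the raw byte layout (a sketch; the helper name is made up):
#include <cstdint>
#include <limits>

// Both sides must agree on the float representation before raw bytes are shared.
static_assert(std::numeric_limits<float>::is_iec559, "float is not IEEE-754 on this platform");
static_assert(sizeof(float) == 4, "float is not 4 bytes on this platform");

// Hypothetical helper: returns true on a little-endian host, so both machines
// can confirm they use the same byte order for the serialized floats.
static bool isLittleEndianHost()
{
    const std::uint32_t probe = 1u;
    return *reinterpret_cast<const std::uint8_t*>(&probe) == 1;
}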
Now, back to your JNI code. There is actually no need to preallocate an array on the Java side. The FaceRecognizeGenerateFeatureData() method can simply return the new byte array it creates:
JNIEXPORT jbyteArray JNICALL Java_com_xyz_FaceRecognizeGenerateFeatureData(JNIEnv *env, jclass type,
        jlong hEngineHandle, jlong addrAlignedFaceMat) {
    PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
    Mat *pMat = (Mat *) addrAlignedFaceMat;
    vector<float> vecFloatFeatureData = obj->faceRecognizeGenerateFeatureData(*pMat);
    jbyte* dataBytes = reinterpret_cast<jbyte*>(vecFloatFeatureData.data());
    size_t dataLen = vecFloatFeatureData.size() * sizeof(vecFloatFeatureData[0]);
    jbyteArray featureData = env->NewByteArray(dataLen);
    env->SetByteArrayRegion(featureData, 0, dataLen, dataBytes);
    return featureData;
}
Deserialization can use the complementary GetByteArrayRegion() and avoid double copy:
JNIEXPORT jfloat JNICALL Java_com_xyz_ConsumeData(JNIEnv *env, jclass type,
        jlong hEngineHandle, jbyteArray featureData) {
    PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
    size_t dataLen = env->GetArrayLength(featureData);
    vector<float> vecFloatFeatureDataNew(dataLen / sizeof(float));
    jbyte* dataBytes = reinterpret_cast<jbyte*>(vecFloatFeatureDataNew.data());
    env->GetByteArrayRegion(featureData, 0, dataLen, dataBytes);
…
Note that with this architecture, you could gain a bit from using DirectByteBuffer instead of a byte array. Your PeopleCounting engine produces a vector which cannot be mapped to an external buffer; on the other side, you can wrap the buffer to fill the vecFloatFeatureDataNew vector without a copy. I believe this optimization would not be significant, but it would lead to more cumbersome code.
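For illustration only, the consuming side could look roughly like this if the data travelled in a direct ByteBuffer (allocated with ByteBuffer.allocateDirect() on the Java side) rather than a byte[]; the entry point name ConsumeDataDirect is hypothetical:
JNIEXPORT void JNICALL Java_com_xyz_ConsumeDataDirect(JNIEnv *env, jclass type,
        jlong hEngineHandle, jobject featureBuffer) {
    PeopleCounting *obj = (PeopleCounting *) hEngineHandle;
    // Only valid for direct buffers; returns NULL / -1 for a plain byte[].
    float *floats = static_cast<float *>(env->GetDirectBufferAddress(featureBuffer));
    jlong nBytes = env->GetDirectBufferCapacity(featureBuffer);
    if (floats == nullptr || nBytes < 0) return;
    size_t count = static_cast<size_t>(nBytes) / sizeof(float);
    // obj can now work on floats[0 .. count) in place: no intermediate vector, no copy.
}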
From https://docs.oracle.com/javase/7/docs/technotes/guides/jni/spec/types.html I see that jbyte is a signed 8-bit type. So it would make sense to use signed char in your union; the warning should then be gone. After that is fixed, there is still the question of whether floats are represented on the native side the same way they are on the Java side.
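A minimal sketch of that suggestion, with the union member typed to match jbyte and the conversion loop otherwise unchanged (the helper name bytesToFloats is made up):
#include <jni.h>
#include <vector>

static std::vector<float> bytesToFloats(const jbyte *src, jsize nBytes)
{
    union UStuff {
        float f;
        jbyte c[4];   // signed 8-bit, same as jbyte, so the assignments below no longer warn
    } u;

    std::vector<float> out;
    out.reserve(nBytes / 4);
    for (jsize i = 0; i + 3 < nBytes; i += 4) {
        // Same reversed byte order as in the original loop
        u.c[3] = src[i];
        u.c[2] = src[i + 1];
        u.c[1] = src[i + 2];
        u.c[0] = src[i + 3];
        out.push_back(u.f);
    }
    return out;
}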
I need to convert an image to grayscale and then back to RGBA to be able to draw in it.
Currently, I am doing it with two different cvtColor calls, which works fine, although the performance is not good on Android (RGBA -> GRAY -> RGBA).
Getting a gray image from the camera directly is faster and only having to do one cvtColor call makes a huge difference (GRAY -> RGBA).
The problem is that the second method makes the app close after a few seconds. The logcat in Android Studio does not show a crash for the app, but it shows some errors with the No Filters option selected. Here is the log https://pastebin.com/jA7jFSvu. It seems to point to a problem with OpenCV's camera.
Below are the two different pieces of code.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Method 1 - works
    cameraImage = inputFrame.rgba();
    native.exampleProcessImage1(cameraImage.getNativeObjAddr(), cameraImage.getNativeObjAddr());
    return cameraImage;

    // Method 2 - app closes after a few seconds
    cameraImage = inputFrame.gray();
    Mat result = new Mat();
    native.exampleProcessImage2(cameraImage.getNativeObjAddr(), result.getNativeObjAddr());
    return result;
}
And this is my code in C++:
void Java_com_example_native_exampleProcessImage1(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // works!
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    Mat pivot;
    // src is RGBA
    cvtColor(src, pivot, COLOR_RGBA2GRAY);
    cvtColor(pivot, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}

void Java_com_example_native_exampleProcessImage2(JNIEnv *env, jobject instance, jlong sourceImage, jlong destImage) {
    // does not work
    Mat &src = *((Mat *) sourceImage);
    Mat &dest = *((Mat *) destImage);
    // src is GRAY
    cvtColor(src, dest, COLOR_GRAY2RGBA);
    // dest is RGBA
    // process
}
This works as expected on Linux with OpenCV.
Do you know what I am doing wrong? Is there another way to achieve the same? Performance is key, in particular for Android devices.
Thank you in advance.
In the second case you have a memory leak: every frame allocates a new Mat that is never released, so after a few seconds the leak amounts to roughly
~ 3 sec * fps * frame_resolution * 4 bytes
I think the crash happens once memory is exhausted.
You need to call result.release(); somewhere after each exampleProcessImage2 call.
I need to rotate every YUV image buffer I receive from the camera by 90 degrees counterclockwise. I found this post with a Java implementation, and that code works fine.
But I've tried to write a native method with the same logic, because I wanted it to work faster.
JNIEXPORT jbyteArray JNICALL Java_com_ndk_example_utils_NativeUtils_rotateFrameBackward
  (JNIEnv *env, jobject obj, jbyteArray arr, jint w, jint h){
    jint arrSize = w*h*3/2;
    jbyte *data, *yuv;
    data = (*env)->GetByteArrayElements(env, arr, JNI_FALSE);
    yuv = (*env)->GetByteArrayElements(env, arr, JNI_FALSE);
    int x, y, i = 0;
    for(x = 0; x < w; x++){
        for(y = h-1; y >= 0; y--){
            yuv[i] = data[y*w+x];
            i++;
        }
    }
    i = arrSize - 1;
    for(x = w-1; x > 0; x = x-2)
    {
        for(y = 0; y < h/2; y++)
        {
            yuv[i] = data[(w*h)+(y*w)+x];
            i--;
            yuv[i] = data[(w*h)+(y*w)+(x-1)];
            i--;
        }
    }
    (*env)->ReleaseByteArrayElements(env, arr, yuv, JNI_ABORT);
    yuv = 0;
    data = 0;
    return arr;
}
When I launch this method on my HTC 816 (v5.1) it works fine, but when I launch the app on a Samsung S3 (v4.3) and a Lenovo P-70 (v4.4.2), the app crashes. In the Android Monitor tab in Android Studio, I saw that memory usage keeps increasing until my app crashes. On my HTC I don't have this problem. Any ideas?
You call GetByteArrayElements twice for arr, into data and yuv, but then only release yuv. You also don't check whether a copy was made: the last parameter is an output parameter, yet you just pass in JNI_FALSE. You shouldn't do that; pass a pointer to a jboolean to receive the value rather than trying to tell the system whether to copy.
You should therefore release both pointers in the end.
Also, if this code works, it means copies are in fact being made: you are reading and writing the same memory area, which would otherwise corrupt the image.
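For reference, a corrected sketch along those lines, written with the C++ JNI calling convention for brevity and reusing the rotation loops from the question; it acquires the input once, writes into a separate destination buffer, and releases everything it obtained:
#include <jni.h>
#include <cstdlib>

extern "C" JNIEXPORT jbyteArray JNICALL
Java_com_ndk_example_utils_NativeUtils_rotateFrameBackward(JNIEnv *env, jobject obj,
        jbyteArray arr, jint w, jint h)
{
    const jint arrSize = w * h * 3 / 2;
    jboolean isCopy = JNI_FALSE;                          // filled in by the VM, not by us
    jbyte *data = env->GetByteArrayElements(arr, &isCopy);
    jbyte *yuv = (jbyte *) malloc(arrSize);               // separate destination buffer

    int i = 0;
    for (int x = 0; x < w; x++)
        for (int y = h - 1; y >= 0; y--)
            yuv[i++] = data[y * w + x];

    i = arrSize - 1;
    for (int x = w - 1; x > 0; x -= 2)
        for (int y = 0; y < h / 2; y++) {
            yuv[i--] = data[(w * h) + (y * w) + x];
            yuv[i--] = data[(w * h) + (y * w) + (x - 1)];
        }

    jbyteArray out = env->NewByteArray(arrSize);
    env->SetByteArrayRegion(out, 0, arrSize, yuv);        // copy the result into a new Java array
    free(yuv);
    env->ReleaseByteArrayElements(arr, data, JNI_ABORT);  // input was only read
    return out;
}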
I'm a newcomer using Eclipse to develop an Android app.
I have an Android project that uses JNI.
But I have a question that I can't solve.
How do I change an int value in my activity when I use a public native void function?
My code is as follows:
ImageActivity.java
int width = 100, height = 0;
public native void FindFeatures(long matRgba1, long matRgba2, long matRgba3, int width, int height);
FindFeatures(M1.getNativeObjAddr(), M2.getNativeObjAddr(), M3.getNativeObjAddr(), width, height);
String ch = ""+ width;
t.setText(ch);
jin_part.cpp
extern "C" {
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_ImageActivity_FindFeatures(JNIEnv*, jobject, jlong Rgba1, jlong Rgba2, jlong Rgba3, jint width, jint height);

JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_ImageActivity_FindFeatures(JNIEnv*, jobject, jlong Rgba1, jlong Rgba2, jlong Rgba3, jint width, jint height)
{
    width = 20;
}
}
I want to change the width value from the JNI side and show it in the activity's text.
After running the code, the displayed result is still the initial value of 100, not 20.
I googled lots of articles and tried them, but failed.
If anyone can help, I'll appreciate it. Thanks in advance.
The value isn't changing because the width variable in your JNI code is passed in by value, not by reference. (In this case, it would be the same if you had a Java implementation instead of a native implementation, since the variable is a primitive type.) You cannot do what you want directly.
Possible work arounds:
Make the method return the value instead of being void.
Create a simple class that holds the integer values that you want to pass back and forth as member variables.
For what it's worth, I think methods with "side-effects" like this are generally considered to be bad form, especially in the Java community. Maybe there's another way to get what you want?
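A minimal sketch of the first workaround, assuming the Java declaration is changed to public native int FindFeatures(long m1, long m2, long m3, int width, int height) and the caller assigns the result back to width:
extern "C"
JNIEXPORT jint JNICALL Java_org_opencv_samples_tutorial2_ImageActivity_FindFeatures(
        JNIEnv *, jobject, jlong Rgba1, jlong Rgba2, jlong Rgba3, jint width, jint height)
{
    // ... work with the Mats behind Rgba1/Rgba2/Rgba3 as before ...
    width = 20;       // local change only; it does not affect the caller's variable
    return width;     // Java side: width = FindFeatures(M1..., M2..., M3..., width, height);
}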
I am working on an image processing project. I have a SurfaceView where I want to show "images" from the JNI side.
I followed this blog and the NDK ANativeWindow API instructions to get the pointer to the buffer and update it from the C side.
I got the code to run, but my SurfaceView is not updating (not showing any image...). Also, the surfaceChanged callback is not called when the buffer is updated.
Here is what I am doing:
JNI SIDE :
/*
 * Class:     com_example_myLib
 * Method:    renderImage
 * Signature: (Landroid/view/Surface;)V
 */
JNIEXPORT void JNICALL Java_com_example_myLib_renderImage
  (JNIEnv *env, jobject mjobject, jobject javaSurface) {
#ifdef DEBUG_I
    LOGI("renderImage attempt !");
#endif
    // load an ipl image. code tested and works well with ImageView.
    IplImage *iplimage = loadMyIplImage();
    int iplImageWidth = iplimage->width;
    int iplImageHeight = iplimage->height;
    char *javalBmpPointer = (char *) malloc(iplimage->width * iplimage->height * 4);
    int _javaBmpRowBytes = iplimage->width * 4;
    // code tested and works well with ImageView.
    copyIplImageToARGBPointer(iplimage, javalBmpPointer, _javaBmpRowBytes,
            iplimage->width, iplimage->height);
#ifdef DEBUG_I
    LOGI("ANativeWindow_fromSurface");
#endif
    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);
#ifdef DEBUG_I
    LOGI("Got window %p", window);
#endif
    if (window != 0) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
#ifdef DEBUG_I
            LOGI("ANativeWindow_lock %p", buffer.bits);
#endif
            // assumes buffer.stride == width; otherwise the copy must be done row by row
            memcpy(buffer.bits, javalBmpPointer, iplimage->width * iplimage->height * 4); // ARGB_8888
            ANativeWindow_unlockAndPost(window);
        }
        ANativeWindow_release(window);
    }
    free(javalBmpPointer);
}
java SIDE :
// every time that I want to reload the image:
renderImage(mySurfaceView.getHolder().getSurface());
Thanks for your time and help!
One of the most common problems leading to a blank SurfaceView is setting a background for the View. The View contents are intended to be a transparent "hole", used only for layout. The Surface contents are on a separate layer, below the View layer, so they are only visible if the View remains transparent.
The View background should generally be disabled, and nothing drawn on the View, unless you want some sort of "mask" effect (like rounded corners).