I am building an image processing project on Android. I capture bitmap pictures through the camera and feed them to an OpenCV C++ function through JNI.
First, I tested my OpenCV C++ function using saved bitmap pictures (PNG format), and it was successful.
// in Android, save bitmap
Bitmap bmp = YUV_420_888_toRGB(img, img.getWidth(), img.getHeight());
try {
    FileOutputStream fos = new FileOutputStream(pictureFile);
    bmp.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
    Log.d(TAG, "saved successfully.");
} catch (FileNotFoundException e) {
    Log.d(TAG, "File not found: " + e.getMessage());
} catch (IOException e) {
    Log.d(TAG, "Error accessing file: " + e.getMessage());
}
// in opencv c++ function
Mat im = imread("/Users/Jun/Downloads/20170227/P9/1488167433596_frame.PNG");
// processing im
Then I feed each captured bitmap picture to the same OpenCV C++ function. However, the detected result is totally different. I think there must be some error when converting the Java bitmap to an OpenCV Mat in C++ through JNI. Please find the converting code below:
//Java side:
public static int[] detector(Bitmap bitmap) {
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    int[] pixels = new int[w * h];
    bitmap.getPixels(pixels, 0, w, 0, 0, w, h);
    return detect(pixels, w, h);
}

private static native int[] detect(int pixels[], int w, int h);
// C++ side:
JNIEXPORT jintArray JNICALL Java_com_example_jun_helloworld_JNIUtils_detect(JNIEnv *env, jclass cls, jintArray buf, jint w, jint h) {
    jint *cbuf = env->GetIntArrayElements(buf, nullptr);
    if (cbuf == NULL) {
        return NULL;
    }
    Mat im(h, w, CV_8UC4, (unsigned char *) cbuf);
    // processing im
The two "im" Mats must be different. Can someone tell me what's wrong in the conversion? Thanks.
In your code, you cast an int pointer to a char pointer, so you change the way your code treats your data.
Take a look here:
#include <stdio.h>

int main() {
    // what you have in the code is an array of ints
    int iarray[5] = {1, 2, 3, 4, 5};
    int *iarray_ptr = iarray;
    // and you cast the int pointer to a char pointer
    char *carray_ptr = (char *) iarray_ptr;
    // so, you simply skip some values because of
    // pointer arithmetic; your data are shifted
    for (int i = 0; i < 5; i++) {
        printf("int: %p, char %p\n", (void *) (iarray_ptr + i), (void *) (carray_ptr + i));
    }
    // you can always do something like this
    char carray2[5];
    for (int p = 0; p < 5; p++) {
        // you can lose precision here!
        carray2[p] = (char) iarray[p];
    }
    // and then, you can simply pass &carray2
    // to your code
    return 0;
}
If you run the code, you can clearly see the difference in pointer arithmetic:
./pointer
int: 0x7fff51d859f0, char 0x7fff51d859f0
int: 0x7fff51d859f4, char 0x7fff51d859f1
int: 0x7fff51d859f8, char 0x7fff51d859f2
int: 0x7fff51d859fc, char 0x7fff51d859f3
int: 0x7fff51d85a00, char 0x7fff51d859f4
After casting to char *, you will simply "scatter" your data.
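To follow that advice in the OpenCV setting, here is a minimal sketch of an explicit conversion (my addition, not the asker's code: the helper name argbPixelsToBgr is made up, and OpenCV 3 constant names are assumed). On little-endian Android the packed 0xAARRGGBB ints from Bitmap.getPixels() sit in memory as B, G, R, A, so a CV_8UC4 Mat over that buffer is BGRA, while imread() produces a 3-channel BGR Mat; converting explicitly makes the two paths comparable:

#include <jni.h>
#include <opencv2/opencv.hpp>

using namespace cv;

// Build a Mat with the channel layout imread() would give (3-channel BGR),
// instead of reinterpreting the jint buffer and hoping the layouts match.
Mat argbPixelsToBgr(JNIEnv *env, jintArray buf, int w, int h) {
    jint *cbuf = env->GetIntArrayElements(buf, nullptr);
    // little-endian: each packed 0xAARRGGBB int lies in memory as B, G, R, A
    Mat bgra(h, w, CV_8UC4, (unsigned char *) cbuf);
    Mat bgr;
    cvtColor(bgra, bgr, COLOR_BGRA2BGR); // allocates its own copy and drops alpha
    env->ReleaseIntArrayElements(buf, cbuf, JNI_ABORT); // release without copying back
    return bgr;
}

Note that the original snippet never calls ReleaseIntArrayElements at all, so the pinned (or copied) array also leaks on every call.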
I need to do some real-time image processing with the camera preview data, such as face detection using a C++ library, and then display the processed preview with the face labeled on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer from Android camera2 API - Display processed frame in real time. Following the two webpages, I managed to build the app (not calling the face detection lib, only trying to display the preview using ANativeWindow), but every time I run this app on a Google Pixel - 7.1.0 - API 25 running on Genymotion, the app always crashes, throwing the following log:
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599 117: 117 W/ ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
I googled this but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
Code in Camera2Demo.java:
private void startPreview(CameraDevice camera) throws CameraAccessException {
    SurfaceTexture texture = mPreviewView.getSurfaceTexture();
    // to set PREVIEW size
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    surface = new Surface(texture);
    try {
        // to set request for PREVIEW
        mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }

    mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight, ImageFormat.YUV_420_888, 2);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mHandler);
    mPreviewBuilder.addTarget(mImageReader.getSurface());

    // output Surface
    List<Surface> outputSurfaces = new ArrayList<>();
    outputSurfaces.add(mImageReader.getSurface());

    /*camera.createCaptureSession(
            Arrays.asList(surface, mImageReader.getSurface()),
            mSessionStateCallback, mHandler);
    */
    camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}
private CameraCaptureSession.StateCallback mSessionStateCallback = new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            updatePreview(session);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
    }
};
private void updatePreview(CameraCaptureSession session) throws CameraAccessException {
    mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
    session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // get the newest frame
        Image image = reader.acquireNextImage();
        if (image == null) {
            return;
        }

        // print image format
        int format = reader.getImageFormat();
        Log.d(TAG, "the format of captured frame: " + format);

        // HERE to call jni methods
        JNIUtils.display(image.getWidth(), image.getHeight(), image.getPlanes()[0].getBuffer(), surface);

        //ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        //byte[] bytes = new byte[buffer.remaining()];

        image.close();
    }
};
Code in JNIUtils.java:
import android.media.Image;
import android.view.Surface;
import java.nio.ByteBuffer;
public class JNIUtils {
    // TAG for JNIUtils class
    private static final String TAG = "JNIUtils";

    // Load native library.
    static {
        System.loadLibrary("native-lib");
    }

    public static native void display(int srcWidth, int srcHeight, ByteBuffer srcBuffer, Surface surface);
}
Code in native-lib.cpp:
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)
extern "C" {
JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
JNIEnv *env,
jobject obj,
jint srcWidth,
jint srcHeight,
jobject srcBuffer,
jobject surface) {
/*
uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
if (srcLumaPtr == nullptr) {
LOGE("srcLumaPtr null ERROR!");
return NULL;
}
*/
ANativeWindow * window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_acquire(window);
ANativeWindow_Buffer buffer;
ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, 0/* format unchanged */);
if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
LOGE("ANativeWindow_lock failed with error code: %d\n", err);
ANativeWindow_release(window);
return NULL;
}
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
return NULL;
}
}
After I commented the memcpy out, the app no longer crashes but displays nothing. So I guess the problem now turns to how to correctly use memcpy to copy the captured/processed buffer to buffer.bits.
Update:
I changed
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
to
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
and the app no longer crashes and starts to display, but it displays something strange.
As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into an RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.
You need to actually convert the data, and you need to ensure you don't copy too much - the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copied, you were running way off the end of the buffer.
You also have to account for the stride provided at the Java level to correctly index into the buffer. This link from Microsoft has a useful diagram.
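To make the size mismatch concrete, a rough back-of-the-envelope sketch (rowStride here is an assumed name, taken from image.getPlanes()[0].getRowStride() on the Java side; the exact chroma layout varies by device):

size_t lumaBytes   = (size_t) rowStride * height;   // Y plane
size_t chromaBytes = lumaBytes / 2;                 // U and V, each at quarter resolution
size_t yuvTotal    = lumaBytes + chromaBytes;       // roughly stride * height * 1.5
size_t rgbaTotal   = (size_t) width * height * 4;   // what the original memcpy assumed
// rgbaTotal is about 2.7x yuvTotal, so the memcpy read far past the source buffer.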
If you just care about the luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode would be roughly:
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
    for (size_t x = 0; x < width; x++) {
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = 255; // alpha for RGBA_8888
        ++rowPtr;
    }
}
You'll need to read the srcLumaStride from the Image object (row stride of the first Plane) and pass it down via JNI as well.
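One further caveat (my addition, not part of the original answer): the ANativeWindow_Buffer filled in by ANativeWindow_lock has its own stride field, in pixels, which may be larger than the requested width, so the output pointer should also be advanced per row. A stride-aware variant of the loop above might look like:

uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
uint8_t *out = reinterpret_cast<uint8_t *>(buffer.bits);
for (int y = 0; y < srcHeight; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;       // camera row stride, in bytes
    uint8_t *outPtr = out + (size_t) y * buffer.stride * 4; // window stride is in pixels; RGBA_8888 = 4 bytes/pixel
    for (int x = 0; x < srcWidth; x++) {
        uint8_t lum = *rowPtr++;
        *outPtr++ = lum; // R
        *outPtr++ = lum; // G
        *outPtr++ = lum; // B
        *outPtr++ = 255; // A
    }
}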
Just to put it as an answer, to avoid a long chain of comments: such a crash issue may be due to an improper number of bytes being copied by the memcpy (UPDATE following other comments: in this case it was due to a forbidden direct copy).
If you are now getting a weird image, it is probably another issue - I would suspect the image format; try to modify that.
I'm trying to implement an RTSP player based on the roman10 tutorial.
I can play a stream, but each time I leave the activity a lot of memory is leaked.
After some research it appears that the bitmap, which is a global jobject, is the cause:
jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
    int i;
    //get Bitmap class and createBitmap method ID
    jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
    jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");

    //create Bitmap.Config
    //reference: https://forums.oracle.com/thread/1548728
    const wchar_t* configName = L"ARGB_8888";
    int len = wcslen(configName);
    jstring jConfigName;
    if (sizeof(wchar_t) != sizeof(jchar)) {
        //wchar_t is defined as a different length than jchar (2 bytes)
        jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
        for (i = 0; i < len; ++i) {
            str[i] = (jchar)configName[i];
        }
        str[len] = 0;
        jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
        free(str); //NewString copies the characters, so the temporary buffer can be freed
    } else {
        //wchar_t is defined as the same length as jchar (2 bytes)
        jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
    }
    jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
    jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
            (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);

    //create the bitmap
    return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
}
The bitmap is created like this:
bitmap = createBitmap(...);
When the activity is closed, this method is called:
void finish(JNIEnv *pEnv) {
    //unlock the bitmap
    AndroidBitmap_unlockPixels(pEnv, bitmap);
    av_free(buffer);
    // Free the RGB image
    av_free(frameRGBA);
    // Free the YUV frame
    av_free(decodedFrame);
    // Close the codec
    avcodec_close(codecCtx);
    // Close the video file
    avformat_close_input(&formatCtx);
}
The bitmap seems to never be freed, just unlocked.
What should I do to be sure to get back all the memory?
Note: I'm using ffmpeg 2.5.2.
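The snippets never show how the global bitmap jobject was created, but to survive across JNI calls it must have been promoted with NewGlobalRef, and a global reference pins the Java Bitmap so the GC can never reclaim its pixel buffer. A minimal sketch of the extra cleanup finish() would need, assuming exactly that (written in C++ JNI syntax; the C-style equivalent goes through (*pEnv)->):

// Assumption: bitmap was stored earlier via env->NewGlobalRef(...).
void releaseBitmap(JNIEnv *env) {
    // optionally ask the Bitmap to free its pixel memory right away
    jclass bitmapClass = env->FindClass("android/graphics/Bitmap");
    jmethodID recycleMid = env->GetMethodID(bitmapClass, "recycle", "()V");
    env->CallVoidMethod(bitmap, recycleMid);
    // drop the global reference so the GC can collect the Bitmap object
    env->DeleteGlobalRef(bitmap);
    bitmap = NULL;
}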
I have 2 separate pthreads and one static struct array. One of the pthreads writes decoding objects, which include bytes, size, width and height; the other pthread reads the stack, does some image manipulation, and posts the result to a Java function.
Here is the problem: on pthread1 I convert the jbyteArray to an unsigned char* and store it at position 0 of the static array.
But when pthread2 comes to convert it back to a jbyteArray, something happens and I always get a fatal signal.
This is the top of my cpp file:
struct DecodeObject {
    unsigned char* data;
    int data_size;
    int width;
    int height;
    int orientation;
};

static int decodeLimit = 200;
static DecodeObject decodeList[200];
static int decodeSize = -1;
Here is part of my pthread1:
//Values
jbyteArray imageData = (jbyteArray) env->CallObjectMethod(decodeObject, getData);
jint width = (jint) env->CallIntMethod(decodeObject, getWidth);
jint height = (jint) env->CallIntMethod(decodeObject, getHeight);
jint orientation = (jint) env->CallIntMethod(decodeObject, getOrientation);

if (decodeSize < decodeLimit - 1) {
    DecodeObject object;
    object.data_size = env->GetArrayLength(imageData);
    object.data = as_unsigned_char_array(env, imageData);
    object.width = width;
    object.height = height;
    object.orientation = orientation;
    decodeSize++;
    decodeList[decodeSize] = object;
} else {
    LOGD("ERROR => BUFFER IS FULL");
}
Here is part of my pthread2:
//PREPARE RUNS OK
LOGD("PREPARE"); // RUNS OK
tempObject.data = Prepare(tempObject.data, tempObject.width, tempObject.height);
LOGD("CONVERT BACK TO JBYTEARRAY"); //HERE FAILS
jbyteArray converted = as_byte_array(env, tempObject.data, tempObject.data_size);
LOGD("DONE CONVERTING");
And finally, here are the functions I am using to convert:
unsigned char* as_unsigned_char_array(JNIEnv* &env, jbyteArray array) {
    int len = env->GetArrayLength(array);
    unsigned char* buf = new unsigned char[len];
    env->GetByteArrayRegion(array, 0, len, reinterpret_cast<jbyte*>(buf));
    return buf;
}

jbyteArray as_byte_array(JNIEnv* &env, unsigned char* buf, jsize len) {
    jbyteArray array = env->NewByteArray(len);
    //HERE I GET THE ERROR; I HAVE BEEN TRYING WITH len/2 AND IT WORKS, PROBABLY SOME BYTES ARE GETTING LOST
    env->SetByteArrayRegion(array, 0, len, (jbyte*)(buf));
    return array;
}
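One thing these snippets don't show is where pthread2's env comes from. A JNIEnv* is only valid on the thread it was obtained on, so if pthread2 reuses an env captured on pthread1, calls like NewByteArray/SetByteArrayRegion can fault exactly like this. A sketch of the usual pattern (g_vm is a made-up name for a JavaVM* cached in JNI_OnLoad):

#include <jni.h>

static JavaVM *g_vm; // cached in JNI_OnLoad

void *pthread2_main(void *arg) {
    JNIEnv *env = nullptr;
    // every natively created thread must attach to get its own valid JNIEnv
    if (g_vm->AttachCurrentThread(&env, nullptr) != JNI_OK) {
        return nullptr;
    }
    // ... safe to call env->NewByteArray(...) etc. on this thread ...
    g_vm->DetachCurrentThread();
    return nullptr;
}

Separately, if Prepare() returns a buffer shorter than the original data_size, passing the stale length to SetByteArrayRegion would read past the allocation, which would match the observation that len/2 "works".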
I use native C to read data from an audio file into a jbyte pointer. Now I want to send it to Java as a jbyteArray.
jbyteArray Java_com_app_audio_player_readData(JNIEnv * env, jobject jobj, jstring readPath)
{
    FILE *fin;
    const char *inFile = (*env)->GetStringUTFChars(env, readPath, 0);
    fin = fopen(inFile, "r");
    (*env)->ReleaseStringUTFChars(env, readPath, inFile); // release the UTF chars once the path is no longer needed
    fseek(fin, 0, SEEK_END);  // seek to end of file
    int size = ftell(fin);    // get current file pointer
    fseek(fin, 0, SEEK_SET);
    jbyte *data = (jbyte *) malloc(size * sizeof(jbyte));
    int charCnt = 0;
    charCnt = fread(data, 1, size, fin);
    jbyteArray result = (*env)->NewByteArray(env, size);
    //-- I want to convert data to jbyteArray and return it to java
    fclose(fin);
    free(data); // free the malloc'd buffer once its contents have been copied out
    return result;
}
How is it done?
Use SetByteArrayRegion:
charCnt=fread(data, 1, size, fin);
jbyteArray result=(*env)->NewByteArray(env, size);
(*env)->SetByteArrayRegion(env, result, 0, size, data);
One could also use GetByteArrayElements, e.g.:
jboolean isCopy;
jbyte* rawjBytes = (*env)->GetByteArrayElements(env, result, &isCopy);
//do stuff to raw bytes
memcpy(rawjBytes, data, size*sizeof(jbyte));
(*env)->ReleaseByteArrayElements(env, result, rawjBytes, 0);
See here for more details on SetByteArrayRegion, GetByteArrayElements and ReleaseByteArrayElements.
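For reference, the last argument to ReleaseByteArrayElements controls what happens on release; since the example above writes into rawjBytes, it must use 0 (sketched in C++ JNI syntax):

// Mode values for Release<Type>ArrayElements:
//   0          - copy the native buffer back into the Java array, then free it
//   JNI_COMMIT - copy back, but keep the native buffer valid for further use
//   JNI_ABORT  - free the native buffer without copying back (fine for read-only access)
env->ReleaseByteArrayElements(result, rawjBytes, 0); // we wrote into it, so copy back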
NB: this question is probably a special case of this question
In the process of tracking severe memory issues in my app, I looked at several heap dumps from my app, and most of the time I have a HUGE bitmap that I don't recognize.
It takes 9.4MB, or 9,830,400 bytes: actually a 1280x1920 image at 4 bytes per pixel.
I checked in Eclipse MAT: it is indeed a byte[9830400] that has one incoming reference, an android.graphics.Bitmap.
I'd like to dump this to a file and try to see it. I can't understand where it is coming from. My biggest image in all my drawables is a 640x960 png, which takes less than 3MB.
I tried to use Eclipse to "copy value to file", but I think it simply prints the buffer to the file, and I don't know any image software that can read a stream of bytes and display it as a 4-bytes-per-pixel image.
Any idea?
Here's what I tried: dump the byte array to a file, push it to /sdcard/img, and load it in an activity like this:
@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    try {
        final File inputFile = new File("/sdcard/img");
        final FileInputStream isr = new FileInputStream(inputFile);
        final Bitmap bmp = BitmapFactory.decodeStream(isr);
        ImageView iv = new ImageView(this);
        iv.setImageBitmap(bmp);
        setContentView(iv);
        Log.d("ImageTest", "Image was inflated");
    } catch (final FileNotFoundException e) {
        Log.d("ImageTest", "Image was not inflated");
    }
}
I didn't see anything (presumably because BitmapFactory cannot decode a raw pixel dump; it expects an encoded format such as PNG or JPEG).
Do you know how the image is encoded? Say it is stored in a byte[] buffer: is buffer[0] red, buffer[1] green, etc.?
See here for an easier answer: MAT (Eclipse Memory Analyzer) - how to view bitmaps from memory dump
TL;DR - Install GIMP and load the image as raw RGB Alpha
OK -- After quite some unsuccessful tries, I finally got something out of this byte array. I wrote this simple C program to convert the byte array to a Windows Bitmap file. I'm dropping the code in case somebody is interested.
I compiled this with Visual C++ 6.0 and gcc 3.4.4; it should work on any OS (tested on Windows, Linux and Mac OS X).
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <stdlib.h>

/* Types */
typedef unsigned char byte;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef int int32_t;

/* Constants */
#define RMASK 0x00ff0000
#define GMASK 0x0000ff00
#define BMASK 0x000000ff
#define AMASK 0xff000000

/* Structures */
struct bmpfile_magic {
    unsigned char magic[2];
};

struct bmpfile_header {
    uint32_t filesz;
    uint16_t creator1;
    uint16_t creator2;
    uint32_t bmp_offset;
};

struct bmpfile_dibheader {
    uint32_t header_sz;
    uint32_t width;
    uint32_t height;
    uint16_t nplanes;
    uint16_t bitspp;
    uint32_t compress_type;
    uint32_t bmp_bytesz;
    int32_t hres;
    int32_t vres;
    uint32_t ncolors;
    uint32_t nimpcolors;
    uint32_t rmask, gmask, bmask, amask;
    uint32_t colorspace_type;
    byte colorspace[0x24];
    uint32_t rgamma, ggamma, bgamma;
};

/* Displays usage info and exits */
void usage(char *cmd) {
    printf("Usage:\t%s <img_src> <img_dest.bmp> <width> <height>\n"
           "\timg_src:\timage byte buffer obtained from Eclipse MAT, using 'copy > save value to file' while selecting the byte[] buffer corresponding to an android.graphics.Bitmap\n"
           "\timg_dest:\tpath to target *.bmp file\n"
           "\twidth:\t\tpicture width, obtained in Eclipse MAT, selecting the android.graphics.Bitmap object and seeing the object member values\n"
           "\theight:\t\tpicture height\n\n", cmd);
    exit(1);
}
/* C entry point */
int main(int argc, char **argv) {
    FILE *in, *out;
    char *file_in, *file_out;
    int w, h, W, H;
    byte r, g, b, a, *image;
    struct bmpfile_magic magic;
    struct bmpfile_header header;
    struct bmpfile_dibheader dibheader;

    /* Parse command line */
    if (argc < 5) {
        usage(argv[0]);
    }
    file_in = argv[1];
    file_out = argv[2];
    W = atoi(argv[3]);
    H = atoi(argv[4]);
    in = fopen(file_in, "rb");
    out = fopen(file_out, "wb");

    /* Check parameters */
    if (in == NULL || out == NULL || W == 0 || H == 0) {
        usage(argv[0]);
    }

    /* Init BMP headers */
    magic.magic[0] = 'B';
    magic.magic[1] = 'M';
    header.filesz = W * H * 4 + sizeof(magic) + sizeof(header) + sizeof(dibheader);
    header.creator1 = 0;
    header.creator2 = 0;
    header.bmp_offset = sizeof(magic) + sizeof(header) + sizeof(dibheader);
    dibheader.header_sz = sizeof(dibheader);
    dibheader.width = W;
    dibheader.height = H;
    dibheader.nplanes = 1;
    dibheader.bitspp = 32;
    dibheader.compress_type = 3;
    dibheader.bmp_bytesz = W * H * 4;
    dibheader.hres = 2835;
    dibheader.vres = 2835;
    dibheader.ncolors = 0;
    dibheader.nimpcolors = 0;
    dibheader.rmask = RMASK;
    dibheader.gmask = BMASK;
    dibheader.bmask = GMASK;
    dibheader.amask = AMASK;
    dibheader.colorspace_type = 0x57696e20;
    memset(&dibheader.colorspace, 0, sizeof(dibheader.colorspace));
    dibheader.rgamma = dibheader.bgamma = dibheader.ggamma = 0;

    /* Read picture data */
    image = (byte*) malloc(4*W*H);
    if (image == NULL) {
        printf("Could not allocate a %d-byte buffer.\n", 4*W*H);
        exit(1);
    }
    fread(image, sizeof(byte), 4*W*H, in);
    fclose(in);

    /* Write header */
    fwrite(&magic, sizeof(magic), 1, out);
    fwrite(&header, sizeof(header), 1, out);
    fwrite(&dibheader, sizeof(dibheader), 1, out);

    /* Convert the byte array to BMP format */
    for (h = H-1; h >= 0; h--) {
        for (w = 0; w < W; w++) {
            r = *(image + w*4 + 4 * W * h);
            b = *(image + w*4 + 4 * W * h + 1);
            g = *(image + w*4 + 4 * W * h + 2);
            a = *(image + w*4 + 4 * W * h + 3);
            fwrite(&b, 1, 1, out);
            fwrite(&g, 1, 1, out);
            fwrite(&r, 1, 1, out);
            fwrite(&a, 1, 1, out);
        }
    }

    free(image);
    fclose(out);
    return 0;
}
So using this tool I was able to recognise the picture used to generate this 1280x1920 bitmap.
I found that starting from the latest version of Android Studio (2.2.2 as of writing), you can view a bitmap directly:
Open the ‘Android Monitor’ tab (at the bottom left) and then the Memory tab.
Press the ‘Dump Java Heap’ button.
Choose the ‘Bitmap’ class name in the current snapshot.
Select each instance of Bitmap and check which image consumes more memory than expected.
Right-click on an instance and select View Bitmap.
Just take the input to the image and convert it into a Bitmap object using a FileInputStream/DataInputStream. Also add logs to inspect the data for each image that gets used.
You could enable a USB connection and copy the file to another computer with more tools to investigate.
Some devices can be configured to dump the current screen to the file system when the start button is pressed. Maybe this is what is happening to you.