I want to use this code in JNI, without going back to Java.
I already converted the bitmap manipulation to JNI (thanks to other Stack Overflow posters), but this part seems more complicated because I do not understand how to call constructors.
Bitmap bmp;
bmp = ((BitmapDrawable)imgview.getDrawable()).getBitmap();
if (bmp == null || !bmp.isMutable())
    bmp = Bitmap.createBitmap(w, h, Config.ARGB_8888);
// bitmap manipulations go here
jclass java_bitmap_class = (env)->GetObjectClass(java_bitmap);
class SkBitmap;
SkBitmap *sk_bitmap = (SkBitmap*)(env)->CallIntMethod(
java_bitmap, (env)->GetMethodID(java_bitmap_class, "ni", "()I"));
// there is more C++ code to manipulate bmp, but it is not relevant to the question
imgview.setImageBitmap(bmp);
OK, it is actually very simple once you master the Java-to-JNI translation. Basically, anything you can do in Java you can also do on the JNI side. Yes, it looks messy. Note that I decided not to create the bitmap in JNI but to access the existing one instead.
JNIEnv* Env = 0; jobject Obj;
jclass cls = 0, ClassImageView = 0, class_drawable = 0, java_bitmap_class = 0;
jmethodID jcontrol_ui = 0, jfindViewById = 0, jgetBitmap = 0, jgetDrawable = 0;
// JNI signatures for ImageView.getDrawable() and BitmapDrawable.getBitmap()
const char* sig_drawable = "()Landroid/graphics/drawable/Drawable;";
const char* sig_bitmap = "()Landroid/graphics/Bitmap;";
int *getViewBitmapBuffer(int ID) {
jobject image_view = (jobject) (Env)->CallObjectMethod(Obj, jfindViewById, ID);
// some values can be cached, hence the checks for "(something == 0)"
if (ClassImageView == 0) ClassImageView = (Env)->GetObjectClass(image_view);
if (jgetDrawable == 0) jgetDrawable = (Env)->GetMethodID(ClassImageView, "getDrawable", sig_drawable);
jobject drawable = (jobject) (Env)->CallObjectMethod(image_view, jgetDrawable);
if (class_drawable == 0) class_drawable = (Env)->GetObjectClass(drawable);
if (jgetBitmap == 0) jgetBitmap = (Env)->GetMethodID(class_drawable, "getBitmap", sig_bitmap);
jobject java_bitmap = (jobject) (Env)->CallObjectMethod(drawable, jgetBitmap);
if (java_bitmap_class == 0) java_bitmap_class = (Env)->GetObjectClass(java_bitmap);
class SkBitmap;
// Bitmap.ni() is a private method that returns the native bitmap handle (an SkBitmap
// pointer on older Android versions). This is undocumented, version-dependent, and
// assumes a 32-bit process, since the pointer travels through a jint.
SkBitmap *sk_bitmap = (SkBitmap*)(Env)->CallIntMethod(java_bitmap, (Env)->GetMethodID(java_bitmap_class, "ni", "()I"));
SkPixelRef *sk_pix_ref;
// grab the SkPixelRef from the second word of the SkBitmap; this relies on Skia's
// internal memory layout and may break between Android versions
sk_pix_ref = (SkPixelRef*)((int*)sk_bitmap)[1];
int *B = (int*) sk_pix_ref->GetPixels();
return B;
}
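If you do need to construct the Bitmap on the JNI side (the constructor part of the question), note that Bitmap has no public constructor; the usual route is the static factory Bitmap.createBitmap. A minimal sketch, assuming a valid JNIEnv* as above:
jobject createArgb8888Bitmap(JNIEnv* env, int width, int height) {
    jclass bitmapClass = env->FindClass("android/graphics/Bitmap");
    jclass configClass = env->FindClass("android/graphics/Bitmap$Config");
    // fetch the Bitmap.Config.ARGB_8888 enum constant
    jfieldID argbField = env->GetStaticFieldID(configClass, "ARGB_8888",
                                               "Landroid/graphics/Bitmap$Config;");
    jobject argb8888 = env->GetStaticObjectField(configClass, argbField);
    jmethodID createBitmap = env->GetStaticMethodID(bitmapClass, "createBitmap",
        "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
    // returns a local reference; wrap it in NewGlobalRef if you keep it across JNI calls
    return env->CallStaticObjectMethod(bitmapClass, createBitmap, width, height, argb8888);
}
For a class that has a real Java constructor, the pattern is the same except that you look up GetMethodID(cls, "<init>", signature) and call NewObject.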
So my issue is that for a video call I receive the frames in my C code as an I420 byte array, which I then convert to NV21 and send as a byte array to create the bitmap. But because I need to create a YuvImage from the byte array, and then a bitmap from that, I have a conversion overhead that causes delays and quality loss.
I am wondering if there is another way to do this: somehow create the bitmap directly in the C code, and maybe even push it to the view, or a SurfaceView, from the C code? Or simply send the bitmap to my function so I can fill it there, without needing to create the bitmap in Android.
This is what I do with the byte array in the c code:
if(size == 0)
return;
jboolean isAttached;
JNIEnv *env;
jint jParticipant;
jint jWidth;
jint jHeight;
jbyteArray jRawImageBytes;
env = getJniEnv(&isAttached);
if (env == NULL)
goto FAIL0;
//LOGE(".... **** ....TRYING TO FIND CALLBACK");
LOGI("FrameReceived will reach here 1");
char *modifiedRawImageBytes = malloc(size);
memcpy(modifiedRawImageBytes, rawImageBytes, size);
jint sizeWH = width * height;
jint quarter = sizeWH/4;
jint v0 = sizeWH + quarter;
// interleave the I420 U and V planes into NV21's single interleaved VU plane
for (int u = sizeWH, v = v0, o = sizeWH; u < v0; u++, v++, o += 2) {
modifiedRawImageBytes[o] = rawImageBytes[v]; // For NV21, V first
modifiedRawImageBytes[o + 1] = rawImageBytes[u]; // For NV21, U second
}
if(remote)
{
if(frameReceivedRemoteMethod == NULL)
frameReceivedRemoteMethod = getApplicationJniMethodId(env, applicationJniObj, "vidyoConferenceFrameReceivedRemoteCallback", "(III[B)V");
if (frameReceivedRemoteMethod == NULL) {
//LOGE(".... **** ....CALLBACK NOT FOUND");
goto FAIL1;
}
}
This is what I do in the Android Java code:
remoteResolution = width + "x" + height;
remoteBAOS = new ByteArrayOutputStream();
remoteYUV = new YuvImage(rawImageBytes, ImageFormat.NV21, width, height, null);
remoteYUV.compressToJpeg(new Rect(0, 0, width, height), 100, remoteBAOS);
remoteBA = remoteBAOS.toByteArray();
remoteBitmap = BitmapFactory.decodeByteArray(remoteBA, 0, remoteBA.length);
new Handler(Looper.getMainLooper()).post(new Runnable() {
@Override
public void run() {
remoteView.setImageBitmap(remoteBitmap);
}
});
This is how the sample app of the SDK I am using does it, but I feel this is not best practice at all, and there has to be a way to get the Bitmap from the byte array more quickly, preferably in the C code. Any ideas on how to improve this?
EDIT:
I modified my Java code. I now use this library: https://github.com/silvaren/easyrs
so my code becomes:
remoteBitmap = Nv21Image.nv21ToBitmap(rs, rawImageBytes, width, height);
new Handler(Looper.getMainLooper()).post(new Runnable() {
@Override
public void run() {
remoteView.setImageBitmap(remoteBitmap);
}
});
Where nv21ToBitmap does this:
public static Bitmap yuvToRgb(RenderScript rs, Nv21Image nv21Image) {
long startTime = System.currentTimeMillis();
Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.U8(rs))
.setX(nv21Image.nv21ByteArray.length);
Type yuvType = yuvTypeBuilder.create();
Allocation yuvAllocation = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);
yuvAllocation.copyFrom(nv21Image.nv21ByteArray);
Type.Builder rgbTypeBuilder = new Type.Builder(rs, Element.RGBA_8888(rs));
rgbTypeBuilder.setX(nv21Image.width);
rgbTypeBuilder.setY(nv21Image.height);
Allocation rgbAllocation = Allocation.createTyped(rs, rgbTypeBuilder.create());
ScriptIntrinsicYuvToRGB yuvToRgbScript = ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));
yuvToRgbScript.setInput(yuvAllocation);
yuvToRgbScript.forEach(rgbAllocation);
Bitmap bitmap = Bitmap.createBitmap(nv21Image.width, nv21Image.height, Bitmap.Config.ARGB_8888);
rgbAllocation.copyTo(bitmap);
Log.d("NV21", "Conversion to Bitmap: " + (System.currentTimeMillis() - startTime) + "ms");
return bitmap;
}
This is faster, but I feel there is still some delay. Now that I get my bitmap from RenderScript instead of going through a YuvImage, is it possible to set it on my ImageView faster somehow, or to draw it on a SurfaceView?
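One way to avoid the Java-side YuvImage/JPEG round trip entirely is to create a single mutable ARGB_8888 Bitmap of the frame size in Java, pass it into the native layer, and fill its pixel buffer there with the NDK bitmap API (android/bitmap.h, linked with -ljnigraphics). This is only a sketch under those assumptions, with hypothetical names and a plain NV21-to-RGBA loop; a production version would normally use an optimized converter such as libyuv:
#include <jni.h>
#include <android/bitmap.h>
#include <cstdint>
#include <algorithm>

extern "C" JNIEXPORT void JNICALL
Java_com_example_VideoRenderer_fillBitmapFromNv21(JNIEnv* env, jobject /*thiz*/,
                                                  jobject bitmap, jbyteArray nv21,
                                                  jint width, jint height) {
    AndroidBitmapInfo info;
    void* pixels = nullptr;
    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) return;
    if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) return;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0) return;

    jbyte* data = env->GetByteArrayElements(nv21, nullptr);
    const uint8_t* yPlane  = reinterpret_cast<const uint8_t*>(data);
    const uint8_t* vuPlane = yPlane + width * height;   // NV21: interleaved V,U after the Y plane

    for (int row = 0; row < height; ++row) {
        uint32_t* out = reinterpret_cast<uint32_t*>(
            static_cast<uint8_t*>(pixels) + row * info.stride);
        const uint8_t* vuRow = vuPlane + (row / 2) * width;
        for (int col = 0; col < width; ++col) {
            int y = yPlane[row * width + col] - 16;
            int v = vuRow[col & ~1] - 128;
            int u = vuRow[(col & ~1) + 1] - 128;
            int r = std::clamp((298 * y + 409 * v + 128) >> 8, 0, 255);
            int g = std::clamp((298 * y - 100 * u - 208 * v + 128) >> 8, 0, 255);
            int b = std::clamp((298 * y + 516 * u + 128) >> 8, 0, 255);
            // RGBA_8888 stores bytes as R,G,B,A; as a little-endian word that is A,B,G,R
            out[col] = 0xFF000000u | (uint32_t)(b << 16) | (uint32_t)(g << 8) | (uint32_t)r;
        }
    }

    env->ReleaseByteArrayElements(nv21, data, JNI_ABORT);  // read-only, skip the copy-back
    AndroidBitmap_unlockPixels(env, bitmap);
}
On the Java side you would call this once per frame and then invalidate or re-set the view (or draw the bitmap onto a SurfaceView canvas) on the UI thread, so no intermediate JPEG or RenderScript allocation is needed.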
I'm trying to implement an RTSP player based on the roman10 tutorial.
I can play a stream, but each time I leave the activity a lot of memory is leaked.
After some research it appears that the bitmap, which is a global jobject, is the cause:
jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
int i;
//get Bitmap class and createBitmap method ID
jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
//create Bitmap.Config
//reference: https://forums.oracle.com/thread/1548728
const wchar_t* configName = L"ARGB_8888";
int len = wcslen(configName);
jstring jConfigName;
if (sizeof(wchar_t) != sizeof(jchar)) {
//wchar_t is defined as different length than jchar(2 bytes)
jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
for (i = 0; i < len; ++i) {
str[i] = (jchar)configName[i];
}
str[len] = 0;
jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
free(str); //the temporary jchar buffer is no longer needed once the jstring has been created
} else {
//wchar_t is defined same length as jchar(2 bytes)
jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
}
jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
(*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
//create the bitmap
return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
}
The bitmap is created like this:
bitmap = createBitmap(...);
When the activity is closed, this method is called:
void finish(JNIEnv *pEnv) {
//unlock the bitmap
AndroidBitmap_unlockPixels(pEnv, bitmap);
av_free(buffer);
// Free the RGB image
av_free(frameRGBA);
// Free the YUV frame
av_free(decodedFrame);
// Close the codec
avcodec_close(codecCtx);
// Close the video file
avformat_close_input(&formatCtx);
}
The bitmap seems to never be freed, just unlocked.
What should I do to be sure to get all the memory back?
Note: I'm using FFmpeg 2.5.2.
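A sketch of the cleanup that appears to be missing, in C++ JNI syntax, assuming the bitmap returned by createBitmap() was pinned with NewGlobalRef (a bare local reference stored in a global variable would not survive across native calls anyway). AndroidBitmap_unlockPixels only balances the pixel lock; the global reference still keeps the Bitmap object, and therefore its pixel buffer, reachable, so finish() should also recycle the bitmap and delete the reference:
#include <jni.h>

// call this from finish(), after AndroidBitmap_unlockPixels
void releaseBitmap(JNIEnv* env, jobject& bitmap) {
    if (bitmap == NULL) return;

    // optionally ask Java to free the pixel memory right away via Bitmap.recycle()
    jclass bitmapClass = env->GetObjectClass(bitmap);
    jmethodID recycle = env->GetMethodID(bitmapClass, "recycle", "()V");
    env->CallVoidMethod(bitmap, recycle);

    // drop the global reference so the Bitmap object itself can be collected
    env->DeleteGlobalRef(bitmap);
    bitmap = NULL;
}
Without the DeleteGlobalRef, each playback session leaves one full-size bitmap pinned, which matches the leak you are seeing.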
There are several posts about converting Mat to Bitmap using the Utils.matToBitmap() function. But I'm assuming this function can only be called in the Java layer after importing the Utils class.
I want to transfer the data to a memory address pointed to by uint32_t* bmpContent; in the code below.
JNIEXPORT void JNICALL Java_com_nod_nodcv_NodCVActivity_runfilter(
JNIEnv *env, jclass clazz, jobject outBmp, jbyteArray inData,
jint width, jint height, jint choice, jint filter)
{
int outsz = width*height;
int insz = outsz + outsz/2;
AndroidBitmapInfo bmpInfo;
if (AndroidBitmap_getInfo(env, outBmp, &bmpInfo) < 0) {
throwJavaException(env,"gaussianBlur","Error retrieving bitmap meta data");
return;
}
if (bmpInfo.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
throwJavaException(env,"gaussianBlur","Expecting RGBA_8888 format");
return;
}
uint32_t* bmpContent;
if (AndroidBitmap_lockPixels(env, outBmp,(void**)&bmpContent) < 0) {
throwJavaException(env,"gaussianBlur","Unable to lock bitmap pixels");
return;
}
//This function runs the kernel on the inData and gives a matrix
tester(env, clazz, bmpContent, outsz, inData, insz, width, height);
AndroidBitmap_unlockPixels(env, outBmp);
}
This is roughly what happens in the tester function:
jbyte* b_mat = env->GetByteArrayElements(inData, 0);
cv::Mat mdata(h, w, CV_8UC4, (unsigned char *)b_mat);
cv::Mat mat_src = imdecode(mdata,1);
cv::UMat umat_src = mat_src.getUMat(cv::ACCESS_READ, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
cv::UMat umat_dst (mat_src.size(), mat_src.type(), cv::ACCESS_WRITE, cv::USAGE_ALLOCATE_DEVICE_MEMORY);
kernel.args(cv::ocl::KernelArg::ReadOnlyNoSize(umat_src), cv::ocl::KernelArg::ReadWrite(umat_dst));
size_t globalThreads[3] = {static_cast<unsigned int>(mat_src.cols), static_cast<unsigned int>(mat_src.rows), 1 };
bool success = kernel.run(3, globalThreads, NULL, true);
cv::Mat mat_dst = umat_dst.getMat(cv::ACCESS_READ);
mat_dst holds the results I need to display on my phone.
How can I do that?
I'm assuming I'll need to copy the data from mat_dst into the bmpContent buffer, but I'm not sure.
If you really need to call this method from the JNI layer, you could simply use OpenCV's original C++ implementation here.
An example code would be like:
#include <jni.h>
#include <string>
#include <android/bitmap.h>
#include "opencv2/opencv.hpp"
// using namespace cv;
void MatToBitmap2 (JNIEnv * env, cv::Mat src, jobject bitmap, bool needPremultiplyAlpha)
{
AndroidBitmapInfo info;
void* pixels = 0;
try {
// LOGD("nMatToBitmap");
CV_Assert( AndroidBitmap_getInfo(env, bitmap, &info) >= 0 );
CV_Assert( info.format == ANDROID_BITMAP_FORMAT_RGBA_8888 ||
info.format == ANDROID_BITMAP_FORMAT_RGB_565 );
CV_Assert( src.dims == 2 && info.height == (uint32_t)src.rows && info.width == (uint32_t)src.cols );
CV_Assert( src.type() == CV_8UC1 || src.type() == CV_8UC3 || src.type() == CV_8UC4 );
CV_Assert( AndroidBitmap_lockPixels(env, bitmap, &pixels) >= 0 );
CV_Assert( pixels );
if( info.format == ANDROID_BITMAP_FORMAT_RGBA_8888 )
{
cv::Mat tmp(info.height, info.width, CV_8UC4, pixels);
if(src.type() == CV_8UC1)
{
cvtColor(src, tmp, cv::COLOR_GRAY2RGBA);
} else if(src.type() == CV_8UC3){
cvtColor(src, tmp, cv::COLOR_RGB2RGBA);
} else if(src.type() == CV_8UC4){
if(needPremultiplyAlpha) cvtColor(src, tmp, cv::COLOR_RGBA2mRGBA);
else src.copyTo(tmp);
}
} else {
// info.format == ANDROID_BITMAP_FORMAT_RGB_565
cv::Mat tmp(info.height, info.width, CV_8UC2, pixels);
if(src.type() == CV_8UC1)
{
cvtColor(src, tmp, cv::COLOR_GRAY2BGR565);
} else if(src.type() == CV_8UC3){
cvtColor(src, tmp, cv::COLOR_RGB2BGR565);
} else if(src.type() == CV_8UC4){
cvtColor(src, tmp, cv::COLOR_RGBA2BGR565);
}
}
AndroidBitmap_unlockPixels(env, bitmap);
return;
} catch(const cv::Exception& e) {
AndroidBitmap_unlockPixels(env, bitmap);
jclass je = env->FindClass("java/lang/Exception");
env->ThrowNew(je, e.what());
return;
} catch (...) {
AndroidBitmap_unlockPixels(env, bitmap);
jclass je = env->FindClass("java/lang/Exception");
env->ThrowNew(je, "Unknown exception in JNI code {nMatToBitmap}");
return;
}
}
The function was directly adapted from the OpenCV sources and contains some extra checks for different formats. You could strip the checks out if you know which matrix format you're going to use.
mat_dst holds the results I need and that I need to display on my phone. How can I do that?
You can call something like:
extern "C"
JNIEXPORT void JNICALL
Java_com_your_package_MainActivity_DoStuff(JNIEnv *env, jobject thiz,
jobject bitmap) {
// Do your stuff with mat_dst.
try {
MatToBitmap2(env, mat_dst, bitmap, false);
}
catch(const cv::Exception& e)
{
jclass je = env->FindClass("java/lang/Exception");
env->ThrowNew(je, e.what());
}
}
and define it in the Java side like:
public native void DoStuff(Bitmap bitmap);
You don't need to return anything back to the Java side, since Bitmap is a reference type and the MatToBitmap2 method already takes care of locking and unlocking the pixel buffer.
Use this to convert your Mat to a Bitmap (mat_to_bitmap here is assumed to be a helper similar to MatToBitmap2 above, one that allocates a new Bitmap with the given config and copies dst into it):
jclass java_bitmap_class = (jclass)env->FindClass("android/graphics/Bitmap");
jmethodID mid = env->GetMethodID(java_bitmap_class, "getConfig", "()Landroid/graphics/Bitmap$Config;");
jobject bitmap_config = env->CallObjectMethod(bitmap, mid);
jobject _bitmap = mat_to_bitmap(env,dst,false,bitmap_config);
AndroidBitmap_unlockPixels(env, bitmap);
return _bitmap;
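Alternatively, since bmpContent in the question is already locked as RGBA_8888 and has the frame dimensions, you can skip allocating a second Bitmap and copy mat_dst straight into the locked buffer inside tester(). A minimal sketch, assuming mat_dst is CV_8UC4 with the same w and h as the bitmap and that the bitmap stride is w * 4 (otherwise pass the stride from AndroidBitmapInfo as the step argument):
// inside tester(), after umat_dst.getMat(cv::ACCESS_READ)
cv::Mat bmpWrapper(h, w, CV_8UC4, bmpContent);   // wraps the locked bitmap pixels, no allocation
mat_dst.copyTo(bmpWrapper);                      // or cv::cvtColor(mat_dst, bmpWrapper, cv::COLOR_BGRA2RGBA)
                                                 // if the kernel produced BGRA rather than RGBA
The bitmap then holds the result as soon as runfilter() unlocks the pixels.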
I am trying to fetch an image from a URL into a Bitmap and then, using the raw data from the Bitmap, create a CCSprite. The issue here is that the image is corrupted when I display the sprite. I created a standalone Android-only application (no cocos2d-x) and used the same code to fetch and display the Bitmap, and it is displayed correctly. Any reason why the image is not being properly rendered in cocos2d-x?
My code to fetch the image from the URL is:
String urlString = "http://www.mathewingram.com/work/wp-content/themes/thesis/rotator/335f69c5de_small.jpg";//http://graph.facebook.com/"+user.getId()+"/picture?type=large";
Bitmap pic = null;
pic = BitmapFactory.decodeStream((InputStream) new URL(urlString).getContent());
int[] pixels = new int[pic.getWidth() * pic.getHeight()];
pic.getPixels(pixels, 0, pic.getWidth(), 0, 0,pic.getWidth(),pic.getHeight());
int len = pic.getWidth()* pic.getHeight();
nativeFbUserName(pixels,len,pic.getWidth(), pic.getHeight());
The function "nativeFbUserName" is a call to a native c++ function which is :
void Java_com_WBS_Test0001_Test0001_nativeFbUserName(JNIEnv *env, jobject thiz,jintArray name, jint len, jint width, jint height) {
jint *jArr = env->GetIntArrayElements(name,NULL);
int username[len];
for (int i=0; i<len; i++){
username[i] = (int)jArr[i];
}
HelloWorld::getShared()->picLen = (int)len;
HelloWorld::getShared()->picHeight = (int)height;
HelloWorld::getShared()->picWidth = (int)width;
HelloWorld::getShared()->saveArray(username);
HelloWorld::getShared()->schedule(SEL_SCHEDULE(&HelloWorld::addSprite),0.1);
}
void HelloWorld::saveArray(int *arrayToSave)
{
arr = new int[picLen];
for(int i = 0; i < picLen; i++){
arr[i] = arrayToSave[i];
}
}
void HelloWorld::addSprite(float time)
{
this->unschedule(SEL_SCHEDULE(&HelloWorld::addSprite));
CCTexture2D *tex = new CCTexture2D();
bool val = tex->initWithData(arr,(cocos2d::CCTexture2DPixelFormat)0,picWidth,picHeight, CCSizeMake(picWidth,picHeight));
CCLog("flag is %d",val);
CCSprite *spriteToAdd = CCSprite::createWithTexture(tex);
spriteToAdd->setPosition(ccp(500, 300));
this->addChild(spriteToAdd);
}
Edit:
So I found this link Access to raw data in ARGB_8888 Android Bitmap that states that it might be a bug. Has anyone found a solution to this?
EDIT
So I just noticed corruption in the lower right corner of the image. I am not sure why this is happening or how to fix it. Any ideas?
EDIT END
Answering my own question, I obtained a byte array from the bitmap using:
byte[] data = null;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
pic.compress(Bitmap.CompressFormat.JPEG, 100, baos);
data = baos.toByteArray();
And then passed this byte array to the native code.
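For completeness, a sketch of how those JPEG bytes can then be decoded on the cocos2d-x side instead of feeding raw ARGB ints to initWithData (raw getPixels() output likely does not match the RGBA byte layout the texture expects). This assumes cocos2d-x 2.x (CCImage/CCTexture2D, as used in the question) and hypothetical savedJpeg / savedJpegLen members that the JNI callback fills, the same way saveArray stores the pixel array above:
void HelloWorld::addSprite(float /*dt*/)
{
    this->unschedule(SEL_SCHEDULE(&HelloWorld::addSprite));

    // decode the compressed JPEG with cocos2d-x's own image loader
    CCImage *image = new CCImage();
    if (image->initWithImageData(savedJpeg, savedJpegLen, CCImage::kFmtJpg))
    {
        CCTexture2D *tex = new CCTexture2D();
        tex->initWithImage(image);
        tex->autorelease();

        CCSprite *spriteToAdd = CCSprite::createWithTexture(tex);
        spriteToAdd->setPosition(ccp(500, 300));
        this->addChild(spriteToAdd);
    }
    image->release();
}
Keeping the texture creation inside the scheduled addSprite call, as the original code does, also ensures it runs on the GL thread.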
I am trying to do real-time image processing in Android using JNI. I have a native method to decode image data, and I call this method for every frame. After a few seconds I get an out-of-memory error and my app terminates.
LOG OUTPUT:
12-03 20:54:19.780: E/dalvikvm-heap(8119): Out of memory on a 3686416-byte allocation.
MY NATIVE METHOD:
JNIEXPORT jintArray JNICALL Java_net_oyunyazar_arcc_data_FrameManager_processImage(JNIEnv* env, jobject javaThis, jint width, jint height, jbyteArray arr) {
jint *convertedData;
convertedData = (jint*)malloc((width*height) * sizeof(jint));
jintArray result = (*env)->NewIntArray(env, width*height);
jint y,x;
jbyte grey;
jsize len = (*env)->GetArrayLength(env, arr);
jbyte *YUVData = (*env)->GetByteArrayElements(env, arr, 0);
for (y = 0; y < height; y++){
for (x = 0; x < width; x++){
grey = YUVData[y * width + x];
convertedData[y*width+x] =(jint) grey & 0xff;
}
}
LOGD("Random [%d]",len);
(*env)->SetIntArrayRegion(env, result, 0, (width*height),convertedData );
free(convertedData);
(*env)->ReleaseByteArrayElements(env, YUVData, (jbyte*)arr, 0);
return result;
}
Thanks for any help.
I have the same problem as yours.
In your specific case, since you are working with pixels (and probably a bitmap), you can pass a Bitmap instead of your byte array and modify it directly:
void *pixel_bm;
int retValue;
AndroidBitmapInfo info;
if ((retValue = AndroidBitmap_getInfo(env, bitmap, &info)) < 0) return 0;
if ((retValue = AndroidBitmap_lockPixels(env, bitmap, &pixel_bm)) < 0) return 0;
// you can now read and write into pixel_bm
AndroidBitmap_unlockPixels(env, bitmap);
If you find a way to correctly free a GetByteArrayElements result, I'm interested in the solution!
I have solved this problem by releasing the byte array elements with the correct argument order:
(*env)->ReleaseByteArrayElements(env, arr, YUVData, 0);
It works great now.