I'm trying to display video on Android using GStreamer, the same way as on other platforms:
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), this->ui->playback_widget->winId());
//playback_widget - QOpenGLWidget
But I think that winId() returns something other than an ANativeWindow, because I get:
F libc : Fatal signal 11 (SIGSEGV), code 1, fault addr 0x5e in tid 3154 (gstglcontext)
So, how can I get an instance of (or pointer to) ANativeWindow from a Qt widget on Android?
Support for embedding native widgets is incomplete and doesn't always work. Bear in mind that Qt may create native handles, but they do not necessarily represent actual native windows. Additionally, QWidget::winId() provides no effective guarantee of portability, only that the identifier is unique:
Portable in principle, but if you use it you are probably about to do something non-portable. Be careful.
The reason for this is that WId is actually a typedef for quintptr.
Solution: You will, at the very least, need to cast the return value of winId() to ANativeWindow*, assuming that is the underlying window-handle type Qt uses to identify native windows (a sketch follows the update below).
This solution seems directed at X Windows but may provide some guidance.
Also see the documentation for QWidget::effectiveWinId() and QWidget::nativeParentWidget() for more helpful background information.
Update: Per the platform notes, there are quite a few caveats in using OpenGL + Qt + Android, including:
The platform plugin only supports full screen top-level OpenGL windows.
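Putting that cast into code - a minimal sketch, assuming the WId really is the underlying ANativeWindow* (given the caveats above, verify rather than trust this):
#include <QDebug>
#include <android/native_window.h>

// Hypothetical: reinterpret the opaque WId as the platform window type.
// If the handle is not actually an ANativeWindow, this is undefined behavior.
ANativeWindow *native =
        reinterpret_cast<ANativeWindow *>(ui->playback_widget->winId());
if (native != nullptr) {
    qDebug() << "window size:" << ANativeWindow_getWidth(native)
             << "x" << ANativeWindow_getHeight(native);
}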
After @don-prog's answer and some reading time, I got it working. Here is my code. Just a note: I had to delay this code's execution until Qt finished loading views and layouts.
The following code retrieves the QtActivity Java object, then dives through view and layout objects down to the QtSurface - which extends android.view.SurfaceView. It then asks for the SurfaceHolder and finally gets the Surface object we need in order to call ANativeWindow_fromSurface.
#include <QApplication>
#include <QDebug>
#include <QtAndroidExtras/QAndroidJniEnvironment>
#include <QtAndroidExtras/QAndroidJniObject>
#include <qpa/qplatformnativeinterface.h>
#include <android/native_window_jni.h>

// Fetch the QtActivity Java object from the Qt platform plugin.
QPlatformNativeInterface *nativeInterface = QApplication::platformNativeInterface();
jobject activity = (jobject)nativeInterface->nativeResourceForIntegration("QtActivity");

// Attach this thread to the JVM and obtain a plain JNIEnv for the NDK call.
QAndroidJniEnvironment qjniEnv;
JNIEnv *jniEnv = nullptr;
JavaVM *jvm = qjniEnv->javaVM();
jvm->GetEnv(reinterpret_cast<void **>(&jniEnv), JNI_VERSION_1_6);
jvm->AttachCurrentThread(&jniEnv, NULL);

// android.R.id.content identifies the root of the activity's content view.
jint r_id_content = QAndroidJniObject::getStaticField<jint>("android/R$id", "content");
QAndroidJniObject view = QAndroidJniObject(activity).callObjectMethod(
        "findViewById", "(I)Landroid/view/View;", r_id_content);
if (view.isValid()) {
    // Walk down the layout hierarchy to the QtSurface (extends SurfaceView).
    QAndroidJniObject child1 = view.callObjectMethod("getChildAt", "(I)Landroid/view/View;", 0);
    QAndroidJniObject child2 = child1.callObjectMethod("getChildAt", "(I)Landroid/view/View;", 0);
    if (child2.isValid()) {
        // SurfaceView.getHolder() -> SurfaceHolder.getSurface() -> Surface
        QAndroidJniObject sHolder = child2.callObjectMethod("getHolder", "()Landroid/view/SurfaceHolder;");
        if (sHolder.isValid()) {
            QAndroidJniObject theSurface = sHolder.callObjectMethod("getSurface", "()Landroid/view/Surface;");
            if (theSurface.isValid()) {
                // The NDK call that turns a Java Surface into an ANativeWindow.
                ANativeWindow *awindow = ANativeWindow_fromSurface(jniEnv, theSurface.object());
                qDebug() << "This is an ANativeWindow " << awindow;
            }
        }
    } else {
        qDebug() << "Views are not loaded yet or you are not in the Qt UI thread";
    }
}
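With the ANativeWindow in hand, you can pass it to the sink roughly as the question intended. A sketch (assuming `video_sink` is the same GStreamer element as in the question, and that the window must stay alive for as long as the sink uses it):
// Pass the native window to the GStreamer video sink.
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink),
                                    (guintptr)awindow);

// When playback is torn down, release the reference that
// ANativeWindow_fromSurface() acquired.
ANativeWindow_release(awindow);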
Hope it helps anybody else
I'm trying to run inference with my tflite model in C++ code on an embedded device.
So I wrote simple tflite inference code that uses the GPU, cross-compiled it on my PC, and ran it on the embedded device, which runs Android.
However: (1) if I use the GPU delegate option, the C++ code gives random results; (2) given the same input, the results change every time; (3) when I turn the GPU option off, I get correct results.
When I test my tflite model in Python, it gives the correct output, so I think the model file has no problem.
Should I rebuild TensorFlow Lite because I use a prebuilt .so file, or should I change the GPU option?
I don't know what else I should check.
Please help!
Here is my C++ code:
#include <cstdio>
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Load model
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(model_file.c_str());

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// Set delegate option
bool use_gpu = true;
if (use_gpu)
{
    TfLiteDelegate* delegate;
    auto options = TfLiteGpuDelegateOptionsV2Default();
    options.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER;
    options.inference_priority1 = TFLITE_GPU_INFERENCE_PRIORITY_AUTO;
    delegate = TfLiteGpuDelegateV2Create(&options);
    interpreter->ModifyGraphWithDelegate(delegate);
}

interpreter->AllocateTensors();

// Set input (width, height, channel are the model's input dimensions)
float* input = interpreter->typed_input_tensor<float>(0);
for (int i = 0; i < width * height * channel; i++)
    *(input + i) = 1;

TfLiteTensor* output_tensor = nullptr;

// Inference
interpreter->Invoke();

// Check output
output_tensor = interpreter->tensor(interpreter->outputs()[0]);
printf("Result : %f\n", output_tensor->data.f[0]);
//float* output = interpreter->typed_output_tensor<float>(0);
//printf("output : %f\n", *(output));
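(Aside: each of these calls returns a TfLiteStatus; a sketch of checking them, which can distinguish a GPU delegate that failed to apply from a real numerical problem:)
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk)
    fprintf(stderr, "GPU delegate could not be applied\n");
if (interpreter->AllocateTensors() != kTfLiteOk)
    fprintf(stderr, "AllocateTensors failed\n");
if (interpreter->Invoke() != kTfLiteOk)
    fprintf(stderr, "Invoke failed\n");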
Luckily, I found the answer.
It was caused by an NDK version mismatch (the prebuilt .so file I used was built with a different version).
After unifying the NDK version to 21 and testing again, it worked normally.
In my case the same application using the GPU did not work on an S10 Lite but did work on an S22 and an S21 Ultra 5G.
So consider that the same application may behave differently on devices with different chipset configurations.
I am following the Google tips to implement a JNI layer between my Android app and my C++ library. They suggest using the following code to register native methods when the library is loaded:
JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* reserved) {
    JNIEnv* env;
    if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK) {
        return JNI_ERR;
    }
    ... // (elided in the guide: look up the jclass `c` for your Java class)

    // Register your class' native methods.
    static const JNINativeMethod methods[] = {
        {"nativeFoo", "()V", reinterpret_cast<void*>(nativeFoo)},
        {"nativeBar", "(Ljava/lang/String;I)Z", reinterpret_cast<void*>(nativeBar)},
    };
    int rc = env->RegisterNatives(c, methods, sizeof(methods)/sizeof(JNINativeMethod));
    ...
}
I am quite new to C++ so I decided to use clang-tidy to ensure my C++ code is modern and safe. clang-tidy reports:
error: do not use reinterpret_cast [cppcoreguidelines-pro-type-reinterpret-cast,-warnings-as-errors]
According to the clang-tidy documentation:
cppcoreguidelines-pro-type-reinterpret-cast
This check flags all uses of reinterpret_cast in C++ code.
Use of these casts can violate type safety and cause the program to
access a variable that is actually of type X to be accessed as if it
were of an unrelated type Z.
This rule is part of the “Type safety” profile of the C++ Core
Guidelines, see
https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Pro-type-reinterpretcast.
So I have a few options:
1. Disable this check and risk using reinterpret_cast inappropriately elsewhere
2. Suppress the check everywhere I need to use it and create a messy codebase
3. Find some alternative way of implementing this more safely
I would like to do 3 if it's possible, but I'm not quite sure where to start.
It’s not clear why GetEnv wants a void** rather than a JNIEnv** (what other type would you use?), but you can avoid the reinterpret_cast and the concomitant undefined behavior(!) with a temporary variable:
void *venv;
if (vm->GetEnv(&venv, JNI_VERSION_1_6) != JNI_OK) {
    return JNI_ERR;
}
const auto env = static_cast<JNIEnv*>(venv);
Note that a static_cast from void* is every bit as dangerous as a reinterpret_cast, but its use here is correct, unavoidable, and shouldn’t produce linter warnings.
In the function pointer case there’s nothing to be done: reinterpret_cast is the only, correct choice (and for void* rather than some placeholder function pointer type like void (*)() its correctness is not guaranteed by C++, although it works widely and POSIX does guarantee it). You can of course hide that conversion in a function (template) so that the linter can be told to ignore only it, but make sure to use a clear name to avoid “hiding” the conversion.
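For instance, a minimal sketch of such a wrapper (the name is mine, not from any library):
#include <jni.h>

// Single place where function pointers are converted for JNINativeMethod;
// the NOLINT comment silences clang-tidy for exactly this cast and no other.
template <typename F>
void *jni_method_ptr(F *fn) {
    // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast)
    return reinterpret_cast<void *>(fn);
}

// Usage:
// static const JNINativeMethod methods[] = {
//     {"nativeFoo", "()V", jni_method_ptr(nativeFoo)},
// };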
I'm calling a native function from Java:
String pathTemp = Environment.getExternalStorageDirectory().getAbsolutePath() + Const.PATH_TEMP;
String pathFiles = Environment.getExternalStorageDirectory().getAbsolutePath() + Const.PATH_FILES;
engine.init(someInt, pathTemp, pathFiles);
And I have the native function:
extern "C" JNIEXPORT void Java_com_engine_init(JNIEnv *env, jobject __unused obj, jint someInt, jstring pathTemp, jstring pathFiles) {
const char *pathTemp_ = env->GetStringUTFChars(pathTemp, JNI_FALSE);
const char *pathFiles_ = env->GetStringUTFChars(pathFiles, JNI_FALSE); // <-- CRASH
// More init code
env->ReleaseStringUTFChars(pathTemp, pathTemp_);
env->ReleaseStringUTFChars(pathRecording, pathRecording_);
}
The problem: pathTemp arrives fine, but pathFiles == NULL in the native function.
I rechecked and confirmed - both strings are non-null in Java.
One more strange thing - the problem occurs on an LG G3 (Android 6.0).
On a Meizu PRO 5 (Android 7.0) everything works - both strings arrive intact.
What is this JNI magic? Any clue?
I had the same problem, and while I can't guarantee the cause is the same, I found a better solution than re-ordering the parameters.
tl;dr: Ensure the code works on both 32-bit and 64-bit platforms, as pointers have different sizes. I was running 32-bit native code and passed nullptr as a parameter where Java expected a long, which made every parameter after the nullptr invalid.
(JJLjava/lang/String;Z)V -> (final long pCallback, final long pUserPointer, final String id, final boolean b)
pCallback was always set to a valid value (a pointer cast to jlong in C++) and pUserPointer was always nullptr. I found this answer, tried switching the order around, and it "just worked", but I knew that fix was never going to be approved.
After looking at the JNI documentation on the Android website again (https://developer.android.com/training/articles/perf-jni) I took note of the "64-bit considerations" section and questioned my assumption about data sizes. The feature was developed on a 64-bit device (Pixel 3) but issues had been reported on a 32-bit device (Amazon Fire Phone), so nullptr was 32 bits wide while the Java function still expected a long (64 bits).
In my situation the offending parameter was always unused, so I could safely remove it, and everything "just worked" (including some other parameters that had been broken).
An alternative would be a define/function/macro for JniLongNullptr, which is just 0 cast to jlong (see the sketch below).
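A minimal sketch of that idea, using the name the answer suggests:
#include <jni.h>

// A "null pointer" that is always 64 bits wide on the Java side, so a
// 32-bit build cannot silently corrupt the arguments that follow it.
constexpr jlong JniLongNullptr = static_cast<jlong>(0);

// Call-site sketch: pass jlong values, never raw pointers, into (JJ...) slots.
// env->CallVoidMethod(obj, method,
//                     reinterpret_cast<jlong>(pCallback), JniLongNullptr, ...);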
Not an answer, but a workaround: I moved the string parameters before the int parameter, and now it works. I have no idea why.
This question already has answers here:
Access android context in ndk application
(2 answers)
Closed 5 years ago.
I want to use ARCore inside a native C++ Android application, so I need the JNI env and a context. The JNI env is provided inside the android_app struct, but no context is given. I found a way to get the Android context by using the VM, but there is no valid pointer to the context.
JNIEnv *env = 0;        // env:        0x0000007ed9bdb0c8
jobject contextObj = 0; // contextObj: 0x0000000000000011
_pAndroidApp->activity->vm->AttachCurrentThread(&env, NULL);
jclass activityClass = env->FindClass("android/app/NativeActivity"); // activityClass: 0x0000000000000001
// or use: jclass activityClass = env->GetObjectClass(_pAndroidApp->activity->clazz);
jmethodID contextMethod = env->GetMethodID(activityClass, "getApplicationContext", "()Landroid/content/Context;"); // contextMethod: 0x000000709660f0
contextObj = env->CallObjectMethod(_pAndroidApp->activity->clazz, contextMethod);
After this I want to create an ARCore session:
ArSession_create(_pAndroidApp->activity->env, contextObj, &ar_session_);
The pointer to contextObj seems to be wrong; in fact, activityClass seems wrong as well. The result of creating the session is a SIGINT (signal SIGINT).
The Java instance of the activity can be used as the context. I created the session by calling:
ANativeActivity *activity = app->activity;

// The NativeActivity's Java object (activity->clazz) doubles as the context.
CHECK(ArSession_create(activity->env, activity->clazz, &ar_session_) == AR_SUCCESS);
CHECK(ar_session_);

// Create a default config and make sure this device supports it.
ArConfig* ar_config = nullptr;
ArConfig_create(ar_session_, &ar_config);
CHECK(ar_config);

const ArStatus status = ArSession_checkSupported(ar_session_, ar_config);
CHECK(status == AR_SUCCESS);
CHECK(ArSession_configure(ar_session_, ar_config) == AR_SUCCESS);
ArConfig_destroy(ar_config);

// Frame object reused for every update of the session.
ArFrame_create(ar_session_, &ar_frame_);
CHECK(ar_frame_);
Remember that session creation can raise Java exceptions for things like the phone being unsupported or the companion app not being installed. These show up as a return code from ArSession_create() other than AR_SUCCESS, such as AR_UNAVAILABLE_ARCORE_NOT_INSTALLED. In that case, asserting is probably not the best thing to do; show a message to the user instead.
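A sketch of friendlier handling than CHECK() for that case (the log tag and the early return are mine; __android_log_print is the standard NDK logging call):
#include <android/log.h>

ArStatus create_status =
        ArSession_create(activity->env, activity->clazz, &ar_session_);
if (create_status != AR_SUCCESS) {
    // e.g. AR_UNAVAILABLE_ARCORE_NOT_INSTALLED or
    // AR_UNAVAILABLE_DEVICE_NOT_COMPATIBLE: tell the user, don't abort.
    __android_log_print(ANDROID_LOG_ERROR, "arcore",
                        "ArSession_create failed with status %d", create_status);
    return;
}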
I have spent a ridiculous amount of time trying to figure this out and I am at an absolute loss.
I am working with the JUCE library and have modified one of their sample projects. My goal is a very simple Android app written in C++ and ported to Android. I need a C++ function I can call that will in turn call a function on the Android side, which returns my heap size and other characteristics to my C++ code so that I can manage memory there.
If anyone has a simple solution, that would be amazing. Right now my current snag is this:
char const buf[] = "From JNI";
jstring jstr = env->NewStringUTF(buf);
jclass clazz = env->FindClass("android/os/Debug");
But I keep getting an error saying that 'NewStringUTF' is not a member of _JNIEnv... yet if I right-click on the method and jump to the definition, I see it in my jni.h file. Any suggestions? I'm working in Xcode, by the way.
Is it plain C, not C++? Perhaps your file has a .c extension.
If it's plain C, it should be:
JNIEnv* env;
JNI_CreateJavaVM(&jvm, (void **)&env, &args); /* jvm and args set up elsewhere */
(*env)->NewStringUTF(env, buf);               /* C: explicit env argument */
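For contrast, a minimal side-by-side of the two calling conventions (same call; `env` and `buf` as in the question):
// In a .cpp file, JNIEnv exposes C++ member functions:
jstring s = env->NewStringUTF(buf);

// In a .c file, JNIEnv is a pointer to the JNI function table, so every
// call goes through (*env)-> and passes env explicitly:
// jstring s = (*env)->NewStringUTF(env, buf);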