I'm working with a library that uses a callback function which is configured once and then called whenever it's needed. I need to access local variables of the enclosing C function from inside that callback, and for other reasons I can't make them members of the parent class.
Essentially, this is my setup:
callback.h
#include <stdint.h>
#include <stddef.h>

typedef void handler_func(uint8_t *data, size_t len);

typedef struct my_cfg {
    handler_func *handler;
} my_cfg;
otherfile.c
#include "callback.h"
void my_handler(uint8_t *data, size_t len);

void test() {
    char *test = "This is a test";
    my_cfg cfg = { 0 };
    cfg.handler = my_handler;
    // This is just an example; basically,
    // elsewhere in the code the handler
    // function will be called when needed.
    load_config(cfg);
}
void my_handler(uint8_t *data, size_t len) {
    // I need to access the `test` var here.
}
What I need is something like this:
#include "callback.h"
void test() {
    const char *test = "This is a test";
    my_cfg cfg = { 0 };
    // This is the type of functionality I need
    // (hypothetical anonymous-function syntax):
    cfg.handler = void (uint8_t *data, size_t len) {
        printf("I can now access test! %s", test);
    };
    // This is just an example; basically,
    // elsewhere in the code the handler
    // function will be called when needed.
    load_config(cfg);
}
Please keep in mind that I cannot change the header file that defines the handler_func signature, nor can I modify the my_cfg struct, nor the code that invokes cfg.handler. They are all internal to the library.
(Also note that there may be code errors above; this is all pseudocode, technically. I'm not at my computer, just typing this out freehand on a tablet.)
Edit
From what I understand, nested functions would solve this issue. But it appears that clang doesn't support nested functions.
Reference: https://clang.llvm.org/docs/UsersManual.html#gcc-extensions-not-implemented-yet
clang does not support nested functions; this is a complex feature
which is infrequently used, so it is unlikely to be implemented
anytime soon.
Is there another workaround?
Rust's glium lib is a nice OpenGL wrapper that facilitates lots of stuff. I want to implement Android's SurfaceTexture as a glium Backend, which means implementing the Backend trait: https://github.com/glium/glium/blob/cacb970c8ed2e45a6f98d12bd7fcc03748b0e122/src/backend/mod.rs#L36
Here are the C++ functions of SurfaceTexture https://developer.android.com/ndk/reference/group/surface-texture#summary
I think that Backend::make_current(&self); maps to ASurfaceTexture_attachToGLContext(ASurfaceTexture *st, uint32_t texName)
and Backend::is_current(&self) -> bool can be simulated somehow based on each SurfaceTexture being marked as active or not when this is called.
Maybe Backend::get_framebuffer_dimensions(&self) -> (u32, u32) is the size of the SurfaceTexture which is defined at creation so I can use that. I just don't know what to do with Backend::swap_buffers(&self) -> Result<(), SwapBuffersError>
and maybe unsafe fn Backend::get_proc_address(&self, symbol: &str) -> *const c_void can call some Android API that gets the addresses of the OpenGL functions
However, ASurfaceTexture_updateTexImage(ASurfaceTexture *st) looks important and needed, and I don't know what to map it to in the Backend. Also, what about ASurfaceTexture_detachFromGLContext(ASurfaceTexture *st)?
PS: I know there are other ways to render to an Android widget, but I need to render to a Flutter widget, and the only way is through a SurfaceTexture.
I managed to make this work some time ago with a hack-ish solution; it probably still works, since glium hasn't changed much lately.
But in my experience using ASurfaceTexture yields unreliable results; maybe that is because I used it wrongly, or maybe because Android manufacturers do not pay much attention to it, I don't know. I didn't see any real program using it, so I decided to use the well-tested Java GLSurfaceView instead, plus a bit of JNI to connect everything.
class MyGLView extends GLSurfaceView
        implements GLSurfaceView.Renderer {

    public MyGLView(Context context) {
        super(context);
        setEGLContextClientVersion(2);
        setEGLConfigChooser(8, 8, 8, 0, 0, 0);
        setRenderer(this);
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        GLJNILib.init();
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLJNILib.resize(width, height);
    }

    public void onDrawFrame(GL10 gl) {
        GLJNILib.render();
    }
}
Here com.example.myapp.GLJNILib is the JNI binding to the Rust native library, where the magic happens. The interface is quite straightforward:
package com.example.myapp;

public class GLJNILib {
    static {
        System.loadLibrary("myrustlib");
    }

    public static native void init();
    public static native void resize(int width, int height);
    public static native void render();
}
Now, this Rust library can be designed in several ways. In my particular project, since it was a simple game with a single full-screen view, I just created the glium context and stored it in a global variable. More sophisticated programs could store the Backend in a Java object, but that complicates the lifetimes and I didn't need it.
struct Data {
    dsp: Rc<glium::backend::Context>,
    size: (u32, u32),
}

static mut DATA: Option<Data> = None;
But first we have to implement the trait glium::backend::Backend, which happens to be surprisingly easy, if we assume that every time one of the Rust functions is called the proper GL context is always current:
struct Backend;

extern "C" {
    fn eglGetProcAddress(procname: *const c_char) -> *const c_void;
}

unsafe impl glium::backend::Backend for Backend {
    fn swap_buffers(&self) -> Result<(), glium::SwapBuffersError> {
        Ok(())
    }
    unsafe fn get_proc_address(&self, symbol: &str) -> *const c_void {
        let cs = CString::new(symbol).unwrap();
        eglGetProcAddress(cs.as_ptr())
    }
    fn get_framebuffer_dimensions(&self) -> (u32, u32) {
        let data = unsafe { DATA.as_ref().unwrap() };
        data.size
    }
    fn is_current(&self) -> bool {
        true
    }
    unsafe fn make_current(&self) {
    }
}
And now we can implement the JNI init function:
use jni::{
    JNIEnv,
    objects::{JClass, JObject},
    sys::jint,
};

#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_init(_env: JNIEnv, _class: JClass) {
    // log_panic (not shown) is a helper that catches Rust panics so they
    // don't unwind across the JNI boundary.
    log_panic(|| {
        unsafe { DATA = None };
        let backend = Backend;
        let dsp = unsafe {
            glium::backend::Context::new(backend, false, Default::default()).unwrap()
        };
        // Use dsp to create additional GL objects: programs, textures, buffers...
        // and store them inside `DATA` or another global.
        unsafe {
            DATA = Some(Data {
                dsp,
                size: (256, 256), // dummy size
            });
        }
    });
}
The size will be updated when the size of the view changes (not that glium uses that value much):
#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_resize(_env: JNIEnv, _class: JClass, width: jint, height: jint) {
    let data = unsafe { DATA.as_mut().unwrap() };
    data.size = (width as u32, height as u32);
}
And similarly the render function:
#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_render(_env: JNIEnv, _class: JClass) {
    let data = unsafe { DATA.as_ref().unwrap() };
    let dsp = &data.dsp;
    let mut target = glium::Frame::new(dsp.clone(), dsp.get_framebuffer_dimensions());
    // use dsp and target at will, such as:
    target.clear_color(0.0, 0.0, 1.0, 1.0);
    let (width, height) = target.get_dimensions();
    //...
    target.finish().unwrap();
}
Note that target.finish() is still needed even though glium is not actually doing the swap.
I'm using C++Builder 10.2.
In Android, I would like to send messages from various threads, including the main thread, to the main GUI thread. In Windows, I could post a message and assign an LPARAM or WPARAM to the address of some instance of a struct or class.
I'm trying to use System.Messaging.TMessageManager to do the same thing, similar to the example here: System.Messaging (C++). But I can only send 'simple' types, like UnicodeString or int. I haven't worked out how to send a pointer, assuming it's even possible at all.
I would like to send a struct/class instance like this:
class TSendResult
{
public:
    String Message;
    unsigned int Value;
    int Errno;

    __fastcall TSendResult(void);
    __fastcall ~TSendResult();
};
If this can be done, how do I write this? I managed to get one version to compile, but got a linker error:
error: undefined reference to 'vtable for System::Messaging::TMessage__1<TSendResult>'
Form constructor:
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
    TMessageManager* MessageManager = TMessageManager::DefaultManager;
    TMetaClass* MessageClass = __classid(TMessage__1<TSendResult>);
    TMessageListenerMethod ShowReceivedMessagePointer = &(this->MMReceiveAndCallBack);
    MessageManager->SubscribeToMessage(MessageClass, ShowReceivedMessagePointer);
}
Button click handler:
void __fastcall TForm1::SpeedButton1Click(TObject *Sender)
{
    ...
    TSendResult *SPtr = new TSendResult();
    SPtr->Message = "All good";
    SPtr->Value = 10;
    SPtr->Errno = 0;
    TMessageManager* MessageManager = TMessageManager::DefaultManager;
    TMessage__1<TSendResult>* Message = new TMessage__1<TSendResult>(*SPtr); // <-- this doesn't look right...
    MessageManager->SendMessage(Sender, Message, false);
}
Function that captures messages:
void __fastcall TForm1::MMReceiveAndCallBack(System::TObject* const Sender,
    System::Messaging::TMessageBase* const M)
{
    TMessage__1<TSendResult>* Message = dynamic_cast<TMessage__1<TSendResult>*>(M);
    if (Message) {
        ShowMessage(Message->Value.Message);
    }
}
TMessage__1<T> is a C++ class implementation for the Delphi Generic TMessage<T> class. Unfortunately, there is a documented limitation when using Delphi Generic classes in C++, which is why you are getting a linker error:
How to Handle Delphi Generics in C++
Delphi generics are exposed to C++ as templates. However, it is important to realize that the instantiations occur on the Delphi side, not in C++. Therefore, you can only use these templates for types that were explicitly instantiated in Delphi code.
...
If C++ code attempts to use a Delphi generic for types that were not instantiated in Delphi, you'll get errors at link time.
Which is why TMessage__1<UnicodeString> works but TMessage__1<TSendResult> does not, as there is an instantiation of TMessage<UnicodeString> present in the Delphi RTL. Whoever wrote the C++ example you are looking at was likely not aware of this limitation and was just translating the Delphi example as-is.
That being said, you have two choices:
Add a Delphi .pas unit to your C++ project, implementing TSendResult as a Delphi record, and defining an instantiation of TMessage<TSendResult> for it. Then you can use that unit in your C++ code (C++Builder will generate a C++ .hpp file for you when the .pas file is compiled), eg:
unit MyMessageTypes;

interface

uses
  System.Messaging;

type
  TSendResult = record
    Message: String;
    Value: UInt32;
    Errno: Integer;
  end;
  TSendResultMsg = TMessage<TSendResult>;

implementation

initialization
  TSendResultMsg.Create.Free;
finalization
end.
#include "MyMessageTypes.hpp"

__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
    TMessageManager::DefaultManager->SubscribeToMessage(__classid(TSendResultMsg), &MMReceiveAndCallBack);
}

void __fastcall TForm1::SpeedButton1Click(TObject *Sender)
{
    ...
    TSendResult Res;
    Res.Message = _D("All good");
    Res.Value = 10;
    Res.Errno = 0;
    TSendResultMsg *Message = new TSendResultMsg(Res);
    TMessageManager::DefaultManager->SendMessage(this, Message, true);
}

void __fastcall TForm1::MMReceiveAndCallBack(System::TObject* const Sender,
    System::Messaging::TMessageBase* const M)
{
    const TSendResultMsg* Message = static_cast<const TSendResultMsg*>(M);
    ShowMessage(Message->Value.Message);
}
Rather than using TMessage__1 at all, you can instead derive a message class directly from TMessageBase, eg:
class TSendResultMsg : public TMessageBase
{
public:
    String Message;
    unsigned int Value;
    int Errno;
};

__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
    TMessageManager::DefaultManager->SubscribeToMessage(__classid(TSendResultMsg), &MMReceiveAndCallBack);
}

void __fastcall TForm1::SpeedButton1Click(TObject *Sender)
{
    ...
    TSendResultMsg *Message = new TSendResultMsg;
    Message->Message = _D("All good");
    Message->Value = 10;
    Message->Errno = 0;
    TMessageManager::DefaultManager->SendMessage(this, Message, true);
}

void __fastcall TForm1::MMReceiveAndCallBack(System::TObject* const Sender,
    System::Messaging::TMessageBase* const M)
{
    const TSendResultMsg* Message = static_cast<const TSendResultMsg*>(M);
    ShowMessage(Message->Message);
}
Most of the examples I'm looking at on the Web have pthread_mutex_t sitting at the top of the file in global scope, and I think I read somewhere that Linux mutexes have to be global. Is this true?
edit:
I have some Win32 multithreading code that I'm porting over to Linux. For the windows code, there are several wrapper functions that encapsulate things like mutex creation and locking/unlocking. My understanding is that every synchronization primitive that's created through one of the Create() API calls in Windows returns a HANDLE that can be stored in an instance field and then used later. In this case, it's used in the Lock() function, which is wrapper around WaitForSingleObject(). For Linux, could I simply store the mutex in an instance field and call pthread_mutex_lock()/pthread_cond_wait() in the Lock() function and expect the same behavior as on Windows?
Nv_Mutex::Nv_Mutex(Nv_XprocessID name)
{
#if defined(WIN32)
    if ((handle = ::CreateMutexA(0, false, name)) == NULL)
    {
        throw Nv_EXCEPTION(XCPT_ResourceAllocationFailure, GetLastError());
    }
    isCreator = !(::GetLastError() == ERROR_ALREADY_EXISTS);
#else
    if (name == Nv_XprocessID_NULL) {
        /*
        pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;                   // Fast
        pthread_mutex_t recmutex = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;   // Recursive
        pthread_mutex_t errchkmutex = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP; // Errorcheck
        */
        mutex = PTHREAD_MUTEX_INITIALIZER;
        // attributes??
        if (pthread_mutex_init(&mutex, NULL) != 0) {
            throw Nv_EXCEPTION(XCPT_ResourceAllocationFailure, GetLastError());
        }
    }
    else {
        // insert code for named mutex (needed for shared mutex across processes) here.
    }
    //isCreator = !(GetLastError() == EBUSY);
#endif
}

bool
Nv_Mutex::Lock(const char *f, int l, Nv_uint32 timeout)
{
    switch (WaitForSingleObject(handle, timeout))
    {
    case WAIT_OBJECT_0:
        file = f;
        line = l;
        return true;
    case WAIT_TIMEOUT:
        return false;
    }
    throw Nv_EXCEPTION(XCPT_WaitFailed, GetLastError());
}
No, they can be scoped. There is nothing special about the actual mutex pointer.
You have the requirement a bit wrong. Mutexes do not need to be global; however, you cannot statically initialize a non-static mutex. But you do not need to statically initialize a mutex prior to calling pthread_mutex_init on it, because that call initializes it. So just don't use static initializers; call pthread_mutex_init instead.
It will actually work, but only by luck, due to details of the implementation. Please don't rely on an implementation detail.
Static initialization is legal only for statically ALLOCATED storage[.] ... Although C syntax allows using the static initialization macros on "automatic" variables, this is specifically prohibited by the POSIX standard. It's not correct, and it's not portable. - David Butenhof
I'm at wits' end here and would like to know if anyone's seen anything similar when working with JNI on Android. I'm finding that CallVoidMethod() doesn't work for me unless the targeted void Java method also has no parameters. However, if I simply change the targeted Java method to return an int, then CallIntMethod() works just fine. It's not the end of the world, but I'd like to omit the dummy int return value if I can, just for the sake of simplicity and correctness. The two (almost equivalent) code snippets are below:
// Example One - this works fine!

// java object method
public int java_callback(int value)
{
    assert(value == 666);
    return 0; // useless
}

// cpp native function
void cpp_callback()
{
    // JNI globals:
    // g_jvm cached as is
    // g_cls cached as GlobalRef
    // g_obj cached as GlobalRef
    JNIEnv *env;
    g_jvm->AttachCurrentThread(&env, NULL);
    jmethodID mid = env->GetMethodID(g_cls, "java_callback", "(I)I");
    env->CallIntMethod(g_obj, mid, 666);
}

// Example Two - this doesn't work!?

// java object method
public void java_callback(int value)
{
    assert(value == 666); // never gets here
}

// cpp native function
void cpp_callback()
{
    // JNI globals:
    // g_jvm cached as is
    // g_cls cached as GlobalRef
    // g_obj cached as GlobalRef
    JNIEnv *env;
    g_jvm->AttachCurrentThread(&env, NULL);
    jmethodID mid = env->GetMethodID(g_cls, "java_callback", "(I)V");
    env->CallVoidMethod(g_obj, mid, 666);
}
Let me emphasize that the first example does indeed work, so there's no outside issues here. I simply can't get the code to work unless I have a dummy int return. Ideas?
I have some problems when using the dynamic loading API (<dlfcn.h>: dlopen(), dlclose(), etc) on Android.
I'm using NDK standalone toolchain (version 8) to compile the applications and libraries.
The Android version is 2.2.1 Froyo.
Here is the source code of the simple shared library.
#include <stdio.h>

int iii = 0;
int *ptr = NULL;

__attribute__((constructor))
static void init()
{
    iii = 653;
}

__attribute__((destructor))
static void cleanup()
{
}

int aaa(int i)
{
    printf("aaa %d\n", iii);
    return i;
}
Here is the program source code which uses the mentioned library.
#include <dlfcn.h>
#include <stdlib.h>
#include <stdio.h>

int main()
{
    void *handle;
    typedef int (*func)(int);
    func bbb;

    printf("start...\n");
    handle = dlopen("/data/testt/test.so", RTLD_LAZY);
    if (!handle)
    {
        return 0;
    }
    bbb = (func)dlsym(handle, "aaa");
    if (bbb == NULL)
    {
        return 0;
    }
    bbb(1);
    dlclose(handle);
    printf("exit...\n");
    return 0;
}
With these sources everything works fine, but when I try to use some STL functions or classes, the program crashes with a segmentation fault when main() exits, for example with this source code for the shared library.
#include <iostream>

using namespace std;

int iii = 0;
int *ptr = NULL;

__attribute__((constructor))
static void init()
{
    iii = 653;
}

__attribute__((destructor))
static void cleanup()
{
}

int aaa(int i)
{
    cout << iii << endl;
    return i;
}
With this code, the program crashes with a segmentation fault during or after main() exit.
I tried a couple of tests and found the following results.
Without using the STL, everything works fine.
When using the STL but not calling dlclose() at the end, everything works fine.
I tried compiling with various flags like -fno-use-cxa-atexit or -fuse-cxa-atexit; the result is the same.
What is wrong in my code that uses the STL?
Looks like I found the reason for the bug. I tried another example with the following source files:
Here is the source code of the simple class:
myclass.h
class MyClass
{
public:
    MyClass();
    ~MyClass();

    void Set();
    void Show();

private:
    int *pArray;
};
myclass.cpp
#include <stdio.h>
#include <stdlib.h>

#include "myclass.h"

MyClass::MyClass()
{
    pArray = (int *)malloc(sizeof(int) * 5);
}

MyClass::~MyClass()
{
    free(pArray);
    pArray = NULL;
}

void MyClass::Set()
{
    if (pArray != NULL)
    {
        pArray[0] = 0;
        pArray[1] = 1;
        pArray[2] = 2;
        pArray[3] = 3;
        pArray[4] = 4;
    }
}

void MyClass::Show()
{
    if (pArray != NULL)
    {
        for (int i = 0; i < 5; i++)
        {
            printf("pArray[%d] = %d\n", i, pArray[i]);
        }
    }
}
As you can see from the code, I did not use any STL-related stuff.
Here are the source files of the functions the library exports.
func.h
#ifdef __cplusplus
extern "C" {
#endif
int SetBabe(int);
int ShowBabe(int);
#ifdef __cplusplus
}
#endif
func.cpp
#include <stdio.h>

#include "myclass.h"
#include "func.h"

MyClass cls;

__attribute__((constructor))
static void init()
{
}

__attribute__((destructor))
static void cleanup()
{
}

int SetBabe(int i)
{
    cls.Set();
    return i;
}

int ShowBabe(int i)
{
    cls.Show();
    return i;
}
And finally, this is the source code of the program that uses the library.
main.cpp
#include <dlfcn.h>
#include <stdlib.h>
#include <stdio.h>

#include "../simple_lib/func.h"

int main()
{
    void *handle;
    typedef int (*func)(int);
    func bbb;

    printf("start...\n");
    handle = dlopen("/data/testt/test.so", RTLD_LAZY);
    if (!handle)
    {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb = (func)dlsym(handle, "SetBabe");
    if (bbb == NULL)
    {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb(1);
    bbb = (func)dlsym(handle, "ShowBabe");
    if (bbb == NULL)
    {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb(1);
    dlclose(handle);
    printf("exit...\n");
    return 0;
}
Again, as you can see, the program using the library does not use any STL-related stuff either, but after running it I got the same segmentation fault during main() exit. So the issue is not connected to the STL itself; it is hidden somewhere else. After some long research I found the bug.
Normally, destructors of static C++ objects defined in the main program are called immediately before main() exits; if they are defined in a library you loaded, they should be called during dlclose().
On Android, however, all destructors of static C++ objects, whether defined in the main program or in a loaded library, are called during main() exit. So what happens in our case? We have the static C++ variable cls defined in the library. Immediately before main() exits we call dlclose(); the library is unmapped and cls becomes invalid. But a pointer to cls was registered somewhere, and its destructor is still invoked during main() exit; since cls is already invalid at that point, we get a segmentation fault. So the solution is to not call dlclose(), and everything should be fine. Unfortunately, with this solution we cannot use __attribute__((destructor)) to deinitialize anything, because it only runs as a result of dlclose().
I have a general aversion to calling dlclose(). The problem is that you must ensure that nothing will try to execute code in the shared library after it has been unmapped, or you will get a segmentation fault.
The most common way to fail is to create an object whose destructor is defined in or calls code defined in the shared library. If the object still exists after dlclose(), your app will crash when the object is deleted.
If you look at logcat you should see a debuggerd stack trace. If you can decode that with the arm-eabi-addr2line tool you should be able to determine if it's in a destructor, and if so, for what class. Alternatively, take the crash address, strip off the high 12 bits, and use that as an offset into the library that was dlclose()d and try to figure out what code lives at that address.
I encountered the same headache on Linux. A workaround that fixes my segfault is to put these lines in the same file as main(), so that dlclose() is called after main() returns:
static void *handle = 0;

void myDLClose(void) __attribute__((destructor));
void myDLClose(void)
{
    dlclose(handle);
}

int main()
{
    handle = dlopen(...);
    /* ... real work ... */
    return 0;
}
The root cause of the dlclose-induced segfault may be that a particular implementation of dlclose() does not clean up the global variables inside the shared object.
You need to compile with -fpic as a compiler flag for the application that is using dlopen() and dlclose(). You should also try error handling via dlerror(), and perhaps check whether the assignment of your function pointer is valid; even if it's not NULL, the function pointer could be pointing at something invalid from initialization, since dlsym() is not guaranteed to return NULL on Android if it cannot find a symbol. Refer to the Android documentation rather than the POSIX-compliant material; not everything is POSIX-compliant on Android.
You should use extern "C" to declare your function aaa().