I am building C++ code that is used on both Android and iOS. I need some form of debug macro so I can insert debugging output into the code easily.
For example, I was thinking of something like this:
#ifdef ANDROID
# define MY_DEBUG(debugmsg) __android_log_print(ANDROID_LOG_INFO, ANDROID_DEBUG_TAG,debugmsg)
# define MY_DEBUG(debugmsg, mystr) __android_log_print(ANDROID_LOG_INFO, ANDROID_DEBUG_TAG,debugmsg,mystr)
#elif defined (iOS)
# define MY_DEBUG(debugmsg) printf(debugmsg)
# define MY_DEBUG(debugmsg, mystr) printf(debugmsg, mystr)
#endif
So for example I could use MY_DEBUG("hello %s","world") and MY_DEBUG("hello")
However, this complains about macro redefinition (and rightfully so). How do I make a macro 'overload', or accept a variable number of parameters?
Also - does printf() send data to the iOS console?
You can't overload macros the way you can overload functions, because the preprocessor does not consider the number of arguments when matching a macro name. A common approach is to define MY_DEBUG, MY_DEBUG2, and so on, one per arity.
There are variadic macros, but I avoid them in multi-platform code because older compilers support them inconsistently.
Related
I have a JNI function in a C++ library.
When I create the library using cmake (but forget to put function declaration) in the header file, the library is created successfully.
When I look for function name in library symbol table I get following output
nm libuserlibrary.so | grep printxx
00506e60 T _Z60Java_com_example_user_myapplication_userlibrary_printxxP7JNIEnv_P8_jobject
But when I give function declaration in the header file, I get following output
nm libuserlibrary.so | grep printxx
00506e50 T Java_com_example_user_myapplication_userlibrary_printxx
Why is there this difference between these two symbol table entries? What is the purpose of _Z60 and P7JNIEnv_P8_jobject around the function name?
I also noticed that in the first case, I cannot call the JNI function from Android Java code (it fails with an UnsatisfiedLinkError: implementation not found).
C++ allows function overloads and namespaces, like Java does, so the compiler annotates ("mangles") the function name with parameter information so that the linker can bind to the correct overload. In your first symbol, _Z marks a mangled name, 60 is the length of the identifier that follows, and P7JNIEnv_ / P8_jobject encode the parameter types (pointer to the 7-character type JNIEnv_, pointer to the 8-character type _jobject).
JNI was designed for C, which does not allow function overloads or namespaces, so it invented its own annotation scheme and provides the javah tool to help you use it. The generated header can be used in C++ too. C++ was designed to allow some functions to be called as if they were written in C, and the header contains code that indicates this to the compiler. Put it all together and you can write Java-callable functions in C++.
We're having a problem struct memory packing and alignment.
Android is not honoring the #pragma pack(push, <n>) directive, which appears in several hundred places in our code base. This causes segfaults.
The Android Clang compiler requires an __attribute__ decorator on the struct or class, for example:
struct __attribute__((packed, aligned(8))) Test
{
char a;
char b;
double d;
};
As opposed to this for Visual C++ that honors the pragma:
#pragma pack(push, 8)
struct Test
{
char a;
char b;
double d;
};
#pragma pack(pop)
Since the use of #pragma pack is so widespread, it will be a time-consuming task to fix.
We tried using the -mms-bitfields compiler flag which sets the default structure layout to be compatible with the Microsoft compiler standard (i.e. it honors the #pragma pack). However this only works for trivial structs and not classes with base classes or virtual functions. We get the following error with these type of classes.
“error : ms_struct may not produce Microsoft-compatible layouts for classes with base classes or virtual functions [-Wincompatible-ms-struct]”
How can we mitigate this issue - is there any workaround to make #pragma pack work for non-trivial structs/classes other than to go over all the classes/struct between push and pop pragmas and add the packed attribute?
Thanks
First of all, I have the impression that you are doing something fundamentally wrong if there are "several hundred places" in your code where you need to define alignment to prevent a segfault. This pragma is non-standard, and it is unusual to use it at all, let alone as extensively as you do.
Anyway, since Clang will ignore the pragma and MSVC will ignore the attributes, I'd put both in the code. You might use e.g. grep and sed to avoid a lot of manual work.
I have managed to get SQLite setup using the NDK, but I can't manage to get custom functions to work which was the whole reason for implementing SQLite using the NDK.
I used this library to get the same SQLite files. It also contains an extension file called extensionfunctions.c which adds in string and mathematical functions for SQLite.
From what I can see, the SQLite implementation appears to be working correctly, but I cannot call any of the custom functions.
I've little to no knowledge of C/C++, so any help would be great. Do I have to compile the extensionfunctions.c file independently, and then add in the SO file with the libsqliteX.so file? Or do I have to make a call in the android_database_SQLiteCommon.cpp to load in the other extension? I've no idea how this works.
Edit
The file extensionfunctions.c is included in sqlite3secure.c which is in the Android.mk file under LOCAL_SRC_FILES. I assume that means the file is being used correctly, but none of the custom functions are accessible.
Edit 2
// To enable the extension functions define SQLITE_ENABLE_EXTFUNC on compiling this module
#ifdef SQLITE_ENABLE_EXTFUNC
#define sqlite3_open sqlite3_open_internal
#define sqlite3_open16 sqlite3_open16_internal
#define sqlite3_open_v2 sqlite3_open_v2_internal
#endif
#include "sqlite3.c"
#ifdef SQLITE_ENABLE_EXTFUNC
#undef sqlite3_open
#undef sqlite3_open16
#undef sqlite3_open_v2
#endif
I found the above code with the comment and I added in the line below but it appears as though nothing has changed.
#define SQLITE_ENABLE_EXTFUNC
Do I have to do anything to get the app to refresh its version of the sqlite3 files, or could that be a problem? My C skills are poor, so I assume what I did is what the comment is referring to?
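A #define added in your own file usually isn't enough, because sqlite3secure.c is compiled as a separate translation unit and never sees it. The comment means the symbol should be defined when compiling that module, i.e. passed to the compiler itself. A sketch of what that could look like in Android.mk (variable names assume a standard ndk-build setup):

```makefile
# Define SQLITE_ENABLE_EXTFUNC for every source file in this module,
# so sqlite3secure.c is compiled with the extension functions enabled.
LOCAL_CFLAGS += -DSQLITE_ENABLE_EXTFUNC
```

After adding the flag, rebuild from clean (ndk-build clean, or delete the obj/ directory) so the SQLite sources are actually recompiled with it.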
I'm doing a project for answering questionnaires. The user creates the project with the questions in Delphi, then exports the project as a .txt file to Android, where the file is read and the user can answer. My problem is with characters like á, à, É, Ú, which appear as ? (character code 65533) on Android. So, I need to know how to configure Android and Delphi to use the same character set.
Android is Linux based and so presumably uses UTF-8. On the other hand, Android is also very Java-like and so possibly prefers UTF-16.
If you need the file to be UTF-8, you can do it like this, assuming you have your text in a TStringList.
StringList.SaveToFile(FileName, TEncoding.UTF8);
This will include a BOM in the file, which I imagine Android won't like: Windows UTF-8 apps tend to use BOMs, but Linux ones do not. If you want to output without a BOM, do it like this:
type
TUTF8EncodingNoBOM = class(TUTF8Encoding)
public
function GetPreamble: TBytes; override;
end;
function TUTF8EncodingNoBOM.GetPreamble: TBytes;
begin
Result := nil;
end;
...
var
UTF8EncodingNoBOM: TEncoding;//make this a global variable
...
UTF8EncodingNoBOM := TUTF8EncodingNoBOM.Create;//create in a unit initialization, remember to free it
...
StringList.SaveToFile(FileName, UTF8EncodingNoBOM);
If you discover you need UTF-16 then use TEncoding.Unicode for UTF-16LE or TEncoding.BigEndianUnicode for UTF-16BE. If you need to strip the BOM then that's easy enough with the same technique as above.
Summary
Work out what encoding you need, and its endianness.
Find an appropriate TEncoding.
Use TStrings.SaveToFile with that TEncoding instance.
Use Unicode. UTF-16 or UTF-8 should work fine.
See David's answer for an explanation of why this should work and how to do it in D2009 and newer.
For Delphi 2007 and older you have to use another solution: UTF8Encode together with an Ansi TStringList can be used, or you can convert your strings to WideStrings and use a WideString list.
To write UTF-8 using D2007 and older see this question:
How can a text file be converted from ANSI to UTF-8 with Delphi 7?
To write UTF-16 using D2007 you can use the WideStrings unit which contains a TWideStringList. Beware that this class doesn't write the BOM by default.
There are also other WideStringList implementations for older Delphi versions out there.
I'm having troubles with <stdint.h> when using -std=c++0x in GCC 4.4.3 (for Android):
// using -std=c++0x
#include <stdint.h>
uint64_t value; // error: 'uint64_t' does not name a type
But using -std=gnu++0x works:
// using -std=gnu++0x
#include <stdint.h>
uint64_t value; // OK
Is <stdint.h> incompatible with C++0x?
So far as I can tell, I think this could be argued an implementation bug (or actually, since C++0x isn't published, not a bug per se but an incomplete implementation of the current state of the upcoming standard).
Here's why, referring to n3225 for the expected behavior of -std=c++0x:
D.7 says:
"Every C header, each of which has a name of the form name.h, behaves as if each name placed in the standard library namespace by the corresponding cname header is placed within the global namespace scope."
OK, so far so easy. What does <cstdint> place in the standard library namespace?
18.4.1:
typedef unsigned integer type uint64_t; // optional
How optional? 18.4.1/2:
"The header defines all functions, types, and macros the same as 7.18 in the C standard."
Drat. What does the C standard say? Taking out n1256, 7.18.1.1/3:
"These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names."
But hang on, surely on Android with -std=c++0x GCC does provide a 64 bit unsigned type with no padding bits: unsigned long long. So <cstdint> is required to provide std::uint64_t and hence stdint.h is required to provide uint64_t in the global namespace.
Go on, someone tell me why I'm wrong :-) One possibility is that C++0x refers to "ISO/IEC 9899:1999 Programming languages — C" without specifying a version. Can it really be that (a) 7.18.1.1/3 was added in one of the TCs, and also (b) C++0x intends to reference the original standard as of 1999, not the amendments since then? I doubt either of these is the case, but I don't have the original C99 on hand to check (a) and I'm not even sure how to check (b).
Edit: oh, as for which one should be used -std=c++0x isn't really a strict standards-compliant mode yet, since there isn't a strict standard yet. And even if there was a standard, gcc 4.4.3 certainly isn't a finished implementation of it. So I see no great need to use it if -std=gnu++0x is actually more complete, at least in this respect for your combination of gcc version and platform.
However, gnu++0x will enable other GNU extensions that you might not want your code to use. If you're aiming to write portable C++0x, then eventually you'd want to switch to -std=c++0x. But I don't think GCC 4.4 or any other C++0x implementation-in-progress is complete enough yet for it to be practical to write code from the (draft) standard, such that you could say with a straight face "I'm programming C++0x, and it's only 2011!". So I'd say, use whichever one works, and understand that whichever one you use now, you'll probably be switching to -std=c++11 eventually anyway.