I have the following macro in a logging library of mine:
#define TRACE_E(__logCat,__format,...) \
do { \
::dbg::LogCategory * const __catPtrVal = (::dbg::LogCategory *)(__logCat); \
if( NULL != __catPtrVal && __catPtrVal->IsEnabled() ) \
{ \
__catPtrVal->Error( __format, __VA_ARGS__ ); \
} \
} while( false )
Under Visual Studio (2008) it works as intended, i.e. I can do both TRACE_E( pLog, "some message without parameters" ); and TRACE_E( pLog, "some message with parameters %d %d", 4, 8 );
But when using this same library with Eclipse and the Android NDK, I get a compilation error if I don't pass at least one parameter after the format string, i.e. TRACE_E( pLog, "some message without parameters" ); is not valid, but TRACE_E( pLog, "some message without parameters", 0 ); is, which forces me to pass a dummy parameter when none is needed.
Is there any difference in behaviour with variadic macros between g++ and Visual Studio's compiler? Thank you.
Yes. What you are attempting is not possible in standard C or C++.
This is arguably a defect in the respective standards for which different compilers have different workarounds. Visual Studio tries to make it work as-is, gcc and clang require the following syntax:
__catPtrVal->Error( __format, ##__VA_ARGS__ );
This is described in gcc's documentation on variadic macros; clang simply adopted gcc's behaviour. Unfortunately, MSVC does not understand this syntax. There is, to my knowledge, no portable way of solving this in the general case.
For your particular macro, though, you could simply write
#define TRACE_E(__logCat,...) \
do { \
::dbg::LogCategory * const __catPtrVal = (::dbg::LogCategory *)(__logCat); \
if( NULL != __catPtrVal && __catPtrVal->IsEnabled() ) \
{ \
__catPtrVal->Error(__VA_ARGS__ ); \
} \
} while( false )
since the only place where __format was used is directly before __VA_ARGS__.
Side note: You're using a lot of reserved identifiers there. Unless you're writing a standard library implementation, you should go easier on the underscores.
Former title was: crash on vsprintf starting from Android 12 (API >= 31)
My Android app uses a native library (libexif) built with the NDK. At some point in my native code (adapted from exif), I call the vsprintf function, which makes the app crash.
Originally exif called vfprintf (stderr, format, args); I replaced that with vsprintf so as to store the string for later use.
static void
log_func (ExifLog *log, ExifLogCode code, const char *domain,
const char *format, va_list args, void *data)
{
char dest[1024] = {0, };
vsprintf(dest, format, args);
}
The message (as per the output from an API where it works) should be:
The tag 'ComponentsConfiguration' contains an invalid number of components (3, expected 4).
The format variable contains: "The tag '%s' contains an invalid number of components (%i, expected %i)."
I couldn't check the values of the arguments (I haven't found how to print them).
This works without any problem up to API 30 (on the emulator).
On an API 31 image it crashes at the call to vsprintf.
Updating the NDK to 25b doesn't fix it either.
My API30 image is x86, the API31 image is x86_64.
Any idea to fix/workaround this?
Other parts of code that may be of interest:
#ifndef NO_VERBOSE_TAG_STRINGS
static void
exif_entry_log (ExifEntry *e, ExifLogCode code, const char *format, ...)
{
va_list args;
ExifLog *l = NULL;
if (e && e->parent && e->parent->parent)
l = exif_data_get_log (e->parent->parent);
va_start (args, format);
exif_logv (l, code, "ExifEntry", format, args);
va_end (args);
}
#else
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#define exif_entry_log(...) do { } while (0)
#elif defined(__GNUC__)
#define exif_entry_log(x...) do { } while (0)
#else
#define exif_entry_log (void)
#endif
#endif
#define CC(entry,target,v,maxlen) \
{ \
if (entry->components != target) { \
exif_entry_log (entry, EXIF_LOG_CODE_CORRUPT_DATA, \
_("The tag '%s' contains an invalid number of " \
"components (%i, expected %i)."), \
exif_tag_get_name (entry->tag), \
(int) entry->components, (int) target); \
break; \
} \
}
// Then this call later
CC (e, 4, val, maxlen);
Update:
In the meantime, since API 30 images have x86 arch and API 31 images are x86_64, I just tried API 30 with x86_64. For now:
API 30 x86 → no crash in app
API 30 x86_64 → crash in app
API 31 x86_64 → crash in app
So this looks like it is x86 vs x86_64 emulator image related.
On a real arm8 device, there is no crash either.
I also found that the args passed to vsprintf don't have the expected values. And if I simulate the vsprintf call with the expected values, vsprintf works fine. So the problem is likely not vsprintf itself.
The cause of the problem was that I was consuming args twice, passing the same va_list to two successive v*printf calls.
This is not allowed, as explained in https://stackoverflow.com/a/10807375/15401262, and leads to undefined behavior.
A va_list can't be consumed twice; one has to use va_copy in that case.
When I read the source code of Google's glog, I found the following macro confusing:
#define LOG_IF(severity, condition) \
static_cast<void>(0), \
!(condition) ? (void) 0 : google::LogMessageVoidify() & LOG(severity)
What's the meaning of static_cast<void>(0),?
What's the meaning of & in the third line?
What does the result look like if we expand this macro?
I'm new to C++. Thank you all for the help!
I have just created a protobuf file (.pb file) for my own custom images using a TensorFlow tutorial.
But when I replace that file in the assets folder (tensorflow/examples/android/assets) and try to build an APK, the APK gets generated, but it crashes when I run it on an Android device.
If I run the classify_image from Python, it gives me proper results.
Appreciate any help.
Since DecodeJpeg isn't supported as part of the core, you'll need to strip it out of the graph first.
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=your_retrained_graph.pb \
--output_graph=stripped_graph.pb \
--input_node_names=Mul \
--output_node_names=final_result \
--input_binary=true
Change a few parameters in this file:
/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageListener.java
The input size needs to be 299, not 224. You'll also need to change the mean and std values both to 128, set
INPUT_NAME to "Mul:0", and
OUTPUT_NAME to "final_result:0",
after which you will be able to compile the APK.
Good Luck
The Android build process generates(?) Java stubs for each of the classes in the android.jar, and stores them in the following directory:
./out/target/common/obj/JAVA_LIBRARIES/android_stubs_current_intermediates/src/
For example, the subdirectory java/lang/ of the above directory contains .java files corresponding to java.lang.* classes, and the subdirectory android/app/ contains .java files corresponding to android.app.* classes. These .java files don't contain actual code, just signatures with dummy bodies.
I am assuming that those .java files are generated from the actual source code using a tool. My question is, which is this tool, and is it usable outside of the Android build process?
I want to use that tool to generate stubs for non-Android Java classes.
The "stubs" here are the framework API stubs generated by running the javadoc tool.
In most cases, when we talk about a stub file in Android, we mean a Java file generated by the aidl tool. For example, see How to generate stub in android? - Stack Overflow.
In particular, the Android build system contains a makefile called droiddoc.mk that can be used to generate documentation, java API stubs and API xml files, which actually calls javadoc.
droiddoc.mk is under build/core. In build/core/config.mk there is a variable named BUILD_DROIDDOC to make it easier to include the droiddoc.mk.
Looking at droiddoc.mk, we can see that it calls javadoc:
javadoc \
@$(PRIVATE_SRC_LIST_FILE) \
-J-Xmx1280m \
$(PRIVATE_PROFILING_OPTIONS) \
-quiet \
-doclet com.google.doclava.Doclava \
-docletpath $(PRIVATE_DOCLETPATH) \
-templatedir $(PRIVATE_CUSTOM_TEMPLATE_DIR) \
$(PRIVATE_DROIDDOC_HTML_DIR) \
$(addprefix -bootclasspath ,$(PRIVATE_BOOTCLASSPATH)) \
$(addprefix -classpath ,$(PRIVATE_CLASSPATH)) \
-sourcepath $(PRIVATE_SOURCE_PATH)$(addprefix :,$(PRIVATE_CLASSPATH)) \
-d $(PRIVATE_OUT_DIR) \
$(PRIVATE_CURRENT_BUILD) $(PRIVATE_CURRENT_TIME) \
$(PRIVATE_DROIDDOC_OPTIONS) \
&& touch -f $@
There is nothing about the stubs, right? Don't worry: notice that there is a PRIVATE_DROIDDOC_OPTIONS variable, and
PRIVATE_DROIDDOC_OPTIONS := $(LOCAL_DROIDDOC_OPTIONS)
Many Android.mk files in AOSP, for example frameworks/base/Android.mk, contain include $(BUILD_DROIDDOC) to generate docs. In frameworks/base/Android.mk, there is a piece of code:
LOCAL_DROIDDOC_OPTIONS:=\
$(framework_docs_LOCAL_DROIDDOC_OPTIONS) \
-stubs $(TARGET_OUT_COMMON_INTERMEDIATES)/JAVA_LIBRARIES/android_stubs_current_intermediates/src \
-api $(INTERNAL_PLATFORM_API_FILE) \
-nodocs
LOCAL_DROIDDOC_CUSTOM_TEMPLATE_DIR:=build/tools/droiddoc/templates-sdk
LOCAL_UNINSTALLABLE_MODULE := true
include $(BUILD_DROIDDOC)
The LOCAL_DROIDDOC_OPTIONS contains a -stubs option, and it is finally put into the javadoc command used by droiddoc.mk.
However, you may notice that javadoc itself doesn't have any option like -stubs. The key is that you can customize the content and format of the Javadoc tool's output by using doclets. The Javadoc tool has a default "built-in" doclet, called the standard doclet, that generates HTML-formatted API documentation. You can modify or subclass the standard doclet, or write your own doclet to generate HTML, XML, MIF, RTF or whatever output format you'd like.
We can use the -doclet option to specify a customized doclet, and the javadoc command in droiddoc.mk uses -doclet com.google.doclava.Doclava. That doclet receives the -stubs option.
Look at the Doclava implementation under external/doclava/src/com/google/doclava/Doclava.java
else if (a[0].equals("-stubs")) {
stubsDir = a[1];
} else if (a[0].equals("-stubpackages")) {
stubPackages = new HashSet<String>();
for (String pkg : a[1].split(":")) {
stubPackages.add(pkg);
}
}
It receives the -stubs option. And here is how it processes stubsDir:
// Stubs
if (stubsDir != null || apiFile != null || proguardFile != null) {
Stubs.writeStubsAndApi(stubsDir, apiFile, proguardFile, stubPackages);
}
Tracing the implementation of Stubs.writeStubsAndApi, you can see why the content of the stub files looks the way it does.
You can even write your own Java files and generate your own stubs, like the test cases under build/tools/droiddoc/test do.
I am having a hard time tracking down a bug in native C code within my Android application. This code sits in Wireshark, which I've ported to Android. I have run this same code on x86 numerous times under Valgrind and GDB and can never find a problem with it. However, when it runs on Android, it seems to behave differently and cause segmentation faults every so often (e.g., after running ~100 times).
To be honest, I do not understand the syntax of the code that well, so I'm having a hard time understanding what assumptions might have been made about an x86 machine that do not hold about an ARM-based processor.
Essentially, what it tries to do is bypass constantly allocating and freeing memory by placing memory into a "bucket" and allowing it to be reused. Whether this is actually better in terms of performance is a separate question; I'm simply trying to adopt pre-existing code. But to do so, it has a couple of main macros:
#define SLAB_ALLOC(item, type) \
if(!type ## _free_list){ \
int i; \
union type ## slab_item *tmp; \
tmp=g_malloc(NITEMS_PER_SLAB*sizeof(*tmp)); \
for(i=0;i<NITEMS_PER_SLAB;i++){ \
tmp[i].next_free = type ## _free_list; \
type ## _free_list = &tmp[i]; \
} \
} \
item = &(type ## _free_list->slab_item); \
type ## _free_list = type ## _free_list->next_free;
#define SLAB_FREE(item, type) \
{ \
((union type ## slab_item *)item)->next_free = type ## _free_list; \
type ## _free_list = (union type ## slab_item *)item; \
}
Then, a couple supporting macros for specific types:
#define SLAB_ITEM_TYPE_DEFINE(type) \
union type ## slab_item { \
type slab_item; \
union type ## slab_item *next_free; \
};
#define SLAB_FREE_LIST_DEFINE(type) \
union type ## slab_item *type ## _free_list = NULL;
#define SLAB_FREE_LIST_DECLARE(type) \
union type ## slab_item *type ## _free_list;
Does anyone recognize any assumptions about x86 that might not fly on an ARM-based Android phone? Eventually what happens is: SLAB_ALLOC() is called and returns something from the list. Then the following code attempts to use the memory, and the application segfaults. This leads me to believe it's accessing invalid memory. It happens unpredictably, but always on the first attempt to use memory that SLAB_ALLOC() returned.
Is it possible you're simply running out of memory? The SLAB_ALLOC macro calls g_malloc which aborts if the allocation fails.