I have C code that prints with something clever like
printf("hello ");
// do something
printf(" world!\n");
which outputs
hello world!
I want to reuse that code with Android and iOS, but Log.d() and NSLog() effectively add a newline at the end of every string I pass them, so that the output of this code:
NSLog(@"hello ");
// do something
NSLog(@"world!\n");
comes out (more or less) as:
hello
world!
I'm willing to replace printf with some macro to make Log.d and NSLog emulate printf's handling of '\n'; any suggestions?
One solution that might work is to define a global log function that doesn't flush its buffer until it finds a newline.
Here's a (very) simple version in Java for Android:
import android.util.Log;

class CustomLogger {
    private static final StringBuilder buffer = new StringBuilder();

    public static void log(String message) {
        buffer.append(message);
        if (message.indexOf('\n') != -1) {
            Log.d("SomeTag", buffer.toString());
            buffer.setLength(0);
        }
    }
}
...
CustomLogger.log("Hello, ");
// Stuff
CustomLogger.log("world!\n"); // Now the message gets logged
It's completely untested but you get the idea.
This particular implementation has some performance issues; it might be better to check just whether the last character is a newline, for example.
I just realized that you wanted this in C. It shouldn't be too hard to port, though a standard library wouldn't hurt (to get something like a string buffer).
For posterity, this is what I did: store logged strings in a buffer, and print the part before the newline whenever there is a newline in the buffer.
Yes, the NDK logcat is dumb about it. There are ways to redirect stderr/stdout to logcat, but they have drawbacks: either you need "adb shell setprop", which is only for rooted devices, or a dup()-like technique, and creating a thread just for that purpose is not a good idea on embedded devices IMHO (though you can look further below for this technique).
So I did my own function/macros for that purpose. Here are snippets. In a debug.c, do this:
#include "debug.h"
#include <stdio.h>
#include <stdarg.h>
static const char LOG_TAG[] = "jni";
void android_log(android_LogPriority type, const char *fmt, ...)
{
static char buf[1024];
static char *bp = buf;
va_list vl;
va_start(vl, fmt);
int available = sizeof(buf) - (bp - buf);
int nout = vsnprintf(bp, available, fmt, vl);
if (nout >= available) {
__android_log_write(type, LOG_TAG, buf);
__android_log_write(ANDROID_LOG_WARN, LOG_TAG, "previous log line has been truncated!");
bp = buf;
} else {
char *lastCR = strrchr(bp, '\n');
bp += nout;
if (lastCR) {
*lastCR = '\0';
__android_log_write(type, LOG_TAG, buf);
char *rest = lastCR+1;
int len = bp - rest; // strlen(rest)
memmove(buf, rest, len+1); // no strcpy (may overlap)
bp = buf + len;
}
}
va_end(vl);
}
Then in debug.h do this:
#include <android/log.h>
void android_log(android_LogPriority type, const char *fmt, ...);
#define LOGI(...) android_log(ANDROID_LOG_INFO, __VA_ARGS__)
#define LOGW(...) android_log(ANDROID_LOG_WARN, __VA_ARGS__)
...
Now you just need to include debug.h and call LOGI() with printf-like semantics, buffered until a '\n' is encountered (or the buffer is full).
This is not perfect though: if the string generated by a single call is longer than the buffer, it will be truncated and output. But frankly, 1024 chars should be enough in most cases. Anyway, if this happens, a warning is output so you know about it.
Also note that vsnprintf() was not in C89 (it was standardized in C99), but it works in the Android NDK. We could use vsprintf() instead, but it is unsafe on its own.
======================================================================
Now, for the dup() technique, you can look here (James Moore's answer).
Then you can get rid of the function above and define your macro as:
#define LOG(...) fprintf(stderr, __VA_ARGS__)
and you're done (see the sketch after the lists below).
Advantages:
C/C++ libraries often use stderr for their logs. Using dup() is the only way to get their output into logcat without modifying their code (some big ones contain hundreds of direct calls to fprintf(stderr, ...)).
stderr is standard C and has been used for decades. All standard C library functions related to streams can be used with it. The same goes for C++: you can even use cerr with the << operator. It works because, under the hood, it is still stderr.
Very long lines are not truncated (instead, they are split). A good reason to use a shorter buffer (256 in the example).
Disadvantages:
A thread of its own (though it's an IO-only thread, so its impact is close to nothing)
No log priority (INFO, WARN, ERROR, etc.) can be chosen at the call site. It uses a default one (INFO), so DDMS will always show stderr lines in the same color.
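Here's a minimal sketch of that redirection technique, reconstructed from the description above rather than copied from the linked answer (function names are mine; error handling omitted):

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <android/log.h>

static int pfd[2];
static pthread_t log_thread;

/* Drains the pipe and forwards each chunk to logcat. */
static void *stderr_pump(void *arg)
{
    char buf[256]; /* short on purpose: long lines get split, not truncated */
    ssize_t n;
    while ((n = read(pfd[0], buf, sizeof(buf) - 1)) > 0) {
        if (buf[n - 1] == '\n')
            n--;               /* logcat adds its own newline */
        buf[n] = '\0';
        __android_log_write(ANDROID_LOG_INFO, "jni", buf);
    }
    return NULL;
}

/* Call once at startup, e.g. from JNI_OnLoad. */
int redirect_stderr_to_logcat(void)
{
    setvbuf(stderr, NULL, _IOLBF, 0); /* flush stderr on every newline */
    if (pipe(pfd) == -1)
        return -1;
    dup2(pfd[1], fileno(stderr));     /* stderr now feeds the pipe */
    return pthread_create(&log_thread, NULL, stderr_pump, NULL);
}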
You could always just build the string one segment at a time:
String message = "Hello";
// Do Something
message += " World!";
Log.v("Example", message);
I've been looking at libbinder.so, specifically IPCThreadState.cpp, line 781
In libbinder.so, writes are serialized using this line: TextOutput::Bundle _b(alog); which locks a mutex.
The "call tree" for the writes is:
alog << "Sending commands to driver: " << indent;
template<typename T>
TextOutput& operator<<(TextOutput& to, const T& val)
{
    std::stringstream strbuf;
    strbuf << val;
    std::string str = strbuf.str();
    to.print(str.c_str(), str.size());
    return to;
}
status_t BufferedTextOutput::print(const char* txt, size_t len)

virtual status_t writeLines(const struct iovec& vec, size_t N)
{
    //android_writevLog(&vec, N); <-- this is now a no-op
    if (N != 1) ALOGI("WARNING: writeLines N=%zu\n", N);
    ALOGI("%.*s", (int)vec.iov_len, (const char*) vec.iov_base);
    return NO_ERROR;
}

#define ALOGI(x...) fprintf(stderr, "svcmgr: " x)
I understand how writes to log are serialized within libbinder.so but how are writes serialized between multiple .so libraries?
libbinder.so writes to stderr but surely there are other libs that also write to stderr.
The lock is required to synchronize the parts of the << chain; fprintf() is synchronized by the system.
It's going to come down to how the underlying libc (Bionic on Android, not glibc as commonly found on Linux) implemented stdout and stderr.
On Linux, glibc line-buffers stdout (when it is attached to a terminal); that is, when any thread writes something to stdout followed by a \n, that line will be printed intact, not interleaved. In contrast, stderr is not buffered: it's output immediately, which means that two threads writing to stderr simultaneously can cause the output to be interleaved.
On Android I think (I am not an Android programmer) it's different. From what I can tell, stdout and stderr are directed through to something called logcat. Kinda makes sense: there's no terminal on which stdout and stderr are displayed, so why not have them hoovered up by some service? I'm speculating, but I strongly suspect that all Bionic does with stderr and stdout is write the data down an IPC pipe, with logcat at the other end.
The thing about pipes in the Linux kernel is that writes to a pipe are atomic (for writes of up to 4 kbytes). So, as long as the application thread's output to stdout or stderr results in a single call to write() on that IPC pipe, it will be atomic and therefore not interleaved. So if a thread in the application calls something like fprintf(stderr, "%i %s %c\n", var, str, c) and Bionic builds a string which it then submits to the IPC pipe with a single write(logcatpipe, buf, len), then you're good.
I re-emphasise that this is mere speculation; it might help move things forward though. But if it is correct, then you won't get interleaving no matter how many threads simultaneously write to stderr or stdout.
EDIT
This S.O. Question might be useful. The solutions there do seem to involve pipes, so if used then the pipe writes will be atomic, and you won't get interleaving.
I know that OpenMP is included in the NDK (usage example here: http://recursify.com/blog/2013/08/09/openmp-on-android). I've done what that page says, but when I use #pragma omp for on a simple for loop that scans a vector, the app crashes with the famous "fatal signal 11".
What am I missing here? BTW, I'm using a modified example from the Android samples, Tutorial 2 (Mixed Processing). All I want is to parallelize (multithread) some of the for loops and nested for loops that I have in the JNI C++ file while using OpenCV.
Any help/suggestion is appreciated!
Edit: sample code added:
Mat tmp(iheight, iwidth, CV_8UC1);
#pragma omp parallel for
for (int x = 0; x < iheight; x++) {
    for (int y = 0; y < iwidth; y++) {
        int value = (int) buffer[x * iwidth + y];
        tmp.at<uchar>(x, y) = value;
    }
}
Based on this: http://www.slideshare.net/noritsuna/how-to-use-openmp-on-native-activity
Thanks!
I think this is a known issue in GOMP, see Bug 42616 and Bug 52738.
The issue is that your app will crash if you try to use OpenMP directives or functions on a non-main thread. It can be traced back to the gomp_thread() function (see libgomp/libgomp.h, lines 362 and 368), which returns NULL for threads you create yourself:
#ifdef HAVE_TLS
extern __thread struct gomp_thread gomp_tls_data;
static inline struct gomp_thread *gomp_thread (void)
{
    return &gomp_tls_data;
}
#else
extern pthread_key_t gomp_tls_key;
static inline struct gomp_thread *gomp_thread (void)
{
    return pthread_getspecific (gomp_tls_key);
}
#endif
As you can see, GOMP uses a different implementation depending on whether or not thread-local storage (TLS) is available.
If it is available, then the HAVE_TLS flag is set, and a __thread variable is used to track the state of each thread;
otherwise, the thread-local data is managed via pthread_setspecific.
In earlier NDK versions, thread-local storage (the __thread keyword) wasn't supported, so HAVE_TLS wasn't defined and therefore pthread_setspecific was used.
Remark: I'm not sure whether __thread is supported in the latest version of the NDK, but here you can read some answers about Android TLS.
When GOMP creates a worker thread, it sets up the thread-specific data in the function gomp_thread_start() (line 72):
#ifdef HAVE_TLS
    thr = &gomp_tls_data;
#else
    struct gomp_thread local_thr;
    thr = &local_thr;
    pthread_setspecific (gomp_tls_key, thr);
#endif
But when the application creates a thread independently, the thread-specific data isn't set, so the gomp_thread() function returns NULL. This is what causes the crash. It isn't a problem when TLS is supported, since the __thread variable is always available.
I remember that this issue was fixed in android-ndk-r10d, but only for background processes (no Java). That means that if you enable OpenMP and create a native thread from JNI (i.e., code called from Java on Android), your app will still crash.
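To make the failure mode concrete, here is a minimal hypothetical repro of the pattern described above (an app-created pthread entering an OpenMP region); on an affected GOMP build, gomp_thread() returns NULL right at the pragma:

#include <pthread.h>

/* Runs on a thread GOMP knows nothing about. */
static void *worker(void *arg)
{
    /* Without TLS, gomp_thread() returns NULL here, because
       pthread_setspecific() was never called for this thread. */
    #pragma omp parallel for
    for (int i = 0; i < 100; i++) {
        /* ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(&t, NULL);
    return 0;
}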
The Android systrace logging system is fantastic, but it only works in the Java portion of the code, through Trace.beginSection() and Trace.endSection(). In a C/C++ NDK (native) portion of the code it can only be used through JNI, which is slow or unavailable in threads without a Java environment...
Is there any way of either adding events to the main systrace trace buffer, or even generating a separate log, from native C code?
This older question mentions atrace/ftrace as being the internal system Android's systrace uses. Can this be tapped into (easily)?
BONUS TWIST: Since tracing calls would often be in performance-critical sections, it should ideally be possible to run the calls after the actual event time. i.e. I for one would prefer to be able to specify the times to log, instead of the calls polling for it themselves. But that would just be icing on the cake.
Posting a follow-up answer with some code, based on fadden's pointers. Please read his/her answer first for the overview.
All it takes is writing properly formatted strings to /sys/kernel/debug/tracing/trace_marker, which can be opened without problems. Below is some very minimal code based on the cutils header and C file. I preferred to re-implement it instead of pulling in any dependencies, so if you care a lot about correctness check the rigorous implementation there, and/or add your own extra checks and error-handling.
This was tested to work on Android 4.4.2.
The trace file must first be opened, saving the file descriptor in an atrace_marker_fd global:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>   /* snprintf */
#include <unistd.h>  /* write, getpid */
#define ATRACE_MESSAGE_LEN 256
int atrace_marker_fd = -1;
void trace_init()
{
    atrace_marker_fd = open("/sys/kernel/debug/tracing/trace_marker", O_WRONLY);
    if (atrace_marker_fd == -1) { /* do error handling */ }
}
Normal 'nested' traces like the Java Trace.beginSection and Trace.endSection are obtained with:
inline void trace_begin(const char *name)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "B|%d|%s", getpid(), name);
    write(atrace_marker_fd, buf, len);
}

inline void trace_end()
{
    char c = 'E';
    write(atrace_marker_fd, &c, 1);
}
Two more trace types are available, which are not accessible to Java as far as I know: trace counters and asynchronous traces.
Counters track the value of an integer and draw a little graph in the systrace HTML output. Very useful stuff:
inline void trace_counter(const char *name, const int value)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "C|%d|%s|%i", getpid(), name, value);
    write(atrace_marker_fd, buf, len);
}
Asynchronous traces produce non-nested (i.e. simply overlapping) intervals. They show up as grey segments above the thin thread-state bar in the systrace HTML output. They take an extra 32-bit integer argument that "distinguishes simultaneous events". The same name and integer must be used when ending traces:
inline void trace_async_begin(const char *name, const int32_t cookie)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "S|%d|%s|%i", getpid(), name, cookie);
    write(atrace_marker_fd, buf, len);
}

inline void trace_async_end(const char *name, const int32_t cookie)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "F|%d|%s|%i", getpid(), name, cookie);
    write(atrace_marker_fd, buf, len);
}
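Putting it together, a hypothetical call site using the helpers above might look like this (assumes trace_init() was called once at startup):

void decode_one_frame(void)
{
    trace_begin("decode_frame");          /* nested section, like Java's Trace.beginSection */
    trace_counter("frames_in_flight", 3); /* draws a graph in the systrace HTML output */
    /* ... actual work ... */
    trace_end();
}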
Finally, there indeed seems to be no way of specifying times to log, short of recompiling Android, so this doesn't do anything for the "bonus twist".
I don't think it's exposed from the NDK.
If you look at the sources, you can see that the android.os.Trace class calls into native code to do the actual work. That code calls atrace_begin() and atrace_end(), which are declared in a header in the cutils library.
You may be able to use the atrace functions directly if you extract the headers from the full source tree and link against the internal libraries. However, you can see from the header that atrace_begin() is simply:
static inline void atrace_begin(uint64_t tag, const char* name)
{
    if (CC_UNLIKELY(atrace_is_tag_enabled(tag))) {
        char buf[ATRACE_MESSAGE_LENGTH];
        size_t len;
        len = snprintf(buf, ATRACE_MESSAGE_LENGTH, "B|%d|%s", getpid(), name);
        write(atrace_marker_fd, buf, len);
    }
}
Events are written directly to the trace file descriptor. (Note that the timestamp is not part of the event; that's added automatically.) You could do something similar in your code; see atrace_init_once() in the .c file to see how the file is opened.
Bear in mind that, unless atrace is published as part of the NDK, any code using it would be non-portable and likely to fail in past or future versions of Android. However, as systrace is a debugging tool and not something you'd actually want to ship enabled in an app, compatibility is probably not a concern.
For anybody googling this question in the future.
Native trace events have been supported since API level 23; check out the docs here: https://developer.android.com/topic/performance/tracing/custom-events-native
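For reference, a minimal sketch of that NDK API (functions from <android/trace.h>, available from API level 23; link against libandroid):

#include <android/trace.h>

void process_frame(void)
{
    ATrace_beginSection("process_frame"); /* counterpart of Java's Trace.beginSection */
    /* ... work ... */
    ATrace_endSection();
}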
I'd like to convert an ASCII char* to a wchar_t* in C++ on Linux without using mbstowcs(). On iOS and Windows, this works perfectly. On Android, however, mbstowcs seems to convert things quite literally, one-to-one. Even using different variations of setlocale(), I've been unable to convert successfully.
I might end up just converting it manually on Android by copying one byte and filling the rest with zeroes. But is this proper for ASCII? Are the first 256 characters of UTF-32/Unicode the same as the ASCII (ISO 8859-1/ISO Latin-1) character set?
To make things a bit clearer:
ASCII is a character encoding that uses the values 0..127 to encode its characters.
Latin-1 is another character set that extends ASCII by using the values 128..255 to encode its own additional characters.
Indeed, on most architectures a byte is 8 bits, so there are still 128 values available when storing ASCII characters in bytes.
Several different character sets were thus designed to extend ASCII for the values 128..255. By happy accident, the one referred to as Latin-1 was used for the first 256 code points of Unicode (as pointed out by BoBTFish). So if you have a string of chars that you know is encoded in Latin-1, you can just assign each value to a wchar_t (which will ensure correct "zero filling" with regard to endianness on your architecture), and it will be a valid wstring of Unicode code points corresponding to the same characters. The consumer of your wstring then has to interpret its content as Unicode code points.
But as soon as you cannot guarantee that the encoding of the original string is Latin-1, you will run into problems (e.g., a UTF-8 encoded string does not map byte-for-byte onto Latin-1).
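For illustration, here is a small sketch of that per-value assignment (the function name is mine). Note the iteration as unsigned char: a plain char may be signed, and the values 128..255 would otherwise sign-extend into bogus code points:

#include <string>

// Widen a Latin-1 (or pure ASCII) string into Unicode code points.
std::wstring widen_latin1(const std::string &src)
{
    std::wstring out;
    out.reserve(src.size());
    for (unsigned char c : src)  // unsigned: avoid sign extension
        out.push_back(static_cast<wchar_t>(c));
    return out;
}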
If you don't mind taking an STL dependency and using string and wstring instead of raw char * and wchar_t * pointers, you can use a function like the following to perform string conversions:
template<typename TARGET, typename SOURCE>
TARGET convertString(const SOURCE &s)
{
    TARGET result;
    result.assign(s.begin(), s.end());
    return result;
}
Use this as follows:
#include <string>
#include <iostream>

using namespace std;

int main()
{
    wstring wstr(L"HELLO WORLD");
    string str(convertString<string, wstring>(wstr));
    cout << str << endl;
    return 0;
}
This performs a character-by-character conversion and is platform-independent. This has been tested on Windows using GCC 4.7.3 and Visual C++ 2012 as well as on Linux using GCC 4.7.3.
The following code can be shortened using std::wstring_convert:
#include <string>
#include <locale>

std::string convert(std::wstring str, std::locale loc = std::locale(),
                    std::mbstate_t state = std::mbstate_t())
{
    const wchar_t* a;
    char *b;
    std::string res;
    res.resize(str.size());
    auto bytes = std::use_facet<std::codecvt<wchar_t, char, std::mbstate_t>>(loc)
                     .out(state, &str[0], &str[str.size()], a,
                          &res[0], &res[res.size()], b);
    return res;
}

int main()
{
    std::wstring a = L"abcdef";
    std::string b = convert(a);
}
I'm building an Android project where I use the Android NDK with LibXtract to extract audio features. LibXtract uses the fftw3 library. The project consists of a button which runs a simple example from libxtract:
JNIEXPORT void JNICALL Java_com_androidnative1_NativeClass_showText(JNIEnv *env, jclass clazz)
{
    float mean = 0, vector[] = {.1, .2, .3, .4, -.5, -.4, -.3, -.2, -.1}, spectrum[10];
    int n, N = 9;
    float argf[4];
    argf[0] = 8000.f;
    argf[1] = XTRACT_MAGNITUDE_SPECTRUM;
    argf[2] = 0.f;
    argf[3] = 0.f;

    xtract[XTRACT_MEAN]((void *)&vector, N, 0, (void *)&mean);

    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "com_androidnative1_NativeClass.c before");
    xtract_init_fft(N, XTRACT_SPECTRUM);
    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "com_androidnative1_NativeClass.c after");

    // Commented out for test purposes
    //xtract_init_bark(1, argf[1], 1);
    //xtract[XTRACT_SPECTRUM]((void *)&vector, N, &argf[0], (void *)&spectrum[0]);
}
The LibXtract function xtract_init_fft, located in jni/libxtract/jni/src/init.c, executes the fftw3 function fftwf_plan_r2r_1d, located at jni/fftw3/jni/api/plan-r2r-1d.c:
__android_log_print(ANDROID_LOG_DEBUG, "AndNat", "libxtract/src/init.c before");
fft_plans.spectrum_plan = fftwf_plan_r2r_1d(N, input, output, FFTW_R2HC, optimisation);
__android_log_print(ANDROID_LOG_DEBUG, "AndNat", "libxtract/src/init.c after");
The application hangs inside fftwf_plan_r2r_1d without a crash or any other error; I must force it to stop.
fftwf_plan_r2r_1d looks like:
X(plan) X(plan_r2r_1d)(int n, R *in, R *out, X(r2r_kind) kind, unsigned flags)
{
    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "fftw3/api/plan-r2r-1d.c");
    return X(plan_r2r)(1, &n, in, out, &kind, flags);
}
From logcat I can see:
07-16 18:50:09.615: D/AndNat(7313): com_androidnative1_NativeClass.c before
07-16 18:50:09.615: D/AndNat(7313): libxtract/src/init.c before
07-16 18:50:09.615: D/AndNat(7313): fftw3/api/plan-r2r-1d.c
I generated config.h for fftw3 and libxtract with the gen.sh scripts located in the source folders, with success. Both libraries are built as static libraries and linked into the shared library libcom_androidnative1_NativeClass.so.
Command
nm -Ca libcom_androidnative1_NativeClass.so
shows that the function in question is included.
The application builds and deploys to the device without any problems.
I built fftw3 with the flags --disable-alloca and --enable-float, and LibXtract with the flags --enable-fft and --disable-dependency-tracking.
The only changes to the library source code were the added debug prints and the removal of the XTRACT_FFT define from LibXtract, because it couldn't detect the fftw library.
If somebody has any idea about this (to me) strange behavior, please help.
I've put the entire project on GitHub so maybe someone can help me work it out:
https://github.com/bl0ndynek/AndroidNative1
Thanks to the FFTW3 maintainer, the problem is solved.
The solution was to change the optimization level from FFTW_MEASURE to FFTW_ESTIMATE (from 1 to 0) in FFTW3.
FFTW's planner (invoked in xtract_init_fft) actually executes and times different possible FFT algorithms in order to pick the fastest plan for a given n. To do this in as short a time as possible, however, the timer must have a very high resolution, and to accomplish this FFTW3 employs the hardware cycle counters that are available on most CPUs, but not in Android's default ARM configuration.
So the planner falls back to gettimeofday(), which has low resolution, and on ARM it took forever inside xtract_init_fft.
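For clarity, a sketch of the fix at the plan-creation call site (the wrapper function is mine; it mirrors the plan-r2r-1d snippet above):

#include <fftw3.h>

/* FFTW_ESTIMATE picks a plan heuristically instead of timing candidate
   algorithms, so it does not depend on a high-resolution timer. */
fftwf_plan make_spectrum_plan(int N, float *in, float *out)
{
    return fftwf_plan_r2r_1d(N, in, out, FFTW_R2HC, FFTW_ESTIMATE);
}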
It looks to me like you are missing some terminating condition in your recursive function X() which would put you in an infinite loop.