I am building an Android project that uses the Android NDK with LibXtract to extract audio features. LibXtract uses the fftw3 library. The project consists of a button that runs a simple example from LibXtract:
JNIEXPORT void JNICALL Java_com_androidnative1_NativeClass_showText(JNIEnv *env, jclass clazz)
{
    float mean = 0, vector[] = {.1, .2, .3, .4, -.5, -.4, -.3, -.2, -.1}, spectrum[10];
    int n, N = 9;
    float argf[4];
    argf[0] = 8000.f;
    argf[1] = XTRACT_MAGNITUDE_SPECTRUM;
    argf[2] = 0.f;
    argf[3] = 0.f;

    xtract[XTRACT_MEAN]((void *)&vector, N, 0, (void *)&mean);

    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "com_androidnative1_NativeClass.c before");
    xtract_init_fft(N, XTRACT_SPECTRUM);
    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "com_androidnative1_NativeClass.c after");

    // Commented out for test purposes
    //xtract_init_bark(1, argf[1], 1);
    //xtract[XTRACT_SPECTRUM]((void *)&vector, N, &argf[0], (void *)&spectrum[0]);
}
The LibXtract function xtract_init_fft, located in jni/libxtract/jni/src/init.c, executes the fftw3 function fftwf_plan_r2r_1d, located in jni/fftw3/jni/api/plan-r2r-1d.c:
__android_log_print(ANDROID_LOG_DEBUG, "AndNat", "libxtract/src/init.c before");
fft_plans.spectrum_plan = fftwf_plan_r2r_1d(N, input, output, FFTW_R2HC, optimisation);
__android_log_print(ANDROID_LOG_DEBUG, "AndNat", "libxtract/src/init.c after");
The application hangs inside fftwf_plan_r2r_1d without a crash or any other error; I have to force it to stop.
fftwf_plan_r2r_1d looks like this:
X(plan) X(plan_r2r_1d)(int n, R *in, R *out, X(r2r_kind) kind, unsigned flags)
{
    __android_log_print(ANDROID_LOG_DEBUG, "AndNat", "fftw3/api/plan-r2r-1d.c");
    return X(plan_r2r)(1, &n, in, out, &kind, flags);
}
In LogCat I can see:
07-16 18:50:09.615: D/AndNat(7313): com_androidnative1_NativeClass.c before
07-16 18:50:09.615: D/AndNat(7313): libxtract/src/init.c before
07-16 18:50:09.615: D/AndNat(7313): fftw3/api/plan-r2r-1d.c
I generated config.h for fftw3 and LibXtract with the gen.sh scripts located in the source folders, without problems. Both libraries are built as static libraries and linked into the shared library libcom_androidnative1_NativeClass.so.
The command
nm -Ca libcom_androidnative1_NativeClass.so
shows that the functions used are included.
The application builds and deploys to the device without any problems.
I built fftw3 with the flags --disable-alloca and --enable-float, and LibXtract with --enable-fft and --disable-dependency-tracking.
The only changes to the library source code were the added debug prints and removing the XTRACT_FFT define from LibXtract, because it couldn't detect the fftw library.
If anybody has any idea about this (to me) strange behaviour, please help.
I have put the entire project on GitHub, so maybe someone can help me figure this out:
https://github.com/bl0ndynek/AndroidNative1
Thanks to the FFTW3 maintainer, the problem is solved.
The solution was to change the optimisation level in FFTW3 from FFTW_MEASURE to FFTW_ESTIMATE (from 1 to 0).
FFTW's planner (invoked in xtract_init_fft) actually executes and times different possible FFT algorithms in order to pick the fastest plan for a given n. To do this in as short a time as possible, the timer must have a very high resolution, and to accomplish this FFTW3 uses the hardware cycle counters that are available on most CPUs but not in Android's default ARM configuration.
So the planner falls back to gettimeofday(), which has low resolution, and on ARM xtract_init_fft took forever.
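For reference, the fix boils down to creating the plan with FFTW_ESTIMATE instead of FFTW_MEASURE inside xtract_init_fft. A minimal sketch of the changed line, keeping the variable names from the init.c snippet above:
/* FFTW_ESTIMATE picks a plan heuristically, so the planner never needs to
   time candidate algorithms with a high-resolution clock. */
fft_plans.spectrum_plan = fftwf_plan_r2r_1d(N, input, output, FFTW_R2HC, FFTW_ESTIMATE);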
It looks to me like you are missing some terminating condition in your recursive function X() which would put you in an infinite loop.
I know that OpenMP is included in the NDK (usage example here: http://recursify.com/blog/2013/08/09/openmp-on-android ). I've done what it says on that page, but when I use #pragma omp for on a simple for loop that scans a vector, the app crashes with the famous "fatal signal 11".
What am I missing here? By the way, I am using a modified example from the Android samples, Tutorial 2 (Mixed Processing). All I want is to parallelize (multithread) some of the for loops and nested for loops that I have in the JNI C++ file while using OpenCV.
Any help/suggestion is appreciated!
Edit: sample code added:
Mat tmp(iheight, iwidth, CV_8UC1);
#pragma omp parallel for
for (int x = 0; x < iheight; x++) {
    for (int y = 0; y < iwidth; y++) {
        int value = (int) buffer[x * iwidth + y];
        tmp.at<uchar>(x, y) = value;
    }
}
Based on this: http://www.slideshare.net/noritsuna/how-to-use-openmp-on-native-activity
Thanks!
I think this is a known issue in GOMP, see Bug 42616 and Bug 52738.
The issue is that your app will crash if you try to use OpenMP directives or functions on a non-main thread. It can be traced back to the gomp_thread() function (see libgomp/libgomp.h, lines 362 and 368), which returns NULL for threads you create yourself:
#ifdef HAVE_TLS
extern __thread struct gomp_thread gomp_tls_data;
static inline struct gomp_thread *gomp_thread (void)
{
    return &gomp_tls_data;
}
#else
extern pthread_key_t gomp_tls_key;
static inline struct gomp_thread *gomp_thread (void)
{
    return pthread_getspecific (gomp_tls_key);
}
#endif
As you can see, GOMP uses a different implementation depending on whether or not thread-local storage (TLS) is available.
If it is available, the HAVE_TLS flag is set and a __thread variable is used to track the state of each thread;
otherwise, the thread-local data is managed via pthread_setspecific().
In earlier versions of the NDK, thread-local storage (the __thread keyword) wasn't supported, so HAVE_TLS isn't defined and pthread_setspecific() is used instead.
Remark: I'm not sure whether __thread is supported in the latest version of the NDK, but here you can read similar answers about Android TLS.
When GOMP creates a worker thread, it sets up the thread-specific data in the function gomp_thread_start() (line 72):
#ifdef HAVE_TLS
thr = &gomp_tls_data;
#else
struct gomp_thread local_thr;
thr = &local_thr;
pthread_setspecific (gomp_tls_key, thr);
#endif
But when the application creates a thread independently, the thread-specific data isn't set, so gomp_thread() returns NULL. This causes the crash; it isn't a problem when TLS is supported, since the __thread variable is always available.
I remember that this issue was fixed in android-ndk-r10d, but only for background processes (no Java). This means that if you enable OpenMP and create a native thread from JNI (i.e. in code called from Java on Android), your app will still crash.
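To make the practical consequence concrete, here is a minimal, hypothetical sketch (the function and parameter names are invented) of the pattern that avoids the crash: the OpenMP region stays on the thread that Java calls into, assumed here to be the main thread, rather than on a pthread created in native code, where gomp_thread() would return NULL:
#include <jni.h>
#include <omp.h>

/* Hypothetical example: the parallel region runs directly in the JNI call,
   assumed to be invoked on the main thread, not inside a thread created
   with pthread_create(), which is the crashing case described above. */
JNIEXPORT void JNICALL
Java_com_example_Worker_halveBuffer(JNIEnv *env, jclass clazz,
                                    jbyteArray data, jint width, jint height)
{
    jbyte *buffer = (*env)->GetByteArrayElements(env, data, NULL);
    int x, y;

    #pragma omp parallel for private(y)
    for (x = 0; x < height; x++) {
        for (y = 0; y < width; y++) {
            buffer[x * width + y] = (jbyte)(buffer[x * width + y] / 2);
        }
    }

    (*env)->ReleaseByteArrayElements(env, data, buffer, 0);
}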
The Android systrace logging system is fantastic, but it only works in the Java portion of the code, through Trace.beginSection() and Trace.endSection(). In a C/C++ NDK (native) portion of the code it can only be used through JNI, which is slow or unavailable in threads without a Java environment...
Is there any way of either adding events to the main systrace trace buffer, or even generating a separate log, from native C code?
This older question mentions atrace/ftrace as being the internal system Android's systrace uses. Can this be tapped into (easily)?
BONUS TWIST: Since tracing calls would often be in performance-critical sections, it should ideally be possible to run the calls after the actual event time. i.e. I for one would prefer to be able to specify the times to log, instead of the calls polling for it themselves. But that would just be icing on the cake.
Posting a follow-up answer with some code, based on fadden's pointers. Please read his/her answer first for the overview.
All it takes is writing properly formatted strings to /sys/kernel/debug/tracing/trace_marker, which can be opened without problems. Below is some very minimal code based on the cutils header and C file. I preferred to re-implement it instead of pulling in any dependencies, so if you care a lot about correctness check the rigorous implementation there, and/or add your own extra checks and error-handling.
This was tested to work on Android 4.4.2.
The trace file must first be opened, saving the file descriptor in an atrace_marker_fd global:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#define ATRACE_MESSAGE_LEN 256
int atrace_marker_fd = -1;
void trace_init()
{
    atrace_marker_fd = open("/sys/kernel/debug/tracing/trace_marker", O_WRONLY);
    if (atrace_marker_fd == -1) { /* do error handling */ }
}
Normal 'nested' traces like the Java Trace.beginSection and Trace.endSection are obtained with:
inline void trace_begin(const char *name)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "B|%d|%s", getpid(), name);
    write(atrace_marker_fd, buf, len);
}

inline void trace_end()
{
    char c = 'E';
    write(atrace_marker_fd, &c, 1);
}
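A hypothetical usage sketch of the helpers above (do_work() is just a stand-in for whatever you want to see in the timeline, and trace_init() must have been called once beforehand):
static void do_work(void)
{
    /* ... the code being measured ... */
}

void render_one_frame(void)
{
    trace_begin("render_one_frame");   /* analogous to Trace.beginSection() */
    do_work();
    trace_end();                       /* analogous to Trace.endSection() */
}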
Two more trace types are available, which are not accessible to Java as far as I know: trace counters and asynchronous traces.
Counters track the value of an integer and draw a little graph in the systrace HTML output. Very useful stuff:
inline void trace_counter(const char *name, const int value)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "C|%d|%s|%i", getpid(), name, value);
    write(atrace_marker_fd, buf, len);
}
Asynchronous traces produce non-nested (i.e. simply overlapping) intervals. They show up as grey segments above the thin thread-state bar in the systrace HTML output. They take an extra 32-bit integer argument that "distinguishes simultaneous events". The same name and integer must be used when ending traces:
inline void trace_async_begin(const char *name, const int32_t cookie)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "S|%d|%s|%i", getpid(), name, cookie);
    write(atrace_marker_fd, buf, len);
}

inline void trace_async_end(const char *name, const int32_t cookie)
{
    char buf[ATRACE_MESSAGE_LEN];
    int len = snprintf(buf, ATRACE_MESSAGE_LEN, "F|%d|%s|%i", getpid(), name, cookie);
    write(atrace_marker_fd, buf, len);
}
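A hypothetical usage sketch for these two (the names are invented): a counter tracking a queue length, plus an async interval whose begin and end happen in different functions but share the same name and cookie:
#include <stdint.h>

void job_enqueued(int queue_len, int32_t job_id)
{
    trace_counter("job_queue_len", queue_len);  /* drawn as a small graph */
    trace_async_begin("job", job_id);           /* interval starts here... */
}

void job_finished(int queue_len, int32_t job_id)
{
    trace_counter("job_queue_len", queue_len);
    trace_async_end("job", job_id);             /* ...and ends here, same name and cookie */
}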
Finally, there indeed seems to be no way of specifying times to log, short of recompiling Android, so this doesn't do anything for the "bonus twist".
I don't think it's exposed from the NDK.
If you look at the sources, you can see that the android.os.Trace class calls into native code to do the actual work. That code calls atrace_begin() and atrace_end(), which are declared in a header in the cutils library.
You may be able to use the atrace functions directly if you extract the headers from the full source tree and link against the internal libraries. However, you can see from the header that atrace_begin() is simply:
static inline void atrace_begin(uint64_t tag, const char* name)
{
    if (CC_UNLIKELY(atrace_is_tag_enabled(tag))) {
        char buf[ATRACE_MESSAGE_LENGTH];
        size_t len;
        len = snprintf(buf, ATRACE_MESSAGE_LENGTH, "B|%d|%s", getpid(), name);
        write(atrace_marker_fd, buf, len);
    }
}
Events are written directly to the trace file descriptor. (Note that the timestamp is not part of the event; that's added automatically.) You could do something similar in your code; see atrace_init_once() in the .c file to see how the file is opened.
Bear in mind that, unless atrace is published as part of the NDK, any code using it would be non-portable and likely to fail in past or future versions of Android. However, as systrace is a debugging tool and not something you'd actually want to ship enabled in an app, compatibility is probably not a concern.
For anybody googling this question in the future:
Native trace events have been supported since API level 23; check out the docs here: https://developer.android.com/topic/performance/tracing/custom-events-native
Following the answer from this Stack Overflow question, how do I create the proper integer for the mask?
I did some googling, and everything I found uses the CPU_SET macro from sched.h, but it operates on cpu_set_t structures, which are undefined when using the NDK. When I try to use CPU_SET, the linker gives me an undefined reference error (even though I link against pthread).
Well, in the end I found a version taken directly from sched.h. I'm posting it here in case anyone has the same problem and doesn't want to spend time searching for it. It is quite useful.
#define CPU_SETSIZE 1024
#define __NCPUBITS (8 * sizeof (unsigned long))

typedef struct
{
    unsigned long __bits[CPU_SETSIZE / __NCPUBITS];
} cpu_set_t;

#define CPU_SET(cpu, cpusetp) \
    ((cpusetp)->__bits[(cpu)/__NCPUBITS] |= (1UL << ((cpu) % __NCPUBITS)))
#define CPU_ZERO(cpusetp) \
    memset((cpusetp), 0, sizeof(cpu_set_t))
This works well when the parameter type in the original setCurrentThreadAffinityMask (from the post mentioned in the question) is simply replaced with cpu_set_t.
I would like to draw your attention to the fact that the function from the link in the first post does not set the thread CPU affinity; it sets the process CPU affinity. Of course, if you have one thread in your application it works well, but it is wrong when you have several threads. See the sched_setaffinity() description, for example at http://linux.die.net/man/2/sched_setaffinity
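As a per-thread variant, here is a minimal sketch (my own, not taken from the linked post) that reuses the cpu_set_t/CPU_ZERO/CPU_SET definitions above and goes through the raw syscalls, which also sidesteps missing declarations in older NDKs; passing the calling thread's tid, rather than getpid(), makes sched_setaffinity affect only that thread:
#include <string.h>        /* memset, used by the CPU_ZERO macro above */
#include <unistd.h>
#include <sys/syscall.h>

/* Sketch: pin the calling thread (not the whole process) to one CPU. */
static int set_current_thread_affinity(int cpu)
{
    cpu_set_t mask;
    pid_t tid = (pid_t) syscall(__NR_gettid);   /* this thread's kernel id */

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);

    return (int) syscall(__NR_sched_setaffinity, tid, sizeof(mask), &mask);
}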
Try adding this before your #include <sched.h>:
#define _GNU_SOURCE
I am using the official Android port of SDL 1.3 to set up the GLES2 renderer. It works on most devices, but for one user it does not. The log output shows the following error:
error of type 0x500 glGetIntegerv
I looked up 0x500, and it refers to GL_INVALID_ENUM. I've tracked the problem down to the following code inside the SDL library (the full source is quite large and I cut out the logging and basic error-checking lines, so let me know if I haven't included enough information here):
glGetIntegerv( GL_NUM_SHADER_BINARY_FORMATS, &nFormats );
glGetBooleanv( GL_SHADER_COMPILER, &hasCompiler );
if( hasCompiler )
    ++nFormats;
rdata->shader_formats = (GLenum *) SDL_calloc( nFormats, sizeof( GLenum ) );
rdata->shader_format_count = nFormats;
glGetIntegerv( GL_SHADER_BINARY_FORMATS, (GLint *) rdata->shader_formats );
Immediately after the last line (the glGetIntegerv for GL_SHADER_BINARY_FORMATS), glGetError() returns GL_INVALID_ENUM.
The problem is that the GL_ARB_ES2_compatibility extension is not properly supported on that system.
The GL_INVALID_ENUM means that the driver does not recognise the GL_NUM_SHADER_BINARY_FORMATS and GL_SHADER_BINARY_FORMATS enums, which are part of said extension.
In contrast, GL_SHADER_COMPILER was recognised, which is strange.
You can try the GL_ARB_get_program_binary extension and use these two enums instead:
#define GL_NUM_PROGRAM_BINARY_FORMATS 0x87fe
#define GL_PROGRAM_BINARY_FORMATS 0x87ff
Note that these are different from:
#define GL_SHADER_BINARY_FORMATS 0x8df8
#define GL_NUM_SHADER_BINARY_FORMATS 0x8df9
But they should pretty much do the same.
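A minimal sketch of what that query could look like with the alternate enums (using the #defines above and the SDL allocator already used in the snippet; this is an illustration, not a tested SDL patch):
GLint numFormats = 0;

/* Query with the program-binary enums instead of the shader-binary ones. */
glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &numFormats);
if (glGetError() == GL_NO_ERROR && numFormats > 0) {
    GLint *formats = (GLint *) SDL_calloc(numFormats, sizeof(GLint));
    glGetIntegerv(GL_PROGRAM_BINARY_FORMATS, formats);
    /* ... inspect or store the formats, then release them ... */
    SDL_free(formats);
}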
I built a simple method like the one below:
wchar_t buf[1024] = {};

void logDebugInfo(wchar_t* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vswprintf( buf, sizeof(buf), fmt, args);
    va_end(args);
}

jstring Java_com_example_hellojni_HelloJni_stringFromJNI( JNIEnv* env,
                                                          jobject thiz )
{
    logDebugInfo(L"test %s, %d..", L"integer", 10);
    return (*env)->NewStringUTF(env, buf);
}
I got the following warning:
In function 'Java_com_example_hellojni_HelloJni_stringFromJNI':
warning: passing argument 1 of 'logDebugInfo' from incompatible pointer type
note: expected 'wchar_t *' but argument is of type 'unsigned int *'
And the resulting string was not correct.
If I removed the L prefix from the format string then, weirdly, it worked. But L prefixes are used everywhere in my legacy code.
First, I know wchar_t is not very portable and is compiler-specific. I expected the size of wchar_t to be 16 bits. I read some other posts saying it's 32 bits on Android, but wchar.h, provided by the official NDK, says wchar_t == char. Really?
From NDK r5b docs/STANDALONE-TOOLCHAIN.html:
5.2/ wchar_t support:
- - - - - - - - - - -
As documented, the Android platform did not really support wchar_t until
Android 2.3. What this means in practical terms is that:
- If you target platform android-9 or higher, the size of wchar_t is
4 bytes, and most wide-char functions are available in the C library
(with the exception of multi-byte encoding/decoding functions and
wsprintf/wsscanf).
- If you target any prior API level, the size of wchar_t will be 1 byte
and none of the wide-char functions will work anyway.
We recommend any developer to get rid of any dependencies on the wchar_t type
and switch to better representations. The support provided in Android is only
there to help you migrate existing code.
Since you are targeting Android 1.6, it looks as if wchar_t is not suitable for you.
Even in the Android 2.3 platform ("android-9"), there are still notes in a number of places, including wchar.h, which imply that wchar_t is one byte and none of the wide character library functions are implemented. This suggests that the implementation may still be dodgy, so I would be very cautious about using wchar_t on any Android version.
If you're looking for an alternative, I have found UTFCPP to be an excellent and very lightweight library.
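If the goal is just to get a formatted string back into Java, a minimal sketch of a plain-char replacement for the original logDebugInfo (dropping wchar_t entirely, as the NDK notes above recommend) could look like this; the names mirror the question's code:
#include <stdarg.h>
#include <stdio.h>
#include <jni.h>

static char buf[1024];

/* Same idea as the original helper, but with char/vsnprintf so the result
   is plain UTF-8, which is what NewStringUTF expects. */
static void logDebugInfo(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
}

jstring Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv *env, jobject thiz)
{
    logDebugInfo("test %s, %d..", "integer", 10);   /* no L prefixes needed */
    return (*env)->NewStringUTF(env, buf);
}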
This is a bit old, but I've hit this while searching for a solution.
It seems that the NDK (r8d for me) still does not support wsprintf:
see the issue
and the code.
In my case I am using libjson (considering switching to yajl) for iOS/Android shared native code.
Until I switch libraries, my workaround for the NDK is this:
double value = 0.5; // for example
std::wstringstream wss;
wss << value;
return json_string(wss.str());
I've read that streams are slower than the C functions, and if you need a pure C (and not C++) solution it doesn't help, but maybe someone will find this useful.