I'm trying to launch a native Android app on Intel Atom Z2560, but it always crashes in the same place with SIGILL ILL_ILLOPN (illegal operand) signal.
The crash doesn't happen with -O0.
The compiler I'm using is GCC 4.8 from NDK r10. I tried to set -march to atom, but that doesn't change anything.
Does anybody know how I can configure my build scripts so that no incompatible code is generated?
This is a known bug in NDK r10; see http://b.android.com/73843 for details. To avoid the issue, either use an older NDK version, or add (something like) this to your Android.mk:
ifeq ($(TARGET_ARCH_ABI), x86)
LOCAL_CFLAGS += -m32 # NDK r10 x86 bug workaround - http://b.android.com/73843
endif
Related
Don't think I didn't search: my Android project (in Eclipse) refuses to recognize the std::chrono library. The include is fine in my header file:
#include <chrono>
But when I want to use it:
using namespace std::chrono;
I get a Symbol 'chrono' could not be resolved error, and all of chrono's functions are unavailable.
So I use NDK r10e, and I added some lines to my Application.mk, which now looks like this:
APP_PLATFORM := android-22
APP_STL := gnustl_static
APP_CPPFLAGS := -std=gnu++11
NDK_TOOLCHAIN_VERSION := 4.8
And in my Android.mk, I add :
LOCAL_CFLAGS += -std=gnu++11
That did not solve my problem. Any ideas? A bad Eclipse configuration?
After the modifications to the .mk files, I did build and rebuild my project.
This is a known problem of GNU libstdc++ in the Android NDK. It's built on top of a very limited libc (Google's Bionic) and thus can't provide full C++ standard library functionality. In particular, std::chrono is almost completely disabled at build time, and not only std::chrono: many other classes and functions are disabled as well, so Google's NDK just doesn't fully support C++.
You can switch to LLVM libc++ (APP_STL := c++_static), but it has experimental status in Google's Android NDK and is actually unstable (i.e. it causes crashes in the application even for completely standard C++ code). This instability has the same root cause as the GNU libstdc++ limitations: it's built on top of a very limited libc.
I'd recommend switching to the CrystaX NDK, an alternative fork of Google's Android NDK, which I started mainly to solve such problems of Google's NDK (non-standard implementations of libc, libc++, etc.). The CrystaX NDK is developed to work as a drop-in replacement for Google's NDK, except that it provides fully standard-conforming low-level libraries. In the CrystaX NDK, both GNU libstdc++ and LLVM libc++ are much more stable and conform to the C++ standard at least as well as they do on GNU/Linux. In particular, std::chrono is fully implemented there and works just fine. Also, in the CrystaX NDK you can use more recent compilers such as gcc-5.3 and clang-3.7, with better support for C++11 and C++14. I'd be happy if it helps you.
When I compile a project with ndk-build using the r10b 64-bit builder, it compiles fine without any problems.
I am able to run the project on Lollipop successfully, and the app runs as it is supposed to.
But when I run the project on Jelly Bean, I get the following error at runtime:
could not load library "libopenvpn.so" needed by
"/data/data/de.blinkt.openvpn/cache/pievpn.armeabi-v7a"; caused by
soinfo_relocate(linker.cpp:987): cannot locate symbol "srandom"
referenced by "libopenvpn.so"...CANNOT LINK EXECUTABLE
When I researched this, I found it's due to using the 64-bit builder, and that the solution is to use the 32-bit builder.
When I use the 32-bit builder, I get the following error during compilation itself:
Android NDK: NDK Application 'local' targets unknown ABI(s): arm64-v8a
x86_64 Android NDK: Please fix the APP_ABI definition in
./jni/Application.mk
/Users/ShajilShocker/Documents/Android/NDK/android-ndk-r10b/build/core/setup-app.mk:112:
*** Android NDK: Aborting . Stop.
So if I omit arm64-v8a and x86_64 it would probably compile, but it seems it won't run on 64-bit devices.
Is it possible to compile the same project first using 32-bit (commenting out the 64-bit architectures), then compile it using 64-bit (uncommenting them), and have it run on both?
Any help is highly appreciated !
Thanks !
64-bit ARM and x86 devices (not sure about MIPS) running Lollipop can execute both 32- and 64-bit native code (ARMv7-A/ARMv8 and x86/x86_64). Android allows you to bundle native code libraries for multiple ABIs (CPU-specific code) into a single APK; these are also called "fat" binaries. For example, to build a fat binary containing both ARMv7-A and ARMv8 code, add the following line to your Application.mk file:
APP_ABI := arm64-v8a armeabi-v7a
Then, in your Android.mk file, you can add specific settings for each CPU type:
ifeq ($(TARGET_ARCH_ABI),armeabi-v7a)
<your custom ARM 32-bit build instructions here>
endif
ifeq ($(TARGET_ARCH_ABI),arm64-v8a)
<your custom ARM 64-bit build instructions here>
endif
When you run a fat binary containing both 32- and 64-bit code on a 32-bit system, it will load the 32-bit code, and vice versa. There shouldn't be any need to conditionally compile code for each target device; that's the purpose of the fat binary: the system automatically loads the library appropriate for the target architecture.
You should use at least NDK Revision 10c to support 64-bit systems, according to the official documentation: https://developer.android.com/about/versions/android-5.0-changes.html#64BitSupport
I'm trying to compile a native Android library using the Intel C++ compiler.
The library compiles without problems using gcc 4.8 (I'm using some C++11 code), but when I set NDK_TOOLCHAIN := x86-icc, it tries to include the STL headers from gcc-4.6.
I've read the intel compiler documentation, but I can't find a way to change the include path on the command line. Also setting NDK_TOOLCHAIN_VERSION to 4.8 or specifying a compiler with -gcc-name has no effect.
Is the path hardcoded into the compiler?
Open the file {ndk}/toolchains/x86-icc/setup.mk and change the variable GCC_TOOLCHAIN_VERSION from 4.6 to 4.8.
At least for my small code sample it worked.
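In other words, after the edit the relevant line of setup.mk should read something like this (variable name as it appears in that file):

```make
# {ndk}/toolchains/x86-icc/setup.mk
GCC_TOOLCHAIN_VERSION := 4.8   # was 4.6
```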
My Android mixed (Java/NDK) project was going along nicely. Then suddenly I got a report that the native library failed to load on Android 1.5.
On the emulator, it reproduced perfectly. There's an UnsatisfiedLinkError thrown on the System.loadLibrary() call. The message says "library can't be found", but it's right there. Earlier versions of the app worked on Android 1.5; but I didn't test every single build on it.
I don't remember introducing any new RTL calls recently. Moreover, I've dumped all my imported symbols with readelf -s mylib.so | grep UND and matched them against readelf -s output for libc.so and libstdc++.so, as pulled from the 1.5 emulator. All the symbols can be found in either of those libraries, with the exception of three:
__cxa_begin_cleanup
__cxa_type_match
__cxa_call_unexpected
But those are weak references. They shouldn't cause an error. And also, I couldn't find them in Android 4.2's libraries either. Where do they reside? Are they indeed to blame?
If those symbols are causing the error, I didn't introduce the dependency on them; the NDK toolchain did. Probably rebuilding against an older version of the NDK libraries would help; is there a way to specify the NDK platform version on a per-CPU_ABI basis? I'm building for all four CPU_ABIs; in android-ndk-r9d\platforms\android-3, only ARM can be found.
EDIT: rebuilding with
ifeq ($(TARGET_ARCH_ABI),armeabi)
TARGET_PLATFORM := android-3
endif
doesn't help. Neither does scratching all but ARM and adding
APP_PLATFORM := android-3
to Application.mk.
We have an Android NDK project that has three different build configurations:
DEBUG - armeabi
DEBUG - armeabi-v7a
RELEASE - armeabi + armeabi-v7a
We specify separate armeabi and armeabi-v7a debug configurations due to a known bug in the Android loader: if more than one EABI is specified, the debugger may launch the wrong EABI version of the app, and no native breakpoints will ever be hit (more details here, at the end of the document).
In the past, we edited the Application.mk file and specified the desired EABI by means of the APP_ABI variable.
We would like to avoid this manual editing and take advantage of Eclipse's Build Configurations and choose the proper EABI setting automatically.
So far, we have a working solution by adding conditionals to the Application.mk file
Here is how our Application.mk looks:
ifeq ($(BUILD_CONFIG),RELEASE)
APP_OPTIM := release
APP_ABI := armeabi armeabi-v7a
else ifeq ($(BUILD_CONFIG),ARMEABIV7A_DEBUG)
APP_OPTIM := debug
APP_ABI := armeabi-v7a
else ifeq ($(BUILD_CONFIG),ARMEABI_DEBUG)
APP_OPTIM := debug
APP_ABI := armeabi
endif
Additionally, we customised the compiler build command line in Eclipse so that the proper BUILD_CONFIG variable is passed to the make script.
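For reference, the customised build command boils down to passing the variable on the ndk-build command line, using one of the configuration names defined above:

```
ndk-build BUILD_CONFIG=ARMEABIV7A_DEBUG
```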
This works very well for compilation purposes, but the problem begins when we try to debug the application. The thing is that we don't know how to pass the BUILD_CONFIG variable to the ndk-gdb script.
Running the ndk-build DUMP_APP_ABI command will always return armeabi (expected, since we are not explicitly defining the BUILD_CONFIG parameter), and as far as I understand, this is the value that ndk-gdb reads in order to decide which version of the application will be launched by the debugger.
Has anyone managed to get this working or have an alternative solution where we can get compilation and debugging working properly with Eclipse's Build Configurations? Running a command that patches or renames the Application.mk file is a possibility, but we don't know how to do that either.
Android 4.0 has a bug: if you provide both armeabi and armeabi-v7a code, the armeabi code is loaded even on an ARMv7-compatible CPU; Android 4.0 ignores armeabi-v7a when armeabi is available.
That is why you can create two versions of your lib, both targeted to armeabi (ARMv5).
But there are hardly any ARMv5 CPUs left (e.g. the HTC Hero); most CPUs are ARMv6 or ARMv7.
You should detect your CPU in Java and load the proper native lib yourself.
Doing this gives you exact control over which .so is loaded.
You would also be able to create a lib with NEON support.
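As a sketch of that last point, assuming a hypothetical module named mylib, an Android.mk could build a plain flavor and a NEON-enabled flavor of the same sources side by side:

```make
include $(CLEAR_VARS)
LOCAL_MODULE    := mylib          # hypothetical module name
LOCAL_SRC_FILES := mylib.c
include $(BUILD_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE    := mylib-neon     # NEON-enabled flavor of the same sources
LOCAL_SRC_FILES := mylib.c
LOCAL_ARM_NEON  := true           # compile the sources with NEON instructions
include $(BUILD_SHARED_LIBRARY)
```

The Java side would then choose which name to pass to System.loadLibrary() based on the detected CPU features.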