Using System.getProperty("os.arch") to check for an armeabi CPU - Android

I'm having the following issue with RenderScript on some old devices running Android 4.2.2 and below (Galaxy S3 Mini, Galaxy Ace 3, Galaxy Fresh, etc.): Android - Renderscript Support Library - Error loading RS jni library.
I want to implement the suggested solution, but what exactly will be the value returned by
System.getProperty("os.arch");
on armeabi devices (as opposed to armeabi-v7a devices)?
Thanks.

System.getProperty is a generic Java method; you can find its documentation here.
On Linux it returns the same value as the command uname -m. The possible values include, for example, armv5t, armv5te, armv5tej, armv5tejl, armv6, armv7, armv7l, i686 and many more. There isn't one exact value for armeabi devices, because it differs slightly from CPU to CPU.
There is a better alternative to System.getProperty: the field Build.CPU_ABI (or Build.SUPPORTED_ABIS on newer devices):
String abi;
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
    abi = Build.CPU_ABI;            // deprecated in API 21
} else {
    abi = Build.SUPPORTED_ABIS[0];  // ABIs are listed in order of preference
}
The possible values are armeabi, armeabi-v7a, arm64-v8a, x86, x86_64, mips, mips64.
As you can see, the number of possible results is much smaller than with System.getProperty, and you can check directly for armeabi.
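For example, a minimal sketch building on the snippet above:
if ("armeabi".equals(abi)) {
    // ARMv5/ARMv6 device: fall back to a code path that avoids RenderScript
} else {
    // armeabi-v7a or newer
}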

Having an LLVM IR library, how to cross-compile it to iOS, Android, Windows and Mac from Ubuntu?

I have representations of all my dependencies and my library in LLVM IR form. How can I cross-compile my library into a shared object for the iOS, Android, Windows and Mac platforms from Linux (Ubuntu, for example)?
Please provide a single example script that compiles an example library with at least one dependency on another library of your choice (for example OpenCV or ZeroMQ 4+) for all four platforms.
Using the LLVM static compiler (llc), you can compile LLVM IR into object files for a specific target triple. Though the target triples are not documented very well, the LLVM infrastructure is all open source, so a quick search through the source code will lead you here.
Unfortunately, there is no documented, discrete list of the possible target triples you can use. However, if you know exactly what system you're targeting, constructing a triple is fairly easy. From the target triple documentation, you can see:
The triple has the general format <arch><sub>-<vendor>-<sys>-<abi>,
where:
arch = x86_64, i386, arm, thumb, mips, etc.
sub = for ex. on ARM: v5, v6m, v7a, v7m, etc.
vendor = pc, apple, nvidia, ibm, etc.
sys = none, linux, win32, darwin, cuda, etc.
abi = eabi, gnu, android, macho, elf, etc.
Once you figure out what target triple you're using, you specify it as a string using the -mtriple flag. Here are some examples:
Windows: -mtriple=i686-pc-win32-gnu
Linux: -mtriple=i686-pc-linux-gnu
IOS: -mtriple=armv7-apple-ios
Android: -mtriple=arm-linux-androideabi
Next, you need to specify that you want to compile an object file, using the -filetype flag:
-filetype=obj
This should be enough if I understand your question correctly.
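Putting the two flags together, a complete invocation might look like this (input.ll and output.o are placeholder file names):
llc -mtriple=arm-linux-androideabi -filetype=obj input.ll -o output.o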
If you're expecting to use a single file on all platforms and operating systems, that is possible in principle, but it would take a lot of work, and I wouldn't expect an answer covering it here on Stack Overflow.
From this link, there is a variable LLVM_TARGETS_TO_BUILD whose definition says:
A semicolon delimited list controlling which targets will be built and linked into llc. This is equivalent to the --enable-targets option in the configure script. The default list is defined as LLVM_ALL_TARGETS, and can be set to include out-of-tree targets. The default value includes: AArch64, AMDGPU, ARM, BPF, Hexagon, Mips, MSP430, NVPTX, PowerPC, Sparc, SystemZ, X86, XCore.
X86 and ARM are already present in that default list; make sure AArch64 is included as well if you need 64-bit ARM support (e.g. for Apple's arm64 devices).
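For example, when configuring your LLVM build with CMake, you could pass something along these lines (a sketch; list whichever targets you actually need):
cmake -DLLVM_TARGETS_TO_BUILD="X86;ARM;AArch64" [options] <PATH_TO_LLVM>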
From this link, it is possible to cross-compile. The example command looks like:
% cmake -G "Ninja" -DCMAKE_OSX_ARCHITECTURES="armv7;armv7s;arm64"
-DCMAKE_TOOLCHAIN_FILE=<PATH_TO_LLVM>/cmake/platforms/iOS.cmake
-DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off
-DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off [options]
<PATH_TO_LLVM>
I would also like to share this link. It says:
The basic option is to define the target architecture. For that, use -target <triple>. If you don't specify the target, CPU names won't match (since Clang assumes the host triple), and the compilation will go ahead, creating code for the host platform, which will break later on when assembling or linking.
The triple has the same general format <arch><sub>-<vendor>-<sys>-<abi> described above.
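For example, a hypothetical Clang invocation cross-compiling a single C file for Android could look like this (foo.c is a placeholder, and <ndk-sysroot> stands for the sysroot shipped with the NDK):
clang -target arm-linux-androideabi --sysroot=<ndk-sysroot> -c foo.c -o foo.o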

Does ACE+TAO/OpenDDS support a 64-bit GCC toolchain?

ACE+TAO: 6.3.2
OpenDDS: 3.11
Host compiler: GCC 5.4
As I cross-compile OpenDDS for Android, I'm looking at ACE_wrappers/build/arm/include/makeinclude/platform_android.GNU, which appears to drive the cross-compiling for ACE, and it appears to build only for ARM-v7a.
The reason I say this is that I'm getting the following error when compiling the auto-generated files in my application (which come from using opendds_idl on the *.idl files); after a bunch of "In file included from" lines, it ends with ...
[exec] /home/me/tools/crystax-ndk/sources/cxx-stl/gnu-libstdc++/5/include/limits:1601:7: internal compiler error: Illegal instruction
[exec] max() _GLIBCXX_USE_NOEXCEPT { return __FLT_MAX__; }
[exec] ^
I've seen something like this before when I've compiled code with some wrong flags for the CPU architecture. So my thinking is that maybe there are some incompatible toolchain settings between the GCC I use for my app and the settings used by ACE+TAO/OpenDDS. The CROSS_COMPILE variable in platform_android.GNU is arm-linux-androideabi-, which as far as I know is a 32-bit toolchain, i.e., arm-v7a, and I see no v8a references. And yet in my app I'm using aarch64-linux-android-5. Should these be compatible? Can the toolchain be changed?
What I'd like to do is build ACE+TAO/OpenDDS/my-application for the target architecture and ABI ... arm64: arm64-v8a and use the NDK toolchain and target ABI ... aarch64-linux-android-5: arm64-v8a.
Thoughts?
This should be possible, but the configuration files are probably outdated. First, update to ACE+TAO 6.3.4, which is the latest release. Second, check the file include/makeinclude/platform_android.GNU and see if your target is there. It could be that some small updates are necessary; if so, please open a pull request at https://github.com/DOCGroup/ACE_TAO with the necessary changes. Search for arm-v7a and check whether a new case for arm-v8a is needed at that spot.

Running Android NDK binary on Linux desktop

I downloaded an APK from the Play Store that contains native code binaries. In the APK there is a lib/x86 folder that supposedly contains a library of native procedures, normally with a .so extension. Since the code is in x86, is it possible to write a Java program that invokes the library on the desktop, even if you don't have the source code for that library? The NDK function just has to accept parameters and return a value. For example, can we write
class AppNativeLoader
{
    // Takes a seed and returns a pseudo-random value from the native library
    public static native long generateRand(long seed);

    static
    {
        System.loadLibrary("AndroidNDKLib");
    }
}

public class WCallTest
{
    public static void main(String[] args)
    {
        long seed = System.currentTimeMillis();
        if (args.length > 0) {
            seed = Long.valueOf(args[0]);
        }
        long rand = AppNativeLoader.generateRand(seed);
        System.out.println(rand);
    }
}
NOTE: This is just an example; the actual environment differs. Using JRE 7 on RHEL, I extracted the x86 .so and placed it in the same directory as the .class file. I still get an UnsatisfiedLinkError. Anything amiss? Assuming there are no callbacks and the function doesn't use any Android APIs, is this possible?
EDIT: I opened the lib in IDA Pro and I saw the following dependencies
.plt:0000B100 ; Needed Library 'liblog.so'
.plt:0000B100 ; Needed Library 'libz.so'
.plt:0000B100 ; Needed Library 'libc.so'
.plt:0000B100 ; Needed Library 'libm.so'
.plt:0000B100 ; Needed Library 'libstdc++.so'
.plt:0000B100 ; Needed Library 'libdl.so'
These should be available in my desktop environment, no?
Not all Linux environments are identical (even crossing distribution boundaries is not guaranteed to work). NDK binaries are built against Bionic and a handful of other Android-specific libraries, whereas your Red Hat system uses glibc and a bunch of other things available from the Red Hat repositories.
tl;dr you can't run Android binaries on desktop Linux.
You can try downloading the needed shared libraries from here (make sure to choose the correct API version and an architecture matching that of the NDK shared library; to find out which shared libraries you need, you can simply use ldd).
Then, to easily access the methods exposed by the shared lib, you can decompile the java code of the app using jadx, and then write your own code around the JNI classes.
Then, to compile your java code, you can use any version of the JDK.
Then, to execute it, you'll have to use a JRE matching the architecture of the NDK shared library (in your case, you'll have to download the 32-bit JRE).
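For illustration, a minimal sketch of the wrapper side (the names here are taken from the question and are hypothetical; the package and class must match the JNI symbols the .so actually exports, which the decompiled sources reveal):
class AppNativeLoader
{
    static
    {
        // Loading by absolute path avoids java.library.path lookup problems.
        System.load("/home/me/extracted/libAndroidNDKLib.so");
    }

    public static native long generateRand(long seed);
}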
However, this is not guaranteed to work: I am currently getting segfaults in the NDK shared library I'm trying to use on my PC, and since most NDK binaries are stripped, debugging is going to be a nightmare.

Checking processor capabilities in android

I'm using FFMPEG in an app, and I'm using the following configuration:
--extra-cflags=' -march=armv7-a -mfloat-abi=softfp -mfpu=neon'
I'm targeting 4.0+, so I believe armv7-a should be supported by most non-Intel devices, and I'm sure the NEON extension is supported on most devices as well, but I'm not sure how I can verify that for all 2000+ devices.
Is there a way to check the processor type and extensions in Android, and/or to limit the apk to devices with certain processors in the Google Play Store?
I'll answer your last question first: you can limit the apk to devices with certain processors on the Play Store, but not with the granularity you're looking for.
In short: if you upload an apk containing native libs only for armv7-a, it won't be downloadable from an x86, mips or armv6 device. But the choice stops there: both NEON and non-NEON devices are considered armv7-a devices, so that does not solve your problem. The check has to be done at runtime.
Checking the processor architecture and capabilities in Java on Android is not easy to do: there's no API for that.
On the other hand, this can easily be done using the NDK, which contains a cpufeatures module (you can find its documentation in your NDK install folder). This module lets you:
find the architecture of the device using android_getCpuFamily()
get additional details using android_getCpuFeatures(): this is what you're looking for, as these details contain the ANDROID_CPU_ARM_FEATURE_NEON flag indicating NEON compatibility!
In practice, implementing a JNI function such as the following one and calling it from Java should do the trick:
#include <jni.h>
#include <cpu-features.h>

jboolean
Java_com_my_namespace_MyClass_isNeon(JNIEnv *env, jclass clazz)
{
    uint64_t features = android_getCpuFeatures();
    if ((android_getCpuFamily() != ANDROID_CPU_FAMILY_ARM) ||
        ((features & ANDROID_CPU_ARM_FEATURE_NEON) == 0)) {
        return JNI_FALSE;
    } else {
        return JNI_TRUE;
    }
}
It will return true if the device is ARMv7 and features NEON, false otherwise!
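On the Java side, the matching declaration would look roughly like this (a sketch; the library name "cpucheck" is an assumption and must match the name of the native module you build):
package com.my.namespace;

public class MyClass
{
    static
    {
        // Loads libcpucheck.so, which contains Java_com_my_namespace_MyClass_isNeon
        System.loadLibrary("cpucheck");
    }

    public static native boolean isNeon();
}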

Interchangeability of compiled LKMs

Is it possible to use a loadable kernel module, compiled for 3.0.8+ mod_unload ARMv5 (my self-built kernel), in a kernel with version 3.0.31-gd5a18e0 SMP preempt mod_unload ARMv7 (Android stock kernel)?
The module itself contains nearly nothing, just:
// Defining __KERNEL__ and MODULE allows us to access kernel-level code not usually available to userspace programs.
#undef __KERNEL__
#define __KERNEL__
#undef MODULE
#define MODULE

// Linux kernel/LKM headers: module.h is needed by all modules and kernel.h is needed for KERN_INFO.
#include <linux/module.h>   // included for all kernel modules
#include <linux/kernel.h>   // included for KERN_INFO
#include <linux/init.h>     // included for __init and __exit macros

MODULE_AUTHOR("martin");
MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    //printk(KERN_INFO "Hello world!\n");
    return 0;   // Non-zero return means that the module couldn't be loaded.
}

static void __exit hello_cleanup(void)
{
    //printk(KERN_INFO "Cleaning up module.\n");
}

module_init(hello_init);
module_exit(hello_cleanup);
I'm forcing the insmod, but then the kernel crashes:
<1>[ 328.025360] Unable to handle kernel NULL pointer dereference at virtual address 00000061
<1>[ 328.025695] pgd = c1be8000
<1>[ 328.025848] [00000061] *pgd=00000000
<0>[ 328.026184] Internal error: Oops: 5 [#1] PREEMPT SMP
<4>[ 328.026519] Modules linked in: airstream_interceptor(+)
I use
CROSS_COMPILE=/home/adminuser/WORKING_DIRECTORY/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin/arm-eabi-
KDIR ?= /home/adminuser/WORKING_DIRECTORY/android-3.0
ARCH=arm
for both building the kernel and now building the module. But the system into which it should be inserted uses its own factory kernel.
I'm trying to build a kernel module which can be used on several Android phones (ARM, ARMv5, ARMv7 and so on), but I want to use one build for all of them (if this is possible in any way).
(edit)
CONCLUSION #1
It should not be possible to compile one version for all ARM devices:
compile an LKM against kernel 3.0.8 for ARMv5 and use it on a kernel 3.0.39 ARMv7.
It may (untested at the moment!) be possible to compile for the lowest level (ARMv5) and use it on higher levels (ARMv6, ARMv7):
compile an LKM against kernel 3.0.8 for ARMv5 and use it on a kernel 3.0.8 ARMv7.
It may be possible to interchange the kernel versions (maybe for a simple LKM):
compile an LKM against kernel 3.0.8 for ARMv5 and use it on a kernel 3.0.39 ARMv5.
Open questions at the moment:
1.)
I tried (with the common kernel 3.0.8 and the omap kernel 3.0.39) to build for ARMv7, but the result is always an ARMv5 LKM.
I manually edited the .config, removed the ARMv5 line and added the ARMv7 lines (which were nowhere in the .config):
#CONFIG_CPU_32v5=y # I added the #
CONFIG_CPU_V7=y # didn't exist
CONFIG_CPU_32v7=y # didn't exist
but if I then re-run "make" on the kernel source, the file gets automatically edited and my v7 config lines are removed.
Some months ago, I remember, this was no problem; I just added the two lines and it worked.
Is this a matter of the kernel source or of the toolchain used?
2.)
What is the difference between e.g. the "omap kernel" and the "common kernel" with regard to LKM building? Just another kernel version (e.g. the common kernel is now at 3.0.53 and the omap kernel at 3.0.39)? Can I "ignore" the specific variants and use the common kernel for LKM compiling?
Many thanks to alkalinity, auselen & Marko at the moment - you're helping me out of the mud.
You can't use the same binary driver with different versions of Linux.
Linux does not have a binary kernel interface, nor does it have a stable kernel interface. (source)
No, this isn't possible. The Linux kernel is architecture-specific, and ARMv5 modules are not compatible with ARMv7. Different header files are needed, and there are different instruction sets, register mappings, and any number of other important variations.
In any event, the kernel versions are different in this case too, which means that the kernel API can vary, and therefore the kernel module will not work in all likelihood, even if the architecture were the same.
You'll have to cross-compile separate versions of your kernel module. This isn't too difficult if you have access to the whole kernel tree. The manufacturer should have released their kernel sources (as per the GPL). If they have not, they owe you sources.
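For reference, cross-compiling an external module against a given kernel tree typically boils down to a kbuild invocation like the following (the placeholders correspond to the KDIR and CROSS_COMPILE variables quoted in the question):
make -C <KDIR> M=$PWD ARCH=arm CROSS_COMPILE=<toolchain-prefix> modules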
If you're interested in reading up on the specifics of loading kernel modules, IBM has a great "anatomy" series of articles. Here's one on loadable kernel modules. Jump to the section on "module loading details" to understand why the kernel rejects insertion of your module in the absence of the force-load.
