I'm using FFmpeg in an app, and I'm using the following configuration:
--extra-cflags=' -march=armv7-a -mfloat-abi=softfp -mfpu=neon'
I'm targeting 4.0+, so I believe armv7-a should be supported by most non-Intel devices, and I'm fairly sure the NEON extension is supported on most devices as well, but I'm not sure how I can verify that for all 2000+ devices.
Is there a way to check the processor type and extensions in Android, and/or to limit the APK on the Google Play Store to devices with certain processors?
I'll answer your last question first: you can limit the APK to devices with certain processors on the Play Store, but not with the granularity you're looking for.
In short: if you upload an APK containing native libs only for armv7-a, it won't be downloadable from an x86, MIPS, or ARMv6 device. But the choice stops there: both NEON and non-NEON devices are considered armv7-a devices, so that does not solve your problem. The check has to be done at runtime.
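For reference, restricting the APK to armv7-a native libs is a one-line setting if you build with ndk-build (a minimal Application.mk sketch; Play then filters devices by the ABIs present in the APK):

# Application.mk: build and package native libs only for armv7-a
APP_ABI := armeabi-v7a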
Checking the processor architecture and capabilities in Java on Android is not easy to do: there's no API for that.
On the other hand, this can easily be done using the NDK, which contains a cpufeatures module (you can find documentation on it in your NDK install folder). This module lets you:
find the architecture of the device using android_getCpuFamily()
get additional details using android_getCpuFeatures(): this is what you're looking for, as these details contain the ANDROID_CPU_ARM_FEATURE_NEON flag indicating NEON compatibility!
In practice, implementing a JNI function such as the following one and calling it from Java should do the trick:
#include <jni.h>
#include <stdint.h>
#include <cpu-features.h>

/* Returns JNI_TRUE if the device has an ARM CPU that supports NEON. */
JNIEXPORT jboolean JNICALL
Java_com_my_namespace_MyClass_isNeon(JNIEnv *env, jclass clazz)
{
    uint64_t features = android_getCpuFeatures();
    if ((android_getCpuFamily() != ANDROID_CPU_FAMILY_ARM) ||
        ((features & ANDROID_CPU_ARM_FEATURE_NEON) == 0)) {
        return JNI_FALSE;
    } else {
        return JNI_TRUE;
    }
}
It will return true if the device is ARMv7 and features NEON, and false otherwise!
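For completeness, here is a minimal sketch of the Java side matching the JNI function name above (the package and class come from that name; "mynative" is a placeholder for your own module name, and the cpufeatures module must be linked in Android.mk via LOCAL_STATIC_LIBRARIES := cpufeatures plus $(call import-module,android/cpu-features)):

package com.my.namespace;

public class MyClass {
    static {
        // "mynative" is a placeholder: use your actual native library name
        System.loadLibrary("mynative");
    }

    // implemented in C by Java_com_my_namespace_MyClass_isNeon() above
    public static native boolean isNeon();
}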
I'm trying to compile libfuse with the NDK. My environment:
Win10 (64-bit) + NDK (r14b, 64-bit) + libfuse (3.1.0)
The error occurs in fuse_common.h, where it checks the size of off_t:
$ ndk-build
[armeabi-v7a] Compile thumb : fuse <= buffer.c
In file included from jni/../../libfuse/lib/buffer.c:15:
In file included from jni/../../libfuse/lib/fuse_i.h:9:
In file included from jni/../../libfuse/include\fuse.h:19:
jni/../../libfuse/include/fuse_common.h:745:13: error: bit-field
'_fuse_off_t_must_be_64bit' has negative width (-1)
{ unsigned _fuse_off_t_must_be_64bit:((sizeof(off_t) == 8) ? 1 : -1); };
^
1 error generated.
make: *** [obj/local/armeabi-v7a/objs/fuse/__/__/libfuse/lib/buffer.o] Error 1
Here's the check in fuse_common.h:
struct _fuse_off_t_must_be_64bit_dummy_struct \
{ unsigned _fuse_off_t_must_be_64bit:((sizeof(off_t) == 8) ? 1 : -1); };
I searched on Google and found the _FILE_OFFSET_BITS=64 definition, which can be used to change the size of off_t. I have it defined in my Android.mk file:
LOCAL_CFLAGS := \
....
-D_FILE_OFFSET_BITS=64 \
....
I even added this line at the beginning of fuse_common.h:
#define _FILE_OFFSET_BITS 64
It's still not working. How can I fix it?
Update to NDK r15c. _FILE_OFFSET_BITS=64 works from there on out.
Note that most off64_t system calls weren't available until android-21. If your minSdkVersion is set below that and you use _FILE_OFFSET_BITS=64, many functions will not be available.
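In practice that means bumping the platform level alongside the define. A minimal sketch, assuming the ndk-build system:

# Application.mk: raise the minimum platform so the off64_t syscalls exist
APP_PLATFORM := android-21

# Android.mk: request the 64-bit off_t
LOCAL_CFLAGS += -D_FILE_OFFSET_BITS=64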
NOTE: The solution provided here is more of a workaround; see @Dan's answer above for the reliable and official way to get a 64-bit off_t.
On Android, off_t is always 32 bits wide, and there is no preprocessor macro that controls its size. (This holds only for older NDKs, since modern Bionic allows configuring the off_t size at compile time.) Because of this, you cannot compile your library directly.
But I guess there is a way to work around it. The Android NDK offers a non-POSIX extended type, off64_t, and it also provides a complementary set of library functions that accept it instead of off_t. They are distinguished by a 64 suffix, e.g. lseek64(), mmap64(). So to make things work, you may try to add a global configuration header to your project:
/* make off_t 64 bits wide */
typedef off64_t off_t;

/* use the appropriate versions of the system functions; list here only
   the functions that take off_t parameters and are used by your library */
#define mmap mmap64
#define lseek lseek64
And of course, keep in mind that the compiled code is now linked against the *64() functions instead of the regular ones, and that any public interfaces expect off64_t instead of off_t.
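If you don't want to edit each libfuse source file to pull in this header, one option (an untested sketch; off64_compat.h is a hypothetical name for the header above) is to force-include it from Android.mk via the compiler's -include flag:

# force the compat header into every translation unit of this module
LOCAL_CFLAGS += -include $(LOCAL_PATH)/off64_compat.h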
I have representations of all my dependencies and my library in LLVM IR form. How do I cross-compile my library into a shared object for the iOS, Android, Windows, and Mac platforms from Linux (Ubuntu, for example)?
Please provide a single example script that would compile an example library with at least one dependency on another library of your choice (for example OpenCV or ZeroMQ 4+) to all 4 platforms.
Using the LLVM static compiler (llc), you can compile LLVM IR into object files for a specific target triple. Though the target triples are not documented very well, the LLVM infrastructure is all open source, so a quick search through the source code will lead you here.
Unfortunately, there is no documentation with a discrete list of the possible target triples you can use. However, if you know exactly what system you're targeting, constructing a triple is fairly easy. Taken from the target triple documentation, you can see:
The triple has the general format <arch><sub>-<vendor>-<sys>-<abi>,
where:
arch = x86_64, i386, arm, thumb, mips, etc.
sub = for ex. on ARM: v5, v6m, v7a, v7m, etc.
vendor = pc, apple, nvidia, ibm, etc.
sys = none, linux, win32, darwin, cuda, etc.
abi = eabi, gnu, android, macho, elf, etc.
Once you figure out what target triple you're using, you specify it as a string using the -mtriple flag. Here are some examples:
Windows: -mtriple=i686-pc-win32-gnu
Linux: -mtriple=i686-pc-linux-gnu
iOS: -mtriple=armv7-apple-ios
Android: -mtriple=arm-linux-androideabi
Next, you need to specify that you want to compile an object file, using the -filetype flag:
-filetype=obj
This should be enough if I understand your question correctly.
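For example, a complete invocation targeting Android could look like the following (mylib.ll is a hypothetical input file; producing the final .so still requires a platform linker):

llc -mtriple=arm-linux-androideabi -filetype=obj mylib.ll -o mylib.o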
If you're expecting to use a single file on all platforms and operating systems: while this is possible, it would take a lot of work, and I wouldn't expect an answer covering that here on Stack Overflow.
From this link, there is a variable LLVM_TARGETS_TO_BUILD, and the definition says:
A semicolon delimited list controlling which targets will be built and linked into llc. This is equivalent to the --enable-targets option in the configure script. The default list is defined as LLVM_ALL_TARGETS, and can be set to include out-of-tree targets. The default value includes: AArch64, AMDGPU, ARM, BPF, Hexagon, Mips, MSP430, NVPTX, PowerPC, Sparc, SystemZ, X86, XCore.
You should check that X86 and ARM are present in it; you also need to add support for the 64-bit and Apple targets.
From this link, it is possible to cross-compile. The example command looks like:
% cmake -G "Ninja" -DCMAKE_OSX_ARCHITECTURES="armv7;armv7s;arm64"
-DCMAKE_TOOLCHAIN_FILE=<PATH_TO_LLVM>/cmake/platforms/iOS.cmake
-DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off
-DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off [options]
<PATH_TO_LLVM>
I would also like to share this link. It says:
The basic option is to define the target architecture. For that, use -target <triple>. If you don't specify the target, CPU names won't match (since Clang assumes the host triple), and the compilation will go ahead, creating code for the host platform, which will break later on when assembling or linking.
The triple has the general format <arch><sub>-<vendor>-<sys>-<abi>, where:
arch = x86_64, i386, arm, thumb, mips, etc.
sub = for ex. on ARM: v5, v6m, v7a, v7m, etc.
vendor = pc, apple, nvidia, ibm, etc.
sys = none, linux, win32, darwin, cuda, etc.
abi = eabi, gnu, android, macho, elf, etc.
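For example, compiling a source file directly for Android with Clang could look like this (hello.c is a hypothetical source file, and the --sysroot path is an assumption based on the older NDK layout; without a sysroot, headers and libraries won't resolve):

clang -target arm-linux-androideabi --sysroot=$NDK/platforms/android-21/arch-arm -c hello.c -o hello.o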
I downloaded an APK from the Play Store that contains native code binaries. In the APK file there is a lib/x86 folder that supposedly contains a library with native procedures, normally with a .so extension. Since the code is x86, is it possible to write a Java program to invoke the library on the desktop, even if you don't have the source code for that library? The NDK function just has to accept parameters and return a value. For example, can we write
class AppNativeLoader
{
    // declared to match the call below: takes and returns a long
    public static native long generateRand(long seed);

    static
    {
        System.loadLibrary("AndroidNDKLib");
    }
}

public class WCallTest
{
    public static void main(String[] args)
    {
        long seed = System.currentTimeMillis();
        if (args.length > 0) {
            seed = Long.valueOf(args[0]);
        }
        long rand = AppNativeLoader.generateRand(seed);
        System.out.println(rand);
    }
}
NOTE: This is just an example; the actual environment differs. Using JRE 7 on RHEL, I extracted the x86 .so and placed it in the same directory as the .class file. I still get an UnsatisfiedLinkError. Anything amiss? Assuming there are no callbacks and the function doesn't use any Android APIs, is this possible?
EDIT: I opened the lib in IDA Pro and saw the following dependencies:
.plt:0000B100 ; Needed Library 'liblog.so'
.plt:0000B100 ; Needed Library 'libz.so'
.plt:0000B100 ; Needed Library 'libc.so'
.plt:0000B100 ; Needed Library 'libm.so'
.plt:0000B100 ; Needed Library 'libstdc++.so'
.plt:0000B100 ; Needed Library 'libdl.so'
These should be available in my desktop environment, no?
Not all Linux environments are identical (even crossing distribution boundaries is not guaranteed to work). NDK binaries are built against Bionic and a handful of other Android specific libraries, whereas your RedHat system uses glibc and a bunch of other things available from the RedHat repositories.
tl;dr you can't run Android binaries on desktop Linux.
You can try downloading the needed shared libraries from here (make sure to choose the correct API version and an architecture matching that of the NDK shared library; to find out which shared libraries you need, you can simply use ldd).
Then, to easily access the methods exposed by the shared lib, you can decompile the Java code of the app using jadx and write your own code around the JNI classes.
Then, to compile your Java code, you can use any version of the JDK.
Then, to execute it, you'll have to use a version of the JRE matching the architecture of the NDK shared library (in your case, you'll have to download the 32-bit JRE).
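For example, assuming the downloaded Android libraries, the extracted NDK library, and your compiled classes all sit in the current directory (the class name is the hypothetical one from the question), the invocation would look something like:

LD_LIBRARY_PATH=. java -Djava.library.path=. WCallTest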
However, this is not guaranteed to work: I am currently getting segfaults in the NDK shared library I'm trying to use on my PC, and since most NDK binaries are stripped, debugging is going to be a nightmare.
I'm having the following issue with RenderScript on some old 4.2.2- devices (Galaxy S3 Mini, Galaxy Ace 3, Galaxy Fresh, etc.): Android - Renderscript Support Library - Error loading RS jni library.
I want to implement the suggested solution, but what exactly will be the value returned by
System.getProperty("os.arch");
for armeabi devices (not armeabi-v7a devices)?
Thanks.
The method System.getProperty is a generic Java method; you can find the documentation here.
On Linux it returns the same value obtained from the command uname -m. The possible values include, for example, armv5t, armv5te, armv5tej, armv5tejl, armv6, armv7, armv7l, i686, and many more. There isn't a single exact value for armeabi devices, because it differs slightly from CPU to CPU.
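As a rough illustration, a sketch only (the exact strings vary from device to device):

String arch = System.getProperty("os.arch");
// e.g. "armv5tejl" or "armv6l" on typical armeabi-only devices,
// "armv7l" on most armeabi-v7a devices
boolean preArmV7 = arch.startsWith("armv5") || arch.startsWith("armv6");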
There is a better alternative to System.getProperty: the field Build.CPU_ABI (or Build.SUPPORTED_ABIS on newer devices):
String abi = null;
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
    abi = Build.CPU_ABI;
} else {
    abi = Build.SUPPORTED_ABIS[0];
}
The possible values are armeabi, armeabi-v7a, arm64-v8a, x86, x86_64, mips, mips64.
As you can see, the number of possible results is much smaller than with System.getProperty, and you can check directly for armeabi.
I'm building a demo with the Bullet physics engine library for an Android phone (NDK).
From version 2.81, the Bullet physics engine supports ARM NEON optimization, but only for Apple devices.
My question is: how do I enable ARM NEON for Android?
The flag for ARM NEON is defined in the btScalar.h file; the code is as below:
#if (defined (__APPLE__) && (!defined (BT_USE_DOUBLE_PRECISION)))
    #if defined (__i386__) || defined (__x86_64__)
        #define BT_USE_SSE
        #define BT_USE_SSE_IN_API
    #elif defined( __armv7__ )
        #ifdef __clang__
            #define BT_USE_NEON 1
            #if defined BT_USE_NEON && defined (__clang__)
                #include <arm_neon.h>
……
As we can see in the code, the flag BT_USE_NEON is only defined when compiling for an Apple device. If I drop this condition and define the flag myself, errors occur when compiling, something like: bad alignment -- vld1.f32 {d26},[r4:128].
What should I do to enable ARM NEON in my demo?
I had the same issue a few days ago. :)
The problem was the assembly code defined in btVector3 (vld1q_f32_aligned_postincrement). As far as I know, syntax such as [r3, :128] is used in GAS; I guess it is used in the iOS environment, but I'm not sure. Modifying it to [%1, #128] may remove those errors.
By the way, in my experience it is usually slower than the plain implementation. I think the Bullet NEON intrinsics are not optimized for Android, as you can see from the (probably hand-optimized) assembly code on the other side (defined for APPLE).