I cannot find any references with a detailed explanation of how JNI works on Android, so:
Since every Android application runs in its own process, with its own instance of the Dalvik/ART virtual machine, I think that the native code will be executed in the same process, am I right?
I read that when the VM invokes a function, it passes a JNIEnv pointer, a jobject pointer, and any Java arguments declared by the Java method.
But how does this work at the assembly level (under the hood)?
I read that you can instantiate objects, call methods, and so on, like Reflection, using the functions provided by the JNIEnv. Therefore, my question is: do I have "direct" memory access to the VM, or do I always have to use the JNIEnv's functions?
The Android JVM is under the Apache license, so the most detailed and precise description can be found in the form of source code. Note that there are two different VMs: Dalvik and ART. Under the hood they are very different, to the extent that a user of JNI may have to consider special adaptations.
the native code will be executed in the same process
Exactly. Note that an Android app can run in more than one process, and also it can spawn child processes (normal Unix behavior). But JNI is not IPC.
how is this made at assembly level?
More or less, this is described in a related question: What does a JVM have to do when calling a native method?
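To make that concrete, here is a minimal sketch of what such a native method looks like on the C++ side. The class and method names (com.example.MainActivity, add) are hypothetical; the point is the two extra parameters the VM passes in before the Java arguments.

#include <jni.h>

// Hypothetical native counterpart of a Java method declared as:
//   package com.example;
//   class MainActivity { public native int add(int a, int b); }
// The VM looks the symbol up by name and calls it with the JNIEnv pointer,
// the receiver ("this") as a jobject, and then the Java arguments, using the
// platform's ordinary C calling convention.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_MainActivity_add(JNIEnv* env, jobject thiz, jint a, jint b) {
    // Primitive arguments arrive as plain machine integers; no marshalling needed.
    return a + b;
}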
do I have "direct" memory access to the VM?
Yes, you have. There is no security barrier between your C code and the JVM. You can reverse engineer the data structures and do whatever you like. The exact implementation of the JVM not only depends on the Android version, but may also be modified without notice by the vendor, as long as the public API of the JVM (including JNI) stays compatible. The chances that you will do something useful with direct memory access to the JVM are minimal, but the risk that it will crash is very high.
Note that this is not a security issue: your C code is running in a separate process (together with your Java code) and is subject to the same permission restrictions as the Java code. It has no access to the private memory of other apps or processes. Whatever you change in your instance of the JVM will not affect the VMs that run other apps.
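For completeness, "using the JNIEnv's functions" looks roughly like this. A minimal sketch, assumed to run inside a native method that already received the env pointer; java.lang.StringBuilder is just a convenient stand-in for any class you might drive reflection-style.

#include <jni.h>

// Create a java.lang.StringBuilder and call methods on it through the JNIEnv
// function table instead of touching VM internals.
static jstring BuildGreeting(JNIEnv* env) {
    jclass cls = env->FindClass("java/lang/StringBuilder");
    if (cls == nullptr) return nullptr;   // a ClassNotFoundException is now pending

    jmethodID ctor   = env->GetMethodID(cls, "<init>", "()V");
    jmethodID append = env->GetMethodID(cls, "append",
                                        "(Ljava/lang/String;)Ljava/lang/StringBuilder;");
    jmethodID to_str = env->GetMethodID(cls, "toString", "()Ljava/lang/String;");

    jobject sb   = env->NewObject(cls, ctor);            // new StringBuilder()
    jstring text = env->NewStringUTF("hello from C++");
    env->CallObjectMethod(sb, append, text);              // sb.append(text)
    return (jstring)env->CallObjectMethod(sb, to_str);    // sb.toString()
}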
I have a C library which I'm cross-compiling to use in Android & iOS apps.
It makes use of memcpy() and mktime() so I want to know if these functions are implicitly thread-safe when used in multi-threaded environments.
iOS apps compiled with modern Xcode and Android libraries compiled with modern Android NDK use a clang compiler which is LLVM-based.
I've reviewed the following questions, but have been unable to find a definitive answer:
Is memcpy process-safe?
Are functions in the C standard library thread safe?
POSIX requires of conforming implementations that all functions it standardizes be thread safe, with the exception of a relatively short list of functions. memcpy() and mktime() are both covered by POSIX, and neither is on the list of exceptions, so POSIX requires them to be thread safe (but read on).
Note well, however, that this is not a matter of the compiler used, but rather of the C library that supports your application. I recall Apple's C libraries being non-conforming in some areas. Nevertheless, there's nothing in particular about memcpy() and mktime() that makes them inherently risky from a thread safety perspective. That is, there's no reason to expect that they access any shared data, except any provided to them via their arguments.
And there's the rub. You can rely on memcpy() and mktime() not to, say, rely internally on static data, but POSIX's requirement for thread safety does not extend to working as documented in the face of data races you create through choice of arguments. Thus, for example, if two different threads call memcpy(), and the target region of one call overlaps either the source or target region of the other, then you need some flavor of synchronization between the threads.
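A minimal sketch of that last point, with names of my own choosing: memcpy() itself needs no lock, but two threads copying into the same destination region must be synchronized by the caller.

#include <cstring>
#include <mutex>
#include <thread>

static char shared_buf[64];
static std::mutex buf_mutex;   // protects shared_buf

// Two threads copying into the same destination constitute a data race unless
// the calls are synchronized externally; memcpy() itself adds no locking.
static void writer(const char* src, size_t len) {
    std::lock_guard<std::mutex> lock(buf_mutex);
    std::memcpy(shared_buf, src, len);
}

int main() {
    std::thread t1(writer, "first message", 14);
    std::thread t2(writer, "second message", 15);
    t1.join();
    t2.join();
}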
Whether memcpy() is thread-safe might be debatable.
I would say that memcpy() is indeed thread-safe. It doesn't rely on (global) state that could get mangled by multiple instances of memcpy() running at once. This, however, doesn't mean that there is some magic preventing a memory area that is concurrently the copy destination of multiple threads doing memcpy() from getting badly mangled; i.e. the copy process as a whole is not atomic. You have to take care of that yourself, e.g. with mutexes, to ensure atomicity.
mktime() is trivially thread-safe, since it doesn't use static buffers, global state, or the like. The manpage mentions a few functions from that family that are not thread-safe (those have corresponding *_r functions), but mktime() is not amongst them.
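To illustrate the difference those *_r variants make, here is a small sketch (the helper name noon_today is mine): mktime() operates on the struct tm you hand it and POSIX requires it to be thread-safe, whereas plain localtime() returns a pointer to a static buffer shared between threads, which is why localtime_r() exists.

#include <time.h>

time_t noon_today(time_t now) {
    struct tm local;
    localtime_r(&now, &local);   // fills our own buffer; no shared static state exposed
    local.tm_hour = 12;
    local.tm_min = 0;
    local.tm_sec = 0;
    return mktime(&local);       // converts the caller-owned struct tm back to time_t
}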
In the given Android stack, excluding applications written using the NDK at the LIBRARIES layer, I learnt that any app written at the APPLICATIONS layer must run in its own process, inside its own Dalvik VM instance, as shown below:
As per the process stack above, I see that the Dalvik runtime talks to the HAL/kernel layer via the bionic libc library.
My question:
Can't the Dalvik VM access the HAL/kernel layer without using the bionic libc library interface?
I see some confusion in you when it comes to the C language.
After all, bionic is just the standard C library for Android.
First of all, the answer to your question is no.
Although bionic libc is not below Dalvik in their diagram, the fact is that Dalvik uses libc helper functions to perform some tasks that are considered operating system services, like:
string handling, mathematical computations, input/output processing, memory allocation, and several other operating system services.
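To give a feel for what "operating system services" means here, this is a small illustration of the kind of bionic libc calls involved; the specific calls are my own example, not lifted from the Dalvik source.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Illustration only: the kinds of libc services a runtime leans on.
// Each wrapper ultimately ends in a kernel system call
// (open -> openat(2), read -> read(2), malloc -> brk/mmap(2)).
int main() {
    char* buf = static_cast<char*>(malloc(128));   // memory allocation service
    int fd = open("/proc/version", O_RDONLY);      // file I/O service
    if (buf != nullptr && fd >= 0) {
        ssize_t n = read(fd, buf, 127);
        if (n > 0) {
            buf[n] = '\0';
            printf("kernel: %s", buf);             // formatted output service
        }
    }
    if (fd >= 0) close(fd);
    free(buf);
    return 0;
}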
I have some confusion about the life cycle of native code in Android apps. I have seen references that say that the native code is executed inside the Dalvik VM, but is that true? I was under the impression that the VM only runs Dalvik bytecode. On the other hand, the native code uses JNI, which is called from Java inside the VM. Lastly, does the use of NativeActivity make any difference?
I thought I was understanding the NDK fairly well, until I sat down and tried to explain it to myself. I'm not even sure that I'm asking the question in a sensible manner.
I have seen references that say that the native code is executed inside the Dalvik VM, but is that true?
It executes inside a process that contains a Dalvik VM. Personally, I would not describe it as executing inside the VM -- as you say, Dalvik bytecode executes inside the VM. "Under the control of the Dalvik VM" would be better phrasing, IMHO. Of course, it boils down to your definition of "in", I suppose.
Lastly, does the use of NativeActivity make any difference?
Not really, insofar as NativeActivity is implemented in Java. While you may not have any Java, Java is still lightly involved in the act of running your native code.
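To make that concrete: even a "pure native" app built with the NDK's android_native_app_glue helper still has the Java NativeActivity class behind it, forwarding lifecycle callbacks that arrive in native code as APP_CMD_* events. A rough sketch, with the event handling trimmed down:

#include <android_native_app_glue.h>

// The Java-side NativeActivity forwards onCreate(), onPause(), etc. to the
// glue library, which turns them into APP_CMD_* events handled here.
static void handle_cmd(struct android_app* app, int32_t cmd) {
    // React to APP_CMD_INIT_WINDOW, APP_CMD_PAUSE, and friends here.
}

void android_main(struct android_app* app) {
    app->onAppCmd = handle_cmd;

    int events;
    struct android_poll_source* source;
    while (!app->destroyRequested) {
        // Block until an event arrives, then let the glue dispatch it.
        if (ALooper_pollAll(-1, nullptr, &events,
                            reinterpret_cast<void**>(&source)) >= 0 &&
            source != nullptr) {
            source->process(app, source);
        }
    }
}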
This is about Android. The situation:
A C++ library and Java wrapper classes, plus native functions (JNI) for working with the C++ classes from the library. When common Java code needs a C++ object, it creates the corresponding Java wrapper object, which creates the C++ object through a native function and remembers the pointer to the native object in a 'long' variable. In all subsequent actions the wrapper passes this pointer to the native functions, and so on.
The problem:
How do we release all allocated C++ objects at the end? Currently every wrapper class has a 'finalize' method that calls a native function to release the C++ object, but Android doesn't guarantee that 'finalize' will be called! On the other hand, the C++ library normally has no idea how many C++ objects, and of what types, have been allocated by the Java code.
What will happen to the remaining allocated memory when our Java application terminates? Will Android automatically release the whole heap used by the native library when the OS unloads the library?
At the end of the process lifetime, all process memory (both the Java and the C++ heap) will be freed and reclaimed by the system. One thing, though: closing an Android activity does not necessarily end the process. I'm not sure what the process shutdown policy is there.
On the other hand, relying on the garbage collection and finalize() sounds like solid design to me. You claim - "Android does not guarantee finalize()". Do you have a cite for that? 'Cause if it comes with a disclaimer of "when the object is freed as a part of process shutdown...", then we're still good.
And if you're super-paranoid, you can write your own malloc()/free()/realloc() wrappers, store a list of all allocated objects, and introduce a cleanup function that walks the list and frees them all. The containing Java objects, however, might end up in a weird zombie state where the memory has been freed from under them. This is a tricky proposition that is very easy to get wrong. So I'd still say: have faith in the garbage collector. Lack thereof would be... disturbing.
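For what it's worth, that "own wrapper plus cleanup walk" idea would look roughly like the sketch below; all names are invented for illustration and nothing here is an Android API.

#include <cstdlib>
#include <mutex>
#include <unordered_set>

namespace tracked {

static std::unordered_set<void*> live;   // every pointer we handed out
static std::mutex lock;                  // allocations may come from many threads

void* alloc(std::size_t size) {
    void* p = std::malloc(size);
    if (p != nullptr) {
        std::lock_guard<std::mutex> g(lock);
        live.insert(p);
    }
    return p;
}

void release(void* p) {
    std::lock_guard<std::mutex> g(lock);
    if (live.erase(p) != 0) std::free(p);
}

// The cleanup function that "walks the list and frees them all". Any Java
// wrapper still holding one of these pointers becomes a dangling reference,
// which is exactly the zombie-state risk described above.
void release_all() {
    std::lock_guard<std::mutex> g(lock);
    for (void* p : live) std::free(p);
    live.clear();
}

}  // namespace tracked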
Due to the difference in paradigms, you have to incorporate explicit destruction into your Java objects that are implemented under the hood using C++ resources. So a close() or other such method. The same issue comes up with the JNI, so answers to those questions will apply to you:
Force Java to call my C++ destructor (JNI)
As for the memory issue on closing, it's generally best in my opinion to not rely on this. If you get to a clean state, valgrind and such can make sure you weren't leaking.
But from a technical standpoint, since Android is based on Linux, I'd imagine it does the usual thing and frees all the memory when the process closes. Taking advantage of that can make your program exit faster than explicitly freeing memory (for experts only, who use other methods to ensure the program stays correct and isn't leaking at runtime).
We are using JNI and we had a problem like that.
Actually, the problem resided in the fact that we were overloading finalize() to do the clean up. We solved our problems by removing our finalize() and creating a clean() instead. This clean() calls the JNI function that does the appropriate deletes (and set the C++ pointers to null, just in case). We call clean() just as you would in C++ with delete (e.g. when the variable goes out of scope).
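The native half of that pattern can be as small as the sketch below; the class and method names (com.example.Engine, nativeCreate, nativeDestroy) are hypothetical. The Java wrapper would hold the returned jlong and call nativeDestroy() from its clean() method.

#include <jni.h>

// Hypothetical C++ object wrapped by a Java class com.example.Engine that
// stores the pointer in a long field and exposes nativeCreate()/nativeDestroy().
class Engine {
public:
    void start() { /* ... */ }
};

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_Engine_nativeCreate(JNIEnv*, jobject) {
    // Hand the pointer back to Java as an opaque 64-bit handle.
    return reinterpret_cast<jlong>(new Engine());
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_Engine_nativeDestroy(JNIEnv*, jobject, jlong handle) {
    // Called from clean(); deleting nullptr is harmless, so a second call
    // after Java has zeroed its handle is safe.
    delete reinterpret_cast<Engine*>(handle);
}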
That worked for us. I hope it works for you. Good luck!
Not really an immediate source code question per se... but I'm looking into doing some casual Android programming, nothing heavy.
It seems to use a lot of XML and Java... What I wonder, though, is this: Android itself is written mostly in C and XML (along with C++ and Java) and is closely related to the Linux OS, so why is the "main" language for programming on Android Java?
Just out of curiosity of course.
The "main" language, as you called it, is Java. You can use C/C++ via the NDK, but you won't need it unless you are doing some special stuff. If you wonder when you would need to use C/C++, take a look at the official documentation:
When to Develop in Native Code
The NDK will not benefit most applications. As a developer, you need to balance its benefits against its drawbacks; notably, using native code does not result in an automatic performance increase, but always increases application complexity. In general, you should only use native code if it is essential to your application, not just because you prefer to program in C/C++.
Typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on. Simply re-coding a method to run in C usually does not result in a large performance increase. When examining whether or not you should develop in native code, think about your requirements and see if the Android framework APIs provide the functionality that you need. The NDK can, however, be an effective way to reuse a large corpus of existing C/C++ code.
The Android framework provides two ways to use native code:
Write your application using the Android framework and use JNI to access the APIs provided by the Android NDK. This technique allows you to take advantage of the convenience of the Android framework, but still allows you to write native code when necessary. You can install applications that use native code through the JNI on devices that run Android 1.5 or later.
Write a native activity, which allows you to implement the lifecycle callbacks in native code. The Android SDK provides the NativeActivity class, which is a convenience class that notifies your native code of any activity lifecycle callbacks (onCreate(), onPause(), onResume(), etc.). You can implement the callbacks in your native code to handle these events when they occur. Applications that use native activities must be run on Android 2.3 (API Level 9) or later.
You cannot access features such as Services and Content Providers natively, so if you want to use them or any other framework API, you can still write JNI code to do so.
I am just guessing, but Java is a bit easier to program in than C/C++, so it is more attractive to new programmers, which is also good for the success of the platform itself.
Another reason might be that an application written in Java runs in a separate VM, so it can be controlled much more easily by Android. If a VM is not responding, the main OS can just kill it and the phone keeps responding.
Suaron... From a stability point of view, Java apps should be less likely to take down the device. So Java || C# || C++/CLI is safer than C++ vs C vs assembly. To this end the API is in Java, and so most apps are in Java.
On the other hand C/C++ gets closer to the hardware and is more appropriate for writing libraries that interact with hardware. It is much easier to shoot yourself in the foot with C++.
JAL