I just read that Android got a 450% performance improvement because it added a JIT compiler. I know what a JIT is, but I don't really understand why it is faster than normally compiled code, or how it differs from the platform's older approach (running compiled bytecode in an interpreter, Java-style).
Thanks!
EDIT: This is hugely interesting, thanks! I wish I could pick every answer as correct :)
First a disclaimer, I'm not at all familiar with Android. Anyway...
There are two applications of JIT compilation that I am familiar with. One is to convert bytecodes into actual machine instructions. The second is superoptimisation.
JIT bytecode compilation speeds things up because the bytecodes are translated into machine instructions once, instead of being interpreted each time they are executed. This is probably the sort of optimisation you are seeing.
JIT superoptimisation, which searches for the truly optimal set of instructions to implement a program's logic, is a little more esoteric. It's probably not what you're talking about, though I have read reports of 100-200% speed increases as a result.
The VM needs to turn compiled bytecode into machine instructions to run. Previously this was done using an interpreter, which is fine for code that is only invoked once but suboptimal for functions that are called repeatedly.
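For example (a minimal sketch, not Android-specific), consider a method whose body runs millions of times. An interpreter has to decode every bytecode on every iteration, while a JIT translates the method to machine code once and reuses that translation on every later call:

    // Hot code of the kind that benefits most from a JIT: interpreted,
    // each bytecode in the loop body is decoded n times; JIT-compiled,
    // the whole method is translated to machine code just once.
    public class HotLoop {
        static long sumTo(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += i;
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(sumTo(10_000_000)); // 49999995000000
        }
    }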
The Java VM saw similar speedups when JIT versions of the VM replaced the initial interpreter-only versions.
The JIT compiler knows about the system it is running on, and it can use that knowledge to produce highly efficient code compared to interpreted bytecode; rumor has it that it can even surpass pre-compiled programs.
That's why it can be faster than the traditional Java approach, where the code was only ever run as interpreted bytecode, which is what Android used, too.
Besides compiling Java code to native code, which an ahead-of-time compiler could do too, a JIT performs optimizations that are only possible at runtime.
A JIT can monitor the application's behavior over time and optimize the usage patterns that really make a difference, even at the expense of other branches in the execution path, if those are used less frequently.
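A hypothetical illustration (the classes are made up, but the mechanism is real): if runtime profiling shows that a virtual call site almost always sees one concrete type, a JIT can speculatively inline that one implementation and fall back to the generic path on the rare other types. An ahead-of-time compiler cannot know this distribution in advance:

    // If profiling shows that shape.area() almost always dispatches to
    // Circle.area(), a JIT may devirtualize and inline the Circle code at
    // this call site, deoptimizing only if another Shape ever shows up.
    interface Shape { double area(); }

    class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Square implements Shape {
        final double s;
        Square(double s) { this.s = s; }
        public double area() { return s * s; }
    }

    class Render {
        static double totalArea(Shape[] shapes) {
            double sum = 0;
            for (Shape shape : shapes) {
                sum += shape.area(); // hot virtual call, usually monomorphic
            }
            return sum;
        }
    }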
According to the wiki (http://en.wikipedia.org/wiki/Android_Runtime), Dalvik is entirely replaced by ART in Lollipop, i.e. from that release onwards any app will be compiled to native code upon installation. This begs the question: is there any point in writing computation-intensive routines in the NDK if the app will be compiled to native code anyway?
The Dalvik VM also compiled code to native code. The difference is that Dalvik did it "just in time", and only for the parts of code that were executed frequently.
The compiler in ART has a number of performance improvements over the one in Dalvik, but if you felt the need to go native for performance before, you will most likely continue to feel that need.
ART does not produce "native code" in the sense of hand-written C: its input is still the bytecode generated from your Java sources, and the compiled output still runs inside the managed runtime.
So yes, there are still plenty of advantages to writing some routines with the NDK, of course :)
So I'm trying to write some low-level code for Android, and my main concern is that I want to avoid ALL optimization by the JIT compiler (or anything else). After doing some research, the best approach seems to be to:
write Java bytecode by hand
convert it to a dex file using the "dx" command
run it using the "dalvikvm" command (via adb shell) with the "-Xverify:none -Xdexopt:none" parameters specified
My question is: will this in fact avoid ALL optimization? The previous discussion here https://groups.google.com/forum/#!topic/android-platform/Y-pzP9z6xLw makes me unsure, and I can't 100% convince myself by reading the docs.
Any confirmation one way or the other is greatly appreciated.
Some of the instruction rewriting performed by dexopt cannot be disabled. For example, accesses to volatile long fields must be handled differently from accesses to non-volatile long fields, and this specialization is handled by replacing the field-get instruction with a different one.
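A minimal illustration of why that particular rewrite exists (plain Java, nothing dexopt-specific): the Java memory model allows a non-volatile long access to be split into two 32-bit halves on a 32-bit VM, while a volatile long access must be atomic, so the VM needs a distinct field-access instruction for the volatile case:

    class Counters {
        long plain;           // a read or write may tear into two 32-bit ops
        volatile long atomic; // reads and writes are guaranteed to be atomic
    }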
The optimizations performed by dexopt take the form of instruction replacement, usually some sort of "quickening" that allows the VM to do a little less work. All such optimizations are performed statically, ahead of time, not dynamically at run time, so you will get consistent behavior. Enabling the dexopt optimizations doesn't introduce unknowns, it just changes from one set of knowns to a different set of knowns.
The biggest source of variation is going to be Dalvik's JIT compiler, which you can disable with -Xint:fast. See this slightly outdated doc for notes on how to configure this system-wide.
This question relates to the Android system.
The Dalvik VM uses the JIT concept: when you first run an application, the Dalvik VM compiles (parts of) it and loads the result into RAM, where it stays as long as it can. I understand that concept. But the new virtual machine, called ART, uses the AOT method: ART compiles the app after you install it (or while you are installing it?). What does this mean? Are apps compiled by ART the same as already-compiled apps (like C apps), but run in separate processes, isolated from the rest of the OS?
Can somebody explain these concepts to me more thoroughly? I have to give a presentation where this comes up, and I don't understand the concept; I don't want to look dumb if somebody asks me something about it :)
Sorry for the bad English; it would be nice if somebody could edit the question a bit.
I am not completely familiar with how the Dalvik JIT works in practice on Android, because a JIT can work in several different ways.
The first option is that the JIT translates all bytecode into CPU instructions at application launch. This costs some time before the application starts, but after that the application runs natively. The problem is that the translated application has to be kept in memory for the whole run, which is not good.
The second option is a true just-in-time approach: a block of code is translated only when it is about to execute. The whole application is not translated at launch; only the main function is, and the rest is translated during the run, whenever a certain block of code (a function, etc.) is used. This option consumes less memory, but the application runs more slowly.
From the information I found, Dalvik's JIT actually works closer to the second option: it is trace-based, translating only the frequently executed code paths at run time, so after warming up the application runs "almost" natively. And this "almost" makes the main difference between JIT and AOT.
While an application is running, the JIT has only a limited time budget for compiling bytecode into CPU instructions, so that compilation pauses stay acceptably short. This means it can perform only basic optimizations. When you install an application, however, there is usually more time to spare, and it is spent only once, not on every launch. This means an AOT compiler has much more time to find ways to optimize the application, so the resulting code should be more efficient. A second benefit is that the compiled application is stored in a cache, and only part of it has to be loaded into memory at launch; the OS doesn't have to keep all of the code in memory, which saves RAM. Those are the main differences.
And the last part of your question: ART on Android performs compilation at installation time (after the apk is saved to /data/app/). However, if you wipe that cache, or switch from Dalvik to ART, it will compile all installed applications on the first boot, which can take 10 minutes or even more.
And sorry for my bad English too, I am Czech :-)
Ahead-Of-Time (AOT): the Android Runtime (ART) generates machine code during installation.
Well, since I'm interested in reverse engineering, I have spent a lot of time on Android reverse engineering so far.
Nevertheless, I got to a point where I ran into compiled binary C code (NDK), and I learned that it is much more difficult to decompile that back to C/C++ than it is to decompile a DEX file back to more or less readable Java sources.
What's the reason for this? I mean, the bytecode is executed by the Dalvik VM, while a usual binary file is executed directly by the real processor. Both are pretty similar, apart from an additional emulation layer, aren't they? I don't see much difference at the moment, or the reason for this problem.
Do you have any information on why it is more difficult to decompile a usual binary file (e.g. ELF or MS EXE) back to C source?
Thanks.
The short answer is that C/C++ code does not contain any reflective information, and C/C++ compilers perform inlining, macro expansion, and loop unrolling far more aggressively than the Java compiler does. It is also possible to optimize C/C++ so extensively that all you can do is decompile to assembly, because there are no references left to the application's own functions. (References to the system's functions will still be found, though.)
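To make the "reflective information" part concrete, here is a minimal sketch: class files and DEX files carry full symbolic metadata that the runtime itself can query, and a decompiler recovers the same names and signatures for free. Compiled C has no equivalent of this:

    import java.lang.reflect.Method;

    public class ReflectDemo {
        public int add(int a, int b) { return a + b; }

        public static void main(String[] args) {
            // The bytecode retains class names, method names, and types;
            // this prints e.g. "public int ReflectDemo.add(int,int)".
            for (Method m : ReflectDemo.class.getDeclaredMethods()) {
                System.out.println(m);
            }
        }
    }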
BTW, the Hex-Rays ARM Decompiler makes the reverse-engineering job much easier, check this out: http://www.hex-rays.com/hexarm_compare0.shtml
The other issue is that it costs a lot...
I believe I read at some point that, because Android runs on the Dalvik VM, dynamic languages for the JVM (Clojure, Jython, JRuby, etc.) would be hard pressed to obtain good performance on Dalvik (and hence on Android). If I recall correctly, the reasoning was that, under the hood, achieving the dynamic typing involves quite a bit of fiddling with the Java bytecode, and that the bytecode-to-Dalvik translation wouldn't pick this up easily.
So should I avoid a dynamic JVM language if I want to develop for Android?
EDIT: I guess I should have provided a bit more context. I was considering using Clojure to develop apps for Android, for a few reasons:
I want to learn FP
I don't really care to learn Java
Clojure seems to have some very interesting language concepts (STM, for example).
However, when I tried to write apps for Android in Clojure, I ran into unacceptable performance problems. I then found a blog post saying that dynamically typed languages (Clojure, for example) would have problems due to the bytecode manipulation needed to implement the dynamic typing. So I was looking for independent confirmation of whether this is true or not. I should have known better than to assume that, on this particular issue, all dynamically typed JVM languages could be treated the same. I guess I asked a fairly broad question, so I shouldn't be surprised that people didn't quite understand what I was asking.
Dan Bornstein gave a presentation on Dalvik at Google I/O. It's worth watching to learn about the system in general, including the constraints you care about. The specific issue of non-Java languages compiled into Java bytecode comes up during the Q&A.
Remco van 't Veer has a github project where he's patched Clojure to work on Android. Tim Riddell has written a tutorial on how to use it.
As mentioned here by @sean, there is sometimes a bigger problem than just performance. Dan Bornstein discusses it when asked about Jython, at ~54:00 in the video: there is currently no support for dynamic languages that generate bytecode on the fly, because the bytecode translation to DEX is not available at runtime.
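A minimal sketch of the mechanism such languages depend on (plain Java, desktop JVM): eval-style features work by generating JVM bytecode at runtime and loading it through a ClassLoader. On Dalvik this approach fails, because the VM loads DEX rather than .class bytecode and there is no dx translation step available at runtime:

    // Assumes classBytes holds a valid compiled .class file produced at
    // runtime (e.g. by a dynamic language's compiler). This works on a
    // desktop JVM; Dalvik cannot define a class from raw JVM bytecode.
    class ByteArrayClassLoader extends ClassLoader {
        Class<?> loadFromBytes(String name, byte[] classBytes) {
            return defineClass(name, classBytes, 0, classBytes.length);
        }
    }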
Android just got scripting
There are some patches to make Clojure work:
http://riddell.us/tutorial/clojure_android/clojure_android.html
I think the real issue is the use of bytecode generators by some dynamic languages; they won't generate bytecode for the Dalvik VM, so eval will not work.
Given the relatively cramped hardware of the phone it runs on, you should probably just target Java and not worry about a dynamic JVM language; to my understanding, dynamic languages on the JVM aren't going to be as efficient as Java.
Besides, the Android SDK is pretty sane and easy to write for; I don't think you'll see many benefits from using something else.
dynamic languages for the JVM would be hard pressed to obtain good performance on Dalvik
Dynamic languages are hard pressed to obtain good performance, period. If you want performance, use a statically typed language like Java (or C#, F# etc.).
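To make that cost concrete, here is a rough sketch of what a dynamically typed language ends up doing at a call site when no static types are available (the helper is illustrative, not any particular language's runtime): a reflective lookup-and-invoke with boxing, instead of a single direct invokevirtual:

    import java.lang.reflect.Method;

    public class DynamicCallDemo {
        // A dynamic call site must resolve the target method at run time
        // and box its arguments and return value.
        static Object dynamicCall(Object target, String name, Object arg)
                throws Exception {
            Method m = target.getClass().getMethod(name, arg.getClass());
            return m.invoke(target, arg);
        }

        public static void main(String[] args) throws Exception {
            // The static equivalent, "hello".concat("!"), compiles down to
            // a single invokevirtual instruction.
            System.out.println(dynamicCall("hello", "concat", "!"));
        }
    }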