JIT vs AOT compilation - Android

This question relates to the Android system.
The Dalvik VM uses the JIT concept: when you first run an application, Dalvik compiles it and loads it into RAM, where it stays as long as it can. I understand that concept. But the new virtual machine, called ART, uses an AOT method. ART compiles the app after you install it (or while you are installing it?). What does this mean? Are apps compiled by ART the same as already-compiled apps (like C apps), but run in separate processes isolated from the rest of the OS?
Can somebody explain these concepts to me more thoroughly? I have to give a presentation that mentions this, and I don't want to look dumb if somebody asks me something about it :)
Sorry for the bad English; it would be nice if somebody could edit the question a bit.

I am not completely familiar with how Dalvik's JIT works on Android in practice, because a JIT can work in several ways.
The first option is that the JIT translates all bytecode into CPU instructions at application launch. This costs some time before the application starts, but after that the application can run as if it were native. The problem is that the translated code has to be kept in memory for the whole run, which is not good.
The second option is a true just-in-time approach, which translates a block of code only when it is about to run. The whole application is not translated at launch; only the main function is, and the rest is translated during the run as each block of code (a function, etc.) is first used. This option consumes less memory, but the application is slower while translation is still going on.
According to the information I found, Android uses the first option: the application is translated at launch and after that it runs "almost" natively. (Strictly speaking, Dalvik's JIT is trace-based and compiles only hot code paths while the application runs, which is closer to the second option.) And this "almost" makes the main difference between JIT and AOT.
When you are about to launch an application, the JIT has only a limited amount of time to compile the bytecode into CPU instructions if the launch delay is to stay acceptably short. This means it can perform only basic optimizations. However, when you install an application, there is usually more time to spend, and you spend it only once, not at every launch. This means an AOT compiler has much more time to find ways to optimize the application, so the resulting code should be more efficient. The second benefit is that the compiled application is stored in a cache on disk, and only part of it needs to be loaded into memory at launch; the OS does not have to keep the whole translated code in memory, which saves RAM. Those are the main differences.
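To make the warm-up trade-off concrete, here is a small, self-contained Java micro-benchmark (illustrative only; exact numbers depend heavily on the VM). On a typical JIT VM the early batches run slower, because the code is still interpreted, while an AOT-compiled program would be roughly equally fast from the first batch:

    // Illustrative sketch: batch timings shrink as the JIT warms up.
    public class WarmupDemo {
        static long work(int n) {
            long acc = 0;
            for (int i = 0; i < n; i++) acc += (acc ^ i) % 7;
            return acc;
        }

        public static void main(String[] args) {
            for (int batch = 0; batch < 10; batch++) {
                long t0 = System.nanoTime();
                long sink = 0;
                for (int i = 0; i < 10_000; i++) sink += work(1_000);
                System.out.println("batch " + batch + ": "
                        + (System.nanoTime() - t0) / 1_000_000 + " ms (" + sink + ")");
            }
        }
    }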
And the last part of your question: ART performs compilation at installation time (after the APK is saved to /data/app/). However, if you wipe that cache, or switch from Dalvik to ART, it will compile all installed applications during the next boot, which can take 10 or even more minutes.
And sorry for my bad English too; I am Czech :-)

Ahead-of-time (AOT): the Android Runtime (ART) generates native machine code during installation.
[diagram: JIT vs AOT]

Related

AOT of framework library

I know that in KitKat and later versions Android does AOT (ahead-of-time) compilation; this happens at app install time and results in plenty of performance benefits. But any app references a number of classes from the operating system, which I guess are packaged as JAR files somewhere.
Take TextView as an example: when AOT compilation happens, does it also cover all the referenced system classes, or is AOT an app-exclusive procedure? In the latter case a good amount of the performance improvement would be out of reach, since only part of the processing happens in our own code.
My app is too slow. I am using the Android system class android.text.StaticLayout; I think if it were also precompiled, whether at install time or when the OS itself was installed, it would be faster than what I am seeing.

C JNI library crashes the entire Android app

I am using ffmpeg compiled for Android, and it works acceptably for now; however, errors sometimes appear (depending on the phone's configuration) and the app simply force-closes with this message:
Fatal signal 11 (SIGSEGV) at 0x00000001 (code=1), thread 20745 (AsyncTask #2)
The ffmpeg call is inside a try/catch block; however, it does not seem to help.
So, how can I prevent this force close and show the user a message instead?
I'm afraid you can't do that. See also this answer, which hints at why.
When ffmpeg dies, it takes your entire program with it. This is just the way things are. When programming in Java, you don't have to think about programs crashing in that manner, but ffmpeg is written in C, and when it dies it can take down your entire Java program.
try/catch does not help, because ffmpeg does not know or care about Java exceptions; a SIGSEGV is a native fault, not an exception. Your only solution while staying within a single process is either to find the bug that makes ffmpeg die, or to find what triggers the bug and call ffmpeg in such a way that it does not crash. As Alex Cohn points out, another solution is to run ffmpeg in a separate process, so that it cannot take down anything but itself.
You can run ffmpeg not as a library, but as a separate executable in its own process. This may be significantly less efficient, but with such a setup your main process can survive an ffmpeg crash.
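A minimal sketch of that approach, assuming you ship an ffmpeg binary in your app's private files directory (the path, log tag, and arguments here are hypothetical):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical helper: runs a bundled ffmpeg binary as a child process.
    // If ffmpeg segfaults, only the child process dies; the caller just
    // observes a non-zero exit code instead of crashing.
    public final class FfmpegRunner {
        public static int run(String ffmpegPath, String... args) {
            try {
                String[] cmd = new String[args.length + 1];
                cmd[0] = ffmpegPath;   // e.g. getFilesDir() + "/ffmpeg" (assumed location)
                System.arraycopy(args, 0, cmd, 1, args.length);
                Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
                BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                for (String line; (line = out.readLine()) != null; ) {
                    android.util.Log.d("ffmpeg", line);   // forward ffmpeg's output to logcat
                }
                return p.waitFor();    // 0 on success; a crash surfaces as a signal exit code
            } catch (Exception e) {    // covers IO errors and interruption, not native crashes
                return -1;
            }
        }
    }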
You can also set up your app so that its Activity and Service run in separate processes; see e.g. How to create an Android Activity and Service that use separate processes.
This allows for a watchdog mechanism, and more. Without careful testing I cannot tell whether this approach would perform better or worse than running an ffmpeg executable; a sketch follows.
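Here is a minimal sketch of the separate-process idea (the nativeRun JNI wrapper is hypothetical; the crucial part is giving the Service its own process via android:process in the manifest):

    import android.app.IntentService;
    import android.content.Intent;

    // Declared in AndroidManifest.xml with android:process=":ffmpeg" so it
    // runs in its own process: if the native call segfaults, only that
    // process dies and the main UI process survives.
    public class FfmpegService extends IntentService {
        public FfmpegService() { super("FfmpegService"); }

        @Override
        protected void onHandleIntent(Intent intent) {
            nativeRun(intent.getStringExtra("input"));   // hypothetical JNI wrapper around ffmpeg
        }

        private native void nativeRun(String input);
    }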

Android: can c++ code be aware of Android app memory use?

I have been searching for days and can't find a straightforward answer. I am working on an application written in C++ that has been ported to Android. I am able to launch and run it without too much hassle. My task is to figure out how much RAM our app is using at runtime so that we can handle memory issues dynamically, which in my mind means I need something in my C++ code that can somehow be aware of system characteristics. What I have been able to do so far is pull certain app metrics in my Java code via the getMemoryInfo call, as in this post: Programmatically find Android system info
However, I would really like to be able to probe this from our C++ code so that we can handle everything there...
Is this even possible?
If it is, are the calls unrealistically expensive?
If it is not, how can memory be managed from native code rather than Java code? That is, if I see that only x amount of RAM is available, I can dynamically change how much memory my C++ code allocates to accommodate what the system has to offer.
Something along these lines, for example in C++ (a sketch; note that sysinfo reports system-wide free memory, which is one way to approximate this from native code):

    #include <cstdlib>
    #include <sys/sysinfo.h>    // Linux syscall wrapper, available in the Android NDK

    const size_t MB = 1024 * 1024;
    struct sysinfo si;
    sysinfo(&si);               // fills in system-wide memory statistics
    unsigned long long freeBytes = (unsigned long long) si.freeram * si.mem_unit;
    void* buf = malloc(freeBytes < 20 * MB ? 10 * MB : 20 * MB);
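For reference, the Java-side getMemoryInfo call mentioned above looks roughly like this (a sketch using the standard ActivityManager API; context is assumed to be available, and the value could be handed down to C++ through a JNI call of your own):

    import android.app.ActivityManager;
    import android.content.Context;

    // Sketch: query available system memory from Java; the result can be
    // passed to native code via JNI if the allocation decision lives in C++.
    ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    ActivityManager.MemoryInfo mi = new ActivityManager.MemoryInfo();
    am.getMemoryInfo(mi);
    long availMb = mi.availMem / (1024 * 1024);   // system-wide available memory, in MB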

How to take a heap snapshot of Xamarin.Android's Mono VM?

Background: I am trying to track down a memory leak in a Xamarin.Android app. Using DDMS and the Eclipse Memory Analyzer, I am able to see which objects are alive. When trying to track down what is holding them alive (the GC root), I only see "Native stack" (of course).
How can I take a heap snapshot of the Mono VM, so I can later use it with e.g. the heapshot tool?
Or are there ANY OTHER techniques I can use to find what is holding an object alive in Xamarin.Android's .NET part? Is it possible to do something from within the program?
How can I take a heap snapshot of the Mono VM, so I can later use it with e.g. the heapshot tool?
It is now possible to get heap snapshots of the Mono VM (tested with Xamarin.Android 4.8.2 beta; it may apply to prior releases, your mileage may vary). It's a five-step process:
1. Enable heapshot logging:
   adb shell setprop debug.mono.profile log:heapshot
2. Start your app. (If your app was already running before step 1, kill and restart it.)
3. Use your app.
4. Grab the profile data for your app:
   adb pull /data/data/#PACKAGE_NAME#/files/.__override__/profile.mlpd
   Here #PACKAGE_NAME# is the package name of your application; e.g. if your package is FooBar.FooBar-Signed.apk, then #PACKAGE_NAME# will be FooBar.FooBar.
5. Analyze the data:
   mprof-report profile.mlpd
   mprof-report is included with Mono.
Note: profile.mlpd is only updated when a GC occurs, so you may want to call GC.Collect() at some well-known point to ensure that it is regularly updated.
I have been having trouble with Xamarin.Android memory profiling, and have used a few tricks:
On the Dalvik side I have used Android Monitor to dump a heap snapshot and then opened it with JProfiler or Eclipse MAT. This is standard Android.
A large portion of my code is shared (70-80%), and to verify it I built a simple WinForms application to drive the shared API. This way I can use .NET Memory Profiler (or ANTS if you prefer) as well as dotTrace for performance. I could easily pick out quite a few issues this way.
By using the solution explained by #jnop above, I could open profile.mlpd in Mono's HeapShot tool and get a visual tool instead of mprof-report's textual output.
By the way, you should vote for better profilers:
http://xamarin.uservoice.com/forums/144858-xamarin-suggestions/suggestions/3229534-add-memory-and-performance-profiler

How does a JIT compiler help application performance?

I just read that Android got a 450% performance improvement because it added a JIT compiler. I know what a JIT is, but I don't really understand why it is faster than normally compiled code, or what the difference is from the older approach on the Android platform (running compiled bytecode, Java-style).
Thanks!
EDIT: This is hugely interesting, thanks! I wish I could pick every answer as correct :)
First, a disclaimer: I'm not at all familiar with Android. Anyway...
There are two applications of JIT compilation that I am familiar with. One is converting bytecodes into the actual machine instructions; the second is superoptimisation.
JIT bytecode compilation speeds things up because the bytecode is translated only once, instead of being interpreted each time it is executed. This is probably the sort of optimisation you are seeing.
JIT superoptimisation, which searches for the truly optimal set of instructions to implement a program's logic, is a little more esoteric. It's probably not what you're talking about, though I have read reports of 100%-200% speed increases as a result.
The VM needs to turn compiled bytecode into machine instructions in order to run it. Previously this was done using an interpreter, which is fine for code that is only invoked once but suboptimal for functions that are called repeatedly.
The Java VM saw similar speedups when JIT versions of the VM replaced the initial interpreter-only versions.
The JIT compiler knows about the system it runs on, and it can use that knowledge to produce highly efficient code compared to bytecode; rumor has it that it can even surpass pre-compiled programs.
That's why it can be faster than the traditional Java approach of running the code as interpreted bytecode only, which Android also used.
Besides compiling Java code to native code, which an ordinary ahead-of-time compiler could do too, a JIT performs optimizations that are only possible at runtime.
A JIT can monitor the application's behavior over time and optimize the usage patterns that really make a difference, even at the expense of other branches in the execution path, if those are used less frequently.
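As an illustration (not Android-specific), consider a virtual call site that a JIT can devirtualize and inline once its runtime profile shows that only one receiver type ever appears; an AOT compiler, lacking that profile, has to stay conservative. A minimal sketch:

    // Sketch: after many calls where every element is a Circle, a JIT can
    // speculate that s is always a Circle, inline area(), and optimize the
    // loop; it deoptimizes back to the generic path if another type appears.
    interface Shape { double area(); }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class HotLoop {
        public static void main(String[] args) {
            Shape[] shapes = new Shape[1_000_000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);
            double sum = 0;
            for (Shape s : shapes) sum += s.area();   // monomorphic call site
            System.out.println(sum);
        }
    }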
