I wonder whether ART is a virtual machine. dex2oat compiles Dalvik bytecode into native (platform-specific) code, an ELF file. Yet, as mentioned in the Android developer article, ART still has a garbage collector. I don't understand how this works: we have a natively compiled ELF file, but does it still run in a virtual-machine environment? How does GC work in this case?
Please give a good reference to read about this, or explain it here.
Thanks in advance.
GC is just a way memory is managed. In any Java VM, the GC is the entity responsible for both memory allocation AND garbage collection: when you allocate an object, the GC checks for available memory and collects garbage if there is no free space. You can implement the same algorithm in a native language like C or C++. So it doesn't matter whether you compile Java to bytecode that calls into a GC running inside the JVM, or compile Java to native code and link it with a GC that may be a shared library. There was a VM from Myriad Group (ex Esmertec) that did this way before ART, but for Java ME.
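To make this concrete, here is a minimal sketch in C of the allocate-then-collect-on-pressure idea (the names, the heap limit, and the trivial collect_garbage() stub are hypothetical; a real collector also traces which objects are still reachable):

#include <stdlib.h>

/* Hypothetical managed-heap bookkeeping, for illustration only. */
static size_t heap_used = 0;
static const size_t HEAP_LIMIT = 64 * 1024 * 1024;

/* Stub for the sketch: a real collector traces live objects and
 * reclaims only the unreachable ones. */
static void collect_garbage(void) { heap_used = 0; }

/* The allocation entry point that compiled code calls. Whether the
 * caller is interpreted bytecode or AOT-compiled native code from an
 * ELF file makes no difference to this logic. */
void *gc_alloc(size_t size)
{
    if (heap_used + size > HEAP_LIMIT) {
        collect_garbage();              /* no free space: collect first */
        if (heap_used + size > HEAP_LIMIT)
            return NULL;                /* genuinely out of memory */
    }
    heap_used += size;
    return malloc(size);
}

The point is that the collector is just code linked into the process: ART links its runtime, including the GC, into the same process that executes the AOT-compiled methods.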
Some time ago I developed a JNI wrapper for the C libspeex audio codec.
Recently I had a problem compiling and running it with NDK r10e: the audio encode function crashed at runtime.
Finally I found that when I compile with
NDK_TOOLCHAIN_VERSION:=4.8
the native code runs fine, while with
NDK_TOOLCHAIN_VERSION:=clang
it crashes. I wonder what the difference between the two toolchains is.
The logcat of the crash:
11-02 14:26:33.642 1908-1908/com.ermes.intau D/dalvikvm: GC_FOR_ALLOC freed 1248K, 20% free 34140K/42456K, paused 102ms, total 102ms
11-02 14:26:33.642 1908-2514/com.ermes.intau A/libc: Fatal signal 11 (SIGSEGV) at 0x00000000 (code=1), thread 2514 (Thread-103909)
11-02 14:26:33.742 1908-1908/com.ermes.intau D/dalvikvm: GC_FOR_ALLOC freed 6K, 18% free 34883K/42456K, paused 89ms, total 89ms
gcc and clang are completely different C compilers. They have no common code.
Of course, they aren't developed in a vacuum. The developers of both compilers compete with each other to generate the best / fastest machine code, and while the optimisations they perform may be based on the same ideas, each compiler has its own edge cases that get compiled differently.
Since clang is the newer of the two, its developers try to compile open source projects that have a history of being built with gcc, and either clang or the project gets changed whenever bugs are found.
The C language standard also leaves a lot of behaviour undefined: dividing by zero, dereferencing a NULL pointer, signed integer overflow, alignment of stack allocations... Both compilers will exploit these edge cases to generate faster code. If a block of code "might" do something weird, the compiler can assume that the developer knows what they are doing and has handled those cases elsewhere (one such case is sketched below the link).
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
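As a hedged illustration of that point (what actually happens depends on compiler and optimisation level, so treat this as a sketch, not a guarantee):

#include <limits.h>
#include <stdio.h>

/* Signed integer overflow is undefined behaviour, so an optimiser may
 * assume it never happens and reduce "x + 1 < x" to a constant 0,
 * deleting the check entirely. gcc and clang can make different
 * choices here, which is one way the "same" code works under one
 * toolchain and misbehaves under the other. */
int will_overflow(int x)
{
    return x + 1 < x;   /* looks like an overflow test, but is UB */
}

int main(void)
{
    /* At -O2, both compilers are entitled to print 0 here. */
    printf("%d\n", will_overflow(INT_MAX));
    return 0;
}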
Can anybody tell me where to find the source code of Allocation Tracker, the memory tracing tool in Android DDMS? (http://grepcode.com/file/repo1.maven.org/maven2/com.android.ddmlib/ddmlib/r16/com/android/ddmlib/Client.java#Client.requestAllocationDetails%28%29)
Or are there any other ways to analyse memory allocation and find out how much memory each class uses within an Android app/process?
I'm facing an 'ABORTING: HEAP MEMORY CORRUPTION' problem in the Android NDK environment.
If I backtrace with ndk-gdb, the crash mostly occurs in the malloc/dlfree functions in libc.so,
and after long hours of tracing the problem, it mostly happens inside sqlite3_xxx function calls, which work absolutely fine on iOS.
I just can't figure out where to dig deeper.
Has anyone encountered a similar problem and fixed it?
I have seen memory problems, but not the 'ABORTING: HEAP MEMORY CORRUPTION' that you report.
You have to find out which heap is corrupt: the Java one or the C/C++ one. Or it may be your SQL. If the log is not informative, you may try to locate the error message in the binaries.
If it is the C/C++ heap, what worked for me was replacing the standard malloc/calloc/free with my own versions.
#define malloc(x) myMalloc(x, __FILE__,__LINE__,__func__)
and so on; myMalloc() and friends print debug information so that you can find out where memory was allocated and freed. I had the source of the library and could compile it. Then logging, logging, logging...
#include <android/log.h>
#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, "~~~~~~", __VA_ARGS__)
#define DLOG(...) __android_log_print(ANDROID_LOG_DEBUG, "~~~~~~", __VA_ARGS__)
I also made myMalloc() zero the allocated memory -- just in case. One more trick is to allocate a larger chunk and put a guard value at the end of it. If that value gets corrupted, you will see it. A sketch combining these tricks follows below.
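Here is a minimal sketch of what such wrappers could look like, assuming the LOGD macro above (the header layout, the guard constant, and myFree are my own invention, not part of any library):

#include <android/log.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, "~~~~~~", __VA_ARGS__)
#define GUARD 0xDEADBEEFu   /* arbitrary sentinel written past the block */

/* Block layout: [Header][user data ...][uint32_t guard] */
typedef union {
    size_t size;
    max_align_t align;      /* keeps the user block correctly aligned */
} Header;

void *myMalloc(size_t size, const char *file, int line, const char *func)
{
    uint8_t *raw = malloc(sizeof(Header) + size + sizeof(uint32_t));
    uint8_t *user;
    uint32_t g = GUARD;

    if (raw == NULL)
        return NULL;
    ((Header *)raw)->size = size;          /* remember the size          */
    user = raw + sizeof(Header);
    memset(user, 0, size);                 /* zero it, just in case      */
    memcpy(user + size, &g, sizeof(g));    /* plant the guard at the end */
    LOGD("malloc %zu at %p (%s:%d %s)", size, (void *)user, file, line, func);
    return user;
}

void myFree(void *p, const char *file, int line, const char *func)
{
    uint8_t *raw;
    size_t size;
    uint32_t g;

    if (p == NULL)
        return;
    raw = (uint8_t *)p - sizeof(Header);
    size = ((Header *)raw)->size;
    memcpy(&g, (uint8_t *)p + size, sizeof(g));
    if (g != GUARD)                        /* someone wrote past the end */
        LOGD("GUARD CORRUPTED at %p (freed at %s:%d %s)", p, file, line, func);
    free(raw);
}

With a matching #define free(x) myFree(x, __FILE__, __LINE__, __func__), every allocation and free gets logged, and a corrupted guard tells you roughly which block was overrun.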
If it is the Java heap, you will have to log your native function calls (I myself have never seen problems in the Java heap, usually Java complains about its JNI-specific stuff).
In my program, 'ABORTING: HEAP MEMORY CORRUPTION' shows up when there are thread-safety issues. Specifically, with the Cocos2d-x framework, the getFileData() function of ZipUtils may crash when a .plist atlas is loaded and addImageAsync() is called at the same time on Android, though the code works fine on iOS.
I have developed a Loadable Kernel Module (LKM) for Android.
I use kzalloc:
device = kzalloc(ndevices * sizeof (*device), GFP_KERNEL);
and it worked for a while, but after an update of my Android (since 4.1 it no longer works), I get the following error on insmod:
insmod module.ko
insmod: init_module 'module.ko' failed (No such file or directory)
dmesg says:
Unknown symbol malloc_sizes (err 0)
This has something to do with linux/slab.h; that's all I know.
I have googled for days on end, and I'm very frustrated at not finding a solution that fixes this problem and gets the LKM working again.
Can anyone help me out?
CONCLUSION:
The accepted answer is correct: try removing the slab.h include and declaring the missing functions as "extern". Or, in your kernel source, use "make menuconfig" and change SLAB to SLUB (see the first comment on the answer for more details).
The remaining problems are handled in a new, more specific topic:
Interchangeability of compiled LKMs
You need to tell us the kernel versions. But looking up Linux kernel versions and memory allocators, it looks like the default mainline kernel switched from SLAB to SLUB:
By default, the Linux kernel used the SLAB allocation system until version 2.6.23, when SLUB allocation became the default.
Unless you're writing a module or something else that depends on SLAB (which is highly unlikely), you probably don't want to be including the linux/slab.h header; a sketch of the "declare the symbols extern" workaround from the conclusion above follows.
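Here is a hedged sketch of that workaround (which exported symbols exist depends on your kernel version; __kmalloc and kfree are exported by mainline kernels, but verify against your own tree, and my_kzalloc is just my name for the replacement helper):

/* Instead of #include <linux/slab.h>, whose inline helpers reference
 * allocator internals such as malloc_sizes on SLAB kernels, declare
 * the exported allocator entry points directly, so the module binds to
 * whatever allocator (SLAB or SLUB) the running kernel provides. */
#include <linux/gfp.h>      /* gfp_t, GFP_KERNEL */
#include <linux/string.h>   /* memset */

extern void *__kmalloc(size_t size, gfp_t flags);
extern void kfree(const void *objp);

/* kzalloc stand-in for this sketch: allocate, then zero. */
static void *my_kzalloc(size_t size, gfp_t flags)
{
    void *p = __kmalloc(size, flags);

    if (p)
        memset(p, 0, size);
    return p;
}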
I need to use the J48 classifier in Android, but I'm running into heap-space problems. Is there a way to fix this? I get an error that states: Dalvik format failed: Failed to convert dex. PermGen space.
So you have a memory problem using J48 in Weka on Android.
I would try to diagnose this in the following order:
How much memory does your program consume? See here and here for Weka memory consumption.
Add more memory to the JVM (also in the earlier links).
Try running this on a better-resourced JVM: can it run on a desktop? Or is the problem unrelated to the OS resources?
Tune your algorithm - build a smaller tree or prune it more heavily.
Prune your dataset - remove unnecessary attributes.
Prune your dataset - use fewer instances.
Use a different algorithm.
If all else fails, implement your decision tree using a different library (scipy/Orange/KNIME/RapidMiner), or roll your own (a minimal sketch follows below).
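If you do roll your own, a trained decision tree is just nested threshold tests, so it is cheap to evaluate on a phone. A minimal sketch in C (the structure and the toy tree are invented for illustration, not exported from Weka):

#include <stdio.h>

/* A tree node: leaves carry a class label; inner nodes test one
 * attribute against a threshold and branch left (<=) or right (>). */
typedef struct Node {
    int attribute;              /* index into the feature vector       */
    double threshold;
    int label;                  /* valid only when both children NULL  */
    const struct Node *left, *right;
} Node;

static int classify(const Node *n, const double *features)
{
    while (n->left && n->right)
        n = (features[n->attribute] <= n->threshold) ? n->left : n->right;
    return n->label;
}

int main(void)
{
    /* Toy tree: class 1 iff feature 0 > 2.5 and feature 1 > 1.0. */
    static const Node leaf0 = { .label = 0 };
    static const Node leaf1 = { .label = 1 };
    static const Node inner = { .attribute = 1, .threshold = 1.0,
                                .left = &leaf0, .right = &leaf1 };
    static const Node root  = { .attribute = 0, .threshold = 2.5,
                                .left = &leaf0, .right = &inner };

    double sample[2] = { 3.0, 2.0 };
    printf("class = %d\n", classify(&root, sample));   /* prints 1 */
    return 0;
}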