Can anyone tell me if multidex is supported on Android 2.3?
I've searched around, but I can't find any information on this.
My project has the same configuration as this one: https://github.com/mustafa01ali/MultiDexTest
The project builds without problems, but the final APK can't be installed on devices running 2.3 or lower.
On installation I get the error
Failure [INSTALL_FAILED_DEXOPT]
in Android Studio and this appears in logcat:
E/dalvikvm﹕ LinearAlloc exceeded capacity (5242880), last=1384
W/installd﹕ DexInv: --- END '/data/app/xxx.apk' --- status=0x000b, process failed
E/installd﹕ dexopt failed on '/data/dalvik-cache/data#app#xxx.apk#classes.dex' res = 11
You're hitting a different size limitation (LinearAlloc), which according to this bug is not solved by multi-dex:
https://code.google.com/p/android/issues/detail?id=78035
From comment #7 in that bug:
There is already an option in dx allowing to force generation of smaller dex files:
--set-max-idx-number=
Unfortunately, changing the default is not a solution, since the LinearAlloc limit can be reached at very different levels depending on the class hierarchy and other criteria.
In addition, for most applications, moving to multidex will only help to work around the LinearAlloc limit for the installation. The application will still crash against the same limit at execution. The only working use case where I know multidex can help with LinearAlloc is when the APK does not contain one application but distinct pieces running in separate processes.
It's not clear that there's anything you can do to work around this limit, at least in the long term; you may need to simplify your app. There's another StackOverflow question here with some information and some workarounds that may get you up and running, at least for a while:
How to avoid LinearAlloc Exceeded Capacity error android
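If your app genuinely consists of independent pieces, the separate-process case the bug comment describes looks roughly like this in the manifest; a hedged sketch (HeavyService is a hypothetical component name):
<!-- Each process gets its own LinearAlloc budget, so moving a heavy,
     self-contained component out of the main process can help. -->
<service
    android:name=".HeavyService"
    android:process=":heavy" />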
I am using an Azure DevOps Pipeline for CI/CD for a ReactNative Android app.
It has been working great for a while now, but in my latest release the Gradle build is running into the following error:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signProductionReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.FinalizeBundleTask$BundleToolRunnable
> Java heap space
The relevant part of my azure-pipelines.yml file looks like this:
pool:
  vmImage: "macos-latest" # I am using this image because I am also doing iOS builds (not sure if it's relevant to the problem)
jobs:
  - job: DeployAndroid
    steps:
      - task: Gradle@2
        inputs:
          gradleWrapperFile: "MyApp/android/gradlew"
          cwd: "MyApp/android"
          tasks: "bundleProductionRelease"
          publishJUnitResults: false
          javaHomeOption: "JDKVersion"
          sonarQubeRunAnalysis: false
Java heap space isn't a very descriptive error, but it seems reasonable to assume it is a memory issue. I attempted to increase the max JVM memory by adding the gradleOptions argument:
- task: Gradle@2
  inputs:
    gradleOptions: "-Xmx3072m"
The default value is -Xmx1024m, so I thought tripling the memory (-Xmx3072m) might work. Unfortunately I am still getting the same error.
Does anyone have any other ideas on how I can fix this error?
I discovered there were several problems that needed to be solved to get this to work.
For some reason the gradleOptions argument in azure-pipelines.yml wasn't increasing the memory, even though the documentation shows it being used specifically for the -Xmx flag. (I imagine something about my setup is overriding it.)
Instead I added the following line to my gradle.properties file:
org.gradle.jvmargs=-Xmx10240m -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I found that the MacOS Azure images have the following specs:
Mac pros with a 3 core CPU, 14 GB of RAM, and 14 GB of SSD disk space
So, I set the Xmx flag (max JVM heap memory size) to a relatively high 10GB (10240m).
-XX:MaxPermSize is also memory-related: it caps the JVM's permanent generation, though it is ignored on Java 8 and later. The other flags are just related to debugging.
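If you would rather keep the setting in the pipeline itself, passing the JVM args through Gradle's own system property might also work; an untested sketch, assuming the Gradle@2 task's options input is forwarded to the Gradle command line unchanged:
- task: Gradle@2
  inputs:
    # -D sets a Gradle system property; org.gradle.jvmargs controls the heap
    # of the (daemon) JVM that actually runs the build tasks.
    options: "-Dorg.gradle.jvmargs=-Xmx10240m"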
This ended up only being half of my problem, but the other half isn't particularly relevant to my original question.
I originally placed all the blame on Azure, but I realized that I had only run the debug bundle locally, and the production bundle was also failing with the same error on my local machine.
My other problem was specific to React Native. It resulted in the same error though, so I imagine both are somehow related.
The issue was that I had upgraded to a more recent version of React Native, but missed some of the changes that had been made to android/app/build.gradle.
I doubt the specifics of what I happened to be missing would be very helpful to anyone, but if you run into this error, I suggest you check the React Native Upgrade Helper to see if you're missing anything in your build.gradle file.
While playing around with Android ART and the "native" .oat/.elf code file that is created during app installation, I noticed something odd.
As I understand it, if the device is using ART (Android >= 5.0), the app will start from the compiled oat file (/data/dalvik-cache/arm64/).
That's why I was somewhat surprised when I checked the open file descriptors of an app and did not find the file there. Only the normal APK (/data/app//base.apk) is listed there.
Check this output of my "ls -l /proc/PID/fd"
So I thought maybe it's just not listed there, and I exchanged that app's oat file myself by compiling another classes.dex with the dex2oat tool.
Even after changing the file, the app starts normally, without any strange messages or errors (including in logcat).
What is the explanation for this? What is the detailed process Android goes through when starting an app under ART?
I hope someone can clear that up for me. Thanks a lot.
Based on @Paschalis' comment, I investigated, and the oat file is indeed memory-mapped on Android 5.0 devices (emulator):
a6af4000-a6af9000 r--p 00000000 1f:01 7366 /data/dalvik-cache/x86/data#app#my.app.works-1#base.apk#classes.dex
Check via:
cat /proc/<PID>/maps | grep dex
Sadly, this isn't true anymore for Android 6.0 devices (Nexus 5 & arm emulator).
The odex file is within the /data/app/<APP>/oat/<ARCHITECTURE>/ folder as `base.odex`:
/data/app/app.app.works-1/oat/arm/base.odex
I still haven't found an authoritative reference for this; it is based on experiments and observations.
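To reproduce the observation, you can grep a running app's memory mappings for dex/odex/oat files; a small sketch (my.app.works is a placeholder package name, and a root shell may be required):
adb shell ps | grep my.app.works        # find the app's PID
adb shell "cat /proc/<PID>/maps | grep -iE 'dex|oat'"
On 6.0, look for the base.odex path shown above rather than a dalvik-cache entry.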
Apparently I have too many Apache POI jars, which bring in too many methods and push me over the limit when I try to read and write an .xlsx file. Below is the error I get:
trouble writing output: Too many methods: 66024; max is 65536. By package:
13 java.lang
1 java.lang.reflect
5 java.util
1 javax.xml.namespace
66 org.apache.xmlbeans
19 org.apache.xmlbeans.impl.values
1 org.apache.xmlbeans.impl.xb.xmlschema
2500 org.openxmlformats.schemas.drawingml.x2006.chart
1430 org.openxmlformats.schemas.drawingml.x2006.chart.impl
8767 org.openxmlformats.schemas.drawingml.x2006.main
5258 org.openxmlformats.schemas.drawingml.x2006.main.impl
86 org.openxmlformats.schemas.drawingml.x2006.picture
33 org.openxmlformats.schemas.drawingml.x2006.picture.impl
Is there a way around this? I don't want to delete any libraries, and yet my project is not compiling. Please help.
Found the issue!
It's Apache POI's XSSF incompatibility with Android! Apache POI itself is fine, but when Android converts your Java code into Dalvik Executable files it enforces a limit of 65536 method references, which the Apache POI libraries exceed when they handle XSSF. Hence the error. It has nothing to do with the amount of data. :) I had only 75 rows and 7 columns. More information on this can be found at http://mail-archives.apache.org/mod_mbox/poi-dev/201110.mbox/%3CCA+JOeWNWinmNmEtHs5VK+KEc_6BzAG_=LfpdXqsDsnjJKR2X7Q#mail.gmail.com%3E.
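If you just need to get past the 65536-method ceiling rather than shrink the app, multidex is the usual workaround on plain Android projects; a minimal build.gradle sketch (multidex is native on API 21+; below that you also need the support library and a MultiDexApplication):
android {
    defaultConfig {
        // Emit multiple classes.dex files instead of failing at the
        // 65536-method-reference ceiling.
        multiDexEnabled true
    }
}
dependencies {
    // Runtime support for pre-API-21 (Dalvik) devices.
    implementation 'androidx.multidex:multidex:2.0.1'
}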
Short answer:
Just remove the unnecessary jar files. For example, from the list you gave there are 8767 methods from org.openxmlformats.schemas.drawingml.x2006.main; if that package isn't needed, remove its jar file and your life will be easier.
Detailed answer:
On Titanium's official Jira, this bug is still "reopened", created a year ago. I don't think they are releasing a new version tomorrow. ( https://jira.appcelerator.org/browse/TIMOB-18082 )
Removing jar files that are actually used would cause runtime errors; but since these ones are unnecessary, no runtime error will occur without them.
Read the comments there, and also refer to: ADT: fail to build when there are too many packages and classes
and here: Can we create multi dex support builds in Titanium android?
I have developed a Loadable Kernel Module (LKM) for Android.
I use kzalloc:
device = kzalloc(ndevices * sizeof (*device), GFP_KERNEL);
It worked for a while, but after an update of my Android (since 4.1 it no longer works), I get the following error on insmod:
insmod module.ko
insmod: init_module 'module.ko' failed (No such file or directory)
DMESG says:
Unknown symbol malloc_sizes (err 0)
This has something to do with linux/slab.h; that's all I know.
I've googled for days on end and I'm very frustrated at not finding a solution to this problem that gets the LKM working again.
Can anyone help me out?
CONCLUSION:
The accepted answer is correct: try removing slab.h and declaring the missing functions as "extern", or, in your kernel source, run "make menuconfig" and change SLAB to SLUB (see the first comment on the answer for more details).
The remaining problems are handled in a new, more specific topic:
Interchangeability of compiled LKMs
So you need to tell us the kernel versions. But looking at Linux kernel versions and memory allocators, it appears the default mainline kernel switched from SLAB to SLUB:
By default, the Linux kernel used the SLAB allocation system until version 2.6.23, when SLUB allocation became the default.
Unless you're writing a module that depends on SLAB (which is highly unlikely), you probably don't want to be including the linux/slab.h header.
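If you want to confirm which allocator a given kernel was built with, checking its config is one way; a sketch, assuming the kernel exposes /proc/config.gz (not all Android kernels do):
# malloc_sizes is a SLAB-internal symbol, so a module built against SLAB
# headers fails to load on a SLUB kernel. Check which allocator is configured:
zcat /proc/config.gz | grep -E 'CONFIG_SLAB=|CONFIG_SLUB='
Whichever of the two prints "=y" is the allocator your module must be built against.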
I need to use the J48 classifier on Android, but I'm running into heap space problems. Is there a way to fix this? I get an error that states: Dalvik format failed: Failed to convert dex. PermGen space.
So you have a memory problem using J48 from Weka on Android.
I would try to diagnose this in the following order:
1. How much memory does your program consume? See here and here for Weka memory consumption.
2. Add more memory to the JVM (also covered in the earlier links).
3. Try running this on a better-resourced JVM: can it run on a desktop? Or is the problem unrelated to OS resources?
4. Tune your algorithm: build a smaller tree or prune it more heavily (see the sketch after this list).
5. Prune your dataset: remove unnecessary attributes.
6. Prune your dataset: use fewer instances.
7. Use a different algorithm.
8. If all else fails, implement your decision tree using a different library (scipy/Orange/KNIME/RapidMiner), or roll your own.
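For item 4, Weka's J48 exposes its pruning knobs directly; a minimal sketch (data stands for your already-loaded training Instances):
import weka.classifiers.trees.J48;
import weka.core.Instances;

public class SmallTree {
    // Build a deliberately smaller, more heavily pruned tree.
    static J48 buildSmallTree(Instances data) throws Exception {
        J48 tree = new J48();
        tree.setConfidenceFactor(0.1f); // below the 0.25 default => heavier pruning
        tree.setMinNumObj(10);          // at least 10 instances per leaf => smaller tree
        tree.buildClassifier(data);
        return tree;
    }
}
A smaller tree means fewer nodes held in memory, which may be the difference on a constrained Dalvik heap.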