I am using an Azure DevOps Pipeline for CI/CD for a React Native Android app.
It has been working great for a while, but in my latest release the Gradle build runs into the following error:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signProductionReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.FinalizeBundleTask$BundleToolRunnable
> Java heap space
The relevant part of my azure-pipelines.yml file looks like this:
pool:
  vmImage: "macos-latest" # I am using this image because I am also doing iOS builds (not sure if it's relevant to the problem)
jobs:
  - job: DeployAndroid
    steps:
      - task: Gradle@2
        inputs:
          gradleWrapperFile: "MyApp/android/gradlew"
          cwd: "MyApp/android"
          tasks: "bundleProductionRelease"
          publishJUnitResults: false
          javaHomeOption: "JDKVersion"
          sonarQubeRunAnalysis: false
"Java heap space" isn't a very descriptive error, but it seems reasonable to assume it's a memory issue. I attempted to increase the max JVM heap size by adding the gradleOptions argument:
- task: Gradle@2
  inputs:
    gradleOptions: "-Xmx3072m"
The default value is -Xmx1024m, so I thought tripling the memory (-Xmx3072m) might work. Unfortunately I am still getting the same error.
Does anyone have any other ideas on how I can fix this error?
I discovered there were several problems that needed to be solved to get this to work.
For some reason the gradleOptions argument in azure-pipelines.yml wasn't increasing the memory, even though the documentation shows it being used specifically for the -Xmx flag. (I imagine something about my setup is just overriding it)
Instead I added the following line to my gradle.properties file:
org.gradle.jvmargs=-Xmx10240m -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I found that the MacOS Azure images have the following specs:
Mac pros with a 3 core CPU, 14 GB of RAM, and 14 GB of SSD disk space
So, I set the Xmx flag (max JVM heap memory size) to a relatively high 10GB (10240m).
I don't actually know the details of -XX:MaxPermSize, but it is also related to memory (it capped the permanent generation on Java 7 and earlier, and is ignored on Java 8+). The other flags are just related to debugging.
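To double-check that the daemon actually picked up the larger heap, a quick throwaway task can print it (a minimal sketch; the printMaxHeap task name is mine, not part of the original setup):

task printMaxHeap {
    doLast {
        // Runtime.maxMemory() reflects the -Xmx of the JVM running the build
        println "Max heap: ${Runtime.getRuntime().maxMemory() / (1024 * 1024)} MB"
    }
}

Running ./gradlew printMaxHeap from MyApp/android should report a value close to the -Xmx setting if gradle.properties took effect.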
This ended up only being half of my problem, but the other half isn't particularly relevant to my original question.
I originally placed all the blame on Azure, but I realized that I had only run the debug bundle locally, and the production bundle was also failing with the same error on my local machine.
My other problem was specific to React Native. It resulted in the same error though, so I imagine both are somehow related.
The issue was that I had upgraded to a more recent version of React Native, but missed some of the changes that had been made to android/app/build.gradle.
I doubt the specifics of what I happened to be missing would be very helpful to anyone, but if you run into this error, I suggest you check the React Native Upgrade Helper to see if you're missing anything in your build.gradle file.
I tried to generate a code report using detekt, and when I execute the command below in a terminal
gradle detekt
it shows a build failure with the message below.
* What went wrong:
Execution failed for task ':app:detekt'.
> Build failed with 395 weighted issues.
As others have said in the comments, this means you have 395 issues in your code (kind of like lint warnings).
Detekt has a maxIssues property that determines whether to fail the build once your issues surpass the allowed number. What I did was search the whole project for maxIssues, which will take you to your detekt-config.yml or default-detekt-config.yml. There, you can change maxIssues to whatever you want.
In our old code base we had some 900 issues, so I changed mine from maxIssues: 0 to maxIssues: 1000. As we clean up the code, I hope to bring that number down.
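For reference, the relevant section of the config file looks roughly like this (a sketch; the rest of the file stays as generated):

build:
  maxIssues: 1000  # fail the build only when weighted issues exceed this number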
I'm currently trying to optimise the build time of an Android application I'm developing. Currently an initial build takes about a minute and a half, and an incremental build about a minute. I've tried all the recommendations from this page: https://developer.android.com/studio/build/optimize-your-build#optimize
We just managed to get rid of the annotation processors we previously used, but this does not decrease the initial or incremental build times; it just gives us the opportunity to use Instant Run, with which we previously had a lot of issues (e.g. not hot-swapping at all).
We did some profiling and found that more than half of the time is taken by the :app:packageProductionDebug task.
Here is a profiler sample of one of my incremental builds:
total: 58s
:app:packageProductionDebug 38.933s
:app:transformDexArchiveWithDexMergerForProductionDebug 6.697s
:app:transformClassesWithDexBuilderForProductionDebug 3.833s
:app:compileProductionDebugJavaWithJavac 2.891s
:app:transformClassesWithFirebasePerformancePluginForProductionDebug 1.530s
:app:processProductionDebugResources 1.500s
:app:compileProductionDebugKotlin 1.478s
What is this task doing? I imagine it is only packaging the previously compiled code into an APK. If I'm not wrong, why does this task take roughly two thirds of the total time? Can I do something to improve this?
So I found what was causing the package task to take so much time. I had these properties in my gradle.properties file:
org.gradle.daemon=true
org.gradle.jvmargs=-Xms1024m -Xmx5000m -Xcheck:jni -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
After removing those properties the package task runs in about a second, and the overall incremental build time is about 15 seconds. I have no idea why those properties caused such a drastic decrease in build performance, but I don't care, as long as I have a 15-second build. (If I had to guess, -Xcheck:jni is the likely culprit, since it enables expensive JNI argument checking in the build JVM.)
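For comparison, keeping the memory settings but dropping the suspect flag would look like this (a sketch based on the lines above):

org.gradle.daemon=true
org.gradle.jvmargs=-Xms1024m -Xmx5000m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8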
First-time Android builder here. I used to do a lot of roll-your-own back on FreeBSD in the day. Getting back into geekdom with Android.
I am trying to build android-7.0.0_r14 for the Nexus 6 (NBD90Z) to run under emulation. I plan to eventually build for my actual phone, and this config is pretty close. I am building on Ubuntu 18.04 LTS, which is newer than what the docs recommend. Maybe that is a bit adventurous.
Here is what I get when I run make.
... snip
build/core/base_rules.mk:316: warning: ignoring old commands for target `out/target/product/shamu/system/lib/soundfx/libqcomvoiceprocessing.so'
Starting build with ninja
ninja: Entering directory `.'
ninja: warning: multiple rules generate out/target/product/shamu/system/etc/gps.conf. builds involving this target will not be correct; continuing anyway [-w dupbuild=warn]
[ 0% 1/35600] Lex: libaidl-common <= system/tools/aidl/aidl_language_l.ll
FAILED: /bin/bash -c "prebuilts/misc/linux-x86/flex/flex-2.5.39 -oout/host/linux-x86/obj/STATIC_LIBRARIES/libaidl-common_intermediates/aidl_language_l.cpp system/tools/aidl/aidl_language_l.ll"
flex-2.5.39: loadlocale.c:130: _nl_intern_locale_data: Assertion `cnt < (sizeof (_nl_value_type_LC_TIME) / sizeof (_nl_value_type_LC_TIME[0]))' failed.
Aborted (core dumped)
ninja: build stopped: subcommand failed.
build/core/ninja.mk:148: recipe for target 'ninja_wrapper' failed
make: *** [ninja_wrapper] Error 1
A core dump for flex was not produced in spite of the error message given.
out/host/linux-x86/obj/STATIC_LIBRARIES/libaidl-common_intermediates/aidl_language_l.cpp does not exist. That entire folder is empty. It would seem that something is not downloading/copying the aidl_language_l.cpp.
Any ideas on what I might have messed up?
I am still a little confused at the complexity of git/repo/make/ninja/soong/lunch to conduct a build. It is likely that I missed something obvious.
Thanks,
Jason C. Wells
Just replace your make with export LC_ALL=C make, or put the export in your .bashrc.
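Spelled out, that is:

# force the C locale so the prebuilt flex does not trip the loadlocale assertion
export LC_ALL=C
make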
After I looked at this a little closer I realized the prebuilt prebuilts/misc/linux-x86/flex/flex-2.5.39 would dump core even with no arguments. I created a soft link to /usr/bin/flex. Compilation seems to be proceeding.
I haven't answered why the prebuilt was dumping core. My goal is to compile Android, not troubleshoot the tools.
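For anyone trying the same workaround, it amounts to something like this, run from the top of the AOSP checkout (the .orig backup name is my choice; the paths come from the log above):

# set aside the bundled flex and point the build at the system binary
mv prebuilts/misc/linux-x86/flex/flex-2.5.39 prebuilts/misc/linux-x86/flex/flex-2.5.39.orig
ln -s /usr/bin/flex prebuilts/misc/linux-x86/flex/flex-2.5.39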
I performed Snorky's steps. I deleted my output directory for libaidl-common_intermediates. I deleted my soft link and restored the android tree version of flex. I re-ran make at the top of the local repo. The build proceeded past the error above and stopped at a new error. It appears that Snorky's answer worked.
Doh! I'm new so S.O. didn't give credit for my upvote.
I am running a Jenkins build of an Android project on a Mac Mini (10.9.5). The Jenkins build is failing with error messages like this:
<package>.myTest > test_myTest FAILED
org.mockito.cglib.core.CodeGenerationException at test_myTest.java:65
Caused by: java.lang.reflect.InvocationTargetException at test_myTest.java:65
Caused by: java.lang.OutOfMemoryError at test_myTest.java:65
java.lang.OutOfMemoryError: PermGen space
This is sometimes followed by messages like
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 2"
16:47:17
16:47:17 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 4"
16:47:18
16:47:18 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 5"
16:47:18
16:47:18 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 6"
16:47:19
It usually fails at the same point in the build. According to the Jenkins wiki
Do you consistently see OOME around the same phase in a build? If so, maybe it just needs a bigger memory.
this may mean I just need more PermGen space.
The stackoverflow posts/blog posts I've read indicate that I need to increase the max PermGen size (-XX:MaxPermSize=1024M, for example). However, I'm not clear on where to do this.
I've changed this for GRADLE_OPTS and JAVA_OPTS in my Jenkins build environment, and I also added some options to garbage-collect PermGen as recommended here.
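In other words, the injected variables were along these lines (the exact values here are illustrative, not a copy of my configuration):

GRADLE_OPTS=-XX:MaxPermSize=1024M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC
JAVA_OPTS=-XX:MaxPermSize=1024M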
This seemed to be working--I had a few successful builds yesterday, but it's now failing again (with no changes that I'm aware of).
After reading this answer, I also changed the following line in my project's gradle.properties file.
org.gradle.jvmargs=-Xms1024M -Xmx2048M -XX:PermSize=512M -XX:MaxPermSize=2048M -XX:+CMSClassUnloadingEnabled -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
This hasn't fixed the problem.
Answers to similar questions like this and this make me think I may be approaching this the wrong way--should I be changing a computer setting for the Mac (10.9.5) which is running Jenkins? What is the correct way to modify the PermGen space?
Edit: I had previously thought that perhaps the environment variables weren't being set, but I verified that they appear under the build result's Environment Variables (jenkins/job/<Project>/146/injectedEnvVars/).
As Integrating Stuff said, it was necessary to increase the MaxPermSize for the unit tests. I found how to do so here in the "Running from Gradle" section.
android {
    testOptions {
        unitTests.all {
            jvmArgs '-XX:MaxPermSize=1024m' // prevent OOM (PermGen space) while running tests
        }
    }
    ...
}
The memory settings are set per JVM process, so you definitely do not need to set any OS variables or system-wide settings to fix this.
You first need to determine which process is throwing the error. I am assuming it is the Gradle build and not Jenkins itself.
If it is the Gradle build, you still do not know which process: by default, the Gradle Test task executes the tests in a separate JVM, so changing the values in gradle.properties will not help you in that case.
Instead, you need to configure your Gradle Test task, for example:
test {
    jvmArgs "-XX:MaxPermSize=2048m"
}
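The Test task also exposes its heap ceiling directly, so both limits can be set in one place (a sketch; the sizes are illustrative):

test {
    maxHeapSize = "2048m"            // max heap for the forked test JVM
    jvmArgs "-XX:MaxPermSize=2048m"  // PermGen cap (Java 7 and earlier)
}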
I just updated to Android Studio 2.0 Preview 8 and Gradle plugin 2.0.0-alpha8. I noticed that there's now a warning when the heap space is not big enough for in-process dexing.
As they say on the relevant page (http://tools.android.com/recent):
In 2.0.0-alpha8 we've added some automatic diagnostics for this: if the build process is too small, we switch back to out-of-process dexing and emit a build warning explaining how to bump up the Gradle daemon size. (We're also working on some additional related improvements for the next build after alpha8.)
When I built my project, I got the following error:
To run dex in process, the Gradle daemon needs a larger heap. It
currently has 3641 MB. For faster builds, increase the maximum heap
size for the Gradle daemon to more than 4g as specified in
dexOptions.javaMaxHeapSize. To do this, set org.gradle.jvmargs=-Xmx4g
as specified in dexOptions.javaMaxHeapSize in the project
gradle.properties. For more information see
https://docs.gradle.org/current/userguide/build_environment.html
Actually, I get a lot of these in a row, which surprises me, because this is what I have in my gradle.properties file (in my project home dir):
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
Also, in the build.gradle of my main module, I put:
android {
    dexOptions {
        incremental true
        javaMaxHeapSize "4g"
    }
}
So I don't understand where the builder/dexer/compiler is getting that 3641 MB value from.
Edit: builds are extremely slow. I disabled Instant Run and now things are faster: 48-50 seconds for a build. I still get the same error, but just once, not many times in a row.
I'm on Linux, my ulimit -a is the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31483
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31483
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Also, this is my studio.vmoptions file: https://gist.github.com/MarKco/1ae7918daf867a378a2f (I guess it's unedited). I could have a custom one, but I don't know where it would be.
I got it solved, or so it seems.
The last change I had made before I posted the question was the following:
I had
ANDROID_BUILD_TARGET_SDK_VERSION=22
ANDROID_BUILD_TOOLS_VERSION=22
ANDROID_BUILD_SDK_VERSION=22
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
and I changed it to
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I don't know why it wasn't changed automatically, since I've been using 23 as the target version for a long time. Anyway, after I changed from 22 to 23 I still didn't have good builds, but that seemed to be due to a Gradle sync still running in the background. I killed all the Gradle processes, kept version 23 in place of 22, disabled Instant Run, and performed a new synchronization. At first I kept seeing the "To run dex in process, the Gradle daemon needs a larger heap." error, but just once, not repeated as it happened before. Build after build, though, the build times got lower, until I got a 10-second build. I guess it's solved. I hope so, at least.
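Stopping stray daemons can also be done from the command line (this is standard Gradle, nothing project-specific):

./gradlew --stop   # stops all daemons started by this Gradle version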
P.S. In the end I changed the MaxPermSize according to what Henry said, so now my configuration is:
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8