I am running a Jenkins build of an Android project on a Mac Mini (10.9.5). The Jenkins build is failing with error messages like this:
<package>.myTest > test_myTest FAILED
org.mockito.cglib.core.CodeGenerationException at test_myTest.java:65
Caused by: java.lang.reflect.InvocationTargetException at test_myTest.java:65
Caused by: java.lang.OutOfMemoryError at test_myTest.java:65
java.lang.OutOfMemoryError: PermGen space
This is sometimes followed by messages like
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 2"
16:47:17
16:47:17 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 4"
16:47:18
16:47:18 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 5"
16:47:18
16:47:18 Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "/0:0:0:0:0:0:0:1:50340 to /0:0:0:0:0:0:0:1:50339 workers Thread 6"
16:47:19
It usually fails at the same point in the build. According to the Jenkins wiki
Do you consistently see OOME around the same phase in a build? If so, maybe it just needs a bigger memory.
this may mean I just need more PermGen space.
The Stack Overflow and blog posts I've read indicate that I need to increase the max PermGen size (-XX:MaxPermSize=1024M, for example). However, I'm not clear on where to do this.
I've changed this for GRADLE_OPTS and JAVA_OPTS so my Jenkins build environment looks like this:
As seen in the screenshot, I also added some options to garbage collect Perm Gen as recommended here.
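The screenshot is not reproduced here, but roughly speaking the injected variables looked like this (illustrative values, not the exact ones from my job):
GRADLE_OPTS=-Xmx2048m -XX:MaxPermSize=1024m -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
JAVA_OPTS=-Xmx2048m -XX:MaxPermSize=1024m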
This seemed to be working--I had a few successful builds yesterday, but it's now failing again (with no changes that I'm aware of).
After reading this answer, I also changed the following line in my project's gradle.properties file.
org.gradle.jvmargs=-Xms1024M -Xmx2048M -XX:PermSize=512M -XX:MaxPermSize=2048M -XX:+CMSClassUnloadingEnabled -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
This hasn't fixed the problem.
Answers to similar questions like this and this make me think I may be approaching this the wrong way--should I be changing a computer setting for the Mac (10.9.5) which is running Jenkins? What is the correct way to modify the PermGen space?
Edit: I had previously thought that perhaps the environment variables weren't being set, but I verified that they appear under the build result's Environment Variables page (jenkins/job/<Project>/146/injectedEnvVars/).
As Integrating Stuff said, it was necessary to increase the MaxPermSize for the unit tests. I found how to do so here in the "Running from Gradle" section.
android {
    testOptions {
        unitTests.all {
            jvmArgs '-XX:MaxPermSize=1024m' // prevent OOM (PermGen space) while running tests
        }
    }
    ...
}
The memory settings are set per JVM process, so you definitely do not need to set any OS variables or system-wide settings to fix this.
You first need to determine which process is throwing the error.
I am assuming it is the Gradle build and not Jenkins itself.
If it is the Gradle build, you still do not know which JVM process is running out of memory.
By default, the Gradle Test task executes the tests in a separate JVM, so changing the values in gradle.properties will not help you in that case.
Instead, you need to configure your Gradle Test task directly, for example:
test {
    jvmArgs "-XX:MaxPermSize=2048m"
}
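If the build has more than one test task, the same setting can be applied to all of them at once; a minimal sketch (the heap value is illustrative):
tasks.withType(Test) {
    // every forked test JVM gets the same PermGen ceiling and an explicit heap limit
    jvmArgs '-XX:MaxPermSize=2048m'
    maxHeapSize = '2g'
}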
Related
I am using an Azure DevOps Pipeline for CI/CD for a ReactNative Android app.
It has been working great for a while now, but in my latest release the Gradle build is running into the following error:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signProductionReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.FinalizeBundleTask$BundleToolRunnable
> Java heap space
The relevant part of my azure-pipelines.yml file looks like this:
pool:
  vmImage: "macos-latest" # I am using this image because I am also doing iOS builds (not sure if it's relevant to the problem)

jobs:
  - job: DeployAndroid
    steps:
      - task: Gradle@2
        inputs:
          gradleWrapperFile: "MyApp/android/gradlew"
          cwd: "MyApp/android"
          tasks: "bundleProductionRelease"
          publishJUnitResults: false
          javaHomeOption: "JDKVersion"
          sonarQubeRunAnalysis: false
Java heap space isn't a very descriptive error, but it seems reasonable to assume it is a memory issue. I attempted to increase the max JVM memory by adding the gradleOptions argument:
- task: Gradle@2
  inputs:
    gradleOptions: "-Xmx3072m"
The default value is -Xmx1024m, so I thought tripling the memory (-Xmx3072m) might work. Unfortunately I am still getting the same error.
Does anyone have any other ideas on how I can fix this error?
I discovered there were several problems that needed to be solved to get this to work.
For some reason the gradleOptions argument in azure-pipelines.yml wasn't increasing the memory, even though the documentation shows it being used specifically for the -Xmx flag. (I imagine something about my setup is just overriding it)
Instead I added the following line to my gradle.properties file:
org.gradle.jvmargs=-Xmx10240m -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I found that the MacOS Azure images have the following specs:
Mac pros with a 3 core CPU, 14 GB of RAM, and 14 GB of SSD disk space
So, I set the Xmx flag (max JVM heap memory size) to a relatively high 10GB (10240m).
I don't actually know what XX:MaxPermSize does, but it is also related to memory. The other flags are just related to debugging.
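(For reference: -XX:MaxPermSize caps the permanent generation, a memory region separate from the heap on Java 7 and earlier; Java 8 and later removed PermGen entirely and simply ignores the flag. Assuming the build runs on a Java 8+ JDK, a trimmed sketch of the same line would be:)
org.gradle.jvmargs=-Xmx10240m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8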
This ended up only being half of my problem, but the other half isn't particularly relevant to my original question.
I originally placed all the blame on Azure, but I realized that I had only run the debug bundle locally, and the production bundle was also failing with the same error on my local machine.
My other problem was specific to React Native. It resulted in the same error though, so I imagine both are somehow related.
The issue was that I had upgraded to a more recent version of React Native, but missed some of the changes that had been made to android/app/build.gradle.
I doubt the specifics of what I happened to be missing would be very helpful to anyone, but if you run into this error, I suggest you check the React Native Upgrade Helper to see if you're missing anything in your build.gradle file.
While running lint a strange error occurred. This appears to happen after an androidx.lifecycle update from 2.2.0 to 2.3.0:
../../src/main/java/my/project/MyService.kt: Unexpected failure during lint analysis of MyService.kt
(this is a bug in lint or one of the libraries it depends on)
Message: org.jetbrains.uast.UastErrorType cannot be cast to com.intellij.psi.PsiClassType
The crash seems to involve the detector androidx.lifecycle.lint.NonNullableMutableLiveDataDetector.
You can try disabling it with something like this:
android {
    lintOptions {
        disable "NullSafeMutableLiveData"
    }
}
Stack: ClassCastException:NonNullableMutableLiveDataDetector.visitMethodCall(NonNullableMutableLiveDataDetector.kt:103)
←UElementVisitor$DelegatingPsiVisitor.visitMethodCallExpression(UElementVisitor.kt:1079)
←UElementVisitor$DelegatingPsiVisitor.visitCallExpression(UElementVisitor.kt:1059)
←UCallExpression$DefaultImpls.accept(UCallExpression.kt:85)
←UCallExpressionEx$DefaultImpls.accept(UCallExpression.kt:-1)
←KotlinUSimpleReferenceExpression$KotlinAccessorCallExpression.accept(KotlinUSimpleReferenceExpression.kt:129)
←KotlinUSimpleReferenceExpression.visitAccessorCalls(KotlinUSimpleReferenceExpression.kt:116)
←KotlinUSimpleReferenceExpression.accept(KotlinUSimpleReferenceExpression.kt:83)
You can set environment variable LINT_PRINT_STACKTRACE=true to dump a full stacktrace to stdout.
This issue type represents a problem running lint itself. Examples include failure to find bytecode for source files (which means certain detectors could not be run), parsing errors in lint configuration files, etc.
These errors are not errors in your own code, but they are shown to make it clear that some checks were not completed.
To suppress this error, use the issue id "LintError" as explained in the Suppressing Warnings and Errors section.
When I disable NullSafeMutableLiveData as suggested, the error doesn't occur anymore, great! But I wonder where this issue comes from and whether there's a better solution than simply ignoring that specific check completely. Is it a bug in the androidx.lifecycle dependency that I could report, or could it be conflicting with an error in my own project that I could fix? (If so, any advice on finding out where?)
Note: this only happened when lint was run on Bitrise; I didn't encounter it when running lint in Android Studio. Not sure if that is somehow related, though.
After upgrading to Android Studio 3.4 (and also with Android Studio 3.4.1), I am facing an error when I try to build an APK (a normal run works):
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
I tried all the suggested changes to gradle.properties without success.
It occurs on the task app:transformClassesAndResourcesWithR8ForRelease.
Any help would be appreciated :-)
Maybe you need to increase the heap size, as described here: https://stackoverflow.com/a/25013822/6041024
dexOptions {
    javaMaxHeapSize "2g"
}
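Since the failure is in the R8 task, which runs inside the Gradle daemon rather than in a forked dex process, raising the daemon's own heap may be what actually matters; a sketch of the gradle.properties line (the 4g value is illustrative):
org.gradle.jvmargs=-Xmx4g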
I finally found a solution which is easier than every other test / trick :-)
I removed the org.gradle.jvmargs= line from the gradle.properties in the project folder, but what did the magic was removing it also from the file (under Windows)
C:\Users\[YOUR_USERNAME]\.gradle\gradle.properties
and everything started working as expected, and faster than before :-)
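That behaviour matches Gradle's documented property precedence: when the same property is set in both files, the one in the Gradle user home wins, so a stale org.gradle.jvmargs there silently overrides whatever the project sets. Roughly (Windows default paths assumed):
# per the Gradle build environment docs, highest precedence first:
#   %USERPROFILE%\.gradle\gradle.properties   <- user level, overrides the project file
#   <project>\gradle.properties               <- project level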
I just updated to Android Studio version 2.0 Preview 8 and gradle plugin 2.0.0-alpha8. I noticed that now there's a warning in case the heap space is not big enough for dexing.
As they say on the relevant page (http://tools.android.com/recent):
In 2.0.0-alpha8 we've added some automatic diagnostics for this: if the build process is too small, we switch back to out-of-process dexing and emit a build warning explaining how to bump up the Gradle daemon size. (We're also working on some additional related improvements for the next build after alpha8.)
When I went to build my project, I got the following error:
To run dex in process, the Gradle daemon needs a larger heap. It currently has 3641 MB. For faster builds, increase the maximum heap size for the Gradle daemon to more than 4g as specified in dexOptions.javaMaxHeapSize. To do this, set org.gradle.jvmargs=-Xmx4g as specified in dexOptions.javaMaxHeapSize in the project gradle.properties. For more information see https://docs.gradle.org/current/userguide/build_environment.html
Actually, I get a lot of these in a row, which surprises me because this is what I have in my gradle.properties file (in my project home dir):
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
Also, in the build.gradle of my main module, I put:
android {
    dexOptions {
        incremental true
        javaMaxHeapSize "4g"
    }
}
So I don't understand where the builder/dexer/compiler is getting that 3641 MB value.
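(One way to check, as a rough sketch: the warning is presumably comparing against what the daemon JVM itself reports as its maximum heap, which is usually somewhat less than the raw -Xmx value; a one-line diagnostic in build.gradle prints the number the daemon actually sees:)
// prints the daemon's reported max heap in MB when the build script is evaluated
println "Gradle daemon max heap: ${Runtime.runtime.maxMemory().intdiv(1024 * 1024)} MB"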
Builds are extremely slow.
I disabled Instant Run and now things are faster: 48-50 seconds for a build. I still get the same error, but just once and not many times in a row.
I'm on Linux, my ulimit -a is the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31483
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31483
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Also, this is my studio.vmoptions file: https://gist.github.com/MarKco/1ae7918daf867a378a2f I guess it's unedited. I could have a custom one, but I don't know where it could be.
I got it solved, or so it seems.
The last change I had made before I posted the question was the following:
I had
ANDROID_BUILD_TARGET_SDK_VERSION=22
ANDROID_BUILD_TOOLS_VERSION=22
ANDROID_BUILD_SDK_VERSION=22
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
and I changed it to
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=4096m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I don't know why it wasn't changed automatically, since I've been using 23 as the target version for a long time. Anyway, after I changed from 22 to 23 I still didn't have good builds, but that seemed to be due to a Gradle sync still running in the background. I killed all the Gradle processes, kept version 23 in place of 22, disabled Instant Run and performed a new synchronization. At first I kept seeing the "To run dex in process, the Gradle daemon needs a larger heap." error, but just one time and not repeated as it happened before. Build after build I saw build times getting lower, until I got a 10-second build. I guess it's solved. I hope so, at least.
P.S. In the end I changed the MaxPermSize according to what Henry said, so now my configuration is:
ANDROID_BUILD_TARGET_SDK_VERSION=23
ANDROID_BUILD_TOOLS_VERSION=23
ANDROID_BUILD_SDK_VERSION=23
ANDROID_BUILD_MIN_SDK_VERSION=16
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
I'm currently building an Android application using Ant on a Jenkins server.
DexGuard is set to run on release in the custom_rules.xml.
Currently there is an issue when DexGuard tries to convert a method:
[dexguard] Unexpected error while converting:
[dexguard] Class = [o/?]
[dexguard] Method = [?(Ljava/lang/String;)Lo/?;]
[dexguard] Exception = [java.lang.IllegalStateException] (Variable v17 too large for instruction [neg-int v17, v17])
[dexguard] java.lang.IllegalStateException: Variable v17 too large for instruction [neg-int v17, v17]
...
Stack trace
...
[dexguard] Not converting this method
My question is: is there a way to get DexGuard to exit with an error status so that either Ant or Jenkins can mark the build as failed?
At the moment it simply prints the stack trace and continues.
I am currently using the Text-finder plugin for Jenkins as a post-build step to match the DexGuard exception; if found, it downgrades the build to failed.
DexGuard currently ignores methods that it can't convert from Java bytecode to Dalvik bytecode, for any reason -- notably corrupt input code. In this case, it looks more like a bug in DexGuard itself. We'll fix it as soon as possible, and we'll consider adding a flag to stop with an error status.
(I am the lead developer of ProGuard and DexGuard)