I have written many unit and instrumented tests for my Android app. So far, I only run these against the debug build variant. Is it necessary to run tests against the release build variant? What difference could there be that might give different test results? The main one I can think of is having ProGuard enabled, which I haven't done yet. What will ProGuard do that makes it necessary to run my test suite? What other issues should I be aware of that require testing the release build variant?
Is it necessary to run tests against the release build variant?
I think you should.
What difference can there be that might give different results from testing?
A couple of examples:
You might have code that uses fields of the BuildConfig class to enable/disable certain workflows. Some libraries might also use that, especially BuildConfig.BUILD_TYPE. It's common to do things like:
if (BuildConfig.BUILD_TYPE.equals("debug")) {
    ACRA.init(...);
    Stetho.init(...);
    ...
}
but have code fail in release builds due to trying to use components/libraries that were not initialized correctly.
As you mentioned, ProGuard might throw away some of your classes unless it's properly configured (e.g. imagine you forgot to add rules for some 3rd party library). Running your tests against the release variant ensures that the ProGuard configuration is correct.
What will ProGuard do that makes it necessary to run my test suite?
ProGuard might remove classes/methods/fields that are, for example, loaded via reflection, unless you add the @Keep annotation to them. It might also rename classes used by libraries like Realm, Retrofit, Gson or Volley, resulting in all unit and integration tests passing on debug builds (where ProGuard isn't enabled) but failing on release builds. You definitely want to test these before shipping out a new APK.
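As a hedged illustration (the class name and call site below are made up), a class that is only reached via reflection has no ordinary references for ProGuard to find, so it needs @Keep (from the support/AndroidX annotations) or an equivalent -keep rule:

import androidx.annotation.Keep;

// Only ever instantiated via Class.forName(...), so ProGuard sees no
// direct references to it and would otherwise strip or rename it.
@Keep
public class ReflectionLoadedPlugin {
    public void start() {
        // plugin start-up logic
    }
}

// Elsewhere in the app (hypothetical call site):
// Object plugin = Class.forName("com.example.ReflectionLoadedPlugin").newInstance();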
What other issues should I be aware of that require testing the release build variant?
The release build might also apply PNG crunching, specify different values via the buildConfigField method in Gradle, apply APK splits by ABI or density, or enable/disable multidex, among other things. All of these can affect the way your app works, so why not be on the safe side and test them too?
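As a rough sketch (the field names and values below are invented), those differences could look like this in build.gradle:

android {
    buildTypes {
        debug {
            buildConfigField "String", "API_URL", '"https://staging.example.com/"'
        }
        release {
            buildConfigField "String", "API_URL", '"https://api.example.com/"'
            multiDexEnabled true   // release-only setting that debug-only testing never exercises
        }
    }
    splits {
        density {
            enable true            // per-density APK splits change what ends up in each APK
        }
    }
}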
Another frequent problem that you can catch this way is code accidentally placed in the wrong location (e.g. /src/debug/java/), so that it happens to be loaded in debug builds but not in other variants.
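If you do decide to run your instrumented tests against the release variant, one way to wire that up (a sketch; the release build type then needs a signingConfig so the test APK can be installed) is Gradle's testBuildType option:

android {
    // Run connectedAndroidTest against the release build type instead of
    // the default "debug" one; the release type must be signed for this.
    testBuildType "release"
}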
Related
I have been working on Android app development for the past 4 months and have now developed my first app. Since it is easy to decompile an APK, we should use dex or ProGuard for shrinking and protection. The problem is that I have read in an article that ProGuard may change the code, so sometimes an app may misbehave. This is my first app and I don't want to mess it up. So before using ProGuard in my app I have a few questions:
1. What are the points to keep in mind before using ProGuard?
2. I read that you can use the keep rule, but then ProGuard will not obfuscate that code and it will remain the same; so if I keep all my code, ProGuard won't actually do anything.
3. How do I make sure that the app functions the same after using ProGuard as it did before?
4. Is it necessary to sign the app or create a key in order to use ProGuard?
Question 1: What to keep in mind
The docs point out some things to be aware of when using ProGuard:
Be aware that code shrinking slows down the build time, so you should
avoid using it on your debug build if possible. However, it's
important that you do enable code shrinking on your final APK used for
testing.
After ProGuard shrinks your code, reading a stack trace is difficult (if not impossible) because the method names are obfuscated.
I believe this answers question 3
The key word here is test, test, test! The moment you create your release APK, test the functionality against your use cases to see if the application is still running the way it should.
If you don't have tests yet, I would recommend writing at least some unit tests before you release, and running them against the ProGuard-processed app.
Question 4: No, you do not need a key to use ProGuard. I have used it on my debug builds before.
So your typical release build variant could look something like this:
// Android Studio 3.0.1 Canary
release {
    postprocessing {
        removeUnusedCode true
        removeUnusedResources true
        obfuscate true
        optimizeCode true
        proguardFile 'proguard-rules.pro'
    }
}
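If you are not on the newer postprocessing DSL, the more widely used equivalent (a sketch assuming the default Android ProGuard file plus your own rules file) is:

android {
    buildTypes {
        release {
            minifyEnabled true       // shrink and obfuscate code
            shrinkResources true     // drop resources that end up unused
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                    'proguard-rules.pro'
        }
    }
}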
I am using various services in my Android app for which I need user IDs and keys. I could store all of the keys in my strings.xml file. However, since I have two different server environments (production and debug), I need to figure out a way of maintaining two different sets of keys based on the environment.
Is there a standard way of maintaining keys for an Android app?
You are looking for the Gradle feature called build variants. This will let you have, for example, a different strings.xml for release builds and another for debug builds. See the docs:
https://developer.android.com/tools/building/configuring-gradle.html
Build variants are specific builds that you can produce from Gradle,
based around shared core source code. While a standard app may have a
debug and release build type, you can expand on this by adding flavor
dimensions.
Read the official guides: Configuring Gradle Builds and Using Gradle Build Variants.
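A minimal sketch of the idea (the resource name and placeholder values below are hypothetical): either keep a different strings.xml under src/debug/res/values/ and src/release/res/values/, or inject the values from Gradle per build type:

android {
    buildTypes {
        debug {
            // generated into R.string.api_key for debug builds
            resValue "string", "api_key", "DEBUG_ENVIRONMENT_KEY"
        }
        release {
            // generated into R.string.api_key for release builds
            resValue "string", "api_key", "PRODUCTION_KEY"
        }
    }
}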
I received some legacy app code (not developed by me, but by another team, with no documentation), which has 20+ dependencies in build.gradle.
Now I want to clean up unused libraries/dependencies by removing them from build.gradle.
I searched on Google and came across this project for resource shrinking. But it seems to be intended for removing unused resources from the packaged app at build time, and it also removes resources from libraries you depend on if they are not actually needed by your application.
Also, I use ProGuard for obfuscation and shrinking, in conjunction with shrinkResources true in build.gradle.
My intention is to remove unused Libraries/dependencies from build.gradle itself, without breaking app functionality.
Is there a way or tool which shows which library is safe to remove without breaking the app functionality?
With 20+ dependencies you don't need any tooling and can do a manual check.
I would proceed like this:
Comment out all dependencies and check what fails (see below)
Uncomment the dependency that causes the failure
Repeat
This way you might also notice dependencies that are seldom used or can be replaced with standard libraries or other libraries that you use in the project.
Here are the things that will indicate that a dependency is required (ordered from the fastest feedback loop to the slowest):
compilation errors
unit test errors
integration / system / end-to-end / device test errors (whatever you use and call them)
application functionality at runtime
application performance at runtime
Runtime dependencies can be especially tricky. For example, your code might not depend on a library, but this library provides a runtime implementation for some other library you depend on. Removing such a dependency will only be visible at runtime as missing functionality or performance issues.
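A hedged example of such a runtime-only coupling (library coordinates and versions are only for illustration): your code compiles against an API artifact while a separate artifact supplies the implementation at runtime, so removing the latter breaks nothing at compile time and only shows up as missing behaviour later:

dependencies {
    implementation 'org.slf4j:slf4j-api:1.7.36'    // what the code compiles against
    runtimeOnly   'org.slf4j:slf4j-android:1.7.36' // binding discovered at runtime;
                                                   // removing it silently disables logging
}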
Instead of commenting out all dependencies, I would go the other way around: comment out one dependency at a time and see what breaks. This way you also get a grasp of the use cases of each dependency, because the IDE will point you to the place where the code broke. If nothing breaks after commenting out a dependency, you'll know that it's not used. Another thing you could potentially do is analyze an unobfuscated release .apk, where all unused dependencies will be missing but the package structure will be preserved.
If you mean finding an unused library or import, you can easily check with "Ctrl + Alt + Shift + I" and typing "unused import".
You can then see all the unused imports.
Finding libraries and resources used in an Android app comes up in several contexts.
For the apps published in Google Play, AppBrain maintains reverse lookups, from the library to the more popular apps that use it. For example, apps using a newish 2D game library Godot.
Apktool will decode the APK directly.
The author instead wants to find (unused) resources, starting from the source code and the build process. Gabriele Mariotti above links to the question, whose accepted answer provides detailed information on use of minifyEnabled and shrinkResources in Gradle configuration.
Review Shrinking Android app and ProGuard vs R8.
My unit tests are failing with "Method d in android.util.Log not mocked", but only when I run testDebug.
If I run testRelease everything is fine and the tests pass correctly.
Does anyone know why this is happening?
The same thing happens whether I run Gradle from the console or from Android Studio.
Here is an explanation of how I solved this, for future reference.
The problem with the tests failing in debug but passing in release was due to the fact that Log.d (and friends from the Android framework) were not actually mocked.
The reason it works when built as release is that our logging was conditional on a property from the build config. Basically we have if (BuildConfig.type != "Release") Log.d(...), and since the value is a compile-time constant the compiler removes this block, so Log.d never gets called when testing release.
To mock the static method Log.d I used PowerMock. The mocking itself was easy, but setting up PowerMock is really a hassle, so there are probably better ways to do it.
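For reference, a minimal sketch of that PowerMock setup (the test class and the tested call are hypothetical):

import android.util.Log;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Log.class)
public class LoggingTest {

    @Test
    public void logCallsAreMockedAway() {
        // Replace android.util.Log's static methods with no-op mocks so the
        // "Method d in android.util.Log not mocked" error can no longer occur.
        PowerMockito.mockStatic(Log.class);

        // Exercise the code under test that calls Log.d(...) here.
        Log.d("TAG", "this call now goes to the static mock");
    }
}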
Checkout "Method ... not mocked" link on Android Studio Project Site.
It says:
The android.jar file that is used to run unit tests does not contain any actual code - that is provided by the Android system image on real devices. Instead, all methods throw exceptions (by default). This is to make sure your unit tests only test your code and do not depend on any particular behaviour of the Android platform (that you have not explicitly mocked e.g. using Mockito). If that proves problematic, you can add the snippet below to your build.gradle to change this behaviour:
build.gradle
android {
    // ...
    testOptions {
        unitTests.returnDefaultValues = true
    }
}
We are aware that the default behavior is problematic when using classes like Log or TextUtils and will evaluate possible solutions in future releases.
I just used the above to get rid of the exception for now.
I've needed to recently introduce ProGuard on Android because of issues with Scala on Android. I need ProGuard for its shrinking feature, which removes classes presumed to be unused. I'm very concerned about the impact of removing classes on testability.
As it stands, I write unit tests that run on the host and acceptance tests that run the fully integrated application on the Android platform.
Normally, I would be comfortable with relatively complete unit test coverage and spotty acceptance test coverage. However, because I use Guice dependency injection heavily in my code, it has so far been my experience that ProGuard removes code in a manner that is difficult for me to predict. Because of this, it is very likely to introduce bugs.
This leads me to believe that I need to write acceptance/platform tests that achieve full coverage because at any point, there may be a missing class.
Do others have this experience? If so, what has been your testing strategy? Or with experience, do you become more confident that the classes that ProGuard is removing are truly unneeded?
ProGuard will not break your application unless it attempts to use reflection or Class#forName on removed classes and/or obfuscated members.
In my experience (also with obfuscated Scala on Android), it is really easy to spot problems that ProGuard causes in your Android application using simple smoke tests. You know which libraries you include in your project. If some of them use reflection or Class#forName, perform smoke tests on them, then add keep rules for the necessary classes/members to the ProGuard configuration.
Remember also that you can automate testing of your obfuscated project using ActivityInstrumentationTestCase2 and the emulator. If you plan to use ProGuard on your project, always perform instrumentation testing on the obfuscated APK.
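A minimal sketch of such an automated smoke test (MainActivity stands in for your own launcher activity):

import android.test.ActivityInstrumentationTestCase2;

public class MainActivitySmokeTest extends ActivityInstrumentationTestCase2<MainActivity> {

    public MainActivitySmokeTest() {
        super(MainActivity.class);
    }

    // Launching the obfuscated activity is already a useful smoke test:
    // missing or renamed classes typically crash right here.
    public void testActivityLaunches() {
        assertNotNull(getActivity());
    }
}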
In conclusion: fear not. ProGuard-related problems are relatively easy to spot.
We've been both unit testing and "fully" testing our ProGuard-ed application for quite a while now, and we've had no "real" problems. The only issues we run into are when we use some library methods in our tests that aren't used in the main application; in these cases ProGuard removes that code from the libraries, and we have to manually add the specific methods to proguard.cfg.
Oh, and we also use Guice :)