Is there a way to use ProGuard to comment out a line of source code so that it exists for tests but does not get into the release build?
I am working on an Android app and have ported tests to Google Espresso for instrumentation tests. As a result, I had to add some code to the /src/ tree to notify Espresso when the app is busy: specifically, a broadcast intent triggered by a static (and stateless) method. I would like production code not to create broadcasts that only tests ever listen to.
IdlingResourceBroadcaster.onBusy(getActivity(), MyActivity.class.getName());
The solution I am looking for would be akin to C's #ifdef...#endif, which would wipe out the one-liner above completely, or alternatively akin to Log.v(), where the one-liner remains but the method body is treated as a NOOP.
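To make that concrete, the kind of rule I imagine (assuming ProGuard's -assumenosideeffects option, the one commonly used to strip Log.v() calls, can be pointed at my own class; the package name below is made up) would look something like:

# Hypothetical rule: tell ProGuard the call has no side effects so the optimizer
# removes it from release builds. Only takes effect when optimization is enabled
# (i.e. not with a -dontoptimize configuration). Package name is a placeholder.
-assumenosideeffects class com.example.testing.IdlingResourceBroadcaster {
    public static void onBusy(...);
}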
I am not familiar with ProGuard, but from what I have read this does not seem possible. Any suggestions or links to similar projects are much appreciated.
Related
I'm attempting to add some test automation to my XF (Xamarin.Forms) app, but I'm having some difficulty as I've never used Appium before, so I'm hoping that someone with experience may be able to help with my issues.
When I'm setting up my tests - if I set the "automationName" to be "UiAutomator2", then I can launch my app without problems and interact with the UI fine.
I need to do some tests with specific methods inside my app, but Appium only allows interaction with the UI. Doing some searching, I found that Appium contains an Espresso driver which is supposed to allow interaction with specific methods (this sounds exactly what I'm after).
The suggestion is to change the "automationName" from "UiAutomator2" to "Espresso" - the example I was going through in the Appium documentation was a simple test that just launched the app. It suggested that just by changing this setting, the same test would work regardless.
I have created a simple test that just launches my app - this works fine with "UiAutomator2" but as soon as I change to "Espresso", my app doesn't launch.
The Appium server mentions not being able to find the signed .apk.
According to the documentation, this should all be working fine, but as it isn't, I'm guessing there's something else I need to do. I'm hoping that someone with experience with this will be able to help me get the tests to work.
I don't know whether I need to add a reference to Espresso somewhere in my XF app or if I need to build the .apk in a special way that will make it work.
I would be grateful for any advice or help on this matter.
IMHO there are better options to achieve what you want. I'd always test the methods that are not platform specific separately, e.g. with NUnit. You can run unit and integration tests created with NUnit with the NUnit GUI, or with ReSharper if you've got a license. Unless you are using very exotic functions (or relying on timings, etc.) this should work well across all platforms.
If you are not sure that your methods will behave the same across all platforms (better to be sure), I'd create a separate app that runs tests on those methods on an actual device. There is a runner for NUnit tests for Xamarin.Forms that you could use for this purpose. The last release is from 2017, but I'd give it a shot. This way you can make sure that your methods work as intended, while avoiding having to access them via a framework that is aimed at UI tests.
I'll be developing an Android app in Kotlin and I'm trying to figure out how to initialize emulated dependencies. For example, the app will make API calls to a server, obtain the user's location from a location provider, pull down images from a content management system, store data locally in a database and in Android's Shared Preferences, and do math based on the current date/time. Thus there are a lot of external dependencies I want to emulate, including the current date/time so I can verify age calculation, etc.
My goal for testing is just to validate my app's screens using an Android instrumented test. I don't want any dependency on real external systems because testing those systems is the responsibility of the developers of those systems.
While reading Android's documentation Consider whether to use test doubles, I noticed it offered a good tip: "Tip: Check with the library authors to see if they provide any officially-supported testing infrastructures, such as fakes, that you can reliably depend on." However, the documentation doesn't really explain how to initialize a 3rd party test infrastructure.
Below are what I understand so far about my options, but they all come back to a fundamental thing I don't understand: how does an Android app under test know it should operate in a test mode vs. a production mode?
Mocking such as Mockito or MockK: Mocking seems like a special case of Dependency Injection tailored for testing. The examples I've seen involve testing a class or a method, not a full scale system. The examples typically show how you mock a class and pass it to the class/method under test. But with a full scale system, the test code operates on widgets referenced via Espresso. There is no access to the classes where the logic is. My impression is mocking is for unit testing, not UI testing. But maybe someone can explain how to use mocking for UI testing:
a) Suppose an external dependency is initialized deep in the call stack. If I define a mock in my test code's setup function (e.g. a method annotated with @Before), how do I pass that down to the place in the code that depends on it?
b) I keep reading that mocks don't work in Kotlin because Kotlin classes are final by default. There seem to be a few workarounds, but does Google/Android officially recommend one of them? (I haven't seen it in their documentation.)
Dependency Injection such as Dagger 2: If mocking isn't viable for UI testing, then should I use Dependency Injection? From what I understand, it seems Dagger 2 deals with issue 1.a above by defining a top-level Component and a tree of Modules which can provide dependencies at any layer of the stack. For testing, it seems like I would just provide a different Component that emulates the real dependencies.
a) In an Android instrumented test, where do I instantiate a Dagger 2 Component designed for testing? How do I make sure that Component is used rather than the Component intended for production?
Prepare before launching the test: I can see how I could customize build.gradle to prepare my test environment before my application is launched. For example, I could pass a flag to my app so that when the Application's onCreate() gets called, I can configure my system to prepare emulated dependencies via Dependency Injection, mocking, or even just a custom implementation. For example, some external dependencies have a test mode where I would need to pass a flag to them so they work in test mode. I'm not clear how that sort of thing reconciles with Dependency Injection or mocking, but I could see using those mechanisms as a wrapper to pass the test flag or not. In the following post someone wanted to mock a location provider, and to do that they modified their build.gradle file to set things up before the Android test infrastructure started.
How to set Allow Mock Location on Android Device before executing AndroidTest with uiautomator and espresso?
In conclusion, I want to test a Kotlin Android app's UI using an Android instrumented test with Espresso, but I don't know how to set up the test so that external dependencies use emulation code rather than production code. Should I use mocking, Dependency Injection, or customization via build.gradle? Can someone help me get my thinking on track?
After much searching, I've discovered that the Android ActivityTestRule allows you to defer launching the Activity. This gives the test code time to initialize emulated dependencies as demonstrated in Fast Mocked UI Tests on Android Kotlin.
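A rough sketch of that pattern in Java (class names like MainActivity and TestDependencies are made up; ActivityTestRule lives in androidx.test.rule, or android.support.test.rule in older setups):

import android.content.Intent;
import androidx.test.rule.ActivityTestRule;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

public class MainScreenTest {

    // The third constructor argument (launchActivity = false) defers the launch
    // until launchActivity() is called explicitly from the test.
    @Rule
    public ActivityTestRule<MainActivity> activityRule =
            new ActivityTestRule<>(MainActivity.class,
                    /* initialTouchMode */ false, /* launchActivity */ false);

    @Before
    public void installFakes() {
        // Hypothetical hook: swap real dependencies (clock, API client, location
        // provider, ...) for emulated ones before the Activity exists.
        TestDependencies.install();
    }

    @Test
    public void screenRendersWithEmulatedDependencies() {
        // Launch only now, so onCreate() picks up the fakes installed above.
        activityRule.launchActivity(new Intent());
        // ... Espresso assertions on the screen go here ...
    }
}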
I've got this Android project with the default Gradle tasks. In the project, there is an androidTest package containing integrationTests and uiTests. I also have two Kotlin classes, each containing a suite of test classes to be called.
However, ./gradlew connectedAndroidTest runs both integrationTests and uiTests, and I want to separate them. I came up with multiple solutions:
Android Studio's run configurations. However, these aren't checked into VCS, so we have no access to them on our Jenkins build server.
Add Gradle tasks in the Groovy language; however, it's not encouraged to call another Gradle task from within a new task.
So I'm looking for a way to only test either integrationTests or uiTests. How can I do this?
I'm going to give you a cheap and cheerful answer now. Maybe someone else will be able to provide a fuller one.
Since all the tests appear to be part of the same source set, you need to distinguish between them some other way. The most appropriate solution is whatever mechanism your testing library has for grouping, which you can then utilise from Gradle.
Alternatively, use something like a naming convention to distinguish between UI tests and integration tests.
What you do then depends on how you want the build to deal with these different categories. The main options include:
Using test filtering from the command line — via the --tests option — to run just integration or UI tests. Note that filtering only works via class name, so you'd have to go with the naming convention approach.
Configure the appropriate Test task — is that connectedAndroidTest? — so that if you set a particular project property, it runs either the integration tests or the UI tests based on the value of that property. This involves using an if condition in the configuration, and it works with both filtering and grouping (a rough sketch follows after this list).
Add two extra Test tasks, one that executes the integration tests and one that executes the UI tests. You would leave connectedAndroidTest untouched. This is my preferred approach, but requires a bit more code than the others.
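For what it's worth, here is a rough Groovy sketch of the project-property option for plain JVM Test tasks. The property name and the *UiTest / *IntegrationTest naming convention are assumptions; note that connectedAndroidTest is an instrumented-test task rather than a standard Test task, so it doesn't expose this filter API and would need to be driven differently (e.g. via instrumentation runner arguments or annotations).

// Sketch only: property name and naming convention are assumptions.
// Usage: ./gradlew test -PtestType=ui   (or -PtestType=integration)
tasks.withType(Test).configureEach {
    if (project.hasProperty('testType')) {
        filter {
            if (project.property('testType') == 'ui') {
                includeTestsMatching '*UiTest'
            } else {
                includeTestsMatching '*IntegrationTest'
            }
        }
    }
}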
This answer is missing a lot of detail on how to implement those solutions, but I'm afraid filling that detail out is too time-consuming for me right now. As I said, hopefully someone will come along with a fuller answer.
I'm using Jenkins for my Android continuous integration. I have some isolated, independent Robotium UI tests that currently take 12 minutes to run serially against a single emulator. Can anybody recommend a good way to run them in parallel so it will take only 6 minutes (or less)?
I know about various ways to run the full test suite in parallel on multiple devices/emulators, e.g. see the Multi-configuration (matrix) job section of the Jenkins Android Emulator Plugin, Spoon, or cloud testing companies like AppThwack.
I know how to run a specific subset of my tests, by using JUnit annotations, or apparently Spoon supports a similar function (see my question about it).
I'm now using Spoon to run my full test suite (mostly to take advantage of the lovely HTML output with screenshots). If anybody has tips on the best way to split my tests and run them in parallel, that would be great.
I assume I could achieve this by splitting the tests into two separate CI jobs, but it sounds like a pain to maintain two separate jobs and combine the results.
Update: I've added another answer which I think gives a cleaner and more concise Jenkins configuration, and is based more directly on Spoon.
I've just discovered the Jenkins MultiJob Plugin which allows you to run multiple jobs in parallel within a Phase.
Below is my working, but slightly fragile approach to doing this using the Fork plugin. I use manually configured regular expressions to partition the tests (this was the main reason I tried to use Fork - it supports using regex).
The MultiJob runs multiple downstream jobs in parallel within a Phase.
Main job configuration
My "Android Multi Job" is configured with one downstream "Android Phase N" job per partition in its Phase.
Downstream job configuration
Each downstream "Android Phase N" job is configured the same way, except that each one gets a different android.test.classes regular expression.
Gotchas
Fork currently fails to run on Gradle v1.0.0, as per fork plugin issue #6.
If you want a Fork regex to match multiple different packages, you need to comma separate your regex. This isn't very well documented in the Fork project, but their TestClassFilter source shows you how they interpret the regex.
Any abstract test classes need to be named Abstract*, otherwise Fork will try to run them as tests, creating annoying failures. Their TestClassScanner controls this, and issue #5 tracks changing this.
IIRC, you need to have the Fingerprint Plugin installed for the "Aggregate downstream test results" option to work. If you don't have it installed you'll see this error: "Fingerprinting not enabled on this build. Test aggregation requires fingerprinting."
Limitations
Test results are aggregated, but only using the JUnit XML test reports. This means you need to click through to each downstream job to view nice HTML results.
Manually partitioning your tests based on regular expressions can be tedious and error prone. If you use this approach I recommend you still have a nightly/weekly Jenkins job to run your full test suite in a single place, to make sure you don't accidentally lose any tests.
This MultiJob approach requires you to manually configure each downstream job, one for each slave node you want to use. We've prototyped a better approach using a Matrix job, where you only have to configure everything in a single Jenkins job. We'll try to write that up in the next couple of weeks.
Futures
We've also prototyped a way of extending Spoon (its output is prettier than Fork's) to automatically split the whole test suite across N downstream jobs. We still need to enhance it to aggregate all those results back into a single HTML page in the upstream job, but unfortunately a bug in the Jenkins "Copy To Slave" plugin is blocking this from working at the moment.
You can perform this in 3 steps:
Create 2 nodes pointing to the single target machine (which satisfies your condition to run tests on the same machine).
In the job, during execution, use the Jenkins env variable $NODE_NAME to assign a different set of tests to each node (you may need the NodeLabel Parameter Plugin).
After execution you will have 2 report files, luckily on the same machine. You can either merge them into one if they are text files, or create XML similar to the PerfPublisher plugin format, which gives you a detailed report.
It means you can actually execute 2 sets of tests on the same machine (with 2 nodes pointing to it) using a single job. Obtaining a single report would be tricky, but if I get to know the format I can help.
Hope this is useful
This answer is an improvement on my previous MultiJob answer.
The best way I've found to do this is to use a Jenkins Matrix job (a.k.a. "Multi-configuration project"). This is really convenient because you can configure everything in a single Jenkins job.
Spoon now supports a --e option which allows you to pass arguments directly to the instrumentation runner. I've updated their README file with a section on Test Sharding.
That README should give you what you need, but here are other highlights from our Jenkins job configuration, in case that helps.
The User-defined Axis sets the number of slave nodes we want to run. We have to set the label to android so our cloud provider can launch an appropriate slave.
We have a run-build.sh script which invokes Spoon with the correct parameters. We need to pass in the total node count (in this case 6), and the index of the specific slave node being run (automatically present in the node_index variable).
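For context, the arguments being forwarded via --e are presumably the standard AndroidJUnitRunner sharding arguments (numShards / shardIndex); stripped of the Spoon wrapper, they are the same ones you would pass to the instrumentation directly (package and runner names below are placeholders):

# Run shard 0 of 6 directly against the instrumentation runner; Spoon's --e option
# forwards the same numShards / shardIndex arguments. Names are placeholders.
adb shell am instrument -w \
    -e numShards 6 -e shardIndex 0 \
    com.example.app.test/androidx.test.runner.AndroidJUnitRunner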
The post build steps shouldn't be any different to your existing ones. In future we'll probably need to add something to collect up the results onto the master node (this has been surprisingly difficult to figure out). For now you can still click through to results on the slaves.
You can use, for example, the Jenkins MultiJob Plugin and the Testdroid API to send your APKs to real devices. That's probably the easiest way to do it. Ref guide here.
I am trying to write functional tests for an Android application. The problem is that most functional testing frameworks I have explored (calabash-android, robotium) have a limitation on the number of activities from different applications that can be tested in the same test. So if, in one workflow, I need to select some contacts from the Android contact picker, I can't test that entire flow, because the contact picker activity is part of the Android contacts application and the framework can't test an activity from my application and the contacts application at the same time.
One possible solution my team thought of was to mock out the call to the contacts activity to return a dummy intent with contact information, so that our application workflow can be tested. We are trying to use Mockito to achieve this. However, I am stuck pretty early on. Here is what I am trying to do:
MyActivity mockActivity = mock(MyActivity.class);
when(mockActivity.startActivityForResult(<?>,anyInt())).thenReturn(fakeIntent);
I am not sure what to put in the first parameter in the second line. I have tried Intent.class and android.content.Intent.class; however, it throws a compile error. If anyone has worked with mocking activities using Mockito, some help would be greatly appreciated.
P.S. - If I understand correctly, mocking is used more in unit testing than functional testing, so these tests would be more of a hybrid. If anyone has a better suggestion on how to go about these functional tests on Android, I am all ears.
It's hard to answer this without knowing the signature of your startActivityForResult method, but the general idea is to use any(Xxx.class), where Xxx is the type of the parameter. So either
when(mockActivity.startActivityForResult(any(Xxx.class),anyInt())).thenReturn(fakeIntent);
or the (kind of equivalent)
doReturn(fakeIntent).when(mockActivity).startActivityForResult(any(Xxx.class),anyInt());
The issue is that you cannot really "mock" (actually "spy" on) the activity you're testing, since it is created outside of your control by Android's instrumentation code. In a unit-test environment where you would have control, you could follow the mock(MyActivity.class) or spy(myActivityInstance) path (spy would actually be better because you could re-use most of the activity's original implementation), but not here.
The only solution I found for this dilemma was to move certain functionality out of the activity into utility classes, ideally using RoboGuice for that (@ContextSingletons can be used to process activity results). Then, in your test project, you create your own test Guice injector, set it as the base application injector before you call getActivity() for the first time, and let the activity work against your mocked utility class.
I outlined the complete process here.
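For what it's worth, here is a framework-agnostic sketch of that idea (all names are made up and the RoboGuice wiring is omitted): hide the "launch the contact picker" call behind a small interface, keep the real implementation in production, and have the test injector install a fake that immediately hands back a canned result.

import android.content.Intent;
import android.provider.ContactsContract;

// All names are illustrative; each type would live in its own file.

// Abstraction over "let the user pick a contact".
public interface ContactPicker {
    void pickContact(MyActivity activity, int requestCode);
}

// Production implementation: actually launches the system contacts app.
public class SystemContactPicker implements ContactPicker {
    @Override
    public void pickContact(MyActivity activity, int requestCode) {
        Intent pick = new Intent(Intent.ACTION_PICK, ContactsContract.Contacts.CONTENT_URI);
        activity.startActivityForResult(pick, requestCode);
    }
}

// Test double: never leaves the app, just hands back a canned result.
public class FakeContactPicker implements ContactPicker {
    @Override
    public void pickContact(MyActivity activity, int requestCode) {
        Intent canned = new Intent();
        canned.putExtra("contact_name", "Jane Doe");
        // onContactPicked() is a hypothetical public method on MyActivity that its
        // real onActivityResult() also delegates to.
        activity.onContactPicked(canned);
    }
}

The functional test would then install FakeContactPicker in the test injector before the first getActivity() call, so the flow under test never has to cross into the contacts application.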