Gradle task to only execute uiTests - android

I've got an Android project with the default Gradle tasks. In the project there is an androidTest package containing integrationTests and uiTests. I also have two Kotlin classes containing suites of test classes to be called.
However, ./gradlew connectedAndroidTest runs both the integrationTests and the uiTests, and I want to separate them. I came up with multiple candidate solutions:
Android Studio's run configurations. However, these aren't checked into VCS, so we have no access to them on our Jenkins build server.
Adding Gradle tasks in Groovy. However, it's not encouraged to call another Gradle task from within a new task.
So I'm looking for a way to run only the integrationTests or only the uiTests. How can I do this?

I'm going to give you a cheap and cheerful answer now. Maybe someone else will be able to provide a fuller one.
Since all the tests appear to be part of the same source set, you need to distinguish between them some other way. The most appropriate solution is whatever mechanism your testing library has for grouping, which you can then utilise from Gradle.
Alternatively, use something like a naming convention to distinguish between UI tests and integration tests.
What you do then depends on how you want the build to deal with these different categories. The main options include:
Using test filtering from the command line — via the --tests option — to run just integration or UI tests. Note that filtering only works via class name, so you'd have to go with the naming convention approach.
Configure the appropriate Test task (is that connectedAndroidTest?) so that setting a particular project property makes it run either the integration tests or the UI tests, depending on the property's value. This involves an if condition in the configuration. This approach works with both filtering and grouping.
Add two extra Test tasks, one that executes the integration tests and one that executes the UI tests. You would leave connectedAndroidTest untouched. This is my preferred approach, but requires a bit more code than the others.
This answer is missing a lot of detail on how to implement those solutions (the rough sketch below only illustrates the project-property option), but I'm afraid filling the rest in is too time-consuming for me right now. As I said, hopefully someone will come along with a fuller answer.
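A minimal, untested sketch of the project-property approach for instrumented tests follows. Note that connectedAndroidTest is not a plain Test task, so Gradle's --tests flag does not apply to it; instead you forward arguments to the instrumentation runner. The testSuite property name and the test package names below are placeholders, and this assumes the integration tests and UI tests live in separate packages and that a reasonably recent Android Gradle plugin is in use:

    // app/build.gradle (Groovy DSL) - rough sketch, adjust names to your project
    android {
        defaultConfig {
            testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"

            // Select a suite with:
            //   ./gradlew connectedAndroidTest -PtestSuite=ui
            //   ./gradlew connectedAndroidTest -PtestSuite=integration
            if (project.hasProperty("testSuite")) {
                if (project.property("testSuite") == "ui") {
                    // hypothetical package holding the UI tests
                    testInstrumentationRunnerArgument "package", "com.example.app.uitests"
                } else if (project.property("testSuite") == "integration") {
                    // hypothetical package holding the integration tests
                    testInstrumentationRunnerArgument "package", "com.example.app.integrationtests"
                }
            }
        }
    }

If you would rather not touch the build script at all, the same kind of runner argument can usually be passed straight from the command line, for example ./gradlew connectedAndroidTest -Pandroid.testInstrumentationRunnerArguments.package=com.example.app.uitests.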

Related

Is it necessary to integrate the testing code within the target project if we only use UI Automator?

I was wondering if we could just use a separate project to hold the tests, instead of integrating them within the target project.
I know we should use Espresso, but in our specific case our team decided to use UI Automator only. The advantages of using a separate project for the tests are:
Building is much faster, since we don't need to compile the whole large target project every time; we only build the separate mock project.
Debugging is much faster. When using ./gradlew :connectedAndroidTest, Android removes the app after the test run, which means we have to log in to the app again for every new test. With a separate project, only the mock project is removed, and the target app stays installed as-is.
You can use UI Automator to test many different components, even some that you are not developing and that don't belong to your application (e.g. the home screen).
So yes, you can create an independent project to test whatever you want.
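To make that concrete, one way to set this up is a tiny stand-alone app module whose only job is to host the UI Automator tests in its androidTest source set; it never needs the target app's code, because UI Automator drives whatever is currently on the device. A minimal build script for such a shell module might look roughly like this (the application id, SDK levels, and dependency versions are illustrative, not taken from the question):

    // build.gradle of a minimal "test shell" app module (illustrative sketch)
    apply plugin: 'com.android.application'

    android {
        compileSdkVersion 34
        defaultConfig {
            applicationId "com.example.uitestshell"   // hypothetical id, unrelated to the target app
            minSdkVersion 21
            targetSdkVersion 34
            testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        }
    }

    dependencies {
        // UI Automator operates on the device as a whole, so the target app
        // is not a compile-time dependency of this module at all.
        androidTestImplementation 'androidx.test.uiautomator:uiautomator:2.2.0'
        androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    }

Running ./gradlew connectedAndroidTest in this module then installs only the small shell app plus its test APK, which keeps build and install times short and leaves the real target app installed and logged in, as the question describes.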

Possible to Proguard Code Used for Tests but not in Release?

Is there a way to use Proguard to comment out a line of source code so that it exists for tests but does not get into the release build?
I am working on an Android app and have ported tests to Google Espresso for instrumentation tests. As a result, I had to add some code into the /src/ tree to notify Espresso when the app is busy, specifically it is a broadcast intent which is triggered by a static (and stateless) method. I would like to not have production code create broadcasts that only tests ever listen to.
IdlingResourceBroadcaster.onBusy(getActivity(), MyActivity.class.getName());
The solution I am looking for would be akin to C's #ifdef...#endif, which would remove the one-liner above completely, or alternatively akin to Log.v(), which keeps the one-liner but treats the method body as a no-op.
I am not familiar with ProGuard, but from what I have read this does not seem possible. Any suggestions or links to similar projects are much appreciated.
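For what it's worth, the ProGuard mechanism usually pointed at for this kind of thing is -assumenosideeffects, the same rule people commonly use to strip Log.v() calls from release builds: calls to the listed method are removed when their result is unused. It only takes effect when optimization is enabled (so not with the default proguard-android.txt, which contains -dontoptimize), and the package name below is a guess, so treat this as a sketch rather than a verified configuration:

    # proguard-rules.pro (release build only) - sketch; the package name is assumed
    # Requires optimization to be enabled, e.g. via proguard-android-optimize.txt
    -assumenosideeffects class com.example.espresso.IdlingResourceBroadcaster {
        public static void onBusy(...);
    }

This behaves like the Log.v() case described above: the call site stays in the source, but ProGuard drops it from the optimized release build, while debug/test builds (which typically don't run ProGuard) keep it.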

Which test to run when using Android Studio for unit testing

I am working through an Android app tutorial course on udacity.com. I have come to a lesson that introduces testing. However, the video for the current class shows how to run a test where only one run-test option is available, seen here: https://youtu.be/CHb8JGHU290?t=170
but my Android Studio shows a number of options,
and I am not sure which is the correct one to use, or even what the differences between them are. Could anyone shed some light on why I have the 4 different choices and what they are? In particular, the first and second options are confusing me; the third and fourth options are intuitive enough to understand.
Thank you.
The options you are given are:
1- Run tests using Gradle:
This has been added in version 1.1 of Android Studio, to run tests using Android's build system, Gradle.
2- Run tests using Android JUnit, which will probably require a device/emulator. This is the option to use if you have test cases that make use of Android's test classes, such as AndroidTestCase; it is also useful for running more complex, Android-related test cases.
3/4 - Run using the plain JUnit framework. The only difference between these two options is that the first runs All Tests available in the project, while the second runs all tests in the specified package. In your case, both are probably equivalent.
If you are running basic unit tests, I'd definitely stick with the first option.
More details on Android Studio testing here:
http://tools.android.com/tech-docs/unit-testing-support

How can I run my independent Robotium UI tests in parallel?

I'm using Jenkins for my Android continuous integration. I have some isolated, independent Robotium UI tests that currently take 12 minutes to run serially against a single emulator. Can anybody recommend a good way to run them in parallel so it will take only 6 minutes (or less)?
I know about various ways to run the full test suite in parallel on multiple devices/emulators, e.g. see the Multi-configuration (matrix) job section of the Jenkins Android Emulator Plugin, Spoon, or cloud testing companies like AppThwack.
I know how to run a specific subset of my tests, by using JUnit annotations, or apparently Spoon supports a similar function (see my question about it).
I'm now using Spoon to run my full test suite (mostly to take advantage of the lovely HTML output with screenshots). If anybody has tips on the best way to split my tests and run them in parallel, that would be great.
I assume I could achieve this by splitting the tests into two separate CI jobs, but it sounds like a pain to maintain two separate jobs and combine the results.
Update: I've added another answer which I think gives a cleaner and more concise Jenkins configuration, and is based more directly on Spoon.
I've just discovered the Jenkins MultiJob Plugin which allows you to run multiple jobs in parallel within a Phase.
Below is my working, but slightly fragile approach to doing this using the Fork plugin. I use manually configured regular expressions to partition the tests (this was the main reason I tried to use Fork - it supports using regex).
The MultiJob looks like this with multiple downstream jobs in a Phase:
Main job configuration
Here's how my "Android Multi Job" is configured:
Downstream job configuration
Here's how the downstream "Android Phase N" jobs are configured (with different android.test.classes regular expressions for each):
Gotchas
Fork currently fails to run on Gradle v1.0.0, as per fork plugin issue #6.
If you want a Fork regex to match multiple different packages, you need to comma separate your regex. This isn't very well documented in the Fork project, but their TestClassFilter source shows you how they interpret the regex.
Any abstract test classes need to be named Abstract*, otherwise Fork will try to run them as tests, creating annoying failures. Their TestClassScanner controls this, and issue #5 tracks changing this.
IIRC, you need to have the Fingerprint Plugin installed for the "Aggregate downstream test results" option to work. If you don't have it installed you'll see this error: "Fingerprinting not enabled on this build. Test aggregation requires fingerprinting."
Limitations
Test results are aggregated, but only using the JUnit XML test reports. This means you need to click through to each downstream job to view nice HTML results.
Manually partitioning your tests based on regular expressions can be tedious and error prone. If you use this approach I recommend you still have a nightly/weekly Jenkins job to run your full test suite in a single place, to make sure you don't accidentally lose any tests.
This MultiJob approach requires you to manually configure each downstream job, one for each slave node you want to use. We've prototyped a better approach using a Matrix job, where you only have to configure everything in a single Jenkins job. We'll try to write that up in the next couple of weeks.
Futures
We've also prototyped a way of extending Spoon (its output is prettier than Fork's) to automatically split the whole test suite across N downstream jobs. We still need to enhance it to aggregate all those results back into a single HTML page in the upstream job, but unfortunately a bug in the Jenkins "Copy To Slave" plugin is blocking this from working at the moment.
You can perform this in 3 steps:
Create 2 nodes pointing to the single target machine (which satisfies your condition of running the tests on the same machine).
In the job, during execution, use the Jenkins environment variable $NODE_NAME to assign a different set of tests to each node (you may need the NodeLabel Parameter Plugin).
After execution you will have 2 report files, luckily on the same machine. You can either merge them into one if they are text files, or create XML in something like the PerfPublisher plugin format, which gives you a detailed report.
This means you can actually execute 2 sets of tests on the same machine (with 2 nodes pointing to it) using a single job. Obtaining a single report would be tricky, but if I get to know the format I can help.
Hope this is useful.
This answer is an improvement on my previous MultiJob answer.
The best way I've found to do this is to use a Jenkins Matrix job (a.k.a. "Multi-configuration project"). This is really convenient because you can configure everything in a single Jenkins job.
Spoon now supports a --e option which allows you to pass arguments directly to the instrumentation runner. I've updated their README file with a section on Test Sharding.
That README should give you what you need, but here are other highlights from our Jenkins job configuration, in case that helps.
The User-defined Axis sets the number of slave nodes we want to run. We have to set the label to android so our cloud provider can launch an appropriate slave.
We have a run-build.sh script which invokes Spoon with the correct parameters. We need to pass in the total node count (in this case 6), and the index of the specific slave node being run (automatically present in the node_index variable).
The post build steps shouldn't be any different to your existing ones. In future we'll probably need to add something to collect up the results onto the master node (this has been surprisingly difficult to figure out). For now you can still click through to results on the slaves.
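For context, the sharding itself is a standard AndroidJUnitRunner feature: the runner accepts numShards and shardIndex arguments and only executes its slice of the suite, and Spoon's --e option is just a way of forwarding such arguments. As a rough illustration of the underlying mechanism only (this is plain connectedAndroidTest wiring, not our actual run-build.sh/Spoon invocation, and the property names are made up):

    // app/build.gradle - sketch of AndroidJUnitRunner sharding, not the Spoon call itself
    android {
        defaultConfig {
            testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"

            // e.g. on slave N of 6:
            //   ./gradlew connectedAndroidTest -PtotalShards=6 -PshardIndex=N
            if (project.hasProperty("totalShards") && project.hasProperty("shardIndex")) {
                testInstrumentationRunnerArgument "numShards", project.property("totalShards").toString()
                testInstrumentationRunnerArgument "shardIndex", project.property("shardIndex").toString()
            }
        }
    }

Each shard gets a deterministic subset of the tests, which is what lets the Matrix job's node_index map cleanly onto a slice of the suite.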
You can, for example, use the Jenkins MultiJob Plugin and the Testdroid API to send your APKs to real devices. That's probably the easiest way to do it. Ref guide here.

How to retroactively add tests to a code base?

Suppose you are tasked with adding a testing framework to an existing code base that has very, very little unit test coverage. The code base isn't insanely large yet; however, it does have areas that aren't super clean, or not very OOP or testable.
I've read a couple of good answers:
Adding unit tests to legacy code
How to approach unit testing in a large project
Best Option for Retrospective application of TDD into C# codebase
But the project I'm working on is an Android app, so it's slightly different (given lots more UI components).
I have some questions that all relate to the same issue:
What's the best way to go back and put a bunch of tests in place?
How do I prioritize which parts to test first?
do I start with the areas that get called the most (and is there a code-analysis tool for finding them)?
or do I go back and look at the classes that have had the most bugs in the past?
Should I write integration tests first (given this is an Android app and also integration tests may prevent broken unit tests/refactorings) and then work in the unit tests?
Sorry for all the questions, just really looking for a good approach, more specifically geared towards retrospectively testing an Android app.
P.S. I'm curious if people know of good tools they can use for code analysis that can help bring to attention the areas in a code base in which unit testing would be most helpful.
Usually, when starting with unit testing on an existing application, you would want to do it iteratively; you probably do not have the time or the manpower for a big upfront investment, so you add unit tests as part of the required work:
When adding a new feature, write unit tests (or better yet, use TDD).
When fixing a bug, write unit test(s) that fail due to the bug before fixing it.
When refactoring old code, write unit tests to make sure no regression bugs were introduced.
If the whole team follows the three steps above, then in a matter of weeks you should have good coverage for the code you've been changing, depending on the size of the project.
I don't have an Android-specific answer. I hope someone comes along and adds a good one. Meanwhile: write acceptance (integration) tests for your most important features (from your product owner's point of view, not from some tool's). You'll get the most coverage for your effort, and be the most likely to prevent bugs that customers care about. Don't worry so much about unit tests; those are for details.
[That's the answer given in the third post you cite, but we're still waiting for that Android-savvy answer so let's not call this a duplicate yet.]
Do write acceptance tests and, as necessary, unit tests for new features, and write unit tests when fixing bugs.
Regarding how to find areas of the code that need tests, the first thing to do is measure code coverage. (You can do that in the Android SDK with ant emma debug install test.) Obviously code with no coverage needs tests.
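If the build has since moved to Gradle, the rough equivalent is the Android Gradle plugin's built-in coverage support: enable coverage on the debug build type and run the connected coverage task. This is only a sketch; the property, task name, and report path here match older plugin versions and may differ slightly in newer ones:

    // app/build.gradle - sketch: coverage for instrumented tests via the Android Gradle plugin
    android {
        buildTypes {
            debug {
                testCoverageEnabled true   // instrument the debug build for coverage
            }
        }
    }

    // Then run, for example:
    //   ./gradlew createDebugCoverageReport
    // and look for the HTML report under app/build/reports/coverage/.

Classes and packages with little or no coverage in that report are the obvious first candidates for new tests.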
