Run a single test to check code coverage with JaCoCo on Android

I'm using ./gradlew createDebugCoverageReport to generate a code coverage report of all my Android instrumentation (Robotium) tests. But right now I need to run all tests in order to get the coverage report. How can I specify one single test (or a single test class) to execute and still get the coverage report? I need this during development of the tests; it's too slow having to run all tests at once.

I know this is an old post, but here's how I do it.
1. First install the instrumentation test app if you have not.
(Flavor may vary. In this case, it's debug.)
// install instrumentation test app if you have not
./gradlew installDebugAndroidTest
2. Execute the test you want (or several tests, classes, packages, etc.).
In my case, I chose ClassName#methodName.
// execute one test
adb shell am instrument -w -r --no_window_animation -e coverageFile /data/data/com.org.android.test/coverage.ec -e class 'com.org.android.ClassName#methodName' -e coverage true com.org.android.test/android.support.test.runner.AndroidJUnitRunner
Note that I'm passing 2 parameters:
A. -e coverageFile /data/data/com.org.android.test/coverage.ec and,
B. -e coverage true
Together, these two options make the runner write a coverage report on the device.
If you are not familiar with running tests by adb shell am command, please refer to this official documentation.
3. Then get the coverage.ec file from the device.
// get coverage.ec data
adb shell run-as com.org.android.test cat /data/data/com.org.android.test/coverage.ec | cat > [YOUR_PROJECT_DIRECTORY]/build/outputs/code_coverage/debugAndroidTest/connected/coverage.exec
There are two things to note here.
A. You should change [YOUR_PROJECT_DIRECTORY] to your project directory. Or, you can change the entire [YOUR_PROJECT_DIRECTORY]/build/outputs/code_coverage/debugAndroidTest/connected/coverage.exec to any directory and filename you want. (maybe desktop?)
B. The final file must have the extension .exec, because JaCoCo will only accept that.
4. Then view the coverage report using Android Studio.
In Android Studio, navigate to Run > Show Code Coverage Data. A file-selection window will appear; select the previously generated coverage.exec. Android Studio will then process the data and show you the coverage. You can view the code coverage data directly, or generate a further coverage report.
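The steps above can be wrapped in a small helper. The function below only composes the two adb commands without executing anything (a sketch: the package, class and runner names are the example values from this answer, so substitute your own):

```shell
#!/bin/sh
# Compose (without executing) the adb commands for running one test with
# coverage and pulling the result. All names are the example values above.
coverage_cmds() {
  test_pkg="$1"     # e.g. com.org.android.test
  test_class="$2"   # e.g. com.org.android.ClassName#methodName
  out_file="$3"     # local destination; must end in .exec for JaCoCo
  cov="/data/data/$test_pkg/coverage.ec"
  runner="android.support.test.runner.AndroidJUnitRunner"
  echo "adb shell am instrument -w -r -e coverageFile $cov -e class '$test_class' -e coverage true $test_pkg/$runner"
  echo "adb shell run-as $test_pkg cat $cov | cat > $out_file"
}

coverage_cmds com.org.android.test 'com.org.android.ClassName#methodName' coverage.exec
```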

Related

What are the correct commands to run the CTS and VTS test plans?

I am about to set up the VTS and CTS tests for our AOSP. Both of the test suites use the Trade Federation test framework. What is confusing me is how to run the different test plans.
According to the documentation (https://source.android.com/compatibility/vts/systems) for the VTS one has to decide which test plan to run. And then use the run command to test it.
E.g., if I want to run the default VTS test plan, I run:
me@computer> vts-tradefed
vts-tf > run vts
This will launch a number of tests on the connected device.
Next, when launching the CTS tests, I expected to call the corresponding commands. With the following instructions I expected to run the CTS tests with the test plan called "cts".
me@computer> cts-tradefed
cts-tf > run cts
This seems to work fine and the tests appear to start. But then I read in the manual for the CTS (https://source.android.com/compatibility/cts/run) that CTS shall be executed as run cts --plan <test-plan>. And they give the example run cts --plan CTS below to run the default CTS plan:
Start the default test plan (contains all test packages) by appending: run cts --plan CTS . This kicks off all CTS tests required
for compatibility.
For CTS v1 (Android 6.0 and earlier), enter list plans to view a list of test plans in the repository or list packages to view a list
of test packages in the repository.
For CTS v2 (Android 7.0 and later), enter list modules to see a list of test modules.
Alternately, run the CTS plan of your choosing from the command line using: cts-tradefed run cts --plan
When testing it, it seems to work as well. Two things make me wonder. First of all, the test plan in the example is referred to with capital letters, i.e., "CTS" instead of "cts". Secondly, the run command seems to work completely differently here. To me it makes sense that run is a built-in tradefed command, and that its argument should be the name of the test plan. This is also confirmed by tradefed itself.
VTS help:
vts-tf > help run
r(?:un)? help:
command <config> [options] Run the specified command
<config> [options] Shortcut for the above: run specified command
cmdfile <cmdfile.txt> Run the specified commandfile
commandAndExit <config> [options] Run the specified command, and run 'exit -c' immediately afterward
cmdfileAndExit <cmdfile.txt> Run the specified commandfile, and run 'exit -c' immediately afterward
----- Vendor Test Suite specific options -----
<plan> --module/-m <module> Run a test module
<plan> --module/-m <module> --test/-t <test_name> Run a specific test from the module. Test name can be <package>.<class>, <package>.<class>#<method> or <native_binary_name>
Available Options:
--serial/-s <device_id>: The device to run the test on
--abi/-a <abi> : The ABI to run the test against
--logcat-on-failure : Capture logcat when a test fails
--bugreport-on-failure : Capture a bugreport when a test fails
--screenshot-on-failure: Capture a screenshot when a test fails
--shard-count <shards>: Shards a run into the given number of independent chunks, to run on multiple devices in parallel.
----- In order to retry a previous run -----
retry --retry <session id to retry> [--retry-type <FAILED | NOT_EXECUTED>]
Without --retry-type, retry will run both FAIL and NOT_EXECUTED tests
CTS help:
cts-tf > help run
r(?:un)? help:
command <config> [options] Run the specified command
<config> [options] Shortcut for the above: run specified command
cmdfile <cmdfile.txt> Run the specified commandfile
commandAndExit <config> [options] Run the specified command, and run 'exit -c' immediately afterward
cmdfileAndExit <cmdfile.txt> Run the specified commandfile, and run 'exit -c' immediately afterward
----- Compatibility Test Suite specific options -----
<plan> --module/-m <module> Run a test module
<plan> --module/-m <module> --test/-t <test_name> Run a specific test from the module. Test name can be <package>.<class>, <package>.<class>#<method> or <native_binary_name>
Available Options:
--serial/-s <device_id>: The device to run the test on
--abi/-a <abi> : The ABI to run the test against
--logcat-on-failure : Capture logcat when a test fails
--bugreport-on-failure : Capture a bugreport when a test fails
--screenshot-on-failure: Capture a screenshot when a test fails
--shard-count <shards>: Shards a run into the given number of independent chunks, to run on multiple devices in parallel.
----- In order to retry a previous run -----
retry --retry <session id to retry> [--retry-type <FAILED | NOT_EXECUTED>]
Without --retry-type, retry will run both FAIL and NOT_EXECUTED tests
The explanations are pretty much identical. So it actually makes sense to run with run cts and run vts respectively. Is this just a silly question and am I completely wrong? Since these tests are important for our compatibility, I want to be certain to run them in the correct way.
To run a plan (either cts or vts), you can use different commands depending on what you need:
To run the complete vts or cts tests: run <plan>, e.g. run cts / run vts
To run a specific module in a plan: run <plan> -m <module>, e.g. run cts -m CtsMyDisplayTestCases (the module name should be the same as the LOCAL_PACKAGE_NAME in your Android.mk)
To run a specific test class (containing multiple tests) of a specific module in a plan: run <plan> -m <module> -t <packageName.className>, e.g. run cts -m CtsMyDisplayTestCases -t android.display.cts.ScreenTests (this runs all tests present in the test class ScreenTests; the package name is the one declared in AndroidManifest.xml)
To run a specific test case in a test class of a specific module in a plan: run <plan> -m <module> -t <packageName.className#testName>, e.g. run cts -m CtsMyDisplayTestCases -t android.display.cts.ScreenTests#testDisplayName (this runs the testDisplayName test case present in the test class ScreenTests)
You can also check the AOSP/cts/ directory to get a basic idea of the naming conventions and how things work.
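The four levels above all follow one pattern: plan, then an optional module, then an optional test. A small helper (a sketch, using the example names from this answer) makes that composition explicit:

```shell
#!/bin/sh
# Build a tradefed `run` line from a plan, an optional module and an
# optional test, mirroring the four command levels described above.
tf_run() {
  line="run $1"
  [ -n "$2" ] && line="$line -m $2"
  [ -n "$3" ] && line="$line -t $3"
  echo "$line"
}

tf_run cts                              # prints: run cts
tf_run cts CtsMyDisplayTestCases        # prints: run cts -m CtsMyDisplayTestCases
tf_run cts CtsMyDisplayTestCases 'android.display.cts.ScreenTests#testDisplayName'
```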

How can I run a single android instrumentation test case?

Not a unit test, but an instrumentation test.
My app has multiple flavors, so I run:
./gradlew connectedClientDebugAndroidTest to run my instrumentation tests (flavor name is client).
But I want to run one particular instrumentation test case class called MyActivityTestCase.java. Is this possible? If it is, what is the command to run it?
With Gradle, you can run a single test by using the test.single system property. You set it from the command-line with the -D option. For example
$ gradle -Dtest.single=MyActivityTestCase connectedClientDebugAndroidTest
For more details, see http://mrhaki.blogspot.com/2013/05/gradle-goodness-running-single-test.html.
The general syntax using adb shell is :
adb shell am instrument -w <test_package_name>/<runner_class>
where <test_package_name> is the Android package name of your test application, and <runner_class> is the name of the Android test runner class you are using.
More details at: http://developer.android.com/tools/testing/testing_otheride.html
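Adding the -e class extra to that syntax restricts the run to a single class. A minimal sketch that only composes the command (the package, runner and class names below are hypothetical placeholders, not taken from the question):

```shell
#!/bin/sh
# Compose (without executing) an adb command that runs a single test class.
# All three arguments are hypothetical placeholders; substitute your own.
single_class_cmd() {
  test_pkg="$1"; runner="$2"; class="$3"
  echo "adb shell am instrument -w -e class $class $test_pkg/$runner"
}

single_class_cmd com.example.app.test \
  android.support.test.runner.AndroidJUnitRunner \
  com.example.app.MyActivityTestCase
```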
So the way I fixed this was by putting the single TestCase that I want to run into a flavor-specific folder.
So in this example, my flavor is named client, so I put the single test that I want to run for this flavor into app/src/androidTestClient/MyActivityTestCase.java
Then run ./gradlew connectedClientDebugAndroidTest
Not a solution if you have multiple test cases for a specific build flavor, but for me that wasn't the case so it works.
All my other tests are in my flavor called other, in the folder:
app/src/androidTestOther/. So to run the other instrumentation tests I just run the command ./gradlew connectedOtherDebugAndroidTest
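The Gradle task names used here follow the connected<Flavor><BuildType>AndroidTest convention. A small sketch of that mapping (awk is only used to capitalize the first letter):

```shell
#!/bin/sh
# Derive the connected-test Gradle task name from a flavor and a build type,
# illustrating the connected<Flavor><BuildType>AndroidTest convention.
cap() { printf '%s' "$1" | awk '{ print toupper(substr($0, 1, 1)) substr($0, 2) }'; }
connected_task() { echo "connected$(cap "$1")$(cap "$2")AndroidTest"; }

connected_task client debug   # prints: connectedClientDebugAndroidTest
connected_task other debug    # prints: connectedOtherDebugAndroidTest
```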

Android Unit Test Cache

I'm facing an issue where, whenever I use an adb command to run my Android test case, it keeps a cached version of the last test class, even though I have changed my code and rebuilt.
I'm running below command
./adb shell am instrument -w -e class com.xxx.yyy.tests.SignInActivityTest com.xxx.yyy.test/android.test.InstrumentationTestRunner
Even when I add a test method, it still caches and runs the old test cases.
If you are using the command line, make sure to rebuild the test application (and not only the application under test), and that both are deployed to the target before running the tests again.
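A sketch of that sequence: rebuild and reinstall both APKs before re-running the test. The Gradle task names assume a standard debug build; nothing is executed here, the commands are only printed:

```shell
#!/bin/sh
# Print the rebuild-redeploy-run sequence described above. Task names assume
# a standard debug build type; nothing is executed.
redeploy_and_run() {
  class="$1"; test_pkg="$2"
  echo "./gradlew installDebug installDebugAndroidTest"
  echo "adb shell am instrument -w -e class $class $test_pkg/android.test.InstrumentationTestRunner"
}

redeploy_and_run com.xxx.yyy.tests.SignInActivityTest com.xxx.yyy.test
```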

Interface Android robotium testing with Teamcity

As this was not answered previously (or maybe I did not find it), I investigated the following question:
How to perform automated functional tests on Android devices with robotium, and report them to continuous integration server like TeamCity?
As I did not find any answer to that specific question, I investigated. Here is the outcome of my investigation and a quick how-to to help people perform automated functional tests on Android applications using Robotium, and then report the results to a continuous integration server like TeamCity. Please note that this may not be the best solution, but people might be in the same situation as me. So here it is!
The following libraries have been used :
Robotium (https://code.google.com/p/robotium/) : This is an Android test automation framework. It helps you to perform automated tests like click on buttons, fill text automatically, and a lot of other things.
Android Junit Report
(http://zutubi.com/source/projects/android-junit-report/): This library is very useful for publishing test results in an exploitable XML format. If you run your tests through Eclipse, you see the results as you go, but this library is what lets you export them.
Assuming that you have an Android project to test, create an Android Test Project (Eclipse does have a nice workflow to create it for you) and set it up to work with Robotium. A pretty clear detailed instruction on how to do it can be found here : https://code.google.com/p/robotium/wiki/Getting_Started
Then, you need to add Android Junit Report to your project in order to be able to fetch the results of your tests. In order to do so, add the Android Junit Report *.jar library in your lib folder and add it to your build path (in Eclipse : Project -> Properties -> Java Build Path -> Libraries -> Add External Jar).
You also have to change the test runner of your project. In the AndroidManifest.xml of your test project add the following :
<instrumentation
android:name="com.zutubi.android.junitreport.JUnitReportTestRunner"
android:targetPackage="<insert your package, e.g. com.alth.myproject>" />
Once this is done, you should be able to run your tests properly. The test results should be available on your device, in /data/data/<your project package>/files/junit-report.xml.
The next step is to configure your TeamCity build steps to perform all the different needed action to run your tests. Please note that my solution might not be the optimal one !
Build step 1 : Clean - Command Line runner - This build step may be optional depending on how you decide to create your build.xml files and similar build decisions.
rm -rf <report folder>
rm -rf <Project build.xml>
rm -rf <Test project build.xml>
android update project -p <Path to your project>
android update test-project -m <Path to your project, relative to the test project> -p <Path to your test project>
Build step 2 : Launch AVD - Command Line runner - This build step launches the android virtual device. This step may be optional if you decide to run the tests on an actual device.
emulator -avd <nameOfYourAvd> -no-boot-anim &
sleep 45
The & prevents the build step from being blocked by the virtual device launch (it is a basic shell feature). The sleep command gives the AVD time to become ready before the next build step.
Build step 3 : Test app release - Ant runner : Build the test project, install it on the virtual device
Path to build xml file : <Path to your test project>/build.xml
Additional Ant command line parameters : -f <Path to your test project>/build.xml clean debug install -Dsdk.dir=<Path to your android sdk>
Build step 4 : AVD Unlock - Command line runner : Unlock the AVD screen for testing purpose
bash avdUnlock.sh
Body of avdUnlock.sh here: (http://pastie.org/7919761). This script sends commands to the regular AVD port in order to unlock the screen. This could be improved by sending the commands only to a specific port, and changing build step 2 to launch the emulator on that port. That is, however, not really part of this how-to.
Build step 5 : Launch tests - Command line runner : Launch the tests
adb shell pm list instrumentation
adb shell am instrument -w <insert your test package ex:com.alth.myproject.test>/com.zutubi.android.junitreport.JUnitReportTestRunner
The first adb command could be removed; it is only there for debugging purposes, to see which instrumentations are installed on the device.
Build step 6 : Fetch tests - Command line runner : Retrieve tests report from the device
adb pull /data/data/<insert your project package ex:com.alth.myproject>/files/junit-report.xml <report folder>/junit-report.xml
Build step 7 : Final emulator kill - Command line runner : Kill the running android virtual device
adb emu kill
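Build steps 2, 5, 6 and 7 can be sketched as one script. Here run only echoes each command (a dry run, so the sequence can be inspected anywhere); on the build agent, replace run with direct execution. The AVD name and report folder are placeholders:

```shell
#!/bin/sh
# Dry-run sketch of build steps 2, 5, 6 and 7. `run` echoes instead of
# executing; the AVD name and report folder are placeholders.
run() { echo "$@"; }

AVD_NAME="ci-avd"
TEST_PKG="com.alth.myproject.test"
APP_PKG="com.alth.myproject"
RUNNER="com.zutubi.android.junitreport.JUnitReportTestRunner"

run emulator -avd "$AVD_NAME" -no-boot-anim                                         # step 2
run adb shell am instrument -w "$TEST_PKG/$RUNNER"                                   # step 5
run adb pull "/data/data/$APP_PKG/files/junit-report.xml" reports/junit-report.xml   # step 6
run adb emu kill                                                                     # step 7
```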
Additional Build Features : XML report processing - Report type : Ant JUnit
Monitoring rules : <report folder>/*.xml
This how-to is clearly not optimal, but it answers the original question. This way it is possible to fetch the Android functional test report and feed it to TeamCity in order to monitor test results.
I hope this will help someone, and I will try to answer your questions if you have any.
Al_th

How can I run Android tests with sbt?

I developed for my application a small suite of Android tests written in Scala that uses the Robotium library. The suite is for all intents and purposes a standard Android JUnit test project and runs successfully if launched from Eclipse.
I've already successfully built and run my main Android application with sbt android-plugin. The main application is located in [ProjectDir]/src/main. I was also able to successfully build my Android test application that is located in the [ProjectDir]/tests/src/main directory. I checked the emulator, and the test application appears to have been correctly installed with android-plugin's tests/android:install-emulator command. However, when I try to run the test project via sbt tests/android:test-emulator, I get:
...
Test results for InstrumentationTestRunner=
Time: 0.001
OK (0 tests)
How can I get sbt android-plugin to recognize that the project contains JUnit tests and run them?
The naming convention used here is the same as in normal JUnit, so you need to name your test classes xxxTest. They also need to extend TestCase (AndroidTestCase, InstrumentationTestCase, etc.).
To reiterate, eclipse will run a command which will look like:
adb shell am instrument -w -e class com.android.foo.FooTest,com.android.foo.TooTest com.android.foo/android.test.InstrumentationTestRunner
Eclipse appends the class names to the command, so the naming convention might not apply there.
If you run from sbt, it will run
adb shell am instrument -w com.android.foo/android.test.InstrumentationTestRunner
which will find all classes under the application package com.android.foo whose names end with Test.
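That convention-based discovery amounts to a simple suffix check, which would explain why the sbt run reported 0 tests if the class names do not end in Test (the class names below are illustrative):

```shell
#!/bin/sh
# Sketch of the naming rule described above: only classes whose names end
# in "Test" are picked up by the convention-based runner.
matches_convention() {
  case "$1" in
    *Test) echo yes ;;
    *)     echo no  ;;
  esac
}

matches_convention com.android.foo.FooTest   # prints: yes
matches_convention com.android.foo.FooSpec   # prints: no
```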
