I'm experimenting with test sharding on Android and I'm getting pretty weird results:
+ adb -s emulator-5580 shell am instrument -e numShards 2 -e shardIndex 0 -e class com.package.etc.automation.Tests.SanityTest.SanityTest -w com.package.etc.test/android.support.test.runner.AndroidJUnitRunner
com.package.etc.automation.Tests.SanityTest.SanityTest:..........
Time: 306.578
OK (10 tests)
+ adb -s emulator-5582 shell am instrument -e numShards 2 -e shardIndex 1 -e class com.package.etc.automation.Tests.SanityTest.SanityTest -w com.package.etc.test/android.support.test.runner.AndroidJUnitRunner
com.package.etc.automation.Tests.SanityTest.SanityTest:......................
Time: 645.723
OK (22 tests)
As you can see, the runner split the tests into two uneven groups: the second one has twice as many tests as the first and runs twice as long. Not the best parallelism, if you ask me.
Is there a way to control the distribution of tests, or at least force an even split?
Let's trace it down.
When the test suite starts, a TestRequestBuilder is built on top of JUnit Filters, and a ShardingFilter is one of the filters added to it. Adding a filter means the previously added filter is "intersected" with the new one: its public boolean shouldRun(Description description) method is invoked. Look at this fragment in particular:
if (description.isTest()) {
    return (Math.abs(description.hashCode()) % mNumShards) == mShardIndex;
}
Substituting your numbers (numShards = 2), you'll notice this is just a parity test. Statistically, the parity distribution of the generated hash codes may well not be 50/50. Moreover, when some of the tests in your class are ignored or disabled and interleave with enabled tests, the hash codes drift even further (a JUnit Description's uniqueId is generated from the method and class names).
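To see how uneven splits arise, here is a plain-Java sketch. ShardDemo and the test names are hypothetical; it hashes strings of the "method(class)" shape that JUnit uses for display names, while the real filter hashes the Description object itself:

```java
// Sketch of ShardingFilter's decision rule with hypothetical test names.
public class ShardDemo {
    static int shardOf(String uniqueId, int numShards) {
        // Same arithmetic as the shouldRun() fragment above.
        return Math.abs(uniqueId.hashCode()) % numShards;
    }

    public static void main(String[] args) {
        String[] tests = {
            "testLogin(com.package.etc.SanityTest)",
            "testLogout(com.package.etc.SanityTest)",
            "testSearch(com.package.etc.SanityTest)",
            "testUpload(com.package.etc.SanityTest)",
        };
        int[] counts = new int[2];
        for (String t : tests) {
            int shard = shardOf(t, 2);   // with numShards=2 this is just hash parity
            counts[shard]++;
            System.out.println(t + " -> shard " + shard);
        }
        // Nothing guarantees counts[0] == counts[1]; that is the whole problem.
        System.out.println("shard 0: " + counts[0] + ", shard 1: " + counts[1]);
    }
}
```

Run it with different method names and you will see the shard sizes wander away from an even split.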
It is simply a matter of statistics. As you can see in this answer:
How the groups are divided is arbitrary
Related
From what I read in these docs, we can annotate any flaky tests and then there should be a way to filter them out and run them alone. I was thinking Firebase Robo tests would know about this and re-test only the flaky ones, but the following statement has confused me:
Can then be used to filter tests on execution using -e annotation or -e notAnnotation as desired.
What is this -e switch? How can I filter tests? The comment confuses me about how to filter tests on execution. Is it done on the Gradle command line? Can I get an example?
I finally found out how to do this. When you put annotations on test methods, you can run the group of methods that share an annotation; the docs explain how. So if I were to mark many tests as @FlakyTest, I could run all of the flaky tests with adb like this (substituting your own test package and runner for the placeholder):
adb shell am instrument -w -e annotation android.support.test.filters.FlakyTest <your-test-package>/android.support.test.runner.AndroidJUnitRunner
Here is the part of the AndroidJUnitRunner docs explaining this:
Filter test run to tests with given annotation: adb shell am instrument -w -e annotation com.android.foo.MyAnnotation com.android.foo/android.support.test.runner.AndroidJUnitRunner
I'm trying to set the emulator system time to a predefined date every time I run the test cases.
I've found the command adb shell date --set=, which changes the time, but I couldn't implement it using the Appium APIs.
Any help in figuring out how to implement it or other alternatives is much appreciated.
I've also opened a thread on Appium Discuss about the same problem.
In Ruby, I'm doing the following for AVDs:
# Set the device clock using an adb shell command.
# Defaults to Time.now.
# mm == month, mn == minute
# Note: values must be zero-padded for single-digit results,
# hence '%d' for the day ('%e' would space-pad it).
def self.android_set_time(yy = Time.now.strftime('%y'),
                          yyyy = Time.now.strftime('%Y'),
                          mm = Time.now.strftime('%m'),
                          dd = Time.now.strftime('%d'),
                          hh = Time.now.strftime('%H'),
                          mn = Time.now.strftime('%M'))
  version = driver_attributes[:caps][:platformVersion].to_f
  if version >= 6.0
    # Android 6.0+ (toybox date): MMDDhhmm[[CC]YY][.ss]
    system("adb shell 'date #{mm}#{dd}#{hh}#{mn}#{yy}.00'")
  else
    # Older Android (toolbox date): -s YYYYMMDD.HHmmss
    system("adb shell date -s '#{yyyy}#{mm}#{dd}.#{hh}#{mn}00'")
  end
end
Note: you must use Kernel.system vs Kernel.exec.
Kernel.exec # Replaces the current process by running the given external _command_...
Kernel.system # Executes _command..._ in a subshell.
I don't know whether it works on Sauce Labs or not.
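As a cross-check on those two date-string shapes, here is a small java.time sketch; DateFormats is a hypothetical name, and the patterns mirror the interpolations in the Ruby method above:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Builds the two date strings passed to `adb shell date`.
public class DateFormats {
    // Android 6.0+ toybox form: MMDDhhmmYY.ss
    static String newFormat(LocalDateTime t) {
        return t.format(DateTimeFormatter.ofPattern("MMddHHmmyy")) + ".00";
    }

    // Older toolbox form: date -s YYYYMMDD.HHmmss
    static String oldFormat(LocalDateTime t) {
        return t.format(DateTimeFormatter.ofPattern("yyyyMMdd.HHmm")) + "00";
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2020, 1, 2, 3, 4);
        System.out.println(newFormat(t)); // 0102030420.00
        System.out.println(oldFormat(t)); // 20200102.030400
    }
}
```

Note that the patterns are zero-padded ("01", not " 1"), which is exactly the leading-zero requirement the Ruby comment warns about.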
There is currently no way to do this in Appium. It has not been implemented as an endpoint, and Appium does not allow ad hoc adb command execution, for security reasons.
I have created a test project with exactly the same code as shown here:
http://developer.android.com/tools/testing/testing_ui.html
I have uploaded the jar file to the Android virtual device and now I'm ready to run the tests. But I always get this output on the console:
INSTRUMENTATION_STATUS: stream=
Test results for WatcherResultPrinter=
Time: 0.0
OK (0 tests)
INSTRUMENTATION_STATUS_CODE: -1
I have also created a simple test with the following code:
public void FailedTest() throws UiObjectNotFoundException {
    assertTrue("This test was executed", false);
}
This was to check whether something was wrong with the code that uses UI elements.
The package name is Tests and the class name Login so I run the following command:
adb shell uiautomator runtest TestProject.jar -c Tests.Login
Edit
When I run it on a real device I get:
uiautomator: permission denied
As a first step, can you change the name of the test method to match the standard jUnit 3 convention, i.e. public void testWhatever() { ... }? The first four letters of the name must be 'test' in lower case, the signature is public void, and the method takes no parameters.
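That discovery rule can be sketched in plain Java; NamingDemo and isTestMethod are illustrative names, using reflection to mimic how jUnit 3 picks up test methods:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Sketch of jUnit 3's discovery rule: public void test*() with no parameters.
public class NamingDemo {
    public void FailedTest() {}   // NOT discovered: name doesn't start with "test"
    public void testFailed() {}   // discovered

    static boolean isTestMethod(Method m) {
        return m.getName().startsWith("test")
                && Modifier.isPublic(m.getModifiers())
                && m.getReturnType() == void.class
                && m.getParameterCount() == 0;
    }

    public static void main(String[] args) {
        for (Method m : NamingDemo.class.getDeclaredMethods()) {
            if (!Modifier.isStatic(m.getModifiers())) {
                System.out.println(m.getName() + " -> " + isTestMethod(m));
            }
        }
    }
}
```

This is why FailedTest in the question silently runs zero tests: the runner never sees it as a test method.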
Similarly, can you change the package name to the more standard lowercase convention, e.g. org.example.tests? If your file is called Tests.java (and the class is also called Tests), then you should be able to call it as follows:
adb shell uiautomator runtest Tests.jar -c com.example.tests.Tests
If these don't help, please can you revise the question to include the entire code from your Tests.java file?
Note: I've not tried to reproduce your code at this stage as I'm travelling. I can do so if my suggestions don't unblock your problem(s).
I'll follow up on the uiautomator: permission denied separately. UI Automator tests do run on real devices. They don't need the device to be rooted. I run them on standard Android 4.2.x devices.
I have one test case file with around 20 methods (test cases) which extends ActivityInstrumentationTestCase2. I need to write a suite which will call only selected test methods. I know that in JUnit there is a method which accepts the names of the methods to be executed:
suite.addTest(new AllTestCases("testcase1"));
Is there a similar way to do this in Android/Robotium? If yes, please help me out with a way to fix this. Thanks.
You can't make a call like new AllTestCases("testcase1") because all Android-related test classes inherit from either AndroidTestCase or InstrumentationTestCase, and neither of these classes exposes a constructor that takes a string as an argument.
You could take a look at android.test.suitebuilder.TestSuiteBuilder but even this class does not allow for the running of individual test methods, it accepts tests at the package level.
You might have some luck achieving your goal by using the Android test annotations such as @SmallTest, @MediumTest, @LargeTest, etc. These will allow you to target only the annotated methods using the following command:
adb shell am instrument -w -e size <small|medium|large> com.yourproject.test/android.test.InstrumentationTestRunner
Finally, it's possible to target individual test methods or classes directly from within Eclipse.
To run an individual test case directly from command line:
adb shell am instrument -w -e class <Test-Class-With-Package-Name>#<Test-Method-Name> <Package-Name-Of-Test-App>/<Instrumentation-Name-Defined-In-Manifest>
Example:
adb shell am instrument -w -e class com.myapp.test.ActivityFragmentTest#testLogin com.myapp.test/android.test.InstrumentationTestRunner
You can run individual test cases programmatically with "-e" arguments to the "adb shell am instrument" command. For example, for a method 'testFoo()' in 'com.foo.bar.FooTest' you could run:
adb shell am instrument -w \
-e "class com.foo.bar.FooTest#testFoo" \
com.foo.bar.test/android.test.InstrumentationTestRunner
http://developer.android.com/guide/developing/testing/testing_otheride.html
I've created a test runner extending android.test.InstrumentationTestRunner. I'm looking for a way to define the set of tests to get executed based on a set of configurations.
I thought I might be able to override the methods below to return my custom test suite; however, they are not getting called! I'm just wondering what these are for:
public TestSuite getAllTests ()
public TestSuite getTestSuite ()
Any clues? Any other alternatives I can use to define a custom test suite at runtime?
Thanks
I don't know either... Another solution is to define your own annotation and annotate the tests you want to run. Then you can run only the tests with that annotation using:
Filter test run to tests with given annotation: adb shell am instrument -w -e annotation com.android.foo.MyAnnotation com.android.foo/android.test.InstrumentationTestRunner
If used with other options, the resulting test run will contain the intersection of the two options. e.g. "-e size large -e annotation com.android.foo.MyAnnotation" will run only tests with both the LargeTest and "com.android.foo.MyAnnotation" annotations.
Filter test run to tests without given annotation: adb shell am instrument -w -e notAnnotation com.android.foo.MyAnnotation com.android.foo/android.test.InstrumentationTestRunner
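A custom annotation like com.android.foo.MyAnnotation is just a plain Java annotation with runtime retention; a minimal sketch (the names follow the docs excerpt above, Demo is illustrative):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Must be retained at runtime so the instrumentation can see it via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface MyAnnotation {
}

// A test method carrying the annotation; the runner's -e annotation filter
// matches methods exactly like this one.
class Demo {
    @MyAnnotation
    public void testSomething() {}
}
```

With this in place, -e annotation com.android.foo.MyAnnotation selects testSomething, and -e notAnnotation com.android.foo.MyAnnotation excludes it.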