I have a problem with tests not working on Jenkins - they take too much time.
Jenkins runs a project with more than 200 tests, and some of them fail or hang. How can I set a timeout for a single test so that it is aborted and the run moves on to the next test?
I assume that:
[31mFAILED [0m
junit.framework.AssertionFailedError...]]
means that 31m needs to pass before the test is set to failure.
You might be able to define a timeout for your test class: https://github.com/junit-team/junit/wiki/Timeout-for-tests
Besides that, you should investigate why your tests take so long to fail or succeed, and fix/improve them.
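A minimal sketch, assuming JUnit 4.12+ (the class and method names below are made up): you can either set a per-test timeout or add a Timeout rule that applies to every test in the class, so a hanging test is aborted and the runner moves on to the next one.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class SlowServiceTest {

    // Applies to every test method in this class: a test still running
    // after 60 seconds is aborted and reported as failed, and the runner
    // continues with the next test.
    @Rule
    public Timeout globalTimeout = Timeout.seconds(60);

    // Per-test timeout in milliseconds: this test fails if it has not
    // finished within 5 seconds.
    @Test(timeout = 5000)
    public void fetchesDataWithinFiveSeconds() throws Exception {
        // ... exercise the slow code under test here
    }
}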
I had an issue running my UI test:
@Test
fun firstUi() {
onView(withId(R.id.home_profile_tab)).perform(click())
onView(withId(R.id.enter)).perform(click())
onView(withId(R.id.tvCountryCode)).check(matches(withText("+964")))
}
This test runs and passes.
But the issue is: after the test starts and reaches the first line, the first perform(click()) is executed only after around 90 seconds, and that delay is almost constant - it takes about 90 seconds every time.
After those 90 seconds the remaining lines execute, and the test completes in around 4 seconds and passes successfully.
Based on the Android documentation:
Each time your test invokes onView(), Espresso waits to perform the corresponding UI action or assertion until the following synchronization conditions are met:
The message queue is empty.
There are no instances of AsyncTask currently executing a task.
All developer-defined idling resources are idle.
So how and where can I investigate further to find the root cause of the issue?
Or what am I doing wrong?
With the help of this post I found what my issue was.
I live in Iran, and because of sanctions most external services are blocked here.
So I looked at the Frames view in the debugger and found that some AsyncTask instances belonging to certain SDKs were trying to send requests to their servers, but because of the block they couldn't.
So they kept retrying; after roughly 90 seconds without a successful request they gave up, Espresso's synchronization conditions were finally met, and the test continued running.
My solution was to use a VPN, so the SDKs could successfully reach their servers.
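A rough way to see this from inside a test (a sketch only; the class and log tag are hypothetical, not part of Espresso): dump all live threads right before the first onView() call and look for busy "AsyncTask #N" pool threads and where they are blocked.

import android.util.Log;
import java.util.Map;

public final class ThreadDumper {

    private static final String TAG = "ThreadDumper"; // hypothetical log tag

    // Logs every live thread with its state and top stack frame, so you can
    // spot AsyncTask pool threads stuck in network retries while Espresso waits.
    public static void dumpThreads() {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : stacks.entrySet()) {
            Thread t = entry.getKey();
            StackTraceElement[] frames = entry.getValue();
            String top = frames.length > 0 ? frames[0].toString() : "<no frames>";
            Log.d(TAG, t.getName() + " [" + t.getState() + "] at " + top);
        }
    }
}

Calling ThreadDumper.dumpThreads() at the start of firstUi() and reading the output in logcat should show which background threads are still working while Espresso is waiting.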
When I run the full Android CTS suite using the command below:
run cts --plan CTS
Every time it shows different results for some of the packages: some tests in some packages pass/fail randomly each time I re-run the full suite. But when I run a package individually (one in which some tests had failed), all the tests in it pass.
Why am I seeing this behavior?
Environment:
OS: Android L
CTS version: 5.1_r7
Sometimes a test fails randomly because the condition it checks is sometimes satisfied and sometimes not, and sometimes a test fails simply because of a timeout.
Some CTS tests involve specific timeouts for some event to occur. For example, if you are running a CTS test related to data calls, such as turning mobile data off/on, and the timeout for mobile data to connect is set to 10 seconds, then sometimes this test will pass and sometimes it will fail. In this case, increasing that timeout will resolve the issue.
Regarding the issue of a test case failing only when running multiple packages: there is a possibility that the test case before the failing one did not leave the device in a neutral/original state for the next test. It is good practice to revert all changes made during a test when exiting a test case.
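A sketch of that practice (the test class and the mobile-data helpers below are hypothetical, not actual CTS APIs): remember the original state in setUp() and restore it in tearDown(), so the next test case starts from a neutral state even if this one fails.

import android.test.AndroidTestCase;

public class MobileDataToggleTest extends AndroidTestCase {

    private boolean originalMobileDataEnabled;

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Remember the state we are about to change.
        originalMobileDataEnabled = isMobileDataEnabled(); // hypothetical helper
    }

    public void testBehaviourWithMobileDataOff() throws Exception {
        setMobileDataEnabled(false); // hypothetical helper
        // ... assertions about behaviour with mobile data off ...
    }

    @Override
    protected void tearDown() throws Exception {
        // Revert everything the test changed, even if it failed.
        setMobileDataEnabled(originalMobileDataEnabled);
        super.tearDown();
    }

    private boolean isMobileDataEnabled() { /* query ConnectivityManager; omitted */ return true; }

    private void setMobileDataEnabled(boolean enabled) { /* toggle via settings/shell; omitted */ }
}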
I have successfully uploaded and run my tests on AWS Device Farm. Locally, I'm using fun things like @Test(enabled = false, dependsOnGroups = "Login") to mark which tests to run at the time, and in what order they should execute. Locally, this all works fine and dandy as expected. The problem happens after I upload the zip of the Maven build to Device Farm and perform a test run.
Looking at the logs from Device Farm, it doesn't care whether "enabled" is set to true or false; it'll run things regardless. It also ignores the "groups" and "dependsOnGroups" annotation attributes. This is super important, since all other tests will fail if I'm not logged in first. Worse, the subsequent failing tests will not be skipped, so AWS is happily charging me more money for this.
I tried using @Test(priority=blah), but it ignores that too. The only things it seems to respect are @BeforeSuite and @AfterSuite.
Anyone run into this or have any ideas why this is happening?
I am an engineer working on AWS Device Farm.
1) "enabled" annotation flag
I just verified that you are correct about our TestNG parser ignoring the "enabled" flag on annotations and always including the test, even if it is disabled. At first glance, this appears to be a simple issue with a straightforward fix. Assuming the best-case scenario, we will try to have it fixed and live in production as soon as we can.
2) "dependsOnGroups" annotation field
The answer to this one is a bit more complex. As of today, AWS Device Farm does not support the dependsOnGroups or dependsOnMethods annotation fields.
There are a few reasons for this, with the main reason being that AWS Device Farm executes each @Test method individually, each one using a fresh instance of the Appium server. There are pros/cons to this approach that I won't delve into here, but I will say that it does have benefits on both a technical and feature level. When executing individual methods with the TestNG runner, it will only load the context of an individual method, and not all tests/suites/groups of the specified dependencies found within the .jar file. Additionally, since each one of these test methods is executed in a different Java process each time, the TestNG runner does not maintain any "state" in memory. This means that it does not know that it has already executed a test previously (in a different process) so it will attempt to run it again.
3) Executing "groups" of tests
We currently do not expose groups/excludegroups to the user to allow them to pick specific collections of tests to execute or skip. However, this should be possible by configuring your <groups> entries in a testng.xml file placed in the root of your *-tests.jar file that is uploaded in your test package archive. In this case, our parser should only "discover" tests defined within those groups and not all @Test annotated methods. I have not tried this specific case myself, however, so YMMV.
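A sketch of that layout (the suite, group, and class names below are made up): a minimal testng.xml placed in the root of the *-tests.jar might look like this, with only the listed groups included in the run.

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="DeviceFarmSuite">
  <test name="GroupedTests">
    <groups>
      <run>
        <include name="Login"/>
        <include name="Smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
      <class name="com.example.tests.SmokeTests"/>
    </classes>
  </test>
</suite>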
Hope that helps! If you have any additional questions or have a specific run/test package you'd like us to look at, feel free to reach out or paste a URL to a previously executed run.
I added a testng.xml in the root of the *-tests.jar and checked.
But Device Farm is not running the tests listed in testng.xml. It's still running all the classes with the @Test annotation.
I need to test a use case where the application starts from a clean state - i.e. the process has not been running before the test starts. From what I see in logcat, all instrumentation tests run in one single process instance/session, so the outcome of my test depends on whether or not it runs first. It should not be this way - as we all know, unit tests (and instrumentation tests) should be autonomous.
Is there any way, with the standard Android instrumentation test tools and functions, that I can force the TestRunner to restart the process before a given test? If not, are there hacks or third-party libraries that can help me achieve that? Or is there any way I can specifically say that test X must be run first (the worst option, but still)?
Specifically, my test relates to launching activities through intents, and the intent flags (e.g. FLAG_ACTIVITY_CLEAR_TOP), together with the Activity launch mode (e.g. singleTop) and the state of the process, very much dictate the outcome of the test.
Assuming you are running with Espresso, there is no clean way to pull this off. This is because Espresso runs in the same process as the application, and thus killing the app will kill Espresso.
The question is: do you need all the logic you want to execute in your Application, or could it be ported to your Activity.onCreate()? With Espresso, restarting an Activity is doable. If the application needs to be restarted because of globals/singletons, removing these may be necessary. If this cannot be done, you can look at other test automation frameworks like Appium, which has some support for this.
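A sketch of the Activity-restart part (assuming the AndroidX Test ActivityScenario API, which may be newer than the setup in the question; MainActivity is a placeholder): each test can launch and recreate the Activity itself, so the Activity, though not the process, starts fresh.

import androidx.test.core.app.ActivityScenario;
import org.junit.Test;

public class FreshActivityTest {

    @Test
    public void startsFromAFreshActivity() {
        // Launches the Activity for this test only; closing the scenario
        // finishes it again afterwards.
        try (ActivityScenario<MainActivity> scenario = ActivityScenario.launch(MainActivity.class)) {
            // recreate() destroys and recreates the Activity, running
            // onCreate() again without killing the process.
            scenario.recreate();
            // ... Espresso actions/assertions against the fresh Activity ...
        }
    }
}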
I am using Robotium Solo to test the app, and I'm fairly new to app testing and Robotium.
I have 3 methods in my test case; however, I want to run those tests only under certain conditions, otherwise they fail.
I can do that if I write the entire test in one method, but I don't want to run it in one method - I want to separate it into 3 methods.
How do I make sure that the tests run only when I call the test methods, and not one after the other?
In a Robotium test application, every public method whose name starts with the keyword 'test' is considered one test case.
So you can use 2-3 methods with names starting with 'test' (e.g. testA(), testB(), etc.).
Then, even if testA() fails, the automated testing will continue and run testB().
The results will be shown in the JUnit view.
Note: the test cases are executed in alphabetical order of their names.
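A minimal sketch of that convention (MainActivity, the button texts, and the asserted strings are placeholders, not from your app): each public method starting with 'test' is an independent test case, and they run in alphabetical order.

import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    // Runs first (alphabetical order) and passes or fails on its own.
    public void testALogin() {
        solo.clickOnButton("Login");
        assertTrue(solo.waitForText("Welcome"));
    }

    // Runs second even if testALogin() failed.
    public void testBOpenSettings() {
        solo.clickOnMenuItem("Settings");
        assertTrue(solo.waitForText("Settings"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}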
Hope this helps, ask if you are stuck somewhere.