I have successfully uploaded and run my tests on AWS Device Farm. Locally, I'm using fun things like @Test(enabled = false, dependsOnGroups = "Login") to mark which tests should run at any given time and what order they should execute in. Locally, this all works fine and dandy as expected. The problem happens after I upload the zip of the Maven build to Device Farm and perform a test run.
Looking at the logs from Device Farm, it doesn't care whether "enabled" is set to true or false; it'll run things regardless. It also ignores the "groups" and "dependsOnGroups" annotation fields. This is super important, since all the other tests will fail if I'm not logged in first. Worse, the subsequent failing tests are not skipped, so AWS is happily charging me more money for them.
I tried using @Test(priority = blah), but it's ignoring that too. The only things it seems to respect are @BeforeSuite and @AfterSuite.
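For reference, a minimal sketch of the kind of local setup described above (class and method names are placeholders):

    import org.testng.annotations.Test;

    public class LoginDependentTests {

        // Tags this test with the "Login" group so other tests can depend on it.
        @Test(groups = "Login")
        public void login() {
            // ... log in to the app under test ...
        }

        // Locally, TestNG only runs this after the "Login" group has passed.
        @Test(dependsOnGroups = "Login")
        public void checkout() {
            // ... exercise a flow that requires being logged in ...
        }

        // Disabled locally, but Device Farm runs it anyway.
        @Test(enabled = false)
        public void workInProgress() {
            // ... not ready yet ...
        }
    }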
Anyone run into this or have any ideas why this is happening?
I am an engineer working on AWS Device Farm.
1) "enabled" annotation flag
I just verified that you are correct about our TestNG parser ignoring the "enabled" flag on annotations and always including the test, even if it is disabled. At first glance, this appears to be a simple issue with a straightforward fix. Assuming the best-case scenario, we will try to have it fixed and live in production as soon as we can.
2) "dependsOnGroups" annotation field
The answer to this one is a bit more complex. As of today, AWS Device Farm does not support the dependsOnGroups or dependsOnMethods annotation fields.
There are a few reasons for this, the main one being that AWS Device Farm executes each @Test method individually, each one using a fresh instance of the Appium server. There are pros/cons to this approach that I won't delve into here, but I will say that it has benefits on both a technical and a feature level. When executing an individual method with the TestNG runner, it will only load the context of that method, and not all of the tests/suites/groups of the specified dependencies found within the .jar file. Additionally, since each of these test methods is executed in a different Java process each time, the TestNG runner does not maintain any "state" in memory. This means that it does not know that it has already executed a test previously (in a different process), so it will attempt to run it again.
3) Executing "groups" of tests
We currently do not expose groups/excludegroups to the user to allow them to pick specific collections of tests to execute or skip. However, this should be possible by configuring <groups> entries in a testng.xml file placed in the root of the *-tests.jar file that is uploaded in your test package archive. In this case, our parser should only "discover" tests defined within those groups and not all @Test-annotated methods. I have not tried this specific case, however, so YMMV.
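Something along these lines in a testng.xml at the root of the *-tests.jar is what I have in mind (the group and package names below are placeholders):

    <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
    <suite name="DeviceFarmSuite">
      <test name="GroupedTests">
        <groups>
          <run>
            <include name="Login"/>
            <include name="Smoke"/>
          </run>
        </groups>
        <packages>
          <package name="com.example.tests"/>
        </packages>
      </test>
    </suite>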
Hope that helps! If you have any additional questions or have a specific run/test package you'd like us to look at, feel free to reach out or paste a URL to a previously executed run.
I added testng.xml in the root of *-tests.jar and checked.
But Device Farm is not running the tests listed in testng.xml. It's still running all the classes with the @Test annotation.
Related
Firstly, apologies for the ridiculous title but I really could not come up with something more accurate.
I'm writing an app that interfaces with a particular type of network device. I can write instrumented tests that interact with said device just fine, but those tests obviously only work if the device is accessible, i.e. if the system running the tests is on the same network. Hence, using Firebase to run the tests is impossible, because Firebase does not have access to the type of device the app interacts with.
However, it leads me back to a more generic question. How do you handle instrumented tests for functionality that is not publicly accessible? Be it special devices or networks, login credentials (hardcoding login credentials seems wrong), etc.
Is there any way to mock them? So say when you press a button you can mock a positive result?
Hope this made a sliver of sense.
Thanks
Sure, you can.
For example, with a mocking framework - https://github.com/mockito/mockito
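A minimal sketch of the button-press case (the interface and class names here are made up for illustration, not from your app):

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class ButtonPressTest {

        // Hypothetical interface the app uses to talk to the network device.
        interface DeviceClient {
            boolean sendCommand(String command);
        }

        // Hypothetical class under test that calls the device when a button is pressed.
        static class ButtonPresenter {
            private final DeviceClient device;
            ButtonPresenter(DeviceClient device) { this.device = device; }
            boolean onButtonPressed() { return device.sendCommand("power_on"); }
        }

        @Test
        public void buttonPress_reportsSuccess() {
            // Replace the real device on the network with a mock...
            DeviceClient device = mock(DeviceClient.class);
            // ...and give it a canned positive result.
            when(device.sendCommand("power_on")).thenReturn(true);

            assertTrue(new ButtonPresenter(device).onButtonPressed());
        }
    }

The idea is to hide the device behind an interface and inject it, so the test never needs the real network.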
I am heavily testing my application with unit tests and Espresso tests. My next step is to make sure my application hits all required apis. For that reason I am looking for a way to write a test, which will verify the api calls.
I would really appreciate any suggestions.
What you are describing is called a "unit test". Unit tests are meant to test as many lines of code as possible regardless of UI.
Espresso tests are "instrumentation tests" (or "UI tests") intended to check if the app is responding to UI events correctly. They're not meant to verify the correctness of code, but the correctness of the functionality of the app as used by the user.
You can read about both in the official documentation. You'll find that unit tests are very different from instrumentation tests, and often harder to write because they require more engineering of your code to do correctly. You will likely have to "mock" the various parts of your application to make sure their APIs were called exactly as you expected.
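For example, a small Mockito sketch of that kind of verification (the interface and class names are made up for illustration):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.Test;

    public class AnalyticsCallTest {

        // Hypothetical API wrapper that the code under test depends on.
        interface AnalyticsApi {
            void trackEvent(String name);
        }

        // Hypothetical class under test, with the dependency injected.
        static class CheckoutScreen {
            private final AnalyticsApi analytics;
            CheckoutScreen(AnalyticsApi analytics) { this.analytics = analytics; }
            void close() { analytics.trackEvent("checkout_closed"); }
        }

        @Test
        public void close_sendsAnalyticsEvent() {
            AnalyticsApi analytics = mock(AnalyticsApi.class);

            new CheckoutScreen(analytics).close();

            // Fails the test unless trackEvent was called exactly as expected.
            verify(analytics).trackEvent("checkout_closed");
            // Mockito.verify(analytics, Mockito.atLeast(n)) can be used instead
            // when a call should happen at least n times.
        }
    }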
There were 2 main goals when I was writing the API tests:
First is component-based: the goal was to make sure each class/component makes an API call when certain criteria are met (for example, calling an API A when onDestroy() is called).
Second is to make sure the APIs are called in a certain order, for analytics purposes.
The first step I achieved by using unit tests with injected mock objects via Mockito and PowerMockito. PowerMockito was used primarily to mock static methods and to make sure the methods were called at least n times.
For the second step, a UI test could be used, since it runs the real application. I implemented a helper class that recorded when API requests were made. The Espresso script then validated the order of API calls by referring to the helper class.
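A rough sketch of that kind of helper (the names here are made up, not the actual implementation):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Production code calls ApiCallRecorder.record(...) whenever it fires a request;
    // the Espresso test asserts on the recorded order after driving the UI.
    public final class ApiCallRecorder {

        private static final List<String> CALLS =
                Collections.synchronizedList(new ArrayList<String>());

        private ApiCallRecorder() {}

        public static void record(String apiName) {
            CALLS.add(apiName);
        }

        public static List<String> recordedCalls() {
            synchronized (CALLS) {
                return new ArrayList<>(CALLS);
            }
        }

        public static void reset() {
            CALLS.clear();
        }
    }

The Espresso test calls reset() in setUp(), clicks through the flow, and then asserts that recordedCalls() matches the expected sequence.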
When running Android connected device tests, state that persists across application instances, such as permissions and files stored by the application, causes tests to interfere with each other.
For example, if I want to write a test for application behaviour when I deny runtime permissions and another test for application behaviour when I allow runtime permissions, then I must be very careful that the tests run in the correct order. If the allow test ran before the deny test, the deny test would fail, because the permission would already have been granted.
Another example: in a shopping app, the application may store the contents of the basket in the app's internal file storage to allow the basket to survive application termination and reboots. Testing the behaviour of the shopping basket then becomes very difficult, as the tests interfere with each other.
What is the solution to this problem?
Be sure to clean up the state after each test case. Tests that depend on the run order are considered a bad practice. In a lot of cases, you can implement a teardown method (annotated with @After if you are using JUnit 4) to do the clean-up.
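For example, a teardown sketch for the file-storage case above (the shared-preferences name is an assumption, not from your app):

    import android.content.Context;

    import androidx.test.core.app.ApplicationProvider;

    import org.junit.After;

    public class BasketPersistenceTest {

        // Runs after each test so the next test starts from a clean slate.
        @After
        public void tearDown() {
            Context context = ApplicationProvider.getApplicationContext();

            // Remove any files the app wrote to internal storage (e.g. the saved basket).
            for (String fileName : context.fileList()) {
                context.deleteFile(fileName);
            }

            // Clear shared preferences too ("basket_prefs" is a hypothetical name).
            context.getSharedPreferences("basket_prefs", Context.MODE_PRIVATE)
                   .edit()
                   .clear()
                   .commit();
        }
    }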
Create mock state objects during tests that can be injected into your app. I'm still new to this particular approach, so I don't have a lot of advice here. Some googling should help you get started.
Mocking the state / injecting special state objects for the tests is a solution for most problems, but not for the runtime permission case.
I don't know if I'm just really rusty with JUnit or if there is a concept with Android testing in particular that I'm not familiar with, but:
I'm finding it very difficult to understand how my tests get run.
I've created a Test Project based on my main project, and created a class which extends ActivityInstrumentationTestCase2<SinglePaneActivity> and in this Test Case I've implemented setUp(), testPort() and tearDown() methods.
When I run the project as an Android JUnit test, it all tests correctly.
However, adding another class extending ServiceTestCase<NativeService> with the same setUp(), testStart() and tearDown() methods implemented, the test isn't performed.
Looking through the documentation I can't find anything which states how the tests are run; I'm assuming, since there is no specific setup, that it's done via reflection.
Given that is the case, however, I don't understand the documentation on TestSuites, or why my Service test case isn't running.
Am I the only one that's finding the usually very well written Android Documentation lacking when it comes to testing?
As @Blackbelt says, there was a warning in the logs indicating that the test wasn't running.
My problem was that I had used the constructor auto-generated for me by Eclipse:
public NativeServiceTestCase(Class<NativeService> activityClass) {
And the error was output as a warning in LogCat explaining that you need to have a zero-argument constructor (but that error wasn't repeated anywhere else with any visibility).
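In other words, the fix is something along these lines (the zero-argument constructor that ServiceTestCase expects):

    public NativeServiceTestCase() {
        super(NativeService.class);
    }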
I am using Robotium Solo to test the app, since I'm fairly new to app testing and to Robotium.
I have 3 methods in my test case; however, I want to run those tests only under certain conditions, otherwise they fail.
I can do that if I write the entire test in one method, but I don't want to run it in one method; I want to separate it into 3 methods.
How do I make sure that the tests run only when I call the test methods, and not one after the other?
In a Robotium test application, every public method whose name starts with the keyword 'test' will be considered one test case.
So you can try using 2-3 methods with names starting with the keyword 'test' (e.g. testA(), testB(), etc.).
That way, even if testA() fails, automation testing will continue to run for testB().
And the results will be shown in the JUnit view.
Note: the test cases are executed in alphabetical order by name.
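A minimal sketch of that layout (the activity and method names are placeholders):

    import android.test.ActivityInstrumentationTestCase2;

    import com.robotium.solo.Solo;

    public class MyRobotiumTest extends ActivityInstrumentationTestCase2<MainActivity> {

        private Solo solo;

        public MyRobotiumTest() {
            super(MainActivity.class);
        }

        @Override
        protected void setUp() throws Exception {
            super.setUp();
            solo = new Solo(getInstrumentation(), getActivity());
        }

        // Runs first, because test methods execute in alphabetical order.
        public void testA_firstStep() {
            // ... drive the UI with solo ...
        }

        // Runs second; each test method is an independent test case,
        // so any state it needs has to be set up here (or in setUp()).
        public void testB_secondStep() {
            // ... continue only if its preconditions are met ...
        }

        @Override
        protected void tearDown() throws Exception {
            solo.finishOpenedActivities();
            super.tearDown();
        }
    }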
Hope this helps; ask if you are stuck somewhere.