Jake Wharton delivered a fascinating talk in which he proposes some smart ways to improve our UI tests by abstracting the details of how the UI is driven out of the tests themselves: https://news.realm.io/news/kau-jake-wharton-testing-robots/
An example he gives is a test that looks as follows, where the PaymentRobot object contains the detail of how the payment amount and recipient are entered into the UI. Putting that in one place makes a lot of sense: when the UI inevitably changes (e.g. a field ID is renamed, or a TextEdit becomes a TextInputLayout), it only needs updating in one place rather than across a whole series of tests. It also makes the tests much terser and more readable. He proposes using Kotlin to make them terser still. I do not use Kotlin but still want to benefit from this approach.
@Test public void singleFundingSourceSuccess() {
    PaymentRobot payment = new PaymentRobot();

    ResultRobot result = payment
            .amount(42_00)
            .recipient("foo@bar.com")
            .send();

    result.isSuccess();
}
He provides an outline of how the Robot class might be structured, with an explicit isSuccess() assertion, and with each method returning another Robot representing either the next screen or the state of the current one:
class PaymentRobot {
    PaymentRobot amount(long amount) { ... }
    PaymentRobot recipient(String recipient) { ... }
    ResultRobot send() { ... }
}

class ResultRobot {
    ResultRobot isSuccess() { ... }
}
My questions are:
How does the Robot interface with the Activity/Fragment, and specifically where is it instantiated? I would expect that to happen in the test via the runner, but his examples seem to suggest otherwise. The approach looks like it could be very useful, but I do not see how to implement it in practice, either for a single Activity/Fragment or for a sequence of them.
How can this approach be extended so that the isSuccess() method can handle various scenarios? E.g. if we're testing a login screen, how can isSuccess() handle expected results like authentication success, API network failure, and auth failure (e.g. a 403 server response)? Ideally the API would be mocked behind Retrofit, and each result tested with an end-to-end UI test.
I've not been able to find any examples of implementation beyond Jake's overview talk.
I had totally misunderstood how Espresso works, which led to even more confusion in my mind about how to apply it to the page object pattern. I now see that Espresso does not require any kind of reference to the Activity under test; it simply operates within the context of the runner rule. For anyone else struggling, here is a fleshed-out example of applying the robot/page object pattern to a test of the validation on a login screen with a username and password field, where we test that an error message is shown when either field is empty:
LoginRobot.java (to abstract automation of the login activity)
// Espresso static imports used below:
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.replaceText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

public class LoginRobot {

    public LoginRobot() {
        onView(withId(R.id.username)).check(matches(isDisplayed()));
    }

    public void enterUsername(String username) {
        onView(withId(R.id.username)).perform(replaceText(username));
    }

    public void enterPassword(String password) {
        onView(withId(R.id.password)).perform(replaceText(password));
    }

    public void clickLogin() {
        onView(withId(R.id.login_button)).perform(click());
    }
}
(Note that the constructor asserts that the current screen is the screen we expect.)
LoginValidationTests.java:
@LargeTest
@RunWith(AndroidJUnit4.class)
public class LoginValidationTests {

    @Rule
    public ActivityTestRule<LoginActivity> mActivityTestRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void loginPasswordValidationTest() {
        LoginRobot loginPage = new LoginRobot();
        loginPage.enterPassword("");
        loginPage.enterUsername("123");
        loginPage.clickLogin();
        onView(withText(R.string.login_bad_password))
                .check(matches(withEffectiveVisibility(ViewMatchers.Visibility.VISIBLE)));
    }

    @Test
    public void loginUsernameValidationTest() {
        LoginRobot loginPage = new LoginRobot();
        loginPage.enterUsername("");
        loginPage.enterPassword("123");
        loginPage.clickLogin();
        onView(withText(R.string.login_bad_username))
                .check(matches(withEffectiveVisibility(ViewMatchers.Visibility.VISIBLE)));
    }
}
Abstracting the mechanics of automating the UI like this avoids a mass of duplication across tests, and also means changes are less likely to need to be reflected across many tests. E.g. if a layout ID changes, it only needs updating in the robot class, not in every test that refers to that field. The tests are also significantly shorter and more readable.
The robot methods, e.g. the login button method, can return the next robot in the chain (i.e. the one operating on the activity after the login screen). For example, LoginRobot.clickLogin() can return a HomeRobot (for the app's main home screen).
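For example, clickLogin() above could hand back the next screen's robot (a sketch; HomeRobot and R.id.home_toolbar are illustrative names):

public HomeRobot clickLogin() {
    onView(withId(R.id.login_button)).perform(click());
    return new HomeRobot(); // the test continues with the home screen's robot
}

public class HomeRobot {
    public HomeRobot() {
        // Same guard pattern as LoginRobot: assert we actually reached the home screen.
        onView(withId(R.id.home_toolbar)).check(matches(isDisplayed()));
    }
}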
I've put the assertions in the tests, but if assertions are reused in many tests it might make sense to abstract some into the robot.
In some cases it may make sense to use a view model object to hold a set of fake data that is reused across tests. For example, if a registration screen with many fields is exercised by many tests, it may be worth building a factory that creates a RegistrationViewModel containing the first name, last name, email address etc., and referring to that in tests rather than duplicating the setup code.
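A rough sketch of such a factory (RegistrationViewModel and its fields are hypothetical names for your own data holder):

public class RegistrationViewModel {
    public final String firstName;
    public final String lastName;
    public final String email;

    public RegistrationViewModel(String firstName, String lastName, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    // Factory for a known-good default user, so tests don't repeat these literals.
    public static RegistrationViewModel validUser() {
        return new RegistrationViewModel("Jane", "Doe", "jane@example.com");
    }
}

A robot method can then take the whole object, e.g. registrationRobot.enterDetails(RegistrationViewModel.validUser()).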
How does the Robot interface with the Activity/Fragment, and specifically where is it instantiated? I would expect that to happen in the test via the runner, but his examples seem to suggest otherwise. The approach looks like it could be very useful, but I do not see how to implement it in practice, either for a single Activity/Fragment or for a sequence of them.
The Robot is a utility class used only for the purposes of the test. It is not production code, and it is not part of your Fragment/Activity or whatever you want to use. Jake's analogy is pretty much perfect: the Robot acts like a person interacting with the screen of the app, so the exposed API of a Robot should be screen-specific, regardless of what the implementation underneath is. It can span multiple activities, fragments, dialogs, etc., or it can reflect an interaction with just a single component. It really depends on your application and the test cases you have.
How can this approach be extended so that the isSuccess() method can handle various scenarios? E.g. if we're testing a login screen, how can isSuccess() handle expected results like authentication success, API network failure, and auth failure (e.g. a 403 server response)? Ideally the API would be mocked behind Retrofit, and each result tested with an end-to-end UI test.
The API of your Robot should specify the what of your tests, not the how. So from the perspective of using a Robot, it wouldn't matter whether you got an API network failure or an auth failure: that distinction is the how ("how did you end up with a failure?"). A human QA tester (=== Robot) wouldn't inspect the HTTP stream to tell an API failure from an HTTP timeout; he/she would only see that your screen said Failure. The Robot should likewise only care whether the result was a Success or a Failure.
The other thing you might want to test here is whether your app showed a message notifying the user of a connection error (regardless of the exact cause):
class ResultRobot {
    ResultRobot isSuccess() { ... }
    ResultRobot isFailure() { ... }
    ResultRobot signalsConnectionError() { ... }
}
result.isFailure().signalsConnectionError();
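For instance, those assertions could be implemented with the usual Espresso checks, in the same style as the LoginRobot above (the IDs and string resources here are placeholders):

class ResultRobot {
    ResultRobot isFailure() {
        // The result screen is assumed to show a status label.
        onView(withId(R.id.result_status)).check(matches(withText(R.string.failure)));
        return this;
    }

    ResultRobot signalsConnectionError() {
        onView(withText(R.string.connection_error))
                .check(matches(withEffectiveVisibility(ViewMatchers.Visibility.VISIBLE)));
        return this;
    }
}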
I'm trying to learn the Arrow library and improve my functional programming by transitioning some of my Android Kotlin code from an imperative style to a functional style. I've been doing a type of MVI programming in the application to make testing simpler.
"Traditional" Method
ViewModel
My view model exposes a LiveData of the view's state, plus a public method for passing user interactions from the view to the view model, so the view model can update state in whatever way is appropriate.
class MyViewModel : ViewModel() {
    val state = MutableLiveData(MyViewState()) // MyViewState is a data class with relevant data

    fun instruct(intent: MyIntent) { // MyIntent is a sealed class of data classes representing user interactions
        return when (intent) {
            is FirstIntent -> viewModelScope.launch(Dispatchers.IO) {
                val result = myRoomRepository.suspendFunctionManipulatingDatabase(intent.myVal)
                updateStateWithResult(result)
            }.run { Unit }
            is SecondIntent -> updateStateWithResult(intent.myVal)
        }
    }
}
Activity
The Activity subscribes to the LiveData and, on changes to state, runs a render function with the state. The activity also passes user interactions to the view model as intents (not to be confused with Android's Intent class).
class MyActivity : AppCompatActivity() {
    private val viewModel = MyViewModel()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        viewModel.state.observe(this, Observer { render(it) })
        myWidget.onClickObserver = {
            viewModel.instruct(someIntent)
        }
    }

    private fun render(state: MyViewState) { /* update view with state */ }
}
Arrow.IO Functional Programming
I'm having trouble finding examples that aren't way over my head using Arrow's IO monad to make impure functions with side effects obvious and unit-testable.
View Model
So far I have turned my view model into:
class MyViewModel : ViewModel() {
    // ...
    fun instruct(intent: MyIntent): IO<Unit> {
        return when (intent) {
            is FirstIntent -> IO.fx {
                val (result) = effect { myRoomRepository.suspendFunctionManipulatingDatabase(intent.myVal) }
                updateStateWithResult(result)
            }
            is SecondIntent -> IO { updateStateWithResult(intent.myVal) }
        }
    }
}
I do not know how I am supposed to make this IO code run in Dispatchers.IO like I've been doing with viewModelScope.launch. I can't find an example of how to do this with Arrow. The examples that make API calls all seem to be something other than Android apps, so there is no guidance about Android UI vs IO threads.
View model unit test
Now, one benefit I'm seeing to this shows up in my view model's unit tests: I can mock the repository and check whether suspendFunctionManipulatingDatabase is called with the expected parameter.
@Test
fun myTest() {
    val result: IO<Unit> = viewModel.instruct(someIntent)
    result.unsafeRunSync()
    // verify suspendFunctionManipulatingDatabase argument was as expected
}
Activity
I do not know how to incorporate the above into my Activity.
class MyActivity : AppCompatActivity() {
    private val viewModel = MyViewModel()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        viewModel.state.observe(this, Observer { render(it) })
        myWidget.onClickObserver = {
            viewModel.instruct(someIntent).unsafeRunSync() // Is this how I should do it?
        }
    }
    // ...
}
My understanding is that anything in an IO block does not run right away (i.e., it's lazy). You have to call attempt() or unsafeRunSync() to have the contents evaluated.
Calling viewModel.instruct from the Activity means I need to create some scope and invoke it in Dispatchers.IO, right? Is this Bad(TM)? I was able to confine coroutines completely to the view model using the "traditional" method.
Where do I incorporate Dispatchers.IO to replicate what I did with viewModelScope.launch(Dispatchers.IO)?
Is this the way you're supposed to structure a unit test when using Arrow's IO?
That's a really good post to read indeed. I'd also recommend digging into this sample app I wrote, which also uses Arrow Fx:
https://github.com/JorgeCastilloPrz/ArrowAndroidSamples
Note how we build the complete program using fx, returning Kind at all levels of the architecture. That makes the code polymorphic over the type F, so you can run it with different runtime data types for F, depending on the environment. In this case we end up running it using IO at the edges. That's the activity here, but it could also be the application class or a fragment; think of these as the entry points to your app. If we were talking about JVM programs, the equivalent would be main(). This is just an example of how to write polymorphic programs, though: you could use IO.fx instead and return IO everywhere, if you want to keep things simpler.
Note how we use continueOn() in the data source inside the fx block to leave and come back to the main thread. Coroutine context changes are explicit in Arrow Fx: the computation jumps to the passed dispatcher right after the continueOn and stays there until you deliberately switch to a different one, which intentionally makes thread changes visible in the code.
You could inject those dispatchers and use different ones in tests. Hopefully I can provide examples of this in the repo soon, but you can probably imagine how it would look.
As for how to write the tests: note that your program will return Kind (if you go polymorphic) or IO, so you would unsafeRunSync it from tests (versus unsafeRunAsync or unsafeRunAsyncCancellable in production code, since Android needs it to be asynchronous). That is because we want our tests to be synchronous and blocking (and for the latter we need to inject the proper dispatchers).
Current caveats: the solution proposed in the repo still doesn't take care of cancellation, lifecycle, or surviving configuration changes. That's something I'd like to address soon. Using ViewModels in a hybrid style might have a chance, and this is Android, so I wouldn't fear hybrid styles if they bring better productivity. Another alternative I have in mind would be a bit more functional: ViewModels end up retaining themselves using the existing retained-configuration APIs under the hood, via the ViewModelStore. That ultimately sounds like a simple cache, which is definitely a side effect and could be implemented wrapped in IO. I want to give this some thought.
I would definitely also recommend reading the complete Arrow Fx docs for a better understanding: https://arrow-kt.io/docs/fx/ I think they will be helpful.
For more thoughts on applying functional programming and Arrow to Android, you can take a look at my blog https://jorgecastillo.dev/ ; my plan is to write deep content around this starting in 2020, since a lot of people are interested.
On the other hand, you can find me or any of the other Arrow maintainers in the Kotlinlang JetBrains Slack, where we can have more detailed conversations or try to resolve any doubts you have: https://kotlinlang.slack.com/
As a final clarification: functional programming is just a paradigm that resolves generic concerns like asynchrony, threading, concurrency, dependency injection, error handling, etc. Those problems can be found in any program, regardless of the platform, even within an Android app. That is why FP is as valid an option for mobile as any other, but we are still exploring how to provide the best APIs for the usual Android needs in a more ergonomic way. We are in the middle of that exploration, and 2020 is going to be a very promising year.
Hopefully this helped! Your thoughts seem to be well aligned with how things should work in this approach overall.
I am trying to test a login page using Espresso in Android. So far I have identified the test cases that my code should perform:
Enter User Name
Enter Password
Press Submit
Check Button text changed to "Verifying..."
This is my test case
@LargeTest
@RunWith(AndroidJUnit4.class)
public class LoginTest {

    private String userName;
    private String userPass;

    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<LoginActivity>(LoginActivity.class);

    @Before
    public void assignCredentials() {
        userName = "ABC";
        userPass = "ABC";
    }

    @Test
    public void buttonTextChanged() {
        onView(withId(R.id.edittext_user))
                .perform(typeText(userName));
        onView(withId(R.id.edittext_pass))
                .perform(typeText(userPass));
        onView(withId(R.id.submit_login))
                .perform(click())
                .check(matches(withText("Verifying...")));
    }
}
To add: the login button text actually changes to Verifying... while the system is checking the credentials with the server, and once done the text changes back to LOGIN. Every time I run it, the test fails, reporting that the actual text is LOGIN. I am assuming this is happening because of the delay, and Espresso could not catch the intermediate state. As I am new to testing, I would be grateful if you could explain how this problem can be resolved, or what approach I should take for this kind of scenario.
There are a few things I would try in your case:
API Call
Generally speaking you should avoid "contacting" the real server when performing tests. You need to isolate the thing you're really testing, so you cannot just hope that the API call succeeds: you need to be sure what comes back from your backend.
There are two ways people usually fix this:
Mock the API call to return the proper thing without even touching the network stack
Mock the HTTP server
The exact implementation of either approach depends on how your app is designed.
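For the second option, if your app uses Retrofit, OkHttp's MockWebServer is a common choice. A minimal sketch (the response body and the base-URL wiring are assumptions about your app):

import java.io.IOException;
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import org.junit.After;
import org.junit.Before;

public class LoginServerMock {
    private MockWebServer server;

    @Before
    public void startServer() throws IOException {
        server = new MockWebServer();
        // Canned reply for the login request; the JSON shape is hypothetical.
        server.enqueue(new MockResponse().setResponseCode(200).setBody("{\"token\":\"fake\"}"));
        server.start();
        // Point the app's Retrofit endpoint at server.url("/") instead of the real backend.
    }

    @After
    public void shutDownServer() throws IOException {
        server.shutdown();
    }
}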
Time-related stuff
Although Espresso should manage any events that it needs to wait for, sometimes you need to tell it to delay execution manually.
For that purpose you should use an IdlingResource. A nice, simple explanation of the subject can be found here.
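As an illustration of the mechanism, Espresso's espresso-idling-resource artifact ships a ready-made CountingIdlingResource (where exactly you increment/decrement depends on your app; the name "login" is illustrative):

import androidx.test.espresso.IdlingRegistry;
import androidx.test.espresso.idling.CountingIdlingResource;
import org.junit.After;
import org.junit.Before;

public class LoginIdlingTest {
    // In production code this would live where the login call is made:
    // increment() just before firing the request, decrement() in the callback.
    public static final CountingIdlingResource LOGIN_IDLING =
            new CountingIdlingResource("login");

    @Before
    public void registerIdlingResource() {
        // Espresso now waits until LOGIN_IDLING is idle before each action/assertion.
        IdlingRegistry.getInstance().register(LOGIN_IDLING);
    }

    @After
    public void unregisterIdlingResource() {
        IdlingRegistry.getInstance().unregister(LOGIN_IDLING);
    }
}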
Additionally, if you use any animations in the thing you're testing, you should disable them for the duration of your tests. There are multiple ways of doing so; a simple Google search will give you tons of questions here on StackOverflow.
Espresso calls
I'm not sure if it makes any difference (I would have to look deeper into the Espresso code), but the last thing I would do is separate the two Espresso calls you're performing, to be sure that Espresso executes the click() first and then checks matches, rather than both at the same time:
@Test
public void buttonTextChanged() {
    onView(withId(R.id.edittext_user))
            .perform(typeText(userName));
    onView(withId(R.id.edittext_pass))
            .perform(typeText(userPass));
    onView(withId(R.id.submit_login))
            .perform(click());
    onView(withId(R.id.submit_login))
            .check(matches(withText("Verifying...")));
}
I was wondering whether it is good practice to subclass test cases on Android. I mean, I need to test a lot of Parcelable objects, and I could create a class like GenericParcelableAndroidTestCase to test all of these objects.
I also have a problem implementing it. I have something like this:
public class GenericParcelableTest extends AndroidTestCase {

    private Parcelable p = null;

    GenericParcelableTest(Parcelable p) {
        this.p = p;
    }

    public void testDescribeContents() throws Exception {
        assertEquals(0, p.describeContents());
    }
}
And that:
public class AttachmentTest extends GenericParcelableTest {
    public AttachmentTest() {
        super(new Attachment());
    }
}
Attachment implements Parcelable of course.
It returns this error:
junit.framework.AssertionFailedError: Class GenericParcelableTest has no public constructor TestCase(String name) or TestCase()
I mean, I know that I created no empty constructor, but why would I need one?
And generally, are there any known issues with this approach? If not, why are there so few articles on this topic on the internet (and some actually say it's not a good idea)?
I have this conversation quite often when introducing new team members to unit testing. The way I explain it is by stating that your tests are first-class citizens of your code base (no pun intended): they are susceptible to the same technical debt as any other part of your code base, and they have equivalent (maybe greater?!) importance to the runtime code.
With this mindset, the question begins to answer itself: if it makes sense from an OO perspective to use inheritance (i.e. your subclass is a <name of test superclass>), then subclass away. However, as with any use of inheritance, be careful: the minute you add a test case that doesn't rely upon that superclass behaviour, you may have a code smell.
In this scenario, it's likely (perhaps 90% of the time?) a separation-of-concerns issue within the code being placed under test, i.e. the "unit" under test isn't actually (one) unit but has combinatorial behaviour. Refactoring that code to do one thing would be a good way of allowing your superclass test case to live on. However, watch this superclass test case like a hawk: the minute you see booleans being added to signatures to allow that "similar but not the same" test case to run under your once-unpolluted superclass, you have a tech-debt problem that is no different from one in your runtime code.
At last check, AndroidTestCase depends on an Activity context, so it's probably best described as an integration test, and those regularly have boilerplate superclass behaviour. In this case, try to narrow the focus of your superclass to the use case under test, i.e. extends LoginUseCase or extends LoginScenario, to better "bucket" those subclasses in the first instance. This will help guide would-be extenders as to whether they should be using it for their non-login scenario. Hopefully, conversation will ensue and tech-debt accumulation will be avoided!
Regarding your error: in JUnit 3, do what @Allen recommends; if moving to JUnit 4 with something like Robolectric, then explore using Rules as well as @BeforeClass.
Personal note
I have only felt the need to write test superclasses for pseudo-unit tests that mock an API endpoint (akin to MockWebServer, if you are familiar with that product) and for DAO integration tests where an in-memory db is started and torn down over the lifecycle of each test (warning: slow (but useful) tests!).
junit.framework.AssertionFailedError: Class GenericParcelableTest has no public constructor TestCase(String name) or TestCase()
You get this error because JUnit needs to be able to construct an instance of your test class, and it only knows how to do this using a no-arg or single-String constructor.
Instead of performing initialization in your constructor, you should put it in the setUp() method. This lets you use the default constructor while still initializing the object before the test method is called.
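Applied to your code, one way to do this is a no-arg superclass with an abstract factory method that subclasses override; setUp() then builds the object before each test (a sketch):

public abstract class GenericParcelableTest extends AndroidTestCase {
    private Parcelable p;

    // Each subclass says which Parcelable instance to test.
    protected abstract Parcelable createParcelable();

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        p = createParcelable();
    }

    public void testDescribeContents() throws Exception {
        assertEquals(0, p.describeContents());
    }
}

public class AttachmentTest extends GenericParcelableTest {
    @Override
    protected Parcelable createParcelable() {
        return new Attachment();
    }
}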
I've been trying to cover my Android app with tests and have started using Espresso recently; pretty impressed with it so far. However, most of my app's functionality requires users to be logged in, and since all tests are independent, this requires registering a new user for each test. This works fine, but the time required for each test increases considerably because of it.
I am trying to find a way to register a user once in a class (of tests) and then use that same user account to perform all the tests in that class.
One way I have been able to do this is to have only one test (@Test) method that runs all the other tests in the order I want. However, this is an all-or-nothing approach, since the Gradle connectedAndroidTest task only outputs the results once at the end, without providing info about the intermediate tests that may have passed or failed.
I also tried the @BeforeClass approach, which did not work (no Gradle output from the class where I used it, even with the debug option, and it seemed to take a long time before moving on to the next class of tests).
Is there a better approach to register a user once at the start of a class and then log out once at the end of testing?
Any help appreciated.
Ideally you would test the login/logout functionality in a set of tests that just exercise different login/logout scenarios, and let the other tests focus on other use cases. However, since those other scenarios depend on the user being logged in, one way to solve this would be to provide a mock version of the app component that handles login. For the login-dependent tests, you would inject this mock at the start, and it would return mock user credentials that the rest of the app can work with.
Here's an example where Dagger, Mockito and Espresso are used to accomplish this: https://engineering.circle.com/instrumentation-testing-with-dagger-mockito-and-espresso-f07b5f62a85b
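Stripped of the Dagger specifics, the core idea looks something like this (SessionManager and User are hypothetical names for your app's auth component):

public interface SessionManager {
    boolean isLoggedIn();
    User currentUser();
}

public class FakeSessions {
    // Build a mock session that reports an already-logged-in user.
    public static SessionManager loggedInAs(String name) {
        SessionManager fakeSession = Mockito.mock(SessionManager.class);
        Mockito.when(fakeSession.isLoggedIn()).thenReturn(true);
        Mockito.when(fakeSession.currentUser()).thenReturn(new User(name));
        return fakeSession;
    }
}

// A test would install FakeSessions.loggedInAs("test-user") into the app's
// component (e.g. via a test Dagger module) before launching the activity,
// so every screen behaves as if a real login had happened.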
I test an app that requires this same scenario. The easiest way I've gotten around it is to split logging in and out into their own test classes. Then you add all your test classes to a suite, starting and ending with the login and logout classes respectively. Your test suite ends up looking something like this:
@RunWith(Suite.class)
@Suite.SuiteClasses({
        LoginSetup.class,
        SmokeTests.class,
        LogoutTearDown.class
})
public class SmokeTestSuite {} // the suite class just needs to exist; the name is illustrative
EDIT: Here is an example of both the LoginSetup and LogoutTearDown tests. This solution really should only be used for end-to-end tests, and it should comprise a small portion of your testing efforts. fejd provides a solution for a full testing stack which should also be considered.
@LargeTest
public class LoginSetup extends LogInTestFixture {

    @Rule
    public ActivityTestRule<LoginActivity> mLoginActivity = new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void testSetup() throws IOException {
        onView(withId(R.id.username_field)).perform(replaceText("username"));
        onView(withId(R.id.password_field)).perform(replaceText("password"));
        onView(withId(R.id.login_button)).perform(click());
    }
}
@LargeTest
public class LogoutTearDown extends LogInTestFixture {

    @Rule
    public ActivityTestRule<MainActivity> mMainActivity = new ActivityTestRule<>(MainActivity.class);

    @Test
    public void testLogout() throws IOException {
        onView(withId(R.id.toolbar_menu)).perform(click());
        onView(withId(R.id.logout_button)).perform(click());
    }
}
The approach of logging in with @Before is nice, but if your login is slow, your combined test time will be very slow.
Here's a great hack that works. The strategy is simple: run every test in order, and fail every test before it gets a chance to run if a certain test fails (in this case the login test).
@RunWith(AndroidJUnit4.class)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
@LargeTest
public class YourTestsThatDependsOnLogin {

    private static boolean failEverything;

    @Before
    public void beforeTest() {
        // Fail every test before it has a chance to run if login failed
        if (failEverything) {
            Assert.fail("Login failed so every test should fail");
        }
    }

    @Test
    public void test0_REQUIREDTEST_login() {
        failEverything = true;
        // Your code for login
        // Your login method must fail the test if it fails.
        login();
        failEverything = false; // We are safe to continue.
    }

    // test1 test2 test3 etc...
}
Pros:
What you asked for works, and it saves a lot of time if your login is slow.
You can have multiple tests that depend on different logins, meaning you can do a bunch of tests for user1, then a bunch of tests for user2 etc.
Quick to set up.
Cons:
Not standard procedure, and someone might wonder why so many tests fail...
You should really mock users instead of actually logging in; login should be tested separately, and tests should not depend on each other.
Add the following function to your test file, replacing the code in the try block with your own login actions:
@Before
fun setUp() {
    // Login if it is on the LoginActivity
    try {
        // Type email and password
        Espresso.onView(ViewMatchers.withId(R.id.et_email))
            .perform(ViewActions.typeText("a_test_account_username"), ViewActions.closeSoftKeyboard())
        Espresso.onView(ViewMatchers.withId(R.id.et_password))
            .perform(ViewActions.typeText("a_test_account_password"), ViewActions.closeSoftKeyboard())
        // Click login button
        Espresso.onView(ViewMatchers.withId(R.id.btn_login)).perform(ViewActions.click())
    } catch (e: NoMatchingViewException) {
        // view not displayed logic
    }
}
With the @Before annotation, this setUp function will be executed before any other test in this file. If the app lands on the login activity, the login is performed in this setUp function. The example assumes there is an EditText for the email and one for the password, as well as a login button; it uses Espresso to type the email and password, then hit the login button. The try/catch block ensures that if you do not land on the login activity, the NoMatchingViewException is caught and nothing happens; and if you didn't land on the login activity, you are good to go on with the other tests anyway.
Note: this is Kotlin code, but it looks very similar to Java.
My application also requires the user to be logged in throughout the test run.
However, I am able to log in the first time, and the application remembers my username/password throughout the test run. In fact, it remembers the credentials until I force it to forget them or uninstall and reinstall the app.
During a test run, after every test, my app goes to the background and is resumed at the beginning of the next test. I am guessing your application requires the user to enter their credentials every time it comes to the front from the background (a banking application, maybe?). Is there a setting in your application that will remember credentials? If yes, you can simply enable it right after logging in for the first time in your test run.
Other than that, I think you should talk to the developers about providing a way to remember your credentials.
If you are using @BeforeClass in Kotlin, you need to place it inside a companion object. That way the block under @BeforeClass will be executed just once, before the first test in the class runs. Also put @AfterClass inside the companion object, so that it runs at the very end of the last test of the class.
Keep in mind that once execution moves from inside the companion object to outside of it, the context of the app under test is lost. You can get the context back by launching the main activity (the activity after login) of your app:
companion object {
    @ClassRule
    @JvmField
    val activity = ActivityTestRule(Activity::class.java)

    @BeforeClass
    @JvmStatic
    fun setUp() {
        // login block
    }

    @AfterClass
    @JvmStatic
    fun tearDown() {
        // logout block
    }
}

@Test
fun sampleTest() {
    activity.launchActivity(Intent())
}
There seems to be no end to the number of posts discussing how to unit test completely unrealistic things.
An abundance of tutorials, videos, etc. outline what unit tests are and how to write them. There do not, however, seem to be many (if any) resources outlining how to test something real.
After all, in reality the 'units' we are testing are generally significantly more complex than a method taking inputs and giving an output.
I am working with Android at the moment and have been investigating how to unit test my application.
My application is essentially made up of views and server requests: you click button x and it changes the view displayed; you click button y and it loads data from the server and populates a list.
Below is some source code. I have essentially pieced together an example setup that demonstrates the things that are confusing (to me): the things I find conceptually difficult to unit test.
public class ChainActivity extends FragmentActivity {

    private PRFragmentTabHost mTabHost;
    public GetChainResponse.Data responseData;
    private long lastData; // timestamp of the last successful load
    Integer chainId; // ref to chain we are getting - passed in

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Get the chain id we are getting
        Bundle extras = getIntent().getExtras();
        chainId = extras.getInt("chainId");
        setContentView(R.layout.activity_base);
        // Set up the tabs
        mTabHost = (PRFragmentTabHost) findViewById(android.R.id.tabhost);
        mTabHost.setup(this, getSupportFragmentManager(), android.R.id.tabcontent);
        mTabHost.addTab(mTabHost.newTabSpec("Details").setIndicator("Details", null), ChainDetailsFragment.class, null);
        mTabHost.addTab(mTabHost.newTabSpec("Pictures").setIndicator("Pictures", null), PicturesFragment.class, null);
        mTabHost.addTab(mTabHost.newTabSpec("Cats").setIndicator("Cats", null), CatsFragment.class, null);
    }

    @Override
    public void onStart() {
        super.onStart();
        // Initiate the data load
        loadChainData();
    }

    // Method loads the chain data
    public void loadChainData() {
        PRAPIInterface apiService = ApiService.getInstance();
        Integer limit = 4;
        apiService.getChain(chainId, limit, new Callback<GetChainResponse>() {
            @Override
            public void success(GetChainResponse pr, Response response) {
                lastData = System.nanoTime();
                // Save the response data
                responseData = pr.data;
                // Get the current tab and pass the loaded data to it
                String currentTabTag = mTabHost.getCurrentTabTag();
                DataLoadedInterface currentTab = (DataLoadedInterface) getSupportFragmentManager().findFragmentByTag(currentTabTag);
                currentTab.dataLoaded(responseData, false);
            }

            @Override
            public void failure(RetrofitError retrofitError) {
                // Log error here since request failed
                Log.w("Failed", "Failed" + retrofitError.getUrl());
                Log.w("Failed", "Failed" + retrofitError.getBody());
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        Fragment fragment = getSupportFragmentManager().findFragmentByTag(mTabHost.getCurrentTabTag());
        if (fragment != null) {
            fragment.onActivityResult(requestCode, resultCode, data);
        }
    }
}
So... I am aware of Robolectric, Robotium, etc. and the other libraries available for testing on Android. I am, however, looking for conceptual advice.
Android provides ActivityUnitTestCase. I can subclass this and set up a test for my activity.
Part 1
In principle I could test my onCreate by verifying that mTabHost is not null, BUT I don't want to make it publicly available, nor do I want a getter for its value.
I figured I could test for the existence of my fragments, but in fact I cannot: because the activity runs in 'isolation', it seemingly does not actually create the fragments for the tabs.
Part 2
Next is onStart. This calls another method and has no return value, so I can't test a response.
It is, however, important that I test that we load our initial data in onStart.
Within loadChainData I could set a Boolean indicating that I am loading data and verify it, but a coworker could just set this Boolean to true by default and my test would pass.
Furthermore, I don't want to test loadChainData 'again': I will be testing that method anyway. One idea that springs to mind is stubbing out loadChainData, verifying that it is called, and leaving it at that. This, however, seems difficult to do with Android (anyone..?) and does not really fit with the sentiment that testing should be fun.
Part 3
loadChainData loads some data from the server using Retrofit. Because in reality this method executes asynchronously, there is again no response from the method itself. I have found an appropriate way of returning mock data by swapping out the Retrofit client, but doing this swap is apparently not simple.
At the moment I use a singleton for my ApiService. I essentially want to replace what is built the first time this singleton is called. There are potentially complex solutions for this, like using a dependency injection library (such as Dagger), but given what I want to achieve, I feel there should be a much simpler way.
My initial thought is that the application could be instantiated with, say, a test flag; the singleton would then return the test client, or default to the real client otherwise. This, in my mind, smells a little... could someone explain what the smell is, and how one could appropriately resolve it?
Even if the above were considered a fair suggestion, there seems to be no easy way to actually do it with ActivityUnitTestCase.
Part 4
Finally there is onActivityResult.
Again, there is no response.
This time the method in question interacts with other units elsewhere, units that act differently within the constraints of ActivityUnitTestCase anyway.
I could wrap my manipulation of the support fragment manager, mock my wrapper, return a mocked fragment, and verify that its onActivityResult method is called... but again this seems incredibly tough to do. Furthermore, it adds complexity to my code just to make something testable, and I have no interest in increasing complexity just to test.
So..
Does anyone with real experience of unit testing on mobile have any insight into how to appropriately test a class such as this one? As you can see, it is not really a case of 'put 2 in, does 4 come out' :)
A lot of resources mention how under-done testing is. This is why :) Any advice would be greatly appreciated.
T
onCreate
I would test this in two ways: one, using some form of AndroidTestCase, which would allow testing of the extras you send in the Bundle.
(I would add error handling to throw an error if the extra is not present.)
(I would change loadChainData() to take chainId as a parameter.)
To test that your tabs are instantiated correctly, I would use Espresso in an acceptance test to validate that your views are present.
onStart
There is no dependency inversion here, so you cannot test that the service methods are called; you would want to test that getChain is called. Think about what happens when you create that service: could you pass it in, e.g. through the constructor?
(I would move away from the Activity here and encapsulate all the behaviour from onStart in some other class [perhaps MVC could help]; that way you aren't constrained by the lifecycle.)
(I would also add another layer of callbacks on top of Retrofit into the Activity; that way you can test that your listeners return the correct success or failure without having to worry about the tabs [the activity UI].)
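A sketch of that dependency-inversion idea, reusing the names from the question (the ChainLoader class and the test are illustrative; the point is that the service is injected rather than fetched from the singleton):

import org.junit.Test;
import org.mockito.Mockito;

// Owns the load logic so the Activity (and its lifecycle) is not involved.
public class ChainLoader {
    private final PRAPIInterface apiService;

    public ChainLoader(PRAPIInterface apiService) {
        this.apiService = apiService; // injected, not ApiService.getInstance()
    }

    public void loadChainData(Integer chainId, Callback<GetChainResponse> callback) {
        apiService.getChain(chainId, 4, callback);
    }
}

// A plain JUnit test can now verify the interaction without any Activity:
public class ChainLoaderTest {
    @Test
    public void loadRequestsChainFromApi() {
        PRAPIInterface api = Mockito.mock(PRAPIInterface.class);
        @SuppressWarnings("unchecked")
        Callback<GetChainResponse> callback = Mockito.mock(Callback.class);

        new ChainLoader(api).loadChainData(7, callback);

        Mockito.verify(api).getChain(Mockito.eq(7), Mockito.eq(4), Mockito.eq(callback));
    }
}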
If you really wanted to test the asynchronicity (which I wouldn't advise), you can use Espresso, which can help you manage the test so that you still get callbacks; see custom idling resources.
onActivityResult
I believe that using Robolectric you could pass in a Mockito-mocked fragment manager that returns a stub fragment, validate the behaviour of the fragment tab host, and then call onActivityResult yourself. Whether testing this actually gives you any benefit is another question.
Overall, testing lifecycle methods is like testing anything else: for code to be testable you need to abstract away any notion of threading, you need to be in control of your dependencies, and I find that the closer you adhere to SOLID, the easier testing is. It is in the wake of these stumbling blocks that Robolectric, instrumentation tests and Espresso were created.
In summary: sorry, there is no straightforward answer here. You have to ask yourself: do I get any benefit from testing these things? Am I testing my code, or am I testing the framework? What is the cost/benefit of testing this Activity, and does it actually encumber future changes rather than gaining me confidence?