I run my Espresso tests via Spoon. Often I get a successful build, but the tests are not executed. I assume the cause is that there were no alterations to the code of the app in question. I can see why they would do this: why test an app that just ran the same tests and passed? However, my situation is different; testing the app is not my primary concern, but rather testing what the app controls.
My question: my test will be run in a continuous loop, and the app will not be altered or changed. So is there any way around this?
I assume the cause is that there were no alterations to the code of the app in question.
This is not true. You can run the same test thousands of times with Espresso without changing a line of code.
Make sure you're running it the correct way:
java -jar spoon-runner-1.1.0-jar-with-dependencies.jar \
--apk example-app.apk \
--test-apk example-tests.apk
Also keep in mind that the devices running the test should be visible to adb (run adb devices to check).
With Spoon, a test will not run twice if the first run passed. This is because it assumes that if it ran the test again it would pass, so there is no reason to repeat it. Bad design on Square's part, if you ask me.
The solution is: gradle clean spoon. clean will regenerate the res files (among others) and make Spoon believe it is essentially a different test. This makes running tests take longer than it should, but it works.
Every time I want to do a quick test of some code, Android Studio takes 20-40 minutes loading an emulator, which then either crashes my laptop or makes it run very slowly. Is there any way to just use the system log without loading the whole app, similar to the System.out.println() feature in NetBeans?
I understand that the question is about running your app for quick testing and not about automated tests. But you can learn a lot by getting used to writing tests, and they can help you solve your problem.
1) For code without Android dependencies, you can write JUnit tests and run them directly on the JVM. Bonus: you'll have started your test suite! A minimal example follows this list.
2) For code with Android dependencies: a) try to get better at separating platform-specific code from internal logic, so you can cover more code with plain unit tests; b) for the rest, you can use Robolectric and test everything without an emulator/device.
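For example, here is a minimal JUnit 4 sketch of option 1 (the Calculator class and its method are made up for illustration):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical pure-Java logic with no Android imports -- exactly the kind
// of code that can be tested with plain JUnit directly on the JVM.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {

    @Test
    public void add_sumsTwoNumbers() {
        // Runs in milliseconds on the JVM -- no emulator or device needed.
        assertEquals(5, Calculator.add(2, 3));
    }
}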
Firstly, I want to make a confession: I've never written a test before. I've been a programmer for more than 10 years, and never once have I found the need to write a proper test (or whatever it is called), since mostly I write code that (I think) can easily be tested manually.
Now I'm writing a pretty complex Android app, and the manual testing I'm doing to make sure every function and class runs as intended slows me down miserably. So now I'm searching in the dark for how to make my code testable (is there such a thing?) and where I should start.
I'm using the latest Android Studio (1.2 Beta 3). I found that under the 'src' folder there's an 'androidTest' folder, which (a few folders beneath it) contains a file, ApplicationTest.java. Here's the content of ApplicationTest.java:
public class ApplicationTest extends ApplicationTestCase<Application> {
    public ApplicationTest() {
        super(Application.class);
    }
}
OK, now back to my app. I want to test the class AnalyzerOffline.java (located under main > java > com.code.imin.app) that I've written, because it contains pretty complex and extensive code. So how should I start? I tried reading http://developer.android.com/tools/testing/testing_android.html, http://rexstjohn.com/unit-testing-with-android-studio/, etc., but I still don't know where to start. I feel like I'm missing something here, or maybe my mindset about writing tests, or the whole idea of it, is wrong.
So can someone please shed some light here?
I am using the Monkey tool for testing.
Step 1:
Open the Android Studio terminal (Tools -> Open Terminal).
Step 2:
In order to use monkey, open up a command prompt and add the platform-tools directory to your PATH:
export PATH=$PATH:/home/adt-bundle-linux-x86-20140702/sdk/platform-tools
Step 3:
Type this monkey command into the terminal and press Enter,
then watch the magic happen in your emulator:
adb shell monkey -p com.example.yourpackage -v 500
500 is the event count, i.e. the number of events to be sent for testing.
You can change this count.
More references:
http://www.tutorialspoint.com/android/android_testing.htm
http://androidtesting.blogspot.in/2012/04/android-testing-with-monkey-tool.html
So I recently started using Robotium for the first time and noticed after some time that tests are executed in alphabetical order. This broke some of my tests, because I needed the "introduction" of my app to be finished before the other tests started.
Since I have never used automated tests before, I'm not sure how to write the tests right now. Should all test cases ALWAYS be independent of each other?
In my case, that would mean the introduction flag should be set programmatically to false for some tests and to true for others.
Or is it also valid to assume that one test case has been executed before another one?
This is correct; you should always build tests so they can run independently. Also take care to roll back any state after running your tests; otherwise they might not run the next time.
There is a lot to consider when writing automated tests.
I would say yes: all tests should always be independent of each other. That way you are sure that another test is not the reason a test case fails. Each test can set up whatever state it needs itself, as in the sketch below.
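Here is a minimal Robotium sketch of that idea. The activity name, preferences file, and flag key are all hypothetical; the point is that setUp() puts the app into the required state (introduction already finished) instead of relying on an "introduction" test having run first, and tearDown() rolls the state back:

import android.content.Context;
import android.content.SharedPreferences;
import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

public class MainScreenTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public MainScreenTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Mark the (hypothetical) introduction as already finished BEFORE
        // launching the activity, so this test never depends on execution order.
        Context context = getInstrumentation().getTargetContext();
        SharedPreferences prefs =
                context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
        prefs.edit().putBoolean("introduction_finished", true).commit();

        // getActivity() launches the activity only now, after the flag is set.
        solo = new Solo(getInstrumentation(), getActivity());
    }

    @Override
    protected void tearDown() throws Exception {
        // Roll back the state so the next test starts from a known baseline.
        getInstrumentation().getTargetContext()
                .getSharedPreferences("app_prefs", Context.MODE_PRIVATE)
                .edit().clear().commit();
        solo.finishOpenedActivities();
        super.tearDown();
    }
}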
I've been looking at this question, and I thought it would be a good idea to use assert only in debug builds.
Is there anything special that I need to configure in Android Studio in order to enable asserts? I also want to guarantee that they will not be present in release builds.
The adb shell setprop debug.assert 1 from the question you referred to is to be executed on the device you are testing on, so you could whip up a script or even create a custom Gradle task for that (Gradle docs).
In general, I'd recommend also keeping your checks in production and handling them in a proper way. A simple solution would be to throw a RuntimeException; with checked exceptions you could even recover from these erroneous states / misuses of your API.
Furthermore, it would make sense to add proper tests that ensure your code/APIs only emit "valid" values that can be handled by the rest of your code.
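As a minimal sketch of that "keep the checks in production" approach (the helper class is my own, not from any library), instead of a JVM assert you fail fast with an unchecked exception that is never stripped from release builds:

// Simple precondition helper; unlike Java's assert, this check also runs in
// release builds, so misuses of the API fail fast everywhere.
public final class Preconditions {

    private Preconditions() {}

    public static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }
}

A caller would then write Preconditions.checkArgument(count >= 0, "count must be non-negative"); where they previously had assert count >= 0;.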
I want to put some conditions in my code so that it behaves one way if it's driven by monkeyrunner and another way if it's used by a real user.
How can the application under test check whether the emulator/application is being driven by monkeyrunner?
P.S. I am aware that it's bad practice to do such checks (the code under test should be the same code as in production). However, it's for a one-off case that is hard to handle any other way.
I didn't find a clean way to check that, except uploading a file to the emulator/device as part of the MonkeyRunner run and then checking for it in the application's code.
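A minimal sketch of that marker-file approach (the file name and location are made up; any path both sides agree on works). The MonkeyRunner script is assumed to create the marker before the run and delete it afterwards, e.g. via adb push / adb shell rm:

import java.io.File;

import android.os.Environment;

public class TestModeDetector {

    // Hypothetical marker file created by the MonkeyRunner script on external
    // storage before the run. Reading it may require the READ_EXTERNAL_STORAGE
    // permission on newer Android versions.
    private static final String MARKER_NAME = "monkeyrunner_marker";

    public static boolean isDrivenByMonkeyRunner() {
        File marker = new File(Environment.getExternalStorageDirectory(), MARKER_NAME);
        return marker.exists();
    }
}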