I want to create a couple of functional tests for an Android application to run them on a continuous integration server. As far as I understand, there are two main approaches: monkeyrunner and test cases via instrumentation.
At the moment, I can't see any advantages of monkeyrunner, but I might be missing something. What is it good for?
I like to use MonkeyRunner because it's really portable (Linux, Mac, and Windows), easy to set up, and works easily across many different devices and emulators. Also, with instrumentation you sometimes get crashes that are unrelated to the app and are instead caused by the instrumentation implementation itself; with MonkeyRunner you will know what caused the crash.
From my experience, monkey testing is really good for detecting flaws in an application in terms of:
Memory leaks: sometimes it is nearly impossible to anticipate by hand the scenarios that generate excessive memory usage (say, rapid rotation, repeated button clicks, etc.).
Monkey also helps identify test cases: unintended, strange uses of the application that eventually lead to crashes.
Using monkey tests you can also roughly measure the performance of the application when it is used by "heavy" users.
I would say that monkey testing does not stand in opposition to unit/instrumentation testing; it is yet another way to verify that your application works as intended.
Of course it also depends on the software being tested, but in my opinion it is not always easy to predict what happens if your button is clicked, then a spot 9 px above the button is touched, and eventually a phone call activity starts. :) That's what monkey tests are for...
I am working on automating monkey testing for Android apps.
My question is: if I obtain the seed when the monkey causes a crash, what are the requirements to use the seed to reproduce the crash again across devices?
Such as the Android OS version or the exact emulator image? Is it even possible?
I couldn't find the exact requirements for replaying a seed on another device.
Is it even possible?
That depends entirely on the nature of the crash and the steps the monkey took to trigger it on the original device.
For example, if launching Screen 3 crashes immediately, so long as the monkey does something that triggers your app to navigate to Screen 3, it will reproduce the crash. However, whether the original seed will lead to that outcome is impossible to know in the abstract. All you can do is try it and see what happens.
what are the requirements to use the seed to reproduce the crash again across devices?
If the second device is not absolutely identical and running absolutely identical code, the monkey actions generated from that seed might not reproduce the crash. Then again, they might. We have no way to know.
In general, since the monkey is doing UI input, you would want the devices to be as close as possible from a UI standpoint, such as screen size and density. As a counter-example, if the devices have differing amounts of RAM or flash storage space, it is possible that those differences will not impact the monkey results.
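If you want to experiment, here is a minimal sketch of replaying a recorded seed on every attached device, written as a Kotlin script that drives adb via ProcessBuilder. The package name, seed, and event count are placeholders; only the monkey flags (-p, -s, -v) are the real tool options.

```kotlin
// Hypothetical values: replace with your app's package and the seed
// printed by the original monkey run.
const val PACKAGE = "com.example.app"
const val SEED = 1234L
const val EVENT_COUNT = 500

fun exec(vararg cmd: String): String {
    val proc = ProcessBuilder(*cmd).redirectErrorStream(true).start()
    val output = proc.inputStream.bufferedReader().readText() // blocks until the process exits
    proc.waitFor()
    return output
}

fun main() {
    // "adb devices" prints a header line, then "SERIAL<TAB>device" per attached device.
    val serials = exec("adb", "devices").lines()
        .drop(1)
        .filter { it.endsWith("device") }
        .map { it.substringBefore('\t') }

    for (serial in serials) {
        println("Replaying seed $SEED on $serial")
        // Same package, same seed, same event count on each device.
        val output = exec(
            "adb", "-s", serial, "shell", "monkey",
            "-p", PACKAGE, "-s", SEED.toString(), "-v", EVENT_COUNT.toString()
        )
        // The monkey prints a "// CRASH:" block when the app crashes.
        if ("CRASH" in output) println("Crash reproduced on $serial")
    }
}
```

Whether the replay lands on the same crash still depends on the points above: timing, screen geometry, and any nondeterminism in the app itself.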
I have the following problem: I need to run an android app in an emulator, get to a certain state in it, after which I want to fork the process into two and do different actions in the app starting from that state.
Example: I want to open Yelp in the emulator, then search for "Coffee", then fork the process into 10, and in each child process open a different coffee place.
The particular problem is how to perform the fork.
I've been exploring solutions to this problem and found no easy way to do it. The options I have looked into so far are:
Actually fork the app process within the emulator. This appears to be completely impossible.
Somehow fork the emulator process with an app running in it. There's no easy way to fork an external process, so I guess I would have to change the emulator code to fork from within when a certain external event happens.
Put the emulator in some sort of VM that supports hot cloning. I haven't found any VM that actually supports this without serious downtime.
Ideally I want a solution that doesn't double the memory use (similar to how fork works in Linux) and that doesn't involve significant downtime, though any solution lacking those two properties would also be acceptable.
Okay, that's quite the task. Intuitively, I would expect option 2 to be the most promising.
Alternatively, have you considered writing a UIAutomator script and running it in parallel or consecutively across a few devices? Your bonus criteria would definitely not be met, but after enough runs you might get what you're looking for:
1. Bring the emulator into the state at which you want to fork
2. Save a snapshot
3. Spawn an emulator, specifying the snapshot
4. Run the UIAutomator script
5. Record findings
6. GOTO 3
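A rough sketch of steps 3 to 6, assuming the snapshot from step 2 already exists. The AVD name, snapshot name, test command, and report path are placeholders, and emulator flags can differ between SDK versions; treat it as a starting point rather than a finished harness.

```kotlin
// Placeholders: the AVD name, snapshot name, test command, and report path
// all depend on your setup. Assumes "emulator" and "adb" are on the PATH.
const val AVD = "Pixel_API_28"
const val SNAPSHOT = "fork-point"
const val RUNS = 10

fun sh(vararg cmd: String): Int =
    ProcessBuilder(*cmd).inheritIO().start().waitFor()

fun main() {
    repeat(RUNS) { i ->
        // Step 3: spawn the emulator from the saved snapshot without re-saving it on exit.
        val emulator = ProcessBuilder(
            "emulator", "-avd", AVD,
            "-snapshot", SNAPSHOT, "-no-snapshot-save", "-no-window"
        ).inheritIO().start()

        // Crude readiness check; you may also want to poll sys.boot_completed.
        sh("adb", "wait-for-device")

        // Step 4: run the UIAutomator tests (here via Gradle; use whatever runner you have).
        sh("./gradlew", "connectedAndroidTest")

        // Step 5: record findings by keeping a copy of the report per run (Linux/macOS cp).
        sh("cp", "-r", "app/build/reports/androidTests", "run-$i-report")

        // Step 6: shut the emulator down and loop back to step 3.
        sh("adb", "emu", "kill")
        emulator.waitFor()
    }
}
```

Each pass resumes from the same saved state, so the runs never share memory the way a real fork would, but it approximates the branching behaviour without modifying the emulator.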
I'd like all unit and instrumentation tests (Espresso) to be run after each commit/merge to the main develop branch. Unit tests are fast enough to allow this, but UI tests are not: 150 fully mocked UI tests take ~1 hour to run on a single device. Shazam's Fork library does a great job of sharding these 150 tests across all connected devices. The current solution is a local machine with Jenkins running on it; connecting 4 devices to it reduces the UI test run to ~15 minutes. That's not ideal, but bearable.
Ideally I'd like to find a cloud-based CI system that allows me to run UI tests using Fork, so that the local Jenkins can be ditched and not maintained internally.
I tried AWS Device Farm and Firebase Test Lab, but both use their own systems to run the tests and don't seem to offer a way to shard a single test suite across multiple devices. They look like great tools for running the whole suite on different devices simultaneously, but that's not what I want from a CI solution: I want to split one test suite across multiple devices running simultaneously.
I tried BuddyBuild as well, but internally they're using Firebase Test Lab, so that didn't work for my case either.
I was mainly thinking of solutions along these lines:
find a way to run Fork on a cloud based solution
find another way to shard a single test suite across multiple devices
Any suggestions are welcome! How do you guys solve this problem?
Congrats! Too much automation is a great problem to have. It sounds like you have already made a lot of progress speeding up the execution time. I'd say 15 minutes is a pretty reasonable number, but here are some alternate approaches to get that number even lower:
Create a smaller, faster test suite of the highest-priority test cases to run per commit, while the longer-running full test suite runs continuously against the latest merged commits. This is a pretty common pattern across the industry. You can use Android's built-in @SmallTest, @MediumTest, and @LargeTest annotations, or package names, to divide tests up (see the annotation sketch after this list).
Optimize your test cases so multiple features are tested at once. Essentially, instead of testing every combination like A1, A2, B1, B2, you would test only A1, B2. See the pairwise testing Wikipedia page for a more detailed explanation.
If it's possible for your app, try using emulators. Performance has improved greatly over the last couple of years. Using the Intel Hardware Accelerated Execution Manager (HAXM) driver, I found tests ran faster and more reliably than on most physical devices. This could also make it easier to reach your goal of running in the cloud.
Check how long each individual test takes to run relative to its priority. A low-priority, long-running test could be moved to a less frequently run suite.
Watch the tests run to identify sleeps or other slow-running areas of the test to improve.
Look for tests that have never failed. Those are also candidates for moving to a less frequently run suite.
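For the first suggestion, here is a minimal sketch of what the size annotations look like on Espresso tests. It assumes the androidx.test packages and made-up test names; adjust to the support-library equivalents if you are on an older toolchain.

```kotlin
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.filters.LargeTest
import androidx.test.filters.SmallTest
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class CheckoutTests {

    // Belongs to the fast per-commit suite (selected with -e size small).
    @SmallTest
    @Test
    fun cartBadgeShowsItemCount() {
        // ... Espresso assertions on a single screen ...
    }

    // Belongs to the slower full suite that trails the latest merged commits.
    @LargeTest
    @Test
    fun fullCheckoutFlowCompletes() {
        // ... Espresso steps across several screens ...
    }
}
```

The runner can then filter by size, e.g. something like adb shell am instrument -w -e size small your.test.package/androidx.test.runner.AndroidJUnitRunner, which is roughly how the fast per-commit suite gets carved out.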
If you are looking for alternate ways to shard tests, you could roll your own by kicking off parallel scripts across devices that each target a subset of your tests using adb shell am instrument options. This would create many separate test reports, though, and you would need to divide the tests up evenly yourself.
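AndroidJUnitRunner has built-in numShards and shardIndex arguments, so the roll-your-own version can stay fairly small. A hedged sketch, assuming the androidx runner and placeholder package names:

```kotlin
import kotlin.concurrent.thread

// Assumed names: replace with your instrumentation test package and runner.
const val TEST_PACKAGE = "com.example.app.test"
const val RUNNER = "androidx.test.runner.AndroidJUnitRunner"

fun adb(vararg args: String): String {
    val proc = ProcessBuilder("adb", *args).redirectErrorStream(true).start()
    val out = proc.inputStream.bufferedReader().readText()
    proc.waitFor()
    return out
}

fun main() {
    // One shard per connected device, all started in parallel.
    val serials = adb("devices").lines()
        .drop(1)
        .filter { it.endsWith("device") }
        .map { it.substringBefore('\t') }

    serials.mapIndexed { index, serial ->
        thread {
            val report = adb(
                "-s", serial, "shell", "am", "instrument", "-w",
                "-e", "numShards", serials.size.toString(),
                "-e", "shardIndex", index.toString(),
                "$TEST_PACKAGE/$RUNNER"
            )
            // Each shard prints its own results; merging the reports is up to you.
            println("=== $serial ===\n$report")
        }
    }.forEach { it.join() }
}
```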
You might take a look at the Flank open-source project, which seems to be built with pretty much the same goal in mind: https://medium.com/walmartlabs/flank-smart-test-runner-for-firebase-cf65e1b1eca7
Flank is a Firebase Test Lab tool for massively scaling your automated Android tests. It runs large test suites across many devices/versions/configurations at the same time, in parallel. Flank can easily be used in a CI environment where Gradle (or similar) first builds the APKs and then Flank is used to execute the tests.
I would also vote for the suggestion above, especially item 1:
"Create a smaller, faster test suite of the highest-priority test cases to run per commit, while the longer-running full test suite runs continuously against the latest merged commits. This is a pretty common pattern across the industry. You can use Android's built-in @SmallTest, @MediumTest, and @LargeTest annotations, or package names, to divide tests up.
Optimize your test cases so multiple features are tested at once."
Using annotations and driving the execution through TestNG (via Jenkins) would be one way of splitting a large suite into sections; merging the results would also be less of a pain if the annotation management divides your tests into logical groups. Hope this helps.
Quick version:
How can I trigger the event for a screen touch at a given coordinate, regardless of the device's state?
Longer Version: My Problem:
I have users who are unable to touch the device (they lack body movement due to cerebral palsy or strokes). I am in the process of creating a device that monitors other types of input (for instance muscle contraction, or even throat humming, among others).
The important part is that I have a circuit that emits a single command.
These commands must then be intercepted by the Android device, which should execute the associated action as if the user were operating the device normally.
Note the following: I will not have any Activity running. The purpose of the application is to interface the sensor with the device, and thus, I cannot make use of View elements.
I suppose what I want to perform is to create a mouse-like element for Android.
But I have not come up with any way to either run another application inside my own (where I would provide an automated moving "target" for the user to issue a command/click) or to inject a MotionEvent or KeyEvent.
Since my research so far has yielded nothing, I would like to ask the following: am I overlooking any part or directive of the system that could allow me to perform my task?
The final outcome is a Service that merely waits for a signal, which is captured by a Receiver... and this is where I am stuck.
You are going to have a very hard time injecting events from your app into any other app, for security reasons.
However, you might have better luck in native land, C/C++, so hopefully your sensor wrapper will end up as a .so library that can be included in other projects...
What you're trying to do kind of sounds like what Espresso can do (it's a JUnit-based test framework that can inject motion/touch/key events into views).
However, you could compile your own image with some added accessibility apps that might get you there. But your idea is really not going to be applicable to just about any off-the-shelf Android device.
I am going to post an answer here to show alternatives for people who might have similar issues.
From the get-go, the device is a stand-alone unit that issues GET requests to an internal web server on the Android device (I am using PAW Server as of today, 26/10/15), and I can issue some simple commands.
Likewise, I am planning to build an Activity with a WebView and a cursor that keeps moving; when the signal is received, it triggers the onTouchEvent of the focusable object under the cursor (still studying this).
I do understand the security "flaw" in what I am trying to do, but even a USB HID (Universal Serial Bus - Human Interface Device) won't work for what I am planning so far, due to the needs of the user.
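Inside your own Activity this part is doable without special permissions: when the Receiver fires, you can synthesize a MotionEvent and dispatch it to your own view hierarchy at the cursor's position. A hedged sketch (the coordinates and target view are placeholders):

```kotlin
import android.os.SystemClock
import android.view.MotionEvent
import android.view.View

// Dispatches a synthetic tap at (x, y) in the coordinate space of the given
// root view. This only works inside your own app's window; it does not
// inject events into other apps.
fun simulateTap(root: View, x: Float, y: Float) {
    val downTime = SystemClock.uptimeMillis()
    val down = MotionEvent.obtain(
        downTime, downTime, MotionEvent.ACTION_DOWN, x, y, /* metaState = */ 0
    )
    val up = MotionEvent.obtain(
        downTime, SystemClock.uptimeMillis(), MotionEvent.ACTION_UP, x, y, 0
    )
    root.dispatchTouchEvent(down)
    root.dispatchTouchEvent(up)
    down.recycle()
    up.recycle()
}
```

So the moving-cursor Activity could call simulateTap(window.decorView, cursorX, cursorY) whenever the sensor signal arrives; the limitation remains that this only reaches views inside your own app.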
Well, you can inject events while testing because Robolectric, Espresso, and the others run as simulations or under test instrumentation. The best way to do the same on a real device would be sandboxing an app inside yours; that way, your app acts as the OS for the other one. But that would be costly and the performance would not be good.
Another alternative is doing what Xposed does. Xposed intercepts system calls to allow modding the OS without flashing anything, so maybe you could use the same hook (or Xposed itself) to add your listening feature. This requires root, but sounds a lot easier than sandboxing every app on the device or building a custom ROM.
Have you looked into Android's accessibility features for developers? http://developer.android.com/guide/topics/ui/accessibility/services.html
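For reference, an AccessibilityService can inject taps into whatever app is in the foreground (API 24+) once the user enables the service in Settings, which sidesteps the per-app restriction mentioned above. A minimal, hedged sketch; the manifest declaration and the accessibility XML config (including canPerformGestures) are omitted:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.view.accessibility.AccessibilityEvent

class SensorTapService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {}
    override fun onInterrupt() {}

    // Call this when your Receiver gets a signal from the external sensor.
    fun tapAt(x: Float, y: Float) {
        val path = Path().apply { moveTo(x, y) }
        val gesture = GestureDescription.Builder()
            .addStroke(GestureDescription.StrokeDescription(path, 0, 50))
            .build()
        // Dispatches the tap to whatever app is currently in the foreground.
        dispatchGesture(gesture, null, null)
    }
}
```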
Also keep in mind that you can use Android as a USB host for external devices, such as homemade trackpads, joysticks, etc. (probably with a bit of hacking).
http://developer.android.com/guide/topics/connectivity/usb/host.html
As for what might be available on the OS already, take a look at the Accessibility menu in the Settings. Hope this helps!
Just out of curiosity: logically speaking, does Android's debugging mode slow down the performance of Android devices?
How can I prove to users that Android debugging does or does not slow down Android?
P.S.: I need a specific answer and a reliable source for how I can prove it.
Yes. Attaching a debugger almost always slows down the performance. The best way to prove any performance related argument is always to run some tests. Set some timers in your code and gather the data empirically. Then you'll know not only which way is faster but by exactly how much.
For a "specific answer": measure, and your tests will be the "reliable source".
Run the application with and without debugging and show the difference in execution time. It's best to use an app that simply opens, does some calculations or something, and then exits; that way there's no user error from interacting with it.
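A hedged sketch of the kind of measurement meant here: time a fixed workload, log whether a debugger is attached, and compare the numbers across runs. The workload itself is a placeholder.

```kotlin
import android.os.Debug
import android.util.Log
import kotlin.system.measureNanoTime

// Run this same workload once with a debugger attached and once without,
// then compare the logged timings.
fun benchmark() {
    val elapsedNs = measureNanoTime {
        // Placeholder workload: something deterministic with no user interaction.
        var acc = 0L
        for (i in 0 until 10_000_000) acc += i * i
        Log.d("Benchmark", "checksum=$acc")
    }
    Log.d(
        "Benchmark",
        "debuggerAttached=${Debug.isDebuggerConnected()} elapsedMs=${elapsedNs / 1_000_000}"
    )
}
```

Repeating the run several times and averaging the results helps smooth out unrelated variance.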