I am doing automation testing with Espresso, but I am stuck on camera capture. I have written code that opens the camera, but it cannot perform the capture click. What I need is for the capture button to be clicked automatically, if that is possible.
Please give me any suggestions. Below is my code:
onView(withId(R.id.photo)).perform(click());
Things get tricky with Espresso when you are working with tests that include multiple Activities. I prefer to use UIAutomator for such parts of the test.
With UIAutomator, you can do something like:
UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
.findObject(new UiSelector()
.resourceId("com.example.package:id/photo")).click();
Remember that your IdlingResources won't have any effect on UIAutomator, so you might need to add extra delays to wait for that Activity to be created and initialized before it is clicked on.
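For example, instead of a fixed delay you can poll with UiDevice.wait() and Until; a minimal sketch, assuming the androidx.test.uiautomator dependency is on the test classpath (the package name and resource id are placeholders):
import static androidx.test.platform.app.InstrumentationRegistry.getInstrumentation;
import androidx.test.uiautomator.By;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.UiObject2;
import androidx.test.uiautomator.Until;

UiDevice device = UiDevice.getInstance(getInstrumentation());
// Wait up to 5 seconds for the capture button to appear instead of sleeping a fixed amount.
UiObject2 photoButton = device.wait(
        Until.findObject(By.res("com.example.package", "photo")), 5000);
if (photoButton != null) {
    photoButton.click();
}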
Here is how to set up UIAutomator: UIAutomator Testing | Android Developers
Related
I have built a react-native application and am trying to build a few specific end-to-end tests on the android side. For this, I have setup detox. Unfortunately, to test properly, I need to see how the application responds when the user is performing actions outside of the target application.
e.g. I need to automate click A within target application A, click android home button, swipe right on screen and then open application B. Application A should then open itself and the test can confirm if it has opened to the right screen.
Is something like this possible within Detox? If not, are there any frameworks that would allow me to test this?
What you want here is Appium rather than Detox. Appium drives Android's UiAutomator API, which allows automating and testing outside of a single application, whereas Detox is meant for testing behaviour within the application itself rather than outside of it.
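As a rough illustration of the flow you describe, here is a sketch with the Appium Java client (v8-style API); the server URL, capabilities, package names and element ids are placeholders, not taken from your project:
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.AppiumBy;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.nativekey.AndroidKey;
import io.appium.java_client.android.nativekey.KeyEvent;

DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability("platformName", "Android");
caps.setCapability("appium:automationName", "UiAutomator2");
caps.setCapability("appium:appPackage", "com.example.appA");   // placeholder
caps.setCapability("appium:appActivity", ".MainActivity");     // placeholder

AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723"), caps);

// Click A inside the target application (accessibility id is a placeholder).
driver.findElement(AppiumBy.accessibilityId("clickA")).click();

// Press the home button, leaving application A.
driver.pressKey(new KeyEvent(AndroidKey.HOME));
// (A swipe on the launcher could be added here with W3C pointer actions.)

// Bring application B to the foreground (placeholder package name),
// then return and assert that A resumed on the expected screen.
driver.activateApp("com.example.appB");
driver.activateApp("com.example.appA");

driver.quit();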
I'm struggling with an instrumented UI test of the PlaceAutocomplete API provided by Google.
What I'm trying to test is the ability to open the autocomplete Activity (fired with the proper button), enter some text, pick a result from the suggestion list, and check that the picked item is listed in the RecyclerView.
I'm trying to target this EditText below :(
with:
onView(withClassName(equalTo(EditText.class.getName())))
.inRoot(withDecorView(not(is(homeActivityActivityTestRule.getActivity().getWindow().getDecorView()))))
.perform(typeTextIntoFocusedView("kotek"), closeSoftKeyboard());
And other variations, like withText("Szukaj") or the com.google.android.gms.R.id :) without luck.
Help will be appreciated! Thank You!
This view is auto-generated by the Google API.
If that's true (sorry, I don't have any experience with the Places API), it would mean that Espresso is unable to find this view.
Solution: Espresso Test Recorder
You can install the latest Android Studio 2.2-RC2 (don't remove your previous install) from http://tools.android.com/download/studio/builds/2-2-rc to check out the brand new Espresso Test Recorder (see: http://tools.android.com/tech-docs/test-recorder) and try to generate code for this view by clicking on it, but as I said, this might not help.
Solution: Espresso with UiAutomator
My doubts about it come from a limitation of the Espresso framework: it depends on the actual context of the app, which means it may not recognize intents, generated code or notifications.
Try a typical instrumentation framework like Google's UiAutomator. There's no problem using it alongside the Espresso UI testing framework.
Read: http://qathread.blogspot.com/2015/05/espresso-uiautomator-perfect-tandem.html
You will find more info here: http://google.github.io/android-testing-support-library/docs/uiautomator/
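A minimal sketch of that tandem for this case, assuming the autocomplete field is the only EditText in the Google-owned Activity (the button id, text and timeouts are just examples):
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.platform.app.InstrumentationRegistry.getInstrumentation;
import android.widget.EditText;
import androidx.test.uiautomator.By;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.UiObject2;
import androidx.test.uiautomator.Until;

// Espresso drives your own Activity: open the autocomplete screen.
onView(withId(R.id.open_autocomplete_button)).perform(click());   // placeholder id

// UiAutomator takes over in the Activity generated by the Places API.
UiDevice device = UiDevice.getInstance(getInstrumentation());
UiObject2 searchField = device.wait(Until.findObject(By.clazz(EditText.class)), 5000);
searchField.setText("kotek");

// Pick the first suggestion that contains the typed text.
UiObject2 suggestion = device.wait(Until.findObject(By.textContains("kotek")), 5000);
suggestion.click();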
Solution: Espresso with Robotium or Robotium Test Recorder
If you find UiAutomator pretty hard to learn, you can also use another instrumentation test framework called Robotium, which has a clean and concise syntax with some powerful functions like taking screenshots.
It can work alongside Espresso. Check the last paragraph of this article: https://github.com/codepath/android_guides/wiki/UI-Testing-with-Robotium
It also has its own recorder: http://robotium.com/products/robotium-recorder. Try it for free, but I doubt you will find it useful, as Robotium code is really simple to learn.
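For a feel of the Robotium syntax, here is a minimal sketch (the typed text and screenshot name are placeholders):
import com.robotium.solo.Solo;

// In setUp()/@Before of an instrumentation test (getActivity() comes from your test case/rule):
Solo solo = new Solo(getInstrumentation(), getActivity());

// Type into the first EditText on screen and pick a suggestion by its text.
solo.enterText(0, "kotek");
solo.waitForText("kotek");
solo.clickOnText("kotek");

// One of the handy extras: saves a screenshot to /sdcard/Robotium-Screenshots/.
solo.takeScreenshot("autocomplete_result");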
Hope it will help
Install Android Studio 2.2 Beta and try the Espresso Test Recorder. It will automatically generate an Espresso test case for you.
Try this one:
Espresso.onView(ViewMatchers.withId(R.id.places_autocomplete_edit_text))
    .perform(ViewActions.typeText("<ADDRESS>"))
Thread.sleep(3000)
onView(withText("<ADDRESS FROM SUGGESTION>"))
    .inRoot(withDecorView(Matchers.not(activityTestRule.activity.window.decorView)))
    .perform(click())
Tested with: Android Studio 3.5.2 | espresso-core 3.1.1 | places 1.1.0
The autocomplete component does not seem to be accessible with Espresso; one workaround is to send the typing and the tap through shell commands (width and height below are the display dimensions in pixels):
getInstrumentation().getUiAutomation().executeShellCommand("input text n")
sleep(2000)
getInstrumentation().getUiAutomation().executeShellCommand("input tap " + Math.round(width / 2f) + " " + Math.round(height / 4f))
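A slightly more complete sketch of the same idea in Java, assuming API 21+ for executeShellCommand() and UiAutomator on the test classpath; the tap coordinates are a guess at where the first suggestion appears, not a documented position:
import android.os.SystemClock;
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;

UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
int width = device.getDisplayWidth();
int height = device.getDisplayHeight();

// Type into whichever view currently has focus (the autocomplete field).
InstrumentationRegistry.getInstrumentation().getUiAutomation()
        .executeShellCommand("input text kotek");
SystemClock.sleep(2000);   // crude wait for suggestions to load

// Tap roughly where the first suggestion row is expected to be.
InstrumentationRegistry.getInstrumentation().getUiAutomation()
        .executeShellCommand("input tap " + (width / 2) + " " + (height / 4));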
I want to put some conditions in my code so that it behaves one way when it is being driven by monkeyrunner and another way when it is used by a real user.
How can the application under test detect that the emulator/application is being driven by monkeyrunner?
P.S. I am aware that it's bad practice to do such checks (the code under test should be the same code as in production). However, it's for a one-off case that is hard to handle any other way.
I didn't find a clean way to check that, other than uploading a file to the emulator/device as part of the MonkeyRunner test and then checking for its presence in the application code.
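A minimal sketch of that workaround: the test run pushes a marker file first (e.g. adb push marker.txt /sdcard/monkeyrunner_marker) and the app checks for it; the path, file name and method name are arbitrary, and reading that location may require a storage permission:
import java.io.File;

// In the application code: branch on the marker file the test pushed.
public static boolean isDrivenByMonkeyrunner() {
    return new File("/sdcard/monkeyrunner_marker").exists();
}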
I'm thinking of an easy-to-use Android usability testing tool that would allow the user to record and log relevant information during app testing. As a first step, I would like a screenshot taken each time the user interacts with the touchscreen, showing the position, duration and type of the touch event.
As Android does not allow me to take screenshots easily, and as it's not possible to log touch events from a service, here are my questions:
Does Logcat give me any information about touch events? (I tried, but I couldn't produce any touch logs.)
Is it possible to invoke the ddms screenshot action from the terminal? (./ddms -takescreenshot)
Does Logcat give me any information about touch events? (I tried, but I couldn't produce any touch logs.)
No.
Is it possible to invoke the ddms screenshot action from the terminal? (./ddms -takescreenshot)
Not via the ddms command, AFAIK. Either use monkeyrunner (as another answer suggested), or write your own code against the JAR file that DDMS uses. I used that approach to create a software projector; another developer extended that concept.
If you are looking to automate these things, you can use the monkeyrunner tool; it specifically has a call to take screenshots automatically.
http://developer.android.com/guide/developing/tools/monkeyrunner_concepts.html
It runs Python scripts that you would use to design an automated regression test.
I am writing an Android testing application that automates testing on the device.
I am targeting Facebook as my base application and writing an app using the Robotium framework in order to accomplish my requirement.
So far I have successfully implemented a few features, but I am stuck at one point: I want to automate the "upload picture" functionality, but as soon as the upload button is clicked, the device's built-in application gets activated, and I cannot control the default app using Robotium.
Is there any way to solve this, either by writing some code using Robotium or by writing a layer between the OS and Robotium that can generate key strokes?
I don't think you can do that. However you might be able to make your own modified version of the built-in application and use that instead (if you can make it default and so not have to go through a selection screen), or root a phone and break its security model to use as an automated testing device.
You said
a layer between the OS and Robotium.
If you are OK with that, there is the black-box UIAutomator framework by Google, which might be able to do that.
Additionally, you might want to use monkeyrunner like this:
$ monkeyrunner
>>> from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
>>> device = MonkeyRunner.waitForConnection()
>>> device.touch(200, 400, MonkeyDevice.DOWN_AND_UP)
You can also do a drag, start activities, etc. Have a look at the API for MonkeyDevice.
(from this SO answer).