Unable to drag and drop in Appium Android

I'm using Appium 1.6.5 and Windows 10.
Using the demo app by Appium (ApiDemos-debug.apk), I am trying to drag and drop dots.
This is my current code:
TouchAction actions = new TouchAction(driver);
actions.tap((AndroidElement)driver.findElementByAndroidUIAutomator("text(\"Views\")")).perform();
driver.findElementByAndroidUIAutomator("text(\"Drag and Drop\")").click();
AndroidElement element1 = driver.findElement(By.id("io.appium.android.apis:id/drag_dot_1"));
AndroidElement element2 = driver.findElement(By.id("io.appium.android.apis:id/drag_dot_2"));
actions.longPress(element1).waitAction(3000).perform().release();
This error is printed when the test is run:
org.openqa.selenium.NoSuchElementException: An element could not be
located on the page using the given search parameters. (WARNING: The
server did not provide any stacktrace information)
Any combination of longPress() calls results in this error. Clicking and tapping work fine; the errors only occur with the other TouchAction methods.
Any idea how to resolve this? I need to know whether my setup is wrong or the TouchAction methods have issues.

You need to long-press on that element and drag it to the other element. Currently, you are only long-pressing and performing the action without releasing it.
Try this:
actions.longPress(element1).moveTo(element2).release().perform();

Related

How to scroll Android app with Appium by Python Client

I'm testing an Android application and need to scroll text. I have tried everything I found here and on the Internet, but nothing works.
Appium v1.4.1 (Server v1.7.2)
Python 3.x
Using selenium webdriver
I need to scroll to the bottom of the page, not to a specific element.
The closest is
self.driver.execute_script("mobile: scroll", {"direction": "up"})
but it is not working.
Log is:
selenium.common.exceptions.WebDriverException: Message: Unknown mobile command "scroll". Only shell commands are supported.
Thanks
For Android there are two good options when it comes to scrolling:
use TouchAction
actions = TouchAction(driver)
el = driver.find_element_by_id(<id of element you press to start swipe>)
actions.press(el).move_to(x=100, y=-1000).release().perform()
You can also get screen size of the device to scroll more precisely:
screen_size = driver.get_window_size()
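The screen size can be turned into swipe coordinates. A minimal sketch (the helper name and the 0.8/0.2 ratios are my own choices, not from the answer):

```python
def vertical_swipe_coords(screen_size, start_ratio=0.8, end_ratio=0.2):
    # screen_size is the dict returned by driver.get_window_size(),
    # e.g. {'width': 1080, 'height': 1920}.
    # Returns (x, start_y, end_y) for an upward swipe, which scrolls
    # the content toward the bottom of the page.
    x = screen_size['width'] // 2
    start_y = int(screen_size['height'] * start_ratio)
    end_y = int(screen_size['height'] * end_ratio)
    return x, start_y, end_y
```

The result could then be fed into the TouchAction chain above, e.g. `x, y1, y2 = vertical_swipe_coords(driver.get_window_size())` followed by `actions.press(x=x, y=y1).move_to(x=x, y=y2).release().perform()`.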
use the native UiAutomator scrollIntoView method
self.driver.find_element_by_android_uiautomator('new UiScrollable(new UiSelector().resourceId("<id of scrollable view>")).scrollIntoView(new UiSelector().resourceId("<id of element to scroll to>"))')
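Since that selector string is easy to mistype, one option is to build it from the two resource IDs. A hypothetical helper (the IDs in the usage below are made up):

```python
def scroll_into_view_selector(scrollable_id, target_id):
    # Builds the UiAutomator expression passed to
    # driver.find_element_by_android_uiautomator().
    return (
        'new UiScrollable(new UiSelector().resourceId("%s"))'
        '.scrollIntoView(new UiSelector().resourceId("%s"))'
        % (scrollable_id, target_id)
    )
```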
You can read more here

Unable to click on a button due to incorrect position in UI Automator Viewer

Whenever I use the command below to click on the Ok button, it clicks at the (incorrect) position that UI Automator Viewer points to (screenshot).
I need to click on Ok Button.
Driver.findElement(By.xpath("//android.widget.Button[@text='Ok']")).click();
First of all, since you have a resource-id, the most reliable approach is to search by it:
driver.findElementById("confirm_button").click()
If it still clicks the wrong element, you might need to wait a bit (for the popup to load) and then perform the click.
Can you try:
Driver.findElement(By.xpath("//*[@class='android.widget.Button' and @text='Ok']")).click();
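If you do stick with XPath, building the expression in code helps avoid quoting mistakes. A small sketch (Python, helper name hypothetical):

```python
def button_by_text_xpath(text):
    # XPath for an android.widget.Button whose text matches exactly.
    return "//android.widget.Button[@text='{}']".format(text)
```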
Have you tried to click by coordinates?
(Python code)
touchAction = TouchAction(driver)  # from appium.webdriver.common.touch_action import TouchAction
myElement = driver.find_element_by_id('confirm_button')
x = myElement.location['x']
y = myElement.location['y']
touchAction.tap(None, x, y, 1).perform()
I had the same problem when many layers are displayed at the same time...
Hope it helps
Update
(Java code)
TouchAction touchAction = new TouchAction(driver);
WebElement myElement = driver.findElementById("confirm_button");
int x = myElement.getLocation().getX();
int y = myElement.getLocation().getY();
touchAction.tap(x, y).perform();
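Both snippets tap the element's top-left corner; when overlapping layers are the issue, tapping the element's center is usually more robust. A minimal sketch (Python, helper name hypothetical), using the location and size dicts the client returns:

```python
def element_center(location, size):
    # location and size as returned by element.location and element.size,
    # e.g. {'x': 100, 'y': 200} and {'width': 80, 'height': 40}.
    cx = location['x'] + size['width'] // 2
    cy = location['y'] + size['height'] // 2
    return cx, cy
```

The result can then be passed to the tap call, e.g. `touchAction.tap(None, *element_center(el.location, el.size), 1).perform()`.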
The problem was that when I tried to enter data on the signup page, the keyboard opened, and because of that the UI Automator screenshot gave the wrong position of the button. So what I did was use the commands below to hide the keyboard before clicking the signup button.
Driver.navigate().back();
delay(3000);
Driver.findElement(By.id("btn_register_signup")).click();

UiAutomator - UiDevice can't findObject by selector (package name and resource ID)

I'm unable to find an element (UiObject2) using UiAutomator within my androidTest. I obtained a UiDevice instance and tried to find the object with this:
MY_UI_DEVICE.findObject(By.res(CURRENT_PACKAGE, id));
CURRENT_PACKAGE is the package of my app, obtained via MY_UI_DEVICE.getCurrentPackageName(). I also tried this one:
MY_UI_DEVICE.wait(Until.findObject(By.res(CURRENT_PACKAGE, id)), 10000);
I can see the app waiting for 10 seconds on the right screen (where the desired object is present), but after the timeout it fails to find it and the test fails. It always fails on an emulator (API 23), but occasionally works on a real device (API 25).
When I debug the code, I can manually obtain the right element by calling a sequence of getChild(index) methods on AccessibilityNodeInfo, but at runtime it still fails, even though the app is waiting on the right screen where I expect the specific element.
I have been playing with different UiDevice functions, but none of them helped and I'm out of ideas, so any help would be appreciated.
There were 2 issues with my tests:
The first problem was in getting / initialising the UiDevice instance in a static block (as a static field in a util class). I moved it into @Before, and that partially resolved the issue.
Another problem occurred while searching for an element using a package name obtained from the UiDevice. I replaced it with InstrumentationRegistry.getTargetContext().getPackageName(), as is done in the Google samples.
Make sure that your test method declares throws UiObjectNotFoundException. I had this issue with UiObject2 as well until I started declaring the exception:
@Test
public void clockTest() throws UiObjectNotFoundException, InterruptedException {
mDevice.click(1146,37); //click on clock top right corner
Thread.sleep(1500);//wait 1.5 seconds for screen to load
mDevice.click(1138,135);//clicks in shell
Thread.sleep(1500);//wait 1.5s for screen to load
UiObject2 dTSettingsButton = mDevice.findObject(By.text("Date & Time Settings"));
//assertNotNull(dTSettingsButton);//find and assert the settings button
dTSettingsButton.clickAndWait(Until.newWindow(), LAUNCH_TIMEOUT);//clicks the settings button
UiObject2 timeFormatButton = mDevice.findObject(By.text("Select Time Format"));
assertNotNull(timeFormatButton);//find and assert timeformat button
timeFormatButton.clickAndWait(Until.newWindow(), LAUNCH_TIMEOUT);//click timeformat button
UiObject2 twelveHourButton = mDevice.findObject(By.res("com.REDACTED.settings:id/first_btn"));
assertNotNull(twelveHourButton);//find and assert twelvehour button
twelveHourButton.clickAndWait(Until.newWindow(), LAUNCH_TIMEOUT);//click twelvehour button
}
Try using UiSelector methods. They worked much better for me than By selectors.

Simulate Touch Controls Through Code

I'm trying to make it possible to navigate through my Google Glass application by using head gestures. I'm able to recognize head gestures like looking to the right, to the left, and up. Each gesture has its own method for what to do when it is recognized.
Now I need to simulate the corresponding touch gestures inside each method. So it will think I'm swiping to the left or right which will allow me to navigate through the cards with the head gestures.
Does anyone have any idea on how to actually achieve this?
Edit
I created a quick hello-world application to play with. I added my head-gesture code and started trying to get the keys working.
I added the following to my onCreate()
Instrumentation instr = new Instrumentation();
Then I added the following lines to each respective headgesture method.
A head gesture upwards should correspond to tapping the touchpad: instr.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_CENTER);
A head gesture to the left should correspond to swiping left on the touchpad: instr.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_LEFT);
A head gesture to the right should correspond to swiping right on the touchpad: instr.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_RIGHT);
They are responding accordingly now, however I'm getting an exception saying:
java.lang.RuntimeException: This method can not be called from the main application thread
The Solution
In the end I went a different direction than the one I mentioned in my edit above.
I found out that it is possible to call touch controls in the shell by using
adb shell input keyevent <keycode here>
I then found a way to use this from Android. I have the following class, named issueKey:
public class issueKey {
    public void issueKey(int keyCode) {
        try {
            java.lang.Process p = java.lang.Runtime.getRuntime()
                    .exec("input keyevent " + Integer.toString(keyCode) + "\n");
        } catch (Exception e) {
            Log.wtf("IssueKeyError", e.getMessage());
        }
    }
}
Then I simply call the class and pass the keycode for the corresponding gesture
mIssueKey.issueKey(4);//functions as swipe down
Here is the list of keycodes that I tested, for anyone who is interested.
Keys for each respective button/gesture
4: Swipe Down
21: Swipe Left
22: Swipe Right
23: Tap
24: Volume Up
25: Volume Down
26: Lock/Unlock Screen
27: Camera Button
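The list above can be captured as a lookup that builds the same shell command issueKey executes. A hypothetical Python sketch (names are my own):

```python
# Keycodes for each gesture/button, as listed above.
GLASS_KEYCODES = {
    "swipe_down": 4,
    "swipe_left": 21,
    "swipe_right": 22,
    "tap": 23,
    "volume_up": 24,
    "volume_down": 25,
    "lock_unlock": 26,
    "camera": 27,
}

def keyevent_command(gesture):
    # Returns the shell command that injects the key event on-device.
    return "input keyevent {}".format(GLASS_KEYCODES[gesture])
```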
However, what I'm wondering now is: what would be best practice, getting the solution I mentioned in my edit to work by using an AsyncTask, or is the solution I'm currently using better?
Using the Instrumentation class would work if you use a separate thread to call the sendKeyDownUpSync method from.
This can be done using the following steps:
Create and start a thread from your activity
In the run method, use the Looper class and create a Handler as explained here
Every time you want to call sendKeyDownUpSync, post a Runnable instance to the Handler, which calls sendKeyDownUpSync in its run method.
A similar code sample (not from me) is available here
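The steps above mirror a generic worker-thread pattern. As an analogy only (not Android code), here is the same idea sketched in Python, where a queue plays the role of the Handler and the loop plays the Looper:

```python
import queue
import threading

def start_worker():
    # A dedicated thread drains a queue of callables, the way a Looper
    # thread processes Runnables posted to its Handler.
    tasks = queue.Queue()

    def loop():
        while True:
            task = tasks.get()
            if task is None:  # sentinel: quit the loop
                break
            task()

    worker = threading.Thread(target=loop, daemon=True)
    worker.start()
    return tasks, worker
```

Each call you would make to sendKeyDownUpSync becomes a callable posted to the queue, e.g. `tasks.put(lambda: instr.sendKeyDownUpSync(KeyEvent.KEYCODE_DPAD_LEFT))`, which keeps it off the main thread.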

Android Web Driver - cannot simulate touch actions

I've downloaded the android_webdriver_library.jar via the Eclipse ADT SDK Manager under Extras/Google Web Driver and reference it in my eclipse project.
How do I simulate a touch action like tapping a button for example? I cannot find the TouchActions class which according to the documentation (http://selenium.googlecode.com/svn/trunk/docs/api/java/index.html) is supposed to be the factory class for touch actions.
I don't know Java, but I got this working in Python.
What does your code look like? Can you try the tutorial, as well as the example below, and modify it to use touch?
Example from tutorial below:
WebElement toFlick = driver.findElement(By.id("image"));
// 400 pixels left at normal speed
Action flick = getBuilder(driver).flick(toFlick, 0, -400, FlickAction.SPEED_NORMAL)
.build();
flick.perform();
WebElement secondImage = driver.findElement(By.id("secondImage"));
assertTrue(secondImage.isDisplayed());
