How to automate an unlock pattern on a real phone using uiautomator? - android

I have recently started learning uiautomator for UI testing of various Android devices. Currently I am testing on a Galaxy S4.
I am looking for a class or method that can be used to automate the unlock pattern the user draws to unlock the phone. For example, I have the letter N as the "draw pattern" to unlock the phone. How can I automate this unlock pattern in uiautomator?

Suppose you have the letter "N" as the unlock pattern. First you have to find the coordinates of each point of that N shape on your device. As you mentioned, the pattern lock has 9 dots in total, but you only need the (x,y) coordinates of the 4 dots that make up the N. To get the coordinates, you can use the same method mentioned earlier in one of the answers:
Go to 'Settings' -> 'Developer Options'.
Under the 'INPUT' section you will find an option called 'Pointer Location' -> enable it.
Once you have the coordinates of the 4 dots, use the swipe(Point[] segments, int segmentSteps) method of the UiAutomator framework.
The input for this method is the 4 sets of coordinates you got from your device screen, as a Point array. This produces one continuous swipe through the points.
I have given a sample script below for your understanding.
import android.graphics.Point;

public void unlockPatternLock() throws UiObjectNotFoundException, Exception {
    Point[] coordinates = new Point[4];
    coordinates[0] = new Point(248, 1520);
    coordinates[1] = new Point(248, 929);
    coordinates[2] = new Point(796, 1520);
    coordinates[3] = new Point(796, 929);
    getUiDevice().wakeUp();
    getUiDevice().swipe(coordinates, 10);
}
The script above draws the N shape smoothly. Remember to enter the coordinates according to your own device's screen.

This is the only way I know to do it, but it can be tedious trying to find your x and y coordinates.
UiDevice.getInstance().swipe(int startX, int startY, int endX, int endY, int steps)
The only problem I see is that to draw an "N" you would need three of these swipes, while the unlock has to be one continuous swipe.
Give it a shot. Finding your x and y will be tough. I would go to my "apps home" page and look at apps (with uiautomatorviewer) that are in relatively the same spot, find their coordinates, then go from there.
NOTE: The int steps argument controls how fast and "smooth" the swipe is. I like to use 5 or 10; it seems pretty natural.

To find out the coordinates on the screen, you can follow this:
[1] Go to 'Settings' -> 'Developer Options'.
[2] Under the 'INPUT' section you will find an option called 'Pointer Location' -> enable it.
After that, if you touch anywhere on the screen, you can see the exact screen coordinates of that point at the top of your device's screen.
Once you have the coordinates, you can use the swipe method, for example:
UiDevice.getInstance().swipe(390, 1138, 719, 1128, 40);
giving the exact coordinates of where to drag from and where to drag to.
I have already used this and it works!

Related

Is it possible to retrieve the physical (device) camera height over the physical floor?

I'm developing an app using ARCore. In this app I need:
1) to place an object that always stays at the same pose in world space. Following the recommendations of the "Working with Anchors" article (https://developers.google.com/ar/develop/developer-guides/anchors), I'm attaching an anchor to the ARCore Session. That is, I'm not using Trackables at all.
2) as a secondary requirement, the object must be placed automatically, that is, without tapping on the screen.
I've managed to fulfil both requirements, and now the object is "floating" in front of me, this way (very common code):
private void onSceneUpdate(FrameTime frameTime) {
    ...
    if (_renderable != null && _anchorNode == null) {
        float[] position = {0f, 0f, -10f};
        float[] rotation = {0, 0, 0, 1};

        Anchor anchor = _arFragment.getArSceneView().getSession()
                .createAnchor(new Pose(position, rotation));

        _anchorNode = new AnchorNode(anchor);
        _anchorNode.setRenderable(_renderable);
        _anchorNode.setParent(_arFragment.getArSceneView().getScene());
        _anchorNode.setLocalScale(new Vector3(0.01f, 0.01f, 0.01f)); // cm -> m
        ...
    }
}
As I want the object to be on the floor, I need to find out the height of my physical (device) camera above the floor, in order to subtract that value from the object's current Y coordinate:
float[] position = {0f, HERE_THE_VALUE_TO_SUBTRACT_FROM_CAMERA_HEIGHT, -10f};
Certainly, it would be an easy implementation if plane Trackables were used, but here I have the requirements named above.
I've tried different camera/device pose retrieval APIs, namely frame.getAndroidSensorPose(), frame.getCamera().getPose() and frame.getCamera().getDisplayOrientedPose(), but none of them returns valid values.
Thanks for your advice.
EDIT after Michael Dougan's comments.
Well, I think we then have two ways to achieve the requirements:
1) leave the code unchanged, keep using the Session anchor, and ask the user to launch the app and then follow a "calibration process" with the device on the floor. As this is a professional-use app, not a consumer one, we think that is perfectly suitable;
2) go ahead with the good old Trackables, using the usual floor plane as an anchor, and include the pose of that anchor in the calculation of the 3D model's position.

Unity2D Android Touch misbehaving

I am attempting to translate an object depending on the touch position of the user.
The problem is that when I test it out, the object disappears as soon as I drag my finger across my phone's screen. I am not entirely sure what's going on.
If somebody can guide me that would be great :)
Thanks
This is the Code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches) {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world; it's a point on the touch screen. What you'll want is probably Camera.main.ScreenToWorldPoint(touch.position).y, which gives you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply plugging the above in still won't work as you intend.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
You'll want to ask this question on Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
Very few people come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem: touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position and the screen world point. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html

Configured and built the app, and the app appears in the AVD, but it doesn't work on clicking and opens a blank screen; using Pygame Subset for Android [duplicate]

I was wondering if someone could give me a detailed explanation of how to run a game/app developed using Pygame on an Android phone. I recently finished programming Pac-Man and it works perfectly on my computer, but I think it would be awesome if I could get it running on my phone. I tried following the instructions at http://pygame.renpy.org/android-packaging.html, but every time I run "import android" in IDLE I get an error saying it did not find the module. Could someone clearly explain how to set up the android module?
Also, in my program I used code such as if (event.key == K_UP or event.key == K_w): direction = UP. However, there are no arrow keys on a phone. What code would I need to detect whether the user swiped the screen with their fingers from up to down, left to right, etc.?
Any help would be great. Thanks <3
There is a Pygame Subset for Android. However, it requires some special reworking and changes to the program. Hopefully it will not be too hard.
http://pygame.renpy.org/writing.html
http://pygame.renpy.org/index.html
As for your second question, I am unable to answer because I am not yet experienced enough.
I think the Pygame Subset for Android would be good, but I don't trust its functionality; I use Kivy, as it's cross-platform.
If you do decide to use the Pygame Subset for Android, your touches and flicks on the screen of an Android device map to mouse movement on the desktop, so what I am saying is: treat the touch as the mouse. Good luck.
There are some pretty good answers for your first part already, so I won't answer that. (I came here looking into what to use for it too!)
However, the second part of your question should be a lot easier.
Have a mouse handler that, on a mouse-down event, saves the coordinates of the touch to MX and MY variables.
Then, when the mouse-up event is triggered, take the new coordinates and calculate a vector from MX, MY to this new point, i.e. the distance and angle of the swipe. Use trigonometry or the math module for the angle (research atan2).
You can then use an if/elif/else to determine which quadrant the angle falls in, and use the distance to decide whether the swipe was valid, i.e. greater than a certain value.
I'm on mobile, so unfortunately I can't give an example, but I'm certain you can work out the solution with this guidance.
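A minimal sketch of that approach in Python (the function name, the mx/my naming and the 50-pixel minimum distance are my own illustrative choices, not from the answer above):

```python
import math

MIN_SWIPE_DIST = 50  # pixels; tune to taste

def classify_swipe(mx, my, x, y):
    """Classify the swipe from the mouse-down point (mx, my) to the
    mouse-up point (x, y) as 'up', 'down', 'left' or 'right', or return
    None if the finger moved less than MIN_SWIPE_DIST pixels.
    Screen y grows downward, so dy is flipped to get the usual math angle."""
    dx, dy = x - mx, my - y
    if math.hypot(dx, dy) < MIN_SWIPE_DIST:
        return None
    angle = math.degrees(math.atan2(dy, dx))  # -180..180, 0 = swipe right
    if -45 <= angle <= 45:
        return "right"
    if 45 < angle <= 135:
        return "up"
    if -135 <= angle < -45:
        return "down"
    return "left"
```

Save (mx, my) in your MOUSEBUTTONDOWN handler, then call classify_swipe from the MOUSEBUTTONUP handler with the release coordinates.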
For your second question, there is an answer on another website:
https://amp.reddit.com/r/Python/comments/2ak14j/made_my_first_android_app_in_under_5_hours_using/
It says you can try code like this:
if android:
    android.map_key(android.KEYCODE_BACK, pygame.K_ESCAPE)
I hope it will help you.
I've run Pygame on Android!
First, I debugged the app by saving errors to a file.
I got an error saying that on Android it can only run in fullscreen.
I've created a small app and it works:
import sys, os

andr = None
try:
    import android
    andr = True
except ImportError:
    andr = False

try:
    import pygame
    import random
    import time
    from pygame.locals import *

    pygame.init()
    fps = 1 / 3
    width, height = 640, 480
    screen = pygame.display.set_mode((width, height), FULLSCREEN if andr else 0)
    width, height = pygame.display.get_surface().get_size()
    while True:
        screen.fill((random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)))
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
        pygame.display.flip()
        time.sleep(fps)
except Exception as e:
    open('error.txt', 'w').write(str(e))
Screenshot: https://i.stack.imgur.com/4qXPe.png
The requirements line in the spec file:
requirements = python3,pygame
The APK file size is only 12 MB!
pygame.event has multiple touch-screen events. Here are some useful ones:
FINGERMOTION: touch_id, finger_id, x, y, dx, dy
FINGERDOWN: touch_id, finger_id, x, y, dx, dy
FINGERUP: touch_id, finger_id, x, y, dx, dy
MULTIGESTURE: touch_id, x, y, pinched, rotated, num_fingers
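One detail worth knowing when using these events (a hedged note; finger_to_pixels is my own helper name): in pygame 2 the finger events report x and y normalized to the window, as fractions between 0.0 and 1.0, so convert them to pixels before using them:

```python
def finger_to_pixels(norm_x, norm_y, size):
    """Convert the normalized (0.0-1.0) x/y carried by FINGERDOWN,
    FINGERUP and FINGERMOTION events into pixel coordinates for a
    window of the given (width, height)."""
    w, h = size
    return int(norm_x * w), int(norm_y * h)

# Inside the event loop this would be used roughly like:
#     elif event.type == pygame.FINGERDOWN:
#         px, py = finger_to_pixels(event.x, event.y, screen.get_size())
```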

how can I make my PC mouse cursor continue from where it left off, controlled through Android phone?

I am able to control my PC's mouse with my Android phone. I would like the mouse cursor to continue its motion from where it left off, just like a regular mouse: when I continue to move my finger across the phone's screen, the mouse should keep moving forward, and the same goes for other directions.
To give you an example:
Say I move the cursor to 40, 50, which translates to 400, 500 on the PC. Now when I put my finger on the device at, say, 10, 5, the cursor on the PC jumps to 100, 50! But I want it to continue from 400, 500.
I am making these numbers up, but the phone's coordinates are converted to the PC's like this:
PC_COORDS = MOBILE_COORDS * FACTOR;
where FACTOR = PC_MAX_DIMENSION / MOBILE_MAX_DIMENSION;
It seems like you are asking for some logic interpretation; maybe what you are looking for is something like this:
PC_COORDS = getCurrentPcCoords() + ( MOBILE_COORDS * FACTOR );
Of course getCurrentPcCoords() is not a real function; it is just there to convey the idea behind the example.
Keep in mind that this ADDs the new coordinates, so what you actually want is to get the change in movement on the Android device, multiply that by the factor, and add it to the current coordinates on the PC.
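The delta-based idea in the last paragraph can be sketched like this (the class and the FACTOR value are illustrative, not taken from the asker's code):

```python
FACTOR = 2.5  # PC_MAX_DIMENSION / MOBILE_MAX_DIMENSION, example value

class RelativeCursor:
    """Moves the PC cursor by scaled touch deltas instead of mapping
    absolute phone coordinates to absolute PC coordinates."""

    def __init__(self, start_x, start_y):
        self.pc_x, self.pc_y = start_x, start_y  # current PC cursor position
        self.last_touch = None                   # last finger position on the phone

    def finger_down(self, tx, ty):
        # A new touch only records where the finger landed; the cursor stays put.
        self.last_touch = (tx, ty)

    def finger_move(self, tx, ty):
        # Advance the cursor by the *change* in touch position, scaled by FACTOR.
        dx, dy = tx - self.last_touch[0], ty - self.last_touch[1]
        self.pc_x += dx * FACTOR
        self.pc_y += dy * FACTOR
        self.last_touch = (tx, ty)
        return self.pc_x, self.pc_y

    def finger_up(self):
        self.last_touch = None
```

With this, putting the finger down at 10, 5 leaves the cursor at 400, 500, and only subsequent finger movement moves it.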

How to click a view of android program through MonkeyRunner?

I want to use MonkeyRunner to test the compatibility of my Android program on a list of devices with different screen resolutions. I need to click a view, but the view is not in the same position at different resolutions. How can I get its position, or do something else to click it?
I NEED your help!
I know it's a little late, but you can use hierarchyviewer in the Android SDK to get the view id.
Then, in your script, use this:
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
from com.android.monkeyrunner.easy import EasyMonkeyDevice, By
device = MonkeyRunner.waitForConnection()
easy_device = EasyMonkeyDevice(device)
# Start your android app
# touch the view by id
easy_device.touch(By.id('view_id'), MonkeyDevice.DOWN_AND_UP)
Later edit: thanks to dtmilano and AndroidViewClient, I was able to do the clicks on views as needed. The link is here: https://github.com/dtmilano/AndroidViewClient
Unfortunately this is not really possible with MonkeyRunner. One option is to use device.getProperty("display.width"), device.getProperty("display.height") and device.getProperty("display.density") and try to use those to figure out where the view is. Another option is to use a tool like Sikuli to try to click on the view.
Edit (one year after this answer was initially posted): it is now possible to do what you wanted with https://github.com/dtmilano/AndroidViewClient
Similar to what someoneHuman said above, I use two functions that convert tap coordinates based on the difference between the resolution of the device I originally wrote the script for and whatever device I am currently using.
First I get the current device's x and y pixel dimensions:
CurrentDeviceX = float(device.getProperty("display.width"))
CurrentDeviceY = float(device.getProperty("display.height"))
Then I define functions for converting x and y coordinates. You can see that the functions below were written for a device that is 1280 x 800.
def transX(x):
    ''' (number) -> int
    transX takes the x value supplied from the original device
    and converts it to match the resolution of whatever device
    is plugged in.
    '''
    OriginalWidth = 1280
    # Scale against the X dimension of the current device
    XScale = CurrentDeviceX / OriginalWidth
    return int(XScale * x)

def transY(y):
    ''' (number) -> int
    transY takes the y value supplied from the original device
    and converts it to match the resolution of whatever device
    is plugged in.
    '''
    OriginalHeight = 800
    # Scale against the Y dimension of the current device
    YScale = CurrentDeviceY / OriginalHeight
    return int(YScale * y)
Then I can use these functions when creating tap events in my scripts. For example:
device.touch(transX(737), transY(226), 'DOWN_AND_UP')
Please note that this approach is far from perfect, and it will only work if your app utilizes anchoring to adjust UI based on screen size. This is my quick and dirty approach to making scripts that work on multiple devices when UI IDs are unavailable.
