Communication between an Android app and UIAutomator via a non-UI mechanism

Currently I am using UIAutomator to test our application, and all UI elements are accessible to UIAutomator.
Usually we build the APK with additional code that shows a dialog indicating successful completion of a test case (i.e. the operation invoked by UIAutomator), to inform UIAutomator that it can proceed with the next test case.
The code responsible for showing the dialog is maintained as patches and is not allowed to be committed to the repository.
For this reason, whenever we want to execute UIAutomator tests, we have to build the APK with the additional code from those patches.
My question: is there any other way to inform UIAutomator of the successful completion of a test case (i.e. that the application has completed the operation invoked by UIAutomator) without using a dialog?
I need this change in order to execute UIAutomator tests on release-candidate builds.
What I tried: setting a constant delay between test-case invocations.
But a constant delay does not work, as execution time varies with test data and with the device/environment.
I also thought of a BroadcastReceiver, but I don't know how to register one from UIAutomator.
Is there any other mechanism or workaround to achieve this?

I am not sure what you are really asking for, but since I am not allowed to comment, correct me if I have misunderstood.
You are looking for a way to assert that a test case has executed and that no anomalies were present in your application?
First you need to identify a UiObject. Then you can make use of
UiObject.waitForExists(timeout);
This method returns true once a UI element matching the selector specified at instantiation appears, or false if the timeout elapses first.
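For example, a minimal sketch in Kotlin, assuming the UiAutomator 1.x API and a hypothetical "Test passed" marker that the app displays on completion:
import com.android.uiautomator.core.UiObject
import com.android.uiautomator.core.UiSelector

// Hypothetical marker the app shows when the operation finishes.
val doneMarker = UiObject(UiSelector().text("Test passed"))
// Blocks for up to 10 seconds; returns true as soon as the element appears.
if (doneMarker.waitForExists(10_000)) {
    // Safe to proceed with the next test case.
}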
Another solution is to use a runner that extends UiAutomatorInstrumentationTestRunner, where you build a TestSuite to execute in a set order.

If you want UIAutomator to report status to the host console, here is an answer from the post below:
Writing to Android UI Automator output console
Instrumentation.sendStatus(..)
Each key / value pair in the Bundle will be written out like this:
INSTRUMENTATION_STATUS: key1=value1
INSTRUMENTATION_STATUS: key2=value2
INSTRUMENTATION_STATUS_CODE: -1
So your host can get the status at any step you want.
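A minimal sketch in Kotlin, assuming the test runs under Instrumentation (e.g. UiAutomator 2.x); the keys and values here are hypothetical:
import android.os.Bundle
import androidx.test.platform.app.InstrumentationRegistry

val status = Bundle().apply {
    putString("testCase", "login")  // hypothetical key/value pair
    putString("result", "passed")
}
// Emits INSTRUMENTATION_STATUS lines that the host console can parse.
InstrumentationRegistry.getInstrumentation().sendStatus(0, status)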

Related

How to increase Kotlin coroutines Dispatchers.IO size on Android?

The coroutines Dispatchers.IO context is limited to 64 threads. That's not enough to reliably interface with blocking code in a highly concurrent system.
Documentation states that:
Additional threads in this pool are created and are shutdown on demand. The number of threads used by this dispatcher is limited by the value of the "kotlinx.coroutines.io.parallelism" (IO_PARALLELISM_PROPERTY_NAME) system property. It defaults to the limit of 64 threads or the number of cores (whichever is larger).
I want to change the kotlinx.coroutines.io.parallelism system property to something else. However, if I just do this:
adb shell "setprop kotlinx.coroutines.io.parallelism 1000"
then I get the following result:
setprop: failed to set property 'kotlinx.coroutines.io.parallelism' to '1000'
Furthermore, if I want to ship my app, I'll need to change this property on users' devices as well, right? Otherwise the app won't work. However, even assuming that's possible, as far as I understand all apps that change this property would override the setting for one another. This doesn't sound like a reliable mode of operation.
So, I have three questions in this context:
Is the property implied by the documentation indeed the "system property" that I tried to change?
How do I change this property on a non-rooted device for all users of my app?
Is there a better alternative?
P.S. I know that if I only used coroutines, without blocking code, this probably wouldn't be a problem. But let's assume that I need to use blocking calls (e.g. legacy Java code).
IO_PARALLELISM_PROPERTY_NAME does not refer to an Android system property, but to a Java system property. Just add this code early in your app (Java shown here), e.g. first thing in your Application.onCreate(), to change it to 1000:
import static kotlinx.coroutines.DispatchersKt.IO_PARALLELISM_PROPERTY_NAME;
System.setProperty(IO_PARALLELISM_PROPERTY_NAME, Integer.toString(1000));
This does not have to be done on a per-device basis with root or anything like that. It will work everywhere, since it is regular app code using regular app APIs.
As long as you do this before you use Dispatchers.IO for the first time, your property change will be applied.
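In Kotlin the same one-liner looks like this (a sketch; again, it must run before Dispatchers.IO is first used):
import kotlinx.coroutines.IO_PARALLELISM_PROPERTY_NAME

// Must execute before the first use of Dispatchers.IO, e.g. in Application.onCreate().
System.setProperty(IO_PARALLELISM_PROPERTY_NAME, "1000")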
You can create your own dispatcher backed by any number of threads, like so (Executors is java.util.concurrent.Executors; asCoroutineDispatcher comes from kotlinx.coroutines):
val dispatcher = Executors.newFixedThreadPool(128).asCoroutineDispatcher()
Remember to close() the dispatcher when it is no longer needed, otherwise its threads will leak.
With kotlinx-coroutines 1.6.0 you can limit the number of threads used by a dispatcher with the limitedParallelism method.
Example:
// 100 threads for MySQL connection
val myMysqlDbDispatcher = Dispatchers.IO.limitedParallelism(100)
// 60 threads for MongoDB connection
val myMongoDbDispatcher = Dispatchers.IO.limitedParallelism(60)
Release note: https://blog.jetbrains.com/kotlin/2021/12/introducing-kotlinx-coroutines-1-6-0/#dispatcher-views-api

How to write an Android test which will verify my app hits all required APIs

I am heavily testing my application with unit tests and Espresso tests. My next step is to make sure my application hits all required APIs. For that reason, I am looking for a way to write a test which will verify the API calls.
I would really appreciate any suggestions.
What you are describing is called a "unit test". Unit tests are meant to exercise as many lines of code as possible regardless of UI.
Espresso tests are "instrumentation tests" (or "UI tests") intended to check whether the app responds to UI events correctly. They're not meant to verify the correctness of the code, but the correctness of the app's functionality as used by the user.
You can read about both in the official documentation. You'll find that unit tests are very different from instrumentation tests, and often harder to write because they require more engineering of your code to do correctly. You will likely have to "mock" various parts of your application to make sure their APIs were called exactly as you expected.
I had two main goals when writing API tests:
The first was component-based: make sure each class/component makes an API call when certain criteria are met (for example, calling API A when onDestroy() is called).
The second was to make sure the APIs are called in a certain order, for analytics purposes.
I achieved the first goal using unit tests with mock objects injected via Mockito and PowerMockito. PowerMockito was used primarily to mock static methods and to verify that methods were called at least n times.
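A minimal sketch of that verification idea with plain Mockito in Kotlin (ApiClient and LifecycleReporter are hypothetical names, not from the original answer):
import org.junit.Test
import org.mockito.Mockito.atLeast
import org.mockito.Mockito.mock
import org.mockito.Mockito.verify

// Hypothetical collaborator wrapping the real API.
interface ApiClient { fun callApiA() }

// Hypothetical class under test.
class LifecycleReporter(private val api: ApiClient) {
    fun onDestroy() = api.callApiA()
}

class LifecycleReporterTest {
    @Test
    fun callsApiAOnDestroy() {
        val api = mock(ApiClient::class.java)
        LifecycleReporter(api).onDestroy()
        // Fails the test unless callApiA() was invoked at least once.
        verify(api, atLeast(1)).callApiA()
    }
}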
For the second goal a UI test can be used, since it runs the real application. I implemented a helper class that recorded each API request as it was made; the Espresso script then validated the order of API calls by consulting that helper class.
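One way to sketch such a helper (all names are hypothetical; production code, or e.g. an OkHttp interceptor, would call record()):
import java.util.Collections

object ApiCallRecorder {
    private val calls = Collections.synchronizedList(mutableListOf<String>())

    fun record(endpoint: String) { calls.add(endpoint) }

    fun recordedCalls(): List<String> = synchronized(calls) { calls.toList() }
}

// In the Espresso test, after driving the UI:
// assertEquals(listOf("/login", "/analytics/start"), ApiCallRecorder.recordedCalls())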

AWS Device farm seems to be ignoring TestNG annotations

I have successfully uploaded and run my tests on AWS Device Farm. Locally, I'm using fun things like @Test(enabled = false, dependsOnGroups = "Login") to mark which tests to run and in what order they should execute. Locally, this all works fine and dandy as expected. The problem happens after I upload the zip of the Maven build to Device Farm and perform a test run.
Looking at the logs from Device Farm, it doesn't care whether "enabled" is set to true or false; it'll run things regardless. It also ignores the "groups" and "dependsOnGroups" attributes. This is super important, since all other tests will fail if I'm not logged in first. Worse, the subsequent failing tests will not be skipped, so AWS is happily charging me more money for this.
I tried using @Test(priority = blah), but it's ignoring that too. The only things it seems to respect are @BeforeSuite and @AfterSuite.
Has anyone run into this or have any ideas why this is happening?
I am an engineer working on AWS Device Farm.
1) "enabled" annotation flag
I just verified that you are correct: our TestNG parser ignores the "enabled" flag on annotations and always includes the test, even if it is disabled. At first glance, this appears to be a simple issue with a straightforward fix. Assuming the best case, we will try to have it fixed and live in production as soon as we can.
2) "dependsOnGroups" annotation field
The answer to this one is a bit more complex. As of today, AWS Device Farm does not support the dependsOnGroups or dependsOnMethods annotation fields.
There are a few reasons for this, the main one being that AWS Device Farm executes each @Test method individually, each with a fresh instance of the Appium server. There are pros and cons to this approach that I won't delve into here, but I will say that it has benefits on both a technical and a feature level. When executing individual methods with the TestNG runner, it only loads the context of an individual method, not all of the tests/suites/groups of the specified dependencies found within the .jar file. Additionally, since each of these test methods is executed in a different Java process each time, the TestNG runner does not maintain any state in memory. This means it does not know that it has already executed a test previously (in a different process), so it will attempt to run it again.
3) Executing "groups" of tests
We currently do not expose groups/excludegroups to the user to allow them to pick specific collections of tests to execute or skip. However, this should be possible by configuring <groups> entries in a testng.xml file placed in the root of the *-tests.jar file that is uploaded in your test package archive. In this case, our parser should only "discover" the tests defined within those groups and not all @Test-annotated methods. I have not tried this specific case myself, however, so YMMV.
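A sketch of such a testng.xml (the suite, group, and class names are hypothetical):
<suite name="DeviceFarmSuite">
  <test name="SmokeTests">
    <groups>
      <run>
        <include name="Login"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
    </classes>
  </test>
</suite>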
Hope that helps! If you have any additional questions or have a specific run/test package you'd like us to look at, feel free to reach out or paste a URL to a previously executed run.
I added a testng.xml in the root of the *-tests.jar and checked.
But Device Farm is not running the tests listed in testng.xml; it's still running all the classes with the @Test annotation.

Testing an Activity which uses a ContentResolver

In my app, I have an Activity, which is basically a form for the user to enter data which is then inserted into a database table via a ContentResolver. How do I test this Activity?
My first attempt was to use ActivityInstrumentationTestCase2 which gives me full instrumentation to simulate entering data. However, the underlying ContentProvider is not closed and destroyed between each test, which leaves the database in an unknown state at the beginning of subsequent tests.
My second attempt was to use ActivityUnitTestCase and inject a mock context that can clean up the database for each test. However, this doesn't allow me to enter text or click on buttons in the activity as it is never actually drawn on the test device.
Does anyone have any suggestions about what else I can try?
It seems that what you've been using is intended for library development.
You should look at the monkey binary, which works great for me.
If you're not satisfied with it, you could use monkeyrunner, which provides more control over the tests you're running.
Edit :
As far as the database testing goes, can't you use the sqlite3 binary to run a simple query after each test?
Edit2:
I am thinking of a .sh script that does the following (a sketch follows below):
Runs monkey for a while - you can specify the number of events for monkey to send.
Invokes sqlite3 with a query that checks the database integrity, writing into a log file (the sqlite3 command can take an SQL query as a second parameter, and you can use ">" to redirect the output into a file).
Repeat.
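A sketch of such a script (the package name, database path, query, and iteration count are hypothetical; adjust for your app):
#!/system/bin/sh
DB=/data/data/com.example.app/databases/app.db
LOG=/data/local/tmp/integrity.log

for seed in 1 2 3 4 5; do
    # Send 500 pseudo-random events to the app under test, with a known seed.
    monkey -p com.example.app -s $seed 500
    # Append a database integrity check to the log file.
    sqlite3 "$DB" "PRAGMA integrity_check;" >> "$LOG"
done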
There are tons of examples of .sh scripting on the net, so you shouldn't have a problem with that.
I am assuming you're doing all this in adb shell, but if you're not, make sure to set all your environment variables correctly. In particular, ANDROID_ROOT, ANDROID_ASSETS and ANDROID_DATA should be set to "/system", "/system/app" and "/data" respectively. Also don't forget to chmod the .sh file to be executable (chmod 777 file.sh).
Another suggestion is to generate and keep track of the monkey random seeds so you can repeat certain inputs that are causing you problems. You can specify a seed with the -s parameter.

Measure application response time/wait for next activity ready in Android?

I am developing an automated test suite to get the timing information for some Android applications (whose source code I do not have access to).
I haven't decided whether to use MonkeyRunner or Robotium yet. My main concern is, after I perform an action on the UI (say, typing a URL), how to determine when Android has fulfilled my request, all of the next activity's components are ready for use, and I am ready to read the result and take the next action (say the page I requested is fully loaded, or an email is fully opened).
For the web browser this is simple: I can just use onProgressChanged() or onPageFinished(). But I am looking for a more general way that works for all applications. I think Instrumentation.waitForIdleSync() or Instrumentation.waitForIdle() might be my best bet here.
However, as far as I can tell from the documentation for MonkeyRunner and Robotium, neither seems to integrate well with waitForIdle. In Robotium I can send some input and then get the output, but there doesn't seem to be a simple way to know when the output is ready, and perhaps invoke a callback at that point. MonkeyRunner is similar in this respect.
So I wonder: is there a simple way to know at what time my request has been fulfilled (as perceived by the user) without re-implementing Robotium functionality all on my own?
Thanks a lot.
This can be very tricky and entirely dependent on what exactly you asked monkeyrunner to do.
For example, if you have a monkeyrunner script and issue a command to launch the calculator app, you can have a Python subprocess monitor the adb logcat -b events output to determine whether the calculator app has launched or not. If you are asking it to press a button in the calculator, you can add a sleep of 1 or 2 seconds.
But there is no direct way to determine whether Android has processed your event or not, simply because every operation differs and takes its own time.
You can put asserts in Robotium and then use System.nanoTime() before and after, like a timer.
This might be an easy way to get timing information.
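A sketch of that idea in Kotlin, combined with Instrumentation.waitForIdleSync(), assuming a Robotium test (solo is the Robotium Solo instance, getInstrumentation() comes from the instrumentation test case base class, and the "Open" button label is hypothetical):
import android.util.Log

val start = System.nanoTime()
solo.clickOnButton("Open")  // drive the UI action being timed
// Block until the main thread's event queue is idle - a rough
// approximation of "the request has been fulfilled".
getInstrumentation().waitForIdleSync()
val elapsedMs = (System.nanoTime() - start) / 1_000_000
Log.d("Timing", "Next activity ready after $elapsedMs ms")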
