My scenario is remote screen control, using the AccessibilityService.dispatchGesture API. Ordinary clicks, swipes, and multi-touch gestures where all fingers press at the same time work fine.
But if the multi-touch presses are staggered in time, the onCancelled callback is received. I use GestureDescription.Builder.addStroke to add multiple strokes;
when I tried adding only one of them, each stroke could be dispatched normally on its own, but put together the gesture fails to execute.
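For reference, here is a minimal sketch of the failing case, assuming it runs inside the AccessibilityService subclass (the coordinates and timings are illustrative):

public void dispatchStaggeredTaps() {
    Path first = new Path();
    first.moveTo(300f, 500f);   // finger 1 goes down at t = 0
    Path second = new Path();
    second.moveTo(600f, 500f);  // finger 2 goes down 200 ms later

    GestureDescription gesture = new GestureDescription.Builder()
            // stroke 1: starts at t = 0 ms, lasts 400 ms
            .addStroke(new GestureDescription.StrokeDescription(first, 0, 400))
            // stroke 2: starts at t = 200 ms, lasts 400 ms (the staggered press)
            .addStroke(new GestureDescription.StrokeDescription(second, 200, 400))
            .build();

    dispatchGesture(gesture, new GestureResultCallback() {
        @Override public void onCompleted(GestureDescription g) {
            // both strokes ran to completion
        }
        @Override public void onCancelled(GestureDescription g) {
            // this is the callback the question reports receiving
        }
    }, null);
}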
Related
We have started looking into Building Accessibility Services for Android at https://developer.android.com/guide/topics/ui/accessibility/services.html. Based on this documentation, we can perform custom gestures on behalf of the user, as described in the section "Taking actions for users" at https://developer.android.com/guide/topics/ui/accessibility/services.html#act-for-users.
We have the following questions based on this documentation.
1) As we understand it, there are gestures that the user performs and that our code listens to; let's call these Listening Gestures. Then there are gestures that our code can perform for the user; let's call these Performing Gestures. The question is where Performing Gestures take effect: on top of the touch-and-explore layer, or underneath it? For additional context, touch-and-explore is a feature of the Android operating system that Accessibility Services can request.
2) Does a Performing Gesture trigger any AccessibilityEvent that is delivered to the Accessibility Service? If yes, there is a risk of recursion when a Listening Gesture and a Performing Gesture happen to be the same. Say the Listening Gesture is a swipe right that triggers some event handler, and the Performing Gesture is also a swipe right; performing it would then trigger the same event handler again.
3) How do we determine that a Performing Gesture executed successfully? This matters most if the Performing Gesture happens underneath the touch-and-explore layer.
Any help would be greatly appreciated.
1) No, gestures performed on behalf of users via the Accessibility Service capabilities DO NOT end up being caught as Listening Gestures. The AccessibilityService sends the gesture to the same API that processes screen touches, in exactly the same way the touchscreen does, bypassing the screen completely, so these events are invisible to the assistive technology. Though, if you hang on to a reference, you could of course call your own callbacks for these gestures yourself. So, any gesture you perform will not trigger a touch-to-explore gesture. In fact, you could trigger performing a gesture as a result of a touch-to-explore gesture.
2) To me, this is actually the same question as question 1. No, it does not, for all of the same reasons.
3) There are two answers to this. The first is that dispatchGesture returns a boolean. This boolean is true when the operating system runs into no technical issues dispatching your gesture; a potential issue would be, for example, attempting to interact off screen. If "true" is returned from this method, your gesture was generally acceptable and was performed. At this point, we can be as sure that the gesture was performed as if the user had actually performed it themselves; it goes through the exact same logic in the operating system. Check out the AOSP source for yourself if you don't believe me :)
3B) The only way to be truly sure things are working is to watch your gesture take effect on the screen.
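As a sketch of what 3) and 3B) look like in code, combining the boolean return value with the result callback (the tap coordinates and log tag are illustrative):

Path tap = new Path();
tap.moveTo(540f, 960f);
GestureDescription g = new GestureDescription.Builder()
        .addStroke(new GestureDescription.StrokeDescription(tap, 0, 100))
        .build();

boolean accepted = dispatchGesture(g, new GestureResultCallback() {
    @Override public void onCompleted(GestureDescription d) {
        Log.d("GestureDemo", "gesture ran to completion");
    }
    @Override public void onCancelled(GestureDescription d) {
        Log.d("GestureDemo", "gesture was interrupted or cancelled");
    }
}, null);

if (!accepted) {
    // The system refused to even dispatch it, e.g. the service is not
    // connected or lacks android:canPerformGestures in its configuration.
    Log.w("GestureDemo", "dispatchGesture returned false");
}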
I am currently researching better ways to handle events in my app.
Currently I have multiple listeners that subscribe to and unsubscribe from the objects of interest on different events.
E.g. on a button click a listener is created that listens on a client object for whether an operation succeeded (in which case it automatically unregisters itself) or a non-fatal error occurred (in which case it automatically retries the operation).
The client object in turn starts an Android service that can emit different status events, which should result in the user interface updating itself, or alternatively in notifications being shown if the app is currently not visible.
In my app this has produced a real tangle of listeners that is hard to follow and does not work in all situations.
To resolve this issue I would like to implement an event bus with RxJava that hopefully reduces the complexity of my application.
The problem:
Is it possible with RxJava to have a fallback observer for an observable, to react to events if no other observer is available?
E.g. all activities/fragments register themselves to get informed about certain events, so they can update the UI if necessary.
When an activity/fragment is created/destroyed, it automatically registers/unregisters itself with the event bus.
If the app is in the background, there should be no observers registered anymore. Only in that case, I would like a fallback observer to handle those events.
I would like to achieve the following:
If in foreground: On event, update the UI.
If in background: On event, show toast / notification.
In my opinion your app shouldn't show anything when it's in the background: the user is not interested in it anymore, or is doing something else, so don't spam them with toasts (they probably would not even know which application raised the toast).
However,
You can solve this problem with a Subject. Let's say you have a MyServiceErrorHandler class with a PublishSubject&lt;Throwable&gt; inside; every time some part of the UI is visible and capable of showing errors, it should be subscribed to this subject. Then you can expose a method like onError(Throwable t) which calls subject.hasObservers(). If there are observers, it pushes the data to the subject (so it will emit an event to the currently subscribed UI); if not, you can do some fallback thing (displaying a toast/notification, logging something, etc.). This solution is, however, prone to errors on rotation, as you may receive your result while the screen is rotating (and thus nothing is subscribed yet).
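A minimal sketch of that class, assuming RxJava 2 (showNotification() is a hypothetical fallback):

import io.reactivex.Observable;
import io.reactivex.subjects.PublishSubject;

public class MyServiceErrorHandler {
    private final PublishSubject<Throwable> errors = PublishSubject.create();

    // Visible UI subscribes to this while it can show errors and
    // disposes the subscription in onStop()/onDestroy().
    public Observable<Throwable> errorStream() {
        return errors;
    }

    public void onError(Throwable t) {
        if (errors.hasObservers()) {
            errors.onNext(t);      // some UI is subscribed: let it render the error
        } else {
            showNotification(t);   // fallback: nothing is listening
        }
    }

    private void showNotification(Throwable t) {
        // e.g. post a system notification or a toast here
    }
}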
You can extend this approach a little and use a BehaviorSubject, which will replay its last event to every subscriber (pretty handy in case of screen rotation). You post events to this subject even when there are no subscribers, and when the user opens the app again (and one of your UI elements subscribes), it will receive the last error event, so you can show it properly. But with that solution you need a little more logic to clear the subject of obsolete/already-consumed errors, to prevent them from showing up again on every rotation.
I want to know if it is possible to generate automatic touches at a regular time interval, say every 5 seconds, in another application.
For example, I want to develop an application which creates a touch response, just as if we touched the screen, at a particular coordinate at a regular fixed interval.
Please help.
It's possible to create and dispatch touch events in your own application. It's pretty easy. Look at the View.dispatchTouchEvent method and its parameters. You have to override that method in the root view group so you can pass your events to all views in the activity.
It's not possible to inject events into other applications, though (for security reasons).
Edit: dispatchTouchEvent is public, so there's no need to override it.
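A sketch of the idea, assuming it runs inside an Activity (the coordinates are illustrative):

final View root = getWindow().getDecorView();
final Handler handler = new Handler(Looper.getMainLooper());
handler.post(new Runnable() {
    @Override public void run() {
        long now = SystemClock.uptimeMillis();
        // synthesize a tap: a down event immediately followed by an up event
        MotionEvent down = MotionEvent.obtain(now, now,
                MotionEvent.ACTION_DOWN, 200f, 400f, 0);
        MotionEvent up = MotionEvent.obtain(now, now + 50,
                MotionEvent.ACTION_UP, 200f, 400f, 0);
        root.dispatchTouchEvent(down);
        root.dispatchTouchEvent(up);
        down.recycle();
        up.recycle();
        handler.postDelayed(this, 5000);   // repeat every 5 seconds
    }
});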
I'm developing an app which interacts with the phone's WiFi, Bluetooth, and mobile network.
The app is mainly a Service, and the GUI doesn't play a central role.
The core of the app, and the principal method of the service class, is a single method which receives all the intents that the app needs:
public void handleIntent(Intent intent){
It then extracts the intent's action and calls a specific handler method for that action; for instance, when receiving SCREEN_ON, it would call
private void handleScreenOn(){
The problem is that some tasks on the phone take some time, so other events may occur in the middle of processing a task that should change that processing.
For instance, turning the WiFi on takes a couple of seconds, sending several WIFI_STATE_CHANGED_ACTION intents before it actually completes. In the middle of the WiFi being enabled, the user can turn the screen off, so that a SCREEN_OFF intent is received.
Now say that my app's goal is to turn WiFi off when the screen is turned off. Here is a situation where a problem occurs:
1. Initial situation: screen is on, WiFi is off.
2. The user toggles the WiFi setting to enable it; WiFi starts getting enabled.
3. The user almost immediately turns the screen off.
4. The app receives the SCREEN_OFF intent, but since WiFi is not enabled yet, it believes there is no need to disable it.
5. WiFi enabling finishes, and WiFi stays enabled despite the screen being off.
How can I solve this?
Currently, in step 5, when WiFi finally gets enabled, I test whether the screen is off and, if so, turn WiFi off again.
This solution requires many if / else branches handling all the possible cases.
I'm sure there must be some cleaner way of doing it, handling a stack of intents, or something like that...
Does anyone have a good design pattern or good advice on how to do this cleanly?
Consider sending an "action completed" event whenever you (asynchronously) complete the reaction to some event. You can then add some logic in handleIntent() to reach a consistent state. The logic sits at a central position in your components, and code duplication is avoided.
Alternatively, you can try to serialize your events: when an event occurs that may invalidate the outcome of some not-yet-finished action, postpone its handling until those actions are completed (you need to manage a list of incomplete actions; completion can be detected as outlined above). To decide which events may depend on which actions, a simple static lookup table should suffice.
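A sketch of the serialization idea, plugged into the handleIntent() structure from the question (pendingActions, dependsOnPending() and handleNow() are illustrative names):

private final Set<String> pendingActions = new HashSet<>();  // e.g. "WIFI_ENABLE"
private final Deque<Intent> deferredIntents = new ArrayDeque<>();

public void handleIntent(Intent intent) {
    // consult the static lookup table: does this intent depend on an
    // action that has not completed yet?
    if (!pendingActions.isEmpty() && dependsOnPending(intent)) {
        deferredIntents.add(intent);   // postpone until the action completes
        return;
    }
    handleNow(intent);
}

// Called by whichever handler observes an action's completion, e.g. a
// WIFI_STATE_CHANGED_ACTION broadcast reporting WIFI_STATE_ENABLED.
private void onActionCompleted(String action) {
    pendingActions.remove(action);
    while (pendingActions.isEmpty() && !deferredIntents.isEmpty()) {
        handleNow(deferredIntents.poll());
    }
}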
Another approach would be to maintain the state of the entities being managed (either all of them, or just the asynchronous ones). For the asynchronous ones, the state should not be binary (ON|OFF) but should have at least four states: ON, TURNING_ON, OFF, TURNING_OFF. Then, in step 4 of your example, when you receive SCREEN_OFF, check for the WiFi states ON or TURNING_ON to determine whether you have to turn it off.
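A sketch of the four-state approach in the same handleIntent() structure (WifiState, screenIsOff and disableWifi() are illustrative names; disableWifi() would wrap WifiManager.setWifiEnabled(false)):

enum WifiState { OFF, TURNING_ON, ON, TURNING_OFF }

private WifiState wifiState = WifiState.OFF;
private boolean screenIsOff = false;

public void handleIntent(Intent intent) {
    String action = intent.getAction();
    if (WifiManager.WIFI_STATE_CHANGED_ACTION.equals(action)) {
        switch (intent.getIntExtra(WifiManager.EXTRA_WIFI_STATE,
                                   WifiManager.WIFI_STATE_UNKNOWN)) {
            case WifiManager.WIFI_STATE_ENABLING:  wifiState = WifiState.TURNING_ON;  break;
            case WifiManager.WIFI_STATE_ENABLED:   wifiState = WifiState.ON;          break;
            case WifiManager.WIFI_STATE_DISABLING: wifiState = WifiState.TURNING_OFF; break;
            case WifiManager.WIFI_STATE_DISABLED:  wifiState = WifiState.OFF;         break;
        }
        // step 5 of the scenario: enabling finished while the screen is off
        if (wifiState == WifiState.ON && screenIsOff) {
            disableWifi();
        }
    } else if (Intent.ACTION_SCREEN_OFF.equals(action)) {
        screenIsOff = true;
        // step 4: WiFi may be ON already, or still TURNING_ON; in the
        // latter case the branch above finishes the job once it is ON
        if (wifiState == WifiState.ON) {
            disableWifi();
        }
    } else if (Intent.ACTION_SCREEN_ON.equals(action)) {
        screenIsOff = false;
    }
}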
When registering for mousedown and touchstart events on a WebView, both are triggered at the very same time, leading to the odd situation of having two "mousedown-like" events to deal with. There's a known workaround which consists of calling event.preventDefault()... but I cannot call it, because I do need the default behavior anyway. And of course I cannot remove the regular mouse event listeners either, without losing compatibility with a regular computer.
There's another known workaround (found here: http://www.quirksmode.org/blog/archives/2010/02/do_we_need_touc.html) which consists of detecting whether the first event we receive is a touch event; if it is, we simply remove the listeners for the mouse events. But I find it ugly, and moreover I can't believe that Android's native browser keeps firing mouse events while touch event listeners are registered, with no way to prevent that from happening in the manifest or anywhere else.
Btw: this issue doesn't occur on Safari mobile.
Thank you very much!