I'm designing my first Android IME and am a bit overwhelmed. My goals are accessibility-related, and as such I wonder if perhaps I'm trying things with the IME framework that just aren't possible. I've taken the default softkeyboard example and have it working along with some of my modifications, so I've definitely understood at least some of this, but there isn't a whole lot of documentation on some of the things I'm attempting. Here's what I mean:
Currently, Android phones with touchscreen-only keyboards are inaccessible. I have an accessible touchscreen keyboard working, using methods similar to those used by iOS/VoiceOver, so that part of the project is done and was fairly straightforward to accomplish in the IME framework.
Currently, Android's accessibility API doesn't provide accessible feedback for navigating text fields. That is, with an Android screen reader loaded, if you type the word "this", you'll hear individual characters spoken as you type them, but if you then try left-arrowing over the "s", that isn't spoken. So I want to trap movements and provide spoken feedback for the elements navigated past. Here's where I start encountering problems.
I currently have speech feedback for left and right arrowing, using getCurrentInputConnection().getTextBeforeCursor(1, 0) for arrowing left, and a similar call for arrowing right. This gets the character currently under the cursor after the movement is processed, and all is good.
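Roughly, the idea is something like this (a sketch, not my exact code; I'm assuming getTextAfterCursor as the right-arrow counterpart and a TextToSpeech instance named tts that has already been set up):

    private void speakCharacterAfterArrow(boolean movedLeft) {
        InputConnection ic = getCurrentInputConnection();
        if (ic == null) return;

        // Read the character before the cursor when arrowing left,
        // and the character after it when arrowing right.
        CharSequence ch = movedLeft
                ? ic.getTextBeforeCursor(1, 0)
                : ic.getTextAfterCursor(1, 0);
        if (ch != null && ch.length() > 0) {
            tts.speak(ch.toString(), TextToSpeech.QUEUE_FLUSH, null);
        }
    }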
My challenge, though, comes when arrowing up and down between lines. Ideally, I'd like to grab the current line of text and speak that, but I don't see any way to do it. The only thing I can think of is some combination of getTextBeforeCursor/getTextAfterCursor(Integer.MAX_VALUE, 0), combining those values and determining the current line by looking for the previous and next \n. Is there another way, such as getting the entire text content of the field as a single block and using the cursor position to work out which piece of it represents the current line? I'm seeing something about extracted text in the various input method classes, and it seems like there may be a way to monitor that, but I don't know if that is at all useful to me (i.e. would it return the entire field content?)
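For illustration, the newline approach I have in mind would look something like this (just a sketch of the idea; the large-but-bounded request sizes are my own assumption, since the editor isn't required to return everything you ask for):

    private String getCurrentLine() {
        InputConnection ic = getCurrentInputConnection();
        if (ic == null) return "";

        // Ask for a generous amount of text on each side of the cursor;
        // the editor may return less than requested.
        CharSequence before = ic.getTextBeforeCursor(4096, 0);
        CharSequence after = ic.getTextAfterCursor(4096, 0);
        String b = (before == null) ? "" : before.toString();
        String a = (after == null) ? "" : after.toString();

        int start = b.lastIndexOf('\n') + 1;   // 0 if no newline before the cursor
        int end = a.indexOf('\n');             // -1 if no newline after the cursor
        if (end < 0) end = a.length();

        return b.substring(start) + a.substring(0, end);
    }

The extracted-text route would presumably be ic.getExtractedText(new ExtractedTextRequest(), 0), which returns the field text (or as much of it as the editor chooses to provide) along with selectionStart/selectionEnd, so the same newline search could be done on that instead.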
My next goal is providing standard text navigation capabilities. Android accessibility doesn't currently include touchscreen exploration, so it is impossible to otherwise scroll a large block of text. I don't necessarily expect folks to write novels on their phones, but I'd like to provide quick gestures or commands to move up/down paragraphs, and to the top/bottom of longer fields. Does the IMF provide methods for cursor movement, or is that outside of its authority?
Honestly, I didn't get the first part :(
For your second question, you will need to handle it by hand.
For instance, to add a key with a down-arrow drawable and make it work, you will need to:
In the onKey method, check for the key code.
If it's 20 (KeyEvent.KEYCODE_DPAD_DOWN), call sendDownUpKeyEvents with that key code.
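Something like this, as a rough sketch (it assumes your keyboard XML assigns the raw code 20 to the down key and that your service implements KeyboardView.OnKeyboardActionListener, as in the SoftKeyboard sample):

    @Override
    public void onKey(int primaryCode, int[] keyCodes) {
        switch (primaryCode) {
            case KeyEvent.KEYCODE_DPAD_DOWN:   // 20, the code assigned to the down key
                sendDownUpKeyEvents(KeyEvent.KEYCODE_DPAD_DOWN);
                break;
            case KeyEvent.KEYCODE_DPAD_UP:     // 19, same idea for the up key
                sendDownUpKeyEvents(KeyEvent.KEYCODE_DPAD_UP);
                break;
            default:
                // handle ordinary character keys as before
                break;
        }
    }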
I am using the input event on a textarea to apply some logic that mutates the textarea's value. This works as expected in a local dev environment in a browser.
However, my target platform is android. On this platform, I'm noticing that instead of event.inputType being insertText, sometimes it is insertCompositionText. Android is apparently trying to be efficient by not actually mutating the textarea's value until you press space. How can I disable this behavior?
I found someone in a similar situation here who tried to use blur and focus events quickly. I can't use this because (1) it's hacky, hope there is a better solution (2) it resets the cursor position and degrades the user experience.
For reference, I'm using Ionic Vue, but this is just a plain HTML <textarea>:
<textarea v-model="input" @input="onInput" />
onInput(event) {
  console.log("onInput", event);
  // more logic
}
You can't. That behavior comes from the keyboard, which is a separate app on Android. Not implementing the composing-text functionality correctly will likely break autocomplete, tap to correct, and the delete key, all of which behave very differently because of composing text. It might even break normal typing: the idea of composing text is that it's temporary until the keyboard makes it permanent, so if you mutate the value out from under it, then when it autocorrects on space it will assume the old text is gone and recommit the entire word, causing duplication.
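For context, this is roughly what the keyboard side is doing with composing text (a simplified sketch of the InputConnection calls, not any particular keyboard's code), which is why the page only sees the final value when the word is committed:

    // The keyboard holds the word as composing (provisional) text and only
    // commits it later, e.g. when space is pressed or a suggestion is picked.
    InputConnection ic = getCurrentInputConnection();

    ic.setComposingText("hell", 1);    // user has typed h-e-l-l, still provisional
    ic.setComposingText("hello", 1);   // the next letter replaces the composing region

    // On space, the composing region is finalised and becomes ordinary text:
    ic.commitText("hello", 1);
    ic.commitText(" ", 1);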
I was able to handle this issue by adding a hidden password input and routing all the focus, events, and value of my text input to the password input.
Browsers will not show predictive text for password inputs, so there is no insertCompositionText anymore.
I'm working on an application that should allow the user to tag a user onto an item. In Google Drive, for example, when you're entering users to send an item to, the user's name becomes its own "view" in the text field so that it cannot be modified and broken.
My current implementation already contains an autocomplete registry of current users and formats the usernames with HTML tags so that they're interpreted correctly by the backend, but it's still possible to tamper with a name by moving the cursor into the middle of it and modifying it.
I would like at least an example of how to accomplish what I am trying to do. The problem itself is hard to search for, probably because I just don't know what to call it.
Example of what I'm looking for: http://imgur.com/EBDoWED
Eventually, I came across this: https://plus.google.com/+RichHyndman/posts/TSxaARVsRjF
Seeing that they're called "chips" makes this a lot more researchable. I'm still a bit wary about what I'm going to do, but I've got enough resources to start.
I'm going to attempt to make a LinearLayout that dynamically contains "chips" and EditText views and handles most of the functions of an EditText.
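For anyone following along, a span-based variation of the "chips" idea would look roughly like this (just a sketch; buildChipDrawable is a hypothetical helper that renders the name into a Drawable, and edits inside the span would still need to be guarded, e.g. with a TextWatcher):

    // Cover the characters of the user's name with an ImageSpan so it renders
    // as a single unit instead of editable text.
    void insertChip(EditText edit, int start, int end, String name) {
        Drawable chip = buildChipDrawable(name);   // hypothetical helper
        chip.setBounds(0, 0, chip.getIntrinsicWidth(), chip.getIntrinsicHeight());

        edit.getText().setSpan(new ImageSpan(chip), start, end,
                Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
    }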
My Android app contains a custom slider control based on the SeekBar, and I want to attach a custom text phrase to my control to explain its use for Accessibility.
I have done this successfully using View.setContentDescription(text), and TalkBack correctly speaks the phrase when I request focus on my slider control from Activity.onCreate.
So far, so good. However, when I touch the control, which I believe sets the AccessibilityFocus on my Android API 16 test device, extra words are being added to the spoken phrase, i.e. '...seek control. 0 per cent'. I want to remove these additional words.
I have tried to eliminate them using event.getText().clear() in View.onInitializeAccessibilityEvent(event) without success. Echoing the event to LogCat reports the correct phrase in event.contentDescription and no entries in event.text, but the extra words appear both in the audio output from the device hardware and in the on-screen debug text displayed by Menu->Settings->Accessibility->TalkBack->Settings->Developer Settings->Display Speech Output.
Please can anyone suggest where the extra words are being added, and how to eliminate them?
Any constructive suggestions would be welcomed. Thanks.
Update
I can see that whatever Explore By Touch (initial single-tap) event is generated for my custom control does not pass through either its onInitializeAccessibilityEvent or dispatchPopulateAccessibilityEvent methods, because I am deliberately calling event.setContentDescription(null) in them. Despite this, an AccessibilityEvent is being generated containing my custom control's ContentDescription (set in code in Activity.onCreate), plus the extra words I'm trying to eliminate.
I've also set an AccessibilityDelegate on my custom control's parent ViewGroup to give visibility of its onRequestSendAccessibilityEvent calls. This confirms that no event containing my ContentDescription is passing through.
This is very puzzling, and happens on both the emulator and real hardware with API 16. Any ideas?
You also need to override onInitializeAccessibilityNodeInfo (http://developer.android.com/reference/android/view/View.html#onInitializeAccessibilityNodeInfo(android.view.accessibility.AccessibilityNodeInfo)) and set the contentDescription there.
If you want to remove the 0%, I would try to change the class in AccessibilityNodeInfo/AccessibilityEvent:
http://developer.android.com/reference/android/view/accessibility/AccessibilityNodeInfo.html#setClassName(java.lang.CharSequence)
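As a rough sketch in your SeekBar subclass (the description string is a placeholder, and whether reporting a different class actually suppresses the "0 percent" suffix depends on the TalkBack version):

    @Override
    public void onInitializeAccessibilityNodeInfo(AccessibilityNodeInfo info) {
        super.onInitializeAccessibilityNodeInfo(info);
        info.setContentDescription("Your custom slider description");  // placeholder
        info.setClassName(View.class.getName());  // report something other than SeekBar
    }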
I believe that this is a bug in TalkBack, and have raised Google Eyes-Free issue #375, including example code.
Update: Google have now archived this. Link moved to: http://code.google.com/archive/p/eyes-free/issues/375
I am creating a custom keyboard for Android devices, and I have managed to implement everything except moving up and down lines with buttons rather than by dragging with your finger. I am implementing this for the small screens of older devices.
I have managed to implement moving the cursor one character to the left and right and to the start and end of the text, however I cannot figure out how to implement moving up and down multiple lines like you would when navigating a word document on a normal computer.
I am not sure how you programmed all of that, and it sounds like some really nice work, so I'm not sure whether you've already had this idea or whether it's even possible, but:
Couldn't you make the cursor move to the right x times when trying to go down a line, where x is the number of characters in the line, or rather the length of the string on that line?
Depending on the way you programmed it, if there is a string for each line, you could see where the cursor is in the line you are coming from (e.g. at the 3rd character of the string) and then just put it at the same position in the next line.
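If you are inside an InputMethodService, one way to implement that "same column on the next line" idea is to compute the target offset from the extracted text and call setSelection, rather than sending repeated right-arrow presses. This is only a sketch and assumes the editor supports getExtractedText and setSelection:

    private void moveCursorDownOneLine() {
        InputConnection ic = getCurrentInputConnection();
        if (ic == null) return;

        ExtractedText et = ic.getExtractedText(new ExtractedTextRequest(), 0);
        if (et == null || et.text == null) return;

        String text = et.text.toString();
        int cursor = et.selectionStart;            // relative to the extracted text

        int lineStart = text.lastIndexOf('\n', cursor - 1) + 1;
        int column = cursor - lineStart;           // position within the current line

        int lineEnd = text.indexOf('\n', cursor);
        if (lineEnd < 0) return;                   // already on the last line

        int nextLineStart = lineEnd + 1;
        int nextLineEnd = text.indexOf('\n', nextLineStart);
        if (nextLineEnd < 0) nextLineEnd = text.length();

        // Same column on the next line, clamped if the next line is shorter.
        int target = Math.min(nextLineStart + column, nextLineEnd);
        ic.setSelection(et.startOffset + target, et.startOffset + target);
    }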
I have an OpenGL application that needs to show the soft keyboard for devices without physical ones for user input such as username or numbers in a few cases. In the case of numeric input, is there any way to show the numeric keypad instead of the alphabetic keyboard? I'm not using any text edit fields or anything, just the InputMethodManager:
((InputMethodManager)getSystemService(Context.INPUT_METHOD_SERVICE)).showSoftInput(glView, InputMethodManager.SHOW_FORCED);
The only method I've found that looks remotely helpful is InputMethodManager.setInputMethod but that takes an IBinder token and a String id, neither of which is explained very well in the documentation. I get the impression that it's not the right way to go, though.
If I were using an edit field, it would be simple and obvious, and I've found dozens of answers for that case, but that's not what I'm doing; because it's an OpenGL game, I have to display the keyboard manually as above.
Probably not the answer you are looking for since it is more of a hack than a real solution, but a few things come to mind that might work (that is if you can't get a real solution).
An EditText with View.INVISIBLE set, although you might not be able to set focus on it.
Put an EditText behind your GLSurfaceView and focus it. So it’s technically visible (from a code standpoint) but invisible to the user.
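A sketch of that second idea (glView is the view from your snippet; everything else is illustrative): because the hidden EditText is a real text field, setInputType controls which keyboard layout appears, so you could request the numeric keypad:

    EditText hidden = new EditText(this);
    hidden.setInputType(InputType.TYPE_CLASS_NUMBER);  // numeric keypad when focused
    hidden.setAlpha(0f);                               // keep it out of sight

    FrameLayout root = new FrameLayout(this);
    root.addView(hidden, new FrameLayout.LayoutParams(1, 1));  // tiny, drawn first
    root.addView(glView);                                      // GLSurfaceView on top
    setContentView(root);

    // When numeric input is needed:
    hidden.requestFocus();
    ((InputMethodManager) getSystemService(Context.INPUT_METHOD_SERVICE))
            .showSoftInput(hidden, InputMethodManager.SHOW_IMPLICIT);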