I would like to make TalkBack say a simple mathematical expression like: "2 - (3/5)"
However, when reading it, TalkBack skips the parentheses, which changes the meaning of the expression.
Thanks to @QuentinC I was able to read more questions on the subject. Would this fall under the "it's best to leave it at that and not edit" cases, or should I instead be looking for ways to force TalkBack to speak every character?
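For reference, the kind of "force it to speak" workaround I have in mind would look roughly like this (a sketch only; expressionView is a made-up TextView name):

```java
// Sketch: spell out the symbols so the screen reader announces every character.
String expression = "2 - (3/5)";
String spoken = expression
        .replace("(", " open parenthesis ")
        .replace(")", " close parenthesis ")
        .replace("-", " minus ")
        .replace("/", " divided by ");
expressionView.setText(expression);
// TalkBack reads the contentDescription instead of the visible text.
expressionView.setContentDescription(spoken);
```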
Related
Is there any sort of TalkBack screen reader markup to inform the TTS how to pronounce a word based on its type? For example, 'read' pronounced in past or present tense? Or 'bass' pronounced as a fish or as a musical instrument?
Maybe, depending on the type of widget you're using. android:contentDescription might be what you're looking for. But I would not recommend trying to force the screen reader to announce something a certain way. If you want "read" announced as past tense and you have a contentDescription of "red", while that might sound correct when using a screen reader, it will not read correctly if using a Braille device. The user will read "r e d" in Braille letters instead of "r e a d" and that will be confusing.
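For what it's worth, the override itself is just a one-liner, which is also why the Braille mismatch happens (a sketch; R.id.word is a made-up view id):

```java
// Sketch: forcing a phonetic spelling via the content description.
TextView wordView = (TextView) findViewById(R.id.word);
wordView.setText("read");
// TalkBack would speak "red", but a Braille display shows the letters
// r-e-d, which no longer matches the visible word "read".
wordView.setContentDescription("red");
```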
I mean the front-end part of the validation. Should there be a submit button that checks all the fields, or a text-change listener that gives live feedback on whether the input is correct, and so on?
The simple answer: that depends.
Put yourself in the shoes of a user of your application. What gives you a better experience:
You enter data in 10 different fields, and in the end, you are told: there are 5 errors, now go and fix this, and that, and that
You enter data into a field, and you get immediate feedback when something is wrong
Meaning: both approaches work, and both are used by different apps. Most people will probably find the second option more convenient, though.
Long story short: there is no common best practice. Each app is different. If you really want to compete in the market, check out what your competition is doing, and then design a better user experience than that competition!
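If you do go with live, per-field feedback, a minimal sketch could look like this (assuming the usual android.text imports; the field name and validation rule are made up):

```java
// Sketch: validate a single field as the user types and show an inline error.
emailField.addTextChangedListener(new TextWatcher() {
    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) { }

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) { }

    @Override
    public void afterTextChanged(Editable s) {
        if (android.util.Patterns.EMAIL_ADDRESS.matcher(s).matches()) {
            emailField.setError(null);
        } else {
            // setError() shows the standard inline error popup next to the field.
            emailField.setError("Please enter a valid email address");
        }
    }
});
```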
I've just started using the Android text to speech engine, in an app I'm writing. It seems great.
However there are some things it says "wrongly", in my context at least. For example when it encounters "&" it says "ampersand" and I want it to say "and". Another example (and probably not what you think): when it encounters "FA" it says something that sounds like "far", and I want it to say it as letters, sounding like "eff ay".
Can I do something to influence it in the directions I want?
Whilst this is currently an Android situation for me, I wouldn't be surprised if it becomes an "Apple" one as well.
In regard to the ampersand, you could simply replace it with the word 'and' before you pass the text on for processing.
For FA, as Markers said, you could replace it with 'F A' (mind the gap).
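Something along these lines, just before handing the text to the engine (a sketch; tts stands for your TextToSpeech instance):

```java
// Sketch: pre-process the text so the engine says what you want.
String spoken = text
        .replace("&", " and ")
        .replace("FA", "F A"); // the space makes the engine spell out the letters
tts.speak(spoken, TextToSpeech.QUEUE_FLUSH, null);
```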
An answer I've found to the problem of making the text-to-speech engine pronounce groups of letters as individual letters rather than trying to make up a word is to separate them into individual letters by putting spaces between them.
So for example "EX" should (for my purpose) be pronounced "Ee ex", but the engine says "ex" (understandably in the general context), so I simply change the text to "E X" and it comes out the way I want.
Hardly rocket science I know, but it works. I was hoping to be able to add some markup, as with SSML (Speech Synthesis Markup Language), but Google hasn't implemented anything like that yet.
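If you have many such abbreviations, the same idea can be generalised with a regular expression that inserts a space between consecutive capital letters (a sketch; tune the pattern to your own data):

```java
// Sketch: "EX" becomes "E X", "FA" becomes "F A", "ABC" becomes "A B C".
// The lookarounds match the zero-width position between two capital letters.
String spaced = text.replaceAll("(?<=[A-Z])(?=[A-Z])", " ");
```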
I have a lot of dynamically generated TextViews (fed to a list's ArrayAdapter), and each of their texts contains a summary of many small pieces of information. In an effort to improve the UI, I used some styling when displaying the information, like so:
(4 Votes) ☬ [tag1] [java] [regex] ☬ 2 min ago ☬ author ☬ 245 points
This line is constructed using a StringBuilder. It looks nice (let's say), but it is not friendly to accessibility tools such as Google TalkBack, which reads it like so: "four votes unknown character bracket tag one close bracket..."
So to fix that, I'm generating another string and setting it as the content description, like so:
Asked by "author", who has "245" reputation points, received "4 votes" since "two minutes ago", these are the tags: "tag1", "java", "regex".
This line would also be generated using a StringBuilder, which essentially doubles my run-time.
I'm asking:
Is this really doubling my run-time? Is it worth it? Obviously only a very small percentage of people need accessibility tools, but it looks like I'm sacrificing everyone else's CPU cycles.
If it indeed has a negative impact on performance, how can I improve it? Is there a way to detect whether TalkBack is in use? Is Android smart enough to detect this itself and ignore the setContentDescription() call?
I don't think it's worth worrying about unless your app has performance issues. But if you do, you can always turn it on or off based on whether any accessibility settings are enabled. You can check for enabled accessibility services in Secure Settings.
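As a sketch of that kind of gate (the view, item, and helper names are made up), the AccessibilityManager is one place to look:

```java
// Sketch: only build the spoken description when accessibility services are active.
AccessibilityManager am =
        (AccessibilityManager) context.getSystemService(Context.ACCESSIBILITY_SERVICE);
if (am != null && am.isEnabled() && am.isTouchExplorationEnabled()) {
    // buildSpokenDescription() is a hypothetical helper that builds the long string.
    view.setContentDescription(buildSpokenDescription(item));
}
```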
Are there any characters that, if included in an SMS message (body), might cause problems with being sent?
Edit: Actually, I only really care about characters that can be input by the user with a keyboard (but in any language). I don't know if that makes my question easier to answer or not...
There's no straightforward answer to this, as it depends on a variety of factors, but this standard lays out the basics, and I expect you can use it to deduce which characters to focus on:
http://en.wikipedia.org/wiki/GSM_03.38
I would advise doing extensive testing of the characters you use, on different handsets, and in your target locales.
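On Android specifically, one rough way to spot-check a message body before sending (a sketch; I believe SmsMessage.calculateLength() reports the encoding in its last element, but verify against the docs) is:

```java
// Sketch: see how many parts the body needs and which encoding it falls back to.
int[] result = SmsMessage.calculateLength(body, false);
int parts = result[0];     // number of SMS parts required
int encoding = result[3];  // e.g. SmsMessage.ENCODING_7BIT or ENCODING_16BIT
if (encoding != SmsMessage.ENCODING_7BIT) {
    // The body contains characters outside GSM 03.38, so it will be sent
    // as UCS-2 and each part holds fewer characters.
}
```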