My app needs to display numeric ranges in a TextView, such as "34-93". TalkBack reads this as "thirty-four minus ninety-three". I want it to read either "thirty-four dash ninety-three" or "thirty-four to ninety-three". I've tried inserting spaces before and after the dash, as well as using both a hyphen and an en-dash, to no avail. In general, though, I have little control over how this string is formatted.
It seems like it would read "dash" if there were some attribute that made it read punctuation literally, like accessibilitySpeechPunctuation on iOS.
I have a simple memory game as a project. For the memory tiles I wanted to use emojis. I tried to use it this way:
emojiCard.setText(new String(Character.toChars(Integer.parseInt(1F60D, 16))));
Now I just have to save 1F60D to a variable and can show the emoji.
That works for simple emojis, but I cannot use the "new" ones because then I have to use surrogate pairs, and I don't know how to do that.
Is there a better way, like saving the Unicode value?
Sorry, I'm really new to Android development and have already tried a lot of things.
Thanks.
Integer.parseInt() takes a String as input, so presumably you meant to write Integer.parseInt("1F60D", 16) instead. That would be wasted overhead, though, when you can simply pass a numeric 0x1F60D literal to Character.toChars() instead.
Java strings use UTF-16 encoding. When encoded to UTF-16, codepoint U+1F60D does require a surrogate pair, but Character.toChars() produces that pair for you, so surrogates are not your issue.
Assuming you are referring to how newer emojis support modifiers (to change their gender, skin tone, etc.), that has nothing to do with surrogates. You simply append the modifier codepoint(s) you want after the base emoji codepoint. For example:
emojiCard.setText(new String(Character.toChars(0x1F466)) + new String(Character.toChars(0x1F3FE)));
(👦 + 🏾 = 👦🏾)
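If you want a small helper for this, something along these lines should work (a sketch; the emoji() helper name and the code points passed in are just illustrative):

private static String emoji(int... codePoints) {
    // appendCodePoint() emits the surrogate pair automatically for
    // code points above U+FFFF, so no manual surrogate handling is needed.
    StringBuilder sb = new StringBuilder();
    for (int cp : codePoints) {
        sb.appendCodePoint(cp);
    }
    return sb.toString();
}

// Usage:
emojiCard.setText(emoji(0x1F60D));           // 😍
emojiCard.setText(emoji(0x1F466, 0x1F3FE));  // 👦 + medium-dark skin tone modifier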
Android has two different ways to escape / encode HTML characters / entities in Strings:
Html.escapeHtml(String), added in API 16 (Android 4.1). The docs say:
Returns an HTML escaped representation of the given plain text.
TextUtils.htmlEncode(String). For this one, the docs say:
Html-encode the string.
Reading the docs, they both seem to do pretty much the same thing, but, when testing them, I get some pretty mysterious (to me) output.
E.g. with the input: <p>This is a quote ". This is a euro symbol: €. <b>This is some bold text</b></p>
Html.escapeHtml gives:
&lt;p&gt;This is a quote ". This is a euro symbol: &#8364;. &lt;b&gt;This is some bold text&lt;/b&gt;&lt;/p&gt;
Whereas TextUtils.htmlEncode gives:
&lt;p&gt;This is a quote &quot;. This is a euro symbol: €. &lt;b&gt;This is some bold text&lt;/b&gt;&lt;/p&gt;
So it seems that the second escapes/encodes the quote ("), but the first doesn't, while the first encodes the euro symbol, but the second doesn't. I'm confused.
So what's the difference between these two methods? Which characters does each escape/encode? What's the difference between encoding and escaping here? When should I use one or the other (or should I, gasp, use them both together)?
You can compare their sources:
This is what Html.escapeHtml uses underneath:
https://github.com/android/platform_frameworks_base/blob/d59921149bb5948ffbcb9a9e832e9ac1538e05a0/core/java/android/text/Html.java#L387
This is TextUtils.htmlEncode:
https://github.com/android/platform_frameworks_base/blob/d59921149bb5948ffbcb9a9e832e9ac1538e05a0/core/java/android/text/TextUtils.java#L1361
As you can see, the latter only quotes certain characters that are reserved for markup in HTML, while the former also encodes non-ASCII characters, so they can be represented in ASCII.
Thus, if your input only contains Latin characters (which is usually unlikely nowadays), or you have set up Unicode in your HTML page properly, you can go with TextUtils.htmlEncode. Whereas if you need to ensure that your text works even if transmitted via 7-bit channels, use Html.escapeHtml.
As for the different treatment of the quote character (") -- it only needs to be escaped inside attribute values (see the spec), so if you are not putting your text there, you should be fine.
Thus, my personal choice would be Html.escapeHtml, as it seems to be more versatile.
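A quick side-by-side check makes the difference visible (a sketch; the input is the sample string from the question):

String input = "<p>This is a quote \". This is a euro symbol: €. "
        + "<b>This is some bold text</b></p>";
// Escapes the markup characters and encodes the euro symbol as a numeric entity
Log.d("Escaping", Html.escapeHtml(input));
// Escapes the markup characters and the quote, but leaves the euro symbol alone
Log.d("Escaping", TextUtils.htmlEncode(input));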
I discovered today that Android can't display a small handful of Japanese characters that I'm using in my Japanese-English dictionary app.
The problem comes when I attempt to display the character via TextView.setText(). All of the characters below show up as blank when I attempt to display them in a TextView. It doesn't appear to be an issue with encoding, though - I'm storing the characters in a SQLite database and have verified that Android can understand the characters. Casting the characters to (int) retrieves proper Unicode decimal escapes for all but one of the characters:
String component = cursor.getString(cursor.getColumnIndex("component"));
Log.i("CursorAdapterGridComponents", "Character Code: " + (int) component.charAt(0) + "(" + component + ")");
I had to use Character.codePointAt() to get the decimal escape for the one problematic character:
int codePoint = Character.codePointAt(component, 0);
I don't think I'm doing anything wrong, and since Strings are UTF-16 encoded by default, there should be nothing preventing them from displaying the characters.
Below are all of the decimal escapes for the seven problematic characters:
⺅ Character Code: 11909(⺅)
⺌ Character Code: 11916(⺌)
⺾ Character Code: 11966(⺾)
⻏ Character Code: 11983(⻏)
⻖ Character Code: 11990(⻖)
⺹ Character Code: 11961(⺹)
𠆢 Character Code: 131490(𠆢)
Plugging the first six values into http://unicode-table.com/en/ revealed their corresponding Unicode code points, so I have no doubt that they're valid Unicode characters.
The seventh character could only be retrieved from a table of UTF-16 characters: http://www.fileformat.info/info/unicode/char/201a2/browsertest.htm. I could not use its 5-digit Unicode number in setText() (as in "\u201a2") because, as I discovered earlier today, the \u escape has no support for code points past 0xFFFF. As a result, the string was evaluated as "\u201a" + "2". That still doesn't explain why the first six characters won't show up.
What are my options at this point? My first instinct is to just make graphics out of the problematic characters, but Android's highly variable DPI environment makes this a challenging proposition. Is using another font in my app an option? Aside from that, I really have no idea how to proceed.
Is using another font in my app an option?
Sure. Find a font that you are licensed to distribute with your app and that has these characters. Package the font in your assets/ directory. Create a Typeface object for that font face. Apply that font to the necessary widgets by calling setTypeface() on the TextView.
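In code that amounts to something like this (a sketch; the font file name and the view id are placeholders for whatever you actually use):

// Load a font bundled under assets/fonts/ and apply it to the TextView
// that displays the problematic characters.
Typeface kanjiFont = Typeface.createFromAsset(getAssets(), "fonts/mykanjifont.ttf");
TextView componentView = (TextView) findViewById(R.id.component);
componentView.setTypeface(kanjiFont);
componentView.setText(component);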
Here is a sample application demonstrating applying a custom font to a TextView.
I am trying to parse a Rss2.0 feed on Android using a Pull parser.
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), null);
The prolog of the feed XML says the encoding is "utf-8". When I open the remote stream and pass it to my pull parser, I get invalid token, document not well formed exceptions.
When I save the XML file and open it in the browser (Firefox), the browser reports the presence of a Unicode 0x12 character (grave accent?) in the file and fails to render the XML.
What is the best way to handle such cases assuming that I do not have any control over the XML being returned?
Thanks.
Where did you find that 0x12 is the grave accent? UTF-8 has the character range 0x00-0x7F encoded the same as ASCII, and ASCII code point 0x12 is a control character, DC2, or CTRL+R.
It sounds like an encoding problem of some sort. The simplest way to resolve that is to look at the file you've saved in a hex editor. There are some things to check:
the byte order mark (BOM) at the beginning might confuse some XML parsers
even though the XML declaration says the encoding is UTF-8, the file may not actually be in that encoding, and so it will be decoded incorrectly.
not all Unicode characters are legal in XML, which is why Firefox refuses to render it. In particular, the XML spec says that 0x9, 0xA and 0xD are the only valid characters less than 0x20, so 0x12 will definitely cause compliant parsers to grumble.
If you can upload the file to pastebin or similar, I can help find the cause and suggest a resolution.
EDIT: Ok, you can't upload. That's understandable.
The XML you're getting is corrupted somehow, and the ideal course of action is to contact the party responsible for producing it, to see if the problem can be resolved.
One thing to check before doing that though - are you sure you are getting the data undisturbed? Some forms of communication (SMS) allow only 7-bit characters. This would turn 0x92 (the Windows-1252 right single quote/apostrophe) into 0x12. It seems like quite a coincidence, particularly if these appear in the file where you would expect an accent.
Otherwise, you will have to try to make best do with what you have:
although not strictly necessary, be defensive and pass "UTF-8" as the second parameter to setInput() on the parser.
similarly, force the parser to use another character encoding by passing a different encoding as the second parameter. Encodings to try in addition to "UTF-8" are "iso-8859-1" and "UTF-16". A full list of supported encodings for Java is given on the Sun site - you could try all of these. (I couldn't find a definitive list of supported encodings for Android.)
As a last resort, you can strip out invalid characters, e.g. remove all characters below 0x20 that are not whitespace (0x9, 0xA and 0xD are all whitespace). If removing them is difficult, you can replace them instead.
For example
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReplacingInputStream extends FilterInputStream
{
    ReplacingInputStream(InputStream in) { super(in); }

    @Override
    public int read() throws IOException
    {
        int read = super.read();
        // Replace characters that are illegal in XML (anything below 0x20
        // other than tab, LF and CR) with a space.
        if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
            read = 0x20;
        return read;
    }
}
You wrap this around your existing input stream, and it filters out the invalid characters. Note that you could easily do more damage to the XML, or end up with nonsense XML, but equally it may allow you to get out the data you need or to more easily see where the problems lie.
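For instance, wired into the parser setup from the question it could look roughly like this (a sketch; url.open() is taken from the question, and a production version should also override the bulk read(byte[], int, int) overload, which bypasses the single-byte read() above):

XmlPullParser parser = Xml.newPullParser();
// Filter the raw stream so illegal control characters become spaces
// before the parser ever sees them.
InputStream cleaned = new ReplacingInputStream(url.open());
parser.setInput(cleaned, "UTF-8");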
I used to filter this with a regex, but the trick is not to try to find and replace the accents; that depends on the encoding, and you don't want to change the content.
Try wrapping the content of the tags in CDATA sections.
Like this:
<title>My title</title>
<link>http://mylink.com</link>
<description>My description</description>
To this
<title><![CDATA[My title]]></title>
<link><![CDATA[http://mylink.com]]></link>
<description><![CDATA[My Description]]></description>
The regex shouldn't be very hard to figure out. It works for me; I hope it helps you.
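For what it's worth, a rough sketch of such a regex in Java (assuming the feed fits in memory as a single String named rawXml, and that the element text never itself contains "]]>"):

String wrapped = rawXml.replaceAll(
        "(?s)<(title|link|description)>(.*?)</\\1>",   // capture the tag name and its text
        "<$1><![CDATA[$2]]></$1>");                    // re-emit the text wrapped in CDATA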
The problem with UTF-8 is that it is a multibyte encoding. As such, it needs a way to indicate when a character is formed by more than one byte (two, three, four, ...). The way of doing this is by reserving some byte values to signal multibyte characters. Thus the encoding follows some basic rules:
One byte characters have no MSB set (codes compatible with 7-bit ASCII).
Two byte characters are represented by sequence: 110xxxxx 10xxxxxx
Three bytes: 1110xxxx 10xxxxxx 10xxxxxx
Four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Your problem is that you may be reading a character string supposedly encoded as UTF-8 (as the XML encoding declaration states), but the byte chunk might not really be encoded in UTF-8 (it is a common mistake to declare something as UTF-8 but encode the text with a different encoding such as Cp1252). Your XML parser tries to interpret byte chunks as UTF-8 characters but finds something that does not fit the encoding rules (an illegal character). E.g. two bytes that both have their two most significant bits set would produce an illegal encoding error: 110xxxxx must always be followed by 10xxxxxx (values such as 01xxxxxx, 11xxxxxx or 00xxxxxx would be illegal).
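You can see this with a strict decoder from java.nio.charset (a small demonstration; 0x92 is the Windows-1252 right single quote, which on its own is not a valid UTF-8 sequence):

byte[] cp1252Bytes = { (byte) 0x92 };  // valid in Cp1252, malformed as UTF-8
CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
        .onMalformedInput(CodingErrorAction.REPORT);
decoder.decode(ByteBuffer.wrap(cp1252Bytes));  // throws MalformedInputException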
This problem does not arise when fixed-length encodings are used. I.e. if you state in your XML declaration that your file uses Windows-1252 encoding but you actually end up using ANSI, your only problem will be that non-ASCII characters (values > 127) will render incorrectly.
The solution:
Try to detect encoding by other means.
If you will always be reading data from the same source, you could sample some files and use an advanced text editor that tries to infer the actual encoding of the file (e.g. Notepad++, jEdit, etc.).
Do it programmatically. Preprocess the raw bytes before doing any actual XML processing.
Force actual encoding at the XML processor
Alternatively, if you do not mind about non-ASCII characters (no matter if strange symbols appear now and then), you could go directly to step 2 and force XML processing with any ASCII-compatible 8-bit fixed-length encoding (ANSI, any Windows-XXXX codepage, Mac Roman, etc.). With your present code you could just try:
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), "ISO-8859-1");
Calling setInput(istream, null) already tells the pull parser to try to detect the encoding on its own. It obviously fails, due to the fact that there is an actual problem with the file. So it's not that your code is wrong - you can't be expected to parse all incorrect documents, whether ill-formed or with wrong encodings.
If however it's mandatory that you try to parse this particular document, what you can do is modify your parsing code so it's in a function that takes the encoding as a parameter and is wrapped in a try/catch block. The first time through, do not specify an encoding, and if you get an encoding error, relaunch it with ISO-8859-1. If it's mandatory to have it succeed, repeat for other encodings, otherwise call it quits after two.
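In code, that retry could look something like this (a sketch; openFeed() is a hypothetical helper that re-opens the remote stream, since the stream cannot be rewound after a failed attempt):

void parseWithFallback() throws Exception {
    try {
        parseFeed(openFeed(), null);          // first pass: let the parser auto-detect
    } catch (XmlPullParserException e) {
        parseFeed(openFeed(), "ISO-8859-1");  // second pass: force a fallback encoding
    }
}

void parseFeed(InputStream in, String encoding) throws Exception {
    XmlPullParser parser = Xml.newPullParser();
    parser.setInput(in, encoding);
    while (parser.next() != XmlPullParser.END_DOCUMENT) {
        // handle START_TAG / TEXT events and pull out the feed items here
    }
}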
Before parsing your XML, you may tweak it and manually remove the accents.
Maybe not the best solution so far, but it will do the job.