How to call mbstowcs properly? - android

size_t mbstowcs(wchar_t *dest, const char *src, size_t n);
I have some data encoded in GB2312 which needs to be converted to Unicode on the Android platform.
1. Before calling this method, is it right to call setlocale(LC_ALL, "zh_CN.UTF-8")?
2. How large a buffer do I need to allocate for dest?
3. What should I pass for n? Is it strlen(src)?
Thank you very much.

mbstowcs() will convert a string from the current locale's multibyte encoding into a wide-character string. Wide-character strings are not necessarily Unicode, but on Linux they are (UCS-4, i.e. UTF-32).
If you set the locale to zh_CN.UTF-8 then the current locale's multibyte encoding will be UTF-8, not GB2312. You would need to set a GB2312 locale for the input to be treated using that multibyte encoding.
The C standard implies that a single multibyte character will produce at most one wide character, so you can use strlen(src) as the upper bound on the number of wide characters required:
size_t n = strlen(src) + 1;
wchar_t *dest = malloc(n * sizeof dest[0]);
(glibc has an extension to the standard mbstowcs() interface: passing a NULL destination pointer makes it return exactly how many wide characters the conversion will produce, like this:
size_t n = mbstowcs(NULL, src, 0) + 1;
However, that extension won't help you on Android.)
The value of n that should be passed is the maximum number of wide characters that should be written through the dest pointer, including the terminating null wide character.
However, you should instead look into using libiconv, which has been successfully compiled for Android. It allows you to explicitly choose the source and destination character sets you are interested in, and is a much better fit for this problem.
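If the converted text is ultimately consumed on the Java side of the app, one alternative worth noting (an assumption about your setup, not something the answer above requires) is to skip the C locale machinery entirely and let Java's Charset support decode GB2312 directly:

```java
import java.nio.charset.Charset;

public class Gb2312Demo {
    public static void main(String[] args) {
        // GB2312 bytes for "你好" (ni hao)
        byte[] gb = { (byte) 0xC4, (byte) 0xE3, (byte) 0xBA, (byte) 0xC3 };
        // Decode straight to Java's UTF-16 String; no setlocale() needed.
        String s = new String(gb, Charset.forName("GB2312"));
        System.out.println(s); // 你好
    }
}
```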

Related

Android Java.Lang Locale Number Format I/O Asymmetry Problem

Historically the Android phones sold in South Africa provided English.US and English.UK locale support, but recently English.ZA (South Africa) has made an appearance, on Android 9.0 Samsung Galaxy A10, for example.
This particular Locale is showing asymmetric treatment of number formats, using the Locale.DE (German/Dutch) convention when converting Floats and Doubles into character strings[*1], but raising Java.Lang.NumberFormatException when reading back the self-same generated strings. For instance:
// on output
Float fltNum = 1.23456F;
System.out.println(String.format(Locale.getDefault(),"%f",fltNum)); // prints '1,23456'
// on Input
String fltStr = "1,23456";
Float fltVal;
fltVal = new Float(fltStr); // generates NumberFormatException
fltVal = Float.parseFloat(fltStr); // also generates NumberFormatException
// Giving the compiler Float hints fltStr = "1,23456F" does not help
// Only fltStr = '1.23456' converts into a Float.
The temptation would be to swap decimal separators on reads, but that is the task of Float.parseFloat(), not of the programmer; doing so would in turn break other Locale.DE-likes, such as Locale.ID (Indonesia), which my app supports.
My additional question, directed more at Locale arbitrators, is: does English.ZA not imply English-conformant behaviour, just as, say, German.NA (Namibia) would be German-conformant? One would think the natural designation for this particular number convention would be Dutch.ZA (colloquially 'Afrikaans'), for Dutch conformance, but Android designates it as English.ZA.
NB (*1) This Android English.ZA conforms only partially, as it produces neither the German point group separator nor the local clerical (pen-and-paper) space group separator.
Apologies for using 'Answer' to respond to diogenesgg's comment suggestion:
"Hi, please take a look at this answer stackoverflow.com/questions/5233012/…. TL/DR."
In it I found a few gems -
(1)
NumberFormat f = NumberFormat.getInstance(Locale.getDefault());
if (f instanceof DecimalFormat) {
    ((DecimalFormat) f).setDecimalSeparatorAlwaysShown(true);
}
But this is neutral and not value-specific so I added after the above,
(2) Given:
String axisValue = "some-float-value-rendered-as-string";
which I incorporate sequentially:
NumberFormat nf = NumberFormat.getInstance(Locale.getDefault());
Number n;
if (nf instanceof DecimalFormat) {
    try {
        n = nf.parse(axisValue);
        axisComponent = (Double) n;
    } catch (java.text.ParseException jtpe) {
        jtpe.printStackTrace();
    }
}
Notice the need to cast the Number n to Double.
This worked mostly under the problematic Locale English.ZA, until the value 0,00000 showed up.
For the string value "0,00000", NumberFormat decides Number n is a Long, and the system throws a (Long to Double) ClassCastException.
I tried every way I could think of to trick NumberFormat into viewing 0 as a Float or Double, to no avail; 0 is a border case that Number (NumberFormat.DecimalFormat) does not tolerate.
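For reference, the Long-versus-Double behaviour can be reproduced and sidestepped by calling doubleValue() on the returned Number instead of casting. Locale.GERMANY stands in here for the comma-decimal behaviour described above, since English.ZA may not be available on a desktop JVM:

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class ParseDemo {
    public static void main(String[] args) throws ParseException {
        NumberFormat nf = NumberFormat.getInstance(Locale.GERMANY); // comma decimal separator
        Number zero = nf.parse("0,00000");   // comes back as a Long
        Number frac = nf.parse("1,23456");   // comes back as a Double
        // (Double) zero would throw ClassCastException; this is always safe:
        double z = zero.doubleValue();
        double f = frac.doubleValue();
        System.out.println(z + " " + f);
    }
}
```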
But this NumberFormat workaround does not resolve the asymmetry problem of the Android 9 Locale.English(ZA) DecimalFormat emitting Locale.DE-style output (comma decimal separator) while parsing only system primitives (decimal-dot separator).
Incidentally, getting past the DecimalFormat problem exposed a myriad of other problems under this new-fangled English.ZA, all stemming from my app assuming that system primitives work interchangeably with native resource strings. Semantics used that way require string comparisons between primitive and native representations to succeed!
For example, system file path names (primitives) rendered in the native format generate 'file not found'; even more problematic, primitive string keys used semantically are rendered meaningless on native lookup.
I'm not sure which is the lesser evil: the asymmetric locale English.ZA, or my use of primitives in semantics thrust upon natives. A futile exercise!
Right now I'm embarking on separating system primitives, including their semantic variants from ANY Native language resource strings ...
My lifetime of programming under system primitives needs an overall makeover.
Maybe I can keep an Assets repository for the primitives (resource or semantic) and have Natives look that up for system resources or for semantic Meaning.

Music symbols in unicode on android

We want to draw music symbols in View.onDraw(), and found that Unicode contains a few such symbols. Here is the Code Chart.
But when I call drawText("\u1D100"), only the four characters after \u are decoded; the last "0" is still drawn as "0". How do I solve this problem?
Strings in Java/Android are encoded using UTF-16. The \u escape notation supports up to 4 hex digits. So, to encode a Unicode codepoint above U+FFFF, you have to encode it as a UTF-16 surrogate pair. This is clearly explained in the Java/Android documentations.
U+1D100 is 0xD834 0xDD00 in UTF-16, so use this instead:
drawText("\uD834\uDD00", ...)
Alternatively, you can convert the Unicode codepoint to a char[] array at runtime and then draw it:
char[] ch = Character.toChars(0x1D100);
drawText(ch, 0, ch.length, ...)
Either way, of course you have to use a font that actually supports U+1D100.

Error using native smiles of Android in text field (Unity3d)

I am making an application with Unity3d and building it for Android. When I type native Android smileys into an input field, I get an error on this line
(invalid utf-16 sequence at 1411555520 (missing surrogate tail)):
r.font.RequestCharactersInTexture(chars, size, style);
chars contains a string that contains the native Android smileys. How can I support native smileys? I use my own class for the input field.
Unfortunately, supporting emojis with Unity is hard. When I implemented this feature, it took about a month to finish it, with a custom text layout engine and string class. So, if this requirement is not particularly important, I would suggest axing this feature.
The reason behind this particular error is that Unity gets characters from the input string one by one, and updates the visual string after every character. From a layman's point of view, this makes complete sense. However, it doesn't take into account how UTF-16, the encoding used by C#, works.
UTF-16 uses 16 bits for a single Unicode character. That is enough for almost all characters you would normally use. (And, as every developer knows, "almost all" is a red flag that will lay dormant for a long time and then explode and destroy everything you love.) But it so happens that emoji characters do not fit into a single 16-bit UTF-16 code unit, and use a special case: the surrogate pair.
A surrogate pair is a pair of UTF-16 code units that together represent a single Unicode character. The units have no meaning individually, so when you try to render a UTF-16 code unit that is a surrogate head or surrogate tail by itself, you can expect to get an error like this, or something similar.
Essentially, what you need to implement is some kind of buffer, that will accept C# UTF-16 characters one by one, and then pass them to rendering code when it verifies that all surrogate pairs are closed.
Oh, and I almost forgot! Some emoji, like country flags, are represented by two Unicode code points, which means they can take up to four UTF-16 code units. Aren't text encodings fun?
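The buffering approach described above can be sketched as follows. This is in Java rather than Unity's C# (the .NET Char API is analogous, e.g. char.IsHighSurrogate), and the class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class SurrogateBuffer {
    private final StringBuilder pending = new StringBuilder();
    final List<String> complete = new ArrayList<>();

    // Accept UTF-16 code units one at a time; emit only complete characters.
    void accept(char c) {
        pending.append(c);
        // A high surrogate (head) has no meaning until its tail arrives.
        if (!Character.isHighSurrogate(c)) {
            complete.add(pending.toString());
            pending.setLength(0);
        }
    }

    public static void main(String[] args) {
        SurrogateBuffer buf = new SurrogateBuffer();
        for (char c : "A\uD83D\uDE04B".toCharArray()) buf.accept(c);
        System.out.println(buf.complete.size()); // three complete elements: A, the emoji, B
    }
}
```

Note this only closes surrogate pairs; multi-code-point emoji such as flags would need further grouping, as mentioned above.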

Get emoticon unicode from char UTF-16

I need to intercept an emoticon entry and change for my own emoticon.
When I intercept an emoticon, for example the FACE WITH MEDICAL MASK (\U+1F604), I get UTF-16 chars (0xD83D 0xDE04). Is it possible to convert this char value to the Unicode code point value?
I need to convert 0xD83D 0xDE04 to \u1f604.
Thanks,
I get an UTF-16 char (0xD83D 0xDE04), Is it possible to convert this char value to the unicode value?
For just a single code point in a string, you can convert it to an integer with:
int codepoint = "\uD83D\uDE04".codePointAt(0); // 0x1F604
It is, however, quite tedious to go over a whole string with codePointCount/codePointAt. Java/Dalvik's String type is strongly tied to UTF-16 code units, and the codePoint methods are a poorly-integrated afterthought. If you simply want to replace an emoji with some other string of characters, you are probably best off doing a plain string replace (or regex) with the two code units as they appear in the String type, e.g. text.replace("\uD83D\uDE04", ":-D").
(BTW Face with medical mask is U+1F637.)
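A minimal sketch of the code-unit versus code-point distinction (variable names are illustrative):

```java
public class CodePoints {
    public static void main(String[] args) {
        String text = "Hi \uD83D\uDE04"; // the emoji is one code point but two chars
        int i = 0;
        while (i < text.length()) {
            int cp = text.codePointAt(i);
            System.out.printf("U+%04X%n", cp);   // prints U+1F604 for the emoji
            i += Character.charCount(cp);        // advances by 2 for surrogate pairs
        }
        // Simple replacement works directly on the code units:
        System.out.println(text.replace("\uD83D\uDE04", ":-D")); // Hi :-D
    }
}
```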
\u1f604 is the UTF-32 encoding of that emoticon. You can convert this way:
byte[] bytes = "\uD83D\uDE37".getBytes("UTF-32BE");
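A quick self-contained check ties the three representations together: the surrogate pair, the code point, and the UTF-32BE bytes (which hold the code point value directly):

```java
import java.nio.charset.Charset;

public class Utf32Demo {
    public static void main(String[] args) {
        String mask = "\uD83D\uDE37"; // U+1F637 as a surrogate pair
        int cp = mask.codePointAt(0);
        System.out.printf("U+%X%n", cp); // U+1F637
        byte[] utf32 = mask.getBytes(Charset.forName("UTF-32BE"));
        // Four bytes holding the code point directly: 00 01 F6 37
        System.out.println(utf32.length);
    }
}
```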

How to remove accent characters from an InputStream

I am trying to parse a Rss2.0 feed on Android using a Pull parser.
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.openStream(), null);
The prolog of the feed XML says the encoding is "utf-8". When I open the remote stream and pass this to my Pull Parser, I get invalid token, document not well formed exceptions.
When I save the XML file and open it in the browser (Firefox), the browser reports the presence of a Unicode 0x12 character (grave accent?) in the file and fails to render the XML.
What is the best way to handle such cases assuming that I do not have any control over the XML being returned?
Thanks.
Where did you find that 0x12 is the grave accent? UTF-8 encodes the character range 0x00-0x7F the same as ASCII, and ASCII code point 0x12 is a control character: DC2, or CTRL+R.
It sounds like an encoding problem of some sort. The simplest way to resolve that is to look at the file you've saved in a hex editor. There are some things to check:
the byte order mark (BOM) at the beginning might confuse some XML parsers
even though the XML declaration says the encoding is in UTF-8, it may not actually have that encoding, and the file will be decoded incorrectly.
not all Unicode characters are legal in XML, which is why Firefox refuses to render it. In particular, the XML spec says that 0x9, 0xA and 0xD are the only valid characters below 0x20, so 0x12 will definitely cause compliant parsers to grumble.
If you can upload the file to pastebin or similar, I can help find the cause and suggest a resolution.
EDIT: Ok, you can't upload. That's understandable.
The XML you're getting is corrupted somehow, and the ideal course of action is to contact the party responsible for producing it, to see if the problem can be resolved.
One thing to check before doing that, though: are you sure you are getting the data undisturbed? Some forms of communication (SMS) allow only 7-bit characters. That would turn 0x92 (the Windows-1252 curly apostrophe) into 0x12. It seems like quite a coincidence, particularly if these appear in the file where you would expect an accent.
Otherwise, you will have to make the best of what you have:
although not strictly necessary, be defensive and pass "UTF-8" as the second parameter to setInput, on the parser.
similarly, force the parser to use another character encoding by passing a different encoding as the second parameter. Encodings to try in addition to "UTF-8" are "iso-8859-1" and "UTF-16". A full list of supported encodings for Java is given on the Sun site; you could try all of these. (I couldn't find a definitive list of supported encodings for Android.)
As a last resort, you can strip out invalid characters, e.g. remove all characters below 0x20 that are not whitespace (0x9, 0xA and 0xD are the whitespace characters in that range). If removing them is difficult, you can replace them instead.
For example
class ReplacingInputStream extends FilterInputStream
{
    protected ReplacingInputStream(InputStream in)
    {
        super(in);
    }

    public int read() throws IOException
    {
        int read = super.read();
        // 0x9, 0xA and 0xD are the only characters below 0x20 that XML allows
        if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
            read = 0x20;
        return read;
    }
}
You wrap this around your existing input stream, and it filters out the invalid characters. Be aware that FilterInputStream's bulk read(byte[], int, int) does not call through to read(), so a parser that reads in chunks will bypass this filter unless you override that method as well. Note also that you could easily do more damage to the XML, or end up with nonsense XML, but equally it may allow you to get at the data you need or to see more easily where the problems lie.
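As a self-contained check (restating a minimal version of the class so the snippet compiles on its own), here is the filter replacing a stray 0x12 byte with a space before the text reaches a parser:

```java
import java.io.*;

public class FilterDemo {
    // Minimal copy of the ReplacingInputStream idea, for demonstration.
    static class ReplacingInputStream extends FilterInputStream {
        ReplacingInputStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            int read = super.read();
            if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
                read = 0x20;
            return read;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] dirty = "<a>b\u0012c</a>".getBytes("UTF-8");
        InputStream in = new ReplacingInputStream(new ByteArrayInputStream(dirty));
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) sb.append((char) c);
        System.out.println(sb); // the 0x12 byte comes out as a space
    }
}
```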
I used to filter this with a regex, but the trick is not to try to find and replace the accents; that depends on the encoding, and you don't want to change the content.
Try wrapping the content of the tags in CDATA sections.
Change this:
<title>My title</title>
<link>http://mylink.com</link>
<description>My description</description>
To this:
<title><![CDATA[My title]]></title>
<link><![CDATA[http://mylink.com]]></link>
<description><![CDATA[My Description]]></description>
The regex shouldn't be very hard to figure out. It works for me, hope it helps for you.
The problem with UTF-8 is that it is a multibyte encoding. As such it needs a way to indicate when a character is formed by more than one byte (maybe two, three, four, ...). The way to do this is by reserving some byte values to signal multibyte characters. Thus the encoding follows some basic rules:
One-byte characters have the MSB clear (codes compatible with 7-bit ASCII).
Two-byte characters are represented by the sequence 110xxxxx 10xxxxxx
Three bytes: 1110xxxx 10xxxxxx 10xxxxxx
Four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
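These bit patterns are easy to verify from Java (the byte values shown are standard UTF-8):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Bytes {
    public static void main(String[] args) {
        // 'é' (U+00E9) needs two bytes: 110xxxxx 10xxxxxx
        byte[] two = "\u00E9".getBytes(StandardCharsets.UTF_8);
        System.out.printf("%02X %02X%n", two[0] & 0xFF, two[1] & 0xFF); // C3 A9
        // '你' (U+4F60) needs three bytes: 1110xxxx 10xxxxxx 10xxxxxx
        byte[] three = "\u4F60".getBytes(StandardCharsets.UTF_8);
        System.out.printf("%02X %02X %02X%n",
                three[0] & 0xFF, three[1] & 0xFF, three[2] & 0xFF); // E4 BD A0
    }
}
```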
Your problem is that you may be reading a character string supposedly encoded as UTF-8 (as the XML encoding declaration states) while the byte stream is not really UTF-8 (it is a common mistake to declare something as UTF-8 but encode the text with a different encoding such as Cp1252). Your XML parser tries to interpret the byte chunks as UTF-8 characters but finds something that does not fit the encoding rules (an illegal character). For example, a lead byte of the form 110xxxxx must always be followed by a continuation byte of the form 10xxxxxx; followers of the form 01xxxxxx, 11xxxxxx or 00xxxxxx are illegal.
This problem does not arise when non-variable length encodings are used. I.e. if you state in your XML declaration that your file uses Windows-1252 encoding but you end up using ANSI your only problem will be that non-ASCII characters (values > 127) will render incorrectly.
The solution:
Try to detect the encoding by other means.
If you will always be reading data from the same source, you could sample some files and use an advanced text editor that tries to infer the actual encoding of the file (e.g. Notepad++, jEdit, etc.).
Do it programmatically. Preprocess the raw bytes before doing any actual XML processing.
Force the actual encoding at the XML processor.
Alternatively, if you do not mind about non-ASCII characters (no matter if strange symbols appear now and then), you could go directly to step 2 and force XML processing with any ASCII-compatible 8-bit fixed-length encoding (ANSI, any Windows-XXXX codepage, Mac Roman, etc.). With your present code you could just try:
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.openStream(), "ISO-8859-1");
Calling setInput(istream, null) already means for the pull parser to try to detect the encoding on its own. It obviously fails, due to the fact that there is an actual problem with the file. So it's not like your code is wrong - you can't be expected to be able to parse all incorrect documents, whether ill-formed or with wrong encodings.
If however it's mandatory that you try to parse this particular document, what you can do is modify your parsing code so it's in a function that takes the encoding as a parameter and is wrapped in a try/catch block. The first time through, do not specify an encoding, and if you get an encoding error, relaunch it with ISO-8859-1. If it's mandatory to have it succeed, repeat for other encodings, otherwise call it quits after two.
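The same first-try-then-fallback idea can be sketched at the decoding level, independent of the parser. CharsetDecoder with CodingErrorAction.REPORT makes a wrong encoding fail loudly instead of silently substituting characters; the method name here is illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.charset.*;

public class DecodeFallback {
    static String decodeWithFallback(byte[] data) {
        // Try the declared encoding first, then the fallback suggested above.
        for (String name : new String[] { "UTF-8", "ISO-8859-1" }) {
            try {
                CharsetDecoder dec = Charset.forName(name).newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT)
                        .onUnmappableCharacter(CodingErrorAction.REPORT);
                return dec.decode(ByteBuffer.wrap(data)).toString();
            } catch (CharacterCodingException e) {
                // not this encoding; try the next candidate
            }
        }
        throw new IllegalArgumentException("no candidate encoding matched");
    }

    public static void main(String[] args) {
        byte[] notUtf8 = { 'a', (byte) 0x92, 'b' }; // 0x92 alone is malformed UTF-8
        System.out.println(decodeWithFallback(notUtf8).length()); // decoded as ISO-8859-1
    }
}
```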
Before parsing your XML, you may tweak it, and manually remove the accents before you parse it.
Maybe not the best solution so far, but it will do the job.
