I'm working on a solution where I need to URL-encode a string as UTF-8; the string is simply the device name that I'm reading via BluetoothAdapter.getDefaultAdapter().name.
For one sample I got a string like ABC-＆, and encoding it returned ABC-%EF%BC%86 instead of ABC-%26. It seemed odd until further debugging showed that there is a difference between & and ＆: the second one is a different character (the fullwidth ampersand, U+FF06), which is why it does not encode as expected.
& and ＆ are not the same character.
For encoding I tried both URLEncoder.encode(input, "utf-8") and Uri.encode(input, "utf-8"), but neither gave the expected result.
This is just one example; there might be other characters that look the same as the expected character but don't encode the same way. Now the questions are:
Why is there this difference at all, given that the name is read from the device using a standard SDK API?
How can this be fixed? Find-and-replace with the actual character could be one approach, but its scope is limited; there might be other unknown characters.
Any suggestions would be appreciated!
One solution would be to define your allowed character scope. Then either replace or remove the characters that fall outside of this scope.
Given the following regex (the ^ negates the character class so it matches everything outside the allowed scope; the hyphen is placed last so it is treated literally):
[^a-zA-Z0-9+&# -]
You could then either do:
input.replaceAll("[a-zA-Z0-9 -+&#]", "_");
...or if you don't care about possibly empty results:
input.replaceAll("[a-zA-Z0-9 -+&#]", "");
The first approach would give you a length-consistent representation of the original Bluetooth device name.
Either way, this approach has worked wonders for me and my colleagues. Hope this helps.
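As a minimal sketch of that idea (the whitelist and the helper below are just illustrative assumptions; adjust the allowed scope to your own needs), combining the regex with the UTF-8 encoding from the question:

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class DeviceNameSanitizer {

    // Assumed whitelist: letters, digits, space, plus, ampersand, hash and hyphen.
    private static final String OUTSIDE_SCOPE = "[^a-zA-Z0-9+&# -]";

    public static String sanitizeAndEncode(String deviceName)
            throws UnsupportedEncodingException {
        // Replace everything outside the allowed scope with an underscore,
        // keeping the result length-consistent with the original name.
        String sanitized = deviceName.replaceAll(OUTSIDE_SCOPE, "_");
        // URL-encode the sanitized name as UTF-8.
        return URLEncoder.encode(sanitized, "UTF-8");
    }
}

With this, a name such as ABC-＆ from the question comes back as ABC-_ rather than ABC-%EF%BC%86.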
Related
I have an Android application that uses Google Translate API.
Everything works great, including when I tried to translate phrases that include apostrophe like "We've eaten" to Spanish.
However, problems occur when the translation result I get back contains an apostrophe. For example, when I translate the Spanish phrase "A ver" into English, it returns "Let&#39;s see", i.e. the HTML entity &#39; instead of an apostrophe. It seems like whenever a result should contain an apostrophe, I get &#39; back instead.
I can think of a way to solve it: after I get the translation result, I can search the string for &#39; and replace it with an apostrophe.
However, I don't feel like this is the way I should approach it. It's very unlikely that a user will actually type &#39; as input for translation, but hard-coding a manual conversion like this seems like it might cause problems down the road. I'd love to hear your thoughts on this.
Please let me know how I should fix/approach this issue.
Thank you!
The best solution is to add &format=text to your query.
You are correct that hard-coding is not the solution,
but you can convert this HTML entity back to an apostrophe by using the Html class that is already provided:
Html.fromHtml("Let&#39;s see").toString()
The code above will convert any valid HTML entity.
I hope this is what you are looking for.
Thanks Guillaume. For those using PHP:
$translation = $translate->translate($stringToTranslate, ['target' => $target, 'format' => 'text']);
Thanks Guillaume. For those using Go (API v3):
req := &translatepb.TranslateTextRequest{
    MimeType: "text/plain", // add this line to the request
}
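And for those calling the v2 REST endpoint directly from Java/Android, a rough sketch of the same idea (the endpoint and parameter names reflect the v2 REST API as I understand it; apiKey, target and text are placeholders):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

static String buildTranslateUrl(String apiKey, String target, String text)
        throws UnsupportedEncodingException {
    // format=text makes the API return plain text (real apostrophes)
    // instead of HTML-escaped entities such as &#39;.
    return "https://translation.googleapis.com/language/translate/v2"
            + "?key=" + apiKey
            + "&target=" + target
            + "&format=text"
            + "&q=" + URLEncoder.encode(text, "UTF-8");
}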
I am parsing some external XML into an object and displaying this inside a textview.
Apostrophes/single quotes are being converted to these silly question-mark symbols.
Nothing I've found is working. I've tried using replaceAll and escaping with \', but it doesn't give me the desired result.
I've tried setting the textview using:
tv.setText(Html.fromHtml(news_item.getTitle()));
It doesn't seem to work, and I can't find any other solutions to this one; your ideas are appreciated.
Try this:
tv.setText(news_item.getTitle().replaceAll("\u2019", "'"));
For other Unicode characters, please see this link.
Found it!
The mark you are looking for is called RIGHT SINGLE QUOTATION MARK, with the Unicode code point U+2019 (written "\u2019" in Java source). That is the character you want to display properly.
If it doesn't display correctly, substitute that mark with a plain apostrophe via:
news_item.getTitle().replace("\u2019", "'")
or, using the literal character directly:
news_item.getTitle().replace("’", "'")
to make sure the display actually shows it.
Close-up of the difference between the right single quotation mark and the apostrophe: ’ vs '
The documented solution will work, but it is not the right way of fixing this, as the root cause of the problem is encoding. In your case, the source's (XML document) encoding is most likely UTF-8 or some other multi-byte encoding. Your parser or consumer of the data is most likely ISO-8859-1 or ASCII. These characters (right/left apostrophes) are not part of that character set. Therefore, the correct solution is to change the encoding of your parser/processor/consumer to UTF-8.
If this is not the case, then it is probably the opposite. You have a process that writes down characters in UTF-8, but the XML's encoding is not compatible (i.e. ISO-8859-1).
Remember this: ALL characters in ISO-8859-1 are mapped in UTF-8, but not the other way around. So going from ISO-8859-1 to UTF-8 is not a problem. The problem is when you go the other way, from UTF-8 back to ISO-8859-1. When converting UTF-8 text, those characters NOT in the ISO character set will show up funny on your display, either as question marks or as mojibake such as "â€™".
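For instance, if the consumer is plain Java I/O, a minimal sketch of forcing the decoding to UTF-8 (the method name is just for illustration) would be:

import java.io.*;

static Reader openUtf8Reader(InputStream in) throws IOException {
    // Decode the bytes explicitly as UTF-8 instead of the platform default,
    // so multi-byte characters such as ’ are not mangled into ? or â€™.
    return new BufferedReader(new InputStreamReader(in, "UTF-8"));
}

Hand the resulting Reader to whatever parser or consumer reads the XML.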
I am extracting strings from a KML file. If a string contains special characters like !, #, ', ", etc., the file uses entity codes such as &#39; for them.
I am not able to extract the entire string in such cases by calling getNodeValue(); it terminates the string at the special character.
<name>Continue onto Royal&#39;s Market</name>
If I extract the string I get only "Continue onto Royal". I want the entire string:
Continue onto Royal's Market.
How can I achieve this? If anybody is familiar with this, please reply.
Thanks
Your problem has nothing to do with KML; it is general to XML parsing:
Don't use getNodeValue(), as there is no guarantee in DOM that text isn't actually split over several nodes.
Try using getTextContent() instead.
You might also have to replace entities, as in: node.getTextContent().replaceAll("&#39;", "'");
In general I wouldn't use DOM at all for extracting data.
I'd use XmlPullParser, as it's simpler to work with and parses faster.
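A rough sketch of that pull-parser approach for the <name> elements (the stream handling is assumed; adjust it to however you obtain the KML):

import java.io.IOException;
import java.io.InputStream;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import android.util.Xml;

static String readFirstName(InputStream kmlStream)
        throws XmlPullParserException, IOException {
    XmlPullParser parser = Xml.newPullParser();
    parser.setInput(kmlStream, "UTF-8");
    for (int event = parser.getEventType();
            event != XmlPullParser.END_DOCUMENT;
            event = parser.next()) {
        if (event == XmlPullParser.START_TAG && "name".equals(parser.getName())) {
            // nextText() returns the whole text content with entities such as
            // &#39; already resolved, so the string is not cut short.
            return parser.nextText();
        }
    }
    return null;
}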
I have a string resource called "foo". It may be a simple string... or it may contain HTML. This may change over time: I should be able to box it up as at least a SpannableString immediately upon reading whether it's HTML or not (but how??)
I want to get that raw CharSequence and first be able to display it as-is (the exact characters, not Android's "interpretation" of it). Right now I can't do that... toString() decides to rip out the parts it doesn't think I want to see.
I'd then like to be able to create a SpannableString from this and other Strings or SpannableStrings via concatenation using some method (none of the normal ones work). I'd like to then use that SpannableString to display the HTML-formatted text in a TextView.
This shouldn't be difficult, but clearly I'm not doing it right (there's very little info out there about this that I've found so far). Surely there is a way to accurately interconvert between Strings, SpannedStrings and even SpannableStrings, without losing the markup along the way?
Note that I've already played with the somewhat broken Linkify, but I want better control over the process (no dangling unformatted "/"s, proper hrefs, etc.). I can get this all to work IF I stay in HTML at every step, though I can't concatenate anything.
Edit 1: I've learned I can use the following to always ensure I get my raw string (instead of whatever Android decides it thinks the CharSequence really is). Nice... now, how to coax this into a SpannableString?
<string name="foo"><![CDATA[
<b>Some bold</b>
]]>
</string>
Edit 2: Not sure why this didn't work earlier, but... if foo1 and foo2 are strings marked up as above (as CDATA), then one can apparently do this:
String foo1 = (String)getResources().getText(R.string.foo1);
String foo2 = (String)getResources().getText(R.string.foo2);
SpannedString bar = new SpannedString(Html.fromHtml(foo1+foo2));
Curious: is there a more straightforward solution than this? Is this CDATA business actually necessary? It seems convoluted (but not as convoluted as never quite knowing what the resource type will be... String, Spannable, etc.)
I had the same problem. There are two solutions according to Google API Guides.
The first is to escape the < mark with &lt; in the string resource. Unfortunately, the String conversion removes the tag in the background.
The second is to use format strings instead of XML/HTML tags. It seems simpler, faster, and avoids hidden conversion problems; getString(resource, ...) works like printf(format, ...) here.
Both work, and both require some code to handle the relevant part of the string anyway (tags or format arguments). Enjoy! =)
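For the format-string route, a small sketch (the resource name, placeholders and variables below are made up for illustration). Given a resource such as <string name="welcome">Hello, %1$s! You have %2$d new messages.</string>, you can fill it in at runtime:

// Inside an Activity or anything else with a Context:
// getString() substitutes the printf-style placeholders from the resource,
// so no HTML tags need to survive the resource-loading step.
String text = getString(R.string.welcome, userName, messageCount);
textView.setText(text);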
It appears there isn't a more straightforward way to accomplish this.
I am trying to parse an RSS 2.0 feed on Android using a pull parser.
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), null);
The prolog of the feed XML says the encoding is "utf-8". When I open the remote stream and pass it to my pull parser, I get "invalid token" and "document not well-formed" exceptions.
When I save the XML file and open it in a browser (Firefox), the browser reports the presence of a 0x12 character (grave accent?) in the file and fails to render the XML.
What is the best way to handle such cases assuming that I do not have any control over the XML being returned?
Thanks.
Where did you find that 0x12 is the grave accent? UTF-8 has the character range 0x00-0x7F encoded the same as ASCII, and ASCII code point 0x12 is a control character, DC2, or CTRL+R.
It sounds like an encoding problem of some sort. The simplest way to resolve that is to look at the file you've saved in a hex editor. There are some things to check:
the byte order mark (BOM) at the beginning might confuse some XML parsers
even though the XML declaration says the encoding is in UTF-8, it may not actually have that encoding, and the file will be decoded incorrectly.
not all Unicode characters are legal in XML, which is why Firefox refuses to render it. In particular, the XML spec says that 0x9, 0xA and 0xD are the only valid characters below 0x20, so 0x12 will definitely cause compliant parsers to grumble.
If you can upload the file to pastebin or similar, I can help find the cause and suggest a resolution.
EDIT: Ok, you can't upload. That's understandable.
The XML you're getting is corrupted somehow, and the ideal course of action is to contact the party responsible for producing it, to see if the problem can be resolved.
One thing to check before doing that, though: are you sure you are getting the data undisturbed? Some forms of communication (SMS) allow only 7-bit characters. Stripping the high bit would turn 0x92 (the Windows-1252 curly apostrophe / right single quotation mark) into 0x12. That seems like quite a coincidence, particularly if these appear in the file where you would expect an apostrophe or accent.
Otherwise, you will have to make do with what you have:
although not strictly necessary, be defensive and pass "UTF-8" as the second parameter to setInput() on the parser.
similarly, you can force the parser to use another character encoding by passing a different encoding as the second parameter. Encodings to try in addition to "UTF-8" are "iso-8859-1" and "UTF-16". A full list of supported encodings for Java is given on the Sun site - you could try all of these. (I couldn't find a definitive list of supported encodings for Android.)
As a last resort, you can strip out invalid characters, e.g. remove all characters below 0x20 that are not whitespace (0x9, 0xA and 0xD are all whitespace). If removing them is difficult, you can replace them instead.
For example
import java.io.*;

class ReplacingInputStream extends FilterInputStream
{
    ReplacingInputStream(InputStream in)
    {
        super(in);
    }

    @Override
    public int read() throws IOException
    {
        int read = super.read();
        // Replace control characters that are invalid in XML (anything below
        // 0x20 other than tab, LF and CR) with a plain space.
        if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
            read = 0x20;
        return read;
    }
}
You wrap this around your existing input stream, and it filters out the invalid characters. Note that you could easily do more damage to the XML, or end up with nonsense XML, but equally it may allow you to get out the data you need or to more easily see where the problems lie.
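For example, with the pull parser from the question (keeping the url.open() call from the original code):

XmlPullParser parser = Xml.newPullParser();
// The wrapped stream turns invalid control characters into spaces before
// the parser ever sees them; the encoding is also stated explicitly.
parser.setInput(new ReplacingInputStream(url.open()), "UTF-8");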
I used to filter it with a regex, but the trick is not to try to find and replace the accents; that depends on the encoding, and you don't want to change the content.
Try wrapping the content of the tags in CDATA sections, going from this:
<title>My title</title>
<link>http://mylink.com</link>
<description>My description</description>
To this
<title><![CDATA[My title]]></title>
<link><![CDATA[http://mylink.com]]></link>
<description><![CDATA[My Description]]></description>
The regex shouldn't be very hard to figure out. It works for me; hope it helps you.
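If it helps, one possible shape for that regex in Java (a fragile sketch that assumes the feed has no nested markup or literal "]]>" inside those elements; rawXml is a placeholder for the feed text):

// Wrap the text of <title>, <link> and <description> elements in CDATA
// so the parser treats their content, entities and all, as literal text.
String wrapped = rawXml.replaceAll(
        "(?s)<(title|link|description)>(.*?)</\\1>",
        "<$1><![CDATA[$2]]></$1>");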
The problem with UTF-8 is that it is a multibyte encoding. As such it needs a way to indicate when a character is formed by more than one byte (maybe two, three, four, ...). The way of doing this is by reserving some byte values to signal multibyte characters. Thus encoding follows some basic rules:
One byte characters have no MSB set (codes compatible with 7-bit ASCII).
Two byte characters are represented by sequence: 110xxxxx 10xxxxxx
Three bytes: 1110xxxx 10xxxxxx 10xxxxxx
Four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Your problem is that you may be reading a character string supposedly encoded as UTF-8 (as the XML encoding declaration states), but the byte chunk might not really be encoded in UTF-8 (it is a common mistake to declare something as UTF-8 but encode the text with a different encoding such as Cp1252). Your XML parser tries to interpret byte chunks as UTF-8 characters but finds something that does not fit the encoding rules (an illegal character). E.g. a byte with its two most significant bits set raises an illegal-encoding error if the rules are broken: 110xxxxx must always be followed by 10xxxxxx (followers such as 01xxxxxx, 11xxxxxx or 00xxxxxx would be illegal).
This problem does not arise when a fixed-length, single-byte encoding is used. For example, if you state in your XML declaration that your file uses Windows-1252 but the text is actually in another single-byte ANSI code page, your only problem will be that non-ASCII characters (values > 127) render incorrectly.
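To make that concrete, here is a small sketch of how one could check programmatically whether a chunk of bytes really follows those UTF-8 rules, using a strict decoder (the method name is just for illustration):

import java.nio.ByteBuffer;
import java.nio.charset.*;

static boolean isValidUtf8(byte[] bytes) {
    CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT);
    try {
        // decode() throws when it meets a sequence that breaks the rules
        // above, e.g. a 110xxxxx byte not followed by a 10xxxxxx byte.
        decoder.decode(ByteBuffer.wrap(bytes));
        return true;
    } catch (CharacterCodingException e) {
        return false;
    }
}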
The solution:
Try to detect encoding by other means.
If you will always be reading data from the same source, you could sample some files and use an advanced text editor that tries to infer the actual encoding of the file (e.g. Notepad++, jEdit, etc.).
Do it programmatically. Preprocess the raw bytes before doing any actual XML processing.
Force actual encoding at the XML processor
Alternatively, if you do not mind about non-ASCII characters (i.e. if strange symbols appearing now and then are acceptable), you could go directly to step 2 and force the XML processing to use any ASCII-compatible 8-bit fixed-length encoding (ANSI, any Windows-125x code page, Mac Roman, etc.). With your present code you could just try:
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), "ISO-8859-1");
Calling setInput(istream, null) already tells the pull parser to try to detect the encoding on its own. It obviously fails, due to the fact that there is an actual problem with the file. So it's not like your code is wrong - you can't be expected to parse all incorrect documents, whether ill-formed or with the wrong encoding.
If however it's mandatory that you try to parse this particular document, what you can do is modify your parsing code so it's in a function that takes the encoding as a parameter and is wrapped in a try/catch block. The first time through, do not specify an encoding, and if you get an encoding error, relaunch it with ISO-8859-1. If it's mandatory to have it succeed, repeat for other encodings, otherwise call it quits after two.
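A minimal sketch of that retry approach (parseOnce() is a hypothetical wrapper around your existing pull-parser loop, and url.open() is taken from the question's code):

import java.io.IOException;
import java.io.InputStream;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import android.util.Xml;

void parseOnce(InputStream in, String encoding)
        throws XmlPullParserException, IOException {
    XmlPullParser parser = Xml.newPullParser();
    parser.setInput(in, encoding);            // null lets the parser auto-detect
    while (parser.next() != XmlPullParser.END_DOCUMENT) {
        // ...existing handling of tags and text goes here...
    }
}

void parseWithFallback() throws IOException {
    String[] encodings = { null, "ISO-8859-1" };
    for (String encoding : encodings) {
        try {
            parseOnce(url.open(), encoding);
            return;                           // parsed successfully, stop here
        } catch (XmlPullParserException e) {
            // not well formed with this encoding; try the next one
        }
    }
}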
Before parsing your XML, you may tweak it and manually remove the accents.
Maybe not the best solution so far, but it will do the job.