I am using Retrofit 2.1, but when I post a field that contains a Cyrillic word, I get an empty response, even though it should return 2-3 items. Here is the API:
@FormUrlEncoded
@POST("my_awesome_base_url")
Call<Questions> getQuestions(@Field(value = "rowsdata", encoded = false) String rowsdata);
The rowsdata contains a Cyrillic word that the database should search for, responding with similar results. Here is an example rowsdata:
rowsdata = {"code":"-4","start":"1","where":"where short_question like 'Вақт' ","end":"2"}
In the rowsdata, Вақт is Cyrillic, but it is somehow being encoded to other characters, so the server gives me an empty list.
I checked this in Postman and it gave me the desired results, but when I send the request using Retrofit, the response says nothing was found.
It is probably an encoding issue.
From the developer site:
A String represents a string in the UTF-16 format in which
supplementary characters are represented by surrogate pairs (see the
section Unicode Character Representations in the Character class for
more information). Index values refer to char code units, so a
supplementary character uses two positions in a String.
Try encoding the string into UTF-8, and make sure your source file is UTF-8 as well (the default in Android Studio, I think).
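For example, here is a minimal sketch of that idea, assuming the interface from the question; the api instance and callback are hypothetical, and the annotation is switched to encoded = true so Retrofit does not percent-encode the value a second time:

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Assumes the Retrofit field is declared as:
// @Field(value = "rowsdata", encoded = true) String rowsdata
String rowsdata = "{\"code\":\"-4\",\"start\":\"1\","
        + "\"where\":\"where short_question like 'Вақт' \",\"end\":\"2\"}";
try {
    // Percent-encode the Cyrillic text as UTF-8 bytes ourselves.
    String encoded = URLEncoder.encode(rowsdata, "UTF-8");
    api.getQuestions(encoded).enqueue(callback); // api, callback: hypothetical
} catch (UnsupportedEncodingException e) {
    throw new RuntimeException(e);
}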
My problem is that I am getting strings where some characters are Unicode escapes:
"fieldName": "Ac6jHguQjKKUxx6MSOpjO2kOLKPAdjStVs1pgTGNSU8\u003d"
I then immediately send such a string to another API, and the server returns an error with code 500. If I use this string in Postman and replace the Unicode escape with the normal character, the server returns code 200.
I thought the problem was on the server side, but they checked and said they were sending the data as expected.
How do I translate the Unicode escapes?
The easiest way is to use URLDecoder. Here is an example.
import java.net.URLDecoder;

String str = "Ac6jHguQjKKUxx6MSOpjO2kOLKPAdjStVs1pgTGNSU8\u003d";
// decode(String, String) declares UnsupportedEncodingException, so catch
// it or declare it on the enclosing method.
String decode = URLDecoder.decode(str, "UTF-8");
System.out.println(decode);
// Ac6jHguQjKKUxx6MSOpjO2kOLKPAdjStVs1pgTGNSU8=
I have two strings retrieved from JSON files (Arabic JSON files previously parsed from txt files). I use the Kotlin trim() function to remove leading and trailing newlines after parsing from JSON. The problem is that one of them, say file1, is successfully trimmed, while the other, say file2, is not.
I have thought about the encoding but never managed to work my way through it. All I know is that the JSON files are most likely encoded from a UTF-8 source. So I convert both files with the Kotlin String functions toByteArray(Charsets.UTF_8).contentToString():
file1 always has [32, 10] as the last elements in its byte array (where the newline character should be).
file2 always has [32, 10, -30, -128, -113] as the last elements in its byte array (where the newline character should be).
It sounds like there are three additional bytes at the end of the problem file (I have no idea what the minus signs stand for).
This is how I fetch the JSON and create the JSONObject:
val file: String = applicationContext.assets.open("poets/${poetID}.txt").bufferedReader().use {
    it.readText()
}
val json = JSONObject(file)
Here, ${poetID}.txt is actually a JSON file in the assets folder poets/.
I have the same application written in Swift with no such problems.
My question is: what are these additional bytes at the end? Is there a way to check the encoding of a string parsed from JSON files? Or a way to change the encoding programmatically?
I have found the answer. The additional character represents the Right-to-Left Mark (U+200F), a common Unicode character in Arabic text. Java/Kotlin bytes are signed, so [-30, -128, -113] is the byte sequence 0xE2 0x80 0x8F, which is exactly the UTF-8 encoding of U+200F; and since the mark is not whitespace, trim() leaves it in place.
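In case anyone needs to strip it, here is a minimal sketch (shown in Java; the same character test works inside Kotlin's trim { ... }) that trims trailing whitespace together with the bidi control marks a plain trim leaves behind:

// U+200F (Right-to-Left Mark) and U+200E (Left-to-Right Mark) are not
// whitespace, so String.trim() and Kotlin's trim() leave them in place.
static String trimBidi(String s) {
    int end = s.length();
    while (end > 0) {
        char c = s.charAt(end - 1);
        if (!Character.isWhitespace(c) && c != '\u200F' && c != '\u200E') break;
        end--;
    }
    return s.substring(0, end);
}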
As the title says, when I read the HTTP headers returned from the server in my Android program, the strings come out garbled. What I don't know is: what charset does the server use to encode the HTTP response headers, and what charset does Android use to decode them?
How do I escape or deal with the garbled text?
Since HTTP headers are MIME, see RFC 822, where they are defined as ASCII:
3.1.2. STRUCTURE OF HEADER FIELDS
Once a field has been unfolded, it may be viewed as being composed of
a field-name followed by a colon (":"), followed by a field-body, and
terminated by a carriage-return/line-feed. The field-name must be
composed of printable ASCII characters (i.e., characters that have
values between 33. and 126., decimal, except colon). The field-body
may be composed of any ASCII characters, except CR or LF. (While CR
and/or LF may be present in the actual text, they are removed by the
action of unfolding the field.)
Then RFC 2047
describes extensions to RFC 822 to allow non-US-ASCII text data in
Internet mail header fields
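For illustration, here is a minimal sketch of decoding a single RFC 2047 B-encoded word; Q-encoding and multi-word values are deliberately not handled, and this is an assumption-laden simplification rather than a full implementation of the RFC:

import java.nio.charset.Charset;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Decodes one encoded-word of the form =?charset?B?base64?=.
static String decodeEncodedWord(String headerValue) {
    Matcher m = Pattern.compile("=\\?([^?]+)\\?[Bb]\\?([^?]+)\\?=").matcher(headerValue);
    if (!m.matches()) return headerValue; // not an encoded-word; return unchanged
    byte[] bytes = Base64.getDecoder().decode(m.group(2));
    return new String(bytes, Charset.forName(m.group(1)));
}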
I have a String displayed in a WebView as "Siwy & Para Wino".
I fetch it from a URL and get the string "Siwy%2B%2526%2BPara%2BWino".
Now I am trying to use URLDecoder to solve this problem:
String decoded_result = URLDecoder.decode(url); // the url is "Siwy+%26+Para+Wino"
Then I print it out, and I still see "Siwy+%26+Para+Wino".
Could anyone tell me why?
From the documentation (of URLDecoder):
This class is used to decode a string which is encoded in the application/x-www-form-urlencoded MIME content type.
We can look at the specification to see what a form-urlencoded MIME type is:
The form field names and values are escaped: space characters are replaced by '+', and then reserved characters are escaped as per [URL]; that is, non-alphanumeric characters are replaced by '%HH', a percent sign and two hexadecimal digits representing the ASCII code of the character. Line breaks, as in multi-line text field values, are represented as CR LF pairs, i.e. '%0D%0A'.
Since the specification calls for a percent sign followed by two hexadecimal digits for the ASCII code, the first call to decode(String s) converts each such triplet into a single character: %2B becomes +, and %25 becomes %, leaving the two following characters 26 intact. So after the first pass, %2526 has turned into %26. Running decode one more time translates %26 into & (and the + signs into spaces). In other words, the string was simply URL-encoded twice at the source:
String decoded_result = URLDecoder.decode(URLDecoder.decode(url, "UTF-8"), "UTF-8");
You can also use the Uri class if you have UTF-8-encoded strings:
Decodes '%'-escaped octets in the given string using the UTF-8 scheme.
Then use:
String decoded_result = Uri.decode(Uri.decode(url));
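Putting the two passes together, here is a runnable sketch using the string from the question, so you can see each layer being peeled off:

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DoubleDecode {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String url = "Siwy%2B%2526%2BPara%2BWino";       // encoded twice at the source
        String once = URLDecoder.decode(url, "UTF-8");   // "Siwy+%26+Para+Wino"
        String twice = URLDecoder.decode(once, "UTF-8"); // "Siwy & Para Wino"
        System.out.println(twice);
    }
}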
Thanks for all the answers, I finally solved it...
Solution:
After I used URLDecoder.decode twice (oh my god), I got what I wanted.
String temp = URLDecoder.decode(url);    // url  = "Siwy%2B%2526%2BPara%2BWino"
String result = URLDecoder.decode(temp); // temp = "Siwy+%26+Para+Wino"
// result = "Siwy & Para Wino" !!! oh, good job.
But I still don't know why... could someone tell me?
I am trying to parse an RSS 2.0 feed on Android using a pull parser.
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), null);
The prolog of the feed XML says the encoding is "utf-8". When I open the remote stream and pass it to my pull parser, I get invalid token, document not well-formed exceptions.
When I save the XML file and open it in a browser (Firefox), the browser reports the presence of the Unicode 0x12 character (grave accent?) in the file and fails to render the XML.
What is the best way to handle such cases assuming that I do not have any control over the XML being returned?
Thanks.
Where did you find that 0x12 is the grave accent? UTF-8 has the character range 0x00-0x7F encoded the same as ASCII, and ASCII code point 0x12 is a control character, DC2, or CTRL+R.
It sounds like an encoding problem of some sort. The simplest way to resolve that is to look at the file you've saved in a hex editor. There are some things to check:
the byte order mark (BOM) at the beginning might confuse some XML parsers
even though the XML declaration says the encoding is in UTF-8, it may not actually have that encoding, and the file will be decoded incorrectly.
not all Unicode characters are legal in XML, which is why Firefox refuses to render it. In particular, the XML spec says that 0x9, 0xA and 0xD are the only valid characters below 0x20, so 0x12 will definitely cause compliant parsers to grumble.
If you can upload the file to pastebin or similar, I can help find the cause and suggest a resolution.
EDIT: Ok, you can't upload. That's understandable.
The XML you're getting is corrupted somehow, and the ideal course of action is to contact the party responsible for producing it, to see if the problem can be resolved.
One thing to check before doing that, though: are you sure you are getting the data undisturbed? Some forms of communication (SMS) allow only 7-bit characters. Stripping the high bit would turn 0x92 (the Windows-1252 right single quote / apostrophe) into 0x12. That seems like quite a coincidence, particularly if these appear in the file exactly where you would expect an apostrophe.
Otherwise, you will have to make do with what you have:
although not strictly necessary, be defensive and pass "UTF-8" as the second parameter to setInput on the parser.
similarly, force the parser to use another character encoding by passing a different encoding as the second parameter. Encodings to try in addition to "UTF-8" are "iso-8859-1" and "UTF-16". A full list of supported encodings for Java is given on the Sun site; you could try all of these. (I couldn't find a definitive list of supported encodings for Android.)
As a last resort, you can strip out invalid characters, e.g. remove all characters below 0x20 that are not whitespace (0x9, 0xA and 0xD are all whitespace). If removing them is difficult, you can replace them instead.
For example
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReplacingInputStream extends FilterInputStream
{
    ReplacingInputStream(InputStream in) { super(in); }

    @Override
    public int read() throws IOException
    {
        int read = super.read();
        // 0x9, 0xA and 0xD are the only legal XML characters below 0x20;
        // replace anything else in that range with a space.
        if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
            read = 0x20;
        return read;
    }
}
You wrap this around your existing input stream, and it filters out the invalid characters. Note that you could easily do more damage to the XML, or end up with nonsense XML, but equally it may allow you to get out the data you need or to see more easily where the problems lie. Also note that only the single-byte read() is overridden here; FilterInputStream's bulk read(byte[], int, int) does not delegate to it, so a reader that pulls whole buffers would need that overload filtered too.
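Hypothetical usage with the code from the question, assuming the question's url.open() corresponds to the standard URL.openStream():

XmlPullParser parser = Xml.newPullParser();
parser.setInput(new ReplacingInputStream(url.openStream()), "UTF-8");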
I used to filter it with a regex, but the trick is not to try to find and replace the accents; that depends on the encoding, and you don't want to change the content. Try wrapping the content of the tags in CDATA sections.
Change this:
<title>My title</title>
<link>http://mylink.com</link>
<description>My description</description>
To this
<title><![CDATA[My title]]></title>
<link><![CDATA[http://mylink.com]]></link>
<description><![CDATA[My Description]]></description>
The regex shouldn't be very hard to figure out. It works for me; I hope it helps you.
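The answer doesn't spell the regex out, but one possible form is sketched below; it assumes the tags are not nested, contain no existing CDATA sections, and sit on a single line (Java's . does not match newlines unless DOTALL is set):

// feedXml is a hypothetical variable holding the raw feed text;
// wrap the content of selected tags in CDATA sections.
String wrapped = feedXml.replaceAll(
        "<(title|link|description)>(.*?)</\\1>",
        "<$1><![CDATA[$2]]></$1>");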
The problem with UTF-8 is that it is a multibyte encoding. As such it needs a way to indicate when a character is formed by more than one byte (two, three, four, ...). The way of doing this is by reserving some byte values to signal multibyte characters. Thus the encoding follows some basic rules:
One byte characters have no MSB set (codes compatible with 7-bit ASCII).
Two byte characters are represented by sequence: 110xxxxx 10xxxxxx
Three bytes: 1110xxxx 10xxxxxx 10xxxxxx
Four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Your problem is that you may be reading a character string supposedly encoded as UTF-8 (as the XML encoding declaration states), but the byte chunk might not really be encoded in UTF-8 (it is a common mistake to declare something as UTF-8 but encode the text with a different encoding such as Cp1252). Your XML parser tries to interpret byte chunks as UTF-8 characters but finds something that does not fit the encoding rules (an illegal character). For instance, a lead byte of the form 110xxxxx must always be followed by a continuation byte of the form 10xxxxxx; a following byte of the form 01xxxxxx, 11xxxxxx or 00xxxxxx is illegal and triggers an encoding error.
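A quick way to check programmatically whether a byte chunk really is well-formed UTF-8 is to run it through a CharsetDecoder, which enforces exactly the rules above (a sketch; decoders obtained via newDecoder() report malformed input instead of silently replacing it):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

static boolean isValidUtf8(byte[] bytes) {
    try {
        StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(bytes));
        return true;
    } catch (CharacterCodingException e) {
        return false; // e.g. a 110xxxxx lead byte not followed by 10xxxxxx
    }
}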
This problem does not arise when fixed-length single-byte encodings are used. For instance, if you state in your XML declaration that the file uses Windows-1252 but it was actually saved in some other single-byte codepage, your only problem will be that non-ASCII characters (values > 127) render incorrectly.
The solution:
Try to detect the encoding by other means.
If you will always be reading data from the same source, you could sample some files and use an advanced text editor that tries to infer the actual encoding of the file (e.g. Notepad++, jEdit).
Do it programmatically. Preprocess the raw bytes before doing any actual XML processing.
Force the actual encoding at the XML processor.
Alternatively, if you do not mind about non-ASCII characters (no matter if strange symbols appear now and then), you could go directly to the second step and force XML processing with an ASCII-compatible 8-bit fixed-length encoding (ANSI, any Windows-XXXX codepage, Mac Roman, etc.). With your present code you could just try:
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), "ISO-8859-1");
Calling setInput(istream, null) already tells the pull parser to try to detect the encoding on its own. It obviously fails, due to the fact that there is an actual problem with the file. So it's not that your code is wrong; you can't be expected to parse all incorrect documents, whether ill-formed or with wrong encodings.
If, however, it's mandatory that you parse this particular document, what you can do is move your parsing code into a function that takes the encoding as a parameter and is wrapped in a try/catch block. The first time through, do not specify an encoding; if you get an encoding error, relaunch it with ISO-8859-1. If it's mandatory to have it succeed, repeat for other encodings; otherwise call it quits after two.
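A sketch of that shape; parseFeed here is a hypothetical wrapper around your existing pull-parser loop:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;

import android.util.Xml;

void parseWithFallback(URL url) throws IOException, XmlPullParserException {
    try {
        parseFeed(url.openStream(), null);          // first pass: auto-detect
    } catch (XmlPullParserException e) {
        parseFeed(url.openStream(), "ISO-8859-1");  // retry with a fixed encoding
    }
}

void parseFeed(InputStream in, String encoding)
        throws IOException, XmlPullParserException {
    XmlPullParser parser = Xml.newPullParser();
    parser.setInput(in, encoding);
    // ... the existing parsing loop goes here ...
}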
Before parsing your XML, you may tweak it and manually remove the accents.
Maybe not the best solution, but it will do the job.