I'm trying to filter accented words when the user searches for them in the local database, but I'm having problems, namely with the Slavic letters Č, Š and Ž. In my SQLite database I have a field "title" with the value "Želodček".
If I select LOWER(title) I always get back the same value, "Želodček", whilst other words are correctly lowercased. Only if the word begins with Č, Ž or Š does it not get lowercased; the problem only occurs with words that have a leading accented letter.
Database records
Stomach
Želodček
Uppercase with UPPER()
STOMACH
ŽELODčEK
Lowercase with LOWER()
stomach
Želodček
I've already tried setting the localization with setLocale(), with no luck. I also tried different collations like NOCASE, UNICODE and LOCALIZED, but nothing worked. I'm wondering why the first letter is not lowercased when I lowercase, and why the other accented letters stay lowercase when I uppercase.
I've worked around the problem for LIKE searches by replacing accented letters with their lowercase counterparts. But I have a problem with full-text (FTS3) searching because I can't use the same trick with MATCH.
-- works but it's a hack
SELECT title FROM articles WHERE REPLACE(LOWER(title),'Ž','ž') LIKE '%želodček%'
-- can't seem to get it work
SELECT title FROM articles WHERE title MATCH 'želodček' COLLATE NOCASE
Is there any solution to this or is there a bigger problem?
Update:
No optimal solution yet.
Un-optimal solution 1:
I decided to deal with the problem directly by changing data in the select query. While this doesn't work for all cases (and I would have to cover all accents) it suits my case for now. So I'm posting it:
-- LIKE query
SELECT title FROM articles WHERE REPLACE(REPLACE(REPLACE(LOWER(title),'Č','č'),'Š','š'),'Ž','ž') LIKE ? COLLATE NOCASE
-- MATCH query (FTS)
-- In this case I programmatically replace searched word with 2 word variation (one that starts with lowercase and one that starts with uppercase) ie: title='želodček OR Želodček'
SELECT title FROM articles WHERE title MATCH ? COLLATE UNICODE
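A minimal sketch of how that two-variant MATCH string could be built on the Java side (the helper name and the Slovenian locale are assumptions of this sketch, not part of the original code):

// Builds e.g. "želodček OR Želodček" for the FTS MATCH workaround described above
static String buildMatchQuery(String word) {
    String lower = word.toLowerCase(new Locale("sl"));
    if (lower.isEmpty()) {
        return lower;
    }
    String capitalized = Character.toUpperCase(lower.charAt(0)) + lower.substring(1);
    return lower + " OR " + capitalized;
}

// Usage with the FTS query above
Cursor c = db.rawQuery("SELECT title FROM articles WHERE title MATCH ?",
        new String[]{ buildMatchQuery(searchTerm) });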
Un-optimal solution 2:
As suggested by user CL., insert in normalized form (this didn't work for me because the normalized form was basically the original Unicode form). I took it further and insert the title stripped of accents (basically the ASCII form). As a general solution this is maybe better than solution 1, since there I only cover some accents.
But there are downsides:
the data doubles (one Unicode title and one ASCII title), which can be a problem if you have a lot of data.
some characters are not supported (e.g. Chinese characters will be gone after normalization and stripping).
ambiguity introduced by stripping accents (e.g. the words "zelo" and "želo" have different meanings but will both turn up when searching).
Here's the Java code for it:
// Gets you the ASCII version of the Unicode title, which you insert into a different column
// (uses java.text.Normalizer)
String titleAsciiName = Normalizer.normalize(title, Normalizer.Form.NFD)
        .replaceAll("[^\\p{ASCII}]", "");
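For completeness, a hedged sketch of the insert side ("title_ascii" is a hypothetical column name, not from the original schema):

// Store the original Unicode title and the stripped ASCII version side by side
ContentValues cv = new ContentValues();
cv.put("title", title);
cv.put("title_ascii", titleAsciiName);
db.insert("articles", null, cv);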
LIKE never uses a custom collation.
FTS can use a custom tokenizer, but you have to check whether unicode61 is available in all Android versions you want to support.
The Android database API does not allow you to create custom implementations of LIKE or of an FTS tokenizer.
You might want to store a normalized version of your strings in the database.
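A minimal sketch of what that could look like on the query side, assuming a hypothetical title_norm column that was filled with the same NFD + strip-marks routine at insert time:

// Normalize the user's search term exactly like the stored column, then compare
String q = Normalizer.normalize(userInput, Normalizer.Form.NFD)
        .replaceAll("\\p{M}", "")          // drop combining accent marks
        .toLowerCase(Locale.ROOT);
Cursor c = db.rawQuery(
        "SELECT title FROM articles WHERE title_norm LIKE '%' || ? || '%'",
        new String[]{ q });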
I discovered today that Android can't display a small handful of Japanese characters that I'm using in my Japanese-English dictionary app.
The problem comes when I attempt to display the character via TextView.setText(). All of the characters below show up as blank when I attempt to display them in a TextView. It doesn't appear to be an issue with encoding, though - I'm storing the characters in a SQLite database and have verified that Android can understand the characters. Casting the characters to (int) retrieves proper Unicode decimal escapes for all but one of the characters:
String component = cursor.getString(cursor.getColumnIndex("component"));
Log.i("CursorAdapterGridComponents", "Character Code: " + (int) component.charAt(0) + "(" + component + ")");
I had to use Character.codePointAt() to get the decimal escape for the one problematic character:
int codePoint = Character.codePointAt(component, 0);
I don't think I'm doing anything wrong, and as Strings are by default UTF-16 encoded, there should be nothing preventing them from displaying the characters.
Below are all of the decimal escapes for the seven problematic characters:
⺅ Character Code: 11909(⺅)
⺌ Character Code: 11916(⺌)
⺾ Character Code: 11966(⺾)
⻏ Character Code: 11983(⻏)
⻖ Character Code: 11990(⻖)
⺹ Character Code: 11961(⺹)
𠆢 Character Code: 131490(𠆢)
Plugging the first six values into http://unicode-table.com/en/ revealed their corresponding Unicode numbers, so I have no doubt that they're valid Unicode characters.
The seventh character could only be retrieved from a table of UTF-16 characters: http://www.fileformat.info/info/unicode/char/201a2/browsertest.htm. I could not use its 5-character Unicode number in setText() (as in "\u201a2") because, as I discovered earlier today, Android has no support for Unicode strings past 0xFFFF. As a result, the string was evaluated as "\u201a" + "2". That still doesn't explain why the first six characters won't show up.
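(Hedged aside: the supplementary code point itself can still be placed in a Java String; it is only the single \u escape that stops at 0xFFFF, so a surrogate pair or Character.toChars() is needed. Whether it then renders still depends on the device font.)

// U+201A2 as a Java string: either the surrogate pair "\uD840\uDDA2" or
String component = new String(Character.toChars(0x201A2));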
What are my options at this point? My first instinct is to just make graphics out of the problematic characters, but Android's highly variable DPI environment makes this a challenging proposition. Is using another font in my app an option? Aside from that, I really have no idea how to proceed.
Is using another font in my app an option?
Sure. Find a font that you are licensed to distribute with your app and that has these characters. Package the font in your assets/ directory. Create a Typeface object for that font face. Apply that font to the necessary widgets using setTypeface() on the TextView.
Here is a sample application demonstrating applying a custom font to a TextView.
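Roughly, the pieces look like this (the font file name is only an example; any licensed font that contains the glyphs will do):

// Load a font bundled under assets/fonts/ and apply it to the TextView
Typeface tf = Typeface.createFromAsset(getAssets(), "fonts/ipag.ttf");
textView.setTypeface(tf);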
This question has actually been asked several times, but I didn't manage to find an answer.
There's a set of SQLite tables which are read-only; I can't change their structure or redefine collation rules. The tables contain some international characters (Russian, Chinese, etc.).
I would like to get some case-insensitive selection like:
select name from names_table where upper(name) glob "*"+constraint.toUpperCase()+"*"
It works only when name is in the Latin/ASCII charset; for international characters it doesn't work.
SQLite's manual reads:
The upper(X) function returns a copy of input string X in which all
lower-case ASCII characters are converted to their upper-case
equivalent.
So the question is: how can I resolve this issue and convert international characters to upper/lower case?
This is a known problem in SQLite. You can redefine the built-in functions via the Android NDK, but that is not simple; look at this question.
Notice that the indexes of your tables will not work (for a UDF) and the query can be very slow.
Instead, you can store the data you search on in another column, in ASCII form.
For example:
"insert into names_table (name, name_ind) values ('"+name+"',"+"'"+toAsciiEquivalent(name)+"')"
name    name_ind
----------------
Имя     imya
Name    name
ыыы     yyy
and search the string using the name_ind column:
select name from names_table where name_ind glob "*"+toAsciiEquivalent(constraint)+"*"
This solution requires more space for data, but it is simple and fast.
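A hedged sketch of what a toAsciiEquivalent() helper might look like: Normalizer strips Latin accents, while Cyrillic (e.g. "Имя" -> "imya") needs an explicit transliteration map, shown here only for the letters from the example.

import java.text.Normalizer;
import java.util.HashMap;
import java.util.Map;

class Translit {
    private static final Map<Character, String> MAP = new HashMap<Character, String>();
    static {
        MAP.put('и', "i"); MAP.put('м', "m"); MAP.put('я', "ya"); MAP.put('ы', "y");
        // ...extend with the rest of the Cyrillic alphabet as needed
    }

    static String toAsciiEquivalent(String s) {
        StringBuilder sb = new StringBuilder();
        for (char ch : s.toLowerCase().toCharArray()) {
            String t = MAP.get(ch);
            sb.append(t != null ? t : ch);
        }
        // strip remaining combining accent marks: é -> e, ž -> z, ...
        return Normalizer.normalize(sb.toString(), Normalizer.Form.NFD)
                .replaceAll("\\p{M}", "");
    }
}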
Instead of providing full Unicode case support by default, SQLite provides the ability to link against external Unicode comparison and conversion routines. The application can overload the built-in NOCASE collating sequence (using sqlite3_create_collation()) and the built-in like(), upper(), and lower() functions (using sqlite3_create_function()). The SQLite source code includes an "ICU" extension that does these overloads. Or, developers can write their own overloads based on their own Unicode-aware comparison routines already contained within their project.
Reference: http://www.sqlite.org/faq.html
I'm using SearchView (or SearchManager?) to search the database for hits. It works fine, but the problem is if you search for words with special characters (č, ž, š - all supported by the keyboard), the search returns nothing, even though the word exists in the database.
For example: Word in database ("Računalnik"); search string ("Rač") - returns 0, search string ("Rac") - returns 0.
Is there a way to change search encoding, or to handle these searches some other way?
I think you have to use Unicode for those special characters.
I am trying to create a database for an android app including, in part, non-English words which require underlines and accents for proper spelling. I set my encoding for this package to utf-8, which allowed the accented characters to store and display properly. However, I cannot seem to get a single character underlined. It displays an empty box for an unrecognized character.
An example of my database helper to create the sqlite is as follows:
cv.put(ENGLISH, "to be alive");
cv.put(NATIVE, "okch_á_a or okchaha");
cv.put(PART_OF_SPEECH, "verb");
cv.put(AUDIO, "alive");
cv.put(VIDEO, "none");
cv.put(IMAGE_DEFAULT, "none");
cv.put(IMAGE_OPTIONAL, "none");
cv.put(IMAGE_TO_USE, "none");
db.insert("words", ENGLISH, cv);
That "_a_" is the best I can come up with so far, but the a should actually be an underlined character.
I tried HTML tags like <u> and </u>:
<u>a</u>
since that works with string arrays, but it displays as:
<u>a</u>
(the html is never interpreted).
I tried using:
"\u0332"
as explained at http://www.fileformat.info/info/unicode/char/332/index.htm, but that, too, is never interpreted, so it displays as:
a\u0332
I also tried:
&#818;
and:
&#x332;
in a similar manner, with similar lack of results.
Any ideas?
You can store your string in HTML format and call .setText(Html.fromHtml(somestring)) on the TextView where you want to display it.
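For example (reusing the NATIVE column from the question's snippet; a sketch of the suggestion rather than tested code):

// Store the markup as-is...
cv.put(NATIVE, "okch<u>á</u>a or okchaha");
// ...and interpret it when binding the row to the TextView
String stored = cursor.getString(cursor.getColumnIndex(NATIVE));
textView.setText(Html.fromHtml(stored));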
How can I change the font on android to allow to show special characters like "'" or "à"?
Actually the strings that contains these characters are stored in the sqlite database.
When you load the text into your TextView, will this work for you?
textView.setText(new String(textFromDatabase, "UTF-8"));
This uses the String constructor that takes a charset name. You can change "UTF-8" to a different character encoding; also, look at the Javadoc for String:
String(byte[] bytes, String charsetName) -
Constructs a new String by decoding the specified array of bytes using the specified charset.
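Note that this constructor only applies when the column is read as raw bytes; a hedged variant of the idea ("text_column" is a placeholder name, and java.nio.charset.StandardCharsets needs API 19+):

// Read the column as a blob and decode it explicitly as UTF-8
byte[] raw = cursor.getBlob(cursor.getColumnIndex("text_column"));
textView.setText(new String(raw, StandardCharsets.UTF_8));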
The Droid font supports "'", "à" and many other characters. I use them all the time (pt language).
Actually, I'm quite sure it supports all of Basic Latin, the Latin-1 Supplement and the first Latin Extended range. It also supports many others like Hebrew, etc., although I'm not sure if that changed between SDK versions.
You can also download the Unicode Map app in the Market to check which characters are available in your particular device. I also store unicode text in sqlite all the time, and still I don't have any problems.
One thing to consider: check that the encoding you are setting matches the encoding of your source. It may be a text file or a URL... an example:
BufferedReader b = new BufferedReader(new InputStreamReader(url.openStream(), MY_ENCODING));
Are you sure it's not a problem somewhere?
You should use '' instead of ' when storing it in the SQLite database.
For example, if you want to store 5 o'clock in the database, you have to write it as 5 o''clock. Take a look here for more information about it.
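A small sketch of the same point ("events" and "title" are placeholder names): android.database.DatabaseUtils.sqlEscapeString() does the quote doubling for you, and bound parameters avoid it entirely.

// Either let the framework escape the literal...
String quoted = DatabaseUtils.sqlEscapeString("5 o'clock");   // -> '5 o''clock'
// ...or bind the value so no manual escaping is needed at all
db.execSQL("INSERT INTO events (title) VALUES (?)", new Object[]{ "5 o'clock" });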
By default Android SQLite uses UTF-8.
I had this problem because when I populated the database on the first launch I used a txt file with another charset.
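A minimal sketch of reading such a seed file with an explicit charset so it matches what is stored in the database ("seed.txt" is a placeholder name):

// Decode the asset explicitly as UTF-8 instead of relying on the platform default
BufferedReader reader = new BufferedReader(new InputStreamReader(
        context.getAssets().open("seed.txt"), StandardCharsets.UTF_8));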