I am wondering if there's a way to tell whether a given text is human readable. By human readable, I mean: it has some meaning and a format like an article written by somebody, or at least it was generated by a software translator and is intended to be read by a human.
Here's the background story: recently I have been making an app that allows users to upload short texts to a database. At an early stage of deployment I noticed that some users kept uploading corrupted text because of an encoding problem. That problem has since been fixed, but it left me wondering whether there's a way to pick out non-human-readable text before serving it back to users.
Any advice will be appreciated. The scope might be too large to include other languages, so at the moment let's limit the discussion to English only.
You can try a language identification tool, or something similar.
Basically you have to count the characters, or groups of characters (character n-grams), and compare the distribution of letters in the submitted text with the distribution of letters in a collection of texts written in good English. (Make sure that such a collection of texts is representative of the expected input.)
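As a rough sketch of that idea in Kotlin, you could compare the letter (unigram) distribution of the input against approximate English letter frequencies using cosine similarity; the reference frequencies and the threshold below are illustrative, and you would derive your own from a representative corpus:

import kotlin.math.sqrt

// Approximate relative frequencies of the letters a..z in English text.
// These values are illustrative; derive your own from a reference corpus.
val englishFreq = doubleArrayOf(
    8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.15, 0.77, 4.0, 2.4,
    6.7, 7.5, 1.9, 0.095, 6.0, 6.3, 9.1, 2.8, 0.98, 2.4, 0.15, 2.0, 0.074
)

// Cosine similarity between the input's letter distribution and English.
fun englishLikeness(text: String): Double {
    val counts = DoubleArray(26)
    text.lowercase().forEach { c -> if (c in 'a'..'z') counts[c - 'a']++ }
    if (counts.sum() == 0.0) return 0.0
    val dot = counts.indices.fold(0.0) { acc, i -> acc + counts[i] * englishFreq[i] }
    val normA = sqrt(counts.fold(0.0) { acc, v -> acc + v * v })
    val normB = sqrt(englishFreq.fold(0.0) { acc, v -> acc + v * v })
    return dot / (normA * normB)
}

// Treat very low similarity as "probably not readable English".
fun looksLikeEnglish(text: String, threshold: Double = 0.8): Boolean =
    englishLikeness(text) >= threshold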
Continuing the n-gram approach, you might also want to try a dictionary-based approach and check for the presence of 'stop words' (e.g. 'the', 'a', 'an', 'of') in the input text.
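A minimal Kotlin sketch of the stop-word check; the word list and the cut-off ratio are illustrative and would need tuning:

// A tiny sample of common English stop words; a real list would be larger.
val stopWords = setOf("the", "a", "an", "of", "and", "to", "in", "is", "it", "that")

// Returns true if a reasonable share of the tokens are common English words.
fun containsEnglishStopWords(text: String, minRatio: Double = 0.1): Boolean {
    val tokens = text.lowercase().split(Regex("[^a-z]+")).filter { it.isNotEmpty() }
    if (tokens.isEmpty()) return false
    val hits = tokens.count { it in stopWords }
    return hits.toDouble() / tokens.size >= minRatio
}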
Most NLP libraries will do the job (spaCy is a very common one). You can also go for language detection: langdetect will support you with this
(https://pypi.org/project/langdetect/), as will many others. If you need something less language-specific (more math than linguistics), you could look into phonotactics (with BLICK for Python: https://github.com/mmcauliffe/python-BLICK), which looks at how the order of characters is constructed in a string.
Do a hexdump and make sure each character is less than or equal to 0x7f.
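The same check in Kotlin, if you'd rather do it in code; note that this only catches non-ASCII characters, not garbled text that happens to be ASCII:

// True when every character is plain 7-bit ASCII (code point <= 0x7F).
fun isPlainAscii(text: String): Boolean = text.all { it.code <= 0x7F }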
Background
Today I noticed that in Google's Contacts app, if you have both English and Hebrew contacts and you switch to the English locale as the main one, the first contacts listed are in English:
But if you switch to the Hebrew locale as the main one, the first contacts listed are in Hebrew:
The problem
I can't see which functions are used to do that. I tried searching the Internet for this behavior and how it's done, but couldn't find anything.
Comparing the numeric values of characters will always return the same result, so the ordering here has to be something more dynamic.
What I've found
I thought this would help me:
val unicodeLocaleKeys = Locale.getDefault().unicodeLocaleKeys
But it always returns an empty set.
I also searched for such a function in classes such as Character, Unicode*, and String. I don't think it exists there.
The question
How does the Google Contacts app manage to sort the contacts by the current locales?
Is it possible perhaps to get the whole set of characters used by a specific locale?
Maybe it's possible to compare characters while giving an order of priority to locales (users can choose multiple locales)?
Maybe you are looking at the wrong thing.
The Contacts app does not seem to have a per-locale alphabet built in; it just uses collation (locale-aware sorting) and displays the first character. It probably also detects "symbols" (via Unicode categories) and puts all symbols into the same bin.
You can also get the script name (and the direction) from Unicode. You may find the alphabet itself in a few places (e.g. Wikipedia), but that will fail for Chinese and other scripts with huge character inventories. The problem is that the "alphabet" is language specific: in some European countries you may have (some) accented characters, or character groups interpreted as a single letter (in phone books too).
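For the script and direction part, the standard Character APIs already expose both; a small Kotlin sketch (Character.UnicodeScript requires Java 7+, i.e. Android API 24+, so treat that as an assumption):

// Report the Unicode script and whether the first letter is right-to-left.
fun describeFirstChar(name: String): String {
    if (name.isEmpty()) return "empty"
    val cp = name.codePointAt(0)
    val script = Character.UnicodeScript.of(cp)  // e.g. LATIN, HEBREW, HAN
    val rtl = Character.getDirectionality(cp) == Character.DIRECTIONALITY_RIGHT_TO_LEFT
    return "script=$script, rightToLeft=$rtl"
}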
So, if you want to keep things simple:
use collation and just take the first character
do the same, but strip the accent and check whether the accented letter has the same priority in alphabetical order as the base letter: if so, ignore the accent; otherwise keep it (see e.g. Å and its place in the alphabet). Maybe do the same with two-letter groups, e.g. ll in Spanish in the past.
find a library that handles such complex cases (and that is updated regularly). This will probably also help for Chinese and other languages with a huge number of characters.
EDIT: in short, instead of sorting strings normally with str1.compareTo(str2), you should use:
Collator.getInstance().compare(str1,str2)
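For example, sorting a list of contact names with a locale-aware collator in Kotlin might look like this (the names and locale are just placeholders):

import java.text.Collator
import java.util.Locale

// Collator implements Comparator and sorts by the locale's collation rules,
// unlike String.compareTo, which compares raw code units.
fun sortNamesForDisplay(names: List<String>, locale: Locale = Locale.getDefault()): List<String> {
    val collator = Collator.getInstance(locale)
    return names.sortedWith(collator)
}

// e.g. sortNamesForDisplay(listOf("Øystein", "Alice", "Émile"), Locale.ENGLISH)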
What I'm trying to do right now is show a phone number correctly under a right-to-left layout. I want +111111111, but it currently appears as 111111111+. I found a solution using the LRM (left-to-right mark), which is the Unicode control character '\u200E'.
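For example (the view name is just a placeholder):

// '\u200E' is the LEFT-TO-RIGHT MARK: prefixing it keeps the leading '+'
// on the left even when the surrounding layout is right-to-left.
const val LRM = "\u200E"

fun formatPhoneForDisplay(raw: String): String = LRM + raw

// e.g. phoneTextView.text = formatPhoneForDisplay("+111111111")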
There may be several formats for phone numbers in different parts of the world, like XXX-XXX-XXXX. To prevent further bugs, I have to understand how those control characters work, especially the ones that change the direction of strings.
In my understanding, for common characters:
strings are stored as bytes in memory;
the editor/textview loads the bytes and looks them up in Unicode;
the editor/textview shows those Unicode characters in the form of fonts.
So at which step do control characters like the LRM take effect? How can I make sure that using them does not cause further bugs?
I hope I have made the question clear.
I hope this question isn't going to be down-flagged for not showing any actual code, but that's the core of this situation. I simply have no clue where to start solving this issue, even after trying several combinations of keywords both on Google and here on SO.
My client suddenly decided that half of the Android app I'm developing for him has to be in Chinese, so after making some changes in the database so that some fields can accept Simplified Chinese character sets, I need to make sure that my client (living in Holland) only uses those characters in that particular EditText field in the app. (There are more database fields that now only allow Simplified Chinese, but their values come from a dropdown list in the app, so I don't need to worry about wrong characters for them.)
So how would one make sure that only Simplified Chinese is used in an EditText field?
Here is a project in Ruby that attempts to detect whether characters are Traditional Chinese, Simplified Chinese, or Japanese (maybe others?): https://github.com/jpatokal/script_detector
This detection is based on the Unihan Database, in which there is a file called Unihan_Variants.txt. (Download zip file containing this text file here.)
Conceivably, you could parse the txt file into a lookup table and check the Unicode value as the text is entered during onTextChanged() for your EditText. However, the readme of the project linked above states: "It is important to understand that this requires long sections of text to work reliably, since a single character or even several characters may be valid Japanese, traditional Chinese and simplified Chinese simultaneously." So, weeding out characters on an individual basis might prove difficult.
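If you still want to try it, here is a very rough Kotlin sketch of that lookup-table idea. It assumes the usual tab-separated Unihan format (U+XXXX, field name, value) and treats "has a traditional variant, or is listed as someone's simplified variant" as "simplified", which, per the caveat above, will misclassify plenty of characters:

import java.io.File

// Build an approximate set of "simplified" code points from Unihan_Variants.txt.
// Lines look like: U+4E07<TAB>kTraditionalVariant<TAB>U+842C
// This is a crude heuristic, not a reliable classifier (see the caveat above).
fun loadSimplifiedSet(variantsFile: File): Set<Int> {
    val simplified = mutableSetOf<Int>()
    variantsFile.forEachLine { line ->
        if (line.startsWith("#")) return@forEachLine
        val parts = line.split('\t')
        if (parts.size < 3) return@forEachLine
        val cp = parts[0].removePrefix("U+").toIntOrNull(16) ?: return@forEachLine
        when (parts[1]) {
            // A character with a traditional variant is (usually) a simplified form.
            "kTraditionalVariant" -> simplified.add(cp)
            // Characters listed as someone's simplified variant are simplified forms too.
            "kSimplifiedVariant" -> parts[2].split(' ')
                .mapNotNull { it.removePrefix("U+").toIntOrNull(16) }
                .forEach { simplified.add(it) }
        }
    }
    return simplified
}

// Check each code point of the entered text against the set, e.g. in onTextChanged().
fun isProbablySimplifiedChinese(text: String, simplified: Set<Int>): Boolean {
    var i = 0
    while (i < text.length) {
        val cp = text.codePointAt(i)
        if (!Character.isWhitespace(cp) && cp !in simplified) return false
        i += Character.charCount(cp)
    }
    return true
}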
In Android KitKat, if I choose Settings > Language & Input > Language, the first choice I am offered is [Developer] Accented English. This replaces each Roman letter with an accented version. You can find a list of all the character mappings here. (It helps if you can read French).
What is the purpose of this setting? Is it just to show how characters can be mapped to other characters? Or can it be used productively (to create specific phonemes in text-to-speech output, for example)?
It's a technique called 'Pseudolocalization', and it's used to help test that an app is handling aspects of localization correctly.
The idea is that instead of waiting for an app's string resources to be translated into other languages - which could take some time - a "fake" pseudo-language is used instead. If the app behaves well against this fake translation, then chances are it will perform well with actual translations. There are different variations of pseudolocalization out there, but most tend to do some of the following:
Add brackets [ ... ] or other delimiters around the string: this makes it easier to ensure that strings are not getting clipped at either end.
Replace regular characters with accented characters: if you see a string without accented characters, then that's a sign that it might be hardcoded instead of being treated as a localizable resource. (In the past, this was also used to ensure that apps could handle non-ASCII characters correctly and didn't lose data in code page translation, though this is less of an issue now that modern platforms support Unicode.)
Add padding to the string: this is to simulate languages such as German, which often have longer translations for the corresponding English string. If the padded string gets truncated instead of wrapping or flowing, then the German string will likely do the same.
Add known-to-be-tricky characters to act as 'canaries': on some platforms, symbols from specific parts of the Unicode range may be added to ensure that they are handled or supported properly. For example, a Chinese character might be added to ensure that Chinese fonts are supported: if this ends up showing as an empty square, then that would indicate a problem. Other common 'canary' characters include code points from outside the BMP, or combining characters.
One advantage of using pseudolocalization over actual translation is that the testing can be performed by someone who does not understand the target language: "[Àççôûñţ Šéţţîñĝš___]" still visually appears similar to the original English text "Account Settings". If you try using it with a screen reader such as TalkBack, or otherwise send pseudolocalized text to text-to-speech, you'll likely get nonsense, since it will try to treat the accented characters as actual accented characters.
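To make the mechanics concrete, here is a tiny Kotlin sketch of a pseudolocalizer along the lines described above; the accent map is abbreviated and the padding rule is an arbitrary choice:

// Tiny, illustrative accent map; a real pseudolocalizer covers the full alphabet.
val accentMap = mapOf(
    'a' to 'à', 'c' to 'ç', 'e' to 'é', 'i' to 'î', 'o' to 'ô',
    'u' to 'û', 'n' to 'ñ', 's' to 'š', 'A' to 'À', 'S' to 'Š'
)

fun pseudolocalize(original: String): String {
    // 1. Replace regular letters with accented look-alikes.
    val accented = original.map { accentMap[it] ?: it }.joinToString("")
    // 2. Pad to simulate longer translations (roughly +30%).
    val padding = "_".repeat(original.length / 3)
    // 3. Wrap in delimiters so clipping is visible.
    return "[$accented$padding]"
}

// pseudolocalize("Account Settings") == "[Àççôûñt Šéttîñgš_____]"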
Context
I am currently building an SDK/service through which applications can access voice-based commands.
For the moment I'm using Android PocketSphinx to detect a keyword (which is "wake"), and then analysing the whole sentence with Google voice recognition.
But my problem is that I want to make it all work offline! So I'm on my way to replacing Google voice recognition with full use of PocketSphinx...
My Problem
The user defines the word they want to detect, and previously I just compared that word with what Google voice speech-to-text returned...
So now I want to update the grammar that PocketSphinx uses with just the word given by the user, which is problematic because (according to the Javadoc of Android PocketSphinx) it can only take grammar files!
Question
Is there any way I can update the Android PocketSphinx grammar on the fly?
Edit
I forgot to talk about this method:
public void addFsgSearch(String searchName, FsgModel fsgModel) (in github pocketsphinx)
which seems not to take a grammar file like the other grammar setter methods, but rather a class/struct? The problem is that it isn't documented...
If you need to detect just one word, consider using addKeywordSearch.
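A rough Kotlin sketch of that route, based on the usual pocketsphinx-android setup; the model and dictionary file names, search name, and threshold are assumptions borrowed from the demo project and will need tuning:

import edu.cmu.pocketsphinx.SpeechRecognizer
import edu.cmu.pocketsphinx.SpeechRecognizerSetup
import java.io.File

// Build a recognizer and register a keyword search generated on the fly
// from the single word the user chose.
fun buildRecognizerForWord(assetsDir: File, cacheDir: File, userWord: String): SpeechRecognizer {
    val recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(File(assetsDir, "en-us-ptm"))        // acoustic model dir (assumed name)
        .setDictionary(File(assetsDir, "cmudict-en-us.dict"))  // pronunciation dictionary (assumed name)
        .getRecognizer()

    // addKeywordSearch takes a plain text file: one phrase per line,
    // with an optional /threshold/ to tune recognizer sensitivity.
    val kwsFile = File(cacheDir, "user-keyword.lst")
    kwsFile.writeText("$userWord /1e-20/\n")
    recognizer.addKeywordSearch("user-keyword", kwsFile)
    return recognizer
}

// Later: recognizer.addListener(listener); recognizer.startListening("user-keyword")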
I had the same issue, and more. Perhaps these undocumented discoveries can help you.
Using the overloaded method "addGrammarSearch(String name, String fsgString)" allows you to put your entire FSG or JSGF grammar definition in a string, rather than sourcing it from a file, if you wish (only a small file open/read time advantage).
"addKeyphraseSearch(String name, String keyphrase)" // only accommodates ONE WORD or PHRASE, no threshold, no grammar.
"addKeywordSearch(String name, File keywordList)" // accommodates MULTIPLE key WORDS or PHRASES, adding thresholds for each.
Several caveats include:
The grammar searches use JSGF format, parsing the defined syntax correctly. However:
1.1 Tags are not implemented
1.2 It is unclear whether weights (which use the same // syntax as in keyword lists) actually apply recognizer thresholds (they have different meanings in PocketSphinx versus Sun Microsystems).
1.3 Rule names are not implemented either.
1.4 In other words, you provide a grammar in JSGF, and your Hypothesis and FinalResult strings still give you the recognized lowest-level phrase detected in the grammar -- NOT the grammar tags, nor even the rule metasymbols.
1.4.1 IMHO, that makes grammars pointless, and actually less efficient and less flexible than keyword list files (which contain actual words or phrases), given the option those files provide to set a threshold for recognizer scrutiny per phrase. Further, if the RULE & TAG names are not returned, then there is zero information about the structure of the grammar that was recognized. So, as syntactically complex and flexible as it is, I do not see the advantage of bothering with a grammar definition at all in PocketSphinx; the best multiple-keyphrase approach is simply to expand your grammar into a keyword list file. Please correct me if I am mistaken.
Search methods, whether containing the word "phrase" or the word "word", actually accommodate both phrases and single words.
I have assumptions re: the undocumented fsgModel class, but we're not allowed to give assumptions.
Though this may help clarify some aspects, the above fails to add any functionality to the package. Lastly, the C source code has methods getRuleName() and getTagName(), but discussions on this topic between users and developers seem to stonewall -- there is no motivation to add tag or rule-name associations to recognized words or phrases in a defined grammar, apparently because the developers believe grammars are old-school and nobody uses them anymore.