I have written an Android app for fun that has an internal SQLite database. After gathering a lot of data with an HTML parser, I found that the numbers saved as text in the database are written with English (Western) digits, so queries typed with Persian digits, which people in my country use, return nothing for the numbers:
String q = "SELECT * FROM studentIDs WHERE field1 LIKE '%"+name+"%' OR field2 LIKE '%"+name+"%'";
While this works fine on field1, which holds strings, it won't work on field2, which holds numbers stored as strings. How should I perform a language-independent query on numbers?
I can't change the stored characters from English digits to anything else, because I want to support both languages, and I can't change the column's type to integer because some records are English names.
Sorry about my English, and thanks in advance.
Since your data type is String, you can store any character sequence in it (depending on your SQLite encoding configuration, e.g. UTF-8), and SQLite doesn't care, and shouldn't care, about its contents.
You have a simple solution here:
Just write a simple mapper and run it before any query to the database:
// Map Persian (Extended Arabic-Indic) digits to their ASCII equivalents.
String mapToEn(String query) {
    return query
            .replace('۰', '0')
            .replace('۱', '1')
            .replace('۲', '2')
            .replace('۳', '3')
            .replace('۴', '4')
            .replace('۵', '5')
            .replace('۶', '6')
            .replace('۷', '7')
            .replace('۸', '8')
            .replace('۹', '9');
}
And use it on your query or query parameters before executing the query against the database:
Result query(mapToEn(query));
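On Android, for example, this could look like the following (a minimal sketch, assuming an open SQLiteDatabase db and the studentIDs table from the question; the ? placeholders also protect against SQL injection):
String term = "%" + mapToEn(name) + "%";
Cursor c = db.rawQuery(
        "SELECT * FROM studentIDs WHERE field1 LIKE ? OR field2 LIKE ?",
        new String[]{ term, term });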
Edit:
Since you said
I can't change the column's type to integer because some records are English names
I assumed the data in field1 was a combination of numbers and characters; now that you have clarified that each value is either numeric or a plain string, there is another solution.
Database Schema Migration
Since your database schema no longer matches your requirements, you have to make some changes to it. You need to separate the two kinds of data you have been entering, simply by adding two new columns, field_str and field_num.
Basically you should write a database migration that converts the field1 column's data from String to Integer where it is an integer, without losing any data. Here are the steps you should follow:
Add an Integer and a String column to your table, respectively field_num and field_str.
Iterate through the table, parse all the Strings in field1 to Integer, insert the parseable ones into the field_num column, and insert the unparseable ones into the field_str column.
Change your queries accordingly.
Since SQLite (prior to version 3.35) does not support dropping columns, you either have to add the new columns to your existing table and leave the old data where it is, or you can create a new table and migrate all of your data to the new table.
Here is a hypothetical situation:
sqlite> .schema
CREATE TABLE some_table(
  id INTEGER PRIMARY KEY,
  field1 TEXT,
  field2 TEXT
);
sqlite> select * from some_table;
id          field1      field2
----------  ----------  ------
0           1234        name
1           bahram      name
Now create another table
sqlite> CREATE TABLE new_some_table(
...> id INTEGER PRIMARY KEY,
...> field_str TEXT,
...> field_num INTEGER,
...> field2 TEXT
...> );
Now copy your data from the old table
sqlite> INSERT INTO new_some_table(id, field_str, field2)
...> SELECT id, field1, field2 FROM some_table;
sqlite> UPDATE new_some_table
...> SET field_num = CAST(field_str AS INTEGER), field_str = NULL
...> WHERE CAST(field_str AS INTEGER) = field_str;
(An UPDATE is used for the second step rather than a second INSERT so the id primary key is not inserted twice. Note also that typeof(field1) cannot be used as the test: field1 is declared TEXT, so a value such as '1234' is stored as text and typeof() returns 'text'; comparing the value with its CAST to INTEGER picks out the purely numeric rows instead.)
Now you can query your table based on what type of data you have.
Consider using an ORM which provides a migration tool, like Google's Room or DBFlow.
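If you are doing the migration from the app itself, it could be run in SQLiteOpenHelper.onUpgrade along these lines (a minimal sketch using the hypothetical table and column names from above; adapt the statements to your real schema):
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    // Create the new table with separate string and numeric columns.
    db.execSQL("CREATE TABLE new_some_table("
            + "id INTEGER PRIMARY KEY, field_str TEXT, field_num INTEGER, field2 TEXT)");
    // Copy everything across as text first.
    db.execSQL("INSERT INTO new_some_table(id, field_str, field2) "
            + "SELECT id, field1, field2 FROM some_table");
    // Move the purely numeric values into field_num.
    db.execSQL("UPDATE new_some_table "
            + "SET field_num = CAST(field_str AS INTEGER), field_str = NULL "
            + "WHERE CAST(field_str AS INTEGER) = field_str");
    // Replace the old table with the new one.
    db.execSQL("DROP TABLE some_table");
    db.execSQL("ALTER TABLE new_some_table RENAME TO some_table");
}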
As far as SQLite is concerned, your values are just character sequences stored in the database file; it does not interpret the digits inside a text value for you, so '۱۲۳' and '123' are two different strings.
The same applies to every language: whether a value is a number or a word, it is stored as the text you handed over. For each language you would therefore need its own converter.
So the idea for making your database usable for your own and every other country's language is to encode and decode: convert any country's digits to English (ASCII) digits while saving into the database, and convert back for display, as sketched below.
You would have to provide such a conversion method for every language you want to support, or make your own utility class so you can use it universally. Hope you find this useful.
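For the display direction, a single hypothetical helper covers any script whose decimal digits are contiguous in Unicode, as Persian's are (a sketch, not part of any library):
// Convert ASCII digits in a string to another script's digits for display.
// zeroDigit is that script's zero, e.g. '۰' (U+06F0) for Persian digits.
static String localizeDigits(String s, char zeroDigit) {
    StringBuilder sb = new StringBuilder(s.length());
    for (char ch : s.toCharArray()) {
        if (ch >= '0' && ch <= '9') {
            sb.append((char) (zeroDigit + (ch - '0')));
        } else {
            sb.append(ch);
        }
    }
    return sb.toString();
}
For example, localizeDigits("Room 42", '۰') returns "Room ۴۲".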
My idea is to make a semi-translator for numbers: first check each character like this to see whether it is a digit in any language:
static boolean isDigit(char ch) // from the Java docs
Determines if the specified character is a digit.
Then use getNumericValue(char).
The nice thing about getNumericValue(char) is that it also works with strings like "٥" and "५", where ٥ and ५ are the digit 5 in Eastern Arabic and Devanagari (Hindi/Sanskrit) respectively.
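Putting the two together, a normalizer along these lines (a hypothetical helper, in the same spirit as the mapToEn mapper above but not limited to Persian digits) could be applied to user input before querying:
// Replace every decimal digit from any script with its ASCII equivalent;
// non-digit characters pass through unchanged.
static String normalizeDigits(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (char ch : s.toCharArray()) {
        if (Character.isDigit(ch)) {
            sb.append((char) ('0' + Character.getNumericValue(ch)));
        } else {
            sb.append(ch);
        }
    }
    return sb.toString();
}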
Related
I have an epoch time value like 1549251913000, and I save this value to SQLite. I create the table like the following:
CREATE TABLE TABLE_BOOKMARK (COLUMN_ID INTEGER PRIMARY KEY AUTOINCREMENT, COLUMN_TITLE TEXT, COLUMN_SOURCE TEXT, COLUMN_DATEANDTIME INTEGER, COLUMN_GUID TEXT);
That value goes into COLUMN_DATEANDTIME, which has INTEGER type, but when I take the value back, it doesn't match what I expected: it becomes -1231280856.
Please give me some advice, thanks.
I have tried this solution, but there is still a mismatch when I get the value from SQLite (it doesn't seem to solve my problem).
Create the column with the INTEGER datatype and put the long value into that INTEGER column.
Make sure that when you retrieve the value from the cursor, you read it as a long:
Cursor cursor = db.rawQuery("SELECT * FROM " + TABLE_NAME, null);
cursor.moveToFirst(); // position the cursor on the first row before reading
// getLong (not getInt), so the 64-bit value is not truncated
long value = cursor.getLong(cursor.getColumnIndexOrThrow(COLUMN_DATEANDTIME));
For more : SQLite DataType Doc
Just to clarify: the column type is, with one exception, largely irrelevant, as any type of data can be stored in any type of column.
The exception is the rowid column, or an alias of the rowid column; such a column MUST store an integer. By integer this means a 64-bit signed integer, which thus encompasses a Java long.
The column type itself is also flexible; for example, CREATE TABLE mytable (mycolumn a_pretty_weird_column_type) is valid (as would be LONG). Declared types are converted to a column affinity according to 5 rules.
Putting the above together, using :-
CREATE TABLE IF NOT EXISTS mytable (mycolumn a_pretty_weird_column_type);
DELETE FROM mytable;
INSERT INTO mytable VALUES
(1549251913000),
(999999999999999),('Fred'),(x'010203040506070809'),(0.234567),(null),
('999999999999999'), -- Note: although supplied as TEXT, mycolumn's affinity is effectively NUMERIC, so this is stored as INTEGER
('0.234567') -- As above but stored as REAL
;
SELECT
*,
typeof(mycolumn) AS coltype, -- The column type (note as per value not column definition)
hex(mycolumn) AS as_hex, -- Convert column to a hex representation of the data
CAST(mycolumn AS TEXT) AS as_text, -- follow rules
CAST(mycolumn AS INTEGER) AS as_integer,
CAST(mycolumn AS REAL) AS as_real,
CAST(mycolumn AS NUMERIC) AS as_numeric,
CAST(mycolumn AS BLOB) AS as_blob
FROM mytable;
results in :-
The bible as such is Datatypes In SQLite Version 3.
but when I take the value back, it doesn't match what I expected: it
becomes -1231280856
The Issue
As such, your issue has nothing to do with the column type, nor with SQLite itself; rather it's due to using the Cursor's getInt method instead of the getLong method.
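As a quick check (a minimal standalone sketch using the value from the question), truncating the stored 64-bit value to a 32-bit int reproduces exactly the number reported:
long stored = 1549251913000L;  // the epoch-millisecond value written to the INTEGER column
int truncated = (int) stored;  // what getInt effectively does
System.out.println(truncated); // -1231280856
System.out.println(stored);    // 1549251913000, which is what getLong returns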
One answer says:
It's always better to store date and time in the form of Text in SQLite.
This is incorrect. From a space point of view, and therefore for the underlying efficiency, a numeric representation of the date and time, i.e. a 64-bit signed integer (a maximum of 8 bytes) as per SQL As Understood By SQLite - Date And Time Functions - Time Strings, is more efficient than storing the 19 bytes of text needed for an accuracy down to a second.
As per the docs, SQLite does not have a default storage class for date and time. It's always better to store date and time in the form of Text in SQLite. You can then parse them into a Date at runtime whenever you want to use them.
How can I find the tables that have a column of Blob type in SQLite? I need to get the names of the tables that have a blob-typed column, and then I want to see the total number of records where the blob is not empty.
If you wanted tables that have a column defined as a blob then you could use
SELECT * FROM sqlite_master WHERE sql LIKE '%blob%';
as the basis for determining the tables. e.g. this could return results such as :-
However, this does not necessarily find all values that are stored as blobs. This is because with the exception of the rowid column or an alias thereof, any type of value (blob included) can be stored in any column.
e.g. consider the following :-
DROP TABLE IF EXISTS not_a_blob_table;
CREATE TABLE IF NOT EXISTS not_a_blob_table (col1 TEXT, col2 INTEGER, col3 REAL, col4 something_or_other);
INSERT INTO not_a_blob_table VALUES
('test text',123,123.4567,'anything'), -- Insert using types as defined
(x'00',x'12',x'34',x'1234567890abcdefff00') -- Insert with all columns as blobs
;
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4) FROM not_a_blob_table;
This results in :-
If you want to find all blobs then you would need to process all columns from all rows of all tables based upon a check for the column type. This could perhaps be based upon :-
SELECT typeof(col1),typeof(col2),typeof(col3),typeof(col4),* FROM not_a_blob_table
WHERE typeof(col1) = 'blob' OR typeof(col2) = 'blob' OR typeof(col3) = 'blob' OR typeof(col4) = 'blob';
Using the table above this would result (only the 2nd row has blobs) in :-
A further complication is what you mean by not empty: NULL, obviously, but what about x'00'? Or what if you used a default of zeroblob(0)?
zeroblob(N)
The zeroblob(N) function returns a BLOB consisting of N bytes of 0x00. SQLite manages these zeroblobs very efficiently. Zeroblobs can
be used to reserve space for a BLOB that is later written using
incremental BLOB I/O. This SQL function is implemented using the
sqlite3_result_zeroblob() routine from the C/C++ interface.
If NULL, though, the value wouldn't have a type of blob; instead its type would be null, which could complicate matters when checking for all values stored as blobs.
You may wish to consider having a look at the code from Are there any methods that assist with resolving common SQLite issues?
as this could well be the basis for what you want.
You may also wish to have a look at typeof(X) and zeroblob(N).
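To then count the non-empty blob values in a given column from Android, something along these lines could be used (a minimal sketch, assuming an open SQLiteDatabase db; the table and column names come from the example above, and 'not empty' is taken to mean a blob longer than zero bytes):
Cursor c = db.rawQuery(
        "SELECT count(*) FROM not_a_blob_table "
                + "WHERE typeof(col4) = 'blob' AND length(col4) > 0",
        null);
c.moveToFirst();
long nonEmptyBlobs = c.getLong(0);
c.close();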
I am using SQLite in an Android app which stores, amongst other things, a table that records the mood of the user. The table schema is shown below
CREATE TABLE moods
(
dow INTEGER,
tsn INTEGER,
lato INTEGER,
agitation TEXT DEFAULT '{}',
preoccupancy TEXT DEFAULT '{}',
tensity TEXT DEFAULT '{}',
taps TEXT DEFAULT '{}'
);
Unlike, say, Postgres, SQLite does not have a dedicated jsonb storage type. To quote
Backwards compatibility constraints mean that SQLite is only able to store values that are NULL, integers, floating-point numbers, text, and BLOBs. It is not possible to add a sixth "JSON" type.
SQLite JSON1 extension documentation
On the SQLite commandline processor I can populate this table using a simple INSERT statement such as this one
INSERT INTO moods (dow,tsn,lato,agitation,preoccupancy,tensity,taps)
VALUES(1,0,20,'{"A2":1}','{"P4":2}','{"T4":3}','{"M10":4}');
since it accepts both single and double quotes for strings.
In order to accomplish the same thing in my Hybrid Android app I do the following
String ag = JOString("A" + DBManage.agitation,1);
String po = JOString("P" + (DBManage.preoccupancy + 15),1);
String te = JOString("T" + DBManage.tensity,1);
which returns strings such as {"P4":2} etc. My next step is as follows
ContentValues cv = new ContentValues();
cv.put("dow",0);
cv.put("tsn",1);
cv.put("lato",2);
cv.put("agitation",ag);
cv.put("preoccupancy",po);
cv.put("tensity",te);
long rowid = db.insert("moods",null,cv);
which works. However, upon examining the stored values I find that what has gone in is in fact
{"dow":0,"tsn":5,"lato":191,"tensity":"{\"T0\":1}","agitation":"
{\"A1\":1}","preoccupancy":"{\"P15\":1}","taps":"{}"}]
Unless I am misunderstanding something here the underlying ContentValues.put or the SQLiteDatabase.insert implementation has taken it upon itself to escape the quoted strings. Perhaps this is to be expected given that the benefit of going down the SQLiteDatabase.insert route with ContentValues is protection from SQL injection attacks. However, in the present instance it is being a hindrance not a help.
Short of executing raw SQL via execSQL is there another way to get the JSON into the SQLite database table here?
I should add that simply replacing the double quotes with single quotes
String ag = JOString("A" + DBManage.agitation,1);
ag = ag.replaceAll("\"","'");
will not work since a subsequent attempt to extract JSON from the table
SELECT ifnull((select json_extract(preoccupancy,'$.P4') from moods where dow = '1' AND tsn = '0'),0);
would result in the error malformed JSON being emitted by the SQLite JSON1 extension.
I am making a dictionary with over 20,000 words in it. So, to make searching faster, I am using an FTS3 table.
my select query:
Cursor c=db.rawQuery("Select * from data where Word MATCH '"+word+"*'", null);
Using this query, it shows all the words that contain word, but what I want is to get only the words that start with the text being searched for.
Meaning that I want it to work like this query:
Cursor c=db.rawQuery("Select * from data where Word like '"+word+"%'", null);
Ex: I have: apple, app, and, book, bad, cat, car.
When I type 'a', I want it to show only: apple, app, and.
How can I solve this?
table(_id primary key not null autoincrement, word text)
An FTS table does not use the above attributes. It ignores data types, it does not auto-increment columns other than the hidden rowid column, and "_id" will not act as a primary key here. Please verify that you are actually implementing an FTS table:
https://www.sqlite.org/fts3.html
a datatype name may be optionally specified for each column. This is
pure syntactic sugar, the supplied typenames are not used by FTS or
the SQLite core for any purpose. The same applies to any constraints
specified along with an FTS column name - they are parsed but not used
or recorded by the system in any way.
As for your original question, match "abc*" already searches from the beginning of the word. For instance match "man*" will not match "woman".
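If the table is not an FTS table yet, creating one could look like this (a hedged sketch, assuming an open SQLiteDatabase db; the table and column names follow the question, and FTS4 is used here although FTS3 works the same way):
// Create a full-text-search virtual table; any declared types or constraints would be ignored.
db.execSQL("CREATE VIRTUAL TABLE IF NOT EXISTS data USING fts4(Word)");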
FTS supports searching for the beginning of a string with ^:
SELECT * FROM FtsTable WHERE Word MATCH '^word*'
However, the full-text search index is designed to find words inside larger texts.
If your Word column contains only a single word, your query is more efficient if you use LIKE 'a%' and rely on a normal index.
To allow an index to be used with LIKE, the table column must have TEXT affinity, and the index must be declared as COLLATE NOCASE (because LIKE is not case sensitive):
CREATE TABLE data (
...
Word TEXT,
...
);
CREATE INDEX data_Word_index ON data(Word COLLATE NOCASE);
If you were to use GLOB instead, the index would have to be case sensitive (the default).
You can use EXPLAIN QUERY PLAN to check whether the query uses the index:
sqlite> EXPLAIN QUERY PLAN SELECT * FROM data WHERE Word LIKE 'a%';
0|0|0|SEARCH TABLE data USING INDEX data_Word_index (Word>? AND Word<?)
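From Android, the prefix search can then be issued with a bound parameter (a minimal sketch, assuming the data table and index above; whether the index is actually used can be confirmed with EXPLAIN QUERY PLAN as shown):
// Append '%' so only words starting with the typed prefix match.
Cursor c = db.rawQuery(
        "SELECT * FROM data WHERE Word LIKE ?",
        new String[]{ word + "%" });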
I'm working on an Android App where the user has different options for sorting the displayed data that comes from the database. Currently my orderBy string that I pass to Androids query() method looks like this:
"LOWER("+columnName+") ASC"
The problem with this is that if the data type in the column specified by columnName is integer, calling LOWER() on it will cause it to be sorted alphabetically, i.e. based only on the leftmost digit, which of course doesn't make any sense for numeric data. Hence I only want to apply LOWER() if the data type of the column is not integer. What I have in mind is a statement like this:
"CASE WHEN [data type of columnName is integer] THEN "+columnName+" ASC ELSE LOWER("+columName+") ASC END"
The part in the brackets is what I don't know how to do. Does SQLite provide a function to determine a column's data type?
Do you really want the type of the column, or the type of the value? (SQLite is dynamically-typed, so the distinction is important.)
If you want the latter, you can use typeof(columnName).
Use:
PRAGMA table_info(table-name);
to get table info.
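From Android code, the question's orderBy clause could then be built by checking the declared type first (a minimal sketch with a hypothetical tableName variable, assuming an open SQLiteDatabase db; PRAGMA table_info returns one row per column with, among others, the columns name and type):
String orderBy = "LOWER(" + columnName + ") ASC";
Cursor ti = db.rawQuery("PRAGMA table_info(" + tableName + ")", null);
while (ti.moveToNext()) {
    String name = ti.getString(ti.getColumnIndexOrThrow("name"));
    String declaredType = ti.getString(ti.getColumnIndexOrThrow("type"));
    // Skip LOWER() for the sort column if it is declared INTEGER.
    if (columnName.equalsIgnoreCase(name) && "INTEGER".equalsIgnoreCase(declaredType)) {
        orderBy = columnName + " ASC";
        break;
    }
}
ti.close();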
Taken directly from SQLite docs about datatypes for SQLite Version 3:
Most SQL database engines (every SQL database engine other than SQLite, as far as we know) uses static, rigid typing. With static typing, the datatype of a value is determined by its container - the particular column in which the value is stored.
SQLite uses a more general dynamic type system. In SQLite, the datatype of a value is associated with the value itself, not with its container. The dynamic type system of SQLite is backwards compatible with the more common static type systems of other database engines in the sense that SQL statements that work on statically typed databases should work the same way in SQLite. However, the dynamic typing in SQLite allows it to do things which are not possible in traditional rigidly typed databases.
Column affinity: use PRAGMA table_info(table-name);. PRAGMA table_info() gives a table with columns cid, name, type, notnull, dflt_value, and pk.
Columns in the result set include the column name, data type, whether or not the column can be NULL, and the default value for the column. The "pk" column in the result set is zero for columns that are not part of the primary key, and is the index of the column in the primary key for columns that are part of the primary key.
Datatype of value: Use typeof(column) to see how values are actually stored by SQLite.
Example adapted from section 3.4:
CREATE TABLE t1(
t TEXT, -- text affinity by rule 2
nu NUMERIC, -- numeric affinity by rule 5
i INTEGER, -- integer affinity by rule 1
r REAL, -- real affinity by rule 4
no BLOB -- no affinity by rule 3
);
-- Values stored as TEXT, INTEGER, INTEGER, REAL, TEXT.
INSERT INTO t1 VALUES('500.0', '500.0', '500.0', '500.0', '500.0');
-- Values stored as TEXT, INTEGER, INTEGER, REAL, REAL.
INSERT INTO t1 VALUES(500.0, 500.0, 500.0, 500.0, 500.0);
-- Values stored as TEXT, INTEGER, INTEGER, REAL, INTEGER.
INSERT INTO t1 VALUES(500, 500, 500, 500, 500);
-- BLOBs are always stored as BLOBs regardless of column affinity.
INSERT INTO t1 VALUES(x'0500', x'0500', x'0500', x'0500', x'0500');
-- NULLs are also unaffected by affinity
INSERT INTO t1 VALUES(NULL,NULL,NULL,NULL,NULL);
Output of PRAGMA table_info(t1);:
0|t|TEXT|0||0
1|nu|NUMERIC|0||0
2|i|INTEGER|0||0
3|r|REAL|0||0
4|no|BLOB|0||0
Output of SELECT typeof(t), typeof(nu), typeof(i), typeof(r), typeof(no) FROM t1; (notice each value in a column has its own datatype):
text|integer|integer|real|text
text|integer|integer|real|real
text|integer|integer|real|integer
blob|blob|blob|blob|blob
null|null|null|null|null
Did you declare the column as an integer when setting up the table? Otherwise sqlite will store it as text and the sorts will act as you've described.
create table if not exists exampletable (columnName integer);
To get information about a table, use
PRAGMA table_info(table-name);
If you want the type of the stored value rather than the declared column type, you can use
typeof(columnName)