SQLite3 on Android not adding float numbers correctly

I am having some trouble running a few simple statements on SQLite3 on Android.
For example:
SELECT 1234 + 0.001
That should return 1234.001, right? SQLite, however, returns 1234 both on the Emulator (v21) and on a real device (v19).
Considering that 1234 is stored in a column called Field1 (type REAL) in Table1, I have tried all the options below:
SELECT Field1 + 0.001 FROM Table1
SELECT (Field1 * 1.000) + 0.001 FROM Table1
SELECT CAST(Field1 as FLOAT) + 0.001 FROM Table1
SELECT CAST(Field1 as REAL) + 0.001 FROM Table1
SELECT CAST((Field1 + 0.001) as REAL) FROM Table1
None of these works: in every single case I get 1234 instead of 1234.001. I need to get 1234.001 from 1234 in a query, but SQLite3 isn't being helpful.
Another thing I just found out: if "Field1" <= 999, the SELECT works as expected and I get 999.001 in the result. Anything >= 1000 gives me an integer instead.
Can you please help me solve this?

Actually, I found out that the problem is not what it seems. The numbers were being treated as x.001, just not shown as such: internally they were stored correctly in binary, but not all of the decimal part was being displayed. After casting them to text, I could see that the decimal part was all there.
Anyhow, thank you for your time and answers.
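A quick way to confirm this is a display issue rather than an arithmetic one (sketched here with Python's sqlite3 module, which drives the same engine Android uses):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The arithmetic itself keeps the decimals; only the display was truncating them.
(as_real,) = con.execute("SELECT 1234 + 0.001").fetchone()
(as_text,) = con.execute("SELECT CAST(1234 + 0.001 AS TEXT)").fetchone()
print(as_real)   # the full REAL value
print(as_text)   # casting to text shows the decimal part is all there
```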

This is an issue of precision and of how numbers are stored in a computer, or for that matter in sqlite3. Numbers are stored in binary format, which means that converting a decimal number to binary loses some precision (for real numbers); read more here. Now, what are your options if you want the addition to yield 1234.001, as in your example?
A) Storing the number as a string.
You can always store the values '1234' and '0.001' as VARCHARs, parse these values to BigDecimals in Java code, and perform your additions there. For more info on parsing, check out this link. The drawback of this method is that it will consume a lot more storage space in your database, and parsing operations aren't that fast either. Before using this method, consider whether this drawback will negatively impact the performance of your application.
B) Establish a MAXIMUM_PRECISION and store them as INTEGERs.
By default SQLITE3 stores INTEGERs using 8 bytes. This means that the largest integer you can store is of the order of 10^19. By storing integers you don't lose precision, so we can take advantage of this. Let's say your MAXIMUM_PRECISION is 0.001 (as in your example); then you need to multiply this number by one thousand to get a 1. So what if, instead of representing 1234.001 as a real, we represent it as the int 1234001? That way we can store the int safely in sqlite3 and make sure the operations work properly. In your code you can later parse the number as a String and format it for display to a user, or parse it to a BigDecimal to keep precision. Of course, this will limit you to a maximum number of the order of 10^16; again, check your requirements to see if this trick will work for your app. Please note that a similar trick is used to store currency without losing precision in sqlite3; for more info see this link.
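A minimal sketch of option B, using Python's sqlite3 module for illustration (the table and column names here are made up):

```python
import sqlite3

SCALE = 1000  # a MAXIMUM_PRECISION of 0.001 means multiplying stored values by 1000

con = sqlite3.connect(":memory:")
# Hypothetical table: 1234.001 is stored as the scaled integer 1234001.
con.execute("CREATE TABLE amounts (value_milli INTEGER)")
con.execute("INSERT INTO amounts VALUES (?)", (1234001,))

# Integer arithmetic is exact: adding 0.001 becomes adding 1 in the scaled form.
(total,) = con.execute("SELECT value_milli + 1 FROM amounts").fetchone()
print(total)          # the exact scaled result
print(total / SCALE)  # divide only at display time
```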

Related

Does SQLite `NOT IN` parameter have any size limit?

I have an SQLite DB where I perform a query like
Select * from table where col_name NOT IN ('val1','val2')
Basically I'm getting a huge list of values from the server, and I need to select the rows whose values are not present in that list.
Currently it's working fine, no issues. But the number of values from the server is becoming huge, as the server DB is updated frequently.
So I may get thousands of String values that I need to pass to the NOT IN.
My question is: will it cause any performance issue in the future? Does the NOT IN parameter have any size restriction (like a maximum of 10000 values you can check)?
Will it cause any crash at some point?
This is the official reference about the various limits in SQLite. I think Maximum Length Of An SQL Statement may relate to your case. The default value is 1000000, and it is adjustable.
Apart from this, I don't think there is any limit on the number of values in a NOT IN clause.
With more than a few values to test for, you're better off putting them in a table that has an index on the column holding them. Then things like
SELECT *
FROM table
WHERE col_name NOT IN (SELECT value_col FROM value_table);
or
SELECT *
FROM table AS t
WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = t.col_name);
will be reasonably efficient no matter how many records are in value_table, because that index will be used to find entries.
Plus, of course, it makes it a lot easier to re-use prepared statements, because you don't have to create a new one and re-bind every value each time you add a value to the set you need to check (you are using prepared statements with placeholders for these values, right, and not putting their contents inline into a string?). You just insert it into value_table instead.
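A small sketch of the value-table approach (using Python's sqlite3 module for illustration; the table and column names beyond those in the answer are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (col_name TEXT)")
con.executemany("INSERT INTO data VALUES (?)", [("a",), ("b",), ("c",)])

# Put the server-supplied values in an indexed table instead of a giant NOT IN list.
con.execute("CREATE TABLE value_table (value_col TEXT PRIMARY KEY)")
con.executemany("INSERT OR IGNORE INTO value_table VALUES (?)",
                [("val1",), ("b",)])

# NOT EXISTS form: the primary-key index on value_col does the lookups.
rows = con.execute(
    "SELECT col_name FROM data "
    "WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = data.col_name)"
).fetchall()
print(rows)  # only the rows whose value is absent from value_table
```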
Yes, if you bind the values as parameters there is a default limit of 999 host parameters per statement, as reported in the official documentation: https://www.sqlite.org/limits.html#max_variable_number

How can I select the last index of a column split with bigquery

There are a lot of questions about splitting a BigQuery, MySQL column, but I can't find one that fits my situation.
I am processing a large dataset (3rd party) that includes a freeform location field to normalize it for my Android app. When I run a select I'd like to split the column data by commas, take only the last segment and trim it of whitespace.
So far I've come up with the following by Googling documentation:
SELECT RTRIM(LOWER(SPLIT(location, ',')[OFFSET(-1)])) FROM `users` WHERE location <> ''
But the -1 trick to address the last element does not work (with either OFFSET or ORDINAL). I can't use ARRAY_LENGTH on the same array inline, and I'm not exactly sure how to structure a nested query to know the last index for each row.
I might be approaching this from the wrong angle; I work with Android and NoSQL now, so I haven't used MySQL in a long time.
How do I structure this query correctly?
I'd like to split the column data by commas, take only the last segment ...
You can use the approach below (BigQuery Standard SQL):
SELECT ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)]
Below is an example illustrating it:
#standardSQL
WITH `project.dataset.table` AS (
SELECT '1,2,3,4,5' location UNION ALL
SELECT '6,7,8'
)
SELECT location, ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)] last_segment
FROM `project.dataset.table`
with result
Row location last_segment
1 1,2,3,4,5 5
2 6,7,8 8
For trimming, you can use LTRIM(RTRIM()) (or simply TRIM()), as in:
SELECT LTRIM(RTRIM(ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)]))
To get the last part of the split string, I use the length(string) - length(replace(string, delimiter, '')) trick to count the number of delimiters:
SPLIT(<string>, '-')[OFFSET(LENGTH(<string>) - LENGTH(REPLACE(<string>, '-', '')))]
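Both answers boil down to the same operation: take the last delimiter-separated segment and trim it. A hypothetical Python sketch of that logic:

```python
def last_segment(location: str, delimiter: str = ",") -> str:
    """Equivalent of ARRAY_REVERSE(SPLIT(location))[SAFE_OFFSET(0)] plus trimming."""
    return location.split(delimiter)[-1].strip()

print(last_segment("1,2,3,4,5"))           # 5
print(last_segment("Sydney, Australia "))  # Australia
```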

Natural sorting of alphanumeric values in sqlite using android

I have a list of names that start with characters and end with numbers, like:
ka1, ka10, ka 2, ka, sa2, sa1, sa10, p1a10, 1kb, p1a2, p1a11, p1a.
I want to sort it in natural order, that is:
1kb, ka, ka1, ka 2, ka10, p1a, p1a2, p1a10, p1a11, sa1, sa2, sa10.
The main problem I see here is that there is no delimiter between the text and numeric parts, and there is also a chance that the numeric part is missing entirely.
I am using SQLite in Android. I could do the sorting in Java after fetching the rows by caching the cursor data, but I am using (and am recommended to use) a cursor adapter.
Please suggest a query for sorting, or is there any way to apply sorting on the cursor?
I tried the query below for natural sorting:
SELECT
item_no
FROM
items
ORDER BY
LENGTH(item_no), item_no;
It worked for me in SQLite too. Please see this link for more details. (Note that sorting by length first only produces a natural order when the non-numeric prefixes have the same length; with mixed prefixes such as ka and p1a the order will be off.)
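The effect of ordering by LENGTH first can be checked quickly (a sketch using Python's sqlite3 module; it works here because all the prefixes are the same length):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (item_no TEXT)")
con.executemany("INSERT INTO items VALUES (?)",
                [("sa10",), ("sa2",), ("sa1",)])

# Shorter strings sort first, then lexicographic order within equal lengths.
rows = [r[0] for r in con.execute(
    "SELECT item_no FROM items ORDER BY LENGTH(item_no), item_no")]
print(rows)  # ['sa1', 'sa2', 'sa10']
```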
I can propose using a regex replacement that adds zeros, creating a temporary table of original and corresponding padded values, then following this link for sorting: http://www.saltycrane.com/blog/2007/12/how-to-sort-table-by-columns-in-python/
Tip for the regex: add as many zeros after the last letter as needed, but limit the total number of digits to the predicted maximum. If you need help with the regex as well, provide exact examples of valid and invalid values, and I can help with that too.
PS: if you want to be sure the zeros go before the last digits, search for the character from the end.
Updated
You can use different approaches; some of them are mentioned below:
BIN Way
SELECT
tbl_column,
BIN(tbl_column) AS binary_not_needed_column
FROM db_table
ORDER BY binary_not_needed_column ASC, tbl_column ASC
Cast Way
SELECT
tbl_column,
CAST(tbl_column as SIGNED) AS casted_column
FROM db_table
ORDER BY casted_column ASC, tbl_column ASC
(Note that BIN() and CAST(... AS SIGNED) are MySQL functions; in SQLite you would use CAST(tbl_column AS INTEGER) instead.)
or try the solution:
There are a whole lot of solutions out there if you hit up Google, and
you can, of course, just use the natsort() function in PHP, but it's
simple enough to accomplish natural sorting in MySQL: sort by length
first, then the column value.
Query: SELECT alphanumeric, integer FROM sorting_test ORDER BY LENGTH(alphanumeric), alphanumeric (from here)
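The zero-padding idea from the earlier answer can be sketched in a few lines (shown in Python for brevity; on Android the same key function could be applied in Java before, or instead of, the query). The padding width of 6 is an assumed maximum digit count:

```python
import re

def natural_key(s: str, width: int = 6) -> str:
    """Pad every run of digits to a fixed width so plain string order is natural order."""
    return re.sub(r"\d+", lambda m: m.group().zfill(width), s)

names = ["ka1", "ka10", "ka 2", "ka", "sa2", "sa1", "sa10",
         "p1a10", "1kb", "p1a2", "p1a11", "p1a"]
order = sorted(names, key=natural_key)
print(order)
```

How entries with embedded spaces (like "ka 2") should rank relative to "ka1" is ambiguous; here the space simply sorts as a character.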

SQLite. How to SELECT values from TABLE fast?

I am developing a dictionary application. It requires incremental search, which means that SELECTs should be fast. There are 200000+ rows. Let me, first of all, explain the table structure. I have this table:
CREATE TABLE meaning(
key TEXT,
value TEXT,
entries BLOB);
Some time ago I had this index:
CREATE INDEX index_key ON meaning (key)
This query took around ~500ms, which was very slow:
SELECT value FROM meaning WHERE key LIKE 'boy%' LIMIT 100
Then I dropped this index and created a case-insensitive one, which improved performance 2-3 times.
CREATE INDEX index_key ON meaning (key COLLATE NOCASE);
Now this query takes 75ms (min) to 275ms (max), which is still quite slow for incremental search:
SELECT value FROM meaning WHERE key LIKE 'boy%' LIMIT 100
I tried to optimize the query according to this post:
SELECT value FROM meaning WHERE key >= 'boy' AND key<'boz' LIMIT 100
But this query takes 451ms.
EXPLAIN
SELECT value FROM meaning WHERE key LIKE 'boy%' LIMIT 100
This returns the following values:
EXPLAIN QUERY PLAN
SELECT value FROM meaning WHERE key LIKE 'boy%' LIMIT 100
This returns this value (detail column):
SEARCH TABLE meaning USING INDEX index_key (key>? AND key<?) (~31250 rows)
Actually, these values did not give me any hint about what to optimize.
Is it possible to get the selection of words down to ~10ms by optimizing this query, creating another table, or changing some parameters of the SQLite database? Could you suggest the best way to do this?
PS. Please do not suggest using an FTS table. In a previous version of the application I used FTS. I agree that it is extremely fast, but I abandoned the FTS table idea for two reasons:
It does not give proper results (it contains words the user does not need)
It takes more disk space
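One likely culprit for the slow range rewrite: the index is declared COLLATE NOCASE, but a plain key >= 'boy' comparison uses the column's default BINARY collation, so SQLite cannot use that index and falls back to a full scan. Forcing NOCASE on the comparisons brings the index back. A sketch (Python's sqlite3 module, empty table, just to inspect the query plans):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meaning (key TEXT, value TEXT, entries BLOB)")
con.execute("CREATE INDEX index_key ON meaning (key COLLATE NOCASE)")

def plan(sql):
    # Concatenate the "detail" column of the EXPLAIN QUERY PLAN output.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# BINARY-collated range: the NOCASE index cannot be used (full table scan).
scan_plan = plan("SELECT value FROM meaning WHERE key >= 'boy' AND key < 'boz'")
# NOCASE-collated range: the index is usable again.
index_plan = plan("SELECT value FROM meaning WHERE key >= 'boy' COLLATE NOCASE "
                  "AND key < 'boz' COLLATE NOCASE")
print(scan_plan)
print(index_plan)
```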

android: how to query a key value pair efficiently

I have about 3000 (key, value) pairs. They are fixed and will never change. In my app, there is a page that needs to make around 200 queries. Each query takes a key and asks for its value. The queries are also sequential: I have to finish query 1 to get value 1, which gives me the key for query 2 to get value 2.
I tried implementing this with SQLite. I measured the time and found it very slow: around 600ms. I wonder if there is a better way to implement it? For example, a string array of size 3000, or some hashmap? Thanks for any advice.
Edit: I forgot to mention the sizes of the keys and values. Key size: 2 chars (Unicode); value: 4-6 chars. In fact, it is similar to a language-dictionary lookup.
The answer depends on the amount of data that has to be put in this container. E.g. 3000 pairs of five bytes each: no issue keeping that data in memory; however, 3000 pairs of 350 bytes each: that's already about 1MB.
If you have a rather small amount of data, you could think about using a static SparseArray that is filled once by an SQL query or by assignments in code. SparseArrays are intended to be more memory-efficient than HashMaps.
If the key isn't an Integer, a HashMap is still much faster than an SQL query.
If you have rather larger data sets, you could use an LruCache.
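For this data size, the in-memory approach is easy to sketch. A hypothetical chained lookup (shown in Python; on Android the equivalent would be a HashMap<String, String> filled once at startup):

```python
# Hypothetical fixed data: 3000 pairs, where each value is the key for the next query.
pairs = {f"k{i}": f"k{(i + 1) % 3000}" for i in range(3000)}

key = "k0"
for _ in range(200):   # 200 sequential queries: each result drives the next lookup
    key = pairs[key]
print(key)  # k200
```

In-memory dictionary lookups like these are effectively instant compared to the ~600ms of 200 separate SQLite queries.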
