Does a PK in SQLite guarantee order of data?
AFAIK, index implementations store data in the order of the PK.
Does this apply to SQLite? Even for a composite PK?
The documentation says:
If a SELECT statement that returns more than one row does not have an ORDER BY clause, the order in which the rows are returned is undefined.
The presence of a primary key or any other index does not change this; there is no guarantee that that index is actually used for the query.
If you want the output of a query to be sorted, you must use an ORDER BY. (If the ordering can be trivially implemented with the index, this will not be any less efficient than the same query without the ORDER BY clause.)
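Tying that back to the composite-PK part of the question, a minimal sketch (hypothetical table and column names): the order is guaranteed only because it is spelled out, and SQLite can typically satisfy it from the PK index at little or no extra cost.
CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT, PRIMARY KEY (a, b));
-- ordered because of the ORDER BY, not because of the primary key
SELECT * FROM t ORDER BY a, b;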
If you want to know about the underlying implementation of indexes in SQLite and how SQLite stores data on disk, the File Format For SQLite Databases document may help you (there is also a version here in Chinese).
I am using this query
"select * from SomeTable group by SomeColumn"
It returns the list in ascending order, but I need the same order as in the database.
For example the order in database is:
p
a
i
But result is:
a
i
p
Sample: the result needs to be distinct by CityEN, but with all columns, and in the original order: 1. Paris, 2. Amsterdam, 3. Istanbul.
In SQLite, each row of a table has a unique rowid, which you can use for sorting.
select * from SomeTable group by SomeColumn order by rowid;
In your statement, add this line to sort the results:
order by min(rowid)
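Putting that together with the query from the question (a sketch using the same SomeTable/SomeColumn names):
-- min(rowid) is the rowid of the first inserted row in each group, so the
-- groups come back in insertion order instead of alphabetical order
select * from SomeTable group by SomeColumn order by min(rowid);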
Your query does not enforce any order, since it has no ORDER BY clause, so no assumptions about row order should be made. If you want a specific order, add one, e.g. ORDER BY SomeColumn. See the docs for all the available ordering options: https://www.sqlite.org/lang_select.html#orderby
By the rules of SQL, you can't count on getting records back in any specific order without specifying an ORDER BY clause in your SQL query.
In practice servers sometimes return values in the order in which they're inserted, in the order of the first index created, or in the order of the primary key--but you can't count on this behavior, and in fact I've seen the behavior change between database maintenance windows or after the database version is upgraded. You definitely wouldn't want to count on a DB engine to give you back records in any particular order if you write a SELECT statement without an ORDER BY clause.
The only real way to get your records back in the order you inserted them is to create a timestamp column and then sort on it during the SELECT. If you don't want to worry about populating that column on INSERT, have that column auto-populate itself with a timestamp (depending on your DB engine).
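A sketch of that approach in SQLite (table and column names are hypothetical; the DEFAULT clause auto-populates the timestamp on INSERT):
CREATE TABLE SomeTable (
    SomeColumn TEXT,
    -- filled in automatically at INSERT time (UTC, one-second resolution)
    inserted_at TEXT DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO SomeTable (SomeColumn) VALUES ('Paris'), ('Amsterdam'), ('Istanbul');

-- rowid breaks ties between rows inserted within the same second
SELECT * FROM SomeTable ORDER BY inserted_at, rowid;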
I have an SQLite DB where I perform a query like
Select * from table where col_name NOT IN ('val1','val2')
Basically, I'm getting a huge list of values from the server, and I need to select the rows whose values are not present in that list.
Currently it's working fine, no issues. But the number of values from the server keeps growing, as the server DB is updated frequently.
So I may end up with thousands of string values that I need to pass to the NOT IN.
My question is: will it cause any performance issues in the future? Does NOT IN have any size restriction (like a maximum of 10,000 values you can check)?
Will it cause a crash at some point?
This is the official reference about the various limits in SQLite. I think 'Maximum Length Of An SQL Statement' may be relevant to your case; the default is 1,000,000 bytes, and it is adjustable.
Apart from that, I don't think there is any limit on the number of values in a NOT IN clause.
With more than a few values to test for, you're better off putting them in a table that has an index on the column holding them. Then things like
SELECT *
FROM table
WHERE col_name NOT IN (SELECT value_col FROM value_table);
or
SELECT *
FROM table AS t
WHERE NOT EXISTS (SELECT 1 FROM value_table WHERE value_col = t.col_name);
will be reasonably efficient no matter how many records are in value_table because that index will be used to find entries.
Plus, of course, it makes it a lot easier to re-use prepared statements, because you don't have to create a new one and re-bind every value each time you add a value to the set you need to check (you are using prepared statements with placeholders for these values, right, and not trying to put their contents inline into a string?). You just insert the new value into value_table instead.
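A minimal sketch of that setup (value_table and value_col are placeholder names, matching the queries above):
-- the PRIMARY KEY gives value_col an index, so the NOT IN / NOT EXISTS
-- subqueries above can look values up without scanning the whole table
CREATE TABLE IF NOT EXISTS value_table (value_col TEXT PRIMARY KEY);

-- prepared once, then bound and stepped for each value from the server
INSERT OR IGNORE INTO value_table (value_col) VALUES (?);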
Yes, there is a limit of 999 bound parameters (?) per statement by default, as reported in the official documentation: https://www.sqlite.org/limits.html#max_variable_number
I have a table (Table1) in SQLite with one column (Col1). This table has 100,000 rows, and all the values in Col1 are encrypted with a special algorithm.
I've used a select ... like statement in Android, like this:
Select Col1
from Table1
where Col1 like 'A%';
I want to return all rows that start with the letter 'A'.
But Col1 is actually encrypted! Even if I use this:
"Select Col1 from Table1 where Col1 like '"+my_method_encryption("A")+"%';" .. it will be wrong, because the encrypted values of entries starting with 'A' in Col1 may differ from the return value of my_method_encryption("A").
What should I do?
Actually, there is another way to solve it: I could select all 100,000 rows, decrypt them all, and then search. But that would be very slow, because I may need to run this select ... like more than 10 times.
Thanks
Encrypt the database file that contains the plain-text column and the table key, and keep a separate, non-encrypted db file linked to it by that key.
Decrypt the file on open, which makes searches possible, and join in the other data based on the key.
where Col1 like '"+my_method_encryption("A")+"%';"
That won't work; it is the very purpose of encryption that you cannot tell from the ciphertext what the first character(s) of the plaintext are, otherwise the encryption would be meaningless.
What you could do is expose your decryption function to SQLite as a user-defined function (https://www.sqlite.org/appfunc.html).
Then use something like:
where my_decrypt(Col1,'key') like 'A%'"
But this would be identical to what you wrote:
Actually There is another way to solve it, if I select all 100,000 rows and after that I will decrypt all 100,000 rows and then search.
This is what SQLite will do, just that internally.
However, what you are trying to achieve seems to be conceptually wrong; namely:
your encryption scope is a single row/column value;
you want to query the encrypted data by something that usually requires an external aggregate structure (an index), which is the only way to get sub-linear search performance.
You should consider expanding your encryption scope to the whole table or database; for instance, SQLite provides the SQLite Encryption Extension (https://www.sqlite.org/see/doc/trunk/www/readme.wiki), which encrypts the whole database at the block level (database scope, block unit). You can also logically join an unencrypted db with an encrypted db: using ATTACH you bring both dbs into the same connection, then use a normal JOIN, maybe even in a view, to bring the data together.
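For the ATTACH route, a sketch (file, table and column names are hypothetical):
-- open the encrypted database as the main connection, then attach the
-- unencrypted one and join the two on a shared key
ATTACH DATABASE 'plain.db' AS plain;

SELECT s.id, s.secret_col, p.public_col
FROM secure_table AS s
JOIN plain.public_table AS p ON p.id = s.id;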
I'm not familiar with the Android ecosystem but a simple search for "android sqlite encryption extension" reveals that there is no shortage of alternatives for DB-level encryption.
I was wondering if it's possible (it should be) to query several tables at once in SQLite. Basically I have several tables that have exactly the same columns, but the data is organized by which table it's in. I need to be able to use SELECT to get data from the tables that matches a condition (I heard UNION could help), and then group the data by the table it's in.
In other words, would something like this be possible?
SELECT * FROM table1,table2,table3,table4,table5,table6 WHERE day=15 GROUP BY {table}
I'd rather not resort to querying the tables individually, because then I would have a bunch of Cursors that I'd have to go through manually, which would be difficult when I only have one SimpleCursorAdapter. Unless a SimpleCursorAdapter can have several Cursors?
Thanks.
EDIT: The structure of my tables:
Main Table - contains references to the subtables in a column "tbls", and meta-information about the data stored in the subtables
Subtable - contains references to the subsubtables in a column "tbls", and meta-information about the data stored in the subsubtables
Subsubtable - contains the actual entries
Basically these tables just make it easier to organize the hierarchical data structure. I suppose instead of having the subsubtables, I could keep the actual entries in the subtable but add a prefix, and have a separate table for the meta-information. It just seems it would be harder to delete/update the structure if I need to remove a level in this data set.
You can create a view based on your tables; the view's query is a union of your tables.
create view test as select * from table1 union select * from table2
now you can filter data as you want
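Since you also want to group the data by the table it came from, here is a sketch (assuming table1/table2 share the same columns, as in the question) that adds a literal column naming the source table:
-- UNION ALL keeps every row; plain UNION would also remove duplicates
CREATE VIEW combined AS
    SELECT 'table1' AS src, * FROM table1
    UNION ALL
    SELECT 'table2' AS src, * FROM table2;

-- filter, with the rows kept together per source table
SELECT * FROM combined WHERE day = 15 ORDER BY src;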
for more info check union & view
http://www.w3schools.com/sql/sql_union.asp
http://www.w3schools.com/sql/sql_view.asp
In the end, I decided to forgo having many subsubtables and instead add another column, as Tim and Samuel suggested. It will probably also be more efficient than chaining SELECTs with UNION.
Will the order of rows returned by a query be the same as the order in which the rows were inserted into the table, in an SQLite database?
If yes, is this behaviour consistent?
If no, can this be enforced?
I have a requirement to store approximately 500 rows of data, which requires sorting/ordering from time to time. The data is in the proper order before insertion.
Given the small number of rows in your table, this is probably what you need:
SELECT * FROM yourtable ORDER BY ROWID
For more information on ROWID, see these two links:
SQLite Autoincrement, and ROWIDs and the INTEGER PRIMARY KEY
Even if the order happens to be consistent in one scenario, there is AFAIK no guarantee.
That is why SQL has the ORDER BY clause:
SELECT foo, bar FROM SomeTable WHERE frobnitz LIKE 'foo%' ORDER BY baz ASC;
Will the order of rows returned by a query be the same as the order in which the rows were inserted into the table, in an SQLite database?
No, you can't count on that. All query optimizers have a lot of freedom when it comes to speeding up queries. One thing they're free to do is to return rows in whatever order is the fastest. That's true even if a particular dbms supports clustered indexes. (A clustered index imposes a physical ordering on the rows.)
There's only one way to guarantee the order of returned rows in a SQL database: use an ORDER BY clause.