The wording of my question comes from a comment at the end of the blog post Android Quick Tip: Using SQLite FTS Tables. As the title implies, the post explains how to create and query full-text search virtual tables in your Android app. The comment by user Fer Raviola specifically reads
my question is why dont' we ALWAYS use FTS tables!. I mean, they ARE
faster
The blog author did not reply (at the time of this writing, anyway), but I thought it was an interesting question that deserved an answer. After all, FTS tables can be made for an entire table, not just a specific text column. At first look, it seems like it would both simplify and speed up queries.
One could also completely do away with the non-virtual table. That would eliminate having to keep the virtual and non-virtual tables in sync with triggers and external content tables. All the data would be stored in the virtual table.
CL. says that this is not a good option, though, because "FTS tables cannot be efficiently queried for non-FTS searches." I assume that this has something to do with what the SQLite documentation says here:
-- The examples in this block assume the following FTS table:
CREATE VIRTUAL TABLE mail USING fts3(subject, body);
SELECT * FROM mail WHERE rowid = 15; -- Fast. Rowid lookup.
SELECT * FROM mail WHERE body MATCH 'sqlite'; -- Fast. Full-text query.
SELECT * FROM mail WHERE mail MATCH 'search'; -- Fast. Full-text query.
SELECT * FROM mail WHERE rowid BETWEEN 15 AND 20; -- Slow. Linear scan.
SELECT * FROM mail WHERE subject = 'database'; -- Slow. Linear scan.
SELECT * FROM mail WHERE subject MATCH 'database'; -- Fast. Full-text query.
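To see the distinction in practice, here is a minimal sketch using Python's built-in sqlite3 module, assuming the underlying SQLite build was compiled with FTS4 support (most are). Both queries return the same row; only the access path differs:

```python
import sqlite3

# In-memory database mirroring the documentation's example table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mail USING fts4(subject, body)")
conn.execute(
    "INSERT INTO mail (rowid, subject, body) "
    "VALUES (15, 'database tips', 'all about sqlite search')"
)

# Fast: a full-text MATCH uses the FTS index.
fast = conn.execute("SELECT rowid FROM mail WHERE body MATCH 'sqlite'").fetchall()

# Slow: plain equality on an FTS column cannot use the full-text index,
# so SQLite falls back to a linear scan of every row.
slow = conn.execute("SELECT rowid FROM mail WHERE subject = 'database tips'").fetchall()

print(fast, slow)  # both find rowid 15; correctness is identical, only speed differs
```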
But are the slow queries really so much slower than if one were just doing a normal query on a normal table? If so, why?
Here are some general potential downsides that I can think of to only using a virtual FTS table in Android:
The table would be larger because of the space the full-text index takes.
Operations like INSERT, UPDATE, and DELETE would be slower because the index would have to be updated.
But as far as queries themselves go, I don't see what the problem would be.
Update
The Android documentation example Storing and Searching for Data uses only an FTS virtual table in its database. This seems to confirm that there are at least some viable options for FTS only databases.
When the table is small, scanning all rows does not take much time.
For large tables, however, this can take a very long time. (The speed would be similar to a normal, unindexed table.)
Related
I am working with SQLite on Android.
I want to return 10 records from a 1000-record DB, based on a WHERE clause, e.g. select * from log where level > 5. The column level is not indexed. How does SQLite retrieve the data? I assume it will go through all the records one by one and filter out the ones that don't match, correct?
In that case, would it be faster to just use a key-value store like LevelDb?
I assume it will go through all the records one by one and filter out the ones that don't match, correct?
Pretty much, yeah.
This is called a full table scan, and will result in poor performance if your table is big.
If you're curious about how SQLite executes your queries, you can use the EXPLAIN keyword, either on-device or in the command-line sqlite3 executable (available in the platform-tools folder of your Android SDK).
If you run EXPLAIN <your query>, it will give you a string of the actual code that will run in SQLite's virtual machine when executing that query. This is rarely useful unless you're developing/debugging SQLite itself.
If you run EXPLAIN QUERY PLAN <your query>, it will give you a high-level view of what'll happen, including which index (if any) is used.
If it is not indexed, SQLite will perform a sequential (linear) search to find all records meeting that criterion. This requires scanning all 1000 records each time you run the query.
If it is indexed then it will obviously use the index to find the records which will, in all cases, be much faster.
You should index it if you're going to run this query with any frequency; if it is a one-off, then indexing gains you nothing. Even though the performance difference may not be noticeable at only 1000 rows, this is exactly what indexes are for, and it would be lousy design not to use one.
I'm writing an Android app which allows the user to store information in an SQLite database. It is a relatively simple database with 9 tables and most users will store less than 200 records in the main table, but a few may store up to 1000.
An important function of the app will be to provide the user with the ability to search for a string in any of nine different fields across several of the tables.
There appear to be two approaches: full text searching using an FTS table, and using the WHERE ... LIKE function in the SELECT statement. I've searched stackoverflow and googled it, but have been unable to find any real information about which is best to use under which circumstances.
It would be very useful to understand the pros and cons of each method.
I suggest you use SQLite with FTS4 and two tables.
One table is the full-text search table, with a single content column:
CREATE VIRTUAL TABLE %s USING fts4(content);
The other is a plain data table with whatever columns you define, linked to the FTS table by rowid. This method can be extended easily in the future.
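A rough sketch of that two-table layout using Python's sqlite3 module (table and column names are invented for illustration, and FTS4 support in the SQLite build is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Plain data table with whatever columns you need.
conn.execute("CREATE TABLE notes (_id INTEGER PRIMARY KEY, title TEXT, created INTEGER)")
# FTS table holding only the searchable text.
conn.execute("CREATE VIRTUAL TABLE notes_fts USING fts4(content)")

# Insert into both tables, reusing the data table's rowid so they stay linked.
cur = conn.execute("INSERT INTO notes (title, created) VALUES ('shopping', 1700000000)")
conn.execute(
    "INSERT INTO notes_fts (rowid, content) VALUES (?, ?)",
    (cur.lastrowid, "buy milk and bread"),
)

# Search the FTS table, then join back to the data table on rowid.
rows = conn.execute(
    "SELECT n.title FROM notes_fts f "
    "JOIN notes n ON n._id = f.rowid "
    "WHERE f.content MATCH 'milk'"
).fetchall()
print(rows)  # [('shopping',)]
```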
I want to know how exactly SQLite works when you are dealing with a database on Android. I know that it writes everything to a file with a .db extension. But how does it read or write one particular table? Does it fetch the whole file or just the relevant part, and how exactly does it perform these operations? Can someone please suggest some links? I tried Google, but the links I found just explain how to write queries.
For that you should read up on the basics of databases in general; most database engines work in much the same way, so the fundamentals of any of them will apply. Here is some related information you may find useful:
What does a database actually do to find out what matches a select statement?
To be blunt, it's a matter of brute force. Simply, it reads through each candidate record in the database and matches the expression to the fields. So, if you have "select * from table where name = 'fred'", it literally runs through each record, grabs the "name" field, and compares it to 'fred'.
Now, if the "table.name" field is indexed, then the database will (likely, but not necessarily) use the index first to locate the candidate records to apply the actual filter to.
This reduces the number of candidate records to apply the expression to, otherwise it will just do what we call a "table scan", i.e. read every row.
But fundamentally, however it locates the candidate records is separate from how it applies the actual filter expression, and, obviously, there are some clever optimizations that can be done.
How does a database interpret a join differently to a query with several "where key1 = key2" statements?
Well, a join is used to make a new "pseudo table", upon which the filter is applied. So, you have the filter criteria and the join criteria. The join criteria is used to build this "pseudo table" and then the filter is applied against that. Now, when interpreting the join, it's again the same issue as the filter -- brute force comparisons and index reads to build the subset for the "pseudo table".
How does the database store all its memory?
One of the keys to a good database is how it manages its I/O buffers. Basically, it matches RAM blocks to disk blocks. With modern virtual memory managers, a simpler database can almost rely on the VM as its memory buffer manager. The high-end DBMSs do all this themselves.
How are indexes stored?
B+Trees, typically; you should look them up. It's a straightforward technique that has been around for years. Its benefit is shared with most any balanced tree: consistent access time to the nodes, plus all the leaf nodes are linked, so you can easily traverse from node to node in key order. So, with an index, the rows can be considered "sorted" for specific fields in the database, and the database can leverage that information to its benefit for optimizations. This is distinct from, say, using a hash table for an index, which only lets you get to a specific record quickly. With a B-Tree you can quickly get not just to a specific record, but to a point within a sorted list.
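That sorted-order property is visible in the query planner: a B-tree index can satisfy both a range predicate and an ORDER BY without a separate sort pass. A small sketch with Python's sqlite3 module (table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER, v TEXT)")
conn.execute("CREATE INDEX idx_t_k ON t(k)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# A range predicate walks one contiguous slice of the B-tree leaves.
range_plan = plan("SELECT k FROM t WHERE k BETWEEN 10 AND 20")
# Because leaves are linked in key order, ORDER BY k needs no sort step.
order_plan = plan("SELECT k FROM t ORDER BY k")

print(range_plan)  # e.g. "SEARCH t USING COVERING INDEX idx_t_k (k>? AND k<?)"
print(order_plan)  # e.g. "SCAN t USING COVERING INDEX idx_t_k"
```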
The actual mechanics of storing and indexing rows in the database are really pretty straightforward and well understood. The game is managing buffers, and converting SQL into efficient query paths that leverage these basic storage idioms.
Then, there's the whole multi-users, locking, logging, and transactions complexity on top of the storage idiom.
SQLite operation on Android is not any different from SQLite operation on any other platform.
Very short answer to your question: SQLite file is split into pages of fixed size.
Each database object (table, index, etc.) occupies some number of pages. If an object needs to grow (for example, when new rows are inserted into a table), it may allocate new pages either from the free page list or by growing the database file in size. If rows are deleted or an object is dropped, the reclaimed space goes onto the free page list. During any operation, the SQLite engine tries NOT to fetch the whole file; it also maintains a page cache for higher performance.
You can find much more detailed explanations on SQLite website in general, and about SQLite database file format in particular.
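You can observe the page structure directly through PRAGMAs; a small sketch with Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x TEXT)")
# Insert roughly 50 KB of data so the database spans multiple pages.
conn.executemany("INSERT INTO t VALUES (?)", [("A" * 500,) for _ in range(100)])

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]

# The database occupies page_size * page_count bytes;
# pages reclaimed by deletes go onto the freelist for reuse.
print(page_size, page_count, freelist)
```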
I was browsing through the source code for Mendeley for Android by Martin Eve and saw that temp tables are created along with the main tables by calling replace() on the CREATE statements at the _id column. For example, on the Collections table:
db.execSQL(COLLECTIONS_CREATE.replace(" (_id", "_TEMP (_id"));
I guess it creates a new temporary table. Any further explanation of this would be great. Later on, the data is first inserted into the temp tables and then moved to the main tables.
I searched through SO and came across What are the use cases for temporary tables in SQL database systems? and saw that temp tables are used for handling complex queries, sorting and for increasing performance. Can anyone explain how it helps in this situation?
Temp tables make things easier for the programmer by letting the programmer break up a single complex query into multiple relatively simpler queries, and also by letting the programmer store results temporarily so they can be consulted multiple times for different purposes during the course of the program without having to be reinstantiated each time. The latter also makes things easier for the computer. The disk subsystem and CPU can take a little rest, so to speak.
An example of the former: let say you wanted to get all records where:
the sale was in the eastern division
and involved one of the several new gizmos introduced last quarter
and occurred during the special 5-day bonanza sale
or
the sale was made by the boss's daughter
who floats from division to division
and the sale occurred at any time during the month of May
Your program will then generate an email praising the salesperson for the sale, with a cc to the division manager.
The single query that fetches records that satisfy either of those sets of conditions above might get a little unwieldy--just a little difficult for one's addled brain or weary eyes to handle after a long day of dealing with the sort of crap one has to deal with in most walks of life. It's a trivial example, of course; in a production system the conditions are often more complex than those above, involving calculations and tests for null values and all sorts of other tedious things that can cause a query statement to grow long and turn into a ball of tangled yarn.
So if you created a temp table, you could populate the temp table with the rows that satisfy the first set of conditions, and then write the second query that grabs the rows that satisfy the second set of conditions, and insert them into the temp table too, and voilà -- your temp table contains all the rows you need to work with, in two baby steps.
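The two baby steps might look like this in Python's sqlite3 module (the sales schema and the conditions are invented stand-ins for the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (id INTEGER PRIMARY KEY, division TEXT, salesperson TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO sales (division, salesperson, amount) VALUES (?, ?, ?)",
    [("east", "alice", 100.0), ("west", "daughter", 250.0), ("east", "bob", 75.0)],
)

# A TEMP table lives only for this connection and vanishes when it closes.
# "WHERE 0" copies the column layout without copying any rows.
conn.execute("CREATE TEMP TABLE praised AS SELECT * FROM sales WHERE 0")

# Baby step 1: rows satisfying the first set of conditions.
conn.execute("INSERT INTO praised SELECT * FROM sales WHERE division = 'east'")
# Baby step 2: rows satisfying the second set of conditions.
conn.execute("INSERT INTO praised SELECT * FROM sales WHERE salesperson = 'daughter'")

count = conn.execute("SELECT COUNT(*) FROM praised").fetchone()[0]
print(count)  # 3
```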
Temporary tables are just that: temporary. They go away once you close your connection, at least with SQLite. It's often more complicated than that with other DBMSs, although this is usually the default behaviour.
So temporary tables are used whenever a temporary table is required! (Just kidding.) Performance will often be better, as there is little or no disk I/O involved (which is frequently the cause of performance problems with databases).
Another use case is when you want to use what is in a permanent table without changing anything in the said table; you can then select the permanent table in a temp table, and keep your permanent table intact.
I'm planning on generating queries for SQLite that will involve many joins on 12 tables that will surpass the 64 table join limit in SQLite. (~250 table joins or possibly more) This will be running on android eventually. The purpose behind this is to have X amount of user defined fields in the result set depending on the report that is being generated.
Unfortunately I'm not a DBA and I do not know of an optimal way to achieve this.
So far I think the options are:
Use 2 temp tables to juggle the result set while joining the max amount possible. (My previous solution in SQLServer, fairly slow)
Produce result sets of a few columns and a key to join on and store them in n temp tables. (Where n is less than 64) Then join all the temp tables on their common key.
Create a single temp table and fill it up one insert or update at a time.
Don't do a big join, perform many selects instead and fill up some sort of data container.
Is there something else I should consider?
Per your comment on Mike's response, "the query to generate the report needs to join and rejoin many many times".
Frequently, when dealing with reports, you'll want to split your query into bite-size chunks, and store intermediary results in temporary tables where applicable.
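Option 2 from the question (materialize partial result sets into temp tables keyed on a common id, then join the few temp tables) might be sketched like this with Python's sqlite3 module; the schema names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative attribute tables, each keyed on a shared entity_id.
conn.execute("CREATE TABLE attr_a (entity_id INTEGER, a TEXT)")
conn.execute("CREATE TABLE attr_b (entity_id INTEGER, b TEXT)")
conn.execute("INSERT INTO attr_a VALUES (1, 'a1'), (2, 'a2')")
conn.execute("INSERT INTO attr_b VALUES (1, 'b1'), (2, 'b2')")

# Instead of one giant join, materialize intermediate chunks as temp tables...
conn.execute("CREATE TEMP TABLE chunk1 AS SELECT entity_id, a FROM attr_a")
conn.execute("CREATE TEMP TABLE chunk2 AS SELECT entity_id, b FROM attr_b")

# ...then join the (few) temp tables on the common key, staying under the limit.
rows = conn.execute(
    "SELECT c1.entity_id, c1.a, c2.b "
    "FROM chunk1 c1 JOIN chunk2 c2 ON c1.entity_id = c2.entity_id "
    "ORDER BY c1.entity_id"
).fetchall()
print(rows)  # [(1, 'a1', 'b1'), (2, 'a2', 'b2')]
```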
Also, your question makes it sound like you have an entity/attribute/value store and are trying to pivot the whole thing. If so, you may want to revisit that design anti-pattern, since it is probably the source of your problem.
I don't think you can get "fast" on any relational database platform when you're trying to join that many tables - any kind of built-in optimisation is going to give up the ghost. I would be likely to review my design when I saw as many as ten tables in a query.
I think your schema design needs to be revisited. 250+ tables in a schema (on a phone!) doesn't make sense to me - I run several enterprise apps in a single DB with 200+GB of data and there are still only 84 tables. And I never join all of them. Do all your tables have different columns? Really different? Could you post a few entries from sqlite_master?
Since your app is running on an Android device, I would guess it syncs with an enterprise-class database on a server somewhere. The real solution is to generate a de-normalized representation of the server data on the device database, so it can be more readily accessed.