I have 5000-6000 records in an SQLite table. Deleting all of them takes a very long time, freezes the screen, and the system starts releasing resources.
I tried it with an AsyncTask, but the problem remains. Can anyone tell me how to delete these thousands of records without blocking the app?
I am no expert in SQLite, but in general there are three ways to do this.
As everybody commented, truncate if you are going to delete all records. (SQLite has no TRUNCATE statement, but an unqualified DELETE FROM uses the same fast path.)
If you are going to delete the majority of the records, you can copy the rows you want to keep into a temp table, truncate your actual table, and finally insert the kept records from the temp table back into the actual table.
This one is what I use most of the time: use a TOP-XXX delete statement. In your case you can delete 200 records every 2 minutes (I am assuming you don't insert more than 200 records in 2 minutes). An AsyncTask is the right fit for that kind of approach.
In T-SQL I use the following SQL; adapting it is up to you:
DELETE FROM tUser
WHERE UserId IN (
    SELECT TOP 200 UserId
    FROM tUser
    WHERE LastLogin < GETDATE() - 120
)
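SQLite has no TOP, but the same batching works with a LIMIT subquery on rowid. Below is a minimal Android sketch, assuming a hypothetical table named events, that deletes in chunks of 200 on a background thread so the UI never blocks:

import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteStatement;

public final class BatchDeleter {
    // Delete all rows from the (assumed) "events" table in small batches
    // so no single transaction holds the database lock for long.
    public static void deleteAllInBatches(final SQLiteDatabase db) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // SQLite has no TOP; LIMIT a rowid subquery instead.
                SQLiteStatement stmt = db.compileStatement(
                        "DELETE FROM events WHERE rowid IN "
                      + "(SELECT rowid FROM events LIMIT 200)");
                int deleted;
                do {
                    db.beginTransaction();
                    try {
                        deleted = stmt.executeUpdateDelete();
                        db.setTransactionSuccessful();
                    } finally {
                        db.endTransaction();
                    }
                } while (deleted > 0);
                stmt.close();
            }
        }).start();
    }
}

Pacing the batches (the 2-minute spacing suggested above) would just be a Thread.sleep between iterations of the loop.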
If you want to delete all records in a table you could try dropping the table:
http://www.sqlite.org/lang_droptable.html
and later re-creating an empty table:
http://www.sqlite.org/lang_createtable.html
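On Android, that approach could look like the sketch below; the schema in the CREATE statement is only a placeholder for your real table definition:

import android.database.sqlite.SQLiteDatabase;

public final class TableReset {
    // Drop and re-create the table in one transaction. The schema below
    // is a placeholder; substitute your real table definition.
    public static void reset(SQLiteDatabase db) {
        db.beginTransaction();
        try {
            db.execSQL("DROP TABLE IF EXISTS events");
            db.execSQL("CREATE TABLE events ("
                     + "_id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)");
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
}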
Related
I hope someone can help me with this. I currently have an Android application with an SQLite database that can execute a txt file containing SQLite statements for amending the database. My issue is that when I use a large number of INSERT OR REPLACE statements, I get a DATABASE LOCKED AT LINE xxx error (every time I execute the same script it is a different line). The error seems to start occurring at around 1600 lines or more. The script is currently constructed like this for each line:
INSERT OR REPLACE INTO "TABLE003" VALUES('000001','Group1',NULL,'20140519163202','20140519163202');
I currently have 2700 lines I wish to execute. If I break the script down into chunks of about 500 lines each, I don't have an issue. Is this a known issue with large batches of INSERT OR REPLACE, and is there a workaround (other than breaking the script down into separate chunks of 500 lines each)?
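For reference, a hedged sketch of one common variant of the batching workaround mentioned above: executing the already-parsed statements inside a single transaction, which usually removes the per-statement journal and locking overhead. Reading and splitting the txt file is assumed to have happened elsewhere.

import android.database.sqlite.SQLiteDatabase;
import java.util.List;

public final class ScriptRunner {
    // Execute pre-parsed SQL statements in one transaction; the caller
    // is assumed to have split the txt file into individual statements.
    public static void run(SQLiteDatabase db, List<String> statements) {
        db.beginTransaction();
        try {
            for (String sql : statements) {
                db.execSQL(sql);
            }
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
}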
I have a table with 1400 rows. Every row has a blob field holding between 10 KB and 500 KB of data. I need to delete that table. It takes 3.5 minutes to delete all records and 3 minutes to drop the table. That's too long for the users.
How can I remove that table as fast as possible? (No rollback or safety needed, just remove it.)
I already tried the following.
1. Setting the page size:
sqlitedatabase.setPageSize(8000);
sqlitedatabase.execSQL("DROP TABLE IF EXISTS " + sTableName);
2. Deactivating the journal log, which did not work:
sqlitedatabase.rawQuery("PRAGMA journal_mode=OFF",null);
sqlitedatabase.execSQL("DROP TABLE IF EXISTS " + sTableName);
This doesn't work for me; the journal log, which I guess takes a lot of the time, is still being written to disk.
From the SQLite manual:
SQLite is slower than the other databases when it comes to dropping tables. This probably is because when SQLite drops a table, it has to go through and erase the records in the database file that deal with that table. MySQL and PostgreSQL, on the other hand, use separate files to represent each table so they can drop a table simply by deleting a file, which is much faster.
You do have the option of creating and storing multiple database files, which you can then manage from a single connection with ATTACH and DETACH queries.
Here's an example I just ran in SQLite3's command-line client:
sqlite> ATTACH 'example.sqlite' AS example;
sqlite> CREATE TABLE example.ex ( a INTEGER );
sqlite> INSERT INTO example.ex VALUES (1),(2),(3);
sqlite> SELECT * FROM example.ex;
1
2
3
sqlite> DETACH example;
sqlite>
Since the ex table is in its own file, example.sqlite, I can simply detach that DB from the connection and delete the entire file, which will be much faster.
Bear in mind that the number of DBs you can attach is fairly low (with default compile options: 7). I've also read that foreign keys aren't supported in this scenario, though that info might be out of date.
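On Android, the same idea might look like this sketch; the alias and file name are carried over from the example above, and the storage location is an assumption:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.util.Log;
import java.io.File;

public final class AttachedDbCleaner {
    // Detach the auxiliary database and delete its backing file, which is
    // far cheaper than erasing pages inside the main database file.
    public static void dropAttached(Context context, SQLiteDatabase db) {
        db.execSQL("DETACH DATABASE example");
        File file = new File(context.getFilesDir(), "example.sqlite");
        if (!file.delete()) {
            Log.w("AttachedDbCleaner", "Could not delete " + file);
        }
    }
}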
I found a solution that speeds up the deletion six times for me.
With
connection_read.enableWriteAheadLogging();
I drop my table in 30 seconds; without it, it takes the mentioned 3 minutes.
enableWriteAheadLogging switches to an alternative journaling mode that is much faster here.
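Put together, a minimal sketch (the table name is a parameter, as in the question):

import android.database.sqlite.SQLiteDatabase;

public final class FastDrop {
    // Enable write-ahead logging, then drop; WAL avoids the rollback
    // journal that otherwise records every page erased by the drop.
    public static void drop(SQLiteDatabase db, String tableName) {
        db.enableWriteAheadLogging();
        db.execSQL("DROP TABLE IF EXISTS " + tableName);
    }
}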
Why not just put the drop table code in a separate thread? The user shouldn't have to wait for the app to drop a table.
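For instance, a minimal sketch along those lines (sTableName as in the question):

import android.database.sqlite.SQLiteDatabase;

public final class BackgroundDrop {
    // Run the drop off the main thread so the UI stays responsive.
    public static void drop(final SQLiteDatabase db, final String tableName) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                db.execSQL("DROP TABLE IF EXISTS " + tableName);
            }
        }).start();
    }
}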
I am currently building a database recording events on the phone, but as I don't want it to become a huge database, 100 events are more than enough.
This keeps my database light and efficient.
Unfortunately, I don't see a way to limit the number of rows other than
String sql = "DELETE FROM myTable WHERE _id <= "+limitId;
which I could run when the user launches/leaves the app, but I am hoping for a better way to achieve this.
Is there a more convenient way?
If you are using a ContentProvider, you can implement the DELETE in its insert() method, deleting one old row for every row inserted:
String sql = "DELETE FROM myTable WHERE _id IN (SELECT min(_id) FROM myTable)";
I guess you mean limiting to the 100 newest events?
If so, there is no better way than what you did: check on every insert and delete the old entries if necessary.
It's just a matter of taste how or where you do the check. As flx mentioned, you could do it in the ContentProvider, or, as you probably did, in the BroadcastReceiver or Service where you actually add the new row. You could also set up a trigger on your table, but the main idea remains the same. Here is a link if you're interested in triggers:
http://www.tutorialspoint.com/sqlite/sqlite_triggers.htm
I successfully used the following BEFORE INSERT trigger to limit the number of rows stored in the SQLite database table locations. The database table acts as a cache in an Android application.
CREATE TRIGGER 'trigger_locations_insert'
BEFORE INSERT ON 'locations'
WHEN ( SELECT count(*) FROM 'locations' ) > '100'
BEGIN
DELETE FROM 'locations' WHERE '_id' NOT IN
(
SELECT '_id' FROM 'locations' ORDER BY 'modified_at' DESC LIMIT '100'
);
END
Meanwhile, I added a second trigger that allows me to INSERT OR UPDATE rows. The discussion on that topic can be found in another thread. The second trigger requires a VIEW on which each INSERT is executed.
CREATE VIEW 'locations_view' AS SELECT * FROM 'locations';
Since an INSERT is no longer executed on the TABLE locations but on the VIEW locations_view, the above trigger no longer works. If I apply the trigger to the VIEW, the following error message is thrown:
Failure 1 (cannot create BEFORE trigger on view: main.locations_view)
Question:
How can I change the above trigger to observe each INSERT on the VIEW, or do you recommend another way to limit the number of rows? I would prefer to handle this kind of operation within the database rather than running frontend code on my client.
Performance issues:
Although the limiter (the above trigger) works in general, it performs less than optimally. In fact, the database actions take so long that an ANR is raised. As far as I can see, the reason is that the limiter is called every time an INSERT happens. To optimize the setup, the bulk INSERT should be wrapped in a transaction and the limiter should run right afterwards. Is this possible? If you would like to help, please place optimization comments concerning the bulk INSERT in the original question; comments regarding the limiter are welcome here.
This type of trigger should work fine in conjunction with the other one. The problem appears to be that the SQL unnecessarily quotes the _id column: it selects the literal string '_id' for every row and compares it to the same literal string.
Removing the quotes around _id (both in the DELETE and in the sub-SELECT) should fix the problem.
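For illustration, the trigger with the quoting removed; unquoting modified_at and the numeric limit as well avoids the same string-literal pitfall in the ORDER BY and the WHEN comparison (that part goes beyond the fix described above):

-- Identifiers and the numeric limit are unquoted, so they are no
-- longer treated as string literals.
CREATE TRIGGER trigger_locations_insert
BEFORE INSERT ON locations
WHEN ( SELECT count(*) FROM locations ) > 100
BEGIN
    DELETE FROM locations WHERE _id NOT IN
    (
        SELECT _id FROM locations ORDER BY modified_at DESC LIMIT 100
    );
END;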
I want to get the number of NOT NULL records from my SQLite database table. Since I'm using autoincrement, the last inserted row ID won't give me the real number of records in the table: even if I delete a middle record, the next row is inserted with an ID higher than the last inserted record's.
The insert statement returns me the ID of the last inserted row, but I want this value on the fly.
Doing a count on the table beforehand should work. Simply query for the id column with a WHERE ... IS NOT NULL check, and on the returned cursor just call getCount().
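A small sketch of that, with the table and column names as placeholders:

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public final class RowCounter {
    // Count the rows whose id is not null; getCount() on the cursor
    // gives the number of matching rows. Names are placeholders.
    public static int count(SQLiteDatabase db) {
        Cursor cursor = db.rawQuery(
                "SELECT _id FROM myTable WHERE _id IS NOT NULL", null);
        try {
            return cursor.getCount();
        } finally {
            cursor.close();
        }
    }
}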
Just to be sure: you should never, ever manipulate the auto-increment counter in a production database. If you delete a record, the "gap" should stay there; it has no impact on performance or anything else. If you insert a new record into the gap, you can create a lot of trouble...
So you just want to find the number of rows? (There's no such thing as a "null record" as far as I'm aware.)
Can you not just do
select count(1) from YourTableName
or
select count(*) from YourTableName
? (Some databases are faster using one form or other... I don't know about sqlite.)