I hope someone can help me with this. I currently have an Android application with an SQLite database, and the app can execute a .txt file containing SQLite statements to amend the database. My issue is that when I use a large number of INSERT OR REPLACE statements I get the error DATABASE LOCKED AT LINE xxx (every time I execute the same query it is a different line). This error seems to start occurring at around 1,600 lines or more. The query is currently constructed like this for each line:
INSERT OR REPLACE INTO "TABLE003" VALUES('000001','Group1',NULL,'20140519163202','20140519163202');
I currently have 2,700 lines which I wish to execute. If I break the query down into batches of about 500 lines each I don't have the issue. Is this a known issue with large INSERT OR REPLACE queries, and is there a workaround (other than breaking the query down into separate queries of 500 lines each)?
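A common workaround, offered here only as a sketch rather than a confirmed fix for this exact error: execute the whole batch inside one transaction, so the database lock is taken and released once instead of once per statement. This assumes the lines of the .txt file have already been read into a list called statements and that db is an open android.database.sqlite.SQLiteDatabase (both names are placeholders):

db.beginTransaction();
try {
    for (String sql : statements) {
        db.execSQL(sql);   // one INSERT OR REPLACE per line of the file
    }
    db.setTransactionSuccessful();   // commit all 2,700 rows at once
} finally {
    db.endTransaction();             // rolls back if not marked successful
}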
Related
I've only observed this on Android 9, and possibly only on Samsung devices. I'm storing multiple JSON responses as serialized strings in my DB, later to be type-converted back into models using Moshi.
The query that causes this error is:
@Query("SELECT * FROM tasks")
public abstract Flowable<List<TaskEntity>> getAll();
The last instance had about 392,000 characters in total in the table, split into entries of roughly 5,500 characters each.
Why would the cursor have a problem with entries of ~11 KB each? Does the fact that I'm selecting * mean the cursor is grabbing the whole table into memory and not a single row at a time?
Why only Android 9?
Thanks.
Does the fact that I'm selecting * mean the cursor is grabbing the whole table into memory and not a single row at a time?
SELECT * means you are retrieving all columns. A SELECT without a WHERE clause (or other types of constraints) means that you are retrieving all rows. So, SELECT * FROM tasks will attempt to retrieve the entire table contents into memory.
You could add @Transaction to this function, as that may help get past this error (see the sketch after the quoted documentation below). Quoting the documentation:
When used on a Query method that has a SELECT statement, the generated code for the Query will be run in a transaction. There are 2 main cases where you may want to do that:
If the result of the query is fairly big, it is better to run it inside a transaction to receive a consistent result. Otherwise, if the query result does not fit into a single CursorWindow, the query result may be corrupted due to changes in the database in between cursor window swaps.
If the result of the query is a POJO with Relation fields, these fields are queried separately. To receive consistent results between these queries, you also want to run them in a single transaction.
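For example, a minimal sketch of the DAO method from the question with the annotation added:

@Transaction
@Query("SELECT * FROM tasks")
public abstract Flowable<List<TaskEntity>> getAll();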
Even better would be to not load the entire table's contents into memory (and then convert every row into an entity object). Heap space is limited.
Why only Android 9?
No clue. I wouldn't worry about that — if you focus on retrieving less data, that will have benefits for all your users.
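If the full list really is needed, a hedged alternative is to page through it instead of loading everything at once. A sketch with hypothetical limit/offset parameters (the id column is an assumption about the schema):

@Query("SELECT * FROM tasks ORDER BY id LIMIT :limit OFFSET :offset")
public abstract Flowable<List<TaskEntity>> getPage(int limit, int offset);

Each page then stays comfortably inside a single CursorWindow.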
I am using an SQLCipher database, and I am tracking the lastModified time of the database file. My understanding is that the long value returned by lastModified() will change only if we update or add a value to the database. I am using a query to fetch (not modify) a value from the database, using the code below:
mDatabaseFileObj = mContext.getDatabasePath("xxx.db");
Log.i(""," "+mDatabaseFileObj.lastModified());
mSQLiteDatabase = net.sqlcipher.database.SQLiteDatabase.openOrCreateDatabase(...)
Log.i(""," "+mDatabaseFileObj.lastModified());
mCursor = mSQLiteDatabase.rawQuery(query, null);
if (mCursor.moveToFirst()) {
    do {
        ....
    } while (mCursor.moveToNext());
}
In this I have printed two logs: the first before creation of the mSQLiteDatabase object and the second after it. According to the documentation for lastModified(), both printed values should be the same, since I am only querying, not modifying, the database. But the value is changing.
I couldn't sort out this problem. Please give your thoughts on this.
Additional info: I placed this code snippet in a function and called that function 5 times. Strangely, only the first call prints different values; for the remaining 4 calls the printed values are the same.
Thanks in Advance
Deepak,
openOrCreateDatabase is not a read-only operation. In particular, the wrapping library, which is based on the Android SQLite library, manipulates a table called android_metadata when the database is opened. This can cause the timestamp to change, because the database file is actually modified during open.
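A minimal sketch to confirm this, assuming SQLCipher for Android's openOrCreateDatabase(File, String, CursorFactory) overload; the file name and password are placeholders taken from the question. The timestamp should jump across the open, but not across the query itself:

net.sqlcipher.database.SQLiteDatabase.loadLibs(mContext);   // SQLCipher native libs

File dbFile = mContext.getDatabasePath("xxx.db");
long beforeOpen = dbFile.lastModified();

net.sqlcipher.database.SQLiteDatabase db =
        net.sqlcipher.database.SQLiteDatabase.openOrCreateDatabase(
                dbFile, "password", null);    // password is assumed

long afterOpen = dbFile.lastModified();       // expected to differ: android_metadata was written

android.database.Cursor c = db.rawQuery("SELECT count(*) FROM sqlite_master", null);
c.moveToFirst();
c.close();

long afterQuery = dbFile.lastModified();      // expected to equal afterOpen: a plain query does not write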
mDatabaseFileObj is a reference to a File object from the OS; don't confuse it with the database itself. SQLite databases are implemented on top of the file system, so on the first line you are printing when this file was last modified, on the second line you are altering the file by opening it, and on the third line you are printing the time again. Going by file-system behaviour you will get a different timestamp, and this says nothing about whether the content inside the file was changed.
Just imagine it like this: open a .txt file in Windows and save it again without changing anything; note the time before and after, and they will be different.
Hope this helps.
I have more than 5,000-6,000 records in an SQLite table. When I delete all these records it takes a very long time, the screen pauses, and the app starts releasing resources.
I tried it with AsyncTask, but I still have the same problem. Can anyone tell me how I should delete these thousands of records without blocking the app?
I am no expert in SQLite, but in general there are 3 ways to do that:
1. As everybody commented: truncate, if you are going to delete all records. (SQLite has no TRUNCATE statement, but a DELETE without a WHERE clause uses the same fast path, the truncate optimization.)
2. If you are going to delete the majority of the records, you can store the rows you want to keep in a temp table, truncate the actual table, and finally insert all the records from the temp table back into the actual table.
3. This one is what I use most of the time: use a top-N DELETE statement. In your case you could delete 200 records every 2 minutes (I am assuming you don't insert more than 200 records in 2 minutes). AsyncTask is the way to go for that kind of approach.
In T-SQL I use the following SQL; adapting it is up to you:
Delete From tUser
where UserId in (
    Select top 200 UserId
    From tUser
    where LastLoggin < GetDate()-120
)
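SQLite has no TOP, but a hedged equivalent uses a subquery with LIMIT; table and column names are carried over from the T-SQL sketch above, and datetime('now', '-120 days') stands in for GetDate()-120:

Delete From tUser
where UserId in (
    Select UserId
    From tUser
    where LastLoggin < datetime('now', '-120 days')
    Limit 200
)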
If you want to delete all records in a table you could try dropping the table:
http://www.sqlite.org/lang_droptable.html
and later re-creating an empty table:
http://www.sqlite.org/lang_createtable.html
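A hedged sketch with a hypothetical table name and schema; note that dropping a table also discards any indexes and triggers defined on it, so those have to be re-created as well:

DROP TABLE IF EXISTS records;
CREATE TABLE records (
    _id INTEGER PRIMARY KEY AUTOINCREMENT,
    payload TEXT
);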
I successfully used the following BEFORE INSERT trigger to limit the number of rows stored in the SQLite database table locations. The database table acts as a cache in an Android application.
CREATE TRIGGER 'trigger_locations_insert'
BEFORE INSERT ON 'locations'
WHEN ( SELECT count(*) FROM 'locations' ) > '100'
BEGIN
DELETE FROM 'locations' WHERE '_id' NOT IN
(
SELECT '_id' FROM 'locations' ORDER BY 'modified_at' DESC LIMIT '100'
);
END
Meanwhile, I added a second trigger that allows me to INSERT OR UPDATE rows. (The discussion on that topic can be found in another thread.) The second trigger requires a VIEW on which each INSERT is executed.
CREATE VIEW 'locations_view' AS SELECT * FROM 'locations';
Since an INSERT is no longer executed on the TABLE locations but on the VIEW locations_view, the above trigger no longer works. If I apply the trigger to the VIEW, the following error message is thrown:
Failure 1 (cannot create BEFORE trigger on view: main.locations_view)
Question:
How can I change the above trigger to observe each INSERT on the VIEW? Or do you recommend another way to limit the number of rows? I would prefer to handle this kind of operation within the database, rather than running frontend code on my client.
Performance issues:
Although the limiter (the above trigger) works in general, it performs less than optimally. Actually, the database actions take so long that an ANR is raised. As far as I can see, the reason is that the limiter is called every time an INSERT happens. To optimize the setup, the bulk INSERT should be wrapped in a transaction and the limiter should run right after. Is this possible? If you'd like to help, please place optimization comments concerning the bulk INSERT in the original question; comments regarding the limiter are welcome here.
This type of trigger should work fine in conjunction with the other one. The problem appears to be that the SQL is unnecessarily quoting the _id field. It is selecting the literal string "_id" for every row and comparing that to the same literal string.
Removing the quotes around '_id' (both in the DELETE and in the sub-SELECT) should fix the problem.
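A sketch of the trigger with those quotes removed. I have also unquoted modified_at in the ORDER BY, since 'modified_at' there is likewise a string literal, which would make the ordering (and therefore the LIMIT) meaningless:

CREATE TRIGGER trigger_locations_insert
BEFORE INSERT ON locations
WHEN ( SELECT count(*) FROM locations ) > 100
BEGIN
    DELETE FROM locations WHERE _id NOT IN
    (
        SELECT _id FROM locations ORDER BY modified_at DESC LIMIT 100
    );
END;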
I created a table in the database that has data like this:
Now I have written a query that updates the contact field by concatenating the name and email fields:
UPDATE MyContacts SET contact=(SELECT name||'--'||email FROM MyContacts);
The problem is that after executing the query the table looks like this:
Why is it happening like this? In Oracle I never faced this problem. Please help me. Thank you.
Right now you're not specifying the correct row to retrieve the values from. Try something like this:
UPDATE MyContacts SET contact = name||'--'||email;
EDIT: Glad it worked. Your first issue was that your sub-select uses a SELECT statement with no WHERE clause (SELECT name||'--'||email FROM MyContacts returns 3 rows, one per row in the table). One possible behaviour would be for SQLite to throw an error saying you've tried to set a column to the result of an expression that returns more than 1 row; I've seen this with MySQL and SQL Server. However, in this case SQLite appears to just use the very first value returned. Then your second issue kicks in: since you don't narrow your UPDATE statement with a WHERE clause, it uses that first value to update every single row, which is what you see.
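To illustrate with hypothetical data (three made-up rows, since the original screenshots are not reproduced here):

-- Hypothetical sample data
CREATE TABLE MyContacts (name TEXT, email TEXT, contact TEXT);
INSERT INTO MyContacts (name, email) VALUES
    ('a', 'a@x.com'), ('b', 'b@x.com'), ('c', 'c@x.com');

-- Original statement: the sub-select returns 3 rows, SQLite silently keeps
-- only the first ('a--a@x.com'), and the un-filtered UPDATE writes that one
-- value into every row.
UPDATE MyContacts SET contact = (SELECT name||'--'||email FROM MyContacts);

-- Corrected statement: name and email are evaluated per row.
UPDATE MyContacts SET contact = name||'--'||email;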