My Android application has a preprocessing step at startup that loads the data the app needs from a database. The SQLite database is about 40 MB, and preprocessing it takes so long that the user has to wait about a minute before the application is usable. Is there any way to improve the performance of my app? The DB operations are mostly SELECTs like this:
Cursor mCursor = myDataBase.rawQuery("SELECT SUM(TimeTaken) as _time from AssessmentAttempted where AssesmentId IN(" + assessmentIds + ")", null);
To improve performance in the preprocessing phase, wrap all operations made against the DB in a single SQL transaction. This especially reduces insert and update times.
myDataBase.beginTransaction();
try {
    // make all the DB operations here
    myDataBase.setTransactionSuccessful();
} catch (SQLException e) {
    // error in the middle of the database transaction
} finally {
    myDataBase.endTransaction();
}
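That said, since the preprocessing here is mostly SELECT queries, a transaction alone may not be enough. An index on the column filtered by the IN(...) clause can speed up the lookups considerably. A minimal sketch, assuming the AssessmentAttempted table from the question and that no such index exists yet (the index name is made up; the column name AssesmentId is spelled as it appears in the question's query):

// Run once, e.g. right after creating the table; IF NOT EXISTS
// makes it safe to repeat on every startup.
myDataBase.execSQL(
        "CREATE INDEX IF NOT EXISTS idx_attempted_assessment "
        + "ON AssessmentAttempted(AssesmentId)");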
Working on database optimization, we split our database into two databases: db and db2. A low-priority background thread inserts into db2. Some of the queries on db are joined with db2, so we need to attach db2 to db. We enable WAL because we want it all to be multithreaded.
SQLiteDatabase db = SQLiteDatabase.openDatabase(dbPath, ...);
db.enableWriteAheadLogging();
db.execSQL("attach " + db2path + " as db2");
To understand the problem, we ran a simple two-thread test. The first thread inserts rows into db, and the second thread selects from db. Each thread prints the time spent inside the database call and the time delta since the previous loop.
thread 1 loop:                         | thread 2 loop:
t1 = getTime()                         | t1 = getTime()
db.execSQL("insert into ....");        | db.execSQL("select ....");
t2 = t3  // t3 from the previous loop  | t2 = t3
t3 = getTime()                         | t3 = getTime()
log("i: "+(t3-t1)+", delta: "+(t1-t2)) | log("s: "+(t3-t1)+", delta: "+(t1-t2))
What we see is that the selecting thread blocks the inserting thread. This can be emphasized by doing a huge (and slow) select and a tiny insert: the insert time and the delta increase to approximately the duration of the select. If we don't run the slow select thread, the insert thread speeds up considerably.
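For reference, the insert thread's loop looks roughly like this in Java (a sketch: the table name, log tag, and running flag are illustrative, and db is the SQLiteDatabase opened above):

Thread writer = new Thread(new Runnable() {
    @Override
    public void run() {
        long prevEnd = SystemClock.uptimeMillis();
        while (running) {                                    // illustrative stop flag
            long t1 = SystemClock.uptimeMillis();
            db.execSQL("INSERT INTO test (x) VALUES (1)");   // tiny insert
            long t3 = SystemClock.uptimeMillis();
            // i: time inside the insert; delta: gap since the previous insert ended
            Log.d("waltest", "i: " + (t3 - t1) + ", delta: " + (t1 - prevEnd));
            prevEnd = t3;
        }
    }
});
writer.start();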
Digging into the source code of SQLiteDatabase I found the following lines in SQLiteDatabase#enableWriteAheadLogging():
// make sure this database has NO attached databases because sqlite's write-ahead-logging
// doesn't work for databases with attached databases
if (mHasAttachedDbsLocked) {
if (Log.isLoggable(TAG, Log.DEBUG)) {
Log.d(TAG, "this database: " + mConfigurationLocked.label
+ " has attached databases. can't enable WAL.");
}
return false;
}
Now to my questions:
What is the meaning of the comment? What exactly doesn't work? Is it some old code left behind? The documentation of ATTACH DATABASE (https://www.sqlite.org/lang_attach.html) explicitly indicates that ATTACH + WAL is OK (with a small caveat).
Why is the Android binding code trying to protect us from SQLite internal issues? The way I see it, it's supposed to be a thin interface layer.
Edit: I reported this as a bug in AOSP issue tracker. Will update if an answer appears there.
WAL allows readers and a writer at the same time, but only from different connections. You should never use the same connection (the SQLiteDatabase object) from multiple threads.
The WAL setting is permanent; you do not need to execute it every time after opening the database.
The meaning of the comment is exactly what it says. (Nobody guarantees that this comment is correct.)
Sometimes, the Android framework tries to be clever. But you can just execute PRAGMA journal_mode = WAL manually.
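A minimal sketch of doing that by hand. Note that this PRAGMA returns a result row, so rawQuery is used rather than execSQL (the log tag is illustrative):

// Switch the database to WAL manually. The journal mode is stored in
// the database file, so this persists across openings.
Cursor c = db.rawQuery("PRAGMA journal_mode = WAL", null);
if (c.moveToFirst()) {
    // the row contains the journal mode now in effect, e.g. "wal"
    Log.d("db", "journal_mode: " + c.getString(0));
}
c.close();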
I am developing an Android application in which I need to download a JSON string and save it in an SQLite database in a specific format (from my perspective, I have no other data-storage option). This is my table structure:
problem_table(pid INTEGER PRIMARY KEY,
num TEXT, title TEXT,
dacu INTEGER,
verdict_series TEXT)
At launch I need almost 4200 rows to be entered into the database table. I am working on the emulator, and when I launch the app it works at first, but it seems to freeze for a while once the database manipulation begins. Eventually the app manages to insert all the rows, but it takes quite a lot of time; at one point the system even shows its "not responding" dialog.
So how can I reduce the time and memory cost, do this in a more optimized way, or avoid this temporary freeze?
N.B.: I haven't checked it on any real device yet for lack of access. My emulator uses 512 MB of RAM and a 48 MB heap.
Don't do your database manipulations on the UI thread. Use an AsyncTask, Thread, Service, or whatever, but not the UI thread.
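A minimal sketch, assuming db is your open database, problems is the parsed list, and insertAllRows() is a hypothetical helper that performs the transactional inserts shown in the next answer:

new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... params) {
        insertAllRows(db, problems);  // hypothetical helper doing the batch insert
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // runs on the UI thread again: refresh the list, hide the spinner, etc.
    }
}.execute();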
I solved it using @Jakobud's answer, given here.
Answer:
Normally, each time db.insert() is used, SQLite creates a transaction (and resulting journal file in the filesystem). If you use db.beginTransaction() and db.endTransaction() SQLite commits all the inserts at the same time, dramatically speeding things up.
Here is some pseudo code from: Batch insert to SQLite database on Android
db.beginTransaction();
try
{
    for each record in the list
    {
        do_some_processing();
        if (record represents a valid entry)
        {
            db.insert(SOME_TABLE, null, SOME_VALUE);
        }
        some_other_processing();
    }
    db.setTransactionSuccessful();
}
catch (SQLException e)
{
    // log or handle the error; the transaction will be rolled back
}
finally
{
    db.endTransaction();
}
I created a calendar application for Android. When I store one month of data (daily plans) in a single operation, the emulator shows a "not responding" alert, or it takes about 5 minutes to store the data. How can I store the data in the database quickly?
It is always recommended to use SQLite transactions when storing large amounts of data. A transaction uses a single journal file for all of its SQLite operations, which makes the whole process finish much faster.
A simple Transaction would look like:
db.beginTransaction();
try {
    // perform SQL adds/edits/deletes here
    db.setTransactionSuccessful();
} catch (SQLException e) {
    // error in the middle of the database transaction
} finally {
    db.endTransaction();
}
P.S. If it's taking 5 minutes without transactions, I expect it to complete within about 10 seconds when using them.
Also, if the app does not respond to UI gestures, you are doing database operations on the main thread; move them to a background thread, preferably using an AsyncTask.
I am developing an Android app that needs to fetch a huge amount of data from a server, approximately 10,000 records with 10 fields each, and store it in my local DB. I implemented this by downloading the data as JSON, parsing it, and inserting the rows into the DB one by one. Downloading the data takes little time, but inserting it takes much more; after a while I realized that because I insert one row at a time, the insert operations loop over the total number of records. I looked for alternatives but could not find a way to do this, so I would appreciate suggestions and snippets to help me achieve it.
Thank you.
Use a transaction to wrap multiple inserts into one operation; that's a lot faster: Improve INSERT-per-second performance of SQLite?
List<Item> list = getDataFromJson();
SQLiteDatabase db = getDatabase();

db.beginTransaction();
try {
    // do all the inserts inside the transaction
    for (Item item : list) {
        db.insert(table, null, item.getContentValues());
    }
    // nothing has been committed yet
    db.setTransactionSuccessful();
} finally {
    // the commit happens here (if the transaction was marked successful);
    // otherwise everything is rolled back
    db.endTransaction();
}
I made my own ContentProvider into which I put a lot of data at once with multiple inserts.
The app receives the data from an external source, and at the moment I receive about 30 items (and therefore do 30 inserts).
Now I noticed that this takes a lot of precious time (about 3 seconds, 100 ms per insert).
How can I improve the speed of the ContentProvider? I already tried to bulkInsert them all together, but then it took up to 5 seconds.
Thanks in advance.
Wrap all the inserts in bulkInsert in a transaction.
Example:
@Override
public int bulkInsert(Uri uri, ContentValues[] values) {
    SQLiteDatabase sqlDB = mDB.getWritableDatabase();
    int numInserted = 0;
    sqlDB.beginTransaction();
    try {
        for (ContentValues cv : values) {
            long newID = sqlDB.insertOrThrow(table, null, cv);
            if (newID <= 0) {
                throw new SQLException("Failed to insert row into " + uri);
            }
        }
        sqlDB.setTransactionSuccessful();
        getContext().getContentResolver().notifyChange(uri, null);
        numInserted = values.length;
    } finally {
        sqlDB.endTransaction();
    }
    return numInserted;
}
bulkInsert does not use a transaction by default; the default implementation just calls insert for each row:
Override this to handle requests to insert a set of new rows, or the default implementation will iterate over the values and call insert(Uri, ContentValues) on each of them.
Doing the inserts in a transaction greatly improves the speed, since only one write to the actual database takes place:
db.beginTransaction();
try {
    // do the inserts
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
I once experimented with improving the speed of a batch of roughly 2000 writes, and this was the only big improvement I found.
Doing db.setLockingEnabled(false) gave, I think, about a 1% improvement, but then you must also make sure no other threads write to the DB. Removing redundant indexes can also give a minor boost if the table is huge.