I am working on an Android application using android:minSdkVersion="14". The application receives data as JSON from a server. The data received needs to be added to an SQLite table. If a row exists, all fields except for two have to be updated. If a row does not already exist in the table, it has to be inserted. I am looking for the most efficient approach in terms of performance.
The function insertWithOnConflict() has been considered, but it is not an option since, in the case of an update, it updates all the fields, including the two that should not be updated.
The function replace() is also not suitable.
I would opt for a SELECT to check if the row exists and then an INSERT or UPDATE, but I was wondering if I could optimize the procedure somehow.
Two approaches:
Change the database structure so that the table has only the server data. Put the local data (the two columns) in another table that references the server data table. When updating, just insert into the server data table with "replace" conflict resolution.
Do the select-then-insert/update logic (a sketch follows this list).
For performance in either case, use database transactions to reduce I/O. That is, wrap the database update loop in a transaction and only commit it when you're done with everything. (If the transaction becomes too large, split the loop into chunks of maybe a few thousand rows per transaction.)
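A minimal sketch of the second approach, assuming a hypothetical items table keyed by a server_id column: instead of a separate SELECT, the UPDATE is attempted first and, if it affected no row, the row is inserted. The two local-only columns are simply left out of the ContentValues so the UPDATE never touches them.
public void upsertFromServer(SQLiteDatabase db, List<ContentValues> rows) {
    db.beginTransaction();
    try {
        for (ContentValues values : rows) {
            String serverId = values.getAsString("server_id");
            // Try the update first; it returns the number of rows it changed.
            int updated = db.update("items", values, "server_id=?", new String[]{serverId});
            if (updated == 0) {
                // No existing row: insert it (the local-only columns keep their defaults).
                db.insert("items", null, values);
            }
        }
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }
}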
A nice solution I use is as follows:
long id = db.insertWithOnConflict(TABLE, null, contentValues, SQLiteDatabase.CONFLICT_IGNORE);
// -1 means the insert was ignored because the row already exists, so update it via its key
if (id == -1) db.update(TABLE, contentValues, "_id=?", new String[]{contentValues.getAsString("_id")});
This ensures the row exists and has the latest values.
My understanding of SQLite transactions on Android is based largely on this article. Its gist is that
if you do not wrap calls to SQLite in an explicit transaction it will
create an implicit transaction for you. A consequence of these
implicit transactions is a loss of speed.
That observation is correct: I started using transactions to fix just that issue, speed. In my own Android app I use a number of rather complex SQLite tables to store JSON data, which I manipulate via the SQLite JSON1 extension (I use SQLCipher, which has JSON1 built in).
At any given time I have to manipulate - insert, update or delete - rows in several tables. Given the complexity of the JSON, I do this with the help of temporary tables I create for each table manipulation. Each manipulation begins with SQL along the lines of
DROP TABLE IF EXISTS h1;
CREATE TEMP TABLE h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
Some manipulations require just one temp table, which I usually call h1; others need two, in which case I call them h1 and h2.
The entire sequence of operations in any single set of manipulations takes the form
begin transaction
manipulate Table 1, which
creates its own temp tables, h1[h2],
then extracts relevant existing JSON from Table 1 into the temps
manipulates h1[h2]
performs inserts, updates, deletes in Table 1
on to the next table, Table 2 where the same sequence is repeated
continue with a variable list of such tables - never more than 5
end transaction
My questions
does this sound like an efficient way to do things or would it be better to wrap each individual table operation in its own transaction?
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE calls. If I end up with h1[h2] temp tables that are pre-populated with data from manipulating Table(n - 1) when working with Table(n) then the updates on Table(n) will go totally wrong. I am assuming that the DROP TABLE bit I have is taking care of this issue. Am I right in assuming this?
I have to admit to not being an expert with SQL, even less so with SQLite and quite a newbie when it comes to using transactions. The SQLite JSON extension is very powerful but introduces a whole new level of complexity when manipulating data.
The main reason to use transactions is to reduce the overheads of writing to the disk.
So if you don't wrap multiple changes (inserts, deletes and updates) in a transaction, then each will result in the database being written to disk, with the overheads that involves.
If you wrap them in a transaction, the changes are written to disk only when the transaction is completed (note that, if you use the SQLiteDatabase beginTransaction/endTransaction methods, as you should, you must call the setTransactionSuccessful method before the endTransaction method).
That is, the SQLiteDatabase methods differ from doing this via pure SQL, where you would begin the transaction and then end/commit it yourself; without setTransactionSuccessful, the SQLiteDatabase methods automatically roll the transaction back.
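In code, the method-based pattern just described looks roughly like this (db is the SQLiteDatabase; the work inside the try block is a placeholder):
db.beginTransaction();               // roughly BEGIN TRANSACTION in plain SQL
try {
    // ... perform the inserts, updates and deletes here ...
    db.setTransactionSuccessful();   // mark the transaction as OK to commit
} finally {
    db.endTransaction();             // commits if marked successful, otherwise rolls back
}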
That said, the statement:
if you do not wrap calls to SQLite in an explicit transaction it will
create an implicit transaction for you. A consequence of these
implicit transactions is a loss of speed
basically reiterates:
Any command that changes the database (basically, any SQL command
other than SELECT) will automatically start a transaction if one is
not already in effect. Automatically started transactions are
committed when the last query finishes.
(from SQL As Understood By SQLite - BEGIN TRANSACTION), i.e. it's not Android specific.
does this sound like an efficient way to do things or would it be
better to wrap each individual table operation in its own transaction?
Doing all the operations in a single transaction will be more efficient as there is just the single write to disk operation.
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE
calls. If I end up with h1[h2] temp tables that are pre-populated with
data from manipulating Table(n - 1) when working with Table(n) then
the updates on Table(n) will go totally wrong. I am assuming that the
DROP TABLE bit I have is taking care of this issue. Am I right in
assuming this?
Dropping the tables will ensure data integrity (i.e. you should, by the sound of it, do this). You could also use:
CREATE TEMP TABLE IF NOT EXISTS h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
DELETE FROM h1;
I have a table in which, for the rows with specified ids, I need to change a value, while the previously selected rows should be reset.
Do I need to reset the whole table and then update the specified rows, or is there any option to update the table with only one query?
I'm using Room persistence on Android.
As with any standard database, Room's Update and Delete are separate operation types.
Maybe you can try a trigger if you need a mixed operation (thread about triggers).
But for what reason do you have to execute these two operations in the same query?
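If both operations really are one logical update, a single UPDATE with a CASE expression can do it in one query. A minimal sketch, assuming a hypothetical item entity with id and selected columns:
import androidx.room.Dao;
import androidx.room.Query;
import java.util.List;

@Dao
public interface ItemDao {
    // Resets "selected" on every row and sets it only for the given ids - one statement.
    @Query("UPDATE item SET selected = CASE WHEN id IN (:ids) THEN 1 ELSE 0 END")
    void updateSelection(List<Long> ids);
}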
I already have an SQLite database set up which I am using as a cache for the Android application. The application does an HTTP request and gets back a list of objects which I can insert into the db. After the first request, if I do any more requests, how do I do all of the following in a better way:
1) insert all new objects from the list
2) update all objects that were already in the db
3) delete all rows that were not there in the latest list of objects.
I know that options 1 and 2 can be done using the "INSERT OR UPDATE" query. How can I manage the 3rd option efficiently?
Right now my approach is to delete all from table and then insert all. But that isn't very efficient. Any ideas how to improve it?
For that you can use the ids of the rows: first retrieve all the rows you want to delete using a SELECT query and add them to a temporary ArrayList, then loop over the ArrayList and delete those rows using a DELETE query.
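A rough sketch of that approach, assuming a hypothetical cache table with an id column, where latestIds holds the ids present in the newest server response:
public void deleteStaleRows(SQLiteDatabase db, Set<Long> latestIds) {
    List<Long> staleIds = new ArrayList<>();
    Cursor cursor = db.query("cache", new String[]{"id"}, null, null, null, null, null);
    try {
        while (cursor.moveToNext()) {
            long id = cursor.getLong(0);
            if (!latestIds.contains(id)) {
                staleIds.add(id);   // row no longer present on the server
            }
        }
    } finally {
        cursor.close();
    }
    for (Long id : staleIds) {
        db.delete("cache", "id=?", new String[]{String.valueOf(id)});
    }
}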
You should do your operations using the applyBatch() method of the ContentProvider (http://developer.android.com/reference/android/content/ContentProvider.html#applyBatch(java.util.ArrayList)).
You can run this method asynchronously on a separate thread so that you do not block anything else. You will have to create a list of ContentProviderOperations. In fact, you only need to specify the ones you need to insert or update within the ArrayList, and implement applyBatch() such that it automatically deletes the rest of the entries in the database.
To answer your question about how to delete the entries that are not in the latest list, the logical approach is to search through your data sequentially and delete the ones that no longer need to exist.
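A minimal sketch of such a batch, assuming a hypothetical provider with authority com.example.provider, an items URI and a server_id key column; the delete operations cover the rows missing from the latest response:
import android.content.ContentProviderOperation;
import android.content.ContentResolver;
import android.content.ContentValues;
import android.content.OperationApplicationException;
import android.net.Uri;
import android.os.RemoteException;
import java.util.ArrayList;
import java.util.List;

static final Uri ITEMS_URI = Uri.parse("content://com.example.provider/items");

void syncCache(ContentResolver resolver, List<ContentValues> freshRows, List<String> staleServerIds)
        throws RemoteException, OperationApplicationException {
    ArrayList<ContentProviderOperation> ops = new ArrayList<>();
    // Insert/update the rows from the latest HTTP response.
    for (ContentValues values : freshRows) {
        ops.add(ContentProviderOperation.newInsert(ITEMS_URI).withValues(values).build());
    }
    // Delete the rows that are no longer present on the server.
    for (String staleId : staleServerIds) {
        ops.add(ContentProviderOperation.newDelete(ITEMS_URI)
                .withSelection("server_id=?", new String[]{staleId})
                .build());
    }
    resolver.applyBatch("com.example.provider", ops); // one batch, one provider round-trip
}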
I guess the intention is to refresh the HTTP request result set saved in the database. So I think the most efficient way is to do a transaction or batch operation that deletes all rows from the table first and then inserts the new rows. A transaction might be better, so that the resulting rows are either all new or all old, but not mixed.
I use this function to insert data into the SQLite Android data base:
public long insertAccount(String code,String name,int s3,int s4,String s5,String s6,int s7,
int s8,int s9,int s10,int s11,String s12,String s13,int s14,int s15,int s16) {
//container and place in it the information you want inserted, updated, etc.
ContentValues initialValues = new ContentValues();
initialValues.put(Code, code);
initialValues.put(Name,name);
initialValues.put(Type, s3);
initialValues.put(Level1, s4);
initialValues.put(Father, s5);
initialValues.put(ACCCurr,s6);
initialValues.put(AccNat, s7);
initialValues.put(LowLevel, s8);
initialValues.put(DefNum, s9);
initialValues.put(AccClass, s10);
initialValues.put(SubClass, s11);
initialValues.put(SSClass1, s12);
initialValues.put(SSClass2, s13);
initialValues.put(Stype1, s14);
initialValues.put(Stype2, s15);
initialValues.put(Stype3, s16);
return db.insert(DATABASE_TABLE, null, initialValues);
}
But this takes a lot of time when inserting about 70,000+ rows! How can I speed up the insertion into the database, and after the insert is done, how can I apply an update to it?
Some options:
Prepopulate your database. See "Ship an application with a database"
Use transactions to reduce the time waiting for I/O. See e.g. "Android SQLite database: slow insertion". Likely you cannot wrap all 70k rows in a single transaction but something like 100..1000 inserts per transaction should be doable, cutting the cumulative I/O wait time by orders of magnitude.
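A rough sketch of the chunked-transaction idea, sitting in the same helper class as insertAccount() (the 500-row chunk size and the rows list of ContentValues are assumptions):
static final int CHUNK_SIZE = 500;

void bulkInsert(List<ContentValues> rows) {
    int inChunk = 0;
    db.beginTransaction();
    try {
        for (ContentValues values : rows) {
            db.insert(DATABASE_TABLE, null, values);
            if (++inChunk == CHUNK_SIZE) {
                // Commit this chunk and start the next one.
                db.setTransactionSuccessful();
                db.endTransaction();
                db.beginTransaction();
                inChunk = 0;
            }
        }
        db.setTransactionSuccessful();
    } finally {
        db.endTransaction();
    }
}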
Inserting into SQLite on Android using PHP? How would that be possible using PHP on an Android phone? I am sorry, I didn't get that part.
Anyway, I believe you have written the Java code above and you have 70k+ records that you want to insert into your db.
Inserting a bulk of records into any db is called a "bulk insert"; the idea is to create as few transactions as possible and do all the inserts in one shot. Relational databases like SQL Server and Oracle have specific APIs for this, but in SQLite the plain old idea is to make a single transaction with a bunch of data.
Check out this article, which uses the same technique and also explains it quite well: http://www.techrepublic.com/blog/software-engineer/turbocharge-your-sqlite-inserts-on-android/
You have to use a transaction to do the insertion in one go. You can use this:
//before insertion
db.beginTransaction();
try {
    //====do insertion here
    db.setTransactionSuccessful();   // mark the work as successful so it gets committed
} finally {
    //after insertion
    db.endTransaction();             // commits, or rolls back if not marked successful
}
The Android app that I am currently working on dynamically adds columns to an SQLite database. The problem I have is that I cannot figure out a way to remove these columns from the database.
If I add column A, B, C, D, and E to the database, is it possible to later remove column C?
I have done a lot of looking around and the closest thing I could find was a solution that requires building a backup table and moving all the columns (except the one to be deleted) into that backup table.
I can't figure out how I would do this, though. I add all the columns dynamically so their names are not defined as variables in my Java code. There doesn't seem to be a way to retrieve a column name by using Android's SQLiteDatabase.
SQLite has limited ALTER TABLE support that you can use to add a column to the end of a table or to change the name of a table.
If you want to make more complex changes in the structure of a table, you will have to recreate the table. You can save existing data to a temporary table, drop the old table, create the new table, then copy the data back in from the temporary table.
For example, suppose you have a table named "t1" with columns names "a", "b", and "c" and that you want to delete column "c" from this table. The following steps illustrate how this could be done:
BEGIN TRANSACTION;
CREATE TEMPORARY TABLE t1_backup(a,b);
INSERT INTO t1_backup SELECT a,b FROM t1;
DROP TABLE t1;
CREATE TABLE t1(a,b);
INSERT INTO t1 SELECT a,b FROM t1_backup;
DROP TABLE t1_backup;
COMMIT;
SQLite doesn't support a way to drop a column in its SQL syntax, so it's unlikely to show up in a wrapper API. SQLite often doesn't support all the features that traditional databases do.
The solutions you've identified make sense and are ways to do it. Ugly, but valid ways to do it.
You can also 'deprecate' the columns and not use them by convention in newer versions of your app. That way older versions of your app that depend on column C won't break.
Oh... just noticed this comment:
The app is (basically) an attendance tracking spreadsheet. You can add
a new "event" and then indicate the people that attended or didn't.
The columns are the "events".
Based on that comment you should just create another table for your events and link to it from your other table(s). You should never have to add columns to support new domain objects like that. Each logical domain object should be represented by its own table. E.g. user, location, event...
I was writing this initially; I'll keep it in case you're interested:
Instead of dynamically adding and removing columns you should consider using an EAV data model for that part of your database that needs to be dynamic.
EAV data models store values as name/value pairs and the db structure never needs to change.
Based on your comment below about adding a column for each event, I'd strongly suggest creating a second table in which each row will represent an event, and then tracking attendance by storing the user row id and the id of the event row in the attendance table. Continually piling columns onto the attendance table is a definite anti-pattern.
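A minimal sketch of that two-table design, with hypothetical table and column names:
// One row per event, and one attendance row per (person, event) pair,
// instead of one column per event on the attendance table.
db.execSQL("CREATE TABLE IF NOT EXISTS event ("
        + "_id INTEGER PRIMARY KEY AUTOINCREMENT, "
        + "name TEXT, "
        + "event_date TEXT)");
db.execSQL("CREATE TABLE IF NOT EXISTS attendance ("
        + "person_id INTEGER NOT NULL, "
        + "event_id INTEGER NOT NULL, "
        + "attended INTEGER NOT NULL DEFAULT 0, "
        + "PRIMARY KEY (person_id, event_id))");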
With regards to how to find out about the table schema, you can query the sqlite_master table as described in this other SO question - Is there an SQLite equivalent to MySQL's DESCRIBE [table]?
As per the SQLite FAQ, there is only limited support for the ALTER TABLE SQL command. So the only way is to save the existing data to a temporary table, drop the old table, create the new table, then copy the data back in from the temporary table.
Also, you can get the column names from the database using a query. Any query, say "SELECT * FROM ...", gives you a Cursor object. You can use the method
String getColumnName(int columnIndex);
or
String[] getColumnNames();
to retrieve the names of the columns.
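For example, assuming db is an open SQLiteDatabase and a hypothetical table named attendance:
// Quick sketch: read the column names of the table via any query over it.
Cursor cursor = db.rawQuery("SELECT * FROM attendance LIMIT 1", null);
String[] columnNames = cursor.getColumnNames();   // e.g. ["_id", "person_id", ...]
cursor.close();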