Just curious about the best practice for syncing data from a database to an Android tablet.
Tables:
- Part1
- Part2
- Part3
- Part4
- Part5
Whenever I open the app on the tablet I grab the latest lists from the database, truncate the tables, and re-add the records. Each table consists of 400 records, and it takes around 60.45 seconds per table to grab the data from the server and insert it, so with 5 tables the whole sync takes around 5 minutes. Is there a better way to achieve efficient syncing for what I am doing? After I grab the data from the server, instead of truncating the table I've tried checking whether each record exists first before adding it, but that didn't help with the time.
What I am currently doing: I get the JSON list from the API server, truncate the table, and add the rows back. Pretty time-consuming with 5 tables of 500 records each.
libraryApp = (LibraryApp) act.getApplication();
List<Pair> technicians = getJsonData("get_technicians");
if (technicians.size() > 0) {
    libraryApp.getDataManager().emptyTechnicianTable(); // truncate current table
    // add technicians back to the database
    for (Pair p : technicians) {
        libraryApp.getDataManager().saveTechnician(new Technician((Integer) p.key(), (String) p.value()));
    }
}
Given the limited information provided I would try the following:
(1) Have your server keep a record of when the table you are updating was last "put" on the server. I have no idea what backend language you are using, so I cannot make a suggestion, but it should be really easy to keep a lastupdated timestamp.
With this timestamp you will be able to tell if the version of the table on your server is more recent than the version on your mobile device.
(2) Using an AsyncTask, download the data you need. I am not sure if all 5 tables are in the same activity, in separate activities, in fragments, or something else. However, the general idea is as follows:
private class GetTableData extends AsyncTask<Void, Void, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        // get data from your server
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // update the table view if the version on the server is newer
    }
}
You will place all your I/O methods, that is those that connect to your server and download data, within doInBackground. You will place all methods that update the table view within onPostExecute. This separation is necessary because, while I/O functions must run in the background after Jellybean, views must be updated from the UI thread.
(3) Check the timestamp of what you downloaded. If what you downloaded is newer, update your table. You can accomplish this by simply adding a conditional statement to your onPostExecute function, such that
if(lastDownloadTime < lastUpdatedOnServerTime){
//update view
}
Depending on how big the table files are, you may want to add a function to your server code that just returns the time the table was last updated. That way you can check the time it was last updated against the time you last downloaded the table. If the table was updated on the server after you downloaded it, you can proceed to download the new information.
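For example, a minimal sketch of that check (get_last_updated is a hypothetical endpoint/helper returning the server's last-modified time in epoch milliseconds, and prefs is a SharedPreferences instance where you store the time of your last successful sync):
// inside doInBackground(): ask the server when the table last changed
long lastUpdatedOnServerTime = Long.parseLong(getJsonValue("get_last_updated")); // hypothetical helper

// the time of the last successful sync, stored locally
long lastDownloadTime = prefs.getLong("last_download_time", 0);

if (lastDownloadTime < lastUpdatedOnServerTime) {
    // only now pay the cost of downloading and re-inserting the full table
    List<Pair> technicians = getJsonData("get_technicians");
    // ... truncate/merge as you do now ...
    prefs.edit().putLong("last_download_time", System.currentTimeMillis()).apply();
}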
That's the basic idea. You can adapt it to your own set up.
Related
I want to make an Android app that works offline. For the data part, I want to have some data in a JSON file, and whenever my app is opened that JSON file is fetched first; from the fetched data I want to create table entries in the Android Room database (offline). That way, if the user liked some quotes, I can change the state of those quotes to liked in the Room DB, and when the user clicks on the Liked Quotes navigation I can show those offline-stored quotes which were liked (of course, when the user deletes the app that data will be lost). The problem I'm facing is where to fetch that data file and create the entries in the Room DB. If I do this in onCreate(), then duplicate entries will be created every time the user opens the app. How do I make those entries only once?
There are several ways to do it. One way is to include a random UUID in each element, and make that column in the DB have a UNIQUE constraint. Then re-adding it will fail (alternatively you can use an UPSERT, and then it will automatically update the data in case the data changed).
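As a rough sketch of that first approach with Room (the Quote entity and QuoteDao names here are made up; OnConflictStrategy.IGNORE silently drops duplicates, while REPLACE behaves like an upsert):
import androidx.room.*;
import java.util.List;

@Entity(tableName = "quotes", indices = {@Index(value = "uuid", unique = true)})
public class Quote {
    @PrimaryKey(autoGenerate = true) public long id;
    public String uuid;    // stable id carried in the JSON file
    public String text;
    public boolean liked;  // toggled when the user likes a quote
}

@Dao
public interface QuoteDao {
    // IGNORE: re-inserting the same uuid is a no-op, so re-running the import creates no duplicates.
    // Use OnConflictStrategy.REPLACE if you also want re-imports to update existing rows.
    @Insert(onConflict = OnConflictStrategy.IGNORE)
    void insertAll(List<Quote> quotes);
}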
Another way is to just not process the file if it already exists in onCreate. Your logic can be
if (network_exists) {
    copy_file_from_network()
} else if (json_file_exists) {
    return
} else {
    copy_file_from_assets()
}
process_json_file()
Actually I can see a good argument for doing both: that way, if there are updates to existing rows you process them, but if there's no new data you don't waste time.
As for a good place to put this: I'd run it during your splash screen, if you have one, so the user has an indication that you may be processing for a while.
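If it helps, here is a minimal sketch of that logic in Java; the file name and the copyFileFromNetwork / copyFileFromAssets / processJsonFile helpers are placeholders for whatever you already have:
File jsonFile = new File(getFilesDir(), "quotes.json"); // local copy of the data file

if (isNetworkAvailable()) {
    copyFileFromNetwork(jsonFile);                  // online: always refresh the local file
} else if (jsonFile.exists()) {
    return;                                         // already imported once, nothing new to process
} else {
    copyFileFromAssets("quotes.json", jsonFile);    // first offline run: seed from the bundled asset
}

processJsonFile(jsonFile);                          // parse and insert/upsert into Room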
My understanding of SQLite transactions on Android is based largely on this article. In its gist it suggests that
if you do not wrap calls to SQLite in an explicit transaction it will create an implicit transaction for you. A consequence of these implicit transactions is a loss of speed.
That observation is correct - I started using transactions to fix just that issue: speed. In my own Android app I use a number of rather complex SQLite tables to store JSON data which I manipulate via the SQLite JSON1 extension - I use SQLCipher, which has JSON1 built in.
At any given time I have to manipulate - insert, update or delete - rows in several tables. Given the complexity of the JSON I do this with the help of temporary tables I create for each table manipulation. The start of the manipulation begins with SQL along the lines of
DROP TABLE IF EXISTS h1;
CREATE TEMP TABLE h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
Some tables require just one table - which I usually call h1 - others need two in which case I call them h1 and h2.
The entire sequence of operations in any single set of manipulations takes the form
begin transaction
manipulate Table 1, which
creates its own temp tables, h1[h2],
then extracts relevant existing JSON from Table 1 into the temps
manipulates h1[h2]
performs inserts, updates, deletes in Table 1
on to the next table, Table 2 where the same sequence is repeated
continue with a variable list of such tables - never more than 5
end transaction
My questions
does this sound like an efficient way to do things or would it be better to wrap each individual table operation in its own transaction?
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE calls. If I end up with h1[h2] temp tables that are pre-populated with data from manipulating Table(n - 1) when working with Table(n) then the updates on Table(n) will go totally wrong. I am assuming that the DROP TABLE bit I have is taking care of this issue. Am I right in assuming this?
I have to admit to not being an expert with SQL, even less so with SQLite and quite a newbie when it comes to using transactions. The SQLite JSON extension is very powerful but introduces a whole new level of complexity when manipulating data.
The main reason to use transactions is to reduce the overheads of writing to the disk.
So if you don't wrap multiple changes (inserts, deletes and updates) in a transaction then each will result in the database being written to disk and the overheads involved.
If you wrap them in a transaction, the in-memory version will be written only when the transaction is completed (note that if you are using the SQLiteDatabase beginTransaction/endTransaction methods, as you should, you must call setTransactionSuccessful as part of ending the transaction and then call endTransaction).
That is, the SQLiteDatabase methods are different from doing this via pure SQL, where you'd begin the transaction and then end/commit it (i.e. without setTransactionSuccessful the SQLiteDatabase methods will automatically roll back the transaction).
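For reference, the usual shape of that pattern (db is whatever writable SQLiteDatabase handle you already have):
db.beginTransaction();
try {
    // all the DROP/CREATE TEMP TABLE, INSERT, UPDATE and DELETE work goes here
    db.setTransactionSuccessful();   // mark the transaction as OK to commit
} finally {
    db.endTransaction();             // commits if marked successful, rolls back otherwise
}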
Having said that, the statement :-
if you do not wrap calls to SQLite in an explicit transaction it will
create an implicit transaction for you. A consequence of these
implicit transactions is a loss of speed
basically reiterates :-
Any command that changes the database (basically, any SQL command
other than SELECT) will automatically start a transaction if one is
not already in effect. Automatically started transactions are
committed when the last query finishes.
SQL As Understood By SQLite - BEGIN TRANSACTION, i.e. it's not Android-specific.
does this sound like an efficient way to do things or would it be
better to wrap each individual table operation in its own transaction?
Doing all the operations in a single transaction will be more efficient as there is just the single write to disk operation.
it is not clear to me what happens to my DROP TABLE/CREATE TEMP TABLE
calls. If I end up with h1[h2] temp tables that are pre-populated with
data from manipulating Table(n - 1) when working with Table(n) then
the updates on Table(n) will go totally wrong. I am assuming that the
DROP TABLE bit I have is taking care of this issue. Am I right in
assuming this?
Dropping the tables will ensure data integrity (i.e. you should, by the sound of it, do this). You could also use :-
CREATE TEMP TABLE IF NOT EXISTS h1(v1 TEXT,v2 TEXT,v3 TEXT,v4 TEXT,v5 TEXT);
DELETE FROM h1;
I need to synchronize the data in my application. I make the request to the server, bind the response, and use copyToRealmOrUpdate(Iterable<E> objects) to add or update this data in the database.
But some of my rows can become invalid, and I need a way to delete everything that is not present in the data returned by the request. I don't want to truncate the table or do a manual delete for this because performance matters.
IDEA 1
@beeender
What do you think about using the PRIMARY_KEY of the table to delete the data that I don't want (or don't need)?
Looks like:
1º: If the database is already populated, get all the primary keys and add them to a HashMap (or anything that does the same job).
2º: Update or add the data, removing the corresponding item from the HashMap (using the primary key) whenever a row is updated or added.
3º: Remove from the Realm all the items still left in the HashMap.
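Roughly, what I have in mind (assuming MyModel has an integer primary key id; the method names are from the classic Realm Java API, so adjust them to your version):
// 1º: collect the primary keys that are currently stored locally
Set<Integer> staleIds = new HashSet<>();
for (MyModel local : realm.where(MyModel.class).findAll()) {
    staleIds.add(local.getId());
}

realm.beginTransaction();
// 2º: add or update everything the server returned, crossing it off the stale set
for (MyModel fresh : itemsFromServer) {
    realm.copyToRealmOrUpdate(fresh);
    staleIds.remove(fresh.getId());
}
// 3º: whatever is left in the set was not in the response, so delete it
if (!staleIds.isEmpty()) {
    realm.where(MyModel.class)
         .in("id", staleIds.toArray(new Integer[0]))
         .findAll()
         .deleteAllFromRealm();
}
realm.commitTransaction();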
Maybe an in-memory Realm would be a good choice for you in this situation. You can find the related documents here.
By using the in-memory Realm:
The db will be empty when you start a new app process
After you close all the instances of the Realm, the data will be cleared as well.
----------------------------------- Update for deleting data for normal case -----------------------------------------
For deleting, there are some options you can use
Remove all data for a specific model, see doc
realm.allObjects(MyModel.class).clear();
Remove the entire data from a given Realm with the Realm API (https://realm.io/docs/java/latest/api/io/realm/Realm.html#deleteRealm(io.realm.RealmConfiguration)) (close all instances first!):
Realm.deleteRealm(realmConfig);
Or just remove the Realm file through the normal Java file APIs.
If you really care about performance, you could consider separating that data into its own Realm, and use option 2 or 3 to remove it. See the doc here for using different Realms through RealmConfiguration.
----------------------------------- Update for delete by Date field ------------------------------------------------------
For your use case, this would be a good choice:
Add a Date field to your model, and add the @Index annotation to make queries on it faster.
Update/add rows and set the modified date to current time.
Delete the objects whose modifiedDate is before the current date: realm.where(MyModel.class).lessThan("modifiedDate", currentDate).findAll().clear()
NOTE: "The dates are truncated with a precision of one second. In order to maintain compatibility between 32 bits and 64 bits devices, it is not possible to store dates before 1900-12-13 and after 2038-01-19." See current limitations. If you might modify the table within a very short time, where one-second accuracy isn't enough, consider using an int field instead. You can get the column's max value with RealmResults.max().
I am developing a Car Review Application, where a user can log in and see all the reviews from the database. All the data is stored in a MySQL database first. I am using JSON to connect the MySQL database and the SQLiteDatabase. The problem is that, after logging in, a huge amount of data comes from the server and is inserted into our SQLite database.
After that it is retrieved from the database and displayed on the application screen in a ListView, and it takes a long time to display all the data. I am using a SimpleCursorAdapter to retrieve all the data from the database.
So is there any way, like pagination or something similar, to make the data retrieval faster?
Please help me by giving some source code.
You can use something like:
Page 1:
SELECT * FROM YOUR_TABLE LIMIT 20 OFFSET 0
Page 2:
SELECT * FROM YOUR_TABLE LIMIT 20 OFFSET 20
Reference: http://sqlite.org/lang_select.html
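In Android code that could look roughly like this (YOUR_TABLE is the placeholder table name from the queries above, and PAGE_SIZE/page are just illustrative names):
static final int PAGE_SIZE = 20;

Cursor loadPage(SQLiteDatabase db, int page) {
    // page 0 -> OFFSET 0, page 1 -> OFFSET 20, and so on
    return db.rawQuery(
            "SELECT * FROM YOUR_TABLE LIMIT ? OFFSET ?",
            new String[]{String.valueOf(PAGE_SIZE), String.valueOf(page * PAGE_SIZE)});
}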
You can use the concept of Asynchronous tasks along with SimpleCursorAdapters.
"AsyncTask enables proper and easy use of the UI thread. This class allows to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers."
Here's what you can do:
1) Retrieve only the first 10/15 items in the first query.
2) Fire another query as a background task while the user is looking at the first 10/15 items.
This will certainly make the user experience feel faster.
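A rough sketch of that idea, reusing a LIMIT/OFFSET query like the one in the other answer (loadPage, db and adapter are placeholders for your own query helper, database handle and SimpleCursorAdapter):
// 1) show a small first page straight away
adapter.swapCursor(loadPage(db, 0));   // e.g. the first 15 rows

// 2) while the user looks at those, load the rest in the background
new AsyncTask<Void, Void, Cursor>() {
    @Override
    protected Cursor doInBackground(Void... params) {
        return db.rawQuery("SELECT * FROM YOUR_TABLE", null);  // full result set, off the UI thread
    }

    @Override
    protected void onPostExecute(Cursor cursor) {
        adapter.swapCursor(cursor);    // replace the short cursor with the complete one
    }
}.execute();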
Using the LIMIT keyword from MySQL you can achieve pagination.
LIMIT allows you to control the number of rows returned by a query:
Example:
to show the first 10 records:
SELECT * FROM Student LIMIT 10 //for the first time
to show rows between 10 and 20:
SELECT * FROM Student LIMIT 9, 10 //after showing the records the first time
LIMIT works for SQLiteDatabase also
I currently successfully use a SQLite database which is populated with data from the web. I create an array of values and add these as a row to the database.
Currently to update the database, on starting the activity I clear the database and repopulate it using the data from the web.
Is there an easy method to do one of the following?
A: Only update a row in the table if the data has changed (I'm not sure how I could do this unless there was a consistent primary key - what would happen is that a new row would be added with the changed data, and there would be no way to know which of the old rows to remove)
B: get all of the rows of data from the web, then empty and fill the database in one go rather than after getting each row
I hope this makes sense. I can provide my code but I don't think it's especially useful for this example.
Context:
On starting the activity, the database is scanned to retrieve values for a different task. However, this takes longer than it needs to because the database is emptied and refilled slowly. Therefore the task can only complete when the database is fully repopulated.
In an ideal world, the database would be scanned and values used for the task, and that database would only be replaced when the complete set of updated data is available.
Your main concern with approach (b) - clearing out all data and slowly repopulating - seems to be that any query made between emptying the table and completing the refill would have to be refused.
You could simply put the empty/repopulate process in a transaction. That way the database will always have data to offer for reading.
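A minimal sketch of that, assuming a SQLiteDatabase handle db, a placeholder table name my_table, and the freshly downloaded rows already converted to ContentValues:
db.beginTransaction();
try {
    db.delete("my_table", null, null);            // empty the table
    for (ContentValues row : downloadedRows) {    // refill from the web data
        db.insert("my_table", null, row);
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();   // commits atomically; on failure the old data is rolled back
}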
Alternatively, if that's not a viable solution, how about appending the newer results to the existing ones, but inserting them with an 'active' flag set to 0. Then, once the process of adding entries is complete, use a transaction to find and remove the currently active entries, and (in the same transaction) update the inactive entries to active.
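A sketch of that second approach, again with an assumed active flag column and placeholder table name:
// while downloading: append new rows marked inactive, alongside the existing active ones
ContentValues row = buildRowFromWebData();   // hypothetical helper
row.put("active", 0);
db.insert("my_table", null, row);

// once all new rows are in, swap the generations atomically
db.beginTransaction();
try {
    db.delete("my_table", "active = 1", null);   // remove the old (active) rows
    ContentValues promote = new ContentValues();
    promote.put("active", 1);
    db.update("my_table", promote, null, null);  // promote the new rows to active
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}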