Last object not saved after table was truncated in ActiveAndroid - android

I use ActiveAndroid to save my objects to the database, and it mostly works well. In my application, I use the following scenario:
I save a new object to a table in my database
I select some objects from that table
I add them to a List<>
I delete everything from that table
I iterate over my List and call 'save' on each object
And here comes the problem: the objects are saved to my table except the most recently saved one mentioned above. I created a counter to check how many times 'save' was called: the counter is 1 more than the number of objects in the table. I debugged it; no exception was raised and save was called every time. I use the latest version of ActiveAndroid (3.0.99).
Any ideas what I should check?

Well, the problem can be seen in the scenario if you read it through.
I copy an existing object into memory and try to reinsert it. The ORM checks only the object's mId, and if it is not null it issues an update. Since my object already had an id, an update was attempted, but the table had been truncated, so nothing was written.
I don't know whether it is intentional that the model never checks the table and only looks at its own id, but it can lead to issues like this.
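A rough workaround sketch: instead of re-saving the loaded objects (whose id is already set, so the ORM issues an UPDATE against the now-empty table), copy them into fresh instances so that save() performs an INSERT. The Item model and its fields below are hypothetical placeholders for your own class:

// Copy the loaded rows into new Model instances (mId == null), then re-save.
List<Item> copies = new ArrayList<>();
for (Item old : loadedItems) {
    Item fresh = new Item();   // new instance -> no id yet
    fresh.name = old.name;     // copy whatever fields your model has
    fresh.value = old.value;
    copies.add(fresh);
}
new Delete().from(Item.class).execute();   // truncate the table
for (Item fresh : copies) {
    fresh.save();                          // null id -> INSERT, so every row is written
}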

Related

copyToRealmOrUpdate and Delete

I need to synchronize the data in my application. I make a request to the server, bind the response and use copyToRealmOrUpdate(Iterable<E> objects) to add or update this data in the database.
But my local data can become invalidated, and I need a way to delete everything that is not present in the data returned by the request. I don't want to truncate the table or do a manual delete for this, because performance matters.
IDEA 1
@beeender
What do you think about using the PRIMARY_KEY of the table to delete the data that I don't want (or don't need)?
It would look like this:
1º: If the database is already populated, get all the primary keys and add them to a HashMap (or anything that does the same job).
2º: While updating or adding the data, remove each item from the HashMap (using its primary key) once it has been updated or added.
3º: Remove all the items left in the HashMap from the Realm.
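In code, the idea would look roughly like the sketch below. MyModel, its getId() getter and the "id" primary key name are placeholders, and the older RealmResults.clear() API is assumed:

// 1) Remember every key already in the Realm.
Set<Integer> stale = new HashSet<>();
for (MyModel existing : realm.where(MyModel.class).findAll()) {
    stale.add(existing.getId());
}
realm.beginTransaction();
// 2) Add or update the server data, marking each touched key as still valid.
for (MyModel fresh : serverObjects) {
    realm.copyToRealmOrUpdate(fresh);
    stale.remove(fresh.getId());
}
// 3) Whatever key is left was not in the server response, so delete it.
for (Integer id : stale) {
    realm.where(MyModel.class).equalTo("id", id).findAll().clear();
}
realm.commitTransaction();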
Maybe an in-memory Realm would be a good choice for you in this situation. You can find the related documents here.
By using the in-memory Realm:
The db will be empty when you start a new app process
After you close all the instances of the Realm, the data will be cleared as well.
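A minimal sketch of opening such an in-memory Realm (the file name is arbitrary, and the older Builder form that takes a Context is assumed):

RealmConfiguration inMemConfig = new RealmConfiguration.Builder(context)
        .name("sync.realm")
        .inMemory()    // nothing is written to disk; data is gone once all instances are closed
        .build();
Realm realm = Realm.getInstance(inMemConfig);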
----------------------------------- Update for deleting data for normal case -----------------------------------------
For deleting, there are some options you can use:
Remove all data for a specific model, see doc
realm.allObjects(MyModel.class).clear();
Remove all data from a given Realm with the Realm API (https://realm.io/docs/java/latest/api/io/realm/Realm.html#deleteRealm(io.realm.RealmConfiguration)) (close all instances first!):
Realm.deleteRealm(realmConfig);
Or just remove the Realm file through the normal Java file API.
If you really care about performance, you could consider separating that data into its own Realm and using option 2 or 3 to remove it. See the doc here for using different Realms through RealmConfiguration.
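A sketch of keeping the disposable data in its own Realm file and wiping it in one go (the file name is illustrative, and the older Builder form taking a Context is assumed):

RealmConfiguration cacheConfig = new RealmConfiguration.Builder(context)
        .name("cache.realm")
        .build();
Realm cacheRealm = Realm.getInstance(cacheConfig);
// ... write the short-lived data to cacheRealm ...
cacheRealm.close();                 // every instance must be closed first
Realm.deleteRealm(cacheConfig);     // deletes the whole file, much cheaper than row-by-row deletes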
----------------------------------- Update for delete by Date field ------------------------------------------------------
For your use case, this would be a good choice:
Add a Date field to your model, and add the @Index annotation to make queries on it faster.
Update/add rows and set the modified date to the current time.
Delete the objects whose modifiedDate is before the current date:
realm.where(MyModel.class).lessThan("modifiedDate", currentDate).findAll().clear()
NOTE: "The dates are truncated with a precision of one second. In order to maintain compatibility between 32 bits and 64 bits devices, it is not possible to store dates before 1901-12-13 and after 2038-01-19." See current limitations. If your table can be modified within a window shorter than this one-second accuracy, consider using an int field instead. You can get the column's max value with RealmResults.max().

SQL interface like pattern?

I have two tables, SyncedComments and QueuedComments; the latter holds local comments until they are synced with a webserver, and when they are synced successfully they get placed in the synced table. My application should be indifferent to either type. I load the comments through a CursorLoader, and they may be moved to the synced table while users are reading them. Let's say the user can also edit comments, perhaps while they are being moved, so the application should know where a comment is, regardless of its table.
To support this, I've thought of having a table with 3 columns: local_id, synced_id and queued_id. The local_id is persistent and simply serves as a reference to either one of the other two ids. When a comment is created, a new row is inserted with its synced_id set to NULL and the queued_id it has been given; when a comment is moved, the queued_id is set to NULL and the synced_id is set. This way my application only needs to reference the local id at all times.
How does this solution look? Any flaws? Could it be done smarter?
I would, in the first place, put all the comments in one table, with a flag for whether the comment is synchronized (actually it would probably be the ID on the server, set to NULL until synchronized and then to the value obtained from the server). That takes you down to 1 table instead of 3, makes it easier to show all comments (because you won't need a union), and above all avoids problems when a comment is synchronized while being shown, because the comment will not be moving anywhere. It also does fewer writes to the database file, so it causes less fragmentation and fewer writes to the flash device.
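A rough sketch of that single-table layout (table and column names are illustrative), created in the app's SQLiteOpenHelper.onCreate():

db.execSQL("CREATE TABLE comments ("
        + "_id INTEGER PRIMARY KEY AUTOINCREMENT, "   // the one id the app ever references
        + "body TEXT NOT NULL, "
        + "server_id INTEGER)");                       // NULL until synced, then the server's id
// Queued (unsynced) comments are simply: SELECT * FROM comments WHERE server_id IS NULL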

Updating database objects that have foreign collections?

I'm trying to think of how to get around this problem. I have an ORMlite object that can belong to multiple Categories; I'm using another table (i.e. a ForeignCollection) to track many-to-many connections between my objects and categories.
The problem is if I update the object with changed categories, the new categories are added, but old ones are not removed.
In the JavaDoc for the update method of DAO I see this text:
NOTE: Typically this will not save changes made to foreign objects or
to foreign collections.
My question is about the use of the word "typically." Does this mean that there IS a way through some sort of setting to make sure that updates update related foreign objects/collections?
Or should I read the sentence as if "typically" was not there, assume there is no automatic method, and that I need to run extra queries on committing each object to delete old categories?
The problem is if I update the object with changed categories, the new categories are added, but old ones are not removed.
So you have an object that has a foreign collection of categories:
@ForeignCollectionField
ForeignCollection<Category> categories;
If you run categories.add(category1) or categories.remove(category1), then the underlying collection should add or remove those rows in its associated table using a built-in DAO.
If you are changing the category list some other way then you are going to have to remove the Category entries by hand using the categoryDao directly.
... about the use of the word "typically." Does this mean that there IS a way through some sort of setting to make sure that updates update related foreign objects/collections?
Not sure why I left the word "typically" there. I think it was a blanket statement to take into account the various auto-create, auto-refresh, etc. field settings -- I'm not sure. In any case, I've removed it from the code base.
ORMLite has no way to know if foreign objects have been changed. It does not create magic proxy objects nor sessions so that it can tell when a foreign object has been updated. You have to be explicit about what you want updated when. The documentation on foreign collections is quite explicit about it.
OrmLite will not save objects to ForeignCollections automatically. You have to store and delete these objects yourself. Ormlite will retrieve the objects in the ForeignCollection automatically for you, provided you set the right parameters in the annotation.
Ormlite is "lite". It does ORM, but not completely. It's not JPA or Hibernate.
I solved this problem by adding the new Category to the Categories table directly, instead of adding a new category to the object's ForeignCollection.
This can be done by simply creating a Category DAO and adding the new element through it.
A newCategory.setObject(object) call is needed in order to create the relation with the object.
Hope this helps.
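A short sketch of that approach (the Category constructor and setObject() are assumed to exist on your join entity):

Dao<Category, Integer> categoryDao = DaoManager.createDao(connectionSource, Category.class);
Category newCategory = new Category("news");
newCategory.setObject(object);     // establishes the foreign link back to the object
categoryDao.create(newCategory);   // inserts the row directly, bypassing the ForeignCollection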

Android SQLite - Update table only if different

I currently successfully use a SQLite database which is populated with data from the web. I create an array of values and add these as a row to the database.
Currently to update the database, on starting the activity I clear the database and repopulate it using the data from the web.
Is there an easy method to do one of the following?
A: Only update a row in the table if the data has changed (I'm not sure how I could do this unless there was a consistent primary key - otherwise a new row would be added with the changed data, and there would be no way to know which of the old rows to remove)
B: Get all of the rows of data from the web, then empty and refill the database in one go, rather than after getting each row
I hope this makes sense. I can provide my code but I don't think it's especially useful for this example.
Context:
On starting the activity, the database is scanned to retrieve values for a different task. However, this takes longer than it needs to because the database is emptied and refilled slowly. Therefore the task can only complete when the database is fully repopulated.
In an ideal world, the database would be scanned and values used for the task, and that database would only be replaced when the complete set of updated data is available.
Your main concern with approach B - clearing out all data and slowly repopulating - seems to be that any query between emptying the table and completing the refill would have to be refused.
You could simply put the empty/repopulate process in a transaction. Thereby the database will always have data to offer for reading.
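A minimal sketch of that, assuming an open SQLiteDatabase db, a table named "entries" and a list of ContentValues built from the web response:

db.beginTransaction();
try {
    db.delete("entries", null, null);          // empty the table
    for (ContentValues row : freshRows) {
        db.insert("entries", null, row);       // repopulate
    }
    db.setTransactionSuccessful();             // commit only when everything is in
} finally {
    db.endTransaction();                       // readers never see the half-filled state
}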
Alternatively, if that's not a viable solution, how about appending the newer results to the existing ones, but inserting them with an 'active' flag set to 0. Then, once the process of adding entries is complete, use a transaction to find and remove the currently active entries, and (in the same transaction) update the inactive entries to active.
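A sketch of that swap (again with the hypothetical "entries" table, now carrying an "active" column):

// New rows have been inserted with active = 0; promote them atomically.
db.beginTransaction();
try {
    db.delete("entries", "active = 1", null);             // drop the old active rows
    ContentValues promote = new ContentValues();
    promote.put("active", 1);
    db.update("entries", promote, "active = 0", null);    // promote the freshly inserted rows
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}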

Android Widgets: Where would the 'insert' step for a database occur?

I have a widget that currently takes a random string from an array and sets it on a TextView on update. The issue here is that the same item can be re-used multiple times in a row, because the string is 'random'.
In order to solve this I was going to create a table that holds a String text and an int viewednum, and increment the viewed number each time 'get text' is called (onUpdate in the widget).
My question: if I put the insert statements in the widget, won't the data be inserted every time onUpdate is called?
Would it be better for it to go in the DBadapter class somewhere? I'm just unsure about the best way to make sure I don't enter duplicate data. If there is a better alternative, like saving a CSV file somewhere and using that, I'm open to it; it just seemed like a SQLite database was the way to go.
Thank you for your time.
That depends on what your onUpdate method does. If each time onUpdate is called it gets a random String from the database, then that would be the place to put it. However, if you are not getting the String during onUpdate, then you should put it in the method where you are accessing your database. I think your confusion is about the purpose of onUpdate. onUpdate doesn't get called every time the user scrolls past the home screen and sees your widget; it gets called regularly on a timescale you specify, and its whole purpose, in a case like yours, is to get a new String from the database.
As for your second question: yes, SQLite databases are the way to do it :) I haven't tried saving a CSV file or something like that, but I imagine it would be a lot more complex than just using a database.
Declare your table with a UNIQUE constraint on the columns you want to keep unique, then choose the desired conflict behaviour when inserting: INSERT OR REPLACE means the most recent insert overwrites the existing row, while INSERT OR IGNORE keeps the older version.
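A small sketch of that (the "quotes" table and its columns are made up for illustration):

// Schema: the UNIQUE constraint is what triggers the conflict handling.
db.execSQL("CREATE TABLE IF NOT EXISTS quotes ("
        + "text TEXT UNIQUE, "
        + "viewednum INTEGER DEFAULT 0)");

// Insert: CONFLICT_IGNORE keeps the existing row; use CONFLICT_REPLACE to overwrite it.
ContentValues row = new ContentValues();
row.put("text", "Some random string");
db.insertWithOnConflict("quotes", null, row, SQLiteDatabase.CONFLICT_IGNORE);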
