I'm considering using db4o in my Android project to store objects, but my concern is: if in the future I want to change one of an object's attributes, how do I deal with the existing data for that object in the db4o file? I know that in an sqlite3 environment this can be done by altering the table structure in the onUpgrade() method, so how does db4o deal with this?
In db4o it really depends on what the changes are.
Adding a field: you just add it. The new field will have the default value for that type (null for references, 0 for numbers).
Removing a field: db4o will just ignore the removed field. You can still access the old data of the removed field. As soon as you update the object, the value of the old field will be removed.
Renaming fields and classes: this can be done with this call (a hedged sketch follows after this list).
Changing the type of a field: that's like adding a new field. You need to copy the values over yourself. See here.
Adding interfaces has no effect on db4o.
Removing a type from the inheritance hierarchy: not supported.
In general, take a look here.
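For the rename case mentioned above, a minimal sketch of the configuration might look like this, assuming db4o's embedded Java API (ObjectClass.rename / ObjectField.rename); the class and field names are made-up placeholders:

import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.config.EmbeddedConfiguration;

public class RenameExample {
    // Register the renames in the configuration before opening the container,
    // so db4o can map the stored data onto the new names.
    public static ObjectContainer openWithRenames(String dbPath) {
        EmbeddedConfiguration config = Db4oEmbedded.newConfiguration();
        // Rename a class (use the fully qualified old and new names).
        config.common().objectClass("com.example.OldUser").rename("com.example.User");
        // Rename a single field on a class.
        config.common().objectClass("com.example.User").objectField("fullname").rename("displayName");
        return Db4oEmbedded.openFile(config, dbPath);
    }
}

As far as I know, the renames only need to stay in the configuration until the database has been opened once with them applied.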
I have a document in Firestore from which I read its fields in a fragment. Since it has many fields, I set variables in the Activity that hosts this fragment so that I can pass the data between other fragments. To achieve that, I realized I have to write similar lines of code over and over again, which got me thinking whether there is a better way.
Two possible solutions that come to my mind:
Structure all these fields in JSON format -> something that wouldn't be suitable in Firestore's document system, imo.
Put all these fields into a serializable data class which I keep in the activity and then pass around in the fragments' bundles -> seemed too complicated, and I would still have to write it.get(foo) as bar for each of the fields in this class's constructor.
Given all these, what is the best approach? Thanks in advance.
You have several options for how to approach this. None of them is necessarily better than another. Ultimately, you will pick the one that best suits your needs and preferences.
You can do what you're doing now.
You can go a step further and actually check the types of the values instead of just blindly casting them (which would cause a crash at runtime if they didn't match).
You can provide a Class object to get(String, Class<T>) that can automatically map the fields to properties in a new object of type T, as long as the types also match.
You can call a variety of type-specific versions of get, such as getString().
Ultimately you will have to decide if you are going to trust what you get in the snapshot and allow errors to happen, or trust nothing and check everything. It's up to you.
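To make those options concrete, here is a minimal Java sketch of the approaches against a DocumentSnapshot; the "users/alice" path and the UserProfile class are made-up placeholders:

import com.google.firebase.firestore.DocumentSnapshot;
import com.google.firebase.firestore.FirebaseFirestore;

public class SnapshotExamples {

    // Placeholder POJO; its field names must match the document's field names
    // for the automatic mapping to work.
    public static class UserProfile {
        public String displayName;
        public Long age;
    }

    public static void read(FirebaseFirestore db) {
        db.collection("users").document("alice").get()
                .addOnSuccessListener(snapshot -> {
                    // 1. Type-specific getters (return null if missing or of the wrong type).
                    String name = snapshot.getString("displayName");
                    Long age = snapshot.getLong("age");

                    // 2. Check the raw value yourself before casting.
                    Object raw = snapshot.get("displayName");
                    if (raw instanceof String) {
                        name = (String) raw;
                    }

                    // 3. Pass a Class to get(), or map the whole document in one call.
                    String typed = snapshot.get("displayName", String.class);
                    UserProfile profile = snapshot.toObject(UserProfile.class);
                });
    }
}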
The ObjectBox docs suggest using auto-assigned long ids for entities, and ObjectBox even has some checks based on them:
By default object IDs are assigned by ObjectBox. For each new object, ObjectBox will assign an unused ID that is above the current highest ID value used in a box. For example, if there are two objects with ID 1 and ID 100 in a box the next object that is put will be assigned ID 101.
http://objectbox.io/documentation/introduction/#Object_ID_assignment
If we have a custom key, we can add @Id(assignable = true) and ObjectBox will use the given field as the id.
However, I read somewhere that this adds some performance overhead and that it is better to use the standard auto-incremented ids whenever possible. I can't find the source now, so does anybody know if it is OK to use assignable ids for frequently changed objects? In addition, does ObjectBox use equals() and hashCode() somehow?
The main reason we want assignable ids is to be able to put elements using their natural long ids without manually resolving the mapping.
As I understand it from the official docs and a comment by Marcus Junginger (CTO of ObjectBox), there is no performance penalty when you use assignable ids.
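For reference, a minimal sketch of what such an entity might look like, assuming ObjectBox's Java annotations; the RemoteUser class and its fields are placeholders:

import io.objectbox.annotation.Entity;
import io.objectbox.annotation.Id;

@Entity
public class RemoteUser {
    // With assignable = true you set this id yourself (e.g. the server's natural id)
    // instead of letting ObjectBox pick the next unused value.
    @Id(assignable = true)
    public long id;

    public String name;
}

You then set the id before calling box.put(entity); putting an entity whose id already exists in the box replaces the stored object.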
I realized that when calling "createFile", it creates a new file even if a file with that title already exists.
What I am doing now is searching for the file first and, if I can't find it, creating it. Two calls for a simple problem.
Is there a better way to create a file that overwrites it if it already exists?
Google Drive actually has a 'flat' model, where every object is identified by its unique ID.
So, when an object (file/folder) is created, it gets a unique ID. The object may or may not have content. Everything else is 'metadata'. The tree structure of popular OSs is actually 'faked' by metadata links (parent links). That means in Google Drive you may have multiple children with the same metadata (title/name) in a parent object. And you may also have multiple parents for any child object (a single object appears in multiple parents' folders).
All this rant means one thing for your situation:
Once you create a file/folder and get hold of its ID, 'creation of a new file with the same name' can be accomplished by modifying its content and/or metadata (you can see a typical example here).
If you take the DELETE/CREATE path (which is also possible, but had not been until recently), you are actually:
1/ modifying the original file/folder's 'trashed/deleted' metadata
2/ creating a brand new object with a different ID
Think twice before you select the method you use. The UPDATE method is a 'one-step' approach, preferable in an async environment (CREATE MUST wait for a successful DELETE). On the other hand, if you use the DELETE/CREATE approach, you may be able to take advantage of the fact that the 'trashed' object will stay around for a while.
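As an illustration only, here is a rough Java sketch of the "search, then update in place" idea using the Drive REST v3 client (where 'title' is called 'name'); the MIME type, query, and helper names are assumptions, and the old Drive Android API's createFile works differently:

import com.google.api.client.http.FileContent;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;
import com.google.api.services.drive.model.FileList;
import java.io.IOException;
import java.util.List;

public class UpsertFile {

    // Create the file if no match exists, otherwise overwrite its content,
    // keeping the existing ID. Returns the file ID either way.
    public static String upsert(Drive drive, String name, java.io.File local) throws IOException {
        FileList result = drive.files().list()
                .setQ("name = '" + name + "' and trashed = false")
                .setFields("files(id)")
                .execute();
        List<File> matches = result.getFiles();

        FileContent media = new FileContent("text/plain", local);

        if (matches == null || matches.isEmpty()) {
            // No match: create a brand new file (new ID).
            File metadata = new File().setName(name);
            return drive.files().create(metadata, media).execute().getId();
        } else {
            // Match found: keep the ID and just replace the content.
            String fileId = matches.get(0).getId();
            return drive.files().update(fileId, new File(), media).execute().getId();
        }
    }
}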
Good Luck
I think files are uniquely identified by their ID in the Drive API. Therefore you have no way to enforce uniqueness of the title using the Drive API itself, so doing it yourself is probably the way to go.
EDIT: The ID is what matters with all that synchronisation happening. A title could change easily, so using it as a unique identifier would be a bad idea. Hence the unique ID.
What you could do if the file already exists is either remove it and replace it with the new one (a bad idea, I would say) or simply append an extra number to the name of the new file being added to the folder.
I have been searching for a while for a solution to my problem without success.
I have an application in which I receive information for a particular entity in my database from different services, so I am using greenDAO's insertOrReplace method so that whenever the entity already exists in my DB it gets updated instead of recreated.
So far so good.
The problem is... let's say, for example's sake, my entity is called User, with fields id, title, and displayName.
In the first call I get a JSON object containing a user with only its id and title fields, so I insert it into the DB and naturally displayName gets inserted as NULL.
Afterwards, from another service, I get another JSON containing the same user (same id field), which comes with the displayName as well but doesn't include the title info at all.
So whenever I run insertOrReplace on the DAO object automatically generated by greenDAO, the user gets updated, but since the title info was not present, the title field gets reset to NULL and I end up losing data.
Unfortunately I am unable to change the data being returned by the services, and I haven't been able to fix this issue. I find it hard to believe there is no easy way to tell the DAO object to update only certain fields and not all of them.
I was looking at the code generated by greenDAO and in the dao objects generated there is a bindValues method which gets called before the query gets executed, and apparently it filters out the NULL properties from the object, but either way it gets updated with the NULL value.
I was able to come up with some sort of fix by modifying the final generated DAO, adding some methods from the parent class, but I don't think this is a good solution because I would have to do it for every DAO. (I know it's possible to define a custom superclass, but this only applies to the entity object and not the DAO.)
I would really appreciate if someone has any idea on how I could resolve this, and sorry for the long explanation, I just wanted to be clear on my issue.
Thanks.
First of all: I wouldn't tamper with the generated code unless you really know what you are doing. Modifications may have effects on caches and data integrity.
Generally you follow this (insert-or-)update approach when using an ORM framework (like greenDAO); a sketch follows after the list:
Try to get the entity that you want to modify from the DB (it may already be in the cache, so this may not be a real database operation)
If you don't have such an entity: create it
Modify the entity according to your needs
Insert or update it in the database (in greenDAO you would use insertOrReplace)
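A minimal sketch of that load-then-merge approach, using the User entity and generated UserDao from the question (the null checks stand in for "this service didn't deliver the field"):

public void mergeUser(UserDao userDao, long id, String title, String displayName) {
    // 1. Try to load the existing entity (may be served from greenDAO's identity cache).
    User user = userDao.load(id);

    // 2. If it doesn't exist yet, create it.
    if (user == null) {
        user = new User();
        user.setId(id);
    }

    // 3. Copy over only the fields this service actually delivered,
    //    so absent fields don't overwrite existing values with NULL.
    if (title != null) {
        user.setTitle(title);
    }
    if (displayName != null) {
        user.setDisplayName(displayName);
    }

    // 4. Write it back.
    userDao.insertOrReplace(user);
}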
I'm looking into using greenDAO for my Android app, but I noticed it doesn't seem to support any kind of data validation other than "not null", "unique", and foreign keys, either on the SQL level (constraints defined when creating tables) or the Java level (validation logic in setter methods). "Keep sections" don't seem like they would be helpful in this case because you can't have them within individual methods. Am I missing something, or would I really need to add yet another layer on top of the generated Java objects if I wanted to validate input data? (I'm somewhat confused how the framework could be useful without providing any place to include validation logic.)
1. You can write a method boolean check(); in a KEEP-SECTION of the entity, which you call manually before INSERT or UPDATE, for example:
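A minimal sketch, assuming keep sections are enabled for your schema; the User entity and its email field are placeholders, and the method goes between the generated KEEP markers so it survives regeneration:

// KEEP METHODS - put your custom methods here
public boolean check() {
    // Reject obviously invalid data before it reaches the database.
    return getEmail() != null && getEmail().contains("@");
}
// KEEP METHODS END

The manual call before persisting would then look like: if (user.check()) { userDao.insert(user); }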
2. Another possibility is to extend the source code of the greenDAO generator to support checks: in Property.java you could add a method to Property.Builder:
public Property.Builder check(String expr) {
    property.checkCondition = expr;
    return this;
}
Of course you would also have to introduce the field String checkCondition = ""; in Property and use it when generating the DAO in the dao template.
Problem:
With new versions of greenDAO your changes would be lost (but then again, a newer version may already contain such a feature)
3. A third possibility is to copy the generated CREATE TABLE statement, modify it to fit your needs, and execute your modified statement instead of the original one, or drop the original table and execute your statement (a sketch follows below).
Problem:
If your table changes you will have to repeat this.
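A rough sketch of that third option, adding a SQLite CHECK constraint; the table and column names are examples and would have to match what your generated DAO expects, and this replaces the call to the generated createTable helper:

import android.database.sqlite.SQLiteDatabase;

public class CustomSchema {
    // Call this instead of the generated UserDao.createTable(db, true).
    public static void createUserTable(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE IF NOT EXISTS \"USER\" (" +
                "\"_id\" INTEGER PRIMARY KEY," +
                "\"TITLE\" TEXT," +
                "\"EMAIL\" TEXT CHECK (\"EMAIL\" LIKE '%@%'))");
    }
}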