I'm working on an Android application using Couchbase Lite.
Should I have my classes extending com.couchbase.lite.Document ?
Pros: the DAO is integrated into the class.
Cons: every object is linked to a document, so if we want a new object, we must create a new document in Couchbase? Anything else?
For example:
public class UserProfile extends Document {

    public UserProfile(Database database, String documentId);

    public Map<String, Object> getProperties();

    public boolean isModified();

    public boolean update() throws CouchbaseLiteException {
        if (isModified()) {
            super.putProperties(getProperties());
            return true;
        }
        return false;
    }
}
I would not recommend extending Document. Instead, either just use Maps, or use something like the Jackson JSON library to create POJOs. I usually create a simple helper class to wrap the database operations (including replication, if you're using that).
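To illustrate, a minimal sketch of such a helper could look like this; the class and method names are mine, it assumes the Couchbase Lite 1.x Database/Document API together with Jackson's ObjectMapper, and conflict handling via Document.update() is omitted for brevity:

import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.Document;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.Map;

// POJOs stay plain; Jackson converts between property maps and typed objects.
public class DocumentStore {

    private final Database database;
    private final ObjectMapper mapper = new ObjectMapper();

    public DocumentStore(Database database) {
        this.database = database;
    }

    public <T> T load(String documentId, Class<T> type) {
        Document doc = database.getExistingDocument(documentId);
        return doc == null ? null : mapper.convertValue(doc.getProperties(), type);
    }

    public void save(String documentId, Object pojo) throws CouchbaseLiteException {
        Document doc = database.getDocument(documentId);
        Map<String, Object> properties = new HashMap<String, Object>();
        if (doc.getProperties() != null) {
            properties.putAll(doc.getProperties()); // keeps _id/_rev so the update is accepted
        }
        properties.putAll(mapper.convertValue(pojo,
                new TypeReference<Map<String, Object>>() {}));
        doc.putProperties(properties);
    }
}

Your POJOs (UserProfile and friends) then stay free of Couchbase types; the rest of the code only ever sees calls like store.load("user::123", UserProfile.class) and store.save(...).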
Off the top of my head, I wouldn't do it because subclassing doesn't fit well with some of the ways you retrieve documents, documents are somewhat heavy-weight objects, and the preferred way to update takes into account the possibility of conflicts, which would be much more difficult. (See this blog post for a discussion of that last point.)
I've never tried to work around these issues in a subclassing approach, but it seems pretty certain to be more pain than it's worth.
Related
I am developing an Android chat application with several types of messages inherited from a single abstract class. I want to get a list of chat messages of different types. I think I need an ORM with inheritance support using a SINGLE_TABLE strategy. Is there an ORM for Android that supports this? Or could you perhaps advise how to solve this problem using an ORM without SINGLE_TABLE support?
Examples:
public abstract class AbstractMessage implements MessageListContent, Serializable, Comparable<AbstractMessage> {

    public enum Status {
        DELIVERED,
        SENDING_AND_VALIDATION,
        NOT_SENDED,
        INVALIDATED
    }

    private SupportedMessageListContentType supportedType = SupportedMessageListContentType.UNDEFINED;
    private boolean iSay;
    private long timestamp;
    private Status status = Status.SENDING_AND_VALIDATION;
    private String transactionId;
    private String companionId;

    // getters and setters
    //...
}

public class BasicMessage extends AbstractMessage {

    private String text;
    private transient Spanned htmlText;

    // getters and setters
    //...
}

public class TransferMessage extends AbstractMessage {

    private BigDecimal amount;

    // getters and setters
    //...
}
I don't know how familiar you are with ORMs on Android, but two of the best-known are Room and Realm, and either could achieve what you want. Take my comments on Realm with a grain of salt, though, since I'm only repeating what a friend told me about it.
For starters, Room sits on top of SQLite, while Realm is a NoSQL object store. If you choose SQLite, inheritance and polymorphism can be modelled in the schema itself, for example with a single-table strategy and a discriminator column. Realm is a different story: my friend told me that the polymorphism of your model classes maps directly onto the database, but I can't vouch for that, and I don't like Realm, so don't take my word for it.
If you're choosing a database, I'll be frank and suggest SQLite. To help you decide, here is a site that curated reasons on whether SQL or NoSQL is better: http://www.nosql-vs-sql.com/
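To make the single-table idea concrete, here is a rough sketch of how it could look with Room (the entity, table, and column names are mine, not from your code): every message variant goes into one messages table with a type column as the discriminator, and your repository maps each row back to BasicMessage or TransferMessage based on that column.

import androidx.room.Dao;
import androidx.room.Entity;
import androidx.room.Insert;
import androidx.room.PrimaryKey;
import androidx.room.Query;

import java.util.List;

// One table for every message type; 'type' acts as the discriminator column.
@Entity(tableName = "messages")
public class MessageRow {
    @PrimaryKey(autoGenerate = true)
    public long id;

    public String type;          // e.g. "BASIC" or "TRANSFER"
    public boolean iSay;
    public long timestamp;
    public String status;        // the Status enum stored as text (or via a TypeConverter)
    public String transactionId;
    public String companionId;

    // Subclass-specific columns; the unused ones simply stay null.
    public String text;          // BasicMessage
    public String amount;        // TransferMessage (BigDecimal needs a TypeConverter or text)
}

// Would normally live in its own file.
@Dao
public interface MessageDao {
    @Insert
    void insert(MessageRow row);

    // Returns every message type in one query, ordered for the chat list.
    @Query("SELECT * FROM messages ORDER BY timestamp")
    List<MessageRow> allMessages();
}

Mapping a MessageRow back to the right AbstractMessage subclass is then a small factory method in your repository layer.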
I am trying to implement DBFlow for the first time and I think I might just not get it. I am not an advanced Android developer, but I have created a few apps. In the past, I would just create a "database" object that extends SQLiteOpenHelper, then override the callback methods.
In onCreate, once all of the tables have been created, I would populate any lookup data with a hard-coded SQL string: db.execSQL(Interface.INSERT_SQL_STRING);. Because I'm lazy, in onUpgrade() and onDowngrade(), I would just DROP the tables and call onCreate(db);.
I have read through the migrations documentation, which not only seems to be syntactically outdated (the annotation's "database =" has been changed to "databaseName ="), but also makes no mention of migrating from no database at all to the initial version. I found an issue that claims migration 0 can be used for this purpose, but I cannot get any migrations to work at this point.
Any help would be greatly appreciated. The project is on GitHub.
The answer below is correct, but I believe that this answer and question will soon be "deprecated" along with most third-party ORMs. Google's new Room Persistence Library (Yigit's Talk) will be preferred in most cases. Although DBFlow will certainly carry on (thank you, Andrew) in many projects, this is a good place to redirect people to the newest "best practice", because this particular question was/is geared toward those new to DBFlow.
The correct way to initialize the database (akin to SQLiteOpenHelper's onCreate(db) callback) is to create a Migration object that extends BaseMigration with version = 0, then add the following to onCreate() in your Application class (or wherever you are doing the DBFlow initialization):
FlowManager.init(new FlowConfig.Builder(this).build());
FlowManager.getDatabase(BracketsDatabase.NAME).getWritableDatabase();
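Concretely, that initialization could live in your Application subclass, along these lines (the application class name here is just an example):

import android.app.Application;

import com.raizlabs.android.dbflow.config.FlowConfig;
import com.raizlabs.android.dbflow.config.FlowManager;

public class BracketsApplication extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        FlowManager.init(new FlowConfig.Builder(this).build());
        // Touching the writable database forces creation, which runs the version-0 migration below.
        FlowManager.getDatabase(BracketsDatabase.NAME).getWritableDatabase();
    }
}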
In the migration class, you override migrate() and can then use the Transaction Manager to initialize lookup data or other initial database content.
Migration Class:
@Migration(version = 0, database = BracketsDatabase.class)
public class DatabaseInit extends BaseMigration {

    private static final String TAG = "classTag";

    @Override
    public void migrate(DatabaseWrapper database) {
        Log.d(TAG, "Init Data...");
        populateMethodOne();
        populateMethodTwo();
        populateMethodThree();
        Log.d(TAG, "Data Initialized");
    }

    // populateMethodOne/Two/Three create and save the initial lookup records (shown below).
}
To populate the data, use your models to create the records and the Transaction Manager to save the models via FlowManager.getDatabase(AppDatabase.class).getTransactionManager().getSaveQueue().addAll(models);
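For example, one of those populate methods inside the migration class could look roughly like this (LookupItem is a hypothetical model class, i.e. a BaseModel with a @Table annotation, and the save-queue chain is the one quoted above):

private void populateMethodOne() {
    List<LookupItem> items = new ArrayList<>();

    LookupItem first = new LookupItem();   // hypothetical lookup model
    first.name = "Default";
    items.add(first);

    // Queue all models for saving through DBFlow's transaction manager.
    FlowManager.getDatabase(BracketsDatabase.class)
            .getTransactionManager()
            .getSaveQueue()
            .addAll(items);
}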
To initialize data in DBFlow, all you have to do is create a class for your object models that extends BaseModel and use the @Table annotation on the class.
Then create some objects of that class and call .save() on them.
You can check the examples in the library's documentation.
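A minimal sketch of that could look like this (the Category model and its columns are placeholders; depending on your DBFlow version, the @Table attribute may be database or databaseName):

import com.raizlabs.android.dbflow.annotation.Column;
import com.raizlabs.android.dbflow.annotation.PrimaryKey;
import com.raizlabs.android.dbflow.annotation.Table;
import com.raizlabs.android.dbflow.structure.BaseModel;

// Hypothetical lookup model stored in BracketsDatabase.
@Table(database = BracketsDatabase.class)
public class Category extends BaseModel {

    @PrimaryKey(autoincrement = true)
    long id;

    @Column
    String name;
}

// Elsewhere, e.g. inside the version-0 migration:
Category category = new Category();
category.name = "Default";
category.save();   // persists the row through DBFlow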
We are building a project using Couchbase; on Android, I use Couchbase Lite. I've usually worked with relational databases, and because I am new to Couchbase I am having trouble finding the "correct" architecture. I think I understand the core concepts, but all the samples and guides seem to stick to some kind of easy setup where they access the database right in the Activities.
I am more used to having some database abstraction where the business logic only gets to see POJO DTOs that are delivered through a database interface or some DAO. So I've now annotated my model classes and started writing a simple OR mapper, but with different types of data, foreign keys, etc., this is getting time-consuming quite fast.
Am I completely missing the point here somehow? I can't imagine everyone doing it this way. Is everyone writing methods that convert documents to POJO model classes for each class separately? Or using a JSON parser to do that (but that won't work for foreign keys if I want to load them too, does it)?
Sorry for the load of questions, but I feel I am missing something obvious here. Thanks!
Will try answering your questions:
Am I completely missing the point here somehow?
No. You can treat NoSQL Couchbase as a persistent, distributed object cache, so it's not an RDBMS. However, the DAO pattern fits this model perfectly, since you are dealing with DTOs/value objects/POJOs both at the DAO level and at the NoSQL level.
I can't imagine everyone doing it this way?
I suggest writing one universal Couchbase manager class that can persist/retrieve a POJO; then you can reuse it in your DAOs.
Is everyone writing methods that convert documents to POJO model classes for each class separately? Or using a JSON parser to do that (but that won't work for foreign keys if I want to load them too, does it)?
You can have common code in your Couchbase manager class that converts between JSON and POJOs, so you work only with POJOs and don't see any JSON in your application code (outside of the Couchbase manager class).
Here is an example of such class:
import javax.inject.Inject;

import com.couchbase.client.CouchbaseClient;
import com.google.gson.Gson;

public class CouchbaseManager<K, V>
{
    private final Class<V> valueTypeParameterClass;

    @Inject
    private CouchbaseClient cbClient;

    @Inject
    private Gson gson;

    public CouchbaseManager(final Class<V> valueClass)
    {
        this.valueTypeParameterClass = valueClass;
    }

    public V get(K key)
    {
        V res = null;
        String jsonValue = null;
        if (key != null)
        {
            // Keys are passed to the Couchbase client as strings.
            jsonValue = (String) cbClient.get(String.valueOf(key));
            if (jsonValue != null)
            {
                res = gson.fromJson(jsonValue, valueTypeParameterClass);
            }
        }
        return res;
    }

    public void put(K key, V value)
    {
        int ttl = 0;
        cbClient.set(String.valueOf(key), ttl, gson.toJson(value, valueTypeParameterClass));
    }
}
Then in your DAO code you create an instance of CouchbaseManager for each type:
CouchbaseManager<String,Customer> cbmCustomer = new CouchbaseManager<String,Customer>(Customer.class);
CouchbaseManager<String,Account> cbmAccount = new CouchbaseManager<String,Account>(Account.class);
// and so on for other POJOs you have.
// then get/put operations look simple
Customer cust = cbmCustomer.get("cust-1234");
cust.setName("New Name"); // mutate value
// store changes
cbmCustomer.put(cust.getId(), cust);
Now regarding "foreign keys". Remember its not RDBMS so its up to your code to have notion of a "foreign key". For example a Customer class can have an id of an account:
Customer cust = cbmCustomer.get("cust-1234");
String accId = cust.getAccountId();
//You can load account
Account acc = cbmAccount.get(accId);
So as you can see, you are doing it all yourself. I wish there were a JPA or JDO implementation/provider for Couchbase (like DataNucleus or Hibernate).
You should really start with your POJO/document design and try to split your POJO entities into "chunks" of data to get the right balance between coarse- and fine-grained POJOs.
Also see this discussion on key/document design considerations.
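To illustrate the "chunks" idea (the key scheme and POJO names here are made up), a coarse Customer document could be split into finer-grained documents stored under related keys, reusing the CouchbaseManager above:

// Hypothetical split: profile data and preferences live in separate documents.
CouchbaseManager<String, CustomerProfile> cbmProfile =
        new CouchbaseManager<String, CustomerProfile>(CustomerProfile.class);
CouchbaseManager<String, CustomerPrefs> cbmPrefs =
        new CouchbaseManager<String, CustomerPrefs>(CustomerPrefs.class);

// Frequently-updated preferences can be written without rewriting the whole profile.
CustomerProfile profile = cbmProfile.get("cust-1234::profile");
CustomerPrefs prefs = cbmPrefs.get("cust-1234::prefs");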
I'm trying to persist data objects throughout my Android app. I want to be able to access an object in one activity, modify it, save it, navigate to a new activity, and access the same object with the updated value.
What I'm essentially talking about is a cache, but my data objects are complex. For example, ObjectA contains ObjectB, which contains ObjectC. Does anyone know of a good method, tool, or framework for persisting complex objects in SQL?
Put a static field in a subclassed Application. Inside your manifest, add android:name="MyApp" to your application tag.
To access it from other classes, simply use:
MyApp myApp = (MyApp) getApplicationContext();
See How to declare global variables in Android?:
class MyApp extends Application {

    private String myState;

    public String getState() {
        return myState;
    }

    public void setState(String s) {
        myState = s;
    }
}

class Blah extends Activity {
    @Override
    public void onCreate(Bundle b) {
        ...
        MyApp appState = ((MyApp) getApplicationContext());
        String state = appState.getState();
        ...
    }
}
You could use an ORM framework like OrmLite for mapping objects into SQL, but it may be overkill for your situation.
You could also make these shared objects Parcelable and pass them between Activities through Intents.
You could also save these objects into SharedPreferences, so each Activity can access them whenever it needs to, and the objects are persisted this way as well. This may mean more I/O, though, so take that into consideration. You could use e.g. Gson to serialize the objects more painlessly for this; a rough sketch of that follows below.
These are the solutions I'd consider. But whatever you do, don't put this common object into some kind of "standard" global static variable, such as a custom Application class, a static field, or any implementation of the Singleton pattern; these are really fragile constructs on Android.
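As a sketch of the SharedPreferences + Gson option (the helper class and key names are made up; Gson serializes the whole nested graph, so ObjectB and ObjectC come along with ObjectA):

import android.content.Context;
import android.content.SharedPreferences;

import com.google.gson.Gson;

// Hypothetical helper that persists a nested object graph as JSON in SharedPreferences.
public class ObjectCache {

    private static final String PREFS = "object_cache";
    private final SharedPreferences prefs;
    private final Gson gson = new Gson();

    public ObjectCache(Context context) {
        prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
    }

    public void save(String key, Object value) {
        prefs.edit().putString(key, gson.toJson(value)).apply();
    }

    public <T> T load(String key, Class<T> type) {
        String json = prefs.getString(key, null);
        return json == null ? null : gson.fromJson(json, type);
    }
}

In one Activity you would call cache.save("objectA", objectA), and in the next cache.load("objectA", ObjectA.class) to get it back with the updated values.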
Why don't you use a JSON serialization mechanism?
Combined with static access to your objects, you can easily build a lightweight database with some basic functionality:
loadObjectsFromCache
saveObjectsInCache
getObjects
You can also store your objects in different files and use a streaming JSON parser like this one: http://code.google.com/p/google-gson/
It's the same as this one: http://developer.android.com/reference/android/util/JsonReader.html
but can be used even if your application's API level is lower than 11.
It uses less memory than the basic DOM parser (http://developer.android.com/reference/org/json/JSONObject.html), but with the same speed.
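For example, loadObjectsFromCache could stream a JSON array of your objects from a file with Gson (ObjectA and the file path are placeholders; this is only a sketch of the idea):

import com.google.gson.Gson;
import com.google.gson.stream.JsonReader;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical cache loader: streams a JSON array of ObjectA from a file
// without building the whole DOM in memory.
public class JsonCacheLoader {

    private final Gson gson = new Gson();

    public List<ObjectA> loadObjectsFromCache(String path) throws IOException {
        List<ObjectA> objects = new ArrayList<>();
        JsonReader reader = new JsonReader(
                new InputStreamReader(new FileInputStream(path), "UTF-8"));
        try {
            reader.beginArray();
            while (reader.hasNext()) {
                // Gson reads exactly one object from the stream on each call.
                objects.add(gson.fromJson(reader, ObjectA.class));
            }
            reader.endArray();
        } finally {
            reader.close();
        }
        return objects;
    }
}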
So I have a custom subclass of OrmLiteSqliteOpenHelper. I want to use the ObjectCache interface to make sure I have identity-mapping from DB rows to in-memory objects, so I override getDao(...) as:
@Override
public <D extends Dao<T, ?>, T> D getDao(Class<T> arg0) throws SQLException {
    D dao = super.getDao(arg0);
    if (dao.getObjectCache() == null && !UNCACHED_CLASSES.contains(arg0)) {
        dao.setObjectCache(InsightOpenHelperManager.sharedCache());
    }
    return dao;
}
My understanding is that super.getDao(Class<T> clazz) is basically doing a call to DaoManager.createDao(this.getConnectionSource(),clazz) behind the scenes, which should find a cached DAO if one exists. However...
final DatabaseHelper helpy = CustomOpenHelperManager.getHelper(StoreDatabaseHelper.class);
final CoreDao<Store, Integer> storeDao = helpy.getDao(Store.class);
DaoManager.registerDao(helpy.getConnectionSource(), storeDao);
final Dao<Store,Integer> testDao = DaoManager.createDao(helpy.getConnectionSource(), Store.class);
I would expect that (even without the registerDao(...) call) storeDao and testDao would be references to the same object. In the Eclipse debugger, however, I see two distinct instances. Also, testDao's object cache is null.
Am I doing something wrong here? Is this a bug?
I do have a custom helper manager, but only because I needed to manage several databases. It's just a hashmap of Class<? extends DatabaseHelper> keys to instances.
The reason I need my DAO cached is that I have several foreign collections that are eager and are being loaded by internally-generated DAOs that are not using my global cache and thus are being re-created independently for each collection.
As I was writing this up, I thought I could just have my overridden helpy.getDao(...) call through to DaoManager.createDao(...), but that results in the same thing: I still get a different DAO on the second call to createDao(...). This seems to me to be totally against the docs for DaoManager.
First, I thought it looked like registerDao(...) may be the culprit:
public static synchronized void registerDao(ConnectionSource connectionSource, Dao<?, ?> dao) {
    if (connectionSource == null) {
        throw new IllegalArgumentException("connectionSource argument cannot be null");
    }
    if (dao instanceof BaseDaoImpl) {
        DatabaseTableConfig<?> tableConfig = ((BaseDaoImpl<?, ?>) dao).getTableConfig();
        if (tableConfig != null) {
            tableMap.put(new TableConfigConnectionSource(connectionSource, tableConfig), dao);
            return;
        }
    }
    classMap.put(new ClassConnectionSource(connectionSource, dao.getDataClass()), dao);
}
That return on line 230 of the DaoManager source prevents the classMap from being updated (since I'm using the pregenerated config files?). When my code hits the second create call, it looks at the classMap first and somehow (against my better understanding) finds a different copy of the DAO living there, which is super weird because, stepping through the first create, I watched the classMap being initialized.
But where would a second DAO possibly come from?
Looking forward to Gray's insight! :-)
As @Ben mentioned, there is some internal DAO creation which is screwing things up, but I think he may have uncovered a bug.
Under Android, ORMLite tries to use some magic reflection to build the DAOs, given the horrible reflection performance under all but the most recent Android OS versions. Whenever the user asks for the DAO for class Store (for example), the magic reflection fu creates one DAO but internally uses another one. I've created the following bug:
https://sourceforge.net/tracker/?func=detail&aid=3487674&group_id=297653&atid=1255989
I changed the way the DAOs get created to do a better job of using the reflection output. The changes were pushed out in version 4.34. This release revamps (and simplifies) the internal DAO creation and caching. It should fix the issue.
http://ormlite.com/releases/
Just kidding. It looks like what may be happening is that my Store DAO initialization is creating DAOs for foreign connections (that I set to foreignAutoRefresh) and then recursively creating another DAO for itself (since the DAO creation that started all this has not completed, and thus has yet to be registered with the DaoManager).
Looks like this has to do w/ the recursion noted in BaseDaoImpl.initialize().
I'm getting Inception flashbacks just looking at this.