Class doesn't exist in current schema after being added to migration - android

I've been struggling to perform a simple migration. All I want to achieve is to add a new class to the Realm schema.
The code below is inside a method that is called inside onCreate.
Realm.init(this)
val config = RealmConfiguration.Builder()
    .name("db_name")
    .schemaVersion(5L)
    .migration { realm, oldVersion, newVersion ->
        val schema = realm.schema
        var _oldVersion = oldVersion
        if (_oldVersion == 4L) {
            if (schema.contains(XModel::class.java.simpleName))
                schema.remove(XModel::class.java.simpleName)
            if (!schema.contains(XModel::class.java.simpleName))
                schema.create(XModel::class.java.simpleName)
                    .addField(XModel::id.name, Long::class.javaPrimitiveType,
                        FieldAttribute.PRIMARY_KEY)
                    ...
                    .addField(XModel::N.name, Int::class.javaPrimitiveType)
            _oldVersion += 1
        }
    }
    .build()
Realm.setDefaultConfiguration(config)
As the title suggests, the new class is created in the schema inside the migration block, but when I try to access it in other parts of the application, using a Realm query or a simple call to schema.get("XModel"), it throws the error "XModel doesn't exist in current schema". Any comment would really help. Thank you.
Edit:
Additional information: I have two Realm objects, each in a different Android module, where one module depends on the other. I have somehow made some progress, but now I'm a bit confused: do I need to declare two configurations? Would that mean two Realm instances? How do I switch between them? I want to merge them into one Realm.
Edit2:
Another clarification about Realm: if you have two Android modules, each of them using Realm, will they end up with different Realms even if their configurations use the same name?
Background
I want to give you some background on what I'm doing, because I think it's needed to fully understand my case.
Originally I had only one module, but after refactoring, and because of future apps to be developed, I needed to pull the common classes out of the existing module and put them into a separate lower-level module that the future apps can depend on. This new lower-level module will also be responsible for most of the data layer, so Realm was moved into this module. But I can't just ignore the existing app's Realm, because some users may already have populated it, and I need to transfer that data to the new database.
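What I have in mind for that transfer is roughly a one-time copy along these lines (just a sketch; oldConfig and newConfig stand for the two modules' RealmConfigurations):
val oldRealm = Realm.getInstance(oldConfig) // legacy file from the original module
val newRealm = Realm.getInstance(newConfig) // file owned by the new data module
newRealm.executeTransaction { target ->
    // Detach the legacy rows from their Realm, then insert/update them in the new file.
    val legacy = oldRealm.copyFromRealm(oldRealm.where(XModel::class.java).findAll())
    target.copyToRealmOrUpdate(legacy)
}
oldRealm.close()
newRealm.close()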

Related

How to abandon a manual Room migration script, and fall back to destructive migration?

In the app I'm working on, we had a complex manual migration that required data parsing, manual SQL commands, etc. This was to convert a List<X> column into a new linked table of X. I've previously written about the approach, but the specific commands are not especially relevant for this question.
The issue I'm encountering is that ~1% of users experience a crash as part of this migration. This cannot be reproduced in testing, and due to our table's size, Crashlytics cannot show any useful error.
Losing customer data isn't catastrophic in this context, but being stuck in the current "try migrate, crash, reopen app and repeat" loop is. As such, I want to just give up on the migration and fall back to a destructive migration if we encounter an exception.
Any ideas how this can be done? My current solution is rerunning the DB changes (but not the presumably failing data migration) inside the catch, but this feels very hacky.
Our database is defined as:
Room.databaseBuilder(
    context.applicationContext,
    CreationDatabase::class.java,
    "creation_database"
)
    .addMigrations(MIGRATION_11_12, MIGRATION_14_15)
    .fallbackToDestructiveMigration()
    .build()
where MIGRATION_14_15 is:
private val MIGRATION_14_15 = object : Migration(14, 15) {
    override fun migrate(database: SupportSQLiteDatabase) {
        try {
            // database.execSQL create table etc
        } catch (e: Exception) {
            e.printStackTrace()
            // Here is where I want to give up, and start the DB from scratch
        }
    }
}
The problem you have is that you cannot (at least not easily) invoke the fallback, as that is only invoked when there is no migration.
What you could do is mimic what the fallback does (or something close to it). The fallback deletes (I think) the database file, creates the database from scratch, and then invokes the createAllTables method of the database's _Impl (generated Java) class.
However, you would likely have issues if you deleted the file, as the database connection has already been passed to the migration.
So instead you could DROP all of the app's tables using code copied from the dropAllTables method in the generated Java, and then follow this with the code from the createAllTables method.
Both methods are in the generated Java, in the class named after the class annotated with @Database, suffixed with _Impl.
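A rough sketch of that idea (the SQL statements here are placeholders; the real ones should be copied verbatim from CreationDatabase_Impl):
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

private val MIGRATION_14_15 = object : Migration(14, 15) {
    override fun migrate(database: SupportSQLiteDatabase) {
        try {
            // ... the real data migration: create linked table, parse and move data ...
        } catch (e: Exception) {
            // Give up: drop and recreate, losing data but escaping the crash loop.
            database.execSQL("DROP TABLE IF EXISTS `creation`")
            database.execSQL(
                "CREATE TABLE IF NOT EXISTS `creation` " +
                "(`id` INTEGER NOT NULL, `title` TEXT, PRIMARY KEY(`id`))"
            )
        }
    }
}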
The gotcha is that the exception you showed (Expected .... Found ....) is raised NOT within the migration but after it, when Room is trying to build and validate the database, so you have no control/place to apply the fallback mimic above, unless it were done for all 14-to-15 migrations.
Perhaps what you could do is trap the exception and present a dialog asking the user to uninstall and then reinstall the app. That would bypass the migration, since it would then be a fresh install.

Android architecture LiveData and Repositories

I am converting my application to the Room database and trying to follow Google's architecture best practices, based on "Room with a View".
I am having trouble understanding the repository in terms of clean architecture.
The Words database example contains only one table and one view using it, making it a simple HelloWorld example. But let's start with that.
There is a view which displays a list of words. Thus all words need to be read from the database and displayed.
So we have a MainActivity and a Database to connect.
Entity Word
WordDao to access DB
WordViewModel: To separate the activity lifecycle from the data lifecycle, a ViewModel is used.
WordRepository: Since the data may be kept in a database, in the cloud, or elsewhere, the repository is introduced to handle the decision of where the data comes from.
Activity with the View
It would be nice if the view is updated when the data changes, so LiveData is used.
This in turn means, the repository is providing the LiveData for the full table:
// LiveData gives us updated words when they change.
val allWords: LiveData<List<Word>>
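Behind that property sits the DAO; in the "Room with a View" pattern the query itself returns LiveData, roughly like this (table and method names assumed):
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Query

@Dao
interface WordDao {
    // Room re-runs the query and posts a fresh list whenever the table changes.
    @Query("SELECT * FROM word_table ORDER BY word ASC")
    fun getAllWords(): LiveData<List<Word>>
}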
This is all fine for a single view.
Now to my questions on expanding this concept.
Let us assume, the word table has two columns "word" and "last_updated" as time string.
For easier comparison the time string needs to be converted to milliseconds, so I have a function.
Question: Where to put the fun queryMaxServerDateMS() to get the max(last_updated)?
/**
 * @return Highest server date in table in milliseconds, or 1 on empty/error.
 */
fun queryMaxServerDateMS(): Long {
    val maxDateTime = wordDao.queryMaxServerDate()
    var timeMS: Long = 0
    if (maxDateTime != null) {
        timeMS = parseDateToMillisOrZero_UTC(maxDateTime)
    }
    return if (timeMS <= 0) 1 else timeMS
}
For me it would be natural to put this into the WordRepository.
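The DAO method it relies on would then be a plain aggregate query, something like this (table and column names match my example, but are of course placeholders):
// Returns null for an empty table, hence the fallback handling above.
@Query("SELECT MAX(last_updated) FROM word_table")
fun queryMaxServerDate(): String?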
Second requirement: Background job to update the word list in the database.
Suppose I now want a background job, scheduled on a regular basis, which checks the server for new entries and downloads them to the database. The app may not even be open.
This relates back to the queryMaxServerDateMS question above.
The job will basically first check whether a new entry was made, by asking the server if an entry exists that is newer than the newest entry known locally.
So I would need to create a new WordRepository, run my query, get the max last_updated, and ask the server.
BUT: I do not need the LiveData in the background job, and when val repository = WordRepository(...) is executed, the full table is read, which is needless and time-, memory-, and battery-consuming.
I can also think of a number of different fragments that would require some data from the word table, but never the full data; think of a product detail screen which lists one product.
So I could move it out into another Repository, or DbHelper, or whatever you want to call it.
But in the end I wonder, since using LiveData couples the View, ViewModel and Repository closely together:
Question: Do I need a repository for every activity/fragment, instead of a repository for every table, which would be much more logical?
Yes, with your current architecture you should put it in the Repository.
No, you don't need a repository for every activity/fragment. Preferably, one repository should be created per entity. You can have a UseCase for every ViewModel.
In Clean Architecture there is the concept of a UseCase/Interactor, which can contain business logic; in Android it can act as an additional layer between the ViewModel and the Repository. You can create a UseCase class for your function queryMaxServerDateMS(), put the logic there, and call it from any ViewModel you need.
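For example, a minimal sketch of such a UseCase (the class name is just a placeholder):
// Wraps the single repository call, so ViewModels depend on this small class
// instead of on the whole repository (and its LiveData-returning API).
class GetMaxServerDateUseCase(private val repository: WordRepository) {
    operator fun invoke(): Long = repository.queryMaxServerDateMS()
}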
Also, you can get your LiveData value synchronously by calling getValue().
You do not need a repository for each activity or fragment. To answer your question about getting the max server time: when you load words from the db, you pretty much have access to the entire table. That means you can either do that computation yourself to decide which word is the latest one added, or you can delegate that work to Room by adding another query to the DAO and accessing it in your repo. I'd prefer the latter, just for its simplicity.
To answer your question about using a repo across different activities or fragments: Room caches your computations so that they are available across the different users of your repo (and, eventually, DAO). This means that if you have already computed the max server time in one activity, other lifecycle owners can reuse that computed result as long as the table has not been altered (there may be other conditions as well).
To summarize, you're right about having a repository per table, as opposed to per activity or fragment.

Android Room requery on db change

I'm writing an application using the newest Room Persistence Library.
The app simply shows a list of items and updates this list as the data changes.
When a new item is inserted into a table, or an existing one is updated, I expect the list to update automatically.
I have tried vanilla LiveData and Flowable so far. Both are claimed to support this feature, as stated in the documentation and in this blog post:
https://medium.com/google-developers/room-rxjava-acb0cd4f3757
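For reference, the DAO behind it looks roughly like this (sketched here with simplified names):
import androidx.room.Dao
import androidx.room.Query
import io.reactivex.Flowable

@Dao
interface MessagesDao {
    // Room should re-emit the full list whenever the message table changes.
    @Query("SELECT * FROM message")
    fun all(): Flowable<List<Message>>
}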
Here's the ViewModel snippet in Kotlin:
messagesFlowable = db.messagesDao().all()
messagesFlowable
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe {
        Log.d(TAG, "Received a list of ${it.size} items")
        messages.value = it
    }
Somewhere else, the db is modified like this:
mDb.messagesDao().add(Message("Some data"))
The updates are not pushed to observers. I guess I'm missing something, but what?
Update: This problem is solved and the answer is below.
I'll answer my own question, as the solution is not documented.
It looks like you need to use the same instance of the database object everywhere.
In my case, Dagger 2 was misconfigured to inject a new instance of the DB each time, so my Repository and ViewModel ended up with two separate instances.
Once I used a single database instance shared among all interested parties, all updates were distributed correctly.
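In Dagger 2 terms, the fix boils down to scoping the database provider; a minimal sketch, with illustrative module and class names:
import android.content.Context
import androidx.room.Room
import dagger.Module
import dagger.Provides
import javax.inject.Singleton

@Module
class DatabaseModule {
    // @Singleton guarantees that every injection site shares one database
    // instance, so Room's invalidation tracker can reach all observers.
    @Provides
    @Singleton
    fun provideDatabase(context: Context): AppDatabase =
        Room.databaseBuilder(context, AppDatabase::class.java, "messages.db").build()
}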

What is the correct way to initialize data in a lookup table using DBFlow?

I am trying to implement DBFlow for the first time, and I think I might just not get it. I am not an advanced Android developer, but I have created a few apps. In the past, I would just create a "database" object that extends SQLiteOpenHelper, then override the callback methods.
In onCreate(), once all of the tables had been created, I would populate any lookup data with a hard-coded SQL string: db.execSQL(Interface.INSERT_SQL_STRING);. Because I'm lazy, in onUpgrade() and onDowngrade() I would just DROP the tables and call onCreate(db);.
I have read through the migrations documentation, which not only seems syntactically outdated, since "database =" has been changed to "databaseName =" in the annotation, but also makes no mention of migrating from no database at all to the "initial" version. I found an issue that claims migration 0 can be used for this purpose, but I cannot get any migrations to work at this point.
Any help would be greatly appreciated. The project is on GitHub.
The answer below is correct, but I believe this question and answer will soon be "deprecated", along with most third-party ORMs. Google's new Room Persistence Library (Yigit's talk) will be preferred in most cases. Although DBFlow will certainly carry on (thank you, Andrew) in many projects, this is a good place to redirect people to the newest "best practice", because this particular question was/is geared toward those new to DBFlow.
The correct way to initialize the database (akin to SQLiteOpenHelper's onCreate(db) callback) is to create a Migration object that extends BaseMigration with version = 0, then add the following to onCreate() in the Application class (or wherever you do the DBFlow initialization):
FlowManager.init(new FlowConfig.Builder(this).build());
FlowManager.getDatabase(BracketsDatabase.NAME).getWritableDatabase();
In the Migration class, you override migrate(), and then you can use the Transaction Manager to initialize lookup data or other initial database content.
Migration Class:
@Migration(version = 0, database = BracketsDatabase.class)
public class DatabaseInit extends BaseMigration {

    private static final String TAG = "classTag";

    @Override
    public void migrate(DatabaseWrapper database) {
        Log.d(TAG, "Init Data...");
        populateMethodOne();
        populateMethodTwo();
        populateMethodThree();
        Log.d(TAG, "Data Initialized");
    }
}
To populate the data, use your models to create the records, and the Transaction Manager to save them:
FlowManager.getDatabase(AppDatabase.class).getTransactionManager()
    .getSaveQueue().addAll(models);
To initialize data in DBFlow, all you have to do is create a class for your object model that extends BaseModel, annotated with @Table.
Then create some objects of that class and call .save() on them.
You can check the examples in the library's documentation.
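A minimal sketch of that approach (the model, its fields, and the pre-4.x annotation package are assumptions):
import com.raizlabs.android.dbflow.annotation.Column
import com.raizlabs.android.dbflow.annotation.PrimaryKey
import com.raizlabs.android.dbflow.annotation.Table
import com.raizlabs.android.dbflow.structure.BaseModel

@Table(database = BracketsDatabase::class)
class Team : BaseModel() {
    @PrimaryKey(autoincrement = true)
    var id: Long = 0

    @Column
    var name: String = ""
}

// During initialization, e.g. inside the version-0 migration:
// Team().apply { name = "Alpha" }.save()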

GreenDAO generated entities / name package convention

I'm currently evaluating GreenDAO for my application, and I'm facing the following problem.
My app consists of several modules (separated into packages, e.g. "com.example.app.results", "com.example.app.synchronization"). Some of them have no dependencies; some depend on other modules (e.g. synchronization depends on results, whereas results has no dependencies).
What I would like to model is the following:
Module results has Entity MyResult (attributes: name, value).
Module synchronization has Entity MyResultSynchronization (attributes: MyResult (reference), date).
final Schema schema = new Schema(1, "com.example.app");
final Entity myresult = schema.addEntity("results.MyResult");
final Property myresultId = myresult.addIdProperty().getProperty();
myresult.addStringProperty("name");
myresult.addStringProperty("value");
final Entity myResultSynchronization = schema.addEntity("synchronization.MyResultSynchronization");
myResultSynchronization.addIdProperty();
myResultSynchronization.addDateProperty("date");
myResultSynchronization.addToOne(myresult, myresultId);
but $entityPackage.$name does not do what I expected it to (and neither did $package\$name ;-)).
My question is: am I forced to keep all entities of my app in a single package? Is what I'm trying to do feasible by creating multiple Schemas? But then again, is it possible to use the relation feature between two (or more) schemas? What is the "right" way to do it? (Is there one?)
Indeed, all entities have to be in the same package.
Normally you use a structure like
com.example.myapp.data
where you put everything for managing your database, especially your entity classes. Inside, you can let greenDAO create a dao package where it will put everything needed to access your data(base).
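With the generator, that layout can be expressed roughly like this (a sketch; package names are illustrative, and if I remember correctly setDefaultJavaPackageDao is the generator's hook for the dao sub-package):
// Keep the entities in one package and let greenDAO emit the DAOs elsewhere.
val schema = Schema(1, "com.example.myapp.data")
schema.defaultJavaPackageDao = "com.example.myapp.data.dao"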
Of course, you can enforce your naming scheme by making multiple schemas in greenDAO. But the schemas will be independent: they won't use the same database, and you won't be able to link them together with toOne(), for example.
If you still want to use your naming scheme, you can generate everything into an intermediate package and move the classes to your desired packages manually. But you would have to repeat this after every change to your database schema, which happens more often than one may think at first.
