How do I VACUUM my RoomDatabase for my application?
I built my entire application around Room, and at a certain point one big table gets deleted regularly and later filled again.
I tried to create an additional DAO interface with the query:
@Dao
public interface GenericDao {
    @Query("VACUUM")
    void vacuum();
}
but I get the error:
Error:(13, 10) error: UNKNOWN query type is not supported yet. You can use:SELECT, UPDATE, DELETE
Is there a workaround?
Basically, what I need is that once my table is completely emptied, autoincrement starts at 1 again.
I'm fairly new to database design, so please be understanding if this goes against best practice.
And yes: I have exhausted Google and every other platform I know.
Many Thanks in advance!
The DAO can look as follows:
@Dao
public interface RawDao {
    @RawQuery
    int vacuumDb(SupportSQLiteQuery supportSQLiteQuery);
}
Run the query like this:
rawDao.vacuumDb(new SimpleSQLiteQuery("VACUUM"));
Is there a workaround?
Call getOpenHelper() on your RoomDatabase subclass, to get a SupportSQLiteOpenHelper. Call getWritableDatabase() on it to get a SupportSQLiteDatabase. Then, since I don't think that VACUUM returns a result set, call execSQL("VACUUM") on the SupportSQLiteDatabase.
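A minimal sketch of that approach, assuming db is an instance of your RoomDatabase subclass (the vacuum helper function name is just illustrative):
import androidx.room.RoomDatabase

// Sketch: run VACUUM through the support database, bypassing Room's query verifier.
// VACUUM cannot run inside a transaction, so call this outside of any transaction
// and off the main thread.
fun vacuum(db: RoomDatabase) {
    db.openHelper.writableDatabase.execSQL("VACUUM")
}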
Basically what I need is, that once my Table gets completely emptied, Autoincrement starts at 1 again.
IMHO, you should not be making any assumptions about how autoincrement fields get incremented.
Related
I made a screen like the one in the attached image.
Data such as A, B, C, ... is currently being set by reading it from the strings.xml resource file.
I am now going to use a Room DB instead of strings.xml, and I want to get this data from Room.
To do this, we need to pre-populate the Room database with data.
In the sample code I found, the method addCallback() was usually used,
like this:
@Database(entities = arrayOf(Data::class), version = 1)
abstract class DataDatabase : RoomDatabase() {

    abstract fun dataDao(): DataDao

    companion object {
        @Volatile private var INSTANCE: DataDatabase? = null

        fun getInstance(context: Context): DataDatabase =
            INSTANCE ?: synchronized(this) {
                INSTANCE ?: buildDatabase(context).also { INSTANCE = it }
            }

        private fun buildDatabase(context: Context) =
            Room.databaseBuilder(context.applicationContext,
                DataDatabase::class.java, "Sample.db")
                // prepopulate the database after onCreate was called
                .addCallback(object : Callback() {
                    override fun onCreate(db: SupportSQLiteDatabase) {
                        super.onCreate(db)
                        // insert the data on the IO Thread
                        ioThread {
                            getInstance(context).dataDao().insertData(PREPOPULATE_DATA)
                        }
                    }
                })
                .build()

        val PREPOPULATE_DATA = listOf(Data("1", "val"), Data("2", "val 2"))
    }
}
However, as you can see from the code, in the end the data (here, val PREPOPULATE_DATA) is still being created within the code. (In other examples, db.execSQL() is used.)
Done this way, in the end there is no difference from fetching the data from a resource file.
Is there any good way?
The developer documentation uses assets and files.
However, it says this is not supported for in-memory Room databases.
In this case, I do not know what In-memory means, so I am not using it.
In this case, I do not know what In-memory means, so I am not using it.
An in-memory database is not persistent: it is created in memory rather than as a file, and at some point it will be deleted. You probably do not want an in-memory database.
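For illustration, a sketch of the difference, reusing the DataDatabase class and database name from the question (the helper function names are just placeholders):
import android.content.Context
import androidx.room.Room

// Sketch: a file-based database persists across app runs; an in-memory one is
// discarded when the process ends.
fun buildPersistent(context: Context): DataDatabase =
    Room.databaseBuilder(context.applicationContext, DataDatabase::class.java, "Sample.db")
        .build()

fun buildInMemory(context: Context): DataDatabase =
    Room.inMemoryDatabaseBuilder(context.applicationContext, DataDatabase::class.java)
        .build()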
However, as you can see from the code, in the end the data (here, val PREPOPULATE_DATA) is still being created within the code. (In other examples, db.execSQL() is used.)
This is a common misconception when writing apps, as the onCreate method of an activity is often repeated while an app is running. With an SQLite database, the database is created once in its lifetime, which runs from the very first time the app is run until the database file is deleted. The database otherwise remains (even between app version changes).
Is there any good way?
You basically have two options for a pre-populated database. They are
to add the data when/after the database is created, as in your example code (which is not a good example as explained below), or
to utilise a pre-packaged database, that is a database created outside of the app (typically using an SQLite tool such as DBeaver, Navicat for SQLite, SQLiteStudio, or DB Browser for SQLite).
Option 1 - Adding data
If the data should only be added once, then the overridden onCreate method via the Callback can be used. However, functions/methods from the @Dao annotated class(es) should not be used there. Instead, only SupportSQLiteDatabase functions/methods should be used, e.g. execSQL (hence why the SupportSQLiteDatabase is passed to onCreate).
This is because at that stage the database has just been created and all of the underlying processing has not yet been completed.
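A sketch of that approach, reusing the DataDatabase class from the question; the table name data and its columns are assumptions and must match the schema Room actually generates for your @Entity:
import android.content.Context
import androidx.room.Room
import androidx.room.RoomDatabase
import androidx.sqlite.db.SupportSQLiteDatabase

// Sketch only: "data" and its columns are placeholders for whatever Room generates.
fun buildPrepopulated(context: Context): DataDatabase =
    Room.databaseBuilder(context.applicationContext, DataDatabase::class.java, "Sample.db")
        .addCallback(object : RoomDatabase.Callback() {
            override fun onCreate(db: SupportSQLiteDatabase) {
                super.onCreate(db)
                // Use raw SQL on the SupportSQLiteDatabase rather than a DAO at this point;
                // INSERT OR IGNORE skips rows that would violate a constraint.
                db.execSQL("INSERT OR IGNORE INTO data (id, value) VALUES ('1', 'val'), ('2', 'val 2')")
            }
        })
        .build()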
You could protect against duplicating data quite easily by using INSERT OR IGNORE .... rather than INSERT ..... This will skip insertion if there is an applicable constraint violation (rule being broken). As such it relies upon such rules being in force.
The two most commonly used constraints are NOT NULL and UNIQUE, the latter implicitly for a primary key.
In your case, if a Data object has just the 2 fields (columns in database terminology) then, as Room requires a primary key, an implicit UNIQUE constraint applies (it could be either column, or a composite primary key across both). As such, adding Data(1,"val") a second time would result in a constraint violation, which would result in either:
the row being deleted and another inserted (if INSERT OR REPLACE were used), which is further complicated by the value of autoGenerate,
an exception due to the violation, or
the insert being skipped (if INSERT OR IGNORE were used).
This option could be suitable for a small amount of data, but if overused it can start to bloat the code and compromise its maintainability.
If INSERT OR IGNORE were utilised (or alternative checks), then this could, at some additional overhead, even be undertaken in the Callback's onOpen method, which is called every time the database is opened.
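For example, the sketch above could override onOpen instead of (or in addition to) onCreate; the same hypothetical table and columns apply:
// Sketch: onOpen runs every time the database is opened, so INSERT OR IGNORE
// (or some other check) is needed to avoid re-inserting the seed rows.
override fun onOpen(db: SupportSQLiteDatabase) {
    super.onOpen(db)
    db.execSQL("INSERT OR IGNORE INTO data (id, value) VALUES ('1', 'val'), ('2', 'val 2')")
}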
Option 2 - Pre-packaged database
If you have lots of initial data, then creating the database externally, including it as an asset (so it is part of the package that is deployed), and then using Room's .createFromAsset (or the more rarely used .createFromFile) would be the way to go.
However, the downfall with this is that Room expects such a database to comply with the schema it determines, and those expectations are very strict. Putting together a database without understanding the nuances of Room can therefore be a nightmare.
e.g. SQLite's flexibility allows column types to be virtually anything (see How flexible/restrictive are SQLite column types?). Room only allows column types of INTEGER, TEXT, REAL or BLOB. Anything else and the result is an exception with the Expected .... Found .... message.
However, the easy way around this is to let Room tell you what schema it expects. To do so, you create the @Entity annotated classes (the tables), create the @Database annotated class including the respective entities in the entities parameter, and then compile. In Android Studio's Android view, java (generated) will then be visible in the explorer. Within that there will be a class with the same name as the @Database annotated class but suffixed with _Impl. Within this class there is a function/method createAllTables, and it includes execSQL statements for all of the tables (the room_master_table should be ignored, as Room will always create that itself).
The database, once created and saved, should be copied into the assets folder; using .createFromAsset(????) will then result in the pre-packaged database being copied from the package to the appropriate local storage location.
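A minimal sketch of that builder call, assuming the pre-packaged file was saved under assets as databases/Sample.db and reusing the DataDatabase class from the question:
import android.content.Context
import androidx.room.Room

// Sketch: Room copies the packaged asset into place when the database is first created.
// The asset path "databases/Sample.db" is an assumption for this example.
fun buildFromAsset(context: Context): DataDatabase =
    Room.databaseBuilder(context.applicationContext, DataDatabase::class.java, "Sample.db")
        .createFromAsset("databases/Sample.db")
        .build()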
In the app I'm working on, we had a complex manual migration that required data parsing, manual SQL commands, etc. This was to convert a List<X> column into a new linked table of X. I've previously written about the approach, but the specific commands are not especially relevant for this question.
The issue I'm encountering is that ~1% of users are experiencing a crash as part of this migration. This cannot be reproduced in testing, and due to our table's size, Crashlytics cannot show any useful error.
Losing customer data isn't catastrophic in this context, but being stuck in the current "try migrate, crash, reopen app and repeat" loop is. As such, I want to just give up on the migration and fall back to a destructive migration if we encounter an exception.
Any ideas how this can be done? My current solution is rerunning the DB changes (but not the presumably failing data migration) inside the catch, but this feels very hacky.
Our database is defined as:
Room.databaseBuilder(
    context.applicationContext,
    CreationDatabase::class.java,
    "creation_database"
)
    .addMigrations(MIGRATION_11_12, MIGRATION_14_15)
    .fallbackToDestructiveMigration()
    .build()
where MIGRATION_14_15 is:
private val MIGRATION_14_15 = object : Migration(14, 15) {
    override fun migrate(database: SupportSQLiteDatabase) {
        try {
            // database.execSQL create table etc
        } catch (e: Exception) {
            e.printStackTrace()
            // Here is where I want to give up, and start the DB from scratch
        }
    }
}
The problem you have is that you cannot (at least easily) invoke the fall-back, as that is only invoked when there is no migration.
What you could do is mimic what the fall-back does (well, close to what it does). That is, the fall-back will delete (I think) the database file, create the database from scratch, and then invoke the database's _Impl (generated java) createAllTables method.
However, you would likely have issues if you deleted the file, as the database connection has already been passed to the migration.
So instead you could DROP all of the app's tables using the code copied from the dropAllTables method in the generated java. You could then follow this with the code from the createAllTables method.
These methods are in the generated java, in the class whose name is that of the class annotated with @Database, suffixed with _Impl.
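A hedged sketch of that mimic inside the catch block; the table name my_table and its CREATE statement are placeholders, and in practice the statements should be copied verbatim from dropAllTables/createAllTables in the generated CreationDatabase_Impl:
import androidx.room.migration.Migration
import androidx.sqlite.db.SupportSQLiteDatabase

// Sketch only: replace the DROP/CREATE statements with those from the generated _Impl class.
private val MIGRATION_14_15 = object : Migration(14, 15) {
    override fun migrate(database: SupportSQLiteDatabase) {
        try {
            // the real data migration goes here
        } catch (e: Exception) {
            e.printStackTrace()
            // Give up: drop the app's tables and recreate them empty,
            // mimicking a destructive migration.
            database.execSQL("DROP TABLE IF EXISTS my_table")
            database.execSQL("CREATE TABLE IF NOT EXISTS my_table (id INTEGER PRIMARY KEY NOT NULL, name TEXT)")
        }
    }
}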
The gotcha is that the exception (Expected .... Found ....) that you have shown is NOT thrown within the migration but after it, when Room is trying to build the database, so you have no control/place to do the above fall-back mimic unless it is done for the whole of the 14 to 15 migration.
Perhaps what you could do is to trap the exception, present a dialog requesting the user to uninstall the app and to then re-install. This would then bypass the migration as it would be a fresh install.
I was tinkering with SingleLiveEvent. Is it possible to use it with a Room database? I tried using it and got a build error saying Not sure how to convert a Cursor to this method's return type. Are there any workarounds here? I have an edge case where I would like to use it!
SingleLiveEvent is a MutableLiveData, which is LiveData. You can return LiveData<List<YourData>> from Room with a select query, which is run on a worker thread. There is no need to work with cursors in Room: get the LiveData<List<YourData>> and, in the observe method, send the List<YourData> to the required class or RecyclerView. What is your edge case for needing a cursor?
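A sketch of that pattern; the entity, table, and DAO names here are hypothetical:
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

// Hypothetical entity and DAO, just to illustrate returning LiveData<List<...>> instead
// of a Cursor; Room runs the query off the main thread and the observer is notified
// whenever the table changes.
@Entity(tableName = "your_data")
data class YourData(
    @PrimaryKey val id: Long,
    val name: String
)

@Dao
interface YourDataDao {
    @Query("SELECT * FROM your_data")
    fun observeAll(): LiveData<List<YourData>>
}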
Caution: It's highly discouraged to work with the Cursor API because it doesn't guarantee whether the rows exist or what values the rows contain. Use this functionality only if you already have code that expects a cursor and that you can't refactor easily.
However, you can get it with
@Dao
public interface MyDao {
    @Query("SELECT * FROM user WHERE age > :minAge LIMIT 5")
    public Cursor loadRawUsersOlderThan(int minAge);
}
Source
I'm targeting Android 2.2 and newer. This error was generated on a device running 4.x. I am using ORMLite 4.38 libraries.
I need to guarantee that every record instance is unique across any number of devices. I was happy to see that ORMLite supports UUIDs as IDs. I've created a UUID-id abstract base class for my database record definitions. allowGeneratedIdInsert is the perfect solution. But this feature seems to cause an 'IllegalStateException: could not create data element in dao'. I tested by removing this annotation, and there was no issue. Put it back in... same issue. Put the base class stuff in one record definition... same issue.
LogCat also reports:
Caused by: java.sql.SQLException: Unable to run insert stmt on object - objectid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
public abstract class UUIDDaoEnabled<T> extends BaseDaoEnabled<T, UUID> {

    // allowGeneratedIdInsert allows us to set the UUID when this device's db didn't create it
    @DatabaseField(generatedId = true, allowGeneratedIdInsert = true)
    private UUID id;

    ...

    // only place we can set UUIDs
    public void setUUIDFromSerializedSource(SerializedModelBinaryInputStream stream, Dao<T, UUID> dao) throws SQLException {
        if (id == null)
            dao.refresh((T) this);
        if (id != null)
            throw new SQLException("Trying to set UUID on existing object");
        id = stream.getCurrentUUID();
    }
}
I'll specialize like so:
@DatabaseTable()
public class Type extends UUIDDaoEnabled<Type> { ... }
I can't explain this from the documentation for allowGeneratedIdInsert and generatedId. In fact, the documentation for allowGeneratedIdInsert says it overrides the default behavior of generatedId. It also says:
This only works if the database supports this behavior
Yet I have read in other posts that ORMLite 4.25 (?) and newer supports this behavior on Android devices. So either that's not entirely true, or I'm doing something stupid... anyone?
UPDATE: after thinking about it for a minute, I realized that neither allowGeneratedIdInsert support nor inheritance can be the root cause, because I instantiate other objects based on the same abstract class. What I can't figure out is why one particular class is causing the issue. The only unique thing about the offending record type (compared to the other types that create fine) is that it is the many side of a one-to-many, and it contains several one-to-many collections itself. Could these properties, combined with allowGeneratedIdInsert, be the root issue? Rather, I should ask: has anyone seen this issue in this circumstance?
UPDATE: never mind the question. I can use updateId(...) instead of allowGeneratedIdInsert.
So I'm not sure about this, but it looks to me like you are trying to insert an element twice into a table with the same UUID id. The exception is saying there is a constraint failure:
IllegalStateException: Could not create data element in dao
at BaseForeignCollection.add(BaseForeignCollection.java:57)
...
Caused by: SQLiteConstraintException: error code 19: constraint failed
If you call foreignCollection.add(...); it does the same thing as dao.create(...); -- and you can't do both of these with the same object. If you have an existing object that has already been created by the DAO and you want to associate it with another object, you should do something like:
// associate this object with another
existingObject.setForeignField(...);
// now update it in the db
existingObjectDao.update(existingObject);
You can't add it to the foreignField's foreign collection.
I had a similar problem, but it was caused by using create instead of createOrUpdate to save the object.
It is also important to uninstall the application before testing this change, to ensure that the database has been removed and will not keep the old behavior.
Edit: createOrUpdate is very time-expensive. It's better to use just create with large amounts of data.
Edit 2: It is also better to use TransactionManager.callInTransaction.
So I have a custom subclass of OrmLiteSqliteOpenHelper. I want to use the ObjectCache interface to make sure I have identity-mapping from DB rows to in-memory objects, so I override getDao(...) as:
@Override
public <D extends Dao<T, ?>, T> D getDao(Class<T> arg0) throws SQLException {
    D dao = super.getDao(arg0);
    if (dao.getObjectCache() == null && !UNCACHED_CLASSES.contains(arg0))
        dao.setObjectCache(InsightOpenHelperManager.sharedCache());
    return dao;
}
My understanding is that super.getDao(Class<T> clazz) is basically doing a call to DaoManager.createDao(this.getConnectionSource(),clazz) behind the scenes, which should find a cached DAO if one exists. However...
final DatabaseHelper helpy = CustomOpenHelperManager.getHelper(StoreDatabaseHelper.class);
final CoreDao<Store, Integer> storeDao = helpy.getDao(Store.class);
DaoManager.registerDao(helpy.getConnectionSource(), storeDao);
final Dao<Store,Integer> testDao = DaoManager.createDao(helpy.getConnectionSource(), Store.class);
I would expect that (even without the registerDao(...) call) storeDao and testDao would be references to the same object. However, the Eclipse debugger shows that they are two different objects.
Also, testDao's object cache is null.
Am I doing something wrong here? Is this a bug?
I do have a custom helper manager, but only because I needed to manage several databases. It's just a hashmap of Class<? extends DatabaseHelper> keys to instances.
The reason I need my DAO cached is that I have several foreign collections that are eager and are being loaded by internally-generated DAOs that are not using my global cache and thus are being re-created independently for each collection.
As I was writing this up, I thought I could just have my overridden helpy.getDao(...) call through to DaoManager.createDao(...), but that results in the same thing: I still get a different DAO on the second call to createDao(...). This seems to me to be totally against the docs for DaoManager.
First, I thought it looked like registerDao(...) may be the culprit:
public static synchronized void registerDao(ConnectionSource connectionSource, Dao<?, ?> dao) {
    if (connectionSource == null) {
        throw new IllegalArgumentException("connectionSource argument cannot be null");
    }
    if (dao instanceof BaseDaoImpl) {
        DatabaseTableConfig<?> tableConfig = ((BaseDaoImpl<?, ?>) dao).getTableConfig();
        if (tableConfig != null) {
            tableMap.put(new TableConfigConnectionSource(connectionSource, tableConfig), dao);
            return;
        }
    }
    classMap.put(new ClassConnectionSource(connectionSource, dao.getDataClass()), dao);
}
That return on line 230 of the source for DaoManager prevents the classMap from being updated (since I'm using the pregenerated config files?). When my code hits the second create call, it looks at the classMap first, and somehow (against my better understanding) finds a different copy of the DAO living there. Which is super weird, because stepping through the first create, I watched the classMap be initialized.
But where would a second DAO possibly come from?
Looking forward to Gray's insight! :-)
As @Ben mentioned, there is some internal DAO creation which is screwing things up, but I think he may have uncovered a bug.
Under Android, ORMLite tries to use some magic reflection to build the DAOs, given the horrible reflection performance under all but the most recent Android OS versions. Whenever the user asks for the DAO for class Store (for example), the magic reflection fu is creating one DAO but internally it is using another one. I've created the following bug:
https://sourceforge.net/tracker/?func=detail&aid=3487674&group_id=297653&atid=1255989
I changed the way the DAOs get created to do a better job of using the reflection output. The changes were pushed out in version 4.34. This release revamps (and simplifies) the internal DAO creation and caching. It should fix the issue.
http://ormlite.com/releases/
Just kidding. Looks like what may be happening is that my Store object's DAO initialization is creating DAOs for foreign connections (that I set to foreignAutoRefresh) and then recursively creating another DAO for itself (since the DAO creation that started this has not completed, and thus has yet to be registered with the DaoManager).
Looks like this has to do w/ the recursion noted in BaseDaoImpl.initialize().
I'm getting Inception flashbacks just looking at this.