While playing with the Room Persistence Library, I came to know that there is no way to set a data class field with NOT NULL and UNIQUE constraints, even though SQLite supports those constraints. Isn't it a problem to migrate an old database where those constraints are used? Can anyone give a suggestion for this issue?
I came to know that there is no way to set a data class field with NOT NULL and also UNIQUE constraints
A @NonNull annotation on an @Entity field will cause that field's column to have NOT NULL applied to it.
unique = true on an @Index will enforce a uniqueness constraint (e.g., @Entity(indices = {@Index(value = "something", unique = true)})). However, you are correct that a plain UNIQUE constraint on a column, other than via an index, is not supported.
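At the SQLite level, the unique index generated by that annotation enforces exactly a UNIQUE constraint. A quick sketch (Python's sqlite3 used purely for illustration, with made-up table and index names):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Table plus a unique index, mirroring what
# @Entity(indices = {@Index(value = "something", unique = true)}) generates
db.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, something TEXT)")
db.execute("CREATE UNIQUE INDEX index_demo_something ON demo (something)")

db.execute("INSERT INTO demo (something) VALUES ('a')")
try:
    db.execute("INSERT INTO demo (something) VALUES ('a')")  # duplicate value
    duplicate_rejected = False
except sqlite3.IntegrityError:
    # UNIQUE constraint failed: demo.something
    duplicate_rejected = True

print(duplicate_rejected)  # → True
```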
Isn't it a problem to migrate old database where those constraints are used?
Room is not designed to support existing database structures, particularly in the now-current alpha state. Over time, I would expect Room to support a higher percentage of SQLite features, though I will be stunned if it ever reaches 100%.
Complementary answer about NOT NULL for those using Kotlin:
Please note that declaring a property with a non-optional type will automatically make its column NOT NULL (an optional/nullable type will not).
You can check it in the schema generated by Room, with @Database(exportSchema = true) on your database class.
For example I have something like that:
@Entity(tableName = "messages")
data class Message(
    @PrimaryKey
    val messageId: UUID = UUID.randomUUID(),
    val date: Date = Date(),
    val receivedDate: Date? = null
)
And in the generated schema I can read:
"CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`messageId` TEXT NOT NULL, `date` INTEGER NOT NULL, `receivedDate` INTEGER, PRIMARY KEY(`messageId`))"
(Note: the Date type is stored here as an INTEGER and the UUID as TEXT, due to type converters I use elsewhere.)
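At the SQLite level, those NOT NULL clauses behave as you'd expect. A small sketch of the same generated table (Python's sqlite3 used purely for illustration; the table shape is copied from the schema above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The same shape as the CREATE TABLE Room generated above
db.execute("CREATE TABLE messages (messageId TEXT NOT NULL, date INTEGER NOT NULL, "
           "receivedDate INTEGER, PRIMARY KEY(messageId))")

# The nullable column (receivedDate, from Date?) accepts NULL...
db.execute("INSERT INTO messages VALUES ('id-1', 1000, NULL)")

# ...but the non-null column (date, from Date) does not
try:
    db.execute("INSERT INTO messages VALUES ('id-2', NULL, NULL)")
    not_null_enforced = False
except sqlite3.IntegrityError:
    # NOT NULL constraint failed: messages.date
    not_null_enforced = True

print(not_null_enforced)  # → True
```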
If you have multiple fields that together must be unique, and you want inserts to be deduplicated on that basis, you can use a composite primary key.
For NOT NULL, Room provides the @NonNull annotation, which is added to any field that cannot be null.
In the example below, a roll number is unique within each class, but two different classes can have the same roll number. So we can use class and rollNumber together as a composite primary key and insert rows uniquely.
Example:
@Entity(primaryKeys = {"rollNumber", "studentClass"})
class Student {
    @NonNull
    private int rollNumber;
    private String firstName;
    private String lastName;
    private int studentClass; // "class" is a reserved word in Java, so the field is renamed
}
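The composite key's behaviour can be sketched directly in SQLite (Python's sqlite3 used for illustration; names follow the entity above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Composite primary key over (rollNumber, studentClass), as in the entity above
db.execute("CREATE TABLE Student (rollNumber INTEGER NOT NULL, firstName TEXT, "
           "lastName TEXT, studentClass INTEGER NOT NULL, "
           "PRIMARY KEY (rollNumber, studentClass))")

# The same roll number in two different classes is allowed
db.execute("INSERT INTO Student VALUES (1, 'Asha', 'Rao', 7)")
db.execute("INSERT INTO Student VALUES (1, 'Ben', 'Lee', 8)")

# Repeating the same (rollNumber, studentClass) pair is rejected
try:
    db.execute("INSERT INTO Student VALUES (1, 'Cara', 'Kim', 7)")
    pair_rejected = False
except sqlite3.IntegrityError:
    pair_rejected = True

print(pair_rejected)  # → True
```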
For a nullable field you can use Java's primitive wrapper types; for example, use Integer instead of int in your Room entity. A wrapper type can be null while a primitive cannot, so in the generated SQL a primitive field gets notNull = true, whereas an Integer field gets notNull = false.
Related
I have an issue where autoGenerate is not working on an inherited field in my Entity class.
In my project I have created a base class which has an id field already added to it. This base class is then used by every Entity so I can work with generics and such. Everything seems to work perfectly until I add the autoGenerate to the id field of an Entity. (FYI: this was working in version 2.2.6, but in 2.3.0 this breaks and results in this issue.)
The BaseEntity class
interface BaseEntity {
val id: Any
}
The specific Entity class
@Entity(tableName = DBConstants.FOOD_ENTRY_TABLE_NAME)
data class FoodEntry(
    @PrimaryKey(autoGenerate = true)
    override val id: Int = 0,
    var amount: Float,
    var date: Long,
    var meal: Meal
) : BaseEntity
If I do something like this it works (but it's not what I need)
@Entity(tableName = DBConstants.FOOD_ENTRY_TABLE_NAME)
data class FoodEntry(
    override val id: Int = 0,
    @PrimaryKey(autoGenerate = true)
    var someOtherId: Int = 0,
    var amount: Float,
    var date: Long,
    var meal: Meal
) : BaseEntity
As far as I can see this is only a problem when you wish to autoGenerate an inherited field.
Anybody else have seen this issue before?
As far as I can see this is only a problem when you wish to autoGenerate an inherited field.
The same behaviour happens if you just have @PrimaryKey, or if you define the id column as a primary key at the table level.
The issue is that Room interprets the inherited Int as the Java primitive type int in the underlying generated code (perhaps a bug; you may wish to raise an issue). Room treats the 0 default value differently depending on whether it considers the type to be int or Int.
If the type is Int and the value is 0, Room does not specify a value for the primary key column, and thus allows SQLite to assign one.
e.g. SQL along the lines of INSERT INTO the_table (amount,date,meal) VALUES(the_amount, the_date, the_meal);
If the type is int, the value is always specified, so the 0s will result in UNIQUE constraint conflicts.
e.g. SQL along the lines of INSERT INTO the_table VALUES(0,the_amount, the_date, the_meal);
As the column list is omitted, ALL columns need a value.
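The difference between the two INSERT shapes can be sketched outside Room (Python's sqlite3 used for illustration, with a cut-down food table):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE food (id INTEGER PRIMARY KEY AUTOINCREMENT, amount REAL)")

# An explicit 0 for the key (as with a primitive int): the second insert conflicts
db.execute("INSERT INTO food VALUES (0, 1.0)")
try:
    db.execute("INSERT INTO food VALUES (0, 2.0)")
    conflict = False
except sqlite3.IntegrityError:
    # UNIQUE constraint failed: food.id
    conflict = True

# Omitting the key column entirely: SQLite assigns the value itself
db.execute("INSERT INTO food (amount) VALUES (2.0)")
ids = [row[0] for row in db.execute("SELECT id FROM food ORDER BY id")]

print(conflict, ids)  # → True [0, 1]
```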
Possible Fix
If you instead used
interface BaseEntity {
val id: Any?
}
with :-
@PrimaryKey
override val id: Int? = null,
generated java has :-
_db.execSQL("CREATE TABLE IF NOT EXISTS food (id INTEGER, amount REAL NOT NULL, date INTEGER NOT NULL, meal TEXT NOT NULL, PRIMARY KEY(id))");
or :-
@PrimaryKey(autoGenerate = true)
override val id: Int? = null,
generated java has :-
_db.execSQL("CREATE TABLE IF NOT EXISTS food (id INTEGER PRIMARY KEY AUTOINCREMENT, amount REAL NOT NULL, date INTEGER NOT NULL, meal TEXT NOT NULL)");
then no attempt is made to insert the id value and SQLite assigns the value.
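The same can be seen at the SQLite level: binding NULL for an INTEGER PRIMARY KEY column makes SQLite pick the value (Python's sqlite3 used for illustration, with a trimmed version of the generated schema above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Trimmed version of the generated schema shown above
db.execute("CREATE TABLE food (id INTEGER, amount REAL NOT NULL, PRIMARY KEY(id))")

# NULL for an INTEGER PRIMARY KEY means "let SQLite assign the rowid"
cur = db.execute("INSERT INTO food (id, amount) VALUES (NULL, 1.5)")
new_id = cur.lastrowid

print(new_id)  # → 1
```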
Alternative Fix
If you use the original BaseEntity then you could insert using a Query where you exclude the id column and thus allow it to be generated.
e.g. you could have :-
@Query("INSERT INTO food (amount,date,meal) VALUES(:amount,:date,:meal)")
fun insertFE(amount: Float, date: Long, meal: String): Long
but that doesn't insert using a FoodEntry object, so you could then have (dependent upon the above) :-
fun insertFE(foodEntry: FoodEntry): Long {
    return insertFE(foodEntry.amount, foodEntry.date, your_type_converter(foodEntry.meal))
}
Obviously your_type_converter would be changed accordingly.
About autoGenerate (AUTOINCREMENT in SQLite)
autoGenerate = true does not noticeably affect the internally generated value until you reach the maximum allowed value (9223372036854775807). If autoGenerate = true is coded and the last value was 9223372036854775807, the next insert will result in an SQLITE_FULL error and an exception.
If autoGenerate = false is coded, or autoGenerate is not specified, then SQLite will, if 9223372036854775807 has been assigned (and the row still exists), attempt to allocate an unassigned value between 1 and 9223372036854775807, which would likely succeed as it's basically impossible to have that many rows.
Note that if any row has been assigned a negative value, then the range is extended.
Of course, specifying Int or int imposes a much lower limit outside of SQLite. Really, ids should be Long or long.
autoGenerate = true means that the AUTOINCREMENT keyword is included in the column definition. This is a constraint saying that when SQLite determines the value, it must be greater than any value that exists or has ever been used.
To accomplish this, AUTOINCREMENT uses an internal table, namely sqlite_sequence, to store the last assigned value. Having the additional table, and having to access and maintain it, incurs overhead.
The SQLite documentation (https://sqlite.org/autoinc.html) has as its first sentence: The AUTOINCREMENT keyword imposes extra CPU, memory, disk space, and disk I/O overhead and should be avoided if not strictly needed. It is usually not needed.
So in my application, when the user clicks add on something, I should create an Entity A to carry the values the user provides. This Entity A has an auto-incremented primary key, and along the way of constructing Entity A there are other entities that carry the key of Entity A as a foreign key, as well as part of their composite key.
My problem is that Room prevents me from creating the other entities without providing the key of Entity A in their constructor, annotating it with @NonNull, as it's part of their composite key and cannot be null.
Now I don't know how to approach this problem.
Was it a mistake from the beginning to work with my entities as custom classes throughout my application, and should I separate entities from custom classes (though they would have the same fields)?
Whenever the user clicks the add option, should I just insert an empty entity/row to get an auto-generated key so I can create the other entities along the way?
Please tell me your thoughts, as this is my first time working with a database embedded in an application, so I don't know what I should do.
this Entity A has an auto-incremented primary key
AUTOINCREMENT (in Room, autoGenerate = true as part of the @PrimaryKey annotation) does not actually result in auto generation. Rather, it is a constraint rule that forces the next automatically generated rowid to be greater than any that exist or have existed (for that table).
Without AUTOINCREMENT, if the column is INTEGER PRIMARY KEY (or implied via a table-level definition of such a column as PRIMARY KEY), then the column is made an alias of the always-existing rowid (except for the rarely used WITHOUT ROWID table, which cannot be created in Room via entities; there is no annotation for such a table).
The rowid is always unique, always automatically generated, and will typically be greater anyway (typically 1 greater). It is only when the maximum (the 9223372036854775807th rowid) is reached, unless purposefully manipulated, that AUTOINCREMENT comes into play. In that case, with AUTOINCREMENT you get an SQLITE_FULL exception; without it, SQLite will try to find a lower unused/free rowid.
Due to these unnecessary overheads, I personally never use autoGenerate = true.
What AUTOINCREMENT does is maintain a system table, sqlite_sequence, with a row per table that has AUTOINCREMENT, where it stores the highest allocated rowid for that table. With AUTOINCREMENT it then uses the higher of the sqlite_sequence value and the highest rowid value and adds 1 (without it, it just uses the highest rowid and adds 1).
Was it a mistake from the beginning to work with my entities as custom classes throughout my application, and should I separate entities from custom classes?
There should be no need to have separate classes; an Entity can be used as a stand-alone class, with the Room annotations simply being ignored.
Whenever the user clicks the add option, should I just insert an empty entity/row to get an auto-generated key so I can create the other entities along the way?
It is very easy to get the generated key: @Insert for a single insert returns the key (id) as a Long, so a @Dao function such as abstract fun insert(entityA: EntityA): Long (long in Java) returns the key, or -1 if the insert did not insert a row.
If you use the list/varargs form of @Insert, it returns an array of Longs, each element being the key (id) of the corresponding insert, or -1.
So, considering what I believe is your issue, consider the following 3 entities (note: in Java, use Long rather than long for the key, as primitives can't be null).
@Entity
data class EntityA(
    @PrimaryKey
    var entityAKey: Long? = null,
    var otherAdata: String
)
No AUTOINCREMENT via autoGenerate = true.
No @NonNull annotations.
then :-
@Entity
data class EntityB(
    @PrimaryKey
    var entityBKey: Long? = null,
    var otherBdata: String
)
and :-
@Entity(
    primaryKeys = ["entityBRef", "entityARef", "otherPartOfPrimaryKey"]
)
data class EntityC(
    var entityBRef: Long,
    var entityARef: Long,
    var otherPartOfPrimaryKey: Long,
    var otherCData: String
)
Add some DAO functions :-
@Insert
abstract fun insert(entityA: EntityA): Long
@Insert
abstract fun insert(entityB: EntityB): Long
@Insert
abstract fun insert(entityC: EntityC): Long
NOTE the Long return value (it must be Long; it doesn't compile with Int). Generated keys should always be Long anyway, as they can exceed what an Int can hold.
Finally consider :-
db = TheDatabase.getInstance(this)
dao = db.getDao()
var myfirstA = EntityA(otherAdata = "First")
var myfirstB = EntityB(otherBdata = "The first B")
var a1 = dao.insert(myfirstA)
var b1 = dao.insert(myfirstB)
dao.insert(EntityC(b1,a1,100L,"The first C using id's from the first A and the first B"))
run on the main thread via allowMainThreadQueries()
And the database :-
You could even do :-
dao.insert(EntityC(
dao.insert(EntityB(otherBdata = "Second B")),
dao.insert(EntityA(otherAdata = "A's second")),
200,
"blah")
)
Obviously this would likely be of limited use, as you'd need to know the values up front.
And the result is :-
Database snapshots obtained via Android studio's App Inspector (formerly Database Inspector).
You could also do/use :-
var my3rdA = EntityA(otherAdata = "3rd")
my3rdA.entityAKey = dao.insert(my3rdA)
Of course whenever you extract from the database then the object will include the key (id) (unless you purposefully chose not to).
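Under the hood, the key returned by @Insert is SQLite's last_insert_rowid(). The same capture-then-reuse pattern, sketched outside Room with Python's sqlite3 (table shapes loosely mirror the entities above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE EntityA (entityAKey INTEGER PRIMARY KEY, otherAdata TEXT)")
db.execute("CREATE TABLE EntityC (entityARef INTEGER NOT NULL, "
           "otherPartOfPrimaryKey INTEGER NOT NULL, otherCData TEXT, "
           "PRIMARY KEY (entityARef, otherPartOfPrimaryKey))")

# Insert the parent and capture the generated key, as @Insert returning Long does
a1 = db.execute("INSERT INTO EntityA (otherAdata) VALUES ('First')").lastrowid
print(a1)  # → 1

# Reuse the captured key inside the child's composite primary key
db.execute("INSERT INTO EntityC VALUES (?, 100, 'first C')", (a1,))
```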
In my Android application I use Room library for persistency.
Assuming, I have an Entity defined like this:
@Entity(tableName = "my_entity")
public class MyEntity {
    @ColumnInfo(name = "id")
    @PrimaryKey(autoGenerate = true)
    private int id;
    //...
}
Can I rely on the fact that the id will increase monotonically, i.e. that a newly inserted row's id will always be higher than the ids of all previously created rows?
I think it is unlikely, but I can imagine that Room (or SQLite; I am not sure which is responsible in this case) could, e.g., try to reuse the IDs of previously deleted rows...
As far as I can see, the official documentation for PrimaryKey.autoGenerate() does not say anything about it.
This answer is the expanded comment from JensV.
As suggested by JensV, the generated schema json file contains (among others):
"createSql": "CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`id` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, ... <other fields>)"
So, looking at the SQLite docs for AUTOINCREMENT, we see that it is guaranteed to be monotonic.
In fact, this flag serves exactly this purpose: to ensure that the generated value is monotonic (without the flag, the value will still be generated to be unique, but not necessarily monotonically). Given that Room uses the flag, it is strange that they don't mention it in the documentation.
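The difference is easy to demonstrate at the SQLite level (Python's sqlite3 used for illustration): delete the newest row and insert again; with AUTOINCREMENT the id is never reused, without it the id can be.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE with_ai (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
db.execute("CREATE TABLE without_ai (id INTEGER PRIMARY KEY, v TEXT)")

for table in ("with_ai", "without_ai"):
    for v in ("a", "b", "c"):                            # rows get ids 1, 2, 3
        db.execute(f"INSERT INTO {table} (v) VALUES (?)", (v,))
    db.execute(f"DELETE FROM {table} WHERE id = 3")      # delete the newest row
    db.execute(f"INSERT INTO {table} (v) VALUES ('d')")  # insert again

max_with = db.execute("SELECT max(id) FROM with_ai").fetchone()[0]
max_without = db.execute("SELECT max(id) FROM without_ai").fetchone()[0]

print(max_with, max_without)  # → 4 3  (monotonic vs. id 3 reused)
```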
I'm building an Android application based on an old Android project.
In my new application I'm using Room. I have to use the same database that is used in the first project.
Furthermore, I've extracted the database from the first project using com.amitshekhar.android:debug-db library.
After obtaining the database file, I would like to open it with Room.
I am building database like this:
Room.databaseBuilder(
androidContext(),
Database::class.java, "database.db"
).createFromAsset("database.db")
.build()
Currently I'm using this createFromAsset() method, although later I would use the createFromFile() method, as my database should be downloaded from the server.
But I'm getting the java.lang.IllegalStateException: Pre-packaged database has an invalid schema
This happens because there are several datatypes in the database that are not supported in Room such as NVARCHAR(200), DATE or bit.
I'm aware that Room supports only a handful of SQLite type affinities, but I do not know how to change this so that Room can open this kind of database using the above-mentioned methods.
The problem is how to convert NVARCHAR(200), DATE or bit into datatypes that are supported by Room?
You have to convert the database to use specific column type affinities that are supported by Room and that match the entities.
For NVARCHAR(200), you need TEXT to replace NVARCHAR(200), with the Entity defining the column as a String.
For DATE it depends upon the Entity definition: if you are using String-based dates, e.g. YYYY-MM-DD hh:mm:ss, then the Entity field should be a String and the column affinity TEXT. If storing the date as a timestamp, then the Entity field should be a long and the column affinity INTEGER.
The answer at Can't migrate a table to Room due to an error with the way booleans are saved in Sqlite does a conversion to change BOOLs to INTEGER.
You could adapt this (although I would be cautious with DATE) to suit.
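Since SQLite cannot alter a column's declared type, the usual recipe is: create a new table with Room-compatible affinities, copy the data, drop the old table, and rename. A rough sketch with a hypothetical person table (Python's sqlite3 used only to run the SQL):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical pre-existing table with affinities Room rejects
db.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name NVARCHAR(200), "
           "born DATE, active bit)")
db.execute("INSERT INTO person VALUES (1, 'Alice', '2020-01-02', 1)")

db.executescript("""
    CREATE TABLE person_new (
        id INTEGER PRIMARY KEY,
        name TEXT,              -- was NVARCHAR(200): String in the entity
        born TEXT,              -- was DATE: String-based dates in the entity
        active INTEGER NOT NULL -- was bit: Boolean in the entity
    );
    INSERT INTO person_new SELECT id, name, born, active FROM person;
    DROP TABLE person;
    ALTER TABLE person_new RENAME TO person;
""")

row = db.execute("SELECT name, born, active FROM person").fetchone()
print(row)  # → ('Alice', '2020-01-02', 1)
```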
Additional
You may find the following to be of use. You run it against the pre-existing database in your favourite SQLite Manager tool.
WITH potentialRoomChanges AS (
SELECT sm.name AS tablename, pti.name AS columnname, pti.type, dflt_value, pk,
CASE
WHEN instr(upper(pti.type),'INT') THEN 'INTEGER'
WHEN instr(upper(pti.type),'CHAR') OR instr(upper(pti.type),'CLOB') OR instr(upper(pti.type),'TEXT') THEN 'TEXT'
WHEN instr(upper(pti.type),'BLOB') THEN 'BLOB'
WHEN instr(upper(pti.type),'REAL') OR instr(upper(pti.type),'FLOA') OR instr(upper(pti.type),'DOUB') THEN 'REAL'
ELSE 'NUMERIC'
END AS roomtype ,
CASE WHEN pti.[notnull] THEN 'Investigate NOT NULL USE' END AS nnindicator,
sql
FROM sqlite_master AS sm JOIN pragma_table_info(sm.name) AS pti
WHERE
sm.type = 'table'
AND sm.name NOT LIKE 'sqlite_%'
AND sm.name <> 'android_metadata'
AND (
upper(pti.type) <> roomtype
OR instr(roomtype,'NUMERIC')
OR nnindicator IS NOT NULL
OR dflt_value IS NOT NULL
OR pk > 0
)
ORDER BY sm.name,pti.cid
)
SELECT tablename, columnname, type, roomtype,
CASE WHEN upper(type) <> upper(roomtype) THEN 'Investigate TYPE should be ' ||roomtype END AS typechange_notes,
CASE WHEN roomtype = 'NUMERIC' THEN 'Investigate NUMERIC' END AS numeric_notes,
CASE WHEN dflt_value IS NOT NULL THEN 'Investigate DEFAULT VALUE of '||dflt_value END AS default_notes,
CASE WHEN pk > 0 THEN 'Investigate PRIMARY KEY inclusion' END AS primarykey_notes,
nnindicator AS notnull_notes
FROM potentialRoomChanges
;
Example output :-
Hopefully the columns/text are self-explanatory. This is based upon the declared column types (which may differ from the type actually used). E.g. FLOATING POINT (5th row shown) you would think would be REAL; however, according to the derived type affinity, the first rule (if the type includes INT, it is INTEGER) has been applied.
Rules as per Datatypes In SQLite Version 3 - 3.1. Determination Of Column Affinity.
From my limited experience with Room, NUMERIC isn't an affinity that it uses, so it should always be changed to one of the other types.
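The first-match rules can produce surprising affinities, which is easy to verify (Python's sqlite3 used for illustration; the column types are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# 'FLOATING POINT' contains 'INT', so rule 1 gives it INTEGER affinity, not REAL;
# 'VARCHAR(10)' contains 'CHAR', so TEXT; 'DATE' matches no rule, so NUMERIC
db.execute("CREATE TABLE t (a FLOATING POINT, b VARCHAR(10), c DATE)")
db.execute("INSERT INTO t VALUES ('3.0', 42, 'not a number')")

# INTEGER affinity converts '3.0' to 3; TEXT stores 42 as '42';
# NUMERIC keeps the non-numeric string as text
affinities = db.execute("SELECT typeof(a), typeof(b), typeof(c) FROM t").fetchone()
print(affinities)  # → ('integer', 'text', 'text')
```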
Use @NonNull before every field of your POJO (entity) class.
There is no need to add @NonNull to the primary key field.
An example is below:
@Entity(tableName = "station")
public class Station {
    @PrimaryKey
    private int id;
    @NonNull
    private String name;
    @NonNull
    private int line;
    @NonNull
    private double lat;
    @NonNull
    private double lon;
    // ... constructor, getters and setters
}
For me the problem was NOT NULL: every column declared NOT NULL in the database must be reflected in your model with @NonNull.
If the LastName column in the database is
"LastName" TEXT NOT NULL,
then in your model it should be
@NonNull
private String LastName;
In my case, the error was solved by making the name parameter different from the asset database's file name. In this code, I set the name parameter to "my_db".
fun getInstance(context: Context): AppDatabase {
    if (INSTANCE == null) {
        INSTANCE = Room.databaseBuilder(
            context,
            AppDatabase::class.java,
            "my_db" // this parameter
        )
            .allowMainThreadQueries()
            .createFromAsset("database/harry_potter.db")
            .build()
    }
    return INSTANCE!!
}
This should be resolvable with ColumnInfo.typeAffinity. Unfortunately, there's an open issue relating to typeAffinity and schema validation:
ColumnInfo type affinity is being suppressed by column adapters.
Maybe go there and hit the "+" so this issue gets some attention.
How can I migrate a primary key field, which was not set to Auto generate before?
From
@PrimaryKey
private int id;
To
@PrimaryKey(autoGenerate = true)
private int id;
Since SQLite does not support altering columns, my only guess is to recreate the whole table as-is and reset the constraints.
Do I even have to migrate the database during the development process, or can I just rebuild it? My database will change rapidly, so I'd rather not write a migration every time.
I suggest you change your approach: add a unique identifier (UID) as an alternative way to identify records.
You can define a UID with the @Entity annotation on your POJO.
@Entity(indices = {@Index(value = "uid", unique = true)})
public class Pojo {
    // ...
    public String uid;
    // ...
}
When you insert a record into your database, you can set the uid field using:
String uuid = UUID.randomUUID().toString();
You can use the UUID field to identify your records in an absolute way. When you migrate from one version to another, you don't have to work with the old ids; you can always work with the UID.