Is PrimaryKey's autoGenerate exactly equivalent to SQLite's AUTOINCREMENT?

Is marking a primary key with @PrimaryKey(autoGenerate = true) exactly the same as if you had used PRIMARY KEY AUTOINCREMENT in an SQL statement?
Intuition tells me yes, but documentation seems to suggest no.
Room javadoc states:
Set to true to let SQLite generate the unique id.
as if setting it to false will prevent SQLite from generating the key.
But SQLite documentation for AUTOINCREMENT states that SQLite always generates a currently-unique key if none is given when doing an INSERT, and that AUTOINCREMENT merely adds the additional behavior that SQLite will never allow an automatically generated key to overlap with any previously deleted row.
The SQLite documentation also recommends not using AUTOINCREMENT if it isn't needed (for performance reasons), and states that it is usually not needed. From the description, that seems to match my case. My table will be fine if a previously deleted row ID gets reused.

Is marking a primary key with @PrimaryKey(autoGenerate = true) exactly the same as if you had used PRIMARY KEY AUTOINCREMENT in an SQL statement?
Yes, as using autoGenerate=true adds the AUTOINCREMENT keyword.
But the implication that
setting it to false will prevent SQLite from generating the key
is false.
If a class is:-
- annotated with @Entity, and
- the column/variable/member is annotated with @PrimaryKey, and
- the type resolves to an integer type (byte through double, primitive or object, e.g. Long),
then the value can be generated. It is INTEGER PRIMARY KEY that makes the column a special column whose value can be generated, as that column is then an alias of the rowid (a normally hidden column).
AUTOINCREMENT is only applicable to aliases of the rowid (i.e. INTEGER PRIMARY KEY). It does not determine whether the value can be generated (in the absence of a value for the column or when the value is null).
What AUTOINCREMENT does is add an additional rule when generating the value. That rule being that the value MUST be higher than any ever used for that table.
There are subtle differences.
Without AUTOINCREMENT:
- deleting the row with the highest value frees that value for subsequent use (the generated value is still higher than any other value that exists at that time), and
- should the highest value (9223372036854775807) be reached, SQLite will try to find a free lower value, and
- lastly, it is possible to double the range of values by using negative values.
With AUTOINCREMENT:
- deleting the row with the highest value does not free that value for subsequent use,
- should the highest value (9223372036854775807) be reached, subsequent attempts to insert with a generated value will fail with an SQLITE_FULL error. If you insert 1 row with a value of 9223372036854775807 then that's the only row that can be inserted,
- negative values cannot be generated (they can still be used explicitly), and
- an additional table is required (sqlite_sequence), which is automatically created by SQLite, with a row per AUTOINCREMENT table holding the highest used value. Whenever a value is to be generated the respective row must be retrieved to obtain the value, and after insertion the value has to be updated. As such there are overheads associated with using AUTOINCREMENT.
Note the above assumes that SQLite's in-built handling is not circumvented (such as by updating values in the sqlite_sequence table).
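These differences can be observed at the SQLite level itself, outside of Room. A minimal Python sqlite3 sketch (the table names no_ai and with_ai are made up for this demo) showing the deleted-highest-value behaviour and the sqlite_sequence table:

```python
import sqlite3

# In-memory database; one table without AUTOINCREMENT, one with
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE no_ai (id INTEGER PRIMARY KEY, other TEXT)")
db.execute("CREATE TABLE with_ai (id INTEGER PRIMARY KEY AUTOINCREMENT, other TEXT)")

for table in ("no_ai", "with_ai"):
    db.execute(f"INSERT INTO {table} (other) VALUES ('A'), ('B')")  # generated ids 1, 2
    db.execute(f"DELETE FROM {table} WHERE id = 2")                 # delete the highest
    db.execute(f"INSERT INTO {table} (other) VALUES ('C')")         # generate again

print(db.execute("SELECT id FROM no_ai WHERE other='C'").fetchone()[0])    # 2: freed value reused
print(db.execute("SELECT id FROM with_ai WHERE other='C'").fetchone()[0])  # 3: never reused

# sqlite_sequence only tracks tables declared with AUTOINCREMENT
print(db.execute("SELECT name, seq FROM sqlite_sequence").fetchall())      # [('with_ai', 3)]
```

Note that only with_ai appears in sqlite_sequence, and its seq column holds the highest value ever allocated, which is why the deleted id 2 cannot be regenerated.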
I would always advocate not using autoGenerate=true, e.g.
@PrimaryKey
Long id_column=null;
or
@PrimaryKey
var id_column: Long?=null
thus an @Insert (convenience insert) will generate the value if none is given for the id_column.
Demo
Consider the following two @Entity annotated classes (with and without autoGenerate=true) :-
AutoInc:-
@Entity
data class AutoInc(
    @PrimaryKey(autoGenerate = true)
    val id: Long?=null,
    val other: String
)
NoAutoInc:-
@Entity
data class NoAutoInc(
    @PrimaryKey
    var id: Long?=null,
    var other:String
)
Room (after compiling and looking at the generated java in the class whose name is the @Database annotated class's name suffixed with _Impl) has the following in the createAllTables method/function:-
_db.execSQL("CREATE TABLE IF NOT EXISTS `AutoInc` (`id` INTEGER PRIMARY KEY AUTOINCREMENT, `other` TEXT NOT NULL)");
_db.execSQL("CREATE TABLE IF NOT EXISTS `NoAutoInc` (`id` INTEGER, `other` TEXT NOT NULL, PRIMARY KEY(`id`))");
i.e. the only difference is the AUTOINCREMENT keyword.
Then consider the following code :-
/* Typical where the id will be generated */
dao.insert(AutoInc(other = "A"))
dao.insert(AutoInc(null,other = "B"))
dao.insert(NoAutoInc(other ="A"))
dao.insert(NoAutoInc(null, other = "B"))
/* Beware */
/* Room interprets types in different ways:
   here 0 is stored as 0 because id is an object type (Long?);
   if the field were a Java primitive long, then 0 would result in a generated id.
   Getters/setters are taken into consideration when determining the type.
*/
dao.insert(AutoInc(0,other = "W"))
dao.insert(NoAutoInc(0,other ="W"))
/* Unusual */
dao.insert(AutoInc(-100,"X"))
dao.insert(NoAutoInc(-100,other ="X"))
dao.insert(AutoInc(9223372036854775807,"Y")) /* The maximum value for an id */
dao.insert(NoAutoInc(9223372036854775807,"Y")) /* The maximum value for an id */
When run then the tables (via Android Studio's App Inspection) are:-
AutoInc:-
Note the Z row (a subsequent insert with a generated id) has not been added due to :-
E/SQLiteLog: (13) statement aborts at 4: [INSERT OR ABORT INTO `AutoInc` (`id`,`other`) VALUES (?,?)] database or disk is full
However, the disk is by no means full, as Disk Explorer shows (and of course the subsequent step works, inserting a row into the database):-
and
NoAutoInc:-
Here the Z row has been added with a generated id: SQLite found an unused value because the highest allowable id had already been reached, as opposed to failing with the disk/table full error.

Related

How to get the next auto-increment id in Android room?

Here is my room entity object:
@Entity(tableName = "user_account", indices = [Index(value = ["user_name", "user_type"], unique = true)])
data class DataUserAccountEntity(
    @PrimaryKey(autoGenerate = true) @ColumnInfo(name = "auto_id") val autoId: Int,
    @NonNull @ColumnInfo(name = "user_name") val userName: String,
    @NonNull @ColumnInfo(name = "user_photo") val userPhoto: String,
    @NonNull @ColumnInfo(name = "user_type") val userType: Int,
)
Here is my Dao entity object:
@Dao
interface DataUserAccountDao {
    @Query("SELECT * FROM user_account WHERE auto_id = :autoId LIMIT 1")
    fun getUserAccount(autoId: Int): DataUserAccountEntity
    @Query("SELECT * FROM user_account ORDER BY auto_id ASC")
    fun getAllUserAccounts(): List<DataUserAccountEntity>
}
Since auto_id is set to @PrimaryKey(autoGenerate = true), how would I query Room for the next value?
(i.e. I am looking for the auto_id that would be generated if I insert a new row into the local database right now)
Although I appreciate the response, this does not solve my problem. I need the number BEFORE insertion.
If autoGenerate=true is coded then you can use:-
@Query("SELECT seq+1 FROM sqlite_sequence WHERE name=:tableName")
fun getNextRowidFromTable(tableName: String): Long
HOWEVER, there is no guarantee that the next allocated value will be 1 greater than the last and thus the value obtained from the query. As per:-
The behavior implemented by the AUTOINCREMENT keyword is subtly different from the default behavior. With AUTOINCREMENT, rows with automatically selected ROWIDs are guaranteed to have ROWIDs that have never been used before by the same table in the same database. And the automatically generated ROWIDs are guaranteed to be monotonically increasing.
and
Note that "monotonically increasing" does not imply that the ROWID always increases by exactly one. One is the usual increment. However, if an insert fails due to (for example) a uniqueness constraint, the ROWID of the failed insertion attempt might not be reused on subsequent inserts, resulting in gaps in the ROWID sequence. AUTOINCREMENT guarantees that automatically chosen ROWIDs will be increasing but not that they will be sequential.
What coding autoGenerate=true does is include the AUTOINCREMENT keyword. This doesn't actually cause auto generation; rather, for every table (using Room at least) a value is generated and placed into a hidden column, rowid. If a column is specified with a type of INTEGER and the column is the PRIMARY KEY (not part of a composite primary key) then that column is an alias of the rowid. If such a column has a value specified when inserting the row then that value (as long as it is unique) is assigned to the column (and therefore to the rowid).
AUTOINCREMENT is a constraint (rule) that enforces the use of a value higher than any that have been assigned (even if such rows are deleted).
AUTOINCREMENT handles this subtle difference by using the sqlite_sequence table to store the highest assigned rowid (or alias thereof), updating the value so it is always the highest. The sqlite_sequence table will not exist if AUTOINCREMENT (aka autoGenerate=true) is not coded in any of the @Entity annotated classes (which are passed to the @Database annotated class via the entities parameter of the annotation).
You may wish to refer to https://www.sqlite.org/autoinc.html
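As a plain-SQLite illustration of that query (Python sqlite3 here rather than Room; the table mirrors the question's user_account):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE user_account (auto_id INTEGER PRIMARY KEY AUTOINCREMENT, user_name TEXT)"
)
db.execute("INSERT INTO user_account (user_name) VALUES ('alice')")
db.execute("INSERT INTO user_account (user_name) VALUES ('bob')")
db.execute("DELETE FROM user_account WHERE user_name = 'bob'")  # seq is NOT reduced

# The query from the answer: seq holds the highest auto_id ever allocated
next_id = db.execute(
    "SELECT seq + 1 FROM sqlite_sequence WHERE name = ?", ("user_account",)
).fetchone()[0]
print(next_id)  # 3, even though only the row with auto_id 1 remains
```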
For a solution that is less likely to result in missed sequence numbers you could instead not use AUTOINCREMENT aka autoGenerate=true. This does mean another subtle change to cater for the auto generation: making auto_id nullable with a default value of null.
e.g.
@Entity(tableName = "user_account", indices = [Index(value = ["user_name", "user_type"], unique = true)])
data class DataUserAccountEntity(
    @PrimaryKey/*(autoGenerate = true)*/ @ColumnInfo(name = "auto_id") val autoId: Int?=null /*<<<<< CHANGED*/,
    @NonNull @ColumnInfo(name = "user_name") val userName: String,
    @NonNull @ColumnInfo(name = "user_photo") val userPhoto: String,
    @NonNull @ColumnInfo(name = "user_type") val userType: Int,
)
As sqlite_sequence will not exist or not have a row for this table then you cannot use it to ascertain the next auto_id value.
So you could have:-
@Query("SELECT COALESCE(max(auto_id),0)+1 FROM user_account")
fun getNextAutoId(): Long
This would work even if there were no rows, due to the COALESCE function changing null into 0, and would return 1.
Even so, there is still no guarantee that the value will be in sequence. However, it is more likely and predictable than if using AUTOINCREMENT, as the issue with AUTOINCREMENT is due to sqlite_sequence being updated but the row then not being inserted (rolled back).
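A quick plain-SQLite check of that query (Python sqlite3, same table shape as the answer):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user_account (auto_id INTEGER PRIMARY KEY, user_name TEXT)")

def get_next_auto_id():
    # The query from the answer; COALESCE turns the NULL from max() on an
    # empty table into 0, so an empty table yields 1
    return db.execute(
        "SELECT COALESCE(max(auto_id),0)+1 FROM user_account"
    ).fetchone()[0]

empty_next = get_next_auto_id()   # 1 on the empty table
db.execute("INSERT INTO user_account (user_name) VALUES ('alice')")
db.execute("INSERT INTO user_account (user_name) VALUES ('bob')")
print(empty_next, get_next_auto_id())  # 1 3
```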
However, IF the sequence number reaches the value of 9223372036854775807, then instead of the SQLITE_FULL error that would happen with AUTOINCREMENT (it cannot break the rule and cannot have a larger value), SQLite will try to find an unused (and therefore lower) value (unless getting even deeper and using negative values).
You could mimic sqlite_sequence by defining a table with two columns (only one is strictly needed, but two, name and seq, would cater for other tables). You could complement this with a TRIGGER so that an INSERT automatically sets the new value (though this is prone to misuse). Room doesn't support TRIGGERs but doesn't complain if you include them (e.g. via a callback).
Saying all that, when it boils down to it, the intended purpose of the rowid (or an alias thereof) is the unique identification of a row. SQLite has been written with this in mind (locating a row via the rowid can be up to twice as fast, as the rowid can be considered a super/optimised index). Other uses of the rowid or its alias will always have some potential issues.
As such it is not recommended to use them for anything other than their intended purpose.
You can get the id of the last saved record in the Room database:-
@Query("SELECT auto_id FROM user_account ORDER BY auto_id DESC LIMIT 1")
fun getLastUserAccount(): Long
This will return the last row id. Suppose you have 5 records with ids 1 to 5; it will return 5.
Now increment the returned id to get the new one.
And verify after inserting:-
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insertCountry(dataUserAccountEntity: DataUserAccountEntity): Long
Long is the return type: -1 means the operation failed, otherwise it is the auto-generated id of the new record.

How to insert only 3 columns data in room database table if we have more columns?

I have a project that I wrote in kotlin. I want to insert data in different columns of the same table on different pages. I have specified these columns in the dataclass, but it gives a null data error.
In order to make this insert process more healthy, should I divide the table into two separate tables or send static 'null' data and update these fields?
In a database, such as SQLite (which Room is a wrapper around), the unit of insertion is a row.
A row consists of the same number of columns as every other row. You cannot insert a column on its own; a column is only added or removed if you ALTER the table, when the change is reflected throughout the entire table.
If adding a column then a DEFAULT VALUE must be available; this could be the default/implicit value of null or another specific value.
Room with Kotlin will apply a constraint (rule) of NOT NULL unless nulls are specifically allowed, for example using ?:
- var theColumn: Int has the implicit NOT NULL,
- var theColumn: Int? does not have the implicit NOT NULL, and nulls can be stored,
- var theColumn: Int=-1 will apply a default value of -1 if the field is not supplied a value when instantiating the object,
- var theColumn: Int?=null will apply null if the field is not supplied a value when instantiating the object,
- obviously fields may be altered before inserting the object, if var rather than val is used.
The data stored in the column can be interpreted to represent whatever you wish, often NULL will be used to represent a special situation such as no value.
If using an @Insert annotated function, then ALL columns are assigned the values obtained from the object or objects passed to the function. In Kotlin, whether or not NULLs can be used depends upon the field definition, or in some cases the @NonNull annotation.
@Insert indicates what is termed a convenience method: it builds the underlying SQL and binds the values using the SQLite API.
However, if you want flexibility, then an @Query annotation with a suitable INSERT SQL statement can be used.
e.g. you could perhaps have a table that has 4 columns, COL1, COL2, COL3 and COL4, and only supply some of the columns (the DEFAULT VALUE, if specified, will be applied to the other columns; if not, NULL will be applied, but if there is a NOT NULL constraint then a conflict will be raised).
So to insert a row supplying only two of the columns (COL2 and COL4), you could use:-
@Query("INSERT INTO theTable (COL2,COL4) VALUES(:valueForCol2,:valueForCol4)")
fun insertCol2AndCol4Only(valueForCol2: Int, valueForCol4: Int?)
Note that valueForCol4 could be NULL. However, whether or not a NULL will result in a conflict depends upon how the field is defined in the @Entity annotated class.
Conflicts (breaking a rule) can be handled by SQLite, depending upon the type of the conflict. UNIQUE, PRIMARY KEY (which is really a UNIQUE conflict), CHECK (Room doesn't cater for CHECK constraints) and NOT NULL constraints can be handled in various ways at the SQLite level.
A common use of conflict handling is to IGNORE the conflict, in which case the action (INSERT or UPDATE) is ignored. In the case of INSERT the row is not inserted but SQLite ignores the conflict and doesn't issue an error.
So if, for example, COL4's field was var COL4: Int and not var COL4: Int?, then the insert would fail and an SQLite exception would occur.
However if instead
#Query("INSERT OR IGNORE INTO theTable (COL2,COL4) VALUES(:valueForCol2,:valueForCol4)")
were used, and the COL4 field were defined as var COL4: Int (implied NOT NULL constraint), then if NULL were passed as valueForCol4 the row would not be inserted, but no failure would occur, as the NOT NULL conflict would be ignored.
With the @Insert annotation you can define this conflict handling via the onConflict parameter, e.g. @Insert(onConflict = OnConflictStrategy.IGNORE).
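The difference between a plain INSERT and INSERT OR IGNORE for a NOT NULL conflict can be seen with plain SQLite (Python sqlite3 here rather than Room; theTable matches the example above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE theTable (COL1 INTEGER, COL2 INTEGER, COL3 INTEGER, COL4 INTEGER NOT NULL)"
)

# Plain INSERT: NULL for COL4 breaks the NOT NULL rule and raises an error
not_null_conflict = False
try:
    db.execute("INSERT INTO theTable (COL2,COL4) VALUES (?,?)", (1, None))
except sqlite3.IntegrityError:
    not_null_conflict = True

# INSERT OR IGNORE: the same conflict is swallowed and the row is skipped
db.execute("INSERT OR IGNORE INTO theTable (COL2,COL4) VALUES (?,?)", (1, None))
print(not_null_conflict, db.execute("SELECT count(*) FROM theTable").fetchone()[0])  # True 0
```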
You may wish to consider reading the following:-
The On Conflict Clause
INSERT
In order to make this insert process more healthy, should I divide the table into two separate tables or send static 'null' data and update these fields?
Note the above is only a summary; INTEGER PRIMARY KEY, aka @PrimaryKey var id: Long?=null, or variations such as @PrimaryKey(autoGenerate=true) etc., have specifically not been discussed.
The design of the database could be handled either way, from the very limited description of the scenario, a most likely suitable scenario cannot really be advised, although either could probably be an approach.
Additional
Based upon the comment:-
For example, I'm going to add the features of a car to the database, but it could be a different type at a time. So on the first page, the type of car will be chosen, like off road, sedan, 4x4, hatchback.
Then perhaps consider having a feature table and a mapping table for a many-many relationship between a car and its features, as per my response:-
I would suggest that features be a table and with a many-many relationship with the car. That is a car could have a 0-n features and a feature could be used by 0-n cars. The many-many relationship would require a third table known by many terms such as an associative table/reference table/ mapping table. Such a table has 2 core columns a column to map to the car and a column to map to the feature, the primary key being a composite of both these columns.
Here's a basic example of how this could work from an SQLite basis:-
DROP INDEX IF EXISTS carFeatureMap_idxon_feature;
DROP TABLE IF EXISTS carFeatureMap;
DROP TABLE IF EXISTS car;
DROP TABLE IF EXISTS feature;
CREATE TABLE IF NOT EXISTS car (
carId INTEGER PRIMARY KEY,
carname TEXT /* and other columns */
);
CREATE TABLE IF NOT EXISTS feature (
featureId INTEGER PRIMARY KEY,
featureDescription TEXT
);
CREATE TABLE IF NOT EXISTS carFeatureMap (
carIdMap INTEGER REFERENCES car(carId) ON DELETE CASCADE ON UPDATE CASCADE,
featureIdMap INTEGER REFERENCES feature(featureId) ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY(carIdMap, featureIdMap)
);
/* Should improve efficiency of mapping from a feature */
CREATE INDEX IF NOT EXISTS carFeatureMap_idxon_feature ON carFeatureMap(featureIdMap);
/* Add some features */
INSERT OR IGNORE INTO feature VALUES(100,'4x4'),(101,'Sedan'),(106,'Convertable'),(null /*<<<< featureId generated by SQLite*/ ,'Hatchback');
/*Report1 Output the features */
SELECT * FROM feature;
/* Add some cars */
INSERT OR IGNORE INTO car VALUES(10,'Car1'),(20,'Car2'),(30,'Car3');
/*Report2 Output the cars */
SELECT * FROM car;
/* add the mappings/relationships/associations between cars and features */
INSERT OR IGNORE INTO carFeatureMap VALUES (10,101) /* Car1 has Sedan */,(10,106) /* Car1 has Convertable */,(20,100) /* Car2 has 4x4 */;
/*Report3 Get the Cars with features cartesian product */
SELECT
car.carName,
featureDescription
FROM car
JOIN carFeatureMap ON car.carId=carFeatureMap.carIdMap
JOIN feature ON featureIdMap=featureId
;
/*Report4 Get the Cars with features with all the features concatenated, i.e. single output per car with features */
SELECT
car.carName,
group_concat(featureDescription,': ') AS allFeatures
FROM car
JOIN carFeatureMap ON car.carId=carFeatureMap.carIdMap
JOIN feature ON featureIdMap=featureId GROUP BY (carId)
;
/*Report5 Similar to the previous BUT if no features then output none so ALL cars are output */
SELECT
carName,
coalesce(
(
SELECT
group_concat(featureDescription)
FROM feature
JOIN carFeatureMap ON carFeatureMap.featureIdMap=featureId AND carFeatureMap.carIdMap=carId
),
'none'
) AS features
FROM car
;
/* Clean Up After Demo*/
DROP INDEX IF EXISTS carFeatureMap_idxon_feature;
DROP TABLE IF EXISTS carFeatureMap;
DROP TABLE IF EXISTS car;
DROP TABLE IF EXISTS feature;
Results from the demo code above
Report1 - The features
Report2 - The cars
Report3 Cars and features
Report 4 Cars and features 2
Report 5 Cars and features 3

get a primary key without inserting empty rows

So in my application, when the user clicks add on something, I should create an Entity A to carry the values which the user provides. This Entity A has an autoincremented primary key, and along the way of constructing Entity A there are other entities that carry the key of Entity A as a foreign key, as well as part of their composite key.
So my problem is that Room prevents me from creating the other entities without providing the key of Entity A in their constructor, annotating it with @NonNull, as it's part of their composite key and it can't be null.
Now I don't know how to approach this problem:
was it a mistake from the beginning to work with my entities as custom classes along my application, and should I separate entities from custom classes (though they would have the same fields)?
whenever the user clicks the add option, should I just push/insert an empty entity/row/tuple to get an autogenerated key so I could create the entities along the way?
Please tell me your thoughts about this, as it's my first time working with a database embedded in an application, so I don't know what I should do regarding it.
this Entity A have an autoincremented-primary-key
AUTOINCREMENT, in Room autoGenerate = true as part of the @PrimaryKey annotation, does not actually result in auto generation. Rather it is a constraint (rule) that forces the next automatically generated rowid to be greater than any that exist or have ever existed (for that table).
Without AUTOINCREMENT if the column is INTEGER PRIMARY KEY (or implied via a table level definition of such a column as PRIMARY KEY) then the column is made an alias of the always existing rowid (except for the rarely used WITHOUT ROWID table (unable to do so in Room via entities, there is no annotation for such a table)).
The rowid is always unique and always automatically generated, and will typically be 1 greater than the highest anyway. It is only (unless purposefully manipulated) when the maximum rowid (9223372036854775807) is reached that AUTOINCREMENT comes into play: with AUTOINCREMENT you get an SQLITE_FULL exception; without it, SQLite will try to find a lower unused/free rowid.
Due to the unnecessary overheads, I personally never use autoGenerate = true.
What AUTOINCREMENT does, is have a system table sqlite_sequence with a row per table that has AUTOINCREMENT where it stores/maintains the highest allocated rowid for the table. With AUTOINCREMENT it then uses the higher of the sqlite_sequence value and the highest rowid value and then adds 1 (without it just uses the highest rowid and adds 1).
was it a mistake from the beginning to work with my entities as custom classes along my application, and should I separate entities from custom classes?
There should be no need to have separate classes; an Entity can be used as a stand-alone class, the Room annotations simply being ignored outside of Room.
whenever the user clicks the add option, should I just push/insert an empty entity/row/tuple to get an autogenerated key so I could create the entities along the way?
It is very easy to get the generated key: @Insert for a single insert returns the key (id) as a Long (long in Java), so the @Dao @Insert abstract fun insert(entityA: EntityA): Long returns the key, or -1 if the insert did not insert a row.
If you use the list/varargs form of @Insert then it returns an array of Longs, each element being the key (id) of the corresponding insert, or -1.
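Room's returned Long is built on SQLite's own last-insert-rowid mechanism. A Python sqlite3 sketch of the same flow (EntityA/EntityC reduced to plain tables for the demo):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE EntityA (entityAKey INTEGER PRIMARY KEY, otherAdata TEXT)")
db.execute("CREATE TABLE EntityC (entityARef INTEGER, otherCData TEXT)")

# Equivalent of @Insert returning Long: lastrowid is the generated key,
# because entityAKey is an alias of the rowid
cur = db.execute("INSERT INTO EntityA (otherAdata) VALUES ('First')")
a1 = cur.lastrowid
print(a1)  # 1

# The returned key can be used immediately as a foreign key value
db.execute("INSERT INTO EntityC (entityARef, otherCData) VALUES (?,?)", (a1, "The first C"))
print(db.execute("SELECT entityARef FROM EntityC").fetchone()[0])  # 1
```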
So, considering what I believe is your issue, consider the following 3 entities (note: if Java, then use Long rather than long for the key, as primitives can't be null).
@Entity
data class EntityA(
    @PrimaryKey
    var entityAKey: Long? = null,
    var otherAdata: String
)
No AUTOINCREMENT via autoGenerate = true.
No @NonNull annotations.
then :-
@Entity
data class EntityB(
    @PrimaryKey
    var entityBKey: Long?= null,
    var otherBdata: String
)
and :-
@Entity(
    primaryKeys = ["entityBRef","entityARef","otherPartOfPrimaryKey"]
)
data class EntityC(
    var entityBRef: Long,
    var entityARef: Long,
    var otherPartOfPrimaryKey: Long,
    var otherCData: String
)
add some Daos :-
@Insert
abstract fun insert(entityA: EntityA): Long
@Insert
abstract fun insert(entityB: EntityB): Long
@Insert
abstract fun insert(entityC: EntityC): Long
NOTE the Long return value (it must be Long; it doesn't compile if Int), and generated keys should always be Long anyway, as they can exceed what an Int can hold.
Finally consider :-
db = TheDatabase.getInstance(this)
dao = db.getDao()
var myfirstA = EntityA(otherAdata = "First")
var myfirstB = EntityB(otherBdata = "The first B")
var a1 = dao.insert(myfirstA)
var b1 = dao.insert(myfirstB)
dao.insert(EntityC(b1,a1,100L,"The first C using id's from the first A and the first B"))
run on the main thread via allowMainThreadQueries()
And the database :-
You could even do :-
dao.insert(EntityC(
dao.insert(EntityB(otherBdata = "Second B")),
dao.insert(EntityA(otherAdata = "A's second")),
200,
"blah")
)
obviously this would likely be of limited use as you'd need to know the values up front.
And the result is :-
Database snapshots obtained via Android Studio's App Inspection (formerly Database Inspector).
You could also do/use :-
var my3rdA = EntityA(otherAdata = "3rd")
my3rdA.entityAKey = dao.insert(my3rdA)
Of course, whenever you extract from the database, the object will include the key (id) (unless you purposefully choose not to include it).

android, room: is autogenerated primary key always monotonic

In my Android application I use Room library for persistency.
Assuming, I have an Entity defined like this:
@Entity(tableName = "my_entity")
public class MyEntity {
    @ColumnInfo(name = "id")
    @PrimaryKey(autoGenerate = true)
    private int id;
    //...
}
can I rely on the fact, that id will be increased monotonically, i.e. that for newly inserted row id will always be higher, than for all previously created rows?
I think, that it is unlikely, but I can imagine, that Room (or SQLite - I am not sure, who is responsible in this case) could e.g. try to reuse the IDs of the previously deleted rows...
As far as I can see, the official documentation for PrimaryKey.autoGenerate() does not say anything about it.
This answer is the expanded comment from JensV.
As suggested by JensV, the generated schema json file contains (among others):
"createSql": "CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`id` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, ... <other fields>)"
So, looking at the SQLite docs for AUTOINCREMENT, we see that it is guaranteed to be monotonic.
In fact, this flag serves exactly this purpose: to ensure that the generated value is monotonic (without this flag, the value will still be generated to be unique, but will not necessarily be monotonic). Taking into account that Room uses the flag, it is strange that they don't mention it in the documentation.

Get/Return the autogenerated id (as primary key) generated by Android Room database

For an android room interface, I want to get the autogenerated id (as primary key of a record just inserted), so that I can put it in the object without executing a select after insert, where the select might return the wrong record if there is no other unique attribute, or set of attributes for those record types.
For example, for 2 people having the same name being inserted into the same table. You might say generate a composite key to make a unique set. However that might involve the addition of new fields that are otherwise not required.
I've seen various links, including those below. Some mention that it is the row id that is returned if the insert method is declared to return integer (or long), and succeeds.
However it is my understanding that the row id cannot be assumed to be the same as the primary key. (Refer Rowid after Insert in Room).
I cannot comment on any posts because I don't have enough reputation points.
I appreciate any comments regarding what might be a good/typical approach to this problem.
These are the posts I have looked upon:
Android Room - Get the id of new inserted row with auto-generate
https://developer.android.com/training/data-storage/room/accessing-data
https://commonsware.com/AndroidArch/previews/the-dao-of-entities
Late answer, just for anyone seeing this question in the future.
From the SQLite docs:
The PRIMARY KEY of a rowid table (if there is one) is usually not the true primary key for the table, in the sense that it is not the unique key used by the underlying B-tree storage engine. The exception to this rule is when the rowid table declares an INTEGER PRIMARY KEY. In the exception, the INTEGER PRIMARY KEY becomes an alias for the rowid.
Therefore it's correct to assume that the rowid returned by the insert is the same as the autoincremented primary key.
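That aliasing is easy to verify with plain SQLite (Python sqlite3 sketch; the table name t is made up): the value reported for the insert and the INTEGER PRIMARY KEY column are one and the same.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

cur = db.execute("INSERT INTO t (name) VALUES ('alice')")
generated_key = cur.lastrowid  # what Room's @Insert would return as a Long

# id is an alias of rowid, so selecting both gives the same value
row = db.execute("SELECT id, rowid FROM t WHERE name = 'alice'").fetchone()
print(generated_key, row)  # 1 (1, 1)
```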
