This is my table:
@Parcelize
@Entity(tableName = "profile")
data class Profile(
    @SerializedName("id") @PrimaryKey var id: Long,
    @SerializedName("name") var name: String?,
    @TypeConverters(UserConverter::class)
    @NotNull
    @SerializedName("users") var users: List<Long>?
): Parcelable
and this is my second table:
@Parcelize
@Entity(tableName = "user")
data class User(
    @PrimaryKey
    @SerializedName("id") var id: Long,
    @SerializedName("name") var name: String
) : Parcelable
and I want to get this object:
data class ProfileWithUsersName(
    val profile: Profile,
    val usersName: List<String>?
)
to get this list of objects I do this:
fun getProfiles(): List<ProfileWithUsersName> {
    val arrayListTemp = arrayListOf<ProfileWithUsersName>()
    val profiles = profileDao.getProfiles()
    for (profile in profiles) {
        if (profile.users != null) {
            arrayListTemp.add(
                ProfileWithUsersName(
                    profile,
                    userDao.getUsersNameByIds(profile.users!!)
                )
            )
        } else {
            arrayListTemp.add(
                ProfileWithUsersName(
                    profile,
                    null
                )
            )
        }
    }
    return arrayListTemp.toList()
}
Is there any way to do this in one query?
It could be something like this:
@Dao
abstract class ProfileDao {
    companion object {
        const val QUERY = "SELECT * FROM profile " +
            "INNER JOIN user " +
            `your condition`
    }

    @Query(QUERY)
    abstract fun getCurrentStep(): List<ProfileWithUsersName>?
}
and your profile should be something like this:
data class ProfileWithUsersName(
    @Embedded val profile: Profile,
    @Embedded val usersName: List<String>?
)
Maybe this won't be a full answer, but why not a partial one?
Notes on your tables' structure
It's not optimal, and it would be better not to hold the information about which users belong to a profile inside the Profile class (a better option is to move this into another table). But let it be.
With no changes to the tables' structure, I think your approach with nested queries could be improved if you called userDao.getUsersNameByIds(profile.users!!) just once (that requires changing the logic a little; a rough sketch follows). But your tables are probably not big enough to make such optimisations worth the time.
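A rough sketch of that idea (not your exact code; getUsersByIds is a hypothetical DAO method returning whole User rows, so the names can be looked up by id in memory):
fun getProfiles(): List<ProfileWithUsersName> {
    val profiles = profileDao.getProfiles()
    // Collect every referenced user id once, across all profiles
    val allIds = profiles.flatMap { it.users ?: emptyList() }.distinct()
    // One query instead of one per profile; getUsersByIds is assumed to exist
    val namesById: Map<Long, String> = userDao.getUsersByIds(allIds)
        .associate { it.id to it.name }
    return profiles.map { profile ->
        ProfileWithUsersName(profile, profile.users?.mapNotNull { namesById[it] })
    }
}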
As for "do this in one query"
Actually I think there is some chance of succeeding, but honestly I don't think it's worth it, or that such a query would be faster or more elegant than the loops. But it could be a challenge for somebody :)
The main problem here is the users field in your Profile entity. In SQLite I guess you save it as a String (with the help of UserConverter). For example, if you have the list of ids [111, 222, 333], in SQLite they will be saved as the TEXT "111,222,333". So on the next step we must somehow JOIN a table with such a field against a table that has the INTEGERs 111, 222, 333.
That could only be done by casting the INTEGER to TEXT and then joining the tables on a condition like casted_value_from_second_table LIKE "%," + value_from_first_table + ",%" in pseudocode (an illustrative query is sketched below). Maybe that requires some changes to your TypeConverter. Even if it is successful, joining tables with LIKE is not best practice.
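Purely for illustration, the join condition could look roughly like this (a sketch, not a tested query; it assumes the converter really stores the ids as plain comma-separated TEXT such as "111,222,333", with no brackets or spaces, and relies on SQLite's || operator converting the INTEGER id to TEXT):
const val PROFILE_USER_JOIN =
    "SELECT profile.*, user.name AS userName FROM profile " +
    "JOIN user ON ',' || profile.users || ',' LIKE '%,' || user.id || ',%'"
Note that this returns one row per profile/user pair, so the result would still have to be grouped back into ProfileWithUsersName objects, which is part of why it's probably not worth it.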
Conclusions:
Without changes to the tables' structure, and if your tables are not very big, I think the way you do it now is acceptable. As for "one query" (again, with no changes to the db tables' structure), it's not a routine task; there are several problems that would need to be solved experimentally, and I don't think it's worth it.
P.S. Maybe I'm missing some obvious solution, so this is just my personal opinion :)
I have three tables in my Room database; my query should return a list of products that contains details about them.
All products are inserted with a documentId, so when the query runs I need to get only the items for that documentId in the list.
The DAO looks like this:
@Transaction
@Query("SELECT * FROM document_products dp LEFT JOIN products ON products.id = dp.productId LEFT JOIN products_barcodes ON products_barcodes.barcode = dp.productId WHERE dp.documentId = :documentId AND (products_barcodes.barcode = dp.productId OR products.id = dp.productId) ORDER BY timestamp ASC")
fun getProductsWithDetails(documentId: Long): Flow<List<DocumentProductWithDetails>>
If I test the query directly against the table, with documentId 5, it returns the correct values.
But those values are incorrect in the application, probably because of the @Relation in DocumentProductWithDetails, and I'm unable to find the issue; inside the application the data is shown wrong.
So, as the item with that productId is saved three times, it is showing the last item instead of the one related to the documentId.
The data class which contains the @Relation annotation looks like this:
@JsonClass(generateAdapter = true)
data class DocumentProductWithDetails(
    @Relation(
        parentColumn = "id",
        entityColumn = "productId"
    )
    var product: DocumentProduct,
    @Embedded
    var details: ProductWithBarcodes?
)
Where DocumentProduct and ProductWithBarcodes:
data class DocumentProduct(
    @PrimaryKey(autoGenerate = true)
    var id: Long,
    var productId: String,
    var quantity: Float,
    var orderQuantity: Float,
    var purchase: Float,
    var documentId: Long,
    var labelType: String?,
    var timestamp: Long?
)
data class ProductWithBarcodes(
    @Embedded
    var product: Product,
    @Relation(
        parentColumn = "id",
        entityColumn = "productId"
    )
    var barcodes: List<Barcode>
)
So as the item with productId is saved three times it is showing the last item instead of the one related to documentId
If any columns of the query share the same column name, the values assigned may be (will be) inconsistent, in that the value of the last column with the repeated name will be the value assigned to all the other columns that have that name.
I would suggest that the fix is to use unique column names, e.g. instead of productId in the Barcode use a column name more indicative of the use of the value; it could be considered a map to the product, so perhaps barcode_productIdMap (a sketch follows below).
It is not the query that is at fault, but how Room handles retrieving and assigning values.
Consider your second image (with some marking):-
The above is explained in more detail, with examples, in this answer
Which is the correct id, productId, quantity (Rhetorical).
How is Room supposed to know what goes where? (Rhetorical)
Consider that the following query extracts exactly the same data (but with the data columns in a different sequence):-
#Query("SELECT products.*,products_barcodes.*,document_products.* FROM document_products dp LEFT JOIN products ON products.id = ....
How is Room meant to cope with the different order (should the correct productId be the first or the second (Rhetorical)).
With unique column names the column order is irrelevant, as Room then knows exactly what column is associated with what field; there is no ambiguity.
With unique column names, tablename.columnName can be reduced to just the column name in the SQL, so the SQL can be simplified.
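As a rough sketch of the unique-name suggestion (the Barcode entity below is assumed, since it isn't shown in the question; barcode_productIdMap is just the example name from above):
@Entity(tableName = "products_barcodes")
data class Barcode(
    @PrimaryKey
    val barcode: String,
    // uniquely named, so it can never collide with document_products.productId in a JOIN
    @ColumnInfo(name = "barcode_productIdMap")
    val productIdMap: String
)

data class ProductWithBarcodes(
    @Embedded
    var product: Product,
    @Relation(
        parentColumn = "id",
        entityColumn = "barcode_productIdMap" // matches the renamed column
    )
    var barcodes: List<Barcode>
)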
What I want to do is merge the following two data classes into one, while keeping the two tables divided as they are.
@Entity(tableName = "file")
data class File(
    @PrimaryKey
    @ColumnInfo(name = "path") var path: String,
    @ColumnInfo(name = "date", index = true) var date: Long,
    @ColumnInfo(name = "number") var num: Float = -1f
)
@Entity(tableName = "del_file")
data class delFile(
    @PrimaryKey
    @ColumnInfo(name = "path") var path: String,
    @ColumnInfo(name = "date", index = true) var date: Long,
    @ColumnInfo(name = "number") var num: Float = -1f
)
The reason I want to manage those two tables separately is that they are used in completely different situations.
'file' will be used in the app's file system, and 'del_file' will be managed only in the recycle bin.
I also thought about adding a column called "is_alive" to the "file" table and managing it that way, but I think it's a bad structure in that the column would be meaningless for almost all entries, and every query used in the app's file system would need to filter on it.
The best way I can think of is to manage each of the two tables separately, but I feel I simply couldn't come up with anything better.
Is there a way to keep the tables separate while making that data class one? Or is there a better way?
I would be very, very grateful if you could tell me a good way.
Generally, having two separate tables for the same data model is not good because it may lead to data duplication, which has various disadvantages; for example, it costs storage space and can cause data inconsistency.
There are two ways to deal with this situation. If you want to distinguish file items by just one field (for example, is_alive), the best way is to use one table with the is_alive field. You can use this approach to solve your problem.
But if there is more than one distinguishing field, or there may be more in the future, the solution is to create another table (like del_file) that contains only a reference to the original table (the file table) and those fields. In other words, to avoid data duplication, separate those fields into another table with a reference to the original table and then use a JOIN when you want to retrieve them. A rough sketch follows.
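A minimal sketch of this "reference table" idea (the DelFileRef name, the deleted_at field and the foreign key are assumptions added here for illustration, not part of the answer):
@Entity(
    tableName = "del_file",
    foreignKeys = [ForeignKey(
        entity = File::class,
        parentColumns = ["path"],
        childColumns = ["file_path"],
        onDelete = ForeignKey.CASCADE
    )]
)
data class DelFileRef(
    @PrimaryKey
    @ColumnInfo(name = "file_path") val filePath: String,
    @ColumnInfo(name = "deleted_at") val deletedAt: Long
)
// Retrieving the deleted files is then a JOIN back to the original table, e.g.:
// @Query("SELECT file.* FROM file JOIN del_file ON file.path = del_file.file_path")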
For more detail see this
You cannot have a single @Entity annotated class for multiple tables.
However, you could use a class as the basis of other classes, by embedding the class into the other classes.
e.g. you could have:-
@Entity(tableName = "file")
data class File(
    @PrimaryKey
    @ColumnInfo(name = "path") var path: String,
    @ColumnInfo(name = "date", index = true) var date: Long,
    @ColumnInfo(name = "number") var num: Float = -1f
)
and the second class as :-
@Entity(tableName = "del_file", primaryKeys = ["path"], indices = [Index("date")])
data class delFile(
    @Embedded
    val file: File
)
However, you may need to be aware of what is and isn't applied to the class that Embeds another class.
You cannot use @ColumnInfo with an @Embedded field (it is not necessarily a single column).
The @PrimaryKey is lost/dropped when embedding, which is why you need to define the primary key in the @Entity annotation of the class that embeds the other class (it is feasible to have multiple @Embeddeds, and thus impossible to determine what the correct primary key should be; Room requires that a primary key is defined).
Likewise for the index on the date column, hence the indices being defined in the @Entity annotation of the delFile class that embeds the other class.
However, the column name given in the @ColumnInfo of the first class is used in the second (it is safe to propagate this).
As an example of the column names being different from the field/variable names, if, for the first, you had:-
@Entity(tableName = "file")
data class File(
    @PrimaryKey
    @ColumnInfo(name = "mypath") var path: String,
    @ColumnInfo(name = "mydate", index = true) var date: Long,
    @ColumnInfo(name = "mynumber") var num: Float = -1f
)
Then, as the column names differ from the field/variable names, you would have to use:-
@Entity(tableName = "del_file", primaryKeys = ["mypath"], indices = [Index("mydate")])
data class delFile(
    @Embedded
    val file: File
)
You would also have to be aware that changes to the File class would also apply to the delFile class, which could be useful at times but potentially problematic at other times.
Changes to the second (delFile) class would not be applied to the first (File) class, so you would have the freedom to augment the second, e.g.:
@Entity(tableName = "del_file", primaryKeys = ["mypath"], indices = [Index("mydate")])
data class delFile(
    @Embedded
    val file: File,
    val another_column: String
)
This would result in the del_file table having the additional another_column column.
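As a rough sketch of how the two tables could then be used together (the FileDao interface and its method names are hypothetical, not part of the answer above; it assumes the delFile variant without another_column):
@Dao
interface FileDao {
    @Insert
    fun insertDelFile(deleted: delFile)

    @Delete
    fun deleteFile(file: File)

    // "Deleting" a file moves it to the recycle bin by wrapping the same File object
    @Transaction
    fun moveToRecycleBin(file: File) {
        insertDelFile(delFile(file))
        deleteFile(file)
    }
}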
Say I have a one to many relationship between City and Person.
@Entity(tableName = "city")
data class City(
    @PrimaryKey
    val cityId: Int,
    val name: String
)
@Entity(tableName = "person")
data class Person(
    @PrimaryKey(autoGenerate = true)
    val personId: Long = 0,
    val name: String,
    val cityId: Int
)
The goal is to get all person in the database and their corresponding city. According to the Jetpack documentation, the obvious way is to create a CityWithPersons class.
data class CityWithPersons(
    @Embedded
    val city: City,
    @Relation(parentColumn = "cityId", entityColumn = "cityId")
    val persons: List<Person>
)
Get all CityWithPersons, then combine persons from there.
But in my scenario, there could be fewer than 10 persons and more than 1000 cities in the database. It seems ridiculous and very inefficient to do it this way.
Other potential approaches could be:
get the full list of persons, then query the city by cityId one by one
embed the City in the Person entity instead of the cityId
do it as a many-to-many relationship; PersonWithCity would just have a cities list with one entry
I wonder which would be the best way to do it? Or is there a better way I didn't think of?
I wonder which would be the best way to do it? Or is there a better way I didn't think of?
I don't believe that the many-to-many relationship would provide any performance advantage, as you would still need to search through one of the tables. Nor do I believe that getting a full list of persons and then querying the city with cityId one by one would be of benefit (however, do you need to? (rhetorical) See the PersonAndCity approach below, which effectively does this in one go).
the obvious way is to create a CityWithPersons class
Seeing that you are looking at the issue from the Person perspective, why not a PersonWithCity class?
embed the City in the Person entity instead of the cityId :-
data class PersonWithCity(
    @Embedded
    val person: Person,
    @Relation(parentColumn = "cityId", entityColumn = "cityId")
    val city: City
)
And a Dao such as :-
#Query("SELECT * FROM person")
fun getPersonWithCity(): List<PersonWithCity>
Do you need to build everything?
Another option I don't believe you've considered :-
data class PersonAndCity(
    val personId: Long,
    val name: String,
    val cityId: Int,
    val cityName: String,
)
And a Dao such as
#Query("SELECT *, city.name AS cityName FROM person JOIN city ON person.cityId = city.cityId")
fun getPersonAndCity(): List<PersonAndCity>
No #Relation
Running the above 2 and the original with 100000 Persons and 10000 Cities (I assume more Person rows), with each Person randomly linked to a City, and extracting everything with each of the 3 methods, the results are :-
690ms (extracts 10000 Cities with ? Persons) using CityWithPersons
1560ms (extracts all 100000 Persons with the City) using PersonWithCity
1475ms (extracts all 100000 Persons with the City information rather than a City object) using PersonAndCity
Changing to 10 Persons and 1000 Cities:
49ms (CityWithPersons (10000 extracted))
2ms (PersonWithCity (10 extracted))
5ms (PersonAndCity (10 extracted))
As such, the best way is dependent upon what you are doing. As can be seen, the ratio between Cities and Persons is a factor that should be considered.
In short you should undertake testing :-
For the results above I used :-
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val db = Room.databaseBuilder(applicationContext, MyDatabase::class.java, "Mydb")
            .allowMainThreadQueries()
            .build()
        val dao = db.getAllDao()
        val people = 10
        val cities = 1000
        for (i in 1..cities) {
            dao.insertCity(City(null, "City00" + i))
        }
        for (i in 1..people) {
            dao.insertPerson(Person(null, "Person" + i, (1..cities).random()))
        }
        val TAG = "CITYPERSONINFO"
        Log.d(TAG, "Starting Test 1 - Using CityWithPersons")
        val usingCityWithPerson = dao.getCityWithPerson()
        Log.d(TAG, "Done Test 1. Rows Extracted = " + usingCityWithPerson.size)
        Log.d(TAG, "Starting Test 2 - Using PersonWithCity")
        val usingPersonWithCity = dao.getPersonWithCity()
        Log.d(TAG, "Done Test 2. Rows Extracted = " + usingPersonWithCity.size)
        Log.d(TAG, "Starting Test 3 - Using PersonAndCity (no @Relation, just JOIN)")
        val usingPersonAndCity = dao.getPersonAndCity()
        Log.d(TAG, "Done Test 3. Rows Extracted = " + usingPersonAndCity.size)
    }
}
Note that I uninstall the App between runs.
I am new to Room and it's throwing me for a loop. I am trying to take a specific column of a specific data class in my Room database, compile it into a list, and turn that list into a String so I can do stuff with it. I can sort of do this by printing the contents of the Room database table to the console, but that's it.
I'll show you the code that I have.
This is my data class.
code.kt
@Parcelize
@Entity(tableName = "code_table")
data class code(
    @PrimaryKey(autoGenerate = true)
    val id: Int,
    val code: String //this is the item I want to isolate
): Parcelable
a snippet of my ViewModel.kt that I use.
val readAllData: LiveData<List<code>>
private val repository: respository

init {
    val Dao = database.getDatabase(application).dao()
    repository = respository(Dao)
    readAllData = repository.readAllData
}
a snippet of my repository.kt that I use.
val readAllData: LiveData<List<code>> = dao.readAllData()
a snippet of my Dao.kt that I use.
#Query("SELECT * FROM code_table ORDER BY id ASC")
fun readAllData(): LiveData<List<code>>
How I read the Room database table
private fun dbread() {
    mUserViewModel = ViewModelProvider(this).get(ViewModel::class.java)
    mUserViewModel.readAllData.observe(viewLifecycleOwner, Observer { user ->
        var abc = user
        println(abc)
    })
}
this is what it outputs
I/System.out: [code(id=2, code=textA), code(id=3, code=textB), code(id=4, code=textC)]
I am not hell-bent on only querying the code column, but I need to aggregate all the data in the code column into a string or JSON array. So, in this case, that would be textA, textB and textC. (I'm worried I haven't made myself clear.)
I have also tried doing the following in the Dao
#Query("SELECT code FROM code_table")
fun readdb(): LiveData<List<code>>//I've tried this with lot different types within the Parentheses
this makes the build fail: it says the following
:app:kaptDebugKotlin 1 error
java.lang.reflect.InvocationTargetException (no error message)
I would guesstimate that this error is because the SQL syntax is off, but I don't think it is.
I have also tried messing around with the ViewModel and repository to see if I can get them to output just the code column, but to no avail. I'm surprised I'm the first to post about this.
thank you for your time.
You can obtain your desired result just by using SQL.
In Android (and with Room) you're using a SQLite database, so the most important thing is writing SQL that is compliant with SQLite.
Then you can select the concatenation of all the values in your table, just by asking the database to do it. You can achieve it with this:
#Query("SELECT GROUP_CONCAT(code) FROM code_table")
fun readConcatenatedCode(): LiveData<String>
The returned value will be a LiveData of String, and the String will contain all the values, concatenated.
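For example, observing it could look roughly like this (a sketch, assuming the DAO method is exposed through your repository/ViewModel the same way readAllData already is; GROUP_CONCAT uses a comma as its default separator, so splitting the string restores the individual values):
mUserViewModel.readConcatenatedCode().observe(viewLifecycleOwner) { joined ->
    // joined is e.g. "textA,textB,textC" (null if the table is empty)
    val codes: List<String> = joined?.split(",") ?: emptyList()
    println(codes) // [textA, textB, textC]
}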
I'm storing podcast data in a Room database, where each podcast has a List<Int> called genreIds. I'd like to be able to store this in such a way that I can easily query it later by doing something like SELECT * FROM podcasts WHERE genreIds CONTAINS :genre, or whatever the command would be.
So what is the best way to store that list of Ints so that it can be easily queried later, and how would I do that using Room? I've used TypeConverters before, but that converts the list to a string, which is difficult to query, so I'd like to link it to another table or something that can be easily queried; I'm just not sure how to do that.
Thanks in advance.
The data stored in the db with Room depends on the data class you use. If you specify a data class with an Int member, that will be an INTEGER in the db.
Example:
data class TrackPlayedDB(
    @PrimaryKey(autoGenerate = true)
    val _id: Int = 0,
    val timesPlayed: Int,
    val artist: String,
    val trackTitle: String,
    val playDateTime: LocalDateTime
)
Here timesPlayed will be an INTEGER in the DB (as will _id). You'll specify your data classes like the following; this will build the corresponding tables.
@Database(entities = [TrackPlayedDB::class], version = 1, exportSchema = false)
@TypeConverters(Converters::class)
abstract class MyRoomDatabase : BaseRoomDatabase() {
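For completeness, a minimal sketch of what the Converters class referenced above could look like (an assumption, since the original isn't shown; it persists the LocalDateTime as epoch seconds):
class Converters {
    @TypeConverter
    fun fromLocalDateTime(value: LocalDateTime?): Long? =
        value?.toEpochSecond(ZoneOffset.UTC)

    @TypeConverter
    fun toLocalDateTime(value: Long?): LocalDateTime? =
        value?.let { LocalDateTime.ofEpochSecond(it, 0, ZoneOffset.UTC) }
}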
Edit: Following the author's comment, I stand corrected; I didn't get the question right.
The author actually asks how to store a List<Int> as a field on a table. There are 2 solutions for that: one, as the author suggests, is to store the List as a String and use the LIKE keyword to write queries with a clause like the following:
SELECT * FROM mytable
WHERE column1 LIKE '%word1%'
OR column1 LIKE '%word2%'
OR column1 LIKE '%word3%'
as a simple search on SO would have shown: SQL SELECT WHERE field contains words
The author says he used TypeConverters, so I'll skip how to convert a List<Int> into a String.
The other solution to this problem is to realise that the current design ignores the theory of relational databases. When you have a many-to-many relationship, as in the case of podcasts and genres, theory dictates that you build a table that links the ids of podcasts to the ids of genres, as explained here: https://dzone.com/articles/how-to-handle-a-many-to-many-relationship-in-datab
and in countless other books, videos and blogs.
This benefits the db with added clarity, performance and scalability.
Bottom line: the author's db design is wrong.
I found [this article on Medium][1] very helpful. What I'm trying to do is a many-to-many relationship, which in this case would be done something like the following:
Podcast class:
@Entity(tableName = "podcasts")
data class Podcast(
    @PrimaryKey
    @ColumnInfo(name = "podcast_id")
    val id: String,
    // other fields
)
Genre class:
@Entity(tableName = "genres")
data class Genre(
    @PrimaryKey
    @ColumnInfo(name = "genre_id")
    val id: Int,
    val name: String,
    val parent_id: Int
)
PodcastDetails class:
data class PodcastDetails(
    @Embedded
    val podcast: Podcast,
    @Relation(
        parentColumn = "podcast_id",
        entityColumn = "genre_id",
        associateBy = Junction(PodcastGenreCrossRef::class)
    )
    val genres: List<Genre>
)
PodcastGenreCrossRef:
@Entity(primaryKeys = ["podcast_id", "genre_id"])
data class PodcastGenreCrossRef(
    val podcast_id: String,
    val genre_id: Int
)
And access it in the DAO like this:
@Transaction
@Query("SELECT * FROM podcasts")
fun getPodcastsWithGenre(): List<PodcastDetails>
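With that junction table in place, the original "WHERE genreIds CONTAINS :genre" question can be answered with a sub-select rather than string matching. A sketch (the method name is made up, and PodcastGenreCrossRef is Room's default table name for the class above):
@Transaction
@Query("SELECT * FROM podcasts WHERE podcast_id IN " +
       "(SELECT podcast_id FROM PodcastGenreCrossRef WHERE genre_id = :genreId)")
fun getPodcastsWithGenreId(genreId: Int): List<PodcastDetails>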
[1]: https://medium.com/androiddevelopers/database-relations-with-room-544ab95e4542