Observe LiveData with an initial timeout - Android

I have a LiveData that emits every time there is an update in the database. When the particular screen opens, this LiveData emits immediately with whatever value is in the database. Then a network call is made to update the database, and after the database is updated the LiveData emits again. This leads to two emissions in very quick succession. Subsequent updates to the database work properly because there is only one emission whenever the database is updated; only the first time are there two emissions in very quick succession, and I want to avoid that.
An idea to avoid that would be something like this: when the LiveData emits, wait for X seconds. If there is another emission within those X seconds, discard the data from the old emission and use the new one, then wait for X seconds again. If there is no emission within those X seconds, use the latest data.
This looks very similar to throttling, but only once. I was wondering if there's a simple way to do something like this using LiveData or MediatorLiveData.

You can post a delayed Runnable with the timeout you want after the first LiveData event.
On every LiveData update, remove the posted Runnable and post it again.
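A minimal sketch of that idea (assuming the work happens on the main thread; TIMEOUT_MS, Model and render() are placeholders, not part of the original answer):

private val handler = Handler(Looper.getMainLooper())
private var pendingRunnable: Runnable? = null
private var firstBurstHandled = false

fun onNewValue(value: Model) {
    if (firstBurstHandled) {
        render(value) // later emissions pass straight through
        return
    }
    // Drop the previously scheduled value and restart the timeout window
    pendingRunnable?.let { handler.removeCallbacks(it) }
    pendingRunnable = Runnable {
        firstBurstHandled = true
        render(value) // only the latest value within the window survives
    }.also { handler.postDelayed(it, TIMEOUT_MS) }
}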

You can use a MediatorLiveData and a boolean flag to achieve this.
Create mDbLiveData, a mediator LiveData mFinalLiveData, and a boolean mLoadedFromAPI that tracks whether the data from the API has been loaded.
On API success or failure, set mLoadedFromAPI to true.
Observe mFinalLiveData in your Activity/Fragment:
LiveData<Model> mDbLiveData;
MediatorLiveData<Model> mFinalLiveData = new MediatorLiveData<>();
private boolean mLoadedFromAPI = false;

// Load DB data into mDbLiveData
mDbLiveData = // Data from DB

// Add mDbLiveData as a source of mFinalLiveData
mFinalLiveData.addSource(mDbLiveData, dbData -> {
    if (mLoadedFromAPI) mFinalLiveData.postValue(dbData);
});
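Observing it from the Activity/Fragment could then look roughly like this (a sketch, assuming the MediatorLiveData is exposed from a ViewModel as finalLiveData and render() is your UI update):

viewModel.finalLiveData.observe(this) { dbData ->
    // Only fires for emissions that arrive once mLoadedFromAPI is true,
    // so the stale first DB emission is skipped
    render(dbData)
}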

This post helped: https://medium.com/@guilherme.devel/throttle-operator-with-livedata-and-kotlin-coroutines-ec42f8cbc0b0
I modified the solution a bit to fit my use case:
fun <T> LiveData<T>.debounceOnce(
    duration: Long,
    coroutineContextProvider: CoroutineContextProvider
): LiveData<T> {
    return MediatorLiveData<T>().also { mediatorLivedata ->
        var shouldDebounce = true
        var job: Job? = null
        val source = this
        mediatorLivedata.addSource(source) {
            if (shouldDebounce) {
                job?.cancel()
                job = CoroutineScope(coroutineContextProvider.IO).launch {
                    delay(duration)
                    withContext(coroutineContextProvider.Main) {
                        mediatorLivedata.value = source.value
                        shouldDebounce = false
                    }
                }
            } else {
                job?.cancel()
                mediatorLivedata.value = source.value
            }
        }
    }
}
open class CoroutineContextProvider @Inject constructor() {
    open val Main: CoroutineContext by lazy { Dispatchers.Main }
    open val IO: CoroutineContext by lazy { Dispatchers.Default }
}
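Usage from a ViewModel might then look like this (a sketch; the Room DAO method observeItems() and the 500 ms window are assumptions):

val items: LiveData<List<Item>> =
    itemDao.observeItems()                        // LiveData<List<Item>> from Room (assumed)
        .debounceOnce(500L, coroutineContextProvider)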

Related

ViewModel + Room test coverage in UnitTest

I have a unit test like this:
...
subj.mintToken(to, value, uri)
advanceUntilIdle()
...
val pendingTxFinalState = subj.uiState.value.pendingTx.count()
assertThat("Model should have a single pending tx, but has $pendingTxFinalState", pendingTxFinalState == 1)
...
The model field in the ViewModel is populated by the request to the cache in the init {} block. Each change in the table should trigger this coroutine flow. This piece of the unit test checks the correctness of that functionality.
The current issue is that the Flow in the init {} block is triggered only at test start, when the ViewModel instance is created. It does not respond to updates in the table.
It is important to note that in the test I don't use a Room database nor a test database, but a FakeCacheRepository where the behaviour of the methods is emulated by flows with mocked data. However, the behaviour of the flow should be the same, as there is still a change in the underlying data.
val txPool = ConcurrentLinkedQueue<ITransaction>()

override fun createChainTx(tx: ITransaction): Flow<ITransaction> {
    return flow {
        txPool.add(tx)
        emit(tx)
    }
}

override fun getAllChainTransactions(): Flow<List<ITransaction>> {
    return flow {
        emit(txPool.toList())
    }
}
Do you see the issue here or better way to test this?
My guess is you're writing your own FakeCacheRepo and in the update function you are calling createChainTx. The value of the flow isn't updating, though, because the create function doesn't just update a value; it creates a new flow instead of updating the old one. You can modify the setup to emit continuously in a loop (with some buffer delay) based on a variable. Then, when you change the variable, it will change what the current flow is emitting, as expected.
The code example here does roughly that: https://developer.android.com/kotlin/flow#create
override fun createChainTx(): Flow<ITransaction> {
    return flow {
        while (true) {
            val tx = getLatestTxValue() // Get the latest updated value from an outside source
            txPool.add(tx)
            emit(tx)
            delay(refreshIntervalMs) // Suspends the coroutine for some time
        }
    }
}
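Applying the same idea to getAllChainTransactions(), a rough sketch of the fake could look like this (refreshIntervalMs and the snapshot comparison are assumptions, not part of the linked example):

override fun getAllChainTransactions(): Flow<List<ITransaction>> = flow {
    var lastSnapshot: List<ITransaction>? = null
    while (true) {
        val snapshot = txPool.toList()
        if (snapshot != lastSnapshot) {   // only emit when the fake "table" actually changed
            emit(snapshot)
            lastSnapshot = snapshot
        }
        delay(refreshIntervalMs)          // poll interval, an assumption
    }
}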

How do you make a subscriber to a Kotlin SharedFlow run operations in parallel?

I have a connection to a Bluetooth device that emits data every 250 ms.
In my ViewModel I wish to subscribe to said data, run some suspending code (which takes approximately 1000 ms to run) and then present the result.
The following is a simple example of what I'm trying to do.
Repository:
class Repo() : CoroutineScope {
    private val supervisor = SupervisorJob()
    override val coroutineContext: CoroutineContext = supervisor + Dispatchers.Default

    private val _dataFlow = MutableSharedFlow<Int>()
    private var dataJob: Job? = null
    val dataFlow: Flow<Int> = _dataFlow

    init {
        launch {
            var counter = 0
            while (true) {
                counter++
                Log.d("Repo", "emitting $counter")
                _dataFlow.emit(counter)
                delay(250)
            }
        }
    }
}
The ViewModel:
class VM(app: Application) : AndroidViewModel(app) {
    private val _reading = MutableLiveData<String>()
    val latestReading: LiveData<String> = _reading

    init {
        viewModelScope.launch(Dispatchers.Main) {
            repo.dataFlow
                .map {
                    validateData() // this is where some validation happens; it is very fast
                }
                .flowOn(Dispatchers.Default)
                .onEach {
                    delay(1000) // this is to simulate the work that is done
                }
                .flowOn(Dispatchers.IO)
                .map {
                    transformData() // this will transform the data to be human readable
                }
                .flowOn(Dispatchers.Default)
                .collect {
                    _reading.postValue(it)
                }
        }
    }
}
As you can see, when data comes in, first I validate it to make sure it is not corrupt (on the Default dispatcher), then I perform some operations on it (saving and running a long algorithm that takes time, on the IO dispatcher), then I change it so the application user can understand it (switching back to the Default dispatcher), and then I post it to the MutableLiveData so that a subscriber from the UI layer can see the current data (on the Main dispatcher).
I have two questions:
a) If validateData fails, how can I cancel the current emission and move on to the next one?
b) Is there a way for the dataFlow subscriber in the ViewModel to use new threads so the delay parts can run in parallel?
The timeline right now looks like the first part, but I want it to run like the second one.
Is there a way to do this?
I've tried using buffer(), which as the documentation states "Buffers flow emissions via channel of a specified capacity and runs collector in a separate coroutine", but when I set it to BufferOverflow.SUSPEND I get the behaviour of the first part, and when I set it to BufferOverflow.DROP_OLDEST or BufferOverflow.DROP_LATEST I lose emissions.
I have also tried using .conflate() like so:
repo.dataFlow
    .conflate()
    .map { ....
and even though the emissions start one after the other, the part with the delay still waits for the previous one to finish before starting the next one.
When I use .flowOn(Dispatchers.Default) for that part, I lose emissions, and when I use .flowOn(Dispatchers.IO) or something like Executors.newFixedThreadPool(4).asCoroutineDispatcher(), they always wait for the previous one to finish before starting a new one.
Edit 2:
After about 3 hours of experiments this seems to work
viewModelScope.launch(Dispatchers.Default) {
    repo.dataFlow
        .map {
            validateData(it)
        }
        .flowOn(Dispatchers.Default)
        .map {
            async {
                delay(1000)
                it
            }
        }
        .flowOn(Dispatchers.IO) // NOTE (A)
        .map {
            val result = it.await()
            transformData(result)
        }
        .flowOn(Dispatchers.Default)
        .collect {
            _readings.postValue(it)
        }
}
However, I still haven't figured out how to cancel the emission if validateData fails,
and for some reason it only works if I use Dispatchers.IO, Executors.newFixedThreadPool(20).asCoroutineDispatcher() or Dispatchers.Unconfined where I put note (A). Dispatchers.Main does not seem to work (which I expected), but Dispatchers.Default also does not seem to work and I don't know why.
First question: Well, you cannot recover from an exception in the sense of continuing the collection of the flow; as per the docs, "Flow collection can complete with an exception when an emitter or code inside the operators throw an exception." Therefore, once an exception has been thrown, the collection is completed (exceptionally). You can, however, handle the exception by either wrapping your collection inside a try/catch block or by using the catch() operator.
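For example, handling it with catch() might look roughly like this (a sketch based on the question's names; note the collection still ends once the exception reaches catch()):

viewModelScope.launch {
    repo.dataFlow
        .map { validateData(it) }                              // may throw on corrupt data
        .catch { e -> Log.w("VM", "validation failed", e) }    // exception handled, but the flow completes here
        .collect { _reading.postValue(it) }
}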
Second question: You cannot. While the producer (emitting side) can be made concurrent by using the buffer() operator, collection is always sequential.
As per your diagram, you need fan-out (one producer, many consumers), and you cannot achieve that with flows. Flows are cold: each time you collect from them, they start emitting from the beginning.
Fan-out can be achieved using channels, where you can have one coroutine producing values and many coroutines that consume those values.
Edit: Oh, you meant the validation failed, not the function itself. In that case you can use the filter() operator.
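A sketch of that, assuming validateData() can be written to return a Boolean for a given reading:

viewModelScope.launch {
    repo.dataFlow
        .filter { validateData(it) }      // invalid readings are dropped; collection keeps going
        .map { transformData(it) }
        .collect { _reading.postValue(it) }
}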
BroadcastChannel and ConflatedBroadcastChannel are being deprecated. SharedFlow cannot help you in your use case either, as it emits values in a broadcast fashion, meaning the producer waits until all consumers consume each value before producing the next one. That is still sequential; you need parallelism. You can achieve it using the produce() channel builder.
A simple example:
val scope = CoroutineScope(Job() + Dispatchers.IO)

val producer: ReceiveChannel<Int> = scope.produce {
    var counter = 0
    val startTime = System.currentTimeMillis()
    while (isActive) {
        counter++
        send(counter)
        println("producer produced $counter at ${System.currentTimeMillis() - startTime} ms from the beginning")
        delay(250)
    }
}

val consumerOne = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerOne consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}

val consumerTwo = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerTwo consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}

val consumerThree = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerThree consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}
Observe production and consumption times.

Is livedata builder ok for one-shot operations?

For example, let's say that we have a product catalog view with an option to add a product to a cart.
Each time the user clicks add to cart, a viewModel method addToCart is called, which could look like this:
// inside viewModel
fun addToCart(item: Item): LiveData<Result> = liveData {
    val result = repository.addToCart(item) // addToCart is a suspend function
    emit(result)
}
// inside view
addButton.setOnClickListener {
    viewModel.addToCart(selectedItem).observe(viewLifecycleOwner, Observer { result ->
        // show result
    })
}
What happens after adding, for example, 5 items: will there be 5 LiveData objects in memory observed by the view?
If yes, when will they be cleaned up? And if yes, should we avoid the liveData builder for one-shot operations that can be called multiple times?
Your implementation seems wrong! You are returning a new LiveData object for every addToCart call. As for your first question: yes.
If you want to do it correctly:
// In ViewModel
private val _result = MutableLiveData<Result>()
val result: LiveData<Result>
    get() = _result

fun addToCart(item: Item) {
    viewModelScope.launch {
        // Call suspend functions
        _result.value = ...
    }
}

// Activity/Fragment
viewModel.result.observe(lifecycleOwner) { result ->
    // Process the result
    ...
}
viewModel.addToCart(selectedItem)
All you have to do is call it from the Activity and process the result. You can also use StateFlow for this purpose; it has an asLiveData() extension which converts a Flow to LiveData.
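A rough StateFlow version of the same ViewModel (a sketch; the nullable initial value is an assumption, and asLiveData() comes from lifecycle-livedata-ktx):

// In ViewModel
private val _result = MutableStateFlow<Result?>(null)   // null until the first add-to-cart completes
val result: LiveData<Result?> = _result.asLiveData()    // bridge Flow -> LiveData for the view

fun addToCart(item: Item) {
    viewModelScope.launch {
        _result.value = repository.addToCart(item)       // suspend call; observers get the new value
    }
}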
According to the LiveData implementation of observe():
public void observe(@NonNull LifecycleOwner owner, @NonNull Observer<? super T> observer) {
    assertMainThread("observe");
    if (owner.getLifecycle().getCurrentState() == DESTROYED) {
        // ignore
        return;
    }
    LifecycleBoundObserver wrapper = new LifecycleBoundObserver(owner, observer);
    ObserverWrapper existing = mObservers.putIfAbsent(observer, wrapper);
    if (existing != null && !existing.isAttachedTo(owner)) {
        throw new IllegalArgumentException("Cannot add the same observer"
                + " with different lifecycles");
    }
    if (existing != null) {
        return;
    }
    owner.getLifecycle().addObserver(wrapper);
}
a new Observer (wrapper) is added every time you observe a LiveData. Looking at this, I would be careful about creating new observers from a view (click) event. At the moment I cannot tell whether the garbage collector can free these resources.
As @kaustubhpatange mentioned, you should have one LiveData with a state/value that is updated by the ViewModel with every new result. That LiveData can be observed (once) in your Activity's or Fragment's onCreate() function:
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    viewModel.result.observe(this) { result ->
        // handle the result
    }
}
Using MutableLiveData in your ViewModel, you can create the LiveData only once and populate it later with values from click events, responses, etc.
TL;DR
If your operation is one-shot, use a coroutine and LiveData.
If your operation works with streams, use Flow.
For one-shot operations, your approach is OK.
I don't think the liveData builder causes any memory leak.
If you use, for example, a private backing property for the LiveData and observe a public LiveData, you might see different behaviour, such as getting the latest value before a new value is assigned to it.

Emit coroutine Flow from Room while backfilling via network request

I have my architecture like so:
Dao methods returning Flow<T>:
@Query("SELECT * FROM table WHERE id = :id")
fun itemById(id: Int): Flow<Item>
Repository layer returning items from the DB but also backfilling from the network:
(** Need help here -- this is not working as intended **)
fun items(): Flow<Item> = flow {
    // Immediately emit values from DB
    emitAll(itemDao.itemById(1))

    // Backfill DB via network request without blocking the coroutine
    itemApi.makeRequest()
        .also { insert(it) }
}
ViewModel layer taking the flow, applying any transformations, and converting it into a LiveData using .asLiveData():
fun observeItem(): LiveData<Item> = itemRepository.getItemFlow()
    .map { /* apply transformation to the view model */ it }
    .asLiveData()
Fragment observing LiveData emissions and updating UI:
viewModel.item().observeNotNull(viewLifecycleOwner) {
    renderUI(it)
}
The issue I'm having is at step 2. I can't seem to figure out a way to structure the logic so that I can emit the items from the Flow immediately, but also perform the network fetch without waiting on it.
Since the fetch-from-network logic is in the same suspend function, it waits for the network request to finish before emitting the results downstream. But I just want to fire that request independently, since I'm not interested in waiting for its result (when it comes back, it'll update Room and I'll get the results naturally).
Any thoughts?
EDIT
Marko's solution works well for me, but I did attempt a similar approach like so:
suspend fun items(): Flow<List<Cryptocurrency>> = coroutineScope {
    launch {
        itemApi.makeRequest().also { insert(it) }
    }
    itemDao.itemById(1)
}
It sounds like you're describing a background task that you want to launch. For that you need access to your coroutine scope, so items() should be an extension function on CoroutineScope:
fun CoroutineScope.items(): Flow<Item> {
    launch {
        itemApi.makeRequest().also { insert(it) }
    }
    return flow {
        emitAll(itemDao.itemById(1))
    }
}
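Called from a ViewModel, that could be wired up roughly as follows (a sketch; it assumes the extension is visible there and that viewModelScope is the scope you want the backfill tied to):

// In the ViewModel
val item: LiveData<Item> =
    viewModelScope.items()   // launches the network backfill in viewModelScope
        .asLiveData()        // the Room-backed Flow keeps emitting as the DB updates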
On the other hand, if you'd like to start a remote fetch whose result will also become a part of the response, you can do it as follows:
fun items(): Flow<Item> = flow {
    coroutineScope {
        val lateItem = async { itemApi.makeRequest().also { insert(it) } }
        emitAll(itemDao.itemById(1))
        emit(lateItem.await())
    }
}

android - MutableLiveData doesn't observe on new data

I'm using MVVM and Android Architecture Components; I'm new to this architecture.
In my application I get some data from a web service and show it in a RecyclerView; it works fine.
Then I have a button for adding new data: when the user inputs the data, it goes to the web service, then I have to get the data and update my adapter again.
This is my code in the activity:
private fun getUserCats() {
    vm.getCats().observe(this, Observer {
        if (it != null) {
            rc_cats.visibility = View.VISIBLE
            pb.visibility = View.GONE
            catAdapter.reloadData(it)
        }
    })
}
This is the ViewModel:
class CategoryViewModel(private val model: CategoryModel) : ViewModel() {

    private lateinit var catsLiveData: MutableLiveData<MutableList<Cat>>

    fun getCats(): MutableLiveData<MutableList<Cat>> {
        if (!::catsLiveData.isInitialized) {
            catsLiveData = model.getCats()
        }
        return catsLiveData
    }

    fun addCat(catName: String) {
        model.addCat(catName)
    }
}
and this is my model class:
class CategoryModel(
    private val netManager: NetManager,
    private val sharedPrefManager: SharedPrefManager) {

    private lateinit var categoryDao: CategoryDao
    private lateinit var dbConnection: DbConnection
    private lateinit var lastUpdate: LastUpdate

    fun getCats(): MutableLiveData<MutableList<Cat>> {
        dbConnection = DbConnection.getInstance(MyApp.INSTANCE)!!
        categoryDao = dbConnection.CategoryDao()
        lastUpdate = LastUpdate(MyApp.INSTANCE)
        if (netManager.isConnected!!) {
            return getCatsOnline()
        } else {
            return getCatsOffline()
        }
    }

    fun addCat(catName: String) {
        val Category = ApiConnection.client.create(Category::class.java)
        Category.newCategory(catName, sharedPrefManager.getUid())
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(
                { success ->
                    getCatsOnline()
                }, { error ->
                    Log.v("this", "ErrorNewCat " + error.localizedMessage)
                }
            )
    }

    private fun getCatsOnline(): MutableLiveData<MutableList<Cat>> {
        Log.v("this", "online")
        var list: MutableLiveData<MutableList<Cat>> = MutableLiveData()
        list = getCatsOffline()
        val getCats = ApiConnection.client.create(Category::class.java)
        getCats.getCats(sharedPrefManager.getUid(), lastUpdate.getLastCatDate())
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(
                { success ->
                    list += success.cats
                    lastUpdate.setLastCatDate()
                    Observable.just(DbConnection)
                        .subscribeOn(Schedulers.io())
                        .subscribe({ db ->
                            categoryDao.insert(success.cats)
                        })
                }, { error ->
                    Log.v("this", "ErrorGetCats " + error.localizedMessage)
                }
            )
        return list
    }
I call addCat from the activity and it goes into the model and sends it to my web service; after it is successful, I call the getCatsOnline method to get the data again from the web service.
As I debugged, it gets the data, but it doesn't notify my activity; I mean the observer is not triggered in my activity.
How can I fix this? What is wrong with my code?
You have made several mistakes of varying importance in LiveData and RxJava usage, as well as in the MVVM design itself.
LiveData and RxJava
Note that LiveData and RxJava are streams. They are not one-time use, so you need to observe the same LiveData object, and more importantly, that same LiveData object needs to get updated.
If you look at the getCatsOnline() method, every time it gets called it creates a whole new LiveData instance. That instance is different from the previous LiveData object, so whatever is listening to the previous LiveData object won't get notified of the new change.
A few additional tips:
In getCatsOnline() you are subscribing to an Observable inside another subscriber. That is a common mistake made by beginners who treat RxJava as a callback. It is not a callback, and you need to chain these calls.
Do not subscribe in the Model layer, because it breaks the stream and you cannot tell when to unsubscribe.
It does not make sense to use AndroidSchedulers.mainThread() here. There is no need to switch to the main thread in the Model layer, especially since LiveData observers only run on the main thread.
Do not expose MutableLiveData to other layers. Just return it as LiveData.
One last thing I want to point out is that you are using RxJava and LiveData together. Since you are new to both, I recommend sticking with just one of them. If you must use both, use LiveDataReactiveStreams to bridge the two correctly; a rough sketch follows below.
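Bridging could look roughly like this (a sketch, assuming getCats() returns an Observable<List<Cat>> as in the example below; BackpressureStrategy.LATEST is an arbitrary choice):

// Expose the Rx stream to the view layer as LiveData instead of subscribing manually
val cats: LiveData<List<Cat>> = LiveDataReactiveStreams.fromPublisher(
    model.getCats().toFlowable(BackpressureStrategy.LATEST)
)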
Design
How to fix all this? I am guessing that what you are trying to do is:
(1) view needs categories -> (2) get categories from the server -> (3) create/update an observable list object with the new cats, and independently persist the result in the DB -> (4) the list instance should notify the activity automatically.
It is difficult to pull this off correctly because you have this list instance that you have to manually create and update. You also need to worry about where and for how long to keep this list instance.
A better design would be:
(1) view needs categories -> (2) get a LiveData from the DB and observe it -> (3) get new categories from the server and update the DB with the server response -> (4) the view is notified automatically because it has been observing the DB!
This is much easier to implement because it has a one-way dependency: View -> DB -> Server.
Example CategoryModel:
class CategoryModel(
    private val netManager: NetManager,
    private val sharedPrefManager: SharedPrefManager) {

    private val categoryDao: CategoryDao
    private val dbConnection: DbConnection
    private var lastUpdate: LastUpdate // Maybe store this value in a more persistent place..

    fun getInstance(netManager: NetManager, sharedPrefManager: SharedPrefManager) {
        // ... singleton
    }

    fun getCats(): Observable<List<Cat>> {
        return getCatsOffline()
    }

    // Notice this method returns just a Completable. Any new data should be observed through the `getCats()` method.
    fun refreshCats(): Completable {
        val getCats = ApiConnection.client.create(Category::class.java)
        // getCats method may return a Single
        return getCats.getCats(sharedPrefManager.getUid(), lastUpdate.getLastCatDate())
            .flatMap { success -> categoryDao.insert(success.cats) } // insert into db
            .doOnSuccess { lastUpdate.setLastCatDate() }
            .ignoreElement()
            .subscribeOn(Schedulers.io())
    }

    fun addCat(catName: String): Completable {
        val Category = ApiConnection.client.create(Category::class.java)
        // newCategory may return a Single
        return Category.newCategory(catName, sharedPrefManager.getUid())
            .ignoreElement()
            .andThen(refreshCats())
            .subscribeOn(Schedulers.io())
    }
}
I recommend you read through the Guide to App Architecture and one of the livedata-mvvm example apps from Google.
