Ensure sequential state updates when using the RxJava scan operator - Android

I'm trying to implement the Redux state update pattern using RxJava:
val subject = PublishSubject.create<Event>()    // Event is a placeholder for your event type
val subject1 = PublishSubject.create<Event>()

// multiple threads posting on subject and subject1 here, concurrently
subject.mergeWith(subject1)
    .scan(getInitState()) { state, event ->
        // state update here
    }
    .subscribe { state ->
        // use state here
    }
As you can see, I'm using the scan operator to maintain the state.
How can I be sure that the state updates happen sequentially, even when multiple threads are producing events?
Is there some mechanism in the scan operator that queues events while the current state update function is still running?
What I have done:
I have successfully implemented this pattern in an Android environment. It's straightforward there, because if you always perform the state update on
AndroidSchedulers.mainThread()
and keep the state object immutable, you are guaranteed atomic, sequential state updates. But what happens if you don't have a dedicated scheduler for state updates? What if you are not on Android?
What I have researched:
I have read the source code of the scan operator: there is no waiting "queue" involved, just a simple state update and emission.
I have also read the SerializedSubject source code. There is indeed a waiting queue which serializes emissions. But what happens if I have two subjects? Serializing each of them doesn't mean they won't interfere with each other.

To force execution on a single thread, you can explicitly create a single thread scheduler to replace AndroidSchedulers.mainThread():
val singleThreadScheduler = Schedulers.single()
Even if the events are emitted on other threads, you can ensure you process them only on your single thread using observeOn:
subject.mergeWith(subject1)
    .observeOn(singleThreadScheduler)
    .scan(getInitState()) { state, event ->
        // state update here
    }
    .subscribe { state ->
        // use state here
    }
The difference between observeOn and subscribeOn can be pretty confusing, and logging the thread id can be useful to check everything is running on the thread you expect.
http://reactivex.io/documentation/scheduler.html
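For example, here is a minimal, runnable sketch of that idea (assuming RxJava 2; the Event and State types and the reduction are hypothetical placeholders). Logging the thread name inside the accumulator shows that every reduction runs sequentially on the same single thread, no matter which thread calls onNext():

import io.reactivex.schedulers.Schedulers
import io.reactivex.subjects.PublishSubject

data class Event(val delta: Int)
data class State(val total: Int = 0)

fun main() {
    val events = PublishSubject.create<Event>()

    events
        .observeOn(Schedulers.single())            // all downstream work hops onto one worker thread
        .scan(State()) { state, event ->
            println("reducing on ${Thread.currentThread().name}")
            State(state.total + event.delta)       // immutable state, new instance per update
        }
        .subscribe { state -> println("state: $state") }

    // onNext() may be called from any thread; the reductions above still run one at a time.
    events.onNext(Event(1))
    events.onNext(Event(2))
    Thread.sleep(200)                              // let the single scheduler drain before exiting
}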

Related

Collect flow but only any new values, not the currently existing value

I'm currently struggling with this one, and so far no combination of SharedFlow and StateFlow has worked.
I have a flow that might have already started with a value, or not.
Using that flow I want to collect any new values that are emitted after I start collecting.
So far all my attempts have failed: no matter what I try, the collector always gets the current value as soon as I start collecting.
An example of what I am trying to achieve:
Given a Flow (could be any type, Int is just for simplification)
with the following timeline: value 4 is emitted | value 2 is emitted | value 10 is emitted
I want to be able to do the following:
If I start collecting after value 4 has already been emitted, I want to receive only what comes after that; in this case it would collect 2 and 10 once they are emitted
If I start collecting after value 2 then it would only receive the 10
If I start collecting before 4 then it would receive 4, 2 and 10
I tried SharedFlow and StateFlow, with replay = 0 and WhileSubscribed, but no combination I could find does what I am looking for.
The only workaround I have found so far is to record the time I start my .collect { } and compare it with the creation time of each item I receive in the collect. That works because the object I'm using carries a specific origin time, but it won't work in general, e.g. for the Int example above.
EDIT: Adding implementation example as requested for SharedFlow
This is tied to a Room database call that returns a Flow<MyObject>
MyFragment.kt
lifecycleScope.launch(Dispatchers.IO) {
    viewModel.getMyObjectFlow.shareIn(
        viewModel.viewModelScope,         // also tried with the fragment's lifecycleScope
        SharingStarted.WhileSubscribed(), // also tried with the other two options
        replay = 0,
    ).collect {
        ...
    }
}
You have a misconception of how flows work. They only ever emit after you start collecting; they emit on demand. Let's take this example:
val flow1 = flow {
    println("Emitting 1")
    emit(1)
    delay(10.seconds)
    println("Emitting 2")
    emit(2)
}

delay(5.seconds)
println("Start collecting")
flow1.collect {
    println("Collected: $it")
}
The output is:
Start collecting
Emitting 1
Collected: 1
not:
Emitting 1
Start collecting
Collected: 1
This is because a flow starts emitting only after you start collecting it. Otherwise, it would have nowhere to emit to.
Of course, there are flows which emit from some kind of cache, queue or buffer. Shared flows, for example, do this. In that case it looks like you collect after emitting, but this is not really the case. Technically speaking, it works like this:
val buffer = listOf(1, 2, 3)
val flow1 = flow {
    buffer.forEach {
        println("Emitting $it")
        emit(it)
    }
}
It still emits after you start collecting, it just emits from the cache. Of course, the item was added to the cache before you started collecting, but this is entirely abstracted away from you. You can't know why a flow emitted an item. From the collector's perspective it always emitted just now, not in the past. Similarly, you can't know whether a web server read the data from the DB or from a cache - this is abstracted from you.
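As a small illustration of that cache behaviour (a sketch, not the asker's setup): a MutableSharedFlow with replay = 1 hands its cached item to a collector that starts later, and from the collector's side it looks like a fresh emission:

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val shared = MutableSharedFlow<Int>(replay = 1)
    shared.emit(4)                         // cached before anyone collects

    val job = launch {
        // This collector starts "late", yet immediately receives the replayed 4,
        // followed by anything emitted afterwards.
        shared.collect { println("Collected: $it") }
    }

    delay(100)
    shared.emit(2)
    delay(100)
    job.cancel()
}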
Summing up: it is not possible to collect only new items from just any flow in a universal way. Flows in general don't understand the concept of "new items". They just emit, and you don't know why they do it. Maybe they generate items on the fly, maybe they passively observe external events, or maybe they re-transmit items that they collected from another flow. You don't know.
While developing your solution, you need to understand what the source of items is and design your code accordingly. For example, if the source is a regular cold flow, it never starts doing anything before you start collecting. If the source is a state flow, you can just drop the first item (see the sketch below). If it is a shared flow or a flow with some replay buffer, the situation is more complicated.
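For the state-flow case, a sketch of the "drop the first item" idea (values and timings are made up):

import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    val state = MutableStateFlow(4)          // already holds a "current" value

    val job = launch {
        state
            .drop(1)                         // skip the value present at collection time
            .collect { println("Collected: $it") }
    }

    delay(100)
    state.value = 2                          // collected
    delay(100)
    state.value = 10                         // collected
    delay(100)
    job.cancel()
}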
One possible approach would be to start collecting earlier than we need, initially ignore all collected items and at some point in time start processing them. But this is still far from perfect and it may not work as we expect.
It doesn't make sense to use shareIn at the use site like that. You're creating a shared Flow that cannot be shared because you don't store the reference for other classes to access and use.
Anyway, the problem is that you are creating the SharedFlow at the use site, so your shared flow only begins collecting from upstream when the fragment calls this code. If the upstream flow is cold, then you will be getting the first value emitted by the cold flow.
The SharedFlow should be created in the ViewModel and put in a property so each Fragment can collect from the same instance. You'll want to use SharingStarted.Eagerly to prevent the cold upstream flow from restarting from the beginning when there are new subscribers after a break.
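A rough sketch of that structure (the repository interface, the MyObject stub and the property names are placeholders, not the asker's actual code):

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.*

data class MyObject(val id: Long)                                   // placeholder entity
interface MyRepository { fun getMyObjectFlow(): Flow<MyObject> }    // stand-in for the Room DAO

class MyViewModel(repository: MyRepository) : ViewModel() {
    // Shared once, eagerly: the cold Room flow is started immediately and is not restarted
    // for each new collector, so a collector that starts later only sees later emissions.
    val myObjectUpdates: SharedFlow<MyObject> = repository.getMyObjectFlow()
        .shareIn(viewModelScope, SharingStarted.Eagerly, replay = 0)
}

// In the Fragment:
// viewLifecycleOwner.lifecycleScope.launch {
//     viewModel.myObjectUpdates.collect { /* only values emitted after collection starts */ }
// }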

Is it possible to implement an operator like delay but that also delays errors?

I've been trying for some time now to implement an extension function (just because it's easier for me) that is capable of delaying both normal item emissions and errors. The existing delay operators only delay normal item emissions; errors are delivered ASAP.
For context, I'm trying to imitate Android LiveData's behavior (kind of). LiveData is an observable-pattern implementation that is lifecycle aware. Its observers are only notified if they are in a state where they can process the emission. If they are not ready, the emission is cached in the LiveData and delivered as soon as they become ready.
I created a BehaviourSubject that emits the state of my Activities and Fragments when it changes. With that I created a delay operator like this:
fun <T> Flowable<T>.delayUntilActive(): Flowable<T> =
    delay { lifecycleSubject.toFlowable(BackpressureStrategy.LATEST).filter { it.isActive } }
and then use it like this
myUseCase.getFlowable(Unit)
    .map { list -> list.map { it.toDisplayModel() } }
    .delayUntilActive()
    .subscribe({
        view.displaySomethings(it)
    }, { })
    .addTo(disposables)
So even if myUseCase emits when the view is not ready to display somethings, the emission won't reach onNext() until the view does become ready. The problem is that I also want the view to displayError() when onError is triggered, but that too is lifecycle sensitive. If the view isn't ready, the app will crash.
So I'm looking for a way to delay both emissions and errors (onComplete would be good too). Is this possible?
I tried some things with zip, onErrorReturn and delay inside delay, but nothing seemed right. I'd be equally unsurprised whether this has a really easy solution I'm overlooking or turns out to be impossible. Any ideas are welcome.
Bonus: is there any better way to do this for Single and Completable too? Currently I'm just converting them to Flowable.
Thanks in advance!
You can handle the error via onErrorResumeNext, taking the same error and delaying it via delaySubscription until your desired signal to emit said error arrives:
source
    .onErrorResumeNext { error: Throwable ->
        Observable.error(error)
            .delaySubscription(lifecycleSubject.filter { it.isActive })
    }
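Applied to the question's Flowable-based extension, the same idea could look roughly like this (a sketch that reuses the asker's lifecycleSubject and isActive; these are assumptions, not tested against their code):

fun <T> Flowable<T>.delayUntilActive(): Flowable<T> {
    val active = lifecycleSubject
        .toFlowable(BackpressureStrategy.LATEST)
        .filter { it.isActive }

    return this
        .delay { active }                            // delay normal emissions until the view is active
        .onErrorResumeNext { error: Throwable ->
            Flowable.error<T>(error)
                .delaySubscription(active)           // delay the error in the same way
        }
}

For the Single/Completable bonus, converting to Flowable and applying the same extension (as you already do) seems like the simplest route.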

RxJava2 and Android complex observable chaining

I have been working with RxJava 2 for a while but recently came across a situation that has stumped me. I have a semi-complex chain of operations and wish to pass a "state object" down the chain.
There are 4 operations, and I wish to repeat operations 2 and 3 (serially, not together) until certain conditions are true. I know I can solve this by chaining each operation using andThen(), but that limits my ability to pass a state object down the chain without reaching outside of it.
The reason I need a state object is that I need to save an initial value during the first operation and compare it to a value received during operation 4 to determine whether the overall procedure was successful.
Any clues as to which RxJava 2 operators can help me achieve the proper repeat conditions for operations 2 and 3? I would prefer not to nest observables if possible.
You can keep your state in an AtomicReference<State> and use the repeatUntil operator.
AtomicReference<State> state = new AtomicReference<>();

Completable operation = Completable
    .create(emitter -> { /* do something and modify state */ emitter.onComplete(); })
    .repeatUntil(() -> state.get() == SATISFYING_CONDITION);
You can then easily chain these Completables with andThen().
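A rough Kotlin sketch of that shape, where State, the four operations and the stop condition are all hypothetical placeholders:

import io.reactivex.Completable
import java.util.concurrent.atomic.AtomicReference

data class State(val initial: Int = 0, val latest: Int = 0) {
    val isReady get() = latest >= initial
}

// Hypothetical operations; each one reads and/or updates the shared state.
fun operation1(state: AtomicReference<State>): Completable =
    Completable.fromAction { state.set(State(initial = 5)) }                 // save the initial value
fun operation2(state: AtomicReference<State>): Completable =
    Completable.fromAction { state.set(state.get().copy(latest = state.get().latest + 1)) }
fun operation3(state: AtomicReference<State>): Completable =
    Completable.complete()
fun operation4(state: AtomicReference<State>): Completable =
    Completable.fromAction { println("success = ${state.get().isReady}") }   // compare with step 1

fun main() {
    val state = AtomicReference(State())

    operation1(state)
        .andThen(
            operation2(state)
                .andThen(operation3(state))
                .repeatUntil { state.get().isReady }   // repeat operations 2 and 3 until the condition holds
        )
        .andThen(operation4(state))
        .subscribe()
}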

RxJava2 - Emitting items using PublishSubject

I have a scenario with subject1: PublishSubject and subject2: BehaviorSubject.
First I emit a single item to subject1, then I emit an item to subject2, but right after that I also want to emit a different item to subject1.
fun emittingItems() {
    subject1.onNext(functionA1)
    subject2.onNext(functionB)
    if (something) subject1.onNext(functionA2)
}
What happens is that I receive the items in this sequence: functionA1, functionA2, functionB.
Why do I get this behavior? How can I emit items in this sequence: functionA1, functionB, functionA2?
Subscribing to subjects:
val disposable = viewModel.subject1
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(this::someFunction)
disposables.add(disposable)
With observeOn(AndroidSchedulers.mainThread()) you schedule the propagation of events on the main thread. The scheduling itself is sequential, while each scheduled Runnable might handle more than one element added to the queue used for it.
It's a kind of race condition which will arise for sure when calling emittingItems() on the main thread itself and could arise when calling it from any other thread.
But since you're handling two different asynchronous streams, you cannot expect any ordering guarantee across the two different observers.
You can achieve the desired order by merging both sources into one stream:
Observable.merge(subject1, subject2)
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(this::someFunction)

Updating flatMap concurrent limit in the same subscription

I have an Android service that downloads files when a PublishSubject receives download events through EventBus and I want to limit the number of concurrent downloads based on a setting.
When the service is instantiated, it creates the PublishSubject and the following subscription:
PublishSubject<DownloadEvent> downloadsSubject = PublishSubject.create();

Subscription downloadSubscription = downloadsSubject
    .subscribeOn(Schedulers.io())
    .filter(event -> !isDownloaded(event))
    .flatMap(this::addDownloadToQueue)
    .flatMap(this::startDownload, preferences.getDownloadThreadsNumber())
    .onBackpressureBuffer()
    .subscribe();
But the setting is obtained only when the subscription is made, and changes to the setting have no effect.
Is there a way to update this value (or another approach) for next queue emissions without having to subscribe again?
Here is a runnable class with a custom operator which should do what you wanted.
There are several race conditions in such scenarios and I've tried to cover most of them. The operator doesn't coordinate backpressure so you may need onBackpressureBuffer.
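One alternative approach (not the linked custom operator, and with its own trade-off) is to expose the preference as a stream and rebuild the download chain with switchMap whenever the limit changes. This is only a sketch: it assumes RxJava 2, settingChanges is a hypothetical Observable<Int> of the current limit, and switching disposes the previous inner chain, so downloads that are in flight when the setting changes would be cancelled:

val disposable = settingChanges                        // hypothetical Observable<Int> of the current limit
    .switchMap { maxConcurrent ->
        downloadsSubject
            .observeOn(Schedulers.io())
            .filter { event -> !isDownloaded(event) }
            .flatMap { event -> addDownloadToQueue(event) }
            .flatMap({ event -> startDownload(event) }, maxConcurrent)
    }
    .subscribe()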
