Let's say we have a Channel like this
private val channel = Channel<String>(1)
And we are listening to the Channel elements like this
channel.receiveAsFlow().collect { myStr ->
println(myStr)
}
If I run something like this
private val scope = CoroutineScope(Dispatchers.Main + SupervisorJob())
...
fun sendMessage(myMessage: String) {
scope.launch {
channel.send(myMessage)
}
}
...
sendMessage("a")
sendMessage("b")
sendMessage("c")
sendMessage("d")
The output is going to be
a
b
c
d
Now, what I'm trying to achieve is this: if I send "b", the processing of the following elements in the channel is delayed by 1 second.
For example, if I do
...
sendMessage("a")
sendMessage("b")
sendMessage("c")
sendMessage("b")
sendMessage("d")
sendMessage("e")
The output that I would expect would be like
a // prints immediately
b // prints right after a
c // prints after 1 second
b // prints right after c
d // prints after 1 second
e // prints right after d
My question is, how would I achieve this behavior? I've been trying to add delay() here and there, but I didn't have any luck.
Here's an idea, but it feels a little hacky to me. trySend would not work with this approach; I'm not sure how trySend could make sense with your criteria, because it's supposed to return immediately with a result indicating whether the value was posted.
Here, send() suspends until the possible delay is over. If you don't want to wait for it, you'd have to launch a coroutine each time you send something.
Since the Channel "constructor" is not a true constructor, you can't subclass it. My workaround is to create a class that uses it as a delegate.
val backingChannel = Channel<String>(1)
val channel = object : Channel<String> by backingChannel {
var delayNext = false
val mutex = Mutex() // serializes sends so delayNext is read and written safely
override suspend fun send(element: String) = mutex.withLock {
if (delayNext) {
delay(1000) // the previous element was "b", so hold this one back for a second
}
delayNext = element == "b"
backingChannel.send(element)
}
}
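For completeness, here is a rough usage sketch of how I imagine it plugging into the question's code (reusing scope and sendMessage from above, which is an assumption on my part). Each send is still launched in its own coroutine, so the caller never waits out the 1-second delay itself, and launch order is preserved because everything is dispatched on the single-threaded Main dispatcher.
fun sendMessage(myMessage: String) {
scope.launch {
channel.send(myMessage) // the delegate's send() inserts the delay after a "b"
}
}
// Collection stays exactly as in the question:
// channel.receiveAsFlow().collect { myStr -> println(myStr) }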
I have an instance of CoroutineScope and log() function which look like the following:
private val scope = CoroutineScope(Dispatchers.IO)
fun log(message: String) = scope.launch { // launching a coroutine
println("$message")
TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
}
And I use this test code to launch coroutines:
repeat(5) { item ->
log("Log $item")
}
The log() function can be called from anywhere, on any thread, but not from a coroutine.
After a couple of runs I can see a non-sequential result like the following:
Log 0
Log 2
Log 4
Log 1
Log 3
The order of the printed logs can differ between runs. If I understand correctly, the execution order of coroutines is not guaranteed to be sequential, meaning the coroutine for item 2 can run before the coroutine for item 0.
I want the coroutines to be launched sequentially for each item, and the "some blocking operation" part to execute sequentially as well, so that the logs are always:
Log 0
Log 1
Log 2
Log 3
Log 4
Is there a way to make launching coroutines sequential? Or maybe there are other ways to achieve what I want?
Thanks in advance for any help!
One possible strategy is to use a Channel to join the launched jobs in order. You need to launch the jobs lazily so they don't start until join is called on them. trySend always succeeds when the Channel has unlimited capacity, and you need trySend so the function can be called from outside a coroutine.
private val lazyJobChannel = Channel<Job>(capacity = Channel.UNLIMITED).apply {
scope.launch {
consumeEach { it.join() }
}
}
fun log(message: String) {
lazyJobChannel.trySend(
scope.launch(start = CoroutineStart.LAZY) {
println("$message")
TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
}
)
}
Since Flows are sequential we can use MutableSharedFlow to collect and handle data sequentially:
class Info {
// Make sure replay (in case some messages are emitted before sharedFlow is collected and would otherwise be lost)
// and extraBufferCapacity are large enough to handle all the messages.
// If some messages are lost, try increasing either value.
private val sharedFlow = MutableSharedFlow<String>(replay = 10, extraBufferCapacity = 10)
private val scope = CoroutineScope(Dispatchers.IO)
init {
sharedFlow.onEach { message ->
println("$message")
TimeUnit.MILLISECONDS.sleep(100) // some blocking or suspend operation
}.launchIn(scope)
}
fun log(message: String) {
sharedFlow.tryEmit(message)
}
}
fun test() {
val info = Info()
repeat(10) { item ->
info.log("Log $item")
}
}
It always prints the logs in the correct order:
Log 0
Log 1
Log 2
...
Log 9
It works for all cases, but you need to make sure the replay and extraBufferCapacity parameters of MutableSharedFlow are large enough to handle all items.
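If you want dropped messages to be visible instead of silent, note that tryEmit returns a Boolean. A small sketch, reusing the sharedFlow from the Info class above (printing the dropped message is my own placeholder, not part of the original answer):
fun log(message: String) {
// tryEmit returns false when it cannot deliver the value without suspending
if (!sharedFlow.tryEmit(message)) {
println("Dropped log message: $message")
}
}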
Another approach is to use Dispatchers.IO.limitedParallelism(1) as the context of the CoroutineScope. It makes coroutines run sequentially as long as they don't call suspend functions and are launched from the same thread, e.g. the Main thread. So this solution works only with blocking (not suspend) operations inside the launch coroutine builder:
private val scope = CoroutineScope(Dispatchers.IO.limitedParallelism(1))
fun log(message: String) = scope.launch { // launching a coroutine from the same Thread, e.g. Main Thread
println("$message")
TimeUnit.MILLISECONDS.sleep(100) // only blocking operation, not `suspend` operation
}
It turns out that a single-threaded dispatcher is a FIFO executor, so limiting the CoroutineScope to one thread solves the problem.
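An equivalent sketch with an explicit single-threaded executor dispatcher (my own variation, assuming java.util.concurrent.Executors and TimeUnit are acceptable in your project):
private val logDispatcher = Executors.newSingleThreadExecutor().asCoroutineDispatcher()
private val scope = CoroutineScope(logDispatcher)
fun log(message: String) = scope.launch {
println(message)
TimeUnit.MILLISECONDS.sleep(100) // blocking work runs in submission (FIFO) order on the single thread
}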
Suppose I have some data that I need to transfer to the UI, and the data should be emitted with a certain delay, so I have a Flow in my ViewModel:
val myFlow = flow {
listOfSomeData.forEachIndexed { index, data ->
//....
emit(data.UIdata)
delay(data.requiredDelay)
}
}
Somewhere in the UI the flow is collected and displayed:
@Composable
fun MyUI(viewModel: ViewModel) {
val data by viewModel.myFlow.collectAsState(INITIAL_DATA)
//....
}
Now I want the user to be able to pause/resume emission by pressing some button. How can I do this?
The only thing I could come up with is an infinite loop inside Flow builder:
val pause = mutableStateOf(false)
//....
val myFlow = flow {
listOfSomeData.forEachIndexed { index, data ->
emit(data.UIdata)
delay(data.requiredDelay)
while (pause.value) { delay(100) } //looks ugly
}
}
Is there any other more appropriate way?
You can tidy up your approach by using a flow to hold the pause value and then suspending on it:
val pause = MutableStateFlow(false)
//....
val myFlow = flow {
listOfSomeData.forEachIndexed { index, data ->
emit(data.UIdata)
delay(data.requiredDelay)
if (pause.value) pause.first { isPaused -> !isPaused } // suspends
}
}
Do you need mutableStateOf for Compose? Maybe you can transform it into a flow, but I'm not sure what that looks like because I don't use Compose.
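If the flag really has to stay a Compose mutableStateOf, one possible bridge is snapshotFlow from androidx.compose.runtime, which turns reads of snapshot state into a Flow the emitter can suspend on. A sketch under that assumption (pauseState is a hypothetical name):
val pauseState = mutableStateOf(false) // Compose state toggled by the button
val pauseFlow: Flow<Boolean> = snapshotFlow { pauseState.value }
val myFlow = flow {
listOfSomeData.forEachIndexed { index, data ->
emit(data.UIdata)
delay(data.requiredDelay)
pauseFlow.first { isPaused -> !isPaused } // suspends while paused
}
}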
A bit of a creative rant below:
I was actually wondering about this and looking for a more flexible approach - ideally the source flow should suspend during emit. I noticed that this can be done with a buffered flow using BufferOverflow.SUSPEND, so I started fiddling with it.
I came up with something like this that lets me suspend any producer:
// assume source flow can't be accessed
val sourceFlow = flow {
listOfSomeData.forEachIndexed { index, data ->
emit(data.UIdata)
delay(data.requiredDelay)
}
}
val pause = MutableStateFlow(false)
val myFlow = sourceFlow
.buffer(Channel.RENDEZVOUS, BufferOverflow.SUSPEND)
.transform {
if (pause.value) pause.first { isPaused -> !isPaused }
emit(it)
}
.buffer()
It does seem like a small hack to me, and there's a downside: the source flow will still reach the next emit call after pausing, so value n gets suspended inside transform but the source only gets suspended on n+1.
If anyone has better idea on how to suspend source flow "immediately" I'd be happy to hear it.
If you don't need a specific delay you can use flow.filter { !pause.value }
I have a connection to a Bluetooth device that emits data every 250ms.
In my ViewModel I wish to subscribe to said data, run some suspending code (which takes approximately 1000ms to run) and then present the result.
The following is a simple example of what I'm trying to do.
Repository:
class Repo() : CoroutineScope {
private val supervisor = SupervisorJob()
override val coroutineContext: CoroutineContext = supervisor + Dispatchers.Default
private val _dataFlow = MutableSharedFlow<Int>()
private var dataJob: Job? = null
val dataFlow: Flow<Int> = _dataFlow
init {
launch {
var counter = 0
while (true) {
counter++
Log.d("Repo", "emmitting $counter")
_dataFlow.emit(counter)
delay(250)
}
}
}
}
The ViewModel:
class VM(app:Application):AndroidViewModel(app) {
private val repo = Repo() // assuming the Repo shown above is created/injected here
private val _reading = MutableLiveData<String>()
val latestReading: LiveData<String> = _reading
init {
viewModelScope.launch(Dispatchers.Main) {
repo.dataFlow
.map {
validateData(it) //this is where some validation happens, it is very fast
}
.flowOn(Dispatchers.Default)
.onEach {
delay(1000) //this is to simulate the work that is done
}
.flowOn(Dispatchers.IO)
.map {
transformData(it) //this will transform the data to be human readable
}
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
}
}
As you can see, when data comes in, I first validate it to make sure it is not corrupt (on the Default dispatcher), then I perform some operation on it (saving it and running a long algorithm that takes time, on the IO dispatcher), then I transform it so the application user can understand it (switching back to the Default dispatcher), and then I post it to the mutable live data so any subscriber from the UI layer can see the current data (on the Main dispatcher).
I have two questions
a) If validateData fails, how can I cancel the current emission and move on to the next one?
b) Is there a way for the dataFlow subscriber in the ViewModel to spawn new threads so the delayed parts can run in parallel?
the timeline right now looks like the first part, but I want it to run like the second one
Is there a way to do this?
I've tried using buffer(), which as the documentation states "Buffers flow emissions via channel of a specified capacity and runs collector in a separate coroutine", but when I set it to BufferOverflow.SUSPEND I get the behaviour of the first part, and when I set it to BufferOverflow.DROP_OLDEST or BufferOverflow.DROP_LATEST I lose emissions.
I have also tried using .conflate() like so:
repo.dataFlow
.conflate()
.map { ....
and even though the emissions start one after the other, the part with the delay still waits for the previous one to finish before starting the next one
When I use .flowOn(Dispatchers.Default) for that part I lose emissions, and when I use .flowOn(Dispatchers.IO) or something like Executors.newFixedThreadPool(4).asCoroutineDispatcher() they always wait for the previous one to finish before starting a new one.
Edit 2:
After about 3 hours of experiments this seems to work
viewModelScope.launch(Dispatchers.Default) {
repo.dataFlow
.map {
validateData(it)
}
.flowOn(Dispatchers.Default)
.map {
async {
delay(1000)
it
}
}
.flowOn(Dispatchers.IO) // NOTE (A)
.map {
val result = it.await()
transformData(result)
}
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
However, I still haven't figured out how to cancel the emission if validateData fails.
Also, for some reason it only works if I use Dispatchers.IO, Executors.newFixedThreadPool(20).asCoroutineDispatcher() or Dispatchers.Unconfined where I put note (A). Dispatchers.Main does not seem to work (which I expected), but Dispatchers.Default also does not seem to work and I don't know why.
First question: you cannot recover from an exception in the sense of continuing the collection of the flow. As per the docs, "Flow collection can complete with an exception when an emitter or code inside the operators throw an exception." Therefore, once an exception has been thrown, the collection completes (exceptionally). You can, however, handle the exception by either wrapping your collection inside a try/catch block or using the catch() operator.
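For illustration, a minimal sketch of the catch() route, reusing names from the question (the Log call is just a placeholder for whatever error handling you want):
repo.dataFlow
.map { transformData(validateData(it)) }
.catch { e -> Log.e("VM", "flow failed", e) } // handles the upstream exception, but collection still completes afterwards
.collect { _reading.postValue(it) }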
Second question: you cannot. While the producer (emitting) side can be made concurrent by using the buffer() operator, collection is always sequential.
As per your diagram, you need fan-out (one producer, many consumers), and you cannot achieve that with flows. Flows are cold: each time you collect from them, they start emitting from the beginning.
Fan-out can be achieved using channels, where you can have one coroutine producing values and many coroutines consuming those values.
Edit: Oh, you meant the validation failed, not the function itself. In that case you can use the filter() operator.
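A sketch of that, assuming validity can be expressed as a predicate (isValidReading is a hypothetical helper, not from the question):
repo.dataFlow
.filter { raw -> isValidReading(raw) } // corrupt readings are simply skipped, collection keeps going
.map { transformData(it) }
.collect { _reading.postValue(it) }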
The BroadcastChannel and ConflatedBroadcastChannel are getting deprecated. SharedFlow cannot help you in your use case either, because it emits values in a broadcast fashion: the producer waits until all consumers have consumed each value before producing the next one. That is still sequential; you need parallelism. You can achieve it using the produce() channel builder.
A simple example:
val scope = CoroutineScope(Job() + Dispatchers.IO)
val producer: ReceiveChannel<Int> = scope.produce {
var counter = 0
val startTime = System.currentTimeMillis()
while (isActive) {
counter++
send(counter)
println("producer produced $counter at ${System.currentTimeMillis() - startTime} ms from the beginning")
delay(250)
}
}
val consumerOne = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerOne consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerTwo = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerTwo consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerThree = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerThree consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
Observe production and consumption times.
I used a PublishSubject and I was sending messages to it and also I was listening for results. It worked flawlessly, but now I'm not sure how to do the same thing with Kotlin's coroutines (flows or channels).
private val subject = PublishProcessor.create<Boolean>()
...
fun someMethod(b: Boolean) {
subject.onNext(b)
}
fun observe() {
subject.debounce(500, TimeUnit.MILLISECONDS)
.subscribe { /* value received */ }
}
Since I need the debounce operator I really wanted to do the same thing with flows, so I created a channel, then tried to create a flow from that channel and listen for changes, but I'm not getting any results.
private val channel = Channel<Boolean>()
...
fun someMethod(b: Boolean) {
channel.send(b)
}
fun observe() {
flow {
channel.consumeEach { value ->
emit(value)
}
}.debounce(500, TimeUnit.MILLISECONDS)
.onEach {
// value received
}
}
What is wrong?
Flow is a cold asynchronous stream, just like an Observable.
All transformations on the flow, such as map and filter, do not trigger flow collection or execution; only terminal operators (e.g. single) trigger it.
The onEach method is just a transformation, so you should replace it with the terminal flow operator collect. You could also use a BroadcastChannel to get cleaner code:
private val channel = BroadcastChannel<Boolean>(1)
suspend fun someMethod(b: Boolean) {
channel.send(b)
}
suspend fun observe() {
channel
.asFlow()
.debounce(500)
.collect {
// value received
}
}
Update: at the time the question was asked there was an overload of debounce with two parameters (as in the question). That overload no longer exists; now there is one that takes a single timeout argument in milliseconds (Long).
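To my knowledge there is also an overload taking a kotlin.time.Duration, so either of these should work (someFlow is just a placeholder):
someFlow.debounce(500) // timeout in milliseconds (Long)
someFlow.debounce(500.milliseconds) // kotlin.time.Duration overload (import kotlin.time.Duration.Companion.milliseconds)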
It should be SharedFlow/MutableSharedFlow for PublishProcessor/PublishRelay:
private val _myFlow = MutableSharedFlow<Boolean>(
replay = 0,
extraBufferCapacity = 1, // you can increase
onBufferOverflow = BufferOverflow.DROP_OLDEST
)
val myFlow = _myFlow.asSharedFlow()
// ...
fun someMethod(b: Boolean) {
_myFlow.tryEmit(b)
}
fun observe() {
myFlow.debounce(500)
.onEach { }
// flowOn(), catch{}
.launchIn(coroutineScope)
}
And StateFlow/MutableStateFlow for BehaviorProcessor/BehaviorRelay.
private val _myFlow = MutableStateFlow<Boolean>(false)
val myFlow = _myFlow.asStateFlow()
// ...
fun someMethod(b: Boolean) {
_myFlow.value = b // same as _myFlow.emit(b) or _myFlow.tryEmit(b)
}
fun observe() {
myFlow.debounce(500)
.onEach { }
// flowOn(), catch{}
.launchIn(coroutineScope)
}
StateFlow must have an initial value; if you don't want that, this is a workaround:
private val _myFlow = MutableStateFlow<Boolean?>(null)
val myFlow = _myFlow.asStateFlow()
.filterNotNull()
MutableStateFlow uses an .equals comparison when setting a new value, so it does not emit the same value again and again (its conflation behaves like distinctUntilChanged()).
So MutableStateFlow ≈ BehaviorProcessor.distinctUntilChanged(). If you want exact BehaviorProcessor behavior then you can use this:
private val _myFlow = MutableSharedFlow<Boolean>(
replay = 1,
extraBufferCapacity = 0,
onBufferOverflow = BufferOverflow.DROP_OLDEST
)
ArrayBroadcastChannel in Kotlin coroutines is the one most similar to PublishSubject.
Like PublishSubject, an ArrayBroadcastChannel can have multiple subscribers, and all active subscribers are notified immediately.
Like PublishSubject, events pushed to this channel are lost if there are no active subscribers at that moment.
Unlike PublishSubject, backpressure is built into coroutine channels, and that is where the buffer capacity comes in. This number really depends on the use case the channel is being used for. For most normal use cases I just go with 10, which should be more than enough. If you push events to this channel faster than receivers can consume them, you can give it more capacity.
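A short sketch of that shape, reusing the question's someMethod/observe names (keeping in mind, as the next answer notes, that this API is deprecated in favor of SharedFlow; the capacity of 10 is the arbitrary value mentioned above):
private val channel = BroadcastChannel<Boolean>(capacity = 10)
fun someMethod(b: Boolean) {
channel.trySend(b) // lost if nobody is subscribed at this moment
}
fun observe(scope: CoroutineScope) {
channel.asFlow() // each collector gets its own subscription
.debounce(500)
.onEach { /* value received */ }
.launchIn(scope)
}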
Actually, BroadcastChannel is already obsolete; JetBrains changed their approach to use SharedFlow instead, which is much cleaner, easier to implement, and solves a lot of pain points.
Essentially, you can achieve the same thing like this.
class BroadcastEventBus {
private val _events = MutableSharedFlow<Event>()
val events = _events.asSharedFlow() // read-only public view
suspend fun postEvent(event: Event) {
_events.emit(event) // suspends until subscribers receive it
}
}
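And the subscribing side could look roughly like this (scope and the event handling are placeholders of mine, not part of the original answer):
val eventBus = BroadcastEventBus()
fun startListening(scope: CoroutineScope) {
eventBus.events
.onEach { event -> /* handle event */ }
.launchIn(scope)
}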
To read more about it, check out Roman Elizarov's Medium article:
"Shared flows, broadcast channels" by Roman Elizarov