How to define a dependency between two coroutines? - android

Consider the following code:
init {
    // coroutine 1
    // this is, and needs to be, in a separate coroutine because the collection runs indefinitely
    viewModelScope.launch {
        myService.someSharedFlow.collect {
            // handle values
        }
    }

    // coroutine 2
    viewModelScope.launch {
        // this must not be executed before the subscription to the SharedFlow in coroutine 1 is set up,
        // to make sure I don't miss any emitted values
        withContext(Dispatchers.IO) {
            myService.initialize() // will send a value through the flow after initialization
        }
    }
}
How can I let coroutine 2 wait until the subscription to the SharedFlow in coroutine 1 is set up?

If you want to wait for a collector to subscribe to your flow before pushing values into it, I see two solutions:
If you have a MutableSharedFlow, you can use its subscriptionCount property to know whether your flow has been subscribed to.
As @broot said, you can also use onSubscription on a SharedFlow. This does not require exposing a mutable API. However, if there can be multiple collectors and you want initialization to be triggered only once, you will have to add some manual checks on top of it, as in the sketch below.
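For example, here is a minimal sketch (my own, not part of the original answer) of such a once-only guard using an AtomicBoolean; the OnceInitializingService class and the values it emits are purely illustrative:
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.onSubscription
import java.util.concurrent.atomic.AtomicBoolean

class OnceInitializingService {
    private val _flow = MutableSharedFlow<Int>()
    private val initialized = AtomicBoolean(false)

    // Every new collector passes through onSubscription, but only the collector
    // that wins the compareAndSet triggers the one-time initialization.
    val flow: SharedFlow<Int> = _flow.onSubscription {
        if (initialized.compareAndSet(false, true)) {
            // The receiver here is the new subscriber's FlowCollector, so these
            // values go straight to that subscriber rather than through _flow.
            emit(0) // placeholder for whatever the real initialization produces
        }
    }
}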
Here is a sample program that shows how to use both solutions:
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*
import kotlin.time.Duration.Companion.seconds

fun main(): Unit = runBlocking {
    val flow = MutableSharedFlow<Int>()

    println(
        """
        Solution 1: subscriptionCount
        -----------------------------
        """.trimIndent())
    waitForSubscriptionCount(flow)

    println(
        """
        Solution 2: onSubscription
        -----------------------------
        """.trimIndent())
    reactToSubscription(flow)
}

suspend fun CoroutineScope.waitForSubscriptionCount(flow: MutableSharedFlow<Int>) {
    val collectJob = collectAfterDelay(flow)

    println("Waiting for subscription")
    flow.subscriptionCount.filter { it > 0 }.first()
    println("Subscription detected")

    flow.initialize()

    // Specific to this test program, to prevent it from hanging indefinitely
    collectJob.cancel()
}

suspend fun CoroutineScope.reactToSubscription(flow: SharedFlow<Int>) {
    println("Waiting for subscription")

    // This is optional. It helps to detect the "end" of the flow.
    val initialized = MutableStateFlow(false)

    val initializableFlow = flow.onSubscription {
        println("Subscription detected")
        initialize()
        initialized.emit(true)
    }
    val collectJob = collectAfterDelay(initializableFlow)

    // Specific to this test program, to prevent it from hanging indefinitely
    initialized.filter { it == true }.first()
    collectJob.cancel()
}

private fun CoroutineScope.collectAfterDelay(flow: Flow<Int>) = launch {
    delay(1.seconds)
    print("Start collecting")
    flow.collect { println("collected: $it") }
}

suspend fun FlowCollector<Int>.initialize() {
    for (i in 1..3) {
        emit(i)
        println("Emitted $i")
    }
}
The output is:
Solution 1: subscriptionCount
-----------------------------
Waiting for subscription
Start collectingSubscription detected
collected: 1
Emitted 1
collected: 2
Emitted 2
collected: 3
Emitted 3
Solution 2: onSubscription
-----------------------------
Waiting for subscription
Start collectingSubscription detected
collected: 1
Emitted 1
collected: 2
Emitted 2
collected: 3
Emitted 3
NOTE: Both solutions above should also allow you to move the initialization trigger inside your service and hide it completely from consumers. This way, code calling your service would not have to deal with your service's initialization.
EDIT: Here is an example of a service that triggers flow emission after a subscriber starts collecting:
class SubscriptionCountService {
    private val _flow = MutableSharedFlow<Int>()
    val flow = _flow.asSharedFlow()

    init {
        _flow.subscriptionCount.filter { it > 0 }
            .take(1)
            .onCompletion {
                println("First subscription detected: initialize")
                _flow.initialize()
            }
            .launchIn(CoroutineScope(Dispatchers.IO))
    }
}

fun main(): Unit = runBlocking {
    val flow = SubscriptionCountService().flow

    launch {
        delay(1.seconds)
        println("Start collecting")
        // Collect a limited number of entries to terminate the main program easily
        flow.take(3)
            .collect { println("collected: $it") }
    }
}
The output is:
Start collecting
collected: 1
Emitted 1
collected: 2
Emitted 2
collected: 3
Emitted 3
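Applied back to the ViewModel from the question, a minimal sketch of the subscriptionCount variant could look like the following. This assumes myService exposes someSharedFlow as a MutableSharedFlow, since only MutableSharedFlow has subscriptionCount; myService, someSharedFlow and initialize() come from the question, everything else is illustrative:
init {
    // coroutine 1: collect indefinitely
    viewModelScope.launch {
        myService.someSharedFlow.collect {
            // handle values
        }
    }

    // coroutine 2: suspend until coroutine 1 has actually subscribed, then initialize
    viewModelScope.launch {
        // needs kotlinx.coroutines.flow.filter and kotlinx.coroutines.flow.first
        myService.someSharedFlow.subscriptionCount
            .filter { it > 0 }
            .first()
        withContext(Dispatchers.IO) {
            myService.initialize() // safe now: the collector is already registered
        }
    }
}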

Related

Can I replace the Flow with StateFlow for a Counter that is Lifecycle Aware

I have counter logic using Flow in a ViewModel, with auto-increment.
class MainViewModel(
    private val savedStateHandle: SavedStateHandle
) : ViewModel() {
    val counterFlow = flow {
        while (true) {
            val value = savedStateHandle.get<Int>("SomeKey") ?: 0
            emit(value)
            savedStateHandle["SomeKey"] = value + 1
            delay(1000)
        }
    }
}
In the Activity
val counterFlowStateVariable = viewModel.counterFlow.collectAsStateWithLifecycle(0)
This counter only increments and counts while the app is active:
It stops incrementing when the app goes to the background and continues when it returns to the foreground, and it doesn't get reset. This is made possible by collectAsStateWithLifecycle.
It stops incrementing when the Activity is killed by the system and restores the state when the Activity comes back, so the counter value is not reset. This is made possible by savedStateHandle.
I'm wondering if I can use a StateFlow instead of a flow?
I would say you should. flow is cold, meaning it has no state, so previous values aren't stored. Because your source of emitted values is external (savedStateHandle) and you mix emitting and saving that value within the flow builder, you introduce a synchronization problem if more than one collector is active.
Try a small test:
// this value stands in for "savedStateHandle"
var index = 0
val myFlow = flow {
    while (true) {
        emit(index)
        index++
        delay(300)
    }
}
Now collect it three times:
launch {
    myFlow.collect {
        println("First: $it")
    }
}
delay(299)
launch {
    myFlow.collect {
        println("Second: $it")
    }
}
delay(599)
launch {
    myFlow.collect {
        println("Third: $it")
    }
}
You'll start noticing that some collectors read stale values (values that other collectors have already read and advanced past), meaning their save operation would use that stale value instead of the up-to-date one.
Using a StateFlow, you "centralize" the state read/update calls, making them independent of the number of active collectors.
var index = 0
val myFlow = MutableStateFlow(index)

launch {
    while (true) {
        index++
        myFlow.value = index
        delay(300)
    }
}
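One way to get a StateFlow while keeping the "only counts while collected" behaviour from the question is stateIn with SharingStarted.WhileSubscribed(). This is my own sketch, not part of the original answer; it reuses the "SomeKey" key and 1-second tick from the question:
class MainViewModel(
    private val savedStateHandle: SavedStateHandle
) : ViewModel() {
    // The same cold counter flow as in the question...
    private val counterSource = flow {
        while (true) {
            val value = savedStateHandle.get<Int>("SomeKey") ?: 0
            emit(value)
            savedStateHandle["SomeKey"] = value + 1
            delay(1000)
        }
    }

    // ...turned into a StateFlow. WhileSubscribed() keeps the "only ticks while
    // collected" behaviour, and there is now exactly one upstream collector,
    // so the savedStateHandle read/increment is no longer racy.
    val counterFlow: StateFlow<Int> = counterSource.stateIn(
        scope = viewModelScope,
        started = SharingStarted.WhileSubscribed(),
        initialValue = savedStateHandle.get<Int>("SomeKey") ?: 0
    )
}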

kotlin, how to run sequential background threads [duplicate]

I have an instance of CoroutineScope and a log() function which look like the following:
private val scope = CoroutineScope(Dispatchers.IO)

fun log(message: String) = scope.launch { // launching a coroutine
    println(message)
    TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
}
And I use this test code to launch coroutines:
repeat(5) { item ->
    log("Log $item")
}
The log() function can be called from anywhere, on any thread, but not from a coroutine.
After a couple of tests I can see a non-sequential result like the following:
Log 0
Log 2
Log 4
Log 1
Log 3
The order of the printed logs can differ. If I understand correctly, the execution of coroutines is not guaranteed to be sequential, meaning the coroutine for item 2 can be launched before the coroutine for item 0.
I want the coroutines to be launched sequentially for each item, and the "some blocking operation" to execute sequentially, so that I always get the following logs:
Log 0
Log 1
Log 2
Log 3
Log 4
Is there a way to make the launched coroutines run sequentially? Or are there other ways to achieve what I want?
Thanks in advance for any help!
One possible strategy is to use a Channel to join the launched jobs in order. You need to launch the jobs lazily so they don't start until join is called on them. trySend always succeeds when the Channel has unlimited capacity. You need to use trySend so it can be called from outside a coroutine.
private val lazyJobChannel = Channel<Job>(capacity = Channel.UNLIMITED).apply {
    scope.launch {
        consumeEach { it.join() }
    }
}

fun log(message: String) {
    lazyJobChannel.trySend(
        scope.launch(start = CoroutineStart.LAZY) {
            println(message)
            TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
        }
    )
}
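For reference, a self-contained sketch of this approach (my own wrapping; the SequentialLogger class name and the demo main() are purely illustrative):
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.channels.consumeEach
import java.util.concurrent.TimeUnit

class SequentialLogger {
    private val scope = CoroutineScope(Dispatchers.IO)

    // Jobs are consumed, and therefore started and joined, strictly in the
    // order in which they were sent to the channel.
    private val lazyJobChannel = Channel<Job>(capacity = Channel.UNLIMITED).apply {
        scope.launch {
            consumeEach { it.join() }
        }
    }

    fun log(message: String) {
        lazyJobChannel.trySend(
            scope.launch(start = CoroutineStart.LAZY) {
                println(message)
                TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
            }
        )
    }
}

fun main() {
    val logger = SequentialLogger()
    repeat(5) { item -> logger.log("Log $item") }
    Thread.sleep(1000) // crude wait so the background work finishes before the demo exits
}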
Since Flows are sequential, we can use a MutableSharedFlow to collect and handle the data sequentially:
class Info {
    // Make sure replay (in case some messages are emitted before sharedFlow is collected and would otherwise be lost)
    // and extraBufferCapacity are large enough to hold all the messages.
    // If some messages are lost, try increasing either of the values.
    private val sharedFlow = MutableSharedFlow<String>(replay = 10, extraBufferCapacity = 10)
    private val scope = CoroutineScope(Dispatchers.IO)

    init {
        sharedFlow.onEach { message ->
            println(message)
            TimeUnit.MILLISECONDS.sleep(100) // some blocking or suspend operation
        }.launchIn(scope)
    }

    fun log(message: String) {
        sharedFlow.tryEmit(message)
    }
}

fun test() {
    val info = Info()
    repeat(10) { item ->
        info.log("Log $item")
    }
}
It always prints the logs in the correct order:
Log 0
Log 1
Log 2
...
Log 9
It works for all cases, but you need to make sure the replay and extraBufferCapacity parameters of the MutableSharedFlow are large enough to hold all the items.
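If silently dropped messages are a concern, one option (my own suggestion, not part of the original answer) is a drop-in replacement for log() that checks the Boolean returned by tryEmit:
fun log(message: String) {
    // tryEmit returns false when the buffer is full and the value was dropped,
    // which at least makes the loss visible instead of silent.
    if (!sharedFlow.tryEmit(message)) {
        System.err.println("Dropped log message: $message")
    }
}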
Another approach is using Dispatchers.IO.limitedParallelism(1) as the context for the CoroutineScope. It makes the coroutines run sequentially as long as they don't call suspend functions and are launched from the same thread, e.g. the Main thread. So this solution works only with blocking (not suspend) operations inside the launch coroutine builder:
private val scope = CoroutineScope(Dispatchers.IO.limitedParallelism(1))

fun log(message: String) = scope.launch { // launched from the same thread, e.g. the Main thread
    println(message)
    TimeUnit.MILLISECONDS.sleep(100) // only a blocking operation, not a `suspend` operation
}
It turns out that a single-thread dispatcher acts as a FIFO executor, so limiting the CoroutineScope to a single thread of parallelism solves the problem.
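A runnable sketch of this last approach (again my own wrapping; SequentialIoLogger is just an illustrative name):
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import java.util.concurrent.TimeUnit

object SequentialIoLogger {
    // All coroutines launched in this scope share a single worker thread, so the
    // blocking bodies execute one after another in launch order.
    // Note: limitedParallelism may require @OptIn(ExperimentalCoroutinesApi::class)
    // on older kotlinx.coroutines versions.
    private val scope = CoroutineScope(Dispatchers.IO.limitedParallelism(1))

    fun log(message: String) = scope.launch {
        println(message)
        TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
    }
}

fun main() {
    repeat(5) { item -> SequentialIoLogger.log("Log $item") }
    Thread.sleep(1000) // crude wait so the background work finishes before the demo exits
}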


How do you make a subscriber to a Kotlin SharedFlow run operations in parallel?

I have a connection to a Bluetooth device that emits data every 250 ms.
In my ViewModel I wish to subscribe to that data, run some suspending code (which takes approximately 1000 ms) and then present the result.
The following is a simple example of what I'm trying to do.
Repository:
class Repo : CoroutineScope {
    private val supervisor = SupervisorJob()
    override val coroutineContext: CoroutineContext = supervisor + Dispatchers.Default

    private val _dataFlow = MutableSharedFlow<Int>()
    private var dataJob: Job? = null

    val dataFlow: Flow<Int> = _dataFlow

    init {
        launch {
            var counter = 0
            while (true) {
                counter++
                Log.d("Repo", "emitting $counter")
                _dataFlow.emit(counter)
                delay(250)
            }
        }
    }
}
The ViewModel:
class VM(app: Application) : AndroidViewModel(app) {
    private val repo = Repo() // however the repository is actually provided

    private val _reading = MutableLiveData<String>()
    val latestReading: LiveData<String> = _reading

    init {
        viewModelScope.launch(Dispatchers.Main) {
            repo.dataFlow
                .map {
                    validateData(it) // this is where some validation happens; it is very fast
                }
                .flowOn(Dispatchers.Default)
                .onEach {
                    delay(1000) // this is to simulate the work that is done
                }
                .flowOn(Dispatchers.IO)
                .map {
                    transformData(it) // this will transform the data to be human readable
                }
                .flowOn(Dispatchers.Default)
                .collect {
                    _reading.postValue(it)
                }
        }
    }
}
As you can see, when data comes in I first validate it to make sure it is not corrupt (on the Default dispatcher), then I perform some work on it (saving it and running a long algorithm, on the IO dispatcher), then I transform it so the application user can understand it (back on the Default dispatcher), and finally I post it to the MutableLiveData so that any subscriber from the UI layer can see the current data (on the Main dispatcher).
I have two questions:
a) If validateData fails, how can I cancel the current emission and move on to the next one?
b) Is there a way for the dataFlow subscriber in the ViewModel to spin up new coroutines so the delay parts can run in parallel?
The timeline right now looks like the first diagram, but I want it to run like the second one.
Is there a way to do this?
I've tried using buffer(), which as the documentation states "Buffers flow emissions via channel of a specified capacity and runs collector in a separate coroutine", but when I set it to BufferOverflow.SUSPEND I get the behaviour of the first diagram, and when I set it to BufferOverflow.DROP_OLDEST or BufferOverflow.DROP_LATEST I lose emissions.
I have also tried using .conflate() like so:
repo.dataFlow
    .conflate()
    .map { ....
and even though the emissions start one after the other, the part with the delay still waits for the previous one to finish before starting the next one.
When I use .flowOn(Dispatchers.Default) for that part, I lose emissions, and when I use .flowOn(Dispatchers.IO) or something like Executors.newFixedThreadPool(4).asCoroutineDispatcher(), they always wait for the previous one to finish before starting a new one.
Edit 2:
After about 3 hours of experiments, this seems to work:
viewModelScope.launch(Dispatchers.Default) {
    repo.dataFlow
        .map {
            validateData(it)
        }
        .flowOn(Dispatchers.Default)
        .map {
            async {
                delay(1000)
                it
            }
        }
        .flowOn(Dispatchers.IO) // NOTE (A)
        .map {
            val result = it.await()
            transformData(result)
        }
        .flowOn(Dispatchers.Default)
        .collect {
            _reading.postValue(it)
        }
}
However, I still haven't figured out how to cancel the emission if validateData fails.
And for some reason it only works when I use Dispatchers.IO, Executors.newFixedThreadPool(20).asCoroutineDispatcher() or Dispatchers.Unconfined where I put note (A); Dispatchers.Main does not seem to work (which I expected), but Dispatchers.Default also does not seem to work and I don't know why.
First question: you cannot recover from an exception in the sense of continuing the collection of the flow. As per the docs, "Flow collection can complete with an exception when an emitter or code inside the operators throw an exception", so once an exception has been thrown the collection is completed (exceptionally). You can, however, handle the exception by either wrapping your collection inside a try/catch block or using the catch() operator, as sketched below.
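A minimal sketch of the catch() option (my own, reusing repo.dataFlow and validateData from the question):
repo.dataFlow
    .map { validateData(it) } // may throw if the data is corrupt
    .catch { e ->
        // Runs when anything upstream throws. The flow still completes after this;
        // it is not resumed. Log the error, emit a fallback value, or rethrow here.
        println("data flow failed: $e")
    }
    .collect { /* handle values */ }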
Second question: you cannot. While the producer (emitting side) can be made concurrent by using the buffer() operator, collection is always sequential.
As per your diagram, you need fan-out (one producer, many consumers), and you cannot achieve that with flows. Flows are cold: each time you collect from them, they start emitting from the beginning.
Fan-out can be achieved using channels, where you can have one coroutine producing values and many coroutines consuming those values.
Edit: Oh, you meant the validation failed, not the function itself; in that case you can use the filter() operator, for example as sketched below.
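For instance (my own sketch, assuming a hypothetical isValid() check rather than an exception-throwing validateData()):
repo.dataFlow
    .filter { isValid(it) } // drop invalid readings and simply move on to the next emission
    .map { transformData(it) }
    .collect { /* present the result */ }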
BroadcastChannel and ConflatedBroadcastChannel are being deprecated. SharedFlow cannot help you in your use case, as it emits values in a broadcast fashion, meaning the producer waits until all consumers have consumed each value before producing the next one. That is still sequential; you need parallelism. You can achieve it using the produce() channel builder.
A simple example:
val scope = CoroutineScope(Job() + Dispatchers.IO)

val producer: ReceiveChannel<Int> = scope.produce {
    var counter = 0
    val startTime = System.currentTimeMillis()
    while (isActive) {
        counter++
        send(counter)
        println("producer produced $counter at ${System.currentTimeMillis() - startTime} ms from the beginning")
        delay(250)
    }
}

val consumerOne = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerOne consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}

val consumerTwo = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerTwo consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}

val consumerThree = scope.launch {
    val startTime = System.currentTimeMillis()
    for (x in producer) {
        println("consumerThree consumed $x at ${System.currentTimeMillis() - startTime} ms from the beginning.")
        delay(1000)
    }
}
Observe production and consumption times.

Testing RxJava repeatWhen with Mockk returnsMany

I'm trying to test multiple server responses with the MockK library, something like what I found in this answer for Mockito.
Here is my sample UseCase code, which every few seconds repeats a call to load the system from a remote server, and stops (onComplete is executed) when the remote system contains more users than the local one.
override fun execute(localSystem: System, delay: Long): Completable {
    return cloudRepository.getSystem(localSystem.id)
        .repeatWhen { repeatHandler -> // repeat every [delay] seconds
            repeatHandler.delay(delay, TimeUnit.SECONDS)
        }
        .takeUntil { // repeat until the remote count of users is greater than the local count
            return@takeUntil it.users.count() > localSystem.users.count()
        }
        .ignoreElements() // ignore onNext() calls and wait for an onComplete()/onError() call
}
To test this behavior, I'm mocking the cloudRepository.getSystem() method with the MockK library:
@Test
fun testListeningEnds() {
    every { getSystem(TEST_SYSTEM_ID) } returnsMany listOf(
        Single.just(testSystemGetResponse), // return the same amount of users as the local system has
        Single.just(testSystemGetResponse), // return the same amount of users as the local system has
        Single.just( // return a greater amount of users than the local system has
            testSystemGetResponse.copy(
                owners = listOf(
                    TEST_USER,
                    TEST_USER.copy(id = UUID.randomUUID().toString())
                )
            )
        )
    )
    useCase.execute(
        localSystem = TEST_SYSTEM,
        delay = 3L
    )
        .test()
        .await()
        .assertComplete()
}
As you can see, I'm using the returnsMany answer, which should return a different value on every call.
The main problem is that returnsMany returns the same first value every time, so .takeUntil {} never succeeds, which means onComplete() is never called for this Completable. How can I make returnsMany return a different value on each call?
You probably misunderstand how exactly .repeatWhen() works. You expect cloudRepository.getSystem(id) to be called every time a repetition is requested. That is not what happens: the repeated subscription is always made on the same instance of the mocked Single, the first Single.just(testSystemGetResponse) in your case.
How do you make sure getSystem() is called every time? Wrap your Single in Single.defer(). It's similar to Single.fromCallable(), but the difference is the return type of the passed lambda: the lambda passed to .defer() must return an Rx type (a Single in our case).
Final implementation (I have made a few changes to make it compile successfully):
data class User(val id: String)
data class System(val users: List<User>, val id: Long)

class CloudRepository {
    fun getSystem(id: Long) = Single.just(System(mutableListOf(), id))
}

class SO63506574(
    private val cloudRepository: CloudRepository
) {
    fun execute(localSystem: System, delay: Long): Completable {
        return Single.defer { cloudRepository.getSystem(localSystem.id) } // <-- defer
            .repeatWhen { repeatHandler ->
                repeatHandler.delay(delay, TimeUnit.SECONDS)
            }
            .takeUntil {
                return@takeUntil it.users.count() > localSystem.users.count()
            }
            .ignoreElements()
    }
}
And test (succeeds after ~8s):
class SO63506574Test {
    @Test
    fun testListeningEnds() {
        val TEST_USER = User("UUID")
        val TEST_SYSTEM = System(mutableListOf(), 10)
        val repository = mockk<CloudRepository>()
        val useCase = SO63506574(repository)
        val testSystemGetResponse = System(mutableListOf(), 10)

        every { repository.getSystem(10) } returnsMany listOf(
            Single.just(testSystemGetResponse), // return the same amount of users as the local system has
            Single.just(testSystemGetResponse), // return the same amount of users as the local system has
            Single.just( // return a greater amount of users than the local system has
                testSystemGetResponse.copy(
                    users = listOf(
                        TEST_USER,
                        TEST_USER.copy(id = UUID.randomUUID().toString())
                    )
                )
            )
        )

        useCase.execute(
            localSystem = TEST_SYSTEM,
            delay = 3L
        )
            .test()
            .await()
            .assertComplete()
    }
}
