I am using Reactive Extensions (RxJava 2) to perform an RPC call to a Bluetooth device, resulting in an incoming data stream, which I subsequently parse, also using Rx. The resulting API is a simple Flowable<DownloadedRecord>. For this, I am building on top of the Rx API of the Sweetblue library for Android.
My problem is that there is a race condition between 'requesting' the device to start streaming and subscribing to the stream in time to make sure no packets are missed.
I use a Completable to first perform an RPC call requesting that data streaming commence, andThen( readRecords ). A race condition occurs when some packets are emitted by Sweetblue before readRecords has had time to subscribe to this stream, thereby 'breaking' readRecords.
To abstract away from this concrete scenario, take the following standalone code:
val numbers = PublishSubject.create<Int>()
var currentTotal = 0
val sumToTen = numbers
.doOnNext { currentTotal += it }
.doOnNext { println( "Produced $it" ) }
.takeUntil { currentTotal >= 10 }
.doOnComplete { println( "Produced a total of $currentTotal." ) }
Completable.fromAction { numbers.onNext( 9 ) } // Mimic race condition.
.andThen( sumToTen )
.subscribe { println( "Observed: $it, Current total: $currentTotal" ) }
numbers.onNext( 1 )
The numbers.onNext( 9 ) call mimics the race condition. This number is never observed by sumToTen, since sumToTen is only subscribed to on the next line. Thus, the stream never completes.
After some investigating, I understand I can 'solve' this problem by using replay and connect.
val numbers = PublishSubject.create<Int>()
var currentTotal = 0
val sumToTen = numbers
.doOnNext { currentTotal += it }
.doOnNext { println( "Produced $it" ) }
.takeUntil { currentTotal >= 10 }
.doOnComplete { println( "Produced a total of $currentTotal." ) }
.replay( 1 ) // Always replay last item upon subscription.
Completable.fromAction { sumToTen.connect() }
.andThen( Completable.fromAction { numbers.onNext( 9 ) } )
.andThen( sumToTen )
.subscribe { println( "Observed: $it, Current total: $currentTotal" ) }
numbers.onNext( 1 )
Now the sumToTen stream completes: by first connecting to sumToTen prior to 'starting to stream data' (onNext( 9 )), the stream subscribes to numbers, so the intended side effects (updating currentTotal) occur. However, '9' is only observed when the replay buffer is big enough (in this case it is). For example, replacing replay( 1 ) with publish will still make the stream complete ("Produced a total of 10."), but '9' will not be observed.
I am not fully satisfied with this solution for two reasons:
This merely reduces the chance of the race condition occurring; how large to set the replay buffer is arbitrary.
This always keeps the specified number of elements in the replay buffer in memory, even though the intent is only to do so until the stream has been subscribed to.
Practically speaking, neither of these is a real problem, but it is an eyesore from a maintainability perspective: the code does not clearly communicate the intent.
Is there a better way to deal with this scenario? E.g.:
A replay operator which only replays for one subscriber (thus drops the cache once emitted for the first time).
An entirely different approach than what I explored here with publish/connect?
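For illustration, here is a minimal sketch of something close to the first suggestion, using RxJava 2's UnicastSubject, which buffers events until its single observer subscribes and then releases the buffer. It assumes you control the subject, which is not necessarily the case with the Sweetblue Flowable:
import io.reactivex.Completable
import io.reactivex.subjects.UnicastSubject

// Buffers events until its single observer subscribes, then replays the buffer once and drops it.
val numbers = UnicastSubject.create<Int>()
var currentTotal = 0
val sumToTen = numbers
    .doOnNext { currentTotal += it }
    .doOnNext { println( "Produced $it" ) }
    .takeUntil { currentTotal >= 10 }
    .doOnComplete { println( "Produced a total of $currentTotal." ) }

Completable.fromAction { numbers.onNext( 9 ) } // Emitted before subscription, but buffered.
    .andThen( sumToTen )
    .subscribe { println( "Observed: $it, Current total: $currentTotal" ) }
numbers.onNext( 1 ) // Completes with "Produced a total of 10."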
Related
I have a connection to a Bluetooth device that emits data every 250ms.
In my ViewModel I wish to subscribe to said data, run some suspending code (which takes approximately 1000ms to run) and then present the result.
The following is a simple example of what I'm trying to do.
Repository:
class Repo() : CoroutineScope {
private val supervisor = SupervisorJob()
override val coroutineContext: CoroutineContext = supervisor + Dispatchers.Default
private val _dataFlow = MutableSharedFlow<Int>()
private var dataJob: Job? = null
val dataFlow: Flow<Int> = _dataFlow
init {
launch {
var counter = 0
while (true) {
counter++
Log.d("Repo", "emmitting $counter")
_dataFlow.emit(counter)
delay(250)
}
}
}
}
The ViewModel:
class VM(app: Application) : AndroidViewModel(app) {
private val repo = Repo()
private val _reading = MutableLiveData<String>()
val latestReading: LiveData<String> = _reading
init {
viewModelScope.launch(Dispatchers.Main) {
repo.dataFlow
.map {
validateData(it) //this is where some validation happens, it is very fast
}
.flowOn(Dispatchers.Default)
.onEach {
delay(1000) //this is to simulate the work that is done,
}
.flowOn(Dispatchers.IO)
.map {
transformData(it) //this will transform the data to be human readable
}
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
}
}
As you can see, when data comes in, I first validate it to make sure it is not corrupt (on the Default dispatcher), then I perform some operation on it (saving and running a long algorithm that takes time, on the IO dispatcher), then I transform it so the application user can understand it (switching back to the Default dispatcher), and finally I post it to mutable live data so that, if there is a subscriber from the UI layer, they can see the current data (on the Main dispatcher).
I have two questions:
a) If validateData fails how can I cancel the current emission and move on to the next one?
b) Is there a way for the dataFlow subscriber working in the ViewModel to spawn new threads so the delay parts can run in parallel?
The timeline right now looks like the first part, but I want it to run like the second one.
Is there a way to do this?
I've tried using buffer(), which, as the documentation states, "Buffers flow emissions via channel of a specified capacity and runs collector in a separate coroutine." But when I set it to BufferOverflow.SUSPEND I get the behaviour of the first part, and when I set it to BufferOverflow.DROP_OLDEST or BufferOverflow.DROP_LATEST I lose emissions.
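For reference, the buffer() attempt described above would have looked roughly like this; the capacity value and placement are illustrative, not taken from the original code:
repo.dataFlow
    .buffer(capacity = 1, onBufferOverflow = BufferOverflow.DROP_OLDEST) // or SUSPEND / DROP_LATEST
    .map { validateData(it) }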
I have also tried using .conflate() like so:
repo.dataFlow
.conflate()
.map { ....
and even though the emissions start one after the other, the part with the delay still waits for the previous one to finish before starting the next one.
When I use .flowOn(Dispatchers.Default) for that part, I lose emissions, and when I use .flowOn(Dispatchers.IO) or something like Executors.newFixedThreadPool(4).asCoroutineDispatcher(), they always wait for the previous one to finish before starting a new one.
Edit 2:
After about 3 hours of experiments this seems to work
viewModelScope.launch(Dispatchers.Default) {
repo.dataFlow
.map {
validateData(it)
}
.flowOn(Dispatchers.Default)
.map {
async {
delay(1000)
it
}
}
.flowOn(Dispatchers.IO) // NOTE (A)
.map {
val result = it.await()
transformData(result)
}
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
However, I still haven't figured out how to cancel the emission if validateData fails.
Also, for some reason it only works if I use Dispatchers.IO, Executors.newFixedThreadPool(20).asCoroutineDispatcher() or Dispatchers.Unconfined where I put note (A); Dispatchers.Main does not seem to work (which I expected), but Dispatchers.Default also does not seem to work and I don't know why.
First question: You cannot recover from an exception in the sense of continuing the collection of the flow. As per the docs, "Flow collection can complete with an exception when an emitter or code inside the operators throw an exception." Therefore, once an exception has been thrown, the collection is completed (exceptionally). You can, however, handle the exception by either wrapping your collection inside a try/catch block or using the catch() operator.
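A minimal, self-contained sketch of both options, assuming a flow of Ints and a validation step that throws on bad input (the names are illustrative, not taken from the question):
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val dataFlow = flowOf(1, 2, -1, 3) // -1 stands in for a corrupt reading

    // Option 1: catch() handles the exception, but the flow still completes at that point.
    dataFlow
        .map { if (it < 0) error("corrupt reading: $it") else it }
        .catch { e -> println("completed exceptionally: $e") }
        .collect { println("collected $it") } // 1 and 2 are collected; 3 never is

    // Option 2: wrap the terminal operator in try/catch.
    try {
        dataFlow
            .map { if (it < 0) error("corrupt reading: $it") else it }
            .collect { println("collected $it") }
    } catch (e: IllegalStateException) {
        println("caught $e")
    }
}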
Second question: You cannot. While the producer (emitting side) can be made concurrent by using the buffer() operator, collection is always sequential.
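A minimal, self-contained sketch of that point: with buffer() the producer keeps emitting while the collector is busy, but the collector still handles values strictly one after another:
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    flow {
        repeat(5) {
            emit(it)
            delay(250)  // producer emits every 250 ms
        }
    }
        .buffer()       // producer no longer suspends waiting for the collector
        .collect {
            delay(1000) // collection is still sequential: one value at a time
            println("collected $it")
        }
}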
As per your diagram, you need fan-out (one producer, many consumers); you cannot achieve that with flows. Flows are cold: each time you collect from them, they start emitting from the beginning.
Fan-out can be achieved using channels, where you can have one coroutine producing values and many coroutines that consume those values.
Edit: Oh, you meant the validation failed, not the function itself; in that case you can use the filter() operator.
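In terms of the question's ViewModel chain, that could look roughly like the sketch below, where isValid is a hypothetical predicate wrapping the validation so that a failed check returns false instead of throwing:
repo.dataFlow
    .filter { isValid(it) }   // readings that fail validation are skipped, collection continues
    .map { transformData(it) }
    .flowOn(Dispatchers.Default)
    .collect { _reading.postValue(it) }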
The BroadcastChannel and ConflatedBroadcastChannel are being deprecated. SharedFlow cannot help you in your use case, as it emits values in a broadcast fashion, meaning the producer waits until all consumers consume each value before producing the next one. That is still sequential; you need parallelism. You can achieve that using the produce() channel builder.
A simple example:
val scope = CoroutineScope(Job() + Dispatchers.IO)
val producer: ReceiveChannel<Int> = scope.produce {
var counter = 0
val startTime = System.currentTimeMillis()
while (isActive) {
counter++
send(counter)
println("producer produced $counter at ${System.currentTimeMillis() - startTime} ms from the beginning")
delay(250)
}
}
val consumerOne = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerOne consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerTwo = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerTwo consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerThree = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerThree consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
Observe production and consumption times.
Hi, I have an RxJava Observable and a flatMap which I want to convert to a Kotlin coroutine Flow.
The RxJava Observable:
val startFuellingObservable: Observable<Void>
The subscription / flatMap:
subscriptions += view.startFuellingObservable
.onBackpressureLatest()
.doOnNext { view.showLoader(false) }
.flatMap {
if (!hasOpenInopIncidents()) {
//THIS API CALL RETURNS RX OBSERVABLE
startFuellingUseCase.execute(equipmentProvider.get())
} else {
val incidentOpenResponse = GenericResponse(false)
incidentOpenResponse.error = OPEN_INCIDENTS
Observable.just(incidentOpenResponse)
}
}
.subscribe(
{ handleStartFuellingClicked(view, it) },
{ onStartFuellingError(view) }
)
I have changed my Observable to a Flow:
val startFuellingObservable: Flow<Void>
As it is now a Flow, I am able to do this:
view.startFuellingObservable
.onEach { view.showLoader(false) }
*** I have made the API call return a Flow instead of an Observable.
But I am not sure how to do the rest of the flatMap using Flow.
Could you please suggest how to write the same code using Flow?
Thanks
R
Late answer, but I hope it may help others.
First of all, Flow comes from the kotlinx.coroutines library, so you definitely need to add the dependency
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.0'
which provides the kotlinx.coroutines.flow package.
Observable<T> from RxJava becomes Flow<T>.
RxJava's flatMap is flatMapMerge in the Kotlin Flow API.
flatMapMerge example:
val startTime = System.currentTimeMillis() // remember the start time
(1..3).asFlow().onEach { delay(100) } // a number every 100 ms
.flatMapMerge { requestFlow(it) }
.collect { value -> // collect and print
println("$value at ${System.currentTimeMillis() - startTime} ms from start")
}
result:
1: First at 136 ms from start
2: First at 231 ms from start
3: First at 333 ms from start
1: Second at 639 ms from start
2: Second at 732 ms from start
3: Second at 833 ms from start
There are three types of flatMap in the Flow API:
flatMapConcat
This operator is sequential and paired. Once the outerFlow emits once, the innerFlow must emit once before the final result is collected. Once either flow emits an Nth time, the other flow must emit an Nth time before the Nth flatMapResult is collected.
flatMapMerge
This operator has the fewest restrictions on emissions, but it can result in too many emissions. Every time the outerFlow emits a value, each of the innerFlow emissions is flatMapped from that value into the final flatMapResult to be collected. The final emission count is a multiplication of innerFlow and outerFlow emissions.
flatMapLatest
This operator cares only about the latest emitted results and does not process old emissions. Every time the outerFlow emits a value, it is flatMapped with the latest innerFlow value; every time the innerFlow emits a value, it is flatMapped with the latest outerFlow value. Thus the final emission count is between zero and innerFlow emissions times outerFlow emissions.
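For comparison, here is a minimal sketch of the other two operators, reusing a requestFlow helper in the spirit of the flatMapMerge example above (the helper itself is illustrative):
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Illustrative helper: each request emits two values with a delay in between.
fun requestFlow(i: Int): Flow<String> = flow {
    emit("$i: First")
    delay(500)
    emit("$i: Second")
}

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    // flatMapConcat: waits for each inner flow to complete before starting the next one.
    (1..3).asFlow().onEach { delay(100) }
        .flatMapConcat { requestFlow(it) }
        .collect { println("concat: $it") }

    // flatMapLatest: cancels the previous inner flow as soon as the outer flow emits again.
    (1..3).asFlow().onEach { delay(100) }
        .flatMapLatest { requestFlow(it) }
        .collect { println("latest: $it") }
}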
I want my code to work like this graph, but it does not ...
My Code:
private fun <T> register(cls: Class<T>): Flowable<Pair<T, Long>> {
return FlowableFromObservable(mRelay).onBackpressureBuffer(4).filter(
/* filter target event. */
EventPredictable(cls)
).cast(
/* cast to target event */
cls
).onBackpressureDrop {
Log.i(TAG, "drop event: $it")
}.concatMap { data ->
/* start interval task blocking */
val period = 1L
val unit = TimeUnit.SECONDS
MLog.d(TAG, "startInterval: data = $data")
Flowable.interval(0, period, unit).take(DURATION.toLong()).takeUntil(
getStopFlowable()
).map {
Pair(data, it)
}
}
}
private fun getStopFlowable(): Flowable<StopIntervalEvent> {
return RxBus.getDefault().register(StopIntervalEvent::class.java)
.toFlowable(BackpressureStrategy.LATEST)
}
When I send 140 events in 10 ms, my code drops 12 events, not the 140 - 4 = 136 events that I expect. Why doesn't my code work like the graph above? Thanks for your time and answers!
onBackpressureDrop is always ready to receive items, thus onBackpressureBuffer has no practical effect in your setup. onBackpressureBuffer(int) would fail on overflow, so you'd never see the expected behavior with it either. In addition, concatMap fetches 2 items upfront by default, so it will get source items 1 and 2.
Instead, try using the overload with a configurable overflow strategy:
mRelay
.toFlowable(BackpressureStrategy.MISSING)
.onBackpressureBuffer(4, null, BackpressureOverflowStrategy.DROP_LATEST)
.flatMap(data ->
Flowable.intervalRange(0, DURATION.toLong(), 0, period, unit)
.takeUntil(getStopFlowable())
.map(it -> new Pair(data, it))
, 1 // <--------------------------------------------------- max concurrency
);
I'm new to RxJava and can't figure out why my "zipped" observable doesn't emit items when I use two PublishSubjects with it. (As far as I know, the ZIP operator should "merge" two streams into one.)
val currentSubject = PublishSubject.create<Int>()
val maxSubject = PublishSubject.create<Int>()
currentSubject.onNext(1)
maxSubject.onNext(2)
currentSubject.onNext(1)
maxSubject.onNext(2)
Log.d("custom", "BINGO!")
val zipped = Observables.zip(currentSubject, maxSubject) { current, max -> "current : $current, max : $max " }
zipped.subscribe(
{ Log.d("custom", it) },
{ Log.d("custom", "BONGO!") },
{ Log.d("custom", "KONGO!") }
)
currentSubject.onComplete()
maxSubject.onComplete()
I'm expecting the items to show up in the "{ Log.d("custom", it) }" function, but that does not happen. What am I doing wrong?
Log output after running:
2019-06-25 22:25:36.802 3631-3631/ru.grigoryev.rxjavatestdeleteafter D/custom: BINGO!
2019-06-25 22:25:36.873 3631-3631/ru.grigoryev.rxjavatestdeleteafter D/custom: KONGO!
The issue here is not with your zip implementation, but with the default behavior of a PublishSubject. But first, let's back up.
Hot and Cold Observables
In Rx, there are two types of Observables: hot and cold. The most common type is a cold observable. A cold observable will not start emitting values until .subscribe() has been called on it.
val obs = Observable.fromIterable(listOf(1, 2, 3, 4))
obs.subscribe { print(it) }
// Prints 1, 2, 3, 4
A hot observable will emit values regardless of whether an observer has subscribed to it.
val subject = PublishSubject.create<Int>()
subject.onNext(1)
subject.onNext(2)
subject.subscribe { print(it) }
subject.onNext(3)
subject.onNext(4)
// Prints 3, 4
Notice how 1 and 2 were not printed. This is because a PublishSubject is a hot observable and it emitted 1 and 2 before it was subscribed to.
Back to your Question
In your example, your publish subjects emit 1 and 2 before they are subscribed to. To see them zipped together, move your code around:
val currentSubject = PublishSubject.create<Int>()
val maxSubject = PublishSubject.create<Int>()
Log.d("custom", "BINGO!")
val zipped = Observables.zip(currentSubject, maxSubject) { current, max -> "current : $current, max : $max " }
zipped.subscribe(
{ Log.d("custom", it) },
{ Log.d("custom", "BONGO!") },
{ Log.d("custom", "KONGO!") }
)
currentSubject.onNext(1)
maxSubject.onNext(2)
currentSubject.onNext(1)
maxSubject.onNext(2)
currentSubject.onComplete()
maxSubject.onComplete()
I'm converting a very complex write/read/write cycle written on the native BLE stack and am wondering if the following pattern is feasible for handling it in RxAndroidBLE (code is Kotlin):
fun sendCommandList(connection: RxBleConnection, commands: Array<String>) {
Observable.fromArray(commands)
.flatMapIterable { it.asIterable() } // convert to individual commands
.map { it.toByteArray() }
.map {
connection.writeCharacteristic(TX_CHAR_UUID, it)
.flatMap { bytes -> connection.readCharacteristic((RX_CHAR_UUID)) }
.flatMap { bytes -> val ackNackBytes = processResponse(bytes); connection.writeCharacteristic(TX_CHAR_UUID, ackNackBytes) }
}
.subscribeBy( onError = { }, onComplete = { }, onNext = { })
}
I'm just trying to work out the code before I get access to the hardware, so I can't test this at the moment and am wondering if it's feasible. I'm more worried about the read portion, in case the complete response does not arrive within one readCharacteristic(). If not, would a long-running read pumping into a buffer work instead, where the buffer is handled one byte at a time and only the bytes that make up a valid response are removed and read?
Thoughts? This seems like a common usage pattern, but I've been unable to find anything like it in the samples.