Avoiding same-pool deadlocks when using Flowable in Reactive Extensions - android

While subscribing to a Reactive Extensions Flowable stream, I noticed the stream halts/hangs (no more future items are emitted, and no error is returned) after 128 items have been returned.
val download: Flowable<DownloadedRecord> = sensor.downloadRecords()
download
    .doOnComplete { Log.i( "TEST", "Finished!" ) }
    .subscribe(
        { record ->
            Log.i( "TEST", "Got record: ${record.record.id}; left: ${record.recordsLeft}" )
        },
        { error ->
            Log.i( "TEST", "Error while downloading records: $error" )
        } )
Most likely, this is related to Reactive Extensions. I discovered the default buffer size of Flowable is set to 128; unlikely to be a coincidence.
While trying to understand what is happening, I ran into the following documentation on Flowable.subscribeOn.
If there is a create(FlowableOnSubscribe, BackpressureStrategy) type source up in the chain, it is recommended to have requestOn false to avoid same-pool deadlock because requests may pile up behind an eager/blocking emitter.
Although I do not quite understand what a same-pool deadlock is in this situation, it looks like something similar is happening to my stream.
1. What is a same-pool deadlock in Reactive Extensions? What would be a minimal code sample to recreate it (on Android)?
Currently at a loss, I tried applying .subscribeOn( Schedulers.io(), false ) before .subscribe, without really understanding what this does, but my stream still locks up after 128 items have been emitted.
2. How could I go about debugging this issue, and how/where can it be resolved?

What is a same-pool deadlock in Reactive Extensions?
RxJava uses single-threaded executors in its standard schedulers. When a blocking or eager source is emitting items, it occupies that single thread, and even though the downstream requests more items, subscribeOn schedules those requests behind the currently running/blocking code, which therefore never gets notified about the new opportunities to emit.
What would be a minimal code sample to recreate it (on Android)?
Why would you want code that deadlocks?
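That said, the mechanism can be reproduced without RxJava at all. Below is a minimal plain-Java sketch (no Android or RxJava dependencies; class and variable names are my own): a task running on a single-threaded executor submits the downstream "request" to that same executor and then blocks waiting for it, so the request can never run. The await is given a timeout only so the demo terminates instead of hanging forever.

```java
import java.util.concurrent.*;

public class SamePoolDeadlock {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch request = new CountDownLatch(1);

        Future<Boolean> emitter = pool.submit(() -> {
            // Schedule the downstream "request" behind ourselves on the SAME pool.
            pool.submit(request::countDown);
            // Block waiting for it: it can never run, because this task
            // occupies the pool's only thread. Timeout so the demo ends.
            return request.await(500, TimeUnit.MILLISECONDS);
        });

        // Prints false: the request piled up behind the blocking "emitter".
        System.out.println("request delivered: " + emitter.get());
        pool.shutdownNow();
    }
}
```

With a two-thread pool the same code prints true, which is exactly why moving the request handling to a different pool (or setting requestOn accordingly) avoids the deadlock.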
I tried applying .subscribeOn( Schedulers.io(), false )
What is your actual flow? You likely applied subscribeOn too far from the source, so it has no effect. The most reliable approach is to put it right next to create.
How could I go about debugging this issue, and how/where can it be resolved?
Put doOnNext and doOnRequest at various places and see where the signals disappear.

Related

Is it possible to implement an operator like delay but that also delays errors?

I've been trying for some time now to implement an extension function (just because it's easier for me) that is capable of delaying both normal item emissions and errors. The existing delay operators only delay normal item emissions; errors are delivered ASAP.
For context, I'm trying to imitate an Android LiveData's behavior (kind of). LiveData is an observable pattern implementation that is lifecycle aware. Its observers are only notified if they are in a state where they can process that emission. If they are not ready, the emission is cached in the LiveData and delivered as soon as they become ready.
I created a BehaviorSubject that emits the state of my Activities and Fragments when it changes. With that I created a delay operator like this:
fun <T> Flowable<T>.delayUntilActive(): Flowable<T> = delay { lifecycleSubject.toFlowable(BackpressureStrategy.LATEST).filter { it.isActive } }
and then use it like this
myUseCase.getFlowable(Unit)
    .map { it.map { it.toDisplayModel() } }
    .delayUntilActive()
    .subscribe({
        view.displaySomethings(it)
    }, { }).addTo(disposables)
So even if myUseCase emits when the view is not ready to display somethings, the emission won't reach onNext() until the view does become ready. The problem is that I also want the view to displayError() when onError is triggered, but that too is lifecycle sensitive. If the view isn't ready, the app will crash.
So I'm looking for a way to delay both emissions and errors (onComplete would be good too). Is this possible?
I tried some things with zip, onErrorReturn, delay inside delay, but nothing seemed right. I'd be equally unimpressed if this had a really easy solution I'm overlooking, or is impossible. Any ideas are welcome.
Bonus: any better way to do that for Single and Completable too? Currently I'm just converting them to Flowable.
Thanks in advance!
You can handle the error via onErrorResumeNext, then take the same error and delay it via delaySubscription until your desired signal to emit said error happens:
source
    .onErrorResumeNext({ error ->
        Observable.error(error)
            .delaySubscription(lifecycleSubject.filter { it.isActive })
    })
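The core of the trick, gating error delivery behind a readiness signal, can also be sketched without RxJava. Below is a hedged plain-Java simulation (class and variable names are my own, not from the question): a failed CompletableFuture stands in for the erroring stream, and a latch stands in for the "isActive" lifecycle signal; the error is re-routed through a stage that waits for the latch first, mirroring how delaySubscription holds back the re-wrapped error.

```java
import java.util.concurrent.*;

public class DelayedErrorGate {
    public static void main(String[] args) throws Exception {
        CountDownLatch ready = new CountDownLatch(1); // the "isActive" signal
        CompletableFuture<Integer> source =
                CompletableFuture.failedFuture(new IllegalStateException("boom"));

        // Re-route the failure through a stage that first waits for "ready",
        // analogous to Observable.error(e).delaySubscription(activeSignal).
        CompletableFuture<Integer> gated = source.exceptionallyAsync(e -> {
            try {
                ready.await(); // hold the error back until the view is active
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            throw new CompletionException(e);
        });

        // The gating stage is blocked on the latch, so nothing is delivered yet.
        System.out.println("error delivered before ready: " + gated.isDone());

        ready.countDown(); // view becomes active
        try {
            gated.join();
        } catch (CompletionException e) {
            System.out.println("error delivered after ready: true");
        }
    }
}
```

The same shape works for values as well, which is why the delay-until-active operator in the question can cover onNext, onError and onComplete uniformly.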

How to keep track of the number of emits in flowable?

Let's say I have a flowable, that some view is subscribed to and it's listening to the changes. I would like to add a custom method based on only the first emit of the flowable, but also keeping the other methods that listen to the changes. What is the best way to approach it?
The naive approach I have is to duplicate the flowable and convert it to Single or Completable to get the results, but it seems redundant.
Thank you.
Use .take(1). BTW also make sure that flowable is shared (otherwise some observers will miss events).
I think you can use the share operator for that. The share operator multicasts the source (it is publish().refCount() under the hood), so the upstream is subscribed once and every subscriber receives the same emissions.
val o = Flowable.fromArray(1, 2, 3, 4, 5)
    .map {
        println("heavy operation")
        it + it
    }
    .share() // publish the changes
    .subscribeOn(Schedulers.computation()) // for testing; change to what you want

o.take(1).subscribe { println("Special work: $it") } // take one
o.subscribe { println("Normal work: $it") }
Result
heavy operation
Special work: 2
Normal work: 2
heavy operation
Normal work: 4
heavy operation
Normal work: 6
heavy operation
Normal work: 8
heavy operation
Normal work: 10

RxJava - how to see Flowable and backpressure in action?

I am writing a sample app, that processes the bitmap. The process can be controlled by a slider, so when the slider position is changed, I generate another bitmap.
When the user drags the slider, it emits some 10-20 events per second. Processing the bitmap takes about 1 second, so the processing queue quickly becomes clogged with requests.
It seems like a good backpressure example to me, but I couldn't figure out how to use stuff like Flowable and BackpressureStrategy to handle it properly. Moreover, I couldn't make this small sample work:
val pubsub = PublishSubject.create<Int>()
pubsub
    .toFlowable(BackpressureStrategy.LATEST)
    .observeOn(computation())
    .subscribe {
        Timber.d("consume %d - %s", it, Thread.currentThread().name)
        Thread.sleep(3000)
    }

for (i in 0..1000) {
    Timber.d("emit %d - %s", i, Thread.currentThread().name)
    pubsub.onNext(i)
}
Well, I expect this code to emit 1000 integers through the PublishSubject, but since processing each one takes 3 seconds, 999 of the integers should be dropped; only "0" and "1000" should be processed...
But in the logs I see that all my integers are slowly processed, one by one, and the backpressure strategy is ignored. Actually, the toFlowable(...) expression seems to do nothing. With or without backpressure, I see 1000 emissions followed by several minutes of consumption.
What am I missing here? How can I drop the intermediate elements and consume only the latest available?
solved:
observeOn(computation()) is actually observeOn(computation(), delayError = false, bufferSize = 128). To see real backpressure kick in sooner, decrease the bufferSize when you call observeOn(...).
This might be related to observeOn(computation()): the emitted items are queued for the computation thread, so from the subject's point of view there is no backpressure on the Flowable yet.
Try putting the thread change before toFlowable(LATEST), or use a different Scheduler which is not as forgiving, or push even more items into pubsub.
You could also use observeOn(Scheduler scheduler, boolean delayError, int bufferSize) to enforce a smaller bufferSize.
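The LATEST strategy itself is easy to simulate without RxJava. The following is a minimal, deterministic plain-Java sketch (names are my own): a one-slot "mailbox" that a fast producer keeps overwriting while a slow consumer polls only occasionally; the 400-step polling interval is an arbitrary stand-in for the 3-second sleep in the question.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class LatestMailbox {
    public static void main(String[] args) {
        // One-slot mailbox: writing replaces the previous value, like LATEST.
        AtomicReference<Integer> mailbox = new AtomicReference<>();
        List<Integer> consumed = new ArrayList<>();

        for (int i = 0; i <= 1000; i++) {
            mailbox.set(i);     // fast producer: overwrite, keep only the latest
            if (i % 400 == 0) { // slow consumer polls only occasionally
                consumed.add(mailbox.getAndSet(null));
            }
        }
        // Prints [0, 400, 800]: all intermediate values were dropped.
        System.out.println(consumed);
    }
}
```

In the question's code the observeOn buffer (128 slots) sits between the LATEST mailbox and the consumer, which is why all 1000 values appear to be delivered until that buffer is shrunk.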

Retrofit2/RxJava2, valve(), FlowableTransformers.valve not working

I tried to make a shorter version of the code I have, but it seemed not enough to make it readable, so I ended up with the following:
Flowable
    .defer( return new outer observable upon subscribing )
    .retryWhen( ->
        Flowable.flatMap throwable when receive a 400
            valve.onNext(false)
            -> Flowable.defer( return new Network_A_Observable )
                .retryWhen( ->
                    Flowable.flatMap throwable when receive 500 )
                    valve.onNext(true)
    )
    .compose( flowable valve (I intentionally put a false here) )
    .subscribe( new subscriber )
This is the very short version of my long-non-lambda code that performs a series of network calls and retrying appropriately on certain conditions.
I have no problems with retrying and emitting a new outer observable for each retry(as this was solved on my other post, but not sure if it has something to do with my current issue), now I noticed that when I perform, say, 2 asynchrounous network calls, it returns two different values, although those two values are valid, and ofcourse the stream of observable works as expected and that was what I intended to do(retry when some error happens), now I realize that I should "PAUSE" the next stream of calls, searching like "how to pause an observable, until I found FlowableTransformer.valve(), which is available on RxJava2Extensions, I ran a code snippet from a blog where it pauses/continues a stream, but when I tried it on the above code, event if the valve is set to false, it keeps on finishing the whole stream of flowable.
Am i missing something?
Any help would be greatly appreciated.
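For intuition, the contract FlowableTransformers.valve is supposed to honor can be sketched in plain Java (this is my own simplified, single-threaded sketch, not the library's implementation): items pass through while the valve is open, are buffered while it is closed, and the buffer drains when it reopens. If a composed valve behaves differently, a likely culprit is that valve.onNext(true/false) is being called on a different valve instance, or after the stream has already raced past the compose point.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical single-threaded sketch of the valve contract:
// pass items while open, buffer while closed, drain on reopen.
public class ValveSketch {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final Consumer<Integer> downstream;
    private boolean open = true;

    ValveSketch(Consumer<Integer> downstream) { this.downstream = downstream; }

    void onNext(int item) {
        if (open) downstream.accept(item);
        else buffer.add(item); // closed: hold the item back
    }

    void setOpen(boolean open) {
        this.open = open;
        while (open && !buffer.isEmpty()) downstream.accept(buffer.poll()); // drain
    }

    public static void main(String[] args) {
        List<Integer> delivered = new ArrayList<>();
        ValveSketch valve = new ValveSketch(delivered::add);
        valve.onNext(1);        // open: delivered immediately
        valve.setOpen(false);   // like valve.onNext(false) in the question
        valve.onNext(2);
        valve.onNext(3);        // closed: buffered, not delivered
        System.out.println("while closed: " + delivered);
        valve.setOpen(true);    // like valve.onNext(true)
        System.out.println("after reopen: " + delivered);
    }
}
```
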

Limit throughput with RxJava

The case I'm into right now is quite hard to explain so I will write a simpler version just to explain the issue.
I have an Observable.from() which emits a sequence of files defined by an ArrayList of files. All of these files should be uploaded to a server. For that I have a function that does the job and returns an Observable.
Observable<Response> uploadFile(File file);
When I run this code it goes crazy: Observable.from() emits all of the files and they are all uploaded at once, or at least up to the max number of threads it can handle.
I want to have a max of 2 file uploads in parallel. Is there any operator that can handle this for me?
I tried buffer, window and some others, but they seem to only emit two items together instead of keeping two parallel file uploads running constantly. I also tried to set a max thread pool on the uploading part, but this cannot be used in my case.
There should be a simple operator for this right? Am I missing something?
I think all files are uploaded in parallel because you're using flatMap(), which subscribes to all inner observables simultaneously. Instead you should use concatMap(), which runs one transformation after another. And to run two parallel uploads you need to call window(2) on your files observable and then invoke flatMap() as you did in your code.
Observable<Response> responses =
    files
        .window(2)
        .concatMap(windowFiles ->
            windowFiles.flatMap(file -> uploadFile(file))
        );
UPDATE:
I found a better solution, which does exactly what you want. There's an overload of flatMap() that accepts the max number of concurrent inner subscriptions.
Observable<Response> responses =
    files
        .onBackpressureBuffer()
        .flatMap(file -> {
            return uploadFile(file).subscribeOn(Schedulers.io());
        }, 2);
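The effect of that maxConcurrency argument can be demonstrated without RxJava: a fixed pool of 2 threads plays the same role, admitting at most two "uploads" at a time while the rest wait their turn. A minimal plain-Java sketch (names are my own; the sleep stands in for uploadFile):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class TwoAtATime {
    public static void main(String[] args) throws Exception {
        // A 2-thread pool plays the role of flatMap's maxConcurrency = 2.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();

        for (int i = 0; i < 6; i++) {
            pool.submit(() -> {
                int now = active.incrementAndGet();       // uploads in flight
                maxSeen.accumulateAndGet(now, Math::max); // track the peak
                try { Thread.sleep(50); }                 // simulated upload
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                active.decrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // The pool has only two threads, so the peak can never exceed 2.
        System.out.println("at most two in flight: " + (maxSeen.get() <= 2));
    }
}
```

flatMap(..., 2) achieves the same cap declaratively: it subscribes to at most two inner observables at once and subscribes to the next one only as an earlier one completes.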
