What is the difference between
ObservableTransformer {
    Observable.merge(
        it.ofType(x).compose(transformerherex),
        it.ofType(y).compose(transformerherey)
    )
}
and
ObservableTransformer {
    it.publish { shared ->
        Observable.merge(
            shared.ofType(x).compose(transformerherex),
            shared.ofType(y).compose(transformerherey)
        )
    }
}
When I run my code with these two, I get the same results. What does publish do here?
The difference is that the first transformer will subscribe to the upstream twice for a single subscription from the downstream, duplicating any side effects of the upstream, which is usually not wanted:
Observable<Object> mixedSource = Observable.<Object>just("a", 1, "b", 2, "c", 3)
        .doOnSubscribe(s -> System.out.println("Subscribed!"));

mixedSource.compose(f ->
        Observable.merge(
            f.ofType(Integer.class).compose(g -> g.map(v -> v + 1)),
            f.ofType(String.class).compose(g -> g.map(v -> v.toUpperCase()))
        )
)
.subscribe(System.out::println);
will print
Subscribed!
2
3
4
Subscribed!
A
B
C
The side effect represented here is the printout of "Subscribed!". Depending on the actual work in a real source, that could mean sending an email twice or retrieving the rows of a database table twice. With this particular example, you can also see that even though the source values are interleaved by type, the output contains them separately.
In contrast, publish(Function) establishes a single subscription to the source per end subscriber, so any side effects at the source happen only once.
mixedSource.publish(f ->
        Observable.merge(
            f.ofType(Integer.class).compose(g -> g.map(v -> v + 1)),
            f.ofType(String.class).compose(g -> g.map(v -> v.toUpperCase()))
        )
)
.subscribe(System.out::println);
which prints
Subscribed!
A
2
B
3
C
4
because the source is subscribed once and each item is multicast to the two "arms" of the .ofType().compose().
The publish operator converts your Observable into a ConnectableObservable.
Let's see what a ConnectableObservable means: suppose you want to subscribe to an observable multiple times and serve the same items to each subscriber. For that, you need a ConnectableObservable.
Example (in C#, using Rx.NET):
var period = TimeSpan.FromSeconds(1);
var observable = Observable.Interval(period).Publish();
observable.Connect();
observable.Subscribe(i => Console.WriteLine("first subscription : {0}", i));
Thread.Sleep(period);
observable.Subscribe(i => Console.WriteLine("second subscription : {0}", i));
output:
first subscription : 0
first subscription : 1
second subscription : 1
first subscription : 2
second subscription : 2
In this case, we are quick enough to subscribe before the first item is published, but only on the first subscription. The second subscription subscribes late and misses the first publication.
We could delay the invocation of the Connect() method until after all subscriptions have been made. That way, even with the call to Thread.Sleep, we do not actually subscribe to the underlying sequence until both subscriptions are in place. This would be done as follows:
var period = TimeSpan.FromSeconds(1);
var observable = Observable.Interval(period).Publish();
observable.Subscribe(i => Console.WriteLine("first subscription : {0}", i));
Thread.Sleep(period);
observable.Subscribe(i => Console.WriteLine("second subscription : {0}", i));
observable.Connect();
output:
first subscription : 0
second subscription : 0
first subscription : 1
second subscription : 1
first subscription : 2
second subscription : 2
So using a ConnectableObservable, we have a way to control when to let the Observable emit items.
Example taken from : http://www.introtorx.com/Content/v1.0.10621.0/14_HotAndColdObservables.html#PublishAndConnect
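Since the original question is about RxJava, the same idea looks roughly like this there (a minimal sketch, assuming RxJava 2's Observable and ConnectableObservable; the output labels just mirror the C# example):
ConnectableObservable<Long> ticks =
        Observable.interval(1, TimeUnit.SECONDS).publish();

// Neither subscribe() call starts the interval on its own; nothing is emitted yet.
ticks.subscribe(i -> System.out.println("first subscription : " + i));
ticks.subscribe(i -> System.out.println("second subscription : " + i));

// Only now does the underlying interval start, and both observers see the same items.
ticks.connect();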
EDIT
According to the 180th slide in this link:
Another property of publish is that if an observer starts observing 10 seconds after the observable began emitting items, it only gets the items emitted after that point (the time of subscription), not all of the items. As I understood from the slides, publish is typically used for UI events, and it makes sense that an observer should only receive the events that occurred after it subscribed, NOT all the events that happened before.
Hope it helps.
Related
I'm looking to implement a batching mechanism before an API post, for some simple event collection and logging.
Since this is Android, I would also like to handle lifecycle events: what is the way to manually flush the buffered window if the service is stopped but the count or time threshold has not been hit yet?
For example, I have a PublishSubject (subject), create a Flowable from it and then perform a window operation on it like so:
subject.toFlowable(BackpressureStrategy.BUFFER)
    .window(30, TimeUnit.SECONDS, 20, true)
    .flatMapSingle { it.toList() }
    .subscribe(this::send)
If my service/app is paused or killed, I'd like to just send what is in the buffer.
The problem you face is stopping observation when necessary and flushing the items currently in the window. The documentation for the Flowable.window() operator says this:
When the source Publisher completes or encounters an error, the resulting Publisher emits the current window and propagates the notification from the source Publisher.
So you need to make your Subject emit an error or complete. In most cases that is not a correct way to work with subjects, so let's replace the Subject with something that can easily be completed:
private val stopObserver = BehaviorSubject.create<Unit>() // (1)

private fun emitStop() { // (2)
    stopObserver.onNext(Unit)
}

private fun sourceSubject(): Flowable<Long> { // (3)
    return Flowable.interval(1, TimeUnit.SECONDS)
        .takeUntil(stopObserver.toFlowable(BackpressureStrategy.BUFFER)) // (4)
}

private fun runObservation() { // (5)
    sourceSubject()
        .window(10)
        .flatMapSingle { it.toList() }
        .doOnNext { Log.d("onNext", "${it.count()} items") }
        .subscribe()
}
Explanation of important parts:
Create a new Subject which emits every time you detect that the app is being stopped or paused.
You can simply emit an onNext event to the Subject when needed with the emitStop() function.
The sourceSubject() function imitates your source Subject. This one emits an item every second.
The takeUntil() operator completes the stream when the passed Publisher (stopObserver) emits an item. This ensures that our overall source Publisher (sourceSubject) completes.
I have used a simpler version of the window() operator, but all of them follow the same principle with regard to the source publisher.
Possible output:
2019-11-30 10:48:54.527 D/onNext: 10 items
2019-11-30 10:49:04.524 D/onNext: 10 items
2019-11-30 10:49:14.525 D/onNext: 10 items
2019-11-30 10:49:19.056 D/onNext: 4 items
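For the original time-and-count window from the question, the same takeUntil trick applies. A rough sketch (written in Java for consistency with the other examples here; subject, stopObserver and send are the names used above, so treat this as an untested adaptation rather than a drop-in solution):
subject.toFlowable(BackpressureStrategy.BUFFER)
        // completing the upstream here makes window() flush whatever is currently buffered
        .takeUntil(stopObserver.toFlowable(BackpressureStrategy.BUFFER))
        .window(30, TimeUnit.SECONDS, 20, true)
        .flatMapSingle(Flowable::toList)
        .subscribe(this::send);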
In an Android app scenario, I want to fetch some Observable<Data> from the network, and there are multiple Observer<Data> instances subscribed to it to update the corresponding views. In case of an error, say a timeout, I want to show a button to the user to try again.
How can I do the try-again part? Can I tell the observable to re-execute its logic without re-subscribing to it?
Let's assume you have two buttons, "Retry" and "Cancel", initially hidden. Create two Observables retryButtonClicks and cancelButtonClicks. Then, apply the retryWhen operator to the designated download flow and act upon the signals of these button clicks:
download.retryWhen(errors -> {
    return errors
        .observeOn(AndroidSchedulers.mainThread())
        .flatMap(e -> {
            // show the "Retry" and "Cancel" buttons around here
            return Observable.amb(
                retryButtonClicks.take(1).map(v -> "Retry"),
                cancelButtonClicks.take(1).map(v -> "Cancel")
            )
            .doOnNext(v -> { /* hide the "Retry" and "Cancel" buttons */ });
        })
        .takeWhile(v -> "Retry".equals(v));
});
There are actually specific methods for this:
retry()
Returns an Observable that mirrors the source Observable,
resubscribing to it if it calls onError (infinite retry count).
and retry(long count)
Returns an Observable that mirrors the source Observable,
resubscribing to it if it calls onError up to a specified number of
retries.
Read more in an article and in the docs
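For example, a minimal sketch of retry(long count) (assuming RxJava 2; the flaky source is fabricated here purely to make the resubscription visible):
AtomicInteger attempts = new AtomicInteger();

// A source that fails twice before succeeding.
Observable<String> flaky = Observable.defer(() ->
        attempts.incrementAndGet() < 3
                ? Observable.<String>error(new RuntimeException("timeout"))
                : Observable.just("payload"));

// retry(2) resubscribes on error up to two more times, so the third attempt succeeds.
flaky.retry(2).subscribe(
        System.out::println,                          // prints "payload"
        err -> System.out.println("failed: " + err));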
I want to achieve the following with RxJava, and as I may not have enough knowledge in this area I would like some help :)
I need to create a PublishSubject which would emit events with the following sequence:
Emit 1, 2, 3
Buffer 4 on the subscriber's completion if a certain condition is not satisfied (it may be a network connection, for example, or some other condition).
For 5, 6, ... buffer them after 4 if the condition is still not satisfied.
Try again to emit 4 after some time, once the condition is satisfied.
If 5, 6 are about to be emitted and the condition is now satisfied, then instead of buffering 5, 6, ... after 4, just emit 4 and then 5, 6, 7, 8, ...
The last two points are necessary because the order of emission is really important, which is what makes this difficult for me to achieve.
I hope I could describe what I want to achieve :)
Findings: after asking this question I've done some research and achieved the following:
private Observable observable = publishSubject
        .observeOn(Schedulers.io())
        .map(Manager::callNew)
        .doOnError(throwable -> Logger.e(throwable, "Error occurred"))
        .retryWhen(throwableObservable -> throwableObservable
            .zipWith(Observable.range(1, 10), (n, i) -> i)
            .flatMap(retryCount -> {
                long retrySeconds = (long) Math.pow(2, retryCount);
                Logger.d("The call has been failed retrying in %s seconds. Retry count %s", retrySeconds, retryCount);
                return Observable.timer(retrySeconds, TimeUnit.SECONDS)
                    .doOnNext(aLong -> C24Logger.d("Timer was completed. %s", aLong))
                    .doOnComplete(() -> Logger.d("Timer was completed."));
            }));
The problem here is with the PublishSubject. Because it has already emitted all of its items, it only emits new ones after retryWhen re-subscribes. If I use a ReplaySubject, then it also emits the old, already-processed items to the new retryWhen re-subscription, which I do not need anymore.
Is there a way to use the ReplaySubject to remove the completed items from the buffer?
You want to be able to turn buffering on and off, depending upon an external condition. Perhaps the simplest way to do it is to use the buffer() operator to continually buffer items based on the condition.
(I have removed stuff from the observer chain)
publishSubject
        .publish( obs -> obs.buffer( obs.filter( v -> externalCondition( v ) ) ) )
        .flatMapIterable( bufferedList -> bufferedList )
        .subscribe( ... );
The publish() operator allows multiple observer chains to subscribe to the incoming observable. The buffer() operator monitors a boundary observable which, here, emits a value only when the external condition is true.
When the external condition is true, buffer() will emit a series of lists with only a single element. When the condition goes false, buffer() starts buffering up the results, and when the condition goes true again, all the buffered items are emitted as a list. The flatMapIterable() step will take each item out of the buffer and emit it separately.
I have a list coming back from a REST endpoint. I need to break that list down into categories (the category is a field in each entry of the list). Individual categories will be written to a cache for faster lookup later.
I didn't know if I could .map() the entries and supply multiple filter() calls or some type of case statement to put the category entries in the right bucket.
Does something like this sound reasonable to implement with RxJava?
UPDATE:
Non-working version
private Map<String, List<VideoMetadataInfoEntity>> buildCategories( Observable<List<VideoMetadataInfoEntity>> videoList ) {
    Map<String, List<VideoMetadataInfoEntity>> categoryMap = new HashMap<>();

    videoList
        .flatMap( Observable::from )
        .subscribe( videoMetadataInfoEntity -> mapCategory( videoMetadataInfoEntity, categoryMap ) );

    Observable.just( categoryMap )
        .doOnNext( saveCategoriesToCacheAction );

    return categoryMap;
}
These fire in sequence; however, and this is my understanding, the second observable is not sending anything to the saveCategoriesToCacheAction since it hasn't subscribed to the result of the first observable.
I am starting to think I should modify my cache strategy. The list will always have all the details; the service doesn't provide a subset that I can use for listing and then another call to get the full details. It is either the full list or the full details for one item. It might be a better approach to just cache each item individually, into its own category cache, right now. I was trying to build the map so that this network call could return the requested category, with subsequent calls coming from the cache until the cache expires and a new network call refreshes it.
My solution is:
Observable.range(1, 20)
    .groupBy(number -> number % 2)
    .flatMap(groupedObservable -> groupedObservable.toList())
    .toMap(list -> list.get(0) % 2);
As a result I have [{0=[2, 4, 6, 8, 10, 12, 14, 16, 18, 20], 1=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]}]
Explanation:
range(1, 20) - creates an observable which emits first twenty numbers
groupBy(number -> number % 2) - creates an observable that emits grouped observables, where each grouped observable holds the items grouped by the grouping function (here number % 2)
flatMap(groupedObservable -> groupedObservable.toList()) - turns each group into an observable that emits all its items as a list
toMap(list -> list.get(0) % 2) - creates the map
RxJava is intended more for asynchronous message processing, but as it also espouses functional programming principles, it can be used as a poor man's Stream API. If you are using Java 8, consider using streams to do this job, but as you are asking this question I assume you are using Java 7.
To do what you want you could try (forgive the lambda, substitute it with an anonymous inner class if you are not using Retrolambda):
Observable.from(list).subscribe(item -> groupItemInCategoryBucket(item));
where groupItemInCategoryBucket is your method that contains the switch statement or whatever other way you have of caching the items.
Please note that this is the equivalent of a for loop, and although it is idiomatic to use this style in many other nice languages, a lot of Java developers might be a bit puzzled when they see this code.
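For completeness, if Java 8 is an option the same bucketing can be done without RxJava at all. A minimal sketch using Collectors.groupingBy (getCategory() is a hypothetical accessor assumed for illustration, and videoList is assumed to be the plain List<VideoMetadataInfoEntity> fetched from the endpoint):
// Group each entity into a per-category list in one pass over the list.
Map<String, List<VideoMetadataInfoEntity>> categoryMap = videoList.stream()
        .collect(Collectors.groupingBy(VideoMetadataInfoEntity::getCategory));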
Generally grouping of items can be achieved using a groupBy operator (for more information about it visit this page).
Map<Integer, List<Integer>> groupedValues = new HashMap<>(4);

Observable.range(1, 20)
    .groupBy(i -> i % 2, i -> i)
    .subscribe(go -> {
        List<Integer> groupValues = new ArrayList<>();
        groupedValues.put(go.getKey(), groupValues);
        go.subscribe(t -> groupValues.add(t));
    });
How it works:
Firstly, the observable emits items 1 through 20 (this happens in the range method).
These are then routed to separate observables based on their parity (the groupBy method; after this method you operate on GroupedObservable instances).
You then subscribe to the grouped observables, receiving (in the subscriber's onNext) separate observables that contain the grouped items along with the key they were grouped by.
Remember to either subscribe to the grouped observables or issue take(0) on them if their content does not interest you, to prevent memory leaks (a short sketch follows below).
I am not sure whether it is the most efficient way or not and would welcome some input about this solution.
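To illustrate the take(0) advice above, a small sketch that consumes one group and discards the other (same RxJava style as the answer; which group gets dropped is arbitrary here):
Observable.range(1, 20)
    .groupBy(i -> i % 2)
    // keep the odd group, discard the even one without leaking it
    .flatMap(go -> go.getKey() == 0 ? go.take(0) : go)
    .subscribe(System.out::println); // prints 1, 3, 5, ..., 19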
Say I have 2 Observables (A & B) that are essentially network calls (using Retrofit to give context).
The current flow of the app is as follows:
A & B are kicked off at about the same time (asynchronously).
B is executed 0 or more times on user interaction
I have 3 different scenarios that I want to listen for given these 2 observables/api calls.
I want to know immediately when Observable A completes
I want to know immediately when Observable B completes
I want to know when both have completed
First off, is this a good use case for RxJava?
I know how to do each scenario individually (using zip for the last), though I don't know how to do all of them simultaneously.
If I subscribe to Observable A, A begins. If I subscribe to B, B begins. If A & B complete before I subscribe to zip(a, b), I could miss the event and never actually see this complete, right?
Any general guidance would be appreciated. My RxJava knowledge is pretty thin :P
You can achieve this using three different observables, one for each of your cases.
As you'll have to share state between the observables, you'll have to convert the Retrofit cold observables into hot observables (see here for more information on this topic).
ConnectableObservable a = service.callA().publish();
ConnectableObservable b = service.callB().publish();
a.subscribe((e) -> { /* onNext */ }, (ex) -> {/* onError */}, () -> {/* when A is completed */ });
b.subscribe((e) -> { /* onNext */ }, (ex) -> {/* onError */}, () -> {/* when B is completed */ });
a.mergeWith(b).subscribe((e) -> { /* onNext */ }, (ex) -> {/* onError */}, () -> {/* when A and B are completed */ });
a.connect(); // start subscription to a
b.connect(); // start subscription to b
Do not share an object between the onCompleted methods or you'll have to deal with concurrency issues.