My problem is that I can't get an infinite stream with Retrofit. After I get credentials for the initial poll() request, I do the initial poll() request. Each poll() request responds in 25 seconds if there is no change, or earlier if there are changes, returning changed_data[]. Each response contains the timestamp data needed for the next poll request, so I should issue a new poll() request after each poll() response. Here is my code:
getServerApi().getLongPollServer()
.flatMap(longPollServer -> getLongPollServerApi(longPollServer.getServer()).poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollServer.getTs(), "")
.take(1)
.flatMap(longPollEnvelope -> getLongPollServerApi(longPollServer.getServer()).poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollEnvelope.getTs(), "")))
.retry()
.subscribe(longPollEnvelope1 -> {
processUpdates(longPollEnvelope1.getUpdates());
});
I'm new to RxJava, so maybe I don't understand something, but I can't get an infinite stream. I get 3 calls, then onNext and onComplete.
P.S. Maybe there is a better solution to implement long-polling on Android?
Whilst not ideal, I believe you could use Rx's side effects (the 'doOn' operations) to achieve the desired result.
Observable<CredentialsWithTimestamp> credentialsProvider = Observable.just(new CredentialsWithTimestamp("credentials", 1434873025320L)); // replace with your implementation
Observable<ServerResponse> o = credentialsProvider.flatMap(credentialsWithTimestamp -> {
// side effect variable
AtomicLong timestamp = new AtomicLong(credentialsWithTimestamp.timestamp); // computational steering (inc. initial value)
return Observable.just(credentialsWithTimestamp.credentials) // same credentials are reused for each request - if invalid / onError, the later retry() will be called for new credentials
.flatMap(credentials -> api.query("request", credentials, timestamp.get())) // this will use the value from previous doOnNext
.doOnNext(serverResponse -> timestamp.set(serverResponse.getTimestamp()))
.repeat();
})
.retry()
.share();
private static class CredentialsWithTimestamp {
public final String credentials;
public final long timestamp; // I assume this is necessary for you from the first request
public CredentialsWithTimestamp(String credentials, long timestamp) {
this.credentials = credentials;
this.timestamp = timestamp;
}
}
When subscribing to 'o' the internal observable will repeat. Should there be an error then 'o' will retry and re-request from the credentials stream.
In your example, computational steering is achieved by updating the timestamp variable, which is necessary for the next request.
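For completeness, subscribing might look something like this minimal sketch (ServerResponse#getUpdates() and processUpdates() are assumed to mirror the code in the question):
Subscription subscription = o.subscribe(
        serverResponse -> processUpdates(serverResponse.getUpdates())); // handle each long-poll result

// later, e.g. when the screen is destroyed:
subscription.unsubscribe();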
In my Android app I have a presenter which handles user interactions and contains a kind of request manager; if needed, it passes the user input to that request manager.
The request manager itself contains the server API and handles server requests using RxJava.
I have code which sends a request to the server every time the user enters a message and shows the server's response:
private Observable<List<Answer>> sendRequest(String input) {
    MyRequest request = new MyRequest();
    request.setInput(input);
    return Observable.fromCallable(() -> serverApi.process(request))
            .doOnNext(myResponse -> {
                // store some data
            })
            .map(MyResponse::getAnswers)
            .subscribeOn(Schedulers.newThread())
            .observeOn(AndroidSchedulers.mainThread());
}
However, now I need a kind of queue. The user may send a new message before the server has responded. Each message from the queue should be processed sequentially, i.e. the second message is sent only after we've got the response to the first message, and so on.
In case an error occurs, no further requests should be handled.
I also need to display the answers within a RecyclerView.
I have no idea how to change the code above to achieve the handling described.
I see one problem: on the one hand the queue can be updated by the user at any time, while on the other hand a message should be removed from the queue whenever the server sends a response.
Maybe there is an RxJava operator or a special technique I have just missed.
I saw a similar answer here; however, the "queue" there is constant.
Making N sequential api calls using RxJava and Retrofit
I'll be very thankful for any solution or link.
I didn't find any elegant native RxJava solution, so I will write a custom Subscriber to do the work.
For your 3 points:
For sequential execution, we create a single-threaded scheduler:
Scheduler sequential = Schedulers.from(Executors.newFixedThreadPool(1));
To stop all requests when an error occurs, we should subscribe to all requests together instead of creating a Flowable every time. So we define the following functions (here the request is an Integer and the response a String):
void sendRequest(Integer request)
Flowable<String> reciveResponse()
and define a field to associate the request flow with the response flow:
FlowableProcessor<Integer> requestQueue = UnicastProcessor.create();
To re-run the not-yet-sent requests, we define a rerun function:
void rerun()
Then we can use it:
reciveResponse().subscribe(/**your subscriber**/)
Now let us implement them.
When sending a request, we simply push it into requestQueue:
public void sendRequest(Integer request) {
requestQueue.onNext(request);
}
First, to process the requests sequentially, we schedule the work onto sequential:
requestQueue
.observeOn(sequential)
.map(i -> mockLongTimeRequest(i)) // mock for your serverApi.process
.observeOn(AndroidSchedulers.mainThread());
Second, to stop requests when an error occurs: this is the default behavior. If we do nothing, an error will break the subscription and no further items will be emitted.
Third, to re-run the not-yet-sent requests. Note first that the native operators cancel the stream on error, as MapSubscriber does (RxJava-2.1.0-FlowableMap#63):
try {
v = ObjectHelper.requireNonNull(mapper.apply(t), "The mapper function returned a null value.");
} catch (Throwable ex) {
fail(ex);// fail will call cancel
return;
}
We should wrap the error. Here I use my Try class to wrap the possible exception; you can use any other implementation that wraps the exception instead of throwing it:
.map(i -> Try.to(() -> mockLongTimeRequest(i)))
Then comes the custom OnErrorStopSubscriber implements Subscriber<Try<T>>, Subscription.
It requests and emits items normally. When an error occurs (in fact, a failed Try is emitted), it stops there and won't request or emit anything, even if downstream requests more. After the rerun method is called, it goes back to the running state and emits normally. The class is about 80 lines; you can see the code on my GitHub.
Now we can test our code:
public static void main(String[] args) throws InterruptedException {
    Q47264933 q = new Q47264933();
    IntStream.range(1, 10).forEach(i -> q.sendRequest(i)); // emit 1 to 9
    q.reciveResponse().subscribe(e -> System.out.println("\tdo for: " + e));
    Thread.sleep(10000);
    q.rerun(); // re-run after 10s
    Thread.sleep(10000); // wait for it to complete because the worker thread is a daemon
}
private String mockLongTimeRequest(int i) {
    try {
        Thread.sleep((long) (1000 * Math.random()));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    if (i == 5) {
        throw new RuntimeException(); // error occurs when request 5 is processed
    }
    return Integer.toString(i);
}
and output:
1 start at:129
1 done at:948
2 start at:950
do for: 1
2 done at:1383
3 start at:1383
do for: 2
3 done at:1778
4 start at:1778
do for: 3
4 done at:2397
5 start at:2397
do for: 4
error happen: java.lang.RuntimeException
6 start at:10129
6 done at:10253
7 start at:10253
do for: 6
7 done at:10415
8 start at:10415
do for: 7
8 done at:10874
9 start at:10874
do for: 8
9 done at:11544
do for: 9
You can see that it runs sequentially and stops when the error occurs. After the rerun method is called, it continues handling the remaining not-yet-sent requests.
For the complete code, see my GitHub.
For this kind of behaviour I'm using a Flowable backpressure implementation.
Create an outer stream that is the parent of your API request stream, flatMap the API request with maxConcurrency = 1, and implement some sort of buffer strategy so your Flowable doesn't throw an exception.
Flowable.create(emitter -> {/* user input stream*/}, BackpressureStrategy.BUFFER)
.onBackpressureBuffer(127, // buffer size
() -> {/* overflow action*/},
BackpressureOverflowStrategy.DROP_LATEST) // action when buffer exceeds 127
.flatMap(request -> sendRequest(request), 1) // very important parameter
.subscribe(results -> {
// work with results
}, error -> {
// work with errors
});
It will buffer user input up to the given threshold and then drop it (if you don't do this it will throw an exception, but it is highly unlikely that the user will exceed such a buffer), and it will execute the requests sequentially, one by one, like a queue. Don't try to implement this behaviour yourself when there are operators for exactly this in the library itself.
Oh, I forgot to mention: your sendRequest() method must return a Flowable, or you can convert it to one.
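If sendRequest() currently returns an Observable, as in the question, one way to adapt it is a small wrapper like this sketch (BackpressureStrategy.BUFFER is just one reasonable choice here):
// Hypothetical adapter: reuses the existing Observable-returning method in the Flowable chain above
private Flowable<List<Answer>> sendRequestAsFlowable(String request) {
    return sendRequest(request).toFlowable(BackpressureStrategy.BUFFER);
}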
Hope this helps!
My solution would be as follows (I did something similar in Swift before):
You will need a wrapper interface (let's call it "Event") for both requests and responses.
You will need a state object (let's make it class "State") that will contain the request queue and the latest server response, plus a method that accepts an "Event" as a parameter and returns 'this'.
Your main processing chain will look like Observable<State> state = Observable.merge(serverResponsesMappedToEventObservable, requestsMappedToEventObservable).scan(new State(), (state, event) -> state.apply(event))
Both parameters of the .merge() method will probably be Subjects.
Queue processing will happen in the only method of the "State" object (pick and send a request from the queue on any event, add to the queue on a request event, update the latest response on a response event).
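A rough sketch of that shape (the Event and State classes here are hypothetical placeholders, not a drop-in implementation):
// Hypothetical wrapper types; adapt them to your actual request/response classes.
interface Event {}

class RequestEvent implements Event {
    final MyRequest request;
    RequestEvent(MyRequest request) { this.request = request; }
}

class ResponseEvent implements Event {
    final MyResponse response;
    ResponseEvent(MyResponse response) { this.response = response; }
}

class State {
    final Queue<MyRequest> pending = new ArrayDeque<>();
    MyResponse latestResponse;

    State apply(Event event) {
        if (event instanceof RequestEvent) {
            pending.add(((RequestEvent) event).request);       // user typed a new message
        } else if (event instanceof ResponseEvent) {
            latestResponse = ((ResponseEvent) event).response; // server answered
            pending.poll();                                    // remove the answered request
        }
        // a real implementation would also kick off the next pending request here
        return this;
    }
}

// The two merged sources would typically be Subjects fed by the UI and by the server callbacks.
PublishSubject<Event> requests = PublishSubject.create();
PublishSubject<Event> responses = PublishSubject.create();

Observable<State> state = Observable
        .merge(responses, requests)
        .scan(new State(), (current, event) -> current.apply(event));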
I suggest creating asynchronous Observable methods; here is a sample:
public Observable<Integer> sendRequest(int x){
return Observable.defer(() -> {
System.out.println("Sending Request : you get Here X ");
return storeYourData(x);
});
}
public Observable<Integer> storeYourData(int x){
return Observable.defer(() -> {
System.out.println("X Stored : "+x);
return readAnswers(x);
}).doOnError(this::handlingStoreErrors);
}
public Observable<Integer> readAnswers(int h){
return Observable.just(h);
}
public void handlingStoreErrors(Throwable throwable){
//Handle Your Exception.
}
The first observable sends the request; when it gets the response it proceeds to the second one, and you can keep chaining. You can customize each method to handle errors or success; this sample behaves like a queue.
Here is the result of the execution:
for (int i = 0; i < 1000; i++) {
rx.sendRequest(i).subscribe(integer -> System.out.println(integer));
}
Sending Request : you get Here X
X Stored : 0
0
Sending Request : you get Here X
X Stored : 1
1
Sending Request : you get Here X
X Stored : 2
2
Sending Request : you get Here X
X Stored : 3
3
.
.
.
Sending Request : you get Here X
X Stored : 996
996
Sending Request : you get Here X
X Stored : 997
997
Sending Request : you get Here X
X Stored : 998
998
Sending Request : you get Here X
X Stored : 999
999
I wrote a RxJava implementation of a TokenManager for a remote API (that I'm consuming via Retrofit). However I ran into a snag where a method call with blockingGet() is resulting in skipped UI frames even though I subscribeOn(Schedulers.io())
Basically, I've included getToken() as a parameter in the API search() call. If the token exists it will be provided; if not, it will be fetched via the API token() call. <-- This is the problem. When this method gets called, it results in skipped frames in the UI (the progress bar freezes momentarily, plus the corresponding "Choreographer skipped frames" message in logcat).
Looking for suggestions on how to remedy the skipped frames, or suggestions on how to better implement this code.
ListFetcher (class that calls TokenManager getToken() )
public Single<List<Business>> getList(final String latitude, final String longitude) {
return api
.search(
tokenManager.getToken(), // <-- Here's the TokenManager reference
AppSettings.SEARCH_TERM,
latitude,
longitude,
AppSettings.SEARCH_RADIUS,
Yelp3Api.SEARCH_LIMIT)
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.flatMap(searchResponse -> {
if (searchResponse.getBusinesses().size() < searchResponse.getTotal()) {
return subsequentSearchCalls(searchResponse, latitude, longitude)
.map(businesses -> {
List<Business> list = new ArrayList<>();
list.addAll(searchResponse.getBusinesses());
list.addAll(businesses);
return list;
});
} else {
return Single.just(searchResponse.getBusinesses());
}
});
}
TokenManager getToken()
public synchronized String getToken() {
final String cachedToken = sharedPrefs.getString(tokenKey, "null");
if (cachedToken.equals("null")) {
String tokenString = api
.token(
Yelp3Api.GrantType.CLIENT_CREDENTIALS,
BuildConfig.YELPFUSION_CLIENT_ID,
BuildConfig.YELPFUSION_CLIENT_SECRET)
.subscribeOn(Schedulers.io())
.doOnSuccess(this::setSharedPrefToken)
.map(tokenResponse -> String.format(AUTH_FORMAT, tokenResponse.getAccessToken()))
.blockingGet();
return tokenString;
} else {
return String.format(AUTH_FORMAT, cachedToken);
}
}
Yelp3Api (Retrofit interface)
@FormUrlEncoded
@POST("oauth2/token")
Single<TokenResponse> token(
        @Field("grant_type") String grantType,
        @Field("client_id") String clientId,
        @Field("client_secret") String clientSecret
);

@GET("v3/businesses/search")
Single<SearchResponse> search(
        @Header("Authorization") String authorization,
        @Query("term") String term,
        @Query("latitude") String latitude,
        @Query("longitude") String longitude,
        @Query("radius") int radius,
        @Query("limit") int limit
);
When you call getToken() it is not executed lazily; it executes a blocking call on your UI thread.
To fix it, wrap getToken() in Single.fromCallable and flatMap it above api.search().
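A sketch of that change, based on the getList() method above (the subsequentSearchCalls branch is omitted for brevity):
public Single<List<Business>> getList(final String latitude, final String longitude) {
    return Single.fromCallable(() -> tokenManager.getToken()) // now runs lazily, on the io() scheduler
            .flatMap(token -> api.search(
                    token,
                    AppSettings.SEARCH_TERM,
                    latitude,
                    longitude,
                    AppSettings.SEARCH_RADIUS,
                    Yelp3Api.SEARCH_LIMIT))
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .flatMap(searchResponse -> {
                // ... same handling of searchResponse as before
                return Single.just(searchResponse.getBusinesses());
            });
}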
Hope this clears out what is actually happening
I ended up rewriting this using an OkHttp authenticator to request auth credentials and an OkHttp interceptor to add the auth header to outgoing requests.
According to the internet, this seems to be a better implementation.
However, I'm accepting @Tuby's answer as it is a more direct resolution to the original question.
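For reference, that OkHttp setup typically looks something like the following sketch (the tokenManager helper methods here are placeholders, not the actual code used):
OkHttpClient client = new OkHttpClient.Builder()
        // Interceptor: attach the cached token to every outgoing request.
        .addInterceptor(chain -> {
            Request authorized = chain.request().newBuilder()
                    .header("Authorization", tokenManager.getCachedTokenOrEmpty()) // hypothetical helper
                    .build();
            return chain.proceed(authorized);
        })
        // Authenticator: called by OkHttp on a 401; fetch a fresh token and retry the request once.
        .authenticator((route, response) -> {
            if (response.priorResponse() != null) {
                return null; // already retried once, give up to avoid looping
            }
            String freshToken = tokenManager.refreshTokenBlocking(); // hypothetical helper, runs off the main thread
            return response.request().newBuilder()
                    .header("Authorization", freshToken)
                    .build();
        })
        .build();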
My application subscribes to an Observable<Timestamped<byte[]>> of data packets arriving in sequence, and assembles them into larger frames. It must examine each packet to find the "Start of Frame" header and do some minor processing to assemble the packets into a valid frame.
How can I create a new Observable<Frame> that will emit these completed frames to a Subscriber?
Update: the suggested answer doesn't want to work for me. Some details:
My source Observable emits Timestamped<byte[]> packets.
Desired Output is an Observable of DataFrame objects, each including the data from several packets along with some other fields.
I have a class FrameAssembler with a method DataFrame receivePacket( Timestamped<byte[]> packet ). It returns null until it has assembled a frame, which it then returns and gets ready for the next one.
I can't create the output Observable. I'm trying this
Observable<DataFrame> source = Observable
.just( new Timestamped<byte[]>(100, new byte[10]) ) // sample packet
.scan( new FrameAssembler(), (acc, packet) -> acc.receivePacket( packet ))
.filter( frame -> frame != null )
but the lambda is underlined, with the message "Bad return type in lambda expression: DataFrame cannot be converted to TestScan.FrameAssembler".
I'm thoroughly stumped by this. What is acc and what's it doing there? Why does it want to convert the DataFrame returned by receivePacket into FrameAssembler? And why is new FrameAssembler() used as the first argument to scan()?
You probably want to use the 2-parameter scan operator:
class ByteAccumulator {
private byte[] buffer = ...
public byte[] receivePacket(byte[] receivedPacket) {
// add the received packet to the buffer
if(containsFullFrame(buffer)) {
return extractFrameAndTrimBuffer();
} else {
return null;
}
}
}
Observable<byte[]> source = ...
source.scan(new ByteAccumulator(), ByteAccumulator::receivePacket)
.filter(frame -> frame != null)
...
Edit: You need an intermediate class to adapt your FrameAssembler to what scan expects:
public class FrameScanner {
    private final FrameAssembler assembler;
    private final DataFrame frame;

    public FrameScanner() { this(new FrameAssembler(), null); }

    public FrameScanner(FrameAssembler assembler, DataFrame frame) {
        this.frame = frame;
        this.assembler = assembler;
    }

    public DataFrame getFrame() { return frame; }

    public FrameScanner scan(Timestamped<byte[]> nextBytes) {
        return new FrameScanner(assembler, assembler.receivePacket(nextBytes));
    }
}
Now you should be able to use it like this:
.scan(new FrameScanner(), FrameScanner::scan)
.map(FrameScanner::getFrame)
.filter(Objects::nonNull)
Hmm... now that I think about it, instead of the above, this might also work:
FrameAssembler assembler=new FrameAssembler();
...
.scan((DataFrame)null, (ignore, packet) -> assembler.receivePacket( packet))
.filter(Objects::nonNull)
I couldn't get the proposed solution using the scan() operator to work. I believe the problem was the null being returned until a complete set of packets was received. Observable operator chains don't seem to like nulls.
How I solved it:
In the onNext() handler of the data packet Observable subscription:
Thread.currentThread().setPriority( DATA_RX_PRIORITY );
packetArrayList = DataOps.addPacket( packetArrayList, dataPacket );
if( packetArrayList != null ) { // we have a new complete packet buffer
DataOps.DataFrame frameReturned = DataOps.pBuf2dFrame( packetArrayList );
frameRelayer.onNext( frameReturned ); // send the new frame to the BehaviorSubject
}
The addPacket() routine adds each received packet to an ArrayList, but returns null except when a complete Frame's worth of packets has been accumulated, at which point it returns the filled ArrayList.
When a non-null ArrayList is received, the pBuf2dFrame() method parses the packets and assembles them into a new DataFrame object.
Then comes the trick that converts the Observable of packets into an Observable of DataFrames: frameRelayer is a BehaviorSubject (an RxJava object that can function as both an Observable and a Subscriber). All you have to do is call its onNext() method with the new DataFrame to have it passed on to any Subscribers to frameRelayer.
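For clarity, frameRelayer can be declared and exposed roughly like this (a sketch; DataFrame is the class from DataOps above):
// The Subject is both a Subscriber (the packet handler pushes frames in via onNext())
// and an Observable (downstream code subscribes to completed frames).
private final BehaviorSubject<DataOps.DataFrame> frameRelayer = BehaviorSubject.create();

public Observable<DataOps.DataFrame> frames() {
    return frameRelayer.asObservable(); // hide the Subject from callers
}
Note that a BehaviorSubject replays its most recent frame to each new subscriber; a PublishSubject would skip that replay if it isn't wanted.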
I have a Retrofit function which I call from various activities and fragments and which produces the same output type. For a simple example:
retrofitService.fetchData().enqueue(new Callback<MyData>{})
Since I am going to call the same function from different activities, I created a separate common callback interface:
interface OnDataFetch {
    // mydata holds the response body on success
    // msg is the parsed response.errorBody() when the response code is not 200
    // responseType = 0 for success
    // responseType = 1 for a response code other than 200
    // responseType = 2 for onFailure
    void dataFetched(Data mydata, ErrorMsg msg, int responseType);
}
Now in onResponse and onFailure I call dataFetched():
onResponse(.....) {
    if (response.isSuccessful()) {
        responseType = 0;
        mydata = response.body();
        msg = null;
    } else {
        // when the response code is not 200 we get an error message from the server
        responseType = 1;
        mydata = null;
        msg = new Gson().fromJson(response.errorBody().charStream(), ErrorMsg.class);
    }
    dataFetched(mydata, msg, responseType);
}

onFailure(....) {
    responseType = 2;
    dataFetched(null, null, responseType);
}
Now all the activities that require mydata implement the dataFetched interface; hence I am separating the Retrofit logic into a separate class. Is this the right way to simplify activities with REST API calls, or is there a better way to do the same? Later I may store those responses in an SQL database, so take that into consideration while answering, thanks. (Don't care about the syntax.)
I use Retrofit + RxJava in my Android project, but my web service has limits (1 request/second). So when I click the "Load" button frequently it returns a JSON error. Question: which RxJava operator should I use to resend queries as long as they are not successful? Right now I just have the onError method called and that's it.
Use the retryWhen operator, with a 1 second delay in your specific use case.
Here is a detailed explanation of the same topic by Dan Lew: http://blog.danlew.net/2016/01/25/rxjavas-repeatwhen-and-retrywhen-explained/
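A minimal sketch of that retry-with-delay pattern (apiCall() is a placeholder for your Retrofit Observable; the cap of 3 retries is arbitrary):
apiCall()
        .retryWhen(errors -> errors
                .zipWith(Observable.range(1, 4), (error, attempt) ->
                        attempt < 4 ? Observable.timer(1, TimeUnit.SECONDS)   // wait 1 second, then retry
                                    : Observable.<Long>error(error))          // out of attempts: propagate the error
                .flatMap(observable -> observable))
        .subscribe(
                result -> { /* handle the successful response */ },
                throwable -> { /* still failing after the retries */ });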
You need to check whether the JSON is what you expect; if it's not, you can throw a runtime exception and then, using retryWhen, retry fetching the JSON. If all your retries fail, you can catch the last error with the onErrorResumeNext operator and return a default JSON if you want.
int count = 0;
@Test
public void observableOnErrorResumeNext() {
Subscription subscription = Observable.just(new JsonObject())
.doOnNext(json -> { if (json == BAD_JSON) { throw new RuntimeException(); } })
.retryWhen(errors -> errors.doOnNext(o -> count++)
.flatMap(t -> count > 3 ? Observable.error(t) : Observable.just(null).delay(100, TimeUnit.MILLISECONDS)),
Schedulers.newThread())
.onErrorResumeNext(t -> {
System.out.println("Error after all retries:" + t.getCause());
return Observable.just("You can return here a default Json");
})
.subscribe(s -> System.out.println(s));
new TestSubscriber((Observer) subscription).awaitTerminalEvent(500, TimeUnit.MILLISECONDS);
}
You can see more examples if you need here. https://github.com/politrons/reactive