I am trying to implement lifecycle-aware long polling (in an Activity/Fragment). The polling is scoped to the fragment and sends an API request to the server at a fixed interval. However, I have been unable to implement it.
This is how I want the implementation to behave:
Have a hard timeout on the client side, not extended by any delay incurred while receiving a response.
Wait for the response of the previous API call before sending the next request, i.e. a request in the polling queue should wait for the previous response regardless of the priority implied by the polling interval.
Consider:
HARD_TIMEOUT = 10s
POLLING_INTERVAL = 2s
Request 1: started at 0s, response delay 1.2s, duration left 0.8s
Request 2: started at 2s, response delay 0.4s, duration left 1.6s
Request 3: started at 4s, response delay 2.5s, duration left 0s
Request 4: started at 6.5s, response delay 0.5s, duration left 1.0s
Request 5: started at 8s, response delay 0.8s, duration left 1.2s
For this use case, I want to use polling instead of a socket. Any ideas/solutions would be appreciated. Thank you.
OK, I figured out a solution for polling using channels. This should help someone searching for an example.
private val pollingChannel = Channel<Deferred<Result<OrderStatus>>>()
val POLLING_TIMEOUT_DURATION = 10000L
val POLLING_FREQUENCY = 2000L
A channel is required to hold your asynchronous requests in case more arrive while an async task is still executing.
val pollingChannel = Channel<Deferred<Pair<Int,Int>>>()
QUEUE EXECUTOR: picks the async tasks and executes them in FIFO order.
val receiverJob = CoroutineScope(Dispatchers.IO).launch {
    for (i in pollingChannel) {
        val x = i.await()
        println("${SimpleDateFormat("mm:ss.SSS").format(Calendar.getInstance().time)} Request ${x.first}: value ${x.second}")
    }
}
POLLING FUNCTION: adds your async task to the polling channel at a fixed interval until the timeout.
CoroutineScope(Dispatchers.IO).launch {
    var reqIndex = 1
    withTimeoutOrNull(POLLING_TIMEOUT_DURATION) {
        while (receiverJob.isActive) { // receiverJob is the queue-executor job above
            pollingChannel.send(async {
                getRandomNumber(reqIndex++)
            })
            delay(POLLING_FREQUENCY)
        }
    }
}
ASYNCHRONOUS OPERATION
To avoid a verbose answer, I created a function with a random delay; please replace it with the required API call.
private suspend fun getRandomNumber(index: Int): Pair<Int, Int> {
    val randomDuration = (1..6L).random() * 500 // simulated response delay of 0.5s-3s
    delay(randomDuration)
    return Pair(index, (0..100).random())
}
SAMPLE OUTPUT
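For reference, the snippets above can be assembled into one runnable sketch (plain JVM, no Android dependencies; kotlinx-coroutines is assumed on the classpath, and getRandomNumber stands in for the real API call):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

const val POLLING_TIMEOUT_DURATION = 10_000L
const val POLLING_FREQUENCY = 2_000L

suspend fun getRandomNumber(index: Int): Pair<Int, Int> {
    delay((1..6L).random() * 500) // simulated response delay of 0.5s-3s
    return Pair(index, (0..100).random())
}

// Assembles the queue executor and the polling producer from the answer
// into a single function that returns the processed responses in order.
fun runPolling(): List<Pair<Int, Int>> = runBlocking {
    val pollingChannel = Channel<Deferred<Pair<Int, Int>>>()
    val results = mutableListOf<Pair<Int, Int>>()

    // Queue executor: awaits each queued request in FIFO order.
    val receiverJob = launch(Dispatchers.IO) {
        for (deferred in pollingChannel) {
            // A request cancelled by the hard timeout is simply skipped.
            runCatching { deferred.await() }.onSuccess { results += it }
        }
    }

    // Polling producer: enqueue one async request per interval until timeout.
    launch(Dispatchers.IO) {
        var reqIndex = 1
        withTimeoutOrNull(POLLING_TIMEOUT_DURATION) {
            while (receiverJob.isActive) {
                pollingChannel.send(async { getRandomNumber(reqIndex++) })
                delay(POLLING_FREQUENCY)
            }
        }
        pollingChannel.close() // lets the executor's for-loop finish
    }.join()

    receiverJob.join()
    results
}

fun main() {
    runPolling().forEach { (req, value) -> println("Request $req: value $value") }
}
```

Because the channel is a rendezvous channel, a send suspends until the executor has finished awaiting the previous response, which gives the "wait for the previous response" behaviour from the question.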
I am currently developing an app that gets fitness history data from Google Fit. Getting the steps and weight is fine, but getting the sleep data is a bit of a problem. I want the accurate start and end times, and the only way to get those is to bucket by activity segment.
The problem is that when there is a lot of data to fetch (the app I'm developing needs data from up to 365 days ago), the request never returns, not even with a timeout error, and my app keeps loading; it never even starts reading the data from Google Fit.
So I want to ask: is there a way to get the sleep data by activity segment despite its large size? Please do share your code. By the way, this is how I get my sleep data:
val sleepReadRequest = DataReadRequest.Builder()
    .aggregate(DataType.TYPE_ACTIVITY_SEGMENT, DataType.AGGREGATE_ACTIVITY_SUMMARY)
    .bucketByActivitySegment(1, TimeUnit.MINUTES)
    .setTimeRange(offset, end, TimeUnit.MILLISECONDS)
    .build()
LogUtil.d(TAG, "getting sleep data...")
Fitness.getHistoryClient(
    context,
    Objects.requireNonNull<GoogleSignInAccount>(GoogleSignIn.getLastSignedInAccount(context))
)
    .readData(sleepReadRequest)
    .addOnSuccessListener { dataReadResponse ->
        LogUtil.d(TAG, "success sleep data")
        val secondSet = handleDataReturned(dataReadResponse, false, DateUtil.convertTimeStampToDate(offset, DateUtil.DATE_FORMAT))
        dailyData.addAll(secondSet)
        val allDailyList = getDailyDataList(dailyData, userHeight)
        callback.onGetDataSuccess(allDailyList)
    }
    .addOnFailureListener { e ->
        LogUtil.d(TAG, "fail sleep data")
        if (e is ApiException && e.statusCode == GoogleFitError.NOT_SIGNED.code) { // not-signed app exception
            revokePermission(context)
            callback.onGetDataFailure(GoogleFitError.parse(e.statusCode))
        } else {
            callback.onGetDataFailure(AppError.parse(Throwable(e)))
        }
    }
    .addOnCompleteListener { task ->
        LogUtil.d(TAG, "complete sleep data")
        callback.onGetDataComplete(task)
    }
Rather than aggregating, can you just read the activity segments and iterate through them yourself?
val sleepReadRequest =
    DataReadRequest.Builder()
        .read(DataType.TYPE_ACTIVITY_SEGMENT)
        .setTimeRange(offset, end, TimeUnit.MILLISECONDS)
        .build()
You can then retrieve the returned data with DataReadResult#getDataSet(DataType).
If you find that it's timing out (a year of data at once is potentially rather a lot!) I'd suggest batching the request into smaller ones and caching data in the past which is unlikely to change.
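A minimal sketch of the batching idea (plain Kotlin, no Google Fit dependencies): split the overall time range into chunks, each of which would then be issued as its own DataReadRequest. The function name and the one-week chunk size are illustrative assumptions.

```kotlin
// Split [startMs, endMs) into chunk-sized sub-ranges; each sub-range
// would become one DataReadRequest via setTimeRange(from, to, MILLISECONDS).
fun batchRanges(
    startMs: Long,
    endMs: Long,
    chunkMs: Long = 7L * 24 * 60 * 60 * 1000, // one week per request (illustrative)
): List<Pair<Long, Long>> {
    require(chunkMs > 0 && startMs <= endMs)
    val ranges = mutableListOf<Pair<Long, Long>>()
    var from = startMs
    while (from < endMs) {
        val to = minOf(from + chunkMs, endMs)
        ranges += from to to
        from = to
    }
    return ranges
}

fun main() {
    // A 365-day window becomes 53 week-sized requests.
    val now = System.currentTimeMillis()
    val yearAgo = now - 365L * 24 * 60 * 60 * 1000
    println(batchRanges(yearAgo, now).size) // prints 53
}
```

Results for ranges far in the past can then be cached, since historical data is unlikely to change.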
I am changing the way our application works to use Retrofit instead of plain OkHttp.
The way it used to work: we would send the request, retrieve the body as an input stream, and read all the bytes into a string.
After that we would parse the body using Gson.
The problem is that the server seems to have a configuration problem (which I am told is on the list of things to fix, but will take a long time): for example, it may return 400 bytes of data but declare that the body is actually 402 bytes.
The way we currently handle this is by catching the EOFException, ignoring it, and then parsing the returned string normally.
Right now I use the following request to get the entities I want:
@GET("/services/v1/entities")
suspend fun getEntities(): List<ServerEntity>
which, when there is no error, works correctly.
The solutions I've found so far are either:
a) use the following code to retry all requests until I do not get an EOFException:
internal suspend fun <T> tryTimes(times: Int = 3, func: suspend () -> T): T {
    var tries = times.coerceAtLeast(2)
    try {
        var lastException: EOFException? = null
        while (tries > 0) {
            try {
                return func.invoke()
            } catch (eof: EOFException) {
                lastException = eof
                tries--
            }
        }
        throw lastException!!
    } finally {
        log.d("DM", "tried request ${times.coerceAtLeast(2) - tries} times")
    }
}
which most of the time logs either 0 or 1 tries
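For reference, the retry logic above can be exercised in isolation; here is a non-suspending sketch of the same idea, with a hypothetical flaky call standing in for the Retrofit request:

```kotlin
import java.io.EOFException

// Non-suspending sketch of the tryTimes retry logic: retry only on
// EOFException, up to `times` attempts, then rethrow the last one.
fun <T> tryTimes(times: Int = 3, func: () -> T): T {
    var lastException: EOFException? = null
    repeat(times.coerceAtLeast(1)) {
        try {
            return func()
        } catch (eof: EOFException) {
            lastException = eof
        }
    }
    throw lastException!!
}

fun main() {
    var calls = 0
    // Hypothetical flaky call: fails twice with the truncated-body EOF, then succeeds.
    val result = tryTimes(3) {
        calls++
        if (calls < 3) throw EOFException("truncated body") else "ok"
    }
    println("$result after $calls calls") // ok after 3 calls
}
```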
or b) change all my requests to
@GET("/services/v1/entities")
suspend fun getEntities(): ResponseBody
and parse the stream manually (ResponseBody may be incorrect, but you get what I mean).
Is there a way to keep my original function and make Retrofit understand that, in the case of an EOFException, it should resume instead of failing?
In my Android app I have a presenter that handles user interactions and contains a kind of request manager; if needed, it forwards user input through the request manager to the server.
The request manager itself wraps the server API and handles server requests using RxJava.
I have code that sends a request to the server every time the user enters a message and shows the server's response:
private Observable<List<Answer>> sendRequest(String input) {
    MyRequest request = new MyRequest();
    request.setInput(input); // note: the parameter must not shadow the request object
    return Observable.fromCallable(() -> serverApi.process(request))
            .doOnNext(myResponse -> {
                // store some data
            })
            .map(MyResponse::getAnswers)
            .subscribeOn(Schedulers.newThread())
            .observeOn(AndroidSchedulers.mainThread());
}
However, now I need a kind of queue. The user may send a new message before the server has responded. Each message in the queue should be processed sequentially, i.e. the second message is sent only after we've got the response to the first, and so on.
If an error occurs, no further requests should be handled.
I also need to display the answers in a RecyclerView.
I have no idea how to change the code above to achieve this.
I see one particular problem: on the one hand, the queue can be updated by the user at any time; on the other hand, whenever the server sends a response, the corresponding message should be removed from the queue.
Maybe there is an RxJava operator or a pattern I've just missed.
I saw a similar answer here, but the "queue" there is constant:
Making N sequential api calls using RxJava and Retrofit
I'll be very thankful for any solution or link.
I didn't find an elegant native RxJava solution, so I will write a custom Subscriber to do the work.
For your three points:
For sequential execution, we create a single thread scheduler
Scheduler sequential = Schedulers.from(Executors.newFixedThreadPool(1));
To stop all requests when an error occurs, we should subscribe to all requests together instead of creating a Flowable every time. So we define the following functions (here the request is an Integer and the response a String):
void sendRequest(Integer request)
Flowable<String> reciveResponse()
and define a field to associate the request and response flows:
FlowableProcessor<Integer> requestQueue = UnicastProcessor.create();
To re-run the not-yet-sent requests, we define a rerun function:
void rerun()
Then we can use it:
reciveResponse().subscribe(/**your subscriber**/)
Now let us implement them.
When sending a request, we simply push it into requestQueue:
public void sendRequest(Integer request) {
    requestQueue.onNext(request);
}
First, to perform the requests sequentially, we schedule the work onto sequential:
requestQueue
    .observeOn(sequential)
    .map(i -> mockLongTimeRequest(i)) // mock for your serverApi.process
    .observeOn(AndroidSchedulers.mainThread());
Second, stopping requests when an error occurs is the default behavior: if we do nothing, an error breaks the subscription and no further items are emitted.
Third, to re-run the not-yet-sent requests. Note that the native operators cancel the stream on error, as MapSubscriber does (RxJava-2.1.0-FlowableMap#63):
try {
    v = ObjectHelper.requireNonNull(mapper.apply(t), "The mapper function returned a null value.");
} catch (Throwable ex) {
    fail(ex); // fail will call cancel
    return;
}
We should therefore wrap the error. Here I use my Try class to wrap the possible exception; you can use any other implementation that wraps the exception instead of throwing it:
.map(i -> Try.to(() -> mockLongTimeRequest(i)))
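The Try class is the answerer's own; a minimal plain-Kotlin equivalent (Kotlin's built-in runCatching/Result serves the same purpose) might look like:

```kotlin
// Minimal Try: either a Success carrying the value or a Failure
// carrying the exception, so errors travel through map() as data
// instead of cancelling the stream.
sealed class Try<out T> {
    data class Success<T>(val value: T) : Try<T>()
    data class Failure(val error: Throwable) : Try<Nothing>()

    companion object {
        fun <T> to(block: () -> T): Try<T> =
            try { Success(block()) } catch (e: Throwable) { Failure(e) }
    }
}

fun main() {
    println(Try.to { 21 * 2 })                       // Success(value=42)
    println(Try.to { error("boom") } is Try.Failure) // true
}
```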
Then comes the custom OnErrorStopSubscriber implements Subscriber<Try<T>>, Subscription.
It requests and emits items normally. When an error occurs (in fact, a failed Try is emitted), it stops there and won't request or emit anything, even if downstream requests it. After the rerun method is called, it goes back to the running state and emits normally. The class is about 80 lines; you can see the code on my GitHub.
Now we can test our code:
public static void main(String[] args) throws InterruptedException {
    Q47264933 q = new Q47264933();
    IntStream.range(1, 10).forEach(i -> q.sendRequest(i)); // emit 1 to 9
    q.reciveResponse().subscribe(e -> System.out.println("\tdo for: " + e));
    Thread.sleep(10000);
    q.rerun(); // re-run after 10s
    Thread.sleep(10000); // wait for completion because the worker thread is a daemon
}

private String mockLongTimeRequest(int i) {
    try {
        Thread.sleep((long) (1000 * Math.random()));
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    if (i == 5) {
        throw new RuntimeException(); // error occurs when handling request 5
    }
    return Integer.toString(i);
}
and output:
1 start at:129
1 done at:948
2 start at:950
do for: 1
2 done at:1383
3 start at:1383
do for: 2
3 done at:1778
4 start at:1778
do for: 3
4 done at:2397
5 start at:2397
do for: 4
error happen: java.lang.RuntimeException
6 start at:10129
6 done at:10253
7 start at:10253
do for: 6
7 done at:10415
8 start at:10415
do for: 7
8 done at:10874
9 start at:10874
do for: 8
9 done at:11544
do for: 9
You can see that it runs sequentially and stops when the error occurs. After the rerun method is called, it continues handling the remaining not-yet-sent requests.
For the complete code, see my GitHub.
For this kind of behaviour I use Flowable's backpressure implementation.
Create an outer stream that is the parent of your api request stream, flatMap the api requests with maxConcurrency = 1, and implement some buffer strategy so your Flowable doesn't throw an exception.
Flowable.create(emitter -> { /* user input stream */ }, BackpressureStrategy.BUFFER)
    .onBackpressureBuffer(127, // buffer size
        () -> { /* overflow action */ },
        BackpressureOverflowStrategy.DROP_LATEST) // action when buffer exceeds 127
    .flatMap(request -> sendRequest(request), 1) // maxConcurrency = 1: very important parameter
    .subscribe(results -> {
        // work with results
    }, error -> {
        // work with errors
    });
It will buffer user input up to the given threshold and then drop it (if you don't do this it will throw an exception, but it is highly unlikely that the user will exceed such a buffer), and it will execute requests sequentially, one by one, like a queue. Don't try to implement this behaviour yourself when there are operators for exactly this in the library itself.
Oh, I forgot to mention: your sendRequest() method must return a Flowable, or you can convert it to one.
Hope this helps!
My solution would be as follows (I did something similar in Swift before):
You will need a wrapper interface (let's call it "Event") for both requests and responses.
You will need a state object (let's make it class "State") that contains the request queue and the latest server response, and a method that accepts an "Event" as a parameter and returns 'this'.
Your main processing chain will look like: Observable<State> state = Observable.merge(serverResponsesMappedToEventObservable, requestsMappedToEventObservable).scan(new State(), (state, event) -> state.apply(event));
Both parameters of the .merge() method will probably be Subjects.
Queue processing happens in the single method of the "State" object (pick and send a request from the queue on any event, add to the queue on a request event, update the latest response on a response event).
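A hedged sketch of the State/Event idea in plain Kotlin (all names are illustrative): the apply method below is what would be passed to .scan() as the accumulator.

```kotlin
// Wrapper for both kinds of events flowing into scan().
sealed class Event
data class RequestEvent(val request: String) : Event()
data class ResponseEvent(val response: String) : Event()

// Immutable state: the pending request queue plus the latest server response.
data class State(
    val queue: List<String> = emptyList(),
    val latestResponse: String? = null,
) {
    fun apply(event: Event): State = when (event) {
        is RequestEvent -> copy(queue = queue + event.request) // enqueue new message
        is ResponseEvent -> copy( // head of the queue was answered
            queue = queue.drop(1),
            latestResponse = event.response,
        )
    }
}

fun main() {
    val events = listOf(RequestEvent("a"), RequestEvent("b"), ResponseEvent("A"))
    // scan() would emit each intermediate state; fold() gives just the last one.
    val state = events.fold(State()) { s, e -> s.apply(e) }
    println(state) // State(queue=[b], latestResponse=A)
}
```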
I suggest creating asynchronous observable methods; here is a sample:
public Observable<Integer> sendRequest(int x) {
    return Observable.defer(() -> {
        System.out.println("Sending Request : you get Here X ");
        return storeYourData(x);
    });
}

public Observable<Integer> storeYourData(int x) {
    return Observable.defer(() -> {
        System.out.println("X Stored : " + x);
        return readAnswers(x);
    }).doOnError(this::handlingStoreErrors);
}

public Observable<Integer> readAnswers(int h) {
    return Observable.just(h);
}

public void handlingStoreErrors(Throwable throwable) {
    // Handle your exception.
}
The first observable sends the request; when it gets the response, it proceeds to the second one, and you can keep chaining. You can customize each method to handle errors or success; this sample behaves like a queue.
Here is the result of the execution:
for (int i = 0; i < 1000; i++) {
    rx.sendRequest(i).subscribe(integer -> System.out.println(integer));
}
Sending Request : you get Here X
X Stored : 0
0
Sending Request : you get Here X
X Stored : 1
1
Sending Request : you get Here X
X Stored : 2
2
Sending Request : you get Here X
X Stored : 3
3
.
.
.
Sending Request : you get Here X
X Stored : 996
996
Sending Request : you get Here X
X Stored : 997
997
Sending Request : you get Here X
X Stored : 998
998
Sending Request : you get Here X
X Stored : 999
999
My problem is that I can't get an infinite stream with Retrofit. After I get credentials for the initial poll() request, I do the initial poll() request. Each poll() request responds in 25 seconds if there is no change, or earlier if there are changes, returning changed_data[]. Each response contains the timestamp data needed for the next poll request, so I should issue a new poll() request after each poll() response. Here is my code:
getServerApi().getLongPollServer()
    .flatMap(longPollServer -> getLongPollServerApi(longPollServer.getServer())
        .poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollServer.getTs(), "")
        .take(1)
        .flatMap(longPollEnvelope -> getLongPollServerApi(longPollServer.getServer())
            .poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollEnvelope.getTs(), "")))
    .retry()
    .subscribe(longPollEnvelope1 -> {
        processUpdates(longPollEnvelope1.getUpdates());
    });
I'm new to RxJava; maybe I don't understand something, but I can't get an infinite stream. I get 3 calls, then onNext and onComplete.
P.S. Maybe there is a better solution for implementing long polling on Android?
Whilst not ideal, I believe you could use RX's side effects ('doOn' operations) to achieve the desired result.
Observable<CredentialsWithTimestamp> credentialsProvider = Observable.just(
        new CredentialsWithTimestamp("credentials", 1434873025320L)); // replace with your implementation

Observable<ServerResponse> o = credentialsProvider.flatMap(credentialsWithTimestamp -> {
    // side-effect variable for computational steering (incl. initial value)
    AtomicLong timestamp = new AtomicLong(credentialsWithTimestamp.timestamp);
    return Observable.just(credentialsWithTimestamp.credentials) // same credentials are reused for each request - if invalid / onError, the later retry() will fetch new credentials
            .flatMap(credentials -> api.query("request", credentials, timestamp.get())) // uses the value set by the previous doOnNext
            .doOnNext(serverResponse -> timestamp.set(serverResponse.getTimestamp()))
            .repeat();
})
    .retry()
    .share();
private static class CredentialsWithTimestamp {

    public final String credentials;
    public final long timestamp; // I assume this is necessary for you from the first request

    public CredentialsWithTimestamp(String credentials, long timestamp) {
        this.credentials = credentials;
        this.timestamp = timestamp;
    }
}
When subscribing to 'o', the internal observable will repeat. Should there be an error, 'o' will retry and re-request from the credentials stream.
In your example, computational steering is achieved by updating the timestamp variable, which is needed for the next request.
I am using AsyncHttpClient to fetch some JSON. My method parses the JSON and fires another GET request as it goes, so I don't actually know how many threads are running. After some searching, I think I could use a ThreadPoolExecutor to know when all my threads have finished, so I can write to a database. How will the executor know I submitted a job if I am using AsyncHttpClient.get()?
AsyncHttpClient client = new AsyncHttpClient();
int limit = 20;
BlockingQueue<Runnable> q = new ArrayBlockingQueue<Runnable>(limit);
ThreadPoolExecutor executor = new ThreadPoolExecutor(limit, limit, 20, TimeUnit.SECONDS, q);
client.setThreadPool(executor);

parseSilo(url, context); // this fires client.get() ... as it encounters urls in JSON feed

executor.shutdown();
while (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
    Log.e(TAG, executor.getTaskCount() + " tasks left");
}
I do not see where you submit your job to the ThreadPoolExecutor.
Sample code:
executor.execute(new Runnable() {
    public void run() {
        // your job code
    }
});
Edit:
I just noticed that you are overriding the default ThreadPoolExecutor of AsyncHttpClient, so it will be used when issuing a request (e.g. get); it should work without explicitly submitting anything to the executor.
You can also override the terminated() method instead of looping until all tasks finish.
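A hedged sketch of that alternative (plain Kotlin/JVM; the class name and callback are illustrative): subclass ThreadPoolExecutor and override terminated(), which the executor calls exactly once after shutdown(), when every submitted task has completed.

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

// Executor that invokes a callback once it is fully terminated,
// i.e. after shutdown() and after all submitted tasks have run.
class NotifyingExecutor(
    threads: Int,
    private val onDone: () -> Unit,
) : ThreadPoolExecutor(threads, threads, 20, TimeUnit.SECONDS, LinkedBlockingQueue()) {
    override fun terminated() {
        super.terminated()
        onDone() // e.g. write the collected results to the database here
    }
}

fun main() {
    val done = CountDownLatch(1)
    val executor = NotifyingExecutor(4) { done.countDown() }
    repeat(5) { i -> executor.execute { println("task $i") } }
    executor.shutdown()                // no new tasks; queued ones run to completion
    done.await(5, TimeUnit.SECONDS)    // unblocked by terminated()
    println("all tasks finished")
}
```

This replaces the awaitTermination polling loop with a single callback, at the cost of moving the completion logic into the executor subclass.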