I have been using Retrofit, creating a new object each time I call a web service. The web service's response time is good on its own, but when called from within the app it slows down.
Does this affect the response time?
Retrofit retrofit = new Retrofit.Builder()
.baseUrl(ApiConstants.BASE_URL)
.addCallAdapterFactory(new ErrorCallback.ErrorHandlingCallAdapterFactory())
.addConverterFactory(GsonConverterFactory.create())
.build();
According to the Retrofit documentation, the build, baseUrl, addCallAdapterFactory, and addConverterFactory operations don't establish any connections. This makes sense, because they are only used to prepare the Retrofit instance for communication.
Also, since RESTful services typically use HTTP for communication, there is no need to establish a connection per session; a connection is established every time you call an operation of the web service API.
So, creating the instance every time will not affect communication, but it will needlessly consume the client's CPU and memory.
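As a minimal sketch of that idea (the ApiClient holder class below is illustrative, not from the original question; ApiConstants.BASE_URL is taken from the snippet above), you can build the instance once and reuse it:
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

// Illustrative holder: builds the Retrofit instance once and reuses it everywhere.
public final class ApiClient {

    private static volatile Retrofit retrofit;

    private ApiClient() { }

    public static Retrofit getInstance() {
        if (retrofit == null) {
            synchronized (ApiClient.class) {
                if (retrofit == null) {
                    retrofit = new Retrofit.Builder()
                            .baseUrl(ApiConstants.BASE_URL)
                            .addConverterFactory(GsonConverterFactory.create())
                            .build();
                }
            }
        }
        return retrofit;
    }
}
Each API interface can then be created from that same instance via ApiClient.getInstance().create(...), instead of rebuilding Retrofit per request.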
Creating a new Retrofit instance does affect the response time, and I can confirm this from my own recent experience. When I created a new instance for each request, the response time I was getting was >= 1700 ms, whereas when I used a single instance for all API calls, I got responses in <= 500 ms.
Related
I have a case where some users end up in a loop, requesting a GET API call too often.
Too often = 10-20x every second.
Currently, I haven't located the problem, and it doesn't look like it will be an easy fix. But I was wondering: is there a way to set some kind of limit in Retrofit 2, so that if the app goes into a loop where a single API request is called that many times, those requests are simply ignored, for instance allowing at most 1-5 identical requests per second, or something similar?
How could this be done (from a networking library settings perspective)? (Until I find the root cause, I'd like to protect the backend.)
According to this answer, you can use a Dispatcher as below:
Dispatcher dispatcher = new Dispatcher();
dispatcher.setMaxRequests(1);

OkHttpClient client = new OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build();
After that, only one request will be executed at a time.
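Note that setMaxRequests(1) only limits how many requests run concurrently; it does not stop a loop from queuing the same call 10-20 times per second. As a rough sketch of actually dropping repeats (the ThrottleInterceptor class and the 1-second window below are illustrative assumptions, not a Retrofit setting), you could add an OkHttp interceptor that answers a recently repeated URL locally instead of hitting the backend:
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import okhttp3.Interceptor;
import okhttp3.MediaType;
import okhttp3.Protocol;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.ResponseBody;

// Illustrative interceptor: short-circuits a URL that was requested less than
// MIN_INTERVAL_MS ago with a synthetic 429 response instead of reaching the server.
public class ThrottleInterceptor implements Interceptor {

    private static final long MIN_INTERVAL_MS = 1000;
    private final Map<String, Long> lastCall = new ConcurrentHashMap<>();

    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        String key = request.url().toString();
        long now = System.currentTimeMillis();
        Long previous = lastCall.get(key);
        if (previous != null && now - previous < MIN_INTERVAL_MS) {
            // Same URL again within the window: answer locally, never hit the backend.
            return new Response.Builder()
                    .request(request)
                    .protocol(Protocol.HTTP_1_1)
                    .code(429)
                    .message("Throttled by client")
                    .body(ResponseBody.create(MediaType.parse("text/plain"), ""))
                    .build();
        }
        lastCall.put(key, now);
        return chain.proceed(request);
    }
}
You would register it on the same OkHttpClient.Builder shown above with .addInterceptor(new ThrottleInterceptor()), and pass that client to Retrofit via .client(...).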
Apologies if this is too simple a question; I am new to using remote data sources.
I assume enqueue runs on background threads instead of the main thread, but which is faster and better for optimization? As I understand it, using Runnables takes more code, but I have seen multiple apps built that way. Is it better than the simpler enqueue method?
Retrofit uses OkHttp under the hood to make calls to the server. enqueue is well tested, and it is generally better to rely on what is globally recognized by developers, with performance testing and many other aspects already covered. It also handles the ExecutorService for you, without you having to write the implementation yourself. I'll add a few points for readers new to OkHttp.
new Request.Builder().url(endpoint).build() creates the request but doesn't send anything.
client.newCall(request).execute() sends the request and waits for the response, but it only downloads the headers, not the body, so you can check things like response.isSuccessful() immediately.
response.body().string() downloads the body of the response and returns it as a string.
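Putting those three steps together (the URL and class name below are just placeholders), a minimal synchronous example might look like this:
import java.io.IOException;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class SyncCallExample {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();

        // Builds the request; nothing is sent yet.
        Request request = new Request.Builder()
                .url("https://example.com/api/items")
                .build();

        // execute() blocks the calling thread until the response headers arrive.
        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                // string() actually downloads the body and returns it as text.
                System.out.println(response.body().string());
            }
        }
    }
}
On Android this blocking form has to run off the main thread, which is exactly the bookkeeping enqueue saves you from.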
You can plug in your own implementation of ExecutorService like this:
OkHttpClient.Builder().dispatcher(Dispatcher(executorService)).build()
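For comparison with hand-rolled Runnables, here is a minimal sketch of the enqueue route (the AsyncCallExample class and its parameters are illustrative), which lets OkHttp's own Dispatcher pool do the threading:
import java.io.IOException;

import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class AsyncCallExample {

    // enqueue() hands the call to OkHttp's Dispatcher thread pool, so there is
    // no need to manage Runnables or an ExecutorService yourself.
    public static void fetch(OkHttpClient client, String url) {
        Request request = new Request.Builder().url(url).build();

        client.newCall(request).enqueue(new Callback() {
            @Override
            public void onFailure(Call call, IOException e) {
                e.printStackTrace();
            }

            @Override
            public void onResponse(Call call, Response response) throws IOException {
                // Runs on a background thread; post to the main thread before touching UI.
                try (Response r = response) {
                    System.out.println(r.body().string());
                }
            }
        });
    }
}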
I asked a question recently on Stack Overflow regarding my slow network calls using Retrofit and RxJava.
I have made some changes, using Schedulers.io() and removing the main thread from all execution, but the result is still not very satisfying. I have dug into the code of OkHttp and Retrofit. Below are my findings:
Retrofit uses Schedulers.io(), which executes all the requests on a thread pool. Previously it was creating new threads. Does changing from Schedulers.newThread() to Schedulers.io() make any difference?
OkHttp has a hard-coded limit of 5 max requests per host. Does that make any difference?
All the calls happen in the Dispatcher via enqueue. Do we need to set some configuration on Retrofit and OkHttp for that?
Any suggestions to increase the performance of the web service calls would be appreciated.
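For reference, the kind of configuration I am asking about would presumably look something like this (the limits and base URL below are illustrative guesses, not values I have verified to help):
Dispatcher dispatcher = new Dispatcher();
dispatcher.setMaxRequests(64);
dispatcher.setMaxRequestsPerHost(20); // OkHttp's default is 5

OkHttpClient client = new OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build();

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://example.com/") // placeholder base URL
        .client(client)
        .addConverterFactory(GsonConverterFactory.create())
        .build();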
I'm using OkHttp for the first time. The tutorial says that if I want to use response caching, I must call new OkHttpClient() exactly once (a singleton instance). OkHttpClient is thread-safe, with synchronized methods. But in my application many threads connect to the network to fetch remote data simultaneously, so some threads must wait for another thread to finish getting its data before they can execute their own operation.
So, is its performance no better than normal?
If so, and I don't enable response caching, should I call new OkHttpClient() many times for better performance?
Thanks
For the best performance, share a single OkHttpClient instance. This allows your cache to be shared, and in the future when we implement fancy joining & cancelling it’ll allow outgoing requests to be merged.
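A minimal sketch of that advice, assuming OkHttp 3 (the HttpClientHolder class and the 10 MiB cache size are illustrative choices, not from the quote above):
import java.io.File;

import okhttp3.Cache;
import okhttp3.OkHttpClient;

// Illustrative holder: one OkHttpClient (and one response cache) shared by the whole app.
public final class HttpClientHolder {

    private static volatile OkHttpClient client;

    private HttpClientHolder() { }

    public static OkHttpClient get(File cacheDir) {
        if (client == null) {
            synchronized (HttpClientHolder.class) {
                if (client == null) {
                    client = new OkHttpClient.Builder()
                            .cache(new Cache(new File(cacheDir, "http-cache"), 10L * 1024 * 1024))
                            .build();
                }
            }
        }
        return client;
    }
}
Per-request tweaks (timeouts, interceptors) can still be made cheaply with client.newBuilder(), which shares the same connection pool and cache as the original client.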
I've successfully created a Retrofit REST API client making both GET & POST calls, and I've also incorporated it into RoboSpice as a background service.
However, I want the RoboSpice service to connect to the database and asynchronously persist the objects retrieved from the GET call. Using the Retrofit Callback class seems the obvious way, but connecting to the database requires a Context, and I'm concerned about leaking it.
So, what would be the best approach to get the RoboSpice SpiceService to persist data to the database both before and after a request is processed?
Your question is really fuzzy to me. I don't understand why you can't use the normal persistence mechanism of RS. If you do so, it's pretty easy to persist your data when requests have been executed.
Maybe I am missing something. So, if your requirement is really to persist the data yourself, then the approach you propose looks right. You could inject the spice service itself into your request (see how addRequest is overridden in RetrofitSpiceService, for instance). The request would then hold a context that can be used for persistence inside a callback, or inside the request itself.
Recently I coded a POST request using Retrofit and RS. I changed the signature of the POST request to return Void, then slightly modified the Retrofit converter to handle that case and return null. The request received the spice service via injection, as mentioned earlier, and could perform some actions on the database.
Here is some code to inject the application inside a request from within a spice service.
@Override
public void addRequest(CachedSpiceRequest<?> request,
        Set<RequestListener<?>> listRequestListener) {
    // Hand the Application to requests that need a Context for persistence.
    if (request.getSpiceRequest() instanceof MySpiceRequest) {
        MySpiceRequest<?> mySpiceRequest =
                (MySpiceRequest<?>) request.getSpiceRequest();
        mySpiceRequest.setApplication(this.getApplication());
    }
    super.addRequest(request, listRequestListener);
}
In the end, since I'm batching the various REST API service calls to save radio usage (Reto Meier's Dev Bytes: Efficient Data Transfers), I call the REST API services (RetrofitSpiceServices) from within a controller RoboSpice SpiceService that holds both the reference to the DatabaseHelper (which requires a Context) and the respective Retrofit callbacks for the REST services.
This way, the controller service handles all the triggering (AlarmManager triggers the controller service) and the persisting to the DB, and the REST services can shut themselves down as normal without any knowledge of the Context, the database, or suchlike.
For @lion789:
I have 4 models each with a corresponding API call to sync with the server (1 POST, 3 GET).
To handle these sync calls, I have an IntentService that contains 4 SpiceManager attributes and 4 Retrofit Callback classes - one for each model/API call.
The IntentService is passed an Enum indicating a sequence of APIs that should be called.
The IntentService calls the appropriate SpiceManager, which runs; then the Callback triggers the persistence and calls an IntentService method to trigger the next API call in the sequence.
A lot of this is abstracted and interfaced, as I use it for my Auth and Push Registration code, so it's a bit of a nightmare to describe, but it's been working pretty well thus far.