I'm using Retrofit to make HTTP calls, but it seems like the library is compressing (gzip) the request by default. Since the API can't handle compressed requests, is there any way to turn off the default compression?
Retrofit does no compression. In fact, it's barely involved in HTTP at all as it just delegates the hard work to a real HTTP client.
That said, I'm going to guess you're talking about OkHttp, but OkHttp also does no compression by default. Adding request body compression is one of the examples we provide. Since the vast majority of web servers don't support it, it isn't enabled by default.
OkHttp will automatically add an Accept-Encoding: gzip header to requests. This indicates to the server that OkHttp can read Gzip response bodies. If the server chooses to send a Gzipped response body (it doesn't have to), it will be transparently un-Gzipped before being handed back to the application code.
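To make that transparent handling concrete, here is a minimal sketch using only java.util.zip (no OkHttp involved; the class and method names are invented for illustration) of the gzip work being described: compressing bytes the way a request-compression interceptor would, and decompressing them the way the client unwraps a gzipped response body before handing it to application code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {

    // What a request-compression interceptor would do to a request body.
    public static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(plain);
        }
        return buffer.toByteArray();
    }

    // What the client does transparently to a gzipped response body.
    public static byte[] gunzip(byte[] compressed) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = gz.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = "{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8);
        byte[] wire = gzip(body);                  // what goes on the network
        byte[] back = gunzip(wire);                // what the app sees
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

The application never sees the `wire` bytes; it only ever deals with the plain body on both sides.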
Related
I am trying to use the built-in HttpResponseCache in my app (making requests via the HttpURLConnection API), but I am having trouble getting it to cache any responses that were requested with an Authorization header included.
The only way I can get it to cache the response at all is to explicitly put 'public' in the Cache-Control response header on the server (s-maxage might work too, though I haven't tried it; explicitly putting 'private' results in no caching). But that means any intermediate proxies will cache the response and serve it to other clients, which is not what I want.
My understanding is that a user agent cache would cache responses requested with Authorization headers by default, or when the response is marked 'private'. It seems that HttpResponseCache is interpreting the headers like a shared cache rather than a user agent cache. Or is my understanding of the caching standards incorrect?
Is there any way I can get the cache to act like a user agent HTTP cache?
This is my installation code (note that HttpResponseCache.install can throw an IOException):
public static void setupCache(Context context, long httpCacheSize) throws IOException {
    File httpCacheDir = new File(context.getCacheDir(), "http");
    HttpResponseCache.install(httpCacheDir, httpCacheSize);
}
Do I need to do something different here? Or perhaps I need to include some user agent information in my requests?
Whilst I found no solution to this specific issue, I worked around my problem by refactoring my HTTP client code to use Volley (http://developer.android.com/training/volley/index.html) rather than HttpURLConnection. Volley's caching facilities are implemented separately from HttpResponseCache and handle cache-control headers as expected for a user agent cache.
When I use OkHttp to GET JSON from a URL like this:
Request request = new Request.Builder()
.url(url).build();
I usually get the same response (though sometimes I get a new one).
If I use like this:
Request request = new Request.Builder()
.cacheControl(new CacheControl.Builder().noCache().noStore().build())
.url(url).build();
I will get a new response every time.
I want to know why I get the same response with the first method.
Caching in HTTP
HTTP is typically used for distributed information systems, where performance can be improved by the use of response caches. The HTTP/1.1 protocol includes a number of elements intended to make caching work as well as possible. Because these elements are inextricable from other aspects of the protocol, and because they interact with each other, it is useful to describe the basic caching design of HTTP separately from the detailed descriptions of methods, headers, response codes, etc.
Caching would be useless if it did not significantly improve performance. The goal of caching in HTTP/1.1 is to eliminate the need to send requests in many cases, and to eliminate the need to send full responses in many other cases. The former reduces the number of network round-trips required for many operations; we use an "expiration" mechanism for this purpose. The latter reduces network bandwidth requirements.
For more information on this, go through Caching in HTTP. Also for help on coding aspect, go through this documentation on Class Cache.
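The behaviour you're seeing follows from the expiration mechanism described above: if a cached response is still fresh (its age is below its max-age), the client may answer from the cache without contacting the server at all, while noCache()/noStore() forces it past the cache. A hedged sketch of that freshness decision, with invented names rather than OkHttp's internals:

```java
public class FreshnessSketch {

    /**
     * Decides whether a cached response may be served without revalidation.
     * ageSeconds    - how long ago the response was received
     * maxAgeSeconds - value of the Cache-Control: max-age directive
     * noCache       - whether the request carried Cache-Control: no-cache
     */
    public static boolean canServeFromCache(long ageSeconds,
                                            long maxAgeSeconds,
                                            boolean noCache) {
        if (noCache) {
            return false; // request opted out: always go to the network
        }
        // Fresh as long as the stored response hasn't outlived its max-age.
        return ageSeconds < maxAgeSeconds;
    }

    public static void main(String[] args) {
        // A response cached 30s ago with max-age=60 is still fresh,
        // so the first form of the request is answered from cache.
        System.out.println(canServeFromCache(30, 60, false)); // prints "true"
        // With no-cache the cache is bypassed regardless of freshness.
        System.out.println(canServeFromCache(30, 60, true));  // prints "false"
    }
}
```

That is why the first form of the request keeps returning the same response until the cached entry expires.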
I wanted to know what would be the most efficient way of doing this. The reason I want to divide the file is so that I don't send the same blocks of data again if the network becomes unavailable while the transfer is going on. This is especially useful for bigger files.
What you want is an HTTP multipart request, which is supported by the Apache HttpClient library, by Retrofit, and by Ion. Volley currently does not let you perform such a request.
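For reference, a multipart body is just a sequence of parts separated by a boundary string; the libraries above build it for you. A hedged, stdlib-only sketch of the wire format (the boundary, field name, and file name here are made up for illustration):

```java
public class MultipartSketch {

    // Builds a minimal multipart/form-data body containing one file part.
    public static String buildBody(String boundary, String fieldName,
                                   String fileName, String fileContents) {
        StringBuilder body = new StringBuilder();
        // Each part starts with "--" plus the boundary.
        body.append("--").append(boundary).append("\r\n");
        body.append("Content-Disposition: form-data; name=\"").append(fieldName)
            .append("\"; filename=\"").append(fileName).append("\"\r\n");
        body.append("Content-Type: application/octet-stream\r\n\r\n");
        body.append(fileContents).append("\r\n");
        // The closing boundary has "--" appended at the end.
        body.append("--").append(boundary).append("--\r\n");
        return body.toString();
    }

    public static void main(String[] args) {
        String body = buildBody("----chunkBoundary", "file",
                                "block-0003.bin", "bytes of one block");
        // The request's Content-Type header would then be:
        //   multipart/form-data; boundary=----chunkBoundary
        System.out.println(body);
    }
}
```

Each resumable block of the file would simply be sent as its own part (or its own request), so a block that already reached the server never has to be re-sent.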
I have an issue using Volley with conditional GETs on cached responses when the request goes through one or more redirect hops.
On the initial request, if the server responds with a 302, the HTTP stack (I'm using the default HurlStack) transparently follows the redirect(s) and ultimately returns the response from the final server.
On subsequent requests, Volley adds an If-Modified-Since header to perform a conditional GET, but these are sent to the initial server, so instead of redirecting we just get a 304 response and the request never reaches the final server.
Since Volley is very loosely coupled with its underlying HTTP stack, there's no way to communicate the fact that the cache headers should only be sent with the final request.
The best solution that I can see (besides never sending conditional GETs) is to write a custom HttpURLConnection implementation that recognizes that certain headers belong to a specific URL and only sends those headers when appropriate. This means I would have to save the URL of the final server somewhere, probably as a custom header so that it is saved in the cache along with the other headers.
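A hedged sketch of that idea in plain Java (the class and method names are invented for illustration): record which final, post-redirect URL a cached entry came from, and attach the conditional header only when the outgoing request targets that URL:

```java
import java.util.HashMap;
import java.util.Map;

public class ConditionalHeaderSketch {

    // Maps the final (post-redirect) URL of a cached entry to its
    // validator, e.g. the Last-Modified value to replay later.
    private final Map<String, String> lastModifiedByFinalUrl = new HashMap<>();

    // Called when a response is cached: remember where it really came from.
    public void remember(String finalUrl, String lastModified) {
        lastModifiedByFinalUrl.put(finalUrl, lastModified);
    }

    // Only emit If-Modified-Since when this request actually goes to
    // the server the cached response came from, so the redirecting
    // server never sees it and can still answer with a 302.
    public Map<String, String> headersFor(String requestUrl) {
        Map<String, String> headers = new HashMap<>();
        String validator = lastModifiedByFinalUrl.get(requestUrl);
        if (validator != null) {
            headers.put("If-Modified-Since", validator);
        }
        return headers;
    }
}
```

The initial request to the redirecting server then carries no validators, and the conditional GET fires only on the hop to the final server.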
A slightly less hacky solution would be to write a custom HttpStack implementation that handles redirects manually. But that would mean we could not reuse the connection for redirects to the same host, so it would be less efficient.
Has anyone else run into this issue and found a better solution? It doesn't seem to be specific to Volley or HttpURLConnection, but to any HTTP library that handles redirects transparently. How do you tell it which headers go with a given URL?
Android uses Apache's HTTP Components library to perform HTTP requests and exposes an API that doesn't support asynchronous requests or pipelining. We're writing an app that would benefit from pipelining so we are using Hotpotato to perform those requests. In an effort to reduce the size of the APK (Hotpotato and Netty add ~2-4MB to the APK size) we're looking to implement our own on top of HttpCore and HttpNIO.
The Apache NIO extensions docs have an obscure reference to pipelining, mentioning that "non-blocking HTTP connections [are] fully pipelining capable", and there's a bug on the HttpClient code that mentions pipelining support, but there's no mention of how to implement it.
How do I use Apache's HTTP Components to implement support for HTTP pipelining and persistent connections on top of Android's existing Apache HTTP components libraries?
Most likely you are not going to like the answer, but so be it. The reason support for HTTP pipelining is lacking is that HTTP pipelining is simply not useful outside a very limited number of use cases. HTTP pipelining is applicable (and recommended by the HTTP spec) for idempotent HTTP methods only; this effectively precludes pipelining of POST requests. Pipelining can be marginally useful for browsers that need to retrieve a large set of static files using GET requests while being restricted to only two simultaneous HTTP connections to the same host; in that case it may produce marginal performance improvements. At the same time, I contend that an HTTP agent using a moderately sized pool of persistent connections (no more than five) will outperform a pipelining HTTP agent. The extra complexity of HTTP pipelining is simply not worth the trouble, and this is why there is no great urgency to add out-of-the-box support for HTTP pipelining to HttpClient and HttpCore.
Having said all that, the non-blocking HTTP connections of HttpCore NIO are fully asynchronous and always function in full duplex mode. HttpCore imposes no restriction on how many requests can be written out or how many responses can be received in one go. It is the responsibility of the protocol handler to correlate HTTP requests and responses into logically related sequences of message exchanges. Standard HTTP protocol handlers do not pipeline HTTP messages in order to support the expect-continue handshake for POST requests (expectation verification and pipelining are pretty much mutually exclusive). However, nothing prevents you from building a custom NHttpClientHandler class and making it pipeline requests. You can start off by taking the source code of HttpAsyncClientProtocolHandler [1], ripping out the expect-continue handshake code, and adding queuing of incoming and outgoing HTTP messages.
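The queuing that the last sentence describes can be sketched in plain Java: on a pipelined HTTP/1.1 connection, responses must arrive in the same order the requests were written, so a FIFO queue of in-flight requests is enough to correlate each response with its request (the names here are illustrative, not HttpCore's API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PipelineSketch {

    // Requests written to the connection but not yet answered.
    private final Deque<String> inFlight = new ArrayDeque<>();

    // Write out another request without waiting for earlier responses.
    public void requestSent(String requestId) {
        inFlight.addLast(requestId);
    }

    // HTTP/1.1 pipelining guarantees responses come back in request
    // order, so each response pairs with the oldest in-flight request.
    public String responseReceived() {
        if (inFlight.isEmpty()) {
            throw new IllegalStateException("response with no outstanding request");
        }
        return inFlight.removeFirst();
    }
}
```

A custom protocol handler would keep such a queue per connection, pushing on every request it writes and popping on every response head it parses.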
Hope this helps.
[1] http://svn.apache.org/repos/asf/httpcomponents/httpcore/trunk/httpcore-nio/src/main/java/org/apache/http/nio/protocol/HttpAsyncClientProtocolHandler.java