HttpURLConnection not decompressing Gzip - android

I'm trying to use HttpURLConnection on Gingerbread+ Android devices and am having trouble with the gzip encoding. According to the documentation
"In Gingerbread, we added transparent response compression.
HttpURLConnection will automatically add this header to outgoing
requests, and handle the corresponding response:
Accept-Encoding: gzip"
The problem is that this is not actually happening: the Accept-Encoding: gzip header is not getting added at all. If I add it manually, I would then expect the decompression side to work, i.e. connection.getInputStream() to automatically return a GZIPInputStream, but it just returns a plain InputStream.
Has anyone experienced this? I haven't seen any posts about this happening, so it's very odd. The project is compiled against API 17, so that shouldn't be a problem, and the device is running 4.3.
Thanks.

I had the same problem and it was related to HTTPS. If you call:
URL url = new URL("https://www.example.com");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
you actually get an instance of HttpsURLConnection (a subclass of HttpURLConnection) that does NOT handle gzip automatically.
In those cases you can:
conn.setRequestProperty("Accept-Encoding", "gzip");
...
InputStream inStream = new GZIPInputStream(conn.getInputStream());

Since Android Gingerbread (http) and ICS (http + https), when you use Http(s)URLConnection, Android adds the Accept-Encoding: gzip header automatically, but only if you didn't add it yourself first.
In that case, if the server returns gzip-encoded content, you will automatically get a decompressed stream and the Content-Encoding: gzip header will be removed from the response. That's why it's called transparent: the client cannot tell the difference between compressed and uncompressed responses by looking at the stream content or the headers.
If you add the Accept-Encoding: gzip header manually yourself, however, transparent gzip handling is disabled for that request; if the server returns gzip-compressed content, you then need to look for the Content-Encoding: gzip header and create the GZIPInputStream yourself to decompress the content.
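A minimal sketch of that manual path (the URL is just a placeholder and error handling is omitted):

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

// Requesting gzip explicitly disables transparent decompression, so we unwrap the stream ourselves.
HttpURLConnection conn = (HttpURLConnection) new URL("https://www.example.com/api").openConnection();
conn.setRequestProperty("Accept-Encoding", "gzip");

InputStream in = conn.getInputStream();
// Only wrap the stream if the server actually compressed the response.
if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
    in = new GZIPInputStream(in);
}
BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
String line;
while ((line = reader.readLine()) != null) {
    System.out.println(line);
}
reader.close();
conn.disconnect();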

I tested with a few of my devices and HttpURLConnection is adding Accept-Encoding: gzip to the headers.
Have you tried configuring Fiddler for your Android devices to verify http headers? Perhaps your server does not support compression.

I've experienced this myself and also found that it worked if I added the header and used GZIPInputStream manually. However, we were setting our headers manually (had a custom header we had to add).
The Android documentation says "By default, this implementation of HttpURLConnection requests that servers use gzip compression." So I'm guessing that the default only applies if you don't manually set the headers.


IIS and Apache Content-Type header... IIS will only accept one, Apache accepts more than one

IIS will send back a 400 error if you send it two Content-Type headers. Here is an example:
1: Content-Type: application/json
2: Content-Type: application/json; charset=utf-8;
Apache handles that and processes the JSON properly.
My reading of the W3C spec is that only a single Content-Type header is allowed. Arguably, though, both headers mean exactly the same thing, since JSON is, as I understand it, UTF-8 anyway. So who's right here, IIS or Apache?
My app fails when running against IIS: the Android library I am using sends two Content-Type headers if I give it my own, and that fails on IIS. So currently I'm locked into Apache.
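For what it's worth, one common way duplicate headers creep in with HttpURLConnection is mixing addRequestProperty (which appends another value) with setRequestProperty (which replaces any existing value). A purely illustrative sketch with a placeholder URL:

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL("https://www.example.com/api").openConnection();

// addRequestProperty appends, so calling it when the library has already set
// Content-Type puts two Content-Type headers on the wire:
// conn.addRequestProperty("Content-Type", "application/json; charset=utf-8");

// setRequestProperty replaces any existing value, so exactly one header is sent,
// which strict servers such as IIS will accept.
conn.setRequestProperty("Content-Type", "application/json; charset=utf-8");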

java.io.IOException: unexpected end of stream on Connection in android

I have a web service URL that works fine and returns JSON data.
But when I use HttpURLConnection and an InputStream, I get this error:
java.io.IOException: unexpected end of stream on
Connection{comenius-api.sabacloud.com:443, proxy=DIRECT
hostAddress=12.130.57.1
cipherSuite=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 protocol=http/1.1}
(recycle count=0)
My code:
try {
    URL url = new URL("https://comenius-api.sabacloud.com/v1/people/username=" + username + ":(internalWorkHistory)?type=internal&SabaCertificate=" + certificate);
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    InputStream ist = con.getInputStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(ist));
    String singleLine;
    String data = "";
    while ((singleLine = reader.readLine()) != null) {
        data = data + singleLine;
        Log.e("Response", data);
    }
} catch (Exception e) {
    e.printStackTrace();
}
How to fix this?
I had the same problem using OkHttp3. The problem was that I didn't close the connection for each request; the client kept treating the connection as reusable while the server had already dropped it, so the server returned an error.
The solution is to tell each request to close the connection when it is finished, by adding a header. In OkHttp3 it looks like this:
Request request = new Request.Builder()
.url(URL)
.header("Connection", "close")
...
I encountered this problem today, and it turned out to be the server's fault: the server threw an error and shut down while parsing the request.
Check your backend; if it is not yours, notify the owner of that server.
"Keepalive makes it difficult for the client to determine where one response ends and the next response begins" 1
It seems that the issue is caused by a collision when reusing a keep-alive connection, in the case where the server:
- doesn't send Content-Length in the response headers (e.g. it is streaming content, so no Content-Length can be used), and
- doesn't use chunked transfer encoding.
So if you observe the exception, sniff the HTTP headers (e.g. with the Android Studio Profiler tool). If the response headers contain
"Connection: keep-alive"
but neither
"Content-Length: ***" nor "Transfer-Encoding: chunked",
then this is the case described above.
Since it is entirely a server issue, the proper solution is to calculate Content-Length and put it in the response header on the server side, if possible (or to use chunked transfer encoding).
The recommendation to close connections on the client side should be considered just a workaround; keep in mind that it degrades overall performance.
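If you still need that client-side workaround with plain HttpURLConnection, it might look roughly like this (placeholder URL; remember this trades performance for robustness):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection con = (HttpURLConnection) new URL("https://www.example.com/api").openConnection();
// Ask both sides to drop the socket after this exchange instead of pooling it,
// so a half-closed keep-alive connection can never be reused.
con.setRequestProperty("Connection", "close");

BufferedReader reader = new BufferedReader(new InputStreamReader(con.getInputStream(), "UTF-8"));
StringBuilder body = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    body.append(line);
}
reader.close();
con.disconnect();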
Just found the solution.
It really is a server-side problem, and the solution is to send the Content-Length header. If you are using PHP, just structure your code like this:
<?php
ob_start();
// the code - functions .. etc ... example:
$v = print_r($_POST, 1);
$v .= "\n\r" . print_r($_SERVER, 1);
$file = 'file.txt';
file_put_contents($file, $v);
print $v;
// finally
$c = ob_get_contents();
$length = strlen($c);
header('Content-Length: ' . $length);
header("Content-Type: text/plain");
//ob_end_flush(); // DID NOT WORK !!
ob_flush();
?>
The trick used here is to compute the Content-Length from the output buffer and send it as a header before flushing the buffer.
I had the same problem; it turned out I still had the proxy configured on the emulator while Charles was no longer running.
Consider using OkHttp's retryOnConnectionFailure configuration parameter – as documented, this enables the client to silently recover from:
Stale pooled connections. The ConnectionPool reuses sockets to decrease request latency, but these connections will occasionally time out.
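With plain OkHttp (Java) the flag lives on the client builder; it is already true by default, so setting it explicitly mainly documents the intent:

import okhttp3.OkHttpClient;

// Silently retry the request on a different connection if a pooled socket turns out to be stale.
OkHttpClient client = new OkHttpClient.Builder()
        .retryOnConnectionFailure(true)
        .build();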
If you happen to be using the ktor client (read: kotlin multiplatform), you can use:
HttpClient(OkHttp) {
engine {
config {
retryOnConnectionFailure(true)
}
}
}
I was testing my app against localhost using XAMPP and this error was occurring. The problem was the port: Skype was using port 443. I simply quit Skype and the error was resolved!
It's a server error. It means that somehow execution returns to your client without the server sending an actual response header.
If you have control over the server code, check that after processing the request you explicitly send a response header with a response code. That way Retrofit knows the request has been processed.
I have the same issue. This error is caused by the server side only supporting HTTP/2. You need to update the JDK to a version that supports HTTP/2 (>= JDK 9) to fix this issue.
Add "Connection: keep-alive" to yor Rest Api endpoint
#Headers({"Content-Type: application/json", "Accept: application/json", "Connection: keep-alive"})
This is if your endpoint is being called consecutively
This may be an old thread, but as for me: check your internet connection (Wi-Fi); there might be some restriction on accessing some of your endpoints.
I fixed mine by connecting through my mobile data instead.
-cheers/happy coding
Most probably there are two things happening at the same time.
First, the URL contains a port which is not commonly used, AND secondly, you are using a VPN or proxy that does not support that port.
Personally, I had the same problem. My server port was 45860 and I was using the pSiphon anti-filter VPN.
In that situation Postman reported "connection hang-up", but only when the server's reply was an error status (it was fine when the server returned some text with no error code).
Then I changed my web service port to 8080 on the server and, WOW, it worked, even though the pSiphon VPN was still connected. Therefore, my suggestion is: if you can change the server port, try that, or check whether there is a proxy problem.

Android HttpResponseCache and "Authorization" request header

I'm trying to get HttpResponseCache to cache responses to requests that include an "Authorization" header. I'm including this header because the API I am calling uses basic authentication.
HttpURLConnection connection = initialiseConnection();
String usernameAndPasswordString = Base64.encodeToString(String.format("%s:%s", username, password).getBytes(), Base64.NO_WRAP);
connection.setRequestProperty("Authorization", String.format("basic %s", usernameAndPasswordString));
To test this, I'm making the request with WiFi turned on. I'm then turning off WiFi and data and making the request again. I then get a FileNotFoundException when trying to read the response body.
InputStream inputStream = new BufferedInputStream(connection.getInputStream());
If I do the same thing but without the "Authorization" header (to an app on a different server that doesn't use basic auth), my code is able to read the response from the cache.
I am aware that an HTTP cache is not meant to cache a response that was the result of a request including an "Authorization" header, but does that mean that I just can't cache any responses from this server without writing my own cache? Is there any known way around this or to override this behaviour in HttpUrlConnection / HttpResponseCache?
Thanks in advance!
I managed to get to the bottom of this by going through the source code of HttpResponseCache (via https://github.com/candrews/HttpResponseCache, a custom version of the class by candrews taken from the Android source :) ). Including "public", "must-revalidate" or "s-maxage" directives in the Cache-Control header of the response will allow caching by HttpResponseCache even if the Authorization header was included in the request.
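So the client side can stay the standard HttpResponseCache setup; what has to change is the server's Cache-Control response header. A sketch (the context reference and the 10 MB size are just placeholders):

import android.net.http.HttpResponseCache;
import java.io.File;

// Install the cache once, e.g. in Application.onCreate().
File cacheDir = new File(context.getCacheDir(), "http");
HttpResponseCache.install(cacheDir, 10L * 1024 * 1024);

// The server must then mark the authorized response as cacheable, for example:
//   Cache-Control: public, max-age=3600
// Without "public" (or "must-revalidate" / "s-maxage"), responses to requests that
// carried an Authorization header will not be served from this cache.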

android post gzip

Trying to implement gzip on an Android project to reduce client data charges. All devices connect to a WCF webservice and IIS is now sending compressed data back to the devices as expected. Now I need to work out how to post back gzipped xml data modified on the android device.
The device code is as follows
httpsURLConnection.setDoInput(true);
httpsURLConnection.setDoOutput(true);
httpsURLConnection.setConnectTimeout(TIMEOUT);
httpsURLConnection.setReadTimeout(TIMEOUT);
httpsURLConnection.setRequestMethod("POST");
httpsURLConnection.setRequestProperty("Content-Type", "application/xml");
httpsURLConnection.setRequestProperty("Content-Encoding", "gzip);
httpsURLConnection.connect();
GZIPOutputStream gzipOutputStream = new GZIPOutputStream(new BufferedOutputStream((httpsURLConnection.getOutputStream())));
gzipOutputStream.write(_xmlText.getBytes());
gzipOutputStream.flush();
gzipOutputStream.finish();
gzipOutputStream.close();
Running Wireshark on webserver shows the gzip packets and decompresses them to show the correct data from the device.
The problem is that the WCF web service does not seem to recognise the data - the error is XmlException - The data at the root level is invalid. Line 1, position 1.
Which makes me think that the data is still compressed and WCF cannot handle gzip - which I seem to remember reading about previously. Do I then need to create a message decoder in .net to handle the gzip compression? Was this hopefully addressed in .net 4.5?
Any help with these questions appreciated.

Disable gzip content-encoding in android simulator

I was trying to read the HTTP messages between the browser in the Android simulator and other third party web-servers using tcpdump. However, since the browser can accept gzip content-encoding, I can't see the HTML content as plain-text in the tcpdump output. Is there a way to change the configs of the browser so that it doesn't send that Accept-Encoding: gzip header line?
This post implies that if you remove the Accept-Encoding header, you'll get raw data back... you should be able to write a custom WebView that never sends that header. Hope that works!
http://forgetmenotes.blogspot.com/2009/05/how-to-disable-gzip-compression-in.html
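One way to try this without patching the stock browser is to load the pages in your own WebView and pass the header yourself; whether the platform's HTTP stack fully honors it may depend on the Android version, but roughly:

import android.webkit.WebView;
import java.util.HashMap;
import java.util.Map;

// Inside an Activity; R.id.web_view is a placeholder for your own layout id.
WebView webView = (WebView) findViewById(R.id.web_view);
Map<String, String> headers = new HashMap<String, String>();
// "identity" asks the server not to compress the body, so tcpdump shows plain text.
headers.put("Accept-Encoding", "identity");
webView.loadUrl("http://www.example.com/", headers);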
