java.io.IOException: unexpected end of stream on Connection in Android

I have a web service URL that works fine and returns JSON data.
But when I call it with HttpURLConnection and an InputStream, I get this error:
java.io.IOException: unexpected end of stream on
Connection{comenius-api.sabacloud.com:443, proxy=DIRECT
hostAddress=12.130.57.1
cipherSuite=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 protocol=http/1.1}
(recycle count=0)
My code:
String data = "";
String singleLine;
try {
    URL url = new URL("https://comenius-api.sabacloud.com/v1/people/username=" + username + ":(internalWorkHistory)?type=internal&SabaCertificate=" + certificate);
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    InputStream ist = con.getInputStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(ist));
    while ((singleLine = reader.readLine()) != null) {
        data = data + singleLine;
        Log.e("Response", data);
    }
} catch (Exception e) {
    e.printStackTrace();
}
How to fix this?

I had the same problem using OkHttp3. The issue was that I wasn't closing the connection for each request: the client still considered the pooled connection usable, but the server had already dropped it, so the server returned an error.
The solution is to tell each request to close the connection once it is finished, by adding a header. In OkHttp3 it looks like this:
Request request = new Request.Builder()
        .url(URL)
        .header("Connection", "close")
        ...
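If you are on plain HttpURLConnection, as in the question, a rough equivalent of that workaround is the following sketch:

HttpURLConnection con = (HttpURLConnection) url.openConnection();
// Ask the server to close this connection after the response instead of keeping it alive.
con.setRequestProperty("Connection", "close");
// Or disable keep-alive process-wide, before any connection is opened:
// System.setProperty("http.keepAlive", "false");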

I encountered this problem today, and it turned out to be the server's fault: the server threw an error and closed the connection while it was still parsing the request.
Check your backend; if it is not yours, notify the owner of that server.

"Keepalive makes it difficult for the client to determine where one response ends and the next response begins" 1
It seems the issue is caused by a collision when a keep-alive connection is reused and both of the following hold:
the server doesn't send Content-Length in the response headers (the streaming-content case, where no Content-Length can be computed)
the server doesn't use chunked transfer encoding
So if you see this exception, sniff the HTTP headers (e.g. with the Android Studio Profiler tool). If the response contains
"Connection: keep-alive"
but neither a
"Content-Length: ***" nor a "Transfer-Encoding: chunked" header,
then you are in the case described above.
Since this is entirely a server-side issue, the proper fix is to calculate Content-Length and put it in the response header on the server side, if possible (or to use chunked transfer encoding).
The recommendation to close connections on the client side should be treated only as a workaround; keep in mind that it degrades overall performance.
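If the Profiler isn't handy, a quick way to inspect those response headers from code is to dump them from HttpURLConnection; the URL below is a placeholder, and this is purely for diagnosis:

// Dump every response header; the problem case is "Connection: keep-alive" together
// with no Content-Length and no "Transfer-Encoding: chunked".
HttpURLConnection con = (HttpURLConnection) new URL("https://example.com/api").openConnection();
for (Map.Entry<String, List<String>> entry : con.getHeaderFields().entrySet()) {
    Log.d("Headers", entry.getKey() + ": " + entry.getValue());
}
con.disconnect();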

Just found the solution.
It really is a server-side problem, and the fix is to send the Content-Length header. If you are using PHP, just write your code like this:
<?php
ob_start();

// the code - functions .. etc ... example:
$v = print_r($_POST, 1);
$v .= "\n\r" . print_r($_SERVER, 1);
$file = 'file.txt';
file_put_contents($file, $v);
print $v;

// finally: measure the buffered output and send its length as Content-Length
$c = ob_get_contents();
$length = strlen($c);
header('Content-Length: ' . $length);
header("Content-Type: text/plain");
//ob_end_flush(); // DID NOT WORK !!
ob_flush();
?>
The trick used here is to compute the Content-Length header from the output buffer and send it before flushing.
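If your backend is Java rather than PHP, the same idea (build the whole body first, then set Content-Length before writing) might look like this servlet sketch; the class name and payload are made up for illustration:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EchoServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Build the complete body first so its size is known before anything is written.
        byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
        resp.setContentType("application/json");
        // With Content-Length set, a keep-alive client knows exactly where the response ends.
        resp.setContentLength(body.length);
        resp.getOutputStream().write(body);
    }
}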

I had the same problem; it turned out I still had the proxy configured on the emulator even though Charles was no longer running.

Consider using OkHttp's retryOnConnectionFailure configuration parameter – as documented, this enables the client to silently recover from:
Stale pooled connections. The ConnectionPool reuses sockets to decrease request latency, but these connections will occasionally time out.
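With plain OkHttp, the flag sits on the client builder; it defaults to true, but setting it explicitly documents the intent:

OkHttpClient client = new OkHttpClient.Builder()
        // Retry transparently when a pooled keep-alive connection turns out to be stale.
        .retryOnConnectionFailure(true)
        .build();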
If you happen to be using the ktor client (read: kotlin multiplatform), you can use:
HttpClient(OkHttp) {
    engine {
        config {
            retryOnConnectionFailure(true)
        }
    }
}

I was testing my app against localhost using XAMPP when this error occurred. The problem was the port: Skype was using port 443. I simply quit Skype and the error was resolved!

It's a server error. It means that execution somehow returned to your client without the server sending an actual response header.
If you have control over the server code, check that after processing the request you explicitly send a response header with a response code. That way Retrofit knows the request has been processed.

I had the same issue. In my case the error was caused by the server side only supporting HTTP/2. You need to update the JDK to a version that supports HTTP/2 (JDK 9 or later) to fix it.

Add "Connection: keep-alive" to yor Rest Api endpoint
#Headers({"Content-Type: application/json", "Accept: application/json", "Connection: keep-alive"})
This is if your endpoint is being called consecutively
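For context, here is how that annotation sits on a Retrofit interface method; the interface name and endpoint path are hypothetical:

import okhttp3.ResponseBody;
import retrofit2.Call;
import retrofit2.http.GET;
import retrofit2.http.Headers;

public interface ApiService {
    @Headers({"Content-Type: application/json", "Accept: application/json", "Connection: keep-alive"})
    @GET("v1/people") // hypothetical endpoint
    Call<ResponseBody> getPeople();
}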

This may be an old thread, but in my case the problem was the internet connection (Wi-Fi): there were restrictions on accessing some of my endpoints.
I fixed it by switching to mobile data.
Cheers and happy coding!

Most probably two things are happening at the same time.
First, the URL contains a port that is not commonly used, and second, you are using a VPN or proxy that does not support that port.
Personally, I had this exact problem: my server port was 45860 and I was using the pSiphon anti-filter VPN.
In that setup Postman reported "connection hang-up", but only when the server's reply was an error status (it was fine when the server returned some text with no error code).
Then I changed my web service port to 8080 on the server and it worked, even with the pSiphon VPN still connected. So my suggestion is: if you can change the server port, try that, or check whether there is a proxy problem.

Related

Cannot access WebAPI (uTorrent) from another network; the same code works for local networks

I have the port forwarded, I've checked here:
http://www.canyouseeme.org/
I can get the cookie; however, I'm using this logic (which works for local networks, sketched in code below):
Get cookie
Get token using the cookie and authentication
Use token to call WEBAPI actions
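Roughly, that flow in code, assuming the standard uTorrent WebUI paths (/gui/token.html for the token, /gui/?token=... for actions); the host, port and credentials below are placeholders:

String base = "http://203.0.113.10:8080"; // placeholder host and port
String authorization = "Basic " + new String(android.util.Base64.encode(
        (user + ":" + pass).getBytes(), android.util.Base64.NO_WRAP));

// Steps 1 and 2: fetch the token page; the session cookie arrives via Set-Cookie.
HttpURLConnection con = (HttpURLConnection) new URL(base + "/gui/token.html").openConnection();
con.setRequestProperty("Authorization", authorization);
String cookie = con.getHeaderField("Set-Cookie");
String page = new BufferedReader(new InputStreamReader(con.getInputStream())).readLine();
// The token usually sits directly inside a <div id='token'> element on a single line.
int end = page.indexOf("</div>");
String token = page.substring(page.lastIndexOf('>', end) + 1, end);

// Step 3: reuse cookie + token for the WebAPI calls.
HttpURLConnection api = (HttpURLConnection) new URL(base + "/gui/?token=" + token + "&list=1").openConnection();
api.setRequestProperty("Authorization", authorization);
api.setRequestProperty("Cookie", cookie);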
The problem is that I cannot even get the token.
I get this error: FileNotFoundException.
The same IP, port and link work fine in my browser, which makes me think there is either a problem with the authentication or something with the cookie.
String authorization = "Basic " + new String(android.util.Base64.encode((auth).getBytes(), android.util.Base64.NO_WRAP));
That is then used like this:
connection.setRequestProperty("Authorization", authorization);
The HTTP response is : HEADER FIELDS{null=[HTTP/1.1 400 ERROR], Connection=[keep-alive], Content-Length=[17], Content-Type=[text/html], X-Android-Received-Millis=[1484153865632], X-Android-Response-Source=[NETWORK 400], X-Android-Selected-Protocol=[http/1.1], X-Android-Sent-Millis=[1484153865562]}
There shouldn't be an error at this point. I really don't get how it could be a bad request when connecting to a global IP but OK on a local connection.
So my question is: should I authenticate differently when connecting to a global IP? How? As always, the documentation is of no help at all.
No idea what was wrong, but setting up a new port at 8080, which could be used both as an inside and an outside port, worked.

Reverse-proxied request returns null in onSuccess in loopj AsyncHttp

I have to use a RESTful API in my Android app. We have to reverse-proxy the app's requests through our own server (I use nginx proxy_pass for that), since the API only answers to one registered IP.
When I use curl to send a request through the nginx reverse proxy, it works perfectly.
But when my Android app sends a request using loopj async-http, onSuccess fires but the request status is 0 and the response body is null. I'm totally confused; it returns no data, so I can't even find out what is wrong.
The requests from the mobile app do appear in the nginx logs, so they certainly reach my server, but the answer that comes back is broken.
These kinds of problems occur when the server or loopj closes your connection.
There are several possible reasons:
Firewall
loopj's timeout
nginx's timeout
The first depends on your firewall, but it is easy enough to configure.
For loopj's timeout, you can use the setTimeout function, as in the following code:
AsyncHttpClient client = new AsyncHttpClient();
client.setTimeout(30 * 1000); // the connection timeout you need, in milliseconds (30 seconds here as an example)
For nginx's timeout, adjust the relevant timeout directives in your nginx configuration.
I hope these tips help you.

Android HttpResponseCache and "Authorization" request header

I'm trying to get HttpResponseCache to cache responses to requests that include an "Authorization" header. I'm including this header because the API I am calling uses basic authentication.
HttpURLConnection connection = initialiseConnection();
String usernameAndPasswordString = Base64.encodeToString(String.format("%s:%s", username, password).getBytes(), Base64.NO_WRAP);
connection.setRequestProperty("Authorization", String.format("basic %s", usernameAndPasswordString));
To test this, I'm making the request with WiFi turned on. I'm then turning off WiFi and data and making the request again. I then get a FileNotFoundException when trying to read the response body.
InputStream inputStream = new BufferedInputStream(connection.getInputStream());
If I do the same thing but without the "Authorization" header (to an app on a different server that doesn't use basic auth), my code is able to read the response from the cache.
I am aware that an HTTP cache is not meant to cache a response that was the result of a request including an "Authorization" header, but does that mean that I just can't cache any responses from this server without writing my own cache? Is there any known way around this or to override this behaviour in HttpUrlConnection / HttpResponseCache?
Thanks in advance!
I managed to get to the bottom of this by going through the source code of HttpResponseCache (via https://github.com/candrews/HttpResponseCache, candrews' custom build of the class taken from the Android source :) ). Including a "public", "must-revalidate" or "s-maxage" directive in the Cache-Control header of the response allows HttpResponseCache to cache it even though the Authorization header was included in the request.
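For completeness, a minimal sketch of installing the cache on the client (the directory name and size are arbitrary); the actual fix is the server adding a directive such as Cache-Control: public, max-age=3600 to responses for authorized requests:

import android.content.Context;
import android.net.http.HttpResponseCache;
import java.io.File;
import java.io.IOException;

public final class CacheSetup {
    // Call once, e.g. from Application.onCreate().
    public static void install(Context context) {
        try {
            File cacheDir = new File(context.getCacheDir(), "http");
            HttpResponseCache.install(cacheDir, 10L * 1024 * 1024); // ~10 MiB
        } catch (IOException e) {
            // The cache is an optimization; the app keeps working without it.
        }
    }
}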

HttpsURLConnection server_name corrupted (?) over 3g

As per Google's recommendation, I am using HttpsURLConnection for my API level 15 project.
My test case is very simple :
URL url = new URL(STATS);
HttpsURLConnection we = (HttpsURLConnection)url.openConnection();
InputStream in = new BufferedInputStream(we.getInputStream());
When I connect to my server over WiFi, everything works fine.
When I connect to my server over 3g, I am getting an error in my Apache logs :
Hostname 202.139.83.152 provided via SNI and hostname myserver.com provided via HTTP are different
Now the 202.139.83.152 address is the proxy address of my mobile providers APN.
I have dumped out the 'Client Hello' packet of both requests and the Handshake Protocol/Extension:server_name field contains the target hostname (myserver.com) for the wifi request but the APN proxy address for the 3g request.
Is this:
Something I have coded incorrectly
Something I have configured incorrectly on my phone (Samsung Galaxy S3)
Something I have configured incorrectly on my server
Something evil my mobile provider is doing
A bug in the Android libraries
My server is using a dedicated ip address for this vhost.
I can successfully make a request over 3g using a simple subclass of DefaultHttpClient but as my min API level is 15, I was hoping to go down the 'preferred' path.
Any suggestions would be very gratefully received. I've spent way too much time trying to get this basic thing working.
My colleague who is handling the iPhone development for this project shakes his head because his code 'just works out of the box'.
It turns out that this is a known issue.
I have found a workaround, although I'm not sure how robust it is. What works for me so far is to disable the proxy when making the connection:
HttpsURLConnection http = (HttpsURLConnection) url.openConnection(Proxy.NO_PROXY);
I hope this helps someone.
Here is my server-side workaround for Apache (working for the last year on Apache2 - Apache/2.2.14).
I recompiled Apache's mod_ssl.so after changing ssl_engine_kernel.c to remove the "return HTTP_BAD_REQUEST;" in the strcmp(host, servername) check:
if (strcmp(host, servername)) {
    ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
                 "Hostname %s provided via SNI and hostname %s provided"
                 " via HTTP are different", servername, host);
    //return HTTP_BAD_REQUEST; // REMOVE THIS LINE
}
You will still get the error log message but not the error 400 response code.

Authorization problem in Android with glassfish

I'm writing an HTTP client in Android that connects to glassfish on my localhost and sends some json information to the server.
I use:
UsernamePasswordCredentials cred =
        new UsernamePasswordCredentials(SettingsHelper.mUser, SettingsHelper.mPwd);
client.getCredentialsProvider().setCredentials(AuthScope.ANY, cred);
for authentication, and I make a PUT request.
The problem is that sometimes the client sends an unauthorized request before retrying with an authorized one, and on the second attempt it pushes the JSON entity before getting a proper response from the server (100 Continue). Then the server doesn't respond at all and everything hangs.
I will note that sometimes it works.
Has anyone else experienced this problem? How might I resolve it?
I finally managed to make it work with preemptive Basic authentication.
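The idea behind preemptive authentication is to attach the credentials to the very first request instead of waiting for a 401 challenge. A minimal sketch of one way to do that with the old Apache HttpClient bundled in Android (the URL and payload are placeholders):

String credentials = android.util.Base64.encodeToString(
        (SettingsHelper.mUser + ":" + SettingsHelper.mPwd).getBytes(),
        android.util.Base64.NO_WRAP);
String jsonBody = "{\"name\":\"test\"}"; // placeholder payload

HttpPut put = new HttpPut("http://10.0.2.2:8080/myapp/api/items"); // hypothetical URL
put.setHeader("Authorization", "Basic " + credentials);            // sent up front, no 401 round trip
put.setHeader("Content-Type", "application/json");
put.setEntity(new StringEntity(jsonBody, "UTF-8"));

HttpResponse response = new DefaultHttpClient().execute(put);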
