Where does the socket timeout of 21000 ms come from? - android

The Problem
An app I'm maintaining keeps getting socket timeouts after approximately 21000 ms, despite the fact that I've explicitly set longer timeouts. This seemingly magical value of 21000 ms has come up in a few other SO questions and answers, and I'm trying to figure out exactly where it comes from.
Here's the essence of my code:
HttpURLConnection connection = null;
try {
    URL url = new URL(urlString);
    connection = (HttpURLConnection) url.openConnection();
    connection.setConnectTimeout(45000);
    connection.setReadTimeout(90000);
    int responseCode = connection.getResponseCode();
    if (responseCode == 200) {
        // code omitted
    }
} catch (Exception e) {
    // code omitted
} finally {
    if (connection != null) {
        connection.disconnect();
    }
}
Catching all exceptions in one block is admittedly not ideal, but it's inherited code and I'm reluctant to mess with it. I know it's catching SocketTimeoutException after 21000 ms because it logs the simple name of the exception class.
Clues
I found a question where an asker was getting a ConnectTimeout after 21000 ms, despite explicitly setting it to 40000 ms. That's intriguing, even though the exception class is different.
I also found a poorly-explained answer which claims that the server side is responsible for the 21000 ms timeout.
My Hunch
I don't think any action or inaction of the server could cause a shorter-than-expected socket timeout on the client. But maybe the TCP stacks in Windows and Android share a common ancestor, or at least use similar connect retry logic.
Could it be that Android imposes a maximum connect timeout of 21000 ms, and setting a longer timeout in HttpURLConnection is futile? Or could this timeout be triggered by some Windows machine on the path between the mobile device and the server? Do some Android versions throw a SocketTimeoutException where others throw a ConnectException?

According to RFC 1122 (TRANSPORT LAYER -- TCP), section 4.2.3.1 ("Retransmission Timeout Calculation"):
"Implementation also MUST include exponential backoff for successive RTO values for the same segment".
So xpa1492's answer sounds plausible (despite its Windows-specific nature); the implementation of a TCP stack either follows this RFC or gets panned for failing to do so.
By the way, RFC 1122 specifies 3 seconds as the initial timeout, explicitly, making xpa1492's (3 + 6 + 12 = 21) answer sound like the answer to your mystery.
And yes, the Android TCP stack shares a common ancestor with Windows TCP stack; they were both created using RFC 1122 as a guide ("[The Linux TCP stack is] an implementation of the TCP protocol defined in RFC 793, RFC 1122 and RFC 2001 with the NewReno and SACK extensions").
I suspect that your problem is related to radio interference, so you might want to try enabling F-RTO, as you might be hitting the "magic number" repeatedly because of the environment in which you are testing.

It seems like it is a Windows default configuration...
https://social.technet.microsoft.com/Forums/windows/en-US/9e7f59dd-6469-4ade-91ca-ceb5bcaf2675/windows-7-tcp-parameter-tcpmaxconnectretransmissions-and-tcpinitialrtt?forum=w7itpronetworking
Based on the link and some further reading, Windows will by default do 3 retries, doubling the timeout with each attempt and starting with a 3-second one. So you end up with a 3 s + 6 s + 12 s = 21 s timeout.
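A tiny sketch of that arithmetic, assuming the 3-second initial timeout and 3 attempts described above (the values are taken from the answers, not from any platform API):

import java.util.ArrayList;
import java.util.List;

public class BackoffSum {
    public static void main(String[] args) {
        long initialRtoMs = 3000; // assumed initial retransmission timeout
        int attempts = 3;         // assumed number of attempts before giving up
        List<Long> waits = new ArrayList<>();
        long total = 0;
        long rto = initialRtoMs;
        for (int i = 0; i < attempts; i++) {
            waits.add(rto);
            total += rto;   // 3000, then 6000, then 12000
            rto *= 2;       // exponential backoff doubles the timeout each time
        }
        // Prints: waits=[3000, 6000, 12000], total=21000 ms
        System.out.println("waits=" + waits + ", total=" + total + " ms");
    }
}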

I wrote a crude test app, based on the code in my question, that simulates a connect timeout by attempting to connect to a non-routable address as suggested in this answer. On my Moto G (Android 4.4.2), it throws a SocketTimeoutException in approximately 45 seconds as expected. Curiously, if I do not explicitly set the connect timeout, it instead throws a ConnectException after approximately one minute.
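For reference, a minimal sketch of such a probe; the non-routable address 10.255.255.1, the log tag, and the helper method are assumptions for illustration, not the exact code of my test app:

import android.util.Log;
import java.net.HttpURLConnection;
import java.net.URL;

// Crude connect-timeout probe: 10.255.255.1 is assumed to be non-routable, so the
// TCP handshake never completes and only the connect timeout can end the attempt.
void probeConnectTimeout() {
    long start = System.currentTimeMillis();
    HttpURLConnection probe = null;
    try {
        URL url = new URL("http://10.255.255.1/");
        probe = (HttpURLConnection) url.openConnection();
        probe.setConnectTimeout(45000);   // the value we expect to be honoured
        probe.getResponseCode();          // forces the actual connection attempt
    } catch (Exception e) {
        long elapsed = System.currentTimeMillis() - start;
        // On a device that honours the timeout this logs SocketTimeoutException after ~45 s;
        // on an affected device it shows up after ~21 s.
        Log.d("TimeoutProbe", e.getClass().getSimpleName() + " after " + elapsed + " ms");
    } finally {
        if (probe != null) {
            probe.disconnect();
        }
    }
}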
I'm going to write a slightly more sophisticated test app and send it to the customer to try to determine if the device itself is imposing a 21s timeout, or if some router on their mobile network might be the culprit. I'll update this answer with the results.
Result: This appears to be an OS bug that affects the Samsung SPH-P100 (Galaxy Tab 1) from Sprint. I don't have access to a Tab 1 from any other carrier, so this could be blamed on Samsung or Sprint. It does not seem to generally affect Android 2.x, because I have a ZTE X501 running 2.3.6 which allows me to set longer timeouts.

Related

Android socket stream not shutting down when connection closed [duplicate]

I am running into some issues with the Java socket API. I am trying to display the number of players currently connected to my game. It is easy to determine when a player has connected. However, it seems unnecessarily difficult to determine when a player has disconnected using the socket API.
Calling isConnected() on a socket that has been disconnected remotely always seems to return true. Similarly, calling isClosed() on a socket that has been closed remotely always seems to return false. I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation. We would just constantly have to spam a garbage message over the network to ever know when a socket had closed.
Is there any other solution?
There is no TCP API that will tell you the current state of the connection. isConnected() and isClosed() tell you the current state of your socket. Not the same thing.
isConnected() tells you whether you have connected this socket. You have, so it returns true.
isClosed() tells you whether you have closed this socket. Until you have, it returns false.
If the peer has closed the connection in an orderly way
read() returns -1
readLine() returns null
readXXX() throws EOFException for any other XXX.
A write will throw an IOException: 'connection reset by peer', eventually, subject to buffering delays.
If the connection has dropped for any other reason, a write will throw an IOException, eventually, as above, and a read may do the same thing.
If the peer is still connected but not using the connection, a read timeout can be used.
Contrary to what you may read elsewhere, ClosedChannelException doesn't tell you this. [Neither does SocketException: socket closed.] It only tells you that you closed the channel, and then continued to use it. In other words, a programming error on your part. It does not indicate a closed connection.
As a result of some experiments with Java 7 on Windows XP it also appears that if:
you're selecting on OP_READ
select() returns a value of greater than zero
the associated SelectionKey is already invalid (key.isValid() == false)
it means the peer has reset the connection. However this may be peculiar to either the JRE version or platform.
It is general practice in various messaging protocols to heartbeat each other (keep sending ping packets); the packets do not need to be very large. The probing mechanism will let you detect a disconnected client even before TCP figures it out (the TCP timeout is far higher in general). Send a probe and wait, say, 5 seconds for a reply; if you do not see a reply for, say, 2-3 subsequent probes, your player is disconnected.
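A rough sketch of such a probe, assuming a line-based protocol where any reply counts as an answer; the 5-second reply timeout and the 3-probe threshold are the illustrative values from the answer above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Application-level heartbeat: send "PING", expect some reply within 5 s.
// After 3 consecutive missed replies the peer is treated as disconnected,
// long before any TCP-level timeout would fire.
public class HeartbeatProbe {
    public static boolean isPeerAlive(Socket socket) throws Exception {
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        socket.setSoTimeout(5000);           // per-probe reply timeout
        int misses = 0;
        while (misses < 3) {
            out.println("PING");
            try {
                String reply = in.readLine();
                if (reply == null) {
                    return false;            // orderly close by the peer
                }
                return true;                 // any reply means the peer is alive
            } catch (SocketTimeoutException e) {
                misses++;                    // no reply within 5 s, probe again
            }
        }
        return false;                        // 3 missed probes: treat the player as disconnected
    }
}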
Also, related question
I see the other answer just posted, but I think you are interacting with clients playing your game, so I may pose another approach (while BufferedReader is definitely valid in some cases).
If you wanted to... you could delegate the "registration" responsibility to the client. I.e. you would have a collection of connected users with a timestamp on the last message received from each... if a client times out, you would force a re-registration of the client, but that leads to the quote and idea below.
I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation.
If your Java code did not close/disconnect the Socket, then how else would you be notified that the remote host closed your connection? Ultimately, your try/catch is doing roughly the same thing that a poller listening for events on the ACTUAL socket would be doing. Consider the following:
your local system could close your socket without notifying you... that is just the implementation of Socket (i.e. it doesn't poll the hardware/driver/firmware/whatever for state change).
new Socket(Proxy p)... there are multiple parties (6 endpoints really) that could be closing the connection on you...
I think one of the features of the abstracted languages is that you are abstracted from the minutiae. Think of the using keyword in C# (try/finally) for SqlConnections or whatever... it's just the cost of doing business... I think that try/catch/finally is the accepted and necessary pattern for Socket use.
I faced a similar problem. In my case the client must send data periodically; I hope you have the same requirement. Then I set SO_TIMEOUT with socket.setSoTimeout(1000 * 60 * 5), which throws java.net.SocketTimeoutException when the specified time expires. Then I can detect the dead client easily.
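A minimal sketch of that approach, assuming the client is expected to send a line at least every 5 minutes; the class and method names are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Server-side read loop: if the client stays silent for longer than SO_TIMEOUT,
// readLine() throws SocketTimeoutException and we can drop the client instead
// of waiting for a TCP-level failure.
public class ClientHandler {
    void handle(Socket client) {
        try {
            client.setSoTimeout(1000 * 60 * 5);   // 5 minutes of allowed silence
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                // process the periodic message from the client
            }
            // readLine() returned null: the client closed the connection cleanly
        } catch (SocketTimeoutException e) {
            // the client went silent for too long: treat it as dead
        } catch (Exception e) {
            // connection reset or other I/O failure
        } finally {
            try { client.close(); } catch (Exception ignored) { }
        }
    }
}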
I think this is the nature of TCP connections: by those standards it takes about 6 minutes of silence in transmission before we conclude that the connection is gone!
So I don't think you can find an exact solution for this problem. Maybe the better way is to write some handy code to guess when the server should assume a user's connection is closed.
As #user207421 says, there is no way to know the current state of the connection because of the TCP/IP protocol architecture model. So the server has to notify you before closing the connection, or you have to check it yourself.
This is a simple example that shows how to detect that the socket has been closed by the server:
InetSocketAddress sockAddr = new InetSocketAddress(SERVER_HOSTNAME, SERVER_PORT);
Socket socket = new Socket();
int timeout = 5000;
socket.connect(sockAddr, timeout);
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String data;
while ((data = reader.readLine()) != null) {
    Log.e(TAG, "received -> " + data);
}
Log.e(TAG, "Socket closed!"); // readLine() returned null: the server closed the connection
Here is another, more general solution that works for any data type.
int offset = 0;
byte[] buffer = new byte[8192];
try {
    do {
        int b = inputStream.read();
        if (b == -1)
            break;
        buffer[offset++] = (byte) b;
        // check offset with buffer length and reallocate array if needed
    } while (inputStream.available() > 0);
} catch (SocketException e) {
    // connection was lost
}
// process buffer
That's how I handle it:
while (true) {
    if ((receiveMessage = receiveRead.readLine()) != null) {
        System.out.println(receiveMessage);
    } else {
        // readLine() returned null: the client has closed the connection
        System.out.println("Client has disconnected: " + sock.isClosed());
        System.exit(1);
    }
}
On Linux, write()ing into a socket whose other side has, unknown to you, been closed will provoke a SIGPIPE signal (or exception, however you want to call it). If you don't want to be caught out by the SIGPIPE, you can use send() with the MSG_NOSIGNAL flag. The send() call will then return -1, and in that case you can check errno, which will tell you that you tried to write to a broken pipe (in this case a socket) with the value EPIPE, which according to errno.h is equivalent to 32. As a reaction to the EPIPE you could double back, try to reopen the socket, and try to send your information again.

HttpURLConnection: what is the recommended value for setConnectTimeout and setReadTimeout?

I want to know what the recommended values are for the setConnectTimeout() and
setReadTimeout() methods of HttpURLConnection. I know these values depend on the server and on what task the server is performing, but I still want to know the recommended values for these methods.
It's hard to answer such a question without knowing the typical response time. Users are fairly accustomed to waiting a few seconds on mobile devices whilst on mobile networks.
Personally, if the timeout is between 10 and 15 s I consider it normal latency; if it is 20 s or more, I will most likely quit the app.
From the default documentation:
Both setConnectTimeout(int timeout) and setReadTimeout(int timeout) have existed since API 1.
A SocketTimeoutException is thrown if the connection could not be established in this time. Default is 0 which stands for an infinite timeout.
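Because the default of 0 means the call can block indefinitely, it is usually worth setting explicit values. A minimal sketch, where urlString and the two timeout values are only illustrative, not recommendations:

HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
conn.setConnectTimeout(15000);  // e.g. 15 s to establish the TCP connection
conn.setReadTimeout(30000);     // e.g. 30 s of allowed silence while reading the response
try {
    int code = conn.getResponseCode();  // may throw SocketTimeoutException
    // handle the response
} finally {
    conn.disconnect();
}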
See this link; it gives you more of an idea about this:
https://www.nngroup.com/articles/website-response-times/
You can do it as in this example for your method:
How to add parameters to HttpURLConnection using POST
Or you can follow this other reference:
https://developer.android.com/reference/java/net/HttpURLConnection.html

Android 6.0 Marshmallow BLE : Connection Parameters

Bluetooth Low Energy connection parameter management seems to have changed in Android 6.
I have a BLE peripheral device that needs to use some specific connection parameters (notably the connection interval), and I want to use the minimum connection interval allowed by the BLE specification (i.e. 7.5 ms).
The Android SDK doesn't allow choosing it from the BLE GAP central (the smartphone) side, so the proper way to do it is to make my GAP peripheral device send an L2CAP Connection Parameter Update Request after the GAP connection is made.
The parameters I request are:
conn interval min: 7.5 ms
conn interval max: 7.5 ms
slave latency: 0
supervision timeout: 2000 ms
This worked as expected with all Android devices I've been testing, from 4.3 to 5.x: after sending the L2CAP Connection Parameter Update Request, my device receives an L2CAP Connection Parameter Update Response with 0x0000 (accepted), followed by an LE Connection Update Complete event where I can see that the requested connection parameters have indeed been taken into account.
Now, with a Nexus 9 tablet or with 2 different Nexus 5 devices, all having Android 6.0.1, I can see that the L2CAP Connection Parameter Update Request is always rejected (I receive an L2CAP Connection Parameter Update Response with 0x0001 (rejected)). Then I receive an LE Connection Update Complete event where I can see that the requested connection parameters have NOT been taken into account.
I've been trying this with 2 different implementations on the Peripheral side (one with ST Microelectronics' BlueNRG, one with Nordic Semiconductor's nRF52), both with the exact same result.
Then, after more testing, I tried different parameter sets, changing the conn interval max (I kept the other parameters the same). Here is what I found:
with conn interval max = 18.75ms, update request was accepted with interval set to 18.75ms
with conn interval max = 17.50ms, update request was accepted with interval set to 15.00ms
with conn interval max = 15.00ms, update request was accepted with interval set to 15.00ms
with conn interval max = 13.75ms, update request was accepted with interval set to 11.25ms
with conn interval max = 11.25ms, update request was accepted with interval set to 11.25ms
with any other conn interval max value below 11.25ms, I get rejected.
So the observation is that something has clearly changed with the way Android 6's BLE stack handles the connection parameters. But there doesn't seem to be any kind of information or documentation to confirm that.
My observations lead to the conclusion that the minimum connection interval allowed is now 11.25 ms (which actually fits my needs) instead of 7.5 ms in earlier Android versions. But having found it empirically, I'd want to be sure that I'm not missing some other constraints or rules, or whether that minimum might be dynamic, depending for example on the current battery level...
What would be great would be to have the equivalent of Apple's Bluetooth Design Guidelines (cf. §3.6) to make things clear on how an LE peripheral should deal with this topic.
Is anyone having the same issue, or aware of some more helpful information from Google?
Compare the method connectionParameterUpdate() from GattService.java in AOSP 6.0.1_r17 vs AOSP 5.1.1_r14. In both instances, the call goes all the way into Bluedroid, to BTA_DmBleUpdateConnectionParams() in bta_dm_api.c, with the same params.
6.0:
switch (connectionPriority) {
    case BluetoothGatt.CONNECTION_PRIORITY_HIGH:
        minInterval = 9;   // 11.25ms
        maxInterval = 12;  // 15ms
        break;
    case BluetoothGatt.CONNECTION_PRIORITY_LOW_POWER:
        minInterval = 80;  // 100ms
        maxInterval = 100; // 125ms
        latency = 2;
        break;
}
5.1:
switch (connectionPriority) {
    case BluetoothGatt.CONNECTION_PRIORITY_HIGH:
        minInterval = 6;   // 7.5ms
        maxInterval = 8;   // 10ms
        break;
    case BluetoothGatt.CONNECTION_PRIORITY_LOW_POWER:
        minInterval = 80;  // 100ms
        maxInterval = 100; // 125ms
        latency = 2;
        break;
}
This might be part of the answer to your question. Although BLE allows a CI as low as 7.5 ms, I cannot speculate as to why the link layer would not switch to a lower CI on request by the peripheral. I don't know whether any part of the Android code controls the outcome of negotiations with the peripheral device.
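For completeness, a minimal sketch of how an app can ask for the faster parameter set shown above from the central side; BluetoothGatt#requestConnectionPriority is available from API 21, and the gatt instance is assumed to come from an earlier connectGatt() call:

// Assumes: import android.bluetooth.BluetoothGatt; and a connected 'gatt' object.
// Requests the "high priority" set (maps to ~11.25-15 ms intervals on Android 6,
// ~7.5-10 ms on Android 5.x, per the AOSP code above).
if (android.os.Build.VERSION.SDK_INT >= 21) {
    boolean requested = gatt.requestConnectionPriority(
            BluetoothGatt.CONNECTION_PRIORITY_HIGH);
    // 'requested' only means the request was queued; the actual parameters
    // are decided during the link-layer negotiation.
}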
Google has not provided any documentation about the Bluetooth LE stack changes concerning connection parameter changes even though there have clearly been some in Android 6.
My experience with it has been the same as your own, that being that 11.25ms is now the fastest connection interval allowed in Android 6+.
My educated guess as to why they don't release documentation is that many manufacturers put their own BLE stacks into their phones (the BLE on Samsung and HTC behave differently from vanilla Android).
One other observation I have made that caused a great deal of problems is that Android 6+ will change the connection parameters 2 to 6 times before settling on the requested parameters.
I observed that after requesting a connection parameter update interval of 800ms to 1100ms, I saw the initial interval come back at 7.5ms, that then jumped to 48.75ms and then jumped to the 1098.75ms I requested. Then I subscribed to notifications on one of my services and the connection interval again jumped back to 7.5ms and then back to 1098.75ms. After this, it stabilized at 1098.75ms for the duration of the connection.
These tests were run on a Nexus 6 with Android 6.0.1
Obviously, some very strange things are happening on the Android 6 BLE stack.
11.25 ms is the new minimum connection interval. The reason they don't allow 7.5 ms anymore is that if you stream audio over Bluetooth at the same time, the audio might become choppy.
The Google guys made a mistake in one of the recent commits in Bluedroid by defining BTM_BLE_CONN_INT_MIN_LIMIT as 0x0009, which gives you 1.25 ms x 9 = 11.25 ms. In order to comply with the standard it has to be defined as 0x0006.

Android Bluetooth Low Energy connection timeout while BLE chip is computing

My BLE application requires computation on the server side (the BLE chip) which takes time and results in disconnection.
The flow is like this:
1- Android phone writes the characteristic value to the BLE chip.
2- The chip evaluates this value and starts computation.
3- The connection is lost soon after the computation has started.
What solution can I apply to prevent the disconnection? I have two solutions in mind:
1- Changing the connection interval: Currently Android uses 7.5 ms as the connection interval. Since the computation on the BLE chip takes time, packets are not sent or received during the computation. Increasing the connection interval will decrease the number of lost packets. However, there is no guarantee that the Android phone will accept the new connection parameters.
2- Running the computation in a separate thread: I don't think that BLE chips' SDKs support multi-threading such that, while a computation is going on, the chip keeps receiving and sending packets and prevents the disconnection. I use a CSR chip and I think it doesn't support this.
Please correct me if I am wrong at my points.
Do you have any other suggestions to solve the issue?
Thanks in advance.
Thank you for the answers. I found out what the problem is after spending hours.
First of all, when Android gives error 133 or 129, it is most probably because of the remote device.
At the beginning I thought that the problem occurred because of the supervision timeout. Then I re-configured the connection parameters of the CSR chip but it didn't help.
There is a problem with CSR app development in xIDE (CSR's IDE): when there is a run-time error due to an index overshoot or access to an invalid pointer, you do not receive any error in xIDE. I finally found the array problem and fixed it. Now it works perfectly.
Thanks a lot!
I don't know exactly whether what I'm going to explain is feasible on Android, because I have used BLE only in low-level applications. Anyway, if your problem is the connection parameters, you can try changing the Slave_Latency.
It should be useful since, by playing with this parameter, you can change the number of connection intervals the Central device will wait before it considers the connection lost.
The following equation is useful to derive the connection parameters:
Effective_Connection_Interval = (Connection_Interval) * (1 + Slave_Latency)
Remember that there can be some kind of Supervision_Timeout that may collide with your Effective_Connection_Interval.
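To give a purely illustrative example: with a Connection_Interval of 7.5 ms and a Slave_Latency of 4, the Effective_Connection_Interval becomes 7.5 ms * (1 + 4) = 37.5 ms, i.e. the peripheral may skip connection events for that long without the link being considered lost; the Supervision_Timeout then has to be chosen comfortably larger than this effective interval.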

Android: first http response always takes longer (even without keep-alive)

I am using the HttpURLConnection class for requesting JSON responses.
I realized that no matter whether or not I set
System.setProperty("http.keepAlive", "false");
The first response is always going to take longer, while the next responses are very quick, with and without keepAlive. I am not even using SSL.
Notice that, my app doesn't need to perform any authentication with the server, so there isn't any startup call to the webservices. The first request I make to the webservices is actually the very first.
I am also verifying server-side with "netstat", that by setting keepAlive false on the Android client the connections disappear straight away, while without specifying keepAlive false they keep staying as "ESTABLISHED".
How can you explain that subsequent responses are quicker even if the connection doesn't persist?
ANDROID CODE:
line 1) URL url = new URL(stringUrl);
line 2) HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
line 3) InputStream instream = new BufferedInputStream(urlConnection.getInputStream());
Until line 2 everything always gets executed very quickly, either with keepAlive or not. Line 3 in the first request takes around 3 seconds, while in all subsequent always less than 1 second. Each request is about 0.5KB gzipped.
SYSTEM:
I am testing using a Nexus 5, connected via 3G
My webservices are written in Go, running on a CentOS 6.4 linux server
I am using standard tcp v4
UPDATE:
For the moment I have decided to use a trick: when the fragment is resuming, I make an HTTP HEAD request to the server. That way, all subsequent calls in the next 10 seconds are very quick. If the user waits more than 10 seconds, then the first one will be slow and the next ones will be quick again. This is all happening without using keep-alive.
It's a really big mystery now. It looks like there is some kind of "awake" period which lasts for about 10 seconds. I don't think there is anything strange in my code that could cause that, especially since everything seems to happen at line 3 reported above.
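For reference, a minimal sketch of that warm-up trick; the 5 s timeouts and the background thread are assumptions, and urlString is the same endpoint the real requests use:

// Fire-and-forget HEAD request used only to wake up the radio / network path.
new Thread(new Runnable() {
    @Override
    public void run() {
        HttpURLConnection warmUp = null;
        try {
            warmUp = (HttpURLConnection) new URL(urlString).openConnection();
            warmUp.setRequestMethod("HEAD");
            warmUp.setConnectTimeout(5000);
            warmUp.setReadTimeout(5000);
            warmUp.getResponseCode();   // actually performs the request
        } catch (Exception ignored) {
            // a failed warm-up is harmless
        } finally {
            if (warmUp != null) {
                warmUp.disconnect();
            }
        }
    }
}).start();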
SOLVED! thanks to Mark Allison!
Here is a very clear explanation:
http://developer.android.com/training/efficient-downloads/efficient-network-access.html
Also, everything can easily be monitored using Android DDMS's Network Statistics. If you wait some seconds (let's say 20) from last request, you can see that it takes 2 seconds to transmit a new request.
I suspect that the lag that you are seeing is simply down to the cellular radio transitioning from either low power or idle state to full power (which can take over 2 seconds).
Check out Reto Meier's excellent DevBytes series on Efficient Data Transfer for an in-depth explanation of what's going on.
The first request obviously cannot leverage a keep-alive connection, because thankfully Android doesn't keep connections alive for minutes or hours. Only subsequent requests can reuse keep-alive connections, and only within a short period of time.
It's natural that you have to wait at line 3. Before something like conn.getResponseCode() or conn.getInputStream() is called, the HttpURLConnection is in the CREATED state. There is no network activity until it gets into the CONNECTED state. The Buffered* wrapper shouldn't make any difference here.
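A rough way to see this split, as a sketch (stringUrl and the log tag are placeholders):

// openConnection() itself does no network I/O; the first network activity
// happens when getResponseCode() (or getInputStream()) is called.
long t0 = System.currentTimeMillis();
HttpURLConnection conn = (HttpURLConnection) new URL(stringUrl).openConnection();
long t1 = System.currentTimeMillis();   // t1 - t0 is tiny: still in CREATED state
int code = conn.getResponseCode();      // TCP connect + request + first response bytes happen here
long t2 = System.currentTimeMillis();
Log.d("Timing", "setup=" + (t1 - t0) + " ms, first response=" + (t2 - t1) + " ms");
conn.disconnect();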
I've observed long delays when using SSL when there was a time shift between the server and the device. This happens very often when using an emulator that is not cold-booted. For that I have a small script that runs before the test. It's important that the PC and the emulator are in the same time zone, otherwise it's very counter-productive (see below, because it's hard to show the command inline).
I can imagine that Android saves battery by putting 3G into sleep mode when there is no activity. This is just speculation, but you could test it by creating some other network activity with other apps (browser, Twitter, ...) and then see whether your app needs the same long "think time" until the first connection.
There are other good options for losing time: DNS resolution, Server-side "sleep" (e.g. a virtual machine loading "memory" from disk).
The command to set time of Android emulator:
adb -e shell date -s `date +"%Y%m%d.%H%M%S"`
Edit
To further analyze the problem, you could run tcpdump on your server. Here is a tutorial in case you don't know it well. Store the dumps to files (pcap) and then you can view them with Wireshark. Depending on the traffic on your CentOS server, you may have to set some filters so you only record the traffic from your Android device. I hope that this gives some insight into the problem.
To rule out your server being the bad guy, you could create a small script with curl commands doing the equivalent of what your app does.
You could create a super-tiny service without database or other i/o dependencies and measure the performance. I don't know "Go", but the best thing would be a static JSON file delivered by Apache or nginx. If you only have Go, then take something like /ping -> { echo "pong" }. Please tell us your measurements and observations.
Instead of using so many classes, I suggest you use this library; you can have a look at it here:
http://loopj.com/android-async-http/
Your code will become much shorter: instead of declaring so many classes and writing a bulk of code, you can use just a few lines:
AsyncHttpClient client = new AsyncHttpClient();
client.get("http://www.google.com", new AsyncHttpResponseHandler() {
    @Override
    public void onSuccess(String response) {
        System.out.println(response);
    }
});
It is very efficient at getting the response quickly (1 or 2 seconds including parsing).
I hope this will help you out. :)
