I am running into some issues with the Java socket API. I am trying to display the number of players currently connected to my game. It is easy to determine when a player has connected. However, it seems unnecessarily difficult to determine when a player has disconnected using the socket API.
Calling isConnected() on a socket that has been disconnected remotely always seems to return true. Similarly, calling isClosed() on a socket that has been closed remotely always seems to return false. I have read that to actually determine whether or not a socket has been closed, data must be written to the output stream and an exception must be caught. This seems like a really unclean way to handle this situation. We would just constantly have to spam a garbage message over the network to ever know when a socket had closed.
Is there any other solution?
There is no TCP API that will tell you the current state of the connection. isConnected() and isClosed() tell you the current state of your socket. Not the same thing.
isConnected() tells you whether you have connected this socket. You have, so it returns true.
isClosed() tells you whether you have closed this socket. Until you have, it returns false.
If the peer has closed the connection in an orderly way
read() returns -1
readLine() returns null
readXXX() throws EOFException for any other XXX.
A write will throw an IOException: 'connection reset by peer', eventually, subject to buffering delays.
If the connection has dropped for any other reason, a write will throw an IOException, eventually, as above, and a read may do the same thing.
If the peer is still connected but not using the connection, a read timeout can be used.
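Putting those rules together, here is a sketch of a byte-stream read loop; the timeout value and the method shape are illustrative, not part of the answer:
void readUntilPeerCloses(Socket socket) throws IOException {
    socket.setSoTimeout(30000);             // read timeout for a peer that is connected but idle
    InputStream in = socket.getInputStream();
    byte[] buf = new byte[4096];
    while (true) {
        try {
            int n = in.read(buf);
            if (n == -1) {
                break;                      // peer closed the connection in an orderly way
            }
            // process buf[0..n)
        } catch (SocketTimeoutException e) {
            // peer is still connected but idle: decide whether to keep waiting
        } catch (IOException e) {
            break;                          // connection reset or otherwise dropped
        }
    }
    socket.close();
}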
Contrary to what you may read elsewhere, ClosedChannelException doesn't tell you this. [Neither does SocketException: socket closed.] It only tells you that you closed the channel, and then continued to use it. In other words, a programming error on your part. It does not indicate a closed connection.
As a result of some experiments with Java 7 on Windows XP it also appears that if:
you're selecting on OP_READ
select() returns a value of greater than zero
the associated SelectionKey is already invalid (key.isValid() == false)
it means the peer has reset the connection. However this may be peculiar to either the JRE version or platform.
It is general practice in various messaging protocols for the two ends to keep heartbeating each other (keep sending small ping packets); the packets do not need to be large. This probing mechanism lets you detect a disconnected client well before TCP does (the TCP timeout is far higher). Send a probe and wait, say, 5 seconds for a reply; if you see no reply for 2-3 consecutive probes, treat the player as disconnected.
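A minimal sketch of such a probe, assuming a line-based protocol in which the client answers "PING" with "PONG"; the message names, the 5-second wait and the 3-probe limit are illustrative:
boolean isAlive(Socket socket) {
    try {
        socket.setSoTimeout(5000);          // wait up to 5 seconds for each reply
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        for (int probe = 0; probe < 3; probe++) {
            out.println("PING");
            try {
                String reply = in.readLine();
                if (reply != null) {
                    return true;            // any reply (e.g. "PONG") proves the peer is alive
                }
                return false;               // null: orderly close by the peer
            } catch (SocketTimeoutException e) {
                // no reply within 5 seconds: try the next probe
            }
        }
    } catch (IOException e) {
        // write failed or connection reset
    }
    return false;                           // 3 missed probes (or an error): treat as disconnected
}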
I see the other answer just posted, but since you are interacting with clients playing your game, I'll pose another approach (while the BufferedReader approach is definitely valid in some cases).
If you wanted to... you could delegate the "registration" responsibility to the client. I.e. you would keep a collection of connected users with a timestamp of the last message received from each... if a client times out, you would force a re-registration of the client, but that leads to the quote and idea below.
I have read that to actually determine whether or not a socket has
been closed data must be written to the output stream and an exception
must be caught. This seems like a really unclean way to handle this
situation.
If your Java code did not close/disconnect the Socket, then how else would you be notified that the remote host closed your connection? Ultimately, your try/catch is doing roughly the same thing that a poller listening for events on the ACTUAL socket would be doing. Consider the following:
your local system could close your socket without notifying you... that is just the implementation of Socket (i.e. it doesn't poll the hardware/driver/firmware/whatever for state change).
new Socket(Proxy p)... there are multiple parties (6 endpoints really) that could be closing the connection on you...
I think one of the features of abstracted languages is that you are abstracted from the minutiae. Think of the using keyword in C# (try/finally) for SqlConnections or whatever... it's just the cost of doing business... I think that try/catch/finally is the accepted and necessary pattern for Socket use.
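A rough sketch of the timestamp-based registry idea above; the map, the 60-second timeout and the method names are all illustrative assumptions:
// Track when each client was last heard from; a periodic sweep evicts silent clients.
private final ConcurrentHashMap<String, Long> lastSeen = new ConcurrentHashMap<>();
private static final long TIMEOUT_MS = 60_000;

void onMessageReceived(String clientId) {
    lastSeen.put(clientId, System.currentTimeMillis());    // refresh on every message
}

void sweepTimedOutClients() {
    long now = System.currentTimeMillis();
    lastSeen.entrySet().removeIf(e -> now - e.getValue() > TIMEOUT_MS);
}

int connectedPlayerCount() {
    return lastSeen.size();                                 // the number the question wants to display
}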
I faced a similar problem. In my case the client must send data periodically. I hope you have the same requirement. Then I set SO_TIMEOUT with socket.setSoTimeout(1000 * 60 * 5); which throws java.net.SocketTimeoutException when the specified time expires. Then I can detect the dead client easily.
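A minimal illustration of that approach; reader is assumed to be a BufferedReader wrapping the socket's input stream, and the 5-minute value is taken from the answer:
try {
    socket.setSoTimeout(1000 * 60 * 5);     // allow at most 5 minutes of silence
    String line;
    while ((line = reader.readLine()) != null) {
        // handle the client's periodic data
    }
    // null: the client closed the connection in an orderly way
} catch (SocketTimeoutException e) {
    // nothing received for 5 minutes: treat the client as dead
} catch (IOException e) {
    // connection reset or otherwise dropped
} finally {
    socket.close();                         // may itself throw IOException in a real method
}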
I think this is the nature of TCP connections; in the standards it takes about 6 minutes of silence in transmission before we conclude that our connection is gone!
So I don't think you can find an exact solution for this problem. Maybe the better way is to write some handy code to guess when the server should assume that a user's connection is closed.
As #user207421 says, there is no way to know the current state of the connection because of the TCP/IP Protocol Architecture Model. So the server has to notify you before closing the connection, or you have to check it yourself.
This is a simple example that shows how to know the socket is closed by the server:
InetSocketAddress sockAdr = new InetSocketAddress(SERVER_HOSTNAME, SERVER_PORT);
Socket socket = new Socket();
int timeout = 5000;
socket.connect(sockAdr, timeout);
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String data;
while ((data = reader.readLine()) != null) {
    Log.e(TAG, "received -> " + data);
}
Log.e(TAG, "Socket closed !");
Here is another general solution for any data type.
int offset = 0;
byte[] buffer = new byte[8192];
try {
do {
int b = inputStream.read();
if (b == -1)
break;
buffer[offset++] = (byte) b;
//check offset with buffer length and reallocate array if needed
} while (inputStream.available() > 0);
} catch (SocketException e) {
//connection was lost
}
//process buffer
That's how I handle it:
while (true) {
    if ((receiveMessage = receiveRead.readLine()) != null) {
        System.out.println("first message same :" + receiveMessage);
        System.out.println(receiveMessage);
    } else {
        // readLine() returned null: the client closed the connection
        System.out.println("Client has disconnected: " + sock.isClosed());
        System.exit(1);
    }
}
On Linux, write()ing into a socket whose other side has, unknown to you, been closed will provoke a SIGPIPE signal (or exception, however you want to call it). However, if you don't want to be caught out by the SIGPIPE, you can use send() with the flag MSG_NOSIGNAL. The send() call will then return -1, and in this case you can check errno, which will tell you that you tried to write to a broken pipe (in this case a socket) with the value EPIPE, which according to errno.h is equivalent to 32. As a reaction to the EPIPE you could double back, try to reopen the socket, and try to send your information again.
Related
I am working on a UDP-based socket app, and I have some questions on how to implement the listen function on the receive side.
Is the code below a good way to have the receive-side socket keep listening to the server side? Suppose I don't know when the server side will send a packet to the receive side, so I need to keep the receive function always on. Will it miss a packet or somehow break out of the while(true) loop? If yes, how do I "reconnect" and make the listen loop alive again?
while(true){
try{
if ( udpsocket_receiving.isClosed() || !udpsocket_receiving.isConnected() ) {
serverAddress = InetAddress.getByName(SERVERIP);
udpsocket_receiving = new MulticastSocket(SERVERPORT) ;
udpsocket_receiving.joinGroup(serverAddress);
udpsocket_receiving.setSoTimeout(10000);
}
udpsocket_receiving.receive(recpacket);
// Block of code to do with the packet
} catch ( SocketTimeoutException e ) {
// What suppose to do here if I catch this exception?
} finally {
udpsocket_receiving.close();
continue;
}
}
Can the above method already cope if I don't have Internet access for a certain time? I suppose the method will always catch the SocketTimeoutException, right? But when Internet access resumes later, can I still keep listening for incoming packets?
Suppose I got the first packet from the sender side and am executing the code, but the sender side sends a second packet at that time; will I miss that packet, since the while loop iteration for the first packet has not ended?
Is the code below a good approach to manually close the socket and "reconnect" it again? Will it somehow bind the port so that I cannot use the same port for a new object again? And if this is the correct block of code, should I put it inside the SocketTimeoutException handler in question one?
udpsocket_receiving.leaveGroup(serverAddress);
udpsocket_receiving.disconnect();
udpsocket_receiving.close();
udpsocket_receiving = new MulticastSocket(SERVERPORT) ;
udpsocket_receiving.setSoTimeout(10000);
udpsocket_receiving.joinGroup(serverAddress);
No. You are never connecting the socket, so udpsocket_receiving.isConnected() will never be true. You don't need all that closed/open nonsense inside the read loop. The only person who is ever going to close the socket is you. The SocketTimeoutException means that no datagram was received within the read timeout period. What you do about that is up to you, maybe nothing, but it doesn't mean you have to close and re-initialize the socket. Redoing all this won't solve an Internet connectivity problem.
The only way you will normally miss packets is if they get dropped, but closing and reopening the socket provides a window in which that will definitely occur. Don't do it.
When closing the socket, all you have to do is close it. Leaving all its multicast groups is automatic, as is disconnecting, and as you never connected it there was no need to disconnect it in the first place.
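A sketch of the simplified loop this advice leads to, reusing the names from the question (SERVERIP, SERVERPORT); the buffer size and timeout are illustrative:
MulticastSocket socket = new MulticastSocket(SERVERPORT);
socket.joinGroup(InetAddress.getByName(SERVERIP));
socket.setSoTimeout(10000);
byte[] buf = new byte[1500];
DatagramPacket packet = new DatagramPacket(buf, buf.length);
while (true) {
    try {
        socket.receive(packet);
        // handle packet.getData() from 0 to packet.getLength()
    } catch (SocketTimeoutException e) {
        // no datagram within 10 seconds: nothing to do, just keep listening
    }
}
// close the socket only when you are shutting the receiver down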
For receiving UDP broadcast packets from the server on an Android device, I used a service class and listen for packets in a thread. It receives packets successfully. The problem is that if multiple packets are sent from the server at the same time, some of them are lost.
I even tried with a queue, processing the received packets in a separate thread, but I am still not getting all the packets. I am completely new to network programming; any help would be widely appreciated.
void startListenForUdpBroadcast() {
UDPBroadcastThread = new Thread(new Runnable() {
public void run() {
try {
InetAddress broadcastIP = InetAddress.getByName(UdpConstants.IP_ADDRESS);
Integer port = UdpConstants.RECEIVER_PORT;
while (shouldRestartSocketListen) {
listenAndWaitAndThrowIntent(broadcastIP, port);
}
} catch (Exception e) {
Log.i("UDP", "no longer listening for UDP broadcasts cause of error " + e.getMessage());
}
}
});
UDPBroadcastThread.setPriority(Thread.MAX_PRIORITY); //Setting The Listener thread to MAX PRIORITY to minimize packet loss.
UDPBroadcastThread.start();
}
This code listens for new packets and pushes them to a queue:
private void listenAndWaitAndThrowIntent(InetAddress broadcastIP, Integer port) throws Exception {
byte[] recvBuf = new byte[64000];
if (socket == null || socket.isClosed()) {
socket = new DatagramSocket(port, broadcastIP);
socket.setBroadcast(true);
}
//socket.setSoTimeout(1000);
DatagramPacket packet = new DatagramPacket(recvBuf, recvBuf.length);
socket.receive(packet);
messQueue.add(packet);
}
This checks the queue for new messages and processes them:
/**
* #purpose Checking queue and If anything is added to the queue then broadcast it to UI
*/
private void checkQueue() {
queueThread = new Thread(new Runnable() {
public void run() {
try {
while (shouldRestartSocketListen) {
if (!messQueue.isEmpty()) {
broadcastIntent(messQueue.poll());
}
}
} catch (Exception e) {
}
}
});
queueThread.start();
}
The problem with UDP is that your sender (your server) does not know that you (your Android device) missed some packets. They're not necessarily lost because you can't read them fast enough; sometimes it's just over-the-air interference/congestion or a busy socket.
The receiver would only know if:
you get an error while processing the data because some of it is missing
OR your UDP packets are numbered sequentially in their header and you detect a missing number (e.g. 1, 2, 4 - missing 3; see the sketch after this list)
Once a packet is lost, it's lost. You have a few options:
implement a resend request: upon detection of a missing packet, the receiver notifies the sender to resend that missing packet until it gets it, and your packet processing might be halted until it does
OR be able to ignore it, "hey, we can do it without him", and fill in with blank data (e.g. a bitmap would have some blank pixels, like a broken image)
throttle your sending speed down so the packets won't jam up and get lost
the smaller your packets, the more likely they'll survive
(option 1: all this resend requesting is just pseudo-TCP, so you might just consider abandoning UDP and going TCP)
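As mentioned in the list above, here is a sketch of sequence-numbered datagrams with gap detection on the receiver; the 4-byte header and the variable names (sequenceNumber, expectedSeq, payload) are assumptions for illustration:
// Sender: prepend a 4-byte sequence number to every payload.
ByteBuffer header = ByteBuffer.allocate(4 + payload.length);
header.putInt(sequenceNumber++).put(payload);
socket.send(new DatagramPacket(header.array(), header.position(), address, port));

// Receiver: a jump in the sequence number means packets were lost in between.
socket.receive(packet);
ByteBuffer received = ByteBuffer.wrap(packet.getData(), 0, packet.getLength());
int seq = received.getInt();
if (seq > expectedSeq) {
    // packets expectedSeq .. seq-1 were lost: request a resend or fill in with blanks
}
expectedSeq = seq + 1;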
I think your problem is mainly that you use UDP broadcast over Wi-Fi.
There are two very well documented answers explaining why this is a very slow way to operate and why there are many more packet losses:
answer number one.
answer number two.
The thing I did to solve the extremely slow bandwidth was some kind of multi-unicast protocol:
Manage the list of clients you have connected.
Send each packet you have on your server to each of your clients separately with a send call.
This is the code in Java:
DatagramPacket packet = new DatagramPacket(buffer, size);
packet.setPort(PORT);
for (byte[] clientAddress : ClientsAddressList) {
packet.setAddress(InetAddress.getByAddress(clientAddress));
transceiverSocket.send(packet);
}
If you receive multiple datagrams in a short burst, your receiver loop may have trouble keeping up, and the OS-level RCVBUF in the socket may overflow (causing the OS to drop a packet it indeed did receive).
You might get better handling of short bursts if you increase the RCVBUF size. Prior to doing this, get an idea of how big it is already via socket.getReceiveBufferSize(). Also bear in mind that the number of bytes in the receive buffer must accommodate not just the payload but also the headers and the sk_buff structure that stores packets in the kernel (see, e.g. lxr.free-electrons.com/source/include/linux/…).
You can adjust the receive buffer size via socket.setReceiveBufferSize() - but bear in mind that this is just a hint, and may be overridden by settings in the kernel (if you request a size bigger than the max size allowed by the current kernel network settings, you'll only get the max size).
After requesting a bigger receive buffer, you should double-check what the kernel has allowed by calling socket.getReceiveBufferSize().
If you have the right permissions, you should be able to tweak the max buffer size the kernel will allow - see e.g. https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html
[ Note that, in general, this will accommodate for short bursts where datagrams arrive faster than your client loop can read them - but if the client loop is chronically slower than datagram delivery, you'll still get occasional drops due to overflow. In this case, you need to speed up your client loop, or slow down the sender.
Also, as otherwise noted in other answers, packets may actually be dropped by your network - and mobile networks especially may be prone to this - so if you absolutely need guaranteed delivery you should use TCP. However, if this were your primary problem, you would expect to see dropped packets even when your server sends them slowly, rather than in a burst.]
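A small sketch of the check, request, and verify sequence described above (the 1 MiB request and the port variable are just example values):
DatagramSocket socket = new DatagramSocket(port);
int defaultSize = socket.getReceiveBufferSize();     // what the OS gave us by default
socket.setReceiveBufferSize(1 << 20);                // ask for 1 MiB; this is only a hint
int actualSize = socket.getReceiveBufferSize();      // what the kernel actually allowed
Log.i("UDP", "RCVBUF default=" + defaultSize + " actual=" + actualSize);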
I suppose that you are capturing only a single packet by saying
socket.receive(packet);
This is a blocking I/O call which will wait indefinitely until it receives a packet, so once the first packet arrives it is done waiting and the next statement executes, i.e.
messQueue.add(packet);
However, when multiple packets are being received you need to continue receiving packets. In your case you just stopped receiving packets after the arrival of the first packet.
Note: UDP, being an unreliable protocol, doesn't guarantee packet delivery, so there might be a case where a packet is lost. However, this shouldn't happen on every run of your program. A nice way to check whether the packets are hitting your machine, and whether the problem is within your application (the application not being able to handle the packets received), is tcpdump (a command-line utility for Linux-based OSes and macOS). Use the following command:
sudo tcpdump -i <interface name(one that handles traffic) eth0, eth1 .. en0, en1 (for MAC)> host <ipaddress or hostname of UDP Broadcast server>
Example:
sudo tcpdump -i en1 host 187.156.13.5
(if tcpdump command not found then go forward and install it)
By using this command you will see packets coming in from the destination UDP server on the terminal. If you see more than one packet arriving, you can be sure that the packets are reaching your machine and that it is the application that falls short of handling them.
Hope it helps.
With reference to my explanation above, here are the changes you can make so the code behaves as required.
Instead of creating the socket in the listenAndWaitAndThrowIntent(InetAddress broadcastIP, Integer port) method, create it in startListenForUdpBroadcast() as follows:
socket = new DatagramSocket(port, broadcastIP);
socket.setBroadcast(true);
while (shouldRestartSocketListen) {
listenAndWaitAndThrowIntent(broadcastIP, port, socket);
}
Now you also need to change the implementation of the listenAndWaitAndThrowIntent method as follows:
private void listenAndWaitAndThrowIntent(InetAddress broadcastIP,
Integer port, DatagramSocket socket) throws Exception {
byte[] recvBuf = new byte[64000];
//socket.setSoTimeout(1000);
// change the loop bound to the number of packets you would like to receive; in this example it is 2, so it captures two packets
for (int i = 0; i < 2; i++) {
DatagramPacket packet = new DatagramPacket(recvBuf, recvBuf.length);
socket.receive(packet);
messQueue.add(packet);
}
}
Try this; it should work!
I am currently trying to send some data between two android devices using Bluetooth. I've read plenty of questions regarding bluetooth transfer, sockets, and streams. So far without any luck.
The connection part is working. I get the device address then open a connection using the following :
BluetoothDevice device = BluetoothAdapter.getDefaultAdapter().getRemoteDevice(myOtherDeviceAdress);
BluetoothSocket socket = device.createRfcommSocketToServiceRecord(UUID.fromString(myUUID));
socket.connect();
And then try to send some data using the OutputStream
OutputStream mmout = socket.getOutputStream();
byte[] toSend="Hello World!".getBytes();
mmout.write(toSend);
mmout.flush();
On the receiving end:
mBluetoothServerSocket = mBluetoothAdapter.listenUsingRfcommWithServiceRecord("ccv_prototype", UUID.fromString(myUUID));
mBluetoothSocket = mBluetoothServerSocket.accept(3 * 1000);
InputStream is = mBluetoothSocket.getInputStream();
BufferedReader r = new BufferedReader(new InputStreamReader(is));
And then, different version trying to read the buffer, currently:
int c;
StringBuilder response = new StringBuilder();
try {
while ((c = r.read()) != -1) {
//Since c is an integer, cast it to a char. If it isn't -1, it will be in the correct range of char.
response.append((char) c);
}
} catch (IOException e) {
e.printStackTrace();
}
String result = response.toString();
Log.d("MyTag", "Received String: " + result);
My issue here is that if I don't close the OutputStream, the receiving end never receives the EOF, but if I add mmout.close();, it closes before it even had time to read the message I wanted to send. So far, my only idea is to send a specific token as an EOF but this doesn't sound right.
What did I miss ?
Any help appreciated.
The simple answer is yes. You should send a specific token to represent EOF. When you do a read() operation on a Bluetooth socket, it will either return immediately with some data if there's data ready to be read, or otherwise the read() call will block until there is some data, or some IO exception happens (e.g. the connection drops). This is why you must make use of Threads, particularly for Bluetooth socket read and write operations. What you're attempting to do is rely on the BufferedReader returning -1 to indicate "no more data". Sadly, this isn't how it works. The -1 will only happen in the event of some IO exception or the connection closing.
Detection of where your piece of information (i.e. your packet of data) starts and finishes, or indeed determining when an overall communication session is ended, is something that you handle yourself in your own application protocol (or of course an existing protocol) that works over the sockets. This is an important concept with any protocol that works through streaming sockets. A good example to look at is HTTP, which as you know is conventionally used over TCP. Taking a quick look at HTTP will show you (a) how the HTTP protocol uses headers to tell the recipient how many more bytes to expect for the overall HTTP "message", and (b) how HTTP headers are also used to negotiate when the connection should close. What you cannot do is attempt to use methods on the sockets themselves to determine when the sender has finished writing a message. Similarly if one end is to be aware that the other end wants to close the connection, that should be negotiated over the application protocol.
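For example, a minimal length-prefixed framing over the same Bluetooth socket (this is one possible application protocol, not a BluetoothSocket feature): the sender writes a 4-byte length followed by the payload, and the receiver reads exactly that many bytes, so no EOF or close is needed to delimit a message.
// Sender
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] payload = "Hello World!".getBytes(StandardCharsets.UTF_8);
out.writeInt(payload.length);      // 4-byte length prefix
out.write(payload);
out.flush();

// Receiver
DataInputStream in = new DataInputStream(mBluetoothSocket.getInputStream());
int length = in.readInt();         // blocks until the 4-byte prefix arrives
byte[] message = new byte[length];
in.readFully(message);             // blocks until the whole payload has been read
String result = new String(message, StandardCharsets.UTF_8);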
The Problem
An app I'm maintaining keeps getting socket timeouts after approximately 21000 ms, despite the fact that I've explicitly set longer timeouts. This seemingly magical value of 21000 ms has come up in a few other SO questions and answers, and I'm trying to figure out exactly where it comes from.
Here's the essence of my code:
HttpURLConnection connection = null;
try {
URL url = new URL(urlString);
connection = (HttpURLConnection) url.openConnection();
connection.setConnectTimeout(45000);
connection.setReadTimeout(90000);
int responseCode = connection.getResponseCode();
if (responseCode == 200) {
// code omitted
}
} catch (Exception e) {
// code omitted
}
finally {
if (connection != null) {
connection.disconnect();
}
}
Catching all exceptions in one block is admittedly not ideal, but it's inherited code and I'm reluctant to mess with it. I know it's catching SocketTimeoutException after 21000 ms because it logs the simple name of the exception class.
Clues
I found a question where an asker was getting a ConnectTimeout after 21000 ms, despite explicitly setting it to 40000 ms. That's intriguing despite the exception class being different.
I also found a poorly-explained answer which claims that the server side is responsible for the 21000 ms timeout.
My Hunch
I don't think any action or inaction of the server could cause a shorter-than-expected socket timeout on the client. But maybe the TCP stacks in Windows and Android share a common ancestor, or at least use similar connect retry logic.
Could it be that Android imposes a maximum connect timeout of 21000 ms, and setting a longer timeout in HttpURLConnection is futile? Or could this timeout be triggered by some Windows machine on the path between the mobile device and the server? Do some Android versions throw a SocketTimeoutException where others throw a ConnectException?
According to RFC 1122 (TRANSPORT LAYER -- TCP), section 4.2.3.1 ("Retransmission Timeout Calculation"):
"Implementation also MUST include exponential backoff for successive RTO values for the same segment".
So xpa1492's answer sounds plausible (despite its Windows-specific nature); the implementation of a TCP stack either follows this RFC or gets panned for failing to do so.
By the way, RFC 1122 specifies 3 seconds as the initial timeout, explicitly, making xpa1492's (3 + 6 + 12 = 21) answer sound like the answer to your mystery.
And yes, the Android TCP stack shares a common ancestor with Windows TCP stack; they were both created using RFC 1122 as a guide ("[The Linux TCP stack is] an implementation of the TCP protocol defined in RFC 793, RFC 1122 and RFC 2001 with the NewReno and SACK extensions").
I suspect that your problem is related to radio interference, so you might want to try enabling F-RTO, as you might be hitting the "magic number" repeatedly because of the environment in which you are testing.
It seems like it is a Windows default configuration...
https://social.technet.microsoft.com/Forums/windows/en-US/9e7f59dd-6469-4ade-91ca-ceb5bcaf2675/windows-7-tcp-parameter-tcpmaxconnectretransmissions-and-tcpinitialrtt?forum=w7itpronetworking
Based on the link and some further reading, Windows will by default do 3 retries and double the timeout with each attempt, starting with a 3 sec one. So you end up with a 3 sec + 6 sec + 12 sec = 21 sec timeout.
I wrote a crude test app, based on the code in my question, that simulates a connect timeout by attempting to connect to a non-routable address as suggested in this answer. On my Moto G (Android 4.4.2), it throws a SocketTimeoutException in approximately 45 seconds as expected. Curiously, if I do not explicitly set the connect timeout, it instead throws a ConnectException after approximately one minute.
I'm going to write a slightly more sophisticated test app and send it to the customer to try to determine if the device itself is imposing a 21s timeout, or if some router on their mobile network might be the culprit. I'll update this answer with the results.
Result: This appears to be an OS bug that affects the Samsung SPH-P100 (Galaxy Tab 1) from Sprint. I don't have access to a Tab 1 from any other carrier, so this could be blamed on Samsung or Sprint. It does not seem to generally affect Android 2.x, because I have a ZTE X501 running 2.3.6 which allows me to set longer timeouts.
I have what appears to be a timing problem between a client (Galaxy Nexus) and a custom server since upgrading from Ice Cream Sandwich to Jelly Bean. Here is the general flow:
Client opens socket, issues HTTP get to server
Server accepts, starts new thread, responds with HTTP header and 200 OK.
Server writes (binary) file to socket.
Client reads data from socket and saves to a file.
After server thread writes all data, it closes the socket, and terminates
This has worked well over the past several months prior to the Jelly Bean update. Since the update the binary transfer succeeds about 70% of the time. The remaining 30% fails
when 'serverSocket.getInputStream().read' returns a -1 indicating the end of stream has been reached. No data has been read, no error exceptions raised, nothing in logcat.
The possibility of a timing problem arises when I change the server behavior in step #5. The thread was closing the socket after the write with the observed problems. If I remove the socket close, terminate the thread after the write, and let the OS eventually close the socket then it seems to work all the time.
I used tcpdump and WireShark to look at the packets in both the successful and failed cases. In the failed case a socket is closed in a few milli-seconds while in the successful case the socket is closed is a quarter or more of a second. The net of this is that any delay we cause in the socket closing improves our chances for success.
If anyone has any suggestions with what we may be doing to cause this problem or suggestions on how to narrow down the problem please feel free to respond. I can add code samples if required.
It looks like when the server asks for the connection to be closed, the socket is closed immediately. Maybe the default socket linger time has changed between versions?
Try setting the socket linger time using:
socket.setSoLinger(boolean on, int timeout);
to have the server wait some time before closing the channel if some data is still waiting to be sent.
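For illustration, enabling a 5-second linger on the server-side socket before closing (the 5-second value is arbitrary):
socket.setSoLinger(true, 5);   // close() will now block for up to 5 seconds while unsent data drains
socket.close();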
If this doesn't solve it, you can change your flow above to:
...
4.Client reads data from socket and saves to a file.
5.Client send confirmation to server.
6.Server close connection.
--EDITED--
A graceful way to achieve the above, without additional TCP data packets traveling for the closing confirmation, is:
when the server finishes writing to the socket, it calls:
socket.shutdownOutput();
when the client's socket.read() returns -1, the client calls:
socket.close();
This ensures that the client is informed that all data has been sent, and the sender will wait for the socket closure protocol to complete.
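A sketch of that sequence with both sides shown; the stream and variable names (out, in, buffer, fileOutput) are illustrative:
// Server: after writing the whole file
out.flush();
socket.shutdownOutput();       // sends FIN; the client's read() returns -1 once the last byte is consumed
// keep the socket open until the client has closed its end, then call socket.close()

// Client: read until end of stream, then close
int n;
while ((n = in.read(buffer)) != -1) {
    fileOutput.write(buffer, 0, n);
}
socket.close();                // EOF seen, so all data arrived; now close our end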