I would like to monitor a Wi-Fi Direct network (bandwidth, latency, etc.). How can I measure the time it takes for X bytes to be received over the network (Wi-Fi Direct)? I mean TX + time on the network + RX time.
In DDMS (Android Studio) I found the Network Statistics option, but it only shows transmission and reception times (and it is not very accurate because it is displayed on a graph).
I had thought about using System.currentTimeMillis(), but I have not found a way to synchronize the clocks of the two devices.
TX:
socket.bind(null);
socket.connect(new InetSocketAddress(host, port), SOCKET_TIMEOUT);

int tamMensaje = 1024 * 1024; // 1 MB
byte[] bitAleatorio = new byte[tamMensaje]; // each byte is in [-128, 127]
for (int x = 0; x < bitAleatorio.length; x++) {
    bitAleatorio[x] = (byte) Math.round(Math.random()); // fills the buffer with 0s and 1s
}
DataOutputStream DOS = new DataOutputStream(socket.getOutputStream());
for (int i = 0; i < 1024; i++) {
    DOS.write(bitAleatorio, i * 1024, 1024); // send the buffer in 1 KB chunks
}
DOS.flush();
RX:
ServerSocket serverSocket = new ServerSocket(SERVERPORT);
Socket client = serverSocket.accept();
DataInputStream DIS = new DataInputStream(client.getInputStream());

int tamMensaje = 1024 * 1024;
byte[] msg_received2 = new byte[tamMensaje];
for (int i = 0; i < 1024; i++) {
    // read() may return fewer bytes than requested; readFully() waits for the whole 1 KB chunk
    DIS.readFully(msg_received2, i * 1024, 1024);
}
client.close();
serverSocket.close();
Thanks
There are two approaches that can solve the problem reasonably accurately:
Sync time on both devices.
You can use NTP for that and either install a separate app like this one or implement it in your code using a library like this one.
After that you can rely on System.currentTimeMillis() to compare the send and receive times of a message on the two devices.
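A minimal sketch of this, built on the question's own TX/RX code (it assumes the clocks have already been NTP-synced and reuses the existing socket and buffer names):

// Sender: prefix the 1 MB payload with the wall-clock send time.
DataOutputStream dos = new DataOutputStream(socket.getOutputStream());
dos.writeLong(System.currentTimeMillis()); // only meaningful once both clocks are synced
dos.write(bitAleatorio, 0, bitAleatorio.length);
dos.flush();

// Receiver: read the timestamp, then the payload, and compare with the local clock.
DataInputStream dis = new DataInputStream(client.getInputStream());
long sentAt = dis.readLong();
byte[] payload = new byte[1024 * 1024];
dis.readFully(payload);
long elapsedMs = System.currentTimeMillis() - sentAt; // TX + network + RX time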
Use relative time, or the time of a single device.
You can implement something like ICMP echo using UDP datagrams (they are faster than TCP). The algorithm is as follows (assuming we have devices A and B):
A sends a packet to B and saves timeSent somewhere;
B receives the packet and immediately sends an ACK packet back to A;
A receives the ACK and saves timeRecv;
Finally, long latency = (timeRecv - timeSent) / 2;.
This will work for small payloads, like ICMP echo. Measuring the transmission time of a large transfer can be done by sending separate ACK responses for both the start and the end of receiving it.
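A minimal sketch of that echo exchange in plain Java; the port (8888), the buffer size and the peer address (192.168.49.1, a typical Wi-Fi Direct group owner address) are placeholders to adapt:

// Device B: echo every received datagram straight back to its sender.
DatagramSocket echoSocket = new DatagramSocket(8888);
byte[] echoBuf = new byte[16];
while (true) {
    DatagramPacket p = new DatagramPacket(echoBuf, echoBuf.length);
    echoSocket.receive(p);
    echoSocket.send(new DatagramPacket(p.getData(), p.getLength(), p.getAddress(), p.getPort()));
}

// Device A: send a small probe, wait for the echo, and halve the round trip.
DatagramSocket probeSocket = new DatagramSocket();
probeSocket.setSoTimeout(2000); // give up after 2 s
byte[] probe = new byte[16];
InetAddress peer = InetAddress.getByName("192.168.49.1");
long timeSent = System.nanoTime();
probeSocket.send(new DatagramPacket(probe, probe.length, peer, 8888));
probeSocket.receive(new DatagramPacket(new byte[16], 16)); // B's ACK
long timeRecv = System.nanoTime();
double latencyMs = (timeRecv - timeSent) / 2.0 / 1e6; // one-way estimate in ms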
I am trying to do multicast in Android. I have three phones (all Galaxy S5). One phone has WiFi tethering turned on and is acting as the AP (mobile data is turned off). Of the other two phones, one is transmitting to a multicast address and the other is receiving, with both connected to the AP of the first phone. The transmitting and receiving phones are kept side by side.
I found that of the 1000 packets that I am sending, I am receiving only about 200. Below is an outline of how I am sending:
MulticastSocket mMulticastSocket = new MulticastSocket(port);
InetAddress multicastGrp = InetAddress.getByName("239.255.255.250");
mMulticastSocket.joinGroup(multicastGrp);

String sendStr = "";
int packetSize = 1400;
for (int i = 0; i < packetSize; i++)
    sendStr = sendStr + "a";

for (int i = 0; i < 1000; i++)
    mMulticastSocket.send(new DatagramPacket(sendStr.getBytes(), sendStr.getBytes().length, multicastGrp, port));
Receive is something like,
// acquire multicast lock
byte[] buffer = new byte[2048];
DatagramPacket rPack = new DatagramPacket(buffer, buffer.length);
mMulticastSocket.setReceiveBufferSize(1024 * 64);
int recvCount = 0;
while (true) {
    mMulticastSocket.receive(rPack);
    recvCount++;
}
// release multicast lock
Both send and receive are done in worker threads. I also found that as the value of 'packetSize' decreases, the number of received packets increases. I guess it is due to CPU load or the receive buffer, but in any case, I want to multicast packets of 1400 bytes and receive as many as possible (I know that the number of received packets depends on the channel between the two phones, but I think keeping them side by side gives almost the best channel that can be had). Also, when I am doing UDP unicast, I am able to receive about 900 of the 1000 packets sent.
I am not able to understand why the number of received packets is so low for multicasting. What is it that I am missing?
Your server is sending packets as fast as possible. There is nothing in your sender loop that throttles back the rate at which packets are sent, so each packet goes out microseconds after the previous one. Most likely something along the network stack causes some buffers to fill up, and UDP has no built-in flow control or congestion control. If you add a small delay (~1 ms to 5 ms) between packets, you will probably see much less packet loss.
for (int i = 0; i < 1000; i++)
{
    mMulticastSocket.send(new DatagramPacket(sendStr.getBytes(),
            sendStr.getBytes().length, multicastGrp, port));
    Thread.sleep(5, 0); // quick hack to throttle back the rate (sleep() may throw InterruptedException)
}
I'm very new to Python and I want to make an application that lets me use the accelerometer values from my phone on Windows.
With SL4A (Python for Android) I created client and server scripts that read the values from the accelerometer and send them to the server (running on the PC) through a socket.
This is my server code (Windows PC):
import socket

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('', 5000))
serversocket.listen(5)
while True:
    connection, address = serversocket.accept()
    buf = connection.recv(64)
    if len(buf) > 0:
        print buf
and this is my client code (Android phone):
import android, socket, time

droid = android.Android()
clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientsocket.connect(('192.168.2.6', 5000))

def readAcc():
    dt = 100
    endTime = 300
    timeSensed = 0
    droid.startSensingTimed(2, dt)
    while timeSensed <= endTime:
        senout = droid.sensorsReadAccelerometer().result
        time.sleep(dt / 1000.0)
        timeSensed += dt
        print senout
        clientsocket.send(str(senout).strip('[]'))

readAcc()
droid.stopSensing()
Output example:
1.8659877, 8.8259849999999993, 3.5412905000000001
This works fine if I only want to read the values once, but I was wondering how to go on from here: I want to be able to use the values for controlling games and other applications.
Thanks for reading :)
My idea would be to first experiment with different motions of the device (say, a fast upward motion), gather the output values, and then simply trigger events (or whatever you need to trigger in a game) based on a specific range of accelerometer values (similar movements should have similar value ranges).
In my setup, I use a PC (laptop) with Win7 that creates a hosted network (WiFi). I program it in C++/CLI (.NET). On the other side, I have a tablet with Android (ICS). The tablet is the server and the PC is the client. I chose TCP for reliability; I exchange commands and I don't want to lose packets.
On the tablet side, I use a ServerSocket that waits for communication with the method accept(). Then it returns a Socket, etc.
On the PC side, I use a Socket and connect it to the tablet using connect(). After that, I send() bytes to the tablet, which reads them from its input stream... Then it returns bytes through its output stream, and they are retrieved on the PC by calling receive().
The problem is that sometimes a send/receive cycle takes a few seconds, and other times it takes a few tens of milliseconds. So here are my questions:
Is this normal behavior that we can expect for such communications?
Can we fine tune this behavior by setting, say, buffer sizes or other things?
What is the longest step: connecting or transferring data?
I would appreciate it if someone could give me some clues. I'm pretty new to network programming and sometimes "I'm losing my Latin!" (well... a French expression!)
Code examples: (skipping trivial code and comments...)
PC side (C++/CLI)
Socket^ lpSocket = gcnew Socket( SocketType::Stream, ProtocolType::Tcp );
lpSocket->Connect( Addr, Port ); // PROBLEMS HERE
SocketError err;
array<Byte>^ lOutBuffer = Encoding::UTF8->GetBytes( "Here's a sentence.\r\n" );
lpSocket->Send( lOutBuffer, 0, lOutBuffer->Length, SocketFlags::None, err );
array<Byte>^ lInBuffer = gcnew array<Byte>( 1024 );
int N = lpSocket->Receive( lInBuffer, 0, lInBuffer->Length, SocketFlags::None, err );
String^ lpOutString = Encoding::UTF8->GetString( lInBuffer, 0, N );
Tablet side (Android ICS) The code is a little more complicated due to threading. I will skip the mandatory try/catch...
// ... We are in a background thread
ServerSocket mSS = new ServerSocket();
mSS.setReuseAddress( true ); // must be set before bind() to take effect
mSS.bind( new InetSocketAddress( mPort ) );
Socket lCS = mSS.accept();
lCS.setSoTimeout( 1 );
lCS.setTcpNoDelay( true );
BufferedInputStream lIn = new BufferedInputStream( lCS.getInputStream() );
PrintStream lPS = new PrintStream( lCS.getOutputStream() );
byte[] lBuf = new byte[1024];
String lInStr="";
while ( true )
{
try
{
if ( lInStr.endsWith( "\r\n" ) ) { break; }
int N;
if ((N=lIn.read( lBuf,0,1024))!=-1) { lInStr += new String( lBuf, 0, N, "UTF-8" );}
else { break; }
}
catch ( SocketTimeoutException e )
{ /* Can occur because timeout is set to 1ms */ }
}
lPS.print( lInStr ); // Just echo input to output
lPS.flush();
lPS.close();
lCS.close();
mSS.close();
So, it is a simple app where I can accept one connection. Once it is accepted, I process it, then I close the sockets and that's it.
The problems occur (it seems) on the PC side with connect(). I often get a Timeout or ConnectionRefused error. If I redo the connect() in a loop, I eventually obtain a connection. But even when there is no timeout, the connection can take a few seconds to establish.
The WiFi on the tablet I was working with is buggy. I tried with a phone and other tablets and it works quite fine.
I don't know whether it is a bug in that version of Android (4.0.3) or a hardware issue, because I could not test two different tablets running the same Android version.
Anyway...
I appreciate the help you gave me in pointing out that the connection delay looked too long. This is what put me on the right path to the solution!
My application implements VpnService to intercept network traffic and provide tailored responses. The goal is to handle traffic to specific addresses, and discard other requests.
Presently I'm successful in parsing incoming requests and constructing and sending responses. The problem, however, is that these responses do not arrive as the actual response to the original request; testing with a socket connection simply times out.
In order to make this distinction, I'm presently parsing the raw IP packets from the VpnService's input stream as follows:
VpnService.Builder b = new VpnService.Builder();
b.addAddress("10.2.3.4", 28);
b.addRoute("0.0.0.0", 0);
b.setMtu(1500);
...
ParcelFileDescriptor vpnInterface = b.establish();
final FileInputStream in = new FileInputStream(
vpnInterface.getFileDescriptor());
final FileOutputStream out = new FileOutputStream(
vpnInterface.getFileDescriptor());
// Allocate the buffer for a single packet.
ByteBuffer packet = ByteBuffer.allocate(32767);

// We keep forwarding packets till something goes wrong.
try {
    while (vpnInterface != null && vpnInterface.getFileDescriptor() != null
            && vpnInterface.getFileDescriptor().valid()) {
        packet.clear();
        SystemClock.sleep(10);
        // Read the outgoing packet from the input stream.
        final byte[] data = packet.array();
        int length = in.read(data);
        if (length > 0) {
            packet.limit(length);
            /*
             1. Parse the TCP/UDP header
             2. Create our own socket with the same src/dest port/IP
             3. Use protect() on this socket so it is not routed over tun0
             4. Send the packet body (excluding the header)
             5. Obtain the response
             6. Add the TCP header to the response and forward it
            */
            final IpDatagram ip = IpDatagram.create(packet);
            ...
        }
    }
IpDatagram is a class through which create() parses the byte array into a representation of the IP packet, containing the IP header, options and body. I proceed to parse the byte array of the body according to the protocol type. In this case, I'm only interested in IPv4 with a TCP payload—here too I create a representation of the TCP header, options and body.
After obtaining an instance of IpDatagram, I can determine the source and destination IP (from the IP header) and port (from the TCP header). I also take note of the request's TCP flags (such as SYN, ACK and PSH) and its sequence number.
Subsequently I construct a new IpDatagram as a response, where:
The source and destination IP are reversed from the incoming request;
The source and destination ports are reversed from the incoming request;
The TCP acknowledgement number is set to the incoming request's sequence number;
A dummy HTTP/1.1 payload is provided as the TCP's body.
I convert the resulting IpDatagram to a byte array and write it to the VpnService's output stream:
TcpDatagram tcp = new TcpDatagram(tcpHeader, tcpOptions, tcpBody);
IpDatagram ip = new Ip4Datagram(ipHeader, ipOptions, tcp);
out.write(ip.toBytes());
My application displays the outgoing datagram as it should be, but nevertheless, all connections are still timing out.
Here's a sample incoming TCP/IP packet in hexadecimal:
4500003c7de04000400605f10a0203044faa5a3bb9240050858bc52b00000000a00239089a570000020405b40402080a00bfb8cb0000000001030306
And the resulting outgoing TCP/IP packet in hexadecimal:
450000bb30394000800613194faa5a3b0a0203040050b92400a00000858bc52b501820001fab0000485454502f312e3120323030204f4b0a446174653a205475652c203139204e6f7620323031332031323a32333a303320474d540a436f6e74656e742d547970653a20746578742f68746d6c0a436f6e74656e742d4c656e6774683a2031320a457870697265733a205475652c203139204e6f7620323031332031323a32333a303320474d540a0a48656c6c6f20776f726c6421
However, a simple test simply times out; I create a new socket and connect it to the IP above, yet the response provided above never arrives.
What could be going wrong? Is there any way to troubleshoot why my response isn't arriving?
This TCP/IP response doesn't contain a valid TCP header checksum:
450000bb30394000800613194faa5a3b0a0203040050b92400a00000858bc52b501820001fab0000485454502f312e3120323030204f4b0a446174653a205475652c203139204e6f7620323031332031323a32333a303320474d540a436f6e74656e742d547970653a20746578742f68746d6c0a436f6e74656e742d4c656e6774683a2031320a457870697265733a205475652c203139204e6f7620323031332031323a32333a303320474d540a0a48656c6c6f20776f726c6421
More generally, the request and response mechanism is very picky. This is of course due to the very nature of networking: since the kernel takes care of verifying that responses are well formed and of deciding which port a response should be delivered to, anything that doesn't add up will simply be discarded as a bad packet. This also holds true when responding through the VpnService's output stream, as you're operating on the network layer.
To return to the specific case above: the IP header is correct (including its checksum), but the TCP segment is not. You need to compute the TCP checksum not just over the TCP segment itself, but over the segment prefixed by the pseudo header, as follows:
[Diagram of the TCP pseudo header (source: tcpipguide.com): source IP address, destination IP address, a reserved zero byte, the protocol number (6 for TCP), and the TCP segment length.]
The checksum should then be computed over these 12 pseudo-header bytes followed by the TCP header and payload, with the checksum field itself set to zero during the computation.
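A sketch of that computation in Java (the method and variable names are mine, not from the code above): the checksum is the 16-bit ones' complement of the ones'-complement sum of the pseudo header plus the TCP segment.

// srcIp/dstIp: the 4-byte IPv4 addresses; tcpSegment: TCP header + payload with
// its checksum field (bytes 16-17 of the TCP header) already set to zero.
static int tcpChecksum(byte[] srcIp, byte[] dstIp, byte[] tcpSegment) {
    byte[] buf = new byte[12 + tcpSegment.length];
    System.arraycopy(srcIp, 0, buf, 0, 4);
    System.arraycopy(dstIp, 0, buf, 4, 4);
    buf[8] = 0;                                   // reserved byte
    buf[9] = 6;                                   // protocol number for TCP
    buf[10] = (byte) (tcpSegment.length >>> 8);   // TCP length = header + data
    buf[11] = (byte) (tcpSegment.length & 0xFF);
    System.arraycopy(tcpSegment, 0, buf, 12, tcpSegment.length);

    long sum = 0;
    for (int i = 0; i < buf.length; i += 2) {
        int hi = buf[i] & 0xFF;
        int lo = (i + 1 < buf.length) ? (buf[i + 1] & 0xFF) : 0; // pad odd length with zero
        sum += (hi << 8) | lo;
    }
    while ((sum >>> 16) != 0) {                   // fold the carries back into the low 16 bits
        sum = (sum & 0xFFFF) + (sum >>> 16);
    }
    return (int) (~sum & 0xFFFF);
}

The 16-bit result then goes back into bytes 16-17 of the TCP header (big-endian) before the packet is written to the tun interface.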
I have a problem with socket send (or write) function on android.
I have a network lib that I use on Linux and Android; the code is written in C.
On Android, the application creates a service, which loads the native code and creates the connection with the help of my network lib. The connection is a TCP socket. When I call send (or write, no difference), the code hangs in this call in most cases. Sometimes it unhangs after 10-120 seconds; sometimes it waits longer (until I kill the application). The data size being sent is about 40-50 bytes. The first data sent (a 5-byte handshake) never hangs (or I am just lucky). The send that hangs is usually the one right after the handshake packet. The time between sending this first handshake packet and the hanging send is about 10-20 seconds.
The socket is also used on another thread (I use pthreads), where recv is called. But I am not sending data to the Android side at this time, so recv is just waiting while I call send.
I am sure the other side is waiting for the data: I see that recv on the other side returns with EAGAIN every 3 seconds (I set a timeout) and immediately calls recv again. recv always waits for 10 bytes (the minimal packet size).
I am unable to reproduce this behavior with Linux-to-Android or Linux-to-Linux transfers, only Android-to-Linux. I can reproduce it with two different Android devices available to me, so I don't think the problem is broken hardware on one particular device.
I tried to set SO_KEEPALIVE and TCP_NODELAY options with no success.
What can issue the hang-up on send/write calls and how can I resolve this?
Socket created with this code:
int sockfd, n;
struct addrinfo hints, *res, *ressave;

bzero(&hints, sizeof(struct addrinfo));
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;

if ((n = getaddrinfo(host, serv, &hints, &res)) != 0)
{ /* stripped error handling */ }
ressave = res;

do
{
    sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sockfd < 0) continue;

    if (connect(sockfd, res->ai_addr, res->ai_addrlen) == 0)
    {
        break; /* success */
    }
    close(sockfd); /* ignore this one */
} while ((res = res->ai_next) != NULL);
The hanging send operation is:
mWriteMutex.lock();
mSocketMutex.lockRead();
ssize_t n = send(mSocket, pArray, size, 0);
mSocketMutex.unlock();
mWriteMutex.unlock();
The problem was solved with the help of Nikolai N Fetissov in the comments: his pointed question unblocked my mind, and I found the problem in RWMutex.