I want to measure the data sent and received for my HTTP request over the mobile network. I found the TrafficStats API in Android 2.2, but the numbers I get from it seem too low. I am measuring on a real device (HTC Desire), over a 3G network.
Immediately before the request I compute the tx/rx byte counts since boot, then do a GET request, then calculate the tx/rx numbers again:
long bytesStart = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
// execute request (Apache HTTP client)
client.execute(get);
long totalBytesNow = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
long bytesDiff = totalBytesNow - bytesStart;
For bytesDiff I get numbers around 1.5 KB, where it should have been 3.5 KB. The total byte count is around 5 MB, which seems a reasonable amount of data for the 10 hours the device has been running.
I read here on Stack Overflow that the numbers come from a file. Can it be that the data is not flushed to the file immediately? Or why are my numbers always too low? (I checked the data size on the server and it really is larger.)
Is this a problem with my HTC Desire (Android 2.2 by HTC), or does the TrafficStats API work correctly for others?
Thanks,
Martin
Could it be that the HTTP stack is compressing your data?
Also, the files under /sys are not actual files in the usual sense; they are a file interface to kernel data (the 'everything is a file' philosophy of Unix/Linux).
OK, I found the problem in my code. After "client.execute(get);" only the headers etc. have been transferred, not the content. After reading the content InputStream, the numbers are correct...
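For anyone hitting the same issue, the corrected measurement looks roughly like this (a sketch, assuming the same Apache HTTP client and GET request as above):

import android.net.TrafficStats;
import org.apache.http.HttpResponse;
import org.apache.http.util.EntityUtils;

long bytesStart = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();

HttpResponse response = client.execute(get);
// Reading the entity forces the body to actually be downloaded,
// so it is included in the TrafficStats counters.
String body = EntityUtils.toString(response.getEntity());

long totalBytesNow = TrafficStats.getTotalTxBytes() + TrafficStats.getTotalRxBytes();
long bytesDiff = totalBytesNow - bytesStart;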
Related
I'm doing a little test where I send a short string (4-8 bytes) to a client every 0.5 seconds from a Node.js server using ws. The client is either an iOS/Android app or a web browser. The client does not send anything back to the server, except for TCP ACKs I suppose. The weird thing is that when I'm debugging the app on iOS using Xcode's network report, I can only see the client sending out some bytes (approx. 500) when the connection is established, probably during the HTTP handshake. For the remaining time, zero data goes out from the device; there is only data coming in. The same result occurs when receiving data in Chrome and tracking the traffic with nettop.
What makes me so confused is that on Android, almost the same amount of data goes out of the device as comes in when inspecting the network usage with the Android profiler/Battery Historian/TrafficStats. I have tried different libraries for the WebSocket implementation and different Android devices.
I have a hard time believing the ACKs sent out by the Android device are as big as the messages received, even though each message is just a small string of four characters.
So my questions are:
Could it be that nettop/Xcode's network report simply ignores all the ACKs, so that in reality as much data is sent out by Chrome/iOS as by Android?
Is there something 'wrong' with the libraries used on Android, or could it be something in its operating system?
Could an ACK be as big as a simple TCP packet with 4 characters of payload in it?
[Screenshot: result when using WebSocket]
[Screenshot: data received/transmitted according to Android Battery Historian]
[Screenshot: data received/transmitted on iOS according to Network Report]
Could an ACK be as big as a simple TCP packet with 4 characters of payload in it?
An ACK consists of the IP and the TCP header and no payload. With IPv4 this means at least 20 bytes IP header and 20 bytes TCP header, i.e. 40 bytes. A packet with 4 bytes payload is only larger by 4 bytes, i.e. 44 bytes or just 10%.
The network report on Android shows 68350 bytes in vs. 61370 bytes out, which is a difference of about 11%. This matches the expected difference.
I'm not familiar with what iOS measures here, but it probably either counts only the application payload (i.e. the 4 bytes) or simply ignores packets with no payload, i.e. the ACKs.
With a normal Characteristic Read, only the MTU size (20 bytes) of data will be read.
My customer will offer a characteristic with a larger size (about 100 bytes).
I saw that BLE offers a "Long Read" feature, which keeps reading until the full size of the characteristic has been retrieved.
(https://bluegiga.zendesk.com/entries/25053373--REFERENCE-BLE-master-slave-GATT-client-server-and-data-RX-TX-basics)
attclient_read_long command - Starts a procedure where the client first sends normal read request to the server, and if the server returns an attribute value with a length equal to the BLE MTU (22 bytes), then the client continues to send "read long" requests until rest of the attribute is read. This only applies if you are reading attributes that are longer than 22 bytes. It is often simpler to construct your GATT server such that there are no long attributes, for simplicity. Note that the BLE protocol still requires that data is packetized into max. 22-byte chunks, so using "read long" does not save transmission time.
But how can I use this feature in Android?
The BluetoothGatt class only offers a simple "Read()" - same for iOS.
Increasing the MTU is not possible since we need to support devices with API level < 21 (BluetoothGatt#requestMtu() was introduced in API 21).
I can confirm for iOS that a read operation as per the standard occurs first. Then, if the server returns a completely filled PDU, the iOS device continues with the read blob operation. Tested with an iPhone 7 running iOS 11.2.x.
You do NOT need to call the peripheral.readValue(characteristic) multiple times for long attributes. CoreBluetooth does all of this under the covers.
Refer to the Bluetooth Spec Core v5.0, specifically Vol 3, Part F. "Long Attribute Values".
An experiment to prove the above:
I have an Android Things device acting as the server, which I make return the maximum length to my iPhone during a read operation. iOS and my RPi3 exchange an MTU of 185, so the read response is (MTU - 1) = 184 bytes long. The server (RPi) then receives a new read request with an offset of 184, from which it can return more data. This continues until the offset is > 512 or a read response returns fewer than MTU - 1 bytes.
Based on the fact that BluetoothGattServer supports long attributes, I'd assume the BluetoothGatt client object does as well. Since there is no way via the API to set the offset to be read, I'd assume you only need to invoke the read once.
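For reference, a rough server-side sketch of how that offset shows up during a long read (names are illustrative; gattServer is assumed to be the BluetoothGattServer you opened via BluetoothManager#openGattServer):

import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattServerCallback;
import java.util.Arrays;

BluetoothGattServerCallback callback = new BluetoothGattServerCallback() {
    @Override
    public void onCharacteristicReadRequest(BluetoothDevice device, int requestId,
                                            int offset, BluetoothGattCharacteristic characteristic) {
        byte[] value = characteristic.getValue();   // e.g. the ~100-byte long attribute
        if (offset > value.length) {
            gattServer.sendResponse(device, requestId,
                    BluetoothGatt.GATT_INVALID_OFFSET, offset, null);
            return;
        }
        // Return the remainder starting at 'offset'; the stack sends at most
        // MTU - 1 bytes of it, and the client keeps issuing read-blob requests
        // (with increasing offsets) until the full value has been transferred.
        byte[] remainder = Arrays.copyOfRange(value, offset, value.length);
        gattServer.sendResponse(device, requestId,
                BluetoothGatt.GATT_SUCCESS, offset, remainder);
    }
};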
I have 2 Android phones, both connected to the same WiFi network and both with Bluetooth.
I want some method that somehow syncs the phones and starts a function at the same time on both of them.
For example playing a song at the same time.
I already tried Bluetooth, but it has lag, sometimes 0.5 seconds. I want something within ±0.01 s if possible.
Someone suggested playing it 2-3 seconds in the future and sending the timestamp, but how do you sync the internal clocks of the devices then?
Before calling that particular method, measure the latency between the two devices:
1. First device says "Hi" (store the current time).
2. Second device receives the "Hi".
3. Second device says "Hi" back.
4. First device receives the reply; latency ≈ (currentTime - storedTime) / 2.
Now that you have the latency, send your request to the second device to start the method, and start it on the first device after the latency has elapsed.
Measure the latency 5 to 10 times for better accuracy.
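A minimal sketch of that ping/pong measurement over a plain TCP socket (address, port, and message format are placeholders; exception handling omitted):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;

// hypothetical peer address/port; run inside a method that throws IOException
Socket socket = new Socket("192.168.0.12", 5000);
BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
OutputStream out = socket.getOutputStream();

long start = System.currentTimeMillis();      // 1. store the current time and say "Hi"
out.write("Hi\n".getBytes());
out.flush();
in.readLine();                                // 2-4. wait for the "Hi" that comes back
long latency = (System.currentTimeMillis() - start) / 2;   // one-way latency ≈ half the round trip
// repeat 5-10 times and average for a more stable estimate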
You have a way to transfer data between the devices, right?
If so, you can send a timestamp that lies in the future.
For example, if the present timestamp is 1421242326, you send 1421242329 or so, and both devices start the function at that time.
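A rough sketch of scheduling the function at that future timestamp on Android (assumes the two clocks already agree; startPlayback() is a hypothetical method):

import android.os.Handler;
import android.os.Looper;

// both devices receive the same future epoch timestamp, e.g. 1421242329,
// and schedule the action locally
long targetMillis = 1421242329L * 1000L;
long delayMillis = Math.max(0, targetMillis - System.currentTimeMillis());
new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
    @Override
    public void run() {
        startPlayback();   // hypothetical method that starts the song
    }
}, delayMillis);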
Basically, use @Dula's suggestion (device 1 sends a command to device 2 with a "start time" that lies in the future). Both devices then start the action at the same time (in the future).
To make sure that the devices are synchronized, you can use a server-based time sync (assuming both devices have Internet access). To do this, each device contacts the same server (using NTP, HTTP-based NTP, or a known HTTP server such as www.google.com, taking the value of the "Date" header from the HTTP response). The server date is compared to the system clock on the device, and the difference is that device's offset from server time. The time offsets can then be used to synchronize on server time, which serves as the time base for the actual action (playing the media, etc.).
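A rough sketch of the HTTP "Date" header variant (the URL is just an example, and the header only has one-second resolution, so this is a coarse sync; exception handling omitted):

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL("https://www.google.com").openConnection();
conn.setRequestMethod("HEAD");
long serverTime = conn.getHeaderFieldDate("Date", 0L);        // server time in epoch millis
long offsetMillis = serverTime - System.currentTimeMillis();  // add this to the local clock to get "server time"
conn.disconnect();
// each device computes its own offsetMillis and schedules the action
// against (System.currentTimeMillis() + offsetMillis)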
If your WiFi router allows clients to talk to each other (many public hotspots disable this), you could implement a simple socket listener on one (or each) device and have the initiating device broadcast a message.
For more complicated things and network flexibility, I've had good success with connected sessions using AllJoyn. There is a bit of a learning curve to do interesting things, but the simple stuff is pretty easy once you understand the architecture.
Use a server to provide a synchronous event to just the two clients that have declared their mutual affinity (a shared random value as a parameter, plus a pair serializer, Partner-1 or Partner-2, which they exchange before their respective calls for the sync event).
Assume both clients are on the same subnet, so the packets for the two events serialized on the server arrive at the two clients simultaneously on the client side. This provides synchronous plays by the two bound clients.
The event delivered by the server is either a confirmation to play the queued, selected track OR a broadcast (decoupled, more formal).
The only tricky thing is the server-side algorithm implementing this:
Queue a pair of requests, or error out.
Part 1 and Part 2 with the same random value constitute a valid pair if both are received before either times out.
For a valid pair, schedule both to the same future event in their respective committed responses.
On schedule, do the actual I/O for the two paired requests. The respective packets will arrive back at the respective clients at the same time, each response having been subject to equal network latency.
No good if two different carriers' 4G or LTE networks are involved.
This is possible via sockets: you send an event over a socket and the other device receives that event. To learn more, look up a Socket.IO chat example.
Maybe it's not the answer you are looking for, but given the high precision you want, I think you should look at a push technology. I'd advise you to take a look at SignalR. It's a real-time technology that abstracts away the sending methods; it has built-in methods like Clients.All.Broadcast that fit your needs.
You can try using an MQTT framework to send messages between the two devices, or among a larger set of devices.
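For illustration only, a hypothetical sketch using the Eclipse Paho client (broker address and topic are made up; run inside a method that throws MqttException):

import org.eclipse.paho.client.mqttv3.IMqttMessageListener;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

MqttClient client = new MqttClient("tcp://broker.example.com:1883", MqttClient.generateClientId());
client.connect();
// both phones subscribe to the same topic on the broker
client.subscribe("players/start", new IMqttMessageListener() {
    @Override
    public void messageArrived(String topic, MqttMessage message) {
        long startAt = Long.parseLong(new String(message.getPayload()));
        // schedule playback for startAt, as in the other answers
    }
});
// one phone publishes a start time a few seconds in the future
client.publish("players/start",
        new MqttMessage(String.valueOf(System.currentTimeMillis() + 3000).getBytes()));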
I have an Android client that functions as a central, and an app on my Mac (the peripheral) that this central connects to and sends data to.
At this point, I need to wait almost 100ms after I call writeCharacteristic(..) to receive the onCharacteristicWrite(..) callback. I am sending strings. If I send smaller strings, the throughput is great (understandably). When the string contains about 200 characters and I send 20 byte chunks, it takes almost a second before the entire string is seen at the peripheral. When I set the write type to NO_RESPONSE before writing the characteristic, I see no data on the peripheral.
After I connect, I have done the following to improve throughput:
Stopped discovery after services are discovered because it is an expensive operation
I set the write type to default first. When I do this, I see data on the peripheral, but there is a significant delay. When I set the write type to NO_RESPONSE, I see no data on the peripheral. I have no logic in onCharacteristicWrite(..) either. Sometimes I see the data getting truncated on the peripheral.
I have set the desired connection latency to low in my Mac app. Is there a way to set a specific value (7.5 ms, perhaps)?
When I set the write type to default and send a string of 200 characters, I split the string into 20-byte chunks, so I have 10 chunks to send. If I set the characteristic value and call writeCharacteristic(..) in a loop, I see no data. When I add a ~100 ms delay after writeCharacteristic(..) before the next iteration of the loop, I see data on the peripheral.
I see a huge increase in throughput between an iOS central and an iOS peripheral. I don't see why an Android central and an iOS peripheral shouldn't work the same way. From my understanding, Android and iOS use the same chip.
Any reason the performance is so poor? Is there anything else I can do to improve throughput?
Please have a look at the MTU size. My experience:
An iOS central automatically starts the MTU size negotiation with some large value. I think it is larger than 200 bytes.
On most Android devices I tested, this does not happen automatically; you have to start the MTU size negotiation from your app (the central). If you do not, Android cuts your data into 20-byte pieces, which has a big influence on your throughput.
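For example, something along these lines on the central (a sketch; requires API 21+, and 185 is just an example value the peripheral may negotiate down):

import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothProfile;

// pass this callback to device.connectGatt(context, false, callback)
BluetoothGattCallback callback = new BluetoothGattCallback() {
    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            gatt.requestMtu(185);       // ask for a larger MTU before writing
        }
    }

    @Override
    public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
        // from here on, each characteristic write can carry up to (mtu - 3) bytes
        gatt.discoverServices();
    }
};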
I wrote a client/server (Android/PC) application, and it seems that network usage on the client costs a lot of CPU. Just to receive 4-5 KB from the network, the per-frame CPU time rises to 33 milliseconds, and it can go above 90-100 milliseconds if the data is larger, around 32 KB.
I first tried the client's network part in Java and then in C, and the problem is still there.
I profiled the server part that sends the data, and it uses about 0 milliseconds.
Some details:
TCP connection.
The client connects to the server, the client sends a request, the server sends data (a chunk of 4-10 KB), the client sends another request, the server sends more data, and so on.
Network part is threaded.
Get data with (recv or recv/select).
Smartphone: Nexus One.
Tested in profiler mode (only the network part plus displaying fps/milliseconds).
Tested over WiFi (computer, phone, and network are close to each other).
Let me know if you have any suggestions or questions.
Thanks.
Are you using a BufferedOutputStream on the Android side to write the data? If not, the data is written byte by byte, which would explain the high CPU usage.
If this is not the case, please add some source code to the question.
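For illustration, a buffered send might look like this (host, port, and payload are placeholders; exception handling omitted):

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.net.Socket;

// hypothetical server address/port; run inside a method that throws IOException
Socket socket = new Socket("192.168.0.10", 4000);
DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(socket.getOutputStream(), 8 * 1024));
byte[] payload = new byte[4096];   // placeholder for the real request data
out.write(payload);                // small writes are collected in the buffer...
out.flush();                       // ...and sent together here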