All:
I am a little new to MTU settings and am wondering how to deal with fragmentation seen over TCP/UDP (we are sending packets over a cellular network via an Android application).
We are seeing a lot of packet fragmentation when sending from an Android device to a server. I would like to get some further information on the topic:
What is MTU?
What tools can we use to observe or verify packet fragmentation (Wireshark, tcpdump, etc.)?
How can we use standard tools like ping/traceroute from an Android device (and is this useful)?
Thanks in advance
MTU stands for Maximum Transmission Unit: the largest packet size the underlying link layer will transmit. For example, the MTU of Ethernet-based networks is typically 1500 bytes. When a system wants to send an IP datagram larger than the MTU of the network, it must fragment the datagram into chunks no larger than the MTU. Note that fragmentation happens quite rarely with TCP, especially since most TCP implementations now use Path MTU Discovery. Fragmentation is typically only an issue when UDP is used with unusually large packets.
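To make the fragmentation rule concrete, here is a small sketch of the IPv4 fragment arithmetic. It assumes a 20-byte IPv4 header with no options; since fragment offsets are expressed in 8-byte units, each non-final fragment must carry a multiple of 8 data bytes.

```java
// Sketch: how many fragments an IPv4 datagram splits into for a given MTU.
// Assumes a plain 20-byte IPv4 header (no options).
public class FragmentCount {
    static final int IP_HEADER = 20;

    static int fragments(int datagramSize, int mtu) {
        if (datagramSize <= mtu) return 1;           // fits in one packet
        int payload = datagramSize - IP_HEADER;      // data that must be split
        int perFragment = (mtu - IP_HEADER) & ~7;    // round down to an 8-byte unit
        return (payload + perFragment - 1) / perFragment;
    }

    public static void main(String[] args) {
        // Classic example: a 4000-byte datagram over a 1500-byte-MTU link
        System.out.println(fragments(4000, 1500));   // 3 fragments
    }
}
```

Each fragment here carries up to 1480 data bytes (1500 minus the 20-byte header), so a 4000-byte datagram becomes three packets on the wire.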
You can indeed observe packet fragmentation with all sorts of tools that monitor the raw network, including tcpdump, wireshark, etc.
If you want to learn more about low level TCP/IP mechanics like this, beyond the usual resources like Wikipedia, I suggest reading Stevens, "TCP/IP Illustrated", Volume 1.
Related
I am trying to send around 10 MB of data through BLE from an Android device. Currently I am able to achieve 17 Kbps. Is this the best throughput achievable over BLE, or can it be improved by any means?
There are many factors that go into controlling BLE throughput. The theoretical max you can achieve using GATT-based APIs (and no LE Data Packet Length Extension) is 37.6 kilobytes/sec. In practice, the best numbers you can achieve on an Android phone (with a good BLE chip) are going to be in the ~20 kB/s range.
If you are interested in more details about the different factors controlling throughput, check out this article.
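The basic throughput arithmetic is simple: bytes delivered per second is the number of packets per connection event, times the payload per packet, divided by the connection interval. A minimal sketch, where the specific numbers (7.5 ms interval, 6 packets per event, 20-byte notification payloads) are illustrative assumptions rather than guaranteed values:

```java
// Rough GATT throughput estimate from connection parameters.
public class BleThroughput {
    static double bytesPerSecond(int packetsPerEvent, int payloadBytes, double intervalMs) {
        return packetsPerEvent * payloadBytes / (intervalMs / 1000.0);
    }

    public static void main(String[] args) {
        // 6 packets * 20 bytes every 7.5 ms = 16000 B/s (~16 kB/s)
        System.out.println(bytesPerSecond(6, 20, 7.5));
    }
}
```

Plugging in different intervals and packet counts shows why real-world numbers vary so much between phones: the radio parameters, not the API, set the ceiling.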
17 KBps is not bad throughput. From the Android application side, BLE throughput can be improved in two ways. You can try these methods.
(1) Use BLE write without response and do error checking of the data at the application layer. This will improve the data transfer speed compared to write with response.
(2) Use the Data Length Extension feature in BLE 4.2. This can be used to increase throughput if the host and controller of both devices support the feature. Normally, when initializing a BLE connection, both devices negotiate the maximum data length they both support. If you want to set the data length manually on Android devices, use the requestMtu() function; the data length will be set to the new MTU + 4 bytes, where the 4 bytes are L2CAP overhead.
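The overhead relationships above can be written out directly. The ATT header costs 3 bytes (1 opcode byte + 2 handle bytes), so the usable payload per notification or write is MTU − 3, while the L2CAP header adds 4 bytes on top of the ATT MTU, as described above:

```java
// ATT MTU vs. usable payload vs. link-layer data length.
public class MtuMath {
    static int usablePayload(int attMtu)  { return attMtu - 3; } // minus ATT opcode + handle
    static int linkDataLength(int attMtu) { return attMtu + 4; } // plus L2CAP header

    public static void main(String[] args) {
        System.out.println(usablePayload(23));   // default 23-byte MTU -> 20 data bytes
        System.out.println(linkDataLength(23));  // -> 27-byte link-layer payload
    }
}
```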
I have a piece of hardware acting as a peripheral and I want to read all of its memory, which is 1081344 bytes. Using the standard MTU of 23 bytes (with 20 bytes usable for data -> see the Edit to understand how it does this), it takes more than a minute to read all the memory, so I want to improve the throughput. To do so, I could make the MTU bigger, as I saw in this useful article (https://punchthrough.com/blog/posts/maximizing-ble-throughput-part-2-use-larger-att-mtu). I have heard that transferring with a larger MTU is unstable. Is it so unstable that it is not worth using, or does it still offer better throughput despite the instability?
The problem is that Android provides the requestMtu method only from API 21. I want to know if it's possible to change the MTU from the peripheral side without having to implement any callbacks or functions on the Android side (that way it would work on any API from 18 onward). From this answer (Requesting MTU with Bluetooth Low Energy connection on Android 4.3-4.4 (API 18-20)) it seems to be possible, but I am not sure if there is some implementation I should do on the Android side.
If Android will not accept this request, is there another way to change the MTU without depending on the new functionality of API 21 and later? This answer (Change the MTU or Packet Size for Bluetooth in Android?) makes me believe that this is possible, but it's not clear to me how it's done.
If I can request the MTU size from the peripheral (the firmware developer confirmed to me that this is possible), can the peripheral know whether it succeeded? I think that would be enough to work with, because I can communicate with my peripheral through characteristics and it could tell me whether the request was accepted. I just want to know if it's OK to assume that Android will accept the request and send a success response, or if there is a hard 23-byte MTU limit on APIs before 21.
Since it takes some time to implement this on the hardware, I would like to know if it's worth trying or if it will definitely not work.
Edit: the peripheral is currently updating one single characteristic with a packet of 20 bytes at a time and sending change notifications to my central. So, the division into chunks is done on the server side. What I want is to make those chunks bigger.
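A quick way to judge whether the larger MTU is worth the firmware work is to count how many notifications the dump needs at each chunk size; total transfer time scales with that count, since each chunk costs at least one radio round trip. The 180-byte chunk below is a hypothetical post-MTU-exchange payload, not a value from the question:

```java
// Back-of-the-envelope: notifications needed for the 1081344-byte memory dump.
public class ChunkCount {
    static long chunksNeeded(long totalBytes, int chunkBytes) {
        return (totalBytes + chunkBytes - 1) / chunkBytes;  // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(chunksNeeded(1081344, 20));   // 54068 notifications
        System.out.println(chunksNeeded(1081344, 180));  // 6008 notifications
    }
}
```

Going from 20-byte to 180-byte chunks cuts the number of notifications by roughly 9x, which is why the MTU increase tends to dominate any other tuning.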
I am working on a project that is meant for testing network diagnostics. The section I'm working on now involves testing TCP and UDP connections. One of the points of measurement is packet loss for both TCP and UDP.
In the past I used '/statistics/rx_dropped' (I'm leaving off the beginning portion of that file path); suffice it to say it points to the total number of packets dropped on a specified network interface.
However, in Android N this file requires root to read and I can't assume that the phone this app will be on is rooted.
Does anyone have a decent way of measuring packet loss on the client side for at least TCP that doesn't require rooting?
I am mostly aware of networking jargon, so don't be shy. The reason I ask is that the way I measure has to be fairly elegant (either using some existing Java/Android library or finding a legal way of reading packet loss from the device).
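One root-free approach is active probing from the application layer: send numbered UDP datagrams to a cooperating echo endpoint and count the replies that never arrive. This is a sketch under stated assumptions: the loopback echo server stands in for a real measurement server you would deploy, and the 500 ms timeout and probe count are illustrative, not tuned values.

```java
import java.net.*;

public class UdpLossProbe {
    // Send `probes` numbered datagrams to an echo endpoint and return the
    // percentage that got no reply within the timeout.
    static double measureLossPercent(InetAddress addr, int port, int probes) throws Exception {
        int received = 0;
        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(500);                         // per-probe reply deadline
            for (int i = 0; i < probes; i++) {
                byte[] data = ("probe-" + i).getBytes();
                client.send(new DatagramPacket(data, data.length, addr, port));
                try {
                    client.receive(new DatagramPacket(new byte[64], 64));
                    received++;
                } catch (SocketTimeoutException lost) { /* count as lost */ }
            }
        }
        return 100.0 * (probes - received) / probes;
    }

    public static void main(String[] args) throws Exception {
        // Loopback echo server standing in for a real measurement server.
        DatagramSocket server = new DatagramSocket(0);
        Thread echo = new Thread(() -> {
            byte[] buf = new byte[64];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    server.receive(p);
                    server.send(p);                           // echo the probe back
                }
            } catch (Exception e) { /* socket closed */ }
        });
        echo.setDaemon(true);
        echo.start();

        double loss = measureLossPercent(InetAddress.getLoopbackAddress(),
                                         server.getLocalPort(), 20);
        server.close();
        System.out.println("loss % = " + loss);
    }
}
```

This measures the round-trip path rather than a single interface counter, which is arguably closer to what a diagnostics app cares about anyway. For TCP, the closest root-free equivalent is timing retransmission stalls at the socket level, since per-packet counters are no longer readable on Android N.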
I'm trying to come up with a solution enabling data exchange between an embedded device (xMega128(C) based) and an Android app. The catch is that the data exchange must be conducted via the Internet, and both the embedded device and the mobile device running the app can be behind different NATs, connecting via different ISPs, 3G, LTE, etc.
I tried UDP hole punching, but it does not work with symmetric NATs. Multi-hole punching with port prediction also does not guarantee 100% reliability. I also considered using ICE, but the ICE C libraries (pjnath, libnice) are incompatible with the chosen hardware (the libraries require an OS). Right now I'm considering implementing, or using if one exists, a traffic relay server, but that just seems like a hack to me.
Are there any other options I haven't considered? Any help will be appreciated.
Ideally, the communication scheme would be:
100% reliable
relatively low-latency (3 seconds absolute max)
scalable (say up to 500k devices in the future)
able to be initiated by either the app or the device
multi-user – one device would connect to many Android apps
Also, if this helps, the data exchange between the device and the app is not very high-intensity – roughly 1 session per hour, ~50 messages per session with 10-20 seconds between them, each message weighing around 100 bytes.
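Those traffic numbers make a relay-server sizing estimate easy, which is worth doing before dismissing the relay as a hack. A minimal sketch using the figures above (500k devices, one ~50-message session per hour, ~100 bytes per message):

```java
// Back-of-the-envelope relay load from the stated traffic pattern.
public class RelaySizing {
    // Average relay load in bytes/second, assuming one session per hour.
    static double avgBytesPerSec(long devices, int msgsPerSession, int bytesPerMsg) {
        return devices * (long) msgsPerSession * bytesPerMsg / 3600.0;
    }

    public static void main(String[] args) {
        double load = avgBytesPerSec(500_000, 50, 100);
        System.out.printf("avg relay load ~ %.0f B/s (%.1f Mbit/s)%n",
                load, load * 8 / 1e6);
    }
}
```

That works out to under 1 MB/s on average, so at this message volume a single modest relay server is plausible; the harder scaling problem is holding the connection state, not the bandwidth.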
What you're describing is effectively peer-to-peer, or a subset thereof, and getting that working reliably is a lot of work. Where peer-to-peer fails, you normally fall back to a relay server. It can be done, but the amount of work to do it is quite large. Your list of requirements is also quite steep...
100% reliable
There's no such thing as a reliable connection. You need to build fault tolerance into the app to make it reliable.
relatively low-latency (3 seconds absolute max)
Quite often you will be limited by physics, i.e. the speed of light. Low latency is hard.
scalable (say up to 500k devices in the future)
I don't know what this means; is this concurrent connections?
From Wikipedia on NAT traversal:
Many techniques exist, but no single method works in every situation since NAT behavior is not standardized. Many NAT traversal techniques require assistance from a server at a publicly routable IP address. Some methods use the server only when establishing the connection, while others are based on relaying all data through it, which adds bandwidth costs and increases latency, detrimental to real-time voice and video communications.
i.e. any single technique will work only sometimes, which means it will be unreliable, so you need to combine several methods to make it reliable.
As long as both endpoints are behind different NATs you don't control, it won't work reliably. No way. You need a relay.
Hey guys, I'm dealing with an annoying thing.
While I'm sending larger amounts of data over the RFCOMM channel with A2DP connected, the audio will skip. I've tried a lot of different things; the only surefire fix is to space out the data being sent with delays. I'm pretty sure this is a low-level Android issue, as it mostly happens on 2.3.x but still happens on 4.0.
Has anyone seen a similar issue?
An A2DP connection can consume the majority of available bluetooth bandwidth. Once you start adding other RFCOMM packets, you are taking up space that could otherwise be used for A2DP retransmissions, so your ability to hide lost packets is decreased. Other portions of bandwidth can be lost if your device is doing periodic page or inquiry scans, so you might want to ensure that is not happening. Basically, I wouldn't have too much expectation of running A2DP and RFCOMM at the same time unless your RFCOMM traffic is extremely low.
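The "space out the data with delays" workaround from the question can be sketched as a paced writer. The 128-byte chunk size and 5 ms pause are illustrative assumptions to be tuned against your audio quality; in a real app the OutputStream would come from BluetoothSocket.getOutputStream():

```java
import java.io.*;

// Pace RFCOMM writes so bulk data leaves air time for A2DP retransmissions.
public class PacedWriter {
    static void writePaced(OutputStream out, byte[] data,
                           int chunkSize, long pauseMs) throws IOException, InterruptedException {
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            out.write(data, off, len);
            out.flush();
            Thread.sleep(pauseMs);   // gap between bursts for the audio stream
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();  // stand-in for the socket stream
        writePaced(sink, new byte[1000], 128, 5);
        System.out.println(sink.size());  // all 1000 bytes still arrive, just slower
    }
}
```

The trade-off is explicit: you cap your RFCOMM throughput at roughly chunkSize/pauseMs bytes per second in exchange for fewer audible dropouts.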