I am working on a network diagnostics project. The section I'm working on now involves testing TCP and UDP connections, and one of the points of measurement is packet loss for both TCP and UDP.
In the past I used '/statistics/rx_dropped' (I'm leaving off the beginning portion of that file path); suffice it to say it points to the total number of packets dropped on a specified network interface.
However, in Android N this file requires root to read and I can't assume that the phone this app will be on is rooted.
Does anyone have a decent way of measuring packet loss on the client side for at least TCP that doesn't require rooting?
I'm familiar with networking jargon, so don't be shy. The reason I ask is that the measurement has to be fairly elegant: either using some existing Java/Android library or finding a legitimate way of reading packet loss from the device.
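For reference, a minimal sketch of what I was doing before. The full sysfs path is my assumption (I elided the prefix above); on stock Linux it is /sys/class/net/<iface>/statistics/rx_dropped, and it is exactly this read that fails without root on Android N:

import java.io.BufferedReader;
import java.io.FileReader;

public class RxDropped {
    // Reads the kernel's dropped-RX-packet counter for one interface.
    // Path prefix is an assumption; the interface name is an example.
    public static long readRxDropped(String iface) throws Exception {
        String path = "/sys/class/net/" + iface + "/statistics/rx_dropped";
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            return Long.parseLong(r.readLine().trim());
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readRxDropped("wlan0")); // "wlan0" is an example interface
    }
}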
I'm trying to come up with a solution enabling data exchange between an embedded device (xMega128(C) based) and an Android app. The catch is that the data exchange must be conducted via the Internet, and both the embedded device and the mobile device running the app can be behind different NATs, connecting via different ISPs, 3G, LTE, etc.
I tried UDP hole punching, but it does not work with symmetric NATs. Multi-hole punching with prediction also does not guarantee 100% reliability. I also considered using ICE, but the ICE C libraries (pjnath, libnice) are incompatible with the chosen hardware (the libraries require an OS). Right now I'm considering implementing (or using, if one exists) a traffic relay server, but that just seems like a hack to me.
Are there any other options I haven't considered? Any help will be appreciated.
Ideally, the communication scheme would be:
100% reliable
relatively low-latency (3 seconds absolute max)
scalable (say up to 500k devices in the future)
initializable by both the app and the device
multi-user – one device would connect to many Android apps
Also, if this helps, the data exchange between the device and the app is not very high-intensity – roughly 1 session per hour, ~50 messages per session with 10-20 seconds between them, each message weighing around 100 bytes.
What you're describing is effectively peer-to-peer, or a subset thereof, and getting that working reliably is a lot of work. Where peer-to-peer fails you normally fall back to a relay server. It can be done, but the amount of work involved is quite large. Your list of requirements is also quite steep...
100% reliable
There's no such thing as a reliable connection. You need to build fault tolerance into the app to make it reliable.
relatively low-latency (3 seconds absolute max)
Quite often you will be limited by physics, i.e. the speed of light. Low latency is hard.
scalable (say up to 500k devices in the future)
I don't know what this means; do you mean concurrent connections?
From Wikipedia on NAT traversal:
"Many techniques exist, but no single method works in every situation since NAT behavior is not standardized. Many NAT traversal techniques require assistance from a server at a publicly routable IP address. Some methods use the server only when establishing the connection, while others are based on relaying all data through it, which adds bandwidth costs and increases latency, detrimental to real-time voice and video communications."
In other words, any single technique will work only some of the time, i.e. it will be unreliable, so you need to combine several methods to make it reliable.
As long as both endpoints are behind different NATs you don't control, it won't work reliably. No way. You need a relay.
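To make the relay option concrete, here is a minimal sketch of one: both peers make outbound connections to a public server, which pairs them up and forwards bytes in both directions, so no NAT traversal is needed at all. The port and the pair-in-arrival-order policy are placeholders of mine; a real relay would authenticate clients and match them by device ID.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class Relay {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) { // port is a placeholder
            while (true) {
                Socket a = server.accept(); // e.g. the embedded device dials in
                Socket b = server.accept(); // e.g. the Android app dials in
                pipe(a, b); // device -> app
                pipe(b, a); // app -> device
            }
        }
    }

    // Copy bytes from one socket to the other on a background thread.
    private static void pipe(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                byte[] buf = new byte[1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (Exception ignored) {
            } finally {
                try { from.close(); to.close(); } catch (Exception ignored2) {}
            }
        }).start();
    }
}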
I'm trying to see how GSM affects voice data on a phone call. Here is what I'm trying to do: one person will be talking on a phone, and I will record his voice from the phone's mic while he speaks; on the other phone I will capture the data coming over GSM and compare the two. I want to write an Android application to get that data. Is that possible on Android, or can you suggest another way to achieve this?
Some background (you may know this already)...
When you make a GSM call, the analogue signal in the phone microphone corresponding to your speech is converted into a series of digital values and then encoded with a voice codec. This is basically a clever algorithm to capture as much of the speech as possible, in as little data as possible.
The idea is to maintain very good speech quality while saving on the amount of bandwidth needed for a call. Techniques used include not transmitting quiet periods (when you are not speaking) and various compression and predictive encoding algorithms. There have been, and still are, a number of codecs in use in GSM, but the latest and generally preferred codec is called AMR-Narrowband.
Nearly all GSM deployments encrypt speech between the phone and the base station - while there are publicised weaknesses in the various encryption algorithms, I am assuming (hoping...) that decrypting is not what you are looking for.
Your question: 'I want to see if there will be data loss or corruption when the voice travels over GSM'
Firstly, it is worth noting that speech is relatively tolerant of small amounts of data loss and corruption, at least compared to data. It is quite common to have bursts of packet loss in VoIP networks, and it may cause a temporary degradation in voice quality. Secondly, packet loss in a VoIP network includes delayed packets, which can be confusing: if a packet arrives too late to be included in the 'sound' being played in the receiver's speaker, then it is effectively lost from the VoIP point of view, even though other measures may show that it simply arrived late.
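To illustrate the 'late counts as lost' point, a toy sketch; all names and numbers here are invented, and a real VoIP stack makes this decision inside its jitter buffer:

public class PlayoutCheck {
    static final long JITTER_BUFFER_MS = 60; // assumed playout buffer depth

    // oneWayDelayMs: how long the packet took to arrive, relative to the
    // stream's base delay. If it exceeds the buffer depth, the sound it
    // carries has already been played (or concealed), so it counts as lost.
    static boolean effectivelyLost(long oneWayDelayMs) {
        return oneWayDelayMs > JITTER_BUFFER_MS;
    }

    public static void main(String[] args) {
        System.out.println(effectivelyLost(40)); // false: arrived in time
        System.out.println(effectivelyLost(90)); // true: too late, counted as loss
    }
}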
To actually measure the loss between the GSM phone and the base station, you would need access to the data received at the base station, which you will not usually have unless you are the operator.
If you do an end-to-end test, from one GSM phone to another, your speech path will traverse other network nodes as well, so you will not know whether any loss or corruption is happening over the GSM air interface or in one or more of the other nodes.
You would also need to be aware of handover from one cell to another and from 2G to 3G (GSM to UMTS) which may affect your tests (even stationary phones can handover in certain circumstances).
If your interest is purely academic, then the easiest thing might be to create your own GSM base station and test on that; there exist several open-source GSM 'network in a box' projects which should allow you to do this. I have not used it myself, but this one looks the most actively supported at this time; check out the mailing list under the community tab for a good place to follow up your investigations:
http://openbts.org
I'm trying to interact with my Arduino Uno from my Android tablet running 4.0.3 and got everything working using the Android USB Host API, which means I can send data over USB to the board and read it there via Serial.read() successfully.
Now I'm trying to implement feedback functionality, meaning the other way around: sending from the Arduino Uno and reading on the tablet. This also works quite well using Serial.write(), but I have a little problem: sometimes no bytes are transferred, and other times only some of them, so the content I'm sending is cut in half. How do I fix this?
I'm assuming the serial port has some issues sending all of the data. Should I perhaps change the baud rate, which is currently 9600?
Here's the code on the Android side:
byte[] data = new byte[1024]; // bigger than needed, just to make sure...
int returnLen = conn.bulkTransfer(epIN, data, data.length, 500); // epIN is the bulk IN endpoint
On the Arduino side a simple
Serial.write(5);
is used.
The read() behavior you are seeing is working as intended: sometimes, fewer bytes will be available than the amount you want. It's very similar to standard POSIX read() syscall behavior:
"It is not an error if this number is smaller than the number of bytes requested; this may happen for example because fewer bytes are actually available right now [..]"
Your application needs to handle this possibility. A common technique is to keep track of how many bytes you have read, and perform successive read() calls, accumulating bytes until you have the number desired.
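For example, something along these lines (a sketch only; conn and epIN are assumed to be set up as in your question, and the expected byte count has to come from your own protocol):

import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;

// Keep calling bulkTransfer() and accumulating bytes until we have
// the number desired, or the transfer times out.
byte[] readFully(UsbDeviceConnection conn, UsbEndpoint epIN, int expected) {
    byte[] result = new byte[expected];
    byte[] chunk = new byte[expected];
    int total = 0;
    while (total < expected) {
        int n = conn.bulkTransfer(epIN, chunk, expected - total, 500);
        if (n < 0) break; // timeout or error: give up
        System.arraycopy(chunk, 0, result, total, n);
        total += n;
    }
    return result; // may be short if the transfer timed out
}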
Finally, consider using usb-serial-for-android, an open source library implementing FTDI and CDC (Arduino) USB drivers. Though it doesn't specifically deal with this application-level issue, it may save you trouble elsewhere, and has an active mailing list.
Hey guys, I'm dealing with an annoying issue.
While I'm sending larger amounts of data over the RFCOMM channel with A2DP connected, the audio will skip. I've tried a lot of different things; the only surefire fix is to space out the data being sent with delays. I'm pretty sure this is a low-level Android issue, as it mostly happens on 2.3.x but still happens on 4.0.
Has anyone seen a similar issue?
An A2DP connection can consume the majority of available bluetooth bandwidth. Once you start adding other RFCOMM packets, you are taking up space that could otherwise be used for A2DP retransmissions, so your ability to hide lost packets is decreased. Other portions of bandwidth can be lost if your device is doing periodic page or inquiry scans, so you might want to ensure that is not happening. Basically, I wouldn't have too much expectation of running A2DP and RFCOMM at the same time unless your RFCOMM traffic is extremely low.
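If you must run both at once, the delay-spacing workaround the questioner mentions is probably the pragmatic option. A rough sketch of it; the chunk size and pause are guesses of mine to be tuned per device:

import android.bluetooth.BluetoothSocket;
import java.io.OutputStream;

// Keep RFCOMM traffic light: send in small bursts with a pause between
// them, leaving bluetooth bandwidth free for A2DP retransmissions.
void sendThrottled(BluetoothSocket socket, byte[] data) throws Exception {
    final int CHUNK = 128;    // assumed: small bursts
    final long PAUSE_MS = 50; // assumed: breathing room for A2DP
    OutputStream out = socket.getOutputStream();
    for (int off = 0; off < data.length; off += CHUNK) {
        int len = Math.min(CHUNK, data.length - off);
        out.write(data, off, len);
        out.flush();
        Thread.sleep(PAUSE_MS); // space out the data, as described above
    }
}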
All:
I am a little new to MTU settings and am wondering how to deal with the fragmentation we see over TCP/UDP (we are sending packets over a cellular network via an Android application).
We are seeing a lot of packet fragmentation sending from an android device to server. I would like to get some further information on the topic:
What is MTU?
What tools can we use to observe or verify packet fragmentation (Wireshark, tcpdump, etc.)?
How can we use standard tools like ping/traceroute from an Android device (and is this useful)?
Thanks in advance
MTU means Maximum Transmission Unit: the largest packet size the underlying link layer will transmit. For example, the MTU of Ethernet-based networks is typically 1500 bytes. When a system wishes to send an IP datagram larger than the MTU of the network, it must fragment the packet into chunks no larger than the MTU. Note that fragmentation happens quite rarely with TCP, especially since most TCP implementations now use Path MTU Discovery. Fragmentation is typically only an issue when UDP is used with unusually large packets.
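If you just want to see what MTU each interface on the device reports, plain Java can do it without root (java.net.NetworkInterface also works on Android):

import java.net.NetworkInterface;
import java.util.Collections;

public class ShowMtu {
    public static void main(String[] args) throws Exception {
        // Print the MTU of every interface that is currently up.
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (nif.isUp()) {
                System.out.println(nif.getName() + " mtu=" + nif.getMTU());
            }
        }
    }
}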
You can indeed observe packet fragmentation with all sorts of tools that monitor the raw network, including tcpdump, Wireshark, etc.
If you want to learn more about low level TCP/IP mechanics like this, beyond the usual resources like Wikipedia, I suggest reading Stevens, "TCP/IP Illustrated", Volume 1.