I am quite aware that UDP is not a reliable protocol, but I see a clear difference in reliability when I test with an Android device versus a laptop.
For example, I have a device that sends UDP packets, and I am running a receiving test on both a laptop and an Android phone. When testing with the laptop (a simple UDP receiver written in C++), there is almost no packet loss.
But when I test with an Android smartphone, the loss rate increases dramatically, to roughly 10%.
Why is there such a difference? Is it simply a matter of hardware capacity?
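For reference, the kind of receive test I mean is just a blocking DatagramSocket loop along these lines (a simplified sketch with a placeholder port, not the exact test code; on Android this would run on a background thread):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class UdpReceiveTest {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(5005); // placeholder port
            byte[] buf = new byte[1500];
            long received = 0;
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a datagram arrives
                received++;
                System.out.println("packets so far: " + received + ", last size: " + packet.getLength());
            }
        }
    }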
I am building a UDP-based audio streaming system and am seeing weird behavior on two different Android devices that I do not understand at all.
I have a server broadcasting UDP packets on my local WiFi to 192.168.178.255. The Android app listens to these broadcasts using a DatagramSocket and receives the packets just fine. A MulticastLock has been acquired, but the app stays in the foreground, so this shouldn't matter.
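The listening side is set up roughly like the following sketch (simplified; the port, lock tag, and 4-byte sequence header are placeholders rather than the exact code):

    import android.content.Context;
    import android.net.wifi.WifiManager;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.ByteBuffer;

    public class BroadcastListenerThread extends Thread {
        private final Context context;
        private volatile int lastSeq = -1; // last sequence number seen, as in the test described below

        public BroadcastListenerThread(Context context) {
            this.context = context.getApplicationContext();
        }

        @Override
        public void run() {
            WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
            WifiManager.MulticastLock lock = wifi.createMulticastLock("udp-broadcast-demo"); // tag is arbitrary
            lock.acquire(); // some devices drop broadcast/multicast frames without this
            DatagramSocket socket = null;
            try {
                socket = new DatagramSocket(50000); // placeholder port
                socket.setBroadcast(true);
                byte[] buf = new byte[2048];
                while (!isInterrupted()) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    // Assumes the first 4 bytes of each test packet carry the sequence number.
                    lastSeq = ByteBuffer.wrap(packet.getData(), 0, 4).getInt();
                }
            } catch (Exception e) {
                // socket closed or I/O error; in a real app, log and decide whether to retry
            } finally {
                if (socket != null) socket.close();
                lock.release();
            }
        }
    }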
However, after a seemingly random amount of time (around a few minutes) the app stops receiving the broadcasts. Weirdly enough, this affects not only my own app but also any third-party tool that can listen to UDP broadcasts. UDP packets addressed specifically to the device are still received. The device will not receive any broadcasts again until I disable and re-enable the WiFi connection.
What's even more weird: the test packets I send have a sequence number, so I can keep track of the last packet the device has received. I am running this test on two different devices (a Huawei FIG-LX1 phone and a Fire tablet), and both devices stop receiving packets at EXACTLY the same packet. Looking at the network traffic in Wireshark, I can see nothing special about this packet or anything unusual happening on the network.
Any ideas what may be killing broadcast reception on two completely different devices in this way?
Unfortunately neither of the devices is rooted, so I cannot run a network analyzer directly on the device to determine whether, for example, the router has stopped forwarding the packets.
In the context of BLE (Bluetooth Low Energy), Write Commands can be used to write from a Client to the Server, and Notifications to write from the Server to the Client. In my setup, the Client is a Central device (Android phone), and the Server is a Peripheral (dev board).
After performing several data throughput tests with multiple phones, I noticed that the throughput varies greatly with the phone, which is expected because a great deal of the BLE lower-layer implementation is left to the manufacturer. But what caught my attention was that Write Commands always achieve a much lower throughput than Notifications, regardless of the phone. Why is that?
They should have the same throughput. Multiple write commands and notifications can be sent during one connection event. They are treated the same.
You could use an air sniffer to see if you find any problems.
How long the connection event should stay open can be suggested when the connection is created and via connection parameter updates. Sadly, Android's BLE stack hard-codes this to the default value, which means no recommendation. In practice that limits you to 3 or 4 packets per connection event.
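For reference, on the Android side a Write Command is just a characteristic write with the no-response write type; a minimal sketch, assuming an already connected BluetoothGatt and a discovered characteristic (the helper class name is made up):

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCharacteristic;

    public final class BleWriteHelper {
        private BleWriteHelper() {}

        // Queues one Write Command (write without response) on an already connected GATT client.
        // gatt and characteristic are assumed to come from the app's own connection/discovery code.
        public static boolean writeNoResponse(BluetoothGatt gatt,
                                              BluetoothGattCharacteristic characteristic,
                                              byte[] payload) {
            characteristic.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
            characteristic.setValue(payload);
            // Returns true if the write was queued; wait for onCharacteristicWrite()
            // before issuing the next write, or the stack may reject it.
            return gatt.writeCharacteristic(characteristic);
        }
    }

Queuing the next write from onCharacteristicWrite() is what keeps several Write Commands flowing within a single connection event.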
I'm using the USB host abilities of Android to communicate with my hardware. My communication goes through a CP210X USB-to-serial chip, and I'm using the driver provided here. My device is a Motorola Xoom running Android 4.0.3.
Everything for the most part works fine. I'm able to send any amount of arbitrary data, and the device on the other side of the CP210X chip gets it just fine. We use a request/response protocol and rarely send more than about 200 bytes in a second. The other (device) side is able to send me data, and everything works as long as the amount of data it sends me is no more than 20 bytes. If it sends 20 or fewer bytes, I receive them normally. However, as soon as it sends 21 bytes or more, the packet never reaches me. UsbDeviceConnection's bulkTransfer never reports anything different happening, as its return value is always -1 on timeouts anyway. My bulkTransfer timeout is 100 ms, which should be plenty of time to transfer such a small amount of data.
In an effort to see if it was an issue with the CP210X chip, I swapped it for an FTDI serial-to-USB chip and experienced the same issues. That really makes me worry that the issue might be in the USB drivers on my device. However, I also saw the behavior when trying it on a Samsung Galaxy S3, so that has me very confused.
Is there anything I'm missing or doing wrong?
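For reference, the read side is essentially a bulkTransfer call like the following sketch (endpoint discovery and interface claiming omitted; the buffer size is illustrative, and the 100 ms timeout is the one mentioned above):

    import android.hardware.usb.UsbDeviceConnection;
    import android.hardware.usb.UsbEndpoint;

    public class SerialReader {
        // Reads one response from the bulk IN endpoint.
        // connection and inEndpoint are assumed to come from the usual UsbManager setup
        // (openDevice, claimInterface, finding the IN endpoint).
        public static byte[] readOnce(UsbDeviceConnection connection, UsbEndpoint inEndpoint) {
            byte[] buffer = new byte[256];
            // bulkTransfer returns the number of bytes read, or a negative value on error/timeout.
            int len = connection.bulkTransfer(inEndpoint, buffer, buffer.length, 100 /* ms */);
            if (len < 0) {
                return new byte[0]; // timeout or error; as noted above, -1 is returned either way
            }
            byte[] out = new byte[len];
            System.arraycopy(buffer, 0, out, 0, len);
            return out;
        }
    }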
I'm working on an Android application which sends/receives a high volume of UDP traffic to and from a Windows endpoint over a WLAN (and no, I can't use TCP).
The problem is that when I ramp up the traffic, I begin to see HUGE delays between when I call sendto (the app is written with the NDK) and when I see the packet arrive at the Windows endpoint. In the neighborhood of 10 seconds! The same thing happens in reverse too: I see huge delays between the packet being sent by the Windows endpoint and being picked up by recvfrom().
Changing SO_SNDBUF has no effect, so I don't think it's an issue with the application-level buffering control.
I've verified that the problem exists on a variety of Android devices, so I don't think it's an issue with the hardware or wireless drivers.
Using a sniffer and correlating the timestamps, I confirmed that the delay occurs between calling sendto() and the packet leaving the Android device, so the buffering isn't happening in the AP or the Windows endpoint.
So at this point I'm all but out of ideas. The facts lead me to believe that the buffering is happening in the Android OS layer, but 10 seconds of 10 Mbps traffic? That seems far too large to be feasible for an OS where memory footprint is such a huge concern.
Also, if the issue is that I'm sending data too fast and overwhelming the OS, then I would expect sendto() to return ENOMEM or ENOBUFS... But there are no indications that anything is wrong on the Android application level.
So my question is: what's causing this delay? And is there a way to mitigate it, or do I need to alter my application to have longer timeouts or some way of detecting this condition before it gets bad?
You are sending too much. How much are you sending? 10 Mbps is definitely way too high. Bear in mind:
Every UDP datagram you send has an additional 28 byte header (UDP + IP over IPv4)
Connection link speeds are theoretical maximum limits that you will never be able to achieve
The phone OS could be limiting you; phone OSes need to conserve battery and try to minimise socket comms to do so.
OR
You say your CPU is at 20%, but how many cores do you have? You could be maxing out the core that is doing the sending, i.e. processing speed is the bottleneck.
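If the sender is simply handing the OS bursts faster than the radio can drain them, one common mitigation (not from the answer above, just a sketch of the general idea) is to pace sends at the application level. A rough Java illustration with placeholder rate, size, and endpoint; the same idea applies to an NDK sendto() loop:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class PacedSender {
        public static void main(String[] args) throws Exception {
            // Placeholder values: 1200-byte payloads at roughly 5 Mbps.
            final int payloadSize = 1200;
            final long bitsPerSecond = 5_000_000L;
            // Time budget per datagram in nanoseconds (payload only; each datagram adds ~28 bytes of headers).
            final long nanosPerPacket = payloadSize * 8L * 1_000_000_000L / bitsPerSecond;

            DatagramSocket socket = new DatagramSocket();
            InetAddress dest = InetAddress.getByName("192.168.178.20"); // placeholder endpoint
            byte[] payload = new byte[payloadSize];
            long next = System.nanoTime();
            while (true) {
                socket.send(new DatagramPacket(payload, payload.length, dest, 50000)); // placeholder port
                next += nanosPerPacket;
                long sleepNanos = next - System.nanoTime();
                if (sleepNanos > 0) {
                    Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
                }
            }
        }
    }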
For people with the same problem:
Try reusing the DatagramSocket. That solved it for me.
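In other words (my reading of the above), create the socket once and keep it for the lifetime of the stream instead of opening a new DatagramSocket per send; a minimal sketch:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.util.List;

    public class ReusedSocketSender {
        // Sends every message over one long-lived socket rather than opening a new
        // DatagramSocket per datagram; dest and port are whatever the app already uses.
        public static void sendAll(List<byte[]> messages, InetAddress dest, int port) throws Exception {
            DatagramSocket socket = new DatagramSocket(); // created once, reused for every send
            try {
                for (byte[] msg : messages) {
                    socket.send(new DatagramPacket(msg, msg.length, dest, port));
                }
            } finally {
                socket.close();
            }
        }
    }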
I've seen reports of very similar behavior on the linux-rt list lately, which may be related.
http://comments.gmane.org/gmane.linux.rt.user/10163
For several months I was haunted by spurious jitter detected on UDP messages: multicast UDP messages were received on the originating node without any delay, but on other nodes a delay in the range of tens of milliseconds was detected. Simply put, it looked like a message was stuck in the kernel before finally getting transmitted.
Finally, thanks to the LTTng tool, I was able to track the problem down to a piece of code in net/sched/sch_generic.c.
There appears to be a locking issue on transmit that causes the tx stall.
I am facing a weird problem receiving UDP packets on a Sony Xperia Z tablet: my application does not receive many of the UDP packets sent to it. I rooted the tablet, installed the Shark app, and captured the network traffic. When I analyzed the capture, the device had received all the packets, but my application missed many of them. If the application received no packets at all, the issue could be a packet filter blocking broadcast packets; here, however, the application receives some packets but misses others that the device demonstrably received. I have not observed this issue on a Samsung Galaxy Tab 2 or a Motorola Xoom tablet, where the application receives all the packets, so it does not look like a code issue. Has anybody faced a similar problem? Let me know if you have any suggestions or inputs I can try.
UPDATE:
I have added my comments below.
I'd tell you a UDP joke, but you might not get it.
Packet loss is a documented feature of the UDP protocol.
The UDP protocol does not guarantee that a packet will be delivered to the addressee.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
I have found out why my app missed some packets that the device received: I had set the datagram socket's receive buffer size to a very small value. Once I removed the code that set the buffer size, the app started receiving all the packets. By default, Android sets the receive buffer to 163840 bytes, but I had set it to 64 bytes. It works fine with the default buffer size set by Android.
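For completeness, a minimal sketch of the difference described above; the port is a placeholder, and the 64-byte value is the mistake:

    import java.net.DatagramSocket;

    public class ReceiveBufferDemo {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(50000); // placeholder port

            // The bug described above: a 64-byte receive buffer, so bursts of datagrams
            // are dropped by the kernel before the app ever sees them.
            // socket.setReceiveBufferSize(64);

            // Either leave the platform default alone, or request something comfortably large.
            System.out.println("default receive buffer: " + socket.getReceiveBufferSize() + " bytes");
            socket.setReceiveBufferSize(163840); // the value reported above as the Android default
            System.out.println("requested receive buffer: " + socket.getReceiveBufferSize() + " bytes");
        }
    }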