I am doing a firmware upgrade of a BLE peripheral device based on ESP32 from an Android central.
The firmware file is sent in parts of 512 bytes each. The ESP32 device (the GATT server) sends a notification to the central (GATT client), and the central sends the next part as a write command to the peripheral. Then the next notification is sent, and so on.
The upgrade works; however, it takes a long time to complete (~10-12 min for a 600 kB file).
I sniffed the traffic with Wireshark, and it turned out there are 15-20 empty PDUs between each notification sent by the peripheral and the start of the central sending the next part. I searched for what might be causing the problem on the server side but could not find anything relevant.
Maybe something is happening on the Android central that delays the sending process? Or maybe I am missing something with the ESP32? Here is a Wireshark capture (I’ve underlined in red where the sending should start):
EDIT: I haven't added any extra sleep on the server, and if I had, there would be no empty server PDUs, correct?
I tried what you suggested, using just Android's internal mechanism for confirmation, and the download is now about 3x faster. Thank you! However, in the captures there are some things that look strange to me, like a lot of 26-byte response packets from the server to the master (captures below). Why is that, and is it possible to combine them into 3 packets, the way they were sent from the master?
Also, about the explanation in the link you gave:
The Bluetooth controller sends a "Number of Completed Packets" event back to Android's Bluetooth stack over HCI, which indicates that the remote device's Link Layer has acknowledged a packet.
I didn't quite get that. If it's a Write_No_Response from the master, how does the remote device acknowledge receiving a packet?
And is this Android flow control mechanism a possible explanation of my original problem with the empty packets?
Seems like the Android device is not fast enough, or you have added an extra sleep.
Assuming the peripheral can handle the data, you can quite reliably send write commands without using a notification-based acknowledgement scheme. See "onCharacteristicWrite and onNotificationSent are being called too fast - how to acquire real outgoing data rates?". Basically, just wait for onCharacteristicWrite before sending the next one.
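The wait-for-onCharacteristicWrite flow described above can be sketched as a small queue. Android's BluetoothGatt classes are replaced here by a minimal callback interface (GattLink is a made-up name) so only the flow-control logic is shown; in a real app, write() would call gatt.writeCharacteristic(...) and onCharacteristicWrite() would be invoked from your BluetoothGattCallback:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Sketch of the "wait for onCharacteristicWrite before the next write" pattern.
public class ChunkedWriter {
    static final int CHUNK_SIZE = 512;

    // Stand-in for gatt.writeCharacteristic(...); hypothetical interface.
    interface GattLink { void write(byte[] chunk); }

    private final Queue<byte[]> pending = new ArrayDeque<>();
    private final GattLink link;
    private boolean writeInFlight = false;

    ChunkedWriter(GattLink link, byte[] firmware) {
        this.link = link;
        // Split the image into 512-byte parts up front.
        for (int off = 0; off < firmware.length; off += CHUNK_SIZE) {
            pending.add(Arrays.copyOfRange(firmware, off,
                    Math.min(off + CHUNK_SIZE, firmware.length)));
        }
    }

    void start() { sendNext(); }

    // Call this from BluetoothGattCallback.onCharacteristicWrite(...).
    void onCharacteristicWrite() {
        writeInFlight = false;
        sendNext();
    }

    private void sendNext() {
        if (writeInFlight || pending.isEmpty()) return;
        writeInFlight = true;            // exactly one write outstanding
        link.write(pending.poll());
    }

    public static void main(String[] args) {
        byte[] firmware = new byte[600 * 1024];   // ~600 kB image
        final int[] sent = {0};
        ChunkedWriter w = new ChunkedWriter(chunk -> sent[0]++, firmware);
        w.start();
        // Simulate the stack confirming each write; Android does this for you.
        while (sent[0] * CHUNK_SIZE < firmware.length) w.onCharacteristicWrite();
        System.out.println("chunks sent: " + sent[0]);   // prints 1200
    }
}
```

The key point is that the application never sleeps or polls: the next chunk goes out the moment the stack confirms the previous one.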
I have posted several questions which are basically the same question over and over again because I honestly can't find the problem.
I need to receive notifications from a BLE device on a particular characteristic. I use the BGX Silicon Labs Bluetooth kit, which is connected to a processor board that we send and receive data from. The documentation states that two characteristics, RX and TX, are used for data exchange: RX for sending and TX for receiving.
Here is the link as well: https://docs.silabs.com/gecko-os/1/bgx/latest/ble-services
Now, I tried following the guide from PunchThrough that helps you build a starter BLE app for Android, and I did pretty much everything they did. When I enable notifications for the RX characteristic while sending, the onCharacteristicChanged override gets called as it should. But when I send a request using that characteristic, the response I expect (in the form of a notification) never arrives. Our team manages fine using the software developed for the desktop version; the Android version I am working on, however, only succeeds in writing data, not in receiving it. My question is simple: why does the onCharacteristicChanged function never get called when I enable notifications on the TX characteristic?
The code has a queuing mechanism, and it writes to the descriptor and has notifications enabled on the TX characteristic but I get nothing, nothing at all. I can provide parts of the code if you wish but first I wanted to discuss what possible reason notifications are not received.
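For reference, enabling notifications on Android takes two steps: calling setCharacteristicNotification(characteristic, true) locally, and writing the Client Characteristic Configuration Descriptor (CCCD, UUID 0x2902) on the TX characteristic so the server actually starts sending. Skipping or misdirecting the second step is a common reason onCharacteristicChanged never fires. A minimal sketch of the relevant constants (plain Java; the Android classes themselves are omitted):

```java
import java.util.UUID;

// The CCCD controls whether the server sends notifications/indications for a
// characteristic. Enabling notifications means writing {0x01, 0x00} to the
// descriptor with UUID 0x2902 on the TX characteristic, in addition to
// calling setCharacteristicNotification(true) locally. These values match
// BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE etc. in the Android API.
public class Cccd {
    public static final UUID CCCD_UUID =
            UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");
    public static final byte[] ENABLE_NOTIFICATION = {0x01, 0x00};
    public static final byte[] ENABLE_INDICATION   = {0x02, 0x00};
    public static final byte[] DISABLE             = {0x00, 0x00};
}
```

It is also worth checking whether the BGX TX characteristic uses indications rather than notifications; in that case the descriptor must be written with the indication value instead.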
I wonder if anyone else is seeing this. We have successfully used the RN4020 MLDP protocol (similar intent to SPP on Classic Bluetooth) with both iOS and a Bluetooth-LE USB module (BLED112) on Windows. Basically serial bytes coming in to the RN4020 module are sent via a characteristic, resulting in a notification on the connected device (iPhone, PC). Bytes written to the characteristic on the connected device come out of the RN4020 serial port.
But using similar API calls on Android (using C#/Xamarin low-level APIs), I occasionally see the data I send appear back as a notification, about 1 time in 5. The data sent does go to the module and then to the equipment it is attached to. I think I saw evidence that there was some sensitivity to timing.
Has anyone seen similar behavior? I do not see this on iOS or the BluetoothLED dongle (BLED112). I believe too that the Windows UWP version I started did not show this behavior.
Thanks to anyone who can help me understand this -- currently I have a hack in place to discard received data that exactly matches what was sent out recently. But I would hate to release like this.
In the context of BLE (Bluetooth Low Energy), Write Commands can be used to write from a Client to the Server, and Notifications to write from the Server to the Client. In my setup, the Client is a Central device (Android phone), and the Server is a Peripheral (dev board).
After performing several data throughput tests with multiple phones, I noticed that the throughput varies greatly with the phone, which is expected because a great deal of the BLE lower-layer implementation is left to the manufacturer. But what caught my attention was that Write Commands always achieve a much lower throughput than Notifications, regardless of the phone. Why is that?
They should have the same throughput. Multiple write commands and notifications can be sent during one connection event. They are treated the same.
You could use an air sniffer to see if you find any problems.
How long the connection event should be open can be suggested when the connection is created and with connection parameter updates. Sadly, Android's BLE stack hard codes this to the default value, which means no recommendation. That will in practice mean you are limited to 3 or 4 packets per connection event.
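As a back-of-the-envelope illustration of why the packets-per-event cap matters, here is a rough estimate. The 50 ms connection interval, 4 packets per event, and 20-byte ATT payload (default MTU, no Data Length Extension) are all assumed numbers for illustration, not measurements:

```java
// Rough BLE throughput estimate under a hard packets-per-connection-event cap.
// All inputs are illustrative assumptions; real values depend on the phone,
// the negotiated connection parameters, MTU, and DLE support.
public class BleThroughputEstimate {
    public static double bytesPerSecond(double intervalMs, int packetsPerEvent,
                                        int payloadBytes) {
        double eventsPerSecond = 1000.0 / intervalMs;
        return eventsPerSecond * packetsPerEvent * payloadBytes;
    }

    public static void main(String[] args) {
        // 50 ms interval, 4 packets/event, 20-byte payloads -> 1600 B/s,
        // so a 600 kB image would take roughly 6-7 minutes.
        double bps = bytesPerSecond(50.0, 4, 20);
        System.out.printf("~%.0f B/s -> 600 kB in ~%.0f s%n", bps, 600 * 1024 / bps);
    }
}
```

This is why requesting a shorter connection interval (or a larger MTU, where supported) usually matters far more for throughput than the choice between write commands and notifications.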
Ok, so here's my problem. I have an android app transmitting UDP packets to a PC (a java program which listens for the packets), based on user interactions with the android device. To keep things simple, let's say this is happening - the user taps the screen of the phone, and it sends a UDP packet with the coordinates of the point where the user tapped. The listener program receives and reads this packet, and outputs the string received, using System.out.println().
Now, what's happening is that the program works perfectly for the first few packets. Then it stops working, as in, the listener program on the desktop does not display any output. Now, the issue is probably with the transmission, as I have a text label on the app (for testing purpose) that displays what is being transmitted, so the transmission packet is definitely being built properly. But I have no idea on how to understand if this is a problem with sending the data (on the android device side), or receiving (on the desktop side). How can I find out what's wrong and solve this issue?
I have mostly worked with TCP transmission, and all the UDP I have done is mostly copy-paste [:-)] or via APIs.
For TCP, after transmission, I throw a debug message, which helps me to know that the transmission occurred properly. But in this case, your write will have to be blocking.
Also, you could use a packet tracer on your listener terminal to determine whether it is receiving the packets properly. The one I love is Wireshark (formerly Ethereal). It's really easy to use: you tell it which interface to listen on, give it a filter, and it will list the packets as they come in.
However, if you are using Windows 7, you will need admin privileges.
Are you using native code or Java classes? I have tried both, and with the NDK (i.e. sockets written as C functions being called from Java) I have seen erratic behavior on the server side, mostly due to threading issues. Using the Java Socket class I have not had issues however. Moreover, if your Android app is the client, that should not be the problem. I would also use Wireshark to check whether the packets are reaching the PC.
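One way to isolate the sending side from the receiving side is a self-contained loopback check. If this sketch works on the PC but the phone-to-PC path does not, the problem lies in the network path (firewall, Wi-Fi client isolation) rather than in the socket code itself:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal UDP loopback check: send one datagram to a local listener and read
// it back. The payload format mimics the tap-coordinates example above.
public class UdpLoopback {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket listener = new DatagramSocket(0);   // OS-assigned port
             DatagramSocket sender = new DatagramSocket()) {
            listener.setSoTimeout(2000);   // fail fast instead of blocking forever
            byte[] payload = "x=120,y=340".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), listener.getLocalPort()));
            DatagramPacket in = new DatagramPacket(new byte[1024], 1024);
            listener.receive(in);          // throws SocketTimeoutException on loss
            System.out.println(new String(in.getData(), 0, in.getLength(),
                    StandardCharsets.UTF_8));   // prints x=120,y=340
        }
    }
}
```

The setSoTimeout call is the important debugging trick: a blocking receive() with no timeout hides packet loss, while a timeout turns it into a visible exception.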
I'm trying to develop a turn-based game over XMPP (the only multiplatform solution I found). I can send messages without problems. If the other user isn't online, the server (Openfire) saves the message for later delivery.
The problem comes when a device changes network (switches from 3G to Wi-Fi, changes its 3G IP, ...) or loses the network (3G or Wi-Fi turned off, connection lost). The server thinks that the device is still online and sends the message, but it (obviously) never arrives, so the packet is lost.
I know one solution: implement ACKs in my game protocol, but I don't like that idea too much. Do you have any other suggestion? I think this is a server problem. Do you know of another server which implements ACKs?
Thank you!!
EDIT: Here is what I did: connect the device to the server, then turn off 3G and Wi-Fi connectivity on the device. Android and the server both still think that the connection is alive.
http://issues.igniterealtime.org/browse/SMACK-331
PS: I asked OpenFeint about their multiplayer API, but they didn't answer me...
Although BOSH will likely work in this case, another option is XEP-0198: Stream Management. This will allow you to have all of the performance of a fully-connected socket, along with quick reconnects, positive acking, and queuing while un-acked or disconnected in both directions.
Under some conditions TCP/IP is not reliable. This is why ACKs, message receipts, IQs or other extensions in XMPP can solve this problem.
I have done lots of mobile programming over the years, also often with Openfire. But I have not seen lost messages. So I assume that there is a problem in either the library you are using on Android, or the Openfire version you use.
Instead of using raw sockets you can also use BOSH:
http://xmpp.org/extensions/xep-0124.html
BOSH is based on web requests, like Comet, and works very well in environments where you switch networks or often lose the connection. It can keep the session alive until your network is back and does not drop the connection when one or more requests in a row fail.
I too came across this issue and have been trying to figure out a proper way to get it resolved.
The problem for me is that I set the offline message policy to "Always Store", and thus XEP-0184 doesn't really help to determine whether a message is not getting delivered to its receiver.
Providing this scenario:
- I have 2 users chatting, call them A and B
- A sends B a message just as B's connection is lost
- The message gets dropped and A is not notified
- In this case A does not know the message was dropped; it just assumes the message was delivered to the server and that the server will eventually deliver it to B
- B loses the message forever
So I temporarily put in a workaround: I store all messages that have not been confirmed delivered (i.e. for which no delivery receipt has been received) in a queue, then periodically (say, every 6 minutes, which is when those dead connections get wiped) check every message in the queue to see whether the intended recipient is "Online" AND the receipt has still not been received. If that is the case, I mark the message as "failed delivery".
This is quite a terrible way to fix it (please advise if you have a better way of doing it). I think the best thing would be for the server to handle this: if a message fails to deliver and the offline message policy is "Always Store", then store it in "ofoffline" for delayed delivery.
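The workaround above amounts to bookkeeping over pending delivery receipts. A minimal sketch (plain Java; the XMPP library plumbing is omitted, and the method names onMessageSent/onReceiptReceived are made up for illustration) might look like:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Keep every sent message until a delivery receipt (XEP-0184) arrives;
// a periodic sweep marks stale entries as failed. Only the bookkeeping is
// shown; wiring it to the XMPP library's send/receipt callbacks is up to you.
public class PendingReceipts {
    // Matches the 6-minute dead-connection sweep described above.
    static final Duration RECEIPT_TIMEOUT = Duration.ofMinutes(6);

    private final Map<String, Instant> pendingById = new HashMap<>();

    void onMessageSent(String messageId, Instant sentAt) {
        pendingById.put(messageId, sentAt);
    }

    void onReceiptReceived(String messageId) {
        pendingById.remove(messageId);
    }

    /** Returns how many messages were marked as failed during this sweep. */
    int sweep(Instant now) {
        int failed = 0;
        Iterator<Map.Entry<String, Instant>> it = pendingById.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Instant> e = it.next();
            if (Duration.between(e.getValue(), now).compareTo(RECEIPT_TIMEOUT) >= 0) {
                it.remove();   // in the real app: flag as "failed delivery" or re-send
                failed++;
            }
        }
        return failed;
    }
}
```

A production version would persist the queue so pending messages survive an app restart, but the structure is the same.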
This is an old one, but I was solving such an issue recently. It helped when I set the XMPP resource (the last part of the full JID) to something stable when building the connection. Otherwise it is randomly generated on each reconnect, and that changes the full JID.