I am new to BLE development. I want to send a large amount of data over a BLE connection with maximum throughput.
I have a GATT server running on Linux and a client running as an app on Android. I have created a custom characteristic with the maximum allowed size (512 bytes), and I am requesting it from the app with a read operation. Every time I receive a read request on the server side, I change its value until I am finished with all the data (I know this isn't the best way, but that's not the problem for now).
As for the connection parameters: using Android's requestConnectionPriority(CONNECTION_PRIORITY_HIGH), I can see that the two sides try to negotiate a connection interval of 7.5 ms, but for some reason it changes to 15 ms and remains there. Maybe my phone doesn't support it, but I don't think so.
The next thing, and the main problem, is the MTU. Using hcidump, I can see the MTU negotiation: an MTU Request from the client with a value of 517 (the default) and a server Response with the same value. But when I trigger the data exchange, I can see (using Wireshark) that the packets contain only 32 bytes of payload. I don't know if this is a restriction of my Bluetooth adapter.
A single ATT packet (up to one MTU in size) can be split across many radio packets, and the 32-byte radio packet payload is probably a restriction of your Bluetooth adapter. No phone supports 7.5 ms connection intervals at this point in time. You should also enable Data Length Extension if your phone and device support it; this will allow you to transmit multiple MTUs' worth of data per connection event.
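For reference, the MTU exchange you see in hcidump is typically triggered from the app side. Here is a minimal sketch assuming the standard BluetoothGattCallback flow (class name is mine):

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothProfile;
import android.util.Log;

public class MtuNegotiation extends BluetoothGattCallback {
    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            // Ask for the largest ATT MTU; 517 is Android's usual request value.
            gatt.requestMtu(517);
        }
    }

    @Override
    public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            // A single ATT read response can now carry up to (mtu - 1) bytes,
            // even though each individual radio packet may still be much smaller.
            Log.d("BLE", "Negotiated ATT MTU: " + mtu);
        }
    }
}
```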
Related
I'm making an Android application that gets data from a USB device (USB host mode). I read the documentation at https://developer.android.com/index.html and also some posts on Stack Overflow, and I found that they sometimes use bulkTransfer() and sometimes controlTransfer(), but I can't find out the difference between the two methods and when to use each one. Could anyone please give me some suggestions?
Control transfer is mainly used for sending commands or receiving a device descriptor; it is normally used when setting up a device. The maximum packet length is 8 bytes for low-speed devices and 8, 16, 32, or 64 bytes for full-speed devices. Data transferred via this method goes through three stages:
Packet 1 – Setup: The packet which contains the address and endpoint number
Packet 2 – Data: The data being sent
Packet 3 – Status: The device acknowledges whether the setup packet was received and read correctly, without errors.
Bulk transfer is used for sending large amounts of data to your target device; printers and scanners generally use this transfer type. Bulk transfers have built-in error detection (CRC with retransmission) to ensure that data is transferred and received without error. The transfer is considered complete when the amount of data received equals the amount requested. This transfer method is not ideal for time-sensitive applications, since there is no latency guarantee.
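To make the distinction concrete on Android, here is a hedged sketch using UsbDeviceConnection; the request constants are hypothetical vendor-specific values, not from any real device:

```java
import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;

public class UsbTransferExample {
    // Hypothetical vendor request constants, for illustration only.
    private static final int REQ_TYPE_VENDOR_OUT = 0x40; // host-to-device, vendor
    private static final int REQ_SET_MODE = 0x01;

    // Control transfer: a short command on endpoint 0, e.g. during device setup.
    static int sendCommand(UsbDeviceConnection conn) {
        return conn.controlTransfer(REQ_TYPE_VENDOR_OUT, REQ_SET_MODE,
                /* value */ 0, /* index */ 0, /* data */ null, /* length */ 0,
                /* timeout ms */ 1000);
    }

    // Bulk transfer: larger payloads on a bulk endpoint you claimed earlier.
    static int sendData(UsbDeviceConnection conn, UsbEndpoint bulkOut, byte[] data) {
        return conn.bulkTransfer(bulkOut, data, data.length, 1000);
    }
}
```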
I have been analyzing the Bluetooth snoop file on several Android devices, where the Android device is the central (client) and the peer device is the peripheral (server).
When performing a Write Command (sending data from Android to the peripheral), besides the sent Write Command packet, Wireshark identifies an HCI event labeled Number of Completed Packets.
As HCI messages are exchanged between the Host and Controller of the same device, do these events take up a packet slot in the connection interval (CI)? I ask because, while I'm able to send 3 packets/CI using notifications, only 1 packet/CI is sent when using Write Command.
You can send multiple Write Without Response packets (called Write Command at the ATT layer) in a single connection event. Number of Completed Packets events travel only between the host and the controller, never over the air, so they don't take up a slot in the connection interval.

The Bluetooth controller has a buffer where it enqueues outgoing packets (called ACL data packets). You can see the size of this queue in the snoop log by looking for LE Read Buffer Size. When Bluetooth is started, the host reads this value and keeps track of the currently available space in a counter variable. When it sends a packet to the controller (for example a Write Command), the counter is decreased by one; when the host receives a Number of Completed Packets event (which means that the packet has been sent out over the air), the counter is increased. As long as this counter remains positive after you issue a Write Without Response, your GATT onCharacteristicWrite callback will be called, so you can immediately enqueue another Write Without Response packet. When the next connection event occurs, the controller sends as many of the enqueued packets as it can.
If you still can't achieve a throughput higher than one packet per connection event, make sure you have set the characteristic's write type to WRITE_TYPE_NO_RESPONSE.
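As a rough sketch of the enqueue-from-callback pattern described above (the class and helper names are mine, and the data is assumed to be pre-split into MTU-sized chunks):

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

public class StreamingWriter extends BluetoothGattCallback {
    private final BluetoothGattCharacteristic characteristic; // assumed discovered
    private final byte[][] chunks; // data already split into MTU-sized pieces
    private int next = 0;

    StreamingWriter(BluetoothGattCharacteristic c, byte[][] chunks) {
        this.characteristic = c;
        this.chunks = chunks;
        // Without this, every write waits a full round trip for a response.
        c.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
    }

    void start(BluetoothGatt gatt) {
        writeNext(gatt);
    }

    private void writeNext(BluetoothGatt gatt) {
        if (next < chunks.length) {
            characteristic.setValue(chunks[next++]);
            gatt.writeCharacteristic(characteristic);
        }
    }

    @Override
    public void onCharacteristicWrite(BluetoothGatt gatt,
            BluetoothGattCharacteristic c, int status) {
        // For WRITE_TYPE_NO_RESPONSE this fires as soon as the controller
        // buffer has room again, so enqueue the next chunk immediately.
        writeNext(gatt);
    }
}
```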
I'm developing a Bluetooth Low Energy application to connect with a device which sends 20-byte transmissions in notification mode at intervals of 6 milliseconds or more.
So far the application is working fine. It can scan, discover, and then subscribe to the characteristic to receive data notifications. The issue is that for the first 2-4 seconds the data is read nicely in sequential order, but after that the notification data starts to arrive in bursts, i.e., in chunks of data without consistent intervals between transmissions.
This doesn't happen when I check the data transmission with the Texas Instruments BLE evaluation kit; there my reader shows a perfect transfer with no bursts appearing. It only becomes visible on Android.
Could this be an issue that can be fixed through configuration on the Android side?
Could this be a problem with the high transmission rate (~millisecond intervals)?
Thank you.
So it sums up to this: optimal throughput can be achieved with the proper configuration of the connection parameters for the BLE connection. This is usually done at the peripheral's end and may have to differ depending on the platform connecting to it (i.e., iOS and Android may have different connection requirements).
P.S.: Since I was looking at Android, I found this method documented at https://developer.android.com/reference/android/bluetooth/BluetoothGatt.html#requestConnectionPriority(int), which requests a connection priority (CONNECTION_PRIORITY_BALANCED, CONNECTION_PRIORITY_HIGH, or CONNECTION_PRIORITY_LOW_POWER). But I didn't test it.
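For reference, the call looks roughly like this (untested on my side as well; the method exists from API level 21, and the class name here is just illustrative):

```java
import android.bluetooth.BluetoothGatt;

final class ConnectionTuning {
    // `gatt` is the handle returned by connectGatt().
    static void requestFastConnection(BluetoothGatt gatt) {
        // This only issues the request; the two devices still negotiate,
        // so the resulting interval may differ from what you asked for.
        gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);
    }
}
```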
You could try enabling the Bluetooth HCI Snoop Log in Developer Options and then viewing the log file in Wireshark. Look for connection update commands; these can be issued by either side of the communication and change the transmission settings, slowing down the transfer. Also look for GAPROLE_PARAM_UPDATE_ENABLE in your TI BLE app.
Yes Michael, we use the CC2650, and for our requirement BLE is sufficient, but I'm not sure whether it really supports Bluetooth Classic (http://www.ti.com/product/CC2650/description).
You can try playing around with the BLE connection parameters to tune the setup; that's what we did, besides trying to build the app to give priority to BLE operations. Take a look at this for more information on connection parameters:
https://devzone.nordicsemi.com/question/60/what-is-connection-parameters/
You can't configure the connection parameters on the phone, only on the peripheral (e.g., the SensorTag). Even so, there's no guarantee that the given parameters will actually be accepted by the central device; in that case the connection will settle on a set of parameters the central device does accept. (Android and iOS have different policies in terms of these.)
In our case we are transmitting at intervals of 15 ms and it seems quite stable. But all these high-frequency transfers come at the cost of the low power consumption that BLE is really intended for. We could go even lower, close to 7.5 ms, which is the minimum connection interval supported by Android. Our initial tests were stable, but the reliability of such a low latency is questionable.
I'm using VpnService to capture packets, and after capturing them I want to send them on to their destination. The capturing part works: I get the protocol, source IP / destination IP, and source port / destination port from the packets.
I was thinking about creating a socket with these parameters. VpnService actually has a method, protect(), which protects the socket so that its traffic will not be forwarded through the VPN.
I don't have much experience with sockets, but the other day I read a comment here saying that I should only send the actual data through the socket, not the IP or TCP header. Since TCP uses a 3-way handshake (correct me if I'm wrong), the first packets wouldn't have any data, just a SYN flag.
Does that mean this method doesn't work, or can I send a packet including the header through the socket?
Yes, we can send data via sockets and don't have to worry about transport-layer or IP-layer headers. Depending on the socket type (SOCK_STREAM or SOCK_DGRAM), the underlying layer (and the stack's behavior) adds a TCP or UDP header on top of the application data, and before sending it out, the IP layer adds the IP header. But if your design requires it, you can always "encapsulate" your entire IP/TCP/Data packet as data and send it to the other end. When the other end receives it, the application layer will receive data which is actually the original IP/TCP/Data packet.
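A minimal sketch of the first approach, sending only the payload and letting the OS build the headers, assuming you hold a reference to your VpnService (the class and method names here are illustrative):

```java
import android.net.VpnService;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class Forwarder {
    private final VpnService vpn;

    Forwarder(VpnService vpn) { this.vpn = vpn; }

    // Forward the TCP payload extracted from a captured packet. The IP/TCP
    // headers are NOT written; the OS adds its own when the socket sends.
    void forwardPayload(String dstIp, int dstPort, byte[] payload) throws IOException {
        Socket socket = new Socket();
        vpn.protect(socket); // keep this socket's traffic out of the VPN loop
        socket.connect(new InetSocketAddress(dstIp, dstPort));
        OutputStream out = socket.getOutputStream();
        out.write(payload);
        out.flush();
        socket.close();
    }
}
```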
Edit
You should explore two more questions: (a) how to maintain packet boundaries, and (b) what about MTU size. The first needs thought because TCP does not preserve packet boundaries, so when you read data on the receiver it may not start at a header; one quick solution is to locate the header, read the packet length from it, and continue reading until you have that much data. The second is that if your packet is already at the MTU size, adding two additional headers pushes it beyond the MTU, so it will likely be fragmented. If you are worried about performance, this may not be a good thing.
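For the packet-boundary problem, a common fix is explicit length-prefix framing. Here is an illustrative sketch (the 4-byte big-endian length prefix is an assumption about the sender's framing, not something TCP provides):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

final class PacketReader {
    // Reads one encapsulated packet, assuming the sender prefixes each
    // packet with a 4-byte big-endian length.
    static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();   // blocks until all 4 length bytes arrive
        byte[] packet = new byte[length];
        din.readFully(packet);        // keeps reading until `length` bytes are in
        return packet;
    }
}
```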
I wanted to check what the average time to set up a TCP, HTTP, UDP, or SIP connection on a mobile handset (say, an Android OS device) is. What I want to measure is the total average time to connect between a mobile handset and a server when sending just 1 byte of data.
Write a test app that tries connecting to each, recording the times to set up, connect, send a byte, and disconnect. The app then repeats this several times, probably with a wait in between, and gives you an average.
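A bare-bones sketch of such a test for the TCP case (host, port, and run count are placeholders; HTTP, UDP, and SIP would each need their own variant, and UDP has no connection handshake to time):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimer {
    // Measures TCP connect + 1-byte send time in milliseconds.
    static long timeTcpMillis(String host, int port) throws Exception {
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            socket.getOutputStream().write(1); // send a single byte
            socket.getOutputStream().flush();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long total = 0;
        int runs = 10;
        for (int i = 0; i < runs; i++) {
            total += timeTcpMillis("example.com", 80); // placeholder endpoint
            Thread.sleep(500); // small pause between runs
        }
        System.out.println("Average connect+send time: " + (total / runs) + " ms");
    }
}
```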
This is also going to depend on the network type (GSM/UMTS/Wi-Fi), the load on the cell, internet traffic between your handset and the destination, etc.
What are you attempting to do?