I have an application that needs to transmit bursts of data to an Android device.
A sequence of packets captured during such a burst is attached. We observe that during a channel change, the stack becomes busy and does not transmit anything for ~45-48 ms.
This leads to data loss on the transmitting device; we cannot buffer that much data on the transmitter to cope with the delay in bluedroid.
Are there any suggested changes in AOSP / bluedroid that can improve this situation? Any suggestions on where to start looking?
The target device is a Nexus 7 2013 (flo) running AOSP 5.1.1_r14.
This is probably hardware-specific behavior. Perhaps the hardware buffer is limited to 4 packets, and the driver delivers/fetches packets only in the interrupt raised when the connection interval (CI) elapses.
The MD column is "More Data"; it tells whether the transmitter has more data to send. In this case, the slave sets MD=0 on the 4th transmitted packet in every CI, so the master goes away and checks back in the next CI with an empty packet.
In my application this gap is ~46 ms due to an issue in negotiating the connection interval: the master defaults to a 48.75 ms CI, and so it sleeps for ~46 ms.
My data loss is happening due to the issue mentioned by #Gaurav, but on the slave side, i.e. the slave's link layer is dropping packets fed to it while it already holds 4 packets. [This is a guess; I'll confirm and update.]
Update
Looks like the CONNECT_IND in the captured log contained a transmitWindowSize of 2.5 ms. Transmitting 4 packets takes 2.3+ ms, so a 5th packet won't fit in the 2.5 ms window. transmitWindowSize might be the real reason why no more than 4 packets are transmitted per CI; it still doesn't explain MD=0 on the 4th packet, though.
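A rough airtime check supports this (a back-of-the-envelope estimate, assuming the 1 Mbps PHY, ~20-byte payloads, the spec's 150 µs inter-frame space, and an empty packet from the master acknowledging each slave packet):

$t_{data} = 8 \cdot (1 + 4 + 2 + 20 + 3)\ \text{bytes} / 1\ \text{Mbit/s} = 240\ \mu s$
$t_{empty} = 8 \cdot 10\ \text{bytes} / 1\ \text{Mbit/s} = 80\ \mu s$
$t_{round} = t_{data} + T_{IFS} + t_{empty} + T_{IFS} = 240 + 150 + 80 + 150 = 620\ \mu s$

Four exchanges come to $4 \cdot 620\ \mu s \approx 2.48\ \text{ms}$, which just fits the 2.5 ms window; a fifth would need roughly 3.1 ms.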
Related
I'm doing a little test where I send a short string (4-8 bytes) to a client every 0.5 seconds from a Node.js server using ws. The client is either iOS/Android or a web browser. The client does not send anything back to the server, except for TCP ACKs I suppose. The weird thing is: when I'm debugging the app on iOS using Xcode's network report, I can only see the client send out some bytes (approx. 500) when the connection is established, probably during the HTTP handshake. The rest of the time, ZERO data goes out from the device; there is only data coming in. The same result is obtained when receiving data in Chrome and tracking the traffic using nettop.
The thing that confuses me is that on Android, almost the same amount of data goes out of the device as comes in, when inspecting the network usage with Android Profiler/Battery Historian/TrafficStats. I have tried different libraries for the WebSocket implementation and different Android devices.
I have a hard time believing the ACKs sent out by the Android device are as big as the messages received, even if each message is just a small string of four characters.
So my questions are:
Could it be that nettop/Xcode's network report is simply ignoring all the ACKs, so in reality as much data is sent out by Chrome/iOS as by Android?
Is there something "wrong" with the libraries used on Android, or could it be something in its operating system?
Could an ACK be as big as a simple TCP packet with 4 characters in it?
The results below are from the WebSocket test.
[Screenshot] Data received/transmitted on Android, from Battery Historian.
[Screenshot] Data received/transmitted on iOS, from Network Report.
Could an ACK be as big as a simple TCP packet with 4 characters in it?
An ACK consists of the IP and TCP headers and no payload. With IPv4 this means at least 20 bytes of IP header and 20 bytes of TCP header, i.e. 40 bytes. A packet with 4 bytes of payload is larger by only those 4 bytes, i.e. 44 bytes, or just 10% more.
The network report in Android shows 68350 bytes in vs. 61370 bytes out, which is a difference of about 11%. This matches the expected difference.
I'm not familiar with what iOS measures here, but it probably either counts only the application payload (i.e. the 4 bytes) or simply ignores packets with no payload, i.e. the ACKs.
I am currently working on file transfer between two mobile devices, using socket communication. Over a socket with DataInputStream and DataOutputStream I get approximately 6 MB/s. But per my use case the user can select all their images, videos, APKs and documents to transfer, so if the user selects 2 GB of data, they have to wait more than 6 minutes with my app. So I made some modifications:
1] On the receiver side I opened 5 ports (one for images, one for videos, and so on).
2] The sender sends the appropriate files over the corresponding ports.
3] I send all the files in parallel using AsyncTasks, and the receiver receives the data in 5 different threads (sketched below).
But the problem is that the speed is still the same: approximately 6 MB/s for 2 GB.
So my questions are:
1] Will multi-port sockets increase performance?
2] If I am doing something wrong, how can I send data in parallel over different ports?
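For reference, the parallel-receive setup from steps 1]-3] looks roughly like this. This is a minimal sketch: the port numbers are placeholders, one connection per port is assumed, and writing the received bytes to disk is elided.

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiPortReceiver {
    // One port per file type (images, videos, APKs, documents, other); placeholder values.
    static final int[] PORTS = {6001, 6002, 6003, 6004, 6005};

    public static void main(String[] args) {
        for (int port : PORTS) {
            new Thread(() -> receiveOn(port)).start();   // 5 receiver threads
        }
    }

    static void receiveOn(int port) {
        try (ServerSocket server = new ServerSocket(port);
             Socket socket = server.accept();
             InputStream in = socket.getInputStream()) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                // write buf[0..n) to the output file for this port's file type
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Note that all five sockets still share the same radio link, so parallel streams raise aggregate throughput only if a single stream cannot saturate the link; if the device tops out near 6-7 MB/s, as the answer below suggests, extra ports will not help.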
In my experience, Android devices cannot go above about 7 MB/s. Use a data monitor and start a file transfer over the LAN; you will notice the maximum speed is approximately 7 MB/s.
Sorry, I cannot comment yet, so I added this as an answer.
If I want to transfer a lot of data (e.g. 1 MB file) over BLE, what's the best way to do it?
I control both sides of the connection, but the client side is iOS/Android so only has access to GATT. I can't do anything with L2CAP.
I also can't wait for Bluetooth 4.1, 6LoWPAN, Connection-Oriented-Channels or anything like that.
I would assume the answer is to have one "request" characteristic that you write a data request to ("Give me 3000 bytes starting at byte 0"), and a "data out" characteristic that sends lots of 20-byte notifications (the maximum payload with the default MTU) containing the data.
Is there a better way?
Yes, we are using the approach you have mentioned:
Request data with the last index number (the first time, the index is 0).
The server sends you data with an index number; store that index for the subsequent request.
Continue steps 1 and 2 until the server signals end of data, probably with index -1 or something similar.
Make sure you transfer the data in the most space-efficient format; see if you can zip the files before transferring them.
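A sketch of that loop from the Android central's side. The UUIDs, the 4-byte little-endian index prefixed to each notification, and -1 as the end marker are assumptions for illustration; enabling notifications on the data characteristic (via setCharacteristicNotification() and its CCCD descriptor) is assumed to have been done already.

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.UUID;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

public class ChunkedTransferCallback extends BluetoothGattCallback {
    // Hypothetical UUIDs for the service, the "request" and the "data out" characteristics.
    static final UUID SVC  = UUID.fromString("0000aaaa-0000-1000-8000-00805f9b34fb");
    static final UUID REQ  = UUID.fromString("0000aaa1-0000-1000-8000-00805f9b34fb");
    static final UUID DATA = UUID.fromString("0000aaa2-0000-1000-8000-00805f9b34fb");

    private final ByteArrayOutputStream received = new ByteArrayOutputStream();

    // Step 1: write the last index to the request characteristic (0 the first time).
    public void requestChunk(BluetoothGatt gatt, int index) {
        BluetoothGattCharacteristic req = gatt.getService(SVC).getCharacteristic(REQ);
        req.setValue(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(index).array());
        gatt.writeCharacteristic(req);
    }

    @Override
    public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic c) {
        if (!DATA.equals(c.getUuid())) return;
        byte[] v = c.getValue();
        // Step 2: the server prefixes each notification with its chunk index.
        int index = ByteBuffer.wrap(v, 0, 4).order(ByteOrder.LITTLE_ENDIAN).getInt();
        if (index == -1) {
            // Step 3: end of data; hand received.toByteArray() to the app (unzip it if zipped).
            return;
        }
        received.write(v, 4, v.length - 4);  // store the payload, stripping the index prefix
        requestChunk(gatt, index + 1);       // back to step 1
    }
}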
You can update the connection interval to a small value, the smallest being 6 × 1.25 ms = 7.5 ms, on the remote BLE device.
BLE is designed for low energy, small packets and low data rates.
L2CAP data is transmitted on the data channels with frequency hopping. Packet TX/RX happens within each connection event, and the maximum number of packets TX/RX per event is restricted by the specification and, ultimately, by the manufacturer's implementation. So we can make the connection interval as small as possible to increase the data rate.
Refer to the BT 4.0 spec, Vol 2 Part E, Section 7.8.18, LE Connection Update Command.
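From an Android app you cannot pick an exact interval yourself, but since API 21 you can at least hint the stack toward its shortest supported interval. A minimal sketch, where gatt is your connected BluetoothGatt:

// Ask the Android stack for its shortest supported connection interval
// (typically in the 7.5-15 ms range); available since API 21.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);
}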
Try to negotiate a larger MTU than the default.
Then each notification can be larger. Even though it will be fragmented by the L2CAP layer, you will get somewhat higher throughput, since the header overhead per payload byte is smaller.
I'm writing a BLE application in which I need to track whether a peripheral device is advertising or has stopped.
I followed "getting peripherals without duplications" and "BLE filtering behaviour of startLeScan()", and I completely agree with the answers there.
To make this feasible I set a timer that re-scans for peripherals after a certain time (3 s). But on the newer devices on the market (updated to 5.0), the re-scan sometimes takes quite a while to find peripherals.
Any suggestions, or has anyone achieved this?
Sounds like you're interested in scanning advertisements rather than connecting to devices. This is the "observer" role in Bluetooth Low Energy, and corresponds to the "broadcaster" role more commonly known as a beacon. (Bluetooth Core 4.1 Vol 1 Part A Section 6.2)
Typically you enable passive scanning, looking for ADV_IND packets broadcast by beacons. These may or may not contain a UUID. Alternatively, you can scan actively by transmitting SCAN_REQ, to which you may receive a SCAN_RSP. Many devices use different advertising content in ADV_IND and SCAN_RSP to increase the amount of information that can be broadcast; you could, for instance, fit a UUID128 into the ADV_IND followed by the device name in the SCAN_RSP. (Bluetooth Core 4.1 Vol 2 Part E Section 7.8.10)
Now you need to define "go away": are you expecting the advertisements to stop, or to fade away? You get a Received Signal Strength Indication (RSSI) with each advertisement (Bluetooth Core 4.1 Vol 2 Part E Section 7.7.65.2); this is how iBeacon positioning works, and there's plenty of support for beacon receivers on Android.
Alternatively, you wait N seconds for an advertisement that should be transmitted every T seconds, where N > 2T. The downside of the timed approach is that probably not receiving a beacon isn't the same as definitely receiving a weak beacon; to be sure, you need N to be large, and that increases the latency between the broadcaster being switched off (or moving out of range) and your app detecting it. A sketch of this approach follows below.
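A minimal sketch of that timed approach: record the last time each address was seen from the scan callback, and treat anything silent for longer than N as gone. N is the tuning parameter discussed above.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AdvertisementTracker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;   // your N, in milliseconds

    public AdvertisementTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Call from onScanResult()/onLeScan() for every advertisement received.
    public void seen(String address) {
        lastSeen.put(address, System.currentTimeMillis());
    }

    // Poll periodically (e.g. once a second) for each device you care about.
    public boolean isGone(String address) {
        Long t = lastSeen.get(address);
        return t == null || System.currentTimeMillis() - t > timeoutMillis;
    }
}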
One more thing: watch out that advertising stops if something connects to a peripheral (if you really are scanning for peripherals), which is another good reason to monitor RSSI.
First scenario: Bonded Devices
We know that if a bond is made, most commercially available devices send directed advertisements during re-connection. In situations such as this, according to the BLE 4.0 specification, you cannot scan these devices on any BLE sniffer.
Second scenario: Connectable Devices
Peripheral devices are usually in this mode when they are initially in the reset phase. The central sends a connect initiation in response to an advertisement packet. This scenario offers you a lot of flexibility, since you can play with two predominant configuration options to alter the connection timing: the slave latency on the peripheral and the connection interval on the central. Now, I don't know how much effort it would take to get this working on Android, but if you use the BlueZ BLE stack and a configurable peripheral such as a TI SensorTag, you can play around with these values.
Third scenario: Beacon devices
Since this is what your question revolves around: according to the BLE architecture, there are no parameters to play with here. In this scenario the central is just a dumb device, left at the mercy of whenever the peripheral chooses to send its beaconing signal.
Reference:
http://www.amazon.com/Inside-Bluetooth-Communications-Sensing-Library/dp/1608075796/ref=pd_bxgy_14_img_z
http://www.amazon.com/Bluetooth-Low-Energy-Developers-Handbook/dp/013288836X/ref=pd_bxgy_14_img_y
Edit: I forgot, have you tried setting the advertiser to non-connectable? That way you should be able to get duplicate scan results.
I am dealing with a similar issue, that is, reliably tracking the RSSI values of multiple advertising devices over time.
Sadly, the most reliable way I found is not nice; it is rather dirty and battery-consuming. Given how differently the various Android devices handle BLE, it still seems the most reliable.
I start an LE scan, and as soon as I get a callback I set a flag to stop and start the scan again. That works around the DUPLICATE_PACKET filter issue, since the filter resets whenever you start a fresh scan. (See the sketch below.)
The ScanResults I dump into an SQLite db, which I shrink and evaluate once every X seconds.
It should be easy to adapt the shrinking to your use case, i.e. removing entries older than X and then querying for the existence of a device to find out whether you received a ScanResult in the last X seconds. Don't set X too low, though: you must take into account that you still lose a lot of advertisement packets with an Android LE scan, compared to a BLE scan on, for instance, BlueZ.
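A sketch of that stop/start workaround with the API 21 scanner (dumping the results into the database is elided; note that later Android releases throttle apps that restart scans too frequently, so this fits the 5.x era discussed here):

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.BluetoothLeScanner;
import android.bluetooth.le.ScanCallback;
import android.bluetooth.le.ScanResult;

public class RestartingScanner extends ScanCallback {
    private final BluetoothLeScanner scanner;

    public RestartingScanner(BluetoothAdapter adapter) {
        this.scanner = adapter.getBluetoothLeScanner();
    }

    public void start() {
        scanner.startScan(this);
    }

    @Override
    public void onScanResult(int callbackType, ScanResult result) {
        // ... dump result.getDevice().getAddress() / result.getRssi() into the db ...
        // Restart the scan so the stack's duplicate filter resets and we keep
        // getting callbacks for devices we have already seen.
        scanner.stopScan(this);
        scanner.startScan(this);
    }
}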
Edit:
I can add some information I already found for speeding up advertisement discovery. It involves modifying and compiling the bluedroid sources, plus root access to the device. The easiest route is building a full Android image yourself, e.g. CyanogenMod.
While an LE scan is running, the Bluetooth module sends the scan responses via HCI to the bluedroid stack. There, various checks are done until the result finally gets handed to the Java onScanResult(...), which is accessed via JNI.
By comparing the log of the HCI data sent from the Bluetooth module (it can be enabled in /etc/bluetooth/bt_stack.conf) with debug output in the bluedroid stack as well as on the Java side, I noticed that a lot of advertisement packets are discarded in one particular check. I don't really understand it, beside that it has something to do with the bluedroid inquiry database.
From the documentation of ScanResult we see that the ScanRecord includes the advertisement data plus the scan response data. So it might be that Android holds the report back until it has received the scan response data, or until it is clear there is no scan response data. I could not verify this; it is just a possibility.
As I am only interested in rapid updates of the RSSI of those packets, I simply commented that check out. It seems that this way, every single packet I get from the Bluetooth module via HCI is handed through to the Java side.
In the file btm_ble_gap.c, in the function BOOLEAN btm_ble_update_inq_result(tINQ_DB_ENT *p_i, UINT8 addr_type, UINT8 evt_type, UINT8 *p),
comment out to_report = FALSE; in the following check, which starts at line 2265 of that file:
/* active scan, always wait until get scan_rsp to report the result */
if ((btm_cb.ble_ctr_cb.inq_var.scan_type == BTM_BLE_SCAN_MODE_ACTI &&
     (evt_type == BTM_BLE_CONNECT_EVT || evt_type == BTM_BLE_DISCOVER_EVT)))
{
    BTM_TRACE_DEBUG("btm_ble_update_inq_result scan_rsp=false, to_report=false,\
                     scan_type_active=%d", btm_cb.ble_ctr_cb.inq_var.scan_type);
    p_i->scan_rsp = FALSE;
    // to_report = FALSE;  // to_report is initialized to TRUE, so we simply report the result anyway
}
else
    p_i->scan_rsp = TRUE;
I have an Android client that functions as a central, and an app on my Mac (peripheral) that this central connects to and sends data.
At this point, I need to wait almost 100 ms after I call writeCharacteristic(..) to receive the onCharacteristicWrite(..) callback. I am sending strings. If I send small strings, the throughput is great (understandably). When the string contains about 200 characters and I send 20-byte chunks, it takes almost a second before the entire string is seen at the peripheral. When I set the write type to NO_RESPONSE before writing the characteristic, I see no data on the peripheral at all.
After I connect, I have done the following to improve throughput:
Stopped discovery after the services are discovered, because it is an expensive operation.
I set the write type to the default first; when I do this, I see data on the peripheral, but with a significant delay. When I set the write type to NO_RESPONSE, I see no data on the peripheral. I have no logic in onCharacteristicWrite(..) either. Sometimes I see the data getting truncated on the peripheral.
I have set the desired connection latency to low in my Mac app. Is there a way to set a specific value (7.5 ms, perhaps)?
When I set the write type to default and send a string of 200 characters, I split the string into 20-byte chunks, giving 10 chunks to send. If I set the characteristic value and call writeCharacteristic(..) in a loop, I see no data. When I add a ~100 ms delay after writeCharacteristic(..) before the next iteration of the loop, I see data on the peripheral. (See the sketch below.)
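For reference, Android's GATT stack handles only one outstanding GATT operation at a time, which is why the plain loop loses data. A minimal sketch of the usual alternative to a fixed delay: chain each write off the previous onCharacteristicWrite() callback.

import java.util.Arrays;
import java.util.LinkedList;
import java.util.Queue;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

public class ChunkedWriter extends BluetoothGattCallback {
    private final Queue<byte[]> chunks = new LinkedList<>();

    public void send(BluetoothGatt gatt, BluetoothGattCharacteristic c, byte[] data) {
        for (int i = 0; i < data.length; i += 20) {
            chunks.add(Arrays.copyOfRange(data, i, Math.min(i + 20, data.length)));
        }
        writeNext(gatt, c);
    }

    private void writeNext(BluetoothGatt gatt, BluetoothGattCharacteristic c) {
        byte[] chunk = chunks.poll();
        if (chunk == null) return;       // all chunks delivered
        c.setValue(chunk);
        gatt.writeCharacteristic(c);     // the next chunk goes out in the callback below
    }

    @Override
    public void onCharacteristicWrite(BluetoothGatt gatt,
                                      BluetoothGattCharacteristic c, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) writeNext(gatt, c);
    }
}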
I see a huge increase in throughput between an iOS central and an iOS peripheral. I don't see why an Android central with an iOS peripheral shouldn't work the same way; from my understanding, Android and iOS devices use the same chips.
Any reason the performance is so poor? Is there anything else I can do to improve throughput?
Please have a look at the MTU size. My experience:
With an iOS central, the central automatically starts the MTU size negotiation with some large value; I think it is larger than 200 bytes.
On most Android devices I tested, this does not happen automatically: you have to start the MTU size negotiation from your app (the central). If you do not, Android cuts your data into 20-byte pieces, which has a big influence on throughput.
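A sketch of starting that negotiation from the Android central (API 21+). Requesting it from onServicesDiscovered() and the value 185 are just example choices; the granted value comes back in onMtuChanged() and may be smaller.

import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;

public class MtuAwareCallback extends BluetoothGattCallback {
    private int chunkSize = 20;   // ATT default MTU of 23 minus the 3 header bytes

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        gatt.requestMtu(185);     // kick off the ATT MTU exchange
    }

    @Override
    public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            // Each write/notification can now carry (mtu - 3) payload bytes
            // (3 bytes go to the ATT opcode and handle) instead of 20.
            chunkSize = mtu - 3;
        }
    }
}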