My client is building an application around a Bluetooth Low Energy dongle that takes continuous measurements. They originally chose BLE for its power savings: the unit will run for about 8 hours at a time while recording data, and it needs to last the full 8 hours.
They want me to set up their Android app to poll the device on 4 different channels at a rate of up to 250/sec. To me that defeats the Low Energy aspect. I am fairly sure BLE was designed with notifications in mind, which push data to the app on change instead of having the app ask the device for data at a fixed frequency.
At the moment I have the app set up with notifications: the GATT server sends data and the app only receives a notification when a value changes (a sketch of this setup is below). The problem is that two of the sensor channels are expected to deliver data at a rate of 250/sec, even if there is no change.
The server may or may not send at that rate. On one channel I get about 25 reads per second and on the other maybe 4 reads per second. Since this is all set up with notifications, which only fire on change, I suspect the app is doing exactly what it is designed to do. Because there are not enough data points, we are thinking of switching those two channels to polling instead.
Does this make sense to you, and doesn't it defeat the Low Energy aspect, or even the whole point of a Low Energy chip design?
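For reference, the notification setup described above boils down to something like the following sketch; the service and characteristic UUIDs are placeholders for the dongle's actual ones, and the GATT connection/callback wiring is omitted:

```java
import java.util.UUID;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattDescriptor;

public class NotificationSetup {
    // Placeholder UUIDs: substitute the dongle's real service/characteristic UUIDs.
    private static final UUID SERVICE_UUID =
            UUID.fromString("0000aaaa-0000-1000-8000-00805f9b34fb");
    private static final UUID CHANNEL_UUID =
            UUID.fromString("0000bbbb-0000-1000-8000-00805f9b34fb");
    // Standard Client Characteristic Configuration descriptor UUID.
    private static final UUID CCC_DESCRIPTOR_UUID =
            UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

    /** Subscribe to notifications so the peripheral pushes data only when it changes. */
    public static boolean enableNotifications(BluetoothGatt gatt) {
        BluetoothGattCharacteristic ch =
                gatt.getService(SERVICE_UUID).getCharacteristic(CHANNEL_UUID);
        if (ch == null) return false;
        // Tell the local stack to deliver onCharacteristicChanged() callbacks...
        gatt.setCharacteristicNotification(ch, true);
        // ...and tell the remote GATT server to actually start sending notifications.
        BluetoothGattDescriptor ccc = ch.getDescriptor(CCC_DESCRIPTOR_UUID);
        ccc.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
        return gatt.writeDescriptor(ccc);
    }
}
```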
250/sec = 1 packet per 4ms.
BLE's smallest connection interval is 7.5 ms. All data exchanged between master and slave must happen within a connection interval. Even though multiple packets can be sent back and forth within a single connection interval, I don't know how you would make Android squeeze multiple application messages into one interval, and either way you cannot guarantee that the 250 polls are equally spaced in time.
I think Android simply wouldn't let you send messages at that frequency. Sending packets is usually a blocking operation, so your app will be blocked by the operating system while the transmitter is busy. In the end, what you might get is merely tens of messages per second.
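To illustrate why, here is roughly what polling has to look like on Android: each readCharacteristic() call must wait for its onCharacteristicRead() callback before the next one can be issued, so the effective rate is paced by the connection interval rather than by any timer in the app. This is a simplified sketch, not production code:

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

public class PollingSketch extends BluetoothGattCallback {
    private BluetoothGattCharacteristic channel; // the channel to poll (assumed already discovered)

    /** Kick off the first read; every following read is chained from the callback. */
    public void startPolling(BluetoothGatt gatt, BluetoothGattCharacteristic channel) {
        this.channel = channel;
        gatt.readCharacteristic(channel);
    }

    @Override
    public void onCharacteristicRead(BluetoothGatt gatt,
                                     BluetoothGattCharacteristic c, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            handleSample(c.getValue());
        }
        // Only one GATT operation can be outstanding at a time, so the next read
        // can be issued here at the earliest, i.e. at best once per connection
        // interval (>= 7.5 ms), nowhere near 250 reads per second per channel.
        gatt.readCharacteristic(channel);
    }

    private void handleSample(byte[] value) {
        // Process the sample.
    }
}
```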
I'm currently developing an application which uses Bluetooth Low Energy to communicate with a BLE device. The problem is that the project requires a high, continuous exchange of data to work.
So far I've developed 4 fragments which share the same BluetoothGatt instance and the same data array. When I connect to the BLE device, I set the connection priority to high, then I start a write loop which writes the data, usually just 4 bytes, every 50 ms.
At the same time I start reading and updating my interface.
I've noticed that if I stop the writing I receive a packet of data every 50 ms, but if I let the write loop run, the read interval increases from 50 ms to 100 ms or more.
That's not a huge problem, but it degrades the performance of the whole system.
I looked on the internet for solutions but didn't find anything, apart from the connection priority, which has already helped me a lot. I'd like to know if anyone has ever dealt with such problems and how they handled them. Thanks.
The BLE device you are using has something called the "connection interval". It is set in the device's firmware. The minimum value is 7.5 ms, but it is usually set to 30 ms or more (iOS will not even work with intervals lower than 20-30 ms, or it will simply miss packets).
So when the BLE device firmware is designed, the connection interval is set to a value which is safe and will work with most mobile devices, and which, just as importantly, saves battery.
All this means that you can transfer to or from the device once per connection interval, no matter whether it is a read, a write or a notification.
Some devices have configuration settings which allow the connection interval to be changed, but if you just wanted to know what is happening, that's it.
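Independent of the connection interval, one thing worth checking on the Android side is that a new GATT operation is never issued before the previous one has completed; the Android stack rejects overlapping operations, and interleaving a 50 ms write timer with reads can make the timing look worse than the interval alone would explain. A minimal sketch of a serialized operation queue (class and method names are illustrative, not an established API):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

/** Serializes GATT operations: the next one starts only after the previous callback fires. */
public class GattOperationQueue extends BluetoothGattCallback {
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private boolean busy = false;

    public synchronized void enqueueWrite(final BluetoothGatt gatt,
                                          final BluetoothGattCharacteristic c,
                                          final byte[] value) {
        pending.add(new Runnable() {
            @Override public void run() {
                c.setValue(value);
                gatt.writeCharacteristic(c);
            }
        });
        maybeRunNext();
    }

    private synchronized void maybeRunNext() {
        if (busy || pending.isEmpty()) return;
        busy = true;
        pending.poll().run();
    }

    @Override
    public void onCharacteristicWrite(BluetoothGatt gatt,
                                      BluetoothGattCharacteristic c, int status) {
        synchronized (this) { busy = false; }
        maybeRunNext();   // the write finished, so the next queued operation may start
    }
}
```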
I've read this tutorial about transferring data in a battery-efficient way.
All the lessons are based on one simple concept: polling the server from Android is battery-inefficient. For this reason, Google Cloud Messaging is introduced in order to send messages from the server to the device only when needed.
There is only one problem: I'm trying to implement a "mobile cloud", that is, a cloud composed of mobile devices, where each device can join or leave the network frequently. So I need some mechanism to detect when a device is no longer reachable. Until now, in all the work that I've seen on the topic, the only solution was for the mobile device to periodically ping the main server to say "Hey, I'm still alive!". Obviously this kills the battery, but so far I haven't seen or found any better solution.
Do you know any battery-efficient solution for this problem?
There's no reason why pinging the server periodically (a heartbeat) is necessarily wasteful of the battery or inefficient. That depends upon how frequently you need to ping, and whether your ping needs to initiate its own transmission versus piggybacking on some other transmission.
Let me explain. Battery inefficiency depends upon whether or not you are increasing the frequency or duration of the transceiver being in an active state. If the transceiver is continually active anyway, such as when it is continuously exchanging data or audio, then a heartbeat adds no additional burden. If it is not active, then there will be additional energy usage, but that depends upon the frequency of your heartbeat compared to how long a ping keeps the transceiver powered. Even then, it's probably irrelevant to your application, as I suspect "cloud" means the devices are active and connected.
Let's assume that your heartbeat is such that it will increase the duration of your transceiver being active. There are still techniques you can use to decrease this impact, such as caching your beat and sending it only when it can piggyback on another transmission. Of course, such solutions depend upon whether your heartbeat is implemented in an application, the OS or the kernel.
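Purely as an illustration of the "cache and batch" idea at the application level, on Android you could hand the heartbeat to the system scheduler so the OS can coalesce it with other wakeups and network activity rather than waking the radio on its own timer. A sketch using WorkManager; the worker class and the 15-minute period are assumptions for the example, not something from the question:

```java
import java.util.concurrent.TimeUnit;
import android.content.Context;
import androidx.work.Constraints;
import androidx.work.ExistingPeriodicWorkPolicy;
import androidx.work.NetworkType;
import androidx.work.PeriodicWorkRequest;
import androidx.work.WorkManager;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

public class HeartbeatWorker extends Worker {
    public HeartbeatWorker(Context context, WorkerParameters params) {
        super(context, params);
    }

    @Override
    public Result doWork() {
        // Send the "I'm still alive" ping to the main server here (HTTP call, socket message, ...).
        return Result.success();
    }

    /** Schedule the heartbeat so the OS can batch it with other scheduled work. */
    public static void schedule(Context context) {
        Constraints constraints = new Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED) // only run when a network is up
                .build();
        PeriodicWorkRequest request =
                new PeriodicWorkRequest.Builder(HeartbeatWorker.class, 15, TimeUnit.MINUTES)
                        .setConstraints(constraints)
                        .build();
        WorkManager.getInstance(context)
                .enqueueUniquePeriodicWork("heartbeat",
                        ExistingPeriodicWorkPolicy.KEEP, request);
    }
}
```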
I suggest you do actual tests to see if there is truly an impact on your devices.
PS I'm not saying the tutorial is wrong. It isn't. But it is addressing a broader and more general problem than the one you have.
android.net.wifi.WifiManager has the startScan() method to perform a passive scan of the WiFi channels, and when the scan is complete the onReceive() method is called so the results can be read.
However, as this webpage shows, and as I have also confirmed with my own implementation, passive scanning of the WiFi channels takes different amounts of time on different phones. Some platforms are even around 10 times slower.
I would like to know what causes some phones to take so much time. Is it the driver? Is it some energy-saving feature? Or is the reason something else entirely?
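For context, timing the scan boils down to recording when startScan() is called and when the results broadcast arrives. A minimal sketch, with permissions and error handling omitted:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.net.wifi.WifiManager;
import android.os.SystemClock;
import android.util.Log;

public class ScanTimer {
    /** Starts a scan and logs how long it takes until the results broadcast arrives. */
    public static void measureScan(final Context context) {
        final WifiManager wifi = (WifiManager) context.getApplicationContext()
                .getSystemService(Context.WIFI_SERVICE);
        final long start = SystemClock.elapsedRealtime();

        BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context ctx, Intent intent) {
                long elapsedMs = SystemClock.elapsedRealtime() - start;
                Log.d("ScanTimer", "Scan finished in " + elapsedMs + " ms, "
                        + wifi.getScanResults().size() + " APs found");
                context.unregisterReceiver(this);
            }
        };
        context.registerReceiver(receiver,
                new IntentFilter(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
        wifi.startScan();
    }
}
```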
The article gives you a hint:
"Passive scans are slower to perform, because the device needs to listen on every channel for some period of time, waiting for broadcast beacons. Beacon frames are transmitted by APs periodically to announce the presence of a wireless LAN. Beacon frames contain all the information about its network. This approach consumes less energy, since the radio doesn't use the transmitter, but only the receiver. It also takes more time to finish, since it has to listen on every channel."
That "some period of time" is different for each device. If you listen for too short a time on a channel, you might miss a beacon frame; too long, and it might take a while to enumerate all the available APs when the user first scans a new location.
Furthermore, I didn't see actual details on how those results were generated. One might imagine a smart algorithm would use a longer listen time when first in a new location but switch to shorter ones after it's been there for a while.
I'm developing a Bluetooth Low Energy application to connect to a device which sends 20-byte transmissions in notification mode at intervals of 6 milliseconds or more.
So far the application is working fine. It can scan, discover and then subscribe to the characteristic to receive data notifications. The issue is that for the first 2-4 seconds the data arrives nicely in sequential order, but after that the notifications start to arrive in bursts, or chunks of data, with inconsistent intervals between transmissions.
This doesn't happen when I check the data transmission with the Texas Instruments BLE evaluation kit; there my reader shows a perfect transfer with no bursts appearing. It only becomes visible on Android.
Could this be something that can be fixed with configuration on the Android side?
Could this be a problem with the high transmission rate (millisecond-scale intervals)?
Thank you.
So it comes down to this: optimal throughput can be achieved with proper configuration of the connection parameters for the BLE connection. This is usually done at the peripheral's end and may have to differ depending on the platform connecting to it (iOS and Android may have different connection requirements).
P.S.: Since I was looking at Android, I found this method documented here: https://developer.android.com/reference/android/bluetooth/BluetoothGatt.html#requestConnectionPriority(int). It requests a connection priority (CONNECTION_PRIORITY_BALANCED, CONNECTION_PRIORITY_HIGH or CONNECTION_PRIORITY_LOW_POWER), but I haven't tested it.
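For what it's worth, that call is a one-liner once the GATT connection is up (API 21+); whether the peripheral actually accepts the shorter interval is still up to its firmware. A minimal sketch:

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothProfile;

public class PriorityCallback extends BluetoothGattCallback {
    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            // Ask the stack for a shorter connection interval (API 21+).
            // The peripheral may still negotiate different parameters.
            gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);
            gatt.discoverServices();
        }
    }
}
```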
You could try enabling the Bluetooth HCI Snoop Log in Developer Options and then viewing the log file in Wireshark. Look for connection update commands; these can be issued by either side of the link. This command changes the transmission settings and can slow down the transfer. Also look for GAPROLE_PARAM_UPDATE_ENABLE in your TI BLE app.
Yes Michael, we use the CC2650, and for our requirements BLE is sufficient, but I'm not sure whether it really supports Bluetooth Classic (http://www.ti.com/product/CC2650/description).
You can try playing around with the BLE connection parameters to get the setup tuned; that's what we did, besides trying to build the app to give priority to BLE operations. Take a look at this for more information on connection parameters:
https://devzone.nordicsemi.com/question/60/what-is-connection-parameters/
You can't configure the connection parameters on the phone, only on the peripheral (i.e. the SensorTag), and even then there's no guarantee that the requested parameters will actually be accepted by the central device; if not, the connection settles on a set of parameters the central device does accept. (Android and iOS have different policies in this regard.)
In our case we are transmitting at intervals of 15 ms and it seems quite stable. But all these high-frequency transfers come at the cost of the low power consumption that BLE is really intended for. We could go even lower, close to 7.5 ms, which is the minimum connection interval supported by Android. Our initial tests were stable, but the reliability of such low latency is questionable.
I noticed, when looking at the best practices for Bluetooth Low Energy (BLE), that Apple mentions that using the API comparable to Android GATT's onCharacteristicChanged() is more efficient than repeatedly calling getValue() on a characteristic that updates regularly.
"Though reading the value of a characteristic using the readValueForCharacteristic: method can be effective for some use cases, it is not the most efficient way to retrieve a value that changes. For most characteristic values that change—for instance, your heart rate at any given time—you should retrieve them by subscribing to them."
Link for reference
I am working on a device with multiple sensors (similar to the Texas Instruments CC2541). What is going on behind the scenes (i.e. Bluetooth antenna power, sensor activity) that makes notifications more efficient?
I am trying to work out whether it is beneficial to pace the transmissions by setting the update period on the device, or to ping the device for readings (the queries would fetch the reading 1-2 times a second for each sensor).
The difference is that generally event-driven behaviours are more efficient than polling behaviours. This is particularly important on a battery-powered device, such as a phone or an embedded sensor. Making a connection and reading a value requires transmission from both devices - consuming battery power.
Using notifications allows the device to transmit only when it has new data.
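In Android terms this is the difference between reacting in onCharacteristicChanged() and repeatedly issuing reads. A minimal sketch for contrast; connection setup and notification subscription are omitted:

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

public class SensorCallback extends BluetoothGattCallback {
    // Event-driven path: this runs only when the sensor actually pushes new data,
    // so the radio carries traffic only when the value has changed.
    @Override
    public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic c) {
        byte[] value = c.getValue();
        // Update the UI / log the new reading here.
    }

    // Polling path, for contrast: every readCharacteristic() forces a request AND a
    // response over the air, even when the value has not changed since the last read.
    public void pollOnce(BluetoothGatt gatt, BluetoothGattCharacteristic c) {
        gatt.readCharacteristic(c);
        // The result arrives later in onCharacteristicRead(...).
    }
}
```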
Consider a couple of different scenarios -
First, a temperature sensor. In many environments (rooms, even outdoors) temperatures are relatively stable - changing by a few degrees over minutes. If you poll the sensor every second then 99.99% of the time you are going to get the same answer as last time. If the sensor notifies when the temperature changes you will only transmit for 0.01% of the time - a huge saving in power.
Second, a heart-rate sensor which reports a "table" of rates and durations for that rate. The data here can change much more frequently and less predictably - you don't know when the wearer is going to start or stop exercising - but outside of these "edges" the heart rate may be quite stable - such as when resting, watching TV etc. In this case you could poll, but you are going to waste power during the stable periods. With notification the sensor can periodically notify that a new collection of heart rate information is available and the notification period will vary depending on how variable the heart rate is. The device could also notify immediately if it detected a serious but rare event (such as a heart-attack).
You can, of course, include a maximum time before a notification is forced on your sensor, such as 30 seconds, to ensure that the app receives regular updates even when "nothing" is happening - in the worst case that is one transmission every 30 seconds instead of 60 polls, a reduction of more than 98% compared to polling twice a second.