I'm doing a long write to a BLE device for an OTA update. I need to wait for the device's write response before sending more data, but I don't know how to catch that response. I'm using a Samsung Galaxy Tab S2 with Android 7, and Kotlin for my code.
override fun otaDataWrite(data: ByteArray) {
    manager.connection?.flatMap { rxBleConnection: RxBleConnection? ->
        rxBleConnection?.createNewLongWriteBuilder()
            ?.setCharacteristicUuid(OTACharacteristics.OTA_DATA.uuid)
            ?.setBytes(data)
            ?.setMaxBatchSize(totalPackages)
            ?.build()
    }?.subscribe({ t: ByteArray? ->
        Log.i("arrive", "data ${converter.bytesToHex(t)}")
        manageOtaWrite()
    }, { t: Throwable? -> t?.printStackTrace() })
}
Every time I write the characteristic, the subscription responds immediately with the written data. I need to capture the characteristic's response so I know when to send more data.
You are writing about a response from the characteristic; I assume the characteristic you refer to is the one with UUID=OTA_DATA. A long write internally consists of small writes (so-called batches).
What you probably want to achieve is something like:
fun otaDataWrite(data: ByteArray) {
    manager.connection!!.setupNotification(OTA_DATA) // first we need to get the notification on to get the response
        .flatMap { responseNotificationObservable -> // when the notification is ready we create the long write
            connection.createNewLongWriteBuilder()
                .setCharacteristicUuid(OTA_DATA)
                .setBytes(data)
                // .setMaxBatchSize() // -> if omitted will default to the MTU (20 bytes if MTU was not changed). Should be used only if a single write should be less than MTU in size
                .setWriteOperationAckStrategy { writeCompletedObservable -> // we need to postpone writing of the next batch of data till we get the response notification
                    Observable.zip( // so we zip the response notification
                        responseNotificationObservable,
                        writeCompletedObservable, // with the acknowledgement of the written batch
                        { _, writeCompletedBoolean -> writeCompletedBoolean } // when both are available the next batch will be written
                    )
                }
                .build()
        }
        .take(1) // with this line the notification that was set above will be discarded after the long write will finish
        .subscribe(
            { byteArray ->
                Log.i("arrive", "data ${converter.bytesToHex(byteArray)}")
                manageOtaWrite()
            },
            { it.printStackTrace() }
        )
}
Well, after a lot of testing, I finally developed a standalone class for the OTA update using the Android BLE API, and I used it together with all my RxBle methods. I don't know if I had a hardware problem or something else, but I solved the problem. Thanks a lot.
I am currently developing an app that gets fitness history data from Google Fit. Getting the steps and weight is fine, but getting the sleep data is a bit of a problem. I want the accurate start and end times, and the only way to get those is to bucket by activity segment. The problem is that when there is a lot of data to fetch (the app I'm developing needs data from up to 365 days ago), it does not even return a timeout error and my app just keeps loading; it never even starts to read the data from Google Fit. So I want to ask: is there a way to get the sleep data by activity segment despite its large size? Please share your code as well. By the way, this is how I get my sleep data:
val sleepReadRequest = DataReadRequest.Builder()
.aggregate(DataType.TYPE_ACTIVITY_SEGMENT, DataType.AGGREGATE_ACTIVITY_SUMMARY)
.bucketByActivitySegment(1, TimeUnit.MINUTES)
.setTimeRange(offset, end, TimeUnit.MILLISECONDS)
.build()
LogUtil.d(TAG, "getting sleep data...")
Fitness.getHistoryClient(
context,
Objects.requireNonNull<GoogleSignInAccount>(GoogleSignIn.getLastSignedInAccount(context))
)
.readData(sleepReadRequest)
.addOnSuccessListener { dataReadResponse ->
LogUtil.d(TAG, "success sleep data")
val secondSet = handleDataReturned(dataReadResponse, false, DateUtil.convertTimeStampToDate(offset, DateUtil.DATE_FORMAT))
dailyData.addAll(secondSet)
val allDailyList = getDailyDataList(dailyData, userHeight)
callback.onGetDataSuccess(allDailyList)
}
.addOnFailureListener { e ->
LogUtil.d(TAG, "fail sleep data")
if (e is ApiException && e.statusCode == GoogleFitError.NOT_SIGNED.code) { // not signed app exception
revokePermission(context)
callback.onGetDataFailure(GoogleFitError.parse(e.statusCode))
} else {
callback.onGetDataFailure(AppError.parse(Throwable(e)))
}
}
.addOnCompleteListener { task ->
LogUtil.d(TAG, "complete sleep data")
callback.onGetDataComplete(task)
}
Rather than aggregating, can you just read the activity segments and iterate through them yourself?
val sleepReadRequest =
DataReadRequest.Builder()
.read(DataType.TYPE_ACTIVITY_SEGMENT)
.setTimeRange(offset, end, TimeUnit.MILLISECONDS)
.build()
You can then retrieve the returned data with DataReadResult#getDataSet(DataType).
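For example, inside the existing addOnSuccessListener, iterating the returned segments could look roughly like this (a sketch only; filtering on FitnessActivities.SLEEP is an assumption about which activity type you need):
dataReadResponse.getDataSet(DataType.TYPE_ACTIVITY_SEGMENT).dataPoints
    .filter { it.getValue(Field.FIELD_ACTIVITY).asActivity() == FitnessActivities.SLEEP }
    .forEach { dataPoint ->
        val start = dataPoint.getStartTime(TimeUnit.MILLISECONDS) // segment start in ms
        val end = dataPoint.getEndTime(TimeUnit.MILLISECONDS)     // segment end in ms
        LogUtil.d(TAG, "sleep segment: $start - $end")
    }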
If you find that it's timing out (a year of data at once is potentially rather a lot!) I'd suggest batching the request into smaller ones and caching past data, which is unlikely to change.
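A rough sketch of the batching idea, splitting the year into ~30-day windows (the window size and the helper name are illustrative, not from the question):
fun buildWindowedRequests(offset: Long, end: Long): List<DataReadRequest> {
    val windowMillis = TimeUnit.DAYS.toMillis(30)
    val requests = mutableListOf<DataReadRequest>()
    var windowStart = offset
    while (windowStart < end) {
        val windowEnd = minOf(windowStart + windowMillis, end)
        requests.add(
            DataReadRequest.Builder()
                .read(DataType.TYPE_ACTIVITY_SEGMENT)
                .setTimeRange(windowStart, windowEnd, TimeUnit.MILLISECONDS)
                .build()
        )
        windowStart = windowEnd
    }
    return requests // issue these one at a time (or cache completed windows) instead of one huge request
}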
I am changing the way our application works to use Retrofit instead of just OkHttp.
The way it used to work is we would send the request, retrieve the body as an input stream and read all bytes into a string.
After that we would parse the body using gson.
The problem is that the server seems to have a configuration problem (which I am told is on the list of things to fix, but will take a long time), so, for example, it may return 400 bytes of data but report that the length is actually 402.
The way we currently handle it is by catching the EOF exception and ignoring it, and then parsing the returned string normally.
Right now I use the following request to get the entities I want:
@GET("/services/v1/entities")
suspend fun getEntities() : List<ServerEntity>
which, when there is no error, works correctly.
The solutions I've found so far are either:
a) use the following code to retry all requests until I do not get an EOF exception:
internal suspend fun <T> tryTimes(times: Int = 3, func: suspend () -> T): T {
    var tries = times.coerceAtLeast(2)
    try {
        var lastException: EOFException? = null
        while (tries > 0) {
            try {
                return func.invoke()
            } catch (eof: EOFException) {
                lastException = eof
                tries--
            }
        }
        throw lastException!!
    } finally {
        log.d("DM", "tried request ${times.coerceAtLeast(2) - tries} times")
    }
}
which most of the time logs either 0 or 1 tries
or b) change all my requests to
@GET("/services/v1/entities")
suspend fun getEntities() : ResponseBody
and parse the stream manually (ResponseBody may not be the exact type, but you get the idea).
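For reference, a rough sketch of that manual-parsing variant, mirroring the lenient EOF handling we already use with plain OkHttp (the function and parameter names are illustrative):
suspend fun getEntitiesLenient(gson: Gson): List<ServerEntity> {
    val body: ResponseBody = api.getEntities() // the ResponseBody-returning variant from above
    val text = StringBuilder()
    try {
        body.charStream().buffered().use { reader ->
            val buffer = CharArray(8 * 1024)
            while (true) {
                val read = reader.read(buffer)
                if (read == -1) break
                text.append(buffer, 0, read)
            }
        }
    } catch (eof: EOFException) {
        // the server announced more bytes than it sent; keep whatever arrived, it is usually complete JSON
    }
    val listType = object : TypeToken<List<ServerEntity>>() {}.type
    return gson.fromJson(text.toString(), listType)
}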
Is there a way to keep using my original function and make Retrofit know that, in the case of an EOF exception, it should resume instead of stopping?
I am trying to build a BLE Gatt Server with multiple custom services and multiple characteristics.
To begin with I used the Google Example: https://github.com/androidthings/sample-bluetooth-le-gattserver/tree/master/kotlin
This was straightforward and worked very well. I modified the UUIDs to fit mine and I could receive notifications and write to the chars with no problem.
This is where I define the services and chars:
fun createTimeService(): BluetoothGattService {
val service = BluetoothGattService(TIME_SERVICE,
BluetoothGattService.SERVICE_TYPE_PRIMARY)
// Current Time characteristic
val currentTime = BluetoothGattCharacteristic(CURRENT_TIME,
//Read-only characteristic, supports notifications
BluetoothGattCharacteristic.PROPERTY_READ or BluetoothGattCharacteristic.PROPERTY_NOTIFY,
BluetoothGattCharacteristic.PERMISSION_READ)
val configDescriptor = BluetoothGattDescriptor(CLIENT_CONFIG,
//Read/write descriptor
BluetoothGattDescriptor.PERMISSION_READ or BluetoothGattDescriptor.PERMISSION_WRITE)
currentTime.addDescriptor(configDescriptor)
// Local Time Information characteristic
val localTime = BluetoothGattCharacteristic(LOCAL_TIME_INFO,
BluetoothGattCharacteristic.PROPERTY_WRITE,
BluetoothGattCharacteristic.PERMISSION_WRITE)
service.addCharacteristic(currentTime)
service.addCharacteristic(localTime)
return service
}
fun createSerialService(): BluetoothGattService {
val service = BluetoothGattService(serialPortServiceID,
BluetoothGattService.SERVICE_TYPE_PRIMARY)
val serialData = BluetoothGattCharacteristic(serialDataCharacteristicID,
BluetoothGattCharacteristic.PROPERTY_WRITE,
BluetoothGattCharacteristic.PERMISSION_WRITE)
service.addCharacteristic(serialData)
return service
}
And here I am applying them to my server:
private fun startServer() {
bluetoothGattServer = bluetoothManager.openGattServer(this, gattServerCallback)
bluetoothGattServer?.addService(TimeProfile.createTimeService())
?: Log.w(TAG, "Unable to create GATT server")
bluetoothGattServer?.addService(TimeProfile.createSerialService())
?: Log.w(TAG, "Unable to create GATT server")
// Initialize the local UI
updateLocalUi(System.currentTimeMillis())
}
I would expect everything to work like before after adding the second service. But now, if I try to write to or subscribe to any of the characteristics (it doesn't matter in which service), I just receive this:
W/BluetoothGattServer: onCharacteristicWriteRequest() no char for handle 42
W/BluetoothGattServer: onDescriptorWriteRequest() no desc for handle 43
I found what was going wrong. Apparently you cannot just add all services at once like I did. Adding the second service before the first one was confirmed leads to an exception, with the services ending up null.
In the end I solved this by adding only one service initially.
Then, in the onServiceAdded() callback of the BluetoothGattServerCallback, I added them one after another.
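A rough sketch of that approach (the pendingServices queue is my own illustration, not from the original sample):
// Keep the remaining services in a queue and add the next one only after
// onServiceAdded() confirms the previous one.
private val pendingServices: ArrayDeque<BluetoothGattService> = ArrayDeque(listOf(
    TimeProfile.createTimeService(),
    TimeProfile.createSerialService()
))

private fun startServer() {
    bluetoothGattServer = bluetoothManager.openGattServer(this, gattServerCallback)
    if (bluetoothGattServer == null) {
        Log.w(TAG, "Unable to create GATT server")
        return
    }
    // add only the first service here; the rest follow from onServiceAdded()
    pendingServices.removeFirstOrNull()?.let { bluetoothGattServer?.addService(it) }
    updateLocalUi(System.currentTimeMillis())
}

private val gattServerCallback = object : BluetoothGattServerCallback() {
    override fun onServiceAdded(status: Int, service: BluetoothGattService) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            // the previous service is registered; now it is safe to add the next one
            pendingServices.removeFirstOrNull()?.let { bluetoothGattServer?.addService(it) }
        } else {
            Log.w(TAG, "Adding service ${service.uuid} failed with status $status")
        }
    }
    // ... the other overrides (onCharacteristicWriteRequest, onDescriptorWriteRequest, ...) stay as before
}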
We are currently trying to implement the transmission of images from a mobile device (in this case an iPhone) to a desktop application. We already tried the Bluetooth Serial plugin, which works fine for Android but does not list any devices when scanning for our desktop application.
To cover iOS support (AFAIK iOS only supports Bluetooth LE), we reimplemented our desktop application to use Bluetooth LE and behave like a peripheral. We also altered our Ionic application to use the BLE plugin.
Now, Bluetooth LE only supports the transmission of packets of 20 bytes, whilst our image is about 500 kB. So we could obviously split our image into chunks and transmit it with the following function (taken from this gist):
function writeLargeData(buffer) {
console.log('writeLargeData', buffer.byteLength, 'bytes in',MAX_DATA_SEND_SIZE, 'byte chunks.');
var chunkCount = Math.ceil(buffer.byteLength / MAX_DATA_SEND_SIZE);
var chunkTotal = chunkCount;
var index = 0;
var startTime = new Date();
var transferComplete = function () {
console.log("Transfer Complete");
}
var sendChunk = function () {
if (!chunkCount) {
transferComplete();
return; // so we don't send an empty buffer
}
console.log('Sending data chunk', chunkCount + '.');
var chunk = buffer.slice(index, index + MAX_DATA_SEND_SIZE);
index += MAX_DATA_SEND_SIZE;
chunkCount--;
ble.write(
device_id,
service_uuid,
characteristic_uuid,
chunk,
sendChunk, // success callback - call sendChunk() (recursive)
function(reason) { // error callback
console.log('Write failed ' + reason);
}
)
}
// send the first chunk
sendChunk();
}
Still, this would mean we would have to launch about 25k transmissions, which I assume will take a long time to complete. Now I wonder why data transmission via Bluetooth is so handicapped.
If you want to try out L2CAP, you could modify your desktop app (the peripheral side) somehow like this:
private let characteristicUUID = CBUUID(string: CBUUIDL2CAPPSMCharacteristicString)
...
Then advertise and publish an L2CAP channel:
let service = CBMutableService(type: peripheralUUID, primary: true)
let properties: CBCharacteristicProperties = [.read, .indicate]
let permissions: CBAttributePermissions = [.readable]
let characteristic = CBMutableCharacteristic(type: characteristicUUID, properties: properties, value: nil, permissions: permissions)
self.characteristic = characteristic
service.characteristics = [characteristic]
self.manager.add(service)
self.manager.publishL2CAPChannel(withEncryption: false)
let data = [CBAdvertisementDataLocalNameKey : "Peripherial-42", CBAdvertisementDataServiceUUIDsKey: [peripheralUUID]] as [String : Any]
self.manager.startAdvertising(data)
In your
func peripheralManager(_ peripheral: CBPeripheralManager, central: CBCentral, didSubscribeTo characteristic: CBCharacteristic) {
or, respectively, your
func peripheralManager(_ peripheral: CBPeripheralManager, didPublishL2CAPChannel PSM: CBL2CAPPSM, error: Error?) {
offer the PSM value (a kind of socket handle, a UInt16, used for Bluetooth stream connections):
let data = withUnsafeBytes(of: PSM) { Data($0) }
if let characteristic = self.characteristic {
characteristic.value = data
self.manager.updateValue(data, for: characteristic, onSubscribedCentrals: self.subscribedCentrals)
}
finally in
func peripheralManager(_ peripheral: CBPeripheralManager, didOpen channel: CBL2CAPChannel?, error: Error?)
open an input stream:
channel.inputStream.delegate = self
channel.inputStream.schedule(in: RunLoop.current, forMode: .default)
channel.inputStream.open()
where the delegate could look something like this:
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
switch eventCode {
case Stream.Event.hasBytesAvailable:
if let stream = aStream as? InputStream {
...
//buffer is some UnsafeMutablePointer<UInt8>
let read = stream.read(buffer, maxLength: capacity)
print("\(read) bytes read")
}
case ...
}
iOS app with Central Role
Assuming you have something like that in your iOS code:
func sendImage(imageData: Data) {
self.manager = CBCentralManager(delegate: self, queue: nil)
self.imageData = imageData
self.bytesToWrite = imageData.count
NSLog("start")
}
then you can modify the peripheral handling on your iOS client to work with the L2CAP channel like this:
func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic, error: Error?) {
...
if let characteristicValue = characteristic.value {
let psm = characteristicValue.withUnsafeBytes {
$0.load(as: UInt16.self)
}
print("using psm \(psm) for l2cap channel!")
peripheral.openL2CAPChannel(psm)
}
}
and as soon as you are notified of the opened channel, open the output stream on it:
func peripheral(_ peripheral: CBPeripheral, didOpen channel: CBL2CAPChannel?, error: Error?)
...
channel.outputStream.delegate = self.streamDelegate
channel.outputStream.schedule(in: RunLoop.current, forMode: .default)
channel.outputStream.open()
Your supplied stream delegate might look like this:
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
switch eventCode {
case Stream.Event.hasSpaceAvailable:
if let stream = aStream as? OutputStream, let imageData = self.imageData {
if self.bytesToWrite > 0 {
let bytesWritten = imageData.withUnsafeBytes {
stream.write(
$0.advanced(by: totalBytes),
maxLength: self.bytesToWrite
)
}
self.bytesToWrite -= bytesWritten
self.totalBytes += bytesWritten
print("\(bytesWritten) bytes written, \(bytesToWrite) remain")
} else {
NSLog("finished")
}
}
case ...
There is a cool WWDC video from 2017, What's New in Core Bluetooth, see here https://developer.apple.com/videos/play/wwdc2017/712/
At around 14:45 it starts to discuss how L2CAP channels work.
At 28:47, the Get the Most out of Core Bluetooth topic starts, in which performance-related things are discussed in detail. That's probably exactly what you're interested in.
Finally, at 37:59 you will see various possible throughputs in kbps.
Based on the data shown on the slide, the maximum possible speed with L2CAP + EDL (Extended Data Length) + 15ms interval is 394 kbps.
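(For the roughly 500 kB image from the question, that would be on the order of 500 * 8 / 394 ≈ 10 seconds, assuming those conditions hold.)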
Please have a look at this comment
The following snippet is taken from there
ble.requestMtu(yourDeviceId, 512, () => {
console.log('MTU Size ok.');
}, error => {
console.log('MTU Size failed.');
});
It suggests that you need to request the MTU after connecting, and then I think you can break your message into chunks of up to 512 bytes rather than 20 bytes.
They did this for an Android-specific issue.
First I should say that there are already tons of blog posts and Q&As on the exact same topic, so please read them first.
If you run an iPhone 7, you have LE Data Length Extension. The default MTU is also 185 bytes, which means you can send notifications or Write Without Response commands with 182 bytes of payload (the MTU minus the 3-byte ATT header). And please make sure you absolutely do not use Write With Response or Indications, since that will almost stall the transfer. When you run iOS in central mode you are restricted to a 30 ms connection interval. Using a shorter connection interval can have benefits, so I would suggest you run iOS in peripheral mode instead, so that from the central side you can set a short connection interval, say 12 ms. Since the iPhone 8 and iPhone X, you can also switch to the 2 Mbit/s PHY to get increased transfer speed. So to answer your actual question of why BLE data transfer is handicapped: it's not, at least if you follow best practice.
You also haven't told us anything about the system that runs your desktop application. If it supports the 2 Mbit/s PHY, LE Data Length Extension and an MTU of at least 185, then you should be happy and make sure your connections use all those features. If not, you should still get higher performance if you enable at least one of them.
In my Android app I have a presenter which handles user interactions and contains a kind of request manager; if needed, it sends the user input through the request manager to the server.
The request manager itself contains the server API and handles server requests using RxJava.
I have code which sends a request to the server every time the user enters a message and shows the response from the server:
private Observable<List<Answer>> sendRequest(String request) {
MyRequest myRequest = new MyRequest();
myRequest.setInput(request);
return Observable.fromCallable(() -> serverApi.process(myRequest))
.doOnNext(myResponse -> {
// store some data
})
.map(MyResponse::getAnswers)
.subscribeOn(Schedulers.newThread())
.observeOn(AndroidSchedulers.mainThread());
}
However, now I need a kind of queue. The user may send a new message before the server has responded. Each message from the queue should be processed sequentially, i.e. the second message is sent only after we've got the response to the first message, and so on.
In case an error occurs no further requests should be handled.
I also need to display the answers within a RecyclerView.
I have no idea how to change the code above to achieve the handling described above.
I see a kind of problem: on one hand, this queue can be updated by the user at any time; on the other hand, whenever the server sends a response the corresponding message should be removed from the queue.
Maybe there is an RxJava operator or a special approach I've just missed.
I saw a similar answer here, however, the "queue" there is constant.
Making N sequential api calls using RxJava and Retrofit
I'll be very thankful for any solution or link
I didn't find any elegant native RxJava solution, so I will write a custom Subscriber to do the work.
For your 3 points:
For sequential execution, we create a single-threaded scheduler:
Scheduler sequential = Schedulers.from(Executors.newFixedThreadPool(1));
To stop all requests when an error occurs, we should subscribe to all requests together instead of creating a Flowable every time. So we define the following functions (here the request is an Integer and the response a String):
void sendRequest(Integer request)
Flowable<String> reciveResponse()
and define a field to associate the request and response flows:
FlowableProcessor<Integer> requestQueue = UnicastProcessor.create();
To re-run the not-yet-sent requests, we define the rerun function:
void rerun()
Then we can use it:
reciveResponse().subscribe(/**your subscriber**/)
Now let us implement them.
When sending a request, we simply push it into requestQueue:
public void sendRequest(Integer request) {
requestQueue.onNext(request);
}
First, to perform the requests sequentially, we schedule the work onto sequential:
requestQueue
.observeOn(sequential)
.map(i -> mockLongTimeRequest(i)) // mock for your serverApi.process
.observeOn(AndroidSchedulers.mainThread());
Second, stopping requests when an error occurs is the default behavior: if we do nothing, an error will break the subscription and no further items will be emitted.
Third, to re-run the not-yet-sent requests: the native operators will cancel the stream on error, as MapSubscriber does (RxJava-2.1.0-FlowableMap#63):
try {
v = ObjectHelper.requireNonNull(mapper.apply(t), "The mapper function returned a null value.");
} catch (Throwable ex) {
fail(ex);// fail will call cancel
return;
}
We should wrap the error. Here I use my Try class to wrap the possible exception; you can use any other implementation that wraps the exception instead of throwing it:
.map(i -> Try.to(() -> mockLongTimeRequest(i)))
And then there is the custom OnErrorStopSubscriber, which implements Subscriber<Try<T>> and Subscription.
It requests and emits items normally. When an error occurs (in fact, a failed Try is emitted), it stops there and won't request or emit even if downstream requests more. After calling the rerun method, it goes back to the running state and emits normally. The class is about 80 lines; you can see the code on my GitHub.
Now we can test our code:
public static void main(String[] args) throws InterruptedException {
Q47264933 q = new Q47264933();
IntStream.range(1, 10).forEach(i -> q.sendRequest(i));// emit 1 to 9
q.reciveResponse().subscribe(e -> System.out.println("\tdo for: " + e));
Thread.sleep(10000);
q.rerun(); // re-run after 10s
Thread.sleep(10000);// wait for it complete because the worker thread is deamon
}
private String mockLongTimeRequest(int i) {
Thread.sleep((long) (1000 * Math.random()));
if (i == 5) {
throw new RuntimeException(); // error occur when request 5
}
return Integer.toString(i);
}
and output:
1 start at:129
1 done at:948
2 start at:950
do for: 1
2 done at:1383
3 start at:1383
do for: 2
3 done at:1778
4 start at:1778
do for: 3
4 done at:2397
5 start at:2397
do for: 4
error happen: java.lang.RuntimeException
6 start at:10129
6 done at:10253
7 start at:10253
do for: 6
7 done at:10415
8 start at:10415
do for: 7
8 done at:10874
9 start at:10874
do for: 8
9 done at:11544
do for: 9
You can see that it runs sequentially and stops when the error occurs. After calling the rerun method, it continues handling the remaining not-yet-sent requests.
For the complete code, see my GitHub.
For this kind of behaviour I'm using the Flowable backpressure implementation.
Create an outer stream that is the parent of your API request stream, flatMap the API requests with maxConcurrency = 1, and implement some sort of buffering strategy so your Flowable doesn't throw an exception.
Flowable.create(emitter -> {/* user input stream*/}, BackpressureStrategy.BUFFER)
.onBackpressureBuffer(127, // buffer size
() -> {/* overflow action*/},
BackpressureOverflowStrategy.DROP_LATEST) // action when buffer exceeds 127
.flatMap(request -> sendRequest(request), 1) // very important parameter
.subscribe(results -> {
// work with results
}, error -> {
// work with errors
});
It will buffer user input up to the given threshold and then drop it (if you don't do this it will throw an exception, but it is highly unlikely that the user will exceed such a buffer), and it will execute the requests sequentially, one by one, like a queue. Don't try to implement this behaviour yourself if there are operators for this kind of behaviour in the library itself.
Oh, I forgot to mention: your sendRequest() method must return a Flowable, or you can convert it to one.
Hope this helps!
My solution would be as follows (I did something similar in Swift before):
You will need a wrapper interface (let's call it "Event") for both requests and responses.
You will need a state object (let's make it class "State") that will contain the request queue and the latest server response, plus a method that accepts an "Event" as a parameter and returns this.
Your main processing chain will look like Observable<State> state = Observable.merge(serverResponsesMappedToEventObservable, requestsMappedToEventObservable).scan(new State(), (state, event) -> state.apply(event))
Both parameters of the .merge() method will probably be Subjects.
Queue processing will happen in the only method of "State" object (pick and send request from the queue on any event, add to queue on request event, update latest response on response event).
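A Kotlin sketch of that idea (RxJava 2; the exact Event and State shapes are my interpretation, reusing MyRequest and Answer from the question):
sealed class Event {
    data class Request(val request: MyRequest) : Event()
    data class Response(val answers: List<Answer>) : Event()
}

data class State(
    val queue: List<MyRequest> = emptyList(),
    val latestAnswers: List<Answer> = emptyList()
) {
    fun apply(event: Event): State = when (event) {
        is Event.Request -> copy(queue = queue + event.request)   // enqueue on a request event
        is Event.Response -> copy(                                // dequeue and remember the latest response
            queue = queue.drop(1),
            latestAnswers = event.answers
        )
    }
}

val requestEvents: Subject<Event> = PublishSubject.create()
val responseEvents: Subject<Event> = PublishSubject.create()

val state: Observable<State> = Observable
    .merge(responseEvents, requestEvents)
    .scan(State()) { current, event -> current.apply(event) }
The subscriber to state would then pick and send the head of the queue, as described above.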
I suggest creating asynchronous Observable methods; here is a sample:
public Observable<Integer> sendRequest(int x){
return Observable.defer(() -> {
System.out.println("Sending Request : you get Here X ");
return storeYourData(x);
});
}
public Observable<Integer> storeYourData(int x){
return Observable.defer(() -> {
System.out.println("X Stored : "+x);
return readAnswers(x);
}).doOnError(this::handlingStoreErrors);
}
public Observable<Integer> readAnswers(int h){
return Observable.just(h);
}
public void handlingStoreErrors(Throwable throwable){
//Handle Your Exception.
}
The first Observable sends the request; when it gets the response, it proceeds to the second one, and you can keep chaining. You can customize each method to handle errors or success; this sample behaves like a queue.
Here is the result of the execution:
for (int i = 0; i < 1000; i++) {
rx.sendRequest(i).subscribe(integer -> System.out.println(integer));
}
Sending Request : you get Here X
X Stored : 0
0
Sending Request : you get Here X
X Stored : 1
1
Sending Request : you get Here X
X Stored : 2
2
Sending Request : you get Here X
X Stored : 3
3
.
.
.
Sending Request : you get Here X
X Stored : 996
996
Sending Request : you get Here X
X Stored : 997
997
Sending Request : you get Here X
X Stored : 998
998
Sending Request : you get Here X
X Stored : 999
999