I have a grpc-js server and a Kotlin Android client that makes a server-streaming call. This is the GRPCService class.
class GRPCService {
private val mChannel = ManagedChannelBuilder
.forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
.usePlaintext()
.keepAliveTime(10, TimeUnit.SECONDS)
.keepAliveWithoutCalls(true)
.build()
val asyncStub : ResponderServiceGrpc.ResponderServiceStub =
ResponderServiceGrpc.newStub(mChannel)
}
The streaming call is made from a foreground service.
override fun onCreate() {
super.onCreate()
...
startForeground(MyNotificationBuilder.SERVICE_NOTIFICATION_ID, notificationBuilder.getServiceNotification())
}
override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
val userId = sharedPreferencesManager.getInt(SharedPreferencesManager.USER_ID)
val taskRequest = Responder.TaskRequest.newBuilder()
.setUserId(userId)
.build()
grpcService.asyncStub.getTasks(taskRequest, object :
StreamObserver<Responder.TaskResponse> {
override fun onCompleted() {
Log.d("grpc Tasks", "Completed")
}
override fun onError(t: Throwable?) {
Log.d("grpc error cause", t?.cause.toString())
t?.cause?.printStackTrace()
Log.d("grpc error", "AFTER CAUSE")
t!!.printStackTrace()
}
override fun onNext(value: Responder.TaskResponse?) {
if (value != null) {
when (value.command) {
...
}
}
}
})
return super.onStartCommand(intent, flags, startId)
}
The connection opens and stays open for about a minute with no communication, and then fails with the following error.
D/grpc error cause: null
D/grpc error: AFTER CAUSE
io.grpc.StatusRuntimeException: INTERNAL: Internal error
io.grpc.Status.asRuntimeException(Status.java:533)
io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:460)
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
The grpc-js server is created with the following options.
var server = new grpc.Server({
"grpc.http2.min_ping_interval_without_data_ms" : 10000,
"grpc.keepalive_permit_without_calls" : true,
"grpc.http2.min_time_between_pings_ms" : 10000,
"grpc.keepalive_time_ms" : 10000,
"grpc.http2.max_pings_without_data" : 0,
'grpc.http2.min_ping_interval_without_data_ms': 5000
});
I never received the "too many pings" error either.
I noticed that if there is periodic communication through this connection (like the server pinging the client with a small amount of data every 30 s or so), then I don't get the error and the connection stays open for as long as the pinging continues (tested for 2 days).
How do I keep the connection open without resorting to periodically pinging the client?
The managed channel has a property called keepAliveWithoutCalls, which defaults to false, as seen here. If it is not set to true, keepalive pings will not be sent while there are no active calls. You would need to set it like so:
private val mChannel = ManagedChannelBuilder
.forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
.usePlaintext()
.keepAliveTime(30, TimeUnit.SECONDS)
.keepAliveWithoutCalls(true)
.build()
You may also have to change some settings on the server to keep the connection open without any data passing; otherwise you might get an error on the server saying "too many pings", because gRPC needs a few additional settings on the server side. I am not sure exactly how to set these with a JS server, but it shouldn't be too difficult. The settings include (see the sketch after this list):
GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS
Minimum allowed time between a server receiving successive ping frames without sending any data/header/window_update frame.
And this one:
GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS
Minimum time between sending successive ping frames without receiving any data/header/window_update frame, Int valued, milliseconds.
And this one:
GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS
Is it permissible to send keepalive pings without any outstanding streams.
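To illustrate what these arguments control, here is a rough sketch of how they map onto grpc-java's NettyServerBuilder; the server in the question is grpc-js, so this is only for orientation, and ResponderServiceImpl is a hypothetical service implementation.
import io.grpc.netty.NettyServerBuilder
import java.util.concurrent.TimeUnit

// Sketch only: grpc-java equivalents of the server-side keepalive arguments above.
val server = NettyServerBuilder.forPort(50051)
    // permit clients to send keepalive pings even when there are no active calls
    .permitKeepAliveWithoutCalls(true)
    // reject pings that arrive more often than every 10 seconds (the "too many pings" guard)
    .permitKeepAliveTime(10, TimeUnit.SECONDS)
    .addService(ResponderServiceImpl()) // hypothetical service implementation
    .build()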
There is a Keepalive User Guide for gRPC which I suggest you read through to understand how gRPC is supposed to keep connections open. It is the core standard that all server and client implementations should follow, though I have noticed this is not always the case. You can also have a look at a similar question I asked a while back here.
Have you tried the ManagedChannelBuilder.keepAliveTime setting (https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ManagedChannelBuilder.java#L357)? I am assuming it will work in the middle of a server-streaming call.
Related
I have an external camera connected via a USB-C dongle to my Android tablet. My goal is to have a constant stream of data from the camera to the device, showing it to the user and allowing them to record it and save it to local storage.
I am following the official docs from the following link -
https://developer.android.com/guide/topics/connectivity/usb/host#working-d
I have spent the last couple of hours trying to figure out how things work, mapping the interfaces and endpoints, and eventually found an interface with an endpoint that does not return a failure value (-1) when I call bulkTransfer() on it.
I am currently facing 2 issues:
I do get a valid response from the bulkTransfer() function, but my ByteArray does not fill with relevant information; when I print out the values, they are all 0s. I thought it might be the wrong endpoint, as suggested in the official docs, but I tried all combinations of interfaces and endpoints until I got an IndexOutOfBoundsException. The combination of interface + endpoint that I used is the only one that produced a valid bulk response. What am I missing?
I am looking for a stream of data that doesn't stop, but it seems that calling bulkTransfer() is a one-time operation, unlike the CameraX library, for example, where I get a callback each time a new chunk of data is available.
Here is the code on my main screen -
LaunchedEffect(key1 = true) {
val usbManager = context.getSystemService(Context.USB_SERVICE) as UsbManager
val filter = IntentFilter(ACTION_USB_PERMISSION)
registerReceiver(context, UsbBroadcastReceiver(), filter, RECEIVER_NOT_EXPORTED)
val hdCamera = usbManager.deviceList.values.find { device ->
val name = device.productName ?: return@LaunchedEffect
name.contains("HD camera")
} ?: return#LaunchedEffect
val permissionIntent = PendingIntent.getBroadcast(
context,
0, Intent(ACTION_USB_PERMISSION),
0
)
usbManager.requestPermission(hdCamera, permissionIntent)
}
And here is my BroadcastReceiver -
override fun onReceive(context: Context?, intent: Intent?) {
if (intent?.action != ACTION_USB_PERMISSION) return
synchronized(this) {
val usbManager = context?.getSystemService(Context.USB_SERVICE) as UsbManager
val device: UsbDevice? = intent.getParcelableExtra(UsbManager.EXTRA_DEVICE)
val usbInterface = device?.getInterface(0)
val endpoint = usbInterface?.getEndpoint(1) ?: return@synchronized
usbManager.openDevice(device)?.apply {
val array = ByteArray(endpoint.maxPacketSize)
claimInterface(usbInterface, true)
val bulkTransfer = bulkTransfer(endpoint, array, array.size, 0)
Log.d("defaultAppDebuger", "bulk array: $bulkTransfer") //prints a valid number - 512
array.forEach {
Log.d("defaultAppDebuger", "bulk array: $it") //the array values are empty
}
}
}
}
Edit:
I have tried moving the BroadcastReceiver code to an async coroutine, thinking that the buffer not being filled was related to being on the wrong thread. It still didn't work: I get a valid result from bulkTransfer and the ByteArray is still not filled -
fun BroadcastReceiver.goAsync(
context: CoroutineContext = Dispatchers.IO,
block: suspend CoroutineScope.() -> Unit
) {
val pendingResult = goAsync()
CoroutineScope(SupervisorJob()).launch(context) {
try {
block()
} finally {
pendingResult.finish()
}
}
}
override fun onReceive(context: Context?, intent: Intent?) = goAsync { .... }
Thanks!
After careful research I was not able to find an answer and ditched the mini project I was working on. I followed this comment on the following thread -
https://stackoverflow.com/a/68120774/8943516
That, combined with 2.5 days of deep research into both the USB host protocol (which was not able to connect to my camera) and the Camera2 API (which couldn't recognize my external camera), brought me to a dead end.
I am developing a chat Android app using WebSocket (OkHttp).
To do this, I extended the okhttp3.WebSocketListener class.
I receive the chat messages in the onMessage callback method.
I already developed it using an Rx PublishSubject, and it works fine.
But I want to change it to a coroutine Channel.
To do this, I added a Channel to my WebSocketListener class.
@Singleton
class MyWebSocketService @Inject constructor(
private val ioDispatcher: CoroutineDispatcher
): WebSocketListener() {
// previous
val messageSubject: PublishSubject<WsMsg> = PublishSubject.create()
// new
val messageChannel: Channel<WsMsg> by lazy { Channel() }
override fun onMessage(webSocket: WebSocket, text: String) {
super.onMessage(webSocket, text)
// previous
messageSubject.onNext(text)
// new
runBlocking(ioDispatcher) {
Log.d(TAG, "message: $text")
messageChannel.send(text)
}
}
}
But the coroutine Channel doesn't work.
It receives and prints the log only once; from the second message onward nothing is logged.
But when I change the code like below, it works!
override fun onMessage(webSocket: WebSocket, text: String) {
super.onMessage(webSocket, text)
GlobalScope.launch(ioDispatcher) {
Log.d(TAG, "message: $text")
messageChannel.send(text)
}
}
The difference is runBlocking vs GlobalScope.
I heard that GlobalScope may not guarantee message ordering, so it is not suitable for a chat app.
How can I solve this issue?
A default Channel() has no buffer (it is a rendezvous channel), so send(message) suspends until a consumer of the channel calls channel.receive() (which is done implicitly in a for (element in channel) { } loop).
Since you are using runBlocking, suspending effectively means blocking the current thread. OkHttp appears to always deliver messages on the same thread, but it cannot do that while you are still blocking that thread.
The correct solution is to add a buffer to your channel. If it is unlikely that messages will pour in faster than you can process them, you can simply replace Channel() with Channel(Channel.UNLIMITED).
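A minimal sketch of that fix, assuming a String payload instead of WsMsg for brevity: with an unlimited buffer, onMessage can hand the message off without suspending, and a single consumer loop receives the messages in the order they were sent.
import android.util.Log
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import okhttp3.WebSocket
import okhttp3.WebSocketListener

private const val TAG = "MyWebSocketService"

class MyWebSocketService : WebSocketListener() {

    // Buffered channel: with UNLIMITED capacity, sending never suspends,
    // and elements reach the consumer in the order they were sent.
    val messageChannel = Channel<String>(Channel.UNLIMITED)

    override fun onMessage(webSocket: WebSocket, text: String) {
        super.onMessage(webSocket, text)
        // trySend does not block OkHttp's reader thread; with an unlimited
        // buffer it only fails if the channel has been closed.
        messageChannel.trySend(text)
    }
}

// Consumer side, e.g. in a ViewModel-owned scope: messages arrive in order.
fun collectMessages(service: MyWebSocketService, scope: CoroutineScope) {
    scope.launch {
        for (message in service.messageChannel) {
            Log.d(TAG, "message: $message")
        }
    }
}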
In my app I start a WebSocketWorker task that runs periodically every 15 minutes. As the name implies, it contains a WebSocket for listening to a socket in the background:
// MainApplication.kt
override fun onCreate() {
super.onCreate()
if (BuildConfig.DEBUG) {
Timber.plant(DebugTree())
}
val work = PeriodicWorkRequestBuilder<WebSocketWorker>(15, TimeUnit.MINUTES).build()
workManager.enqueueUniquePeriodicWork("UniqueWebSocketWorker", ExistingPeriodicWorkPolicy.KEEP, work)
}
The WebSocketWorker contains the following logic:
@HiltWorker
class WebSocketWorker @AssistedInject constructor(
@Assisted appContext: Context,
@Assisted workerParams: WorkerParameters
) : CoroutineWorker(appContext, workerParams) {
inner class MyWebSocketListener : WebSocketListener() {
override fun onMessage(webSocket: WebSocket, text: String) {
Timber.d("The message sent is %s", text)
// do sth. with the message
}
override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
t.localizedMessage?.let { Timber.e("onFailure: %s", it) }
response?.message?.let { Timber.e("onFailure: %s", it) }
}
}
override suspend fun doWork(): Result {
try{
// code to be executed
val request = Request.Builder().url("ws://***.***.**.***:8000/ws/chat/lobby/").build()
val myWebSocketListener = MyWebSocketListener()
val client = OkHttpClient()
client.newWebSocket(request, myWebSocketListener)
return Result.success()
}
catch (throwable:Throwable){
Timber.e("There is a failure")
Timber.e("throwable.localizedMessage: %s", throwable.localizedMessage)
// clean up and log
return Result.failure()
}
}
}
As you can see, I set up the WebSocket in the Worker class and everything is fine; listening to the socket works.
Now I also want to add the "sending of messages" functionality to my app. How can I reuse the WebSocket created in WebSocketWorker? Can I pass input data to the WebSocketWorker that runs in the background?
Let's say I have a EditText for typing the message and a Button to send the message with a setOnClickListener attached like this:
binding.sendButton.setOnClickListener {
// get message
val message = binding.chatMessageEditText.text.toString()
// check if not empty
if(message.isNotEmpty()) {
// HOW CAN I REUSE THE WEBSOCKET RUNNING PERIODICALLY IN THE BACKGROUND?
// CAN I PASS THE MESSAGE TO THAT WEBSOCKET ?
// OR SHOULD I CREATE A DIFFERENT WORKER FOR SENDING MESSAGES (e.g. a OneTimeWorkRequest<SendMessageWorker>)?
}
}
From the documentation I know that you need to build Data objects to pass inputs, but there was no example showing how to pass input to a worker that is already running periodically in the background.
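For reference, this is how I understand input is normally passed via Data when enqueuing a request (SendMessageWorker and the "message" key are hypothetical names here); as far as I can tell this only applies when a new request is enqueued, not to a worker that is already running:
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.workDataOf

// Hypothetical one-off request that carries the chat message as input data.
val sendWork = OneTimeWorkRequestBuilder<SendMessageWorker>()
    .setInputData(workDataOf("message" to message))
    .build()
workManager.enqueue(sendWork)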
My experience says that you can't, really. Basically you can't interact with the worker object via the API. It is really annoying.
For example, with JobScheduler you have the option to get a job and check its parameters. There is no such option with WorkManager. For example, I would like to check the current state of the constraints: which are satisfied and which are not. There is nothing like this; you can only check states, cancel work, and that is about all.
My guess is that this is because WorkManager is a "facade/adapter" over other mechanisms like JobScheduler. It has its own DB to restore the scheduled jobs on device restart and things like that, but beyond that, interacting with the internals was probably too complicated for them to expose, so they just skipped it.
You can just inject some other object and have the worker ask it for its data every time it runs. I don't see another option.
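A minimal sketch of that idea, assuming a Hilt-provided singleton that owns the socket; ChatSocketHolder, connect and send are illustrative names, not from the original code. The worker opens the connection through this object in doWork(), and the send button's click listener calls send() on the same instance, so both share one socket.
import javax.inject.Inject
import javax.inject.Singleton
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.WebSocket
import okhttp3.WebSocketListener

@Singleton
class ChatSocketHolder @Inject constructor(
    private val client: OkHttpClient
) {
    private var webSocket: WebSocket? = null

    // Called from WebSocketWorker.doWork(); reuses the socket if one is already open.
    fun connect(url: String, listener: WebSocketListener) {
        if (webSocket == null) {
            webSocket = client.newWebSocket(Request.Builder().url(url).build(), listener)
        }
    }

    // Called from the send button's click listener; returns false if no socket is open.
    fun send(message: String): Boolean = webSocket?.send(message) ?: false
}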
I'm having issues trying to poll a print queue which is hosted online.
Essentially, when you call the URL, if there is an item in the queue it is returned; if not, the request returns nothing, i.e. it times out.
The print queue is reduced by one item on each call.
-- Removed --
** Edit ** I have tested this: the code actually works while the app is open or in the background, but not when the app is closed. I would ideally like to cover that case as well.
Alternative solutions are welcome. Here is my code:
Printer Service
override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
super.onStartCommand(intent, flags, startId)
mRunning = true
Thread{
while(mRunning){
Thread.sleep(30000)
val response = transmit
.sendRequestManual("printerURL")
if (response != null){
val openData = response.getElementsByTagName("printdata")
if(openData.length > 0){
for (socket in sockets) {
/* Bluetooth Printers */
write(openData.item(0).textContent, socket)
}
}
}
}
}.start()
return START_STICKY
}
where mRunning is set to false if the service is destroyed
Transmit
fun sendRequestManual(sendURL: String): Document?{
/* create the XML form to be sent */
val client = OkHttpClient()
val blockingQueue: BlockingQueue<Document> = ArrayBlockingQueue(1)
val request = Request.Builder()
.url(sendURL)
.get()
.build()
client.newCall(request).enqueue(object: Callback{
override fun onFailure(call: Call, e: IOException){
/* onFailure is called each time */
}
override fun onResponse(call: Call, response: Response) {
if(!response.isSuccessful){
} else {
responseXML = response.body?.string()
blockingQueue.add(parseStringXMLDoc(responseXML!!))
}
}
})
return blockingQueue.poll(10,TimeUnit.SECONDS)
}
Manifest
<service
android:name=".services.PrintService"
android:enabled="true"
android:exported="false"
android:stopWithTask="false"/>
My code was not polling correctly; I am still seeking an answer for polling while the app is closed.
I increased the blocking queue timeout to 90 seconds.
So I have solved this issue. To correctly poll a server using OkHttp we need to set the connection and read timeouts via the builder:
var client = OkHttpClient().newBuilder()
.connectTimeout(90,TimeUnit.SECONDS)
.readTimeout(15,TimeUnit.SECONDS)
.build()
Essentially, when the request is made the app will try to connect for up to 90 seconds; once a connection is made, the server has 15 seconds to send its data.
I removed
Thread.sleep(30000)
because it was causing a gap in the polling, making me think it wasn't working.
I increased
blockingQueue.poll(10,TimeUnit.SECONDS)
to 90 seconds to match the server connection timeout; the blocking queue stops waiting as soon as data is received.
So essentially, whenever a request is made the server has 90 seconds to respond; if it doesn't, the app sends a new request.
If the server does respond, a new request is made and sent after the result has been processed, which is perfect for the intended purpose (reading off a printer queue).
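A minimal sketch of the resulting loop, assuming a synchronous call on a background thread; printerURL, isRunning and handlePrintData are placeholders rather than names from the original code.
import java.io.IOException
import java.util.concurrent.TimeUnit
import okhttp3.OkHttpClient
import okhttp3.Request

// Long-poll loop: each iteration gives the server up to 90 s to answer;
// on timeout or error we simply issue the next poll.
fun pollPrintQueue(isRunning: () -> Boolean, handlePrintData: (String) -> Unit) {
    val client = OkHttpClient().newBuilder()
        .connectTimeout(90, TimeUnit.SECONDS) // up to 90 s to establish the connection
        .readTimeout(15, TimeUnit.SECONDS)    // once connected, 15 s to deliver the body
        .build()

    while (isRunning()) {
        val request = Request.Builder().url("printerURL").get().build()
        try {
            client.newCall(request).execute().use { response ->
                if (response.isSuccessful) {
                    response.body?.string()?.let(handlePrintData)
                }
            }
        } catch (e: IOException) {
            // timeout or network error: fall through and poll again
        }
    }
}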
Our server uses Nginx as the web server, with the nginx_push_stream module compiled in. Before using push stream we used REST, then switched to WebSocket, but WebSocket messages were sometimes lost when the client or the server had low bandwidth. In 2019 we moved from WebSocket to Server-Sent Events (SSE) / EventSource (an event stream with the text/event-stream content type) to reduce loss on both the client and the server.
Does anyone have an idea of an event-stream library that can be used by the Android client and the iPhone client?
I have already tried OkHttp, but its event-stream support is not ready yet; RxSSE did not work on Android (no response at all).
I hope that next year OkHttp-EventSource will be updated for the Android client and also the iPhone client.
After 3 days of struggling to find a library supporting SSE on the Android client, I found this blog, Accessing SSE, which helped me a lot in implementing SSE, as well as this SSE library.
This is a sample SSE implementation in Kotlin, even though the library is written in Java.
1. Preparing the event handler
interface DefaultEventHandler : EventHandler {
@Throws(Exception::class)
override fun onOpen() {
Log.i("open","open")
}
@Throws(Exception::class)
override fun onClosed() {
Log.i("close","close")
}
@Throws(Exception::class)
override fun onMessage(event: String, messageEvent: MessageEvent) {
Log.i("event", messageEvent.data)
}
override fun onError(t: Throwable) {
Log.e("error", t.toString())
}
override fun onComment(comment: String) {
Log.i("event", comment)
}
}
class MessageEventHandler : DefaultEventHandler {
override fun onMessage(event: String, messageEvent: MessageEvent) {
super.onMessage(event, messageEvent)
val data = messageEvent.data
Log.i("message", data)
}
}
2. Implementing the event source
import java.net.URI
import java.util.concurrent.TimeUnit
.....
fun initEventSource(url: String) {
val eventHandler = MessageEventHandler()
try {
val eventSource: EventSource = EventSource.Builder(eventHandler, URI.create(url))
.reconnectTimeMs(3000)
.build()
eventSource.start()
TimeUnit.SECONDS.sleep(10)
} catch (e: Exception) {
Log.e("error", e.toString())
}
}
I hope this can be an alternative protocol between client and server to REST or WebSocket, for cases where the server continuously sends data to the client as a stream without the client having to request it.
I have added a gist of using the SSE library: https://gist.github.com/subhanshuja/9079ec9df0927b1da26ee57cf9da1f26.