I'm having issues polling a print queue that is hosted online.
Essentially, when you call the URL, if there is an item in the queue it is returned; if not, nothing is returned and the request eventually times out.
Each call removes one item from the queue.
-- Removed --
** Edit ** I have tested this: the code actually works while the app is open or in the background, but not when the app is closed. I would ideally like to cover that case as well.
Alternate solutions are welcome. Here is my code:
Printer Service
override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    super.onStartCommand(intent, flags, startId)
    mRunning = true
    Thread {
        while (mRunning) {
            Thread.sleep(30000)
            val response = transmit
                .sendRequestManual("printerURL")
            if (response != null) {
                val openData = response.getElementsByTagName("printdata")
                if (openData.length > 0) {
                    for (socket in sockets) {
                        /* Bluetooth Printers */
                        write(openData.item(0).textContent, socket)
                    }
                }
            }
        }
    }.start()
    return START_STICKY
}
where mRunning is set to false if the service is destroyed
Transmit
fun sendRequestManual(sendURL: String): Document? {
    /* create the XML form to be sent */
    val client = OkHttpClient()
    val blockingQueue: BlockingQueue<Document> = ArrayBlockingQueue(1)
    val request = Request.Builder()
        .url(sendURL)
        .get()
        .build()
    client.newCall(request).enqueue(object : Callback {
        override fun onFailure(call: Call, e: IOException) {
            /* onFailure is called each time */
        }

        override fun onResponse(call: Call, response: Response) {
            if (!response.isSuccessful) {
            } else {
                responseXML = response.body?.string()
                blockingQueue.add(parseStringXMLDoc(responseXML!!))
            }
        }
    })
    return blockingQueue.poll(10, TimeUnit.SECONDS)
}
Manifest
<service
    android:name=".services.PrintService"
    android:enabled="true"
    android:exported="false"
    android:stopWithTask="false"/>
My code was not polling correctly; I am still looking for an answer for polling after the app is closed.
I have increased the blocking queue timeout to 90 seconds.
I have solved this issue. To correctly poll a server using OkHttp, we need to set the connection and read timeouts via the builder:
var client = OkHttpClient().newBuilder()
    .connectTimeout(90, TimeUnit.SECONDS)
    .readTimeout(15, TimeUnit.SECONDS)
    .build()
Essentially, when the request is made the app will try to connect for up to 90 seconds; once a connection is made, the server has 15 seconds to send its data.
I removed
Thread.sleep(30000)
because it was causing a gap in the polling, making me think it wasn't working.
I increased
blockingQueue.poll(10,TimeUnit.SECONDS)
to 90 seconds to match the connection timeout; the blocking queue stops waiting as soon as data is received.
So essentially, whenever a request is made, the server has 90 seconds to accept the connection; if it doesn't, the app sends a new request.
If the server does respond, a new request is made after the result is processed, which is perfect for the intended purpose (reading off a print queue).
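For illustration, a minimal sketch of the adjusted polling loop under these changes; it reuses transmit, sockets, and write from the service above and assumes sendRequestManual now uses the 90-second client and blockingQueue.poll(90, TimeUnit.SECONDS):
override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    super.onStartCommand(intent, flags, startId)
    mRunning = true
    Thread {
        while (mRunning) {
            // No Thread.sleep here: each call long-polls for up to 90 seconds and
            // returns as soon as the server hands back a print job (or null on timeout).
            val response = transmit.sendRequestManual("printerURL")
            val openData = response?.getElementsByTagName("printdata")
            if (openData != null && openData.length > 0) {
                for (socket in sockets) {
                    write(openData.item(0).textContent, socket) // Bluetooth printers
                }
            }
            // The loop then immediately issues the next request.
        }
    }.start()
    return START_STICKY
}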
I want to connect to a service inside a TextureView. The connection can be cut off if the service crashes, in which case a new connection needs to be established.
On every onSurfaceTextureAvailable I check whether the service is still alive and running. If not, I try to reconnect to the service.
Because multiple things happen in this view, I have to wait until the service is connected before continuing. I would like to solve it with a suspending coroutine like this:
suspend fun bindServiceAndWait() = suspendCoroutine<IService> { continuation ->
    Log.i(TAG, "In co-routine")
    val conn = object : ServiceConnection {
        override fun onServiceConnected(name: ComponentName?, service: IBinder?) {
            continuation.resume(IService.Stub.asInterface(service))
        }

        override fun onServiceDisconnected(name: ComponentName?) {
            // don't care here
        }
    }
    // Context is coming from the application it is running in
    context.bindService(serviceIntent, conn, Context.BIND_AUTO_CREATE)
}
The problem is that the callback is never executed, so the continuation is never resumed.
If I remove the suspending continuation part, everything works fine.
Do you have any idea why it behaves like this?
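For context, a minimal sketch of how a suspend function like this might be called from a coroutine launched on the main dispatcher rather than from a blocking construct; the viewScope and the call site here are illustrative assumptions, not part of the original code:
// Sketch only: calling bindServiceAndWait from a coroutine so the main thread
// stays free to deliver onServiceConnected and resume the continuation.
private val viewScope = CoroutineScope(Dispatchers.Main + SupervisorJob())

override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
    viewScope.launch {
        val service = bindServiceAndWait()  // suspends until the service connects
        // continue setup with the connected IService here
    }
}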
I am developing a chat Android app using a WebSocket (OkHttp).
To do this, I implemented the okhttp3.WebSocketListener interface,
and I am receiving the chat messages in the onMessage callback method.
I already developed it using an Rx PublishSubject, and it works fine.
But I want to change it to a coroutine Channel.
To do this, I added a Channel to my WebSocketListener class.
@Singleton
class MyWebSocketService @Inject constructor(
    private val ioDispatcher: CoroutineDispatcher
) : WebSocketListener() {

    // previous
    val messageSubject: PublishSubject<WsMsg> = PublishSubject.create()
    // new
    val messageChannel: Channel<WsMsg> by lazy { Channel() }

    override fun onMessage(webSocket: WebSocket, text: String) {
        super.onMessage(webSocket, text)
        // previous
        messageSubject.onNext(text)
        // new
        runBlocking(ioDispatcher) {
            Log.d(TAG, "message: $text")
            messageChannel.send(text)
        }
    }
}
But the coroutine Channel doesn't work: it receives and logs only the first message, and nothing is logged for the second and subsequent messages.
However, when I change the code like below, it works!
override fun onMessage(webSocket: WebSocket, text: String) {
    super.onMessage(webSocket, text)
    GlobalScope.launch(ioDispatcher) {
        Log.d(TAG, "message: $text")
        messageChannel.send(text)
    }
}
The difference is runBlocking vs GlobalScope.
I heard that GlobalScope may not preserve message ordering, so it is not suitable for a chat app.
How can I solve this issue?
The default Channel() has no buffer, which causes send(message) to suspend until a consumer of the channel calls channel.receive() (which is implicitly done in a for (element in channel) {} loop).
Since you are using runBlocking, suspending effectively means blocking the current thread. It appears that OkHttp always delivers messages on the same thread, but it cannot do that while you are still blocking that thread.
The correct solution is to add a buffer to your channel. If it is unlikely that messages will pour in faster than you can process them, you can simply replace Channel() with Channel(Channel.UNLIMITED).
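As a minimal sketch of that change (the String channel type, trySend, and the consumer loop are illustrative assumptions rather than the asker's exact types):
// Sketch only: a buffered channel so delivery never suspends in onMessage,
// with a consumer collecting messages in arrival order on a coroutine.
val messageChannel: Channel<String> = Channel(Channel.UNLIMITED)

override fun onMessage(webSocket: WebSocket, text: String) {
    super.onMessage(webSocket, text)
    // With an unlimited buffer this returns immediately instead of suspending.
    messageChannel.trySend(text)
}

// Consumer side (e.g. in a ViewModel), receiving messages in order:
fun observeMessages(scope: CoroutineScope) {
    scope.launch {
        for (message in messageChannel) {
            Log.d(TAG, "message: $message")
        }
    }
}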
I have a chat application which, of course, works with sockets. I have built a SocketManager that holds the callbacks and the send method from the 'com.neovisionaries:nv-websocket-client:2.14' library.
override fun sendMessage(text: String) {
    println("## SEND: $text")
    webSocket?.let {
        it.sendText(text)
    }
}

override fun onTextMessage(websocket: WebSocket?, message: String?) {
    super.onTextMessage(websocket, message)
    println("## RECEIVED: Something received")
    try {
        flowSocketHandler.webSocketEventResolver(s, message) {
            sendMessage(it)
        }
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
When a new text message comes in from the socket, the FlowSocketHandler resolves the type of message and calls the proper handler for it: for example, FileHandler for file messages, MessageHandler for simple messages, VideoHandler for video-call messages.
private val scope = CoroutineScope(Dispatchers.Main + SupervisorJob())

override fun webSocketEventResolver(server: Server, message: String, socketCallback: (message: String) -> Unit) {
    scope.launch(Dispatchers.IO) {
        try {
            val json = JSONObject(message)
            when (Enums.SocketResponses.toSocketEvent(json)) {
                Enums.SocketResponses.MESSAGE_RECEIVED -> messagesHandler.onMessageReceived(server, json.fromJson(), true, socketCallback)
                Enums.SocketResponses.FILE_PART_RECEIVED -> filesHandler.onFilePartReceived(server, json.fromJson(), socketCallback)
                else -> {}
            }
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }
}
Also, the philosophy of downloading an incoming file is the following:
I receive an incoming text message that says "Hey, you have to download a file that consists of 1000 parts", which triggers MESSAGE_RECEIVED.
I send 1000 socket messages, requesting each part in turn.
I receive each part from the socket and handle it, which triggers FILE_PART_RECEIVED.
As you can see above, all of this takes place in a CoroutineScope, inside the FilesHandler:
class FilesHandlerImpl(private val appContext: Context) : FilesHandler, KoinComponent {

    private val scope = CoroutineScope(Dispatchers.Main + SupervisorJob())

    override suspend fun onFileHeaderReceived(
        server: Server,
        receiver: ReceiveNewMessage,
        isFromSocket: Boolean,
        socketCallback: ((message: String) -> Unit)?
    ) {
        scope.launch(Dispatchers.IO) {
            // Do some checks and start sending requests for each part
            println("## Time header message ${receiver.fileHash}")
            pendingList.forEach { p ->
                socketCallback?.invoke(requestFilePart(p))
            }
            // So here in the log I see
            // ## SEND {json for each part}
        }
    }

    // Take the part and create a file
    override suspend fun onFilePartReceived(
        server: Server,
        receiver: FilePartRcv,
        socketCallback: (message: String) -> Unit
    ) {
        scope.launch(Dispatchers.IO) {
            println("## File part received ${receiver.filePart.segment}")
            filesRepository.createAndWriteFilePart(server, receiver)
            filesRepository.updateFilePartStatus(server, receiver, FILE_PART_RECEIVED)
            if (complete) {
                // Do stuff
            }
        }
        // So here for every part we receive we see in the log
        // ## File part received 1
        // ## File part received 4
        // ## File part received 2
        // ## File part received 6
        // ## File part received 9
        // ......
    }
}
The problem is the following.
While the client is sending the requests for the file parts (so the log is full of ## SEND {...}), the socket callback onTextMessage is also being called, so I also see ## RECEIVED: Something received.
Those callbacks are the incoming file parts, so I should also see ## File part received X. But I don't see any of those UNTIL the send loop finishes; then suddenly I see all of the ## File part received X lines.
What I did was remove the scope from onFileHeaderReceived and onFilePartReceived, and it seems to run a bit smoother.
Can anybody explain to me why this is happening?
scope.launch starts a job asynchronously; it returns a Job that you can hold on to and later cancel or join.
If you just want to ensure the work happens on the IO dispatcher, then you probably want withContext instead. It suspends until the block completes, so the operations happen sequentially.
withContext(Dispatchers.IO) {
...
}
Dispatchers.Main will also typically be single-threaded, so it forms a queue, as you observed. You could use Dispatchers.IO or Dispatchers.Default to get concurrency.
https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-dispatchers/-main.html
A coroutine dispatcher that is confined to the Main thread operating with UI objects. Usually such dispatchers are single-threaded.
Also, the suspend modifier isn't needed if the function just calls scope.launch, since launch isn't a suspending call.
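For illustration, a sketch of what onFilePartReceived could look like with withContext instead of launching into a separate scope; the signature and repository calls are taken from the question, everything else about the class is omitted:
// Sketch only: the caller's coroutine now waits for each part to be written
// instead of firing and forgetting into another scope.
override suspend fun onFilePartReceived(
    server: Server,
    receiver: FilePartRcv,
    socketCallback: (message: String) -> Unit
) {
    withContext(Dispatchers.IO) {
        println("## File part received ${receiver.filePart.segment}")
        filesRepository.createAndWriteFilePart(server, receiver)
        filesRepository.updateFilePartStatus(server, receiver, FILE_PART_RECEIVED)
    }
}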
In my app I start a WebSocketWorker task that runs periodically every 15 minutes. As the name implies, it contains a WebSocket for listening to a socket in the background:
// MainApplication.kt
override fun onCreate() {
    super.onCreate()
    if (BuildConfig.DEBUG) {
        Timber.plant(DebugTree())
    }
    val work = PeriodicWorkRequestBuilder<WebSocketWorker>(15, TimeUnit.MINUTES).build()
    workManager.enqueueUniquePeriodicWork("UniqueWebSocketWorker", ExistingPeriodicWorkPolicy.KEEP, work)
}
The WebSocketWorker contains the following logic:
@HiltWorker
class WebSocketWorker @AssistedInject constructor(
    @Assisted appContext: Context,
    @Assisted workerParams: WorkerParameters
) : CoroutineWorker(appContext, workerParams) {

    inner class MyWebSocketListener : WebSocketListener() {
        override fun onMessage(webSocket: WebSocket, text: String) {
            Timber.d("The message sent is %s", text)
            // do sth. with the message
        }

        override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
            t.localizedMessage?.let { Timber.e("onFailure: %s", it) }
            response?.message?.let { Timber.e("onFailure: %s", it) }
        }
    }

    override suspend fun doWork(): Result {
        try {
            // code to be executed
            val request = Request.Builder().url("ws://***.***.**.***:8000/ws/chat/lobby/").build()
            val myWebSocketListener = MyWebSocketListener()
            val client = OkHttpClient()
            client.newWebSocket(request, myWebSocketListener)
            return Result.success()
        } catch (throwable: Throwable) {
            Timber.e("There is a failure")
            Timber.e("throwable.localizedMessage: %s", throwable.localizedMessage)
            // clean up and log
            return Result.failure()
        }
    }
}
As you can see, in the Worker class I set up the WebSocket and everything is fine: listening to the socket works.
Now I also want to add a "send messages" feature to my app. How can I reuse the WebSocket created in WebSocketWorker? Can I pass input data to the WebSocketWorker that runs in the background?
Let's say I have an EditText for typing the message and a Button to send it, with a setOnClickListener attached like this:
binding.sendButton.setOnClickListener {
    // get message
    val message = binding.chatMessageEditText.text.toString()
    // check if not empty
    if (message.isNotEmpty()) {
        // HOW CAN I REUSE THE WEBSOCKET RUNNING PERIODICALLY IN THE BACKGROUND?
        // CAN I PASS THE MESSAGE TO THAT WEBSOCKET?
        // OR SHOULD I CREATE A DIFFERENT WORKER FOR SENDING MESSAGES (e.g. a OneTimeWorkRequest<SendMessageWorker>)?
    }
}
From the documentation, I know that you need to build Data objects for passing inputs and so on, but I found no example showing how to pass input to a worker that runs periodically in the background.
From my experience, you can't. Basically you can't interact with the worker object via the API. It is really annoying.
For example, with JobScheduler you have the option to get a job and check its parameters. There is no such option with WorkManager. For example, I want to check the current state of the constraints: what is satisfied and what is not. There is nothing like this; you can just check states, cancel, and that is about all.
My guess is that this is because WorkManager is a facade/adapter over other mechanisms like JobScheduler. It has its own DB to restore jobs on device restart and such, but beyond that, interacting with the internals was probably too complicated for them, so they skipped it.
You can just inject some other object, and every time the worker runs it can ask that object for its data. I don't see another option.
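To illustrate that last suggestion, a rough sketch of the kind of injectable singleton that could own the WebSocket so both the worker and the UI use the same instance; the class name, its wiring, and its methods are assumptions, not a WorkManager API:
// Sketch only: a process-wide holder that owns the WebSocket.
// The worker calls connect(); the click listener calls send().
@Singleton
class ChatSocketHolder @Inject constructor() {

    private var webSocket: WebSocket? = null

    fun connect(client: OkHttpClient, request: Request, listener: WebSocketListener) {
        // Reuse the existing socket if one is already open.
        if (webSocket == null) {
            webSocket = client.newWebSocket(request, listener)
        }
    }

    fun send(message: String): Boolean =
        webSocket?.send(message) ?: false
}
The worker would call connect() inside doWork(), and the button's click listener would call send(message) on the same injected instance; this only works while the app process that opened the socket is alive.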
I have a grpc-js server and a Kotlin Android client that makes a server-streaming call. This is the GRPCService class.
class GRPCService {

    private val mChannel = ManagedChannelBuilder
        .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
        .usePlaintext()
        .keepAliveTime(10, TimeUnit.SECONDS)
        .keepAliveWithoutCalls(true)
        .build()

    val asyncStub: ResponderServiceGrpc.ResponderServiceStub =
        ResponderServiceGrpc.newStub(mChannel)
}
And the method is called from a foreground service.
override fun onCreate() {
    super.onCreate()
    ...
    startForeground(MyNotificationBuilder.SERVICE_NOTIFICATION_ID, notificationBuilder.getServiceNotification())
}

override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    val userId = sharedPreferencesManager.getInt(SharedPreferencesManager.USER_ID)
    val taskRequest = Responder.TaskRequest.newBuilder()
        .setUserId(userId)
        .build()

    grpcService.asyncStub.getTasks(taskRequest, object :
        StreamObserver<Responder.TaskResponse> {

        override fun onCompleted() {
            Log.d("grpc Tasks", "Completed")
        }

        override fun onError(t: Throwable?) {
            Log.d("grpc error cause", t?.cause.toString())
            t?.cause?.printStackTrace()
            Log.d("grpc error", "AFTER CAUSE")
            t!!.printStackTrace()
        }

        override fun onNext(value: Responder.TaskResponse?) {
            if (value != null) {
                when (value.command) {
                    ...
                }
            }
        }
    })

    return super.onStartCommand(intent, flags, startId)
}
The connection opens and stays open for about a minute with no communication, and then fails with the following error.
D/grpc error cause: null
D/grpc error: AFTER CAUSE
io.grpc.StatusRuntimeException: INTERNAL: Internal error
io.grpc.Status.asRuntimeException(Status.java:533)
io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:460)
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:689)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$900(ClientCallImpl.java:577)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:751)
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:740)
io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
The grpc-js server is created with the following options.
var server = new grpc.Server({
    "grpc.http2.min_ping_interval_without_data_ms" : 10000,
    "grpc.keepalive_permit_without_calls" : true,
    "grpc.http2.min_time_between_pings_ms" : 10000,
    "grpc.keepalive_time_ms" : 10000,
    "grpc.http2.max_pings_without_data" : 0,
    'grpc.http2.min_ping_interval_without_data_ms': 5000
});
I never received the too many pings error either.
I noticed that if there is periodic communication through this connection (like the server pinging the client with a small amount of data every 30 seconds or so), I don't get the error and the connection stays open for as long as the pinging continues (tested for 2 days).
How do I keep the connection open without resorting to periodically pinging the client?
The managed channel has a property called keepAliveWithoutCalls, which has a default value of false, as seen here. If this is not set to true, the keepalive will not happen when there are no active calls. You would need to set it like so:
private val mChannel = ManagedChannelBuilder
    .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
    .usePlaintext()
    .keepAliveTime(30, TimeUnit.SECONDS)
    .keepAliveWithoutCalls(true)
    .build()
There is a possibility that you will have to change some settings on the server as well to have the connection stay open without any data passing; otherwise you might get an error on the server saying "too many pings". This happens because there are some other settings gRPC needs. I am not sure exactly how to achieve this with a JS server, but it shouldn't be too difficult. These settings include:
GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS
Minimum allowed time between a server receiving successive ping frames without sending any data/header/window_update frame.
And this one:
GRPC_ARG_HTTP2_MIN_SENT_PING_INTERVAL_WITHOUT_DATA_MS
Minimum time between sending successive ping frames without receiving any data/header/window_update frame, Int valued, milliseconds.
And this one:
GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS
Is it permissible to send keepalive pings without any outstanding streams.
There is a Keepalive User Guide for gRPC which I suggest you read through to understand how gRPC should keep connections open. This is the core standard that all server and client implementations should follow, but I have noticed this is not always the case. You can have a look at a previous but similar question I asked a while back here.
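For reference, a sketch of a client-side channel configuration that pairs with such server settings; the exact values are illustrative, and keepAliveTime should not be shorter than the server's minimum ping interval or the server may enforce "too many pings":
// Sketch only: client keepalive tuned to stay within the server's ping limits.
private val mChannel = ManagedChannelBuilder
    .forAddress(GRPC_HOST_ADDRESS, GRPC_HOST_PORT)
    .usePlaintext()
    .keepAliveTime(30, TimeUnit.SECONDS)      // ping the server every 30 s when idle
    .keepAliveTimeout(10, TimeUnit.SECONDS)   // wait up to 10 s for the ping ack
    .keepAliveWithoutCalls(true)              // keep pinging even with no active RPCs
    .build()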
Have you tried the ManagedChannelBuilder.keepAliveTime setting (https://github.com/grpc/grpc-java/blob/master/api/src/main/java/io/grpc/ManagedChannelBuilder.java#L357) ? I am assuming it will work in the middle of a server streaming call.