Google Sign-In: how to retry when there is a failure? - android

Issue: I need to refresh the Google token used for signing into my server. Most of the time this works well, but sometimes the call to Google to get a fresh token (with a TTL of ~1hr) fails for a variety of reasons.
Desired solution: some means of retrying the call to Google that will actually work.
I have code like the following in my app:
private val googleSignInClient: GoogleSignInClient by lazy {
// This takes a measurable amount of time to compute, so do it lazily
val gso = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
.requestIdToken(WEB_CLIENT_ID) // need this to get user ID token later
.requestEmail()
.build()
GoogleSignIn.getClient(appContext, gso)
}
override fun getToken() = getRefreshedGoogleInfo()?.googleToken()
/**
* Here we "silently sign in" to get a refreshed Google ID Token.
*
* This method might block, so do not call it from the main thread.
*/
private fun getRefreshedGoogleInfo(): GoogleUserInfo? {
val task = googleSignInClient.silentSignIn()
// If the task is already complete, return the result immediately
if (task.isComplete) {
val info = task.result.toGoogleUserInfo()
Logger.v(TAG, "silentSignIn result from already-completed task = %s", info.toString())
return info
}
// If the task is not complete, await up to 5s and return result, or null
return try {
val info = task.await().toGoogleUserInfo()
Logger.v(TAG, "silentSignIn result from await task = %s", info.toString())
info
} catch (e: Exception) {
Logger.e(TAG, e, "silentSignIn result from await task = null\nerror = ${e.localizedMessage}")
null
}
}
private fun Task<GoogleSignInAccount>.await() = Tasks.await(this, 5, TimeUnit.SECONDS)
Sometimes the call to task.await() fails because it times out. In such a case, what is the best strategy for trying again? I have tried a naive strategy of simply retrying immediately up to some arbitrary limit, but I have observed that if it fails the first time, it always fails on subsequent attempts. The Google docs aren't very helpful with respect to this scenario.

Instead of waiting on the task for up to 5 seconds, why not retry with a sign-in intent instead? Let me show you with edits to your code.
private fun getRefreshedGoogleInfo(): GoogleUserInfo? {
val task = googleSignInClient.silentSignIn()
// If the task is already complete, return the result immediately
if (task.isComplete) {
val info = task.result.toGoogleUserInfo()
Logger.v(TAG, "silentSignIn result from already-completed task = %s", info.toString())
return info
}
else { // Else is not strictly necessary, but it improves readability.
// No immediate result is ready; show some progress indicator and wait for the async callback.
task.addOnCompleteListener(this) { signInTask ->
// The same check as above, but asynchronous this time. Inside the listener the task is
// always complete, so check for success instead.
if (signInTask.isSuccessful) {
val info = signInTask.result.toGoogleUserInfo() // Same handling as in the task.isComplete branch above.
Logger.v(TAG, "silentSignIn result from listener = %s", info.toString())
// A listener cannot return a value; store the result in a class-wide variable
// or hand it to a callback (see the note below).
} else {
// This is where we try again.
signInToGoogle() // Go below to see the trick.
}
}
return null // The refreshed info will arrive asynchronously via the listener.
}
}
With the code above alone, you no longer have to wait for 5 seconds, but it will not try again on its own. To retry in a more persistent way (I assume you call these functions from an Activity):
private fun signInToGoogle(){
val signInIntent = googleSignInClient!!.signInIntent
startActivityForResult(signInIntent, RC_SIGN_IN) //You might consider checking for internet connection before calling this
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if(requestCode == RC_SIGN_IN){
val task = GoogleSignIn.getSignedInAccountFromIntent(data)
/* your code with task handle goes here */
// task.result carries the same object as above. But since this is a listener, it cannot return a value, so it is up to you how to handle it. I handle this with a class-wide variable that holds a GoogleUserInfo?
}
}
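For completeness, here is a hedged sketch of that "class-wide variable" handling; the variable name latestGoogleInfo is illustrative, and getResult(ApiException::class.java) is the documented way to unwrap the sign-in task:
// Hedged sketch: store the interactive sign-in result in a class-wide variable.
private var latestGoogleInfo: GoogleUserInfo? = null // illustrative name, not from the original answer

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == RC_SIGN_IN) {
        val task = GoogleSignIn.getSignedInAccountFromIntent(data)
        latestGoogleInfo = try {
            task.getResult(ApiException::class.java).toGoogleUserInfo()
        } catch (e: ApiException) {
            Logger.e(TAG, e, "Interactive sign-in failed, status code = ${e.statusCode}")
            null
        }
    }
}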
RC_SIGN_IN is just a request code. You could add it to your companion object like this, or use the literal value directly:
companion object {
private const val RC_SIGN_IN = 9001
}
Note: you could use intents for the first attempt too. Or, if intents are too much hassle for you, you could make the function sleep for 5 seconds and call it again.
But I strongly recommend the intent approach.
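If you do go the sleep-and-retry route instead, a minimal sketch could look like the following; the attempt count, delay, and the helper name getRefreshedGoogleInfoWithRetry are arbitrary assumptions, and it must be called off the main thread just like the original function.
// Hedged sketch of the "sleep and call again" fallback mentioned in the note above.
private suspend fun getRefreshedGoogleInfoWithRetry(
    maxAttempts: Int = 3,          // arbitrary limit, not from the original answer
    retryDelayMs: Long = 5_000,    // arbitrary pause between attempts
): GoogleUserInfo? {
    repeat(maxAttempts) { attempt ->
        getRefreshedGoogleInfo()?.let { return it } // success: stop retrying
        Logger.v(TAG, "silentSignIn attempt ${attempt + 1} failed, retrying in ${retryDelayMs}ms")
        delay(retryDelayMs) // kotlinx.coroutines delay; use Thread.sleep(...) if you are not in a coroutine
    }
    return null // every attempt failed
}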

Related

Kotlin coroutine does not run synchronously

In all the cases where I have used coroutines so far, they have executed their "lines" sequentially, so I have been able to use the result of a variable in the next line of code.
I have an ImageRepository class that calls the server, gets a list of images and, once obtained, creates a JSON with the images and related information.
class ImageRepository {
val API_IMAGES = "https://api.MY_API_IMAGES"
suspend fun fetch (activity: AppCompatActivity) {
activity.lifecycleScope.launch() {
val imagesResponse = withContext(Dispatchers.IO) {
getRequest(API_IMAGES)
}
if (imagesResponse != null) {
val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
} else {
// TODO Warning to user
Log.e(TAG, "Error: Get request returned no response")
}
...// All the rest of code
}
}
}
Well, the suspend function executes sequentially as expected: it first makes the call to the server in getRequest and, once there is a response, composes the JSON. So far, so good.
And this is the call to the "ImageRepository" suspend function from my main activity:
lifecycleScope.launch {
val result = withContext(Dispatchers.IO) { neoRepository.fetch(this@MainActivity) }
Log.i(TAG, "After suspend fun")
}
The problem is that, as soon as it is executed, it calls the suspend function and then immediately displays the log, before the work has finished (so the result is obviously empty). It doesn't wait for the suspend function to finish before displaying the log.
Why? What am I doing wrong?
I have tried the different Dispatchers, etc, but without success.
I appreciate any help.
Thanks and best regards.
It’s because you are launching another coroutine in parallel from inside your suspend function. Instead of launching another coroutine there, call the contents of that launch directly in your suspend function.
A suspend function is just like a regular function, it executes one instruction after another. The only difference is that it can be suspended, meaning the runtime environment can decide to halt / suspend execution to do other work and then resume execution later.
This is true unless you start an asynchronous operation inside it, which is what you are doing here and should avoid. Your fetch operation should look like:
class ImageRepository {
suspend fun fetch () {
val imagesResponse = getRequest(API_IMAGES)
if (imagesResponse != null) {
val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
} else {
// TODO Warning to user
Log.e(TAG, "Error: Get request returned no response")
}
... // All the rest of code
}
}
-> just like a regular function. Of course, you need to call it from a coroutine:
lifecycleScope.launch {
val result = withContext(Dispatchers.IO) { neoRepository.fetch() }
Log.i(TAG, "After suspend fun")
}
Google recommends injecting the dispatcher into the lower-level classes (https://developer.android.com/kotlin/coroutines/coroutines-best-practices), so ideally you'd do:
val neoRepository = ImageRepository(Dispatchers.IO)
lifecycleScope.launch {
val result = neoRepository.fetch()
Log.i(TAG, "After suspend fun")
}
class ImageRepository(private val dispatcher: CoroutineDispatcher) {
suspend fun fetch () = withContext(dispatcher) {
val imagesResponse = getRequest(API_IMAGES)
if (imagesResponse != null) {
val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
} else {
// TODO Warning to user
Log.e(TAG, "Error: Get request returned no response")
}
... // All the rest of code
}
}

How to use Flow retry function on a specific event?

I see that Flow has a retry mechanism, but my use case is somewhat different from what I see in the docs. I have a fragment containing a list that is filled from an API when the fragment opens, but the API call may fail and throw an exception for any reason. In that case, I want to show a button that calls the API again when clicked, as follows:
Repository
fun getData(): Flow<Result<T>> = service.getData()
ViewModel
val data: Flow<Result<T>> = repo.getData()
Fragment
viewModel.data.collect{ result ->
if(result is Error){
showRetryButton() // Show the retry button on failed API
}
....
}
retryButton.setOnClickListener{
// do something to retry the API call
}
Can Flow retry help me here? If not, what do you think is the best way to call the failed API again?
Thanks in advance :)
I don't think retry can help in this case. Basically, to fire an API request, repo.getData() has to be called, so just call it again and collect the returned Flow:
// Fragment:
private var job: Job? = null // save here the job of collecting data
private fun fetchData() {
job?.cancel() // cancel previous job
job = lifecycleScope.launch {
viewModel.getData().collect { result ->
if (result is Error) {
showRetryButton() // Show the retry button on failed API
}
//...
}
}
}
retryButton.setOnClickListener{
fetchData()
}
// ViewModel:
fun getData() = repo.getData()
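Another option, shown below as a hedged sketch rather than part of the answer above, is to keep the re-fetch logic in the ViewModel and drive it from a trigger flow; retryTrigger is an illustrative name, and flatMapLatest requires opting in to ExperimentalCoroutinesApi.
// ViewModel: hedged sketch of re-fetching via a retry trigger instead of re-collecting in the Fragment.
private val retryTrigger = MutableSharedFlow<Unit>(
    replay = 1,
    onBufferOverflow = BufferOverflow.DROP_OLDEST
).apply { tryEmit(Unit) } // emit once so the first collection fetches immediately

@OptIn(ExperimentalCoroutinesApi::class)
val data: Flow<Result<T>> = retryTrigger.flatMapLatest { repo.getData() }

fun retry() {
    retryTrigger.tryEmit(Unit) // call this from the retry button's click listener
}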

How to handle race condition with Coroutines in Kotlin?

I have a coroutine/flow problem that I'm trying to solve
I have this method getClosestRegion that's supposed to do the following:
Attempt to connect to every region
The first region to connect (I use launch to attempt to connect to all of them concurrently) should be returned, and the rest of the region requests should be cancelled
If all regions failed to connect OR after a 30 second timeout, throw an exception
That's currently what I have:
override suspend fun getClosestRegion(): Region {
val regions = regionsRepository.getRegions()
val firstSuccessResult = MutableSharedFlow<Region>(replay = 1)
val scope = CoroutineScope(Dispatchers.IO)
// Attempts to connect to every region until the first success
scope.launch {
regions.forEach { region ->
launch {
val retrofitClient = buildRetrofitClient(region.backendUrl)
val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
val response = regionAuthenticationAPI.canConnect()
if (response.isSuccessful && scope.isActive) {
scope.cancel()
firstSuccessResult.emit(region)
}
}
}
}
val result = withTimeoutOrNull(TimeUnit.SECONDS.toMillis(30)) { firstSuccessResult.first() }
if (result != null)
return result
throw Exception("Failed to connect to any region")
}
Issues with current code:
If 1 region connects successfully, I expect the rest of the requests to be cancelled (by scope.cancel()), but in reality other regions that successfully connect AFTER the first one also emit a value to the flow (scope.isActive returns true)
I don't know how to handle the race condition of throwing an exception if all regions fail to connect or after a 30-second timeout
Also, I'm pretty new to Kotlin Flow and coroutines, so I don't know if creating a flow is really necessary here
You don't need to create a CoroutineScope and manage it from within a coroutine. You can use the coroutineScope function instead.
I of course didn't test any of the below, so please excuse syntax errors and omitted <types> that the compiler can't infer.
Here's how you might do it using a select clause, but I think it's kind of awkward:
override suspend fun getClosestRegion(): Region = coroutineScope {
val regions = regionsRepository.getRegions()
val result = select<Region?> {
onTimeout(30.seconds) { null }
for (region in regions) {
launch {
val retrofitClient = buildRetrofitClient(region.backendUrl)
val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
val result = regionAuthenticationAPI.canConnect()
if (!result.isSuccessful) {
delay(30.seconds) // prevent this one from being selected
}
}.onJoin { region }
}
}
coroutineContext.cancelChildren() // Cancel any remaining async jobs
requireNotNull(result) { "Failed to connect to any region" }
}
Here's how you could do it with channelFlow:
override suspend fun getClosestRegion(): Region = coroutineScope {
val regions = regionsRepository.getRegions()
val flow = channelFlow {
for (region in regions) {
launch {
val retrofitClient = buildRetrofitClient(region.backendUrl)
val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
val result = regionAuthenticationAPI.canConnect()
if (result.isSuccessful) {
send(region)
}
}
}
}
val result = withTimeoutOrNull(30.seconds) {
flow.firstOrNull()
}
coroutineContext.cancelChildren() // Cancel any remaining async jobs
requireNotNull(result) { "Failed to connect to any region" }
}
I think your MutableSharedFlow technique could also work if you dropped the isActive check and used coroutineScope { } and cancelChildren() like I did above. But it seems awkward to create a shared flow that isn't shared by anything (it's only used by the same coroutine that created it).
If 1 region connects successfully, I expect the rest of the requests to be cancelled (by scope.cancel()), but in reality other regions that successfully connect AFTER the first one also emit a value to the flow (scope.isActive returns true)
To quote the documentation...
Coroutine cancellation is cooperative. A coroutine code has to cooperate to be cancellable.
Once your client call has started, you can't cancel it - the client has to be able to interrupt what it's doing, and that probably isn't happening inside Retrofit.
I'll presume that it's not a problem that you're sending more requests than you need - otherwise you won't be able to make simultaneous requests.
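To make that concrete, here is a hedged sketch (the ensureActive() call is my addition, not in the original code): since the Retrofit call itself may not be interruptible, checking the coroutine's state right after it returns stops an already-cancelled sibling from emitting a late result.
launch {
    val retrofitClient = buildRetrofitClient(region.backendUrl)
    val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
    val response = regionAuthenticationAPI.canConnect() // may complete even after cancellation was requested
    ensureActive() // cooperative check: throws CancellationException if this coroutine was cancelled meanwhile
    if (response.isSuccessful) {
        firstSuccessResult.emit(region)
    }
}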
I don't know how to handle the race condition of throw exception if all regions failed to connect or after 30 second timeout
As I understand it, there are three situations:
There's one successful response - other responses should be ignored
All responses are unsuccessful - an error should be thrown
All responses take longer than 30 seconds - again, throw an error
Additionally I don't want to keep track of how many requests are active/failed/successful. That requires shared state, and is complicated and brittle. Instead, I want to use parent-child relationships to manage this.
Timeout
The timeout is already handled by withTimeoutOrNull() - easy enough!
First success
Selects could be useful here, and I see @Tenfour04 has provided that answer. I'll give an alternative.
Using suspendCancellableCoroutine() provides a way to
return as soon as there's a success - resume(...)
throw an error when all requests fail - resumeWithException
suspend fun getClosestRegion(
regions: List<Region>
): Region = withTimeoutOrNull(10.seconds) {
// don't give the supervisor a parent, because if one response is successful
// the parent will await the cancellation of the other children
val supervisorJob = SupervisorJob()
// suspend the current coroutine. We'll use cont to continue when
// there's a definite outcome
suspendCancellableCoroutine<Region> { cont ->
launch(supervisorJob) {
regions
.map { region ->
// note: use async instead of launch so we can do awaitAll()
// to track when all tasks have completed, but none have resumed
async(supervisorJob) {
coroutineContext.job.invokeOnCompletion {
log("cancelling async job for $region")
}
val retrofitClient = buildRetrofitClient(region)
val response = retrofitClient.connect()
// if there's a success, then try to complete the supervisor.
// complete() prevents multiple jobs from continuing the suspended
// coroutine
if (response.isSuccess && supervisorJob.complete()) {
log("got success for $region - resuming")
// happy flow - we can return
cont.resume(region)
}
}
}.awaitAll()
// uh-oh, nothing was a success
if (supervisorJob.complete()) {
log("no successful regions - throwing exception & resuming")
cont.resumeWithException(Exception("no region response was successful"))
}
}
}
} ?: error("Timeout error - unable to get region")
examples
all responses are successful
If all tasks are successful, then the call returns as soon as the fastest response arrives.
getClosestRegion(
List(5) {
Region("attempt1-region$it", success = true)
}
)
...
log("result for all success: $regionSuccess, time $time")
got success for Region(name=attempt1-region1, success=true, delay=2s) - resuming
cancelling async job for Region(name=attempt1-region3, success=true, delay=2s)
result for all success: Region(name=attempt1-region1, success=true, delay=2s), time 2.131312600s
cancelling async job for Region(name=attempt1-region1, success=true, delay=2s)
all responses fail
When all responses fail, it should take only as long as the slowest request.
getClosestRegion(
List(5) {
Region("attempt2-region$it", success = false)
}
)
...
log("failure: $allFailEx, time $time")
[DefaultDispatcher-worker-6 #all-fail#6] cancelling async job for Region(name=attempt2-region4, success=false, delay=1s)
[DefaultDispatcher-worker-4 #all-fail#4] cancelling async job for Region(name=attempt2-region2, success=false, delay=4s)
[DefaultDispatcher-worker-3 #all-fail#3] cancelling async job for Region(name=attempt2-region1, success=false, delay=4s)
[DefaultDispatcher-worker-6 #all-fail#5] cancelling async job for Region(name=attempt2-region3, success=false, delay=4s)
[DefaultDispatcher-worker-6 #all-fail#2] cancelling async job for Region(name=attempt2-region0, success=false, delay=5s)
[DefaultDispatcher-worker-6 #all-fail#1] no successful regions - throwing exception resuming
[DefaultDispatcher-worker-6 #all-fail#1] failure: java.lang.Exception: no region response was successful, time 5.225431500s
all responses timeout
And if all responses take longer than the timeout (I reduced it to 10 seconds in my example), then an exception will be thrown.
getClosestRegion(
List(5) {
Region("attempt3-region$it", false, 100.seconds)
}
)
...
log("timeout: $timeoutEx, time $time")
[kotlinx.coroutines.DefaultExecutor] timeout: java.lang.IllegalStateException: Timeout error - unable to get region, time 10.070052700s
Full demo code
import kotlin.coroutines.*
import kotlin.random.*
import kotlin.time.Duration.Companion.seconds
import kotlin.time.*
import kotlinx.coroutines.*
suspend fun main() {
System.getProperties().setProperty("kotlinx.coroutines.debug", "")
withContext(CoroutineName("all-success")) {
val (regionSuccess, time) = measureTimedValue {
getClosestRegion(
List(5) {
Region("attempt1-region$it", true)
}
)
}
log("result for all success: $regionSuccess, time $time")
}
log("\n------\n")
withContext(CoroutineName("all-fail")) {
val (allFailEx, time) = measureTimedValue {
try {
getClosestRegion(
List(5) {
Region("attempt2-region$it", false)
}
)
} catch (exception: Exception) {
exception
}
}
log("failure: $allFailEx, time $time")
}
log("\n------\n")
withContext(CoroutineName("timeout")) {
val (timeoutEx, time) = measureTimedValue {
try {
getClosestRegion(
List(5) {
Region("attempt3-region$it", false, 100.seconds)
}
)
} catch (exception: Exception) {
exception
}
}
log("timeout: $timeoutEx, time $time")
}
}
suspend fun getClosestRegion(
regions: List<Region>
): Region = withTimeoutOrNull(10.seconds) {
val supervisorJob = SupervisorJob()
suspendCancellableCoroutine<Region> { cont ->
launch(supervisorJob) {
regions
.map { region ->
async(supervisorJob) {
coroutineContext.job.invokeOnCompletion {
log("cancelling async job for $region")
}
val retrofitClient = buildRetrofitClient(region)
val response = retrofitClient.connect()
if (response.isSuccess && supervisorJob.complete()) {
log("got success for $region - resuming")
cont.resume(region)
}
}
}.awaitAll()
// uh-oh, nothing was a success
if (supervisorJob.complete()) {
log("no successful regions - throwing exception resuming")
cont.resumeWithException(Exception("no region response was successful"))
}
}
}
} ?: error("Timeout error - unable to get region")
////////////////////////////////////////////////////////////////////////////////////////////////////
data class Region(
val name: String,
val success: Boolean,
val delay: Duration = Random(name.hashCode()).nextInt(1..5).seconds,
) {
val backendUrl = "http://localhost/$name"
}
fun buildRetrofitClient(region: Region) = RetrofitClient(region)
class RetrofitClient(private val region: Region) {
suspend fun connect(): ClientResponse {
delay(region.delay)
return ClientResponse(region.backendUrl, region.success)
}
}
data class ClientResponse(
val url: String,
val isSuccess: Boolean,
)
fun log(msg: String) = println("[${Thread.currentThread().name}] $msg")

How to complete a Kotlin Flow in Android Worker

I'm investigating the use of Kotlin Flow within my current Android application
My application retrieves its data from a remote server via Retrofit API calls.
Some of these APIs return 50,000 data items in 500-item pages.
Each API response contains an HTTP Link header with the next page's complete URL.
These calls can take up to 2 seconds to complete.
In an attempt to reduce the elapsed time I have employed a Kotlin Flow to concurrently process each page
of data while also making the next page API call.
My flow is defined as follows:
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val internalWorkWorkState = MutableStateFlow<Response<List<MyPage>>?>(null)
private val workWorkState = internalWorkWorkState.asStateFlow()
private val myJob: Job
init {
myJob = GlobalScope.launch(persistenceThreadPool) {
workWorkState.collect { page ->
if (page == null) {
} else managePage(page!!)
}
}
}
My recursive function that fetches all pages is defined as follows:
private suspend fun managePages(accessToken: String, response: Response<List<MyPage>>) {
when {
result != null -> return
response.isSuccessful -> internalWorkWorkState.emit(response)
else -> {
manageError(response.errorBody())
result = Result.failure()
return
}
}
response.headers().filter { it.first == HTTP_HEADER_LINK && it.second.contains(REL_NEXT) }.forEach {
val parts = it.second.split(OPEN_ANGLE, CLOSE_ANGLE)
if (parts.size >= 2) {
managePages(accessToken, service.myApiCall(accessToken, parts[1]))
}
}
}
private suspend fun managePage(response: Response<List<MyPage>>) {
val pages = response.body()
pages?.let {
persistResponse(it)
}
}
private suspend fun persistResponse(myPage: List<MyPage>) {
val myPageDOs = ArrayList<MyPageDO>()
myPage.forEach { page ->
myPageDOs.add(page.mapDO())
}
database.myPageDAO().insertAsync(myPageDOs)
}
My numerous issues are:
This code does not insert all the data items that I retrieve.
How do I complete the flow when all data items have been retrieved?
How do I complete the GlobalScope job once all the data items have been retrieved and persisted?
UPDATE
By making the following changes I have managed to insert all the data
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val completed = CompletableDeferred<Int>()
private val channel = Channel<Response<List<MyPage>>?>(UNLIMITED)
private val channelFlow = channel.consumeAsFlow().flowOn(persistenceThreadPool)
private val frank: Job
init {
frank = GlobalScope.launch(persistenceThreadPool) {
channelFlow.collect { page ->
if (page == null) {
completed.complete(totalItems)
} else managePage(page!!)
}
}
}
...
...
...
channel.send(null)
completed.await()
return result ?: Result.success(outputData)
I do not like having to rely on a CompletableDeferred; is there a better approach than this to know when the Flow has completed everything?
You are looking for the flow builder and Flow.buffer():
fun getData(): Flow<Data> = flow {
var pageUrl: String? = "bla"
while (pageUrl != null) {
val pageData: List<Data> = TODO("fetch pageData from pageUrl and set pageUrl to the next page, or null when done")
emitAll(pageData.asFlow()) // emitAll expects a Flow, so convert the page list
}
}
.flowOn(Dispatchers.IO /* no need for a thread pool executor, IO does it automatically */)
.buffer(3)
You can use it just like a normal Flow, iterate, etc. If you want to know the total length of the output, you should calculate it on the consumer with a mutable closure variable. Note you shouldn't need to use GlobalScope anywhere (ideally ever).
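A minimal sketch of that consumer-side counting, assuming the flow is collected from a worker's doWork() (the persistItem helper and the "totalItems" output key are illustrative, not from the question):
override suspend fun doWork(): Result = withContext(Dispatchers.IO) {
    var totalItems = 0 // mutable closure variable holding the running count
    getData().collect { item ->
        persistItem(item) // illustrative; the question persists whole pages via persistResponse()
        totalItems++
    }
    // collect() returns only after the flow completes (the paging loop above finishes),
    // so no CompletableDeferred is needed to know when everything is done.
    Result.success(workDataOf("totalItems" to totalItems))
}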
There are a few ways to achieve the desired behaviour. I would suggest using coroutineScope, which is designed specifically for parallel decomposition. It also provides good cancellation and error-handling behaviour out of the box. In conjunction with Channel.close behaviour it makes the implementation pretty simple. Conceptually, the implementation may look like this:
suspend fun fetchAllPages() {
coroutineScope {
val channel = Channel<MyPage>(Channel.UNLIMITED)
launch(Dispatchers.IO){ loadData(channel) }
launch(Dispatchers.IO){ processData(channel) }
}
}
suspend fun loadData(sendChannel: SendChannel<MyPage>){
while(hasMoreData()){
sendChannel.send(loadPage())
}
sendChannel.close()
}
suspend fun processData(channel: ReceiveChannel<MyPage>){
for(page in channel){
// process page
}
}
It works in the following way:
coroutineScope suspends until all children are finished. So you don't need CompletableDeferred anymore.
loadData() loads pages in cycle and posts them into the channel. It closes the channel as soon as all pages have been loaded.
processData fetches items from the channel one by one and process them. The cycle will finish as soon as all the items have been processed (and the channel has been closed).
In this implementation the producer coroutine works independently, with no back-pressure, so it can take a lot of memory if the processing is slow. Limit the buffer capacity to have the producer coroutine suspend when the buffer is full.
It might also be a good idea to use the channel's fan-out behaviour to launch multiple processors and speed up the computation, as sketched below.
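A hedged sketch of combining both suggestions; the buffer capacity of 64 and the three processor coroutines are arbitrary choices, not from the answer above:
suspend fun fetchAllPages() {
    coroutineScope {
        // Bounded buffer: send() suspends once 64 pages are queued, giving back-pressure.
        val channel = Channel<MyPage>(capacity = 64)
        launch(Dispatchers.IO) { loadData(channel) }
        // Fan-out: each page is delivered to exactly one of the three processors.
        repeat(3) {
            launch(Dispatchers.IO) { processData(channel) }
        }
    }
}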

Android WorkManager observe progress

I'm using WorkManager for deferred work in my app.
The total work is divided into a number of chained workers, and I'm having trouble showing the workers' progress to the user (using progress bar).
I tried creating one tag and adding it to the different workers, and inside the workers updating the progress under that tag, but when I debug I always get a progress of '0'.
Another thing I noticed is that the workManager's list of work infos gets bigger each time I start the work (even after the workers have finished their work).
Here is my code:
//inside view model
private val workManager = WorkManager.getInstance(appContext)
internal val progressWorkInfoItems: LiveData<List<WorkInfo>>
init
{
progressWorkInfoItems = workManager.getWorkInfosByTagLiveData(TAG_SAVING_PROGRESS)
}
companion object
{
const val TAG_SAVING_PROGRESS = "saving_progress_tag"
}
//inside a method
var workContinuation = workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))
val secondWorkRequest = OneTimeWorkRequestBuilder<SecondWorker>()
secondWorkRequest.addTag(TAG_SAVING_PROGRESS)
secondWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(secondWorkRequest.build())
val thirdWorkRequest = OneTimeWorkRequestBuilder<ThirdWorker>()
thirdWorkRequest.addTag(TAG_SAVING_PROGRESS)
thirdWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(thirdWorkRequest.build())
workContinuation.enqueue()
//inside the Activity
viewModel.progressWorkInfoItems.observe(this, observeProgress())
private fun observeProgress(): Observer<List<WorkInfo>>
{
return Observer { listOfWorkInfo ->
if (listOfWorkInfo.isNullOrEmpty()) { return@Observer }
listOfWorkInfo.forEach { workInfo ->
if (WorkInfo.State.RUNNING == workInfo.state)
{
val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
progress_bar?.progress = progress
}
}
}
}
//inside the worker
override suspend fun doWork(): Result = withContext(Dispatchers.IO)
{
setProgress(workDataOf(TAG_SAVING_PROGRESS to 10f))
...
...
Result.success()
}
The setProgress method is meant for observing intermediate progress within a single Worker (as explained in the guide):
Progress information can only be observed and updated while the ListenableWorker is running.
For this reason, progress information is available only while a Worker is active (i.e. not in a terminal state like SUCCEEDED, FAILED or CANCELLED). This WorkManager guide covers the Worker's states.
My suggestion is to use the Worker's unique ID to identify which worker in your chain is not yet in a terminal state. You can use WorkRequest's getId method to retrieve its unique ID.
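A hedged sketch of what that could look like for one request in the chain; building the request up front so its ID is available, and converting the Float progress with toInt() because ProgressBar.progress is an Int (both details are assumptions on my part):
// Build the request first so its unique ID can be observed.
val secondRequest = OneTimeWorkRequestBuilder<SecondWorker>()
    .addTag(TAG_SAVING_PROGRESS)
    .setInputData(createData())
    .build()

workManager.getWorkInfoByIdLiveData(secondRequest.id).observe(this) { workInfo ->
    // Progress is only populated while the worker is RUNNING; terminal states carry no progress.
    if (workInfo?.state == WorkInfo.State.RUNNING) {
        val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
        progress_bar?.progress = progress.toInt()
    }
}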
Based on my analysis, there are two likely reasons why you always get 0:
setProgress is called just before Result.success() in the worker's doWork(), so the value is lost and your listener never sees it. This could be because the state of the worker is already SUCCEEDED by the time the observer runs.
the worker completes its work in a fraction of a second
Let's take a look at the following code:
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
setProgressAsync(Data.Builder().putInt("progress",10).build())
for (i in 1..5) {
SystemClock.sleep(1000)
}
setProgressAsync(Data.Builder().putInt("progress",50).build())
SystemClock.sleep(1000)
return Result.success()
}
}
In the above code:
if you remove only the first sleep call, then the listener only gets the progress 50
if you remove only the second sleep call, then the listener only gets the progress 10
if you remove both, then you get the default value 0
This analysis is based on WorkManager version 2.4.0.
Hence, I found the following approach to be more reliable for showing the progress of the various workers in your work chain.
I have two workers that need to run one after the other. When the first worker completes, 50% of the work is done, and 100% is done when the second worker completes.
Two workers
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
for (i in 1..5) {
Log.e("worker", "worker1----$i")
}
return Result.success(Data.Builder().putInt("progress",50).build())
}
}
class Worker2(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
for (i in 5..10) {
Log.e("worker", "worker1----$i")
}
return Result.success(Data.Builder().putInt("progress",100).build())
}
}
Inside the activity
workManager = WorkManager.getInstance(this)
workRequest1 = OneTimeWorkRequest.Builder(Worker1::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
workRequest2 = OneTimeWorkRequest.Builder(Worker2::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
findViewById<Button>(R.id.btn).setOnClickListener(View.OnClickListener { view ->
workManager?.
beginUniqueWork(TAG_SAVING_PROGRESS,ExistingWorkPolicy.REPLACE,workRequest1)
?.then(workRequest2)
?.enqueue()
})
progressBar = findViewById(R.id.progressBar)
workManager?.getWorkInfoByIdLiveData(workRequest1.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
workManager?.getWorkInfoByIdLiveData(workRequest2.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
The reason the workManager's list of work infos gets bigger each time the work is started, even after the workers have finished their work, is the use of
workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))
Instead, one needs to use
workManager?.beginUniqueWork(TAG_SAVING_PROGRESS, ExistingWorkPolicy.REPLACE,OneTimeWorkRequest.from(firstWorker::class.java))
You can read more about it here
