Let's say that I have a model
data class PendingFile(val segment: Int, val fileHash: String, val url: String)
When I have a list of PendingFiles, I want to download each file concurrently.
private suspend fun downloadLinks(pendingFiles: List<PendingFile>) {
    scope.launch {
        val deferredList = pendingFiles.map {
            async(Dispatchers.IO) {
                // runs in parallel on a background thread
                networkCallToGetData(it)
            }
        }
        // Wait until all requests are finished without blocking the current thread
        val listOfReturnData = deferredList.awaitAll()
        val (success, failed) = listOfReturnData.partition {
            // What should I put here??
        }
        if (failed.isNotEmpty()) {
            // Back off to half the size
            currentDownloadParts /= 2
        }
        if (success.isNotEmpty()) {
            // Continue, double the size
            currentDownloadParts *= 2
        }
    }
}
I want success and failure to be distinguished, and I also want the lists to contain the corresponding PendingFile models so I know which files succeeded and which failed. How can I do that?
You can improve the concurrent code using coroutineScope; see: https://kotlinlang.org/docs/composing-suspending-functions.html#structured-concurrency-with-async
To see what failed and what worked, you can use null as a fallback value in the list (or create a sealed class with Success/Failure values):
suspend fun downloadLinks(pendingFiles: List<PendingFile>) = coroutineScope {
    val deferredList = pendingFiles.map {
        async(Dispatchers.IO) {
            // runs in parallel on a background thread
            try {
                networkCallToGetData(it)
            } catch (e: Exception) { // might want to adjust this depending on your use case
                null // null here means failure; alternatively you could use a sealed class with success and failure
            }
        }
    }
    // Wait until all requests are finished without blocking the current thread
    val listOfReturnData = deferredList.awaitAll()
    val (success, failed) = listOfReturnData.partition {
        it != null
    }
    TODO() // rest of your code
}
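For completeness, here is a sketch of the sealed-class variant mentioned above, which also keeps the association with each PendingFile that the question asks for. DownloadResult and ResponseData are illustrative names, not from the original code:
// Sketch only: ResponseData stands in for whatever networkCallToGetData returns
sealed class DownloadResult {
    data class Success(val file: PendingFile, val data: ResponseData) : DownloadResult()
    data class Failure(val file: PendingFile, val error: Exception) : DownloadResult()
}

suspend fun downloadLinks(pendingFiles: List<PendingFile>) = coroutineScope {
    val results = pendingFiles.map { file ->
        async(Dispatchers.IO) {
            try {
                DownloadResult.Success(file, networkCallToGetData(file))
            } catch (e: Exception) {
                DownloadResult.Failure(file, e)
            }
        }
    }.awaitAll()

    val success = results.filterIsInstance<DownloadResult.Success>()
    val failed = results.filterIsInstance<DownloadResult.Failure>()
    // success.map { it.file } and failed.map { it.file } give you the PendingFile models,
    // so the back-off / double logic from the question can be applied here
}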
In all the cases where I have used coroutines so far, they have executed their "lines" synchronously, so I have been able to use the result of a variable in the next line of code.
I have an ImageRepository class that calls the server, gets a list of images and, once obtained, creates a JSON with the images and related information.
class ImageRepository {
    val API_IMAGES = "https://api.MY_API_IMAGES"

    suspend fun fetch(activity: AppCompatActivity) {
        activity.lifecycleScope.launch {
            val imagesResponse = withContext(Dispatchers.IO) {
                getRequest(API_IMAGES)
            }
            if (imagesResponse != null) {
                val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
            } else {
                // TODO Warning to user
                Log.e(TAG, "Error: Get request returned no response")
            }
            ... // All the rest of the code
        }
    }
}
Well, the suspend function executes synchronously as expected: it first makes the call to the server in getRequest and, when there is a response, composes the JSON. So far, so good.
And this is the call to the ImageRepository suspend function from my main activity:
lifecycleScope.launch {
    val result = withContext(Dispatchers.IO) { neoRepository.fetch(this@MainActivity) }
    Log.i(TAG, "After suspend fun")
}
The problem is that, as soon as it executes, it calls the suspend function and then immediately displays the log, with the result obviously empty. It doesn't wait for the suspend function to finish before displaying the log.
Why? What am I doing wrong?
I have tried the different Dispatchers, etc., but without success.
I appreciate any help.
Thanks and best regards.
It’s because you are launching another coroutine in parallel from inside your suspend function. Instead of launching another coroutine there, call the contents of that launch directly in your suspend function.
A suspend function is just like a regular function, it executes one instruction after another. The only difference is that it can be suspended, meaning the runtime environment can decide to halt / suspend execution to do other work and then resume execution later.
This is true unless you start an asynchronous operation, which is exactly what you should not be doing here. Your fetch operation should look like:
class ImageRepository {
    suspend fun fetch() {
        val imagesResponse = getRequest(API_IMAGES)
        if (imagesResponse != null) {
            val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
        } else {
            // TODO Warning to user
            Log.e(TAG, "Error: Get request returned no response")
        }
        ... // All the rest of the code
    }
}
-> just like a regular function. Of course you need to call it from a coroutine:
lifecycleScope.launch {
    val result = withContext(Dispatchers.IO) { neoRepository.fetch() }
    Log.i(TAG, "After suspend fun")
}
Google recommends injecting the dispatcher into the lower-level classes (https://developer.android.com/kotlin/coroutines/coroutines-best-practices), so ideally you'd do:
val neoRepository = ImageRepository(Dispatchers.IO)

lifecycleScope.launch {
    val result = neoRepository.fetch()
    Log.i(TAG, "After suspend fun")
}

class ImageRepository(private val dispatcher: CoroutineDispatcher) {
    suspend fun fetch() = withContext(dispatcher) {
        val imagesResponse = getRequest(API_IMAGES)
        if (imagesResponse != null) {
            val jsonWithImagesAndInfo = composeJsonWithImagesAndInfo(imagesResponse)
        } else {
            // TODO Warning to user
            Log.e(TAG, "Error: Get request returned no response")
        }
        ... // All the rest of the code
    }
}
I have a coroutine/flow problem that I'm trying to solve
I have this method getClosestRegion that's supposed to do the following:
Attempt to connect to every region
The first region to connect (I use launch to attempt to connect to all of them concurrently) should be returned, and the rest of the region requests should be cancelled
If all regions failed to connect, OR after a 30-second timeout, throw an exception
Here's what I currently have:
override suspend fun getClosestRegion(): Region {
    val regions = regionsRepository.getRegions()
    val firstSuccessResult = MutableSharedFlow<Region>(replay = 1)
    val scope = CoroutineScope(Dispatchers.IO)

    // Attempts to connect to every region until the first success
    scope.launch {
        regions.forEach { region ->
            launch {
                val retrofitClient = buildRetrofitClient(region.backendUrl)
                val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
                val response = regionAuthenticationAPI.canConnect()
                if (response.isSuccessful && scope.isActive) {
                    scope.cancel()
                    firstSuccessResult.emit(region)
                }
            }
        }
    }

    val result = withTimeoutOrNull(TimeUnit.SECONDS.toMillis(30)) { firstSuccessResult.first() }
    if (result != null)
        return result
    throw Exception("Failed to connect to any region")
}
Issues with current code:
If 1 region was successfully connected, I expect the rest of the requests to be cancelled (by scope.cancel()), but in reality other regions that successfully connected AFTER the first one are also emitting values to the flow (scope.isActive returns true)
I don't know how to handle the race between throwing an exception when all regions fail to connect and the 30-second timeout
Also, I'm pretty new to Kotlin Flow and coroutines, so I don't know if creating a flow is really necessary here
You don't need to create a CoroutineScope and manage it from within a coroutine. You can use the coroutineScope function instead.
I of course didn't test any of the below, so please excuse syntax errors and omitted <types> that the compiler can't infer.
Here's how you might do it using a select clause, but I think it's kind of awkward:
override suspend fun getClosestRegion(): Region = coroutineScope {
    val regions = regionsRepository.getRegions()
    val result = select<Region?> {
        onTimeout(30.seconds) { null }
        for (region in regions) {
            launch {
                val retrofitClient = buildRetrofitClient(region.backendUrl)
                val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
                val response = regionAuthenticationAPI.canConnect()
                if (!response.isSuccessful) {
                    delay(30.seconds) // prevent this one from being selected
                }
            }.onJoin { region }
        }
    }
    coroutineContext.cancelChildren() // Cancel any remaining async jobs
    requireNotNull(result) { "Failed to connect to any region" }
}
Here's how you could do it with channelFlow:
override suspend fun getClosestRegion(): Region = coroutineScope {
    val regions = regionsRepository.getRegions()
    val flow = channelFlow {
        for (region in regions) {
            launch {
                val retrofitClient = buildRetrofitClient(region.backendUrl)
                val regionAuthenticationAPI = retrofitClient.create(AuthenticationAPI::class.java)
                val result = regionAuthenticationAPI.canConnect()
                if (result.isSuccessful) {
                    send(region)
                }
            }
        }
    }
    val result = withTimeoutOrNull(30.seconds) {
        flow.firstOrNull()
    }
    coroutineContext.cancelChildren() // Cancel any remaining async jobs
    requireNotNull(result) { "Failed to connect to any region" }
}
I think your MutableSharedFlow technique could also work if you dropped the isActive check and used coroutineScope { } and cancelChildren() like I did above. But it seems awkward to create a shared flow that isn't shared by anything (it's only used by the same coroutine that created it).
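Concretely, that variant might look like this (untested, like the rest; buildRetrofitClient and AuthenticationAPI are from the question's code):
override suspend fun getClosestRegion(): Region = coroutineScope {
    val regions = regionsRepository.getRegions()
    val firstSuccessResult = MutableSharedFlow<Region>(replay = 1)
    for (region in regions) {
        launch(Dispatchers.IO) {
            val retrofitClient = buildRetrofitClient(region.backendUrl)
            val api = retrofitClient.create(AuthenticationAPI::class.java)
            if (api.canConnect().isSuccessful) {
                firstSuccessResult.emit(region)
            }
        }
    }
    val result = withTimeoutOrNull(30.seconds) { firstSuccessResult.first() }
    coroutineContext.cancelChildren() // cancel the remaining connection attempts
    requireNotNull(result) { "Failed to connect to any region" }
}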
If 1 region was successfully connected, I expect the rest of the requests to be cancelled (by scope.cancel()), but in reality other regions that successfully connected AFTER the first one are also emitting values to the flow (scope.isActive returns true)
To quote the documentation...
Coroutine cancellation is cooperative. A coroutine code has to cooperate to be cancellable.
Once your client call is initiated, you can't cancel it: the client has to be able to interrupt what it's doing, and that probably isn't happening inside Retrofit.
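For illustration, a loop only stops early if it reaches a suspension point or explicitly checks its job, roughly like this sketch (doBlockingStep is hypothetical):
suspend fun cooperativeWork() {
    repeat(1000) { i ->
        // throws CancellationException once the coroutine is cancelled;
        // without a check like this (or a suspension point such as delay),
        // the loop would run to completion even after cancellation
        coroutineContext.ensureActive()
        doBlockingStep(i) // hypothetical blocking, non-cancellable work
    }
}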
I'll presume that it's not a problem that you're sending more requests than you need - otherwise you won't be able to make simultaneous requests.
I don't know how to handle the race between throwing an exception when all regions fail to connect and the 30-second timeout
As I understand it, there are three situations:
There's one successful response - other responses should be ignored
All responses are unsuccessful - an error should be thrown
All responses take longer than 30 seconds - again, throw an error
Additionally I don't want to keep track of how many requests are active/failed/successful. That requires shared state, and is complicated and brittle. Instead, I want to use parent-child relationships to manage this.
Timeout
The timeout is already handled by withTimeoutOrNull() - easy enough!
First success
Selects could be useful here, and I see @Tenfour04 has provided that answer. I'll give an alternative.
Using suspendCancellableCoroutine() provides a way to:
return as soon as there's a success - resume(...)
throw an error when all requests fail - resumeWithException(...)
suspend fun getClosestRegion(
    regions: List<Region>
): Region = withTimeoutOrNull(10.seconds) {
    // don't give the supervisor a parent, because if one response is successful
    // the parent would await the cancellation of the other children
    val supervisorJob = SupervisorJob()
    // suspend the current coroutine. We'll use cont to continue when
    // there's a definite outcome
    suspendCancellableCoroutine<Region> { cont ->
        launch(supervisorJob) {
            regions
                .map { region ->
                    // note: use async instead of launch so we can do awaitAll()
                    // to track when all tasks have completed, but none have resumed
                    async(supervisorJob) {
                        coroutineContext.job.invokeOnCompletion {
                            log("cancelling async job for $region")
                        }
                        val retrofitClient = buildRetrofitClient(region)
                        val response = retrofitClient.connect()
                        // if there's a success, then try to complete the supervisor.
                        // complete() prevents multiple jobs from continuing the suspended
                        // coroutine
                        if (response.isSuccess && supervisorJob.complete()) {
                            log("got success for $region - resuming")
                            // happy flow - we can return
                            cont.resume(region)
                        }
                    }
                }.awaitAll()
            // uh-oh, nothing was a success
            if (supervisorJob.complete()) {
                log("no successful regions - throwing exception & resuming")
                cont.resumeWithException(Exception("no region response was successful"))
            }
        }
    }
} ?: error("Timeout error - unable to get region")
examples
all responses are successful
If all tasks are successful, then it takes the shortest amount of time to return
getClosestRegion(
    List(5) {
        Region("attempt1-region$it", success = true)
    }
)
...
log("result for all success: $regionSuccess, time $time")
got success for Region(name=attempt1-region1, success=true, delay=2s) - resuming
cancelling async job for Region(name=attempt1-region3, success=true, delay=2s)
result for all success: Region(name=attempt1-region1, success=true, delay=2s), time 2.131312600s
cancelling async job for Region(name=attempt1-region1, success=true, delay=2s)
all responses fail
When all responses fail, it should take only as long as the slowest response.
getClosestRegion(
    List(5) {
        Region("attempt2-region$it", success = false)
    }
)
...
log("failure: $allFailEx, time $time")
[DefaultDispatcher-worker-6 #all-fail#6] cancelling async job for Region(name=attempt2-region4, success=false, delay=1s)
[DefaultDispatcher-worker-4 #all-fail#4] cancelling async job for Region(name=attempt2-region2, success=false, delay=4s)
[DefaultDispatcher-worker-3 #all-fail#3] cancelling async job for Region(name=attempt2-region1, success=false, delay=4s)
[DefaultDispatcher-worker-6 #all-fail#5] cancelling async job for Region(name=attempt2-region3, success=false, delay=4s)
[DefaultDispatcher-worker-6 #all-fail#2] cancelling async job for Region(name=attempt2-region0, success=false, delay=5s)
[DefaultDispatcher-worker-6 #all-fail#1] no successful regions - throwing exception resuming
[DefaultDispatcher-worker-6 #all-fail#1] failure: java.lang.Exception: no region response was successful, time 5.225431500s
all responses timeout
And if all responses take longer than the timeout (I reduced it to 10 seconds in my example), then an exception will be thrown.
getClosestRegion(
    List(5) {
        Region("attempt3-region$it", false, 100.seconds)
    }
)
...
log("timeout: $timeoutEx, time $time")
[kotlinx.coroutines.DefaultExecutor] timeout: java.lang.IllegalStateException: Timeout error - unable to get region, time 10.070052700s
Full demo code
import kotlin.coroutines.*
import kotlin.random.*
import kotlin.time.Duration.Companion.seconds
import kotlin.time.*
import kotlinx.coroutines.*

suspend fun main() {
    System.getProperties().setProperty("kotlinx.coroutines.debug", "")

    withContext(CoroutineName("all-success")) {
        val (regionSuccess, time) = measureTimedValue {
            getClosestRegion(
                List(5) {
                    Region("attempt1-region$it", true)
                }
            )
        }
        log("result for all success: $regionSuccess, time $time")
    }

    log("\n------\n")

    withContext(CoroutineName("all-fail")) {
        val (allFailEx, time) = measureTimedValue {
            try {
                getClosestRegion(
                    List(5) {
                        Region("attempt2-region$it", false)
                    }
                )
            } catch (exception: Exception) {
                exception
            }
        }
        log("failure: $allFailEx, time $time")
    }

    log("\n------\n")

    withContext(CoroutineName("timeout")) {
        val (timeoutEx, time) = measureTimedValue {
            try {
                getClosestRegion(
                    List(5) {
                        Region("attempt3-region$it", false, 100.seconds)
                    }
                )
            } catch (exception: Exception) {
                exception
            }
        }
        log("timeout: $timeoutEx, time $time")
    }
}

suspend fun getClosestRegion(
    regions: List<Region>
): Region = withTimeoutOrNull(10.seconds) {
    val supervisorJob = SupervisorJob()
    suspendCancellableCoroutine<Region> { cont ->
        launch(supervisorJob) {
            regions
                .map { region ->
                    async(supervisorJob) {
                        coroutineContext.job.invokeOnCompletion {
                            log("cancelling async job for $region")
                        }
                        val retrofitClient = buildRetrofitClient(region)
                        val response = retrofitClient.connect()
                        if (response.isSuccess && supervisorJob.complete()) {
                            log("got success for $region - resuming")
                            cont.resume(region)
                        }
                    }
                }.awaitAll()
            // uh-oh, nothing was a success
            if (supervisorJob.complete()) {
                log("no successful regions - throwing exception resuming")
                cont.resumeWithException(Exception("no region response was successful"))
            }
        }
    }
} ?: error("Timeout error - unable to get region")

////////////////////////////////////////////////////////////////////////////////////////////////////

data class Region(
    val name: String,
    val success: Boolean,
    val delay: Duration = Random(name.hashCode()).nextInt(1..5).seconds,
) {
    val backendUrl = "http://localhost/$name"
}

fun buildRetrofitClient(region: Region) = RetrofitClient(region)

class RetrofitClient(private val region: Region) {
    suspend fun connect(): ClientResponse {
        delay(region.delay)
        return ClientResponse(region.backendUrl, region.success)
    }
}

data class ClientResponse(
    val url: String,
    val isSuccess: Boolean,
)

fun log(msg: String) = println("[${Thread.currentThread().name}] $msg")
I'm investigating the use of Kotlin Flow within my current Android application
My application retrieves its data from a remote server via Retrofit API calls.
Some of these APIs return 50,000 data items in 500-item pages.
Each API response contains an HTTP Link header with the next page's complete URL.
These calls can take up to 2 seconds to complete.
In an attempt to reduce the elapsed time, I have employed a Kotlin Flow to concurrently process each page of data while also making the next page's API call.
My flow is defined as follows:
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val internalWorkWorkState = MutableStateFlow<Response<List<MyPage>>?>(null)
private val workWorkState = internalWorkWorkState.asStateFlow()
private val myJob: Job

init {
    myJob = GlobalScope.launch(persistenceThreadPool) {
        workWorkState.collect { page ->
            if (page == null) {
            } else managePage(page!!)
        }
    }
}
My recursive function that fetches all pages is defined as follows:
private suspend fun managePages(accessToken: String, response: Response<List<MyPage>>) {
    when {
        result != null -> return
        response.isSuccessful -> internalWorkWorkState.emit(response)
        else -> {
            manageError(response.errorBody())
            result = Result.failure()
            return
        }
    }
    response.headers().filter { it.first == HTTP_HEADER_LINK && it.second.contains(REL_NEXT) }.forEach {
        val parts = it.second.split(OPEN_ANGLE, CLOSE_ANGLE)
        if (parts.size >= 2) {
            managePages(accessToken, service.myApiCall(accessToken, parts[1]))
        }
    }
}

private suspend fun managePage(response: Response<List<MyPage>>) {
    val pages = response.body()
    pages?.let {
        persistResponse(it)
    }
}

private suspend fun persistResponse(myPage: List<MyPage>) {
    val myPageDOs = ArrayList<MyPageDO>()
    myPage.forEach { page ->
        myPageDOs.add(page.mapDO())
    }
    database.myPageDAO().insertAsync(myPageDOs)
}
My numerous issues are:
This code does not insert all the data items that I retrieve
How do I complete the flow when all data items have been retrieved
How do I complete the GlobalScope job once all the data items have been retrieved and persisted
UPDATE
By making the following changes I have managed to insert all the data
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val completed = CompletableDeferred<Int>()
private val channel = Channel<Response<List<MyPage>>?>(UNLIMITED)
private val channelFlow = channel.consumeAsFlow().flowOn(persistenceThreadPool)
private val frank: Job

init {
    frank = GlobalScope.launch(persistenceThreadPool) {
        channelFlow.collect { page ->
            if (page == null) {
                completed.complete(totalItems)
            } else managePage(page!!)
        }
    }
}
...
channel.send(null)
completed.await()
return result ?: Result.success(outputData)
I do not like having to rely on a CompletableDeferred. Is there a better approach to know when the Flow has completed everything?
You are looking for the flow builder and Flow.buffer():
fun getData(): Flow<Data> = flow {
    var pageData: List<Data> = emptyList()
    var pageUrl: String? = "bla"
    while (pageUrl != null) {
        TODO("fetch pageData from pageUrl and change pageUrl to the next page")
        emitAll(pageData.asFlow()) // emitAll takes a Flow, so convert the list
    }
}
    .flowOn(Dispatchers.IO /* no need for a thread pool executor, IO does it automatically */)
    .buffer(3)
You can use it just like a normal Flow, iterate, etc. If you want to know the total length of the output, you should calculate it on the consumer with a mutable closure variable. Note you shouldn't need to use GlobalScope anywhere (ideally ever).
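A sketch of consuming the flow and counting the items with a mutable closure variable, as suggested above; scope, persist, and TAG are stand-in names, not from the original code:
scope.launch {
    var totalItems = 0
    getData().collect { item ->
        persist(item) // hypothetical per-item persistence
        totalItems++
    }
    // collect() returns only once the flow completes, i.e. the last page was fetched
    Log.d(TAG, "Persisted $totalItems items")
}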
There are a few ways to achieve the desired behaviour. I would suggest to use coroutineScope which is designed specifically for parallel decomposition. It also provides good cancellation and error handling behaviour out of the box. In conjunction with Channel.close behaviour it makes the implementation pretty simple. Conceptually the implementation may look like this:
suspend fun fetchAllPages() {
    coroutineScope {
        val channel = Channel<MyPage>(Channel.UNLIMITED)
        launch(Dispatchers.IO) { loadData(channel) }
        launch(Dispatchers.IO) { processData(channel) }
    }
}

suspend fun loadData(sendChannel: SendChannel<MyPage>) {
    while (hasMoreData()) {
        sendChannel.send(loadPage())
    }
    sendChannel.close()
}

suspend fun processData(channel: ReceiveChannel<MyPage>) {
    for (page in channel) {
        // process page
    }
}
It works in the following way:
coroutineScope suspends until all children are finished. So you don't need CompletableDeferred anymore.
loadData() loads pages in cycle and posts them into the channel. It closes the channel as soon as all pages have been loaded.
processData fetches items from the channel one by one and process them. The cycle will finish as soon as all the items have been processed (and the channel has been closed).
In this implementation the producer coroutine works independently, with no back-pressure, so it can take a lot of memory if the processing is slow. Limit the buffer capacity to have the producer coroutine suspend when the buffer is full.
It might also be a good idea to use the channel's fan-out behaviour to launch multiple processors and speed up the computation.
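For example, a fan-out variant might look like this sketch (the capacity of 64 and the three consumers are arbitrary choices):
suspend fun fetchAllPages() {
    coroutineScope {
        // bounded capacity: the producer suspends when the buffer is full (back-pressure)
        val channel = Channel<MyPage>(capacity = 64)
        launch(Dispatchers.IO) { loadData(channel) }
        // fan-out: each page is received by exactly one of the competing processors
        repeat(3) {
            launch(Dispatchers.IO) { processData(channel) }
        }
    }
}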
I am making a network repository that supports multiple data retrieval configs, therefore I want to separate those configs' logic into functions.
However, I have a config that fetches the data continuously at specified intervals. Everything is fine when I emit those values to the original Flow. But when I move the logic into another function and return another Flow from it, it stops caring about its coroutine scope. Even after the scope's cancellation, it keeps on fetching the data.
TLDR: Suspend function returning a flow runs forever when currentCoroutineContext is used to control its loop's termination.
What am I doing wrong here?
Here's the simplified version of my code:
The Fragment calls the ViewModel's function, which basically calls getData():
lifecycleScope.launch {
    viewModel.getLatestDataList()
}
Repository
suspend fun getData(config: MyConfig): Flow<List<Data>>
{
    return flow {
        when (config)
        {
            CONTINUOUS ->
            {
                // It worked fine when fetchContinuously was inlined here and emitted directly to the current flow
                // And now it keeps on running eternally
                fetchContinuously().collect { updatedList ->
                    emit(updatedList)
                }
            }
        }
    }
}

// Note: the logic of this function is greatly reduced to keep the focus on the problem
private suspend fun fetchContinuously(): Flow<List<Data>>
{
    return flow {
        while (currentCoroutineContext().isActive)
        {
            val updatedList = fetchDataListOverNetwork().await()
            if (updatedList != null)
            {
                emit(updatedList)
            }
            delay(refreshIntervalInMs)
        }
        Timber.i("Context is no longer active - terminating the continuous-fetch coroutine")
    }
}
private suspend fun fetchDataListOverNetwork(): Deferred<List<Data>?> =
    withContext(Dispatchers.IO) {
        return@withContext async {
            var list: List<Data>? = null
            try
            {
                val response = apiService.getDataList().execute()
                if (response.isSuccessful && response.body() != null)
                {
                    list = response.body()!!.list
                }
                else
                {
                    Timber.w("Failed to fetch data from the network database. Error body: ${response.errorBody()}, Response body: ${response.body()}")
                }
            }
            catch (e: Exception)
            {
                Timber.w("Exception while trying to fetch data from the network database. Stacktrace: ${e.printStackTrace()}")
            }
            finally
            {
                return@async list
            }
            list // IDE is not smart enough to realize we are already returning no matter what inside the finally block; therefore, this needs to stay here
        }
    }
I am not sure whether this is a solution to your problem, but you do not need to have a suspending function that returns a Flow. The lambda you are passing is a suspending function itself:
fun <T> flow(block: suspend FlowCollector<T>.() -> Unit): Flow<T> (source)
Here is an example I am using of a flow that repeats a (GraphQL) query (simplified, without type parameters):
override fun query(query: Query,
                   updateIntervalMillis: Long): Flow<Result<T>> {
    return flow {
        // this ensures at least one query
        val result: Result<T> = execute(query)
        emit(result)
        while (coroutineContext[Job]?.isActive == true && updateIntervalMillis > 0) {
            delay(updateIntervalMillis)
            val otherResult: Result<T> = execute(query)
            emit(otherResult)
        }
    }
}
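Applying that idea to the question's code, fetchContinuously can drop the suspend modifier and return the flow directly; a sketch, reusing the question's fetchDataListOverNetwork and refreshIntervalInMs:
// Sketch: a plain (non-suspend) function returning a cold Flow.
// The flow body only runs when a collector collects it, and the isActive
// check makes the loop stop when that collector's scope is cancelled.
private fun fetchContinuously(): Flow<List<Data>> = flow {
    while (currentCoroutineContext().isActive) {
        fetchDataListOverNetwork().await()?.let { emit(it) }
        delay(refreshIntervalInMs)
    }
}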
I'm not that good at Flow but I think the problem is that you are delaying only the getData() flow instead of delaying both of them.
Try adding this:
suspend fun getData(config: MyConfig): Flow<List<Data>>
{
    return flow {
        when (config)
        {
            CONTINUOUS ->
            {
                fetchContinuously().collect { updatedList ->
                    emit(updatedList)
                    delay(refreshIntervalInMs)
                }
            }
        }
    }
}
Take note of the delay(refreshIntervalInMs).
I'm using WorkManager for deferred work in my app.
The total work is divided into a number of chained workers, and I'm having trouble showing the workers' progress to the user (using a progress bar).
I tried creating one tag, adding it to the different workers, and updating the progress by that tag inside the workers, but when I debug I always get a progress of '0'.
Another thing I noticed is that the workManager's list of work infos gets bigger each time I start the work (even if the workers have finished their work).
Here is my code:
//inside view model
private val workManager = WorkManager.getInstance(appContext)
internal val progressWorkInfoItems: LiveData<List<WorkInfo>>

init
{
    progressWorkInfoItems = workManager.getWorkInfosByTagLiveData(TAG_SAVING_PROGRESS)
}

companion object
{
    const val TAG_SAVING_PROGRESS = "saving_progress_tag"
}

//inside a method
var workContinuation = workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))

val secondWorkRequest = OneTimeWorkRequestBuilder<SecondWorker>()
secondWorkRequest.addTag(TAG_SAVING_PROGRESS)
secondWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(secondWorkRequest.build())

val thirdWorkRequest = OneTimeWorkRequestBuilder<ThirdWorker>()
thirdWorkRequest.addTag(TAG_SAVING_PROGRESS)
thirdWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(thirdWorkRequest.build())

workContinuation.enqueue()

//inside the Activity
viewModel.progressWorkInfoItems.observe(this, observeProgress())

private fun observeProgress(): Observer<List<WorkInfo>>
{
    return Observer { listOfWorkInfo ->
        if (listOfWorkInfo.isNullOrEmpty()) { return@Observer }
        listOfWorkInfo.forEach { workInfo ->
            if (WorkInfo.State.RUNNING == workInfo.state)
            {
                val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
                progress_bar?.progress = progress
            }
        }
    }
}

//inside the worker
override suspend fun doWork(): Result = withContext(Dispatchers.IO)
{
    setProgress(workDataOf(TAG_SAVING_PROGRESS to 10f))
    ...
    Result.success()
}
The setProgress method is for observing intermediate progress in a single Worker (as explained in the guide):
Progress information can only be observed and updated while the ListenableWorker is running.
For this reason, the progress information is available only while a Worker is active (i.e. it is not in a terminal state like SUCCEEDED, FAILED or CANCELLED). This WorkManager guide covers Worker's states.
My suggestion is to use the Worker's unique ID to identify which worker in your chain is not yet in a terminal state. You can use WorkRequest's getId method to retrieve its unique ID.
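A minimal sketch of that suggestion; the request setup is illustrative, and the observer body mirrors the question's code:
val request = OneTimeWorkRequestBuilder<SecondWorker>()
    .addTag(TAG_SAVING_PROGRESS)
    .build()

// observe this single request by its unique ID and read progress only while
// the worker is not yet in a terminal state
workManager.getWorkInfoByIdLiveData(request.id)
    .observe(this, Observer { workInfo: WorkInfo? ->
        if (workInfo != null && !workInfo.state.isFinished) {
            val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
            progress_bar?.progress = progress.toInt()
        }
    })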
According to my analysis, there might be two reasons why you always get 0:
setProgress is called just before Result.success() in the worker's doWork(), so the value is lost and your listener never receives it. This can happen because the state of the worker is already SUCCEEDED
the worker completes its work in a fraction of a second
Let's take a look at the following code:
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context, workerParameters) {
    override fun doWork(): Result {
        setProgressAsync(Data.Builder().putInt("progress", 10).build())
        for (i in 1..5) {
            SystemClock.sleep(1000)
        }
        setProgressAsync(Data.Builder().putInt("progress", 50).build())
        SystemClock.sleep(1000)
        return Result.success()
    }
}
In the above code:
if you remove only the first sleep, then the listener only gets the progress 50
if you remove only the second sleep, then the listener only gets the progress 10
if you remove both, then you get the default value 0
This analysis is based on WorkManager version 2.4.0.
Hence I found the following way to be more reliable for showing the progress of the various workers in your work chain.
I have two workers that need to run one after the other. When the first worker completes, 50% of the work is done, and 100% is done when the second worker completes.
Two workers
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context, workerParameters) {
    override fun doWork(): Result {
        for (i in 1..5) {
            Log.e("worker", "worker1----$i")
        }
        return Result.success(Data.Builder().putInt("progress", 50).build())
    }
}

class Worker2(context: Context, workerParameters: WorkerParameters) : Worker(context, workerParameters) {
    override fun doWork(): Result {
        for (i in 5..10) {
            Log.e("worker", "worker2----$i")
        }
        return Result.success(Data.Builder().putInt("progress", 100).build())
    }
}
Inside the activity
workManager = WorkManager.getInstance(this)
workRequest1 = OneTimeWorkRequest.Builder(Worker1::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
workRequest2 = OneTimeWorkRequest.Builder(Worker2::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
findViewById<Button>(R.id.btn).setOnClickListener(View.OnClickListener { view ->
workManager?.
beginUniqueWork(TAG_SAVING_PROGRESS,ExistingWorkPolicy.REPLACE,workRequest1)
?.then(workRequest2)
?.enqueue()
})
progressBar = findViewById(R.id.progressBar)
workManager?.getWorkInfoByIdLiveData(workRequest1.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
workManager?.getWorkInfoByIdLiveData(workRequest2.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
The reason the workManager's list of work infos gets bigger each time the work is started (even if the workers finished their work) is the use of
workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))
Instead, one needs to use
workManager?.beginUniqueWork(TAG_SAVING_PROGRESS, ExistingWorkPolicy.REPLACE, OneTimeWorkRequest.from(firstWorker::class.java))
You can read more about it here