EDIT (TL;DR)
I didn't realize there was more than one constructor for periodic work requests. The clue that resolved my confusion was in the comments on the accepted answer.
Background
I have a few special cases I am trying to solve for when scheduling work. One of them involves doing work immediately and then creating a periodic work request. I found this in Android's PeriodicWorkRequest documentation:
This work executes multiple times until it is cancelled, with the first execution happening immediately or as soon as the given Constraints are met.
I figured that this meant work would execute upon creating a request. However, this was not what happened in my test implementation. (For this work there is no need for a CoroutineWorker or a network connection constraint, but both are applicable to my business need, so I am testing with them.)
Starting Worker
object WorkerManager {
private val TAG = "WORKER_MANAGER_TEST"
fun buildWorkRequest(
startingNumber: Int,
context: Context
) {
val constraints =
Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
val workRequest = PeriodicWorkRequest.Builder(
PeriodicWorker::class.java,
1,
TimeUnit.HOURS,
15,
TimeUnit.MINUTES
)
.setInputData(
workDataOf(Constants.INPUT_DATA_NUMBER to startingNumber)
)
.addTag(Constants.PERIODIC_WORKER_TAG)
.setConstraints(constraints)
.build()
WorkManager.getInstance(context).enqueueUniquePeriodicWork(
Constants.PERIODIC_WORKER_NAME,
ExistingPeriodicWorkPolicy.REPLACE,
workRequest
)
Log.d(TAG, "Worker started. Starting number: $startingNumber")
}
}
Worker:
class PeriodicWorker(context: Context, workerParams: WorkerParameters) :
    CoroutineWorker(context, workerParams) {
companion object {
var isInit = false
var count: Int = 1
}
override suspend fun doWork(): Result = try {
if (!isInit) {
count = inputData.getInt(Constants.INPUT_DATA_NUMBER, Constants.DEFAULT_DATA_NUMBER)
isInit = true
} else {
count += 1
}
Repository.updateNumber(count)
Result.success()
} catch (exception: Exception) {
Result.failure()
}
}
Repo:
object Repository {
private val TAG = "REPOSITORY_TAG"
private val _number = MutableStateFlow(0)
val number: StateFlow<Int> = _number
suspend fun updateNumber(number: Int) {
Log.d(TAG, "Number updated to: $number")
_number.emit(number)
}
}
ViewModel:
class NumberViewModel : ViewModel() {
private val _count = MutableLiveData(0)
val count: LiveData<Int> = _count
init {
viewModelScope.launch {
Repository.number.collect {
_count.postValue(it)
}
}
}
}
Results
I started a worker with 10 as the starting number.
Logs:
8:45am - Worker started. Starting number: 10
9:37am - Number updated to: 10 // work executed
10:37am - Number updated to: 11 // work executed
11:37am - Number updated to: 12 // work executed
Device Info
Android 9 (API level 28) -- Samsung SM-T390
My Conclusion
Constraints -
Cannot be an issue. I had network connection during the above test and that is the only given constraint.
Battery Optimizations -
I am sure that this app was whitelisted prior to running this test.
So, in conclusion, it seems that a PeriodicWorkRequest DOES NOT perform immediate work. The Android documentation should instead say:
This work executes multiple times until it is cancelled, with the first period beginning immediately. The first work execution then happens within the first flex interval given the constraints are met.
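To make the timing concrete, here is how my observed log lines up against that reading (a sketch using the 1-hour period and 15-minute flex from my request):
// period start (enqueue):    8:45
// flex window (last 15 min): 9:30 - 9:45
// observed first execution:  9:37 -> inside the flex window, not immediate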
Question
Does my conclusion seem reasonable? Is there something I haven't considered?
You are overthinking it. Just dump the JobScheduler state:
https://developer.android.com/topic/libraries/architecture/workmanager/how-to/debugging
Use adb shell dumpsys jobscheduler
And check which constraints are unsatisfied in the dump:
Required constraints: TIMING_DELAY CONNECTIVITY [0x90000000]
Satisfied constraints: DEVICE_NOT_DOZING BACKGROUND_NOT_RESTRICTED WITHIN_QUOTA [0x3400000]
Unsatisfied constraints: TIMING_DELAY CONNECTIVITY [0x90000000]
Also:
Minimum latency: +1h29m59s687ms
As I understand this constructor:
PeriodicWorkRequest.Builder(Class<? extends ListenableWorker> workerClass,
long repeatInterval, TimeUnit repeatIntervalTimeUnit,
long flexInterval, TimeUnit flexIntervalTimeUnit)
means that your work will be executed inside the flexInterval at the end of each repeatInterval.
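To make that concrete, here is a minimal sketch of the two builder overloads (reusing the PeriodicWorker class from the question; the interval values are only examples):
// Without flex: one run per repeatInterval, scheduled anywhere in the period.
val plainRequest = PeriodicWorkRequest.Builder(
    PeriodicWorker::class.java, 1, TimeUnit.HOURS
).build()

// With flex: runs only inside the last 15 minutes of each 1-hour period,
// so the first execution lands roughly 45-60 minutes after enqueueing.
val flexRequest = PeriodicWorkRequest.Builder(
    PeriodicWorker::class.java,
    1, TimeUnit.HOURS,
    15, TimeUnit.MINUTES
).build()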
Related
I have an Android app that uses Couchbase Lite. I'm trying to save a document and get the acknowledgement using a coroutine channel; the reason I use a channel is to make sure every operation is done on the same scope.
Here is my attempt, based on the selected answer here:
How to properly have a queue of pending operations using Kotlin Coroutines?
object DatabaseQueue {
private val scope = CoroutineScope(Dispatchers.IO)
private val queue = Channel<Job>(Channel.UNLIMITED)
init {
scope.launch(Dispatchers.Default) {
for (job in queue) job.join()
}
}
fun submit(
context: CoroutineContext = EmptyCoroutineContext,
block: suspend CoroutineScope.() -> Unit
) {
val job = scope.launch(context, CoroutineStart.LAZY, block)
queue.trySendBlocking(job)
}
fun submitAsync(
context: CoroutineContext = EmptyCoroutineContext,
id: String,
database: Database
): Deferred<Document?> {
val job = scope.async(context, CoroutineStart.LAZY) {
database.getDocument(id)
}
queue.trySendBlocking(job)
return job
}
fun cancel() {
queue.cancel()
scope.cancel()
}
}
fun Database.saveDocument(document: MutableDocument) {
DatabaseQueue.submit {
Timber.tag("quechk").d("saving :: ${document.id}")
this#saveDocument.save(document)
}
}
fun Database.getDocumentQ(id: String): Document? {
return runBlocking {
DatabaseQueue.submitAsync(id = id, database = this#getDocumentQ).also {
Timber.tag("quechk").d("getting :: $id")
}.await()
}
}
My issue here is that when I have many DB operations to write and read, the reads complete faster than the writes, which gives me null results. So, what I need to know is:
Is this the best way to do it, or is there a more optimal solution?
How can I process the job and return the result from the channel in order to avoid the null result?
By modifying the original solution you actually made it work improperly. The whole idea was to create an inactive coroutine for each submitted block of code and then start executing these coroutines one by one. In your case you exposed a Deferred to a caller, so the caller is able to start executing a coroutine and as a result, coroutines no longer run sequentially, but concurrently.
The easiest way to fix this while keeping almost the same code would be to introduce another Deferred, which is not directly tied to the queued coroutine:
fun submitAsync(
context: CoroutineContext = EmptyCoroutineContext,
id: String,
database: Database
): Deferred<Document?> {
val ret = CompletableDeferred<Document?>()
val job = scope.launch(context, CoroutineStart.LAZY) {
ret.completeWith(runCatching { database.getDocument(id) })
}
queue.trySendBlocking(job)
return ret
}
However, depending on your case it may be overkill. For example, if you don't need to guarantee strict FIFO ordering, a simple Mutex would be enough. Also, please note that the classic approach of returning futures/deferreds only to await on them is an anti-pattern in coroutines. We should simply use a suspend function and call it directly.
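For illustration, a minimal sketch of that suspend-function approach, using the Couchbase Lite Database/Document types from the question (the Mutex serializes access without exposing any Deferred):
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

object DatabaseQueue {
    private val mutex = Mutex()

    // Callers simply suspend until it is their turn; no Deferred is exposed.
    suspend fun getDocument(database: Database, id: String): Document? =
        mutex.withLock { database.getDocument(id) }

    suspend fun saveDocument(database: Database, document: MutableDocument) =
        mutex.withLock { database.save(document) }
}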
I am using WorkManager to execute a task with a period of one minute, but it only gets executed the first time. From my point of view it should execute every minute.
I am testing on a device while the app is in the foreground and the power is on.
Code:
class MainActivity : AppCompatActivity() {
val TAG: String = "MainActivity"
lateinit var workLiveData: LiveData<List<WorkInfo>>
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
initWM()
}
private fun initWM() {
val request = PeriodicWorkRequestBuilder<DemoWorker>(1, TimeUnit.MINUTES)
.addTag(TAG)
.build()
WorkManager.getInstance(this).enqueueUniquePeriodicWork(TAG,
ExistingPeriodicWorkPolicy.REPLACE, request)
}
}
DemoWorker:
class DemoWorker(
context: Context,
params: WorkerParameters
) : Worker(context, params) {
val TAG: String = "MainActivity"
override fun doWork(): Result {
Log.d(TAG, "doWork: ")
return try {
Result.success(workDataOf("KEY" to "SUCCESS"))
} catch (e: Exception) {
Result.failure()
}
}
}
A reminder about the "minimum interval": WorkManager balances two competing requirements: the application with its WorkRequest, and the Android operating system with its need to limit battery consumption. For this reason, even if all the constraints set on a WorkRequest are satisfied, your work can still run with some additional delay. Note also that the minimum repeat interval for periodic work is 15 minutes, so a 1-minute period cannot be honoured as written.
On top of that, you are replacing one work request after another with ExistingPeriodicWorkPolicy.REPLACE, so the OS may not have time to execute the work. The best option would be to try with a 1-hour delay.
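To illustrate, a minimal sketch of the shortest periodic request WorkManager will honour, reusing the DemoWorker from the question:
// 15 minutes is the minimum period (PeriodicWorkRequest.MIN_PERIODIC_INTERVAL_MILLIS);
// a shorter request is silently raised to it.
val request = PeriodicWorkRequestBuilder<DemoWorker>(15, TimeUnit.MINUTES)
    .addTag(TAG)
    .build()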
You can use a flexInterval. Let's look at an example. Imagine you want to build a periodic work request with a 30-minute period. You can specify a flexInterval smaller than this period, say a 15-minute flexInterval.
The actual code to build a PeriodicWorkRequest with these parameters is:
val logBuilder = PeriodicWorkRequestBuilder<MyWorker>(
30, TimeUnit.MINUTES,
15, TimeUnit.MINUTES)
The result is that our worker will be executed in the second half of the period (the flexInterval is always positioned at the end of the repetition period).
I have a connection to a Bluetooth device that emits data every 250ms.
In my ViewModel I wish to subscribe to that data, run some suspending code (which takes approximately 1000ms to run) and then present the result.
The following is a simple example of what I'm trying to do:
Repository:
class Repo() : CoroutineScope {
private val supervisor = SupervisorJob()
override val coroutineContext: CoroutineContext = supervisor + Dispatchers.Default
private val _dataFlow = MutableSharedFlow<Int>()
private var dataJob: Job? = null
val dataFlow: Flow<Int> = _dataFlow
init {
launch {
var counter = 0
while (true) {
counter++
Log.d("Repo", "emmitting $counter")
_dataFlow.emit(counter)
delay(250)
}
}
}
}
ViewModel:
class VM(app: Application) : AndroidViewModel(app) {
    private val repo = Repo() // assumed: the repository instance referenced below
    private val _reading = MutableLiveData<String>()
    val latestReading: LiveData<String> = _reading
init {
viewModelScope.launch(Dispatchers.Main) {
repo.dataFlow
            .map {
                validateData(it) // this is where some validation happens; it is very fast
            }
            .flowOn(Dispatchers.Default)
            .onEach {
                delay(1000) // this is to simulate the work that is done
            }
            .flowOn(Dispatchers.IO)
            .map {
                transformData(it) // this will transform the data to be human readable
            }
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
}
}
As you can see, when data comes in, first I validate it to make sure it is not corrupt (on the Default dispatcher), then I perform some operation on it (saving and running a long algorithm that takes time, on the IO dispatcher), then I transform it so the application user can understand it (switching back to the Default dispatcher), and then I post it to mutable live data so that a subscriber from the UI layer can see the current data (on the Main dispatcher).
I have two questions
a) If validateData fails how can I cancel the current emission and move on to the next one?
b) Is there a way for the dataFlow subscriber working on the viewModel to generate new threads so the delay parts can run in parallel?
the timeline right now looks like the first part, but I want it to run like the second one
Is there a way to do this?
I've tried using buffer(), which as the documentation states "Buffers flow emissions via channel of a specified capacity and runs collector in a separate coroutine." But when I set it to BufferOverflow.SUSPEND I get the behaviour of the first part, and when I set it to BufferOverflow.DROP_OLDEST or BufferOverflow.DROP_LATEST I lose emissions.
I have also tried using .conflate() like so:
repo.dataFlow
.conflate()
.map { ....
and even though the emissions start one after the other, the part with the delay still waits for the previous one to finish before starting the next one.
When I use .flowOn(Dispatchers.Default) for that part I lose emissions, and when I use .flowOn(Dispatchers.IO) or something like Executors.newFixedThreadPool(4).asCoroutineDispatcher() they always wait for the previous one to finish before starting a new one.
Edit 2:
After about 3 hours of experiments this seems to work
viewModelScope.launch(Dispatchers.Default) {
repo.dataFlow
.map {
validateData(it)
}
.flowOn(Dispatchers.Default)
.map {
async {
delay(1000)
it
}
}
.flowOn(Dispatchers.IO) // NOTE (A)
.map {
val result = it.await()
transformData(result)
}
.flowOn(Dispatchers.Default)
.collect {
_reading.postValue(it)
}
}
However, I still haven't figured out how to cancel the emission if validateData fails.
And for some reason it only works if I use Dispatchers.IO, Executors.newFixedThreadPool(20).asCoroutineDispatcher() or Dispatchers.Unconfined where I put note (A); Dispatchers.Main does not seem to work (which I expected), but Dispatchers.Default also does not seem to work and I don't know why.
First question: You cannot recover from an exception in the sense of continuing the collection of the flow; as per the docs, "Flow collection can complete with an exception when an emitter or code inside the operators throw an exception." Therefore, once an exception has been thrown, the collection is completed (exceptionally). You can, however, handle the exception by either wrapping your collection inside a try/catch block or using the catch() operator.
Second question: You cannot. While the producer (emitting side) can be made concurrent by using the buffer() operator, collection is always sequential.
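A minimal sketch of what buffer() changes, using the dataFlow from the question (process() is a hypothetical stand-in for the slow work):
repo.dataFlow
    .buffer(64)        // the producer may run ahead instead of suspending
    .collect { value ->
        process(value) // values are still handled strictly one at a time here
    }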
As per your diagram, you need fan-out (one producer, many consumers); you cannot achieve that with flows. Flows are cold: each time you collect from them, they start emitting from the beginning.
Fan-out can be achieved using channels, where you can have one coroutine producing values and many coroutines that consume those values.
Edit: Oh, you meant that the validation failed, not the function itself; in that case you can use the filter() operator.
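A minimal sketch of that, assuming a hypothetical isValid() predicate in place of the question's validateData():
repo.dataFlow
    .filter { isValid(it) }    // silently drops readings that fail validation
    .map { transformData(it) }
    .collect { _reading.postValue(it) }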
BroadcastChannel and ConflatedBroadcastChannel are being deprecated. SharedFlow cannot help you in your use case, as it emits values in a broadcast fashion: the producer waits until all consumers consume each value before producing the next one. That is still sequential; you need parallelism. You can achieve it using the produce() channel builder.
A simple example:
val scope = CoroutineScope(Job() + Dispatchers.IO)
val producer: ReceiveChannel<Int> = scope.produce {
var counter = 0
val startTime = System.currentTimeMillis()
while (isActive) {
counter++
send(counter)
println("producer produced $counter at ${System.currentTimeMillis() - startTime} ms from the beginning")
delay(250)
}
}
val consumerOne = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerOne consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerTwo = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerTwo consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
val consumerThree = scope.launch {
val startTime = System.currentTimeMillis()
for (x in producer) {
println("consumerThree consumd $x at ${System.currentTimeMillis() - startTime}ms from the beginning.")
delay(1000)
}
}
Observe production and consumption times.
I'm using WorkManager for deferred work in my app.
The total work is divided into a number of chained workers, and I'm having trouble showing the workers' progress to the user (using progress bar).
I tried creating one tag and adding it to the different workers, and inside the workers updating the progress by that tag, but when I debug I always get a progress of '0'.
Another thing I noticed is that the WorkManager's list of work infos gets bigger each time I start the work (even if the workers have finished their work).
Here is my code:
//inside view model
private val workManager = WorkManager.getInstance(appContext)
internal val progressWorkInfoItems: LiveData<List<WorkInfo>>
init
{
progressWorkInfoItems = workManager.getWorkInfosByTagLiveData(TAG_SAVING_PROGRESS)
}
companion object
{
const val TAG_SAVING_PROGRESS = "saving_progress_tag"
}
//inside a method
var workContinuation = workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))
val secondWorkRequest = OneTimeWorkRequestBuilder<SecondWorker>()
secondWorkRequest.addTag(TAG_SAVING_PROGRESS)
secondWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(secondWorkRequest.build())
val thirdWorkRequest = OneTimeWorkRequestBuilder<ThirdWorker>()
thirdWorkRequest.addTag(TAG_SAVING_PROGRESS)
thirdWorkRequest.setInputData(createData())
workContinuation = workContinuation.then(thirdWorkRequest.build())
workContinuation.enqueue()
//inside the Activity
viewModel.progressWorkInfoItems.observe(this, observeProgress())
private fun observeProgress(): Observer<List<WorkInfo>>
{
return Observer { listOfWorkInfo ->
if (listOfWorkInfo.isNullOrEmpty()) { return@Observer }
listOfWorkInfo.forEach { workInfo ->
if (WorkInfo.State.RUNNING == workInfo.state)
{
val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
progress_bar?.progress = progress.toInt() // ProgressBar.progress is an Int
}
}
}
}
//inside the worker
override suspend fun doWork(): Result = withContext(Dispatchers.IO)
{
setProgress(workDataOf(TAG_SAVING_PROGRESS to 10f))
...
...
Result.success()
}
The setProgress method is for observing intermediate progress within a single Worker (as explained in the guide):
Progress information can only be observed and updated while the ListenableWorker is running.
For this reason, the progress information is available only while a Worker is active (i.e. not in a terminal state like SUCCEEDED, FAILED or CANCELLED). This WorkManager guide covers the Worker's states.
My suggestion is to use the Worker's unique ID to identify which worker in your chain is not yet in a terminal state. You can use WorkRequest's getId method to retrieve its unique ID.
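For example, a minimal sketch of observing one request by ID, assuming workRequest is one of the requests built in the question (progress is only populated while that worker is RUNNING):
workManager.getWorkInfoByIdLiveData(workRequest.id)
    .observe(this) { workInfo ->
        if (workInfo != null && workInfo.state == WorkInfo.State.RUNNING) {
            val progress = workInfo.progress.getFloat(TAG_SAVING_PROGRESS, 0f)
            progress_bar?.progress = progress.toInt()
        }
    }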
According to my analysis, there might be two reasons why you always get 0:
setProgress is called just before Result.success() in the worker's doWork(); the value is lost and your listener never receives it, because the state of the worker is by then SUCCEEDED.
The worker completes its work in a fraction of a second.
Let's take a look at the following code:
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
setProgressAsync(Data.Builder().putInt("progress",10).build())
for (i in 1..5) {
SystemClock.sleep(1000)
}
setProgressAsync(Data.Builder().putInt("progress",50).build())
SystemClock.sleep(1000)
return Result.success()
}
}
In the above code:
If you remove only the first sleep call, the listener only gets the progress 50.
If you remove only the second sleep call, the listener only gets the progress 10.
If you remove both, you get the default value 0.
This analysis is based on WorkManager version 2.4.0.
Hence I found that the following way is better and more reliable for showing the progress of the various workers in your work chain.
I have two workers that need to run one after the other. If the first work is completed then 50% of the work is done, and 100% is done when the second work is completed.
Two workers
class Worker1(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
for (i in 1..5) {
Log.e("worker", "worker1----$i")
}
return Result.success(Data.Builder().putInt("progress",50).build())
}
}
class Worker2(context: Context, workerParameters: WorkerParameters) : Worker(context,workerParameters) {
override fun doWork(): Result {
for (i in 5..10) {
Log.e("worker", "worker1----$i")
}
return Result.success(Data.Builder().putInt("progress",100).build())
}
}
Inside the activity
workManager = WorkManager.getInstance(this)
workRequest1 = OneTimeWorkRequest.Builder(Worker1::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
workRequest2 = OneTimeWorkRequest.Builder(Worker2::class.java)
.addTag(TAG_SAVING_PROGRESS)
.build()
findViewById<Button>(R.id.btn).setOnClickListener(View.OnClickListener { view ->
workManager?.
beginUniqueWork(TAG_SAVING_PROGRESS,ExistingWorkPolicy.REPLACE,workRequest1)
?.then(workRequest2)
?.enqueue()
})
progressBar = findViewById(R.id.progressBar)
workManager?.getWorkInfoByIdLiveData(workRequest1.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
workManager?.getWorkInfoByIdLiveData(workRequest2.id)
?.observe(this, Observer { workInfo: WorkInfo? ->
if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
val progress = workInfo.outputData
val value = progress.getInt("progress", 0)
progressBar?.progress = value
}
})
The reason the WorkManager's list of work infos gets bigger each time the work is started, even though the workers finished their work, is the use of
workManager.beginWith(OneTimeWorkRequest.from(firstWorker::class.java))
Instead, one needs to use
workManager?.beginUniqueWork(TAG_SAVING_PROGRESS, ExistingWorkPolicy.REPLACE,OneTimeWorkRequest.from(firstWorker::class.java))
You can read more about it here
I would like to use WorkManager to update the DB every 24 hours from midnight.
First, I understand that WorkManager's PeriodicWorkRequest does not let you specify the exact time at which the worker should run.
So I used a OneTimeWorkRequest to apply the initial delay, and then put the PeriodicWorkRequest, which runs every 24 hours, in the queue.
1.Constraints
private fun getConstraints(): Constraints {
return Constraints.Builder()
.setRequiredNetworkType(NetworkType.NOT_REQUIRED)
.build()
}
2.OneTimeWorkRequest
fun applyMidnightWorker() {
val onTimeDailyWorker = OneTimeWorkRequest
.Builder(MidnightWorker::class.java)
.setInitialDelay(getDelayTime(), TimeUnit.SECONDS)
.setConstraints(getConstraints())
.build()
val workerContinuation =
workManager.beginUniqueWork(Const.DAILY_WORKER_TAG,
ExistingWorkPolicy.KEEP,
onTimeDailyWorker)
workerContinuation.enqueue()
}
getDelayTime() is
private fun getDelayTime(): Long {
...
return midNightTime - System.currentTimeMillis()
}
3.MidnightWorker Class
class MidnightWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        DailyWorkerUtil.applyDailyWorker()
        return Result.success()
}
}
4.PeriodicWorkRequest
fun applyDailyWorker() {
val periodicWorkRequest = PeriodicWorkRequest
.Builder(DailyWorker::class.java, 24, TimeUnit.HOURS)
.addTag(Const.DAILY_WORKER)
.setConstraints(getConstraints()).build()
workManager.enqueue(periodicWorkRequest)
}
I confirmed that, once the delay time had passed, the MidnightWorker ran.
Of course, it worked normally with no relation to the network.
However, the test results showed that the delay worked regardless of the device time, as if it were based on server time.
This is my question:
1. The delay does not follow the device time. I want to know whether the delay works according to a server time standard.
2. I wonder whether a PeriodicWorkRequest can be given an initial delay like a OneTimeWorkRequest.
You've used TimeUnit.SECONDS with setInitialDelay, yet getDelayTime() works in milliseconds.
This may be the cause of your problems.
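A minimal sketch of the fix, keeping getDelayTime() returning milliseconds:
val onTimeDailyWorker = OneTimeWorkRequest
    .Builder(MidnightWorker::class.java)
    // getDelayTime() returns milliseconds, so the unit must match
    .setInitialDelay(getDelayTime(), TimeUnit.MILLISECONDS)
    .setConstraints(getConstraints())
    .build()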