I have a job inside my AndroidViewModel class. The job is started with viewModelScope.launch and runs a long-running process that returns its result through lambda callbacks. The requirement is that, on a button click, the user can cancel the job while it is still running in the scope. The problem is that when I cancel the job, the process keeps running in the background and continues computing. Below is my ViewModel class with the job and its cancel function.
import android.app.Application
import android.content.Context
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.*

class SelectionViewModel(val app: Application) : AndroidViewModel(app) {

    private var mainJob: Job? = null
    private var context: Context? = null

    fun performAction(
        fileOptions: FileOptions,
        onSuccess: (ArrayList<String>?) -> Unit,
        onFailure: (String?) -> Unit,
        onProgress: (Pair<Int, Int>) -> Unit
    ) {
        mainJob = viewModelScope.launch {
            withContext(Dispatchers.IO) {
                kotlin.runCatching {
                    while (isActive) {
                        val mOutputFilePaths: ArrayList<String> = ArrayList()
                        // Long-running background task
                        // ... progress
                        onProgress.invoke(pair)
                        // result success
                        onSuccess.invoke(mOutputFilePaths)
                    }
                }.onFailure {
                    withContext(Dispatchers.Main) {
                        onFailure.invoke(it.localizedMessage)
                    }
                }
            }
        }
    }

    fun cancelJob() {
        mainJob?.cancel()
    }
}
Here is how I initialize my ViewModel:
val viewModel: SelectionViewModel by lazy {
    ViewModelProviders.of(this).get(SelectionViewModel::class.java)
}
When I start the job, I call the following method:
viewModel.performAction(fileOptions, { success -> }, { failure -> }, { progress -> })
When I want to cancel the task, I call the following method:
viewModel.cancelJob()
The problem is that even after cancelling the job, I still receive progress callbacks, which means the job has not actually been cancelled.
I want to implement the correct way to start and cancel the job while remaining in the viewModelScope.
So what is the proper way to implement the ViewModel to start and cancel the job?
For cancellation to take effect, the coroutine has to reach a suspending function call.
This means that if your job runs code like
while (canRead) {
    read()
    addResults()
}
return result
it can never be cancelled the way you wish it to be cancelled.
There are two ways you can make this code cancellable:
a) add a delay() call (this will check for cancellation and cancel your job)
b) periodically add a yield() call (which, in the case above, is the correct way)
So the above code should look like this:
while (canRead) {
    yield()
    read()
    addResults()
}
return result
Edit: some further explanations are probably necessary to make this clear.
Just because you run something with withContext does not mean that coroutines can stop or break it at any time.
What coroutines basically do is replace the old callback-based way of doing things with suspending functions.
What we used to do for complex calculations was start a thread, which would execute the calculations, and then get a callback with the results.
At any point you could cancel the thread and the work would stop.
Cancelling coroutines is not the same.
If you cancel a coroutine, you basically tell it that the job is cancelled and that it should stop at the next opportune moment.
But if you don't use yield(), delay(), or any other suspending function, such an opportune moment never arrives.
It is the equivalent of running something like this with threads:
while (canRead && !cancelled) {
    doStuff()
}
where you would manually set the cancelled flag. If you set it but never checked it in your code, the loop would never stop.
As a side note, be careful: right now you have one big block of calculation code that runs on a single thread because you never call a suspending function. When you add the yield() call, execution may resume on a different thread or context (within what you defined, of course), so make sure your code is thread safe.
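Applied to the question's ViewModel, a minimal sketch of performAction with the cancellation check added might look like the following. It reuses the question's existing imports (kotlinx.coroutines.*), the progress/success wiring stays as placeholders, and ensureActive() would be an equivalent alternative to the explicit yield()/isActive combination:
fun performAction(
    fileOptions: FileOptions,
    onSuccess: (ArrayList<String>?) -> Unit,
    onFailure: (String?) -> Unit,
    onProgress: (Pair<Int, Int>) -> Unit
) {
    mainJob = viewModelScope.launch(Dispatchers.IO) {
        runCatching {
            val mOutputFilePaths: ArrayList<String> = ArrayList()
            while (isActive) {
                yield() // suspension point: throws CancellationException once cancel() is called
                // ... one chunk of the long-running background task
                // onProgress.invoke(pair)
                // onSuccess.invoke(mOutputFilePaths) once the work is done
            }
        }.onFailure { e ->
            if (e is CancellationException) throw e // let cancellation propagate instead of reporting it as a failure
            withContext(Dispatchers.Main) {
                onFailure.invoke(e.localizedMessage)
            }
        }
    }
}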
Related
I have an instance of CoroutineScope and a log() function which look like the following:
private val scope = CoroutineScope(Dispatchers.IO)

fun log(message: String) = scope.launch { // launching a coroutine
    println("$message")
    TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
}
And I use this test code to launch coroutines:
repeat(5) { item ->
    log("Log $item")
}
The log() function can be called from anywhere, on any thread, but not from a coroutine.
After a couple of tests I can see a non-sequential result like the following:
Log 0
Log 2
Log 4
Log 1
Log 3
The order of the printed logs can vary. If I understand correctly, coroutine execution isn't guaranteed to be sequential, which means the coroutine for item 2 can be launched before the coroutine for item 0.
I want the coroutines to be launched sequentially for each item, and the "some blocking operation" to execute sequentially, so that I always get these logs:
Log 0
Log 1
Log 2
Log 3
Log 4
Is there a way to make launching coroutines sequential? Or maybe there are other ways to achieve what I want?
Thanks in advance for any help!
One possible strategy is to use a Channel to join the launched jobs in order. You need to launch the jobs lazily so they don't start until join is called on them. trySend always succeeds when the Channel has unlimited capacity. You need to use trySend so it can be called from outside a coroutine.
private val lazyJobChannel = Channel<Job>(capacity = Channel.UNLIMITED).apply {
    scope.launch {
        consumeEach { it.join() }
    }
}

fun log(message: String) {
    lazyJobChannel.trySend(
        scope.launch(start = CoroutineStart.LAZY) {
            println("$message")
            TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
        }
    )
}
Since Flows are sequential, we can use a MutableSharedFlow to collect and handle the data sequentially:
class Info {

    // Make sure replay (in case some messages are emitted before the sharedFlow is
    // collected and would otherwise be lost) and extraBufferCapacity are large enough
    // to handle all the items. If some items are lost, try increasing either value.
    private val sharedFlow = MutableSharedFlow<String>(replay = 10, extraBufferCapacity = 10)

    private val scope = CoroutineScope(Dispatchers.IO)

    init {
        sharedFlow.onEach { message ->
            println("$message")
            TimeUnit.MILLISECONDS.sleep(100) // some blocking or suspend operation
        }.launchIn(scope)
    }

    fun log(message: String) {
        sharedFlow.tryEmit(message)
    }
}
fun test() {
    val info = Info()
    repeat(10) { item ->
        info.log("Log $item")
    }
}
It always prints the logs in the correct order:
Log 0
Log 1
Log 2
...
Log 9
It works in all cases, but you need to make sure the replay and extraBufferCapacity parameters of the MutableSharedFlow are large enough to handle all the items.
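If sizing replay and extraBufferCapacity is a concern, a variant of the Channel idea from the first approach pushes the raw messages through an unlimited Channel instead of lazy jobs. A minimal sketch (the class name is illustrative):
import java.util.concurrent.TimeUnit
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

class SequentialLogger {

    private val scope = CoroutineScope(Dispatchers.IO)
    private val messages = Channel<String>(capacity = Channel.UNLIMITED)

    init {
        scope.launch {
            for (message in messages) {          // consumed one at a time, in send order
                println(message)
                TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
            }
        }
    }

    fun log(message: String) {
        messages.trySend(message)                // always succeeds for an unlimited channel
    }
}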
Another approach is to use Dispatchers.IO.limitedParallelism(1) as the context for the CoroutineScope. It makes the coroutines run sequentially as long as they don't call suspend functions and are launched from the same thread, e.g. the main thread. So this solution only works with blocking (not suspend) operations inside the launch coroutine builder:
private val scope = CoroutineScope(Dispatchers.IO.limitedParallelism(1))

fun log(message: String) = scope.launch { // launched from the same thread, e.g. the main thread
    println("$message")
    TimeUnit.MILLISECONDS.sleep(100) // only a blocking operation, not a suspend operation
}
It turns out that a single-thread dispatcher is a FIFO executor, so limiting the CoroutineScope to one thread solves the problem.
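For illustration, the same FIFO behaviour can be made explicit with a dedicated single-thread dispatcher. A sketch (such a dispatcher should be closed once the scope is no longer needed):
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.launch

// One dedicated thread; coroutines launched on it run to completion in FIFO order
// as long as they only block and never suspend.
private val singleThreadDispatcher = Executors.newSingleThreadExecutor().asCoroutineDispatcher()
private val scope = CoroutineScope(singleThreadDispatcher)

fun log(message: String) = scope.launch {
    println(message)
    TimeUnit.MILLISECONDS.sleep(100) // some blocking operation
}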
When I press a button I need to wait 3 seconds before executing another method. I got that working with the following:
val job = CoroutineScope(Dispatchers.Main).launch(Dispatchers.Default, CoroutineStart.DEFAULT) {
    delay(THREE_SECONDS)
    if (this.isActive)
        product?.let { listener?.removeProduct(it) }
}
override fun onRemoveProduct(product: Product) {
    job.start()
}
Now, if I press a cancel button right after starting the job, I stop it from happening, and that works fine:
override fun onClick(v: View?) {
    when (v?.id) {
        R.id.dismissBtn -> {
            job.cancel()
        }
    }
}
The problem is that when onRemoveProduct is executed again and calls job.start(), the job does not start again; it seems that job.isActive never becomes true. Why is this happening?
A Job, once cancelled, cannot be started again. You need to do this in a different way. One way is to create a new job every time onRemoveProduct is called:
private var job: Job? = null

fun onRemoveProduct(product: Product) {
    job = scope.launch {
        delay(THREE_SECONDS)
        listener?.removeProduct(product) // Assuming the two products are the same. If they aren't, you can modify this statement accordingly.
    }
}

fun cancelRemoval() { // You can call this function from the click listener
    job?.cancel()
}
Also, regarding this line of your code, CoroutineScope(Dispatchers.Main).launch(Dispatchers.Default, CoroutineStart.DEFAULT):
You shouldn't (and needn't) create a new coroutine scope yourself. You can and should use the already provided viewModelScope or lifecycleScope. They are the better choice because they are lifecycle-aware and get cancelled at the right time.
Dispatchers.Main is useless here because it gets replaced by Dispatchers.Default anyway. Dispatchers.Default is also not required, because you aren't doing any heavy calculations (or calling blocking code) here.
CoroutineStart.DEFAULT is the default parameter, so you could have skipped it.
You also don't need the if (this.isActive) check, because:
If the [Job] of the current coroutine is cancelled or completed while delay is waiting, it immediately resumes with [CancellationException].
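For example, a minimal sketch of this pattern inside a ViewModel using viewModelScope (Product and THREE_SECONDS are taken from the question; the ProductViewModel name and the onRemove callback are illustrative):
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

class ProductViewModel : ViewModel() {

    private var removalJob: Job? = null

    fun scheduleRemoval(product: Product, onRemove: (Product) -> Unit) {
        removalJob?.cancel() // drop any removal that is still pending
        removalJob = viewModelScope.launch {
            delay(THREE_SECONDS) // resumes with CancellationException if cancelRemoval() was called, so onRemove never runs
            onRemove(product)
        }
    }

    fun cancelRemoval() { // call this from the dismiss button's click listener
        removalJob?.cancel()
    }
}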
First question here, I will do my best.
I have a data class that retrieves a data object from Firestore at creation.
I have written some code for the setters with coroutines. I am not sure about my solution, but it is working. However, for the getters, I am struggling to wait for the initialisation.
In the initialisation, I have a callback that retrieves the data. The issue is that the callback is always called on the main thread, even if I use it inside a coroutine on another thread. I check this with:
Log.d("THREAD", "Execution thread1: "+Thread.currentThread().name)
For the setters I use a coroutine in useTask so as not to block the main thread, and a mutex to block that coroutine until the initialisation in init is done. I am not sure about waitInitialisationSuspend, but it works.
For the getter, however, I just want to block the main thread (even if it is bad design, it is a first solution) until the initialisation is done, and then resume the getter to retrieve the value.
But I am not able to block the main thread without also blocking the callback in the initialisation, because they run on the same thread.
I have read a lot of documentation about coroutines, scopes, runBlocking, threads, etc., but everything gets mixed up in my head.
class Story(val id: String) : BaseObservable() {

    private val storyRef = StoryHelper.getStoryRef(id)!!
    private var isInitialized = false
    private val initMutex = Mutex(true)

    @get:Bindable
    var dbStory: DbStory? = null

    init {
        storyRef.get().addOnCompleteListener { task ->
            if (task.isSuccessful && task.result != null) {
                dbStory = task.result!!.toObject(DbStory::class.java)!!
                if (!isInitialized) {
                    initMutex.unlock()
                    isInitialized = true
                }
                notifyPropertyChanged(BR.dbStory)
            }
        }
    }

    fun interface StoryListener {
        fun onEvent()
    }

    private fun useTask(function: (task: Task) -> Unit): Task {
        val task = Task()
        GlobalScope.launch {
            waitInitialisationSuspend()
            function(task)
        }
        return task
    }

    private suspend fun waitInitialisationSuspend() {
        initMutex.withLock {
            // no op, just wait until the mutex is unlocked
        }
    }

    fun typicalSetFunction(value: String): Task {
        return useTask { task ->
            storyRef.update("fieldName", value).addOnSuccessListener {
                task.doEvent()
            }
        }
    }

    fun typicalGetFunction(): String {
        var result = ""
        // want something here to wait for the callback in init
        return result
    }
}
runBlocking seems to block the main thread, so I cannot use it if the callback also runs on the main thread.
It is the same problem if I use a while loop on the main thread.
#1
runBlocking {
    initMutex.withLock {
        result = dbStory!!.value
    }
}
#2
while (!isInitialized) {
}
result = dbStory!!.value
#3
Maybe that is because the callback in init also runs on the main thread. I tried to launch this initialisation in a coroutine with an IO dispatcher, but without success: the coroutine does run on a different thread, yet the callback is still called on the main thread.
private val scope = CoroutineScope(Dispatchers.IO + SupervisorJob())

scope.launch {
    reference.get().addOnCompleteListener { task ->
In the getter, I have to work on the main thread. The solution is maybe to run the callback on another thread, but I do not know how to do that, and maybe there is a better solution.
Another solution would be to be able to wait for the callback on the main thread without blocking the callback, but I have no solution for this either.
Any ideas?
I have looked for many solutions and the conclusion is: don't do it.
This design is worse than I thought. Android does not want you to block the main thread, even for a short time. Blocking the main thread blocks all UI and synchronisation mechanisms; it is a really bad solution.
Even using another thread for the callback (which you can do with an Executor) is, I think, a bad idea here. The proper way to wait for the end of the task behind the callback is to keep a reference to the task and use:
Tasks.await(initTask)
But that is not allowed on the main thread; Android prevents you from this bad design.
We should deal with the Firebase database asynchronously; that is the best way to handle it.
I can still use my cache for the data. Here I was waiting in order to display a dialog with a text retrieved from Firebase, so I can simply display the dialog asynchronously once the text data has been retrieved. If the cache is available, it will be used.
Also keep in mind that Firebase seems to have an API for using a cache.
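For illustration, a minimal sketch of reading the value asynchronously instead of blocking, assuming the kotlinx-coroutines-play-services artifact for the Task.await() extension, that storyRef is a Firestore DocumentReference as in the question, and that showDialog stands in for whatever consumes the result:
import com.google.firebase.firestore.DocumentReference
import kotlinx.coroutines.tasks.await

// Suspends until the document is available (Firestore may serve it from its cache).
suspend fun loadStory(storyRef: DocumentReference): DbStory? =
    storyRef.get().await().toObject(DbStory::class.java)

// Usage from a lifecycle-aware scope, e.g. in a Fragment:
// viewLifecycleOwner.lifecycleScope.launch {
//     loadStory(storyRef)?.let { story ->
//         showDialog(story) // shown asynchronously once the data has arrived
//     }
// }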
I'm using a coroutine to format a list.
This is how it looks
var job: Job? = null

private fun formatList(originalList: MutableList<MY_OBJECT>, callback: (MutableList<MY_OBJECT>) -> Unit) {
    if (job != null && !job?.isCompleted!!) {
        job?.cancel()
    }
    job = viewModelScope.launch(Dispatchers.IO) {
        originalList.forEach { item ->
            // do something with the item.
        }
    }
}
This method can be called several times at runtime, and to keep it from doing the same work twice, I added a cancel call if the job isn't done yet.
The problem is that, at runtime, the code inside the forEach block randomly produces some index-related crashes.
Why is this happening? Is it something about how coroutine execution works behind the scenes? Or is there something I don't know about the forEach loop?
Making your code cancellable
For job.cancel() to work properly, you need to make the code inside your coroutine cancellable:
job = viewModelScope.launch(Dispatchers.IO) {
    originalList.forEach { item ->
        if (!isActive) return@launch
        // do something with the item.
    }
    if (!isActive) return@launch
    // do something
}
Here the line if (!isActive) return@launch checks whether the coroutine is still active. Checking isActive before and after computationally intensive code is good practice.
Note: if you are not doing any network or database calls, prefer Dispatchers.Default, which is optimized for computation, over Dispatchers.IO.
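Putting it together, a minimal sketch of formatList with these checks in place might look like the following. Per the note above it uses Dispatchers.Default; formatting a copy of the list and delivering it through the callback on the main dispatcher are assumptions of this sketch, not requirements:
private var job: Job? = null

private fun formatList(
    originalList: MutableList<MY_OBJECT>,
    callback: (MutableList<MY_OBJECT>) -> Unit
) {
    job?.cancel() // cancel any previous formatting pass that is still running
    job = viewModelScope.launch(Dispatchers.Default) {
        val result = originalList.toMutableList() // defensive copy: iterate a snapshot, not the shared list
        result.forEach { item ->
            if (!isActive) return@launch
            // do something with the item.
        }
        withContext(Dispatchers.Main) {
            callback(result) // hand the formatted list back on the main dispatcher
        }
    }
}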