How to avoid concurrency issues with Kotlin coroutines? - android

I am implementing a chat feature in an Android application. To do that, I poll the server for chat messages every five seconds using a coroutine Flow. The problem is that when I send a message, the server sometimes receives two concurrent requests and returns an error. How can I make sure these requests run sequentially in my chat repository? Here is my chat repository:
class ChatRepositoryImpl @Inject constructor(
private val api: ApolloApi,
private val checkTokenIsSetDataStore: CheckTokenIsSetDataStore
) : ChatRepository {
override fun chatMessages(
lastIndex: Int,
limit: Int,
offset: Int,
channelId: Int,
): Flow<Resource<ChatMessages>> = flow {
var token = ""
checkTokenIsSetDataStore.get.first {
token = it
true
}
while (true) {
val response = ChatMessagesQuery(
lastIndex = Input.fromNullable(lastIndex),
limit = Input.fromNullable(limit),
offset = Input.fromNullable(offset),
channelId
).let {
api.getApolloClient(token)
.query(it)
.await()
}
response.data?.let {
emit(
Resource.Success<ChatMessages>(
it.chatMessages
)
)
}
if (response.data == null)
emit(Resource.Error<ChatMessages>(message = response.errors?.get(0)?.message))
delay(5000L)
}
}.flowOn(Dispatchers.IO)
override fun chatSendText(channelId: Int, text: String): Flow<Resource<ChatSendText>> = flow {
var token = ""
checkTokenIsSetDataStore.get.first {
token = it
true
}
val response = ChatSendTextMutation(
channelId = channelId,
text = text
).let {
api.getApolloClient(token)
.mutate(it)
.await()
}
response.data?.let {
return@flow emit(
Resource.Success<ChatSendText>(
it.chatSendText
)
)
}
return@flow emit(Resource.Error<ChatSendText>(message = response.errors?.get(0)?.message))
}.flowOn(Dispatchers.IO)
}

One way to limit concurrency is to use utilities like Mutex or Semaphore. We can solve your problem very easily with a Mutex:
class ChatRepositoryImpl ... {
private val apolloMutex = Mutex()
override fun chatMessages(...) {
...
apolloMutex.withLock {
api.getApolloClient(token)
.query(it)
.await()
}
...
}
override fun chatSendText(...) {
...
apolloMutex.withLock {
api.getApolloClient(token)
.mutate(it)
.await()
}
...
}
}
However, this problem should really be fixed on the server side, not on the client side. This client-side solution doesn't protect you against concurrent requests entirely: if for some reason two instances of the application have the same token, or if a user manipulates your application, the server can still receive concurrent requests.
If you can't easily fix the problem properly, you can apply the same fix on the server side that you intend to apply on the client side: handle requests, or the critical part of each request, sequentially. That is more error-proof and also more performant, because only part of each request's total time has to run sequentially.
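For completeness, here is a minimal self-contained sketch of the Semaphore variant mentioned above (a Mutex is essentially a Semaphore with a single permit). The RequestLimiter wrapper and the two fake requests are hypothetical, just to show how suspending callers queue up behind the permit:
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

// Hypothetical wrapper: every call routed through limited() shares the same semaphore,
// so at most `permits` requests are in flight at any moment.
class RequestLimiter(permits: Int = 1) {
    private val semaphore = Semaphore(permits)
    suspend fun <T> limited(block: suspend () -> T): T = semaphore.withPermit { block() }
}

fun main() = runBlocking {
    val limiter = RequestLimiter(permits = 1) // one permit behaves like a Mutex
    // Both "requests" are launched concurrently but execute one after the other.
    val poll = async { limiter.limited { delay(100); "chatMessages polled" } }
    val send = async { limiter.limited { delay(100); "chatSendText sent" } }
    println(poll.await())
    println(send.await())
}
Raising the permit count would allow a bounded number of parallel requests instead of strict serialization.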

Related

Queue download using Retrofit

I am trying to create a Queue manager for my Android app.
In my app, I show a list of videos in the RecyclerView. When the user clicks on any video, I download the video on the device. The download itself is working fine and I can even download multiple videos concurrently and show download progress for each download.
The Issue:
I want to download only 3 videos concurrently and put all the other downloads in a queue.
Here is my Retrofit service generator class:
object RetrofitInstance {
private val downloadRetrofit by lazy {
val dispatcher = Dispatcher()
dispatcher.maxRequestsPerHost = 1
dispatcher.maxRequests = 3
val client = OkHttpClient
.Builder()
.dispatcher(dispatcher)
.build()
Retrofit.Builder()
.baseUrl(BASE_URL)
.client(client)
.addConverterFactory(GsonConverterFactory.create())
.build()
}
val downloadApi: Endpoints by lazy {
downloadRetrofit.create(Endpoints::class.java)
}
}
And here is my endpoint interface class:
interface Endpoints {
@GET
@Streaming
suspend fun downloadFile(@Url fileURL: String): Response<ResponseBody>
}
And I am using Kotlin coroutine to start the download:
suspend fun startDownload(url: String, filePath: String) {
val downloadService = RetrofitInstance.downloadApi.downloadFile(url)
if (downloadService.isSuccessful) {
saveFile(downloadService.body(), filePath)
} else {
// callback for error
}
}
I also tried reducing the number of threads Retrofit could use with Dispatcher(Executors.newFixedThreadPool(1)), but that didn't help either. It still downloads all the files concurrently.
Any help would be appreciated. Thanks!
EDIT
Forgot to mention one thing. I am using a custom view for the recyclerView item. These custom views are managing their own downloading state by directly calling the Download class.
You can use CoroutineWorker to download videos on a background thread and handle a download queue.
Create the worker
class DownloadVideoWorker(
private val context: Context,
private val params: WorkerParameters,
private val downloadApi: DownloadApi
) : CoroutineWorker(context, params) {
override suspend fun doWork(): Result {
val videos = inputData.getStringArray(VIDEOS)
//Download videos
return Result.success()
}
companion object {
const val VIDEOS: String = "VIDEOS"
fun enqueue(videos: Array<String>): LiveData<WorkInfo> {
val downloadWorker = OneTimeWorkRequestBuilder<DownloadVideoWorker>()
.setInputData(Data.Builder().putStringArray(VIDEOS, videos).build())
.build()
val workManager = WorkManager.getInstance()
workManager.enqueue(downloadWorker)
return workManager.getWorkInfoByIdLiveData(downloadWorker.id)
}
}
}
In your viewModel add function to call worker from your Fragment/Activity
class DownloadViewModel : ViewModel() {
    private val listOfVideos = mutableListOf<String>() // Video URLs
    fun downloadVideos(): LiveData<WorkInfo> {
        val videosToDownload = retrieveNextThreeVideos()
        return DownloadVideoWorker.enqueue(videosToDownload)
    }
    fun retrieveNextThreeVideos(): Array<String> {
        if (listOfVideos.size >= 3) {
            val videosToDownload = listOfVideos.subList(0, 3).toList()
            videosToDownload.forEach { listOfVideos.remove(it) }
            return videosToDownload.toTypedArray()
        }
        return listOfVideos.toTypedArray()
    }
}
Observe LiveData and handle worker result
fun downloadVideos() {
documentsViewModel.downloadVideos().observe(this, Observer {
when (it.state) {
WorkInfo.State.SUCCEEDED -> {
downloadVideos()
}
WorkInfo.State.FAILED -> {
// Handle error result
}
}
})
}
NOTE: To learn more about Coroutine Worker, see: https://developer.android.com/topic/libraries/architecture/workmanager/advanced/coroutineworker
I was finally able to achieve it, but I am still not sure if this is the most efficient way to do it. I used a shared singleton ThreadPoolExecutor. Here is what I did:
In my Download class, I created a companion object holding a ThreadPoolExecutor:
companion object {
private val executor: ThreadPoolExecutor = Executors.newFixedThreadPool(3) as ThreadPoolExecutor
}
Then I made the following changes in my startDownload function:
fun startDownloading(url: String, filePath: String) {
downloadUtilImp.downloadQueued()
runBlocking {
downloadJob = launch(executor.asCoroutineDispatcher()) {
val downloadService = RetrofitInstance.api.downloadFile(url)
if (downloadService.isSuccessful) saveFile(downloadService.body(), filePath)
else downloadUtilImp.downloadFailed(downloadService.errorBody().toString())
}
}
}
This code only downloads 3 videos at a time and queues all the other download requests.
I am still open to suggestions if there is a better way to do it. Thanks for the help!
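If it helps, here is a sketch of a different approach: instead of a fixed thread pool, a kotlinx.coroutines Semaphore with three permits caps concurrency even for suspending Retrofit calls, which never block a thread. The DownloadQueue class and its scope parameter are hypothetical; startDownload is the suspend function from the question:
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Semaphore
import kotlinx.coroutines.sync.withPermit

class DownloadQueue(private val scope: CoroutineScope) {
    // At most three downloads run at once; the rest suspend here until a permit frees up.
    private val slots = Semaphore(permits = 3)

    fun enqueue(url: String, filePath: String): Job = scope.launch {
        slots.withPermit {
            startDownload(url, filePath) // the suspend download function from the question
        }
    }
}
Every call to enqueue() returns immediately with a Job, while the actual downloads wait their turn on the semaphore rather than on a thread pool.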

Android live data transformations on a background thread

I saw this, but I'm not sure how to implement it or whether it's the same issue. I have a MediatorLiveData that updates when either of its two source LiveDatas updates, or when the underlying data (Room DB) updates. It seems to work fine, but if the data updates a lot, it refreshes many times in quick succession and I get an error:
Cannot run invalidation tracker. Is the db closed?
Cannot access database on the main thread since it may potentially lock the UI for a long period of time
This doesn't happen every time, only when there are a lot of database updates in very quick succession. Here's the problem part of the view model:
var search: MutableLiveData<String> = getSearchState()
val filters: MutableLiveData<MutableSet<String>> = getCurrentFiltersState()
val searchPokemon: LiveData<PagingData<PokemonWithTypesAndSpeciesForList>>
val isFiltersLayoutExpanded: MutableLiveData<Boolean> = getFiltersLayoutExpanded()
init {
val combinedValues =
MediatorLiveData<Pair<String?, MutableSet<String>?>?>().apply {
addSource(search) {
value = Pair(it, filters.value)
}
addSource(filters) {
value = Pair(search.value, it)
}
}
searchPokemon = Transformations.switchMap(combinedValues) { pair ->
val search = pair?.first
val filters = pair?.second
if (search != null && filters != null) {
searchAndFilterPokemonPager(search, filters.toList())
} else null
}.distinctUntilChanged()
}
@SuppressLint("DefaultLocale")
private fun searchAndFilterPokemonPager(search: String, filters: List<String>): LiveData<PagingData<PokemonWithTypesAndSpeciesForList>> {
return Pager(
config = PagingConfig(
pageSize = 20,
enablePlaceholders = false,
maxSize = 60
)
) {
if (filters.isEmpty()){
searchPokemonForPaging(search)
} else {
searchAndFilterPokemonForPaging(search, filters)
}
}.liveData.cachedIn(viewModelScope)
}
@SuppressLint("DefaultLocale")
private fun getAllPokemonForPaging(): PagingSource<Int, PokemonWithTypesAndSpecies> {
return repository.getAllPokemonWithTypesAndSpeciesForPaging()
}
@SuppressLint("DefaultLocale")
private fun searchPokemonForPaging(search: String): PagingSource<Int, PokemonWithTypesAndSpeciesForList> {
return repository.searchPokemonWithTypesAndSpeciesForPaging(search)
}
@SuppressLint("DefaultLocale")
private fun searchAndFilterPokemonForPaging(search: String, filters: List<String>): PagingSource<Int, PokemonWithTypesAndSpeciesForList> {
return repository.searchAndFilterPokemonWithTypesAndSpeciesForPaging(search, filters)
}
the error is actually thrown from the function searchPokemonForPaging
For instance, it happens when the app starts, which does about 300 writes. If I force the calls off the main thread by making everything suspend and using runBlocking to return the Pager, it does work and I don't get the error anymore, but it obviously blocks the UI. So is there a way to make the switchMap asynchronous, or to make the searchAndFilterPokemonPager method return a Pager asynchronously? I know the second is technically possible (return from async), but maybe there is a way for coroutines to solve this.
Thanks for any help.
You can simplify the combining using combineTuple (available as a library that I wrote for this specific purpose) (optional).
Afterwards, you can use the liveData { } coroutine builder to move execution to a background thread.
Now your code will look like this:
val search: MutableLiveData<String> = getSearchState()
val filters: MutableLiveData<Set<String>> = getCurrentFiltersState()
val searchPokemon: LiveData<PagingData<PokemonWithTypesAndSpeciesForList>>
val isFiltersLayoutExpanded: MutableLiveData<Boolean> = getFiltersLayoutExpanded()
init {
searchPokemon = combineTuple(search, filters).switchMap { (search, filters) ->
liveData {
val search = search ?: return@liveData
val filters = filters ?: return@liveData
withContext(Dispatchers.IO) {
emit(searchAndFilterPokemonPager(search, filters.toList()))
}
}
}.distinctUntilChanged()
}

How to complete a Kotlin Flow in Android Worker

I'm investigating the use of Kotlin Flow within my current Android application
My application retrieves its data from a remote server via Retrofit API calls.
Some of these APIs return 50,000 data items in 500-item pages.
Each API response contains an HTTP Link header with the next page's complete URL.
These calls can take up to 2 seconds to complete.
In an attempt to reduce the elapsed time I have employed a Kotlin Flow to concurrently process each page
of data while also making the next page API call.
My flow is defined as follows:
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val internalWorkWorkState = MutableStateFlow<Response<List<MyPage>>?>(null)
private val workWorkState = internalWorkWorkState.asStateFlow()
private val myJob: Job
init {
myJob = GlobalScope.launch(persistenceThreadPool) {
workWorkState.collect { page ->
if (page == null) {
} else managePage(page!!)
}
}
}
My recursive function that fetches all the pages is defined as follows:
private suspend fun managePages(accessToken: String, response: Response<List<MyPage>>) {
when {
result != null -> return
response.isSuccessful -> internalWorkWorkState.emit(response)
else -> {
manageError(response.errorBody())
result = Result.failure()
return
}
}
response.headers().filter { it.first == HTTP_HEADER_LINK && it.second.contains(REL_NEXT) }.forEach {
val parts = it.second.split(OPEN_ANGLE, CLOSE_ANGLE)
if (parts.size >= 2) {
managePages(accessToken, service.myApiCall(accessToken, parts[1]))
}
}
}
private suspend fun managePage(response: Response<List<MyPage>>) {
val pages = response.body()
pages?.let {
persistResponse(it)
}
}
private suspend fun persistResponse(myPage: List<MyPage>) {
val myPageDOs = ArrayList<MyPageDO>()
myPage.forEach { page ->
myPageDOs.add(page.mapDO())
}
database.myPageDAO().insertAsync(myPageDOs)
}
My issues are:
This code does not insert all the data items I retrieve.
How do I complete the flow when all data items have been retrieved?
How do I complete the GlobalScope job once all the data items have been retrieved and persisted?
UPDATE
By making the following changes I have managed to insert all the data
private val persistenceThreadPool = Executors.newFixedThreadPool(3).asCoroutineDispatcher()
private val completed = CompletableDeferred<Int>()
private val channel = Channel<Response<List<MyPage>>?>(UNLIMITED)
private val channelFlow = channel.consumeAsFlow().flowOn(persistenceThreadPool)
private val frank: Job
init {
frank = GlobalScope.launch(persistenceThreadPool) {
channelFlow.collect { page ->
if (page == null) {
completed.complete(totalItems)
} else managePage(page!!)
}
}
}
...
...
...
channel.send(null)
completed.await()
return result ?: Result.success(outputData)
I do not like having to rely on a CompletableDeferred. Is there a better approach to know when the Flow has processed everything?
You are looking for the flow builder and Flow.buffer():
fun getData(): Flow<Data> = flow {
    var pageUrl: String? = "bla"
    while (pageUrl != null) {
        val pageData: List<Data> = TODO("fetch pageData from pageUrl and set pageUrl to the next page, or null when done")
        emitAll(pageData.asFlow())
    }
}
    .flowOn(Dispatchers.IO /* no need for a thread pool executor, IO does it automatically */)
    .buffer(3)
You can use it just like a normal Flow, iterate, etc. If you want to know the total length of the output, you should calculate it on the consumer with a mutable closure variable. Note you shouldn't need to use GlobalScope anywhere (ideally ever).
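For example, counting on the consumer side could look like this (a small sketch; persistItem() is a hypothetical stand-in for your DAO insert):
suspend fun consumeAll() {
    var total = 0
    getData().collect { item ->
        persistItem(item) // hypothetical persistence call
        total += 1
    }
    // collect() only returns once the flow has emitted everything and completed.
    println("Fetched and persisted $total items")
}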
There are a few ways to achieve the desired behaviour. I would suggest using coroutineScope, which is designed specifically for parallel decomposition. It also provides good cancellation and error-handling behaviour out of the box. In conjunction with the Channel.close behaviour it makes the implementation pretty simple. Conceptually the implementation may look like this:
suspend fun fetchAllPages() {
coroutineScope {
val channel = Channel<MyPage>(Channel.UNLIMITED)
launch(Dispatchers.IO){ loadData(channel) }
launch(Dispatchers.IO){ processData(channel) }
}
}
suspend fun loadData(sendChannel: SendChannel<MyPage>){
while(hasMoreData()){
sendChannel.send(loadPage())
}
sendChannel.close()
}
suspend fun processData(channel: ReceiveChannel<MyPage>){
for(page in channel){
// process page
}
}
It works in the following way:
coroutineScope suspends until all children are finished. So you don't need CompletableDeferred anymore.
loadData() loads pages in cycle and posts them into the channel. It closes the channel as soon as all pages have been loaded.
processData fetches items from the channel one by one and processes them. The cycle finishes as soon as all the items have been processed (and the channel has been closed).
In this implementation the producer coroutine works independently, with no back-pressure, so it can take a lot of memory if the processing is slow. Limit the buffer capacity to have the producer coroutine suspend when the buffer is full.
It might also be a good idea to use channel fan-out behaviour and launch multiple processors to speed up the computation, as in the sketch below.
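A rough sketch of that fan-out idea, building on fetchAllPages() above; the bounded capacity and the worker count of three are arbitrary choices:
suspend fun fetchAllPagesFanOut() {
    coroutineScope {
        // Bounded buffer: the producer suspends when 10 pages are waiting to be processed.
        val channel = Channel<MyPage>(capacity = 10)
        launch(Dispatchers.IO) { loadData(channel) }
        // Several consumers share the same channel; each page goes to exactly one of them.
        repeat(3) {
            launch(Dispatchers.IO) { processData(channel) }
        }
    }
}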

How to enqueue sequential coroutines blocks

What I'm trying to do
I have an app that's using Room with coroutines to save search queries in the database. It's also possible to add search suggestions, and later on I retrieve this data to show it in a list. I've also made it possible to "pin" some of those suggestions.
My data structure is something like this:
@Entity(
tableName = "SEARCH_HISTORY",
indices = [Index(value = ["text"], unique = true)]
)
data class Suggestion(
@PrimaryKey(autoGenerate = true)
#ColumnInfo(name = "suggestion_id")
val suggestionId: Long = 0L,
val text: String,
val type: SuggestionType,
#ColumnInfo(name = "insert_date")
val insertDate: Calendar
)
enum class SuggestionType(val value: Int) {
PINNED(0), HISTORY(1), SUGGESTION(2)
}
I have made the "text" field unique to avoid repeated suggestions with different states/types. E.g.: A suggestion that's a pinned item and a previously queried text.
My Coroutine setup looks like this:
private val parentJob: Job = Job()
private val IO: CoroutineContext
get() = parentJob + Dispatchers.IO
private val MAIN: CoroutineContext
get() = parentJob + Dispatchers.Main
private val COMPUTATION: CoroutineContext
get() = parentJob + Dispatchers.Default
And my DAOs are basically like this:
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insert(obj: Suggestion): Long
@Insert(onConflict = OnConflictStrategy.REPLACE)
suspend fun insert(objList: List<Suggestion>): List<Long>
I also have the following public functions to insert the data into the database:
fun saveQueryToDb(query: String, insertDate: Calendar) {
if (query.isBlank()) {
return
}
val suggestion = Suggestion(
text = query,
insertDate = insertDate,
type = SuggestionType.HISTORY
)
CoroutineScope(IO).launch {
suggestionDAO.insert(suggestion)
}
}
fun addPin(pin: String) {
if (pin.isBlank()) {
return
}
val suggestion = Suggestion(
text = pin,
insertDate = Calendar.getInstance(),
type = SuggestionType.PINNED
)
CoroutineScope(IO).launch {
suggestionDAO.insert(suggestion)
}
}
fun addSuggestions(suggestions: List<String>) {
addItems(suggestions, SuggestionType.SUGGESTION)
}
private fun addItems(items: List<String>, suggestionType: SuggestionType) {
if (items.isEmpty()) {
return
}
CoroutineScope(COMPUTATION).launch {
val insertDate = Calendar.getInstance()
val filteredList = items.filterNot { it.isBlank() }
val suggestionList = filteredList.map { Suggestion(text = it, insertDate = insertDate, type = suggestionType) }
withContext(IO) {
suggestionDAO.insert(suggestionList)
}
}
}
There are also some other methods, but let's focus on the ones above.
EDIT: All of the methods above are part of a lib that I made. They are not made suspend because I don't want to force a particular style of programming on the user, like forcing them to use Rx or coroutines when using the lib.
The problem
Let's say I try to add a list of suggestions using the addSuggestions() method stated above, and that I also try to add a pinned suggestion using the addPin() method. The pinned text is also present in the suggestion list.
val list = getSuggestions() // Getting a list somewhere
addSuggestions(list)
addPin(list.first())
When I try to do this, sometimes the pin is added first and then it's overwritten by the suggestion present in the list, which makes me think I might've been dealing with some sort of race condition. Since the addSuggestions() method has more data to handle, and both methods will run in parallel, I believe the addPin() method is completing first.
Now, my Coroutines knowledge is pretty limited and I'd like to know if there's a way to enqueue those method calls and make sure they'll execute in the exact same order I invoked them, that must be strongly guaranteed to avoid overriding data and getting funky results later on. How can I achieve such behavior?
I'd follow the Go language slogan "Don't communicate by sharing memory; share memory by communicating", that means instead of maintaining atomic variables or jobs and trying to synchronize between them, model your operations as messages and use Coroutines actors to handle them.
sealed class Message {
    data class AddSuggestions(val suggestions: List<String>) : Message()
    data class AddPin(val pin: String) : Message()
}
And in your class
private val parentScope = CoroutineScope(Job())
private val actor = parentScope.actor<Message>(Dispatchers.IO) {
for (msg in channel) {
when (msg) {
is Message.AddSuggestions -> TODO("Map to the Suggestion and do suggestionDAO.insert(suggestions)")
is Message.AddPin -> TODO("Map to the Pin and do suggestionDAO.insert(pin)")
}
}
}
fun addSuggestions(suggestions: List<String>) {
actor.offer(Message.AddSuggestions(suggestions))
}
fun addPin(pin: String) {
actor.offer(Message.AddPin(pin))
}
By using actors you'll be able to queue messages and they will be processed in FIFO order.
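If you would rather not depend on the actor builder (it is marked obsolete in current kotlinx.coroutines), the same FIFO behaviour falls out of a plain Channel with a single consumer coroutine. A sketch assuming the same Message type and DAO as above:
private val parentScope = CoroutineScope(SupervisorJob() + Dispatchers.IO)
private val messages = Channel<Message>(Channel.UNLIMITED)

init {
    parentScope.launch {
        for (msg in messages) { // handled one at a time, in the order they were sent
            when (msg) {
                is Message.AddSuggestions -> TODO("Map to Suggestions and call suggestionDAO.insert(...)")
                is Message.AddPin -> TODO("Map to a pinned Suggestion and call suggestionDAO.insert(...)")
            }
        }
    }
}

fun addSuggestions(suggestions: List<String>) {
    messages.trySend(Message.AddSuggestions(suggestions))
}

fun addPin(pin: String) {
    messages.trySend(Message.AddPin(pin))
}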
By default when you call .launch{}, it launches a new coroutine without blocking the current thread and returns a reference to the coroutine as a Job. The coroutine is canceled when the resulting job is canceled.
It doesn't care about or wait for other parts of your code; it just runs.
But you can pass a parameter to tell it to run immediately, or to start lazily only when another coroutine needs it (LAZY).
For Example:
val work_1 = CoroutineScope(IO).launch(start = CoroutineStart.LAZY) {
    // do some work
}
val work_2 = CoroutineScope(IO).launch(start = CoroutineStart.LAZY) {
    // do some work
    work_1.join()
}
val work_3 = CoroutineScope(IO).launch {
    // do some work
    work_2.join()
}
When you execute the above code, work_3 starts first and joins work_2, which in turn starts and joins work_1, and so on.
The summary of coroutine start options is:
DEFAULT -- immediately schedules coroutine for execution according to its context
LAZY -- starts coroutine lazily, only when it is needed
ATOMIC -- atomically (in a non-cancellable way) schedules coroutine for execution according to its context
UNDISPATCHED -- immediately executes coroutine until its first suspension point in the current thread.
So by default, when you call .launch{}, start = CoroutineStart.DEFAULT is passed because it is the default parameter.
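Applied to the question, that could look roughly like this (a sketch reusing the question's IO context; suggestionList and pinSuggestion are hypothetical local variables):
// The pin insert will not start until the suggestions insert has finished.
val addSuggestionsJob = CoroutineScope(IO).launch(start = CoroutineStart.LAZY) {
    suggestionDAO.insert(suggestionList)
}
val addPinJob = CoroutineScope(IO).launch {
    addSuggestionsJob.join() // join() starts the lazy job and waits for it to complete
    suggestionDAO.insert(pinSuggestion)
}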
Don't launch coroutines from your database or repository. Use suspending functions and then switch dispatchers like:
suspend fun addPin(pin: String) {
...
withContext(Dispatchers.IO) {
suggestionDAO.insert(suggestion)
}
}
Then from your ViewModel (or Activity/Fragment) make the call:
fun addSuggestionsAndPinFirst(suggestions: List<Suggestion>) {
myCoroutineScope.launch {
repository.addSuggestions(suggestions)
repository.addPin(suggestions.first())
}
}
Why do you have a separate addPin() function anyway? You can just modify a suggestion and then store it:
fun pinAndStoreSuggestion(suggestion: Suggestion) {
myCoroutineScope.launch {
repository.storeSuggestion(suggestion.copy(type = SuggestionType.PINNED))
}
}
Also be careful using a Job like that. If any coroutine fails all your coroutines will be cancelled. Use a SupervisorJob instead. Read more on that here.
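A minimal sketch of that change in the question's context setup (same property names as the question):
private val parentJob: Job = SupervisorJob()
private val IO: CoroutineContext
    get() = parentJob + Dispatchers.IO
private val MAIN: CoroutineContext
    get() = parentJob + Dispatchers.Main
private val COMPUTATION: CoroutineContext
    get() = parentJob + Dispatchers.Default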
Disclaimer: I do not approve of the solution below. I'd rather use an old-fashioned ExecutorService and submit() my Runnables.
So if you really want to synchronize your coroutines so that the first function called is also the first one to write to your database (I'm not sure that is guaranteed, since your DAO functions are also suspending and Room uses its own threads too), try something like the following unit test:
class TestCoroutineSynchronization {
private val jobId = AtomicInteger(0)
private val jobToRun = AtomicInteger(0)
private val jobMap = mutableMapOf<Int, () -> Unit>()
@Test
fun testCoroutines() = runBlocking {
first()
second()
delay(2000) // delay so our coroutines finish
}
private fun first() {
val jobId = jobId.getAndIncrement()
CoroutineScope(SupervisorJob() + Dispatchers.Default).launch {
delay(1000) // intentionally delay your first coroutine
withContext(Dispatchers.IO) {
submitAndTryRunNextJob(jobId) { println(1) }
}
}
}
private fun second() {
val jobId = jobId.getAndIncrement()
CoroutineScope(SupervisorJob()).launch(Dispatchers.IO) {
submitAndTryRunNextJob(jobId) { println(2) }
}
}
private fun submitAndTryRunNextJob(jobId: Int, action: () -> Unit) {
synchronized(jobMap) {
jobMap[jobId] = action
tryRunNextJob()
}
}
private fun tryRunNextJob() {
var action = jobMap.remove(jobToRun.get())
while (action != null) {
action()
action = jobMap.remove(jobToRun.incrementAndGet())
}
}
}
So what I do on each call is increment a value (jobId) that is later used to prioritize which action runs first. Since you are using suspending functions, you probably need to add that modifier to the submitted action too (e.g. suspend () -> Unit).

How to use Fuel with coroutines in Kotlin?

I want to make an API request and save the request's data to a DB. I also want to return the data that is written to the DB. I know this is possible in RxJava, but now I'm writing with Kotlin coroutines and currently use Fuel instead of Retrofit (the difference is not that large). I read How to use Fuel with a Kotlin coroutine, but I don't understand it.
How do I write the coroutine and the methods?
UPDATE
Say we have Java with Retrofit and RxJava. Then we could write the following code.
RegionResponse:
@AutoValue
public abstract class RegionResponse {
    @SerializedName("id")
    public abstract Integer id();

    @SerializedName("name")
    public abstract String name();

    @SerializedName("countryId")
    public abstract Integer countryId();

    public static RegionResponse create(int id, String name, int countryId) {
        ....
    }
    ...
}
Region:
data class Region(
val id: Int,
val name: String,
val countryId: Int)
Network:
public Single<List<RegionResponse>> getRegions() {
return api.getRegions();
// @GET("/regions")
// Single<List<RegionResponse>> getRegions();
}
RegionRepository:
fun getRegion(countryId: Int): Single<Region> {
val dbSource = db.getRegion(countryId)
val lazyApiSource = Single.defer { api.regions }
.flattenAsFlowable { it }
.map { apiMapper.map(it) }
.toList()
.doOnSuccess { db.updateRegions(it) }
.flattenAsFlowable { it }
.filter({ it.countryId == countryId })
.singleOrError()
return dbSource
.map { dbMapper.map(it) }
.switchIfEmpty(lazyApiSource)
}
RegionInteractor:
class RegionInteractor(
private val repo: RegionRepository,
private val prefsRepository: PrefsRepository) {
fun getRegion(): Single<Region> {
return Single.fromCallable { prefsRepository.countryId }
.flatMap { repo.getRegion(it) }
.subscribeOn(Schedulers.io())
}
}
Let's look at it layer by layer.
First, your RegionResponse and Region are totally fine for this use case, as far as I can see, so we won't touch them at all.
Your network layer is written in Java, so we'll assume it always expects synchronous behavior, and won't touch it either.
So, we start with the repo:
fun getRegion(countryId: Int) = GlobalScope.async {
    val regionFromDb = db.getRegion(countryId)
    if (regionFromDb == null) {
        return@async apiMapper.map(api.regions)
            .filter { it.countryId == countryId }
            .first()
            .also { db.updateRegions(it) }
    }
    return@async dbMapper.map(regionFromDb)
}
Remember that I don't have your code, so the details may differ a bit. But the general idea with coroutines is that you launch them with async() when they need to return a result, and then write your code as if you were in a perfect world where you don't need to concern yourself with concurrency.
Now to the interactor:
class RegionInteractor(
    private val repo: RegionRepository,
    private val prefsRepository: PrefsRepository
) {
    suspend fun getRegion() = withContext(Schedulers.io().asCoroutineDispatcher()) {
        val countryId = prefsRepository.countryId
        repo.getRegion(countryId).await()
    }
}
You need something to convert the asynchronous code back to synchronous code, and for that you need some kind of thread pool to execute on. Here we use the thread pool from Rx, but if you want to use some other pool, do so.
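For instance, if you don't specifically need the Rx pool, the standard IO dispatcher does the same job; a minimal sketch assuming the same repo and prefsRepository as above:
suspend fun getRegion(): Region = withContext(Dispatchers.IO) {
    repo.getRegion(prefsRepository.countryId).await()
}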
After researching How to use Fuel with a Kotlin coroutine, Fuel coroutines and https://github.com/kittinunf/Fuel/ (looking for awaitStringResponse), I came up with another solution. Assume that you have Kotlin 1.3 with coroutines 1.0.0 and Fuel 1.16.0.
We have to avoid asynchronous requests with callbacks and make them synchronous (every request in its own coroutine). Say we want to show a country name by its code.
// POST-request to a server with country id.
fun getCountry(countryId: Int): Request =
"map/country/"
.httpPost(listOf("country_id" to countryId))
.addJsonHeader()
// Adding headers to the request, if needed.
private fun Request.addJsonHeader(): Request =
header("Content-Type" to "application/json",
"Accept" to "application/json")
It returns JSON like this:
{
"country": {
"name": "France"
}
}
To decode the JSON response we have to write a model class:
data class CountryResponse(
val country: Country,
val errors: ErrorsResponse?
) {
data class Country(
val name: String
)
// If the server prints errors.
data class ErrorsResponse(val message: String?)
// Needed for awaitObjectResponse, awaitObject, etc.
class Deserializer : ResponseDeserializable<CountryResponse> {
override fun deserialize(content: String) =
Gson().fromJson(content, CountryResponse::class.java)
}
}
Then we should create a UseCase or Interactor to receive a result synchronously:
suspend fun getCountry(countryId: Int): Result<CountryResponse, FuelError> =
api.getCountry(countryId).awaitObjectResponse(CountryResponse.Deserializer()).third
I use third to access the response data. But if you wish to check for an HTTP status code != 200, remove third and destructure all three variables of the returned Triple instead.
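For example, a status-code check might look roughly like this (a sketch against Fuel 1.16's awaitObjectResponse; getCountryChecked is a hypothetical helper name):
suspend fun getCountryChecked(countryId: Int): CountryResponse? {
    // awaitObjectResponse returns a Triple of (Request, Response, Result).
    val (_, response, result) = api.getCountry(countryId)
        .awaitObjectResponse(CountryResponse.Deserializer())
    if (response.statusCode != 200) {
        // Handle the HTTP error here (log it, map it to a domain error, etc.).
        return null
    }
    return result.fold({ it }, { null })
}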
Now you can write a method to print the country name.
private fun showLocation(
    useCase: UseCaseImpl,
    countryId: Int,
    regionId: Int,
    cityId: Int
) {
    GlobalScope.launch(Dispatchers.IO) {
        // Titles of country, region, city.
        var country: String? = null
        var region: String? = null
        var city: String? = null
        val countryTask = GlobalScope.async {
            val result = useCase.getCountry(countryId)
            // Receive a name of the country if it exists.
            result.fold({ response -> country = response.country.name }
                , { fuelError -> fuelError.message })
        }
        val regionTask = GlobalScope.async {
            val result = useCase.getRegion(regionId)
            result.fold({ response -> region = response.region?.name }
                , { fuelError -> fuelError.message })
        }
        val cityTask = GlobalScope.async {
            val result = useCase.getCity(cityId)
            result.fold({ response -> city = response.city?.name }
                , { fuelError -> fuelError.message })
        }
        // Wait for the three requests to execute.
        countryTask.await()
        regionTask.await()
        cityTask.await()
        // Now update UI.
        GlobalScope.launch(Dispatchers.Main) {
            updateLocation(country, region, city)
        }
    }
}
In build.gradle:
ext {
fuelVersion = "1.16.0"
}
dependencies {
...
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.0.0'
// Fuel.
//for JVM
implementation "com.github.kittinunf.fuel:fuel:${fuelVersion}"
//for Android
implementation "com.github.kittinunf.fuel:fuel-android:${fuelVersion}"
//for Gson support
implementation "com.github.kittinunf.fuel:fuel-gson:${fuelVersion}"
//for Coroutines
implementation "com.github.kittinunf.fuel:fuel-coroutines:${fuelVersion}"
// Gson.
implementation 'com.google.code.gson:gson:2.8.5'
}
If you want to work with coroutines and Retrofit, please read https://medium.com/exploring-android/android-networking-with-coroutines-and-retrofit-a2f20dd40a83 (or https://habr.com/post/428994/ in Russian).
You should be able to significantly simplify your code. Declare your use case similar to the following:
class UseCaseImpl {
suspend fun getCountry(countryId: Int): Country =
api.getCountry(countryId).awaitObject(CountryResponse.Deserializer()).country
suspend fun getRegion(regionId: Int): Region =
api.getRegion(regionId).awaitObject(RegionResponse.Deserializer()).region
suspend fun getCity(cityId: Int): City =
api.getCity(countryId).awaitObject(CityResponse.Deserializer()).city
}
Now you can write your showLocation function like this:
private fun showLocation(
useCase: UseCaseImpl,
countryId: Int,
regionId: Int,
cityId: Int
) {
GlobalScope.launch(Dispatchers.Main) {
val countryTask = async { useCase.getCountry(countryId) }
val regionTask = async { useCase.getRegion(regionId) }
val cityTask = async { useCase.getCity(cityId) }
updateLocation(countryTask.await(), regionTask.await(), cityTask.await())
}
}
You have no need to launch in the IO dispatcher because your network requests are non-blocking.
I must also note that you shouldn't launch in the GlobalScope. Define a proper coroutine scope that aligns its lifetime with the lifetime of the Android activity or whatever else its parent is.
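For example, in a ViewModel the built-in viewModelScope (from androidx.lifecycle:lifecycle-viewmodel-ktx) gives you that lifetime for free. A sketch with a hypothetical LocationViewModel wrapping the same use case; updateLocation stands in for however you push the result to the UI:
class LocationViewModel(private val useCase: UseCaseImpl) : ViewModel() {
    fun showLocation(countryId: Int, regionId: Int, cityId: Int) {
        // viewModelScope is cancelled automatically when the ViewModel is cleared.
        viewModelScope.launch {
            val countryTask = async { useCase.getCountry(countryId) }
            val regionTask = async { useCase.getRegion(regionId) }
            val cityTask = async { useCase.getCity(cityId) }
            updateLocation(countryTask.await(), regionTask.await(), cityTask.await())
        }
    }
}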
