I'm working on a project using the CLEAN architecture, where the project is broken into "Presentation", "Domain" and "Data" modules, and the Domain module hosts the "Entities" that are basically the data models specific to this project. An example of this architecture is here.
Unlike the other two modules, "Domain" is a pure Java library module, which is great for clarity and testing as it doesn't have the Android overhead; however, it also means I'm not able to use libraries like "Parceler", which is very Android specific. Is there a way around this?
Parceler allows you to configure beans outside of a given module to generate a wrapping Parcelable via the @ParcelClass annotation. This means you can configure the given bean as a @Parcel outside of the Data layer, in the Presentation layer (or wherever else you want). See http://parceler.org/#classes_without_java_source for specifics.
The org.parceler:parceler-api module is also pure Java; it has no dependencies on the Android API. Therefore you should be free to annotate your Data module without violating the CLEAN architecture you're seeking. The annotation-compiler portion of the library (org.parceler:parceler), however, does rely on the Android API, so you'll need to run it in an Android-specific module. This leaves you with the following:
Include the parceler-api library in your Data module and annotate your Data layer beans (@Transient, @ParcelProperty, etc.). If you don't need any specific configuration, you can skip including parceler-api as a dependency.
Add the parceler and parceler-api libraries to your Android-specific module (Presentation?).
Add a @ParcelClass annotation for each class from your Data module that you want to be a @Parcel to an arbitrary class (Application?). This will direct Parceler to generate a Parcelable for each class identified within the @ParcelClass parameter, for example:
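Here's a minimal Kotlin sketch of that last step, assuming two hypothetical beans (ExampleUser, ExampleOrder) and the Parceler annotation processor (kapt "org.parceler:parceler:...") applied in the Android module:

import android.app.Application
import android.os.Bundle
import android.os.Parcelable
import org.parceler.ParcelClass
import org.parceler.ParcelClasses
import org.parceler.Parcels

// In practice these beans live in the pure Domain/Data module (no Android imports there);
// they are shown here only to keep the sketch self-contained. Parceler's default field-based
// serialization needs a no-arg constructor and non-private fields, hence the default values
// and @JvmField.
class ExampleUser(@JvmField var name: String = "")
class ExampleOrder(@JvmField var id: Long = 0)

// The @ParcelClass annotations can sit on any class compiled in the Android module,
// e.g. the Application, and direct Parceler to generate the Parcelable wrappers.
@ParcelClasses(
    ParcelClass(ExampleUser::class),
    ParcelClass(ExampleOrder::class)
)
class MyApplication : Application()

// Crossing a Parcelable boundary from the Presentation layer:
fun toArguments(user: ExampleUser): Bundle =
    Bundle().apply { putParcelable("user", Parcels.wrap(user)) }

fun fromArguments(args: Bundle): ExampleUser {
    val wrapped: Parcelable? = args.getParcelable("user")
    return Parcels.unwrap(wrapped)
}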
Square Inc. presented its internal modular architecture at Droidcon SF'19:
https://www.droidcon.com/media-detail?video=380843878
However, I'm a bit confused by some of the bullet points. Could you please help me?
Why do they actually need :wiring modules? I find that they add complexity:
you get an extra Gradle module for each new feature
you have to do a sort of global injection into your Fragments somewhere in :app, because Fragments defined in :impl modules cannot access their DaggerComponent, which is defined in the :impl-wiring modules. :impl doesn't depend on :impl-wiring, because the dependency goes the other way.
you cannot have Android Dynamic Feature modules, because they need to know about their DaggerComponent in order to inject their Fragments, but there is no way to do such injection from the :app module, which is the base module for Dynamic Features.
so why :wiring modules at all?
One can merge :impl and :impl-wiring, or :fake and :fake-wiring, together to eliminate all the issues mentioned above. And also, in :demo-apps one could just have a dependency on either :impl or :fake, and not on :impl-wiring (or :fake-wiring).
The purpose of this type of module is to separate things even further. With it, you create an abstraction over which DI component you use (Koin, Dagger) and how you use it. If the project is large, it makes sense to do this.
Currently I generate the following flow of dependencies between modules:
WARNING: check the arrow directions carefully.
:feature-A:open <- :feature-A:impl -> :feature-A:impl-wiring
:feature-A:impl-wiring -> :feature-A:impl, :feature-A:open
:app -> :feature-A:open, :feature-A:impl-wiring
I'm still not sure whether :app should depend on both :open and :impl-wiring, or only on :impl-wiring and get :open from it transitively.
Eventually, I came up with the following solution:
each feature consists of the following Gradle modules:
api
impl and fake
data:api
data:impl1 ... data:implN and data:fake
data:wiring
ui
demo
So, here api, impl and fake are as usual, but I have my data layers separated. I convinced myself that I sometimes need multiple different implementations of the data layer; for example, if I develop a Stock-Charts app, I could rely on the Finnhub Open API or the MBOUM API, or provide a fake implementation.
Thus I have data:api and data:implX. Indeed, data:api defines the FeatureRepository interface (one or many) and data:implX provides actual implementations of them. In order to bind interface and implementation, I use data:wiring, which defines Dagger modules and component(s). In addition, I keep the same package names within each data:implX module in order to "write once" the data:wiring module. And to replace one implementation with another, I just change a single line in data:wiring/build.gradle from:
implementation project(":data:implA")
to
implementation project(":data:implB")
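For concreteness, here is a rough sketch (all names hypothetical) of what such a data:wiring module could contain:

import dagger.Binds
import dagger.Component
import dagger.Module
import javax.inject.Inject
import javax.inject.Singleton

// Defined in data:api (pure Kotlin):
interface FeatureRepository {
    suspend fun loadQuotes(): List<String>
}

// Each data:implX module declares a class with this same fully-qualified name, so the
// wiring below compiles unchanged against whichever implX is on the classpath.
class RemoteFeatureRepository @Inject constructor() : FeatureRepository {
    override suspend fun loadQuotes(): List<String> = emptyList() // real API call omitted
}

// Defined in data:wiring; swapping implA for implB only touches build.gradle, not this file.
@Module
abstract class FeatureDataModule {
    @Binds
    @Singleton
    abstract fun bindFeatureRepository(impl: RemoteFeatureRepository): FeatureRepository
}

@Singleton
@Component(modules = [FeatureDataModule::class])
interface FeatureDataComponent {
    fun featureRepository(): FeatureRepository
}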
Also, to resolve the confusion mentioned in my original question, I introduce a ui module, which contains the Views of a particular feature. Fragments go in demo (a standalone app to test the feature) or in ui; they refer to a ViewModel whose bindings are constructor-injected from the feature's Dagger component. But the UI and the library are separated here. The Fragment instantiates a dedicated Dagger component that uses component dependencies to refer to the feature's library bindings, such as an interactor or a repository, as sketched below.
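A condensed sketch of that Fragment-side wiring (names are made up; in real code the ViewModel would normally go through a ViewModelProvider.Factory rather than plain field injection):

import androidx.fragment.app.Fragment
import dagger.Component
import javax.inject.Inject

// Bindings exposed by the feature's library component:
class FeatureInteractor

interface FeatureLibraryComponent {
    fun interactor(): FeatureInteractor
}

class FeatureViewModel @Inject constructor(val interactor: FeatureInteractor)

// The dedicated UI-side component that uses component dependencies to reach the library bindings:
@Component(dependencies = [FeatureLibraryComponent::class])
interface FeatureUiComponent {
    fun inject(fragment: FeatureFragment)
}

class FeatureFragment : Fragment() {
    @Inject lateinit var viewModel: FeatureViewModel

    // `library` would be obtained from the feature's api entry point (see below).
    fun wire(library: FeatureLibraryComponent) {
        DaggerFeatureUiComponent.builder()
            .featureLibraryComponent(library)
            .build()
            .inject(this)
    }
}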
So, to wrap up: separating the UI from the business-logic implementation (a "library") for each feature makes it possible to solve the issue. The feature's api declares an entry point to its functionality as a library, and that entry point is made globally accessible via Dagger multibindings from :app. So it can be used further in any :demo, :ui or :dynamic-feature module.
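A rough illustration (again with hypothetical names) of that multibindings-based entry-point registry:

import dagger.Binds
import dagger.Module
import dagger.multibindings.ClassKey
import dagger.multibindings.IntoMap
import javax.inject.Inject

// Declared in the feature's api module — the entry point to the feature as a library:
interface FeatureEntryPoint
interface FeatureAApi : FeatureEntryPoint

// Declared in the feature's impl / impl-wiring:
class FeatureAApiImpl @Inject constructor() : FeatureAApi

@Module
abstract class FeatureAWiringModule {
    @Binds
    @IntoMap
    @ClassKey(FeatureAApi::class)
    abstract fun bindFeatureA(impl: FeatureAApiImpl): FeatureEntryPoint
}

// :app's component then exposes the collected map, which :demo, :ui or a dynamic feature
// can query by class:
// fun featureEntryPoints(): Map<Class<*>, @JvmSuppressWildcards FeatureEntryPoint>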
I'm investigating
implementation "androidx.datastore:datastore-core:1.0.0-alpha01"
implementation "com.google.protobuf:protobuf-javalite:3.10.0"
via this codelab
I do not understand why the DataStore-generated classes are Java.
I thought Google announced that Kotlin had replaced Java as the primary Android development language?
I was under the impression Kotlin had many advantages over Java
Is the issue that protocol buffers do not support Kotlin?
I can answer that quickly!
According to the codelab link you shared, if you visit page 6, the first line indicates where your answer lies:
Protocol buffers are a mechanism for serializing structured data. You define how you want your data to be structured once and then the compiler generates source code to easily write and read the structured data.
So, basically, under the hood the library uses a code-generator plugin that generates the classes required for protobuf to work with your project. (Yes, you can relate this to data binding, where you write code in XML and under the hood there's a generated class that actually implements that logic for you.)
And that's the reason the generated classes are in Java. It has nothing to do with the source language in this context (Java/Kotlin support of the library). Anything written in Java will work seamlessly from Kotlin and vice versa (in the Android development context).
Side note: you can also relate this to annotation processors, where Java code is generated based on annotations according to whatever criteria the processor defines.
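To make that concrete: assuming a proto message like the codelab's UserPreferences with a show_completed field, the Java class generated by protoc is consumed from Kotlin like any other class, e.g.:

import androidx.datastore.core.DataStore

// `UserPreferences` is the Java class generated by the protobuf-javalite plugin from the
// .proto definition; Kotlin simply calls its generated builder API.
suspend fun enableShowCompleted(dataStore: DataStore<UserPreferences>) {
    dataStore.updateData { current ->
        current.toBuilder()
            .setShowCompleted(true)
            .build()
    }
}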
I'm very familiar with the MVVM architectural pattern in Android. To take it further, I'm doing my next project following clean code principles (SOLID). I have separated the entire app into three modules: 1) App (presentation + framework), 2) Data, 3) Domain. My doubt is whether I can keep library dependencies (e.g. Firebase) in the Data module or not. Right now, I'm using interfaces to access app-related things like shared preferences, location fetchers, Retrofit, etc.
I need to receive values like AuthResult from the Data module. For that I need to add the Firebase dependencies in the Data module's Gradle file. I think that will violate the "higher-level modules should not depend on lower-level modules" rule.
Can anyone clarify this for me?
After going through several articles on MVVM + Clean Code, I came to the conclusion that I cannot use any dependencies related to the Android framework inside either the domain or the data module. Otherwise it would violate the Dependency Inversion principle of SOLID.
Dependency Inversion principle
High-level modules should not depend on low-level modules; both should depend on abstractions.
In English -> you cannot directly access framework-related components like the database, GPS, Retrofit, etc. from the data or domain layers. They should not care about those things. They should be totally independent of Android-related components. We can use interfaces to satisfy the rule of abstraction.
Therefore, my data module and domain module contain only language-level dependencies. Whatever Android-framework-related data I want, I acquire by implementing interfaces.
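A minimal sketch of that approach, with made-up names (the two parts live in different Gradle modules):

import android.content.SharedPreferences

// In the pure Kotlin/Java domain or data module (no Android imports needed there):
interface KeyValueStorage {
    fun getString(key: String): String?
    fun putString(key: String, value: String)
}

// In the App (presentation + framework) module, which is allowed to depend on Android:
class SharedPreferencesStorage(private val prefs: SharedPreferences) : KeyValueStorage {
    override fun getString(key: String): String? = prefs.getString(key, null)
    override fun putString(key: String, value: String) {
        prefs.edit().putString(key, value).apply()
    }
}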
In Clean Architecture, a Repository contains a Remote (Retrofit) and a Local (Room) data source. I see that Remote can be a pure Kotlin module. However, because Room needs to access the Android Context, Local is an Android module.
So, must the Repository be an Android module because of the Local module? And if so, do you know of any abstraction to avoid Context in the Local module and make it pure Kotlin?
The distinction isn't about the programming language. The deciding factor is whether the module relies on any components from Android, such as Context, to operate.
I have a module written in Kotlin that is a java-library. This library contains my "domain" logic and does not contain any Android components.
In your case, because you're using Room, yes this module will need to be a com.android.library module.
I don't think there is a way around this. You could, of course, split your module into two: one for Retrofit (data-api) and one for Room (data-local).
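A small sketch of how that split could look, with hypothetical names; the repository stays pure Kotlin because it only sees interfaces, while the Room-backed implementation lives in the Android data-local module:

// In the pure Kotlin module:
data class Note(val id: Long, val text: String)

interface RemoteNoteDataSource {       // implemented with Retrofit in data-api
    suspend fun fetchNotes(): List<Note>
}

interface LocalNoteDataSource {        // implemented with Room in data-local (Android module)
    suspend fun readNotes(): List<Note>
    suspend fun writeNotes(notes: List<Note>)
}

class NoteRepository(
    private val remote: RemoteNoteDataSource,
    private val local: LocalNoteDataSource
) {
    suspend fun notes(): List<Note> =
        local.readNotes().ifEmpty {
            remote.fetchNotes().also { local.writeNotes(it) }
        }
}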
Here's the link.
It's good to see the open source at the link above.
Dagger is based on the Java Specification Request (JSR) 330. It uses code generation and is based on annotations. The generated code is relatively easy to read and debug.
1) Dagger provides simplified access to shared instances
- That means if we declare the @Inject annotation in our code, we can get a reference to that dependency everywhere in our project.
2) Easy configuration of complex dependencies
Dagger follows the implicit order of dependencies and generates the objects for us, so both the dependencies and the generated code are easy to understand and trace, and we can cut a large amount of boilerplate code.
Normally we would have to obtain references by hand and pass them to other objects; with Dagger we can focus on building the modules rather than on the order in which they need to be created.
3) Easier unit and integration testing. Because the dependency graph is created for us, it's easy to swap out modules, e.g. to return fake network responses.
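To illustrate the three points with made-up names: @Inject-annotated constructors give shared access to instances, a @Module centralizes configuration, and a test can assemble the graph with a fake module to swap out the network-facing dependency:

import dagger.Binds
import dagger.Component
import dagger.Module
import javax.inject.Inject
import javax.inject.Singleton

interface QuoteService {
    fun latestQuote(): String
}

class NetworkQuoteService @Inject constructor() : QuoteService {
    override fun latestQuote(): String = "loaded from network" // real HTTP call omitted
}

class FakeQuoteService @Inject constructor() : QuoteService {
    override fun latestQuote(): String = "canned test response"
}

@Module
abstract class ProductionModule {
    @Binds abstract fun bindQuoteService(impl: NetworkQuoteService): QuoteService
}

@Module
abstract class TestModule {
    @Binds abstract fun bindQuoteService(impl: FakeQuoteService): QuoteService
}

class QuotePresenter @Inject constructor(val service: QuoteService)

@Singleton
@Component(modules = [ProductionModule::class])
interface AppComponent {
    fun quotePresenter(): QuotePresenter
}

// An integration-test component declared with TestModule instead of ProductionModule
// yields a QuotePresenter backed by FakeQuoteService.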