Hoping someone can explain why data binding per module seems not to work (the binding returns null) when specific modules are declared as runtimeOnly rather than as implementation.
e.g.
Say I've got some feature modules that I want to include or exclude, similar to dynamic feature modules, except I'm not using those right now for reasons I won't go into; this is more of an experiment. The idea was to have multiple components detached from the main application and made available only at runtime, in other words virtually no coupling between the app and any of the add-on features. This works fine until I add data binding into the mix, at which point the app crashes with a *Binding cannot be null error.
The only way I've gotten this setup to work is by switching back from runtimeOnly to implementation. From what I know so far, the difference between runtimeOnly and implementation is as described on the Gradle website.
So my question is: why does this happen? Does enabling data binding in a module require that module to be declared with implementation? Or perhaps I'm doing it all wrong and have misunderstood the purpose of runtimeOnly.
Thank you in advance, and apologies if the question is not comprehensive enough.
implementation: the configuration we use most of the time. It hides a module's internal dependencies from its consumers to avoid accidental use of transitive dependencies, which also means faster compilation and less recompilation.
runtimeOnly: used when we want to change or swap the behaviour of a library at runtime (in the final build).
In the runtimeOnly case you typically need two dependencies: one that gives you the API to compile against, and one that supplies the implementation actually used at runtime.
Example of runtimeOnly:
SLF4J is one of the best examples of runtimeOnly: you declare slf4j-api with the implementation configuration, and an implementation of that API (such as slf4j-log4j12 or logback-classic) with the runtimeOnly configuration.
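As a rough illustration (Gradle Kotlin DSL, version numbers picked arbitrarily), the dependencies block might look like this:

    dependencies {
        // compile only against the logging facade
        implementation("org.slf4j:slf4j-api:1.7.36")
        // ship a concrete backend; application code never references its classes directly
        runtimeOnly("ch.qos.logback:logback-classic:1.2.11")
    }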
I have created a post with an in-depth explanation of each configuration, along with a working example and source code:
https://medium.com/#gauraw.negi/how-gradle-dependency-configurations-work-underhood-e934906752e5
Related
Square Inc. presented its internal modular architecture at Droidcon SF'19:
https://www.droidcon.com/media-detail?video=380843878
However, I'm a bit confused by some of the bullet points. Could you please help me?
Why do they actually need :wiring modules? I find that they add complexity:
you get an extra Gradle module for each new feature
you have to perform a sort of global injection into your Fragments somewhere in :app, because Fragments defined in :impl modules cannot access their Dagger component, which is defined in the :impl-wiring module; :impl doesn't depend on :impl-wiring, because the dependency points the other way.
you cannot have Android Dynamic Feature modules, because they need to know about their Dagger component in order to inject their Fragments, and there is no way to do that injection from the :app module, which is the base module for Dynamic Features.
So why have :wiring modules at all?
One can merge :impl with :impl-wiring, or :fake with :fake-wiring, to eliminate all the issues mentioned above. And in :demo-apps one could then depend directly on either :impl or :fake, not on :impl-wiring (or :fake-wiring).
These modules exist to separate things even further. With them you create an abstraction over which DI framework you use (Koin, Dagger) and how you use it. If the project is large, it makes sense to do it.
Currently I use the following flow of dependencies between modules:
WARNING: check the arrow directions carefully.
:feature-A:open <- :feature-A:impl -> :feature-A:impl-wiring
:feature-A:impl-wiring -> :feature-A:impl, :feature-A:open
:app -> :feature-A:open, :feature-A:impl-wiring
I'm still not sure whether :app should depend on both :open and :impl-wiring, or whether it should depend only on :impl-wiring and pick up :open transitively from it.
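As a rough sketch of that flow in Gradle Kotlin DSL (module names taken from the diagram above; the api vs implementation choice for re-exporting :open is my own assumption):

    // :feature-A:impl-wiring/build.gradle.kts
    dependencies {
        api(project(":feature-A:open"))             // re-export the public surface
        implementation(project(":feature-A:impl"))  // wire in the concrete implementation
    }

    // :app/build.gradle.kts
    dependencies {
        implementation(project(":feature-A:open"))
        implementation(project(":feature-A:impl-wiring"))
    }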
Eventually, I came up with the following solution:
Each feature consists of the following Gradle modules:
api
impl and fake
data:api
data:impl1 ... data:implN and data:fake
data:wiring
ui
demo
So, here api, impl and fake are as usual, but I have my data layers separated. I realized that I sometimes need several different implementations of the data layer; for example, if I develop a stock-charts app, I could rely on the Finnhub Open API or the MBOUM API, or provide a fake implementation.
Thus I have data:api and data:implX. data:api defines the FeatureRepository interface (one or many) and each data:implX provides an actual implementation of it. To bind interface and implementation I use data:wiring, which defines the Dagger modules and component(s). In addition, I keep the same package names within each data:implX module so that the data:wiring module only has to be written once. To replace one implementation with another, I just change a single line in data:wiring/build.gradle from something like:
implementation project(":data:implA")
to
implementation project(":data:implB")
Also, to resolve the confusion mentioned in my original question, I introduce a ui module, which contains some Views of a particular feature. Fragments go in demo (a standalone app to test the feature) or in ui; they refer to ViewModels whose bindings are constructor-injected from the feature's Dagger component. The UI and the library are separated here: the Fragment instantiates a dedicated Dagger component that uses component dependencies to refer to the feature library's bindings, such as an interactor or repository.
So, to wrap up: separating the UI from the business-logic implementation (a "library") for each feature makes it possible to solve the issue. The feature's api declares an entry point to its functionality as a library, and that entry point is made globally accessible via Dagger multibindings from :app, so it can then be used in any :demo, :ui or :dynamic-feature module.
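A hedged sketch of that component-dependencies setup, with all names hypothetical: the feature library exposes its bindings through a small interface, and the UI side builds its own Dagger component on top of it.

    import dagger.Component
    import javax.inject.Inject

    // Hypothetical binding exposed by the feature's library graph.
    interface FeatureInteractor

    // What the library exposes to UI consumers (provision methods only).
    interface FeatureLibraryDeps {
        fun interactor(): FeatureInteractor
    }

    // Built by the Fragment (or its ViewModel factory) in :ui or :demo.
    @Component(dependencies = [FeatureLibraryDeps::class])
    interface FeatureUiComponent {
        fun viewModelFactory(): FeatureViewModelFactory

        @Component.Factory
        interface Factory {
            fun create(deps: FeatureLibraryDeps): FeatureUiComponent
        }
    }

    // Constructor-injected from the feature's bindings, as described above.
    class FeatureViewModelFactory @Inject constructor(
        val interactor: FeatureInteractor
    )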
I have an Android project that has Glide v4 as one of its dependencies.
The project has another dependency, let's call it dependency A, which depends on Glide v3 instead.
I don't know if it matters, but dependency A can only be included as an AAR.
So this is part of my build.gradle:
implementation(name: 'dependency_a', ext: 'aar')
implementation ("com.github.bumptech.glide:glide:4.7.1") {
exclude group: "com.android.support"
}
annotationProcessor 'com.github.bumptech.glide:compiler:4.7.1'
The app compiles, but when code in dependency A that uses Glide v3 runs:
Glide.with(context).load(imageUrl).asBitmap().into(new SimpleTarget<Bitmap>() {...});
The app crashes with this message:
java.lang.NoSuchMethodError: No virtual method load(Ljava/lang/String;)Lcom/bumptech/glide/DrawableTypeRequest; in class Lcom/bumptech/glide/RequestManager; or its super classes (declaration of 'com.bumptech.glide.RequestManager' appears in /data/app/{my.package.name}}-LItMzBkBqXw3lyYYdKp-SA==/base.apk:classes15.dex)
I am looking for a way to keep Glide v3 for dependency A, but still use Glide v4 for my app and its other dependencies.
Is it even possible?
Why don't I simply use Glide v3 for my app as well?
This is because another dependency B needs me to use Glide v4.
Gradle's dependency resolution is about choosing one version when multiple alternatives are available or required.
Ultimately there can only be one Class for a given fully qualified class name in a given ClassLoader, so the possibilities are:
Change the package name. For example Spongy Castle moved from org.bouncycastle.* to org.spongycastle.* to avoid conflicts with the platform's version.
Use multiple class loaders. I believe Android supports custom class loaders, but this would probably involve quite a bit of work with subtle pitfalls.
I think that unfortunately, none of this is a practical solution in your case.
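For completeness, here is what the package-renaming route can look like in principle, assuming dependency A were a plain JAR you could rebuild yourself. This uses the third-party Shadow plugin (Gradle Kotlin DSL); it is a sketch rather than a drop-in fix, because it doesn't apply directly to an AAR, dependency A's own bytecode would need the same rewrite to reference the relocated classes, and relocation can break libraries that rely on reflection or resources.

    // build.gradle.kts of a small relocation module (hypothetical setup)
    plugins {
        id("java-library")
        id("com.github.johnrengelman.shadow") version "7.1.2"
    }

    dependencies {
        implementation("com.github.bumptech.glide:glide:3.8.0")
    }

    tasks.shadowJar {
        // Move Glide v3 into its own namespace so it no longer collides with v4.
        relocate("com.bumptech.glide", "legacy.com.bumptech.glide")
    }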
Gradle resolves version conflicts by picking the highest version of a module, as mentioned here. So if you have v3 and v4 as your dependencies, v4 will be used.
You are getting the crash because there are breaking changes between Glide v3 and v4; dependency A is calling v3 methods that no longer exist in v4.
Solution 1 - Dependency B has to use v3 to avoid conflicts. Upgrade to v4 when Dependency A has upgraded to v4.
Solution 2 - If dependency A can function normally without Glide, then Glide can be excluded from dependency A.
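If solution 2 applies, the exclusion could look roughly like this (Gradle Kotlin DSL, hypothetical coordinates, and assuming dependency A is resolved from a repository with POM metadata rather than as a bare local AAR, since a flat-dir AAR brings no transitive dependencies to exclude):

    dependencies {
        implementation("com.example:dependency-a:1.0") {
            // drop dependency A's own Glide v3 so only v4 ends up on the classpath
            exclude(group = "com.github.bumptech.glide")
        }
        implementation("com.github.bumptech.glide:glide:4.7.1")
    }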
I guess you don't have any option but to use the updated one, which is Glide v4, because as you already state you are also using Glide v4. It's generally good to use up-to-date dependencies/libraries anyway, because new versions bring performance improvements and bug fixes.
Could your app be split into multiple binaries? Could one binary be linked with dependency A and Glide v3, and the other binary be linked with dependency B and Glide v4?
This would only make sense if the parts of the app that depend on A and Glide v3, and the parts that depend on B and Glide v4, could be cleanly separated, so that each binary serves a particular, well-defined purpose. This approach would be conceptually similar to object-oriented design, or the Unix philosophy of combining single-purpose tools to facilitate complex workflows, or the COM architecture on Windows.
When evaluating whether or not such a split is feasible, you'll want to consider user workflows and data stores, as well as characteristics that are specific to your app. If users would move sequentially from binary A to binary B, for example, if binary A was part of the startup sequence, and users never return to binary A, that suggests that a split may be possible. On the other hand, if users would bounce back and forth between binary A and binary B, that could make a split much more difficult. Likewise, if data is stored in a database, and each binary can access the data independently, a split may be feasible. Conversely, if the data is primarily stored in process, and/or there is a lot of data to stream between the processes, then a split might not be feasible.
When working with multiple binaries in this fashion, they likely need to communicate using some sort of API, for example, the command line, pipes, file sockets, network sockets, or even simply a shared external data store that both binaries access asynchronously.
In general, the more you can limit interactions between the binaries, the better. You may be able to create a simple wrapper around dependency A and Glide v3, and call that wrapper from the remainder of your code.
Finally, if you evaluate and discover it may be feasible to split your app into multiple binaries, before proceeding, also consider the relative effort of splitting your app versus specifying new dependencies for A and B.
Overriding one of the package names could be a possible solution.
Try creating two AAR libraries, one for each part, each using a different Glide version.
I will be creating a library (.aar) at my current workplace. It involves a lot of complicated business processes and will definitely need a lot of automated tests, which is why I was planning on using Dagger in my library.
But as it is a library, it needs to be as small as possible and have as few dependencies as possible. Not to mention that Dagger tends to bloat whatever it is used with.
So I am at a crossroads and unable to decide what my approach should be.
Can someone please help me come to a conclusion?
There isn't any problem with using Dagger in a library if you use Dagger only internally; that is, as long as you don't expect the user of the library to provide dependencies for you from outside the library.
Dagger makes the code more complicated, but only for you, not for the consumer; assume the person who uses the library knows nothing about DI or Dagger.
I myself have a project that includes a library using Dagger, and it even needs some dependencies to be provided from outside the library, but since the whole project is mine and I'm not going to publish the library, everything is fine.
So it depends on how you are going to use it; if you want to give this library to others, I suggest you don't expect them to set up Dagger and provide dependencies for you.
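A minimal sketch of what "Dagger stays inside the library" can look like (all names hypothetical): the consumer only sees a plain entry-point class, and the graph is built internally, so no Dagger types leak into the public API.

    import dagger.Component
    import javax.inject.Inject
    import javax.inject.Singleton

    // Internal business logic, constructed by Dagger.
    class PaymentProcessor @Inject constructor() {
        fun process(amountCents: Long): Boolean = amountCents > 0 // placeholder logic
    }

    @Singleton
    @Component
    internal interface LibraryComponent {
        fun paymentProcessor(): PaymentProcessor
    }

    // The only type a consumer ever touches.
    class MyLibrary private constructor(private val component: LibraryComponent) {

        fun processPayment(amountCents: Long): Boolean =
            component.paymentProcessor().process(amountCents)

        companion object {
            // DaggerLibraryComponent is generated by the Dagger compiler at build time.
            fun create(): MyLibrary = MyLibrary(DaggerLibraryComponent.create())
        }
    }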
I'm using the Awareness API and wondering what the best way is to test it during development on a device (not an emulator)?
I want to emulate location / activity / weather changes, for example, to test its accuracy.
How can I achieve it?
Thank you!
Your question is quite general, so I will give a general answer. Basically, you shouldn't test external APIs and libraries themselves, so this thread could have been named differently. What you actually want to do is emulate specific behaviour of the API.
You can do it in the following way:
use a Dependency Injection library like Dagger and hide the API implementation behind an interface
add the real, library-backed implementation of the interface
add another stub implementation simulating the behaviour you want to achieve
in the Gradle configuration, assign the proper implementation of the interface, or use the flavors functionality to configure build variants (e.g. awareness-testing, production, etc.)
Remember to exclude the stub class from the production build.
I'm not sure about the exact implementation, but I'd go, more or less, this way.
Moreover, you can test specific functionality via unit tests.
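A rough sketch of that interface-plus-stub idea (all names invented; the real implementation would wrap the actual Awareness API client, and the binding would be chosen per build variant or flavor):

    // Abstraction the rest of the app depends on.
    interface AwarenessProvider {
        fun currentWeather(callback: (Weather) -> Unit)
    }

    data class Weather(val temperatureCelsius: Double, val condition: String)

    // Production implementation: would delegate to the Awareness API client.
    class GoogleAwarenessProvider : AwarenessProvider {
        override fun currentWeather(callback: (Weather) -> Unit) {
            TODO("call the Awareness API here")
        }
    }

    // Testing implementation: scripts weather changes so behaviour is deterministic on a device.
    class StubAwarenessProvider(private val scripted: List<Weather>) : AwarenessProvider {
        private var index = 0
        override fun currentWeather(callback: (Weather) -> Unit) {
            callback(scripted[index++ % scripted.size])
        }
    }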
Every unconsidered use of an external library can result in a Gradle error about reaching the 65k method limit. So I'm wondering what the best solution to prevent this is; I would rather not consider custom ProGuard settings.
What comes to mind (assuming we are using open-source libraries) is just downloading the code and adding it to our project manually. Then we can delete unused classes and methods. But that means we are playing ProGuard's role ourselves, and it's time-consuming.
Question
But what is the difference between using Gradle to fetch JARs and adding the code manually? Is there any difference in performance?
I will be thankful for any best practices for creating a project with many libraries.
Note: I would like to set custom ProGuard settings aside for now, because ProGuard often produces warnings, and suppressing them with e.g. -dontwarn feels a bit wrong to me. I'm also a bit wary of MultiDex support, which I thought was not recommended.
One thing you can do in Gradle is be very specific about which modules you include.
For example:
compile 'com.google.android.gms:play-services:8.4.0'
will include a lot more stuff than just...
compile 'com.google.android.gms:play-services-location:8.4.0'
Generally it's best to let Gradle manage external dependencies. The biggest reason is that you can rest assured you're getting a snapshot of the code tied to a specific version. If you manually import code, it's very easy to lose track of which version you're working with. A big risk of modifying (or selectively including) code from a third party is that when the third party updates their code, perhaps to fix a major bug caused by a new version of Android, those changes may break your manual tweaks. Now you have a more complex problem on your hands than if you had just included it as a Gradle dependency.