What do annotations do exactly in Android at compile time?

@SuppressWarnings("unused")
@Override
@SuppressLint({ "InflateParams", "SimpleDateFormat" })
I don't get why we need to declare annotations.

We want to facilitate the writing and the maintenance of Android applications.
We believe that simple code with clear intents is the best way to achieve those goals.
Robert C. Martin wrote:
The ratio of time spent reading [code] versus writing is well over 10 to 1 [therefore] making it easy to read makes it easier to write.
While we all enjoy developing Android applications, we often wonder: Why do we always need to write the same code over and over? Why are our apps harder and harder to maintain? Context and Activity god objects, complexity of juggling with threads, hard to discover API, loads of anonymous listener classes, tons of unneeded casts... can't we improve that?
How?
Using Java annotations, developers can show their intent and let AndroidAnnotations generate the plumbing code at compile time.
Features
Dependency injection: inject views, extras, system services, resources, ...
Simplified threading model: annotate your methods so that they execute on the UI thread or on a background thread.
Event binding: annotate methods to handle events on views, no more ugly anonymous listener classes!
REST client: create a client interface, AndroidAnnotations generates the implementation.
No magic: As AndroidAnnotations generates subclasses at compile time, you can check the code to see how it works.
AndroidAnnotations provides those good things and even more in less than 50 KB, without any runtime performance impact!
Is your Android code easy to write, read, and maintain?
Look at that:
@EActivity(R.layout.translate) // Sets content view to R.layout.translate
public class TranslateActivity extends Activity {

    @ViewById // Injects R.id.textInput
    EditText textInput;

    @ViewById(R.id.myTextView) // Injects R.id.myTextView
    TextView result;

    @AnimationRes // Injects android.R.anim.fade_in
    Animation fadeIn;

    @Click // When R.id.doTranslate button is clicked
    void doTranslate() {
        translateInBackground(textInput.getText().toString());
    }

    @Background // Executed in a background thread
    void translateInBackground(String textToTranslate) {
        String translatedText = callGoogleTranslate(textToTranslate);
        showResult(translatedText);
    }

    @UiThread // Executed in the UI thread
    void showResult(String translatedText) {
        result.setText(translatedText);
        result.startAnimation(fadeIn);
    }

    // [...]
}

Java annotations attach conditions to your code that the compiler can check. Consider a scenario where we think we are overriding a method from another class, and we implement code that (we think) overrides that method. But if we somehow get the override wrong (e.g. we misspell the name: in the superclass it was "mMethodOverridden" and we typed "mMethodoverridden"), the method will still compile and execute, but it will not do what it should.
So @Override is our way of telling Java to check that we are doing the right thing. If we annotate a method with @Override and it does not override anything, the compiler will give us an error.
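A tiny example of the compiler catching this (the class and method names are invented for illustration):
class Animal {
    void makeSound() {
        System.out.println("...");
    }
}

class Dog extends Animal {
    @Override
    void makeSonud() { // typo in the name: compile-time error,
        // "method does not override or implement a method from a supertype"
        System.out.println("Woof");
    }
}
Without @Override, the typo would silently create a second, unrelated method.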
Other annotations work in a very similar way.
For more information, read the official docs: Lesson: Annotations.

Annotations are basically syntactic metadata that can be added to Java source code. Classes, methods, variables, parameters and packages may be annotated.
Metadata is data about data
Why Were Annotations Introduced?
Prior to annotations (and even after), XML was used extensively for metadata, and at some point a particular set of application developers and architects felt that XML maintenance was getting troublesome. They wanted something that could be coupled closely with code, instead of XML, which is very loosely coupled from (in some cases almost separate from) the code. If you google "XML vs. annotations", you will find a lot of interesting debates. The interesting point is that XML configurations were introduced precisely to separate configuration from code. The last two statements might make you suspect that these two form a cycle, but both have their pros and cons.
For example:
@Override
It instructs the compiler to check parent classes for a matching method.

Related

Kotlin Extension functions to split big classes

Recently at my company, a debate started after reviewing a different approach to writing heavy-duty classes.
A big Java class holding component-specific logic (no standard OOP principles made sense) had to be rewritten in Kotlin. The proposed solution was to split the logic into categories, and the categories into separate files, with internal extension functions on the main class.
Example:
Main.kt
class BigClass {
    // internal fields exposed to the extension functions in different files
    // Some main logic here
}
BusinessLogic.kt
internal fun BigClass.handleBussinessCase() {
    // Complex business logic handled here accessing the exposed internal fields from BigClass
}
What are your thoughts on this? I haven't seen it used anywhere, maybe for a good reason, but the alternative of thousand-line classes seems worse.
You have to consider that an extension function is nothing more than a function with an implicit first parameter which is referenced with this.
So in your case you'd have something like:
internal fun handleBussinessCase(ref: BigClass)
which would translate to Java as:
static void handleBussinessCase(BigClass ref)
But this is essentially a delegate pattern, which could be encapsulated much more cleanly in Kotlin as well.
Since the properties have to be internal anyhow, you could just inject these as a data class into smaller use-cases. If you define an interface around these (which would make the properties public though), you could create a delegate pattern with it and still reference each property with this in your implementation.
Here are some thoughts on making extension functions for the class:
It will be a utility function that operates on the object you're extending; it will not be a member function, meaning it will have access only to public methods and properties;
If you're planning to use the class being extended in unit tests, these methods (extensions) will be harder to mock;
Most likely they won't behave as you expect when used with inherited objects.
Maybe I missed something, so please read more about extensions here.

strings.xml on MVP and clean architecture

I'm developing an android app implementing MVP and clean architecture. I have the following scenario:
One core module with presenters and view interfaces,...
One domain module with repositories, data sources,..
App module with the core implementation (so the Fragment/Activities).
Currently the strings.xml file is in the app module, but I'm wondering whether it should be in a commons module or not. The problem is that, sometimes, the presenter must set the text on the view, so the presenter would need access to strings.xml. I've thought of two possible solutions:
1) Create a TextHelper interface in the core module that is implemented in the app module and injected into the presenter, so the presenter uses this helper to get the strings it requires. (This is the solution I have implemented.)
2) Move the strings.xml file to a common module so the file can be accessed from the core module. But this solution has a problem: the presenter doesn't have a context.
What do you think? What is the best approach?
Thanks in advance
If your view has nested if/elses related to strings, then they should probably be unit-tested. Therefore, that logic should stay in presenters or use-cases, where it can be tested more quickly.
Your question is about how to retrieve the actual strings, given that they reside in the "outer layers" of the Clean Architecture scheme, i.e. in the Context object. IMHO your TextHelper is the right approach, as it allows you to inject a mock when writing unit tests: you're interested in how the strings are processed, rather than how the strings actually look. I'm trying a very similar approach and calling it StringsRepository.
A point of uncertainty is what the repository API should look like:
1) A single method like getString(@StringRes int stringResId, Object... formatArgs) that simply wraps Context.getString(): very simple to implement, but it makes the presenters depend on your R.string class, which in turn requires strings.xml to be in the same module as your code under test;
2) One method per string, with optional arguments, each one containing the reference to the appropriate string ID. This solution allows for the best abstraction, but may become big (both the interface and the implementation...), and many domain classes may depend upon it. Handle with care.
3) Like (2), but with several classes, one for each part of your app. Each class may have a base class similar to (1), but with that method given protected visibility.
The best options for your case would be (2) or (3), but your mileage may vary.
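As an illustration, here is a minimal Java sketch of option (2); the StringsRepository name comes from the answer above, while the method names and string IDs are assumptions for the example:
// Core module: presenters depend only on this interface, no Context needed.
public interface StringsRepository {
    String translationError();
    String welcomeMessage(String userName);
}

// App module: the only place that touches Context and R.string.
public class AndroidStringsRepository implements StringsRepository {
    private final Context context;

    public AndroidStringsRepository(Context context) {
        this.context = context;
    }

    @Override
    public String translationError() {
        return context.getString(R.string.translation_error);
    }

    @Override
    public String welcomeMessage(String userName) {
        return context.getString(R.string.welcome_message, userName);
    }
}
In unit tests of the presenter you can then pass a hand-written fake StringsRepository that returns fixed strings.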
You can use the Application class to get the context anywhere in the app.
public class MVPApplication extends Application {

    private static Context context;

    public static Context getContext() {
        return context;
    }

    @Override
    public void onCreate() {
        super.onCreate();
        context = getApplicationContext();
    }
}

Should presenters (MVP) be injected (Dagger 2) into views in Android?

In the context of developing an Android app, should I use presenters directly in views using 'new', or would it be better to inject them into the view?
Pros/cons for not using injected presenters:
Faster development time, without having to write components and modules.
Presenters are tightly coupled with the views. I don't see this as much of a problem, as most of the time presenters are not shared across multiple views (i.e. a single view per presenter).
Might be a problem for testing, as with dependency injection mock implementations of the presenters can be provided (not sure how useful this is, need more insight on this).
You're right. Using injection will only help you in the long run. You could either spend 5 minutes setting up your module / component, or you could be just coding.
As long as you don't do proper testing, there is not much difference if your presenter looks like the following:
mPresenter = new Presenter();
Assuming you use constructor injection properly, after creating your component, you save some lines as compared to
@Inject Presenter mPresenter;

// onCreate or some other place
{
    getComponent().inject(this); /* getComponent() also 4-5 lines */
}
Again, if you use proper constructor injection there is a good chance you won't need much module code. Just creating the component will do.
But you saved some minutes, and once you want to do testing it is just some easy refactoring, which can quickly be done.
So why Dagger?
This was assuming your presenter has no dependencies on other objects. But what if it does?
SharedPreferences preferences = getPreferences();
MyStorage storage = new MyStorage(preferences);
mPresenter = new Presenter(storage);
Using something to store your data is probably a good use case. While you just added some more logic about object creation to your activity, the Dagger implementation would still look the same.
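As a rough sketch of what that Dagger implementation could look like, assuming Presenter and MyStorage from the snippet above get @Inject constructors (everything else here is illustrative, not code from the answer):
// MyStorage.java
public class MyStorage {
    private final SharedPreferences preferences;

    @Inject // javax.inject.Inject: Dagger now knows how to build MyStorage itself
    public MyStorage(SharedPreferences preferences) {
        this.preferences = preferences;
    }
}

// Presenter.java
public class Presenter {
    private final MyStorage storage;

    @Inject // Dagger builds the whole chain: preferences -> storage -> presenter
    public Presenter(MyStorage storage) {
        this.storage = storage;
    }
}

// Only SharedPreferences itself still needs a @Provides method in a module,
// because Dagger cannot construct Android framework classes on its own.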
Even more?
Now let's assume you want to share this storage from above between activities. You have to add some logic to your Application, or some other place where you can create a singleton to use throughout your app.
This will probably not be your only singleton, though, and you will start to clutter up your Application as well. Don't get me started on managing those objects' lifecycles, e.g. a user logging in or out: make sure to clear that cached data!
Again, the Dagger implementation still looks the same. If some more logic is needed, it is well placed in the modules and abstracted with component dependencies.
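For comparison, a hedged sketch of the Dagger side of that, assuming the MyStorage and Presenter classes from above (AppModule, AppComponent and the preferences file name are invented for the example):
// AppModule.java
@Module
public class AppModule {
    private final Application app;

    public AppModule(Application app) {
        this.app = app;
    }

    @Provides
    @Singleton // one SharedPreferences instance for the whole app
    SharedPreferences provideSharedPreferences() {
        return app.getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
    }
}

// AppComponent.java
// Annotating MyStorage itself with @Singleton (it already has an @Inject constructor)
// is enough for Dagger to share a single instance through this component.
@Singleton
@Component(modules = AppModule.class)
public interface AppComponent {
    Presenter presenter();
}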
Once you start thinking "I could just create classes that handle object construction and injection", you know that you could have just used Dagger in the first place ;)
I also wrote a blog post about Dagger basics, including how constructor injection works, which many beginners are not using properly for some reason.

How does Dagger 2 make testing easier on Android?

One of the best advantages of using DI is that it makes testing a lot easier ("What is dependency injection?" backs this too). Most of the DI frameworks I've worked with in other programming languages (MEF on .NET, Typhoon on Obj-C/Swift, Laravel's IoC Container on PHP, and some others) allow the developer to register dependencies at a single entry point for each component, thus preventing the "creation" of dependencies inside the object itself.
After I read the Dagger 2 documentation, the whole "no reflection" business sounds great, but I fail to see how it makes testing easier, as objects are still kind of creating their own dependencies.
For instance, in the CoffeeMaker example:
public class CoffeeApp {
    public static void main(String[] args) {
        // THIS LINE
        CoffeeShop coffeeShop = DaggerCoffeeShop.create();
        coffeeShop.maker().brew();
    }
}
Even though you're not explicitly calling new, you still have to create your dependency.
Now for a more detailed example, let's look at an Android example.
If you open up the DemoActivity class, you will notice the onCreate implementation goes like this:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Perform injection so that when this call returns all dependencies will be available for use.
    ((DemoApplication) getApplication()).component().inject(this);
}
You can clearly see there is no decoupling from the DI component to the actual code. In summary, you'd need to mock/stub ((DemoApplication) getApplication()).component().inject(this); in a test case (if that's even possible).
Up to this point, I am aware Dagger 2 is pretty popular, so there has got to be something I am not seeing. So how does Dagger 2 make testing classes easier? How would I mock, let's say, a network service class that my Activity depends on? I would like the answer to be as simple as possible, as I'm only interested in testing.
Dagger 2 doesn't make testing easier
...beyond encouraging you to inject dependencies in the first place, which naturally makes individual classes more testable.
The last I heard, the Dagger 2 team were still considering potential approaches to improving support for testing - though whatever discussions are going on, they don't seem to be very public.
So how do I test now?
You're correct to point out that classes which want to make explicit use of a Component have a dependency on it. So... inject that dependency! You'll have to inject the Component 'by hand', but that shouldn't be too much trouble.
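As a rough sketch of what injecting the Component 'by hand' could look like (this CoffeeApp shape is an assumption for illustration, not code from the Dagger docs):
public class CoffeeApp {
    private final CoffeeShop coffeeShop;

    // The Component is passed in instead of being created internally,
    // so a test can hand over a component built from mock modules.
    public CoffeeApp(CoffeeShop coffeeShop) {
        this.coffeeShop = coffeeShop;
    }

    public void start() {
        coffeeShop.maker().brew();
    }
}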
The official way
Currently, the officially-recommended approach to swapping dependencies for testing is to create a test Component which extends your production one, then have that use custom modules where necessary. Something like this:
public class CoffeeApp {
    public static CoffeeShop sCoffeeShop;

    public static void main(String[] args) {
        if (sCoffeeShop == null) {
            sCoffeeShop = DaggerCoffeeShop.create();
        }
        sCoffeeShop.maker().brew();
    }
}
// Then, in your test code you inject your test Component.
CoffeeApp.sCoffeeShop = DaggerTestCoffeeShop.create();
This approach works well for the things you always want to replace when you are running tests - e.g. Networking code where you want to run against a mock server instead, or IdlingResource implementations of things for running Espresso tests.
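A sketch of what such a test Component might look like (CoffeeModule and MockNetworkModule are assumed names; RequestFactory/MockRequestFactory mirror the snippet below):
// Production component.
@Component(modules = {CoffeeModule.class, NetworkModule.class})
public interface CoffeeShop {
    CoffeeMaker maker();
}

// Test component: same API, but built from a mock network module.
@Component(modules = {CoffeeModule.class, MockNetworkModule.class})
public interface TestCoffeeShop extends CoffeeShop {
}

// The swapped-in module providing a fake implementation.
@Module
public class MockNetworkModule {
    @Provides
    RequestFactory provideRequestFactory() {
        return new MockRequestFactory(); // canned responses, no real network
    }
}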
The unofficial way
Unfortunately, the official way can involve a lot of boilerplate code - fine as a one-off, but a real pain if you only want to swap out a single dependency for one particular set of tests.
My favourite hack for this is to simply extend whichever Module has the dependency you want to replace, then override the @Provides method. Like so:
CoffeeApp.sCoffeeShop = DaggerCoffeeShop.builder()
        .networkModule(new NetworkModule() {
            // Do not add any @Provides or @Scope annotations here or you'll get an error from Dagger at compile time.
            @Override
            public RequestFactory provideRequestFactory() {
                return new MockRequestFactory();
            }
        })
        .build();
Check this gist for a full example.
"allows the developer do register dependencies on a single entry point for
each component" - analogues in Dagger 2 are the Modules and Components where you define the dependencies. The advantage is that you don't define the dependencies directly in your component thus decoupling it so later when writing unit tests you may switch the Dagger 2 component with a test one.
"it sounds great the whole "no reflection" business" - the "no reflection" thing is not the "big deal" about dagger. The "big deal" is the full dependency graph validation at compile time. Others DI frameworks don't have this feature and if you fail to define how some dependency is satisfied you will get an error late at runtime. If the error is located in some rarely used codepath your program may look like it is correct but it will fail at some point in the future.
"Even though you're not explicitly calling new, you still have to create your dependency." - well, you always have to somehow initiate dependency injection. Other DI may "hide"/automate this activity but at the end somewhere building of the graph is performed. For dagger 1&2 this is done at app start. For "normal" apps (as you shown in the example) in the main(), For android apps - in the Application class.
"You can clearly see there is no decoupling from the DI component, to the actual code" - Yes, you are 100% correct. That arises from the fact that you don't control directly the lifecycle of the activities, fragments and services in Android, i.e. the OS creates these objects for you and the OS is not aware that you are using DI. You need manually to inject your activities, fragments and services. At first this seem seems awkward but in real life the only problem is that sometimes you may forget to inject your activity in onCreate() and get NPE at runtime.

Dagger - What does 'Separate "do" from the "how"' mean?

I've been watching Jake's slides about Dagger. On slide #29, he says 'Separate "do" from the "how"'. What specifically does that mean? And what are the benefits of it?
https://speakerdeck.com/jakewharton/android-testing?slide=29
If I had to expand the sentence it would be: "The code that needs to 'do' an action should not need to know 'how' that action is fulfilled."
This is a fundamental principle that can be lumped under a lot of different phrases but can most concretely be explained as interfaces and implementations in Java.
When you use a Set in Java you are concerned with the behavior: each element only appears once. Java provides multiple implementations of a Set. Each implementation fulfills the contract of the interface using different techniques: hash buckets, trees, or bits.
Code that interacts with the Set only cares about the behavior, not how it is achieved.
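As a quick, made-up illustration of that point, the method below works identically whether it is handed a HashSet or a TreeSet:
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    // Cares only about Set behaviour (no duplicates), not about how it is achieved.
    static int countDistinct(Set<String> tags) {
        return tags.size();
    }

    public static void main(String[] args) {
        Set<String> hashed = new HashSet<>(Arrays.asList("android", "dagger", "android"));
        Set<String> sorted = new TreeSet<>(hashed);
        System.out.println(countDistinct(hashed)); // 2
        System.out.println(countDistinct(sorted)); // 2
    }
}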
The same semantics can be applied to many different things in your application. My favorite example is something that posts tweets.
interface Tweeter {
    void postTweet(String message);
}
If I'm a class that wants to post tweets, all I care about is the ability to send a tweet.
public class MyClass implements Runnable {
    private final Tweeter tweeter;

    public MyClass(Tweeter tweeter) {
        this.tweeter = tweeter;
    }

    @Override
    public void run() {
        tweeter.postTweet("I'm a Runnable that's being run!");
    }
}
How does the tweet get sent? From the perspective of MyClass, who cares!
In my real application I will of course use a TwitterTweeter (either directly when instantiating MyClass or via dependency injection). However, I now have the ability to pass in a MockTweeter of my own implementation to write a test that asserts that when a MyClass is run() it posts a tweet.
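A minimal sketch of such a test, assuming JUnit 4 (MockTweeter and the test class are invented for illustration):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyClassTest {

    static class MockTweeter implements Tweeter {
        String lastTweet;

        @Override
        public void postTweet(String message) {
            lastTweet = message; // record the tweet instead of hitting Twitter
        }
    }

    @Test
    public void run_postsATweet() {
        MockTweeter tweeter = new MockTweeter();
        new MyClass(tweeter).run();
        assertEquals("I'm a Runnable that's being run!", tweeter.lastTweet);
    }
}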
Things like this might seem obvious, but it's not a rarity to see code that (for this example) needs to post a tweet and knows exactly how to do so.
Before I leave, two important points:
Interfaces and implementations are not the only way to accomplish this; they're just a very effective means of doing so.
Abstraction in this form is icing on a cake. A cake without icing sucks. But a cake with 5-inch-thick icing also sucks. Apply it where it's needed and logical.
