What exactly does the Timber library do?

I heard about Timber and was reading the GitHub README, but it left me quite confused.
Behavior is added through Tree instances. You can install an instance
by calling Timber.plant. Installation of Trees should be done as early
as possible. The onCreate of your application is the most logical
choice.
What behavior?
This is a logger with a small, extensible API which provides utility
on top of Android's normal Log class.
What more does it provide on top of Android's Log?
The DebugTree implementation will automatically figure out from which
class it's being called and use that class name as its tag. Since the
tags vary, it works really well when coupled with a log reader like
Pidcat.
What is DebugTree?
There are no Tree implementations installed by default because every
time you log in production, a puppy dies.
Again, what is a tree implementation? What does it do? And how do I stop killing puppies?
Two easy steps:
Install any Tree instances you want in the onCreate of your
application class.
Call Timber's static methods everywhere throughout your app.
Two easy steps for accomplishing what?
None of this has been explained in the Readme. It's pretty much a description for someone who already knows what it is :/

Problem:
We do not want to print logs in a signed (release) application, as we may sometimes log sensitive information. To avoid this, developers generally wrap every log call in an if condition.
Example:
if (BuildConfig.DEBUG) {
    Log.d(TAG, userName);
}
So every time you want to print a log, you need to write an if condition and a TAG, which most of the time is just the class name.
Timber tackles both problems.
You only need to check the condition once, in your Application class, where you call Timber.plant:
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        if (BuildConfig.DEBUG) {
            Timber.plant(DebugTree())
        }
    }
}
Everywhere else you can just write Timber.d("Message") without any tag or if condition.
If you want a different tag, you can use:
Timber.tag("Tag").d("message");
Edit:
You can also plant your own trees in Timber and do some useful things, such as logging only warnings and errors in release builds, or sending warning and error logs to a server. For example:
import android.util.Log;
import timber.log.Timber;

public class ReleaseTree extends Timber.Tree {
    @Override
    protected void log(int priority, String tag, String message, Throwable t) {
        if (priority == Log.ERROR || priority == Log.WARN) {
            // Send the log to your server / crash reporting here
        }
    }
}
You can plant different trees for different build flavours.
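For instance, here is a minimal sketch of build-type-dependent planting (it assumes the ReleaseTree above; per-flavour trees work the same way, e.g. via flavour source sets or a BuildConfig field):
import android.app.Application;
import timber.log.Timber;

public class MyApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        if (BuildConfig.DEBUG) {
            // Debug builds: verbose logging with automatic class-name tags
            Timber.plant(new Timber.DebugTree());
        } else {
            // Release builds: only forward warnings and errors (see ReleaseTree above)
            Timber.plant(new ReleaseTree());
        }
    }
}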
Check out this article and have a listen to this podcast

Related

How to send multiple commands from a ViewModel to the View using the SingleLiveEvent class

I have a ViewModel in my android app, that has some logic, and the view needs to be adjusted/perform different things depending on the result of that logic. At first I tried to do it exclusively with observers and reacting to the state of the data in the viewmodel, but it was too complicated.
Then I found the concept of commands using the SingleLiveEvent class, and I liked it because it reminds me of the same pattern used with Xamarin and Microsoft's MVVM. One of the (few) good things that working with Xamarin has ;)
Well, the problem in my case is that when more than one command needs to be sent to the view, the view receives only one command. Sometimes the last one, sometimes the first one. For example, a couple of commands that order the view to perform complicated things:
sealed class CustomCommands {
    class CustomCommand1 : CustomCommands()
    class CustomCommand2 : CustomCommands()
    class CustomCommand3 : CustomCommands()
    class CustomCommand4 : CustomCommands()
    class CustomCommand5 : CustomCommands()
}
Then in my viewModel, I have the commands SingleLiveEvent object:
class CustomViewModel ... {
    val commands: SingleLiveEvent<CustomCommands> = SingleLiveEvent()

    private fun doComplicatedThingsOnTheUI() {
        GlobalScope.launch(Dispatchers.IO) {
            if (someConditionsInvolvingRestRequestsAndDatabaseOperations()) {
                commands.postValue(CustomCommands.CustomCommand1())
                commands.postValue(CustomCommands.CustomCommand2())
            } else {
                commands.postValue(CustomCommands.CustomCommand3())
                commands.postValue(CustomCommands.CustomCommand4())
            }
            commands.postValue(CustomCommands.CustomCommand5())
        }
    }
}
And in the Activity/Fragment, I have the observer for the commands, that should react for each command and does the work:
class MainActivity ... {
    viewModel.commands.observe(this, Observer { command ->
        Rlog.d("SingleLiveEvent", "Observer received event: " + command.javaClass.simpleName)
        when (command) {
            is CustomCommands.CustomCommand1 -> doSomething1()
            is CustomCommands.CustomCommand2 -> doSomething2()
        }
    })
}
Well, the problem is that the view normally receives only the last command (Command5). But the behaviour depends on the API level of the Android SDK. On API 16, the view receives only the last command. On API 28, the view normally receives the first and the last command (for example, Command1 and Command5, but not Command2).
Maybe I'm misunderstanding the capabilities of the SingleLiveEvent class, or the whole command idea, but I need a way to let the viewmodel order the view to do some things depending on the state of many objects and variables. The code above is only a sample; the reality is more complicated than that.
I don't want to use callbacks between the viewmodel and the view, because I read somewhere that that breaks the whole MVVM pattern.
Maybe someone has an advice for me. Any help would be welcome.
Thank you in advance.
I think I found a workaround that seems to work (I have only tested it for a couple of hours).
The thing is that I was using "command.postValue(XXX)" because that piece of code runs inside a coroutine, that is, on another thread. Because of that I cannot set command.value directly.
But the fact is that using command.value = Command1() works: the view receives all the commands sent and, just as importantly, in the same order they were sent. Because of that I wrote a little function that switches to the main thread to send the commands to the UI.
I'm not sure if this is correct; I'm new to Kotlin coroutines and I have to admit that I don't understand them very well yet:
private suspend fun sendCommandToView(vararg icommands: CustomCommands) = withContext(Dispatchers.Main) {
    icommands.forEach {
        commands.value = it
    }
}
And then I send the commands
sendCommandToView(CustomCommand1(),CustomCommand2(),CustomCommand5())
This seems to work. I thought the postValue method would work in a similar way, but it does not: if postValue is called several times before the main thread processes the posted task, only the last value is dispatched, so intermediate commands are lost.
Regards.

How to implement Jake Wharton's robot pattern to Espresso UI testing?

Jake Wharton delivered a fascinating talk where he proposes some smart ways to improve our UI tests by abstracting the detail of how we perform UI out of the tests: https://news.realm.io/news/kau-jake-wharton-testing-robots/
An example he gives is a test that looks as follows, where the PaymentRobot object contains the detail of how the payment amount and recipient are entered into the UI. Putting that in one place makes a lot of sense, so when the UI inevitably changes (e.g. renaming a field ID, or switching from a TextEdit to a TextInputLayout), it only needs updating in one place, not a whole series of tests. It also makes the tests much more terse and readable. He proposes using Kotlin to make them even terser. I do not use Kotlin but still want to benefit from this approach.
@Test public void singleFundingSourceSuccess() {
    PaymentRobot payment = new PaymentRobot();

    ResultRobot result = payment
        .amount(42_00)
        .recipient("foo@bar.com")
        .send();

    result.isSuccess();
}
He provides an outline of how the Robot class may be structured, with an explicit isSuccess() response, returning another Robot which is either the next screen, or the state of the current one:
class PaymentRobot {
    PaymentRobot amount(long amount) { ... }
    PaymentRobot recipient(String recipient) { ... }
    ResultRobot send() { ... }
}

class ResultRobot {
    ResultRobot isSuccess() { ... }
}
My questions are:
How does the Robot interface with the Activity/Fragment, and specifically where is it instantiated? I would expect that happens in the test by the runner, but his examples seem to suggest otherwise. The approach looks like it could be very useful, but I do not see how to implement it in practice, either for a single Activity/Fragment, or for a sequence of them.
How can this approach be extended so that the isSuccess() method can handle various scenarios. e.g. if we're testing a Login screen, how can isSuccess() handle various expected results like: authentication success, API network failure, and auth failed (e.g. 403 server response)? Ideally the API would be mocked behind Retrofit, and each result tested with an end to end UI test.
I've not been able to find any examples of implementation beyond Jake's overview talk.
I had totally misunderstood how Espresso works, which led to even more confusion in my mind about how to apply it to the page object pattern. I now see that Espresso does not require any kind of reference to the Activity under test, and just operates within the context of the runner rule. For anyone else struggling, here is a fleshed-out example of applying the robot/page object pattern to a test of the validation on a login screen with a username and password field where we are testing that an error message is shown when either field is empty:
LoginRobot.java (to abstract automation of the login activity)
public class LoginRobot {

    public LoginRobot() {
        onView(withId(R.id.username)).check(matches(isDisplayed()));
    }

    public void enterUsername(String username) {
        onView(withId(R.id.username)).perform(replaceText(username));
    }

    public void enterPassword(String password) {
        onView(withId(R.id.password)).perform(replaceText(password));
    }

    public void clickLogin() {
        onView(withId(R.id.login_button)).perform(click());
    }
}
(Note that the constructor is testing to make sure the current screen is the screen we expect)
LoginValidationTests.java:
@LargeTest
@RunWith(AndroidJUnit4.class)
public class LoginValidationTests {

    @Rule
    public ActivityTestRule<LoginActivity> mActivityTestRule = new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void loginPasswordValidationTest() {
        LoginRobot loginPage = new LoginRobot();
        loginPage.enterPassword("");
        loginPage.enterUsername("123");
        loginPage.clickLogin();
        onView(withText(R.string.login_bad_password))
                .check(matches(withEffectiveVisibility(ViewMatchers.Visibility.VISIBLE)));
    }

    @Test
    public void loginUsernameValidationTest() {
        LoginRobot loginPage = new LoginRobot();
        loginPage.enterUsername("");
        loginPage.enterPassword("123");
        loginPage.clickLogin();
        onView(withText(R.string.login_bad_username))
                .check(matches(withEffectiveVisibility(ViewMatchers.Visibility.VISIBLE)));
    }
}
Abstracting the mechanisms to automate the UI like this avoids a mass of duplication across tests, and also means changes are less likely to need to be reflected across many tests. e.g. if a layout ID changes, it only needs updating in the robot class, not every test that refers to that field. The tests are also significantly shorter and more readable.
The robot methods, e.g. the login button method, can return the next robot in the chain (i.e. the one operating on the activity that follows the login screen). For example, LoginRobot.clickLogin() could return a HomeRobot (for the app's main home screen).
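A minimal sketch of that chaining (HomeRobot and R.id.home_container are hypothetical names used only for illustration; the same static Espresso imports as in LoginRobot above are assumed):
// Hypothetical robot for the screen that follows a successful login.
public class HomeRobot {
    public HomeRobot() {
        // The constructor asserts we actually landed on the home screen.
        onView(withId(R.id.home_container)).check(matches(isDisplayed()));
    }
}

public class LoginRobot {
    // ... enterUsername/enterPassword as above ...

    // Returning the next robot lets a test chain screens, and fails fast
    // if the home screen never appears after tapping login.
    public HomeRobot clickLogin() {
        onView(withId(R.id.login_button)).perform(click());
        return new HomeRobot();
    }
}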
I've put the assertions in the tests, but if assertions are reused in many tests it might make sense to abstract some into the robot.
In some cases it might make sense to use a view-model object to hold a set of fake data that is reused across tests. For example, if testing a registration screen with many fields that is exercised by many tests, it might make sense to build a factory that creates a RegistrationViewModel containing the first name, last name, email address, etc., and refer to that in tests rather than duplicating that code.
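A rough sketch of such a factory (the field names and defaults are assumptions for illustration):
// Hypothetical immutable holder for fake registration data shared across tests.
public class RegistrationViewModel {
    public final String firstName;
    public final String lastName;
    public final String email;

    public RegistrationViewModel(String firstName, String lastName, String email) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
    }

    // Factory with sensible defaults so individual tests only override what matters.
    public static RegistrationViewModel valid() {
        return new RegistrationViewModel("Jane", "Doe", "jane.doe@example.com");
    }
}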
How does the Robot interface with the Activity/Fragment, and
specifically where is it instantiated? I would expect that happens in
the test by the runner, but his examples seem to suggest otherwise.
The approach looks like it could be very useful, but I do not see how
to implement it in practice, either for a single Activity/Fragment, or
for a sequence of them.
The Robot is supposed to be a utility class used for the purposes of the test. It's not supposed to be production code included as part of your Fragment/Activity or whatever you use. Jake's analogy is pretty much perfect: the Robot acts like a person interacting with the screen of the app. So the exposed API of a Robot should be screen-specific, regardless of what your implementation is underneath. It can span multiple activities, fragments, dialogs, etc., or it can reflect an interaction with just a single component. It really depends on your application and the test cases you have.
How can this approach be extended so that the isSuccess() method can
handle various scenarios. e.g. if we're testing a Login screen, how
can isSuccess() handle various expected results like: authentication
success, API network failure, and auth failed (e.g. 403 server
response)? Ideally the API would be mocked behind Retrofit, and each
result tested with an end to end UI test.
The API of your Robot really should specify the what in your tests, not the how. So from the perspective of using a Robot, it wouldn't care whether you got an API network failure or an auth failure; that is the how ("how did you end up with a failure?"). A human QA tester (=== Robot) wouldn't look at the HTTP stream to notice the difference between an API failure and an HTTP timeout. They would only see that your screen said Failure. The Robot would only care whether that was a Success or a Failure.
The other thing you might want to test here is whether your app showed a message notifying the user of a connection error (regardless of the exact cause).
class ResultRobot {
    ResultRobot isSuccess() { ... }
    ResultRobot isFailure() { ... }
    ResultRobot signalsConnectionError() { ... }
}
result.isFailure().signalsConnectionError();

How to unit test Log.e in Android?

I need to perform a unit test where I check whether an error message is logged when a certain condition occurs in my app.
try {
    // do something
} catch (ClassCastException | IndexOutOfBoundsException e) {
    Log.e(INFOTAG, "Exception " + e.getMessage());
}
How can I test this? I am getting the below error while unit testing.
Caused by: java.lang.RuntimeException: Method e in android.util.Log not mocked.
There are two ways to do this:
You turn to PowerMock or PowerMockito, as those mocking frameworks allow you to mock/verify that static call to Log.e().
You could consider replacing the static call.
Example:
// A thin wrapper around android.util.Log that can be replaced in tests.
interface LogWrapper {
    void e(String tag, String message);
}

class LogImpl implements LogWrapper {
    @Override
    public void e(String tag, String message) {
        Log.e(tag, message);
    }
}
And then, you have to use dependency injection to make a LogWrapper object available within the classes you want to log from. For normal "production" usage, that object is simply an instance of LogImpl; for testing, you can either use a self-written implementation (that keeps track of the logs sent to it), or you can use any of the non-Power mocking frameworks (like EasyMock or Mockito) to mock it. Then you use the verification side of the mocking framework to check that "log was called with the expected parameters".
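For illustration, a minimal sketch of that verification using Mockito 2 (MyParser and its constructor-injected LogWrapper are hypothetical stand-ins for your class under test):
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.startsWith;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class MyParserTest {

    @Test
    public void logsErrorWhenInputIsMalformed() {
        LogWrapper log = mock(LogWrapper.class);
        MyParser parser = new MyParser(log);   // LogWrapper injected via constructor

        parser.parse("not-valid-input");       // should hit the catch block

        // Verify the error was logged with the expected message prefix.
        verify(log).e(anyString(), startsWith("Exception"));
    }
}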
Please note: depending on your setup, option 2 might be overkill. But personally, I avoid any usage of PowerMock, simply because I have wasted too many hours of my life hunting down bizarre problems with it. I also like to do coverage measurements, and sometimes PowerMock gives you problems there, too.
But as you are asking about PowerMock, you basically want to look here (PowerMockito) or here (PowerMock). And for the record: try using your favorite search engine next time. It is really not as if you are the first person asking this.
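For completeness, a rough sketch of option 1 (this assumes PowerMock 2.x with its Mockito API; MyParser is again a hypothetical class under test whose catch block calls Log.e):
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.startsWith;

import android.util.Log;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Log.class)   // Log's static methods get replaced with mocks
public class MyParserPowerMockTest {

    @Test
    public void logsErrorWhenInputIsMalformed() {
        PowerMockito.mockStatic(Log.class);

        new MyParser().parse("not-valid-input"); // should hit the catch block

        // Verify the static Log.e call happened with the expected arguments.
        PowerMockito.verifyStatic(Log.class);
        Log.e(anyString(), startsWith("Exception"));
    }
}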

Raising an error in a Robolectric test when you have try-catch in your application code

In my Android application I have wrapped every event method in a try-catch. When an exception occurs, the catch block gets the exception and a method shows a message box containing the exception details, so I can handle and find my application's bugs.
For example:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    try {
        // ... activity setup ...
    } catch (Exception e) {
        MessageBox.showException(this, e);
    }
}
Now, in Robolectric, where there is no device to show the UI, I cannot find out whether an exception occurred. I want the test to fail whenever my code goes into the catch block, i.e. whenever MessageBox.showException is called.
How can I do that?
The only way I can think of to solve this is to inject the component that handles the errors into the classes that use it, and then load a customized one for your tests.
There are several ways to achieve this and probably some better than what I will suggest, but I will try and present the option that requires minimum changes to what I think is your current architecture.
1 - Whatever you use for showing exceptions, instantiate this in your Application class and keep it there. Or at least provide it from your application class, so now whenever you need to use MessageBox, instead of a static method, you fetch it from the Application first. For example:
((MyApplication)getApplication()).getMessageBox().showException(this,e)
2 - Create a TestMessageBox and a TestApplication (that extends your normal Application class). In your TestApplication, override getMessageBox() to return the TestMessageBox instead of the normal MessageBox. In your TestMessageBox do whatever you want to be able to observe the errors in your tests.
3 - In your tests, use the TestApplication. When you run tests, Robolectric will load this instead of the normal application so your tests will now use your TestMessageBox and you can capture the info you need.
@Config(application = TestApplication.class)
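A minimal sketch of steps 1 and 2 (this assumes MessageBox is refactored so that showException is an overridable instance method, as step 1 implies; class and method names follow the ones used above):
// MyApplication.java
// Step 1: the Application owns and provides the MessageBox instance.
public class MyApplication extends Application {
    private final MessageBox messageBox = new MessageBox();

    public MessageBox getMessageBox() {
        return messageBox;
    }
}

// TestMessageBox.java
// Step 2: a test double that records exceptions instead of showing any UI.
public class TestMessageBox extends MessageBox {
    public Exception lastException;

    @Override
    public void showException(Context context, Exception e) {
        lastException = e; // tests can assert on this and fail when it is non-null
    }
}

// TestApplication.java
public class TestApplication extends MyApplication {
    private final TestMessageBox testMessageBox = new TestMessageBox();

    @Override
    public MessageBox getMessageBox() {
        return testMessageBox;
    }
}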

What do annotations do exactly in Android at compile time?

@SuppressWarnings("unused")
@Override
@SuppressLint({ "InflateParams", "SimpleDateFormat" })
I don't get why we need to declare annotations.
We want to facilitate the writing and the maintenance of Android applications.
We believe that simple code with clear intents is the best way to achieve those goals.
Robert C. Martin wrote:
The ratio of time spent reading [code] versus writing is well over 10 to 1 [therefore] making it easy to read makes it easier to write.
While we all enjoy developing Android applications, we often wonder: Why do we always need to write the same code over and over? Why are our apps harder and harder to maintain? Context and Activity god objects, complexity of juggling with threads, hard to discover API, loads of anonymous listener classes, tons of unneeded casts... can't we improve that?
How?
Using Java annotations, developers can show their intent and let AndroidAnnotations generate the plumbing code at compile time.
Features
Dependency injection: inject views, extras, system services, resources, ...
Simplified threading model: annotate your methods so that they execute on the UI thread or on a background thread.
Event binding: annotate methods to handle events on views, no more ugly anonymous listener classes!
REST client: create a client interface, AndroidAnnotations generates the implementation.
No magic: as AndroidAnnotations generates subclasses at compile time, you can check the code to see how it works.
AndroidAnnotations provides those good things and even more for less than 50kb, without any runtime perf impact!
Is your Android code easy to write, read, and maintain?
Look at that:
@EActivity(R.layout.translate) // Sets content view to R.layout.translate
public class TranslateActivity extends Activity {

    @ViewById // Injects R.id.textInput
    EditText textInput;

    @ViewById(R.id.myTextView) // Injects R.id.myTextView
    TextView result;

    @AnimationRes // Injects android.R.anim.fade_in
    Animation fadeIn;

    @Click // When R.id.doTranslate button is clicked
    void doTranslate() {
        translateInBackground(textInput.getText().toString());
    }

    @Background // Executed in a background thread
    void translateInBackground(String textToTranslate) {
        String translatedText = callGoogleTranslate(textToTranslate);
        showResult(translatedText);
    }

    @UiThread // Executed in the UI thread
    void showResult(String translatedText) {
        result.setText(translatedText);
        result.startAnimation(fadeIn);
    }

    // [...]
}
Java annotations bind specific conditions to be satisfied by code. Consider a scenario where we think we are overriding a method from another class, and we write code that (we think) overrides it. But if we somehow fail to override it exactly (e.g. we misspell the name: in the superclass it was "mMethodOverridden" and we typed "mMethodoverridden"), the method will still compile and execute, but it will not do what it should do.
So @Override is our way of telling Java to check that we are doing the right thing. If we annotate a method with @Override and it is not actually overriding anything, the compiler will give us an error.
Other annotations work in a very similar way.
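A tiny illustration of the @Override case (the class and method names are made up for this sketch):
class Base {
    void onRefreshData() { /* ... */ }
}

class Child extends Base {
    // Without @Override, this typo compiles silently and simply declares
    // a brand-new method instead of overriding Base.onRefreshData():
    void onRefreshdata() { /* ... */ }

    // Adding @Override to the misspelled method would turn it into a
    // compile-time error ("method does not override or implement a method
    // from a supertype"). Spelled correctly, it compiles and truly overrides:
    @Override
    void onRefreshData() { /* ... */ }
}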
For more information, read the official docs: Lesson: Annotations.
Annotations are basically syntactic metadata that can be added to Java source code. Classes, methods, variables, parameters, and packages may be annotated.
Metadata is data about data.
Why Were Annotations Introduced?
Prior to annotations (and even after), XML was extensively used for metadata, and at some point a particular set of application developers and architects thought XML maintenance was getting troublesome. They wanted something that could be coupled closely with code, instead of XML, which is very loosely coupled from (in some cases almost separate from) the code. If you google "XML vs. annotations", you will find a lot of interesting debates. The interesting point is that XML configuration was originally introduced precisely to separate configuration from code. Those last two statements might make you think the two approaches are going in a cycle, but both have their pros and cons.
For example:
@Override
It instructs the compiler to check parent classes for matching methods.
