A bit of history: I used to code on mainframes with COBOL back in the 90s, when top-down programming was all that was needed. I then lived through 2-tier, 3-tier, and n-tier programming, so I understand abstracting the UI layer from the data layer, but the use of a content provider seems very restrictive and frankly counterintuitive.
I like the concept of abstracting the UI code from the data layer code, but the way the ContentProvider (CP) works seems to break this model. For instance, let's look at an example of an insert/update. Using N-tier, the UI code would call the data abstraction layer with a specific ItemID which would then hand back a POCO for the UI to use. The UI only needed to know about the getters and setters for data binding to work. When it came time to update/insert, the UI code would simply hand the POCO back to the data layer which would then determine if the item needed to be updated or inserted. The UI code didn't need to be concerned with HOW the data got to the DB, only that it either succeeded or failed.
With a CP, the UI code needs to know which URI to use to query, based on whether it wants a list of items or a specific item (which I am OK with). But both URIs return a Cursor, which I have to pass to my POCO so it can populate its values for my data binding to change, because Cursors can't be updated. Then the UI code must determine whether the POCO needs to be updated or inserted and call the appropriate function, again using a specific URI. It also has to build the list of all the fields and values that need to be written, as well as provide a selection to update the DB with.
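To make that concrete, here is a rough sketch of what the UI-side code ends up doing; the Item class, the column names, and the URI are hypothetical, purely for illustration:

import android.content.ContentResolver
import android.content.ContentValues
import android.net.Uri

// Hypothetical model object and provider URI, just for the sketch.
data class Item(val id: Long?, val name: String, val quantity: Int)

val ITEMS_URI: Uri = Uri.parse("content://com.example.provider/items")

fun saveItem(resolver: ContentResolver, item: Item) {
    // The UI layer builds the field/value list itself...
    val values = ContentValues().apply {
        put("name", item.name)
        put("quantity", item.quantity)
    }
    if (item.id == null) {
        // ...decides that this is an insert...
        resolver.insert(ITEMS_URI, values)
    } else {
        // ...or an update, and also supplies the selection that targets the row.
        resolver.update(ITEMS_URI, values, "_id = ?", arrayOf(item.id.toString()))
    }
}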
It seems wrong for the UI layer to have to know about any of these items. Shouldn't all of that be encapsulated within the CP? It really seems to break the abstraction model entirely. What am I missing here with this newfangled technology?
BTW, I've already written an app using a CP, so I understand HOW it works; I just don't understand the advantages of WHY it works the way it does. Other than keeping track of changes to data sourced by the CP and automagically updating them in the UI, it seems like a lot of unnecessary overhead and programming, and the implementation FEELS wrong.
Recently, Kotlin Flow has been gaining a lot of attention. I have never done any reactive programming before, so I thought now is a good time to learn it. Even though I have access to books and some articles, I could not understand how to integrate it into, say, an existing app that does not have any RxJava. I tried looking for samples, but the only ones I found are very basic. I'm really confused about this reactive programming thing. For example, I have a list that I need to get from a database. Why would I use Flow to get that data? If I visualize it as a stream, that would give me one item at a time, whereas if I just query for the list I get the whole thing without waiting for each item to arrive. I have read a lot of articles about Kotlin Flow, and even RxJava, but I still want to understand why streams, and how that is any different from the other way, like the example I just gave.
For example, I have a list that I need to get from a database. Why would I use Flow to get that data?
Well, that depends entirely on what you are using to access that database and how it uses Flow.
Let's suppose that you are using Room from the Android Jetpack. In that case, you can use Kotlin coroutines in two ways, via suspend functions and via Flow:
@Query("SELECT * FROM stuff")
suspend fun getStuff(): List<Stuff>

@Query("SELECT * FROM stuff")
fun getStuffNowPlusChanges(): Flow<List<Stuff>>
In both cases, Room will do the database I/O on a background thread, and you can use coroutines to get the results on your desired thread (e.g., Android's main application thread). And initially, the results will be the same: you get a List<Stuff> representing the current contents of the stuff table.
The difference is what happens when the data changes.
In the case of the suspend function, you get a single List<Stuff> reflecting the data at the point when you call the function. If you change the data in the stuff table, you would need to arrange to call that function again.
However, in the case of the Flow-returning function, if you change the data in the stuff table while you still have an observer of that Flow, the observer will get a fresh List<Stuff> automatically. You do not need to manually call some function again — Room handles that for you.
You will have to decide whether that particular feature is useful to you or not. And if you are using something else for database access, you will need to see if it supports Flow and how Flow is used.
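To illustrate the difference, here is a minimal sketch of using both variants from a ViewModel; the StuffDao interface wrapping the two functions above, the Stuff entity, and the rendering steps are assumptions for the example:

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch

class StuffViewModel(private val dao: StuffDao) : ViewModel() {

    fun loadOnce() {
        viewModelScope.launch {
            // One-shot: the table contents at the time of the call.
            val current: List<Stuff> = dao.getStuff()
            // ...render it; to see later changes, you must call this again.
        }
    }

    fun observeChanges() {
        viewModelScope.launch {
            // Flow: a fresh List<Stuff> is emitted whenever the stuff table changes.
            dao.getStuffNowPlusChanges().collect { stuff ->
                // ...render it; Room re-runs the query for you.
            }
        }
    }
}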
While refactoring an app I decided to use Room (and other Architecture Components). Everything went well until I reached database querying, which is async. That is fine by me; I can update views with LiveData callbacks.
But the problem arose with smaller queries following each other: with no thread restrictions it was easy, since you could use variables straight away.
In the legacy code there are plenty of setups where many small pieces of data are required one after another, from different tables. E.g., querying whether an item exists in one table, doing some calculations, querying another table, etc.
Disabling the async requirement for queries is not an option; I prefer using Room as intended.
My first thought was to nest callbacks, but that is too ugly.
My second thought was to query for all the required data up front and start a method only after receiving all the callbacks. That does not sound nice either, and there are cases where one callback's data is required for the next query.
Strangely I have not found any related forum posts or articles dealing with this problem.
Did anyone handle it already? Any ideas?
Most @Dao methods are synchronous, returning their results on whatever thread you call them on. The exceptions are @Query methods with reactive return types, such as Maybe<List<Goal>> or LiveData<List<Goal>>, where the methods return the reactive type and the results are delivered asynchronously to subscribers.
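For example, a DAO along those lines might look like this (Goal and the table name are placeholders):

import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "goals")
data class Goal(@PrimaryKey val id: Long, val text: String)

@Dao
interface GoalDao {
    // Synchronous: runs the query on whatever thread you call it from.
    @Query("SELECT * FROM goals")
    fun loadGoalsNow(): List<Goal>

    // Reactive: returns immediately; results (and later changes) are delivered
    // asynchronously to whoever observes the LiveData.
    @Query("SELECT * FROM goals")
    fun loadGoals(): LiveData<List<Goal>>
}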
So, for cases where you have more complex business logic, you have three main courses of action (that I can think of right now):
Use RxJava and try to squish all that business logic into an observable chain. There are a lot of RxJava operators, and so some combination of map(), flatMap(), switchMap(), weAreLostWhereDidWePutTheMap(), etc. might suffice.
Do the work on a background thread, mediated by a LiveData subclass, so the consumer can subscribe to the LiveData (sketched after this list).
Use classic threading options (e.g., IntentService) or more modern replacements (e.g., JobIntentService).
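As a rough sketch of the second option (all the types here are hypothetical stand-ins for your own DAOs and model objects):

import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import java.util.concurrent.Executors

// Placeholder types, just so the sketch is self-contained.
data class Customer(val id: Long, val name: String)
data class Order(val id: Long, val customerId: Long)
data class OrderSummary(val customer: Customer, val orders: List<Order>)

interface CustomerDao { fun findById(id: Long): Customer }
interface OrderDao { fun findByCustomer(customerId: Long): List<Order> }

class OrderRepository(
    private val customerDao: CustomerDao,
    private val orderDao: OrderDao
) {
    private val executor = Executors.newSingleThreadExecutor()

    // Chain several small synchronous queries on one background thread,
    // then deliver the combined result to the UI via LiveData.
    fun loadSummary(customerId: Long): LiveData<OrderSummary> {
        val result = MutableLiveData<OrderSummary>()
        executor.execute {
            val customer = customerDao.findById(customerId)
            val orders = orderDao.findByCustomer(customer.id)
            result.postValue(OrderSummary(customer, orders))
        }
        return result
    }
}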
The app has data in a SQLite database. The UI is primarily a RecyclerView. The question is how best to transfer data from the database into the UI while keeping off the main thread.
I originally planned to use a CursorLoader, ContentProvider, and RecyclerView. But reading around it looks like RecyclerView has no out-of-the-box support for Cursor-supplied data. Dang.
That then leaves me with a few other options...
An AsyncTask to load the data, put it into model objects, and pass them into the RecyclerView adapter. Aside from being ugly, it isn't config-change friendly.
A custom Loader that loads the data from SQL and pushes it into model objects.
Use a CursorLoader, and when it returns the Cursor, iterate through it to push the data into model objects. I suspect this would occur on the main thread and may hurt performance.
Use Otto to send a message requesting the data, and then return a collection of model objects by a return message. There may be ~500 objects, so I think I may be rather abusing Otto by doing this.
If I am using a collection of model objects instead of a Cursor I see less benefit to a ContentProvider, and I also lose the ability for the UI to auto-refresh on data changes (which may be useful).
None of these options appeals much; is there a better way? The app is under time pressure, so whatever it is needs to be fairly quick to implement. Unfortunately the UI needs to scroll horizontally and only targets Lollipop, so RecyclerView does seem a better bet than ListView.
Use this simple adapter: https://gist.github.com/Shywim/127f207e7248fe48400b. Alternatively, you could use android.support.v17.leanback.widget.ItemBridgeAdapter with android.support.v17.leanback.widget.CursorObjectAdapter, but why make your own life harder?
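If you would rather roll your own, here is a minimal sketch of a Cursor-backed RecyclerView adapter along the same lines (the "title" column and the plain TextView row are placeholders):

import android.database.Cursor
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class ItemViewHolder(val text: TextView) : RecyclerView.ViewHolder(text)

class SimpleCursorRecyclerAdapter(private var cursor: Cursor?) :
    RecyclerView.Adapter<ItemViewHolder>() {

    override fun getItemCount(): Int = cursor?.count ?: 0

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ItemViewHolder {
        // A plain TextView row keeps the sketch short; a real adapter would inflate a layout.
        return ItemViewHolder(TextView(parent.context))
    }

    override fun onBindViewHolder(holder: ItemViewHolder, position: Int) {
        val c = cursor ?: return
        if (c.moveToPosition(position)) {
            holder.text.text = c.getString(c.getColumnIndexOrThrow("title"))
        }
    }

    // Swap in a new Cursor (e.g., from a CursorLoader callback) and refresh the list.
    // Whoever owns the old Cursor remains responsible for closing it.
    fun swapCursor(newCursor: Cursor?) {
        if (newCursor === cursor) return
        cursor = newCursor
        notifyDataSetChanged()
    }
}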
In my application I need to do a lot of CRUD: reading records from the local SQLite database, inserting objects, and updating things. Most of the queries are so simple that they won't block even if run on the UI thread; however, in this application I want to adopt the Windows Phone pattern: an out animation started immediately and an in animation started when the result is delivered.
I planned to use an AsyncTask for the job; however, I noticed that Honeycomb (and the compat package) introduces the new Loader framework. The main advantage seems to be that data loaded by a Loader survives config changes. The LoaderEx project by CommonsWare bridges between SQLite and the framework, but some problems arise.
Resource cleanup: I use a single activity, create the SQLiteOpenHelper in onCreate(), and close it in onDestroy(). Since the loader manager may still be running, I check for that and set a pendingClose flag on my callbacks object, so it will close the cursor and the helper when the load finishes. I think not closing the database is not harmful, but SQLite complains if you don't do it, and I don't like error messages :) The point here is that the data doesn't survive config changes, so the Loader advantage vanishes.
How many loaders should I create? Let's say I have the beloved Customer and Order tables. Loaders are identified by IDs like CUST_L or ORD_L, but every time the user clicks on some summary I want to bring in a screen with the detail. Should I restart a loader with different params, or should I init a new one with a random ID? This may happen dozens of times. Is the Loader framework intended for lots of small running jobs, or just for a few long-running tasks?
What's the purpose of using IDs inside the LoaderCallbacks interface? Why not a simple initLoader(params, callback)? I don't think one can reuse a piece of logic inside a callback: eventually you will branch (with if-else or a switch on the ID), so I don't understand the point of giving an identifier to the callbacks object instead of the naive one-callbacks-object-per-operation approach.
I'm asking this because the whole framework seems overengineered to me and without real utility. I don't understand the point of centralizing code with a LoaderManager, and I can't see any new opportunity that AsyncTask did not offer.
The only winning point is surviving config changes, but I can't exploit it because of the resource cleanup, and I can't figure out an alternative way to close the SQLiteOpenHelper, because (quite obviously) the SQLiteCursorLoader requires it but cleaning it up is left to the user. So AsyncTask seems the winning choice here, but maybe I'm missing something.
Content providers are much more powerful than the "raw DB" approach. Lots of links on Stack Overflow lead to discussions on this.
LoaderManager distinguishes loaders by their IDs (which is why the signature of initLoader specifies this argument). The loader ID is needed so that a cached result can be re-delivered when data for the loader with that ID already exists (hence there is no need to asynchronously load it again).
A restartLoader call forces the LoaderManager to initiate the async operation specified by a previously created loader. initLoader attempts to reuse an existing loader before creating a new one.
Fragments and Activities have their own LoaderManagers that don't overlap.
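For illustration, here is a minimal sketch of that ID-based flow with a CursorLoader (using the androidx Loader classes; the URI and loader ID are placeholders):

import android.database.Cursor
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.loader.app.LoaderManager
import androidx.loader.content.CursorLoader
import androidx.loader.content.Loader

class CustomerActivity : AppCompatActivity(), LoaderManager.LoaderCallbacks<Cursor> {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // initLoader reuses an existing loader with this ID (re-delivering its cached
        // result); restartLoader would discard it and load again, e.g. with new arguments.
        LoaderManager.getInstance(this).initLoader(CUSTOMER_LOADER, null, this)
    }

    override fun onCreateLoader(id: Int, args: Bundle?): Loader<Cursor> =
        CursorLoader(this, CUSTOMERS_URI, null, null, null, null)

    override fun onLoadFinished(loader: Loader<Cursor>, data: Cursor?) {
        // Branch on loader.id here if this callbacks object serves several loaders.
    }

    override fun onLoaderReset(loader: Loader<Cursor>) {
        // Release any references to the old Cursor here.
    }

    companion object {
        private const val CUSTOMER_LOADER = 1
        private val CUSTOMERS_URI: Uri = Uri.parse("content://com.example.provider/customers")
    }
}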
My experience shows that even though using content providers sounds like overkill to implement, it actually pays off pretty well in the future. The performance hit is insignificant (I tried measuring it), UI-to-data bindings come out of the box (because content observers and CursorLoaders can subscribe to Uri notifications), and asynchronous loading is handled by the framework via loaders. IMHO, whenever a database is needed, using a content provider with loaders is most of the time the best solution you can come up with.
Other scenarios that involve using the database directly will force you to implement everything manually.
This is a question I've now had for a few different apps I've built, and I have yet to be satisfied with any of the solutions I've come up with. I thought I'd put it out there to the community to see what other solutions there might be.
Let's say you have an Activity that downloads a complex tree of data (in this case via JSON, but it could be anything), unmarshalls that data to a set of Java objects (in this case using Gson, but again, it could be whatever), then spawns additional activities to view different parts of that data. There might be one activity to view Trips in your response, another to view Flights in those trips, and maybe another to view Passengers of those flights.
My initial implementation of this app was to unmarshall all the Trips in the first activity, then pass them by value (as an extra in the intent) to the TripActivity. The TripActivity then passes individual flights to the FlightActivity, and so on.
The problem with this is that there's a noticeable pause between activities while the app serializes and deserializes the data. We're talking several seconds. The pause is quite noticeable when my tree uses Serialization or Parcelable to pass data around. Initial performance testing using Google's Parcelable instead shows a roughly 30% speedup over Serialization, but Parcelable is difficult to work with and doesn't seem to handle circular object references as well as Serialization does, and besides, it still pauses for almost as many seconds, so I've put that experiment on the back burner while I try other things.
So then I tried moving the tree of objects directly into the Application class. Each activity just gets the tree directly from the app whenever it needs it. This makes performance quite snappy, but handling corner cases like unexpected activity start/stops (either due to activity crashes or because the activity has been closed temporarily to make more memory available, or whatever other cause) seems tricky. Perhaps it's no more than implementing onSaveInstanceState(), I'm not sure, but the solution seems a bit hacky so I haven't investigated further yet.
So in search of a less cobbled-together solution, I tried creating a custom ContentProvider to store and retrieve my objects. Since ContentProviders can be configured to run in-process using multiprocess=true, I thought that would be an excellent way to avoid serialization costs while doing something more "standard" than storing data in the Application object. However, ContentProviders were clearly not intended to return arbitrary object types -- they only support types such as numbers, strings, booleans, etc. It appears I can finagle one to store arbitrary objects by using ContentResolver.getContentProviderClient().getLocalContentProvider() and accessing my custom class directly, but I'm not sure that's less hacky than storing data in the Application object.
Surely someone must have a good solution to this problem. What am I doing wrong?
In addition to fiXedd's solution, another one is to use a local service. Have the service "own" the objects, with activities calling service APIs to get whatever they need. The service can also be responsible for fetching and parsing the data, encapsulating that bit of logic.
The Application object is the "red-headed step-child" of Android components. Members of the core Android team have come out against the practice of creating custom Application subclasses, though it is certainly supported by the API. Having engineered one ADC2 200 application that leveraged a custom Application subclass, I can say that I should have gone with a service in my case as well. Live and learn...
By using the local binding pattern, your service will automatically be created and destroyed as needed, so you don't have to worry about that. And, by definition, a local service runs in the same process/VM as your activities, so you don't have to worry about marshaling overhead like you would in the ContentProvider scenario.
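To sketch the local binding pattern (DataService, Trip, and getTrips() are hypothetical names; the service also needs a normal <service> entry in the manifest):

import android.app.Service
import android.content.ComponentName
import android.content.Context
import android.content.Intent
import android.content.ServiceConnection
import android.os.Binder
import android.os.IBinder
import androidx.appcompat.app.AppCompatActivity

class Trip // placeholder model object

class DataService : Service() {

    inner class LocalBinder : Binder() {
        // Hand the actual service instance to in-process clients.
        fun getService(): DataService = this@DataService
    }

    private val binder = LocalBinder()
    private val trips = mutableListOf<Trip>()

    override fun onBind(intent: Intent): IBinder = binder

    // The service "owns" the parsed objects; activities just ask for them.
    fun getTrips(): List<Trip> = trips
}

class TripActivity : AppCompatActivity() {

    private var dataService: DataService? = null

    private val connection = object : ServiceConnection {
        override fun onServiceConnected(name: ComponentName, binder: IBinder) {
            dataService = (binder as DataService.LocalBinder).getService()
        }

        override fun onServiceDisconnected(name: ComponentName) {
            dataService = null
        }
    }

    override fun onStart() {
        super.onStart()
        // BIND_AUTO_CREATE creates the service when the first client binds and
        // destroys it when the last client unbinds.
        bindService(Intent(this, DataService::class.java), connection, Context.BIND_AUTO_CREATE)
    }

    override fun onStop() {
        super.onStop()
        unbindService(connection)
    }
}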
The way I'm handling this in one of my apps is downloading the data then shoving it into a database. This way I don't have to carry all those objects around (which, IIRC, eat about 1kb each just for the object instantiation) and I can easily pull just the data that I need. I don't know if this will work for you, but it worked for my use-case.
Another approach would be to save the data objects to a shared preferences file. That's how we implemented one of our apps, but I didn't like that approach because it seems too slow.
It's bad coding practice, but the fastest way may be to just use a service to parse the data and save the data to a static class that you can use for the rest of the app's life.