I'm writing an Android app that uses CouchDB. I have around 1000 documents, every DB operation invokes a view, and the view is taking a lot of time. Is there a way to optimize views in CouchDB? When there are fewer documents, fetching documents is fast.
The main things to note with views are that both map and reduce values are cached in the view index (see http://horicky.blogspot.co.uk/2008/10/couchdb-implementation.html for details), that views are only rebuilt when you look at them, and that the CouchDB JavaScript engine is not particularly fast.
There are a few ways to use all this for actual performance improvements:
Accept stale data in your views, and periodically rebuild the view index asynchronously. You can query views with ?stale=ok to immediately return the currently cached view index, from the last time the view was built, and then have some other background task query without stale=ok to actually do the rebuild (see the sketch after this list). The typical strategies are either to rebuild the view every X minutes or to watch /db/_changes and rebuild the view after every Y changes. Which to pick depends on your application.
Accept stale data and always rebuild the view asynchronously immediately afterwards. This uses ?stale=update_after, which I believe will immediately return you a value and then do the view rebuild in the background. Whether to do this or the above depends on your use case and how important up-to-date values are to you; you might end up rebuilding the view far more often than is really necessary, and thereby actually slow down your queries. This does seem easier than the previous option, though.
Push as much of your code into your map function as possible. This should improve performance in quickly changing databases, because map values are cached and don't need updating until the underlying document changes, whereas reduces need recalculating whenever one of a larger set of documents changes. I'm not sure exactly how reduce recalculation is tuned in CouchDB, i.e. how big the set that needs recalculating is, but it's definitely going to happen more often than map recalculation, and potentially much, much more often.
Use built-in reduce functions (see http://wiki.apache.org/couchdb/Built-In_Reduce_Functions) instead of rewriting them in JavaScript. These fulfil many standard reduce cases, and are much much faster than writing the equivalent function yourself.
Rewrite your map/reduce in Erlang. See http://wiki.apache.org/couchdb/EnableErlangViews. This does require you to learn Erlang, but should shave a big percentage off your view rebuild time.
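As a rough illustration of the stale-query idea from the first two options, here is a sketch of hitting a view with ?stale=ok from Android. The database, design document, and view names are placeholders, and this is plain HttpURLConnection rather than any particular CouchDB client library:

    // Hypothetical example: query a view with stale=ok so the cached index is
    // returned immediately, without triggering a rebuild on this request.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StaleViewQuery {
        public static String queryStale() throws Exception {
            // "mydb", "mydesign" and "byType" are placeholder names.
            URL url = new URL("http://127.0.0.1:5984/mydb/_design/mydesign/_view/byType?stale=ok");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            in.close();
            conn.disconnect();
            return body.toString(); // JSON view result from the cached index
        }
    }

A background task (a scheduled job, a handler watching /db/_changes, or similar) would then hit the same URL without stale=ok every X minutes or every Y changes to actually rebuild the index.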
The map function in a view is executed only once per document (plus once more each time you update the document). This happens the first time you query the view. After that the result of the map function does not have to be computed again, and therefore queries to the view should be extremely fast. As views are already efficient, there is no general way to optimize them further.
This is not the case for temporary views. If you are using these, please store them in a design document to turn them into regular views.
Emit as little data as possible in your map function. You can retrieve the entire document with the include_docs=true URL parameter if you actually need it (there's a query sketch after the examples below).
Good:

    {
      map: function(doc) {
        emit(doc._id, null)
      }
    }

Bad:

    {
      map: function(doc) {
        emit(doc._id, doc)
      }
    }
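And if you emit null as in the "good" example but then need the full documents at query time, include_docs=true returns each document under a "doc" field in the view rows. A rough parsing sketch (placeholder names again; uses Android's built-in org.json classes):

    // Hypothetical sketch: with include_docs=true each row in the view result
    // carries the full document under "doc", even though the view emitted null.
    import org.json.JSONArray;
    import org.json.JSONObject;

    public class ViewResultParser {
        // "json" is the response body of
        // GET /mydb/_design/mydesign/_view/byType?include_docs=true
        public static void printDocIds(String json) throws Exception {
            JSONObject result = new JSONObject(json);
            JSONArray rows = result.getJSONArray("rows");
            for (int i = 0; i < rows.length(); i++) {
                JSONObject row = rows.getJSONObject(i);
                JSONObject doc = row.getJSONObject("doc"); // the full document
                System.out.println(doc.getString("_id"));
            }
        }
    }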
I use the HERE Maps SDK. I have a DB file with 16,500(!) paths (lists of point coordinates). I need to draw all the paths on the map when the user activates the "show additional paths" function. But I think that if I try to fetch that many paths and add all the polyline objects to the HERE map, it will take a huge amount of time.
Help me find the optimal solution.
I would filter your data based on the visible viewport and disable this functionality where it doesn't make much sense (at continental or globe level).
So, let's assume your app shows the map at zoom level 16 or 17 (district level). You can retrieve the viewport as a GeoBoundingBox from the Map instance (e.g. via mapView.getMap()) with getBoundingBox().
The GeoBoundingBox makes it easy for you now to check for collisions with your own objects, since it has several "contains()" methods.
So everything that collides with your viewport should be shown, everything else is ignored.
You can update whenever the map viewport changes, either by listening for OnTransformListener in the Map class or by registering for MapGesture events (get the MapGesture via getMapGesture() and listen for zooming events via addOnGestureListener()).
If the amount of data for filtering is still too big, you can also think about preparing your data for more efficient filtering, like partitioning (region based would be my first idea) so only a subset of your data needs to be filtered.
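A rough sketch of the filtering idea (the class and method names follow the ones mentioned above plus my assumption of the classic HERE Android SDK packages; double-check them against the SDK version you use):

    // Hypothetical sketch: add only the polylines that touch the current viewport.
    import com.here.android.mpa.common.GeoBoundingBox;
    import com.here.android.mpa.common.GeoCoordinate;
    import com.here.android.mpa.common.GeoPolyline;
    import com.here.android.mpa.mapping.Map;
    import com.here.android.mpa.mapping.MapPolyline;

    import java.util.List;

    public class PathFilter {

        /** Adds only the paths that have at least one point inside the visible viewport. */
        public static void showVisiblePaths(Map map, List<List<GeoCoordinate>> allPaths) {
            GeoBoundingBox viewport = map.getBoundingBox();
            if (viewport == null) {
                return; // map not ready yet
            }
            for (List<GeoCoordinate> path : allPaths) {
                if (touchesViewport(viewport, path)) {
                    map.addMapObject(new MapPolyline(new GeoPolyline(path)));
                }
            }
        }

        /** Cheap collision test: does any point of the path lie inside the viewport? */
        private static boolean touchesViewport(GeoBoundingBox viewport,
                                               List<GeoCoordinate> path) {
            for (GeoCoordinate point : path) {
                if (viewport.contains(point)) {
                    return true;
                }
            }
            return false;
        }
    }

You would call something like this from your gesture/transform listener whenever the viewport settles, after removing the polylines you added for the previous viewport.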
It seems that the Custom Location Extension (https://developer.here.com/platform-extensions/documentation/custom-location/topics/what-is.html) can help with this case.
In short, it allows you to upload custom data to the HERE backend and query it later.
I have elements in a ListView that change the way they look based on a network response.
By the time the network responds, the ListView item (or the item in the ArrayList) could be at a different index.
What I can do:
Make an alternate API call back to the server which returns all the items in the list (in their most updated form), and then call notifyDataSetChanged() on the adapter.
But this seems like a waste of processing, and so does the alternative of searching an ArrayList for the equivalent object, updating it and then calling notifyDataSetChanged().
Is there a way instead to have something like a BroadcastReceiver within the adapter that can keep track of the adapter item which started the network call or service? And maybe only respond to the receiver if the view is not currently recycled?
It's hard to give an exact answer on your best approach, since what you described is a really high-level overview. I'll have to give an equally high-level answer; hopefully it helps.
There aren't many ways around searching an ArrayList in the adapter for a given item. A couple of ideas:
You could create a custom adapter which is backed by an ArrayList but also maintains a Set of the data. The benefit is that finding an item is O(1); however, any adds or removes require you to modify two collections instead of one, which causes a slight slowdown. I've personally had to use this solution once for a highly complex adapter/ListView that got updated quite often (to the point that throttling notifyDataSetChanged() was once discussed). Surprisingly, the slowdown from maintaining both a List and a Set was hardly noticeable, and overall it worked well.
You could use a similar approach if your data has some sort of unique id associated with it. In that case you could build a Map of the data and use the map's values() method to obtain the List for the adapter, while using the keys to quickly find and update the required data. This may or may not be more difficult than the Set idea. Further, if you can get your data into a SparseArray (giving each item a unique int), you could use a SparseArray-backed adapter, which gets you O(log n) search times. Of course, you lose the ability to sort your data in any meaningful way.
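A rough sketch of the Map-backed idea (the Item class and its id/label fields are invented for illustration):

    // Hypothetical sketch: BaseAdapter backed by a List (for positions) plus a
    // Map keyed by a unique id (for O(1) lookup when a network response arrives).
    import android.content.Context;
    import android.view.LayoutInflater;
    import android.view.View;
    import android.view.ViewGroup;
    import android.widget.BaseAdapter;
    import android.widget.TextView;

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ItemAdapter extends BaseAdapter {

        // Invented model class for this example.
        public static class Item {
            public final String id;
            public String label;
            public Item(String id, String label) { this.id = id; this.label = label; }
        }

        private final Context context;
        private final List<Item> items = new ArrayList<Item>();
        private final Map<String, Item> byId = new HashMap<String, Item>();

        public ItemAdapter(Context context) { this.context = context; }

        public void add(Item item) {
            items.add(item);
            byId.put(item.id, item); // both collections must stay in sync
            notifyDataSetChanged();
        }

        /** Called from the network callback; finds the item without scanning the list. */
        public void updateItem(String id, String newLabel) {
            Item item = byId.get(id);
            if (item != null) {
                item.label = newLabel;
                notifyDataSetChanged();
            }
        }

        @Override public int getCount() { return items.size(); }
        @Override public Object getItem(int position) { return items.get(position); }
        @Override public long getItemId(int position) { return position; }

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            TextView view = (TextView) convertView;
            if (view == null) {
                view = (TextView) LayoutInflater.from(context)
                        .inflate(android.R.layout.simple_list_item_1, parent, false);
            }
            view.setText(items.get(position).label);
            return view;
        }
    }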
I'm not sure how viable the BroadcastReceiver idea is. I would see it more like each item's object instance would control the network request/response for itself, but that would seem tricky and odd. There's always the option of using a CursorAdapter. Just store all your data to DB. Have the network calls update the DB which can then be reflected within the CursorAdapter.
I will start by saying that on iOS this algorithm takes, on average, under 2 seconds to complete. Given a simpler, more specific input that is identical between my iOS and Android tests, it takes 0.09 seconds and 2.5 seconds respectively, and the Android version simply quits on me, so I have no idea if it would take significantly longer. (The test data gives the sorting algorithm a relatively simple task.)
More specifically, I have a HashMap (an NSMutableDictionary on iOS) that maps a unique key (a string of digits called the course, for example "12345") to the specific sections under that course title. The hash map knows which course a specific section falls under because each section has a "Course" value. Once retrieved, these section objects are compared to see if they can fit into a schedule together, based on user input and their "timeBegin", "timeEnd", and "days" values.
For Example: If I asked for schedules with only the Course ABC1234(There are 50 different time slots or "sections" under that course title) and DEF5678(50 sections) it will iterate through the Hashmap to find every section that falls under those two courses. Then it will sort them into schedules of two classes each(one ABC1234 and one DEF5678) If no two courses have a conflict then a total of 2500(50*50) schedules are possible.
These "schedules" (Stored in ArrayLists since the number of user inputs varies from 1-8 and possible number of results varies from 1-100,000. The group of all schedules is a double ArrayList that looks like this ArrayList>. On iOS I use NSMutableArray) are then fed into the intent that is the next Activity. This Activity (Fragment techincally?) will be a pager that allows the user to scroll through the different combinations.
I copied the search-and-sort method exactly as it is on iOS (this may not be the right thing to do, since the languages and data structures may be fundamentally different), and it works correctly with small output, but when the output gets too large it can't handle it.
So is multithreading the answer? Should I use something other than a HashMap? Something other than ArrayLists? I only assume multithreading because the errors indicate that too much is being done on the main thread. I've also read that there is a limit to the size of the data passed via Intents, but I have no idea.
If I was unclear on anything, feel free to ask for clarification. Also, I've been doing Android for about 2 weeks, so I may be completely off track, but hopefully not; this is a fully functional and complete app already in the iTunes Store, so I don't think I'm that far off. Thanks!
1) I think you should go with Android's AsyncTask. The way it splits work between the UI thread (for the views) and a background thread (for operations like the sorting) is enough to let you process the data in the background and then deliver the result on the UI thread.
Follow this shorthand example: Example to Use AsyncTask
2) Example (how to proceed):
a) Define your views in onPreExecute().
b) Do your background operation in doInBackground().
c) Get the result in onPostExecute() and pass the content to the new Activity.
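A minimal sketch of that flow (the task and callback names are placeholders, and the actual schedule building is elided):

    // Hypothetical sketch: run the heavy schedule sorting off the main thread
    // with an AsyncTask and hand the result back to the UI thread when done.
    import android.os.AsyncTask;

    import java.util.ArrayList;

    public class BuildSchedulesTask
            extends AsyncTask<Void, Void, ArrayList<ArrayList<String>>> {

        public interface Callback {
            void onSchedulesReady(ArrayList<ArrayList<String>> schedules);
        }

        private final Callback callback;

        public BuildSchedulesTask(Callback callback) {
            this.callback = callback;
        }

        @Override
        protected void onPreExecute() {
            // Runs on the UI thread: show a progress indicator here if you like.
        }

        @Override
        protected ArrayList<ArrayList<String>> doInBackground(Void... params) {
            // Runs on a background thread: do the heavy search/sort here.
            ArrayList<ArrayList<String>> schedules = new ArrayList<ArrayList<String>>();
            // ... build schedules from your HashMap of sections ...
            return schedules;
        }

        @Override
        protected void onPostExecute(ArrayList<ArrayList<String>> schedules) {
            // Back on the UI thread: hide the progress indicator and move on.
            callback.onSchedulesReady(schedules);
        }
    }

The Activity would implement Callback, call new BuildSchedulesTask(this).execute(), and only start the pager Activity from onSchedulesReady(); for very large result sets, handing the data over through a retained fragment or singleton is safer than stuffing a huge list into the Intent.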
Hope this could help...
I think it's better for you to use a TreeMap instead of a HashMap; it keeps its entries sorted automatically every time you mutate it. Therefore you won't have to sort your data before starting the next activity, you just pass it along and that's all.
Note that a TreeMap sorts by key, so the class you use as the map's key has to implement the Comparable interface (or you pass a Comparator to the TreeMap constructor).
You can also read about the TreeMap class here:
http://docs.oracle.com/javase/7/docs/api/java/util/TreeMap.html
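A minimal sketch of the idea (the StartTime key class is invented for illustration); the TreeMap iterates in key order without any explicit sort:

    // Hypothetical sketch: a TreeMap stays sorted by key, so a Comparable key
    // class keeps the entries ordered as you insert them.
    import java.util.Map;
    import java.util.TreeMap;

    public class TreeMapExample {

        // Invented key type: sections ordered by their start time.
        public static class StartTime implements Comparable<StartTime> {
            final int minutesSinceMidnight;

            StartTime(int minutesSinceMidnight) {
                this.minutesSinceMidnight = minutesSinceMidnight;
            }

            @Override
            public int compareTo(StartTime other) {
                return Integer.compare(minutesSinceMidnight, other.minutesSinceMidnight);
            }

            @Override
            public String toString() {
                return minutesSinceMidnight / 60 + ":"
                        + String.format("%02d", minutesSinceMidnight % 60);
            }
        }

        public static void main(String[] args) {
            Map<StartTime, String> sections = new TreeMap<StartTime, String>();
            sections.put(new StartTime(9 * 60 + 30), "ABC1234 section 2");
            sections.put(new StartTime(8 * 60), "ABC1234 section 1");
            sections.put(new StartTime(13 * 60), "DEF5678 section 1");

            // Iterates in key order (8:00, 9:30, 13:00) without any explicit sort.
            for (Map.Entry<StartTime, String> e : sections.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }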
Could someone tell me how to make a good mechanism for async. download of images for use in a ListView/GridView?
There are many suggestions, but each only considers a small subset of the typical requirements.
Below I've listed some reasonable factors (requirements or things to take into account) that I, and my colleagues, are unable to satisfy at once.
I am not asking for code (though it would be welcome), just an approach that manages the Bitmaps as described.
No duplication of downloaders or Bitmaps
Canceling downloads/assigning of images that would no longer be needed, or are likely to be automatically removed (SoftReference, etc)
Note: an adapter can have multiple Views for the same ID (calls to getView(0) are very frequent)
Note: there is no guarantee that a view will not be lost instead of recycled (consider List/GridView resizing or filtering by text)
A separation of views and data/logic (as much as possible)
Not starting a separate Thread for each download (visible slowdown of UI). Use a queue/stack (BlockingQueue?) and thread pool, or somesuch.... but need to end that if the Activity is stopped.
Purging Bitmaps sufficiently distant from the current position in the list/grid, preferably only when memory is needed
Calling recycle() on every Bitmap that is to be discarded.
Note: External memory may not be available (at all or all the time), and, if used, should be cleared (of only the images downloaded here) asap (consider Activity destruction/recreation by Android)
Note: Data can be changed: entries removed (multi-selection & delete) and added (in a background Thread). Already downloaded Bitmaps should be kept, as long as the entries they're linked to still exist.
setTextFilterEnabled(true) (if based on ArrayAdapter's mechanism, will affect array indexes)
Usable in ExpandableList (affects the order the thumbnails are shown in)
(optional) when a Bitmap is downloaded, refresh ONLY the relevant ImageView (the list items may be very complex)
Please do not post answers for individual points. My problem is that the more we focus on some aspects, the fuzzier others become, Heisenberg-like.
Each adds a dimension of difficulty, especially Bitmap.recycle, which needs to be called during operation and on Activity destruction (note that onDestroy, even onStop might not be called).
This also precludes relying on SoftReferences.
Calling recycle() is necessary, or I get OutOfMemoryError even after any number of gc() calls, sleeps (even 20 s), yields, and huge array allocations in a try-catch (to force a controlled OutOfMemoryError) after nulling a Bitmap.
I am resampling the Bitmaps already.
Check this example. It is used by Google, and I am using the same logic to avoid OutOfMemoryError.
http://developer.android.com/resources/samples/XmlAdapters/index.html
Basically this ImageDownloader is your answer (it covers most of your requirements); the remaining ones you can implement on top of it.
http://developer.android.com/resources/samples/XmlAdapters/src/com/example/android/xmladapters/ImageDownloader.html
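The core pattern that class demonstrates looks roughly like this (a stripped-down sketch, not the actual Google code): download on an AsyncTask, keep only a WeakReference to the ImageView, and check a tag before binding so a recycled view never shows a stale image:

    // Hypothetical, stripped-down version of the pattern: one AsyncTask per
    // download, a WeakReference to the ImageView, and a tag check so a recycled
    // view does not receive an image it no longer represents.
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.os.AsyncTask;
    import android.widget.ImageView;

    import java.io.InputStream;
    import java.lang.ref.WeakReference;
    import java.net.URL;

    public class BitmapDownloadTask extends AsyncTask<String, Void, Bitmap> {

        private final WeakReference<ImageView> imageViewRef;
        private String url;

        public BitmapDownloadTask(ImageView imageView) {
            this.imageViewRef = new WeakReference<ImageView>(imageView);
        }

        /** Call from the UI thread, e.g. from getView(). */
        public void start(String url) {
            this.url = url;
            ImageView imageView = imageViewRef.get();
            if (imageView != null) {
                imageView.setTag(url); // remember which URL this view currently wants
            }
            execute(url);
        }

        @Override
        protected Bitmap doInBackground(String... params) {
            try {
                InputStream in = new URL(params[0]).openStream();
                try {
                    return BitmapFactory.decodeStream(in);
                } finally {
                    in.close();
                }
            } catch (Exception e) {
                return null; // network or decode failure: just show nothing
            }
        }

        @Override
        protected void onPostExecute(Bitmap bitmap) {
            ImageView imageView = imageViewRef.get();
            // Only bind if the view still exists and has not been recycled for
            // a different URL in the meantime.
            if (bitmap != null && imageView != null && url.equals(imageView.getTag())) {
                imageView.setImageBitmap(bitmap);
            }
        }
    }

The real ImageDownloader linked above layers an in-memory cache and download cancellation on top of essentially this skeleton.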
In the end, I chose to disregard the recycling bug entirely. It just adds a layer of impossible difficulty on top of an otherwise manageable process.
Without that burden (just making adapters etc. stop showing images), I made a manager using a Map<String, SoftReference<Bitmap>> to store the downloaded Bitmaps under their URLs.
I also use 2-4 AsyncTasks (making use of both doInBackground and onProgressUpdate; stopped by adding special jobs that throw InterruptedException) that take jobs from a LinkedBlockingDeque<WeakReference<DownloadingJob>>, backed by a WeakHashMap<Object, Set<DownloadingJob>>. The deque (LinkedBlockingDeque code copied for use on earlier APIs) is a queue that jobs can leave if they're no longer needed. The map has job creators as keys, so if an Adapter demands downloads and is then removed, it disappears from the map and, as a consequence, all its jobs disappear from the queue.
A job will return synchronously if the image is already present. It can also carry a Bundle of data identifying which position in an AdapterView it concerns.
Caching is also done on the SD card, if available, under URL-encoded names (cleaned partially, starting with the oldest files, on app start, and/or using deleteOnExit()).
Requests include "If-Modified-Since" if we have a cached version, to check for updates.
The same thing can also be used for XML parsing, and most other data acquisition.
If I ever clean that class up, I'll post the code.
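In the meantime, here is a minimal sketch of just the memory-cache part described above (nothing like the full manager; the class name is invented):

    // Minimal sketch of the Map<String, SoftReference<Bitmap>> idea only: the GC
    // may clear entries under memory pressure, so get() can return null even for
    // a URL that was cached earlier.
    import android.graphics.Bitmap;

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;

    public class BitmapMemoryCache {

        private final Map<String, SoftReference<Bitmap>> cache =
                new HashMap<String, SoftReference<Bitmap>>();

        public synchronized void put(String url, Bitmap bitmap) {
            cache.put(url, new SoftReference<Bitmap>(bitmap));
        }

        /** Returns the cached bitmap, or null if it was never cached or was collected. */
        public synchronized Bitmap get(String url) {
            SoftReference<Bitmap> ref = cache.get(url);
            if (ref == null) {
                return null;
            }
            Bitmap bitmap = ref.get();
            if (bitmap == null) {
                cache.remove(url); // the GC cleared it; drop the stale entry
            }
            return bitmap;
        }
    }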
In a game I need to keep tabs on which of my pooled sprites are in use. When "activating" multiple sprites at once, I want to transfer them from my passivePool to my activePool, both of which are immutable HashSets (OK, I'll be creating new sets each time, to be exact). So my basic idea is something along the lines of:
    activePool ++= passivePool.take(5)
    passivePool = passivePool.drop(5)
but reading the Scala documentation I'm guessing that the 5 elements I take might be different from the 5 I then drop, which is definitely not what I want. I could also say something like:
    val moved = passivePool.take(5)
    activePool ++= moved
    passivePool --= moved
but as this is something I need to do pretty much every frame, in real time, on a limited device (an Android phone), I guess it would be much slower, since I would have to search the passivePool for each of the moved sprites one by one.
Any clever solutions? Or am I missing something basic? Remember, efficiency is the primary concern here. And I can't use Lists instead of Sets, because I also need random-access removal of sprites from the activePool when sprites are destroyed in the game.
There's nothing like benchmarking for getting answers to these questions. Let's take 100 sets of size 1000 and drop them 5 at a time until they're empty, and see how long it takes.
    passivePool.take(5); passivePool.drop(5)                  // 2.5 s
    passivePool.splitAt(5)                                    // 2.4 s
    val a = passivePool.take(5); passivePool --= a            // 0.042 s
    repeat(5){ val a = passivePool.head; passivePool -= a }   // 0.020 s
What is going on?
The reason things work this way is that immutable.HashSet is built as a hash trie with optimized (effectively O(1)) add and remove operations, but many of the other methods are not re-implemented; instead, they are inherited from collections that don't support add/remove and therefore can't get the efficient methods for free. They therefore mostly rebuild the entire hash set from scratch. Unless your hash set has only a handful of elements in it, this is a bad idea. (In contrast to the 50-100x slowdown with sets of size 1000, a set of size 100 has "only" a 6-10x slowdown.)
So, bottom line: until the library is improved, do it the "inefficient" way. You'll be vastly faster.
I think there may be some mileage in using splitAt here, which will give you back both the five sprites to move and the trimmed pool in a single method invocation:
    val (moved, newPassivePool) = passivePool.splitAt(5)
    activePool ++= moved
    passivePool = newPassivePool
Bonus points if you can assign directly back to passivePool on the first line, though I don't think it's possible in a short example where you're defining the new variable moved as well.