On a JBoss server lives a JSP. Let's call it takestoolong.jsp.
It does some processing that takes 30-45 seconds. (Yes, I know it should be optimized.)
Then it returns. The 30-45 seconds is considered too long for the user experience, for obvious reasons, so Akamai and load balancers are brought in to reduce this time by caching the result of the request. At some point, however, the JSP's returned content will change and the cache will expire. How do you prevent users from again seeing the 30-45 second download time? In particular, how do you configure Akamai so that it does not key on IP or other factors, but returns the processed result to the Android device/user without the 30+ second delay? How do you configure Akamai for Android devices?
My suggestion would be to isolate takestoolong.jsp from the user altogether, so that they only ever see the cached result.
To do that you'd want a secondary process that makes the request to the takestoolong.jsp page (it could be a simple cron job that hits the service and writes the result to an HTML page) and then point the users (or Akamai) at the server delivering that static fragment of HTML.
That way you can refresh the results without the user ever seeing a delay. Even when the content does change, the user keeps seeing the old content until the moment the write is committed, but never the delay.
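To make that concrete, here's a rough sketch of what the secondary refresher could look like as a small standalone Java program (a cron job hitting the page with curl works just as well). The URL and output path are made up; substitute your real JSP address and whatever directory your web server or Akamai origin serves static files from.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ResultRefresher {

    // Hypothetical locations; substitute the real JSP URL and the directory
    // your web server (or Akamai origin) serves static content from.
    private static final String SOURCE_URL = "http://localhost:8080/app/takestoolong.jsp";
    private static final Path TARGET = Paths.get("/var/www/cached/result.html");

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Refresh every 10 minutes; tune this to how often the content actually changes.
        scheduler.scheduleAtFixedRate(ResultRefresher::refresh, 0, 10, TimeUnit.MINUTES);
    }

    private static void refresh() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(SOURCE_URL).openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(60_000); // the JSP itself may take 30-45 s
            try (InputStream in = conn.getInputStream()) {
                // Write to a temp file first, then move it over the target,
                // so a reader never sees a half-written page.
                Path tmp = Files.createTempFile(TARGET.getParent(), "result", ".tmp");
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                Files.move(tmp, TARGET, StandardCopyOption.REPLACE_EXISTING);
            }
        } catch (Exception e) {
            // If a refresh fails, the previous snapshot keeps being served.
            e.printStackTrace();
        }
    }
}
```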
[FWIW, I used this approach to deal with a similar issue: a huge, horribly complex SQL query that had to grab data from SQL Server, then run a bunch of sub-queries against a MySQL database and consolidate the response. By using the intermediary output page and relying on IIS and browser caching, the users sometimes had slightly staler data than the absolute truth, but they were never exposed to the actual response time of the underlying query.]
Related
I have an Android app Activity which fills a RecyclerView with some content from a DB hosted in Google's GCP CloudSQL. The data is served via a python flask app run by Google's AppEngine.
My problem is that some step (or steps) along the way, from sending the GET request to having the RecyclerView inflated and filled with the content, introduces a heavy delay. The whole process takes about 10 to 12 seconds.
I have trouble understanding where the problem lies. I have tried the following and haven't been able to isolate a culprit for the delay. Keep in mind that the Android app runs on an Android Studio emulator on my local machine:
If I run my flask app locally on my computer, but still request the data from the CloudSQL DB, the process is fast. So it would seem that the problem is neither the DB nor the Android app's RecyclerView inflation step (and therefore it must be the AppEngine flask app).
But if I run both the DB and the flask web server on GCP and request the data via my web browser, I also get the data (JSON) fast. So it would seem that the flask app hosted on GCP's AppEngine is also fine.
So, if according to the above tests all three individual elements, the DB, the AppEngine flask app and the RecyclerView inflation, behave well in terms of speed, why is the chained process so slow in my app?
Most information out there on similar problems ascribes the slow response to AppEngine's cold start, but after many attempts I am starting to think that this might not be my problem. Aside from the fact that I get the response decently fast when I request the DB content via a web browser, I have checked and/or tested the following:
Reducing the number of items in the list.
Setting the minimum number of instances to 1 and enabling warmup requests (my app processes them).
Setting the minimum number of idle instances to 1.
Using a production WSGI server (waitress) instead of the Flask development server.
Verifying that the AppEngine app is located in the "right" GCP region (europe-west-3). By "right" I mean geographically closest to me and the same region in which the CloudSQL DB is hosted.
Setting up a "keep-alive" refresh cron job every minute to ensure no instance has to start cold.
Trying manual scaling with one fixed instance.
None of the above solved the problem or reduced the total waiting time in the app. According to the AppEngine Dashboard, the loading latency of many requests is between 2 and 4 seconds, which is not the 10-12 seconds I have to wait in the Android app, but still seems abnormally long, taking into account all the measures in place for avoiding a cold start (and, again, the fact that retrieving the DB info via web browser works at normal speed). This makes me think that either I have not actually solved the cold start issue, or the latency problem lies elsewhere.
I am lost and do not know where to keep looking for issues. I would appreciate some pointers in the right direction before I have to implement an on-device DB cache.
EDIT
Below is a summary of HTTP request latencies to the web server (/refresh is the instance keep-alive resource, and /allrecords is an actual working endpoint). As can be seen, the latencies are quite OK, which matches the good speed when retrieving the data via web browser.
I am quite confident the problem has nothing to do with AppEngine cold start, so one would think it must lie within the Android app; but if I make the DB request against a web server on my local machine, the Android app works at normal speed.
EDIT 2
Retrieving the info from the web server in JSON format via the emulated device's web browser also works fast. So it does not seem to be a problem with the emulated device's internet connection speed.
Using WebExtensions, I am trying to write a content script that counts the number of HTTP responses per web page.
I just started looking into WebExtensions, so I may be completely off track, but so far my approach has been to have the background script listen to browser.webRequest.onHeadersReceived. When it fires, the callback invokes browser.tabs.sendMessage with the tabId from which the request for this response originated.
In the content script I use browser.runtime.onMessage.addListener to listen to those messages from the background script.
With this approach I have two problems:
1. The background script tries to send messages to the content script even before the content script has started listening, resulting in errors from sendMessage and thus lost information.
2. On page reload, some of the messages from the background script are received before the content script is reloaded, some are lost (presumably during the window after one content script is unloaded and before the next one has started), and some are received after.
I've been looking into whether browser.storage.local can help me with this situation. With it I can avoid the problem of messages being lost, by simply keeping the count in that storage.
But I still have the problem in the background script that I don't know whether to increment the count for the web page that was displayed before the reload happened or for the one after (due to problem #2 above).
I think that instead of using the tabId as a "pointer" to where the count is increased, having some kind of ID for the web page to which the response belongs would help. But I haven't been able to find anything like that in the documentation yet. Then again, I'm new to WebExtensions, so there may be a completely different approach I'm missing.
I'm currently working on an Android application that requires reading the call history and text history. For further analysis, I've extracted the huge number of entries from the corresponding content providers and inserted all of them into an SQLite database.
The problem I've encountered is that when this runs on a phone that has been used for years (meaning there is an enormous amount of data generated from calls and texts), the retrieval and database-building process takes too much time and may even cause the app to crash. Even when I put this processing in an AsyncTask, the problem persists. So my question is:
Is simply moving time-consuming operations off the main UI thread the right approach, or is there a better way, if any, to handle a very large amount of data on Android?
Use pagination logic. Fetch only the most recent and relevant data and load older data if the user requests it.
Call history on most Android phones is limited to 500 entries (CallLog.Calls), while the SMS provider has no such limit. You can query the row count of the tables and limit your queries to 50 rows at a time, handing each batch to a separate thread for processing (sketched below). Also make sure you run this in a background service with low priority so as not to disturb any other operations going on on the device.
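For illustration, here's roughly what paginated reading of the call log could look like on a background thread. The 50-row page size, the PageCallback interface and the LIMIT/OFFSET suffix in the sort-order argument are my own assumptions (the LIMIT trick works with the call log provider on most devices but isn't a documented guarantee), and you need the READ_CALL_LOG permission.

```java
import android.content.ContentResolver;
import android.database.Cursor;
import android.provider.CallLog;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallLogPager {

    private static final int PAGE_SIZE = 50;                  // assumed batch size
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Reads the call log in pages of PAGE_SIZE rows, off the main thread. */
    public void loadAll(final ContentResolver resolver, final PageCallback callback) {
        executor.execute(() -> {
            int offset = 0;
            while (true) {
                Cursor page = resolver.query(
                        CallLog.Calls.CONTENT_URI,
                        new String[] { CallLog.Calls.NUMBER, CallLog.Calls.DATE, CallLog.Calls.DURATION },
                        null, null,
                        // Provider-dependent: many devices accept LIMIT/OFFSET in the sort order.
                        CallLog.Calls.DATE + " DESC LIMIT " + PAGE_SIZE + " OFFSET " + offset);
                if (page == null || page.getCount() == 0) {
                    if (page != null) page.close();
                    break;                                     // no more rows
                }
                try {
                    callback.onPage(page);                     // e.g. insert this batch into SQLite
                } finally {
                    page.close();
                }
                offset += PAGE_SIZE;
            }
        });
    }

    /** Hypothetical callback; wire this up to your own DB insert code. */
    public interface PageCallback {
        void onPage(Cursor page);
    }
}
```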
The Scenario: my app has many activities. At the end, the user uploads all of their data in just one click.
The Problem: an issue arises if the user is on the move and goes out of service (no internet / poor connectivity); then they cannot upload the data.
In this context I want to know what the most efficient approach would be.
I have thought of one approach: if there is poor connectivity / no service, I save the data locally in SQLite, and the next time the user opens the app a thread checks whether connectivity is available. If it is, the data is uploaded immediately.
I will be eagerly waiting for your comments.
Save all your data to SQLite with a sync flag. Use a service to periodically check for unsynced rows, try to send them to the server in the background, and update the flag once the sync is completed (see the sketch below).
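A minimal sketch of that idea, assuming a hypothetical uploads table with _id, payload and synced columns; the Uploader interface just stands in for whatever HTTP client you actually use:

```java
import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class SyncHelper {

    /** Save the data locally first; synced = 0 means "not uploaded yet". */
    public static long saveLocally(SQLiteDatabase db, String payloadJson) {
        ContentValues values = new ContentValues();
        values.put("payload", payloadJson);
        values.put("synced", 0);
        return db.insert("uploads", null, values);
    }

    /** Called from the background service whenever connectivity is available. */
    public static void syncPending(SQLiteDatabase db, Uploader uploader) {
        Cursor cursor = db.query("uploads", new String[] { "_id", "payload" },
                "synced = 0", null, null, null, null);
        try {
            while (cursor.moveToNext()) {
                long id = cursor.getLong(0);
                String payload = cursor.getString(1);
                if (uploader.upload(payload)) {               // flip the flag only on success
                    ContentValues values = new ContentValues();
                    values.put("synced", 1);
                    db.update("uploads", values, "_id = ?", new String[] { String.valueOf(id) });
                }
            }
        } finally {
            cursor.close();
        }
    }

    /** Hypothetical uploader; plug in your HTTP client of choice. */
    public interface Uploader {
        boolean upload(String payloadJson);
    }
}
```

The background service would simply call syncPending() whenever it detects that connectivity has returned.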
Another approach, if you are syncing from SQLite to a SQL Server directly, is to use transactions or batched updates, so that if connectivity fails the transaction is rolled back.
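If you go the transaction route on the SQLite side, it could look like this sketch (reusing the hypothetical uploads table from above): the batch of flag updates commits only if every update ran, otherwise it rolls back.

```java
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;
import java.util.List;

public class BatchFlagUpdate {

    /** Mark a whole batch as synced only if every row update runs; otherwise roll back. */
    public static void markSynced(SQLiteDatabase db, List<Long> uploadedIds) {
        db.beginTransaction();
        try {
            for (long id : uploadedIds) {                  // ids the server confirmed
                ContentValues values = new ContentValues();
                values.put("synced", 1);
                db.update("uploads", values, "_id = ?", new String[] { String.valueOf(id) });
            }
            db.setTransactionSuccessful();                 // commit point
        } finally {
            db.endTransaction();                           // rolls back if not marked successful
        }
    }
}
```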
For learning how to upload data I recommend watching Google I/O 2012 - Making Good Apps Great: More Advanced Topics for Expert Android Developers : http://www.youtube.com/watch?v=PwC1OlJo5VM# from 16:43. It deals with efficiency and impact on battery life.
To summarize the video:
Do one large upload instead of several small uploads due to how the phone radio works, and try to minimize touching the network.
At a lower level, do as user370305 said: try to upload the data, and if there is no connectivity, delay the upload until the next time the user opens the app or clicks the upload button.
I am developing an application that retrieves some data from a server.
I have two options:
Get the whole data set from the server up front and then work with it, using the 'singleton' pattern to reduce the number of queries
For each query, get the corresponding data set from the server
What is the best choice?
In my opinion it depends.
It depends on the size of the data and if it even makes sense to return data that your user may not even need.
Personally, in my app that I am building I will be returning only the data that is required at that time. Obviously though, once I have the data I won't be fetching it again if it makes sense to keep hold of it for good or even just temporarily.
I agree with C0deAttack's answer. Your goal should be to minimize network traffic within the constraints of your app being a "good citizen" on the phone. (That means that your app does not negatively impact the user or other applications by using too many resources — including memory and file space.)
From the sound of it, I'm guessing that the data is not that voluminous. If so, I would recommend caching the response and using it locally, thus avoiding repeated queries to the server. Depending on how often the data changes, you might even consider making it persistent, so that the app doesn't have to query the server the next time it starts up. If the response includes an estimated time before it should be considered outdated, that would help in establishing an update schedule. (Google's license server uses this idea.)
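As a rough illustration of caching and persisting the response, here is a tiny sketch; the file location, the TTL and the idea of using the file's modification time as the fetch time are all my own assumptions.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/** Tiny persistent cache: keep the last server response on disk and only go
 *  back to the network when it is older than a time-to-live. */
public class ResponseCache {

    private final Path cacheFile;
    private final long ttlMillis;

    public ResponseCache(Path cacheFile, long ttlMillis) {
        this.cacheFile = cacheFile;
        this.ttlMillis = ttlMillis;
    }

    /** Returns the cached response, or null if it is missing or stale. */
    public String getIfFresh() throws IOException {
        if (!Files.exists(cacheFile)) return null;
        long age = System.currentTimeMillis() - Files.getLastModifiedTime(cacheFile).toMillis();
        if (age > ttlMillis) return null;
        return new String(Files.readAllBytes(cacheFile), StandardCharsets.UTF_8);
    }

    /** Store a fresh response; its modification time doubles as the fetch time. */
    public void put(String response) throws IOException {
        Files.write(cacheFile, response.getBytes(StandardCharsets.UTF_8));
    }
}
```

Usage would be to call getIfFresh() first and only hit the server (followed by put()) when it returns null.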
P.S. I don't see that this has anything (directly) to do with a singleton pattern.
How about storing your data in an SQLite database and doing your queries with SQL? Since you get sorting and ordering for free, it can help you write less code. Also, you get instant offline functionality if your user has no internet connection. :)
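For example, once the rows are in SQLite, "newest first above some cutoff" becomes a single query instead of Java sorting code; the table and column names here are made up:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class ItemQueries {

    /** Hypothetical "items" table with _id, title and created columns. */
    public static Cursor newestFirstSince(SQLiteDatabase db, long cutoff) {
        return db.query(
                "items",
                new String[] { "_id", "title", "created" },
                "created > ?",                              // filter in SQL...
                new String[] { String.valueOf(cutoff) },
                null, null,
                "created DESC");                            // ...and sort in SQL too
    }
}
```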