Using WebExtensions, I am trying to write a content script that counts the number of HTTP responses per web page.
I just started looking into WebExtensions, so I may be completely off track, but so far my approach has been to have the background script listen to browser.webRequest.onHeadersReceived. When it fires, the callback invokes browser.tabs.sendMessage with the tabId from which the request for this response originated.
In the content script I use browser.runtime.onMessage.addListener to listen to those messages from the background script.
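Roughly, this is what I have at the moment (a simplified sketch; the message shape is just a placeholder, and the manifest already has the webRequest and <all_urls> permissions):

    // background script - simplified sketch of the approach described above
    // (assumes the promise-based browser.* API, e.g. Firefox or the webextension-polyfill)
    browser.webRequest.onHeadersReceived.addListener(
      (details) => {
        if (details.tabId >= 0) {
          browser.tabs
            .sendMessage(details.tabId, { type: "response-seen", url: details.url })
            .catch((err) => console.warn("content script not listening yet?", err));
        }
      },
      { urls: ["<all_urls>"] }
    );

    // content script
    let responseCount = 0;
    browser.runtime.onMessage.addListener((message) => {
      if (message.type === "response-seen") {
        responseCount++;
      }
    });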
With this approach I have two problems:
1. The background script tries to send messages to the content script even before the content script has started listening, resulting in errors from sendMessage and thus lost information.
2. On page reload, some of the messages from the background script are received before the content script is reloaded, some are lost (presumably during the period after one content script was unloaded and before the next one started), and some are received afterwards.
I've been looking into whether browser.storage.local can help me with this situation. With it I can avoid losing messages by simply keeping the count in that storage.
But in the background script I still don't know whether to increment the count for the web page that was displayed before the reload happened or the one displayed after it (because of problem 2 above).
I think that instead of using the tabId as a "pointer" to where the count is increased, having some kind of ID for the web page to which the response belongs would help. But I haven't been able to find anything like that in the documentation yet. Then again, I'm a novice with WebExtensions, so there may be a completely different approach I'm missing.
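For the storage.local idea, the sketch I've been experimenting with looks something like this (still keyed by tabId, so the reload ambiguity from problem 2 remains; the read-modify-write is also not atomic, which is fine for a sketch):

    // background script - storage.local variant, still keyed by tabId
    browser.webRequest.onHeadersReceived.addListener(
      async (details) => {
        if (details.tabId < 0) return;
        const key = `count-${details.tabId}`;
        const stored = await browser.storage.local.get(key);
        const current = (stored[key] as number | undefined) ?? 0;
        await browser.storage.local.set({ [key]: current + 1 });
      },
      { urls: ["<all_urls>"] }
    );

The content script (or a popup) can then read the count back out of browser.storage.local instead of relying on messages arriving while it is loaded.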
I am deploying my Node.js sample app to the Google App Engine flexible environment. When I hit my API via the App Engine URL (the one of the form appspot.com) over my mobile data connection, it takes around 11 seconds to respond, while other APIs respond in milliseconds.
Also, the delay only happens when I open my Android app and send the first request to the server; after that, all requests take the normal time. The delay comes back when I open the app again and send another request.
Edit - I found that
This can be caused when your application is still booting up or warming up instances to serve the request, and is called loading latency. To avoid such scenarios you can implement a health check handler, like a readiness check, so that your application only receives traffic when it's ready.
That's why I checked my logs: the readiness check sometimes takes around 1 second and sometimes around 200 ms.
Can anyone please tell me whether there is anything wrong with how my instances warm up? I don't think cold boot time is causing this problem.
Edit 2
I have also tried setting min_num_instances: 2 so that, once loaded, at least two of my instances will not have to boot up again, but the delay is still the same.
Edit 3
runtime: nodejs
#vm: true
env: flex
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 3
Edit 4
I am noticing some strange behaviour: when I use the app Packet Capture to capture traffic, all HTTPS requests (if I don't enable SSL proxying) and all HTTP requests complete in milliseconds, whereas without this app all HTTP/HTTPS requests take 11-16 seconds.
I don't know how, but could there be some kind of certificate issue here?
Edit 5
Below I have attached a Network Profiler trace where the delay is 15 seconds.
Please Help
Depending on which App Engine environment you are using and how you set up scaling, there is always some loading time if you don't have a ready instance to serve a request. But if you have a readiness check to ensure your instance is ready (and not cold started for the request), then there shouldn't be a problem.
Can you find a loading request or any correspondingly slow request in your logs? If not, then it's likely an issue with the app. If possible, instead of calling this API from your app alone, call it from two apps (one already open, one not). If you notice that the one that's already open gets a response faster than the other one, that points to a problem with the app itself. App Engine can't determine whether or not your app is pre-opened, so any difference would be client-side.
=== Additional information ===
In your logs there's no delay at all: the request reaches Google and is processed within a few milliseconds, so I'm sure there's something application-side. Maybe your app is constructing the request URL (for the first request) from some other source, and that is what causes the delay? App Engine has no knowledge of whether your app is open or whether it's sending its first request after being opened, so it cannot act differently based on that. As long as your App Engine instance is ready and available, it treats your request the same way regardless of whether it's the first request after the app is opened.
The issue is resolved now. It was caused by the network service provider, Bharti Airtel: their DNS lookup was taking a long time to resolve the hostname. After explicitly switching to an alternative DNS such as Google's 8.8.8.8, the issue went away completely. Maybe it's a compatibility issue between Airtel and Google Cloud.
Last time I checked, I remember having to add a warmup request handler so that Google knows the instance is up and running and can be used to answer calls. Keep in mind that the code has to live EXACTLY under the endpoint you specify in the handler in the yaml file. (Wouldn't be the first time someone forgets that.)
Here are the docs: https://cloud.google.com/appengine/docs/standard/python/configuring-warmup-requests. That page is Python-specific, but the docs also cover other languages such as Go and Java.
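For a Node.js app on the flexible environment, the rough equivalent is making sure the health-check endpoints respond quickly once the app has finished booting. A minimal Express sketch (assuming the updated split health checks; legacy health checks hit /_ah/health instead):

    // server.ts - minimal sketch of flexible-environment health check handlers
    import express from "express";

    const app = express();

    // App Engine only routes traffic to the instance once this returns 200
    app.get("/readiness_check", (_req, res) => res.status(200).send("ready"));

    // App Engine restarts the instance if this stops returning 200
    app.get("/liveness_check", (_req, res) => res.status(200).send("ok"));

    app.get("/", (_req, res) => res.send("hello"));

    const port = Number(process.env.PORT) || 8080;
    app.listen(port, () => console.log(`listening on ${port}`));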
If the problem is client-dependent (each time a new client spawns and makes a call it sees the latency), then it is most likely either a problem with the client app itself or with initialization, registration, or DNS resolution.
You could also try to reproduce the requests with cURL or something similar and see whether you observe the same delay there.
I have a scenario I need to resolve. Currently I'm able to connect to an embedded system over a socket connection from an Android device.
I was able to use AsyncTask to send XML commands, receive the responses, and update the UI with the results. But for the last step I need to send a command that starts the system working, after which I will keep receiving messages from it. They arrive at varying intervals (we are talking about a few hundred milliseconds, roughly 200-500 ms).
So my question is:
AsyncTask wouldn't work, because the "work" varies by more than 100 ms and I'm not sure when the messages will be sent, so I can't use an AsyncTask and show a dialog for an unknown amount of time.
I have read that an IntentService or a Service could do this job, but I'm not sure yet whether that would be a good solution.
What would be a good solution for receiving these messages and for updating the UI?
I'm using https://github.com/google/go-gcm to send push notifications from our Go backend to Android devices. Recently, these push notifications started failing because the call to SendXmpp() was returning the following error:
write tcp <IP>:<port>-><IP>:<port>: write: connection timed out
Restarting the Go process that calls SendXmpp() makes this error go away, and push notifications start working again. But of course, restarting the Go process isn't ideal. Is there something I can do to handle this kind of error explicitly? For instance, should I close the current XmppClient and retry sending the message, so that the retry instantiates a new XmppClient and opens a new connection?
I would recommend using or implementing an (exponential) backoff. There are a number of options on GitHub: https://github.com/search?utf8=%E2%9C%93&q=go+backoff though that's surely not a comprehensive list, and it's not terribly difficult to implement yourself.
The basic idea is that you pass the function you'd like to call into the backoff function, which calls it until it either succeeds or hits a maximum-failure limit. Between failures, the amount of time waited increases. If you're hammering a server and causing it to return errors, a method like this will typically solve your problem and make your application more reliable.
Additionally, I'd recommend looking for one that has an abort feature. This can be implemented fairly easily in Go by passing a channel into the backoff function (along with the function you want to call). Then, if your app needs to stop, you can signal on the abort channel so that the backoff isn't sitting there in, say, a 300-second wait.
Even if this doesn't resolve your specific problem, it will generally have a positive effect on your app's reliability and on the third-party APIs you interact with (you don't want to DoS your partners).
I'm currently working on an app that has to query a web SQL database and show the results in a ListView, and I would really appreciate some input on the best way to do that.
I would like the results to be shown as quickly as possible, so if I can somehow show the first result immediately while still loading the rest that would be great.
Reading up on the subject, it seems the best way to send the data (which includes a small image) is as a JSON object (or array).
The ideas I had so far:
* HTTP requests with the index of the last result - downside is that the server will run the same query over and over again and just send me a few results at a time (a rough sketch of what I mean is below the list).
* Open a socket between the device and the server until the user leaves the results view - downside is excessive use of network resources.
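For the first idea, I imagine the server side looking roughly like this (just a sketch, assuming a Node/Express backend; runQuery is a made-up helper around whatever SQL driver is in use):

    // sketch of the "index of last result" idea; runQuery is hypothetical
    import express from "express";

    declare function runQuery(sql: string, params: unknown[]): Promise<unknown[]>;

    const app = express();
    const PAGE_SIZE = 20;

    app.get("/items", async (req, res) => {
      // the client sends how many results it already has; the server returns the next page
      const offset = Number(req.query.after ?? 0);
      const rows = await runQuery(
        "SELECT id, title, thumbnail_base64 FROM items ORDER BY id LIMIT ? OFFSET ?",
        [PAGE_SIZE, offset]
      );
      res.json({ items: rows, nextAfter: offset + rows.length });
    });

    app.listen(8080);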
Do they sound OK?
Is there something else I'm missing?
Thanks!
The ideas I had so far: * Http requests with index of last result - downside is that the server will run the same query over and over again and just send me a few results at a time.
=> This is the standard idea, but I would say that how you send requests and fetch data depends on your particular requirements. For example, you could use a Service if you want to show the first set of results and at the same time fetch the next sets in the background.
So whenever the service gets another set of data, it sends a message to your activity with the newly received data, and your activity updates the UI with it.
On a JBoss server lies a JSP. Let's call it takestoolong.jsp.
It does some processing that takes 30-45 seconds. (Yes, I know it should be optimized.)
Then it returns. The 30-45 seconds is deemed too long for the user experience, for obvious reasons. So Akamai and load balancers are brought in so that this time can be reduced by caching the result of the request. At some point, however, the content the JSP returns will change and the cache will time out. How do you prevent users from again seeing the 30-45 second load time? In particular, how do you configure Akamai so that it does not key on IP or other factors but returns the processed result to the Android device/user without the 30+ second delay? How do you configure Akamai for Android devices?
My suggestion would be to isolate takestoolong.jsp from the user altogether, so that they only ever see the cached result.
To do that, you'd want a secondary process that makes the request to the takestoolong.jsp page (it could be a simple cron job that hits the service and writes the result to an HTML page), and then point the users (or Akamai) at the server delivering that static fragment of HTML.
That way you can refresh the results without the user ever seeing a delay. Even when the content does change, up until the moment the write is committed the user will still see the old content, but with no delay.
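As a sketch of that secondary process (assuming Node is available on the box; the URL, output path, and schedule are placeholders), something as small as this run from cron would do. Writing to a temp file and renaming keeps the swap atomic, so readers never see a half-written page:

    // refresh-snapshot.ts - fetch the slow JSP and write a static snapshot for users/Akamai to hit
    import { writeFile, rename } from "node:fs/promises";

    const SOURCE_URL = "http://app-server.internal/takestoolong.jsp"; // placeholder
    const OUTPUT = "/var/www/static/takestoolong.html";               // placeholder

    async function refresh(): Promise<void> {
      const res = await fetch(SOURCE_URL); // the only place that ever waits 30-45 seconds
      if (!res.ok) throw new Error(`upstream returned ${res.status}`);
      const html = await res.text();
      await writeFile(`${OUTPUT}.tmp`, html);
      await rename(`${OUTPUT}.tmp`, OUTPUT); // swap in the fresh snapshot
    }

    refresh().catch((err) => {
      console.error("snapshot refresh failed; keeping the previous file", err);
      process.exit(1);
    });

A crontab entry such as */5 * * * * node /opt/scripts/refresh-snapshot.js then keeps the snapshot as fresh as you need it.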
[FWIW, I used this approach to deal with a similar issue: a huge, horribly complex SQL query that had to grab data from SQL Server, then run a bunch of sub-queries against a MySQL database and consolidate the response. By using the intermediary output page and relying on IIS and browser caching, users sometimes saw data that was slightly more stale than the absolute truth, but they were never exposed to the actual response time of the underlying query.]