I'm having a big performance problem when using AWS Lambda.
I created some basic Lambdas that connect to an RDS MySQL database and insert/select/delete data in order to return it to the user.
In my client, which is an Android app, I'm invoking the Lambda using the aws-android-sdk-lambda:2.2.17 library.
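For reference, the invocation follows the standard LambdaInvokerFactory pattern from the SDK documentation; the interface, function name, and identity pool ID below are illustrative placeholders, not my exact code:

import android.content.Context;
import com.amazonaws.auth.CognitoCachingCredentialsProvider;
import com.amazonaws.mobileconnectors.lambdainvoker.LambdaFunction;
import com.amazonaws.mobileconnectors.lambdainvoker.LambdaInvokerFactory;
import com.amazonaws.regions.Regions;

// Proxy interface; the method name must match the deployed Lambda function.
interface ItemsInterface {
    @LambdaFunction
    String getItems(String request); // request/response types are illustrative
}

class LambdaClient {
    // Built once; the returned proxy must be invoked off the main thread,
    // e.g. from an AsyncTask, since it performs network I/O.
    static ItemsInterface build(Context appContext) {
        CognitoCachingCredentialsProvider credentials = new CognitoCachingCredentialsProvider(
                appContext, "us-east-1:00000000-0000-0000-0000-000000000000", Regions.US_EAST_1);
        LambdaInvokerFactory factory = new LambdaInvokerFactory(
                appContext, Regions.US_EAST_1, credentials);
        return factory.build(ItemsInterface.class);
    }
}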
The mismatch is that in CloudWatch I can see the Lambdas executing really quickly, around 60 ms, but in my Android app I get the results only after 8-12 seconds.
Unfortunately, everything that happens between the Lambda finishing its invocation and the user getting the response is a black box. The problem might be in the implementation of the AWS SDK, but I doubt it.
Note - the internet connection is not the problem, and it's not a cold-start problem.
Please advise.
Related
I am deploying my Node.js sample app to the Google App Engine flexible environment, and when I hit my API using the App Engine URL (the one of the form appspot.com) over mobile data, it takes around 11 secs to send a response, while other APIs respond in milliseconds.
Also, the delay only happens when I open my Android app and send the first request to the server; after that, all requests take a normal time. The delay comes back when I open the app again and send a request to the server.
Edit - I found that
This can be caused when your application is still booting up or warming up instances to serve the request; this is known as loading latency. To avoid such scenarios you can implement a health check handler, such as a readiness check, so that your application only receives traffic when it is ready.
That's why I checked my logs: the readiness check is performed sometimes in around 1 sec and sometimes in around 200 ms.
Can anyone please tell me whether there is anything wrong with how my instances warm up? I don't think cold boot time is causing this problem.
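For reference, my understanding is that in the flexible environment the readiness check is configured in app.yaml roughly like this (the values shown are the documented defaults, not my tuned settings):

readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300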
Edit 2
I have also tried setting min_num_instances: 2 so that, once loaded, at least my 2 instances will not have to boot up again, but the delay is still the same.
Edit 3
runtime: nodejs
#vm: true
env: flex
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 3
Edit 4
I am noticing a strange behaviour: when I use the app Packet Capture to capture traffic, all HTTPS requests (if I am not enabling SSL proxying) and all HTTP requests execute in milliseconds, whereas without this app all HTTP/HTTPS requests take 11-16 secs.
I don't know how, but could there be some kind of certificate issue here?
Edit 5
Below I have attached a Network Profiler capture where the delay is 15 secs.
Please help.
Depending on which App Engine environment you are using and how you set up the scaling, there is always some loading time if you don't have a ready instance to serve a request. But if you have a readiness check to ensure your instance is ready (and not cold-started for the request), then there shouldn't be a problem.
Can you find a loading request or any corresponding slow request in your logs? If not, then it's likely an issue with the app. If possible, instead of calling this API from one app, do it from two apps (one already open, one not). If you notice that the one that's already open gets a response faster than the other, that indicates a problem with the app itself. App Engine can't determine whether or not your app is pre-opened, so any difference would be client-side.
=== Additional information ===
In your logs, there's no delay at all. The request entered Google and was processed within a few milliseconds. I am sure it's something application-side. Maybe your app is constructing the request URL for the first request from some other source, and that results in the delay? App Engine has no knowledge of whether your app has just been opened or whether it's sending its first request after being opened, so it cannot act differently based on that. As long as your App Engine instance is ready and available, it will treat your request the same way regardless of whether or not it's your first request after the app is opened.
The issue is resolved now. It was happening because of my network service provider, Bharti Airtel: their DNS lookup was taking a long time to resolve the hostname. After explicitly switching to an alternative DNS such as Google's 8.8.8.8, the issue was completely resolved. Maybe it's a compatibility issue between Airtel and Google Cloud.
Last time I checked, I remember having to add a warmup request handler so that Google would know that the instance is up and running and can be used to answer calls. Keep in mind that the code has to live EXACTLY under the endpoint you specify in the handler section of the yaml file (it wouldn't be the first time someone forgot that).
Here are the docs: https://cloud.google.com/appengine/docs/standard/python/configuring-warmup-requests (this page is Python-specific, but the docs also cover other languages such as Go and Java).
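If I remember correctly, the Java flavour (standard environment) boils down to a servlet on the /_ah/warmup path, with warmup requests enabled in the app's configuration; this is a sketch from memory, not a verified copy of the docs:

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// App Engine sends GET /_ah/warmup to a new instance before routing real traffic to it.
@WebServlet(urlPatterns = {"/_ah/warmup"})
public class WarmupServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Do the expensive initialization here (DB connections, caches, etc.).
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}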
If the problem is client-dependent (each time a new client spawns and makes a call, it sees the latency), then it is most likely either a problem with the client app itself or with initialization, registration, or DNS resolution.
You could also try to reproduce the requests with curl or a similar tool, and check whether you see the same delay there.
I'm building a react-native app using Create React Native App. For the backend it uses Firebase Firestore. The app works fine on iOS but fails with the following error on Android (both in the emulator and on a device) when trying to fetch data from the backend:
22:02:37: [2018-05-22T05:02:33.751Z] @firebase/firestore:, Firestore (4.10.1): Could not reach Firestore backend.
- node_modules\react-native\Libraries\ReactNative\YellowBox.js:71:16 in error
- node_modules\@firebase\logger\dist\cjs\src\logger.js:97:25 in defaultLogHandler
- ... 18 more stack frames from framework internals
Any idea what the problem could be and how to debug this?
The error seems to be generic, as there are other questions with the same error message. But in this case it happens on Android only.
Full log and stack trace:
21:50:51: Warning: Expo version in package.json does not match sdkVersion in manifest.
21:50:51:
21:50:51: If there is an issue running your project, please run `npm install` in C:\Users\grigor\Documents\Bitbucket\AwesomeProject and restart.
21:51:08: Finished building JavaScript bundle in 26098ms
21:51:14: Running app on XT1053 in development mode
21:51:34: [2018-05-25T04:51:26.597Z] @firebase/firestore:, Firestore (4.10.1): Could not reach Firestore backend.
- node_modules\react-native\Libraries\ReactNative\YellowBox.js:71:16 in error
- node_modules\@firebase\logger\dist\cjs\src\logger.js:97:25 in defaultLogHandler
- ... 18 more stack frames from framework internals
21:51:37: Setting a timer for a long period of time, i.e. multiple minutes, is a performance and correctness issue on Android as it keeps the timer module awake, and timers can only be called when the app is in the foreground. See https://github.com/facebook/react-native/issues/12981 for more info.
(Saw setTimeout with duration 3299464ms)
- node_modules\react-native\Libraries\ReactNative\YellowBox.js:82:15 in warn
- node_modules\react-native\Libraries\Core\Timers\JSTimers.js:254:8 in setTimeout
- node_modules\@firebase\auth\dist\auth.js:37:577 in Hc
* null:null in <unknown>
- node_modules\@firebase\auth\dist\auth.js:15:932 in y
- node_modules\@firebase\auth\dist\auth.js:37:606 in Ic
- node_modules\@firebase\auth\dist\auth.js:210:0 in kk
- node_modules\@firebase\auth\dist\auth.js:209:665 in start
- node_modules\@firebase\auth\dist\auth.js:215:38 in Dk
- node_modules\@firebase\auth\dist\auth.js:253:425 in ql
- node_modules\@firebase\auth\dist\auth.js:255:146 in <unknown>
- node_modules\@firebase\auth\dist\auth.js:19:220 in <unknown>
* null:null in Gb
* null:null in Cb
- node_modules\@firebase\auth\dist\auth.js:22:103 in Sb
- node_modules\@firebase\auth\dist\auth.js:15:643 in jb
- ... 10 more stack frames from framework internals
I don't have an exact solution for the problem, but I realised that the way I was interacting with Firebase made my application more susceptible to this error. Maybe you can spot some of my own design flaws in your project?
What I've found is that I was calling initializeApp outside of a try/catch, which meant that the entire JavaScript module would fail whenever the error was encountered. So the first workaround is to handle initialization safely.
Secondly, this error became prominent because of the way I structured my calls to firestore(). For example, my first call to firebase.firestore() was embedded within a method that returned a Promise, i.e.:
() => firebase.firestore().collection('someCollection').get().then(...).catch(e => ...);
Now, with this approach, if the interaction with Firestore fails before a Promise can be returned, we don't actually catch the error! It occurs too early in the chain for a Promise to be created, so the application appears to fail at some much deeper level than anything that could be caught inside the application. But that's wrong!
The correct implementation would be to wrap the interaction with firebase.firestore() within a Promise first:
return new Promise(resolve => resolve(firebase.firestore().collection(...).get())).then(q => ...).catch(e => ...);
Because an exception thrown inside the executor automatically rejects the promise, a synchronous failure in firebase.firestore() now lands in the catch handler.
Hope this helps in some way. I know it's a tricky problem!
What I am trying to do: I want to develop REST APIs on AWS to use in my Android application. The purpose of these REST APIs is to call some other REST APIs, fetch and process the data, and send it back as the response.
What I tried: I followed the AWS tutorial Using AWS Lambda as Mobile Application Backend (Custom Event Source: Android). Everything works as expected, but the first response from AWS is too slow, something like ~8 secs. From then on, in the same session, it responds in 1 to 2 secs.
That may be because it takes a long time to set up the connection and invoke my function on Lambda.
Question: Is there any alternative to this? I want a quick response every time, including the first time. Am I trying the right thing (AWS Lambda), or should I try something else?
Try increasing the memory size in the Lambda configuration. It usually makes the function run on hardware with more CPU power, making it slightly faster. However, in your case it seems most of the delay is because the function goes "cold" and AWS no longer has it in memory.
There are a few things you can try:
-> Reduce the size of your deployment package so it loads faster. The first invocation will still be slow, but you might improve by a few seconds.
-> Create another dummy CRON-type Lambda function that triggers your real Lambda every minute or so with a dummy request; this should help keep your function in memory (a minimal sketch of such a handler follows below). You can learn how to create a Lambda cron function (aka Lambda scheduled task) here: AWS Lambda Scheduled Tasks
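As a sketch of the keep-warm idea, assuming a Java Lambda built on aws-lambda-java-core (the keepWarm marker field is something you'd invent yourself and put in the scheduled rule's payload):

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class KeepWarmAwareHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // The scheduled rule invokes us with {"keepWarm": true}; return early
        // so the ping only keeps the container alive and does no real work.
        if (input != null && Boolean.TRUE.equals(input.get("keepWarm"))) {
            return "warm";
        }
        // ... normal request processing goes here ...
        return "ok";
    }
}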
I'm using OkHttp 3.9 to make a POST call in an Android app that updates an AWS DynamoDB database by triggering a Lambda function via API Gateway. I notice that the database is updated 3 times with each call. Apparently this is due to OkHttp's default retry behaviour.
As I understand it, the retries can be prevented by setting retryOnConnectionFailure to false when building the client. I tried this, but the database is still updated 3 times, so the call is still being made 3 times.
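For reference, this is roughly how I'm building the client and making the call; the endpoint URL and JSON payload are placeholders, not my real API:

import java.io.IOException;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

class ApiCaller {
    private static final MediaType JSON = MediaType.parse("application/json; charset=utf-8");

    private final OkHttpClient client = new OkHttpClient.Builder()
            .retryOnConnectionFailure(false) // supposed to stop the silent retries
            .build();

    String postItem(String json) throws IOException {
        Request request = new Request.Builder()
                .url("https://example.execute-api.us-east-1.amazonaws.com/prod/items") // placeholder
                .post(RequestBody.create(JSON, json))
                .build();
        try (Response response = client.newCall(request).execute()) {
            return response.body().string();
        }
    }
}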
Some suggest handling this behaviour on the server. The problem is that if I handle the issue in the Lambda function, the API has still been called three times, and so has the Lambda function, which is all unnecessary overhead and cost. Also, if setting retryOnConnectionFailure to false worked and the API were only called once, there would be no mechanism to handle failure.
So why does it retry 3 times even when each call succeeds? And, most importantly, how do I stop this from happening so that the API is called once, and called again only if the first call failed (i.e. retried in order to succeed in triggering the Lambda function)? Setting retryOnConnectionFailure to false seems to have no effect.
So I have egg on my face!
There was no problem with OkHttp! After some further investigation I found a call to update the database inside the Lambda function that I was calling. So that was causing the extra updates, not OkHttp retries!
Sorry.
I receive ECONNRESET pseudo-randomly from my company's backend. I say pseudo-randomly because, although it doesn't happen the same way every time, I can provoke it almost every time by launching a large number of requests.
I typically launch downloads from an activity's lifecycle events, and therefore I use Retrofit's Call.enqueue() to do the networking in the background. In the part of the code that seems to cause trouble, I'm launching a series of downloads (~15 REST routes for JSON and 5-6 files) from a background thread. In that scenario, the ECONNRESET appears on 2 out of 3 tries, on one of the called REST routes.
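To illustrate the pattern (the service interface and model below are placeholders, not my real API):

import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;
import retrofit2.http.GET;
import retrofit2.http.Path;

class Item {} // placeholder model

interface MyService {
    @GET("items/{id}")
    Call<Item> getItem(@Path("id") String id); // placeholder route
}

class DownloadStarter {
    void start(MyService service) {
        service.getItem("42").enqueue(new Callback<Item>() {
            @Override
            public void onResponse(Call<Item> call, Response<Item> response) {
                // handle the result (delivered on the Android main thread)
            }

            @Override
            public void onFailure(Call<Item> call, Throwable t) {
                // the ECONNRESET surfaces here as an IOException
            }
        });
    }
}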
There is no further explanation server-side; the only thing we logged was the read/write ECONNRESET.
Here is what I've tried (a combined client sketch follows this list):
Updated to OkHttp 3.5.0 (from 3.2.0) and Retrofit 2.1.0 (from 2.0.2).
I added "Connection:close" int my requests header to prevent keep alive.
Reduced my connection pool to 1:
.connectionPool(new ConnectionPool(0, 1, TimeUnit.SECONDS))
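Putting the last two attempts together, the client setup looks roughly like this (the base URL is a placeholder, and I'm assuming the Gson converter here):

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

class ClientFactory {
    static Retrofit create() {
        OkHttpClient client = new OkHttpClient.Builder()
                // force "Connection: close" on every request to disable keep-alive
                .addInterceptor(chain -> chain.proceed(chain.request().newBuilder()
                        .header("Connection", "close")
                        .build()))
                // keep no idle connections around
                .connectionPool(new ConnectionPool(0, 1, TimeUnit.SECONDS))
                .build();
        return new Retrofit.Builder()
                .baseUrl("https://api.example.com/") // placeholder
                .client(client)
                .addConverterFactory(GsonConverterFactory.create())
                .build();
    }
}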
It happened on my phone (Android 6); I don't have another phone to test the code on. I have some unrelated trouble with my AVD that prevents me from testing on different Android versions (soon to be fixed).
Would you know what could provoke this?
Thanks,
For the record, I was using HttpURLConnection in my file download method (whereas my REST API was queried through Retrofit and OkHttp). I changed it to OkHttp and it's all good now.
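For anyone curious, the replacement download is along these lines; the client is the same OkHttpClient used elsewhere, and the URL and target file are whatever the caller passes in:

import java.io.File;
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okio.BufferedSink;
import okio.Okio;

class FileDownloader {
    static void download(OkHttpClient client, String url, File target) throws IOException {
        Request request = new Request.Builder().url(url).build();
        try (Response response = client.newCall(request).execute()) {
            if (!response.isSuccessful()) {
                throw new IOException("Unexpected HTTP code " + response.code());
            }
            // stream the body straight to disk instead of buffering it in memory
            try (BufferedSink sink = Okio.buffer(Okio.sink(target))) {
                sink.writeAll(response.body().source());
            }
        }
    }
}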