Our server scales out to 1-3 instances during specific periods of the day. We have an Azure Redis backplane for SignalR connection persistence. In addition to this, the server does not have ARR Affinity enabled. By the way, we are using Server-Sent Events for Android and WebSockets for iOS.
The problem is that our mobile users (moto couriers) are disconnecting from and reconnecting to the SignalR server frequently, because their provider drops them when the mobile signal is low.
We have checked everything on the mobile side, and we are quite sure we have one and only one SignalR connection at a time. In addition to this, when clients connect, we store their connection IDs in persistent storage (a SQL database).
When sending a message to a user, we pick the latest connection ID stored in the database. This means we send to only one of the client's connection IDs.
However, we get feedback that messages sent from the server pop up twice on users' phones (most of the time, messages are received twice during rush hours, when the server has 2 or 3 instances). We have not been able to trace why messages are received twice, especially during rush hours.
The question is: could this be about ARR Affinity? The Redis backplane uses a subscribe-and-publish methodology, and since couriers are disconnecting/reconnecting frequently, they have a chance of connecting to different servers. Thus, when the server sends a message, two servers might try to send it, and it pops up on the phone twice even though the client has only one connection.
Additional info:
- SignalR DisconnectTimeout = 60 seconds
- SignalR KeepAlive = 20 seconds
This seems to be the reason. When a new connection request is made, you might need to drop the existing connection on the other server, using replication. If the replication interval is short enough, it will minimize the number of duplicates; for the rest, you might need to solve it on the client side by ignoring a notification whose hash/ID has already been received.
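As a rough sketch of that client-side guard (assuming each notification carries some stable ID or hash; the class name and capacity here are arbitrary choices for illustration):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Remembers the last SEEN_CAPACITY notification IDs and reports
    // whether an incoming ID has been seen before.
    public final class NotificationDeduper {
        private static final int SEEN_CAPACITY = 256;

        // LinkedHashMap in access order + removeEldestEntry = a simple LRU set.
        private final Map<String, Boolean> seen =
                new LinkedHashMap<String, Boolean>(SEEN_CAPACITY, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                        return size() > SEEN_CAPACITY;
                    }
                };

        /** Returns true if this notification should be shown (first sighting). */
        public synchronized boolean shouldShow(String notificationId) {
            return seen.put(notificationId, Boolean.TRUE) == null;
        }
    }

On each incoming SignalR message, the client would call shouldShow(id) and silently drop the message when it returns false.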
Related
Hi, I am developing an Android application in which I want to show whether the other person is online, so that a person can initiate the communication. I have thought about a few solutions:
1) Implementing a heartbeat mechanism, in which the device sends a ping request to the server at a fixed interval.
2) The server sends a push-type ping to the client, and the client responds, so the server knows the client is online.
The first case causes battery and data usage issues, while the second one causes a delay in the push, which affects the process.
Is there any better solution for this problem, apart from these or improved versions of them?
nilkash, virtually any method for checking network connectivity will in the end result in sending periodic pings between the device and the server. Even a push-type ping will actually do the same (but it saves battery, because push notifications aggregate messages for all applications into a single connection to a Google server). So the best solution is just a proper combination of optimizations, and you have to choose them depending on your requirements.
- Server pushes are power efficient, mostly because they reuse the same connection for all applications, but the delay can be huge, something like 10 minutes.
- You can subscribe to connectivity events and send an "online" message to the server once you are online (but not an "offline" message, because you are... offline). This will give you immediate online events.
- Do not send pings from the device when there is no connectivity. Your application should be absolutely idle so as not to use battery.
- There is no easy way to find out on the server side when a client goes offline. You have to trade traffic/battery for time resolution: the more often you send pings, the better the resolution. But you can't change the ping interval for pushes, so if you need better resolution, you need to use your own connection. You can send other useful data through that connection too.
- If you keep a TCP connection, your pings can be very data efficient: TCP keep-alive packets are just 60/54 bytes. But then you have to keep open connections with all clients on the server, which may be a problem if you have a lot of clients.
The best combination may be something like this: you always send an "online" message from a client when it becomes online. You keep a TCP connection open while the application is in the foreground, and you use the same connection to transfer data to and from the application. When your application goes to the background, you fall back to power-consuming push-based pings and do them on a 10-minute basis.
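As a rough illustration of the "send 'online' on connectivity" idea, here is a minimal Android sketch (it uses the pre-API-28 CONNECTIVITY_ACTION broadcast; sendOnlineMessage() is a placeholder for your own server call):

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;

    // Register for ConnectivityManager.CONNECTIVITY_ACTION in the manifest
    // (pre-API-28) or via registerReceiver().
    public class ConnectivityListener extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            ConnectivityManager cm = (ConnectivityManager)
                    context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo active = cm.getActiveNetworkInfo();
            if (active != null && active.isConnected()) {
                sendOnlineMessage(); // tell the server we are reachable again
            }
            // No "offline" message is sent: if connectivity was just lost,
            // we could not deliver it anyway.
        }

        private void sendOnlineMessage() {
            // Placeholder: POST an "online" event to your server,
            // off the main thread.
        }
    }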
Let's say I have an app that has tens of millions of installs and tens of thousands of active users at any given point in time. I need to log my users' activity data to my servers. Currently, I make HTTP requests from the device to my servers. I have a bunch of machines running a web server, sitting behind Amazon's ELB. They parse the data coming from the devices and put it in MongoDB.
Now I would like to capture device data using the upstream CCS provided by Google's GCM (so that I can piggyback on GCM for more reliable delivery of data). I have written a prototype XMPP server and I can make the whole thing work, but I am worried about scaling it up. What will happen if Google starts sending me messages at a rate faster than I can consume them? Earlier, I was able to use multiple servers behind a load balancer to handle a high request rate. Is there a concept of load balancing here?
If I open multiple connections from my server to Google's server (Google says I can have up to 1000 connections for a given sender ID), will the incoming requests be load-balanced between these connections?
Finally, is there a recommended solution that takes care of most of the problems above? Would using ejabberd solve some of them?
Thanks a bunch.
What will happen if Google starts sending me messages at a rate faster than I can consume?
At the end of https://developers.google.com/cloud-messaging/ccs you may read:
Conversely, to avoid overloading the app server, CCS stops sending if there are too many unacknowledged messages. Therefore, the app server should "ACK" upstream messages, received from the client application via CCS, as soon as possible to maintain a constant flow of incoming messages. The aforementioned pending message limit doesn't apply to these ACKs. Even if the pending message count reaches 100, the app server should continue sending ACKs for messages received from CCS to avoid blocking delivery of new upstream messages.
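For illustration, such an ACK is itself a small stanza sent back over the same XMPP stream. Here is a minimal sketch of building one, following the JSON fields ("to", "message_id", "message_type") documented for CCS; how you actually send it depends on your XMPP library:

    public final class CcsAck {
        // Builds the CCS ACK stanza for an upstream message received from a device.
        static String buildAck(String deviceRegistrationId, String messageId) {
            String json = String.format(
                    "{\"to\":\"%s\",\"message_id\":\"%s\",\"message_type\":\"ack\"}",
                    deviceRegistrationId, messageId);
            return "<message id=\"\"><gcm xmlns=\"google:mobile:data\">"
                    + json
                    + "</gcm></message>";
        }
    }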
In the same document, you will find a partial answer to your second and third questions:
If at any point the connection fails, you should immediately reconnect. There is no need to back off after a disconnect that happens after authentication.
To me this means that Google implemented simple redundancy logic and probably not a fair load-balancing system (anyway, I hope so). If you have such high volumes, I suggest you contact them directly.
As for the last questions: ejabberd is a good product; there are a lot of deployed systems with a clustered infrastructure and plenty of documents on how to do that. I suggest you start from http://docs.ejabberd.im/admin/guide/clustering/ .
Anyway, for your high volumes I would also evaluate RabbitMQ, which is another Erlang jewel.
ejabberd can be clustered and placed behind a load balancer to distribute connections. A cluster of 3 or 4 servers should be able to handle that load fine and give you failover protection, and you can add servers if needed. Once you get close to 10 servers, you may want to consider using Redis rather than Mnesia for the in-memory DB.
I have a server with a SQL database.
I also have about 100k users on an Android application.
What I need now is to send notifications from the server to all devices immediately.
I'm researching the GCM system, but as far as I can see there's a huge delay on the receiving side.
What I need is that when I click the send button on my server, every device receives the message within a few seconds.
Is the delay only happening when using the HTTP connection?
Is it going to be different with the XMPP connection?
You are trying to broadcast a message to nearly 100k users, and currently XMPP downstream messaging does not support broadcasting. Use the HTTP server to send the message to 1,000 devices at a time. This can be improved by using multi-cURL; see https://github.com/mseshachalam/GCMMessage-MultiCURL
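A minimal server-side sketch of that batching, using the GCM HTTP endpoint with plain HttpURLConnection (API_KEY is a placeholder; error handling, retries, and response parsing are omitted):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;

    public class GcmBroadcaster {
        private static final String API_KEY = "YOUR_SERVER_API_KEY"; // placeholder
        private static final int BATCH_SIZE = 1000; // GCM's per-request maximum

        // Splits the registration IDs into batches of 1,000 and posts each batch.
        public static void broadcast(List<String> regIds, String dataJson)
                throws IOException {
            for (int i = 0; i < regIds.size(); i += BATCH_SIZE) {
                send(regIds.subList(i, Math.min(i + BATCH_SIZE, regIds.size())),
                        dataJson);
            }
        }

        private static void send(List<String> batch, String dataJson)
                throws IOException {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://android.googleapis.com/gcm/send").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "key=" + API_KEY);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            StringBuilder body = new StringBuilder("{\"registration_ids\":[");
            for (int i = 0; i < batch.size(); i++) {
                if (i > 0) body.append(',');
                body.append('"').append(batch.get(i)).append('"');
            }
            body.append("],\"data\":").append(dataJson).append('}');

            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.toString().getBytes("UTF-8"));
            }
            conn.getResponseCode(); // forces the request; inspect it in real code
            conn.disconnect();
        }
    }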
In general, GCM is the right choice for massive broadcasting.
On the other hand, messages are not guaranteed to be delivered immediately; the delay might be up to 25(!) minutes, even when all devices have your app up and running.
See "Google Cloud Messaging - messages either received instantly or with long delay" for explanations why.
As part of an Android app I'm developing, there is a chat room feature. We have a server which can process the incoming messages and store them. Is it better to keep a socket connection open between the phone and the server so the server can send any new messages to the phone, or is it better for the phone to poll the server for new chat messages?
Polling is a bad solution for an app where data is posted at random times. Polling is useful when something happens at discrete intervals, say every 5 minutes or so. That is not the case with chat: one user may post something once an hour, another 30 times in 2 minutes.
So keep your sockets open.
Polling lacks a real-time connection, and a persistent connection is battery-draining. I think what you are looking for is a combination of pushing and a persistent connection: you wake the phone via push, and then establish a connection via sockets to handle the chat.
I suggest reading this article.
I'm not sure if it mentions C2DM, the Google push service.
I would keep the socket open if you are worried about instant messaging; it takes time to set up the socket connection, especially if you are on a GSM connection. I have seen it take 10 seconds or more to open a socket on 3G, and much less on WiFi.
I'm writing an application for Android where two devices communicate with each other via the internet. In addition to this, they also communicate with an EJB3 server via REST, so I decided to kill two birds with one stone and use REST + EJB3 for transferring data between the two paired Android devices.
So the scenario I implemented is something like this:
1) Both devices connect to the server and acquire a session ID.
2) The first device sends data for the second device.
3) The server gets the data but does not end the HTTP request; instead, it puts it into a waiting pool.
4) The second device asks for the data.
5) The server transfers the data to the second device and releases the waiting connection (and thread) of the first device.
If there is no request yet from the first or second device, the other side waits for a timeout on the server side and then sends its request again. We need to wait for the data on the server side in order to give an immediate response once the data arrives.
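A minimal sketch of that server-side waiting pool, using the Servlet 3.0 async API (the session-keyed map, URL, and 30-second timeout are assumptions of this illustration, not the actual implementation):

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class PollServlet extends HttpServlet {
        // Waiting pool: session ID -> the parked request of the polling device.
        private static final Map<String, AsyncContext> WAITING =
                new ConcurrentHashMap<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30_000); // client re-polls when the timeout fires
            WAITING.put(req.getParameter("session"), ctx);
        }

        // Called when the paired device posts data for this session.
        static void deliver(String sessionId, String data) throws IOException {
            AsyncContext ctx = WAITING.remove(sessionId);
            if (ctx != null) {
                ctx.getResponse().getWriter().write(data);
                ctx.complete(); // frees the waiting connection immediately
            }
        }
    }

A real version would also evict entries on timeout via an AsyncListener; note that with async processing the parked request no longer pins a servlet thread, which softens the first drawback below.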
In this scheme I see two drawbacks:
- Waiting threads on the server side: they consume server resources and as a result limit server throughput.
- If the server thread does not wait for an answer with a timeout, the client has to repeat requests over and over and wastes a lot of traffic.
What is the best-practice solution for such a problem?
P.S.: I forgot to mention that the two devices should exchange data as smoothly and quickly as possible.
You will need to use C2DM
http://android-developers.blogspot.com/2010/05/android-cloud-to-device-messaging.html
When a message needs to be sent from A to B, A should connect to the server, and depending on the kind/amount of data, the server will either push the data via C2DM or just tell device B to come back and grab it.
I would store the data on the server anyway. If a push fails, you can retry it. No need to reinvent the wheel; most issues/problems are already solved in C2DM.
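A rough sketch of the device side of that pattern (the "payload" extra and the helper methods are placeholders; C2DM delivers whatever key-value extras your server put in the push):

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    public class C2dmReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            if ("com.google.android.c2dm.intent.RECEIVE".equals(intent.getAction())) {
                String payload = intent.getStringExtra("payload");
                if (payload != null) {
                    handleData(payload);       // small data arrived inline
                } else {
                    fetchFromServer(context);  // tickle only: go grab the data
                }
            }
        }

        private void handleData(String payload) {
            // Placeholder: process the inlined message.
        }

        private void fetchFromServer(Context context) {
            // Placeholder: issue the REST call that picks up the stored data.
        }
    }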