Firebase FCM has become very unstable. Looking for a solution / alternatives - android

Our app with more than a million subscribers is facing huge delivery issues with FCM. It has become worse lately and the service is hardly working anymore. We are receiving errors like:
{ code: 'messaging/message-rate-exceeded',
  message: 'Topic quota exceeded.',
  codePrefix: 'messaging' }
We get this error a lot. And it seems to be worse during EU / US evenings. In some cases over 90% of the notifications are failing.
We are in contact with the Firebase support team, but so far there seems to be no solution. They did give us a lot of information, including some useful facts:
Resources are shared between developers, so the maximum message rate can vary depending on how many resources other developers are consuming.
OR queries should be converted to multiple AND queries, because an OR query actually generates messages to the entire user base and only then applies the filtering condition.
240 messages/minute and 5,000 messages/hour to a single device.
limit upstream messages at 15,000/minute per project (we don't understand this one)
limit upstream messages per device at 1,000/minute
They also updated their docs at https://firebase.google.com/docs/cloud-messaging/concept-options#topics_throttling
So we are aware of the message rate limits and the fanout mechanism. In our case we have approximately 6,000 different topic send requests per hour and, on average, 10k subscribers per topic.
A single user will never get more than 50-100 notifications per hour.
We believe we are not hitting the limits set by FCM.
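A quick back-of-the-envelope check of the fanout those numbers imply (using only the 6,000 sends/hour and 10k subscribers/topic figures quoted above):

```python
# Back-of-the-envelope fanout estimate for the figures quoted above.
topic_sends_per_hour = 6_000        # distinct topic send requests per hour
avg_subscribers_per_topic = 10_000  # average subscribers per topic

downstream_messages_per_hour = topic_sends_per_hour * avg_subscribers_per_topic
downstream_messages_per_minute = downstream_messages_per_hour // 60

print(downstream_messages_per_hour)    # 60,000,000 downstream messages/hour
print(downstream_messages_per_minute)  # 1,000,000 downstream messages/minute
```

So even though no single device comes near the per-device limits, the aggregate fanout is on the order of a million downstream messages per minute, which is the kind of scale at which shared topic-throughput throttling could plausibly kick in.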
Back in the GCM days everything worked fine, so we are quite unhappy with the current situation. The core functionality of the app is badly degraded right now, and no solution seems to be in sight.
We are considering switching to an SSE (Server-Sent Events) solution.
There is a story about someone who successfully moved away from FCM:
https://f-droid.org/en/2018/09/03/replacing-gcm-in-tutanota.html
But since Google has made it very difficult lately to have background processes running, I wonder what other people with similar experience did.
Or can we still fix this situation?

One such alternative is Cloud Alert - it can replace FCM, and provides high throughput and unlimited messages. It uses a background job and maintains its own connection to its dedicated servers. While a free plan exists, your 1 million connection requirement would put you into the paid bracket.
Disclosure: I work for Cloud Alert.

Related

Required to unsubscribe from FCM Topics?

I am using Firebase (FCM) in a project where I use the topic feature to send out daily notifications to many different users in various timezones. Notifications are always sent out when the user's local time is 12:00 PM. To achieve this, I create a unique topic every day for the batch of users I am going to notify. At the moment our topics look like: x_daily_notification_1550152751. Previously I had a system that would create static topics based on the timezone, so users could be subscribed to a topic like x_daily_notification_europe_london, but this proved to be too unreliable and hard to manage, due to users moving between timezones.
The way the system works is based on research and advice from this question I asked late last year: FCM topic limits and expiration/invalidation of old unused topics?
So to sum it up:
Bulk subscribe users to a topic
5 minutes later, send the notification to the topic. The delay is because we do not know whether FCM waits for all the bulk subscriptions to be processed before sending out notifications.
10 minutes after sending to the topic, unsubscribe all tokens from the topic again, deleting the topic from FCM.
The reason for these delays is that I have noticed that sometimes we do not get the notifications. The only explanation I can think of is that we send to the topic and unsubscribe too quickly, so if 100K users were to be notified and we deleted the topic straight away, only a few would get it. It seems that FCM does not wait for one request to finish before another is processed.
Even though I wait 10 minutes before I unsubscribe, it seems that sometimes not all users get notified, which could mean that they were not processed before I unsubscribed them again. So I would like to wait longer before I unsubscribe everyone from the topic, to ensure that they have received the notification. The first solution I thought of was to simply wait until the next batch is ready to be processed, and then delete the previous topic. That would be about an hour later, by which point I can be more or less sure that everyone has received the notification.
My real question is: do I need to unsubscribe at all? It could get messy on Firebase's end, but surely they have systems in place to handle this, as the documentation does not mention anything about cleaning up.
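The subscribe → wait → send → wait → unsubscribe flow described in the question can be sketched as follows. This is a minimal illustration, not the real FCM admin API: `fcm_subscribe`, `fcm_send`, and `fcm_unsubscribe` are hypothetical stand-ins for your actual SDK calls, and the topic name mirrors the `x_daily_notification_<epoch>` pattern from the question.

```python
import time

def daily_topic_name(epoch_seconds):
    # Mirrors the x_daily_notification_1550152751 naming from the question.
    return f"x_daily_notification_{epoch_seconds}"

def notify_batch(tokens, fcm_subscribe, fcm_send, fcm_unsubscribe,
                 settle_delay=300, drain_delay=600):
    """Subscribe, wait, send, wait, then unsubscribe (delays in seconds)."""
    topic = daily_topic_name(int(time.time()))
    fcm_subscribe(tokens, topic)    # hypothetical bulk-subscribe wrapper
    time.sleep(settle_delay)        # let subscriptions propagate before sending
    fcm_send(topic)                 # hypothetical send-to-topic wrapper
    time.sleep(drain_delay)         # let the fanout drain before tearing down
    fcm_unsubscribe(tokens, topic)  # unsubscribing the last token removes the topic
    return topic
```

Raising `drain_delay` (for example, to the one-hour value the question considers, deleting the previous topic only when the next batch starts) trades FCM-side tidiness against delivery completeness without changing this structure.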

Is httpPost too demanding to be used in an Android service?

I need my app to receive notifications, but my boss does not want to rely on Google Cloud Messaging, so I will use httpPost in a background service instead to periodically check for new messages.
My question is: will that be too demanding in terms of battery and data consumption? Do you know a better option?
Thank you.
Edit:
This is an app for a delivery store. The messaging starts when you order something and ends when you receive the item. The app will query for messages every minute, for about 10 or 20 minutes.
In general, this would not seem like a good idea. The scenario you describe seems like a perfect fit for GCM. I would first try to convince your boss to reconsider. :)
"Polling" a server, even for a brief connection which transmits hardly any content, means the device must be woken (from a low power state) and the radio turned on, and it will stay on for a significant time. The impact on data consumption will probably be insignificant, but battery usage will be a concern.
In case it cannot be avoided, you should check the Minimizing the Effect of Regular Updates section of the Efficient Downloads training material referenced by CommonsWare in the comments.
Also, Reto Meier gave an excellent talk on this subject at Google I/O 2012. His series on Efficient Data Transfers is also very informative. Hope it's helpful.
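If polling truly cannot be avoided, a simple way to bound the battery cost is to cap the number of polls and stop as soon as the delivery is confirmed. A minimal sketch of that loop, where `fetch_status` is a hypothetical stand-in for the actual httpPost call:

```python
import time

def poll_until_delivered(fetch_status, interval=60, max_polls=20):
    """Poll at a fixed interval, stopping early once the item is delivered.

    Capping total polls keeps a stuck order from waking the radio forever,
    matching the "every minute for 10-20 minutes" scenario in the question.
    """
    for attempt in range(max_polls):
        if fetch_status():       # hypothetical httpPost wrapper; True = delivered
            return attempt + 1   # number of polls it took
        time.sleep(interval)
    return None                  # gave up after max_polls
```

On modern Android the equivalent scheduling would be done with WorkManager or JobScheduler with network constraints, so the radio can return to its low-power state between checks.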

How to maximize efficiency in this complex data transfer scenario

I'm not sure if this question belongs here, as it is solely based on theory, however I think this fits best in this stackexchange compared to the rest.
I have 500,000 taxis with Android 4 computers inside them. Everyday, after one person or party makes a trip, the computer sends the information about the trip to the Node.js server. There are roughly 35 trips a day, so that means 500,000 taxis * 35 trips = 17,500,000 reports sent a day to the Node.js server. Also, each report has roughly 4000 characters in it, sized around 5KB.
The report that the taxi computers send to the node.js server is just an http post. Node.js will then send back a confirmation to the taxi. If the taxi does not receive the confirmation for report A in an allotted amount of time, it will resend report A.
The Node.js server simply receives the report, sends the confirmation back to the taxi, and then writes the full report to MongoDB.
One potential problem : Taxi 1 sends report A to node.js. Node.js does not respond within the allotted time, so Taxi 1 resends report A to node.js. Node.js eventually processes everything and sends report A twice to MongoDB.
Thus MongoDB is in charge of checking whether or not it received multiple of the same reports. Then MongoDB inserts the data.
I actually have a couple of questions. Is this too much for Node.js to handle (I don't think so, but it could be a problem)? Is this too much for MongoDB to handle? I feel like checking for duplicate reports may severely hinder performance.
How can I make this whole system more efficient? What should I alter or add?
The first potential problem is easy to overcome. Calculate a hash of each trip and store it in Mongo, with a unique index on that field, and compare every incoming document against the existing hashes. This way, checking for duplicates is extremely easy and really fast. Keep in mind that the hashed document should not include volatile fields such as the time of sending.
Second problem: 17,500,000/day is roughly 200/second. That may sound scary, but in reality it is not much for a decent server, and it is certainly not a problem for MongoDB.
It is hard to say how to make it more efficient, and I highly doubt you should think about that now. Give it a try, do something, check what is not working efficiently, and come back with specific questions.
P.S. So as not to answer all of this in the comments: you have to understand that the question is extremely vague. No one knows what you mean by a trip document or how big it is. It could be 1 KB, it could be 10 MB, it could be 100 MB (which is bigger than the 16 MB MongoDB document limit). No one knows. When I said that ~200 documents/sec is not a problem, I did not say that exactly this amount is the maximum cap, so even if it were two or three times more, it would still sound feasible.
You have to try it yourself. Take an average Amazon instance and see how many of YOUR documents (create documents that are close to your real size and structure) it can save per second. If it cannot handle the load, see how much it can handle, or whether a bigger Amazon instance can.
I gave you a rough estimate that this is possible, and I had no idea that you wanted to "include admins using MongoDB, to update, select,". Did you mention this in your question?
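The hash-based deduplication suggested above can be sketched like this. It is a minimal stdlib-only illustration: in production the fingerprint would back a unique index in MongoDB, and the trip field names here are made up for the example.

```python
import hashlib
import json

def trip_fingerprint(trip):
    """Deterministic hash of a trip report, excluding volatile fields.

    Fields like the send timestamp must be dropped; otherwise a resent
    report hashes differently and the duplicate slips through.
    """
    stable = {k: v for k, v in trip.items() if k != "sent_at"}
    payload = json.dumps(stable, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# A resend of the same trip (fresh sent_at) produces the same fingerprint,
# so inserting it against a unique index on the fingerprint field would fail,
# which is exactly the duplicate-detection behaviour we want.
trip = {"taxi_id": 1, "distance_km": 12.3, "fare": 40, "sent_at": 1000}
resend = dict(trip, sent_at=1010)
```

With a unique index on the fingerprint field, MongoDB rejects the duplicate insert itself, so the server does not need a separate lookup-then-insert round trip.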

Is GCM service reliable for large scale push notification?

I want to push notifications to around 50,000 users at a time, about 50 notifications per day. Is GCM a good choice in this case?
If not, can I know which other push services I can use? I don't mind even if it's a paid service.
Thanks in advance.
One notification request can be sent to at most 1,000 devices (a GCM limit), so you must split your array of devices into batches.
50,000 users is OK for GCM.
Our application serves 100,000 users.
Alternatively, you can use the Airpush notification service:
http://www.airpush.com/
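The 1,000-device batching the answer above mentions can be sketched as follows (the registration IDs are made-up placeholders):

```python
def split_registration_ids(reg_ids, limit=1000):
    """Split a device list into GCM-sized batches of at most `limit` IDs."""
    return [reg_ids[i:i + limit] for i in range(0, len(reg_ids), limit)]

# 50,000 users -> 50 send requests of 1,000 registration IDs each.
batches = split_registration_ids([f"reg_{n}" for n in range(50_000)])
```

Each batch then becomes one send request to the push service, so the 50,000-user case in the question costs 50 requests per notification.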
I think that GCM is a good choice to use. It's reliable and using it helps to conserve battery and data usage since it piggybacks other Google services. All you need is Android 2.2 or later with the Google services installed, which means no Kindle Fire.
I do not think that GCM would have any problems handling the number of messages or devices that you gave.
If you use it you will still have to write your own server component to handle registrations and message sending. I wrote a blog post that describes how this works.
Some commercial services that handle the server component for you (as well as other things) are AirBop, UrbanAirship, and ClixAp. Parse is a commercial solution that (I believe) does not use GCM. As I noted in the comment above, I helped create AirBop.
Like others, we struggled with GCM for some time. However, we believe we have finally figured out the factors that affect the performance of GCM the most:
For fastest delivery of Notifications with least amount of jitter:
1. delay_while_idle - set to false
2. time_to_live - set to zero (but we have set it to 30, just in case)
3. Canonical IDs - make sure Canonical IDs returned by GCM replace the old push ID in the database
4. collapse_key - the most important factor: set it to a random value or the time of day (TOD) to keep Google from throttling notifications
With these, our GCM is working satisfactorily. Good wishes, post if you still have issues.
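Put together, the tuning above corresponds to a send payload along these lines. This is a sketch of a legacy GCM-style HTTP payload assembled per that answer's advice, not an official reference; the randomized collapse key is the point of item 4.

```python
import random

def build_push_payload(registration_ids, data):
    """Assemble a legacy GCM-style payload using the tuning from the answer above."""
    return {
        "registration_ids": registration_ids,
        "delay_while_idle": False,   # 1. deliver even while the device is idle
        "time_to_live": 30,          # 2. zero is fastest; 30s kept as a safety margin
        "collapse_key": f"k{random.randrange(10**6)}",  # 4. randomized per send
        "data": data,
    }
```

Canonical-ID handling (item 3) happens on the response side: when the service returns a canonical registration ID for a token, overwrite the stored token with it before the next send.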

Android application stops receiving c2dm messages after a while

We've been trying to develop an Android application which uses the C2DM service of Google.
When we start the application after clearing all data, the application receives the C2DM messages just fine, but after some time (maybe 2 minutes) the messages stop arriving.
We also checked the response code we received after pushing the C2DM messages from the server, and it indicated success (code 200, without error).
After searching relevant posts on Stack Overflow, we came across this post:
Why do Android C2DM push messages not always arrive?
but we verified that we do not register with the C2DM service each time the application starts.
What could be the problem in our case?
We use Android 2.2 (API level 8).
Thanks in advance,
Mark.
You should always keep in mind that Google's C2DM allows only a certain number of messages per day. I'm thinking that sending a large number of messages in 2-3 minutes (a client chat, or something like that) could be the source of your problem.
And also, have in mind that there is no guarantee whatsoever that messages will arrive. Per Google's C2DM Introduction: C2DM makes no guarantees about delivery or the order of messages. But you probably already know this.
I am thinking that if your 2-3 minute average is consistent, then the message limit could well be the cause. Try sending fewer messages and see whether the interval gets larger.
"maybe 2 minutes" - you should confirm that first of all. You must clarify:
Is this issue related to this one device?
Does it happen consistently? If not, what triggers it?
Has it happened once, or does it happen every time?
Do bear in mind that C2DM messages are not guaranteed. Some will not arrive.
Also be aware that sometimes Android devices "fall off" C2DM and do not receive messages for a period of time. You will see similar effects on some networks (e.g. in my experience some C2DM messages are not delivered over WiFi networks, so try 3G too).
