I will describe my use case, with figures attached and everything I can provide to be clear, so we may come up with a better idea...
General idea:
WhatsApp-style chat application with Firebase
Use case:
As we know, one of WhatsApp's features is the last seen: the last time the user was online before they exited the app, logged out, lost the Wi-Fi connection, etc.
I tried to use:
onDisconnect, but onDisconnect gives bad results when the Wi-Fi connection is lost (because of the latency before the socket times out).
The one I am using now: every user updates their timestamp every 3 seconds (a document update every 3 seconds). When a user loses the connection, they won't be able to update their timestamp, right? So, if another user wants to chat with this offline user, I can show them that user's last seen. Hope this is clear...
Developed using the Flutter framework
Redux to manage app state
Firebase (Cloud Firestore)
The code below dispatches an action every three seconds; the dispatched action updates the last seen in Firebase...
import 'dart:async';

timer = Timer.periodic(Duration(seconds: 3), (Timer t) {
  // Dispatch the Redux action that writes this user's last-seen timestamp.
  store.dispatch(updateUserOnline());
});
As you can see in the figure below, this is my data structure and how I am updating the last seen for this user every 3 seconds...
This implementation is very expensive for what it delivers. If we have a million users, and these million users update their last seen every 3 seconds, it will cost a lot of money per month, since each update is a billed write operation, no?
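To put a rough number on that claim (an estimate, assuming Firestore's published price of roughly $0.18 per 100,000 document writes; pricing varies by region): one user writing every 3 seconds makes 86,400 / 3 = 28,800 writes per day, so a million users produce 28.8 billion writes per day. That is 288,000 billing units of 100,000 writes, or about 288,000 × $0.18 ≈ $51,840 per day, on the order of $1.5 million per month, just for last seen.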
So, my other solution is to implement a socket connection to my own server and have all users rely on the onDisconnect socket event on my server instead of Firebase's. Is this doable, to avoid the huge cost of write operations?
[Figure: the user document whose last-seen timestamp is updated every 3 seconds]
Firebase writes would indeed be a bit costlier, since you would be sending in a lot of writes which, apparently, exist just for the job of "last seen".
Instead, as you mentioned, having a socket connection with your own server will help reduce the number of writes you make: as soon as the socket disconnects from the server, you send a single write operation to Firebase. "Every 3 seconds" vs. "only when the user disconnects".
Plus (not something that you asked for), if you set up a socket server of your own, it will help in the following scenarios as well (a rough sketch follows the list):
Typing events (The indication we get when the other person is typing a message)
Quicker way to know if the person at the other end is online/offline (because of sockets)
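For what it's worth, here is a minimal sketch of that idea, assuming a plain TCP socket server and the Firebase Admin SDK for Java. The one-line identification protocol and the users/lastSeen names are illustrative, not anything WhatsApp or Firebase prescribes. The client sends cheap keep-alive lines to this server (not billed Firestore writes), so a dropped Wi-Fi connection surfaces as a closed stream or a read timeout, at which point the server performs the single write:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.firestore.FieldValue;
import com.google.cloud.firestore.Firestore;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import com.google.firebase.cloud.FirestoreClient;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class PresenceServer {

    public static void main(String[] args) throws IOException {
        FirebaseApp.initializeApp(FirebaseOptions.builder()
                .setCredentials(GoogleCredentials.getApplicationDefault())
                .build());
        Firestore db = FirestoreClient.getFirestore();

        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client, db)).start();
            }
        }
    }

    private static void handle(Socket client, Firestore db) {
        String uid = null;
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()))) {
            // No keep-alive for 10 s => treat the client as disconnected.
            client.setSoTimeout(10_000);
            // Illustrative protocol: the first line identifies the user.
            uid = in.readLine();
            // Block until the stream ends; a clean exit terminates this loop.
            while (in.readLine() != null) { /* keep-alive lines, ignored */ }
        } catch (IOException ignored) {
            // A lost Wi-Fi connection surfaces here as a timeout or read error.
        } finally {
            if (uid != null) {
                // One billed write per session instead of one every 3 seconds.
                // Fire-and-forget; production code would check the ApiFuture.
                db.collection("users").document(uid)
                        .update("lastSeen", FieldValue.serverTimestamp());
            }
        }
    }
}

The trade-off is that you now operate and scale that server yourself, and the detection latency still depends on the keep-alive interval; you have just moved the per-write Firestore cost off the billing meter.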
Related
I can't find documentation on best practices for the maximum number of messages one should send to the Firebase database (or one like it) over a period of time, such as one second, nor on what rate an app can handle receiving without slowing down significantly. For example:
// Send the updated location of the user's character in an MMORPG.
MyDatabaseReference.child(LOCATIONS).child(charid).setValue(location);

// Receive the locations of the other characters in the MMORPG.
MyDatabaseReference.child(LOCATIONS).addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) { /* update character positions */ }
    @Override
    public void onCancelled(DatabaseError error) { /* handle the error */ }
});
In testing, 3 devices each sending 20 messages per second to the database, and each receiving 60 messages per second, appear to work OK (on an S8, a fast device). I was wondering what would happen with, say, 100 devices, in which case each user's app would theoretically receive 2,000 messages per second. I imagine there is some automatic throttling of this.
As mentioned in the official Firebase documentation regarding Firebase database limits, there is a maximum of 1,000 write operations per second on the free plan.
If you want to stay on the free plan, remember that reaching the maximum number of writes per second doesn't mean you won't be able to use the Firebase database anymore. When the 1,001st simultaneous operation arrives, a queue of operations is created, and Firebase waits until one connection is closed before serving the new one.
I have an application that relies heavily on current timestamps. Currently, when a user submits a request, I get the current timestamp in UTC using System.currentTimeMillis(). While this works fine, it becomes a problem when users start manipulating the date/time on their device, which results in inaccurate timestamps.
Why do it on the client then? Why not just handle it on the server? Well, my application needs to work offline. All my requests get pushed into a jobQueue when internet connectivity is unavailable. In these cases, I must have the original time at which the user performed the action: if I submit a request at 4:02pm but, due to network problems, the server only receives it around 7:30pm, the server MUST know that I sent the request at 4:02pm.
Now what options have I considered?
1. Upon user login, sync the device time with the server time and store it locally. If any manipulation occurs while the user is logged in, a BroadcastReceiver listening for date/time-change intents stores the offset, so that whenever the user submits a request, I calculate the synced time plus the offset to keep the timestamp accurate.
2. Have a time-sync API on the backend and set up a service within my application that continuously syncs with the server time and looks for drift, while also listening for user manipulation.
3. Use push notifications and listen downstream for time-synchronization adjustments, while also listening for user manipulation.
4. Make use of NTP servers to synchronize the device's time.
I'm not entirely sure which would be optimal (assuming I have listed all the possible solutions). If there are other solutions I haven't thought of, please let me know.
P.S. If I do end up using the BroadcastReceiver to listen for date/time manipulation on the device, how would I even calculate the offset in that case?
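Regarding the P.S., one hedged way to compute such an offset (a sketch, assuming the server time was captured at login; the field and class names are illustrative): anchor the server time to SystemClock.elapsedRealtime(), which is monotonic and unaffected by the user changing the date/time, then measure how far the wall clock has drifted from it whenever ACTION_TIME_CHANGED fires. Note that elapsedRealtime() resets on reboot, so the anchor pair would need re-syncing then.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.SystemClock;

// Illustrative sketch; serverTimeAtSync / elapsedAtSync are captured at login.
public class TimeChangeReceiver extends BroadcastReceiver {
    static long serverTimeAtSync; // server UTC millis at the moment of sync
    static long elapsedAtSync;    // SystemClock.elapsedRealtime() at that same moment

    // True UTC "now", independent of the user-adjustable wall clock.
    public static long trueTimeMillis() {
        return serverTimeAtSync + (SystemClock.elapsedRealtime() - elapsedAtSync);
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_TIME_CHANGED.equals(intent.getAction())) {
            // Offset between the (possibly manipulated) wall clock and true time.
            long offset = System.currentTimeMillis() - trueTimeMillis();
            // Persist the offset, or just use trueTimeMillis() for timestamps.
        }
    }
}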
It has been some time since I asked this question, and there haven't been any elegant answers to the problem, so after some research and some trial and error, I decided to take the NTP route. After some digging around, I found a nice library that does the entire thing for you.
It can be found here:
NTP TRUE TIME
Credits to these guys, who have made life a lot easier.
You sync with the NTP servers just once, and from then on the library calculates the delta for you, giving accurate UTC regardless of the SystemClock time.
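For reference, a minimal usage sketch of that library (method names as shown in its README; the NTP host is a placeholder). initialize() performs network I/O, so it must run off the main thread:

import com.instacart.library.truetime.TrueTime;
import java.util.Date;

// One-time sync, off the main thread (initialize() blocks on network I/O).
new Thread(() -> {
    try {
        TrueTime.build()
                .withNtpHost("time.google.com") // placeholder NTP host
                .initialize();
    } catch (Exception e) {
        // Retry later; TrueTime.now() throws until initialization succeeds.
    }
}).start();

// Later, anywhere in the app:
Date accurateUtc = TrueTime.now(); // unaffected by user clock manipulation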
For time synchronization, you can also implement something like fetching the time zone from the server. Once you have the time zone, you can get the server's current time.
I'm creating a mobile application for iOS and Android. The problem is that when any data changes on the server, I cannot notify the mobile devices.
I have found 3 solutions, each with pluses and minuses.
1. Use push notifications. Since iOS always shows a notification to the user, this is not a solution at all. Also, I cannot know whether the notification will reach the device, or when it will.
2. Every X seconds, ask the server whether any change exists. I don't want to do that, because creating and closing that many HTTP connections is not a good idea, I think. Also, if the data changes right after the device asks, the device will learn about the change late.
3. Use a WebSocket. My application's expected usage per session is ~2 minutes, so a WebSocket looks like a good choice: the app will be terminated or go to the background quickly, and the battery consumption won't be much. Also, all server-side data changes will reach the device just in time. But I don't know much about WebSockets. Is my reasoning acceptable? How many concurrent connections my server can handle is a question too.
Those are all my solutions.
The documentation would suggest that assumption 1 above is incorrect.
If you read The Notification Payload section, you'll come across this:
The aps dictionary can also contain the content-available property. The content-available property with a value of 1 lets the remote notification act as a “silent” notification. When a silent notification arrives, iOS wakes up your app in the background so that you can get new data from your server or do background information processing. Users aren’t told about the new or changed information that results from a silent notification, but they can find out about it the next time they open your app.
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/ApplePushService.html
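For illustration, a minimal silent-notification payload contains only the content-available key in the aps dictionary; the absence of alert, sound, and badge keys is what keeps it invisible to the user:

{
  "aps": {
    "content-available": 1
  }
}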
I think for the most part this depends on what your app is doing.
I would say you should use a combination of #1 and #2.
2 - At the very base level, if you need information from the server, you are going to have to make a request. If this information needs to be up to date, you can make a request for it when the ViewController is loaded. If you need the information to keep updating while the ViewController is visible, you will need to make subsequent requests every X seconds. In addition, if your user is interacting with this data and sending an update to the server, you can check at that point whether the data is up to date, and alert the user as well as return the current data.
1 - Push notifications operate on a "send and forget" basis: the notification is sent, and there is no verification of whether it was received. They are a supplement to #2 and are "nice", but should not be depended upon.
Push notifications are the intended way (from both Google, through Google Cloud Messaging, and Apple, through the Apple Push Notification service).
Options 2 and 3 are both frowned upon, as they affect battery life and are unnecessary: most scenarios can be covered by push notifications.
I have an application that sends data to a server with a POST request. This request can fail, and if it does I want it to retry until it is finally sent, similar to WhatsApp: if you send a message while you are offline, it stays pending, and when you go online again the message is sent.
Since I don't know how WhatsApp works internally, I have some doubts about how to implement this. I have thought of two ways:
1- Set up an AbstractThreadedSyncAdapter to be executed every X seconds (say, 30) that checks whether there is data to send and, if there is, sends it to the server.
2- When the user asks to send some data, create a thread that tries to send it and, if it fails, sleeps for some seconds and tries again.
I really don't like either option. The first one will increase battery usage, since the application performs work every X seconds even when it isn't needed. The second one will use a lot of battery if the request fails many times.
Is there a better way to do it? It'd be awesome if there were an easy way to detect whether the phone has an internet connection.
Thanks!
In your scenario 2, you can set an alarm when the initial POST fails, to trigger a re-send some time later. If the send succeeds, you cancel the alarm (or simply don't schedule another one).
For getting notified when the device goes online, you may look at this answer: https://stackoverflow.com/a/11084311/100957
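A minimal sketch of that alarm-based retry, assuming a hypothetical RetryReceiver (a BroadcastReceiver you would write to re-attempt the POST when the alarm fires); note that on recent Android versions the PendingIntent would also need FLAG_IMMUTABLE:

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.SystemClock;

public class RetryScheduler {

    public static void scheduleRetry(Context context, long delayMillis) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, new Intent(context, RetryReceiver.class),
                PendingIntent.FLAG_UPDATE_CURRENT);
        // An inexact alarm is fine here and kinder to the battery.
        am.set(AlarmManager.ELAPSED_REALTIME_WAKEUP,
               SystemClock.elapsedRealtime() + delayMillis, pi);
    }

    public static void cancelRetry(Context context) {
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        PendingIntent pi = PendingIntent.getBroadcast(
                context, 0, new Intent(context, RetryReceiver.class),
                PendingIntent.FLAG_UPDATE_CURRENT);
        am.cancel(pi); // call this once the send finally succeeds
    }
}

Using ELAPSED_REALTIME_WAKEUP keeps the schedule immune to the user changing the wall clock, and an inexact alarm lets the system batch wake-ups to save battery.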
Recently Google introduced a push-to-device service, but it's only available on Android 2.2 and up.
I need a similar system in my app, and I'm trying to get around limitations.
The issue is battery life. Since the user must be notified immediately about changes on the server, I thought of implementing a service that would live in the background (a standard Android service) and query the server for updates.
Of course, querying the server, even every second, will cost a lot of bandwidth as well as battery, so my question is this: does it make a difference if the server holds the response for some period of time (the idea behind Comet-style AJAX requests)?
It works like this:
1. The device sends a request for a data update.
2. The server receives the request and loops for one minute, checking for updates on each iteration.
3. If there are updates, the server sends a response back with them.
4. If not, the server moves on to the next iteration.
5. After a minute, it finally sends a response saying no data is available yet.
6. After the response (whether empty or with data), the Android device fires another such request.
It will definitely cost less bandwidth, but will it consume less (or even more) battery?
Holding a TCP socket open (and consequently waiting on an HTTP response), as you suggest, is probably going to be your best option. What you've described is already implemented via HTTP continuation requests. Have a look at the Bayeux protocol for HTTP push notifications, and check out the Android implementation here. For what it's worth, that's definitely what I would use. I haven't done any analysis of it, but it minimizes the amount of data transmitted over the line (which is directly proportional to power consumption) by allowing the connection to hang for as long as possible.
In short, the way Bayeux works is very similar to what you've suggested. The client opens a request and the server waits on it: if it has something to send, it sends it; otherwise it simply waits. Eventually, the request times out, at which point the client makes another request. What you attain is near-instantaneous push from server to client, without constant polling and without duplicating information like HTTP headers on every poll.
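To make the shape of this concrete, here is a minimal hand-rolled long-polling loop (a sketch, not Bayeux itself; the https://example.com/poll URL and the one-line response protocol are placeholders):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        while (true) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/poll").openConnection();
            // Slightly longer than the server's 60 s hold, so the normal
            // outcome is an (empty or non-empty) response, not a timeout.
            conn.setReadTimeout(70_000);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                // Blocks until the server responds with data or gives up.
                String line = in.readLine();
                if (line != null && !line.isEmpty()) {
                    System.out.println("Update: " + line);
                }
            } catch (Exception e) {
                Thread.sleep(5_000); // back off briefly on errors
            }
            // Immediately re-open the request, as described above.
        }
    }
}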
When the phone is actively using the network, its battery drains faster; that is, when it sends the request and when it receives the response. It will also use some battery just by listening for a response. The main question is: will the phone be repeatedly downloading data to check whether there's a response, or will it just be open to receiving one, with the server pushing the response out to the phone? If the phone is merely open to receiving the response and does not actually use the network while waiting, it should use less battery.
Additionally, sending a query every minute instead of every second definitely uses less battery as far as the network goes. It does depend on how you make the phone wait, though: if you tie it up with very complex logic while waiting, it may not help battery life. That's probably not the case here, however, and in all likelihood this will work out for you.
In closing, it should help the battery, though there are ways to implement it that would not. It wouldn't hurt to write the program and then just vary a parameter (for example, WAIT_TIME of 1 second versus 1 minute) and measure the battery usage, would it?