Array.find() taking too long, bad user experience, node.js - android

This is not a question about concurrency or the event loop of Node.
I have a question about how to manage a scenario in which thousands of users are connected by socket and authenticated by JWT token to a Node.js server.
Let's say there are between 10,000 and 50,000 users connected through an Android app with the socket.io client. All of them are pushed into an array, users.push(newUser), when the connection is established by socket.io, and removed on disconnect with users.splice(id,1); no problem there.
But all of them want to update a variable on their respective object in the array every 5 seconds.
I can identify each user's object by its user.id using Array.prototype.find(), but it takes too long.
In my tests (mocha, chai), finding and updating X users takes:
For 1,000 users, 2XXX.XX ms (more than 2 seconds)
For 10,000 users, 4XXX.XX ms (more than 4 seconds)
For 25,000 users, 3XXXX.XX ms (more than 30 seconds)
Beyond 4 seconds there is no way to keep a real-time experience.
Is there any design pattern to work around this?
I have a few ideas, but none of them seems practical and scalable:
Store in the client the position it currently occupies in the array (not good, because each time a user disconnects I would have to notify all the others about whether their index changed).
Separate the users into multiple arrays of at most 1,000 positions and keep a separate mapping, so that .find() runs directly on a relatively small array. (I don't know if this is good practice, but the task of identifying users stays on the server and doesn't take too long.)
Totally abandon Node and look for another solution (I would like to keep it in Node).
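For what it's worth, a common way around the linear scan (a sketch, not from the original post; all names are illustrative) is to key connected users by id in a Map, which makes lookup, insert, and removal O(1) instead of O(n):

```javascript
// Sketch: keep connected users in a Map keyed by user id instead of an array.
// Lookup, insert, and delete are all O(1), so per-user updates stay fast even
// with tens of thousands of connections. Names are illustrative.
const users = new Map();

function addUser(user) {
  users.set(user.id, user);          // on socket.io connection
}

function removeUser(id) {
  users.delete(id);                  // on disconnect
}

function updateUser(id, value) {
  const user = users.get(id);        // O(1) instead of Array.prototype.find()
  if (user) user.lastValue = value;
  return user;
}

// Quick micro-benchmark with 50,000 synthetic users:
for (let i = 0; i < 50000; i++) addUser({ id: i, lastValue: null });
const start = Date.now();
for (let i = 0; i < 50000; i++) updateUser(i, i * 2);
console.log(`50,000 updates in ${Date.now() - start} ms`);
```

Since socket.io already assigns each socket an id, the Map key could also be the socket id rather than the JWT user id, depending on how reconnects should behave.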

I don't think there's a simple answer to this question. Having 10,000 - 50,000 active users at the same time in a chat / game / whatever? That's a massive payload, and I really don't think your app has to worry about that kind of traffic.

Related

Maximum message rate to and from Firebase database

I can't find documentation on best practices for the maximum number of messages one should send to the Firebase database (or one like it) over a period of time, like one second, and also what rate an app could handle receiving without slowing down significantly. For example:
//send updated location of user character in an MMORG
MyDatabaseReference.child(LOCATIONS).child(charid).setValue . . .

//receive locations of other characters in an MMORG
MyDatabaseReference.child(LOCATIONS).addValueEventListener(new ValueEventListener() { . . .
In testing, 3 devices each sending 20 messages per second to the database, and each receiving 60 messages per second, appear to work OK (an S8, a fast device, was used). I was wondering what would happen with, say, 100 devices, in which case each user's app would theoretically be receiving 2,000 messages per second. I imagine there is some automatic throttling of this.
As mentioned in the official Firebase documentation regarding Firebase database limits, there is a maximum of 1,000 write operations/second on the free plan.
If you want to stay on the free plan, remember that reaching the maximum number of writes per second doesn't mean you won't be able to use the Firebase database anymore. When the 1,001st simultaneous operation occurs, a queue of operations is created, and Firebase waits until one finishes, then processes the new one.
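On the client side, one way to stay under a write-per-second budget in the first place is to rate-limit outgoing writes. A minimal sketch (this is a design assumption, not a Firebase API; the clock is injectable so the behavior is testable):

```javascript
// Token-bucket-style limiter: allow at most `limit` writes per rolling window.
// `now` defaults to Date.now but can be injected for deterministic testing.
function makeRateLimiter(limit, windowMs, now = Date.now) {
  let windowStart = now();
  let count = 0;
  return function tryAcquire() {
    const t = now();
    if (t - windowStart >= windowMs) {
      windowStart = t;   // start a fresh window
      count = 0;
    }
    if (count < limit) {
      count++;
      return true;       // caller may perform the write (e.g. ref.setValue(...))
    }
    return false;        // caller should drop or coalesce this update
  };
}

// Demo with a fake clock: 2 writes allowed per 1000 ms window.
let fakeNow = 0;
const tryWrite = makeRateLimiter(2, 1000, () => fakeNow);
console.log(tryWrite(), tryWrite(), tryWrite()); // true true false
fakeNow = 1000;
console.log(tryWrite()); // true (new window)
```

For location updates specifically, dropping an update is usually fine, because the next one supersedes it anyway.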

How to make my application wait for a specific time interval if the system is overloaded

I am working on a restaurant project which sends the customer's order from an Android tablet to the kitchen. I want my application to wait for a specific time before sending the order to the kitchen if the server already contains more than 10 orders. In short, I want my application to read the number of orders and wait. Kindly guide me on whether this is possible.
To wait for long intervals of time, I recommend using AlarmManager:
http://developer.android.com/reference/android/app/AlarmManager.html
The pseudocode for the problem would be something like this:
1. Request the current number of orders.
2a. If <= 10, send the order.
2b. If not, create an alarm to wait X minutes and return to step 1.
Anyway, this has a lot of problems: what happens if a lot of tablets are waiting to send their orders? How can the tablets sort the orders? One order could wait indefinitely because other orders always get scheduled before it.
In my opinion you should save all the orders on the server and manage them from there. That is the simplest and best solution. Then your server can always show the kitchen only 10 orders, and decide which order to show next when one is finished and given to the client. Otherwise you will need a very complex system of communication between the tablets, and to me the cost of doing that makes no sense.
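The server-side approach described above could be sketched roughly like this (a minimal illustration; the class and field names are assumptions, not from the question):

```javascript
// Sketch: accept every order on the server immediately (so tablets never
// wait), keep them in FIFO order, and expose at most `maxVisible` orders
// to the kitchen display at a time.
class OrderQueue {
  constructor(maxVisible = 10) {
    this.maxVisible = maxVisible;
    this.orders = [];                              // oldest first
  }
  add(order) {
    this.orders.push(order);                       // tablet gets an instant ack
  }
  visible() {
    return this.orders.slice(0, this.maxVisible);  // what the kitchen sees
  }
  complete(id) {
    this.orders = this.orders.filter(o => o.id !== id); // next order slides in
  }
}

const queue = new OrderQueue(10);
for (let i = 1; i <= 12; i++) queue.add({ id: i, items: [] });
console.log(queue.visible().length); // 10
queue.complete(1);
console.log(queue.visible()[0].id);  // 2
```

Because ordering is decided in one place, no tablet can starve: every order eventually becomes visible in arrival order.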

Android streaming data to server large latencies

I'm sending requests from my Android phone to my server, one at a time, as fast as possible, and then graphing the time (x axis) vs. hit number (starting at 0). After running this on Android for about 10 minutes, I get a graph like this:
[graph omitted]
The flat parts are unexpected. They signify that every 3 minutes or so, there's a 1-minute lag. Why might this be happening?
A few things I've noticed:
It doesn't matter if the screen is on or off
My server is not the problem. During the long pauses, I'm able to hit the endpoint on my server separately from android.
Both POST requests and requests over websockets have the same problem
It is possible the problem is my Android data connection; I'm not sure how to rule this out as a variable. It would be odd for the data connection to behave this way.

How to maximize efficiency in this complex data transfer scenario

I'm not sure if this question belongs here, as it is purely theoretical; however, I think it fits this Stack Exchange best compared to the rest.
I have 500,000 taxis with Android 4 computers inside them. Every day, after one person or party makes a trip, the computer sends the information about the trip to the Node.js server. There are roughly 35 trips a day, so that means 500,000 taxis * 35 trips = 17,500,000 reports sent per day to the Node.js server. Also, each report has roughly 4,000 characters in it, sized around 5 KB.
The report that the taxi computers send to the Node.js server is just an HTTP POST. Node.js then sends back a confirmation to the taxi. If the taxi does not receive the confirmation for report A within an allotted amount of time, it resends report A.
The Node.js server simply receives the report, sends the confirmation back to the taxi, and then sends the full report to MongoDB.
One potential problem: Taxi 1 sends report A to Node.js. Node.js does not respond within the allotted time, so Taxi 1 resends report A. Node.js eventually processes everything and sends report A to MongoDB twice.
Thus MongoDB is in charge of checking whether it received multiple copies of the same report. Then MongoDB inserts the data.
I actually have a couple of questions. Is this too much for Node.js to handle (I don't think so, but it could be a problem)? Is this too much for MongoDB to handle? I feel like checking for duplicate reports may severely hinder performance.
How can I make this whole system more efficient? What should I alter or add?
The first potential problem is easy to overcome. Calculate a hash of the trip and store it in Mongo. Put a unique index on that field and compare every incoming document against the existing hashes. This way checking for duplicates is extremely easy and really fast. Keep in mind that the hashed content should not include something like the time of sending.
Second problem: 17,500,000/day is roughly 200/second, which sounds scary, but in reality it is not much for a decent server, and it is certainly not a problem for MongoDB.
It is hard to say how to make it more efficient, and I highly doubt you should be thinking about that now. Give it a try, build something, check what is not working efficiently, and come back with specific questions.
P.S. So as not to answer all of this in the comments: you have to understand that the question is extremely vague. No one knows what you mean by a trip document or how big it is. It could be 1 KB, it could be 10 MB, it could be 100 MB (which is bigger than the 16 MB MongoDB document limit). No one knows. When I said that roughly 200 documents/sec is not a problem, I did not say that exactly that amount is the maximum cap, so even if it turns out to be 2 or 3 times more, it still sounds feasible.
You have to try it yourself. Take an average Amazon instance and see how many of YOUR documents (create documents close to your real size and structure) it can save per second. If it cannot handle the load, see how much it can handle, or whether a big Amazon instance can.
I gave you a rough estimate that this is possible, and I had no idea that you wanted to "include admins using MongoDB, to update, select". Did you mention that in your question?

performance tips for android native app using sqlite

We are building an application which requires a good amount of data exchange between different users. We are using SQLite to store the info and a REST API to exchange data with the server.
To ensure high performance and low CPU/memory usage while maintaining a good user experience, we need suggestions on the following:
1. We tried running sync at a frequency of 30 seconds, but it hogs resources. Is there any client-side framework which can be used to sync SQLite with MySQL, or do we have to plan for all possible events ourselves?
2. How do applications like Gmail/Twitter work: do they sync only on demand or keep syncing in the background? I feel it is on demand, but I'm not sure.
3. Should notifications be server-side or client-side (based on updates in SQLite)? In WhatsApp I observed it is client-side only: if I do not tap a received message, I keep getting the notification about it.
4. If we keep notifications server-side and sync on demand, then when the app opens after tapping a new notification, should we make a sync call at that time?
I need an expert opinion: such applications should be designed to manage sync and notifications in a way that does not hog resources and still gives the customer an online kind of experience.
Your question is pretty broad, but I'll at least give you a direction to start.
I've run local databases in iOS and Android that are over 100 MB without incident. SQLite should never be your problem, if you use it correctly. Even with 100,000 rows of data, it is fast and efficient. Where people get into trouble is by not properly indexing the data or over-normalizing the data. A quick search can tell you how to use indexes to optimize your queries, so I won't go into that any further. Over-normalization seems to be not fully understood, so I'll go into a bit more depth on it.
When designing a server database, it is important to minimize the amount of duplicate data. This is often done by breaking up a single record into multiple tables and using foreign keys. On a 1 GB database, this type of data normalization may save 20%, which is fairly significant. This gain in storage comes at the cost of performance. Sequential lookups and joins are frequently necessary to get complete data. On a server, there are plenty of CPU cycles and memory, and no one really notices if a request takes an extra millisecond or two.
A mobile app is not a full database server. The user is actively staring at the screen waiting for the app to respond. Additionally, the CPU and memory available are minimal, which makes that delay even longer. To add insult to injury, mobile databases are only a small fraction of the size of a server database, and duplicate data is already pretty minimal. The same data normalization that may have saved 200 MB (20% of 1 GB) on the server may now save only 5% of 10 MB, or 500 KB. That minor gain is not worth the effort or the performance hit.
When you sync, you do not need a full data set each time. You only need to get data that has changed since the last sync. In many cases, that will be no change at all. You should find a way to identify what the device has on it now and only get the changes.
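A sketch of that delta-sync idea (the updatedAt column, the /sync route, and all names are illustrative assumptions; a real schema would want an index on the change timestamp):

```javascript
// Sketch: the client remembers the timestamp of its last successful sync
// and asks the server only for rows that changed after it. On most syncs
// the answer is an empty set, which is cheap to produce and transfer.
function changesSince(rows, lastSyncMs) {
  return rows.filter(r => r.updatedAt > lastSyncMs);
}

// Hypothetical client flow:
// 1. GET /sync?since=<lastSyncMs>
// 2. apply the returned rows to the local SQLite database
// 3. store the server's current time as the new lastSyncMs
const serverRows = [
  { id: 1, updatedAt: 100 },
  { id: 2, updatedAt: 250 },
];
console.log(changesSince(serverRows, 200)); // [ { id: 2, updatedAt: 250 } ]
```

Using the server's clock for the new lastSyncMs (rather than the device's) avoids missing changes when the two clocks disagree.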
It is important that the UI does not stall waiting for the network request. All network activity should be done on a background thread and notify the UI to refresh once the sync completes.
Lastly, I'll mention that SQLite is NOT thread safe. It is important to limit concurrency with your database access.
