
Managing connected clients

Started October 16, 2015 01:44 AM
13 comments, last by Zipster 9 years, 1 month ago

Why is that necessary?

It's not strictly necessary. We had both services so tightly coupled (the TCP server knows about the Records service, and the Records service maintains a list of TCP server nodes) that changes to one required changes to the other. We thought it was a good idea to decouple them and make it mostly a one-way connection from the TCP server to the other service. Partly for maintenance, and partly because it would be less of a headache if we do need to scale.
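To illustrate the one-way direction: the TCP server can push connect/disconnect events to the Records service without ever needing a call back. A minimal Go sketch (the endpoint and payload here are hypothetical, not our actual protocol):

// Sketch: the TCP front-end pushes presence events to the Records
// service, which never calls back into the TCP server. The endpoint
// and payload are hypothetical.
package report

import (
	"bytes"
	"encoding/json"
	"net/http"
)

type PresenceEvent struct {
	ClientID string `json:"client_id"`
	Node     string `json:"node"` // which TCP server node is reporting
	Online   bool   `json:"online"`
}

func reportPresence(recordsURL string, ev PresenceEvent) error {
	body, err := json.Marshal(ev)
	if err != nil {
		return err
	}
	// Fire-and-forget from the TCP server's point of view; the Records
	// service needs to know nothing about TCP server internals.
	resp, err := http.Post(recordsURL+"/presence", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}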

When the TCP server crashes, who is responsible for telling the record server that the user isn't connected?

If the server app crashes, we have an auto-restart policy. Clients' connected status gets updated when the server has been rebooted, since all clients that were connected to it are by then invalid.

If the hardware crashes, we have no disaster plan for that so far. The rest of the services won't know until an attempt is made to ping the client. Any recommendations? :D


How? Through a periodic cleaning operation?

There isn't any periodic cleaning operation, at least not at this time. Perhaps that's something we should add in the near future.

Our TCP server is a dumb server: it doesn't try to act smart by pinging clients once in a while, or even try to make sense of the protocol beyond authentication. We tried to add something like this, but the biggest problem seems to be correctly detecting that clients have disconnected. We have had cases where the server thinks clients are still connected when they aren't; writing data down the dead stream doesn't trigger any error until minutes later.
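One common fix is an application-level heartbeat with a read deadline, so a dead client is noticed within seconds instead of waiting minutes for TCP to give up. A minimal Go sketch (the interval and the hand-off to the real protocol handler are assumptions):

// Sketch: detect dead clients with an application-level heartbeat.
// The client is expected to send *something* (data or a ping byte)
// at least every heartbeatInterval; the interval is an assumption.
package heartbeat

import (
	"net"
	"time"
)

const heartbeatInterval = 10 * time.Second

func serveConn(conn net.Conn, onDisconnect func()) {
	defer conn.Close()
	defer onDisconnect() // e.g. report "offline" to the records service

	buf := make([]byte, 4096)
	for {
		// If nothing arrives within two intervals, treat the client as
		// gone instead of trusting TCP to report the broken stream.
		conn.SetReadDeadline(time.Now().Add(2 * heartbeatInterval))
		n, err := conn.Read(buf)
		if err != nil {
			return // timeout or hard error: client is considered disconnected
		}
		_ = buf[:n] // hand off to the real protocol handler here
	}
}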


You probably also want to use publish/subscribe rather than polling for propagating online information -- when I log on, I subscribe to the online status of all my friends, rather than having to poll for each friend at some interval. Again, both for user responsiveness (polling has to be slow) and for implementation efficiency (polling uses many orders of magnitude more resources.)

Thank you for this. We'll certainly keep it in mind. Once we reach hundreds of users, we'll need to redesign the communication.
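For reference, the subscribe-on-login pattern can be quite small. An in-process Go sketch (a real deployment would put this behind the presence service rather than in one process):

// Sketch: publish/subscribe for online status. On login a client
// subscribes once per friend; status changes are pushed, never polled.
package presencebus

import "sync"

type Bus struct {
	mu   sync.Mutex
	subs map[string][]chan bool // friendID -> subscriber channels
}

func NewBus() *Bus {
	return &Bus{subs: make(map[string][]chan bool)}
}

// Subscribe is called once at login for each friend of interest.
func (b *Bus) Subscribe(friendID string) <-chan bool {
	ch := make(chan bool, 1)
	b.mu.Lock()
	b.subs[friendID] = append(b.subs[friendID], ch)
	b.mu.Unlock()
	return ch
}

// Publish is called when a user's online status changes; every
// subscriber is notified immediately, so nobody has to poll.
func (b *Bus) Publish(userID string, online bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[userID] {
		select {
		case ch <- online:
		default: // drop if the subscriber is slow; presence is transient
		}
	}
}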

We have a handful of RESTful services in our game, for instance, that require the database to be "pumped" every few seconds or so.


Don't use a database for this. Really! Anything that is not a durable edge transition should not be in a database; it should be in some kind of in-RAM "game" server.
Well, sure. You can use databases for this. And if you ever get big, you will suddenly be under very high pressure to figure out how NOT to use databases for those calls :-)
enum Bool { True, False, FileNotFound };
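To make the in-RAM point concrete: a presence store can be nothing more than a mutex-guarded map of last-seen timestamps. A Go sketch (the staleness cutoff is an assumption and should match whatever heartbeat interval you use):

// Sketch: presence kept entirely in RAM. Nothing here is durable,
// which is the point: if the process dies, everyone is offline anyway.
package presence

import (
	"sync"
	"time"
)

type Store struct {
	mu       sync.Mutex
	lastSeen map[string]time.Time // clientID -> last heartbeat
}

func NewStore() *Store {
	return &Store{lastSeen: make(map[string]time.Time)}
}

func (s *Store) Touch(clientID string) {
	s.mu.Lock()
	s.lastSeen[clientID] = time.Now()
	s.mu.Unlock()
}

// Online treats anyone seen within maxAge as connected; the cutoff
// is an assumption and should match the heartbeat interval.
func (s *Store) Online(clientID string, maxAge time.Duration) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	t, ok := s.lastSeen[clientID]
	return ok && time.Since(t) <= maxAge
}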

ACK messages and a 3-way handshake do not solve the problem that, if a server crashes, the client will still be marked as "active" in the database.

If the server crashed, then NO clients would still be active, and any that were logged that way in the DB would have to be cleaned up as part of the initial server restart processing?

Recovery processing ... possibly attempting to reattach to still-'active' clients, if the server can come back up fast enough (?) and state-data corruption isn't an issue.

-------------------------------------------- Ratings are Opinion, not Fact

If the server crashed, then NO clients would still be active


Multiple servers report into the same database, as per the discussion above. (And this is a common pattern in sharding front-ends for scalability.)
enum Bool { True, False, FileNotFound };
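Which is why restart cleanup has to be scoped per node: each front-end clears only the rows it owns, because other nodes' clients are still live. A Go sketch using database/sql (the sessions table and its columns are hypothetical):

// Sketch: on startup, a front-end marks only *its own* clients as
// disconnected. The sessions table and its columns are hypothetical.
package cleanup

import "database/sql"

func clearStaleSessions(db *sql.DB, nodeID string) error {
	_, err := db.Exec(
		`UPDATE sessions SET connected = FALSE WHERE server_node = $1`,
		nodeID,
	)
	return err
}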

We have a handful of RESTful services in our game, for instance, that require the database to be "pumped" every few seconds or so.

Don't use a database for this. Really! Anything that is not a durable edge transition should not be in a database; it should be in some kind of in-RAM "game" server. Well, sure. You can use databases for this. And if you ever get big, you will suddenly be under very high pressure to figure out how NOT to use databases for those calls :-)

The database is only for persistence. We have a public server that services client requests, and a back-end process for handling the side effects of these requests, such as updating third-party services with client information in the DB. We simply chose to go with a separate process for this work as opposed to stuffing it into the front-end server. And since we already have a little daemon running, we can put a few other maintenance tasks in there.
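That kind of daemon is roughly a polling loop. A Go sketch (the table, columns, and pushToThirdParty callback are placeholders for whatever the real side effects are):

// Sketch: a small back-end daemon that periodically drains pending
// side-effect work from the DB and pushes it to third-party services.
// The table, columns, and pushToThirdParty are hypothetical.
package daemon

import (
	"database/sql"
	"log"
	"time"
)

func pump(db *sql.DB, pushToThirdParty func(clientID string) error) {
	for range time.Tick(5 * time.Second) { // "pumped every few seconds"
		rows, err := db.Query(`SELECT client_id FROM pending_updates`)
		if err != nil {
			log.Println("query:", err)
			continue
		}
		for rows.Next() {
			var id string
			if err := rows.Scan(&id); err != nil {
				break
			}
			if err := pushToThirdParty(id); err != nil {
				continue // leave the row in place; retry on the next pass
			}
			db.Exec(`DELETE FROM pending_updates WHERE client_id = $1`, id)
		}
		rows.Close()
	}
}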

This topic is closed to new replies.
