What about a load balancer?
It also isn't clear how each server will find out about the other servers, or about the clients connected to those servers.
As far as I can tell, the design you suggest ends up with every server on average talking to every other server, which scales like N-squared.
For a well-implemented system with 500,000 connections, though, N can likely be very small, so it might work fine.
I think the bottleneck, in the system as you describe it, will be the lookup: "is B online, and if so, which server is B on?" A single "master server" will probably not scale that to 500,000 users if the typical use case is players talking to players on other servers, unless connections are established rarely and are persistent afterward.
You'll probably want the ability to horizontally shard the "presence" bit (meaning "player B is on host Q.") You can likely build this by using a key/value store with ephemeral keys, like Memcached, Redis, or Zookeeper. (You'd have to investigate which of them best matches your particular use case.)
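As a rough illustration of that presence idea, here is a minimal in-memory sketch of an ephemeral key/value store (the real thing would be Redis, Memcached, or ZooKeeper; the `PresenceStore` class, its method names, and the host names are all hypothetical). Each slave periodically refreshes its players' keys, so a crashed server's entries simply expire instead of going stale:

```python
import time

class PresenceStore:
    """In-memory stand-in for an ephemeral key/value store.
    Keys expire after ttl_seconds, so entries from a crashed
    slave server disappear on their own."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._entries = {}          # player_id -> (host, expires_at)

    def heartbeat(self, player_id, host):
        # Each slave server refreshes its own players' keys periodically.
        self._entries[player_id] = (host, self.clock() + self.ttl)

    def lookup(self, player_id):
        entry = self._entries.get(player_id)
        if entry is None:
            return None
        host, expires_at = entry
        if self.clock() >= expires_at:
            del self._entries[player_id]
            return None             # presence expired: treat as offline
        return host
```

The same keyspace can then be horizontally sharded by hashing the player id, so no single machine holds all presence data.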
For example, if the master server can't find a free slot on any slave server, it will launch another slave server instance on a different machine.
To be clearer: the master server's job is to "redirect" clients to one of the slave servers...
How many clients each one handles depends on the slave server's hardware configuration, maybe 60K per slave server. I can run many slave server instances; three servers was just an example.
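The redirect logic being described could be sketched like this (everything here is an assumption for illustration: the `MasterServer` class, the 60K capacity figure from the comment above, and the placeholder slave addresses):

```python
class MasterServer:
    """Hypothetical sketch: the master holds no game state, it only
    assigns each incoming client to a slave server with free capacity."""

    def __init__(self, capacity_per_slave=60_000):
        self.capacity = capacity_per_slave
        self.slaves = {}  # slave address -> current connection count

    def register_slave(self, address):
        self.slaves[address] = 0

    def assign(self, client_id):
        # Redirect the client to any slave with a free slot.
        for address, count in self.slaves.items():
            if count < self.capacity:
                self.slaves[address] = count + 1
                return address
        # No free slot anywhere: spin up a new slave and use it.
        address = self._launch_new_slave()
        self.slaves[address] = 1
        return address

    def _launch_new_slave(self):
        # Placeholder: a real system would provision a machine here.
        return f"slave-{len(self.slaves) + 1}.example.net"
```

Note this sketch only covers *admission*; it says nothing about how slaves later find each other, which is exactly the gap the replies below point out.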
If the master server spins up a new slave server, how do the other slave servers know about it? Suppose Client 1 connects to Slave Server A. Then 100K more players join, Client 2 arrives, and the Master Server decides to spin up a new Slave Server B with a different IP address and routes Client 2 to it. Now Client 1 wants to send a message to Client 2; how does Slave Server A know where Slave Server B is?
That's just using two servers as an example. In principle you can have N Slave Servers. How does one Slave Server know that Client X is connected to Slave Server Y, without asking every Slave Server "is X connected to you?"? There's also the possibility that X disconnects and reconnects to a completely different Slave Server.
Usually that situation works something more like this or this. Instances can talk to each other when needed, but it is best to keep cross-communication minimal. While those examples happen to use Amazon's AWS services, the overall design is common.
In your schematic design, do the slave servers talk to each other directly, or via the load-balancing server?
They talk out a layer, to their clients, and they talk up a layer, to their data sources, but they are designed not to talk among themselves within the same tier. They are also designed to communicate as little as possible, since network traffic is incredibly slow.
Generally, if you must make a network round trip between your servers, that round trip costs the player a full graphics frame. Machines can request data from persistence and save to persistence, but they should only touch those servers outside of gameplay.
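To put rough numbers on that frame-budget argument (the latency figures below are assumptions for illustration, not measurements; actual costs depend heavily on the network and on serialization overhead):

```python
# At 60 fps, each frame has a budget of ~16.7 ms.
FRAME_MS = 1000 / 60

# Assumed round-trip times for a few common network paths (ms).
round_trips = {
    "same rack":     0.5,
    "same region":   2.0,
    "cross region": 70.0,
}

for path, rtt_ms in round_trips.items():
    frames = rtt_ms / FRAME_MS
    print(f"{path}: {rtt_ms} ms RTT = {frames:.2f} frames of budget")
```

Even an optimistic in-datacenter hop eats a noticeable slice of the frame once queuing and serialization are added, and anything cross-region costs several whole frames, which is why the design above keeps gameplay-path cross-talk to a minimum.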
If the master server spins up a new slave server, how do the other slave servers know about it?
Perhaps he can build it on top of Erlang, where the platform takes care of that bit?
We built ours on Erlang, and it was probably the right choice. It's certainly worked mostly fine for the last 5-6 years or so.
Note that we have hash-based linear scaling, rather than N-squared scaling; that certainly helps!
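The hash-based routing mentioned there could look something like the following sketch (the function name and server addresses are hypothetical): every node computes the same mapping locally, so finding which server owns a player requires no broadcast, keeping traffic linear in N rather than N-squared.

```python
import hashlib

def owning_server(player_id: str, servers: list) -> str:
    """Deterministically route a player to one server by hashing its id.
    Any node with the same server list computes the same answer locally,
    so no 'is X connected to you?' broadcast is needed."""
    digest = hashlib.sha256(player_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]
```

One caveat with this naive modulo scheme: adding or removing a server remaps most players. Consistent hashing is the usual refinement when the server set changes often.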
Heard good things about Erlang. Now I am really curious to try it out :D