
Browser strategy game server architecture (with pics)

Started May 10, 2016 02:20 PM
13 comments, last by hplus0603

If we're speaking about serious volumes (over 10-100M DB transactions/day total), it is not that simple.


Agreed! At work, we have to deal with those volumes and more. Meanwhile, 99.999% of all games will not see those volumes, and I would not recommend that a game developer spend too much time worrying about horizontal sharding or read slaves when the real challenge is making a fun game and figuring out how the world will learn about how fun it is.

BLOBs as such are traditionally pretty bad for DBs


Right -- I used "blob" in the sense of "state checkpoint from the game," not necessarily implemented using the SQL BLOB construct. Implement it however best suits your storage system.

1000 transactions/second is indeed achievable, though depending on DB experience and on the transactions themselves, it can be rather tough to do it


A desktop PC with a quad core CPU and a SATA SSD can do it.

That being said, if it's truly just "userid/gameinstance -> statecheckpoint" then a file on disk would be sufficient, and those can do ... many more :-)
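For illustration, here is a minimal sketch of that file-per-checkpoint approach in Python (all names are made up, and the checkpoint is treated as an opaque byte string); writing to a temp file and renaming keeps a crash from ever leaving a half-written checkpoint behind:

import os
import tempfile

def save_checkpoint(root, user_id, game_instance, state_bytes):
    # One file per (user, game instance); the checkpoint is an opaque blob.
    path = os.path.join(root, "%s_%s.ckpt" % (user_id, game_instance))
    fd, tmp = tempfile.mkstemp(dir=root)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(state_bytes)
            f.flush()
            os.fsync(f.fileno())  # make sure it actually reached the disk
        os.rename(tmp, path)      # atomic replace on POSIX filesystems
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on any failure
        raise

def load_checkpoint(root, user_id, game_instance):
    path = os.path.join(root, "%s_%s.ckpt" % (user_id, game_instance))
    with open(path, "rb") as f:
        return f.read()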


So the question then becomes: Are you doing this just to design a "what if" for a very large game? Or are you actually trying to build a game of your own, and want to know where to start?
For the "what if" scenario, the assumptions about transaction rates and the design of the game play a crucial role.
Regarding the "let's centralize login" question: the only thing login does is issue a cryptographic token, valid for some amount of time, identifying the player as a particular customer ID. The world state databases then just verify this token, without having to connect to the central database. Thus, the load on the login database is proportional to how often users need to log in.
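As a concrete toy version of such a token (Python; the exact format, field names, and secret handling here are illustrative assumptions, not a spec), the login server could do something like:

import hashlib
import hmac
import time

SECRET = b"shared-only-between-your-servers"  # hypothetical; load from config

def issue_token(customer_id, ttl_seconds=3600):
    # Token = "customerid:expiry:signature"; anyone can read it, but only
    # servers holding SECRET can forge or verify the signature.
    expires = int(time.time()) + ttl_seconds
    payload = "%s:%d" % (customer_id, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, sig)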
enum Bool { True, False, FileNotFound };

A desktop PC with a quad core CPU and a SATA SSD can do it.

It Really Depends on the transactions involved. With these transaction numbers, you often have tables with 1e9 rows, and then - well, it is all about indexes (and BTW, from what I've seen it is usually about the latency of committing to the log file, so I normally suggest something which can terminate write requests right on the PCI card, i.e. RAID with BBWC, or NVMe ;-)).

if it's truly just "userid/gameinstance -> statecheckpoint" then a file on disk would be sufficient, and those can do ... many more :-)

Yep - that is, if you don't need inter-game-instance consistency (and if you do, which does happen - then it again becomes complicated...)

I would not recommend that a game developer spend too much time worrying about horizontal sharding or read slaves when the real challenge is making a fun game and figuring out how the world will learn about how fun it is.

I tend to agree, but OTOH leaving an otherwise-great game with no chance to grow, for purely technical reasons, is also pretty bad :-(.

Moral of the story: it would be nice if somebody created a framework doing this kind of thing ;-).


Moral of the story: it would be nice if somebody created a framework doing this kind of thing ;-).


Some of these frameworks exist. Build your game on Amazon SimpleDB, SQS, Lambda, Elastic Beanstalk, and all the rest. You will be able to scale as much as you want!

The drawbacks are:
1) You have to learn how to properly use all these technologies, which takes a lot of time.
2) You have to pay the cost of this infrastructure, even during development.
3) Each feature you're trying out will have to be architected to fit within this system, which reduces development velocity.

Another option is to factor the system onto your own servers, but just run "two of each" all on a single box (or a couple of boxes). Start hosting this box in your server closet; when you go to beta testing, buy into some co-location facility.
This method lets you scale costs a little better, but the drawback is that you have to factor the services yourself. (This is also a benefit, because you can factor based on your game's needs, not based on whatever primitives Amazon happens to have built for their business.)

If you're a growing studio, with a separate back-end team, and sound business reasons to go into large multiplayer systems, then that totally makes sense!
If you're on your first real game, funded by living on ramen, then that's probably not where you should spend your efforts.

Now, does it make sense to think at least a little bit about server-side structure and costs? Yes, absolutely! It's just that the right amount of thinking is, for 99.9% of games, probably less than this entire stack.
enum Bool { True, False, FileNotFound };

The Auth server does not need to send the auth token to each individual game world; this cuts down #5 and #6. Simply send the token back to the client. When the client logs in to the game server, it sends the token along, and the server can just check with the Auth server whether the token is valid. This decouples the auth mechanism from game server availability.

To maintain a list of game server loads, each game server should broadcast its population status to a redis/cache server on a periodic basis, and the frontend server can subscribe to it through a pub/sub mechanism (see the sketch below). This decouples server health status from app logic.
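As a rough sketch of that pattern (using the Python redis client; the channel name, key names, and host are made up for the example):

import json
import time

import redis

r = redis.Redis(host="cache.internal")  # hypothetical cache host

def publish_load(server_id, player_count, capacity):
    # Game server side: call this every few seconds.
    status = {"server": server_id, "players": player_count,
              "capacity": capacity, "ts": time.time()}
    # Keep a key with a short TTL so crashed servers age out of the list,
    # and publish the update for anyone subscribed.
    r.setex("load:%s" % server_id, 15, json.dumps(status))
    r.publish("server-load", json.dumps(status))

def watch_load():
    # Frontend/monitoring side: react to each load update as it arrives.
    sub = r.pubsub()
    sub.subscribe("server-load")
    for msg in sub.listen():
        if msg["type"] == "message":
            status = json.loads(msg["data"])
            print("%(server)s: %(players)s/%(capacity)s players" % status)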

To summarize, the authentication flow looks like this:

Clients -> Frontend server -> Auth Server // player's credentials sent all the way to auth server

Clients <- Frontend server <- Auth Server // player's token is returned back to clients

Clients -> Game Server -> Auth Server // player's token sent to the game world server, which then checks with auth servers.

Publish server load:

Game Servers -> Redis/Cache

Get list of server availability:

Redis/Cache -> Frontend -> Client

Redis/Cache -> Monitoring Service

the server can just check with the Auth server whether the token is valid


No, you don't need to check in with the auth server: if the auth server signs the auth token, the game servers can just verify that signature.
Signing can use public/private keys (RSA style) or a shared secret (HMAC style); because you control both kinds of servers, the secret won't leak to the players.
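Continuing the toy token sketch from above (same assumed "customerid:expiry:signature" format and shared secret), verification on the game server is then purely local -- no round trip to the login database:

import hashlib
import hmac
import time

SECRET = b"shared-only-between-your-servers"  # same secret as the login server

def verify_token(token):
    # Returns the customer id, or None if the token is forged or expired.
    try:
        customer_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = "%s:%s" % (customer_id, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None        # bad signature -- not issued by our login server
    if int(expires) < time.time():
        return None        # token expired; client has to log in again
    return customer_id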
enum Bool { True, False, FileNotFound };

