
Browser strategy game server architecture (with pics)

Started by May 10, 2016 02:20 PM
13 comments, last by hplus0603 8 years, 6 months ago

Hey all,

(I am re-posting here, as suggested in my gamedev.stackexchange post:

http://gamedev.stackexchange.com/questions/121237/browser-strategy-game-server-architecture-with-pics)

I have been working as a Java developer for over 7 years, and while studying AngularJS I decided to create a browser strategy game for fun, like Travian/TribalWars/Ikariam, etc.

Right now I am thinking about how the server architecture should look. I tried to find some examples of how these kinds of games distribute their servers, but I couldn't find any useful resource, so I came up with my own.

My sketch and authentication flow (follow the numbers from 1 to 9):

[Image: sketch of the server architecture and authentication flow (2aY9Q.png)]

In general, is this a suitable architecture?

From my personal developer experience, I decided it was a safe idea to split the public front-end from the auth and game servers.

Do you think there are any cons to this architecture?

About hosting, which service/host is most suitable for each server?

1) Authentication server, single point

2) Front-End server, distributed over the world

3) Game Servers, each one in its specific country

I am open to new ideas and changes!

Thanks in advance!

Moving to the multiplayer/networking forum, which probably has a better audience to address your needs.

Steps 1 & 2 seem chatty. The worlds' online/offline status changes rarely. Knowledge of which worlds are up should be independent of the login process, not a query every time someone loads a login page.

I don't understand why you want step 6. If the auth server says an account is authenticated with a token, then the user is authenticated. No round-trip is required here; step 5 is effectively an unnecessary notice, and step 6 is just a reply to that notice.

With that, it boils down to:

3) Client sends a request to the front-end machine for login.

4) Front-end sends request to back-end auth server

5) On success, back-end notifies a listener machine

6) ACK the notice

7) Back-end notifies front-end of authorization results.

8) Front-end notifies user of redirect location or error message

9) User uses token on redirect location.

From that, I would dump #5 and #6. Other servers don't need to be "told" like this. As part of the handshaking for what would be step 10 (the client getting hooked in to the world server), the world server should ask the auth server whether the token is valid and store the result for the session. (Other standard things should take place: auth tokens should expire after a short time and be replaced automatically, network boundaries should be enforced, and your world servers are likely more complex than depicted, etc.)
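
Roughly, a sketch of that step-10 check could look like this (the /validate endpoint and all names are invented for illustration; this is not a complete implementation):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch: the world server checks a login token against the auth server
 * once, then caches the result for the lifetime of the session.
 */
public class TokenValidator {

    // sessions we have already validated, keyed by token
    private final Map<String, Long> validatedSessions = new ConcurrentHashMap<>();

    private static final String AUTH_VALIDATE_URL = "https://auth.example.com/validate"; // hypothetical endpoint
    private static final long SESSION_TTL_MS = 30 * 60 * 1000; // re-check after 30 minutes

    public boolean isSessionValid(String token) {
        Long validatedAt = validatedSessions.get(token);
        if (validatedAt != null && System.currentTimeMillis() - validatedAt < SESSION_TTL_MS) {
            return true; // already checked, still fresh
        }
        if (askAuthServer(token)) {
            validatedSessions.put(token, System.currentTimeMillis());
            return true;
        }
        validatedSessions.remove(token);
        return false;
    }

    // One HTTP round-trip to the auth server; HTTP 200 means the token is good.
    private boolean askAuthServer(String token) {
        try {
            String query = "token=" + URLEncoder.encode(token, StandardCharsets.UTF_8.name());
            HttpURLConnection conn = (HttpURLConnection) new URL(AUTH_VALIDATE_URL + "?" + query).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode() == 200;
        } catch (IOException e) {
            return false; // treat auth-server failures as "not authenticated"
        }
    }
}
```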


Thanks for your reply.

The idea is that steps #1 and #2 are not executed on every login request, but periodically, like a "heartbeat". It may truly not be needed at first, since I could check whether a server is online at the moment the client tries to log in. However, if I want to show somewhere else which servers are available, wouldn't I have to make my front-end server update periodically and gather that info?

Like this example from an MMORPG login screen:

[Image: MMORPG login screen listing available game worlds and their status (gw391.jpg)]
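
Something like this rough sketch is what I have in mind for the heartbeat (the world URLs and health endpoint are just placeholders I invented):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: the front-end polls each world server once a minute and caches
 * the result, so the login page can show the world list without querying
 * the worlds on every page load.
 */
public class WorldStatusCache {

    // hypothetical world health-check URLs
    private static final List<String> WORLD_URLS = Arrays.asList(
            "http://world1.game.com/health",
            "http://world2.game.com/health");

    private final Map<String, Boolean> status = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleAtFixedRate(this::refresh, 0, 60, TimeUnit.SECONDS);
    }

    /** Called by the login page: read the cached status, never the worlds directly. */
    public Map<String, Boolean> snapshot() {
        return status;
    }

    private void refresh() {
        for (String url : WORLD_URLS) {
            status.put(url, ping(url));
        }
    }

    private boolean ping(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}
```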

For #5 and #6, OK, I got it, so I should postpone and invert the token validation.

I have another question: some browser games can log you in automatically (if you were logged in previously) when you access a world directly, like "http://world1.game.com". How should that work to be safe?

Like:

- On its first login, the client sends the valid token (which expires right after use), receives a "SessionToken" (jsessionid), and stores it in its browser cookies.

- On the next day, the client tries to access "http://world1.game.com" directly, sending all cookies.

- The World1 server checks whether the "SessionToken" exists and allows the user to directly resume their session.

Should it work this way? Or might I have security problems?
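
To make the question concrete, here is a rough sketch of what I imagine the check on the world server looking like (assuming a servlet container; the names are illustrative):

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the auto-login check on a world server.
 * The "SessionToken" cookie was issued at first login (HttpOnly + Secure);
 * here the server only looks it up and checks that it has not expired.
 */
public class AutoLogin {

    // sessionToken -> expiry time in millis; in practice this lives in a shared store
    private final Map<String, Long> activeSessions = new ConcurrentHashMap<>();

    /** Returns true if the request carries a still-valid session cookie. */
    public boolean canResumeSession(HttpServletRequest request) {
        Cookie[] cookies = request.getCookies();
        if (cookies == null) {
            return false; // no cookies at all: go through the normal login flow
        }
        for (Cookie cookie : cookies) {
            if ("SessionToken".equals(cookie.getName())) {
                Long expiresAt = activeSessions.get(cookie.getValue());
                if (expiresAt != null && expiresAt > System.currentTimeMillis()) {
                    return true;
                }
            }
        }
        return false;
    }
}
```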

Thanks in advance!

Most games probably look like this:

[Image: diagram of a simple game server setup (2016-05-10-simple-game-server.png)]

Especially for turn-based, asynchronous multiplayer games, there just isn't that much to do, and you can go very far on a single monolith.

Once the simulation cost of the game becomes higher, you'll start scaling out the application servers while keeping a single database.

Keeping a separate "login server" function (and database) apart from the "world server" and its databases is reserved for the very largest MMOs, where there is both significant per-world-instance state and a very large user base.

Authentication tokens are pretty typical; this is akin to certain kinds of session cookies used for web services. An excerpt from Game Programming Gems about this is here: http://www.mindcontrol.org/~hplus/authentication.html
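
As a rough illustration (not necessarily the exact scheme from that article), a token can be as simple as an HMAC over the account id and an expiry time, which any server holding the shared secret can verify without a database lookup:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/**
 * Sketch of an HMAC-signed login token: the auth server signs
 * "accountId:expiry", and any server holding the same secret can verify it.
 */
public class AuthToken {

    private static final byte[] SECRET = "change-me-shared-secret".getBytes(StandardCharsets.UTF_8);

    /** Issue a token; assumes accountId itself contains no ':'. */
    public static String issue(String accountId, long ttlMillis) {
        String payload = accountId + ":" + (System.currentTimeMillis() + ttlMillis);
        return payload + ":" + sign(payload);
    }

    /** Verify the signature and the expiry time embedded in the token. */
    public static boolean verify(String token) {
        try {
            int lastColon = token.lastIndexOf(':');
            String payload = token.substring(0, lastColon);
            String signature = token.substring(lastColon + 1);
            long expiresAt = Long.parseLong(payload.substring(payload.indexOf(':') + 1));
            // a real implementation should compare signatures in constant time
            return expiresAt > System.currentTimeMillis() && sign(payload).equals(signature);
        } catch (Exception e) {
            return false; // malformed token
        }
    }

    private static String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e); // HmacSHA256 is always available on the JVM
        }
    }
}
```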


Thank you for your suggestion.

If I have multiple world instances all over the world (US, Europe, South America, etc.) and concentrate the whole database on a single server (in the US, for instance), wouldn't that cause some "heavy traffic" on the DB server and delays in committing from the world servers?

I am not saying that a few extra milliseconds for each asynchronous commit would have much impact, but considering that initially I could have 5 worlds with hundreds of active online players in each one, couldn't the DB server be overwhelmed?


Heavy traffic and system load are going to depend quite a lot on your implementation details.

People can build two similar systems where one starts to collapse around 500 requests per second and the other survives to 10,000 requests per second, with the differences being entirely small details that don't fit on a chart or in a forum post.

Can a DB server be overwhelmed? Sure, but that's nothing new. You can overwhelm a DB server with a small number of really terrible calls on a data set that does not fit well on the architecture. You can write bad software in any language.

What you described can work, but so can many other ideas. What you described is more common in larger architectures. Basically you've got one server that handles light tasks of authentication and initial load balancing. There is nothing inherently wrong with it. It is probably overbuilt, but it can work if that is what you want to build.

5 worlds with hundreds of active online players in each one, couldn't the DB server be overwhelmed?


That sounds like a very small game. If the game is asynchronous (like Backyard Monsters or Travian or whatever) then you just need to store a new updated game state blob each time a player has made changes, and you only need to read when some player attacks or looks at the state (including the original player). Specifically, all of the "growth" can be handled WHEN YOU READ using game logic, because it's 100% deterministic.
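
As a rough sketch of what "handled when you read" means (field names invented for illustration):

```java
/**
 * Sketch of "apply growth on read": nothing ticks while a village is idle;
 * when anyone loads it, resources are recomputed deterministically from
 * the last-saved state and the elapsed time.
 */
public class Village {

    private double wood;
    private double woodPerHour;     // known production rate
    private long lastUpdatedMillis; // when the blob was last saved

    /** Bring the state up to "now" before anyone reads or attacks it. */
    public void catchUp(long nowMillis) {
        double hoursElapsed = (nowMillis - lastUpdatedMillis) / 3_600_000.0;
        wood += woodPerHour * hoursElapsed;
        lastUpdatedMillis = nowMillis;
    }

    public double getWood() {
        return wood;
    }
}
```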

If that's the case, I'd expect a smallish MySQL instance to deal with the persistence just fine. Given that all saving should be asynchronous anyway, why does a few hundred milliseconds of cross-world latency matter?

Let's say that, while I'm changing things, you batch save requests into 10-second intervals. If I've made a change and it saves, then further changes I make until 10 seconds have transpired get batched into a new save request. There's a "saved versus not" red/green indicator that shows my status. Then the absolute worst that can happen for 1,000 online players is 100 save transactions per second. A save transaction is just "store this blob for this user/world instance", which is the most trivial database transaction. Any modern DB should be able to easily do 10x that many transactions without undue load.
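
A minimal sketch of that batching, with all names invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of batched saves: changes only mark a player dirty; a background
 * task flushes each dirty state blob at most once per 10-second window,
 * so 1,000 active players cost at most ~100 writes per second.
 */
public class SaveBatcher {

    private final Map<String, byte[]> dirty = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        scheduler.scheduleAtFixedRate(this::flush, 10, 10, TimeUnit.SECONDS);
    }

    /** Called whenever the player changes something; just overwrites the pending blob. */
    public void markDirty(String playerId, byte[] stateBlob) {
        dirty.put(playerId, stateBlob);
    }

    private void flush() {
        for (Map.Entry<String, byte[]> entry : dirty.entrySet()) {
            persist(entry.getKey(), entry.getValue());
            // remove only if the blob hasn't been replaced again in the meantime
            dirty.remove(entry.getKey(), entry.getValue());
        }
    }

    private void persist(String playerId, byte[] stateBlob) {
        // placeholder: "store this blob for this user/world instance" in the database
    }
}
```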

So ... no?

Of course, the devil is in the details. If you believe that every action made by the user must be 100% durable in the database, and be acknowledged with 50 milliseconds latency, then you will have a very different (and much more expensive) operational profile. Which is why most games don't do that.

Most games probably look like this:

[Image: diagram of a simple game server setup (2016-05-10-simple-game-server.png)]

Especially for turn-based, asynchronous multiplayer games, there just isn't that much to do, and you go very far on a single monolith.

I've recently had rather long discussions on this with a former senior Zynga guy (GDC speaker, etc. etc. etc.). If we're speaking about serious volumes (over 10-100M DB transactions/day total), it is not that simple. The problem is that a single DB becomes a Bad Bottleneck way too quickly. The key here is to do write caching optimally, and then we're all-in with memcached/Redis/... - and also CAS, optimistic-vs-pessimistic locking, etc. etc. etc. Some further (though still sketchy) discussion of web architectures can be found on my site here: http://ithare.com/chapter-via-server-side-mmo-architecture-naive-and-classical-deployment-architectures/ . It is not that different from your diagram, but it is laid out in different (web-like) terms...
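
As a rough illustration of the CAS/optimistic-locking part (assuming the Jedis Java client against Redis; the key and helper names are mine, not from any real codebase):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

/**
 * Sketch of optimistic locking (check-and-set) against Redis used as a
 * write cache in front of the database: WATCH the key, re-read, apply the
 * change, and retry if someone else wrote it in between.
 */
public class RedisWriteCache {

    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis

    /** Adds resources to a cached village state; retries on concurrent modification. */
    public void addWood(String villageKey, long amount) {
        while (true) {
            jedis.watch(villageKey);
            long current = parse(jedis.get(villageKey));

            Transaction tx = jedis.multi();
            tx.set(villageKey, Long.toString(current + amount));
            if (tx.exec() != null) {
                return; // committed; a null result means the key changed under us, so retry
            }
        }
    }

    private long parse(String value) {
        return value == null ? 0L : Long.parseLong(value);
    }
}
```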

Then the absolute worst that can happen for 1,000 online players is 100 save transactions per second. A save transaction is just a "store this blob for this user/world instance" which is the most trivial database transaction. Any modern DB should be able to easily do 10x that many transactions without undue load.

Two comments. First, BLOBs as such are traditionally pretty bad for DBs (depending on the DB, we can speak of a 10x performance hit compared to a "normal" transaction; I've done splitting BLOBs into several VARCHAR rows to speed things up myself). Second, if not for BLOBs - yes, 1,000 transactions/second is indeed achievable, though depending on DB experience and on the transactions themselves, it can be rather tough to achieve.
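
A rough sketch of that split-into-VARCHAR-rows trick (table and column names are invented; the chunk size depends on your DB):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/**
 * Sketch of storing a serialized game state as several VARCHAR rows instead
 * of one BLOB. Assumed table:
 *   CREATE TABLE game_state (player_id VARCHAR(64), chunk_no INT, chunk VARCHAR(8000));
 */
public class ChunkedStateWriter {

    private static final int CHUNK_SIZE = 8000;

    public void save(Connection conn, String playerId, String serializedState) throws SQLException {
        try (PreparedStatement delete = conn.prepareStatement(
                     "DELETE FROM game_state WHERE player_id = ?");
             PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO game_state (player_id, chunk_no, chunk) VALUES (?, ?, ?)")) {

            // drop the previous snapshot, then write the new one in fixed-size chunks
            delete.setString(1, playerId);
            delete.executeUpdate();

            for (int i = 0, chunkNo = 0; i < serializedState.length(); i += CHUNK_SIZE, chunkNo++) {
                insert.setString(1, playerId);
                insert.setInt(2, chunkNo);
                insert.setString(3, serializedState.substring(i,
                        Math.min(i + CHUNK_SIZE, serializedState.length())));
                insert.addBatch();
            }
            insert.executeBatch();
        }
    }
}
```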

This topic is closed to new replies.
