
MMOG Server-Side. Front-End Servers and Client-Side Random Balancing

Started January 04, 2016 08:01 AM
34 comments, last by Sergey Ignatchenko

First of all, I apologize if this post doesn't belong here; just let me know and I will cease and desist right away.

IMNSHO, MMOG architecture is a heavily under-described topic, so (having a bit of experience in this field) I've decided to write a book on it. The book is now in "public beta", and I'm humbly asking for feedback from all game developers interested in MMOs, whether experienced MMO developers or not.

Your comments do make the book better (I've already made over 30 changes based on comments on the site and reddit, with some changes being multiple pages in size)! Experienced MMO developers usually mention things and techniques which I forgot about (or didn't know about; one cannot possibly know everything), while experienced non-MMO developers ask questions which help me to understand what needs to be explained in more detail. If you make a comment which makes me change the text, you'll be able to get a free e-copy when/if the book is out.

"Beta" chapters of the book are usually published weekly. Last week's chapter was about MMOG Server-Side. Front-End Servers and Client-Side Random Balancing. All comments are extremely welcome.

Great work!


I'm working on an MMORPG (the server-side and networking part) for my company, and I'm working on my own architecture for distributed/load-balanced servers for all MMO games (see Pulse.NET on the Unity forum; if you're interested I will post something here).

I will read your chapters; I really love reading about this topic: there are few documents on MMO networking.

Online Services Engineer @ Supernova Games Studios

  • this, combined with the observation that Front-End Servers are easily replaceable, means that you improve the reliability of your site as a whole; instances when some of your Game World Servers go down will occur more rarely (!)

Your diagram should probably cover where the actual related websites are, and distinguish them separately from the application servers (unless you're running IIS/Apache on the application servers?).

Front-end servers would make the service more reliable (the "service" including all nodes behind it).

I think there should be a distinction between having smart and dumb front-end servers. Having smart ones (that have a state copy of the game to serve to clients) requires you to get more expensive hardware. Under

"you can use really cheap boxes for your Front-End Servers"

you have

"having a copy of relevant game world(s) on your Front-End Servers allows you to have a virtually unlimited number of observers who want to watch some of the games being played on your site (such as a Big Final or something). Best of all, this will happen without affecting the game server's performance (!)"

I'm not sure what kind of MMO you're covering, but if your front-end servers are handling clients randomly, they need to be able to load game world state from many different areas, so there can be high memory requirements, and they might need to be smart in determining what they can load. Where I work, we have high-volume front-end servers, and depending on what kind of transactions are running, we can have some extremely high memory requirements during peak hours.

If you run simulation services (that need to know about the "world") on the server side, those servers really do need to be persistent, and generally geographically sharded (worlds/zones/instances) -- separate from the "batch" services that you typically use in a regular transactional system.

The question "which monsters are hit by this fireball at this time?" requires 100% different code and data structures than the question "does this player have enough gold to buy this object from this other player's auction?"

"watching" (as opposed to participating) is easily solved with a fan-out data distribution architecture, and you typically don't need more than one tier of fan-out even for the biggest things. 50,000 observers for the zeroth tier means 2,5 billion observers in a first tier before that's full :-)

If you add geographic location awareness to the observation, though, this stops scaling as nicely. Specifically, most MMO games do not let all players see everything in the world (or zone, or instance) but instead only send update streams for some area around the player. This is by necessity, as a world may have 10,000 players and equally many monsters; trying to send all of that to all players is way too much information in real time. Even for specific instances, 250 players in a particular city or dungeon, plus an equivalent number of mobs, will be too much to send to each player. At that point, the job of "distribute real-time game data" also needs a real-time per-viewer location filter, which may add to the CPU requirements of observers.
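
For illustration, a minimal sketch of such a per-viewer location filter, under assumptions of my own (flat 2D positions, a fixed visibility radius; none of this is from the book):

    // Interest management sketch: send each viewer only the entities within
    // some radius of their position. All names and values are illustrative.
    #include <vector>

    struct Entity { int id; float x, y; };

    std::vector<Entity> visible_to(const Entity& viewer,
                                   const std::vector<Entity>& world,
                                   float radius) {
        std::vector<Entity> out;
        const float r2 = radius * radius;  // compare squared distances, no sqrt
        for (const auto& e : world) {
            const float dx = e.x - viewer.x, dy = e.y - viewer.y;
            if (e.id != viewer.id && dx * dx + dy * dy <= r2)
                out.push_back(e);          // part of this viewer's update stream
        }
        return out;
    }

    int main() {
        std::vector<Entity> world{{1, 0, 0}, {2, 5, 5}, {3, 500, 500}};
        return visible_to(world[0], world, 50.f).size() == 1 ? 0 : 1; // sees #2 only
    }

A real server would replace the linear scan with a spatial grid or quadtree; running this per viewer per tick is exactly where the extra CPU cost mentioned above comes from.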
enum Bool { True, False, FileNotFound };

Thanks for the comments!


Your diagram should probably cover where the actual related websites are, and distinguish them separately from the application servers (unless you're running IIS/Apache on the application servers?).

When I spoke about the site being reliable, I didn't mean a website; I meant a "site" as in "a bunch of servers running the game as a whole" (and seen by the user as a single entity, so that when they say "the site is down", they don't mean a website). I kind of agree that this term is confusing, but do you have a better idea for what to name such a thing?


I'm not sure what kind of MMO you're covering, but if your front-end servers are handling clients randomly, they need to be able to load game world state from many different areas

It really depends on the size of your game world, but when expressed not in terms of meshes but in terms of players etc., it won't be TOO much. What kind of game world are you speaking about (and how many of them)?
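
To put a rough (and entirely assumed) number on "won't be TOO much": macro-level state per entity (position, velocity, a few stats) is on the order of a few hundred bytes, so even a large world fits in a few megabytes:

    // Back-of-the-envelope memory for a macro-level copy of one game world.
    // 256 bytes/entity is an assumption for illustration, not a figure from the book.
    #include <cstdio>

    int main() {
        const long long entities = 20'000;        // e.g. 10,000 players + as many mobs
        const long long bytes_per_entity = 256;   // position, velocity, a few stats
        std::printf("%lld MB\n", entities * bytes_per_entity / (1024 * 1024)); // ~4 MB
    }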


Thanks for the comments!


If you run simulation services (that need to know about the "world") on the server side, those servers really do need to be persistent,

Does "in-memory state" qualify your interpretation of "persistent"? (term "persistent" is used differently by different groups of people and may mean "persistent" as in "persistent storage" which is different from usual game "persistence"). In other words - do you care about server crashes at this point?


and generally geographically sharded (worlds/zones/instances) -- separate from the "batch" services that you typically use in a regular transactional system.

If you need geo sharding, then yes, and I've tried to cover this in Fig. VI.10. On the other hand, geo sharding is not 100% universal, so there are other configurations too.



"watching" (as opposed to participating) is easily solved with a fan-out data distribution architecture, and you typically don't need more than one tier of fan-out even for the biggest things. 50,000 observers for the zeroth tier means 2,5 billion observers in a first tier before that's full :-)

Sure, and fan-out is exactly what Front-End Servers are about :-). Having two separate infrastructures, one for observing and another one for playing, is possible, but I didn't see a need for it (yet?)



Specifically, most MMO games do not let all players see everything in the world (or zone, or instance) but instead only send update streams for some area around the player. This is by necessity, as a world may have 10,000 players and equally many monsters; trying to send all of that to all players is way too much information in real time. Even for specific instances, 250 players in a particular city or dungeon, plus an equivalent number of mobs, will be too much to send to each player. At that point, the job of "distribute real-time game data" also needs a real-time per-viewer location filter, which may add to the CPU requirements of observers.

Right, this is quite obvious (and will be covered in Chapter VII, I hope). What I'm arguing for here is to do this filtering (which I know as "server-side fog of war", or "restricted quotes" back from my stock exchange days ;-) ) on those Front-End Servers. So basically (unless something Really Strange is going on), the Front-End Server keeps a copy of the whole world (not in mesh terms, but in macro terms such as PC locations etc.), and then filters it before distributing it to the clients (according to, say, the location of the specific client). Still, it gives a nice separation of responsibilities between the Game World Server (which simulates things) and the Front-End Server (which just delivers [filtered] game world state to the clients).
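
To make that separation of responsibilities concrete, here is a minimal sketch; all types and the update format are illustrative assumptions of mine, not the book's actual design:

    // Front-End Server sketch: keep a macro-level copy of the world, updated
    // from the Game World Server's stream, and send each client a filtered view.
    #include <unordered_map>
    #include <vector>

    struct EntityState { float x, y; };             // macro state only: no meshes
    struct Update { int entity_id; EntityState state; };

    class FrontEndServer {
        std::unordered_map<int, EntityState> world_;  // whole-world macro copy

    public:
        // Called for every delta arriving from the Game World Server.
        void on_world_update(const Update& u) { world_[u.entity_id] = u.state; }

        // Called per client per tick: the "server-side fog of war" filter.
        std::vector<Update> view_for(const EntityState& client_pos, float radius) const {
            std::vector<Update> out;
            const float r2 = radius * radius;
            for (const auto& [id, s] : world_) {
                const float dx = s.x - client_pos.x, dy = s.y - client_pos.y;
                if (dx * dx + dy * dy <= r2) out.push_back({id, s});
            }
            return out;                              // goes over the wire to this client
        }
    };

    int main() {
        FrontEndServer fe;
        fe.on_world_update({1, {10.f, 10.f}});
        fe.on_world_update({2, {900.f, 900.f}});
        return fe.view_for({0.f, 0.f}, 100.f).size() == 1 ? 0 : 1; // sees entity 1 only
    }

Note how the Game World Server never pays for the filtering: it just publishes deltas, and all the per-client work lives on the Front-End Server.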

Does "in-memory state" qualify your interpretation of "persistent"?


"persistent" was intended to address the discussion about "stateless" or "any player on any server" choices.
If you do physical simulation, then all players in the same area of the world must be on the same simulation server.
This is because of how physical simulation works.
(In the past, Sun tried to do it another way with Project Darkstar; that didn't work out so well, for easily predicted reasons.)

geo sharding is not 100% universal, so there are other configurations too.


If you don't have physical simulation where players interact, then you don't need geo sharding.
Say, FarmVille. However, a lot of people don't think that FarmVille is an "MMO" -- it's not actually "multiplayer."


Sure, and fan-out is exactly what Front-End Servers are about :-). Having two separate infrastructures, one for observing and another one for playing, is possible, but I didn't see a need for it (yet?)


Physical simulation is not needed by observers, but is needed by participants. "Receive full firehose data stream and filter to each observer" is a different function than "receive inputs, simulate, send filtered outputs." Yes, the "send filtered output" bit is shared.
At There.com, we actually sent all observers through a view server, including observation for the player themselves. That way, communication was a loop; player sent input to simulation server; simulation server sent game state to view server; view server sent game view to player again. It's architecturally nice (which is why we did it) but I think it ultimately cost more in complexity than it gained us in capability.

What I'm arguing for here is to do this filtering (which I know as "server-side fog of war", or "restricted quotes" back from my stock exchange days ;-) ) on those Front-End Servers.


The industry term for this (games and mil/sim) is "interest management" -- who should see what, when.

Anyway, in the end: Random balancing works for web sites, and for games that aren't really multiplayer. It quickly breaks down when you actually add the real multiplayer interaction, and is totally gone by the time you also need physical simulation.
enum Bool { True, False, FileNotFound };

"persistent" was intended to address the discussion about "stateless" or "any player on any server" choices.

OK, this is the classical game definition. But then it is IMHO pretty much covered by Fig. VI.10.


If you don't have physical simulation where players interact, then you don't need geo sharding.

Actually, the reason for geo sharding is not about simulation, but about latencies and responsiveness. So yes, for an MMOFPS you do need it, but for an MMORPG you MIGHT be able to avoid it, especially if it is a game such as The Sims (or There.com, as you've mentioned). 200ms RTT to Australia is not that bad if one can do client-side prediction and the impact of the temporary misalignments on the gameplay is small. BTW, if one wants to make a phone version, he'll need to deal with 200+ms delays anyway. Of course, there can be other reasons for having separate "game worlds" (like synchronizing people into the same time zone), but I've seen non-shooter simulations which worked very believably across a 200+ms delayed link.
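
For what it's worth, a minimal sketch of the client-side prediction I'm referring to (structure and names are my own assumptions): apply inputs locally right away, remember them, and when the server's authoritative state arrives, rewind to it and replay the not-yet-acknowledged inputs:

    // Client-side prediction sketch: hides the 200+ms RTT by predicting locally.
    #include <deque>

    struct Input { int seq; float dx; };
    struct State { float x = 0; };

    State apply(State s, const Input& in) { s.x += in.dx; return s; }

    class PredictedClient {
        State predicted_;
        std::deque<Input> pending_;   // inputs not yet acknowledged by the server
    public:
        void on_local_input(const Input& in) {
            pending_.push_back(in);
            predicted_ = apply(predicted_, in);      // no waiting for the round trip
        }
        void on_server_state(const State& authoritative, int last_acked_seq) {
            while (!pending_.empty() && pending_.front().seq <= last_acked_seq)
                pending_.pop_front();                // drop inputs the server has seen
            predicted_ = authoritative;              // rewind to the server's truth...
            for (const auto& in : pending_)
                predicted_ = apply(predicted_, in);  // ...and replay the rest on top
        }
        const State& state() const { return predicted_; }
    };

    int main() {
        PredictedClient c;
        c.on_local_input({1, 1.0f});
        c.on_local_input({2, 1.0f});
        c.on_server_state(State{1.0f}, 1);           // server has applied input #1
        return c.state().x == 2.0f ? 0 : 1;          // input #2 replayed on top
    }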


"Receive full firehose data stream and filter to each observer" is a different function than "receive inputs, simulate, send filtered outputs." Yes, the "send filtered output" bit is shared.

My point is to separate "receive inputs + simulate" from "send filtered outputs". The first one belongs to Game Servers, the second one to Front-End Servers.


At There.com, we actually sent all observers through a view server, including observation for the player themselves. That way, communication was a loop; player sent input to simulation server; simulation server sent game state to view server; view server sent game view to player again. It's architecturally nice (which is why we did it) but I think it ultimately cost more in complexity than it gained us in capability.

Well, this is one more way to skin this cat.


The industry term for this (games and mil/sim) is "interest management" -- who should see what, when.

You're right, I keep forgetting all those terms from all the different companies and all the different industries :-(.


Random balancing works for web sites, and for games that aren't really multiplayer. It quickly breaks down when you actually add the real multiplayer interaction, and is totally gone by the time you also need physical simulation.

Are you arguing against Front-End Servers, against Random Balancing, or against both? And in any case, I don't see how it follows from the previous discussion (and I've seen both of them working for a game with 500,000 simultaneous players and LOTS of interactions, though the "game worlds" were small). Even if you do need geo sharding, random balancing within one geo location will work better than anything else.

EDIT: BTW, I don't want to say that Front-End Servers and Random Balancing are applicable to all the games out there (an MMOFPS being one likely exception; for MMORPGs it depends). However, they are applicable to a lot of games (far beyond FarmVille), that's for sure.
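
For reference, the client-side random balancing itself is trivially simple; a minimal sketch, where the server list and connect_to() are hypothetical placeholders of mine:

    // Client-side random balancing sketch: the client shuffles a known list of
    // front-end servers and connects to the first one that responds.
    #include <algorithm>
    #include <random>
    #include <string>
    #include <vector>

    bool connect_to(const std::string& addr) {
        // Hypothetical placeholder: the real transport layer would go here.
        return !addr.empty();
    }

    bool connect_random_front_end(std::vector<std::string> servers) {
        std::mt19937 rng{std::random_device{}()};
        std::shuffle(servers.begin(), servers.end(), rng); // the "random" part
        for (const auto& addr : servers)
            if (connect_to(addr)) return true;  // on failure, just try the next one
        return false;                           // every front-end server is down
    }

    int main() {
        return connect_random_front_end(
            {"fe1.example.com", "fe2.example.com", "fe3.example.com"}) ? 0 : 1;
    }

Note how failover comes for free here: since Front-End Servers carry no authoritative state, the client can simply reconnect to any other one.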

Actually, the reason for geo sharding is not about simulation, but about latencies and responsiveness.


By "geo sharding," do you mean putting different servers in different locations in the REAL world? That's not what I was talking about.

I was talking about deciding which player talks to which server based on their geographic position in the virtual world.
You absolutely need to shard the simulation based on the virtual world position.
Which, in turn, means that it's more efficient to have view/front-end servers dedicated to particular geographical streams.
If all view/front-end servers have all state for all simulations, you end up limiting the size of your simulation by the capacity of that server, AND you end up with an N-squared networking problem in the data center.
Which, in turn, means that it's not a good idea to load balance through pure random server choice.
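
Rough numbers behind that N-squared point (all figures below are made up for illustration):

    // If every front-end server subscribes to every simulation server,
    // intra-datacenter streams grow as S * F rather than staying linear in F.
    #include <cstdio>

    int main() {
        const long long sim_servers = 100, front_ends = 100;
        const long long mbps_per_stream = 50;  // the full "firehose" of one sim server
        std::printf("%lld streams, %lld Gbit/s inside the data center\n",
                    sim_servers * front_ends,
                    sim_servers * front_ends * mbps_per_stream / 1000);
        // With geographic affinity, each front-end subscribes to only a few
        // simulation servers, and the traffic stays linear in the server count.
    }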

So, I'm not going to agree that random/"state-less" server selection is adequate or recommended for an MMO with continuous in-RAM game state and real-time player-to-player interactions, because it isn't!
enum Bool { True, False, FileNotFound };

