
Is this a viable architecture for an MMOG?

Started by November 24, 2005 04:41 PM
39 comments, last by _winterdyne_ 19 years, 2 months ago
Keeping a single cell on multiple servers means you have to do a network message for same-cell object interaction; that's not great for performance.

We implemented variable-shape cells for There; each "cell" is an arbitrary set of nodes in a modified quadtree (modified to map an entire earth-size sphere, not just a square area). We can re-balance the cells "live" (in real time) but we usually don't need to, because after an initial settling pass, the load pattern repeats fairly predictably over time.
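
Roughly, the data structure can be as simple as this (purely illustrative C++, nothing to do with There's actual code; the names and the load metric are made up):

#include <algorithm>
#include <set>
#include <vector>

// A "cell" is an arbitrary set of spatial-tree node ids plus a load figure,
// so re-balancing is just moving node ids from a heavy cell to a light one.
struct Cell {
    std::set<int> nodeIds;   // leaves of the tree currently assigned to this cell
    float load = 0.0f;       // measured load (entities, traffic, CPU time...)
};

// One naive re-balance step: move a node from the heaviest cell to the lightest.
// Call repeatedly (live, or after a settling pass) until loads are roughly even.
void rebalanceStep(std::vector<Cell>& cells, const std::vector<float>& nodeLoad)
{
    if (cells.size() < 2) return;
    auto byLoad = [](const Cell& a, const Cell& b) { return a.load < b.load; };
    auto heavy = std::max_element(cells.begin(), cells.end(), byLoad);
    auto light = std::min_element(cells.begin(), cells.end(), byLoad);
    if (heavy == light || heavy->nodeIds.empty()) return;

    int node = *heavy->nodeIds.begin();   // a real system would pick by locality
    heavy->nodeIds.erase(node);
    light->nodeIds.insert(node);
    heavy->load -= nodeLoad[node];
    light->load += nodeLoad[node];
}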
enum Bool { True, False, FileNotFound };
True visibility calculations aren't required for a client - besides, you need to send traffic from non-visible cells (for example a conversation behind the player).

Your only concern on the server end is what is 'relevant' to the client - typically this is a simple radius affair. You can use spatial partitioning methods to aid in culling for certain event types - but it's a lot less headache to design your world with an effective PVS system - for each segment, either define a procedurally generated PVS or manually reduce the set for certain layouts.
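
Something like this is usually enough as the core test (illustrative C++, names made up; a per-cell PVS lookup can then trim the result further):

// Server-side 'is this entity relevant to this client?' - a squared-distance
// check against the relevance radius, no square root needed.
struct Position { float x, y; };

bool isRelevant(const Position& client, const Position& entity, float radius)
{
    float dx = entity.x - client.x;
    float dy = entity.y - client.y;
    return dx * dx + dy * dy <= radius * radius;
}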

You CANNOT allow the client under any circumstance to dictate to the server what information it should receive. This is asking for trouble. Say you have this layout:

a1 a2 a3 a4 a5 a6
b1 b2 b3 b4 b5 b6
c1 c2 c3 c4 c5 c6
d1 d2 d3 d4 d5 d6
e1 e2 e3 e4 e5 e6
f1 f2 f3 f4 f5 f6

A client has a visual range of 1 cell radius and is in b5. It SHOULD receive a4-6, b4-6, and c4-6.
Your client could request f1, d2, a1 and anything else it feels like.
You still need to perform the checks on the server anyway, so you might as well keep the decision there.

With a regular cell based map, your checks are really trivial - you simply take the cells at the appropriate coordinates. It gets a bit trickier when you have an irregular, hierarchical structure like I do - which is why I have all that bizarre neighbourhood stuff going on.
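
For the regular grid above the check really is trivial - something like this (illustrative C++; b5 would be x=4, y=1 on a 6x6 grid):

#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// The server decides which cells a client may receive: the 3x3 block around
// its own cell (visual range of 1 cell), clamped to the edges of the grid.
std::vector<std::pair<int, int>> relevantCells(int cx, int cy,
                                               int range, int gridSize)
{
    std::vector<std::pair<int, int>> cells;
    for (int y = std::max(0, cy - range); y <= std::min(gridSize - 1, cy + range); ++y)
        for (int x = std::max(0, cx - range); x <= std::min(gridSize - 1, cx + range); ++x)
            cells.emplace_back(x, y);
    return cells;
}

// Any client request for a cell outside that block (f1, d2, a1...) just fails.
bool serverAllows(int clientX, int clientY, int reqX, int reqY, int range)
{
    return std::abs(reqX - clientX) <= range && std::abs(reqY - clientY) <= range;
}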

If you want to transfer cells from one server's domain to another (don't use the term 'zone' for the area a server handles if that area can be discontinuous), you will need a means of distributing that information to all other servers in the cluster - a master server is the simplest way of doing this, and of handling messages that cross domain boundaries.
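
A minimal sketch of what the master needs to keep for that (illustrative only - the field names are my own):

#include <map>

// The master holds the authoritative cell -> server mapping; whenever a cell
// migrates it broadcasts the change so every server can route messages that
// cross domain boundaries.
struct CellHandoff { int cellId, fromServer, toServer; };

struct MasterDirectory {
    std::map<int, int> cellOwner;   // cellId -> serverId

    // Record a migration and return the notification to broadcast to the cluster.
    CellHandoff transferCell(int cellId, int toServer) {
        CellHandoff h{cellId, cellOwner[cellId], toServer};
        cellOwner[cellId] = toServer;
        return h;
    }

    int ownerOf(int cellId) const {
        auto it = cellOwner.find(cellId);
        return it == cellOwner.end() ? -1 : it->second;
    }
};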

Steadtler is right to encourage you to think about multi-server scaling - sure you can start your game on one server, and possibly get the content together to support say 300 users (still quite a big game!). When those 300 are paying you subs, and their mates are trying to join, you'll want to be able to expand the game with minimal down-time. If you have to rewrite your entire codebase to do so, you run the risk of losing customers. When you're paying a monthly fee for colocated hosting, losing customers sucks.

Winterdyne Solutions Ltd is recruiting - this thread for details!
Because your cells are in a grid topology, you can coarsely identify which should be visible based on the frustum. Within these, if you wish, you can send specific 'make visible / make invisible' messages based on fine frustum checks. But in order to load assets quickly on your client, you should send the descriptors (defining what assets should be loaded for what entity) BEFORE you actually need them - I'd use a radius larger than visual range for this, as your client can turn around quite rapidly.
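
In practice that just means keeping two radii per client - something like this (illustrative; the numbers are placeholders):

// Descriptors stream at a wider radius than live state, so the client has the
// assets loaded before an entity actually becomes visible.
struct RelevanceRadii {
    float visual     = 100.0f;   // full position/state updates
    float descriptor = 150.0f;   // 'what assets does this entity need' messages
};

enum class UpdateTier { None, DescriptorOnly, Full };

UpdateTier tierFor(float distSquared, const RelevanceRadii& r)
{
    if (distSquared <= r.visual * r.visual)         return UpdateTier::Full;
    if (distSquared <= r.descriptor * r.descriptor) return UpdateTier::DescriptorOnly;
    return UpdateTier::None;
}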

Given today's hardware, your main bottleneck is going to be the network, especially on a single server / single bandwidth allocation setup (unless of course you're running some insane physics simulation on a lot of entities). It's therefore in your interest to cull as much outgoing data as possible.
You will of course need to sort data - split chat into different channels, for example, and allow clients to subscribe to one or more. Perhaps you have clients that choose to 'ignore shouts' - that filtering should be done at the server end, not at the client.

Your cell based topology can help here - implement a channel for each major event type for each cell - and subscribe or unsubscribe clients to potentially relevant cells as they move around.
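
The bookkeeping for that can stay very simple (illustrative C++; names are made up):

#include <map>
#include <set>
#include <utility>

// One channel per (cell, event type); clients are subscribed and unsubscribed
// as they move between cells, and events are only sent to subscribers.
enum class EventType { Chat, Movement, Combat };

struct ChannelRouter {
    std::map<std::pair<int, EventType>, std::set<int>> subscribers;

    void subscribe(int clientId, int cellId, EventType type) {
        subscribers[{cellId, type}].insert(clientId);
    }
    void unsubscribe(int clientId, int cellId, EventType type) {
        subscribers[{cellId, type}].erase(clientId);
    }
    const std::set<int>& recipients(int cellId, EventType type) {
        return subscribers[{cellId, type}];
    }
};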

Winterdyne Solutions Ltd is recruiting - this thread for details!
Quote:
you can coarsely identify which should be visible based on the frustum


The user can turn around much faster than the network can keep up. I would strongly recommend against trying to do any kind of aggressive view frustum cull in the server-side filter, because it will lead to object popping when the user quickly turns around.
enum Bool { True, False, FileNotFound };
Hence me advocating the radial method.
Winterdyne Solutions Ltd is recruiting - this thread for details!
I think that the radius method is a great idea. As for which server handles which cell, here is what I wrote in another post.

Quote:

Why not just have one master machine and a bunch of slave machines? When you try to connect, you connect to the master first to find the slave with the fewest users. The slaves are the machines that actually do all your packet forwarding. On each slave machine you have a list of zones, and in each of those zone lists you have the people who are currently in that zone. So let's say you're in zone A6: you send a chat message to the slave, it looks at list A6 and sends the message to everyone in A6, and also sends it out to the other slave machines, which in turn send the message to everyone in their A6 lists. If you get too many people on one slave, you can have users transfer to a less busy slave to better balance the load. Any problems with that scenario? Blizzard's Battle.net servers work in the same manner, except they balance the load of chat channels, not game zones.


Expanding on that, the backend network traffic should be easily manageable. Gigabit networking comes standard on all boards now, so that shouldn't be an issue.
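
A rough sketch of the forwarding in that scheme (my own illustration of the quoted idea, not anyone's actual code):

#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

// A slave delivers a chat message to its own clients in the zone, then relays
// it once to each peer slave, which delivers to its own zone list in turn.
struct Slave {
    int id;
    std::map<std::string, std::set<int>> zoneClients;  // zone name -> client ids
    std::vector<Slave*> peers;                          // the other slaves

    void deliverLocal(const std::string& zone, const std::string& msg) {
        for (int client : zoneClients[zone])
            std::printf("slave %d -> client %d: %s\n", id, client, msg.c_str());
    }

    // Entry point when one of this slave's clients chats in `zone`.
    void handleChat(const std::string& zone, const std::string& msg) {
        deliverLocal(zone, msg);
        for (Slave* peer : peers)
            peer->deliverLocal(zone, msg);
    }
};
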
Making messages view-dependent like this is a really bad idea - it's doable, but the problem (as hplus has mentioned) is that a player can typically turn a lot faster than the network can keep up.

What you'll end up having to do is limit the player's orientation to what the server says it is, which will give you very laggy control indeed.

Using PVS (potential visibility sets) on your topology is the only way you can really control visibility from the server - players don't swap PVS as often as they change direction.

The only reason you'd need to provide explicit visibility information is to prevent 'radar' or 'x-ray' exploits. Whilst the method you propose does severely curtail these exploits, it doesn't prevent occlusion-avoiding exploits. The benefit of cutting the radar exploit doesn't justify the cost of the required data exchange, compared with simply limiting such exploits to a potentially very localised area through the use of a PVS technique.

Winterdyne Solutions Ltd is recruiting - this thread for details!
The principle's quite simple. For each cell in your topology, maintain a list of other cells which may be visible. For example if a particular cell is bordered by impassable mountains on one side, the cell on the other side can be marked as invisible, and you can safely ignore any visual event from that cell (because there's no way a client in the first cell can see it). Having a fixed visual range provides the bulk of the PVS data (cells out of range are invisible), but further refining the set helps to cull traffic, which is the ultimate objective of a decent MMO topology.

These should really be defined at design time (implement functionality for specifying PVS in your map/world editor). You could also determine PVS programmatically at load time, but because your world topology is static, you may as well do it at design time.

DPVS (Dynamic Potential Visibility Sets) are used in rendering to rapidly cull things that are hidden by an occluder - because you can't really track the view frustum effectively on the server you won't need these on the server side, BUT you can use the fixed PVS from the server topology in conjunction with these as part of your scenegraph on the client.
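
Concretely, the server-side data can be as simple as a per-cell set built in the editor (illustrative C++):

#include <set>
#include <vector>

// Design-time PVS: for each cell, the set of cells a client standing in it
// could possibly see. Visual events from cells outside the set are dropped
// before they ever hit the wire.
struct WorldPVS {
    std::vector<std::set<int>> visibleFrom;   // visibleFrom[cell] = that cell's PVS

    bool canSee(int observerCell, int eventCell) const {
        return visibleFrom[observerCell].count(eventCell) != 0;
    }
};
// A cell walled off by impassable mountains is simply left out of its
// neighbour's set in the map editor, so visual events never cross.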
Winterdyne Solutions Ltd is recruiting - this thread for details!
Yes, but typically a portal engine is used to cull polys based on a frustum defined by the portal itself. A server isn't so concerned about poly culling, and is better off using a much cruder model of visibility.

Winterdyne Solutions Ltd is recruiting - this thread for details!
Quote:
Original post by _winterdyne_
The principle's quite simple. For each cell in your topology, maintain a list of other cells which may be visible. For example if a particular cell is bordered by impassable mountains on one side, the cell on the other side can be marked as invisible, and you can safely ignore any visual event from that cell (because there's no way a client in the first cell can see it). Having a fixed visual range provides the bulk of the PVS data (cells out of range are invisible), but further refining the set helps to cull traffic, which is the ultimate objective of a decent MMO topology.

These should really be defined at design time (implement functionality for specifying PVS in your map/world editor). You could also determine PVS programmatically at load time, but because your world topology is static, you may as well do it at design time.

DPVS (Dynamic Potential Visibility Sets) are used in rendering to rapidly cull things that are hidden by an occluder - because you can't really track the view frustum effectively on the server you won't need these on the server side, BUT you can use the fixed PVS from the server topology in conjunction with these as part of your scenegraph on the client.

I'm the OP.
I was thinking about having large cells, maybe 3 or 4 visible at most at any given time, but then a PVS system will not be very efficient, I think - am I right? Then I should use much smaller cells...

