Network Server Culling?

Started by October 18, 2013 01:24 PM
8 comments, last by hplus0603 11 years, 1 month ago

Soo... how would an MMO be feasible?

For example, a game like WoW, where thousands of users can connect to a "single" global virtual world server (likely a load-balanced, distributed cluster)... I mean, it's not as if all of those thousands of player positions are streamed to all players?

Regardless of the server's bandwidth capacity: say you have 1000 players and 30 bytes per position packet at 30 Hz; that's 1000 * 30 * 30 = 900,000 bytes, almost 1 MB/sec of download per client, which for many clients may be too much?

What are good practices?

One I see, which is somewhat convoluted and complex, is to do server-side "rendering" of the scene and players for occlusion-query testing, and only send player positions to players who are visible to each other... but that would probably be too huge a cost, because you'd need to render from every player's frustum and do occlusion queries for all players * all players :x

I guess maybe that's where smart load balancing can come into play? Heh?

I'm really looking for the "utmost critical necessities" for an MMO to be feasible (network-wise; client rendering performance aside)... Not necessarily super-high-end AAA+ techniques, though those wouldn't hurt either.

Rendering for occlusion tests is possibly overkill (and you might have to do it software-only if the servers don't have GPUs) but you could do coarse frustum culling by having the clients stream their camera position and orientation to the server, and checking for moving objects in the frustum.


The word you want to look for is "interest management."

In general, the server will limit how many entities you can see, and will often send less detailed information (longer time between ticks; coarser coordinates, etc) for entities that are "less interesting."

The interest management algorithm for most MMOs typically looks something like:

- if it's my character, or someone in my group, or someone I have targeted or who is targeting me, interest is "full"

- if it's someone targeted by or targeting some group member, interest is "high"

- allocate interest on a declining scale based on distance to player (further away is less interesting) until the pipe gets full

This assumes that you can make a good estimation of how much bandwidth is used by each entity, and that you can set a reasonable bandwidth limit that will not overwhelm the player.
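
To make that concrete, here is a rough C++ sketch of the allocation loop. Everything in it (the Entity fields, the tier scores, the per-entity byte estimates) is invented for illustration, and the target/group rules above are collapsed into two flags:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical entity record; field names are made up for this sketch.
    struct Entity {
        int   id;
        float x, y;          // world position
        bool  fullTier;      // me, my group, targeting links -> "full" interest
        bool  highTier;      // group members' targets -> "high" interest
        int   bytesPerTick;  // estimated update cost for this entity
    };

    // Pick which entities get updates this tick, highest interest first,
    // until the per-client bandwidth budget (the "pipe") is spent.
    std::vector<int> selectUpdates(const Entity& me,
                                   std::vector<Entity> others,
                                   int budgetBytes)
    {
        auto score = [&](const Entity& e) -> float {
            if (e.fullTier) return 2e9f;
            if (e.highTier) return 1e9f;
            float dx = e.x - me.x, dy = e.y - me.y;
            return 1.0f / (1.0f + std::sqrt(dx * dx + dy * dy)); // declines with distance
        };
        std::sort(others.begin(), others.end(),
                  [&](const Entity& a, const Entity& b) { return score(a) > score(b); });

        std::vector<int> chosen;
        for (const Entity& e : others) {
            if (budgetBytes < e.bytesPerTick) break; // the pipe is full
            budgetBytes -= e.bytesPerTick;
            chosen.push_back(e.id);
        }
        return chosen;
    }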

enum Bool { True, False, FileNotFound };

> Rendering for occlusion tests is possibly overkill (and you might have to do it software-only if the servers don't have GPUs) but you could do coarse frustum culling by having the clients stream their camera position and orientation to the server, and checking for moving objects in the frustum.

Not overkill, but a terrible idea in general. Players outside of your field of view can still affect you; for instance, you need to be able to collide with a player standing behind you if you step backwards. And even ignoring direct interaction with players outside your FoV, you don't want to have to wait a full round-trip time to get updates about other people every time your view changes slightly.

Another term to look for, beyond the interest system, is "awareness" systems (aka spatial awareness, range pairing, etc). In general this is the top level of most server code, sitting above the interest system, which generates the pairs of "potential" interest between different clients. Basically these systems are your first level of simply saying something is within a given range of something else, both for clients and NPCs. With that rough information, "then" you can compute the interest systems which decide if you send data, what quality the data is, etc. The downside is that solutions for awareness range from naive to really complicated. Each solution has its own downfalls and benefits, though even the naive solution is sometimes completely fine.

Writing a naive solution to start with would give you a good idea of the problems involved. Usually that can be done in about 100 lines of code or less, and it would likely be good enough to handle 50-100 players, though the CPU would start getting swamped in that range. Figuring out how to remove the O( n^2 ) bits of the code is where you have to make the decision of just how much you intend to scale the server. Simple position hashes (i.e. grids) can chop things down very easily, but they usually have the downfall that large groups of players (such as raid groups) fall back to O( n^2 ) in more confined settings like dungeons and such. Octrees/quadtrees with variable cell sizes are more complicated but deal better with the edge cases of raid groups. Loose kd-trees work also, but every little while they get so dirty that you have to rebuild from scratch, which causes occasional CPU spikes that I dislike. Sweep and prune is probably the most complicated and has a bit more CPU overhead in general, yet it has the best scaling and the steadiest CPU cost of all the solutions; implementing it is *not* easy.
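
For reference, the simple grid option really is tiny. A minimal C++ sketch (the cell size, names, and flat array of positions are all just choices made for illustration):

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Uniform-grid spatial hash: bucket entities by cell so a range query
    // only touches the cells overlapping the query circle, instead of
    // testing all N entities the way the naive O( n^2 ) pairing does.
    struct SpatialHash {
        float cellSize;
        std::unordered_map<int64_t, std::vector<int>> cells;  // cell key -> entity ids
        std::vector<std::pair<float, float>> positions;       // entity id -> (x, y)

        explicit SpatialHash(float cell) : cellSize(cell) {}

        static int64_t key(int cx, int cy) {
            // cx in the high 32 bits, cy in the low 32 -> unique key per cell
            return (static_cast<int64_t>(cx) << 32) | static_cast<uint32_t>(cy);
        }

        int cellOf(float v) const { return static_cast<int>(std::floor(v / cellSize)); }

        void insert(int id, float x, float y) {
            if (static_cast<size_t>(id) >= positions.size()) positions.resize(id + 1);
            positions[id] = {x, y};
            cells[key(cellOf(x), cellOf(y))].push_back(id);
        }

        // All entity ids within `radius` of (x, y).
        std::vector<int> query(float x, float y, float radius) const {
            std::vector<int> result;
            for (int cx = cellOf(x - radius); cx <= cellOf(x + radius); ++cx)
                for (int cy = cellOf(y - radius); cy <= cellOf(y + radius); ++cy) {
                    auto it = cells.find(key(cx, cy));
                    if (it == cells.end()) continue;
                    for (int id : it->second) {
                        float dx = positions[id].first - x;
                        float dy = positions[id].second - y;
                        if (dx * dx + dy * dy <= radius * radius) result.push_back(id);
                    }
                }
            return result;
        }
    };

Note this sketch has exactly the downfall described above: if a raid group packs into one or two cells, a query over that area still degenerates into testing everyone in them.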

John Ratcliff had a good article with several variations but I can no longer find it with brief Google-fu. Good luck...

> Figuring out how to remove the O( n^2 ) bits of the code is where you have to make the decision of just how much you intend to scale the server

You may be able to cut down the common case, but the worst case is still that all players want to be in the same spot (market/raid/throne room/whatever.) At that point, you still get N-squared interactions.
Preventing that situation is a game design challenge, not a technical challenge.
enum Bool { True, False, FileNotFound };

> Not overkill, but a terrible idea in general. Players outside of your field of view can still affect you; for instance, you need to be able to collide with a player standing behind you if you step backwards. And even ignoring direct interaction with players outside your FoV, you don't want to have to wait a full round-trip time to get updates about other people every time your view changes slightly.

Yes, a simple binary toggle (do not send anything behind the player's back) will result in a substandard experience, but what you can do, in addition to just distance-metric based interest management, is to further throttle down the update rate of objects behind the back. You're right about the RTT; the server would need to use a larger frustum than what the client uses to render (based on an estimate of how fast the client might be turning the camera, with RTT factored in). Finally, what you can get away with also depends on the camera type: is it first person? Third person? Diablo-like?
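
A rough sketch of what that throttle could look like in C++ (the turn-rate and FOV constants are invented, and a real server would combine this with the distance metric rather than replace it):

    #include <algorithm>
    #include <cmath>

    // How often to send updates about an entity, based on where it sits
    // relative to the player's view direction. The server-side cone is
    // widened by how far the camera could turn within one round trip.
    // All constants here are invented; tune (or replace) for your game.
    float updateIntervalSeconds(float viewX, float viewY,  // view direction, normalized
                                float toX, float toY,      // direction to entity, normalized
                                float rttSeconds)
    {
        const float baseInterval = 1.0f / 30.0f;  // full-rate updates at 30 Hz
        const float turnRate     = 6.0f;          // assumed max camera turn speed, rad/s
        const float halfFov      = 0.9f;          // client's rendering half-FOV (~52 deg)

        // Widen the acceptance cone to cover camera movement during one RTT.
        float serverHalfFov = halfFov + turnRate * rttSeconds;

        float cosAngle = viewX * toX + viewY * toY;
        float angle = std::acos(std::clamp(cosAngle, -1.0f, 1.0f));

        return (angle <= serverHalfFov) ? baseInterval        // in (widened) view: full rate
                                        : baseInterval * 4.0f; // behind the back: throttled
    }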

> Figuring out how to remove the O( n^2 ) bits of the code is where you have to make the decision of just how much you intend to scale the server
>
> You may be able to cut down the common case, but the worst case is still that all players want to be in the same spot (market/raid/throne room/whatever.) At that point, you still get N-squared interactions.
> Preventing that situation is a game design challenge, not a technical challenge.


True, if everyone in the world stands on one spot there really is little that can be done other than overall preventative measures such as "zone full, bugger off". :) On the other hand, with octrees, sweep and prune, and probably a couple of others, you can detect "too many" and just mark an entire range as ignoring awareness and send everything round-robin, as sketched below. I.e. you might not get an update about a player for more than a second, but the server is hosed anyway, so don't expect it to play correctly. :)
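
A sketch of that fallback, with invented names and thresholds:

    #include <cstddef>
    #include <vector>

    // Overload fallback: when a cell holds too many entities to pair-test,
    // skip awareness entirely and cycle through the occupants, broadcasting
    // a fixed-size slice each tick. Every entity still gets updated
    // eventually, just less often the fuller the cell is.
    struct CrowdedCell {
        std::vector<int> occupants;  // entity ids currently in this cell
        size_t cursor = 0;           // round-robin position

        static constexpr size_t kOverloadThreshold = 500;  // invented
        static constexpr size_t kUpdatesPerTick    = 50;   // invented

        bool overloaded() const { return occupants.size() > kOverloadThreshold; }

        std::vector<int> nextBatch() {
            std::vector<int> batch;
            if (occupants.empty()) return batch;
            cursor %= occupants.size();  // in case occupants shrank since last tick
            for (size_t i = 0; i < kUpdatesPerTick && i < occupants.size(); ++i) {
                batch.push_back(occupants[cursor]);
                cursor = (cursor + 1) % occupants.size();
            }
            return batch;
        }
    };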

Server side, it's easy. On my project I've had upwards of 100,000 concurrent players/NPCs in the same world, all tracking objects near them using a spatial hash on a partitioned 2D grid. That was on a 3-server cluster. The spatial hashing stuff is open source; you can see the core of it here: https://github.com/chrisochs/game_machine/blob/master/java/src/main/java/com/game_machine/core/Grid.java.

You just run out of client bandwidth at some point. The only optimizations I did were sending two coordinates and letting the client figure out the third, and sending integers instead of floats, which is good enough most of the time. And bit-packed messaging using protocol buffers.
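
For illustration, that packing might look something like this in C++ (the 1/100-unit scale and dropping the y coordinate specifically are assumptions; the client would re-derive height from the terrain):

    #include <cstdint>

    // Quantized position: x/z as fixed-point integers, y dropped entirely
    // and re-derived client-side (e.g. from terrain height). 12 bytes of
    // floats become 8 bytes before any bit packing. The scale is invented.
    struct PackedPosition {
        int32_t x;  // world x * 100 -> ~0.01 unit precision
        int32_t z;  // world z * 100
    };

    inline PackedPosition pack(float x, float z) {
        return { static_cast<int32_t>(x * 100.0f),
                 static_cast<int32_t>(z * 100.0f) };
    }

    inline void unpack(const PackedPosition& p, float& x, float& z) {
        x = p.x / 100.0f;
        z = p.z / 100.0f;
    }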

On a local LAN I crash my Unity client way before I run out of client bandwidth, at around 400 players within view. Over the internet it's a different story; I had to cut back on the number of updates per second quite a bit.

Plus, I was pretty much just doing location tracking plus simple auto-attacks in my tests. In a real game you have a bunch of other data flying around, and at that point you have to start prioritizing stuff. For example, in GW2 they start to lock out abilities except for auto-attack in big battles; they just stop sending those commands to the server. EVE Online just slows down the game clock. There isn't any perfect solution. EVE and GW2 do the best job with large virtual worlds that I've seen. A lot of other games just have terrible server architectures, and it's the server that can't keep up (which is ridiculous in this day and age).

Chris

> what you can do, in addition to just distance-metric based interest management, is to further throttle down the update rate of objects behind the back. You're right about the RTT

I actually tried this for There.com (mainly open-world/outdoor terrain) before settling on a simple distance-based metric. The player can whip the mouse around much faster than view-vector-based interest management can react, and the experience was always janky. The solidity of objects always being there and up-to-date when they're near you was worth more than the marginally increased view fidelity when just looking forward.

enum Bool { True, False, FileNotFound };
