
Game state synchronization techniques

Started by July 26, 2014 12:05 AM
7 comments, last by hplus0603 10 years, 6 months ago

I've been reading through a couple open source game projects: RakNet, Cube, and The Mana World.

These projects use different methods to synchronize the game state, but I feel like all of them are subpar.

Here's my very rough summary of each (these may be a little off because the projects' sources are not made to be readable):

* RakNet uses delta compressed reliable packets in its ReplicaManager3 to synchronize variables (uses UDP instead of TCP... why?)

* Cube seems to use reliable packets for everything except movement (positions of every movable object sent every tick in an unreliable packet)

* The Mana World also uses reliable packets similar to RakNet (though they switched to ENet from their working TCP implementation... why?)

In this article, the snapshot system of Quake 3 is described.

I really like the idea of delta compression with unreliable packets, but storing snapshots per-client makes scaling up a pain on RAM.

Instead, I currently have the server storing the last one second of positions from each tick.

The server rewinds to the last tick the client recognized in an ACK and generates a delta snapshot from that state to the current.

This is convenient because I plan on implementing lag compensation later anyways (which requires storing the last one second).
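Roughly, it looks like the sketch below. The names (WorldSnapshot, buildDelta, TICK_HISTORY) are made up for illustration, and the "changed" test is simplified to a byte compare, but it shows the rewind-and-delta idea:

#include <array>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct EntityState { float position[3]; float angles[3]; };
using WorldSnapshot = std::unordered_map<uint16_t, EntityState>; // id -> state

constexpr uint32_t TICK_HISTORY = 66; // ~1 second of ticks at 66 Hz

// One ring buffer of world snapshots, shared by all clients.
std::array<WorldSnapshot, TICK_HISTORY> history;

struct DeltaEntry { uint16_t id; EntityState state; };

// Rewind to the last tick the client ACK'd and diff it against the current tick.
std::vector<DeltaEntry> buildDelta(uint32_t ackedTick, uint32_t currentTick)
{
    const WorldSnapshot& base = history[ackedTick % TICK_HISTORY];
    const WorldSnapshot& now  = history[currentTick % TICK_HISTORY];

    std::vector<DeltaEntry> delta;
    for (const auto& [id, state] : now)
    {
        auto it = base.find(id);
        bool changed = (it == base.end()) ||
                       std::memcmp(&it->second, &state, sizeof(EntityState)) != 0;
        if (changed)
            delta.push_back({ id, state }); // only changed entities go on the wire
    }
    return delta;
}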

However, I feel like I've not gotten a thorough idea of all the various game state synchronization techniques,

and finding more articles on the subject is fairly difficult for me. What are some other ways fast-paced games synchronize the world?

Edit: About the UDP instead of TCP stuff - I'm under the impression that, if one is going to use only reliable packets on a single stream, TCP is much more optimized.

If one requires reliable, in-order delivery of all packets, then one might as well use TCP.

If that's what you do, though, you should know that the network subsystem will delay any newer packets that come in while it's waiting for older packets. For twitch-sensitive FPS games, that is generally not what you want. Hence, why most games end up with UDP for movement.

storing snapshots per-client makes scaling up a pain on RAM


I don't quite see it. Could you calculate out the RAM cost for illustration? (How big is a snapshot? How fast is your network tick rate? How many clients are you talking about?)
enum Bool { True, False, FileNotFound };

I believe that unless you have a large amount of changing data on the objects (in which case you'd likely have a bandwidth problem), the RAM needed for the per-client snapshots is not going to become a serious problem for scalability at first.

Rather, the problem is the CPU processing needed to determine for each client, what data is relevant and what needs to be sent. Assuming that it's typically the other clients' player character state that needs to be examined for changes and to be sent to all interested parties, this becomes an O(n^2) problem for n clients.

As long as the client count is that of typical FPS multiplayer games (something like 32-64 max), even that isn't likely going to be a serious problem. But for scaling beyond that it'd be ideal to be able to just blindly send the same data to several clients, without doing any per-client rewinds or inspection of "what this client needs to know".
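To make that concrete, here's a rough sketch of the per-client pass (the types and function names are hypothetical); the nested loop is where the O(n^2) comes from:

#include <cstdint>
#include <vector>

struct Entity { uint16_t id; float position[3]; uint32_t lastChangedTick; };
struct Client { uint32_t lastAckedTick; /* connection, interest region, ... */ };

bool isRelevantTo(const Client& viewer, const Entity& e);  // game-specific interest test
void queueUpdate(Client& viewer, const Entity& e);         // append to this client's packet

// Hypothetical per-client relevance/delta pass: n clients each scanning ~n entities.
void sendUpdates(std::vector<Client>& clients, const std::vector<Entity>& entities)
{
    for (Client& viewer : clients)                 // outer loop over n clients
    {
        for (const Entity& e : entities)           // inner loop over ~n entities
        {
            if (!isRelevantTo(viewer, e))          // visibility / interest check
                continue;
            if (e.lastChangedTick > viewer.lastAckedTick)
                queueUpdate(viewer, e);            // per-client packet assembly
        }
    }
}

Blindly broadcasting instead means building one update once and sending the same bytes to every interested client, which removes the per-client inner work.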

Trivia: "To keep so many gamestate for each players consumes a lot of memory: 8 MB for 4 players according to my measurements." - http://fabiensanglard.net/quake3/network.php

A few things I note here:

* Quake3 does not have a system of relevant sets

* I expect to network many dynamic entities (Quake3 syncs only players and power-ups)

* My entities have less state than Quake3's (position and angles only currently; health later for more prediction)

* Currently, I run the server at 66 tick (CS:GO), but when testing on 30 (Unreal), I seem to get decent results

- Quake3's 20 tick doesn't live up to current games' standards in feel (shots have to be led too much, at least in my engine [no extrapolation])

* Player states shouldn't take up the majority of my memory usage

* Also, most importantly, while not a strict requirement of my project, I'd like the game to be structured for battles as large as possible (4000+ is the dream)

So, more-or-less, I think current hardware could handle this based on this qualitative junk,

but I see no advantage to it over my current implementation of server snapshots.

But for scaling beyond that it'd be ideal to be able to just blindly send the same data to several clients, without doing any per-client rewinds or inspection of "what this client needs to know".

I've already implemented what Unity calls relevant sets to remedy this partly. The world is quite big and players generally are not in one another's potentially visible set.

However, you bring up an important point that sometimes it may be better not to use the snapshot system to delta compress objects - and to instead blindly send them.

I think there exists some type of optimization to help alleviate this, but that would get really specific to the game. I'm just looking at different techniques currently.

Thanks for the responses! I'm sort of pleased if nobody has a better answer than what I currently use, but at the same time, snapshots feel sort of sloppy.

Player states shouldn't take up the majority of my memory usage


Again, could you do the math for this? To me, even two seconds of 60 Hz state for 1000 players doesn't seem like a big deal.
What are your restrictions, and how big is your state?
enum Bool { True, False, FileNotFound };

[Dynamic Object] (26 bytes total)

* Id (2 bytes)

* Position (12 bytes - can be lowered by not using floats)

* Angles (12 bytes - can be lowered by not using floats)

[Static Object] (2 bytes total)

* Id (2 bytes)
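In struct form (hypothetical names, packed to match the sizes above), that's roughly:

#include <cstdint>

#pragma pack(push, 1)
struct DynamicObjectState          // 26 bytes on the wire
{
    uint16_t id;                   // 2 bytes
    float    position[3];          // 12 bytes
    float    angles[3];            // 12 bytes
};
struct StaticObjectState           // 2 bytes on the wire
{
    uint16_t id;
};
#pragma pack(pop)

static_assert(sizeof(DynamicObjectState) == 26, "unexpected wire size");
static_assert(sizeof(StaticObjectState)  == 2,  "unexpected wire size");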

Each snapshot contains all the static objects in the scene and all the dynamic objects in the scene that are in the client's range.

I feel like a safe estimate is that there can be 50 dynamic objects and 50 static objects in a player's view on average.

If I have 1000 players, each storing 60 snapshots with 50 dynamic objects and 50 static objects (not including container pointers and such),

I get that ~80 megabytes are used by the per-client snapshot system.

A worst-case scenario is that 1000 players are actively fighting each other.

Each player then sees maybe 1050 dynamic objects and 50 static objects.

I get that they will take ~1.5 gigabytes. That's a pretty generous estimate too.
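For reference, the arithmetic behind those two estimates:

Average case: 1000 players * 60 snapshots * (50 * 26 B + 50 * 2 B) = 1000 * 60 * 1,400 B = 84,000,000 B (~80 MB)
Worst case: 1000 players * 60 snapshots * (1050 * 26 B + 50 * 2 B) = 1000 * 60 * 27,400 B = 1,644,000,000 B (~1.5 GB)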

The point is that storing one set of snapshots seems more effective, but maybe I'll benchmark it one day instead of guessing at it.


Why store a full second's worth (60 frames)?

Why not the nearest 6th of a second (store 6 snapshots back), since you're going to lag compensate anyway and it's all transitory data you don't need perfect catch-up animations for?

With this much shorter set of snapshot data you might be able to make your per-client store static (ditch the container overhead and use pointer math on the server). Snapshot variable sizing just requires circular indexing maths...

Depending on how often this ACK-fail retransmit happens, couldn't you also have the per-client snapshot on the server be pointers/indices into one full (common) set of snapshot data (stored per actual object) to minimize the server memory?

If the failures and retransmits are chronic, then all this overhead doesn't gain much over just forcing through the current data state.

-

It also might be good to compress the object angles from float to Int16 to cut down your primary update data (65000+ angle increments should be more than sufficient).
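Something like this rough sketch (function names made up):

#include <cmath>
#include <cstdint>

// Quantize an angle in degrees to 16 bits and back.
// 65536 steps over 360 degrees gives ~0.0055 degree resolution.
inline uint16_t packAngle(float degrees)
{
    float wrapped = std::fmod(degrees, 360.0f);
    if (wrapped < 0.0f) wrapped += 360.0f;                     // keep it in [0, 360)
    return static_cast<uint16_t>(wrapped / 360.0f * 65536.0f);
}

inline float unpackAngle(uint16_t packed)
{
    return packed / 65536.0f * 360.0f;
}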

Ratings are Opinion, not Fact

... couldn't you also have the per-client snapshot on the server be pointers/indices into one full (common) set of snapshot data (stored per actual object) to minimize the server memory?

This is sort of what I do. The client has an integer reference to the last server state it ACK'd, and then a delta snapshot is generated from that server snapshot to the current.

You don't need to store snapshots on a per-player basis. The state of object X at time Y is the same, no matter who is viewing that object.

Thus, with 1000 players and 1000 static objects, 32 bytes per snapshot, and 120 snapshots total (for 2 seconds at 60 Hz), I get that to less than 8 MB.
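As a sketch (names made up), the storage could look something like this: one history ring per object, and per client only the last-ACK'd tick:

#include <array>
#include <cstdint>
#include <vector>

constexpr uint32_t HISTORY_TICKS = 120;               // 2 seconds at 60 Hz

struct ObjectSnapshot { uint8_t state[32]; };          // whatever 32 bytes you track per object

struct ObjectHistory
{
    std::array<ObjectSnapshot, HISTORY_TICKS> ring;    // indexed by tick % HISTORY_TICKS
};

std::vector<ObjectHistory> objects;                    // one entry per object, shared by every viewer

struct ClientView
{
    uint32_t lastAckedTick;                            // the only per-client state the delta needs
};

// 2000 objects * 32 bytes * 120 ticks = 7,680,000 bytes, independent of client count.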

Additionally, you only need to snapshot the states that are actually sent as network ticks, and 60 Hz network ticks is not a good idea for games with 1000 players in the same area.

Also, if 1000 players fight each other, you're going to have other problems than snapshot memory cropping up much earlier. Physics, collision detection, rendering frame rate, client video RAM (assuming 3D), size of each packet you send, ...
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
