
Delta compression.

Started April 13, 2018 01:11 AM
8 comments, last by hplus0603 6 years, 7 months ago

Hi there. I'm really sorry to post this, but I would like to clarify the delta compression method. I've read the Quake 3 Networking Model article (http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking), but I still have some questions. First of all, I am using LiteNetLib as my networking library, and it works pretty well with Google.Protobuf serialization. But then I ran into an issue when the server pushes a lot of data: with, say, 10 players, the server pushes 250 KB/s at a 30 Hz tick rate, so I realized I have to compress it, for example with delta compression. As I understood it, the client and server both use an unreliable channel. LiteNetLib's documentation says an unreliable packet can be dropped or duplicated, while on the sequenced channel a packet can be dropped but never duplicated, so I think I have to use the sequenced channel for delta compression? And do I have to use a reliable channel for acknowledgements, or can I just go with sequenced and send the StateId with the snapshot rather than separately?

Thank you. 

Check my dev blog: http://wobesdev.wordpress.com/

You do not need a reliable channel for acknowledgements. If an acknowledgement gets dropped, the channel will just re-send more state until the next acknowledgement is received.

The basic idea in Quake delta compression (more like "selective update" compression) is:

  • a packet has many constituent parts
  • a packet also has a header with three pieces of information:
    • last-received packet sequence number from other end
    • current packet sequence number for this packet in this direction
    • bitmask of which packet parts are included in the payload
  • each part has a separate "generation counter" (which could be the packet sequence number) that you update in memory each time the part's data is modified
  • keep track of the last acknowledged packet sequence/generation number from the other side
  • when the time comes to send a new packet, include every part whose generation number is later than the last-received acknowledgement

Note that you can use different sequence numbers in each direction, and it'll still work fine. Also note that sequence numbers don't need to be large; 8 bits (1 byte) is plenty, because you can calculate "is later than" using cyclic math (cast the unsigned difference to a signed value and compare the sign, which ends up checking the highest bit of the difference). In fact, using long sequence numbers may hide subtle bugs that only show up when those sequence numbers finally roll over.
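
Here is a minimal sketch of that scheme in C# (since you're using LiteNetLib); every name in it is illustrative, and none of it is LiteNetLib's actual API:

struct PacketHeader
{
    public byte RemoteSequence; // last-received sequence number from the other end
    public byte LocalSequence;  // sequence number of this packet in this direction
    public uint PartMask;       // bit i set => part i is included in the payload
}

static class SequenceMath
{
    // True if 'a' is later than 'b' under 8-bit wrap-around. Casting the
    // difference to a signed byte and checking its sign is the same as
    // testing the highest bit of the difference.
    public static bool IsLater(byte a, byte b)
    {
        return unchecked((sbyte)(a - b)) > 0;
    }

    // Include every part whose generation counter is later than the
    // sequence number the other side last acknowledged.
    public static uint BuildPartMask(byte[] partGeneration, byte lastAckedSequence)
    {
        uint mask = 0;
        for (int i = 0; i < partGeneration.Length; i++)
            if (IsLater(partGeneration[i], lastAckedSequence))
                mask |= 1u << i;
        return mask;
    }
}

Note that a difference of zero reads as "not later", so a part whose generation equals the acknowledged sequence number is correctly skipped.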

enum Bool { True, False, FileNotFound };

Thank you so much for the clarification. It makes sense to me now, many thanks!

Check my dev blog: http://wobesdev.wordpress.com/

Quick question. Is it necessary to use a world-state model, i.e. to send a snapshot of the entire world rather than sending entities separately? For example, player synchronization uses 30 packets per second, while other objects use 20. Are there any advantages to sending them separately with their own timestamps, or does it have to be the whole world state on every fixed server tick? Thanks.

Check my dev blog: http://wobesdev.wordpress.com/

You typically do this per entity, although in a typical Quake-style game you just synchronize the players (and the grenades or whatever they fire come along with them), and you synchronize every player every step, because the game needs that.

If you want to do selective synchronization, then you want to know which packets a particular entity was sent in; when you get an ack for packet X, update the last-acked counter only for the objects that were in packet X. This adds more book-keeping to the protocol, for sure.
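
A minimal sketch of that book-keeping in C#, with illustrative names (OnPacketSent/OnAckReceived are assumptions, not callbacks from any particular library):

using System.Collections.Generic;

class AckTracker
{
    // packet sequence -> (tick it described, entities it contained)
    readonly Dictionary<byte, (int Tick, List<int> Entities)> sent =
        new Dictionary<byte, (int Tick, List<int> Entities)>();

    // entity id -> last tick the remote side is known to have received
    public readonly Dictionary<int, int> LastAckedTick = new Dictionary<int, int>();

    public void OnPacketSent(byte sequence, int tick, List<int> entityIds)
    {
        sent[sequence] = (tick, entityIds);
    }

    public void OnAckReceived(byte sequence)
    {
        if (!sent.TryGetValue(sequence, out var packet)) return;
        foreach (int id in packet.Entities)
            LastAckedTick[id] = packet.Tick; // only entities in packet X advance
        sent.Remove(sequence);
    }
}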

enum Bool { True, False, FileNotFound };
Thank you. But do you happen to know what the best approach is for a survival game with, say, 250-500 players per server?

Check my dev blog: http://wobesdev.wordpress.com/


Quake 3 servers keep a snapshot history separately for each client, and only send a subset of the game state based on what's visible to each client. So, as long as each client is in a different part of the world, Q3 client-side bandwidth would be the same regardless of whether there are 10 players on the server or 1000.

Of course the server-side upload bandwidth scales with the number of clients, but that's not as much of an issue -- it's cheap to rent servers in real data centres with 100 Mbps upload bandwidth and data caps high enough to keep it saturated.

If all the players are able to congregate in the same area, then yeah, you might need some extra measures, such as changing the update rate per entity (the client themselves and their closest opponents at full rate, other opponents at half rate, etc.), changing the quantisation per entity (nearby opponents with 24-bit positions, more distant opponents with 16-bit positions, etc.), or something else...
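
For the quantisation part, a sketch might look like the following; the world range and the 100-unit distance threshold are made-up numbers for illustration:

using System;

static class PositionQuantiser
{
    const float WorldMin = -4096f; // assumed world bounds
    const float WorldMax = 4096f;

    // Fewer bits for entities that are further from the viewer.
    public static int BitsFor(float distanceToViewer)
    {
        return distanceToViewer < 100f ? 24 : 16;
    }

    public static uint Quantise(float value, int bits)
    {
        float t = Math.Clamp((value - WorldMin) / (WorldMax - WorldMin), 0f, 1f);
        uint max = (1u << bits) - 1;
        return (uint)Math.Round((double)t * max);
    }

    public static float Dequantise(uint q, int bits)
    {
        uint max = (1u << bits) - 1;
        return WorldMin + (WorldMax - WorldMin) * ((float)q / max);
    }
}

With an 8192-unit world, 24 bits gives roughly 0.0005-unit precision and 16 bits roughly 0.125 units, which is why the coarser encoding is only acceptable at a distance.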

Hi there. Thank you for the reply. Well, in my case there could be 50-70 entities in the same network zone. I've implemented something called a "zone of interest"; basically it's like view distance, but for the network. The server sends updates only for the objects the client observes within a certain range, say 1 km. My final question is probably about memory allocation. What is the best solution for the server: keeping an entire world-snapshot history per client, or just a per-object snapshot history? Many thanks.

Check my dev blog: http://wobesdev.wordpress.com/

Last question first: keeping snapshots per client obviously uses more memory than keeping snapshots per object. If you have 100 clients, each of which keeps just one snapshot per object, that's the same as keeping 100 global snapshots of the same object, which would let you look 3 seconds back at a 30 Hz simulation rate. You probably don't need more than 1 second of look-back; if someone is more out of sync than that, just send the entire thing. You still need a per-player "player has acknowledged entity X at tick Y" map, but that's much cheaper (two integers per entity per player), and you need a "packet for tick Y contained entities Xs" map per player; that's also not terribly expensive.
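
A sketch of that per-object history, sized for 1 second of look-back at 30 Hz (the ring-buffer layout and names are illustrative):

class SnapshotHistory<TState>
{
    const int Capacity = 30; // 1 second at a 30 Hz simulation rate

    readonly TState[] states = new TState[Capacity];
    readonly int[] ticks = new int[Capacity];

    public SnapshotHistory()
    {
        for (int i = 0; i < Capacity; i++) ticks[i] = -1; // nothing stored yet
    }

    public void Store(int tick, TState state)
    {
        states[tick % Capacity] = state;
        ticks[tick % Capacity] = tick;
    }

    // Returns the baseline to delta against; false means the acked tick has
    // already been overwritten, so the caller should send the entire state.
    public bool TryGetBaseline(int ackedTick, out TState baseline)
    {
        if (ackedTick >= 0 && ticks[ackedTick % Capacity] == ackedTick)
        {
            baseline = states[ackedTick % Capacity];
            return true;
        }
        baseline = default(TState);
        return false;
    }
}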

Regarding network view range: this is very common in networked games with large worlds (large FPSes, MMOs, and so forth). There are a few common gotchas, such as making sure that entity "become visible" and entity "disappear" messages are reliably delivered, else you may end up with "zombie" entities that the client thinks are visible but that the server no longer sends updates for. There's also the problem of binoculars, long-range missiles, and similar game mechanics. The easiest option is not to add such mechanics to your game; if you need binoculars, you may want to add a "movement blur" effect when opening them, to give the server time to make entities further out in the binocular direction visible.
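
A sketch of the per-client visibility diff; the returned enter/leave lists are what you would want delivered reliably, while ordinary delta updates stay on the unreliable channel (names are illustrative):

using System.Collections.Generic;

class InterestSet
{
    readonly HashSet<int> visible = new HashSet<int>();

    // inRange: entities within this client's network view range this tick.
    public (List<int> Entered, List<int> Left) Update(HashSet<int> inRange)
    {
        var entered = new List<int>();
        var left = new List<int>();
        foreach (int id in inRange)
            if (!visible.Contains(id)) entered.Add(id);
        foreach (int id in visible)
            if (!inRange.Contains(id)) left.Add(id);
        foreach (int id in entered) visible.Add(id);
        foreach (int id in left) visible.Remove(id);
        return (entered, left); // send these reliably to avoid zombie entities
    }
}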

The classic problem with network view range is that some zones will be much more popular than others. The "auction zone" or "market square" in an MMO will have much higher player density than regular combat zones, for social reasons. If everyone wants to be in the same place at the same time, then you have to show everyone to everyone else, and figuring out what to do in that case is still something you need to solve. Gameplay design can help: PUBG/Battle Royale style gameplay naturally spreads players out initially, and as players are eliminated, fewer of them remain as the play area shrinks. Similarly, making gathering spots "no-fight" zones from a design point of view means that dedicated meeting areas can use different network update rates, not send weapon info, and so on, without breaking the game.

Also, it may not be obvious at first, but server network needs actually scale with the square of the number of players: if 100 players see 100 other players, you need to send data about 100 players TO 100 players, which ends up with 10,000 "player data points" being sent. If your game design includes network view range, then it will tend to put a soft limit on the number of players who can see each other. One way to put a hard cap on that is to dynamically adjust the visible range based on how dense the player population is around you.
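
A quick back-of-the-envelope check of that scaling; the 20 bytes per player state and the 30 Hz send rate are assumptions for illustration only:

using System;

class BandwidthEstimate
{
    static void Main()
    {
        int players = 100;
        int bytesPerPlayerState = 20; // assumed size of one player's delta
        int sendsPerSecond = 30;

        // each of N players receives state for the other N - 1 players
        long dataPoints = (long)players * (players - 1);               // ~10,000
        long bytesPerSecond = dataPoints * bytesPerPlayerState * sendsPerSecond;

        Console.WriteLine($"{bytesPerSecond / 1_000_000.0:F1} MB/s"); // ~5.9 MB/s upstream
    }
}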

Finally, beware of cheap "100 Mbps unlimited!!!" hosts. Typically, the 100 Mbps is only the virtual network card in your VPS; the physical host is typically oversubscribed, the network switch at the top of its rack is typically oversubscribed, and so on all the way up to the internet peering/transit connections. I've had "unlimited 1 Gbps!" hosts end up delivering 40 kB/s to the actual internet because of this.

enum Bool { True, False, FileNotFound };

