
Delta compression and dropped client->server acks.

Started November 19, 2016 03:38 PM
11 comments, last by Guns Akimbo 8 years ago

What I'm currently running with thanks to the above replies:

Client sends "commands" which are acked reliably (can't drop these, they're important for server replay) as: a uint frame number n plus a uint bitfield acking frames n-1 through n-32... These are small and regular.
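In sketch form (illustrative C++; the names are mine, not the actual code):

    #include <cstdint>

    // Command header as described: a frame number plus a 32-bit ack
    // bitfield, where bit i set means frame n-(i+1) was received.
    struct CommandHeader {
        uint32_t frame;   // this command's frame number n
        uint32_t ackBits; // acks for frames n-1 .. n-32
    };

    // Mark frame (n - offset) as acked, for offset in 1..32.
    inline void setAck(CommandHeader& h, uint32_t offset) {
        h.ackBits |= 1u << (offset - 1);
    }

    inline bool isAcked(const CommandHeader& h, uint32_t offset) {
        return (h.ackBits >> (offset - 1)) & 1u;
    }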

Server can send deltas which it fires and forgets, to which the client responds with fire-and-forget acks... subsequent deltas are diffed from the last acked frame, or from frame 0 if the gap is too wide (on a frame-0 delta the payload is treated as the full state)... These are compressed and less frequent.
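Picking the baseline then looks something like this (a minimal sketch; kSnapshotWindow is an assumed buffer depth, not a real number from my code):

    #include <cstdint>

    constexpr uint32_t kSnapshotWindow = 64; // assumed snapshot buffer depth

    // Delta against the client's last acked frame if it's still buffered,
    // otherwise fall back to frame 0, which means "send the full state".
    uint32_t chooseBaseline(uint32_t currentFrame, uint32_t lastAckedFrame) {
        if (lastAckedFrame != 0 && currentFrame - lastAckedFrame < kSnapshotWindow)
            return lastAckedFrame;
        return 0; // gap too wide: a frame-0 delta carries the full state
    }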

Server can also send reliable messages which are acked by the client in the same way its commands are... These should be small and sent when they are ready (congestion control permitting)
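A rough sketch of that path under the same ack scheme (sendOnFrame and the resend-every-tick policy are invented for illustration; the sketch also assumes frame n itself counts as received):

    #include <cstdint>
    #include <vector>

    // Hypothetical helper: write a payload into the outgoing packet for a frame.
    void sendOnFrame(uint32_t frame, const std::vector<uint8_t>& payload);

    struct Reliable {
        uint32_t lastSentFrame;
        std::vector<uint8_t> payload;
    };

    std::vector<Reliable> unacked;

    // Called when a packet arrives carrying frame n + 32 ack bits.
    void onAcks(uint32_t n, uint32_t ackBits) {
        std::erase_if(unacked, [&](const Reliable& m) { // C++20
            uint32_t off = n - m.lastSentFrame; // unsigned wrap rejects frames newer than n
            return off <= 32 && (off == 0 || ((ackBits >> (off - 1)) & 1u));
        });
    }

    // Called each tick: anything still unacked rides the next packet out.
    void resendPending(uint32_t currentFrame) {
        for (auto& m : unacked) {
            sendOnFrame(currentFrame, m.payload);
            m.lastSentFrame = currentFrame;
        }
    }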

So the client and server have a reliable two-way message stream, and the server also has an efficient (not reliable, but eventually consistent) way to dump the deltas onto the client.

With the degradation plans of:

Client --message-> Server: You can only be so late before your input is discarded; you're going to be rubber-banded by server correction in some cases (see the sketch after this list).

Client <-message-- Server: You will get this and it will happen, but the effect may be so far in your past that you don't see or hear any visual/audio representation.

Client <-state/snapshot-- Server: You'll get one of these sooner or later, and the state will just be adjusted (jerkiness) if there is no per-case, pre-programmed way for you to transition to it nicely.
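For the first rule, the server-side check is roughly a one-liner (kMaxInputLag is an invented name and value):

    #include <cstdint>

    constexpr uint32_t kMaxInputLag = 8; // assumed: frames of lateness tolerated

    // Inputs that arrive too far behind the simulation get discarded; the
    // client is then rubber-banded by the server's correction.
    bool acceptInput(uint32_t serverFrame, uint32_t inputFrame) {
        if (inputFrame >= serverFrame) return true;      // on time or ahead
        return serverFrame - inputFrame <= kMaxInputLag; // only so late
    }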

How often will your server be sending the snapshot state-sync messages, and the deltas in between them?

IF the full snapshots are sent close enough in time (like ~1 second apart) you might be able to make the delta transmissions truly 'Fire And Forget' (which I think commonly means they don't even need to be ACK'd or cached) and they can be lost, as they just smooth the movements (they are deltas off the last reliably-sent state sync msg and independent of each other).
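On the client that could be as simple as this (a sketch only; all names are invented):

    #include <cstdint>
    #include <vector>

    void applyDiff(const std::vector<uint8_t>& diff); // hypothetical

    struct Delta {
        uint32_t baseSnapshotId; // the full snapshot this was diffed against
        std::vector<uint8_t> diff;
    };

    uint32_t lastSnapshotId = 0; // last full snapshot applied

    void onDelta(const Delta& d) {
        if (d.baseSnapshotId == lastSnapshotId)
            applyDiff(d.diff); // smooths movement between snapshots
        // else: a stale delta off an older snapshot; safe to drop, since the
        // deltas are independent and the next full snapshot catches us up
    }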


Ahh sorry, I realise the term 'fire and forget' might have seemed laboured there. That is to say: the server fires off a state delta and won't care if the ack is missed; the ack only matters to the next delta that goes out. Same with the "ack" for the delta: the client doesn't care if the server got it (just that it got one some time in the last X frames)... If there's a severe enough gap between either, the client is going to be getting a big packet (with much worse compression) and stuff may jump/jerk around until state is restored.

Buffer sizes and update rates are fully configurable, and the plan is to simulate 250ms of latency and 25% packet loss (firmer numbers TBD), dial updates up to whatever can be managed without the congestion management being permanently on, and then tighten up the mechanics from there. What I have at the moment is just a "toy" network layer with a crude simulator and player client, but it seems to handle throwing test data around localhost at a faster-than-needed rate (currently 60/60)... which I understand means nothing, hence looking at simulated latency next.
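For reference, the crude simulator needn't be much more than this (a minimal sketch with invented names; with a fixed delay, a plain FIFO queue preserves packet order):

    #include <chrono>
    #include <cstdint>
    #include <queue>
    #include <random>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct DelayedPacket {
        Clock::time_point deliverAt;
        std::vector<uint8_t> data;
    };

    class LinkSimulator {
    public:
        LinkSimulator(std::chrono::milliseconds latency, double lossRate)
            : latency_(latency), loss_(lossRate), rng_(std::random_device{}()) {}

        void send(std::vector<uint8_t> packet) {
            if (std::uniform_real_distribution<>(0.0, 1.0)(rng_) < loss_)
                return; // simulated drop
            queue_.push({Clock::now() + latency_, std::move(packet)});
        }

        // Deliver everything whose delay has elapsed.
        std::vector<std::vector<uint8_t>> receive() {
            std::vector<std::vector<uint8_t>> out;
            while (!queue_.empty() && queue_.front().deliverAt <= Clock::now()) {
                out.push_back(std::move(queue_.front().data));
                queue_.pop();
            }
            return out;
        }

    private:
        std::chrono::milliseconds latency_;
        double loss_;
        std::mt19937 rng_;
        std::queue<DelayedPacket> queue_;
    };

    // e.g. LinkSimulator link(std::chrono::milliseconds(250), 0.25);

Adding jitter would mean variable delays, at which point a priority queue ordered by delivery time replaces the FIFO.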

Just trying to get the basic concepts embedded in the networking layer and make sure reliability is good enough to start working on my networked physics... Actual hard numbers will come as a result of making that feel right :D

