
Network Tick Sync and Skipping/Stalling

Started by sufficientreason, April 05, 2016 11:39 PM

I'm currently working on dejitter buffers for my server-authoritative simulation setup. I've been doing some digging and found this post, which nicely details a scheme for synchronizing clocks between client and server based on the Quake 3 source code. One thing I'm not sure about, though, is what to apply extra (or fewer) ticks to in order to keep entities synced up.
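
To make sure I'm reading it right, here's the shape of the scheme as I understand it, as a C++ sketch (not the linked post's actual code; the names and the snap threshold are made up). Each snapshot carries the server tick it was sent on, and the client either nudges its local estimate a tick at a time or snaps it outright:

```cpp
#include <cstdint>
#include <cstdlib>

class ServerClockEstimator {
public:
    // Call once per received snapshot. halfRttTicks is a latency estimate
    // converted to ticks.
    void OnSnapshot(int32_t snapshotTick, int32_t halfRttTicks) {
        int32_t remote = snapshotTick + halfRttTicks; // server tick "now"
        int32_t error  = remote - estimated_;
        if (std::abs(error) > kSnapThreshold) {
            estimated_ = remote; // hopelessly off (join, lag spike): snap
            drift_ = 0;
        } else {
            drift_ = error;      // small drift: absorb it a tick at a time
        }
    }

    // How many simulation ticks to run this fixed update:
    // 0 = stall, 1 = normal, 2 = extra tick to catch up.
    int StepsThisUpdate() {
        if (drift_ > 0) { --drift_; return 2; }
        if (drift_ < 0) { ++drift_; return 0; }
        return 1;
    }

    // Advance and return the estimated server tick; call once per step run.
    int32_t AdvanceTick() { return ++estimated_; }
    int32_t EstimatedTick() const { return estimated_; }

private:
    static constexpr int32_t kSnapThreshold = 10; // arbitrary cutoff
    int32_t estimated_ = 0;
    int32_t drift_ = 0;
};
```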

I'm thinking about situations where the client is running a little faster or a little slower than the server. I don't have fine-grained clock control because my clients use Unity, so things like timer skew adjustments aren't available to me. My main tool for speeding the client up or slowing it down adaptively is to run an extra tick on the client, or one fewer, whenever the clock steps forward or stalls in the linked post's update scheme (see the driver sketch after the list below). What I'm not sure about is what to apply those ticks to. The server is the ultimate authority on entity state, but the client sees two kinds of entities:

Pawns are entities that the client controls (generates input for) -- these are locally simulated for prediction

Ghosts are entities that the client doesn't control -- these are not locally simulated, they just interpolate/extrapolate over known states
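
Here's the fixed-update driver I have in mind for applying those extra/fewer ticks, reusing the estimator from the sketch above (Simulation is a stand-in for whatever actually steps the world; in my case this would really be C# in Unity's FixedUpdate, but the shape is the same):

```cpp
#include <cstdint>

// Stand-in for the real game loop; advances both pawns and ghosts one tick.
struct Simulation {
    void Tick(int32_t serverTick) { /* step pawns and ghosts */ }
};

// Called once per fixed update. Runs zero, one, or two simulation ticks
// depending on how far the local clock estimate has drifted.
void FixedUpdateDriver(ServerClockEstimator& clock, Simulation& sim) {
    int steps = clock.StepsThisUpdate(); // 0 = stall, 1 = normal, 2 = catch up
    for (int i = 0; i < steps; ++i) {
        sim.Tick(clock.AdvanceTick());
    }
}
```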

For ghosts, I think this is simple. Every fixed update the client estimates what the server's tick is and pulls the highest state out of the dejitter buffer whose tick is <= that estimated tick, then draws that (with some smoothing); a sketch of that lookup is below. Pawns are more complicated.
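
Concretely, something like this (made-up types; the actual render-side smoothing lives elsewhere):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>

struct EntityState { float x, y, z; }; // stand-in for real ghost state

class StateDejitterBuffer {
public:
    void Insert(int32_t tick, const EntityState& s) {
        states_[tick] = s;
        while (states_.size() > kCapacity)   // keep the buffer bounded
            states_.erase(states_.begin());
    }

    // Highest buffered state whose tick is <= estimatedServerTick, if any.
    std::optional<EntityState> Latest(int32_t estimatedServerTick) const {
        auto it = states_.upper_bound(estimatedServerTick);
        if (it == states_.begin()) return std::nullopt;
        return std::prev(it)->second;
    }

private:
    static constexpr std::size_t kCapacity = 64; // arbitrary
    std::map<int32_t, EntityState> states_;
};
```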

One question I'm debating right now on the client side: 1) Say we just had to adjust our predicted server tick on the client, so we're now a tick or two above or below where we previously thought we were. Now it's time to update our pawns. We could generate an extra input (though it would just contain the same key presses) or generate no input at all. Should I do this? If not, the client will always generate exactly one input per client tick update, and may send too many or too few to the server. The "extra input" option is sketched below.
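
Roughly like this (a sketch with invented types): fill every tick between the last input sent and the possibly-adjusted current tick, so a +2 adjustment duplicates the current key state and a stall emits nothing.

```cpp
#include <cstdint>
#include <vector>

struct Input { int32_t tick; uint32_t buttons; };

std::vector<Input> GenerateInputs(int32_t lastSentTick,
                                  int32_t currentTick,
                                  uint32_t buttonsThisFrame) {
    std::vector<Input> out;
    // After a forward jump this loop runs more than once; after a stall
    // it runs zero times.
    for (int32_t t = lastSentTick + 1; t <= currentTick; ++t)
        out.push_back({t, buttonsThisFrame});
    return out;
}
```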

Now, the server has the same kind of dejitter buffer for smoothing out the inputs it receives from the client. Just like we do with states, every update the server pulls the highest input from that buffer whose tick is <= the estimated client tick, then applies it (a sketch is below). It's important to note that in this naive approach, if the client is slow we'll re-apply the same input more than once, and if the client is too fast we'll skip inputs.
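
The naive pull I mean is something like this (invented names, not my actual code). A nice side effect is that a missing input freezes the pawn's controls at the last known state rather than dropping them to zero:

```cpp
#include <cstdint>
#include <map>

struct PlayerInput { uint32_t buttons = 0; };

class InputDejitterBuffer {
public:
    void Insert(int32_t clientTick, const PlayerInput& in) {
        inputs_[clientTick] = in;
    }

    // Newest input at or before the estimated client tick. If the client
    // runs slow this re-applies lastApplied_; if the client runs fast,
    // intermediate inputs get skipped entirely.
    PlayerInput Pull(int32_t estimatedClientTick) {
        auto it = inputs_.upper_bound(estimatedClientTick);
        if (it != inputs_.begin())
            lastApplied_ = std::prev(it)->second;
        return lastApplied_;
    }

private:
    std::map<int32_t, PlayerInput> inputs_;
    PlayerInput lastApplied_{}; // reused when nothing newer has arrived
};
```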

That leads to my second conundrum: 2) We just had to adjust our client tick predictor to be a couple of ticks above or below what we thought. On the server, do I apply an extra input (or skip one) on all of the entities that client was controlling to compensate? And how do I guard against speed hacks, where a client deliberately runs at a higher rate and sends extra inputs to move faster? One mitigation I've been toying with is sketched below.
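
For the speed-hack part, the idea is a per-client input budget, so a client can never spend more input ticks than server ticks have actually elapsed (a sketch; the names and the banked allowance are made up):

```cpp
#include <algorithm>
#include <cstdint>

class InputBudget {
public:
    // Call once per server tick; grants one tick of "spend".
    void OnServerTick() { budget_ = std::min(budget_ + 1, kMaxBanked); }

    // Call before applying an input. Returns false when the client has
    // outrun the server clock; the server can then drop or defer the input
    // instead of applying it.
    bool TrySpend() {
        if (budget_ <= 0) return false;
        --budget_;
        return true;
    }

private:
    static constexpr int32_t kMaxBanked = 4; // small allowance for jitter
    int32_t budget_ = 0;
};
```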

There's more to it, but some discussion might help me home in on some of the other problems I'm having with this technique.

