I've taken a break before reconsidering this.
I still have a remaining question, if you would be so kind as to clarify for me.
Conflict between clock synchronisation and the command delay on the client: should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system, e.g.
received_tick = synchronised_tick + projection_tick
or should I run the client ahead (so the synchronised tick itself handles the latency upstream)?
I assume that running the client ahead makes more sense.
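To make the two options concrete, here is a minimal sketch of both schemes. The class and attribute names are illustrative, not from any real implementation:

```python
class SeparateProjection:
    """Option A: clock sync and command delay are kept apart."""
    def __init__(self):
        self.synchronised_tick = 0   # estimated server tick (clock sync only)
        self.projection_ticks = 0    # extra command delay added on top

    def command_tick(self):
        # The command is stamped ahead of the synchronised clock.
        return self.synchronised_tick + self.projection_ticks


class RunAhead:
    """Option B: the synchronised clock itself runs ahead of the server,
    so the upstream latency is folded into a single tick value."""
    def __init__(self):
        self.client_tick = 0  # already includes upstream latency

    def command_tick(self):
        return self.client_tick
```

Option B has fewer moving parts: there is only one number for the server to correct, which is what makes the single "nudge" mechanism described later possible.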
Following from this, how best should I approach clock synchronisation (with the second suggested method, whereby we run ahead of the server)?
The most robust method for finding the RTT seems to me to be an exponentially weighted moving average, something like this:
fac = 0.05
rtt = ((1 - fac) * rtt_old) + (fac * rtt_sample)
new_time = timestamp + rtt
(with a small fac, each new sample only nudges the estimate, so one jittery measurement cannot drag the clock around.)
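As a runnable sketch of that kind of low-pass filter (the smoothing factor and the seeding of the first sample are assumptions, tune to taste):

```python
class RttEstimator:
    """Exponentially weighted moving average of RTT samples."""

    def __init__(self, fac=0.05):
        self.fac = fac   # weight given to each new sample
        self.rtt = None  # current smoothed estimate

    def sample(self, rtt_sample):
        if self.rtt is None:
            # Seed with the first measurement rather than converging from 0.
            self.rtt = rtt_sample
        else:
            # History keeps (1 - fac) of its weight; the new sample adds fac.
            self.rtt = (1 - self.fac) * self.rtt + self.fac * rtt_sample
        return self.rtt
```

With fac = 0.05 a sudden latency change takes dozens of samples to be fully reflected, which is exactly the slow-convergence trade-off discussed below.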
But then I need to include the command delay, which will likely change whenever the RTT measurement changes, so it will jitter around a little: the RTT estimate may be "off", so the server will tell the client its command was late, and the client will increment the command delay. The RTT estimate will then likely compensate for the changed conditions the next time it is updated.
The other option is that we don't separate the two ideas of "command latency" and "upstream latency" and just have a single latency variable. We update this by nudging it from the server.
if not_current_command:
    if command_is_late:
        self.client_nudge_forward()
    else:
        self.client_nudge_backwards()
But this seems a little too coarse-grained, surely? Plus there would be significant convergence time unless we scaled the nudge by the measured timing difference. I'm not sure how large this scalar would need to be, though.
difference = int((server_tick - command_tick) * convergence_factor)
if not_current_command:
    self.client_nudge(difference)
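A minimal sketch of that proportional variant (the class name and convergence_factor value are assumptions; 1.0 would apply the full measured error at once, smaller values converge more gently but more slowly):

```python
class ClientClock:
    """Client tick that is nudged proportionally to the server-reported error."""

    def __init__(self, convergence_factor=0.5):
        self.tick = 0
        self.convergence_factor = convergence_factor

    def on_server_report(self, server_tick, command_tick):
        # Positive difference: our command arrived late, so jump forward;
        # negative: it arrived too early, so fall back.
        difference = int((server_tick - command_tick) * self.convergence_factor)
        self.tick += difference
        return difference
```

Because the correction is a fraction of the actual error, successive reports converge geometrically instead of creeping one tick per round trip.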
-------------------------------
My findings:
My dejitter buffer is simply a deque structure.
The clock synchronisation relies on the "nudging" technique. I have found it to be responsive and stable under local testing conditions with simulated latency (but not yet with jitter). I cannot use a smoothing factor, because it would otherwise take too long to converge on the correct value.
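For reference, a dejitter buffer along the lines described (a plain deque of (tick, command) pairs consumed in arrival order) can be as small as this; the method names are illustrative:

```python
from collections import deque


class DejitterBuffer:
    """FIFO buffer that absorbs network jitter before commands run."""

    def __init__(self):
        self.buffer = deque()

    def push(self, tick, command):
        self.buffer.append((tick, command))

    def peek(self):
        # Look at the oldest entry without consuming it.
        return self.buffer[0] if self.buffer else None

    def pop(self):
        return self.buffer.popleft()
```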
To perform clock sync:
- Send the command, a "received_clock_correction" boolean and the client's tick to the server. The server checks whether the tick is behind its current tick; if so, it discards the command and requests a forward nudge (by the tick difference). Otherwise, we store the command in the buffer.
- In the simulation, read the first packet in the buffer. If there isn't one available, don't do anything and return.
- If the tick is late (I can't imagine why it would be, as we should have caught this, but let's say you don't run the simulation function for a frame), we just remove it from the buffer (consume the move) and then recursively call the update function to see if we need to catch up with newer packets in the buffer.
- Otherwise, if the tick is early, we check how early it is. If it is more than a safe margin (e.g. 10 ticks) ahead, we send a backwards nudge to the client (by the full tick difference, not the difference minus the safe margin) and consume the move. Otherwise, we just skip this frame until the server catches up with the command in the buffer (which, as mentioned, is at most 10 ticks early).
- Otherwise we consume and run the move.
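The steps above can be sketched as a single per-frame function. The callback names, the safe margin value and the (tick, move) tuple layout are assumptions, not the actual implementation:

```python
from collections import deque

SAFE_MARGIN = 10  # ticks a command may sit early in the buffer


def process_buffer(buffer, server_tick, run_move, nudge_client):
    """One simulation step over the dejitter buffer."""
    if not buffer:
        return  # nothing arrived yet; do nothing this frame
    tick, move = buffer[0]
    if tick < server_tick:
        # Late: consume it, then recurse to catch up on newer moves.
        buffer.popleft()
        process_buffer(buffer, server_tick, run_move, nudge_client)
    elif tick > server_tick + SAFE_MARGIN:
        # Too early: nudge the client back by the full difference, consume.
        nudge_client(-(tick - server_tick))
        buffer.popleft()
    elif tick > server_tick:
        # Slightly early: leave it buffered until the server catches up.
        return
    else:
        # On time: consume and run the move.
        buffer.popleft()
        run_move(move)
```

Usage: call it once per server frame, e.g. `process_buffer(buf, self.tick, self.apply_move, self.rpc_nudge)`.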
The purpose of the "received_clock_correction" boolean is to allow lockstep-like correction. This means that we won't send n "correct by x" RPC calls in the time it takes for the client to receive the correction, apply it and send a new packet. I already have an RPC locking system in place (three server-side functions: server_lock, server_unlock and is_locked), but these are not delayed by the dejitter buffer, which is what we use for clock correction.
The boolean is included in the command "packet" (RPC call), and so it is read at the same time the command is considered for execution. In my latest implementation, I have new RPCs (server_add_buffered_lock and server_remove_buffered_lock) which work like their unbuffered counterparts, except that they are executed in time with the dejittered commands.
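The gating behaviour that boolean enables can be sketched as follows. This is an illustration of the described scheme, with hypothetical names; the real version would live behind the buffered-lock RPCs:

```python
class CorrectionGate:
    """Server-side guard: send at most one clock correction at a time,
    and wait until a buffered command acknowledges it before sending more."""

    def __init__(self):
        self.awaiting_ack = False

    def on_command(self, received_clock_correction):
        # The flag rides inside the buffered command, so it is read in time
        # with the dejittered moves, like the buffered lock RPCs.
        if received_clock_correction:
            self.awaiting_ack = False

    def try_send_correction(self, send):
        # Suppress duplicate "correct by x" RPCs during the round trip.
        if self.awaiting_ack:
            return False
        send()
        self.awaiting_ack = True
        return True
```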