Hi,
I've been working on a library to implement netcode for multiplayer games.
- Clients run in the future compared to the server, approximately RTT/2 ahead, so that a client input for tick T arrives on the server roughly when the server is processing tick T.
- Clients speed their time up or down slightly (by +/- 10%) to keep this offset respected and to prevent the server's input buffer from growing too large or too small. Basically, we always want the server to have an input available when it processes a given tick T (see the sketch right after this list).
- I've implemented client-side prediction and input-delay. Latency can be hidden by a combination of input-delay + prediction. For example, if the client is running 10 ticks ahead of the server (~150ms at 60Hz), we could use 50ms of input-delay (3 ticks) and 100ms of prediction. In that case the inputs are delayed by 3 ticks, and the client timeline is actually RTT/2 - input_delay ahead of the server.
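To make the time-sync part concrete, here is a minimal sketch of the kind of adjustment I mean (the names `TARGET_BUFFER_LEN` and `reported_buffer_len` are made up, not my actual API): the server reports how many inputs it currently has buffered for the client, and the client nudges its tick duration by at most 10% to keep that number near a target.

```rust
/// Desired number of inputs buffered on the server for this client (made-up value).
const TARGET_BUFFER_LEN: i32 = 3;
/// Nominal tick duration at 60 Hz, in seconds.
const BASE_TICK_DURATION_S: f64 = 1.0 / 60.0;

/// Returns the duration of the next client tick, sped up or slowed down by at
/// most 10% so that the server's input buffer converges towards the target size.
fn next_tick_duration(reported_buffer_len: i32) -> f64 {
    // Positive error => the server buffer is too small => the client is not far
    // enough ahead => tick faster (shorter ticks). Negative error => tick slower.
    let error = (TARGET_BUFFER_LEN - reported_buffer_len) as f64;
    let speed_factor = (1.0 + 0.02 * error).clamp(0.9, 1.1);
    BASE_TICK_DURATION_S / speed_factor
}

fn main() {
    // If the server reports only 1 buffered input, the next tick is slightly
    // shorter than 1/60 s, so the client drifts further ahead of the server.
    println!("{:.5}", next_tick_duration(1));
}
```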
I am trying to implement the logic described in this guide: https://www.snapnet.dev/docs/core-concepts/input-delay-vs-rollback/. The idea is to make the input-delay value dynamic based on the client's latency.
For example, at 30ms of latency we could cover it via input-delay only. But if the client's network conditions change and the latency jumps to 140ms, we could cover it by increasing the input-delay and adding a bit of prediction.
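Here is my reading of that, as a sketch (the constants and names are made up; `MAX_INPUT_DELAY_TICKS` is just some threshold above which the delay would feel sluggish): cover as much of the latency as possible with input-delay and hand the remainder to prediction.

```rust
/// Largest input-delay we are willing to tolerate before it becomes noticeable
/// (made-up value: 4 ticks, roughly 66 ms at 60 Hz).
const MAX_INPUT_DELAY_TICKS: u32 = 4;
const TICK_RATE_HZ: f64 = 60.0;

/// Splits the latency we need to hide (one-way latency, i.e. roughly RTT/2, in ms)
/// into an input-delay part and a prediction part, both expressed in ticks.
fn split_latency(one_way_latency_ms: f64) -> (u32, u32) {
    let tick_ms = 1000.0 / TICK_RATE_HZ;
    let latency_ticks = (one_way_latency_ms / tick_ms).ceil() as u32;
    let input_delay_ticks = latency_ticks.min(MAX_INPUT_DELAY_TICKS);
    let prediction_ticks = latency_ticks - input_delay_ticks;
    (input_delay_ticks, prediction_ticks)
}

fn main() {
    // At 30 ms the whole latency fits inside the input-delay budget:
    println!("{:?}", split_latency(30.0));  // (2, 0) -> no prediction needed
    // At 140 ms the input-delay is capped and the rest is covered by prediction:
    println!("{:?}", split_latency(140.0)); // (4, 5)
}
```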
What I don't understand is how it is possible to modify the input-delay dynamically, as this might cause some inputs to be missing or overwritten.
For example, let's say the input-delay should change from 4 ticks to 3 ticks. Then we would have:
- tick 100, delay = 4, write input A in buffer for tick 104
- tick 101, delay = 3, write input B in buffer for tick 104 → input gets overwritten!
In the reverse case, where the input-delay increases (say from 3 to 4), one tick never receives an input: tick 100 writes for tick 103, tick 101 writes for tick 105, and tick 104 is left empty.
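To show the failure mode concretely, here's a tiny simulation of the example above, with the send buffer keyed by target tick (all names are made up, this is not my actual code):

```rust
use std::collections::HashMap;

/// Hypothetical per-tick input delay: 4 ticks up to tick 100, then 3 ticks.
fn delay_for_tick(tick: u32) -> u32 {
    if tick <= 100 { 4 } else { 3 }
}

fn main() {
    // Inputs are written into the send buffer keyed by their target tick.
    let mut buffer: HashMap<u32, u32> = HashMap::new();
    for tick in 100..=101 {
        let target = tick + delay_for_tick(tick);
        if let Some(previous_tick) = buffer.insert(target, tick) {
            // Both tick 100 (delay 4) and tick 101 (delay 3) map to target 104,
            // so the second write clobbers the first.
            println!("tick {tick}: overwrote the input that tick {previous_tick} wrote for target {target}");
        }
    }
    // In the opposite direction (delay 3 -> 4), tick 100 would write for 103 and
    // tick 101 for 105, so target tick 104 would never receive an input at all.
}
```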
Any ideas?