Hi,
I implemented a networking library with features like interpolation delay, client reconciliation, client prediction, and lag compensation. However, I have noticed frequent mispredictions on the client. I have found what is causing them, but I'm not sure what the best way to fix it is. Here's the gist of how it works:
Client:
Every tick:
1) Adds the sampled input to the input buffer (the max input buffer size is based on the configured max prediction time). For simplicity, assume an input contains data like "MoveLeft = true"
2) Rolls everything back to the latest received state
3) Removes old inputs from the input buffer (based on the last acknowledged input, which is contained in the latest received state)
4) Simulates the inputs in the input buffer. After each simulation, it also updates physics or other state that is based on time (1 simulation = 1 tick interval of time)
5) Sends inputs to the server (including older ones that were not acknowledged, in case packets were dropped)
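To make that concrete, here is a heavily simplified sketch of the client loop (Python-ish; rollback_to, simulate_tick and send_to_server are just placeholders for my actual engine calls, and the constants are made up):

```python
TICK_INTERVAL = 1.0 / 60.0    # placeholder tick rate
MAX_PREDICTED_INPUTS = 32     # derived from the configured max prediction time

def client_tick(tick, sampled_input, input_buffer, latest_server_state):
    # 1) Buffer the newly sampled input (buffer bounded by max prediction time).
    input_buffer.append((tick, sampled_input))
    if len(input_buffer) > MAX_PREDICTED_INPUTS:
        input_buffer.pop(0)

    # 2) Roll everything back to the latest authoritative server state.
    world = rollback_to(latest_server_state)

    # 3) Drop inputs the server has already acknowledged.
    input_buffer[:] = [(t, inp) for (t, inp) in input_buffer
                       if t > latest_server_state.last_acked_input_tick]

    # 4) Re-simulate the remaining inputs; each simulation also advances
    #    time-based state (physics) by one tick interval.
    for (t, inp) in input_buffer:
        world = simulate_tick(world, inp, TICK_INTERVAL)

    # 5) Send all unacknowledged inputs redundantly, in case of packet loss.
    send_to_server(input_buffer)
    return world
```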
Server:
Every tick:
1) Enqueues received inputs into each client's input buffer - duplicates are dropped
2) Dequeues one input from each client and simulates it
3) Updates physics or state that is based on time (for 1 tick interval of time)
One thing to note here is that if a client tries to cheat by sending inputs faster than the tick rate, the server will still only execute 1 input per client per tick.
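And the corresponding server side, with the same caveats (apply_input and step_physics are placeholders, not my real API):

```python
from collections import deque

TICK_INTERVAL = 1.0 / 60.0    # placeholder tick rate

def server_tick(world, client_buffers, received_packets):
    # 1) Enqueue received inputs into each client's buffer, dropping
    #    duplicates by input tick (clients resend unacknowledged inputs).
    for client_id, inputs in received_packets:
        buf = client_buffers.setdefault(client_id, deque())
        known = {t for (t, _) in buf}
        for (t, inp) in inputs:
            if t not in known:
                buf.append((t, inp))
                known.add(t)

    # 2) Dequeue at most one input per client per tick and simulate it,
    #    so a client flooding inputs still only advances one per tick.
    for client_id, buf in client_buffers.items():
        if buf:
            t, inp = buf.popleft()
            world = apply_input(world, client_id, inp)

    # 3) Advance time-based state (physics) by one tick interval.
    world = step_physics(world, TICK_INTERVAL)
    return world
```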
Here is when the problem occurs (Assume both client and server are at local tick #1)
Tick 1: Client sends input A. Let's say the server receives it on time and simulates it on its tick 1 - great, both states are in sync.
Tick 2: Client sends input B. This time, however, it is delayed and the server doesn't have it by its tick 2, so the server doesn't simulate any input for this client on tick 2.
Tick 3: Client sends input C. The server receives it on tick 3, the same tick on which the delayed input B arrives, so the server now has 2 inputs queued for this client. It dequeues input B and executes it.
Tick 4: Client does nothing, but the server dequeues and executes input C here.
From the perspective of the server, this happens:
Tick 1: Input A is dequeued and simulated. Simulates physics.
Tick 2: Did not receive input, so just simulates physics.
Tick 3: Received input B and input C, dequeues and simulates input B. Simulates physics.
Tick 4: Did not receive input. Dequeues and simulates input C. Simulates physics.
The above causes a misprediction on the client because the inputs end up being executed at different times relative to the physics simulation. In this example I'm using the physics simulation as the cause of the misprediction, but it could be any input whose simulation depends (directly or indirectly) on time.
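To make the time dependency concrete, here is a toy 1D example (made-up numbers, nothing from my actual code) showing that applying the same impulse one physics step later ends up at a different position:

```python
def step(pos, vel, dt=1.0, drag=0.5):
    # Toy "physics": move by the current velocity, then apply drag.
    return pos + vel * dt, vel * drag

# Client prediction: input B is applied on tick 2, then physics for ticks 2 and 3.
pos, vel = 0.0, 0.0
vel += 1.0                  # input B (e.g. a "MoveLeft" impulse) on tick 2
pos, vel = step(pos, vel)   # tick 2 physics
pos, vel = step(pos, vel)   # tick 3 physics
print(pos)                  # 1.5

# Server: no input available on tick 2, input B only applied on tick 3.
pos, vel = 0.0, 0.0
pos, vel = step(pos, vel)   # tick 2 physics (input B hasn't arrived yet)
vel += 1.0                  # input B finally applied on tick 3
pos, vel = step(pos, vel)   # tick 3 physics
print(pos)                  # 1.0
```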
Eventually the state is re-synced by reconciliation on the client, but it causes frequent and visible stutter. This happens even when the client and server run on the same machine with zero network latency, so surely I'm doing something wrong, because under real network conditions the issue will be even more noticeable.
The only thing I can think of is implementing a de-jitter buffer on the server for received inputs, so that one input is reliably available per tick, but I have not seen any reference to something like this in other popular games.
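Something like this is what I had in mind (rough sketch only; the class name and the fixed target of 2 ticks are just illustrative, a real one would probably adapt the target to measured jitter):

```python
from collections import deque

class InputDejitterBuffer:
    """Holds one client's inputs and releases at most one per server tick,
    but only after a small backlog has built up, so arrival jitter doesn't
    shift which tick an input gets simulated on relative to physics."""

    def __init__(self, target_size=2):
        self.target_size = target_size
        self.buffer = deque()
        self.started = False

    def enqueue(self, input_tick, client_input):
        # Drop duplicates and stale resends by input tick.
        if self.buffer and input_tick <= self.buffer[-1][0]:
            return
        self.buffer.append((input_tick, client_input))

    def dequeue(self):
        # Don't start draining until we have a small cushion of inputs.
        if not self.started:
            if len(self.buffer) < self.target_size:
                return None   # server simulates this tick with no input
            self.started = True
        if not self.buffer:
            # Buffer ran dry: pause and rebuild the cushion before resuming.
            self.started = False
            return None
        return self.buffer.popleft()
```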
I'm wondering whether there is some other way to fix this problem, or whether I implemented the network input system the wrong way. I would appreciate any insights!
Thanks!