Hello Game Dev! Sorry if I'm beating a dead horse with this thread, but I'm making a fast-paced, networked action game, and I'm having trouble wrapping my head around some things. Namely: how should the server handle player input packets that arrive at the wrong simulation step, as a result of network lag, lag jitter, and inevitably imperfect time synchronization algorithms? I'll start by outlining my understanding, and I'd be eternally grateful to anyone who can shed some light on what I'm missing.
For the sake of simplicity, I'm assuming the client has synchronized to the server's clock to some reasonable accuracy, and stamps each input with a "suggested simulation step" on which the server ought to execute it.
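To make that concrete, here's roughly what I imagine the client sending each tick; the names and fields are just my own sketch, not from any particular engine:

```cpp
#include <cstdint>

// Sketch of a per-tick input packet; every name here is made up.
struct InputPacket {
    uint32_t sequence;       // monotonically increasing input number
    uint32_t suggestedTick;  // the simulation step the client thinks
                             // this input should execute on
    uint8_t  buttons;        // bitmask of pressed buttons
    float    moveX, moveY;   // analog movement axes
};
```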
I've read a few threads here and a lot of the documentation that's out there. As far as I can tell, there are two main methods for handling inputs that arrive at the server at the wrong time:
Method 1: allow the server to rewind time and re-simulate the game when a late input packet arrives, so that late inputs still have a chance of being executed. The server would only allow rewinding by some maximum amount, both to curb cheating and to avoid undesirably large corrections in the simulation for other players. (Rough sketch after this list.)
Method 2: have the client run its clock ahead of the server's by half the round-trip time (RTT/2), plus some constant c to absorb jitter, for a total lead of RTT/2 + c. Inputs then arrive at the server on time in most cases, and an input that arrives early can be buffered on the server until its simulation step comes up. (Also sketched below.)
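So you can check whether I'm describing method 1 correctly, here's a rough sketch of the rewind-and-re-simulate idea as I understand it. All the names are hypothetical and the real bookkeeping would obviously be heavier:

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Reduced toy types so the sketch stands alone.
struct InputPacket { uint32_t suggestedTick; float moveX = 0, moveY = 0; };
struct GameState   { float x = 0, y = 0; };

void applyInput(GameState& s, const InputPacket& in) {
    s.x += in.moveX;
    s.y += in.moveY;
}

constexpr uint32_t MAX_REWIND_TICKS = 8;  // rewind cap, to curb cheating

struct TickRecord {
    GameState                snapshot;  // state at the start of this tick
    std::vector<InputPacket> inputs;    // inputs applied during this tick
};

std::deque<TickRecord> history;         // one record per recent tick
uint32_t               oldestTick = 0;  // tick number of history.front()
GameState              current;         // live server state

// Called when an input arrives whose suggestedTick is already in the past.
// Returns false if the input was beyond the rewind window and got dropped.
bool onLateInput(uint32_t currentTick, const InputPacket& late) {
    if (late.suggestedTick + MAX_REWIND_TICKS < currentTick ||
        late.suggestedTick < oldestTick)
        return false;  // too old: drop it (exactly my question 2)

    // Rewind: jump back to the snapshot taken at the input's tick,
    // and add the late input to that tick's input list.
    size_t idx = late.suggestedTick - oldestTick;
    history[idx].inputs.push_back(late);
    current = history[idx].snapshot;

    // Re-simulate forward to the present, re-applying the inputs that
    // were already recorded for each intermediate tick.
    for (size_t i = idx; i < history.size(); ++i) {
        history[i].snapshot = current;
        for (const InputPacket& in : history[i].inputs)
            applyInput(current, in);
        // ...a real game would also step physics, AI, etc. here
    }
    return true;
}
```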
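And here's my mental model of method 2: the client computes its lead from RTT/2 + c, and the server buffers early inputs until their tick comes up. Again, every name is made up:

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <vector>

struct InputPacket { uint32_t suggestedTick; float moveX = 0, moveY = 0; };
struct GameState   { float x = 0, y = 0; };

void applyInput(GameState& s, const InputPacket& in) {
    s.x += in.moveX;
    s.y += in.moveY;
}

// Client side: aim to run ahead of the estimated server tick so inputs
// land just before they're needed.
uint32_t targetClientTick(uint32_t estServerTick, double rttSeconds,
                          double jitterMargin, double tickLength) {
    double leadSeconds = rttSeconds / 2.0 + jitterMargin;  // RTT/2 + c
    return estServerTick +
           static_cast<uint32_t>(std::ceil(leadSeconds / tickLength));
}

// Server side: early inputs wait here, keyed by their execution tick.
std::map<uint32_t, std::vector<InputPacket>> inputBuffer;

void onInputReceived(uint32_t currentTick, const InputPacket& in) {
    if (in.suggestedTick >= currentTick)
        inputBuffer[in.suggestedTick].push_back(in);  // on time or early
    // else: LATE -- this is exactly what my question 2 is about
}

// Run once per fixed simulation step.
void serverTick(uint32_t currentTick, GameState& state) {
    auto it = inputBuffer.find(currentTick);
    if (it != inputBuffer.end()) {
        for (const InputPacket& in : it->second)
            applyInput(state, in);  // consume the buffered inputs
        inputBuffer.erase(it);
    }
    // ...then advance physics, AI, etc. for this tick
}
```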
As far as I can tell, method 1 has the benefit of minimizing input buffering for players with good pings, but it's a heavy-handed solution and might cause frequent visible corrections in a given client's simulation. Method 2 seems like a nice, simple solution and would keep the simulation smooth, but it introduces a sizable delay before a given player's input is seen by other players.
I suppose I have three main questions on this topic, and I would be a very happy boy if I could get some help with them:
1. Is method 1 a widely used approach? It seems to introduce a massive amount of work, since both the server and the clients have to correct their simulation state, all while potentially producing jittery movement.
2. I get that method 2 can buffer early input packets, but how does it handle late-arriving ones? I understand the server should acknowledge a late input in some way so the client knows to adjust its simulation time, but does it drop the late input, or execute it at the CURRENT step instead? And what if two or more inputs arrive late in a row? (My current guess is sketched in code after these questions.)
3. Am I missing any other solutions? Are these two the widely used methods for handling state and input on the server? I can also imagine combining methods 1 and 2; is that advisable?
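For question 2 specifically, here's my current guess in code form, so you can point at exactly where I'm wrong. All of this is speculation on my part, not something I've seen documented anywhere:

```cpp
#include <cstdint>

// Pure speculation: on a late input under method 2, the server drops it
// and notifies the client, which then pushes its clock further ahead.
struct LateInputNotice {
    uint32_t droppedTick;  // the tick the input was stamped for
    uint32_t serverTick;   // where the server actually was on arrival
};

// Hypothetical handler; sendToClient() stands in for whatever the
// transport layer provides.
void handleLateInput(uint32_t currentTick, uint32_t suggestedTick) {
    // Option A (sketched here): drop the input and tell the client how
    // far behind it was, so it can grow its lead by at least that many
    // ticks (ideally smoothed, so the adjustment isn't a visible snap).
    LateInputNotice notice{suggestedTick, currentTick};
    (void)notice;  // sendToClient(notice);

    // Option B (my alternative guess): execute the input on currentTick
    // anyway, accepting a small mismatch with the client's prediction.
    // With two or more late inputs in a row, option B seems like it
    // would smear them across the wrong ticks, which is why I lean A.
}
```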
Thanks in advance for your time! I feel very stuck on this and will send all the karma your way for any help. In the meantime, I'll keep reading forum posts to see if this has come up before.
*edit* wording.