
Input simulation timing problem

Started by nullqubit September 29, 2023 09:17 PM
4 comments, last by hplus0603 1 year, 2 months ago

Hi,

I implemented a networking library with features like interpolation delay, client reconciliation, client prediction, and lag compensation. However, I have noticed frequent mispredictions on the client. I have found what is causing them, but I'm not sure what the best way to fix it is. Here's the gist of how it works:

Client:

Every tick:

1) Adds the sampled input to the input buffer (the maximum input buffer size is based on the configured max prediction time). For simplicity, assume an input contains data like "MoveLeft = true".

2) Rolls everything back to the latest received state.

3) Removes old inputs from the input buffer (based on the last acknowledged input, which is contained in the latest received state).

4) Simulates the inputs in the input buffer. After each simulated input, it also updates physics or other state that is based on time (1 simulation = 1 tick interval of time).

5) Sends the inputs to the server (including older ones that were not yet acknowledged, in case packets were dropped).
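
In rough C++, that per-tick client loop might look something like the sketch below (a minimal illustration only; the types and hooks such as sampleInput, applyInput, stepPhysics, and sendToServer are placeholders, not the actual library):

#include <cstdint>
#include <deque>
#include <vector>

// Placeholder types; a real implementation would use the engine's own.
struct Input      { uint32_t tick; bool moveLeft; };
struct WorldState { uint32_t lastAckedInputTick; /* positions, velocities, ... */ };

// Hypothetical engine hooks, stubbed out so the sketch is self-contained.
static Input sampleInput(uint32_t tick)              { return { tick, false }; }
static void  applyInput(WorldState&, const Input&)   { /* move the player, etc. */ }
static void  stepPhysics(WorldState&, double)        { /* integrate physics for dt */ }
static void  sendToServer(const std::vector<Input>&) { /* hand off to the transport */ }

static std::deque<Input> g_inputBuffer;              // bounded by the max prediction time
static const double      kTickInterval = 1.0 / 60.0; // assuming a 60 Hz tick rate

void clientTick(uint32_t localTick, const WorldState& latestServerState)
{
    // 1) Sample and buffer this tick's input.
    g_inputBuffer.push_back(sampleInput(localTick));

    // 2) Roll the predicted world back to the latest authoritative state.
    WorldState world = latestServerState;

    // 3) Drop inputs the server has already acknowledged.
    while (!g_inputBuffer.empty() &&
           g_inputBuffer.front().tick <= latestServerState.lastAckedInputTick)
        g_inputBuffer.pop_front();

    // 4) Re-simulate every remaining input, stepping physics once per input.
    for (const Input& in : g_inputBuffer) {
        applyInput(world, in);
        stepPhysics(world, kTickInterval);
    }

    // 5) Send all unacknowledged inputs, redundantly, to survive packet loss.
    sendToServer(std::vector<Input>(g_inputBuffer.begin(), g_inputBuffer.end()));
}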

Server:

Every tick:

1) Enqueues received inputs into each client's input buffer; duplicates are dropped.

2) Dequeues one input from each client's buffer and simulates it.

3) Updates physics or other state that is based on time (for 1 tick interval of time).

One thing to note here is that if a client tries to cheat by sending inputs faster, the server will still only execute 1 input per client per tick.
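
For comparison, here is a similarly hedged sketch of that server tick (again with placeholder names; drainNetworkInputs, applyInput, and stepPhysics stand in for whatever the real code does):

#include <cstdint>
#include <deque>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

struct Input { uint32_t tick; bool moveLeft; };

struct ClientSlot {
    std::deque<Input>            queue;     // pending inputs for this client
    std::unordered_set<uint32_t> seenTicks; // used to drop duplicate resends
};

// Hypothetical hooks, stubbed out so the sketch stands alone.
static std::vector<std::pair<int, Input>> drainNetworkInputs() { return {}; }
static void applyInput(int /*clientId*/, const Input&)         {}
static void stepPhysics(double /*dt*/)                         {}

static std::unordered_map<int, ClientSlot> g_clients;           // keyed by client id
static const double kTickInterval = 1.0 / 60.0;

void serverTick()
{
    // 1) Enqueue newly received inputs into each client's buffer, dropping duplicates.
    for (const auto& [clientId, input] : drainNetworkInputs()) {
        ClientSlot& slot = g_clients[clientId];
        if (slot.seenTicks.insert(input.tick).second)
            slot.queue.push_back(input);
    }

    // 2) Dequeue and simulate at most one input per client, no matter
    //    how many are waiting (this is the anti-speed-hack rule).
    for (auto& [clientId, slot] : g_clients) {
        if (!slot.queue.empty()) {
            applyInput(clientId, slot.queue.front());
            slot.queue.pop_front();
        }
    }

    // 3) Advance time-based state (physics) by one tick interval.
    stepPhysics(kTickInterval);
}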

Here is when the problem occurs (Assume both client and server are at local tick #1)

Tick 1: Client sends input A. Let's say the server receives it on time and simulates it on its tick 1 – great, both states are in sync.

Tick 2: Client sends input B. This time, however, it is delayed and the server does not have it on its tick 2, so the server won't simulate any input for this client on tick 2.

Tick 3: Client sends input C. The server receives it on tick 3, the same tick on which the delayed input B arrives. The server now has 2 inputs queued for this client on the same tick (#3); it dequeues input B and executes it.

Tick 4: Client does nothing, but the server dequeues and executes input C here.

From the perspective of the server, this happens:

Tick 1: Input A is dequeued and simulated, then physics is simulated.

Tick 2: No input received, so it just simulates physics.

Tick 3: Inputs B and C are received; input B is dequeued and simulated, then physics is simulated.

Tick 4: No new input is received, but input C is dequeued and simulated, then physics is simulated.

The above causes a misprediction on the client, because the inputs are executed at different times relative to the physics simulation: the client applied input B on its tick 2 and input C on its tick 3, while the server applied them on its ticks 3 and 4. In this example I am using the physics simulation as the thing that causes the misprediction, but it could be any input whose simulation depends (directly or indirectly) on time.

Eventually the state is synced again thanks to reconciliation on the client, but the correction causes frequent, visible stutter on the client, even when both client and server run on the same machine with zero network latency. Surely there must be something I am doing wrong, because under real network conditions the issue will be even more noticeable.

The only thing I can think of is implementing a de-jitter buffer on the server for incoming inputs, to make sure one input is available per tick, but I have not seen any reference to something like this in other popular games.

I'm wondering if there is some other way to fix this problem, or if I implemented the networked input system the wrong way. I would appreciate any insights!

Thanks!

For some clarification:

What is your physics tick rate?

What is your simulation update rate? (They are often the same as physics but sometimes different. Physics may be slower in compute-heavy games, or faster on systems where physics substepping or more advanced features are used.)

What is your graphics frame rate?

What are you doing to correct for mispredictions? Popping objects back to authoritative positions? Morphing the motion over time? Something else?


Typically, you will run the client “ahead” of the server for the local player, and “behind” the server for remote clients.

If you want everybody to be in the same time frame, then you run client and remote entities “ahead” of the server.

You will need to tag each input event with the frame number it's intended for, and you need to delay the server simulation (or re-simulate on the server) for when late inputs arrive. To avoid cheating and excessive lag from misbehaving clients, you can enforce a maximum allowed lag/jitter and throw away inputs older than that. And if a client sends too many of those, drop the client – they won't be having a good experience anyway.
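
Read as code, that last suggestion could be sketched like this (illustrative only; the constants, names, and kick policy are assumptions, not anything from the posts above):

#include <cstdint>
#include <deque>

struct Input { uint32_t tick; bool moveLeft; };   // tick = frame the input is intended for

// Illustrative limits, not tuned values.
static const uint32_t kMaxInputLagTicks        = 8;    // maximum allowed lag/jitter, in ticks
static const int      kMaxLateInputsBeforeKick = 200;  // beyond this, drop the client

static void kickClient(int /*clientId*/) { /* disconnect the misbehaving client */ }

// Returns true if the input was buffered, false if it was thrown away.
bool acceptInput(int clientId, uint32_t serverTick, const Input& input,
                 std::deque<Input>& clientQueue, int& lateInputCount)
{
    if (input.tick + kMaxInputLagTicks < serverTick) {
        // Older than the allowed window: discard it and count the offence.
        if (++lateInputCount > kMaxLateInputsBeforeKick)
            kickClient(clientId);
        return false;
    }
    clientQueue.push_back(input);   // held until serverTick catches up to input.tick
    return true;
}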

enum Bool { True, False, FileNotFound };

frob said:

For some clarification:

What is your physics tick rate?

What is your simulation update rate? (They are often the same as physics but sometimes different. Physics may be slower in compute-heavy games, or faster on systems where physics substepping or more advanced features are used.)

What is your graphics frame rate?

What are you doing to correct for mispredictions? Popping objects back to authoritative positions? Morphing the motion over time? Something else?

I simulate physics once per tick, right after the commands are applied, for 1/tickRate worth of time. My tick rate in these tests is 60. The simulation rate is the same; I do everything per tick. The graphics frame rate is 144. To correct mispredictions, I snap objects back to the authoritative positions and replay all commands that have not yet been acknowledged by the server.

hplus0603 said:

Typically, you will run the client “ahead” of the server for the local player, and “behind” the server for remote clients.

If you want everybody to be in the same time frame, then you run client and remote entities “ahead” of the server.

You will need to tag each input event with the frame number it's intended for, and you need to delay the server simulation (or re-simulate on the server) for when late inputs arrive. To avoid cheating and excessive lag from misbehaving clients, you can enforce a maximum allowed lag/jitter and throw away inputs older than that. And if a client sends too many of those, drop the client – they won't be having a good experience anyway.

This is what I'm doing, except for the “need to delay the server simulation (or re-simulate on the server) for when late inputs arrive” part. I don't want to re-simulate on the server, to avoid cheaters. I watched the Overwatch networking GDC talk and realized that they keep a de-jitter buffer of inputs on the server for each client (I think this is also what you mean by “delay the server simulation”). I believe this is what I am missing. They mention that the client adaptively increases/decreases the rate of inputs and the size of the input buffer on the server. However, what's unclear is how the client can send more inputs without increasing the tick rate. Does that mean that the sampling rate of inputs gets lower/higher? Also, in my current implementation each input sent by the client is timestamped with its local tick id (so that the server can ACK by client tick id). Does that mean that a separate ticking clock needs to be implemented for inputs?
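
To make the de-jitter buffer idea concrete, it could look roughly like this (purely illustrative, not Overwatch's actual implementation; the targetDepth value and the priming logic are assumptions):

#include <cstddef>
#include <cstdint>
#include <deque>

struct Input { uint32_t tick; bool moveLeft; };

// Illustrative per-client de-jitter buffer: hold inputs until a small backlog
// exists, then feed the simulation exactly one input per server tick.
struct DejitterBuffer {
    std::deque<Input> pending;     // inputs in tick order
    size_t targetDepth = 2;        // ticks of slack; could be adapted under jitter
    bool   primed      = false;    // becomes true once the backlog has filled up

    void push(const Input& in) { pending.push_back(in); }

    // Called once per server tick; returns true and fills 'out'
    // if an input should be simulated this tick.
    bool pop(Input& out)
    {
        if (!primed) {
            if (pending.size() < targetDepth)
                return false;      // still filling: skip this client this tick
            primed = true;
        }
        if (pending.empty()) {     // starved by jitter/loss: go back to filling
            primed = false;
            return false;
        }
        out = pending.front();
        pending.pop_front();
        return true;
    }
};

The trade-off is added latency of roughly targetDepth ticks, which is presumably what the adaptive buffer sizing in the Overwatch talk is tuning.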

Thanks for your help!

nullqubit said:
Does that mean that a separate ticking clock needs to be implemented for inputs?

The client and the server need to agree on the tick-to-current-time relationship. This is so that commands sent from the client arrive at the server “just before” they are needed, and so that the tick numbers for client and server are the same.

E.g., the client should mark each input as “this is for global game simulation tick 123”, and it should ideally arrive at the server while it's simulating tick 122 – though arriving a little earlier is fine too, because of the de-jitter buffer.
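
One hypothetical way a client could pick that target tick (assuming it periodically learns the server's tick from snapshots and measures round-trip time; the 2-tick safety margin is just an example):

#include <cstdint>

// Illustrative only: estimate which global simulation tick an input sampled
// right now should be tagged with, so that it reaches the server "just before"
// the server simulates that tick.
uint32_t targetServerTick(uint32_t lastKnownServerTick,      // from the latest snapshot
                          double   secondsSinceThatSnapshot, // measured locally
                          double   roundTripSeconds,         // measured RTT
                          double   tickRate    = 60.0,
                          uint32_t safetyTicks = 2)          // slack for the de-jitter buffer
{
    // Roughly where the server's clock is right now...
    double serverTickNow = lastKnownServerTick + secondsSinceThatSnapshot * tickRate;
    // ...plus the one-way travel time, plus a couple of ticks of margin.
    double oneWayTicks   = (roundTripSeconds * 0.5) * tickRate;
    return static_cast<uint32_t>(serverTickNow + oneWayTicks) + safetyTicks;
}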

Adapting the network send rate presumably means that a packet isn't sent on the network every simulation tick, but rather less frequently. If you use RLE compression of inputs, then a packet that contains the input events for 4 separate ticks is only slightly bigger than one that contains a single tick, and sending only a quarter as many packets is an improvement in network usage. You would then unpack this packet into the pending-inputs queue – e.g., at server tick 122, the server could receive a packet that contains the inputs for ticks 124, 125, 126, and 127 (assuming you send one network packet per 4 simulation ticks).
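
A rough shape for that batching and unpacking might be (hypothetical names; the actual RLE/delta encoding of the payload is left out):

#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

struct Input { uint32_t tick; bool moveLeft; };

// One network packet carrying the inputs for several consecutive simulation
// ticks; on the wire these would additionally be RLE/delta compressed.
struct InputPacket {
    std::vector<Input> inputs;    // tick-tagged inputs, oldest first
};

// Sender side: pack the last 'batchSize' unacknowledged ticks into one packet.
InputPacket packInputs(const std::deque<Input>& unacked, size_t batchSize = 4)
{
    InputPacket pkt;
    size_t start = unacked.size() > batchSize ? unacked.size() - batchSize : 0;
    for (size_t i = start; i < unacked.size(); ++i)
        pkt.inputs.push_back(unacked[i]);
    return pkt;
}

// Receiver side: unpack into the pending-inputs queue, skipping ticks already
// seen in earlier (overlapping) packets.
void unpackInputs(const InputPacket& pkt, std::deque<Input>& pendingQueue,
                  uint32_t& highestSeenTick)
{
    for (const Input& in : pkt.inputs) {
        if (in.tick > highestSeenTick) {
            pendingQueue.push_back(in);
            highestSeenTick = in.tick;
        }
    }
}

Because consecutive packets overlap, this keeps the same redundancy against packet loss as re-sending unacknowledged inputs every tick, just with fewer packets.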

Similarly, for server→client, the server sends to clients “for tick X, the other client Y had inputs Z”, and those messages could also be RLE compressed to batch many ticks into a single network packet. The end result there is that the “perceived latency” increases, because you buffer more simulation ticks per network packet.

enum Bool { True, False, FileNotFound };

