
Unreliable messages and the server update loop

Started March 20, 2017, 08:13 AM
4 comments, last by brkho 7 years, 8 months ago

As a bit of background, I have a client sending input updates to the server 20 times a second. Similarly, the server's update loop also runs at 20 times a second where it processes all client messages in the message queue, updates game state, and sends game state updates to the clients.

However, despite my client and server loops running at the same rate, my server sometimes doesn't receive a single input message from the client during the 50ms window between server updates. I believe this stems from my client's update timer not being perfectly precise, combined with the non-deterministic delays inherent in sending a UDP packet over a network.

The problem I'm running into here is when I try to reconcile client-side prediction with server updates. Let's say my server just acknowledged input message M from the client with state S. The client runs the game simulation with the buffered messages M+1, M+2, and M+3 to get the predicted position. This is fine, but let's say the server does not receive message M+1 before the next server update loop starts. In that update loop, the server will have no new client messages to acknowledge, and will send out an updated game state S+1 to the client while still acknowledging input message M. When the client tries to do client-side prediction, it will start from state S+1, but now it will simulate with inputs M+1, M+2, M+3, and M+4 which effectively results in a prediction in the "future" because we advanced S without increasing M.
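To make this concrete, here's a rough sketch of the reconciliation step I'm describing (C++, and the names like InputMessage, applyInput, and reconcile are just made up for illustration): the client rewinds to the last server-acknowledged state and replays every buffered input newer than the acknowledged sequence number. The mismatch shows up when the server advances its state a tick without also advancing the acknowledged sequence.

```cpp
#include <cstdint>
#include <deque>

// Illustrative types; names are made up for this sketch.
struct InputMessage { std::uint32_t sequence; float moveX, moveY; };
struct PlayerState  { float x = 0.0f, y = 0.0f; };

// Advance one simulation tick (50 ms) with a single input.
PlayerState applyInput(PlayerState s, const InputMessage& in, float dt = 0.05f) {
    s.x += in.moveX * dt;
    s.y += in.moveY * dt;
    return s;
}

// Reconciliation: start from the last server-acknowledged state and replay
// every buffered input the server has not acknowledged yet.
PlayerState reconcile(PlayerState serverState,
                      std::uint32_t lastAckedSequence,
                      std::deque<InputMessage>& pendingInputs) {
    // Drop inputs the server has already consumed.
    while (!pendingInputs.empty() &&
           pendingInputs.front().sequence <= lastAckedSequence) {
        pendingInputs.pop_front();
    }
    // Replay the rest on top of the authoritative state. If the server
    // advanced to S+1 without consuming input M+1, this replays one tick
    // too many relative to the server's simulation time.
    PlayerState predicted = serverState;
    for (const InputMessage& in : pendingInputs) {
        predicted = applyInput(predicted, in);
    }
    return predicted;
}
```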

None of the articles I read online discussed this problem, so I feel like I'm missing something fundamental here. Could anyone shed some light on my issue?

You can't assume you're going to get a message in every 50ms window if you're only sending every 50ms. You might receive one just before the window and the next one just after, for example.

There are many different ways to approach this, and the best one depends on your exact game. But given that you seem to be running a fast-paced game, the usual approach I encounter is that the clients run 'old' data from the server, where all the entities are seen at a slightly old state, apart from the locally-controlled entity, which is seen at the 'now' state. These systems try to avoid extrapolating data for entities. If you can't avoid extrapolation, you might just want to blend back towards the received values as soon as you receive them.
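A rough sketch of the "render remote entities slightly in the past" idea (the names and the 100 ms delay are just placeholders, not something your game has to use): keep a short history of received snapshots per entity and interpolate between the two that bracket now minus the interpolation delay, clamping instead of extrapolating when the data isn't there yet.

```cpp
#include <cstddef>
#include <deque>

// Illustrative snapshot of a remote entity at a server timestamp (seconds).
struct Snapshot { double time; float x, y; };

// Sample the entity's position at renderTime = now - interpolationDelay,
// e.g. 0.1 s in the past. Clamps to the newest snapshot rather than
// extrapolating when no bracketing pair exists yet.
bool sampleRemoteEntity(const std::deque<Snapshot>& history,
                        double now, double interpolationDelay,
                        float& outX, float& outY) {
    if (history.empty()) return false;
    const double renderTime = now - interpolationDelay;

    for (std::size_t i = 1; i < history.size(); ++i) {
        const Snapshot& a = history[i - 1];
        const Snapshot& b = history[i];
        const double span = b.time - a.time;
        if (span <= 0.0) continue;
        if (a.time <= renderTime && renderTime <= b.time) {
            const float t = static_cast<float>((renderTime - a.time) / span);
            outX = a.x + (b.x - a.x) * t;
            outY = a.y + (b.y - a.y) * t;
            return true;
        }
    }
    // No bracketing pair: clamp to the newest data instead of extrapolating.
    outX = history.back().x;
    outY = history.back().y;
    return true;
}
```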


This is why games typically have a de-jitter buffer. You could have more than one command packet waiting to be processed, and that's OK! Just process the one that matches the server tick you are processing.

Typically, the server and client both time-stamp the commands for a particular simulation tick number. When the server receives a packet that is time-stamped for some simulation tick in the past, it includes status information telling the client that it must send packets sooner (meaning, time-stamp further ahead of time). When the server receives a packet that is stamped many ticks into the future (say, 3 or more), it includes information back to the client saying it should time-stamp later (meaning, time-stamp further back in time).

And, yes, this means that the client must send commands for the "future," and will hear about results in the "past." This is a necessary arrangement because of the latency of sending information.
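Roughly, the server side can look something like this (a sketch only; the class and function names are made up, and the only number taken from the description above is the "3 or more ticks ahead" threshold):

```cpp
#include <cstdint>
#include <iterator>
#include <map>

// Illustrative command stamped with the simulation tick it is intended for.
struct Command { std::uint32_t tick; float moveX, moveY; };

// Feedback the server piggybacks on its state packets.
enum class ClockHint { Ok, SendSooner, SendLater };

class DejitterBuffer {
public:
    // Called when a command packet arrives.
    ClockHint receive(const Command& cmd, std::uint32_t currentServerTick) {
        buffer_[cmd.tick] = cmd;
        if (cmd.tick < currentServerTick)
            return ClockHint::SendSooner;      // stamped for a tick already simulated
        if (cmd.tick >= currentServerTick + 3)
            return ClockHint::SendLater;       // arriving needlessly far ahead
        return ClockHint::Ok;
    }

    // Called once per server tick; true if a command for this tick is buffered.
    bool consume(std::uint32_t currentServerTick, Command& out) {
        auto it = buffer_.find(currentServerTick);
        if (it == buffer_.end()) return false; // gap: e.g. repeat the last input
        out = it->second;
        // Discard this tick's command and anything older.
        buffer_.erase(buffer_.begin(), std::next(it));
        return true;
    }

private:
    std::map<std::uint32_t, Command> buffer_; // keyed by target simulation tick
};
```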

enum Bool { True, False, FileNotFound };

Apologies for the late response on this; I had a busy week at work and didn't have time to work on my game until recently. I now synchronize the client and server clocks using the method described by hplus0603 and keep a jitter buffer on the server. Everything works wonderfully now, so thanks a bunch!

One thing I am curious about is the best way to prevent the client's clock from bouncing around the "true" server clock if the latency is sufficiently high. This happens because the client won't get the response from the server to adjust its clock until a full round trip has passed. Thus, if the RTT is high, the client may receive many messages to bump up its clock and actually overshoot the true server time before the server recognizes this and sends a message to bump down (which starts the same periodic process over again).

I currently combat this by having a running estimate of the client-server RTT on the client. I then use this to restrict the client's clock adjustment to a single time per RTT interval. This works, but it delays the clock synchronization process by quite a bit. I don't really see a way around this, but maybe there's something I'm missing?
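In code, the gating I'm describing looks roughly like this (the names and the smoothing factors are just illustrative):

```cpp
// Gate clock corrections to at most one per estimated round trip.
// Names and the 0.9/0.1 smoothing factors are illustrative.
struct ClockSync {
    double rttEstimate = 0.1;       // seconds, exponentially smoothed
    double lastAdjustTime = -1.0e9; // local time of the last applied correction
    double clockOffset = 0.0;       // added to the local clock to get game time

    void onRttSample(double rttSample) {
        rttEstimate = 0.9 * rttEstimate + 0.1 * rttSample;
    }

    // Apply a correction only if a full RTT has passed since the last one, so
    // corrections the server issued before it could see our previous adjustment
    // don't make us overshoot.
    void onClockCorrection(double localNow, double suggestedDelta) {
        if (localNow - lastAdjustTime < rttEstimate) return;
        clockOffset += suggestedDelta;
        lastAdjustTime = localNow;
    }
};
```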

prevent the client's clock from bouncing around the "true" server clock if the latency is sufficiently high

If you are aggressive in adjusting the delta to "slower" and very cautious in adjusting the delta to "faster," you'll do fine.

For example, whenever you find that you are ahead (say, you get a server packet saying you should be at 18 ms, but you're at 21 ms), adjust by the delta plus some additional padding. With the padding set to 30 ms, you'd adjust backwards by (21 - 18 + 30) = 33 ms, taking you from 21 to -12 in this case.
If you find that you're behind (you get a packet saying you should be at 21 when you're at 18), don't worry about it if the delta is within the adjustment threshold (30 ms); if it's outside the threshold, adjust only by the part of the delta that exceeds the threshold.
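In code, that asymmetric rule could look something like this (all values in milliseconds; the function and variable names are just for illustration):

```cpp
// Asymmetric clock adjustment, per the example above (all values in ms).
// If we are ahead of where the server wants us, snap back by the full error
// plus a padding margin; if we are behind, only correct the part of the error
// that exceeds the padding. Names and units are illustrative.
const double kPaddingMs = 30.0;

// Returns the signed adjustment to apply to the client's tick clock.
double clockAdjustment(double serverSaysMs, double clientIsAtMs) {
    const double error = clientIsAtMs - serverSaysMs;  // > 0 means we're ahead
    if (error > 0.0) {
        // Ahead (server says 18, we're at 21): adjust back by error + padding,
        // i.e. 21 -> 21 - (3 + 30) = -12 in the worked example.
        return -(error + kPaddingMs);
    }
    // Behind (server says 21, we're at 18): ignore if within the padding,
    // otherwise make up only the excess beyond it.
    const double behind = -error;
    return (behind <= kPaddingMs) ? 0.0 : (behind - kPaddingMs);
}
```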

having a running estimate of the client-server RTT on the client

This ends up being approximately the same thing as the adjusted server clock offset, when you squint :-)
Also, don't bump the clock each time the server says so; instead, have the server send "this is what the clock should be" and adjust by the appropriate amount on the client.
That way, once you've adjusted, if you get another adjustment, you compare your adjusted time to what the server says it should be, and notice you've already caught up.
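A sketch of that, with made-up names: the server sends the absolute value it thinks your clock should read, and you correct by whatever error remains against your already-adjusted clock, so a correction you've already absorbed does (almost) nothing.

```cpp
// Absolute-target clock correction. The server sends "your clock should read
// T right now"; the client corrects by the remaining error against its
// already-adjusted clock. Names are made up for this sketch.
struct ClientClock {
    double offset = 0.0;  // added to the raw local clock

    double now(double rawLocalTime) const { return rawLocalTime + offset; }

    void onServerTarget(double rawLocalTime, double serverSaysClockShouldBe) {
        const double error = serverSaysClockShouldBe - now(rawLocalTime);
        // If we've already caught up from a previous correction, error is ~0
        // and nothing further happens.
        offset += error;
    }
};
```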
enum Bool { True, False, FileNotFound };

Works great! Thanks again. :)

This topic is closed to new replies.
