
Networking simulation time questions

Started by pondwater, July 18, 2022 06:12 PM
4 comments, last by Clonkex 2 years, 2 months ago

Hello. I’ve been working on a networked hobby game for a while. Everything seems to be working well enough, but I have some questions about common suggestions and conventions I’ve come across.

For context, my game is a relatively fast-paced PvE ARPG.

Question 1) Using integer frames or tick counts instead of milliseconds / microseconds.

In my game I clock sync and then exclusively use milliseconds for all my time units. Most resources I’ve read online use integer tick counts, and I don’t understand why. It seems that even if you use an integer tick count, you still need to translate it into smaller units to account for client frame rates anyway. Ex. if your server runs the simulation at 20 ticks per second but the client is rendering at 60hz (or even higher these days), the simulation delta needs to be broken down into smaller units. Or do we only actually process the client sim at 20hz, and use the actual framerate for animation / rendering code only? Ex. if the fixed step interval is 20hz, we update the sim every 3 frames that we render.
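For concreteness, the fixed-step pattern I’m asking about would look roughly like this. A minimal sketch; SimulateFixedStep and Render are illustrative stand-ins, not real engine calls:

```cpp
#include <chrono>
#include <cstdio>

// Illustrative stand-ins for real sim/render state.
static double posPrev = 0.0, posCurr = 0.0;

void SimulateFixedStep(double dt) {   // advance the sim exactly one fixed tick
    posPrev = posCurr;
    posCurr += 1.0 * dt;              // e.g. move at 1 unit per second
}

void Render(double alpha) {           // blend prev->curr so high-Hz output stays smooth
    double drawn = posPrev + (posCurr - posPrev) * alpha;
    std::printf("drawing at %.3f\n", drawn);
}

int main() {
    using Clock = std::chrono::steady_clock;
    const double kDt = 1.0 / 20.0;    // fixed 20hz sim step (50 ms)
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 1000; ++frame) {   // the "render loop"
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Step the sim zero or more whole ticks. On a 144hz display most
        // frames step zero times; the sim itself only ever advances at 20hz.
        while (accumulator >= kDt) {
            SimulateFixedStep(kDt);
            accumulator -= kDt;
        }

        // Interpolate between the last two sim states for rendering.
        Render(accumulator / kDt);
    }
}
```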

Question 2) Running the client sim RTT/2 ahead of the server.

I understand that with prediction the local player is always RTT/2 “ahead”, but I don’t understand the advantage of actually running the client game clock / tick ahead or post-dating the messages. Is this simply for input queueing on the server? Ex. we queue an input on the client and send it so it arrives just in time on the server? Is there any other reason? In my game the server processes any received client inputs immediately (they are sequenced and sent redundantly, so they always arrive in order). I don’t understand why you would ever want to wait or delay processing, or reject inputs based on some timestamp. Also, wouldn’t running the clock ahead make updates coming from the server extra behind? Ex. if the client is running RTT/2 ahead of the server, a server update will have all its timestamps a full RTT behind once it reaches the client, rather than RTT/2.
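My best guess at what such server-side input queueing would look like, as a sketch (all names are hypothetical):

```cpp
#include <cstdint>
#include <map>

struct Input { float moveX, moveY; bool attack; };

// De-jitter buffer: the server holds early inputs and applies each one on the
// tick it was stamped for, so inputs land at a steady one-per-tick cadence
// even when packet arrival timing is noisy.
class InputQueue {
public:
    void OnPacket(uint32_t targetTick, const Input& in, uint32_t currentTick) {
        if (targetTick < currentTick) return;       // too late to apply on time
        pending_[targetTick] = in;                  // redundant resends just overwrite
    }

    // Called once per server tick; fetches the input stamped for this tick.
    bool PopForTick(uint32_t tick, Input* out) {
        auto it = pending_.find(tick);
        if (it == pending_.end()) return false;     // missing: e.g. reuse last input
        *out = it->second;
        pending_.erase(pending_.begin(), ++it);     // drop this tick and anything older
        return true;
    }

private:
    std::map<uint32_t, Input> pending_;
};
```

With a buffer like this, the client would stamp each input with its estimated server tick plus a small margin, and the server applies exactly one input per tick instead of however many happen to arrive.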

This leads into my final question:

Question 3) When applying server updates, do we play them out as is, or adjust time values for the estimated RTT/2?

For example, if an enemy is starting some attack animation, by the time the client receives that update, it is already RTT/2 behind. Should we A) start that attack animation at t=0 and play the full thing at normal speed, knowing it’s slightly behind, B) snap the t value RTT/2 ahead so that it is immediately in sync, effectively cutting off the initial portion of the animation, or C) do some sort of t-value error tracking / reduction and interpolate it (similar to the physics error reduction that Glenn Fiedler talks about in his tutorials), which would effectively speed up the initial portion of the animation?

I feel like C is probably the answer, but something about always fast-forwarding the initial frames of every animation doesn’t seem right.
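To make option C concrete, this is roughly what I mean by t-value error reduction (a sketch; the 10% blend factor is an arbitrary number I’d tune):

```cpp
// Sketch of option C: keep a local animation clock, track the offset to the
// server's authoritative t, and bleed the error off over several frames
// instead of snapping. All names are illustrative.
struct AnimSync {
    double localT = 0.0;   // the t we actually display
    double errorT = 0.0;   // how far behind/ahead we are vs. the server estimate

    void OnServerUpdate(double serverT, double halfRtt) {
        double target = serverT + halfRtt;   // estimate of where the server is *now*
        errorT = target - localT;
    }

    void Tick(double dt) {
        // Consume ~10% of the remaining error each frame: the animation plays
        // slightly fast until it catches up, rather than skipping ahead.
        double correction = errorT * 0.1;
        errorT -= correction;
        localT += dt + correction;
    }
};
```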

If you aren't trying for deterministic simulation, then there isn't a big difference between using millisecond values and using game tick numbers (other than that milliseconds will wrap/overflow a 32-bit integer sooner than game ticks.)

If you do network recording/replay, or if you do deterministic simulation, it's helpful to know exactly which simulation tick each input/action/event/packet is from, rather than having to guess based on millisecond timestamp values. This ends up mattering more when milliseconds start rounding/truncating one way or the other. It can be made to work either way, but the simulation tick counter is more straightforward.
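To illustrate the rounding issue (the numbers here are made up):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // At a 30hz sim, one tick is 33.333... ms, so millisecond timestamps
    // never land exactly on tick boundaries and have to round somewhere.
    const double tickMs = 1000.0 / 30.0;
    uint32_t stampMs = 433;                          // some input's timestamp
    std::printf("tick %.4f\n", stampMs / tickMs);    // 12.99: tick 12 or 13?

    // A tick counter sidesteps the question: the message just says "tick 13".
    struct InputMsg { uint32_t tick; uint8_t buttons; };
    InputMsg msg{13u, 0x01};
    (void)msg;
    return 0;
}
```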

Running the client ahead of the server is entirely a question of perspective. You can:

- sync your clock to estimated server time and add RTT/2 to the intended timestamps for messages you send to the server, or
- sync your clock to server + RTT/2 and add nothing, or
- sync your clock to received-server time (so, server - RTT/2) and add the full RTT estimate to sent messages.

Which you pick is really up to what works out best in your case. Typically, you'll actually want to either run the clock at estimated server + RTT/2, so that messages are sent as-is and you can easily forward-extrapolate received entities to the current clock, OR run the clock at server - RTT/2, so that you will receive entity updates at the intended time. The latter is more natural if you display entities lagged, the former is more natural if you display entities forward-extrapolated, and the middle ground is, well, a middle ground, with the drawback that you need to adjust both kinds of entities (the player and remote entities.)
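In code, the three conventions differ only in which offset you keep after a ping exchange. A sketch, assuming you measure rtt yourself and the server echoes serverTime in its reply:

```cpp
// Sketch of the three clock conventions; keep exactly one of the offset lines.
struct ClockSync {
    double offset = 0.0;   // added to the local clock to get "our" game time

    void OnPong(double localNow, double serverTime, double rtt) {
        double estServerNow = serverTime + rtt * 0.5;       // reply spent ~RTT/2 in flight
        offset = estServerNow - localNow;                   // clock == server time
        // offset = (estServerNow + rtt * 0.5) - localNow;  // clock == server + RTT/2
        // offset = (estServerNow - rtt * 0.5) - localNow;  // clock == server - RTT/2
    }

    double Now(double localNow) const { return localNow + offset; }
};
```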

And this goes to the third question: you can choose an arbitrary time that you use to display remote entities on the local client. You can display them in-the-past at a known-good location, or you can forward-extrapolate some amount. You can display them extra far back to make sure you always get an update before you display them. You can forward extrapolate, but only a little bit, so you don't get too much warping when you predict wrong. It's entirely up to what feels and works best in your game. The exception is deterministic/lock-step simulation; there, the display times and time steps are all very rigidly bound.
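As one concrete instance of “display them in-the-past at a known-good location”, snapshot interpolation might look like this (a sketch; names are illustrative):

```cpp
#include <cstddef>
#include <deque>

struct Snapshot { double t; float x, y; };

// Buffer server snapshots and sample them at renderTime = now - interpDelay,
// so there is (almost) always a known-good pair to interpolate between.
struct RemoteEntity {
    std::deque<Snapshot> history;

    void OnSnapshot(const Snapshot& s) { history.push_back(s); }

    bool Sample(double renderTime, float* x, float* y) const {
        for (std::size_t i = 0; i + 1 < history.size(); ++i) {
            const Snapshot& a = history[i];
            const Snapshot& b = history[i + 1];
            if (a.t <= renderTime && renderTime <= b.t) {
                double u = (renderTime - a.t) / (b.t - a.t);
                *x = static_cast<float>(a.x + (b.x - a.x) * u);
                *y = static_cast<float>(a.y + (b.y - a.y) * u);
                return true;
            }
        }
        return false;   // no bracketing pair yet: hold the last state, or extrapolate
    }
};
```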

enum Bool { True, False, FileNotFound };

Thanks for the response. Sorry, I edited some of my questions right as you responded.


1) Makes sense about the integer frame counts. In cases where the tick rate is slower than the client's refresh rate, for example a 60hz tick rate and a 144hz client refresh rate, would I end up not stepping the sim every other frame or two in order to preserve the step interval? Or is it more common to apply a smaller timestep on the client so that you are stepping at least once per frame?

2) Is there a particular reason to add timestamps to client input messages other than for time syncing? In my game I have the server simply process any player input immediately upon receipt and then include its own timestamp to adjust any predictions made on the client. Would I include the client timestamp to ensure the server processes the input as close to the client prediction as possible to reduce any desync? Or is there another reason?

3) That makes sense. I'm thinking that in the case of a PvE game I could afford the slight delay for smoother, fuller animations, and have some function to speed them up if they arrive later than some threshold.
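Roughly what I have in mind for that speed-up, as a sketch (the budget and cap are guesses I'd tune):

```cpp
// Thresholded catch-up: play at normal speed while only slightly behind, and
// stretch the playback rate only once the lag exceeds a budget.
double PlaybackRate(double behindSeconds) {
    const double kBudgetSec = 0.1;                       // tolerate up to 100 ms as-is
    if (behindSeconds <= kBudgetSec) return 1.0;
    double rate = 1.0 + (behindSeconds - kBudgetSec);    // catch up the excess over ~1 s
    return rate < 1.25 ? rate : 1.25;                    // cap so it never looks comical
}
```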

pondwater said:
would I end up not stepping the sim every other frame or two in order to preserve the step interval? Or is it more common to apply a smaller timestep on the client so that you are stepping at least once per frame?

In most of the games I've worked on, the clients just presume state is continuing, and dead reckoning works pretty well. Things that were moving continue moving, so we can adjust whatever we need. With both variable updates (eye candy systems and local interpolations) and fixed updates (deterministic and stable systems), we can always have some motion to draw.
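Dead reckoning in its most basic form is just extrapolating the last authoritative state until the next update lands; a sketch:

```cpp
// Minimal dead reckoning: keep extrapolating the last authoritative state
// until the next server update arrives.
struct DeadReckoned {
    float x = 0, y = 0;       // last server position
    float vx = 0, vy = 0;     // last server velocity
    double stateTime = 0;     // server time of that state

    void Predict(double renderTime, float* outX, float* outY) const {
        double dt = renderTime - stateTime;    // how stale the state is
        *outX = x + vx * static_cast<float>(dt);
        *outY = y + vy * static_cast<float>(dt);
    }
};
```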

There have been many good talks about it, one not too long ago by one of my coworkers was this one: “One Frame in Halo Infinite”. Some relevant bits start here.

pondwater said:
Is there a particular reason to add timestamps to client input messages other than for time syncing?

They provide a convenient hook for many things, such as discarding outdated stuff and idempotence. They're not the only way to achieve those techniques, but they come along for free.
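For example, a per-message sequence number or timestamp makes both of those nearly free (a sketch; names are illustrative):

```cpp
#include <cstdint>

struct MoveUpdate { uint32_t seq; float x, y; };

// Dropping stale data and ignoring duplicate resends (idempotence) both fall
// out of a single "highest sequence seen so far" check.
struct Receiver {
    uint32_t lastSeq = 0;

    bool Accept(const MoveUpdate& m) {
        if (m.seq <= lastSeq) return false;   // duplicate or out-of-date: ignore
        lastSeq = m.seq;
        return true;                          // first time we've seen this one
    }
};
```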

pondwater said:
I'm thinking that in the case of a PvE game I could afford the slight delay for smoother, fuller animations,

It is quite common for different clients to see different animations.

It is near universal that you see your own local player doing something different than what you see other people doing, with the first-person view looking almost nonsensical when viewed externally. The first-person hands, tools, and eye positions look nice on your own screen but aren't physically accurate.

Even if they're all looking at a third person view, local animations are also often slightly longer to help mask latency.

Of course, this presumes you're working in a group large enough to create these kinds of resources. On a small project or single-person hobby project, most of this is tremendously out of scope unless it comes for free with your engine.

The thing I always tell people is that if you're doing client prediction, then no matter what you'll be dealing with multiple timelines. Your local character has the most up-to-date inputs on your local machine and is therefore at “current time” in your local timeline, but the objects around you are from RTT/2 ago if they're controlled by the server, or RTT ago if they're controlled by other players. If you extrapolate non-local objects, you're attempting to predict the future. You might predict the server's or another player's timeline correctly, but often you won't.
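Spelled out in code, those timelines are just different sample times per object category (a sketch):

```cpp
// The "multiple timelines": on the local machine, each category of object is
// effectively sampled at a different point in time.
double LocalPlayerTime(double now)              { return now; }             // has fresh input
double ServerObjectTime(double now, double rtt) { return now - rtt * 0.5; } // one hop old
double RemotePlayerTime(double now, double rtt) { return now - rtt; }       // two hops old
```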

Multiplayer games are a giant mess of trickery and guesswork to give the illusion of everyone playing the same game. As long as you have a good base framework, multiplayer programming can be quite fun, but it's always challenging to do well :D


