
Simulation in time vs. frames (steps/ticks)

Started March 16, 2014 01:23 PM
1 comment, last by FredrikHolmstr

So, like always, I have been working on my FPS game. A while back I switched the networking from using time (as in seconds) to using frames (as in simulation frames/steps/ticks/whatever you want to call them). All state that is sent, from both server to client and client to server, is stamped with the frame the state is valid for, all actions which are taken (like "throw a grenade") are stamped with the frame they happened on, etc.
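For illustration, a minimal sketch of what such frame-stamped messages could look like in C++ (the type and field names here are my own assumptions for the example, not anything from a particular engine):

    #include <cstdint>

    // Rough sketch only; field names are made up for the example.
    struct EntityState {
        uint32_t frame;        // simulation frame this state is valid for
        uint32_t entityId;
        float    position[3];
        float    velocity[3];
    };

    struct Command {
        uint32_t frame;        // simulation frame the action was issued on
        uint8_t  action;       // e.g. "throw a grenade"
    };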

This makes for a very tidy model: there are no floating-point inaccuracies to deal with, everything always happens at an explicit frame, and it's very easy to keep things in sync and looking proper compared to when using time (as in seconds).

Now, the thing that popped into my head recently is what happens when you deal with explicitly numbered frames and the client starts lagging behind, or for some reason runs faster than the server; basically, when you get into a state where the client needs to re-adjust its local estimate of the server frame (server time, if you are using time instead). When dealing with "time" you can adjust/drift time at a very small level (usually in milliseconds) over several frames to re-align your local estimate on the client with the server, which gives next to no visible artifacts.
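A minimal sketch of that kind of smooth correction, assuming the client keeps a serverTimeEstimate it advances locally and corrects by at most a few milliseconds per second (the names and the 5 ms/s budget are just illustrative):

    // Smoothly pull the local estimate of server time towards the measured
    // value instead of snapping; correction is capped per update.
    double serverTimeEstimate = 0.0;          // seconds
    const double maxCorrectionRate = 0.005;   // at most 5 ms of correction per second

    void UpdateClock(double dt, double measuredServerTime)
    {
        serverTimeEstimate += dt;                                // advance locally
        double error = measuredServerTime - serverTimeEstimate;  // drift vs. server
        double maxStep = maxCorrectionRate * dt;                 // correction budget this update
        if (error >  maxStep) error =  maxStep;
        if (error < -maxStep) error = -maxStep;
        serverTimeEstimate += error;                             // smooth re-adjustment
    }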

But when you are dealing with frames, your most fine-grained level of control is a whole simulation frame, which causes two problems. First, if you are just drifting a tiny bit (a few ms per second or so), the re-adjustment that happens when dealing with "time" is practically invisible, as you can correct the tiny drift instantly and there is no time for it to accumulate; but when dealing with frames you won't even notice the drift until it has grown to an entire simulation frame's worth of time. Second, as I said, when you do re-adjust the client's estimate you have no smaller unit than "simulation frame" to work with, so there is no way of interpolating; you just have to "hard step" an entire frame up/down, which leaves visual artifacts.

All of this obviously only applies when you are doing small adjustments, but those are the most common ones. If you get huge discrepancies between the client's and the server's time, every solution is going to give you visual artifacts: snapping, slowdown/speedup, etc. But that is pretty rare.

So, what's the solution here? I have been thinking about a model which still keeps most of my original design with frames, but when state/actions are received on the remote end, the frame numbers are transformed into actual simulation-time values, and then, as things are displayed on remote machines, you can nudge the time back/forward as needed. I'm just looking for ideas and input on how people have solved this.
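Something like this is what I have in mind for the remote end, assuming a fixed step size and a small display-time offset (fixedDeltaTime and renderTimeOffset are placeholder names, not actual code):

    #include <cstdint>

    const double fixedDeltaTime = 1.0 / 60.0;  // simulation step size in seconds (assumed 60 Hz)
    double renderTimeOffset = 0.0;             // small nudge applied only when displaying

    // Keep frames on the wire, convert to seconds only where things are shown.
    double FrameToTime(uint32_t frame)
    {
        return frame * fixedDeltaTime + renderTimeOffset;
    }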

Edit: But then again, maybe I am over-thinking this, since "time" will also be measured with the granularity of your simulation step size. So maybe there is no difference when you compare the two solutions? For example, if you nudge your local time -1 ms to "align" with the server better, but this pushes you over a frame boundary, then everything will be delayed by a whole frame anyway...

Edit 2: Or actually, I suppose the case above, where you nudge time to get closer to the server and happen to land right on a frame boundary, would require you to drift more than one frame's worth of time without detecting it (you most likely got a ping spike or packet drop), because if you just drift a tiny bit away from/towards the server and re-adjust back closer towards it, you would never cross the frame boundary.

Edit 3: Also, thinking about this more, maybe the reason I am confusing myself is that currently the updating of remote entities is done in my local simulation loop, when maybe it should be done in the render loop instead? I mean, I have no control over the remote entities whatsoever anyway, since they are... well, remote. On the one hand it feels a lot cleaner (in my mind) to step all remote entities at the same interval as they are stepped on the peer that controls them. On the other hand, maybe this is why it's getting convoluted: I'm mixing up my local simulation of the entities I control with the display of the remote entities.
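If I did move remote entities to the render loop, I imagine it would boil down to something like this: buffer the received states and interpolate between the two that bracket the current render time (Snapshot and the function here are just a sketch, not my actual code):

    struct Snapshot { double time; float position[3]; };

    // Interpolate a remote entity between two buffered snapshots a and b,
    // where a.time <= renderTime <= b.time (clamped if it isn't).
    void InterpolateRemote(const Snapshot& a, const Snapshot& b,
                           double renderTime, float out[3])
    {
        double span = b.time - a.time;
        double t = span > 0.0 ? (renderTime - a.time) / span : 1.0;
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
        for (int i = 0; i < 3; ++i)
            out[i] = static_cast<float>(a.position[i] + (b.position[i] - a.position[i]) * t);
    }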

As you can probably tell, I am pretty ambivalent about which route to go with here; really looking for some input from someone.

I don't think you should worry about small time offsets. It's only necessary to adjust the clock if it's off by at least a whole step, and at that point, you should probably adjust by at least one whole step.

If you still want to adjust the clock by a small amount, then you should probably go back to your clock source and whatever its units are. Typically those are microseconds (gettimeofday()) or raw performance counter ticks (QueryPerformanceCounter()), which ends up being millions or many millions of them per second, so it's very easy to add an offset that's smaller than a tick.

And note that, when I say "use the clock units", that's only really in the interface to the clock. For the networking and most other subsystems, sticking with ticks is the right way to go.
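Something along these lines, as a sketch (using std::chrono here in place of gettimeofday()/QueryPerformanceCounter(); the 60 Hz rate and the names are just for the example): the offset lives in clock units, and everything else only ever sees whole ticks.

    #include <chrono>
    #include <cstdint>

    const int64_t ticksPerSecond = 60;   // simulation rate, assumed for the example
    int64_t clockOffsetMicros = 0;       // adjusted in sub-tick (microsecond) increments

    int64_t NowMicros()
    {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            steady_clock::now().time_since_epoch()).count() + clockOffsetMicros;
    }

    uint32_t CurrentTick()
    {
        // Networking and the rest of the game only ever see this tick number.
        return static_cast<uint32_t>(NowMicros() * ticksPerSecond / 1000000);
    }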
enum Bool { True, False, FileNotFound };
I think you're right; keeping nudging the clock back and forth is just going to cause issues. I think I over-complicated this a ton in my head. I just needed someone to agree with my initial idea (using ticks everywhere) to feel confident in it, I think :) Thanks!

