
Server/client ticks, lag compensation, game state, etc.

Started November 04, 2016 04:47 AM
25 comments, last by bgilb 8 years ago

Ahh I think I'm getting it!

Can you elaborate on this part "Another is to simply call the "send buffered commands" function on your command buffer every X iterations through the main input/simulation loop."

When shifting the clock backwards, how do I then handle client user commands that end up with duplicate tick #s?

When shifting the clock backwards, how do I then handle client user commands that end up with duplicate tick #s?


Either don't handle that at all and just throw away duplicate commands on the server, or discard buffered commands on the client when the clock is shifted.
Getting out of sync will cause a small inconsistency between client/server, although 9 times out of 10, it will not actually be noticed by the player.
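
A minimal sketch of that second option (discarding buffered commands when the client clock shifts back) might look like this; the UserCmd struct, the unsent_cmds deque, and OnClockShiftedBack are illustrative names, not anything from an actual codebase:

```cpp
#include <cstdint>
#include <deque>

struct UserCmd {
    uint32_t tick;   // client tick this command was generated on
    // movement, buttons, view angles, ...
};

std::deque<UserCmd> unsent_cmds;   // client-side buffer of commands not yet sent

// Call this when the client's clock is shifted backwards to new_tick.
void OnClockShiftedBack(uint32_t new_tick) {
    // Any buffered command stamped at or after the new tick would collide
    // with the commands we're about to generate, so drop them.
    while (!unsent_cmds.empty() && unsent_cmds.back().tick >= new_tick) {
        unsent_cmds.pop_back();
    }
}
```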
enum Bool { True, False, FileNotFound };

Is there something I could use other than a user command buffer? I only ask because it doesn't quite work 100% of the time.

When the client first spawns, he has a drop in FPS for maybe 100ms. Unfortunately, during this time he doesn't send any user commands, but when the FPS recovers, he ends up sending 4 messages at once to the server, with 2 commands per message. So the client is immediately 128ms behind. If any other FPS drops happen, he just falls farther and farther behind.

My solution right now is to put a max on the command buffer and clear it if it's above the max. But it seems janky.

Edit: Maybe I should make it so the client never runs more than one tick at a time, even if some were missed, but let the server keep 100% accuracy?
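
A rough sketch of that capped-buffer workaround, assuming a hypothetical kMaxBufferedCmds limit and a simple deque of pending commands (names and constants are illustrative only):

```cpp
#include <cstdint>
#include <deque>

struct UserCmd {
    uint32_t tick;
    // movement, buttons, ...
};

constexpr size_t kMaxBufferedCmds = 8;   // ~128 ms at 60 Hz; tune to taste

std::deque<UserCmd> pending_cmds;        // commands not yet sent to the server

void BufferCommand(const UserCmd& cmd) {
    pending_cmds.push_back(cmd);
    // If a long frame hitch piled up too many commands, drop the whole backlog
    // so the client doesn't end up permanently behind the server.
    if (pending_cmds.size() > kMaxBufferedCmds) {
        pending_cmds.clear();
    }
}
```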

he ends up sending 4 messages at once to the server, with 2 commands per message.


Why do you force exactly two command ticks into each message, instead of sending whatever you have when it's time to send?

Anyway, it's quite common that games need a little bit of time to sync up when first starting, loading levels, etc.
The role of your networking, for non-lockstep architectures, is to be robust in the face of this. It's really no different from the network being jittery. For example, my Comcast Internet will drop about 1% of packets, quite reliably (not in bursts or anything). That's just a fact of life.
If you're doing lock-step networking (like Age of Empires, Starcraft, etc.) then you instead make sure everything is loaded and ready to go, and when all clients have sent "everything ready" to the server, the server sends a "start" message and the game actually starts simulating.

Separately, it's common to have an upper limit to how many physics steps you will take when your loop is running slowly.
However, even if you do that, you will end up detecting that you are, in fact, behind the server.
This is one of the reasons why a bit of de-jitter buffer is useful; it adds a little latency, but allows for the non-smooth realities of the internet to not affect simulation as much.
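
One common shape for such a loop is a fixed-timestep accumulator with a cap on catch-up steps. This is only a sketch; GameIsRunning, SimulateOneTick, and RenderFrame are placeholder names for whatever the engine actually provides, and the constants are illustrative:

```cpp
#include <chrono>

// Placeholders for the game's own functions.
bool GameIsRunning();
void SimulateOneTick();   // read input, build a user command, step the simulation
void RenderFrame();

constexpr double kTickSeconds     = 1.0 / 60.0;  // 60 Hz simulation
constexpr int    kMaxCatchUpTicks = 5;           // cap the work done after a hitch

void RunClientLoop() {
    using Clock = std::chrono::steady_clock;
    auto   last        = Clock::now();
    double accumulator = 0.0;

    while (GameIsRunning()) {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        int steps = 0;
        while (accumulator >= kTickSeconds && steps < kMaxCatchUpTicks) {
            SimulateOneTick();
            accumulator -= kTickSeconds;
            ++steps;
        }
        // If we hit the cap, throw the remainder away; the clock-sync logic
        // will notice we're behind the server and adjust.
        if (steps == kMaxCatchUpTicks) {
            accumulator = 0.0;
        }

        RenderFrame();
    }
}
```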
enum Bool { True, False, FileNotFound };

I'm not exactly forcing 2 user commands per message. It's just that, based on Valve's articles, they usually package a user command message with 2 user commands, so I wait for 2 user commands to have been created before sending the message. I imagine this would be client-adjustable.

Yes, that's about the conclusion I'm coming to. This is non-lockstep, aka Valve style. The only thing is, this all seems so fragile D:

If the client has performance problems, it basically equates to packet loss. Then you have to deal with latency variability. In a nice world, the packets would arrive right when they're needed.

I'm not 100% sure how to tweak the user command buffer to be optimal on the server. As you said, when the client's loop runs slowly he most likely ends up behind the server (especially if 1 tick took like 200ms or something). I actually removed the tick catch-up on the client since I didn't see the point anymore. Is this okay? I kept it on the server, which doesn't have any upper bound on the number of ticks it can execute at once. The problem on the client is that if the performance hit is bad enough, user commands won't be sent out and he will be perpetually behind. The server loop will eventually run out of user commands for that client, but later it will receive like 4 at once. Since the server only executes 1 per tick, that client will never catch up. It seems weird that FPS problems make my system fall apart, though.

Edit: Also, all my testing up till now has been on the same computer, so any problems are just performance issues in the application and not anything to do with latency or latency spikes. It makes me worry that once there is any real latency or latency spiking, it will be even worse.

based on Valve's articles, they usually package a user command message with 2 user commands, so I wait for 2 user commands to have been created before sending the message


My question is: If there are four commands in the queue, and it's time to send a network packet, why would you only send 2 commands, and not the entire queue, in that network packet?

If the client has performance problems, it basically equates to packet loss


Yes! Hence, why games have minimum recommended system specs.
However, the good news is that MOST games will be more CPU bound on rendering than on simulation, so you can simulate 2, 3, ... N simulation steps between each rendered frame, and at that point, the performance problem looks more like latency than like packet loss.
And you should be able to automatically compensate for latency pretty easily, if you do the simple "let the server tell you how much ahead/behind you are" thing, and don't try to be too fancy.
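
As a sketch of that "let the server tell you how far ahead/behind you are" adjustment, assuming the server piggybacks a per-batch offset onto its snapshots. The TickFeedback message, the slack constant, and the one-tick-per-update nudge are all assumptions for illustration, not a prescribed protocol:

```cpp
#include <algorithm>
#include <cstdint>

// Feedback piggybacked on a server snapshot: how early (positive) or late
// (negative), in ticks, the client's last command batch arrived relative to
// when the server needed it.
struct TickFeedback {
    int32_t offset_ticks;
};

class ClientClock {
public:
    void OnFeedback(const TickFeedback& fb) {
        // Aim to arrive a couple of ticks early (a small de-jitter margin),
        // and nudge by at most one tick per update so movement stays smooth.
        int32_t desired = current_tick_ - fb.offset_ticks + kTargetSlackTicks;
        int32_t delta   = std::clamp<int32_t>(desired - current_tick_, -1, 1);
        current_tick_  += delta;   // shifting backwards can repeat tick numbers
    }

    uint32_t NextTick() { return static_cast<uint32_t>(++current_tick_); }

private:
    static constexpr int32_t kTargetSlackTicks = 2;
    int32_t current_tick_ = 0;
};
```

The point of the clamp is exactly the "don't try to be too fancy" advice: large corrections are spread over several updates instead of snapping the clock in one go.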

I actually removed the tick catch-up on the client since I didn't see the point anymore.


If you adjust the client clock based on whether the server sees incoming packets as arriving too early or too late, then the client will automatically adjust anyway.

The server loop will eventually run out of user commands for that client, but later it will receive like 4 at once


This is the same thing as "latency increased on the transmission network" and thus your clock adjustment should deal with it. The client will simply simulate further ahead for the controlling player, and assume the latency is higher, and things will still work.
enum Bool { True, False, FileNotFound };

My question is: If there are four commands in the queue, and it's time to send a network packet, why would you only send 2 commands, and not the entire queue, in that network packet?

Sorry, I may be confusing you. The server and client both buffer commands. The client fills his input buffer up until it hits a max, then sends everything (effectively the same thing as cl_cmdrate in CS games). So I'm always sending what's available; there aren't any pending commands that don't get sent. Input polling is built into the client tick loop; maybe that's a problem?
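
Something like the following is one way that send path could look, assuming a hypothetical SendToServer transport call and a cl_cmdrate-style send interval (the constants are illustrative):

```cpp
#include <cstdint>
#include <vector>

struct UserCmd {
    uint32_t client_tick;
    // movement, buttons, ...
};

constexpr uint32_t kTickRate    = 60;   // simulation ticks per second
constexpr uint32_t kCmdSendRate = 30;   // command packets per second (like cl_cmdrate)

std::vector<UserCmd> outgoing_cmds;     // commands built since the last send

void SendToServer(const std::vector<UserCmd>& batch);   // placeholder transport call

void OnClientTick(uint32_t client_tick) {
    outgoing_cmds.push_back(UserCmd{client_tick});

    // Flush every (kTickRate / kCmdSendRate) ticks and send *everything* that
    // has accumulated, however many commands that turns out to be.
    if (client_tick % (kTickRate / kCmdSendRate) == 0 && !outgoing_cmds.empty()) {
        SendToServer(outgoing_cmds);
        outgoing_cmds.clear();
    }
}
```

With these example numbers, a normally running client sends 2 commands per packet, but after a hitch the same packet simply carries whatever has piled up.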

However, the good news is that MOST games will be more CPU bound on rendering than on simulation, so you can simulate 2, 3, ... N simulation steps between each rendered frame, and at that point, the performance problem looks more like latency than like packet loss.

The problem on the client is that more simulation steps at once means more user commands sent at once, which will add latency on the server. The server doesn't let the client catch up or anything. The server buffer would end up with extra latency that won't go down unless the client had packet loss later. So I figure I might as well skip those update loops, because they won't end up being executed on the server anyway. The server will let people catch up if the reason for the lag is the server taking too long doing something.

Right now, instead of syncing their ticks, I'm just going to have the server handle the user command de-jitter buffer size, which I'm thinking should be similar, right? So when the server buffer is too full because of extra latency, I will discard a certain amount of user commands so they can skip ahead. Ideally the buffer would be as small as possible, but without missing any server ticks and without removing excess user commands. Can you see any problems with this?
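
A sketch of what that server-side de-jitter buffer could look like, with an assumed maximum depth beyond which the oldest commands are dropped (the kMaxDepth value is a guess to tune, not a recommendation):

```cpp
#include <cstdint>
#include <deque>

struct UserCmd {
    uint32_t client_tick;
    // inputs ...
};

class DejitterBuffer {
public:
    void Push(const UserCmd& cmd) {
        buffer_.push_back(cmd);
        // If the backlog grows beyond what jitter alone explains, skip ahead by
        // dropping the oldest commands so the player stops lagging further behind.
        while (buffer_.size() > kMaxDepth) {
            buffer_.pop_front();
        }
    }

    // Called once per server tick; returns false if the buffer ran dry.
    bool Pop(UserCmd* out) {
        if (buffer_.empty()) return false;
        *out = buffer_.front();
        buffer_.pop_front();
        return true;
    }

private:
    static constexpr size_t kMaxDepth = 6;   // ~100 ms at 60 Hz
    std::deque<UserCmd> buffer_;
};
```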

The problem on the client is that more simulation steps at once means more user commands sent at once, which will add latency on the server.


Yes, slower frame rate and larger batches of simulation steps leads to more buffered simulation steps.
What I'm trying to say is that this should look, to your system, almost exactly as if the client simply had a slower and more jittery network connection.
If you adjust clocks correctly, you don't need to do any other work (as long as the server does, indeed, buffer commands and apply timesteps correctly).

So when the server buffer is too full because of extra latency, I will discard a certain amount of user commands so they can skip ahead.


Are you not time stamping each command for the intended server simulation tick? You should be.
You only need to throw away user commands when you receive two separate commands time stamped with the same tick, which should only happen if the clock is adjusted backwards in a snap, or if the network/UDP duplicates a packet.
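
For illustration, one way to stamp and store commands by their intended server tick so that duplicates for the same tick are rejected automatically; the ring-buffer layout and window size here are assumptions, not a required design:

```cpp
#include <array>
#include <cstdint>
#include <optional>

struct UserCmd {
    uint32_t server_tick;   // the server simulation tick this command is meant for
    // inputs ...
};

class TickStampedCmds {
public:
    // Returns false if a command for that tick is already stored (duplicate
    // packet, or the client clock snapped backwards) -- the newcomer is dropped.
    bool Store(const UserCmd& cmd) {
        auto& slot = slots_[cmd.server_tick % kWindow];
        if (slot && slot->server_tick == cmd.server_tick) return false;
        slot = cmd;
        return true;
    }

    // Fetch the command the client intended for this exact server tick.
    std::optional<UserCmd> TakeForTick(uint32_t server_tick) {
        auto& slot = slots_[server_tick % kWindow];
        if (!slot || slot->server_tick != server_tick) return std::nullopt;
        auto cmd = slot;
        slot.reset();
        return cmd;
    }

private:
    static constexpr uint32_t kWindow = 128;   // ~2 seconds of ticks at 60 Hz
    std::array<std::optional<UserCmd>, kWindow> slots_{};
};
```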

And, yes if the client runs at 5 fps, then you need to buffer 200 ms worth of commands on the server. There is no way around that.
If you throw away any of that data, then you will just be introducing more (artificial) packet loss to the client, and the experience will be even worse.
You know that, because the client won't send another batch of commands until 200 ms later, you actually need that amount of buffering.

Separately, if you really want to support clients with terrible frame rates, you may want to put rendering in one thread, and input reading, simulation, and networking, in another thread.
When the time comes to render to the screen, take a snapshot of the simulation state, and render that. That way, the user's control input won't be bunched up during the frame render, so at least the controls will be able to pay attention to 60 Hz input rates, even if they can't render at that rate.
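
A very rough sketch of that thread split, with the simulation thread publishing its latest state under a mutex and the render thread copying a snapshot before drawing. GameState, the fixed 16 ms sleep, and the commented-out DrawFrame call are placeholders for a real engine's state and timing:

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct GameState {
    // positions, orientations, ... (placeholder)
};

GameState         shared_state;     // latest fully simulated state
std::mutex        state_mutex;
std::atomic<bool> running{true};

// Input, networking, and simulation run here at a steady rate,
// independent of how long rendering takes.
void SimThread() {
    GameState local;
    while (running) {
        // Poll input, build a user command, exchange packets, advance one tick
        // (details omitted), then publish the result for the renderer.
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            shared_state = local;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 Hz
    }
}

// Rendering copies a snapshot under the lock and draws without holding it,
// so a slow frame never blocks input sampling or command sending.
void RenderThread() {
    while (running) {
        GameState snapshot;
        {
            std::lock_guard<std::mutex> lock(state_mutex);
            snapshot = shared_state;
        }
        // DrawFrame(snapshot);   // however long this takes
    }
}
```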
enum Bool { True, False, FileNotFound };

These things are hard to explain, I'm sure you know! Thanks for all the help.

At the moment I'm not doing clock/tick syncing, or letting the client say what server tick it's intended for. I couldn't wrap my head around how that is different than the user command buffer on the server.

Right now I have it semi-working, without client-side prediction coded. It's on LAN, so it feels fairly responsive.

On the client I send batches of user commands marked with their local client tick, which starts at 0 and isn't the same as the server tick.

The server receives them and buffers them. The server has its own loop and all that, and removes one user command off the stack every server tick (same as the client, at 60Hz) and applies it to that player. If there is nothing in the stack, it doesn't do anything for that player, and the player doesn't move or process anything.

The server then sends a gamestate to the client containing the server tick, along with the client tick (or none) to identify which user command was used.

The problem with the 5fps scenario is that initially the server receives no user commands for 200ms or whatever, so the client completely misses those server loops and doesn't move. Then when the client catches up, he sends a bunch of user commands at once. Let's say 20. The buffer on the server will stay at roughly 20 user commands indefinitely. This is because the server only executes 1 user command per tick. I don't see a way to fix that besides removing most of the user commands from the buffer. Isn't that the same as the clock adjusting, and ending up with duplicate tick #s that are deleted/ignored?

My other method before was to execute all ticks as they were received, but this made it impossible to generate any sort of gamestate that could be shared with other clients, since the players would be skipping around. Also, it would be pretty easy to speed hack by just sending more user commands. Basically there wasn't even a server loop.

At the moment I'm not doing clock/tick syncing, or letting the client say what server tick it's intended for

I see. That's why you run into these problems :-)

The buffer on the server will stay at roughly 20 user commands indefinitely. This is because the server only executes 1 user command per tick.

You are assuming that, after a single burst of 200 ms latency, the client will then be able to fill up 2 ticks every 20 milliseconds.
But, if the client runs at 5 Hz, it means it sends a clump of 20 commands every 200 milliseconds. The server-side buffer will go down to 0, then fill up again to 20, and then repeat.

If you have a single delay, and then the client runs fine (this could be a network delay, rather than client frame rate, even) and do not time stamp commands, then yes, the client will keep filling up the buffer.
If you do not time stamp the commands or sync the clocks (which you're going to want to do once you get into server-side lag compensation for shooting) then the best bet is to remove all buffered commands when you get a new packet of commands.
Keep all the commands when they come in, because there is some chance that the player will in fact send 20 commands every 200 milliseconds (or whatever the send rate is).
But, if you get more commands, and there is more than 1 command in the buffer, then delete all but the last command in the buffer before appending the newly arrived commands.
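
As a sketch of that fallback for unstamped commands, assuming a simple per-client deque on the server (names are illustrative):

```cpp
#include <deque>
#include <vector>

struct UserCmd {
    // movement, buttons, ...
};

std::deque<UserCmd> server_buffer;   // buffered commands for one client

// Called when a packet of commands arrives for that client.
void OnCommandPacket(const std::vector<UserCmd>& batch) {
    // More than one command still buffered means the backlog isn't draining,
    // so keep only the most recent one before appending the new arrivals,
    // instead of letting the buffer (and the player's effective latency)
    // grow without bound.
    while (server_buffer.size() > 1) {
        server_buffer.pop_front();
    }
    server_buffer.insert(server_buffer.end(), batch.begin(), batch.end());
}
```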
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
