
Using a physics engine on the server

Started by October 06, 2015 05:13 PM
31 comments, last by Krzych 9 years ago

@sheep19: You probably want to number each game simulation tick. Make sure you use the same, fixed, tick rate on all nodes! The player would then enqueue commands "for the future" for the server. This would include both "spawn bullet" as well as "move" or whatnot. The server applies the appropriate input at the appropriate simulation tick (so at each physics update on the server, only one input from each client can be applied).
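For example, a minimal sketch of such a tick-numbered command queue might look like this (C++; InputCommand, InputQueue, and the fields are made-up names, not from any particular engine):

#include <cstdint>
#include <map>
#include <vector>

struct InputCommand {
    uint32_t clientId;  // who sent it
    uint8_t  buttons;   // bitmask of keys held for that tick
};

class InputQueue {
public:
    // Client packets carry the simulation tick the command is intended for.
    void Enqueue(uint64_t tick, const InputCommand& cmd) {
        pending_[tick].push_back(cmd);
    }

    // Called once per fixed-rate physics update: apply the commands queued
    // for this tick (one per client, if clients behave), then discard them.
    template <typename ApplyFn>
    void ApplyForTick(uint64_t tick, ApplyFn apply) {
        auto it = pending_.find(tick);
        if (it == pending_.end()) return;
        for (const InputCommand& cmd : it->second) apply(cmd);
        pending_.erase(it);
    }

private:
    std::map<uint64_t, std::vector<InputCommand>> pending_;
};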

I don't quite understand this.

If I understand correctly, the server will keep a counter _simulationTickCount that increments on every physics update?

Clients will do this as well, and send the current tick count with every input packet. Then, the server applies that input at that tick, based on its own count (_simulationTickCount).

But, due to latency, the server will always be ahead of the clients, right? So when the server receives an input stamped with tick count 15 (from client A), it might actually be at _simulationTickCount 30. What does it do in that case? Furthermore, another client, B, with more lag than client A, sends its input for tick 15 when the server is already at tick 30... What should the server do?

============

I also have another issue.

Currently, when an input is received, the server sets the client's rigid body velocity to a certain value, updates the physics world, and then resets it back to zero.

This causes the local client to always be ahead of the server... (and because of corrections, the player's model "jumps" to the corrected position).

But from what I read, what should happen is that the server should be ahead of the client. This makes me think that what I'm doing above is wrong.

Should the server assume that when an input is received (e.g. RIGHT arrow pressed), it remains active until a packet marking RIGHT as not pressed is received?


But, due to latency, the server will always be ahead of the clients, right? So when the server receives an input stamped with tick count 15 (from client A), it might actually be at _simulationTickCount 30. What does it do in that case? Furthermore, another client, B, with more lag than client A, sends its input for tick 15 when the server is already at tick 30... What should the server do?

For this type of game, yes: the server lives in the future relative to the clients, or equivalently, the clients live in the past relative to the server.

The server needs to operate basically on a sliding window. When something comes in, the server needs to validate it both with normal validation rules for bounds checking and such, and to make sure it makes sense at that point in time. If it makes sense that a player was at a location and fired at a recent time, it can insert the event back into the simulation and work forward from there. If the time is too far in the past or there are other problems, the event could be discarded or replied to with a failure of some type. If validation shows other oddities like the player being in the future, logging and triggered responses can also be appropriate.
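As a rough illustration, that sliding-window check might look something like this (the window size and the verdict names are placeholders you'd tune for your game):

#include <cstdint>

enum class EventVerdict { Accept, RewindAndReplay, Reject, Suspicious };

EventVerdict ValidateEventTick(uint64_t serverTick, uint64_t eventTick,
                               uint64_t maxRewindTicks) {  // e.g. a 250 ms window
    if (eventTick > serverTick)
        return EventVerdict::Suspicious;   // claims to come from the future: log it
    if (serverTick - eventTick > maxRewindTicks)
        return EventVerdict::Reject;       // too far in the past: discard or reply with a failure
    if (eventTick == serverTick)
        return EventVerdict::Accept;       // applies on the current step
    return EventVerdict::RewindAndReplay;  // insert into recent history, re-simulate forward
}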

It certainly adds complexity to projects, but done well and coupled with local animations and audio it makes the gameplay experience that much nicer for competitive games. Especially in systems where projectiles take actual real time to fly through the sky and the simulator accounts for that, it can provide even more realistic experiences.


But, due to latency, the server will always be ahead of the clients, right? So when the server receives an input stamped with tick count 15 (from client A), it might actually be at _simulationTickCount 30. What does it do in that case? Furthermore, another client, B, with more lag than client A, sends its input for tick 15 when the server is already at tick 30... What should the server do?


That's why I say clients send commands for the future. If the client knows it's 6 steps away from the server, and it's currently client tick 22, the client will send a command for tick 28.
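In C++-ish pseudocode, that computation is just this (the names are made up):

#include <cstdint>

struct ClientClock {
    uint64_t currentTick  = 22;  // the client's own simulation tick
    int64_t  serverOffset = 6;   // learned distance to the server, in ticks
};

// e.g. tick 22 + 6 ticks of transit headroom = send the command for tick 28
uint64_t TargetTickForNextCommand(const ClientClock& c) {
    return uint64_t(int64_t(c.currentTick) + c.serverOffset);
}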


Should the server assume that when an input is received (e.g. RIGHT arrow pressed), it remains active until a packet marking RIGHT as not pressed is received?


Gah, the server lost my reply.

Anyway, it depends on the command. If the command is "RIGHT ARROW IS DOWN" then you want to send the state every tick. Otherwise, if you miss the "key up" event, the server will think the right arrow is down forever.
Meanwhile, for momentum/velocity, that's often something that takes longer to change, and assuming the previous tick's state on the next tick makes more sense.
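A sketch of what sending the full button state every tick could look like (the packing format here is purely illustrative):

#include <cstdint>

enum ButtonBits : uint8_t {
    BTN_LEFT  = 1 << 0,
    BTN_RIGHT = 1 << 1,
    BTN_JUMP  = 1 << 2,
    BTN_FIRE  = 1 << 3,
};

struct InputPacket {
    uint64_t targetTick;  // the tick this state applies to
    uint8_t  buttons;     // the complete button state, resent every tick
};

// Build the packet from the raw key state; if one packet is lost, the
// next one still carries the full truth, so no key gets "stuck" down.
InputPacket BuildInputPacket(uint64_t targetTick, bool left, bool right,
                             bool jump, bool fire) {
    uint8_t b = 0;
    if (left)  b |= BTN_LEFT;
    if (right) b |= BTN_RIGHT;
    if (jump)  b |= BTN_JUMP;
    if (fire)  b |= BTN_FIRE;
    return InputPacket{targetTick, b};
}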
enum Bool { True, False, FileNotFound };

But, due to latency, the server will always be ahead of the clients, right? So when the server receives an input stamped with tick count 15 (from client A), it might actually be at _simulationTickCount 30. What does it do in that case? Furthermore, another client, B, with more lag than client A, sends its input for tick 15 when the server is already at tick 30... What should the server do?


That's why I say clients send commands for the future. If the client knows it's 6 steps away from the server, and it's currently client tick 22, the client will send a command for tick 28.

So the client needs to learn how many steps it is away from the server.

Would adding a current tick rate variable to client inputs and server world states suffice? The client could do the subtraction and find the answer.

"current tick" in each packet is common. "current tick rate" is not very useful, because the idea is that the tick rate is exactly the same on client and server -- 60 times per second (or whatever you choose.)

Often it's useful to have the server send to the client "this is how off you are in your ticks" -- i.e., the client sends "this command is for tick X" and the server sends back "your command arrived Y ticks early/late." The client can then adjust its compensation any way it sees fit. Maybe adjust by a third downwards if it's early (to reduce latency) and adjust by the full amount, limited to 10 ticks per adjustment, if it's late.
If you use this to also tell the client what the current tick is when the client late-joins, that one-time adjustment may of course need to be big.
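A minimal sketch of that asymmetric adjustment, using the constants suggested above (everything else is hypothetical):

#include <algorithm>
#include <cstdint>

// deltaTicks is the server's feedback: positive = the command arrived late,
// negative = it arrived early.
int64_t AdjustOffset(int64_t currentOffset, int64_t deltaTicks) {
    if (deltaTicks > 0)  // late: apply the full amount, capped at 10 ticks per adjustment
        return currentOffset + std::min<int64_t>(deltaTicks, 10);
    if (deltaTicks < 0)  // early: ease back by a third to shave latency gradually
        return currentOffset + deltaTicks / 3;
    return currentOffset;
}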
enum Bool { True, False, FileNotFound };

"current tick" in each packet is common. "current tick rate" is not very useful, because the idea is that the tick rate is exactly the same on client and server -- 60 times per second (or whatever you choose.)

Often it's useful to have the server send to the client "this is how off you are in your ticks" -- i.e., the client sends "this command is for tick X" and the server sends back "your command arrived Y ticks early/late." The client can then adjust its compensation any way it sees fit. Maybe adjust by a third downwards if it's early (to reduce latency) and adjust by the full amount, limited to 10 ticks per adjustment, if it's late.
If you use this to also tell the client what the current tick is when the client late-joins, that one-time adjustment may of course need to be big.

Yes, that's what I meant. I don't know how I wrote "tick rate" instead :P

So yes, it also seems easier to implement if the server tells each client its difference, instead of the client having to work it out manually!

Thanks for your advice!


"current tick" in each packet is common. "current tick rate" is not very useful, because the idea is that the tick rate is exactly the same on client and server -- 60 times per second (or whatever you choose.)

Often it's useful to have the server send to the client "this is how off you are in your ticks" -- i.e., the client sends "this command is for tick X" and the server sends back "your command arrived Y ticks early/late." The client can then adjust its compensation any way it sees fit. Maybe adjust by a third downwards if it's early (to reduce latency) and adjust by the full amount, limited to 10 ticks per adjustment, if it's late.
If you use this to also tell the client what the current tick is when the client late-joins, that one-time adjustment may of course need to be big.

Alright, so I'm in the process of doing what has been discussed here.

  • Clients include the target tick in their (input) packets.
  • Server includes tick difference for each state update sent to clients.

The clients will use that difference to appropriately set the target tick:

targetTick = currentTick + tickDifference (I haven't implemented this part yet).

Where targetTick will be sent to the server (like you said, send for the future).

For now, clients send their currentTick.

On the client, when a new state from the server is received, I print the tick difference. Here are the results: http://pastebin.com/dh7GFFcq

In the first few frames, the tick difference is 1, and it increases up to ~20 until the first input from the client is received by the server.

So my question is, how does the client use tick difference information? Should it just calculate the targetTick using the last value of tickDifference? The value won't be correct until the server processes the 1st input from the client, but this will be corrected really soon.

====

Also, another question regarding this. With this approach, the client sends inputs "for the future" to the server. Each input should ideally arrive right at its target tick and be processed by the server then.

Let's say that the client sends an input with targetTick = 5. But due to a lag spike, the server receives it at server tick = 7. Now, at server tick = 7, the server has 3 inputs from that client (because inputs are received in order by my own protocol).

So, in that update loop at tick = 7, the server should process all 3 inputs at once, correct?
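A sketch of one way that drain could look, assuming a per-client queue kept in arrival order (the names are hypothetical):

#include <cstdint>
#include <deque>

struct QueuedInput {
    uint64_t targetTick;
    uint8_t  buttons;
};

// At each server tick, consume every queued input stamped for this tick or
// earlier, in arrival order, so commands delayed by a lag spike are not lost.
template <typename ApplyFn>
void ApplyDueInputs(std::deque<QueuedInput>& queue, uint64_t serverTick,
                    ApplyFn apply) {
    while (!queue.empty() && queue.front().targetTick <= serverTick) {
        apply(queue.front());
        queue.pop_front();
    }
}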

I'm not up on how various games handle this, but regarding

"it can insert the event back into the simulation and work forward from there"

when multiple (or rather a multitude of) client objects start getting involved, with corrections being made, at some point there needs to be a cutoff where some good (simple) default patching gets done which is 'good enough' to handle the situation. The game doesn't have to be 'pure' in its physics, and there can be points where it cannot be without becoming horribly complicated (to program) or disruptive to the game flow/appearance.

So for the OP, that implies some criteria for when things just get too weird, plus some presentable strategy to handle (clean up) the mess as part of the logic. A lot of that is a subjective judgement of what is acceptable for the game in various situations.

Ratings are Opinion, not Fact


So my question is, how does the client use tick difference information? Should it just calculate the targetTick using the last value of tickDifference? The value won't be correct until the server processes the 1st input from the client, but this will be corrected really soon.

Hm, I can't say if this fits your game design, but I would smoothly change the client's current tick towards the server's. Also, I calculate the difference on the client, but I guess that's not very important. The client sends its current time to the server, which replies with "at your time X, I was at time Y". When that answer arrives, I also have the exact RTT for that request, because the server includes the time the client sent in the answer. With that, I can skew the client's own time a bit from time to time to keep it consistent with the server's (not necessarily equal).
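A rough sketch of that exchange, with made-up names, might be:

#include <cstdint>

struct TimeSyncReply {
    uint64_t clientTickSent;  // echoed back: "at your time X..."
    uint64_t serverTickThen;  // "...I was at time Y"
};

struct ClientTime {
    uint64_t currentTick = 0;
    double   skew = 0.0;  // fractional correction, applied a bit at a time

    void OnTimeSyncReply(const TimeSyncReply& r) {
        uint64_t rttTicks = currentTick - r.clientTickSent;  // exact RTT, in ticks
        // The server is presumably rttTicks/2 past the tick it reported.
        double serverNow = double(r.serverTickThen) + double(rttTicks) / 2.0;
        double error = serverNow - double(currentTick);
        skew += error * 0.1;  // skew gradually instead of snapping
    }
};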

So the client's time won't be correct until the first sync, but you can do a sync before you "spawn" the player, to avoid a hiccup when the player starts to do whatever it is they do in your game.

For the time difference for players' inputs, I found I preferred to do client-side prediction, because rolling back on the server when late input arrives got really messy really fast. And just sending input with a future timestamp resulted in a lot of difference between the client and server sims, because the client would basically move in a timeframe ahead of the server: the client would immediately apply inputs for time X+Y at time X, but the server would apply them at X+Y. And delaying the client's own inputs is just lame and feels unresponsive to the player.

So my client actually runs the game RTT/2 (roughly) in the future and immediately applies its inputs. When the inputs arrive at the server, it has just reached that point in time and can apply them nicely. That comes with the trade-off that clients need to do a full RTT of rollback on incoming updates from the server. However, I like that better than having the server do the rollback, which might not be possible fast enough for many players and lots of objects. So far it works quite well.
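A rough sketch of that rollback-and-replay on incoming server updates (everything here is illustrative; step stands in for the actual simulation step):

#include <cstdint>
#include <deque>
#include <functional>

struct WorldState  { uint64_t tick = 0; /* positions, velocities, ... */ };
struct StoredInput { uint64_t tick = 0; uint8_t buttons = 0; };

// step advances a state by one tick using the input recorded for that tick.
using StepFn = std::function<WorldState(const WorldState&, const StoredInput&)>;

void OnServerState(const WorldState& authoritative,
                   std::deque<StoredInput>& inputHistory,
                   WorldState& current, const StepFn& step) {
    // Discard local inputs the server has already consumed.
    while (!inputHistory.empty() &&
           inputHistory.front().tick <= authoritative.tick)
        inputHistory.pop_front();
    // Rewind to the server's state, then replay the remaining local inputs
    // to get back to the client's "RTT/2 in the future" present.
    WorldState s = authoritative;
    for (const StoredInput& in : inputHistory)
        s = step(s, in);
    current = s;
}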

If you are behind (sending commands too late), you need to catch up immediately, which means stepping the simulation several times without rendering. This will glitch a little bit, but hopefully happens at most once. If you are ahead (sending commands too early, causing unnecessary latency), you can smoothly drop the step delta: say, if you're ahead by X steps, you subtract X/10 steps from the local offset, which means you step the simulation slower / less frequently for a little bit. (Also, you have to keep the time -> server ticks offset variable as a double or fraction.)
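A minimal sketch of a fixed-step loop that catches up by stepping several times without rendering (the names are illustrative):

// simTime and now are in seconds; tickSeconds is the fixed step (e.g. 1/60).
void RunFrame(double& simTime, double now, double tickSeconds,
              void (*stepSimulation)(), void (*render)()) {
    // If we fell behind (e.g. after snapping the clock forward), this loop
    // runs several steps back to back without rendering in between.
    while (simTime + tickSeconds <= now) {
        stepSimulation();
        simTime += tickSeconds;
    }
    render();  // render once per frame, however many steps ran
}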

Finally, you want some "desired" amount of jitter buffering. It may be OK to be between 0-4 steps ahead of the server, for example, because transmission times will jump around a little bit. You may even want to count how many times you have to adjust the clock (in either direction) and slowly increase the acceptable jitter size if you have to adjust it often, to compensate.
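A sketch of such an adaptive jitter window (the constants are illustrative):

#include <cstdint>

struct JitterBuffer {
    int64_t window = 4;       // acceptable lead over the server: 0..window ticks
    int     adjustments = 0;  // recent clock corrections

    // Returns true if the measured lead needs no correction.
    bool WithinWindow(int64_t ticksAhead) {
        if (ticksAhead >= 0 && ticksAhead <= window) return true;
        if (++adjustments >= 8) {  // correcting too often: tolerate more jitter
            window += 1;
            adjustments = 0;
        }
        return false;
    }
};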
enum Bool { True, False, FileNotFound };

