Client side prediction

Started January 03, 2014 01:50 PM
9 comments, last by hplus0603 10 years, 10 months ago

Reading the Server reconciliation section, I'm confused by a few things.


Now, at t = 250, the server says “based on what I’ve seen up to your request #1, your position is x = 11”. Because the server is authoritative, it sets the character position at x = 11. Now let’s assume the client keeps a copy of the requests it sends to the server. Based on the new game state, it knows the server has already processed request #1, so it can discard that copy. But it also knows the server still has to send back the result of processing request #2. So applying client-side prediction again, the client can calculate the “present” state of the game based on the last authoritative state sent by the server, plus the inputs the server hasn’t processed yet.

Why re-apply the last known server state and then the other inputs we know the server hasn't seen yet? Why not just validate that the last known server state is in our local list and matches our local values, then discard it and NOT re-apply it locally?

Also, in the examples he's using, he's sending "move 1 unit right" vs Valve's idea of a "+move right" command. Valve's is more of a key-down/key-up command being sent. In that case, yes, we'll be getting snapshots from the server of where it says we are, but we wouldn't be saving commands on the client like "I'm at 15,0,25 #1" to compare against. So what is this article really talking about here vs Valve's?

Essentially, what is being proposed here is that when you receive a correction, you should re-simulate the inputs that were sent to the server since the packet that was corrected. This means that, provided we have a relatively recent model of the game world and all the relevant algorithms are deterministic (for our purposes), we shouldn't incur an error.

E.g.:

Client:

Send Move 1
Send Move 2
Send Move 3
...

Server:

Receive Move 1 -> Process Move -> Send Correction for Move 1

Client:

Receive Correction for Move 1
Re-simulate the moves sent since Move 1 (2, 3, ...)
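
In code, that re-simulation step might look like the following minimal sketch (all names are illustrative, not from any particular library, and the "physics" is just 1D movement):

class Predictor:

    def __init__(self):
        self.pending_moves = []  # (sequence, dx) pairs awaiting acknowledgement
        self.position = 0.0

    def send_move(self, sequence, dx):
        # Predict locally and remember the move until the server confirms it
        self.pending_moves.append((sequence, dx))
        self.position += dx
        # ...transmit (sequence, dx) to the server here...

    def on_server_state(self, last_processed, server_position):
        # Discard every move the server has already processed,
        self.pending_moves = [(seq, dx) for seq, dx in self.pending_moves
                              if seq > last_processed]
        # then rewind to the authoritative state and replay the remainder.
        self.position = server_position
        for _, dx in self.pending_moves:
            self.position += dx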
In terms of preventing a cascade of corrections (because the server will receive Move 1, correct it, and then receive the later moves that were sent before the client received the correction), there are two options:

1) When the server performs a correction, it can check whether the sequence number of an incoming move is after the corrected move, and if so, it won't correct it (see the sketch after this list). To work out when "fresh" moves are sent, each move is sent to the server with the id of the last correction the client applied, and the server can compare. This doesn't mean the client is able to cheat, as the server will not "trust" the client for any state information; it just means the client-side player could briefly move where it shouldn't in its local game world.

2) Always send corrections and allow the client to ignore them when it knows they are for old moves that have since been corrected. This is a simpler design, but wastes bandwidth if the data needed to replicate the physics correction is larger than a single move sequence number.
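
A sketch of option 1 on the server, assuming each move carries the id of the last correction the client applied (the 1-unit speed limit is an arbitrary stand-in for real validation):

class ServerPlayer:

    def __init__(self):
        self.position = 0.0
        self.last_correction_id = 0

    def on_move(self, dx, acked_correction_id):
        if abs(dx) <= 1.0:
            # The move passes validation: apply it, nothing to correct
            self.position += dx
            return None
        # Invalid move: keep the authoritative position. Only issue a new
        # correction if the client generated this move *after* applying our
        # previous one; stale moves are already covered by it.
        if acked_correction_id >= self.last_correction_id:
            self.last_correction_id += 1
            return (self.last_correction_id, self.position)
        return None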
If you're still a little confused, this is my implementation. It's written in Python.
A quick "heads up" - functions in Python are defined as follows:

def func_name(arg_1, arg_2, annotated_arg: annotation, default_arg=default_value, annotated_default_arg: annotation=default_value) -> return_annotation:
    statement_1...

# So you could write a function like:


def func_name(x, y):
    return x + y 

or you could tell a library about all of the data types (the annotations by themselves don't actually do anything):


def func_name(x: int, y: int) -> int:
    return x + y

In my case, I use the return annotation "->" to indicate where an RPC call is intended: simply defining the annotation means the function is treated as an RPC call.

I use StaticValue instances to denote the type of the data in the arguments list so we can serialise it.

Annotations allow you to "write a note about" arguments; they don't do anything by themselves, unless a library wishes to inspect them and do some meta magic (which is what I do).
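
As an illustration of that meta magic, a library can pick out RPC-style functions by inspecting their annotations. This is a simplified sketch of the idea, not my actual framework:

import inspect

class Netmodes:
    server = "server"
    client = "client"

def find_rpc_calls(cls):
    # Collect methods whose return annotation names a netmode target
    rpc_calls = {}
    for name, func in inspect.getmembers(cls, inspect.isfunction):
        target = inspect.signature(func).return_annotation
        if target in (Netmodes.server, Netmodes.client):
            rpc_calls[name] = target
    return rpc_calls

class Player:
    def fire_weapon(self, weapon_id: int) -> Netmodes.server:
        pass  # body executes on the server when invoked from the client

print(find_rpc_calls(Player))  # {'fire_weapon': 'server'}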

The thing I don't get is that the messages we send in fast action games are more like "I pressed the right arrow key" than any sort of "move right 1 unit". If the messages generated by the client are "I pressed the right arrow key", then the move just starts happening on both client and server. While we are moving to the right, the server takes snapshots containing our actual position at given points in time and sends them to our client. How can the client compare these positions from the server against its own history of positions? It doesn't seem like the client would know at what point in time the server position exists in comparison to any given point in its own time. Yes, we could store position timestamps locally on the client, but it seems like time/ticks would have to be synced between client and server for this to work. I would think syncing would cause problems and they would drift out of sync over time.


because the server will receive Move 1, correct it,

What do you mean, the server corrects it? My understanding is that the server isn't doing any corrections. It's the boss, right? It's the one who sends out the final positions to the clients, and the clients are responsible for correcting their positions to match the server's. Why would the server correct anything?

Neither Valve's nor Gabriel's docs talk about the server doing any sort of correction. It's always the client correcting itself to the server's values.

I think my biggest issue is: are these commands sent by the client at its tick rate even while keys are held down? My assumption is that they are only sent on press and release so as not to waste bandwidth, but if I'm wrong and we always send these commands while a key is held down, then this would all make sense, and I guess it's a trade-off of bandwidth vs accuracy.

It seems like time/ticks would have to be synced between client and server in order for this to work


Yes. Physics/simulation needs to be quantized to a given tick rate, and clients and servers need to communicate and measure time in terms of ticks, not seconds.

When the graphics render rate is faster than the simulation rate, you either do a little bit of extrapolation/interpolation on the client, or you simply don't render faster than physics and let the GPU cool off a little. Or you make your physics step size so small that graphics won't catch up -- 240 Hz, or even 1000 Hz, is quite possible with simpler physical scenes.

Similarly, when rendering is slower than the physics rate, you may run multiple physics ticks for each frame rendered on screen.
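
The usual shape of such a loop, as a generic sketch (simulate and render are placeholders for your own callables):

import time

TICK_RATE = 60  # fixed simulation rate, in ticks per second
DT = 1.0 / TICK_RATE

def run(simulate, render):
    accumulator = 0.0
    previous = time.perf_counter()
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Run as many fixed-size physics steps as real time demands:
        # zero steps when rendering outpaces the simulation, several
        # steps when rendering falls behind it.
        while accumulator >= DT:
            simulate(DT)
            accumulator -= DT
        # Render once, passing the leftover fraction of a tick so the
        # renderer can interpolate/extrapolate between physics states.
        render(accumulator / DT)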
enum Bool { True, False, FileNotFound };

Yes. Physics/simulation needs to be quantized to a given tick rate, and clients and servers need to communicate and measure time in terms of ticks, not seconds.

I don't synchronise ticks / clocks at all at the moment. When you start resolving disputes that occurred in the past (shooting players for example) you will need a global unit of time, and ticks are the best way to do this. Once this occurs you will need to synchronise clocks.

One reason I would use timestamps over ticks is that they allow greater flexibility in working with clients of different tick rates.

What does it mean to sync ticks between client and server? I mean, if I make my game tick 20 times a second on the client and my server is ticking 20 times a second, that means they are ticking at the same time, right? That assumes the spacing between ticks is even on both sides; even if it isn't exactly, maybe it's not a big enough difference to matter?

And/or does this mean that the server is storing tick counts (just an increasing integer) for each client, and it's like in the spy movies where they both have watches and on the word "GO" they both set their watches together (i.e. set the starting tick count to 0 at the same time), and from then on their ticks should be in sync? I imagine the way you do this has something to do with ping time / 2, and the server being the one to send the "GO" message to the client? By the time the client gets the "GO" message, it has to calculate what tick number the server is actually on, based on 20 ticks a second and how long the "GO" message took to arrive. So the client may be starting its tick value at 2 or 3 or something like that? Is that the right line of thinking?

I guess another way would be for the server to just keep its tick count since startup and send the "GO" message to each client on game start, passing its tick count along with it, so the client can use that value and start increasing its local copy of the server tick count to stay in sync. This all assumes the same tick rate between client and server.
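
As a rough sketch of that arithmetic (all numbers illustrative):

TICK_RATE = 20  # ticks per second, assumed identical on client and server

def estimate_server_tick(tick_in_go_message, rtt):
    # The "GO" message is one one-way trip old when it arrives, so the
    # server has advanced by roughly rtt / 2 seconds since stamping it.
    return tick_in_go_message + int((rtt / 2) * TICK_RATE)

# e.g. the server stamps tick 0 and our measured round trip is 0.3 s:
print(estimate_server_tick(0, 0.3))  # -> 3, so the client starts at tick 3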

If you've seen latency diagrams you'll understand what I mean.

1. Computer clocks are rarely exactly synchronised with one another (and the difference can be quite large).

2. There is a delay ("latency" or "ping") between two peers over the internet.

3. This latency is usually sufficiently large that a game will have changed by some margin by the time a client receives the gamestate.

Because of these points, clocks cannot be assumed to be in sync. Otherwise, two events that happened in reality at exactly the same time would have different timestamps. To correct this we perform clock synchronisation. However, this is not as simple as sending the time from the server to the client, because the server's time will have progressed by the latency of the downstream connection by the time the packet is received by the client. The client cannot determine this on its own, as it doesn't know the latency of the connection.

To determine the latency of the connection, one method is to take the average RTT and halve it (send a packet to the server, which immediately forwards it back to the client).


class Replicable:

    def __init__(self):
        self.rtt = 0.0

    @property
    def ping(self):
        # One-way latency is estimated as half the round-trip time
        return self.rtt / 2

    def determine_ping(self):
        # Client-side entry point: send our current time to the server
        self.get_rtt(WorldInfo.elapsed)

    def get_rtt(self, timestamp: FlagType(float)) -> Netmodes.server:
        # Runs on the server: immediately echo the timestamp back
        self.set_rtt(timestamp)

    def set_rtt(self, timestamp: FlagType(float)) -> Netmodes.client:
        # Runs on the client: elapsed time minus the stamp is the round trip
        self.rtt = WorldInfo.elapsed - timestamp
        print("RTT = {}".format(self.rtt))

I use RakNet for my networking and it seems like they handle this for you!

http://www.jenkinssoftware.com/raknet/manual/timestamping.html

So I got that going for me :) Always good to understand what's going on behind the scenes though.

It's a little confusing because you seem to be talking about time and Hplus says ticks.

One reason I would use timestamps over ticks is that they allow greater flexibility in working with clients of different tick rates.


That may work for your particular game. Experience shows that running simulations at different speeds on different machines is, generally speaking, a bad pattern that most games that try it end up moving away from.

I would, based on both experience and reading, *highly* recommend *anyone* doing simulation/physics-based networked games to have a fixed time step size / tick rate.

Finally, the point of syncing ticks/clocks is to make sure that events happen in the same order on all machines. The clocks, and ticks derived from them, will not be 100% accurate, but as long as the protocol keeps the relative order and progress of ticks within limits, the game will play well. For example, as a client starts sending ticks that arrive too late at the server, the server can tell the client to increase the delta used in the tick derivation function (jump time backwards.) At the same time, as the client sends packets that arrive way too early at the server, the server can tell the client to decrease this delta (jump time forwards.) If you have sufficient hysteresis to compensate for network jitter, this will quickly settle to a good state and there will be no visible glitches.
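
A sketch of that steering decision on the server (thresholds and names are made up; widen the band to match your network jitter):

LATE_THRESHOLD = 0   # inputs for ticks the server has already simulated
EARLY_THRESHOLD = 5  # inputs more than 5 ticks ahead are "way too early"

def steer_client_delta(input_tick, server_tick):
    # Returns the delta adjustment to send back to the client, if any
    lead = input_tick - server_tick
    if lead <= LATE_THRESHOLD:
        return "increase delta"  # inputs arrive too late
    if lead >= EARLY_THRESHOLD:
        return "decrease delta"  # inputs arrive way too early
    return None  # inside the hysteresis band: leave the client alone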
enum Bool { True, False, FileNotFound };

Would you clarify what you mean by "too early"? Do you mean to say that clients are simulated at the same tick rate as non-client actors on the server, and that they rely on a constant stream of input? I don't do this, because any network conditions that cause a packet to arrive late / early will force the server to correct a client, who may actually be "right" in most cases.

This topic is closed to new replies.
