
Network Tick rates


There should be no buffer for network packets on either side.

There could be a buffer for queued commands on either side. Queued commands are just a subset of all possible messages that will arrive in a given packet.

Any message that is not timestamped with "please apply me at time X" should probably be applicable right away.
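For illustration, a handler along these lines would apply untimestamped messages immediately and queue only the tick-stamped commands (a minimal Python sketch; the field and function names are assumptions, not from this thread):

from collections import deque

command_queue = deque()  # commands waiting for their scheduled tick

def handle_message(message, current_tick):
    # apply_at_tick is a hypothetical field: None means "no schedule".
    if message.apply_at_tick is None or message.apply_at_tick <= current_tick:
        message.apply()                # apply immediately (or it is already due)
    else:
        command_queue.append(message)  # hold until the simulation reaches its tick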


I took a break from this before coming back to reconsider it.

I still have a remaining question, if you would be so kind as to clarify for me.

My conflict is between clock synchronisation and the command delay on the client. Should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system, e.g.


received_tick = synchronised_tick + projection_tick

or should I run the client ahead (so the synchronised tick itself handles the latency upstream)?

I assume that running the client ahead makes more sense.

Following from this, how best should I approach clock synchronisation (with the second suggested method, whereby we run ahead of the server)?

The most robust method for finding the RTT seems to me to be something like this


fac = 0.05  # weight given to the newest sample
rtt = ((1 - fac) * rtt_old) + (fac * rtt_sample)  # exponentially smoothed RTT estimate
new_time = timestamp + rtt  # client clock target: server timestamp plus the smoothed RTT

But then I need to include the command delay, which will likely change when the RTT measurement changes, so it will jitter around a little bit (the RTT estimate may be "off", so the server will tell the client its command was late and the client will increment the command delay; but the RTT estimate will likely compensate for the changed conditions the next time we update it).

The other option is that we don't separate the two ideas of "command latency" and "upstream latency" and just have a single latency variable. We update this by nudging it from the server.


if not_current_command:                 # the command's tick doesn't match the server's current tick
    if command_is_late:
        self.client_nudge_forward()     # client is running behind: push its clock forward
    else:
        self.client_nudge_backwards()   # client is running too far ahead: pull its clock back

But this seems a little too coarse-grained, surely? Plus there would be some significant convergence time unless we factored in the timing difference as a scalar. I'm not sure how large this scalar would need to be, though.


difference = int((server_tick - command_tick) * convergence_factor)

if not_current_command:
    self.client_nudge(difference) 

-------------------------------

My findings:

My dejitter buffer is simply a deque structure.
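Roughly, the idea is something like this (a minimal sketch of the interface, not the exact code):

from collections import deque

class DejitterBuffer:
    # Moves are appended as they arrive and consumed in tick order.

    def __init__(self):
        self.moves = deque()

    def push(self, tick, move):
        self.moves.append((tick, move))

    def peek_tick(self):
        # Tick of the oldest buffered move, or None if the buffer is empty.
        return self.moves[0][0] if self.moves else None

    def pop(self):
        return self.moves.popleft()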

The clock synchronisation relies on the "nudging" technique. I have found it to be responsive and stable under local testing conditions with simulated latency (but not with jitter as yet). I cannot use a smoothing factor because it would otherwise take too long to converge on the "correct" value.

To perform clock sync (a sketch of the server-side consume loop follows this list):

  1. Send the command, a "received_clock_correction" boolean and the client's tick to the server. The server checks whether the tick is behind its current tick; if so, it discards the command and requests a forward nudge (by the tick difference). Otherwise, we store the command in the buffer.
  2. In the simulation, read the first packet in the buffer. If there isn't one available, don't do anything and return.
  3. If the tick is late (I can't imagine why it would be, as we should have caught this, but let's say you don't run the simulation function for a frame), we just remove it from the buffer (consume the move) and then recursively call the update function to see if we need to catch up with the newer packets in the buffer.
  4. Otherwise, if the tick is early, we check how early it is. If it is more than a safe margin (e.g. 10 ticks), we send a backwards nudge to the client (by the tick difference, not the "safe" tick difference) and consume the move. Otherwise, we just skip this frame until the server catches up with the latency in the buffer (which, as mentioned above, is no more than 10 ticks early).
  5. Otherwise we consume and run the move.
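Steps 2-5 as a server-side method look roughly like this (a sketch only: self.buffer is assumed to be the deque of (tick, move) pairs, and send_backwards_nudge / run_move are placeholder helpers):

SAFE_MARGIN = 10  # the "safe" early window, in ticks

def simulation_update(self, server_tick):
    if not self.buffer:                               # step 2: nothing buffered yet
        return

    tick, move = self.buffer[0]

    if tick < server_tick:                            # step 3: the move is late
        self.buffer.popleft()                         # consume it ...
        self.simulation_update(server_tick)           # ... and recurse to catch up
    elif tick - server_tick > SAFE_MARGIN:            # step 4: too early
        self.send_backwards_nudge(tick - server_tick) # nudge by the full difference
        self.buffer.popleft()
    elif tick > server_tick:                          # step 4: early, but within the margin
        return                                        # skip this frame; let the server catch up
    else:                                             # step 5: on time
        self.buffer.popleft()
        self.run_move(move)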

The purpose of the "received_clock_correction" boolean is to allow lockstep-like correction. This means that we won't send n "correct by x" RPC calls in the time it takes for the client to receive and apply the correction and send a new packet. I already have an RPC locking system in place (three functions: server_lock, server_unlock and is_locked, on the server side), but they will not be delayed by the dejitter buffer, which is what we use for clock correction.

The boolean is included in the command "packet" (RPC call) and so it is read at the same time the command is considered for execution. In my latest implementation, I have two new RPCs (server_add_buffered_lock and server_remove_buffered_lock) which work like their unbuffered counterparts except that they are executed in time with the dejittered commands.
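The "only correct once per round trip" idea boils down to something like this on the server (a hedged sketch with made-up names, not the actual RPC plumbing):

def consider_clock_correction(self, difference, received_clock_correction):
    # The client sets received_clock_correction once it has applied the last
    # correction, which releases the lock for the next one.
    if received_clock_correction:
        self.correction_pending = False

    if difference != 0 and not self.correction_pending:
        self.send_nudge(difference)       # placeholder for the actual buffered RPC
        self.correction_pending = True    # don't send another until acknowledged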


Should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system?


The clocks cannot be "synchronized" perfectly. Speed of light and all that :-)

Your question is, if I hear it right, "should I keep two variables: estimated clock, and estimated latency, or should I just keep the estimated clock and bake in the latency?"

The answer is "it depends on your simulation model." The only strict requirement is that you must have a way to order all commands from clients, in order, on the server, and order all updates from the server, on each client, in order.

I typically think of this as two separate values: the estimated clock, and the estimated send latency. I find that works better for me. Your approach *may* be different and *may* work with a single clock value -- it all depends on how you arrange for the updates to be ordered.
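A minimal sketch of that two-value bookkeeping, assuming ticks and a measured round-trip time (the names are illustrative only):

class ClockEstimate:
    def __init__(self):
        self.estimated_server_tick = 0
        self.estimated_send_latency = 0   # one-way latency, in ticks

    def step(self):
        # Advance the estimate by one tick per local simulation frame.
        self.estimated_server_tick += 1

    def on_server_packet(self, server_tick, rtt_ticks):
        # The packet left the server roughly half a round trip ago.
        self.estimated_send_latency = rtt_ticks // 2
        self.estimated_server_tick = server_tick + self.estimated_send_latency

    def tick_for_outgoing_command(self):
        # Stamp commands far enough ahead that they arrive at roughly the
        # tick the server will be executing when they get there.
        return self.estimated_server_tick + self.estimated_send_latency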

Thanks hplus!

It seems to work fantastically at the moment; the only concern I have is jitter. The buffer only becomes a jitter buffer when the client's clock is projected forward, and to do this it currently needs to overcompensate on the clock. I think I will add this latency myself to allow dejittering to occur by default.

From reading the replies in the whole thread it feels like you are over-complicating things in your head a lot; I did the same when I initially learned how to deal with synchronizing time between the client and server. A lot of the confusion comes from the fact that most people call it "synchronize", when in reality that's not what it's about.

hplus said something which is key to understanding this and to realizing how simple it actually is:

The only strict requirement is that you must have a way to order all commands from clients, in order, on the server, and order all updates from the server, on each client, in order.


This is the only thing that actually matters; there is no need to try to keep the client's time in line with the server's time, or to forward-estimate the local client time against the remote server time by adding half the latency (rtt/2) to some local offset.

The piece of code I read that made it all "click" for me was the CL_AdjustTimeDelta function in the Quake 3 source code, more specifically lines 822 to 877 in this file: https://github.com/id-Software/Quake-III-Arena/blob/master/code/client/cl_cgame.c. It shows how incredibly simple time adjustment is.

There are four cases which you need to handle:
  • Are we off by a huge amount? Reset the clock to the last time/tick received from the server (line 844)
  • Are we off by a large amount? Jump the clock towards the last time/tick received from the server (line 851)
  • Are we slightly off? Nudge the clock in the correct direction (lines 857 - 871)
  • Are we off by nothing/almost nothing? Do nothing
Now Quake 3 uses "time" as the value we try to sync the client against the server with; I have found it a lot easier to use "ticks" throughout and forgo any concept of "time" in my code completely. The process I am going to describe next is the same process I described (a bit long-windedly) in my earlier post, but I'll try to make it a bit more transparent and easier to grasp:

Each peer has its own local tick which increments exactly as you would expect: +1 for every simulation frame we run locally. This is done on both the server and all the clients, individually and separately. The local tick is sent as the first four non-header bytes of each packet to every remote peer we are connected to. In reality the clients just send their own local tick to the server, and the server sends its own local tick to all clients.
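For example (assuming a 32-bit little-endian tick; the exact encoding isn't specified here):

import struct

def write_packet(header, local_tick, payload):
    # The sender's local tick goes in as the first four bytes after the header.
    return header + struct.pack("<I", local_tick & 0xFFFFFFFF) + payload

def read_packet_tick(packet, header_size):
    # Read the remote peer's local tick back out of a received packet.
    (tick,) = struct.unpack_from("<I", packet, header_size)
    return tick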

Each connection to a remote peer has what is called the remote tick of that connection; this exists on the connection object for both client->server and server->client. The remote tick of each connection is what we try to keep in sync with the local tick at the other end of the connection. This means that, on the client, the remote tick of the connection to the server tries to keep in sync with the server's local tick, and vice versa.

The remote tick of each connection is also stepped +1 for each local simulation tick. This allows our remote tick to step forward at roughly the same pace as the other end of the connection steps its local tick (which is what we are trying to stay in sync with). When we receive a packet we look at the four tick bytes of that packet and compare them against the remote tick we have for the connection; we check the same four conditions as in the Q3 source's CL_AdjustTimeDelta function, but with ticks instead.
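A rough tick-based version of those four cases (the thresholds are placeholders; Quake 3's are in milliseconds rather than ticks):

RESET_THRESHOLD = 50   # "huge" difference: snap straight to the packet's tick
JUMP_THRESHOLD = 10    # "large" difference: jump part of the way towards it

def adjust_remote_tick(connection, packet_tick):
    diff = packet_tick - connection.remote_tick

    if abs(diff) > RESET_THRESHOLD:
        connection.remote_tick = packet_tick     # way off: hard reset
    elif abs(diff) > JUMP_THRESHOLD:
        connection.remote_tick += diff // 2      # far off: jump towards it
    elif diff > 0:
        connection.remote_tick += 1              # slightly behind: nudge forward
    elif diff < 0:
        connection.remote_tick -= 1              # slightly ahead: nudge back
    # diff == 0: in sync, nothing to do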

After we have adjusted our remote tick we read all the data in the packet and put it in the connection's simulation buffer; everything is indexed by the local tick of the remote end of the connection. We then simply run something like this for each connection to de-queue all the remote simulation data which should be processed:


// Process every buffered entry up to and including the connection's remote tick.
while (connection->simulationBuffer->next && connection->simulationBuffer->next->tick <= connection->remoteTick) {
    connection->simulationBuffer->next->process();
    connection->simulationBuffer->next = connection->simulationBuffer->next->next;
}

