
Command Frames and Tick Synchronization


Thanks for your replies; I was on a short vacation, hence the late response. 

Big thanks for the synchronization code insights. I will go over the code, and as soon as I have a version of my own I'm confident in, I will share it here. 

Cheers 

Hey, 

I have been trying out the offset approach for a while now and got pretty good results in a low-latency environment. But I am unhappy with what I get at higher latency. 

To clarify: 
Currently my server and client both tick the network and the simulation at 60 Hz.
Each tick the client sends a message with its current tick: 


public struct Input {
	public ulong tick;
	public Vector2 stick;
}

On the server the input message is used in the following way: 


public void OnPlayerInput(Input input) {
  lastClientTick = input.tick;
  inputQueue.Enqueue(input, lastClientTick); 
}
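
For reference, inputQueue is essentially a buffer keyed by the client tick; a minimal sketch of that idea (my actual container has more to it, so treat this as an illustration):


using System.Collections.Generic;

public class InputBuffer {
	// Inputs stored under the client tick they were sampled for.
	private readonly Dictionary<ulong, Input> inputsByTick = new Dictionary<ulong, Input>();

	public void Enqueue(Input input, ulong tick) {
		inputsByTick[tick] = input; // last write wins if a duplicate arrives
	}

	// Returns false if the input for this tick has not arrived (yet).
	public bool TryDequeue(ulong tick, out Input input) {
		if (inputsByTick.TryGetValue(tick, out input)) {
			inputsByTick.Remove(tick);
			return true;
		}
		return false;
	}
}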

Each tick the server sends a message to the client containing the offset: 


public struct TimeInfo {
	public ulong serverTick;
	public double serverTime;
	public int clientToServerOffset;
} 

This message is filled in the following way: 


public void Simulate(ulong currentServerTick, double currentServerTime) {
	TimeInfo t = new TimeInfo();
	t.serverTick = currentServerTick;
	t.serverTime = currentServerTime;
	t.clientToServerOffset = (int) (lastClientTick - currentServerTick + 1);
	// send t to the client.
}
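
To make the offset concrete, here is a small worked example (the numbers and the interpretation are my own, so treat them as an assumption):


// Worked example: the server is about to simulate tick 1000 and the newest
// input it has received so far was stamped with client tick 1002.
ulong currentServerTick = 1000;
ulong lastClientTick = 1002;
int clientToServerOffset = (int) (lastClientTick - currentServerTick + 1); // = 3
// An offset of 1 means the input for the current tick arrived just in time;
// 3 means there are two ticks of slack, and a negative value means the
// client's inputs are arriving too late and it has to tick faster.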

The client is using the TimeInfo to adjust its tick rate: 


ulong clientTick;
const float tickRate = 1/60f;
float adjustedRate = tickRate;

private void AdjustTickRate(TimeInfo t) {
  int offset = t.clientToServerOffset;
  // Way off (more than 3 seconds): snap to the server tick, e.g. right after connecting.
  if (offset > 180 || offset < -180) {
    clientTick = t.serverTick;
    return;
  }
  // Negative offset: my inputs arrive too late, shorten the frame period (tick faster).
  if (offset < -32) {
    adjustedRate = tickRate * 0.75f;
  }
  else if (offset < -15) {
    adjustedRate = tickRate * 0.875f;
  }
  else if (offset < 1) {
    adjustedRate = tickRate * 0.9375f;
  }
  // Large positive offset: I am too far ahead, lengthen the frame period (tick slower).
  else if (offset > 32) {
    adjustedRate = tickRate * 1.25f;
  }
  else if (offset > 15) {
    adjustedRate = tickRate * 1.125f;
  }
  else if (offset > 8) {
    adjustedRate = tickRate * 1.0625f;
  }
  else {
    adjustedRate = tickRate;
  }
}

So if the offset is either too high or too low, I snap to the server tick (assuming I just connected). Otherwise I adjust my local tick rate to send/receive and simulate faster or slower than 60 Hz, while still using a fixed deltaTime of 16.6 ms for the simulation. 

Results: 
RTT ~20ms
Loss 0%
Client runs ahead of server ~ 4-5 ticks. 

RTT ~100ms
Loss 0%
Client runs ahead of server ~ 2 ticks. 

RTT ~200ms
Loss 0%
Client runs ahead of server ~ 2 ticks.

Both results seem OK-ish but also wrong to me. For an RTT of 20 ms I expected the client to run ahead by something like 2 or 3 ticks based on my calculation, and for an RTT of 100 ms I'd expect a range of 5-6 ticks. So I think there is something off... 

I hope you guys have an idea, because I'm clearly doing something wrong here :D
Thanks, cheers, 

 


It's possible that your measurements are measurement artifacts, rather than anything inherently wrong.

Do you run the server and client on the same machine? They might interfere.

Do you run your networking over TCP or UDP? If TCP, have you turned on TCP_NODELAY?

 

enum Bool { True, False, FileNotFound };
33 minutes ago, hplus0603 said:

It's possible that your measurements are measurement artifacts, rather than anything inherently wrong.

Do you run the server and client on the same machine? They might interfere.

 

I tried both running it on the same machine and running it on two different machines in the same local network. 
 

33 minutes ago, hplus0603 said:

Do you run your networking over TCP or UDP? If TCP, have you turned on TCP_NODELAY?

I use UDP; to be more specific, it's based on netcode.io + reliable.io by Glenn Fiedler. 

I can give you more details, but that is basically it. For simulating different network conditions (like loss, lag, and so on) I use the Windows tool clumsy (https://jagt.github.io/clumsy/), which works very well for that. 

I just ran the simulation again (on a different machine than before, but running both client and server on it) with the following results:
RTT = 16.63ms 
Loss = 0%

The offset the server sends to the client is 1.
The server receives client packets ~1 tick ahead (see the log output):


[Info  - 2018-05-15 18:20:34Z] Client is ahead  (Srvtick-17869 || Clttick-17870)
[Info  - 2018-05-15 18:20:34Z] Client is ahead  (Srvtick-17870 || Clttick-17871)
[Info  - 2018-05-15 18:20:34Z] Client is ahead  (Srvtick-17871 || Clttick-17872) 

This output is generated when the server receives a packet and the client tick is the tick included in said packet. 


Note: 
On the client I am actually 5 ticks ahead when sending the message, which seems odd because 5 ticks are ~80 ms. Even if I lose 1 tick (because I send and receive packets at the beginning of the frame on both client and server), it should not be 5 ticks, should it?  

RTT = 126 ms (clumsy added 50 ms of lag on inbound and outbound packets). 
Loss = 0%

The offset the server sends to the client is still ~1-2. 
The server receives client packets ~2 ticks ahead based on my log output. 
On the client, the last received server tick is ~13 ticks behind, so the client now runs 13 ticks in the future, which is nearly twice the RTT. 


Is there any other information I can provide? 

Edit: 
I should note that the "Client runs ahead of server ~2 ticks" in my previous post was the offset the server sent to the client at the given time. 

Sounds like you have a bug either in how you calculate the offset on the client, or how you print the offsets.

This is very hard to debug without good test cases, so I suggest you apply debugging and logging to the problem until you get it where you want it to be.

enum Bool { True, False, FileNotFound };

So, after not getting this to work as expected, I stripped out everything not related to the matter. 

The goals follow the Overwatch networking model: 
* Run the client ahead of the server. 
* If a packet is lost, tell the client to tick slightly faster until it's fine again.

So what's happening on my server: each tick the server 
* grabs the stored input for this tick from the buffer; if it does not exist, we set a flag. 
* integrates the simulation.
* sends a state message to the client.
 


public struct ServerStateMessage {
	public int serverTickAtTimeOfSending;
	public bool lossDetected;
}


So every time the server does not have input to process, the client will know. 
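
Roughly, the server tick then looks like this (a sketch; Integrate and SendToClient are placeholder names, and inputBuffer is the tick-keyed buffer from earlier):


private void ServerTick(int serverTick) {
	ServerStateMessage state = new ServerStateMessage();
	state.serverTickAtTimeOfSending = serverTick;

	Input input;
	if (inputBuffer.TryDequeue((ulong) serverTick, out input)) {
		state.lossDetected = false;
	} else {
		input = default(Input);    // fall back to an empty (or repeated) input
		state.lossDetected = true; // tell the client its input did not make it in time
	}

	Integrate(input);              // step the simulation with this input
	SendToClient(state);           // send the state message back to the client
}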

Each tick the client 
* reads the ServerStateMessages (duh). 
* calculates the estimated tick it needs to be at so that its packets arrive at the server early
* samples input
* integrates the simulation
* sends input and current tick


public struct InputMessage {
	public int clientTickAtTimeOfSending;
	public Input input;
}

That describes the basic loop both of them go through. 

So what exactly is the client doing to be "ahead" of the server? 


private const int MinimumTickAhead = 1;
private int addedTickAhead = 0;
private const float NetDt = 1 / 60f;
private const float NetDtInMs = NetDt * 1000f;
private int tick = 0;

private void Tick() {
  ServerStateMessage lastState = // some function to retrieve the last received state.
  int rttAsTick = Math.Max(Mathf.CeilToInt(Client.Instance.NetworkInfo.RttMillis / NetDtInMs), 1);
  if (lastState.lossDetected && addedTickAhead < rttAsTick + 1) // not sure about the second part of that condition
    addedTickAhead += 2;

  int estimatedTickToBeAhead = lastState.serverTickAtTimeOfSending + rttAsTick + MinimumTickAhead + addedTickAhead;
  int diff = estimatedTickToBeAhead - tick;

  if (diff < -60 || diff > 60) // more than 1 second off
    tick = estimatedTickToBeAhead; // I assume that will only happen at the beginning of the game for now.
  if (diff < -2) {
    // Local tick is ahead of the estimate. Tick slower.
    simulationTickRate = 1 / (60f - 3f);
  } else if (diff > 2) {
    // Local tick is behind the estimate. Tick faster.
    simulationTickRate = 1 / (60f + 3f);
  } else {
    // Local tick is near the estimate. Tick normally.
    simulationTickRate = NetDt;
  }
  if (addedTickAhead > 0) addedTickAhead--;

  // Simulation
  // SendInput
  tick++;
}



So far this works pretty well. Does any of you see a major flaw? 
Unfortunately, this is the first time I've done something like this.

Cheers



 


Your if() case will execute both the "snap time" and "tick slow" branches when diff < -60 (and the same for diff > 60).

Other than that, I don't quite see why packet loss would change your simulation rate. Typically, you use the local clock to drive the local simulation and just change the offset between "local clock" and "game tick"; the local clock keeps moving ahead even when a network packet doesn't arrive.

 

enum Bool { True, False, FileNotFound };

Oh yeah, I missed a return there.

My plan was to tick faster on the client to fill the server-side buffer faster, but to be honest this might have no real effect. I just don't want the situation where the server has no input from a client. 

But after thinking about it: if I lose the input for tick 105 and my RTT is 10 ticks, it would take 10 ticks until I actually notice the loss on the client. So yeah, I could actually remove that altogether.

Any other suggestions?

 

On 5/18/2018 at 11:51 PM, hplus0603 said:

Typically, you use the local clock to drive the local simulation and just change the offset between "local clock" and "game tick"; the local clock keeps moving ahead even when a network packet doesn't arrive.

 

 

Could you elaborate on this? 

 

You don't show how "adjustedRate" is used to arrive at the tick count.

Typically, the client clock and the server clock are so close in actual speed, that the rate doesn't need to be adjusted, just the offset.

I'm assuming that your time-to-tick function looks something like:


double lastTime = read_time();
int lastTick = 0;

double adjustedRate = tickRate;

int get_now_tick() {
  double nowTime = read_time();
  while (lastTime < nowTime) {
    lastTick += 1;
    lastTime += adjustedRate;
  }
  return lastTick;
}

Your code adjusts adjustedRate quite a lot (by up to 25%) but the clock is never anywhere near that much off in rate of progression -- you can typically assume client and server clocks run at a speed that's within 0.001% of each other. (Other than when a client sleeps and then wakes up again.)

Thus, you should typically adjust your clock offset by how far apart the client clock is from the server clock instead. You don't need adjustedRate at all.
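
In code, that could look something like the following (just a sketch; targetTicksAhead and the 0.1 correction factor are placeholder values you would tune):


// Derive the tick from the local clock plus an offset, and only nudge the offset.
double tickOffset = 0;            // measured in ticks; replaces adjustedRate
const int targetTicksAhead = 2;   // how much slack you want the server to see

int get_now_tick() {
  return (int)(read_time() / tickRate + tickOffset);
}

void on_time_info(TimeInfo t) {
  // If the server sees less slack than we want, move our offset forward; if it
  // sees more, move it back. Apply only a fraction per update to avoid oscillation.
  double error = targetTicksAhead - t.clientToServerOffset;
  tickOffset += error * 0.1;
}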

That being said, reading your original numbers, being 3-4 ticks away at 20 ms and 2 ticks away at 100/200 ms is actually quite good. Note that you will always be at least 2 full simulation frames away -- the client and the server aren't perfectly in sync, so you'll always be 2 * 1/60 seconds off in jitter. Then you add the amount of extra buffering you use to account for jitter, and it seems quite reasonable that you're (4 * 1/60 == 67 milliseconds) away on a 20 ms link. With a longer-latency link, the link latency "swallows" the smaller jitter, and it can settle on 2 ticks.

Note that the way you measure the tick discrepancy must hide the transmission latency -- if you were to plot your "tick distance" as the physical-clock-to-logical-tick difference on each side, and the physical clocks were well synchronized (PTP or NTP with a good source), then you'd see that the physical difference was related to the latency. But that's OK -- as long as the server sees the commands in time, you're doing well.

enum Bool { True, False, FileNotFound };

