
Balancing server update loop

Started June 19, 2017 04:44 PM
2 comments, last by Acar 7 years, 5 months ago

Hello. I have a bizarre inconsistency in my server update loop. Below is approximately what I do:


/* Calculate delta time, do other minor things */

while (dt_sys >= SYSTEM_STEP_SIZE) { /* SYSTEM_STEP_SIZE = 20ms */
  /* Update sockets connected to other servers, swap queues for received
     packets, parse the received-packet queue (doesn't actually parse
     packets, just copies received data onto client ring buffers),
     update send network ring buffers for all clients, parse parallel
     task results */
}

while (dt >= STEP_SIZE) { /* STEP_SIZE = 50ms */
  /* Run game simulation */
}

/* Traverse client list and parse packets (with a cap on how many can be parsed per pass) */

I get very inconsistent update times for my game simulation (sometimes it's 0ms, sometimes it's 16, 60, 100, and it goes all the way up to >1sec!). As you might have noticed, I don't have a sleep anywhere in my code, so it ends up taking up an entire CPU core (is that a bad thing to do?). Now here's the bizarre part: when I attach a profiler (Very Sleepy CS), the inconsistency sort of disappears. I start getting update times between 0-16ms with very rare spikes. I'm honestly very confused at this point and not sure how to debug this further. Another thing worth mentioning is that I use GetTickCount to calculate the delta time used in the loops above.

First, you should not keep updating networking in a time loop. Drain all incoming packets until recv() returns nothing, and then process all incoming packets until the incoming packet queue is empty, and then go on to perhaps step the world again.
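
Something like this sketch, assuming a non-blocking UDP socket (enqueue_packet() is just a placeholder for whatever queue you use):

#include <winsock2.h>

extern void enqueue_packet(const char *data, int len);  /* placeholder */

static void drain_socket(SOCKET s)
{
    char buf[2048];
    for (;;) {
        int n = recv(s, buf, sizeof buf, 0);
        if (n > 0) {
            enqueue_packet(buf, n);  /* copy into the incoming queue */
        } else if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
            break;                   /* drained; nothing left to read */
        } else {
            break;                   /* n == 0 or a real error */
        }
    }
}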

Second, I don't see your code subtract anything from the "dt_sys" and "dt" variables inside the loops. You'd never break out of the loops.

Third, I don't see your code increment "dt" and "dt_sys" variables outside the loop. Given your description, it sounds to me as if you are perhaps not managing that correctly, and building up "wrong" dt values?
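
The usual shape is a fixed-timestep accumulator; a sketch, where now_ms() and simulate() are placeholders for your clock and game step:

#define STEP_SIZE 50  /* ms per simulation step */

extern unsigned now_ms(void);          /* placeholder monotonic clock */
extern void simulate(unsigned dt_ms);  /* placeholder game step */

void run(void)
{
    unsigned accumulator = 0;
    unsigned last = now_ms();

    for (;;) {
        unsigned now = now_ms();
        accumulator += now - last;     /* accumulate outside the loop */
        last = now;

        while (accumulator >= STEP_SIZE) {
            simulate(STEP_SIZE);       /* one fixed step */
            accumulator -= STEP_SIZE;  /* drain inside the loop */
        }
    }
}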

Fourth, what timer are you using to measure time? If you're using Windows GetTickCount(), or even timeGetTime(), those are terrible for resolution. You'd want something based on QueryPerformanceCounter(). On Linux, you want something based on clock_gettime(CLOCK_MONOTONIC_RAW). On both systems, gettimeofday() is also not particularly stable and should be avoided.
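
As a side note, GetTickCount() typically only advances in 10-16 ms increments, which is consistent with you seeing exactly 0 and 16 in your measurements. A millisecond clock built on QueryPerformanceCounter() can look roughly like this (initialize the frequency once at startup in real code; it is fixed at boot):

#include <windows.h>

static unsigned long long now_ms(void)
{
    static LARGE_INTEGER freq;  /* fixed at boot, so caching is safe */
    LARGE_INTEGER t;
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t);
    return (unsigned long long)(t.QuadPart * 1000 / freq.QuadPart);
}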

So, if you rip out "dt_sys" from your code and just run the code that's inside that while() block once per outer loop, that's a good first step.

Then, show us the code that increments and decrements "dt" and the code that reads the clock.

Then, if you're still having trouble, we might be able to meaningfully help with details.

enum Bool { True, False, FileNotFound };
13 hours ago, hplus0603 said:

First, you should not keep updating networking in a time loop. Drain all incoming packets until recv() returns nothing, and then process all incoming packets until the incoming packet queue is empty, and then go on to perhaps step the world again.

Previously I had the whole 'dt_sys' block outside of any timed loop, running once per outer iteration. What was happening is that it was spending a huge amount of time trying to lock mutexes. I do have mutexes (CRITICAL_SECTION) in places such as PopCompletedParallelTask, FlushClientSendBuffer (I use IOCP for external networking), and similar.
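
The queue swap itself is short; roughly this shape, with PacketQueue standing in for my actual structures:

#include <windows.h>

typedef struct PacketQueue PacketQueue;  /* stand-in for the real type */

static CRITICAL_SECTION g_lock;   /* initialized once at startup */
static PacketQueue *g_incoming;   /* filled by IOCP completion threads */

/* Hand the consumer the filled queue and give the producers an empty
   one; the lock is only held for the pointer swap. */
PacketQueue *swap_incoming(PacketQueue *empty)
{
    PacketQueue *full;
    EnterCriticalSection(&g_lock);
    full = g_incoming;
    g_incoming = empty;
    LeaveCriticalSection(&g_lock);
    return full;  /* parse outside the lock */
}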

 

13 hours ago, hplus0603 said:

Second, I don't see your code subtract anything from the "dt_sys" and "dt" variables inside the loops. You'd never break out of the loops.

Third, I don't see your code increment "dt" and "dt_sys" variables outside the loop. Given your description, it sounds to me as if you are perhaps not managing that correctly, and building up "wrong" dt values?

I do decrement them inside the loop after every step. 


/* Calculate delta time */
ct     = GetTickCount();
dt     = ct - g->last_update_ticks;
dt_sys = ct - g->last_sys_update_ticks;

while (dt_sys >= SYSTEM_STEP_SIZE) {
    if (dt_sys >= 500) {
        logMsg("System update loop has fallen behind: dt=%u", dt_sys);
        step_size = dt_sys;  /* catch up in one oversized step */
    }
    else {
        step_size = SYSTEM_STEP_SIZE;
    }
    /* Update things */
    dt_sys -= step_size;
}
/* Do similar for simulation update loop */

/* Carry the unconsumed remainder into the next frame */
g->last_update_ticks     = ct - dt;
g->last_sys_update_ticks = ct - dt_sys;

 

13 hours ago, hplus0603 said:

Fourth, what timer are you using to measure time? If you're using Windows GetTickCount(), or even timeGetTime(), those are terrible for resolution. You'd want something based on QueryPerformanceCounter(). On Linux, you want something based on clock_gettime(CLOCK_MONOTONIC_RAW). On both systems, gettimeofday() is also not particularly stable and should be avoided.

I am indeed using GetTickCount(). I will look into QueryPerformanceCounter().

This topic is closed to new replies.
