Hello. I have a bizarre inconsistency in my server update loop. Below is approximately what I do:
/* Calculate delta time, do other minor things */
while (dt_sys >= SYSTEM_STEP_SIZE) { /* SYSTEM_STEP_SIZE = 20ms */
    /* Update sockets connected to other servers, swap queues for received packets,
       walk the received-packet queue (this doesn't actually parse packets, it just copies
       the received data onto per-client ring buffers),
       update the send ring buffers for all clients, and collect parallel task results */
    dt_sys -= SYSTEM_STEP_SIZE; /* consume one fixed step */
}
while (dt >= STEP_SIZE) { /* STEP_SIZE = 50ms */
    /* Run game simulation */
    dt -= STEP_SIZE; /* consume one fixed step */
}
/* Traverse the client list and parse packets (with a cap on how many can be parsed per pass) */
I get very inconsistent update times for my game simulation: sometimes 0ms, sometimes 16, 60, 100, all the way up to more than a second. As you might have noticed, I don't sleep anywhere in this loop, so it ends up taking an entire CPU core (is that a bad thing to do?).

Now here's the bizarre part: when I attach a profiler (Very Sleepy CS), the inconsistency mostly disappears and I start getting update times between 0 and 16ms with very rare spikes. I'm honestly very confused at this point and not sure how to debug this further. Another thing worth mentioning is that I use GetTickCount to calculate the delta time used in the loops above.
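In case it's relevant, here is roughly how I accumulate the delta time with GetTickCount. This is a simplified sketch; the variable and function names are approximate, not my exact code:

#include <windows.h>

static DWORD g_prev_ticks; /* initialized once at startup with GetTickCount() */

/* Called once at the top of each outer loop iteration. */
void accumulate_time(double *dt_sys, double *dt)
{
    DWORD now = GetTickCount();
    DWORD elapsed = now - g_prev_ticks; /* unsigned math handles the tick-count wraparound */
    g_prev_ticks = now;

    *dt_sys += (double)elapsed; /* milliseconds, consumed by the 20ms loop */
    *dt     += (double)elapsed; /* milliseconds, consumed by the 50ms loop */
}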