
game server main loop

Started December 29, 2013 05:12 AM
4 comments, last by hplus0603 10 years, 10 months ago

Hi all, I have a few questions regarding the game server main loop. For single-player games, I know I could simply use fixed timesteps and update 40 times per second. However, I wonder how it's "done" on a game server.

1. Do game servers use fixed timesteps? If yes, what is the common "tick rate"?

2. If they tick at 40 Hz, that means they will not merely update the simulation 40 times a second, but also call send(), recv(), and accept() 40 times too, right?

3. Here's my game server loop (as far as I can figure). Is it correct? Please correct me if I'm wrong.


//this is just pseudo code

int main(){
 long timeskip=1000/40;  //40Hz simulation => 25 ms per tick

 init_server(); //initialization code goes here
 start_server(); //server starts listening
 long lasttime=get_time(); //must be initialized, or the inner loop misbehaves
 while(running){
  long time=get_time();
  //fixed timestep @ 40 Hz: run one tick for every 25 ms that have elapsed
  while(time > lasttime){
   poll_using_select(); //this will prepare sockets for polling
   accept_new_connection(); //server accept()
   recv_on_clients(); //if there's data to be read, call recv()
   simulate(timeskip/1000.0); //simulate game (step is in seconds)
   send_on_clients();  //send() update. But I'M AFRAID IT WILL BLOCK!!
 
   lasttime += timeskip;
  }
 }
 kill_server(); //cleanup code goes here
 return 0;
}

4. Is it the right choice to use select() with blocking sockets, or should I make them non-blocking no matter what? Please don't tell me to use boost::asio or epoll() or something like that; I'm still learning. I might resort to those options later, when scalability demands it, but for now I just want to use the basic socket functions.

5. I know I can poll socket readiness using select(), and then call recv() when there's data to be read, so I don't block on recv(). However, I also read that send() blocks by default. When does that happen? Isn't a socket always ready to send()?

1. Yes, everybody fixes their time step, which leads to a fixed tick rate for physics/simulation. Common tick rates include 25, 30, 50, and 60 Hz, but I've seen anything from 10 Hz to 120 Hz.
2. No, you can send data for multiple ticks in a single packet. It's very common to send network updates at a much lower rate than the simulation rate. Popular console first-person shooters may simulate at 60 Hz but send data at 10, 12, or 15 Hz.
3. Select() will tell you when it's possible to send at least something without blocking. (WinSock may not implement this, though -- it may always show something as writable.) If you're worried about blocking, use non-blocking sends. Also, you typically want an outgoing queue per client in your program; you should attempt to keep the outgoing socket buffer as empty as possible.
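
If it helps, here's a minimal C sketch of the non-blocking-send-plus-queue idea (POSIX sockets; client_t, queue_bytes, and flush_outgoing are made-up names for illustration, not a definitive implementation):

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

typedef struct {
    int  fd;
    char outbuf[65536]; /* per-client outgoing queue */
    int  outlen;        /* bytes waiting to go out */
} client_t;

/* Put a socket in non-blocking mode so send()/recv() never stall the loop. */
static int set_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0) return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Don't send() directly from game code; append to the client's queue instead. */
static void queue_bytes(client_t *c, const void *data, int len) {
    if (c->outlen + len <= (int)sizeof(c->outbuf)) {
        memcpy(c->outbuf + c->outlen, data, len);
        c->outlen += len;
    } /* else: the client isn't keeping up; drop data or disconnect it */
}

/* When select() reports the socket writable, drain as much as the kernel will take. */
static void flush_outgoing(client_t *c) {
    while (c->outlen > 0) {
        ssize_t n = send(c->fd, c->outbuf, c->outlen, 0);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                break; /* kernel buffer full; try again next tick */
            break;     /* real error: disconnect this client */
        }
        memmove(c->outbuf, c->outbuf + n, c->outlen - (int)n);
        c->outlen -= (int)n;
    }
}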
enum Bool { True, False, FileNotFound };

Thanks hplus, that cleared up some confusion. However, I still don't know how I could send at a rate lower than the update rate. How would I do that?

All I can think of is:

1. Suppose update rate is 40 Hz and send rate is 10 Hz.

2. Then every 4 updates I send once.

Ain't that right?

3. If so, when would I build the packet? Do I simply build a packet every update, put it in a buffer, and then send it after 4 updates?

4. If not, wouldn't there be a lot of missing detail, since the sent data isn't synchronized with server updates?

Note that "packets" and "messages" are different things.

Typically, one packet contains many messages, plus some amount of framing (sequence number, timing information to calculate ping, perhaps a checksum, etc.)

Thus, you can have one message that is "user input for player X for tick number Y." Then, you put those messages into a queue, and every network tick, you put all those messages from the queue into a packet to send.

When you receive a packet on the other end, you pick apart the packet, extract the various bits of information you need, verify the checksum (ignore the entire packet if it's wrong), and put the messages in question into a receive queue for when the local simulation gets to tick numbers Y, Y+1, ...
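
Here's a rough C sketch of that queue-and-pack step, assuming a 40 Hz simulation with a network tick every 4th update; the message layout (length-prefixed blobs behind a small header) is made up for illustration:

#include <stdint.h>
#include <string.h>

#define MAX_PACKET 1200  /* stay comfortably under a typical MTU */
#define MAX_QUEUED 64

typedef struct {
    uint8_t  data[256];
    uint16_t len;
} message_t;

static message_t out_queue[MAX_QUEUED]; /* outgoing message queue */
static int       out_count = 0;

/* Game code calls this whenever something changes. */
static void queue_message(const void *data, uint16_t len) {
    if (out_count < MAX_QUEUED && len <= sizeof(out_queue[0].data)) {
        memcpy(out_queue[out_count].data, data, len);
        out_queue[out_count].len = len;
        out_count++;
    }
}

/* Called every simulation tick; only every 4th tick (40 Hz sim, 10 Hz net)
   do we actually pack the queued messages into one packet. */
static int build_packet(uint32_t tick, uint16_t sequence, uint8_t *packet) {
    if (tick % 4 != 0) return 0;

    int off = 0;
    memcpy(packet + off, &sequence, sizeof sequence); off += sizeof sequence; /* framing */
    memcpy(packet + off, &tick,     sizeof tick);     off += sizeof tick;    /* timing */

    for (int i = 0; i < out_count; i++) {
        if (off + 2 + out_queue[i].len > MAX_PACKET) break;   /* packet is full */
        memcpy(packet + off, &out_queue[i].len, 2); off += 2; /* length-prefix each message */
        memcpy(packet + off, out_queue[i].data, out_queue[i].len);
        off += out_queue[i].len;
    }
    out_count = 0;  /* clear the queue */
    return off;     /* caller hands this buffer to the socket layer */
}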
enum Bool { True, False, FileNotFound };

Hmm, yes, I get that. That's what I mean. So every time something changes, I build a message, put it in a queue, and when it's "send" time, I gobble up all those messages into one big packet, send it, and clear the output queue. Am I right?

Well, it's getting clearer to me, but my mind is still a little foggy. The server is supposed to send as seldom as possible, but what about the server's recv()? Should the server receive as often as possible, or synchronize recv() with the server's update tick?

And what about the client? How often should it be sending and receiving? Those two questions are really confusing me.

Last wish: do you know of any sources (sites, books) that teach "advanced" multiplayer gamedev subjects? Techniques such as dead reckoning, extrapolation, client-side prediction, and the like? Thanks, master!

The client typically sends at the same rate as the server, but there's no reason it must, if you find that something else works better for you.

Typically, all processes call recv() once per pass through the game loop, and deal with whatever data happens to arrive at that time by dequeuing and decoding it; each message should tell the server which specific time stamp it is intended for, so it may sit in a queue for a little while, or (rarely) arrive too late and be discarded.
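
As a rough C sketch of that receive side (the names and the fixed-size queue are just illustrative):

#include <stdint.h>
#include <string.h>

#define MAX_RECV 256

typedef struct {
    uint32_t tick;        /* the simulation tick this message is intended for */
    uint8_t  payload[64];
    uint16_t len;
} queued_msg_t;

static queued_msg_t recv_queue[MAX_RECV];
static int          recv_count = 0;

/* Called once per pass through the game loop, after picking apart a packet. */
static void enqueue_message(uint32_t tick, const uint8_t *payload, uint16_t len) {
    if (recv_count < MAX_RECV && len <= sizeof(recv_queue[0].payload)) {
        recv_queue[recv_count].tick = tick;
        memcpy(recv_queue[recv_count].payload, payload, len);
        recv_queue[recv_count].len = len;
        recv_count++;
    }
}

/* Called at the start of each simulation tick: apply messages stamped for
   this tick, keep future ones, and discard anything that arrived too late. */
static void apply_messages_for_tick(uint32_t current_tick) {
    int kept = 0;
    for (int i = 0; i < recv_count; i++) {
        if (recv_queue[i].tick == current_tick) {
            /* apply_input(&recv_queue[i]);  -- feed it to the simulation */
        } else if (recv_queue[i].tick > current_tick) {
            recv_queue[kept++] = recv_queue[i]; /* keep for a future tick */
        }
        /* else: tick < current_tick, too late; discard */
    }
    recv_count = kept;
}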

The FAQ at the top of this site has some links to various articles. There isn't any "one" place to learn about game networking, because there are so many different kinds of games and kinds of technologies, and which one you use is particular to your specific game needs.
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
