
Optimizing my multithreaded server....

Started August 09, 2000 08:54 PM
5 comments, last by Darrell 24 years, 5 months ago
Hi All, I'm debating how to approach this, but here are the details. I'm optimizing my server code, namely the handling of player connections. My idea is to take the single thread I currently have handling player connections and make it configurable, i.e. based on server load I can spawn more threads to handle player connections.

My idea is to create a pool of threads, say 5 at first. All are suspended until the login thread dumps incoming player connections on a queue and wakes one or more of them up. I then want to distribute the load so that each thread handles an approximately even number of players. This is my approach:

1. The login thread accepts a connection.
2. The login thread dumps the connection into the connection pool thread's queue.
3. The connection pool examines all the connection threads (threads solely handling player connections; this will actually be 2 threads per pool, 1 incoming / 1 outgoing) and attaches the connection to the least-weighted pool (the thread with the least number of players on it).

This way I can play around with the number of pools and pool sizes to determine an optimal load. If all my pools are full, I just add more pools (spawn some threads) and obviously cap it at a certain player count (which can be done in the login server thread).

My question to all of you is: does this sound right? I know NT/Linux/Solaris handle threads quite differently, and my server will most likely benefit by running on Solaris, but how much multithreading is too much? I'd like to hear some experience on this before I actually start coding and end up with useless code. Any suggestions?

Thanks in advance,
Darrell.

Early engine for the server.
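The least-loaded assignment step described above could be sketched roughly as follows (Python used for illustration; the `ConnectionPool` class and function names are hypothetical, not from any real server):

```python
import threading

class ConnectionPool:
    """One pool: the player sockets serviced by its own pair of I/O threads."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.players = []            # connections owned by this pool
        self.lock = threading.Lock()

    def load(self):
        with self.lock:
            return len(self.players)

    def attach(self, conn):
        """Attach a connection if there is room; report success."""
        with self.lock:
            if len(self.players) >= self.capacity:
                return False
            self.players.append(conn)
            return True

def assign_connection(pools, conn, capacity=10):
    """Login-thread side: hand conn to the least-weighted pool,
    spawning a new pool if every existing one is full."""
    target = min(pools, key=lambda p: p.load())
    if not target.attach(conn):
        target = ConnectionPool("pool-%d" % len(pools), capacity)
        pools.append(target)
        target.attach(conn)
    return target
```

A real server would cap the total pool count (the player limit enforced in the login thread), which is omitted here for brevity.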
Hey, your idea is very common in the network field. It tends to work out very well in most cases. For instance, the Apache web server keeps a configurable number of child processes in memory to hand off connections as they arrive (set in the config file, of course).

The theory is that you don't have to spend time asking the OS for a new thread, initializing the thread, and connecting the player to it. That setup time can become an undesirable cost if you've got a large number of players connecting.

However, if you know you're going to get a fairly constant number of players (say 5) for a small multiplayer game, you might keep just 5 spare tasks, or none at all.

If you're planning on a massively high number of players connecting, or many "short-term" connections, it might be better to have a pool. For instance, if you expect to average 5 players connecting every few minutes, it may be a great time saver. Or, if you plan on holding a large player base on one processor, you don't want to waste time setting up new threads and deprive the players of their precious CPU cycles.

The OS I work on (non-Windows/Unix) holds a set of about 16 "spare tasks" to handle any needs that arise requiring a threaded process.

Just remember, the more spare processes hanging around in memory, the more time the OS has to spend managing them. Although sleeping processes are not very CPU intensive, too many of them hanging around doing nothing isn't healthy either, and they will undoubtedly start taking up memory while doing nothing but sleeping.

Perhaps a dynamic adjustment that spawns a few spare tasks (if needed) during the downtimes in the game? So that if you see you've only got 2 spare tasks pending, you can wait for a moment when the game is not processing much and fire up a few more "just in case" spares. Just a thought (easier to say than to implement, of course).
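That dynamic top-up idea might look something like this minimal sketch (Python for illustration; `worker` and `top_up_spares` are hypothetical names, and a real server would decide when to call the top-up based on game load):

```python
import queue
import threading

def worker(tasks):
    """A pre-spawned spare task: sleeps on the queue (near-zero CPU)
    until work arrives, so connects never pay thread-creation cost."""
    while True:
        task = tasks.get()
        if task is None:        # sentinel: retire this worker
            break
        task()
        tasks.task_done()

def top_up_spares(tasks, threads, minimum):
    """The dynamic adjustment: during a quiet moment, prune dead workers
    and spawn new ones until at least `minimum` are alive."""
    threads[:] = [t for t in threads if t.is_alive()]
    while len(threads) < minimum:
        t = threading.Thread(target=worker, args=(tasks,), daemon=True)
        t.start()
        threads.append(t)
    return len(threads)
```

Here the "spare task count" is approximated by the number of live workers; tracking truly idle workers would need an extra counter.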

CodeMonkey
CODE Monkey
Thanks for the reply.

The idea is that I'd like the server to be scalable in case it becomes popular and requires this kind of approach.

I'd like it to be configurable, so if someone is running a server that typically has low load, I can set some parameters for that scenario.

But for larger games I'd like to just flip some config values and let the server grow.

It's also handy when your server runs on varying hardware.

Handling a few threads is easy, but when you're designing a server to handle hundreds of connections, design is extremely important.
For a game with, say, 100 players connected, I'd think 1 thread per 10 players would be good enough (assuming proper hardware, of course).
Instead of assigning a connection to a thread, consider just dumping all incoming work into a single queue and letting threads pop the first thing off the queue when they've completed whatever task they're working on.

The problem with the idea of a thread "owning" a connection is that if that thread gets a long task, all the other connections on that thread suddenly get lagged too. You also can't run down threads as easily: let's say it's now 3:00am and you've got 10 people on instead of 100. You'd like to run down some threads, but unless you can change connection ownership, you can't.

The single-queue approach also saves you from having to figure out an arbitrary, probably-will-fail-under-some-special-circumstance rule of thumb like 1 thread per 10 connections.

Typically you want on the order of 1 thread per processor. If your tasks are compute-bound, you're not going to get any speedup by adding more threads. If you're I/O-bound, the extra threads aren't doing anything anyway. Either way, roughly 1 thread per processor is best.
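The single-queue approach described above could be sketched like this (Python for illustration; `run_workers` and `run_down` are hypothetical names, and real processing would replace the stand-in append):

```python
import queue
import threading

def run_workers(work, handled, count):
    """Start `count` workers that all pop from one shared queue; no thread
    owns a connection, so a slow request only ties up one worker."""
    def worker():
        while True:
            item = work.get()
            try:
                if item is None:          # sentinel: run this worker down
                    break
                conn, request = item
                handled.append((conn, request))   # stand-in for real processing
            finally:
                work.task_done()
    threads = [threading.Thread(target=worker, daemon=True) for _ in range(count)]
    for t in threads:
        t.start()
    return threads

def run_down(work, count):
    """Shrink the pool at quiet hours: one sentinel per worker to retire."""
    for _ in range(count):
        work.put(None)
```

Because ownership lives in the queue rather than the thread, shrinking from, say, 10 workers to 2 at 3:00am is just a matter of enqueuing sentinels.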

-Mike
Oh, I agree with you. However, when I speak of a thread "owning" a connection, all it's really going to do is loop through the player connections it's listening on and dump the data into an appropriate queue. The thread that handles processing that particular queue picks it up from there.

Example:

Connection Pool 1 has 5 players connected (one socket per player).
2 threads loop in this pool, listening for player input and output respectively.

The listening thread loops over all players and checks whether data is in the receive buffer; if not, it yields its processor time.
The sending thread checks its outgoing queue and sends the data to the players. Once all messages are dispatched, it yields.

If the listening thread finds data, it marshals it off to a queue immediately.

A "game" thread handles the processing of messages in this queue. It processes the message, then sends a response (if required) back to the pool from which it came.

Now the only case where a player in this pool can experience server lag is when another player in his pool is flooding the receive/send buffers. In that case we balance it via some priority-queue algorithm. Routers do similar things with packets flooding them; of course, some algorithms are better than others.

The key is that the connection pool threads do nothing but read and write data into player sockets.

Moving this logic to a single pool of all players can cause bigger problems, since a few "bandwidth hogs" would affect every player, and prioritizing a small number of players is easier than doing it for everyone, especially if you're dealing with more than 100 players.
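The per-pool balancing idea (limiting how much a flooding player can monopolize each pass) could be sketched like this crude round-robin budget, a simplified stand-in for the fair-queuing schemes routers use (`drain_fairly` is a hypothetical name):

```python
def drain_fairly(buffers, budget):
    """Round-robin over the pool's players, taking at most `budget` pending
    messages from each per pass, so one flooding player cannot starve the
    rest of the pool."""
    out = []
    for player, pending in buffers.items():
        take = pending[:budget]     # cap this player's share of the pass
        del pending[:budget]        # leave the overflow for later passes
        out.extend((player, msg) for msg in take)
    return out
```

A real scheme would weight the budget (e.g. by recent usage), but even a flat cap keeps a flooder from lagging his pool-mates.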
I don't know exactly what you mean by "loop" in the threads, but if you're talking about optimizing, you might want to consider select() on the threads, or ioctl().

The last thing you want is 20 threads busy-waiting for incoming data.

The problem with reading data off the line with read() or recv() is that they block when called until new data comes in. This becomes a pain when you want to terminate a thread or user abruptly. You've either got to keep a file descriptor record and do a close() on that descriptor to make the thread fall out, or delete the thread's task. If you use select(), you can sleep for exactly as long as you choose, blocking for a bounded interval via the timeout value in seconds/microseconds.

Not saying that wasn't your plan; just throwing it out as a suggestion nonetheless.

Thanks,

CodeMonkey
CODE Monkey
Yup, that's basically what I'm doing.

This topic is closed to new replies.
