Server side:
Asynchronous approach - I'm not a big fan of it. The code is pretty Windows-specific, and most socket samples you'll find on the net are blocking. Plus keeping track of all the async calls you've got going can be a headache. And it's not that much of an improvement performance-wise: at some level, there is already threading/async processing going on within the TCP/IP stack. In general, not worth it.
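To make the discussion concrete, here is roughly what the asynchronous, single-loop style looks like with Python's asyncio (the uppercasing handler and self-test client are illustrative, not a real protocol):

```python
import asyncio

async def handle(reader, writer):
    # One coroutine per connection; the event loop juggles them all,
    # so there is no explicit bookkeeping of outstanding async calls.
    data = await reader.read(1024)
    writer.write(data.upper())       # stand-in for real request handling
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 lets the OS pick a free port; we connect to ourselves to demo.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(1024)
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply)  # the server's response to our own request
```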
Message queue/round robin approach. This is good as long as each request can be served in a predictable manner. Ideally as fast as possible, but you want predictability over speed. Say you have two algorithms for handling requests - number one takes 200 ms on average but once in a while takes 2 seconds; number two takes less than 500 ms every time. Even though number one is faster on average, number two is preferred because you have a guaranteed response time of 500 ms. If you get a request that takes a long time to serve, it kills performance for everyone waiting behind it in the queue.
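A minimal sketch of the queue approach using Python's standard queue and threading modules - requests go into a shared queue, and a fixed set of workers pull them off in order (the uppercasing "work" is a stand-in for real request handling):

```python
import queue
import threading

def make_worker(requests, results):
    # Each worker pulls the next request off the shared queue, serves it,
    # and posts the result -- round-robin dispatch comes for free.
    def worker():
        while True:
            item = requests.get()
            if item is None:          # sentinel: time to shut down
                break
            req_id, payload = item
            results.put((req_id, payload.upper()))  # stand-in for real work
            requests.task_done()
    return worker

requests = queue.Queue()
results = queue.Queue()
threads = [threading.Thread(target=make_worker(requests, results))
           for _ in range(4)]
for t in threads:
    t.start()

for i, msg in enumerate([b"alpha", b"beta", b"gamma"]):
    requests.put((i, msg))
requests.join()                       # block until every request is served

for _ in threads:                     # one sentinel per worker
    requests.put(None)
for t in threads:
    t.join()

served = dict(results.get() for _ in range(3))
print(served)
```

Note that a single slow request still ties up one worker for its whole duration - the queue only bounds how many are in flight, which is exactly why predictable per-request time matters here.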
Thread-per-client approach. Best for when client requests are independent of each other, such as in a web server. Can get complicated when requests require significant processing of shared state - the locking code can be complex, prone to deadlocks, and can become a scalability chokepoint. The overhead of allocating threads can be dealt with by pre-allocating a pool of threads. Normally "unused" threads are kept suspended until they are grabbed from the pool and used as a client thread. When a client disconnects, its servicing thread is returned to the pool.
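A sketch of the pooled variant using Python's concurrent.futures - the executor stands in for the pre-allocated pool of suspended threads, and serve_client is a hypothetical per-client service routine:

```python
from concurrent.futures import ThreadPoolExecutor

def serve_client(client_id, request):
    # Stand-in for the per-client service loop. Each client's request is
    # independent here, so no shared-state locking is needed.
    return client_id, request[::-1]   # "handle" the request by reversing it

# max_workers caps the pool; submitting a client grabs a pool thread,
# and finishing returns it -- the analogue of suspend/resume in the text.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(serve_client, cid, b"hello") for cid in range(5)]
    replies = dict(f.result() for f in futures)
print(replies)
```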
Process-per-client - All of the above methods suffer from a flaw: if the request servicing code crashes, the whole server goes down and everyone has a bad day. Increased reliability can be had, at the cost of some overhead, by running each request servicing routine in a separate process - if one crashes, it doesn't affect other clients. I know what you are thinking - I'm Mr. Bad-Ass Network Coder and my code never crashes. Well, maybe so, but can you say that for the underlying OS? Or how about malicious attacks that exploit security holes in your protocol and bring down the server? When some script kiddie figures out an exploit, do you really want 1000 angry users, or just a few?
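The isolation is easy to demonstrate. In this sketch each request is handled by its own child process (spawned with Python's subprocess module); the handler is a hypothetical one-liner that uppercases its input, and the input b"boom" simulates a crash in the servicing code:

```python
import subprocess
import sys

# Hypothetical "service routine" run in its own process: it uppercases
# stdin, but the input b"boom" makes it die (exit code 1), standing in
# for a crash in the handler.
HANDLER = (
    "import sys; data = sys.stdin.buffer.read(); "
    "sys.exit(1) if data == b'boom' else "
    "sys.stdout.buffer.write(data.upper())"
)

def serve_in_process(request):
    # One process per request: a crash here kills only this handler,
    # never the parent server or the other clients.
    proc = subprocess.run([sys.executable, "-c", HANDLER],
                          input=request, capture_output=True)
    return proc.returncode, proc.stdout

results = [serve_in_process(r) for r in [b"ok", b"boom", b"fine"]]
print(results)  # the b"boom" client crashes; the others are unaffected
```

A real server would of course fork or spawn these concurrently rather than one at a time; the point is only that the crash is contained to one process.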
I'm sure the last two methods could be combined to give the efficiency of threading with the robustness of forking processes.
------------------
-vince