
UDP server - 1 socket per client?

Started by November 19, 2003 11:56 AM
4 comments, last by BerwynIrish 21 years, 2 months ago
I'm handling network programming for a game and I'm fairly new to network programming. My question is: what are the pros and cons of having the server (UDP protocol) open a new port for every client, as opposed to having all the clients access the same port?

From what I've worked out on my own (which may not be entirely correct), assigning a new port to every client would have the obvious disadvantage of needing to loop through all the ports, as well as not necessarily processing the information in the real-time order it occurs (e.g. player A fires at player B before B raises a shield, but the looping gets to B before A, so B lucks out). Advantages of individual ports: information can be sent directly to the section of code dealing with the player assigned to the port, and, assuming that each port has its own queue of incoming packets (or is there a single interface-wide queue?), the server would better accommodate a flood of information with multiple ports.

Are there any other considerations? Are any of the ones I mentioned nonsensical? What route do the experienced usually take?
The main advantage of using one port per client is that there's a nasty UDP bug in Windows 2000 up to SP1 which closes your listening socket when a single client disconnects.
I'm still learning network programming as I go on my game project, and I'm writing my game server for a Linux platform, so take this with a grain of salt:

I think there are only a few reasons you would bind the same protocol to multiple sockets:
1) Typically a bound listening socket will only queue up to 5 outstanding connection requests. So if more than 5 connection requests can arrive before your server accepts them, you might want more than one socket.
2) I think people do this when they are writing cross-platform Unix code. The thread APIs on Unix variants historically haven't been very standardized, so people fork off processes where they would really like to use threads. And I don't believe there is a way to share a port between processes there.

"assigning a new port to every client would have the obvious disadvantage of needing to loop through all the ports, as well as not necessarily processing the information in the real-time order it occurs"

For this thing, I think people typically do the following flow (in Python, but it's similar in C++):

import socket

HOST = ''    # symbolic name meaning all available interfaces
PORT = 50007 # arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP (SOCK_STREAM) socket
s.bind((HOST, PORT))
s.listen(5)                # queue up to 5 pending connection requests
conn, addr = s.accept()    # blocks until a client connects
print 'Connected by', addr

So each client basically gets its own socket (from the server's perspective), which is returned by accept(). So no matter what, you end up looping through your client sockets on the server side. This looping is typically handled by select(), though. (Very basic explanation:) select() takes sets of sockets and tells you which are ready to read and which are ready to write.
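For illustration, a minimal sketch of that select() loop (Python again; the port number and buffer size are arbitrary, and client_sockets is just a name made up for this example):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 50007))
server.listen(5)

client_sockets = []  # one accepted socket per connected client

while True:
    # block until the listening socket or a client socket is ready to read
    readable, writable, errored = select.select([server] + client_sockets, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()   # new client: give it its own socket
            client_sockets.append(conn)
        else:
            data = sock.recv(4096)         # data from an existing client
            if not data:                   # empty read: the client closed the connection
                client_sockets.remove(sock)
                sock.close()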

"...the server would better accomodate a flood of information with multiple ports."

This is true to an extent:
For an accepted socket, it's been my experience on Linux that the queue is around 228 packets deep, with about 20 bytes of user data in each UDP packet. I ran a little experiment to test this.

This implies your game server does need to regularly service this socket to keep it from filling up. Some game loops are fast enough that they will keep the sockets from overflowing.

What people often do for this is spawn a worker thread that moves the packet data into a queue for your server main thread.
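A rough sketch of that pattern (Python 3 syntax here; the queue name and the 4096-byte reads are just picked for the example):

import socket
import threading
import queue

packet_queue = queue.Queue()  # drained by the main game thread each tick

def reader(sock):
    # worker thread: block on the socket and hand every packet to the main thread
    while True:
        data, addr = sock.recvfrom(4096)
        packet_queue.put((addr, data))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 50007))
threading.Thread(target=reader, args=(sock,), daemon=True).start()

# the main game loop can then pull packets without ever blocking on the socket:
#     while not packet_queue.empty():
#         addr, data = packet_queue.get()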


[edited by - red_mage on November 19, 2003 4:05:12 PM]
I think it's much better to just use one socket; it saves you a lot of polling, and there's really no need to have several sockets.

(Clients can still connect() to it; the server may not be able to accept() it, but it's not like you're using those functions with UDP anyway. connect() just lets you use recv() instead of recvfrom(), etc.)

I think it would just be a waste of cycles to have more than one socket serving the same purpose, especially if you're going to have threads regulate the data flow.

[edited by - Siaon on November 19, 2003 7:27:31 PM]
---Yesterday is history, tomorrow is a mystery, today is a gift and that's why it's called the present.
quote:
Original post by red_mage
1) Typically a bound listening socket will only queue up to 5 outstanding connection requests. So if more than 5 connection requests can arrive before your server accepts them, you might want more than one socket.

Not applicable to a UDP server. There is no connection. In UDP, you don't listen, and you don't accept. You simply create the socket, bind to your server's well-known port, throw some read buffers into the queue, and start reading from it.
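A bare-bones sketch of that in Python (the port number is arbitrary, and the echo reply is only there to show the response going back out the same socket):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no listen(), no accept()
sock.bind(('', 50007))  # the server's well-known port

while True:
    data, addr = sock.recvfrom(4096)  # addr is the sender's (IP, port) pair
    # ... look up or create the player associated with addr, process data ...
    sock.sendto(data, addr)           # reply through the same socket, to that address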

quote:

So each client basically gets its own socket (from the server's perspective), which is returned by accept(). So no matter what, you end up looping through your client sockets on the server side. This looping is typically handled by select(), though. (Very basic explanation:) select() takes sets of sockets and tells you which are ready to read and which are ready to write.

I hope, for your sake, red_mage, that you are *not* planning on using this model for your "100K simultaneous players with 2.5K players per shard" server. The select model forces you into a linear search through an FD_SET structure, and you will get very poor server responsiveness at the levels you are looking for. IOCP. Learn it. Love it. It is the *only* model that will allow you to handle 100K simultaneous connections, and it can do it quite fine with a single socket used for both input and output.

quote:

"...the server would better accomodate a flood of information with multiple ports."


The socket is not going to be your bottleneck. The receive buffer might overflow if you use an inappropriate model or are otherwise too slow to empty out the filled buffer space. But presuming you have a reasonably responsive server and a large enough buffer size, this won't cause you problems either. Your bottleneck will be your bandwidth. Even on a T1 or T3, one socket will be able to handle whatever comes through without any problem (AIUI; my knowledge in this area is a little fuzzy).

quote:

This implies your game server does need to regularly service this socket to keep it from filling up. Some game loops are fast enough that they will keep the sockets from overflowing.

What people often do for this is spawn a worker thread that moves the packet data into a queue for your server main thread.


Indeed. This is the basis of IOCP. You create a thread that does nothing except run through a very tight loop waiting for completion notifications; when it gets one, it dumps the buffer to the main thread for processing (if it was a read completion), replacing in the read queue the one that was just used (if it was a write completion, you can just drop the buffer back into the "available for use" list). You ought to keep a number of buffers in the read queue at all times. Indeed, this loop is so important that you often scale the number of threads running it to the number of processors in your server (my understanding is that the main thread, where you do all your responding to incoming packets, will starve, so it's good to have extra threads throwing completed read buffers at it if you have the processors to do it).

Thus, in conclusion, there is no need to have multiple sockets on your server.

But anyway, this is not what the OP asked. He asked about ports, not sockets, heh.

BerwynIrish, if I read what you are asking, I think you may have a bit of confusion. There are actually two sets of ports to think about: the client's port and the server's port. They are two different things. *Usually*, though not always, a server uses one port for both input and output. This port is well known to the clients (otherwise, how would they send data to it?). Sometimes a server will use a "connection" port where a client sends an "I am here" packet and the server shuffles them to a new port where they live for the duration. This may be what you were talking about. You can set it up that way if you like, or you can just use the one port for everything. The second method is the one I use, simply because I only have one hole in my firewall as opposed to a bunch of them, so it's better for security reasons. I don't think the port can bottleneck you, so I wouldn't worry about flooding it. Again, your bandwidth will be your bottleneck.

The client port should be dynamic. That is to say, you ought to let WinSock choose the port on its own, rather than choosing for it. This gives you the best chance of dealing with networked systems and firewalls. Also, this allows you to distinguish different players by both IP address and port, rather than just IP. When you get that first packet from the client, don't forget to store both the IP address and port somewhere in that client's data; you'll need both of them to respond.

Remember, the path looks like this:

Server -> Server Port -> Internet -> Client Port -> Client
Server <- Server Port <- Internet <- Client Port <- Client

So the server sends a packet through its own port (that you've bound to) TO the client's port, while the client sends through its open port (that it has bound to) TO the server's port. The whole process starts with the client sending a packet to the server through a *known* port (i.e. it has to know this port in advance). The server can then detect what port the client is using and can therefore respond. If the server is going to bounce the client to a different port than the known one, it has to send this information back to the client (via the client port it just learned) so that the client can send all subsequent data to the new server port.
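A matching client-side sketch in Python (the server address is a placeholder; the OS picks the client's dynamic port automatically on the first sendto()):

import socket

SERVER = ('server.example.com', 50007)  # the *known* server address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# no bind() here: WinSock/the OS assigns a dynamic local port on the first send
sock.sendto(b'I am here', SERVER)
reply, addr = sock.recvfrom(4096)  # the server replies to whatever (IP, port) it saw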

Got all that?

I hope I didn't say something wrong. Someone will no doubt correct me where I made errors.

-Ron
Creation is an act of sheer will
You are absolutely right, Ron.

This topic is closed to new replies.
