select(), limited file descriptors?
I'm a complete network programming newbie. I bought a book, MUD Game Programming by Ron Penton, that is great! It has taught me a lot. But there's one thing that is discouraging me, something that is not made clear in the book and has me baffled.
From the book:
Quote:
... different operating systems have different numbers of sockets that you can store inside an fd_set. For example, Windows is set to 64 sockets, while Red Hat 8, which I am running, is set to 1024.
He tells you about this because the sockets he uses are stored in fd_sets and he uses select() to check them for activity. Surely you're not limited to 1024 connections? I realize a MUD isn't the most popular form of game, but shouldn't it handle more than that? How do you get around this limitation? Is there a better way to handle connections? Another question, to add to the list: doesn't using select() use a heck of a lot of processing power?
Quote:
Original post by Ronin Magus
Surely you're not limited to 1024 connections?
...
How do you get around this limitation?
You take 64 or 1024 connections, stick them in an fd_set, call select() ... and repeat as many times as necessary until you have checked them all.
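A minimal sketch of that batching idea in C (POSIX select(); the socks array, nsocks, and the handle_socket() callback are hypothetical placeholders, not from the book). Note that on UNIX a plain fd_set can only hold descriptors whose numeric value is below FD_SETSIZE, a point covered further down the thread.

/* Sketch: poll more sockets than FD_SETSIZE allows in one set by running
   select() once per batch. */
#include <sys/select.h>

void handle_socket(int fd);   /* hypothetical per-socket handler */

void poll_in_batches(int *socks, int nsocks)
{
    for (int start = 0; start < nsocks; start += FD_SETSIZE) {
        int end = (start + FD_SETSIZE < nsocks) ? start + FD_SETSIZE : nsocks;
        fd_set readable;
        FD_ZERO(&readable);
        int maxfd = -1;
        for (int i = start; i < end; ++i) {
            FD_SET(socks[i], &readable);   /* on UNIX this assumes socks[i] < FD_SETSIZE */
            if (socks[i] > maxfd)
                maxfd = socks[i];
        }
        struct timeval zero = { 0, 0 };    /* zero timeout: poll, don't block */
        if (select(maxfd + 1, &readable, NULL, NULL, &zero) > 0) {
            for (int i = start; i < end; ++i)
                if (FD_ISSET(socks[i], &readable))
                    handle_socket(socks[i]);
        }
    }
}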
Quote:
Is there a better way to handle connections?
Asynchronous IO.
Quote:
Another question, to add to the list: doesn't using select() use a heck of a lot of processing power?
If you use select() in a blocking fashion, you can just stick it in its own thread and it'll wait until a socket wakes up (if you have more than 64/1024 sockets and want to do it that way, you'll need one thread for each block of sockets). If you are polling, well... external IO has always been the most expensive operation (in terms of time spent if not of CPU load - you are mostly waiting on the IO subsystems).
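For the blocking, thread-per-block variant, a rough sketch (POSIX threads; socket_block and handle_socket() are illustrative names, not from the book):

/* Sketch: one thread per block of at most FD_SETSIZE sockets, each blocking
   in select() until one of its sockets wakes up. */
#include <pthread.h>
#include <sys/select.h>

struct socket_block {
    int fds[FD_SETSIZE];
    int count;
};

void handle_socket(int fd);   /* hypothetical per-socket handler */

static void *block_thread(void *arg)
{
    struct socket_block *blk = arg;
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        int maxfd = -1;
        for (int i = 0; i < blk->count; ++i) {
            FD_SET(blk->fds[i], &readable);
            if (blk->fds[i] > maxfd)
                maxfd = blk->fds[i];
        }
        /* NULL timeout: block until at least one socket is ready */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) > 0) {
            for (int i = 0; i < blk->count; ++i)
                if (FD_ISSET(blk->fds[i], &readable))
                    handle_socket(blk->fds[i]);
        }
    }
    return NULL;
}

/* usage: pthread_create(&tid, NULL, block_thread, &my_block); */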
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
I take it that you are using Red Hat. On Windows, there's IOCP (I/O Completion Ports). I think the *nix equivalent is poll, but I can't vouch for it.
A book that you might want to look at is "Network Programming for Microsoft Windows" by Anthony Jones and Jim Ohlund. While it's geared toward Windows sockets, it does have a section comparing all the different I/O models (blocking, non-blocking, WSAAsyncSelect, WSAEventSelect, overlapped events, and overlapped completion ports).
Gizz
Quote:
Fruny: You take 64 or 1024 connections, stick them in an fd_set, call select()
On Windows, you can stick any 64 sockets into an fd_set, because it's a list of sockets. However, this makes FD_SET() and FD_ISSET() very inefficient (they scan the entire list each time).
On UNIX, it's not so easy, because FD_SET() translates into a bitmask operation, where a file descriptor maps directly to a bit mask position, so no file descriptor larger than FD_SETSIZE can be put into a regular fd_set.
However, if you pick apart the headers (both Windows and UNIX) you will find that you can easily extend the struct fd_set to have capacity for more sockets, by defining your own compatible structure with more space. On Windows, the fd_set structure contains a count; on UNIX, the first argument to select() tells the kernel how many bits to look at.
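A minimal sketch of that trick on UNIX, assuming (as glibc does) that fd_set is laid out as a flat array of long bit-words; the BIG_FDS capacity and the macro names are made up for illustration, not taken from any header:

#include <string.h>
#include <sys/select.h>

#define BIG_FDS 4096                      /* assumed capacity; pick what you need */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Layout-compatible with a bitmask-style fd_set, just with more words;
   select() only examines the first `nfds` bits, per its first argument. */
typedef struct {
    unsigned long bits[BIG_FDS / BITS_PER_LONG];
} big_fd_set;

#define BIG_FD_ZERO(s)      memset((s), 0, sizeof *(s))
#define BIG_FD_SET(fd, s)   ((s)->bits[(fd) / BITS_PER_LONG] |=  (1UL << ((fd) % BITS_PER_LONG)))
#define BIG_FD_CLR(fd, s)   ((s)->bits[(fd) / BITS_PER_LONG] &= ~(1UL << ((fd) % BITS_PER_LONG)))
#define BIG_FD_ISSET(fd, s) (((s)->bits[(fd) / BITS_PER_LONG] >> ((fd) % BITS_PER_LONG)) & 1UL)

/* usage: fill a big_fd_set named `readable`, then cast when calling select():
   int ready = select(highest_fd + 1, (fd_set *)&readable, NULL, NULL, NULL); */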
That being said, trying to service more than 1000 TCP sockets at the same time will often lead to performance problems (except for special circumstances) so when you find the need to extend these structures (at least on UNIX), you might want to think again about your application structure, and how it is intended to scale.
enum Bool { True, False, FileNotFound };
Hello Ronin Magus,
As others have stated, there are limits to the number of file descriptors you can have.
On Windows it is 64. Why? Not sure.
The default for most Unixes is 1024; it is set by the FD_SETSIZE define.
To change the size on Unix, define FD_SETSIZE to the number you want before including sys/types.h; the max is 65535.
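A sketch of that define-before-include approach; whether the C library actually honors a user-supplied FD_SETSIZE varies by platform (Winsock and some BSDs do, glibc on Linux generally does not), so treat this as an assumption to verify on your system:

/* Must come before any header that defines fd_set. 4096 is just an example. */
#define FD_SETSIZE 4096

#include <sys/types.h>
#include <sys/select.h>

/* ... from here on, FD_ZERO()/FD_SET()/select() are used exactly as before,
   just with room for more descriptors if the platform honors the override. */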
There is also a limit to how many file descriptors a process can have, with the max at 65535, and stdin, stdout, and stderr will use up 3 of them.
So you only have a max of 65532 file descriptors to use as you please.
Now, how do you get around that max limit (65535)? I am not sure.
How do big HTTP servers handle this?
Anyone know?
Lord Bart
Quote:
Original post by Gizz
I take it that you are using Red Hat. On Windows, there's IOCP (I/O Completion Ports). I think the *nix equivalent is poll, but I can't vouch for it.
For Linux, the closest thing that comes to mind is epoll. For FreeBSD, there's kqueue. I'm not sure if there are newer and better options now, since I haven't been keeping up with that realm of kernel development for a little while. Using poll isn't really an IOCP replacement, but it does make a better fallback option than using select for large numbers of sockets when epoll, kqueue, et cetera aren't available.
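For reference, a minimal level-triggered epoll loop in C; unlike select(), there is no FD_SETSIZE-style cap on how many descriptors you can register. The listen_fd parameter and the accept/recv handling are assumed and left as stubs:

#include <sys/epoll.h>
#include <stdio.h>

#define MAX_EVENTS 64

/* Waits for readability on whatever sockets get registered; the connection
   handling itself is left as stubs. */
int run_epoll_loop(int listen_fd)
{
    int epfd = epoll_create(256);          /* the size argument is only a hint */
    if (epfd < 0) { perror("epoll_create"); return -1; }

    struct epoll_event ev;
    ev.events = EPOLLIN;                   /* level-triggered readability */
    ev.data.fd = listen_fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl");
        return -1;
    }

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);   /* block until activity */
        if (n < 0) { perror("epoll_wait"); return -1; }
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* accept() the new connection and EPOLL_CTL_ADD it here */
            } else {
                /* recv() from fd; EPOLL_CTL_DEL and close() it on EOF */
            }
        }
    }
}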
Quote:
Original post by Lord Bart
Now, how do you get around that max limit (65535)? I am not sure.
How do big HTTP servers handle this?
Anyone know?
HTTP servers don't have 65535 connections open simultaneously. A browser makes a request, gets the data, then the server closes the connection (except for Keep-Alive, but that's only held for a few seconds).
If a server is expecting more than 65535 connections within a few seconds (Google, for example, or MS servers or something), then they often use server clusters, where more than one machine handles the connections.
Quote:
Original post by Null and Void
Quote:
Original post by Gizz
I take it that you are using Red Hat. On Windows, there's IOCP (I/O Completion Ports). I think the *nix equivalent is poll, but I can't vouch for it.
For Linux, the closest thing that comes to mind is epoll. For FreeBSD, there's kqueue. I'm not sure if there are newer and better options now, since I haven't been keeping up with that realm of kernel development for a little while. Using poll isn't really an IOCP replacement, but it does make a better fallback option than using select for large numbers of sockets when epoll, kqueue, et cetera aren't available.
I took the time to hunt down an article in Dr. Dobb's (Feb '04) about "Scalable Socket Server" that I read. The mechanism I was referring to is /dev/poll. The article says it's the mechanism that comes closest to IOCP performance on Unix OS flavors. But then again, I'm no 'nix expert and I'm only quoting. [grin]
Gizz