Thanks Gorf.
I'm working with Windows XP computers. The software is written in native C++. The main machine I test the server and clients on is a 3 GHz P4 with Hyper-Threading and 1 GB of RAM. I know XP has a listen backlog limit of 5, so I'll be trying it on Windows Server 2003 soon.
Here's a description of the server software: I'm using accept threads designed similarly to the Apache web server. Basically, I use AcceptEx() with an event. Once the AcceptEx() call is made, I wait on the event. When it's triggered, I post a WSARecv() call. All recv and send operations use completion ports. I can start as many accept threads as are needed, and each has its own event.
I tested Apache on my local network, and it can handle a lot of clients connecting per second, so I concluded that accept threads were an acceptable solution. One of the servers will receive lots of short-lived connections, while the lobby and game servers will have long-lived connections.
I then have 3 other threads: the first is a worker thread that waits on the completion port for recv and send completions.
Once a notification is received, it posts a message (via a separate, unbound completion port) to a second thread, whose job is to gather data from the receive buffers and queue it for processing. Once that is done, it posts a message to a third thread containing the application logic, which extracts a message from the received data and processes it.
If the operation was a send, the second thread simply passes the notification on to the third thread, which acts on it.
This way each thread performs a very specific function and doesn't make other threads wait while it does its processing. The server usually consumes less than 10% of the CPU and uses very little memory (around 600 KB).
On the client side I use the same design (completion ports + 3 threads). To test my server I wrote a small app that just sends data and waits for it to come back. Once it's received, the connection is closed (shutdown() followed by closesocket()) and a new one is made.
I tweaked the registry so that I can exceed the default 5000-port limit, and using netstat I verified that I wasn't running out of ports when running the clients. Still, if I don't slow down the connect rate, I get error 10048 on connect and sometimes WSASend() fails.
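For anyone hitting the same wall: error 10048 (WSAEADDRINUSE) during rapid connect/close loops is usually ephemeral-port exhaustion caused by closed sockets lingering in TIME_WAIT, not a bug in the server. With XP's defaults (ephemeral ports 1025-5000, TIME_WAIT of 240 seconds) you can only sustain roughly 4000 / 240 ≈ 16 new outbound connections per second before the pool runs dry. The registry tweak mentioned above is typically these two values (the numbers shown are examples, not recommendations):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    MaxUserPort        REG_DWORD   65534   ; default 5000: highest ephemeral port
    TcpTimedWaitDelay  REG_DWORD   30      ; default 240: seconds spent in TIME_WAIT
```

Raising MaxUserPort enlarges the pool and lowering TcpTimedWaitDelay drains it faster; together they raise the sustainable connect rate accordingly. A reboot is required for the changes to take effect.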
That's about it. Let me know what you think!
Thanks!
Connecting too fast?
My test client is a single, unthreaded loop that always connects to localhost on port 10666. My server only listens on that port.
You CAN also get 10048 when you have no more free local ports, I think, but that's not what's happening here.
I'm willing to chalk it up to the fact that you can't connect at ultra-high speed on a single computer, but I'd like to know what limit I should expect.
Has anyone ever come across this?
--Eric
I worked for a company in Seattle where we made admin tools that utilized sockets for command handling. We encountered this same issue but at a different point. Our error occurred when we immediately sent data over the newly connected socket. Adding a short Sleep alleviated this problem. More than likely this is a threading issue with the TCP/IP stack. Don't sweat it.