High performance CLOSE_WAIT problems

Started May 20, 2005 05:42 AM
0 comments, last by nevernomore 19 years, 9 months ago
I'm writing a high-performance HTTP server on Linux that works well for the most part. However, after a few minutes of handling a few thousand connections, some sockets get stuck in the CLOSE_WAIT state with one or more bytes in the netstat Recv-Q column. Eventually so many sockets are stuck in CLOSE_WAIT that accept() starts erroring with "Too many open files". I know the typical answer is, "you're leaking an FD somewhere; you just need to close the socket on your end." But I really am closing it. I've been trying to solve this problem for a week now. I think my problem is an issue with the FIN_WAIT_2 state (http://biocal2.bio.jhu.edu/manual/misc/fin_wait_2.html), so I made an implementation similar to Apache's lingering-close solution, which does seem to help a little, but it still bombs out. Here's my disconnection code below. If anyone has any insight, PLEASE do share! Thanks everyone!

    /* Abort the lingering close if it hangs for more than 5 seconds. */
    signal(SIGALRM, lingering_death);
    alarm(5);

    char junk_buffer[2048];

    /* Half-close: send our FIN but keep reading what the peer sends. */
    if (shutdown(iSock, SHUT_WR) != 0) {
        alarm(0);
        close(iSock);
        return;
    }

    fd_set fdset, fdread, fderr;
    struct timeval timeout;
    FD_ZERO(&fdset);
    FD_SET(iSock, &fdset);

    /* Drain and discard anything the peer still sends, until it closes
       (read() returns 0), an error shows up, or select() times out. */
    do {
        timeout.tv_sec = 2;
        timeout.tv_usec = 0;
        fdread = fdset;
        fderr  = fdset;
    } while (select(iSock + 1, &fdread, NULL, &fderr, &timeout) > 0
             && !FD_ISSET(iSock, &fderr)
             && FD_ISSET(iSock, &fdread)
             && read(iSock, junk_buffer, sizeof(junk_buffer)) > 0);

    close(iSock);
    alarm(0);
Turn linger off. Don't do a shutdown(); just do a close().
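For what it's worth, a minimal sketch of that suggestion, assuming iSock is the connected descriptor and plain_close is a hypothetical helper name (not from the original post). SO_LINGER is set off explicitly, which is also its default, so close() returns immediately and the kernel finishes the FIN handshake in the background:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper illustrating the reply above: disable SO_LINGER
       and close() the socket without calling shutdown() first. */
    static void plain_close(int iSock)
    {
        struct linger ling;
        ling.l_onoff  = 0;   /* linger off (the default) */
        ling.l_linger = 0;
        setsockopt(iSock, SOL_SOCKET, SO_LINGER, &ling, sizeof(ling));

        /* A plain close() releases the FD right away; the kernel
           delivers any queued data and completes the FIN exchange. */
        close(iSock);
    }

This frees the file descriptor immediately instead of tying it up in a user-space drain loop, which is the point of the suggestion.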

