How can I control a connection timeout on a TCP socket?

Started by April 15, 2004 07:21 AM
0 comments, last by leehairy 20 years, 9 months ago
I have an "isAlive" function that simply connects to an open port on a server via TCP. (I haven't got the server code, so I can't change anything on that side.)

1) If the server is up and accepting connections, I find out immediately.
2) If the server is down, the timeout on my client connection takes an inordinate amount of time to complete.

My client code simply opens a socket and calls connect(); if that fails, the server is deemed dead. I have tried select(fd, NULL, writefds, NULL, timeval) with the timeval set to 3 seconds, but it makes no difference. The BSD setsockopt() takes a read-timeout parameter, but the one on Windows does not seem to. Can anyone help me or point me in the right direction? Thanks
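For reference, the likely reason the select() call makes no difference is that the socket is still in blocking mode: on a blocking socket, connect() itself blocks until the OS-level timeout expires, so select() never gets a say. The usual fix is to switch the socket to non-blocking mode before connect(), then use select() to wait for writability. Below is a minimal sketch of that approach, assuming Winsock 2; the function name is_alive and the hard-coded address in main() are just for illustration, not anything from the original post.

#include <winsock2.h>
#include <string.h>
#include <stdio.h>

static int is_alive(const char *ip, unsigned short port, long timeout_secs)
{
    SOCKET fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == INVALID_SOCKET)
        return 0;

    /* Put the socket into non-blocking mode BEFORE connect();
       otherwise connect() blocks and select()'s timeout never applies. */
    u_long nonblocking = 1;
    ioctlsocket(fd, FIONBIO, &nonblocking);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = inet_addr(ip);

    int alive = 0;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        alive = 1; /* connected straight away (can happen on localhost) */
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        /* Connection attempt in progress: wait up to timeout_secs.
           On success the socket becomes writable; on failure Winsock
           signals it through the except set. */
        fd_set writefds, exceptfds;
        FD_ZERO(&writefds);  FD_SET(fd, &writefds);
        FD_ZERO(&exceptfds); FD_SET(fd, &exceptfds);
        struct timeval tv;
        tv.tv_sec  = timeout_secs;
        tv.tv_usec = 0;
        if (select(0, NULL, &writefds, &exceptfds, &tv) > 0 &&
            FD_ISSET(fd, &writefds))
            alive = 1;
    }
    closesocket(fd);
    return alive;
}

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 0), &wsa);
    printf("server is %s\n", is_alive("127.0.0.1", 80, 3) ? "up" : "down");
    WSACleanup();
    return 0;
}

With this, a refused connection (machine up, port closed) fails almost instantly, and an unreachable host fails after the 3-second select() timeout rather than the long default connect() timeout.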
I'm pretty sure that the timeout options for setsockopt() were implemented in Winsock 2.0. Check your documentation for the SO_RCVTIMEO and SO_SNDTIMEO options.
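For completeness, a hedged sketch of setting those options on Winsock, with two caveats worth knowing: on Windows the option value is a DWORD in milliseconds (the BSD version takes a struct timeval instead), and these timeouts govern blocking recv()/send() calls on an established connection rather than connect() itself, so they won't shorten the dead-server case above on their own.

#include <winsock2.h>

void set_io_timeouts(SOCKET fd, DWORD milliseconds)
{
    /* Time out blocking recv() calls after `milliseconds`. */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO,
               (const char *)&milliseconds, sizeof(milliseconds));

    /* Likewise for blocking send() calls. */
    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO,
               (const char *)&milliseconds, sizeof(milliseconds));
}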

This topic is closed to new replies.
