
Simple experiment

Started by December 02, 2005 08:31 AM
1 comment, last by hplus0603 19 years, 2 months ago
I have two processes running on one machine. One of them is the server and the other one is the client. The client sends messages through a connectionless datagram (UDP) socket like this:

	// send 5000 numbered datagrams to the server
	for (int i = 0; i < 5000; i++)
	{
		char buf[100];
		sprintf(buf, "%d\n", i);
		if ((numbytes = sendto(sockfd, buf, strlen(buf), 0,
				(struct sockaddr *)&their_addr, sizeof(struct sockaddr))) == -1) {
			printf("%d", WSAGetLastError());
			perror("sendto");
			exit(1);
		}
	}

	// tell the server we are done
	if ((numbytes = sendto(sockfd, "stop", 4, 0,
			(struct sockaddr *)&their_addr, sizeof(struct sockaddr))) == -1) {
		printf("%d", WSAGetLastError());
		perror("sendto");
		exit(1);
	}
The server receives the messages like this:

	int count = 0;
	while (1)
	{
		if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN-1, 0,
				(struct sockaddr *)&their_addr, &addr_len)) == -1) {
			perror("recvfrom");
			exit(1);
		}

		buf[numbytes] = '\0';

		// stop once the client says it is done
		if (!strcmp(buf, "stop"))
			break;

		// pass buf as data, not as the format string
		fprintf(log, "%s", buf);
		count++;
	}

	fprintf(log, "Messages received: %d", count);
The server is launched first. The problem is that no errors are reported from sendto and recvfrom, but the server always gets only the first 1860 messages, and it never breaks out of the loop because it does not receive the 'stop' message from the client. I can run the client as many times as I want, and every time the server logs the first 1860 messages and keeps running. Any ideas why this might happen?
Sometimes movement is a result of a kick in the ass!
An interesting fact is that if I send 10,000 messages instead of 5,000, the server gets all of the first 1860 messages and all of the last ones, from 8263 to 9999, for 3598 messages total. And it stops, because it receives the final 'stop' message too.

Is there a setting I can tune (the size of a buffer?) to stop this from happening?
Sometimes movement is a result of a kick in the ass!
The problem is that you send the messages too fast, so some part of the UDP stack drops the packets. If you insert a sleep for a millisecond after each call to sendto(), chances are it'll work fine (because the server thread will get scheduled often enough to drain the internal queues).
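
A minimal sketch of that change in the client loop, assuming the Windows Sleep() call from <windows.h> (any short sleep or yield would do):

	for (int i = 0; i < 5000; i++)
	{
		char buf[100];
		sprintf(buf, "%d\n", i);
		if (sendto(sockfd, buf, strlen(buf), 0,
				(struct sockaddr *)&their_addr, sizeof(struct sockaddr)) == -1) {
			printf("%d", WSAGetLastError());
			perror("sendto");
			exit(1);
		}
		Sleep(1);  // give the receiving process a chance to drain its socket buffer
	}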

Of course you don't get an error in these cases, as UDP does not guarantee delivery. Dropping packets on buffer overflow is "correct" behavior. If you need reliable, in-order delivery, use TCP instead.
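
As for the buffer question: you can ask for a larger socket receive buffer on the server with setsockopt() and SO_RCVBUF. A rough sketch (the size shown is an arbitrary example, the stack may clamp it, and it only reduces drops rather than eliminating them):

	int rcvbuf = 256 * 1024;  // request a 256 KB receive buffer (arbitrary example size)
	if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
			(const char *)&rcvbuf, sizeof(rcvbuf)) == -1) {
		perror("setsockopt");
	}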
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
