
IPv6 Multicast not working

Started by April 21, 2018 12:41 AM
12 comments, last by hplus0603 6 years, 7 months ago

Hi,

I tried to send an IPv6 multicast packet through my network. Sending seems to work, since the packet arrives on the destination PC - at least it shows up in the network traffic logged by Wireshark. But it never arrives in my server program. When I send a packet from the same PC that is supposed to receive it, it does work, though.

This is the code for sending:


UDPBroadcastSocket = socket(PF_INET6, SOCK_DGRAM, IPPROTO_UDP);
BOOL Yes = 1;
// Restrict the socket to IPv6 only.
setsockopt(UDPBroadcastSocket, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&Yes, sizeof(BOOL));
int32_t hops = 50;
// Allow the packet to cross up to 50 routers.
setsockopt(UDPBroadcastSocket, IPPROTO_IPV6, IPV6_MULTICAST_HOPS, (char*)&hops, sizeof(hops));
uint32_t IF = 0;
// Interface index 0 = let the stack pick the default outgoing interface.
setsockopt(UDPBroadcastSocket, IPPROTO_IPV6, IPV6_MULTICAST_IF, (char*)&IF, sizeof(IF));
	
struct sockaddr_in6 sock_in;
struct addrinfo *result = NULL;
struct addrinfo hints;

memset(&hints, 0, sizeof(hints));

hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_DGRAM;
hints.ai_protocol = IPPROTO_UDP;
hints.ai_flags = AI_NUMERICHOST;

// Note: the return value should be checked - the code below dereferences result unchecked.
getaddrinfo("FF18::1243", "12346", &hints, &result);

unsigned char buffer[MAXBUF];
int PacketSize = 8;
int sinlen = int(result->ai_addrlen);
memcpy(&sock_in, result->ai_addr, result->ai_addrlen);
	
freeaddrinfo(result);

sendto(UDPBroadcastSocket, (char*)buffer, PacketSize, 0, (sockaddr *)&sock_in, sinlen);
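A side note on the address choice: in FF18::1243 the second byte (0x18) encodes the multicast flags and scope - flag 1 (transient) and scope 8 (organization-local). Consumer switches and routers commonly only pass link-local (scope 2) multicast cleanly unless MLD snooping/routing is configured, so a wider scope can be silently filtered between hosts. A minimal POSIX-flavored sketch (on Windows, `InetPton` from `ws2tcpip.h` is the equivalent) for inspecting the scope of a group address:

```cpp
#include <arpa/inet.h>   // inet_pton (Windows: InetPton in <ws2tcpip.h>)
#include <netinet/in.h>  // in6_addr
#include <cstdint>
#include <string>

// Return the 4-bit scope field of an IPv6 multicast address string:
// 2 = link-local, 5 = site-local, 8 = organization-local, 14 = global.
// Returns -1 if the string is not a valid IPv6 multicast address.
int MulticastScope(const std::string& Addr) {
    in6_addr A{};
    if (inet_pton(AF_INET6, Addr.c_str(), &A) != 1)
        return -1;
    const uint8_t* B = reinterpret_cast<const uint8_t*>(&A);
    if (B[0] != 0xFF)
        return -1;        // not in the multicast range FF00::/8
    return B[1] & 0x0F;   // low nibble of the second byte is the scope
}
```

For purely on-LAN discovery, a link-local group such as FF02::1243 would sidestep any scope-based filtering, assuming the join binds to the right interface.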

 

And this is the code for receiving the packet:


std::vector<uint32_t> GetNetworkInterfaceIndices(){
	std::vector<uint32_t> Result = { 0 };

	/* Declare and initialize variables */

	DWORD dwRetVal = 0;

	// Set the flags to pass to GetAdaptersAddresses
	ULONG flags = GAA_FLAG_INCLUDE_PREFIX;

	// default to unspecified address family (both)
	ULONG family = AF_UNSPEC;

	PIP_ADAPTER_ADDRESSES pAddresses = NULL;
	ULONG outBufLen = 0;
	ULONG Iterations = 0;

	PIP_ADAPTER_ADDRESSES pCurrAddresses = NULL;

	

	family = AF_INET6;

	

	// Allocate a 15 KB buffer to start with.
	outBufLen = WORKING_BUFFER_SIZE;

	do {

		pAddresses = (IP_ADAPTER_ADDRESSES *)MALLOC(outBufLen);
		if (pAddresses == NULL) {
			return{ 0 };
		}

		dwRetVal =
			GetAdaptersAddresses(family, flags, NULL, pAddresses, &outBufLen);

		if (dwRetVal == ERROR_BUFFER_OVERFLOW) {
			FREE(pAddresses);
			pAddresses = NULL;
		}
		else {
			break;
		}

		Iterations++;

	} while ((dwRetVal == ERROR_BUFFER_OVERFLOW) && (Iterations < MAX_TRIES));

	if (dwRetVal == NO_ERROR) {
		// If successful, output some information from the data we received
		pCurrAddresses = pAddresses;
		while (pCurrAddresses) {
			
			// Use Ipv6IfIndex here: IfIndex is the IPv4 interface index and
			// may be 0 or differ from the IPv6 index on some adapters.
			Result.emplace_back(pCurrAddresses->Ipv6IfIndex);
			
			pCurrAddresses = pCurrAddresses->Next;
		}
	}
	else {
		
		return{ 0 };
	}

	if (pAddresses) {
		FREE(pAddresses);
	}

	return Result;
}
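For reference, the POSIX counterpart of this enumeration is much shorter via `if_nameindex()` from `<net/if.h>`; a sketch that mirrors the function above (including the leading 0 entry for "default interface"):

```cpp
#include <net/if.h>  // if_nameindex, if_freenameindex
#include <vector>

// Collect the index of every network interface. The array returned by
// if_nameindex() is terminated by an entry whose if_index is 0.
std::vector<unsigned int> GetInterfaceIndices() {
    std::vector<unsigned int> Result = { 0 };  // 0 = "default interface"
    if (struct if_nameindex* List = if_nameindex()) {
        for (struct if_nameindex* P = List; P->if_index != 0; ++P)
            Result.push_back(P->if_index);
        if_freenameindex(List);
    }
    return Result;
}
```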

  

UDPSocket = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP);

sockaddr_in6 UDP_Sock_in;
memset(&UDP_Sock_in, 0, sizeof(sockaddr_in6));
UDP_Sock_in.sin6_addr = in6addr_any;
UDP_Sock_in.sin6_port = htons(Settings::GetPort()+1);
UDP_Sock_in.sin6_family = AF_INET6; // address families use AF_, not PF_

BOOL No = 0;
setsockopt(UDPSocket, IPPROTO_IPV6, IPV6_V6ONLY, (char*)&No, sizeof(BOOL));

bind(UDPSocket, (sockaddr*)&UDP_Sock_in, sizeof(UDP_Sock_in));

ipv6_mreq BroadcastGroup;

memset(&BroadcastGroup, 0, sizeof(ipv6_mreq));

const auto IfIndices = GetNetworkInterfaceIndices();

BroadcastGroup.ipv6mr_multiaddr.u.Byte[0] = 0xFF;
BroadcastGroup.ipv6mr_multiaddr.u.Byte[1] = 0x18;
BroadcastGroup.ipv6mr_multiaddr.u.Byte[14] = 0x12;
BroadcastGroup.ipv6mr_multiaddr.u.Byte[15] = 0x43;

for (const auto& Index : IfIndices) {
	BroadcastGroup.ipv6mr_interface = Index;
	setsockopt(UDPSocket, IPPROTO_IPV6, IPV6_ADD_MEMBERSHIP, (char*)&BroadcastGroup, sizeof(ipv6_mreq));		
} 
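Rather than hand-poking the group bytes, the `ipv6_mreq` can be filled from the same textual address the sender uses, which avoids transposition mistakes, and the `setsockopt` return value should be checked (`WSAGetLastError()` on Windows says why a join failed - a silent failed join matches the symptom in this thread). A POSIX-flavored sketch; the structure layout matches Winsock's `ipv6_mreq`:

```cpp
#include <arpa/inet.h>   // inet_pton (Windows: InetPton in <ws2tcpip.h>)
#include <netinet/in.h>  // ipv6_mreq
#include <cstring>
#include <string>

// Fill an ipv6_mreq from a textual group address instead of setting
// individual bytes. Interface index 0 means "default interface".
// Returns false if the address does not parse.
bool MakeGroup(const std::string& Group, unsigned int IfIndex, ipv6_mreq& Out) {
    std::memset(&Out, 0, sizeof(Out));
    if (inet_pton(AF_INET6, Group.c_str(), &Out.ipv6mr_multiaddr) != 1)
        return false;
    Out.ipv6mr_interface = IfIndex;
    return true;
}
```

The join itself is then `setsockopt(UDPSocket, IPPROTO_IPV6, IPV6_ADD_MEMBERSHIP, (char*)&Group, sizeof(Group))`, and its result should be checked per interface index rather than ignored.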
  

  
sockaddr_in6 from;
socklen_t fromLength = sizeof(sockaddr_in6);
unsigned char buffer[MAXBUF];

pollfd PollFd;
PollFd.events = POLLIN;
PollFd.fd = UDPSocket;
PollFd.revents = 0; // revents is an output field set by WSAPoll; initialize to 0, not -1
		
WSAPoll(&PollFd, 1, -1);
		
recvfrom(UDPSocket, (char*)buffer, MAXBUF, 0, (sockaddr*)&from, &fromLength);
		

I basically tried specifying every single network interface index and the packet still does not arrive in the server. I have no idea what could be wrong. And why does it work when sender and receiver are on the same PC? I don't understand it. Does anyone have an idea? I've been trying this for about five hours now and I'm frustrated.
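One detail that would explain the same-PC behavior: by default, a socket sending to a multicast group has multicast loopback enabled, so listeners on the same host receive a copy via the loopback path - the packet never has to be accepted from the LAN at all. A small POSIX sketch reading that default (the `IPV6_MULTICAST_LOOP` option exists in Winsock as well):

```cpp
#include <netinet/in.h>  // IPPROTO_IPV6, IPV6_MULTICAST_LOOP
#include <sys/socket.h>
#include <unistd.h>      // close (Windows: closesocket)

// Read the IPV6_MULTICAST_LOOP setting of a fresh UDP socket. It defaults
// to enabled (1): a host sending to a multicast group delivers a copy to
// its own listeners, so same-PC tests can succeed even when group
// membership on the LAN-facing interface is broken.
// Returns the option value, or -1 if the socket could not be created/queried.
int MulticastLoopEnabled() {
    int Sock = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP);
    if (Sock < 0)
        return -1;
    unsigned int Loop = 0;
    socklen_t Len = sizeof(Loop);
    int Rc = getsockopt(Sock, IPPROTO_IPV6, IPV6_MULTICAST_LOOP, &Loop, &Len);
    close(Sock);
    return (Rc == 0) ? static_cast<int>(Loop) : -1;
}
```

So a working same-PC test only proves the send path and loopback delivery, not that the receiver's group join on the physical interface actually took effect.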

I believe the receiving socket needs to have broadcast/multicast turned on for that feature to work. ( I haven't done IPv6 multicast, so that may be different, though -- check the manual!)

Btw: Multicast is broken and will never work on the greater Internet. You may already be aware, but in case you're planning to try to use multicast for a real deployment, don't.

enum Bool { True, False, FileNotFound };

I simply want to detect all of my game servers running in the local network. This should be possible somehow. There is so much example code that looks very easy, and none of it works for me. It may actually be the firewall blocking something, though... Let me test this, one moment... No, that was not the problem; it still does not work.

Edit: It works with IPv4. It seems IPv6 multicast is actually broken on Windows. Or I missed something important.

Quote

I simply want to detect all of my game servers running in the local network. This should be possible somehow.

Yes. Use broadcast!

 

enum Bool { True, False, FileNotFound };
7 hours ago, hplus0603 said:

Yes. Use broadcast!

 

Yeah, that didn't work either - maybe due to firewall issues? I tried it back when I wrote the code years ago, and again when I debugged it two days ago, and couldn't get it to work. Anyway, just changing everything to IPv4 multicast worked for some reason. That should actually be better than broadcast from an efficiency standpoint.

Quote

just changing everything to IPv4 multicast worked for some reason. That should actually be better than broadcast from an efficiency standpoint.

If you have a large datacenter where only some hosts need the traffic, AND your routers all participate in the multicast group filtering/routing, then you may send packets down fewer connected network ports - this is true! That assumes the cost of multicast in your routers is small enough that it doesn't impact other traffic. I have seen that some routers/switches have hardware support for the basic IP routing primitives but kick multicast up to the host CPU, so significant multicast traffic may actually choke the switch. Your mileage may vary, and all that.

enum Bool { True, False, FileNotFound };

Multicast and even broadcast can be painful when you get into anything beyond a flat network. We have had servers reboot themselves because of multicast delays (RHEL clustering used to use multicast; they may still, but we moved away from that). Here are some things we have done that push this off the network layer (routers and firewalls being black magic to most developers).

Have your server nodes register with a registration service which keeps tabs on the registered server's version and availability status etc. This is what we have moved to for most production servers. The registration service is also responsible for adding/removing the servers from load balancing where appropriate.

Does your server provide a management port for modifying the running config, getting metrics, etc.? If so, a periodic local network scan for this port is often much easier to facilitate, with tools like nmap kicking off a discovery process. We also run periodic scans in addition to having servers register themselves. SNMP on managed network gear - for publishing events like ARP changes (an unexpected new MAC address, or an IP failover?), physical port status, etc. - is ideal, but not always available depending on your hosting situation.

Use pub/sub/queueing/event-streaming middleware instead of broadcast/multicast. In almost every scenario the small added latency is worth it. From Redis to RabbitMQ to Kafka, this is how things work at scale. If you are delving into network protocols beyond unicast TCP/UDP, there needs to be some very good justification. I am not saying there is no need for multicast or broadcast, but it is very rare beyond HA appliances that expect to be physically close to each other, which is fairly limiting in today's virtualized hosting world. The middleware is also much easier to troubleshoot than getting packet captures from a mirrored/SPAN network port or from individual servers.

DNS is important. Every interface on each server has an FQDN (some of ours have multiple interfaces) and a PTR record. It is also important to make use of CNAME and SRV records where appropriate for the services the servers offer. DNS is here to help. Managing bare IPs is tough enough with IPv4, but after a few dozen it is just taxing to keep straight. With IPv6, good luck.

Use tools like ansible/chef/puppet to automate multiple server deployment including spinning up VMs, managing DNS and configuration management. Keep the snowflakes to a minimum. There is nothing more satisfying than kicking off an ansible playbook, going to get a cup of coffee and coming back to find a few dozen new servers deployed, packages and dependencies installed, monitoring platforms displaying the new services and automated testing/burn in kicked off.

Spend the time on configuration management and automation. Your users will thank you. Your future self will thank you when you can spend time on adding features vs troubleshooting basic connectivity because of a missing route or a typo in a config.

Evillive2

Err... thanks? But I only need to discover game servers in the local network of the player - and it must work without a centralized server, because I want people to be able to play even when their internet connection is down, so there is no alternative to IP multicast AFAIK.

There's already a standard mechanism for this in the form of mDNS (typically referred to as Rendezvous/Bonjour in the Mac world, and Avahi on Linux). It's widely used and, more importantly, widely supported by consumer routers.

Rolling your own multicast solution is liable to run into all sorts of fun with being discarded by various routers/switches and blocked outright on many public or corporate networks.

IPv6 multicast is even less useful, given that the majority of consumer networks don't run IPv6 yet. You'd need to support IPv4 as well to reach the widest possible user base.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Multicast is the worst way to do local server browsers. Use broadcast. And, if you don't want to write it on your own, use mDNS, which sits on top of link-local UDP multicast and handles the discovery details for you.
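For the IPv4 broadcast route, the only socket-level requirement is enabling SO_BROADCAST before sending to the broadcast address. A minimal POSIX sketch (Winsock is nearly identical apart from SOCKET/closesocket); the port would be whatever the game uses, e.g. the thread's 12346:

```cpp
#include <netinet/in.h>  // IPPROTO_UDP
#include <sys/socket.h>  // socket, setsockopt, SO_BROADCAST
#include <unistd.h>      // close (Windows: closesocket)

// Open a UDP socket with SO_BROADCAST enabled; without this option,
// sendto() to 255.255.255.255 fails with EACCES (WSAEACCES on Windows).
// Returns the socket descriptor, or -1 on failure.
int OpenBroadcastSocket() {
    int Sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (Sock < 0)
        return -1;
    int Yes = 1;
    if (setsockopt(Sock, SOL_SOCKET, SO_BROADCAST, &Yes, sizeof(Yes)) != 0) {
        close(Sock);
        return -1;
    }
    return Sock;
}
```

The discovery ping then goes to {AF_INET, htons(port), INADDR_BROADCAST}, and each server bound to that port replies unicast to the source address that recvfrom() reports.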

Quote

routers and firewalls being black magic to most developers

At the risk of sounding a bit judgmental, I believe you should not be doing development for distributed systems if you believe routers and firewalls are "black magic." They follow well-defined rules, and do so for known reasons, and distributed systems need to be written with the network fabric in mind, rather than trying to ignore it.

 

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
