
Multiple Winsock2 TCP sockets

Started by February 26, 2016 06:37 AM
5 comments, last by Sergey Ignatchenko 8 years, 8 months ago

Hello,

I've been using a Winsock2 TCP client for the past several months to send messages to my game server. This past week, I implemented a master server to store connection information about my game servers. On the client side, I decided to add a second, separate Winsock2 TCP client socket alongside the one I already use for sending messages to my game server.

My main question is: is it okay to use more than one separate socket, connected to different IPs & ports, in one application? I'm fairly sure my IPs & ports are separate, but I'll definitely go verify that. For context, both my game server and client application are being run on the same machine.

The reason I'm wondering is that I've been getting some really odd behavior that I hadn't experienced until just recently: I'm losing packets for some reason. My master server socket is sending a heartbeat message every 5 seconds, and my game socket is receiving messages constantly. I can't help but wonder if I'm experiencing some sort of interference.

Thanks for the insight.

When you have client sockets (and don't bind() them manually), they get assigned so-called "ephemeral ports", so they will have the same IP but different ports. So from this point of view (unless you're doing some crazy stuff) you should be fine.
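If you want to double-check, something along these lines (the addresses and the ConnectTo helper are just placeholders) will print the ephemeral local port each connected socket was given, via getsockname():

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Placeholder helper: open a blocking TCP connection to host:port.
static SOCKET ConnectTo(const char* host, unsigned short port)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(s, (sockaddr*)&addr, (int)sizeof(addr)) == SOCKET_ERROR) {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}

static void PrintLocalPort(const char* label, SOCKET s)
{
    sockaddr_in local = {};
    int len = sizeof(local);
    getsockname(s, (sockaddr*)&local, &len);   // ask the OS which local port it picked
    printf("%s local port: %u\n", label, (unsigned)ntohs(local.sin_port));
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Example addresses only; substitute your master-server and game-server endpoints.
    SOCKET master = ConnectTo("192.0.2.10", 5000);
    SOCKET game   = ConnectTo("192.0.2.20", 6000);

    PrintLocalPort("master", master);   // two different ephemeral ports,
    PrintLocalPort("game", game);       // even though the local IP is the same

    closesocket(master);
    closesocket(game);
    WSACleanup();
    return 0;
}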

On the other hand, two separate connections MAY interfere with each other in strange or sometimes even evil ways. The worst case usually happens if you have one time-sensitive TCP connection, and also have a fat download going on at the same time over a different TCP connection. This can overload your player's "last mile" and cause all kinds of trouble :-(.

On the third hand, a question: how do you know that packets are being lost? As your client is TCP, packet loss shouldn't be visible to you at all, except as delays. So if you're experiencing something worse than mere delays, make sure that your own protocol-over-TCP is correct; the most common mistake in this regard is to assume that each send() call corresponds to exactly one recv() call. For any real-world TCP this is not the case, in spades (though it MIGHT have worked this way in some very trivial scenarios, and adding a 2nd connection MIGHT be the trigger for such a bug-of-relying-on-send-recv-matching to manifest itself).
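To illustrate that last point: since TCP is a byte stream, one send() can arrive split across several recv() calls, or glued to the next message. The usual fix is to length-prefix each message and loop on recv() until the whole message has arrived. A rough sketch (error handling trimmed; the 4-byte length prefix is just an example framing, not your protocol):

#include <winsock2.h>
#include <cstdint>
#include <vector>

// Read exactly 'len' bytes, looping because a single recv() may return
// only part of what the other side sent.
static bool RecvExact(SOCKET s, char* buf, int len)
{
    int got = 0;
    while (got < len) {
        int r = recv(s, buf + got, len - got, 0);
        if (r <= 0) return false;           // connection closed or error
        got += r;
    }
    return true;
}

// Write exactly 'len' bytes, looping because send() may accept fewer.
static bool SendExact(SOCKET s, const char* buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int r = send(s, buf + sent, len - sent, 0);
        if (r <= 0) return false;
        sent += r;
    }
    return true;
}

// Receive one message framed as: 4-byte length (network byte order) + payload.
static bool RecvFramed(SOCKET s, std::vector<char>& payload)
{
    uint32_t netLen = 0;
    if (!RecvExact(s, (char*)&netLen, (int)sizeof(netLen))) return false;
    uint32_t len = ntohl(netLen);
    payload.resize(len);
    return len == 0 || RecvExact(s, payload.data(), (int)len);
}

// Matching sender: write the length prefix, then the payload.
static bool SendFramed(SOCKET s, const char* data, uint32_t len)
{
    uint32_t netLen = htonl(len);
    if (!SendExact(s, (const char*)&netLen, (int)sizeof(netLen))) return false;
    return SendExact(s, data, (int)len);
}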


Is it okay to use more than one separate socket connected to different IPs & ports in one application?


Yes. At the low level, a TCP connection is identified by the 4-tuple: "source IP, source port, destination IP, destination port."
When creating a new connection to a host where an existing connection already exists, the kernel will pick a new value for "source port."

Note that it IS possible to run out of source ports, when using many thousands of connections to the same host, or when creating/destroying connections very quickly (because of TIME_WAIT.) You typically see this in proxy/load balancing scenarios on the server side, where lots of connections are normally aggregated. One work-around is to create more source IP addresses for the host that's running out of ephemeral ports, assuming your proxy software doesn't just bind to the "default" address. On a client, you will approximately never see this problem though.
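For completeness, the work-around mentioned above amounts to bind()-ing the outgoing socket to a chosen local IP before connect(), so each local address draws from its own pool of ephemeral ports. A sketch, assuming WSAStartup() has already been called and with placeholder addresses:

#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

// Pin an outgoing connection to a specific local IP so it uses that
// address's ephemeral port pool. Both addresses passed in are placeholders.
SOCKET ConnectFromLocalIp(const char* localIp, const char* remoteIp, unsigned short remotePort)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_port = 0;                             // 0 = let the OS pick an ephemeral port
    inet_pton(AF_INET, localIp, &local.sin_addr);
    if (bind(s, (sockaddr*)&local, (int)sizeof(local)) == SOCKET_ERROR) {
        closesocket(s);
        return INVALID_SOCKET;
    }

    sockaddr_in remote = {};
    remote.sin_family = AF_INET;
    remote.sin_port = htons(remotePort);
    inet_pton(AF_INET, remoteIp, &remote.sin_addr);
    if (connect(s, (sockaddr*)&remote, (int)sizeof(remote)) == SOCKET_ERROR) {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}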

Note that multiple TCP connections will fight for available bandwidth if they each want to pull more data than the connection can sustain. Using the "naive" TCP packet pacing algorithm, a single TCP stream will achieve on average 75% of the theoretical max throughput of a link (in practice, most implementations are better than this.) Two streams can each achieve 50% of theoretical max throughput, so the connection will be pretty much saturated. Anything more than two will just make the connections fight more for throughput. Note that this is under the assumption that connections are long-lived and throughput is limited compared to the ability of the other end to send data. For connections which are idle most of the time, or where the request-response round-trip latency is significant compared to the payload (think HTTP/web resources), it may make sense to use more than 2 connections in parallel to the same server.
enum Bool { True, False, FileNotFound };

I don't see how having two TCP sockets open could cause lost packets, but I also don't see the point in having two TCP sockets going at the same time if both are communicating with the same server.


I don't see how having two TCP sockets open could cause lost packets,

Splitting one connection into two is not too likely to cause dropped packets (though even here YMMV), but if we're adding a new connection, it is different: more traffic -> more pressure on the last mile -> dropped packets.


I also don't see the point in having two TCP sockets going at the same time if both are communicating with the same server.

It is about TCP buffers and relative priorities. If you have a high-throughput transfer, you need rather large TCP buffers, but those buffers will slow down your "need-to-be-delivered-ASAP" packets. Over two connections, you can have one "low-latency" connection and one "high-throughput" connection, and play with their buffers (and also keep the "low-latency" one reasonably empty, so a new packet is transferred immediately). That being said, personally I prefer to use a single connection as long as it is possible.
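For example (purely illustrative, and the buffer sizes are arbitrary): on the "low-latency" connection you might disable Nagle's algorithm and keep the send buffer small, while the "high-throughput" connection gets a large one:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Tune an already-connected socket for low latency: push small packets out
// immediately and keep little data queued ahead of urgent messages.
void TuneLowLatency(SOCKET s)
{
    BOOL noDelay = TRUE;    // disable Nagle's algorithm
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, (const char*)&noDelay, sizeof(noDelay));

    int sndBuf = 8 * 1024;  // small send buffer (size is an arbitrary example)
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char*)&sndBuf, sizeof(sndBuf));
}

// Tune an already-connected socket for bulk throughput: a large send buffer
// keeps the pipe full at the cost of higher queuing delay.
void TuneHighThroughput(SOCKET s)
{
    int sndBuf = 256 * 1024;    // large send buffer (again, an arbitrary example)
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char*)&sndBuf, sizeof(sndBuf));
}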

EDIT: changed "fast" to "low-latency" to be more clear.


I don't see how having two TCP sockets open could cause lost packets,

Splitting one connection into two is not too likely to cause dropped packets (though even here YMMV), but if we're adding a new connection, it is different: more traffic -> more pressure on the last mile -> dropped packets.

A TCP connection should never outright lose packets. Dropped packets will be retransmitted until the sender receives an acknowledgement or the connection is broken.

As for the thing about relative priorities, that makes sense. I thought he was just trying to get more throughput.



A TCP connection should never outright lose packets.

It is a terminology issue. Strictly speaking, when you're using a TCP socket, you don't deal with packets at all (there are no 'packets' coming out of the socket, only a stream), so when we're speaking about "lost packets" in a TCP context, we're probably speaking about the underlying IP packets (just because there are no other packets in sight to speak about). Those underlying IP packets can be lost (and their loss can easily be induced by adding a second TCP connection). As with any terminology issue, though, there is no one single "right" answer, so interpretations may vary; the above is what I meant.

This topic is closed to new replies.
