Packet Size and Latency

Started by August 26, 2005 07:45 AM
14 comments, last by John Schultz 19 years, 5 months ago
Quote:
Original post by hplus0603
Just a brief addition: the reason packet size matters for latency, is that you have to have all the data of the packet before sending it, which is when you send the first byte -- but you have to receive all the data (i e, receive the last byte) before you can even start looking at the first byte of data.


The problem is, we've gone off into theoreticals while ignoring the fact that nobody sends 56000 bytes and calls it a 'packet'! For reasonably-sized packets the size factor is dwarfed by the network speed factor.

Wave originally said, "I've heard that there is no latency difference in sending 10 byte packets compared to 150 byte packets", and that was the context in which I answered. A 56K modem - presumably the slowest link in the chain - would take less than 0.2ms to send/receive 10 bytes and less than 3ms to receive 150 bytes. And that's on 56k - on DSL or cable, again as Wave said "I aim for cable modems", it's going to be even less. So at the relative packet sizes mentioned in the example, well short of the usual MTU, the difference is surely almost entirely independent of packet size.
Quote:
A 56K modem - presumably the slowest link in the chain - would take less than 0.2ms to send/receive 10 bytes and less than 3ms to receive 150 bytes


Assuming a speed of 56,000 bps (in practice, the upstream speed of a 56k modem is limited to about 33,600 bps), 150 bytes is 1,200 bits, or about 21 milliseconds to send, which is significantly slower than the 80 bits of a 10-byte packet (about 1.4 milliseconds). I think your numbers are off by a factor of 10.
enum Bool { True, False, FileNotFound };
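The arithmetic above is just the serialization delay of the link. A minimal sketch that reproduces those figures (the helper function name is my own, not from the thread):

```python
# Serialization delay: the time to clock a packet's bits onto the link.
# Reproduces the 56 kbps figures quoted in the post above.

def serialization_delay_ms(payload_bytes: int, link_bps: int) -> float:
    """Milliseconds to transmit payload_bytes over a link running at link_bps."""
    return payload_bytes * 8 / link_bps * 1000

print(serialization_delay_ms(150, 56_000))  # ~21.4 ms for 150 bytes
print(serialization_delay_ms(10, 56_000))   # ~1.4 ms for 10 bytes
```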
Quote:
Original post by Kylotan
Quote:
Original post by hplus0603
Just a brief addition: the reason packet size matters for latency, is that you have to have all the data of the packet before sending it, which is when you send the first byte -- but you have to receive all the data (i e, receive the last byte) before you can even start looking at the first byte of data.


The problem is, we've gone off into theoreticals while ignoring the fact that nobody sends 56000 bytes and calls it a 'packet'! For reasonably-sized packets the size factor is dwarfed by the network speed factor.


From the post above (fast internet connection):

Quote:

Pinging [ip address] with 750 bytes of data:

Reply from [ip address]: bytes=750 time=18ms TTL=117
Reply from [ip address]: bytes=750 time=14ms TTL=117
Reply from [ip address]: bytes=750 time=18ms TTL=117
Reply from [ip address]: bytes=750 time=15ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 14ms, Maximum = 18ms, Average = 16ms

Pinging [ip address] with 1500 bytes of data:

Reply from [ip address]: bytes=1500 time=33ms TTL=117
Reply from [ip address]: bytes=1500 time=25ms TTL=117
Reply from [ip address]: bytes=1500 time=37ms TTL=117
Reply from [ip address]: bytes=1500 time=34ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 25ms, Maximum = 37ms, Average = 32ms


750/1500 byte packets are typical for a real network game, especially for a computer acting as the server. In the above real-world internet example, the latency is directly proportional to packet size. For a shipped XBox Live! game, I tuned the packet size using DummyNet and real-world testing to get the best balance for gameplay (less than 1000 bytes). On a fast connection with multiple hops and other delay factors as noted, the total latency grows from the base latency, which is proportional to packet size. For example, the delay caused by packet size is multiplied by the number of hops: over 10 hops, 16ms/32ms becomes 160ms/320ms (where the link speed at every hop is about the same).

[Edited by - John Schultz on August 28, 2005 2:18:28 AM]
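The per-hop argument above is a store-and-forward model: each router must receive the whole packet before forwarding it, so the serialization delay is paid again at every hop. A sketch under that assumption (the 375 kbps uniform link speed is a hypothetical value chosen only so one hop works out to the 16 ms figure above):

```python
# Store-and-forward model: each hop fully receives the packet before
# forwarding, so serialization delay accumulates once per hop.
# Assumes a hypothetical uniform link speed at every hop.

def total_serialization_ms(payload_bytes: int, link_bps: int, hops: int) -> float:
    """Cumulative serialization delay over a chain of store-and-forward hops."""
    per_hop_ms = payload_bytes * 8 / link_bps * 1000
    return per_hop_ms * hops

# 750 bytes at ~375 kbps: ~16 ms per hop, ~160 ms over 10 hops.
print(total_serialization_ms(750, 375_000, 10))
```

Real round-trip times also include propagation and queuing delay, so this only models the component that scales with packet size.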
Quote:
Original post by Wave
@John Schultz - What's this?
http://www.brightland.com/sourcePages/network_technology.htm
It speaks of a smart network library, but it's just a plain text page? Is this a project that is not finished? What's the actual page?

And also you said you've made your own custom netlib - the above? How did you implement the reliable UDP part? Did you send an ack, for every reliable packet received or did you piggyback them on other packets? or did you do this depending on the current traffic?


The network technology is operational and was used in the first XBox Live! game in 2002. It's currently being used in an online physics-based racing game in development.

The design is strongly based on the concepts from TCP (for bandwidth adaptation), but optimized for real-time gameplay using UDP. I can't say more about the implementation other than what's on the web page.
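The piggybacking Wave asked about is a standard reliable-UDP technique, and since the actual implementation is confidential, here is only a generic sketch: each outgoing packet carries its own sequence number plus an ack of the latest sequence received from the peer, so acks ride along on regular traffic instead of needing standalone packets. All field names and sizes below are illustrative assumptions, not the Brightland protocol:

```python
# Generic piggybacked-ack header sketch (NOT the Brightland/XBox Live!
# implementation, which is not public). Field widths are hypothetical.

import struct

HEADER = struct.Struct("!HH")  # seq (2 bytes), ack (2 bytes), network byte order

def build_packet(seq: int, remote_seq: int, payload: bytes) -> bytes:
    """Prepend a header carrying our sequence number and an ack of the peer's."""
    return HEADER.pack(seq, remote_seq) + payload

def parse_packet(data: bytes) -> tuple[int, int, bytes]:
    """Split a received packet back into (seq, ack, payload)."""
    seq, ack = HEADER.unpack_from(data)
    return seq, ack, data[HEADER.size:]

pkt = build_packet(7, 42, b"state update")
print(parse_packet(pkt))  # (7, 42, b'state update')
```

A sender only falls back to a dedicated ack packet when it has had no outgoing traffic for some timeout, which matches the "depending on the current traffic" option in the question.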
hplus0603 - Yeah, I was calculating in bytes rather than bits. Silly me. Still, I stand by my overall point, given that the 56K argument is largely irrelevant.

John - why would anyone send 1500 byte packets when that's ramming up against a common MTU limit? I can see very few cases where I'd ever want to send that much data to one client in one go when speed was important anyway. For example, Starcraft packet sizes average at 132 bytes and Counterstrike averages at 165 (ref: here). Are these not 'real' network games? I'm curious.
Quote:
Original post by Kylotan
hplus0603 - Yeah, I was calculating in bytes rather than bits. Silly me. Still, I stand by my overall point, given that the 56K argument is largely irrelevant.

John - why would anyone send 1500 byte packets when that's ramming up against a common MTU limit? I can see very few cases where I'd ever want to send that much data to one client in one go when speed was important anyway. For example, Starcraft packet sizes average at 132 bytes and Counterstrike averages at 165 (ref: here). Are these not 'real' network games? I'm curious.


As stated above, send the smallest packets possible to minimize latency. 1500 bytes was chosen as an even multiple of 750 to illustrate that latency is proportional to packet size (note also the "to be fair" example with 1472/1473 bytes showing MTU effects).
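The 1472/1473 threshold mentioned above comes from the 1500-byte Ethernet MTU: 20 bytes of IP header plus 8 bytes of ICMP header leave 1472 bytes of ping payload, and one byte more forces IP fragmentation into two packets. A simplified sketch (it ignores the rule that fragment data sizes are multiples of 8):

```python
# Why ping -l 1473 costs noticeably more than -l 1472 on Ethernet:
# the payload no longer fits in one 1500-byte frame and gets fragmented.

MTU = 1500
IP_HEADER = 20
ICMP_HEADER = 8

def fragments_needed(payload_bytes: int) -> int:
    """IP fragments required for an ICMP echo payload (simplified model)."""
    first_capacity = MTU - IP_HEADER - ICMP_HEADER  # 1472 bytes
    if payload_bytes <= first_capacity:
        return 1
    # Later fragments carry only an IP header (8-byte alignment ignored).
    remaining = payload_bytes - first_capacity
    per_fragment = MTU - IP_HEADER
    return 1 + -(-remaining // per_fragment)  # ceiling division

print(fragments_needed(1472))  # 1
print(fragments_needed(1473))  # 2
```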

Before the XBox Live! launch, we were asked what (max) size packets we were sending, as some routers in Japan were dropping large packets (around 1000+ bytes). Fortunately, we were using under 1000 bytes (max), but other developers were using more (thus the discovery of the problem). We did not send such large packets per network frame; large packets were only used for initial setup and during periods of high activity (many moving objects, lots of reliable message traffic).

Any game with a large number of moving objects, where positional accuracy is important for gameplay (and not running lock-step/parallel-state-sim where only control input is sent) and/or with many reliable events being fired, can easily hit 1000+ byte packets.

Latency is (minimally) directly proportional to packet size. Every hop adds to the total latency due to packet size.