so the max size of a single sendto() package is 2^16?
does this size affect how fast the package arrives?
so the bigger the package, the slower?
First: "packet" not "package."
Second, the maximum size of a packet depends on protocol and hardware.
The maximum size of a UDP datagram on IPv4 is 65507 bytes (65535 minus 20 bytes of IP header and 8 bytes of UDP header).
On most networks, that will be fragmented into IP fragments of size 1452, or 576, or 9000, or whatever the link MTU dictates, and be reassembled on the receiving side.
(IPv6 doesn't fragment in transit, but guarantees a minimum MTU of 1280 bytes)
http://en.wikipedia.org/wiki/Maximum_transmission_unit
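To make the limit concrete, here is a minimal sketch (assuming a POSIX system; the port and address are hypothetical, and socket buffer limits may also get in the way) that sends the largest payload IPv4 UDP allows; one byte more and sendto() fails with EMSGSIZE:

```c
/* Minimal sketch: send the largest UDP payload IPv4 allows.
 * 65535 (max IP datagram) - 20 (IP header) - 8 (UDP header) = 65507. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9999);                    /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example address */

    static char buf[65507];                          /* max IPv4 UDP payload */
    memset(buf, 'x', sizeof buf);

    ssize_t n = sendto(fd, buf, sizeof buf, 0,
                       (struct sockaddr *)&dst, sizeof dst);
    if (n < 0)
        perror("sendto");  /* EMSGSIZE if the payload were any larger */
    else
        printf("sent %zd bytes (fragmented on the wire if > path MTU)\n", n);

    close(fd);
    return 0;
}
```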
Third, transmission speed depends on the propagation speed of the signal (light/electricity), the speed of routing, and the throughput of the narrowest link.
If your ping (round-trip) time is X, your throughput is Y, and your packet size is Z, an estimate for one-way transmission time is X/2 + Z/Y.
This does not take into account buffering, interrupt latency, or game simulation delay.
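As a back-of-the-envelope illustration of that formula (all the numbers below are made up):

```c
/* Back-of-the-envelope estimate from above: time ~ X/2 + Z/Y,
 * where X = round-trip ping, Y = throughput, Z = packet size. */
#include <stdio.h>

/* one-way delivery estimate in seconds */
static double estimate_delivery_s(double ping_s, double throughput_Bps,
                                  double packet_bytes) {
    return ping_s / 2.0 + packet_bytes / throughput_Bps;
}

int main(void) {
    /* hypothetical link: 40 ms ping, 1 MB/s throughput, 1200-byte packet */
    double t = estimate_delivery_s(0.040, 1e6, 1200.0);
    printf("estimated one-way time: %.4f s\n", t);  /* ~0.0212 s */
    return 0;
}
```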
"(IPv6 doesn't fragment in transit, but guarantees a minimum MTU of 1280 bytes)" Is that so? I've admittedly never sent UDP datagrams larger than 1280 bytes, since fragmentation isn't such a great plan for UDP (losing one fragment loses them all).
But I was under the impression that even though routers no longer transparently fragment datagrams, a router that gets a datagram larger than its MTU sends back an ICMP error, and the sending node does the fragmenting (and resending). Is that not the case?
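On Linux you can watch exactly that machinery: setting IP_MTU_DISCOVER to IP_PMTUDISC_DO marks outgoing datagrams with DF, and the kernel records the path MTU it learns from any "fragmentation needed" ICMP errors. A Linux-specific sketch (destination address and port are hypothetical):

```c
/* Linux-specific sketch: set the DF bit via path MTU discovery,
 * then query the path MTU the kernel currently believes in.
 * glibc exposes IP_MTU_DISCOVER, IP_PMTUDISC_DO, and IP_MTU
 * through <netinet/in.h>. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int on = IP_PMTUDISC_DO;  /* always set DF; never fragment locally */
    if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &on, sizeof on) < 0)
        perror("setsockopt(IP_MTU_DISCOVER)");

    /* Connect so the kernel associates a route (and its MTU) with the fd. */
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(9999);                    /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example address */
    if (connect(fd, (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("connect");

    int mtu = 0;
    socklen_t len = sizeof mtu;
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        printf("current path MTU estimate: %d\n", mtu);
    else
        perror("getsockopt(IP_MTU)");  /* only valid on connected sockets */

    close(fd);
    return 0;
}
```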
"I was under the impression that even though routers no longer transparently fragment datagrams, a router that gets a datagram larger than its MTU sends back an ICMP error, and the sending node does the fragmenting (and resending). Is that not the case?"
Fragmentation is complicated, thanks to the OSI model. UDP sits at the transport layer; the traffic underneath it can be fragmented at the network, data link, or physical layer.
The IP protocol has a "don't fragment" (DF) bit. If a datagram needs to be fragmented at the network layer AND the flag is set, then an ICMP error is triggered (also at the network layer, since IP is itself a network-level protocol).
If either of those conditions doesn't hold, the datagram gets fragmented and, hopefully, reconstructed without an error.
Setting the DF flag is probably a bad idea in most cases, since several protocols have a small MTU. Consider that SLIP (Serial Line IP) has an MTU of 296 bytes. Network-level access is typically done through PPPoE and PPPoA, but if you happen to travel through SLIP at any point of your connection, you're almost certain to fragment.
Hmm... I doubt the network layer plays that role here. Three quarters or more of all home internet traffic (including mine) goes via ATM nowadays. ATM cells carry 48 bytes of payload. Accounting for 40 bytes of IP and TCP headers, that leaves only 8 bytes of payload per cell when using TCP. Which means that if the IPv4 DF bit made the lower layers fail on anything bigger, such a thing as TCP congestion control (which sets DF under IPv4) couldn't possibly work: your window size would be 8 bytes. With IPv6, not even the header without payload fits into one cell, so if fragmenting below the network layer were not allowed, there would be no way to transmit TCP/IP or UDP/IP at all. That is obviously not the case.
Also, one should not confuse IPv4 (which has a DF bit that can be set optionally) with IPv6 (which behaves as if DF were always set).
"Third, transmission speed depends on" path too.
I remember someone I worked with MANY years ago dealing with a protocol going through a geosynchronus satellite link and having to significantly change the way ack/retry was done...