Recently I read this. Does anyone know how these numbers look for heavier traffic (well, games)? Is packet loss really that irrelevant (in terms of %)?
How unreliable is UDP?
These days internet connections are more reliable than they used to be. Back in the days of dialup you could expect to lose the occasional ping against a host, simply because the infrastructure wasn't as reliable (even the link between your PC and your modem was a serial cable, followed by a phone line, both of which introduced line noise). UDP still doesn't guarantee delivery, of course, so you still need functionality to ensure delivery or to handle lost or out-of-order data, but in general things are better than they used to be.
In my experience, packet loss does not happen too often. When it does, it seems to happen in bursts. Reordering seems to be much more frequent.
However, what matters most is that it can happen. For simple game state this is not an issue (simply drop the old state and interpolate), but you really need to be prepared for issues with game events (e.g. chat messages, kill events).
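To make the "drop the old state" idea concrete, here is a minimal sketch; the struct and names are invented for illustration, not taken from any particular engine. Each state update carries a sequence number, and anything older than the newest update already applied is ignored:

```cpp
#include <cstdint>

// Hypothetical snapshot header; a real game would pack this into
// the first bytes of each UDP datagram.
struct StateUpdate {
    std::uint32_t sequence;   // incremented by the sender for every snapshot
    float x, y;               // example payload: a position
};

class StateReceiver {
public:
    // Returns true if the update was applied, false if it was stale.
    bool onStateUpdate(const StateUpdate& update) {
        if (hasState_ && !isNewer(update.sequence, lastSequence_))
            return false;              // older than what we already have: drop it
        lastSequence_ = update.sequence;
        hasState_ = true;
        // ...apply update.x / update.y and interpolate toward it...
        return true;
    }

private:
    // Wrap-around-safe "a is newer than b" for 32-bit sequence numbers.
    static bool isNewer(std::uint32_t a, std::uint32_t b) {
        return static_cast<std::int32_t>(a - b) > 0;
    }

    std::uint32_t lastSequence_ = 0;
    bool hasState_ = false;
};
```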
Most net-libs (DirectPlay, ENet, RakNet) implement a way to mark packets as 'reliable and ordered', thereby ensuring that packets arrive and are delivered to the application in the correct order.
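With ENet, for instance, that mostly boils down to which flag the packet is created with. A rough sketch (error handling omitted, and the peer is assumed to already be connected):

```cpp
#include <cstddef>
#include <enet/enet.h>

// Send 'data' reliably and in order on channel 0. ENet retransmits the
// packet until it is acknowledged and hands channel traffic to the
// application in order.
void sendReliable(ENetPeer* peer, const void* data, std::size_t length) {
    ENetPacket* packet = enet_packet_create(data, length, ENET_PACKET_FLAG_RELIABLE);
    enet_peer_send(peer, 0, packet);
}

// Unreliable (but still sequenced) traffic, e.g. position snapshots,
// simply omits the flag.
void sendUnreliable(ENetPeer* peer, const void* data, std::size_t length) {
    ENetPacket* packet = enet_packet_create(data, length, 0);
    enet_peer_send(peer, 0, packet);
}
```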
In my (admittedly limited) experience, out-of-order packets were a bigger issue than lost packets.
I had no safety checks, so they caused a lot of crashes: a player movement packet arriving before the login packet, for example.
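For what it's worth, a hypothetical version of that missing safety check (all names invented for illustration) is just a session lookup: drop gameplay packets from senders that haven't completed the login step yet.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical per-player state, keyed by the sender's address ("ip:port").
struct PlayerSession { /* ...position, name, etc... */ };

class Server {
public:
    void onLoginPacket(const std::string& fromAddr) {
        sessions_.emplace(fromAddr, PlayerSession{});
    }

    void onMovementPacket(const std::string& fromAddr /*, payload... */) {
        auto it = sessions_.find(fromAddr);
        if (it == sessions_.end())
            return;   // movement arrived before (or without) a login: ignore it
        // ...safe to update it->second here...
    }

private:
    std::unordered_map<std::string, PlayerSession> sessions_;
};
```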
I had two computers connected through a crossover TP cable (i.e. not even a switch or router in between) and did some tests. The only other traffic was Windows background stuff. When sending small packets with at least a one-millisecond sleep in between, 100% of the packets were received, and in order. As soon as I dropped that sleep, however, I saw something like 15-20% packet loss (packets not received at all), and 10-15% of the packets that were received arrived out of order. Bursts (multiple packets between sleep calls) resulted in fewer dropped packets, but a large number of packets were still received out of order.
Reintroducing the sleep between each packet but also introducing other traffic showed that it was _really_ easy to disrupt the communication and get losses. Heavy load on the computers also had some effect.
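If anyone wants to reproduce that kind of measurement, the interesting part is the receiver-side bookkeeping: send datagrams carrying an incrementing sequence number and tally gaps and late arrivals. A minimal sketch with the socket plumbing omitted; feedSequence() would be called with the number pulled out of each received datagram:

```cpp
#include <cstdint>
#include <cstdio>

// Tallies dropped and out-of-order packets from a stream of sequence numbers.
class LossCounter {
public:
    void feedSequence(std::uint32_t seq) {
        ++received_;
        if (received_ == 1) {
            expected_ = seq + 1;
            return;
        }
        if (seq == expected_) {
            ++expected_;                       // arrived in order
        } else if (static_cast<std::int32_t>(seq - expected_) > 0) {
            lost_ += seq - expected_;          // gap: assume the missing ones were dropped
            expected_ = seq + 1;
        } else {
            ++outOfOrder_;                     // arrived later than expected
            if (lost_ > 0) --lost_;            // previously counted as lost; it wasn't
        }
    }

    void report() const {
        std::printf("received=%llu lost=%llu out-of-order=%llu\n",
                    (unsigned long long)received_,
                    (unsigned long long)lost_,
                    (unsigned long long)outOfOrder_);
    }

private:
    std::uint32_t expected_ = 0;
    std::uint64_t received_ = 0, lost_ = 0, outOfOrder_ = 0;
};
```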
Thanks for the responses, they are useful, but I am more interested in statistics. They will vary from connection to connection, but an average % says more than "frequent", "good", or "bad".
What is the time span for out-of-order delivery on a good versus a bad connection? What I mean is: how much later does such a packet arrive? Is it something within 2 × ping, or can it arrive 10 seconds later?
Even ping can vary by a factor of 50 in some cases. There are just so many variables. For example, there may be a VPN between two peers, either directly between them or at some point along the path. Such a VPN could decide to "improve" stability by making sure all packets are transmitted through its tunnel, which means packets could get resent even though it's UDP. In extreme cases, you may have multiple routes to your target and the routing system in between tries to load balance or whatever. In that case, one route might take seconds while the other takes milliseconds.
I am more interested in statistics. They will vary from connection to connection, but an average % says more than "frequent", "good", or "bad".
You won't find any. It depends almost entirely on the physical characteristics of the networks involved.
Some of it is due to context.
Switches get flooded, quality of service kicks in, electronic routes change, equipment gets shut down, and more. Maybe you'll be in a context where the path initially chosen through the Internet is near perfect, and the path is kept alive for you due to frequent traffic. That connection could theoretically go years without rerouting or other changes.
Or you could have terrible routing luck: constantly rerouted through locations that are under heavy load and where workers are modifying equipment between the edge routers. They assume no traffic will be lost because of all the redundancy within their infrastructure, but the reroutes and automatic changes cause a burst of dropped and reordered packets every time the humans pull a cable.
Some of it is due to physical conditions.
Someone living in a building where the lines are crumbling copper wire with a large amount of electrical noise is going to see a lot of loss. Maybe you'll lose 70% of the packets every time certain equipment in the building gets used, but then see perfect or near-perfect delivery when that equipment turns off.
Someone living in suburbia with relatively new wiring in their home and neighborhood might go for days with no nearby noise; then, when construction crews start their work three blocks over, their connection may have intermittent losses for days, and every time the jackhammer turns on their connection quality drops by half.
Someone in a corporate building with fiber to the back office, a high-quality corporate switch feeding their cable run, and high-quality shielded Cat 6 between their machine and the switch, or perhaps a direct machine-to-machine connection on the same network, may never experience a single issue.
The users might consider their own rate "normal" because that is what they are used to. For software there is no "normal". It happens sometimes. Sometimes it happens near continuously. Sometimes it can go days or weeks or more between happening. As far as the software developer is concerned, UDP packets getting duplicated, dropped, and reordered are entirely random events that may happen at any time and for any duration.
The dropped packet indicators always start on the inside of Comcast's network, but outside my local link, so it's likely an internal capacity problem on their network.
Meanwhile, I've seen experiments run with 10 million UDP packets sent from a data center on the west coast, to a data center on the east coast, using regular transit (not leased lines) and not a single packet dropped!
In addition to the Comcast example above, WiFi and mobile networks will see more packet drops than wired connections.
On iOS, there's a developer feature which lets you simulate packet loss, etc. It has various presets for 3G, edge, WiFi, etc. One of them is called 'Very Bad Network' and has the following (grabbed from the full list of presets here: http://jeffreysambells.com/2012/09/22/network-link-conditioner):
- In bandwidth: 1000 Kbps
- In packet loss: 10%
- In delay: 500 ms
- Out bandwidth: 1000 Kbps
- Out packet loss: 10%
- Out delay: 500 ms
This is obviously not real-life evidence of anything whatsoever, but someone at Apple thinks that a bad network has 10% packet loss. Two machines that are both on a bad connection would see roughly 20% packet loss on their communication (19%, strictly, since 1 − 0.9 × 0.9 = 0.19).
I think if you can deliver a reasonable networked game experience under that sort of packet loss, then you're doing fine.
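You don't strictly need the iOS tool to test against that kind of preset; a common trick is to wrap your own send path in something that randomly drops outgoing datagrams (and, if you want, delays them). A rough sketch of the drop half, using the 10% rate from the preset; sendDatagram() is a made-up stand-in for whatever actually writes to the socket:

```cpp
#include <cstddef>
#include <random>

// Stand-in for whatever actually writes to the UDP socket.
void sendDatagram(const void* /*data*/, std::size_t /*length*/) {
    // ...sendto() would go here...
}

// Randomly discards outgoing datagrams to emulate a lossy link.
class LossySender {
public:
    explicit LossySender(double lossRate) : lossRate_(lossRate) {}

    void send(const void* data, std::size_t length) {
        if (roll_(rng_) < lossRate_)
            return;                   // pretend the network ate this one
        sendDatagram(data, length);
    }

private:
    double lossRate_;                 // 0.10 matches the 'Very Bad Network' preset
    std::mt19937 rng_{std::random_device{}()};
    std::uniform_real_distribution<double> roll_{0.0, 1.0};
};
```

Adding the 500 ms delay from the same preset would just mean queueing each datagram with a release timestamp instead of sending it immediately.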