
Calculating Packet Loss

Started by April 28, 2016 04:03 PM
5 comments, last by sufficientreason 8 years, 6 months ago

Tried doing some searches here but I couldn't actually find an answer to this. Let's say I'm putting a 1-byte rolling sequence number in each of my packets. On the receiving end, what's the cheapest way to calculate packet loss purely for statistics reporting (don't care about reliability, etc.)?

The main question is whether you'd want to get an estimate of packet loss even when zero packets have currently made it through.
If so, then the easiest way is to have a counter that you increment every time a packet is received, and read-and-clear this counter every 3 seconds.
Then divide out by the number of packets you expected to receive in 3 seconds; that's your effective throughput rate.
The two main problems with this scheme are:
- You have to have a consistent packet rate. This is often not a problem, especially for action games.
- You may end up with two windows where one receives 31 packets and one receives 29 packets, and you expect 30 per window. Using a "bank" for overage from the last window can help with this.
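
A minimal sketch of that counter in C#, just to make it concrete (the class name LossWindow and the expected-rate parameter are made up; it assumes you know your constant send rate):

using System;

// Sketch of the fixed-window counter described above (names are illustrative).
// Assumes a known, constant packet rate.
public class LossWindow
{
    private readonly int expectedPerWindow; // e.g. sendRate * 3 for a 3-second window
    private int received;                   // packets counted in the current window
    private int bank;                       // overage carried from the previous window

    public LossWindow(int expectedPerWindow)
    {
        this.expectedPerWindow = expectedPerWindow;
    }

    // Call once for every packet that arrives.
    public void OnPacketReceived()
    {
        received++;
    }

    // Call once per window (e.g. every 3 seconds); returns estimated loss in percent.
    public int ReadAndClearLossPercent()
    {
        int total = received + bank;
        bank = Math.Max(0, total - expectedPerWindow); // carry overage forward
        int got = Math.Min(total, expectedPerWindow);
        received = 0;
        return 100 - (100 * got) / expectedPerWindow;
    }
}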

If you're OK to update your estimate only when you receive a packet, you can do this:

unsigned char lastSeq;

int packetLossPercent(unsigned char newSeq) {
  unsigned char delta = (unsigned char)(newSeq - lastSeq); // wraps correctly at 256
  if (delta == 0 || delta >= 128) {
    // duplicate or out-of-order packet
    return 0;
  }
  // delta packets were expected since the last one we saw, but only this one arrived:
  // delta == 1 means no gap (0% loss), delta == 2 means 1 of 2 lost (50%), etc.
  int ret = 100 - 100 / delta;
  lastSeq = newSeq;
  return ret;
}
Call this for each packet you receive, and set the estimate to the value returned.
You may also want to add something that says "if it's been more than 2 seconds since I got a packet, assume 100% loss."
enum Bool { True, False, FileNotFound };

Call this for each packet you receive, and set the estimate to the value returned.

The one problem I'm having with that is that I'm using a dejitter buffer, so an out-of-order packet isn't necessarily bad and could still be safely processed at a higher level. Is there an alternative that takes this into account? In that snippet, a newSeq of 132 vs. a lastSeq of 134 would return 0, but both could safely be passed to the dejitter buffer and processed in time.

I basically just want this as a way of reporting to the player why things might be behaving poorly if they're playing on a Starbucks wifi next to a microwave. Doesn't have to be very precise, but I'd like it to be as accurate as possible.

You have conflicting requirements :-)

One thing I've seen is a little meter of bars marching right-to-left, with a bar for the number of packets received per second, and a red dot that shows up for each second where at least one packet X wasn't available by the time X was needed (after dejitter).
Each bar is like 4 pixels wide and the entire thing is perhaps 100 pixels wide, showing the last 20 seconds of quality.
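
Something like this could drive that meter (a rough C# sketch; QualityMeter, the bin count, and the method names are all made up for illustration):

using System;

// Rough sketch of the per-second quality bins behind such a meter.
// Names and sizes are illustrative; 20 bins covers the last 20 seconds.
public class QualityMeter
{
    private const int BinCount = 20;
    private readonly int[] packetsPerSecond = new int[BinCount];
    private readonly bool[] missedData = new bool[BinCount]; // the "red dot" flags
    private int currentBin;
    private long currentSecond;

    // Call with the current time (whole seconds) for every received packet.
    public void OnPacketReceived(long nowSeconds)
    {
        Advance(nowSeconds);
        packetsPerSecond[currentBin]++;
    }

    // Call when data needed this frame hadn't arrived in time (after dejitter).
    public void OnDataMissed(long nowSeconds)
    {
        Advance(nowSeconds);
        missedData[currentBin] = true;
    }

    private void Advance(long nowSeconds)
    {
        if (nowSeconds - currentSecond >= BinCount)
        {
            // First call or a big gap: reset the whole window.
            Array.Clear(packetsPerSecond, 0, BinCount);
            Array.Clear(missedData, 0, BinCount);
            currentSecond = nowSeconds;
            return;
        }
        while (currentSecond < nowSeconds)
        {
            currentSecond++;
            currentBin = (currentBin + 1) % BinCount;
            packetsPerSecond[currentBin] = 0;
            missedData[currentBin] = false;
        }
    }
}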
enum Bool { True, False, FileNotFound };

* It depends on what kind of protocol you are using.

You may use something in the packet header for checking, maybe some kind of id that is usually sequential.

* A reverse approach would be pinging back the server/client.

One thing I've seen is a little meter of bars marching right-to-left, with a bar for the number of packets received per second, and a red dot that shows up for each second where at least one packet X wasn't available by the time X was needed (after dejitter).
Each bar is like 4 pixels wide and the entire thing is perhaps 100 pixels wide, showing the last 20 seconds of quality.

That works. I could bin by the second and record the number of packets received in that second in the appropriate bin using a cyclic buffer. Then a smooth, more or less level graph would indicate good connection quality, while a fluctuating graph would indicate poor connection quality. I could also easily get an average ping per second using more or less the same bins.

Unfortunately, "wasn't available by the time X was needed" is actually complicated in my system. I do a per-entity dejitter, with different send rates for each entity, so knowing whether an arbitrary packet arrived with data just in time is actually a per-entity decision and not really something I could express in a quality meter.

EDIT: I'm actually going to try another approach. Will try to update once that's done.


Just wanted to post an update on what I decided to do.

My packet header is now 8 bytes containing the following:


private NetPacketType packetType; // 4 bits
private ushort processTime;       // 12 bits
private byte sequence;            // 8 bits
private ushort pingStamp;         // 16 bits
private ushort pongStamp;         // 16 bits
private float loss;               // 8 bits
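
For illustration, packing those fields into the 8 bytes could look roughly like this (a sketch only; the field order and the PackHeader name are arbitrary and not necessarily what the library actually does):

// Sketch of packing the header fields above into one 8-byte value.
// The layout here is an assumption for illustration only.
public static ulong PackHeader(
    byte packetType,    // low 4 bits used
    ushort processTime, // low 12 bits used
    byte sequence,      // 8 bits
    ushort pingStamp,   // 16 bits
    ushort pongStamp,   // 16 bits
    byte loss)          // loss already compressed from float to 8 bits
{
    ulong header = 0;
    header |= (ulong)(packetType & 0x0F) << 60;
    header |= (ulong)(processTime & 0x0FFF) << 48;
    header |= (ulong)sequence << 40;
    header |= (ulong)pingStamp << 24;
    header |= (ulong)pongStamp << 8;
    header |= loss;
    return header;
}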

Ping:

Each peer sends its latest stamp in each message and stores the last ping it received from the other end (a.k.a. the pong) as well as the local time that pong arrived. I only store a pong this way if it's newer than the one we already have, since packets may arrive out of order, so our stored pong is always the highest timestamp we've yet received. Ping/pong values are in milliseconds, stored in 16 bits, which gives us a little over a minute of rollover.

When it's time to send, we compute the process time (the time now minus the time we last received a pong) and include that and the pong time in the packet. The recipient then computes the RTT (current time minus pong time) and subtracts the process time to get the true ping RTT. The process time is stored in 12 bits, giving us a four-second cap -- if that isn't enough, there are bigger problems elsewhere.
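
A simplified sketch of that bookkeeping (not the exact library code; PingTracker and the method names are just illustrative):

// Sketch of the ping/pong RTT bookkeeping described above.
// Timestamps are 16-bit milliseconds and wrap around; names are illustrative.
public class PingTracker
{
    private ushort storedPong;          // newest ping received from the remote peer
    private ushort storedPongLocalTime; // local time (ms) when that ping arrived

    // Half-range comparison so wrapped 16-bit timestamps still order correctly.
    private static bool IsNewer(ushort a, ushort b)
    {
        return a != b && (ushort)(a - b) < 32768;
    }

    // Call on every received packet; returns the estimated RTT in ms.
    public int OnReceive(ushort remotePing, ushort remotePong, ushort remoteProcessTime, ushort localNow)
    {
        if (IsNewer(remotePing, storedPong))
        {
            storedPong = remotePing;
            storedPongLocalTime = localNow;
        }

        // remotePong echoes one of our own earlier pings;
        // subtract how long the remote peer sat on it before replying.
        return (ushort)(localNow - remotePong) - remoteProcessTime;
    }

    // Call when building an outgoing packet header.
    public void OnSend(ushort localNow, out ushort pingStamp, out ushort pongStamp, out ushort processTime)
    {
        pingStamp = localNow;
        pongStamp = storedPong;
        processTime = (ushort)(localNow - storedPongLocalTime); // capped to 12 bits elsewhere
    }
}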

Packet Loss:

Each peer sends an 8-bit sequence id with every packet. On the recipient end we keep a sliding 64 (+1 latest) bit window of received sequence ids. The packet loss is then computed from the percentage of set bits in that window (an unset bit is a lost packet). If we haven't received a message in [2] seconds, then we report 100% packet loss. This gives us the loss rate for incoming packets. When it's time to send, we also include our own measured packet loss (compressed from a float to a byte) so the remote peer can know the loss rate for the packets they sent.
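
Roughly, the window bookkeeping looks something like this (a simplified sketch, not the library's exact code; the 2-second 100%-loss timeout would live outside this class):

// Simplified sketch of the sequence window (illustrative, not the library's exact code).
// Bit i of history set = packet (latestSequence - 1 - i) was received.
public class LossCounter
{
    private byte latestSequence;
    private ulong history;

    public void OnReceive(byte sequence)
    {
        int diff = (byte)(sequence - latestSequence);
        if (diff == 0)
            return; // duplicate of the latest packet
        if (diff < 64)
        {
            // Newer packet: shift history forward and record the old latest.
            history = (history << diff) | (1UL << (diff - 1));
            latestSequence = sequence;
        }
        else if (diff < 128)
        {
            // Newer, but the gap covers the whole window.
            history = (diff == 64) ? (1UL << 63) : 0UL;
            latestSequence = sequence;
        }
        else
        {
            // Older (out-of-order) packet: set its bit if it still fits in the window.
            int back = 256 - diff; // how far behind the latest it is
            if (back <= 64)
                history |= 1UL << (back - 1);
        }
    }

    // Loss over the latest packet plus the 64 before it.
    public float GetLossPercent()
    {
        int received = 1; // the latest packet itself
        for (int i = 0; i < 64; i++)
            if ((history & (1UL << i)) != 0)
                received++;
        return 100.0f * (65 - received) / 65.0f;
    }
}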

Note that you could use these sequence windows to detect duplicate packets and prevent processing them, but I handle that at a higher level, so this low-level implementation doesn't address it. I don't personally trust an 8-bit sequence id and a 64-bit history bitvector to be robust enough to decide when to drop or keep packets.

If you're interested in seeing how this works in detail, the low-level UDP library I'm working on is here. Still in progress, but reaching maturity (with an intentionally small feature set). Thanks for the help!

