
separating Up + Down latency

Started by June 03, 2006 12:45 PM
1 comment, last by Monkeyget 18 years, 8 months ago
I just started writing a client prediction system, and the first thing I added was a system to calculate latency. I thought it might be useful for the client to know the latency from the client to the server and from the server to the client separately, in case they were very different, but I haven't been able to figure out the math for that. Right now, the client pokes the server with a ping request and stores the time (t0). The server responds with the current server time (t1S), and when the client gets the message back, it stores the time again (t2). Now we can work out the latency and the clock offset quickly: offset = (t1S-t0)/2 + (t1S-t2)/2, and lag = t2-t0. We're done. The problem is that with, say, a 600ms ping, I want to know whether that's a 300ms/300ms ping or a 550ms/50ms ping. Is it even possible to calculate this value? If so, does anyone know the math, or could they point me to a resource? I checked out NTP, but as far as I can tell, it assumes a network that's roughly symmetric both ways.
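For reference, here's roughly what that calculation looks like in code (a simplified sketch; the names are made up, and timestamps are assumed to be milliseconds read from each machine's local clock):

// Simplified sketch of the measurement described above.
// t0 and t2 come from the client clock; t1S comes from the server clock.
#include <cstdint>
#include <iostream>

struct PingSample {
    int64_t t0;   // client time when the ping request was sent
    int64_t t1S;  // server time stamped into the reply
    int64_t t2;   // client time when the reply arrived
};

int main() {
    PingSample s{1000, 5300, 1600};                            // made-up numbers
    int64_t lag    = s.t2 - s.t0;                              // 600ms round trip
    int64_t offset = (s.t1S - s.t0) / 2 + (s.t1S - s.t2) / 2;  // assumes a 300/300 split
    std::cout << "lag=" << lag << "ms offset=" << offset << "ms\n";
}

The offset formula only works if the trip really is split evenly, which is exactly the assumption I'd like to get rid of.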
In the absence of an external reference (such as a radio-synchronized atomic clock which compensates for propagation delay), you can't really figure out how much of the latency is in one direction vs. the other.

You can run some statistical analysis on things like jitter and make some educated guesses; you can also examine links for known characteristics (e.g., dial-up upstream, satellite downstream), but there's no general method that will "just work".

The good news is that latency is pretty symmetric on most of the internet today.
enum Bool { True, False, FileNotFound };
Investigating NTP was a really good idea because it addresses a problem very similar to yours.

The answer is: you can only calculate the up time and down time accurately if both clocks are perfectly synchronized!

Let's show graphically what happens when a packet is sent to the server:

[Image: timeline of a request going from the client C to the server S and back]

C is the client and S is the Server.
TC1 is when the client sends the request
TS1 is when the server receives the request
TS2 is when the server sends the response
and TC2 is when the client receives the response

There are two measurements for the server because we assume some kind of work has to be done before a response can be sent.


After a message has been sent and received again, the client knows all 4 values: TC1, TS1, TS2, TC2. The quirk is that the client clock (the TC values) and the server clock (the TS values) are probably not perfectly synchronized.
What does the client know?
I have put in green what the client knows accurately, and in red what it knows but not accurately:

[Image: the same timeline, with the accurately-known intervals in green and the inaccurate ones in red]

The total time is known and accurate, and the handling time by the server is accurate, because each is computed only from timestamps with the same reference clock. The only information that is not accurate is the time it took to go and the time it took to come back... and that's exactly what you want to know.

There IS a way to compute how much time passed during the sending or receiving of a packet. The only thing that can be done (and there is no alternative) is:
time to send the packet = TS1 - TC1
time to receive the packet = TC2 - TS2
But that information is not accurate because the clocks of the client and server are probably not exactly the same...
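To make that concrete, here is a little sketch with made-up numbers and a hypothetical 4000ms clock offset. The round trip and the server handling time come out right no matter what the offset is, while the per-direction times absorb the offset (one of them even goes negative):

// Made-up ground truth, invisible to the client: 550ms up, 50ms down,
// 20ms server handling, and a server clock that runs 4000ms ahead.
#include <cstdint>
#include <iostream>

int main() {
    const int64_t clockOffset = 4000;          // hypothetical, unknown to the client

    int64_t TC1 = 1000;                        // client sends request   (client clock)
    int64_t TS1 = TC1 + 550 + clockOffset;     // server receives it     (server clock)
    int64_t TS2 = TS1 + 20;                    // server sends response  (server clock)
    int64_t TC2 = TC1 + 550 + 20 + 50;         // client receives it     (client clock)

    // Accurate: each difference uses timestamps from a single clock.
    int64_t roundTrip = TC2 - TC1;             // 620ms
    int64_t handling  = TS2 - TS1;             // 20ms
    int64_t network   = roundTrip - handling;  // 600ms of pure network time

    // Not accurate: each difference mixes the two clocks, so the offset leaks in.
    int64_t upApparent   = TS1 - TC1;          // 550 + 4000 = 4550ms
    int64_t downApparent = TC2 - TS2;          //  50 - 4000 = -3950ms

    std::cout << "round trip=" << roundTrip << " handling=" << handling
              << " network=" << network << "\n";
    std::cout << "apparent up=" << upApparent << " apparent down=" << downApparent
              << " (their sum is still " << upApparent + downApparent << ")\n";
}

The two apparent one-way times always add up to the true network time, so the client can never tell how the clock offset and the real asymmetry are split between them.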

What's the answer to that problem? NONE!
The creators of the NTP protocol had exactly the same problem, and they didn't find an answer because there isn't one. And since those creators are probably far smarter than I am, I don't think I could find a magic answer either.



PS: You probably already know all that, but my answer was mostly meant to make things clearer... for me! I studied the NTP protocol and used this post to remind myself how-the-hell-that-NTP-thingy-works-again.

