
Latency and server/client-time

Started by August 20, 2014 12:36 PM
11 comments, last by wodinoneeye 10 years, 2 months ago

Hi,

I am working on a UDP client/server setup with boost::asio at the moment. I've got a basic ack & reliability system in place and tested a connection over the internet yesterday for the first time (instead of running on ports on the same machine...). There's only chat, sending of input, and syncing of a constant number of game objects so far.

Against my expectations, everything is working pretty nicely already. Latency was around 120 ms. I guess that's OK, given I'm not running an actual server, just two ordinary PCs. I just check on the local machine how long it takes until I get an (immediate) reply to a sent packet.

Now I'm wondering if there's a way to split this ping into send/receive travel times. I mean, most of the time I want to know how old a packet is that was sent one way, like a server snapshot/update.

I could just compare simulation times for that, if they are running synchronously. But the way I see it, the only way to sync timers on two machines is by knowing how long a message is underway in ONE direction.

Any advice?

The best you might be able to do is keep track of the differences in time. Send your 'ping' msg (with your departure clock time), have the other side mark it with its own departure time, and when it arrives record the packet's return incoming time (with your local clock).

You can then tell when the round trip and each half varies (by doing various difference calculations with those timestamps) to see if they get longer or shorter (if not the exact time they took). The clock values (performance timer times) passed from either side should at least remain fairly consistent relative to each other.

Once the relative clock values are known, each leg of the round trip can be tracked for changes in its time (by comparing against the times from other packets).
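(For illustration, here's a minimal C++ sketch of that bookkeeping; the struct name and the four-timestamp layout are my own, not anything from the posts above. The client stamps the ping on departure, the other side stamps it on arrival and on reply, and the client stamps the pong on arrival; from those four values you get the round trip and a rough clock-offset estimate, even though the two clocks are unrelated.)

#include <chrono>
#include <cstdint>

// Timestamps in milliseconds, each taken on that machine's own steady clock.
// t0: client sends ping      t1: server receives ping
// t2: server sends pong      t3: client receives pong
struct PingTimestamps {
    int64_t t0, t1, t2, t3;
};

// Round-trip time, with the server's processing delay subtracted out.
int64_t round_trip_ms(const PingTimestamps& p) {
    return (p.t3 - p.t0) - (p.t2 - p.t1);
}

// Rough offset of the server clock relative to the client clock, assuming
// both legs take about the same time -- which is exactly the assumption
// you cannot verify, so treat this as an estimate, not a measurement.
int64_t clock_offset_ms(const PingTimestamps& p) {
    return ((p.t1 - p.t0) + (p.t2 - p.t3)) / 2;
}

// Local clock read used for the stamps above.
int64_t now_ms() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}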

-------------------------------------------- Ratings are Opinion, not Fact

Why are you doing that? In practice there is so much noise that per-direction timing is very rarely used. Synchronizing machines requires many measurements taken over time.

Games generally use relative time from the moment they start, as observed by whoever is in control. The server says this is simulation step 2143, or that you are 34213 ms into the game, and that's the end of it. They might estimate the round-trip time and use half of it as an approximation, but trying to determine the exact per-direction latency at any given time is a much harder problem.

Latency is constantly changing. Some other process opens an Internet connection and you are fighting over bandwidth. Or the neighbor starts a file download and your upstream connection deprioritizes your traffic very slightly for a while. Few games need more precision than you can find with the round trip time.
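(A hedged sketch of that usual approach, with the names and smoothing factor invented for illustration: keep a smoothed round-trip estimate and derive an approximate server time from the newest server timestamp plus half the RTT. None of this is exact, which is the point made above.)

// Exponential moving average of the measured round-trip time.
struct RttEstimator {
    double smoothed_rtt_ms = 0.0;
    bool   has_sample      = false;

    void add_sample(double rtt_ms) {
        if (!has_sample) {
            smoothed_rtt_ms = rtt_ms;
            has_sample = true;
        } else {
            smoothed_rtt_ms += 0.1 * (rtt_ms - smoothed_rtt_ms);  // 0.1 = smoothing factor
        }
    }

    // Approximate current server time: newest server timestamp plus half
    // the smoothed RTT, plus however long ago that timestamp arrived.
    double estimate_server_time_ms(double last_server_stamp_ms,
                                   double ms_since_stamp_received) const {
        return last_server_stamp_ms + smoothed_rtt_ms * 0.5
                                    + ms_since_stamp_received;
    }
};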

If the only information you have available is time sent and time received according to an unsynchronized clock on either end, then no, you cannot really split the round-trip time into a "send" and a "receive" part.

However, that doesn't actually matter. What you want to know is "how early should I send commands so that they are at the server at server time T," and "When I receive the state of an object stamped at time T, what time should I display it at?" Both of these questions can be answered with relative measurements, rather than trying to absolutely pin down the server/client send/receive latency. And both of those answers typically include some amount of latency compensation (de-jitter buffer) that make an absolute measurement less useful anyway.
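(As a rough illustration of the "relative measurement" idea, with naming and numbers that are mine rather than anything prescribed here: the client maintains one lead, "how far ahead of the last known server tick do I stamp my commands", and one delay, "how far behind the newest snapshot do I render", and both get nudged from feedback rather than derived from an absolute one-way latency.)

#include <cstdint>

struct ClientTiming {
    // How many simulation ticks ahead of the last known server tick we
    // stamp outgoing commands. Grows if the server reports commands
    // arriving late, shrinks if they arrive far too early.
    int32_t command_lead_ticks = 2;

    // De-jitter buffer: how many ticks behind the newest received
    // snapshot we render, so small latency spikes don't cause stalls.
    int32_t render_delay_ticks = 3;

    int32_t command_target_tick(int32_t newest_server_tick) const {
        return newest_server_tick + command_lead_ticks;
    }

    int32_t render_tick(int32_t newest_snapshot_tick) const {
        return newest_snapshot_tick - render_delay_ticks;
    }
};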
enum Bool { True, False, FileNotFound };

You have the time difference from both sides (and the history from previous transmissions of the same timing data).

The clocks might be unsynchronized with each other, but aren't they usually each consistent with themselves (and thus fairly consistent in their difference from each other clock...)?

So you keep a history of the data, his send time versus my receive time, and compare that difference to the one from the next send (and so on).

You can build a statistical model of typical transit time (and both sides can do this), AND you can communicate the result to the other side of the connection (and the difference between those results, the difference of the differences, can tell you more).

The change in transit time can be valuable (by magnitude at least), as when things go downhill they go very downhill (and it's time for some throttling or other compensation), and you can keep rough averages to see how much the transmission times vary and pick an adaptation strategy.
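(A sketch of that bookkeeping, with the structure and thresholds invented for illustration: store the per-packet difference between the sender's stamp and your local receive time. The absolute value is meaningless because the clocks are unrelated, but changes in it show that leg getting slower or faster, and a sudden jump is the "time for throttling" signal.)

#include <cstddef>
#include <cstdint>
#include <deque>
#include <numeric>

struct OneWayTrend {
    // Each entry is local_receive_time_ms - remote_send_time_ms. The value
    // itself is meaningless (the clocks are unrelated), but its changes
    // track how this one leg of the trip is doing.
    std::deque<int64_t> history;
    static constexpr std::size_t kMaxHistory = 64;

    void add(int64_t remote_send_ms, int64_t local_receive_ms) {
        history.push_back(local_receive_ms - remote_send_ms);
        if (history.size() > kMaxHistory) history.pop_front();
    }

    // Positive result: this leg is currently slower than its recent average.
    int64_t deviation_from_average() const {
        if (history.empty()) return 0;
        const int64_t sum = std::accumulate(history.begin(), history.end(),
                                            int64_t{0});
        return history.back() - sum / static_cast<int64_t>(history.size());
    }

    bool looks_congested(int64_t spike_threshold_ms = 50) const {
        return deviation_from_average() > spike_threshold_ms;
    }
};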

-------------------------------------------- Ratings are Opinion, not Fact
The only data you need to keep (and communicate) is whether the data made it too late, much too early, or about right. If the data made it too late, tell the client to send it sooner (you can say by how much.) If the data made it much too early, tell the client to send it later (by about half of the difference.) And, when the data arrives within some target window (say, between 0 and 100 ms before it's needed,) then you tell the client it's doing OK.
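(A minimal sketch of that feedback loop; the 0-100 ms window and the "half the difference" step come from the description above, while the function names and the rest are made up. The server only ever replies "adjust by N ms" or nothing, and the client nudges its send lead accordingly.)

#include <cstdint>

// Server side: classify how a command arrived relative to when it was needed.
// Returns the adjustment (in ms) the client should apply to its send lead;
// 0 means "within the target window, keep doing what you're doing".
int64_t server_lead_adjustment(int64_t arrival_ms, int64_t needed_ms) {
    const int64_t early_by = needed_ms - arrival_ms;
    if (early_by < 0)   return -early_by;        // too late: send sooner by this much
    if (early_by > 100) return -(early_by / 2);  // much too early: send later by about half
    return 0;                                    // 0..100 ms early: about right
}

// Client side: apply the server's hint to the lead used for future commands.
void apply_lead_adjustment(int64_t& send_lead_ms, int64_t adjustment_ms) {
    send_lead_ms += adjustment_ms;
    if (send_lead_ms < 0) send_lead_ms = 0;
}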
enum Bool { True, False, FileNotFound };

Sorry for not replying, but I have been working on the UDP connection code the whole week. After many hours of frustrating debugging, I can finally guarantee 100% reliability for packets that are flagged as "essential" to come through, and for packets that are marked "sequence_critical" to come through and be processed before any other packet that isn't flagged "sequence_independent", even if individual packets or the whole data exchange is completely lost in one or both directions for any duration. Yay! That was just off-topic...
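(For readers following along, the flag semantics described above might look something like this; the flag names come from the post, but the bitmask layout is just a guess.)

#include <cstdint>

// Delivery flags as described above; the bit values are illustrative only.
enum PacketFlags : uint8_t {
    kEssential           = 1 << 0,  // must arrive: resend until acknowledged
    kSequenceCritical    = 1 << 1,  // must be processed before later packets
    kSequenceIndependent = 1 << 2,  // may be processed out of order
};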

Now I guess I won't really know whether I need synced timers or not until I understand game-syncing and lag-compensation techniques...

I can finally guarantee 100% reliability for packets that are flagged as "essential" to come through


Really? What if I take a pair of scissors to your Ethernet cable?
enum Bool { True, False, FileNotFound };

Damn, I knew I had overlooked something!

//TODO: get scissor-proof ethernet cables or train ferrets to fight off scissor-wielding intruders

And, more practically: You cannot be 100% certain. Your API/library needs to expose the possibility of failure to the client. Pretending that re-trying will always work is not going to work, and the client is likely more able to make the right determination of what to do than your library is, because it has more information.
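(In code terms, a sketch of the principle rather than of anyone's actual library: instead of a send() that promises delivery, the interface hands the outcome back to the caller, so the game can decide what to do when the link is really gone.)

#include <cstdint>
#include <functional>
#include <vector>

enum class SendResult { Delivered, TimedOut, ConnectionLost };

// The point is the signature: a "reliable" send still reports failure to
// the caller instead of pretending retries always succeed eventually.
void send_reliable(const std::vector<uint8_t>& payload,
                   std::function<void(SendResult)> on_complete);

// The caller then makes its own decision on failure, e.g.:
//   send_reliable(chat_message, [](SendResult r) {
//       if (r != SendResult::Delivered) { /* drop to lobby, show error, ... */ }
//   });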
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
