
Negative ping? Or how do I sync time with client and server?

Started by April 20, 2016 09:22 PM
17 comments, last by hplus0603 8 years, 6 months ago

You'll probably want to use a monotonic (elapsed-time) clock rather than the wall clock for that reason.

Linux: clock_gettime(CLOCK_MONOTONIC_RAW)
Windows: QueryPerformanceCounter()

The OP is using Java; System.nanoTime() provides this functionality (as of JDK 8, the implementation uses QPC on Windows, CLOCK_MONOTONIC on Linux, and mach_absolute_time on OS X and iOS).
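A minimal sketch of the difference (plain Java, nothing project-specific assumed): nanoTime() values only mean anything as a difference between two readings in the same process, while currentTimeMillis() is wall-clock time that can jump when the system clock is changed.

public static void main(String[] args) throws InterruptedException {
    long startNanos = System.nanoTime();          // monotonic, arbitrary origin
    long startWall  = System.currentTimeMillis(); // wall clock, Unix epoch

    Thread.sleep(250);                            // stand-in for real work

    long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000L;
    long wallDelta     = System.currentTimeMillis() - startWall;
    // elapsedMillis is immune to system clock changes; wallDelta is not.
    System.out.println("monotonic: " + elapsedMillis + " ms, wall: " + wallDelta + " ms");
}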

So how do I send the server's timestamp without losing accuracy due to latency?

It's impossible to avoid inaccuracy from latency. You could try to compensate for it, but depending on your needs, it might be a waste of effort. Your focus should be on making sure that things happen in the right order, that objects move at the right rate, and that all of this happens within an acceptable time difference between clients.

Are you sure System.nanoTime() isn't affected by clock changes? I moved the system clock one day forward and it changed. But I'm not using JDK 8; I'm using Java 1.7. And I tested on a desktop. I'm not sure yet how it behaves on Android, but I know Android does not support Java 8 yet, so this probably will not work.

Game I'm making - GANGFORT, Google Play, iTunes


Are you sure System.nanoTime() isn't affected by clock changes? I moved the system clock one day forward and it changed. But I'm not using JDK 8; I'm using Java 1.7. And I tested on a desktop. I'm not sure yet how it behaves on Android, but I know Android does not support Java 8 yet, so this probably will not work.

The Android reference specifically says System.nanoTime is equivalent to CLOCK_MONOTONIC: http://developer.android.com/reference/java/lang/System.html#nanoTime%28%29
JDK 7's implementation for Mac OS X (and, by extension, iOS) used wall-clock time because the porters for OS X weren't aware of mach_absolute_time. But on a modern version of Windows (anything since Windows NT) or a modern Linux kernel (2.6 or later), JDK 7's implementation should have been fine. Older versions of Windows and Linux lack QPC and CLOCK_MONOTONIC, so other fallbacks have to be used, which probably produce wall-clock time.

I just tested on an Android 5.0 Genymotion emulator. Why is System.nanoTime() so different from currentTimeMillis()? Both are printed at the same time. Date/time is set to auto in the settings.

millis: 1461352093202; nano: 239310141883

Game I'm making - GANGFORT, Google Play, iTunes

Because the timers use different time bases.
enum Bool { True, False, FileNotFound };

I just tested on an Android 5.0 Genymotion emulator. Why is System.nanoTime() so different from currentTimeMillis()? Both are printed at the same time. Date/time is set to auto in the settings.

millis: 1461352093202; nano: 239310141883

On Android, nanoTime is implemented with clock_gettime(CLOCK_MONOTONIC), which provides time elapsed in nanoseconds since boot (on an emulator, probably since you started the emulator).
currentTimeMillis provides the system wall-clock time, in milliseconds, in POSIX time format.
That's why it's not currentTimeNano, but nanoTime. You're not getting the current wall-clock time. You're getting an arbitrary, implementation-defined, elapsed time value.

I'm interested in what desktop you were testing on to see a jump in nanoTime after changing the wall-clock time.


Hmm, so how am I supposed to use it if every device's time base is different (including the server's)? I thought it would be universal and that replacing currentTimeMillis with nanoTime would solve everything.

Just ask the server what time it is, and adjust accordingly:


1. Client requests server time.
2. Client adjusts the value returned by the server, according to the time it took for that request to return.
3. Use this value as a baseline for all timestamps returned by the server, or repeat and average error.
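A minimal sketch of those three steps in Java, assuming a blocking transport; sendTimeRequest() and readServerTimeMillis() are placeholders for whatever networking code you already have:

long requestSent  = System.currentTimeMillis();
sendTimeRequest();                              // 1. ask the server for its time
long serverTime   = readServerTimeMillis();     // 2. server replies with its current clock
long responseSeen = System.currentTimeMillis();

long roundTrip    = responseSeen - requestSent;
long oneWayDelay  = roundTrip / 2;              // assumes symmetric up/down latency

// 3. baseline offset: add this to local time to estimate the server's clock
long clockOffset  = (serverTime + oneWayDelay) - responseSeen;
long serverNow    = System.currentTimeMillis() + clockOffset;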

So it goes like this, right?

1. Client requests the server's timestamp and records requestTime = System.currentTimeMillis().

2. Server sends its timestamp.

3. Client receives the server's timestamp (call it serverTimestamp):

delay = (System.currentTimeMillis() - requestTime) / 2f;

currentServerTime = serverTimestamp + delay;

Am I correct to divide it by two?

4. The server's update packet includes sendTimestamp. Latency for the client = currentServerTime - sendTimestamp.

Could the first packet's delay somehow be too big? Is this a good solution?
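If you keep the clockOffset from the handshake sketch above (the name is mine), step 4 becomes a per-packet estimate; note that the offset has to be re-applied at the moment each update arrives, rather than reusing the currentServerTime value captured during the handshake. The packet.sendTimestamp field below is illustrative:

long estimatedServerNow = System.currentTimeMillis() + clockOffset;
long oneWayLatency      = estimatedServerNow - packet.sendTimestamp; // latency of this update packet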

Game I'm making - GANGFORT, Google Play, iTunes

You could have a look at how e.g. NTP (Network Time Protocol) does it; it aims to synchronize local time with a reliable remote atomic clock.

I think your idea is reasonable, except that network delays are not constant either, so you'd need to do this on a regular basis.
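A rough sketch of what "doing this on a regular basis" could look like: fold each new offset measurement into a stored value instead of replacing it outright, so a single delayed sample doesn't yank the estimated server clock around. The class, field names, and the 0.1 smoothing factor are my own choices, not anything prescribed in this thread.

final class ServerClock {
    private double offsetMillis;
    private boolean initialized;

    // Call this with the offset computed from each periodic time exchange.
    void addMeasurement(long measuredOffsetMillis) {
        if (!initialized) {
            offsetMillis = measuredOffsetMillis;
            initialized = true;
        } else {
            // Move 10% of the way toward the new measurement (simple exponential smoothing).
            offsetMillis += (measuredOffsetMillis - offsetMillis) * 0.1;
        }
    }

    long nowMillis() {
        return System.currentTimeMillis() + (long) offsetMillis;
    }
}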

That's the essential gist of it. You'll want to keep that calculation running as part of your regular protocol headers -- you don't need to "request" anything; each network packet should start with a header that contains a sequence number and timing information (last-seen and my-current timestamps).
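One possible shape for such a header (the field names are mine, not from the thread):

final class PacketHeader {
    int  sequence;           // this packet's sequence number
    int  lastSeenSequence;   // newest sequence number received from the other side
    long sentMillis;         // sender's clock when this packet was written ("my-current")
    long lastSeenSentMillis; // sentMillis copied from that newest received packet ("last-seen")
}

On receipt, localNow - header.lastSeenSentMillis gives a round-trip sample (minus however long the remote side held the packet before echoing it, if you track that), which feeds the offset and jitter estimates without any separate request.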

You can then use statistical methods to estimate the jitter/loss/delay of the link, or you can use some simpler function like "new_jitter = (old_jitter - 0.03) * 0.9 + measured_jitter * 0.1 + 0.03 seconds"
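A direct translation of that simpler function, assuming all values are in seconds and measured jitter comes from comparing expected vs. actual packet arrival times (the class and the 0.05 starting guess are my own):

final class JitterEstimator {
    private double jitterSeconds = 0.05; // starting guess

    void addSample(double measuredJitterSeconds) {
        // Same smoothing update as the formula above.
        jitterSeconds = (jitterSeconds - 0.03) * 0.9 + measuredJitterSeconds * 0.1 + 0.03;
    }

    double current() {
        return jitterSeconds;
    }
}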
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
