
Obtaining low ping

Started by January 04, 2017 07:28 PM
3 comments, last by Kylotan 7 years, 10 months ago

I'm wondering how so many games get their ping so low (30 ms or lower).

I have made a basic localhost setup. The server updates at 10 Hz and the client updates at 60 Hz. When I measure the ping I get something like 60 ms. I guess this is expected, because on average a ping would need to wait 50 ms on the server (1000 ms / 10 Hz / 2) and about 8 ms on the client (1000 ms / 60 Hz / 2) before it is handled.
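That back-of-the-envelope arithmetic can be sketched like this (a hypothetical calculation from the numbers above, not code from any engine):

```python
# A ping arriving at a random moment waits, on average, half a tick interval
# before the next update-loop iteration processes it.

def avg_queue_delay_ms(update_hz: float) -> float:
    """Average wait in ms before the next tick handles a queued message."""
    return (1000.0 / update_hz) / 2.0

server_wait = avg_queue_delay_ms(10)   # 50 ms at 10 Hz
client_wait = avg_queue_delay_ms(60)   # ~8.3 ms at 60 Hz

# On top of near-zero localhost transit time, the measured ping is roughly:
expected_ping = server_wait + client_wait  # ~58.3 ms, close to the ~60 ms observed
```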

Do most games run at much higher update rates, or do they have a special case for handling pings? Should pings bypass the message queues, for example?

The most extreme case I've seen is a game (WoW) reporting a 5 ms ping. I guess this can only be done with a server that doesn't use a discrete timestep but wakes up as soon as a ping is received? Even then, transport would have to be near-instant and the client's update rate would need to be around 100 Hz, am I right?

It depends on what YOU want to measure. If you want only the network transit times, then handle the message immediately upon receipt instead of waiting for your server update interval to process the message queue.
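A minimal sketch of that idea (hypothetical names; assumes UDP and a `PING` prefix as the magic bytes): block on the socket with `select()` and answer pings the moment they arrive, while ordinary game traffic stays queued for the next tick.

```python
import select
import socket

def poll_once(sock: socket.socket, timeout: float):
    """Handle at most one incoming packet, answering pings immediately."""
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return None  # timed out: time to run a simulation tick instead
    data, addr = sock.recvfrom(2048)
    if data.startswith(b"PING"):
        sock.sendto(b"PONG" + data[4:], addr)  # reply now, bypassing the queue
        return "ping"
    return "game"  # a normal game message: queue it for the next tick
```

Measured this way, the ping only includes network transit plus however long the OS takes to wake the process, not the server's tick interval.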

It took me a little while to figure this one out as well. What's happening is that the other games are showing the actual measured network RTT, which is a separate thing from the game network update RTT.

The update RTT is naturally going to be as large as 1000ms divided by the update rate Hz, and with different update rates on each end it will keep cycling in a sawtooth fashion as the update rates on both sides line up and fall out of alignment, along with network latency on top of that.
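As an illustration of that sawtooth (a toy model, not measured data), here is how the update RTT of a probe varies with the phase at which it is sent, assuming a 10 Hz server and 60 Hz client and ignoring network transit:

```python
def update_rtt_ms(send_ms: float, server_hz: float = 10, client_hz: float = 60) -> float:
    """Toy model: RTT of a probe that waits for the next tick on each side."""
    server_period = 1000.0 / server_hz
    client_period = 1000.0 / client_hz
    reply_ms = (send_ms // server_period + 1) * server_period  # next server tick
    done_ms = (reply_ms // client_period + 1) * client_period  # next client tick
    return done_ms - send_ms

# Sampling different send phases within one server tick shows the pattern:
# the RTT ramps down as the probe is sent closer to the next server tick,
# then jumps back up once the tick boundary is crossed.
samples = [round(update_rtt_ms(t), 1) for t in range(0, 100, 10)]
```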

The network RTT is just how long it takes for a packet to travel to and from the other side, outside of the game update packets being sent.

Alright, so the pings they show on screen are just network travel times. Are there any uses for this ping other than showing users whether or not they need to get a shorter wire?

If we're considering a system that uses entity interpolation and client-side prediction (e.g. Valve's approach), wouldn't the ping that incorporates any queues, tick intervals and handling times be more useful for the simulation? For example, to know how far in the past you need to render remote actors, wouldn't you use the sum of the update ping and the server's tick interval?

It's certainly possible to get a round trip time (I'm not going to call it 'ping' because that's misleading) of under 30ms if you're updating the network system often enough. Efficient networking techniques will only kick in when there's actual data to be read so you don't need to be polling all the time to achieve this.

In terms of adjusting your simulation to account for latency, then the full round-trip time including processing is more important, yes.

Generally speaking the value shown onscreen by most games is not what would be used for simulation adjustment; it's there to give instant feedback to the player about the quality of their network connection to the server. That's not just about the length of the wire but about the performance of their home network, the performance of their ISP, and that of any of the routers and switches along the way.

Regarding prediction, my reading of the Valve networking system is that they don't actually extrapolate remote players (except in rare problematic situations), so you're not explicitly doing anything with their latency. The rendering is of a fixed time in the past (0.1 seconds, in their docs) and the remote player is rendered at whatever position they were reported to be in at that time, based on interpolating between 2 past snapshots.
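A minimal sketch of that interpolation scheme (hypothetical names; 1-D positions for brevity, and the fixed 0.1 s view delay from Valve's docs):

```python
INTERP_DELAY = 0.1  # render remote players this many seconds in the past

def interpolated_position(snapshots, now):
    """Interpolate a remote player's position at (now - INTERP_DELAY).

    snapshots: list of (timestamp, position) pairs, oldest first.
    """
    render_time = now - INTERP_DELAY
    # Find the two past snapshots that bracket the render time.
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    # No bracketing pair: hold the newest known position (no extrapolation).
    return snapshots[-1][1]
```

Because the render time sits behind the newest snapshot, the client almost always has two real positions to blend between, so remote players move smoothly without guessing where they will be next.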

