
Yet another interpolation question

Started by rrr333, December 12, 2018 05:32 PM
10 comments, last by hplus0603 5 years, 11 months ago

Hello,

I've set up basic entity interpolation for a multiplayer game and everything works OK. Currently I'm trying to improve it for bad connections with packet loss.
The game uses TCP, so in case of packet loss it looks to the client like delivery stops for ~300 ms and then everything arrives at once.
I observe the same behavior on a wireless internet connection and on a good connection with a packet loss simulator ("clumsy").
I suppose for many realtime multiplayer games this delay is too much and nothing can be done, but movement in my game is somewhat predictable,
and extrapolating the last movement even for longer periods of time (with smooth corrections afterwards where needed) will look better than just freezing.
So my goal is to interpolate when everything is OK and extrapolate in case of packet loss, while still letting the client stay behind in time.

My current scheme involves "startPos", "curPos", "endPos" and "startTime" variables. Every time a position is received, I set startPos = curPos,
endPos = received pos, startTime = now, and every frame I interpolate from startPos to endPos, with t ranging from 0.0 when now == startTime
to 1.0 when now >= startTime + 100 (an arbitrary constant larger than the expected send interval).
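
In code, my current scheme is roughly this (a minimal sketch, assuming the browser's performance.now() clock):

```ts
// A minimal sketch of the scheme described above.
// SEGMENT_MS is the arbitrary 100 ms constant larger than the send interval.
const SEGMENT_MS = 100;

type Vec2 = { x: number; y: number };

let startPos: Vec2 = { x: 0, y: 0 };
let curPos: Vec2 = { x: 0, y: 0 };
let endPos: Vec2 = { x: 0, y: 0 };
let startTime = performance.now();

function onPositionReceived(pos: Vec2): void {
  startPos = { ...curPos }; // the new segment starts where we are drawn now
  endPos = pos;
  startTime = performance.now();
}

function onRenderFrame(): void {
  // t is clamped to 1.0, so the entity freezes at endPos until new data arrives.
  const t = Math.min((performance.now() - startTime) / SEGMENT_MS, 1.0);
  curPos = {
    x: startPos.x + (endPos.x - startPos.x) * t,
    y: startPos.y + (endPos.y - startPos.y) * t,
  };
}
```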

To introduce extrapolation, my initial solution was to stop clamping t at 1.0, but it failed: after extrapolating for some time, old data arrives
and the entity visibly jerks backwards. To fix this I started sending the server time in the form of "server now - first send time",
so the client now has two times: the server time, and a client time in the form of "client now - first receive time".
But this is where my thought process stops and I don't know what to do next.
All schemes I come up with will work in the case of packet loss (which can be treated as a temporary latency increase),
but will be ruined by a permanent latency increase (a network route change? I don't know if that's a good example): the client will either extrapolate too far ahead and suffer
from corrections, or interpolate too far behind and add extra latency.
 
So how should I utilize these two times to achieve the desired result?
 

42 minutes ago, rrr333 said:

The game uses TCP, so in case of packet loss it looks to the client like delivery stops for ~300 ms and then everything arrives at once.

Isn't this why UDP is recommended over TCP for this kind of thing?

https://gafferongames.com/post/udp_vs_tcp/

Quote

Using TCP is the worst possible mistake you can make when developing a multiplayer game! To understand why, you need to see what TCP is actually doing above IP to make everything look so simple.

Quote

The problem is that if we were to send our time critical game data over TCP, whenever a packet is dropped it has to stop and wait for that data to be resent. Yes, even if more recent data arrives, that new data gets put in a queue, and you cannot access it until that lost packet has been retransmitted.

 


I'm forced to use TCP; this is a browser game using WebSockets.

Each simulation or rendering frame answers the question: "given the previous state, at time T, and inputs I, what should the state at time T+dt be?"

This is of course easier when "dt" is always 1, which means you count in "frames" rather than time. If you have to show smooth movement in an animation loop that's not frame locked, though, you end up having to deal with fractional frames / continuous time.

The simplest fix for your problem is to simply throw away display data that affects states "in the past." You know what the time should be, and if you get an update that is old enough, just discard it. This will avoid most of the backwards-jumping, but it will also discard data that might be good -- you might not get all updates, but if the player has turned, you should know that the future extrapolation should head in a different direction.
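
A minimal sketch of that filter (assuming each update carries its server timestamp; the names are made up):

```ts
// Discard any update older than the newest one already applied.
let newestServerTime = -Infinity;

function onUpdateReceived(update: { serverTime: number; x: number; y: number }): void {
  if (update.serverTime <= newestServerTime) return; // stale: ignore it
  newestServerTime = update.serverTime;
  // ...apply the update to the entity as usual...
}
```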

The more complex fix, then, is to store not only what you've received, but also what you've intended to display. The position you display is then a function of both of those. Pick some amount of time to look forward (200 ms?) and make your extrapolation such that you converge the previous extrapolation with the best-known data at that point in the future. Do this all the time -- this becomes your per-frame update.

Let's say, for simplicity, that you store position and velocity (speed and direction) for the last received timestamp. Let's say that you also store position and timestamp for the previously displayed position. You would then calculate:

- How old is the best received position/velocity?
- How old is the last displayed inter/extrapolation?
- Extrapolate the best received position/velocity to 200 ms after "now"
- Set the current display position to the interpolation between that target-200ms position and the previous display position, based on how old the latter is

Now, as long as the clock advances on your local machine at the same rate as on the server, you should be able to display a reasonable extrapolated position. You can even make this do duty for interpolation, by calculating the "last received position/velocity" from the next received position (subtract the previously received position and divide by the time between updates).
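
A minimal sketch of that per-frame update (names, units and the 200 ms constant are for illustration; times in ms on the estimated server clock, velocities in units per second):

```ts
type Vec2 = { x: number; y: number };

const CONVERGE_MS = 200; // how far past "now" we aim the convergence point

interface Snapshot  { pos: Vec2; vel: Vec2; time: number } // best received state
interface Displayed { pos: Vec2; time: number }            // what we drew last frame

function lerp(a: Vec2, b: Vec2, t: number): Vec2 {
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}

// Returns the position to display this frame.
function updateDisplay(best: Snapshot, shown: Displayed, now: number): Vec2 {
  // 1. Extrapolate the best received position/velocity to CONVERGE_MS after now.
  const aheadSec = (now + CONVERGE_MS - best.time) / 1000;
  const target: Vec2 = {
    x: best.pos.x + best.vel.x * aheadSec,
    y: best.pos.y + best.vel.y * aheadSec,
  };
  // 2. Blend from the previously displayed position toward that target,
  //    weighted by how old the previously displayed position is.
  const t = Math.min((now - shown.time) / CONVERGE_MS, 1.0);
  return lerp(shown.pos, target, t);
}
```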

enum Bool { True, False, FileNotFound };

Thank you for the response. Can you elaborate on one more thing?
Imagine a situation where the very first packet gets lost/resent and ends up arriving with a big delay.
The client then starts measuring its time as if there were some unreasonably high ping. Will everything converge and behave properly for the client after some time?
Or should some additional measures be taken to account for this or similar scenarios?

First, you need to estimate the time on the server. When the first packet arrives, you will set an offset; if a lot of packets arrive "way early" compared to that offset, you will adjust the offset over time. This takes care of the "first packet was more delayed than others." (Time management is, in itself, a whole topic in client/server games, and needs to be solved before you can start talking about inter/extrapolation!)

Also, each update sent from server should include "this is the game time that this update is intended for." Trying to infer server time by arrival time at the client will lose too much information.

enum Bool { True, False, FileNotFound };
52 minutes ago, hplus0603 said:

First, you need to estimate the time on the server. When the first packet arrives, you will set an offset; if a lot of packets arrive "way early" compared to that offset, you will adjust the offset over time.

So I need some dynamically adjusted variable which I then add to the timestamps received from the server, and then "compare" this sum to the client time.
Is my understanding correct? And if so, how should this offset be calculated?

A simple way to calculate the offset:

Each packet from the server contains the server time. You calculate the offset as "receivedTime minus yourOwnTime", but you only update your stored value if the new offset is greater than the old one. This means that lag spikes, which temporarily reduce the offset, won't drag the estimate off.
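
A minimal sketch of that rule (all times in milliseconds; the browser clock is assumed):

```ts
// Keep the largest offset ever observed between server time and local time.
let serverTimeOffset = -Infinity;

function onServerTime(serverTime: number): void {
  const offset = serverTime - performance.now(); // receivedTime minus yourOwnTime
  // A delayed packet carries an *older* server time relative to our clock,
  // so it produces a smaller offset and is ignored by the max below.
  if (offset > serverTimeOffset) serverTimeOffset = offset;
}

function estimatedServerNow(): number {
  return performance.now() + serverTimeOffset;
}
```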

enum Bool { True, False, FileNotFound };
On 12/12/2018 at 6:32 PM, rrr333 said:

I'm forced to use TCP; this is a browser game using WebSockets.

Ah, I hadn't encountered this problem before. I found this interesting on the subject; maybe there will be a solution in the future: :)

https://gafferongames.com/post/why_cant_i_send_udp_packets_from_a_browser/

With this TCP limitation I'd be wondering how often, statistically, you get dropped / out-of-order packets on modern connections (especially wireless, as you point out)? Maybe it is hard to predict.

One question I'm wondering about: what type of game is this? You say your movement is 'somewhat predictable'. Are you using a standard 'dumb client / all the players simulated on the server' setup? Are you using, or intending to use, client-side prediction? How important is it that players appear close to their positions in the server simulation? Are you e.g. shooting at players and using their position relative to the aim to determine a hit?

13 hours ago, hplus0603 said:

Also, each update sent from server should include "this is the game time that this update is intended for."

Note that a convenient way of doing this can simply be to include the server tick in the packet (provided you are using a fixed tick rate).
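
For instance (a hypothetical message layout; the tick rate is an assumption):

```ts
// Hypothetical shape of a state update, assuming a fixed server tick rate.
const TICK_RATE = 30; // ticks per second (assumed)

interface EntityUpdate {
  tick: number;      // the server tick this state is valid for
  entityId: number;
  x: number; y: number;
  vx: number; vy: number;
}

// Server time in ms is then simply:
function tickToMs(tick: number): number {
  return tick * (1000 / TICK_RATE);
}
```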

The way I'd think of it, especially as you seem to be suffering from big laggy stutters, is that your client player reps should be independent, but 'chasing' the client's best idea of where the actual server player is. In a UDP scenario it is usually possible to use simple interpolation / extrapolation. In the case of TCP, however, where you might have frequent long periods with no incoming info, it might be an idea to give each client player rep a position and a velocity, so that instead of a sudden visible jump in course after a delayed packet, you gradually change the velocity to move smoothly towards the destination (see the sketch below). This would be more of a client-side-prediction-style 'physics' approach for all players. Of course this might mean spending longer with the client rep further from the server position, but the trade-off might be worth it depending on the game type.
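One possible shape of that 'velocity chasing' step (a sketch only; the constants and names are assumptions, not tested values):

```ts
type Vec2 = { x: number; y: number };

const CONVERGE_S = 0.2; // try to reach the target in ~200 ms (assumed)
const STEER_RATE = 8;   // how fast the velocity may bend toward the target, 1/s (assumed)

// The client rep keeps its own position and velocity and only bends the
// velocity toward the best-known server position, so a late packet causes
// a gradual course change instead of a visible jump.
function chase(rep: { pos: Vec2; vel: Vec2 }, target: Vec2, dt: number): void {
  const desired: Vec2 = {
    x: (target.x - rep.pos.x) / CONVERGE_S,
    y: (target.y - rep.pos.y) / CONVERGE_S,
  };
  const k = Math.min(STEER_RATE * dt, 1);
  rep.vel.x += (desired.x - rep.vel.x) * k;
  rep.vel.y += (desired.y - rep.vel.y) * k;
  rep.pos.x += rep.vel.x * dt;
  rep.pos.y += rep.vel.y * dt;
}
```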

The other solution is to try to pick a game type that 'designs out' the lag issue (e.g. strategy games, which can get by with a large delay before performing a command).

Anyway, just some ideas. Showing a captured video might be useful to help with suggestions.

4 hours ago, lawnjelly said:

With this TCP limitation I'd be wondering how often, statistically, you get dropped / out-of-order packets on modern connections (especially wireless, as you point out)? Maybe it is hard to predict.

I test with a USB LTE modem and get ~1-2% packet loss, both in a big city and 15 km outside it; friends report the same numbers. But of course this is just an observation, not a way to judge all modern wireless connections.

4 hours ago, lawnjelly said:

what type of game is this? You say your movement is 'somewhat predictable'.

2D ships with arcade controls (they can change direction instantly). I don't intend to use prediction because the controls feel OK even with high latency, and the only annoying problem is the occasional stuttering; on a normal connection everything is fine. But it seems that what I want is not too far from actual prediction, just with a different purpose (not hiding latency).

15 hours ago, hplus0603 said:

A simple way to calculate the offset:

Each packet from the server contains the server time. You calculate the offset as "receivedTime minus yourOwnTime", but you only update your stored value if the new offset is greater than the old one. This means that lag spikes, which temporarily reduce the offset, won't drag the estimate off.

OK, thanks, I now see that it should solve the "initial packet delayed" and "latency was higher but then decreased" problems. By the way, does the reverse problem of "latency increased for the rest of the game session" exist, and can it "safely" be ignored? By "safely" I mean: won't a more complex scheme behave differently and "worse" than the most basic (just interpolate) scheme in that scenario?

This topic is closed to new replies.
