
Understanding snapshot interpolation

Started by Phero Constructs October 18, 2018 08:20 PM
3 comments, last by lawnjelly 6 years, 1 month ago

I'm not totally new to networking and have done a few games with basic real-time networking but I would like to do some physics based games.

I'm trying to understand snapshot interpolation https://gafferongames.com/post/snapshot_interpolation/

Am I correct in understanding that, in order for it to work when doing networked physics, I basically have to disable all rigidbodies on the client except for the player (which will calculate its own physics), and then store all GameObject transforms on the server (for everything not sleeping) for every frame until I send everything via an RPC?

And in order to save bandwidth I shouldn't send the state for every frame, but rather just a few, which the client can then interpolate between.

What I'm trying to understand is what exactly a buffer is (a collection of transform states over a specified number of frames?) and how best to send it to the clients (RPC?).

Thanks for any input you can give me!

 

First, RPC is generally not the best way to send updates, because it's typically in-order, and often involves getting an acknowledgement back.

You want neither of those, so "events" or "udp datagrams" or whatever your network library provides are probably much better. If some snapshot arrives out-of-order, just discard the older one.
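To illustrate the "discard the older one" step, here is a minimal sketch, assuming each snapshot carries a 16-bit sequence number stamped by the sender (the 32768 constant is half the sequence space, to handle wraparound):

```cpp
#include <cstdint>

// Returns true if sequence number 'a' is newer than 'b', allowing for wraparound.
bool sequenceNewer(uint16_t a, uint16_t b)
{
    return ((a > b) && (a - b <= 32768)) ||
           ((a < b) && (b - a >  32768));
}

// On receive (latestSequence is the newest sequence number seen so far):
//   if (!sequenceNewer(packetSequence, latestSequence))
//       return;                      // stale or duplicate snapshot, drop it
//   latestSequence = packetSequence; // accept and use this snapshot
```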

Second, yes, sending on some kind of schedule (10 times a second? 30 times a second? whatever works for your game) is common.
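A minimal sketch of that kind of send schedule, assuming a hypothetical sendSnapshotToClients() helper and a 20 Hz rate picked purely as an example:

```cpp
void sendSnapshotToClients();             // hypothetical: defined elsewhere in your game

const float kSendInterval = 1.0f / 20.0f; // e.g. 20 snapshots per second
float g_sendAccumulator = 0.0f;

// Call once per server update with the elapsed time.
void updateSnapshotSender(float dt)
{
    g_sendAccumulator += dt;
    while (g_sendAccumulator >= kSendInterval)
    {
        sendSnapshotToClients();
        g_sendAccumulator -= kSendInterval;
    }
}
```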

Third, what you'll want to do with a physics simulation is to use the networked extrapolation as a "target," whatever that means for your game. For example, you may have a character controller, and make the character controller turn and move towards the extrapolated position, faster the further away it is. Then display the simulated position (which has nice walk cycles and whatnot) rather than the extrapolated position. If the simulated position is too far off, just teleport it.
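A minimal sketch of that "move towards the target, faster the further away it is" idea; the Vec3 type, correction rate, and snap distance are all illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 moveTowardsTarget(const Vec3& simPos, const Vec3& targetPos, float dt)
{
    const float snapDistance   = 5.0f; // beyond this, give up and teleport
    const float correctionRate = 4.0f; // fraction of the error corrected per second

    Vec3 error = { targetPos.x - simPos.x,
                   targetPos.y - simPos.y,
                   targetPos.z - simPos.z };
    float dist = std::sqrt(error.x * error.x + error.y * error.y + error.z * error.z);

    if (dist > snapDistance)
        return targetPos; // too far off: teleport

    // Correction speed scales with the error, so distant characters catch up faster.
    float t = correctionRate * dt;
    if (t > 1.0f)
        t = 1.0f;
    return { simPos.x + error.x * t,
             simPos.y + error.y * t,
             simPos.z + error.z * t };
}
```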

Fourth, if you have a significantly physically based system, with advanced character actions and so forth, you may be better off sending character inputs rather than physics checkpoints, and let the characters run the same actions as they did on the sending machine, just a little bit behind. This will likely look better, as long as you're OK with the delay. Not everything needs to be snapshots and interpolation.
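For the input-replication approach, a minimal sketch of what an input message might carry (field names are illustrative, and a real game would pack this much more tightly):

```cpp
#include <cstdint>

struct PlayerInput
{
    uint32_t tick;    // simulation tick this input applies to
    float    moveX;   // -1..1 strafe
    float    moveY;   // -1..1 forward/back
    float    yaw;     // view direction
    uint8_t  buttons; // bitmask: jump, fire, crouch, ...
};

// Every machine applies the same inputs at the same tick, so the simulations
// stay in step; remote players simply run a little behind real time.
```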

enum Bool { True, False, FileNotFound };

The first thing I will point out, which you seem to be misunderstanding, is that a server doesn't typically deal in 'frames'. A server may not be rendering anything at all; a server usually deals in ticks, often at a fixed tick rate. A client has frames, but these are typically interpolations between ticks. This is the scenario I believe is described in Glenn's article, and he describes it in several of his other articles.

A server maintains the authoritative physics simulation of the world, with all the actors and objects. The server then regularly (e.g. every server tick or two, depending on tick rate etc.) sends out snapshots to clients. The server doesn't need to send a snapshot containing all objects in the world to every client, only the ones that are potentially visible to that client. This is where spatial partitioning schemes such as a PVS (potentially visible set) can come in useful.
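A minimal sketch of per-client snapshot construction on the server; ObjectState, the visibility test, and the "every N ticks" schedule are all placeholders for whatever your game uses:

```cpp
#include <cstdint>
#include <vector>

struct ObjectState
{
    uint32_t id;
    float    position[3];
    float    orientation[4]; // quaternion
};

struct Snapshot
{
    uint32_t tick;                    // the server tick this state was sampled at
    std::vector<ObjectState> objects; // only objects relevant to this client
};

// Called every N server ticks, once per connected client.
Snapshot buildSnapshotForClient(uint32_t serverTick,
                                const std::vector<ObjectState>& world,
                                bool (*isPotentiallyVisible)(const ObjectState&))
{
    Snapshot snap;
    snap.tick = serverTick;
    for (const ObjectState& obj : world)
        if (isPotentiallyVisible(obj)) // e.g. a PVS or distance check for this client
            snap.objects.push_back(obj);
    return snap;
}
```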

The client receives snapshots and interpolates (or extrapolates, if that is your thing) between them to give an approximation of what is going on in the authoritative server game. The client also typically runs its own matching physics simulation for the player, and compares the result with the authoritative player position from the server in the snapshot. If they match, all is good; if the server places the player somewhere different, the client must change its simulation to match the authoritative server simulation. This is called 'client-side prediction'. Note that you can also do physics prediction for all game objects on the client in a similar way, which is a little more complex and more CPU-hungry on clients; however, this is not described in the article, and in most cases the server-only approach he describes can work fine.
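A minimal sketch of the compare-and-correct step of client-side prediction; the stored-state map, tolerance, and resimulateFrom() helper are all illustrative:

```cpp
#include <cstdint>
#include <iterator>
#include <map>

struct PlayerState { float x, y, z; };

std::map<uint32_t, PlayerState> g_predictedStates;                     // keyed by tick
void resimulateFrom(uint32_t tick, const PlayerState& correctedState); // replays stored inputs

void onServerStateReceived(uint32_t tick, const PlayerState& serverState)
{
    auto it = g_predictedStates.find(tick);
    if (it == g_predictedStates.end())
        return; // we never predicted this tick, nothing to compare against

    const PlayerState& predicted = it->second;
    float dx = serverState.x - predicted.x;
    float dy = serverState.y - predicted.y;
    float dz = serverState.z - predicted.z;

    const float tolerance = 0.01f; // metres, tune for your game
    if (dx * dx + dy * dy + dz * dz > tolerance * tolerance)
    {
        // Prediction was wrong: accept the server's state for that tick and
        // re-run the locally stored inputs from tick + 1 up to the present.
        resimulateFrom(tick, serverState);
    }

    // States at or before the acknowledged tick are no longer needed.
    g_predictedStates.erase(g_predictedStates.begin(), std::next(it));
}
```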

The crucial thing is to understand the tick-based scheme of running the game, and to move away from thinking in terms of frames. Frames are only used for giving a smooth view to inferior humans between ticks; the real game is tick based.

This post I wrote recently should help explain the tick system:

 

In my experience a lot of people have trouble initially understanding the tick system, and especially the non-linear progression of time. As animals we are used to seeing time as linear, so we have trouble understanding that a simulation can calculate time steps in a non-linear fashion, then use interpolation to smooth this back to a 'human view'. There can be a 'eureka' moment when people get it. :)
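As a minimal sketch of the idea (tickGame() and render() are placeholders): the simulation advances in fixed steps, and rendering just blends between the last two ticks.

```cpp
void tickGame(float dt);               // advance the simulation by one fixed tick
void render(float interpolationAlpha); // draw, blending between previous and current tick

// Call once per rendered frame with the real elapsed time.
void runGameLoop(float frameDt)
{
    static float accumulator = 0.0f;
    const float tickDt = 1.0f / 60.0f; // fixed tick rate, e.g. 60 ticks per second

    accumulator += frameDt;
    while (accumulator >= tickDt)
    {
        tickGame(tickDt);              // simulation time moves in fixed, "non-linear" steps...
        accumulator -= tickDt;
    }

    // ...while the frame interpolates between ticks to give the smooth 'human view'.
    render(accumulator / tickDt);
}
```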

14 hours ago, Phero Constructs said:

What I'm trying to understand is what exactly a buffer is (a collection of transform states over a specified number of frames?) and how best to send it to the clients (RPC?).

A buffer, in the context used in the article, is a short history of transforms, as you have guessed (over a number of ticks rather than frames; see above). Probably a circular buffer.
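A minimal sketch of such a circular buffer, keyed by tick; the Snapshot contents and the buffer size are placeholders:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

struct Snapshot
{
    uint32_t tick  = 0;
    bool     valid = false;
    // ... transforms for the objects covered by this snapshot ...
};

class SnapshotBuffer
{
public:
    void store(const Snapshot& snap)
    {
        Snapshot& slot = slots[snap.tick % kSize]; // overwrites the oldest entry
        slot = snap;
        slot.valid = true;
    }

    const Snapshot* find(uint32_t tick) const
    {
        const Snapshot& s = slots[tick % kSize];
        return (s.valid && s.tick == tick) ? &s : nullptr;
    }

private:
    static const std::size_t kSize = 64; // roughly a second of history at 60 ticks/s
    std::array<Snapshot, kSize> slots{};
};
```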

Typically you send as UDP packets, and as these are received the info gets added to the circular buffer (and the oldest snapshot discarded). You can either send these as a common snapshot used by all clients, or individually build packets for each client depending on what is needed by that client. In a game where everything is visible to everyone all the time, you might decide to use the former, in a game where there are distinct areas you would be more likely to use the latter. The latter approach cuts down on traffic, and simplifies the job at the client, at the expense of some extra packet construction at the server.
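Once a couple of snapshots are in the buffer, the client picks the two that bracket its (slightly delayed) render time and blends between them. A minimal sketch of that blend, with illustrative names:

```cpp
struct Vec3 { float x, y, z; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// renderTime sits a little in the past (e.g. 100 ms behind the newest snapshot),
// so there is normally a snapshot on either side of it.
Vec3 interpolatePosition(const Vec3& older, float olderTime,
                         const Vec3& newer, float newerTime,
                         float renderTime)
{
    float span = newerTime - olderTime;
    if (span <= 0.0f)
        return newer;

    float t = (renderTime - olderTime) / span;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return lerp(older, newer, t);
}
```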

How you construct the UDP packet is up to you; it is binary data. There are more suggestions in the other articles on Glenn's site.
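As a minimal sketch of what "it is binary data" looks like in practice, here's a tiny byte-buffer writer; real code would also worry about endianness and quantising floats to save bandwidth:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

class PacketWriter
{
public:
    void writeU32(uint32_t value) { append(&value, sizeof(value)); }
    void writeU16(uint16_t value) { append(&value, sizeof(value)); }
    void writeFloat(float value)  { append(&value, sizeof(value)); }

    const uint8_t* data() const { return bytes.data(); }
    std::size_t    size() const { return bytes.size(); }

private:
    void append(const void* src, std::size_t len)
    {
        std::size_t offset = bytes.size();
        bytes.resize(offset + len);
        std::memcpy(bytes.data() + offset, src, len);
    }

    std::vector<uint8_t> bytes;
};

// Example layout: tick number, object count, then id + position per object.
// The resulting data()/size() pair is what you hand to your library's UDP send call.
```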

This topic is closed to new replies.
