
Server authoritative. Client-side prediction & interpolation.

Started November 22, 2022 11:46 AM
4 comments, last by allencook 2 years ago

Hello. As the title states, I'm writing some netcode. On my local clients, I have a “simulation body” and an “interpolation body”. They are separate objects, but clones of each other. Server and client both simulate at 30 Hz. The simulation body is invisible; it responds to player input and performs collision resolution when moving. Between client prediction ticks, I lerp the visible interpolation body (which has the camera attached) to smooth things out. No problem there.
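(For anyone reading along, the overall structure is roughly the sketch below. This is simplified Unity-style C#, and SimulateTick, GatherInput, simulationBody, and interpolationBody are placeholder names, not my exact code.)

// Sketch of a 30 Hz predicted simulation with a separately lerped visual body.
const float SimulationDeltaTime = 1f / 30f;

float frameTimeAccumulator;
Vector3 lastTickPosition, currentTickPosition;

void Update()
{
    frameTimeAccumulator += Time.deltaTime;

    // Run fixed 30 Hz prediction ticks while enough frame time has accumulated.
    while (frameTimeAccumulator >= SimulationDeltaTime)
    {
        lastTickPosition = simulationBody.position;
        SimulateTick(GatherInput());               // move the invisible simulation body, resolve collisions
        currentTickPosition = simulationBody.position;
        frameTimeAccumulator -= SimulationDeltaTime;
    }

    // Between ticks, lerp the visible interpolation body (with the camera) between the last two tick results.
    float alpha = frameTimeAccumulator / SimulationDeltaTime;
    interpolationBody.position = Vector3.Lerp(lastTickPosition, currentTickPosition, alpha);
}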

My clients can pick up guns and shoot actual projectiles out of them. When a player equips a gun, should it be equipped on both the simulation body AND the interpolation view? That way the projectile can be fired accurately from the barrel during the simulation, while the pretty effects and animations play on the interpolated camera view. Kind of a head scratcher. I'd love to know the right way to do this; I look forward to reading the responses.

The other option would be just firing projectiles straight from the barrel of the interpolated view, but I suspect this would lead to more desynchronizations and invalid hits.

Edit:

I now believe I was doing it wrong in the first place. Instead of having two objects, I should simulate a single object each tick, then put it into interpolation mode for the following frame updates until the next tick. I would still like clarification on this.


In my experience it's a bad idea to decouple your local update rate from your network sync rate, because you are then left with the question of how to handle the ‘in-between’ data. If I move 10m forward on frame 1 and then 3m back on frame 2, but I only send an update every 2nd frame, what should I send? If I only say “move forward 7m” then that misses the fact that I did actually reach a more remote point than that, and it understates the total distance travelled by a lot.

However, you absolutely can decouple your update rate from your local render rate. What you render can certainly be interpolated. It's hard to comment on your specific case because “interpolation mode” could mean a lot of different things.

It is absolutely the case that the player will expect whatever crosshair they see to match where the actual simulated bullet will go when they shoot. It is also the case that the player will want the weapon/crosshair to respond instantly when they move the mouse.

Assuming your interactive rendered mesh tracks player input, and the round-trip to the server makes the interpolated body “lag behind” by your full round-trip time (including frame delays), then the easiest way to do this is to display the fired projectile from the rendered body, even though you simulate it from the interpolated body. The challenge then becomes what to do when the visible scene shows something different (hit/miss) compared to the simulated scene. Here's where “rewind/replay” comes into play – see, for example, the “Source networking model" article.
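In other words, something along these lines (a simplified sketch with placeholder names; the visual tracer is purely cosmetic, and the simulated projectile is what actually decides hits):

// Sketch: fire the authoritative projectile from the simulated state,
// but show the tracer/effects from the responsive rendered body.
void OnFirePressed()
{
    // The projectile that decides hits is launched from the (lagged) interpolated body,
    // so the client's prediction and the server agree on where it really started.
    Vector3 simMuzzle = interpolatedBody.TransformPoint(muzzleLocalOffset);
    SpawnSimulatedProjectile(simMuzzle, interpolatedBody.forward);

    // The tracer and muzzle flash come from the rendered body, so what the player
    // sees lines up with the crosshair they are aiming with right now.
    Vector3 renderMuzzle = renderedBody.TransformPoint(muzzleLocalOffset);
    PlayFireEffects(renderMuzzle, renderedBody.forward);
}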

It sounds like you're rendering the lagged (round-trip) body instead. This is unlikely to provide a good user experience, because the crosshair and aim on the screen will be several simulation frames behind the input – at least 3 because of frame hand-off times, and likely more because of the additional transmission latency. 3 frames at 30 Hz is 100 milliseconds, which will be almost unbearable latency for anyone playing an FPS. On a modern 120 Hz display, that's twelve frames of input lag.

enum Bool { True, False, FileNotFound };

@hplus0603 @kylotan My characters are fully predicted. Each local tick on the client, I simulate the player and send the inputs to the server. When the necessary packets come in from the server, I perform a full rollback before the simulation.
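(The rollback step is roughly this shape; ServerState, pendingInputs, and SimulateTick are placeholder names, not my exact code.)

// Sketch of prediction + reconciliation when an authoritative state arrives.
void OnServerStateReceived(ServerState serverState)
{
    // Snap the predicted body back to the authoritative result for that tick.
    simulationBody.position = serverState.Position;

    // Drop inputs the server has already processed, then re-simulate the rest
    // so the client ends up back at its locally predicted present.
    pendingInputs.RemoveAll(i => i.Tick <= serverState.Tick);
    foreach (PlayerInput input in pendingInputs)
        SimulateTick(input);
}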

I'm not sure what you mean by rendering the round-trip body. For the interpolation, I'm simply lerping between the last local simulation position and the current local simulation position of the player. The lerp alpha comes from the fixed network clock: frameTimeAccumulator / simulationDeltaTime.
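(In code that's just something like the two lines below; the clamp is optional, it just avoids overshoot if a tick ever arrives late.)

float alpha = Mathf.Clamp01(frameTimeAccumulator / simulationDeltaTime);
transform.position = Vector3.Lerp(lastSimPosition, currentSimPosition, alpha);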

I got something good working last night. Like I said in my edit above, I ditched the dual-body approach. I now use a single body for both simulation and interpolation. When it's time for the client to simulate, I:

1.) disable “interpolation mode”,

2.) enable the character controller,

3.) snap the body position back to “CurrentPosition” (see above), because the interpolation alpha doesn't actually reach ≥ 1 before the next simulation tick comes around (my interpolation always runs after the simulation). Right before the simulation, “LastPosition” gets set; right after the player moves, “CurrentPosition” gets set.

After the local simulation is done, the object goes back into interpolation mode and lerps between LastPosition and CurrentPosition. Since the clock has just processed a simulation tick, the local lerp alpha is close to 0, so the next render frame the client sees will be very close to “LastPosition”, if not directly on it. I then lerp normally until another simulation tick comes around. Repeat. It's a very seamless and smooth operation.
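(Roughly, the per-tick sequence looks like the sketch below. It's simplified, with placeholder names, and it assumes the character controller is disabled again while in interpolation mode so it doesn't fight the lerp.)

// Sketch of one local simulation tick for the single-body approach.
void RunLocalSimulationTick(PlayerInput input)
{
    interpolationEnabled = false;
    characterController.enabled = true;

    // The render lerp rarely reaches alpha >= 1 before the next tick,
    // so snap back to the last fully simulated position before simulating again.
    transform.position = currentPosition;

    lastPosition = transform.position;                       // "LastPosition": set right before the move
    characterController.Move(ComputeMove(input, SimulationDeltaTime));
    currentPosition = transform.position;                    // "CurrentPosition": set right after the move

    characterController.enabled = false;                     // assumption: controller off while interpolating
    interpolationEnabled = true;                             // Update() now lerps LastPosition -> CurrentPosition
}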

Anyway, this seems to work pretty well from what I've tested so far. I haven't tested with rigidbodies yet, but I don't think it will be an issue. I moved away from the dual-body approach for a couple of annoying reasons.

1.) When I was working with dual bodies, the projectile needed a spawn point (the end of the gun barrel). Equipping guns on the client was really annoying, because I had to replicate the gun on both bodies: the interpolated object with the camera ran the gun animations, while the simulated object needed the spawn point at the end of the barrel for the bullet. Now I can just spawn in one gun, share the same code between client and server, and everything works great.

2.) Engines like Unity don't like having duplicate cameras and audio listeners in the scene (Unity in particular wants exactly one audio listener). When I spawned the local character and duplicated/split it into the simulation/interpolation bodies, each one would have a camera, an audio listener, etc. This led to a bunch of really clusterfucked spaghetti code that tried to automatically strip all visual components (camera, audio listener) from the simulation body and all simulation components (character controller) from the visual body.

I think I drifted a bit off topic from your replies, but there's hardly any information about this stuff online, so I just wanted to share my findings so that other people don't waste time doing it wrong like I did.

