So I'm attempting to implement lag compensation. Here's what I have so far.
When state updates come in from the server, I don't process them immediately. I buffer them and speed the interpolation playback up or slow it down depending on network conditions. On the server end, we save every object's past states for 1 second.
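For concreteness, here's a minimal sketch of the kind of jitter buffer I mean. The `Snapshot` shape, `TICK_RATE`, and `TARGET_BUFFER_MS` values are all made up for illustration, not from any particular engine:

```typescript
// A minimal client-side jitter buffer sketch. All names and values here
// are illustrative assumptions.
interface Snapshot {
  tick: number;                                  // server tick this state belongs to
  states: Map<number, { x: number; y: number }>; // entityId -> position
}

const TICK_RATE = 20;                 // assumed snapshot rate (snapshots/second)
const TICK_MS = 1000 / TICK_RATE;
const TARGET_BUFFER_MS = 2 * TICK_MS; // aim to render ~2 ticks behind the newest snapshot

class SnapshotBuffer {
  private snapshots: Snapshot[] = []; // sorted by tick, oldest first
  private playbackTimeMs = 0;         // current interpolation time, in server-tick ms

  push(snap: Snapshot): void {
    this.snapshots.push(snap);
    this.snapshots.sort((a, b) => a.tick - b.tick);
  }

  // Advance playback, speeding up when the buffer is deep and
  // slowing down when it is running dry.
  advance(dtMs: number): void {
    if (this.snapshots.length === 0) return;
    const newestMs = this.snapshots[this.snapshots.length - 1].tick * TICK_MS;
    if (this.playbackTimeMs === 0) {
      // First snapshot: start the target distance behind the newest state.
      this.playbackTimeMs = Math.max(0, newestMs - TARGET_BUFFER_MS);
    }
    const bufferedMs = newestMs - this.playbackTimeMs;
    const speed = bufferedMs > TARGET_BUFFER_MS * 1.5 ? 1.05  // too deep: catch up
               : bufferedMs < TARGET_BUFFER_MS * 0.5 ? 0.95   // running dry: stretch
               : 1.0;
    this.playbackTimeMs += dtMs * speed;
  }

  // Find the pair of snapshots bracketing the playback time, plus the blend alpha.
  // Note the pair is NOT guaranteed to be adjacent ticks if snapshots were dropped.
  sample(): { from: Snapshot; to: Snapshot; alpha: number } | null {
    for (let i = 0; i + 1 < this.snapshots.length; i++) {
      const a = this.snapshots[i];
      const b = this.snapshots[i + 1];
      const aMs = a.tick * TICK_MS;
      const bMs = b.tick * TICK_MS;
      if (aMs <= this.playbackTimeMs && this.playbackTimeMs <= bMs) {
        return { from: a, to: b, alpha: (this.playbackTimeMs - aMs) / (bMs - aMs) };
      }
    }
    return null; // playback time outside the buffered range
  }
}
```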
I understand how lag compensation works for the most part. Each tick, our clients send the server the remote interpolation time they are currently rendering, so that a player can actually hit the objects he sees in his crosshairs. Suppose our client is interpolating between server ticks 11 & 12 at an interpolation alpha of 0.5. On the server, right before our player's shot is processed, we can temporarily generate new colliders for every object at the state 50% of the way between ticks 11 & 12. That way, if our player hits an object client-side, he will almost certainly hit it on the server as well.
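Here's roughly how I picture that server-side rewind working. `StateHistory`, `EntityState`, and `testHit` are hypothetical names of mine, and I've used 2D circle colliders to keep the hit test short:

```typescript
// Server-side rewind sketch: blend each object's stored states for two ticks
// at the given alpha, test the shot against the result, then discard.
interface EntityState { x: number; y: number; radius: number }

class StateHistory {
  // tick -> (entityId -> state); entries older than ~1 second get pruned elsewhere
  private history = new Map<number, Map<number, EntityState>>();

  record(tick: number, states: Map<number, EntityState>): void {
    this.history.set(tick, states);
  }

  // Reconstruct every entity's collider at the exact blend the client reported.
  rewind(fromTick: number, toTick: number, alpha: number): Map<number, EntityState> {
    const out = new Map<number, EntityState>();
    const from = this.history.get(fromTick);
    const to = this.history.get(toTick);
    if (!from || !to) return out; // history expired: skip compensation for this shot
    for (const [id, a] of from) {
      const b = to.get(id);
      if (!b) continue; // entity despawned between the two ticks
      out.set(id, {
        x: a.x + (b.x - a.x) * alpha,
        y: a.y + (b.y - a.y) * alpha,
        radius: a.radius, // collider size doesn't interpolate here
      });
    }
    return out;
  }
}

// Hypothetical point-vs-circle hit test against the rewound colliders.
function testHit(states: Map<number, EntityState>, hitX: number, hitY: number): number | null {
  for (const [id, s] of states) {
    const dx = s.x - hitX;
    const dy = s.y - hitY;
    if (dx * dx + dy * dy <= s.radius * s.radius) return id; // entityId that was hit
  }
  return null;
}
```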
Here's where I'm confused. The sources I've read say “just send the interpolation time and nothing more”. Well, here's our problem with that. When the server performs lag compensation, it can determine which ticks that interpolation time falls between and derive an alpha from it, but only by assuming the time sits between two adjacent ticks. So what if our client misses a few snapshots? What if he's actually blending ticks 11 & 14 at 50%? That playback time lands between ticks 12 & 13 on the server, so if the client only sends the interpolation time, the client and server end up viewing two different states.
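To make the failure concrete, here's what a naive server-side reconstruction from a bare interpolation time would look like (measuring time in fractional ticks for simplicity; this is purely illustrative):

```typescript
// Naive reconstruction from a bare interpolation time. The server has no way
// to know the client was missing snapshots, so it assumes adjacent ticks.
function reconstructFromTime(interpTimeTicks: number): { fromTick: number; toTick: number; alpha: number } {
  const fromTick = Math.floor(interpTimeTicks);
  const toTick = fromTick + 1; // <-- the assumption that breaks under packet loss
  const alpha = interpTimeTicks - fromTick;
  return { fromTick, toTick, alpha };
}

// The client was actually blending ticks 11 and 14 at alpha 0.5, which is a
// playback time of 11 + 0.5 * (14 - 11) = 12.5 ticks. The server reconstructs
// a blend of ticks 12 & 13 instead:
console.log(reconstructFromTime(12.5)); // { fromTick: 12, toTick: 13, alpha: 0.5 }
```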
I'm going to go ahead and assume that the articles I read aren't doing any sort of blending, and are just comparing against the “nearest tick” state. Given that, instead of sending the actual interpolation time, shouldn't we just send fromStateTick, toStateTick, and the interpolation alpha? In that case the client would be telling the server: “When I shot this bullet on tick 18, I was seeing ticks 11 & 14 at a 0.5 alpha blend.” I can't think of any other way to do it.
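The message shape I'm imagining looks something like this; the field names are my own, and the server-side check is just a basic sanity filter:

```typescript
// The shot message I'm proposing: carry the exact snapshot pair and alpha the
// client was rendering, instead of a single interpolation time.
interface ShotMessage {
  clientTick: number;    // tick the client fired on (18 in the example)
  fromStateTick: number; // older snapshot in the blend (11)
  toStateTick: number;   // newer snapshot in the blend (14)
  alpha: number;         // blend factor between the two (0.5)
  // ...plus aim direction, weapon id, and so on
}

// Basic server-side sanity checks before trusting the reported pair.
function validate(msg: ShotMessage, newestServerTick: number, historyTicks: number): boolean {
  return Number.isInteger(msg.fromStateTick) &&
         Number.isInteger(msg.toStateTick) &&
         msg.fromStateTick < msg.toStateTick &&
         msg.toStateTick <= newestServerTick &&                  // can't have seen the future
         msg.fromStateTick >= newestServerTick - historyTicks && // can't rewind past stored history
         msg.alpha >= 0 && msg.alpha <= 1;
}
```

Since the rewind sketch above already takes fromStateTick and toStateTick explicitly, a non-adjacent pair like 11 & 14 works exactly the same as an adjacent one; the cost over sending a single time is just a couple of extra bytes per shot.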