Consider a setup where your client is predicting ahead (at tick 100), simulating the actions it thinks it will be doing by the time the server (currently at tick 75) receives its input. When the server reaches tick 100 and sends its results, the client goes back in time to make sure it didn't mispredict. If it did, it has to do a full rollback to that tick (100) and repredict all of its inputs up to the current client prediction tick (125 in this contrived example).
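For concreteness, that reconcile-and-repredict step could look something like the sketch below. All the names here (`State`, `Input`, `simulate`, `statesMatch`, `reconcile`) are hypothetical placeholders, not anyone's actual API:

```typescript
// Hypothetical minimal state/input types for illustration.
type State = { x: number };
type Input = { move: number };

// Placeholder for your real simulation step: advances one tick.
function simulate(state: State, input: Input): State {
  return { x: state.x + input.move };
}

function statesMatch(a: State, b: State): boolean {
  return a.x === b.x;
}

// The server confirmed serverTick (e.g. 100); the client has already
// predicted up to currentTick (e.g. 125). Compare, and if the local
// prediction at serverTick was wrong, roll back and repredict forward.
function reconcile(
  predicted: Map<number, State>, // client's stored predictions, keyed by tick
  inputs: Map<number, Input>,    // client's stored inputs, keyed by tick
  serverTick: number,
  serverState: State,            // authoritative state at serverTick
  currentTick: number
): void {
  const local = predicted.get(serverTick);
  if (local && statesMatch(local, serverState)) {
    return; // prediction held, nothing to do
  }
  // Misprediction: adopt the server state, then repredict every tick
  // from serverTick + 1 up to the current prediction tick.
  let state = serverState;
  predicted.set(serverTick, serverState);
  for (let t = serverTick + 1; t <= currentTick; t++) {
    state = simulate(state, inputs.get(t) ?? { move: 0 });
    predicted.set(t, state);
  }
}
```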
Let's assume the client has predicted everything perfectly: it simply wouldn't roll back, and would keep predicting forward. But the client may not always be able to run every individual tick the way the server is guaranteed to. Say your server runs at 60Hz and your client has, up until this point, run at 60FPS. Suddenly the client drops to 30FPS due to a hardware slowdown - rendering load, or maybe it's a PC or phone with thermal throttling. The client is now running at half speed, so it can only do half the ticks.
I've seen setups where the client then simply does 2 (or more) ticks in 1 frame, in hopes that the slowdown was temporary and the device will regain 60FPS performance again. In my experience, however, this is extremely fragile, because more often than not the device never recovers the performance. And when it doesn't, it enters the dreaded “death spiral”, where every frame it accumulates an increasing debt of ticks.
The example here: frame 1 has to do 2 ticks, but it's so slow doing both that 3 ticks have passed by the time frame 2 starts. So frame 2 has to do 3 ticks, frame 3 has to do 4 ticks, and so on.
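The naive catch-up loop that produces this spiral can be sketched like this. The cost numbers are made up for illustration: the spiral condition is simply that one tick costs more wall time than one tick period, so every tick you run puts you further behind:

```typescript
const TICK_MS = 1000 / 60;   // 60 Hz fixed simulation step
const TICK_COST_MS = 20;     // assumed: one tick now costs MORE than one tick period
const RENDER_COST_MS = 15;   // assumed: rendering cost per frame

let accumulatorMs = 0;
const ticksThisFrame: number[] = [];

// Naive "run every owed tick" loop, simulated over 5 frames.
let frameTimeMs = TICK_MS; // the first frame starts out healthy
for (let frame = 0; frame < 5; frame++) {
  accumulatorMs += frameTimeMs;
  let ticks = 0;
  while (accumulatorMs >= TICK_MS) {
    accumulatorMs -= TICK_MS;
    ticks++; // doTick() would go here
  }
  ticksThisFrame.push(ticks);
  // The next frame lasts however long this one took: the more ticks we
  // ran, the longer the frame, so the more ticks we owe next time.
  frameTimeMs = RENDER_COST_MS + ticks * TICK_COST_MS;
}
console.log(ticksThisFrame); // tick debt grows frame over frame
```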
So I was thinking - and even read somewhere - that maybe the client doesn't have to predict every tick. Yes, that's ideal, but you should still be able to get the same results as (or very close to) what the server will say is reality, and hopefully therefore not have to roll back.
The example here: if the device slows down and 2 ticks pass in 1 frame, you just predict 1 step with a delta time of 2 ticks. You may be left with a “fractional tick” that you can accumulate until it reaches a full tick. Glenn Fiedler has a blog post that mentions this approach.
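A minimal sketch of that accumulator, assuming a `predictStep` callback that can accept a variable delta (the signature is hypothetical; Fiedler's “Fix Your Timestep!” article covers the underlying accumulator pattern):

```typescript
const TICK_DT = 1 / 60; // fixed server tick length, in seconds

let tickFraction = 0; // leftover partial tick carried between frames

// Instead of running N fixed steps, run ONE prediction step whose delta
// covers all whole ticks that elapsed this frame, and bank the remainder.
function predictFrame(
  frameDt: number,
  predictStep: (dt: number, ticksCovered: number) => void
): void {
  tickFraction += frameDt / TICK_DT;      // how many ticks this frame is worth
  const whole = Math.floor(tickFraction); // whole ticks we can cover now
  if (whole > 0) {
    predictStep(whole * TICK_DT, whole);  // one step with a lumped delta
    tickFraction -= whole;                // keep the fractional remainder
  }
}
```

At 30FPS against a 60Hz server, this calls `predictStep` once per frame with a delta worth 2 ticks, instead of running two full simulation steps.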
- Is this a sound approach?
- Does anyone here have experience with this and know of "gotchas" to look out for?
- This last question is a bit technical, so I can best explain it with an example.
A player is moving forward and predicts this accurately even with a slowdown. But then something happens: the world has changed or an event occurred (the floor changed, or the player was hit/damaged) that inflicts a movement “slow” on the player. You can't predict that, so you do a rollback. However, your system says: “damage was taken at tick 100, so on tick 101 you must move slowly.”
Given that we are “lumping” ticks together when a device slows down, this logic is no longer valid. The client may never hit tick 101. If the client rolled back to tick 100, and in the next frame it has skipped ahead to tick 102 or 103 and beyond, it can't apply that slow properly, can it? Anyone have any smart solutions to this? I'm thinking instead of doing “if (tick == damagetick + 1) { slow(); }”,
do “if (tick >= damagetick + 1 && !triggered) { slow(); triggered = true; }”.
However, I feel this is fragile and inaccurate: it will just lead to further mispredictions after you've already mispredicted the damage being taken or whatever event occurs. Please let me know if anyone has tricks or resources that deal with something like this!
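One pattern that handles this more precisely than a `triggered` flag: treat each lumped step as covering a *range* of ticks, and split the step at any tick where a scheduled effect begins, so the effect influences exactly the ticks it would on the server. This is just a sketch of the idea; all names (`Effect`, `advanceLumped`, `step`) are hypothetical:

```typescript
type Effect = { startTick: number; apply: () => void };

// Advance from lastTick (exclusive) to targetTick (inclusive) in one frame.
// step(from, to) simulates ticks from+1..to as one lumped delta. Whenever a
// scheduled effect starts inside the range, we cut the lumped step there,
// apply the effect, and continue - so a slow that starts at tick 101 affects
// ticks 101..targetTick even if the client never "lands on" tick 101.
function advanceLumped(
  lastTick: number,
  targetTick: number,
  effects: Effect[],
  step: (fromTick: number, toTick: number) => void
): void {
  const starts = effects
    .filter(e => e.startTick > lastTick && e.startTick <= targetTick)
    .sort((a, b) => a.startTick - b.startTick);

  let from = lastTick;
  for (const e of starts) {
    if (e.startTick - 1 > from) {
      step(from, e.startTick - 1); // simulate up to just before the effect
      from = e.startTick - 1;
    }
    e.apply(); // effect becomes active at exactly its scheduled tick
  }
  if (targetTick > from) {
    step(from, targetTick); // remainder of the lumped step
  }
}
```

The `triggered` flag applies the slow late (for the wrong number of ticks), whereas splitting the step keeps the client's lumped simulation tick-for-tick consistent with the server's per-tick one, at the cost of occasionally running two smaller steps in a frame instead of one.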