
Authoritative Setup: Client Prediction 100% Wrong

Started by April 08, 2017 10:08 AM
23 comments, last by hplus0603 7 years, 7 months ago

Hello GameDev Community,

this is the first time I'm reaching out to you for help, but I really need it, as I'm completely stuck. Not a single tutorial (GafferGames, MPDev, ...) was able to explain to me what I'm doing wrong.

The issue I currently face is quite obvious: I know for sure, beforehand, that my client-side prediction will be 100% wrong. But I need it to simulate zero latency for the user, and in the end it is wrong and I have to correct the user. So yeah, I'm completely stuck. Something in my network design must be really bad for it not to fit client-side prediction properly.

In the game engine I use (and almost every other game engine), movement is expressed as speed * time. Let's say the speed is something like 5 m/s; entities then move by an amount that depends on both speed and elapsed time. The time is very important, as this is also a real-time 3D game.
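Just to make clear what I mean, here is a tiny sketch (the names are made up, this is not my actual engine code) of how such movement is integrated each frame:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Advance an entity by speed (m/s) * elapsed time (s) along a unit direction.
// The distance covered depends entirely on how much time passed, not on frame count.
Vec3 Integrate(Vec3 position, Vec3 direction, float speedMetersPerSecond, float dtSeconds) {
    position.x += direction.x * speedMetersPerSecond * dtSeconds;
    position.y += direction.y * speedMetersPerSecond * dtSeconds;
    position.z += direction.z * speedMetersPerSecond * dtSeconds;
    return position;
}

int main() {
    Vec3 pos{0.0f, 0.0f, 0.0f};
    Vec3 forward{0.0f, 0.0f, 1.0f};
    // Two frames of 16 ms and 34 ms at 5 m/s: 5 * 0.050 = 0.250 m in total.
    pos = Integrate(pos, forward, 5.0f, 0.016f);
    pos = Integrate(pos, forward, 5.0f, 0.034f);
    std::printf("z = %.3f m\n", pos.z);
    return 0;
}
```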

I've read most of the good tutorials out there on authoritative networking, but I can't wrap my head around client-side prediction and server reconciliation. Please don't misunderstand me: I know for sure that this is needed to simulate zero latency and to hide the RTT between an input and the change it causes. But:

Every input that triggers an action which depends on the previous one (which is most of them) will be predicted wrongly.

Let me explain this with an example:

The server runs at 10 ticks per second, so there is a 100 ms delay between snapshots. Let's say the client currently interpolates between the states at 500 ms and 600 ms. The RTT is 100 ms, so the one-way latency is 50 ms. While the client interpolates between the 500 and 600 ms snapshots, the server is at 650 ms.

Now the client sends the "W" button press that triggers the move-forward action. This arrives at the server at 700 ms, while the client already started running locally at server time 650 ms. The server sends back the ACK of this action with server time 700 ms, as that is when the server marks the character as running. Now the client can compare the 700 ms world state he will receive against his own 650 ms snapshot of his position to know whether the predicted state was correct.
Next, at 700 ms, the client sends the "SPRINT" action to the server, which triggers double speed. This time the input packet doesn't take 50 ms to reach the server; it takes 100 ms, so the server will send an ACK with 800 ms. To summarize:

Client -> 650 ms MoveForward -> Server -> ACK @ 700 ms (ping 50 ms)
Client -> 700 ms Sprint -> Server -> ACK @ 800 ms (ping 100 ms)

So in the client's predicted state we run for 50 ms and then sprint.
But in the server's state we run for 100 ms and then sprint.

This example is exaggerated. In a real-world scenario it would be more like the ping is 50 ms, then 55 ms, then 48 ms. The latency jumps around every time and is never the same. Knowing this, I know for sure that my predicted states will be 100% wrong, and I will have to correct them.

So what am I missing here? Almost every authoritative game has client-side prediction, as the latency would otherwise be too noticeable. But it can't be that they always correct the predicted state; in fact, a good prediction system needs the least amount of correction.

So please help me see what I'm missing x)

For such games, I've read that the server can have a window of time during which it is willing to apply inputs "in the past". The server must have some level of buffering, or the ability to unwind, so that it can replay the simulation as if that input had arrived at the estimated time of sending. Subsequent updates will correct the other clients, who will need to interpolate these updates over time to try to hide any jumps caused by this behaviour.
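A rough sketch of what I mean (the names, tick rate, and window size are all invented for illustration, not taken from any particular engine): the server keeps a short history of states, and when an input stamped in the recent past arrives, it rewinds to that point and re-simulates forward.

```cpp
#include <cstdint>
#include <deque>

struct PlayerState { float posZ = 0.0f; float speed = 0.0f; };
struct Snapshot    { uint32_t tick; PlayerState state; };
struct Input       { uint32_t clientTick; float newSpeed; };   // tick the client claims it acted on

class RewindServer {
public:
    static constexpr uint32_t kMaxRewindTicks = 10;   // how far "in the past" we accept inputs
    static constexpr float    kTickSeconds    = 0.1f; // 10 ticks per second

    // Apply an input that may have arrived late. Returns false if it is too old to honour.
    bool ApplyInput(const Input& input, uint32_t currentTick) {
        if (input.clientTick + kMaxRewindTicks < currentTick) return false;  // outside window: discard
        // Drop history at or after the input's tick, then replay forward from the state before it.
        while (!history_.empty() && history_.back().tick >= input.clientTick)
            history_.pop_back();
        PlayerState s = history_.empty() ? PlayerState{} : history_.back().state;
        s.speed = input.newSpeed;
        for (uint32_t t = input.clientTick; t <= currentTick; ++t) {
            s.posZ += s.speed * kTickSeconds;          // re-simulate each tick up to "now"
            history_.push_back({t, s});
        }
        return true;
    }

private:
    std::deque<Snapshot> history_;
};

int main() {
    RewindServer server;
    server.ApplyInput({5, 5.0f}, 7);    // walk pressed at tick 5, arrives at tick 7: rewound and replayed
    server.ApplyInput({6, 10.0f}, 8);   // sprint pressed at tick 6, arrives at tick 8
    return 0;
}
```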

Note that you can still get undesirable behaviour, which is a fundamental "speed of light" constraint of trying to run a real-time simulation with non-trivial latency, but at least now there is a potential happy path where the server and client can agree!

For more details, I'll refer you to the forum FAQ, in particular Q12, Q16 and Q27 seem most relevant.


Wouldn't this approach destroy the idea of an authoritative server? I mean, a client could force a high latency to gain the advantage of seeing the past and reacting to it, with the server rewinding for them. Wouldn't that be a possible attack vector for cheaters?

Yes, it does introduce a risk of that. You have to balance that risk against having an unresponsive experience for everyone. I'm not aware of any AAA game that has zero cheating, so I infer that the state of the art, even for those with the most resources, cannot fully solve this problem.

The server is still authoritative, as it can choose to discard the input if it is "too late", or if you heuristically infer that it might be a hack (e.g. this client has a typical ping of 40 ms, but this "shoot gun" event arrived 150 ms late).
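For example, a crude sketch of such a heuristic (the thresholds and smoothing factor are made-up values, just to show the idea):

```cpp
#include <cstdint>
#include <cstdio>

// Tracks a smoothed estimate of a client's typical input delay (ms) and flags outliers.
class LatenessFilter {
public:
    // Returns true if the input should be accepted.
    bool Accept(uint32_t claimedSendTimeMs, uint32_t serverReceiveTimeMs) {
        const float observedDelay = float(serverReceiveTimeMs - claimedSendTimeMs);
        // Reject inputs claiming to be far older than this client's usual delay,
        // e.g. a "shoot gun" event that shows up 150 ms late from a 40 ms client.
        const bool suspicious = observedDelay > typicalDelayMs_ + kSlackMs;
        if (!suspicious) {
            // Exponentially smoothed estimate of the client's normal delay.
            typicalDelayMs_ = 0.9f * typicalDelayMs_ + 0.1f * observedDelay;
        }
        return !suspicious;
    }

private:
    static constexpr float kSlackMs = 60.0f;  // tolerance for ordinary jitter (assumed value)
    float typicalDelayMs_ = 50.0f;            // starting guess, refined as inputs arrive
};

int main() {
    LatenessFilter filter;
    std::printf("%d\n", filter.Accept(1000, 1045));  // 45 ms late: accepted
    std::printf("%d\n", filter.Accept(2000, 2160));  // 160 ms late: rejected as suspicious
    return 0;
}
```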

OK, I can understand the concept: if a packet arrives really late, I can still give the client a chance by accepting that packet and rewinding the server a bit.

But how would you take care of packets being sent to the server with slightly different delays? Let's say:

Client -> MoveForward -> 500 ms (ping 50 ms)
Server -> MoveForward ACK -> 550 ms
Client -> Sprint -> 550 ms (ping 55 ms)
Server -> Sprint ACK -> 605 ms

So the client thinks he is walking for 50 ms and then sprinting, but on the server he is walking for 55 ms and then sprinting? I mean, I could always slowly interpolate back to the correct state, but please don't misunderstand me: why should I predict something that I know will be wrong? x)

But how would you take care of packets being sent to the server with slightly different delays?

That's probably less common of a situation than you might think. Round-trip latency tends to be pretty constant over short timescales.
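That said, there will always be small residual differences. One common compromise (this is just a sketch; the threshold and blend rate are assumptions, not from any particular game) is to blend tiny errors away over a few frames and only snap when the error is large:

```cpp
#include <cmath>
#include <cstdio>

// Reconcile a predicted position against the authoritative server position.
// Tiny errors are blended away smoothly; big errors are snapped immediately.
float Reconcile(float predicted, float authoritative, float dtSeconds) {
    const float kSnapThreshold = 1.0f;    // metres: beyond this, trust the server outright (assumed)
    const float kBlendPerSecond = 10.0f;  // how quickly small errors are pulled in (assumed)

    const float error = authoritative - predicted;
    if (std::fabs(error) > kSnapThreshold)
        return authoritative;             // gross misprediction: snap
    // Otherwise nudge the prediction toward the server by a fraction each frame.
    return predicted + error * std::fmin(1.0f, kBlendPerSecond * dtSeconds);
}

int main() {
    // A 5 cm disagreement (your "55 ms vs 50 ms of walking" case) is corrected invisibly.
    std::printf("small error -> %.3f\n", Reconcile(2.50f, 2.55f, 0.016f));
    // A 3 m disagreement (blocked movement, teleport) is snapped.
    std::printf("large error -> %.3f\n", Reconcile(2.50f, 5.50f, 0.016f));
    return 0;
}
```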

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

So you would say that if I compare the snapshot from the server with the snapshot from the client, and the difference delta is less than some threshold sigma, I ignore the difference and treat my prediction as correct? Or do I always correct and live with a really terrible prediction system?

I mean please do not misunderstand me.

But when you look at games like Rocket League or Smite, they have a fast-paced 3D environment. How do they do such things?

I believe the client and server try to synchronize on a shared understanding of time, at least in terms of "number of net syncs", so that packets can be "timestamped"; e.g. if you net sync 10 times a second, then each sync is 100 ms. I can't recall the details (check the forum FAQ), but if memory serves it involves each peer echoing back the other's net sync counter to stay in sync.
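Something along these lines, as a rough sketch (the field names are invented and I don't remember the exact scheme): the client stamps every input with its local tick, the server echoes that tick back along with its own, and the client uses the echo to keep itself running just far enough ahead of the server.

```cpp
#include <cstdint>
#include <cstdio>

// Both peers count ticks at the same rate (e.g. 10 net syncs per second = 100 ms per tick).
// The client stamps every input with its local tick; the server echoes ticks back so the
// client can estimate how far ahead of the server it should run.
struct TickSync {
    // Called when the server echoes back the client tick it just processed,
    // together with the server's own tick at that moment.
    void OnServerEcho(uint32_t echoedClientTick, uint32_t serverTick, uint32_t clientTickNow) {
        const uint32_t rttTicks = clientTickNow - echoedClientTick;   // round trip measured in ticks
        // Aim to have inputs arrive just before the server simulates them:
        // server tick + half the round trip + one tick of safety margin.
        targetClientTick = serverTick + rttTicks / 2 + 1;
    }

    // How far the client should nudge its clock (positive = speed up, negative = slow down).
    int32_t TickAdjustment(uint32_t clientTickNow) const {
        return int32_t(targetClientTick) - int32_t(clientTickNow);
    }

    uint32_t targetClientTick = 0;
};

int main() {
    TickSync sync;
    // The client sent an input at tick 50; the echo comes back when the client is at tick 51,
    // and the server reports it was at tick 48 when it processed that input.
    sync.OnServerEcho(50, 48, 51);
    std::printf("adjust by %d ticks\n", (int)sync.TickAdjustment(51));  // negative: slow down a little
    return 0;
}
```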

I'm not sure I understand your notation; for me the interaction, starting at time T, could be (assuming a constant latency of 50 ms):

Client A @ T: Start walking @ T
Client B @ T: Player A idle
Server @ T: Player A idle

Client A @ T + 50ms: Start running
Client B @ T + 50ms: Player A idle
Server @ T + 50ms: Receive walk message, rewind simulation to T, apply walking action to A, broadcast state at next net sync time (assume instantaneous)

Client A @ T + 100ms: Stop moving
Client B @ T + 100ms: Player A has been moving since time T; the position at T + 50ms is included. Animation starts and the client starts interpolating to reduce perceived warping (Player A will move "fast" to catch up)
Server @ T + 100ms: Receive run, rewind to T + 50 ms, apply run, broadcast state next net sync time

Client A @ T + 150ms: Player A idle, final position confirmed by server
Client B @ T + 150ms: Player A has been running since time T + 50ms; the position at T + 100ms is included in the packet.
Server @ T + 150ms: Receive stop, rewind to T + 100ms, apply stop, broadcast state at next net sync time

You can imagine now that at T + 200 ms, client B is made aware that player A has stopped at a given location.

The scheme is more or less the same even with each link having its own latency that varies, but that is harder to represent clearly by hand - hopefully I've not made any mistakes as it is!
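And for the player's own character, the companion technique on the client (as I understand it; the names here are invented) is to keep every input the server hasn't acknowledged yet and replay those inputs on top of each authoritative state. That way a 50 ms vs 55 ms discrepancy only shifts where the replay starts; it never accumulates into a permanently wrong position.

```cpp
#include <cstdint>
#include <cstdio>
#include <deque>

struct Input { uint32_t sequence; float speed; };   // movement speed requested this tick

// Client-side prediction with reconciliation: predict locally, then when the server's
// authoritative state arrives, rewind to it and replay every input the server has not
// processed yet.
class PredictedPlayer {
public:
    static constexpr float kTickSeconds = 0.1f;

    void ApplyLocalInput(float speed) {
        pending_.push_back({nextSequence_++, speed});
        position_ += speed * kTickSeconds;           // predict immediately, no waiting for the server
    }

    void OnServerState(float authoritativePos, uint32_t lastProcessedSequence) {
        // Drop inputs the server has already applied.
        while (!pending_.empty() && pending_.front().sequence <= lastProcessedSequence)
            pending_.pop_front();
        // Rewind to the server's position and replay the remaining inputs.
        position_ = authoritativePos;
        for (const Input& in : pending_)
            position_ += in.speed * kTickSeconds;
    }

    float Position() const { return position_; }

private:
    std::deque<Input> pending_;
    uint32_t nextSequence_ = 0;
    float position_ = 0.0f;
};

int main() {
    PredictedPlayer player;
    player.ApplyLocalInput(5.0f);    // walk
    player.ApplyLocalInput(10.0f);   // sprint
    // The server acknowledges the walk only, and applied it slightly later than predicted,
    // so its position differs a little. The sprint input is simply replayed on top.
    player.OnServerState(0.45f, 0);
    std::printf("reconciled position = %.2f\n", player.Position());  // 0.45 + 1.00
    return 0;
}
```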

Rewinding on the server to the synced timestamp sounds wonderful. I did think about this before I tried to go fully authoritative, but it could be so easily hijacked by a hacker. As an example, have two characters race a given distance.
The hacker could now hijack the timestamp of the packet and, instead of saying it was sent at 500, change it to 450. Now the server rewinds to 450 and lets the hacker start running 50 ms earlier than the other player. The hacker will always win.
In games like Rocket League or Smite, where you have a grand prize of 100k, I simply don't believe their system would be that easy to hijack.
I totally understand that the knowledge of how to properly solve such a problem isn't as widespread as the standard authoritative concepts, but there has to be a website somewhere that demonstrates how it is done. That is the internet x)
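My best guess is that the server never fully trusts the claimed timestamp and instead clamps it to a window derived from its own receive time and the measured latency, so lying can only buy a few milliseconds instead of 50. Something like this sketch (the numbers are just my guess, not from any real game):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// The server clamps any client-claimed timestamp into a window derived from its own clock
// and the client's measured one-way delay. Lying about the send time then gains at most
// the jitter allowance, not an arbitrary head start.
uint32_t ClampClaimedTimeMs(uint32_t claimedMs, uint32_t serverReceiveMs, uint32_t measuredOneWayMs) {
    const uint32_t jitterAllowanceMs = 20;                      // assumed tolerance
    const uint32_t earliest = serverReceiveMs - measuredOneWayMs - jitterAllowanceMs;
    const uint32_t latest   = serverReceiveMs;                  // an input cannot come from the future
    return std::min(std::max(claimedMs, earliest), latest);
}

int main() {
    // Honest client: claims 500 ms, received at 550 ms, measured one-way delay 50 ms.
    std::printf("%u\n", ClampClaimedTimeMs(500, 550, 50));  // 500: accepted as claimed
    // Cheater: claims 450 ms to get a 50 ms head start; clamped to 480 ms at most.
    std::printf("%u\n", ClampClaimedTimeMs(450, 550, 50));  // 480
    return 0;
}
```

But I have no idea if that is how the big games actually do it.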

BTW, thank you for spending your time helping; I'm just so frustrated that I can't find a solution that I may sound rude x) sorry if so

This topic is closed to new replies.
