interpolationTime = currentTime - INTERP_BACK_TIME //for my game 0.1s for INTERP_BACK_TIME
state1, state2 = getTwoInterpolationStates(interpolationTime) //get two states to interpolate between
length = state2.Time - state1.Time
t = 0.0
if (length > 0.0001) t = (interpolationTime - state1.Time) / length
entityPos = state1.pos
entityPos.lerp(state2.pos, t)
How to deal with inconsistent game state in entity interpolation
Have you read some of the resources pointed to in Question 12 of the Forum FAQ?
Thanks for pointing me to those resources. I read about the Q3 network model, but I did not find anything specific enough to cover my problem.
4. The server knows at this point that the action (`MoveAction(1, 0)`) has ended and lasted 5 seconds on the client (but on the server it lasted 5 seconds + 150ms since the `MoveAction(0, 0)` is received 150ms late)
That makes no sense. By your steps 1 and 3, the server received the initial command at time s and the second command at time s+5. Hence, the total time that the server was moving anything was (s+5)-s=5. Not 5.15.
-- general brain dump on game networking --
Yes, there's latency between the client and the server. This doesn't mean that inputs are "late" to the server; it means that the client and server are simulating different points in time! A lot of what you do in networking a game is trying to hide that.
For instance, since you are measuring latency/RTT constantly, you know that a message received on the server from a client with 150ms is about something that happened 150ms ago. It thus makes no sense to apply that message at the current time point on the server. What you can do instead is apply that message _in the past_, e.g. by keeping enough history for the server version of the avatar to roll it back 150ms, apply the input, and then roll it forward 150ms accounting for the effects of the input.
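To make that rollback idea concrete, here's a minimal sketch in C++. All the names (`Snapshot`, `ServerAvatar`, `applyLateInput`) are made up for illustration, movement is 1D, and the history granularity is deliberately crude; a real implementation would keep per-tick snapshots and re-run the actual simulation when rolling forward.

```cpp
#include <cassert>
#include <deque>

// Illustrative sketch of the rewind-and-replay idea: roll the server's
// version of the avatar back to when the input actually happened, apply
// the input there, then roll forward to the present.
struct Snapshot { double time; double posX; };

struct ServerAvatar {
    std::deque<Snapshot> history;   // recent authoritative states
    double posX = 0.0;
    double velX = 0.0;

    void record(double now) { history.push_back({now, posX}); }

    // Apply an input that actually happened `latency` seconds ago.
    void applyLateInput(double now, double latency, double newVelX) {
        double inputTime = now - latency;
        // 1) roll back to the most recent snapshot at or before inputTime
        while (!history.empty() && history.back().time > inputTime)
            history.pop_back();
        if (!history.empty()) posX = history.back().posX;
        // 2) apply the input at that point in the past
        velX = newVelX;
        // 3) roll forward to the present, including the input's effects
        posX += velX * latency;
        record(now);
    }
};
```

The key design point is step 3: the server does not just apply the late input at "now", it also simulates the effect the input would already have had during the latency window.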
Likewise on the client, you know the latency. You know that a command from the server is 150ms "late." Hence, a command from the server to play an animation that lasts 1 second must instead last only 850 ms in order for the animation to complete on the client at roughly the same time as it would complete on the server.
Now you can extrapolate that further: since the client knows there's 150ms latency currently, if the client runs a command locally that triggers an animation and has to tell the server, the client knows that the server won't get it for 150ms. An option then is for the local client to play that 1s animation over 1.15 seconds. Then the server broadcasts that message to other clients; since those clients are accounting for their own latency to the server, the animation ends up finishing at approximately the same time on all clients.
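The duration adjustments described above are simple arithmetic; a small sketch (function names are my own invention, times in seconds):

```cpp
#include <cassert>

// A command from the server arrived `latency` seconds late: shorten the
// animation so it finishes at roughly the same wall-clock moment as on
// the server (1.0s - 0.15s = 0.85s in the example above).
double remoteAnimDuration(double fullDuration, double latency) {
    double d = fullDuration - latency;
    return d > 0.0 ? d : 0.0;   // clamp: a very late command just snaps to done
}

// A locally-triggered animation that the server must rebroadcast: stretch
// it so it ends in sync with the other clients' copies (1.0s + 0.15s = 1.15s).
double localAnimDuration(double fullDuration, double latency) {
    return fullDuration + latency;
}
```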
A related version is for a client issuing a command to do nothing immediately, other than play some kind of feedback animation/sound for the user. The actual game-affecting animation won't start until the server confirms the action (and at the same time as sending the confirmation, the server also broadcasts the event to all the other players). This approach is more useful in games where there is a real chance that the server will reject an action.
Point being, you can't synchronize time between all the clients and the server. They're running in entirely different points in time in relation to each other. You can expand/contract time where necessary. You also really really need to work as much as possible in terms of _durations_ rather than fixed points in time, since those points in time mean entirely different things on different machines participating in the game.
It's far trickier with actions that need to be more immediate. Movement, for instance. On your local client, your avatar should probably respond to movement immediately. However, other clients won't see that movement until both your command is received by the server and the other clients receive the rebroadcast command.
So here we see that clients are partially working with three different timelines:
1) Locally-controlled entities, like the player's own avatar.
2) Server-controlled entities, like NPCs, which are driven by events that are around RTT/2 in the past.
3) Other-client-controlled entities, like other players' avatars, which are around aRTT/2 + bRTT/2 in the past (aRTT is your RTT, bRTT is the other client's RTT).
Both the clients and the server need to be aware of these timelines. The server has to assume that all commands it receives from a client are from the past and react accordingly. The clients need to assume that any server-controlled or other-client-controlled entities it sees are in the past compared to the client's own locally-controlled entities.
The server can get in trouble if it always tries to fast-forward client input to the server's current time, though. The classic example is moving and stopping. If the server adjusts the movement command to remove the network latency, the server's simulated position for the entity will be ahead in time compared to the client when the client stops moving. This can result in the avatar on other clients snapping backwards. This is sometimes best solved by having clients always assume that the server is in the past and adjusting accordingly.
Fudging the inevitable inaccuracies in timing is also important. For instance, you _could_ send a full position and action up to the server on every input. When the client stops moving, the server knows both that the player stopped and where the player thinks they are on the client. If that position is relatively close to where the server thinks the player should be, the server could adjust and match the client's position. No need to visually snap the client back on their own machine. _Other_ clients may still need to see a small snap... or maybe they just decide to allow a little inaccuracy if the snap-back is relatively small. It's all "fudging" as I said, after all.
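One way that position "fudging" could look, as a sketch (the struct, function names, and the threshold value are made-up examples, not recommendations):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

double dist(Vec2 a, Vec2 b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// When the client reports where it stopped, accept the client's position
// if it is within tolerance of the server's own estimate; otherwise keep
// the server's authoritative position (and let the client snap).
Vec2 reconcileStop(Vec2 serverPos, Vec2 clientPos, double maxFudge = 0.5) {
    if (dist(serverPos, clientPos) <= maxFudge)
        return clientPos;   // close enough: no visible snap on the client
    return serverPos;       // too far off: server keeps authority
}
```

The tolerance is a game-design knob: bigger values hide more snapping but allow more client-side inaccuracy (and cheating potential).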
Your actual game design is going to be intricately affected by networking too, of course. There are things you just can't do in a network game; your design must thus be constrained by those limitations, period. This is why you will never see a networked version of some types of games, or never see a client-server networked version and only peer-to-peer (which removes an entire timeline from the equation). And there are things that you really should do in a network game; your design should follow those constraints or your game will look/feel bad.
... and this is why networking games is Really Hard and game network programmers are in super high demand.
Sean Middleditch – Game Systems Engineer – Join my team!
I'm curious what value you are using for your currentTime. I'm doing something similar, but for some reason when I'm calculating my t value I'm always receiving negative results.
I currently have it keeping track of the elapsed time per client. This is what I'm doing when I receive an UpdateTransformPacket:
// Capture the object's current transform as the "previous" interpolation state
GameObject* obj = netCompRef.GetParent();
current.pos = obj->m_Pos;
current.rot = obj->m_Rot;
current.scale = obj->m_Scale;
NetObjectTransformInterpolator& interp = NetObjectTransformInterpolator::GetInstance();
interp.SetObjectPrevTransform(this->netCompRef.GetNetObjID(), current, interp.GetElapsedTime());
// Stamp the newly received transform 0.3s ahead as the target state
double timeStamp = interp.GetElapsedTime() + 0.3;
TransformInfo transInfo(timeStamp, updatedTransform);
interp.SetNextMoveToTransform(this->netCompRef.GetNetObjID(), transInfo);
Then within my transform interpolator I'm attempting to do as you are:
double interpTime = elapsedTime - 0.3; // render 0.3s behind "now"
double t = 0.0;
double l = moveInfo->moveTarget.timeStamp - moveInfo->prevTrans.timeStamp;
if (l > 0.0001)
{
t = (interpTime - moveInfo->prevTrans.timeStamp) / l;
}
InterpToPosition(t, moveInfo);
InterpToRotation(t, moveInfo);
InterpToScale(t, moveInfo);
Where the InterpToPosition interpolates from the previousTransform.pos to the current moveTarget.pos
The delay time you show on the client needs to include client->server time as well as server->client time, so the "0.1 seconds" you have in your interpolation probably needs to be more like 0.3 seconds for the 150 ms delay case.
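A small sketch of that sizing, deriving the interpolation delay from measured latency instead of hard-coding it (the function name and the jitter margin are illustrative assumptions):

```cpp
#include <cassert>

// With timestamps generated the way shown above, the snapshot being
// rendered is roughly one client->server leg plus one server->client leg
// old, plus some slack so jitter doesn't starve the interpolator.
double interpBackTime(double clientToServer, double serverToClient,
                      double jitterMargin = 0.05) {
    return clientToServer + serverToClient + jitterMargin;
}
```

For the 150 ms-each-way example in this thread, that gives roughly 0.35s rather than the 0.1s in the original snippet, which is consistent with the negative `t` values going away.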