
Synchronization mechanisms

Started by September 26, 2003 05:30 PM
8 comments, last by DGtalx 21 years, 4 months ago
Hi. I've been coding networking applications for a long time (web servers, popular protocol clients (IRC, etc.)...). Now I've decided to try to create a simple network arcade game (just to start with). After learning some basics of network game programming, I came to realize that the main problem a programmer needs to solve when coding a network game is game state synchronization between peers (p2p architecture) or between clients and a server (client-server architecture). Client-server architecture suits internet games best, but the internet and WANs introduce high latency and lag. So I want to ask about the principles and algorithms for fighting lag in fast-action internet games.

In my game I have several clients and a central server. The server starts the game and each client is responsible for initiating a connection with the server (the player who starts the server tells the others its IP and port). I didn't want to use TCP because of its higher bandwidth requirements and because of the additional latency introduced by the TCP ack mechanism and the Nagle algorithm. So I want to build it on UDP (28-byte UDP header vs. 40-byte TCP header). The goal is to allow 56K players to play smoothly and to make the gameplay seamless and comfortable.

Before I ask the questions, I want to describe the current situation. The game is a 2D top-view shootout arcade where players move up, right, down or left, and fire bullets. Players and bullets move with a constant velocity. The goal is to survive and to shoot all the other players. Imagine I have two players ('A' and 'B') and the server ('S'). S runs on the same machine as A (a second thread dedicated to the server). The interaction A<->S is very fast, as they communicate through a local socket, so no latency occurs there. But when it comes to communication between A and B, or S and B, the problems appear.

1. How do I synchronize firing when the average latency is about 200 ms? Imagine A fires, and it takes about 200 ms for the packet to reach B through S, but the situation could change during those 200 ms, and the bullet position won't be up to date at B. The targeting method described in the articles section of this site cannot be a good solution, because it depends on ping statistics, but the ping time can change a lot over a short period (especially for dialup clients).

2. How do I synchronize game states and player positions, so that each client sees the same game situation on its local display? I've read about dead reckoning and other methods, but I still don't get how to apply those methods in a game...

3. What are the mechanisms and algorithms for packet exchange between clients and the server? Do I need to send updates periodically, or only when game state changes occur? What data should be included in a packet (delta values or absolute values)?

4. UDP is unreliable; how do I fight packet loss and keep the game running smoothly?

Well, I think those are all my questions for the moment. Links to docs are appreciated. Thanks in advance.
If you could format your post a little better it would be easier to read, DGtalx.

Here are some thoughts for you, but in the end I think you will just have to implement and test to see if they work for you.

1) If it takes 200 ms to go from A -> S -> B, then assuming an equal distribution of latency on each step, it only takes 100 ms for a packet to go A -> S, and 100 ms to go S -> B.

In that time, a game running at a 10 Hz cycle will have updated 1 tick. A bullet traveling at 10 pixels/sec covers about 1 pixel per 100 ms, so in that time the bullet has traveled 1 pixel, and it's no big deal that it appears 1-2 pixels ahead of the player. Scale the bullet speeds up and you will see that the speed has to reach the range of 100 pixels/sec before network latency becomes a problem for bullet accuracy.
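To make that arithmetic concrete, here is a minimal sketch (the function name is illustrative, not from any particular engine) of the position error a given one-way latency induces for a constant-velocity bullet:

```python
def latency_error_px(speed_px_per_s, latency_ms):
    """Pixels a constant-velocity bullet travels during the latency window,
    i.e. how far 'behind' the receiving client's view of it can be."""
    return speed_px_per_s * latency_ms / 1000.0

# 10 px/s bullet seen 100 ms late: only 1 px of error, negligible.
slow_err = latency_error_px(10.0, 100)    # -> 1.0
# 100 px/s bullet, same latency: 10 px of error, latency starts to matter.
fast_err = latency_error_px(100.0, 100)   # -> 10.0
```

This is why ddn's threshold of roughly 100 pixels/sec appears: the error scales linearly with bullet speed for a fixed latency.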

2) Look for articles discussed in this forum about timer synchronization. I use a variant of NTP, but others have their own home-grown solutions. Syncing the timers is the first step in syncing the movements, as movement is time dependent.

3) Both are needed. If you use UDP you can't guarantee delivery, so resends will be needed. But some data is so time-critical that you can't wait for an ack packet, so you just establish a constant stream of data. It's really dependent upon your application. For non-time-critical events like chat, use a single guaranteed-delivery message; for constantly changing time-critical data, use a UDP streaming protocol.

4) Two methods:
--->implement your own ack protocol, to resend data
--->ignore the lost data, and just send the current state in a streaming fashion. If you implement a throttling scheme you can improve your bandwidth efficiency, if you know how much data you're losing.
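A minimal sketch of the first method (names and the 0.5 s timeout are assumptions, not from the thread): the sender keeps every unacknowledged message in a table and resends entries whose timer has expired.

```python
import time

class ReliableSender:
    """Tiny sender-side ack/resend table for reliable messages over UDP.
    `send_func(seq, payload)` stands in for the real socket send."""

    def __init__(self, send_func, timeout=0.5):
        self.send_func = send_func
        self.timeout = timeout
        self.next_seq = 0
        self.pending = {}            # seq -> (payload, last_send_time)

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = (payload, time.monotonic())
        self.send_func(seq, payload)
        return seq

    def on_ack(self, seq):
        self.pending.pop(seq, None)  # delivered; stop resending it

    def resend_expired(self):
        """Call once per frame: resend anything unacked past the timeout."""
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.pending.items()):
            if now - sent_at >= self.timeout:
                self.pending[seq] = (payload, now)
                self.send_func(seq, payload)
```

The second method needs no table at all: each state packet supersedes the last, so a lost one is simply replaced by the next.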

Good Luck

-ddn



OK, but what do I do if the latency is random?
Take a look at this illustration:

[illustration not preserved]
When should the bullet animation start on both clients? And how do I make the bullet be in exactly the same position on both clients at every moment during its flight?
It isn't latency so much as the error with which you measure latency that will cause discrepancy in synchronization.

Error can be reduced over time through repeated samples, filtering, and projection techniques. However, the system will always have a certain amount of error which you can't remove.

Let's say you use NTP to synchronize the clocks to +-20 ms, which is achievable over broadband connections using UDP pings. It might take a minute to reach that level of accuracy, but it's doable.

So +-20 ms is the amount of error within your timer model on the client side. The server sends a packet to the client saying a bullet of type A has spawned at position B, with vector C, and speed D, at time E.

You take that speed and time and project the bullet's position to the current time: the current time as the client thinks it is, which has an error of +-20 ms.

So let's say a bullet travels 10 pixels/sec again: in 100 ms it travels 1 pixel, and in 20 ms it travels 1/5 of a pixel. So with that timing algorithm you have an uncertainty in the bullet's position of about 0.2 pixels, which is not too bad.
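The spawn-packet projection described above might be sketched like this (hypothetical names; the parameters mirror the packet fields: position B, vector C, speed D, time E):

```python
def project_bullet(spawn_pos, direction, speed, spawn_time, now_remote):
    """Project a constant-velocity bullet from its spawn packet to the
    client's current estimate of remote (server) time."""
    dt = now_remote - spawn_time      # seconds elapsed since spawn
    x, y = spawn_pos
    dx, dy = direction                # unit direction vector
    return (x + dx * speed * dt, y + dy * speed * dt)

# A 10 px/s bullet, rendered 0.5 s after its spawn timestamp,
# has moved 5 px along its vector:
pos = project_bullet((0.0, 0.0), (1.0, 0.0), 10.0, 100.0, 100.5)
# pos == (5.0, 0.0)
```

Since `now_remote` carries the +-20 ms clock-model error, the projected position inherits an error of roughly speed times 0.02 s: the 0.2 pixels mentioned above.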

The other issue is that because the timer algorithm is uncertain to +-20 ms, the client could run ahead or behind by +-20 ms, which isn't too bad. However, it takes a minute to gather enough samples to reach this level of accuracy. At startup the accuracy will be much worse, and the client can be +-200 ms ahead of or behind the server while it gets into sync. That might be a problem. What I did was let the server drive the time with ticks, in a lock-step manner, for the first minute. Then, once the timer algorithm has achieved enough accuracy, it switches over. It seemed to work well enough.

Well Good Luck.

-ddn
OK, now I've got even more questions...

1. Who should be responsible for computing damage? Should the clients take responsibility for rendering the bullet, for collision detection, etc., after they receive the 'fire' packet? Or does the server need to calculate everything and send game states to the clients?

2. What does "clock synchronization" mean? Do I have to sync the system time (system clock) or some kind of internal game clock? How do I implement an internal game-specific clock?
Here's a clock sync algorithm which is supposed to be good for syncing clocks in games:
1. Client stamps current local time on a "time request" packet and sends to server
2. Upon receipt by server, server stamps server-time and returns
3. Upon receipt by client, client subtracts sent time from current time and divides by two to compute latency. It subtracts current time from server time to determine the client-server time delta and adds in the half-latency to get the correct clock delta.
(So far this algorithm is very similar to SNTP)
4. The first result should immediately be used to update the clock since it will get the local clock into at least the right ballpark (at least the right timezone!)
5. The client repeats steps 1 through 3 five or more times, pausing a few seconds each time. Other traffic may be allowed in the interim, but should be minimized for best results
6. The results of the packet receipts are accumulated and sorted in lowest-latency to highest-latency order. The median latency is determined by picking the mid-point sample from this ordered list.
7. All samples above approximately 1 standard-deviation from the median are discarded and the remaining samples are averaged using an arithmetic mean.
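Steps 5-7 above can be sketched as follows (a minimal illustration, assuming latency samples in seconds; `statistics.pstdev` is used for the standard deviation):

```python
import statistics

def filtered_latency(samples):
    """Sort the latency samples, take the median, discard samples more than
    one standard deviation above it, and average the remainder (steps 6-7)."""
    samples = sorted(samples)
    median = samples[len(samples) // 2]
    stdev = statistics.pstdev(samples)
    kept = [s for s in samples if s <= median + stdev]
    return sum(kept) / len(kept)

# One dial-up spike among otherwise stable pings gets discarded:
lat = filtered_latency([0.110, 0.105, 0.102, 0.700, 0.108])
# lat is about 0.106 s; the 0.700 outlier did not drag the estimate up.
```

The same accumulate-sort-filter-average treatment can be applied to the clock-delta samples to get the final offset.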

What do you think of it?

3. What is the basic game loop (on both client and server), including multiplayer prediction and sync issues? Can anybody give examples of a packet dump and maybe some synchronization pseudocode?
That's the algorithm I use as well, except I record two time events on the server side: the time of receiving the packet and the time of sending the reply. That accounts for the server's processing latency, which is not what you're trying to measure and which increases the error of the latency measurement, I feel.

As for the first question, the server should be authoritative, but the clients should simulate as much as visually possible to give the appearance of a real-time interactive world.

Basic server and client loop:

-parse and dispatch incoming network events
-update game simulation
-update network manager
-render view if you have one

either in a separate thread or asynchronously:

-receive packets from the socket stream, translate them into network events and push them onto the network event in-queue

-send packets: take network events from the network event out-queue, translate them into packets and push them over the socket stream

I separate the game logic from the network components to reduce dependencies and make things cleaner overall.
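The loop described above might look like this single-threaded sketch (all names are illustrative; deques stand in for the socket-facing queues that a real network thread would fill and drain):

```python
import collections

class GameLoop:
    def __init__(self):
        self.net_in = collections.deque()    # decoded incoming network events
        self.net_out = collections.deque()   # events waiting to be sent
        self.state = {"tick": 0}

    def dispatch_network_events(self):
        """Parse and dispatch incoming network events."""
        while self.net_in:
            event = self.net_in.popleft()
            self.state[event["key"]] = event["value"]  # apply to simulation

    def update_simulation(self, dt):
        """Advance the game simulation by one frame."""
        self.state["tick"] += 1
        # ... move players and bullets by dt here ...

    def update_network_manager(self):
        """Queue outgoing state updates for the socket side to send."""
        self.net_out.append({"tick": self.state["tick"]})

    def run_one_frame(self, dt):
        self.dispatch_network_events()
        self.update_simulation(dt)
        self.update_network_manager()
        # render the view here, if this peer has one
```

Keeping the event queues as the only contact point between game logic and networking is one way to get the decoupling described above.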

Good Luck.

-ddn
OK, but I still don't know how to implement an internal game-specific clock, and how does this clock relate to the game loop?

The game clock is the local system clock with an offset to derive the time of the remote clock. It''s a model of the remote clock. The offset is computed using the pings.

Rt = Lt + Of;

Rt = remote time estimated +- error
Lt = local time ( using the system clock )
Of = offset

Of is computed using the ping packets. They carry the local send/receive times and the remote send/receive times.

Of = ( remote_receive - local_send ) - hL;
hL = L/2
L = ( local_receive - local_send ) - ( remote_send - remote_receive )

L = latency
hL = half latency

The offset is progressively refined through repeated sampling and filtering of pings.

Check my calculations to make sure they are correct; this is from memory.
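The formulas do check out. Transcribed directly (all four timestamps come from a single ping: two read on the local clock, two on the remote clock):

```python
def clock_offset(local_send, remote_receive, remote_send, local_receive):
    """Compute the remote-clock offset and round-trip latency from one ping.
    L subtracts the remote's processing time from the round trip;
    Of corrects the remote receive timestamp by half the latency."""
    latency = (local_receive - local_send) - (remote_send - remote_receive)
    half_latency = latency / 2
    offset = (remote_receive - local_send) - half_latency
    return offset, latency

# Remote clock 50 units ahead, 10 units of symmetric one-way latency,
# 5 units of remote processing time:
of, lat = clock_offset(local_send=0, remote_receive=60,
                       remote_send=65, local_receive=25)
# lat == (25 - 0) - (65 - 60) == 20;  of == (60 - 0) - 10 == 50
```

With `Of` in hand, the game clock is just `Rt = Lt + Of` evaluated against the local system clock each frame.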

Good Luck!

-ddn
ddn:
Thanks a lot for your replies... This now makes sense.
One more thing: can you share some simple source code to look at and to learn from? (Quake, the HL SDK, etc. are too complicated for me; I just want to learn from a clear solution.) Or maybe you have links to code fragments which implement everything discussed above?
I only know of one really good example of stable and functional network code I've seen on the internet. It's the networking code found in the Torque engine, which unfortunately is neither free nor open source, last I remember. But you should check again; that might have changed.

The reason I hold that code in high regard is because it has gone through actual usage. I do encounter a lot of network code on the internet (samples, tutorials, etc.), but much of it has not been tested or used, so I can't recommend it.

Good Luck.

-ddn

This topic is closed to new replies.
