
Syncing Client/Server

Started April 05, 2016 04:29 PM
4 comments, last by oliii 8 years, 7 months ago

First, I don't have a lot of network background, so sorry if this question shows my noobness. In my particular scenario, there is going to be one service per client (one-to-one). I'm sending game data from the server to client so that the client can do some processing with the data. In theory the data could change every frame, but in practice it does not. So I'm trying to only send over deltas, but ran into a problem with my approach.

My approach was to keep a dictionary<ID, data> on the server, so when it came to sending data over the wire, I could look up the last data I sent, and check what pieces of data changed, and then only send that data over. A set of bits would be flagged and also sent over so the client knew what data values were being updated. The client also keeps the data cached so it only needs the updated data.
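The dirty-bit approach described above can be sketched roughly like this (the field names and single-byte bitmask are assumptions for illustration, not the poster's actual code):

```python
# Sketch of the dirty-bit delta approach: compare current state to the last
# state sent, flag changed fields in a bitmask, and send only those values.
FIELDS = ["x", "y", "z", "health"]  # fixed field order shared by server and client

def make_delta(last_sent, current):
    """Build (bitmask, values) containing only the fields that changed."""
    mask = 0
    values = []
    for i, field in enumerate(FIELDS):
        if last_sent.get(field) != current[field]:
            mask |= 1 << i          # bit i set: field i is in the payload
            values.append(current[field])
    return mask, values

def apply_delta(cache, mask, values):
    """Client side: merge only the flagged fields into the cached state."""
    it = iter(values)
    for i, field in enumerate(FIELDS):
        if mask & (1 << i):
            cache[field] = next(it)
    return cache
```

Note that a delta against an empty cache naturally flags every field, which matters for the first-connection problem below.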

The problem I ran into is that the server starts off before any clients connect, and starts sending the data (to nowhere). This builds the cache and so by the time a client connects, it is only receiving deltas (but the client never received the full object the first time around because it wasn't connected yet).

Since the client/service is one-to-one, I could probably modify the server to not start building a cache until a client connects. However, I wondered if missed packets would be a problem (maybe our socket API automatically resends, so I don't need to worry about this situation). I'm wondering what kind of systems games use to efficiently sync up client/server data so that only deltas need to be sent.

-----Quat

The server shouldn't be building one global cache. Or to be more precise, it should build a cache for each client. So: no client, no cache.

1) client connects.

2) The cache for that client is empty. Therefore the delta against an empty cache is a full state. Therefore the server sends the full state.

- NOTE : the server would keep sending full states until the client starts acknowledging something.

- NOTE 2 : you can have a blocking full state: send the first / full state reliably, then wait for the client to ack it before sending delta updates.

3) The server can send deltas to that client from now on, since it has received an ACK from the client.
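Steps 1-3 can be sketched as a small per-client state machine (a minimal illustration under assumed names, not a real networking API):

```python
# Per-client view: the cache starts empty, so the first "delta" is the full
# state, and the server keeps sending full state until the client ACKs.
class ClientView:
    def __init__(self):
        self.acked_state = None   # nothing acknowledged yet

    def build_update(self, world_state):
        if self.acked_state is None:
            # delta against an empty cache == full state
            return ("FULL", dict(world_state))
        delta = {k: v for k, v in world_state.items()
                 if self.acked_state.get(k) != v}
        return ("DELTA", delta)

    def on_ack(self, acked_world_state):
        # client confirmed it holds this state; future updates can be deltas
        self.acked_state = dict(acked_world_state)
```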

It's easier to see the client as an observer of the game state. What you send to the client is its view of the game, and subsequent updates are changes to that view. The SQN / ACK numbers are only there for one-to-one transmission, and are not a global number used by all clients. You can do that too, but from experience it's unnecessary and quite messy.

Everything is better with Metal.

You need the map of "last version seen" per connected client.
Then, when time comes to update clients, iterate over the data you have, and compare to the versions seen by each client, and send data that the server has a newer version of.
If the client could miss the update (crashing, dropped packets, or whatnot) then it makes sense to have the client send the last version number it's seen, rather than the server just assuming the client sees everything it sends.
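One way to realize the per-client "last version seen" map is to stamp every piece of data with a monotonically increasing version number; the client reports the highest version it has seen, and the server sends anything newer. A minimal sketch (class and method names are assumptions):

```python
# Version-stamped data store: send a client everything it hasn't seen yet,
# based on the version number the client itself reports back.
class VersionedStore:
    def __init__(self):
        self.data = {}       # object_id -> (version, value)
        self.version = 0

    def set(self, object_id, value):
        self.version += 1
        self.data[object_id] = (self.version, value)

    def updates_for(self, client_last_seen):
        """Everything the server has a newer version of than the client reports."""
        return {oid: (ver, val) for oid, (ver, val) in self.data.items()
                if ver > client_last_seen}
```

Because the client reports its own last-seen version, a dropped update simply gets resent on the next pass instead of being silently lost.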
enum Bool { True, False, FileNotFound };

not to derail this thread, but about those delta-updates - would you say it is ok to delta the serialized data, or would you delta on a per-value basis? obviously for the serialized data i would mean before compression.

Preferably, you do that "map keeping" in a separate process which subscribes to the server (read up on PUB/SUB). Why so needlessly complicated? First, because this automatically offloads some non-trivial work (figuring out deltas and compressing) to another CPU core, and if one day you discover that the CPU is hitting its limits, you can simply run the subscriber on a different machine. It is arguably more elegant from an architectural point of view, too, as you separate things that should be separate (game logic vs. distribution).

If you find yourself really lucky and suddenly need to scale to 100,000 users you didn't see coming, that's something you can tackle as well. No need to rewrite all your networking code and desperately try to squeeze out the last bit with vain optimizations: simply rent another 10 subscriber machines which amplify your data to clients, and you're good. From the main server's point of view, the difference between 10 clients and 10,000 clients is zero.

Note that the per-client "last seen" info, as well as compression dictionaries and encryption state, scales linearly with the number of clients memory-wise, so in principle the same thing applies as for CPU. Yes, memory seems somewhat abundant these days, but neither caches nor bandwidth are, so it may very well become a factor indirectly.

would you say it is ok to delta the serialized data, or would you delta on a per-value basis?

First, I hope that by "serialized data" you mean an efficient binary packing using all the information your servers/clients know; not any built-in serialization like Java or C# Serializable classes.
When you know that you're sending the player position, the server only needs to send three floats, and the client only needs to receive three floats, which totals 12 bytes (or less, depending on encoding mechanism). The built-in serialization classes will add tons of overhead.
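The 12-byte figure follows directly from fixed-layout packing; in Python's `struct` notation (the little-endian layout here is an arbitrary choice for illustration):

```python
import struct

def pack_position(x, y, z):
    # three little-endian 32-bit floats: exactly 12 bytes on the wire
    return struct.pack("<fff", x, y, z)

def unpack_position(payload):
    return struct.unpack("<fff", payload)
```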

Second, the typical granularity of delta encoding is slightly coarser than "individual value" but not typically encoded as a "binary diff" off of a state blob.
Typically, you will know that position, velocity, and heading for a player tend to change together, so you use one message to say "pos/vel/heading update" and have one bit in the header for whether to expect that in the packet.
Same thing for other properties that often update together.
You can make it finer-grained if you want to -- it depends on the specifics of your game. There is additional overhead to signaling "what changed" in the protocol (it may be as cheap as a bit, or as expensive as a path to a field within an object hierarchy).
Do the math and choose what works best for your game.
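The "one bit per property group" idea can be sketched like this (the group layout and wire format are assumptions for illustration):

```python
import struct

# One header bit per property *group*, not per value; pos/vel/heading are
# groups whose members are assumed to change together.
GROUPS = [("pos", "<fff"), ("vel", "<fff"), ("heading", "<f")]

def encode_update(changed):
    """changed: dict mapping group name -> tuple of floats."""
    mask = 0
    body = b""
    for i, (name, fmt) in enumerate(GROUPS):
        if name in changed:
            mask |= 1 << i
            body += struct.pack(fmt, *changed[name])
    return bytes([mask]) + body   # 1-byte header + only the changed groups

def decode_update(packet):
    mask, offset, out = packet[0], 1, {}
    for i, (name, fmt) in enumerate(GROUPS):
        if mask & (1 << i):
            size = struct.calcsize(fmt)
            out[name] = struct.unpack(fmt, packet[offset:offset + size])
            offset += size
    return out
```

A position-only update here costs 13 bytes (1 header byte + 12 bytes of floats), versus 3 header bits plus per-field framing if every float were flagged individually.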
enum Bool { True, False, FileNotFound };
Too fine a grain is detrimental, though. After all, you need to send those 'bitfield headers'.

Assuming you delta on single bytes, for the sake of argument, the header costs one bit per byte, so at best you still send 1/8th of the original data.

A vector is usually atomic. One component changes, it's safe to assume the other components also changed to some degree as well.

If you run into long lengths of bitfield headers (lots of little components changing all over the place), it can mean you need to optimise those components in groups based on their frequency and times of change. Thus reducing the header size, and the frequency of delta transmissions.

You can even write meta-data statistics about it, and estimate how you could improve your traffic by bundling up components more efficiently.

Something I actually experimented with before. The delta-encoding was on fixed-length byte buffers.

I ran a crude statistical analysis on the frequency of byte changes, and remapped the byte order in order of frequency of change. That gave me runs of contiguous bytes that would change, and not change, at roughly the same time -- the static data that never changed up front, for example.

Then I ran a simple RLE to build up my delta packets. That gave quite decent compression. Not as optimal as a more application-aware delta-compression algorithm, but hey.
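A toy version of that RLE-on-the-delta step might look like the following (the encoding of alternating zero-runs and literal runs is a made-up illustration, and the byte-reordering step is left out):

```python
# RLE over the XOR delta of two same-length buffers: each run is a pair
# (count_of_unchanged_bytes, literal_changed_bytes). Long unchanged stretches
# -- which the frequency-based reordering creates -- compress to a single count.
def rle_delta(prev, curr):
    diff = bytes(a ^ b for a, b in zip(prev, curr))
    out, i = [], 0
    while i < len(diff):
        zeros = 0
        while i < len(diff) and diff[i] == 0:
            zeros += 1
            i += 1
        lit = bytearray()
        while i < len(diff) and diff[i] != 0:
            lit.append(diff[i])
            i += 1
        out.append((zeros, bytes(lit)))
    return out

def apply_rle_delta(prev, runs):
    diff = bytearray()
    for zeros, lit in runs:
        diff.extend(b"\x00" * zeros)
        diff.extend(lit)
    return bytes(a ^ b for a, b in zip(prev, diff))
```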

Everything is better with Metal.

This topic is closed to new replies.
