
TCP/IP Fast Writer Slow Reader Algorithm

Started by February 25, 2016 05:25 AM
3 comments, last by oliii 8 years, 8 months ago

I've been working on a test, and I've run into what seems to be quite an interesting problem.

I wrote a test program that simulates a fast-writer/slow-reader scenario, and I was able to reproduce it. The main question that really made me think was: is that good or bad?

So yeah, I had several clients sending messages to the server, and at the server the code simply dumps all recv() bytes into a buffer, a queue from which a worker thread pops elements, basically eliminating the surge effect brought about by a slow reader.
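For concreteness, a minimal sketch of the producer/consumer setup being described, assuming one receive loop pushing recv() chunks and one worker thread popping them; the names (ByteQueue and so on) are illustrative, not taken from the actual test code:

// Receive loop calls queue.push(buf, n) with whatever recv() returned;
// a worker thread calls queue.pop() and processes each chunk.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

struct ByteQueue {
    std::mutex m;
    std::condition_variable cv;
    std::deque<std::vector<char>> chunks;

    void push(const char* data, size_t len) {
        {
            std::lock_guard<std::mutex> lock(m);
            chunks.emplace_back(data, data + len);   // copy the recv()'d bytes
        }
        cv.notify_one();                             // wake the worker
    }

    std::vector<char> pop() {                        // blocks until data arrives
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !chunks.empty(); });
        std::vector<char> chunk = std::move(chunks.front());
        chunks.pop_front();
        return chunk;
    }
};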

And if clients have different rates, I could simply adjust the tick() and go forward from there.

Can someone shed some light on this issue?

Thanks in advance.

Based on your post, I'm not sure you understand the TCP protocol and the OSI model.

TCP is a stream-based protocol and both ends have buffers. The writer lives at a higher level, usually the application level, and can write whatever it wants; the data gets buffered. The Transport layer (TCP) sends the messages across the Network layer (typically IP) as fast as it can. TCP does its magic of acknowledging that the packets are all correct, and the data comes out the other side as a stream when it has arrived. TCP will control the flow automatically.


I had several clients sending messages to the server, and at the server the code simply dumps all recv() bytes into a buffer, a queue from which a worker thread pops elements, basically eliminating the surge effect brought about by a slow reader.

I'm not sure where your "surge" comes in.

The transport layer of TCP will buffer the incoming stream. That happens behind your back automatically. The size of the buffer is up to the system. (On Windows you can adjust the computed buffer sizes and maximum buffer sizes in registry values, but it is dangerous to do so without a deep understanding of how and what it affects.) Generally the read buffer on the receiving side is on the order of 8 kilobytes to 64 kilobytes, which the system will manage automatically. If the program is "slow" and doesn't read from that buffer, then once that buffer fills up TCP will tell the sender side to stop sending data. TCP will resume the transfer when there is space in the buffer. The data will still flow from one side to the other, and you read it out as a stream of bytes in the same order it was sent.

It is typical to accumulate from the system's buffer into your own application buffer which can be whatever size you want. Once enough data has streamed through the pipe into your accumulator, you read that block into your program, remove it from the accumulator, and process it.
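As an illustration of that accumulator idea, here is a minimal sketch assuming each application message is framed with a 2-byte little-endian length prefix; the framing scheme and the process_message helper are assumptions for the example, since TCP itself only gives you a byte stream:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

void process_message(const char* payload, size_t len);  // hypothetical handler

std::vector<char> accumulator;   // grows as recv() delivers bytes

// Call this with whatever recv() returned; it may hold a fraction of a
// message, exactly one message, or several messages glued together.
void on_bytes_received(const char* data, size_t len) {
    accumulator.insert(accumulator.end(), data, data + len);

    for (;;) {
        if (accumulator.size() < 2) break;                 // no length prefix yet
        uint16_t msg_len;
        std::memcpy(&msg_len, accumulator.data(), 2);      // read the prefix
        if (accumulator.size() < 2u + msg_len) break;      // message incomplete

        process_message(accumulator.data() + 2, msg_len);  // handle one message
        accumulator.erase(accumulator.begin(),
                          accumulator.begin() + 2 + msg_len);
    }
}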


And if clients have different rates, I could simply adjust the tick() and go forward from there.

Connections always have different rates, and the flow rates are constantly changing over time.

I think your question assumes a bunch of context that's not stated.

So, this is for a client/server game? And TCP is used for gameplay information? And some clients are able to send data to the server faster than others? Or perhaps the server can write to all the clients, but some clients are slower to read than others?

TCP isn't particularly great for highly asynchronous situations where timing is important, but it can be made to work. Perhaps the most common option is to pace the transmission at the application level.
Define the maximum size of a "tick" on the network. Call it "1kB." Then, define the maximum number of "ticks" that can be outstanding on the network. Call it "3." Number each packet with a sequence number. When you send a packet, also include the latest sequence number you've seen from the other end.
When the time comes to maybe send a packet from your server, check what the last packet sent is, and what the last packet acknowledged is. If the delta is less than "3", then you can construct and send another update packet. If not, you simply don't.
This means you need to de-couple network tick rate from game/simulation tick rate. It also means you need to have some data that you can choose not to send, or send less frequently.
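To make that concrete, here is a minimal sketch of the pacing check under the numbers above (1 kB ticks, at most 3 outstanding); the names kMaxOutstanding, send_update and so on are made up for illustration:

#include <cstdint>

constexpr int      kMaxTickBytes   = 1024;  // maximum size of one network "tick"
constexpr uint32_t kMaxOutstanding = 3;     // ticks allowed in flight at once

struct Connection {
    uint32_t last_sequence_sent  = 0;  // sequence number of our last packet
    uint32_t last_sequence_acked = 0;  // latest sequence echoed back by the peer
};

// Hypothetical helper: builds a packet of at most max_bytes, stamps it with
// our sequence number and the latest sequence seen from the peer, and writes
// it to the TCP socket.
void send_update(Connection& conn, uint32_t sequence, int max_bytes);

// Called once per network tick on the server, for each connection.
void maybe_send_update(Connection& conn) {
    if (conn.last_sequence_sent - conn.last_sequence_acked >= kMaxOutstanding) {
        return;  // the pipe is "full"; skip this tick and send fresher data later
    }
    ++conn.last_sequence_sent;
    send_update(conn, conn.last_sequence_sent, kMaxTickBytes);
}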

This kind-of re-implements TCP on top of TCP, but with some additional important features, the most important of which is that the application has visibility into the current state of the connection, and can thus avoid putting "out of date" data onto the wire when it's full.

Because it's still on top of TCP, you are still guaranteed to make progress eventually. Even if three packets in a row are dropped on the wire, the "acknowledge" that tells the other end it's OK to send more data won't be lost (because TCP re-transmits for you), so the other end will never stall waiting for an acknowledge.

The other question is then: What do you do with clients that have varying degrees of throughput? How do you support your ultra-dense MMO on a 56k dial-up modem?
This is something that has to go into your game design. Either you make this a goal, implement a "varying fidelity" view of the world, and allow different players to see different slices of your world; or you say "a player needs at least X amount of throughput, and at most Y amount of latency, to be able to successfully play the game," detect when a connection falls short, and drop the player (telling them on the client what happened and why).
enum Bool { True, False, FileNotFound };

@hplus0603,

Yeah, I was afraid of going in that direction because of the complexity of that algorithm, but you know exactly what I was thinking.

After reading your reply, it triggered an idea for an algorithm set up like a B-tree (or maybe a parse tree) that would have the ability to adjust the rate per tick based on lag, I mean how many ticks the client is currently behind.

I haven't started writing that test program, but I do have a simple test using a key-value store (taken from babudb). I'm still building this test, iterating it to a point where I think I can simulate the phenomenon. The test code is at https://github.com/MagnusTiberius/WalkerRoad/blob/master/database_test/DemoDBTest.cpp line 358.

I would base the update rate on congestion rather than lag: basically, the size of the data cache currently queued for each client. High-latency and bad connections will automatically be scaled down, while the host's memory usage is preserved.

Once a size limit is reached for a particular client, maybe it's time to kick them out, because their connection is just too bad.
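Something along those lines, as a rough sketch; the thresholds and the disconnect helper below are made up for illustration:

#include <cstddef>

constexpr size_t kThrottleBytes   = 32 * 1024;   // start sending less past this
constexpr size_t kDisconnectBytes = 256 * 1024;  // kick the client past this

struct Client {
    size_t queued_bytes = 0;   // data buffered for this client, not yet sent
};

void disconnect(Client& client);   // hypothetical: closes the socket, frees state

// Returns true if the client should get a full-rate update this tick.
bool should_send_full_update(Client& client) {
    if (client.queued_bytes >= kDisconnectBytes) {
        disconnect(client);        // this connection can't keep up at all
        return false;
    }
    // Past the throttle threshold, send only essential or less frequent data.
    return client.queued_bytes < kThrottleBytes;
}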

TBH, I would use delta-compressed, state-based UDP packets for entities that need frequent updating, rather than TCP/IP. Reliable messaging can get rather troublesome when, as you've found out, you're trying to push out more data than your internet connection allows.

Everything is better with Metal.

This topic is closed to new replies.
