
Do games normally stream all data, even if unnecessary?

Started by LunaRebirth, November 06, 2017 02:14 AM
6 comments, last by hplus0603 7 years ago

I'm wondering how common it is for an application using a socket server to continuously send bytes, regardless of whether there is actually any data to send.

For example, I have a socket server which sends a char array to all connected clients, even if the char array contains nothing useful for the client.
The reason is that I need the server to send data to the client at the same rate the client sends data to the server. If I check whether the buffer has information before sending, data from one client might not reach another client until the receiving client sends something first (i.e., a player must move in order to see another player move).

To prevent the situation above, I stream data to all clients constantly, even when the buffer is empty.

Is this normal, or should I worry about spending time on a new algorithm that only sends data when necessary?
Obviously it would be more beneficial to spend the time, but with deadlines I must prioritize. 

How common is it for games to continuously send data, including empty data, instead of sending only data with a message?

6 minutes ago, LunaRebirth said:

How common is it for games to continuously send data, including empty data, instead of sending only data with a message?

Various applications do this all the time, in the form of heartbeats.

I'm not entirely sure what you mean by "I need the server to send data ... at the same speed the client sends data." Do you mean that you need the two synced up, 1:1, as in synchronous rather than asynchronous? If this is the case, I don't see much of a downside to keeping the extraneous data (as long as the interval is reasonable), as that also gives you a way of knowing when a client has dropped. Though if you find yourself sending empty arrays too frequently, you might want to consider an asynchronous model.
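For illustration, a minimal heartbeat sketch in C (the one-second interval, one-byte payload, and tick() function are assumptions made up for this example):

#include <sys/socket.h>
#include <time.h>

/* Hypothetical per-frame function: sends a one-byte heartbeat if nothing
   has gone out for a second, so the peer can detect a dropped connection. */
void tick(int sock)
{
    static time_t last_send = 0;
    time_t now = time(NULL);
    if (now - last_send >= 1) {
        const char hb = 0;        /* empty payload acting as a heartbeat */
        send(sock, &hb, 1, 0);    /* error handling omitted for brevity */
        last_send = now;
    }
}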


Yeah, so you've answered my question, but just to extend on it: say we have a blocking server and client.
The client must send data to the server, and the server then sends that data to all clients.

Say a client doesn't send data because it has no data to send. The server then waits for data before it will send any, so that client won't receive anything until it sends something.

In order to prevent that, I send and recv each frame, no matter what, so that I can receive all data without waiting. I do the same sort of thing with a non-blocking socket, by not sending data until I've received data. So yes, synchronous.

I have tried making my server and client wait for a recv before sending, and if there is nothing left to send, sending a single empty array to tell the other side it is ready for more data, but it's easier said than done, and I fear time isn't on my side.

Basically I'm worried that sending 1024 bytes each loop synchronously may use a lot of unnecessary data for a server hosted on a virtual machine, though my worry is backed up by zero evidence.

10 hours ago, LunaRebirth said:

I send and recv each frame, no matter what ...  I do the same sort of thing with a non-blocking socket, by not sending data until I've received data ... I'm worried that sending 1024 bytes each loop synchronously may use a lot of unnecessary data

You need to process incoming data regularly.   

The risk with what you just wrote is that you can stall your program if a send takes a while, or if the program must wait around for data to arrive. An asynchronous or buffered send prevents the stall going out, and a polled read of whatever data is present can prevent stalls on reading. Both can be achieved by putting the socket into non-blocking mode with the O_NONBLOCK flag. Otherwise any minor network hiccup can cause your program to stall.
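For reference, a minimal sketch of setting that mode on a POSIX socket (error handling trimmed; the function name is made up):

#include <fcntl.h>

/* Put an existing socket into non-blocking mode. Afterwards, send() and
   recv() return immediately with EWOULDBLOCK/EAGAIN instead of stalling. */
int set_nonblocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(sock, F_SETFL, flags | O_NONBLOCK);
}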

As for sending a kilobyte every "loop": if that means once per frame, or 60+ times per second, that's 60 kilobytes per second (in the more common bits per second, about 500 Kbps), which could be a burden. Although that rate can be handled easily enough on modern broadband, multiplying it for each client will quickly hit a high number, and it would certainly show up on the bill for a hosted server with metered bandwidth.
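Spelled out, the arithmetic looks like this (assuming 60 sends per second and ignoring per-packet protocol overhead):

1024 bytes/frame x 60 frames/sec = 61,440 bytes/sec ≈ 60 KB/sec ≈ 480 Kbps per client
480 Kbps x 100 clients ≈ 48 Mbps of server upstream, before overhead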

 

Be sure when you test your programs that you include both slow networks and poor quality connections. Exactly how you do this depends on your systems. Many major networking libraries include options to simulate both situations.

Okay, thanks frob. 

Data usage is important to me so I will be making a system to deal with that. 

Last question: is it faster to send a smaller number of bytes than a larger one? It sounds like it should be, but I don't know enough about networking behind the scenes.

For example, if I send() a char array of 6 bytes, will it be faster than if I send() a char array of 200 bytes?

 

I'm thinking about sending a message that reserves 4 bytes, delayed by 1 frame, in order to tell the server how many bytes the next message will be. That way, instead of sending 1024 bytes per frame, I can send the exact amount of bytes I need by using up 4 extra bytes per send() to tell the server what amount to expect next. I could see this being good in some cases and bad in others. 
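As a rough sketch of that idea in C (the more common variant puts the 4-byte length in the same send rather than one frame ahead; the function name and the 1024-byte cap are illustrative assumptions):

#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Length-prefixed framing: prepend a 4-byte big-endian length so the
   receiver knows exactly how many payload bytes follow. */
int send_frame(int sock, const char *payload, uint32_t len)
{
    char buf[4 + 1024];               /* illustrative fixed cap */
    uint32_t be_len = htonl(len);

    if (len > 1024)
        return -1;
    memcpy(buf, &be_len, 4);
    memcpy(buf + 4, payload, len);
    /* A real implementation must loop here, since send() may accept
       fewer bytes than requested. */
    return (int)send(sock, buf, 4 + len, 0);
}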

 

That answer gets complicated.  

In general you should minimize the content of what you send, but you should also buffer messages up into larger bundles when reasonable.

Every block of data sent across the wire has some overhead, and most of the overhead is invisible to you.  Exactly how much overhead is required depends on the protocols involved. TCP has different overhead from UDP.  Ethernet on a local network has different overhead from PPPoE used for many broadband connections. Mobile connections and fiber connections have their own overhead.

Many protocols will accumulate small messages in the hope that you will send another message soon, which can reduce overall transmission size. There are also minimum sizes that must be met; if a message is smaller, it will be padded and the padding will be transmitted.
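One well-known instance of that accumulation on TCP is Nagle's algorithm; a game that needs low latency can turn it off with the standard TCP_NODELAY option (a minimal sketch, with error handling left to the caller):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small packets go out immediately instead
   of being coalesced, trading bandwidth efficiency for latency. */
int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}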

Those details are necessary when you're doing more advanced networking code that requires understanding how the bits are transferred across the wire, but again, they're invisible to you as the programmer. If you want more information on all those details, the book TCP/IP Illustrated Volume 1 by Stevens is probably the best book out there.

 

So unless you want to get into those details, it can be simplified into this: send as little as reasonable as infrequently as reasonable.  Then be sure to test it on as many networking environments as possible, including various broadband systems and cell phone networks.  

Quote

say we have a blocking server and client
You already lost at that point. About the only kind of program that can afford to block is an FTP client, or a batch HTTP client like curl / wget.

This is why select() exists -- it lets you wait until some socket has data to receive, OR a particular timeout expires. Typically, you will call this in your main loop and read data from all sockets that are ready.

This is also why recv() is implemented the way it is: if there is ZERO data available, recv() will block, waiting for data. If there is at least one byte available, recv() will return data (the smaller of what's available and the buffer size passed to recv()) without blocking. Thus, after select() tells you there is data, recv() is guaranteed not to block!
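A minimal select()-based read loop in C, as a sketch (a single socket and a 10 ms timeout, both arbitrary choices for the example):

#include <sys/select.h>
#include <sys/socket.h>

/* Wait up to 10 ms for data, then drain whatever is ready. Because
   select() reported the socket readable, this recv() cannot block. */
void poll_socket(int sock)
{
    fd_set readfds;
    struct timeval tv = { 0, 10000 };    /* 10 ms timeout */
    char buf[1024];

    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    if (select(sock + 1, &readfds, NULL, NULL, &tv) > 0 &&
        FD_ISSET(sock, &readfds)) {
        ssize_t n = recv(sock, buf, sizeof buf, 0);
        if (n > 0) {
            /* handle n bytes of incoming data here */
        }
    }
}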

Games will typically generate a packet every X milliseconds and put some "standard" information in there ("this is packet X, the current game time is Y, the last packet I saw from you was Z, ..."). Then they take any messages that have been queued since the last send, bundle them all up into a single packet, and send it.
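A sketch of what such a per-tick header might look like (field names and sizes are made up for illustration):

#include <stdint.h>

/* Hypothetical fixed header written at the start of every outgoing
   packet, followed by zero or more queued game messages. */
struct packet_header {
    uint16_t sequence;       /* "this is packet X" */
    uint32_t game_time_ms;   /* "the current game time is Y" */
    uint16_t last_seen;      /* "the last packet I saw from you was Z" */
};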

On dial-up modems, the time to send a packet is almost directly related to the size of the packet. However, on any modern system there is a "frame size" where send time is mostly determined by how many frames you use (including packet header overhead) rather than the size within a given frame. IPv6 guarantees that a 1280-byte frame (including packet overhead) will be atomic, but allows, and often implements, bigger. LTE mobile networking has an MTU of 1428 bytes, from which 40-80 bytes disappear depending on protocol and carrier. Once the radio is spun up on a phone, a packet of 4 bytes or a packet of 1200 bytes will take the same amount of power/time. Once you go over about 1200 bytes, It Depends (tm) a lot.

enum Bool { True, False, FileNotFound };

