How do I judge how much data I can regularly send to players?
I would assume it's about sending the least amount possible without sacrificing gameplay. How much you send should be fairly easy to tweak in whatever you set up, so you can test what works best.
If the recipient cannot keep up, detect this, and drop them from the game, rather than trying to cram a size 10 body into a size 2 dress. Nobody will like that outcome!
This means that your game design will exclude some number of players "from the bottom" in network connectivity. You're already excluding some number of players with whatever OS, graphics, CPU, RAM, language, and other choices you're making in the game design, so networking really is no different. Find a level that you can comfortably support with your development resources, and stick to that.
Regarding how much you can, and should, send, that varies very much based on what kind of game it is, what kind of demographics you're targeting, and other context. For action games targeting North America, I'd suggest that more than 32 kB/s is starting to push it. For comparison, I don't know what the console tech cert requirements are these days -- it used to be 8 kB/s, but that was the previous generation, and infrastructure may have improved since then.
This is the only rule that will gracefully handle all possible forms of Internet connectivity available in the modern era. You have to contend with some clients with 256 Mbit pipes (or bigger), and some who can barely choke out a few kB a second. Sometimes you have clients connected via, say, 4G-LTE, where throughput is fairly high but latency and packet loss are serious issues. Connectivity averages are less than useful; what you need to care about is the worst-acceptable-case connectivity for your target demographic. In other words, what's the shittiest connection they'll put up with before giving up on the Internet for a while? That is your target. You should aim to at least approach playability on bad connections even if you don't land there, because transient connection problems happen to the best of pipes, and resilience is important when they occur.
You can tailor your protocol implementation to compensate for some of these situations (usually at the expense of optimality in other circumstances) but in the end it's all down to minimalism. The less you send, the less can get lost, slowed down, or garbled.
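To see how quickly a budget like that gets used up, here's a rough back-of-the-envelope sketch; the 32 kB/s cap, the 20 Hz send rate, and the 28-byte UDP header are illustrative assumptions, not requirements:
[code]
#include <cstdio>

int main() {
    // Illustrative numbers only: a 32 kB/s cap, sent as 20 packets per second,
    // each paying a 28-byte IP+UDP header before any game payload.
    const double budgetBytesPerSecond = 32.0 * 1024.0;
    const int packetsPerSecond = 20;
    const int udpHeaderBytes = 28;

    const double bytesPerPacket = budgetBytesPerSecond / packetsPerSecond;
    const double payloadPerPacket = bytesPerPacket - udpHeaderBytes;

    std::printf("Budget per packet: %.0f bytes, of which %.0f bytes are usable payload\n",
                bytesPerPacket, payloadPerPacket);
    return 0;
}
[/code]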
Thank you for the thorough responses!
I've been measuring how much data the clients get on average per second, and it usually sticks around 220 bytes at the moment, give or take.
It's not a finished game (or anywhere near finished), though. It's a MOBA (like Dota 2, League of Legends, and so on), and right now all I have implemented are the players, who weren't moving or casting spells at the time, and the creeps, 15 per team per wave, which run down lanes and target and attack each other.
All that's really being sent during the test is messages for the health of creeps that have taken damage since the last message was sent, and messages for when creeps are changing targets.
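For context, those messages are conceptually shaped something like this (a simplified sketch just for illustration; the actual field names and sizes in my code aren't necessarily these):
[code]
#include <cstdint>
#include <vector>

// Simplified illustration of the per-update data, not an actual wire format.
struct CreepHealthDelta {
    uint16_t creepId;    // which creep took damage since the last update
    uint16_t newHealth;  // its health after that damage
};

struct CreepTargetChange {
    uint16_t creepId;     // which creep switched targets
    uint16_t newTargetId;
};

struct UpdateMessage {
    std::vector<CreepHealthDelta> healthDeltas;
    std::vector<CreepTargetChange> targetChanges;
};
[/code]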
On a side note, I'm actually not sure if this way of measuring it is even legitimate. It seems like I'm only counting the data I actually put into each message when I compute the averages, but I read this post, and the second paragraph seems to say that messages regularly carry extra bytes on top of that – if I understand correctly, 112 bytes for IPv4 or 132 bytes for IPv6. Does that apply to Lidgren messages?
The overhead of a single UDP datagram is 28 bytes, plus whatever link layer overhead (Ethernet, ATM, etc.)
The overhead of a single TCP packet (trying to fill the send window) is 40 bytes plus link layer.
Lidgren actually adds its own framing, too, if I understand it correctly. I don't know how much.
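To put those header sizes in perspective for small game messages (the 20-byte payload here is just an example):
[code]
#include <cstdio>

int main() {
    // Example only: a small 20-byte game message sent as its own UDP datagram.
    const int payloadBytes = 20;
    const int udpOverheadBytes = 28;  // IP + UDP headers, before link-layer framing

    const int wireBytes = payloadBytes + udpOverheadBytes;
    std::printf("%d payload bytes cost %d bytes on the wire: %.0f%% of it is header\n",
                payloadBytes, wireBytes, 100.0 * udpOverheadBytes / wireBytes);
    return 0;
}
[/code]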
When it comes to DotA-type games, I believe those are typically implemented as deterministic simulations (like Warcraft or Starcraft or Age of Empires) so the only data you need to send is the commands for each player for each command tick. This is a tiny amount of data.
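Here's a minimal sketch of what per-tick traffic can look like under that model; the types and field sizes are illustrative assumptions, not how any particular RTS/MOBA actually lays out its commands:
[code]
#include <cstdint>
#include <vector>

// Illustration only: in a deterministic lockstep model, peers exchange nothing but
// player commands stamped with the tick they execute on. Every client runs the same
// simulation, so unit positions, health, etc. never need to be sent.
enum class CommandType : uint8_t { Move, Attack, CastSpell, Stop };

struct PlayerCommand {
    uint8_t playerId;
    CommandType type;
    uint16_t targetId;  // target unit, if the command has one
    int16_t x, y;       // target position in world units, if the command has one
};

struct CommandTickPacket {
    uint32_t commandTick;                 // tick on which everyone executes these commands
    std::vector<PlayerCommand> commands;  // usually tiny; players issue few commands per tick
};
[/code]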
Sorry if this is kind of off topic, but I have a question. The overheads seem very large, and nearly always larger than the packets I send to or receive from clients.
As a client, should I collect all the data I want to send into a string and send it at the end of my frame? So far I've just been sending input changes and such right away. Same question for servers.
Or will the things I send get split into lots of packets anyway?
TCP may buffer some data you send and coalesce it into a bigger packet. However, most games turn this behavior off using TCP_NODELAY (turning off Nagle's algorithm).
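For reference, disabling Nagle is a single setsockopt() call on a connected TCP socket (POSIX sockets shown here; Winsock uses the same option):
[code]
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm so small writes go out immediately instead of
// being coalesced into larger segments. Returns true on success.
bool disableNagle(int sock) {
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == 0;
}
[/code]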
UDP will not buffer anything, but sends one datagram per call to sendto().
You absolutely want to buffer everything you want to send in the game. It is typical that games send data on a fixed schedule -- say, 20 times a second. All messages generated are queued, and put into a single packet and sent whenever that timer expires. This means that messages also need timing information -- which tick was this message generated at?
The actual tick rate used varies by game genre and preference.
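A minimal sketch of that queue-and-flush pattern, assuming a made-up Connection::sendPacket() that hands one datagram to whatever transport you use (Lidgren, raw UDP, etc.):
[code]
#include <cstdint>
#include <utility>
#include <vector>

struct Message {
    uint32_t tick;              // simulation tick this message was generated on
    std::vector<uint8_t> data;  // already-serialized message body
};

class Connection {
public:
    // Made-up transport call: sends exactly one packet/datagram.
    void sendPacket(const std::vector<uint8_t>& bytes) { /* hand bytes to the socket here */ (void)bytes; }
};

class MessageQueue {
public:
    void enqueue(uint32_t tick, std::vector<uint8_t> body) {
        pending_.push_back({tick, std::move(body)});
    }

    // Called on a fixed schedule, e.g. 20 times per second: everything queued since
    // the last flush goes out in one packet, so the per-datagram header cost is paid
    // once instead of once per message.
    void flush(Connection& conn) {
        if (pending_.empty()) return;
        std::vector<uint8_t> packet;
        for (const Message& m : pending_) {
            appendU32(packet, m.tick);
            appendU32(packet, static_cast<uint32_t>(m.data.size()));
            packet.insert(packet.end(), m.data.begin(), m.data.end());
        }
        conn.sendPacket(packet);
        pending_.clear();
    }

private:
    static void appendU32(std::vector<uint8_t>& out, uint32_t v) {
        out.push_back(static_cast<uint8_t>(v & 0xFF));
        out.push_back(static_cast<uint8_t>((v >> 8) & 0xFF));
        out.push_back(static_cast<uint8_t>((v >> 16) & 0xFF));
        out.push_back(static_cast<uint8_t>((v >> 24) & 0xFF));
    }

    std::vector<Message> pending_;
};
[/code]
The exact framing (tick stamps, length prefixes, and so on) is up to you; the point is that each flush produces one datagram, not one per message.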
Thank you for your response.
Tick speeds are not constant most of the time, and I could just lie to the server and claim I'm lagging, to make my actions happen earlier than they really did, even if it's only by milliseconds. I think I might get the idea, but I don't understand how to actually code it.
Tick speeds are not constant most of the time
My advice is to make them so :-) Make the simulation tick speed constant, and the networking tick speed constant. Rendering can still be free-running and can use interpolation to render faster than simulation.
Here's the canonical game loop for fixed tick speeds: http://www.mindcontrol.org/~hplus/graphics/game_loop.html
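The loop at that link boils down to roughly this (a sketch of the fixed-timestep/accumulator pattern; the stub functions and the 60 Hz tick rate are placeholders, not part of the linked article):
[code]
#include <chrono>

// Stand-ins for the game's own systems; real implementations go here.
static int remainingTicks = 600;
static void readInput() {}
static void simulateOneTick() { --remainingTicks; }  // advance state by exactly one fixed tick
static void render(double alpha) { (void)alpha; }    // alpha in [0,1): progress toward the next tick
static bool gameRunning() { return remainingTicks > 0; }

int main() {
    using clock = std::chrono::steady_clock;
    const double tickSeconds = 1.0 / 60.0;  // fixed simulation tick; 60 Hz is just an example

    auto previous = clock::now();
    double accumulator = 0.0;

    while (gameRunning()) {
        auto current = clock::now();
        accumulator += std::chrono::duration<double>(current - previous).count();
        previous = current;

        readInput();

        // Catch up to real time in fixed-size steps so the simulation stays deterministic.
        while (accumulator >= tickSeconds) {
            simulateOneTick();
            accumulator -= tickSeconds;
        }

        // Rendering free-runs and interpolates between the last tick and the next.
        render(accumulator / tickSeconds);
    }
    return 0;
}
[/code]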