
20Hz Server Performance Dilemma

Started by June 25, 2016 03:02 AM
6 comments, last by WombatTurkey 8 years, 4 months ago

I finally found the perfect tick rate for our game server: 20Hz. [Example Gif] Entity interpolation looks so smooth, but I am curious/worried about a few things.

Right now, the way I "kind of" validate the X, Y values being sent is by checking the difference between a player's current position and their intended position. (They are given the server's x, y value when entering the map.) So, from the get-go, they can only move so far/so fast; otherwise they get re-synced, or kicked.
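
For what it's worth, here is a minimal sketch of the kind of distance check I mean, in TypeScript; the names, tick length, and speed value are just illustrative, not my actual code:

[code]
// Per-tick movement sanity check (hypothetical names and values).
interface Pos { x: number; y: number; }

const TICK_MS = 50;      // 20Hz -> one update every 50 ms
const MAX_SPEED = 200;   // assumed max speed in pixels per second

// True if the reported position is reachable from the last trusted one.
function isMoveValid(last: Pos, reported: Pos, elapsedMs: number = TICK_MS): boolean {
  const dist = Math.hypot(reported.x - last.x, reported.y - last.y);
  const maxDist = MAX_SPEED * (elapsedMs / 1000);
  return dist <= maxDist * 1.1; // small tolerance for timing jitter
}

// If the check fails, re-sync the client to the last trusted position (or kick).
[/code]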

I have client-side physics as well. So, for example, when a player moves towards a block, he will either not send any position (the player let go of the key) or, if colliding, send the position the character is in. This works great because then I don't have to run a physics server.

The problem is, this requires the server to receive data at a rate of 20Hz instead of just the "up, left, right, down" keys. The trade-off is that I don't need to run a movement timer at 20Hz on the server. (Sure, the occasional naughty player will send weird x, y positions to move their character 20 more pixels so they can get inside a collision area, but that's not really a big deal to me right now.)

With all this said:

  • A) I could run a basic fixed timer on the server at 20Hz, only receive the input from the client, and then send the new x, y values to all the other players (see the sketch after this list)
  • B) Or, do what I am doing now and send x, y values at 20Hz only while a player is moving, which notifies all other players in that game
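
To make option A concrete, here is a rough sketch of what that server-side fixed timer might look like; the speed, tick length, and data shapes are placeholders, not anything from my actual code:

[code]
// Option A sketch: server keeps authoritative positions, ticks at 20Hz,
// applies the latest held-key state, and broadcasts the result.
interface Keys { up: boolean; down: boolean; left: boolean; right: boolean; }
interface Player { x: number; y: number; keys: Keys; }

const TICK_MS = 50;   // 20Hz
const SPEED = 150;    // assumed pixels per second

const players = new Map<string, Player>();

// Clients only report which keys they hold; they never send positions.
function onClientInput(id: string, keys: Keys): void {
  const p = players.get(id);
  if (p) p.keys = keys;
}

setInterval(() => {
  const step = SPEED * (TICK_MS / 1000);
  for (const p of players.values()) {
    if (p.keys.left)  p.x -= step;
    if (p.keys.right) p.x += step;
    if (p.keys.up)    p.y -= step;
    if (p.keys.down)  p.y += step;
    // ...collision checks would go here, then broadcast {x, y} to the other players
  }
}, TICK_MS);
[/code]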

I feel like option B is ideal because I don't have to check all the collision areas in the map; I let the client-side physics do that work and send the appropriate X, Y values. The downside is that a lot more data needs to be sent to the server. But you could also argue that if a player is spamming the W, A, S, D keys, that's a lot of packets being sent too... so if the player is actually moving around like a wild billy goat, the 20Hz of data being sent could arguably be less.

Option B also has the downside of less responsive player movement, since it's being interpolated. For example, strafing left and right really fast will not necessarily look as fast as what you see on the client. I believe that could be fixed by increasing the 20Hz?

Also, option A's responsiveness would be about the same as B's, since the server is sending data at 20Hz as well. I'm just not sure if I can get away with the client sending data at 20Hz; I've read that 30Hz or even 60Hz is common. But you have to remember, I'm using nodejs :rolleyes: Anyways, I just laid out a lot of stuff, so please critique as necessary, thanks!

A game is typically a stream of continuous updates. Even if a player is just holding down a key, or not holding down a key, you should keep sending that information to the server at your network tick rate.
If you use UDP, this will "correct" the server if some packet is lost. (Also, it's highly likely that the next packet will have the same state as the previous packet, so a loss may not cause any de-sync.)
If you use TCP, this is still useful, because it lets you figure out in real time when the connection goes bad (through timing). That said, I really don't recommend TCP for action games where movement matters.
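
As a concrete illustration of the "keep sending the full state every tick" idea, here is a small client-side sketch using Node's dgram module; the port, host, and payload format are placeholders, not a real protocol:

[code]
// Client keeps sending the complete current input state every network tick,
// so a lost UDP datagram is corrected by the next one.
import { createSocket } from "dgram";

const socket = createSocket("udp4");
const SERVER_PORT = 40000;        // hypothetical
const SERVER_HOST = "127.0.0.1";  // hypothetical
const NET_TICK_MS = 50;           // 20Hz network rate

// Current held-key state; updated by the input layer elsewhere.
const input = { up: false, down: false, left: false, right: false, seq: 0 };

setInterval(() => {
  input.seq++;
  // Send the whole state (not just changes) every tick.
  socket.send(Buffer.from(JSON.stringify(input)), SERVER_PORT, SERVER_HOST);
}, NET_TICK_MS);
[/code]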

Separately, the pattern that works best for almost all games with real-time movement is the [url=http://www.mindcontrol.org/~hplus/graphics/game_loop.html]canonical game loop[/url], based on a fixed simulation rate, a possibly variable rendering rate, and a possibly fixed networking rate.
The 20 Hz you've selected sounds like a fine network rate, but may be too long a time step for good physical simulation.
It's totally OK to send more than one command in the same network packet. If you run physics at 60 Hz, you will have three input commands per network packet, and they will be fed into the server simulation at three successive time steps after being received.
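
A minimal sketch of that bundling, assuming 60 Hz simulation and a 20 Hz network rate (the data shapes and function names are made up):

[code]
// Bundle fixed-rate input steps into a lower network rate:
// 60 Hz simulation / 20 Hz network => 3 input commands per network packet.
interface InputCmd { tick: number; up: boolean; down: boolean; left: boolean; right: boolean; }

const SIM_HZ = 60;
const NET_HZ = 20;
const CMDS_PER_PACKET = SIM_HZ / NET_HZ; // 3

let simTick = 0;
const pending: InputCmd[] = [];

// Called 60 times per second by the fixed-step simulation loop.
function onSimStep(current: Omit<InputCmd, "tick">): void {
  pending.push({ tick: simTick++, ...current });
}

// Called 20 times per second by the network loop; ships the last few steps together.
function flushToServer(send: (packet: string) => void): void {
  const batch = pending.splice(0, CMDS_PER_PACKET);
  if (batch.length > 0) {
    send(JSON.stringify(batch)); // server applies each command at successive sim ticks
  }
}
[/code]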

There's also the question of whether you want to run simulation on the server, or just "verify movement." If you don't run simulation, a player will be able to "tunnel" through thin surfaces, such as walls and doors, because crossing them isn't a lot of movement in a single tick.
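
One cheap way to catch that without a full physics server, sketched here under the assumption that you have some kind of collision-map lookup (the isBlocked stub below is hypothetical), is to subdivide each reported move and test the points along the path:

[code]
// Detect "tunneling": even if the endpoint is clear, the path may cross a wall.
interface Pos { x: number; y: number; }

// Stub: replace with your actual collision-map query.
const isBlocked = (_p: Pos): boolean => false;

function moveTunnels(from: Pos, to: Pos, stepSize = 4): boolean {
  const dx = to.x - from.x;
  const dy = to.y - from.y;
  const steps = Math.max(1, Math.ceil(Math.hypot(dx, dy) / stepSize));
  for (let i = 1; i <= steps; i++) {
    const p = { x: from.x + (dx * i) / steps, y: from.y + (dy * i) / steps };
    if (isBlocked(p)) return true;
  }
  return false;
}
[/code]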
enum Bool { True, False, FileNotFound };


Great! I was thinking of doing UDP. I am using electron, so I'm not limited to the browser's APIs and can use node to connect. I did some math (which I'm not particularly good at), and I was estimating around 240 messages per second for every 6 players. Since my game is instance-based, it's obviously limited to 6 players per game. Whether this even helps with performance in the end, I have no idea, as the server will still spit out a ton of data (relatively speaking :P). (This also assumes they are all actively moving, and counts bi-directional communication [server receiving and sending total].)

Correct me if I am wrong, as I am horrible at math, but this is the formula I used:

6 players online = 20 * 6 = 120 * 2 (sending and receiving) = 240 Messages / s

100 players online = 20 * 100 = 2000 * 2 (sending and receiving) = 4000 Messages / s
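
Just so I can plug in other numbers later, here's the same estimate as a tiny snippet (it only mirrors the formula above; it doesn't model anything else):

[code]
// Back-of-the-envelope: tick rate * players * 2 (sending and receiving).
const estimateMsgsPerSec = (players: number, tickHz = 20): number => tickHz * players * 2;

console.log(estimateMsgsPerSec(6));    // 240
console.log(estimateMsgsPerSec(100));  // 4000
[/code]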

I changed my architecture since we last talked. I am running 1 main server that uses nginx as a custom load balancer in front of 3-4 node instances. That server then sends players off to $5.00 VPSs that act as game instances. They communicate through Redis to let the central server know certain things, etc. Since node utilizes 1 core and is single-threaded, I heard about this idea from the dev who runs wilds.io and figured it would be perfect. I did some testing on these cheap $5 boxes with the ws library (perMessageDeflate: false); one of them has these specs:


model name      : Intel Xeon E312xx (Sandy Bridge)
cpu MHz         : 2394.472

Except, I'm only allotted 1 "virtual core", but, with that said:

I got to around 10k messages per second in and out before nodejs's event loop started to poop. I was sending around 19 bytes per message in/out. The wilds.io dev said he gets around 100-140, and up to 200, active players per node instance. His data stream is probably different and other factors obviously come into play here, but I figured it's a good ballpark estimate.
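
For reference, the kind of ws setup I was testing with looks roughly like this (the port and relay logic are just for illustration; the real server does more):

[code]
// Minimal ws relay with compression disabled, as in the test above.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080, perMessageDeflate: false });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Relay each small position update to everyone else in the same instance.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data);
      }
    }
  });
});
[/code]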

So I'm thinking, in the end, 100-200 players per node instance at a 20Hz rate is what node is capable of off a $5 box. I'm curious whether 4000 msgs/s is a lot in this area, and what I'd be capable of if I switched to, say, a C or C++ TCP server? I assume far more than 4k msgs/s? I feel like nodejs is undermining me.

Edit: I looked at your canonical game loop page. I'm most likely going that route once I figure all this out first; I'm what you call a little slow ^_^

The messages themselves are not a lot.
However, $5 VPS slices are generally run on highly oversubscribed hosts, and generally have some pretty bad scheduling jitter.
Also, you will typically find them on ISPs that may promise you several terabytes of transfer per month for free, but the actual achievable throughput might be something like 100 KB per second, which won't actually let you reach those amounts.

Speaking of Node, it's single-threaded, so if you host on a multi-core box, you'll want to run multiple instances in multiple separate processes. This means each of them needs to listen on a different port.
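
A bare-bones sketch of one way to launch those per-core processes, each on its own port (the file name, base port, and env-var convention are assumptions, not anything standard):

[code]
// One game-instance process per core, each listening on its own port.
import { fork } from "child_process";
import { cpus } from "os";

const BASE_PORT = 9000; // hypothetical

for (let i = 0; i < cpus().length; i++) {
  fork("./gameInstance.js", [], {
    env: { ...process.env, PORT: String(BASE_PORT + i) },
  });
}

// Inside gameInstance.js, the server would then listen on Number(process.env.PORT).
[/code]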

Finally, if you use Nginx for load balancing, you're kind of stuck with HTTP; you can't do UDP over that. Also, if your $5 VPS provider doesn't guarantee that the load balancer and all game instances are on the same subnet, you may get significant additional latency from doing that.
enum Bool { True, False, FileNotFound };


Yeah, good point. Our central server (which is basically the LB) will run around 3-4 instances. Then, once players find a game to join or create, I load them onto a separate $5.00 VPS for game processing. They stay connected to the central server too, so private messages, log-in/out notifications, etc. will go over TCP (WebSockets).

Then, for the $5 boxes they connect to, I can use UDP for player movement. Since electron gives me access to node's API, it will be fairly easy there.

I'm just worried I would need two network streams, UDP and TCP? Because when a player drops an item, or anything needs 100% packet reliability, I'd rather go through TCP. But this worries me, because that would be even more overhead for those 5 dollar VPSs :P

Maybe just ditch the UDP for now, and see how TCP goes?

Definitely right about the oversubscribed nodes. So I'm thinking even fewer players, probably around 100 per instance to be honest. Although I could just get a nice dedicated server and run game instances on different ports, and have players connect to those instances instead of separate physical servers. That would probably be ideal for performance? Quite a bit more expensive though :(

Edit: I went ahead and created this: http://www.html5gamedevs.com/topic/23427-nodejs-networking-500-vpss-vs-1-dedicated/ I personally think the mini $5.00 VPS way is ideal here, especially because of how nodejs works, but yeah, still contemplating.

Maybe just ditch the UDP for now, and see how TCP goes?

If you are planning to use only TCP, here is my two cents.

Looking at your game gif, I have doubts whether that kind of gameplay could work well with TCP alone. The reason is that you have rather quick arrow-key movement, including the ability to shoot towards enemies, similar to games like Counter-Strike, which calls for UDP. (I haven't seen any TCP or WebSocket implementation of an FPS game in the browser except Google's Quake 2 port, and it lagged all over the place.) I previously tried similar RPG mechanics over WebSockets and learned my lesson pretty quickly: it works decently as long as you are testing against localhost, but the moment you switch to the real-world internet or a wireless connection, you experience lag and warping, because packet loss will come sooner or later.

At this point, I see a lot of people saying TCP can't be used for online games at all, but that is not necessarily true. You can control the amount of tolerated latency by altering your game mechanics to work within the limits of TCP; if you don't want to do that, just use UDP from the beginning.

For instance, for a TCP implementation you could replace quick arrow-key movement with mouse movement (point and click), and get rid of skills that use "free shooting", where bullets can fly in any direction as in FPS games and the server then needs smarter algorithms to decide whether they hit enemies or not. I would also look into the design of games like WoW and BrowserQuest, which uses WebSockets (TCP). I think there is a clear reason why TCP works for them: point and click to move, point and click to select a target to attack, and then the bullets just follow the target no matter where he moves, etc.
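
To illustrate the point-and-click idea: the client only sends a destination, and the server walks the character toward it each tick, which tolerates TCP's occasional stalls much better than per-key movement. A rough sketch (the speed and tick length are assumptions):

[code]
// Server-side "walk toward the clicked destination" step, run once per tick.
interface Vec { x: number; y: number; }

const SPEED = 150;    // assumed pixels per second
const TICK_MS = 50;   // 20Hz

function stepToward(pos: Vec, dest: Vec): Vec {
  const dx = dest.x - pos.x;
  const dy = dest.y - pos.y;
  const dist = Math.hypot(dx, dy);
  const maxStep = SPEED * (TICK_MS / 1000);
  if (dist <= maxStep) return { ...dest }; // arrived this tick
  return { x: pos.x + (dx / dist) * maxStep, y: pos.y + (dy / dist) * maxStep };
}
[/code]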

Nevertheless, I do like the idea of using electron in the future, and I have plans to use it as well. But if you want your game to be as accessible as possible, using TCP from the beginning could work for players on both browser and desktop. Interestingly, you could also make two implementations: TCP for browser clients and UDP for desktop. Then you could tell players to "download the desktop client for better performance" or something. I also recall that RuneScape allowed playing through the browser or via a desktop client; dunno if they are doing exactly this.

You don't need to use TCP just because "a request cannot be dropped."
Re-sending the request over UDP until you receive acknowledgement can be equally effective.
Just make sure that, if you stop receiving anything back from the server, you stop trying after a little bit, else you'll end up with a system that makes congestion worse once it happens.
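
A sketch of that resend-until-acked pattern over UDP, with a bounded retry count so a dead link doesn't keep pumping packets; the port, message framing, and ack format here are all assumptions, not a real protocol:

[code]
// Reliable-over-UDP sketch: resend until the server acks, then stop; give up
// after MAX_TRIES so a bad connection doesn't make congestion worse.
import { createSocket } from "dgram";

const socket = createSocket("udp4");
const SERVER_PORT = 40000;        // hypothetical
const SERVER_HOST = "127.0.0.1";  // hypothetical
const RESEND_MS = 200;
const MAX_TRIES = 10;

const pendingAcks = new Map<number, NodeJS.Timeout>();
let nextId = 0;

function sendReliable(msg: object): void {
  const id = nextId++;
  const payload = Buffer.from(JSON.stringify({ id, ...msg }));
  let tries = 0;

  const attempt = () => {
    if (tries++ >= MAX_TRIES) {      // stop trying; treat the connection as bad
      pendingAcks.delete(id);
      return;
    }
    socket.send(payload, SERVER_PORT, SERVER_HOST);
    pendingAcks.set(id, setTimeout(attempt, RESEND_MS));
  };
  attempt();
}

// The server is assumed to echo { ack: id } for each reliable message it receives.
socket.on("message", (data) => {
  const { ack } = JSON.parse(data.toString());
  const timer = pendingAcks.get(ack);
  if (timer !== undefined) {
    clearTimeout(timer);
    pendingAcks.delete(ack);
  }
});
[/code]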

I personally think the mini $5.00 VPS way is ideal here.


If your game depends on low latency, don't say I didn't warn you!
enum Bool { True, False, FileNotFound };


I think I'll just go with TCP for now and test UDP on this OVH VPS before it expires. I mean, for movement alone, I've heard UDP is far better from every knowledgeable dev like yourself, so I am really intrigued... It's just that my lack of networking knowledge might create that "congestion" :P -- and in the end, I'd probably do more harm than good.

Thanks for all your help @hplus0603. I bookmarked your site as well. So much information overload O_O
