
MMORPG networking Solutions

Started by May 31, 2015 07:58 AM
27 comments, last by rustin 9 years, 3 months ago

How can I make the server-side workload lower? What's the best suggestion, I mean?

That depends heavily on the specific details of the gameplay. You need the game designer and the network engineer to sit down together for a few days and work out what will and won't work, and what compromises will need to be made.

[edit] From your other thread, you say "The server side coding is mostly done as our networking programmer has his own networking framework which is able to do 2000+ CCU per server"

In that case, then get your networking programmer to ask some specific questions here, because the answer to "how to make the server workload lower" will depend on how your existing servers already work.

[/edit]

Well, 2000+ CCU per server, and 20-40 players within visible distance of each other.
I'm not sure about update rates yet; maybe 30 updates a second?

Let's just assume for the sake of some napkin math that you're using 64 bytes per player update.
2000 clients, sending their gamestate to the server at 30Hz:
= 15.36Kbps upload bandwidth at each client
= 30.72Mbps download bandwidth at the server

Plus, each client can see 40 other clients, so the server has to relay 2000*40 neighbouring client updates at 30Hz:
= 614.4Kbps download bandwidth at each client
= 1.2288Gbps upload bandwidth at the server.

For a total of 1.25952Gbps bandwidth on your server and 629.76Kbps on your client.
That's totally doable for the client end, as long as your min requirement is DSL/Cable/etc broadband internet.
...But finding a server with >1Gbps for a reasonable price will be very hard. You'll also be using over 12TB of data per day, which will likely push you into the realm of paying for excess data transfers.
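This napkin math can be sketched out in a few lines; the constants below are just the assumptions from the paragraphs above (64-byte updates, 30 Hz, 2000 clients, 40 visible neighbours), not figures from any real game:

```python
# Napkin math for a client-server MMO, using the assumed figures above:
# 64-byte updates, 30 Hz tick rate, 2000 clients, 40 visible neighbours.
UPDATE_BYTES = 64
TICK_HZ = 30
CLIENTS = 2000
VISIBLE = 40

bits_per_update = UPDATE_BYTES * 8

# Each client sends its own state to the server at the tick rate.
client_up_kbps = bits_per_update * TICK_HZ / 1000             # 15.36 Kbps
server_down_mbps = client_up_kbps * CLIENTS / 1000            # 30.72 Mbps

# The server relays each client's 40 visible neighbours back to it.
client_down_kbps = client_up_kbps * VISIBLE                   # 614.4 Kbps
server_up_gbps = client_down_kbps * CLIENTS / 1_000_000       # 1.2288 Gbps

server_total_gbps = server_up_gbps + server_down_mbps / 1000  # 1.25952 Gbps
data_per_day_tb = server_total_gbps / 8 * 86400 / 1000        # ~13.6 TB/day

print(client_up_kbps, server_down_mbps, client_down_kbps,
      server_up_gbps, server_total_gbps, round(data_per_day_tb, 1))
```

Change the four constants at the top to see how sensitive the totals are; halving the tick rate or the visible-player count halves the dominant server-upload term.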

If you go for P2P, each client has to send their gamestate to 40 peers, and receive gamestate from 40 peers.

= 614.4Kbps upload/download bandwidth at each client.

= 1.2288Mbps total bandwidth at each client.

That's completely feasible in places with good internet, but won't work on low-end residential DSL broadband any more (e.g. I only have 256Kbps upload bandwidth at home, but 5Mbps download) -- that's a decision that now impacts your business and marketing strategies, so you'll want to bring those people into the conversation now too.

P2P also may or may not be compatible with the gameplay design that you have in mind. For example, cheat protection is harder with P2P -- there are solutions like lock-step simulations or cryptographic validation by non-interested nodes, but now this requires you to bring your lead gameplay programmer into the discussion of how the game logic simulation will be architected. He needs to design the gameplay code in such a way that it will interact well with the networking environment.

You need to have an experienced network programmer working on this up-front (and ideally, also an experienced lead gameplay programmer), so they can design an architecture that's compatible with both your gameplay requirements and your wallet.

e.g. in your PM you asked about my friend who claims he can support 500K CCU off of one dedicated server.
His gameplay mechanics work well with sharding (this is where you arbitrarily split the universe into many sub-servers, where you can only see other players within your sub-server), which allows you to keep the number of visible players low in any situation. His game is entirely P2P, so the central server does almost nothing -- this would be a major problem in a game where people can cheat, but cheating is not a big problem in his game due to it being non-competitive. There is no combat in his game, so the per-client update data is very small. No combat also means that his game is very tolerant of packet loss, latency, arbitrary sharding, drop-outs, etc...

The network solution that you choose will dictate what kind of gameplay is possible.
Or likewise: the kind of gameplay that you want to make will dictate what kind of networking architectures are feasible. Specifics matter here, a lot.

Also, 2000+ CCU per server is impossible for what you're doing.

Nothing's impossible. Everything just takes time and money. ;)
e.g. in the above napkin math, if you reduce the update rate to 10Hz, update packet size to 32 bytes and the number of visible players to 20, then suddenly you're edging into the ballpark of what a single $100/mo dedicated server with a 30TB/mo limit and 100Mbps connection can almost handle.
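Worked through with the same bit/byte conventions as the earlier napkin math, those reduced figures come out like this:

```python
# Reduced parameters: 32-byte updates, 10 Hz, 2000 clients, 20 visible.
bits_per_update = 32 * 8
client_up_kbps = bits_per_update * 10 / 1000            # 2.56 Kbps
server_down_mbps = client_up_kbps * 2000 / 1000         # 5.12 Mbps
server_up_mbps = client_up_kbps * 20 * 2000 / 1000      # 102.4 Mbps
total_mbps = server_up_mbps + server_down_mbps          # 107.52 Mbps
data_per_month_tb = total_mbps / 8 * 86400 * 30 / 1e6   # ~34.8 TB/month
print(total_mbps, round(data_per_month_tb, 1))
```

Note the totals still sit slightly over a 100Mbps line and a 30TB/mo cap, which is why this is "almost" within reach rather than comfortably inside it.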

2000 CCU probably means you've got about 20k DAU, so as long as your average revenue per DAU per day is over 0.02c, then that's your (perhaps unrealistically optimistic) server costs covered. :D

a friend of mine told me that

You're looking at a server-sided multiplayer game with instancing.

Start with a multiplayer game with lockstep network determinism.

After you have that, add persistence.

so ?


2000+ CCU per server is impossible for what you're doing.


Where is this "impossibility" coming from? That sounds like a very reasonable number.

Going back to the original question: it doesn't matter whether you use existing networking middleware or not. The hard part of MMO networking is partial visibility (only sending updates for things near a client) and integrating the networking with your game simulation, neither of which is solved by those networking libraries.
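Partial visibility is usually built on some kind of spatial partitioning. A minimal sketch of grid-based interest management (all names here are illustrative, not from any particular library or engine):

```python
from collections import defaultdict

CELL = 100.0  # world-units per grid cell; tune to your visibility radius

def cell_of(pos):
    """Map a world position to its grid cell coordinates."""
    return (int(pos[0] // CELL), int(pos[1] // CELL))

def build_grid(players):
    """Bucket players by grid cell: {cell: [player_id, ...]}."""
    grid = defaultdict(list)
    for pid, pos in players.items():
        grid[cell_of(pos)].append(pid)
    return grid

def visible_to(pid, players, grid):
    """Players in the 3x3 block of cells around pid -- the only ones
    whose updates the server needs to relay to this client."""
    cx, cy = cell_of(players[pid])
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(p for p in grid[(cx + dx, cy + dy)] if p != pid)
    return out

players = {1: (10, 10), 2: (50, 80), 3: (500, 500)}
grid = build_grid(players)
print(visible_to(1, players, grid))  # [2] -- player 3 is too far away
```

The server rebuilds (or incrementally updates) the grid each tick and only relays updates between players whose cells overlap, which is what turns the all-pairs n^2 relay into something proportional to local player density.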

I still don't get it <.<

@Hodgman

You say that 12TB of data a day is costly. But why would it be less data if it was handled by multiple servers? In the end you have to handle the same amount of data regardless of how many servers you have, right?

Let's assume we want to have 100 players.

Let's also assume, like you said, 64 bytes per update and 30 updates per second.

If I get my math right, the server should receive (30*100*64)/1000 = 192 KB per second (about 1.5 Mbps).

If we now have 2 servers, each handling 50 players, each server only needs to download 96 KB per second. But all servers counted together will still receive 192 KB/s.

So why would it be less costly? Of course I must be missing out on something, but I don't know on what yet.

@conquestor3

The "n^2" is probably the problem, but I don't get why it even is like that. Let's assume (In terms of real life, not programming) I have one object of size X with cost Y. Shouldn't it be safe to assume that if I buy two objects of size X/2, the cost will still be Y?

Shouldn't it be the same with servers?

@Hodgman, great post!

So it's technically possible to have, for example, 10000 players on the same game with 4 different servers linked together?

Is the linking itself costly in terms of bandwidth?

Of course the cost per month would be very high..

While more advanced networking is out of my skill level, I think it might help to look at other real-world cases where networking is a problem. This video is pretty neat, and I think it's worth the watch for the short segment on the WoW networking. While their numbers are much larger than yours, they need to be pushing 100 Gbps just on WoW alone; whether that's per data center or overall was not clear. Each realm comes bundled with a few database servers and instance servers as well, but some networking information gathered by a 3rd party showed you never change connection from the world server. The connection is only lost when moving to an instance (another blade).

How many players you can have per server comes down to the server hardware, the type of game, and the bandwidth. For example, Hearthstone had a goal of 1,000 games per core at once, and ended up with 9-11,000 at once (see this at 21:00). It can get away with this because of the type of game, smart coding, and the bandwidth being extremely low. A quick search online shows something like 1 MB per hour or so, but I could only find the phone specs, so PC may be different.

WoW would have a very difficult time trying to match that just because of the bandwidth differences. The world server itself for WoW was not as powerful as I originally thought (looking at the auction they had in 2011), though it has never been stated if a world server was split by region or not. Each server was also upgraded as necessary to help stability.

As for the n^2 problem, it comes down to both the upload and download speed. If you have 40 players in the same location, the server is receiving and broadcasting updates for every person: each player sends his information and receives 39 other players' information. The more players within a certain range, the bigger the problem.

Even today, with Blizzard's infrastructure, this is a huge problem. I can recall a few times recently when a streamer would gather hundreds of players and crash the server, with the complimentary ban hammer falling shortly thereafter. :) Their CRZ (Cross Realm Zone) technology was meant to help mitigate rising costs and spread players out over the entire data center, but it still has some fundamental problems that Blizzard said they can't solve.


But if it is like that, why not just limit the player number entirely? You talk about instances: why not make multiple instances/channels of the "main" world, each with a certain player limit, thus never being able to crash the servers? Anyway, this is getting off-topic pretty quickly :D

As a preface, the scope of the OP's post is wrong. He was discussing this through PM and clarified further.

What he wants is a game where at most 6v6 matches will be played in an instanced area, plus a hub-style city where users can interact with some props but not with each other. Of course, this means that in the "open world" city, as long as interactions are kept low, players are spread out, and few update ticks are sent, the OP's goal isn't as hard as he led us to believe.

The implementation he described is a multiplayer RTS with persistent RPG mechanics as well.

a friend of mine told me that

You're looking at a server-sided multiplayer game with instancing.

Start with a multiplayer game with lockstep network determinism.

After you have that, add persistence.

so ?

I suggested this on the assumption that in the 6v6 wars it would be "clash of clans style" as you noted. Bandwidth/processing is a large concern for you as well, and with this network structure you can get away with sending more simple events. Seems like the natural candidate based on that.
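For context, lockstep means every peer runs the same deterministic simulation and only player inputs go over the wire, so the traffic stays tiny regardless of how many units exist. A toy sketch of the idea (purely illustrative; real lockstep also needs input delay/buffering and desync detection):

```python
def simulate(state, tick_inputs):
    """Deterministic step: every peer applies the SAME inputs in the
    SAME fixed order, so all simulations stay identical."""
    for player_id in sorted(tick_inputs):  # fixed ordering matters
        dx, dy = tick_inputs[player_id]
        x, y = state[player_id]
        state[player_id] = (x + dx, y + dy)
    return state

# Two peers start from the same state and receive the same input stream.
inputs_per_tick = [
    {1: (1, 0), 2: (0, 1)},  # tick 0: player 1 moves right, player 2 up
    {1: (1, 0), 2: (0, 0)},  # tick 1: player 1 moves right again
]

peer_a = {1: (0, 0), 2: (0, 0)}
peer_b = {1: (0, 0), 2: (0, 0)}
for tick_inputs in inputs_per_tick:
    simulate(peer_a, tick_inputs)
    simulate(peer_b, tick_inputs)

assert peer_a == peer_b  # lockstep: identical without ever syncing state
print(peer_a)            # {1: (2, 0), 2: (0, 1)}
```

The catch, as mentioned above, is that the gameplay code must be deterministic from day one (no unordered iteration, no uncontrolled floating-point divergence across machines), which is why the lead gameplay programmer needs to be in the room.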

Where is this "impossibility" coming from? That sounds like a very reasonable number.

My original assumption was he wanted an open world 3d mmorpg with high player to player interaction, on 1 server (As in paying for 1 physical machine), with decent response speeds.

I still don't get it <.<

If we now have 2 servers, each handling 50 players, each server only needs to download 96 KB per second. But all servers counted together will still receive 192 KB/s.

So why would it be less costly? Of course I must be missing out on something, but I don't know on what yet.

@conquestor3

The "n^2" is probably the problem, but I don't get why it even is like that. Let's assume (In terms of real life, not programming) I have one object of size X with cost Y. Shouldn't it be safe to assume that if I buy two objects of size X/2, the cost will still be Y?

Shouldn't it be the same with servers?

Let's say you have to relay 10 bytes per user per update (just to keep it simple). With n users in range of each other, the server has to relay n * (n-1) * 10 bytes:

2 users: 2 * 1 * 10 bytes = 20 bytes of relayed information at minimum.

User A and User B each know their own state (unless a correction is required), and each needs to know 10 bytes of the other user's data.

3 users: 3 * 2 * 10 bytes = 60 bytes of relayed information at minimum.

Users A, B, C know their own state, but each needs to know about 2 other users.

4 users: 4 * 3 * 10 bytes = 120 bytes of relayed information at minimum.

Users A, B, C, D know their own state, but each needs to know about 3 other users.

This pattern continues, and it gets worse with more data and more players. (Which you will have.)

This is why you need to spread players out/occlude unnecessary player info/actions.
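The pattern above is n*(n-1) relayed states per tick, which grows quadratically; a few lines make the blow-up obvious (10-byte updates as in the example):

```python
UPDATE_BYTES = 10

def relayed_bytes(n_players):
    # Each of n players must receive the state of the (n-1) others.
    return n_players * (n_players - 1) * UPDATE_BYTES

for n in (2, 3, 4, 10, 40, 100):
    print(n, relayed_bytes(n))
# 2 -> 20, 3 -> 60, 4 -> 120 ... by 100 co-located players you are
# already relaying 99,000 bytes per tick: doubling the player count
# roughly quadruples the load.
```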

@Hodgman, great post!

So it's technically possible to have, for example, 10000 players on the same game with 4 different servers linked together?

Is the linking itself costly in terms of bandwidth?

Of course the cost per month would be very high..

Lawliet from Empires? Hi dude. Most data centers don't charge for in-center traffic, so if you have a full cluster, the bandwidth is only charged when it enters or leaves the center.

Even today, with Blizzard's infrastructure, this is a huge problem. I can recall a few times recently when a streamer would gather hundreds of players and crash the server, with the complimentary ban hammer falling shortly thereafter. :) Their CRZ (Cross Realm Zone) technology was meant to help mitigate rising costs and spread players out over the entire data center, but it still has some fundamental problems that Blizzard said they can't solve.

In EVE online, you have to schedule large battles in advance with a GM so they can reserve a large number of cores for the actual battle.

Oh, I think I get it. I was assuming the whole time that Player A on Server A also knows of Player B's data on Server B, hence my questions about why it is an advantage to have multiple servers. If the players on different servers don't know anything about each other, it makes perfect sense, of course.

(But what would you do if you wanted those, let's say 500, players to be able to know about each other's data? Then you'd just need an extremely powerful server, right?)

If you want 500 players to talk to each other with any reasonable amount of updates at all times, you're going to have a lot of angry bandwidth-capped Australians sending you snail mail.

This topic is closed to new replies.
