
Plan for scaling a game server using MongoDB?

Started by October 23, 2017 07:45 AM
15 comments, last by hplus0603 7 years ago

I'm part of a small team which is adapting an old game to use a client/server architecture, so game logic/state can be more centrally controlled. The game server currently uses MongoDB, but it's still early enough to switch. I have a plan to avoid server performance problems if we need to scale to support a lot of users. Can you tell me if it seems reasonable? (We don't actually plan to do this stuff until it's needed, but we don't want to develop ourselves into a hole we can't get out of.)

The server is stateless (it responds to each message based on what's in the database), so we can have more than one server talking to the same DB. The server is hosted on Amazon Web Services, so the first scaling step will be to run multiple instances/VMs, each with its own copy of the server, all connecting to the same database. This assumes the server CPU will max out before the database does. A load balancer will spread requests across the servers.

The next step is to change the single database to a MongoDB replica set, so the many servers can talk to many databases. My understanding is that we can read from any member, but must write to the primary. This will require slight changes to the server, but in the real world, do we need to minimize writes or somehow batch them? What does this entail?
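For reference, my understanding is that read preference and write concern can be set directly in the MongoDB connection string, something like the following (the hostnames and replica set name here are placeholders, not our real setup):

```
mongodb://db1.example.com,db2.example.com,db3.example.com/?replicaSet=rs0&readPreference=secondaryPreferred&w=majority
```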

Hearing your feedback/knowledge/experience would be appreciated.

Your architecture is quite unusual so it doesn't share many characteristics with typical game servers, which are usually highly stateful and use the database sparingly. So I'm not sure how much help I can be.

Often, when using a DB like MongoDB, it's assumed that 'eventual consistency' is good enough - you can read from secondary servers to share the load but the assumption is that the information returned can be out of date, and that this is actually okay because a future read will be correct. This is fine for social media notifications, messengers, news sites, stock tickers, etc... but it's not necessarily fine for games which want to execute logic based on that data. So your game logic will determine how possible it is to scale out the database like this.
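To make that staleness concrete, here's a toy Python sketch — an in-memory model, not real MongoDB code — showing how a lagging secondary can hand back out-of-date data until replication catches up:

```python
# Toy model of primary/secondary replication lag (not real MongoDB code).
# The secondary applies the primary's write log with a delay, so reads
# from it can return stale values until replication catches up.

class ToyReplicaSet:
    def __init__(self):
        self.primary = {}      # authoritative copy
        self.secondary = {}    # lagging copy
        self.oplog = []        # writes not yet replicated

    def write(self, key, value):
        self.primary[key] = value
        self.oplog.append((key, value))   # replication happens later

    def read_secondary(self, key):
        return self.secondary.get(key)    # may be stale

    def replicate(self):
        for key, value in self.oplog:
            self.secondary[key] = value
        self.oplog.clear()

rs = ToyReplicaSet()
rs.write("gold", 100)
stale = rs.read_secondary("gold")   # None: write not yet replicated
rs.replicate()
fresh = rs.read_secondary("gold")   # 100: eventually consistent
```

If your game logic acts on `stale` as though it were current, you get exactly the class of bug described above.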

A more traditional game server that performs fewer database reads and writes will scale more effectively just by adding instances, even without adding extra databases, though that might require a very different server implementation from what you currently have. If you truly want to control the logic centrally, that is probably the route you want to go, rather than relying so heavily on the database.

1 hour ago, Kylotan said:

Your architecture is quite unusual so it doesn't share many characteristics with typical game servers, which are usually highly stateful and use the database sparingly. So I'm not sure how much help I can be.

Often, when using a DB like MongoDB, it's assumed that 'eventual consistency' is good enough - you can read from secondary servers to share the load but the assumption is that the information returned can be out of date, and that this is actually okay because a future read will be correct...

This is a mobile game, and while we don't know how all the popular mobile games work internally, it seems like a lot of them run this way. They aren't multiplayer so the client can have most of the game logic, but the players can interact in a very limited way and the server must manage player progress/stats (and may manage inventory). Think Puzzle and Dragons or Ingress. Are you at all familiar with that type of architecture (which offloads most of the logic to the client, but has the server keep an eye on things to make sure it's a-okay)? That's what we're going for.

It's mostly an issue of terminology - what you have is likely to be more like a traditional web service than a game server, even though it's obviously a server and it's obviously a game. :) But the key thing is that the scaling problem you face is more akin to problems encountered in e-commerce than in games, since you have little player-to-player interaction, but a (presumably) high dependency on database access.

It's not clear whether you are okay with stale data being returned to your game; if we assume you are not, I would recommend a very simple solution, given that your players only interact in a limited way: shard the player database by ID. Start out with two instances of your database and split players evenly across them, e.g. put all even-numbered players in database 1 and odd-numbered players in database 2. This splits the load fairly equally across the two database instances. As load increases, you can replicate data across to new databases and change your sharding criteria to split players across three or more instances. The database instances and web instances are entirely decoupled, so you can scale either one horizontally based on load.

This sort of approach is often called 'algorithmic sharding' since an algorithm decides which shard to use, rather than relying on a shared lookup server. In this case the simple even/odd system is the algorithm, but you would extend it to some sort of hashing or similar.
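As a sketch (Python, purely illustrative — the shard counts and hash choice are assumptions, not a recommendation), the even/odd scheme and its hash-based generalization look like:

```python
import hashlib

def shard_even_odd(player_id: int) -> int:
    """Two-shard scheme: even IDs -> shard 0, odd IDs -> shard 1."""
    return player_id % 2

def shard_hashed(player_id: int, num_shards: int) -> int:
    """Generalization: hash the ID so any shard count splits players
    roughly evenly. A stable hash (not Python's per-process randomized
    hash()) keeps the mapping consistent across server instances."""
    digest = hashlib.md5(str(player_id).encode()).hexdigest()
    return int(digest, 16) % num_shards

assert shard_even_odd(42) == 0   # even ID -> database 1 (shard 0)
assert shard_even_odd(7) == 1    # odd ID  -> database 2 (shard 1)
```

The key property is that every server instance computes the same shard for the same player without consulting any shared lookup service.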

However, if I was doing this I'd want to consider how well this works for whatever interactions you have that do require multiple players, and which might be in different databases. And I'd also consider using something other than MongoDB, depending on what database experience your team has.

Thank you @Kylotan, that's very helpful! We don't want to read stale data, but I believe we can make it less likely by using the appropriate MongoDB options. We don't have a database expert on the team, but our general experience is with NoSQL and MySQL. Is there any other choice that sticks out as an obvious candidate? (We're moving away from Parse.com because development, debugging, and performance are all too slow.)

Sharding is a great idea, either if one DB can't handle our writes, or if the scheme I first described turns out not to be fast enough and needs to be multiplied.

I don't want to start a database argument, but once you start using all the options to make MongoDB give you consistent and atomic results, you stop getting most of its speed benefits, at which point it's usually better to have chosen Postgres or MySQL instead. It's worth thinking carefully about this now, because it's going to be harder to change later. Postgres with a JSONB column is not a million miles away from Mongo in terms of what it stores and what you can query and I'd recommend at least investigating that route.

Perhaps I didn't make it 100% clear above, but be very aware that a MongoDB replica set is not there to help you with performance. Replica sets are primarily for availability, i.e. if one DB goes down, another can take over. You can separate primary and secondary databases to get speed benefits, writing to the primary and reading from secondaries, but the cost is that your reads do not necessarily reflect the latest writes, i.e. you get stale data. That is the trade-off you make with that architecture.


Oh, wow. That's a lot to think about (and read).

The reason I chose NoSQL over relational is that we may want to change schemas quickly (due to either inexperience or changing requirements). Also, omitting schemas is just nice, because game data can take so many shapes. From reading about JSONB, I gather it's far slower to query than Postgres used as a relational DB. Is it a common workflow to start with heavy use of JSONB, but eventually migrate a lot of that data to other columns or tables?

I don't want to slow development too much, but I really don't want to choose a tool that will turn out to be unsuitable and hard to replace.

Querying unstructured data is almost always going to be slower than querying structured relational data, so that's a problem you'd have with MongoDB or JSONB columns in Postgres. It's the price you pay for not committing to a schema. Usually when you use a document-oriented approach like this, you pull the whole document, amend it accordingly, and write the whole document back. As you can imagine, this doesn't scale very well for write-heavy situations. Thankfully, I don't think your situation is likely to be write-heavy.
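The whole-document read-modify-write cycle can be sketched in Python against an in-memory dict standing in for the document store (the `_version` field is an assumption I'm adding to illustrate a simple optimistic-concurrency guard, not something either database requires):

```python
import copy

# Stand-in for a document store: one document per player ID.
store = {
    "player:1": {"_version": 1, "gold": 100, "inventory": ["sword"]},
}

def update_document(doc_id, mutate):
    """Read the whole document, amend it in memory, write the whole
    thing back. The _version check is an optimistic-concurrency guard:
    if another writer got there first, loop and retry rather than
    silently overwriting their changes."""
    while True:
        doc = copy.deepcopy(store[doc_id])  # read the whole document
        expected = doc["_version"]
        mutate(doc)                         # amend it in memory
        doc["_version"] = expected + 1
        if store[doc_id]["_version"] == expected:  # nobody else wrote
            store[doc_id] = doc             # write the whole document back
            return doc

updated = update_document("player:1",
                          lambda d: d["inventory"].append("shield"))
```

Every update rewrites the entire document even to change one field, which is why the pattern gets expensive under write-heavy load.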

I don't know if it's a common workflow to start with a schemaless system and then migrate to a more rigorous relational format, but it does seem like a good idea. Postgres makes it practical to do that all in the same database, and even within the same table. MongoDB... not so much.

MongoDB is not great once you max out a single instance, because it doesn't scale across multiple hosts other than using read slaves.

Its backup and safety defaults are "fast" not "secure," so you may also have to plan for data loss, even with a good backup regimen, unless you re-configure it in a way that is safer, but much slower.

If you're expecting to listen in to the Mongo state change stream to distribute state to different clients, then re-think that; it doesn't scale at all past a few dozen simultaneous users (because all writes from N writers get amplified to each of N readers.)
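The amplification is quadratic: with N clients each writing and each also listening to the change stream, every write fans out to every listener. A quick back-of-envelope in Python (the rates are made-up illustrative numbers):

```python
def change_stream_deliveries(n_clients: int, writes_per_client: int) -> int:
    """Each of n_clients' writes is delivered to all n_clients
    listeners, so total message deliveries grow as N^2."""
    total_writes = n_clients * writes_per_client
    return total_writes * n_clients

# At 1 write/sec per client:
assert change_stream_deliveries(10, 1) == 100            # manageable
assert change_stream_deliveries(1000, 1) == 1_000_000    # not so much
```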

In the end, what is your actual usage pattern? Mongo can work. So can ScyllaDB. So can MySQL. It all depends on the particulars of what you want to do. Beware that schema-less, or semi-schema-ed, databases end up causing lots of headaches in migrations later down the line if you're successful. But then, that's a nice problem to have :-)

Separately, because you're going to run on Amazon, you might want to consider Amazon Aurora. It lets them deal with scale, and Aurora is allegedly much faster than MySQL for the same hardware, while still having all the safety that comes from that. There's also Amazon RDS, and the option of running a database on your own VM, or the option of using DynamoDB.

Btw, running a database on a single VM of your own may not be high performance, for a variety of reasons. You can make it better with local SSD support, but then you have the same "what if the instance crashes" problems as you do for a regular machine -- EBS is not good for database workloads.

Finally, I'd investigate the assumption that application server CPU will top out before your database tops out. That may very well be true, but there's not enough information in your application description for me to tell whether that's a reasonable assumption or not.


enum Bool { True, False, FileNotFound };

Wow, this has really turned into a rabbit hole! I've been reading a lot more than I expected to in the last couple days. What I understand is: I'm not going to be able to scale MongoDB for performance without losing data consistency. The only really safe scaling is sharding.

And Postgres can also store/manipulate similar JSONB documents--the API is different, but it seems like it can do the same things. Postgres has the potential to move to relational storage as designs solidify or optimization is needed. Plus, Postgres can run on Aurora, which as @hplus0603 mentioned, will probably be a performance boost and might simplify scaling.

Is there any additional benefit of Postgres, or disadvantage of MongoDB, other than the fact that I'm not using it for its strongest use case (massively replicated/available data)? In other words, why bring up Postgres and MySQL? Is it only because relational is faster than NoSQL, or are these technologies better in another way?

This topic is closed to new replies.
