
Best Client/server Architecture For A Mobile Management Game?

Started by July 24, 2016 07:29 PM
16 comments, last by user123456789 8 years, 3 months ago

For the past year I have worked at a small company that develops a traditional browser-based strategy game, along the lines of Travian or Ogame, but with a more interactive approach like Tribal Wars 2.

The game runs on a plain PHP server without frameworks and a simple MySQL database, and all of the game happens in a single static page that is updated through AJAX calls and has a map built in pixi.js. Automatic updates are delivered to the client by polling the server, which then queries some purpose-built database tables for changes.

While this approach is solid and works, it has two big problems:

  • Having a mobile app is increasingly important and there are not enough resources for two separate codebases. An app that is simply a wrapped webview is not a solution either, because the performance of a really complex page with a giant WebGL map is, while usable, really subpar;
  • Polling the server for changes creates a lot of programming challenges that make some simple tasks really complicated, and leads to a lot of convoluted code if we don't want to hurt game performance, as we are not going to make dozens of database queries every 5 seconds.

I want to start developing a game idea I have that is basically in the same genre and which is going to be, at least initially, mobile only. The real problem is that after reading a lot on the internet, I am confused about what a good client/server architecture would be for me to start prototyping in a way that avoids the problems mentioned above.

Basically, above all, I want the server to be able to know which page/screen/state is each client looking at, and be able to send them messages when another client changes something on that specific screen. It would also be nice if the solution is something lightweight on the server side to be able to scale a little.

On the client side I was thinking about Unity because it is cross-platform, because of the ecosystem around it (ads, analytics, a lot of support and answers on the internet), and because I have previous development experience with it.

Server side is the real question.

  • Simple HTTP calls will not work, so PHP is out of the equation.
  • I have thought about using node.js with socket.io to use WebSockets, solving the polling problem. Is this a good idea? Would it be better to store the game state in a relational or NoSQL database in this case? Would this work on unstable mobile connections?
  • Lots of people seem to use C# and raw sockets with Unity. Would this be overkill in this situation? Taking this approach, how would the data be stored? Would it be feasible with a Linux server or would I need a Windows server? Would this work on unstable mobile connections?
  • Don't know, I'm open to suggestions.

tl;dr: I want to make a mobile management game in Unity but am confused about what to choose for the server-side architecture, considering that I want the server to be able to send a message to the client without the client asking for it. Is there anything I should take into account?

Sorry for the broad question and thanks for the help.

So, first, a dozen queries every 5 seconds isn't all that much. You can easily do 1000 parallel players on a single MySQL instance that way, as long as you have SSDs and/or enough RAM for caching/indexing.
Once you run out of that, you can presumably horizontally shard the game.

Second, if the game is turn based, you can also look at in-RAM storage, like Redis, or even memcached, to get much faster querying. The drawback is that you have to denormalize your data; ideally to the point where all state for a particular game instance is in a single data blob.
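To make the single-blob idea concrete, here's a minimal sketch in Python. A plain dict stands in for the Redis/memcached connection (in production you'd use e.g. redis-py's set/get instead); the key naming and game-state shape are made up for illustration:

```python
import json

# Stand-in for a Redis/memcached connection; in production this would be,
# e.g., redis.Redis() with store.set(key, blob) / store.get(key).
store = {}

def save_game(game_id, state):
    # Denormalize: the whole game instance is one JSON blob under one key.
    store["game:%d" % game_id] = json.dumps(state)

def load_game(game_id):
    blob = store.get("game:%d" % game_id)
    return json.loads(blob) if blob is not None else None

# A chess-like game: everything needed to resume lives in one blob.
save_game(42, {"moves": ["e2e4", "e7e5"], "to_move": "white"})
state = load_game(42)
```

Because each game is one key, sharding later is just a matter of hashing the key across storage nodes.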

Third, why do you say HTTP won't work for the new Unity based game? Unity has the WWW and UnityWebRequest classes, so you can reasonably easily make it work. Especially if you serve on node.js or a similar server that makes it easy to "pend" incoming HTTP calls until there is actually data, or it times out. (Long polling)
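The "pend until data or timeout" core of long polling can be sketched like this (Python/asyncio here; the per-client queue and the simulated push are assumptions for illustration, not a full HTTP server):

```python
import asyncio

async def long_poll(queue, timeout):
    """Pend an incoming HTTP request until a message arrives or it times out."""
    try:
        msg = await asyncio.wait_for(queue.get(), timeout)
        return {"status": 200, "body": msg}   # data arrived: answer right away
    except asyncio.TimeoutError:
        return {"status": 204, "body": None}  # nothing happened: client re-polls

async def demo():
    q = asyncio.Queue()
    # Simulate the game pushing an event shortly after the client polled.
    asyncio.get_running_loop().call_later(0.05, q.put_nowait, {"price": 17})
    return await long_poll(q, timeout=1.0)

response = asyncio.run(demo())
```

The same handler serves both cases: if the queue already has data it returns immediately, otherwise the request just sits parked until data shows up.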

Fourth, just embedding a webview may not be fast enough, but there might be other options. Anything from "compile javascript to Objective C" to "make your core website responsive and just install a shortcut used by the system browser on mobile." You may have pushed far enough along this path that you can tell for sure this won't work; or not; it's hard to tell from the data you gave.

Fifth, and now we get to the question you're asking, assuming the previous bits are already dead ends:

When building games with "current" game state, you don't want to store "current" game state in a database at all; you want to store it in RAM.
If the state is trivial to marshal/de-marshal (as it would be for a game of chess, or poker, say,) then you can store it in network-attached RAM, such as memcached or Redis. Each time someone wants to mutate the state, they do a transaction on the state by slurping, mutating, and writing back. This is trivial to shard horizontally if you get very successful, both the front end servers, and the back-end storage.
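The slurp-mutate-write-back transaction can be sketched with optimistic versioning, in the style of memcached's gets/cas (a plain dict of (version, value) pairs stands in for the network-attached store here):

```python
# Versioned in-RAM store standing in for memcached/Redis.
store = {}

def gets(key):
    # Return (version, value); missing keys read as version 0, no value.
    return store.get(key, (0, None))

def cas(key, version, value):
    # Write back only if nobody else wrote in between; otherwise fail.
    cur_version, _ = store.get(key, (0, None))
    if cur_version != version:
        return False
    store[key] = (version + 1, value)
    return True

def transact(key, mutate):
    # Slurp, mutate, write back; retry on conflict.
    while True:
        version, state = gets(key)
        if cas(key, version, mutate(state)):
            return

transact("game:7", lambda s: {"moves": []} if s is None else s)
transact("game:7", lambda s: {**s, "moves": s["moves"] + ["e2e4"]})
```

Two front-end servers racing on the same game simply cause one of them to loop once more, which is what makes this layout easy to shard horizontally.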
Once a game has an outcome, user information should be updated in a persistent database of some sort; similar for if you have consumables in the game.

If the rule set is more complex, or the simulation data is more complex, or you iterate the simulation in real time (think: physically simulated world,) then you don't want to demarshal/re-marshal for each simulation.
At that point, you inflate the game world/level into RAM in a game server, and route all player connections to the correct server process. The easiest way to do this is have users send their traffic/connect to a gateway, which knows where each game instance runs on the back end (again, assuming you're going to be successful and need to shard the game hosts.)
The hosts can periodically checkpoint state back to databases; they can also delegate to databases for very important stuff like player trade, commerce, etc, but most game updates happen in RAM, and if the server instance crashes, you lose whatever changes happened since the last save/checkpoint.
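The gateway's routing job can be as simple as a deterministic hash from game instance to host process (the host list below is hypothetical):

```python
import hashlib

# Hypothetical back-end fleet; the gateway only needs this routing table.
HOSTS = ["10.0.0.1:4000", "10.0.0.2:4000", "10.0.0.3:4000"]

def host_for(game_id):
    """Deterministically map a game instance to the host process that owns it."""
    digest = hashlib.sha256(str(game_id).encode()).hexdigest()
    return HOSTS[int(digest, 16) % len(HOSTS)]

# Every player in the same game lands on the same host, so that host can
# keep the inflated world in RAM and fan out updates to its connections.
```

A real deployment would look the routing table up from a directory service so instances can move, but the shape is the same.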

You can implement this on top of raw TCP sockets, or on top of websockets, or on top of HTTP long-poll, and it will work approximately the same (with some increased latency for each step to the right along that chain.)
Note that you can still implement a "message queue" of messages from server to client over HTTP. The client will simply make sure to always have a HTTP request outstanding to the server. When it comes in to the server, if the queue is empty, the server pends the request (doesn't send a response) until there is data; if there is data, the server immediately removes/returns it.
This doesn't mean the clients are "asking for it" -- it just means that you build an event transport on top of a double-buffered polled underlying network protocol.
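On the client side, that "always one request outstanding" transport is just a loop; in this sketch the fetch function simulates a server that sometimes returns events and sometimes comes back empty after a timeout:

```python
import asyncio

async def poll_loop(fetch, handle, rounds):
    # The client always keeps exactly one request outstanding, so server
    # pushes look instantaneous to the app layer above.
    for _ in range(rounds):
        events = await fetch()   # pends on the server until data or timeout
        for event in events:
            handle(event)        # deliver to the UI / game state
        # the loop immediately re-issues the request ("double buffering")

async def demo():
    outbox = [["price:17"], [], ["level:2"]]  # fake server's reply per poll
    received = []
    async def fetch():
        return outbox.pop(0)
    await poll_loop(fetch, received.append, rounds=3)
    return received

received = asyncio.run(demo())
```

The empty reply in the middle is the timed-out poll; the app layer never sees it, it just sees a stream of events.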

Which option is the best? Depends on where you want to run clients, what client libraries you want to use, what server libraries you want to use, and how low latency your game really needs.

Finally, management games typically do NOT want to keep an instance per player running. Thus, they aren't really able to push messages to clients, except perhaps if you have a queue of alarms/timers where something happens on behalf of the player at a certain point in time.
Specifically, if the player starts action X, which will take 6 minutes, you don't actually keep that around for six minutes in a game server; instead you store "this thing will be done at time Y" and use the client to draw the progress bar. When the user asks for "what's the current state," you can wind time forward from the last stored state to the current time, return that, and store it back in the database. The wind-forward will be super fast for any typical management game.
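A sketch of the wind-forward idea (the stored row layout and the "resolve a finished action" rule are invented for illustration):

```python
def wind_forward(stored, now):
    """Advance the stored state to `now` by resolving any finished actions."""
    state = dict(stored["state"])
    pending = []
    for action in stored["actions"]:
        if action["done_at"] <= now:
            # Action completed while nobody was looking: apply its effect now.
            state[action["what"]] = state.get(action["what"], 0) + 1
        else:
            # Still running: keep it; the client draws the progress bar.
            pending.append(action)
    return {"state": state, "actions": pending}

row = {"state": {"barracks": 1},
       "actions": [{"what": "barracks", "done_at": 100},
                   {"what": "farm", "done_at": 500}]}
row = wind_forward(row, now=200)  # barracks finished, farm still building
```

Nothing runs while the player is away; the state only advances when someone asks for it, and the result is stored back.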
enum Bool { True, False, FileNotFound };

First of all, thank you for the answer :)

Third, why do you say HTTP won't work for the new Unity based game? Unity has the WWW and UnityWebRequest classes, so you can reasonably easily make it work. Especially if you serve on node.js or a similar server that makes it easy to "pend" incoming HTTP calls until there is actually data, or it times out. (Long polling)

I was not saying that it does not work, just that it is not enough. Currently at work we frequently run into the problem of not being able to push a message to a client and have to build workarounds for it. If I'm going to start a new project, I just wanted to plan it from the start in a way that avoids that.

When building games with "current" game state, you don't want to store "current" game state in a database at all; you want to store it in RAM.

Sure, and I understand what you are saying. The thing is, while that is true for a game based on rooms/instances (you just have to store who won etc. to the database at the end of the game, because the server going down just means having to replay that game instance from the start), what I want to do is a bit more MMO-ish (again, Travian is a good example), and so most, if not all, updates sent to the server are sensitive information. The kind of stuff you say should be stored in RAM simply is not present in the idea. I cannot lose a user's buy/sell if the server goes down, or at least that seems way more dangerous and harder to manage to me.

and how low latency your game really needs.

Basically this is not an obstacle at all. This is a concept that in its simplest form could be implemented as a text game. Interactions between users are indirect or asynchronous.

Finally, management games typically do NOT want to keep an instance per player running. Thus, they aren't really able to push messages to clients, except perhaps if you have a queue of alarms/timers where something happens on behalf of the player at a certain point in time.

I think you probably misunderstood what I was trying to say here (or I did not understand what you are saying). It is not that I was going to send every second for 6 minutes. What I wanted to do is stuff like:

User A buys something on City 1 and so the price changed.

Users B, C, D and E, whom the server has in memory as looking at that city right now, receive a pushed message from the server notifying them of said price change.

or

User A makes a city level up

All online users (or those within that region of the map) receive a pushed message about said level up to update their map. (This is actually a problem we ran into at work, and it was never solved: the map only updates when we refresh the page, since a simple PHP server does not maintain state between calls and we are not going to query all the cities on the map to see if their level changed. It can always be done with some more convoluted code using timestamps, but being able to push messages would make it much easier.)

Thank you for the help.

User A buys something on City 1 and so the price changed.
Users B, C, D and E, whom the server has in memory as looking at that city right now, receive a pushed message from the server notifying them of said price change.


You can totally do that over long-polling over HTTP. Our in-RAM message queue / bus system can push messages to online users through TCP sockets, WebSockets, or HTTP long-polling, through different gateways for each method. To the server, it all looks the same: "post this message on this topic."

Separately, if a user app is not in the foreground, then on iOS, you *have* to use polled communications and/or mobile push; you don't get a persistent connection while backgrounded. On Android, you could have a persistent connection from a Service, but doing so will likely drain the battery faster.

I cannot lose a user's buy/sell if the server goes down


I did suggest that important actions like trade should go to the database, so it sounds like we agree. As long as players don't trade "often" (dozens of times per second, or whatever,) then a database would be reasonable. They can always be horizontally sharded if necessary (at a cost, of course -- it's reducing cost that's the main goal here.)

Separately, if all players lose the same amount of time in a crash, and that amount of time is never more than 15 minutes, how bad is that? Only you can decide how much those kinds of trade-offs are worth.
enum Bool { True, False, FileNotFound };

Basically, above all, I want the server to be able to know which page/screen/state is each client looking at, and be able to send them messages when another client changes something on that specific screen. It would also be nice if the solution is something lightweight on the server side to be able to scale a little.


This sounds like a simple publisher/subscriber model, or the 'observer pattern'. When a client switches to a given screen, it asks the server to subscribe to that 'publisher' for that screen. When the client switches away, it asks to unsubscribe. When something changes on the server, it publishes those changes only to the specific subscribers. How the data moves from the server to each subscribed client depends on the rest of your architecture, but there's nothing intrinsically wrong with just queueing up the updates and waiting for the client to poll for them.
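A minimal sketch of that subscribe/publish registry (the send callback stands in for whatever transport — WebSocket, long-poll queue — actually delivers the message):

```python
from collections import defaultdict

# screen name -> set of client ids currently looking at that screen
subscribers = defaultdict(set)

def subscribe(client_id, screen):
    subscribers[screen].add(client_id)

def unsubscribe(client_id, screen):
    subscribers[screen].discard(client_id)

def publish(screen, message, send):
    # Fan the change out only to clients subscribed to this screen.
    for client_id in subscribers[screen]:
        send(client_id, message)

# User A changes a price in City 1; only clients viewing that screen hear it.
subscribe("B", "city:1"); subscribe("C", "city:1"); subscribe("D", "map")
delivered = []
publish("city:1", {"price": 17}, lambda cid, msg: delivered.append((cid, msg)))
```

Switching screens on the client is just an unsubscribe from the old screen followed by a subscribe to the new one.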

I have thought about using node.js with socket.io to use WebSockets, solving the polling problem. Is this a good idea?

If the reason you don't like HTTP is that it's request-driven and requires the client to poll for data, then using Node.js seems like the wrong approach, given that it's explicitly designed to be a fast performer under exactly those event-based conditions. That's not to say you can't make it work, but I wouldn't recommend it.

Lots of people seem to use a c# and sockets for unity. Would this be overkill in this situation?


I think it would work well, if you have people who are competent in C# and the ability to host it. There are a lot of ready-made libraries that can help with multiplayer games, often intended for use with Unity.

Personally I like to use Python servers because they're very quick to develop with and you can deploy them very widely. However they don't scale terribly well (in terms of either performance or code maintainability) and you can't share code with your client.

they don't scale terribly well


Amen :-)
enum Bool { True, False, FileNotFound };

Personally I like to use Python servers because they're very quick to develop with and you can deploy them very widely. However they don't scale terribly well (in terms of either performance or code maintainability) and you can't share code with your client.

I recently read an article about Eve Online (http://www.gamasutra.com/view/feature/132563/infinite_space_an_argument_for_.php?print=1) which says that they use "Stackless Python" and that it could potentially scale better. But yeah, I know your comment was about normal Python. Just wanted to share this :)


They got Stackless Python to scale in 3 ways:

1) They worked on improving it in-house
2) They implemented most of the heavy lifting in C++ modules
3) They have some SERIOUS hardware

And they still got some bad lag. Whether that's because of the Python aspect or the way they architected it is hard to say. For me, the biggest problem with Python is the lack of type safety. For small programs, that's fine. For large and complex programs, the bugs start appearing thick and fast and the compiler is no help.
Python for big projects is problematic in three ways:

1) It's a dynamically tag-checked language (like JavaScript, Lua, etc.) and thus has no compiler to tell you when you violate any kind of constraint. Instead, you need to add at least as much code in unit tests as you add in real functionality.
2) It uses a "global interpreter lock" -- it can create threads, but those are only useful to asynchronize blocking operations; you cannot get more than one core's worth of work out of it. (Even "Stackless" Python has this problem.) This is similar to node.js, btw -- it is also not threaded.
3) The interpreter is not particularly fast, and the language makes a number of unfortunate implementation choices that make building a really fast (Java-level or better) Python runtime really hard. Even IronPython has to go through reflection on .NET, which makes it much slower.

You can work around #1 with a lot of development discipline. Which ends up costing you all the "speed" that you were supposed to gain from the "loose dynamic" nature, which made you fast in the beginning. And then some.
You can work around #2 by running multiple processes on the same machine. Typically, start up the server, then fork it a number of times, and bind each copy to a separate port (or have them all call accept() and use the kernel for load balancing between them.)
You can work around #3 by paying more for server hardware.
enum Bool { True, False, FileNotFound };

Thanks for the insight. I had never heard such things about Python, even though I used to program (small) games with it. One book was particularly fun, as it was about game programming with Python, but it kept reminding the reader that Python is good for scripting higher-level things in the game, whereas the low-level stuff is the C/C++ world. But I believe this is still the same old story in programming: people use high-level languages & tools because they are often easier to use and faster to develop with... yet true speed can only be achieved at low level, that is, if you know what you are doing. :D


This topic is closed to new replies.
