
Back-end server communication: HTTP or manual TCP?

Started by June 08, 2015 05:36 PM
3 comments, last by evillive2 9 years, 4 months ago

Hello all! I believe this will be my first post on GameDev.net, so HELLO!

My question is: while designing the back end of my game, I'm wondering what the standard is for communication between servers that isn't open to clients.

My current setup is a simple Client -> Game Server[] -> Master Server setup. I've already whipped up a socket-level protocol for my Client -> Game Server communication so that I can be both secure and fast, but the communication between the Game Server and Master Server doesn't exactly have the same constraints. I can depend on the servers not sending malicious messages to each other (I know, it can't be 100%), which would make communication over HTTP/JSON much easier to work with. Is this an okay setup, or should I still apply the same level of security and speed by using socket-level requests?

Also, the Game Server and Master Server are just going to be chatting about end-game results and such; nothing extreme.

[Edit] Uh oh, I may have goofed. Probably should have put this in the networking forum. My bad :(

If your game servers validate all incoming traffic, then you are probably safe to use whatever you want between the game servers and the master server.

If your game servers don't validate incoming traffic, or if your master server is somehow reachable from the outside world without going through the game servers first, then the master server needs to perform validation.
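For example, a minimal validation sketch in Python (the field names match_id, winner, and scores are hypothetical, just to show the idea):

```python
# Minimal sketch of validating an end-game report before trusting it.
# The field names (match_id, winner, scores) are hypothetical.
def validate_end_game_report(msg: dict) -> bool:
    if not isinstance(msg.get("match_id"), str):
        return False
    if not isinstance(msg.get("winner"), str):
        return False
    scores = msg.get("scores")
    if not isinstance(scores, dict):
        return False
    # Reject absurd values rather than trusting the sender blindly.
    return all(isinstance(v, int) and 0 <= v <= 10_000 for v in scores.values())
```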


Your choice of protocol can be based on different criteria. If you've got a binary data structure in a TCP stream, deserializing it into a C data structure is extremely fast compared to parsing HTTP messages with JSON payloads. HTTP+JSON, on the other hand, is easier to extend arbitrarily. There's also protobuf. Base your choice on which properties of the various protocols you value most.
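To illustrate the difference, a rough Python sketch (the three-field record layout is made up):

```python
import json
import struct

# Hypothetical fixed-layout binary record:
# match id (u32), winner id (u32), score (u32), little-endian.
RECORD = struct.Struct("<III")

def decode_binary(payload: bytes) -> tuple:
    # One fixed-layout unpack; no tokenizing, no per-field allocation.
    return RECORD.unpack(payload)

def decode_json(payload: bytes) -> dict:
    # Full text parse plus dict/str allocation for every field.
    return json.loads(payload)
```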


The Game server <-> Master server interface can also be updated more easily, since you do not need to patch clients. You can start off with a convenient and easy interface such as HTTP+JSON and then if you ever run into performance issues, you can address them at that time.
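A starting point can be as small as this (Python standard library; the /match-result endpoint is just an example):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical master-server endpoint accepting end-game reports as JSON.
class MasterHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/match-result":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        report = json.loads(self.rfile.read(length))
        # ... validate and persist the report here ...
        self.send_response(204)  # accepted, no body
        self.end_headers()

if __name__ == "__main__":
    # In production, bind to a private/internal address instead.
    HTTPServer(("127.0.0.1", 8080), MasterHandler).serve_forever()
```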
My personal suggestion: realtime data goes in a custom binary TCP protocol (for lower processing overhead and bandwidth requirements - encoding and decoding JSON to and from in-memory formats is not cheap). Everything else can use a text-based protocol, although HTTP is probably questionable because it isn't persistent.
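For the custom binary protocol, a length-prefixed framing scheme is the usual building block; a rough Python sketch:

```python
import socket
import struct

# A common way to frame messages on a TCP stream: a 4-byte
# little-endian length prefix before each message body.
HEADER = struct.Struct("<I")

def send_msg(sock: socket.socket, body: bytes) -> None:
    sock.sendall(HEADER.pack(len(body)) + body)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a byte stream; keep reading until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = HEADER.unpack(recv_exact(sock, HEADER.size))
    return recv_exact(sock, length)
```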

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

"communication between servers that is not open to clients"

The "standard" is typically to have all servers on a LAN (or VLAN) with a private address space -- 10.x for example.
Then, for clients to connect to servers, expose some number of IPs that are served by reverse NAT/proxies/load balancers.
A server talking to another server would then connect to a 10.x address, which is not reachable from the outside world.
A server can then know whether it's talking to another server, or to a client, by comparing the remote IP address -- if it's an IP address of a proxy, it's a client.
Don't mix up the IP address list used for proxies versus service servers :-) Best is to dedicate an entire subnet to the internal addresses of the proxies/DNATs. (All proxies live in 10.250.x.y, for example.)
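In code, that check is just a subnet membership test; a Python sketch using the example subnets above:

```python
import ipaddress

# Example subnets matching the layout described above.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")
PROXY_NET = ipaddress.ip_network("10.250.0.0/16")   # all proxies/DNATs

def peer_kind(remote_ip: str) -> str:
    addr = ipaddress.ip_address(remote_ip)
    if addr in PROXY_NET:      # check the narrower subnet first
        return "client"        # traffic arriving via a proxy
    if addr in INTERNAL_NET:
        return "server"        # direct LAN peer
    return "unknown"           # shouldn't happen; treat as hostile
```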

If a client can somehow figure out how to talk to an arbitrary server (say, through a hole/config problem in your proxies) then you are vulnerable to internal server cross-call spoofing attacks.
Bonus points for using TLS with client and server authentication for internal communications.
Or, if you don't want to go through that effort, at least put a shared secret on each of the servers, and provide/verify this secret in a header/pre-amble of each connection.
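Something like this (Python; CLUSTER_SECRET is a made-up name for however you deploy the secret):

```python
import hmac
import os

# Shared secret deployed to every server out of band
# (config management, environment, etc.).
SHARED_SECRET = os.environ.get("CLUSTER_SECRET", "").encode()

def preamble() -> bytes:
    return SHARED_SECRET  # sent once, right after connecting

def verify_preamble(received: bytes) -> bool:
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(received, SHARED_SECRET)
```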
enum Bool { True, False, FileNotFound };

I know this is an older thread but I just saw it and had some insight I would like to share.

I would also look at using simple middleware like Redis. Redis in particular has bindings for most languages, including C/C++/C#, Python, Ruby, JavaScript, etc., which means you get interoperability across programs and platforms out of the box, and it is dead simple to use. I use it extensively for VoIP applications as part of a hosted UCaaS environment at work (nearly 200k connected clients), and unless you are trying to do an MMO first-person shooter, performance isn't an issue for thousands of endpoints. Yes, it is single-threaded, but I consider this a bonus: I can deploy as many or as few instances as I want and take advantage of replication and/or clustering for horizontal scaling.

Middleware like Redis has saved me a ton of time over the past year and has a much smaller learning curve than, say, RabbitMQ. In my case, application interoperability and rapid prototyping are invaluable. Are there gotchas in using middleware? Absolutely, but everything has pros and cons. If TCP is your protocol of choice and you want multiple applications to speak to each other, I would be hard-pressed to find a simpler solution to at least start with than Redis. It wraps common underlying networking paradigms nicely, and you can use whatever custom protocol you want for messaging - JSON or your own binary format - whatever is convenient for you.
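To give an idea of how little code this takes, here's a pub/sub sketch with redis-py (the host and channel name are made up):

```python
import json
import redis  # redis-py: pip install redis

r = redis.Redis(host="10.0.0.5", port=6379)  # internal address only!

# Game server side: publish an end-game result.
r.publish("match-results", json.dumps({"match_id": "abc123", "winner": 2}))

# Master server side: consume results as they arrive.
sub = r.pubsub()
sub.subscribe("match-results")
for msg in sub.listen():
    if msg["type"] == "message":
        report = json.loads(msg["data"])
        # validate and persist the report here
```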

One thing - and I feel compelled to say this because I have seen web developers do it with RabbitMQ a bunch - don't expose your middleware directly to the internet. It is a security concern (especially with Redis!) and frankly violates the cardinal rule about never implicitly trusting client communications. My post was mostly about providing "trusted" inter-application communication over a trusted network - preferably a LAN. A thin layer providing an API in front of the middleware is always recommended, to do basic sanity checks and authentication validation.

Evillive2

