
Connecting to the master MySQL server from a game instance?

Started by September 19, 2018 05:46 PM
6 comments, last by hplus0603 6 years, 1 month ago

I was googling but couldn't really find anything related to this, so I figured I should make a diagram of my current server setup:

[Diagram: current server setup]

 

I was curious about the MySQL connectivity. Let's say the game instance servers are connected to the master MySQL server, the master server is located in WA State, and someone who lives in New York connects to the New York game instance server...

When the game server drops an item and the player loots it, the game instance will need to communicate back to WA State, then back to the New York game server, to tell the user the item has been successfully inserted into their inventory. That's a lot of round-trip time, and not fair to the player who chose the game instance server closest to their location. When a player loots an item, I can make it seem like it's inserted into their inventory (update the matrix), but I just feel like I'm doing something wrong here.

Any insight/advice is appreciated

*** I completely replied to the wrong question before, sorry ***

On your question:

A couple of thoughts:

- You adjust your data structure to make it a non-issue, for example by making the loot lootable only by the player who opened the container or landed the killing blow on the monster.

- Have the outer server make the decision and communicate it back to the client AND to the main server in parallel. If you're worried about failures of comms between the outer server and the main one, add redundancy in the middle (say, message queuing).

- If multiple people can take the same loot, accept that somewhere in your design there will need to be some kind of "quirk", as there must always be a cutoff point where the system decides "this belongs to player A or to player B". This might be the unfairness you described (i.e. closest is most likely to win), some kind of different unfairness (player with the fastest Internet connection wins), or a non-unfair but weird system where sometimes BOTH get the loot (i.e. if outer servers made the decision and two decided "it's my player's" in parallel, the master server might clone the loot and give it to both).

- You might leave the decision to the outer servers but implement a system where the client only gets a confirmation that the player got the loot once the master server has been informed and has confirmed who gets it. This works by the master server receiving a request from an outer server and then waiting a short time (likely less than a second) to check whether any other requests come in from other outer servers for the same loot. If not, the loot is given; if so, the requests are compared using the OUTER server timestamps for fairness and the earlier one wins (a rough sketch of this appears below, after this list).

This not being money or something else that isn't supposed to be lost or created due to a bug, I would probably go with the "clone the loot" option if the unfairness really worried me, or just leave it unfair (if the time differences are usually less than 200ms, which is a typical human reaction time), as the last option, whilst impeccably fair, is a PITA to implement, has further implications (such as keeping the outer server clocks in sync) and thus is probably not worth the work.
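To make that last option concrete, here is a minimal sketch of the "wait briefly, then the earliest outer-server timestamp wins" arbitration on the master. The asyncio plumbing, the 300ms window and the names (claim_loot, pending_claims, grant) are illustrative assumptions, not anyone's actual design:

import asyncio

WAIT_WINDOW = 0.3                     # seconds to collect competing claims
pending_claims = {}                   # loot_id -> list of (outer_timestamp, player_id)

async def claim_loot(loot_id, player_id, outer_timestamp, grant):
    claims = pending_claims.setdefault(loot_id, [])
    claims.append((outer_timestamp, player_id))
    if len(claims) > 1:
        return                        # another coroutine is already arbitrating this loot
    await asyncio.sleep(WAIT_WINDOW)  # give other outer servers a chance to claim
    winner = min(pending_claims.pop(loot_id))[1]   # earliest outer timestamp wins
    grant(loot_id, winner)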


First, you seldom want to have your game servers talk directly to database servers. Typically, you'll want your game (simulation/network) servers to talk to an application server, which in turn talks to some database. This lets you scale simulation separately from back-end. Plus, if a game server is "owned" by hacking, the hacker can't just do "select * from users" to get all email addresses and hashed passwords. Presumably, your application servers aren't directly exposed to the greater internet (only to game servers,) and don't have a "return all players" function/request, so the potential attacker would then have to break the app server, after breaking the game server, to get at the full data.
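As a rough illustration of that layering (not the poster's actual setup), the game server talks to a narrow, game-specific endpoint on an application server, and only the application server holds the MySQL credentials. Flask, the /grant_item route and the table/column names here are assumptions made up for the sketch:

from flask import Flask, request, jsonify
import mysql.connector

app = Flask(__name__)

def db():
    # The app server is the only process that holds DB credentials.
    return mysql.connector.connect(
        host="master-db.internal", user="appsrv", password="...", database="game")

@app.route("/grant_item", methods=["POST"])
def grant_item():
    req = request.get_json()
    conn = db()
    try:
        cur = conn.cursor()
        # Narrow, parameterized write: a compromised game server can only do what
        # this endpoint allows, not arbitrary SQL like "SELECT * FROM users".
        cur.execute(
            "INSERT INTO inventory (player_id, item_id) VALUES (%s, %s)",
            (req["player_id"], req["item_id"]))
        conn.commit()
        return jsonify({"ok": True})
    finally:
        conn.close()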

Second, you typically don't want to do anything synchronous in a game. You want everything to be optimistically assumed to succeed, send an asynchronous request, and when it actually succeeds or fails, you go ahead and deal with that as a follow-up resolution. This goes for anything from client-side hit detection to the loot case you talk about. Let the game server tell the client that the item was looted, queue a RPC through some queue to give the object to the player, and when that completes, mark the item as "actually complete." You may be able to show the item, but not actually use it, until it's actually complete, for example.
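A bare-bones sketch of that optimistic flow on the game server might look like this; persist_loot, send_to_client and the inventory interface are placeholders for whatever RPC/queue and game objects you actually have:

async def on_player_loots(player, item, send_to_client, persist_loot):
    # 1. Optimistically show the item right away (fast and purely local).
    player.inventory.add(item, state="pending")
    send_to_client(player, {"event": "loot_granted", "item": item.id})
    # 2. Kick off the slow, cross-country persistence asynchronously.
    try:
        await persist_loot(player.id, item.id)   # RPC / message queue to the back-end
        player.inventory.mark(item, state="confirmed")
        send_to_client(player, {"event": "loot_confirmed", "item": item.id})
    except Exception:
        # 3. Follow-up resolution on failure: roll the optimistic change back.
        player.inventory.remove(item)
        send_to_client(player, {"event": "loot_revoked", "item": item.id})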

Third, what is the actual measured latency? You should see maybe 50 milliseconds from east to west coast between data centers. This is not a very large amount of latency. If your game has a ninja looting problem where the database needs to be involved and 50 milliseconds of looting latency, more or less, matters, then your game has a HUGE FRICKING DESIGN PROBLEM that you probably should be addressing using other means than shaving milliseconds.

Fourth, most games don't actually send every single request through the database. Instead, the game server caches the result in RAM, and occasionally checkpoints the state back to persistent storage. This may be done in a streaming fashion (say, write update events back through a message queue) or in a single-checkpoint fashion (say, fork the process and write all object state into some durable database.) If the game server crashes between you looting the item, and the checkpoint happening, you lose the item. Sucks, but hopefully your servers don't crash that often! (And if you stream out updates to persistent storage, you can actually get around that problem, too.) Presumably, the real arbiter of "who gets the item" when looting is the game server, not the database -- having a case where player A in Australia using server A is competing against player B in Washington State on server B would be inherently bad for a number of other reasons, not the least of which is that syncing state between distant servers in real time is in itself causing a bunch of lag.
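A stripped-down, streaming version of that "cache in RAM, persist later" idea could look like the following; the queue, the worker thread and write_to_db stand in for a real message queue and the app server's write path:

import json
import queue
import threading

inventories = {}                 # authoritative in-RAM state: player_id -> set of item ids
pending_updates = queue.Queue()  # update events waiting to be persisted

def loot_item(player_id, item_id):
    # Mutate RAM now; the player sees the item immediately.
    inventories.setdefault(player_id, set()).add(item_id)
    # Stream the change out; a crash between here and the write loses the item.
    pending_updates.put({"op": "grant", "player": player_id, "item": item_id})

def persistence_worker(write_to_db):
    # Drains the event stream into durable storage in the background.
    while True:
        event = pending_updates.get()
        write_to_db(json.dumps(event))   # e.g. an INSERT issued via the app server

threading.Thread(target=persistence_worker, args=(print,), daemon=True).start()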

enum Bool { True, False, FileNotFound };

Yeah, it's not a ninja looting problem; I just want to make sure the player has a responsive looting experience. As you said, I guess I could update the inventory matrix (check if it fits), then run the query asynchronously, and if anything goes wrong, we go from there. That way, they'll be able to loot instantly without waiting for round-trip times. That sounds like a good solution, thank you.

I don't know the actual latency off hand, but I just want the player who chooses the closest server to have good latency without the master server hindrance (waiting for round trips). I believe your second paragraph is a solution to that; again, it's appreciated.

@Aceticon lol yeah. I saw your previous post and was like what in the hell is going on. :P 

I got the data structure down and whatnot. Item looting works great, and I have different modes (free for all, short allocation, etc.) which the party leader can choose. I was just curious how to implement/run the queries to give the illusion of quick response times. Thanks for your post as well.


Well, if your main concern is responsiveness, there are two main ways you can go about it:

  • You design the looting UI to look like it's doing stuff whilst actually waiting for the backend. The funny thing about people is that the feeling of responsiveness is much more coupled to "seeing things happen" (anything, really) than it is to "conclusion of task". Given the round-trip times involved in the backend (shouldn't be more than a few tenths of a second), you can cover the delay with some fancy frontend animations designed to look like part of the experience.
  • Optimize the backend speed. For example, bring the decision of "who gets it" as close to the frontend as possible (so in this case, to the game server), put some smarts in the master server and have it handle the low-probability cases of multiple game servers attributing the same loot to different people, for example by cloning the loot (a rough sketch of this follows below the list). Remember, these are low-likelihood events, so the solutions for such data-change collisions need not be perfect, just good enough that players don't get frustrated. Other optimizations can be in the network placement of the master server vs the game servers (i.e. minimize the network hops between them and the master) or in improving database access patterns (such as hplus0603's suggestion of doing all the work in memory and updating the DB with a separate process).
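For the second bullet, a minimal sketch of the "clone the loot" collision handling on the master server could be as simple as this; the names and the give_item_to callback are made up for illustration:

granted = {}    # loot_id -> set of player_ids already credited with this drop

def record_loot_grant(loot_id, player_id, give_item_to):
    players = granted.setdefault(loot_id, set())
    if player_id in players:
        return                       # duplicate report of the same grant, ignore it
    # If another game server already awarded this drop, we simply award it again
    # ("clone the loot") instead of revoking it from anyone - cheap, rare, and far
    # less frustrating for players than having an item taken back.
    players.add(player_id)
    give_item_to(player_id, loot_id)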

Hey guys. I just wanted to say after reading all the help and doing some changes, the looting feels instant! Really nailed it on the head about that asynchronous behavior. Appreciate your insight as well, Aceticon.

I feel very happy I explained my situation fully to the best of my knowledge and received cordial help. Rare to see that nowadays tbh. Usually there is always someone being snarky. This forum is great!


there is always someone being snarky. This forum is great!

(snarky voice) No, YOU'RE great!

 

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
