First, you seldom want your game servers talking directly to database servers. Typically, you'll want your game (simulation/network) servers to talk to an application server, which in turn talks to some database. This lets you scale simulation separately from the back end. Plus, if a game server is "owned" by a hacker, they can't just run "select * from users" to get all the email addresses and hashed passwords. Presumably, your application servers aren't directly exposed to the greater internet (only to game servers), and don't have a "return all players" function/request, so a potential attacker would then have to break the app server, after breaking the game server, to get at the full data.
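Here's a rough sketch of what that middle tier can look like, just to make the point concrete. Everything in it (the class name, the sqlite schema, the method names) is made up for illustration; the part that matters is that the only things a game server can ask for are narrow, per-player operations, so there's no "give me every user" call to steal.

```python
import sqlite3

class AppServer:
    """Narrow API layer between game servers and the database (sketch only)."""

    def __init__(self, db_path="game.db"):
        self.db = sqlite3.connect(db_path)
        # Toy schema so the sketch actually runs; your real schema will differ.
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS players "
            "(id INTEGER PRIMARY KEY, display_name TEXT, level INTEGER, "
            " email TEXT, password_hash TEXT)"
        )
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS inventory "
            "(player_id INTEGER, item_id INTEGER)"
        )

    def get_player_profile(self, player_id):
        # Single-row lookup only; email and password hash never leave this tier.
        row = self.db.execute(
            "SELECT display_name, level FROM players WHERE id = ?",
            (player_id,),
        ).fetchone()
        return {"display_name": row[0], "level": row[1]} if row else None

    def grant_item(self, player_id, item_id):
        # Narrow write path: one player, one item, nothing else.
        self.db.execute(
            "INSERT INTO inventory (player_id, item_id) VALUES (?, ?)",
            (player_id, item_id),
        )
        self.db.commit()

    # Note what's *not* here: no list_all_players(), no raw-query endpoint.
```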
Second, you typically don't want to do anything synchronous in a game. You want everything to be optimistically assumed to succeed: send an asynchronous request, and when it actually succeeds or fails, deal with that as a follow-up resolution. This goes for anything from client-side hit detection to the loot case you talk about. Let the game server tell the client that the item was looted, enqueue an RPC through some message queue to give the object to the player, and when that completes, mark the item as "actually complete." You might show the item, but not let the player use it, until it's actually complete, for example.
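In code, that optimistic flow is roughly the sketch below. asyncio stands in for whatever queue/RPC machinery you actually run, and all the names (persist_grant, ItemState, loot_item) are invented for illustration. In a real server you'd fire the grant as a task and resolve it on a later tick or callback rather than awaiting inline, but the shape is the same: acknowledge immediately, persist asynchronously, resolve as a follow-up.

```python
import asyncio
import enum

class ItemState(enum.Enum):
    PENDING = 1      # shown to the player, but not usable yet
    CONFIRMED = 2    # persistence acknowledged; fully usable
    REVOKED = 3      # persistence failed; take it back / compensate

async def persist_grant(player_id, item_id):
    # Stand-in for the async RPC to the application server / queue.
    await asyncio.sleep(0.05)    # pretend cross-datacenter round trip
    return True

async def loot_item(player_id, item_id, inventory):
    # 1. Optimistically tell the client right away.
    inventory[item_id] = ItemState.PENDING
    print(f"player {player_id}: item {item_id} looted (pending)")

    # 2. Fire the durable grant; this runs off the critical path.
    ok = await persist_grant(player_id, item_id)

    # 3. Resolve as a follow-up, not as part of the loot action itself.
    inventory[item_id] = ItemState.CONFIRMED if ok else ItemState.REVOKED
    print(f"player {player_id}: item {item_id} -> {inventory[item_id].name}")

if __name__ == "__main__":
    asyncio.run(loot_item(player_id=42, item_id=1001, inventory={}))
```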
Third, what is the actual measured latency? You should see maybe 50 milliseconds from the east coast to the west coast between data centers. That is not a very large amount of latency. If your game has a ninja-looting problem where the database needs to be involved and 50 milliseconds of looting latency, more or less, matters, then your game has a HUGE FRICKING DESIGN PROBLEM that you should probably be addressing by other means than shaving milliseconds.
Fourth, most games don't actually send every single request through the database. Instead, the game server caches state in RAM, and occasionally checkpoints that state back to persistent storage. This may be done in a streaming fashion (say, write update events back through a message queue) or in a single-checkpoint fashion (say, fork the process and write all object state into some durable database). If the game server crashes between you looting the item and the checkpoint happening, you lose the item. Sucks, but hopefully your servers don't crash that often! (And if you stream updates out to persistent storage, you can actually get around that problem, too.) Presumably, the real arbiter of "who gets the item" when looting is the game server, not the database. A setup where player A in Australia on server A competes against player B in Washington State on server B would be inherently bad for a number of other reasons, not least that syncing state between distant servers in real time itself causes a bunch of lag.
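If it helps, here's the cache-and-checkpoint idea boiled down to a toy. The in-process dirty set and JSON file are stand-ins for a real message queue or fork-and-dump checkpoint, and all the names are made up; the point is just that gameplay only ever touches RAM, and persistence happens later in a batch.

```python
import json
import time

class GameObjectStore:
    def __init__(self):
        self.objects = {}     # authoritative in-RAM state
        self.dirty = set()    # ids changed since the last checkpoint

    def update(self, obj_id, **fields):
        # Gameplay code only ever touches RAM; nothing blocks on storage here.
        self.objects.setdefault(obj_id, {}).update(fields)
        self.dirty.add(obj_id)    # remember to persist this later

    def checkpoint(self, path="checkpoint.json"):
        # Single-checkpoint flavor: dump the dirty objects in one go.
        snapshot = {oid: self.objects[oid] for oid in self.dirty}
        with open(path, "w") as f:
            json.dump({"ts": time.time(), "objects": snapshot}, f)
        self.dirty.clear()

store = GameObjectStore()
store.update("player:42", looted_item=1001)
store.checkpoint()   # changes made after this, and before the next one, are lost on a crash
```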