If we're speaking about serious volumes (over 10-100M DB transactions/day total), it is not that simple.
Agreed! At work, we have to deal with those volumes and more. Meanwhile, 99.999% of all games will not see those volumes, and I would not recommend that a game developer spend too much time worrying about horizontal sharding or read slaves when the real challenge is making a fun game and figuring out how the world will learn how fun it is.
BLOBs as such are traditionally pretty bad for DBs
Right -- I used "blob" in the sense of "state checkpoint from the game," not necessarily implemented using the SQL BLOB construct. You should implement it however best suits your storage.
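To make that concrete, here is a minimal sketch of one possible key-value layout for such checkpoints, using sqlite3 purely as a stand-in for whatever store you actually pick; the table and column names are made up for illustration:

```python
import sqlite3
import time

# Hypothetical layout: one row per (user, game instance).  The checkpoint
# itself is an opaque serialized snapshot; the database never looks inside it.
conn = sqlite3.connect("checkpoints.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS checkpoint (
        user_id       INTEGER NOT NULL,
        game_instance INTEGER NOT NULL,
        updated_at    INTEGER NOT NULL,   -- unix timestamp
        state         BLOB    NOT NULL,   -- serialized game state
        PRIMARY KEY (user_id, game_instance)
    )
""")

def save_checkpoint(user_id: int, game_instance: int, state: bytes) -> None:
    # Upsert: the newest checkpoint simply replaces the previous one.
    # (Requires SQLite 3.24+ for ON CONFLICT ... DO UPDATE.)
    conn.execute(
        "INSERT INTO checkpoint (user_id, game_instance, updated_at, state) "
        "VALUES (?, ?, ?, ?) "
        "ON CONFLICT (user_id, game_instance) DO UPDATE SET "
        "updated_at = excluded.updated_at, state = excluded.state",
        (user_id, game_instance, int(time.time()), state),
    )
    conn.commit()

def load_checkpoint(user_id: int, game_instance: int) -> bytes | None:
    row = conn.execute(
        "SELECT state FROM checkpoint WHERE user_id = ? AND game_instance = ?",
        (user_id, game_instance),
    ).fetchone()
    return row[0] if row else None
```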
1000 transactions/second is indeed achievable, though depending on DB experience and on the transactions themselves, it can be rather tough to pull off
A desktop PC with a quad core CPU and a SATA SSD can do it.
That being said, if it's truly just "userid/gameinstance -> statecheckpoint" then a file on disk would be sufficient, and those can do ... many more :-)
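For the file-on-disk variant, a sketch might look like the following (paths and helper names are hypothetical); the temp-file-plus-rename dance is there so a crash never leaves a half-written checkpoint:

```python
import os
from pathlib import Path

CHECKPOINT_DIR = Path("checkpoints")   # hypothetical storage root

def checkpoint_path(user_id: int, game_instance: int) -> Path:
    # One file per (user, game instance); a directory per user keeps listings small.
    return CHECKPOINT_DIR / str(user_id) / f"{game_instance}.state"

def save_checkpoint(user_id: int, game_instance: int, state: bytes) -> None:
    path = checkpoint_path(user_id, game_instance)
    path.parent.mkdir(parents=True, exist_ok=True)
    # Write to a temp file, then atomically rename it over the old checkpoint.
    tmp = path.with_suffix(".tmp")
    tmp.write_bytes(state)
    os.replace(tmp, path)

def load_checkpoint(user_id: int, game_instance: int) -> bytes | None:
    path = checkpoint_path(user_id, game_instance)
    return path.read_bytes() if path.exists() else None
```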
So the question then becomes: Are you doing this just to design a "what if" for a very large game? Or are you actually trying to build a game of your own, and want to know where to start?
For the "what if" scenario, the assumptions about transaction rates and the design of the game play a very crucial role.
Regarding the "let's centralize login" question, the only thing login does is issue a cryptographic token, valid for some time, that identifies the player as a particular customer ID. The world state databases then just verify this token, without having to connect to the central database. Thus, the load on the login database is proportional to how often users need to log in.
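As an illustration of the shape of such a token (a sketch, not any particular game's implementation), here is one way to do it with an HMAC over the customer ID and an expiry time; the shared secret, field names, and TTL are assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

# Shared secret known to the login server and the world servers.
# (Hypothetical: a real deployment might use per-server keys or public-key
# signatures so world servers never hold a signing key.)
SECRET = b"replace-with-a-real-secret"

def issue_token(customer_id: int, ttl_seconds: int = 3600) -> str:
    """Login server: after checking credentials against the login DB,
    sign (customer_id, expiry) so other servers can trust it without a DB hit."""
    payload = json.dumps({"cid": customer_id,
                          "exp": int(time.time()) + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> int | None:
    """World server: verify locally; no round trip to the central login database."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return None                                   # malformed token
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None                                   # forged or corrupted
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None                                   # expired; player must log in again
    return claims["cid"]                              # trusted customer ID
```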