Have you actually worked on and deployed a MMOG? It's not about "liking," it's about pure possibility.
Ok, since you're asking (BTW, thanks for the opportunity to brag about it). Besides being a co-architect of a G20 stock exchange 20 years ago, and the Chief Architect of a game which (while being non-simulation) still processes around a billion user messages a day, during the last 10 years I've done quite a bit of consulting (due diligence, etc.). This consulting/due diligence has covered quite a few games, including MMOs in your (quite narrow) definition of the term. Which puts me in a rather unique position to generalize across the whole spectrum of very different over-the-Internet games. The two-and-a-half most important lessons I've learned in the process are the following:
1. All games which work over the Internet and have over 100 players at the same time have a lot in common. Whether it is an MMOFPS or a stock exchange, they still have a "game world" which processes user inputs, simulates whatever-logic-you-want-to-throw-in, and pushes updates to the clients. Of course, there are lots of differences (UDP vs TCP, very different ways to compress the data, client-side prediction, name-your-poison), but there are still striking similarities; just two examples: the state-sync concept is universal and absolutely necessary for all such games despite implementation differences, and the single-threaded event-processing loop is universal across the board (see the first sketch right after this list), and so on and so forth. As a result, I tend to call all such games MMOs (I hate to argue about terminology, but this definition also seems to be supported by Wikipedia, which lists rogue-like Gemstone IV as the very first MMORPG).
1 1/2. In spite of popular opinion, simulation games are not that different from the rest when speaking about protocols and overall architectures. From a very high-level point of view, a simulation game still has a server which receives user inputs, calculates state, derives publishable state out of it, and pushes this publishable state towards the clients (filtering it for different clients when/if necessary; see the second sketch after this list). Only implementation details (such as TCP vs UDP, complexity of calculations, etc.) are different, and even these are not black-and-white (just as one example, MMORTS are known to use either UDP or TCP, and while UDP tends to work better, they can be playable over TCP too). Instead of being fundamentally different from the rest of the gaming world, simulation games are just sitting at one end of a spectrum (with the opposite end occupied by farm-like social games and stuff such as Lords&Knights). While the difference between the two ends of the spectrum is drastic, there are lots of things in between (including, but not limited to, casino-likes, stock exchanges, arenas, and MMORTS) which make it a kind of "continuous spectrum", with neighbours having lots in common, but the ends being indeed drastically different.
2. For each problem there is a different spectrum of solutions, ranging from "it will never work" to "the optimal one". And close to the "optimal one" there is usually a solution which will work, but has practical drawbacks. Just as one example: blocking RPCs in game loops "will never work", but when it comes to the choice between "messages" and "non-blocking RPCs", the decision is based on game specifics and even personal experience (a toy illustration of the blocking-vs-non-blocking difference follows this list). It is in these cases of choosing between two solutions which will both work that I'm speaking about "likings" (and yes, this choice is quite subjective, not to mention that it depends on game specifics a lot).
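To illustrate the kind of similarity I mean in point 1, here is a minimal sketch of the single-threaded event-processing loop with state sync. All the types and names below are made up purely for illustration, and the real thing would sit behind a network layer, but the overall shape is the same whether it's an MMOFPS or a stock exchange:

```cpp
// Minimal sketch of the "universal" single-threaded event-processing loop,
// with hypothetical InputEvent/StateUpdate types (illustrative names only).
#include <queue>
#include <vector>
#include <cstdio>

struct InputEvent { int player_id; int dx; };           // e.g. "move by dx"
struct StateUpdate { int player_id; int new_x; };       // what we push to clients

struct GameWorld {
    std::vector<int> player_x = std::vector<int>(4, 0); // positions of 4 players

    // apply a single input; no locks needed - we're single-threaded
    StateUpdate apply(const InputEvent& ev) {
        player_x[ev.player_id] += ev.dx;
        return { ev.player_id, player_x[ev.player_id] };
    }
};

int main() {
    std::queue<InputEvent> inputs;           // in real life: filled by the network layer
    inputs.push({0, +5});
    inputs.push({1, -3});

    GameWorld world;
    std::vector<StateUpdate> outgoing;       // in real life: serialized and sent to clients

    // the event-processing loop: take input -> update state -> queue state sync
    while (!inputs.empty()) {
        InputEvent ev = inputs.front();
        inputs.pop();
        outgoing.push_back(world.apply(ev));
    }

    for (const auto& upd : outgoing)
        std::printf("sync to clients: player %d is now at x=%d\n", upd.player_id, upd.new_x);
}
```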
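And for point 1 1/2, a similarly hypothetical sketch of the "receive inputs -> calculate state -> derive publishable state -> filter per client" pipeline; the radius-based interest management here is just one possible filtering rule, picked for brevity:

```cpp
// Sketch of "server state -> publishable state -> per-client filtering",
// assuming a simple interest-management rule (only entities within a radius
// are sent to a given client). All types and names are illustrative.
#include <cmath>
#include <cstdio>
#include <vector>

struct Entity { int id; double x, y; double hidden_ai_state; };    // full server-side state
struct PublishedEntity { int id; double x, y; };                    // what clients may see

// derive publishable state: drop server-only fields (AI state, etc.)
std::vector<PublishedEntity> derive_publishable(const std::vector<Entity>& world) {
    std::vector<PublishedEntity> pub;
    for (const auto& e : world) pub.push_back({ e.id, e.x, e.y });
    return pub;
}

// filter for one client: only entities close to that client's own position
std::vector<PublishedEntity> filter_for_client(const std::vector<PublishedEntity>& pub,
                                               double cx, double cy, double radius) {
    std::vector<PublishedEntity> visible;
    for (const auto& e : pub)
        if (std::hypot(e.x - cx, e.y - cy) <= radius) visible.push_back(e);
    return visible;
}

int main() {
    std::vector<Entity> world = { {1, 0, 0, 0.7}, {2, 10, 10, 0.1}, {3, 500, 500, 0.9} };
    auto pub = derive_publishable(world);
    auto for_client = filter_for_client(pub, 0.0, 0.0, 50.0);  // client standing at (0,0)
    for (const auto& e : for_client)
        std::printf("send entity %d at (%.0f,%.0f)\n", e.id, e.x, e.y);
}
```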
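As for point 2, here is a toy comparison of a blocking RPC (which would freeze the game loop) versus a non-blocking one whose reply is processed on a later tick. The "RPC" below is faked with a delayed-reply queue, so treat it as the shape of the solution rather than a real implementation:

```cpp
// Toy contrast: a blocking RPC stalls the whole game loop, while a
// non-blocking request keeps the loop ticking and applies the reply later.
#include <cstdio>
#include <functional>
#include <queue>

struct PendingReply { int ready_at_tick; int value; std::function<void(int)> on_reply; };

int main() {
    std::queue<PendingReply> in_flight;   // replies that haven't arrived yet
    int player_gold = 0;

    for (int tick = 0; tick < 5; ++tick) {
        // (a) BLOCKING style would be:  player_gold = db_get_gold();  // loop frozen until DB answers
        // (b) NON-BLOCKING style: issue the request once, keep ticking, apply the reply when it arrives
        if (tick == 0)
            in_flight.push({ /*ready_at_tick=*/3, /*value=*/100,
                             [&](int v) { player_gold = v; } });

        // poll for completed "RPCs" without ever blocking the loop
        while (!in_flight.empty() && in_flight.front().ready_at_tick <= tick) {
            PendingReply r = in_flight.front();
            in_flight.pop();
            r.on_reply(r.value);
        }

        std::printf("tick %d: gold=%d (simulation keeps running)\n", tick, player_gold);
    }
}
```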
Now to your solution (the one "with a minimum amount of configuration"). Yes, it will work (and that's what I meant when I said "It's doable, but..."). But no, for a wide spectrum of games it is not the only one which will work. Moreover, from what I've seen, any kind of server-side failure detection tends to cause trouble (this is a generalization over a few dozen systems and many dozens of system-wide failures I have seen or was told about). That's why, whenever possible, I strongly prefer to avoid making decisions about server failures on the server side at all. And client-side load balancing does exactly that: it means exactly zero configuration on the server side to make the system handle the failure of one of the Front-End Servers (and no special trickery such as virtualization is necessary either); in other words, failure handling with client-side random balancing is KISS in its almost pure form (a minimal sketch follows below). That's exactly why I "like" it better (YMMV, batteries not included). It MIGHT happen that it doesn't work for your game - but it MIGHT as well work, so writing it off without taking it into consideration is not exactly wise.
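Just to show how little machinery client-side random balancing needs, here is a hypothetical sketch (try_connect() stands in for a real connect(), and the server list is made up): the client simply re-rolls on failure, and the Front-End Servers don't need to know anything about each other's health:

```cpp
// Sketch of client-side random balancing across Front-End Servers: the client
// picks a server at random and re-rolls on failure, with zero server-side
// failure-detection configuration. try_connect() is a stand-in for connect().
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// stand-in for a real TCP connect(); pretend the second server is down
bool try_connect(const std::string& host) {
    return host != "frontend2.example.com";
}

int main() {
    std::vector<std::string> front_ends = {
        "frontend1.example.com", "frontend2.example.com", "frontend3.example.com"
    };

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<size_t> pick(0, front_ends.size() - 1);

    // keep picking at random until one Front-End Server accepts us;
    // a dead server is handled purely on the client side
    for (int attempt = 0; attempt < 10; ++attempt) {
        const std::string& host = front_ends[pick(rng)];
        if (try_connect(host)) {
            std::printf("connected to %s after %d attempt(s)\n", host.c_str(), attempt + 1);
            return 0;
        }
        std::printf("%s did not respond, re-rolling\n", host.c_str());
    }
    std::printf("all attempts failed\n");
    return 1;
}
```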
Anyway, it turns out that the role of "filtering the data stream" is much simpler than the role of "physically simulate a world," so it's often convenient to simply make front-end-server == simulation-server for the vast majority of your world.
This is another good example of what I call "liking". Your experience pushes you to combine Front-End Servers with Game World Servers; mine pushes me to play it the other way around (unless game specifics show otherwise). However, we seem to agree (correct me if I'm wrong) that both of these approaches will work, that the difference is not that drastic, and that neither of them will be a Fatally Wrong Decision which breaks the game (especially as, with the right architecture, it can be changed down the road if necessary). IMHO, this is a good point of mutual understanding, as (unlike "it will never work" stuff) it is all about personal experiences and personal judgement (which are inevitably different for different people).
Peace?