$10k custom RAID server, or $100 rental?
I work in designing and building MMO systems. An old friend recently launched a mass-market web-based system whose usage/traffic profile is quite similar to an MMOG's: people saturating all the bandwidth they can get, a need for lossless data writes, extremely high performance, and frequent traffic spikes.

He spent five figures on a single custom server, primarily so he could ensure it had a good RAID card with battery-backed cache. That part I understand: he wants to run his DBs in write-back mode for optimal performance, without losing data in the event of a power failure.

But at the last two companies I've worked for, we used large numbers of commodity servers to achieve similar performance/reliability (better, AFAICS, and at lower cost). I'm a great fan of the rental model: find a good datacentre, rent a few machines to start off with, and patch in new ones over time as demand increases. It has a very low up-front cost and gives you nicely decentralized failures - you need N/3 or more machines to break within a short window before you suffer problems, rather than just one. And for any serious system you'll definitely need multi-machine scalability (some form of clustering) sooner or later anyway - and if you expect exponential growth (he does), you'll need it sooner. So we argued about this for an hour or so :).

As it happens, his funding model means he doesn't really suffer from having spent the capital up-front, although personally I would have saved the money and spent it on other things at this stage.

I was just wondering what other people here do - has it become worth going back to the old days of custom-building rack servers right from day one, and paying the massive co-lo + capital costs? These days I can get such good hardware on rental, running my OS of choice (Debian), which I can instance automatically with my remote-install CDs into a preconfigured system, that I don't seem to have any need for a custom box any more. I used to do it that way; it no longer seems necessary nor worth it. Thoughts?
The problem with all eggs in one basket is that there's a limit to how big a basket you can buy.
enum Bool { True, False, FileNotFound };
I built a server at home that I pair with my normal home high-speed internet connection for initial testing and debugging, and then I rent a dedicated server for true alpha and beta testing.
This allows me to scale my costs depending on my needs: low cost during development; high cost during testing; highest cost during deployment (when all I do is upgrade the hosting solution I had for testing).
In short, if I get your point, then I'm with you that the days of NEEDING your own NOC to deploy an MMO are waning. Having a NOC does have advantages when it comes to hardware, but more and more hosting companies are taking care of that as well by providing, for example, 24-hour staffing, free remote reboots, or VPN serial access to your machine so that even if the networking is down, you can still get in.
Well, for us, renting a machine was a pretty good decision. We grew slowly at first and didn't even come close to using the full power of our P4 server. When we grew large enough to strain it, we just switched to a dual Opteron 244, and it was quite painless: rent a second machine, ask their techs to swap the two machines' IPs so we didn't have to wait for new DNS info to propagate, and we were done.
As said above, most places now offer staff on standby 24 hours a day and other options like that. We can only afford the 24-hour tech support, but it's not that much compared to a $10k server =).
The only negative comment I have about not having direct access to your server is that stupid accidents can happen. We asked to have 64-bit Linux installed on the new machine, and they pulled the P4 out of the rack and were about to format it; we only called them directly because we guessed something was really wrong when the production server disappeared from the network completely. Well, that's what backups are for, eh?
Quote:
Original post by Dark Rain
The only negative comment I have about not having direct access to your server is that stupid accidents can happen. We asked to have 64-bit Linux installed on the new machine, and they pulled the P4 out of the rack and were about to format it; we only called them directly because we guessed something was really wrong when the production server disappeared from the network completely. Well, that's what backups are for, eh?
Ah...that's a non-issue. I never run a production server unless I have a CD I can auto-install it from (cunning use of Debian auto-install + custom package repositories). It's easy enough that any decent sysadmin could learn to do it from scratch in a day or so.
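For anyone who hasn't seen it: the auto-install CD bit is basically Debian preseeding - you answer the installer's questions in a file shipped on the CD. A rough, hypothetical fragment (not my actual file - the mirror hostname, repo URL and package names below are placeholders) looks something like:

    # where the installer fetches packages from (placeholder hostname)
    d-i mirror/http/hostname string mirror.example.net
    d-i mirror/http/directory string /debian
    # point apt at the custom package repository with your own preconfigured packages
    d-i apt-setup/local0/repository string http://pkgs.example.net/custom stable main
    # packages the box should come up with straight after install (example set)
    d-i pkgsel/include string openssh-server ntp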
Equally, I always spec rented machines with two Linux installs on them. Want to re-install Linux? Reboot into the second install, re-install the first one, and reboot back into it - having first told the bootloader to fall back to the second install automatically if the freshly re-installed first one fails to come up.
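In GRUB legacy terms that's just a few lines in menu.lst - a rough sketch rather than a verbatim copy of mine; the device names and kernel paths are placeholders:

    default saved        # boot whichever entry savedefault last recorded
    fallback 1           # if entry 0 doesn't come up, entry 1 is used next boot

    title  Install A (the one being re-installed)
    root   (hd0,0)
    kernel /vmlinuz root=/dev/sda1 ro panic=30   # panic=30: reboot 30s after a kernel panic
    initrd /initrd.img
    savedefault fallback # if this boot never completes, the fallback entry becomes the default

    title  Install B (known-good second install)
    root   (hd0,1)
    kernel /vmlinuz root=/dev/sda2 ro
    initrd /initrd.img
    savedefault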
Tried, tested, works fscking brilliantly. Never need to ask the tech guys @ the co-lo / hosting facility to touch the machine.
PS: I got this trick from a colleague who used to manage a large cluster of machines in India, and several times he had to fix them from the UK. I won't comment on the quality of the on-site tech support; you can probably imagine what they were capable of. He didn't really have the option of "get tech support to reboot it", let alone "...to re-install Linux".
Quote:
Original post by hplus0603
The problem with all eggs in one basket is that there's a limit to how big a basket you can buy.
Indeed. He will *have* to make his clustering work sooner or later, so he's getting no free ride.
The real question, I suppose, is "is it better to put off the development of cluster support, or to put off the purchasing of expensive hardware?".
Given that he's based on J2EE, I consider that a non-question in his case...
Quote:
Tried, tested, works fscking brilliantly. Never need to ask the tech guys @ the co-lo / hosting facility to touch the machine.
... until the power supply blows up ...
enum Bool { True, False, FileNotFound };
Quote:
Original post by hplus0603
Quote:
Tried, tested, works fscking brilliantly. Never need to ask the tech guys @ the co-lo / hosting facility to touch the machine.
... until the power supply blows up ...
That's not my problem, it's not charged for, and if it isn't fixed very quickly, I just get a different rental server in the same room.
If you've done a lot of hosting, you'll know that one of the risks is being reliant on the host to fix problems with your machine - usually they charge an obscene amount of money and/or take a very long time for this sort of thing. If it's their stuff, i.e. a hardware failure, no problem. Software problems, though, are YOUR problem.
My suggestion was to go for a full co-lo, where you put your machines into a lockable cage. When the hard drives blow up, or you need to reset the BIOS CMOS, or whatever, you go in there yourself. For a large-scale operation, I really don't see managed hosting solutions as an option -- but for a small thing, where you need 1-3 machines, it seems a great way to get started!
enum Bool { True, False, FileNotFound };