
DIY NAS, RAID/ZFS questions

Started December 31, 2014 01:56 AM
2 comments, last by medevilenemy 9 years, 10 months ago

Because I've more or less run out of disk space on my existing small onboard RAID, I've decided to build a DIY NAS/media server (on the advice of a friend; this approach seems to be cheaper and more flexible than an off-the-shelf NAS). All the parts are on their way, but I've run into a couple of concerns I'm not sure about, and I wonder if anyone has any advice:

1) I'm using normal consumer-grade parts (with some NAS-grade hard drives), including a run-of-the-mill mobo and non-ECC RAM. A friend suggests I use ZFS as the filesystem on the drives, but a quick Google search pulls up lots of examples of people saying not to use non-ECC RAM with ZFS. Is this really a problem worth worrying about? If so, what is a good alternative?

2) I'm trying to decide on a RAID level/arrangement. I'll have 6 drives, all of equal size, including two which are currently in use in a RAID 1. I want to keep the risk of data loss to a reasonable minimum, so I'm currently thinking of setting up the 4 new drives in RAID 5 (RAIDZ?) and keeping the older pair as separate drives for backup. The catch is that if I were to get a 7th drive to cover the full capacity of the RAID, space efficiency would only be about 43% (3 drives for backup, one drive's worth of parity, and 3 for actual data). Is this a bad idea? The general consensus online seems to be that RAID 5 alone isn't trustworthy and that one should always have some form of backup, so I'm rather worried.
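For concreteness, here's that arithmetic as a quick Python sketch. The drive size and counts just mirror the numbers above; the per-level formulas are the standard ones, and it ignores filesystem overhead and hot spares:

# Rough usable-capacity math for common RAID levels with N equal drives.

def usable_drives(level: str, n: int) -> int:
    """How many drives' worth of space is usable for data."""
    if level == "raid0":
        return n          # striping only, no redundancy
    if level == "raid1":
        return 1          # everything mirrored
    if level == "raid5":
        return n - 1      # one drive's worth of parity
    if level == "raid6":
        return n - 2      # two drives' worth of parity
    if level == "raid10":
        return n // 2     # striped mirrors
    raise ValueError(level)

drive_tb = 4  # the 4 TB drives in this build
for level, n in [("raid5", 4), ("raid6", 5), ("raid10", 6)]:
    data = usable_drives(level, n) * drive_tb
    print(f"{level} over {n} drives: {data} TB usable "
          f"({data / (n * drive_tb):.0%} of raw)")

# The full plan: 4-drive RAID 5 (3 data + 1 parity) plus 3 backup drives,
# i.e. 3 data drives out of 7 total.
print(f"RAID 5 + separate backup: {3 * drive_tb} TB usable of "
      f"{7 * drive_tb} TB raw ({3 / 7:.0%})")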

There was a saying we had in college: Those who walk into the engineering building are never quite the same when they walk out.

using normal consumer grade parts
Most NAS systems use the same or cheaper parts (the low-cost systems from e.g. WD especially).

ZFS

Most commercial NAS systems (including, e.g., all my Synology boxes) use ext4 on LVM RAID, which works just fine unless you have volumes exceeding 500 terabytes or so (and even then it works "fine"; it just isn't optimized for that scenario).

My overall recommendation -- having maintained DIY systems and tried cheap and expensive NAS systems for over a decade -- is not to do this. Spend money (even if it hurts) or don't do it at all. Get something real, or save until you can afford it.

If you have to ask, you are most likely unable to set up the system so it will work reliably with zero maintenance. You might be able to set up something that "mostly works", in other words something akin to a $100 NAS.

However, a cheap (or DIY) NAS means you save $200 now and pay $20,000 later when something fails catastrophically and you lose your data. Yeah of course, you make regular backups, so... oh wait a moment... shit.

With a good NAS this (hopefully) won't happen. Apart from a physical disk failing, it will (usually) run 100% problem-free, and even then it will manage the issue (unless something like lightning strikes your house and the whole box burns down). It will do things like failover, monitoring, and of course backups automatically, without you having to do anything. Well, you might have to buy a new hard disk when the system sends you an email saying "disk 3 has failed", but that's pretty much it.

In other words, it "just works". Don't tell me, don't ask me, just do your fucking job.
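If you roll your own anyway, that monitoring becomes your job to script. A minimal sketch of the idea, assuming smartmontools is installed and a local mail server is listening on port 25; the device names and addresses are placeholders:

# DIY disk-health check: run smartctl on each drive and email a
# warning if any drive's SMART health assessment does not pass.
import subprocess
import smtplib
from email.message import EmailMessage

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholders

failed = []
for dev in DRIVES:
    # "smartctl -H" prints the drive's overall SMART health result.
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    if "PASSED" not in result.stdout:
        failed.append(dev)

if failed:
    msg = EmailMessage()
    msg["Subject"] = f"NAS disk health warning: {', '.join(failed)}"
    msg["From"] = "nas@example.com"   # placeholder addresses
    msg["To"] = "you@example.com"
    msg.set_content("SMART health check did not pass for: " + ", ".join(failed))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

Run that from cron and you get a poor man's version of the "disk 3 has failed" email.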


If you can afford it, go RAID 10. Performance plus redundancy, and with reasonably fast disks it'll smoke anything this side of an SSD.

Note that I said "redundancy" above: RAID isn't backup. It'll protect you against hardware failure, but if you screw up and delete/overwrite/whatever a file, RAID is no help. That's why you still need a proper backup in place. If you have the bandwidth, use a cloud solution; otherwise (and if you're serious about backup) use tape: backup-to-disk is sexy and modern, but tape is still the cheapest storage per terabyte, and you can move it offsite, put it in a fireproof safe, etc.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I'll echo the usual comment: RAID is not a backup. If you are planning out your file backup system and "RAID" actually matters to it, then you are doing something wrong. RAID can be used to group physical drives together into a larger logical drive that you just throw data at. It can be used to make data transfers go faster. It can be used to help ensure system uptime and availability. It does not, however, back up anything in any configuration. Having the same logical file stored on two different drives only protects against a fairly low-risk hardware failure while ignoring many far more likely issues.

As for tape, I have yet to see anyone map out a tape backup plan that actually made economic sense for anything smaller than a truly massive dataset. External slot-loading hard drive bays are cheap and easy to use, and you can buy a lot of hard drive storage for the cost of a single tape drive, last I checked.
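A back-of-the-envelope way to check that for your own situation; every price below is a placeholder to be replaced with real quotes before drawing any conclusion:

# Break-even point for tape vs. plain hard drives, given the large
# up-front cost of a tape drive. All prices are hypothetical placeholders.
TAPE_DRIVE_COST = 1500.0   # placeholder: one-time cost of the tape drive
TAPE_COST_PER_TB = 10.0    # placeholder: tape media cost per terabyte
HDD_COST_PER_TB = 35.0     # placeholder: hard drive cost per terabyte

# Tape only wins once the per-terabyte media savings outgrow the
# drive's up-front cost.
break_even_tb = TAPE_DRIVE_COST / (HDD_COST_PER_TB - TAPE_COST_PER_TB)
print(f"Tape becomes cheaper above ~{break_even_tb:.0f} TB of backup data")

With those placeholder numbers the crossover is around 60 TB, which is exactly the "horribly massive dataset" territory.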

I haven't been able to price out a particularly effective build that was much cheaper than an off-the-shelf solution, unless I was going for something with an insane number of drives, supporting way more volume than I really need. After buying a case, motherboard, power supply, etc., I'm usually not a great deal below what a 4-6 bay diskless unit costs, and I still have to go through and configure everything and hope I haven't mucked up a script somewhere along the line. Compare that to pulling an off-the-shelf unit out of its box, powering it on, attaching it to my network, and loading it with a few drives... a lot more headache doing it myself for the few hundred bucks I can save.

Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

The consensus, then, seems to match my instinct, which is to maintain a separate backup copy in addition to the RAID 5, despite the space inefficiency. I agree that tape backup doesn't really seem feasible; maybe it will be worth looking into in the future, but right now HDD backup should suffice. I'll re-evaluate my arrangement the next time I decide I need to expand storage (which *shouldn't* be for at least a couple of years, hopefully not for a few). Thus I should have two layers of defense: the one disk of redundancy RAID 5 provides, and a full backup to separate drives (currently in a separate machine, possibly to be taken offline completely). Obviously, if my house burns down or something I'm in severe trouble, but I'd have much bigger problems to worry about in that event.

I have a pair of hard drive "toasters", so access to offline drives is not a problem.
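The backup layer itself will just be a periodic mirror onto whichever drive is in the toaster; something like this minimal sketch (both mount paths are placeholders for wherever the array and backup drive end up):

# Minimal backup pass: mirror the array onto the offline-able backup drive.
# Note --delete makes the copy an exact mirror, so a file deleted from the
# array disappears from the backup on the next run as well.
import subprocess
import sys

SOURCE = "/mnt/array/"    # trailing slash: copy contents, not the dir itself
DEST = "/mnt/backup/"     # placeholder mount point for the toaster drive

result = subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST])
if result.returncode != 0:
    sys.exit(f"backup failed with rsync exit code {result.returncode}")
print("backup pass complete")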

So, the arrangement as planned (and discussed with a friend who has been running similar systems for a few years):

Fractal Design Node 804 MATX case (Space for 8 3.5" drives, plus a couple SSDs. I'll be using 4 to start, so lots of space to expand if I want to in the future)

i5-4440S CPU (massive overkill, but should provide good flexibility... probably going to run a Mumble server and a couple of other things for my friends off it)

Cooler Master V650 PSU (MASSIVELY overkill... it was on sale!)

ASRock H97M Motherboard

8 GB G.Skill Ripjaws DDR3 1600

4x HGST Deskstar NAS 4TB 7200RPM hard drives

120GB Samsung 840 EVO SSD as main system/server software drive

2x roughly year-old 4TB NAS drives as separate backup. When I get into the top 4TB of the array, I'll re-evaluate/get another drive for backup (by which time prices will have dropped)

RAID arrangement: standard Linux software RAID 5 with the 4 new drives, formatted with XFS. (The consensus from Google searches and my friend seems to be that XFS is best for this purpose and has the fewest question marks over reliability. ZFS introduces too many questions, and btrfs appears to be a couple/few more years from being ready.)
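For my own reference, the setup boils down to something like the sketch below, shelling out to the usual tools. The device names are placeholders I'll verify with lsblk first, since these steps are destructive to the listed drives:

# Build a 4-drive Linux software RAID 5 and format it with XFS.
# DESTRUCTIVE: verify the placeholder device names before running.
import subprocess

NEW_DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
ARRAY = "/dev/md0"
MOUNTPOINT = "/mnt/array"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # abort immediately if a step fails

# Create the RAID 5 array from the four new drives.
run(["mdadm", "--create", ARRAY, "--level=5",
     "--raid-devices=4"] + NEW_DRIVES)

# Format with XFS and mount it.
run(["mkfs.xfs", ARRAY])
run(["mkdir", "-p", MOUNTPOINT])
run(["mount", ARRAY, MOUNTPOINT])

# Persist the array config so it assembles on boot. The config path
# varies by distro (/etc/mdadm.conf on some systems).
detail = subprocess.run(["mdadm", "--detail", "--scan"],
                        capture_output=True, text=True, check=True)
with open("/etc/mdadm/mdadm.conf", "a") as conf:
    conf.write(detail.stdout)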

Thanks for the comments, all. I'm feeling rather better about this now. More than happy to discuss it further if anyone is interested/has further suggestions.

There was a saying we had in college: Those who walk into the engineering building are never quite the same when they walk out.

