My old file server (not as old as this one) needed replacement, and I'd just moved into a house with a basement and thus a wall-mounted equipment rack. It was time to replace old hardware before it started getting flaky and unreliable due to capacitor aging.
I'd love to have a commercial NAS, but I want a few features that still don't exist in the consumer market, and I'm too cheap to buy a $2000 (plus disk) business NAS. I want a short-depth rack case, with support for at least 6 drives, with hardware-accelerated crypto, ECC memory, SSD caching / tiering, and 10GbE (upgradeable is okay; I don't need it now, but will want it before the NAS gets retired). So that means once again I'll be building it myself. And once again, this will hopefully be the last time I'm choosing between a home-built box and a commercial offering at 3-4x the price.
First up: the parts
- X-Case X339-C8-LCD short-depth rack
- ASRock Rack C236 WSI mini ITX motherboard
- Intel Pentium G4400 CPU
- 8GB Crucial DDR4-2400 ECC RAM
- 120GB Mushkin Ventura Ultra USB 3.0 flash disk
- 4x Seagate IronWolf 8TB NAS drives
Apparently that rack case is the only one in the world that meets my requirements: short depth, with 6+ drive bays, ideally 3U. Since it's trivial to get a mini ITX motherboard with lots of SATA ports, this seemed like it should be an easy combination to find, but it was not. That case is great, though. The power-on-hours counter on the LCD has only 3 digits for the hours portion, which means it rolls over uselessly fast, but it does usefully report the health of the two fans behind each half of the disk array. And despite being a mini-ITX-only case, it has space for 5 (!!) PCIe cards or I/O brackets. I've not found a pre-made expander to bifurcate an x16 slot into 4x4, though this riser, which the manufacturer specifically says will support my motherboard, will break out two x8 slots to support a 10GbE NIC as well as a PCIe SSD.
To be able to use ECC memory I need to use one of Intel's C-series chipsets, in this case the C236, which is basically the workstation / server certified version of the Z170 chipset. Although the consumer chipsets don't support Xeons, the server chipsets still do support the consumer CPUs. In particular, they'll support the dirt-cheap Pentium-branded G4400, a low-end, low-power chip which inexplicably supports ECC memory. And hardware-accelerated AES crypto. And a lousy 16 lanes of PCIe, which is actually perfect for an ITX motherboard with only one PCIe slot. The motherboard has 8 SATA ports, a nice match for the case I got, as well as an on-board USB 3 Type-A header for the boot drive. I had originally intended to use an 8GB USB 2.0 flash disk made with SLC flash for high endurance, but this board has no USB 2 header. I could have modified it to fit the USB 3 header, but decided instead to buy a fast flash disk: I traded the older SLC flash for inferior MLC flash with much better wear leveling, and got a significant size advantage in the bargain.
I don't need memory bandwidth for application performance, and DRAM was at the expensive end of its boom/bust cycle, so I bought a single stick of RAM, leaving the second slot open for future upgrades. I'll probably add a 16GB stick when RAM prices crash in a few years. OS storage is handled by the now-discontinued Mushkin drive, which is based on an old SandForce SSD controller behind a USB-to-SATA bridge chip. It thus has far better performance, and far better endurance, than a generic USB flash disk, and I'm not worried about wearing it out with log file writes.
I don't get too hung up on HDD brand loyalty, but did spring for NAS-grade drives as I judged the premium over consumer parts was worth it, particularly if I ever end up using the extra 4 bays in the case and thus subject the drives to even more vibration. The cheap option would have been to use desktop drives, or perhaps to shuck external drives, depending on the day. Still, $30/TB in 2017 seemed a good price.
Even this low-end processor can run AES in XTS mode with a 512-bit key at over 2.5GiB/s, which is plenty fast enough to keep up with even 8 drives, let alone the four I've initially populated. Besides which, I'm comfortably bottlenecked by network speed (currently 1GbE, but 10GbE once prices fall just a wee bit more) even if the disks weren't an issue. Modern CPUs are so fast that Linux doesn't even bother to benchmark RAID5 checksumming speed anymore, and this weedy little CPU exceeds 15GiB/s on RAID6's much more demanding calculations.
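As a quick sanity check on that claim, here's the arithmetic. The per-drive sequential rate is my assumption (roughly 250 MB/s for a 7200rpm NAS drive), not a measured figure:

```python
# Back-of-the-envelope check: does the CPU's AES-XTS throughput keep up
# with a fully populated 8-drive array? The per-drive sequential rate
# (~250 MB/s sustained) is an assumed round number, not a measurement.
GIB = 2**30

aes_xts_throughput = 2.5 * GIB   # >2.5 GiB/s per the cryptsetup figures above
per_drive = 250e6                # ~250 MB/s sustained, assumed
drives = 8

array_peak = per_drive * drives  # aggregate sequential bandwidth, ~2.0 GB/s
print(f"array peak:      {array_peak / GIB:.2f} GiB/s")
print(f"crypto headroom: {aes_xts_throughput / array_peak:.2f}x")

# Crypto still clears the fully populated array with headroom to spare.
assert aes_xts_throughput > array_peak
```

Even with eight spindles all streaming at once, the crypto engine has about a third again as much headroom, so encryption never becomes the limiting factor.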
That leaves the disks themselves as the major bottleneck. I'm not supporting many concurrent users, and am mostly serving large files that don't stress the IOPS performance of the drives, so SSD caching is currently a low priority. Sequential reads hit about 650MiB/s from the 4-drive RAID5 array; crypto and filesystem overhead drop that to about 570MiB/s, with writes at about 400MiB/s. I'm quite pleased by the low encryption overhead, and I suspect that with either more spindles or an SSD caching layer, I could readily saturate 10GbE.
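To put those measured numbers against the network, here's a rough conversion to line rates. This ignores protocol overhead, so real payload rates over the wire would be somewhat lower:

```python
# Rough check of where the bottleneck sits: measured array throughput
# (figures from the text) converted to Gbit/s and compared against
# 1GbE and 10GbE line rates. Protocol overhead is ignored.
MIB = 2**20

read_mibs = 570   # encrypted sequential reads, MiB/s
write_mibs = 400  # encrypted sequential writes, MiB/s

def to_gbit(mib_per_s):
    """MiB/s -> decimal Gbit/s, the unit Ethernet speeds are quoted in."""
    return mib_per_s * MIB * 8 / 1e9

print(f"reads:  {to_gbit(read_mibs):.2f} Gbit/s")
print(f"writes: {to_gbit(write_mibs):.2f} Gbit/s")

# Today's 1GbE link is the bottleneck; a 10GbE link would flip that,
# leaving headroom above what the current 4-drive array can deliver.
assert to_gbit(read_mibs) > 1.0
assert to_gbit(read_mibs) < 10.0
```

Encrypted reads work out to roughly 4.8 Gbit/s, nearly five times what 1GbE can carry, but only about half of a 10GbE link, which is why more spindles or an SSD tier would be needed to saturate the faster network.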
Something like the Synology RS1219+ really ought to work. It's a short-depth rack NAS with 8 bays, built around an Atom CPU that supports NBASE-T Ethernet, ECC memory, and hardware-accelerated crypto. But Synology ships it without ECC and with only 1GbE, so the PCIe slot that would otherwise make a nice host for an SSD cache would have to be used for a NIC instead. With ECC RAM, the ability to host an M.2 or AIC SSD, and a 10GbE NIC, it would easily have been worth its price, roughly 2x what I spent on my NAS. Sadly, to get the features I want, you have to step up to something like the RS3618xs, which costs about as much empty as my NAS does with 8 drives in it. It's in a completely different class, with a Xeon-D processor that you're supposed to use for VM hosting as well.