8-bay home NAS.


      cost & shopping list

      I haven't done any hardware builds recently, and I have gotten a bit lazy. Then, when my Synology 4-bay NAS started filling up, I started considering the options. I could:

      - change 4x4TB to 4x6TB ... the total cost would be 4x1450 RMB = 5800 RMB (~733 EUR), adding roughly 6TB of usable capacity (the raw gain is 8TB, but one disk's worth goes to redundancy). that's roughly 966 RMB (~129 EUR) per additional TB.

      - buy a new 5-bay NAS (e.g. Synology DS1515) and new hard drives at either 4 or 6 TB:

      4TB: 4900 RMB plus 5x4TB at 800 RMB each: 8900 RMB (~1186 EUR). this would end up costing 556 RMB (~74 EUR) for each additional TB

      6TB: 4900 RMB plus 5x6TB at 1450 RMB each: 12150 RMB (~1620 EUR). this would end up costing 506 RMB (~67 EUR) for each additional TB

      - build a custom NAS from scratch. The parts i went with for this calculation, after some research:

      the enclosure: 1300 RMB
      item.taobao.com/item.htm?id=18453969007

      the psu: 150 RMB
      item.taobao.com/item.htm?id=43239748874

      the board: 600 RMB
      item.taobao.com/item.htm?id=38675467623

      disks: 800 RMB each
      detail.tmall.com/item.htm?id=522889894501

      plus, obviously, a boot drive (generic 60G msata at around 120 RMB) and RAM (8GB at around 150 RMB)

      this gives me a total cost for the hardware of

      8720 RMB (~1162 EUR) for 8x4TB, which gives me a usable capacity of ~28TB, resulting in a total cost of 311 RMB (~42 EUR) per TB.
      7120 RMB (~949 EUR) for 6x4TB, which gives me a usable capacity of ~20TB, resulting in a total cost of 356 RMB (~47 EUR) per TB.
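      the per-TB numbers above are just total hardware cost divided by usable capacity, where usable capacity assumes one disk's worth of redundancy, i.e. (n-1) x 4TB. a quick sketch of the arithmetic:

```shell
# cost per usable TB for the two custom-build variants:
# 8 disks -> 2320 + 8*800 = 8720 RMB, 7*4 = 28 TB usable
# 6 disks -> 2320 + 6*800 = 7120 RMB, 5*4 = 20 TB usable
for spec in "8720 28" "7120 20"; do
  set -- $spec
  echo "$1 RMB / $2 TB usable = $(( $1 / $2 )) RMB per TB"
done
```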

      this choice is pretty much a no-brainer: the off-the-shelf version clocks in at almost twice the price, and upgrading the existing NAS would have been almost as expensive in absolute numbers and three times as expensive per TB - and i am not even factoring in that with every option involving a new enclosure, the old one would be left untouched.

      so i decided to go with the last option - an 8-bay home-built NAS, populated for now with only 6 disks.

      requirements and choosing a distro

      at home, i am using macos, android and linux, so samba isn't all that important. afp and especially working support for time machine backups were much higher on the list. the main workload for my NAS will be media storage and backups ... and being able to run standard stuff (e.g. some SQL databases, maybe a mongo, gitlab etc) on a familiar platform is definitely a big plus. a nice (working) UI is a must.

      freeNAS

      i have heard a lot about freeNAS before, so that was pretty much my starting point. but i am using linux a lot, so FreeBSD is a little bit off the beaten path for me. the main thing that put me off was that apparently, time machine isn't as well supported with netatalk on FreeBSD as it is on linux. i tried it, and it didn't suck, but i kept looking anyways.

      openfiler, rockstor

      next, i looked at some more storage-centric distros. i didn't actually try these. from what i have read, i got the gut feeling that these don't have the kind of feature set i would expect coming from synology DSM with its broader approach to what a NAS can do.

      open media vault

      next, i gave OMV a try in a virtualbox environment. i did a couple of experiments with the raid management, installed ZFS, virtually "unplugged" drives and recovered from failures, and the impression i got was that it is quite a solid system. so i decided to use it for my actual build.

      installation

      i created a USB installer from the ISO using unetbootin, and after i figured out that the first stick i created (which previously had a debian jessie installer on it) apparently hadn't been written correctly, the installation went extremely smoothly.

      since OMV shows the serial numbers of the disks in the UI, figuring out the order of the SATA ports was also smooth sailing. everything on the board is pretty much standard, so there was no need to install any extra components - everything was recognized and working out of the box.

      the first glitch i ran into was that i didn't really want to use a screen on the machine after the initial setup, but rather use SSH and the web interface. so, after i set a static IP in /etc/network/interfaces, i shut down, unplugged everything, shoved it into the corner, and fired it up ... and it didn't come up. first, i disabled the on-board graphics in the BIOS, but that wasn't enough. after some research, it turned out that grub was the culprit.

      so, in

      /etc/default/grub

      i changed this:

      GRUB_CMDLINE_LINUX_DEFAULT="quiet"

      to this:

      #GRUB_CMDLINE_LINUX_DEFAULT="quiet"
      GRUB_CMDLINE_LINUX="text"
      GRUB_TERMINAL=console

      to convince grub to NOT do graphics. i updated grub with:

      update-grub

      after this, the box came up without a hitch (and without a screen). i think changing these defaults would make for a useful patch OMV could apply to the standard debian stuff.

      filesystem

      now, i had a running OMV with six blank 4T disks. the next big question was how to use the disks. the options are, basically:

      - ZFS
      - RAID + LVM
      - LVM + RAID (did you ever set this up? not really an option ... )

      there's plenty of opinions on ZFS. the TLDR to me sounds something like "it's not GPL, but if it were, it would be awesome". and, while i agree that it would probably be a good idea to license it under the GPL ... it isn't.

      there are some good arguments FOR zfs:

      - it is just one layer. and it's pretty straight-forward
      - it grows and shrinks very easily
      - it's strong on data integrity
      - it has awesome features, like snapshots and incremental transfers thereof
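      the snapshot/incremental-transfer combo is the feature i mean in that last point. a minimal sketch - the dataset name matches my pool, the target host and dataset are made up:

```shell
# take two point-in-time snapshots of a dataset (these are instant and cheap)
zfs snapshot ZRAID/media@before
zfs snapshot ZRAID/media@after
# then ship only the blocks that changed between them to another machine
zfs send -i ZRAID/media@before ZRAID/media@after | ssh backuphost zfs receive tank/media
```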

      there's also good arguments for RAID+LVM:

      - built into the kernel (no extra compile step)
      - lvm allows setting the size of every logical volume created on a VG
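      to illustrate the size-per-volume point: with md+lvm, the array becomes one big PV, and each logical volume gets its own explicit size. a sketch - the device names are assumptions:

```shell
# software RAID5 across six disks, then carve individually sized volumes out of it
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
pvcreate /dev/md0
vgcreate nas /dev/md0
lvcreate -L 4T -n media nas      # a hard 4T ceiling for this volume
lvcreate -L 500G -n backups nas  # and a separate 500G one
mkfs.ext4 /dev/nas/media
```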

      especially with a distribution like OMV, and given how smoothly installing OMV extras and ZFS went, i don't think rebuilding the custom kernel modules is going to be an issue. not having a size per volume doesn't really bother me either. the simplicity of growing and shrinking, however, is just marvelous.

      but the deciding moment came when i tried to test the performance of each one. i set up a RAID-Z pool, created a shared folder on it, mounted it, saw around 3 MB/s, realized it was actually a network problem that had nothing to do with OMV, fixed that, tried again, and got >100 MB/s over 1 Gbit LAN and 189 / 430 MB/s read/write locally.
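      a simple way to get local sequential numbers like these is a pair of dd runs. the test file path and sizes here are arbitrary:

```shell
# sequential write, flushed to disk so the cache doesn't flatter the number
dd if=/dev/zero of=/ZRAID/ddtest bs=1M count=4096 conv=fdatasync
# drop the page cache, then do a sequential read
echo 3 > /proc/sys/vm/drop_caches
dd if=/ZRAID/ddtest of=/dev/null bs=1M
rm /ZRAID/ddtest
```

      caveat: with ZFS compression enabled, zeroes compress to nothing and inflate the write figure - use a non-compressible source for honest numbers.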

      then i tore it down and set up an mdadm RAID set ... and realized it wanted me to wait almost 20 hours for the initial "resync" ... even cranking up the limits in /proc didn't bring it down to anywhere near reasonable.
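      the /proc knobs in question are the md resync throttles; raising the floor makes the initial sync more aggressive, at the cost of foreground I/O:

```shell
# values are in KB/s; speed_limit_min is the guaranteed floor
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
echo 200000 > /proc/sys/dev/raid/speed_limit_min
# resync progress and ETA show up here
cat /proc/mdstat
```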

      so i aborted that and went with ZFS.

      ZFS tweaks

      one obvious thing to try would have been deduplication, but the board has only one memory slot, so i can't cram in as much RAM as dedup would need. unfortunately, that experiment is off the table.
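      for what it's worth, ZFS can estimate the dedup win without enabling it, and the usual rule of thumb is on the order of 5 GB of RAM per TB of deduplicated data - exactly what a single-slot board can't provide:

```shell
# simulate dedup on the existing pool and print the projected dedup ratio
# (read-only, but it can take a while and eat RAM on large pools)
zdb -S ZRAID
```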

      another thing was to set "acltype" to "posixacl" in

      ZFS -> $POOL -> Edit -> Add Property

      while this does NOT show up again when i re-open the Edit dialogue, i can now configure ACLs in:

      Access Rights Management -> Shared Folders -> $Folder -> ACLs

      which did cause errors before.
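      the same property can also be set from the shell, which makes it easy to verify that it stuck even though the Edit dialogue doesn't show it. the pool name is from my setup; xattr=sa is often recommended alongside so the ACLs are stored efficiently, though that part is an extra assumption:

```shell
zfs set acltype=posixacl ZRAID
zfs set xattr=sa ZRAID
zfs get acltype,xattr ZRAID   # confirm both properties took effect
```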

      Plugins

      i installed some basic plugins and i am very happy with them (so far). netatalk seems solid, and remote shares (for migrating data from my synology) also looks fine.

      in other areas, i was a bit disappointed. for example, the docker plugin always gives me a "connection refused" (haven't investigated much yet), and the owncloud plugin is just abysmal - but then again, so is owncloud. it is a real shame that something that has been around this long, with a concept this good, still isn't up to snuff in convenience and functionality.

      the upside of this is - if i really want it, i can just SSH into the box and install it myself. the nginx config is pretty neat and easy to extend.
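      as an example of what "extend" means here: dropping an extra server block next to OMV's and reloading is usually all it takes. the file path, port and backend are assumptions - check where your OMV version keeps its nginx config:

```shell
# hypothetical extra site proxying to some service installed by hand
cat > /etc/nginx/sites-available/myapp <<'EOF'
server {
    listen 8080;
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
EOF
ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
nginx -t && service nginx reload   # test the config before reloading
```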

      wrapping it up

      the installation of OMV was pretty straight-forward, and there were not as many snags as i had feared. still, if a technically not-so-savvy friend asked me for advice, i'd probably recommend a synology instead.

      some of the snags could be avoided:

      - the grub thing: this would be a rather simple patch, given that by default no desktop environment is installed, and most people wouldn't want their NAS to need a monitor in order to boot.

      - choosing the right file system
      this is a bit more tricky, since it very much depends on the planned size, extensibility, use case, data integrity requirements etc etc ... but it might well be worth a

      open points

      The case has not arrived yet, so i will be back with the final assembly. also, i am not entirely sure the current PSU (270W) is actually necessary. the disks draw on average 4W each, with a peak of 21W. as for the board, i have no specs whatsoever. i will try to get hold of an ampere meter and measure the actual consumption of this thing.
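      the worst case is easy to estimate from the quoted figures: all eight bays spinning up at once at the 21W peak:

```shell
# peak disk draw with a fully populated 8-bay enclosure
disks=8; peak_w=21
echo "peak disk draw: $(( disks * peak_w )) W"
# that's 168 W of the 270 W budget, leaving ~100 W for board, RAM and boot
# drive - likely fine, and staggered spin-up would relax even that
```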
      Images
      • overview-no-case.jpg
      • sata-ports-2.jpg
      • sata-ports.jpg
      • msata.jpg
    • here are some rrd graphs - this is copying roughly 6TB of data onto the NAS: 4.5TB from the old NAS, copied by mounting it as a samba remote share; the rest was a time machine backup and a USB disk copied over.
      Images
      • rrd_network.png
      • rrd_cpu.png
      • rrd_load.png
    • the board. uh, well. there are a couple of people selling this on taobao. some call it "群晖" (that's "synology"). nobody actually puts a manufacturer name on it, and it came in a nondescript brown box, so i have no clue where it comes from.

      it actually has THIRTEEN sata ports (there's another one, in black, in the middle of the board). there is one intel 6-port sata controller and two marvell 9215 controllers with 4 ports each.

      Source Code

      root@nas:/ZRAID/media/movies# lspci
      00:00.0 Host bridge: Intel Corporation ValleyView SSA-CUnit (rev 0e)
      00:02.0 VGA compatible controller: Intel Corporation ValleyView Gen7 (rev 0e)
      00:13.0 SATA controller: Intel Corporation ValleyView 6-Port SATA AHCI Controller (rev 0e)
      00:14.0 USB controller: Intel Corporation ValleyView USB xHCI Host Controller (rev 0e)
      00:1a.0 Encryption controller: Intel Corporation ValleyView SEC (rev 0e)
      00:1b.0 Audio device: Intel Corporation ValleyView High Definition Audio Controller (rev 0e)
      00:1c.0 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev 0e)
      00:1c.1 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev 0e)
      00:1c.2 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev 0e)
      00:1c.3 PCI bridge: Intel Corporation ValleyView PCI Express Root Port (rev 0e)
      00:1f.0 ISA bridge: Intel Corporation ValleyView Power Control Unit (rev 0e)
      00:1f.3 SMBus: Intel Corporation ValleyView SMBus Controller (rev 0e)
      01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
      02:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
      03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
      04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)



      Source Code

      root@nas:/ZRAID/media/movies# cat /proc/cpuinfo
      processor : 0
      vendor_id : GenuineIntel
      cpu family : 6
      model : 55
      model name : Intel(R) Celeron(R) CPU J1900 @ 1.99GHz
      stepping : 8
      microcode : 0x829
      cpu MHz : 2000.029
      cache size : 1024 KB
      physical id : 0
      siblings : 4
      core id : 0
      cpu cores : 4
      apicid : 0
      initial apicid : 0
      fpu : yes
      fpu_exception : yes
      cpuid level : 11
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer rdrand lahf_lm 3dnowprefetch ida arat epb dtherm tpr_shadow vnmi flexpriority ept vpid smep erms
      bogomips : 4000.05
      clflush size : 64
      cache_alignment : 64
      address sizes : 36 bits physical, 48 bits virtual
      power management:

      (processors 1-3 are identical apart from core id, apicid and bogomips)