My final HW config

    • My final HW config

      Hey guys,

      I have finalized my hardware list and feedback is appreciated.

      Case: Fractal Design Node 304 (wow, nobody else uses that ;)
      CPU: i3-8100T
      RAM: 16GB DDR4
      Mobo: ASRock H370M
      Cooler: Arctic Freezer 12 CO (semi-passive)
      PSU: SilverStone SFX 300W (semi-passive)
      NVMe: Corsair MP510 240GB
      HDDs: 2x 8TB WD Red

      I want to use one HDD for data and the other one for parity. The NVMe is for the OS, but it will be partitioned and also serve as a high-speed datastore.

      Should I use SnapRAID for the parity, or just sync the drives daily with rsync? Or is there a simpler option to keep the data in sync? I don't want to use RAID1, because I only want one drive spinning, to save power and noise.

      Thanks!
    • Shinobi wrote:

      Should I use SnapRAID for the parity, or just sync the drives daily with rsync? Or is there a simpler option to keep the data in sync?
      You could make use of the ZFS plugin and then use znapzend.org to automagically create snapshots on your data disk and have them sent to the other disk at intervals. This is less stressful for the disks than rsync, since neither filesystem needs to be scanned for changes each time (only snapshots are transferred sequentially). Added benefit: bit-rot detection on both disks, so in case one disk gets faulty you should still have an intact copy on the other disk (when installing ZFS, a monthly scrub is scheduled for the disks --> /etc/cron.d/zfsutils-linux)
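      To illustrate what znapzend automates under the hood, the manual equivalent is a snapshot plus an incremental send/receive; a rough sketch (pool and dataset names here are made up for illustration):

```shell
# Take a read-only snapshot of the data dataset (names are hypothetical)
zfs snapshot datapool/media@2019-06-01

# First run: full transfer to the pool on the second disk
zfs send datapool/media@2019-06-01 | zfs receive backuppool/media

# Later runs only send the delta between the last two snapshots
zfs snapshot datapool/media@2019-06-02
zfs send -i datapool/media@2019-06-01 datapool/media@2019-06-02 \
  | zfs receive backuppool/media
```

      znapzend schedules exactly these incremental send/receive cycles for you and prunes old snapshots according to a retention policy you define.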
    • tkaiser wrote:

      Shinobi wrote:

      Should I use SnapRAID for the parity, or just sync the drives daily with rsync? Or is there a simpler option to keep the data in sync?
      You could make use of the ZFS plugin and then use znapzend.org to automagically create snapshots on your data disk and have them sent to the other disk at intervals. This is less stressful for the disks than rsync, since neither filesystem needs to be scanned for changes each time (only snapshots are transferred sequentially). Added benefit: bit-rot detection on both disks, so in case one disk gets faulty you should still have an intact copy on the other disk (when installing ZFS, a monthly scrub is scheduled for the disks --> /etc/cron.d/zfsutils-linux)
      Thanks, that sounds really interesting. I hadn't really considered that yet. The goal is to have a copy disk I can switch over to if necessary. Does the plugin make full snapshots every time, or incremental ones with CBT?
    • tkaiser wrote:

      Shinobi wrote:

      Does the plugin make full snapshots every time or incremental
      The ZFS plugin 'only' allows you to use ZFS for shared folders. Znapzend then takes snapshots and transfers them incrementally to the other disk. If you're a Windows user you could even benefit from 'shadow copies' this way...
      The more I think about it, the more I think that my media files are not worth duplicating. My internet connection is more than capable of downloading all the stuff again... Sonarr and Radarr automate the process anyway. Personal data is a different story. I guess I will use both drives as individual data disks and just snapshot the important stuff onto the other one. Does that make sense?
    • Shinobi wrote:

      Hey guys,

      I have finalized my hardware list and feedback is appreciated.

      Case: Fractal Design Node 304 (wow, nobody else uses that ;)
      CPU: i3-8100T
      RAM: 16GB DDR4
      Mobo: ASRock H370M
      Cooler: Arctic Freezer 12 CO (semi-passive)
      PSU: SilverStone SFX 300W (semi-passive)
      NVMe: Corsair MP510 240GB
      HDDs: 2x 8TB WD Red

      I want to use one HDD for data and the other one for parity. The NVMe is for the OS, but it will be partitioned and also serve as a high-speed datastore.

      Should I use SnapRAID for the parity, or just sync the drives daily with rsync? Or is there a simpler option to keep the data in sync? I don't want to use RAID1, because I only want one drive spinning, to save power and noise.

      Thanks!
      I like your config :) I'll only point out one thing: if you use the M.2 slot, one SATA port will be disabled, so you will have five SATA ports left.
      Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 2X6TB Seagate Ironwolf - 2x4TB WD RED
      OMV 4.1.22 - Kernel 4.19 - omvextrasorg 4.1.2
    • Shinobi wrote:

      I settled on 2x 8TB now. With my latest idea, I would reach about 14TB capacity, with another 2TB that will be redundant in some form...
      Another idea along the lines of modern storage approaches, this time btrfs. You could partition both disks and then create two btrfs filesystems:
      • A large one using 7 TB of each disk, with no redundancy for data but redundancy for metadata. This will allow you to detect data corruption, so you know which files to replace: mkfs.btrfs -m raid1 -d single /dev/sda1 /dev/sdb1
      • A small one with redundancy for both data and metadata: mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2


      Snapshot handling on the redundant data share can be done using btrbk, and if you're a Windows user you can access older versions by using Shadow Copies (works since OMV 1). You should set up a cron job to run btrfs scrub on both btrfs filesystems at least every two months. This will repair potentially corrupted data on the 2nd share while reporting bit rot on the first share. Expanding this setup is as easy as adding another disk and rebalancing the data and metadata.
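      The bimonthly scrub could be set up as a cron entry roughly like this (the file name, mount paths, and schedule are illustrative; OMV mounts filesystems under /srv/dev-disk-by-label-*, so adjust the paths to your labels):

```shell
# /etc/cron.d/btrfs-scrub (hypothetical file)
# At 03:00 on the 1st of every second month, scrub both filesystems
# sequentially (-B blocks until done, so the two scrubs don't overlap
# while hammering the same two disks).
0 3 1 */2 * root /bin/btrfs scrub start -B /srv/dev-disk-by-label-big && /bin/btrfs scrub start -B /srv/dev-disk-by-label-safe
```

      You can check the result of the last scrub at any time with btrfs scrub status /srv/dev-disk-by-label-big.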
    • tkaiser wrote:

      Blabla wrote:

      if you use the m2 disk, one sata port will be disabled, so you will have 5 sata ports
      That's why @Shinobi talks about NVMe and not a SATA M.2 SSD.
      Cool! I didn't know that. In that case that build is perfect :D
      Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 2X6TB Seagate Ironwolf - 2x4TB WD RED
      OMV 4.1.22 - Kernel 4.19 - omvextrasorg 4.1.2
    • tkaiser wrote:

      Shinobi wrote:

      I settled on 2x 8TB now. With my latest idea, I would reach about 14TB capacity, with another 2TB that will be redundant in some form...
      Another idea along the lines of modern storage approaches, this time btrfs. You could partition both disks and then create two btrfs filesystems:
      • A large one using 7 TB of each disk, with no redundancy for data but redundancy for metadata. This will allow you to detect data corruption, so you know which files to replace: mkfs.btrfs -m raid1 -d single /dev/sda1 /dev/sdb1
      • A small one with redundancy for both data and metadata: mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2


      Snapshot handling on the redundant data share can be done using btrbk, and if you're a Windows user you can access older versions by using Shadow Copies (works since OMV 1). You should set up a cron job to run btrfs scrub on both btrfs filesystems at least every two months. This will repair potentially corrupted data on the 2nd share while reporting bit rot on the first share. Expanding this setup is as easy as adding another disk and rebalancing the data and metadata.
      That all sounds great, but I'm a little overwhelmed by the configuration and what exactly this is going to do. Is there a more detailed tutorial out there? Yes, I use Windows, so Shadow Copies would be an option.

      How should I partition the disks exactly? Do you mean with something like gparted or within OMV?
    • Shinobi wrote:

      That all sounds great, but I'm a little overwhelmed with the configuration and what exactly this is going to do. Is there a more detailed tutorial out there?
      Don't think so. Making use of any of these more advanced, modern storage approaches unfortunately still requires some deeper understanding of the inner workings of this stuff. OMV integration isn't there (yet). OMV only allows using full disks, so you would need to utilize gdisk manually to create one 7 TB partition followed by a 1 TB partition on each drive (gdisk takes care of correct partition alignment and can deal with GPT, which is why it's my only recommendation in 2019). Then you would need to execute the mkfs.btrfs commands, mount the filesystems in OMV, and later take care of running scrubs regularly and setting up snapshots.
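      A rough sketch of those manual steps, assuming the two drives show up as /dev/sda and /dev/sdb (verify the device names against your actual disks before running anything destructive):

```shell
# Partition each 8 TB drive: one 7 TB partition, then the remainder (~1 TB).
# sgdisk is the scriptable companion to gdisk; it uses GPT and handles
# partition alignment automatically.
sgdisk --new=1:0:+7T --new=2:0:0 /dev/sda
sgdisk --new=1:0:+7T --new=2:0:0 /dev/sdb

# Create the two btrfs filesystems as suggested above
mkfs.btrfs -m raid1 -d single /dev/sda1 /dev/sdb1   # big share: metadata-only redundancy
mkfs.btrfs -m raid1 -d raid1  /dev/sda2 /dev/sdb2   # small share: fully redundant
```

      After that the two filesystems can be mounted and referenced as shared folders in OMV, with scrubs and snapshots configured on top.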

      Shinobi wrote:

      I use Windows, so Shadow copies would be an option
      This as well would require manual configuration in OMV and is still a bit of an expert's job. But it's the best option for dealing with snapshots (and one of the few areas where Windows has a unique advantage over other OSes).
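      For reference, exposing snapshots to Windows as 'Previous Versions' is typically done with Samba's vfs_shadow_copy2 module; a minimal sketch of the extra share options (the snapshot directory and name format below are assumptions and must match how your snapshot tool actually names its snapshots):

```
# Extra options for the share's SMB/CIFS settings (hypothetical values)
vfs objects = shadow_copy2
shadow:snapdir = .snapshots
shadow:sort = desc
shadow:format = %Y-%m-%d_%H%M%S
```

      With this in place, right-clicking a file or folder in Windows Explorer and choosing "Previous Versions" lists the matching snapshots.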

      I guess my ZFS/btrfs proposals are more 'food for thought' than practical advice if you're not willing to invest some time in becoming more familiar with such low-level stuff (which most probably wasn't your goal when choosing OMV in the first place :) )