Are Onboard/Chipset SATA Controllers reliable for 24/7 or should I buy a PCIe SATA/SAS HBA for a Software RAID?


    • Are Onboard/Chipset SATA Controllers reliable for 24/7 or should I buy a PCIe SATA/SAS HBA for a Software RAID?

      Hello everyone :D


      I was thinking about upgrading the storage of my current NAS in the next ~6 months from 1x 2 TB + 1x 4 TB to a RAID 6 with 6x 4 TB disks (using Btrfs as the file system).

      But since the mainboard in my NAS only has 6 + 2 SATA ports (2x 6 Gb/s + 4x 3 Gb/s from the Z77 chipset, plus 2x 6 Gb/s from ASMedia), upgradability is quite limited.

      In addition, I do not trust ASMedia and don't know how well the Z77 chipset handles 24/7 operation (especially after the SATA bug in the Intel 6 Series chipsets for the Core 2000-series CPUs).


      In this sense - would you recommend buying a SATA/SAS PCIe HBA instead of connecting the hard drives to the SATA ports provided by the chipset?

      And if yes, what about driver / hardware compatibility with OpenMediaVault?

      I know that the Linux kernel supports a wide range of hardware, but are there particular controllers which don't run (well or at all) with Debian (Broadcom/LSI?), or some that run really well?
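
      One quick way to answer this for any specific controller is to check which kernel driver binds to it before you trust it with data. A sketch (the commands are standard, but PCI addresses and driver names will of course differ on your board):

      ```shell
      # List SATA/SAS/RAID controllers and the kernel driver each one uses.
      # Onboard Intel ports are normally handled by ahci; most LSI/Broadcom
      # HBAs use mpt3sas (or the older mpt2sas), which is in mainline.
      lspci -nnk | grep -iA3 'sata\|sas\|raid'

      # Check that the driver for a prospective HBA ships with your kernel,
      # e.g. mpt3sas as used by common LSI SAS2008/SAS3008 HBAs:
      modinfo mpt3sas | head -n 5
      ```

      If `lspci -k` shows a "Kernel driver in use" line and `modinfo` finds the module, the card should work out of the box on Debian/OMV.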


      I hope someone can answer my burning question, thanks in advance and good night :thumbsup:
      Never Touch a Running System
    • tkaiser wrote:

      Do you plan to use mdraid's RAID 6 with btrfs on top or creating a btrfs raid6 with -draid6 -mraid1?
      Good question - so far I have not used Btrfs. Right now I'm still using FreeNAS 11.2 with ZFS (I've been testing this setup for 3 months now), but I'd like to switch back to OpenMediaVault and retain the ability to take snapshots.

      I know that I can also use ZFS with OpenMediaVault (Proxmox kernel / ZoL), but since Btrfs is already part of the mainline Linux kernel rather than an out-of-tree module, I want to use it over ZFS.
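
      For what it's worth, you can check how Btrfs is shipped in the kernel you are actually running (built in vs. loadable module - both are in-tree, unlike ZoL, which is built out-of-tree via DKMS). The config path below assumes a stock Debian kernel:

      ```shell
      # 'y' = compiled into the kernel image, 'm' = loadable (but in-tree) module
      grep CONFIG_BTRFS_FS= /boot/config-$(uname -r)

      # If it is a module, confirm it is loaded or loadable (-n = dry run):
      lsmod | grep btrfs || modprobe -n -v btrfs
      ```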

      Anyway, I have little to no experience with Btrfs, but I do not want to use ext4 anymore ;)
      Never Touch a Running System
    • Alchemist wrote:

      since Btrfs is already Part on the Linux Kernel and not a Kernel Module, i want to use it over ZFS
      This can also be a problem since features/fixes depend on the kernel version as well. With a default OMV4 installation you're running with kernel 4.9 so you need to either install a backports kernel or proxmox kernel (recommended) to get a more recent kernel version.

      When you combine mdraid6 with btrfs, your whole array is prone to the 'classical' parity RAID write hole: a crash or power loss while the array is degraded can result in data corruption, and in such situations the btrfs filesystem on top can be lost as a whole. Using btrfs' own raid56 mode requires a really recent kernel, and you are still affected by the parity RAID write hole (creating the array with -draid6 -mraid1 protects the filesystem metadata, but data corruption might still occur if you run the RAID fully degraded and experience a crash or power loss).

      I would strongly recommend reading through github.com/openmediavault/openmediavault/issues/101 (it deals with a lot of FUD/BS and talks about OMV5, i.e. at least kernel 4.19 and btrfs-progs 4.20!) or choose the TL;DR variant: ZFS/RAIDz2
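
      Whichever layout you end up with, btrfs at least lets you detect the corruption cases described above. A periodic scrub plus the per-device error counters would look roughly like this (the mount point /srv/data is a placeholder - adjust to your actual OMV mount path):

      ```shell
      # Verify all data and metadata checksums on the array; with raid1
      # metadata, btrfs can repair metadata from the mirrored copy.
      # -B runs in the foreground so the exit code reflects the result.
      btrfs scrub start -B /srv/data

      # Show accumulated read/write/checksum error counters per device:
      btrfs device stats /srv/data
      ```

      Running the scrub from cron (e.g. monthly) is a common way to catch silent corruption before a disk actually fails.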
    • tkaiser wrote:

      This can also be a problem since features/fixes depend on the kernel version as well. With a default OMV4 installation you're running with kernel 4.9 so you need to either install a backports kernel or proxmox kernel (recommended) to get a more recent kernel version.
      When you combine mdraid6 with btrfs, your whole array is prone to the 'classical' parity RAID write hole: a crash or power loss while the array is degraded can result in data corruption, and in such situations the btrfs filesystem on top can be lost as a whole. Using btrfs' own raid56 mode requires a really recent kernel, and you are still affected by the parity RAID write hole (creating the array with -draid6 -mraid1 protects the filesystem metadata, but data corruption might still occur if you run the RAID fully degraded and experience a crash or power loss).

      I would strongly recommend reading through github.com/openmediavault/openmediavault/issues/101 (it deals with a lot of FUD/BS and talks about OMV5, i.e. at least kernel 4.19 and btrfs-progs 4.20!) or choose the TL;DR variant: ZFS/RAIDz2
      Looks like I have to stay with my current setup for the time being - I didn't think Btrfs still had issues like this ...

      But since I am only doing the upgrade in half a year (or later), either the Btrfs RAID 6 problems will be solved by then or I'll go with ZFS RAIDz2 instead :)
      Never Touch a Running System
    • Alchemist wrote:

      either the Btrfs RAID 6 problems will be solved by then or I'll go with ZFS RAIDz2 instead
      Well...

      • the 'Btrfs RAID 6 problems' are actually solved with recent kernel versions, except for the well-known parity RAID 'write hole' problem that also affects close to 100% of OMV users relying on either RAID5 or RAID6. With mdraid, users can live with this rarely occurring problem (you're only affected with a fully degraded RAID, which with RAID6 means two array members are missing!), while with btrfs the very same problem is a show stopper for whatever reasons
      • If you want to go the RAIDz2 route, do yourself a favour and conduct a quick web search for 'ZoL sequential resilver' first (this problem was addressed on Solaris ages ago but was/is still a problem at least with Linux - haven't checked for a while)
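
      For reference, the RAIDz2 variant with six 4 TB disks would look something like this (the pool name 'tank' is a placeholder, and the by-id paths are dummies - use your drives' actual /dev/disk/by-id entries so the pool survives device renumbering):

      ```shell
      # 6-disk RAIDz2: any two disks may fail without data loss;
      # ashift=12 matches 4K-sector drives and cannot be changed later.
      zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

      # Verify the layout and health of the new pool:
      zpool status tank
      ```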