Raid 5 + 0 or Raid-6 or SnapRAID for home data server?

  • Hello, I'm new to OMV.

    I recently built an OMV server with OMV 6.1.1-1 (Shaitan) on an ASRock B560M-ITX/ac motherboard, 16 GB DDR4 RAM, an Intel Pentium Gold 6405 CPU, a Silverstone ECS06 6-port SATA Gen3 (6 Gbps) non-RAID PCIe Gen3 x2 card, six Toshiba 2.5-inch laptop SATA HDDs (1 TB each) for data, a 128 GB PCIe M.2 SSD for the root filesystem/operating system, and a 128 GB SATA SSD for use by Docker. The motherboard has 4 native SATA ports and one M.2 PCIe port.


    I want to build RAID 5+0 on this system with the six 2.5-inch SATA HDDs. I have created two software RAID 5 arrays with 3 drives each. However, when I try to build the stripe (RAID 0) on top of them, only one RAID 5 shows up and I am unable to create the striped array. What am I doing wrong?


    After reading up on RAID 5+0 material via Google, my understanding is that I first need to make two RAID 5 sets with 3 drives each and then make a RAID 0 (stripe) using these two RAID 5s. Is it even possible to make RAID 5+0 on OMV? If so, a pointer to a step-by-step tutorial would be great for a newbie like me; there is not much information on the net about RAID 5+0. Thanks in advance for everyone's input on this issue.


    Regards,


    Navi

    OMV 6.9.0-1 (Shaitan) on ASROCK B560M-ITX/ac Motherboard, 16GB DDR4 RAM, Intel Pentium Gold 6405 CPU, Silverstone ECS06 6 Ports SATA Gen3x2 (6Gbps) Non-RAID PCI-e card, 7(2Parity+5Data) Toshiba 2.5 inch Laptop SATA HDD's 1TB each for Data, SnapRaid with MergerFS plugin, Kingston USB-3 Data Traveler Exodia DTX/32 GB Pen Drive for Root/OS, 128GB SATA SSD for use by DOCKER and spare 128 GB PCIE M.2 SSD. Motherboard has 4 native SATA ports and 1 M.2 PCIE port. SilverStone Sugo SG13 Case.


    • Official post

    I want to build raid 5+0 on this system

    Raid 5+0 on OMV would be great for a newbie like me,

    Why?


    Based upon what you have posted, why do you believe that a RAID 5+0 is a better option than, say, RAID 6 for your six drives, and why do you believe a RAID setup is the right way to go?


    Don't get me wrong, I have no problem with a user wanting to use RAID, but when they post 'newbie' in a question I start by asking questions.

  • Hi,


    Thanks for taking the time to respond. I am doing RAID 5+0 simply because during my research I discovered, and then judged, that RAID 5+0 is the safest option when balancing speed against the cost-to-benefit of data protection. If I'm not able to implement RAID 5+0 on OMV, then I may just use RAID 5, or more likely RAID 6 (which is slower than RAID 5), or even RAID 10; but RAID 10 uses an extra 1 TB for protection, so I'm leaning towards RAID 6. My first choice, though, is RAID 5+0.


    Also, my setup has the capability to be expanded with two more SATA HDDs in the future when the need arises.


    Regards,


    Navi


    • Official post

    Simply because during my research I discovered, and then judged, that RAID 5+0 is the safest option when balancing speed against the cost-to-benefit of data protection

    That sentence alone is a 'newbie' perception, so what is your understanding of RAID?

    If I'm not able to implement RAID 5+0 on OMV, then I may just use RAID 5, or more likely RAID 6 (which is slower than RAID 5),

    Speed is of little relevance if the rest of the hardware (CPU, RAM, network) can't keep up, and you are referring to read speed, not write.


    Question: what is the drive fault tolerance of each of the following?


    RAID 5+0

    RAID 5

    RAID 6

  • A little background first:

    I will be using this NAS/RAID setup at home. I lost a bunch of precious data last year when my always-on 4 TB Seagate GoFlex Home NAS drive failed and I didn't have a complete backup. Since then I have been contemplating building a RAID to get some fault tolerance. In the short term I don't see my data requirement exceeding 4 TB for a couple of years. I plan on taking a complete backup of the RAID data every month onto a 4 TB USB 3 portable drive.


    Here is what I am considering when giving preference to Raid 5+0.


    a) Though RAID 5 gives me an extra 1 TB of usable space (5 TB from six 1 TB HDDs), it can only sustain the failure of one drive at a time, and performance degrades severely when that failure happens. Since I only need 4 TB of space, I am choosing not to go with RAID 5.


    b) RAID 6 is very attractive as it can sustain two drive failures, though it uses an additional 1 TB over RAID 5 to do that; I'm OK with that. Apparently RAID 6 performance is lower than that of RAID 5, but I realize it may not make a difference in my case as most clients will be connecting over Wi-Fi.


    c) RAID 5+0 gives me the same usable space as RAID 6, and my understanding is that it has better write performance, better data protection, and faster rebuilds than RAID 5. Performance does not degrade as much as in a RAID 5 array because a single failure only affects one sub-array. Up to four drive failures can be overcome if each failed drive occurs in a different RAID 5 array. (So I'm thinking: this is sweet, I'm getting the fault tolerance of RAID 6, better performance than RAID 6, and, unlike RAID 5, performance will not degrade severely when a drive fails.)


    d) I am rejecting RAID 10 as it sacrifices an extra 1 TB and gives only 3 TB of usable space (from six 1 TB HDDs).
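The usable-capacity arithmetic behind points a) through d) can be sanity-checked with a quick shell calculation; the drive count and size below match the six 1 TB disks described above:

```shell
#!/bin/sh
# Usable capacity of six 1 TB drives under each layout discussed above.
DRIVES=6
SIZE_TB=1

echo "RAID 5:  $(( (DRIVES - 1) * SIZE_TB )) TB"   # one drive's worth of parity
echo "RAID 6:  $(( (DRIVES - 2) * SIZE_TB )) TB"   # two drives' worth of parity
echo "RAID 50: $(( (DRIVES - 2) * SIZE_TB )) TB"   # one parity drive per 3-disk RAID 5 leg
echo "RAID 10: $(( DRIVES / 2 * SIZE_TB )) TB"     # every drive mirrored
```

This prints 5, 4, 4, and 3 TB respectively, matching the figures quoted in the thread: RAID 50 and RAID 6 give the same usable space with these six disks.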


    In conclusion, I will probably go with RAID 6 if I am not able to do RAID 5+0 on OMV. I also plan on experimenting with Docker and installing a bunch of other stuff (e.g. an MQTT broker) on this 24x7 server I'm building.


    Regards,


    Navi



    • Official post

    OK, again your reasoning is another newbie misunderstanding: RAID is not about data protection, it is about availability. There is no protection unless you have a solid, regular backup; if you like, I'll tag a few users on the forum who'll confirm the same.


    The write-speed difference you refer to will be negligible; users won't even notice.

    Up to four drive failures can be overcome if each failed drive occurs in a different RAID 5 array

    Where did you find this? You are looking at two RAID 5s striped together; the RAID 5s dictate the fault tolerance, so in this case one drive failure in each array.


    So if two drives fail in one of the RAID 5s, the data's toast.


    As to usable space, RAID 6 and your RAID 50 will give the same usable space; the difference, as you've already stated, is that RAID 6 allows for any two drive failures.

    performance will severely degrade when this failure happens.

    Performance will degrade in any RAID scenario during a rebuild.

    • Official post

    OK, to answer the question: you cannot create a RAID 50 using OMV's GUI; it can only be done from the CLI.
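For reference, the CLI route uses mdadm. The following is a sketch only: the device names (/dev/sdb through /dev/sdg) are assumptions for the six data disks, the commands must be run as root, and they destroy any existing data on those disks.

```shell
# Sketch: build two 3-disk RAID 5 arrays, then stripe them (RAID 50).
# Device names are assumptions; adjust to your system. DESTROYS DATA.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg

# Stripe the two RAID 5 arrays together into a RAID 0.
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Persist the layout so the arrays assemble on boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

After this, /dev/md2 can be formatted and used; whether OMV's GUI then manages the nested array cleanly is a separate question, which is part of why RAID 6 is the simpler choice here.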

  • Where did you find this? You are looking at two RAID 5s striped together; the RAID 5s dictate the fault tolerance, so in this case one drive failure in each array.


    So if two drives fail in one of the RAID 5s, the data's toast.

    I got this information from the Seagate RAID calculator site. Yes, it was misleading; after you pointed it out, I analyzed the statement and realized that four-drive tolerance applies to four RAID 5 arrays, not two.



  • Please consider using ZFS and RAID-Z2 for your case:

    Thanks for the suggestion; I will have to read up further on ZFS and RAID-Z2. Can it be implemented on OMV, or is it a different platform?
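(ZFS can be used on OMV via the openmediavault-zfs plugin from OMV-Extras. At the command line, a six-disk RAID-Z2 pool, which tolerates two drive failures like RAID 6, is created roughly as follows; the pool name "tank" and the by-id device paths are placeholders, and the command destroys existing data on the disks.)

```shell
# Sketch: create a 6-disk RAID-Z2 pool named "tank" (placeholder names).
# Requires root and ZFS installed; destroys existing data on the disks.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Periodic scrubs detect and repair silent corruption ("bitrot").
zpool scrub tank
```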


  • OK, to answer the question: you cannot create a RAID 50 using OMV's GUI; it can only be done from the CLI.

    Hey, again, thanks for your time. I will try out the CLI method and see if RAID 5+0 works for me. Otherwise, I think you have me convinced on a RAID 6 setup.


  • NsinghP

    Changed the thread title from "Raid 5 + 0, also know as - Raid 50" to "Raid 5 + 0, also known as - Raid 50 (Solved)".
    • Official post

    5 Simple Reasons Why RAID Is Not a Backup - blog.storagecraft.com

  • macom

    Added the label "solved".
    • Official post

    Four-drive tolerance applies to four RAID 5 arrays, not two RAID 5 arrays.

    Even with four RAID 5 arrays striped together, the likelihood of four drive failures being distributed evenly among the four member arrays (one failed drive each) is very low. That's more of a theoretical talking point than what might realistically be expected to happen. The problem for most home users and small businesses is that they don't have a plan for when the drives in their arrays become "geriatric" - 5 years +/-. When drives get old, they may begin to fail together, especially during an array rebuild after a failed drive is replaced.

    geaves is right about the network. If you have a 1 Gbit network, any RAID 5 array type can saturate a 1 Gbit connection when a network client accesses the server. With today's disks, with onboard cache, a single disk can saturate 1 Gbit as well. In essence, with a 1 Gbit network and for the purpose of increased throughput, there is no speed gain from striping RAID 5 arrays together. You'd need something faster than a 1 Gbit network to see a real difference.

    If you want "data protection", as in protection from silent corruption commonly known as "bitrot", raulfg3's suggestion of ZFS is a good one. Measures to protect against bitrot (known as a "scrub") are built into ZFS. If you're looking for "data redundancy", you need BACKUP - there's no other solution. For redundancy, you need two independent copies of the data.

    While these two goals may seem like a lot to implement, they're surprisingly easy to accomplish.

  • Why?


    Based upon what you have posted, why do you believe that a RAID 5+0 is a better option than, say, RAID 6 for your six drives, and why do you believe a RAID setup is the right way to go?


    Don't get me wrong, I have no problem with a user wanting to use RAID, but when they post 'newbie' in a question I start by asking questions.

    Just an update: based on further reading about RAID on the internet and on the advice in this thread, I have finally decided to give up my at-home RAID 6 plan in favor of a SnapRAID setup via the OMV plugin. I simply realized that data availability is not that important for my setup, and SnapRAID also takes care of the bit-rot problem.


  • Just an update: based on further reading about RAID on the internet and on the advice in this thread, I have finally decided to give up my at-home RAID 6 plan in favor of a SnapRAID setup via the OMV plugin. I simply realized that data availability is not that important for my setup, and SnapRAID also takes care of the bit-rot problem.

    Spoke too soon! I just figured out that SnapRAID does not present the combined capacity of all hard drives for use the way a RAID 6 array would. (In RAID 6, the total usable capacity of all six 1 TB hard drives, 4 TB, is available to a single shared folder; but with SnapRAID a shared folder is limited by the capacity of the physical hard drive it is located on. So in my case, since each hard drive is 1 TB, that is the maximum amount of data a shared folder on that drive can hold.)


    Time to move back to RAID-6 setup. What a pain.


  • You should use SnapRAID in conjunction with mergerfs.

    Thanks for the pointer. I just finished reading the mergerfs wiki page, and I think the combination of SnapRAID with mergerfs may just be the best solution for my home server.
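(For anyone following along, here is a minimal sketch of what that combination looks like underneath the OMV plugins; all paths, mount points, and drive labels are assumptions. SnapRAID gets one parity disk plus content files, and mergerfs pools the remaining data disks into a single mount, solving the one-disk-per-folder limitation described above.)

```shell
# /etc/snapraid.conf fragment (sketch; paths are assumptions):
#   parity /srv/parity1/snapraid.parity
#   content /srv/disk1/snapraid.content
#   content /srv/disk2/snapraid.content
#   data d1 /srv/disk1
#   data d2 /srv/disk2

# /etc/fstab line pooling the data disks into one mount via mergerfs:
#   /srv/disk* /srv/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

# Typical SnapRAID cycle: update parity, then periodically verify the data.
snapraid sync
snapraid scrub
```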


  • Are you referring to the -> page?

    Yes, it has a very clear explanation and instructions.

