Grew RAID5, can't grow filesystem, only spare?

  • I asked the question, OMV 2 Versus OMV 3 - Growing RAID 5, in a new thread. Here's hoping one of the programmers, a moderator, or another subject matter expert takes a look.


    OMV 2 Versus OMV 3 - Growing RAID 5

    Hello,
    Wow, it took some time to get some replies. I will give your link a shot, but yes, I think something is not working right in OMV3 when it comes to growing a RAID5. And yes, I did wipe the drive prior to hitting the button.


    Update - your command to add the spare to the RAID in your link worked! Reshaping now!
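
    (For anyone following along, you can watch the reshape progress from the console; /dev/md0 here is an example device name:)

    cat /proc/mdstat
    mdadm --detail /dev/md0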

    • Official Post

    Hello, Wow, it took some time to get some replies. I will give your link a shot, but yes, I think something is not working right in OMV3 when it comes to growing a RAID5. And yes, I did wipe the drive prior to hitting the button.


    Update - your command to add the spare to the RAID in your link worked! Reshaping now!

    At this point, I've used mdadm command lines to "add" and "grow" a RAID 5 array several times. I've created failures and recovered. So far, it's been flawless but it's important to note those operations were in a VM. (The real world, with a broad spectrum of hardware, is much different.)
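
    For reference, the sequence I've been testing looks roughly like this (device names are examples):

    mdadm /dev/md0 --add /dev/sde
    mdadm --grow --raid-devices=5 /dev/md0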


    You said it was reshaping. So how did it end up for you? And if you've been running RAID 5 for a while, how long has it been and what issues have you experienced so far? Thanks.

  • At this point, I've used mdadm command lines to "add" and "grow" a RAID 5 array several times. I've created failures and recovered. So far, it's been flawless but it's important to note those operations were in a VM. (The real world, with a broad spectrum of hardware, is much different.)
    You said it was reshaping. So how did it end up for you? And if you've been running RAID 5 for a while, how long has it been and what issues have you experienced so far? Thanks.

    The RAID grew successfully! I have never had issues with mdadm managing my RAID for like 8 years. I recovered from 2 HD failures over the years and migrated the RAID array to a new system. Perfect. This was the first time I grew the RAID to 5 devices. So far, very happy.
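
    (In case anyone wants to repeat the migration: on the new system, mdadm can normally reassemble the array from the drives' superblocks. This is a sketch, not the exact steps I ran:)

    mdadm --assemble --scan
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf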

    • Official Post

    The RAID grew successfully! I have never had issues with mdadm managing my RAID for like 8 years. I recovered from 2 HD failures over the years and migrated the RAID array to a new system. Perfect. This was the first time I grew the RAID to 5 devices. So far, very happy.

    I have one last question, if you have a minute. Since you added a drive to your array AFTER the array was created, is the new drive the exact same size AND model as the existing drives, or is it slightly dissimilar?


    Thanks

  • I have one last question, if you have a minute. Since you added a drive to your array AFTER the array was created, is the new drive the exact same size AND model as the existing drives, or is it slightly dissimilar?
    Thanks

    I have a mix of 2x3TB and 3x2TB drives. I found a deal on 3TB enterprise drives that were as cheap as WD 2TB Reds. mdadm is smart enough to treat them as a RAID5 array of 5x2TB; brand is irrelevant. Easy. In the future, if I replace the remaining 2TB drives with 3TB drives, I can grow the RAID to a 5x3TB array. Hope that helps.
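
    (To put numbers on that: RAID5 usable capacity is (n - 1) times the smallest member. With 5 drives capped at 2TB, that's 4x2TB = 8TB usable; with all five at 3TB, it would be 4x3TB = 12TB.)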

    • Official Post

    I have a mix of 2x3TB and 3x2TB drives. I found a deal on 3TB enterprise drives that were as cheap as WD 2TB Reds. mdadm is smart enough to treat them as a RAID5 array of 5x2TB; brand is irrelevant. Easy. In the future, if I replace the remaining 2TB drives with 3TB drives, I can grow the RAID to a 5x3TB array. Hope that helps.

    It did (help).


    I have experience with hardware RAID, from back in the day, but only with two different controllers, and all drives in the arrays were exactly the same (size and OEM). Software RAID, for me, is a new thing.


    As you're aware, I've done some RAID testing in a VM but those tests are limited. While the size can be varied (where the lowest size becomes the standard for the array), I can't do anything with different drive models, different rotation speeds, cache sizes, etc. It seems Linux software RAID is much more flexible than I'd ever have expected.
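
    (For what it's worth, mismatched sizes can be tested even without real disks, using loopback files. A sketch, with hypothetical image names and loop numbers:)

    truncate -s 2G disk0.img disk1.img disk2.img
    truncate -s 3G disk3.img
    losetup -f --show disk0.img    # repeat for each image; prints the /dev/loopN it used
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3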


    Thanks for the info.

  • Yeah, it's super flexible. I don't even bother to look at rotational speeds, cache, or brand. As long as it's a reliable drive that supports RAID and it goes on sale, I buy it. When I made a new NAS, I simply threw the old drives into the new case and OpenMediaVault recognized the array, ready to mount. Easy.

  • I had the same experience.


    Adding a fifth drive to my array via the GUI only added it as a spare.
    I went to the console and entered


    mdadm --grow --raid-devices=5 /dev/md0


    Now the new drive is part of the array and the RAID is reshaping.
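
    (One note, since the thread title mentions the filesystem: once the reshape finishes, the filesystem still has to be grown to use the new space. Assuming ext4 on /dev/md0, that would be something like:)

    resize2fs /dev/md0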


    I love OMV; I ran version 2 for years and everything was easy. Maybe it's my imagination, but I find a lot of things require a lot more console intervention in 3. Many things just don't seem to work properly right out of the box, and I see they are already working on 4?

    • Official Post

    Adding a fifth drive to my array via the GUI only added it as a spare.
    I went to the console and entered


    mdadm --grow --raid-devices=5 /dev/md0


    Now the new drive is part of the array and the RAID is reshaping.

    If you have 5 drives in an mdadm RAID5 array, you're living a bit on the dangerous side. I sincerely hope you have a full data backup. This is especially true if any or all of your drives have some age on them.


    Good Luck.

  • If you have 5 drives in an mdadm RAID5 array, you're living a bit on the dangerous side. I sincerely hope you have a full data backup. This is especially true if any or all of your drives have some age on them.
    Good Luck.

    It's RAID6, but why would 5 drives be more dangerous than 4?

    • Official Post

    It's RAID6, but why would 5 drives be more dangerous than 4?

    It's statistical probability. Individual hard drives have a life of around 4 to 5 years of 24x7 use. As more individual drives are added to an array, the probability of a single drive failure increases. The greater the number of drives, the greater the chance of a drive failure and, since the weakest drive in the array sets the bar, the first failure may come sooner than 4 years.
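
    (To put a rough number on it: if each drive independently has, say, a 5% chance of failing in a given year, the chance that at least one of 5 drives fails that year is 1 - 0.95^5, or about 23%, versus about 19% for 4 drives. The 5% figure is an assumption; the point is that the risk compounds with every drive added.)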


    Then there's the age factor. Given that drive failure is an ever increasing possibility as they grow older, some data centers have a replacement policy where drives are swapped out at year 4 or 5 as a matter of practice.
    In a home NAS environment, a drive typically gets replaced after it completely fails. Unfortunately (probability again), when a drive fails, the remainder of drives in the array tend to be old and closer to failing themselves. Rebuilding the array, when a replacement drive is added, can take several hours or even days depending on the size of the array. This is a nonstop drive torture test for those remaining (old) drives in the array. Cascade failures of 1 or more drives during an extended rebuild can, and do, happen.
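
    (Rough arithmetic, with assumed numbers: rebuilding a 2TB member at a sustained 100 MB/s takes about 2,000,000 MB / 100 MB/s = 20,000 seconds, or roughly 5.5 hours, and real-world rates under load are often far lower, so larger drives can stretch this into days.)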


    RAID 6 does give you more tolerance than RAID5 but, even with RAID 6, I wouldn't put more than 5 drives in the array. I see the practical limit for RAID5 as 4 devices. Outside of a multi-vdev ZFS RAID pool or another nested RAID arrangement (like RAID10), having more than 5 drives in a single array is risky.


    In any case, with a good backup of your entire data store, the potential risks fade considerably.


    (These are just my opinions. :) You may see things differently.)
