Drive replacement/upgrade on OMV 4 - SnapRaid & Union Filesystems plugin

  • Hi All,

    Just wanted to get some advice from people who have done these things many more times than I have.

    Current config is an R510 with 8 x 3.5" drives formatted in ext4.
    - SnapRaid configured through GUI as:

    5x 4TB & 1x 2TB as Content/Data disks

    1x 4TB & 1x 2TB as Parity disks

    - Data disks are joined into a single datastore using the Union Filesystems plugin using the Most Free Space policy.

    I have just bought 2 new 8TB drives and intend to replace the 2 x 2TB drives with 8TB drives.
    I believe a straight swap of these two disks is best, as a parity disk should always be the same size as or larger than the largest data disk, right?

    If these suppositions are correct, my question is this. How best to replace these disks?

    1. a. Shut down, replace one of the 2TB data disks, boot, remove the "broken" disk from SnapRAID, wipe, format and mount the new 8TB disk, add the new disk to SnapRAID as data/content, run fix.

    1. b. Repeat the above with the 2TB parity disk.

    2. As option 1, but in reverse order: parity first.

    3. a. Nuke the whole SnapRAID config, same with UnionFS, replace the 2x 2TB with 2x 8TB, wipe and initialise all drives, re-create SnapRAID and UnionFS.
    3. b. Recreate the shares, then restore all files to the new UnionFS datastore from my 2x 12TB USB 3.0 external drives.
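For reference, option 1a roughly maps onto the SnapRAID CLI like this. This is only a sketch: the disk name `d1` and the mount path are placeholders, and on OMV the plugin manages `/etc/snapraid.conf` and exposes fix/check/sync buttons in the GUI, so prefer those where they exist.

```shell
# Sketch of option 1a at the command line (names/paths are placeholders).
# 1. After swapping the hardware, the replaced disk's entry in
#    /etc/snapraid.conf must point at the new disk's mount, e.g.:
#      disk d1 /srv/dev-disk-by-label-data1/
# 2. Rebuild the replaced disk's contents from parity:
snapraid fix -d d1
# 3. Verify the rebuilt files against the stored hashes:
snapraid check -d d1
# 4. Bring parity fully up to date:
snapraid sync
```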

    I am uncertain which of the options above is best in terms of stress on the hardware, time taken to restore, time spent reconfiguring, etc.

    What does anyone recommend? Is there a better way? If not, which option is best in your experience?

    Thanks in advance for any help.

  • I don't understand your parity disk setup, particularly the 2TB disk. What is its role?

    Google is your friend and Bob's your uncle!

    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 16GB ECC RAM.

  • Hi,

    Short answer. No idea.
    All I knew when I started was that SnapRAID could be used with any combination of drive sizes, and that a parity drive should be as big as or bigger than the largest data drive in the array. I knew multiple parity drives meant more resilience, and since I had 8 bays and nothing I could see said parity drives had to be the same size, starting with 6x 4TB drives and 2x 2TB I thought it best to use one 2TB for data and one for parity, so I'd have 2 parity drives.

    Please let me know if that's wrong and I'll reconfigure the whole array and restore from backup.

    What would be the best setup for 6x 4TB + 2x 8TB?

    I have a 6TB WD Blue in my desktop that I'm tempted to drop into this server too, so that would give me:

    What would be the best setup for 5x 4TB + 2x 8TB + 1x 6TB?


  • I'm not sure what happens with mixed-size parity drives when one of them is smaller than the largest data drive and the other is as large or larger than the largest data drive. Obviously the largest parity drive will protect any single data drive in the array, but I do not know what coverage you really wind up with in a situation where you lose two drives simultaneously. I would ask in the SnapRAID user forum before relying on this setup.

    My use case is:

    2x 12TB Parity drives

    7x Data drives - 2x 8TB, 1x 4TB, 4x 3TB

    I add about 1TB of data to the array monthly and have five free bays left. The easiest way for me to grow the array, without juggling parity disks, is to add 12TB (or smaller) data disks.


  • Ok, thanks.

    I'll check with that forum on the drives used for parity before I make any changes.

    ..... Never mind, found it straight away on the forum. Although the plugin in OMV doesn't complain if you try to use 2 different-sized drives for parity, it seems I must have read up, or realised it wasn't right to use mismatched drives for parity, and never set it up that way!

    I have 2x 4TB drives in parity, and 4x 4TB + 2x 2TB in data, for 17.9TB total in the merge.

    As much as I don't really want to use these 8TB drives for parity (they are shucked Seagate Desktop drives, so presumably SMR), I guess it's tough!

    I'll gain 4TB in the datastore and have 2 fresh 8TB parity drives, meaning I can replace any of the 4TB drives with 6TB or 8TB ones when I can afford more.

    Thanks for the assist. I assume it should be as simple as removing one of the drives from SnapRAID, turning off SMART monitoring for that drive, then pulling it. Provision the new drive, add it to SnapRAID, and sync.

    Then repeat for the second one?

  • When I was in the situation where I had used up all my drive bays and needed more drive space, it involved replacing one or both parity disks with new, larger ones. I used dd to clone the old parity disk(s) to the new one(s), then expanded the partition and filesystem(s) on the new drive(s). This preserved the UUIDs and allowed me to swap the drives without touching any OMV settings or having to recompute the parity.

    Then I would do the same thing with the smallest data drive(s) and use the old parity drive(s) as the target(s). Again, this preserved UUIDs and prevented me from having to fiddle any OMV settings.
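The clone step can be illustrated safely with plain files standing in for disks; the equivalent against real devices is shown in the comments, where `/dev/sdOLD` and `/dev/sdNEW` are placeholders, not real device names.

```shell
# Demo of dd cloning using temp files instead of real block devices.
# Against real disks the same idea is (DESTRUCTIVE, names are placeholders):
#   dd if=/dev/sdOLD of=/dev/sdNEW bs=1M conv=fsync status=progress
src=$(mktemp) && dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"            # 1 MiB of data standing in for the old disk
dd if="$src" of="$dst" bs=64K conv=fsync status=none   # byte-for-byte copy, as dd does for devices
if cmp -s "$src" "$dst"; then result="clone verified"; else result="clone FAILED"; fi
echo "$result"
rm -f "$src" "$dst"
```

Because the copy is byte-for-byte, the filesystem UUID travels with it, which is why OMV needs no reconfiguration afterwards.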

    Now that I have added a DAS box to my setup I have five empty bays and can just add new data disks as needed. I should be able to get at least five more years out of the current setup without having to swap any drives or add another DAS box, just by adding more 12TB or larger drives.


  • I have finally resolved part of the above issue by replacing the 2 parity drives with 8TB drives.

    I now need to go through and replace 2 of the 2TB drives with 4TB drives.

    Then, in future, my intention is to replace one 4TB drive at a time with an 8TB drive.

    Given the speed with which I have filled 18TB of capacity, I don't foresee needing more than that before I can afford to replace most of the setup entirely.

    My issue is that I do not know the ideal order/procedure to replace these drives without making excess work for myself.

    I know it is viable to back up everything, remove the shares, back up the configs (SnapRAID, UnionFS, everything), and just build the filesystem from scratch again with the new drives, copying the files back from backup.

    But is it possible to copy the data from the disk being replaced over to a replacement disk, and then swap that disk in, so that the system only has to update the device ID, free space, etc., rather than having to rebuild the disk from parity?

    Just want to know from people with experience what the ideal method is.

    Kind Regards

  • The easy way: Clone an old drive to a new one, then grow the filesystem if needed on the new drive to fill out the drive space. Then swap the new drive in for the old one. You won't have to make any other changes to anything anywhere.
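Spelled out, the clone-and-grow usually looks something like the following. This is only a sketch, assuming ext4 on a plain partition: the device names are placeholders, `growpart` comes from the `cloud-utils` package, and getting the dd direction wrong destroys the source disk.

```shell
# DESTRUCTIVE sketch; /dev/sdOLD and /dev/sdNEW are placeholders.
# Double-check which disk is which with: lsblk -o NAME,SIZE,LABEL,UUID
dd if=/dev/sdOLD of=/dev/sdNEW bs=1M conv=fsync status=progress  # clone, preserving UUIDs
growpart /dev/sdNEW 1          # enlarge partition 1 to fill the bigger disk
e2fsck -f /dev/sdNEW1          # a forced fsck is required before an offline resize
resize2fs /dev/sdNEW1          # grow the ext4 filesystem to fill the enlarged partition
```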


  • Hi,

    I have been searching for a while for guides on what you mention that are specific to using SnapRAID and UnionFS in OpenMediaVault.

    Lots of things, even the SnapRAID documentation of course, relate only to being used in a generic Linux environment and sadly don't map well to OpenMediaVault and its GUI etc.

    What I don't want to do is start trying things I find and mangle my OpenMediaVault config so badly that I'm left with no choice but to start again.

    I have salvaged a USB 3.0/SATA adapter from a shucked drive and have a 4TB disk connected to the system, but am uncertain how to do the following:

    Clone the disk, grow the filesystem, and make any preparations in the OMV GUI before physically replacing the disks.

    I have read contradictory things and am rather apprehensive about screwing it up.

    I'm looking into how to use dd as you said, but I also noticed that re-labelling the disk seems to be advisable? Something to do with a newer version mounting by label and not UUID? I may have confused myself further!

  • Reading some more of your posts, I think the label thing is for OMV 5 and up? So hopefully it won't affect me.

    On this, I assume I could label my drives by bay position, and the label would stick to the drive itself rather than the mount order, as I know that can change depending on what order the server picks up the ports as it boots, right?
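That should hold: an ext4 label lives in the filesystem superblock, so it follows the drive no matter which /dev/sdX name it gets at boot. A small file-backed demonstration (assuming `e2fsprogs` is installed; the label text is a made-up bay-position example):

```shell
# Demo of ext labelling on an image file instead of a real partition.
img=$(mktemp)
head -c 4194304 /dev/zero > "$img"   # 4 MiB file standing in for a partition
mke2fs -q -F "$img"                  # create an ext filesystem on it
e2label "$img" bay3-4tb              # set a label (max 16 chars); on a disk: e2label /dev/sdX1 bay3-4tb
label=$(e2label "$img")              # read it back, as blkid or OMV would
echo "$label"
rm -f "$img"
```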
