MergerFS + SnapRAID: replace all drives while keeping data and mount points intact

  • I'm running MergerFS + SnapRAID. I am trying to expand the storage in my instance from 3TB (of which 1TB is parity) to 8TB (with 4TB as parity, planning to add more data drives once the mail arrives).


    I have drives /dev/sda through /dev/sdh:


    /dev/sdc - is my OS drive - 128GB in size

    /dev/sdd - existing data drive - 1TB

    /dev/sdh - existing data drive - 1TB

    /dev/sdf - existing parity drive - 1TB

    /dev/sda - new parity drive - 2TB

    /dev/sdb - new parity drive - 2TB

    /dev/sde - new data drive - 2TB

    /dev/sdg - new data drive - 2TB


    Right now, /dev/sdd, /dev/sdh and /dev/sdf are plugged in via USB, and the MergerFS pool is built on them. My merged pool path is /srv/mergerfs/mergerPOOL/. I want to keep that path, and the data in it, exactly as they are without changing the structure while just replacing the drives. After everything is done I want to be able to remove /dev/sdd, /dev/sdh and /dev/sdf and not worry about anything breaking. Or, if I'm overdoing this, I'd like to hear whether there is a better/simpler/more secure way.
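    For context on why the pool path can survive a drive swap: the pool is just a mergerfs mount over a list of per-disk branch paths, so branches can be added or removed while the mountpoint itself stays the same. A hypothetical /etc/fstab line for a pool of this shape might look like the sketch below (the branch paths and options here are assumptions, not taken from the actual setup; on OMV the web UI manages this entry for you):

```
# Hypothetical fstab entry: the pool path is the mountpoint,
# the colon-separated branches are the per-disk mounts (assumed names).
/srv/disk1:/srv/disk2  /srv/mergerfs/mergerPOOL  fuse.mergerfs  defaults,allow_other,category.create=epmfs  0 0
```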


    My plan is the following, and I would like someone to evaluate my plan for me.

    1. Stop all my running containers and services. Backup some important data to an external drive.

    2. Add the new parity drives (/dev/sdb and /dev/sda) to SnapRAID's existing array. Their parity numbers will be 2 and 3 respectively(?)

    3. Add the new data drives (/dev/sde and /dev/sdg) to MergerFS's existing pool

    4. Run a SnapRAID sync and verify it by running a SnapRAID check (to make sure my old parity drive is up to date)

    5. Remove the old parity drive from SnapRAID's array

    6. Remove the old data drives from the MergerFS pool

    7. Move data from the old data drives to the new data drives with:

    rsync -avh --progress /srv/uuid-of-sdd/ /srv/uuid-of-sde/

    rsync -avh --progress /srv/uuid-of-sdh/ /srv/uuid-of-sdg/

    8. Restart MergerFS

    9. Change the parity numbers to 1 and 2. Needed(?)

    10. Run a SnapRAID sync and verify it by running a SnapRAID check

    11. If all goes well, start up all my services and unmount all my old 1TB drives, which are plugged in via USB

    12. Once my mail arrives with 3 more 2TB drives, simply expand the MergerFS pool
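    If it helps to see the plan concretely, steps 2–3 would amount to a snapraid.conf along these lines during the transition (the /srv/uuid-of-* mount paths reuse the placeholders from the rsync step; the parity/content file names are assumptions). One caveat worth checking before committing to this order: SnapRAID expects each parity drive to be at least as large as the largest data drive, so the old 1TB parity may refuse to sync once a 2TB data drive holds more than 1TB.

```
# Hypothetical snapraid.conf during the transition (paths are placeholders)
parity   /srv/uuid-of-sdf/snapraid.parity     # old 1TB parity (parity 1)
2-parity /srv/uuid-of-sda/snapraid.2-parity   # new 2TB parity (parity 2)
3-parity /srv/uuid-of-sdb/snapraid.3-parity   # new 2TB parity (parity 3)

content /srv/uuid-of-sdd/snapraid.content
content /srv/uuid-of-sde/snapraid.content

data d1 /srv/uuid-of-sdd/    # old 1TB data
data d2 /srv/uuid-of-sdh/    # old 1TB data
data d3 /srv/uuid-of-sde/    # new 2TB data
data d4 /srv/uuid-of-sdg/    # new 2TB data
```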

    • Official Post

    I deleted what I had in this post. It was too complicated.


    Do this.


    - First, format and mount your 2TB drives.

    - Do an rsync disk-to-disk copy of each of your 1TB drives (1TB drive to a 2TB drive). There will be two copies to do.

    Details on how to set up an rsync drive-to-drive copy are -> here
    - In your case, do not use the --delete switch.
    - To ensure that all files are copied, run the command again when it completes. (On the SSH command line, the up arrow will bring up the last command.) All is done when you see "success" AND no files scroll by.


    Each drive copy may take a while.
    (**Note:** don't let users add or delete files while you're making these copies.)


    - At this point you might want to take a screenshot of your existing MergerFS array, under Storage, MergerFS. (This will provide you with "backout" information if needed.)


    - Add the mount points of the freshly copied 2TB drives, as found in the filesystem window, to your MergerFS array and remove the mount points of the 1TB drives. (Save)
    - Reboot.

    - Test your shares and other services attached to the array.


    - Deconfigure SnapRAID for the 1TB drives (both data and parity) (Save)

    - Configure SnapRAID for the 2TB drives that are now in your MergerFS pool, along with one or more of the 2TB drives for parity.


    Test for proper operation.

    Run a SnapRAID sync command.
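    Put together, the end state might be a snapraid.conf shaped like the sketch below (the mount paths reuse the placeholder style from earlier in the thread; the file names and content-file locations are assumptions). After saving, `snapraid sync` builds the parity, and `snapraid status` or a `snapraid scrub` can be used to verify it.

```
# Hypothetical snapraid.conf after the swap (paths are placeholders)
parity   /srv/uuid-of-sda/snapraid.parity
2-parity /srv/uuid-of-sdb/snapraid.2-parity

content /var/snapraid.content
content /srv/uuid-of-sde/snapraid.content
content /srv/uuid-of-sdg/snapraid.content

data d1 /srv/uuid-of-sde/
data d2 /srv/uuid-of-sdg/
```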

  • Thank you so much! I appreciate this a lot. I did everything you said. Worked pretty much perfectly. Thanks for the tip on MergerFS screenshot. Came in handy. lol


    I’m going from one parity drive to two. I don’t want split parity. So I number the parity drives 1 and 2 respectively in SnapRAID; is that correct?

    • Official Post

    So I number the parity drives 1 and 2 respectively in snapraid is that correct?

    I'm not sure I'm following here. You can name them anything you want but, when I name drives, the name itself is an indicator of the drive's function. Along those lines, if you want to use two parity drives that have unique names, Parity1 and Parity2 makes sense to me.

    SnapRAID's "split parity" does work (I've tested it with a data restore). However, if you use split parity, you'd need to keep a close eye on the health of both drives. For this reason, I'm not a fan. My preference is to put the newest (healthiest) drive in the parity role. After all, without a solid parity drive there is no "restore" after a data drive failure. In any case, setting up SMART tests and filesystem reporting with e-mail notifications, for all hard drives, is a good idea.
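    For what it's worth, the two setups being contrasted here use different snapraid.conf syntax: two independent parity levels are separate `parity`/`2-parity` lines and protect against two simultaneous drive failures, while "split parity" is a single parity level whose file is split across several smaller disks, written as a comma-separated list on one line (still one-drive protection). A sketch, with assumed paths:

```
# Two independent parity levels (survives two failed drives):
parity   /srv/parity1/snapraid.parity
2-parity /srv/parity2/snapraid.2-parity

# Split parity: ONE level spread over two disks (survives one failed drive):
# parity /srv/parityA/snapraid.parity,/srv/parityB/snapraid.parity
```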

    BTW: I take it you found the rsync guidance -> here ? (I could have sworn that I had the link right, above. In any case, I fixed the link.)
