Replacing smaller drives with larger ones

  • If you are looking to replace smaller data/parity drives with larger ones and don't know how to go about it, read on: I just completed a replacement of 8 drives with minimal downtime for the family consuming that data. Here's how I did it:


    Original Setup:

    • OMV 5.x (on a separate SSD) - this will not be touched
    • 2x WD Red 8 TB drives (Parity)
    • 6x WD Red 8 TB drives (Data)
    • SnapRAID/UnionFS

    Target Setup:

    • 2x Seagate Ironwolf Pro 18 TB drives (Parity)
    • 6x Seagate Ironwolf Pro 18 TB drives (Data)
    • SnapRAID/UnionFS

    My NAS is a physical server with an SSD used only for the OS and Docker configs; all of my media resides on the data drives.


    Procedure:

    • Plug the new hard drive into the USB dock and power it on (NOTE: Your USB dock needs to actually support the size of drive you are moving to. If it doesn't, the drive will not report its correct size; check your dock's documentation for the largest drive it supports. A quick size check is sketched after this list)
    • Login as root to the server (NOTE: If you don't login as root, any commands from here on out will need "sudo" before them.)
    • Run: snapraid diff (if there are any differences, then run: snapraid sync)
    • Repeat the above step until there are no differences (a loop sketch follows this list)
    • Run: lsblk -o NAME,PATH,FSTYPE,MOUNTPOINT,LABEL,UUID,SIZE and note the drive that doesn't have a partition (that's the new one). Also note the sdX and the mountpoint of the drive being replaced (the source drive)
    • Run: fdisk /dev/sdX (where X is the letter of the new drive without a partition; a non-interactive alternative is sketched after this list)
    • Type:
      • g to create a GPT partition table
      • n (ENTER – for new partition)
      • Partition Number (hit ENTER)
      • First Sector (hit ENTER)
      • Last Sector (hit ENTER)
      • p (ENTER – to print the partition table and double-check it; with a GPT table, fdisk doesn't ask for a primary/extended type)
      • w to write to disk
    • Format the drive with: mkfs.ext4 -L <NEWDRIVENAME> /dev/sdX1 (where <NEWDRIVENAME> is the label you want for the new drive and X is the drive letter from above)
    • Create a mount directory: mkdir /mnt/sdX (where X is the drive letter of the new drive)
    • Run: mount /dev/sdX1 /mnt/sdX (to mount the NEW drive where X is the drive letter of the new drive)
    • IMPORTANT: Make sure that nothing is writing to the drives (stop all containers/apps; one way to do this is sketched after this list)
    • Run: rsync -av --progress <SOURCE_MOUNTPOINT>/ /mnt/sdX (where X is the drive letter of the new drive and SOURCE_MOUNTPOINT is the mountpoint of the original drive you found with the lsblk command above; see the follow-up rsync notes after this list)
    • Log in to the OMV UI and go to the SnapRAID section
    • Find the old drive in the Drives tab and remove it from the array (NOTE: Write down the label of the drive you are removing; you will need it for a later step). Apply the changes in OMV.
    • If it's a data drive, move to the Union Filesystems section and deselect the drive to take it out of the file system, save and apply.
    • Shut down the server and swap out the drives, then boot it back up.
    • Open an SSH session and the OMV UI
    • Go to File Systems and mount the new drive, then apply the config. Remove the old drive and apply the config again.
      • If it's a data drive, go to Union Filesystems, add the new drive to the list, save and apply changes
    • Go to SnapRAID, then Drives, and select the new drive to add to the array. Apply changes. (NOTE: For data drives, tick the CONTENT and DATA toggles; for parity drives, choose only the PARITY toggle)
    • IMPORTANT: Change the label of the new drive in SnapRAID to match the OLD label you noted above. Apply changes. (The config fragment after this list shows why this matters)
    • Go into the S.M.A.R.T. section and enable monitoring on the new drive (a quick shell health check is sketched after this list)
    • Reboot the system
    • IMPORTANT: Make sure that nothing is writing to the drives (stop all containers/apps)
    • In the SSH session, run: "snapraid diff" and "snapraid sync"
    • Go into the OMV UI and change the SnapRAID label for the new drive to whatever new name you want. Apply.
    • Run: "snapraid diff" and "snapraid sync" one final time

    I hope this helps someone doing this for the first time (like I was); this method ensures no data is lost and nothing is corrupted. If you use Plex Server like I do for watching your media, you can keep Plex as the only container running, since it only reads from the data drives and doesn't write to them. The only downtime I had was while swapping the drives (my case isn't hot-swappable), so the impact was minimal. The rsync command in my case took about 14 hrs. On average, it's about 2 hrs per TB of data copied, so use that as the baseline for how long each copy will take (a quick way to estimate is sketched below). I started the command before bed and let it run overnight, which soaked up most of that time.
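
    To turn that baseline into an estimate for your own copy, check the used space on the source drive and multiply the TB figure by roughly 2:

      df -h <SOURCE_MOUNTPOINT>   # e.g. ~7 TB used x 2 hr/TB = ~14 hrs, which matches what I saw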


    I'm also attaching the doc I created and worked off of, which contains these steps.
