Replacing a disk and assigning same label?

  • I have two new 8TB drives that will be replacing older 2TB drives in my NAS. The drive labels of the old drives are the location of the drives in the bays, so I want the new drives to use the same labels. What is the best way to accomplish this? I tried this a few weeks ago and ended up having to reinstall OMV because I must have forgotten or not known the proper way to clean up references to the old drives.


    My assumption is that I would:


    1) Physically install the new drive and create an ext4 filesystem, use a temporary label
    2) Rsync the data from the old drive to the new (rsync -avxHAWX --numeric-ids --info=progress2 /srv/olddisk /srv/newdisk ???)
    3) Remove the old drive from the mergerfs pool, remove shared folders and any other references
    4) Unmount old drive, physically remove it from case
    5) Mount new drive, boot OMV (it will still have the temporary label)
    6) Use e2label to re-name drive to "proper" label
    7) Add drive to mergerfs pool
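
    One detail worth pinning down in step 2: rsync treats a trailing slash on the source as "copy the contents of this directory", not the directory itself. A minimal sketch, with /srv/olddisk and /srv/newdisk standing in for the real mount points:

    ```shell
    # Trailing slash on the source copies the *contents* of olddisk into newdisk;
    # without it you would end up with /srv/newdisk/olddisk/... instead.
    rsync -avxHAWX --numeric-ids --info=progress2 /srv/olddisk/ /srv/newdisk/
    ```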


    I also use SnapRAID, but it will detect the new UUID on the first scan, so that should be no problem.


    Is there something I am missing? Does fstab need to be edited manually, and if so, in what way? I'm a Linux beginner.

  • Re,

    The drive labels of the old drives are the location of the drives in the bays, so I want the new drives to use the same labels.

    I'm struggling a bit here: what "labels" do you mean, and why do you need to bind them to "bays"?
    Drives use serial numbers for identification (which are unique), and partitions use UUIDs (which are also unique). The only "static" part can be the root filesystem ... with the mount points under /srv (there you can have a directory named "bay1" or "bay2") ...
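
    To see those identifiers side by side, lsblk can print the serial, label, and UUID in one table (blkid queries a single device; both may need root for complete output):

    ```shell
    # One row per disk/partition: name, size, drive serial, fs label, fs UUID
    lsblk -o NAME,SIZE,SERIAL,LABEL,UUID

    # Or probe a single partition (device name is only an example):
    blkid /dev/sdb1
    ```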


    6) Use e2label to re-name drive to "proper" label

    e2label sets partition (filesystem) labels, not drive labels!


    So you have to alter the mounts in the /etc/fstab file at least. There may be a way via the OMV web GUI, but I don't know it ...
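
    For reference, a label-based entry in /etc/fstab might look like the line below. The label, mount point, and options are only examples; OMV normally generates its own entries (the "dev-disk-by-label-<label>" mount-point naming matches what OMV places under /srv):

    ```
    LABEL=B1  /srv/dev-disk-by-label-B1  ext4  defaults,nofail  0  2
    ```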


    Sc0rp

    I'm struggling a bit here: what "labels" do you mean, and why do you need to bind them to "bays"?


    Drives use serial numbers for identification (which are unique), and partitions use UUIDs (which are also unique). The only "static" part can be the root filesystem ... with the mount points under /srv (there you can have a directory named "bay1" or "bay2") ...


    Thanks for taking the time to weigh in. When I refer to the drive label, I mean the one that I can create in the file systems tab of the OMV interface, which I guess are actually partition labels. I have all of my drives mounted in hot-swap bays, such as this one. I label the drives (okay, partitions) according to their position in the hot-swap bay. So, the first drive in the top bay is A0, the next is A1, then A2, A3, and A4. The bay under it houses drives B0 - B4. Today I am replacing my smallest two data drives, which happen to be B1 and B3. I want the new drives to also assume these labels, so if (when) a drive fails, I will know where it is physically located.


    Ultimately, I want to transfer all the data from the "old" B1 drive to the "new" B1 drive so it is a seamless transition for the mergerfs pool (which contains drives A1-A4 and B1-B4) and my SnapRAID array (8 data drives and 2 parity drives).


    I jumped the gun a bit and tried to do the first disk yesterday, but through my impatience I ended up deleting some data from the "old" B1 drive. Therefore, I am now trying to rebuild the SnapRAID array by recovering the lost data on to the "new" B1 drive. This kind of accomplishes the same thing, but I think it's less risky to transfer the data first in case there's a problem with the SnapRAID recovery process.

  • Re,


    I label the drives (okay, partitions) according to their position in the hot-swap bay.

    Understood, but be aware that this naming structure is not bound to a "bay" - it is bound to the SATA ports on your mainboard, to which of them the BIOS/UEFI enumerates first, and to the order in which the kernel then finds them (kernel-module related) ... so if your BIOS/UEFI or the kernel module changes the order, it will no longer work.


    Therefore most hot-swap setups use "port binding" (i.e. SATA port 1 is connected to bay 1, and so on), while retrieving the disk data "dynamically" from command output, e.g. the serial number of the dead/faulty disk via the command line.


    You can "connect" a SATA port to a particular mount point, but only logically/virtually. You can do this by editing the /etc/fstab file directly (using "temporary labels") - OMV will read and use it.


    Changing drives in a SnapRAID/mergerfs context is easy, since there are two ways you can use:
    - official way: replace the disk you want and recover it with SnapRAID (takes time and stresses the disks, but is easy to manage)
    - unofficial way: just duplicate the disk you want to replace directly onto the new one (using another PC, eSATA, or another SATA port, with the "dd" command), and after that is done, edit and correct all the needed files to match the new "hardware" ...
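
    A sketch of that unofficial dd route, with sdX/sdY as placeholder device names (triple-check them with lsblk first; dd overwrites its target without asking). Because the clone carries over the old 2TB partition table, the partition and filesystem have to be grown afterwards:

    ```shell
    # Clone the old 2TB disk (sdX) onto the new 8TB disk (sdY) --
    # this destroys everything that was on the target!
    dd if=/dev/sdX of=/dev/sdY bs=64M conv=fsync status=progress

    # The new disk now presents only a 2TB partition. Grow partition 1
    # and the ext4 filesystem to use the full capacity:
    growpart /dev/sdY 1      # growpart is from cloud-guest-utils; parted works too
    resize2fs /dev/sdY1
    ```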


    You can change the label of a partition as often as you want, but you have to make sure which identifier is actually used for mounting (it should be the UUID in the fstab).
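
    For example (the device name below is a placeholder): relabelling leaves the UUID untouched, which blkid can confirm:

    ```shell
    # Set the filesystem label on the partition ...
    e2label /dev/sdY1 B1

    # ... then verify: LABEL has changed, the UUID is the same as before
    blkid /dev/sdY1
    ```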


    Just take a look via console (SSH) in your /srv directory:
    ls -la /srv


    Sc0rp

    • Official post

    Just a quick remark here. Current OMV uses the label as the first option for device mounting and mount-path generation. If no label is present, it defaults to by-id, which on most bare-metal servers will be the device brand and model plus a serial number.
    If you remove a device that is currently registered in the backend with a label, assigning the same label to the new disk should mount the new disk at the same point as the old one. That holds as long as the old one is not present; I don't know what would happen if two filesystems with the same label were both present at boot.
    The filesystem UUIDs are no longer used.


  • Thank you for expanding on that. I am having some difficulty visualizing how I can use this information to make my setup more "bulletproof." The motherboard labeled "SATA 1" may not equate to the BIOS/kernel recognized "first port," and even if it does today, it may not after a BIOS or kernel update. I assume that the drive under /dev/sda is the one that the kernel sees as "port 1," and so on?



    Just a quick remark here. Current OMV uses the label as the first option for device mounting and mount-path generation. If no label is present, it defaults to by-id, which on most bare-metal servers will be the device brand and model plus a serial number.
    If you remove a device that is currently registered in the backend with a label, assigning the same label to the new disk should mount the new disk at the same point as the old one. That holds as long as the old one is not present; I don't know what would happen if two filesystems with the same label were both present at boot.
    The filesystem UUIDs are no longer used.

    Well, I can't answer that exact question, but having two drives with the same label was the cause of my data loss. I had re-labeled the drive to be replaced as "oldb1" and the new drive as "b1." Then, because I know what I am doing, I ran a cat command to copy the old disk to the new, but I ran it on /dev/sda to /dev/sdl, which also copied the label of the drive to the new one (I think I'll use rsync in the future, but just to satisfy my curiosity, should I have copied /dev/sda1 to /dev/sdl1 instead? Would that have copied only the data and not the label?). I knew which drive was which under /dev/, so I tried to unmount the new drive and delete the filesystem, and of course, this also deleted the filesystem from the old drive.
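
    As a suggested habit that would have caught the mix-up: resolve the mount point back to its device, and cross-check the drive's serial number, before unmounting or wiping anything (the path and device names below are examples):

    ```shell
    # Which block device is really behind this mount point?
    findmnt -no SOURCE /srv/dev-disk-by-label-b1

    # Cross-check the physical disk's serial before doing anything destructive
    lsblk -no SERIAL /dev/sdl
    ```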

  • Re,

    Current OMV uses the label as the first option for device mounting and mount-path generation.

    Right, therefore:

    Just take a look via console (SSH) in your /srv directory:
    ls -la /srv

    ... to check what is used ...



    I assume that the drive under /dev/sda is the one that the kernel sees as "port 1," and so on?

    Right ... but only for older systems. Current systems mount via the UUID in the fstab, which is more fail-safe.


    As @subzero79 explained, under the /srv directory you'll find mount points named "dev-disk-by-label-<label>" if you use labels, otherwise "dev-disk-by-id-<id>" entries. If you have the label ones, you are good to go with your plan - as long as you keep the UUID thing in mind (you have to track the UUID down on the system to make the connection between the old sd[a-z] naming, the partition UUID, and the drive's serial number).


    Sc0rp

  • For posterity, I'm posting my method of transferring data since I seem to have found a method that works (at least, nothing has broken so far):



    • Temporarily replace one parity drive with new data drive
    • Create new ext4 filesystem using "new" drive label
    • Remove "old" data drive references from MergerFS and SnapRAID configs
    • rsync -avxHAWX --numeric-ids --info=progress2 /srv/olddiskpartition/ /srv/newdiskpartition/ (trailing slashes so rsync copies contents, not the directory itself)
    • Add "new" data drive to SnapRAID config and run check to ensure no missing files, then remove drive from config
    • Unmount "old" drive filesystem
    • Run e2label /dev/sdX1 with the drive bay label
    • Edit /etc/openmediavault/config.xml manually to correct the drive label
    • Shut down and move new drive into correct bay, replace second parity drive
    • Add new drive with updated label to MergerFS and SnapRAID
    • Run Reset Permissions on all shared folders
    • Profit?
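
    The command-line core of the steps above, sketched with OMV-style example paths (the labels and device names are placeholders, not the exact ones used):

    ```shell
    # Copy the data; trailing slashes copy contents, not the directory itself
    rsync -avxHAWX --numeric-ids --info=progress2 \
        /srv/dev-disk-by-label-oldb1/ /srv/dev-disk-by-label-b1new/

    # After temporarily adding the new disk to the SnapRAID config:
    snapraid check       # verify the array sees no missing files

    # Unmount the old filesystem and give the new partition the bay label
    umount /srv/dev-disk-by-label-oldb1
    e2label /dev/sdX1 B1
    ```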
