How to replace a disk

  • I have OMV 6 on a Raspberry Pi 4 with two USB HDDs formatted Ext4, both just normally mounted (no RAID, LVM, ...).

    I have different users and some shares on each disk, and gave access to the shares for services (e.g. rsnapshot, SFTP, rsync tasks, rsync modules) and users.


    I am wondering what's the recommended way to replace one of the disks with a newer one,

    e.g. in case I want to replace a 2 TB disk with a 4 TB disk.

    On OMV 5 this would have been easy: just copy the whole contents (making sure UIDs and GIDs stay the same), label the new disk exactly the same as the smaller one, shut down, unplug the 2 TB disk, plug in the 4 TB disk.


    But on OMV 6 disks are identified by UUID. How can I tell OMV that the new disk is the replacement for the smaller one?

    Where do I assign the new disk so that the shares with all their access rights etc. are mapped to it?

    Or do I have to change the UUID of the new disk to be exactly the same as the old one's?
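    In case setting the old UUID on the new disk turns out to be the way to go, it can be done from the command line; a minimal sketch, assuming the old partition is /dev/sda1 and the new one /dev/sdb1 (device names are examples, and the new Ext4 filesystem must be unmounted):

```shell
# Read the filesystem UUID from the old disk (device names are examples)
OLD_UUID=$(blkid -s UUID -o value /dev/sda1)
# Run a forced check first; tune2fs refuses to change the UUID of a dirty filesystem
e2fsck -f /dev/sdb1
# Stamp the same UUID onto the new, unmounted Ext4 partition
tune2fs -U "$OLD_UUID" /dev/sdb1
```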


    So what's best practice for this quite common use case?


    Thanx in advance!


    BR - Jochen


    P.S.: Background/motivation/reason for asking:


    1. I have another system with the same setup as above, and I rsync one of the disks of my working system to the other system; the rsync is done with "-azgo -H --delete --numeric-ids" so that the shares on the remote disk are an exact "clone" of the local ones.

    So if the local one fails, I'll get the one from the remote location, plug it in, and should be able to work on normally as if nothing had happened.


    2. One of the disks might run out of space. I'll get a larger one, cp/rsync the contents of the working disk, unplug it, plug in the larger one, and work on normally as if nothing had happened.


    BR - Jochen

  • The way I do this is as follows, and there are several applications that can be used to arrive at the same result.


    Conceptually, assuming only one partition on the disk, which is typical:


    1) Clone the smaller disk to the larger disk. Any existing content on the larger disk will be lost.

    2) On the larger disk, which now contains the exact bit-for-bit content of the smaller disk, expand the partition to use the whole disk.

    3) On the larger disk, expand the filesystem to fill the partition.
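    The three steps above can be sketched with plain command line tools; device names are examples (old disk /dev/sdb, new disk /dev/sdc), both disks must be unmounted, and the clone destroys everything on the target:

```shell
# 1) Bit-for-bit clone of the whole smaller disk onto the larger one
dd if=/dev/sdb of=/dev/sdc bs=64M status=progress
# 2) Grow partition 1 on the larger disk to use the whole disk
parted /dev/sdc resizepart 1 100%
# 3) Check, then grow the Ext4 filesystem to fill the enlarged partition
e2fsck -f /dev/sdc1
resize2fs /dev/sdc1
```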


    GParted would probably be the easiest single program to accomplish all of this, as long as you can meet the requirements for its use.


    I have done a lot of this, in place, on my OMV machine using only dd, parted, and resize2fs in a screen session over ssh from another machine, as my OMV runs headless.


    A few times I had the new larger disk in my desktop machine and used dd over the network via ssh and screen to clone the smaller disk in the OMV box to the new larger disk in the desktop machine. Then I used Gparted on the desktop to expand the partition and filesystem on the new disk.
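    Such a network clone might look like the following; hostnames and devices are examples, and the pipe is best run inside screen so a dropped SSH session does not kill the transfer:

```shell
# Start a screen session that survives SSH disconnects
screen -S diskclone
# Pull the smaller disk from the OMV box onto the new disk in the desktop
# machine; everything on the target disk (/dev/sdc here) is destroyed
ssh root@omv 'dd if=/dev/sdb bs=64M' | dd of=/dev/sdc bs=64M status=progress
```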


    Shutting OMV down and swapping the disks completes the operation.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.


  • Thanx for the exact explanation.

    Seems to be a good, straightforward practice if the old/small disk is still OK and running and more disk space is needed.

    Also thanx for the hint about using screen in the ssh session when doing such time-consuming tasks remotely.



    But what's the recommended way if the working disk has just failed and a backup must be restored to a new disk?

    How to restore from a previous local rsnapshot or remote rsync to a new, empty disk is quite clear to me.

    But then what's the best practice to get this disk up and running in OMV 6, so that shares, users, and services (accessing some shares) are working?!?


    Is it just preserving the file system structure (restoring the rsnapshot/rsync backup) and setting the new disk's UUID to exactly the one the failed disk had?

    If so, where is a good place to look up the old UUID if the old disk has already failed completely (can't be accessed, mounted, ...)?


    BR - Jochen

  • I can't speak to the ins and outs of a restore via rsync and the effects it has on OMV.


    As for determining the UUID of a totally failed unreadable disk, you can look in the /srv directory. Those subdirectories are the mountpoints by UUID. If any are empty they are candidates for being the mountpoint for the failed disk. However, if you previously removed the disk from within the OMV GUI, that mountpoint may no longer be there.
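    For example (the output depends on the system; on a non-OMV machine there may be nothing to see):

```shell
# OMV mounts each filesystem under /srv in a directory named after its UUID
ls /srv
# The UUID also appears in the fstab entries OMV generated, as long as the
# filesystem has not been removed in the GUI; ignore a missing match
grep dev-disk-by /etc/fstab || true
```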

