LVM - Remove a disk

  • Hello,


    I need to remove one of the three disks from a Logical Volume. Following the standard pvmove procedure, I noticed that even though there is plenty of free space, all three disks are showing 100% full in LVM (both at the CLI and inside OMV).


    Is there something odd about the way OMV lays out its file system on the Logical Volume that makes LVM think the disks are 100% utilised?


    Many thanks.


    Edit: To add a bit more info: in the LVM section of OMV the 3 physical disks show 100% used and the single Volume Group shows 0 MB free. The only area that shows free space is the OMV File Systems section, at about 50% used. The file system is Ext4.


    Thanks again!

  • You need to shrink the filesystem before you can remove the disk from the LVM. This is not possible via the WebGUI; you need to do it via the CLI or use a Parted Magic/GParted live CD.
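The overall order of operations looks roughly like this. A dry-run sketch: the VG/LV/device names are taken from this thread's later example and the sizes are placeholders, and every command is echoed rather than executed so the sequence can be reviewed before running it for real.

```shell
#!/bin/sh
# Dry-run sketch of the shrink-then-remove sequence. Every command is
# echoed, NOT executed; remove 'run' to execute a step for real.
# /dev/mapper/data, the 'data' VG and /dev/sdb are this thread's names.
run() { echo "$@"; }

run umount /dev/mapper/data              # fs must be offline to shrink
run e2fsck -f /dev/mapper/data           # shrinking requires a clean fs
run resize2fs /dev/mapper/data 2100G     # shrink fs below the new LV size
run lvreduce -L 2110G /dev/mapper/data   # shrink the LV, a bit above the fs
run resize2fs /dev/mapper/data           # regrow fs to fill the smaller LV
run pvmove /dev/sdb                      # migrate extents off the old disk
run vgreduce data /dev/sdb               # drop the disk from the VG
```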


    Greetings
    David


  • Thanks David,


    I need to unmount the disk before I can reduce the size. In the GUI I can see no way to do this without deleting all shares that use folders on the filesystem (i.e. so that those folders turn to 'not in use').


    Is it safe to bypass all this and unmount via the command line? It seems a bit crazy to have to unshare everything. I thought just turning off the CIFS service would have been enough, but it's still greyed out.
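For the CLI route, the usual pattern is to find out what still holds the filesystem open and stop those services first. A dry-run sketch (commands are echoed, not executed; the mount point is hypothetical and the samba init script name is an assumption for OMV of this era — check yours with `mount`):

```shell
#!/bin/sh
# Dry-run: commands are echoed, NOT executed. /media/data is a
# hypothetical mount point; find the real one with 'mount' or 'df -h'.
run() { echo "$@"; }

run fuser -vm /media/data       # list processes still using the mount
run /etc/init.d/samba stop      # stop CIFS so it releases open shares
run umount /media/data          # should now succeed, shares left intact
```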

  • Quote from "davidh2k"

    You need to shrink the filesystem before you can remove the disk from the LVM. This is not possible via the WebGUI; you need to do it via the CLI or use a Parted Magic/GParted live CD.


    Greetings
    David


    I've now reduced the size of the Ext4 filesystem enough to, in theory, pvmove out the offending disk. The change was successful in that the capacity of the file system is now very close to the size of the data sitting on it.


    However I still get the message 'No extents available for allocation', and a pvscan shows all disks as fully utilised.


    I'm now wondering if I'm meant to reduce the size of the Logical Volume as well?

  • Right all sorted.


    For the record, and for others wanting to remove a physical disk from an OMV install that uses LVM, these are the high-level steps I had to take.


    Sizes may not be exact, as I documented this after the fact and lost some of the exact numbers.


    MAKE SURE YOU HAVE A FULL BACKUP BEFORE DOING THIS


    Note: In OMV the file system uses the device mapper; usually the filesystem is at /dev/mapper/[filesystem name]


    Working Example: 3 HDs. 1 Volume Group. 1 File system.
    root@NAS:~# pvscan
    PV /dev/sdb VG data lvm2 [931.51 GiB / 0 free]
    PV /dev/sdc VG data lvm2 [931.51 GiB / 0 free]
    PV /dev/sdd VG data lvm2 [1.82 TiB / 0 free]


    Ext4 file system 'data': 3.18 TB capacity, 1.8 TB of data
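A quick sanity check on the numbers before shrinking anything (figures approximated from this thread): the resize2fs target must be larger than the data actually on the filesystem, and the lvreduce target a little larger still, or data loss is guaranteed.

```shell
#!/bin/sh
# Sanity-check the shrink targets. Values are approximate GiB figures
# from this thread; substitute your own.
check() {
    if [ "$1" -lt "$2" ] && [ "$2" -lt "$3" ]; then
        echo "targets OK"
    else
        echo "targets too small - data loss risk"
    fi
}

DATA_USED=1844   # ~1.8 TiB of data on the filesystem
FS_TARGET=2100   # resize2fs target
LV_TARGET=2110   # lvreduce target, a few GiB above the filesystem

check "$DATA_USED" "$FS_TARGET" "$LV_TARGET"   # prints "targets OK"
```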


    Task: Remove /dev/sdb


    Steps:
    Unmount the file system
    root@NAS:~# umount /dev/mapper/data


    Check file system
    root@NAS:~# fsck -f /dev/mapper/data


    Reduce the file system's size to 2100G (leaving room for the 1.8 TB of data)
    root@NAS:~# resize2fs /dev/mapper/data 2100G


    Check file system
    root@NAS:~# fsck -f /dev/mapper/data


    Reduce the Logical Volume to a few gigabytes bigger than the 2100G file system size (note: this is lvreduce on the Logical Volume, not vgreduce)
    root@NAS:~# lvreduce -L 2110G /dev/mapper/data


    Check file system
    root@NAS:~# fsck -f /dev/mapper/data


    Resize the file system to fill the (reduced) Logical Volume
    root@NAS:~# resize2fs /dev/mapper/data


    Scan again for results
    root@NAS:~# pvscan
    PV /dev/sdb VG data lvm2 [931.51 GiB / 0 free]
    PV /dev/sdc VG data lvm2 [931.51 GiB / 0 free]
    PV /dev/sdd VG data lvm2 [1.82 TiB / 92.49 GiB free]


    Move the contents of the disk being removed onto the remaining disks
    root@NAS:~# pvmove /dev/sdb


    Remove the disk from the volume group
    root@NAS:~# vgreduce data /dev/sdb


    Scan to check results
    root@NAS:~# pvscan
    PV /dev/sdc VG data lvm2 [931.51 GiB / 0 free]
    PV /dev/sdd VG data lvm2 [1.82 TiB / 92.49 GiB free]
    PV /dev/sdb lvm2 [931.51 GiB]


    In the OMV GUI the Delete button for /dev/sdb under LVM is no longer greyed out; click it to remove the disk from LVM.
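For anyone staying at the CLI instead, the equivalent of that Delete button should be `pvremove`, which wipes the LVM label from the now-unused disk (dry-run sketch, commands echoed rather than executed):

```shell
#!/bin/sh
# Dry-run: commands are echoed, NOT executed.
run() { echo "$@"; }

run pvremove /dev/sdb   # wipe the LVM label so the disk is truly free
run pvscan              # /dev/sdb should no longer be listed
```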


    You are now free to remove the physical disk from your box.
