LVM and retiring old disks

  • Dear forum,


    I have recently installed more disk capacity in the form of a new RAID5 set under LVM. Unfortunately, I had severe performance problems (the old set was 90% full) and needed to expand the file system immediately. OMV took all the available capacity without asking, so now I will have to shrink the file system before I can remove the old RAID set.


    What I need to do:

    1. Reduce (shrink) the file system,

    2. move the physical extents from the old PV to the new one with pvmove (old PV is 8TB, new is 24TB),

    3. remove the old PV from the volume group with vgreduce (a rough command sketch follows below).
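
    For reference, this is roughly what I believe needs to happen at the command level; "datalv", the md device names and the 20T target size are only placeholders for illustration (datavg is my volume group):

    Code
    # 1. shrink the ext4 filesystem and the LV together (ext4 can only be shrunk while unmounted)
    umount /dev/datavg/datalv
    lvresize --resizefs -L 20T datavg/datalv
    # 2. move all allocated extents off the old 8TB PV onto the new 24TB PV
    pvmove /dev/md0 /dev/md1
    # 3. remove the now-empty old PV from the volume group
    vgreduce datavg /dev/md0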


    There is only GUI support for item 3, so I will probably have to use the command line. However, any LVM command results in a "command not available" error, even a simple "lspv".


    How can I go about doing what I need to do?



    Code
    Version: 6.9.16-1 (Shaitan)
    Processor: AMD FX-8320E Eight-Core Processor
    Kernel: Linux 6.1.0-0.deb11.21-amd64
    Is this still a W.I.P.? When you say "a new RAID5 set under LVM", do you mean an MD RAID device is created first and then that md device is used as a physical volume, or do you mean you created a raid via an LVM command at the CLI, e.g. lvcreate --type raid5 ... ? I'll assume the former.


    If forum questions are an indicator, LVM is not used by many forum members. But moving any filesystem from one raid set to another has risks, and most people would be well advised to have a proper, validated backup of all their data before attempting this. Do you have backups?


    You've probably long discovered by now that "lspv" is an incorrect command, you wanted "pvs" or "pvdisplay", "vgs", "lvs", etc.


    Your step one doesn't seem correct to me. How can you reduce the size of a filesystem that's at 90% capacity?


    IIRC, you can only do a pvmove between devices in the same volume group, so both the old raid set md device and the new raid set md device must be part of your VG. You'd need to add your new PV to your existing VG first. This assumes you have the ports available to connect up all your drives. The pvmove would be done at the CLI. The old raid set device can then be removed from the VG via the WebUI. Then you can resize any LV as desired via the WebUI, and finally resize the filesystem via the WebUI.
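
    In outline, the CLI part would look something like this (the md device names and the VG name "yourvg" are placeholders, substitute your own):

    Code
    vgextend yourvg /dev/md1    # add the new raid set PV to the existing VG, if it isn't a member yet
    pvmove /dev/md0             # migrate all allocated extents off the old PV onto free space in the VG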


    I've not done this myself, but as the filesystem remains mounted at all times during this process, you should not face the problem of removing all references to a mounted filesystem before changing your disk storage layout.

  • Thanks Krisbee for reaching out!

    I am laughing at myself. I thought I was being crystal clear in my description, but crystal clearly, I was not, ha ha! :S


    "do you mean a MD RAID device is created first and then that md device is used as a physical volume"

    Yes, exactly. None of the usual LVM CLI commands work for me. I can only work from the webUI.


    I did this operation about 25 years ago on an AIX system and have a good idea of the process, but may have forgotten the details (to say the least...). My old RAID set was 90% full, so a second set was added as a physical volume in the volume group. I intended to grow the file system just a little to get rid of the immediate performance issues, but OMV allocated the entire volume group. It is now 20% full. This is why I now have to shrink the file system in order to remove the old RAID set from the volume group.


    "You've probably long discovered by now that "lspv" is an incorrect command, you wanted "pvs" or "pvdisplay", "vgs", "lvs", etc."

    All of these LVM commands render a "command does not exist" error in the CLI. (Sorry for the "lspv" thing - a detail I thought I remembered...)

    Code
    root@bolivar:/# pvdisplay
    bash: pvdisplay: command not found

    The commands all exist on the PC I am writing this on, but not in the OMV CLI. Hence "pvmove" is not available. This is problem #1.


    Are you saying that removing the old RAID set physical volume from the volume group could be done *before* shrinking the file system? And are you saying that shrinking the file system is possible from the webUI? In my webUI, it is only possible to grow the file system (EXT4). Is it perhaps an OMV7 feature?


    I have used utilities like Partition Magic in the past. They are able to resize file systems and partitions without trouble (including shrinking), but then the file systems are not mounted. Can this be done in OMV?


    And yes, I have backed up as much as I possibly can. :)

    //

    crashtestdummy First, have you actually installed the LVM2 plugin (openmediavault-lvm2) on your OMV6 system? I'd guess not. Install the plugin and then, for clarity, post the output of the commands pvs, vgs and lvs.


    Then we can go from there. Do not use Partition magic.

  • Yes, it's installed:


    Code
    root@bolivar:/# pvs
    bash: pvs: command not found
    root@bolivar:/# lvs
    bash: lvs: command not found
    root@bolivar:/# vgs
    bash: vgs: command not found
    root@bolivar:/# 

    //

    crashtestdummy It was getting late yesterday. To continue: if the commands are not found but are truly installed (stat /usr/sbin/pvs finds the binary), then something must be wrong with your login/console shell environment PATH and/or the ~/.bashrc and ~/.profile files, which should both be present.

  • Yes, Bolivar is the OMV6 server:



    Code
    root@bolivar:/# stat /usr/sbin/pvs
      File: /usr/sbin/pvs -> lvm
      Size: 3             Blocks: 0          IO Block: 4096   symbolic link
    Device: 97eh/2430d    Inode: 787443      Links: 1
    Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)  Gid: (    0/    root)
        Access: 2023-10-15 15:58:09.000000000 +0200
    Modify: 2021-02-22 22:39:14.000000000 +0100
        Change: 2023-10-15 15:58:11.169602446 +0200
         Birth: 2023-10-15 15:58:11.121604325 +0200
    root@bolivar:/# 


    "...something must be wrong with your login/console shell environment path and/or the ~/.bashrc and ~/.profile files which should both be present."


    I think you are right in that. The home directory is pretty empty:

    Code
    root@bolivar:/home# ls -al
    total 12
    drwxr-xr-x  3 root root 4096 16 dec  2023 .
    drwxrwxr-x 19 root root 4096  6 dec 08.04 ..
    drwxr-x---  2 Rob  root 4096 16 dec  2023 sftp_users
    root@bolivar:/home# 

    //

    I should have added: use the command "env" to check that both "LOGNAME" and "USER" are "root". Depending on how you've reached "root", those two env variables may point to a normal user. So either allow "root" to use ssh, or allow a non-root account to have a login shell of /usr/bin/bash and belong to the _ssh and sudo groups.
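
    For example, something like:

    Code
    # check which user the session variables point at and that the sbin dirs are on the PATH
    env | grep -E '^(LOGNAME|USER|PATH)='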

    Indeed! I was using a normal user to log in to the box and then "su" to root. "env" showed that things were pointing to the normal user, as you suggested. I now allow root to ssh in, and I can see the LVM commands. Yay! :)


    Now I should be able to shrink both the filesystem and LV with "lvresize". However, the filesystem needs to be unmounted first.

    The "unmount" button is grayed out:


    Dare I unmount from the CLI, or what should I do?

    //



  • //

    crashtestdummy The outputs above and your screenshot in #11 confirm that you appear to have increased both the LV and the filesystem /dev/dm-0 after adding the second raid set to the volume group "datavg". Unfortunately, that was the wrong order if you wanted to move extents from the old to the new raid set. You should have moved the extents before expanding the LV and filesystem, while everything was still mounted, so you've lost an advantage of using LVM.


    In OMV you cannot unmount a filesystem unless and until you remove all references to it, which basically means undoing other OMV config settings, e.g. any shares, use of folders by dockers etc., and the "shared folders".


    One way to see references to a filesystem is to install the "openmediavault-resetperms" plugin. Here's a simple example:





    In this example there are only two share settings and shared folders to remove before unmounting the filesystem; you may have many more references to remove.


    Before doing anything else, please post the output of pvdisplay in order to see the extents in use.

  • That's a 100% correct assessment. Wow! :) The history is that the old RAID set was 90% full with severe performance problems. With a new RAID set installed and included in the volume group, I intended to increase the file system just a bit to get the system running reasonably, before doing the rest (pvmove etc). However, OMV grabbed all available storage without asking, so here I am in a position where I need to reduce the size of the file system. Not at all what I intended.


    This is getting more difficult than I thought. I will start by installing the "openmediavault-resetperms" plugin and see what that brings. I would suspect a couple of file shares and an ftp dependency, but we'll see.

    //

    crashtestdummy There is a way around the problem of having to remove references before a filesystem can be unmounted and then shrunk. One solution is to boot the OMV host from a Linux SystemRescue USB. ( See: https://www.system-rescue.org/ )


    Once SystemRescue is installed on a USB stick, you can edit its YAML config to make a few changes to enable root ssh access to the booted system. The md raids should start, and /dev/dm-0 will be unmounted, so it can be shrunk before rebooting into OMV. Once that's done you can sort out your LVM settings. I'll be back tomorrow.
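
    From memory, the relevant part of the YAML config on the USB stick looks something like the sketch below, but do check the option names against the SystemRescue documentation before relying on them:

    Code
    ---
    global:
        nofirewall: true            # don't let the default firewall block incoming ssh
        rootpass: "MyTempPassword"  # set a known root password so you can ssh in as root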

    I am in new territory here. Installing the "openmediavault-resetperms" plugin went well, and I can see that I have eight shared folders, each accessed by two to five services. Permissions are admin-rw, users-rw and others-r on all of them. Looks like I would reset permissions on the eight folders, do the shrinkage and then do what to set the permissions back? The docs don't explain that part... I'm not sure what the system would look like afterwards. Would the shared folders still be there with their services, so that I only need to set r/w back, or would I have to set up the shares again?


    I am a little partial toward doing the necessary "surgery" from the comfort of my desk, where I have a good chance of double-checking everything, taking one step at a time in a feel-good manner :) , rather than being crammed in front of a very noisy server, raising the stress hormones to a toxic level. =O Although the command line with SystemRescue would be more familiar territory. After all, I did something similar 25 years ago... :D

    Would you recommend one or the other in this situation? Downtime is okay.

    //

  • crashtestdummy I didn't ask before, but wondered what kind of machine you were running OMV on. As you have around 12 HDDs available I guessed it might be server grade hardware. It was too late for me to complete my last post last night.


    A much more convenient use of SystemRescue is to install the "openmediavault-kernel" plugin. This allows you to embed and boot a SystemRescue ISO from the OMV OS disk itself, assuming you have at least 1GB of space on the OS disk. Everything can be done from the comfort of your desk.


    First install the kernel plugin, then install SystemRescue via the plugin. GRUB is amended so you can boot into SystemRescue once. Here's a screenshot:






    Select "Reboot to SystemRescue Once" and then Reboot OMV. You can ssh as root to SystemRescue with password "openmediavault" at whatever IP is assigned to it by DHCP. You can get the new IP form your router.


    In my example it was ssh root@192.168.0.200. The MD RAIDs are active and all LVM commands are available. The filesystem you want to shrink is not mounted and at this stage you've made no changes to any settings in OMV.
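
    A quick sanity check from the SystemRescue shell might look like this (device names will differ on your system):

    Code
    cat /proc/mdstat                          # both md arrays should be assembled
    pvs && vgs && lvs                         # the PVs, VG and LV should all be visible
    findmnt /dev/dm-0 || echo "not mounted"   # the filesystem to be shrunk should not be mounted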



    I created a small-scale example which mimics your situation. Unless the filesystem and LV are both shrunk, there are no free extents on the larger PV to receive all the extents moved from the small to the large PV. To ensure the 7.99GB can be moved, things need to be shrunk by an amount which still leaves the filesystem larger than the circa 8GB of data it holds. I will shrink to 15GB.


    You can shrink the filesystem and LV in one step, e.g.:
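
    Something along these lines, where "datavg/datalv" stands in for the actual VG/LV names:

    Code
    lvresize --resizefs -L 15G datavg/datalv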


    You can now check for free extents, before executing a pvmove:
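
    For example:

    Code
    pvs -o pv_name,vg_name,pv_size,pv_free   # the larger PV should now show free space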



    Execute the pvmove, which for 8TB of data will take several hours.
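
    Again with placeholder device names:

    Code
    pvmove /dev/md0 /dev/md1    # move every allocated extent off the old (small) PV onto the new one

    If the move is interrupted, running pvmove with no arguments resumes it.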



    Rebooting the system will automatically boot into OMV.


    In my example:


    At the WebUI stage you can remove the smaller PV from the VG. You can then extend both the LV and its filesystem via the WebUI. It would be sensible not to extend the LV to the maximum space available. E.g.:
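
    For reference, the CLI equivalent of those WebUI steps would be something like this (placeholder names again):

    Code
    vgreduce datavg /dev/md0                         # drop the old, now-empty PV from the VG
    lvextend --resizefs -l +50%FREE datavg/datalv    # grow the LV and filesystem by half the free space, leaving the rest unallocated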


    • Official Post

    Permissions are admin-rw, users-rw and others-r on all of them. Looks like I would reset permissions on the eight folders, do the shrinkage and then do what to set the permissions back? The docs don't explain that part...

    I'm sorry, but I don't quite understand what part the documents don't explain. Can you explain it better?

    I assume that when you say "The docs" you mean this one in particular: https://wiki.omv-extras.org/do…7:omv7_plugins:resetperms

    In reality, despite being a very useful complement, it is truly very simple to use, but if any further clarification is needed in that document, please let me know.

  • chente: This one, since I am still on OMV6: https://wiki.omv-extras.org/do…6:omv6_plugins:resetperms

    I don't think there is anything wrong with the user guide, but it does not explain my particular use case of restoring the permissions after first resetting them. Maybe it's trivial, but I don't know what to expect after removing the permissions. Would the shares still be there, with only the permissions needing to be set back again, or would I have to recreate the shares?

    //
