SnapRAID LVM question

  • Hi,


    I have a little HP MicroServer. I was running FreeNAS with 3x 2TB drives in a RAIDZ1 config until I began running low on space, got another 2TB drive, and realized I couldn't grow my volume. I'm now on OMV and so far it looks great, but I would like to add some redundancy, so here's my story.

    I initially planned on using SnapRAID to roughly replicate the setup above (I'm okay with a bit of data loss if SnapRAID isn't completely up to date), so my plan was to have my three 2TB drives holding media and the 4th drive as the parity drive. I didn't want to manage each drive independently, so I put them in an LVM volume so I could just copy files over and let LVM deal with the rest. That worked great while I performed the risky move of degrading my old FreeNAS setup to 2 drives and copying the data across to my new 2-drive LVM volume. I have since wiped and formatted to ext4, added the 3rd drive to the LVM volume, and mounted the 4th drive (which will be parity) separately.


    My setup is now:
    LVM volume: 3x 2TB drives
    Parity drive: 1x 2TB drive


    I then installed SnapRAID, and as I have only one mounted volume (the LVM volume) I can't use my parity drive because it's too small: SnapRAID wants the parity drive to be at least as large as the largest data drive, and a single 2TB disk is much smaller than the ~5.5TB LVM volume. I would need to somehow mount all the drives in the LVM volume separately to get this to work as intended. Is there a simpler way to do this, or am I looking at the wrong solution here? Also, what would happen in LVM if one of my drives failed: would I lose everything or just that disk? Perhaps dealing with each drive on its own would have its benefits then. I did try searching for answers here but struggled to find something that matched my issue.


    Thanks in advance.

    • Official post

    If a disk from the VG fails there will be data loss. A plain (linear) LVM volume has no redundancy, so losing one physical volume will generally take the whole logical volume and the filesystem on it with it, not just the files that happened to sit on that disk.


    If you want to move the LVM drives to plain sd[X] disks formatted as ext4 for SnapRAID, you will have to start by shrinking the LVM setup, reducing the logical volume. You'll have to reduce the logical volume by the size of one of the disks, then make sure the corresponding physical volume is no longer being used by the logical volume before removing it from the VG. In my opinion it is a very complex procedure; you should read up on how to accomplish such a thing.
    I'd recommend practicing this in a virtual machine first to check that it is feasible.
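    As a rough sketch only (assuming the volume group is called volumeGroup and the disk you want to free is /dev/sdc, as in the follow-up below), the check and removal could look like this; the key point is that the PV must show nothing allocated on it before you remove it:

    # show how much of each physical volume is allocated
    pvs -o pv_name,vg_name,pv_size,pv_used,pv_free
    # move any remaining extents off the disk you want to free
    pvmove /dev/sdc
    # once the PV reports nothing in use, drop it from the volume group
    vgreduce volumeGroup /dev/sdc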

  • Thanks a mil! This is the process I followed, in case it helps anyone, and it seems to have worked:


    I'd highly recommend reading up on each command and the options I selected to make sure they'll work as intended in your scenario.


    Before:
    root@openmediavault:~# pvscan
    PV /dev/sda VG volumeGroup lvm2 [1.82 TiB / 0 free]
    PV /dev/sdb VG volumeGroup lvm2 [1.82 TiB / 0 free]
    PV /dev/sdc VG volumeGroup lvm2 [1.82 TiB / 0 free]
    Total: 3 [5.46 TiB] / in use: 3 [5.46 TiB] / in no VG: 0 [0 ]
    root@openmediavault:~#


    I needed to remove /dev/sdc, so I copied off as much as possible and was left with 2.2TB of data on my VG. These are the commands I ran:


    # stop NFS so the filesystem can be unmounted cleanly
    /etc/init.d/nfs-kernel-server stop
    umount /dev/mapper/volumeGroup-MyDisk
    # shrink the filesystem to 2900G, comfortably below the target LV size
    # (resize2fs normally refuses to shrink until 'e2fsck -f' has been run on the unmounted filesystem)
    resize2fs /dev/mapper/volumeGroup-MyDisk 2900G && e2fsck -f /dev/mapper/volumeGroup-MyDisk
    # shrink the logical volume to 3000G, i.e. still larger than the filesystem so nothing gets cut off
    lvreduce -L 3000G /dev/mapper/volumeGroup-MyDisk && e2fsck -f /dev/mapper/volumeGroup-MyDisk
    # grow the filesystem back out to fill the reduced logical volume
    resize2fs /dev/mapper/volumeGroup-MyDisk && e2fsck -f /dev/mapper/volumeGroup-MyDisk
    # move all remaining extents off /dev/sdc, then drop it from the volume group
    pvmove /dev/sdc
    vgreduce volumeGroup /dev/sdc


    After:
    root@openmediavault:~# pvscan
    PV /dev/sda VG volumeGroup lvm2 [1.82 TiB / 0 free]
    PV /dev/sdb VG volumeGroup lvm2 [1.82 TiB / 726.03 GiB free]
    PV /dev/sdc lvm2 [1.82 TiB]
    Total: 3 [5.46 TiB] / in use: 2 [3.64 TiB] / in no VG: 1 [1.82 TiB]
    root@openmediavault:~#
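
    With /dev/sdc out of the volume group, the plan would be to format it as ext4 and use it as the SnapRAID parity disk once the data disks are mounted individually. Just as an illustration (the mount points and content-file paths below are made up, not what OMV actually uses), a minimal snapraid.conf for the intended 3-data-disk + 1-parity layout could look something like this:

    # parity file lives on the dedicated 2TB parity disk
    parity /srv/parity/snapraid.parity

    # keep several copies of the content file, one per data disk plus one on the system disk
    content /var/snapraid/snapraid.content
    content /srv/disk1/snapraid.content
    content /srv/disk2/snapraid.content

    # the three individually mounted 2TB data disks
    disk d1 /srv/disk1/
    disk d2 /srv/disk2/
    disk d3 /srv/disk3/

    After that, snapraid sync builds the parity, and snapraid scrub / snapraid status can be used to check it.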
