Raid on lvm on raid...

  • Hi, I have (I hope) a reasonable request: build RAID on top of LVM on top of RAID. Here is my config (I have done it from the command line):


    5 hdds 320GB (notebook disks):
    One hdd is the OMV 0.5 system disk, with a roughly 280 GB partition for data (sda2). On sda2 I have LVM.
    Four hdds are used in a raid0 with LVM on top of it.
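
    For reference, a rough sketch of how such a stack could be built from the command line; the device names (/dev/sdb ... /dev/sde) and the volume group / LV names are only examples, not my actual ones:

    Code
    # 4-disk raid0 array (device names are examples)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # LVM on top of the array
    pvcreate /dev/md0
    vgcreate vg_raid0 /dev/md0
    lvcreate -L 100G -n lv_data vg_raid0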


    All my data is stored in a few logical volumes on the raid0. But I have some data which I want to have on raid1, so I have created two logical volumes (of exactly the same size) for that data: one logical volume in the sda2 volume group and one in the raid0 volume group. Then I used these two same-size logical volumes to create a raid1 (OT: probably I should also use the write-mostly parameter).


    So I now have a raid1 made of two logical volumes: one located in the VG on sda2 and one located in the very fast raid0 VG.
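
    For illustration, this is roughly how such a raid1 can be assembled from two equal-sized logical volumes, with the slower sda2 leg marked write-mostly; the VG/LV names are examples only:

    Code
    # two logical volumes of exactly the same size (names are examples)
    lvcreate -L 50G -n lv_mirror vg_sda2
    lvcreate -L 50G -n lv_mirror vg_raid0
    # raid1 on top of them; --write-mostly applies to the device(s) listed after it
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/vg_raid0/lv_mirror --write-mostly /dev/vg_sda2/lv_mirror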


    My questions:

      1. How can I get

    Code
    mdadm --assemble --scan

    done after LVM initialization? (If I am not wrong, the OMV system, like any other system, assembles all RAID volumes first and then searches for LVM containers. Thus it will not assemble a RAID built from LVM logical volumes, since those only appear after the RAID assembly step.)


    • 2. How can a grow (or any other resize operation) be done on this raid1?
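
    (What I have in mind is roughly the following layer-by-layer sequence, assuming ext4 on the array; the sizes and names are examples and I have not verified that this is correct:)

    Code
    # grow both underlying logical volumes by the same amount
    lvextend -L +10G /dev/vg_sda2/lv_mirror
    lvextend -L +10G /dev/vg_raid0/lv_mirror
    # let the raid1 use the new space, then grow the filesystem
    mdadm --grow /dev/md1 --size=max
    resize2fs /dev/md1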


    PS: I plan to turn that four-disk raid0 into something bigger in the future (probably raid10), but I am waiting for some SATA port multiplier cards...
    PS: Four notebook disks in raid0 can saturate a whole 1 Gbit network (NFS share).

  • If you are using OMV it is best to use the GUI to do any software RAID setups, not the command line. You can either do software RAID and then LVM, or hardware RAID and then LVM. LVM is just a container to manage it all and gives you the flexibility to add more disks without breaking your setup. If you use hardware RAID then the setup is done on the RAID controller and OMV will see your array as one large disk. If you have several hardware arrays you can use LVM to manage them.

  • Raid 0 is a very bad idea; raid 10 is OK if you really need the speed and some protection.
    Raid 0: fast read/write, no protection.
    Raid 5: best for storage, very good read, OK write, and parity of one drive.


    When I set up my VM I will use raid 10 for the VM drives and raid 5 for all the storage, in groups of 4 drives, and manage the RAID with LVM.

  • Pardon, but I do not want any advice on how to store my data. Please read my question fully and answer it if you can, or do not post things that I already know and did not ask about...


    Thanks.

    • Official post

    No need to be rude when someone is offering advice especially when doing something unsupported. Like cpoc said, if you use raid and lvm from the web interface, you might not have this problem...


  • Quote from "ryecoaaron"

    No need to be rude when someone is offering advice especially when doing something unsupported. Like cpoc said, if you use raid and lvm from the web interface, you might not have this problem...


    Pardon for my rudeness. I asked that question in the hope that some OMV hacker could tell me where I can add the needed functionality in harmony with OMV and Debian (as I am a Gentoo user, and init.d on Debian is a mystery to me), and so, disappointed by the off-topic replies, I wrote that. Sorry.


    To answer my question myself:


    Edit the lvm2 init script:

    Code
    vim /etc/init.d/lvm2


    and have a look at the start part:

    Code
    do_start()
    {
            modprobe dm-mod 2> /dev/null || :
            /sbin/vgscan --ignorelockingfailure --mknodes || :
            /sbin/vgchange -aly --ignorelockingfailure || return 2
    }


    and modify it like this:

    Code
    do_start()
    {
            modprobe dm-mod 2> /dev/null || :
            /sbin/vgscan --ignorelockingfailure --mknodes || :
            /sbin/vgchange -aly --ignorelockingfailure || :
            # added: assemble any arrays that live on the logical volumes activated above
            /sbin/mdadm --assemble --scan || return 2
    }


    The added mdadm call could use a variable, like in the /etc/init.d/mdadm-raid script, to point to the mdadm binary. This solution is working: after a reboot, mdadm assembles the RAID arrays, then LVM activates the logical volumes, and then the modified part of the init script tries to find additional mdadm RAID devices on the now-activated logical volumes as well.
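
    A sketch of what that variable could look like (the variable name is just an example, loosely following the style of the Debian init scripts):

    Code
    MDADM=/sbin/mdadm

    do_start()
    {
            modprobe dm-mod 2> /dev/null || :
            /sbin/vgscan --ignorelockingfailure --mknodes || :
            /sbin/vgchange -aly --ignorelockingfailure || :
            # assemble any arrays that live on the logical volumes activated above
            $MDADM --assemble --scan || return 2
    }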
