[RESOLVED] RAID on LVM logical volumes not available after reboot

  • Hello,


    I started using OMV recently and have a question regarding RAID on LVM logical volumes.
    I installed Debian 6.0.9 and then followed the very good installation guide http://forums.openmediavault.org/viewtopic.php?f=12&t=2140
    I think this might be more of a debian problem but I still hope somebody can help me.


    I have 4 drives (2 x 2.5 TB and 2 x 3 TB) and I want to create a RAID 5 on them.
    I can't do it directly on the raw disks because the sizes differ and I would lose the additional storage on the larger drives.
    After installing the LVM plugin I can create logical volumes of identical size, build a 4-drive RAID 5 on them, and put the additional space on the larger drives into a separate RAID 1.
    It all works fine until I reboot.
    After a reboot the RAID is gone. It is not listed in RAID Management and the file systems are listed as missing.


    The RAID can be restored manually by running


    Code
    mdadm --assemble --scan


    and the RAID is listed under RAID Management as clean again.
    I then have to mount the filesystems manually and everything is fine again. The stored data is available.
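
    For reference, the manual recovery amounts to something like this. This is just a rough sketch; the md device names and mount points are placeholders, not necessarily what the system actually uses:


    Code
    mdadm --assemble --scan          # re-assemble the arrays that were missed at boot
    mount /dev/md0 /media/raid5      # placeholder device name and mount point
    mount /dev/md1 /media/raid1      # placeholder device name and mount point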


    I think the problem is that LVM might not be available yet when the system scans for RAID arrays during boot, but this is just a guess.


    Is there any way to automate the assembly and mounting?

  • Quote from "lenny_n"

    I think the problem is that LVM might not be available yet when the system scans for RAID arrays during boot, but this is just a guess.


    I'm sure this is the problem. You need some type of delay before the RAID is mounted. ryecoaaron would be the best person to ask how to set up the delay; I will send him a link to this post. While this will achieve what you want, I don't think that LVM is that great. Have you ever done a pvreduce? It is a painful process. If you don't need redundancy on all your files you could use Greyhole and only keep extra copies of important data. You could also use SnapRAID, but with that you should have same-size disks.

    • Official post

    The only way I know how is to put the mounting (with delay) in /etc/rc.local (which I am not a fan of).
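
    A rough sketch of such an /etc/rc.local, assuming the arrays come up as /dev/md0 and /dev/md1 and using placeholder mount points (the delay length is a guess, and rc.local has to end with "exit 0"):


    Code
    #!/bin/sh -e
    # Give LVM time to activate the logical volumes before assembling
    # the arrays on top of them (the delay length is a guess)
    sleep 15

    # Assemble any arrays that were not detected during boot
    mdadm --assemble --scan || true

    # Mount the filesystems; device names and mount points are placeholders
    mount /dev/md0 /media/raid5 || true
    mount /dev/md1 /media/raid1 || true

    exit 0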


  • I thought about this a bit this morning. Putting a RAID 5 on LVM really does not make sense. It defeats the whole purpose of having a RAID 5: redundancy in case of disk failure. If one disk fails that has 2 partitions on it that are both being used by the RAID 5, how much sense does that make? You would not be able to recover the RAID 5. Bad idea, and I would go no further with it. If you want to pool the drives I would use the Greyhole or AUFS plugin.

  • Thank you very much for your help so far.


    I think I have to explain a bit more about my use of LVM. When I create a RAID 5 directly on my drives (2 x 3 TB and 2 x 2.5 TB) I will end up with about 7.5 TB of space on the volume, because it will only use 2.5 TB of each 3 TB drive. So I will "lose" 2 x 0.5 TB of space. This happens because the members have to be the same size, otherwise the RAID only uses the capacity of the smallest disk on every member.
    What I do to fix this is create a logical volume on each disk with the size needed by the RAID (about 2.5 TB) and another volume of about 0.5 TB on each of the larger disks. These are all separate volumes and do not span multiple disks.
    Now I can build a RAID 5 consisting of the 4 large logical volumes on the 4 disks, and a separate 2-disk RAID 1 of about 0.5 TB mirrored across the two smaller volumes.
    Should one disk fail, I can replace it, configure LVM on the new disk with logical volumes of the required sizes, and add the new LVs to the degraded arrays to rebuild them. A rough sketch of the commands is below.
    I don't see any reason why this should not work.
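
    Roughly, the setup would look like this. It is just a sketch; the disk names, volume group and LV names and the sizes are examples, not the exact commands I ran:


    Code
    # One volume group per data disk (sdb-sde are placeholder device names)
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate vg_b /dev/sdb
    vgcreate vg_c /dev/sdc
    vgcreate vg_d /dev/sdd
    vgcreate vg_e /dev/sde

    # A ~2.5 TB logical volume on every disk for the RAID 5
    lvcreate -L 2500G -n lv_raid5 vg_b
    lvcreate -L 2500G -n lv_raid5 vg_c
    lvcreate -L 2500G -n lv_raid5 vg_d
    lvcreate -L 2500G -n lv_raid5 vg_e

    # The leftover ~0.5 TB on the two 3 TB disks for the RAID 1
    # (assuming vg_d and vg_e sit on the 3 TB drives)
    lvcreate -l 100%FREE -n lv_raid1 vg_d
    lvcreate -l 100%FREE -n lv_raid1 vg_e

    # Build the arrays on top of the logical volumes
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/vg_b/lv_raid5 /dev/vg_c/lv_raid5 /dev/vg_d/lv_raid5 /dev/vg_e/lv_raid5
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/vg_d/lv_raid1 /dev/vg_e/lv_raid1


    After replacing a failed disk, the same pvcreate/vgcreate/lvcreate steps would be repeated on the new drive and the fresh LVs added back to the degraded arrays with "mdadm --add".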


    I will try putting "mdadm --assemble --scan" and the mounting in /etc/rc.local and see if it works.


    Thanks again.

  • LOL, a lot of work. I get it now that I see you are mirroring the 0.5 TB volumes. I've used LVM quite a bit and it can be a pain in the ass. Since my last pvreduce I'm pretty much done with it. It was so damn slow.


    Good Luck!

  • Just a quick update.
    Putting "mdadm --assemble --scan" and the mounting in /etc/rc.local works.
    I use this configuration for over a month now and had no problems.


    Thanks again for your help.
