RAID 1 (Mirror) with LUKS encryption missing after reboot / edit of config.xml

  • What happened:
    I recently wanted to install OpenVPN on my OMV v4.1.23-1 and consulted a YouTube video by TechnoDadLife (link follows). Anyway, after the first steps I had to reboot the system, only to find that my Mirror RAID1 (encrypted using LUKS) was gone. It's also gone from the encryption section.
    I have read through a couple of threads here, but have so far been unable to mount even one of the two disks, let alone the RAID. After reading these threads, I assume my manual edit of config.xml and the subsequent omv-mkconf fstab caused this error, though I'm clueless as to why. The exact procedure can be found at this timestamp in the video: https://youtu.be/nMthOobE-8g?t=172 - nothing really complicated, especially up to this point.


    When I try to create a new file system, OMV says it'll format the drive(s), which of course I do not want to happen.
    I have already tried to reactivate the RAID using mdadm --assemble; here's the output:

    Code
    root@Serverlein:~# mdadm --assemble /dev/md0 /dev/sd[de]
    mdadm: no recogniseable superblock on /dev/sdd
    mdadm: /dev/sdd has no superblock - assembly aborted
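
For completeness, a diagnostic sketch to see what signatures (if any) are left on the two members - not something I've run yet, and the device names are just taken from above:

```shell
# Print any mdadm superblock metadata still present on the members
mdadm --examine /dev/sdd /dev/sde

# Show whatever filesystem/RAID/LUKS signatures blkid can still detect
blkid /dev/sdd /dev/sde

# Check for a LUKS header directly; exit status 0 means "is LUKS"
cryptsetup isLuks /dev/sdd && echo "sdd has a LUKS header" || echo "sdd: no LUKS header"
cryptsetup isLuks /dev/sde && echo "sde has a LUKS header" || echo "sde: no LUKS header"
```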

    Here's the output for the "Degraded or missing raid array" questions:


    So the two drives for the RAID are /dev/sdd and /dev/sde, both with 4TB capacity (see screenshots below as well). And as said, they were encrypted using LUKS via the encryption plugin. Everything worked perfectly well until this reboot...
    Somehow I have the feeling that OMV "forgot" that these drives are encrypted, needs to be told so, and would then be able to recreate the RAID - but I do not know how.


    Here's a screenshot of file systems with the missing RAID:
    FileSystems-missing.jpg
    And here's one showing the 5 disks of my server. A 128GB SSD as system drive sda, two 8TB drives sdb & sdc (both LUKS encrypted as well) and the two 4TB drives sdd&sde which used to be the LUKS encrypted RAID:
    disks-overview.jpg

  • First off, I have no knowledge of LUKS, but the output from mdstat, blkid and mdadm --examine does not display any information about /dev/sd[de]; the only real output is from your mdadm --assemble, which returns no superblock on /dev/sdd.


    I remember reading something on here regarding LUKS and RAID; I think you have to turn off LUKS for the RAID before you can do anything.

  • Quote

    First off, I have no knowledge of LUKS, but the output from mdstat, blkid and mdadm --examine does not display any information about /dev/sd[de]; the only real output is from your mdadm --assemble, which returns no superblock on /dev/sdd.

    I remember reading something on here regarding LUKS and RAID; I think you have to turn off LUKS for the RAID before you can do anything.

    Thanks for your answer. I think it's pretty strange that no other information about the missing RAID shows up either.
    But how would I turn off LUKS for the RAID? I would still like to access my data, which sits encrypted on those two drives...

  • Quote

    But how would I turn off LUKS for the RAID?

    I don't know, I've never used it. There is another option: in OMV-Extras -> Kernel, scroll down and you will see Install SystemRescueCD, then an option to reboot into it. This might help recover the raid, as it's all command line and may well bypass LUKS - but again, I don't know.


    The superblock error is repairable, but only if it's on one drive. Using the SystemRescueCD, mdadm -D /dev/md0 will hopefully return something. And I suppose you don't have a backup of that raid :)

  • I am not sure about the best practices here, but with a raid, in my opinion the encryption should sit on top of the assembled md0 device, not the underlying members.


    I use LUKS, but just as single encrypted drives merged into a pool.



    This should be right IMO
    RAID->LUKS->FILESYSTEM


    We can wait for comments from other users who use a similar setup.
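
If it helps, the two possible orderings would look roughly like this - a hypothetical sketch only, NOT commands to run against existing disks (luksFormat and mdadm --create are destructive):

```shell
# Layout A (recommended): RAID -> LUKS -> filesystem
# The mirror is built from the raw disks, and LUKS sits on top of md0.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd /dev/sde
cryptsetup luksFormat /dev/md0      # DESTRUCTIVE: only on a brand-new array
cryptsetup luksOpen /dev/md0 md0crypt
mkfs.ext4 /dev/mapper/md0crypt

# Layout B: LUKS -> RAID -> filesystem (each disk encrypted separately)
# Here the mirror is built from the opened LUKS mappings instead.
cryptsetup luksOpen /dev/sdd sdd-crypt
cryptsetup luksOpen /dev/sde sde-crypt
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/sdd-crypt /dev/mapper/sde-crypt
```

The mapping names (md0crypt, sdd-crypt, sde-crypt) are just chosen for illustration.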

  • Honestly, I believe that's the way it was configured - I just can't check at the moment. There was only one way to configure RAID and LUKS via OMV, and I think that's exactly how it worked.
    I'm wondering at this moment if I could somehow connect only one of the two drives, decrypt it and save the data - usually that should be possible with RAID 1, but I'm not sure if I can do that with OMV.

  • Quote from subzero79

    This should be right IMO
    RAID->LUKS->FILESYSTEM

    Quote

    Honestly I believe that's the way it was configured - I just can't check at the moment.

    I did some research on this last night; @subzero79 is correct, that is the way it should be done - that way you are encrypting the raid and not the two disks.


    One way to check is to look at the output of cat /etc/fstab.
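
Besides fstab, lsblk shows the stacking directly once everything is assembled and opened - a sketch, since the exact tree depends on the system:

```shell
# The NAME/TYPE tree makes the layer order visible:
# with RAID->LUKS you'd see a raid1 device under the disks and a crypt
# mapping under md0; with LUKS->RAID the crypt mappings would appear
# directly under the disks instead.
lsblk -o NAME,TYPE,SIZE,FSTYPE /dev/sdd /dev/sde
```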

  • It's strange; from what I can see, sdd is missing the raid signature. Reboot the server and put the whole dmesg output after boot in hastebin.


    You should be able to start the raid without one of the disks if it is raid1.

    Thanks for all your replies. I'm on parental leave at the moment and don't find time to look after my server every day, so please excuse my late answer.


    Where do I find the boot dmesg output as requested?

  • That was easy enough, thanks :P


    I just rebooted the server, here's the dmesg output:


    ---longer than 10.000 chars, so I had to put it in an attachment---
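
    (For reference, I captured it over SSH with something like this - the file path is just what I happened to use:)

```shell
# Dump the current kernel ring buffer to a file for attaching
dmesg > /tmp/dmesg-boot.txt

# Alternatively, on systems with systemd, only the kernel messages
# for the current boot:
journalctl -k -b > /tmp/dmesg-boot.txt
```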



    I didn't do anything after reboot, thus the 8TB drives are not decrypted yet at this moment.

    Quote from subzero79

    Should be able to start the raid without one of the disks if is raid1

    That's what I thought. But how would I do that? At this point I can't decrypt even a single drive in OMV (at least not via the web frontend).
    I'm wondering if that would be less work than trying to repair the RAID. In the future I'll probably just rsync between the two drives, since I'm currently planning to rsync that data to an external OMV anyway.
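
    From what I've read, the degraded start would presumably be something like this (untested, and assuming the superblock on sde is still intact):

```shell
# A degraded raid1 can be started from a single member; --run starts
# the array even though the second member is missing
mdadm --assemble --run /dev/md0 /dev/sde

# Check whether the array came up (it should show as degraded)
cat /proc/mdstat
```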

  • Having looked at that dmesg, I can't see any errors or warnings regarding any of the drives, but I'm no expert at reading it; your fstab would show if the raid is actually configured in there.

  • I attached the lsmod output as a txt file. I also attached the output of fstab - when reading the whole thread again, I realized I was asked for it earlier.


    It was a fresh ISO install.


    Thanks again for caring!

  • The fstab is incorrect; yours is missing this:


    # <<< [openmediavault]


    which should be the last line. OMV adds its fstab entries between # >>> [openmediavault] at the start and # <<< [openmediavault] at the end.
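
    To check quickly, something like this pulls out just the OMV-managed section between the two markers (the fstab lines below are only an illustrative sample; on the real system you would run the sed against /etc/fstab):

```shell
# Illustrative fstab sample with the OMV marker block at the end
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd / ext4 errors=remount-ro 0 1
# >>> [openmediavault]
/dev/disk/by-label/data /srv/data ext4 defaults,nofail 0 2
# <<< [openmediavault]
EOF

# Print everything between the >>> and <<< markers (inclusive)
sed -n '/^# >>> \[openmediavault\]/,/^# <<< \[openmediavault\]/p' /tmp/fstab.sample
```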

  • Quote

    The fstab is incorrect; yours is missing this:

    # <<< [openmediavault]

    which should be the last line. OMV adds its fstab entries between # >>> [openmediavault] at the start and # <<< [openmediavault] at the end.

    My bad. I forgot to copy the last line. Sorry.
    # <<< [openmediavault] is actually at the end. I corrected the txt-file in the post above - it really was only the last line missing.

    Doesn't return anything...

  • Quote

    My bad. I forgot to copy the last line.

    Ok, but I agree with @subzero79, there is nothing that jumps out. I have seen users' arrays go inactive, degraded, yes, even missing, but there was always information to work with. This is the first time I have seen anything like this: the drives are there, but that's all.

  • IPMI is afaik theoretically possible (it's a HP N54L MicroServer), but that's it.


    Given that the two of you haven't experienced this weird behavior, I assume it's actually best to try and see what I can do with an Ubuntu live system... I'll see if I can at least decrypt one of the two drives (which would be sufficient). I actually have a backup of the content of the two drives, but it's about two weeks old and some of the data has changed in the meantime, so I'll need to see if I can somehow access it again.
    For the future (as said somewhere above), I guess I'll just rsync between the two drives and won't trust a soft RAID :(
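
    My rough plan for the live system (untested, and assuming the stack really was RAID->LUKS; the mapping name and backup path are just placeholders):

```shell
# 1. Scan for any arrays the live system can still see
mdadm --assemble --scan

# 2. If only sde assembles, force a degraded start from that member
mdadm --assemble --run /dev/md0 /dev/sde

# 3. Open the LUKS container on top of the array and mount read-only,
#    so nothing gets written while copying the data off
cryptsetup luksOpen /dev/md0 rescue
mkdir -p /mnt/rescue
mount -o ro /dev/mapper/rescue /mnt/rescue

# 4. Copy the data to a spare disk before touching anything else
rsync -a /mnt/rescue/ /path/to/backup/destination/
```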
