RAID 1 (Mirror) with LUKS encryption missing after reboot / edit of config.xml

    • OMV 4.x
    • geaves wrote:

      Ok, this will give some info on the drives: wipefs -n /dev/sdc. This will not wipe the drive but will report the signatures found on it.
      Nothing happened... (see screenshot below)
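For reference, an empty result from wipefs -n usually just means no known signatures were found on the device, which is itself a data point here. A minimal, non-destructive sketch (the device name /dev/sdc is taken from this thread; adjust as needed):

```shell
# Non-destructive signature check; "-n" (--no-act) only reads, never erases.
# /dev/sdc is an assumption taken from this thread.
dev=/dev/sdc
cmd="wipefs -n $dev"
echo "$cmd"
# Run the real command as root once the device name is confirmed:
# sudo $cmd
```

An empty output from the real command means wipefs found neither an mdadm superblock nor a LUKS header on the disk.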

      subzero79 wrote:

      Don’t have any more answers for you, unfortunately. Looks very weird.
      All I can think of is to try to assemble the RAID in degraded mode.


      Try lsblk and see if it reports more info.
      lsblk - not sure if that helps:


      How would I assemble in degraded mode?
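For anyone following along, a hedged sketch of what a degraded assembly would look like; the array and member names are assumptions, so substitute the real ones:

```shell
# Sketch only: start a RAID 1 array from a single member, i.e. degraded.
# /dev/md0 and /dev/sdc are assumptions; substitute your real names.
md=/dev/md0
member=/dev/sdc

# --run tells mdadm to start the array even though a member is missing.
cmd="mdadm --assemble --run $md $member"
echo "$cmd"
# sudo $cmd   # only after double-checking the device names
```

This can only work if the member still carries an mdadm superblock.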

      I did some more research and found this thread: unix.stackexchange.com/questio…onger-a-valid-luks-device
      Unfortunately, my Linux knowledge is somewhat limited, so I wonder if I could follow the same route or not. It sure sounds like the same problem, doesn't it?

      BTW
      I tried to simply decrypt one of the drives with cryptsetup luksOpen; obviously that didn't work either. The drives are not recognized as LUKS devices.
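A quick, read-only way to confirm whether a disk still carries a LUKS header (the device name is an assumption from this thread):

```shell
# Read-only LUKS header checks; neither command modifies the disk.
# /dev/sdc is an assumption taken from this thread.
dev=/dev/sdc
check="cryptsetup isLuks $dev"    # exit status 0 = LUKS header present
dump="cryptsetup luksDump $dev"   # prints header details if one exists
printf '%s\n%s\n' "$check" "$dump"
# sudo $check && sudo $dump
```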


    • Isn't /dev/sde the other drive in the array? If it is, mdadm --assemble --force --verbose /dev/md0 /dev/sde
      omv 4.1.23 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Unfortunately not, the two 4TB drives (sdc & sdd) used to be the array, see screenshot above - but I'll include the overview in the screenshot below as well.
      sde is the SSD that OMV runs on (and sda & sdb are the two independent 8TB drives)

      Nevertheless I tried your command - adding the verbose option - again for those two drives; sadly giving the same result:




      ---
      edit
      I see the confusion about the drive labels: in my very first post in this thread, sdd & sde are the 4TB drives. When booting SystemRescueCd, that seems to change; why, I don't know. Whenever I boot OMV, they're back to sdd & sde as well.


    • KOENICH wrote:

      Unfortunately not, the two 4TB drives (sdc & sdd) used to be the array, see screenshot above - but I'll include the overview in the screenshot below as well.
      I looked at your first post where they were sdd and sde.

      KOENICH wrote:

      Nevertheless I tried your command - adding the verbose option - again for those two drives; sadly giving the same result:
      The verbose flag definitely wouldn't help it assemble.

      So, I went back and read the whole thread since this situation was very confusing. Your drives don't show up as array members or luks devices. Did you wipe these drives or ever use the mdadm --create command? There is really no way to assemble these drives if neither one has an mdadm signature.
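The signature question can be checked explicitly with mdadm --examine; a sketch, with device names assumed from this thread:

```shell
# Sketch: ask mdadm whether each former member carries a superblock.
# /dev/sdc and /dev/sdd are assumptions taken from this thread.
ran=""
for dev in /dev/sdc /dev/sdd; do
  cmd="mdadm --examine $dev"
  echo "$cmd"
  ran="$ran$cmd;"
  # sudo $cmd   # "No md superblock detected" = disk is not an array member
done
```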
    • ryecoaaron wrote:

      KOENICH wrote:

      Unfortunately not, the two 4TB drives (sdc & sdd) used to be the array, see screenshot above - but I'll include the overview in the screenshot below as well.
      I looked at your first post where they were sdd and sde.

      KOENICH wrote:

      Nevertheless I tried your command - adding the verbose option - again for those two drives; sadly giving the same result:
      The verbose flag definitely wouldn't help it assemble.
      So, I went back and read the whole thread since this situation was very confusing. Your drives don't show up as array members or luks devices. Did you wipe these drives or ever use the mdadm --create command? There is really no way to assemble these drives if neither one has an mdadm signature.
      Concerning the label confusion: it's really strange! I double checked OMV again and now the drives have changed their labels there as well. The 8TB drives are sda & sdb, the 4TB drives (the former array) are sdc & sdd and the SSD containing OMV is sde (just as when booting RescueCD).

      As you can see in comparison to the screenshot on page 1 the labels have indeed completely changed. That is _very_ strange. Might that be a source of the problem as well?

      Concerning your question regarding the array situation: I never used the mdadm --create command as I created the RAID via the OMV frontend, but I would think that's what happened in the background, you would probably know best. I never did anything to the drives after creation of the LUKS encrypted RAID. Except for editing the config.xml and rebooting as explained in the beginning...

      I am not entirely sure of the correct order of encrypting / creating the RAID, though, but I believe there was only one way to get a LUKS-encrypted RAID 1. At this time, I'm unable to check again because obviously I do not want to overwrite data on those drives.
      I believe the order was:
      * creation of a mirror RAID device using Storage/RAID management
      * creation of an encrypted device using Storage/Encryption
      * creation of an ext4 filesystem on that encrypted device, called RAID4TB
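The three steps above would correspond roughly to the following layering; every name here is an assumption from this thread, and this is a sketch of what OMV likely ran in the background, not a command list to re-run against existing data:

```shell
# Sketch of the layering: md mirror -> LUKS on the md device -> ext4 on top.
# Every name here is an assumption; do NOT run this against disks with data.
steps="mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 RAID4TB
mkfs.ext4 -L RAID4TB /dev/mapper/RAID4TB"
echo "$steps"
```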
    • KOENICH wrote:

      That is _very_ strange. Might that be a source of the problem as well?
      Nope. Not strange. Some BIOSes initialize their drives in a different order on every boot. It shouldn't cause the problem, since mdadm looks for a specific signature on the drive (which your drives don't seem to have).
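One way to sidestep the shifting sdX names is to look at the persistent links, which follow the physical drive rather than the probe order. A small sketch:

```shell
# /dev/disk/by-id links are tied to the drive's model/serial, so they
# survive reboots even when sdc/sdd/sde get shuffled by the BIOS/kernel.
dir=/dev/disk/by-id
if [ -d "$dir" ]; then
  ls -l "$dir"
else
  echo "$dir not present on this system"
fi
```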

      KOENICH wrote:

      I never used the mdadm --create command as I created the RAID via the OMV frontend, but I would think that's what happened in the background, you would probably know best.
      OMV does use this command, but I was wondering if you tried it afterwards to fix the situation. Sounds like you didn't.

      KOENICH wrote:

      but I believe there was only one way to get a LUKS encrypted RAID1.
      You should be able to create an array on encrypted disks or create an encrypted disk on an array. From the output, it looks like neither was done, even though I know one of them was used. My only suggestion at this point would be to try the create flag with mdadm, but it usually doesn't fix the problem and wipes the drives. Not a great suggestion, I know, but it is really the only one, since you can't use recovery tools because encryption was used.
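For completeness, the last-resort attempt being described would look roughly like this; every name below is an assumption, and re-creating the array writes new md superblocks, so it must never be run without a sector-level backup of both disks:

```shell
# LAST RESORT, sketch only: re-creating the array writes new md superblocks
# and can destroy data. All names below are assumptions from this thread.
# --assume-clean skips the initial resync so member data is not rewritten.
cmd="mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sdc /dev/sdd"
echo "$cmd"
# Do NOT run this without a full backup of both disks:
# sudo $cmd
```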