File System (RAID1) "Missing" after Upgrade

    • OMV 4.x
    • Upgrade 3.x -> 4.x
    • krispayne wrote:

      Yes I tried that as it was a troubleshooting step already in this thread. It still does not show in OMV, or mount.
      Two of your drives are marked as spares; that should be fixed, but it shouldn't keep a RAID 6 array from showing up as a filesystem. What is the output of blkid?
      omv 4.1.15 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Source Code

      root@omvserver:/# blkid
      /dev/sda: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="0c173526-cbef-e873-e987-ad0471515b63" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdb: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="2d0d7354-6ef4-34fb-d611-8ac88d820264" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdc1: LABEL="Backup01" UUID="b971bdaa-022a-44fb-a8c1-ad89c370bb54" TYPE="ext4" PARTUUID="75011f3a-a535-42da-a479-c99d86491a5b"
      /dev/sdd5: LABEL="SSD" UUID="7bf65535-09bf-4199-a3b3-7e5fabc98ca7" TYPE="ext4" PARTUUID="000b6b63-05"
      /dev/sdd6: UUID="f2ed6c8a-f1ff-4b0b-978b-00b422b92631" TYPE="swap" PARTUUID="000b6b63-06"
      /dev/sdd7: LABEL="Root" UUID="acde4389-52ad-42aa-82f9-2607a2afc803" TYPE="ext4" PARTUUID="000b6b63-07"
      /dev/sdd8: LABEL="Homes" UUID="67ac96db-cfbe-4082-a4e7-bc57c651ba27" TYPE="ext4" PARTUUID="000b6b63-08"
      /dev/sde: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="90680e50-c911-e922-67e0-e0dc42cb6fdf" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdf: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="aa54a7e8-dbbf-759b-c41b-dc76b074d71b" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdg: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="4f9a393f-7306-ff85-ca48-cdfce22feb76" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdh: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="ce2c2f21-4f2c-c84e-13e4-c7fa3167b4a1" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      /dev/sdi: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" UUID_SUB="b2c63a35-8576-2c1b-8607-f43aeca02009" LABEL="omvserver:pool01" TYPE="linux_raid_member"
      root@omvserver:/#
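
      A quick sanity check on output like the above is to count how many devices report the same md array UUID; for a 7-disk array you expect seven linux_raid_member entries sharing one UUID. This is a minimal sketch that filters a trimmed copy of the blkid output shown above; on a live system you would pipe `blkid` itself into the awk filter instead of the sample variable.

      ```shell
      #!/bin/sh
      # Trimmed copy of the blkid output above (UUID_SUB/LABEL fields dropped
      # for brevity); on a real box: blkid | awk ...
      blkid_sample='/dev/sda: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdb: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdc1: LABEL="Backup01" UUID="b971bdaa-022a-44fb-a8c1-ad89c370bb54" TYPE="ext4"
      /dev/sde: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdf: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdg: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdh: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"
      /dev/sdi: UUID="954de9a6-aa7d-febc-fd8a-c82892be0349" TYPE="linux_raid_member"'

      # Keep only linux_raid_member lines and tally devices per array UUID.
      printf '%s\n' "$blkid_sample" \
        | awk '/linux_raid_member/ { for (i = 1; i <= NF; i++) if ($i ~ /^UUID=/) print $i }' \
        | sort | uniq -c
      ```

      All seven data disks show up under the one array UUID, so the array members themselves are visible to the kernel.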
      Something I noticed last night was that maybe it has to do with the UUID being different? (It was; now that I've run omv-mkconf mdadm it's the correct UUID.) I'm not too sure of myself when it comes to the underlying RAID stuff (that's why I used OMV :) )

      I thought RAID6 was supposed to have 2 spares? My thought process is that it can lose 2 drives?


    • krispayne wrote:

      I thought RAID6 was supposed to have 2 spares? My thought process is that it can lose 2 drives?
      RAID6 has two parity drives, which allows you to lose two drives without losing data. Spares are different; they are there to replace failed drives.
    • Nope. With the mdadm commands, try removing the drive and then adding it back.
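
      The remove/re-add cycle suggested above boils down to two mdadm invocations. This is a hedged sketch: the array name /dev/md0 and the spare-marked disk /dev/sdb are placeholders (check `cat /proc/mdstat` for the real names), and DRY_RUN=1 only prints the commands instead of touching the array.

      ```shell
      #!/bin/sh
      # Placeholders -- adjust to match /proc/mdstat on your system.
      ARRAY=/dev/md0
      DISK=/dev/sdb
      DRY_RUN=1   # set to 0 (and run as root) to actually execute

      run() {
          if [ "$DRY_RUN" -eq 1 ]; then
              echo "would run: $*"
          else
              "$@"
          fi
      }

      run mdadm "$ARRAY" --remove "$DISK"   # drop the spare from the array
      run mdadm "$ARRAY" --add "$DISK"      # re-add it; md assigns its role
      ```

      With DRY_RUN=1 the script prints the two commands so you can review them before running anything destructive.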
    • Growing the RAID to 6 devices seems to be the trick to get rid of the extra spare:

      mdadm --grow /dev/md0 --raid-devices=6

      Do you think that by changing something like this in the RAID configuration, OMV will be able to see it in the GUI? Once this process is done (800 minutes estimated), should I recreate mdadm.conf?

      Thanks for your help so far, it's appreciated!
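
      While a grow like that runs, the reshape progress appears in /proc/mdstat (e.g. `watch -n 60 cat /proc/mdstat`). This sketch extracts the percentage from such a status line; the sample line mimics md's output format and its numbers are illustrative, not taken from this system.

      ```shell
      #!/bin/sh
      # Sample reshape status line in the style of /proc/mdstat
      # (numbers are made up for illustration).
      mdstat_line='[====>................]  reshape = 23.5% (918218752/3906887168) finish=612.3min speed=81320K/sec'

      # Pull out the percentage with sed.
      pct=$(printf '%s\n' "$mdstat_line" | sed -n 's/.*reshape = \([0-9.]*\)%.*/\1/p')
      echo "reshape progress: ${pct}%"
      ```

      On a live system you would grep the real /proc/mdstat for the "reshape" line instead of using a sample variable.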
    • krispayne wrote:

      Do you think that by changing something like this in the RAID configuration, OMV will be able to see it in the GUI? Once this process is done (800 minutes estimated), should I recreate mdadm.conf?
      OMV will never be able to "see" it unless blkid shows it in the output. Once the reshape is done, I would execute:

      omv-mkconf mdadm

      This will recreate mdadm.conf and update the initramfs.