Upgrade from 2.x to latest 4.x: RAID OK but marked as md127

    • OMV 4.x
    • Resolved
    • Upgrade 2.x -> 4.x

      Hi,

      I did a fresh install on a thumb drive, then configured my settings.

      The RAID is OK, but it shows up as md127 instead of md0. Is this a problem?

      Source Code

      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid6 sdb[8] sdf[6] sdc[2] sdd[0] sda[3] sde[7]
            5860548608 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]




      Source Code

      /dev/sdb: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b243c911-8058-51ff-0413-31fb1528c36c" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/sdc: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="70983681-b5c2-9f70-1801-948b1b7c97d1" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/sdd: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="b2460066-00b1-5070-cfe3-7ac67aae96c1" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/sde: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ce05af8c-7da1-c0a7-0a07-c10b0b154735" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/sdf: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="440f9a88-1d42-020c-ba4d-d303afb76c7c" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/md127: LABEL="ZettaFiles" UUID="76c7546c-5ae6-4884-ac9f-3ecda0f473bc" TYPE="ext4"
      /dev/sda: UUID="96b0e7b7-83aa-203f-1545-031b43caaa85" UUID_SUB="ee7eab8f-dc90-e3be-146d-a4e09d104418" LABEL="zetta:0" TYPE="linux_raid_member"
      /dev/sdg1: UUID="e7e85422-36f4-425a-bf67-daeca4725765" TYPE="ext4" PARTUUID="d1d52e93-01"
      /dev/sdg5: UUID="32f1f36a-e295-4904-b6e1-484eba83ff1a" TYPE="swap" PARTUUID="d1d52e93-05"

      Source Code

      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sdc: 1,4 TiB, 1500301910016 bytes, 2930277168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/sdd: 1,4 TiB, 1500301910016 bytes, 2930277168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sde: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sdf: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sda: 1,4 TiB, 1500301910016 bytes, 2930277168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/md127: 5,5 TiB, 6001201774592 bytes, 11721097216 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 2097152 bytes

      Disk /dev/sdg: 7,5 GiB, 8019509248 bytes, 15663104 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0xd1d52e93

      Device     Boot    Start       End  Sectors  Size  Id  Type
      /dev/sdg1  *        2048   7567359  7565312  3,6G  83  Linux
      /dev/sdg2        7569406  15661055  8091650  3,9G   5  Extended
      /dev/sdg5        7569408  15661055  8091648  3,9G  82  Linux swap / Solaris
      (I did update fstab for the USB drive and the flash plugin, so why is it still showing the swap partition?)

      Thanks

      EDIT: I also tried to fix this in GRUB with the UUID, no luck.


      Source Code

      mdadm: no array found in config file or automatically
      Gigabyte GA-H87M-HD3, Pentium G3220, 4 GO DDR3, 6 x 2 To raid 6, 2.5' 100 Go for system
      Donator because OMV deserves it (20€)

      The post was edited 3 times, last by hubertes.

    • hubertes wrote:

      The RAID is OK, but it shows up as md127 instead of md0. Is this a problem?
      Nope.
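      The 127 usually just means the array was auto-assembled without a matching ARRAY entry in /etc/mdadm/mdadm.conf, so mdadm picks a free device number counting down from 127. If you want to double-check that the array itself is healthy, something like this works (a quick sketch, assuming md127 is your data array):

      Source Code

      # summary of the array: level, state and member disks
      mdadm --detail /dev/md127
      # or the kernel's short view
      cat /proc/mdstat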
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      I still have the old HDD where OMV 2.x was installed, though.
    • hubertes wrote:

      I still have the old HDD where OMV 2.x was installed, though.
      Running omv-mkconf mdadm should fix that file, followed by update-grub. Then check whether you still see the message at boot.
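      For reference, that boils down to running these two commands as root (both named above):

      Source Code

      # regenerate /etc/mdadm/mdadm.conf (this should add the missing ARRAY line)
      omv-mkconf mdadm
      # rebuild the grub configuration afterwards
      update-grub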
    • Thanks

      Solved

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/zetta:0 metadata=1.2 name=zetta:0 UUID=96b0e7b7:83aa203f:1545031b:43caaa85
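      For anyone checking their own setup: mdadm can print the same ARRAY line itself, and the UUID should match what blkid reports for the member disks (same hex digits, just grouped differently):

      Source Code

      # prints an ARRAY line for each assembled array
      mdadm --detail --scan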
    • Just out of curiosity, did you create your array using OMV?
    • hubertes wrote:

      With an old OMV 2.x, I'd say 2 or 3 years ago?
      I'm surprised it didn't create the mdadm.conf entry. Oh well. It is there now.
    • hubertes wrote:

      You mean when I reconnected my drives after clean-installing OMV 4.x?
      Ah. That is why there is no entry. I thought you upgraded from 2.x to 4.x. Mounting the filesystem will not create the entry.
    • hubertes wrote:

      Ah, OK. I followed the recommendation, installed a fresh new system, and replugged my hard drives.
      Normally this is OK. The mdadm.conf entry isn't required, but it does make things less noisy. Not sure how to fix this other than telling people to run omv-mkconf mdadm when installing fresh with an existing mdadm RAID array.
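      On a plain Debian box without omv-mkconf, the generic equivalent would be roughly this (a sketch, not OMV's method):

      Source Code

      # append ARRAY lines for the currently assembled arrays to mdadm.conf
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      # refresh the initramfs so early boot picks up the new config
      update-initramfs -u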