Raid missing after upgrade to OMV 4.1.22

    • OMV 4.x
    • Resolved

      Recently did a clean install of OMV 4.1.22 (from 3.0.99). Used a new system SSD and disconnected my RAID 5 array. Shut down after the initial install and reconnected the RAID. Rebooted and checked the OMV web interface, and the RAID was not listed. All the disks were listed under Disks, but nothing showed in RAID Management.

      I am looking for help getting the RAID back up. Information below:

      cat /proc/mdstat

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
      19534437560 blocks super 1.2
      unused devices: <none>
      blkid

      Source Code

      /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
      /dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"

      fdisk -l | grep "Disk "

      Source Code

      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
      Disk identifier: 0xd73019f2


      cat /etc/mdadm/mdadm.conf



      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays


      mdadm --detail --scan --verbose



      Source Code

      INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
      devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
      19534437560 blocks super 1.2
      unused devices: <none>


      cat /etc/fstab



      Source Code

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=8c781b3b-33bd-42bb-ba23-0cdc03a68fdc / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=98699062-8fa7-403b-b0f0-6a4226840926 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      # <<< [openmediavault]

      Drive Type: all 5 drives are 4 TB SATA disks (4 HGST, 1 WD Red)


      Stopped working after clean install of OMV 4.1.22

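      An inactive md127 with every member flagged (S) usually means the kernel found the member superblocks but never started the array. A minimal first look, assuming the array device is /dev/md127 as in the output above:

      Source Code

      # Confirm the array exists but is inactive; (S) marks members parked as spares
      cat /proc/mdstat

      # Ask mdadm what it knows about the assembled-but-inactive device
      mdadm --detail /dev/md127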

    • geaves wrote:

      You'll need to assemble it: mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]

      Thank you geaves for the reply.

      Just tried to assemble it:
      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      mdadm: /dev/sde is busy - skipping
      mdadm: /dev/sdf is busy - skipping

      Not sure what that means.
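      The "busy - skipping" messages are usually mdadm saying that the inactive md127 still holds those disks, so they cannot be claimed for a fresh assembly. The usual remedy, which the next post follows, is roughly this sketch:

      Source Code

      # Release the member disks by stopping the inactive array
      mdadm --stop /dev/md127

      # Then retry the forced assembly on the freed devices
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]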
    • Stopped the array:

      mdadm --stop /dev/md127
      mdadm: stopped /dev/md127

      Then tried to assemble:

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 4.
      mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdf is identified as a member of /dev/md127, slot 3.
      mdadm: forcing event count in /dev/sdf(3) from 14035 upto 14059
      mdadm: forcing event count in /dev/sdc(2) from 13901 upto 14059
      mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdc
      mdadm: clearing FAULTY flag for device 4 in /dev/md127 for /dev/sdf
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: added /dev/sdd to /dev/md127 as 1
      mdadm: added /dev/sdc to /dev/md127 as 2
      mdadm: added /dev/sdf to /dev/md127 as 3
      mdadm: added /dev/sdb to /dev/md127 as 4
      mdadm: added /dev/sde to /dev/md127 as 0
      mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.

      The array has 5 drives, which it should, but it's saying that's not enough to start.
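      When mdadm adds all the members but still refuses to start the array, the kernel log normally says which devices it considers missing or stale. A quick way to pull those messages (a sketch; journalctl assumes a systemd-based install such as OMV 4):

      Source Code

      # Kernel messages mentioning the array or the md/raid driver
      dmesg | grep -iE 'md127|raid'

      # The same from the journal, in case dmesg has already rotated
      journalctl -k | grep -iE 'md127|raid'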
    • 1dx wrote:

      The array has 5 drives, which it should, but it's saying that's not enough to start.
      That doesn't look good, have you got a backup just in case?

      You'll need to run mdadm --examine /dev/sdb on each drive, so replace b with c and so on; run each command in turn and post each output using </> on the toolbar, and I'll have a look in the morning. Have you also checked each drive's SMART status?

      You have 5 drives in that Raid5; Raid5 can only tolerate 1 drive failure, and the error 'not enough to start' would suggest that 2 drives, c and f, are the problem.
      Raid is not a backup! Would you go skydiving without a parachute?
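      A small loop saves running the --examine and SMART checks geaves asks for one drive at a time. A sketch, assuming the five members are still /dev/sdb through /dev/sdf and that smartmontools is installed:

      Source Code

      # Dump the md superblock of every member; compare the Events and
      # Array State lines to spot drives that are out of step with the rest
      for d in /dev/sd[bcdef]; do
          echo "=== $d ==="
          mdadm --examine "$d"
      done

      # Quick SMART health verdict per drive (smartmontools)
      for d in /dev/sd[bcdef]; do
          echo "=== $d ==="
          smartctl -H "$d"
      done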
    • geaves wrote:

      Check your fstab and mdadm entries

      Source Code

      cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=8c781b3b-33bd-42bb-ba23-0cdc03a68fdc / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=98699062-8fa7-403b-b0f0-6a4226840926 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0


      Source Code

      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
    • 1dx wrote:

      SMART shows that sdb is red due to Reallocated Sector Count. I'll have to replace that drive today.
      You can do that from the GUI: select the Raid, then click Remove and select the drive, and OK that. Add the new drive, then under Disks select it and wipe it; a short wipe is sufficient. You may then have to format the drive. Back in Raid Management, select the raid and click Recover; a dialogue box will open showing the new drive, select it and click OK, and the new drive will sync with the array.
      Raid is not a backup! Would you go skydiving without a parachute?
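      For reference, a command-line version of the same replacement is roughly the sketch below. It assumes the failing member is /dev/sdb and that the replacement disk comes up under the same device name; device letters can shift between boots, so check them against serial numbers before running anything.

      Source Code

      # Mark the failing member faulty and pull it out of the array
      mdadm --manage /dev/md127 --fail /dev/sdb --remove /dev/sdb

      # (power down, swap the physical disk, boot, verify the new disk's name)

      # Clear any old signatures on the replacement, then add it to the array
      wipefs -a /dev/sdb
      mdadm --manage /dev/md127 --add /dev/sdb

      # Watch the rebuild progress
      watch cat /proc/mdstat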
    • geaves wrote:

      There's no reference to the raid in either, so run omv-mkconf fstab, then the same again but with mdadm.

      Okay, ran omv-mkconf fstab, and below are fstab and mdadm.conf:

      Source Code

      cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=8c781b3b-33bd-42bb-ba23-0cdc03a68fdc / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=98699062-8fa7-403b-b0f0-6a4226840926 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      # <<< [openmediavault]

      Source Code

      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays


    • 1dx wrote:

      Okay, ran omv-mkconf fstab, and below are fstab and mdadm.conf:
      Then there is still a problem! Those two commands should write the relevant information to those files, thereby recreating them. You say it's showing in Raid Management, but is it mounted under File Systems? I've stopped using raid so I'm doing this from memory :)
      Raid is not a backup! Would you go skydiving without a parachute?
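      For anyone following along, the two commands geaves refers to, plus a quick check that the array definition actually landed in mdadm.conf, could look like this sketch (OMV 4.x, where omv-mkconf is still the config tool):

      Source Code

      # Rebuild the OMV-managed parts of fstab and mdadm.conf
      omv-mkconf fstab
      omv-mkconf mdadm

      # The array should now appear as an ARRAY line
      grep '^ARRAY' /etc/mdadm/mdadm.conf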
      Swapped out the faulty drive, and md127 just finished recovering. Then mounted the drive in File Systems (it wasn't mounted before) and ran omv-mkconf for fstab and mdadm. How do these look?

      Source Code

      cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=8c781b3b-33bd-42bb-ba23-0cdc03a68fdc / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=98699062-8fa7-403b-b0f0-6a4226840926 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/share /srv/dev-disk-by-label-share ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]

      Source Code

      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
      Now it looks like I will have to set up file systems.
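      A final check that everything comes back after a reboot could look roughly like this, assuming the filesystem is labelled "share" as in the fstab above:

      Source Code

      # Array should be active with all five members in sync
      cat /proc/mdstat
      mdadm --detail /dev/md127

      # Filesystem should be mounted where OMV expects it
      findmnt /srv/dev-disk-by-label-share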