Getting a "SparesMissing event" on raid 5 after replacing failed drive

    • OMV 3.x


      Running OMV 3.0.99 (Erasmus). I had a disk fail on my raid 5 array, and got this email message:

      This message was generated by the smartd daemon running on:

      host name: nas

      DNS domain: OMV

      The following warning/error was logged by the smartd daemon:

      Device: /dev/disk/by-id/ata-HGST_HDN726040ALE614_K7H5PPDL [SAT], ATA error count increased from 0 to 8

      Device info:

      HGST HDN726040ALE614, S/N:K7H5PPDL, WWN:5-000cca-269d0aec8, FW:APGNW7JH, 4.00 TB

      For details see host's SYSLOG.

      You can also use the smartctl utility for further investigation.
      Another message will be sent in 24 hours if the problem persists

      Checked it out: /dev/sdc had been automatically removed due to SMART read errors, leaving the array degraded. I purchased a new drive, replaced /dev/sdc, and rebuilt the array. Everything is working fine with no data loss.
      However, I am now getting emails about a SparesMissing event:



      This is an automatically generated mail message from mdadm running on nas

      A SparesMissing event had been detected on md device /dev/md127.

      Faithfully yours, etc.

      P.S. The /proc/mdstat file currently contains the following:

      Personalities : [raid6] [raid5] [raid4]
      md127 : active raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[5]
      15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

      unused devices: <none>

      The rebuilt array shows clean and is no longer degraded, but I still get the SparesMissing messages. Any ideas?
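      For what it's worth, a quick check of the md127 line pasted from /proc/mdstat above (mdstat marks hot spares with an "(S)" suffix after the device slot, e.g. "sdg[6](S)"):

```shell
# Sanity check on the md127 line from /proc/mdstat above: no "(S)"
# suffix after any device name means no spare is attached to the array.
mdstat='md127 : active raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[5]'
case "$mdstat" in
  *'(S)'*) echo 'spare present' ;;
  *)       echo 'no spare attached' ;;
esac
```

      On the live box the same check can of course be run against /proc/mdstat directly.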


      Raid information:

      Source Code

      root@nas:~# blkid
      /dev/sda1: UUID="194f80e2-eb10-4eff-a8b3-3bf634107a24" TYPE="ext4" PARTUUID="4266dfff-01"
      /dev/sda5: UUID="fb11c777-c87a-4d07-a440-9277d1f08864" TYPE="swap" PARTUUID="4266dfff-05"
      /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
      /dev/md127: LABEL="share" UUID="a0a9808b-f7e5-48fe-9d41-c8c0ff053887" TYPE="ext4"

      Source Code

      root@nas:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 59.6 GiB, 64023257088 bytes, 125045424 sectors
      Disk identifier: 0x4266dfff
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/md127: 14.6 TiB, 16002609840128 bytes, 31255097344 sectors

      Source Code

      root@nas:~# mdadm --detail --scan --verbose
      ARRAY /dev/md127 level=raid5 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
         devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf

      Source Code: cat /etc/mdadm/mdadm.conf

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 spares=1 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
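      I did notice the ARRAY line in mdadm.conf still says spares=1, while mdadm --detail --scan above reports no spares. If that stale token is what the monitor is complaining about, would something like this be the fix? Sketched here on a copy of the ARRAY line rather than the real file; the commented commands assume editing /etc/mdadm/mdadm.conf as root and refreshing the initramfs afterwards:

```shell
# Sketch: strip the stale "spares=1" token, demonstrated on a copy
# of the ARRAY line from mdadm.conf (not on the real file).
line='ARRAY /dev/md127 metadata=1.2 spares=1 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf'
printf '%s\n' "$line" | sed 's/ spares=1//'

# On the real system, presumably (as root, after a backup):
#   cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
#   sed -i 's/ spares=1//' /etc/mdadm/mdadm.conf
#   update-initramfs -u    # keep the boot-time copy of the config in sync
```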

      Source Code

      Disks:
      /dev/sdb HGST HDN724040AL ATA 4TB
      /dev/sdc WDC WD40EFRX-68N ATA 4TB (New)
      /dev/sdd HGST HDN724040AL ATA 4TB
      /dev/sde HGST HDN724040AL ATA 4TB
      /dev/sdf HGST HDN724040AL ATA 4TB
