RAID 5 drive missing - Troubleshooting tips?


    • RAID 5 drive missing - Troubleshooting tips?

      I set up OMV yesterday with a RAID 5 made from 3 drives (sdb, sdc, and sdd). Today I started getting emails saying "A DegradedArray event had been detected on md device /dev/md0." It looks like sdb is offline or something. In the web GUI, under "Disks", all the drives are listed. But under "S.M.A.R.T." > "Devices", the status circle is greyed out for sdb. Under "RAID Management", sdb is no longer listed as one of the devices.

      Back under SMART > Extended information, I see:

      smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.14.0-0.bpo.3-amd64] (local build)
      Copyright (C) 2002-16, Bruce Allen, Christian Franke,

      Short INQUIRY response, skip product id
      A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

      Any suggestions for what my next steps should be? Do I just assume the drive is bad and replace it, or are there other troubleshooting things I can do?
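Before concluding the drive is dead, a few command-line checks can narrow it down. This is just a sketch of commands worth trying (device names assume /dev/sdb and /dev/md0 as in my output below):

```shell
# Look for link resets, timeouts, or I/O errors involving sdb
dmesg | grep -i -E 'sdb|ata'

# The SMART error above suggests retrying with -T permissive;
# a drive with a flaky connection may still return data this way
smartctl -a -T permissive /dev/sdb

# Show the array's own view of the failed/missing member
mdadm --detail /dev/md0
```

If dmesg shows repeated link resets rather than media errors, a cable or port problem is at least as likely as a bad drive.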

      Here is the output from the commands we're supposed to include when posting:

      root@HOSTNAME:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdd[2] sdc[1]
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 30/30 pages [120KB], 65536KB chunk

      unused devices: <none>
      root@HOSTNAME:~# blkid
      /dev/sda1: UUID="6ecaf974-e508-4b57-ac73-1c7c6d4db83a" TYPE="ext4" PARTUUID="9b1c85d3-01"
      /dev/sda5: UUID="ee468e24-5325-433d-a0e2-c4f80b8d2c99" TYPE="swap" PARTUUID="9b1c85d3-05"
      /dev/sdb: UUID="86de832d-1fd4-1c45-e2da-b489d50ca2b1" UUID_SUB="c533bbcb-888a-0e3b-7c65-91d287ffcc7f" LABEL="HOSTNAME:MainRAID" TYPE="linux_raid_member"
      /dev/sdd: UUID="86de832d-1fd4-1c45-e2da-b489d50ca2b1" UUID_SUB="243ebd72-8da8-65c2-1555-2efe889aa20f" LABEL="HOSTNAME:MainRAID" TYPE="linux_raid_member"
      /dev/md0: LABEL="MainFileSystem" UUID="93dbae66-16f3-4cb9-a5ea-9f427e85d562" TYPE="ext4"
      /dev/sdc: UUID="86de832d-1fd4-1c45-e2da-b489d50ca2b1" UUID_SUB="0f30d9c1-3ca8-1e61-43d6-ed1fad23ca74" LABEL="HOSTNAME:MainRAID" TYPE="linux_raid_member"
      root@HOSTNAME:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
      Disk identifier: 0x9b1c85d3
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/md0: 7.3 TiB, 8001304920064 bytes, 15627548672 sectors
      root@HOSTNAME:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      # Please refer to mdadm.conf(5) for information about this file.

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=HOSTNAME:MainRAID UUID=86de832d:1fd41c45:e2dab489:d50ca2b1

      # instruct the monitoring daemon where to send mail alerts
MAILFROM root
root@HOSTNAME:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=HOSTNAME:MainRAID UUID=86de832d:1fd41c45:e2dab489:d50ca2b1
    • Update: I replaced the SATA cable, and now I can see the drive in OMV and get SMART info, etc. It's still not part of the RAID, though. Under RAID Management I see a greyed-out "Grow" button and a "Recover" button. With my RAID selected, I click "Recover", but I can't actually do anything: "Name" and "Level" are filled in but greyed out, and under "Devices" it looks like I'm supposed to select a disk to add, but no devices are listed.
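      If the Recover dialog lists no candidate devices, one possibility is that the drive still carries its old RAID superblock, so it isn't offered as a spare. A possible command-line fallback (a sketch only; double-check the device name first, and only zero the superblock on a drive whose data you no longer need):

      ```shell
      # Confirm sdb is still missing from the active members
      cat /proc/mdstat

      # Option 1: try re-adding the existing member; its intact
      # superblock plus the write-intent bitmap may allow a
      # quick partial resync instead of a full rebuild
      mdadm /dev/md0 --re-add /dev/sdb

      # Option 2: if re-add is refused, clear the stale superblock
      # and add it as a fresh member (forces a full rebuild)
      # mdadm --zero-superblock /dev/sdb
      # mdadm /dev/md0 --add /dev/sdb

      # Watch the resync/rebuild progress
      watch cat /proc/mdstat
      ```

      On a 4 TB member a full rebuild can take many hours, so the bitmap-assisted re-add is worth trying first.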