Raid degraded, cannot add disk

    • So my 4-disk RAID 5 is degraded in a clean state, and I can see the 4th disk in OMV, but I just cannot add it to the array to recover it.
      I saw I needed to post this info, so here goes.
      1. cat /proc/mdstat
      gives:
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active (auto-read-only) raid5 sda[1] sdd[3] sdb[0]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 5/22 pages [20KB], 65536KB chunk

      unused devices: <none>

      2. blkid
      gives:
      /dev/sda: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="2b20e8a7-125f-091b-5dd7-16b528eebeb6" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/md127: UUID="1db1716f-6925-4055-b177-90ec77c59e66" TYPE="ext4"
      /dev/sdb: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="aff6fa5a-2158-8b2b-7dfc-0e4ffbfbba79" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sdc: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="60f1906c-95c7-70f0-30d0-e561ad3c27c8" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sdd: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="b63fb25d-b5a1-f3b2-9938-8644fcfb22cd" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sde1: UUID="b7f84e5e-0dd6-4916-9523-7efa28fda8db" TYPE="ext4" PARTUUID="9b344873-01"
      /dev/sde5: UUID="e4455646-7655-432f-afc1-f660ec01d150" TYPE="swap" PARTUUID="9b344873-05"



      3. fdisk -l | grep "Disk "
      gives:
      fdisk -l | grep sdc
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors



      4. cat /etc/mdadm/mdadm.conf
      gives:
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays



      5. mdadm --detail --scan --verbose
      ARRAY /dev/md/NAS-*NAME*:NAS level=raid5 num-devices=4 metadata=1.2 name=NAS-*NAME*:NAS UUID=a6af1c8f:c3642be9:925785ac:37b4f7d0
      devices=/dev/sda,/dev/sdb,/dev/sdd



      Disk sdc is the issue here; I even did a complete new install and still have the same problem. So I'm hoping for your thoughts :)
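
      A useful extra check at this point (not part of the template above) is to compare the per-disk metadata with mdadm --examine, to confirm whether sdc still carries a RAID superblock and how far its event counter lags behind the other members; a quick sketch, assuming the drives are still named sda-sdd:

      # show which disks have md metadata, their event counters and array state
      mdadm --examine /dev/sd[abcd] | grep -E '/dev/sd|Events|Array State'

      A member whose event count is far behind the rest will not be pulled back in automatically and has to be re-added (or cleaned and added) by hand.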
    • mdadm --stop /dev/md127
      gave
      mdadm: stopped /dev/md127



      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcd]
      gave
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
      mdadm: added /dev/sda to /dev/md127 as 1
      mdadm: added /dev/sdc to /dev/md127 as 2 (possibly out of date)
      mdadm: added /dev/sdd to /dev/md127 as 3
      mdadm: added /dev/sdb to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 3 drives (out of 4).





      cat /proc/mdstat
      gave
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active (auto-read-only) raid5 sdb[0] sdd[3] sda[1]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 5/22 pages [20KB], 65536KB chunk

      unused devices: <none>
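
      (The "active (auto-read-only)" state above only means the array has not been written to since it was assembled; it switches to normal read-write on the first write, or it can be flipped manually, for example:)

      # optional: take md127 out of auto-read-only by hand
      mdadm --readwrite /dev/md127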




      cat /etc/mdadm/mdadm.conf
      gave
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      Thanks. There is obviously an issue with /dev/sdc, which could mean the drive is failing. When you post the output of a command, can you use </> on the menu bar and copy and paste the full output? It makes it easier to read, thanks.

      mdadm --stop /dev/md127

      mdadm --zero-superblock /dev/sdc

      mdadm --add /dev/md127 /dev/sdc

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcd]

      cat /proc/mdstat
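
      If the re-add goes through, the rebuild onto sdc can be watched while it runs; a small sketch, assuming the array keeps the md127 name:

      # follow the resync/recovery progress
      watch -n 5 cat /proc/mdstat
      mdadm --detail /dev/md127 | grep -E 'State|Rebuild'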
      Raid is not a backup! Would you go skydiving without a parachute?
    • mdadm --stop /dev/md127
      gave

      Source Code

      mdadm: stopped /dev/md127

      Source Code

      cat /proc/mdstat
      gave
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      unused devices: <none>




      Source Code

      mdadm --zero-superblock /dev/sdc
      gave
      mdadm: Unrecognised md component device - /dev/sdc

      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcd]
      gave
      mdadm: looking for devices for /dev/md127
      mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
      mdadm: no RAID superblock on /dev/sdc
      mdadm: /dev/sdc has no superblock - assembly aborted

      Source Code

      mdadm --add /dev/md127 /dev/sdc
      gave
      mdadm: error opening /dev/md127: No such file or directory
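
      The last error is just ordering: --add needs a running array, and md127 had already been stopped, so there was nothing to add the disk to. Once sdc is clean, the usual sequence would look roughly like this (a sketch, using the device names from above):

      # start the array degraded from the three good members
      mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdd
      # then add the cleaned disk; the rebuild starts automatically
      mdadm --add /dev/md127 /dev/sdc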
    • Yes, I replaced a disk in the array; I added a 4TB instead of a 3TB so I can slowly make the transition to 4TB drives.
      blkid
      gave

      Source Code

      /dev/sda: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="2b20e8a7-125f-091b-5dd7-16b528eebeb6" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sdb: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="aff6fa5a-2158-8b2b-7dfc-0e4ffbfbba79" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sdc: TYPE="zfs_member"
      /dev/sdd: UUID="a6af1c8f-c364-2be9-9257-85ac37b4f7d0" UUID_SUB="b63fb25d-b5a1-f3b2-9938-8644fcfb22cd" LABEL="NAS-Jory:NAS" TYPE="linux_raid_member"
      /dev/sde1: UUID="b7f84e5e-0dd6-4916-9523-7efa28fda8db" TYPE="ext4" PARTUUID="9b344873-01"
      /dev/sde5: UUID="e4455646-7655-432f-afc1-f660ec01d150" TYPE="swap" PARTUUID="9b344873-05"
      Funny that it shows as a ZFS member, because the disk that was replaced is not that disk. I did have a few reinstalls after changing the disk, though. (The initial setup was 4x3TB, then I went to a disk array (20x2TB), and when I got the energy bill I went back to 3x3TB and 1x4TB.) I actually made a new array on a different microserver (I borrowed one), copied all the data from my external enclosure to the internal disks, and then destroyed the external enclosure.
      Well, wiping it isn't that simple, sorry. What you need to do is follow this thread from post 7 (ignore post 14); you need to remove that ZFS signature.

      So you'll need to use SystemRescueCd: in OMV-Extras, on the Kernel tab, scroll down to the bottom and follow the instructions. This is the only way to remove that ZFS signature; then follow that thread.

      When done, come back, I'm usually around.....AND BUY A PARACHUTE!! :)
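
      Whichever way the linked thread does it, one common way to clear a leftover ZFS label from a whole-disk member is wipefs, run from the rescue environment; a sketch, assuming /dev/sdc really is the disk to be wiped and holds nothing you want to keep:

      # list the signatures wipefs can see, without changing anything
      wipefs /dev/sdc
      # erase all detected signatures (destructive - double-check the device name first)
      wipefs -a /dev/sdc

      ZFS keeps labels at both the start and the end of the device, so it is worth re-running blkid afterwards to confirm the zfs_member signature is really gone.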

      Raid is not a backup! Would you go skydiving without a parachute?
    • Theelepel88 wrote:

      Can I just remove the correct disks while doing the procedure? That would make sure I won't delete any data, right?
      Yes, you could, because the other odd thing about your output is that the mdadm.conf file has no reference to an array. If you pull the drives, don't reboot; shut down, then plug the drives back in, then start.
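
      On the empty mdadm.conf: once the array is back to a clean four-disk state, its definition can be recorded so it assembles the same way on every boot; a sketch, assuming a Debian-based OMV install like the one above:

      # append the current array definition and rebuild the initramfs
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u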
      Raid is not a backup! Would you go skydiving without a parachute?
    • Just copying half a terabyte of photos now; after that I will turn it off, remove the correct disks, boot into rescue mode, see if I can completely destroy disk C (or fix it...), and then shut down, put the disks back in, and boot.

      I will let you know. Copying the photos will take some time, so I have to wait for that. I don't want to lose those; I can live without the movies, series, and music I collected.