Lost RAID 5 after installing a new hard disk drive


    • Lost RAID 5 after installing a new hard disk drive

      Hello everybody,

      I have a real problem with my RAID 5.
      How the problem occurred:
      At first I had a SMART error,
      shut down the system,
      unplugged the faulty drive,
      got a new hard disk,
      plugged in the new one,
      rebooted.

      In the GUI I have no RAID; the drives are there.

      ##########
      console outputs

      cat /etc/mdadm/mdadm.conf

      ################

      root@nas-omv:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md/FirstRaid5 metadata=1.2 name=NAS-OMV:FirstRaid5 UUID=1fcf9705:1c13cd44:c9bcf8d4:5c66fb11

      # instruct the monitoring daemon where to send mail alerts

      ################

      Second output on the console:
      sudo mdadm --misc --examine /dev/sd[abc]

      ################

      root@nas-omv:~# sudo mdadm --misc --examine /dev/sda
      /dev/sda:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x1
      Array UUID : 1fcf9705:1c13cd44:c9bcf8d4:5c66fb11
      Name : NAS-OMV:FirstRaid5
      Creation Time : Fri Nov 4 21:37:43 2016
      Raid Level : raid5
      Raid Devices : 3

      Avail Dev Size : 5860271024 (2794.39 GiB 3000.46 GB)
      Array Size : 5860270080 (5588.79 GiB 6000.92 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=944 sectors
      State : clean
      Device UUID : 751ecf5b:6fd057ae:4fbf95d7:939777d0

      Internal Bitmap : 8 sectors from superblock
      Update Time : Tue Feb 12 19:59:12 2019
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 405dfe1a - correct
      Events : 37509

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 0
      Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
      root@nas-omv:~# sudo mdadm --misc --examine /dev/sdb
      /dev/sdb:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x1
      Array UUID : 1fcf9705:1c13cd44:c9bcf8d4:5c66fb11
      Name : NAS-OMV:FirstRaid5
      Creation Time : Fri Nov 4 21:37:43 2016
      Raid Level : raid5
      Raid Devices : 3

      Avail Dev Size : 5860271024 (2794.39 GiB 3000.46 GB)
      Array Size : 5860270080 (5588.79 GiB 6000.92 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=944 sectors
      State : clean
      Device UUID : 7254c079:85f1e316:d64fd1da:6fbb95b9

      Internal Bitmap : 8 sectors from superblock
      Update Time : Tue Feb 12 19:59:12 2019
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : b335098d - correct
      Events : 37509

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 1
      Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
      root@nas-omv:~# sudo mdadm --misc --examine /dev/sdc
      mdadm: No md superblock detected on /dev/sdc.


      ################

      mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=3 missing /dev/sda /dev/sdb /dev/sdc

      ##############
      root@nas-omv:~# mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=3 missing /dev/sda /dev/sdb /dev/sdc
      mdadm: You have listed more devices (4) than are in the array(3)!

      ##############
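
      As far as I understand the error: with --raid-devices=3 the keyword "missing" already counts as one of the three slots, so only two real devices may follow it, and the order has to match the original slot numbers. A corrected form would look like the sketch below; I did not run it, since --create rewrites the md superblocks and should only be a last resort if --assemble fails.

      ##############
      # Sketch only, NOT run: --create rewrites the superblocks.
      # sda was slot 0 and sdb slot 1 (see --examine above); the failed disk was slot 2.
      mdadm --create /dev/md0 --assume-clean --level=5 --verbose \
            --raid-devices=3 /dev/sda /dev/sdb missing
      ##############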


      blkid

      ##############
      root@nas-omv:~# blkid
      /dev/sda: UUID="1fcf9705-1c13-cd44-c9bc-f8d45c66fb11" UUID_SUB="751ecf5b-6fd0-57ae-4fbf-95d7939777d0" LABEL="NAS-OMV:FirstRaid5" TYPE="linux_raid_member"
      /dev/sdb: UUID="1fcf9705-1c13-cd44-c9bc-f8d45c66fb11" UUID_SUB="7254c079-85f1-e316-d64f-d1da6fbb95b9" LABEL="NAS-OMV:FirstRaid5" TYPE="linux_raid_member"
      /dev/sdd1: UUID="b2f11b56-6610-4618-b4e8-037387c66fc4" TYPE="ext4" PARTUUID="afb2ec5b-01"
      /dev/sdd5: UUID="e13aacbb-179e-4577-a52e-16776674f174" TYPE="swap" PARTUUID="afb2ec5b-05"


      ##############

      fdisk -l | grep "Disk "

      ##############

      root@nas-omv:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 14,9 GiB, 15977152512 bytes, 31205376 sectors
      Disk identifier: 0xafb2ec5b
      root@nas-omv:~#


      ##############

      mdadm --detail --scan --verbose

      ##############

      root@nas-omv:~# mdadm --detail --scan --verbose
      root@nas-omv:~#

      ##############


      So I need some advice on how to fix this issue.

      Thanks!!

      Christian
    • Hello, here are my outputs.
      I cannot install the original hard disk anymore; I had to send it back to WD.



      #################

      System:

      Linux nas-omv 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64

      The programs included with the Debian GNU/Linux system are free software;
      the exact distribution terms for each program are described in the
      individual files in /usr/share/doc/*/copyright.

      Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
      permitted by applicable law.
      Last login: Fri Feb 22 19:42:53 2019 from 20.20.2.6

      1. #################

      root@nas-omv:~# cat /proc/mdstat

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdb[1](S) sda[3](S)
            5860271024 blocks super 1.2
      unused devices: <none>







      2. #################

      root@nas-omv:~# blkid

      Source Code

      /dev/sdb: UUID="1fcf9705-1c13-cd44-c9bc-f8d45c66fb11" UUID_SUB="7254c079-85f1-e316-d64f-d1da6fbb95b9" LABEL="NAS-OMV:FirstRaid5" TYPE="linux_raid_member"
      /dev/sda: UUID="1fcf9705-1c13-cd44-c9bc-f8d45c66fb11" UUID_SUB="751ecf5b-6fd0-57ae-4fbf-95d7939777d0" LABEL="NAS-OMV:FirstRaid5" TYPE="linux_raid_member"
      /dev/sdd1: UUID="b2f11b56-6610-4618-b4e8-037387c66fc4" TYPE="ext4" PARTUUID="afb2ec5b-01"
      /dev/sdd5: UUID="e13aacbb-179e-4577-a52e-16776674f174" TYPE="swap" PARTUUID="afb2ec5b-05"

      3. #################

      root@nas-omv:~# fdisk -l | grep "Disk "


      Source Code

      Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 14,9 GiB, 15977152512 bytes, 31205376 sectors
      Disk identifier: 0xafb2ec5b
      4. #################

      root@nas-omv:~# cat /etc/mdadm/mdadm.conf

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/FirstRaid5 metadata=1.2 name=NAS-OMV:FirstRaid5 UUID=1fcf9705:1c13cd44:c9bcf8d4:5c66fb11
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR Email@mydomain.com
      MAILFROM root

      5. #################

      root@nas-omv:~# mdadm --detail --scan --verbose

      Source Code

      INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=NAS-OMV:FirstRaid5 UUID=1fcf9705:1c13cd44:c9bcf8d4:5c66fb11
         devices=/dev/sda,/dev/sdb
      root@nas-omv:~#

      6. Used Drives: #################

      3x WD Red 3TB
      1x USB System Disk

      Disk /dev/sdc: ==> should be RAID, it's the new one
      Disk /dev/sdb: RAID
      Disk /dev/sda: RAID
      Disk /dev/sdd: system disk

      7. What happened #################

      New hard drive installed due to a SMART failure on the old one.


      Thanks


    • Your mdstat shows the RAID as inactive with the two drives /dev/sda and /dev/sdb. Try the following to bring the RAID back up with those two drives: mdadm --assemble --verbose --force /dev/md127 /dev/sd[ab]

      That should bring the RAID back up as clean/degraded. Then, in Storage -> Disks, select the new drive and "wipe" it. Then, back in the GUI under RAID Management, click on your current RAID and select Recover from the menu. In the dialogue box that appears your new drive should be displayed; select it, hit OK, and you should see the RAID recovering.
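
      If you prefer the command line, the GUI wipe/recover step roughly corresponds to something like the sketch below (assuming the new, empty disk really is /dev/sdc; check with blkid first, because wiping is destructive):

      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sda /dev/sdb   # start the array degraded with 2 of 3 drives
      wipefs --all /dev/sdc                                             # clear any old signatures on the NEW disk only
      mdadm --add /dev/md127 /dev/sdc                                   # add it as the replacement member; the rebuild starts
      cat /proc/mdstat                                                  # check the rebuild progress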
      Raid is not a backup! Would you go skydiving without a parachute?
    • Hello,

      Thanks for your help.
      So I tried it; the console output is:

      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is busy - skipping
      mdadm: /dev/sdb is busy - skipping

      In the GUI I do not see the RAID.
      Does this just take time? Do I need a restart?

      Edit:

      Now it's looking good.
      I had to run this first:
      mdadm --stop /dev/md127

      Source Code

      root@nas-omv:~# mdadm --stop /dev/md127
      mdadm: stopped /dev/md127
      root@nas-omv:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[ab]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
      mdadm: added /dev/sdb to /dev/md127 as 1
      mdadm: no uptodate device for slot 2 of /dev/md127
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 2 drives (out of 3).
      root@nas-omv:~#
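
      The array is running degraded now, so the next step will be to wipe the new disk and recover via the GUI as suggested. To keep an eye on it from the console I will check it roughly like this:

      Source Code

      mdadm --detail /dev/md127   # should report clean, degraded with 2 of 3 devices
      cat /proc/mdstat            # shows the rebuild progress once the new disk has been added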



      Thanks
