Degraded RAID 5 after reboot

    • OMV 2.x

      Hi everybody,

      I have a problem with my RAID 5 array after a reboot.
      One drive was listed as removed.
      I tried to re-add the missing drive, roughly as sketched below, and that left the array in the state shown further down.
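      The re-add was done approximately like this (a sketch from memory; I may have used the OMV web UI instead, and the device name /dev/sdc is inferred from the mdadm output below):

      Source Code

      # re-add the member that was listed as removed (device name assumed)
      mdadm --manage /dev/md0 --add /dev/sdc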

      cat /proc/mdstat shows:

      Source Code

      root@nas-kriwi:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md0 : inactive sdb[0] sdd[2] sdc[3]
            11720662536 blocks super 1.2

      unused devices: <none>
      mdadm -D /dev/md0 shows:

      Source Code

      root@nas-kriwi:~# mdadm -D /dev/md0
      /dev/md0:
              Version : 1.2
        Creation Time : Fri Mar 20 12:14:12 2015
           Raid Level : raid5
        Used Dev Size : -1
         Raid Devices : 3
        Total Devices : 3
          Persistence : Superblock is persistent

          Update Time : Sun Oct 9 18:42:08 2016
                State : active, degraded, Not Started
       Active Devices : 2
      Working Devices : 3
       Failed Devices : 0
        Spare Devices : 1

               Layout : left-symmetric
           Chunk Size : 512K

                 Name : nas-kriwi:Daten  (local to host nas-kriwi)
                 UUID : 37e3f043:a8132bf9:55122d62:11073655
               Events : 2174

          Number   Major   Minor   RaidDevice   State
             0       8      16        0         active sync        /dev/sdb
             3       8      32        1         spare rebuilding   /dev/sdc
             2       8      48        2         active sync        /dev/sdd
      Here is some additional information:
      blkid:

      Source Code

      root@nas-kriwi:~# blkid
      /dev/sda1: UUID="526f5cda-3ac1-4914-ac2a-5c623afe7cff" TYPE="ext4"
      /dev/sda5: UUID="ff79c131-a4b2-44b1-a758-a12f4c96cf52" TYPE="swap"
      /dev/sdb: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="7d1ca78e-7d4a-3ad6-dbf6-708f888de353" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
      /dev/sdd: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="4e35f91f-5aee-5dcf-061c-f59b1f08b2dd" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
      /dev/sdc: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="0a575562-0a7b-250a-716c-b9eabdc98f53" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
      fdisk -l:

      Source Code

      root@nas-kriwi:~# fdisk -l

      Disk /dev/sda: 60.0 GB, 60022480896 bytes
      255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x6315bd60

         Device Boot      Start        End     Blocks  Id  System
      /dev/sda1   *        2048  112383999   56190976  83  Linux
      /dev/sda2       112386046  117229567    2421761   5  Extended
      /dev/sda5       112386048  117229567    2421760  82  Linux swap / Solaris

      Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
      255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
      255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
      255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table
      cat /etc/mdadm/mdadm.conf:

      Source Code

      root@nas-kriwi:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=nas-kriwi:Daten UUID=37e3f043:a8132bf9:55122d62:11073655
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR xxx@xxx
      MAILFROM root
      ARRAY /dev/md/Daten metadata=1.2 UUID=37e3f043:a8132bf9:55122d62:11073655 name=nas-kriwi:Daten
      mdadm --detail --scan --verbose:

      Source Code

      root@nas-kriwi:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 spares=1 name=nas-kriwi:Daten UUID=37e3f043:a8132bf9:55122d62:11073655
         devices=/dev/sdb,/dev/sdc,/dev/sdd
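
      If it helps, I can also post the per-device superblock details, for example with the following (not yet run, just what I would use):

      Source Code

      # dump the md superblock of each array member
      mdadm --examine /dev/sdb /dev/sdc /dev/sdd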

      What does this state mean, and what is my NAS actually doing? The RAID is no longer listed in OMV, I can't see any rebuild process running, and the server is idle...
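      So far I have not tried stopping and force-reassembling the array. What I am considering (listed only for reference, not yet executed) would be something like:

      Source Code

      # stop the inactive array, then try to assemble it again from its members
      mdadm --stop /dev/md0
      mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd

      I don't want to run anything like this without advice first.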

      Thank you very much for your help!!

      Best regards,
      Marco
