RAID 10 Degraded after disconnected drive while in use

    • OMV 2.x



      I have a hard drive dock that disconnects all of the drives in it whenever a new drive is inserted. It is a 2-bay dock, and it held one of my RAID drives. I added a new Seagate IronWolf 4TB drive to this dock and began copying files from my RAID array to the new drive without noticing that the array had become degraded. I had moved about 20GB of files from the array to the 4TB drive before I noticed the array was degraded. I immediately stopped the transfer and rebooted my server, but that did not fix anything. How can I fix this degraded array? Thanks in advance for taking the time to help me out.

      One of my hard drives says it is removed, but it is not actually removed and is still detected in the "Physical Disks" tab of the webGUI. This leads me to believe that I have to re-add it to the array; the problem is I don't actually know how to do that. Rather than playing around with 2TB of my files, I have come here to ask for help. Thanks.

      My RAID 10 array is as follows:

      4 Seagate Barracuda drives of 1TB each in RAID 10.
      1 of these drives is connected via SATA; the rest are connected via USB in 2-bay 3.5-inch external hard drive docks.

      The webGUI outputs the following:

      Source Code

      Version : 1.2
      Creation Time : Tue May 16 20:37:37 2017
      Raid Level : raid10
      Array Size : 1953262592 (1862.78 GiB 2000.14 GB)
      Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
      Raid Devices : 4
      Total Devices : 3
      Persistence : Superblock is persistent
      Update Time : Tue May 23 19:48:14 2017
      State : clean, degraded
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0
      Layout : near=2
      Chunk Size : 512K
      Name : screamserver:ScreamRaid (local to host screamserver)
      UUID : ddaf6947:8c3f9552:e1ec6bbc:4be83769
      Events : 493
      Number   Major   Minor   RaidDevice   State
         0        8      32            0   active sync   /dev/sdc
         1        8      48            1   active sync   /dev/sdd
         2        0       0            2   removed
         3        8      64            3   active sync   /dev/sde

      Output of the commands from the RAID help sticky thread:

      cat /proc/mdstat

      Source Code

      Personalities : [raid10]
      md0 : active raid10 sdc[0] sde[3] sdd[1]
            1953262592 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
      unused devices: <none>


      mdadm --detail --scan --verbose


      Source Code

      ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=screamserver:ScreamRaid UUID=ddaf6947:8c3f9552:e1ec6bbc:4be83769
         devices=/dev/sdc,/dev/sdd,/dev/sde

      cat /etc/mdadm/mdadm.conf

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=screamserver:ScreamRaid UUID=ddaf6947:8c3f9552:e1ec6bbc:4be83769

      fdisk -l | grep "Disk "

      Source Code

      Disk /dev/sda doesn't contain a valid partition table
      WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
      Disk /dev/sdc doesn't contain a valid partition table
      Disk /dev/sdd doesn't contain a valid partition table
      Disk /dev/sde doesn't contain a valid partition table
      Disk /dev/md0 doesn't contain a valid partition table
      Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdf: 120.0 GB, 120034123776 bytes
      Disk identifier: 0x00099f0a
      Disk /dev/md0: 2000.1 GB, 2000140894208 bytes
      Disk identifier: 0x00000000

      blkid

      Source Code

      /dev/sdc: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="9a520635-d41c-8a42-4f2a-85129f667b55" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
      /dev/sdb1: UUID="4086908c-fcf7-467d-923a-867222729129" TYPE="ext4" LABEL="Ironwolf"
      /dev/sda: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="f1901490-fb9c-2847-5d14-b10584fda9d9" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
      /dev/sde: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="81e429b8-f7bd-7e4c-fbaa-ae82daa59b09" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
      /dev/sdd: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="72d5b57e-9acb-115e-f890-92c69155af79" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
      /dev/sdf1: UUID="6ce9b17c-b25e-4ead-bb1a-96ed4982cf5f" TYPE="ext4"
      /dev/sdf5: UUID="f898a3d3-b314-4b29-a0d7-9e2fbea99ccc" TYPE="swap"
      /dev/md0: LABEL="ScreamDrive" UUID="f33e8fe4-1951-4061-90e6-f3241fe7401d" TYPE="ext4"


    • I'm no expert at this, but your removed drive is /dev/sda. To re-add it you'll first have to wipe its superblock info with mdadm --zero-superblock /dev/sda, then run mdadm /dev/md0 --add /dev/sda, IF sda is the missing drive, which it appears to be.
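
      Roughly like this, assuming /dev/sda really is the dropped member (blkid shows it carrying the array's UUID while /proc/mdstat only lists sdc, sdd and sde) and that the array is /dev/md0. Double-check the device names on your own box before running anything:

      Source Code

      # confirm sda still carries the array's RAID metadata before touching it
      mdadm --examine /dev/sda
      # wipe the stale superblock on sda only (leave the other members and md0 alone)
      mdadm --zero-superblock /dev/sda
      # add the disk back; md will start rebuilding the missing mirror
      mdadm /dev/md0 --add /dev/sda
      # check recovery progress
      cat /proc/mdstat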

      I would suggest you look at how you have this set up; your first sentence is the giveaway, as this will happen again should you ever need to replace a drive.

      As I say, I'm no expert at this... but I understand you have to remove the superblock info so that it can be re-created.
    • To be safe, in case you don't have a backup, I would suggest doing a full copy of everything from your md disk while it is still accessible.
      If I read your post correctly you only have about 2TB of data, so do a full backup first thing.
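
      For example, something like this. The mount points below are only guesses (blkid shows the array labelled "ScreamDrive" and the new 4TB drive labelled "Ironwolf"), so check where they are really mounted first:

      Source Code

      # find where the array (md0) and the new 4TB drive (sdb1) are mounted
      df -h | grep -E 'md0|sdb1'
      # example paths only - replace them with the real mount points from the command above
      rsync -aHv --progress /media/ScreamDrive/ /media/Ironwolf/raid-backup/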

      From your output it doesn't seem like the drive is lost or got a different designation; it just got dropped from the array for some reason.
      So, next, as geaves suggested, identify your dropped drive and try adding it back to the array:

      "mdadm --manage /dev/md0 --re-add /dev/sda". Since you use RAID 10 you cannot remove the drive until you add a new one, as that would drop you below the minimum required disk count.
      So you can try the procedure from here and see if it works.
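
      A rough sketch of that, again assuming /dev/sda is the dropped disk and using /dev/md0, which is the array name in your outputs:

      Source Code

      # try to re-attach the old member first (quick if the array accepts it)
      mdadm --manage /dev/md0 --re-add /dev/sda
      # if mdadm refuses the re-add, fall back to a normal add (triggers a full resync):
      # mdadm --manage /dev/md0 --add /dev/sda
      # then watch the rebuild until the array shows [UUUU] again
      watch cat /proc/mdstat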
