RAID 10 degraded after a drive disconnected while in use

  • I have a 2-bay hard drive dock that disconnects all of its drives whenever a new drive is inserted, and one of my RAID drives was in that dock. I added a new Seagate IronWolf 4TB drive to the dock and began copying files from my RAID array to this new 4TB drive without noticing that the array had become degraded. I had moved about 20GB of files from the array to the new drive before I noticed it was degraded. I immediately stopped the transfer and rebooted my server, but that did not fix anything. How can I fix this degraded array? Thanks in advance for taking the time to help me out with this.


    One of my hard drives says it is removed, but it is not actually removed and is still detected in the "Physical Disks" tab in the webGUI. This leads me to believe that I have to re-add it to the array. The problem is I don't actually know how to do this. Rather than playing around with 2TB of my files, I have come here to ask for help. Thanks.


    My RAID 10 array is as follows:


    4 Seagate Barracuda drives of 1TB each in RAID 10.
    1 of these drives is connected via SATA; the rest are connected via USB on 2-bay 3.5-inch external hard drive docks.


    The webGUI outputs the following:


    Output of the commands from the RAID help sticky thread:


    cat /proc/mdstat

    Code
    Personalities : [raid10]
    md0 : active raid10 sdc[0] sde[3] sdd[1]
          1953262592 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
    
    
    unused devices: <none>


    mdadm --detail --scan --verbose



    Code
    ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=screamserver:ScreamRaid UUID=ddaf6947:8c3f9552:e1ec6bbc:4be83769
       devices=/dev/sdc,/dev/sdd,/dev/sde

    cat /etc/mdadm/mdadm.conf


    fdisk -l | grep "Disk "


    blkid

    Code
    /dev/sdc: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="9a520635-d41c-8a42-4f2a-85129f667b55" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
    /dev/sdb1: UUID="4086908c-fcf7-467d-923a-867222729129" TYPE="ext4" LABEL="Ironwolf"
    /dev/sda: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="f1901490-fb9c-2847-5d14-b10584fda9d9" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
    /dev/sde: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="81e429b8-f7bd-7e4c-fbaa-ae82daa59b09" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
    /dev/sdd: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="72d5b57e-9acb-115e-f890-92c69155af79" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
    /dev/sdf1: UUID="6ce9b17c-b25e-4ead-bb1a-96ed4982cf5f" TYPE="ext4"
    /dev/sdf5: UUID="f898a3d3-b314-4b29-a0d7-9e2fbea99ccc" TYPE="swap"
    /dev/md0: LABEL="ScreamDrive" UUID="f33e8fe4-1951-4061-90e6-f3241fe7401d" TYPE="ext4"
  • Official Post

    I'm no expert at this, but your removed drive is /dev/sda. To re-add it you'll first have to wipe the superblock info with mdadm --zero-superblock /dev/sda, then run mdadm /dev/md0 --add /dev/sda, IF sda is the missing drive, which it appears to be.
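
    A minimal sketch of that sequence, assuming /dev/sda really is the dropped member (the blkid output above still shows it carrying the array's UUID) and that the array is still assembled as /dev/md0; verify with --examine before wiping anything:

    Code
    mdadm --examine /dev/sda           # confirm the stale member metadata (array UUID ddaf6947:...)
    mdadm --zero-superblock /dev/sda   # wipe the old superblock so the drive can be added cleanly
    mdadm /dev/md0 --add /dev/sda      # add it back; the RAID 10 rebuild starts automatically
    cat /proc/mdstat                   # recovery progress shows up here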


    I would suggest you look at how you have this set up; your first sentence is the giveaway, as this will happen again should you need to replace a drive.


    As I say, I'm no expert at this... but I understand that you have to remove the superblock info so that it can be re-created.

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 6x amd64 running on an HP N54L Microserver

  • To be safe, in case you don't have a backup, I would suggest doing a full copy of everything from your md disk while it is still accessible. If I read your post correctly you only have about 2TB of data, so do a full backup first thing.
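
    A hedged sketch of such a copy, assuming both the array and the IronWolf drive are mounted; the paths below are placeholders, so substitute whatever mount points df -h or the webGUI actually shows:

    Code
    df -h                              # find the real mount points of /dev/md0 and the IronWolf
    # placeholder paths below -- replace with the actual mount points found above
    rsync -aH --info=progress2 /path/to/ScreamDrive/ /path/to/Ironwolf/raid-backup/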


    From your output, it doesn't seem like the drive is lost or got a different designation; it just got dropped from the array for some reason.
    So, next, as geaves suggested, identify your dropped drive and try adding it back to the array.


    "mdadm --manage /dev/md1 --re-add /dev/sda"since you use raid10 you can not remove the drive until you add a new one as it will drop you below min required disk count.
    so you can try procedure from here and see if it works
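
    A sketch of that attempt; note that --re-add may be refused if the member's metadata is too far out of date, in which case the zero-superblock and --add route described above is the fallback:

    Code
    mdadm --manage /dev/md0 --re-add /dev/sda   # try to put the dropped member back into its old slot
    cat /proc/mdstat                            # if accepted, recovery/resync should show up here
    # if the re-add is refused: mdadm --zero-superblock /dev/sda && mdadm /dev/md0 --add /dev/sda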



    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T

  • Thanks for your help, guys. After doing a backup I formatted the drive that had dropped out of the RAID array for whatever reason, then used the webGUI to add it back to the degraded array, and that worked. Sorry if this is a dumb thread; I am still a novice.
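
    For anyone hitting the same problem later, a quick way to confirm the rebuild actually finished, assuming the array is still /dev/md0:

    Code
    watch -n 5 cat /proc/mdstat   # the missing slot [UU_U] should become [UUUU] once the resync completes
    mdadm --detail /dev/md0       # should report "State : clean" and four active devices when done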
