How to recover a "clean, degraded" RAID after cable problems with one of the 4 disks in a RAID 10?

  • Hi everyone,

    After I had some problems with the cables to one of the 4 drives in the RAID 10 array (unable to open ATA device), I fixed the cable connections and the drive restarted and is visible again (/dev/sde). From the OMV interface the disk now shows up again under Storage -> Disks and under S.M.A.R.T., but in Software RAID it is not listed among the devices and the array status is "clean, degraded". Do I have to wipe the disk and then recover, or is it better to follow another procedure? (A sketch of the typical re-add procedure follows the diagnostic output below.)

    Thank you.

    Code
    cat /proc/mdstat
    Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
    md0 : active raid10 sdb[1] sda[0] sdd[2]
          23437508608 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
          bitmap: 79/175 pages [316KB], 65536KB chunk
    
    unused devices: <none>
    Code
    blkid
    /dev/sdb: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="a6bb8aa8-4e9b-7f90-b105-45a9301acbce" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sde: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="2c68c265-01f7-dd1b-ffff-6d28eb140780" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdd: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="6c9c5433-6838-c39f-abfa-7807205a3238" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sda: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="3904f2f1-fe1f-bde3-a965-d9dbe0074f66" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdc1: UUID="2218-DC43" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="09f69470-ba7b-4b6b-9456-c09f4c6ad2ee"
    /dev/sdc2: UUID="87bfca96-9bee-4725-ae79-d8d7893d5a49" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3c45a8f0-3106-4ba8-89bc-b15d22e81144"
    /dev/md0: LABEL="REDRAID4X12" UUID="5fd65f52-b922-45e3-a940-eb7c75460446" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sdc3: PARTUUID="fda4b444-cf82-4ae8-b916-01b8244acee3"
    Code
    mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=pandora:Raid4x12TBWdRed UUID=8b767a7d:c52c068d:c04f1a3c:fd8d4c5f
       devices=/dev/sda,/dev/sdb,/dev/sdd
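
    For reference, the usual fix in this situation is not a full wipe of the drive but clearing its stale RAID superblock and re-adding it, after which mdadm rebuilds the missing mirror copy. A minimal sketch, assuming the array is /dev/md0 and the returning disk is /dev/sde as shown in the output above (run as root, and double-check the device name before zeroing anything):

    Code
    # confirm the array is still degraded ([4/3] [UUU_])
    cat /proc/mdstat

    # clear the stale RAID metadata on the returning disk only
    mdadm --zero-superblock /dev/sde

    # add the disk back; mdadm starts rebuilding the missing copy
    mdadm /dev/md0 --add /dev/sde

    # watch the rebuild until all four members show up as [UUUU]
    watch cat /proc/mdstat

    As far as I understand, this is roughly what the Recover action under Storage -> Software RAID does from the OMV web interface.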

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304


  • That would be the correct procedure

    I performed the procedure and now the RAID is clean again and everything works. But I received this email from mdadm:

    It is not clear whether everything is OK, or how to stop receiving these messages. Thank you.
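
    One way to confirm the array is actually healthy again is to query it directly; a quick check, assuming /dev/md0 as above:

    Code
    mdadm --detail /dev/md0
    # expect "State : clean", "Active Devices : 4", "Failed Devices : 0",
    # and all four member disks listed as "active sync"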


    • Official post

    It is not clear whether everything is OK

    Everything is fine, but the email is odd. Post the output of cat /etc/mdadm/mdadm.conf. Sometimes a reboot can help; it could be that mdadm detected a missing drive but hasn't caught up with the change yet.
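
    For comparison, a stock Debian/OMV mdadm.conf usually looks roughly like the following; the ARRAY line should carry the same UUID that mdadm --detail --scan reported earlier in the thread, and MAILADDR is where the notification mails go (illustrative defaults, not the poster's actual file):

    Code
    # /etc/mdadm/mdadm.conf (typical Debian defaults, shown for illustration)
    DEVICE partitions
    CREATE owner=root group=disk mode=0660 auto=yes
    HOMEHOST <system>
    MAILADDR root
    ARRAY /dev/md0 metadata=1.2 name=pandora:Raid4x12TBWdRed UUID=8b767a7d:c52c068d:c04f1a3c:fd8d4c5f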

  • Everything is fine, but the email is odd. Post the output of cat /etc/mdadm/mdadm.conf. Sometimes a reboot can help; it could be that mdadm detected a missing drive but hasn't caught up with the change yet.

    Here it is:

