RAID10 - Clean, degraded - Why?

  • Hi,


    I just checked my RAID10 status and it says "clean, degraded".
    Why is that?
    Isn't degraded a bad thing?
    I had one bad disk, /dev/sdd, so I replaced it, clicked "Recover", and the rebuild ran to 100%.
    But the array is still degraded, and it seems like another disk is missing from the RAID.



    Details:


    Version : 1.2
    Creation Time : Tue Apr 12 16:19:22 2016
    Raid Level : raid10
    Array Size : 21487212032 (20491.80 GiB 22002.91 GB)
    Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
    Raid Devices : 22
    Total Devices : 21
    Persistence : Superblock is persistent


    Update Time : Sun Mar 26 22:39:05 2017
    State : clean, degraded
    Active Devices : 21
    Working Devices : 21
    Failed Devices : 0
    Spare Devices : 0


    Layout : near=2
    Chunk Size : 512K


    Name : storage02off:storage02off (local to host storage02off)
    UUID : 12ec260e:98e29218:c957740e:0e958366
    Events : 154450


    Number  Major  Minor  RaidDevice  State
        23      8     16           0  active sync   /dev/sdb
         1      8     32           1  active sync   /dev/sdc
        26      8     48           2  active sync   /dev/sdd
         3      8     64           3  active sync   /dev/sde
         4      8     80           4  active sync   /dev/sdf
        24      8     96           5  active sync   /dev/sdg
         6      8    112           6  active sync   /dev/sdh
         7      8    128           7  active sync   /dev/sdi
         8      8    144           8  active sync   /dev/sdj
         9      8    160           9  active sync   /dev/sdk
        10      8    176          10  active sync   /dev/sdl
        11      8    192          11  active sync   /dev/sdm
        12      8    208          12  active sync   /dev/sdn
        13      8    224          13  active sync   /dev/sdo
        14      8    240          14  active sync   /dev/sdp
        15     65      0          15  active sync   /dev/sdq
        16     65     16          16  active sync   /dev/sdr
        17     65     32          17  active sync   /dev/sds
        18      0      0          18  removed
        25     65     64          19  active sync   /dev/sdu
        22     65     80          20  active sync   /dev/sdv
        21     65     96          21  active sync   /dev/sdw
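
    For what it's worth, the same details can be printed on the command line with something like this (assuming the array node is /dev/md0, as in the re-add command further down):

    # Short summary of all md arrays and their sync/rebuild state
    cat /proc/mdstat

    # Full per-device detail for this array
    mdadm --detail /dev/md0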



    OK, I found out that I should have a total of 22 disks in my array, but I only have 21.
    One of them, /dev/sdt, is marked as removed.
    It is shown under "Physical devices", so it is connected to the machine.


    But when I click "Recover" in Raid Management, I don't see it listed as an unused disk.
    Why is that?
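
    To figure out whether the OS still sees /dev/sdt at all, I guess I can run a few checks like these (assuming smartmontools is installed):

    # Is the kernel still exposing the disk as a block device?
    lsblk /dev/sdt

    # Does the drive itself report a healthy SMART status?
    smartctl -H /dev/sdt

    # Does the disk still carry an md superblock for this array?
    mdadm --examine /dev/sdt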

  • It looks like /dev/sdt is no longer working and you'll have to replace that drive before you can rebuild/recover your RAID.


    Edit: at least the OS thinks /dev/sdt is no longer working.
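
    If it really is dead, the usual mdadm sequence would look roughly like this (device names are just examples, and since the array already lists that slot as "removed", the fail/remove step may not even be needed):

    # Mark the old disk as failed and drop it from the array
    # (only if mdadm still considers it a member)
    mdadm --manage /dev/md0 --fail /dev/sdt
    mdadm --manage /dev/md0 --remove /dev/sdt

    # After physically swapping the drive, add the new disk;
    # the array should start rebuilding onto it automatically
    mdadm --manage /dev/md0 --add /dev/sdt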

  • I found the solution: I just had to re-add the disk to the RAID; somehow it was no longer part of it.
    So I used this command:
    mdadm --manage /dev/md0 --re-add /dev/sdt
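
    To confirm the re-add worked and keep an eye on the resync, something like this should do (again assuming the array is /dev/md0):

    # Watch rebuild progress live
    watch cat /proc/mdstat

    # The array state should go back to "clean" once the resync finishes
    mdadm --detail /dev/md0 | grep -E 'State|Rebuild'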
