RAID 5 clean degraded

  • Good Morning,


    yesterday my NAS was unfortunately unusable because of some chaos in the cable management. I tidied up the cabling and started OMV. Everything was fine except that one HD had a power problem; I saw this in the OMV GUI because one HD was missing. I shut down the NAS and fixed the problem. Now the RAID is in clean, degraded status. I have read a lot of articles on this topic, but since I am a real newbie in Linux I want to ask for help. (/dev/sdc is the drive that had the problem.)


    Here is the output of the standard commands


    root:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : inactive sdc[2](S)
    2930265560 blocks super 1.2



    md127 : active raid5 sda[0] sdd[3] sdb[1]
    8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]



    unused devices: <none>




    root:~# blkid
    /dev/sda: UUID="1502d52d-95e2-6795-1640-733c336993e9" UUID_SUB="f4b35227-50c1-0395-aacf-ea17c6c0448b" LABEL="PCADS:Media" TYPE="linux_raid_member"
    /dev/sdb: UUID="1502d52d-95e2-6795-1640-733c336993e9" UUID_SUB="747d9708-d350-2453-9796-376b9efd0cec" LABEL="PCADS:Media" TYPE="linux_raid_member"
    /dev/sdc: UUID="1502d52d-95e2-6795-1640-733c336993e9" UUID_SUB="a1f38150-9d8b-90a3-fbaa-6e7ce05e0e9f" LABEL="PCADS:Media" TYPE="linux_raid_member"
    /dev/md127: LABEL="Storage" UUID="fdbab16c-3441-4b48-89a6-dbfa96ae3ff4" TYPE="ext4"
    /dev/sdd: UUID="1502d52d-95e2-6795-1640-733c336993e9" UUID_SUB="259ea68e-e1ae-060a-7f21-74d26e2c127a" LABEL="PCADS:Media" TYPE="linux_raid_member"
    /dev/sde1: UUID="1c616f83-f1f0-4e89-b62f-4a39b3b6a049" TYPE="ext4"
    /dev/sde5: UUID="e4d4e52f-a3a4-4ec7-ba88-19767e6f0b1e" TYPE="swap"




    root:~# fdisk -l | grep "Disk "
    Disk /dev/sda doesn't contain a valid partition table
    Disk /dev/sdb doesn't contain a valid partition table
    Disk /dev/sdc doesn't contain a valid partition table
    Disk /dev/sdd doesn't contain a valid partition table
    Disk /dev/md127 doesn't contain a valid partition table
    Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
    Disk identifier: 0x00000000
    Disk /dev/sde: 63.0 GB, 63023063040 bytes
    Disk identifier: 0x0003987e
    Disk /dev/md127: 9001.8 GB, 9001774350336 bytes
    Disk identifier: 0x00000000




    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=PCADS:Media UUID=1502d52d:95e26795:1640733c:336993e9
    MAILADDR root




    root:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/Media level=raid5 num-devices=4 metadata=1.2 name=PCADS:Media UUID=1502d52d:95e26795:1640733c:336993e9
    devices=/dev/sda,/dev/sdb,/dev/sdd
    mdadm: md device /dev/md0 does not appear to be active.




    Can anyone please help me integrate the sdc HD again?

    • Official Post

    You 'could' try mdadm --add /dev/md127 /dev/sdc


    I've never fully understood mdadm and have to search to find an answer, but reading through what you have written, /dev/sdc, whilst it's still 'there', is part of the original md0 RAID; running the above should add it to md127.


    When you run that it should give an output showing the time to complete. What you could also do is check the drive using SMART under Storage, just to make sure the drive is 'fit'.
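

    A minimal sketch of that health check, assuming smartmontools is installed and the suspect drive is /dev/sdc (adjust the device name to your system):


    # quick pass/fail verdict from the drive's own SMART data
    smartctl -H /dev/sdc

    # full attribute table and error log; watch for reallocated or pending sectors
    smartctl -a /dev/sdc

    # optional short self-test; it runs in the background, re-check with -a afterwards
    smartctl -t short /dev/sdc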

    • Official Post

    After some further searching: /dev/sdc is still part of the original md0 array, so to remove it you have to fail it first.


    Try mdadm /dev/md0 --fail /dev/sdc and, 'if' that works, try mdadm /dev/md0 --remove /dev/sdc. I'm not sure about this, though, as your md0 displays as inactive.
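

    Before failing or removing anything it can help to see what the member disk itself reports; a sketch, assuming the suspect member is /dev/sdc:


    # show the md superblock stored on the member disk
    # (array UUID, array name and this device's role/state)
    mdadm --examine /dev/sdc

    # compare with the running, degraded array
    mdadm --detail /dev/md127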

  • Stop md0, destroy it and then re-add sdc to md127.
    But if your RAID was md0 before and is now md127, I would check disk health and everything else, because that always points to some kind of error or problem.
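

    A sketch of that sequence, assuming /dev/md0 really is only a stale, inactive remnant that still claims /dev/sdc. Zeroing the superblock is destructive for that disk's md metadata, so double-check the --examine output first:


    # stop the stale, inactive array
    mdadm --stop /dev/md0

    # wipe the old md superblock on the member so it can join cleanly
    mdadm --zero-superblock /dev/sdc

    # add the disk back to the running, degraded array and let it rebuild
    mdadm --add /dev/md127 /dev/sdc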

  • Sorry, I'm now confused.


    /dev/md0 is my SSD with the OS and OMV; it has nothing to do with the RAID5
    /dev/md127 is the RAID5


    /dev/sdc was a member of the RAID. I can see it in the GUI and in a CLI session, but it seems it is not attached anywhere.

    • Official Post

    Ok, now you're getting me confused. The above tells me that md0 is inactive with sdc as part of it, while md127 displays as an active RAID 5 with the three drives.


    The output of blkid and fdisk shows you have a 63 GB drive /dev/sde set up with ext4 and swap, which I am guessing is your OS/OMV boot drive.


    The output from your mdadm.conf points to the array being /dev/md0; all your problems point to the drive failure and then the drive being reconnected.


    Another way around this might be to try mdadm --stop /dev/md127 and then run mdadm --assemble --scan; this should reassemble the RAID defined in the config files.
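

    A sketch of that reassembly path, assuming the filesystem on the array is unmounted first:


    # unmount the array's filesystem before stopping it
    umount /dev/md127

    # stop the running array (and the stale inactive one, if it still shows in /proc/mdstat)
    mdadm --stop /dev/md127
    mdadm --stop /dev/md0

    # reassemble according to /etc/mdadm/mdadm.conf
    mdadm --assemble --scan

    # verify the result
    cat /proc/mdstat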

  • I'm sorry, you are right. As I said ..... Linux newbie, read and understand .... sorry again :)


    mdadm /dev/md0 --fail /dev/sdc
    mdadm: cannot get array info for /dev/md0


    mdadm /dev/md0 --remove /dev/sdc
    mdadm: cannot get array info for /dev/md0


    so I should try this now:
    mdadm --stop /dev/md127
    mdadm --assemble --scan

  • ok, I unmounted and stopped md0, then
    mdadm --assemble --scan
    mdadm: /dev/md0 has been started with 3 drives (out of 4).


    then
    mdadm --add /dev/md0 /dev/sdc
    mdadm: added /dev/sdc


    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdc[4] sda[0] sdd[3] sdb[1]
    8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
    [>....................] recovery = 0.4% (14422656/2930265088) finish=382.7min speed=126963K/sec


    Do I understand this correctly? The HD was added and now the RAID 5 is being rebuilt with all 4 HDs?
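

    A sketch of how to watch the rebuild and check the final state, assuming the array is now assembled as /dev/md0 as shown above (paths are Debian/OMV defaults):


    # watch the resync progress until the status line shows [UUUU]
    watch -n 10 cat /proc/mdstat

    # detailed state, including the rebuild percentage
    mdadm --detail /dev/md0

    # after the rebuild, compare this output with /etc/mdadm/mdadm.conf
    # and refresh the initramfs so the array keeps its name across reboots
    mdadm --detail --scan
    update-initramfs -u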
