State of the RAID: clean, degraded

  • Hello, my OMV system is notifying a RAID state of "clean, degraded".


    ----------------------


    root@omv:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sda[0] sdc[2] sdb[1]
    11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
    bitmap: 29/30 pages [116KB], 65536KB chunk



    unused devices: <none>


    ----------------------



    root@omv:~# blkid
    /dev/sdb: UUID="cfcdc47b-839f-3ada-5522-ced3ab9c26d8" UUID_SUB="ec958142-e8ca-e9c9-3336-81f2c12f9cf7" LABEL="Panoramix:Raid5" TYPE="linux_raid_member"
    /dev/sda: UUID="cfcdc47b-839f-3ada-5522-ced3ab9c26d8" UUID_SUB="f465aad1-a6ee-2b13-86c7-768de12cfb27" LABEL="Panoramix:Raid5" TYPE="linux_raid_member"
    /dev/sdc: UUID="cfcdc47b-839f-3ada-5522-ced3ab9c26d8" UUID_SUB="170488df-051e-0b35-c4e1-ec79f8a06abd" LABEL="Panoramix:Raid5" TYPE="linux_raid_member"
    /dev/sdd1: LABEL="WDUsb" UUID="582af5dc-d61c-458f-aa32-d40cb4664497" TYPE="ext4" PARTUUID="414b3331-6b68-4204-82e2-6782ac80b63a"
    /dev/sde1: UUID="cf4eb673-11d8-4de8-a17a-a9745e6c57e1" TYPE="ext4" PARTUUID="690bbffe-01"
    /dev/sde5: UUID="2ab62fd2-7e0a-4455-ae7f-fbb69ad9fcdc" TYPE="swap" PARTUUID="690bbffe-05"
    /dev/md0: LABEL="WDRed" UUID="c2888fb6-c83c-4ea3-adcf-863d9f4c262a" TYPE="xfs"


    ----------------------



    root@omv:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk identifier: A9B8A8C6-8956-4E3A-B43E-4E8818B2B5C2
    Disk /dev/sde: 29,7 GiB, 31914983424 bytes, 62333952 sectors
    Disk identifier: 0x690bbffe
    Disk /dev/md0: 10,9 TiB, 12001957380096 bytes, 23441323008 sectors
    Disk /dev/sdf: 3,7 TiB, 4000752599040 bytes, 7813969920 sectors



    ------------------------


    root@omv:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=omv:Raid5 UUID=cfcdc47b:839f3ada:5522ced3:ab9c26d8



    # instruct the monitoring daemon where to send mail alerts



    ------------------------------



    root@omv:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=omv:Raid5 UUID=cfcdc47b:839f3ada:5522ced3:ab9c26d8
    devices=/dev/sda,/dev/sdb,/dev/sdc



    --------------------------------



    The OMV system runs on an HP Gen8 with 4 WD Red 4 TB HDDs in RAID5 (sda, sdb, sdc, sdd). The OMV OS is installed on a 32 GB microSD card (sde). And finally there is a 4 TB USB HDD for backups (sdf).


    Thanks!!!

  • In case it helps, here is the RAID5 info from OMV (the error is the "removed" device at the end):

    Version : 1.2
    Creation Time : Fri Oct 27 21:05:16 2017
    Raid Level : raid5
    Array Size : 11720661504 (11177.69 GiB 12001.96 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 4
    Total Devices : 3
    Persistence : Superblock is persistent



    Intent Bitmap : Internal



    Update Time : Sat Dec 2 14:08:29 2017
    State : clean, degraded
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0



    Layout : left-symmetric
    Chunk Size : 512K



    Name : Panoramix:Raid5 (local to host Panoramix)
    UUID : cfcdc47b:839f3ada:5522ced3:ab9c26d8
    Events : 34530



    Number   Major   Minor   RaidDevice   State
       0       8        0        0        active sync   /dev/sda
       1       8       16        1        active sync   /dev/sdb
       2       8       32        2        active sync   /dev/sdc
       6       0        0        6        removed
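
    For reference, one way to confirm which physical disks still carry this array's superblock (and therefore which one the array now reports as removed) is to examine each candidate member directly. A minimal sketch, with the device list /dev/sd[abcdf] only guessed from the fdisk output above:

    mdadm --examine /dev/sd[abcdf] 2>/dev/null | grep -E '^/dev/|Array UUID|Device Role|Array State|Events'

    Any 4 TB drive that does not report the Array UUID cfcdc47b:839f3ada:5522ced3:ab9c26d8 is the one that has dropped out of the array.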

  • Re,


    yeah, it's degraded because drive sdd is lost ... check the logs and the SMART data for that drive to find the root cause.
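
    For example, something like this would pull the relevant information (just a sketch; it assumes smartmontools is installed, and /dev/sdX is a placeholder for whichever letter the suspect WD Red currently has):

    # kernel messages about dropped or failing disks
    dmesg | grep -iE 'ata[0-9]+|i/o error|offline'
    # SMART health, attributes and error log for the suspect drive
    smartctl -a /dev/sdX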


    I currently have too little time to dig deeper here, but there are many posts from me in other threads covering this kind of problem; please use the search function.


    Note: your blkid output shows that your USB drive is now sdd! (so sdf is missing ... was it renumbered?)
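
    Because device letters can shift between boots, it is safer to identify the physical drives by model and serial number. A quick sketch using standard tools (lsblk from util-linux, plus the persistent names under /dev/disk/by-id):

    lsblk -o NAME,SIZE,MODEL,SERIAL
    ls -l /dev/disk/by-id/ | grep -v part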


    Sc0rp
