RAID 6 disappeared

  • Hello everyone,


    cat /proc/mdstat:


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md127 : inactive sdg[1](S) sdc[3](S) sdb[4](S) sdh[2](S) sdd[5](S) sda[0](S) sde[6](S)

    6835316280 blocks super 1.2


    unused devices: <none>
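
    All members show up as spares (S) and the array is inactive, so the superblocks are still present but the array was never started. The metadata of a single member can be inspected read-only like this (the other members sdb…sde, sdg and sdh can be checked the same way):

    # prints Array State, Events and Device Role as stored in the member's superblock
    mdadm --examine /dev/sda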




    blkid:


    /dev/sdc: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="4d1e1b6b-bab1-107e-0789-af8c5f7bf547" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sdb: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="423a1aad-fcea-7158-436a-115ea156385c" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sdd: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="f518e6a4-3403-24ae-a75a-88e2b0722a89" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sda: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="1bed1572-c467-cace-222b-9b0305a4557f" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sde: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="5c0d7d2a-a569-37d9-1c43-2117fc1246e4" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sdg: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="291e58ea-9112-4d4b-1563-af9342dc8243" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sdh: UUID="70b2de20-3e36-8789-e2a1-866ecb067917" UUID_SUB="bbbce84b-1ef4-ba8d-1f98-63d17884e7ae" LABEL="HULK.local:HulkRaid" TYPE="linux_raid_member"

    /dev/sdf1: UUID="CDBE-D745" TYPE="vfat" PARTUUID="299b948a-937b-4097-90b6-59ed7ea06813"

    /dev/sdf2: UUID="475d099c-df49-4b02-b831-92bff08eaec4" TYPE="ext4" PARTUUID="3fef7530-5dad-42c4-bc82-ba7d60381dce"

    /dev/sdf3: UUID="63ecac96-3b50-4975-861d-0ff07f2ed681" TYPE="swap" PARTUUID="f136f054-23a1-46e4-8827-a2a8b4eddf48"
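
    All seven data disks report the same array UUID and the linux_raid_member type, so the RAID signatures are still there; only sdf (the system disk) is different. If needed, blkid can be limited to RAID members only:

    # list only devices carrying an md RAID superblock signature
    blkid -t TYPE=linux_raid_member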




    fdisk -l | grep "Disk ":


    Disk /dev/sdc: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: ST1000LM048-2E71

    Disk /dev/sdb: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: WDC WD10SPZX-00Z

    Disk /dev/sdd: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: WDC WD10EADS-98M

    Disk /dev/sda: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: SAMSUNG HD103SI

    Disk /dev/sde: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: ST1000DM003-9YN1

    Disk /dev/sdg: 931 GiB, 999643152384 bytes, 1952428032 sectors

    Disk model: HDD/2

    Disk /dev/sdh: 931 GiB, 999643152384 bytes, 1952428032 sectors

    Disk model: HDD/1

    Disk /dev/sdf: 90 GiB, 96626278400 bytes, 188723200 sectors

    Disk model: Sys/Mirror

    Disk identifier: 111D9AB6-2E07-4633-9EAB-E02AFC72AA27




    cat /etc/mdadm/mdadm.conf:


    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts

    MAILADDR RalfRichter@Richter-Audio.de

    MAILFROM root


    # definitions of existing MD arrays

    ARRAY /dev/md0 metadata=1.2 name=HULK.local:HulkBuster UUID=8541d67d:e2204d81:769010c7:6e45facf




    mdadm --detail --scan --verbose:


    INACTIVE-ARRAY /dev/md127 num-devices=7 metadata=1.2 name=HULK.local:HulkRaid UUID=70b2de20:3e368789:e2a1866e:cb067917

    devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdg,/dev/sdh
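
    Note that the live scan reports md127 with UUID 70b2de20:…, while the ARRAY line in mdadm.conf above defines md0 with a different UUID. The two can be put side by side without changing anything:

    # persisted ARRAY definition vs. what the kernel currently sees
    grep '^ARRAY' /etc/mdadm/mdadm.conf

    mdadm --detail --scan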



    The setup:

    2 devices (hardware RAID 1) for the OMV system

    7 devices (software RAID 6) as a data pool



    Problem:

    The RAID 6 disappeared after a reboot. I was using the system as usual, and after the reboot I wasn't able to get to my data anymore.

    All HDDs seem to be recognized; "only" the data RAID is missing.


    I must admit that this is the first time I have set up my own NAS, reusing older hardware instead of buying a completely new solution.


    Thanks for all help in advance!

  • crashtest

    Approved the thread.
    • Official post

    Stop it and reassemble. However, the output reports md127 while mdadm.conf shows md0, so something went wrong:


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdegh]
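
    If the assemble succeeds, the result can be checked with the same commands as before (assuming the array comes back as md127):

    # the array should show up as active raid6 again
    cat /proc/mdstat

    mdadm --detail /dev/md127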

  • okay, this is what happened:


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdegh]:


    mdadm: looking for devices for /dev/md127

    mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.

    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 4.

    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 3.

    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 5.

    mdadm: /dev/sde is identified as a member of /dev/md127, slot 6.

    mdadm: /dev/sdg is identified as a member of /dev/md127, slot 1.

    mdadm: /dev/sdh is identified as a member of /dev/md127, slot 2.

    mdadm: forcing event count in /dev/sdh(2) from 27874 upto 27882

    mdadm: added /dev/sdg to /dev/md127 as 1 (possibly out of date)

    mdadm: added /dev/sdh to /dev/md127 as 2

    mdadm: added /dev/sdc to /dev/md127 as 3

    mdadm: added /dev/sdb to /dev/md127 as 4

    mdadm: added /dev/sdd to /dev/md127 as 5 (possibly out of date)

    mdadm: added /dev/sde to /dev/md127 as 6

    mdadm: added /dev/sda to /dev/md127 as 0

    mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.
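
    The interesting bit is the event counters: sdh had its count forced from 27874 up to 27882, sdg and sdd are flagged as possibly out of date, and in the end only 5 of 7 drives were accepted, which for a 7-disk RAID 6 should normally be just enough. The counters recorded in each superblock can be compared directly with a read-only sketch like this (same device names as above):

    # print the event counter, last update time and role stored in each member's superblock
    for d in /dev/sd[abcde] /dev/sdg /dev/sdh; do
        echo "== $d =="
        mdadm --examine "$d" | grep -E 'Events|Update Time|Device Role'
    done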

  • mdadm --detail /dev/md127

    /dev/md127:

    Version : 1.2

    Raid Level : raid0

    Total Devices : 6

    Persistence : Superblock is persistent


    State : inactive

    Working Devices : 6


    Name : HULK.local:HulkRaid (local to host HULK.local)

    UUID : 70b2de20:3e368789:e2a1866e:cb067917

    Events : 27882


    Number   Major   Minor   RaidDevice

       -       8       32        -       /dev/sdc
       -       8        0        -       /dev/sda
       -       8      112        -       /dev/sdh
       -       8       48        -       /dev/sdd
       -       8       16        -       /dev/sdb
       -       8       96        -       /dev/sdg



    __________________


    Yes, I have a backup, but it's missing some data that would take a lot of time to recreate.


    Might it be an option to replace the (out of date) HDDs? Nonetheless, before taking that step I'd like to check all other options.
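
    For what it's worth, an out-of-date member usually doesn't need to be swapped for new hardware: once the array is running again, it can be re-added and md resyncs it. A rough sketch, using /dev/sdg (one of the members flagged as possibly out of date) as the example; --re-add only works if the metadata allows it, otherwise a plain --add causes a full rebuild:

    # try a re-add first; fall back to a normal add (full resync) if that is refused
    mdadm --manage /dev/md127 --re-add /dev/sdg || mdadm --manage /dev/md127 --add /dev/sdg

    # watch the rebuild progress
    cat /proc/mdstat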

    • Official post

    If that was a RAID 6 it should assemble with just the 5 drives. What makes no sense is that mdadm.conf states md0 while the output states md127, so something went wrong during that reboot, and it could point to a hardware issue.


    You could try repeating the mdadm assemble but change md127 to md0. Another option might be to reboot, but I've never seen anything like this before.
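
    Spelled out, that second attempt would look like this (same drives, just assembled as md0):

    mdadm --stop /dev/md127

    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdegh]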

    • Official post

    If that doesn't work, post the output of mdadm --examine for each drive in the array, starting with mdadm --examine /dev/sda. At this moment I'm putting my money on some sort of hardware issue.


    That will generate a lot of output, too much to look at now as I'm about to sign off.
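
    A simple way to collect everything in one go (same drive letters as above; the output file name is just an example):

    # dump every member's superblock into one file for posting
    for d in /dev/sd[abcde] /dev/sdg /dev/sdh; do
        echo "===== $d ====="
        mdadm --examine "$d"
    done > mdadm-examine.txt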

  • Thanks a lot!!!


    The reboot did the trick. Nonetheless, I have to think about an additional backup plan for the crucial data.
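
    Regarding that additional backup plan, a minimal sketch would be a nightly rsync of the crucial data to another machine; the paths, host name and schedule below are placeholders, not taken from this setup:

    # example /etc/cron.d entry: nightly rsync of the crucial share (placeholder paths/host)
    30 2 * * * root rsync -a --delete /srv/<data-share>/crucial/ backuphost:/backups/crucial/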


    Did I mention? Thanks a lot!
