Lost Raid5

  • Hello, after a reboot OMV lost my 6×2 TB RAID5. It no longer appears in the web interface.


    At boot it said that two drives were faulty, but the drives appear green in the web interface.


    So here is what I get:

    • cat /proc/mdstat

    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
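
    An empty mdstat only means no array is currently assembled; the member superblocks can still be intact on the disks. A quick, read-only way to check that (a generic mdadm step, assuming the six data disks are the ones blkid shows below) is:

    mdadm --examine --scan

    If the metadata is still readable, this prints an ARRAY line with the array name and UUID even though the array is not running.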


    • blkid

    /dev/sda1: UUID="3abc1926-a277-4c6b-a4c1-e8fc9a4db381" TYPE="ext4" PARTUUID="a9d47ec6-01"
    /dev/sda5: UUID="ba395bad-8304-4088-8cbf-423a5570e3d9" TYPE="swap" PARTUUID="a9d47ec6-05"
    /dev/sdc: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="75769897-88a7-8f4a-2fa7-cbfedcbbf4ca" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sdb1: LABEL="NGS" UUID="c266f89c-ade1-4474-acb3-b06328d0c430" TYPE="ext4" PARTUUID="7d9760d7-cfc6-4234-89d6-183e289607cc"
    /dev/sdd: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="c8ada145-ff08-0c8b-62dd-e0d256e5bcaa" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sde: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="8361d72b-1bf4-a501-e97c-285ee2e72bff" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sdf: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="abe575c5-e310-7a7a-5419-eb0764b00054" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sdg: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="ba626cc6-4be5-7aee-503a-19f6f177f8ff" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
    /dev/sdh: UUID="c42bd49b-4735-0adb-4252-69c01fec9ead" UUID_SUB="281b8ef3-eff4-f085-9472-611ae4620f64" LABEL="openmediavault:NAS" TYPE="linux_raid_member"
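
    All six data disks report the same array UUID (c42bd49b-4735-0adb-4252-69c01fec9ead) and the label openmediavault:NAS, so the members themselves still carry the array metadata. To list only the RAID members (plain blkid filtering, nothing OMV-specific):

    blkid | grep linux_raid_member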


    • fdisk -l | grep "Disk "

    Disk /dev/sda: 7,5 GiB, 8004132864 bytes, 15633072 sectors
    Disk identifier: 0xa9d47ec6
    Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdb: 279,5 GiB, 300090728448 bytes, 586114704 sectors
    Disk identifier: 9DDDBB99-C85E-48CD-BB9D-7849E4777D92
    Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk identifier: 0x00011257
    Disk /dev/sde: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk identifier: 0x0005e98b
    Disk /dev/sdf: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk identifier: 0x000a55a7
    Disk /dev/sdg: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdh: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors



    • cat /etc/mdadm/mdadm.conf

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md127 metadata=1.2 name=openmediavault:NAS UUID=c42bd49b:47350adb:425269c0:1fec9ead
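
    The UUID in this ARRAY line is the same value blkid reports for the members, only grouped with colons instead of dashes, so mdadm.conf still describes the missing array. To confirm that a member's on-disk metadata matches this entry (using /dev/sdd as an example member), something like this can be run:

    mdadm --examine /dev/sdd | grep -E 'Array UUID|Name|State'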


    • mdadm --detail --scan --verbose

    It returns no output (expected, since --detail only reports on arrays that are currently assembled).

    • Official post

    What does this say: mdadm --assemble --force --verbose /dev/md127 /dev/sd[cdefgh]
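
    For context: --assemble --force lets mdadm bump the event counter of members that are only slightly behind (here the two drives flagged at boot) so the array can be started anyway. To see how far apart the members are before forcing anything, the per-disk event counts can be read with, for example:

    mdadm --examine /dev/sd[cdefgh] | grep -E '^/dev/sd|Events'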


  • It says:


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[cdefgh]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 3.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sde is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdf is identified as a member of /dev/md127, slot 4.
    mdadm: /dev/sdg is identified as a member of /dev/md127, slot 5.
    mdadm: /dev/sdh is identified as a member of /dev/md127, slot 2.
    mdadm: forcing event count in /dev/sdd(0) from 35080 upto 35088
    mdadm: forcing event count in /dev/sdc(3) from 35080 upto 35088
    mdadm: added /dev/sde to /dev/md127 as 1
    mdadm: added /dev/sdh to /dev/md127 as 2
    mdadm: added /dev/sdc to /dev/md127 as 3
    mdadm: added /dev/sdf to /dev/md127 as 4
    mdadm: added /dev/sdg to /dev/md127 as 5
    mdadm: added /dev/sdd to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 6 drives.
    root@openmediavault:~#
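
    Before going further, the state of the freshly assembled array can be double-checked with a couple of read-only commands (generic mdadm, not OMV-specific):

    cat /proc/mdstat
    mdadm --detail /dev/md127

    --detail should now list all six devices and show whether a resync is in progress.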

    • Official post

    Since that worked, now run: omv-mkconf mdadm


    It will probably be slow while it is rebuilding.
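
    While the array resyncs, progress can be followed with something like:

    watch -n 5 cat /proc/mdstat

    On a plain Debian system it can also be worth refreshing the initramfs copy of mdadm.conf afterwards with update-initramfs -u (a generic step, assumed rather than taken from this thread), so the array assembles cleanly on the next boot.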

