Missing file system (RAID5) after power loss

  • Hey guys, so I had a power loss today (actually likely 2 or 3 within half an hour, if that's relevant). I usually leave my NAS alone and only log in if I get a SMART error, or once a month to perform updates.


    So OMV is starting fine as far as I can tell, but the Software RAID section is empty and the file system associated with that RAID array is shown as "missing". The drives are all visible in the Drives section.


    I use six 4 TB drives, a mix of WD, Toshiba and Seagate.


    Any help greatly appreciated.


    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    unused devices: <none>


    blkid

    /dev/sda1: UUID="A9E2-A40E" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="8f08deeb-a2c9-476d-8807-e3033e7fd223"

    /dev/sda2: UUID="5a65c3f3-36a6-470f-bff3-0b29dfbcabd2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="5fbb5628-3caf-4472-aade-fe58ff57321b"

    /dev/sdd: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="14daa08e-c7ef-72ba-8677-d6b157f65836" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdc: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="7a276e9e-44f6-150b-8537-63f6784a6048" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdg: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="3ce1eccd-bbed-e913-64f0-58a881049919" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdf: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="80aadf8f-55f8-2b14-8511-0921fa05a680" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sde: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="63c9a77b-2027-84a5-057c-759b37c6c2fc" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sda3: PARTUUID="b0610092-bab0-4090-a201-51659aae791e"

    /dev/sdb1: PARTUUID="7349b940-bc1f-4ff5-9965-c1d78d80b692"


    fdisk -l | grep "Disk "

    Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: KINGSTON SV300S3

    Disk identifier: C3D7740C-FB5D-4AEF-B616-CE49BBD08854

    Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68N

    Disk identifier: 6C8E2A46-9F5B-4FA2-8201-D8EB607F82BC

    Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: TOSHIBA HDWD240

    Disk /dev/sdc: 3.64 TiB, 4000753476096 bytes, 7813971633 sectors

    Disk model: WDC WD40EZRX-00S

    Disk /dev/sdg: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: ST4000DM004-2CV1

    Disk /dev/sdf: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: TOSHIBA MD04ACA4

    Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68N


    cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.

    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #

    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions

    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # definitions of existing MD arrays

    ARRAY /dev/md/Athena:0 metadata=1.2 name=Athena:0 UUID=4211a61c:1dcb3783:12280f01:549d9cde

    MAILADDR root


    mdadm --detail --scan --verbose

    No output at all after I hit enter in the terminal.

  • ryecoaaron

    Approved the thread.
    • Official Post

    Well, there's nothing to work with; there should be some output from cat /proc/mdstat and mdadm --detail, and there isn't, so there could be some corruption on the OMV system drive.
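
    For comparison, on a working box a 6-disk RAID5 normally shows up in cat /proc/mdstat looking something like this (the md number, device names and size here are only an example, not taken from your system):

    md127 : active raid5 sdb[0] sdc[1] sdd[2] sde[3] sdf[4] sdg[5]

    19534425600 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]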


    First, try simply restarting the server from OMV's GUI, then check RAID Management and cat /proc/mdstat. You're looking for some output; if it's still in the same state, then ->


    Disconnect all the drives and reinstall OMV on another drive/USB flash drive, set it up and update it, then shut down, reconnect the data drives and check again.
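
    Once the data drives are reconnected to the fresh install, a few read-only commands are enough to see whether the kernel picks the members up again; something like this (adjust the device letters to your system, nothing here writes to the disks):

    cat /proc/mdstat                 # is any md device listed now?

    blkid | grep linux_raid_member   # are all six member disks detected?

    mdadm --detail --scan --verbose  # does mdadm see the array at all?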

  • OK, so I was focused on other things for a while, but I'm back trying to sort this RAID array out.

    Rebooting didn't change the output from the commands or what is visible in the web gui.

    Disconnect all the drives and reinstall OMV on another drive/USB flash drive, set it up and update it, then shut down, reconnect the data drives and check again.

    After doing this, I'm getting output from mdadm:


    mdadm --detail --scan --verbose

    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=Athena:0 UUID=4211a61c:1dcb3783:12280f01:549d9cde

    devices=/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg

    It says I have 5 drives in this inactive array, but it's a RAID5 with 6 drives; all 6 are connected, as can be seen below, and the array wasn't degraded before the power loss.
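
    For anyone else in this spot, comparing the superblocks on each member shows which disk fell out and how far behind it is; a minimal check (assuming whole-disk members, as in the blkid output below) would be:

    mdadm --examine /dev/sd[c-g]   # per-disk metadata: Events counter, Device Role, Array State

    mdadm --examine /dev/sdb       # likely reports no md superblock, since blkid no longer lists sdb as a linux_raid_member

    If the Events counters on the five good members match (or are very close), a forced assemble with just those five is generally considered safe.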


    For good measure, I ran the other commands again:



    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md127 : inactive sdc[0](S) sdg[6](S) sdd[5](S) sdf[7](S) sde[4](S)

    19534417592 blocks super 1.2

    unused devices: <none>

    blkid

    /dev/sdc: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="7a276e9e-44f6-150b-8537-63f6784a6048" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sde: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="63c9a77b-2027-84a5-057c-759b37c6c2fc" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdd: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="14daa08e-c7ef-72ba-8677-d6b157f65836" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sda1: UUID="F9A5-C588" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="fb813cbb-2e0d-4594-9ba5-f7660f0fed7c"

    /dev/sda2: UUID="e10b24bd-a91b-4571-a4b7-587f2f57744e" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="6a00e573-ceae-4484-b766-43fda099ea55"

    /dev/sda3: UUID="88c23d4c-914c-4f47-8de0-922d2caad798" TYPE="swap" PARTUUID="a96c33be-ecbb-40c0-94d0-ace3603cba0e"

    /dev/sdf: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="80aadf8f-55f8-2b14-8511-0921fa05a680" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdg: UUID="4211a61c-1dcb-3783-1228-0f01549d9cde" UUID_SUB="3ce1eccd-bbed-e913-64f0-58a881049919" LABEL="Athena:0" TYPE="linux_raid_member"

    /dev/sdb1: PARTUUID="7349b940-bc1f-4ff5-9965-c1d78d80b692"


    fdisk -l | grep "Disk "

    Disk /dev/sdc: 3.64 TiB, 4000753476096 bytes, 7813971633 sectors

    Disk model: WDC WD40EZRX-00S

    Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68N

    Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: TOSHIBA HDWD240

    Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: WDC WD40EFRX-68N

    Disk identifier: 6C8E2A46-9F5B-4FA2-8201-D8EB607F82BC

    Disk /dev/sda: 238.47 GiB, 256060514304 bytes, 500118192 sectors

    Disk model: Crucial_CT256MX1

    Disk identifier: 0FE33E33-AB4B-4362-B1FD-1563CC19FFB2

    Disk /dev/sdf: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: TOSHIBA MD04ACA4

    Disk /dev/sdg: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors

    Disk model: ST4000DM004-2CV1


    cat /etc/mdadm/mdadm.conf

    # mdadm.conf

    #

    # !NB! Run update-initramfs -u after updating this file.

    # !NB! This will ensure that initramfs has an uptodate copy.

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #

    # by default (built-in), scan all partitions (/proc/partitions) and all

    # containers for MD superblocks. alternatively, specify devices to scan, using

    # wildcards if desired.

    #DEVICE partitions containers

    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts

    MAILADDR root

    # definitions of existing MD arrays

    # This configuration was auto-generated on Mon, 24 Jul 2023 10:14:16 +0000 by mkconf

  • Update


    After looking at the many similar issues, I am assuming the array is viable except for one disk (sdb), for some reason.


    So I ran mdadm --stop /dev/md127, then mdadm --assemble --force --verbose /dev/md127 /dev/sd[cdefg], which returned this:


    mdadm --detail /dev/md127 returns

    And I can now see the array as degraded in OMV's web gui.

    I could not recover the array using sdb as it was, so I had to wipe it. It is now rebuilding; fingers crossed.
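
    For anyone hitting the same problem, the whole sequence boils down to something like this (the wipe and re-add lines are a reconstruction rather than a copy-paste, so double-check the device letters before running anything destructive):

    mdadm --stop /dev/md127                                       # stop the inactive array

    mdadm --assemble --force --verbose /dev/md127 /dev/sd[cdefg]  # start it degraded with the 5 good members

    mdadm --detail /dev/md127                                     # should report clean, degraded with 5 of 6 devices

    wipefs -a /dev/sdb                                            # clear the stray signatures on the dropped disk

    mdadm /dev/md127 --add /dev/sdb                               # re-add it; the rebuild starts automatically

    cat /proc/mdstat                                              # watch the recovery progress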


    If I've missed an important step or there's something else I should check, I would be grateful to be pointed in the right direction! If not, thanks for the help; I would have fiddled with my borked system disk for a while longer before installing on a new one (still investigating whether that first disk is really bad or not).
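
    One follow-up for the fresh install: the new /etc/mdadm/mdadm.conf above has no ARRAY line yet. OMV normally regenerates that file itself, but if the array doesn't come back on its own after a reboot, the manual equivalent would be something along these lines:

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the ARRAY definition for md127

    update-initramfs -u                              # keep the initramfs copy in sync, as the file's own comment says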
