Lost RAID 5

  • Hello OMV forum


    Today I moved my OMV machine and thought I'd give it a clean at the same time. Before doing so I booted it up to check the move had caused no problems, then tucked it out of the way (so I thought). Unfortunately I left the case open and powered on, my nephew decided to investigate, and in the few moments I wasn't looking he pulled the SATA cables out of the drives ;(


    I was running 4x 2 TB drives in RAID 5.


    Basically it's now saying in the logs: mountpoint_srv_dev-disk-by-label-NAS' status failed (1) -- /srv/dev-disk-by-label-NAS is not a mountpoint
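

    For reference, a quick way to confirm what that message is complaining about is to check the mount itself; a minimal sketch, assuming the filesystem label really is "NAS" as in the error:


    Code
    # Check whether the OMV shared-folder path is actually a mountpoint.
    findmnt /srv/dev-disk-by-label-NAS    # prints nothing if it is not mounted
    cat /proc/mdstat                      # the md array has to be active before anything can mount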


    I've tried following suggestions from other forum threads, to no avail :(


    Is this salvageable? It looks like I'm missing an mdadm.conf, perhaps?


    Any help would be greatly appreciated. Thanks!



    cat /proc/mdstat


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sda[0](S) sdb[1](S) sdd[3](S) sdc[2](S)
    7813534048 blocks super 1.2


    unused devices: <none>
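

    The (S) after every member means mdadm found the disks but has not started the array, so they are all listed as spares for now. If it helps, the on-disk superblocks can be checked before doing anything else; a minimal sketch, with the device names taken from the output above:


    Code
    # Read the md superblock from each member; matching Events counts across the four disks are a good sign.
    mdadm --examine /dev/sd[abcd] | grep -E 'Array UUID|Raid Level|Events|Array State'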


    blkid


    /dev/sdb: UUID="f88aab93-ea45-ef37-07cb-eae4d6f4bc3e" UUID_SUB="88445cb1-7a58-0600-afc0-04c29105d30d" LABEL="NAS-2:Raid5" TYPE="linux_raid_member"
    /dev/sda: UUID="f88aab93-ea45-ef37-07cb-eae4d6f4bc3e" UUID_SUB="ed145d12-c3a6-87e0-b48f-ef14f6b7ecca" LABEL="NAS-2:Raid5" TYPE="linux_raid_member"
    /dev/sdc: UUID="f88aab93-ea45-ef37-07cb-eae4d6f4bc3e" UUID_SUB="eab7d791-1f0e-2fa7-893e-df59a6ecd40f" LABEL="NAS-2:Raid5" TYPE="linux_raid_member"
    /dev/sdd: UUID="f88aab93-ea45-ef37-07cb-eae4d6f4bc3e" UUID_SUB="2b3af38a-2037-3a56-5976-691c568d92f7" LABEL="NAS-2:Raid5" TYPE="linux_raid_member"
    /dev/sde1: UUID="ff527af8-02e9-44ef-ab88-308ea18a0030" TYPE="ext4" PARTUUID="6a4e3222-01"
    /dev/sde5: UUID="b45a694e-8dde-43b4-ac63-deffa79218c4" TYPE="swap" PARTUUID="6a4e3222-05"



    fdisk -l | grep "Disk "


    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sde: 119.2 GiB, 128035676160 bytes, 250069680 sectors
    Disk identifier: 0x6a4e3222



    cat/etc/mdadm/mdadm.conf


    -bash: cat/etc/mdadm/mdadm.conf: No such file or directory
    root@NAS-2:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    INACTIVE-ARRAY /dev/md0 metadata=1.2 name=NAS-2:Raid5 UUID=f88aab93:ea45ef37:07cbeae4:d6f4bc3e
    root@NAS-2:~#




    mdadm --detail --scan --verbose


    mdadm: Unknown keyword INACTIVE-ARRAY
    INACTIVE-ARRAY /dev/md127 num-devices=4 metadata=1.2 name=NAS-2:Raid5 UUID=f88aab93:ea45ef37:07cbeae4:d6f4bc3e
    devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
    root@NAS-2:~# mdadm: Unknown keyword INACTIVE-ARRAY
    -bash: mdadm:: command not found
    root@NAS-2:~# INACTIVE-ARRAY /dev/md127 num-devices=4 metadata=1.2 name=NAS-2:Raid5 UUID=f88aab93:ea45ef37:07cbeae4:d6f4bc3e
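

    A note on that warning: INACTIVE-ARRAY is just what mdadm --detail --scan prints for an array that exists but is not running, and it is not a keyword mdadm accepts back in mdadm.conf, hence "Unknown keyword". Once the array is running again the stale line can be replaced; a generic Debian-style sketch, assuming mdadm.conf is edited by hand (OMV has its own omv-mkconf mdadm for this, mentioned later in the thread):


    Code
    # Only after the array has been assembled and is active again:
    mdadm --detail --scan | grep '^ARRAY'                  # should now print a normal ARRAY line
    sed -i '/^INACTIVE-ARRAY/d' /etc/mdadm/mdadm.conf      # drop the stale entry
    mdadm --detail --scan | grep '^ARRAY' >> /etc/mdadm/mdadm.conf
    update-initramfs -u                                    # so the array assembles correctly at boot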

  • Hi


    Code
    INACTIVE-ARRAY


    I don't know if you can switch the array out of INACTIVE mode.


    My configuration :



    AMD Ryzen 5 2400G on Asus TUF B450M-PLUS - 16 GB RAM - 3 x 3 TB RAID5 on LSI MegaRAID SAS 9260-8i and 3 SSDs in a Fractal Design Node 804 Black
    OS: OMV 6.3.2-1 (Shaitan)

  • Official Post

    in the few moments I wasn't looking he pulled the SATA cables out of the drives

    :D There's the problem: if you look at mdadm.conf it reports the array as md0, whereas the output of mdadm --detail and /proc/mdstat shows it as md127.


    There are two things you need to do: mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcd] and omv-mkconf mdadm. However, I don't know in what order, as I've not seen this before.
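

    For reference, a sketch of those steps as commands (order uncertain, as said; if the assemble complains that members are busy, stopping the inactive array first may be needed):


    Code
    # Sketch only, not a guaranteed procedure.
    mdadm --stop /dev/md127                                       # only if members report "busy"
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcd]   # reassemble from the four members
    cat /proc/mdstat                                              # confirm the array is active again
    omv-mkconf mdadm                                              # let OMV rewrite mdadm.conf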

  • Mr Geaves


    Thank goodness! I first had to run the command [mdadm --stop /dev/md127].


    Before that it said: /dev/sdd is busy - skipping
    mdadm: Found some drive for an array that is already active: /dev/md/Raid5
    mdadm: giving up.


    I almost gave up, same as mdadm did above :O




    Lesson learned the hard way for me: backups, and watch the boy!


    Thank you so much for your help!
