RAID 5 Array missing after reinstall of OMV 3.099

  • Hello,


    About two months ago, one of the five 4 TB drives in my RAID array failed. I replaced the drive, the array rebuilt, and everything worked fine. A week or so later, I started getting intermittent "Spares missing" errors. This was odd, but the errors went away before I could figure out what was causing them. A few weeks later, I started having trouble with the OMV web GUI. Every time I tried to change any parameter, I would get the message "The configuration has been changed. You must apply the changes in order for them to take effect", hit the Apply button, and then get this error:


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; omv-mkconf postfix 2>&1' with exit code '127': /usr/share/openmediavault/mkconf/postfix: 57: /etc/default/openmediavault: 4c8bc97f-1a07-4e3b-ba6b-7f3aea225fd1: not found


    I also lost the ability to run updates from the GUI and could no longer SSH in. The RAID was still working, so I backed up the data and the system, disconnected the RAID disks, and did a reinstall of OMV. After the reinstall, the RAID array didn't show up in the GUI. All the disks are there, but there is no option to mount. I'm at a loss as to what to do next. I'd really like to get the RAID mounted rather than restore the data from backup (which takes a long time). I hope someone has an idea for restoring the RAID.
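
    From what I've read so far, a first step might be to read the md superblocks on the member disks before changing anything. This is only a sketch, assuming the five data disks are /dev/sdb through /dev/sdf as in the output below:

    mdadm --examine /dev/sd[b-f]        # dump the md superblock of each member disk
    cat /proc/mdstat                    # what the kernel has (or has not) assembled
    mdadm --assemble --scan --verbose   # try to assemble known arrays from on-disk metadata

    If --examine shows all five members with the same array UUID, the data should still be intact and the array would just need to be assembled.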


    Disk DATA:
    _______________________
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
    _______________________
    blkid
    /dev/sda1: UUID="df5470bd-d363-48c2-89a4-77b5f082db99" TYPE="ext4" PARTUUID="d2f4adb0-01"
    /dev/sda5: UUID="e7683669-89eb-4952-89bb-2a7e76aa7e63" TYPE="swap" PARTUUID="d2f4adb0-05"
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    _______________________
    fdisk -l | grep "Disk "
    Disk /dev/sda: 59.6 GiB, 64023257088 bytes, 125045424 sectors
    Disk identifier: 0xd2f4adb0
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    _______________________
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # definitions of existing MD arrays
    _______________________
    mdadm --detail --scan --verbose
    (no output for above)
    _______________________


    * All disks are 4 TB SATA: four HGST and one WD Red
    * The RAID stopped working after the reinstall of OMV


    Thanks!



    • Official post

    I can't say what the problem is (it looks like it's complaining that postfix is not installed, but I didn't think the web UI worked at all without postfix), but... why did you install OMV 3? Numerous people are having problems with it when they run standard updates.


    Your best course of action is to repeat the install, but with OMV 4.

  • Thank you, KM0201, for the reply. I reinstalled OMV 3 because I thought it would be best not to upgrade before resolving the RAID problem. The reinstall did fix the problem with the web GUI.


    I'll install OMV 4, give it a try, and see if that helps.


    Thanks!

  • I installed OMV 4.1.22, then shut down, reconnected the RAID drives, and rebooted. It looks promising; see below.
    Note that mdadm --detail --scan now shows INACTIVE-ARRAY /dev/md127; however, the array does not show up in the RAID Management GUI.
    Hopefully all it needs is to be assembled or mounted? If so, how do I go about it? (See the sketch after the disk data below.)


    Thanks!


    Disk DATA:


    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
    19534437560 blocks super 1.2
    unused devices: <none>
    _________________
    blkid
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
    /dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"
    _________________
    root@NAS:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
    Disk identifier: 0xd73019f2
    _________________
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # definitions of existing MD arrays
    _________________
    mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
    _________________
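
    From what I've found so far, the usual way to bring an inactive array like this back online seems to be to stop the auto-assembled md device and reassemble it from its members. This is only a sketch of what I think the commands would be, assuming the members are /dev/sdb through /dev/sdf and the --examine output shows matching event counts; I have not run it yet:

    mdadm --stop /dev/md127                    # release the inactive array
    mdadm --assemble /dev/md127 /dev/sd[b-f]   # reassemble it from the five member disks
    cat /proc/mdstat                           # should now list md127 as an active raid5 array

    If mdadm refuses to assemble because of slightly mismatched event counters, adding --force to the assemble command is sometimes suggested, but apparently only after reviewing the --examine output for each disk.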

  • Hello,


    With the help of KM0201, I was able to get closer to restoring my RAID array (see the info in the post directly above). However, being a noob, I'm unsure of the exact steps to restore my RAID. I would very much appreciate some help determining what steps to take next.
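
    From other threads, it looks like the remaining steps after the array assembles as active are to record it in mdadm.conf so it survives a reboot, and then mount the filesystem. A tentative sketch, which I have not run yet (the web-UI path may differ between OMV versions):

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the ARRAY definition for md127
    update-initramfs -u                              # so the array is assembled at boot
    blkid /dev/md127                                 # confirm the filesystem type on the array

    After that, I assume the filesystem would be mounted from the web UI (Storage -> File Systems -> select the device -> Mount) so OMV records it in its own configuration database. Is that the right sequence?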


    Thanks.
