About two months ago, one of the five 4 TB drives in my RAID array failed. I replaced the drive, the array rebuilt, and everything worked fine. A week or so later, I started getting intermittent "Spares missing" errors. This was odd, but the errors went away before I could figure out what was causing them. A few weeks later, I started having trouble with the OMV web GUI. Every time I tried to change any parameter, I would get the message "The configuration has been changed. You must apply the changes in order for them to take effect", hit the Apply button, and then get this error:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; omv-mkconf postfix 2>&1' with exit code '127': /usr/share/openmediavault/mkconf/postfix: 57: /etc/default/openmediavault: 4c8bc97f-1a07-4e3b-ba6b-7f3aea225fd1: not found
I also lost the ability to run updates from the GUI and could no longer SSH in. The RAID was still working, so I backed up the data and the system, disconnected the RAID disks, and reinstalled OMV. After the reinstall, the RAID array doesn't show up in the GUI: all the disks are there, but there's no option to mount. I'm at a loss as to what to do next. I'd much rather get the RAID mounted than restore the data from backup (that takes a long time). I hope someone has an idea how to restore the RAID. Here is the diagnostic output I've gathered so far:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
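Since /proc/mdstat shows no assembled arrays at all, I figure the first thing to check is whether the superblocks on the member disks survived. If I'm reading the mdadm man page right, this should dump them (the array uses whole disks, not partitions):

# dump the md superblock from each member disk
mdadm --examine /dev/sd[bcdef]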
blkid
/dev/sda1: UUID="df5470bd-d363-48c2-89a4-77b5f082db99" TYPE="ext4" PARTUUID="d2f4adb0-01"
/dev/sda5: UUID="e7683669-89eb-4952-89bb-2a7e76aa7e63" TYPE="swap" PARTUUID="d2f4adb0-05"
/dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
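All five data disks still show up as linux_raid_member with the same array UUID and the NAS:Raid label, so the superblocks look intact. Based on that, my guess at a manual assemble would be something like the following; I haven't tried it yet because I don't want to risk making things worse:

# try to assemble the array from the five whole-disk members
mdadm --assemble --verbose /dev/md0 /dev/sd[bcdef]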
fdisk -l | grep "Disk "
Disk /dev/sda: 59.6 GiB, 64023257088 bytes, 125045424 sectors
Disk identifier: 0xd2f4adb0
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
cat /etc/mdadm/mdadm.conf
# Please refer to mdadm.conf(5) for information about this file.
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
# definitions of existing MD arrays
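As far as I can tell, the reinstalled /etc/mdadm/mdadm.conf has no ARRAY definition at all, which I assume is why nothing assembles at boot. If I understand the mdadm.conf(5) format correctly, the missing line would look roughly like this (UUID regrouped from the blkid output above; the metadata version is my guess based on the NAS:Raid name):

ARRAY /dev/md0 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf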
mdadm --detail --scan --verbose
(the above command produced no output)
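From what I've read, mdadm --detail --scan only reports arrays that are actually running, so I assume it will stay empty until the array is assembled. Once it does assemble, my plan (the standard Debian approach, as I understand it) would be to persist the definition and rebuild the initramfs:

# append the running array's definition to the config
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so the array assembles at boot
update-initramfs -u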
* All disks are 4 TB SATA: four HGST and one WD Red
* The RAID stopped working after the reinstall of OMV
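If the assemble works, I'd want to mount the filesystem read-only first and check the data before letting OMV touch anything (the mount point below is just a placeholder I made up):

# verify the filesystem without risking any writes
mkdir -p /mnt/raidcheck
mount -o ro /dev/md0 /mnt/raidcheck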