I currently have an issue where, after adding a new drive to my RAID 6 and expanding it, the file system I had on the array is no longer working in OMV.
I made a full backup of all the data on the array to a separate computer before attempting the expansion (I've had these kinds of issues before), but I would like to avoid the headache and time delay of restoring from backup if possible.

For some reason the RAID got confused when adding the drive: it lost contact with an existing drive in the process and now thinks it has lost two drives.
I have waited 4 days for it to finish reshaping onto the new drive, and it is currently in the drive recovery phase.

I plan to let it finish the recovery before trying to address the file system issue.
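For anyone following along, I've been watching the rebuild with `watch cat /proc/mdstat`. The snippet below just pulls the percentage and ETA out of the mdstat progress line (the sample string is copied from the output I pasted further down; against a live system you'd pipe `/proc/mdstat` into the same greps):

```shell
# Sample recovery line copied from my /proc/mdstat output below
mdstat_line='recovery = 18.1% (1416663108/7813894144) finish=2175.7min speed=49003K/sec'

# Extract percent complete and the estimated time to finish
pct=$(echo "$mdstat_line" | grep -o 'recovery = [0-9.]*%')
eta=$(echo "$mdstat_line" | grep -o 'finish=[0-9.]*min')
echo "$pct  $eta"
```

At ~49 MB/s over 8 TB members, the ~36 hour estimate it reports seems about right.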
The file systems tab in OMV shows that the FS still exists, but it is reported as missing / not found.

If I try to show more details, it waits a long time and then comes up with an error.

Trying to mount it as if it were a new drive doesn't work either; OMV doesn't give me an option to remount it.
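In case a manual mount gets suggested: my understanding is that OMV mounts data file systems by UUID, so I pulled the file system UUID out of the `blkid` line for `/dev/md0` (pasted further down). The mount-point naming here is just my guess at OMV's convention, so treat it as a sketch:

```shell
# blkid line for the array, copied from the output below
blkid_line='/dev/md0: UUID="2355b0f0-4f07-4560-936f-b7b0f4e74b6f" BLOCK_SIZE="4096" TYPE="ext4"'

# Extract the filesystem UUID so the array can be mounted by UUID, not device name
fs_uuid=$(echo "$blkid_line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
echo "$fs_uuid"

# What I assume the fstab entry (or manual mount target) would look like:
echo "UUID=$fs_uuid /srv/dev-disk-by-uuid-$fs_uuid ext4 defaults,nofail 0 2"
```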

I'm not sure where to start trying to troubleshoot this.
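Once the recovery completes, here's the set of non-destructive checks I'm planning to run first (the device path is from my system, and the block guards against running on a machine without the array; happy to be corrected on the order):

```shell
DEV=/dev/md0    # the array device on my system
if [ -b "$DEV" ]; then
    # Read-only checks, in order of increasing depth:
    mdadm --detail "$DEV"     # member states, failed/spare counts
    fsck.ext4 -n "$DEV"       # -n answers "no" to every prompt, so nothing is written
    mount -o ro "$DEV" /mnt   # read-only mount attempt to see if the data is reachable
    status="checked"
else
    status="array not present on this machine"
fi
echo "$status"
```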
The system is a custom-built server running Proxmox, with OMV running as a virtual machine to act as the RAID manager.
The HDDs are passed through directly to the OMV VM: five 8 TB Seagate Barracudas (yes, not ideal for a NAS/RAID, but it's where my budget is at).
root@omv-main:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdf[6] sdb[0] sde[5] sdc[3] sdd[1]
23441682432 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/3] [UU_U_]
[===>.................] recovery = 18.1% (1416663108/7813894144) finish=2175.7min speed=49003K/sec
bitmap: 0/59 pages [0KB], 65536KB chunk
unused devices: <none>
root@omv-main:~# blkid
/dev/sdf: UUID="46adc905-340c-0d45-14c7-713a61913d82" UUID_SUB="f873a232-74e5-8f73-d27b-742f185b26bb" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sdd: UUID="46adc905-340c-0d45-14c7-713a61913d82" UUID_SUB="2efbbc23-2b02-34a2-fc6f-759c6bb5d013" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sdb: UUID="46adc905-340c-0d45-14c7-713a61913d82" UUID_SUB="011ac530-9fbe-8f97-8a40-04dd3a770295" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sr0: BLOCK_SIZE="2048" UUID="2024-02-17-11-39-02-00" LABEL="openmediavault 20240217-12:39" TYPE="iso9660" PTUUID="35659091" PTTYPE="dos"
/dev/sde: UUID="46adc905-340c-0d45-14c7-713a61913d82" UUID_SUB="14356ed5-d411-a762-51e2-063f43b5cf8a" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sdc: UUID="46adc905-340c-0d45-14c7-713a61913d82" UUID_SUB="1ee6b00e-4b3c-7c23-cd98-2d1887634a16" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/sda5: UUID="fee293ad-128d-4a5b-9d70-485cb6d3d5dc" TYPE="swap" PARTUUID="8dc0e770-05"
/dev/sda1: UUID="718ac16e-c747-4f3f-bf42-8222b8998950" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="8dc0e770-01"
/dev/md0: UUID="2355b0f0-4f07-4560-936f-b7b0f4e74b6f" BLOCK_SIZE="4096" TYPE="ext4"
root@omv-main:~# fdisk -l | grep "Disk "
Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Disk model: QEMU HARDDISK
Disk identifier: 0x8dc0e770
Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: QEMU HARDDISK
Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: QEMU HARDDISK
Disk /dev/sde: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: QEMU HARDDISK
Disk /dev/sdf: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: QEMU HARDDISK
Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: QEMU HARDDISK
Disk /dev/md0: 21.83 TiB, 24004282810368 bytes, 46883364864 sectors
root@omv-main:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 spares=1 name=openmediavault:0 UUID=46adc905:340c0d45:14c7713a:61913d82
root@omv-main:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid6 num-devices=5 metadata=1.2 spares=2 name=openmediavault:0 UUID=46adc905:340c0d45:14c7713a:61913d82
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf