I replaced one failed 2TB drive and one failing 2TB drive with two new 4TB drives, one at a time, and grew the array in OMV, but the capacity did not change.
cat /proc/mdstat produces:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sdb sdc sdd sde sda
7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
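As a sanity check (my own arithmetic, not output from any tool): the mdstat block count is in 1 KiB units, and 7813537792 KiB works out to about 7.28 TiB, i.e. 8 TB — exactly the usable space of a 5-drive RAID5 whose members are all treated as 2 TB. So the array still seems to be sized for 2 TB devices:

```shell
# Convert the mdstat block count (1 KiB blocks) to TiB and TB.
# 7813537792 is copied from /proc/mdstat above.
blocks=7813537792
awk -v b="$blocks" 'BEGIN {
  printf "usable space: %.2f TiB (%.2f TB)\n", b/(1024^3), b*1024/1e12
}'
# → usable space: 7.28 TiB (8.00 TB)
```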
After a reboot, the array went missing in OMV. I booted the OMV 0.4 installation media in recovery mode to try putting everything back together from the shell:
sudo mdadm --assemble --verbose --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm reported two of the drives as busy, so I had to stop them first. A retry then successfully assembled the array with 4 of the 5 drives.
I added the 5th drive in OMV and rebuilt the array but the size remained the same.
I then replaced another drive in the same fashion, as OMV reported it was about to fail. Again the array size remained the same.
I now have four 4TB drives and one 2TB drive in the array, which is marked "clean."
OMV is running on a separate 60GB SSD.
The array only presents a file system of 7.28 TB. Is there a way to grow the array and the file system to take advantage of the extra disk space?
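For reference, here is the sequence I believe applies, pieced together from the mdadm and resize2fs man pages — I have not run it yet, and the device name /dev/md127 and an ext4 file system are assumptions from my setup. My understanding is that the remaining 2 TB member caps every device's usable size, so it would have to be replaced first. Sketched as a dry-run script that only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: print each step instead of executing it.
# Assumptions: array is /dev/md127, file system is ext4.
run() { echo "would run: $*"; }

# 1. Once the last 2TB drive has also been replaced with a 4TB one
#    (and the resync has finished), tell md to use all available
#    space on every member:
run mdadm --grow /dev/md127 --size=max

# 2. After the grow/resync completes, enlarge the file system to
#    fill the array (ext4 supports online growing):
run resize2fs /dev/md127
```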