I had the same issue. I had been running OMV 2, and in order to upgrade to OMV 4 I disconnected all data disks as the manual instructs and performed a fresh install. OMV booted with no issues, but when I reconnected the data disks (RAID5) I got stuck in the (initramfs) console. At that point I remembered that I had forgotten to edit /etc/mdadm/mdadm.conf; before the re-install it had
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:Storage UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
at the bottom (I have replaced my UUID value with x-es here). So I disconnected all the drives again, booted, put the ARRAY line back into mdadm.conf, and ran
> update-initramfs -u
(-u for update) and
> update-grub
for good measure. Then I shut down, connected all the drives, and booted up. The problem was gone and the RAID is up and running (you do have to mount the filesystem that lives on the RAID through the OMV interface, though).
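For reference, the whole fix as one sequence. This is just a sketch of what I did; the ARRAY line uses my array name and placeholder x-es for the UUID, so substitute your own values:
> echo 'ARRAY /dev/md0 metadata=1.2 name=openmediavault:Storage UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx' >> /etc/mdadm/mdadm.conf
> update-initramfs -u
> update-grub
The update-initramfs step is the important one: the initramfs carries its own copy of mdadm.conf, which is why editing the file alone isn't enough.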
Note: I wasn't sure whether the UUID lives in the superblock or only comes from this config, but I'm pretty sure the name parameter should match the name of your RAID array.
I googled mdadm.conf; the man page says:
uuid=
The value should be a 128 bit uuid in hexadecimal, with punctuation interspersed if desired. This must match the uuid stored in the superblock.
name=
The value should be a simple textual name as was given to mdadm when the array was created. This must match the name stored in the superblock on a device for that device to be included in the array. Not all superblock formats support names.
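So both values do live in the superblock of every member device. If you want to see what is actually stored there, you can examine any one member (the device name /dev/sdb is just an example, use one of your RAID disks):
> mdadm --examine /dev/sdb
and look for the Array UUID and Name lines in the output.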
So it's best to keep a copy of the original mdadm.conf to paste the ARRAY line from; otherwise you have to pull that information back out of the RAID array itself.
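If you don't have the old config, mdadm can reconstruct the line for you. As far as I know this works even when the array isn't assembled, since it reads the superblocks directly:
> mdadm --examine --scan
prints ready-made ARRAY lines for every array it finds, which you can append to /etc/mdadm/mdadm.conf before running update-initramfs -u.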