I actually just tried the same thing in a VM without encryption and got the same result, so it's probably unrelated to encryption.
As a pointer: OMV's software RAID (mdadm) is not hot-swappable like hardware RAID. If a drive has died, it gets removed from the array and the array is displayed as clean/degraded. Add a new drive, wipe it, then use Recover under Raid Management.
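On the command line, that replacement procedure looks roughly like this (a sketch only; /dev/md0 and /dev/sdX are assumed names, substitute your actual array and replacement disk):

```shell
# Assumptions: the array is /dev/md0 and the replacement disk is /dev/sdX.
# Check the array state; after a failure it should report clean, degraded.
mdadm --detail /dev/md0

# Wipe any leftover metadata from the replacement disk (destructive!).
wipefs --all /dev/sdX

# Add the disk; mdadm starts rebuilding onto it automatically.
mdadm --manage /dev/md0 --add /dev/sdX

# Watch the rebuild progress.
cat /proc/mdstat
```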
Can you simulate a drive failure? -> No; if you 'pull' a drive from a running array, the RAID becomes inactive.
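What mdadm does support is failing a member in software rather than physically pulling it, which keeps the array active but degraded. A sketch, again assuming /dev/md0 and a member disk /dev/sdX:

```shell
# Assumptions: array /dev/md0, member disk /dev/sdX.
# Mark the member as faulty instead of physically pulling it.
mdadm --manage /dev/md0 --fail /dev/sdX

# The array should now show as clean, degraded.
mdadm --detail /dev/md0

# Remove the 'failed' member, then re-add it to trigger a rebuild.
mdadm --manage /dev/md0 --remove /dev/sdX
mdadm --manage /dev/md0 --add /dev/sdX
```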
What I don't understand is this: let's imagine one of the drives is completely ruined while the server is off, to the point where it isn't even recognised by the BIOS, and all the other drives are OK. How is that different from pulling out a drive?
I'm supposed to be able to rebuild the array with another drive. The array becomes inactive, which makes sense; I suppose that's why it no longer appears in the GUI.
When I dug into it on the command line, I was able to confirm the array is indeed inactive, but it is reported as a 4-disk RAID0 instead of a 5-disk RAID5:
Raid Level : raid0
Total Devices : 4
I'm kind of lost.
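For reference, when an inactive array reports the wrong level and device count like this, the per-disk metadata usually still holds the real configuration, and the array can often be stopped and force-reassembled from the surviving members. A sketch under assumed names (/dev/md0 for the array, /dev/sd[b-f] for the members):

```shell
# Assumptions: the array is /dev/md0 with members /dev/sdb through /dev/sdf.
# Examine the on-disk superblocks; these show the real RAID level and
# device count even when the half-assembled array reports wrong values.
mdadm --examine /dev/sd[b-f]

# Stop the inactive array, then reassemble it from the surviving members.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[b-f]

# Verify the result.
cat /proc/mdstat
```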