SSH into your box and check that /dev/md0 exists. If it does not, but a /dev/mdxxx device exists instead, it might be the same issue as in this thread:
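A quick way to see which name the kernel actually assembled the array under, and whether it matches the config, sketched in shell (paths are the Debian defaults):

```shell
# List any md devices the kernel has created
ls -l /dev/md*

# Show assembled arrays and their sync status
cat /proc/mdstat

# Compare against the array names recorded in the mdadm config
grep ^ARRAY /etc/mdadm/mdadm.conf
```

If /proc/mdstat shows md127 while mdadm.conf declares md0, the array was auto-assembled from stale initramfs data rather than from the config file.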
RAID does not stay!
Posts by ozomv
The above issue with RAID 1 not surviving a reboot was reported via the Mantis bug tracking system and has been resolved in version 1.0.22.
For details see: http://bugtracker.openmediavault.org/view.php?id=1101
I had a similar thing happen to me - a newly created RAID 1 device did not survive a reboot.
I had just built a new OMV system (1.0.20 - x64) with 2 × 3 TB disks in RAID 1.
I created the RAID 1 setup, which started OK and began the sync process, all as it should. After a reboot during the sync process, the RAID device /dev/md0 turned into /dev/md127. Hence the array previously created could not start up and continue to sync.
I knew that this can happen if the initramfs is not updated after the initial RAID build.
I deleted the md127 device and re-created the md0 device. When I manually triggered the initramfs update, I noticed an error message pointing to a file in /usr/share/doc/ called mdadm/README.upgrading-2.5.3.gz. The file did not exist in the OMV file system, so after a bit of searching on the net I found it. It described that the issue is related to the file /var/lib/mdadm/CONF-UNCHECKED. I located that file in the OMV file system and renamed it.
Once this file was renamed, the initramfs update worked without complaint and used the /etc/mdadm/mdadm.conf file as it should.
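For reference, the recovery steps above can be sketched roughly like this (run as root; /dev/sda and /dev/sdb are placeholders for the actual member disks, and the .old suffix is just one way to rename the marker file):

```shell
# Stop the wrongly named array
mdadm --stop /dev/md127

# Re-assemble it under the intended name
# (/dev/sda and /dev/sdb are placeholders for the real member disks)
mdadm --assemble /dev/md0 /dev/sda /dev/sdb

# Rename the marker file that prevents mdadm.conf from being used
mv /var/lib/mdadm/CONF-UNCHECKED /var/lib/mdadm/CONF-UNCHECKED.old

# Rebuild the initramfs so the md0 name survives the next reboot
update-initramfs -u
```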
After a reboot, the /dev/md0 device was now retained - as it should be.
However, the re-syncing still did not work. The array was sitting there with sync status "Pending".
A 'mdadm --readwrite /dev/md0' fixed that - starting the re-sync process.
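To confirm the array actually left the read-only state and is syncing, something like the following can be used:

```shell
# Switch the array out of the read-only state so the resync can start
mdadm --readwrite /dev/md0

# Check the array state ("resyncing" rather than "readonly"/"Pending")
mdadm --detail /dev/md0 | grep -i state

# Watch the resync progress percentage
cat /proc/mdstat
```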
All appears OK for now. This seems to be a 'feature' of Debian. The above is a workaround but not really a solution. Perhaps I missed a step when creating the initial RAID 1 via the web GUI?