I recently built a NAS with six hard drives, one of which is used for parity. I installed OMV 2.x on Debian Wheezy, wiped the disks, and formatted each of them with ext4. I then set up aufs with SnapRAID. Everything worked normally until I performed a hard reset on the system. When it rebooted, the five data/content drives (not the parity drive) could not be mounted and had corrupted filesystems.
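For reference, my SnapRAID configuration looked roughly like this (mount points and disk names are placeholders from memory, not the exact paths I used):

```
# /etc/snapraid.conf -- sketch from memory; paths are placeholders
parity /media/parity/snapraid.parity

content /var/snapraid/snapraid.content
content /media/disk1/snapraid.content

disk d1 /media/disk1
disk d2 /media/disk2
disk d3 /media/disk3
disk d4 /media/disk4
disk d5 /media/disk5
```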
Because this was a new setup, I decided to start from scratch. I looked for errors in the logs and couldn't find anything. I doubt all five hard drives are going bad at once (they are only a month old). After reinstalling and reformatting the drives in the same manner, my system is up and running once again.
However, I would like to know if there is a known cause for this. I've seen other threads describing a similar problem with other RAID setups. Is there a way to prevent it from happening again? For example, should I add a delay before the drives are mounted? This time I also set a five-minute spindown on the drives, though I'm not sure whether spindown itself could cause problems. Any advice to prevent this from happening in the future would be much appreciated. Thank you!
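In case it matters, the five-minute spindown I set corresponds to something like this in /etc/hdparm.conf on Debian (the device path is just an example; with hdparm, -S/spindown_time values 1-240 are multiples of 5 seconds, so 60 × 5 s = 300 s = five minutes):

```
# /etc/hdparm.conf -- spindown sketch; device path is an example
/dev/sda {
    spindown_time = 60
}
```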