Hi everybody,
I'm trying to test OMV in a Virtual Machine with a RAID1 configuration and simulate a drive failure to check the results.
1- I have three virtual hard disks:
2- I've created a RAID 1 (mirror) array with the hard disks /dev/sda and /dev/sdb.
The name of the array is OMV-NAS:RAID1.
Part of the output of "cat /proc/mdstat" is:
3- I've formatted the array with ext4 and mounted the file system. Then I shared some folders over NFS and wrote some data without any problem.
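For reference, the shell equivalent of steps 2 and 3 would be roughly the following. This is only a sketch of what the OMV web UI does for me; the label and mount point are placeholders I chose, not the names OMV really uses:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --name=RAID1 /dev/sda /dev/sdb
cat /proc/mdstat                # watch the initial sync
mkfs.ext4 -L RAID1 /dev/md0     # label is just an example
mkdir -p /srv/raid1             # placeholder mount point
mount /dev/md0 /srv/raid1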
4- To check the RAID setup, I've simulated a drive failure in software:
So /dev/sdb is the faulty disk.
The system log reports the problem, and the state of the RAID changes from ACTIVE to CLEAN, DEGRADED.
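In case the exact commands matter, a software failure like this can be simulated with the standard mdadm commands, something like:

mdadm --manage /dev/md0 --fail /dev/sdb     # mark /dev/sdb as faulty
mdadm --manage /dev/md0 --remove /dev/sdb   # remove it from the array
cat /proc/mdstat                            # the mirror now shows as degraded [U_]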
5- I shut down the system and added a new hard disk, /dev/sdd (2 GB), to replace the faulty one.
6- I noticed that the RAID now appears as two NEW devices:
/dev/md127 clean,degraded /dev/sdb
/dev/md126 clean,degraded /dev/sdc
And the ORIGINAL /dev/md0 has disappeared --> I don't know if this situation is correct.
In addition, the device name of the mounted filesystem has changed to /dev/md126.
However, if I run the mount command, it reports that the mounted device is /dev/md127.
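In case it helps to diagnose this, I can compare the two devices with standard mdadm/blkid commands, for example:

cat /proc/mdstat            # which md devices exist and which disks they use
mdadm --detail /dev/md126   # UUID, name and member disks of each array
mdadm --detail /dev/md127
blkid                       # which md device actually carries the ext4 filesystem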
Is this a normal situation, or am I missing something?
And if it is normal, how can I recover the ARRAY? Do I have to click the RECOVER button on the /dev/md126 device, which is associated with hard drive /dev/sdc?
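Or, alternatively, would the command-line equivalent be something like the following? (This assumes /dev/md126 really is the array that holds the data and /dev/sdd is the new empty disk; please correct me if that's wrong.)

mdadm --manage /dev/md126 --add /dev/sdd   # add the new disk, the resync should start
cat /proc/mdstat                           # watch the rebuild progress
# afterwards, regenerate /etc/mdadm/mdadm.conf (mdadm --detail --scan) and run
# update-initramfs -u so the array keeps a stable name after reboot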
Thank you,
PS: Sorry, but my English is not good enough.