Guys, I got the following emails from my OMV box:
A DegradedArray event had been detected on md device /dev/md0.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0] sdb[4] sdd[3] sdf[2](F) sdc[1]
3907041280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
unused devices: <none>
This is an automatically generated mail message from mdadm
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdf.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda[0] sdb[4] sdd[3] sdf[2](F) sdc[1]
3907041280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
[>....................] check = 0.0% (4/976760320) finish=1017458.6min speed=4K/sec
unused devices: <none>
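For anyone reading along: the bracketed fields in that mdstat line already tell the story. A small sketch that decodes them from a pasted copy of the line (not the live /proc/mdstat, so it runs anywhere):

```shell
# Status line as pasted above. [5/4] = 5 member slots, 4 active;
# [UU_UU] = the third slot (the "_", here /dev/sdf's slot) is down.
line='3907041280 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]'

# Pull out the slot/active counts
counts=$(echo "$line" | grep -o '\[[0-9]*/[0-9]*\]')
echo "member slots/active: $counts"
```

The `(F)` after `sdf[2]` in the device list means mdadm itself has already marked that member as failed.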
I have a RAID5 with 5 disks. In the UI, the state is "clean, degraded". Does this mean a drive is bad?
In the "Physical Disks" area, OMV is showing serial numbers for all disks, except one (/dev/sdf). Could that be the drive with the problem?
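One way to confirm from the command line, assuming smartmontools is installed (a sketch, not OMV-specific; substitute your own device name):

```shell
# Query identity and health of the suspect disk (assumes smartmontools is installed).
# A drive that no longer reports its serial number to the UI often fails these too.
smartctl -i /dev/sdf    # model and serial number, if the drive still responds
smartctl -H /dev/sdf    # overall SMART health self-assessment
# Key pre-failure attributes to look at in the full attribute dump:
smartctl -A /dev/sdf | grep -Ei 'reallocated|pending|uncorrect'
```

If `smartctl -i` times out or returns no serial at all, that by itself points at the hardware.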
Is there anything else I can do to figure out what the problem is? One more thing: once the drive is identified, what are the steps to replace it in OMV?
EDIT: I am not using hardware RAID, just OMV (software) RAID. From reading about it carefully, it does seem the problem is /dev/sdf. What are the steps to follow when I get the replacement drive?
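For what it's worth, the usual mdadm-level sequence for swapping a failed member looks roughly like this (a sketch; the device names are assumptions, and on OMV the remove/add steps can normally also be done from the RAID Management page instead):

```shell
# 1. Mark the dying member failed (likely unnecessary here, since the (F)
#    flag shows mdadm already did it)
mdadm /dev/md0 --fail /dev/sdf
# 2. Remove it from the array
mdadm /dev/md0 --remove /dev/sdf
# 3. Power down, physically swap the drive, boot, and confirm the new
#    disk's device name first -- it may not come back as sdf
# 4. Add the replacement; the array then starts rebuilding onto it
mdadm /dev/md0 --add /dev/sdf
# 5. Watch the rebuild progress
grep -E 'recovery|resync' /proc/mdstat
```

Expect the rebuild of a ~1 TB member to take hours; the array stays usable but degraded until it finishes, and `mdadm --detail /dev/md0` gives a fuller status view at any point.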
EDIT: I rebooted OMV, and now /dev/sdf is not shown anywhere in "Physical Disks". The RAID5 is still "clean, degraded".