I have a 4x 3TB RAID 5 array which has been running for a long time now. It's set up as N+1, i.e. standard RAID 5 with one redundant device.
Last week one disk failed. I went to order a new one immediately, but I missed the delivery on Saturday, so I had to wait until Monday.
I couldn't believe my eyes when I got a second email from OMV on Sunday night telling me that a second disk had failed!
So according to OMV I'm now in a 2-out-of-4 operational disk situation. I thought to myself: here we go, now it will take ages to restore the backup...
But to my big surprise, the RAID is still mounted and I can even open files.
root@nas:/mnt/raid5/MOVIES# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md127 8.1T 6.3T 1.9T 77% /mnt/raid5
I'm confused: how is this possible?
root@nas:/mnt/raid5/MOVIES# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdf[0] sde[4] sdd[5](F) sdb[1](F)
8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [U__U]
I haven't restarted the NAS yet, but what I'm seeing makes no sense to me. Is it possible that this is a bug?
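In case it helps, these are the commands I'm planning to run next to gather more detail before touching anything (assuming /dev/md127 and the member names shown in mdstat above are the right ones):

mdadm --detail /dev/md127    # full array state, per-device roles and event counts
smartctl -a /dev/sdb         # SMART data for the two members mdstat marks as (F)
smartctl -a /dev/sdd

Happy to post their output if that helps with the diagnosis.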
Thanks
rs232