I received this message from OMV:
This is an automatically generated mail message from mdadm
running on omv
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10]
md0 : active raid6 sde[2] sdd[0] sdc[1] sdb[4]
1464763392 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/4] [UUU_U]
bitmap: 4/4 pages [16KB], 65536KB chunk
unused devices: <none>
I saw that MD RAID had a problem with one of the disks: the [5/4] [UUU_U] line above means only four of the five members were active.
I logged in, but MD RAID said it was repairing itself, and it seems to have finished the repair on its own. I don't have anything important there that I don't have a copy of elsewhere.
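For reference, the rebuild can be watched with the standard commands (nothing OMV-specific):

cat /proc/mdstat          # shows a progress line like "recovery = 12.3% ..." while rebuilding
mdadm --detail /dev/md0   # State reads "clean, degraded, recovering" during a rebuild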
How is Recover different from Grow, and how does each of them work? I could only choose Recover while MD RAID was repairing itself.
In both options I can only select a disk, so maybe they differ in some other way, but I had no way of checking, as I don't have an additional disk or a free SATA connector on the motherboard. My guess at the underlying commands is below.
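My assumption (I haven't checked what the OMV buttons actually run) is that they wrap standard mdadm operations, roughly like this, with /dev/sdX as a placeholder for the selected disk:

# Recover: put a disk back into the degraded array and rebuild onto it
mdadm --manage /dev/md0 --add /dev/sdX

# Grow: add the disk as a spare, then reshape the array to one more member (5 -> 6)
mdadm --manage /dev/md0 --add /dev/sdX
mdadm --grow /dev/md0 --raid-devices=6

Since the array has an internal bitmap, re-adding the very same disk (mdadm --re-add) would only resync the blocks that changed while it was out.

Anyway, the array now reports: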
        Version : 1.2
  Creation Time : Sat Mar  1 15:03:32 2025
     Raid Level : raid6
     Array Size : 1464763392 (1396.91 GiB 1499.92 GB)
  Used Dev Size : 488254464 (465.64 GiB 499.97 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Dec 11 18:12:58 2025
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

Consistency Policy : bitmap

           Name : omv:0  (local to host omv)
           UUID : 82fc7d1a:98b1584f:d3a37380:2028a522
         Events : 7300

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8        0        1      active sync   /dev/sda
       2       8       64        2      active sync   /dev/sde
       3       8       80        3      active sync   /dev/sdf
       4       8       32        4      active sync   /dev/sdc
I have 5 drives in the MD RAID array again.
I know that one of them has bad sectors according to SMART.
Has the array actually been fixed?
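This is what I plan to check on the drive SMART flagged (/dev/sdX is a placeholder, since I don't know which letter it got after the reboot):

smartctl -a /dev/sdX        # watch Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -t short /dev/sdX  # run a short self-test; the result shows up in smartctl -a

As far as I understand, the rebuild only restores the array's redundancy; it does not repair the drive itself, so if those sector counts keep growing, the disk still needs to be replaced.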