Hello,
I have an OMV box with 8 × 12 TB disks in a RAID 5 configuration, and today I found problems on 3 of them.
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid5
Total Devices : 8
Persistence : Superblock is persistent
State : inactive
Working Devices : 8
Name : mrpink:0 (local to host mrpink)
UUID : 26eed30c:15f1e887:47b92a32:9c56dedb
Events : 152754
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 32 - /dev/sdc
- 8 112 - /dev/sdh
- 8 80 - /dev/sdf
- 8 48 - /dev/sdd
- 8 16 - /dev/sdb
- 8 128 - /dev/sdi
- 8 96 - /dev/sdg
sde and sdi are flagged "Device has a few bad sectors" (yellow alert), and sdf is flagged "Device is being used outside design parameters" (red alert).
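For what it's worth, I also looked at the raw SMART data from the shell; something like this should show the details (assuming smartmontools is installed, sde here is just one of the flagged disks, repeat for sdf and sdi):

smartctl -H /dev/sde   # overall SMART health verdict
smartctl -A /dev/sde   # attributes: Reallocated_Sector_Ct, Current_Pending_Sector, etc.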
I'm so so so happy...
/dev/md0 is down and I need to fix this mess.
This NAS only holds backups, so the data itself is not a problem, but I need to understand the correct procedure for replacing multiple disks.
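Before touching anything, I assume the first step is to check what the kernel and the on-disk superblocks report; something like this (plain mdadm, device names taken from the output above):

cat /proc/mdstat                # shows md0 as inactive and which member disks it currently sees
mdadm --examine /dev/sd[b-i]    # per-disk superblock info: event counts, array state, device roles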
1) remove sdf -> mdadm --manage /dev/md0 --remove /dev/sdf
2) shut down OMV
3) replace the broken disk with a new one
4) start OMV and add the new disk -> mdadm --manage /dev/md0 --add /dev/sdX
5) wait for OMV to finish rebuilding the array
6) go back to 1 and repeat for each broken disk (the same steps are sketched as commands below)
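In command form, this is what I think one pass of the procedure would look like (sdf and sdX are just placeholders for the disk being replaced; if I understand correctly, the array has to be assembled/active before --fail and --remove will work):

mdadm --manage /dev/md0 --fail /dev/sdf      # mark the disk as failed (if the kernel hasn't already)
mdadm --manage /dev/md0 --remove /dev/sdf    # remove it from the array
# power off, swap the physical disk, power on, then:
mdadm --manage /dev/md0 --add /dev/sdX       # add the replacement disk
cat /proc/mdstat                             # watch the rebuild/resync progress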
Is this correct?
Or is there a faster procedure?
I need a huge hug