I had a drive start throwing a lot of SMART errors for pending sectors, and eventually those sectors became unreadable. I replaced the drive and rebuilt the array. All is well, except that the RAID status is clean, degraded. All of the drives that should be in the array are showing up, but there is an 8th slot (the array only has 7 drives total) showing as removed. The bad drive was /dev/sdi; however, an external USB drive plugged into this system seems to have grabbed /dev/sdi now, and I'm not sure whether that's related to the issue.
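In case it's useful, I can post output showing which physical disk currently holds /dev/sdi and which disks carry the array's superblock. Something like the commands below should do it (this is just my reading of the man pages; exact column names may vary by distro):

    # Match /dev/sd* letters to physical disks by model/serial,
    # since the kernel can reshuffle letters between boots.
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Persistent names that don't change when letters move around.
    ls -l /dev/disk/by-id/

    # Check whether the disk now called /dev/sdi carries an md
    # superblock for this array (the UUID should match the one
    # in the detail output below).
    mdadm --examine /dev/sdi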
The clean, degraded state doesn't seem to have any negative impact, but I would like to get the array back to just clean, assuming that doesn't require another rebuild (if it does, I'll leave it as is).
Any suggestions?
Here's the output from OMV's 'detail' tab on the array.
        Version : 1.2
  Creation Time : Thu Feb 27 16:14:09 2014
     Raid Level : raid6
     Array Size : 17581590528 (16767.11 GiB 18003.55 GB)
  Used Dev Size : 2930265088 (2794.52 GiB 3000.59 GB)
   Raid Devices : 8
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Mon Apr 13 20:03:50 2015
          State : clean, degraded
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : leo:3TBARRAY (local to host leo)
           UUID : f552e7d5:bc610fde:36613907:b7c335f6
         Events : 85594

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       96        1      active sync   /dev/sdg
       9       8      112        2      active sync   /dev/sdh
       5       8       32        3      active sync   /dev/sdc
       4       8       80        4      active sync   /dev/sdf
       6       8        0        5      active sync   /dev/sda
       8       8       64        6      active sync   /dev/sde
       7       0        0        7      removed
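One thing I notice in that output: Raid Devices is 8 but Total Devices is 7, so the metadata still expects an 8th member even though only 7 disks exist. From my reading of the mdadm man page, the two ways to clear the degraded state would be to add an 8th disk into the empty slot, or to reshape the array down to 7 devices. Roughly (the /dev/md127 node name is a guess on my part, and as I understand it the --grow route is a full reshape that also shrinks capacity, so I'd want confirmation before trying either):

    # Option 1: add a disk into the empty 8th slot (triggers a resync)
    mdadm --manage /dev/md127 --add /dev/sdX

    # Option 2: reshape down to 7 raid devices; mdadm may require
    # shrinking the array size first and wants a backup file for
    # the critical section of the reshape
    mdadm --grow /dev/md127 --raid-devices=7 --backup-file=/root/md-backup

Does that match what others have seen, or is there a simpler way to get back to plain clean?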