Hi everyone,
A few days ago, the RAID in my OMV box disappeared. The details are in this post: RAID 5 array has vaporized! Advice needed please
Last night the RAID was rebuilding; it was accessible, and when I went to bed the rebuild was up to 80% complete. All looked good in my world.
Then I woke up this morning and found it was reporting the RAID as clean, FAILED. I checked the details and found this:
Code
mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Array Size : 7813523456 (7451.56 GiB 8001.05 GB)
Used Dev Size : 1953380864 (1862.89 GiB 2000.26 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Thu Apr 16 03:21:20 2015
State : clean, FAILED
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : OMV2:OMV
UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Events : 1816228
    Number   Major   Minor   RaidDevice   State
       5       8       64        0        active sync   /dev/sde
       6       8       80        1        active sync   /dev/sdf
       2       8       32        2        active sync   /dev/sdc
       3       0        0        3        removed
       4       0        0        4        removed

       3       8       48        -        faulty spare  /dev/sdd
       7       8       16        -        spare         /dev/sdb
Running blkid gave this:
Code
blkid
/dev/sda1: UUID="d81eacfe-439c-4e12-bbb2-a933e69d4dfa" TYPE="ext4"
/dev/sda5: UUID="6e724718-95ae-4e0c-9e17-a469c4a7627e" TYPE="swap"
/dev/sdc: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
/dev/sdd: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
/dev/sde: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
/dev/sdf: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
/dev/md127: LABEL="OMV" UUID="13a7164c-7be5-49e9-ab63-d704f96f890e" TYPE="ext4"
/dev/sdg: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
/dev/sdb: UUID="3e952187-f4e8-e08a-19b7-63a4cdc912c7" LABEL="OMV2:OMV" TYPE="linux_raid_member"
cat /proc/mdstat gave this:
Code
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sdb[7](S) sde[5] sdd[3](F) sdc[2] sdf[6]
7813523456 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [UUU__]
unused devices: <none>
Looking at each drive with mdadm --examine, I found this:
Code
root@OMV:/# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 174d04b9:c97d1665:84cd30bd:8ed0ecd1
Update Time : Thu Apr 16 11:02:19 2015
Checksum : 69e9846d - correct
Events : 1827322
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : AAA.. ('A' == active, '.' == missing)
root@OMV:/# mdadm --examine /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 099d5100:821d9963:776e9700:cc04ecbc
Update Time : Thu Apr 16 11:02:29 2015
Checksum : de1f4954 - correct
Events : 1827330
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA.. ('A' == active, '.' == missing)
root@OMV:/# mdadm --examine /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 0c21a9e5:9f039ea9:12ac15c7:c11a2e3c
Update Time : Thu Apr 16 03:21:11 2015
Checksum : 4f46ee71 - correct
Events : 1816220
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAAA ('A' == active, '.' == missing)
root@OMV:/# mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : ea732e82:10c69291:df0a29b0:7d4aad34
Update Time : Thu Apr 16 11:02:49 2015
Checksum : b548ab04 - correct
Events : 1827346
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA.. ('A' == active, '.' == missing)
root@OMV:/# mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8886034f:2e225de1:55989cc1:3ef2fea4
Update Time : Thu Apr 16 11:02:49 2015
Checksum : 53ad4ef9 - correct
Events : 1827346
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA.. ('A' == active, '.' == missing)
root@OMV:/# mdadm --examine /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Name : OMV2:OMV
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 3906762752 (1862.89 GiB 2000.26 GB)
Array Size : 15627046912 (7451.56 GiB 8001.05 GB)
Used Dev Size : 3906761728 (1862.89 GiB 2000.26 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : active
Device UUID : 44029c1f:b0a408e3:6d6d35c2:b5660be0
Update Time : Sun Mar 1 06:44:42 2015
Checksum : 614b8b1f - correct
Events : 153067
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing)
So I figured that if I added another drive as a spare, the RAID would start to rebuild, but it didn't.
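I added the new drive (it came up as /dev/sdi) with roughly the following; I'm going from memory here, so take the exact command as an approximation:
Code
# Add the new disk to the degraded array; mdadm takes it in as a spare
# and will only start rebuilding if enough active members remain.
mdadm --add /dev/md127 /dev/sdi
Afterwards, the array still looked like this: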
Code
mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Jan 1 13:44:03 2006
Raid Level : raid5
Array Size : 7813523456 (7451.56 GiB 8001.05 GB)
Used Dev Size : 1953380864 (1862.89 GiB 2000.26 GB)
Raid Devices : 5
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Thu Apr 16 21:09:24 2015
State : clean, FAILED
Active Devices : 3
Working Devices : 5
Failed Devices : 1
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 512K
Name : OMV2:OMV
UUID : 3e952187:f4e8e08a:19b763a4:cdc912c7
Events : 1833930
    Number   Major   Minor   RaidDevice   State
       5       8       64        0        active sync   /dev/sde
       6       8       80        1        active sync   /dev/sdf
       2       8       32        2        active sync   /dev/sdc
       3       0        0        3        removed
       4       0        0        4        removed

       3       8       48        -        faulty spare  /dev/sdd
       7       8       16        -        spare         /dev/sdb
       8       8      128        -        spare         /dev/sdi
(Continued)