RAID 5 setup with 6 drives. I had a drive fail and started to replace it, then noticed that another drive is giving me SMART errors. I swapped the dead drive and started the rebuild. It failed, probably because of the bad drive. OMV labeled the RAID as "Clean, Failed", set the new drive as a "spare", and labeled the bad drive as "faulty". My main question: am I out of luck, or is there any hope of coming back?
After the failed rebuild, I rebooted and of course the array failed to load, so I ran:
mdadm --stop /dev/md127
mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdfg]
Code
root@tronomv:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdfg]
mdadm: looking for devices for /dev/md127
mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
mdadm: /dev/sdd is identified as a member of /dev/md127, slot -1.
mdadm: /dev/sdf is identified as a member of /dev/md127, slot 4.
mdadm: /dev/sdg is identified as a member of /dev/md127, slot 5.
mdadm: forcing event count in /dev/sdb(1) from 74014 upto 77496
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdb
mdadm: Marking array /dev/md127 as 'clean'
mdadm: added /dev/sdb to /dev/md127 as 1
mdadm: added /dev/sdc to /dev/md127 as 2
mdadm: no uptodate device for slot 3 of /dev/md127
mdadm: added /dev/sdf to /dev/md127 as 4
mdadm: added /dev/sdg to /dev/md127 as 5
mdadm: added /dev/sdd to /dev/md127 as -1
mdadm: added /dev/sda to /dev/md127 as 0
mdadm: /dev/md127 has been started with 5 drives (out of 6) and 1 spare.
Code
root@tronomv:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid5 sda[0] sdd[6](S) sdg[5] sdf[4] sdc[2] sdb[1]
29301952000 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/5] [UUU_UU]
bitmap: 3/44 pages [12KB], 65536KB chunk
unused devices: <none>
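The array came up as active (auto-read-only). From what I understand, it will stay that way, and the rebuild onto the spare won't start, until the array is switched back to read-write. I haven't run this yet, but I believe this is the command to do that once I'm ready:

Code
# switch the array from auto-read-only back to read-write;
# recovery onto the spare should start at that point
mdadm --readwrite /dev/md127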
Code
root@tronomv:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sat May 28 16:58:26 2022
Raid Level : raid5
Array Size : 29301952000 (27944.52 GiB 30005.20 GB)
Used Dev Size : 5860390400 (5588.90 GiB 6001.04 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Sep 5 10:31:07 2022
State : clean, degraded
Active Devices : 5
Working Devices : 6
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : TRONovm:Media
UUID : 6c97fe9a:bfe41742:f70d6c5c:a013b0ef
Events : 77496
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
- 0 0 3 removed
4 8 80 4 active sync /dev/sdf
5 8 96 5 active sync /dev/sdg
6 8 48 - spare /dev/sdd
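Before I try anything else, my plan is to mount the degraded array read-only and copy the irreplaceable data off while it is still up. Something along these lines (the mount point and destination are just placeholders for my setup):

Code
# mount the degraded array read-only
mount -o ro /dev/md127 /mnt/media
# copy the important data to another disk/machine
rsync -a --progress /mnt/media/ /path/to/backup/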
The bad drive is /dev/sdb:
Code
root@tronomv:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 6c97fe9a:bfe41742:f70d6c5c:a013b0ef
Name : TRONovm:Media
Creation Time : Sat May 28 16:58:26 2022
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11720780976 (5588.90 GiB 6001.04 GB)
Array Size : 29301952000 (27944.52 GiB 30005.20 GB)
Used Dev Size : 11720780800 (5588.90 GiB 6001.04 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=176 sectors
State : clean
Device UUID : d0b5d0eb:b64ba855:2f4a56a1:5fab600e
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Sep 5 10:26:26 2022
Bad Block Log : 512 entries available at offset 32 sectors - bad blocks present.
Checksum : f0a2ba8d - correct
Events : 77496
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
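For what it's worth, this is how I have been checking the SMART status of the failing drive (assuming smartmontools is installed; device letters can shift after a reboot, so I double-check that /dev/sdb is still the right disk):

Code
# quick overall health self-assessment
smartctl -H /dev/sdb
# full SMART attributes, self-test results and error log
smartctl -a /dev/sdb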