Made it almost a week thanks to geaves' help, but sadly, here I am again.
The RAID status is now showing "clean, FAILED" rather than missing.
Here are the results of the initially required inquiries (ryecoarron's list), as well as some of the info geaves requested in the first go-round. I ran --examine against sdc and sdd, since those are the two discs the array reports as faulty (to my novice eye, at least).
Again, my setup is/was RAID5 with four 6 TB WD Red discs, and I haven't rebooted since discovering this last night. Any help is most appreciated.
Code
root@Server:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sda[4] sdc[1](F) sde[3] sdd[2](F)
      17581174272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [U__U]
      bitmap: 3/44 pages [12KB], 65536KB chunk
unused devices: <none>
root@Server:~#
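If the kernel log would help, I can post that too; this is roughly what I'd grab (just a sketch filtering on the array and the two flagged discs, happy to adjust):
Code
# recent kernel messages that mention the array or the two flagged discs
dmesg | grep -E 'md0|sdc|sdd' | tail -n 50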
Code
root@Server:~# blkid
/dev/sdf1: UUID="b8f86e19-3cb3-4d0a-b1e2-623620314887" TYPE="ext4" PARTUUID="79c1501c-01"
/dev/sdf5: UUID="a631ab49-ab21-4169-8352-aa1829c8a95b" TYPE="swap" PARTUUID="79c1501c-05"
/dev/sdb1: LABEL="BackUp" UUID="9238bbb9-e494-487d-941e-234cad83a670" TYPE="ext4" PARTUUID="d6e47150-672f-4fb8-a57d-72c6ff0ca4ae"
/dev/sde: UUID="98379905-d139-d263-d58d-5eb3893ba95b" UUID_SUB="97feb0f7-c46c-0e05-4b6f-4c40a9448f9f" LABEL="Server:Raid1" TYPE="linux_raid_member"
/dev/sdc: UUID="98379905-d139-d263-d58d-5eb3893ba95b" UUID_SUB="98a8cd6c-cb21-5f16-8540-aa6c88960541" LABEL="Server:Raid1" TYPE="linux_raid_member"
/dev/sdd: UUID="98379905-d139-d263-d58d-5eb3893ba95b" UUID_SUB="df979bad-92c3-ac42-f3e5-512838996555" LABEL="Server:Raid1" TYPE="linux_raid_member"
/dev/md0: LABEL="Raid1" UUID="0f1174dc-fa73-49b0-8af3-c3ddb3caa7ef" TYPE="ext4"
/dev/sda: UUID="98379905-d139-d263-d58d-5eb3893ba95b" UUID_SUB="cd9ad946-ea0f-65a1-a2a3-298a258b2f76" LABEL="Server:Raid1" TYPE="linux_raid_member"
root@Server:~#
Code
root@Server:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=Server:Raid1 UUID=98379905:d139d263:d58d5eb3:893ba95b
root@Server:~#
Code
root@Server:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=Server:Raid1 UUID=98379905:d139d263:d58d5eb3:893ba95b
   devices=/dev/sda,/dev/sdc,/dev/sdd,/dev/sde
root@Server:~#
Code
root@Server:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Nov 25 18:05:25 2019
Raid Level : raid5
Array Size : 17581174272 (16766.71 GiB 18003.12 GB)
Used Dev Size : 5860391424 (5588.90 GiB 6001.04 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Apr 15 05:40:00 2021
State : clean, FAILED
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : Server:Raid1 (local to host Server)
UUID : 98379905:d139d263:d58d5eb3:893ba95b
Events : 191669
    Number   Major   Minor   RaidDevice State
       4       8        0        0      active sync   /dev/sda
       -       0        0        1      removed
       -       0        0        2      removed
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc
       2       8       48        -      faulty   /dev/sdd
root@Server:~#
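If SMART data on the two discs marked faulty would be useful, I can post that as well; this is roughly what I'd run (assuming smartmontools is installed, so treat it as a sketch):
Code
# SMART health status and attribute table for the two discs mdadm marks as faulty
smartctl -H -A /dev/sdc
smartctl -H -A /dev/sdd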
Code
root@Server:~# mdadm --examine /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 98379905:d139d263:d58d5eb3:893ba95b
Name : Server:Raid1 (local to host Server)
Creation Time : Mon Nov 25 18:05:25 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 11720783024 (5588.90 GiB 6001.04 GB)
Array Size : 17581174272 (16766.71 GiB 18003.12 GB)
Used Dev Size : 11720782848 (5588.90 GiB 6001.04 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=176 sectors
State : clean
Device UUID : 98a8cd6c:cb215f16:8540aa6c:88960541
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Apr 14 04:13:31 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 802c48bb - correct
Events : 191597
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@Server:~#
Code
root@Server:~# mdadm --examine /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 98379905:d139d263:d58d5eb3:893ba95b
Name : Server:Raid1 (local to host Server)
Creation Time : Mon Nov 25 18:05:25 2019
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 11720783024 (5588.90 GiB 6001.04 GB)
Array Size : 17581174272 (16766.71 GiB 18003.12 GB)
Used Dev Size : 11720782848 (5588.90 GiB 6001.04 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=176 sectors
State : clean
Device UUID : df979bad:92c3ac42:f3e55128:38996555
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Apr 14 04:13:31 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : bd4f81e8 - correct
Events : 191597
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@Server:~#
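One thing I did notice: the Events count on both faulty discs is 191597, while the array itself is at 191669. If a side-by-side comparison of all four members would help, this is what I was planning to run (read-only, nothing destructive; device letters as they currently show on my system):
Code
# compare update times and event counters across all four RAID members
mdadm --examine /dev/sd[acde] | grep -E '/dev/sd|Update Time|Events'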