Hey everyone,
I've already seen that I'm not the only one with this kind of problem but I don't want to hijack other threads so I'm making my own.
I recently started getting emails with a growing error count on one of my drives, which eventually resulted in an "OfflineUncorrectableSector" warning. So I turned the system off, waited for the replacement drive to arrive, and plugged it in today.
After starting up again I got the following email:
Status failed Service mountpoint_media_0ccf9178-985e-4e03-a859-e717a89a20dd
Date: Fri, 09 Aug 2019 17:50:43
Action: alert
Host: NAS
Description: status failed (1) -- /media/0ccf9178-985e-4e03-a859-e717a89a20dd is not a mountpoint
Your faithful employee,
Monit
And when I wanted to rebuild the RAID, it wasn't showing in the RAID section, even though all drives are visible in the Drives section.
Below are some outputs that another thread suggested providing. Please treat me as a Linux noob, though, I'm still learning.
I have 3x 3TB WD Red drives (fdisk reports them as 2.7 TiB each).
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc[2](S) sdb[1](S)
5860271024 blocks super 1.2
unused devices: <none>
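As far as I understand the mdstat output, both remaining members show up only as spares (S) and the array is inactive. If it helps with diagnosing, these are the read-only inspection commands I found in the mdadm man page and could run to pull more detail (I haven't changed anything yet; device names are taken from the blkid output below):

```shell
# Read-only: dump the RAID superblocks of the two surviving members
mdadm --examine /dev/sdb /dev/sdc

# Read-only: detailed state of the inactive array itself
mdadm --detail /dev/md127
```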
blkid
/dev/sdb: UUID="25f41e2c-c766-3a75-462d-a5746b47a522" UUID_SUB="cb760801-4799-3a1d-5a12-60d9d7e07abf" LABEL="NAS:Raid5x1" TYPE="linux_raid_member"
/dev/sdc: UUID="25f41e2c-c766-3a75-462d-a5746b47a522" UUID_SUB="c2b22e85-6da0-f2d1-806a-b3b6c54cc381" LABEL="NAS:Raid5x1" TYPE="linux_raid_member"
/dev/sdd1: UUID="e9cc3846-3bd3-4099-8f55-ff16e09e4c32" TYPE="ext4" PARTUUID="000df838-01"
/dev/sdd5: UUID="de8db28c-13c5-408f-9ce1-9c3ddc625c4a" TYPE="swap" PARTUUID="000df838-05"
fdisk -l | grep Disk
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Partition 1 does not start on physical sector boundary.
Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk identifier: D305B4BA-D562-4DEE-9B34-8EA95FBC8337
Disk /dev/sdd: 28 GiB, 30016659456 bytes, 58626288 sectors
Disk identifier: 0x000df838
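About the GPT warning on /dev/sda (the replacement drive): I'm guessing there's stale partition data on it from wherever the disk was used before. From what I've read, something like the following would clear it before adding the disk to the array, but please confirm before I run anything, since I really don't want to wipe the wrong disk:

```shell
# DANGER: destroys all on-disk signatures - intended ONLY for the
# brand-new replacement drive /dev/sda, never for the RAID members!
wipefs --all /dev/sda      # remove stale filesystem/RAID signatures
sgdisk --zap-all /dev/sda  # wipe both the primary and backup GPT
```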
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md/Raid5x1 metadata=1.2 spares=0 name=NAS:Raid5x1 UUID=25f41e2c:c7663a75:462da574:6b47a522
# instruct the monitoring daemon where to send mail alerts
MAILADDR my.email@addre.ss
mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=NAS:Raid5x1 UUID=25f41e2c:c7663a75:462da574:6b47a522
devices=/dev/sdb,/dev/sdc
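From other threads I gathered that the next steps might look roughly like this, but since I'm still a Linux noob I'd rather have someone confirm (or correct me) before I touch the array:

```shell
# Stop the half-assembled, inactive array first
mdadm --stop /dev/md127

# Try to assemble it degraded from the two good members
mdadm --assemble --run /dev/md127 /dev/sdb /dev/sdc

# If that works, add the new drive so the rebuild can start
mdadm --add /dev/md127 /dev/sda

# Watch the resync progress
cat /proc/mdstat
```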
I hope someone can help me. Thanks in advance!