I have a RAID 5 array with three hard drives in my system. I'm using OpenMediaVault 3.0.59 on Debian 8.9.
A couple of days ago, after rebooting, I got a message that my RAID was degraded. So I checked the hard drive for failures using SMART, but it shows no failure (or at least none that I can see):
https://pastebin.com/3KESMji3
More precisely, /dev/sdb is missing from the RAID after the reboot.
So I used the OMV interface to recover the RAID with the missing drive. After the rebuild finished, everything looked fine:
/dev/sdc: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="954aa619-68be-616b-76c8-4feb16f64316" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"
/dev/sda1: UUID="e7484e9d-96fa-48cf-8972-a9e0755321c5" TYPE="ext4" PARTUUID="7c0d5bdf-01"
/dev/sda5: UUID="b281244e-402c-46e4-8ff8-f2b35530dbb0" TYPE="swap" PARTUUID="7c0d5bdf-05"
/dev/md0: LABEL="Raid" UUID="e394d3a6-1f9d-431b-b778-4a65d24f3cd2" TYPE="ext4"
/dev/sdd: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="dcb862d0-4ac5-3a40-b25d-2209e3f56ea3" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"
/dev/sdb: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="139941d9-8723-997d-3517-9f081ac2e8db" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"
All three drives (sdb, sdc, sdd) were part of the RAID again.
But after another reboot I had the same problem as before:
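For reference, the CLI equivalent of that OMV recover step is presumably something like the following (a sketch, assuming /dev/sdb really was the dropped member; run as root):

```shell
# Re-add the missing disk to the running array (sketch, not verified
# against what OMV actually runs under the hood):
mdadm --manage /dev/md0 --add /dev/sdb

# Watch the rebuild progress:
cat /proc/mdstat
```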
/dev/sdd: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="dcb862d0-4ac5-3a40-b25d-2209e3f56ea3" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"
/dev/sda1: UUID="e7484e9d-96fa-48cf-8972-a9e0755321c5" TYPE="ext4" PARTUUID="7c0d5bdf-01"
/dev/sda5: UUID="b281244e-402c-46e4-8ff8-f2b35530dbb0" TYPE="swap" PARTUUID="7c0d5bdf-05"
/dev/md0: LABEL="Raid" UUID="e394d3a6-1f9d-431b-b778-4a65d24f3cd2" TYPE="ext4"
/dev/sdc: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="954aa619-68be-616b-76c8-4feb16f64316" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"
/dev/sdb: PTUUID="4ce8b032-80ac-426d-9029-4f3eeaf8ee98" PTTYPE="gpt"
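The striking change is on /dev/sdb: before the reboot it carried a linux_raid_member signature, afterwards blkid only sees a bare GPT partition table, i.e. the md superblock appears to be gone. Pulling the two sdb lines from the listings above side by side:

```shell
# The two /dev/sdb lines quoted verbatim from the blkid listings above:
before='/dev/sdb: UUID="cb7391ea-b6dd-64df-ffe7-775c1d15cb62" UUID_SUB="139941d9-8723-997d-3517-9f081ac2e8db" LABEL="tdog42:MainRaid" TYPE="linux_raid_member"'
after='/dev/sdb: PTUUID="4ce8b032-80ac-426d-9029-4f3eeaf8ee98" PTTYPE="gpt"'

# Before the reboot, sdb identifies as an md member ...
echo "$before" | grep -o 'TYPE="linux_raid_member"'   # prints: TYPE="linux_raid_member"
# ... afterwards only a GPT label is left, no raid signature at all:
echo "$after"  | grep -o 'PTTYPE="gpt"'               # prints: PTTYPE="gpt"
```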
dmesg gave me the following output: https://pastebin.com/w1qCMfMS
I then formatted the drive with ext4 and mounted it. That also works perfectly, so I guess the drive itself is not failing.
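Before re-adding the disk to the array next time, it may be worth checking which signatures actually sit on it; a sketch (the first two commands are read-only, the commented-out ones are destructive):

```shell
# Inspect the md superblock (if any) on the disk:
mdadm --examine /dev/sdb

# List all filesystem/partition-table signatures without touching them:
wipefs --no-act /dev/sdb

# DESTRUCTIVE: wipe stale signatures (the leftover GPT and the ext4
# test filesystem) before re-adding -- only if you are sure the disk
# holds no data you still need:
# wipefs --all /dev/sdb
# mdadm --zero-superblock /dev/sdb
```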
At this point I'm a little unsure what to do next, but maybe someone has an idea. Just in case, here is my /etc/fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb1 during installation
UUID=e7484e9d-96fa-48cf-8972-a9e0755321c5 / ext4 errors=remount-ro 0 1
# swap was on /dev/sdb5 during installation
UUID=b281244e-402c-46e4-8ff8-f2b35530dbb0 none swap sw 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
UUID=e394d3a6-1f9d-431b-b778-4a65d24f3cd2 /media/e394d3a6-1f9d-431b-b778-4a65d24f3cd2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]
and my mdadm.conf:
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 spares=1 name=tdog42:MainRaid UUID=cb7391ea:b6dd64df:ffe7775c:1d15cb62
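One detail that stands out: the ARRAY line records spares=1, although the array currently runs as a plain three-disk RAID 5 with no spare (see the mdstat output below), so the entry may simply be stale. A sketch for bringing it back in sync (Debian paths assumed):

```shell
# Print the ARRAY line describing the currently running array:
mdadm --detail --scan

# If it differs from the entry in /etc/mdadm/mdadm.conf, update the
# file, then rebuild the initramfs so boot-time assembly uses the
# same definition:
update-initramfs -u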
cat /proc/mdstat:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc[1] sdd[2]
11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
bitmap: 19/44 pages [76KB], 65536KB chunk
unused devices: <none>
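For what it's worth, the [3/2] [_UU] part of that output encodes the degraded state: three member slots, two active, and the underscore marks slot 0 (the dropped disk) as missing. Counting the missing slots mechanically:

```shell
# Slot map copied from the mdstat line above: '_' = missing, 'U' = up.
slots='_UU'
printf '%s\n' "$slots" | tr -cd '_' | wc -c   # prints: 1
```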
fdisk -l | grep "Disk ":
Disk /dev/sdc: 5,5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk /dev/sdb: 5,5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk identifier: 4CE8B032-80AC-426D-9029-4F3EEAF8EE98
Disk /dev/sda: 298,1 GiB, 320072933376 bytes, 625142448 sectors
Disk identifier: 0x7c0d5bdf
Disk /dev/sdd: 5,5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk /dev/md0: 10,9 TiB, 12002081636352 bytes, 23441565696 sectors
mdadm --detail --scan --verbose: