Hello everyone,
a few days ago I received an email notification about a degraded array, and I'm a bit confused by it:
This is an automatically generated mail message from mdadm
running on openmediavault
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md0 : active raid1 sdb[0]
1953383488 blocks super 1.2 [2/1] [U_]
bitmap: 15/15 pages [60KB], 65536KB chunk
unused devices: <none>
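For anyone else reading: my understanding of the status line (please correct me if I'm wrong) is that `[2/1]` means the array has 2 slots but only 1 active member, and in `[U_]` each position stands for one slot, `U` = up, `_` = missing. A tiny sketch of how I read that field (the status string is copied from my mdstat output above):

```shell
#!/bin/sh
# Count missing members in an mdstat status field like "[U_]"
# (U = active device, _ = failed/missing slot).
status='[U_]'   # copied from /proc/mdstat above
missing=$(printf '%s' "$status" | tr -cd '_' | wc -c)
echo "missing devices: $((missing))"
```

So one of the two mirror slots is empty, which matches the `removed` entry further down.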
That prompted me to log into OMV and check the state of the drives.
They are all accessible and their SMART status seems to be OK.
The RAID was configured across /dev/sdb and /dev/sdc, but it somehow ended up in this state:
Version : 1.2
Creation Time : Fri Jan 27 02:40:34 2017
Raid Level : raid1
Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Feb 4 22:24:12 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : openmediavault:RAID (local to host openmediavault)
UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
Events : 595312
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
2 0 0 2 removed
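If I understand this correctly, slot 1 shows as `removed`, so the kernel dropped /dev/sdc from the array at some point, even though the disk itself still responds. From what I've read, the command-line recovery would look roughly like the sketch below — I have NOT run this yet and would appreciate confirmation that it's the right approach before I do:

```shell
# Inspect the dropped disk's superblock and event counter first
mdadm --examine /dev/sdc

# If the superblock is intact, try a re-add (the internal bitmap should
# keep the resync short); fall back to a full add if re-add is refused.
mdadm /dev/md0 --re-add /dev/sdc || mdadm /dev/md0 --add /dev/sdc
```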
And when I click on Recover, I can't select any device:
Any help appreciated!
Here's the required information about my setup:
- OMV 3.0.99 Erasmus with Kernel 3.16.0-4-amd64
- 4 disks total (one 500 GB for the OMV host, 2x 2 TB in RAID1, 1x 8 TB)
Here are the log outputs:
root@openmediavault:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb[0]
1953383488 blocks super 1.2 [2/1] [U_]
bitmap: 15/15 pages [60KB], 65536KB chunk
unused devices: <none>
root@openmediavault:~# blkid
/dev/sda1: UUID="b279d58d-a670-4db5-a4a2-a70bbd0c1f10" TYPE="ext4" PARTUUID="56d063f1-01"
/dev/sda5: UUID="8feb4be8-3916-4d2c-acd7-d76fe2089a47" TYPE="swap" PARTUUID="56d063f1-05"
/dev/sdb: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="de544b7f-38e8-9789-c0a0-48064749c10b" LABEL="openmediavault:RAID" TYPE="linux_raid_member"
/dev/sdd1: UUID="1eaa2aa6-3acf-4b46-bd11-1954749d8470" TYPE="ext4" PARTUUID="6a6bdcf6-2744-4a33-9d43-c3b09c167dd5"
/dev/md0: UUID="4559837c-667d-46d0-9ec5-053f67eed5fa" TYPE="ext4"
/dev/sdc: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="2ded4613-7409-0a9c-1958-2730c4b2e0c7" LABEL="openmediavault:RAID" TYPE="linux_raid_member"
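Since blkid still shows /dev/sdc with the same array UUID as /dev/sdb, my plan was to compare the event counters of both members to see how far sdc has fallen behind. The numbers below are placeholders — on the real box they would come from `mdadm --examine /dev/sdb /dev/sdc`; only a value in the neighbourhood of 595312 (the array's counter above) is real, sdc's is made up for illustration:

```shell
#!/bin/sh
# Placeholder event counters: sdb's is taken from my --detail output above,
# sdc's is invented for illustration -- on the real system both come from
# `mdadm --examine` on each member device.
events_sdb=595312
events_sdc=595100
if [ "$events_sdb" -eq "$events_sdc" ]; then
    echo "counters match: members are in sync"
else
    echo "counters differ by $((events_sdb - events_sdc)) events"
fi
```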
root@openmediavault:~# fdisk -l | grep "Disk "
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk identifier: 0x56d063f1
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk identifier: 849C94FC-17E3-4AE3-8440-05852118C172
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors
root@openmediavault:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7
root@openmediavault:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7
devices=/dev/sdb
root@openmediavault:~#