I recently added a drive to my RAID 5 array in OMV to grow it from 4 drives to 5. The drives are 3 TB each, so the new usable capacity is roughly 12 TB. The process (essentially the standard add-and-grow sequence, sketched after the quoted mail below) went smoothly and everything seems to check out; however, I now get a daily mail notification with these two lines in the body:
/etc/cron.daily/openmediavault-mdadm:
mdadm: only give one device per ARRAY line: /dev/md0 and metadata-1.2
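For context, the expansion followed the usual mdadm add-and-grow steps, roughly like this; the device name and the ext4 resize are illustrative, not a record of my exact commands:
mdadm --add /dev/md0 /dev/sde            # add the new disk to the array as a spare
mdadm --grow /dev/md0 --raid-devices=5   # reshape the array across all 5 devices
cat /proc/mdstat                         # watch the reshape progress
resize2fs /dev/md0                       # once reshaped, grow the filesystem (ext4 example)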
I went to the cron entry and ran the same command manually, and I get the same mdadm warning shown above. All my searching seems to point to my mdadm.conf file, but I can't figure out what the issue is. Here is my mdadm.conf file:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata-1.2 name=omv:primary UUID=6001aebd:0b9b04ab:b6821595:72f5f027
# instruct the monitoring daemon where to send mail alerts
MAILADDR <my email address here>
MAILFROM root
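For what it's worth, mdadm can print an ARRAY line for each assembled array in exactly the form it expects in the config file; this is read-only and should be safe to compare against the file above:
# print ARRAY definitions for all currently assembled arrays
mdadm --detail --scan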
I am not an expert, but everything I can find says this should be correct. The output of "mdadm --detail /dev/md0" shows:
root@omv:~# mdadm --detail /dev/md0
mdadm: only give one device per ARRAY line: /dev/md0 and metadata-1.2
/dev/md0:
Version : 1.2
Creation Time : Mon Jan 1 00:40:30 2018
Raid Level : raid5
Array Size : 11720540160 (11177.58 GiB 12001.83 GB)
Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Jan 3 10:45:28 2019
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : omv:primary (local to host omv)
UUID : 6001aebd:0b9b04ab:b6821595:72f5f027
Events : 47740
Number Major Minor RaidDevice State
0 8 48 0 active sync /dev/sdd
1 8 16 1 active sync /dev/sdb
2 8 0 2 active sync /dev/sda
3 8 32 3 active sync /dev/sdc
4 8 64 4 active sync /dev/sde
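If rewriting the file turns out to be the fix, I assume the safe route is to back it up first and then regenerate the definitions from the on-disk superblocks (the examine step is read-only, it only prints):
# keep a copy of the current config before any edits
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
# print ARRAY definitions straight from the member disks' superblocks
mdadm --examine --scan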
Any ideas about what is causing this error to show up? Again, the array seems to be working fine, but it does hold the family jewels (so to speak), so I am loath to leave it as-is. Any help or insight is much appreciated. BTW, I upgraded to the latest OMV (4.1.x Arrakis) to see if that would clear it... nope.
Thanks