Hi all
New to Linux and OMV; I'm having some issues with the status of my RAID1 mirror (/dev/md0).
Here are the 5 relevant outputs:
root@SRV-OMV01:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda[0]
976630464 blocks super 1.2 [2/1] [U_]
bitmap: 8/8 pages [32KB], 65536KB chunk
root@SRV-OMV01:~# blkid
/dev/sdb1: UUID="3046-996B" TYPE="vfat" PARTUUID="e1585f92-50c4-49b2-b360-5fad9c610758"
/dev/sdb2: UUID="0d979471-7f34-4287-9fa1-d6db66e05644" TYPE="ext4" PARTUUID="d0985a5e-4e71-4659-9dcd-3e981c20265d"
/dev/sdb3: UUID="24bcf1b4-7f7f-461d-86b5-592beb119172" TYPE="swap" PARTUUID="95c05587-b015-4c3b-bf67-b0abef312625"
/dev/sda: UUID="e8cc98f0-fd92-6fd9-fc9d-f7fcf080c302" UUID_SUB="3c182d7e-ad3c-09f9-54d7-1ab85d0a2433" LABEL="SRV-OMV01:OMVmirror" TYPE="linux_raid_member"
/dev/md0: LABEL="Backups" UUID="db6ac9ce-605f-46ea-b705-273a8c42b9fb" TYPE="ext4"
/dev/sdc: UUID="e8cc98f0-fd92-6fd9-fc9d-f7fcf080c302" UUID_SUB="cf940a7e-c770-6c0e-d930-14227b0948a8" LABEL="SRV-OMV01:OMVmirror" TYPE="linux_raid_member"
root@SRV-OMV01:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=SRV-OMV01:OMVmirror UUID=e8cc98f0:fd926fd9:fc9df7fc:f080c302
root@SRV-OMV01:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=SRV-OMV01:OMVmirror UUID=e8cc98f0:fd926fd9:fc9df7fc:f080c302
devices=/dev/sda
As I say, I'm new to this; I've attempted to troubleshoot but find myself stuck and confused, and it's not obvious to me what the issue is.
When I run a simple lsblk:
root@SRV-OMV01:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─md0 9:0 0 931.4G 0 raid1
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 512M 0 part /boot/efi
├─sdb2 8:18 0 110.3G 0 part /
└─sdb3 8:19 0 976M 0 part [SWAP]
sdd 8:48 0 931.5G 0 disk
I attempted to use mdadm --add to add the third disk to /dev/md0, and it seemed to work: through the web console I could see the array recovering. After the recovery finished, though, it still shows as clean, degraded.

The very confusing thing to me is that when I check the disks, the device name has changed: the device shown above as sdd was previously sdc. I tried the same command again, and at the moment the device shows as sde. Not sure if this is a red herring?
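The renaming made me wonder whether I should be referring to the disk by its persistent name rather than sdX. Here's a sketch of what I think the safer invocation looks like (the ata-... id below is a placeholder, not my actual disk):

```shell
# Kernel names (sdc, sdd, sde, ...) are assigned in detection order and can
# change between boots or after a hotplug event -- they are not stable
# identifiers. The symlinks under /dev/disk/by-id/ are persistent:
ls -l /dev/disk/by-id/ 2>/dev/null || true

# Add the missing mirror member by its stable name instead of sdX
# (placeholder id -- substitute the real entry for the disk), then
# watch the rebuild progress:
#   mdadm /dev/md0 --add /dev/disk/by-id/ata-YOUR_DISK_SERIAL
#   watch cat /proc/mdstat
```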
Any help would be great, thank you!
Vino