Dear all,
I'm seeing some strange behavior with the RAID 1 array I've built for my OS, using two NVMe SSDs.
OMV reports two versions of this array: one active (/dev/md127) and one with a "False" state (/dev/md127p1).
mdstat only shows the active array,
fstab shows that the root filesystem (/) is mounted on the "false" array (/dev/md127p1),
and here is the lsblk output:
Code
jerome@DTC-JEJE:/$ lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0  1   3,6T  0 disk
└─md125           9:125  0   3,6T  0 raid1 /srv/dev-disk-by-uuid-e5954363-9d99-4c6f-9dd6-7c2ca9fc4d9e
sdb                8:16  1   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdc                8:32  1   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdd                8:48  0   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sde                8:64  0   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdf                8:80  0   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdg                8:96  0   3,6T  0 disk
└─md125           9:125  0   3,6T  0 raid1 /srv/dev-disk-by-uuid-e5954363-9d99-4c6f-9dd6-7c2ca9fc4d9e
sdh               8:112  0   3,6T  0 disk
└─md126           9:126  0  14,6T  0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
nvme1n1           259:0  0 232,9G  0 disk
├─nvme1n1p1       259:1  0   512M  0 part  /boot/efi2
└─nvme1n1p2       259:2  0 232,4G  0 part
  └─md127         9:127  0 232,3G  0 raid1
    └─md127p1     259:6  0 232,3G  0 part  /
nvme0n1           259:3  0 232,9G  0 disk
├─nvme0n1p1       259:4  0   512M  0 part  /boot/efi
└─nvme0n1p2       259:5  0 232,4G  0 part
  └─md127         9:127  0 232,3G  0 raid1
    └─md127p1     259:6  0 232,3G  0 part  /
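For reference, the observations above came from roughly these checks (a sketch; output omitted here, and my fstab entry may reference the array by UUID rather than by device name):
Code
cat /proc/mdstat                 # only lists the active array, md127
sudo mdadm --detail /dev/md127   # cross-check: state and members of the RAID 1 array as mdadm sees it
grep -v '^#' /etc/fstab          # the entry for / points at /dev/md127p1
findmnt /                        # shows which device / is actually mounted from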
Any ideas on how to clean up this setup?
Thank you,
JR