I recently did a clean install of OMV 4.1.22 (upgrading from 3.0.99). I used a new system SSD and disconnected my RAID 5 array during the install, then shut down and reconnected the RAID drives. After rebooting, the array was not listed in the OMV web interface: all the disks appear under Disks, but nothing shows in RAID Management.
I am looking for help getting the RAID back up. Information below:
cat /proc/mdstat
Code
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
19534437560 blocks super 1.2
unused devices: <none>
blkid
Code
/dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
/dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
/dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"
fdisk -l | grep "Disk "
Code
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk identifier: 0xd73019f2
cat /etc/mdadm/mdadm.conf
Code
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
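Note that the file ends with no ARRAY line after the "definitions of existing MD arrays" comment, which I assume is because this is a fresh install that never knew about the array. From what I've read, once the array is assembled and active again, the missing definition could be regenerated with something like this (just a sketch of what I believe the usual procedure is; I have not run it yet):

```shell
# Append the array definition reported by the scan to mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled at boot
update-initramfs -u
```

I assume this only makes sense after the array is active, since right now the scan reports it as INACTIVE-ARRAY (see the --detail --scan output below).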
mdadm --detail --scan --verbose
Code
INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
cat /etc/fstab
Code
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=8c781b3b-33bd-42bb-ba23-0cdc03a68fdc / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=98699062-8fa7-403b-b0f0-6a4226840926 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
# <<< [openmediavault]
Drive type: all 5 drives are 4 TB SATA disks (4 HGST, 1 WD Red)
The array stopped working after the clean install of OMV 4.1.22.
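From searching similar threads, the usual suggestion for an inactive array where every member shows as (S) spare seems to be stopping it and force-reassembling. Would something like this be safe to run here? (A sketch based on my device names above; I haven't run it yet for fear of data loss.)

```shell
# Stop the inactive array first
mdadm --stop /dev/md127

# Force-assemble it from the five member disks
mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcdef]

# Check the result
cat /proc/mdstat
```

Any advice before I try this would be appreciated.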