Hi all,
a few days ago my OMV web interface stopped working and access to our Samba share became very slow, so I decided to reinstall OMV without the RAID devices and to recover the array afterwards. Somehow it didn't work out automatically as I thought it would. I have an SSD (/dev/sda) and five WD Reds as RAID 5. Here is the relevant information:
cat /proc/mdstat
Code
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdd[5](S) sdf[2](S) sdb[3](S) sdc[4](S) sde[1](S)
14650677560 blocks super 1.2
unused devices: <none>
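All five members are listed with an (S), which as far as I understand means they are all currently treated as spares and nothing is actually assembled. To see what the superblocks say, I assume the first (read-only) step would be:
Code
# inspect the RAID superblock of every member disk (read-only)
mdadm --examine /dev/sd[b-f]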
blkid
Code
/dev/sda1: UUID="3614-7B2D" TYPE="vfat" PARTUUID="135fcc74-b1c5-47b4-bfd7-cb3472a10b42"
/dev/sda2: UUID="b6ec9193-cb8b-4c69-970e-3f7342645c4b" TYPE="ext4" PARTUUID="783a0940-0c3b-4b73-a20b-5bcf0c5f4763"
/dev/sda3: UUID="df415e4d-7cac-4b19-972d-21ca17232be6" TYPE="swap" PARTUUID="fa809040-657c-422c-b3cf-66484b0250ea"
/dev/sdf: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="4b25ec98-84a7-e6a8-7425-c7d5740c1617" LABEL="nasgul:Daten" TYPE="linux_raid_member"
/dev/sdb: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="e5227779-9ab2-d205-f323-f59b08fd9c9a" LABEL="nasgul:Daten" TYPE="linux_raid_member"
/dev/sde: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="448791b2-ce27-7e38-381a-458e02938798" LABEL="nasgul:Daten" TYPE="linux_raid_member"
/dev/sdc: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="882a53b5-2e57-61d1-212e-b1af30b4a118" LABEL="nasgul:Daten" TYPE="linux_raid_member"
/dev/sdd: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="f504861c-dc1c-c4d7-d7df-a069bc119e00" LABEL="nasgul:Daten" TYPE="linux_raid_member"
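All five disks still carry the same array UUID (79686a5b-...) and the label nasgul:Daten, so the superblocks themselves seem intact. To check whether the event counters of the members still agree, I was going to run something like this (also read-only, if I understand correctly):
Code
# show the event counter and device role of each member
for d in /dev/sd[b-f]; do echo "$d"; mdadm --examine "$d" | grep -E "Events|Device Role"; done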
fdisk -l | grep "Disk "
Code
Disk /dev/sda: 111,8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SV300S3
Disklabel type: gpt
Disk identifier: E3FA400C-C943-4FE8-884B-E17DEDFD63AA
Disk /dev/sdf: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sde: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdd: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
cat /etc/mdadm/mdadm.conf
Code
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This configuration was auto-generated on Wed, 25 Mar 2020 16:23:07 +0000 by mkconf
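What strikes me is that there is no ARRAY line in this file at all. If I read mdadm.conf(5) correctly, the definition could be regenerated and the initramfs refreshed afterwards, roughly like this, though I guess this only makes sense once the array is active again:
Code
# run only after the array has been assembled successfully:
# append the detected array definition and update the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u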
mdadm --detail --scan --verbose
Code
INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=nasgul:Daten UUID=79686a5b:5c573d1d:195baf3d:f4226866
devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
Hopefully somebody can help me. I tried to assemble the RAID again:
Code
mdadm --assemble --run /dev/md127
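From other recovery threads I gathered that an inactive array usually has to be stopped before it can be reassembled from its members, roughly like this, but I didn't want to force anything before asking here:
Code
# stop the inactive array, then reassemble it from its five members
mdadm --stop /dev/md127
mdadm --assemble --run /dev/md127 /dev/sd[b-f]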
Or could the missing ARRAY definition in mdadm.conf be the problem, and if so, what does the content of my config file have to be?
Thank you for any help.
Kaos