Raid missing after upgrade to OMV 4.1.22

  • Recently did a clean install of OMV 4.1.22 (from 3.0.99). Used a new system SSD and disconnected my RAID 5 array. Shut down after the initial install and reconnected the RAID. Rebooted and checked the OMV web interface: the RAID was not listed. All the disks were listed under Disks, but nothing appeared in RAID Management.


    I am looking for help getting the RAID back up. Information below:


    cat /proc/mdstat

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
    19534437560 blocks super 1.2
    unused devices: <none>

    blkid

    Code
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
    /dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"


    fdisk -l | grep "Disk "


    Code
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
    Disk identifier: 0xd73019f2


    cat /etc/mdadm/mdadm.conf





    mdadm --detail --scan --verbose




    Code
    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
    19534437560 blocks super 1.2
    unused devices: <none>


    cat /etc/fstab





    Drive type: all 5 drives are 4 TB SATA disks (4 HGST, 1 WD Red)



    Stopped working after clean install of OMV 4.1.22


  • You'll need to assemble it: mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]


    Thank you geaves for the reply.


    Just tried to assemble it:
    Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sde is busy - skipping
    mdadm: /dev/sdf is busy - skipping


    Not sure what that means.
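    (The "busy" messages usually mean the kernel still holds the disks in the inactive md127, so the array has to be stopped before it can be force-assembled; a minimal sketch, assuming the members really are sdb-sdf:)

    Code
    mdadm --stop /dev/md127
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]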

  • Stopped the array:


    mdadm --stop /dev/md127
    mdadm: stopped /dev/md127

    Then tried to assemble:

    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 4.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sde is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdf is identified as a member of /dev/md127, slot 3.
    mdadm: forcing event count in /dev/sdf(3) from 14035 upto 14059
    mdadm: forcing event count in /dev/sdc(2) from 13901 upto 14059
    mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdc
    mdadm: clearing FAULTY flag for device 4 in /dev/md127 for /dev/sdf
    mdadm: Marking array /dev/md127 as 'clean'
    mdadm: added /dev/sdd to /dev/md127 as 1
    mdadm: added /dev/sdc to /dev/md127 as 2
    mdadm: added /dev/sdf to /dev/md127 as 3
    mdadm: added /dev/sdb to /dev/md127 as 4
    mdadm: added /dev/sde to /dev/md127 as 0
    mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.


    The array has 5 drives, which it should, but it's saying that's not enough to start.
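    (A quick way to see why mdadm refuses to start the array is to compare the per-disk event counts and array states; a minimal sketch, assuming the members are still sdb-sdf:)

    Code
    mdadm --examine /dev/sd[bcdef] | egrep '/dev/sd|Events|Array State'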

  • Update: The RAID is now showing up in OMV RAID Management, showing clean with all 5 drives listed.


    SMART shows that sdb is red due to Reallocated Sector Count. I'll have to replace that drive today. I wonder if this is why it's saying: "not enough to start the array"?
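    (If useful, the reallocated-sector figure can also be checked from the shell; a minimal sketch, assuming smartmontools is installed and sdb is still the suspect disk:)

    Code
    smartctl -A /dev/sdb | egrep -i 'Reallocated|Pending|Uncorrectable'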

    • Official post

    The array has 5 drives, which it should, but it's saying that's not enough to start.

    That doesn't look good, have you got a backup just in case?


    You'll need to run mdadm --examine /dev/sdb on each drive, so replace b with c and so on; run each command in turn and post each output using </> on the toolbar (see the sketch below for running them in one go). I'll have a look in the morning. Have you also checked each drive's SMART status?


    You have 5 drives in that RAID 5, and RAID 5 can only tolerate 1 drive failure; the error 'not enough to start' would suggest that 2 drives, c and f, are the problem.
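    (A minimal sketch for running the examine on every member in one go, assuming the members are still sdb-sdf:)

    Code
    for d in /dev/sd[bcdef]; do
        echo "== $d =="
        mdadm --examine "$d"
    done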

  • Check your fstab and mdadm entries
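    (A minimal sketch of what to look for; the grep patterns are only illustrative:)

    Code
    grep -i 'md' /etc/fstab
    grep -i '^ARRAY' /etc/mdadm/mdadm.conf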


    • Official post

    SMART shows that sdb is red due to Reallocated Sector Count. I'll have to replace that drive today.

    You can do that from the GUI: select the RAID, then click Remove, select the drive and OK that. Add the new drive, then under Disks select it and wipe it (short is sufficient); you may then have to format the drive. Back in RAID Management, select the RAID and click Recover; a dialogue box will open with the new drive shown, select it and click OK, and the new drive will sync with the array. (A rough command-line equivalent is sketched below.)
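    (Only a sketch of that procedure, and it assumes the failing disk is sdb and comes back under the same name after the swap:)

    Code
    mdadm --manage /dev/md127 --fail /dev/sdb      # mark the failing member as failed
    mdadm --manage /dev/md127 --remove /dev/sdb    # pull it from the array
    # power down, swap the disk, boot again, then wipe the new disk and add it
    wipefs -a /dev/sdb
    mdadm --manage /dev/md127 --add /dev/sdb
    cat /proc/mdstat                               # watch the resync progress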

  • Quote

    There's no reference to the RAID in either, so run omv-mkconf fstab, then the same again but with mdadm.

    Okay, ran omv-mkconf fstab and below are fstab and mdadm.conf:
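    (For reference, the two commands in question plus a quick check that the array is now referenced; the grep is only illustrative:)

    Code
    omv-mkconf fstab
    omv-mkconf mdadm
    grep -i 'md127\|ARRAY' /etc/fstab /etc/mdadm/mdadm.conf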


    • Official post

    Okay, ran omv-mkconf fstab and below are fstab and mdadm.conf:

    Then there is still a problem! Those two commands should write the relevant information to those files, thereby recreating them. You say it's showing in RAID Management, but is it mounted under File Systems? I've stopped using RAID so I'm doing this from memory :)

  • Swapped out the faulty drive and md127 just finished recovering. Then mounted the drive in File Systems (it wasn't mounted before), then ran omv-mkconf for fstab and mdadm. How do these look?


    Now it looks like I will have to set up file systems.
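    (Before creating or mounting a filesystem it is worth confirming that the resync has finished and the array is clean; a minimal sketch:)

    Code
    cat /proc/mdstat
    mdadm --detail /dev/md127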
