Raid missing after upgrade from OMV 4 to 5

  • Last night I upgraded OMV from version 4.x to 5.5.4-1 using the instructions and script from this post by ryecoaaron: RE: OMV 5.0 - finally out! :-)


    The upgrade went smoothly, but afterwards my raid is missing in action. The three drives that should make up the array are there, but the raid isn't assembled, and nothing is showing under Storage - RAID in the web interface. I've found the FAQ on this topic, but it didn't help me resolve the issue. So I'm including the requested information below in the hope that someone here has an idea.

    mdstat

    Code
    root@mimas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdc[1](S) sdd[2](S)
          3906767024 blocks super 1.2

    unused devices: <none>

    blkid

    Code
    root@mimas:~# blkid
    /dev/sda1: UUID="93472a57-7231-44f9-82ca-5257364bcd42" TYPE="ext4" PARTUUID="b8e2fdeb-01"
    /dev/sda3: LABEL="data0" UUID="2c5026ca-d497-4dc5-b570-251723e1e8b7" TYPE="ext4" PARTUUID="b8e2fdeb-03"
    /dev/sda5: UUID="dba378a9-ae33-4486-935e-bf403b814154" TYPE="swap" PARTUUID="b8e2fdeb-05"
    /dev/sdc: UUID="dafdd650-f6ff-f4c0-aea7-f00edd3676ed" UUID_SUB="a325b8d7-3270-9fe7-e298-29f31c10f0d8" LABEL="mimas:mainpool" TYPE="linux_raid_member"
    /dev/sdd: UUID="dafdd650-f6ff-f4c0-aea7-f00edd3676ed" UUID_SUB="96d76f24-0e6e-3ea5-7085-2a46f59d6780" LABEL="mimas:mainpool" TYPE="linux_raid_member"
    /dev/sdb1: PARTLABEL="primary" PARTUUID="64e56ceb-db8a-4e32-81ac-a7648a7df2a3"

    fdisk

    mdadm


    Code
    root@mimas:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=mimas:mainpool UUID=dafdd650:f6fff4c0:aea7f00e:dd3676ed
       devices=/dev/sdc,/dev/sdd


    You can see four drives in the machine: a 60GB partitioned SSD for system and data (sda) and the three WD 2TB drives for the RAID (sdb/c/d). The drives are there, and everything worked before the upgrade, but now nothing. Trying to assemble the raid from the command line doesn't do anything; the command just returns with no error message and no result.

    Code
    root@mimas:~# mdadm --assemble --scan
    root@mimas:~#


    I'm reasonably competent in Linux administration but not an expert on raids. Am I missing something obvious? Any help appreciated.
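
    For what it's worth, a read-only check I can still run is to query the md superblock on each drive directly. This is just a diagnostic sketch using the device names from the blkid output above; it doesn't change anything on disk.

    Code
    # Read-only: print the md superblock (array UUID, raid level, device role)
    # for each drive that should belong to the array
    mdadm --examine /dev/sdc /dev/sdd

    # sdb only shows a partition in blkid, so check both the disk and the partition
    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdb1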

  • Thank you, this helps a bit. At least I can get at the data. But it doesn't give me back a fully functioning RAID:

    Code
    root@mimas:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    root@mimas:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[cd]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 2.
    mdadm: no uptodate device for slot 0 of /dev/md127
    mdadm: added /dev/sdd to /dev/md127 as 2
    mdadm: added /dev/sdc to /dev/md127 as 1
    mdadm: /dev/md127 has been started with 2 drives (out of 3).

    It is missing sdb. Did you leave that off on purpose? Because it's reported differently? And if so, any idea why sdb must be treated differently?
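
    In case it's useful, this is how I'm checking the state of the now-degraded array (read-only, device name as above):

    Code
    # Kernel view: md127 should now be active with one member missing
    cat /proc/mdstat

    # mdadm view: raid level, "clean, degraded" state, and which slot is marked removed
    mdadm --detail /dev/md127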

    • Official post

    It is missing sdb. Did you leave that off on purpose

    No, that's because blkid doesn't list it as a linux_raid_member.

    Because it's reported differently

    In blkid, sdb reports this: /dev/sdb1: PARTLABEL="primary" PARTUUID="64e56ceb-db8a-4e32-81ac-a7648a7df2a3". That is a partition, not a complete block device/drive.

    And if so, any idea why sdb must be treated differently?

    OK, I'm lost on that. You can't treat it differently: OMV uses a complete drive to create an array, and sdb clearly shows a partition with no raid signature. You can't put a square peg in a round hole :)


    Can you post the output of mdadm --detail /dev/md127?
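
    If you want to double-check the missing signature, you can also ask mdadm directly whether there is any md superblock on sdb. Just a sketch, read-only:

    Code
    # Should report "No md superblock detected" if the raid signature really is gone
    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdb1

    # For comparison, a good member like sdc prints the array UUID and its device role
    mdadm --examine /dev/sdc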

    Okay, thank you. What you write makes sense. The irritating part is that sdb actually is the third disk from the array; it is the same physical disk that was part of the RAID 5 as /dev/sdb before I did the upgrade. Why it now reports something different after the upgrade...

  • @NeXTguy Were the RAID drives connected or disconnected during the upgrade?
    @All Could it be that the upgrade did something (strange) to them? I've read similar error reports here on the board several times after an upgrade.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Yeah, I guess that is the rookie mistake I made; I left the array attached when I did the upgrade.


    With your help I have a path forward now: I'll simply reformat sdb and add it as a fresh disk to the now degraded array. This will of course require a rebuild but I can live with that.


    By the way, cabrio_leo, the NAS in question is in a Node 304 case.

    • Official post

    I'll simply reformat sdb and add it as a fresh disk to the now degraded array

    Wipe the drive first; technically you shouldn't have to format it. Wipe the drive, then go to RAID Management, select the RAID, and hit Recover on the menu. The drive should be displayed; select it, click OK, and the RAID should rebuild.
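
    If you prefer the shell, the rough command-line equivalent would be something like this; treat it as a sketch and double-check the device name before wiping anything:

    Code
    # DANGER: removes all filesystem/partition/raid signatures on sdb -- verify the device first
    wipefs -a /dev/sdb

    # Add the cleaned drive back to the degraded array; the rebuild starts automatically
    mdadm --manage /dev/md127 --add /dev/sdb

    # Watch the resync progress
    cat /proc/mdstat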

    • Official post

    The upgrade isn't causing this and you should be able to leave the array connected. I think there is a difference in the mdadm package between Debian 9 and 10 that causes this.

    What is the output of: mdadm --detail --scan


    If the output of that command doesn't match the array line(s) in /etc/mdadm/mdadm.conf then I think the array won't mount. If they are different, execute:


    omv-salt deploy run mdadm


    Then reboot and see if it assembles.
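
    A quick way to compare the two, just as a sketch using the standard paths:

    Code
    # What the running system detects right now
    mdadm --detail --scan

    # What is persisted in the config file that is read at boot
    grep ^ARRAY /etc/mdadm/mdadm.conf

    # If the ARRAY lines differ, regenerate the config through OMV, then reboot
    omv-salt deploy run mdadm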

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
