File System (RAID1) "Missing" after Upgrade

  • Hello,


    After upgrading OMV 3.x to 4.x, my RAID1 file system has the status "Missing". Other file systems are not affected and continue to work perfectly, just like the RAID1 file system worked for years until the recent upgrade. I've read about some apparently similar issues here in the forum, and that there may be no immediate solution other than reinstalling OMV 3.x.


    I'm preparing to:
    - back up my (faulty) OS drive with Clonezilla
    - reinstall OMV 3.x
    - back up the RAID1 file system's data content (a hedged rsync sketch follows this list)
    - install OMV 4.x, recreate the RAID1 file system and restore the data onto it
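
    For the data backup step, something along these lines could work. This is only a minimal sketch: the source mount point (guessed from the array label "1mirror") and the backup target path are assumptions, not taken from this system.


    # minimal sketch, assuming the RAID1 file system is still mountable and an
    # external drive is mounted as the backup target; both paths are placeholders
    mount | grep md127                       # confirm where the array is currently mounted
    rsync -aHAX --info=progress2 /srv/dev-disk-by-label-1mirror/ /media/usb-backup/raid1/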


    Any suggestions in advance would be very much appreciated.


    Thank you.




    File Systems (screenshot)


    SysLog during upgrade from 3.x to 4.x

    syslog.txt


    System Information

    MB: Gigabyte GA-H97N-WIFI, Intel H97, Socket 1150, Dual Channel DDR3, Mini-ITX
    CPU: Intel Celeron G1840, 2x 2.80GHz, Socket 1150, boxed


    HDD (RAID1): 2x 3000GB WD Red WD30EFRX, 64MB cache, SATA 6Gb/s


    SSD (OS): 64GB SanDisk SATA 2.5"


    OS: OpenMediaVault 4.1.7 (Arrakis)

    I did not back up the initial OMV 3.x system drive.




    Disks & S.M.A.R.T. (screenshot)





    RAID Management (screenshot)




    ;(

  • Ok, here's the missing info:



    1. cat /proc/mdstat


    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdb[0] sda[1]
    2930135360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>



    2. blkid


    /dev/sda: UUID="cf45872d-fb54-61ce-42b1-0d7b539de8f0" UUID_SUB="b9109e79-fcac-1b1c-71a4-3bf66137383b" LABEL="BanatNAS2:1mirror" TYPE="linux_raid_member"
    /dev/sdb: UUID="cf45872d-fb54-61ce-42b1-0d7b539de8f0" UUID_SUB="640fb6e5-4ae0-ea8e-4bb5-c948e223ed5b" LABEL="BanatNAS2:1mirror" TYPE="linux_raid_member"
    /dev/sdd1: LABEL="2" UUID="c7421b41-e5b8-483e-8616-ca091d27dff4" TYPE="ext4" PARTUUID="775e9a95-7809-4812-ab7b-3279ca6cbbfa"
    /dev/sdc1: UUID="121e73e3-610f-4eeb-aade-0494e5d66852" TYPE="ext4" PARTUUID="000894cb-01"
    /dev/sdc5: UUID="19aba07f-c0d8-4e9a-a203-7ab1035e011d" TYPE="swap" PARTUUID="000894cb-05"



    3. fdisk -l | grep "Disk "


    Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdd: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk identifier: B251AF73-ADE5-4C79-B435-C33750B9C27B
    Disk /dev/sdc: 58,7 GiB, 63023063040 bytes, 123091920 sectors
    Disk identifier: 0x000894cb
    Disk /dev/md127: 2,7 TiB, 3000458608640 bytes, 5860270720 sectors



    4. cat /etc/mdadm/mdadm.conf


    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md/1mirror metadata=1.2 name=BanatNAS2:1mirror UUID=cf45872d:fb5461ce:42b10d7b:539de8f0
    MAILADDR root



    5. mdadm --detail --scan --verbose


    ARRAY /dev/md/1mirror level=raid1 num-devices=2 metadata=1.2 name=BanatNAS2:1mirror UUID=cf45872d:fb5461ce:42b10d7b:539de8f0
    devices=/dev/sda,/dev/sdb



    6. Post type of drives and quantity being used as well.


    SSD1 (sdc/OMV): Sandisk SDSSDP 64GB, SATA III, 2.5"
    HDD1 (sda/RAID1): WD Red WD30EFRX 3TB, 64MB, SATA III, 3.5"
    HDD2 (sdb/RAID1): WD Red WD30EFRX 3TB, 64MB, SATA III, 3.5"
    HDD3 (sdd): WD Red WD40EFRX 4TB 64MB, SATA III, 3.5"


    HDD4 (/Backups): WD Elements 2TB USB 2.0, 3.5", external Drive



    7. Post what happened for the array to stop working? Reboot? Power loss?


    sudo apt-get update
    sudo apt-get upgrade
    sudo omv-update
    sudo omv-release-upgrade
    ...
    reboot.





    Maybe I shouldn't have skipped
    sudo apt-get dist-upgrade


    Greets

  • Maybe I shouldn't have skipped

    It wasn't necessary. All you need is omv-update and omv-release-upgrade.


    Your array is assembled but in readonly mode. Try:


    mdadm --readwrite /dev/md127


    If that works, then run:


    omv-mkconf mdadm
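
    A quick way to confirm both steps took effect (a minimal sketch; the sysfs path is generic md, not OMV-specific):


    cat /sys/block/md127/md/array_state   # "read-auto"/"readonly" before, "clean" or "active" once writable
    grep ^ARRAY /etc/mdadm/mdadm.conf     # omv-mkconf mdadm should have rewritten this line with your array's UUID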


  • mdadm --readwrite /dev/md127
    mdadm: failed to set writable for /dev/md127: Device or resource busy


    Rebooted and retried, with the same result.

  • You need to unmount it.


  • sudo umount /dev/md127
    umount: /dev/md127: not mounted


    ...


    mdadm --readwrite /dev/md127
    mdadm: failed to set writable for /dev/md127: Device or resource busy



    ...hmmm

    Boot something like SystemRescueCd and try the command.


    Booted from a SystemRescueCd USB stick...



    umount /dev/md127
    umount: /dev/md127: not mounted


    mdadm --readwrite /dev/md127
    mdadm: failed to set writable for /dev/md127: Device or resource busy

    That makes sense; it wasn't mounted under SystemRescueCd. But I wouldn't have thought it would be busy. What is the output of cat /proc/mdstat? If it is still read-only, then I would stop the array and re-assemble it:


    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[ba]
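
    (For clarity: /dev/sd[ba] is just a shell glob that expands to /dev/sda /dev/sdb. A hedged way to check the result afterwards:)


    mdadm --detail /dev/md127 | grep -E 'State :|Active Devices|Working Devices'   # expect "clean", 2 and 2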


    Booted from a SystemRescueCd USB stick...


    root@sysresccd /root % cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[0] sda[1]
    2930135360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>




    root@sysresccd / % umount /dev/md127
    umount: /dev/md127: not mounted.



    root@sysresccd / % mdadm --readwrite /dev/md127
    mdadm: failed to set writable for /dev/md127: Device or resource busy

  • Not sure why you are trying to unmount it when it shouldn't be mounted. And there is no need to execute the readwrite command since the output of mdstat says the array is assembled correctly now. I would boot back into OMV to see if it is still assembled correctly.



  • Quote from ryecoaaron

    Not sure why you are trying to unmount it when it shouldn't be mounted.

    Yeah, no problem; believe me, I don't know what I'm doing either :).
    Rebooted from SystemRescueCd to OMV.
    The OMV GUI still lists the RAID1 file system as "Missing".
    Here is the "Degraded or missing raid array questions" output:



    root@banatnas2:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sda[1] sdb[0]
    2930135360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>



    root@banatnas2:~# blkid
    /dev/sda: UUID="cf45872d-fb54-61ce-42b1-0d7b539de8f0" UUID_SUB="b9109e79-fcac-1b1c-71a4-3bf66137383b" LABEL="BanatNAS2:1mirror" TYPE="linux_raid_member"
    /dev/sdb: UUID="cf45872d-fb54-61ce-42b1-0d7b539de8f0" UUID_SUB="640fb6e5-4ae0-ea8e-4bb5-c948e223ed5b" LABEL="BanatNAS2:1mirror" TYPE="linux_raid_member"
    /dev/sdd1: LABEL="2" UUID="c7421b41-e5b8-483e-8616-ca091d27dff4" TYPE="ext4" PARTUUID="775e9a95-7809-4812-ab7b-3279ca6cbbfa"
    /dev/sdc1: UUID="121e73e3-610f-4eeb-aade-0494e5d66852" TYPE="ext4" PARTUUID="000894cb-01"
    /dev/sdc5: UUID="19aba07f-c0d8-4e9a-a203-7ab1035e011d" TYPE="swap" PARTUUID="000894cb-05"



    root@banatnas2:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdd: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk identifier: B251AF73-ADE5-4C79-B435-C33750B9C27B
    Disk /dev/sdc: 58,7 GiB, 63023063040 bytes, 123091920 sectors
    Disk identifier: 0x000894cb
    Disk /dev/md127: 2,7 TiB, 3000458608640 bytes, 5860270720 sectors



    root@banatnas2:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md/1mirror metadata=1.2 name=BanatNAS2:1mirror UUID=cf45872d:fb5461ce:42b10d7b:539de8f0
    MAILADDR root

  • If it is still readonly then I would stop the array and re-assemble.


    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[ba]


    Just did that and rebooted; the RAID1 file system is still "Missing".



    root@banatnas2:/# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127



    root@banatnas2:/# mdadm --assemble --force --verbose /dev/md127 /dev/sd[ba]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
    mdadm: added /dev/sda to /dev/md127 as 1
    mdadm: added /dev/sdb to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 2 drives.



    root@banatnas2:/# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdb[0] sda[1]
    2930135360 blocks super 1.2 [2/2] [UU]

  • Try:


    omv-mkconf mdadm


    and then reboot again. The only output you need to post is cat /proc/mdstat


  • Done, apparently no change.
    OMV GUI: RAID1 file system status: "Missing".


    root@banatnas2:/# omv-mkconf mdadm
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.6-amd64
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.4-amd64


    ...reboot...


    root@banatnas2:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdb[0] sda[1]
    2930135360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>

  • Quote from ryecoaaron


    If it is still readonly then I would stop the array and re-assemble.
    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[ba]

    Done. RAID1 file system status: "Missing"


    root@banatnas2:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127


    root@banatnas2:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[ba]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
    mdadm: added /dev/sda to /dev/md127 as 1
    mdadm: added /dev/sdb to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 2 drives.


    ...reboot...


    root@banatnas2:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdb[0] sda[1]
    2930135360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
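
    For completeness, two more checks that might be worth running (a hedged sketch; /etc/openmediavault/config.xml is the standard OMV config file, and the element names are an assumption about its layout):


    blkid /dev/md127                                          # does the assembled array expose a filesystem UUID at all?
    grep -E '<fsname>|<dir>' /etc/openmediavault/config.xml   # which device/UUID OMV expects to mount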

    I'm having the same problem, have gone through the same troubleshooting steps, and have come up empty-handed as well:


    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md0 : active (auto-read-only) raid6 sdh[3] sdf[1] sdg[2] sda[7](S) sde[5] sdb[6](S) sdi[4]
    5860150272 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk


    unused devices: <none>


    I tried all the troubleshooting steps from above. I can also output more of the commands (blkid etc.), but my output is nearly identical to this.


    I've got a backup of the RAID contents, but naturally would rather not have to rebuild. Seems weird that this would be an issue.


    Hopefully there's a fix on the short horizon :-)

  • my output is nearly identical to this.

    Your array is assembled but in readonly mode. Try mdadm --readwrite /dev/md0


    Yes, I tried that, since it was already a troubleshooting step in this thread. It still does not show up in OMV, or mount.



    root@omvserver:~# mdadm --readwrite /dev/md0
    root@omvserver:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md0 : active raid6 sdh[3] sdf[1] sdg[2] sda[7](S) sde[5] sdb[6](S) sdi[4]
    5860150272 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk
    unused devices: <none>
    root@omvserver:~#

    I also ran the omv-mkconf mdadm command afterwards, as mentioned in the thread as well:



    root@omvserver:/# omv-mkconf mdadm
    update-initramfs: Generating /boot/initrd.img-4.16.0-0.bpo.2-amd64
    W: Possible missing firmware /lib/firmware/isci/isci_firmware.bin for module isci
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.6-amd64
    W: Possible missing firmware /lib/firmware/isci/isci_firmware.bin for module isci
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.5-amd64
    W: Possible missing firmware /lib/firmware/isci/isci_firmware.bin for module isci
    update-initramfs: Generating /boot/initrd.img-3.16.0-5-amd64
    W: Possible missing firmware /lib/firmware/isci/isci_firmware.bin for module isci
    update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
    W: Possible missing firmware /lib/firmware/isci/isci_firmware.bin for module isci
    root@omvserver:/#


    (Can't put two code blocks in this post for some reason; the forum software fails with no error.)


    Not sure if the isci warnings are related, but I haven't set up anything iSCSI.
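
    (Side note, hedged: "isci" is the Intel C600-series SAS controller driver, not iSCSI, so that missing-firmware warning should be harmless unless such a controller is actually present. A quick check:)


    lsmod | grep -w isci || echo "isci module not loaded; the firmware warning can be ignored"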

