Missing file system after upgrade from 3.x to 4.x

  • This is a new one.

    Logged in as root, I tried to run /proc/mdstat and got a Permission denied error:

    root@omv2:~# /proc/mdstat
    bash: /proc/mdstat: Permission denied
    root@omv2:~#



    So I ran:


    root@omv2:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active (auto-read-only) raid5 sdc[1] sde[3] sdb[0] sdd[2]
    8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    unused devices: <none>
    root@omv2:~#


    Then ran:
    root@omv2:~# mdadm --readwrite /dev/md127
    root@omv2:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdc[1] sde[3] sdb[0] sdd[2]
    8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    unused devices: <none>
    root@omv2:~#



    Shows active raid5 now.

    Rebooted, but the file system still shows as:
    referenced = yes
    mounted = no
    status = missing
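
    For reference, a check along these lines might show whether a filesystem signature is still visible on the array (assuming the data filesystem sits directly on /dev/md127 rather than on a partition of it):

    blkid /dev/md127      # prints UUID= and TYPE= if a filesystem signature is present
    findmnt /dev/md127    # shows whether anything is currently mounted from the array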


    This is turning into a real puzzle ;)

    Any more advice?

    Thanks for all your help.

    • Official Post

    Shows active raid5 now


    rebooted

    Your array was fixed with the command, but after rebooting it is probably going into auto-read-only again. This generally happens when something is wrong with the array or with a drive in the array. I would fix it with the --readwrite command again and then run omv-mkconf mdadm. Leave the system on for a while to make sure everything that needs to happen on the array gets a chance to happen.
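
    If a drive problem is suspected, a quick health check along these lines may help (device names taken from the mdstat output above; smartctl comes from the smartmontools package):

    mdadm --detail /dev/md127    # check the State line and look for failed or removed devices
    smartctl -H /dev/sdb         # overall SMART health; repeat for sdc, sdd and sde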


  • Ran these commands with these results. I will leave the system on for a while without touching it to see what happens.


    Thanks


    root@omv2:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active (auto-read-only) raid5 sdb[0] sdc[1] sdd[2] sde[3]
    8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    unused devices: <none>
    root@omv2:~# mdadm --readwrite /dev/md127
    root@omv2:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdb[0] sdc[1] sdd[2] sde[3]
    8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    unused devices: <none>
    root@omv2:~# omv-mkconf mdadm
    update-initramfs: Generating /boot/initrd.img-4.18.0-0.bpo.1-amd64
    update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.6-amd64
    root@omv2:~#

    • Official Post

    file system still showing as missing

    If the array starts in auto-read-only mode every time you reboot, there is something wrong with the array. I'm not sure what is causing the issue, though. Maybe Google has an answer.
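
    One thing that might be worth checking (a guess, not a confirmed cause) is the md driver's start_ro setting, which makes arrays come up in auto-read-only mode until the first write:

    cat /sys/module/md_mod/parameters/start_ro              # 1 means arrays start auto-read-only
    mdadm --examine /dev/sd[bcde] | grep -E 'Events|State'  # compare event counts and state across members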


  • OK, thank you for all your help.


    I will just go back to version 3.x (which I have done before, and the array was just fine), back up everything to my other NAS, and then completely rebuild the array (maybe using ZFS).

  • wipefs -n /dev/sd[bcde]

    I ended up backing up everything to my 2nd NAS, then upgrading to version 4.1.12. I installed the Proxmox kernel and ZFS. I had to wipe the 4 drives before they showed up as available to add to the ZFS pool, but all is well now.
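
    For anyone following the same path, the rough CLI equivalent of what I did looks something like this; the pool name "tank" and the raidz1 layout are only examples, and wipefs -a is destructive, so run it only after the backup is verified:

    wipefs -a /dev/sd[bcde]                                        # remove the old md signatures (destructive!)
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde   # 4-disk raidz1 pool
    zpool status tank                                              # confirm the pool is ONLINE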


    Thanks again for all your help.
