Missing file system after upgrade from 3.x to 4.x

  • On my 3.x OMV machine (which I have had for a long time) - I decided to do the upgrade to 4.x.


    I uninstalled all plugins (not running that many)


    SSH'd in as root


    Ran the command "omv-release-upgrade" and let it run to completion (from 3.x to 4.1.12)


    No major errors other than the python warning


    Let the system reboot
    Waited until the system was fully up
    Logged in as admin
    All looks good - until I go to:
    Storage
    File Systems


    I see the OS partition - and all looks good


    But - my storage volume shows all n/a's and a status of "Missing" (the RAID 5 array appears to be just fine under RAID Management)


    Any thoughts as to what to look for? I vaguely remember reading something a while ago about a problem with certain Linux kernels and Western Digital WD30EFRX-68E drives (of which I have 4 in my array).


    Any advice greatly appreciated


    TIA


    -George-

  • Based on this post - Degraded or missing raid array - I am posting the requested information.


    Code
    root@omv2:/etc/openmediavault# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active (auto-read-only) raid5 sdd[2] sdb[0] sdc[1] sde[3]
          8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    
    
    unused devices: <none>


    Code
    root@omv2:/etc/openmediavault# blkid
    /dev/sdb: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="f3504832-c1f4-a281-0054-281a58d56c06" LABEL="OMV2:HULK" TYPE="linux_raid_member"
    /dev/sda1: UUID="78995f57-b587-4bff-999d-e5d2b1c8adfb" TYPE="ext4" PARTUUID="db3b5bed-01"
    /dev/sda5: UUID="56f4fc83-52b8-4ac6-b1c7-768adc1b6be3" TYPE="swap" PARTUUID="db3b5bed-05"
    /dev/sde: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="c79f6587-6542-0563-c085-ed992a531362" LABEL="OMV2:HULK" TYPE="linux_raid_member"
    /dev/sdc: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="813e417d-a5d9-1a5d-0cf6-8f537e869886" LABEL="OMV2:HULK" TYPE="linux_raid_member"
    /dev/sdd: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="a898fada-ec5a-6011-f16b-735c6ba2870e" LABEL="OMV2:HULK" TYPE="linux_raid_member"
    root@omv2:/etc/openmediavault#
    Code
    root@omv2:/etc/openmediavault# fdisk -l | grep "Disk "
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sda: 37.3 GiB, 40018599936 bytes, 78161328 sectors
    Disk identifier: 0xdb3b5bed
    Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/md127: 8.2 TiB, 9001374842880 bytes, 17580810240 sectors
    root@omv2:/etc/openmediavault#



    Code
    root@omv2:/etc/openmediavault# mdadm --detail --scan --verbose
    ARRAY /dev/md127 level=raid5 num-devices=4 metadata=1.2 name=OMV2:HULK UUID=de9ec887:074a453e:f6ad3e13:8e8d12a6
       devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
    root@omv2:/etc/openmediavault#
    • Official Post

    I needed to get to my data - so I reinstalled 3.0.99 and my raid files are all there

    Did you look at the link ananas posted? I would be curious to see the output of: wipefs -n /dev/sd[bcde] (no, it won't wipe anything)
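
    In general the safe pattern is to inspect first and erase a specific signature only once it has been identified. A minimal sketch (the offset below is a placeholder, not a value from this system):

    Code
    # -n / --no-act only prints the signatures, it erases nothing
    wipefs -n /dev/sd[bcde]
    # Hypothetical cleanup, only if a stale signature (e.g. zfs_member) turns up:
    # -b backs up the erased bytes under $HOME, -o is the offset reported above
    # wipefs -b -o 0x40000 /dev/sdX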

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    Edited once, last by ryecoaaron ()

  • Hi there,
    if the array is recognized, then the ZFS signatures on the member disks cannot be the issue.
    How about the signatures on the software RAID?
    What is the output of "wipefs -n /dev/md127" ?


    Cheers,
    Thomas

  • Code
    root@omv2:~# wipefs -n /dev/md127
    offset type
    ----------------------------------------------------------------
    0x438 ext4 [filesystem]
    LABEL: HULK
    UUID: 689e2054-f38c-4415-a5fd-941a2384a054



    BTW - I have never run ZFS on this NAS
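
    (Side note: that wipefs output shows the ext4 signature and label are intact, so the data should be reachable with a manual read-only mount. A minimal sketch, assuming /mnt/hulk as an arbitrary scratch mount point:)

    Code
    mkdir -p /mnt/hulk
    mount -o ro /dev/md127 /mnt/hulk   # read-only works even on an auto-read-only array
    ls /mnt/hulk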

    • Official Post

    if the array is recognized, then the ZFS signatures on the member disks cannot be the issue.

    Not quite true. With Debian 8 (OMV 3.x), mdadm didn't seem to care if there was a zfs signature on the disk(s). With Debian 9 (OMV 4.x), this changed and it did affect assembly.

    root@omv2:~# wipefs -n /dev/md127

    Can you post the output of the wipefs command I asked for? A bad signature wouldn't be on the array, since it has already been assembled. It would be on the physical disks themselves.


    BTW - I have never run ZFS on this NAS

    zfs isn't the only signature that might be causing the problem.


    • Official Post

    Those all look fine (you could have run the command I posted to get it all in one :) )


    I should have looked at your /proc/mdstat more carefully. Your array assembled but was in auto-read-only. Did you ever try mdadm --readwrite /dev/md127?
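
    For reference, a minimal sketch of that check-and-fix sequence - auto-read-only just means no write has hit the array since assembly, and --readwrite flips it back:

    Code
    cat /proc/mdstat                # look for "(auto-read-only)"
    mdadm --readwrite /dev/md127    # switch the array to normal read-write mode
    cat /proc/mdstat                # the flag should now be gone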


  • No - I never tried that - do you think it would have worked? I guess I can do an in-place upgrade again and see what happens (since I know I can always put 3.x back on there) - and if it does the same thing, I can try the command you reference above.



    Code
    root@omv2:~# wipefs -n /dev/[bcde]
    wipefs: error: /dev/[bcde]: probing initialization failed: No such file or directory
    root@omv2:~#




    Thanks


    George

    • Official Post

    do you think it would have worked?

    Probably. That is what the command is supposed to fix.

    root@omv2:~# wipefs -n /dev/[bcde]
    wipefs: error: /dev/[bcde]: probing initialization failed: No such file or directory
    root@omv2:~#

    I had a typo in my post. It should be wipefs -n /dev/sd[bcde]


  • mdadm --readwrite /dev/md127

    ran apt-get update
    apt-get upgrade (nothing to upgrade)
    removed clamav plugin
    Ran in-place upgrade to go from 3.x to 4.x
    completed with no errors
    logged in via the web interface
    here is what I see under "File Systems"



    Ran the command you referenced - from telnet: "mdadm --readwrite /dev/md127"
    Rebooted


    File System same - - not mounted and Missing


    ran "apt-get update" and apt-get upgrade" just to make sure it was not due to some missing package update
    rebooted system
    File System for raid array still the same - "Missing"


    Any other thoughts? Or should I just go back to 3.x and be happy?


    Thank you

    • Official Post

    Ran the command you referenced - from telnet: "mdadm --readwrite /dev/md127"
    Rebooted


    File System same - - not mounted and Missing


    Any other thoughts? Or should I just go back to 3.x and be happy?

    That command can only fix an array in auto-read-only mode. What is the output of: cat /proc/mdstat


    I also assume you rebooted after moving to OMV 4.x? Do you have the 4.18 kernel installed?
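
    For reference, pulling a newer kernel in from backports looks roughly like this (a sketch, assuming the stretch-backports repo is already enabled in the apt sources):

    Code
    apt-get update
    apt-get install -t stretch-backports linux-image-amd64
    reboot
    # afterwards, verify the running kernel:
    uname -r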


  • Code
    root@omv2:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active (auto-read-only) raid5 sdc[1] sde[3] sdd[2] sdb[0]
          8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]


    unused devices: <none>
    root@omv2:~#



    Kernel is Linux 4.9.0-8-amd64 (obtained from System Information - Overview)



    yes - I rebooted multiple times ;)




    Code
    root@omv2:~# grep -ir backport /etc/apt/*
    /etc/apt/apt.conf.d/01autoremove: "linux-backports-modules-.*";
    /etc/apt/apt.conf.d/01autoremove-kernels: "^linux-backports-modules-.*-4\.9\.0-0\.bpo\.6-amd64$";
    /etc/apt/apt.conf.d/01autoremove-kernels: "^linux-backports-modules-.*-4\.9\.0-8-amd64$";
    /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
    /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
    /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
    /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
    /etc/apt/sources.list.d/openmediavault-kernel-backports.list:deb http://httpredir.debian.org/debian jessie-backports main contrib non-free
    root@omv2:~#
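
    (Those hits show the kernel-backports source and pin still reference jessie. A minimal sketch of repointing them at stretch by hand, using the two file paths the grep found:)

    Code
    sed -i 's/jessie-backports/stretch-backports/g' \
        /etc/apt/sources.list.d/openmediavault-kernel-backports.list \
        /etc/apt/preferences.d/openmediavault-kernel-backports.pref
    apt-get update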



    installed omv-extras for OMV 4


    ran this command to successful completion:
    apt-get install -t stretch-backports linux-headers-4.18.0-0.bpo.1-amd64


    rebooted


    System info still shows kernel 4.9.0-8


    Go to OMV-Extras
    Click Kernel
    Installed Kernels drop-down only shows 4.9.0-8 entries



    Went back in and ran:
    apt-get install linux-image-(TAB)
    saw the 4.18 amd64 kernel in the list - and installed it successfully.


    Went back to OMV-Extras / Kernel.
    4.18 now shows up in the Installed Kernels drop-down
    selected it and clicked "set as default boot kernel"


    rebooted


    System info now shows the kernel as:
    Linux 4.18.0-0.bpo.1-amd64
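
    (A quick shell check that the new kernel is really the one running, for anyone following the same steps:)

    Code
    uname -r            # expect 4.18.0-0.bpo.1-amd64
    cat /proc/mdstat    # see whether the array still comes up auto-read-only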


    Raid File System still shows as missing ;(


    Went to Update-Management and upgraded all available packages


    Then ran:
    apt-get update
    apt-get upgrade
    still missing file system


    rebooted


    file system still missing


    any thoughts??


    thanks

    • Official Post

    If /proc/mdstat still shows the array in auto-read-only mode, now would be the time to execute the mdadm --readwrite /dev/md127 command. After that, post the output of cat /proc/mdstat again.
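
    A second way to read the array state, alongside /proc/mdstat:

    Code
    mdadm --readwrite /dev/md127
    mdadm --detail /dev/md127 | grep -i state   # "clean" vs. "clean, read-only"
    cat /proc/mdstat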

