Missing file system after upgrade from 3.x to 4.x

    • OMV 4.x
    • Upgrade 3.x -> 4.x

    • Missing file system after upgrade from 3.x to 4.x

      On my 3.x OMV machine (which I have had for a long time), I decided to do the upgrade to 4.x.

      I uninstalled all plugins (not running that many)

      SSH'd in as root

      Ran command "omv-release-upgrade" and let run to completion (from 3.x to 4.1.12)

      No major errors other than the python warning

      Let the system reboot
      Waited until the system was fully up
      Logged in as admin
      All looks good - until I go to:
      Storage
      File Systems

      I see the OS partition - and all looks good

      But - my storage volume shows all n/a's and a status of "missing" (Raid 5 array appears to be just fine under Raid Management)

      Any thoughts as to what to look for? I vaguely remember reading something a while ago about a problem with certain Linux kernels and Western Digital WD30EFRX-68E drives (of which I have 4 in my array).
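
      (For reference, the drive model and running kernel can be double-checked from the shell; a quick sketch, assuming smartmontools is installed and using /dev/sdb purely as an example member disk:)

      Source Code

      # Show the running kernel version
      uname -r

      # Show model and firmware of one array member (repeat for sdc, sdd, sde)
      smartctl -i /dev/sdb | grep -E "Model|Firmware"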

      Any advice greatly appreciated

      TIA

      -George-
    • Please go to the WebUI and post the Status report under Diagnostics. If possible, post the /etc/openmediavault/config.xml file too, or at least the filesystem/mntent part of it.
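
      If it is easier than posting the whole file, the mount-entry section can be pulled out on the command line; a rough sketch, assuming a stock config.xml layout where each entry sits in an <mntent> element:

      Source Code

      # Print each mount entry from the OMV config with a little context
      grep -B 2 -A 12 "<mntent>" /etc/openmediavault/config.xml
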
      Absolutely no support through PM!

      I must not fear.
      Fear is the mind-killer.
      Fear is the little-death that brings total obliteration.
      I will face my fear.
      I will permit it to pass over me and through me.
      And when it has gone past I will turn the inner eye to see its path.
      Where the fear has gone there will be nothing.
      Only I will remain.

      Litany against fear by Bene Gesserit
    • Based on this post - Degraded or missing raid array - I am posting the requested information.

      Source Code

      root@omv2:/etc/openmediavault# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active (auto-read-only) raid5 sdd[2] sdb[0] sdc[1] sde[3]
            8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

      unused devices: <none>


      Source Code

      root@omv2:/etc/openmediavault# blkid
      /dev/sdb: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="f3504832-c1f4-a281-0054-281a58d56c06" LABEL="OMV2:HULK" TYPE="linux_raid_member"
      /dev/sda1: UUID="78995f57-b587-4bff-999d-e5d2b1c8adfb" TYPE="ext4" PARTUUID="db3b5bed-01"
      /dev/sda5: UUID="56f4fc83-52b8-4ac6-b1c7-768adc1b6be3" TYPE="swap" PARTUUID="db3b5bed-05"
      /dev/sde: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="c79f6587-6542-0563-c085-ed992a531362" LABEL="OMV2:HULK" TYPE="linux_raid_member"
      /dev/sdc: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="813e417d-a5d9-1a5d-0cf6-8f537e869886" LABEL="OMV2:HULK" TYPE="linux_raid_member"
      /dev/sdd: UUID="de9ec887-074a-453e-f6ad-3e138e8d12a6" UUID_SUB="a898fada-ec5a-6011-f16b-735c6ba2870e" LABEL="OMV2:HULK" TYPE="linux_raid_member"
      root@omv2:/etc/openmediavault#

      Source Code

      root@omv2:/etc/openmediavault# fdisk -l | grep "Disk "
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sda: 37.3 GiB, 40018599936 bytes, 78161328 sectors
      Disk identifier: 0xdb3b5bed
      Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/md127: 8.2 TiB, 9001374842880 bytes, 17580810240 sectors
      root@omv2:/etc/openmediavault#


      Source Code

      root@omv2:/etc/openmediavault# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 name=OMV2:HULK UUID=de9ec887:074a453e:f6ad3e13:8e8d12a6
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR gibeaug@otc.edu
      MAILFROM root
      root@omv2:/etc/openmediavault#


      Source Code

      root@omv2:/etc/openmediavault# mdadm --detail --scan --verbose
      ARRAY /dev/md127 level=raid5 num-devices=4 metadata=1.2 name=OMV2:HULK UUID=de9ec887:074a453e:f6ad3e13:8e8d12a6
         devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
      root@omv2:/etc/openmediavault#
    • ggibeau wrote:

      I needed to get to my data - so I reinstalled 3.0.99 and my raid files are all there
      Did you look at the link ananas posted? I would be curious to see the output of: wipefs -n /dev/sd[bcde] (no, it won't wipe anything)
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!

    • ananas wrote:

      if the array is recognized, then the ZFS signatures on the member disks can not be the issue.
      Not quite true. With Debian 8 (OMV 3.x), mdadm didn't seem to care if there was a zfs signature on the disk(s). With Debian 9 (OMV 4.x), this changed and it did affect assembling.
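
      (For the record, checking for and clearing a stray signature looks roughly like this; wipefs -n is a dry run, the offset below is only a placeholder taken from the dry-run output, and -b keeps a backup of whatever gets erased:)

      Source Code

      # Dry run: list every signature on a member disk, nothing is changed
      wipefs -n /dev/sdb

      # Only if a foreign signature (e.g. zfs_member) shows up next to
      # linux_raid_member: erase it at that exact offset, keeping a backup.
      # 0x3f000 is a placeholder offset - use the one from the dry run.
      wipefs -b -o 0x3f000 /dev/sdb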

      ggibeau wrote:

      root@omv2:~# wipefs -n /dev/md127
      Can you post the wipefs command I asked for? A bad signature wouldn't be on the array since it has already been assembled. It would be on the physical disks themselves.

      ggibeau wrote:

      BTW - I have never run ZFS on this NAS
      zfs isn't the only signature that might be causing the problem.
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Source Code

      root@omv2:~# wipefs -n /dev/sdb
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: OMV2:HULK
      UUID: de9ec887-074a-453e-f6ad-3e138e8d12a6
      root@omv2:~# wipefs -n /dev/sdc
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: OMV2:HULK
      UUID: de9ec887-074a-453e-f6ad-3e138e8d12a6
      root@omv2:~# wipefs -n /dev/sdd
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: OMV2:HULK
      UUID: de9ec887-074a-453e-f6ad-3e138e8d12a6
      root@omv2:~# wipefs -n /dev/sde
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: OMV2:HULK
      UUID: de9ec887-074a-453e-f6ad-3e138e8d12a6
    • Those all look fine (you could have run the command I posted to get it all in one :) )

      I should have looked at your /proc/mdstat more carefully. Your array assembled but was in auto-read-only. Did you ever try mdadm --readwrite /dev/md127?
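
      (Roughly, the sequence would be the following - the array name comes from the mdstat output earlier in the thread:)

      Source Code

      # Check the current state of the array
      cat /proc/mdstat
      mdadm --detail /dev/md127 | grep -i state

      # Switch it from auto-read-only to normal read-write operation
      mdadm --readwrite /dev/md127

      # Confirm the state changed
      cat /proc/mdstat
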
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
      No - I never tried that - do you think it would have worked? I guess I can do an upgrade in place again and see what happens (since I know I can always put 3.x back on there) - and if it does the same thing, I can try the command you reference above.


      root@omv2:~# wipefs -n /dev/[bcde]
      wipefs: error: /dev/[bcde]: probing initialization failed: No such file or directory
      root@omv2:~#



      Thanks

      George
    • ggibeau wrote:

      do you think it would have worked?
      Probably. That is what the command is supposed to fix.

      ggibeau wrote:

      root@omv2:~# wipefs -n /dev/[bcde]
      wipefs: error: /dev/[bcde]: probing initialization failed: No such file or directory
      root@omv2:~#
      I had a typo in my post. It should be wipefs -n /dev/sd[bcde]
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      mdadm --readwrite /dev/md127
      ran apt-get update
      apt-get upgrade (nothing to upgrade)
      removed clamav plugin
      Ran in-place upgrade to go from 3.x to 4.x
      completed with no errors
      logged in via web interface
      here is what I see under "File Systems"



      Run the command you referenced - from telnet "mdadm --readwrite /dev/md127"
      Reboot

      File System same - not mounted and Missing

      ran "apt-get update" and apt-get upgrade" just to make sure it was not due to some missing package update
      rebooted system
      File System for raid array still the same - "Missing"

      Any other thoughts? Or should I just go back to 3.x and be happy.

      Thank you

    • ggibeau wrote:

      Run the command you referenced - from telnet "mdadm --readwrite /dev/md127"
      Reboot

      File System same - not mounted and Missing

      Any other thoughts? Or should I just go back to 3.x and be happy.
      That command can only fix an array in auto-read-only mode. What is the output of: cat /proc/mdstat

      I also assume you rebooted after moving to OMV 4.x? Do you have the 4.18 kernel installed?
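
      (Both can be checked from the shell; the install line below uses the usual Debian metapackage name for the stretch-backports kernel, so treat it as a sketch and verify the package exists first:)

      Source Code

      # Running kernel and installed kernel images
      uname -r
      dpkg -l 'linux-image*' | grep ^ii

      # If only 4.9 is installed, the backports kernel image can be pulled in
      # (metapackage name assumed - check with apt-cache policy first)
      apt-cache policy linux-image-amd64
      apt-get install -t stretch-backports linux-image-amd64
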
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • root@omv2:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active (auto-read-only) raid5 sdc[1] sde[3] sdd[2] sdb[0]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

      unused devices: <none>
      root@omv2:~#


      Kernel is Linux 4.9.0-8-amd64 (obtained from System Information - Overview)


      yes - I rebooted multiple times ;)



      root@omv2:~# grep -ir backport /etc/apt/*
      /etc/apt/apt.conf.d/01autoremove: "linux-backports-modules-.*";
      /etc/apt/apt.conf.d/01autoremove-kernels: "^linux-backports-modules-.*-4\.9\.0-0\.bpo\.6-amd64$";
      /etc/apt/apt.conf.d/01autoremove-kernels: "^linux-backports-modules-.*-4\.9\.0-8-amd64$";
      /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
      /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
      /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
      /etc/apt/preferences.d/openmediavault-kernel-backports.pref:Pin: release a=jessie-backports
      /etc/apt/sources.list.d/openmediavault-kernel-backports.list:deb httpredir.debian.org/debian jessie-backports main contrib non-free
      root@omv2:~#


      installed omv-extras for OMV 4

      ran this command to successful completion:
      apt-get install -t stretch-backports linux-headers-4.18.0-0.bpo.1-amd64

      rebooted

      System info still shows kernel 4.9.0-8

      Go to OMV-Extras
      Click Kernel
      Installed kernels drop-down only shows 4.9.0-8 entries
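
      (That would be expected - the linux-headers package above only installs the kernel headers, not a bootable kernel image; a quick way to see what is actually installed, as a sketch:)

      Source Code

      # Headers alone are not bootable; list installed kernel packages
      dpkg -l 'linux-image*' 'linux-headers*' | grep ^ii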


      Went back in and ran:
      apt-get install linux-image-(TAB)
      saw the 4.18 amd64 kernel on the list - and installed it successfully.

      Went back to OMV-Extras / Kernel.
      4.18 now shows up in the installed kernels drop-down
      select it and click "set as default boot kernel"

      rebooted

      System info now shows the kernel as:
      Linux 4.18.0-0.bpo.1-amd64

      Raid File System still shows as missing ;(

      Went to Update-Management and upgraded all available packages

      Then ran:
      apt-get update
      apt-get upgrade
      still missing file system

      rebooted

      file system still missing

      any thoughts??

      thanks

    • If /proc/mdstat still shows the array in auto-read-only mode, now would be the time to execute the mdadm --readwrite /dev/md127 command. After that, post the output of cat /proc/mdstat again.
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!