The ZFS modules are not loaded after reboot

  • Today I found that my NAS had stopped working.


    Here are the symptoms of the problem:


    1. The ZFS module has not been loaded since Jul 1 07:00:33

    Code
    $ less /var/log/syslog|grep 'The ZFS modules are not loaded'
    Jul  1 07:00:33 openmediavault zpool[505]: The ZFS modules are not loaded.
    Jul  1 07:00:33 openmediavault zfs[507]: The ZFS modules are not loaded.
    Jul  1 07:00:33 openmediavault zvol_wait[508]: The ZFS modules are not loaded.
    ...
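
    A quick way to confirm that the kernel module really is not loaded (generic commands, not taken from my logs):

    Code
    $ lsmod | grep -w zfs    # no output = module not loaded
    $ modinfo -n zfs         # prints the module file modprobe would load, or an error if it is missing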

    It must be related to the reboot, because I have a routine reboot set up (in /etc/crontab) at 7:00 am on the first day of every month. It never produced an error until today.

    It seems that the ZFS module broke during this reboot (ZFS was probably still working up to that point: I have an automatic snapshot at 3:00 am and a snapshot destroy at 5:00 am every day).
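
    For context, the relevant /etc/crontab entries look roughly like this (the commands and dataset name are illustrative from memory; only the schedules matter here):

    Code
    # m h dom mon dow user  command
    0 7 1 * *  root  /sbin/shutdown -r now    # monthly reboot, 7:00 am on the 1st
    0 3 * * *  root  zfs snapshot nas@auto    # daily snapshot at 3:00 am
    0 5 * * *  root  zfs destroy nas@auto     # destroy it again at 5:00 am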


    Other information also supports the ZFS-failure hypothesis. For example, there are error messages in the web GUI:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; zfs list -H -t snapshot -o name,used,refer 2>&1' with exit code '1': The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them.

    When I try /sbin/modprobe zfs:

    Code
    $ /sbin/modprobe zfs
    modprobe: FATAL: Module zfs not found in directory /lib/modules/5.5.0-0.bpo.2-amd64
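
    Since ZFS on the stock Debian kernel is a DKMS module, the following (assuming zfs-dkms is installed) shows whether the module was ever built for the running kernel:

    Code
    $ dkms status                                 # lists the kernels zfs was built for
    $ ls /lib/modules/$(uname -r)/updates/dkms/   # a built zfs.ko would normally be here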

    At the same time, S.M.A.R.T. also stopped working, but I'm not sure whether the two are related:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; smartctl -x '/dev/sda' 2>&1' with exit code '1': smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.5.0-0.bpo.2-amd64] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org /var/lib/smartmontools/drivedb/drivedb.h(5775): Syntax error, '"' expected
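
    The drivedb error looks like a corrupted drive database rather than anything ZFS-related. If your install ships the updater script, refreshing the database may be worth a try (this is an assumption, and a separate issue from the missing module):

    Code
    $ /usr/sbin/update-smart-drivedb    # rewrites /var/lib/smartmontools/drivedb/drivedb.h
    $ smartctl -x /dev/sda              # re-check afterwards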


    2. The disks are not damaged.

    Since a reboot can sometimes damage disks, I tried fdisk -l:

    All of the disks were identified by the system, so I think they are OK.
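
    A compact way to run the same check, in case it helps anyone (generic commands, not my actual session):

    Code
    $ fdisk -l | grep '^Disk /dev/sd'    # one line per detected disk
    $ lsblk -o NAME,SIZE,TYPE,MODEL      # alternative overview of all block devices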


    3. System information of my NAS:

    Code
    Linux openmediavault.local 5.5.0-0.bpo.2-amd64 #1 SMP Debian 5.5.17-1~bpo10+1 (2020-04-23) x86_64 GNU/Linux


    Conclusion:

    ZFS is not working now, so none of my files are accessible. The cause is most likely the missing ZFS kernel module.

    How can I fix it? ;(

    • Official Post

    Use the proxmox kernel instead of the debian kernel. There is a button to install the proxmox kernel in omv-extras. This kernel has the module built-in.
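
    After installing it and rebooting, roughly the following confirms the new kernel and the built-in module:

    Code
    $ uname -r               # should now show a pve kernel
    $ modinfo zfs | head -3  # module info is available without a dkms build
    $ lsmod | grep -w zfs    # confirm the module is loaded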


  • Use the proxmox kernel instead of the debian kernel. There is a button to install the proxmox kernel in omv-extras. This kernel has the module built-in.

    Thank you, that worked and ZFS is back. Thank you very much!!


    However, something strange was still going on.


    First, check the storage system:

    Code
    $ df -lh
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           383M  7.3M  376M   2% /run
    /dev/sda1        11G  5.2G  5.1G  51% /
    tmpfs           1.9G     0  1.9G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
    tmpfs           1.9G   12K  1.9G   1% /tmp
    tmpfs           383M     0  383M   0% /run/user/0

    Where is /nas (6.83 TiB), the mount point of my ZFS storage?


    After some searching, I solved the problem. Here's the solution:


    First, check the mount status of zfs.
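
    I no longer have the exact output, but roughly these commands show whether the pool imported and whether its datasets actually got mounted (pool name "nas" assumed):

    Code
    $ zpool list                              # is the pool imported at all?
    $ zfs list -o name,mountpoint,mounted     # "mounted" should say yes
    $ systemctl status zfs-mount              # the service that runs "zfs mount -a" at boot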

    According to what that check showed, it's most likely that the leftover fake "/nas" directory on the system disk was not empty, which prevented the ZFS dataset from being mounted on top of it.


    The telltale sign of the fake "/nas" is its tiny size (a few bytes or kilobytes, small in any case), which is nothing like the real 6.83 TiB dataset. The /nas dataset had not been mounted, but the software kept writing to that path, leaving these stub folders behind on the system disk.
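
    One way to spot the stubs (expected results are described in the comments, not taken from my actual output):

    Code
    $ df -h /nas          # shows the root filesystem, not a zfs dataset, if /nas never mounted
    $ du -sh /nas/*       # the stub folders are only a few KB, nothing like 6.83 TiB
    $ zfs get mounted nas # "no" confirms the dataset is not mounted (pool name assumed)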


    Second, based on the answer I found in the thread "ZFS not mounting - OMV 4.x", I followed these steps:


    1. Uninstall the ZFS plugin.

    2. Delete all of the fake (empty) folders in /nas (see the sketch after this list).

    3. Reinstall the ZFS plugin. An error like "Failed to read from socket: Connection reset by peer" may appear, but it doesn't matter.

    4. Reboot.
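
    For step 2, it boils down to something like this (exact commands from memory; make absolutely sure /nas is not a mounted ZFS dataset and only contains the tiny stub folders before deleting anything):

    Code
    $ df -h /nas     # must show the root filesystem, not a zfs dataset
    $ du -sh /nas/*  # everything here should be a few KB at most
    $ rm -rf /nas/*  # remove the empty stand-in folders
    $ ls -la /nas    # should now be completely empty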


    Because /nas was now empty, ZFS could mount it automatically at boot. Finally it works!


    Check the disk status again:

    /nas is back!


    Code
    $ systemctl status zfs-mount
    ● zfs-mount.service - Mount ZFS filesystems
       Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
       Active: active (exited) since Thu 2020-07-02 09:08:14 CST; 7min ago
         Docs: man:zfs(8)
      Process: 1019 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
     Main PID: 1019 (code=exited, status=0/SUCCESS)
    
    Jul 02 09:08:14 openmediavault.local systemd[1]: Starting Mount ZFS filesystems...
    Jul 02 09:08:14 openmediavault.local systemd[1]: Started Mount ZFS filesystems.

    zfs-mount.service is now running. Happy ending.
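
    For completeness, something like this should now show the pool mounted with its full size again:

    Code
    $ df -h /nas                               # ~6.83T on a zfs filesystem
    $ zfs list -o name,used,avail,mountpoint   # dataset mounted at /nas
    $ zpool status -x                          # "all pools are healthy" if nothing else is wrong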
