Failure while configuring zfsutils-linux after update to 4.19.0-0.bpo.6-amd64

    • OMV 4.x
    • Resolved
    • Update



      Hi OMV community,
      I ran into a problem while updating my OMV machine.
      It seems that zfsutils-linux and zfs-zed won't update.

      The output from apt -f install:

      Source Code

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Setting up zfsutils-linux (0.7.12-2+deb10u1~bpo9+1) ...
      Job for zfs-mount.service failed because the control process exited with error code.
      See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
      invoke-rc.d: initscript zfs-mount, action "restart" failed.
      ● zfs-mount.service - Mount ZFS filesystems
      Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
      Active: failed (Result: exit-code) since Sat 2020-01-04 18:28:37 CET; 5ms ago
      Docs: man:zfs(8)
      Process: 15158 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
      Main PID: 15158 (code=exited, status=1/FAILURE)
      CPU: 5ms
      Jan 04 18:28:36 fvmnas systemd[1]: Starting Mount ZFS filesystems...
      Jan 04 18:28:37 fvmnas zfs[15158]: cannot mount '/mnt/fvm/fvm': directory is not empty
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
      Jan 04 18:28:37 fvmnas systemd[1]: Failed to start Mount ZFS filesystems.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Unit entered failed state.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
      dpkg: error processing package zfsutils-linux (--configure):
       subprocess installed post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of zfs-zed:
       zfs-zed depends on zfsutils-linux (>= 0.7.12-2+deb10u1~bpo9+1); however:
        Package zfsutils-linux is not configured yet.
      dpkg: error processing package zfs-zed (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       zfsutils-linux
       zfs-zed
      E: Sub-process /usr/bin/dpkg returned an error code (1)



      Output from systemctl status zfs-mount.service:

      Source Code

      ● zfs-mount.service - Mount ZFS filesystems
      Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
      Active: failed (Result: exit-code) since Sat 2020-01-04 18:28:37 CET; 2min 48s ago
      Docs: man:zfs(8)
      Process: 15158 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
      Main PID: 15158 (code=exited, status=1/FAILURE)
      CPU: 5ms
      Jan 04 18:28:36 fvmnas systemd[1]: Starting Mount ZFS filesystems...
      Jan 04 18:28:37 fvmnas zfs[15158]: cannot mount '/mnt/fvm/fvm': directory is not empty
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
      Jan 04 18:28:37 fvmnas systemd[1]: Failed to start Mount ZFS filesystems.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Unit entered failed state.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
      and journalctl -xe:

      Source Code

      Jan 04 18:28:36 fvmnas systemd[1]: anacron.timer: Adding 1min 39.805182s random time.
      Jan 04 18:28:36 fvmnas systemd[1]: apt-daily-upgrade.timer: Adding 47min 27.495642s random time.
      Jan 04 18:28:36 fvmnas systemd[1]: apt-daily.timer: Adding 8h 29min 10.104225s random time.
      Jan 04 18:28:36 fvmnas systemd[1]: Reloading.
      Jan 04 18:28:36 fvmnas systemd[1]: anacron.timer: Adding 1min 16.234237s random time.
      Jan 04 18:28:36 fvmnas systemd[1]: Starting Mount ZFS filesystems...
      -- Subject: Unit zfs-mount.service has begun start-up
      -- Defined-By: systemd
      -- Support: https://www.debian.org/support
      --
      -- Unit zfs-mount.service has begun starting up.
      Jan 04 18:28:37 fvmnas zfs[15158]: cannot mount '/mnt/fvm/fvm': directory is not empty
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
      Jan 04 18:28:37 fvmnas systemd[1]: Failed to start Mount ZFS filesystems.
      -- Subject: Unit zfs-mount.service has failed
      -- Defined-By: systemd
      -- Support: https://www.debian.org/support
      --
      -- Unit zfs-mount.service has failed.
      --
      -- The result is failed.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Unit entered failed state.
      Jan 04 18:28:37 fvmnas systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
      Jan 04 18:30:01 fvmnas CRON[15262]: pam_unix(cron:session): session opened for user root by (uid=0)
      Jan 04 18:30:01 fvmnas CRON[15263]: pam_unix(cron:session): session opened for user root by (uid=0)
      Jan 04 18:30:01 fvmnas CRON[15264]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
      Jan 04 18:30:01 fvmnas CRON[15265]: (root) CMD (/usr/sbin/omv-mkgraph >/dev/null 2>&1)
      Jan 04 18:30:01 fvmnas CRON[15263]: pam_unix(cron:session): session closed for user root
      Jan 04 18:30:02 fvmnas CRON[15262]: pam_unix(cron:session): session closed for user root
      I am running the standard Debian kernel (not the Proxmox kernel).
      Is there an easy way to fix this?
      Any help would be greatly appreciated!

      Many Thanks
      vln0x


    • So I fixed it myself and will leave what worked for me here in case other people run into this issue.
      ***If this is not wanted by the admins, feel free to remove this thread***


      To fix it, I unmounted all ZFS filesystems with zfs unmount -f [zfsmount] and temporarily removed the mountpoint directories, so the paths where the pool should be mounted were completely empty.
      After that I was able to run the update with apt -f install. Then you only need to recreate the mountpoints and mount the ZFS filesystems again. I had some problems mounting the filesystems afterwards, but they were gone after a reboot.
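      The steps above can be sketched roughly as follows. This is a sketch, not an exact transcript of what I ran: it assumes the single mountpoint /mnt/fvm/fvm from the logs above, and moves any stray contents aside rather than deleting them, so substitute your own dataset names and paths (zfs list shows them).

      ```shell
      # Show all datasets and where they are supposed to mount
      zfs list -o name,mountpoint

      # Unmount every ZFS filesystem (use -f on a specific dataset
      # only if a plain unmount fails because something holds it open)
      zfs unmount -a

      # ZFS refuses to mount over a non-empty directory, so move the
      # stray contents aside and leave an empty mountpoint behind
      mv /mnt/fvm/fvm /mnt/fvm/fvm.bak
      mkdir -p /mnt/fvm/fvm

      # Re-run the interrupted package configuration; zfs-mount.service
      # should now restart cleanly and zfsutils-linux/zfs-zed configure
      apt -f install

      # Mount everything again (a reboot achieves the same)
      zfs mount -a
      ```

      Afterwards it is worth checking what ended up in the .bak directory; in my case it was data that a service had written into the empty mountpoint while the pool was not imported. Depending on your ZFS version there is also an overlay mount option (see man zfs), but emptying the mountpoint was enough here.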