I'll get this error even with dpkg-dev installed ...
Well, I know dpkg-dev provides the command. Not sure why it still fails.
I'll get this error even with dpkg-dev installed ...
Well, I know dpkg-dev provides the command. Not sure why it still fails.
I set up a VM on another machine to do some testing. I followed the same procedure as I did on my NAS build.
Installed a fresh OMV4, patched everything incl. kernel 4.16, installed OMV-Extras, rebooted, installed dpkg-dev:
Setting up linux-kbuild-4.16 (4.16.5-1~bpo9+1) ...
Setting up zfs-dkms (0.7.9-2) ...
Loading new zfs-0.7.9 DKMS files...
Building for 4.14.0-0.bpo.3-amd64 4.16.0-0.bpo.1-amd64
Module build for kernel 4.14.0-0.bpo.3-amd64 was skipped since the
kernel headers for this kernel does not seem to be installed.
Building initial module for 4.16.0-0.bpo.1-amd64
configure: error:
*** Please make sure the kmod spl devel <kernel> package for your
*** distribution is installed then try again. If that fails you
*** can specify the location of the spl objects with the
*** '--with-spl-obj=PATH' option. Failed to find spl_config.h in
*** any of the following:
/usr/src/spl-0.7.9/4.16.0-0.bpo.1-amd64
/usr/src/spl-0.7.9
Error! Bad return status for module build on kernel: 4.16.0-0.bpo.1-amd64 (x86_64)
Consult /var/lib/dkms/zfs/0.7.9/build/make.log for more information.
Setting up linux-headers-4.16.0-0.bpo.1-amd64 (4.16.5-1~bpo9+1) ...
/etc/kernel/header_postinst.d/dkms:
cp: cannot stat '/var/lib/dkms/spl/0.7.9/build/spl_config.h': No such file or directory
cp: cannot stat '/var/lib/dkms/spl/0.7.9/build/module/Module.symvers': No such file or directory
Processing triggers for openmediavault (4.1.6) ...
Restarting engine daemon ...
Setting up libzpool2linux (0.7.9-2) ...
Setting up linux-headers-amd64 (4.16+93~bpo9+1) ...
Setting up libzfs2linux (0.7.9-2) ...
Setting up zfsutils-linux (0.7.9-2) ...
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service → /lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service → /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service → /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service → /lib/systemd/system/zfs-share.service.
Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target → /lib/systemd/system/zfs.target.
zfs-import-scan.service is a disabled or a static unit, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
zfs-mount.service couldn't start.
Job for zfs-share.service failed because the control process exited with error code.
See "systemctl status zfs-share.service" and "journalctl -xe" for details.
zfs-share.service couldn't start.
Setting up zfs-zed (0.7.9-2) ...
Created symlink /etc/systemd/system/zed.service → /lib/systemd/system/zfs-zed.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service → /lib/systemd/system/zfs-zed.service.
Setting up openmediavault-zfs (4.0.2-1) ...
modprobe: FATAL: Module zfs not found in directory /lib/modules/4.14.0-0.bpo.3-amd64
dpkg: error processing package openmediavault-zfs (--configure):
 subprocess installed post-installation script returned error exit status 1
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for openmediavault (4.1.6) ...
Restarting engine daemon ...
Errors were encountered while processing:
openmediavault-zfs
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f3abbb19730>
Traceback (most recent call last):
File "/usr/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f3abbb19730>
Traceback (most recent call last):
File "/usr/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@openmediavault:~#
So pretty much the same as on my main machine.
Edit: After uninstalling and reinstalling the ZFS plugin, it works on the VM machine ... don't know why ...
Now I'll try it on the NAS machine.
Edit2: I set up LUKS with auto-unlock at boot and my ZFS on top. After each reboot my pool is gone but can be reimported. The drives are unlocked, but all shared folders have an N/A path. The error message is
Failed to execute XPath query '//system/fstab/mntent[uuid='70a6b2de-e75b-47c6-a0ed-0a56ec382d7f']'.
This occurs after messing with mysql (thread here)
It might have to do with an attached USB backup drive I use for restoring my data?! uuid='70a6b2de-e75b-47c6-a0ed-0a56ec382d7f' is my USB drive. Trying to readjust the paths of my shared folders to my ZFS datasets gives me a similar error:
Failed to execute XPath query '//system/fstab/mntent[uuid='39efbf6e-c32f-4ccb-88aa-87b53f9d3190']'.
It seems to be similar to that issue?
I use my ZFS pool and assigned a different dataset to each shared folder. During creation of my shared folders I chose the corresponding "drive" or dataset as the path. The shares get UUIDs in config.xml, but are no longer found?! My shared folder definitions in config.xml look like this (a quick way to check whether the referenced mntent still exists is sketched after the snippet):
<sharedfolder>
<uuid>88aa1bf4-e5e1-4cf2-a909-5dd5b0831b84</uuid>
<name>ablage</name>
<comment></comment>
<mntentref>39efbf6e-c32f-4ccb-88aa-87b53f9d3190</mntentref>
<reldirpath>ablage/</reldirpath>
<privileges></privileges>
</sharedfolder>
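A minimal way to check whether the mntentref a shared folder points at still has a matching <mntent> entry, assuming the default config path /etc/openmediavault/config.xml (xmlstarlet may need to be installed first; the UUID is the one from the error above):

# Does config.xml still contain an <mntent> carrying that uuid? (0 hits = the reference is dangling)
grep -c "39efbf6e-c32f-4ccb-88aa-87b53f9d3190" /etc/openmediavault/config.xml
# Or, more precisely, the same XPath the error message uses:
xmlstarlet sel -t -c "//system/fstab/mntent[uuid='39efbf6e-c32f-4ccb-88aa-87b53f9d3190']" /etc/openmediavault/config.xml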
Okay, creating a new dataset works. After creating a new dataset, my config.xml has the following entry:
<fstab>
<mntent>
<uuid>34297f48-6906-4f27-85bc-de16ba3c4ad9</uuid>
<fsname>Storage/test</fsname>
<dir>/Storage/test</dir>
<type>zfs</type>
<opts>rw,relatime,xattr,noacl</opts>
<freq>0</freq>
<passno>0</passno>
<hidden>1</hidden>
</mntent>
</fstab>
but none for the other datasets. A little further down in config.xml:
<sharedfolder>
<uuid>6fb0a79b-062a-469b-9c09-ff0c1d2e49ff</uuid>
<name>test</name>
<comment></comment>
<mntentref>34297f48-6906-4f27-85bc-de16ba3c4ad9</mntentref>
<reldirpath>test/</reldirpath>
<privileges></privileges>
</sharedfolder>
So, can I reassign the correct UUIDs to my datasets in the fstab section to fix my problem?
EDIT: EUREKA! The solution was much easier:
Creating my ZFS pool Storage and a dataset Storage/backup and applying a shared folder to it dropped a new folder with the same name, e.g. /Storage/backup/backup. Whatever happened to my pool left me with a directory structure /Storage/backup/backup. Since /Storage/backup is the mountpoint for my dataset, zfs mount -a could not mount to that location because the folder was not empty. Simply deleting the lowest folder (in my example /Storage/backup/backup) solved the problem for all datasets.
It's a great day!
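For anyone hitting the same thing, the cleanup was roughly this (a sketch using the dataset from my example; since the mount already failed, /Storage/backup is just a plain directory at this point):

ls -lA /Storage/backup           # should show only the stray "backup" folder and nothing else
rmdir /Storage/backup/backup     # rmdir refuses to delete a non-empty folder, which is the safe default
zfs mount -a                     # with the mountpoint empty again, all datasets mount cleanly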
So I've been trying to install OMV and ZFS for hours now; no matter how much I try and read, it won't work. Can somebody please tell me which version of OMV to install and what to do to get it working? I'm out of ideas. I used OMV 2 until today, but my SSD died on me and of course I didn't make a system drive backup.
Here is what I just did (all commands executed as root):
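Roughly (from memory, so treat this as a sketch rather than the exact commands; the OMV-Extras package name and .deb URL may differ, grab the current one from omv-extras.org):

apt-get update && apt-get dist-upgrade              # bring the fresh install fully up to date
wget -O omvextras.deb <OMV-Extras plugin .deb URL>  # placeholder URL, use the one from omv-extras.org
dpkg -i omvextras.deb && apt-get -f install         # install OMV-Extras and anything it needs
# enable the backports kernel and headers via OMV-Extras in the web UI, then:
apt-get install openmediavault-zfs                  # pulls in zfs-dkms, spl-dkms, zfsutils-linux
reboot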
Everything seems to be working.
Do you mean the 4.1.3.iso or the 4.0.14.iso? I don't find a 4.0.13.iso.
Thanks a lot for your help
Do you mean the 4.1.3.iso or the 4.0.14.iso? I don't find a 4.0.13.iso.
Typo fixed, but it doesn't matter since the system is upgraded to the latest packages before doing anything with ZFS.
Okay, now we're in business, thanks a lot. I imported the pool, but how can I access my old folders and data?
Any idea how to move from kernel 4.14 with ZFS 0.7.6 to kernel 4.16 with ZFS 0.7.9 ?
Any idea how to move from kernel 4.14 with ZFS 0.7.6 to kernel 4.16 with ZFS 0.7.9 ?
I can't test because the 4.14 kernel headers aren't available anymore. I would back up your system before trying this (a rough sketch of the commands follows the list):
Remove the 4.14 kernel headers
Install the 4.16 kernel and ZFS 0.7.9 at the same time. It should skip compiling the ZFS module for the 4.14 kernel since the headers aren't there.
Reboot using the 4.16 kernel
Remove the 4.14 kernel
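Roughly like this, assuming a repo providing zfs 0.7.9 is already enabled and using the package names from the build log earlier in the thread; a starting point, not exact commands:

apt-get purge linux-headers-4.14.0-0.bpo.3-amd64     # step 1: drop the 4.14 headers
apt-get install linux-image-4.16.0-0.bpo.1-amd64 \
  linux-headers-4.16.0-0.bpo.1-amd64 \
  zfs-dkms zfsutils-linux                            # step 2: 4.16 kernel + zfs 0.7.9 in one transaction
reboot                                               # step 3: boot the 4.16 kernel
apt-get purge linux-image-4.14.0-0.bpo.3-amd64       # step 4: remove the old 4.14 kernel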
but how can I access my old folders and data?
Create shared folders and configure services to access it. You should be able to see the data on the command line as well.
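For example, a minimal sketch (pool and dataset names are placeholders):

zfs list -o name,mountpoint      # shows every dataset and where it is mounted
ls -lA /<poolname>/<dataset>     # your old files should already be visible at the mountpoint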
The problem is that I can't access the device in the drop-down menu in the shared folder section (and yes, I did import it in the OMV interface). Any idea how to fix that?
That is a different issue. See here: ZFS device(s) not listed in devices dropdown
I have my ZFS running quite well right now, but I struggle with auto-snapshots and scrubs.
I stuck to this old thread to set up my snapshots; for example, for my backup folder mounted to
it looks like this:
/sbin/zfs list -t snapshot -o name | /bin/grep Storage/backup@backup_ | /usr/bin/sort -r | /usr/bin/tail -n +30 | /usr/bin/xargs -n 1 /sbin/zfs destroy -r
Running this manually over SSH works, but it doesn't work as a scheduled task.
Any advice on how to get these things to work?
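One thing that often helps with scheduled tasks is putting the pipeline into a small script and pointing the task at that script, so everything runs in one shell with explicit paths. A minimal sketch (the script name is made up; the pipeline is the one from above):

#!/bin/sh
# /usr/local/bin/prune-backup-snapshots.sh  (hypothetical name)
# Keep the newest Storage/backup@backup_* snapshots, destroy the rest.
/sbin/zfs list -t snapshot -o name -H \
  | /bin/grep 'Storage/backup@backup_' \
  | /usr/bin/sort -r \
  | /usr/bin/tail -n +30 \
  | /usr/bin/xargs -r -n 1 /sbin/zfs destroy -r

Make it executable with chmod +x and enter /usr/local/bin/prune-backup-snapshots.sh as the command of the scheduled job.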
I can't test because the 4.14 kernel headers aren't available anymore. I would back up your system before trying this:
Remove the 4.14 kernel headers
Install the 4.16 kernel and ZFS 0.7.9 at the same time. It should skip compiling the ZFS module for the 4.14 kernel since the headers aren't there.
Reboot using the 4.16 kernel
Remove the 4.14 kernel
I'll be happy to try, but how can I get the ZFS 0.7.9 packages?
The only version I have is 0.7.6 from stretch-backports:
root@home-server:/usr/local/src# apt-cache show zfs-dkms
Package: zfs-dkms
Source: zfs-linux
Version: 0.7.6-1~bpo9+1
Installed-Size: 10436
Maintainer: Debian ZFS on Linux maintainers <pkg-zfsonlinux-devel@lists.alioth.debian.org>
Architecture: all
Provides: zfs-modules
Depends: dkms (>> 2.1.1.2-5), lsb-release, debconf (>= 0.5) | debconf-2.0
Pre-Depends: spl-dkms (<< 0.7.6.), spl-dkms (>= 0.7.6)
Recommends: zfs-zed, zfsutils-linux (>= 0.7.6-1~bpo9+1), linux-libc-dev (<= 4.16)
EDIT: Found them in OMV-Extras Testing.
Hi guys,
at the moment I am replacing the 8x 4 TB WD Red disks of my pool with 8x 10 TB WD Red disks, one after the other. The first disk is resilvering now. To get the full disk space after resilvering the last disk, you have to enable "autoexpand" on the pool. So I did the following at the command line:
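Roughly the standard zpool property change (a sketch; the pool name mediatank is taken from the zpool status output further down):

zpool set autoexpand=on mediatank     # let vdevs grow to the size of the new, larger disks
zpool get autoexpand mediatank        # verify the property now reads "on"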
So, I wanted to check whether I can see that setting in the omv-zfs plugin. Yesterday this wasn't the case. Is that normal behavior?
If yes: feature request: would it be possible for the plugin to read all the ZFS settings and make them configurable in the OMV web UI?
But today I have a problem with the ZFS plugin. If I go to "Storage - ZFS" I only see "loading", which ends in a "communication failure". Have a look at the following screenshots:
About a minute later I get the following message:
If I click "ok", I see nothing:
But my pool is still reachable via SMB and I can see all my ZFS file systems under "Storage - File Systems" in the OMV web UI. Syslog gives me the following output:
Jun 1 08:33:52 omv4 zed: eid=20 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:33:55 omv4 zed: eid=21 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4KYX7HZ-part1 vdev_state=ONLINE
Jun 1 08:33:55 omv4 zed: eid=22 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E3NV2D2F-part1 vdev_state=ONLINE
Jun 1 08:33:55 omv4 zed: eid=23 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E5EC1TZ9-part1 vdev_state=ONLINE
Jun 1 08:33:56 omv4 zed: eid=24 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6LA42K7-part1 vdev_state=ONLINE
Jun 1 08:33:56 omv4 zed: eid=25 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6LA4ZJ8-part1 vdev_state=ONLINE
Jun 1 08:33:56 omv4 zed: eid=26 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7HP68ZE-part1 vdev_state=UNAVAIL
Jun 1 08:33:56 omv4 zed: eid=27 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD100EFAX-68LHPN0_2TJZH1PD-part1 vdev_state=ONLINE
Jun 1 08:33:56 omv4 zed: eid=28 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7XKY0DJ-part1 vdev_state=ONLINE
Jun 1 08:33:56 omv4 zed: eid=29 class=vdev_autoexpand pool_guid=0xD62DC0706B1DC411 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7XKY1AN-part1 vdev_state=ONLINE
Jun 1 08:38:43 omv4 zed: eid=30 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:38:50 omv4 zed: eid=31 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:38:57 omv4 zed: eid=32 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:00 omv4 systemd[1]: Starting Clean php session files...
Jun 1 08:39:00 omv4 systemd[1]: Started Clean php session files.
Jun 1 08:39:01 omv4 CRON[27388]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 1 08:39:04 omv4 zed: eid=33 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:11 omv4 zed: eid=34 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:18 omv4 zed: eid=35 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:25 omv4 zed: eid=36 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:32 omv4 zed: eid=37 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:39 omv4 zed: eid=38 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:46 omv4 zed: eid=39 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:53 omv4 zed: eid=40 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:39:53 omv4 zed: eid=41 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:00 omv4 zed: eid=42 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:07 omv4 zed: eid=43 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:14 omv4 zed: eid=44 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:21 omv4 zed: eid=45 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:28 omv4 zed: eid=46 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:35 omv4 zed: eid=47 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:35 omv4 zed: eid=48 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:42 omv4 zed: eid=49 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:42 omv4 zed: eid=50 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:49 omv4 zed: eid=51 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:49 omv4 zed: eid=52 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:56 omv4 zed: eid=53 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:40:56 omv4 zed: eid=54 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:03 omv4 zed: eid=55 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:03 omv4 zed: eid=56 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:10 omv4 zed: eid=57 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:17 omv4 zed: eid=58 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:24 omv4 zed: eid=59 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:31 omv4 zed: eid=60 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:38 omv4 zed: eid=61 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:41:45 omv4 zed: eid=62 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:45:01 omv4 CRON[2318]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 1 08:46:01 omv4 zed: eid=63 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:08 omv4 zed: eid=64 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:15 omv4 zed: eid=65 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:22 omv4 zed: eid=66 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:29 omv4 zed: eid=67 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:36 omv4 zed: eid=68 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:43 omv4 zed: eid=69 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:50 omv4 zed: eid=70 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:46:58 omv4 zed: eid=71 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:47:05 omv4 zed: eid=72 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:47:12 omv4 zed: eid=73 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:51:41 omv4 zed: eid=74 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:51:48 omv4 zed: eid=75 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:51:55 omv4 zed: eid=76 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:02 omv4 zed: eid=77 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:09 omv4 zed: eid=78 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:16 omv4 zed: eid=79 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:23 omv4 zed: eid=80 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:30 omv4 zed: eid=81 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:37 omv4 zed: eid=82 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:44 omv4 zed: eid=83 class=history_event pool_guid=0xD62DC0706B1DC411
Jun 1 08:52:52 omv4 zed: eid=84 class=history_event pool_guid=0xD62DC0706B1DC411
"zpool status" looks as expected:
root@omv4:~# zpool status
  pool: mediatank
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu May 31 12:15:43 2018
        19,8T scanned out of 22,2T at 274M/s, 2h30m to go
        2,42T resilvered, 89,38% done
config:

        NAME                                           STATE     READ WRITE CKSUM
        mediatank                                      DEGRADED     0     0     0
          raidz2-0                                     DEGRADED     0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E4KYX7HZ   ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E3NV2D2F   ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E5EC1TZ9   ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6LA42K7   ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6LA4ZJ8   ONLINE       0     0     0
            replacing-5                                DEGRADED     0     0     0
              4772221678079379963                      UNAVAIL      0     0     0  was /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7HP68ZE-part1
              ata-WDC_WD100EFAX-68LHPN0_2TJZH1PD       ONLINE       0     0     0  (resilvering)
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7XKY0DJ   ONLINE       0     0     0
            ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E7XKY1AN   ONLINE       0     0     0

errors: No known data errors
Maybe I shouldn't change the configuration of my pool while resilvering.
The resilvering of the first replaced disk still needs three hours. I think the problem will be solved after shutting down my server to replace the second disk and restarting it.
EDIT: OK, problem solved. After the resilvering of the first disk, the "Storage - ZFS" section in the OMV web UI works as expected again.
Regards Hoppel
I noticed something a long time ago which is a bit annoying but strangely doesn't affect operation:
Jun 05 17:50:28 nasbox systemd[1]: Starting Mount ZFS filesystems...
Jun 05 17:50:28 nasbox zfs[2123]: cannot mount '/mnt/storage': directory is not empty
Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 17:50:30 nasbox systemd[1]: Failed to start Mount ZFS filesystems.
Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
If I restart in single-user mode and remove the directory, there is no error message. It happens again on the next reboot (which doesn't happen often, so I don't know what else could be the cause).
As I mentioned, everything looks like it's working perfectly. Any idea why this is happening?
I don't see that we had the same problem
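A quick way to see what is blocking the mountpoint before zfs-mount.service runs; a sketch with a placeholder dataset name, adjust it to your pool:

zfs get mountpoint,mounted <pool>/<dataset>   # confirm the mountpoint is /mnt/storage and that it is NOT mounted
ls -lA /mnt/storage                           # with the dataset unmounted, anything listed here is the blocker
# if those leftovers are expendable, clear them and remount:
# rm -rf /mnt/storage/*  &&  zfs mount -a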
Well, I know dpkg-dev provides the command. Not sure why it still fails.
The problem is probably that the dpkg-dev package is not a dependency of the zfs-dkms package in the Debian repositories.
The problem is probably that the dpkg-dev package is not a dependency of the zfs-dkms package in the Debian repositories.
The command fails even if the package is installed. Therefore, adding it as a dependency would not help.