Posts by kapoira

    Little by little: after deleting the extra filesystem that I had created by mistake, and renaming the dataset back to its old name,


    it seems that both Storage->ZFS and Storage->Sistemas de archivos (File Systems) in openmediavault are OK.


    I had to delete the NFS share first; only after that could I delete the shared folder...


    After reconfiguring the shared folders, ZFS seems to be working; now I'm working on getting the ZFS dataset shared via NFS again...


    AND IT WORKS ;) (I changed the title to SOLVED)
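
    For reference, the export can be sanity-checked from the shell (standard NFS tooling; the client mount below is just an example, adjust the host name to your setup):


    # on the server: confirm the bind mount and the export are live
    mount | grep /export/Tokyo
    showmount -e localhost

    # on a client: try mounting the export
    mount -t nfs tokyo:/export/Tokyo /mnt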

    OK, I'm seeing the data:


    root@tokyo:~# zfs mount Tokyopool/Tokyo

    cannot open 'Tokyopool/Tokyo': dataset does not exist

    root@tokyo:~# zfs mount Tokyopool/Tokyo2

    cannot mount 'Tokyopool/Tokyo2': filesystem already mounted

    root@tokyo:~# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 3.9G 0 3.9G 0% /dev

    tmpfs 790M 8.9M 781M 2% /run

    /dev/sda2 102G 4.1G 93G 5% /

    tmpfs 3.9G 0 3.9G 0% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup

    tmpfs 3.9G 0 3.9G 0% /tmp

    /dev/sda1 511M 132K 511M 1% /boot/efi

    Tokyopool 3.7T 128K 3.7T 1% /Tokyopool

    Tokyopool/Tokyo2 7.1T 3.4T 3.7T 48% /Tokyopool/Tokyo2

    root@tokyo:~# cd /Tokyopool/Tokyo2/

    root@tokyo:/Tokyopool/Tokyo2# ls

    Tokyo

    root@tokyo:/Tokyopool/Tokyo2# cd Tokyo/

    root@tokyo:/Tokyopool/Tokyo2/Tokyo# ls

    Series


    Before that, neither Tokyopool nor Tokyopool/Tokyo2 was mounted... ¿?¿?
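
    In case it helps someone, the mounted state of every dataset can be checked directly, and anything unmounted can be mounted in one go (standard zfs commands):


    # show which datasets ZFS thinks are mounted, and where
    zfs list -o name,mounted,mountpoint

    # mount every dataset that has a valid mountpoint
    zfs mount -a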


    At least I have confirmed that the data is not lost, but now I have a configuration mismatch between ZFS and openmediavault.

    Via SSH I get:


    root@tokyo:~# zfs get all


    NAME PROPERTY VALUE SOURCE

    Tokyopool type filesystem -

    Tokyopool creation Sun Aug 25 0:55 2019 -

    Tokyopool used 3.34T -

    Tokyopool available 3.68T -

    Tokyopool referenced 139K -

    Tokyopool compressratio 1.00x -

    Tokyopool mounted yes -

    ...

    Tokyopool guid 3737134360514090584 -

    ...

    Tokyopool omvzfsplugin:uuid 6ac263a5-6605-4ba9-b050-a83da04f1f1c local

    Tokyopool/Tokyo2 type filesystem -

    Tokyopool/Tokyo2 creation Sun Aug 25 0:56 2019 -

    Tokyopool/Tokyo2 used 3.34T -

    Tokyopool/Tokyo2 available 3.68T -

    Tokyopool/Tokyo2 referenced 3.34T -

    Tokyopool/Tokyo2 compressratio 1.00x -

    Tokyopool/Tokyo2 mounted yes -

    Tokyopool/Tokyo2 quota none default

    Tokyopool/Tokyo2 reservation none default

    Tokyopool/Tokyo2 recordsize 128K default

    Tokyopool/Tokyo2 mountpoint /Tokyopool/Tokyo2 default

    Tokyopool/Tokyo2 sharenfs off default

    ...

    Tokyopool/Tokyo2 guid 8543986563402028725 -

    ...

    Tokyopool/Tokyo2 omvzfsplugin:uuid 8baec714-a543-42d9-b231-fb7b2c45679b local

    Tokyopool/Tokyo2/Tokyo2 type filesystem -

    Tokyopool/Tokyo2/Tokyo2 creation Wed Apr 1 14:18 2020 -

    Tokyopool/Tokyo2/Tokyo2 used 128K -

    Tokyopool/Tokyo2/Tokyo2 available 3.68T -

    Tokyopool/Tokyo2/Tokyo2 referenced 128K -

    Tokyopool/Tokyo2/Tokyo2 compressratio 1.00x -

    Tokyopool/Tokyo2/Tokyo2 mounted yes -

    Tokyopool/Tokyo2/Tokyo2 quota none default

    Tokyopool/Tokyo2/Tokyo2 reservation none default

    Tokyopool/Tokyo2/Tokyo2 recordsize 128K default

    Tokyopool/Tokyo2/Tokyo2 mountpoint /Tokyopool/Tokyo2 local

    ...

    Tokyopool/Tokyo2/Tokyo2 omvzfsplugin:uuid 8baec714-a543-42d9-b231-fb7b2c45679b inherited from Tokyopool/Tokyo2



    And now I get:

    root@tokyo:~# omv-confdbadm read --prettify conf.system.filesystem.mountpoint

    [
        {
            "dir": "/Tokyopool",
            "freq": 0,
            "fsname": "Tokyopool",
            "hidden": true,
            "opts": "rw,relatime,xattr,noacl",
            "passno": 0,
            "type": "zfs",
            "uuid": "6ac263a5-6605-4ba9-b050-a83da04f1f1c"
        },
        {
            "dir": "/export/Tokyo",
            "freq": 0,
            "fsname": "/Tokyopool/Tokyo/Tokyo",
            "hidden": false,
            "opts": "bind,nofail,_netdev",
            "passno": 0,
            "type": "none",
            "uuid": "70b67db7-3f61-427f-a999-6a31a7694ded"
        },
        {
            "dir": "/Tokyopool/Tokyo2",
            "freq": 0,
            "fsname": "Tokyopool/Tokyo2",
            "hidden": true,
            "opts": "rw,relatime,xattr,noacl",
            "passno": 0,
            "type": "zfs",
            "uuid": "9d166b96-f35e-44ad-bbdc-324d016bfb7d"
        }
    ]



    Because I was trying to rename the pool's dataset... and now the GUI tells me that there are two objects for the same mountpoint:


    The configuration object 'conf.system.filesystem.mountpoint' is not unique. An object with the property 'dir' and value '/Tokyopool/Tokyo2' already exists.


    I want to delete the filesystem Tokyopool/Tokyo2/Tokyo2 from the zpool (it was created by mistake while trying to rename and copy the data),


    rename Tokyopool/Tokyo2 to Tokyopool/Tokyo,

    and delete the shared folder, so the ZFS dataset can be shared via NFS again.
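
    On the ZFS side, I believe the cleanup would look roughly like this (standard zfs commands; double-check the dataset names before destroying anything, since zfs destroy deletes the dataset's contents):


    # remove the dataset that was created by mistake (it only uses ~128K)
    zfs destroy Tokyopool/Tokyo2/Tokyo2

    # rename the surviving dataset back to its old name
    zfs rename Tokyopool/Tokyo2 Tokyopool/Tokyo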

    This is the problem: the system configuration has gone "corrupt", I suppose because of the mismatch between the kernel and ZFS.


    If that is correct, I will be left with the second object, which is the filesystem that is actually mounted.



    Now I have recovered the ZFS pool, but something from the previous configuration doesn't work... and I can't delete the shared folders or the datasets of the zpool.

    Now I have OMV 4.x running the Proxmox kernel.


    The problem is that I can see the zpool,


    but the system doesn't mount the filesystem of the share named Tokyo on the zpool...


    I suppose the data is still there, because zpool says:


    root@tokyo:~# zpool list

    NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT

    Tokyopool 10.9T 5.02T 5.86T - 1% 46% 1.00x ONLINE -

    root@tokyo:~# zpool status

    pool: Tokyopool

    state: ONLINE

    scan: scrub repaired 0B in 4h15m with 0 errors on Sun Dec 8 04:39:03 2019

    config:


    NAME STATE READ WRITE CKSUM

    Tokyopool ONLINE 0 0 0

    raidz1-0 ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K4SUS7Y4 ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K2ZTN3PZ ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K3LVY319 ONLINE 0 0 0


    errors: No known data errors

    root@tokyo:~#


    But I don't know how to recover the internal share that is using 46% of the zpool.

    I have tried to purge openmediavault-zfs and install it again, but I get this:


    + echo 'New plugin install, not inserting uuid property into existing datasets'

    New plugin install, not inserting uuid property into existing datasets


    In the web UI I see the pool and the share, but I don't know how to mount it, or how to delete the NFS shared folder. Any idea?
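
    Since the pool imports fine but nothing gets mounted at boot, one thing to try from the shell is restarting the standard ZFS units and mounting by hand:


    systemctl restart zfs-import-cache.service
    systemctl restart zfs-mount.service

    # or simply mount all datasets directly
    zfs mount -a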

    Here is my /etc/fstab:


    root@tokyo:~# cat /etc/fstab

    # /etc/fstab: static file system information.

    #

    # Use 'blkid' to print the universally unique identifier for a

    # device; this may be used with UUID= as a more robust way to name devices

    # that works even if disks are added and removed. See fstab(5).

    #

    # <file system> <mount point> <type> <options> <dump> <pass>

    # / was on /dev/sda2 during installation

    UUID=ce90eabb-277e-4238-8ba3-c5eaa5fc2ee4 / ext4 errors=remount-ro 0 1

    # /boot/efi was on /dev/sda1 during installation

    UUID=9754-A00D /boot/efi vfat umask=0077 0 1

    # swap was on /dev/sda3 during installation

    UUID=5fecd852-9b64-4973-b898-d97e6d651657 none swap sw 0 0

    tmpfs /tmp tmpfs defaults 0 0

    # >>> [openmediavault]

    /Tokyopool/Tokyo/Tokyo /export/Tokyo none bind,nofail,_netdev 0 0

    # <<< [openmediavault]


    I'm not sure about the "none" in this line:


    /Tokyopool/Tokyo/Tokyo /export/Tokyo none bind,nofail,_netdev 0 0
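
    As far as I understand, the "none" is normal: a bind mount has no real filesystem type, so fstab uses none there. The line is equivalent to running this by hand:


    # bind the dataset directory onto the NFS export directory
    mount --bind /Tokyopool/Tokyo/Tokyo /export/Tokyo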



    Perhaps openmediavault changed something when it moved to the new kernel without ZFS support, and now it can't handle the ZFS mounts?


    Now I think I'm having problems because openmediavault doesn't mount the zpool, and that then generates the other problems.



    Any idea how I can get openmediavault to regenerate /etc/fstab?

    I can't activate the button...


    It appears greyed out to me...


    Any idea?
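
    From what I have read, on OMV 4.x the section of /etc/fstab between the [openmediavault] markers can be regenerated from the config database with omv-mkconf; I assume it applies to this situation too:


    # rebuild the openmediavault-managed part of /etc/fstab
    omv-mkconf fstab

    # then try to mount everything it defines
    mount -a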


    I get this error when editing the shared folder, in case it gives you an idea of the problem:


    Failed to execute XPath query '//system/fstab/mntent[uuid='8baec714-a543-42d9-b231-fb7b2c45679b']'.

    Error #0:
    OMV\Config\DatabaseException: Failed to execute XPath query '//system/fstab/mntent[uuid='8baec714-a543-42d9-b231-fb7b2c45679b']'. in /usr/share/php/openmediavault/config/database.inc:78
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(231): OMV\Config\Database->get('conf.system.fil...', '8baec714-a543-4...')
    #1 [internal function]: OMVRpcServiceShareMgmt->get(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ShareMgmt', 'get', Array, Array, 1)
    #5 {main}
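
    If I read the trace correctly, a shared folder still references a mntent uuid that no longer exists in config.xml. Here is how it could be inspected and, after a backup, cleaned up with xmlstarlet (the tool openmediavault itself uses; the exact XPath is my assumption):


    # back up the config database first
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak

    # list each shared folder and the mntentref it points to
    xmlstarlet sel -t -m "//system/shares/sharedfolder" -v "concat(name,' -> ',mntentref)" -n /etc/openmediavault/config.xml

    # remove the shared folder that points at the missing uuid
    xmlstarlet ed -L -d "//system/shares/sharedfolder[mntentref='8baec714-a543-42d9-b231-fb7b2c45679b']" /etc/openmediavault/config.xml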


    In systemctl I'm now getting an error:


    ● zfs-mount.service loaded failed failed Mount ZFS filesystems
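
    The reason for the failure should show up in the unit's log (standard systemd commands):


    systemctl status zfs-mount.service
    journalctl -u zfs-mount.service -b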


    But the zpool seems OK:


    root@tokyo:~# zpool list

    NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT

    Tokyopool 10.9T 5.02T 5.86T - 1% 46% 1.00x ONLINE -

    root@tokyo:~# zpool status

    pool: Tokyopool

    state: ONLINE

    scan: scrub repaired 0B in 4h15m with 0 errors on Sun Dec 8 04:39:03 2019

    config:


    NAME STATE READ WRITE CKSUM

    Tokyopool ONLINE 0 0 0

    raidz1-0 ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K4SUS7Y4 ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K2ZTN3PZ ONLINE 0 0 0

    ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K3LVY319 ONLINE 0 0 0


    errors: No known data errors



    Any idea??

    Sorry, but now I see the zpool, yet I can't see the ZFS dataset via NFS...


    This is what I'm seeing (see the screenshot),


    but I can't edit the "Dispositivo" (Device) field, because I get:


    Failed to execute XPath query '//system/fstab/mntent[uuid='8baec714-a543-42d9-b231-fb7b2c45679b']'.


    Is that normal? If I delete the shared folder and recreate it, will I delete all the data? (I think not, but I will wait to see how to do it.)

    This morning I tried:


    apt-get remove openmediavault-zfs

    apt-get install openmediavault-zfs


    but I get this:


    root@tokyo:~# apt-get clean openmediavault-zfs

    root@tokyo:~# apt-get install openmediavault-zfs

    Reading package lists... Done

    Building dependency tree

    Reading state information... Done

    openmediavault-zfs is already the newest version (4.0.4).

    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    root@tokyo:~# apt-get remove openmediavault-zfs

    Reading package lists... Done

    Building dependency tree

    Reading state information... Done

    The following packages were automatically installed and are no longer required:

    libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-zed zfsutils-linux

    Use 'apt autoremove' to remove them.

    The following packages will be REMOVED:

    openmediavault-zfs

    0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

    After this operation, 501 kB disk space will be freed.

    Do you want to continue? [Y/n] y

    (Reading database ... 197954 files and directories currently installed.)

    Removing openmediavault-zfs (4.0.4) ...

    Processing triggers for openmediavault (4.1.35-1) ...

    Restarting engine daemon ...

    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fb5f31d07b8>

    Traceback (most recent call last):

    File "/usr/lib/python3.5/weakref.py", line 117, in remove

    TypeError: 'NoneType' object is not callable

    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fb5f31d07b8>

    Traceback (most recent call last):

    File "/usr/lib/python3.5/weakref.py", line 117, in remove

    TypeError: 'NoneType' object is not callable

    root@tokyo:~# apt-get install openmediavault-zfs

    Reading package lists... Done

    Building dependency tree

    Reading state information... Done

    The following NEW packages will be installed:

    openmediavault-zfs

    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.

    Need to get 55.8 kB of archives.

    After this operation, 501 kB of additional disk space will be used.

    Get:1 https://dl.bintray.com/openmed…plugin-developers/arrakis stretch/main amd64 openmediavault-zfs amd64 4.0.4 [55.8 kB]

    Fetched 55.8 kB in 3s (14.4 kB/s)

    Selecting previously unselected package openmediavault-zfs.

    (Reading database ... 197874 files and directories currently installed.)

    Preparing to unpack .../openmediavault-zfs_4.0.4_amd64.deb ...

    Unpacking openmediavault-zfs (4.0.4) ...

    Setting up openmediavault-zfs (4.0.4) ...

    + . /etc/default/openmediavault

    ++ OMV_CONFIG_FILE=/etc/openmediavault/config.xml

    ++ OMV_CONFIG_TEMPLATE_FILE=/usr/share/openmediavault/templates/config.xml

    ++ OMV_PRODUCTINFO_FILE=/usr/share/openmediavault/productinfo.xml

    ++ OMV_SCRIPTS_DIR=/usr/share/openmediavault/scripts

    ++ OMV_DATAMODELS_DIR=/usr/share/openmediavault/datamodels

    ++ OMV_I18N_LOCALE_DIR=/usr/share/openmediavault/locale

    ++ OMV_MOUNT_DIR=/srv

    ++ OMV_SHAREDFOLDERS_DIR=/sharedfolders

    ++ OMV_NFSD_EXPORT_DIR=/export

    ++ OMV_CACHE_DIR=/var/cache/openmediavault

    ++ OMV_LOG_DIR=/var/log/openmediavault

    ++ OMV_SSL_CERTIFICATE_DIR=/etc/ssl

    ++ OMV_SSL_CERTIFICATE_PREFIX=openmediavault-

    ++ OMV_SSH_KEYS_DIR=/etc/ssh

    ++ OMV_SSH_KEY_PREFIX=openmediavault-

    ++ OMV_DPKGARCHIVE_DIR=/var/cache/openmediavault/archives

    ++ OMV_DOCUMENTROOT_DIR=/var/www/openmediavault

    ++ OMV_CRONSCRIPTS_DIR=/var/lib/openmediavault/cron.d

    ++ OMV_CONFIGIMPORT_SCRIPTS_DIR=/usr/share/openmediavault/configimport

    ++ OMV_MKCONF_SCRIPTS_DIR=/usr/share/openmediavault/mkconf

    ++ OMV_ENGINED_DIR=/usr/share/openmediavault/engined

    ++ OMV_ENGINED_SO_ADDRESS=/var/lib/openmediavault/engined.sock

    ++ OMV_ENGINED_SO_OWNERGROUP_NAME=openmediavault-engined

    ++ OMV_ENGINED_SO_SNDTIMEO=10

    ++ OMV_ENGINED_SO_RCVTIMEO=180

    ++ OMV_ENGINED_DIRTY_MODULES_FILE=/var/lib/openmediavault/dirtymodules.json

    ++ OMV_INITSYSTEM_SCRIPTS_DIR=/usr/share/openmediavault/initsystem

    ++ OMV_INITSYSTEM_FILE=/var/lib/openmediavault/initsystem.req

    ++ OMV_USERMGMT_DEFAULT_GROUP=users

    ++ OMV_RRDGRAPH_DIR=/var/lib/openmediavault/rrd

    ++ OMV_RRDGRAPH_ERROR_IMAGE=/usr/share/openmediavault/icons/rrd_graph_error_64.png

    ++ OMV_WEBGUI_FILE_OWNERGROUP_NAME=openmediavault-webgui

    ++ OMV_CONFIGOBJECT_NEW_UUID=fa4b1c66-ef79-11e5-87a0-0002b3a176b4

    ++ OMV_DEBUG_SCRIPT=NO

    ++ OMV_DEBUG_PHP=NO

    ++ OMV_DEBUG_EXTJS=NO

    ++ OMV_APT_USE_KERNEL_BACKPORTS=NO

    + . /usr/share/openmediavault/scripts/helper-functions

    ++ . /etc/default/openmediavault

    +++ OMV_CONFIG_FILE=/etc/openmediavault/config.xml

    +++ OMV_CONFIG_TEMPLATE_FILE=/usr/share/openmediavault/templates/config.xml

    +++ OMV_PRODUCTINFO_FILE=/usr/share/openmediavault/productinfo.xml

    +++ OMV_SCRIPTS_DIR=/usr/share/openmediavault/scripts

    +++ OMV_DATAMODELS_DIR=/usr/share/openmediavault/datamodels

    +++ OMV_I18N_LOCALE_DIR=/usr/share/openmediavault/locale

    +++ OMV_MOUNT_DIR=/srv

    +++ OMV_SHAREDFOLDERS_DIR=/sharedfolders

    +++ OMV_NFSD_EXPORT_DIR=/export

    +++ OMV_CACHE_DIR=/var/cache/openmediavault

    +++ OMV_LOG_DIR=/var/log/openmediavault

    +++ OMV_SSL_CERTIFICATE_DIR=/etc/ssl

    +++ OMV_SSL_CERTIFICATE_PREFIX=openmediavault-

    +++ OMV_SSH_KEYS_DIR=/etc/ssh

    +++ OMV_SSH_KEY_PREFIX=openmediavault-

    +++ OMV_DPKGARCHIVE_DIR=/var/cache/openmediavault/archives

    +++ OMV_DOCUMENTROOT_DIR=/var/www/openmediavault

    +++ OMV_CRONSCRIPTS_DIR=/var/lib/openmediavault/cron.d

    +++ OMV_CONFIGIMPORT_SCRIPTS_DIR=/usr/share/openmediavault/configimport

    +++ OMV_MKCONF_SCRIPTS_DIR=/usr/share/openmediavault/mkconf

    +++ OMV_ENGINED_DIR=/usr/share/openmediavault/engined

    +++ OMV_ENGINED_SO_ADDRESS=/var/lib/openmediavault/engined.sock

    +++ OMV_ENGINED_SO_OWNERGROUP_NAME=openmediavault-engined

    +++ OMV_ENGINED_SO_SNDTIMEO=10

    +++ OMV_ENGINED_SO_RCVTIMEO=180

    +++ OMV_ENGINED_DIRTY_MODULES_FILE=/var/lib/openmediavault/dirtymodules.json

    +++ OMV_INITSYSTEM_SCRIPTS_DIR=/usr/share/openmediavault/initsystem

    +++ OMV_INITSYSTEM_FILE=/var/lib/openmediavault/initsystem.req

    +++ OMV_USERMGMT_DEFAULT_GROUP=users

    +++ OMV_RRDGRAPH_DIR=/var/lib/openmediavault/rrd

    +++ OMV_RRDGRAPH_ERROR_IMAGE=/usr/share/openmediavault/icons/rrd_graph_error_64.png

    +++ OMV_WEBGUI_FILE_OWNERGROUP_NAME=openmediavault-webgui

    +++ OMV_CONFIGOBJECT_NEW_UUID=fa4b1c66-ef79-11e5-87a0-0002b3a176b4

    +++ OMV_DEBUG_SCRIPT=NO

    +++ OMV_DEBUG_PHP=NO

    +++ OMV_DEBUG_EXTJS=NO

    +++ OMV_APT_USE_KERNEL_BACKPORTS=NO

    ++ OMV_XMLSTARLET_GET_SHAREDFOLDER_PATH='-m //system/shares/sharedfolder[uuid=current()/sharedfolderref] -v concat(//system/fstab/mntent[uuid=current()/mntentref]/dir,'\''/'\'',reldirpath) -b'

    ++ OMV_XMLSTARLET_GET_SHAREDFOLDER_NAME='-m //system/shares/sharedfolder[uuid=current()/sharedfolderref] -v name -b'

    ++ OMV_XMLSTARLET_GET_SHAREDFOLDER_MOUNT_DIR='-m //system/shares/sharedfolder[uuid=current()/sharedfolderref] -v concat(//system/fstab/mntent[uuid=current()/mntentref]/dir,'\''/'\'',reldirpath) -b'

    + case "$1" in

    + SERVICE_XPATH_NAME=zfs

    + SERVICE_XPATH=/config/services/zfs

    ++ omv_uuid

    ++ uuid -v 4

    + object='<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid>'

    + object='<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id>'

    + object='<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id><enable>0</enable>'

    + omv_config_add_node_data /config/system/notification/notifications notification '<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id><enable>0</enable>'

    + local xpath name data tmpdata tmpfile

    + xpath=/config/system/notification/notifications

    + name=notification

    + data='<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id><enable>0</enable>'

    ++ tempfile

    + tmpfile=/tmp/fileO047PD

    ++ mktemp --dry-run XXXXXXXXXXXX

    + tmpdata=mSD0rTbbVvZH

    + xmlstarlet edit -P -s /config/system/notification/notifications -t elem -n notification -v mSD0rTbbVvZH /etc/openmediavault/config.xml

    + tee /tmp/fileO047PD

    ++ omv_quotemeta '<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id><enable>0</enable>'

    ++ echo -n '<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56</uuid><id>zfs</id><enable>0</enable>'

    ++ sed -e 's/\\/\\\\/g' -e 's/\//\\\//g' -e 's/&/\\\&/g'

    + sed -i 's/mSD0rTbbVvZH/<uuid>1d28b1a0-dfb7-4b8b-a581-5341d5eeef56<\/uuid><id>zfs<\/id><enable>0<\/enable>/' /tmp/fileO047PD

    + cat /tmp/fileO047PD

    + rm -f -- /tmp/fileO047PD

    + rm -f /etc/insserv/overrides/zfs-mount

    + /sbin/modprobe zfs

    modprobe: FATAL: Module zfs not found in directory /lib/modules/4.19.0-0.bpo.5-amd64

    + dpkg --compare-versions 4.0.4 lt-nl 4.0.3

    + echo 'New plugin install, not inserting uuid property into existing datasets'

    New plugin install, not inserting uuid property into existing datasets

    + echo zfs

    + dpkg-trigger update-fixperms

    + dpkg-trigger update-locale

    + exit 0

    Processing triggers for openmediavault (4.1.35-1) ...

    Updating locale files ...

    Updating file permissions ...

    Purging internal cache ...

    Restarting engine daemon ...

    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f49327297b8>

    Traceback (most recent call last):

    File "/usr/lib/python3.5/weakref.py", line 117, in remove

    TypeError: 'NoneType' object is not callable

    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f49327297b8>

    Traceback (most recent call last):

    File "/usr/lib/python3.5/weakref.py", line 117, in remove

    TypeError: 'NoneType' object is not callable

    root@tokyo:~# uname

    Linux

    root@tokyo:~# uname -a

    Linux tokyo 4.19.0-0.bpo.5-amd64 #1 SMP Debian 4.19.37-5+deb10u2~bpo9+1 (2019-08-16) x86_64 GNU/Linux



    And I think the error comes from this line of the install log:


    /sbin/modprobe zfs

    modprobe: FATAL: Module zfs not found in directory /lib/modules/4.19.0-0.bpo.5-amd64


    How can I obtain this module for kernel 4.19.0-0.bpo.5?

    I'm not completely sure, but I'm seeing several 4.19 kernels without the ZFS module, while for the old 4.15 kernels the module is there in /lib/modules.
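
    If I understand it right, the module has to exist per kernel version. A sketch of what should rebuild it via DKMS on this Debian stretch system; the backports target and package names are my assumption for this setup:


    # check which installed kernels actually have a zfs module
    find /lib/modules -name 'zfs.ko*'

    # install headers for the running kernel plus zfs-dkms from backports
    apt-get install -t stretch-backports linux-headers-$(uname -r) zfs-dkms

    # load the freshly built module
    modprobe zfs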

    Hello to all. First, sorry for my English, it's not my first language.


    I have reset my openmediavault and mounted the ZFS share via NFS, and to my surprise it was empty... I looked, and this is not my ZFS dataset: it's only 100 MB.


    I went to the openmediavault GUI and got the following message:


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; zfs list -H -t snapshot -o name,used,refer 2>&1' with exit code '1': /dev/zfs and /proc/self/mounts are required.
    Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
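
    Before anything else, the message's own suggestions plus a quick module check seemed worth running:


    # is the zfs kernel module loaded at all?
    lsmod | grep zfs

    # the two commands the error message itself recommends
    udevadm trigger
    mount -t proc proc /proc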


    I see that I have the ZFS plugin installed.


    I have tried apt-get update, upgrade and a reboot, but it doesn't seem to work.


    I think my problem is between the kernel and ZFS, because the command:


    systemctl


    gives this information:


    systemd-journal-flush.service loaded active exited Flush Journal to Persistent Storage

    systemd-journald.service loaded active running Journal Service

    systemd-logind.service loaded active running Login Service

    ● systemd-modules-load.service loaded failed failed Load Kernel Modules

    systemd-random-seed.service loaded active exited Load/Save Random Seed

    systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems

    systemd-sysctl.service loaded active exited Apply Kernel Variables

    systemd-tmpfiles-setup-dev.service loaded active exited Create Static Device Nodes in /dev

    systemd-tmpfiles-setup.service loaded active exited Create Volatile Files and Directories

    systemd-udev-settle.service loaded active exited udev Wait for Complete Device Initialization

    systemd-udev-trigger.service loaded active exited udev Coldplug all Devices

    systemd-udevd.service loaded active running udev Kernel Device Manager

    systemd-update-utmp.service loaded active exited Update UTMP about System Boot/Shutdown

    systemd-user-sessions.service loaded active exited Permit User Sessions

    watchdog.service loaded active running watchdog daemon

    ● zfs-import-cache.service loaded failed failed Import ZFS pools by cache file

    ● zfs-mount.service loaded failed failed Mount ZFS filesystems

    ● zfs-share.service loaded failed failed ZFS file system shares

    ● zfs-zed.service loaded failed failed ZFS Event Daemon (zed)


    The lines marked with dots (●) are the services that have failed.

    And the command:


    root@tokyo:~# modprobe zfs

    modprobe: FATAL: Module zfs not found in directory /lib/modules/4.19.0-0.bpo.5-amd64


    Now I don't remember if I was using the standard kernel or a kernel from a plugin for ZFS.


    The kernel now is:


    Linux tokyo 4.19.0-0.bpo.5-amd64 #1 SMP Debian 4.19.37-5+deb10u2~bpo9+1 (2019-08-16) x86_64 GNU/Linux



    And sorry, but I have already done the following via the GUI:

    apt-get clean

    apt-get update

    apt-get upgrade

    apt-get dist-upgrade


    But I still don't have ZFS. What is the best option from this point to get the ZFS plugin working again without losing the data and the pools on the drives?

    In the GUI I see that all the drives are OK, but I have very little experience with ZFS, so I will wait for the knowledge of the people of the forum.
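
    For my own notes: the data lives in the pool on the disks, not in the plugin, so once ZFS works again the pool should be importable without touching the data (standard zpool commands):


    # list pools available for import (reads the disk labels, changes nothing)
    zpool import

    # import the pool by name
    zpool import Tokyopool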


    Many thanks in advance