[HOWTO] Install the ZFS plugin & use ZFS on OMV

    • Official post

    I get this error even with dpkg-dev installed ...

    Well, I know dpkg-dev provides the command. Not sure why it still fails.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I set up a VM on another machine to do some testing. I followed the same procedure as I did on my NAS build.


    Installed a fresh OMV 4, patched everything incl. the 4.16 kernel, installed OMV-Extras, rebooted, installed dpkg-dev.






    So pretty much the same as on my main machine.


    Edit: After uninstalling and reinstalling the ZFS plugin, it works on the VM ... don't know why ...
    Now I'll try it on the NAS machine.


    Edit 2: I set up LUKS with auto-unlock at boot and ZFS on top. After each reboot my pool is gone, but it can be re-imported. The drives are unlocked, but all shared folders have an N/A path. The error is


    Code
    Failed to execute XPath query '//system/fstab/mntent[uuid='70a6b2de-e75b-47c6-a0ed-0a56ec382d7f']'.


    This occurred after messing with MySQL (thread here).


    It might have to do with the attached USB backup drive I use for restoring my data: uuid='70a6b2de-e75b-47c6-a0ed-0a56ec382d7f' is my USB drive. Trying to readjust the paths of my shared folders to my ZFS datasets gives me a similar error:

    Code
    Failed to execute XPath query '//system/fstab/mntent[uuid='39efbf6e-c32f-4ccb-88aa-87b53f9d3190']'.


    seems to be similar to that issue?


    I use my ZFS pool and assigned a different dataset to each shared folder. During creation of my shared folders I chose the corresponding "drive" or dataset as the path. The shares get UUIDs in config.xml, but they are no longer found. My shared folder definitions in config.xml look like this:

    Code
    <sharedfolder>
            <uuid>88aa1bf4-e5e1-4cf2-a909-5dd5b0831b84</uuid>
            <name>ablage</name>
            <comment></comment>
            <mntentref>39efbf6e-c32f-4ccb-88aa-87b53f9d3190</mntentref>
            <reldirpath>ablage/</reldirpath>
            <privileges></privileges>
          </sharedfolder>
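
    If you want to check whether a shared folder's mntentref still points at an existing mntent, you can query config.xml directly. A rough sketch (assumes xmlstarlet is installed; plain grep works as a fallback):

    Code
    # Look for an fstab entry with the UUID that the shared folder references (mntentref).
    # xmlstarlet is not installed by default: apt-get install xmlstarlet
    xmlstarlet sel -t -c "//system/fstab/mntent[uuid='39efbf6e-c32f-4ccb-88aa-87b53f9d3190']" /etc/openmediavault/config.xml
    # Fallback without xmlstarlet:
    grep -B2 -A8 "39efbf6e-c32f-4ccb-88aa-87b53f9d3190" /etc/openmediavault/config.xml

    If the query returns nothing, config.xml simply has no mntent left for that UUID, which matches the XPath error above.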


    Okay, creating a new dataset works. After the creation of a new dataset my config.xml has the following entry,

    but none for the other datasets. A little further down in config.xml:


    Code
    <sharedfolder>
            <uuid>6fb0a79b-062a-469b-9c09-ff0c1d2e49ff</uuid>
            <name>test</name>
            <comment></comment>
            <mntentref>34297f48-6906-4f27-85bc-de16ba3c4ad9</mntentref>
            <reldirpath>test/</reldirpath>
            <privileges></privileges>
          </sharedfolder>

    So, can I reassign the correct UUIDs to my datasets in the fstab section to fix my problem?


    EDIT: EUREKA! The solution was much easier:
    Creating my ZFS pool Storage and a dataset Storage/backup and applying a shared folder to it created a new folder with the same name, e.g. /Storage/backup/backup. Whatever happened to my pool left me with that directory structure /Storage/backup/backup. Since /Storage/backup is the mountpoint for my dataset, zfs mount -a could not mount to that location because the folder was not empty. Simply deleting the lowest folder (in my example /Storage/backup/backup) solved the problem for all datasets.
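
    In case someone runs into the same thing, roughly what the fix looks like on the command line (paths and dataset names are from my setup, adjust accordingly):

    Code
    # /Storage/backup is the mountpoint of the dataset Storage/backup in my setup.
    zfs get mountpoint Storage/backup      # confirm where the dataset is supposed to be mounted
    rmdir /Storage/backup/backup           # remove the leftover directory; rmdir refuses non-empty dirs, so no data is lost
    zfs mount -a                           # with the mountpoint empty again, all datasets mount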


    It's a great day!

    Chaos is found in greatest abundance wherever order is being sought.
    It always defeats order, because it is better organized.
    Terry Pratchett

    Edited 11 times, last by riff-raff ()

  • So I have been trying to install OMV and ZFS for hours now; no matter how much I try and read, it won't work. Can somebody please tell me which version of OMV to install and what to do to get it working? I'm out of ideas. I used OMV 2 until today, but my SSD died on me and of course I didn't make a system drive backup.

    • Official post

    Here is what I just did (all commands executed as root):


    • install OMV 4.x using the 4.1.3 ISO
    • install omv-extras with: wget -O - http://omv-extras.org/install | bash
    • enable the omv-extras testing repo in the Repo tab of omv-extras in the web interface
    • apt-get update
    • apt-get dist-upgrade
    • apt-get clean
    • reboot with the 4.16 kernel
    • remove the 4.14 kernel with apt-get purge linux-image-4.14.0-0.bpo.3-amd64
    • install the ZFS plugin with apt-get install openmediavault-zfs (the compile seems to go faster than installing via the web interface)
    • echo "zfs" > /etc/modules-load.d/zfs.conf (probably need to add this to the plugin)
    • reboot

    Everything seems to be working.
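
    For reference, the same steps condensed into a rough shell sketch; the reboots obviously interrupt it, so treat it as a checklist rather than a script to run in one go:

    Code
    # Fresh OMV 4.x install assumed; run everything as root.
    wget -O - http://omv-extras.org/install | bash        # install omv-extras
    # -> enable the omv-extras testing repo in the web interface (Repo tab), then:
    apt-get update
    apt-get dist-upgrade
    apt-get clean
    # -> reboot into the 4.16 kernel, then:
    apt-get purge linux-image-4.14.0-0.bpo.3-amd64        # remove the old 4.14 kernel
    apt-get install openmediavault-zfs                    # ZFS plugin; compiles the kernel module
    echo "zfs" > /etc/modules-load.d/zfs.conf             # make sure the zfs module is loaded at boot
    # -> reboot once more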


    Edited once, last by ryecoaaron ()

    • Official post

    Do you mean the 4.1.3 ISO or the 4.0.14 ISO? I can't find a 4.0.13 ISO.

    Typo fixed, but it doesn't matter since the system is upgraded to the latest packages before doing anything with ZFS.


  • Any idea how to move from kernel 4.14 with ZFS 0.7.6 to kernel 4.16 with ZFS 0.7.9?

    Lian Li PC-V354 (with Be Quiet! Silent Wings 3 fans)
    ASRock Rack x470D4U | AMD Ryzen 5 3600 | Crucial 16GB DDR4 2666MHz ECC | Intel x550T2 10Gb NIC

    1 x ADATA 8200 Pro 256MB NVMe for System/Caches/Logs/Downloads
    5 x Western Digital 10 TB HDD in RAID 6 for Data
    1 x Western Digital 2 TB HDD for Backups

    Powered by OMV v5.6.26 & Linux kernel 5.10.x

    • Official post

    Any idea how to move from kernel 4.14 with ZFS 0.7.6 to kernel 4.16 with ZFS 0.7.9 ?

    I can't test because the 4.14 kernel headers aren't available anymore. I would back up your system before trying this:


    • Remove the 4.14 kernel headers
    • Install the 4.16 kernel and ZFS 0.7.9 at the same time. It should skip compiling the ZFS module for the 4.14 kernel since the headers aren't there.
    • Reboot using the 4.16 kernel
    • Remove the 4.14 kernel
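
    In commands it would look roughly like this; the header package name is a guess derived from the image name above, and the 4.16 kernel comes in via the stretch-backports metapackages, so double-check the exact names with dpkg -l first:

    Code
    dpkg -l 'linux-image*' 'linux-headers*'              # check which kernel/header packages are really installed
    apt-get purge linux-headers-4.14.0-0.bpo.3-amd64     # 1. remove the 4.14 headers (name is an assumption)
    apt-get -t stretch-backports install linux-image-amd64 linux-headers-amd64 openmediavault-zfs
                                                         # 2. 4.16 kernel + headers and ZFS 0.7.9 in one transaction
    reboot                                               # 3. boot into the 4.16 kernel
    apt-get purge linux-image-4.14.0-0.bpo.3-amd64       # 4. remove the 4.14 kernel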


    • Official post

    But how can I access my old folders and data?

    Create shared folders and configure services to access them. You should be able to see the data on the command line as well.
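
    For example (pool and dataset names below are just placeholders):

    Code
    zfs list                       # shows all pools/datasets and their mountpoints
    ls -l /tank/documents          # the data sits directly under the dataset's mountpoint (example path)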


  • I have my ZFS running quite well right now, but I struggle with auto snapshots and scrub.


    I stuck to this old thread to set up my snapshots; for example, for my backup folder mounted at


    Code
    /Storage/backup/backup


    it looks like this:

    Code
    /sbin/zfs snapshot -r Storage/backup@backup_`date +"%F"`
    Code
    /sbin/zfs list -t snapshot -o name | /bin/grep Storage/backup@backup_ | /usr/bin/sort -r | /usr/bin/tail -n +30 | /usr/bin/xargs -n 1 /sbin/zfs destroy -r


    Running these manually over SSH works, but not via scheduled tasks.


    Any advice on how to get this working?
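
    One thing worth checking: as far as I know OMV writes scheduled tasks into a cron file, and cron treats a bare % as a line break, so the date +"%F" part silently breaks there even though the same line works over SSH. Wrapping the commands in a small script avoids that entirely; a sketch (dataset name taken from my setup above, the script path is just an example):

    Code
    #!/bin/sh
    # /usr/local/bin/zfs-snap-backup.sh - call this from the scheduled task instead of the raw command line
    /sbin/zfs snapshot -r Storage/backup@backup_$(date +%F)
    # keep the newest snapshots and destroy everything from the 30th one onwards
    /sbin/zfs list -t snapshot -o name | /bin/grep Storage/backup@backup_ | /usr/bin/sort -r | /usr/bin/tail -n +30 | /usr/bin/xargs -n 1 /sbin/zfs destroy -r

    Make it executable with chmod +x and point the scheduled task at the script.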


  • I can't test because the 4.14 kernel headers aren't available anymore. I would backup your system before trying this:
    Remove the 4.14 kernel headers
    Install the 4.16 kernel and ZFS 0.7.9 at the same time. It should skip compiling the ZFS module for the 4.14 kernel since the headers aren't there.
    Reboot using the 4.16 kernel
    Remove the 4.14 kernel

    Will be happy to try, but how can I get the ZFS 0.7.9 packages?
    The only version I have is 0.7.6 from stretch-backports.

    EDIT: Found it in the OMV-Extras testing repo ;)


    Edited once, last by sbocquet ()

  • Hi guys,


    at the moment I am replacing the 8x 4 TB WD Red disks in my pool with 8x 10 TB WD Red disks, one after the other. The first disk is resilvering now. To get the full disk space after resilvering the last disk, you have to enable "autoexpand" on the pool. So I did the following on the command line:


    Code
    root@omv4:~# zpool set autoexpand=on mediatank


    So I wanted to check whether I can see that setting in the omv-zfs plugin. Yesterday this wasn't the case. Is that normal behavior?
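
    For anyone reading along, the property can also be read back on the command line to make sure it really got set:

    Code
    zpool get autoexpand mediatank       # shows NAME / PROPERTY / VALUE / SOURCE for this one property
    zpool get all mediatank              # the full list of pool properties, autoexpand included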


    If yes: feature request: would it be possible for the plugin to read all the ZFS settings and make them configurable in the OMV web UI?


    But today I have a problem with the ZFS plugin. If I go to "Storage - ZFS" I only see "loading", which ends with "communication failure". Have a look at the following screenshots:




    Round about a minute later I get the following message:




    If I click "ok", I see nothing:




    But my pool is still reachable via SMB and I can see all my ZFS file systems under "Storage - File Systems" in the OMV web UI. Syslog gives me the following output:



    "zpool status" looks as expected:



    Maybe I shouldn't change the configuration of my pool while resilvering. ;)

    • But is there something in the plugin that can be corrected by you devs?
    • Can somebody check whether you can reproduce this error in a virtual machine? I don't have a VM at the moment.


    The resilvering process for the first replaced disk still needs three hours. I think the problem will be solved after shutting down my server to replace the second disk and restarting it.


    EDIT: OK, problem solved. After the resilvering procedure for the first disk, the "Storage - ZFS" section in the OMV web UI works as expected again.


    Regards Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

    Edited 6 times, last by hoppel118 ()

  • I noticed something a long time ago which is a bit annoying but strangely doesn't affect operation:


    Jun 05 17:50:28 nasbox systemd[1]: Starting Mount ZFS filesystems...
    Jun 05 17:50:28 nasbox zfs[2123]: cannot mount '/mnt/storage': directory is not empty
    Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
    Jun 05 17:50:30 nasbox systemd[1]: Failed to start Mount ZFS filesystems.
    Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Unit entered failed state.
    Jun 05 17:50:30 nasbox systemd[1]: zfs-mount.service: Failed with result 'exit-code'.


    If I restart in single-user mode and remove the directory, there is no error message. It will happen again on the next reboot (which doesn't happen often, so I don't know what else could be the cause).
    As I mentioned, everything seems to be working perfectly. Any idea why this is happening?
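
    This looks like the same root cause as the /Storage/backup/backup story earlier in the thread: something creates a directory inside the mountpoint before zfs-mount.service runs. A rough sketch of how to check and clean it up (dataset name is an assumption, check zfs list; do this while the dataset is not mounted, e.g. from single mode as you describe):

    Code
    zfs list -o name,mountpoint                 # find which dataset is supposed to mount at /mnt/storage
    ls -la /mnt/storage                         # with the dataset unmounted, see what is blocking the mountpoint
    rmdir /mnt/storage/*                        # remove leftover dirs; rmdir refuses non-empty ones, so data stays safe
    zfs mount -a                                # should mount cleanly now
    systemctl status zfs-mount.service          # and the service should come up without errors on the next boot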

    • Official post

    The problem should be that the dpkg-dev package is not a dependency of the zfs-dkms package in the Debian repositories.

    The command fails even if the package is installed. Therefore, adding it as a dependency would not help.

