[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0


    • [HOWTO] Install ZFS-Plugin & use ZFS on OMV

      2016.11.09 update: please read this post before installing: Error trying to install ZFS-Plugin

      2018.09.08 update: a good video of how to install, from @TechnoDadLife.


      This post is intended to show how to start working with ZFS and the ZFS plugin in OMV. It is useful for beginners, and for users coming from the BSD world (FreeNAS and NAS4Free) who want to understand certain differences between ZFS on BSD and ZFS on Linux.

      Some useful links:

      Issues in the ZFS plugin: github.com/OpenMediaVault-Plug…openmediavault-zfs/issues
      Naming conventions when creating pools: zfsonlinux.org/faq.html#WhatDe…uldIUseWhenCreatingMyPool
      ZFS Handbook: allanjude.com/zfs_handbook/zfs-zpool.html
      Performance information: open-zfs.org/wiki/Performance_tuning
      Article about ZED: louwrentius.com/category/zfs.html

      Requirements:

      To use ZFS you need at least OMV 1.12 and OMV-Extras 1.10.

      If you already have an up-to-date OMV, go to step 2.

      1 -
      If you start from the current OMV ISO: openmediavault.org/download.html

      your first screen is:


      So you need to update to the latest version and install all updates:

      Finally you will have OMV 1.12 or later:

      Now you need to install OMV-Extras: OMV-Extras.org Plugin

      Once done, you will have OMV-Extras 1.10 or later.





      2 -

      Now you are ready to install the ZFS plugin.

      Select ZFS Testing Repo:


      Update system:


      Install ZFS (0.6.3.6 when writing this guide):
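      After the plugin install finishes, a quick sanity check from the shell confirms everything landed (package names as shipped for ZFS on Linux 0.6.x; exact names may differ on your release):

```shell
# Confirm the ZFS packages are installed
dpkg -l | grep -i zfs

# Load the kernel module and confirm the userland tools respond
modprobe zfs
zpool status
```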
      OMV 4.1.11 x64 on a HP T510, 16GB CF as Boot Disk & 32GB SSD 2,5" disk for Data, 4 GB RAM, CPU VIA EDEN X2 U4200 is x64 at 1GHz

      Post: HPT510 SlimNAS ; HOWTO Install Pi-Hole ; HOWTO install MLDonkey ; HOHTO Install ZFS-Plugin ; OMV_OldGUI ; ShellinaBOX ;
      Dockers: MLDonkey ; PiHole ; weTTY
      Videos: @TechnoDadLife

      The post was edited 5 times, last by raulfg3: update: do not use the regular repo; use testing.

    • If everything works as expected, you will see your ZFS icon:


      Now you have two possible paths:

      1 - Import an existing pool (use the option in the ZFS menu). Remember that the latest FreeNAS pools (9.3 and up) can't be imported, due to feature flags not yet implemented in ZFS on Linux (9.2 and earlier can be imported without problems), so please check which feature flags your pool uses before trying to import it into OMV.
      2 - Create a new pool.

      If you want to create a pool, your disks must be clean; to be sure, use Wipe from the menu:
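      If you prefer the shell, the Wipe action corresponds roughly to clearing existing signatures from the disk (replace /dev/sdX with your actual disk; this destroys its contents):

```shell
# Remove filesystem, RAID, and partition-table signatures from the disk
wipefs -a /dev/sdX
```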



      Now you are ready to create pools, some examples:
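      The screenshots use the web UI; the equivalent CLI sketches look like this (pool and device names are placeholders):

```shell
# A simple two-disk mirror, using stable by-id device names
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# A four-disk RAIDZ1, with ashift=12 for 4K-sector drives
zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```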





      Now a more complicated pool: a 4x3TB vdev plus another 4x3TB vdev in one big pool.

      You need to create it in two steps.

      First, create a pool using only 4 disks (the first vdev); in my case I selected ashift=12 for 4K-sector disks.


      Once created, use the Grow icon to add 4 more disks in RAIDZ1.



      This second vdev is added to the first vdev, so you end up with the pool that you want; in the future, you can grow Rpool in the same way.
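      On the CLI the same two-step growth looks roughly like this (device names are placeholders; the plugin's Grow icon corresponds to `zpool add`):

```shell
# Step 1: create the pool with the first 4-disk RAIDZ1 vdev (ashift=12 for 4K sectors)
zpool create -o ashift=12 Rpool raidz1 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Step 2: add a second 4-disk RAIDZ1 vdev; ZFS stripes writes across both vdevs
zpool add Rpool raidz1 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8
```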

      Now it is time to create some filesystems to share:
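      From the CLI, the equivalent dataset creation would be something like (dataset names are examples):

```shell
# Each dataset is a separate filesystem that can be shared individually
zfs create Rpool/media
zfs create Rpool/backups

# Optional per-dataset tuning, e.g. enable LZ4 compression
zfs set compression=lz4 Rpool/media
```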



    • Importing is really easy: just click the Import button and import your previously exported/created pool.

      DISCLAIMER: the current version of ZFS on Linux does NOT import the latest FreeNAS pools, because FreeNAS uses some feature flags not supported by the current ZFSonLinux release.
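      The plugin's Import button maps to the usual CLI sequence (the pool name is an example):

```shell
# List pools available for import, then import one by name
zpool import
zpool import Rpool
```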





      I tested a scrub too, after deliberately corrupting some files:

      Source Code

      Pool status (zpool status):

        pool: Rpool
       state: ONLINE
      status: One or more devices has experienced an error resulting in data
              corruption. Applications may be affected.
      action: Restore the file in question if possible. Otherwise restore the
              entire pool from backup.
         see: http://zfsonlinux.org/msg/ZFS-8000-8A
        scan: scrub repaired 326M in 4h11m with 0 errors on Fri Feb 20 12:02:53 2015
      config:

              NAME                                  STATE     READ WRITE CKSUM
              Rpool                                 ONLINE       0     0     6
                raidz1-0                            ONLINE       0     0    68
                  ata-TOSHIBA_DT01ACA300_Z2P58ZSAS  ONLINE       0     0   116
                  ata-TOSHIBA_DT01ACA300_Z2P592PAS  ONLINE       0     0   106
                  ata-TOSHIBA_DT01ACA300_Z2P4LS0GS  ONLINE       0     0   134
                  ata-TOSHIBA_DT01ACA300_Z2P5909AS  ONLINE       0     0   119
                raidz1-1                            ONLINE       0     0    59
                  ata-TOSHIBA_DT01ACA300_63NZKLSKS  ONLINE       0     0   148
                  ata-TOSHIBA_DT01ACA300_63NZ4Z9GS  ONLINE       0     0   107
                  ata-TOSHIBA_DT01ACA300_63QZR0TGS  ONLINE       0     0   127
                  ata-TOSHIBA_DT01ACA300_63NZK2UGS  ONLINE       0     0    99

      errors: 6 data errors, use '-v' for a list

      Pool details (zpool get all):
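      For reference, the scrub was driven by the standard commands (pool name from this post):

```shell
# Start a scrub, then watch its progress; -v lists any files with errors
zpool scrub Rpool
zpool status -v Rpool
```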






      A lot of CPU and RAM is used when a scrub is running:





    • Some captures of file transfers (videos: .avi and .mkv):






    • Documentation, ZFS Evil Tuning Guide: solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

      Some tuning you can apply depending on your RAM.

      Please read: wiki.gentoo.org/wiki/ZFS
      arstechnica.com/information-te…-gen-filesystem-on-linux/

      In my case I use:

      Source Code

      echo "options zfs zfs_arc_max=6312427520" >> /etc/modprobe.d/zfs.conf
      echo "options zfs zfs_arc_min=6312427520" >> /etc/modprobe.d/zfs.conf


      and then reboot the NAS so that the new zfs_arc_max and zfs_arc_min values take effect.
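      After rebooting you can verify that the limits took effect; on ZFS on Linux the live ARC limits are exposed in /proc (values in bytes):

```shell
# c_max and c_min are the active ARC size limits, in bytes
grep -E '^c_(max|min)' /proc/spl/kstat/zfs/arcstats
```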


      Another good option, for 4 GB of RAM:

      Source Code

      echo "options zfs zfs_arc_max=3221225472" >> /etc/modprobe.d/zfs.conf
      echo "options zfs zfs_arc_min=3221225472" >> /etc/modprobe.d/zfs.conf
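      The values in these options are plain byte counts; a quick shell check (just arithmetic, nothing ZFS-specific) confirms that the 4 GB-RAM example pins the ARC to exactly 3 GiB:

```shell
# 3 GiB expressed in bytes; matches the zfs_arc_max value above
echo $((3 * 1024 * 1024 * 1024))   # prints 3221225472
```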



      I use values that experience on BSD (NAS4Free) recommends. If you look at a good tuning tool called ZFSKerntune and analyze it, you can see that all it does is pin zfs_arc_max and zfs_arc_min to desirable values.


      The matrix of desirable values is:

      Source Code

      "X32_1024MB" => array("kmem" => "512M", "arcmin" => "128M", "arcmax" => "128M"),
      "X32_1536MB" => array("kmem" => "1024M", "arcmin" => "256M", "arcmax" => "256M"),
      "X32_2048MB" => array("kmem" => "1400M", "arcmin" => "400M", "arcmax" => "400M"),
      "X64_2GB" => array("kmem" => "1536M", "arcmin" => "512M", "arcmax" => "512M"),
      "X64_3GB" => array("kmem" => "2048M", "arcmin" => "1024M", "arcmax" => "1024M"),
      "X64_4GB" => array("kmem" => "2560M", "arcmin" => "1536M", "arcmax" => "1536M"),
      "X64_6GB" => array("kmem" => "4608M", "arcmin" => "3072M", "arcmax" => "3072M"),
      "X64_8GB" => array("kmem" => "6656M", "arcmin" => "5120M", "arcmax" => "5120M"),
      "X64_12GB" => array("kmem" => "10752M", "arcmin" => "9216M", "arcmax" => "9216M"),
      "X64_16GB" => array("kmem" => "14336M", "arcmin" => "12288M", "arcmax" => "12288M"),
      "X64_24GB" => array("kmem" => "22528M", "arcmin" => "20480M", "arcmax" => "20480M"),
      "X64_32GB" => array("kmem" => "30720M", "arcmin" => "28672M", "arcmax" => "28672M"),
      "X64_48GB" => array("kmem" => "47104M", "arcmin" => "45056M", "arcmax" => "45056M"),
      "X64_64GB" => array("kmem" => "62464M", "arcmin" => "59392M", "arcmax" => "59392M"),
      "X64_96GB" => array("kmem" => "95232M", "arcmin" => "92160M", "arcmax" => "92160M"),
      "X64_128GB" => array("kmem" => "128000M", "arcmin" => "124928M", "arcmax" => "124928M"),
      "X64_192GB" => array("kmem" => "193536M", "arcmin" => "190464M", "arcmax" => "190464M"),
      "X64_256GB" => array("kmem" => "259072M", "arcmin" => "256000M", "arcmax" => "256000M")


    • ARM?

      Greetings
      David
      "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"

      Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


      Upload Logfile via WebGUI/CLI
      #openmediavault on freenode IRC | German & English | GMT+1
      Absolutely no Support via PM!

      I host parts of the omv-extras.org Repository, the OpenMediaVault Live Demo and the pre-built PXE Images. If you want you can take part and help covering the costs by having a look at my profile page.
    • I have lots of arm boards and used to have a bananapi. I assume you have it running with fuse? I would hate to see dedup try and run on an arm board.

      The repo we get the zfs debian packages from is amd64 only. So, that will be the only supported arch for the plugin. The plugin code is here if you want to experiment with it on arm.
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • This is interesting ;)

      On a reboot my ZFS pool isn't mounted. The pool isn't listed in the ZFS plugin panel... I try to reimport and I get:

      Source Code

      Error
      #0:
      exception 'OMVModuleZFSException' with message 'cannot create 'Media':
      missing dataset name' in /usr/share/omvzfs/Dataset.php:353
      Stack trace:
      #0 /usr/share/omvzfs/Dataset.php(191): OMVModuleZFSDataset->exec('zfs create -p "...', Array, 1)
      #1 /usr/share/omvzfs/Dataset.php(77): OMVModuleZFSDataset->create()
      #2 /usr/share/omvzfs/OMVStorageZvol.php(342): OMVModuleZFSDataset->__construct('Media')
      #3 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(231): OMVFilesystemZFS->isMounted()
      #4 [internal function]: OMVRpcServiceFileSystemMgmt->enumerateMountedFilesystems(Array, Array)
      #5 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
      #6 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('enumerateMounte...', Array, Array)
      #7 /usr/sbin/omv-engined(500): OMVRpc::exec('FileSystemMgmt', 'enumerateMounte...', Array, Array, 1)
      #8 {main}


      Hrm... 'zpool status Media' gives me:

      Source Code

        pool: Media
       state: UNAVAIL
      status: One or more devices could not be used because the label is missing
              or invalid. There are insufficient replicas for the pool to continue
              functioning.
      action: Destroy and re-create the pool from
              a backup source.
         see: http://zfsonlinux.org/msg/ZFS-8000-5E
        scan: none requested
      config:

              NAME        STATE     READ WRITE CKSUM
              Media       UNAVAIL      0     0     0  insufficient replicas
                raidz1-0  UNAVAIL      0     0     0  insufficient replicas
                  sdd     ONLINE       0     0     0
                  sde     UNAVAIL      0     0     0
                  sdf     UNAVAIL      0     0     0


      It appears that this is because the drives were originally sdd, sde, and sdf; they now appear to have been reassigned to sdb, sdc, and sdd.

      This could be a failing on my part; I expect I should have selected 'By ID', not 'By Path', but the hint "Specifies which device alias should be used. Don't change unless needed." made me leave it alone. Surely it should always be by ID, and that should be the default? Anyway, fixed with:

      Source Code

      zpool export Media


      then imported the pool again. :)
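      Put together, the fix is an export followed by a reimport that scans the stable by-id links (the -d option tells zpool which directory to scan for devices):

```shell
# Export the pool, then reimport it using /dev/disk/by-id device names
zpool export Media
zpool import -d /dev/disk/by-id Media
```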

      By importing the pool again, it appears that the drives have now been imported by ID:

      Source Code

      /mnt/Media # zpool status Media
        pool: Media
       state: ONLINE
        scan: none requested
      config:

              NAME                              STATE     READ WRITE CKSUM
              Media                             ONLINE       0     0     0
                raidz1-0                        ONLINE       0     0     0
                  ata-ST4000DM000-1F2168_XXXXXXX  ONLINE     0     0     0
                  ata-ST4000DM000-1F2168_XXXXXXX  ONLINE     0     0     0
                  ata-ST4000DM000-1F2168_XXXXXXX  ONLINE     0     0     0


      Thus preventing this from occurring again.

      Is it possible to have the default option for 'Device alias' changed to 'By ID'? Especially if the user is warned not to touch it.

      The post was edited 6 times, last by ellnic ().

    • Really, both are version 5000; to be precise, the only difference between the latest FreeNAS pools and the rest is the use of feature flags:
      the current ZFS version on Linux does not import the latest FreeNAS pools because FreeNAS uses some feature flags not supported by the current ZFSonLinux release.
    • In my opinion, ZFS is stable, but I'm not using it in production. The plugin may also be missing features; for example, it does not add a cron job for scrubbing.
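      Until the plugin gains that feature, you can schedule scrubs yourself; a sketch of a monthly cron entry (pool name and timing are placeholders):

```shell
# /etc/cron.d/zfs-scrub: scrub the pool "tank" at 02:00 on the 1st of each month
echo '0 2 1 * * root /sbin/zpool scrub tank' > /etc/cron.d/zfs-scrub
```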