[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0

    • What is the output of
      zpool status
      and
      zfs list <poolname>?

      Your usable space can be seen under 'AVAIL'.
      I think the unit of the reported value is TiB not TB.

      There is also some overhead: serverfault.com/questions/5919…-on-4k-sector-disks-going
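If the TB/TiB distinction matters, zfs list can print exact byte counts instead of rounded units (a sketch, assuming your ZFS version supports the -p flag):

Source Code

# print raw byte values for the pool's used and available space
zfs list -p -o name,used,avail <poolname>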
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304


    • cabrio_leo wrote:

      What is the output of
      zpool status
      and
      zfs list <poolname>?

      Your usable space can be seen under 'AVAIL'.
      I think the unit of the reported value is TiB not TB.

      There is also some overhead: serverfault.com/questions/5919…-on-4k-sector-disks-going
      Thank you for your reply!
      As per your questions:

Source Code

zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        data                                          ONLINE       0     0     0
          raidz2-0                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1XV83SA  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1768209  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7XR0NYN  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1XV8XL6  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1761585  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3CP7C0U  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3CP7ZT8  ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N6EZ3VL7  ONLINE       0     0     0

errors: No known data errors

Source Code

zfs list data
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   512K  15.0T   205K  /data

I figured there would be some overhead, but I hoped it would be less than 8TB. If that's how it is, then so be it.
    • iddqd wrote:

I figured there would be some overhead, but I hoped it would be less than 8TB.
I think the overhead caused by ZFS itself is less than 8TB. You lose 6TB for redundancy: 24TB - 6TB = 18TB. Then you have to factor in allocation efficiency, which is approx. 92.4% for RAIDZ2 with 8 disks (see the table in that link):
18TB x 92.4% = 16.63TB, which is about 15.13 TiB.
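For reference, here is that arithmetic as a one-liner you can check on the box (a sketch; the 92.4% efficiency figure comes from the table linked above):

Source Code

# 8 x 3TB raw, minus two disks of RAIDZ2 parity, times ~92.4% allocation efficiency
awk 'BEGIN { u = (8*3e12 - 2*3e12) * 0.924; printf "%.2f TiB\n", u / 2^40 }'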

      I think everything is OK with your pool.

You could also try a striped pool of two RAIDZ2 vdevs with 4 disks each. Maybe then you would have less overhead.
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304


    • cabrio_leo wrote:

      iddqd wrote:

I figured there would be some overhead, but I hoped it would be less than 8TB.
I think the overhead caused by ZFS itself is less than 8TB. You lose 6TB for redundancy: 24TB - 6TB = 18TB. Then you have to factor in allocation efficiency, which is approx. 92.4% for RAIDZ2 with 8 disks (see the table in that link): 18TB x 92.4% = 16.63TB, which is about 15.13 TiB.

      I think everything is OK with your pool.

You could also try a striped pool of two RAIDZ2 vdevs with 4 disks each. Maybe then you would have less overhead.
Thank you for your explanation, it makes sense! So ~15 TiB is the usable size of my pool.
    • Hi,

Trying to create a mirror pool with 4 disks in the latest OMV 3 with the ZFS plugin gives the following error:

Error #0: exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-id | grep 'sda$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175
Stack trace:
#0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)
#1 /usr/share/omvzfs/Utils.php(135): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)
#2 /usr/share/openmediavault/engined/rpc/zfs.inc(136): OMVModuleZFSUtil::getDiskId('/dev/sda')
#3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)
#6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)
#7 {main}

      Is there a workaround for this?

Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space (I'm currently using RAID10), so I guess a mirrored stripe would be good. Any hints on what commands to run in the CLI, as I can't seem to do it in the plugin web interface?

      Thanks

      UPDATE:

I did this; is it correct?

root@omv:~# zpool create data mirror /dev/sda /dev/sdb
root@omv:~# zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0

errors: No known data errors

root@omv:~# zpool add data mirror /dev/sdc /dev/sdd
root@omv:~# zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
root@omv:~#


    • That is basically correct and gives you RAID10-like functionality (striped mirrors). You can also do zpool create data mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd to create the whole thing in one fell swoop.

      Before you go loading it up with data though, I would recommend you do the following, to avoid issues with drives being re-named on a reboot.

zpool export data
zpool import -d /dev/disk/by-id data

      That will use drive disk IDs instead of sda, sdb, etc. My drives have a tendency to change which letter they're assigned to on every reboot, but the ID stays the same.
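To confirm the re-import picked up the IDs, a quick check (a sketch, using the pool name from above):

Source Code

# no match here means no plain sdX names are left in the pool layout
zpool status data | grep -E '\bsd[a-z]+\b' || echo "all vdevs use stable IDs"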
• As wolffstarr has said, use the dev ID. For future reference, if you haven't already created the pool:

      Get the IDs first:

      Source Code

ls -l /dev/disk/by-id/ | grep sd


Then choose the identifiers (starting with wwn) for the disks. You can see which disk (e.g. sda) has which identifier; use the full string (if it has a -partX at the end, you've got the partition number too, so truncate at the -).

This way you will add the disks by ID, and it's SO much easier to track down failed disks etc., as the sdX identifiers can change.
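For example, to list each stable ID next to the sdX device it currently points at (a sketch; the NF test just skips the "total" line that ls prints):

Source Code

# show 'wwn-... -> ../../sda' style mappings, whole disks only
ls -l /dev/disk/by-id/ | grep -v part | awk 'NF>=11 {print $9, $10, $11}'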

      To make a new RAID10 pool mounted at /mnt/Tank you would use:

      Source Code

zpool create -m /mnt/Tank Tank mirror wwn-0x1xxxxxxxxxxx wwn-0x2xxxxxxxxxxx mirror wwn-0x3xxxxxxxxxxx wwn-0x4xxxxxxxxxxx


So wwn1 and wwn2 are mirrored, as are wwn3 and wwn4.

You could also use the identifiers starting with ata. Usually the serial number is the last part of the ata string, so I label the caddies with it. Then when I do a zpool status, it displays these against each drive.

      Edit, correction about wwn and ata.


      Server: ASRock X99 WS, Xeon E5-2695 V3, 32GB DDR4 Registered ECC | Drives: OS: Kingston V300 120GB SSD, Array: 8 x Seagate ST4000DM000's in RAIDZ2
      OS: OMV Stoneburner 2.1 on Debian 7, Backports 3.16 Kernel, OMV Extras | OMV Extras: ZFS Plug-in, Emby Plug-in
      [ Server Thread | Router Thread ]


    • ArmandH wrote:

      zpool create data
If you are using disks with 4K sectors, you should create your pool with the ashift option to get correct sector alignment:

      zpool create -o ashift=12 data....

You have to do this when you create your pool; you can't change it later (as far as I know).
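A full invocation might look like this (a sketch; the wwn names are placeholders for your own IDs, and you can use full /dev/disk/by-id paths if bare names aren't found):

Source Code

# ashift is per-vdev and fixed at creation time
zpool create -o ashift=12 data mirror wwn-0xAAAAAAAAAAAAAAAA wwn-0xBBBBBBBBBBBBBBBB

# verify which ashift the pool actually got
zdb -C data | grep ashift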
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • ArmandH wrote:

Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space (I'm currently using RAID10)
Yes, that gives good performance and shorter resilvering times, because the data is more or less copied straight from the other disk in the mirror and doesn't need to be recalculated from parity information.
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • ArmandH wrote:

Error #0: exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-id | grep 'sda$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175
Stack trace:
#0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)
#1 /usr/share/omvzfs/Utils.php(135): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)
#2 /usr/share/openmediavault/engined/rpc/zfs.inc(136): OMVModuleZFSUtil::getDiskId('/dev/sda')
#3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)
#6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)
#7 {main}
This is a well-known error, and I think it happened while you were trying to add disks to the basic mirror. Several people have already reported it in this thread.
      E.g. myself: forum.openmediavault.org/index…?postID=150321#post150321

There I wrote: Conclusion: I would not recommend using the plugin to expand a pool.
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304


• Yes, ashift=12 gives the fastest writes at the cost of a little space. You can also use ashift=9 on 4K drives: a bit less write speed, but more usable space. Benchmark both and decide (see the sketch at the end of this post). ZFS should detect most 4K-sector drives and set the ashift to 12 automatically. If you have created a pool and didn't specify an ashift, you can check it using:

      Source Code

zdb -C


      It will also appear in:

      Source Code

zfs get all Tank


      But usually only if you've specified an ashift. Whereas zdb shows it regardless.

      If you do a:

      Source Code

zfs get all | grep ashift


      And get nothing returned, you likely didn't specify it. Do:

      Source Code

zdb -C | grep ashift


      For my smaller dedicated emby box this gives:

      Source Code

ashift=12
ashift=12


      One for each mirrored pair.
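If you want to run the benchmark mentioned above, even a crude dd to the mounted pool will show the difference between two ashift settings (a sketch; /mnt/Tank and the sizes are examples, and zeros will compress away if compression is enabled):

Source Code

# write 4 GiB and include the final flush in the reported rate
dd if=/dev/zero of=/mnt/Tank/ashift-test bs=1M count=4096 conv=fsync
rm /mnt/Tank/ashift-test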


      Server: ASRock X99 WS, Xeon E5-2695 V3, 32GB DDR4 Registered ECC | Drives: OS: Kingston V300 120GB SSD, Array: 8 x Seagate ST4000DM000's in RAIDZ2
      OS: OMV Stoneburner 2.1 on Debian 7, Backports 3.16 Kernel, OMV Extras | OMV Extras: ZFS Plug-in, Emby Plug-in
      [ Server Thread | Router Thread ]

Tried to install it, but it's stuck on "Building initial module for 4.9.0-0.bpo.3-amd64". I waited for an hour, but it just stopped working :(

Source Code

.......
Unpacking libuutil1linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libnvpair1linux.
Preparing to unpack .../libnvpair1linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libnvpair1linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libzpool2linux.
Preparing to unpack .../libzpool2linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libzpool2linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libzfs2linux.
Preparing to unpack .../libzfs2linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libzfs2linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package linux-compiler-gcc-4.9-x86.
Preparing to unpack .../linux-compiler-gcc-4.9-x86_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-compiler-gcc-4.9-x86 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-4.9.0-0.bpo.3-common.
Preparing to unpack .../linux-headers-4.9.0-0.bpo.3-common_4.9.30-2+deb9u2~bpo8+1_all.deb ...
Unpacking linux-headers-4.9.0-0.bpo.3-common (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-kbuild-4.9.
Preparing to unpack .../linux-kbuild-4.9_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-4.9.0-0.bpo.3-amd64.
Preparing to unpack .../linux-headers-4.9.0-0.bpo.3-amd64_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-headers-4.9.0-0.bpo.3-amd64 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-amd64.
Preparing to unpack .../linux-headers-amd64_4.9+80~bpo8+1_amd64.deb ...
Unpacking linux-headers-amd64 (4.9+80~bpo8+1) ...
Selecting previously unselected package zfsutils-linux.
Preparing to unpack .../zfsutils-linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking zfsutils-linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package zfs-zed.
Preparing to unpack .../zfs-zed_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking zfs-zed (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package openmediavault-zfs.
Preparing to unpack .../openmediavault-zfs_3.0.18_amd64.deb ...
Unpacking openmediavault-zfs (3.0.18) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for openmediavault (3.0.86) ...
Restarting engine daemon ...
Setting up zfs-dkms (0.6.5.9-2~bpo8+1) ...
Loading new zfs-0.6.5.9 DKMS files...
Building for 4.9.0-0.bpo.3-amd64
Building initial module for 4.9.0-0.bpo.3-amd64
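If a DKMS build seems to hang like this, you can check from a second shell whether it is still compiling (a sketch; the version in the log path may differ on your system):

Source Code

# shows whether the zfs module is building, built, or failed
dkms status

# follow the compiler output of the in-progress build
tail -f /var/lib/dkms/zfs/0.6.5.9/build/make.log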

Post by Grinchy, deleted by the author: "Worked on a second try."
• @Grinchy Maybe it's an alternative for you to use the Proxmox LTS kernel instead. If so, go to "OMV-Extras - Kernel" and click "Install Proxmox kernel" in the OMV web UI.

      This works fine for me and some other guys here.

      Greetings Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton | emby for kodi | vnsi
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - proxmox | openmediavault | debian jessie | kernel 4.4 lts | zfs | docker | emby | vdr | vnsi
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | raid-z2 | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------


    • hoppel118 wrote:

@Grinchy Maybe it's an alternative for you to use the Proxmox LTS kernel instead. If so, go to "OMV-Extras - Kernel" and click "Install Proxmox kernel" in the OMV web UI.

      This works fine for me and some other guys here.

      Greetings Hoppel
      @hoppel118
Why do you prefer another kernel? What is the advantage compared with the 4.9.0 backports kernel? And what is the connection to ZFS? I always thought the Proxmox kernel is only necessary if I want to run virtual machines within OMV.
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
• No, you can use the Proxmox kernel without virtualizing. Proxmox is also based on Debian.

The Proxmox kernel simply works in combination with ZoL (ZFS on Linux), because Proxmox supports ZFS by default. That is the only point.

      @Grinchy Take it or leave it! ;)
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton | emby for kodi | vnsi
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - proxmox | openmediavault | debian jessie | kernel 4.4 lts | zfs | docker | emby | vdr | vnsi
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | raid-z2 | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
• Why would a ZFS pool be imported with the sda, sdb, etc. naming convention instead of the dev IDs? I'm relocating drives between servers, and when I import, they come up as sdX. At first I didn't care; then something reshuffled my drives and the pool disappeared. It was easy enough to import again, but now the shared folders don't work, nor will it let me edit them; I have to delete and re-create them. That seems bothersome, unless the disks can be imported using their ID names, which never change. Thoughts?
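As suggested earlier in the thread, exporting and re-importing with the by-id search path should pin the device names (a sketch; substitute your pool name for Tank):

Source Code

zpool export Tank
zpool import -d /dev/disk/by-id Tank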