[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0


    • hoppel118 wrote:

      @luxflow made this a default. So, this should not be a problem.
      Are you sure? I started my ZFS installation on OMV in February 2017 with ZFS plugin version 3.0.9, and I also had to edit the file /etc/default/zfs and add
      ZPOOL_IMPORT_PATH="/dev/disk/by-id",
      because it was missing.
      The entry
      #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
      was already there, but it was commented out.
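      For anyone following along, a minimal sketch of that edit from the shell, assuming the commented-out line is present exactly as shown above:

      Source Code

      # check the current setting
      grep ZPOOL_IMPORT_PATH /etc/default/zfs
      # uncomment the line and point it at the stable by-id device names
      sed -i 's|^#\?ZPOOL_IMPORT_PATH=.*|ZPOOL_IMPORT_PATH="/dev/disk/by-id"|' /etc/default/zfs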
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • nicjo814 wrote:

      hoppel118 wrote:

      Nice to see you back here.
      It's nice to be back. :) I will try to be more active on the forum again...
      Regarding the change by luxflow, do you know which file he modified to correct the issue? I just made a fresh install of the plugin and /etc/default/zfs had the "wrong" settings as far as I could tell.
      Sorry, I don't know the file he modified.


      cabrio_leo wrote:

      hoppel118 wrote:

      @luxflow made this a default. So, this should not be a problem.
      Are you sure? I started my ZFS installation on OMV in February 2017 with ZFS plugin version 3.0.9, and I also had to edit the file /etc/default/zfs and add ZPOOL_IMPORT_PATH="/dev/disk/by-id",
      because it was missing.
      The entry
      #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
      was already there, but it was commented out.
      Yes I am sure. We discussed that here in this thread at the beginning of September 2016. Have a look at the following posts:

      [HOWTO] Install ZFS-Plugin & use ZFS on OMV
      [HOWTO] Install ZFS-Plugin & use ZFS on OMV
      [HOWTO] Install ZFS-Plugin & use ZFS on OMV

      But you are right. For me it is also commented out under "/etc/default/zfs":


      Source Code

      #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
      At the moment it still looks as it should:


      Source Code

      root@omv:~# zpool status
        pool: mediatank
       state: ONLINE
        scan: scrub repaired 0 in 12h17m with 0 errors on Sun Jul 9 12:41:55 2017
      config:

              NAME                                          STATE     READ WRITE CKSUM
              mediatank                                     ONLINE       0     0     0
                raidz2-0                                    ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
                  ata-WDC_WD40EFRX-68WT0N0_WD-WCCXXXXXXXXX  ONLINE       0     0     0
      By the way... There seems to be a cron job to scrub my pool, but I didn't define one. Where is it configured?


      Greetings Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • hoppel118 wrote:

      By the way... There seems to be a cron job to scrub my pool, but I didn't define one. Where is it configured?
      As far as I know, no scrubbing is started automatically. I have created a script in /etc/cron.weekly for that job (a sketch of such a script follows below). When the script is located there, it is also executed by anacron!

      Or did you start a scrub job from the command line and the NAS was shut down in the meantime? In that case I have seen that the scrubbing continues once the NAS is online again.
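      A minimal sketch of such a weekly scrub script; the pool name and file name are only examples, not cabrio_leo's actual script:

      Source Code

      #!/bin/sh
      # /etc/cron.weekly/zfs-scrub  (hypothetical file name; must be executable)
      # Kick off a scrub of the pool once a week; with anacron installed,
      # a missed run is caught up the next time the machine is on.
      POOL="mediatank"
      /sbin/zpool scrub "$POOL"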
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • [HOWTO] Install ZFS-Plugin & use ZFS on OMV

      I didn't start a scrub job manually and I didn't create a cron job to scrub my zfs. My server runs 24/7.

      Where do I have to look?
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • Try it via the CLI; if it starts, you will have no problem executing the same command from a cron job (see the sketch below).

      Source Code

      zpool scrub myzpool

      e.g. docs.oracle.com/cd/E18752_01/html/819-5461/gbbwa.html
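      For illustration, a cron entry along those lines might look like the sketch below; the pool name and schedule are only examples, and note that the zfsutils-linux package already ships its own scrub cron job (shown a few posts further down):

      Source Code

      # /etc/cron.d/zfs-manual-scrub  (hypothetical file)
      # Scrub the pool "myzpool" at 02:00 on the first day of every month.
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      0 2 1 * * root /sbin/zpool scrub myzpool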
      OMV 3.0.96 x64 on a HP T510, 16GB CF as Boot Disk & 32GB SSD 2,5" disk for Data, 4 GB RAM, CPU VIA EDEN X2 U4200 is x64 at 1GHz

      Post: HPT510 SlimNAS ; HOWTO Install Pi-Hole ; HOWTO install MLDonkey ; HOHTO Install ZFS-Plugin ; OMV_OldGUI ; ShellinaBOX ;
      Dockers: MLDonkey ; PiHole ;
    • [HOWTO] Install ZFS-Plugin & use ZFS on OMV

      raulfg3 wrote:

      Try it via the CLI; if it starts, you will have no problem executing the same command from a cron job.

      Source Code

      zpool scrub myzpool

      e.g. docs.oracle.com/cd/E18752_01/html/819-5461/gbbwa.html


      The manual scrub from the command line works fine. The question is: why is the scrub running by itself when I didn't configure a cron job?

      Greetings Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • hoppel118 wrote:

      The manual scrub from the command line works fine. The question is: why is the scrub running by itself when I didn't configure a cron job?
      Here

      Source Code

      root@nb:~ # dpkg-query -S /etc/cron.d/zfsutils-linux
      zfsutils-linux: /etc/cron.d/zfsutils-linux
      root@nb:~ # cat /etc/cron.d/zfsutils-linux
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      # Scrub the second Sunday of every month.
      24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
      root@nb:~ #
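      To unpack that line: the job fires on days 8-14 of the month, and the [ $(date +\%w) -eq 0 ] test only lets it run on a Sunday, which together means the second Sunday. It also checks that /usr/lib/zfs-linux/scrub is executable, so one way to disable the packaged scrub (an approach not discussed in this thread, offered only as a sketch) is to drop that executable bit:

      Source Code

      # the cron line only calls the helper if it is executable,
      # so clearing the executable bit disables the automatic monthly scrub
      chmod -x /usr/lib/zfs-linux/scrub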
      New wiki
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
    • subzero79 wrote:

      hoppel118 wrote:

      The manual scrub from the command line works fine. The question is: why is the scrub running by itself when I didn't configure a cron job?
      Here

      Source Code

      root@nb:~ # dpkg-query -S /etc/cron.d/zfsutils-linux
      zfsutils-linux: /etc/cron.d/zfsutils-linux
      root@nb:~ # cat /etc/cron.d/zfsutils-linux
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      # Scrub the second Sunday of every month.
      24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
      root@nb:~ #

      Ah yes, there it is. Thanks.

      Is it somehow possible to show this cron job under "Scheduled Jobs" in the web UI by default?

      At the moment I don't see it there, although in my opinion that is where it should be shown.

      Greetings Hoppel
      ---------------------------------------------------------------------------------------------------------------
      frontend software - android tv | libreelec | win10 | kodi krypton
      frontend hardware - nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2 | harmony smart control
      -------------------------------------------
      backend software - debian | kernel 4.4 lts | proxmox | openmediavault | zfs raid-z2 | docker | emby | vdr | vnsi | fhem
      backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x4tb wd red | digital devices max s8
      ---------------------------------------------------------------------------------------------------------------------------------------
    • subzero79 wrote:

      Here

      root@nb:~ # dpkg-query -S /etc/cron.d/zfsutils-linux
      zfsutils-linux: /etc/cron.d/zfsutils-linux
      root@nb:~ # cat /etc/cron.d/zfsutils-linux
      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

      # Scrub the second Sunday of every month.
      24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
      root@nb:~ #
      This scrub job is really well hidden! But it's never too late to learn, as you can see here. :rolleyes:
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • What's up guys!?!? Sorry if this has been asked before; I've done some searching and I couldn't find my answer. Anyways, I'm new to OMV, and I'm loving it so far. I just couldn't get comfortable w/ FreeNAS. I come from an Ubuntu Server background, so I'm right at home on Debian! Anyways, I've installed the ZFS plugin and created my pool. I have a few 80GB SSDs on the way to me off of eBay, and I was looking through the GUI trying to figure out how I would add these as cache disks for the pool I just created. In FreeNAS, I could expand the pool and add a cache disk. I notice that option isn't available in the OMV GUI. Can this be done w/ ZFS on Linux via the CLI? Any help would be appreciated. In the meantime, I'll do some more Googling lol. Thanks to the dev for this amazing setup. EXACTLY what I was looking for!!!
    • nicjo814 wrote:

      I don't think I added support to have cache disks added via gui. However you can add them via cli, I've done that on my own server.
      Would be a cool feature to have if it isn't something that's really hard to incorporate. I'm assuming that you created the ZFS plugin? Thanks for it, by the way. About adding the cache disks via the command line: I'm pretty decent with Linux, and I'm not asking you to tell me how to do it, but could you point me in the right direction? I'd like to read up on it some before my SSDs arrive. Also, do you think I should've gone with 128GB or 256GB SSDs for the cache disks instead of the 80s? Do you think 80GB disks will be enough cache? I'm planning to add 1 SSD per vdev. I have 2 vdevs, both raidz2. One vdev consists of 6 8TB disks, and the second of 6 160GB disks that I plan to swap out with 8TB drives as time goes by. Sorry for so many questions... Thanks for the reply nonetheless.
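      For reference, adding an L2ARC cache device from the command line is a one-liner; the pool and device names below are made up, and by-id paths are preferable to /dev/sdX. Note that cache devices belong to the pool as a whole rather than to individual data vdevs.

      Source Code

      # add an SSD as L2ARC cache to the pool "tank" (names are examples only)
      zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL
      # verify that the device shows up under a "cache" section
      zpool status tank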
    • nicjo814 wrote:

      I'm one of the devs of the plug-in, but there are lots of other people here who are much more skilled at using and setting up ZFS. I don't know what the best cache size for your vdevs is, but I remember there were some guidelines available "somewhere" when I did the setup myself.
      Lol. Ok. I'm looking around now. There seems to be some documentation on it out there, so hopefully it won't be that hard for me to figure out. Thanks for the answers.
    • nicjo814 wrote:

      @milfaddict A bit late but I might have an idea regarding your issue. I haven't read all posts so this could already have been discussed.

      By default ZFS on Linux (ZoL), which is the ZFS implementation used on Debian, imports pools with a "by-dev" policy (/dev/sdX), which can be bad if your /dev/sdX assignments change on reboot, something that I understand can happen on some systems.

      To fix this you should edit the file /etc/default/zfs and set ZPOOL_IMPORT_PATH="/dev/disk/by-id" (this setting is commented out by default).

      Next, create the pool, either via the CLI or the plugin. Then export the pool, and finally re-import it with: zpool import -d /dev/disk/by-id <pool_name>

      Check that it works via: zpool status

      Try to reboot and check if the pool comes back online as it should.
      Yeah, editing system files from the CLI is way beyond what I am comfortable with and capable of. I could easily do this in Windows. I ended up creating my pool in FreeNAS and importing it into OMV. Everything seems to be working... for now.

      nicjo814 wrote:

      milfaddict wrote:

      raidz3-1 DEGRADED 0 0 0
      18351847122423021727 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-TOSHIBA_HDWE160_Y5A8K225F56D-part1
      @milfaddict In the zpool status output it looks strange that you have a "part1" added to two of your disks (example above). This looks like you specified a partition (/dev/sdXa) instead of a whole disk (/dev/sdX) for two of the entries in your pool.
      None of my disks have partitions. I don't know what the problem could be. This may sound stupid, but maybe there is a bug in the way OMV handles more than 26 drives (sda-sdz)?
      Any idea when OMV's ZFS plugin will be updated?
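      To make the export/re-import step from the quoted advice a bit more concrete, a sketch with a hypothetical pool name:

      Source Code

      # export the pool, then re-import it using the stable by-id device names
      zpool export tank
      zpool import -d /dev/disk/by-id tank
      # confirm the vdev members are now listed by id
      zpool status tank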
    • nicjo814 wrote:

      @milfaddict In the zpool status output it looks strange that you have a "part1" added to two of your disks (example above). This looks like you specified a partition (/dev/sdXa) instead of a whole disk (/dev/sdX) for two of the entries in your pool.

      milfaddict wrote:

      None of my disks have partitions. I don't know what the problem could be. This may sound stupid, but maybe there is a bug in the way OMV handles more than 26 drives (sda-sdz)?
      Hi guys, over the last few days I have expanded my ZFS pool by two disks, or rather, I have newly created a striped raidz1 out of 2x3 disks. So I had the chance to do some tests. These are my results, with OMV V3.0.78 and ZFS plugin 3.0.18:
      • Disks that are already in use are still shown in the selection box when trying to expand the pool.
      • Creating the basic pool works without problems with the plugin (in my test case a mirror of 2 disks). But when trying to expand it with a second mirror I got an RPC error. Nevertheless, the pool was expanded, which I verified with zpool status.
      • Two control elements in the plugin (e.g. set ashift) are wrongly positioned; they overlap the text line.
      • I repeated the test several times, destroying the pool and doing a quick wipe of all disks between runs. Exactly once I got the same error pattern that @milfaddict described here. A degraded pool was reported, similar to this:

        "raidz3-1 DEGRADED 0 0 0
        18351847122423021727 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-TOSHIBA_HDWE160_Y5A8K225F56D-part1"

        It happened with 4 disks, so it seems to be independent of the number of disks used.

        The problem is that I could not reproduce it a second time. I tried to create a situation where this behaviour always happens, but I didn't find one. The RPC error, however, I got every time.
        Nevertheless, it should now be much easier to reproduce with 4 disks instead of 26. :)
      Conclusion: I would not recommend using the plugin for expanding a pool. By now I also think there are some problems with it.

      In the end I created and expanded my pool via the CLI (see the sketch below). This worked without problems, and the changes to the pool are also recognised by the plugin, which then asks to save the changes.
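      As an illustration of that CLI route, a sketch of creating and then expanding a 2x3 striped raidz1; the pool name and by-id device names are placeholders, not cabrio_leo's actual commands:

      Source Code

      # create the pool from the first three disks as one raidz1 vdev
      zpool create mediatank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
      # expand the pool by striping a second raidz1 vdev across three more disks
      zpool add mediatank raidz1 /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
      # check the resulting layout
      zpool status mediatank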


      Despite these problems I will continue using ZFS. For me everything seems to be stable once the pool has been created.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304

      The post was edited 3 times, last by cabrio_leo ().

    • milfaddict wrote:

      Any idea when OMV's ZFS plugin will be updated?
      The plugin doesn't have anything to do with the version of zfs it uses. The plugin uses whatever is in the Debian repos. 0.7.0 isn't even in Debian Sid yet.
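      If you want to see which ZoL version your system actually has, a quick check with standard Debian tooling (not something from this thread) looks like this:

      Source Code

      # version of the packaged userland tools
      dpkg-query -W zfsutils-linux
      # version of the loaded kernel module
      modinfo zfs | grep -iw version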
      omv 4.1.4 arrakis | 64 bit | 4.15 backports kernel | omvextrasorg 4.1.3
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please read this before posting a question.
      Please don't PM for support... Too many PMs!