[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • @luxflow made this a default. So, this should not be a problem.

    Are you sure? I started my ZFS installation on OMV in February 2017 with ZFS plugin version 3.0.9, and I also had to edit the file /etc/default/zfs and add
    ZPOOL_IMPORT_PATH="/dev/disk/by-id"
    because it was missing.
    There was already this entry
    #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
    but it was commented out.
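
    For clarity, this is roughly how the relevant part of /etc/default/zfs looks after that edit (the commented line is the shipped default, the added line is the one discussed here):

    Code
    # shipped default, still commented out:
    #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
    # added so the pool is imported via stable by-id device names:
    ZPOOL_IMPORT_PATH="/dev/disk/by-id"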

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • raidz3-1 DEGRADED 0 0 0
    18351847122423021727 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-TOSHIBA_HDWE160_Y5A8K225F56D-part1

    @milfaddict In the zpool status output it looks strange that two of your disks have "part1" appended (example above). It looks as if you specified a partition (/dev/sdX1) instead of a whole disk (/dev/sdX) for two of the entries in your pool.

  • It's nice to be back :) . I will try to be more active on the forum again...
    Regarding the change by luxflow, do you know which file he modified to correct the issue? I just made a fresh install of the plugin and /etc/default/zfs had the "wrong" settings as far as I could tell.

    Sorry, I don't know the file he modified.



    Are you sure? I started my ZFS installation on OMV in February 2017 with ZFS plugin version 3.0.9, and I also had to edit the file /etc/default/zfs and add ZPOOL_IMPORT_PATH="/dev/disk/by-id"
    because it was missing.
    There was already this entry
    #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
    but it was commented out.

    Yes I am sure. We discussed that here in this thread at the beginning of September 2016. Have a look at the following posts:


    [HOWTO] Install ZFS-Plugin & use ZFS on OMV
    [HOWTO] Install ZFS-Plugin & use ZFS on OMV
    [HOWTO] Install ZFS-Plugin & use ZFS on OMV


    But you are right. For me it is also commented out under "/etc/default/zfs":



    Code
    #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

    At the moment it still looks as it should:



    By the way... There seems to be a cron job to scrub my pool, but I didn't define one. Where is it configured?



    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • By the way... There seems to be a cron job to scrub my pool, but I didn't define one. Where is it configured?

    As far as I know, no scrub is started automatically. I have created a script for that job which is located in /etc/cron.weekly (when the script is located there, it is also executed by anacron!); a rough sketch of such a script is below.


    Or did you start a scrub job from the command line and the NAS was shut down in the meantime? In that case I have seen that the scrub continues once the NAS is online again.
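
    A rough sketch of such a weekly scrub script (the pool name "tank" is only a placeholder, adjust it to your own pool):

    Code
    #!/bin/sh
    # /etc/cron.weekly/zfs-scrub - start a scrub of the pool once a week;
    # anacron catches up the run if the machine was off at the scheduled time
    POOL="tank"                 # placeholder pool name
    /sbin/zpool scrub "$POOL"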

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • I didn't start a scrub job manually and I didn't create a cron job to scrub my ZFS pool. My server runs 24/7.


    Where do I have to look?

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • Try it by CLI; if it starts, you will have no problem executing the same command from a cron job.


    Code
    zpool scrub myzpool
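    # to check that the scrub really started and to watch its progress
    # ("myzpool" is the example pool name from the line above):
    zpool status myzpool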


    eg: https://docs.oracle.com/cd/E18…/html/819-5461/gbbwa.html


    The manual scrub from the command line works fine. The question is: why is a scrub running by itself when I didn't configure a cron job?


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

    • Official post

    The manual scrub from the command line works fine. The question is: why is a scrub running by itself when I didn't configure a cron job?

    Here

    Code
    root@nb:~ # dpkg-query -S /etc/cron.d/zfsutils-linux 
    zfsutils-linux: /etc/cron.d/zfsutils-linux
    root@nb:~ # cat /etc/cron.d/zfsutils-linux 
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    
    # Scrub the second Sunday of every month.
    24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
    root@nb:~ #
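
    In other words: cron already restricts the job to days 8-14 of the month, and the guard only lets it run when that day is a Sunday (date +%w prints 0 on a Sunday), which is exactly the second Sunday. A quick check from the shell:

    Code
    # prints the weekday as a number; 0 means Sunday, which is what the cron guard tests for
    date +%w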
  • Here

    Code
    root@nb:~ # dpkg-query -S /etc/cron.d/zfsutils-linux 
    zfsutils-linux: /etc/cron.d/zfsutils-linux
    root@nb:~ # cat /etc/cron.d/zfsutils-linux 
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    
    # Scrub the second Sunday of every month.
    24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
    root@nb:~ #


    Ah yes, there it is. Thanks.


    Is it somehow possible to show this cron job under "Scheduled Jobs" in the web UI by default?


    At the moment I don't see it there, although in my opinion that is where it should be shown.


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • This scrub job is really well hidden! But it's never too late to learn, as you can see here. :rolleyes:

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • What's up guys!?!? Sorry if this has been asked before. I've done some searching and I couldn't find my answer. Anyways, I'm new to OMV, and I'm loving it so far. I just couldn't get comfortable with FreeNAS. I come from an Ubuntu Server background, so I'm right at home on Debian! Anyways, I've installed the ZFS plugin and created my pool. I have a few 80GB SSDs on the way to me off of eBay, and I was looking through the GUI trying to figure out how I would add these as cache disks for the pool I just created. In FreeNAS, I could expand the pool and add a cache disk. I notice that option isn't available in the OMV GUI. Can this be done with ZFS on Linux via the CLI? Any help would be appreciated. In the meantime, I'll do some more Googling lol. Thanks to the dev for this amazing setup. EXACTLY what I was looking for!!!

  • I don't think I added support for adding cache disks via the GUI. However, you can add them via the CLI; I've done that on my own server (see the sketch further down).

    Would be a cool feature to have added if it isn't something that's really hard to incorporate. I'm assuming that you created the ZFS plugin? Thanks for it, by the way. About adding the cache disks via the command line: I'm pretty decent with Linux. I'm not asking you to tell me how to do it, but could you point me in the right direction? I'd like to read up on it some before my SSDs arrive. Also, do you think I should've gone with 128GB or 256GB SSDs for the cache disks instead of the 80s? Do you think 80GB disks will be enough cache? I'm planning to add 1 SSD per vdev. I have 2 vdevs, both raidz2. One vdev consists of 6 8TB disks, and the second consists of 6 160GB disks that I plan to swap out with 8TB drives as time goes by. Sorry for so many questions... Thanks for the reply nonetheless.

  • I'm one of the devs of the plugin, but there are lots of other people here who are much more skilled at using and setting up ZFS. I don't know what the best cache size for your vdevs is, but I remember there were some guidelines available "somewhere" when I did the setup myself.

    I'm one of the devs of the plugin, but there are lots of other people here who are much more skilled at using and setting up ZFS. I don't know what the best cache size for your vdevs is, but I remember there were some guidelines available "somewhere" when I did the setup myself.

    Lol. Ok. I'm looking around now. There seems to be some documentation on it out there, so hopefully it won't be that hard for me to figure out. Thanks for the answers.
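
    A minimal sketch of adding an SSD as a cache (L2ARC) device from the CLI, as mentioned above (pool name and device path are only placeholders):

    Code
    # "tank" and the by-id path are placeholders - use your own pool name and SSD
    zpool add tank cache /dev/disk/by-id/ata-YOUR_SSD_SERIAL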

  • Yeah, editing system files in a CLI is way beyond what I am comfortable with and capable of. I could easily do this in Windows. I ended up creating my pool in FreeNAS and importing it into OMV. Everything seems to be working... for now.

    @milfaddict In the zpool status output it looks strange that two of your disks have "part1" appended (example above). It looks as if you specified a partition (/dev/sdX1) instead of a whole disk (/dev/sdX) for two of the entries in your pool.

    None of my disks have partitions. I don't know what the problem could be. This may sound stupid, but maybe there is a bug in the way OMV handles more than 26 drives (sda-sdz)?

    Any idea when OMV's ZFS plugin will be updated?

    @milfaddict In the zpool status output it looks strange that two of your disks have "part1" appended (example above). It looks as if you specified a partition (/dev/sdX1) instead of a whole disk (/dev/sdX) for two of the entries in your pool.

    None of my disks have partitions. I don't know what the problem could be. This may sound stupid, but maybe there is a bug in the way OMV handles more than 26 drives (sda-sdz)?

    Hi guys, over the last days I have expanded my ZFS pool by two disks, or rather said, I have newly created a striped raidz1 out of 2x3 disks. So I had the opportunity to do some tests. These are my results, with OMV V3.0.78 and ZFS plugin 3.0.18:

    • Disks that are already in use are still shown in the selection box when trying to expand the pool.
    • Creating the basic pool works without problems with the plugin (in my test case a mirror out of 2 disks). But when trying to expand it by a second mirror I got an RPC error. Nevertheless, the pool was expanded, which I have checked with zpool status.
    • Two control elements in the plugin (e.g. set ashift) are positioned wrongly and overlap with the text line.
    • I did several repetitions of the test. Each time I destroyed the pool and then did a quick wipe of all disks between tests. Exactly once I got the same error pattern which @milfaddict has described here. A degraded pool was reported, similar to this:


      "raidz3-1 DEGRADED 0 0 0
      18351847122423021727 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-TOSHIBA_HDWE160_Y5A8K225F56D-part1"


      It happened with 4 disks. So it seems to be independent of the number of disks used.


      The problem is that I could not reproduce it a second time. I have tried to create a situation where this behaviour always happens, but I didn't find one. The RPC error, however, I got every time.
      But nevertheless, it should now be much easier to reproduce it with 4 disks instead of 26 :)

    Conclusion: I would not recommend using the plugin for expanding a pool. In the meantime I also think that there are some problems with it.


    In the end I created and expanded my pool via the CLI. This worked without problems, and the changes to the pool are also recognised by the plugin, which then requests to save the changes.
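
    For reference, the CLI steps for a 2x3 striped raidz1 like this look roughly as follows (pool name "tank" and the device paths are placeholders):

    Code
    # create the pool with a first raidz1 vdev of three disks
    zpool create tank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
    # expand it with a second raidz1 vdev - the two vdevs are then striped
    zpool add tank raidz1 /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6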



    Despite these problems I will continue using ZFS. For me everything seems to be stable once the pool has been created.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


    • Official post

    Any idea when OMV's ZFS plugin will be updated?

    The plugin doesn't have anything to do with the version of ZFS it uses. The plugin uses whatever is in the Debian repos. 0.7.0 isn't even in Debian Sid yet.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
