[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • @wolfstarr


    Perhaps some creative use of cronjobs could be an alternative for basic auto-snapshot creation without the need for
    zfs-auto-snapshot.

    I was using cron jobs for zfsnap snapshotting. The problem is I have, excluding child Docker datasets, 17 different datasets. You're suggesting 17 * 5 = 85 cron jobs for snapshotting, plus 2-3 more for snapshot cleanup, versus installing zfs-auto-snapshot and setting maybe 2 filesystem properties each, depending on whether or not I want Frequent snapshots.
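
    For anyone who does want the DIY cron route for just a dataset or two, one /etc/cron.d entry per dataset and schedule is roughly what it looks like. The dataset name and naming scheme below are only examples, and note that cron requires the % signs to be escaped:

      # hypothetical hourly recursive snapshot of one dataset
      0 * * * * root /sbin/zfs snapshot -r tank/photos@auto-$(date +\%Y\%m\%d-\%H\%M)

    Cleanup then needs its own job or script to destroy snapshots older than some age, which is exactly the bookkeeping zfs-auto-snapshot or zfsnap does for you.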


    You also mentioned Docker; when I installed the Docker-GUI plugin and set its base directory to the ZFS dataset I had created for Docker's use, it automagically loaded the ZFS storage driver and started creating filesystems and clones all over the place. I actually had to modify the plugin to prevent the Docker driver from causing issues (this change has already been uploaded and released in the official plugin).
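
    For anyone hitting the same behaviour: one way to keep Docker's data on a ZFS dataset without letting it switch to the zfs storage driver is to pin the driver in /etc/docker/daemon.json. This is only a sketch, and whether overlay2 (or another driver) is available depends on your kernel and Docker version:

      {
        "storage-driver": "overlay2"
      }

    Docker needs a restart after the change, and switching drivers hides any images pulled under the old one, so it is best done before pulling anything.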

  • @wolfstarr


    DIY cron jobs would only be for simple cases. I've yet to look in any detail at zfs-auto-snapshot or seriously think about a final config, the implications of ZFS datasets versus directories, the complications of parent/child relationships, and how this all fits with recursive snapshot options. Have you considered this: https://github.com/jimsalterjrs/sanoid


    As I'm still only testing OMV3 in VBox, I've not come across the Docker ZFS behaviour you described. I set up Docker before installing the ZFS plugin and used the default settings for the paths etc., so I don't know what the best settings would be.


    Thinking a little about backup/restore of an ext4 system drive: if you kept a system backup on a ZFS array, then you'd need some kind of ZFS-aware bootable rescue CD/USB for the restore. What have you done about this?


    There's a lot to mull over and I've still not decided whether to move to zfs or stick with ext4 & mdadm.

  • DIY cron jobs would only be for simple cases. I've yet to look in any detail at zfs-auto-snapshot or seriously think about a final config, the implications of ZFS datasets versus directories, the complications of parent/child relationships, and how this all fits with recursive snapshot options. Have you considered this: github.com/jimsalterjrs/sanoid

    The way zfs-auto-snapshot works is that it has Frequent (every 15 minutes), Hourly, Daily, Weekly, and Monthly cron jobs. Each job looks at all ZFS datasets, checks whether the dataset has a particular attribute telling it NOT to take that snapshot for that dataset, and if not, takes the snapshot. This means you only have to set the basic attribute to true to get snapshots going, and only disable the ones you don't want for a given dataset.
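
    In practice that means something like this on the command line (pool/dataset names are just examples):

      # enable zfs-auto-snapshot for a dataset (and, via inheritance, its children)
      zfs set com.sun:auto-snapshot=true tank/media
      # but skip the 15-minute "Frequent" snapshots for it
      zfs set com.sun:auto-snapshot:frequent=false tank/media

    Because user properties are inherited, one setting on a parent dataset usually covers the whole tree below it.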


    I use datasets for several reasons; for one, that's the way the filesystem is designed. For another, I like being able to manage the data types. My movies don't need snapshots more than once a day or so, and they certainly don't need offsite backups - I can always re-rip. Family photos and personal files like tax docs though? Yeah, every 15 minutes to prevent loss if hit with a crypto-locker, and replication to remote sites.


    Sanoid is interesting, and if/when @luxflow gets free time to work on the plugin again, I'd recommend he look into it for the plugin's handling of snapshots. It looks much more flexible than zfs-auto-snapshot without the management overhead of zfsnap to get that flexibility.
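
    For reference, a Sanoid policy is a small config file; going by the project's README it looks roughly like this (dataset name and retention numbers are only examples):

      [tank/photos]
              use_template = production
              recursive = yes

      [template_production]
              frequently = 0
              hourly = 36
              daily = 30
              monthly = 3
              yearly = 0
              autosnap = yes
              autoprune = yes

    Sanoid itself then runs from a single cron entry or systemd timer and applies the policy to every dataset listed.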


    As I'm still only testing OMV3 in VBox, I've not come across the Docker ZFS behaviour you described. I set up Docker before installing the ZFS plugin and used the default settings for the paths etc., so I don't know what the best settings would be.

    You wouldn't come across it in that situation then. This happened when I added Openmediavault-Docker-GUI plugin to my existing OMV3 + ZFS system, enabled Docker, and then on the Settings tab changed the base directory to a ZFS dataset before pulling any images down or doing any other configurations.


    Thinking a little about backup/restore of an ext4 system drive: if you kept a system backup on a ZFS array, then you'd need some kind of ZFS-aware bootable rescue CD/USB for the restore. What have you done about this?

    Are you talking about a backup of OMV, or backup images for another system stored on OMV? In the first case, yes you'd have that worry. The answer is to throw the backup on a thumb drive attached to the OMV system for no other purpose. If you want to automate it, set up your schedule for the backup, and every so often rotate the drive out so you've got a separate backup that won't get fried in a lightning strike or whatever.


    If you're talking about backing up images of other machines to OMV, that's easy, just set up NFS and Samba shares for your backup storage. No need to understand ZFS that way.


    To answer the question asked, though: I use a recent copy of the Antergos Live image, which is capable of installing to a root ZFS filesystem, if I really need it.

  • I’ve not had time to make any progress with my test OMV3 config for a few days, but I did come across yet another zfs utility. This one’s called znapzend - https://github.com/oetiker/znapzend


    A Portainer container shows that with just 4 containers running on my OMV3 test, 47 volumes are in use. By default these live under /var/lib/docker along with all the other Docker dirs & files. If these all turned into datasets when you point the Docker plugin paths at your ZFS pool, it would quickly become unwieldy.


    Thanks for the reminder that Antergos is supposed to support zfs.


    I have other outstanding questions re: OMV which I’ll have to post elsewhere on the forum before I’m ready to put OMV on my microserver.

  • How do I actually tune ZFS? There are a few parameters I change on every boot to get the scrub speed up (there's a similar one for resilvering, I guess, but I haven't done that yet).


    Can I add them in the GUI or some config file somewhere?


    zfs_vdev_scrub_min_active to 4
    zfs_vdev_scrub_max_active to 8
    zfs_top_maxinflight to 512
    zfs_prefetch_disable to 1
    zfs_scrub_delay to 0
    Setting these gets my scrub to around 300 MB/s, compared to ~20 MB/s before.
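
    (For context: these are ZFS on Linux kernel module parameters, visible under /sys/module/zfs/parameters, where they can be read and changed at runtime as root, for example:

      cat /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
      echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_min_active

    Doing it that way doesn't survive a reboot, hence the question about a config file.)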


    Also, my zfs_arc_max shows as 0 (isn't that related to memory?).
    Same for zfs_vdev_cache_size = 0.


    Maybe the latter cache size has to be set at boot (again?).


    BR
    M

  • How do I actually tune ZFS?

    There are two possibilities:

    • You can edit /etc/default/zfs
      There are some variables there which you can change, or you can add new ones.
      Personally I have replaced this line:
      ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
      with
      ZPOOL_IMPORT_PATH="/dev/disk/by-id"


    • Create the file /etc/modprobe.d/zfs.conf
      I have these two lines in this file (the scrub tunables from the question above can be added the same way; see the example below):
      options zfs zfs_arc_min=8589934592
      options zfs zfs_arc_max=12884901888
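
    The scrub-related tunables asked about earlier in the thread go in the same file in the same way, one "options zfs ..." line each. For example (values copied from that question; whether they suit your hardware is another matter):

      options zfs zfs_vdev_scrub_min_active=4
      options zfs zfs_vdev_scrub_max_active=8
      options zfs zfs_top_maxinflight=512
      options zfs zfs_prefetch_disable=1
      options zfs zfs_scrub_delay=0

    If the zfs module is loaded from the initramfs, running "update-initramfs -u" afterwards may be needed before the new values take effect at boot.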


  • Anyone know how the snapshot function works? Any way to stop snapshots, change the frequency, etc.?

    There is no function in the WebUI for creating snapshots, controlling the frequency and so on. If you have already created some snapshots, they are displayed in the "Snapshots" tab, where they can be deleted and a rollback can be triggered.


    All other snapshot-related things must be done manually via the CLI.


    For controlling snapshots I personally use znapzend: http://www.znapzend.org/, which is really powerful and quite easy to configure. And there is a debian package available which can be installed directly in OMV3: https://github.com/Gregy/znapzend-debian/releases
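
    As a rough example of what a znapzend policy looks like once the daemon is installed (dataset name, retention plan and destination below are all placeholders):

      znapzendzetup create --recursive \
          SRC '7d=>1h,30d=>4h,90d=>1d' tank/data \
          DST:offsite '7d=>1h,30d=>4h,90d=>1d,1y=>1w' root@backuphost:backup/data

    The znapzend daemon then takes, prunes and replicates the snapshots according to that plan.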


    By the way, I am using ZFS snapshots together with the shadow copy function of Samba to provide Windows "Previous Versions" in Explorer - also for user home directories. A little bit tricky to configure, but now it works.
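
    For anyone who wants to try the same: the Samba side is roughly a vfs_shadow_copy2 block in the share definition. The snapshot name format has to match whatever your snapshot tool produces, so the format line below is only an example (it assumes znapzend-style '%Y-%m-%d-%H%M%S' snapshot names):

      vfs objects = shadow_copy2
      shadow: snapdir = .zfs/snapshot
      shadow: sort = desc
      shadow: format = %Y-%m-%d-%H%M%S

    Getting the "shadow: format" string to line up with the real snapshot names is the tricky part mentioned above.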


    • Official post

    There is no function in the WebUI for creating snapshots, controlling the frequency and so on. If you have already created some snapshots, they are displayed in the "Snapshots" tab, where they can be deleted and a rollback can be triggered.


    All other snapshot-related things must be done manually via the CLI.

    You can manually create snapshots in the plugin. Select a filesystem, click Add object, and pick snapshot from the dropdown. The snapshot tab will show the snapshots and allow you to revert to/delete the snapshot.
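
    For completeness, the CLI equivalents are just as short if you prefer the command line (filesystem and snapshot names here are just examples):

      zfs snapshot tank/data@before-upgrade
      zfs rollback tank/data@before-upgrade
      zfs destroy tank/data@before-upgrade

    Note that zfs rollback returns the filesystem to that point and discards anything written after the snapshot was taken.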


  • You can manually create snapshots in the plugin.

    Oh sorry. I had not realised that. Again something learned :)


  • Before I use the ZFS plugin for OMV, I just want to make sure I understand this.


    My goal is to create a RAID-Z3 pool with 3 vdevs, each consisting of 12x6TB drives.


    So, as I understand it, I would create the zpool with 12 drives, then expand it with 12 more, and then again with the last 12, right?

  • I would create the zpool with 12 drives and then expand it with 12 more drives and then again with the last 12, right?

    So, as I understand it, you want to create one pool out of 36 drives (3 vdevs with 12 disks each)? That's really big! In my opinion each vdev should be a raidz3, or at least a raidz2. I am not sure if this is possible via the WebUI. It could be that you have to use the CLI.


    You may have a look at this article: ZFS Raidz Performance, Capacity and Integrity
    It is a comparison of different ZFS RAID levels with different numbers of disks.


    You can also have a look here: 19.3.2. Adding and Removing Devices in the FreeBSD ZFS administration handbook. Please pay attention to the difference between 'zpool add' and 'zpool attach'!
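
    If it turns out the WebUI can only build the first vdev, the CLI steps for the layout described above would be roughly as follows. The device names are placeholders only - use the /dev/disk/by-id paths of your real disks:

      # the first 12-disk raidz3 vdev creates the pool
      zpool create tank raidz3 /dev/disk/by-id/disk{01..12}
      # each further 12-disk raidz3 vdev is added (not attached) to the same pool
      zpool add tank raidz3 /dev/disk/by-id/disk{13..24}
      zpool add tank raidz3 /dev/disk/by-id/disk{25..36}

    Afterwards 'zpool status tank' should show three raidz3 vdevs of 12 disks each.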


  • So, as I understand it, you want to create one pool out of 36 drives (3 vdevs with 12 disks each)? That's really big! In my opinion each vdev should be a raidz3, or at least a raidz2. I am not sure if this is possible via the WebUI. It could be that you have to use the CLI.
    You may have a look at this article: ZFS Raidz Performance, Capacity and Integrity
    It is a comparison of different ZFS RAID levels with different numbers of disks.


    You can also have a look here: 19.3.2. Adding and Removing Devices at the FreeBSD ZFS administration handbook. Please pay attention to the difference between 'zpool add' and 'zpool attach'!

    I use Napp-it on another server and creating the zpool and vdevs there was a really simple operation. I am finding the WebUI for ZFS on OMV to be lacking.


    I am asking for clarification because this isn't clear to me in the WebUI. The WebUI gives me two impressions: I can either make 1 pool consisting of a single 36-drive vdev, or 3 pools each consisting of a 12-drive vdev. Neither is what I want.


    I would strongly prefer not to have to use the CLI to build my RAID-Z3 array. I shouldn't even need to.

  • use Napp-it on another server and creating the zpool and vdevs was a really simple operation

    OK, then you are quite familiar with ZFS :)
    Maybe the ZFS plugin has some restrictions and supports only basic ZFS procedures. It is meant to ease the first steps with ZFS, not to cover everything that is possible with ZFS.
    For example, you can create a single snapshot, but you can't create scheduled automated snapshots. For that you have to use a special script or another tool like znapzend or zfs-auto-snapshot.


    Btw: the creation of a pool is a job that only has to be done once and should be no sorcery. Once you have created your pool (via the CLI), you can create the filesystems and so on with the ZFS plugin.


    AFAIK the developer of the ZFS plugin is (temporarily) not available. @ryecoaaron is generously doing some bugfixes, but I would not necessarily expect major enhancements.



  • It is frustrating that the ZFS implementation is so basic and featureless. I have other servers and will be trying out other options.


    For now, can you please confirm that I have understood correctly: I would create the zpool with 12 drives, then expand it with 12 more, and then again with the last 12?


    I kinda need to get this up and running ASAP.

    • Official post

    ZFS implementation is so basic and featureless.

    You are the first I have seen that has described the plugin that way. Most OMV users are looking for drive pooling with redundancy and bitrot protection without the specifics of how that is done. And that type of user just happens to be what OMV is targeted at.


    I kinda need to get this up and running ASAP.

    Trying things out in a VM would help with a lot of the problems/questions you have.

  • You are the first I have seen that has described the plugin that way. Most OMV users are looking for drive pooling with redundancy and bitrot protection without the specifics of how that is done. And that type of user just happens to be what OMV is targeted at.

    I hope I didn't insult anyone, that was not my intention.


    Quote from ryecoaaron


    Trying things out in a VM would help with a lot of the problems/questions you have.

    I thought about that, but setting up VMs seems complicated and I am too stupid to understand any of that.

  • I kinda need to get this up and running ASAP.

    At the beginning of this thread, in post no. 2, there are some examples of how to create and grow a pool.


    Otherwise I would say: just try it out! In the worst case you can wipe the data disks and start from the beginning. With 'zpool status' on the CLI you can monitor your progress.


    And I saw in other threads that you had a lot of problems installing ZFS. So once you have a stable OMV system configuration, please, please make a backup copy of your OMV system disk with Clonezilla! This is also included in OMV. Then you can restore the last stable configuration in case of problems installing other plugins.


  • Before I use the ZFS plugin for OMV, I just want to make sure I understand this.


    My goal is to create a RAID-Z3 pool with 3 vdevs, each consisting of 12x6TB drives.


    So, as I understand it, I would create the zpool with 12 drives, then expand it with 12 more, and then again with the last 12, right?

    Yes, that is how you would create that, and yes, it is confusing to say the least when you're looking for multiple-vdev setups. If I could code worth a damn I'd have already started fixing it, but I badly flubbed a simple one line edit, so yeah.


    Also, I'm jealous of your storage space. :)

  • Yes, that is how you would create that, and yes, it is confusing to say the least when you're looking for multiple-vdev setups. If I could code worth a damn I'd have already started fixing it, but I badly flubbed a simple one line edit, so yeah.

    Well I wish someone would. I have been having nothing but problems since I started using OMV.


    So when I created the pool with the first 12 drives, everything seemed to work just fine.


    Then when I went to expand the pool with 12 more drives, I first got this error. https://i.imgur.com/wZdfoGE.png


    So I clicked "Ok" and tried again but then I got this error. https://i.imgur.com/Aos4PRs.png


    So I rebooted and noticed the pool had expanded but said "DEGRADED".


    I then repeated those same steps, got the same errors, rebooted, noticed the pool had expanded again but still said "DEGRADED".


    So when I check the details, I can see that the second and third vdevs only show 11 drives each instead of 12.


    My raw storage space is 198TB; minus 49.5TB for parity, I should have about 148.5TB, but I am only showing 137TB, which is consistent with ~11TB missing due to the 2 missing drives.



    Someone please tell me what is going on and how I may fix it.
