ZFS settings

  • Hey guys,


Are there any ZFS settings I should be using or tweaking?
I just set up a new OMV install with ZFS and haven't really done anything else.


I see one of my ZFS pools is doing snapshots and the other isn't.
    The only extra thing I added was Compression: lz4


    Is there anything else I should do or use?
    https://i.imgur.com/7prcDEc.png


And here you can see that one pool is doing snapshots and the other isn't.
    https://i.imgur.com/IIubTjT.png


Lastly, I cannot delete these snapshots either.

• Official Post

To achieve a close approximation of POSIX (Linux) permissions, I use the following on the CLI.


    (Where ZFS1 is replaced with the name of your pool.)


zfs set aclinherit=passthrough ZFS1    # pass ACL entries through to newly created files and directories
zfs set acltype=posixacl ZFS1          # use POSIX ACLs rather than NFSv4 ACLs
zfs set xattr=sa ZFS1                  # store extended attributes (including ACLs) in inodes for faster access
zfs set compression=lz4 ZFS1           # enable lightweight LZ4 compression


(You've already set the last property, compression. It's not required, but it's included for readers who may want to do the same.)


Unfortunately, the above pool attributes need to be applied before you move data into the pool. (They only apply to data added after they are set.)
Otherwise, your files will have mixed attributes which, from a permissions perspective, can cause odd issues.
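If you want to double-check what a pool (or filesystem) is currently set to, a quick look at the effective values - assuming a pool named ZFS1, as above - would be:

zfs get aclinherit,acltype,xattr,compression ZFS1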
    __________________________________________________________


Lastly, I cannot delete these snapshots either

    This is not normal behavior.

And here you can see that one pool is doing snapshots and the other isn't.

Was the pool that won't delete snapshots the second pool you created?
    _________________________________________________________


Out of curiosity, why do you have two pools?
You said this is a new setup, so I'm guessing you have a backup?

  • Thanks for the reply.


So, 2 pools: a 2x 1TB SSD mirror for downloads, Docker containers and such,
and a 12x 8TB RAID-Z2 for main storage.


The vault (main storage) was created second, yes, but this one isn't even creating snapshots.
The first pool (jailhouse), for downloads and such, was created first, and this one has snapshots that I cannot delete.


I have set up stuff, but nothing I can't re-do, and all my data is still on my old server.
I can blow away the whole OMV install and restart, but I didn't really have a good guide on how to do it, so I followed the video for ZFS, installed the Proxmox kernel and just went from there.


So if there is a beginner's guide on correct setup, I'm all for it :)


Unfortunately, the above pool attributes need to be applied before you move data into the pool. (They only apply to data added after they are set.)
Otherwise, your files will have mixed attributes which, from a permissions perspective, can cause odd issues.
-- This makes me think I'm going to be starting again regardless :D

• Official Post

I get the reason for two pools now. I asked because ZFS works with block devices, and it seems there are those who will partition drives without considering the effects.
    _____________________________________________


There's a thread on the forum regarding installing and running ZFS (which I can't seem to find at the moment :) ).
The thread has gone long and would require a lot of reading, so I'll give you a short course of action to consider.


    Since you're not committed yet and it appears that your data store will be huge:


If it was me, I would:


- rebuild.
- get everything up-to-date and reboot.
- install OMV-Extras, enable the testing repo, and run "apt clean".
Here, you'll be at a decision point. In the OMV-Extras Kernel tab, you'll find the Proxmox kernel. The Proxmox kernel already has the ZFS header modules built and compiled into the kernel. If you switch to the Proxmox kernel there's nothing to go wrong when the ZFS plugin is added.


**On the other hand, since you're having difficulty going the Proxmox kernel route, maybe going with the standard backports kernel might be the way to go. I'm using kernel 4.19 with no problems.**


- install the ZFS plugin
If you didn't go with the Proxmox kernel, compiling the modules will take a while - get up and walk away until it completes. There may be a few errors during the install. Generally speaking, if the plugin shows up, it's good. And while it may not be necessary at this point, I'd reboot.


    - create a pool
    - run the commands posted above for correct permissions
- create at least one child filesystem on the pool. (Settings on the parent are passed through to the filesystem. It's good practice to store data in a filesystem, rather than just using the parent pool, which is similar to dumping data at the root of a drive.)
- add some data
- test snapshots, etc. (a CLI sketch of these steps follows below)


    - create your second pool, etc.
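To make the middle steps concrete, here's a minimal CLI sketch. The pool name (ZFS1), the device names, and the filesystem name (media) are placeholders - substitute your own, and verify devices with lsblk first.

# create a mirrored pool (placeholder device names)
zpool create ZFS1 mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB

# apply the permission/compression properties from earlier in the thread
zfs set aclinherit=passthrough ZFS1
zfs set acltype=posixacl ZFS1
zfs set xattr=sa ZFS1
zfs set compression=lz4 ZFS1

# create a child filesystem and confirm it inherited the properties
zfs create ZFS1/media
zfs get acltype,xattr,compression ZFS1/media

# quick snapshot round-trip test
zfs snapshot ZFS1/media@test
zfs list -t snapshot
zfs destroy ZFS1/media@test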


Bottom line: you should be able to take a snapshot and delete a snapshot in the GUI.
    ____________________________________________


    Let us know if you have questions and/or how it went.


    Tagging @hoppel118 and @ellnic who are ZFS users and may want to chime in.

  • Mate amazing! @crashtest


    I will start wiping and reinstalling tonight.
    When you say:
**On the other hand, since you're having difficulty going the Proxmox kernel route, maybe going with the standard backports kernel might be the way to go. I'm using kernel 4.19 with no problems.**
What do you mean by backports kernel?


    As in just leave the kernel that comes with OMV and install ZFS and let it compile the headers?


Lastly, what does the testing repo do?

• Official Post

    Mate

Are you an Australian? :)

What do you mean by backports kernel?

The current OMV kernel (4.19) is a Debian backports kernel. Proxmox is based on Ubuntu 4.15 (I think). Backports kernels are the standard in OMV. They run the latest hardware with fewer issues. As an example, I think the "standard" kernel for Debian 9 (stretch) is 4.9.


    As in just leave the kernel that comes with OMV and install ZFS and let it compile the headers?

Sure. That's what I'm using. It certainly won't hurt to try it. But if you go this route and everything works as it should, put a hold on the kernel so there are no automatic upgrades. (OMV-Extras has a provision for this.)
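If you'd rather do the hold from the CLI than through OMV-Extras, apt-mark can pin the kernel packages. (The exact package names vary by install, so list them first.)

# see which kernel packages are installed
dpkg -l 'linux-image*' | grep ^ii

# hold the kernel metapackage so apt won't pull in a new kernel
apt-mark hold linux-image-amd64
apt-mark showhold    # verify the hold is in place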

• Are you an Australian? :)

The current OMV kernel (4.19) is a Debian backports kernel. Proxmox is based on Ubuntu 4.15 (I think). Backports kernels are the standard in OMV. They run the latest hardware with fewer issues. As an example, I think the "standard" kernel for Debian 9 (stretch) is 4.9.

Sure. That's what I'm using. It certainly won't hurt to try it. But if you go this route and everything works as it should, put a hold on the kernel so there are no automatic upgrades. (OMV-Extras has a provision for this.)

Close, I'm South African but living in London :D


So just wondering, if I did a clean install, which route would be the best/preferred method for longevity?
Everywhere I read, the Proxmox kernel is mentioned as it has built-in support.


If I do a clean install and do it properly, which method would be the better, tried-and-true way?

If I'm starting again, I'd like to try and do it the preferred, most supported way.

• Official Post

    Arguments could be made for going either way.


As an example, using OMV's standard kernel (what I'm currently doing) comes with a risk. Since it is a backports kernel, closer to the cutting edge, the next kernel upgrade may have a problem with the current ZoL (ZFS on Linux) packages. An easy way to remove this risk is to set the working kernel as "default" (or put a hold on it) and check to ensure compatibility before upgrading to a new kernel.

Also, if an upgrade proves to be a problem for ZFS, dropping back to the old kernel works. That's easy enough to do in OMV-Extras. BTW, the current kernel 4.19 works fine with current ZFS packages. If your install works fine, put a hold on the kernel, done.


    The Proxmox kernel comes with precompiled ZFS modules, in a Ubuntu kernel, so ZFS incompatibility would never be a risk. On the other hand updates are slightly different for this kernel and (I'm not sure about this) there might be other slight userland differences. Nothing truly significant for the NAS use case (OMV) in any case.


    Either way it's about knowing the differences and how to deal with them.


**Edit**
A second fallback that I use is operating system backup. With full OS backup, even if some upgrade (the kernel or otherwise) goes south, I can fall back to a known good working state. I use USB thumbdrives to boot - cloning them for backup is real easy. Restorations take about 3 minutes.
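(For the curious: with like-sized thumbdrives, a clone can be as simple as the following. The device names are placeholders - confirm them with lsblk first, since this overwrites the target drive.)

# clone the boot thumbdrive /dev/sdX to the spare /dev/sdY
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync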

• Official Post

While you wouldn't need to do it right off the bat, if I were you, I'd seriously consider it. There's a huge difference between taking an occasional manual snapshot and automating them.


I use zfs-auto-snapshot, in the way it's defined in the How-To, on 2 servers. Read over the notes at the end (note 3) for advice on high-turnover filesystems. In such cases, I've turned off all but daily snapshots, maintaining a history of 31 days. For the remainder of my largely static filesystems, I've turned off "frequent", "hourly" and "weekly" just to cut down on snapshot clutter.
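For anyone who wants to do the same: zfs-auto-snapshot reads per-dataset ZFS properties, so trimming the schedule looks roughly like this (the filesystem name is a placeholder; the 31-day daily history corresponds to the script's default --keep value for the daily cron job):

# keep only daily snapshots on a high-turnover filesystem
zfs set com.sun:auto-snapshot:frequent=false ZFS1/downloads
zfs set com.sun:auto-snapshot:hourly=false ZFS1/downloads
zfs set com.sun:auto-snapshot:weekly=false ZFS1/downloads
zfs set com.sun:auto-snapshot:monthly=false ZFS1/downloads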


    While there's a more sophisticated approach (Znapzend), I've had zero issues using zfs-auto-snapshot. Most of the use cases on this forum, with largely static data, would work fine with zfs-auto-snapshot.
    ___________________


    After you get all of this working, give serious thought to backing up your OS. :) (It would be a shame to lose a good, working, configuration.)


    If you're interested in booting from USB thumbdrives, and cloning them for OS backup, take a look at this -> guide.

• Yeah, I was going to do just daily snapshots too.
    Can you see them in the GUI and restore/delete snapshots from the UI after doing the cron job?


Lastly, I have 2x 250GB SSDs (one is used as the boot drive); the other is spare, and I want to clone my boot drive to it in case it dies, so I can just swap the drive over.
Is Clonezilla the way to do this? Can I set up Clonezilla to automate this, or do I have to boot into it each time?


EDIT: So I redid my entire server, from the OS to everything.
Followed your guide, but trying the Proxmox kernel again.


I created one Docker container with netdata and just let it run overnight while I went to bed. Woke up this morning and saw: https://i.imgur.com/Bc03K6v.png
I didn't create any manual snapshots or anything; this was all auto. Is this right?


If I try to delete one of those 'Clone' entries,
I get this:


No such Mntent exists
Error #0: OMVModuleZFSException: No such Mntent exists in /usr/share/omvzfs/Utils.php:86
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/zfs.inc(254): OMVModuleZFSUtil::deleteOMVMntEnt(Array, Object(OMVModuleZFSFilesystem))
#1 [internal function]: OMVRpcServiceZFS->deleteObject(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('deleteObject', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ZFS', 'deleteObject', Array, Array, 1)
#5 {main}

• To achieve a close approximation of POSIX (Linux) permissions, I use the following on the CLI.


    zfs set aclinherit=passthrough ZFS1


    zfs set acltype=posixacl ZFS1
    zfs set xattr=sa ZFS1

100% wrong. POSIX (Linux) permissions are not related to those ZFS settings, which are about ACLs. And setting those ACL-related ZFS prefs is mostly useful if you configure Samba in a compatible way, to assign ACLs from a Windows client in Windows Explorer. But what to expect from someone who can't differentiate between authentication and permission problems.

    If it was me I would ... rebuild

What crazy 'advice'. If something's not working, it's time to have a look at it and figure out why. This is OMV and Linux; logs exist, commands exist. No need for trial & error or wild guesses.

    The Proxmox kernel already has the ZFS header modules built

You obviously have no idea what you're talking about. There is no such thing as 'header modules'. If the kernel sources aren't available, at least the kernel headers are needed for the compilation of 3rd-party drivers. That's the procedure that has to happen when not using the Proxmox kernel, since then the needed modules (zfs/spl) need to be built via DKMS. No kernel headers are required when using the Proxmox kernel.
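To illustrate the difference on the CLI (package names follow the usual Debian convention):

# only needed when NOT running the Proxmox kernel:
apt install linux-headers-$(uname -r)   # headers for the running kernel, needed by DKMS
dkms status                             # shows which modules (e.g. zfs/spl) were built for which kernel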

    If you switch to the Proxmox kernel there's nothing to go wrong when the ZFS plugin is added

Something can still go wrong, as already explained in a 'thread' you pretended to follow: https://github.com/openmediava…01#issuecomment-488582805 (a few comments above it is also explained what can go wrong, and what exactly went wrong when using the backports kernel instead)

The current OMV kernel (4.19) is a Debian backports kernel

OMV does not change anything with regard to the kernel. On amd64 you remain on 4.9 unless you install OMV-Extras, enable backports there and choose to install the backports kernel. This is 100% optional (but it was recommended in the past, since a more recent kernel results in better driver support for newer hardware. The recommendation has changed now that the Proxmox kernel is just one click away: Proxmox Kernel)

    The Proxmox kernel comes with precompiled ZFS modules, in a Ubuntu kernel, so ZFS incompatibility would never be a risk. On the other hand updates are slightly different for this kernel and (I'm not sure about this) there might be other slight userland differences

What? 'In a Ubuntu kernel'? The Proxmox kernel is based on the upstream stable Ubuntu kernel with some further modifications / additional hardware support. Also, the kernel is the exact opposite of 'userland'. There are no 'userland differences' in a kernel.


    You simply have not the slightest idea what you're babbling about.

    Can you see them in the GUI and restore/delete snapshots from the UI after doing the cron job?

Use Znapzend and stop following Dunning-Kruger advice. With Znapzend you're able to benefit from snapshots in a way that lets your Windows clients access older document revisions from within Windows Explorer (the 'previous versions' feature):



A guide on how to do this with btrfs is linked from here. You could set this up in a similar fashion with ZnapZend, but currently not with zfs-auto-snapshot (it needs some ugly tweaks or a more recent Samba version, and as such is only available with OMV5 or later). It makes simply no sense to promote zfs-auto-snapshot today.
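(For reference, a minimal ZnapZend plan can be created roughly along these lines - the dataset name is a placeholder, and the plan syntax is 'retention=>interval' pairs; see the znapzendzetup man page for the exact details:)

# keep hourly snapshots for 7 days and daily snapshots for 30 days
znapzendzetup create SRC '7d=>1h,30d=>1d' ZFS1/data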

• It seems that there is no properly qualified way to set up ZFS in OMV.
Everyone has different opinions and there is no clear course.


And as BTRFS is becoming the norm in OMV 6.x, wouldn't it be wise to just go that way now?
Be prepared for the future?

• Official Post

@savellm - I see someone has decided to make an introduction. :) I didn't read the entire novelette written above - I never do. Some, very few at a guess, want to get on the command line for everything and turn every relatively minor clean-up or recovery operation into a dissertation. OMV was not created for that sort of user.


On a rebuild: if you want to get on the command line, try to figure out what happened, and "maybe" work it out (or maybe miss something?), all I can say is good luck. How long that might take is unknown - versus investing 15 to 20 minutes in a rebuild, where the outcome is a bit more trustworthy. Which would be shorter? I'll leave that to you.


Before taking anything seriously, note that there are those who believe the entire world is "stupid", "dumb", "doesn't know what it's doing", etc., and will scream it to all who will listen. As a prime example, take a look at this post, and maybe read through the thread to get some perspective. I'm forced to agree with the Armbian forum moderators' link to very good advice. Yes, unfortunately, every village has one.


Further, some have known anger-management issues. In such cases, where anger takes over, common sense and logic go out the window. This is well known - nothing new.



If you're still interested in running ZFS, we can take this into a private conversation. That's in the upper left-hand area on this page. I'll lend you a hand, but I know from experience that this thread will become so polluted and confusing that it would be pointless to continue it.


    Regards.

• Official Post

I created one Docker container with netdata and just let it run overnight while I went to bed. Woke up this morning and saw: i.imgur.com/Bc03K6v.png
I didn't create any manual snapshots or anything; this was all auto. Is this right?

After taking this matter into a PM (where interference was eliminated), the following was found.
It may be of use to ZFS users who run into the same problem.


The issue occurs when Docker containers are created in a ZFS pool. The result is that the ZFS pool lists what appears to be a series of child "clones" and a sub-filesystem or two, under the filesystem where the Docker containers are stored. Since the (Docker) container is in use, these clones can't be deleted or otherwise removed.
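(A quick way to confirm this on the CLI: a cloned filesystem reports the snapshot it was cloned from in its 'origin' property. Dataset names are whatever your pool uses.)

# Docker-created clones show a snapshot in the 'origin' column
zfs list -t all -o name,origin,mountpoint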




The fix is easy enough. Stop and delete all containers and Docker images, and relocate the Docker storage directory to the default location (the boot drive), or to another non-ZFS drive dedicated to utility uses.
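(Outside of the OMV plugin settings, the generic Docker way to relocate storage is the daemon configuration file. The path below is just an example of a non-ZFS location.)

# point Docker's data-root at a non-ZFS location, then restart the daemon
cat >/etc/docker/daemon.json <<'EOF'
{
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker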


There's a -> ZFS storage driver for Docker, for those who may want to set it up. On the other hand, the setup notes indicate that this driver may not be suitable for the typical OMV home user.
