[HOWTO] Install ZFS plugin & use ZFS on OMV

  • ZFS uses half of the RAM as file cache (the ARC) by default. ZFS flushes writes to disk only every 5 seconds. Therefore the RAM is heavily used as cache.
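    For reference, the ARC size can be checked and, if RAM usage is a concern, capped. A minimal sketch, assuming OpenZFS on Linux and a hypothetical 4 GiB limit:

    Code
    # show current ARC size and its upper target (values in bytes)
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

    # cap the ARC at 4 GiB; takes effect after the zfs module is reloaded or after a reboot
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u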

    Got it thanks!

    BTW: during copying, and for maybe 30 minutes afterwards, 55-60% of RAM was used, but after that it dropped back to my previous level of about 20%.

    Not sure: if I add one more 6TB drive to the stripe, should I add extra memory? The rule of thumb seems to be 1GB of RAM per 1TB, so 8GB is probably not enough for a 9TB stripe across two disks.

    I would say that this is not ZFS related and must have other reasons. If one file is copied from ext4 to ZFS you must see one additional file on ZFS.

    It's very strange, because I have fewer folders and files on ZFS. I have also tried copying a few big folders with many files/folders inside from one ext4 drive to another ext4 drive - the size and number of folders/files were equal.

    I did all these experiments 10 hours ago. A few minutes ago I tried the same thing - checked the number of files/folders on the source (ext4) and the destination (ZFS) - and now they are equal; probably some data was still cached in RAM.

    But anyway, I can't see the 'one additional file on ZFS' that was mentioned.

    Very interesting behavior.

  • Does anyone get the "Previous versions" folder visible for a shared ZFS dataset via Samba on OMV?

    Is it correct that you want to see the ".zfs" folder within the Samba share? Having direct access to a previous version of a certain file is something different.


    You can make the folder visible via the CLI: zfs set snapdir=visible <pool>/<dataset>
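    For reference, a minimal sketch of setting and checking this on an assumed dataset tank/share:

    Code
    # expose the hidden .zfs directory inside the dataset
    zfs set snapdir=visible tank/share

    # verify the property (the default is "hidden")
    zfs get snapdir tank/share

    # snapshots are then browsable below the mountpoint
    ls /tank/share/.zfs/snapshot/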


    I have checked it just now. It works immediately.

    Do you use the same zfs-auto-snapshot script, or something else?

    znapzend

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


  • But anyway, I can't see the 'one additional file on ZFS' that was mentioned.


    In your last screenshot everything is OK. You have the same number of files and folders on ext4 and on ZFS. Where is the problem?


  • Thanks a lot!

    You pointed me in the right direction.

    I meant "Previous version" tool in Windows 10 explorer and I got it.

    My problem was an unprivileged (non-root) user who couldn't get access to the .zfs snapdir. I just added "root" for the user and it works.

    • Official post

    ... with a single drive I am not sure why I need ZFS; is that better than ext4 with these settings?

    There are two things you can do with a single ZFS drive that you can't do with EXT4.

    - Snapshots:
    This lets you retrieve a previous version of a file, or a deleted file (depending on the retention period). For documents and other similar file types, my retention period is 1 year. Snapshots are also an insurance policy against ransomware.

    - Bit-Rot Protection:
    We'll say that you have documents or family pictures stored on your server. You might want to ensure that they won't degrade. If you set copies=2 for that particular ZFS filesystem, a "scrub" will repair a corrupted file from the 2nd good copy or report a permanent error. The cost of bit-rot protection is 2X the space, but documents, pictures, and other files of real importance tend to be small, a drop in the bucket on today's enormous drives.


    On the other side of the coin, EXT4 is faster and has utilities that may (or may not) "patch" corrupted files or a corrupted filesystem. Either way, 100% backup is a very good idea.
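    For illustration, a minimal sketch of both features on a hypothetical dataset tank/documents:

    Code
    # snapshots: create one, list them, restore a deleted file from it
    zfs snapshot tank/documents@2021-01-01
    zfs list -t snapshot
    cp /tank/documents/.zfs/snapshot/2021-01-01/report.odt /tank/documents/

    # bit-rot protection: store two copies of every block on the single disk
    # (note: copies=2 only affects data written after the property is set)
    zfs set copies=2 tank/documents

    # a scrub verifies all checksums and repairs from the remaining good copy
    zpool scrub tank
    zpool status -v tank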

  • There are two things you can do with a single ZFS drive that you can't do with EXT4.

    - Snapshots:
    This lets you retrieve a previous version of a file, or a deleted file (depending on the retention period). For documents and other similar file types, my retention period is 1 year. Snapshots are also an insurance policy against ransomware.

    - Bit-Rot Protection:
    We'll say that you have documents or family pictures stored on your server. You might want to ensure that they won't degrade. If you set copies=2 for that particular ZFS filesystem, a "scrub" will repair a corrupted file from the 2nd good copy or report a permanent error. The cost of bit-rot protection is 2X the space, but documents, pictures, and other files of real importance tend to be small, a drop in the bucket on today's enormous drives.


    On the other side of the coin, EXT4 is faster and has utilities that may (or may not) "patch" corrupted files or a corrupted filesystem. Either way, 100% backup is a very good idea.

    Thank you for the explanation!

    So, according to your explanation, in my case I don't need ZFS :)

    Anything really important I can copy to an external HDD and put in a drawer somewhere.

    Snapshots and bit-rot protection are, in my opinion, not very helpful for me; both are kind of complicated and I would have to remember what exactly I saved/set up and where.

    Moving back to ext4


    One question - should I switch back to the default kernel, or can I continue using the Proxmox kernel?

    • Official post

    Thank you for the explanation!

    So, according to your explanation, in my case I don't need ZFS

    There are other capabilities but those two mentioned (snapshots and bit-rot protection) offer the most bang for the buck.


    One question - should I switch back to the default kernel, or can I continue using the Proxmox kernel?


    You can keep the Proxmox kernel if you like. Kernel versions tend to be more about hardware compatibility, with newer backports kernels being more likely to support cutting-edge hardware. If the Proxmox kernel works with your hardware, there's nothing wrong with keeping it. It's very stable.

  • most bang for the buck.

    And I have a question - what went wrong? My story from today is below:

    I received a new WD Gold HDD and decided to add it to my NAS. I turned off my OMV5 via the UI (selected shutdown in the menu), installed the new disk, also changed a few SATA cables, and turned it on again.


    All the paths to my disks had changed and I couldn't see the ZFS drive.

    Opened the ZFS plugin - no ZFS pools at all. Tried to check

    zfs status

    or

    zpool status

    Response:

    Code
    no datasets available

    Checked drive

    fdisk -l shows /dev/sdc1 is present in the system (previously it was /dev/sdb1)

    Tried to create again

    zpool create tank /dev/sdc1

    Response:

    Code
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/sdc1 is part of potentially active pool 'tank'


    What was that, and why do I see no ZFS pool after changing SATA cables?


    ext4 disks have their own labels and work fine after any SATA cable changes, but is ZFS sensitive to a changing device path?
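    For reference, a pool that disappears after re-cabling is usually not destroyed; the ZFS labels on the disk are still there, the pool just was not imported under the new device names. A minimal sketch of how such a pool can often be brought back (pool name tank taken from the commands above):

    Code
    # scan all attached devices for ZFS labels and list importable pools
    zpool import

    # if 'tank' is listed, import it again under its new device names
    zpool import tank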

  • Maybe I should add some additional parameters to the ZFS pool or to the disk?

    The pool was created and set up with the following commands:

    Code
    zpool create tank /dev/sdb1
    
    zfs set aclinherit=passthrough tank 
    zfs set acltype=posixacl tank 
    zfs set xattr=sa tank 
    zfs set compression=lz4 tank 
    zfs set atime=off tank 

    Then:

    1. I opened the ZFS plugin

    2. The OMV save dialog was displayed

    3. I pressed 'Apply'

    4. Created a new 'Object' in the ZFS plugin

    5. I pressed 'Apply'

    6. Added this new 'Object' to a shared folder and to Samba.
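    As a quick sanity check after such a setup, the applied properties can be read back (a sketch, using the pool name tank from above):

    Code
    # confirm the pool imported cleanly and show the properties set earlier
    zpool status tank
    zfs get aclinherit,acltype,xattr,compression,atime tank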

  • Maybe I should add some additional parameters to the ZFS pool or to the disk?

    That doesn't help.

    zpool create tank /dev/sdb1

    That is your problem. If you create the pool with the device name (path) /dev/sdb1, then that path must not change later. You added a new disk later, so the device order in /dev changed.


    If you create the pool with /dev/disk/by-id/ instead of the device name (path), then the SATA connection order doesn't matter.
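    For comparison, a minimal sketch of creating the pool with a stable by-id path instead; the disk ID below is hypothetical, use the link that ls shows for your own drive:

    Code
    # find the persistent identifier that points at the disk
    ls -l /dev/disk/by-id/ | grep sdb

    # create the pool using that identifier instead of /dev/sdb1
    zpool create tank /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-HYPOTHETICAL-part1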


    • Official post

    As cabrio_leo stated, if you want to move disks around, setting them on different ports:


    When setting up the pool, under Device Alias, select By ID. This will identify the disks in the pool by their unique hardware ID.

    If you select by Path, you can't change disk SATA ports. Physical SATA or SAS ports are tied to the device path.

  • That doesn't help.

    That is your problem. If you create the pool with the device name (path) /dev/sdb1, then that path must not change later. You added a new disk later, so the device order in /dev changed.


    If you create the pool with /dev/disk/by-id/ instead of the device name (path), then the SATA connection order doesn't matter.

    To change it you only need to export first:


    zpool export


    And then import by-id
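    A minimal sketch of that sequence, assuming the pool is named tank and nothing is using it at that moment:

    Code
    # export the pool (stop shares/containers using it first)
    zpool export tank

    # re-import it, resolving the member disks via their stable by-id links
    zpool import -d /dev/disk/by-id tank

    # the vdev should now be shown by id instead of sdX
    zpool status tank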

  • To change it you only need to export first:


    zpool export


    And then import by-id

    Yes, that's another possibility. But in the past OMV didn't like it when the pool was exported. After the next import of the pool, all share references got lost. I don't know if this problem has been fixed in the meantime.


    Nevertheless, I would assume that this is only possible when the pool still exists. In the current case, where zpool status returns 'no datasets available', it is maybe too late for this approach.


  • It is good practice to disconnect datasets/filesystems from any Debian/OMV services before exporting, or before any action that leads to mountpoint relocation. That includes shared folders, SMB/CIFS, Docker volumes and others.

    Yesterday I couldn't export the pool because of "the pool is busy", due to running Docker containers which weren't even bound to this pool.
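    A small sketch of checking what is still holding a pool busy before the export, assuming a mountpoint of /tank (lsof and fuser may need to be installed separately):

    Code
    # list processes with open files below the pool's mountpoint
    lsof +D /tank
    # alternatively:
    fuser -vm /tank

    # after stopping the offending services/containers, retry the export
    zpool export tank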

  • That is your problem. If you create the pool with the device name (path) /dev/sdb1, then that path must not change later. You added a new disk later, so the device order in /dev changed.


    If you create the pool with /dev/disk/by-id/ instead of the device name (path), then the SATA connection order doesn't matter.

    I didn't know that, thanks!

    Usually I do the same for ext4 with the command e2label /dev/sdX <LABEL>

    Can I change the label now, when I already have ZFS and some data on the disk, or do I have to create the pool from scratch?

    I copied all data off the ZFS disk and tried to destroy the pool - got an error:

    pool has experienced I/O failures, with a link to https://zfsonlinux.org/msg/ZFS-8000-HC/

    Also found this open issue: https://github.com/openzfs/zfs/issues/2878

    I could not remove/destroy/unmount the disk at all, but was able to wipe it via the OMV UI.


    Now I can see that the ZFS plugin has a 'Device alias' option when adding a pool (the first time, I created the pool from the terminal).

    Thinking: should I give ZFS another try with the 'by id' alias, or use this disk like the other two with the ext4 filesystem?

  • Thinking: should I give ZFS another try with the 'by id' alias, or use this disk like the other two with the ext4 filesystem?

    crashtest has made a small comparison of ZFS and ext4 in post #948. But you have to decide for yourself. :)


    In your case I would decide for ext4. In my NAS there is the main ZFS pool and additionally a single disk which is used for UrBackup. This disk is ext4 formatted. The handling of ZFS can be a little bit tricky in OMV. Therefore I use it only for the disk pool.


    • Official post

    In my NAS there is the main ZFS pool and additionally a single disk which is used for UrBackup. This disk is ext4 formatted.

    I've found it to be a good idea to have a separate utility disk, formatted to EXT4, for backups, Dockers, image files, etc. :thumbup:

  • crashtest has made a small comparison of ZFS and ext4 in post #948. But you have to decide for yourself. :)


    In your case I would decide for ext4. In my NAS there is the main ZFS pool and additionally a single disk which is used for UrBackup. This disk is ext4 formatted. The handling of ZFS can be a little bit tricky in OMV. Therefore I use it only for the disk pool.

    Got that, thanks! Decided to use XFS ^^
