[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • Official post

    If you imported the pool and have existing folders under the parent pool:
    When setting up a share, did you select the device the pool is located on and use the folder icon to select the existing path?
    Or is the physical disk where the pool is located missing?

  • I've imported the pool (Data), then I went into Shared Folders, selected /Data, then the subfolder, and created the 4 shared folders that you can see in the screenshot. Should I have followed a different path?


    If I try to edit an existing shared folder I get this error:



    • Official post

    If you "imported" the pool (as in, the pool is new to the OMV installation it was imported into), any existing shared folders are broken, unless they point to another, existing drive. Shared folders defined before the import would not exist on an imported pool; an imported pool is new to that installation of OMV. (Or did I miss something?)


    You could go into edit mode on an existing share and try to repoint it. In the device box, select the device where the imported pool exists, then click on the folder icon and select the appropriate folder. Hopefully this will save without an error. If it does, the services layered on top of the existing shared folder will follow.

    If that doesn't work, you could try setting up new shared folders.
    In the new shared folder dialog, for the device, select the drive where the pool is, then click on the folder icon and navigate to an existing folder, and give the shared folder a name.

  • Maybe I wasn't clear:

    1) I created the pool Data

    2) I created 4 shared folders using Data

    3) Now I need to add a 5th shared folder, but I can't select Data from the device list; I can only see the backup disk.


    I don't get why Data and the already created shared folders are working even though:

    - I can't find Data in the device list to create a new shared folder

    - The 4 shared folders show n/a instead of /Data or /Data/anime and I can't edit them. If I try to edit them I get the error from the previous post.


  • Update: not sure what happened, but now I can see Data in the device list :/


    • Official post

    I don't know how to recreate what seems to have happened to you. On the other hand, I haven't attempted to set up ZFS using standard folders at the root of the pool for a very long time.


    Maybe this is the time to set up ZFS as it was intended, with a parent pool and child filesystems. That would mean starting over, but...


    1. Create the pool in the GUI - you can set the ashift value to 12 if you want. I went with the default, which sets the value to 0 for auto-detecting the sector size. This is working well for me.
    2. Run the commands from above, on the command line, before data is added.

    3. Use the +add object button to create child filesystems on the pool.

    4. When creating a shared folder, the device selected will be the child filesystem itself. The path will be / (a command-line sketch of the equivalent layout follows below).
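
    For reference, the command-line equivalent of steps 1-3 looks roughly like this (the pool and filesystem names are only examples; in the GUI the same thing is done with the pool creation dialog and the +add object button):

    Code
    # Step 1: create the pool (ashift=12 forces 4K sectors; omit -o ashift=12 to auto-detect)
    zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
    # Step 2: apply the "zfs set ..." property commands mentioned earlier, before adding data
    # Step 3: create child filesystems; each one shows up as a selectable device in OMV
    zfs create tank/media
    zfs create tank/documents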


    Edit - I see the problem appears to be solved.

  • Hi,


    I have an ODROID-HC2 with OMV5 installed.

    It has a 1 TB disk and I just got a 4 TB one to replace it.


    I'm considering formatting this new disk as ZFS.

    I'd like to know if ZFS on OMV5 is stable to use, and if I will get any benefit from using it in a one-disk configuration. The HC2 has only one SATA interface and only one disk.


    Thanks.

    • Official post

    I'm considering formatting this new disk as ZFS.

    btrfs is available on Armbian and filesystems can be created within the GUI of OMV out of the box.

    This is not a recommendation, just information. If you decide to use btrfs (or ZFS on another platform) you need to be aware of the pros and cons compared to "standard" ext4.

    • Official post

    ZFS is only compatible with AMD64 CPUs, check your architecture first.

    That is no longer true. I started building the plugin for all architectures because Debian has the packages available - https://packages.debian.org/se…on=names&keywords=zfs-zed


  • I'd like to know if ZFS on OMV5 is stable to use, and if I will get any benefit from using it in a one-disk configuration.

    With ZFS in a one-disk configuration you get no self-healing, because that requires redundancy information or a mirror. But you can still benefit from ZFS checksums: hidden bit rot, for example, will nevertheless be detected by a scrub. ZFS will not be able to repair it, but you will at least know that you have to restore the damaged file from backup.
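
    To see this in practice, a scrub will flag files with checksum errors even though it cannot repair them on a single disk (assuming a pool named, say, tank):

    Code
    # Start a scrub of the pool, then check for checksum errors
    zpool scrub tank
    zpool status -v tank   # -v lists any files affected by unrecoverable errors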


  • Hi all, looking for a bit of help. I ran a FreeNAS system for about 3 or 4 years. All was good until I wanted to upgrade - no matter what I did I couldn't get permissions working like I wanted. It was clearly an issue on my end, but I moved to OMV using the ZFS plugin and haven't looked back - it just works.


    All has gone well until I decided to add another vdev to my array. Currently I have 6x 10 TB drives in a RAID-Z2 pool. I am trying to add another vdev in the form of 12x 6 TB drives in RAID-Z2, but I am getting an error saying 'raidz contains devices of different sizes'. I thought that as long as the drives within the vdev were the same size (which these are) you could add them? I am using the expand button in the ZFS tab in OMV. What am I doing wrong, or is what I am looking for not possible? I am using the PVE kernel and am on OMV4 with all the updates.


    Any help appreciated.


  • Nothing, but the OMV ZFS web GUI is not as complete as the FreeNAS one, so please try adding the vdev from the shell.


    e.g.:


    Code
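    # Adds a second raidz2 vdev (disks referenced by ID) to the existing pool;
    # -f forces the add despite the 'devices of different sizes' warning (the new 6 TB disks differ from the existing 10 TB vdev)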
    zpool add -f "NASPool" raidz2 ata-WDC_WD60EZRZ-00RWYB1_WD-WX21DB509H6X ata-WDC_WD60EZRZ-00GZ5B1_WD-WXN1H8464WML ata-WDC_WD60EFRX-68L0BN1_WD-WX11D389L200 ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D55PXJ16 ata-WDC_WD60EZRZ-00GZ5B1_WD-WX61D9783HX3 ata-WDC_WD60EZRZ-00GZ5B1_WD-WX31D96KN40T ata-WDC_WD60EZRZ-00RWYB1_WD-WX21D25HE0SU ata-WDC_WD60EZRZ-00GZ5B1_WD-WX61DC742T8Y ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11D65CUYSU ata-WDC_WD60EZRZ-00GZ5B1_WD-WXN1H844EA1N ata-WDC_WD60EZRZ-00RWYB1_WD-WXA1D65E3KV0 ata-WDC_WD60EZRZ-00GZ5B1_WD-WX11DC6H1K3H
  • What am I doing wrong, or is what I am looking for not possible?

    This is a known bug in the ZFS plugin. See here (just one example): https://forum.openmediavault.org/index.php?thread/29036-omv5-zfs-unable-to-add-mirrored-vdev-to-pool/


    Unfortunately ZFS is only rudimentarily supported in OMV, and that won't change until someone takes over maintenance of the plugin.


  • Hi!

    I installed the script for creating auto-snapshots. Maybe you already use it too:

    Code
    apt install zfs-auto-snapshot
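
    Once the package's cron jobs have run, the snapshots it creates should show up like this (plain ZFS, nothing OMV-specific):

    Code
    # List the auto-snapshots created so far
    zfs list -t snapshot -o name,creation | grep zfs-auto-snap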

    Does anyone get the "Previous Versions" folder visible for a shared ZFS dataset via Samba on OMV?

    I tried many variations of additional options in the SMB/CIFS share properties but couldn't get my auto-snapshots visible.

    https://github.com/zfsonlinux/zfs-auto-snapshot/wiki/Samba
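
    For what it's worth, the approach on that wiki page boils down to extra options on the SMB share along these lines. This is only a minimal sketch that exposes the hourly snapshots, and it assumes the default zfs-auto-snap_hourly-YYYY-MM-DD-HHMM snapshot naming with local-time timestamps:

    Code
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    shadow:localtime = yes
    shadow:format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M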

  • Wanted to try ZFS, read the last 4 pages and everything is fine. My procedure:

    • Install Proxmox kernel
    • Reboot
    • Remove non-Proxmox kernels
    • Install plugin zfs
      • Realized that with the new kernel I had a lot of updates (installed all of them just in case)
      • Copy all my data from the patient (a WD Red 3 TB bought in 2014) to another drive
      • Remove all shared folders located on the patient from the OMV UI
      • Unmount the drive
    • From SSH:
      • Check the disk: sudo fdisk -l /dev/sdX
      • sudo zpool create -o ashift=12 zfs1 /dev/sdX
      • sudo zpool status
      • zfs set aclinherit=passthrough zfs1
      • zfs set acltype=posixacl zfs1
      • zfs set xattr=sa zfs1
      • zfs set compression=lz4 zfs1
      • zfs set atime=off zfs1
    • Open the UI and check that the new device is displayed in 'File systems' and in the ZFS plugin
    • In the ZFS plugin, create a new filesystem as a child of the pool by pressing 'Add Object'
    • Add this folder to 'Shared folders' and to 'Samba'

    Right now I'm copying my data back to the zfs1 disk from another ext4 HDD.


    Of all these ZFS flags there are only 2 that I checked very carefully and decided to give a try: atime=off and ashift=12; not sure how they will affect performance.
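
    If you want to confirm how those flags actually ended up on the pool (names match the zfs1 pool from the steps above), something like this should show them:

    Code
    # Confirm the sector-size shift the pool was created with
    zpool get ashift zfs1
    # Confirm the dataset properties set earlier (access time, compression, ACL/xattr handling)
    zfs get atime,compression,acltype,xattr,aclinherit zfs1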


    What I can see now, after copying 40 GB of my data back (with ~900 GB more to copy):

    1. ZFS uses 40% of RAM; previously (on ext4) it was 20%. 40% even without any operations

    2. The RAM utilization graph looks like a sawtooth even without any operations

  • And maybe someone can explain to me why I have a different number of folders/files for the same data?

    Just copied from ext4 to ZFS via Samba.

    Different size - probably because of compression=lz4, but folders and files...
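
    If the size difference is indeed the lz4 compression, comparing logical vs. physical usage should confirm it (again assuming the pool is named zfs1):

    Code
    # logicalused = uncompressed size of the data, used = space actually consumed on disk
    zfs get used,logicalused,compressratio zfs1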

    What I can see now, after copying 40 GB of my data back (with ~900 GB more to copy):

    1. ZFS uses 40% of RAM; previously (on ext4) it was 20%. 40% even without any operations

    2. The RAM utilization graph looks like a sawtooth even without any operations

    ZFS uses half of the RAM as file cache (the ARC) by default. ZFS flushes writes to disk only every 5 seconds. Therefore the RAM is heavily used as cache.
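
    If the memory usage bothers you, the ARC size can be capped with the zfs_arc_max module parameter; a rough sketch (the 4 GiB value is only an example):

    Code
    # Cap the ZFS ARC at 4 GiB (value in bytes); applied at module load, so effective after a reboot
    echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u
    # Or change it at runtime without rebooting
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max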


  • And maybe someone can explain to me why I have a different number of folders/files for the same data?

    I would say that this is not ZFS-related and must have other causes. If one file is copied from ext4 to ZFS, you should see exactly one additional file on ZFS.


  • Does anyone get the "Previous Versions" folder visible for a shared ZFS dataset via Samba on OMV?

    Yes.

