[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0
• Sorry if I don't answer right away; I'm still reading through this thread :) , I'm on page 15!
  Thanks for your reply, I'll take a look soon!

      EDIT:

Good morning all. Yesterday I was reading an article about ZFS improvement/tuning, and I tried some settings to limit the ARC cache.
When I ran the command "cat /proc/spl/kstat/zfs/arcstats | grep c_" in the terminal, the result was:
c_min 33554432
c_max 8392192000
My system has 16 GB of RAM (not that much, but not bad either), so I tried to change these values by creating a file zfs.conf in /etc/modprobe.d with these parameters:
#Min: 4GB
      #Min: 4GB

      options zfs zfs_arc_min=4000000000

      #Max: 10GB

      options zfs zfs_arc_max=10000000000

Then I rebooted the server; after running "cat /proc/spl/kstat/zfs/arcstats | grep c_" again, these are the new ARC cache values:

      c_min 4 4000000000
      c_max 4 10000000000
      arc_no_grow 4 0
      arc_tempreserve 4 0
      arc_loaned_bytes 4 0
      arc_prune 4 0
      arc_meta_used 4 7203712
      arc_meta_limit 4 6294145536
      arc_meta_max 4 7206128
      arc_meta_min 4 16777216
      arc_need_free 4 0
      arc_sys_free 4 262254592

At the moment the server runs fine, and the read/write speed is higher than before, but could someone tell me whether these parameters could affect server operation?
Does anyone have advice for other improvements?
Thanks, everybody.
HP Gen8, SSD OS, 4x 4TB WD Red, 16 GB ECC RAM

      The post was edited 1 time, last by dlucca ().
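The ARC tuning described above can be sketched as a short shell session. This is a sketch, not an official procedure: the file path and option values come from the post, while the note about decimal vs. binary gigabytes is added for clarity (the post's limits are decimal GB, not GiB).

```shell
# Reproduce the zfs.conf from the post. It is written to a temp copy here
# so the sketch runs without root; copy it to /etc/modprobe.d/zfs.conf.
cat > /tmp/zfs.conf <<'EOF'
# Min: 4GB
options zfs zfs_arc_min=4000000000
# Max: 10GB
options zfs zfs_arc_max=10000000000
EOF

# The post's limits are decimal gigabytes; the binary (GiB) values differ:
echo $((4 * 1024 * 1024 * 1024))    # prints 4294967296 (4 GiB)
echo $((10 * 1024 * 1024 * 1024))   # prints 10737418240 (10 GiB)

# After installing the file and rebooting, verify the new limits with:
#   cat /proc/spl/kstat/zfs/arcstats | grep c_
```

On Debian-based systems the ZFS module options may also be baked into the initramfs; if the limits do not apply after a reboot, running "update-initramfs -u" before rebooting is worth trying.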

    • @dlucca In order to avoid double posts simply wait about 30s if you get a forum error. Then reload the page. It is not necessary to send the post again. If the post is marked green then an admin has to unlock it before it can be seen in the forum. This can take some time.

      dlucca wrote:

      Someone have some advice for other improvements??
I made these ZFS ARC modifications in my ZFS setup too. They were recommended in the NAS4Free forum. I have found no drawbacks with these settings.

Then I edited /etc/default/zfs and added the line ZPOOL_IMPORT_PATH="/dev/disk/by-id" to make sure that the disks are recognised by their serial numbers. Not sure if this is necessary at all.
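That edit can be sketched as follows. The file path /etc/default/zfs is the one from the post; the sketch appends to a temp copy so it runs without root.

```shell
# Append the by-id import path setting described above (demonstrated on a
# temp copy; the real file is /etc/default/zfs):
touch /tmp/zfs.default
echo 'ZPOOL_IMPORT_PATH="/dev/disk/by-id"' >> /tmp/zfs.default

# /dev/disk/by-id names embed the drive model and serial number, so they
# stay stable across reboots, unlike /dev/sdX names:
ls /dev/disk/by-id 2>/dev/null | head -n 3
```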

For creating snapshots automatically I am using ZnapZend (znapzend.org).
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • dlucca wrote:

I'm sorry for the double post, excuse me!
My post was not meant as a reprimand; I didn't want to blame you. It is common behavior of the forum software that from time to time a forum error occurs when trying to send a post, mostly when the text is longer. There seems to be no remedy for that.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • cabrio_leo wrote:

I made these ZFS ARC modifications in my ZFS setup too. They were recommended in the NAS4Free forum. I have found no drawbacks with these settings.


Then I edited /etc/default/zfs and added the line ZPOOL_IMPORT_PATH="/dev/disk/by-id" to make sure that the disks are recognised by their serial numbers. Not sure if this is necessary at all.

For creating snapshots automatically I am using ZnapZend (znapzend.org).

The ZPOOL_IMPORT_PATH line isn't necessary at this point: when the plugin creates an array, it already does so using by-id names. The setting only matters when you import an external pool from another system, where it saves a few keystrokes.
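For the external-pool case mentioned above, the import can be pointed at by-id names explicitly. A sketch, assuming a hypothetical pool named "tank" (substitute your own); the commands require root and a pool that was exported on the other system:

```shell
# Import an external pool using stable /dev/disk/by-id device names
# instead of /dev/sdX. "tank" is a hypothetical pool name.
zpool import -d /dev/disk/by-id tank

# Confirm which device paths the pool now uses:
zpool status tank
```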
The zfs 0.7.3 packages for amd64 (built on an OMV 4.x box from the Debian Sid source code) are now in the OMV 4.x omv-extras testing repo. This should fix the 4.13 kernel compilation issues and bring new features.
      omv 4.0.17 arrakis | 64 bit | 4.14 backports kernel | omvextrasorg 4.1.2
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please don't PM for support... Too many PMs!
    • Important Notices
      • The new systemd zfs-import.target file was added to the RPM packages but not automatically enabled at install time. This leads to an incorrect unit ordering on startup and missing mounted file systems. This issue can be resolved by running systemctl enable zfs-import.target after installing the packages. #6953
see: github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.4 and github.com/zfsonlinux/zfs/pull/6764
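The fix quoted from the release notes amounts to one command; a sketch (requires root, unit name as given in the zfs-0.7.4 release notes):

```shell
# Enable the import target that the packages ship disabled, so pools are
# imported and their file systems mounted in the right order at boot:
sudo systemctl enable zfs-import.target

# Verify it is now enabled:
systemctl is-enabled zfs-import.target
```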
      OMV 3.0.96 x64 on a HP T510, 8GB CF as Boot Disk & 32GB SSD 2,5" disk for Data, 4 GB RAM, CPU VIA EDEN X2 U4200 is x64 at 1GHz

Post: HPT510 SlimNAS ; HOWTO Install Pi-Hole ; HOWTO install MLDonkey ; HOWTO Install ZFS-Plugin ; OMV_OldGUI ; ShellinaBOX ;