[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • Sorry I didn't answer earlier, I'm still reading this whole thread :), I'm on page 15!!
    Thanks for your reply, I'll take a look soon!!!!


    EDIT:


    Good morning all, yesterday I was reading an article about ZFS improvement/tuning... I tried some settings to limit the ARC cache.
    When I ran the command "cat /proc/spl/kstat/zfs/arcstats | grep c_" in the terminal, the result was:
    c_min 33554432
    c_max 8392192000
    My system has 16 GB of RAM (not so much, but not so bad), so I tried to change these values by creating a file zfs.conf in /etc/modprobe.d with these parameters:
    #Min: 4GB


    options zfs zfs_arc_min=4000000000


    #Max: 10GB


    options zfs zfs_arc_max=10000000000


    Then I rebooted the server; after another "cat /proc/spl/kstat/zfs/arcstats | grep c_", these are the new ARC cache values:


    c_min 4 4000000000
    c_max 4 10000000000
    arc_no_grow 4 0
    arc_tempreserve 4 0
    arc_loaned_bytes 4 0
    arc_prune 4 0
    arc_meta_used 4 7203712
    arc_meta_limit 4 6294145536
    arc_meta_max 4 7206128
    arc_meta_min 4 16777216
    arc_need_free 4 0
    arc_sys_free 4 262254592
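

    For anyone trying the same thing: a minimal sketch of how to double-check that the limits were applied, plus the initramfs rebuild that some setups need first (whether the zfs module is loaded from your initramfs is an assumption you should verify):


    # rebuild the initramfs so /etc/modprobe.d/zfs.conf is picked up at boot
    # (only needed if the zfs module is loaded from the initramfs), then reboot
    update-initramfs -u
    reboot


    # after the reboot, check the effective ARC limits (values are in bytes)
    cat /proc/spl/kstat/zfs/arcstats | grep -E '^c_(min|max)'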


    At the moment the server runs fine, and the read/write speed is higher than before, but could someone tell me whether these parameters could affect the server's operation??
    Does anyone have advice for other improvements??
    Thanks, everybody

    HP Gen8, SSD for the OS, 4x4TB WD RED, 16 GB ECC RAM



  • @dlucca In order to avoid double posts simply wait about 30s if you get a forum error. Then reload the page. It is not necessary to send the post again. If the post is marked green then an admin has to unlock it before it can be seen in the forum. This can take some time.


    Does anyone have advice for other improvements??

    I made these ZFS ARC modifications in my ZFS setup too. They were recommended in the NAS4Free forum. I discovered no drawbacks with these settings.


    Then I edited /etc/default/zfs and added the line ZPOOL_IMPORT_PATH="/dev/disk/by-id" to make sure that the disks are recognised by their serial numbers. Not sure if this is necessary at all.
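

    A minimal sketch of that change, plus the export/import steps for a pool that was already imported with sdX names (the pool name "tank" is just a placeholder):


    # /etc/default/zfs
    ZPOOL_IMPORT_PATH="/dev/disk/by-id"


    # switch an already-imported pool over to by-id device names
    zpool export tank
    zpool import -d /dev/disk/by-id tank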


    For creating snapshots automatically I am using ZnapZend (znapzend.org).
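
    A rough sketch of a ZnapZend setup, going from the project's documentation (the retention plan, the dataset name tank/data and the service name are illustrative assumptions, not a tested recipe):


    # keep hourly snapshots for 7 days and daily snapshots for 90 days on tank/data
    znapzendzetup create SRC '7d=>1h,90d=>1d' tank/data
    # start the daemon (the unit name depends on how ZnapZend was installed)
    systemctl enable --now znapzend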

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    I'm sorry for the double post, excuse me!!

    My post was not meant as a reprimand, I didn't want to blame you. It is a common behavior of the forum software that from time to time there is a forum error when trying to send a post, mostly when the text is longer. It seems there is no remedy for that.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


  • The ZPOOL_IMPORT_PATH line isn't necessary at this point; currently when the plugin creates an array it does so using by-id. That should only matter when you import an external pool from another system; it saves a few keystrokes.

    • Official post

    The zfs 0.7.3 packages for amd64 (built on an OMV 4.x box from the Debian Sid source code) are now in the OMV 4.x omv-extras testing repo. This should fix the 4.13 kernel compilation issues and offer new features.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Important Notices

    • The new systemd zfs-import.target file was added to the RPM packages but not automatically enabled at install time. This leads to an incorrect unit ordering on startup and missing mounted file systems. This issue can be resolved by running systemctl enable zfs-import.target after installing the packages. #6953

    see: https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.4 and https://github.com/zfsonlinux/zfs/pull/6764
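
    If you are affected, the fix from the release notes is a one-liner; the status call afterwards is just a sanity check:


    systemctl enable zfs-import.target
    systemctl status zfs-import.target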

  • Hello. I've been an openmediavault 4 user for several days.
    I have a problem with zed notifications.
    If I have a pool with status DEGRADED, there is no email notification about the problem.
    If I manually run zpool scrub, I get a notification immediately.


    If I repair the zpool and its status changes to ONLINE, zed automatically sends an email that the faulty event has finished.


    Why doesn't zed check my raidz? How can I force zed to check my raid and send an email notification if the status changes?

  • Hi @du_ku I wrote a small HowTo about the usage of ZED to avoid an Autoshutdown while a ZFS scrub is running:


    (HowTo) avoid Autoshutdown while a ZFS scrub is running


    As I remember, you have to modify the ZED resource file /etc/zfs/zed.d/zed.rc.


    Add a new line:
    ZED_NOTIFY_VERBOSE=1
    In verbose mode an email is sent.


    More about ZED in the link above.
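
    A minimal sketch of the relevant part of zed.rc and the restart that makes zed pick it up (the email address is a placeholder; ZED_EMAIL_ADDR and ZED_NOTIFY_INTERVAL_SECS are already present in the default file, ZED_NOTIFY_VERBOSE=1 is the line to add or change):


    # /etc/zfs/zed.d/zed.rc (excerpt)
    ZED_EMAIL_ADDR="root"          # where notifications are sent
    ZED_NOTIFY_INTERVAL_SECS=3600  # minimum seconds between notifications for the same class of event
    ZED_NOTIFY_VERBOSE=1           # also notify when an event finishes without errors (e.g. a clean scrub)


    # restart the daemon so the new settings take effect
    systemctl restart zfs-zed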

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


    As I remember, you have to modify the ZED resource file /etc/zfs/zed.d/zed.rc.


    Add a new line:
    ZED_NOTIFY_VERBOSE=1
    In verbose mode an email is sent.

    Thanks.
    But zed does not send any notification, only when I run "zpool scrub". I do not understand why zed does not monitor the raidz state.
    Do I need to run "zpool scrub" every 5 minutes to know the raidz state?

  • To test ZFS health, and send an email if it fails, I run a small script every 5 minutes that checks the result of zpool status.


    I adapted it from this one:
    >>> gist.github.com/petervanderdoes/bd6660302404ed5b094d


    I run it via cron every 5 minutes with parameter 0,
    and once every Friday with parameter 1 to force it to send an email.


    Hope you can understand it; if not, let me know.
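
    In case the gist disappears one day, here is a stripped-down sketch of the same idea (the mail address, script path and parameter handling are my assumptions of how such a checker can look, not the exact gist):


    #!/bin/bash
    # zpool_checker.sh - mail a ZFS status report when a pool is unhealthy.
    # Usage: zpool_checker.sh 0   -> mail only on problems (5-minute cron job)
    #        zpool_checker.sh 1   -> always mail (weekly "still alive" report)
    FORCE_MAIL="${1:-0}"
    MAILTO="root"   # placeholder address, adjust to yours


    CONDITION=$(zpool status -x)   # prints "all pools are healthy" when everything is fine
    if [ "$CONDITION" != "all pools are healthy" ] || [ "$FORCE_MAIL" -eq 1 ]; then
        zpool status | mail -s "ZFS status on $(hostname)" "$MAILTO"
    fi


    # crontab entries
    */5 * * * * /usr/local/bin/zpool_checker.sh 0
    0 8 * * 5   /usr/local/bin/zpool_checker.sh 1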

    OMV 4.x. OMV-Extras ZFS iSCSI Infiniband. Testing OMV 5.1. Testing OMV arm64


    I do not understand why zed does not monitor the raidz state.

    What state do you want to monitor with zed?
    Did you have a look at post #8 where I mentioned some links to the zed documentation?

    Do I need to run "zpool scrub" every 5 minutes to know the raidz state?

    No, because a scrub is a task which runs for several hours depending on the amount of data in your pool.
    Try zpool status <yourpool> or zpool status -x.
    If the status is ONLINE, everything is OK.
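
    For example (output from a healthy system; the pool name is a placeholder):


    zpool status -x
    all pools are healthy


    zpool status tank | grep state
     state: ONLINE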

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    What state do you want to monitor with zed? Did you have a look at post #8 where I mentioned some links to the zed documentation?

    For example:
    zed does not send a notification in this case.
    If I run "zpool scrub main", I get an email about the problem.


    Yes, I read post #8.
    I think I do not understand the principle of how it works.

  • To test ZFS health, and send an email if it fails, I run a small script every 5 minutes that checks the result of zpool status.


    I adapted it from this one:
    >>> gist.github.com/petervanderdoes/bd6660302404ed5b094d

    I run it via cron every 5 minutes with parameter 0,
    and once every Friday with parameter 1 to force it to send an email.

    It works, but I get the message "/zpool_checker.sh: 16: [: Illegal number:"
