ZFS caching

  • Greetings @TechnoDadLife,


I would say this is done "out of the box" by ZFS. It's the ARC cache: ZFS ARC on Linux, how to set and monitor on Linux? By default, ZFS takes half of the RAM as the maximum ARC size.


There are two parameters with which the amount of RAM can be customized: zfs_arc_min and zfs_arc_max.
The current settings can be checked here: /sys/module/zfs/parameters/zfs_arc_min and /sys/module/zfs/parameters/zfs_arc_max. The current RAM usage can be checked with CLI> arcstat.
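
For example, to check both limits and the current usage (a small sketch; a value of 0 means the built-in default is in effect):

Code
# Configured ARC limits in bytes (0 = built-in default)
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max
# Current ARC size and hit statistics
arcstat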


Please look here to see how to make the modification: ZFS to use more than 50% ram?

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

Sorry for my lack of knowledge of ZFS, but is it possible to do disk caching or RAM caching with ZFS on OMV?
    Thanks

YES, it is like cabrio_leo says, but in my experience there is no noticeable speed gain with normal NAS home usage, only with intensive I/O and concurrent file access in productive office environments.

YES, it is like cabrio_leo says, but in my experience there is no noticeable speed gain with normal NAS home usage, only with intensive I/O and concurrent file access in productive office environments.

Yes, I agree with that. I think the default settings are sufficient for home usage.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

Yes, I read about that. But as far as I know, a dedicated device (e.g. an SSD) is required.
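
For reference, attaching such a device as an L2ARC read cache is a one-liner; a minimal sketch, where the pool name tank and the device /dev/sdX1 are placeholders for your own values:

Code
# Add an SSD partition as an L2ARC read cache to the pool "tank"
zpool add tank cache /dev/sdX1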

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

Hi, is it possible to lower how much RAM ZFS will use? In zfs_arc_max I have "0". Can I change that value so that ZFS will use no more than 2-3 GB?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

Right now it's using nearly 4 GB. For my usage that is a little too much, since I only have 8 GB of RAM.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

Can I change that value so that ZFS will use no more than 2-3 GB?

Try zfs_arc_max=2147483648 (that is 2 GiB in bytes).


Edit: And it could be that you also have to modify/reduce the zfs_arc_min value.
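
A minimal sketch of what the resulting /etc/modprobe.d/zfs.conf could look like; the 512 MiB zfs_arc_min value is only an illustrative assumption:

Code
# Limit the ARC to 2 GiB (2147483648 bytes)
options zfs zfs_arc_max=2147483648
# Optionally lower the floor too, e.g. to 512 MiB (536870912 bytes)
options zfs zfs_arc_min=536870912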

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


  • After editing it, do I need to restart my server?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

If this is possible in your environment, I would do this. With CLI> arcstat you can check if the changed settings have become active.

I have read that the ARC memory in use is not changed immediately; it takes a certain amount of time. So a reboot may be the fastest way to work with the new settings.
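
One way to watch whether the new ceiling is active is the arcstats kstat that ZFS on Linux exposes; a small sketch:

Code
# c_max = configured ceiling, size = current ARC usage, both in bytes
awk '$1 == "c_max" || $1 == "size" {print $1": "$3" bytes"}' /proc/spl/kstat/zfs/arcstats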

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • For now I only edited the file zfs_arc_max, then rebooted. Lets see :)

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

I tried to edit the file, but after rebooting it returns to 0.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • With CLI> arcstat you can check if the changed settings have become active.

Sorry, but I think this statement is wrong. arcstat shows the current memory allocation (size) of the ARC, which could be close to 0 right after a reboot.

Please have a look at post #2 of this thread. There I have described how to check the settings.
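
To make the distinction concrete, a small sketch contrasting the configured limit with the momentary ARC size:

Code
# The configured limit of the loaded module
cat /sys/module/zfs/parameters/zfs_arc_max
# The momentary ARC size in bytes, which is what arcstat reports
awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats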

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

My bad, I didn't see the link about how to limit how much RAM ZFS will use.

So I should just run these 3 commands from the CLI, right?

    Code
    echo "options zfs zfs_arc_max=1073741824" >> /etc/modprobe.d/zfs.conf
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
    update-initramfs -u

I don't really think I need more than 1-2 GB of RAM, since most of my files are media files.

I'm not really sure what will improve if I give 1, 2 or 4 GB of RAM to ZFS.
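
One way to judge this empirically is to watch the ARC miss rate under your typical workload; a sketch:

Code
# Print ARC statistics every second; a low miss% means the cache already covers your workload
arcstat 1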

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • 1 echo "options zfs zfs_arc_max=1073741824" >> /etc/modprobe.d/zfs.conf
    2 echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
    3 update-initramfs -u

Line 1 is necessary if you want a persistent change of the setting, so that it survives a reboot.

Line 2 applies the change to the currently running module.

Line 3: I am no Linux guru, therefore I have to admit that I do not know exactly whether this is necessary at all. Maybe you can explain to me what it is for? I seem to remember that I have done this too.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

• Official post

When a Linux system starts, the bootloader loads the kernel and a compressed filesystem containing modules the kernel needs to initialize filesystems before they can be accessed. This compressed filesystem is the initramfs. To use some filesystems the kernel may need to load some modules first, and the initramfs makes these modules available to the kernel during boot, before ordinary filesystems are available.


Sometimes, when you want to boot from certain filesystems, you need to make sure the modules for them are available to the kernel during boot. You do this by updating the initramfs.


Most likely you could still use ZFS for data volumes without updating the initramfs, but you wouldn't be able to boot from ZFS. Also, the boot process may be faster if modules are loaded via the initramfs.


In the past you needed to recompile the kernel to add support for filesystems during boot. Now it may be enough to update the initramfs before the next reboot.
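
On a Debian-based system such as OMV you can verify that the option file was picked up into the current initramfs; a sketch, assuming initramfs-tools is in use:

Code
# List the files inside the current initramfs and look for the zfs options file
lsinitramfs /boot/initrd.img-$(uname -r) | grep modprobe.d/zfs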

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

Adoby, thanks for the detailed explanation!

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod
