Low Write Speed with Transparent Compression (ZFS)

  • I've been noticing that my drives have been quite sluggish whenever I enable any form of compression on them. With incompressible data in particular, speeds often drop massively, from an average of 90 MB/s uncompressed (on the same file) down to an inconsistent 10-30 MB/s with compression enabled.
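
    In case anyone wants to reproduce the numbers: a crude way to test with guaranteed-incompressible data is writing from /dev/urandom straight into the pool. The mountpoint and size below are placeholders, and urandom generation itself can bottleneck on older CPUs, so treat the result as a rough floor:

        # write ~2 GiB of incompressible data; conv=fsync includes the final flush in the timing
        dd if=/dev/urandom of=/tank/testfile bs=1M count=2048 conv=fsync status=progress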


    If I didn't know any better I'd consider this entirely expected, but from what I've seen that isn't really the case, at least for ZFS: it should still be reasonably fast on incompressible data, since LZ4 is designed to bail out early on blocks it can't compress.


    I'm running this in a VM on Proxmox. CPU usage doesn't appear particularly high (30% at *most* with LZ4), and while RAM is a bit tight (8 GB), I'm not aware of that being a particular issue for write performance.
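
    In case it's relevant, a quick way to sanity-check whether the ARC is eating into that 8 GB (on Linux, OpenZFS caps the ARC at roughly half of RAM by default, so about 4 GB here):

        # current ARC size and its cap, in bytes
        grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats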


    Is this just a known issue with OMV? Or possibly an inefficiency with my configuration or system in particular? I'd appreciate any suggestions or anecdotes that might help get to the bottom of this.



    Some misc info about my setup:

    The disks are passed through by ID to the VM and handled by OMV's ZFS plugin, as I prefer managing it that way.

    The VM is assigned 4 of the 8 CPU threads of my E3-1270 (v1).

    Oddly, ashift reports 0 in ZFS; I'm not entirely sure if that's normal for virtualized hardware (see the sketch after these notes for a way to double-check the real value).

    BTRFS (with LZO) performs just as badly.
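
    On the ashift point: as far as I can tell, zpool get ashift reporting 0 just means the property was left at auto-detect rather than an actual shift of 0; the value each vdev was created with can be read out of the pool config. The pool name below is a placeholder:

        zpool get ashift tank      # 0 here means "auto-detect", not a 2^0 sector size
        zdb -C tank | grep ashift  # the ashift each vdev was actually created with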

  • Is it compressing twice, once in OMV and again in Proxmox? Is it swapping memory to disk?

    Proxmox isn't using ZFS. I used the qm set command to pass the unpartitioned disks to OMV, then formatted them in the ZFS plugin's interface.
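
    For context, the pass-through was done roughly like this (VM ID, bus slot, and disk ID are placeholders, not my actual values):

        # attach a whole physical disk to VM 100 by its stable /dev/disk/by-id path
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL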


    Do you mean the cache setting by that last bit? If so, I had it set to directsync for a while. The uncompressed transfers I did seemed unhampered by it, despite it being one of the slower options.

    • Official post

    Is this just a known issue with OMV?

    Nothing to do with OMV; it is just a web interface. This is a filesystem/VM issue.

    Or possibly an inefficiency with my configuration or system in particular?

    What CPU are you using for the VM? Hopefully host-passthrough.

    What controller are you using for the passed-through drives? Hopefully virtio.

    I also like the disks to have cache=none and io=native, but I can't remember if you can set those on Proxmox.
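
    If the GUI doesn't expose them, something along these lines should work from the Proxmox shell. The VM ID and bus slot are examples, and note that Proxmox spells the io=native knob as aio=native:

        qm set 100 -cpu host    # host-passthrough CPU type
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL,cache=none,aio=native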


  • What CPU are you using for the VM? Hopefully host-passthrough.

    What controller are you using for the passed-through drives? Hopefully virtio.

    I also like the disks to have cache=none and io=native, but I can't remember if you can set those on Proxmox.

    Thanks! After some further testing and making those changes, it seems to be fixed (hopefully for good).


    I believe io=native and no cache are the defaults on Proxmox, so it was probably my own tampering that caused this, since I had mistakenly thought no cache was unsafe.


    For others' reference:
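
    A sketch of what the relevant disk line ends up looking like in /etc/pve/qemu-server/<vmid>.conf (bus slot and disk ID are placeholders; cache=none is the default, so it is simply omitted):

        scsi1: /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL,aio=native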
