OMV in Proxmox, low write rate

  • I owned a QNAP 419P+ that was doing its dirty work with just 500 MB of RAM and a pair of 100 Mbps Ethernet interfaces, but it started to get old...


    I moved to an OMV system running as a virtual machine in a Proxmox environment, hosted on an HP MicroServer Gen8 with 16 GB of RAM.


    Well, ZFS eats half of the RAM by default, so the remaining 8 GB are split between OMV and another VM running a basic IP PBX.
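    (For reference, a quick way to see how much RAM the ARC is actually holding on the Proxmox host; a minimal sketch, assuming a standard PVE install with ZFS:)

    # current ARC size vs. target and ceiling, values in bytes
    grep -E '^(size|c|c_max) ' /proc/spl/kstat/zfs/arcstats
    # or, more readable, if arc_summary from zfsutils is available
    arc_summary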


    Yesterday I was transferring about 150 GB of data from my PC to OMV and saw a transfer rate of 2 to 20 MB/s, versus the 120 MB/s (120 × 8 = 960 Mbps) I get towards a Synology NAS on the same 1 Gbps LAN (the Proxmox/HP host is connected to the switch via a 10 Gbps SFP+ adapter).


    top shows little CPU utilization, while the Proxmox monitor shows that almost all of the VM's 4 GB of RAM is continuously used by OMV.

    The Proxmox ZFS storage sits on a pair (RAID1 mirror) of 4 TB WD Red (NAS) HDDs.


    Is the limited amount of RAM to blame?

    Or should I look for a bad configuration or a hardware misconception/setup?

    Or better, should I install OMV directly on bare metal and forget the Proxmox environment?


    Thank you

  • How is the OMV VM configured in Proxmox? Are you working with virtual data disks or passthrough data disks in the OMV VM? What filesystem have you used when formatting the data disks in the OMV VM?


    PS: Most of that stuff about options to speed up SMB is voodoo, out of date, or already part of the global smb config.

  • [screenshot: the OMV VM's configuration in Proxmox]


    No direct access to the disks; the filesystem is ZFS.


    I doubled the RAM for the VM, but it is still fully used:


    [screenshot: Proxmox memory-usage graph for the OMV VM]


    There's a strange pattern to the RAM usage over time...

    Suggestions?

  • Not much point in increasing the OMV VM's RAM, as you will just starve Proxmox itself of memory. Also, if the CPU in your HP Micro Gen8 has only 4 cores, just give the VM 2 or 3 and use the host CPU type. You might as well turn the guest agent on in the VM config. (See here if you want to limit the ZFS ARC memory: https://pve.proxmox.com/wiki/ZFS_on_Linux ; a sketch of that is below.)
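    Roughly what that wiki page describes, as a minimal sketch (the 4 GiB value is only an example, pick whatever fits your box):

    # /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (value in bytes)
    options zfs zfs_arc_max=4294967296

    # make it stick across reboots (needed when root is on ZFS)
    update-initramfs -u -k all

    # or change it on the fly without rebooting
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max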


    What is Proxmox installed on? Is it the HDD ZFS mirror? Really, I don't know what kind of performance you expected with this. It's not really RAM that's the problem: the IOPS of the zpool mirror are low, and zvols don't perform that well. Hopefully Proxmox picked the right ashift for you; anything other than ashift=12 will just make things worse. But the OMV VM looks to be using more memory than you might expect, so you're going to have to monitor the VM from within OMV.
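    Checking the ashift is quick on the host; a minimal sketch, assuming the usual default pool name "rpool" (substitute yours):

    # pool-level value; 12 means 4K sectors, 0 means it was auto-detected at creation
    zpool get ashift rpool

    # per-vdev view from the cached pool config
    zdb -C rpool | grep ashift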

  • Yes, I've already tweaked the ZFS ARC to free some memory.

    Proxmox ZFS is on a RAID1 mirror (managed by Proxmox itself, not hardware RAID).


    So the problem seems to be the low speed at which the system (ZFS) is able to write to the disks?

    Really, I don't know what performance to expect, but as said, when writing a single file (my tests used a single 8 GB .mkv file) the estimated speed stays at 118-155 MB/s for about thirty seconds, then collapses and oscillates between 500 KB/s and 10 MB/s: is that just a false estimate by Windows??

    Because this DOESN'T HAPPEN when transferring the same file to a Synology NAS: a flat 119 MB/s the whole time (the Synology and OMV use the same WD Red disks).

    The 8300 MB file takes 340 seconds to copy to OMV.

    It takes 72 seconds to the Synology DS224+!!!


    Clearly something's going wrong....


    Again, it's not a problem for me to install OMV alone on bare metal, assuming I can reach the same performance as the Synology...
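    (One way to tell whether it's the SMB/network path or the disk side is to write a big file locally, first inside the OMV VM and then on the Proxmox host, and compare. A rough sketch; the paths and pool name are only examples, and since zeros compress well the numbers are optimistic with compression=on, so treat it as a sanity check only.)

    # inside the OMV VM, on the data filesystem
    dd if=/dev/zero of=/srv/data/testfile bs=1M count=4096 conv=fdatasync status=progress
    rm /srv/data/testfile

    # on the Proxmox host, directly on the pool
    dd if=/dev/zero of=/rpool/testfile bs=1M count=4096 conv=fdatasync status=progress
    rm /rpool/testfile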

  • You're writing to an ext4-formatted zvol through the virtualisation layer of the OMV VM. It is going to be slow. Just monitor the writes within OMV. On the Proxmox host, something like "watch -n 1 zpool iostat -vly 1 1" will give you an idea of how the underlying zpool performs; you will also see whether you are generating a lot of sync writes on your zpool. The zvol blocksize Proxmox picks is also going to determine the degree of write amplification. Look at the zvol properties on the host.
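    To look at those zvol properties, something like the following on the host; the dataset name vm-100-disk-0 is only an example, list yours first:

    # list the zvols backing the VM disks
    zfs list -t volume

    # the properties that matter most for write behaviour
    zfs get volblocksize,sync,compression rpool/data/vm-100-disk-0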


    An alternative is to install Proxmox on a single hard drive; it will use LVM, which gives you snapshots. Of course the system and any VM will be much slower than with a dedicated SSD. Then just configure Proxmox storage on the second drive, not using ZFS, which can then hold virtual drives for any VM.


    Or you could go with OMV on bare metal and use KVM within OMV. Disk writes to ext4 on a single HDD will be what you'd expect. But really, for any Docker/VM work you know SSDs are a better choice.

  • ik3umt One side note: strongly rethink putting OMV on a bare machine. Do this only if you have straightforward networking, no VLANs, virtual NICs etc.; otherwise you'll need to learn about Salt states in OMV so it won't overwrite your netplan configs, as OMV doesn't provide any GUI for this and it will be a pain. Trust me, I have this problem now and am thinking about moving OMV off the bare machine.

  • ik3umt One side note: strongly rethink putting OMV on a bare machine. Do this only if you have straightforward networking, no VLANs, virtual NICs etc.; otherwise you'll need to learn about Salt states in OMV so it won't overwrite your netplan configs, as OMV doesn't provide any GUI for this and it will be a pain. Trust me, I have this problem now and am thinking about moving OMV off the bare machine.

    Very true. OMV using netplan means no VLAN-aware bridges and no easy use of OVS, etc. Not surprising really, as Proxmox is designed as a virtualisation host with SDN, while OMV is primarily a NAS.


    One option I didn't mention was, of course, not to use OMV at all. Create your own file server on Proxmox in an LXC container with bind mounts to the Proxmox storage. You could, for example, install and manually configure a Samba/NFS server in the OS image of your choice, use a TurnKey file server image with Webmin, or install and use Cockpit in an LXC with some packages from 45Drives. A rough sketch of the first option is below.
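    A minimal sketch of the bind-mount plus hand-configured Samba route; the container ID, paths, and user name are only examples, and an unprivileged container will also need its UID/GID mapping sorted out:

    # on the Proxmox host: bind-mount a host directory into container 101
    pct set 101 -mp0 /tank/media,mp=/srv/media

    # inside the container: install Samba and add a user for the share
    apt install samba
    adduser shareuser
    smbpasswd -a shareuser

    # then append a share definition to /etc/samba/smb.conf and restart:
    #   [media]
    #      path = /srv/media
    #      browseable = yes
    #      read only = no
    #      valid users = shareuser
    systemctl restart smbd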

  • Thanks for the suggestions.

    Yes, OMV is built on Debian, so I think all of the network stack (not managed by the GUI) has to be handled at the machine level. However, having a little skill in this, I don't need it, as the network is a very simple home one, or at worst the box will sit on an untagged, VLAN-unaware port.

    And of course, I can drop OMV in favour of a basic SMB setup on Proxmox (or in a container)...

    For now I'm rsync-ing the OMV data onto the old QNAP just so I can try the new OMV on bare metal; let's see how it goes...
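    (Roughly what that looks like; the paths and the QNAP host name are only examples, and you may need to drop -A/-X if the QNAP side doesn't handle ACLs/xattrs:)

    rsync -aHAX --info=progress2 /srv/dev-disk-by-label-data/ admin@qnap:/share/Backup/omv/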

  • Yes, OMV is built on Debian, so I think all of the network stack (not managed by the GUI) has to be handled at the machine level.

    Yes it can, but then you have to fiddle with OMV so it doesn't break your config if you accidentally touch it in the GUI, because OMV doesn't care and doesn't check, it just overwrites! You'd need to manually add extra settings to OMV to preserve your configs.

    And then it's a problem, because if you have custom network configs you can't use them elsewhere, so you're kind of stuck.

    Long story short: if your network is anything bigger than one simple LAN with a few simple NICs, stay away from OMV.

  • OK, I have OMV on bare metal now.

    Transfer rate at full throttle: 115 MB/s on gigabit Ethernet!

    And it's consuming 9% CPU and 500 MB of RAM out of the 16 GB available... :sleeping:

  • That's what you'd expect. Writing to a qcow2 virtual disk, with ext4 in use on both the host and the guest, typically gives 60-80 MB/s at best.

  • I think the ZFS was limiting. Shouldn't be using it without ECC memory anyway.

    Whether or not you need ECC memory is not unique to ZFS. It can be applied to any file system. The only difference is that the ZFS documentation placed more emphasis on this aspect than the others did.

  • Whether or not you need ECC memory is not unique to ZFS. It can be applied to any file system. The only difference is that the ZFS documentation placed more emphasis on this aspect than the others did.

    Agree with the first two sentences. 3rd one not so much.


    Quote from Joshua Paetzel, one of the FreeNAS developers:


    ZFS does something no other filesystem you’ll have available to you does: it checksums your data, and it checksums the metadata used by ZFS, and it checksums the checksums. If your data is corrupted in memory before it is written, ZFS will happily write (and checksum) the corrupted data. Additionally, ZFS has no pre-mount consistency checker or tool that can repair filesystem damage. [...] If a non-ECC memory module goes haywire, it can cause irreparable damage to your ZFS pool that can cause complete loss of the storage.

  • jollyrogr We can go round in circles about ZFS & ECC all night if you want to, but it's an old canard and a waste of time. The latest OpenZFS statement is here:


    FAQ — OpenZFS documentation




    Do I have to use ECC memory for ZFS?

    Using ECC memory for OpenZFS is strongly recommended for enterprise environments where the strongest data integrity guarantees are required. Without ECC memory rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur OpenZFS (or any other filesystem) will write the damaged data to disk and be unable to automatically detect the corruption.

    Unfortunately, ECC memory is not always supported by consumer grade hardware. And even when it is, ECC memory will be more expensive. For home users the additional safety brought by ECC memory might not justify the cost. It’s up to you to determine what level of protection your data requires.
