Another SMB performance issue

  • Hi all,


    I have a problem with my SMB performance. Currently I only get around 30 MB/s.


    Info:

    Proxmox HV

    6 GB RAM

    2 cores (host passthrough) from my good old i5-7600K

    2 disks, both with ZFS in the background, IOThread enabled


    What I can see: I can reach my performance target over SSH (SCP) or NFS, but Samba is terribly slow. I am using the official OMV image inside my VM. I don't know what's happening here. I tried the parameters from the pinned threads, but no change, still terribly slow. I am using an Ubuntu ISO for testing purposes.

  • I am using an Ubuntu ISO for testing purposes.

    For what testing? So the OMV VM has one virtual disk for the OS and another single virtual disk as a data disk? If you are using ZFS on Proxmox, how is that configured? And in OMV, what filesystem are you using on the virtual data disk?

  • No, sorry, I am using the ISO to test the copy process... :D (so one big blob file, not small files)


    ZFS is not the issue here, because it's working fine over SSH; it IS related to SMB somehow.

  • Really? I've no idea how your ZFS storage is set up, or whether you are using HDD, SSD or NVMe. You don't say if you've used fio on the virtual disk attached to your OMV to get some kind of benchmark for direct writes inside OMV. Nor have you said between which two hosts the transfer of data is occurring (OMV6 and ??) and across what network. So I've nothing to add.
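    For reference, such a direct-write fio run inside the OMV VM could look something like this (the target path is a placeholder; point it at your data disk's actual mount point):

```shell
# Sequential 1M writes with O_DIRECT, bypassing the page cache,
# against the virtual data disk inside the OMV VM.
# /srv/dev-disk-by-label-data is a placeholder mount point.
fio --name=seqwrite \
    --filename=/srv/dev-disk-by-label-data/fio.test \
    --size=1G --bs=1M --rw=write --direct=1 \
    --ioengine=libaio --numjobs=1 --group_reporting

# Clean up the test file afterwards.
rm /srv/dev-disk-by-label-data/fio.test
```

    If the number here is well above 30 MB/s, the disk path is unlikely to be the bottleneck.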

  • Test over same network:



    Test through firewall



    Specs of Server:

    As written in Post one


    Spec of Client:

    Ryzen 5900X/32GB RAM

    All connected over CAT7 through the Unifi PoE switch (US-24-250W)


    And again:

    It's running as it should over SSH, so I don't get why you are trying to search in the network area.

  • dMopp The answer to your question should be obvious. People here are not mind readers; they are ready and willing to help, but you seem reluctant to provide the kind of details that may assist them, and yourself, to perhaps identify your problem. So let me ask another question: what does "it's running as it should over ssh" mean in your case? I can guess you mean you did one or more transfers over SCP between client and server at > 30 MB/s, but I might be wrong. Care to share the info I asked about before?


    Also, as a comparison, have you tried setting up a file server in an LXC container, maybe using a TurnKey image, or installing and configuring a simple SMB share in another Linux-based VM?

  • Repeat the test, playing with these options until you see improved performance. There is no magic, only repeated trial and testing.






    Samba — openmediavault 6.x.y documentation
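    For context, the options usually suggested in such tuning threads are global smb.conf settings along these lines (a sketch only; which of these actually helps is workload-dependent, and modern Samba versions already pick sensible defaults for most of them):

```ini
[global]
# Larger socket buffers; Samba 4.x usually negotiates these fine on its own.
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
# Allow asynchronous reads/writes for requests larger than this size (bytes).
aio read size = 16384
aio write size = 16384
# Use sendfile() where possible; can speed up large sequential reads.
use sendfile = yes
```

    In OMV these lines would go into the "Extra options" field rather than directly into smb.conf.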

  • dMopp The answer to your question should be obvious. People here are not mind readers; they are ready and willing to help, but you seem reluctant to provide the kind of details that may assist them, and yourself, to perhaps identify your problem. So let me ask another question: what does "it's running as it should over ssh" mean in your case? I can guess you mean you did one or more transfers over SCP between client and server at > 30 MB/s, but I might be wrong. Care to share the info I asked about before?


    Also, as a comparison, have you tried setting up a file server in an LXC container, maybe using a TurnKey image, or installing and configuring a simple SMB share in another Linux-based VM?

    To summarize it:


    Transferring the file from the NAS over SMB: around 30 MB/s

    Transferring the file with SCP: around 80 MB/s*.


    Regarding mind readers: I did write in the start post that NFS AND SCP are working fine.


    From the settings perspective: I tried out a lot of the settings there; it might help by 1-5 MB/s, but not much more. Interesting: doing the same with TrueNAS (which I don't like, because I don't want to use ZFS on ZFS), I don't have the speed issues over SMB.



    *Both through the firewall, because of different subnets.

  • dMopp The things I do for people. My Proxmox tinkerbox has stayed dormant for several months, but here's a tantalising bit of info. My kit is not so super-duper: a Sandy Bridge i5 desktop PC running Linux, and Proxmox on a Xeon E3-1220v3 with 16 GB; the test OMV, not updated for a while, has two virtual disks. Speed of transfer of a 1.5 GB ISO from desktop to OMV: approx. 69 MiB/s via the Dolphin file manager, over 100 MiB/s when using a CIFS kernel mount on the Linux PC. I can tell you I used the absolute bog-standard SMB/CIFS in OMV, no messing with any values. So why the difference? I'll let you guess what storage I'm using.

  • I wish I could tell you why this is the case. Anyway, I guess ZFS or plain ext4.


    What I can tell:


    ZFS + ZFS + TrueNAS (Debian) + SMB: a bit slower

    ZFS + ZFS + TrueNAS (BSD) + SMB: fastest

    ZFS + ext4 + OMV + SMB: slow

    ZFS + ext4 + OMV + NFS: fine

    LVM + ext4 + QNAP + SMB: fine


    It must be something like an edge case.


    It's not the storage, it's not the network, not SMB, not Debian, not Proxmox, not ZFS, …


    And a lot of people are complaining about that topic. Don't get me wrong, I am not finger-pointing at all; I am trying to find a direction where to look.

  • Well, it's LVM/ext4 on PVE on an S3610 SSD in this case. I forgot to look at iowait in my briefest of tests.


    I'm not surprised that TrueNAS CORE was the best-tuned system, but ZFS on ZFS is crazy. Unless you compare OMV set up with LVM + ext4 to that of QNAP, you can't say QNAP has a better default SMB setup; I somehow doubt that.


    It's a pain trying to track this stuff down. I don't know if you've already changed various global or individual share options, as a lot of the stuff floating around on the web is out of date or just inaccurate.


    Whether monitoring the I/O pattern & latency on the zvols in your ZFS pool will reveal anything, I don't know. E.g. watching

    Code
    zpool iostat -vly 1 1

    zvols themselves don't perform that well; there's a lot of chat about that on the Proxmox forums, and all the stuff about write amplification knackering your crummy consumer SSDs and the need to get the volblocksize correct for a given use case.
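    Checking the current volblocksize of a VM's zvol on the Proxmox host might look like this (the dataset name below is a placeholder; list your actual zvols first):

```shell
# List the zvols backing the VMs on this host.
zfs list -t volume

# Inspect the volblocksize of one of them.
# rpool/data/vm-100-disk-0 is a placeholder dataset name.
zfs get volblocksize rpool/data/vm-100-disk-0
```

    Note that volblocksize is fixed at creation time, so changing it means recreating (or migrating) the virtual disk.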


    The fact that NFS is OK but SMB is not on OMV is the opposite of what you might expect, as SMB wouldn't be generating sync writes on your ZFS pool.


    Rather than keep digging, I'd still be inclined to first configure a simple Samba file share in a Linux LXC container based on ZFS storage for a comparison, or go LVM + ext4 for OMV.

    • Official post

    Interesting that this has come up again within a matter of a few weeks: works for SSH and NFS but is slow on SMB. The issue in the other thread was related to the network card on the OMV machine, specifically the driver that was used in the kernel.

  • geaves I've not read through the other threads on this same issue. In this case it's Proxmox, so I'd guess it's a virtual network driver on the VM but an unknown driver on the host. But perhaps the OP can confirm what's in use. In my case it was the virtual driver with Intel.

    • Official post

    In this case it's Proxmox, so I'd guess it's a virtual network driver on the VM

    Ah, so a driver could be the issue in relation to the hardware. The other thread turned out to be a Qualcomm Atheros network card and the driver being used; replacing that with an Intel card solved the issue. Trying to tune SMB proved pointless, as, like here, it worked over SSH and NFS.


    The difficulty is trying to pinpoint where the problem is, especially when iperf, NFS and SSH all work as expected.
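    For anyone following along, a raw-throughput baseline between client and server can be taken with iperf3 (the hostname is a placeholder):

```shell
# On the OMV server (placeholder hostname: omv-nas):
iperf3 -s

# On the client: run a 10-second test towards the server,
# then reverse the direction with -R to test the other way.
iperf3 -c omv-nas -t 10
iperf3 -c omv-nas -t 10 -R
```

    If both directions saturate the link but SMB still sits at 30 MB/s, the raw network path is ruled out and the protocol layer becomes the suspect.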

  • I have to ask, as this has been proven in some testing that ryecoaaron and I were doing: is your client system a Mac or any other BSD-based system? BSD-based systems, when used as a client to OMV, or any Linux server that I have tested for that matter, seem to have very poor Samba performance.


    A Linux or Windows client system can often hit anywhere between 80 MB/s and 105 MB/s on a 1 Gbps network, while BSD and macOS (which is BSD-based) often max out at 30 MB/s to 50 MB/s, but usually less in my tests (you may even get the 80-105 MB/s on reads if you are lucky, but the writes are bad).

  • BernH Does anyone use FreeBSD as an SMB client? AFAIK support for SMB2 is patchy; does it even do SMB3? macOS users have their own problems. I'm pretty sure the OP is savvy enough to know what to expect from a 1 GbE network; the only BSD stuff in his list is servers. It would be nice to know if the OP pinpoints the problem and whether it was network-related all along and not SMB per se.

  • BernH Does anyone use FreeBSD as an SMB client? AFAIK support for SMB2 is patchy; does it even do SMB3? macOS users have their own problems. I'm pretty sure the OP is savvy enough to know what to expect from a 1 GbE network; the only BSD stuff in his list is servers. It would be nice to know if the OP pinpoints the problem and whether it was network-related all along and not SMB per se.

    I'm pretty sure BSD as a client, aside from the Mac, is an anomaly. But I threw it out there, as the issue I referred to is not just macOS: it was evident in the straight BSD test I did, so it seems to affect the Mac base too.

  • Sorry, was kinda busy.


    For the questions:

    - Intel NIC (I219-V)

    - Stock drivers

    - VirtIO card in the VM or Intel E1000 (tested both)

    - Win11 and macOS as clients


    But one thing I found out: when the traffic is not going through the firewall, I can reach 80 MB/s, which is interesting, because iperf tells me it could be faster. The firewall is running in Proxmox too, on a different node with a Realtek NIC. So pointing in that direction is a good idea. But all the issues are gone on my QNAP, behind the same firewall. Very, very weird.
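    A quick way to compare the two paths with the same tool is a timed upload via smbclient, which prints the achieved transfer rate itself (share name and user below are placeholders):

```shell
# Generate a ~1 GiB test blob, then time an upload over SMB.
# //omv-nas/share and the username are placeholders; smbclient
# will prompt for the password.
dd if=/dev/urandom of=/tmp/blob.bin bs=1M count=1024
time smbclient //omv-nas/share -U youruser \
    -c 'put /tmp/blob.bin blob.bin'
```

    Running this once against the server directly and once through the firewalled route should show whether the slowdown tracks the routing path rather than the SMB server.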
