Posts by auanasgheps

    In many cases a share to share transfer that is initiated remotely will actually transfer the data from the server to the remote client and then back to the server. Are you sure this is not happening?

    Yes, I am.
    Task Manager shows 0% network activity and the transfer speed is above 1 GB/s.
    As has been said before, the current combination of Windows + Samba supports server-side transfers.
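    For anyone who wants to verify this on their own box: watch the NIC byte counters on the NAS while the copy runs. If they barely move, the data never left the server. A minimal sketch, assuming the interface is named eth0:

    Code
    # Refresh the NAS NIC byte counters every second during the copy;
    # near-zero movement means the transfer is handled server-side.
    watch -n1 'cat /sys/class/net/eth0/statistics/rx_bytes /sys/class/net/eth0/statistics/tx_bytes'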

    has Adaptive QoS (Quality of Service), advertised to "deliver lag-free online gaming and smooth 4K UHD streaming".

    If I remember correctly, QoS was a cause of bad SMB performance in another thread.

    Are there other gaming or streaming devices connected?
    How many other WLAN routers are visible at your location?

    No, all QoS / traffic analysis features are disabled.


    When I'm doing these tests the network is not loaded by other devices.


    I still believe the issue is not the network: I also get these drops when I transfer a file inside the NAS from one share to another, which is handled server-side.

    That is a great decision :)

    From the other details provided, a simple software-related root cause can be ruled out.


    In the past, network hardware was often the root cause in similar cases,
    mainly cable connections and WLAN routers with single-core processors.

    Could you please elaborate on the network equipment and topology used?

    I get the same issue in both PC-to-NAS and NAS-to-NAS transfers, so my network doesn't really matter.
    Still, I am using an Asus RT-AC86U as router; my NAS and PC are both connected with CAT6 cables at 1 Gb/s. The router has plenty of horsepower!
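    If you want to rule the network out entirely, a raw TCP benchmark with iperf3 is a quick sanity check. A sketch, assuming iperf3 is installed on both machines (the IP is just an example):

    Code
    # On the NAS: start an iperf3 server
    iperf3 -s

    # On the PC: run a throughput test against the NAS (replace with your NAS IP)
    iperf3 -c 192.168.1.100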


    I still believe there's something going wrong with SMB itself, because this issue also happens if I copy from/to the NVMe SSD on the NAS.

    According to https://www.samba.org/samba/do…/man-html/smb.conf.5.html both options are enabled by default.

    Yeah, I eventually figured this out too, but it seems that they actually do improve performance when set explicitly. It doesn't make sense, so I'll do more testing soon.


    Do you agree that if an option is not specified in the main configuration but is enabled by default according to the Samba documentation, it is actually enabled?


    In a test VM I get "Yes", so it should be. I'll do actual testing soon.


    Code
    root@omv6:~# testparm --parameter-name="write raw"
    Load smb config files from /etc/samba/smb.conf
    Loaded services file OK.
    Weak crypto is allowed
    Press enter to see a dump of your service definitions
    
    
    Yes
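    By the way, the prompt can be skipped with -s; a non-interactive variant, assuming the default config path:

    Code
    testparm -s --parameter-name="write raw"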

    You may also like the following:


    Code
    mangled names = no
    # do not mangle long file names into weird 8.3 names for Windows clients
    access based share enum = yes
    # only list shares that are available to the user during enumeration
    hide unreadable = yes
    # hide files and directories the user cannot read

    UPDATE!

    Easy peasy fix for me.


    I have to admit I always ignored Samba optimization threads, because they commonly target low-powered machines with speed problems. At first glance my server maxes out its 1 Gb/s link and I don't lack computing power.


    I also observed that many recommendations are deprecated, not relevant, do not follow best practices, or simply specify... the default value.


    However, these settings actually make a difference on systems like mine, even if it hits only 25% CPU utilization during a F A T file transfer.

    Code
    read raw = yes
    write raw = yes
    
    # getwd cache IS ALREADY ENABLED BY DEFAULT, do not bother with this.
    getwd cache = yes

    I am now maxing out my HDD write speed and there are no dips or interruptions whatsoever!

    LMRig, please try it and let me know.



    EDIT: According to the official documentation, getwd cache is already enabled by default. Now you understand why I don't trust random dudes on the web who tell you to enable this and that...


    EDIT2: YES, it is enabled by default indeed.

    I removed the option, then ran testparm -v, which shows all of your settings plus the defaults. The option is enabled.
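    If you want to reproduce the check, something like this should do (a sketch; -s suppresses the interactive prompt, -v includes the defaults):

    Code
    testparm -sv 2>/dev/null | grep "getwd cache"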


    So you only need:

    Code
    read raw = yes
    write raw = yes

    EDIT3: Apparently read raw and write raw are also default settings, but they truly make a difference when explicitly enabled. I'm not sure why; can somebody explain?
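    If somebody wants to reproduce this, a crude A/B test is to time a large write to the share with and without the options in place. A sketch; the mount path and file size are just examples:

    Code
    # From a client with the share mounted, write a 4 GiB test file and watch for dips
    dd if=/dev/zero of=/mnt/share/testfile bs=1M count=4096 status=progress
    rm /mnt/share/testfile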

    Thanks, I set logging to minimum and it seems the logs are now under control.

    I also deleted all the extra options and now SMB saturates my 1 Gb LAN speed consistently.


    I will monitor the logs and post back if they are being spammed again in a few days.

    I'm glad logging is now ok.

    To be honest, I just found out that SOME extra options are actually useful.


    Most guides you'll find on the web are outdated or specify redundant values.


    However, these values appear to still be useful.


    Code
    read raw = yes
    write raw = yes
    getwd cache = yes

    What kind of logging have you set on the Samba configuration page? These entries come from it.
    Set it to "None" or "Minimum".


    Also note that socket options = TCP_NODELAY IPTOS_LOWDELAY is redundant, as it is already applied by default by OMV.

    According to the Samba wiki, the performance improvement feature "Server-Side_Copy" has been available since Samba version 4.1.


    Server-side copying has already been happening for a long time. My network activity is close to zero when I move a file within the NAS.


    I'm not sure that "File Explorer on your PC" makes use of the feature.

    Yes, it does:

    Clients making use of server-side copy support, such as Windows Server 2012 and Windows 8, can experience considerable performance improvements

    I'm using Windows 10 (21H2).

    Have you enabled SMBv1 protocol in OMV?

    For the love of god, please, no.

    I am actually enforcing: min protocol = SMB2_10


    What software versions are used on the client?

    Using Windows 10 21H2 means I'm getting the latest and greatest SMB versions. I also have a Windows 11 client.

    If I run Get-SmbConnection, my shares show up as SMB 3.1.1.
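    The negotiated protocol can also be checked from the NAS side. A sketch; recent Samba versions (4.8+) print a protocol version column per connection:

    Code
    # On the NAS, list active SMB sessions including the negotiated protocol version
    smbstatus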



    To make it extra clear: when I transfer a big file, speed is good and almost maxes out my hard drive's write speed. However, during the transfer, speed can fall almost to 0, then it goes back up. It does not fluctuate as much as OP's; it only happens 2-3 times during a large file transfer.


    I'm sure it's related to Samba and how it manages writes. I am using MergerFS, but since we are talking about just one big file, it does not matter, as the file is being written to just one HDD.

    I am using stock OMV SMB settings, plus the following:

    Code
    min protocol = SMB2_10
    mangled names = no
    access based share enum = yes
    hide unreadable = yes

    Here's an example: average speed was pretty good, around 150-160 MB/s, but there's a clear dip that took some time to recover.

    And hostname --all-fqdns is going to try to get an FQDN/hostname for each virtual network adapter. Since Docker creates one for every container, that can cause hostname to take a long time.

    It should not matter. I have 30+ containers and the command takes at most one second to execute.
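    For comparison, something like this shows how many Docker veth interfaces exist and how long the lookup actually takes (a sketch):

    Code
    # Count virtual interfaces created by Docker, then time the FQDN lookup
    ip -o link show | grep -c veth
    time hostname --all-fqdns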

    Thanks for stepping in and giving the steps towards the final resolution. I could not do much more, but I'll save these commands in my wiki for future reference :)

    So I tried to change the domain to 'casa',

    Italian spotted here. Ciao!

    but the resolv.conf file still has the "search ." entry.

    I agree it's odd. But your router should send the search domain (mine does).


    I'm happy you resolved the problem, but manually... I never had to edit the /etc/systemd/resolved.conf file, so clearly this is still a workaround. /etc/resolv.conf should pick up the internal DNS automatically.


    I believe there's still something dirty left over or forgotten from the upgrade, but only votdev can tell.


    Try running omv-firstaid and select the option to reconfigure the network(s). Maybe this tool can clear out all existing network configs! I used it in the past to fix mistakes.

    Are these IP addresses correct?

    Code
    192.168.3.2             omv.canonica omv
    192.168.3.5             omv.canonica omv

    You could also try to disconnect one of the two NICs, disable/remove the NIC configuration, and see if it makes a difference. Right after disabling the NIC the process will still be slow; try again afterwards.


    EDIT:

    Also run salt-call grains.get fqdn. It should take a long time in your case (mine takes a split second).


    Also run hostname --all-fqdns and paste the result!



    Additionally, the entry 127.0.1.1 omv.canonica omv should be in the hosts file. I checked the VM I use for testing and it is there as well; I don't know why yours is missing.
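    For reference, the relevant lines in /etc/hosts would look something like this (a sketch; the localhost line is the Debian default):

    Code
    127.0.0.1       localhost
    127.0.1.1       omv.canonica omv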

    Please try again; I previously mistyped the entry as NAS (which is my own hostname).