Posts by auanasgheps

    Piping stdout and stderr to a log file would be the better way.

That's what is happening already with the AIO Script. Thanks for explaining!


    By the way, if you want to track when the job has started, completed successfully or failed, I already integrated support for Healthchecks.io. Please read the GitHub page for more info.

Fresh install is done. Data drives mapped without issues and some basic settings are also applied. The process was actually quite simple, but I still need to do all the Docker stuff, and I saw that there is no more letsencrypt plugin, which is a shame. I guess I have to containerize that as well, right?

    Yes. Most of the complex/application plugins are containers now, and for a good reason.

    If you don't know anything about docker, here on the forum there are excellent beginner guides!


    Happy to know the fresh install went smoothly.

    For your own sanity, go for a fresh install.

Most plugins have been changed, deprecated or even merged.


    And yes, you WILL face issues after the upgrade(s). It's less time consuming to grab existing configurations, do a snapshot just in case and start from scratch.

Don't think of it as a solution but as a tool to isolate the problem. Also look at top or jnettop to see if resources are being overloaded.

I have Glances running when there's a copy going on. Nothing strange: during a 1 Gb/s SMB transfer one core is doing 25% and the others are pretty idle. If you check out my signature you'll agree I'm not lacking resources for a gigabit SMB.


I have done further testing using rsync (rsync -Pa /sourcefile /destinationfile) and a 3 GB file generated with dd (dd if=/dev/urandom of=3GB.bin bs=64M count=48 iflag=fullblock).
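For anyone who wants to reproduce the test, here's a minimal sketch of what I did; the source path and the three destination mount points are placeholders, not my actual layout:

```shell
#!/bin/sh
# Sketch of the benchmark above: generate a 3 GB random file on the
# NVMe SSD, then rsync it to each data HDD and back, watching rsync's
# live throughput (-P). All paths are examples.
SRC=/srv/nvme/3GB.bin

dd if=/dev/urandom of="$SRC" bs=64M count=48 iflag=fullblock

for dest in /srv/disk1 /srv/disk2 /srv/disk3; do
    rsync -Pa "$SRC" "$dest/3GB.bin"        # NVMe -> HDD
    rsync -Pa "$dest/3GB.bin" "$SRC.back"   # HDD -> NVMe
    rm -f "$dest/3GB.bin" "$SRC.back"       # clean up between runs
done
```

Using a freshly generated urandom file avoids any caching or compression effects skewing the numbers.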


    I have 3 different HDDs for data.

    I moved this file from a NVMe SSD to each of these HDDs back and forth: no performance impact at all. Always over 100 MB/s, both Glances and rsync show correct values.


However, when I target my MergerFS pool with these drives, I can SOMETIMES reproduce the performance issue, even though there's no stress on the server.

I will open an issue with MergerFS's creator, but here's my configuration:


    - 3 HDDs

    - All default options: defaults,allow_other,cache.files=off,use_ino

    - Policy: Most Free Space (yes my HDDs are balanced and evenly used)
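In fstab terms, a pool like that looks roughly like this; the branch paths and mount point are placeholders, and category.create=mfs is my way of spelling out the Most Free Space create policy:

```
# Example mergerfs fstab entry matching the options listed above.
# /srv/disk1..3 and /srv/pool are placeholders; mfs = most free space.
/srv/disk1:/srv/disk2:/srv/disk3 /srv/pool fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,category.create=mfs 0 0
```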

    In many cases a share to share transfer that is initiated remotely will actually transfer the data from the server to the remote client and then back to the server. Are you sure this is not happening?

    Yes I am.
    Task Manager shows 0% activity and the transfer speed is above 1 GB/s.
    As has been said before, the current combination of Windows + Samba supports remote transfers.

The router has Adaptive QoS (Quality of Service), advertised to "deliver lag-free online gaming and smooth 4K UHD streaming".

    If I remember correctly in another thread QoS was a reason for bad SMB performance.

    Are there other gaming or streaming devices connected?
    How many other WLAN routers are visible at your location?

    No, all QoS / traffic analysis features are disabled.


    When I'm doing these tests the network is not loaded by other devices.


I still believe the issue is not the network: I get these drops also when I transfer a file inside the NAS from one share to another, which is handled remotely.

    that is a great decision :)

From the other details provided, a simple software-related root cause can be ruled out.


In the past, network hardware was often the root cause in other cases, mainly cable connections and WLAN routers with single-core processors.

    Could you please elaborate on the network equipment and topology used?

    I get the same issue both in PC to NAS transfers and NAS to NAS transfers, so my network doesn't really matter.
Still, I am using an Asus RT-AC86U as my router; my NAS and PC are both connected with CAT6 cables at 1 Gb/s. The router has plenty of horsepower!


I still believe there's something going wrong with SMB itself, because this issue also happens if I copy from/to the NVMe SSD on the NAS.

    According to https://www.samba.org/samba/do…/man-html/smb.conf.5.html both options are enabled by default.

Yeah, I eventually figured this out too, but it seems that they actually improve performance. It doesn't make sense, so I'll do more testing soon.


Do you agree that if an option is not specified in the main configuration but is enabled by default according to the Samba documentation, it is actually enabled?


    In a test VM I get Yes, so it should be. I'll do actual testing soon.


    Code
    root@omv6:~# testparm --parameter-name="write raw"
    Load smb config files from /etc/samba/smb.conf
    Loaded services file OK.
    Weak crypto is allowed
    Press enter to see a dump of your service definitions
    
    
    Yes

    You may also like the following:


    Code
    mangled names = no
    # fixes weird file names that cannot be shown by Windows clients
    access based share enum = yes
    # only show shares that are available to user during enumeration
hide unreadable = yes
# hides files and folders the user has no permission to read

    UPDATE!

    Easy peasy fix for me.


I have to admit I always ignored Samba optimization threads, because they commonly target low-powered machines with speed problems. My server at first glance maxes out 1 Gb/s and I don't lack computing power.


    I also observed that many recommendations are deprecated, not relevant, do not follow best practices or simply specify... the default value.


However, these settings actually make the difference on systems like mine, even if it hits only 25% CPU utilization during a F A T file transfer.

    Code
    read raw = yes
    write raw = yes
    
    getwd cache = yes # IT IS ALREADY ENABLED BY DEFAULT, do not bother with this.

    I am now maxing out my HDD write speed and there are no dips or interruptions whatsoever!

    LMRig please try and let me know.



EDIT: According to the official documentation, getwd cache is already enabled by default. Now you understand why I don't trust random dudes on the web who tell you to enable this and that...


    EDIT2: YES it is enabled by default indeed.

    I removed the option, then ran testparm -v which shows all of your settings + defaults. The option is enabled.
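If you want to check this yourself without reading the whole dump, testparm can print a single effective value, or you can filter the full listing (this assumes testparm from the samba package is installed):

```shell
# Print the effective value of one parameter; testparm merges
# /etc/samba/smb.conf with Samba's built-in defaults.
testparm -s --parameter-name="getwd cache" 2>/dev/null

# Or dump everything, defaults included, and filter for what you care about:
testparm -sv 2>/dev/null | grep -E "raw|getwd cache"
```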


    So you only need

    Code
    read raw = yes
    write raw = yes

    EDIT3: Apparently read raw and write raw are also default settings, but they truly make a difference when explicitly enabled. I'm not sure why, can somebody explain?

Thanks, I set logging to minimum; it seems the logs are now under control.

I also deleted all the extra options and now SMB saturates my 1 Gb LAN speed consistently.


I will monitor the logs and post back if they're being spammed again in a few days.

    I'm glad logging is now ok.

    To be honest I just found out that SOME extra features are actually useful.


    Most guides you'll find on the web are outdated or specify redundant values.


    However, these values appear to be still useful.


    Code
    read raw = yes
    write raw = yes
    getwd cache = yes

    What kind of logging have you set on the Samba configuration page? These entries come from it.
Set it to "None" or "Minimum".
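In raw smb.conf terms, my assumption is that those UI choices map roughly to something like this:

```
# Samba log verbosity: 0 = only critical errors (roughly "None"),
# 1 = minimal (roughly "Minimum"). Higher values flood the log quickly.
log level = 1
max log size = 1000   # rotate log files at ~1 MB (value is in KB)
```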


Also note that socket options = TCP_NODELAY IPTOS_LOWDELAY is redundant, as it is already applied by default in OMV.

According to the Samba Wiki, the performance improvement feature "Server-Side Copy" has been available since Samba version 4.1.


Server-side copying has already been happening for a long time. My network activity is close to zero when I move a file within the NAS.
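A rough way to verify this on the server is to compare the NIC's byte counters before and after a share-to-share copy; if the copy truly stays server-side, the counters barely move. The interface name eth0 is an example:

```shell
#!/bin/sh
# Compare received bytes on the NIC before and after a share-to-share
# copy initiated from a client. Interface name is a placeholder.
IFACE=eth0
rx_before=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)

# ... start the Explorer copy now, then wait for it to finish ...
sleep 30

rx_after=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
echo "Received during copy: $(( (rx_after - rx_before) / 1024 / 1024 )) MiB"
```

For a multi-gigabyte file, a server-side copy should show only a few MiB of SMB control traffic here, not the file's size.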


    I'm not sure that "File Explorer on your PC" does make use of the feature.

Yes, it does:

    Clients making use of server-side copy support, such as Windows Server 2012 and Windows 8, can experience considerable performance improvements

    I'm using Windows 10 (21H2).

    Have you enabled SMBv1 protocol in OMV?

    For the love of god, please, no.

    I am actually enforcing: min protocol = SMB2_10


    What software versions are used on client?

Using Windows 10 21H2 means I'm getting the latest and greatest SMB versions. I also have a Windows 11 client.

    If I run Get-SmbConnection, my shares show up as SMB 3.1.1.
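You can cross-check the same thing from the server side with smbstatus (part of the samba package); recent Samba versions list the negotiated protocol per connection:

```shell
#!/bin/sh
# Cross-check from the server: smbstatus lists connected clients and,
# on recent Samba versions, the negotiated protocol (e.g. SMB3_11).
if command -v smbstatus >/dev/null 2>&1; then
    smbstatus
else
    echo "smbstatus not installed"
fi
```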



To make it extra clear: when I transfer a big file, speed is good and almost maxes out my hard drive's write speed. However, during the transfer, the speed can fall almost to 0, then it goes back up. It does not fluctuate as much as the OP's; it only happens 2-3 times during a large file transfer.


I'm sure it's related to Samba and how it manages writes. I am using MergerFS, but since we are talking about just one big file, it doesn't matter, as the file is being written to just one HDD.

    I am using stock OMV SMB settings, plus the following:

    Code
    min protocol = SMB2_10
mangled names = no
    access based share enum = yes
    hide unreadable = yes

Here's an example: average speed was pretty good, around 150-160 MB/s, but there's a clear dip that took some time to recover.