Posts by Tupsi

    Maybe related to this? docker iptables

    Check the end of the thread and see if you have the mentioned network adapter.
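If it helps, which adapter and driver a NIC uses can be checked from the command line (the interface name below is just an example; pick yours from `ip link`):

```shell
# Show the kernel driver, its version and the firmware of the NIC
# (replace enp1s0f0 with your interface name)
ethtool -i enp1s0f0
```

The "driver:" line of the output is what identifies the card as e1000, e1000e, igb, etc.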

    Thank you very much for that, this is it. My network card identifies as e1000e instead of e1000, but it seems affected anyway. Doing what is written on that Hetzner page did the trick, so I will leave it here as a reference in case anyone else comes by with that odd piece of hardware.

    ethtool -K <interface> tso off gso off

    (taken from…rformance-intel-i218-nic/)

    Everyone who tried to help, thanks for that! I have a feeling you all know from first-hand experience how frustrating it can be when you cannot find an answer to a question that started with "can't be that hard, can it?". Turned out, again, it was. So thanks again for the help, guys, much appreciated!

    and for people (like me) wondering afterwards how to do that on every boot, here is a nice how-to. Just scroll to the end and use the systemd variant.
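For reference, a minimal sketch of what such a systemd unit could look like (the unit name, interface name and ethtool path are my own assumptions, not taken from the linked how-to):

```ini
# /etc/systemd/system/nic-offload.service -- sketch only;
# adjust the interface name and ethtool path to your system
[Unit]
Description=Disable TSO/GSO offloading on the e1000e NIC
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K enp1s0f0 tso off gso off

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable nic-offload.service` and it runs on every boot.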

    This makes no difference. Both methods install the openmediavault package using apt. The install script does add optimizations to samba that the ISO doesn't. I find it hard to believe they would cause the issue but just remove them from the Samba settings extra option box to test.

    and you are right, it is NOT the different installation method. It is ...drumroll... DOCKER.

    I reinstalled from scratch and kept testing. All was good until I hit the Install button for docker. Afterwards it's down to that 0.5 Gbit/s.

    If I remove docker again AND reboot, the full speed comes back.

    So whatever docker is doing to my main NIC, it's bad. Now I "only" have to find out what exactly is doing it and how to reverse it.

    You need to do iperf testing in both directions. To do this you must run it simultaneously on both machines, as either client or server as appropriate.

    Then you need to run as client on one machine against the server running on the other. Then reverse the roles and test again. This will show you the effective link speed in both directions.
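For anyone following along, this is roughly what that looks like (the server IP is a placeholder); iperf3's `-R` switch covers the reverse direction without swapping roles:

```shell
# On the NAS:
iperf3 -s

# On the client, client -> NAS direction:
iperf3 -c 192.0.2.10

# NAS -> client direction: either swap the client/server roles,
# or use reverse mode from the same client:
iperf3 -c 192.0.2.10 -R
```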

    and that would achieve anything different from the -R switch I used in what way, exactly?

    I already know the effective link speed and more speed testing will not change the result without changing anything first.

    Actually I didn't expect Jeff's issue to be identical to yours, but he had to question basic assumptions to find the root cause.

    In enterprise environments I often found the root cause of strange issues to stem from outdated firmware or driver versions (80%) or from new issues in the latest versions (20%).

    aye, it's all good. I appreciate you trying to help instead of just telling me "it works for me, so it cannot be a problem" like others do. ;-)

    My gut still tells me it is the installation and I have to find it there. As it is usually right I will keep digging there and leave you guys alone.


    I did find something interesting. I remembered I still had a USB stick in the internal socket that I had tested OMV with, and booted from there. Turns out with that install I get the full speed in both directions, so it IS definitely something wrong with my normal install. Both installs run the same OMV in the end, but I installed them differently. My current main installation used the "install Debian first, then OMV on top of it" method; the USB install used the original OMV ISO.

    so you should know that crystal balls are in short supply; in other words, little details have been provided to allow for troubleshooting

    My apologies. You are right that I should have started with a clearer picture for you guys. In not doing so, you guessed my level of cluelessness based on what and how I wrote it, so I got what I deserved out of it. :-)

    I'd recommend the post "Ethernet was slower only in one direction on one device"

    from Jeff Geerling proving the opposite experience

    That is a very interesting read. Although if you compare the writer's story with my problem, you will see that it differs at key points.

    • I changed the cable and nothing changed.
    • The speed problem is limited to only one computer, but shows on every NIC you use there.
    • Using a different set of computers (with no OMV), I reach the full speed every time in both directions in my network.
    • He talks about having speed issues "sometimes"; I have it "all the time". The download speed from OMV is capped at around 0.5 Gbit/s (see the edited iperf above).

    I just had a crazy idea and did the following. To explain the settings, you have to know that so far I used a 2nd NIC inside my NAS exclusively for docker and my DMZ for docker containers. To achieve that I use a docker network in macvlan mode, so OMV itself so far had no direct access to that NIC (unconfigured enp1s0f0.102).

    • configured the 2nd NIC in OMV so it can be used directly by OMV and assigned an IP address.
    • started iperf3 directly on an OMV/Debian command line and did a test, resulting in the same bad/capped performance
    • created a new docker network with the DMZ NIC as parent in bridge mode (so the settings of the OMV-configured NIC have to be used)
    • started a docker container (latest Debian image with just network tools and iperf installed) in that network, with the same bad results (capped at half a gigabit)
    • then changed the network and started the same docker container on the docker network I use for my DMZ (configured in macvlan mode) and got the FULL 1 Gbit/s both ways.
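Roughly what the two docker networks in that comparison look like, as a sketch (the network names and subnet are made up; the parent interface enp1s0f0.102 is the one mentioned above):

```shell
# Bridge-style network, which goes through the host's IP stack:
docker network create -d bridge dmz-bridge

# macvlan network attached directly to the VLAN interface,
# bypassing whatever is configured on that NIC inside the host:
docker network create -d macvlan \
  --subnet 192.168.102.0/24 --gateway 192.168.102.1 \
  -o parent=enp1s0f0.102 dmz-macvlan

# Same test container on either network, running iperf3 as server:
docker run --rm --network dmz-macvlan debian:latest \
  bash -c "apt-get update && apt-get install -y iperf3 && iperf3 -s"
```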

    These tests even went over my opnsense firewall hardware, as the traffic had to cross networks from my Windows computer inside my LAN over to the DMZ network.

    So if the NIC configured by OMV/Debian is used, the result is a capped transfer speed in one direction. Using the docker container in a network configured in macvlan mode, I get the full speed of the NIC in both directions.

    In case you are not familiar with docker and/or macvlan mode: this does not use the host as a bridge but simulates the container being its own piece of hardware on the network with its own MAC address, completely ignoring anything that is configured on the LAN adapter inside the host. Before I did these tests I had it running with nothing configured at all in OMV, meaning the NIC showed as DISABLED in the network GUI of OMV.

    I hope I have explained a bit better why it seems to me to be a Debian/OMV config issue with the NIC/base system and not something outside of OMV.

    I have worked with computers for over 40 years now, so please do not ask me to do the basics. I already did those before coming here and asking for help.

    But to make it crystal clear: I already checked cables, even tried a direct connection without a switch in between, and used a 2nd NIC in the NAS, all with the same results. :-)

    I had the full 1 Gbit/s network speed (both directions) with my NAS (same NAS, same network, same PC) while I had TrueNAS running over the past 4 years, and only now switched to OMV because I wanted docker support, which is a PITA on FreeBSD. I am also pretty sure I had the full 1 Gbit/s when I tested OMV in the first weeks before I did the switch.

    My house has Cat7 wiring and everything else works fine at 1 Gbit/s. Besides, your idea about the cable would not explain why it is 1 Gbit/s UPLOAD but 0.5 Gbit/s DOWNLOAD (on the same cable).

    It is definitely NOT the environment but something coming from Debian. As you can see above, I adjusted some iperf settings and the numbers now say the same thing as this screenshot from my task manager.

    It seems to me that the system is unable to fetch enough data into a sending queue when it comes to putting stuff onto the NIC (sending internally to other hardware is fine, and so is receiving).

    Edit: ran again with more buffer, now seeing the same results as described in my initial post.

    I am putting this under network because it is NOT related to samba, as my tests turned out. So let me begin...

    I noticed that file downloads from my OMV are capped at around half of what they should be (0.5 Gbit/s).

    So naturally I first blamed samba :P but it turned out that it also happened when I transferred stuff with sftp. So, not samba.

    Then it was time to blame my RAID; having just upgraded it I highly doubted that, but I tried anyway. Copying stuff around inside OMV from/to my RAID (to an NVMe) always yielded way more than what the NIC is capable of, so another miss.

    Then I blamed my windows client only to notice that another client had the same limitations.

    Then I blamed the NICs on both sides and checked on the switch whether they connected there at full duplex and 1 Gbit/s (which they do).
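The negotiated link settings can also be double-checked on the NAS itself (interface name is an example):

```shell
# Verify negotiated speed and duplex on the server side
ethtool enp1s0f0 | grep -E "Speed|Duplex"
```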

    Then, for whatever reason, I did an UPLOAD instead of a DOWNLOAD to my OMV, and to my surprise I got the full 1 Gbit/s of what my network can handle.

    So now I am very confused and hope anyone here has an educated guess on what is going on here and if there is a way to fix this, so I can get my full 1 Gbit/s speed down from my NAS again.

    ok, while this worked, it is what I would call in German "unfassbar hässlich" (unbelievably ugly); my partition now resides in


    while I appreciate every automated thing, that's a bit too far for my taste. Is there a way I can amend that AND keep OMV in the loop? I noticed that OMV put its own entry in fstab for this, as you already said, but I would assume just editing that line would be too easy (and not working), as the rest of OMV needs to know the name/mountpoint of that partition.

    Or is this the fault of btrfs (you mentioned a naming thing above), and could I get rid of that by just reformatting with ext4? Using btrfs was just a spur-of-the-moment thing; I'm not sure I really want/need it there.

    It will be mounted via fstab. But the fstab entry will be generated by OMV, and an entry will be made in OMV's database, so that OMV is aware of it.

    If there is an issue with docker, it can be fixed, as the installation of docker via omv-extras makes sure that docker starts only after known filesystems are mounted.
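That kind of ordering can also be expressed directly in systemd. A sketch of a drop-in for the docker unit, with a placeholder mount point (omv-extras may do something equivalent under the hood):

```ini
# /etc/systemd/system/docker.service.d/wait-for-data.conf -- sketch;
# the path below is a placeholder for the OMV-managed data mount
[Unit]
RequiresMountsFor=/srv/dev-disk-by-label-data
```

Run `systemctl daemon-reload` afterwards so systemd picks up the drop-in.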

    nice, I will give that a spin then and report back.

    although, on second thought, I do not think this will work, as I have installed my docker stuff there, and I already abandoned my idea of having that on my zpool because it led to very unpleasant things when I tried it.

    If I leave it to OMV to mount that partition, I would imagine that being done long after system services like docker have tried to start?

    I have partitioned the NVMe I boot from into two partitions: one from which I boot OMV, and a 2nd I use for data I don't care about losing when the disk dies.

    Now I want to share a folder from that 2nd partition and noticed that I cannot do that, because under "Shared Folders" in OMV I have no option to select that partition. The only options in the dropdown are from the zpool I have on three other disks.

    So I was wondering if this is not allowed in OMV in general, or if it might have something to do with the fact that I already partitioned/formatted that disk during the installation of my Debian system, as I noticed that under Storage/File Systems the Label entry of the partition I want is empty, and my gut tells me that dropdown in Shared Folders is looking for exactly that. ;-)

    So if my gut is right, the real question would be whether it's possible to add/fix that label thing and put a label on the one partition I want a shared folder from.
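Adding a label after the fact is possible with standard tools; a sketch (the device path and label name are examples, pick the command matching the filesystem on that partition):

```shell
# ext4: set the label with e2label
e2label /dev/nvme0n1p2 data

# btrfs: set the label with btrfs-progs
# (works on the unmounted device, or on the mount point if mounted)
btrfs filesystem label /dev/nvme0n1p2 data
```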