Posts by Tupsi

    While I was upgrading from OMV 5 to 6 I had a USB stick plugged into my NAS. After removing it I still see boot entries under System/Kernel in the OMV GUI that I am unable to get rid of.



    Selecting one entry and hitting the delete button shows me the following window, but after closing that, the entry is still there.


    Is there any other way to get rid of these entries?

    Well, that's only for the Proxmox stuff, right? I just got that installed to get rid of my problem. I was talking about the "normal" kernel which got installed during one of the latest updates before I even had the kernel plugin (and Proxmox) installed. But maybe I am looking at this the wrong way. As OMV is meant to make everything easier for the normal home user, it is not meant for fiddling around with kernels in the first place, so there is no option to mess with the normal flow of kernel updates. I can respect that. Makes sense in a way. It is more of a "nice to have" thing anyway. I have set the Proxmox kernel as default now and can just patiently wait until the normal update process kicks the 5.18 kernel out of the system, because at some point in the future it will be "the old one" which gets automatically deleted anyway.


    All good!

    That is actually something that has changed. There is now a new "kernel" plugin to install.

    That explains nicely why I did not find it after the upgrade to v6. It is all done and working again. The only thing left for me is to find a way to get rid of that 5.18 kernel; it is still sitting in my GRUB list. Should I go to the command line for that and uninstall it in the normal Debian style with apt, or is there a GUI option to remove a certain kernel?
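
    The apt route I have in mind would be something like this (the exact package name below is just an example; I would check what is actually installed first):

    dpkg -l 'linux-image-*' | grep ^ii                  # list installed kernel images
    sudo apt purge linux-image-5.18.0-0.bpo.1-amd64     # version string is only an example
    sudo update-grub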

    I just did an omv-upgrade on the console and the update process tried to upgrade from 5.16 to 5.18. While building the new kernel there was an error compiling the zfs module, which resulted in no zfs being available when booting the new kernel. I could temporarily fix this by rebooting and selecting the old 5.16, but now I am wondering how I can get back into a sane state, either by removing that latest 5.18 update from my system or by retrying the build.
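
    For the "retry the build" option I assume it boils down to DKMS, so something along these lines should retry it (the package name and kernel version are my guesses/placeholders):

    dkms status                                     # shows which module/kernel combination failed to build
    sudo apt install --reinstall zfs-dkms           # assuming zfs-dkms is the package whose build failed
    sudo dkms autoinstall -k 5.18.0-0.bpo.1-amd64   # kernel version string is only an example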


    Any help appreciated.


    Below are some errors from the logs. It seems like problems in the build script, so I am really clueless how to fix this:


    Also, the strange thing here is that I do not get this on EVERY reboot, but only on some (for instance after the latest updates I applied this week with the kernel and docker stuff). The first time, as you can imagine, it was pretty scary, but by now I am kinda used to just importing the zfs pool and rebooting again to get it working. Still, it would be nice to get this fixed into a more solid state :).


    What I also noticed is that after that successful reboot the UI shows my pool, BUT it thinks I did something worth saving, because the yellow banner pops up immediately after I visit the zfs page and the pool is shown. On the next reboot everything seems fine, so it is very strange and not really reproducible (sadly, I know).

    Probably too late for you but may be useful for others. I just had to deal with that issue of the pool not staying after reboot and it seems the solution is to re-export and reimport the zfs pool.

    I have the same problem here. Did you do that on the console with -f? Mine refuses to export in the OMV GUI, and on the console I get an "unable to mount" error. I have already stopped my docker and samba and can't think of anything else hogging the drives.
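
    I guess the next step is to find out what is still holding on to the pool; roughly something like this (pool name and mount path are placeholders):

    sudo fuser -vm /srv/<pool-mountpoint>   # show processes still using the mount
    sudo zpool export -f <poolname>         # force the export
    sudo zpool import <poolname>            # re-import afterwards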

    Maybe related to this? docker iptables


    Check the end of the thread and see if you have the mentioned network adapter.

    Thank you very much for that, this is it. My network card identifies as e1000e instead of e1000, but it seems to be affected anyway. Doing what is written on that Hetzner page did the trick, so I will leave it here as a reference in case someone else comes by with that odd piece of hardware.


    ethtool -K <interface> tso off gso off


    (taken from https://docs.hetzner.com/robot…rformance-intel-i218-nic/)


    Everyone who tried to help, thanks for that! I have a feeling you all know from first-hand experience how frustrating it can be if you cannot find an answer to a question you once started with "can't be that hard, can it?". Turned out, again, that it was. So thanks again for the help guys, much appreciated!



    And for people (like me) wondering afterwards how to do that on every boot, here is a nice how-to. Just scroll to the end and use the systemd variant.


    https://linuxhint.com/use-etc-rc-local-boot/
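
    A minimal sketch of such a unit, assuming the systemd variant from that how-to (replace <interface> with the real NIC name; the unit name is my own choice):

    # /etc/systemd/system/nic-offload.service
    [Unit]
    Description=Disable TSO/GSO offloading on the onboard NIC
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ethtool -K <interface> tso off gso off

    [Install]
    WantedBy=multi-user.target

    Then enable it with systemctl daemon-reload && systemctl enable --now nic-offload.service.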

    This makes no difference. Both methods install the openmediavault package using apt. The install script does add optimizations to samba that the ISO doesn't. I find it hard to believe they would cause the issue but just remove them from the Samba settings extra option box to test.

    and you are right, it is NOT the different installation method. It is ...drumroll... DOCKER.


    I reinstalled from scratch and kept testing. All was good until I hit the Install button for docker. Afterwards it's down to that 0.5 Gbit/s.

    If I remove docker again AND reboot, the full speed comes back.

    So whatever docker is doing with my main NIC, it's bad. Now I "only" have to find what exactly is doing it and how to reverse it.

    You need to do iperf testing in both directions. To do this you must run it simultaneously on both machines as either client or server as appropriate.


    Then you need to run as client on one machine against the server running on the other. Then reverse the roles and test again. This will show you the effective link speed in both directions.
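
    In practice that is roughly (IPs are placeholders):

    iperf3 -s               # on machine A, acting as server
    iperf3 -c <ip-of-A>     # on machine B, measures B -> A
    # then swap the roles:
    iperf3 -s               # on machine B
    iperf3 -c <ip-of-B>     # on machine A, measures A -> B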

    and that would achieve anything different from the -R switch I used in what way exactly?


    I already know the effective link speed and more speed testing will not change the result without changing anything first.


    Actually I didn't expect Jeff's issue to be identical to yours, but he had to question basic assumptions to find the root cause.

    In enterprise environments I often found the root cause of strange issues to stem from outdated firmware or driver versions (80%) or from new issues in the latest versions (20%).

    Aye, it's all good. I appreciate that you are trying to help instead of just telling me "it works for me, so it can not be a problem" like others do. ;)


    My gut still tells me it is the installation and I have to find it there. As it is usually right I will keep digging there and leave you guys alone.

    Edit:

    I did find something interesting. I remembered I still had a USB stick in the internal socket that I had tested OMV with, and booted from there. Turns out that with that install I get the full speed in both directions, so there IS definitely something wrong with my normal install. Both installs run the same OMV in the end, but I installed them differently. My current main installation used the "install Debian first, then OMV on top of it" method; the USB install used the original OMV ISO.

    so you should know that crystal balls are in short supply, or in other words, few details have been provided to allow for troubleshooting

    My apologies. You are right that I should have started with a clearer picture for you guys. By not doing so, you had to guess my level of cluelessness based on what and how I wrote it. So I got what I deserved out of it. :)


    I'd recommend the post "Ethernet was slower only in one direction on one device"

    from Jeff Geerling, describing the opposite experience.

    That is a very interesting read. Although if you compare the writer's story with my problem, you will see that it differs at key points.


    • I changed the cable and nothing changed.
    • The speed problem is limited to only one computer, but occurs on every NIC used there.
    • Using a different set of computers (with no OMV), I reach the full speed every time in both directions in my network.
    • He talks about having speed issues "sometimes"; I have them "all the time". The download speed from OMV is capped at around 0.5 Gbit/s (see the edited iperf above).


    I just had a crazy idea and did the following. To explain the setup, you have to know that so far I used a 2nd NIC inside my NAS only for docker and my DMZ for docker containers. To achieve that I use a docker network in macvlan mode, so OMV so far had no direct access itself to that NIC (unconfigured enp1s0f0.102).


    • configured the 2nd NIC in OMV so it can be used directly by OMV and assigned it an IP address.
    • started iperf3 directly on an OMV/Debian command line and did a test, resulting in the same bad/capped performance.
    • created a new docker network with the DMZ NIC as parent in bridge mode (so the settings of the OMV-configured NIC have to be used).
    • started a docker container (latest debian image with just network tools and iperf installed) in that network, with the same bad results (capped at half a Gbit).
    • Then I changed the network and started the same docker container on the docker network I use for my DMZ (configured in macvlan mode) and got the FULL 1 Gbit/s both ways.

    These tests even went over my OPNsense firewall hardware, as the traffic had to cross networks from my Windows computer inside my LAN over to the DMZ network.


    So if the NIC configured by OMV/Debian is used, the result is a capped transfer speed in one direction. Using the docker container in a network configured in macvlan mode, I get the full speed of the NIC in both directions.


    In case you are not familiar with docker and/or macvlan mode: this does not use the host as a bridge but simulates the container being its own piece of hardware on the network with its own MAC address, completely ignoring anything configured on the LAN adapter inside the host. Before I did these tests I had it running with nothing configured at all in OMV, meaning the NIC showed as DISABLED in OMV's network GUI.
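
    For context, that macvlan network was created roughly like this (subnet, gateway and network name are placeholders; the parent is the DMZ VLAN interface mentioned above):

    docker network create -d macvlan \
      --subnet=192.168.102.0/24 --gateway=192.168.102.1 \
      -o parent=enp1s0f0.102 dmz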


    I hope I have explained a bit better why this seems to me to be a Debian/OMV config issue with the NIC/base system and not something outside of OMV.

    I have worked with computers for over 40 years now, so please do not ask me to do the basics. I have already done them before coming here and asking for help.

    But to make it crystal clear: I already checked cables, even tried a direct connection without a switch in between, and used a 2nd NIC in the NAS, all with the same results. :)


    I had the full 1 Gbit/s network speed (both directions) with my NAS (same NAS, same network, same PC) when I had TrueNAS running, which I did for over 4 years, and only now switched to OMV because I wanted docker support, which is a pita on FreeBSD. I am also pretty sure I had the full 1 Gbit/s during the first weeks when I tested OMV before I did the switch.

    My house has Cat7 wiring and everything else works fine at 1 Gbit/s. Besides, your idea about the cable would not explain why it is 1 Gbit/s UPLOAD but 0.5 Gbit/s DOWNLOAD (on the same cable).

    It is definitely NOT the environment but something coming from Debian. As you can see above, I adjusted some iperf settings and the numbers now say the same thing as this screenshot from my Task Manager.





    It seems to me that the system is unable to feed enough data into the sending queue when it comes to putting stuff onto the NIC (sending internally to other hardware is fine, and so is receiving).


    Edit: ran it again with a bigger buffer; now I am seeing the same results as described in my initial post.
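
    (In iperf3 terms that kind of buffer tweak would be the window size option, something like the following, with the host as a placeholder:)

    iperf3 -c <host> -R -w 4M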

    I am putting this under network because, as my tests turned out, it is NOT related to samba. So let me begin...


    I noticed that my file download speed from my OMV is capped at around half of what it should be (0.5 Gbit/s).


    So naturally I first blamed samba :P but it turned out that it also happened when I transferred stuff with sftp. So not samba.


    Then it was time to blame my raid, although having just upgraded it I highly doubted that, but I tried it anyway. Copying stuff around inside OMV from/to my raid (to an NVMe) always yielded way more than what the NIC is capable of, so another miss.
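
    (That test was basically just a local copy with timing, something along the lines of the following, with placeholder paths:)

    dd if=/srv/<raid-mount>/some-big-file of=/srv/<nvme-mount>/test.bin bs=1M status=progress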


    Then I blamed my windows client only to notice that another client had the same limitations.


    Then I blamed the NICs on both sides and checked on the switch whether they connected at full duplex and 1 Gbit/s (which they do).
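
    (The same check can be done on the OMV side with ethtool, e.g.:)

    ethtool <interface> | grep -E 'Speed|Duplex'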


    Then for whatever reason I did an UPLOAD instead of a DOWNLOAD to my OMV and to my surprise I got the full 1 Gbit/s of what my network can handle.


    So now I am very confused and hope someone here has an educated guess about what is going on and whether there is a way to fix this, so I can get my full 1 Gbit/s speed down from my NAS again.

    OK, while this worked, it is what I would call in German "unfassbar hässlich" (unbelievably ugly); my partition now resides at


    /srv/dev-disk-by-id-nvme-Samsung_SSD_980_250GB_S64BNF0R810589P-part2


    While I appreciate everything automatic, that's a bit too far for my taste. Is there a way I can amend that AND keep OMV in the loop? I noticed that OMV put its own entry in fstab for this, as you already said, but I would assume just editing that line would be too easy (and not work), as the rest of OMV needs to know what the name/mountpoint of that partition is.


    Or is this the fault of btrfs (you mentioned the naming above), and could I get rid of that by just reformatting with ext4? Using btrfs was just a spur-of-the-moment thing, and I am not sure I really want/need it there.

    It will be mounted via fstab. But the fstab entry will be generated by OMV and an entry will be made in OMV's database, so that OMV is aware of it.
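
    If I remember correctly, OMV keeps its generated mounts in a marked block inside /etc/fstab, which you can inspect like this:

    sed -n '/>>> \[openmediavault\]/,/<<< \[openmediavault\]/p' /etc/fstab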


    If there is an issue with docker, it can be fixed, as the installation of docker via omv-extras makes sure that docker starts only after known filesystems are mounted.
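
    Conceptually that kind of ordering is what a systemd drop-in can express; just as a sketch (not necessarily what omv-extras actually does, and the file path is hypothetical):

    # /etc/systemd/system/docker.service.d/wait-for-data.conf
    [Unit]
    RequiresMountsFor=/srv/dev-disk-by-id-nvme-Samsung_SSD_980_250GB_S64BNF0R810589P-part2

    followed by a systemctl daemon-reload.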

    nice, I will give that a spin then and report back.

    Although, on second thought, I do not think this will work, as I have installed my docker stuff there, and I already scrapped my idea of having that in my zpool because it led to very unpleasant things when I tried it.


    If I leave it to OMV to mount that partition, I would imagine that happens long after system services like docker have tried to start?