Applying changes in the GUI takes a very long time.

  • Hi,

    After the disaster of upgrading my OMV 5.x installation, I did a fresh install of OMV 6 and I think I managed to recover most of my services.

    The only thing that bothers me is that every time I apply configuration changes in the web UI, it takes a very long time (more than 10 minutes) to complete.

    My server is a Xeon 1245v3 with 32 GB of RAM and SSD storage, so it is not slow.

    During the update I ran some commands from the CLI and saw that the CPU load is very low. If I run

    Code
    ps aux |grep salt

    I see that every now and then the various modules are being processed, for example:

    Code
    python3 /sbin/omv-salt deploy run --no-color postfix
    python3 /sbin/omv-salt deploy run --no-color nut

    But every module takes a significant amount of time.
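
    To see where the time goes, a single module can also be run by hand with time in front (a quick sketch, reusing the exact command from the ps output):

    Code
    # time one deploy module by hand
    time python3 /sbin/omv-salt deploy run --no-color postfix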

    I searched the forums and I can exclude disk or CPU problems. Another thread mentions problems with the resolv.conf file, and I noticed that mine was a regular file and not a symlink to /run/systemd/resolve/resolv.conf.

    Not sure why. Anyway, I made it a symlink to /run/systemd/resolve/resolv.conf, but applying changes is still slow.
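
    For the record, recreating the symlink was just this (a sketch; it assumes systemd-resolved is running):

    Code
    # keep a backup of the old file, then point resolv.conf at systemd-resolved
    sudo mv /etc/resolv.conf /etc/resolv.conf.bak
    sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf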


    Any suggestions?

    • Official Post

    This will give you debug output.

    sudo salt-call -l debug --local --retcode-passthrough state.apply omv.deploy.postfix

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks.

    First thing I see:


    /run/systemd/resolve/resolv.conf: The domain and search keywords are mutually exclusive.


    I've run the command with 'time' in front of it and I see:

    Code
    real 1m9.618s
    user 0m3.475s
    sys 0m1.319s


    I've removed the domain line from resolv.conf and the result is:

    Code
    Summary for local
    -------------
    Succeeded: 21 (changed=5)
    Failed:    0
    -------------
    Total states run:    21
    Total run time:  5.573 s

    real 1m7.589s
    user 0m3.635s
    sys 0m1.198s


    I've tried to run:

    Code
    time sudo salt-call -l debug --local --retcode-passthrough state.apply omv.deploy.nut

    real 1m1.842s
    user 0m1.315s
    sys 0m0.413s


    The strange thing is the difference between real time and user/sys time... the process barely uses any CPU, so most of that wall-clock time must be spent waiting on something (like a network timeout).

  • I think I found another strange thing:

    In the debug output I see this:


    Elapsed time getting FQDNs: 168.1788203716278 seconds


    Which points me to some DNS issue (it's *always* DNS :P ).
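
    The delay can be reproduced outside the GUI (a quick sketch):

    Code
    # time the FQDN lookup that Salt performs
    time salt-call --local grains.get fqdn
    time hostname --all-fqdns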

    I have two NICs, both configured with static IPs. One of the two is a bridge (for qemu).


    These are the screenshots of my config (general, interfaces and DNS settings).




    /etc/resolv.conf is a symlink into /run/systemd/resolve/:


    Code
    gianpaolo@omv:/etc/resolvconf$ ls -l /etc/|grep resolv.conf
    lrwxrwxrwx  1 root     root      32 Sep 13 15:43 resolv.conf -> /run/systemd/resolve/resolv.conf


    And its contents are:


    Code
    nameserver 192.168.3.1
    nameserver 192.168.3.1
    nameserver 192.168.3.1
    search canonica


    192.168.3.1 is my pfSense router, which acts as the DNS server for my network.


    Now, I suspect something is wrong with my DNS config and that this is what delays the omv-salt process.

    Does anyone have a hint on how to solve this? Maybe ryecoaaron?
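
    In case it helps, these are the kinds of checks I can run (a sketch):

    Code
    # show which DNS servers systemd-resolved is actually using
    resolvectl status
    # time a lookup of the host's own name
    time getent hosts omv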


    Thanks a lot and have a nice day.

  • Hey, I think I found a solution (though I'm not sure whether it's just a workaround and I'm losing some important bits).

    I found on the forum that adding enable_fqdns_grains: False to /etc/salt/minion.d/openmediavault-test.conf could help with the slowness of the update process, and it did!
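
    The added bit is just this one line:

    Code
    # /etc/salt/minion.d/openmediavault-test.conf
    enable_fqdns_grains: False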


    I ran time sudo salt-call -l debug --local --retcode-passthrough state.apply omv.deploy.nut again and the results are good.


    Code
    Total states run:     13
    Total run time:  158.371 ms
    
    real    0m1.711s
    user    0m1.272s
    sys    0m0.315s

    Now, is it OK to leave that option there or not?

    • Official Post

    The problem is that some files are using grains['fqdn'] to get the FQDN of the host. This will, e.g., make postfix not work correctly.

    To me it looks more like a network problem that should be fixed.

  • The problem is that some files are using grains['fqdn'] to get the FQDN of the host. This will, e.g., make postfix not work correctly.

    To me it looks more like a network problem that should be fixed.

    I agree. I don't know what to do to fix it though.

    Reinstall/reconfigure some package? I have a daemon (systemd-resolved?) listening on port 53:

    Code
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN 


    but no reference to it in resolv.conf
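
    I can try to identify it like this (a sketch):

    Code
    # show which process is listening on port 53
    sudo ss -tulpn | grep ':53'
    systemctl status systemd-resolved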

  • FWIW, when installing OMV for a friend, I had a similar issue where applying every config change would take 2 minutes.


    I debugged the issue with the very same command and found an error regarding DNS that explicitly mentioned IPv6. It turned out my friend's crappy ISP router was not handling IPv6 correctly and Salt went all over the place. We fixed it by adding an entry to the hosts file (not manually; it needs a specific Salt config).

    Can you post the exact DNS error from the Salt procedure and the contents of /etc/hosts?

    OMV BUILD - MY NAS KILLER - OMV 6.x + omvextrasorg (updated automatically every week)

    NAS Specs: Core i3-8300 - ASRock H370M-ITX/ac - 16GB RAM - Sandisk Ultra Flair 32GB (OMV), 256GB NVME SSD (Docker Apps), Several HDDs (Data) w/ SnapRAID - Fractal Design Node 304 - Be quiet! Pure Power 11 350W


    My all-in-one SnapRAID script!

  • Can you post the exact DNS error from the Salt procedure and the contents of /etc/hosts?

    I don't have a specific DNS error during the salt procedure, but if I disable enable_fqdns_grains in the salt configuration I can update in a heartbeat. My /etc/hosts file looks like this:


  • I don't have a specific DNS error during the salt procedure,

    So the only thing you see is "Elapsed time getting FQDNs: 168.1788203716278 seconds"?

    Doesn't it say which FQDN?


    My hosts file also has the following:

    127.0.1.1 nas.[REDACTED] nas


    Try to manually add an entry like this:

    127.0.1.1 nas.canonica omv


    Revert enable_fqdns_grains and check if it improves.


  • Are these IP addresses correct?

    Code
    192.168.3.2             omv.canonica omv
    192.168.3.5             omv.canonica omv

    You could also try to disconnect one of the two NICs, disable/remove the NIC configuration, and see if it makes a difference. The apply process will still be slow while you disable the NIC; try again afterwards.


    EDIT:

    Also run salt-call grains.get fqdn. It will probably take a long time in your case (mine takes a split second).


    Also run hostname --all-fqdns and paste the result!



    Additionally, the entry 127.0.1.1 omv.canonica omv should be in the hosts file. I checked the VM I use for testing and it's there as well; I don't know why yours is missing.

    Please try again; I previously mistyped the entry as nas (which is mine).


  • Are these IP addresses correct?

    Yes.


    Also run hostname --all-fqdns and paste the result!

    It takes ages!


    Code
    # hostname --all-fqdns
    omv.canonica omv omv omv omv omv omv omv omv omv omv omv omv omv.canonica 


    Additionally, the entry 127.0.1.1 omv.canonica omv should be in the hosts file. I checked the VM I use for testing and it's there as well; I don't know why yours is missing.

    Please try again; I previously mistyped the entry as nas (which is mine).

    I've put in the correct one (I spotted the typo).

  • striscio Alright, the FQDNs look correct.

    I believe there's something dirty left over from the upgrade.


    I recommend disabling manual config on every interface, removing all settings, and simply using DHCP. Set an IP reservation in your router.


    If it doesn't work, remove one NIC and try again.


  • OK auanasgheps, I did what you suggested. I already had an IP reservation on the router.

    The interfaces are correctly configured by DHCP.

    My resolv.conf file looks like this now:

    Code
    # See man:systemd-resolved.service(8) for details about the supported modes of
    # operation for /etc/resolv.conf.
    
    nameserver 192.168.3.1
    nameserver 192.168.3.1
    search .

    This seems odd to me, since I've configured my domain name in the GUI under Network -> General.

    So I tried to change the domain to 'casa' and applied the changes, but the resolv.conf file still has the search . line.

    hostname --all-fqdns still takes ages to complete and still returns that strange line:

    omv.casa omv.canonica omv omv omv omv omv omv omv omv omv omv omv

    Then I tried the following:

    - edited the /etc/systemd/resolved.conf file to point at my router's DNS
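
    Something like this (a sketch; assuming the router IP mentioned below):

    Code
    # /etc/systemd/resolved.conf
    [Resolve]
    DNS=192.168.3.1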

    192.168.3.1 is the internal DNS on my router. I restarted systemd-resolved.service, but the resolv.conf file wasn't updated. Then I manually edited my resolv.conf file and put in 127.0.0.53; now the hostname command is a LOT quicker and I can resolve internal and external names.
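
    That is, resolv.conf ended up pointing at the local stub resolver, something like:

    Code
    nameserver 127.0.0.53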

    I reverted the modification to /etc/salt/minion.d/openmediavault.conf to re-enable FQDN grains, and the salt command was very quick. So, problem solved!

    The question is:

    Why is my resolv.conf file pointing to the 192.168.3.1 nameserver despite having enabled the local DNS?

    And why isn't the search line updated when I change the domain in the GUI?

  • So I tried to change the domain to 'casa',

    Italian spotted here. Ciao!

    but the resolv.conf file still has the search .

    I agree it's odd. But your router should send the search domain (mine does).


    I'm happy you resolved the problem, but manually... I never had to edit the /etc/systemd/resolved.conf file, so clearly this is still a workaround. /etc/resolv.conf should pick up the internal DNS automatically.


    I believe there's still something dirty left or forgotten from the upgrade, but only votdev can tell.


    Try running omv-firstaid and select the option to reconfigure network(s). Maybe this tool can delete all existing network configs! I used it in the past to fix mistakes.


  • Try running omv-firstaid and select the option to reconfigure network(s). Maybe this tool can delete all existing network configs! I used it in the past to fix mistakes.

    Well, a while after my supposed victory my server went offline. Since it's headless, I rebooted it and it came back online. In the GUI it said it had to apply config changes (what changes?!?). I applied them and the server went offline again. So I attached a monitor, had a look at configuration.xml, and oddly noted that my eno1 interface had both DHCP and an address/netmask set. So I decided to start over and ran omv-firstaid to reconfigure the network interfaces.

    Well, it didn't go well. I tried DHCP and it failed. I tried manual assignment and it failed.

    I've attached photos of the error from daemon.log. It seems that netplan has some issues, but I don't know anything about netplan. I can manually configure the interfaces via the ifconfig command or in the /etc/network/interfaces file, but I'm afraid that at the next update my server will be offline again. I really don't know how to fix this.
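
    If anyone has a hint, I can start by looking at what netplan generated (a sketch):

    Code
    # list the generated netplan configs and re-apply with debug output
    ls -l /etc/netplan/
    sudo netplan --debug apply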

