SSH service fails to start on port 22 after some irregular behavior

  • Some context that might or might not be helpful:

    I've been setting up a Docker container with a WireGuard client through which I wanted to route some containers' internet traffic, my Apache web server container amongst others. To set up the IP routing rules, I first had to install iproute2 in the container - not via the Dockerfile yet, but in the running container, for testing purposes. During the installation, however, something strange and unexpected happened: the terminal froze completely and all other services (DNS, nginx, ...) - mostly running in containers - failed to respond for several minutes. Only after killing the PuTTY terminal did the situation normalize, though I'm not sure whether that was just a coincidence of timing. I have no idea what actually caused this hang; I had installed iproute2 in containers before without any issue. IIRC the hang happened while the libpam-cap package was being set up.


    Now, after everything seemed to be back up and running stably, I tried to spin up that WireGuard client container, which I had already spun up several times before with its configuration unchanged. While the container was starting, the exact same thing happened again: terminal freezing, services failing to respond... this time neither waiting nor killing the terminal made a difference. The OMV web GUI loaded very slowly, and any login attempt failed with a timeout. Since I had no way left to communicate with my NAS - forgetting I could simply have hooked up a screen and keyboard to the device itself - I unplugged the network cable, hoping to trigger some timeout that might bring the system back up. Instead, after a few seconds, the NAS signalled through beeps that it had ungracefully rebooted itself.


    The current problem:

    Now the result of all of this hassle for some reason is that connecting via SSH on port 22 won't work anymore. After some research I found that I still could connect if I changed the port to some different one on the web GUI, but not without triggering some error first which goes like "Failed to execute command 'export... - I'm assuming some logging process is failing here, the GUI keeps informing me about pending configuration changes since. I think the relevant part of this log correctly formatted looks like this though:

    So running sshd -t throws the error Missing privilege separation directory: /run/sshd, but the SSH service starts just fine.
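
    In case it helps, these are the exact checks I'm running - nothing beyond stock commands, listed here mostly so you can tell me if I'm checking the wrong thing:

    Code
      # Validate the sshd configuration as root (this is what reports the missing directory)
      sudo sshd -t
      # Check whether the privilege separation directory actually exists right now
      ls -ld /run/sshd
      # Check the service state after (re)starting it via the web GUI
      systemctl status ssh.service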

    On the other hand, if I switch the SSH port back to 22, another error comes up, this time like this:

    This time, sshd -t is silent, but starting the SSH service fails.

    Now that I finally got the idea to hook up to the NAS directly and check on the condition in which I cannot connect via SSH, sshd -t and systemctl status ssh.service give me this:



    ... which is not saying a lot. journalctl -xe's output is very verbose but doesn't seem to contain any hint either - grepping it for ssh gives me nothing. Trying to start SSH gives me the same error as in the log above.


    Now, while I could live with SSH no longer running on port 22, this whole situation seems rather spooky to me, and I fear that I might run into further issues down the road, given what led up to this. That's why I'd like to figure out what could have caused all of this havoc and what I can do to fix it and prevent issues like this in the future. So... does anyone have a clue what is up with all that? Are there any logs I should check for further information? (Still kind of a novice to OMV and Linux in general, so please bear with me.)

    NAS model: Terramaster F2-221 (Intel Celeron J3355)

    OMV version: 6.3.0-2 (Shaitan) (the latest one to date)


    I set up this NAS about 6 weeks ago, so the system is still kind of fresh.


    Any help is greatly appreciated!

    NAS model: Terramaster F2-221 (Intel Celeron J3355 @ 2GHz, 2GB RAM)

    OMV version: 6.8.0-1 (Shaitan)

    • Official post

    According to https://askubuntu.com/a/1110843, try

    sudo crontab -e and add the following entry:

    @reboot mkdir -p -m0755 /var/run/sshd && systemctl restart ssh.service

    to fix "Missing privilege separation directory: /run/sshd"

    This isn't a fix. I do not recommend doing this.


    does anyone have a clue what is up with all that?

    Are you using the flashmemory plugin?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • This isn't a fix. I do not recommend doing this.

    I know that, but I tried all the methods ChatGPT suggested and none of them worked. This directory should be created at boot so that the sshd service can run successfully. I don't know why it isn't created automatically. But since it is just a symlinked folder, I don't think creating it manually will harm anything.

    Although I can SSH into the OMV machine without doing this, sshd not running makes the SSH entry in the OMV panel turn red. It's not a solution, it just gets the sshd server running.

    • Official post

    I know that, but I tried all the methods ChatGPT suggested and none of them worked. This directory should be created at boot so that the sshd service can run successfully. I don't know why it isn't created automatically. But since it is just a symlinked folder, I don't think creating it manually will harm anything.

    Although I can SSH into the OMV machine without doing this, sshd not running makes the SSH entry in the OMV panel turn red. It's not a solution, it just gets the sshd server running.

    I understand that this can get it running, but I have some ideas about why it is broken. If the flashmemory plugin is installed, it could be what breaks it. Running folder2ram -syncall could possibly fix the problem without any hack. ChatGPT is neat, but it shouldn't be used to fix things like this because you can't supply it with enough information.
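
    If you want to check from the shell, something along these lines should do it (the folder2ram mount name is what I'd expect from the plugin; adjust if yours looks different):

    Code
      # Sync all folder2ram tmpfs folders back to the flash drive
      sudo folder2ram -syncall
      # List what folder2ram currently has mounted as tmpfs
      grep folder2ram /proc/mounts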


  • According to https://askubuntu.com/a/1110843, try

    sudo crontab -e and add the following entry:

    @reboot mkdir -p -m0755 /var/run/sshd && systemctl restart ssh.service

    to fix "Missing privilege separation directory: /run/sshd"

    I am aware of this, but I'm not trying this out because:

    1. This doesn't quite seem to be the source of the problem since SSH still works with other ports. The problem has to be port specific.

    2. I'm not going to meddle with any inner workings that OMV is already managing - I already had a hard time figuring that out once, when I missed the DNS field in the interface configuration and tried to set up the resolver configuration manually....

    Are you using the flashmemory plugin?

    I have flashmemory 6.2 installed, although I'm not sure why; I cannot remember installing it manually.

    I understand that this can get it running, but I have some ideas about why it is broken. If the flashmemory plugin is installed, it could be what breaks it. Running folder2ram -syncall could possibly fix the problem without any hack.

    I ran Sync All via the Web GUI but the problem still persists.


    • Official post

    I have flashmemory 6.2 installed, although I'm not sure why; I cannot remember installing it manually.

    You must have used the install script and not specified the skip flashmemory flag.

    I ran Sync All via the Web GUI but the problem still persists.

    You could make sure the directory exists after removing the flashmemory plugin and see if that helps.

    I would also like to see the output of: systemctl cat ssh
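
    From the shell, removing the plugin and re-checking would look roughly like this (package name as I remember it; the Plugins page in the web GUI does the same thing):

    Code
      # Remove the flashmemory plugin (the OMV Plugins page can do this too)
      sudo apt-get remove openmediavault-flashmemory
      # After a reboot, check whether the privilege separation directory is back
      ls -ld /run/sshd
      # Show the full ssh unit, including any drop-ins
      systemctl cat ssh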


  • I would also like to see the output of: systemctl cat ssh

    As of now, on port 22:

    You could make sure the directory exists after removing the flashmemory plugin and see if that helps.

    I removed the plugin, but it doesn't seem to have any effect. What exactly do you mean by "making sure the directory exists" - should I just try to create it manually? It's not there yet, and sshd -t still reports the usual error.


    • Official post

    What exactly do you mean by "making sure the directory exists" - should I just try to create it manually?

    I meant: check whether /run/sshd exists after removing the plugin, because that would tell you whether the plugin was deleting it.


    The RuntimeDirectory parameter in the ssh unit file should be creating the directory when the ssh service is started.
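
    On a stock Debian/OMV 6 install I'd expect the unit to contain the lines below - worth confirming on your system rather than taking my word for it:

    Code
      # Confirm the runtime directory settings are present in the unit
      systemctl cat ssh | grep RuntimeDirectory
      # Expected on a stock Debian unit (values may differ on your install):
      #   RuntimeDirectory=sshd
      #   RuntimeDirectoryMode=0755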


    What is the output of:


    cat /etc/default/ssh

    grep -vE "docker|overlay" /proc/mounts

    sudo journalctl -u ssh | tail -n100

    grep -r ssh /usr/lib/tmpfiles.d/*


  • I meant: check whether /run/sshd exists after removing the plugin, because that would tell you whether the plugin was deleting it.

    Ah, my bad - I misunderstood you, then. Might it help to reinstall the plugin and try again?


    What is the output of:

    cat /etc/default/ssh


    grep -vE "docker|overlay" /proc/mounts


    sudo journalctl -u ssh | tail -n100

    It seems like port 22 is already in use? How could that be?


    grep -r ssh /usr/lib/tmpfiles.d/* returns nothing.


    EDIT: So I checked via netstat -ltnp whether something was using port 22, and it turned out to be SSH itself (?). Then, via the GUI, I changed the port from 22 back to 26 one more time, and suddenly I can not only log back in via port 22, but via port 26 as well. Both netstat -ltnp and lsof now list both ports as being listened on by different SSH processes.


    Could I just run kill 879, or should I handle this via OMV somehow?
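
    For reference, this is how I've been matching the listeners to their processes (879 is simply the PID netstat reported for the second listener on my box):

    Code
      # Show which process is listening on each SSH port
      sudo ss -tlnp | grep -E ':(22|26) '
      # systemctl status also accepts a PID and reports which unit (if any) owns it
      sudo systemctl status 879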



    • Official post

    should I handle this via OMV somehow?

    You are doing so many things from the command line already that I wouldn't try to handle this through OMV's SSH settings. Just kill the PIDs.


    The only thing I see is a very small tmpfs for /run. How much RAM does your system have? What about:


    sudo mkdir /run/sshd

    sudo chmod 0755 /run/sshd

    sudo systemctl start ssh

    sudo systemctl stop ssh


    Then I would reboot and see if /run/sshd exists.
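
    After the reboot, the two obvious checks would be:

    Code
      # Confirm the privilege separation directory exists after the reboot
      ls -ld /run/sshd
      # Confirm the SSH service came up
      systemctl is-active ssh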


  • My NAS has about 2GB of RAM.


    So I killed both SSH PIDs, created the directory as you said, and after a reboot it is still there. But now there are again two SSH instances running, on ports 22 and 26. Maybe one of them has somehow decoupled itself from OMV?


    • Official post

    But now there are again two SSH instances running, on ports 22 and 26. Maybe one of them has somehow decoupled itself from OMV?

    Not knowing exactly what you have done from the command line, it is hard to say. OMV's saltstack code can only configure one instance in /etc/ssh/sshd_config. So, not sure what is starting your second instance.


  • Not that I know of either; I hadn't tampered with any SSH-related settings from the command line before this issue first came up. So is there no way to trace what is launching the second instance at boot?


    • Official post

    So is there no way to trace what is launching the second instance at boot?

    Sure there is, but some things I could do in minutes take hours of back and forth in forum posts.


    What is the output of:

    ls -al /etc/ssh/

    cat /etc/ssh/sshd_config

    sudo omv-salt deploy run ssh

    systemctl list-units | grep ssh

    ps aux | grep ssh


  • ls -al /etc/ssh/

    cat /etc/ssh/sshd_config

    sudo omv-salt deploy run ssh (redacted some user info there with [R])

    systemctl list-units | grep ssh

    Code
      ssh.service                                                                                                                          loaded active running   OpenBSD Secure Shell server

    ps aux | grep ssh

    Code
    root         589  0.0  0.1  13360  2836 ?        Ss   Feb14   0:00 sshd: /usr/sbin/sshd -D -f /etc/ssh/omv_sftp_config [listener] 0 of 10-100 startups
    root         616  0.0  0.0  13360  1332 ?        Ss   Feb14   0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
    root         791  0.0  0.1 322212  2596 ?        Ssl  Feb14   0:53 node /opt/yarn-v1.22.19/bin/yarn.js start --force-ssh --ssh-port 26 --ssh-host nas2 --base  --port 2222 --ssl-cert /cert.crt --ssl-key /cert.key
    root         829  0.9  0.6 10816104 12764 ?      Sl   Feb14  22:39 /usr/local/bin/node . --force-ssh --ssh-port 26 --ssh-host nas2 --base  --port 2222 --ssl-cert /cert.crt --ssl-key /cert.key
    root      658770  0.0  0.4  14064  8364 ?        Ss   21:54   0:00 sshd: root@pts/0
    root      660390  0.0  0.0   6244   712 pts/0    S+   21:59   0:00 grep ssh
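
    Side note: judging from the ps output above, the first listener is started with a separate config file, /etc/ssh/omv_sftp_config, so I guess its port could be checked directly with something like:

    Code
      # Check which port the second sshd instance (omv_sftp_config) is bound to
      sudo grep -iE '^[[:space:]]*Port' /etc/ssh/omv_sftp_config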


    • Official post

    I see you are using the sftp plugin. Did you change the port in there to 22? What is the output of: sudo grep -rw 22 /etc/ssh/*


  • Oof, that's it! I installed SFTP and turned it on about one or two weeks ago because I thought I might need it eventually, but then forgot about it. I also had no idea it would use the same port by default. Turned it off, and now I can switch SSH back to 22 just fine.


    Sorry for all the hassle and thank you so much for taking your time!


  • votdev

    Added the label "solved".
    • Official post

    I also had no idea it would use the same port by default. Turned it off, and now I can switch SSH back to 22 just fine.

    Glad it is working. The sftp plugin uses port 222 by default. https://github.com/OpenMediaVa…onf.service.sftp.json#L18
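
    If you want to double-check what the plugin is configured with on your box, the OMV config database should show it (assuming the sftp plugin's datamodel id matches the file linked above):

    Code
      # Read the sftp plugin's settings from the OMV config database
      sudo omv-confdbadm read conf.service.sftp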

