Rsync in Openmediavault


    • Rsync in Openmediavault

      Hi,

My name is Lokesh Kamath. I am very new to Openmediavault.

I recently installed OMV in our organization for LAN system backup.

I am unable to back up the client machines from the server. I am getting the error below (remote backup in pull mode).

      Please wait, syncing <rsync lokeshkamath@192.xxx.xxx.8:/home/lokeshkamath/Documents> to </srv/dev-disk-by-label-DataBackup/Admin/Lokesh> ...
      Permission denied, please try again.
      Permission denied, please try again.
      Permission denied (publickey,password).
      rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
      rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]


      The synchronisation has completed successfully.
      Done ...

If I try the command below from the client machine to the server, it works perfectly.
      rsync -avz /home/lokeshkamath/Documents root@192.xxx.xxx.10:/sharedfolders/Lokesh
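In pull mode the direction is reversed: the OMV server opens the SSH connection to the client. A sketch of what the server-side pull would look like, using the paths and IP already quoted in this thread (adding `--dry-run` so a first test changes nothing):

```shell
# Run on the OMV server: pull /home/lokeshkamath/Documents from the client.
# This only works if an SSH server is running on the client (192.xxx.xxx.8)
# and the password or key for user lokeshkamath is accepted there.
rsync -avz --dry-run \
    lokeshkamath@192.xxx.xxx.8:/home/lokeshkamath/Documents \
    /srv/dev-disk-by-label-DataBackup/Admin/Lokesh
```

If the dry run lists files without a "Permission denied" error, rerunning without `--dry-run` should perform the actual backup.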


      I tried with telnet in client machine (Ubuntu)
      telnet 192.xxx.xxx.8 873


      Trying 192.xxx.xxx.8...
      telnet: Unable to connect to remote host: Connection refused
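Port 873 is only used by the standalone rsync daemon (rsyncd); a pull over SSH never touches it, so this "Connection refused" may be a red herring. The port the pull job actually needs is 22. A quick probe of both, for example with netcat from the OMV server:

```shell
# Probe the client's ports from the OMV server.
# 'nc -zv' just checks whether the port accepts a connection.
nc -zv 192.xxx.xxx.8 873   # rsync daemon -- refused unless rsyncd runs there
nc -zv 192.xxx.xxx.8 22    # SSH -- this is what a pull over SSH uses
```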

      I tried with ufw status from terminal
      ufw status
      Status: active


To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
873/tcp                    ALLOW       Anywhere
873/udp                    ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
873/tcp (v6)               ALLOW       Anywhere (v6)
873/udp (v6)               ALLOW       Anywhere (v6)

I do not understand where the mistake is or how to solve this issue.

In OMV the rsync job is configured like this:


      Please guide me. Thanks in advance.
      Images
      • rsync.png

I don't use SSH to back up with rsync; instead I use NFS shares with autofs. So my suggestion may be off...

Do you have an SSH server running on the clients? Without it running, I don't think you can connect from the server to the client to run backups. But, as you say, you can connect from the client to the server fine. Alternatively, run the rsync daemon on the clients.

      linuxize.com/post/how-to-enable-ssh-on-ubuntu-18-04/
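On Ubuntu, installing and enabling the SSH server on a client typically looks like this (a general sketch, not OMV-specific):

```shell
# On each client machine (Ubuntu):
sudo apt update
sudo apt install openssh-server
sudo systemctl enable --now ssh

# Verify that sshd is running and listening:
systemctl status ssh
```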
      OMV 4: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4
Thank you so much, Adoby.

Can you please guide me on how to set up NFS shares with autofs?

The output of sudo systemctl enable ssh:

      Synchronizing state of ssh.service with SysV service script with /lib/systemd/systemd-sysv-install.
      Executing: /lib/systemd/systemd-sysv-install enable ssh
      root@slk:/home/lokeshkamath# sudo systemctl status ssh
      ● ssh.service - OpenBSD Secure Shell server
      Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
      Active: active (running) since Fri 2019-11-29 10:51:34 IST; 1 day 11h ago
      Main PID: 1507 (sshd)
      Tasks: 1 (limit: 4915)
      CGroup: /system.slice/ssh.service
      └─1507 /usr/sbin/sshd -D


      Nov 30 14:55:49 slk sshd[15759]: Invalid user rsync lokeshkamath from 192.xxx.xxx.10 port 40942
      Nov 30 14:55:49 slk sshd[15759]: Failed none for invalid user rsync lokeshkamath from 192.xxx.xxx.10 po
      Nov 30 14:55:49 slk sshd[15759]: Failed password for invalid user rsync lokeshkamath from 192.192.0.1
      Nov 30 14:55:49 slk sshd[15759]: Failed password for invalid user rsync lokeshkamath from 192.192.0.1
      Nov 30 14:55:49 slk sshd[15759]: Connection closed by invalid user rsync lokeshkamath 192.xxx.xxx.10 po
      Nov 30 22:28:28 slk sshd[6216]: Invalid user rsync lokeshkamath from 192.xxx.xxx.10 port 42752
      Nov 30 22:28:28 slk sshd[6216]: Failed none for invalid user rsync lokeshkamath from 192.xxx.xxx.10 por
      Nov 30 22:28:28 slk sshd[6216]: Failed password for invalid user rsync lokeshkamath from 192.xxx.xxx.10
      Nov 30 22:28:28 slk sshd[6216]: Failed password for invalid user rsync lokeshkamath from 192.xxx.xxx.10
      Nov 30 22:28:28 slk sshd[6216]: Connection closed by invalid user rsync lokeshkamath 192.xxx.xxx.10 por
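Note that sshd rejected the literal user name "rsync lokeshkamath" (with a space), which suggests the word "rsync" ended up inside the user field of the OMV job's source specification. A quick way to check whether SSH itself is fine, hypothetically run from the OMV server with the user and host from this thread:

```shell
# Test the exact login the pull job needs, from the OMV server.
# If this prompts for lokeshkamath's password and prints "login ok",
# SSH is working and it is the OMV job's user/source fields that need
# fixing (the user name must not contain "rsync").
ssh lokeshkamath@192.xxx.xxx.8 'echo login ok'
```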


      *************************************************************************************************************
      UFW Status

      root@slk:/home/lokeshkamath# ufw status
      Status: active


To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
873/tcp                    ALLOW       Anywhere
873/udp                    ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
873/tcp (v6)               ALLOW       Anywhere (v6)
873/udp (v6)               ALLOW       Anywhere (v6)

      Once again thank you so much.

      Lokesh Kamath


I'm not sure what you mean by the logs. Were you successful? Could the server pull a backup from the client?

Note: Unless you have centralized logins, you must be careful to use the right users and passwords. I always set up two users at once on all my computers and servers. One is me, with the ability to escalate to root via sudo. The other is for guests, with only read access to some resources.

Please note that running an SSH server or creating NFS shares on a client computer turns it into a server, with all the dangers involved. For instance, if you allow anyone to connect via SSH or an NFS share, then you must expect anyone to do so, including hackers trying to install ransomware via a laptop at a coffee shop. Beware!

Also, as an employee, if I were provided a PC or a laptop with an SSH server enabled by default, I would give that careful consideration, and ask how access was secured and used. If I didn't get a very, very good answer, I would start looking for another employer.

It is much safer to only have clients connect to servers that are locked in server rooms and only accessible via a local network or an encrypted Wi-Fi network with only verified clients allowed.

      I use nfs with autofs between all my servers (mostly small SBCs) and also between my PC (desktop and laptop) clients and the servers. But NOT from my servers to the PC clients. This means that all my servers can access all shares on all servers. This is used for backups and also sometimes for Emby (media streamer) to access and stream media from several different servers. I can freely restart servers in any order and when they are up they automatically reconnect and are reconnected to. Very nice!

      From the clients it is used to access shares on the servers and also for backups, but importantly it is strictly from the PC to the servers. Never the other way. I don't run ssh-server or a NFS server on my PCs. Only client software to connect using SSH or NFS. So I must always initiate backups or other file transfers from a PC.

      I run autofs with the NFS client software on my PCs as well as on my servers. Thanks to that my servers and other network resources are available to my PC and laptop automatically if I am connected to the LAN using ethernet or Wi-Fi.

So autofs automates only the NFS client connecting to a remote NFS share. Autofs can also work with SMB/CIFS, but I haven't tried that.

I have written about how to set up autofs in other posts on this forum. I suspect my way of doing it is a bit old school, but I haven't seen an updated write-up I could understand and use.

      Typically I just:

      * Install nfs-common and autofs.
      * Copy over my customized auto.master and auto.nfs files to /etc.
      * Create the mount point, typically /srv/nfs.
      * Reboot. And it just works.
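The steps above could correspond to config files roughly like these. The share name, export path, and server IP here are hypothetical placeholders; the exact options depend on how the NFS server exports its shares.

```shell
# /etc/auto.master -- point the /srv/nfs mount point at the auto.nfs map.
# "--timeout=60" unmounts idle shares after a minute (optional).
#
#   /srv/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs -- one line per share; the key becomes /srv/nfs/<key>.
#
#   backup  -fstype=nfs4,rw  192.xxx.xxx.10:/export/backup

# After creating the mount point and editing the maps, reload autofs:
sudo mkdir -p /srv/nfs
sudo systemctl restart autofs
ls /srv/nfs/backup   # accessing the path triggers the automount
```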

      remote mount plugin with offline drives
      OMV 4: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4
    • Adoby wrote:

      I run autofs with the NFS client software on my PCs as well as on my servers. Thanks to that my servers and other network resources are available to my PC and laptop automatically if I am connected to the LAN using ethernet or Wi-Fi.

So autofs automates only the NFS client connecting to a remote NFS share. Autofs can also work with SMB/CIFS, but I haven't tried that.

I have written about how to set up autofs in other posts on this forum. I suspect my way of doing it is a bit old school, but I haven't seen an updated write-up I could understand and use.

      Typically I just:

      * Install nfs-common and autofs.
      * Copy over my customized auto.master and auto.nfs files to /etc.
      * Create the mount point, typically /srv/nfs.
      * Reboot. And it just works.

      remote mount plugin with offline drives
Thank you so much for your detailed info & reply.

I will also try NFS & autofs. In case of any doubt I may ping you. :)

      Lokesh Kamath