Posts by mcgyver83

Hi, I moved from scheduled jobs that execute custom bash scripts running the rsync command to the already available OMV interface to run rsync.

I realised that the user that runs the rsync command is root, so all backup files are owned by root, and so are the log files.
    How can I keep the files' owner?
    With the custom bash scripts I used to run them as a standard user.
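One idea I'm considering (based on rsync's own options, not on an OMV setting I know of): rsync 3.1.0 and later accept --chown, which could go into the job's extra options to force ownership on the destination, assuming the destination filesystem allows chown. A sketch with made-up paths:

    Code
    # force destination ownership while the job itself still runs as root;
    # requires rsync >= 3.1.0 and a destination filesystem that allows chown
    rsync -av --chown=pi:users /source/dir/ /destination/dir/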

    Thanks for your feedback.

I did it via script to "learn" how rsync is used.
    I know there is the OMV GUI to use rsync, but I want to understand it before using the GUI.


The folder is automatically mounted in /srv/xxxxxxx (numbers ...), but you must handle it with the shared folder name that you created.

Where in the filesystem can I find the folder with the "Shared folder" name?
    I mean when accessing the filesystem via terminal?
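Something like this is what I'd try (a sketch; <filesystem-id> and "backup" are placeholders for illustration):

    Code
    # one directory per mounted filesystem
    ls -l /srv/
    # would a shared folder named "backup" then be here?
    ls -l /srv/<filesystem-id>/backup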


Edit: sorry, I just reread your first post. I don't think I understood it correctly. What is the reason to do this from the CLI with a script? You can do any rsync work in the GUI. Doesn't that seem easier to you?

Anyway, what you've done should work; you access the correct folder with a symbolic link.

The issue I had (root fs filled by files synced by rsync) happened because I rebooted the router, so the remote mount went away. When the router was up & running again, OMV didn't remount the remote SMB share, so /srv/xxxxxx became a "local folder" and rsync started putting files there.

Is there a way to make OMV remount/re-attach a "Remote Mount" as soon as it comes back?
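Meanwhile I'm wondering about systemd's automount option (just an idea of mine, not an OMV feature I found documented): with a manually maintained fstab entry, x-systemd.automount mounts the share again on first access. A sketch, keeping in mind that OMV manages /etc/fstab itself and may overwrite manual edits, and that the credentials path here is shortened for illustration:

    Code
    # illustration only: the share gets (re)mounted on first access
    //192.168.1.1/FRITZ.NAS/TOSHIBA_EXT /srv/4892764a-9c63-4689-83c1-4627bbaa2a84 cifs _netdev,nofail,x-systemd.automount,x-systemd.mount-timeout=30,credentials=/root/.cifscredentials 0 0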

I'm using Remote Mount.



But how can I access the mounted remote share?
    In the picture I put in the first message I showed the filesystem; there I can see that the remote mount is mounted under /srv/xxxxxxxxxxxxx.

So your suggestion is to use that folder?

Probably I understood what happened when I had the full root disk issue: I restarted the router, so the remote mount was gone.
    When the router restart ended, OMV didn't remount the share, so anything trying to write into /srv/xxxxxx (accessing the folder via symlink, but I don't think that's a problem) was in fact writing to the local file system, not the remote one.

Could that be possible?
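A quick way to verify this from a terminal (a generic sketch, nothing OMV-specific):

    Code
    # prints the mount entry only if the path is really a mount point
    findmnt /srv/xxxxxxxxxxxxx
    # or, also from util-linux:
    mountpoint /srv/xxxxxxxxxxxxx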

    Hi,
I configured a remote mount linked to an SMB share exposed by the router.

In the Filesystems page I see that it is mounted:



In the home dir I have a symlink to the path where the remote share is mounted:

    Code
    lrwxrwxrwx 1 pi users 42 May 20 07:56 router_drive -> /srv/4892764a-9c63-4689-83c1-4627bbaa2a84/


    I have a bash script that triggers rsync from a local folder to this mounted drive.

This is the rsync script:

    Code
    rsync -av \
    --delete \
    --no-perms \
    --no-owner \
    --no-group \
    --progress \
    --exclude-from='/home/pi/raspberry-mcg/script/backup/rsync_exclude.txt' \
    --log-file='/var/log/rsync/rsync_router_drive.log' \
    /home/pi/backup_drive/ /home/pi/router_drive/rsync/


The highlighted path is the destination path, a symlink to the remote folder as shown above.

What happened yesterday is that, instead of copying stuff to the remote, the script wrote files into /home/pi/router_drive/rsync/ as usual, but instead of landing on the remote these files filled up my root partition. It seems the mount disappeared, so rsync created the folder tree in /home/pi/router_drive/rsync/, which was now a local folder.

What is the right way to access a remote-mounted shared folder from ssh?
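In the meantime I'm thinking of guarding the script myself (a sketch under my own assumptions, not an official OMV recommendation): refuse to sync unless the destination is really a mount point, so rsync can never fill the root partition again.

    Code
    #!/bin/bash
    # abort if the CIFS share is not actually mounted
    DEST="/srv/4892764a-9c63-4689-83c1-4627bbaa2a84"
    if ! mountpoint -q "$DEST"; then
        echo "$DEST is not mounted, skipping sync" >&2
        exit 1
    fi
    rsync -av --delete /home/pi/backup_drive/ "$DEST/rsync/"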

Hi, I have a FritzBox 7530 with an attached USB drive exposed via SMB, in the same "workgroup" as the SMB shared folder configured in OMV.



I configured a "Remote Mount" in OMV for the SMB share exposed by the FritzBox router.


Then I have an rsync script that syncs stuff from local storage to the router's SMB share.
    When it runs I get a lot of messages like those below:

    Code
    Aug 12 10:11:00 raspberrypi kernel: [89954.494635] CIFS: VFS: bogus file nlink value 0
    Aug 12 10:11:00 raspberrypi kernel: [89954.630354] CIFS: VFS: bogus file nlink value 0
    Aug 12 10:11:00 raspberrypi kernel: [89954.709232] CIFS: VFS: bogus file nlink value 0
    Aug 12 10:11:00 raspberrypi kernel: [89954.811440] CIFS: VFS: bogus file nlink value 0
    Aug 12 10:11:00 raspberrypi kernel: [89954.914900] CIFS: VFS: bogus file nlink value 0
    Aug 12 10:11:00 raspberrypi kernel: [89955.012669] CIFS: VFS: bogus file nlink value 0


    Code
    cat /proc/mounts | grep "TOSHIBA"
    //192.168.1.1/FRITZ.NAS/TOSHIBA_EXT /srv/4892764a-9c63-4689-83c1-4627bbaa2a84 cifs rw,relatime,vers=3.0,cache=strict,username=aaaa,uid=1000,forceuid,gid=100,forcegid,addr=192.168.1.1,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1 0 0
    cat /etc/fstab | grep "TOSHIBA"
    //192.168.1.1/FRITZ.NAS/TOSHIBA_EXT /srv/4892764a-9c63-4689-83c1-4627bbaa2a84 cifs _netdev,uid=pi,gid=users,iocharset=utf8,vers=3.0,nofail,noserverino,credentials=/root/.cifscredentials-aaaaaaa-3d26-41a7-a8b0-3393dcf0ddde 0 0



    Any hints?

    Hi,

I'm running OMV on a Raspberry Pi 2B. Docker storage is configured on the Raspberry's SD card, so no issue there; container persistent storage is located on a USB drive, so at reboot I face the issue described here (only for 2 containers; I have 6 containers started at boot).

I tried the delay approach proposed here: delay docker start, but no luck.

I think my issue is related to external drive mount availability, so I tried this suggestion:


omv-extras writes the waitLocalFs.conf, so I wouldn't change that one. Instead of a delay, delete all of the files in /etc/systemd/system/docker.service.d/ and try the following (as root):


Code
    cat <<EOF > /etc/systemd/system/docker.service.d/waitAllMounts.conf
    [Unit]
    After=local-fs.target $(systemctl list-units --type=mount | grep /srv | awk '{ print $1 }' | tr '\n' ' ')
    EOF


And it works!!!
    It takes some time for the containers to come up, but now I can rely on them even after a power outage :D
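One note for anyone repeating this (my addition, not part of the quoted advice): systemd only picks up a new drop-in after a reload, so I also ran:

    Code
    systemctl daemon-reload
    systemctl restart docker.service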

Hi, for some months I've had the feeling that the items shown on the Update Management page are always the same.
    Now I've had the time to check: yesterday I executed the updates without errors, and the Update Management page was blank afterwards.
    This morning I entered the page and I still see the same updates as yesterday.
    As an example:


Here is the log: log
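To cross-check outside the GUI (a plain apt sketch, assuming shell access; the page may rely on its own cache):

    Code
    sudo apt-get update
    apt list --upgradable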

    Hi all,
    coming back to this topic.
I still have the same issue: "Remote Mount" + "Shared folder" create directories in the file system that are owned by root.
    I want to use the remote mounted drive (USB drive attached to the router) for backup, and I don't want the backup files to be owned by root.

    How can I change ownership for folders inside the "Remote Mount"?
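My current understanding (an assumption from how CIFS generally behaves, not an OMV-specific answer): on a CIFS mount without UNIX extensions chown doesn't stick, and the apparent owner is fixed at mount time by the uid=/gid= options. So the place to look is the mount itself:

    Code
    # the uid=/gid= (and forceuid/forcegid) options decide the apparent owner
    grep TOSHIBA /proc/mounts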

I made the suggested changes.

Now I have the same issue with another job.
    The mail message is:

    Code
    /var/lib/openmediavault/cron.d/userdefined-fee68f4c-8454-49b2-9d8e-56c8bc8d631a: 31: /var/lib/openmediavault/cron.d/userdefined-fee68f4c-8454-49b2-9d8e-56c8bc8d631a: source: not found

The command (run as user `pi`) in the GUI is:

I need to run the job as the `pi` user to keep permissions and ownership (and because I cannot see why I should use sudo).

Right now I still receive the mail message with the `source` command complaint.



sudo: OK, I can remove it by using "root" as the user in the scheduled job. It comes from the manual command I used to test it.
    sh: OK, in fact I can execute the script directly instead of using sh.

    Parameter "5": this could be an issue. The same script is used to refresh different libraries in Plex; I'll try putting it inside the script just to see if it solves the issue.

Hi all, with OMV4 I used the LetsEncrypt plugin to renew/refresh the SSL cert used to secure OMV GUI access.
    To be honest, I cannot understand how to keep using Let's Encrypt to keep my cert renewed.
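The direction I'm looking at (my own sketch, assuming plain certbot is installed; I haven't confirmed how the renewed cert gets wired back into OMV's certificate store, and nas.example.com is a placeholder):

    Code
    # one-time issuance; certbot's packaged timer then renews automatically
    sudo certbot certonly --standalone -d nas.example.com
    # renewed files land under /etc/letsencrypt/live/nas.example.com/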

No hints?
    Is anyone else using the `source` command to execute a scheduled job?

I see that the `userdefined-014d8449-8363-4241-a6ee-5904509cea43` script has `#!/bin/sh -l`; I changed my sh script to use this line, but I still have the issue.
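My current theory (an assumption from general Debian behaviour, not confirmed here): /bin/sh is dash on Debian, and dash has no `source` builtin, which is exactly what "source: not found" means; the POSIX spelling is `.`. With a made-up path for illustration:

    Code
    # 'source' is a bash builtin; dash (Debian's /bin/sh) only knows '.'
    # fails under dash with "source: not found":
    #   source /home/pi/myenv.sh
    # portable equivalent:
    . /home/pi/myenv.sh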